diff --git a/data_all_eng_slimpj/shuffled/split2/finalzdks b/data_all_eng_slimpj/shuffled/split2/finalzdks new file mode 100644 index 0000000000000000000000000000000000000000..ed44c72aae683507f1e0c503a6fafefea12aba78 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzdks @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nNon-orthogonal multiple access (NOMA) is a promising multiple access technique for 5G networks\nwhich has the potential to address the\nissues associated with the exponential growth of data traffic\nsuch as spectrum scarcity and massive connectivity \\cite{6730679,6868214,7273963,7676258,7263349}.\nIn contrast to conventional multiple access schemes,\nNOMA allows different users to efficiently share the same resources (i.e., time, frequency and code)\nat different power levels so that the user with a lower channel gain is served with a higher power and vice versa.\nIn this technique, a successive interference cancellation (SIC) approach is employed at the receivers to separate multi-user\nsignals, which significantly enhances\nthe overall spectral efficiency.\nIn other words,\nNOMA has the capability to control the\ninterference by sharing resources\nwhile increasing system throughput\nwith a reasonable additional complexity \\cite{7263349}.\n\nRecently, a significant amount of research has focused on studying\nseveral practical issues in the NOMA scheme.\nIn particular, beamforming designs for multiple antenna NOMA networks have received \na great deal of interest in the research community due to their additional degrees of freedom and diversity gains \\cite{7433470,7095538,7277111}.\nA general framework for a multiple-input multiple-output (MIMO) NOMA system has been developed for both the downlink\nand the uplink in \\cite{7433470} whereas\nthe throughput maximization problem was studied for a two-user MIMO NOMA system in \\cite{7095538}.\nThe sum rate maximization problem for a multiple-input single-output (MISO) NOMA system has been investigated in 
\\cite{7277111}\nthrough the minorization maximization algorithm.\nIn most of the existing work, beamforming designs have been proposed \nfor NOMA schemes with the assumption of perfect channel state information (CSI) at the transmitter \\cite{7433470,7095538,7277111}.\nHowever,\nthis assumption might not always be valid in practical scenarios\ndue to channel estimation and quantization errors \\cite{4700145,4895658,5073854,5074429,5665898,6967767,7047328}. Furthermore, channel uncertainties significantly influence the performance of\nSIC based receivers as the\ndecoding order of the received multi-user signals is determined with respect to the users' effective channel gains.\nTherefore, it is important to take into account the channel\nuncertainties, especially in the beamforming design for NOMA networks.\nMotivated by this practical constraint, we\nfocus on a robust beamforming design based on the worst-case performance optimization framework\nto tackle norm-bounded channel uncertainties \\cite{5755208,6156468,7177089,7547398}.\nIn \\cite{5755208}, a robust beamforming design has been developed\nfor providing secure communication in wireless networks with \nimperfect CSI.\n\nBy incorporating\nbounded channel uncertainties, the robust sum power minimization\nproblem\nis investigated in \\cite{6156468}\nfor a downlink multicell network with worst-case signal-to-interference-plus-noise-ratio (SINR) constraints\nwhereas the robust weighted sum-rate maximization was studied for multicell downlink MISO systems in \\cite{7177089}.\nIn \\cite{7547398}, a robust minimum mean square error based beamforming technique is\nproposed for multi-antenna relay channels with imperfect CSI between the relay and the users.\nIn the literature, two types of NOMA schemes have been considered:\nI) clustering NOMA \\cite{6735800,7015589,7442902}, II) non-clustering NOMA \\cite{5755208,6156468,7177089,7547398}.\nIn the clustering NOMA scheme, all the users in a cell 
are grouped into $N$ clusters with two users\nin each cluster. A transmit beamforming vector is designed for each cluster\nthrough conventional multiuser beamforming designs, and the two users within each cluster are then served through a NOMA\nbeamforming scheme. In the non-clustering NOMA scheme, by contrast, there is no clustering and each\nuser is supported by its own NOMA based beamforming vector.\nIn \\cite{7442902}, the authors studied a robust NOMA scheme for the MISO channel\nto maximize the worst-case achievable sum rate under a\ntotal transmit power constraint.\n\nIn this letter, we follow the second class of research, in which the NOMA scheme is applied across all users and the spectrum\nis shared among all users in the cell. We then propose a robust beamforming design for NOMA-based MISO\ndownlink systems.\nIn particular, the robust power minimization problem is solved\nbased on the worst-case optimization framework to provide the required quality of service at each user\nregardless of the associated channel uncertainties.\nBy exploiting the \\mbox{S-Procedure,} the original \\mbox{non-convex} problem is\nconverted into a convex one by recasting the non-convex constraints into linear matrix inequality (LMI) forms.\nSimulation results are provided\nto validate the \\mbox{effectiveness}\nof the robust design by comparing the performance of the robust scheme with that of the non-robust approach.\n{The work in \\cite{7442902} also studied a worst-case based robust scheme\nfor a MISO NOMA system; however, there are key differences between our proposed scheme and the work in \\cite{7442902}.\nA clustering NOMA scheme is developed in \\cite{7442902} by grouping\nthe users into clusters.\nIn this scheme, a single beamformer is designed to transmit the signals for all users\nin the same cluster whereas, in this letter, the signal for each user is transmitted with a dedicated beamformer.\nIn addition, both beamforming designs are completely different as the work in 
\\cite{7442902}\nproposes a robust sum-rate maximization based design whereas this letter solves a robust power minimization problem\nwith a rate constraint on each user. In terms of solutions, the work in \\cite{7442902}\nexploits the relationship between the MSE and the achievable rate and derives an equivalent non-convex problem, which\nis decoupled into four sub-problems that are iteratively solved to realize the solution of the original problem.\nIn this letter, the robust power minimization problem is formulated by deriving the worst-case achievable rate.\nThe original problem formulation turns out to be non-convex and we exploit the S-Procedure and semidefinite\nrelaxation to convert it into a convex one. Hence, the work in \\cite{7442902} and the proposed work in\nthis letter differ in both the problem formulation and the solution approach.}\n\n\\vspace{-0.2cm}\n\\section{System Model and Problem Formulation}\nWe consider NOMA-based downlink transmission where a\nbase station (BS) sends information to $K$ users\n$U_1,U_2, \\ldots, U_K$.\nIt is assumed that the BS is equipped with $M$ antennas whereas each user is equipped with a single antenna.\nThe channel coefficient vector between the BS and the $k^{th}$ user $U_k$ is denoted by $\\mathbf {h}_k \\in \\mathbb{C}^{M \\times 1}~ (k = 1,\\ldots,K)$\nand $\\mathbf{w}_{k}\\in \\mathbb{C}^{M \\times 1}$ represents the corresponding beamforming vector of the $k^{th}$ user $U_{k}$.\nThe received signal at $U_k$ is given by\n\\begin{equation}\\label{signal}\n y_k = \\mathbf {h}_k^H \\mathbf{w}_k s_k + \\sum_{m\\neq k}\\mathbf{h}_k^H \\mathbf {w}_m s_m + n_k, \\qquad \\forall k,\n\\end{equation}\n\n\\vspace{-0.3cm}\n\\noindent where $s_{k}$ denotes the symbol intended for $U_{k}$\nand \\mbox{$n_k \\sim \\mathcal{CN}(0, \\sigma_k^2)$} represents zero-mean additive white Gaussian noise with variance $\\sigma_k^{2}$.\nThe power of the symbol $s_{k}$ is assumed to be unity, i.e., $\\mathbb{E}(|s_k|^2)=1$. 
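As a quick numerical sanity check of the received-signal model in Eq. (\ref{signal}), the following Python sketch builds $y_k$ for randomly drawn channels and beamformers and verifies that the term-by-term sum (desired signal plus inter-user interference plus noise) matches the compact form $\mathbf{h}_k^H\big(\sum_m \mathbf{w}_m s_m\big) + n_k$. All dimensions and the noise variance are illustrative assumptions, not values from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 3           # BS antennas, number of users (illustrative)
sigma2 = 1e-3         # noise variance sigma_k^2 (assumed)

# Rows of H are the channel vectors h_k; columns of W are beamformers w_k.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s = np.exp(1j * 2 * np.pi * rng.random(K))   # unit-power symbols, E|s_k|^2 = 1
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

# y_k = h_k^H w_k s_k + sum_{m != k} h_k^H w_m s_m + n_k
y = np.array([H[k].conj() @ W[:, k] * s[k]
              + sum(H[k].conj() @ W[:, m] * s[m] for m in range(K) if m != k)
              + n[k] for k in range(K)])

# Equivalent compact form: y_k = h_k^H (W s) + n_k
y_alt = np.array([H[k].conj() @ (W @ s) + n[k] for k in range(K)])
assert np.allclose(y, y_alt)
```

The equivalence simply reflects that the BS transmits the superposition $\sum_m \mathbf{w}_m s_m$ and the channel acts linearly on it.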
In practical scenarios, it is difficult to provide perfect CSI at the transmitter due to \nchannel estimation and quantization errors. Therefore, we consider a robust beamforming design to overcome these channel uncertainties.\nIn particular, we incorporate norm-bounded channel uncertainties in the design as\n\n\\vspace{-0.5cm}\n\\begin{align} \\label{delta}\n& \\mathbf{{h}}_k=\\mathbf{\\hat{h}}_k +\\Delta\\mathbf{\\hat{h}}_k,\\quad \\|\\Delta\\mathbf{\\hat{h}}_k\\|_2=\\|\\mathbf{{h}}_k-\\mathbf{\\hat{h}}_k\\|_2 \\leq \\epsilon,\n\\end{align}\n\n\\vspace{-0.25cm}\n\\noindent where $\\mathbf{\\hat{h}}_k$, $\\Delta\\mathbf{\\hat{h}}_k$ and\n$\\epsilon \\geq 0$ denote the estimate of $\\mathbf{{h}}_k$, the norm-bounded channel estimation error and the channel estimation error bound, respectively.\n\nIn the NOMA scheme, user multiplexing is performed in the power domain\nand the SIC approach is employed at the receivers to separate the signals of different users.\nIn this scheme, users are sorted\nbased on the norms of their channels,\ni.e., $\\|\\mathbf{h}_{1}\\|_2\\leq\\|\\mathbf{h}_{2}\\|_2\\leq\\ldots\\leq\\|\\mathbf{h}_{K}\\|_2$.\nFor example, the $k^{th}$ user decodes the signals intended for the users from $U_{1}$ to $U_{k-1}$\nusing the SIC approach\nwhereas the signals intended for the rest of the users (i.e., $U_{k+1},\\ldots,U_{K}$) are treated as interference at the $k^{th}$ user.\nBased on this SIC approach, the $l^{th}$ user can detect and remove\nthe $k^{th}$ user's signals for $ 1 \\leq k < l $ \\cite{7277111}.\nHence, the signal at the $l^{th}$ user, after removing the first $k-1$ users' signals to detect the $k^{th}$ user's signal, is represented as\n\n\\vspace{-0.6cm}\n\\begin{align}\\nonumber\ny_l^k=\\mathbf {h}_l^H \\mathbf{w}_k s_k+\\sum_{m=1}^{k-1} \\Delta\\mathbf{\\hat{h}}_l^H \\mathbf{w}_m s_m+\\sum_{m=k+1}^{K}\\mathbf{h}_l^H \\mathbf {w}_m s_m+n_l,&&\n\\end{align}\n\n\\vspace{-0.7cm}\n\\begin{eqnarray} \\label{signal2}\n \\qquad\\qquad 
\\qquad\\qquad\\quad\\quad\\forall k, \\, l\\in\\{k,k+1,\\ldots,K\\},&&\n\\end{eqnarray}\n\n\\vspace{-0.3cm}\n\\noindent\nwhere the first term is the desired signal to detect $s_k$ and\nthe second term is due to imperfect CSI at the receivers during\nthe SIC process.\nDue to the channel uncertainties, the signals intended for the users $U_{1},\\ldots,U_{k-1}$ cannot be completely removed by the $l^{th}$ user. The third term is the interference\nintroduced by the signals intended for the users $U_{k+1},\\ldots,U_{K}$.\nAccording to the SIC based NOMA scheme, the $l^{th}$ user should be able to detect\nall $k^{th}\\:(k =\nN_f (246.7 MeV)^3$ and $f_{\\pi} = 93 MeV$. For the remainder of this paper we\nwill use the numerical solution\n\\cite{Be,BeHaMe}\n\\begin{eqnarray}\nM = 335.1~{\\rm MeV} \\qquad \\Lambda = 630.9~{\\rm MeV} \\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\n\\tilde{G}\\Lambda^2 = 2.196,\n\\label{e2.24}\n\\end{eqnarray}\nwhich assumes a current quark mass $m_u = m_d = 4~{\\rm MeV}$.\n\n\\section{POLYAKOV LOOP POTENTIAL}\n\\label{s3}\nWe denote ${\\rm tr_c} {\\cal P}$ by $\\phi$. Near the deconfinement temperature $T_D$\nthe potential of Polyakov loops in pure QCD can be approximated by a polynomial\npotential of the form \\cite{TrWi}\n\\begin{eqnarray}\nU(\\phi ,T) = \\lambda \\left( \\phi - {\\phi^2 \\over \\phi_D} \\right)^2 -\n{T - T_D \\over T_D L^{-1}} \\left( {\\phi \\over \\phi_D} \\right)^3\n\\label{e3.1}\n\\end{eqnarray}\nThe potential is parametrized such that for a pure gauge theory the phase\ntransition occurs at $T_D$ with a latent heat $L$ and the Polyakov loop jumping\nfrom $0$ to $\\phi_D$.\n\nThis choice for $U$ is phenomenological. Although perturbative QCD does yield a\nquartic polynomial potential for the $A_0$ field, the potential so obtained\ndoes not display critical behavior, and it is only valid at high temperatures\n\\cite{We}.\nHowever, it does suggest that the parameter $\\lambda$ should be taken to be on\nthe order of $T_D^4$. 
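As a sanity check on the form of Eq. (\ref{e3.1}), the short Python sketch below evaluates $U(\phi,T)$ in units of $T_D^4$ (with the illustrative choices $\lambda = T_D^4$, $L = 2T_D^4$ and $\phi_D = 1$) and confirms the advertised pure-gauge behavior: degenerate minima at $\phi = 0$ and $\phi = \phi_D$ at $T = T_D$, separated by a barrier, with the deconfined minimum favored just above $T_D$.

```python
def U(phi, T, T_D=1.0, phi_D=1.0, lam=1.0, L=2.0):
    """Polyakov loop potential of Eq. (3.1), in units of T_D^4."""
    return lam * (phi - phi**2 / phi_D) ** 2 - (T - T_D) * L / T_D * (phi / phi_D) ** 3

# At T = T_D the cubic term vanishes, leaving degenerate minima at
# phi = 0 and phi = phi_D: the first-order jump from 0 to phi_D.
assert abs(U(0.0, 1.0)) < 1e-12 and abs(U(1.0, 1.0)) < 1e-12
assert U(0.5, 1.0) > 0.0            # barrier between the degenerate minima
assert U(1.0, 1.01) < U(0.0, 1.01)  # deconfined minimum favored for T > T_D
```

The cubic term tilts the potential with temperature, so the $\phi = \phi_D$ minimum drops below $\phi = 0$ for $T > T_D$, exactly the first-order jump described above.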
We have chosen $\\lambda = T_D^4$ and $L = 2T_D^4$ for the\nnumerical calculations below, neglecting any possible temperature corrections\nor other dependencies in $\\lambda$, $T_D$, $\\phi_D$ and $L$. The direct\nassociation of $T_D$, $\\phi_D$ and $L$ with measurable quantities holds only\nfor very heavy quarks. For light quarks, these parameters must be determined\nby fitting to the observed behavior.\n\nWe can now couple the chiral symmetry and deconfinement phase transitions. Let\n\\begin{eqnarray}\n{\\cal V}(M,\\phi,T) = V(M,\\phi,T) + U(\\phi,T),\n\\label{e3.3}\n\\end{eqnarray}\nwhere $V(M,\\phi,T)$ is given in Eq.~(\\ref{e2.20}) with ${\\rm tr_c}{\\cal P} =\n\\phi$. To determine the critical behavior of the coupled effective potential,\nthe absolute minimum of the potential ${\\cal V}(M,\\phi,T)$ as a function of $M$\nand $\\phi$ must be found as $T$ is varied. A satisfactory determination of the\nentire phase diagram requires numerical investigation.\n\n\\section{RESULTS}\n\\label{s4}\nThe two flavor Nambu-Jona-Lasinio model has a second-order chiral phase\ntransition for massless quarks. For the parameter set used here, this\ntransition occurs at $T_{\\chi} = 194.6$ MeV, as determined numerically from\n$V(M,\\phi \\equiv 1,T)$. The order of the transition is consistent with Monte\nCarlo simulations of two-flavor QCD, and the critical temperature is plausible.\nHowever, the finite temperature quark determinant is not consistent with the\nknown behavior of QCD unless the effects of a non-trivial Polyakov loop are\nincluded. As Eq.~(\\ref{e2.16}) makes clear, the effects of finite temperature in\nthe quark determinant are suppressed by the small expected value of the\nPolyakov loop at low temperatures. 
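The numerical determination of the phase structure described above (finding the absolute minimum of ${\cal V}(M,\phi,T)$ over $M$ and $\phi$ as $T$ varies) can be sketched as a simple grid search. Since Eq. (\ref{e2.20}) is not reproduced here, a toy Landau-type stand-in replaces the quark potential $V$; only the minimization logic, not the potential itself, reflects the text.

```python
import numpy as np

def U(phi, T, T_D=1.0, phi_D=1.0, lam=1.0, L=2.0):
    # Polyakov loop potential of Eq. (3.1), in units of T_D^4
    return lam * (phi - phi**2 / phi_D) ** 2 - (T - T_D) * L / T_D * (phi / phi_D) ** 3

def V_toy(M, phi, T):
    # Toy stand-in for the quark potential of Eq. (2.20): the M^2
    # coefficient turns positive as T and phi grow, restoring chiral
    # symmetry.  NOT the NJL expression -- illustration only.
    return (T * phi - 0.6) * M**2 + 0.25 * M**4

def minimize_potential(T, Ms, phis):
    # absolute minimum of the coupled potential over the (M, phi) grid
    _, M_min, phi_min = min(
        (V_toy(M, p, T) + U(p, T), M, p) for M in Ms for p in phis)
    return M_min, phi_min

Ms = np.linspace(0.0, 2.0, 201)
phis = np.linspace(0.0, 1.2, 121)
M_low, _ = minimize_potential(0.2, Ms, phis)
M_high, _ = minimize_potential(1.5, Ms, phis)
assert M_low > 0.0    # chiral symmetry broken at low T
assert M_high == 0.0  # chiral symmetry restored at high T
```

Scanning $T$ on a fine grid and recording where $M_{\rm min}$ or $\phi_{\rm min}$ jumps discontinuously is how the first- versus second-order character of each transition is diagnosed.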
Without the Polyakov loop effects, the\nconventional Nambu-Jona-Lasinio model displays the $T^4$ behavior of a free\nquark gas at arbitrarily low temperatures.\n\n\\begin{figure}[htb]\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\special{psfile=Fig1.ps angle=0 hoffset=30 voffset=-28 hscale=35 vscale=35}\n\\caption {Schematic behavior of $\\phi$ and $M$ in the three regions of the\n$(T_D, \\phi_D)$ plane as temperature increases.}\n\\label{f1}\n\\end{figure}\n\nNumerical investigation shows that the critical behavior of\n${\\cal V}(M,\\phi,T)$ is most sensitive to $T_D$ and $\\phi_D$. Figure~\\ref{f1}\nillustrates the three distinct types of critical behavior observed as $T_D$ and\n$\\phi_D$ are varied with the other parameters held fixed and the current mass\n$m$ set to $0$. In Region I of the $(T_D, \\phi_D)$ plane the effective quark\nmass changes continuously from its $T = 0$ value to a mass of $0$ MeV as the\ntemperature increases, signaling a second-order chiral phase transition.\nSimultaneously, the expectation value of the trace of the Polyakov loop\nincreases smoothly with temperature, exhibiting a large, but continuous,\nincrease in the vicinity of the chiral transition. This region exhibits\nbehavior most similar to that observed in simulations of two-flavor QCD. In\nRegion II the Polyakov loop experiences a first-order jump at some $T_c$. At\nthe same $T_c$ the constituent quark mass undergoes a sudden drop, but chiral\nsymmetry is {\\it not} restored. As the temperature increases beyond $T_c$, the\nconstituent mass moves continuously to zero. 
Thus, there are two distinct phase\ntransitions in Region II, a first-order transition driven by the dynamics of\ndeconfinement, and a later second-order chiral symmetry restoring transition.\nIn Region III the Polyakov loop experiences a first-order jump at some $T_c$,\nand simultaneously the effective quark mass drops suddenly to $0$ MeV,\nindicating a single first-order transition which restores chiral symmetry.\n\nFigure~\\ref{f2} shows the location of the three regions in the $(T_D, \\phi_D)$\nplane. Figure~\\ref{f3} plots the behavior of the Polyakov loop, $\\phi$, and the\nconstituent quark mass, $M$, as a function of temperature at the point\n$(T_D = 220 {\\rm MeV}, \\phi_D = 1.2)$ in Region II. There is a clear separation\nof the first and second-order transitions.\n\n\\begin{figure}[htb]\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\special{psfile=Fig2.ps angle=0 hoffset=12 voffset=-45 hscale=30 vscale=25}\n\\caption{Phase diagram in the $(\\phi_D, T_D)$ plane.}\n\\label{f2}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\bigskip\n\\special{psfile=Fig3.ps angle=0 hoffset=9 voffset=-60 hscale=25 vscale=25}\n\\caption{Plot of $\\phi$ and $M$ as a function of temperature at the point\n$(T_D = 220 {\\rm MeV}, \\phi_D = 1.2)$ in Region II.}\n\\label{f3}\n\\end{figure}\n\n\\section{LATTICE VERSION OF THE EFFECTIVE POTENTIAL}\n\\label{s5}\nA lattice version of the effective potential has the advantage that many\nparameters can be set from QCD simulations. There are two related issues in\nextending our work to the lattice: the choice of a lattice fermion formalism\nand setting other parameters in the effective action.\n\nA lattice version of the NJL model can be straightforwardly implemented using\nnaive fermions \\cite{BiVr}. 
Species doubling can be handled by a formal\nreplacement of $N_c$ with $N_c \/ 16$. Using the single quark current mass,\nchiral condensate, and constituent mass of the last section as inputs, we find\n\\begin{eqnarray}\na^{-1} = 389.0~{\\rm MeV} \\qquad \\tilde{G}a^{-2} = 0.8343.\n\\label{e5.8}\n\\end{eqnarray}\nHence, $f_{\\pi} = 94.09$ MeV. Also note that $M_{T=0}a$ is of order 1.\nThe temperature is given by $T = 1 \/ (N_t a)$ where $N_t$ is the temporal\nextent of the lattice. This places a problematic upper limit on a discrete\nrange of available temperatures. Given our choice of parameters,\n$T_{\\rm max}$ is only $194.5$ MeV.\n\nThe temperature may be changed continuously by an asymmetric rescaling of the\nlattice in which the lattice spacing $a_t$ in the temporal direction varies\nindependently of the spatial lattice spacing $a_s$ \\cite{Ka}. Defining\n$a_s = a$ and $\\xi = a_t \/ a_s$, we now have in physical units\n$T = 1 \/ (N_t a \\xi)$. The effective potential is\n\\begin{eqnarray}\nV(M, \\phi, \\xi) = \\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\n- {N_f N_c \\over 4 \\xi} \\int_{-\\pi}^{\\pi} {d^3 \\vec{k} \\over (2 \\pi)^3}\n\\ln \\left[ \\sqrt{1 + {\\xi}^2 A^2(\\vec{k})} + \\xi A(\\vec{k})\n\\right] \\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\n- {N_f N_c \\over 2 N_t \\xi} \\int_{-\\pi}^{\\pi} {d^3 \\vec{k} \\over (2 \\pi)^3}\n\\ln \\Bigl\\{ 1 + \\Bigl[ \\sqrt{1 + {\\xi}^2 A^2(\\vec{k})} \\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\n- \\xi A(\\vec{k}) \\Bigr]^{N_t} {tr_c(\\phi) \\over N_c} \\Bigr\\}\n+ {(M - m)^2 \\over 4 \\tilde{G}}\n\\label{e5.11}\n\\end{eqnarray}\nwhere $A(\\vec{k}) = \\sqrt{M^2 + \\sum_j {\\sin}^2(p_j)}$.\nHowever, using the same inputs as the symmetric lattice case, as well as\n$f_{\\pi} = 93$ MeV, we find the unique answer\n\\begin{eqnarray}\na^{-1} = 477.2~{\\rm MeV} \\qquad \\tilde{G}a^{-2} = 1.255\n\\label{e5.15}\n\\end{eqnarray}\nwith $\\xi = 2.002$.\n\nAn alternative to asymmetric lattices might be variant or improved 
actions for\nwhich $a^{-1}$ is larger.\n\n\\section{CONCLUSION}\n\\label{s6}\nPolyakov loop effects have a strong impact on the possible critical behavior of\nthe NJL model. The universality argument \\cite{PiWi}, which predicts a\nsecond-order chiral transition for two flavors and a first-order transition for\nthree or more flavors, may fail when this additional order parameter is\nincluded.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe existence of gravitational waves is predicted in general relativity\nand many modified gravity theories. By observing gravitational waves (GWs),\nwe can not only test general relativity and modified gravity\ntheories in strong gravitational fields, but also reveal the merger\nprocesses of compact objects through the history of the universe and\nphysics in the early universe such as inflation ({\\it e.g.}\n\\cite{2011PhRvL.107u1301G}). In particular, GWs which\noriginate from cosmological inflation have an almost scale-invariant\nspectrum and propagate freely since their generation, and thus the\ndetection of such scale-invariant GWs can be\nconsidered a direct proof of the existence of cosmological inflation.\nIn order to prove the primordial GWs to be\nscale-invariant, we have to observe GWs over a wide frequency range \nand with high sensitivity.\n\nPrecise measurements of primordial GWs at high\nfrequencies will tell us about the thermal history of the early\nuniverse, which could not be reached in other ways. For example, it has\nbeen suggested that the amplitude of GWs at the\nfrequency range $f \\gtrsim 10^{-2.5}$ Hz can be used to infer the reheating\ntemperature \\cite{2008PhRvD..77l4001N}, and the epoch of the quark-gluon\nphase transition and neutrino decoupling at the lower frequency range \n$10^{-10} \\lesssim f \\lesssim 10^{-8}$ Hz \\cite{2009PhRvD..79j3501K}. 
\nIf the universe underwent a strongly\ndecelerating phase after inflation, as realized in quintessential inflation\nmodels \\cite{1999PhRvD..59f3505P},\ninflationary GWs have a blue spectrum at higher frequencies $f\\gtrsim 10^{-2}$ Hz\n\\cite{1999CQGra..16.2905G,2004CQGra..21.1761T}. Therefore, while the\nfirst detection of primordial GWs is expected through\nthe CMB experiments which probe the waves at the smallest frequencies\n({\\it e.g.} \\cite{2000cils.book.....L}),\nit is important to explore gravitational\nwaves over a wide range of frequencies.\n\nIn addition, measurements of GWs at high frequencies will\nopen a new window on black hole merger events in the universe.\nMatsubayashi {\\it et al.} showed that the frequency of gravitational\nwaves that originate from mergers of intermediate mass black holes\n(IMBHs) falls inside the range of $10^{-5}\\lesssim f \\lesssim 10^{1}$ Hz, and one\ncan discover or set constraints on these merger events within several\ngigaparsecs of the Earth in future gravitational wave observation\nprojects \\cite{2004ApJ...614..864M}.\n\nIn order to observe GWs, many kinds of gravitational wave\ndetectors have been proposed and developed. The Laser Interferometer\nGravitational Wave Observatory (LIGO) has already set a strong\nconstraint on the strain amplitude $h_{\\rm c}<4\\times 10^{-24}$ at the\nfrequency of GWs around $10^{2}$ Hz\n\\cite{2009RPPh...72g6901A,2013PrPNP..68....1R}. However, seismic\noscillations interfere with ground-based observations of gravitational\nwaves at lower frequencies $f\\lesssim 10$ Hz ({\\it see}\n\\cite{2001PhyU...44R...1G}). Aiming at this frequency range, torsion\nbar detectors such as TOBA have been developed. Ishidoshiro {\\it et al.}\nset a constraint on the strain amplitude of the continuous component of\nGWs as $h_{\\rm c}< 2\\times 10^{-9}$ at $f\\sim 0.2$ Hz\n\\cite{2011PhRvL.106p1101I}. 
\n\n\n\nAt slightly lower frequency range, with some planetary explorers,\nconstraints such as $h_{\\rm c}<1\\times 10^{-15}$ at $f=10^{-2}$ Hz \\cite{1995A&A...296...13B}\nand $h_{\\rm c}<2\\times 10^{-15}$ at $f=3\\times 10^{-4}$ Hz \\cite{2003ApJ...599..806A}\nhave been set by the Doppler tracking method.\nHowever it is difficult to distinguish\nsignals of GWs as the frequency of targeting\nGWs increases higher than $10^{-2}$ Hz due to the noise\nin the electric circuits of which are installed in the receiver in this method. \n\nIn this work, we consider the modulation of frequency of radio\nwaves from GPS satellites by stationary\/prompt GWs.\nGPS satellites emit precious, stable and high intensity radio waves \nin order to provide positional accuracy for the GPS in navigation. \nIt is possible because the oscillators of GPS satellites are designed to \nbe synchronized with atomic clocks which are loaded with the satellites.\nThese oscillators are so stable that \nthe fractional error of the frequency of their emitting radio waves \nis suppressed to $\\Delta \\nu \\slash \\nu<10^{-15}$\n\\cite{2008.gps.standard}.\n\nThe GPS method is an application of the Doppler tracking method which has two\nadvantages. First, GPS method can probe in the frequency range \n$f\\gtrsim 10^{-2}$ Hz where the conventional Doppler tracking methods can not\nreach because GPS satellites are much closer than the planetary\nexplorers, while the distance is large enough for the noise due to\nseismic oscillations to be negligible as shown in the next section.\nSecondly, the radio waves from GPS \nsatellites can be detected everywhere and everytime on the ground.\nThis condition will be suitable to make cross correlation study and\ncrucial to detect GWs from prompt events. \n\nIn this study, we set a constraint on the amplitude of GWs\nfor $0.01 \\le f \\le 1$ Hz by using the Doppler tracking method with\nGPS disciplined oscillators (GPSDOs). 
A GPSDO is a stable oscillator consisting of\na quartz oscillator whose output is disciplined to agree with the signals\nfrom GPS satellites \\cite{2008.gps.standard,2005ptti.conf..677L}. To\nhave GPSDOs operating with high stability, the amplitude of the\ncontinuous components of GWs should be small. Recently,\nthe frequency stability of GPSDOs for short time intervals, from about one\nsecond to a few hundred seconds, has reached the level of\n${\\Delta \\nu}\\slash{\\nu} \\simeq 10^{-12}$ \\cite{2008.gps.standard,2005ptti.conf..677L}. \nThis short-time stability of the\nfrequency of GPSDOs enables us to set a constraint on the strain\namplitude of continuous GWs.\n\nThis paper is organized as follows. We estimate the effects of\nGWs on the \nmeasurements of the radio waves from GPS satellites and derive the \nconstraint in \\S 2. In our\nformulation, we adopt the TT gauge to describe the GWs. In\n\\S 3, we discuss implications of the result for the\nmerger events of IMBHs and inflationary GWs. We conclude\nthis paper in \\S 4. Throughout \nthis paper, $\\nu$ and $f$ denote the frequency of the electro-magnetic\nwaves and of the GWs, respectively. The speed of light is\ndenoted by $c$. A dot represents a partial derivative with respect to the\nphysical time, {\\it i.e.} $\\dot{x}\\equiv {\\partial x}\\slash{\\partial\nt}$.\n\n\n\\section{Effects of GWs on GPS measurements}\nIn this section, we consider the effect of gravitational wave\nbackgrounds (GWBs) on the radio waves emitted by GPS satellites. Here\nwe assume that the GWBs are isotropic and stationary ({\\it\nsee} \\cite{2000PhR...331..283M}). 
For GWs whose\nwavelengths are longer than the typical distance between the GPS satellites\nand detectors on the ground, the frequency of the radio waves emitted by the\nsatellites is modulated as \n\\begin{equation} \n\\dfrac{\\Delta \\nu}{\\nu}=\n\\dfrac{l}{2c}\\dot{h}(t)\\sin^{2}\\theta ~,\\label{fractional01}\n\\end{equation}\nwhere $\\dot{h}(t)$, $\\theta $ and $l$ are the time derivative of the\namplitude of the GWs at time $t$, the angle between the \ndirections of propagation of the radio waves and the gravitational\nwaves, and the distance between the GPS satellite and the observer,\nrespectively.\n\nBy assuming the GWs are monochromatic plane waves\npropagating along the $z$-axis, the amplitude of the GWs $h(t)$ and its time derivative $\\dot{h}(t)$\ncan be written with the strain $h_{\\rm c}$ as\n\\begin{eqnarray} \nh(t)&=& h_{\\rm c}\\sin \\left(2\\pi f \\left(t-\\dfrac{z}{c}\\right)+\\phi \\right)~,\\\\\n\\dot{h}(t)&=& 2\\pi f h_{\\rm c}\\cos \\left(2\\pi f \\left(t-\\dfrac{z}{c}\\right)+\\phi \\right)~,\\label{differential}\n\\end{eqnarray}\nwhere $\\phi $ is the phase of the GWs at $z=t=0$.\n\nFrom Eq. (\\ref{fractional01}), signals of GWs generate\nan additional shift in the frequency of the radio waves. When one considers the\ncase where $\\theta =\\pi \\slash 2$, the effect of the GWs on\nthe frequency modulation of the electro-magnetic waves can be estimated as\n\\begin{equation} \n\\dfrac{\\Delta \\nu}{\\nu}=\n\\dfrac{\\pi fl}{c}h_{\\rm c}~.\n\\label{fractional02}\n\\end{equation}\nIn reality, the signal sourced by GWs is buried within 
Therefore, if one can receive \nthe radio waves which is emitted at a distance of $l$ \nwith a time variance of the frequency fluctuations $\\sigma$\n\\cite{baran2010modeling}, \none can set the upper bound of the strain amplitude of gravitational\nwaves as, \n\\begin{equation} \nh_{\\rm c}<\\dfrac{c}{\\pi lf}\\sigma ~.\\label{hc}\n\\end{equation}\n\nFor the GPS, the distance $l$ is approximately $2\\times 10^{7}$ m and \nthe standard variation of frequency from GPS satellites converges to \n$\\sigma \\simeq 1\\times 10^{-12}$ \nby integrating the signal from GPS satellites for the period \nfrom one second to one hundred seconds \n\\cite{2005ptti.conf..677L,2008.gps.standard}.\nBy receiving the signal from the GPS for a period $t_{\\rm i}$, \nthe frequency range of the GWs which one can probe is limited\nas $f\\ge t_{\\rm i}^{-1}$. By combining it with the condition \nthat the wavelength of GWs is longer than \nthe distance between the GPS satellites and the observer, \nthe probed range of frequency with the GPS can be written as\n\\begin{equation} \nt_{\\rm i}^{-1}\\le f \\le \\dfrac{l}{c}~.\n\\end{equation}\n\nFrom Eq. (\\ref{fractional01}) \nbecause the amplitude of\nthe modulation is proportional to the frequency of GWs\n$f$, the strain amplitude $h_{\\rm c}$ is constrained tighter for higher\nfrequency. By substituting the numbers into Eq. (\\ref{hc}), we can set\na constraint on the strain of GWBs as\n\\begin{equation} \nh_{\\rm c}<4.8\\times 10^{-12}\\left(\\dfrac{1 {\\rm Hz}}{f} \\right)\n~.\\label{constraintGPS}\n\\end{equation}\nIn figure \\ref{fig.1}, we plot the constraint on the strain amplitude of\nthe continuous component of GWs and compare the result\nfrom the torsion bar detector \\cite{2011PhRvL.106p1101I}. We find that\nthe GPS gives a tighter constraint on the relevant frequency range.\n\nThe GPS method does not suffer seismic oscillations that disturb\nground-based observations. 
The reason can be given as follows.\nIn the GPS method, the effect of GWs can be seen \nas changes in the observed distances between the satellites and observers.\nWhen a plane, monochromatic GW reaches an observer, \nthe observed distance $l$ between the satellite and the observer on the\nground can be written in terms of the proper distance $l_0$ as \n\\begin{equation} \nl=l_{0}\\left(1+h \\right)+x_{\\rm s}~,\\label{equivalentEq}\n\\end{equation}\nwhere $h$ is the amplitude of the GW and $x_{\\rm s}$ is the\namplitude of the seismic oscillations. Thus the effect of seismic\noscillations relative to the strain amplitude of the GWs can\nbe characterized by $x_{\\rm s}\\slash l_{0}$. Shoemaker {\\it et al.}\nreported a typical amplitude spectrum of seismic oscillations as\n\\cite{1988PhRvD..38..423S}\n\\begin{equation} \nx_{\\rm s}\\simeq 3\\times 10^{-7} \\left(f\\slash 1~{\\rm Hz}\n\t\t\t\t\\right)^{-2}~{\\rm [m]}~. \n\\end{equation}\nBecause $l_{0}\\simeq 2.0\\times 10^{7}$ m, the effect of seismic\noscillations is suppressed as\n\\begin{equation} \n\\dfrac{x_{\\rm s}}{l_{0}}\\simeq \n1.5\\times 10^{-14}\\left(f\\slash 1~{\\rm Hz}\n\t\t\t\t\\right)^{-2}~.\n\\end{equation}\nThis is smaller than the upper limit of our constraint, Eq.~(\\ref{constraintGPS}). \n\nIn addition, because GPS satellites fly much closer to the ground\nthan the planetary explorers which have been used in the Doppler\ntracking method, we can set a constraint on the strain amplitude of GWBs\nat higher frequencies than those of ULYSSES and Cassini\n\\cite{1995A&A...296...13B,2003ApJ...599..806A}.\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=85mm]{gw}\n \\end{center}\n \\caption{The upper limit on the strain amplitude of GWBs from GPS\n satellites (solid line). The dashed line represents the constraint\n from the torsion bar detector \\cite{2011PhRvL.106p1101I}. The dotted\n line represents the constraint from the Doppler tracking\n method \\cite{1995A&A...296...13B}. 
} \\label{fig.1}\n\\end{figure}\n\nThe intensity of GWBs can be characterized by the dimensionless\ncosmological density parameter $\\Omega_{\\rm gw}(f)$. The parameter is\ndefined as\n\\begin{equation} \n\\Omega_{\\rm gw}(f)=\\dfrac{10\\pi^{2}}{3H_{0}^{2}}(fh_{\\rm c})^{2}~,\n\\end{equation}\nwhere $H_{0}$ is the Hubble constant, for which the Planck collaboration\nreported $H_{0}=67.11~{\\rm km\/sec\/Mpc}=2.208\\times 10^{-18}~{\\rm\nsec}^{-1}$~\\cite{2013arXiv1303.5076P}. Then the constraint Eq.~(\\ref{constraintGPS}) can be written as\n\\begin{equation} \n\\Omega_{\\rm gw}(f)<1.7\\times 10^{14} ~~{\\rm for}~10^{-2} \\lesssim f \\lesssim 10^{0}~{\\rm Hz}~.\n\\end{equation}\nIn figure \\ref{fig.2}, we compare our constraint with the previous ones.\nIt can be seen that the Doppler tracking method with the GPS sets\na constraint on the amplitude of the continuous component of gravitational\nwaves in the frequency range of $10^{-2} \\lesssim f \\lesssim 10^{0}$ Hz, a window between\nthe constraints from the torsion bar and the planetary explorers.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=85mm]{gwOmega2}\n \\end{center}\n \\caption{Summary of the constraints on the GW background in terms of\n$\\Omega_{\\rm gw}(f)$, which includes {\\tt COBE} \\cite{2000PhR...331..283M}, {\\tt CMB\nhomogeneity} and {\\tt BBN} \\cite{2006PhRvL..97b1301S}, {\\tt Pulsar\ntiming} \\cite{2006ApJ...653.1571J}, {\\tt LLR}\n\\cite{1981ApJ...246..569M,2009IJMPD..18.1129W}, {\\tt Cassini}\n\\cite{2003ApJ...599..806A}, {\\tt ULYSSES} \\cite{1995A&A...296...13B},\n{\\tt Lunar orbiter} \\cite{2001Icar..150....1K},\n{\\tt Torsion bar} \\cite{2011PhRvL.106p1101I}, {\\tt LIGO}\n\\cite{2009RPPh...72g6901A,2013PrPNP..68....1R}, and {\\tt GPS}.} \\label{fig.2}\n\\end{figure}\n\n\n\\section{Discussion}\n\n\nIn this work, we show that satellites with atomic clocks can be used\nto set constraints on the strain amplitude of GWs\nat $10^{-2}\\lesssim f \\lesssim 10^{0}$ Hz. 
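The conversion from $h_{\rm c}$ to $\Omega_{\rm gw}$ used above is a one-liner; the sketch below (our addition) adopts the convention and $H_0$ quoted in the text, and lands within rounding of the quoted $1.7\times10^{14}$:

```python
import math

# Omega_gw(f) = 10 pi^2 / (3 H0^2) * (f h_c)^2, with H0 and h_c from the text.
H0 = 2.208e-18          # Hubble constant [1/s]

def omega_gw(f, hc):
    return 10.0 * math.pi**2 / (3.0 * H0**2) * (f * hc)**2

# The GPS bound h_c = 4.8e-12 (1 Hz / f) makes f * h_c constant,
# so Omega_gw is flat over the probed band 1e-2 Hz <~ f <~ 1 Hz:
print(omega_gw(1.0, 4.8e-12))   # ~1.6e14 (the text quotes 1.7e14)
```

The small discrepancy with the quoted value presumably comes from rounding of $h_{\rm c}$ and $H_0$.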
The constraints based on the Doppler tracking\nmethod with planetary explorers such as ULYSSES and Cassini are mainly\nlimited by the stability of the hydrogen maser clock on the ground,\n$\\sim 10^{-15}$, in the frequency range $10^{-6}\\lesssim f\\lesssim 10^{-2}$ Hz. \nThe stability of the optical lattice clock is expected\nto reach $10^{-18}$ \\cite{2003PhRvL..91q3005K}. If future explorers \ncarry optical lattice clocks on board, they will become useful\ninstruments for detecting, or setting constraints on, the strain amplitude\nof GWs ({\\it see also }\\cite{2006LRR.....9....1A}).\n\nThe accuracy with which frequency fluctuations of the radio waves received \nfrom GPS satellites can be determined depends mostly on the time resolution of \nthe received electric signal in the A\\slash D converter\nof the GPS receiver, $\\sigma_{\\rm r}$. It is reported that $\\sigma_{\\rm r}\n\\ge 2\\times 10^{-13}\\slash t_{\\rm i}$ and that the largest error comes from\nthe frequency transfer \\cite{2005ptti.conf..149F}. It is difficult to\nimprove the GPS constraints on the strain amplitude until the \nquantization resolution of the A\\slash D converter is much improved.\n\nHere let us apply our result to setting a constraint on\nmerger events of compact objects. In particular, it is predicted that\nmergers of IMBH binaries emit strong GWs in the\nrelevant frequency range, while it is difficult to detect such events\nthrough X-rays or radio waves if the systems do not have accretion\ndisks. Therefore observations or constraints of GWs\nenable one to estimate the number density of IMBH binaries and of their\nmerger events. 
\n\nDuring the quasi-normal mode (QNM) phase of IMBH mergers, in which\nthe merging IMBHs are expected to emit GWs with the largest \namplitude, the amplitude $h_{\\rm QNM}$, typical\nfrequency $f_{\\rm QNM}$ and duration $t_{\\rm QNM}$ of the event\nare given by \\cite{2004PThPS.155..415M}\n\\begin{eqnarray} \nf_{\\rm QNM}&\\simeq &4\\times 10^{-2}\\left(\\dfrac{M}{10^{6} M_{\\odot}} \\right)^{-1}~[{\\rm Hz}],\\label{f.insp}\\\\\nh_{\\rm QNM}&\\simeq &2\\times 10^{-12}\\left(\\dfrac{M}{10^{6} M_{\\odot}} \\right)\n\\left(\\dfrac{\\varepsilon }{10^{-2}} \\right)^{\\frac{1}{2}}\n\\left(\\dfrac{R}{4 {\\rm kpc}}\\right)^{-1}\n~,\\label{h.insp} \\\\\nt_{\\rm QNM}&\\simeq &30\\times \n\\left(\\dfrac{M}{10^{6} M_{\\odot}} \\right)~[{\\rm sec}].\\label{t.insp}\n\\end{eqnarray}\nHere $\\varepsilon$ is the efficiency of GW emission, $R$ is the distance\nto the binary, and we assumed that the two IMBHs have the same mass $M$.\nFrom Eqs. (\\ref{constraintGPS}), (\\ref{f.insp}) and (\\ref{h.insp}), \nthe GPS constraint rules out merger events of IMBHs unless\n\\begin{eqnarray} \nR&\\gtrsim&0.1\\left(\\dfrac{\\varepsilon }{0.01} \\right)^{-1\\slash 2} [{\\rm kpc}]~\\\\\n& &{\\rm for}~4\\times 10^{4}M_{\\odot} \\le M \\le 4\\times 10^{6}M_{\\odot}~. \\notag\n\\end{eqnarray}\nThe minimum and maximum masses are \ndetermined from the frequency range we can probe in this method, i.e., Eq.~(4).\n\nFrequency modulation signals induced by GWs may be\ndisturbed by the plasma effect in the ionosphere and the atmosphere. 
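To make the origin of the numbers in the last equation explicit, the following sketch (ours, not the authors') combines Eqs.~(\ref{f.insp}) and (\ref{h.insp}) with Eq.~(\ref{constraintGPS}); note that since both $h_{\rm QNM}$ and the bound $h_{\rm c}(f_{\rm QNM})$ scale linearly with $M$, the excluded distance is mass-independent within the probed band:

```python
# Combine Eqs. (f.insp), (h.insp) with the GPS bound (constraintGPS).
def f_qnm(M6):                 # M6 = M / (1e6 solar masses)
    return 4.0e-2 / M6         # [Hz]

def h_qnm(M6, eps=1.0e-2, R_kpc=4.0):
    return 2.0e-12 * M6 * (eps / 1.0e-2)**0.5 * (4.0 / R_kpc)

def hc_gps(f):
    return 4.8e-12 / f         # Eq. (constraintGPS)

# Mass range probed: f_qnm must lie in [1e-2, 1] Hz.
print(f_qnm(0.04), f_qnm(4.0))   # 1.0 Hz and 0.01 Hz -> 4e4 .. 4e6 Msun

# Distance out to which a merger is excluded (same for all M in the band):
R_min = 4.0 * h_qnm(1.0) / hc_gps(f_qnm(1.0))   # [kpc]
print(R_min)   # ~0.07 kpc, consistent with the quoted R >~ 0.1 kpc to rounding
```
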
\nFluctuations of the column density of free electrons in the plasma,\ncalled the dispersion measure (DM), induce fluctuations in the frequency of GPS\nradio waves as\n({\\it e.g.} \\cite{2004tra..book.....R,Jackson})\n\\begin{equation} \n\\dfrac{\\Delta \\nu}{\\nu}=\\dfrac{e^{2}\\nu^{-2}}{2\\pi m_{\\rm e}cr}{\\rm DM}~,\n\\end{equation}\nwhere ${\\rm DM}=\\int_{0}^{r}n_{\\rm e}ds$ is the column density of\nelectrons, $m_{\\rm e}$ and $e$ are the mass and the electric charge of\nan electron, respectively, and $r$ is the distance between the GPS\nsatellite and the observer. By comparing the above equation with\nEq. (\\ref{fractional02}), one can see that the frequency dependences\nof the modulations induced by GWs and by the\nplasma effect are different. Therefore the modulation originating from the plasma can\nin principle be removed by using multiple frequencies. Some of the\nGPSDOs observe the two bands of GPS radio waves (L1 \\& L2) \\footnote{The\nfrequencies of the L1 and L2 bands are 1.57542 GHz and 1.2276 GHz,\nrespectively.} and take this modulation into account.\n\nHowever, even when GPSDOs give their best performance, the distance at\nwhich one can detect the merger of IMBHs with GPSDOs is limited to only\n0.1 kiloparsec from the Earth. In order to detect mergers of IMBHs and compact\nobjects and the GW background from inflation with realistic amplitudes,\nspace-based gravitational wave detectors are needed. \nAs future gravitational wave detectors, the Laser Interferometer Space Antenna (LISA)\nand the Deci-hertz Interferometer Gravitational wave Observatory (DECIGO)\nare under development. LISA and DECIGO are expected to reach $h_{\\rm c}\\simeq 3\\times\n10^{-21}$ at $f\\simeq 6 \\times 10^{-3}$ Hz and $h_{\\rm c}\\simeq 2\\times 10^{-24}$ at\n$f\\simeq 0.3$ Hz, respectively\n\\cite{2013arXiv1305.5720C,2009JPhCS.154a2040S}. 
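The dual-band removal of the plasma term described above works because the GW modulation is achromatic while the dispersive term scales as $\nu^{-2}$; a minimal sketch (ours; the synthetic signal values are invented for illustration, only the L1/L2 frequencies come from the footnote):

```python
# Ionosphere-free combination of the two GPS bands L1 and L2:
# measured fractional shifts y_i = g + K / nu_i^2, where g is the achromatic
# (GW-like) part and K encodes the dispersion-measure term.
nu1, nu2 = 1.57542e9, 1.2276e9   # L1, L2 carrier frequencies [Hz]

def achromatic_part(y1, y2):
    return (nu1**2 * y1 - nu2**2 * y2) / (nu1**2 - nu2**2)

# Synthetic example: g is recovered regardless of the plasma term K.
g, K = 1.0e-12, 5.0e5
y1 = g + K / nu1**2
y2 = g + K / nu2**2
print(achromatic_part(y1, y2))   # ~1e-12, plasma contribution removed
```
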
These sensitivities\nwill enable us to detect almost all merger events of IMBHs with mass\n$M\\sim 10^{3}M_{\\odot}$ within the current horizon \\footnote{The current\nhorizon scale is approximately equal to the distance to the last\nscattering surface of the CMB, $d_{\\rm A(CMB)}$. Jarosik {\\it et al.}\nreported $d_{\\rm A}=14.116^{+0.160}_{-0.163}$ Gpc\n\\cite{2011ApJS..192...14J}.}, and to reveal the properties of the strong\ngravity field and of cosmological inflation\n\\cite{2004PThPS.155..415M}.\n\n Finally, let us discuss another constraint that can be obtained from lunar\norbiting explorers in a way similar to the GPS constraint. \nIn the Apollo mission, in order\nto study the gravity field of the Moon,\nlunar explorers such as Apollo 15 and 16 measured \nthe change in distance between the\nexplorers and the Earth precisely \nvia S-band transponders \\cite{2001Icar..150....1K}.\nThe typical distance between the explorers and the ground is $l\\approx\n3.8\\times 10^{8}$ m, which is much longer than in the GPS case.\nIt was reported that no anomalous oscillating motions were\nfound in their data sampled every ten seconds, which have an accuracy of $1\\times\n10^{-4}~$m\/sec \\cite{2001Icar..150....1K}.\nFrom Eqs. \\eqref{differential} and \\eqref{equivalentEq}, this \nresult translates into a constraint on\nthe strain amplitude of GWs of $h_{\\rm c}<2.6\\times 10^{-13}$ in\nthe frequency range $f < c\/l \\simeq 0.8$ Hz \\footnote{ To be more\nprecise, the distance between the Moon and the Earth is so long that\nthis constraint should be corrected at frequencies above 0.1 Hz as\n$h_{\\rm c}<2.6\\times 10^{-13}\\sqrt{1+\\left(f\/0.16~{\\rm\n[Hz]}\\right)^{2}}$.\nAt $f=1$ Hz, we obtain the constraint $h_{\\rm c}<1.6\\times 10^{-12}$.\n}.\nThe constraint is also depicted in Fig. \\ref{fig.2}.\nRecent lunar explorers such as Kaguya \\cite{2009Sci...323..900N} \nand GRAIL \\cite{2013Sci...339..668Z,2010EGUGA..1213921Z} may improve this upper bound. 
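Two of the lunar numbers quoted in this paragraph (the maximum frequency $c/l$ and the footnote's corrected bound at $f=1$ Hz) are easy to re-derive; a short check (our addition):

```python
import math

# Maximum probed frequency for the lunar baseline, f < c / l:
c, l = 2.998e8, 3.8e8            # [m/s], [m]
print(c / l)                     # ~0.79 Hz, the "0.8 Hz" quoted above

# Footnote correction at f = 1 Hz: h_c < 2.6e-13 * sqrt(1 + (f/0.16)^2)
hc_1Hz = 2.6e-13 * math.sqrt(1.0 + (1.0 / 0.16)**2)
print(hc_1Hz)                    # ~1.6e-12, as stated in the footnote
```
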
\nIn addition, future\nmeasurements with lunar surface transponders, \nsuch as those proposed by Gregnanin {\\it et al.} \n\\cite{2012P&SS...74..194G}, will be available for setting \nconstraints on $h_{\\rm c}$.\n\n\\section{Conclusion}\nWe set a constraint on the strain amplitude of the continuous component\nof GWs of $h_{\\rm c }<4.8\\times 10^{-12}\\left(1[{\\rm\nHz}]\\slash f \\right)$ in the frequency range $10^{-2}\\lesssim f \\lesssim 1$ Hz \nusing the radio waves emitted by GPS\nsatellites in operation. \nBecause the distance between GPS satellites and observers\nis of order $10^{7}$ m, seismic oscillations do not affect the\nconstraints on the strain amplitude. The sensitivity to\nGWs is limited by that of the A\\slash D converter of the\nGPS receiver in this frequency range.\n\n\\section{Acknowledgments}\n{\nWe thank the referee, Peter L. Bender, for making us aware of the fact that past and future lunar orbiter experiments can be used to set better constraints on the amplitude of continuous GWs. Our thanks also go to Jun'ichi Miyao for discussions on the characteristics of the GPS, and to Seiji Kawamura and Yoichi Aso for discussions on the noise in the electric circuits installed in gravitational wave detectors.}\nThis work is supported in part by scientific research grants from the Japan Society for the Promotion of Science (JSPS), Nos. 
24009838\n(SA) and 24340048 (KI).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAuger spectroscopy involves the creation of two localized holes at\nor close to the same atom, hence giving access to local electronic\nproperties.\nDirect information on the local density of valence states is brought by\ncore-core-valence transitions; in the case of core-valence-valence (CVV)\nones, which will be investigated here, one can in addition access the\nscreened Coulomb repulsion amongst the two valence holes in the final\nstate, which is relevant to a wide class of phenomena, and study its\neffects.\n\nFrom the theoretical point of view, a large amount of work has been\ndevoted to the calculation of the Auger spectra of solids during the\nlast three decades.\\cite{Verdozzi:2001:AugerRev} A general formulation\nof the dynamical Auger decay, where the creation of the initial core\nhole and the Auger decay are considered as coherent processes, was\ngiven by Gunnarsson and\nSch\\\"{o}nhammer\\cite{Gunnarsson:1980:dynTheoryAuger} but is hard\nto implement in practice.\nIn solids with (almost) closed valence bands, where no dynamical\ncore-hole screening can occur before the Auger decay, one can employ a\nsimpler two-step approximation and consider the above events as\nindependent.\nUnder this assumption, Cini\\cite{Cini:1977:CSM} and\nSawatzky\\cite{Sawatzky:1977:CSM} (CS) proposed a simple model\nproviding the Green's function describing the two valence holes left\nafter the Auger decay.\nGood agreement with experiments was achieved using fitting parameters\nfor the screened Coulomb interaction, giving a quantitative\nunderstanding of the Auger spectra of transition metals located at the\nbeginning and the end of the row, such as Ti,\\cite{Cini:1995:AugerTi}\nAg,\\cite{Cole:1994:AgAugerOffSite} and\nAu.\\cite{Verdozzi:1995:AuAugerOffSite} These results confirmed the\nusefulness of explicitly including on-site Hubbard terms in the 
one-body\nHamiltonian, and prompted an extension to nearest-neighbour\ninteractions.\\cite{Verdozzi:1995:OffSiteRev}\n\nThese studies determined the relevant physical parameters by\nreproducing experimental findings within a semi-empirical approach.\nOf particular interest is the parameter governing the interaction\namongst the two holes in the final state, which has an analogue in the\npopular LDA+U description for correlated\nsystems.\\cite{Anisimov:1991:LDA+U, Anisimov:1993:LDA+U} Even though\nmethods for its {\\em ab initio} evaluation have been proposed, this\nquantity too is often determined by phenomenological arguments.\n\nThe possibility of evaluating CVV spectra from first principles, rather\nthan from a model with parameters fitted to experiments, would then be\nvery desirable, as it would allow one to predict different situations (e.g.,\nto investigate the effect of a given chemical environment on the Auger\ncurrent) and to interpret experimental findings more deeply.\n\nThis paper addresses this possibility by proposing a\nmethod to compute the parameters entering the CS model by {\\em ab\ninitio} simulations. In this step towards a first-principles\ndescription of CVV Auger spectra in systems where the interaction of\nthe final-state holes cannot be neglected, we aim at highlighting the\nmost important contributions to the spectrum, on which one should\nfocus in forthcoming improvements.\nThe method is based on Density Functional Theory (DFT) simulations in\nthe Kohn-Sham (KS) framework, with constrained occupations. We make\nuse of comparisons with reference atomic calculations to extrapolate\nthe electronic properties of the sample when they are more difficult\nto evaluate directly. 
Results are presented for the\n$L_{23}M_{45}M_{45}$ Auger line of Cu and Zn metals, which have been\nchosen as benchmark systems with closed $3d$ bands: the former being\nmore challenging for the proposed procedure, and the latter bearing\nmore resemblance to the atomic case.\n\nThe paper is organized as follows. In section~\\ref{sec:methods} we\ndescribe our method to evaluate the Auger spectra by first-principles\ncalculations. Section~\\ref{sec:results} presents our theoretical\nresults for Cu and Zn metals, comparing them with experimental results\nin the literature. In section~\\ref{sec:discussion} we analyze the\nweight of various contributions and discuss improvements. Finally,\nsection~\\ref{sec:conclusions} is devoted to conclusions.\n\n\n\\section{\\label{sec:methods}Theoretical methods}\n\n\\subsection{Model Hamiltonian and the Cini-Sawatzky solution}\n\nWe describe the electron system in the hole representation, by a\nHubbard-like~\\cite{Hubbard:1963:HubbardH} model Hamiltonian\n\\begin{equation}\n\\label{eq:HHubbard}\nH=\\epsilon_c c_c^\\dagger c_c\n +\\sum_v\\epsilon_v c_v^\\dagger c_v\n +\\frac{1}{2}\\sum_{\\varphi_1 \\varphi_2 \\varphi_3 \\varphi_4} U_{\\varphi_1 \\varphi_2 \\varphi_3 \\varphi_4}\nc^\\dagger_{\\varphi_1}c^\\dagger_{\\varphi_2} c_{\\varphi_4}c_{\\varphi_3},\n\\end{equation}\nwhere $c$ and $v$ label the core state involved in the transition and\nthe valence states of the system, respectively, including the spin\nquantum number. In bulk materials, $v$ is a continuous index. 
The\nlast term is the hole-hole interaction Hamiltonian, parametrized by\nthe screened repulsion $U$, and is for simplicity restricted here to a\nfinite set of wavefunctions, $\\varphi$, centered at the emitting atom\n(hence neglecting interatomic interactions).\nIn closed shell systems, $\\epsilon_c$ and $\\epsilon_v$ yield the core\nand valence photoemission energies, since the two-body term has no\ncontribution on the one-hole final state or on the zero-hole initial\none.\n\nA two-step model is adopted to represent the Auger process, assuming\nthat the initial ionization and the following Auger decay of the core\nhole can be treated as two independent events. In other terms, we assume\nthat the Auger transition we are interested in follows a fully relaxed\nionization of a core shell.\nIf the ground state energy of the neutral $N$-electron system is chosen\nas a reference, the energy of the initial state is simply given by\n$\\epsilon_c$.\nThe total spectrum for electrons emitted with kinetic energy $\\omega$ is\nproportional to\n\\begin{equation}\n\\label{eq:Spectrum}\nS(\\omega)=\\sum_{XY} A^{*}_X D_{XY}(\\epsilon_c-\\omega) A_Y,\n\\end{equation}\nwhere $X$ and $Y$ are the final-state quantum numbers, $A_X$ is the\nAuger matrix element corresponding to the final state $X$, and\n$D_{XY}$ represents the two-hole density of states.\nNotice that Eq.~(\\ref{eq:Spectrum}) coincides with the Fermi golden rule\nif the states $X$, $Y$ are eigenstates of the Hamiltonian, so that\n$D_{XY}$ is diagonal.\nThe presence of the transition matrix elements effectively reduces the\nset of states contributing to Eq.~(\\ref{eq:Spectrum}) to those with a\nsignificant weight close to the emitting atom.\nThis motivates the approximation to restrict $X$ and $Y$ to two-hole\nstates based on wavefunctions centered at the emitting atom, such as\nthe set $\\{\\varphi\\}$ previously introduced.\nTherefore, the CVV spectrum is a measure of the two-hole local density\nof states (2hLDOS), with 
modifications due to the matrix elements.\nThe 2hLDOS could in principle be determined as the imaginary part of\nthe two-hole Green's function, $G_{XY}$, the solution of\nEq.~(\\ref{eq:HHubbard}). However, because of the presence of the\nhole-hole interaction term, evaluating $G_{XY}$ is in general a\nformidable task.\n\nFor systems with filled valence bands, the two holes are created in a\nno-hole vacuum and one is left with a two-body problem. A solution in\nthis special case has been proposed by Cini\\cite{Cini:1977:CSM} and\nSawatzky,\\cite{Sawatzky:1977:CSM} and is briefly reviewed here (see\nRef.~\\onlinecite{Verdozzi:2001:AugerRev} for an extended review).\nThe interacting two-hole Green's function, $G$, is found as the\nsolution to a Dyson equation with kernel $U$, which reads:\n\\begin{equation}\n\\label{eq:G}\nG(\\omega)=G^{(0)}(\\omega) \\big(1-UG^{(0)}(\\omega)\\big)^{-1}.\n\\end{equation}\nHere, $G^{(0)}$ is the non-interacting Green's function, which can be\ncomputed from the non-interacting 2hLDOS, $D^{(0)}$, via a Hilbert\ntransform. 
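A compact numerical illustration of Eq.~(\ref{eq:G}) for a single channel (our sketch, using a model semielliptic 1hLDOS rather than the {\em ab initio} one, and a scalar $U$): the 1hLDOS is self-convolved into $D^{(0)}$, Hilbert-transformed into $G^{(0)}$, and the Dyson equation then produces the split-off two-hole resonance near $U$ once $U$ exceeds the two-hole bandwidth:

```python
import numpy as np

# Model 1hLDOS: semielliptic band of half-width W, normalized to one state.
N = 800
w = np.linspace(-4.0, 4.0, N + 1)
dw = w[1] - w[0]
W = 2.0
d = np.sqrt(np.clip(W**2 - w**2, 0.0, None))
d /= np.trapz(d, w)

# Noninteracting 2hLDOS by self-convolution, D0 = d * d.
D0 = np.convolve(d, d) * dw
w2 = np.linspace(2 * w[0], 2 * w[-1], D0.size)

# Hilbert transform: G0(omega) = int D0(w') / (omega - w' + i eta) dw'.
eta = 0.05
G0 = (D0[None, :] / (w2[:, None] - w2[None, :] + 1j * eta)).sum(axis=1) * dw

# Dyson equation, Eq. (G): G = G0 / (1 - U G0); spectrum = -Im G / pi.
U = 6.0                         # strong-U regime: atomic-like split-off peak
S = -np.imag(G0 / (1.0 - U * G0)) / np.pi
print(w2[np.argmax(S)])         # resonance pushed to ~U above the band centre
```

For $U=0$ the spectrum reduces to the broadened $D^{(0)}$, recovering the band-like limit discussed below.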
Such 2hLDOS results from the self-convolution of the\none-hole local density of states (1hLDOS),\n$D^{(0)}{\\equiv}d{*}d$.\n\nThe quantum numbers $LSJM_J$ (intermediate coupling scheme) are the\nmost convenient choice to label the two-hole states, allowing for the\nstraightforward inclusion of the spin-orbit interaction in the final\nstate by adding to the Hamiltonian the usual diagonal term,\nproportional to $[J(J+1)-L(L+1)-S(S+1)]$.\nFinally, the CVV lineshape is:\n\\begin{equation}\n\\label{eq:PCiniLSJMJ}\nS(\\omega)=-\\frac{1}{\\pi}\n\\sum_{LSJM_J \\atop L'S'J'M_{J'}'} A^{*}_{LSJ} A_{L'S'J'} \\text{Im}\n\\left[\n\\frac{\n G^{(0)} ( \\epsilon_c - \\omega )\n}{\n 1 - U G^{(0)}( \\epsilon_c - \\omega )\n}\n\\right]_{LSJM_J \\atop L'S'J'M_{J'}'}.\n\\end{equation}\nFor comparison with experimental results, this is to be convoluted\nwith a Voigt profile to account for core-hole lifetime and\nexperimental resolution.\n\n\nIt is customary to isolate two limiting regimes:\n(i) When $U$ is small with respect to the valence band width $W$\n(broad, band-like spectra) the 2hLDOS is well represented by\n$D^{(0)}(\\omega)$. However, in such a case it might be even\nqualitatively important to account for a dependence of the matrix\nelements on the Auger energy $\\omega$. As a consequence, accurate\ncalculations of the lineshape require the simultaneous evaluation of\nthe matrix elements and the DOS.\n(ii) For $U$ larger than $W$, narrow atomic-like peaks dominate the\nspectrum, each peak from an $LSJ$ component. Hence, to the first\napproximation the spectrum is described by a sum of\n$\\delta$-functions, weighted by matrix elements whose dependence on\nthe Auger energy may be neglected. 
If we take the matrix $U$ diagonal\nin the $LSJ$ representation, and indicate by $E^{(0)}_{LSJ}$ the\nweighted average of $D^{(0)}_{LSJ}(\\omega)$, one obtains:\n\\begin{equation}\n\\label{eq:PDeltaLSJ}\nS(\\omega)\\approx\\sum_{LSJ}(2J+1)|A_{LSJ}|^2\n\\delta\\big( (\\epsilon_c - E^{(0)}_{LSJ} - U_{LSJ}) - \\omega \\big).\n\\end{equation}\nAtomic matrix elements can be taken as a first approximation, often\nsatisfactory, and can be evaluated as shown in\nRef.~\\onlinecite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}.\nAn approach which could bridge between these two limiting regimes,\nconsidering both finite values of $U$ and the energy dependence of the\nmatrix elements, is still missing to our knowledge.\n\nIn the present work we adopt Eq.~(\\ref{eq:PCiniLSJMJ}) in order to\nsimulate the spectrum. Accordingly, one has to determine the quantities\n$A$, $U$, $D^{(0)}(\\omega)$, and $\\epsilon_c$.\nIn this paper we make use of a $U$ matrix which does not include the\nspin-orbit interaction, and is diagonal on the $LS$ basis.\nWe take atomic results in the literature for the matrix elements $A$,\nwhich are assumed independent of $J$\ntoo.\\cite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}\nThe other quantities are computed by DFT simulations, as detailed in the\nfollowing Section.\n\n\n\n\\subsection{{\\em Ab initio} determination of the relevant parameters}\n\nTo evaluate {\\em ab initio} the photoemission energies we use a method\nclosely related to Slater's transition-state theory, while the\nparameter $U$ is computed following a general procedure first proposed\nin Ref.~\\onlinecite{Gunnarsson:1989:DFTcalcAnderson} and then adopted\nby several authors.\n\n\nOne extrapolates total energies for the system with $N$, $N-1$ and $N-2$\nelectrons by DFT calculations with constrained occupations for $N-q$\nelectrons, with $q$ small (typically, up to $0.05$), so that ionized\natoms in otherwise periodic systems can be treated in rather small\nsupercells. 
We make the approximation that the total energy of the\nsystem with $q_i$ electrons removed from the level $i$ is given by a\npower expansion in $q_i$ up to third order:\n\\begin{equation}\n\\label{eq:EtotQ}\nE(N-q_i) = E(N) + A_i q_i + B_i q_i^2 + C_i q_i^3.\n\\end{equation}\nIn the following we shall assume that this can be extended to finite\nvalues of $q_i$.\nThe introduction of the cubic term $C_i q_i^3$ allows for a\n$q$-dependence of the screening properties of the system.\nThe coefficients $A_i$, $B_i$, and $C_i$, where $i$ labels core and\nvalence states involved in the transition, are in this framework all\nthat is needed to compute the Auger electron energy. They can be\nevaluated in two equivalent ways, whichever is most convenient: by\ntaking the first, second, and third derivatives of the total energy\n$E(N-q_i)$ for $q_i\\rightarrow0$; or by using Janak's\ntheorem\\cite{Janak:1978:theorem} and computing the KS eigenvalue of\nlevel $i$ and its first and second derivatives:\n\\begin{equation}\n\\label{eq:JanakQ}\n-\\epsilon^\\text{KS}_i(N-q_i) = A_i + 2 B_i q_i + 3 C_i q_i^2.\n\\end{equation}\nIn particular, $A_i$ is given by (minus) the KS eigenvalue in the\nneutral system.\n\nThe binding energy of a photoemitted electron, $\\epsilon_i\n{\\equiv}E^\\text{XPS}_i=E(N-1_i)-E(N)$, to be used in\nEq.~(\\ref{eq:HHubbard}), is given by Eq.~(\\ref{eq:EtotQ}) as:\n\\begin{equation}\n\\label{eq:XPSABC}\n \\epsilon_i = A_i + B_i + C_i.\n\\end{equation}\nThis is very close to Slater's well-known transition-state\napproach, in which the XPS energy equals (minus) the eigenvalue at\nhalf filling. The latter amounts to $A_i+B_i+\\frac{3}{4}C_i$ when\napproximating the total energy by a cubic expansion as in\nEq.~(\\ref{eq:EtotQ}). 
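The extraction of $A$, $B$, and $C$ via Eq.~(\ref{eq:JanakQ}) is a plain quadratic fit; the sketch below (ours) uses synthetic eigenvalues in place of the constrained-DFT output, on the same $q$ mesh used later in the paper, and also compares Eq.~(\ref{eq:XPSABC}) with Slater's half-filling value $A+B+\tfrac{3}{4}C$:

```python
import numpy as np

# Synthetic stand-in for the constrained-DFT output: -eps_KS(N - q).
A_true, B_true, C_true = 930.0, 25.0, 1.0     # assumed values [eV]
q = np.arange(0.0, 0.051, 0.01)
minus_eps = A_true + 2.0*B_true*q + 3.0*C_true*q**2

# Quadratic fit, Eq. (JanakQ): -eps = A + 2B q + 3C q^2.
c2, c1, c0 = np.polyfit(q, minus_eps, 2)
A, B, C = c0, c1 / 2.0, c2 / 3.0

E_xps = A + B + C                  # Eq. (XPSABC)
E_slater = A + B + 0.75 * C        # Slater transition state (half filling)
print(A, B, C, E_xps - E_slater)   # the difference is C/4
```
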
In other words, it differs from the result of\nEq.~(\\ref{eq:XPSABC}) only by $\\frac{1}{4}C_i$, with $C_i\\lesssim1$~eV\nin the cases considered here (see below).\nIt is worth noticing that the term $B_i+C_i$ acts like a correction to\nthe (minus) KS eigenvalue $A_i$, accounting for dynamical relaxation\neffects even though all terms are evaluated within KS-DFT.\n\nThe evaluation of the $A$, $B$, and $C$ coefficients for localized\nstates poses no additional difficulty. Instead, care must be taken\nwhen determining those corresponding to the delocalized valence shells\nof bulk materials ($A_v$, $B_v$, and $C_v$), for which we propose the\nfollowing method.\nAs for $A_v$, this is a continuous function of the quantum number $v$\nand, by taking advantage of Janak's theorem, it is the KS band energy\nwith reversed sign.\nTo estimate $B_v$ and $C_v$, we neglect their dependence on $v$ and\nassume that a single value can be taken across the valence band,\nacting as a rigid shift of the band. Hence, the 1hLDOS is obtained\nfrom the KS LDOS, $d^\\text{KS}(\\omega)$, as:\n\\begin{equation}\n\\label{eq:1hLDOS}\nd(\\omega)=d^\\text{KS}(-\\omega+B_v+C_v).\n\\end{equation}\nFor the sake of the forthcoming discussion, one can also define a single\nvalue of $A_v$ in the solid by taking the KS valence band average.\n\nWe expect the above approximation to be a good one as long as the\nvalence band is sufficiently narrow and deep (since eventually the\ncorrection should approach zero at the Fermi level). 
Still under this\nsimplification, the direct evaluation of $B_v$ and $C_v$ would require\nconstraining the occupations of fairly delocalized states, which is\nfeasible but cumbersome.\nAs an alternative route, we suggest a simpler approach based on the\nworking hypothesis that the environment contribution to the screening\nof the positive charge $q_i$ in Eq.~(\\ref{eq:EtotQ}) does not depend\nstrongly on the shape of the charge distribution.\nIn practice, we take the neutral isolated atom as a reference\nconfiguration in Eq.~(\\ref{eq:EtotQ}), and evaluate the coefficients\n$B_i^a$ and $C_i^a$ for this system. The two quantities\n${\\Delta}B=B_i-B_i^a$ and ${\\Delta}C=C_i-C_i^a$ can be easily computed\nfor core levels.\nThese bulk-atom corrections are reported in Table~\\ref{tab:deltaBC} for\nCu and Zn, which demonstrates that they are almost independent of the\ncore level. This supports our working hypothesis, and enables us to\nextrapolate to the valence shell. Accordingly, $B_v$ and $C_v$ are given\nby:\n\\begin{eqnarray}\n\\label{eq:B=Ba+DeltaB}\nB_v=B_v^a+{\\Delta}B,\\\\\n\\label{eq:C=Ca+DeltaC}\nC_v=C_v^a+{\\Delta}C.\n\\end{eqnarray}\n\n\nWe remark that by choosing the neutral atom as the reference system\nsome degree of arbitrariness is introduced. In principle, one could\nevaluate the atomic coefficients in the configuration\nclosest to that of the atom in the solid, depending on its chemical\nenvironment. 
However, such arbitrariness has limited effect on the\nfinal value of $B_v$ (similar discussion applies for $C_v$), owing to\ncancellations between $B_c^a$ and $B_v^a$ in\nEq.~(\\ref{eq:B=Ba+DeltaB}), as will be demonstrated in the following.\n\n\\begin{table}\n\\begin{tabular}{|c|c|rrrrr|r|}\n\\hline\n\\hline\n &\n & \\multicolumn{1}{c}{$1s$}\n & \\multicolumn{1}{c}{$2s$}\n & \\multicolumn{1}{c}{$2p$}\n & \\multicolumn{1}{c}{$3s$}\n & \\multicolumn{1}{c|}{$3p$}\n & \\multicolumn{1}{c|}{average} \\\\\n\\hline\nCu\n & $\\Delta{}B$\n & $-4.77$\n & $-4.92$\n & $-4.90$\n & $-4.81$\n & $-4.76$\n & $-4.85\\pm0.07$ \\\\\n & $\\Delta{}C$\n & $-0.88$\n & $-0.80$\n & $-0.82$\n & $-0.73$\n & $-0.72$\n & $-0.81\\pm0.07$ \\\\\n\\hline\nZn\n & $\\Delta{}B$\n & $-4.21$\n & $-4.26$\n & $-4.27$\n & $-4.19$\n & $-4.17$\n & $-4.23\\pm0.04$ \\\\\n & $\\Delta{}C$\n & $-0.49$\n & $-0.33$\n & $-0.26$\n & $-0.32$\n & $-0.35$\n & $-0.35\\pm0.09$ \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:deltaBC}\nDifferences amongst the values of $B$ and $C$ in the bulk and the atom,\n$\\Delta{}B=B-B^a$ and $\\Delta{}C=C-C^a$, for core levels of Cu and Zn.\nThe last column reports the average and standard deviation across the\ncore levels.\nAll values in eV.\n}\n\\end{table}\n\nRegarding the interaction energy $U$ for the two holes in a valence\nlevel, defined by $[E(N-2)-E(N)]-2[E(N-1)-E(N)]$, let us consider the\ncase of spherically symmetric holes (non-spherical contributions,\ngiving rise to multiplet splitting, will then be added). 
This\nspherical interaction, denoted by $U_\\text{sph}$, can be determined\nvia Eq.~(\\ref{eq:EtotQ}), resulting in\n\\begin{equation}\nU_\\text{sph} = 2B_v + 6C_v.\n\\end{equation}\nThis amounts to the second derivative of the DFT energy as a function\nof the band occupation,\n$U(q)= \\partial^2 E(N-q) \/ \\partial q^2 $,\nas originally suggested by Gunnarsson and\ncoworkers,\\cite{Gunnarsson:1989:DFTcalcAnderson} here evaluated for\nthe $(N-1)$-electron system rather than for the neutral one.\nIn contrast, the interaction energy commonly used in LDA+U\ncalculations of the ground state is defined as $E(N+1)+E(N-1)-2E(N)$\nand hence evaluated by the second derivative at $q=0$, resulting in\n$2B_v$ only.\nNotice here that the role of the cubic term in Eq.~(\\ref{eq:EtotQ}) is\nto introduce a dependence of the interaction energy on the particle\nnumber, following that of the screening properties of the system.\nFinally, non-spherical contributions, which give rise to multiplet\nsplitting, are added to $U_\\text{sph}$. 
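Collecting these definitions numerically (our illustration: only $\Delta B$ and $\Delta C$ below are the Cu averages from Table~\ref{tab:deltaBC}; the atomic valence coefficients are placeholder values, since the real ones come from the atomic DFT calculation):

```python
# U_sph = 2 B_v + 6 C_v, with B_v = B_v^a + dB and C_v = C_v^a + dC.
Bva, Cva = 12.0, 0.5        # hypothetical atomic valence coefficients [eV]
dB, dC = -4.85, -0.81       # Cu bulk-atom corrections, Table I averages [eV]
Bv, Cv = Bva + dB, Cva + dC

U_sph = 2*Bv + 6*Cv
U_ldau = 2*Bv               # ground-state LDA+U definition: 2 B_v only
R_e = -2*dB - 6*dC          # extra-atomic relaxation energy (identified below)
print(U_sph, U_ldau, R_e)   # R_e = 14.56 eV from the Cu corrections alone
```

The gap between `U_sph` and `U_ldau` is the $6C_v$ term, i.e. the $q$-dependence of the screening introduced by the cubic expansion.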
It has been\ndemonstrated\\cite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I} for a number of\nmaterials, including Cu and Zn, that these terms are well reproduced\nby a sum of atomic Slater integrals,\\cite{Slater:1960:atoms}\n$a_2F^2+a_4F^4$, where the coefficients $a_2$ and $a_4$ depend on the\nmultiplet configuration and\n\\begin{equation}\nF^k=\\int_0^\\infty r_1^2 dr_1 \\int_0^\\infty r_2^2 dr_2\n \\frac{r^k_<}{r^{k+1}_>}\n \\left[\\varphi^a(r_1)\\varphi^a(r_2)\\right]^2.\n\\end{equation}\nHere $\\varphi^a(r)$ is the atomic radial wave function relevant to the\nprocess under investigation (e.g., the $3d$ one for a $CM_{45}M_{45}$\nAuger transition), and $r_<$ ($r_>$) is the smaller (larger) of $r_1$\nand $r_2$.\nNotice that the spherical Slater integral $F^0$ is implicit in\n$U_\\text{sph}$, which has the meaning of a screened Coulomb\nintegral.\\cite{Anisimov:1991:F0effDFT}\n\nSummarizing, one has:\n\\begin{equation}\n\\label{eq:U}\nU=2B_v^a+6C_v^a+2{\\Delta}B+6{\\Delta}C+a_2F^2+a_4F^4.\n\\end{equation}\nIt is customary to write $U=F-R$, where $F=F^0+a_2F^2+a_4F^4$, and $R$\nis the ``relaxation energy''.\\cite{Shirley:1973:KLL_relaxation} This can\nbe further decomposed into an atomic and an extra-atomic contribution,\n$R=R_a+R_e$. From Eq.~(\\ref{eq:U}), one identifies\n$R_a=F^0-2B_v^a-6C_v^a$ and $R_e=-2{\\Delta}B-6{\\Delta}C$. Notice that by\nour approach we compute $F^0-R_a$ as a single term, so that it is not\npossible to separate the two contributions.\n\n\nIn other formulations,\\cite{Pickett:1998:LDA+U, Cococcioni:2005:LDA+U}\nthe derivative of the energy with respect to the occupation number of\na broad band is computed by shifting the band with respect to the\nFermi level. This adds a non-interacting contribution to the\ncurvature of the energy, since the level whose occupation is varied is\nitself a function of the band occupancy. 
Such a non-interacting term\nhas to be subtracted when computing $U$ by these\napproaches.\\cite{Cococcioni:2005:LDA+U} Our formulation is\nconceptually more similar to scaling the occupation of all valence\natomic levels in a uniform way, and the non-interacting term\nvanishes.\n\n\n\n\\subsection{Computational details}\n\nThe results presented in this paper have been obtained by DFT\ncalculations with the Perdew-Burke-Ernzerhof\\cite{Perdew:1996:PBE}\ngeneralized gradient approximation for the exchange and correlation\nfunctional.\nWe used an all-electron linearized augmented-plane-wave code to perform\nthe simulations with constrained core occupations. Periodically repeated\nsupercells at the experimental lattice constants were adopted to\ndescribe the solids.\nOne atom was ionized in a unit cell containing four and eight atoms\nfor Cu and Zn, respectively. In both cases the ionized atom has no\nionized nearest neighbours.\nCell neutrality is preserved by increasing the number of valence\nelectrons, simulating the screening of the core hole by the solid.\nThe spin-orbit splitting in core states as well as in the final state\nwith two holes was taken into account by adopting DFT energy shifts\nfor free atoms,~\\cite{NIST::DFTdata} and is here assumed independent\nof the fractional charge $q$ (we verified that the latter approximation\naffects our final results by no more than $0.2$~eV).\nAs for the coefficients $A$, $B$, and $C$ in Eq.~(\\ref{eq:EtotQ}), we\nfound the values to be numerically more stable, with respect to convergence\nparameters, when performing a second-order expansion of the eigenvalues\nrather than a third-order expansion of the total energy. 
Therefore,\nwe made use of Janak's theorem and Eq.~(\\ref{eq:JanakQ}), with\neigenvalues relative to the Fermi level in the solid (hence, resulting\nXPS and Auger energies are given with respect to the same reference).\nFulfillment of Janak's theorem and coincidence of results of\nEq.~(\\ref{eq:EtotQ}) and (\\ref{eq:JanakQ}) were numerically verified\nto high accuracy in a few selected cases. The values of $q$ ranged\nfrom $0$ to $0.05$ at intervals of $0.01$.\nComparison with denser and more extended meshes for the free atom case\nshowed that results are not dependent on the chosen mesh. Matrix\nelements and Slater integrals $F^2$ and $F^4$ are taken from\nRef.~\\onlinecite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}, and core hole\nlifetimes from Ref.~\\onlinecite{Yin:1973:widthL2L3}.\n\n\\section{\\label{sec:results}Results}\n\nIn this Section we report our results for the $L_{23}M_{45}M_{45}$\nAuger lineshape of Cu and Zn. The core and the valence indices, $c$\nand $v$ in the previous Section, are specialized to the $2p$ and $3d$\nlevel of such elements, respectively.\n\nAs an example of our procedure to extract the parameters $A$, $B$, and\n$C$ [see Eq.~(\\ref{eq:EtotQ})], we report the case for the $2p$ level\nof Cu metal in Fig.~\\ref{fig:sampleFit} (the following considerations\nare also valid in the other cases).\nWe remove the fractional number of electrons $q$ from the $2p$ level\nof a Cu atom, and plot its (minus) KS $2p$ eigenvalue in\nFig.~\\ref{fig:sampleFit}a.\nSuch a curve is fitted by the expression in Eq.~(\\ref{eq:JanakQ}).\nIt is apparent from Fig.~\\ref{fig:sampleFit}a that a linear fit\nalready reproduces the KS eigenvalue in this range of $q$ to high\naccuracy. However, since results are to be extracted up to $q=1$ or\n$2$, the quadratic term in the expansion is also of interest. This is\nshown in Fig.~\\ref{fig:sampleFit}b, where the linear contribution\n($A+2Bq$) has been subtracted. 
The parabola accurately fits the\nnumerical results, with residuals of the order of $10-50$~$\mu$eV.\n\begin{figure}\n\includegraphics[width=8.5cm]{fig1_sampleFit.eps}\n\caption{\label{fig:sampleFit}Example of fitting the Kohn-Sham\neigenvalue to extract the coefficients $A$, $B$, and $C$. Results are\nshown for the $2p$ level of metal Cu. Panel (a) plots the KS\neigenvalue (relative to the Fermi energy) with reversed sign (circles)\nand the fitted parabola from Eq.~(\ref{eq:JanakQ}) (line), as a function\nof the number of electrons removed from the $2p$ level, $q$. Panel (b)\nreports the same quantities after subtracting the linear term\n$A+2Bq$.}\n\end{figure}\n\nTable~\ref{tab:ABC_CuZn} collects our results for the coefficients\n$A$, $B$, and $C$, needed for the determination of the\n$L_{23}M_{45}M_{45}$ lineshape of Cu and Zn.\nThe values of $B$ and $C$ for the $L_2$ and $L_3$ cases are identical,\nfollowing the assumption that spin-orbit splitting is independent of\nthe fractional charge.\nThe coefficients $B_{M_{45}}$ and $C_{M_{45}}$ in the solid have been\nobtained by comparing results for core levels in the bulk and in the\nfree neutral atom according to\nEqs.~(\ref{eq:B=Ba+DeltaB}-\ref{eq:C=Ca+DeltaC}), with the values of\n$\Delta{}B$ and $\Delta{}C$ averaged across the core levels as\nreported in Table~\ref{tab:deltaBC}.\nTheir negative sign indicates that the interaction between the two\nholes is more effectively screened in the solid.\n\n\n\n\begin{table}\n\begin{tabular}{|c|c|rrr|rrr|rr|}\n\hline\n\hline\n\multicolumn{2}{|c|}{Level} & \multicolumn{1}{c}{$A^a$} & \multicolumn{1}{c}{$B^a$}\n& \multicolumn{1}{c|}{$C^a$} &\n \multicolumn{1}{c}{$A$} & \multicolumn{1}{c}{$B$}\n& \multicolumn{1}{c|}{$C$} &\n \multicolumn{1}{c}{$E^\text{XPS}$} & \multicolumn{1}{c|}{Exp.}\n\\\n\hline\n &$L_2$ & $ 930.17$ & $27.74$ & $1.25$ & $ 928.40$ & $22.84$ & $0.43$ & $ 951.67$ & $ 952.0$ \\\nCu &$L_3$ & $ 909.81$ & $27.74$ & $1.25$ & $ 
908.04$ & $22.84$ & $0.43$ & $ 931.31$ & $ 932.2$ \\\n &$M_{45}$ & $ 5.04$ & $ 5.72$ & $0.91$ & $ 2.86$ & $ 0.87$ & $0.10$ & $ 3.84$ & $ 3.1$ \\\n\hline\n &$L_2$ & $1019.40$ & $30.44$ & $1.08$ & $1016.98$ & $26.17$ & $0.82$ & $1043.97$ & $1044.0$ \\\nZn &$L_3$ & $ 995.69$ & $30.44$ & $1.08$ & $ 993.27$ & $26.17$ & $0.82$ & $1020.26$ & $1020.9$ \\\n &$M_{45}$ & $ 10.14$ & $ 7.06$ & $0.77$ & $ 7.53$ & $ 2.82$ & $0.42$ & $ 10.78$ & $ 9.9$ \\\n\hline\n\hline\n\end{tabular}\n\caption{\label{tab:ABC_CuZn}\nCoefficients for the expansion of the total energy as a function of\nthe number of electrons, $E(N-q)$, for atomic ($A^a$, $B^a$, $C^a$)\nand bulk ($A$, $B$, $C$) Cu and Zn.\nAs for the $M_{45}$ values: by $A_{M_{45}}$ we indicate (minus) the\nweighted average of the $3d$ KS band; $B_{M_{45}}$ and $C_{M_{45}}$\nare obtained according to\nEqs.~(\ref{eq:B=Ba+DeltaB}-\ref{eq:C=Ca+DeltaC}). Theoretical XPS\nenergies are given by Eq.~(\ref{eq:XPSABC}); experimental data are\ntaken from Ref.~\onlinecite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}.\nValues in eV.\n}\n\end{table}\n\n\nLet us now consider the dependence of our results on the particular\nchoice of the reference atomic configuration.\nFor comparison, the Cu$^+$ and Zn$^+$ ions (with one electron removed\nfrom the $4s$ shell) have been used as a starting point for the\nevaluation of the atomic coefficients instead of the neutral one.\nWe find similar modifications, canceling each other in\nEq.~(\ref{eq:B=Ba+DeltaB}), for core and valence $B_i^a$ atomic\ncoefficients (larger by about $1.5$~eV in Cu and $1.3$~eV in Zn). 
The\nsame is found for the $C_i^a$ coefficients (lower by $0.2$~eV in Cu\nand $0.1$~eV in Zn).\nAs a consequence, the values for $B_{M_{45}}$ differ by less than\n$0.2$~eV, and those for $C_{M_{45}}$ are identical within $0.01$~eV,\nfrom the data reported in Table~\ref{tab:ABC_CuZn}.\nHence, as anticipated in the previous section, the choice of the\nreference atomic configuration does not significantly affect the\nevaluated XPS and Auger energies.\n\nRecall now that $A+B+C$ is our estimate for the XPS excitation\nenergies [see Eq.~(\ref{eq:XPSABC})], which are reported in\nTable~\ref{tab:ABC_CuZn}, and compared with experimental\nvalues.\cite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}\nNotice that bare KS excitation energies can be $30$~eV smaller than\nthe experimental value, but the addition of $B$ and, to a smaller\nextent, of $C$, properly accounts for the missing relaxation energy,\nthe remaining discrepancy being smaller than $1$~eV.\n\n\nWe report next our results for the $3d$ component of the 1hLDOS,\n$d(\omega)$, for Cu and Zn in Fig.~\ref{fig:1hLDOS_CuZn}. We recall\nthat this quantity is obtained by converting the KS density of states\ninto the hole picture, and by translating the result by $B_v+C_v$ to\naccount for relaxation effects [see Eq.~(\ref{eq:1hLDOS})].\nThe total $d$ 1hLDOS, $\bar{d}(\omega)$, is shown as a shaded area\ntogether with the components on the different irreducible\nrepresentations over which the $d$ matrix is diagonal. For both\nmetals, the various components differ among themselves in the detailed\nenergy dependence, but their extrema are very similar.\n\n\begin{figure}\n\includegraphics[width=8.5cm]{fig2a_1hdosCu.eps}\n\includegraphics[width=8.5cm]{fig2b_1hdosZn.eps}\n\caption{\label{fig:1hLDOS_CuZn}One-hole local density of states,\n$d(\omega)$, for Cu (top) and Zn (bottom), relative to the Fermi\nenergy and normalized to unity. 
The shaded area is the total $d$ DOS,\n$\\bar{d}$.}\n\\end{figure}\n\nAs a final ingredient, Table~\\ref{tab:U} lists the values of $U$ for\nthe five $LS$ components of the multiplet, computed by\nEq.~(\\ref{eq:U}). Notice that the inclusion of a $q$ dependence in\n$U$ (via a cubic term in the expansion of the total energy with\nrespect to a fractional charge) proves to be quite important.\nIndeed, in evaluating $U$ the $C$ coefficient is counted six times,\nhence bringing a larger contribution than in the XPS energies\npreviously discussed.\nSuch inclusion gives an estimate of $U=U(q=1)$ which is $0.60$~eV and\n$2.52$~eV larger for Cu and Zn, respectively, than the corresponding\nvalues obtained as $U(q=0)$.\n\n\\begin{table}\n\\begin{tabular}{|c|r|rrrrr|}\n\\hline\\hline\n & \\multicolumn{1}{c|}{$U_\\text{sph}$}\n & \\multicolumn{1}{c}{$^1S$}\n & \\multicolumn{1}{c}{$^1G$}\n & \\multicolumn{1}{c}{$^3P$}\n & \\multicolumn{1}{c}{$^1D$}\n & \\multicolumn{1}{c|}{$^3F$} \\\\\n\\hline\nCu & 2.38\n&7.76\n&3.34\n&2.67\n&2.25\n&0.33\n\\\\\n\\hline\nZn & 8.16\n&14.31\n&9.26\n&8.49\n&8.01\n&5.82\n\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:U} Values of $U$ resulting from the application of\nEq.~(\\ref{eq:U}), in eV. Slater's integrals from\nRef.~\\onlinecite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}.}\n\\end{table}\n\n\nWe then compute the $L_{23}M_{45}M_{45}$ Auger spectrum following\nEq.~(\\ref{eq:PCiniLSJMJ}).\nThe outcome has been convoluted with a core hole lifetime of $0.49$\nand $0.27$~eV ($0.42$ and $0.33$~eV)\\cite{Yin:1973:widthL2L3} for the\n$L_2$ and $L_3$ lines of Cu (Zn), respectively, and results in a\nmultiplet of generally narrow atomic-like peaks, shown in\nFig.~\\ref{fig:LVV}.\n\n\nTo analyze these results, let us focus on the principal peak ($^1G$) in\nthe spectrum, which can be associated with the absolute position of the\nmultiplet. 
(The internal structure of the multiplet in our description\nonly depends on the values of $F^2$ and $F^4$ which, as previously\nspecified, were taken from the literature.)\nThe experimental energy of the (most intense) $^1G$\ntransition\cite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I} is marked by a\nvertical line in Fig.~\ref{fig:LVV}. The agreement of our results is\nrather good considering the absence of adjustable parameters in the\ntheory: focusing on the $L_3VV$ line, the $^1G$ peak position for Cu\n($918.0$~eV from experiments) is overestimated by $1.6$~eV, while the\none for Zn ($991.5$~eV) is underestimated by $1.9$~eV.\n\n\begin{figure}\n\includegraphics[width=8.5cm]{fig3_spectrum.eps}\n\caption{\label{fig:LVV}Simulated $L_{23}M_{45}M_{45}$ spectrum for Cu\n(top) and Zn (bottom) metals. The vertical lines mark the position of\nthe principal ($^1G$) peaks from\nexperiments.\cite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}}\n\end{figure}\n\n\section{\label{sec:discussion}Discussion}\n\nIt is interesting to compare these results with those obtained by an\nexpression commonly adopted for Auger energies, i.e., $\omega_{LS}\n\approx \epsilon_c - 2\epsilon_v - U_{LS}$. This is an excellent\napproximation when $U$ is larger than $W$: for example in Zn, where\n$W\approx1.5$~eV and $U_{^1G}=9.26$~eV, its application to the\ncomputed parameters yields a value which is only $0.10$~eV larger than\nthe $^1G$ peak position derived from Eq.~(\ref{eq:PCiniLSJMJ}).\nHowever, when $U$ is of the order of $W$, significant deviations can be\nobserved: e.g., for Cu ($W\approx3.5$~eV and $U_{^1G}=3.34$~eV) the\n$^1G$ position is overestimated by $0.66$~eV. For smaller values of\n$U$, the quasi-atomic peak is lost in a broad band-like structure.\nThis is the case for the $^3F$ component of Cu (the rightmost shoulder\nin the spectrum) for which we obtain $U_{^3F}=0.33$~eV. 
However, this\nis an artifact of our underestimate of $U_\text{sph}$ in Cu:\nexperimentally, the $^3F$ peak is resolved as well.\n\nBesides these observations, the expression $\omega_{LS} \approx\n\epsilon_c - 2\epsilon_v - U_{LS}$ is accurate enough for the $^1G$\npeak to discuss the discrepancy of our results with respect to the\nexperimental ones.\nLet us focus on the $L_3VV$ part of the spectrum, and decompose the\nAuger kinetic energy $\omega$ into its contributions (see\nTable~\ref{tab:decomposeEkin}).\nDespite the fact that the overall agreement is similar in magnitude for\nCu and Zn, it is important to remark that this finding has different\norigins.\nIn both metals, we slightly underestimate the core photoemission\nenergy and overestimate the valence photoemission energy by a similar\namount. Both effects contribute to underestimating the kinetic energy.\nIn Zn, where our value of $U$ is excellent, the error in $\omega$ stems\nfrom the errors in the photoemission energies.\nIn Cu, instead, $U$ is seriously underestimated. This overcompensates\nthe error in the photoemission energies, resulting in a fortuitously\nsimilar overall accuracy.\n\n\begin{table}\n\begin{tabular}{|c|c|rrr|r|}\n\hline\n\hline\n\n &\n & \multicolumn{1}{c}{$\epsilon_c$}\n & \multicolumn{1}{c}{$-2\epsilon_v$}\n & \multicolumn{1}{c|}{$-U_{^1G}$}\n & \multicolumn{1}{c|}{$\omega_{^1G}$} \\\n\hline\n & Theory & $931.31$ & $-7.67$ & $-3.34$ & $920.29$ \\\nCu & Exp. & $932.2$ & $-6.2$ & $-8.0$ & $918.0$ \\\n & Diff. & $-0.9$ & $-1.5$ & $4.7$ & $2.3$ \\\n\hline\n & Theory & $1020.26$ & $-21.55$ & $-9.26$ & $989.45$ \\\nZn & Exp. & $1020.9$ & $-19.8$ & $-9.5$ & $991.5$ \\\n & Diff. & $-0.6$ & $-1.8$ & $0.2$ & $-2.1$ \\\n\hline\n\hline\n\end{tabular}\n\caption{\label{tab:decomposeEkin}\nDecomposition of the $^1G$ $L_3M_{45}M_{45}$ Auger kinetic energy into\nits contributions, according to the simple approximation\n$\omega=\epsilon_c-2\epsilon_v-U$. 
Theoretical values of\n$\epsilon_c$, $\epsilon_v$, and $U$ from Tables \ref{tab:ABC_CuZn} and\n\ref{tab:U}; experimental data from\nRef.~\onlinecite{Antonides:1977:LMM_Cu_Zn_Ga_Ge_I}.\nValues in eV.\n}\n\end{table}\n\nWe now examine the relative weight of two different ingredients of our\nmethod. First, the role of the spin-orbit interaction in the final\nstate, which will be analyzed by comparing with results where such a\nterm is neglected; second, the resolution of the 2hLDOS in its angular\ncomponents. In this respect, we notice that the matrix expression for\nthe spectrum given in Eq.~(\ref{eq:PCiniLSJMJ}) can be significantly\nsimplified under the assumption that the 2hLDOS is spherically\nsymmetric and the spin-orbit contribution can be neglected. In this\ncase, we can just take the spherically averaged, i.e., the total $d$\n1hLDOS, and compute its self-convolution,\n$\bar{D}^{(0)}\equiv\bar{d}*\bar{d}$.\nAn averaged Green's function, $\bar{G}^{(0)}$, is then defined as the\nHilbert transform of $\bar{D}^{(0)}$. 
By replacing\n$G^{(0)}_{LSJM_J,L'S'J'M_{J'}'}$ in Eq.~(\\ref{eq:PCiniLSJMJ}) with the\ndiagonal matrix $\\delta_{LSJM_J,L'S'J'M_{J'}'}\\bar{G}^{(0)}$, we\nobtain the simple scalar equation\n\\begin{equation}\n\\label{eq:PCini0}\nS(\\omega)=-\\frac{1}{\\pi}\\sum_{LSJ} (2J+1)|A_{LS}|^2 \\text{Im}\n\\left[\n\\frac{\n \\bar{G}^{(0)} ( \\epsilon_c - \\omega )\n}{\n 1 - U_{LS} \\bar{G}^{(0)}( \\epsilon_c - \\omega )\n}\n\\right],\n\\end{equation}\nwhere the dependence on $LS$ quantum numbers is only via the matrix\nelements and the interaction matrix $U$, and each $LS$ component of\nthe spectrum is decoupled from the others.\n\nThe Auger spectra simulated neglecting the spin-orbit interaction and\ncalculated with the simplified expression of Eq.~(\\ref{eq:PCini0}) are\nplotted in Fig.~\\ref{fig:SOvsALL} as dashed and dotted line,\nrespectively, to be compared with the result of the full calculation\n[Eq.~(\\ref{eq:PCiniLSJMJ})], solid line.\nFor simplicity, we limit the discussion to the $L_3VV$ line of Zn. The\nresemblance of the three results is remarkable. Indeed, in Cu and Zn\nthe spin-orbit splitting for $3d$ levels is relatively small, $0.27$\nand $0.36$~eV, respectively.\\cite{NIST::DFTdata} Furthermore, despite\nthe differences which characterize the angular components of the\n1hLDOS (see Fig.~\\ref{fig:1hLDOS_CuZn}), the convoluted 2hLDOS are\nonly mildly different from $\\bar{D}^{(0)}$, as reported in\nFig.~\\ref{fig:2hLDOS_CuZn}.\nNow, in systems with a large $U\/W$ ratio, fine details of the 2hLDOS\nare not relevant for the position of quasi-atomic peaks, which only\ndepend on the weighted averages $E^{(0)}_{LS}$ as in\nEq.~(\\ref{eq:PDeltaLSJ}). In our case, the values of $E^{(0)}_{LS}$\nlie within $0.1$~eV from those corresponding to the averaged\n2hLDOS. 
This explains the practically equivalent results obtained by\nEq.~(\ref{eq:PCiniLSJMJ}) and Eq.~(\ref{eq:PCini0}).\n\n\n\begin{figure}\n\includegraphics[width=8.5cm]{fig4_comparison.eps}\n\caption{\label{fig:SOvsALL}Simulated spectrum for the\n$L_3M_{45}M_{45}$ line of Zn. Solid line: full treatment of\nEq.~(\ref{eq:PCiniLSJMJ}), as presented in this paper. Dashed line:\nneglecting the spin-orbit interaction in the two-hole final\nstate. Dotted line: adopting the spherically averaged 2hLDOS and the\nscalar formulation, Eq.~(\ref{eq:PCini0}). The origin of the vertical\naxis is shifted for improved clarity.}\n\end{figure}\n\n\begin{figure}\n\includegraphics[width=8.5cm]{fig5_2pdos.eps}\n\caption{\label{fig:2hLDOS_CuZn}Non-interacting two-hole density of\nstates, $D^{(0)}$, for Cu (left) and Zn (right). The solid curve\nrepresents $\bar{D}^{(0)}\equiv\bar{d}*\bar{d}$, the self-convolution\nof the total $d$ 1hLDOS; the shaded area indicates the largest\ndeviations from this result found amongst the angular-resolved\n2hLDOSs.}\n\end{figure}\n\nThis analysis shows that, for a wide class of systems with strong\nhole-hole interaction, weak spin-orbit interaction, and approximately\nspherical symmetry, the simple formulation presented in\nEq.~(\ref{eq:PCini0}) is practically as accurate as the expression in\nEq.~(\ref{eq:PCiniLSJMJ}). 
One should instead adopt the full treatment\nfor, e.g., heavier elements, or systems with low dimensionality.\nThis remark is independent of the methodology used to determine the\nparameters entering the model, either fully {\em ab initio} as in the\npresent approach, or by phenomenological arguments.\n\nOur method provides an agreement with experimental photoemission\nenergies of the order of $1$~eV, and of $2$~eV for Auger energies.\nWe consider this to be a rather good result, as a starting point,\nconsidering the absence of adjustable parameters in the model, which is\nthe new feature of our approach for CVV transitions in correlated\nsystems.\nWe trust that our simple method could already yield qualitative\ninformation on the variations of the spectrum to be expected following\nmodifications in the sample, e.g., when the emitting atom is\nlocated in different environments.\nOf course, a much better agreement would be obtained by inserting\nphenomenological parameters, but at the cost of losing predictive\npower.\n\nThe internal structure of the lineshape, i.e., the multiplet\nsplitting, is given very precisely. However, the first-principles\ntreatment of the latter is not a new aspect of our approach, which is\nindeed based in this respect on atomic results available in the\nliterature for decades.\nLet us instead focus again on the estimated position of the multiplet,\nwhich crucially depends on the parameters evaluated by our {\em ab\ninitio} method.\nEven though the agreement with experiment is about as good for\nZn as for Cu, the results for Zn are actually much better.\nIn Zn, the $3d$ band is deep and narrow, and the electronic states bear\nmostly atomic character. Their hybridization with the states closer to\nthe Fermi level, which mainly contribute to screening, is small,\nsomewhat analogously to the case of core states. 
Consequently, the\napproximations of neglecting the energy dependence of $B_v$ and $C_v$,\nand of transferring the values of $\Delta{}B$ and $\Delta{}C$ from the\ncore states to the valence ones, produce very good results.\nIn Cu, instead, the $3d$ band is higher and broader, and hybridizes\nsignificantly with the $s$-like wavefunctions. Our approximations\nturn out to be less adequate: the resulting $U$ is about half the one\nderived from experiments.\n\n\nPart of this discrepancy might also have a deeper physical origin,\nsince the Hamiltonian adopted, Eq.~(\ref{eq:HHubbard}), does not allow\nfor the interaction between two holes located at different atomic\nsites. The CS model has been extended to consider the role of\ninteratomic (``off-site'') correlation effects, which mainly produce an\nenergy shift of the Auger line to lower\nenergies.\cite{Verdozzi:1995:OffSiteRev} Studies based on\nphenomenological parameters suggest that such an energy shift could be\nof about $2.5$~eV in Cu\cite{Ugenti:2008:spinAPECS} (smaller values\nare expected in Zn where holes are more localized and screening is\nmore effective). For the sake of simplicity, the off-site term has not\nbeen considered here and is left for future investigations. The\nparameters entering this term could be determined by {\em ab initio}\nmethods in analogy to the procedure shown here for evaluating $U$. It\nis however important to notice that adding the off-site term would not\nfix all the discrepancies observed in Cu, where also the lineshape, in\naddition to the peak position, is not satisfactory owing to the small\nvalues of the on-site interaction $U$ (e.g., the non-resolved $^3F$\npeak).\n\nEnhancing the accuracy of the values of $U$ therefore seems the most\nimportant improvement for the method presented here, especially for\nsystems with broad valence bands. 
As a possibility, it would be\ninteresting to use approaches which are capable of computing the total\nenergy in the presence of holes in the valence states. The methodology\npresented in Ref.~\onlinecite{Cococcioni:2005:LDA+U}, in which the\nvalence occupation is changed by means of Lagrange multipliers\nassociated with the KS eigenvalues, could be particularly effective.\nOne should note, however, that some arbitrariness is nevertheless introduced:\nthe value of $U$ does depend on the chosen form of the valence\nwavefunctions. Such an arbitrariness is compensated in LDA+U\ncalculations performed self-consistently.\cite{Cococcioni:2005:LDA+U}\nFurthermore, to apply this method to systems with closed bands lying\nwell below the Fermi energy, large shifts of the KS eigenvalues would\nbe needed to alter the occupation of the bands by an appreciable\namount.\n\nAnother possible improvement concerns the photoemission energies.\nCalculations by the $GW$ approach\cite{Aryasetiawan:1998:GW} of the\n1hLDOS could be used to account for relaxation energies, rather than\nadopting Eq.~(\ref{eq:XPSABC}). 
Results available in the literature\n(e.g., for Cu~\cite{Marini:2002:GWCu}) are very promising in that\nsense.\nIt is interesting to notice that the factor $B+C$ plays the role of a\nself-energy expectation value, and that the use of a single value of\n$B+C$ to shift the band rigidly is formally analogous to the ``scissor\noperator'' often introduced to avoid expensive self-energy calculations.\nThe accuracy of such rigid shifts for valence-band photoemission in Cu\nis discussed in Ref.~\onlinecite{Marini:2002:GWCu}.\n\nSystems with larger band width or smaller hole-hole interaction would\nrequire extending the approach to treat the dependence of the matrix\nelements on energy together with the interaction in the final state.\nReleasing the assumption that the matrix elements equal the atomic ones,\nas in the current treatment, or that particles are non-interacting in\nformulations accounting for such an energy dependence (like, e.g., the\none in Ref.~\onlinecite{Bonini:2003:MDS}), would allow switching\ncontinuously between systems with band-like and atomic-like spectra.\nThis possibility is currently under investigation.\n\nFinally, let us recall the basic assumption considered here that the\nvalence shell is closed, which is crucial to the CS model in its\noriginal form. Efforts have been devoted towards releasing this\nassumption, resulting in a formulation in terms of more complicated three-hole\nGreen's functions,\cite{Marini:1999:3bodyCVVAuger} for which, to our\nknowledge, no {\em ab initio} treatment is currently available.\n\n\n\section{\label{sec:conclusions}Conclusions}\n\nWe have presented an {\em ab initio} method for computing CVV Auger\nspectra for systems with filled valence bands, based on the\nCini-Sawatzky model.\nOnly standard DFT calculations are required, resulting in a very\nsimple method which allows working out the spectrum with no adjustable\nparameters. 
The accuracy of the absolute position of the Auger\nfeatures is estimated to be a few eV, as we have demonstrated by the\nanalysis of Cu and Zn metals.\nWe have shown that in these systems further simplifications, like\nneglecting the spin-orbit interaction for the two valence holes or the\nnon-sphericity of the emitting atom, give results practically\nequivalent to the full treatment.\nAttention should be paid to the problematic parameter $U$. We\nobtained such a term with good accuracy for the more localized,\natomic-like valence bands in Zn, while it is underestimated in\nCu.\nIts occupation number dependence, included via a cubic term in the\nexpansion of the total energy, has been considered, and shown to play\nan important role.\n\nThis step towards a first-principles description of CVV spectroscopy\nin closed-shell correlated systems enables identifying improvements\nwhich future investigations could focus on. In particular, one would\nbenefit from detailed calculations of the single-particle densities of\nstates (e.g., by the $GW$ method), from truly varying the valence-band\noccupation to obtain the $U$ parameter, and from including off-site\nterms in the Hamiltonian. Prospectively, it would be desirable to take\ninto account the energy dependence of the transition matrix elements.\n\n\section{Acknowledgment}\n\nThis work was supported by the MIUR of Italy (Grant\nNo. 2005021433-003) and the EU Network of Excellence NANOQUANTA (Grant\nNo. NMP4-CT-2004-500198). Computational resources were made available\nalso by CINECA through INFM grants. EP is financially supported by\nFondazione Cariplo (n. Prot. 0018524).\n\n\n\n\bibliographystyle{apsrev}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\label{sec:intro}\n\nWhen a black hole is viewed against a backdrop of light sources,\nthe observer sees a black disc in the sky which is known as the\n\emph{shadow} of the black hole. 
Points inside this black disc\ncorrespond to past-oriented light rays that go from the observer\ntowards the horizon of the black hole, while points outside this\nblack disc correspond to past-oriented light rays that are more\nor less deflected by the black hole and then meet one of the \nlight sources. For a Schwarzschild black hole, which is non-rotating\nthe shadow is circular and its boundary corresponds to light rays \nthat asymptotically spiral towards circular photon orbits\nthat fill the so-called photon sphere at 1.5 Schwarzschild radii\naround the black hole. For a Kerr black hole, which is rotating, \nthe shadow is flattened on one side and its boundary corresponds \nto light rays that spiral towards spherical photon orbits\nthat fill a 3-dimensional photon region around the black hole.\nFor the supermassive black hole that is assumed to sit at the\ncentre of our Galaxy, the predicted angular diameter of the shadow is \nabout 53 microarcseconds which is within reach of VLBI observations.\nThere is an ongoing effort to actually observe this shadow, and \nalso the one of the second-best black-hole candidate at the centre\nof M87, see {\\tt http:\/\/www.eventhorizontelescope.org}.\n\nWhen calculating the shadow one usually considers an \\emph{eternal}\nblack hole, i.e., a black hole that is static or stationary and exists\nfor all time. For a Schwarzschild black hole, there is a simple\nanalytical formula for the angular radius of the shadow which goes\nback to Synge \\cite{Synge1966}. (Synge did not use the word ``shadow''\nwhich was introduced much later. He calculated what he called the\n\\emph{escape cones} of light. The opening angle of the escape \ncone is the complement of the angular radius of the shadow.) 
For\na Kerr black hole, the shape of the shadow was \ncalculated for an observer at infinity by Bardeen \\cite{Bardeen1973}.\nMore generally, an analytical formula for the boundary curve\nof the shadow was given, for an observer anywhere\nin the domain of outer communication of a Pleba{\\'n}ski-Demia{\\'n}ski\nblack hole, by Grenzebach et al. \n\\cite{GrenzebachPerlickLaemmerzahl2014,GrenzebachPerlickLaemmerzahl2015}.\nFor the Kerr case, this formula was further evaluated by Tsupko\n\\cite{Tsupko2017}.\nThese analytical results are complemented by ambitious numerical\nstudies, performing ray tracing in black hole spacetimes with various\noptical effects taken into account. We mention in particular a \npaper by Falcke et al. \\cite{FalckeMeliaAgol2000} where the perspectives\nof actually observing black-hole shadows were numerically investigated\ntaking the presence of emission regions and scattering into account, and\na more recent article by James et al. \n\\cite{JamesTunzelmannFranklinThorne2015}\nwhich focusses on the numerical work that was done for the\nmovie \\emph{Interstellar} but also reviews earlier work.\n\nAs we have already emphasised, in all these analytical and numerical\nworks an eternal black hole is considered. Actually, we believe that\nblack holes are not eternal: They have come into existence some finite\ntime ago by gravitational collapse (and are then possibly growing by \naccretion or mergers with other black holes). This brings us to the \nquestion of how an observer who is watching the collapse would see \nthe shadow coming about in the course of time. This is the question \nwe want to investigate in this paper.\n\nThe visual appearance of a star undergoing gravitational collapse\nhas been studied in several papers, beginning with the pioneering\nwork of Ames and Thorne \\cite{AmesThorne1968}. In this work, and \nin follow-up papers e.g. by Jaffe \\cite{Jaffe1969}, Lake and Roeder\n\\cite{LakeRoeder1979} and Frolov et al. 
\\cite{FrolovKimLee2007},\nthe emphasis is on the frequency shift of light coming from the \nsurface of the collapsing star. More recent papers by Kong et al.\n\\cite{KongMalafarinaBambi2014,KongMalafarinaBambi2015} and\nby Ortiz et al. \n\\cite{OrtizSarbachZannias2015a,OrtizSarbachZannias2015b}\ninvestigated the frequency shift of light passing through a \ncollapsing transparent star, thereby contrasting the collapse\nto a black hole with the collapse to a naked singularity. In \ncontrast to all these earlier articles, here we consider\na \\emph{dark} and \\emph{non-transparent} collapsing star which\nis seen as a black disc when viewed against a backdrop of light \nsources and we ask how this black disc changes \nin the course of time. \n\nFor the collapsing star we use a particularly simple model: We\nassume that the star is spherically symmetric and that it begins\nto collapse, at some instant of time, in free fall like a ball of\ndust until it ends up in a point singularity at the centre. \nThe metric inside such a collapsing ball of dust was\nfound in a classical paper by Oppenheimer and Snyder\n\\cite{OppenheimerSnyder1939}. However, for our purpose, as \nwe assume the collapsing star to be non-transparent, we do\nnot need this interior metric. All we need to know is\nthat a point on the surface follows \na timelike geodesic in the ambient Schwarzschild spacetime. \nWe will demonstrate that in this situation the time \ndependence of the shadow can be given analytically. We do\nthis first for a static observer who is watching the collapse from a\ncertain distance, and then also for an observer who is falling\ntowards the centre and ending up in the point singularity \nafter it has formed. The latter situation is (hopefully) not\nof relevance for practical astronomical observations but\nwe believe that the calculation is quite instructive from\na conceptual point of view.\n\nThe paper is organised as follows. 
In Section \\ref{sec:PG}\nwe review some basic facts on the Schwarzschild solution \nin Painlev{\\'e}-Gullstrand coordinates. These coordinates \nare particularly well suited for our purpose because they\nare regular at the horizon, so they allow to consider worldlines\nof observers or light signals that cross the horizon without the\nneed of patching different coordinate charts together. In \nSection \\ref{sec:shbh} we rederive in Painlev{\\'e}-Gullstrand\ncoordinates the equations for the shadow of an eternal \nSchwarzschild black hole. We do this both for a static and\nfor an infalling observer. The results of this section will then\nbe used in the following two sections for calculating the shadow \nof a collapsing star. We do this first for a static observer\nin Section \\ref{sec:shcoll1} and then for an infalling observer\nin Section \\ref{sec:shcoll2}. We summarise our results in \nSection \\ref{sec:conclusions}. \n\n\\section{Schwarzschild metric in Painlev{\\'e}-Gullstrand coordinates}\\label{sec:PG}\n\nThroughout this paper, we work with the Schwarzschild metric \nin Painlev{\\'e}-Gullstrand coordinates \\cite{Painleve1921,Gullstrand1922},\n\n\\begin{equation}\\label{eq:gPG}\ng_{\\mu \\nu}dx^{\\mu} dx^{\\nu} \n= - \\left( 1 - \\dfrac{2m}{r} \\right) c^2 d T ^2 + 2 \\sqrt{\\dfrac{2m}{r}} \\, c \\, d T \\, dr\n+ dr^2 +\nr^2 \\big( d \\vartheta ^2 + \\mathrm{sin} ^2 \\vartheta \\, \nd \\varphi ^2 \\big) \\, .\n\\end{equation}\nHere\n\n\\begin{equation}\\label{eq:m}\nm=\\dfrac{GM}{c^2}\n\\end{equation}\nis the mass parameter with the dimension of a length; $M$ is \nthe mass of the central object in SI units, $G$ is Newton's \ngravitational constant and $c$ is the vacuum speed of light.\n\nThe Painlev{\\'e}-Gullstrand coordinates $(T,r,\\vartheta, \\varphi )$\nare related to the standard text-book Schwarzschild coordinates\n$(t,r,\\vartheta , \\varphi )$ by\n\n\\begin{equation}\\label{eq:Tt}\nc \\, dT= c \\, dt + \n\\sqrt{\\dfrac{2m}{r}} \\, \\dfrac{dr}{\\Big( 
1- \\dfrac{2m}{r} \\Big)}\n\\, .\n\\end{equation} \nAs a historical side remark, we mention that both Painlev{\\'e}\n\\cite{Painleve1921} and Gullstrand \\cite{Gullstrand1922} believed\nthat they had found a new solution to Einstein's vacuum field\nequation before Lema{\\^\\i}tre \\cite{Lemaitre1933} demonstrated \nthat it is just the Schwarzschild solution in other coordinates. \nWhereas in the standard Schwarzschild coordinates the metric\nhas a coordinate singularity at the horizon at $r=2m$, in the\nPainlev{\\'e}-Gullstrand coordinates the metric is regular\non the entire domain $0 < r < \\infty$.\n\nOn the domain $2m < r < \\infty$ we may consider static observers. With this \nfamily of observers we associate the orthonormal tetrad\n\n\\begin{gather}\n\\nonumber\ne_0 =\n\\dfrac{1}{c \\, \\sqrt{1 - \\dfrac{2m}{r}}}\n\\, \\dfrac{\\partial}{\\partial T} \\, ,\n\\\\\n\\nonumber\ne_1 = \\sqrt{1 - \\dfrac{2m}{r}} \\, \\dfrac{\\partial}{\\partial r}\n+\n\\dfrac{\\sqrt{\\dfrac{2m}{r}}}{c \\, \\sqrt{1 - \\dfrac{2m}{r}}}\n\\, \\dfrac{\\partial}{\\partial T} \\, ,\n\\\\\ne_2 = \\dfrac{1}{r }\n\\, \\dfrac{\\partial}{\\partial \\vartheta} \\, , \\quad\ne_3 = \\dfrac{1}{r \\, \\mathrm{sin} \\, \\vartheta}\n\\, \\dfrac{\\partial}{\\partial \\varphi} \\, .\n\\label{eq:emu}\n\\end{gather}\nFor a static observer at radius coordinate $r$ $(r>2m)$, proper time $\\tau$ is related \nto the Painlev{\\'e}-Gullstrand time coordinate $T$ by \n\\begin{equation}\\label{eq:taustatic}\n\\dfrac{dT}{d \\tau} = \\dfrac{1}{\\sqrt{1- \\dfrac{2m}{r}}}\n\\, .\n\\end{equation}\n\nIn the Painlev{\\'e}-Gullstrand coordinates, the geodesics \nin the Schwarzschild spacetime are the solutions to the\nEuler-Lagrange equations of the Lagrangian \n \n\\begin{equation}\\label{eq:LagPG}\n\\mathcal{L} \\big( x , \\dot{x} \\big) \n= \\dfrac{1}{2} \\left(\n- \\left( 1 - \\dfrac{2m}{r} \\right) c^2 \\dot{T}{} ^2 + \n2 \\sqrt{\\dfrac{2m}{r}} \\, c \\, \\dot{T} \\, \\dot{r}\n+ \\dot{r}{}^2 +\nr^2 \\, \\Big( \\mathrm{sin}^2 \\vartheta \\, \\dot{\\varphi}{} ^2 \n+ \\dot{\\vartheta}{} ^2 \\Big) \\right) \n\\, .\n\\end{equation}\nHere the overdot means derivative with respect to an affine \nparameter.\n\nThe $T$ and $\\varphi$ components of the Euler-Lagrange\nequations give us the familiar constants of motion \n$E$ and $L$ in Painlev{\\'e}-Gullstrand coordinates,\n\n\\begin{equation}\\label{eq:E}\nE = - \\dfrac{\\partial \\mathcal{L}}{\\partial \\dot{T}} \n= \\left( 1 - \\dfrac{2m}{r} \\right) c^2 \\dot{T} \n- \\sqrt{\\dfrac{2m}{r}} \\, c \\, \\dot{r} \n\\end{equation}\nand\n\n\\begin{equation}\\label{eq:L}\nL = \n\\dfrac{\\partial \\mathcal{L}}{\\partial \\dot{\\varphi}} \n= r^2 \\mathrm{sin} ^2 \\vartheta \\, 
\\dot{\\varphi} \\, .\n\\end{equation}\nFor the purpose of this paper we will need the \nradial timelike geodesics and the lightlike geodesics\nin the equatorial plane. \n\n\\subsection{Radial timelike geodesics}\\label{sec:radial}\n\nWe consider massive objects in radial free \nfall, i.e., radial geodesics ($\\dot{\\varphi}=0$ and \n$\\dot{\\vartheta} =0$) which are timelike. Then we may \nchoose the affine parameter equal to proper time $\\tau$, \n\n\\begin{equation}\\label{eq:Lagrad}\n- \\, c^2 =\n - \\left( 1 - \\dfrac{2m}{r} \\right) c^2 \\Big( \\dfrac{dT}{d \\tau} \\Big) ^2 \n+ 2 \\sqrt{\\dfrac{2m}{r}} \\, c \\, \\dfrac{dT}{d \\tau} \\, \\dfrac{dr}{d \\tau}\n+ \\Big( \\dfrac{dr}{d \\tau} \\Big) ^2 \\, .\n\\end{equation}\nIn this notation (\\ref{eq:E}) can be rewritten as\n\n\\begin{equation}\\label{eq:Erad}\n\\varepsilon := \\dfrac{E}{c^2} = \n\\left( 1 - \\dfrac{2m}{r} \\right) \\dfrac{dT}{d \\tau} \n- \\sqrt{\\dfrac{2m}{r}} \\, \\dfrac{1}{c} \\, \\dfrac{dr}{d \\tau} \n\\end{equation}\nwhereas (\\ref{eq:L}) requires $L=0$. \n\nIn the following we restrict ourselves to the case that the parametrisation by proper\ntime is future-oriented with respect to the Painlev{\\'e}-Gullstrand\ntime coordinate, $dT\/d \\tau >0$, and we consider only ingoing motion,\n$dr\/d \\tau <0$, that starts in the domain $r>2m$. Then $\\varepsilon >0$ \nand (\\ref{eq:Erad}) and (\\ref{eq:Lagrad}) imply\n\n\\begin{equation}\\label{eq:rTrad}\n\\dfrac{dr}{d \\tau} = \n- \\, c \\, \\sqrt{\\varepsilon ^2- 1+ \\dfrac{2m}{r}} \n\\, , \\quad\n\\dfrac{dT}{d \\tau} =\n\\dfrac{\n\\varepsilon - \\sqrt{\\dfrac{2m}{r}} \n\\sqrt{\\varepsilon ^2- 1 + \\dfrac{2m}{r} }\n }{\n1 - \\dfrac{2m}{r} \n}\n\\, .\n\\end{equation} \nWe distinguish three cases, see Fig.~\\ref{fig:radial}:\n\n(a) $0<\\varepsilon < 1$: Then $dr\/d \\tau =0$ at a radius coordinate \n$r_i$ given by $\\varepsilon ^2=1-2m\/r_i$, i.e., this case \ndescribes free fall from rest at $r_i$. Clearly, the \npossible values of $r_i$ are $2m < r_i < \\infty$.\n\n(b) $\\varepsilon = 1$: Then $dr\/d \\tau \\to 0$ for $r \\to \\infty$, i.e., this \ncase describes free fall from rest at infinity.\n\n(c) $\\varepsilon > 1$: Then $dr\/d \\tau$ approaches a non-zero limit for \n$r \\to \\infty$, i.e., this case describes free fall with non-zero velocity \nat infinity.\n\n\\begin{figure}[h]\n\\caption{Radial timelike geodesics with $\\varepsilon < 1$ (dotted), \n$\\varepsilon = 1$ (solid) and $\\varepsilon >1$ (dashed). 
For the four \nworldlines with $\\varepsilon \\ge 1$ we have chosen the initial conditions \nsuch that they arrive simultaneously at $r=0$. For the three worldlines \nwith $\\varepsilon <1$ we have assumed that they start from rest at the \nsame Painlev{\\'e}-Gullstrand time; then these worldlines do not arrive \nsimultaneously at $r=0$. As an alternative, one could have considered \nworldlines that start from rest at the same Schwarzschild time; they \nwould arrive simultaneously at $r=0$.\n\\label{fig:radial}}\n\\end{figure}\n\nChoosing a value of $\\varepsilon >0$ defines a family of infalling observers.\nWe associate with this family the tetrad \n\n\\begin{gather}\n\\nonumber\n\\tilde{e}{}_0 =\n\\dfrac{\n\\varepsilon -\\sqrt{\\dfrac{2m}{r}} \\sqrt{\\varepsilon ^2-1+\\dfrac{2m}{r}}\n}{ \n\\Big( 1 - \\dfrac{2m}{r} \\Big) \\, c\n}\n\\, \\dfrac{\\partial}{\\partial T} \n-\n\\sqrt{\\varepsilon ^2-1+\\dfrac{2m}{r}} \\, \\dfrac{\\partial }{\\partial r} \\, ,\n\\\\\n\\nonumber\n\\tilde{e}{}_1 = \\varepsilon \\, \\dfrac{\\partial}{\\partial r} \n+\n\\dfrac{\n\\varepsilon \\sqrt{\\dfrac{2m}{r}} -\\sqrt{\\varepsilon ^2-1+\\dfrac{2m}{r}}\n}{\n\\Big( 1- \\dfrac{2m}{r} \\Big) \\, c\n}\n\\dfrac{\\partial}{\\partial T} \\, ,\n\\\\\n\\tilde{e}{}_2 = \\dfrac{1}{r }\n\\, \\dfrac{\\partial}{\\partial \\vartheta} \\, , \\quad\n\\tilde{e}{}_3 = \\dfrac{1}{r \\, \\mathrm{sin} \\, \\vartheta}\n\\, \\dfrac{\\partial}{\\partial \\varphi} \\, .\n\\label{eq:temu}\n\\end{gather}\nFor $\\varepsilon \\ge 1$ this tetrad is well-defined and orthonormal\non the entire domain $0 < r < \\infty$. 
For $0 < \\varepsilon < 1$ the tetrad\nis restricted to the domain $0 < r < r_i = 2m\/(1- \\varepsilon ^2)$.\n\nThe relative velocity $v$ of a radially infalling observer \nwith respect to the static observer at the same event can \nbe calculated from the special relativistic formula\n\n\\begin{equation}\\label{eq:v1}\ng_{\\mu \\nu} \\, e_0^{\\mu} \\, \\tilde{e}{}_0^{\\nu} \n= \\dfrac{-1}{\\sqrt{1-\\dfrac{v^2}{c^2}}} \\, .\n\\end{equation}\nThis results in\n\n\\begin{equation}\\label{eq:vrad}\n\\dfrac{v}{c} = \n\\dfrac{1}{\\varepsilon} \\, \\sqrt{\\varepsilon ^2- 1+ \\dfrac{2m}{r} }\n\\, .\n\\end{equation}\nClearly, this formula makes sense only on the domain on which \nboth families of observers are defined. For $\\varepsilon \\ge 1$ this\nis true on the domain $2m < r < \\infty$, whereas for $0 < \\varepsilon < 1$ \nit is true on the domain $2m < r < r_i$. Note that $v \\to c$ for \n$r \\to 2m$, i.e., with respect to the static observers an infalling \nobserver approaches the speed of light at the horizon.\n\n\\subsection{Lightlike geodesics in the equatorial plane}\\label{sec:lightlike}\n\nWe now consider lightlike geodesics in the equatorial plane, $\\vartheta = \\pi\/2$ \nand $\\dot{\\vartheta} = 0$. Then (\\ref{eq:L}) reads $L = r^2 \\dot{\\varphi}$, and \nthe condition $\\mathcal{L} \\big( x , \\dot{x} \\big) = 0$ together with \n(\\ref{eq:E}) and (\\ref{eq:L}) yields\n\n\\begin{equation}\\label{eq:orbit}\n\\Big( \\dfrac{dr}{d \\varphi} \\Big) ^2 = \n\\dfrac{E^2r^4}{c^2L^2} - r^2 +2mr\n\\end{equation}\nand\n\n\\begin{equation}\\label{eq:trtime}\nc \\, \\dfrac{dT}{d r} = \n\\dfrac{\\sqrt{2mr}}{(r-2m)} \\pm \n\\dfrac{E \\, r^3}{c \\, L \\, (r-2m) \\, \\sqrt{\\dfrac{E^2r^4}{c^2L^2} - r^2 +2mr}}\n\\, .\n\\end{equation}\nAs $r>2m$, in the last expression the second term is \nbigger than the first. Therefore, in this domain the \nupper sign has to be chosen if $dT\/dr > 0$ and the \nlower sign has to be chosen if $dT\/dr <0$.\n\nBy differentiating (\\ref{eq:orbit}) with respect to $\\varphi$ \nwe find \n\n\\begin{equation}\\label{eq:d2rdphi}\n\\dfrac{d^2r}{d \\varphi ^2} =\n\\dfrac{2E^2r^3}{c^2L^2} - r +m \\, .\n\\end{equation}\nIf along a lightlike geodesic the radius coordinate goes \nthrough an extremum at value $r_m$, (\\ref{eq:orbit}) and \n(\\ref{eq:d2rdphi}) imply\n\n\\begin{equation}\\label{eq:Erm}\n0 = \n\\dfrac{E^2r_m^3}{c^2L^2} - r_m +2m \\, ,\n\\end{equation}\n\n\\begin{equation}\\label{eq:Erm2}\n\\dfrac{d^2 r}{d \\varphi ^2} \\Big| _{r=r_m} = \nr_m-3m \\, .\n\\end{equation}\nThis demonstrates that only local minima may occur in the domain $r>3m$ \nand only local maxima may occur in the domain $r<3m$. The sphere at\n$r=3m$ is filled with circular photon orbits that are unstable with respect \nto radial perturbations. 
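Several of the relations derived so far lend themselves to a numerical cross-check. The following NumPy sketch (geometrized units $c=1$; the sample point $m$, $r$, $\varepsilon$ is an arbitrary choice outside the horizon) verifies the orthonormality of the infalling tetrad (\ref{eq:temu}) and the consistency of the velocity formula (\ref{eq:vrad}) with (\ref{eq:v1}); the static observer's four-velocity is entered directly from (\ref{eq:taustatic}):

```python
import numpy as np

def pg_metric(r, th, m=1.0):
    """Painleve-Gullstrand metric (eq:gPG) with c = 1, coordinates (T, r, theta, phi)."""
    g = np.zeros((4, 4))
    h = 2 * m / r
    g[0, 0] = -(1 - h)
    g[0, 1] = g[1, 0] = np.sqrt(h)
    g[1, 1] = 1.0
    g[2, 2] = r**2
    g[3, 3] = (r * np.sin(th))**2
    return g

def infalling_tetrad(r, th, eps, m=1.0):
    """Rows are the components of the tetrad (eq:temu) of radially infalling observers."""
    h = 2 * m / r
    s = np.sqrt(eps**2 - 1 + h)          # = |dr/dtau| / c
    e0 = np.array([(eps - np.sqrt(h) * s) / (1 - h), -s, 0, 0])
    e1 = np.array([(eps * np.sqrt(h) - s) / (1 - h), eps, 0, 0])
    e2 = np.array([0, 0, 1 / r, 0])
    e3 = np.array([0, 0, 0, 1 / (r * np.sin(th))])
    return np.stack([e0, e1, e2, e3])

m, r, th, eps = 1.0, 5.0, np.pi / 2, 1.2   # arbitrary test point outside the horizon
g = pg_metric(r, th, m)
tet = infalling_tetrad(r, th, eps, m)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
assert np.allclose(tet @ g @ tet.T, eta)   # orthonormality of eq:temu

# static observer's four-velocity (from eq:taustatic) and the gamma factor of eq:v1
e0_static = np.array([1 / np.sqrt(1 - 2 * m / r), 0, 0, 0])
gamma = -e0_static @ g @ tet[0]
v = np.sqrt(eps**2 - 1 + 2 * m / r) / eps  # eq:vrad
assert np.isclose(gamma, 1 / np.sqrt(1 - v**2))
print("eq:temu orthonormal; eq:vrad consistent with eq:v1")
```

The same check with $\varepsilon^2 = 1 - 2m/r$ reduces the infalling tetrad to the static one, since at the turning point the relative velocity vanishes.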
These well-known facts will be crucial for the \nfollowing analysis.\n\nFor a lightlike geodesic with an extremum of the radius coordinate at $r_m$,\n(\\ref{eq:Erm}) may be used for expressing $E^2\/L^2$ in (\\ref{eq:orbit}) and \n(\\ref{eq:trtime}) in terms of $r_m$. This results in\n\n\\begin{equation}\\label{eq:orbit2}\n\\dfrac{dr}{d \\varphi} = \\pm \\sqrt{\\dfrac{(r_m-2m)r^4}{r_m^3} - r^2 +2mr}\n\\end{equation}\nand\n\n\\begin{equation}\\label{eq:trtime2}\nc \\, \\dfrac{dT}{d r} = \n\\dfrac{\\sqrt{2mr}}{(r-2m)} \\pm \n\\dfrac{\\sqrt{(r_m-2m)r^5}}{(r-2m) \\sqrt{(r_m-2m)r^3-(r-2m)r_m^3}}\n\\end{equation}\n\n\\vspace{0.5cm}\n\n\\section{The shadow of an eternal Schwarzschild black hole}\\label{sec:shbh}\nIn this section we rederive the formulas for the angular radius\nof the shadow of an eternal Schwarzschild black hole, both for\na static and for an infalling observer, using Painlev{\\'e}-Gullstrand\ncoordinates. The results of this section will then be used for calculating \nthe shadow of a collapsing star in the following sections. \n\nWe consider a lightlike geodesic $\\big( T(s),r(s),\\varphi (s) \\big)$ \nin the equatorial plane, where $s$ is an affine parameter. As before,\nwe denote the derivative with respect to $s$ by an overdot. We may\nthen expand the tangent vector of the lightlike geodesic with respect \nto the static tetrad (\\ref{eq:emu}) and also, as an\nalternative, with respect to the infalling tetrad \n(\\ref{eq:temu}) for some chosen $\\varepsilon >0$. Of course, \nthe resulting equations are restricted to the domain where\nthe respective tetrad is well-defined and orthonormal. 
As the \ntangent vector is lightlike, these expansions may be written in \nterms of two angles $\\alpha$ and $\\tilde{\\alpha}$,\n\\begin{gather}\n\\nonumber\n\\dot{T} \\dfrac{\\partial}{\\partial T}\n+\n\\dot{r} \\dfrac{\\partial}{\\partial r}\n+\n\\dot{\\varphi} \\dfrac{\\partial}{\\partial \\varphi}\n=\n\\chi \\Big( \ne_0 + \\mathrm{cos} \\, \\alpha \\, e_1 -\n\\mathrm{sin} \\, \\alpha \\, e_3 \\Big) \n\\\\\n=\n\\tilde{\\chi} \\Big( \n\\tilde{e}{}_0 + \\mathrm{cos} \\, \\tilde{\\alpha} \\, \\tilde{e}{}_1 -\n\\mathrm{sin} \\, \\tilde{\\alpha} \\, \\tilde{e}{}_3 \\Big) \\, .\n\\label{eq:tangent1}\n\\end{gather}\nIf the parametrisation of the lightlike geodesic is future-oriented\nwith respect to the $T$ coordinate, the scalar factors $\\chi$ and \n$\\tilde{\\chi}$ are positive; otherwise they are negative.\n$\\alpha$ is the angle between the lightlike geodesic and \nthe radial direction in the rest system of the static observer, \nwhereas $\\tilde{\\alpha}$ is the analogously defined angle in \nthe rest system of the infalling observer. 
\n$\\alpha$ and $\\tilde{\\alpha}$ may take all values between 0 and $\\pi$.\nOf course, $\\alpha$ is well-defined on the domain where the \nstatic observer exists (i.e., for $2m < r < \\infty$) whereas\n$\\tilde{\\alpha}$ is well-defined on the domain where the \ninfalling observer exists (i.e., for $0 < r < r_i = 2m\/(1-\\varepsilon ^2)$ \nif $0< \\varepsilon < 1$, and for $0 < r < \\infty$ if $1 \\le \\varepsilon < \\infty$).\n\nComparing coefficients of $\\partial\/ \\partial r$\nand $\\partial\/\\partial \\varphi$ in (\\ref{eq:tangent1}) yields\n\n\\begin{equation}\\label{eq:cc2}\n\\dot{r} \n= \\, \\chi \\, \\sqrt{1-\\dfrac{2m}{r}} \\, \\mathrm{cos} \\, \\alpha \n= \\tilde{\\chi} \\left( \\varepsilon \\, \\mathrm{cos} \\, \\tilde{\\alpha} \n- \\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r}} \\right) \\, ,\n\\end{equation}\n\n\\begin{equation}\\label{eq:cc3}\n\\dot{\\varphi} \n= - \\chi \\, \\dfrac{\\mathrm{sin} \\, \\alpha}{r} \n= - \\tilde{\\chi} \\, \\dfrac{\\mathrm{sin} \\, \\tilde{\\alpha}}{r} \\, .\n\\end{equation}\n\nFrom (\\ref{eq:cc2}) and (\\ref{eq:cc3}) we find\n\n\\begin{equation}\\label{eq:alpha1}\n\\dfrac{\\dot{\\varphi}}{\\dot{r}} \n=\n\\dfrac{\n- \\mathrm{sin} \\, \\alpha\n}{\nr \\sqrt{1-\\dfrac{2m}{r}} \\mathrm{cos} \\, \\alpha\n}\n=\n\\dfrac{\n- \\mathrm{sin} \\, \\tilde{\\alpha}\n}{\nr \\left( \\varepsilon \\, \\mathrm{cos} \\, \\tilde{\\alpha} - \n\\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r}} \\right)\n} \\, .\n\\end{equation}\nNow we apply these results to the case of a lightlike geodesic that goes through an extremum\nof the radius coordinate at some value $r_m$. If we evaluate (\\ref{eq:alpha1})\nat a radius value $r>3m$ this extremum is necessarily a local minimum, whereas it is necessarily \na local maximum if we evaluate (\\ref{eq:alpha1}) at a radius value $r<3m$. 
\nIn either case (\\ref{eq:orbit2}) implies that the angles $\\alpha$ and $\\tilde{\\alpha}$ \nat $r$ satisfy\n\n\\begin{equation}\\label{eq:alpha2}\n\\dfrac{1}{\\dfrac{r^2(r_m-2m)}{r_m^3}-1 + \\dfrac{2m}{r}} \n=\n\\dfrac{ \\mathrm{sin} ^2 \\alpha}{ \\left( 1-\\dfrac{2m}{r} \\right) \n\\mathrm{cos} ^2 \\alpha}\n=\n\\dfrac{\n\\mathrm{sin} ^2 \\tilde{\\alpha}\n}{\n\\left( \\varepsilon \\, \\mathrm{cos} \\, \\tilde{\\alpha} \n- \\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r}} \\right) ^2\n} \\, .\n\\end{equation}\nFrom the second equality sign in (\\ref{eq:alpha2}) we find\n\n\\begin{equation}\\label{eq:aberr2}\n\\mathrm{sin} \\, \\alpha =\n\\dfrac{\n\\sqrt{1-\\dfrac{2m}{r}} \\, \\dfrac{1}{\\varepsilon} \\, \\mathrm{sin} \\, \\tilde{\\alpha}\n}{\n1- \\dfrac{1}{\\varepsilon} \\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r}} \\, \n\\mathrm{cos} \\, \\tilde{\\alpha}\n} \\, .\n\\end{equation}\nBy (\\ref{eq:vrad}), this just demonstrates that $\\alpha$ and $\\tilde{\\alpha}$ are \nrelated by the standard aberration formula. \n\nFrom the first equality sign in (\\ref{eq:alpha2}) we find\n\n\\begin{equation}\\label{eq:alpha4}\n\\mathrm{sin} \\, \\alpha =\n\\sqrt{\\dfrac{r_m^3(r-2m)}{r^3(r_m-2m)}}\n\\end{equation}\nand equating the first to the third expression in (\\ref{eq:alpha2}) yields\n\n\\begin{equation}\\label{eq:alpha5}\n\\mathrm{sin} \\, \\tilde{\\alpha} =\n\\dfrac{\n\\sqrt{1 -\\dfrac{2m}{r}} \\, \n\\sqrt{\\dfrac{(r-2m)r_m^3}{(r_m-2m) r^3}} \n}{\n\\varepsilon \\pm \\, \n\\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r}} \\, \n\\sqrt{1-\\dfrac{(r-2m)r_m^3}{(r_m-2m) r^3}}\n} \\, .\n\\end{equation}\nIn (\\ref{eq:alpha5}) the upper sign is valid if $dr\/d \\varphi >0$ \nand the lower sign is valid if $dr\/d \\varphi <0$ at $r$. 
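The turning-point relations and the aberration property can also be verified numerically against (\ref{eq:orbit2}), (\ref{eq:Erm2}), (\ref{eq:alpha4}), (\ref{eq:alpha5}) and (\ref{eq:aberr2}). A small NumPy sketch in units $c=1$, with arbitrarily chosen test values ($m=1$, $r_m=4m$, $r=8m$, $\varepsilon=1$, upper sign in (\ref{eq:alpha5}); for these values $\tilde{\alpha}$ is acute):

```python
import numpy as np

m = 1.0
r_m, r, eps = 4.0, 8.0, 1.0        # turning point, observation radius, energy parameter
h = 2 * m / r
s = np.sqrt(eps**2 - 1 + h)

# eq:orbit2 written as (dr/dphi)^2 = F(r); F vanishes at the turning point r_m,
# and F'(r_m)/2 reproduces the second derivative stated in eq:Erm2
F  = lambda x: (r_m - 2 * m) * x**4 / r_m**3 - x**2 + 2 * m * x
dF = lambda x: 4 * (r_m - 2 * m) * x**3 / r_m**3 - 2 * x + 2 * m
assert np.isclose(F(r_m), 0.0)
assert np.isclose(dF(r_m) / 2, r_m - 3 * m)       # eq:Erm2

# eq:alpha4, eq:alpha5 (upper sign, dr/dphi > 0) and the aberration formula eq:aberr2
sin_a  = np.sqrt(r_m**3 * (r - 2 * m) / (r**3 * (r_m - 2 * m)))
ratio  = (r - 2 * m) * r_m**3 / ((r_m - 2 * m) * r**3)
sin_ta = np.sqrt(1 - h) * np.sqrt(ratio) / (eps + s * np.sqrt(1 - ratio))
cos_ta = np.sqrt(1 - sin_ta**2)                   # alpha-tilde is acute for these values
aberr  = np.sqrt(1 - h) * sin_ta / eps / (1 - s * cos_ta / eps)
assert np.isclose(sin_a, aberr)                   # eq:alpha4 and eq:alpha5 obey eq:aberr2
print("turning-point and aberration relations verified")
```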
\n\n\\begin{figure}[h]\n \\psfrag{x}{$r_O$}\n \\psfrag{y}{$\\,$ \\hspace{-0.3cm} $\\alpha _{\\mathrm{sh}}$}\n \\psfrag{a}{\\small $\\,$ \\hspace{-0.5cm} $2m$}\n \\psfrag{b}{\\small $\\,$ \\hspace{-0.42cm} $3m$}\n \\psfrag{c}{\\small $\\,$ \\hspace{-0.5cm} $\\dfrac{\\pi}{2}$}\n \\psfrag{d}{\\small $\\,$ \\hspace{-0.5cm} $\\pi$}\n\\centerline{\\epsfig{figure=alphabh.eps, width=11cm}}\n\\caption{Angular radius $\\alpha _{\\mathrm{sh}}$ of the shadow of \na Schwarzschild black hole for a static observer. What we have \nplotted here is Synge's formula (\\protect{\\ref{eq:alpha6}}). For \nan observer at $r_O=3m$, we have $\\alpha _{\\mathrm{sh}} = \\pi\/2$, i.e., half\nof the sky is dark. For $r_O \\to 2m$, we have $\\alpha _{\\mathrm{sh}} \\to \\pi$, \ni.e., in this limit the entire sky becomes dark. \\label{fig:alphabh}}\n\\end{figure}\n\nFrom (\\ref{eq:alpha4}) and (\\ref{eq:alpha5}) we can now easily determine \nthe angular radius of the shadow. The latter is defined in the following way:\nConsider an observer at radius coordinate $r=r_O >3m$. Then a lightlike geodesic \nissuing from the observer position into the past may either go to infinity,\npossibly after passing through a minimum of the radius coordinate\nat some $r_m>3m$, or it may go to the horizon. Similarly, for an observer position \n$2m < r_O < 3m$ a past-oriented lightlike geodesic may either go to infinity, \nor it may go to the horizon, possibly after passing through a maximum of the \nradius coordinate at some $r_m<3m$. The borderline cases are lightlike geodesics \nthat spiral asymptotically towards the photon sphere at $r=3m$. The shadow is \nthe set of all directions at the observer's celestial sphere in which \npast-oriented lightlike geodesics go to the horizon; we denote its angular \nradius by $\\alpha _{\\mathrm{sh}}$ for a static observer and by \n$\\tilde{\\alpha}{}_{\\mathrm{sh}}$ for an infalling observer.\n\n\\begin{figure}[h]\n\\caption{Angular radius $\\tilde{\\alpha}{}_{\\mathrm{sh}}$ of the shadow of a \nSchwarzschild black hole for infalling observers with $\\varepsilon < 1$ \n(dotted), $\\varepsilon = 1$ (solid) and $\\varepsilon > 1$ \n(dashed). In any case $\\tilde{\\alpha}{}_{\\mathrm{sh}}$ approaches $\\pi\/2$, i.e., half of \nthe sky becomes dark, for $r_O \\to 0$. For the sake of comparison the \nangular radius $\\alpha _{\\mathrm{sh}}$ for a static observer is again\nplotted in this diagram as a thick (blue) line, cf. \nFigure~\\protect{\\ref{fig:alphabh}}. \\label{fig:talphabh}}\n\\end{figure}\nTherefore, we get the angular radius of the shadow for an observer\nat $r=r_O$ if we send $r_m \\to 3m$ in (\\ref{eq:alpha4}) and \n(\\ref{eq:alpha5}). 
This results in \n\n\\begin{equation}\\label{eq:alpha6}\n\\mathrm{sin} \\, \\alpha _{\\mathrm{sh}}=\n\\dfrac{\\sqrt{27} m}{r_O} \\sqrt{ 1- \\dfrac{2m}{r_O}}\n\\end{equation}\nand\n\n\\begin{equation}\\label{eq:alpha7}\n\\mathrm{sin} \\, \\tilde{\\alpha} {}_{\\mathrm{sh}} =\n\\dfrac{\n\\dfrac{\\sqrt{27} m}{r_O} \\, \n\\left(\n\\varepsilon \\mp \n\\sqrt{\\varepsilon ^2 -1+\\dfrac{2m}{r_O}}\n\\, \\sqrt{1- \\dfrac{27 m^2}{r_O^2} \n\\Big( 1 - \\dfrac{2m}{r_O} \\Big)} \n\\right)\n}{\n1 + 27 \\, \\dfrac{m^2}{r_O^2} \\, \n\\left( \\varepsilon ^2 -1+\\dfrac{2m}{r_O} \\right)\n} \\, ,\n\\end{equation}\nrespectively. (\\ref{eq:alpha6}) gives us the angular radius \n$\\alpha _{\\mathrm{sh}}$ as it is seen by a static observer \nat $r_O$, see Figure~\\ref{fig:alphabh}. \nThis formula has been known since Synge \\cite{Synge1966}. It is \nmeaningful only for observer positions $2m < r_O < \\infty$ \nbecause the static observers exist on this domain only. \nBy contrast, (\\ref{eq:alpha7}) gives us the angular radius \n$\\tilde{\\alpha}{}_{\\mathrm{sh}}$ of the shadow as it is \nseen by an infalling observer at momentary radius coordinate \n$r_O$, see Figure~\\ref{fig:talphabh}. A similar formula was\nderived by Bakala et al. \\cite{BakalaEtAl2007}, even for \nthe more general case of a Schwarzschild-de Sitter (Kottler) \nblack hole. (\\ref{eq:alpha7}) is meaningful for \n$0 < r_O < r_i$ if $0 < \\varepsilon < 1$ and for $0 < r_O < \\infty$ if \n$\\varepsilon \\ge 1$, because these are the domains on which the \ninfalling observers exist.\n\n\\section{The shadow of a collapsing star for a static observer}\\label{sec:shcoll1}\n\nWe now consider a collapsing dark star, i.e., a star that neither emits nor \ntransmits light. We assume that the surface of the star starts from rest at a \nradius $r_i$ at Painlev{\\'e}-Gullstrand time $T=0$ and then collapses in free \nfall, so that, by (\\ref{eq:rTrad}), every point of the surface moves on a \nradial timelike geodesic with $\\varepsilon ^2 = 1-2m\/r_i$. We write $r_S$ for \nthe radius of the surface at time $T_S$. For $T_S>0$ the worldline of an\nobserver on the surface of the star is then given by one of the dotted lines\nin Figure~\\ref{fig:radial}. 
From (\\ref{eq:rTrad}) with $\\varepsilon ^2=1-2m\/r_i$\nwe find that for $T_S>0$\n\n\\begin{equation}\\label{eq:surface}\nc T_S = \n\\bigintss _{r_S} ^{r_i} \n\\dfrac{ \\sqrt{r_i-2m} \\, \\sqrt{r^3} \\, dr}{(r-2m) \\sqrt{2m(r_i-r)}}\n- \\bigintss _{r_S}^{r_i} \\dfrac{\\sqrt{2mr} \\, dr}{(r-2m)}\n\\, .\n\\end{equation}\nEquating $r_S$ to zero gives the collapse time, $T_S^{\\mathrm{coll}}$, i.e., the\nPainlev{\\'e}-Gullstrand time when the star has collapsed to a point singularity \nat the centre, see Figure~\\ref{fig:collstat},\n\n\\begin{equation}\\label{eq:Tcoll}\nc T_S^{\\mathrm{coll}} = \n\\bigintss _{0} ^{r_i} \n\\dfrac{ \\sqrt{r_i-2m} \\, \\sqrt{r^3} \\, dr}{(r-2m) \\sqrt{2m(r_i-r)}}\n- \\bigintss _{0}^{r_i} \\dfrac{\\sqrt{2mr} \\, dr}{(r-2m)} \\, .\n\\end{equation}\nNote that necessarily $2m < r_i$; in the following we assume, moreover, that \n$r_i > 3m$. \nFor calculating the shadow we have to consider lightlike geodesics\nthat graze the surface of the collapsing star. If such a lightlike\ngeodesic passes through a minimum radius value $r_m$, we may\ndetermine $r_m$ by equating the first and the last expression in \n(\\ref{eq:alpha2}) with $r=r_S$, $\\tilde{\\alpha} = \\pi\/2$ and $\\varepsilon ^2 = \n1- 2m\/r_i$. This results in\n\n\\begin{equation}\\label{eq:rmcoll2}\n\\dfrac{r_m^3}{r_m-2m} = \\dfrac{r_i r_S^2}{r_i-2m} \\, . \n\\end{equation}\nRecall that a minimum value is possible only for $r_m>3m$,\ni.e., in (\\ref{eq:rmcoll2}) $r_S$ must satisfy the inequality $r_S>r_S^{(2)}$\nwhere \n\n\\begin{equation}\\label{eq:rS2}\nr_S ^{(2)} = 3 m \\sqrt{3-\\dfrac{6m}{r_i}} \\, .\n\\end{equation}\nAs $r_i>3m$, (\\ref{eq:rS2}) implies that \n\n\\begin{equation}\\label{eq:rS2in}\n3m < r_S^{(2)} < \\sqrt{27} \\, m \\, .\n\\end{equation}\nWe now place a static observer at a radius coordinate $r_O$. A light ray that \nis emitted tangentially from the surface of the star at time $T_S$, when the \nsurface has radius $r_S$, arrives at the observer at a time $T_O > T_S$. 
With (\\ref{eq:surface}) this results in\n\n\\begin{gather}\\label{eq:Tcollstat}\nc T _O = \n\\bigintss _{r_S} ^{r_i} \n\\dfrac{ \\sqrt{r_i-2m} \\, \\sqrt{r^3} \\, dr}{(r-2m) \\sqrt{2m(r_i-r)}}\n+ \\bigintss _{r_i} ^{r_O} \n\\dfrac{\\sqrt{2mr} \\, dr}{(r-2m)}\n\\\\\n\\nonumber\n+\n \\bigintss _{r_S}^{r_O} \\dfrac{\\sqrt{r_i-2m} \\, \\sqrt{r^5} \\, dr}{(r-2m) \\sqrt{(r_i-2m)r^3-(r-2m)r_ir_S^2}}\n\\, .\n\\end{gather}\n\nIf $r_i$ and $r_O$ are given, with $3m < r_i < \\infty$, we have to distinguish \nthree cases: (a) $r_i < r_O$, (b) $r_S^{(2)} < r_O < r_i$ and (c) \n$2m < r_O < r_S^{(2)}$. We first consider case (a), i.e., the observer is \noutside the star from the beginning.\n\nAs long as $T_O \\le T_O^{(1)}$, where $T_O^{(1)}$ is the arrival time of the \nlight ray that grazes the surface at the moment $T_S=0$ when the collapse \nbegins, the observer sees the shadow of a static star of radius $r_i$. The \nboundary of the shadow is then determined by light rays that touch the surface \ntangentially, i.e., by (\\ref{eq:alpha4}) with $r_m=r_i$,\n\n\\begin{equation}\\label{eq:shcollstat1}\n\\mathrm{sin} \\, \\alpha _{\\mathrm{sh}} = \n\\sqrt{\\dfrac{r_i^3 \\, (r_O-2m)}{r_O^3 \\, (r_i-2m)}} \\, .\n\\end{equation}\nBy (\\ref{eq:Tcollstat}) with $r_S=r_i$,\n\n\\begin{equation}\\label{eq:TO1}\nc T_O^{(1)} = \n\\bigintss _{r_i} ^{r_O} \\dfrac{\\sqrt{2mr} \\, dr}{(r-2m)}\n+ \\bigintss _{r_i}^{r_O} \\dfrac{\\sqrt{r_i-2m} \\, \\sqrt{r^5} \\, dr}{(r-2m) \\sqrt{(r_i-2m)r^3-(r-2m)r_i^3}} \\, .\n\\end{equation}\nFor $T_O^{(1)} < T_O < T_O^{(2)}$ the boundary of the shadow is determined by \nlight rays that graze the shrinking surface at a radius $r_S$ with \n$r_S^{(2)} < r_S < r_i$. By (\\ref{eq:alpha4}) and (\\ref{eq:rmcoll2}),\n\n\\begin{equation}\\label{eq:shcollstat2}\n\\mathrm{sin} \\, \\alpha _{\\mathrm{sh}} = \n\\sqrt{\\dfrac{r_i \\, r_S^2 \\, (r_O-2m)}{(r_i-2m) \\, r_O^3}} \\, ,\n\\end{equation}\nwhere the relation between $r_S$ and $T_O$ is given by (\\ref{eq:Tcollstat}). \nThe second phase ends at the time $T_O^{(2)}$ that results from \n(\\ref{eq:Tcollstat}) with $r_S = r_S^{(2)}$,\n\n\\begin{gather}\\label{eq:TO2}\nc T_O^{(2)} = \n\\bigintss _{r_S^{(2)}} ^{r_i} \n\\dfrac{ \\sqrt{r_i-2m} \\, \\sqrt{r^3} \\, dr}{(r-2m) \\sqrt{2m(r_i-r)}}\n+ \\bigintss _{r_i} ^{r_O} \n\\dfrac{\\sqrt{2mr} \\, dr}{(r-2m)}\n\\\\\n\\nonumber\n+\n \\bigintss _{r_S^{(2)}}^{r_O} \\dfrac{\\sqrt{r_i-2m} \\, \\sqrt{r^5} \\, dr}{(r-2m) \\sqrt{(r_i-2m)r^3-(r-2m)r_i \\big( r_S^{(2)} \\big) ^2}}\n\\, .\n\\end{gather}\nFor $T_O > T_O^{(2)}$, the angular radius of \nthe shadow is given by Synge's formula (\\ref{eq:alpha6}). Past-oriented light \nrays grazing the surface of the star cannot escape to infinity anymore,\ni.e., they do not give the boundary of the shadow; the latter is determined by light\nrays that spiral asymptotically to $r=3m$.\n\n\\begin{figure}[h]\n \\psfrag{x}{$r_O$}\n \\psfrag{y}{$\\,$ \\hspace{-1.3cm} $cT_O^{(2)}-cT_O^{(1)}$}\n \\psfrag{a}{\\small $\\,$ \\hspace{-0.5cm} $25m$}\n \\psfrag{b}{\\small $\\,$ \\hspace{-0.5cm} $50m$}\n \\psfrag{c}{\\small $\\,$ \\hspace{-0.5cm} $75m$}\n \\psfrag{d}{\\small $\\,$ \\hspace{-0.5cm} $100m$}\n \\psfrag{e}{\\small $\\,$ \\hspace{-0.9cm} $25m$}\n \\psfrag{f}{\\small $\\,$ \\hspace{-0.9cm} $50m$}\n \\psfrag{g}{\\small $\\,$ \\hspace{-0.9cm} $75m$}\n\\centerline{\\epsfig{figure=colltime.eps, width=10.1cm}}\n\\caption{Time $T_O^{(2)}-T_O^{(1)}$ over which the static observer sees the \nstar collapse, plotted against the observer position $r_O$. The star is\ncollapsing from an initial radius $r_i=5m$ (dotted), $r_i=10m$ (solid)\nor $r_i=15m$ (dashed), respectively. \\label{fig:colltime}}\n\\end{figure}\n\nWe summarise our analysis in the following way. In the first phase, which lasts \nfrom $T_O = - \\infty$ to $T_O = T_O^{(1)}$ given by (\\ref{eq:TO1}), the observer\nsees a shadow of constant angular radius given by (\\ref{eq:shcollstat1}). 
In the\nsecond phase, which lasts from $T_O= T_O^{(1)}$ until $T_O=T_O^{(2)}$ given by\n(\\ref{eq:TO2}), the observer sees a shrinking shadow whose angular radius\nas a function of observer time $T_O$ is given in parametric form by (\\ref{eq:Tcollstat})\nand (\\ref{eq:shcollstat2}). The parameter $r_S$ runs down from $r_S^{(1)}=r_i$ to\n$r_S^{(2)}= 3m \\sqrt{3-6m\/r_i}$. The third phase lasts from $T_O = T_O^{(2)}$ to\n$T_O=\\infty$. In this period the observer sees a shadow of constant angular\nradius given by Synge's formula (\\ref{eq:alpha6}). The angular radius of the\nshadow is plotted against $T_O$, over all three periods, for $r_i=5m$\nand $r_O=10m$ in Figure \\ref{fig:alphacoll1}. \n\nIn Fig. \\ref{fig:colltime} we plot the time $T_O^{(2)}-T_O^{(1)}$ over which \nthe observer sees the star collapse against the observer position $r_O$. We\nsee that this time is largely independent of $r_O$, unless the observer is\nvery close to the star. For a star collapsing from an initial radius of 5\nSchwarzschild radii, $r_i=10m$, we see that $T_O^{(2)}-T_O^{(1)} \\approx\n34 \\, m\/c$ for a sufficiently distant observer. For a stellar black hole, a typical\nvalue would be $m \\approx 15 \\, \\mathrm{km}$, resulting in $T_O^{(2)}-T_O^{(1)}\n\\approx 0.001 \\, \\mathrm{sec}$, so such a collapse would happen quite quickly.\nEven for a supermassive black hole of $m \\approx 10^6 \\, \\mathrm{km}$, the\nobserver would see the collapse happen in less than 2 minutes. 
For the case \nof a collapsing cluster of galaxies the formation of the shadow would take longer, \nbut in this case it is more reasonable to model the collapsing object as transparent.\nNote that on the worldline of a distant static observer Painlev{\\'e}-Gullstrand \ntime $T_O$ is practically the same as proper time $\\tau _O$ because, \nby (\\ref{eq:taustatic}), \n \n\\begin{equation}\\label{eq:tauTO}\n \\tau _O = \\sqrt{1- \\dfrac{2m}{r_O}} \\, T_O + \\mathrm{constant}.\n\\end{equation}\n\n\\begin{figure}[h]\n \\psfrag{x}{$c T _O$}\n \\psfrag{y}{$\\,$ \\hspace{-0.3cm} $\\alpha _{\\mathrm{sh}}$}\n \\psfrag{a}{\\small $\\,$ \\hspace{-0.62cm} $\\begin{matrix} \\, \\\\[-0.12cm] 4 m\\end{matrix}$}\n \\psfrag{b}{\\small $\\,$ \\hspace{-0.58cm} $\\begin{matrix} \\, \\\\[-0.12cm] 8 m\\end{matrix}$}\n \\psfrag{c}{\\small $\\,$ \\hspace{-0.62cm} $\\begin{matrix} \\, \\\\[-0.12cm] 12 m\\end{matrix}$}\n \\psfrag{d}{\\small $\\,$ \\hspace{-0.58cm} $\\begin{matrix} \\, \\\\[-0.12cm] 16 m\\end{matrix}$}\n \\psfrag{p}{\\small $\\,$ \\hspace{-0.56cm} $\\begin{matrix} \\, \\\\[-0.2cm] cT_O^{(0)} \\end{matrix}$}\n \\psfrag{q}{\\small $\\,$ \\hspace{-0.64cm} $\\begin{matrix} \\, \\\\[-0.2cm] cT_O^{(2)} \\end{matrix}$}\n \\psfrag{e}{\\small $\\,$ \\hspace{-0.5cm} $\\dfrac{\\pi}{4}$}\n \\psfrag{f}{\\small $\\,$ \\hspace{-0.5cm} $\\dfrac{\\pi}{2}$}\n \\psfrag{r}{\\small $\\,$ \\hspace{-0.85cm} $\\alpha _{\\mathrm{sh}}^{(0)}$}\n\\centerline{\\epsfig{figure=alphacoll2.eps, width=11.7cm}}\n\\caption{Angular radius $\\alpha _{\\mathrm{sh}}$ of the shadow of a collapsing \ndark star for a static observer. The observer is at $r_O=4.5 m$, the \nsurface of the star is collapsing from $r_i = 5 m$.\nThe observation begins at Painlev{\\'e}-Gullstrand time $T_O^{(0)}$ \nwhen the surface of the star passes through the radius value $r_O$. 
\nAt this moment the angular radius of the shadow takes a value $\\alpha _{\\mathrm{sh}}^{(0)}$ \nwhich is smaller than $\\pi \/2$, because of aberration.\nAs a function of the Painlev{\\'e}-Gullstrand time coordinate $T _O$, the \nangular radius $\\alpha _{\\mathrm{sh}}$ then decreases until it becomes a \nconstant at time $T_O^{(2)}$. This constant value, which is again shown as a dotted\nline, is given by Synge's formula (\\protect{\\ref{eq:alpha6}}). \\label{fig:alphacoll2}}\n\\end{figure}\n\nA very similar analysis applies to case (b). The only difference is that then in \nthe beginning the observer is inside the star. The observation can begin only at \nthe time when the surface of the star passes through the radius value $r_O$\nwhich, by assumption, is bigger than $r_S^{(2)}$. From that time on, the angular \nradius of the shadow is given by the same equations as before for the second and \nthe third phase. A plot of the angular radius of the shadow against $T_O$ is\nshown in Figure~\\ref{fig:alphacoll2} for $r_O=4.5 m$ and $r_i=5m$.\n\nIn case (c) the observer is initially inside the star, as in case (b). The\ndifference is that now the radius of the star is smaller than $r_S^{(2)}$ \nat the moment when the observation begins. Therefore, the shadow is never\ndetermined by light rays that graze the surface of the star; it is always \ndetermined by light rays that spiral towards $r=3m$, i.e., the angular radius \nof the shadow is constant from the beginning of the observation and given by \nSynge's formula. \n \n\\section{The shadow of a collapsing star for an infalling observer}\\label{sec:shcoll2}\n\nWe consider the same collapsing dark star as in the preceding section,\nbut now we want to calculate the shadow as it is seen by an infalling\nobserver. 
The relation between the coordinates $r_O$ and $T_O$ of the \ninfalling observer can be found by integrating (\\ref{eq:rTrad}),\n\\begin{equation}\\label{eq:TOrO}\nc T_O = \\bigintss_{r_O} ^{r_O^*} \n\\dfrac{\n\\Big( \\varepsilon \\sqrt{r^3}- \\sqrt{2mr} \\sqrt{ \\varepsilon ^2 r -r+2m}\\Big) dr\n}{\n( r -2m ) \\sqrt{\\varepsilon ^2r -r+2m} \n} \\, .\n\\end{equation}\nHere $r_O^*$ is an integration constant that gives the position of the observer at $T=0$, which\nis the time when the star begins to collapse.\nWe assume that $r_i>3m$, hence $3m < r_S^{(2)} < r_i$.\n\nMore than $75$\\% of atoms are transferred to the 2nd band. \nThis transfer method is also applicable in the presence of the lattice confinement along the $y$-axis,\nthough the decay time of the oscillation $\\tau =86(7)$~$\\mu$s is much shorter than in the case of a weak harmonic\nconfinement (1D-tube), $\\tau =260(10)$~$\\mu$s.\n\n\\section{Relaxation dynamics of a flat band}\n\\begin{figure*}[hbt]\n\t\\includegraphics[width=160mm]{Fig_Lifetime_rev1.eps}\n\t\\caption{\n\t\n\t\t\\textbf{Lifetime of atoms in the flat band.}\n\t\t\\textbf{a}, Absorption images for the lifetime measurement of the 2nd band with three different hold times,\n\t\ttaken after $14$~ms time-of-flight. The diagonal lattice depth is $\\sdiag=9.5$.\n\t\tThe first three Brillouin zones are indicated by the white dashed lines.\n\t\tIn the top image, the areas used to evaluate the lifetime of a condensate ($\\tau_{\\rm c}$) are also displayed\n\t\twith the red squares.\n\t\t\\textbf{b}, Decay of the flat band at $(\\slong, \\sshort)= (8, 8)$ and variable $\\sdiag$. Solid lines are\n\t\tthe fit results with double exponential curves. Error bars denote the standard deviation of three independent\n\t\tmeasurements.\n\t\t\\textbf{c}, Lifetime of the flat band. $\\tau_{1,2}$ are the fast and slow decay times obtained from the data\n\t\tshown in \\textbf{b}, respectively. $\\tau_{\\rm c}$ is the ${\\rm e}^{-1}$ lifetime of condensates. 
\nError bars represent fitting error.\n\t}\n\t\\label{fig_lifetime}\n\\end{figure*}\n\nWe measure the lifetime of atoms in the 2nd band of the optical Lieb lattice.\nAfter transferring to the 2nd band, we change the depth $\\sdiag$ of the diagonal lattice to control the energy gap\nbetween the 1st and 2nd band. Besides the band gap \\cite{Mueller2007}, the lifetime of a quantum gas\nin an excited band is strongly affected by the density overlap with the states in the lower bands \\cite{Olschlager2012}.\nAs we increase $\\sdiag$, the average gap between the 1st and 2nd band becomes smaller and at the same time\ntheir density profiles become similar to each other. In the opposite limit of shallow $\\sdiag$, the band gap increases and the two bands\nhave no density overlap, as the lowest band mostly consists of the $A$-sublattice.\nWe take a variable hold time in the lattice, followed by band mapping to count the atom number in the excited bands.\nTypical absorption images are shown in Fig.~\\ref{fig_lifetime}a.\nThe decay curves displayed in Fig.~\\ref{fig_lifetime}b show the expected behavior of increasing lifetime with decreasing $\\sdiag$.\nIn addition, increasing the gap makes the dynamics separate more clearly into two processes: decay of the condensate\nwithin the 2nd band (middle image of Fig.~\\ref{fig_lifetime}a) and decay of atoms into the lowest band (bottom image).\nWe find that the curve is well fitted by a double exponential of the form $a_1 \\exp(-t\/\\tau_1)+a_2 \\exp(-t\/\\tau_2)+b$.\nThe fast component $\\tau_1$ shows only a weak dependence on $\\sdiag$, whereas the slow component $\\tau_2$ shows\nan over $20$-fold change from the smallest to the largest $\\sdiag$. We also extract the lifetime of the condensate\nin the 2nd band, $\\tau_{\\rm c}$, by counting atoms on the corner of the 2nd Brillouin zone (Figure~\\ref{fig_lifetime}a),\nand find behavior similar to that of $\\tau_1$. 
This implies that the initial fast decay is related to the decay of the condensate,\nwhich involves the decay to the lower band with a faster time constant than that of the non-condensed atoms.\n\n\\section{Localization of a wave function in a flat band}\n\\begin{figure*}[bt]\n\t\\includegraphics[width=160mm]{Fig_TunnelingDynamics_wide_rev1.eps}\n\t\\caption{\n\t\n\t\t\\textbf{Tunneling dynamics in the Lieb lattice}.\n\t\t\\textbf{a}, Demonstration of the measurement of sublattice occupancy. Here, the sublattice mapping technique is\n\t\tapplied to atoms loaded into (left) $((\\slong^{(x)},\\slong^{(z)}), \\sshort, \\sdiag) = ((8,8), 8, 0)$,\n\t\t(center) $((2,8), 8, 19)$, and (right) $((8,2), 8, 19)$, corresponding to atoms in $A$-, $B$-, and $C$-sites, respectively.\n\t\t\\textbf{b}, Measured tunneling dynamics of $\\ket{+}$ and $\\ket{-}$ initial states\n\t\tin the Lieb lattice with $(\\slong, \\sshort, \\sdiag) = (8, 8, 9.5)$. Solid lines are the fits to the experimental data\n\t\twith a damped sinusoidal oscillation (for $\\ket{+}$) and double exponentials (for $\\ket{-}$).\n\t\tThe inset shows dynamics of the $\\ket{-}$ state for longer hold times. Error bars denote standard deviation.\n\t\tAn illustration of the tunneling process for each initial state is also shown on the right-hand side.\n\t\t\\textbf{c}, Frequencies of coherent inter-site oscillations. Solid lines are the calculated band gap between\n\t\tthe 1st and 2nd (red) and the 1st and 3rd bands (blue). Error bars denote fitting error.\n\t\t\\textbf{d}, Bending the flat band. 
Dynamics of the $\\ket{-}$ state in the presence of imbalance\n\t\t$\\Delta \\slong = \\slong^{(x)} - \\slong^{(z)}$ shows restoration of coherent dynamics.\n\t\tError bars denote standard deviation.\n\t}\n\t\\label{fig_tunneling}\n\\end{figure*}\n\nAs described above, the most intriguing property of a flat band is the localization of the wave function\nat certain sublattice sites.\nIn the case of the Lieb lattice, the wave function of the flat band vanishes on the $A$-sublattice.\nHere we reveal this property by observing the tunneling dynamics of a Bose gas\ninitially condensed in the $\\ket{-}=\\ket{B}-\\ket{C}$ state, and compare it to the dynamics of the state with\nthe opposite relative phase, $\\ket{+}=\\ket{B}+\\ket{C}$.\nIn order to observe the real-space dynamics of the system, we perform a projection measurement of the occupation\nnumber in each sublattice, which we call sublattice mapping. In this method, we first quickly change the lattice potential to\n$((\\slong^{(x)},\\slong^{(z)}), \\sshort, \\sdiag) = ((8,14), 20, 0)$.\nIn this configuration, all three sublattices are energetically well separated from one another and the lowest three bands\nconsist of the $A$-, $B$- and $C$-sublattices, respectively. This maps sublattice occupations to band occupations,\nwhich can then be measured by the band mapping technique. Figure \\ref{fig_tunneling}a demonstrates\nthis method in the cases where atoms occupy only one of the sublattices.\nNote that the populations in the $B$- and $C$-sublattices are mapped to the 2nd Brillouin zones of the one-dimensional\nlattices along the $x$- and $z$-axes, respectively. 
This is because turning off the diagonal lattice decouples\nthese two directions, so that the fundamental bands are labeled by combinations of the band indices of the 1D lattices.\n\nWe prepare the initial state $\\ket{+}$ by simply loading a BEC into the Lieb lattice with deep $\\vdiag$.\nOn the other hand, the $\\ket{-}$ state is obtained by applying the band transfer method to the $\\ket{+}$ state.\nThe dynamics of these initial states, after the lattice depths are changed to satisfy the equal-offset condition $E_A = E_B = E_C$,\nare measured by sublattice mapping.\nAs shown in Fig. \\ref{fig_tunneling}b, we reveal qualitatively different behaviors of these two states: the $\\ket{-}$ state\nshows a significant suppression of the $A$-sublattice occupancy, indicating that tunneling from the $\\ket{-}$ state to the\n$A$-sublattice is frozen,\nwith only a slow residual decay to the $A$-sublattice, whereas the $\\ket{+}$ state exhibits coherent oscillations\nbetween the $A$- and ($BC$)-sublattices. This clearly reflects the geometric structure of the Lieb lattice\nmentioned above. A double-exponential fit to the data for the $\\ket{-}$ initial state yields\n$\\tau_1=0.36(4)$~ms and $\\tau_2=5.5(9)$~ms, indicating that the leakage to the $A$-sublattice is caused by\nthe decay to the lowest band.\n\nIn the Bloch basis, the state $\\ket{+}$ is expressed as $\\ket{\\text{1st}}-\\ket{\\text{3rd}}$ and its time evolution\nis driven by the band gap $\\Delta E_{3,1}$, which equals $4\\sqrt{2}J$ in the tight-binding limit. After a half period\n$\\pi \\hbar\/\\Delta E_{3,1}$ the state evolves to $\\ket{A}=\\ket{\\text{1st}}+\\ket{\\text{3rd}}$, leading to\ncoherent tunneling to the $A$-sublattice.\nSimilarly, it is possible to arrange the initial lattice depths so that the lowest Bloch state has the maximum overlap with\na certain superposition of $\\ket{\\text{1st}}$ and $\\ket{\\text{2nd}}$. 
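These tight-binding statements can be checked in a minimal model. The sketch below is our own illustration (not the band calculation used in the paper): it diagonalizes the standard three-band Bloch Hamiltonian of the Lieb lattice, assuming a single nearest-neighbor amplitude J = 1 and lattice constant a = 1; the helper name `lieb_bloch_hamiltonian` is ours.

```python
import numpy as np

def lieb_bloch_hamiltonian(kx, kz, J=1.0, a=1.0):
    """Three-band Bloch Hamiltonian of the Lieb lattice in the {A, B, C}
    sublattice basis: A couples to B along x and to C along z."""
    hx = -2.0 * J * np.cos(kx * a / 2)  # A-B coupling at quasimomentum kx
    hz = -2.0 * J * np.cos(kz * a / 2)  # A-C coupling at quasimomentum kz
    return np.array([[0.0, hx, hz],
                     [hx, 0.0, 0.0],
                     [hz, 0.0, 0.0]])

# At the zone center the three bands sit at -2*sqrt(2)J, 0, +2*sqrt(2)J,
# so the 1st-3rd gap is 4*sqrt(2)J and the 1st-2nd gap is 2*sqrt(2)J.
E0, _ = np.linalg.eigh(lieb_bloch_hamiltonian(0.0, 0.0))

# The middle (flat) band has zero amplitude on the A sublattice at every
# quasimomentum -- the localization property discussed in the text.
for kx, kz in [(0.3, 1.1), (0.7, -0.2)]:
    E, V = np.linalg.eigh(lieb_bloch_hamiltonian(kx, kz))
    flat_state = V[:, np.argmin(np.abs(E))]  # eigenvector of the E = 0 band
    assert abs(flat_state[0]) < 1e-12        # no weight on the A site
```

In this simple model the flat-band eigenvector is proportional to (0, hz, -hx), a pure B/C superposition with no A-component, mirroring the dark-state analogy invoked later in the text.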
A sudden potential change to the Lieb lattice\nthen drives oscillations between the $B$- and $C$-sublattices, whose frequency gives the band gap $\\Delta E_{2,1}$.\nWe fit these data with a damped sinusoidal oscillation and compare the extracted frequency with the result of\nsingle-particle band calculations (see Fig. \\ref{fig_tunneling}c). \nThe qualitative behavior is well reproduced, while quantitative discrepancies are found.\nWe attribute these discrepancies to interactions; a systematic study of the density dependence\nof the oscillation frequency is presented in Section IV of the Supplementary Information.\n\nWe further investigate the tunneling dynamics of the $\\ket{-}$ initial state by adding perturbations\nthat destroy the flatness of the second band. The flatness is robust against independent changes\nof the nearest-neighbor tunneling amplitudes $J_x$, $J_z$ along the $x$- and $z$-directions, and of the energy offset $E_{A}$,\njust as a dark state in a $\\Lambda$-type three-level system persists regardless of the laser intensities and\nthe detuning from the excited state.\nHowever, if an energy difference between the $B$- and $C$-sublattices is introduced -- the analogue of the two-photon Raman\noff-resonant case -- the flat band is destroyed.\nNote that a finite $E_B-E_C$ induces population in the $A$-sublattice even at $k=0$. On the other hand,\nthe direct diagonal tunneling between the $B$- and $C$-sublattices, which is another flat-band-breaking term\npresent in our system, keeps a dark state at $\\vect{k}=0$ provided $J_x = J_z$.\nWe create the energy difference by introducing the imbalance $\\Delta \\slong = \\slong^{(x)} - \\slong^{(z)}$.\nFigure \\ref{fig_tunneling}d shows the time dependence of the $A$-sublattice population for the $\\ket{-}$\ninitial state. 
It can be clearly seen that coherent tunneling dynamics starts to grow as the lattice parameters\ndeviate from the flat-band condition $\\Delta \\slong = 0$.\n\n\\section{Methods}\n\t\\subsection{Preparation of ${}^{174}$Yb BEC}\n\t\tAfter collecting about $10^7$ atoms in a magneto-optical trap operating on the intercombination transition,\n\t\tthe atoms are transferred to a crossed optical trap. We then perform evaporative cooling,\n\t\tresulting in an almost pure BEC of about $10^5$ atoms with no discernible thermal component.\n\t\t\n\t\tAll of the optical lattice experiments presented in this paper are subject to additional weak confinement\n\t\tdue to a crossed optical dipole trap operating at $532$~nm.\n\t\tThe Gaussian profiles of the laser beams for the trap and lattices impose a harmonic confinement on the atoms,\n\t\twhose frequencies are $(\\omega_{x'}, \\omega_{y'}, \\omega_z)\/2\\pi = (147, 37, 105)$~Hz at the lattice depths\n\t\tof $(\\slong, \\sshort, \\sdiag) = (8, 8, 9.5)$. Here, the $x'$- and $y'$-axes are tilted from the lattice axes ($x$ and $y$)\n\t\tby $45^\\circ$ in the same plane.\n\n\t\\subsection{Construction of the optical Lieb lattice}\n\t\tThe relative phases between the long and short lattices ($\\phi_x$, $\\phi_z$)\n\t\tcan be adjusted by changing the frequency difference between these lattice beams \\cite{Foelling2007}.\n\t\tThe proper frequencies that realize the Lieb lattice ($\\phi_x=\\phi_z=0$) are determined by analyzing\n\t\tthe momentum distribution of a $^{174}$Yb BEC released from the lattice,\n\t\tas in the case of the parameter $\\psi$ of the diagonal lattice (Supplementary Information).\n\t\tThe relative phase between the long and short lattices at the position of the atoms depends on the optical\n\t\tpath lengths from the common retro-reflection mirrors, and in general the two phases $\\phi_x$ and $\\phi_z$ are not equal.\n\t\tWe shift the frequency of the long-lattice laser by an acousto-optic modulator (AOM) inserted in the path for the 
$z$-axis\n\t\tin order to simultaneously realize $\\phi_x=\\phi_z=0$.\n\t\tThe optimal frequency difference is sensitive to the alignment of the lattice beams, and day-by-day calibration of\n\t\tthe phases is needed. The typical drift of the required RF frequency for the compensation AOM is within $5$~MHz.\n\n\t\tTo stabilize the phase $\\psi$, we construct a Michelson interferometer along the optical path of the diagonal lattice\n\t\twith a frequency-stabilized $507$~nm laser. The interferometer has two piezoelectric transducer (PZT)-mounted\n\t\tmirrors, one of which is shared with the lattice laser beam for phase stabilization, and the other of which shifts the phase\n\t\tover a range of $10 \\pi$ with the stabilization kept active. The short-term stability of $\\psi$ is estimated\n\t\tto be $\\pm 0.007 \\pi$.\n\t\tThe last few optics in front of the chamber are outside of the active stabilization,\n\t\twhich causes a slow drift of $\\psi$ due to environmental changes such as temperature.\n\t\tThe typical phase drift is $0.05 \\pi$ per hour, and all measurements of a sequential data set are finished\n\t\twithin $20$ minutes of the last phase calibration.\n\n\t\tAt the proper phase parameters $\\phi_x=\\phi_z=0$ and $\\psi=\\pi\/2$, the potential depths at the centers of\n\t\tthe sites become equal when $\\vlong=\\vshort=\\vdiag$. 
In this condition, however, the energy offset $E_A$\n\t\tbecomes lower than $E_B$ and $E_C$ because of the difference in the zero-point energies.\n\t\tWe determine the optimal $\\vdiag$ by a single-particle band calculation (see also the Supplementary Information for the derivation\n\t\tof the Hubbard parameters).\n\n\t\\subsection{Band occupation measurement and sublattice mapping}\n\t\tTo measure the quasimomentum distribution of atoms, we turn off all the lattice potentials with an exponential form\n\t\t\\begin{eqnarray}\n\t\t\tV(t) = \\left\\{\n\t\t\t\\begin{array}{cc}\n\t\t\t\tV(0) \\exp (-4t\/\\tau)\t&\t0 \\leq t < \\tau \\\\\n\t\t\t\t0\t&\tt \\geq \\tau\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\t\\end{eqnarray}\n\t\tThe choice of the time constant $\\tau$ involves a trade-off: $\\tau > 1$~ms causes considerable deformation\n\t\tof the distribution whereas $\\tau > 1.5$~ms is desirable to suppress interband transitions.\n\t\tDue to this non-adiabaticity, up to $20$\\% of atoms occupying a certain Brillouin zone are detected in\n\t\tits neighboring zones, depending on the shape of the observed quasimomentum distribution.\n\n\\section{Acknowledgements}\n\tWe thank K. Noda, K. Inaba, M. Yamashita, I. Danshita, S. Tsuchiya, C. Sato, S. Capponi, Z. Wei and Q. Zhou\n\tfor valuable discussions.\n\tThis work was supported by the Grant-in-Aid for Scientific Research of JSPS (No. 25220711) and\n\tthe Impulsing Paradigm Change through Disruptive Technologies (ImPACT) program.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nThe Berezin transform was introduced and studied in \\cite{Ber1}, \\cite{Ber2} for classical complex symmetric spaces and in \\cite{Ber-Cob}, \\cite{Eng}, \\cite{HKZ} for the Bergman, Hardy and Bargmann-Fock spaces. This quite relevant transform relates the covariant and the contravariant symbols of a bounded linear operator and is also useful in quantization theory and in the correspondence principle. 
In \\cite{Ber1}, the author used the correspondence principle to express this transform on the sphere and on the Lobachevsky plane through their corresponding Laplace-Beltrami operators.\nMore generally, the Berezin transforms stemming from systems of coherent states attached to generalized Bergman spaces on $\\mathbb{C}^n, n \\geq 1$, and on the hyperbolic complex unit ball $\\mathbb{B}^n$ were introduced in \\cite{Mou1} and \\cite{Gha-Mou}, respectively. There, these spaces arise as eigenspaces of a Schr\\\"odinger operator with a uniform magnetic field corresponding to Euclidean and hyperbolic Landau levels, and generalizations of the Berezin formula were given (see also \\cite{Pee} for Berezin formulas on $\\mathbb{B}^n$ and $\\mathbb{C}^n$). Yet another expression for the Berezin transform on $\\mathbb{B}^n$ in the magnetic setting was derived in \\cite{Bou-Mou} using the Fourier-Helgason transform; it involves the Wilson polynomials (\\cite{AAR}).\nIn this paper, we are interested in the $n$-dimensional complex projective space $\\mathbb{CP}^n, n \\geq 1$, endowed with its Fubini-Study metric, for which we construct a system of coherent states. Actually, the latter are attached to eigenspaces of a Schr\\\"odinger operator with a uniform magnetic field in $\\mathbb{CP}^n$, which we also refer to as generalized Bergman spaces. These coherent states are then used to define the Berezin transform in this magnetic setting, for which we provide a variational formula by means of the Fubini-Study Laplace operator. In particular, we recover the original formula of Berezin for the complex projective line, which corresponds here to the lowest spherical Landau level. Note that these findings complete the aforementioned analyses of magnetic Berezin transforms on phase spaces modeled by rank-one complex symmetric spaces. 
Moreover, the existence of such variational formulas is ensured by the fact that the algebra of bi-invariant operators on these phase spaces is generated by their Laplace-Beltrami operators, since the Berezin transforms commute with the translation operators of the underlying Lie groups (\\cite{Ber2}). At the physical level, it is worth noting that the variational formula is expressed through the Fubini-Study Laplace operator, which describes the motion of a free particle. Therefore, our strategy leading to this formula transfers the effect of the magnetic field at a higher spherical Landau level to the representing function only. As a matter of fact, it sheds light on the interplay between the geometry of the phase space on the one hand and the physical quantities on the other hand. This phenomenon has the same flavor as the diamagnetic inequalities (see e.g. \\cite{Sim}).\nThe paper is organized as follows. In the next section, we outline the formalism of coherent states quantization. In the third section, we recall from \\cite{Haf-Int} the spectral theory of the magnetic Schr\\\"odinger operator on $\\mathbb{CP}^n$ giving rise to the generalized Bergman spaces. In particular, we also exhibit the spectral function of the Fubini-Study Laplacian. In the fourth section, we attach a system of coherent states to each eigenspace and introduce the corresponding Berezin transform. The variational formula is then derived in the fifth section, where we recover Berezin's formula on the Riemann sphere.
Let $\\mathcal{A} \\subset L^{2}(X,d\\mu )$ be a closed subspace (possibly infinite-dimensional) with an orthonormal basis $\\left\\{ \\Phi _{j}\\right\\} _{j=0}^{\\infty }$ and let $(\\mathcal{H}, \\langle \\,\\mid \\,\\rangle)$ be an infinite-dimensional separable Hilbert space equipped with an orthonormal basis $\\left\\{ \\phi _{j}\\right\\} _{j=0}^{\\infty }$. Then the coherent states $\\left\\{ \\mid x>\\right\\} _{x\\in X}$ in $\\mathcal{H}$ are defined by\n\\begin{equation}\\label{CohSta}\n\\mid x>:=\\left( \\mathcal{N}\\left( x\\right) \\right) ^{-\\frac{1}{2}} \\sum_{j=0}^{+\\infty }\\Phi _{j}\\left( x\\right) \\mid \\phi _{j}>\\quad \n\\end{equation}\nwhere\n\\begin{equation}\\label{Norm}\n\\mathcal{N}\\left( x\\right) :=\\sum_{j=0}^{+\\infty }\\Phi _{j}\\left( x\\right) \\overline{\\Phi _{j}\\left( x\\right) }\n\\end{equation}\nis the normalization factor such that $\\left\\langle x\\mid x\\right\\rangle _{\\mathcal{H}}=1$. These states provide the following resolution of the identity operator:\n\\begin{equation}\\label{Res}\n\\mathbf{1}_{\\mathcal{H}}=\\int\\limits_{X}\\mid x> \\, < x \\mid \\mathcal{N}\\left( x\\right) d\\mu \\left( x\\right),\n\\end{equation}\nwhere the integral converges at least in the weak sense, that is, when evaluated against any $\\mid \\phi > \\, \\in \\mathcal{H}$. In this respect, the property \\eqref{Res} bridges between classical and quantum mechanics. In fact, the (Klauder-Berezin) coherent states quantization consists in associating to a classical observable (that is a function on $X$ with specific properties) the operator-valued integral: \n\\begin{equation*}\nA_f := \\int_X \\mid x> \\, < x \\mid f(x) \\mathcal{N}(x) d\\mu(x).\n\\end{equation*}\nHere, $f$ is called the upper or contravariant symbol of $A_f$, while for a bounded linear operator $A$, the function $x \\mapsto \\, < x \\mid A \\mid x > $ is referred to as the lower or covariant symbol of $A$. Consequently, we can associate to a classical observable $f$ the expectation $\\langle x \\mid A_f \\mid x \\rangle$ and as such, we get the Berezin transform of $f$ defined by: \n\\begin{equation*}\nB[f](x) := \\langle x \\mid A_f \\mid x \\rangle, \\quad x \\in X. 
\n\\end{equation*}\n\n\\section{Generalized Bergman spaces on $\\mathbb{CP}^n$}\nBelow, we recall from \\cite{Haf-Int} the spectral theory of the magnetic Schr\\\"odinger operator $\\Delta_{\\nu}$ in $\\mathbb{CP}^n, n \\geq 1$:\n\\begin{equation}\\label{Oper}\n\\Delta_{\\nu}:=4(1+|z|^2)\\left(\\sum_{i,j}^n(\\delta_{ij}+z_i\\bar{z_j})\\frac{\\partial^2}{\\partial z_i \\partial\\bar{z_j}}+\\nu\\sum_{j=1}^n \\left(z_j\\frac{\\partial}{\\partial z_j}-\\bar{z_j}\\frac{\\partial}{\\partial \\bar{z_j}}\\right)-\\nu^2\\right)+4\\nu^2,\n\\end{equation}\nwhere $2\\nu\\in\\mathbb{Z}^+$ (for the sake of simplicity, we omit the dependence of this operator on $n$). Namely, for $\\lambda\\in\\mathbb{C}$, we consider the equation\n\\begin{equation}\\label{3.1}\n\\Delta_{\\nu}F(z)=\\left(\\lambda^2-n^2+4\\nu^2\\right)F(z),\n\\end{equation}\nwhere $F$ is a bounded function on $\\mathbb{C}^n$, and define\n\\begin{equation}\\label{3.2}\nA^{\\nu}_{m}(\\mathbb{CP}^n) := \\{F:\\mathbb{C}^n\\to\\mathbb{C},\\ F\\text{ is bounded and }\\Delta_{\\nu}F(z)=\\left(\\lambda^2-n^2+4\\nu^2\\right)F(z)\\}.\n\\end{equation}\nIn order to recall the description of these eigenspaces, we need to fix some notation. For $p,q\\in\\mathbb{Z}_{+}$, let $H(p,q)$ denote the finite-dimensional space of all harmonic homogeneous polynomials in $\\mathbb{C}^n$ which are \nof degree $p$ in $z$ and of degree $q$ in $\\bar{z}$ and let $\\mathscr{H}(p,q)$ be the space of spherical harmonics in $H(p,q)$. 
Then, the spaces $\\mathscr{H}(p,q)$ are pairwise orthogonal in $L^2(S^{2n-1},dw)$ and the dimension of $\\mathscr{H}(p,q)$ is given by \n\\begin{equation}\\label{3.3}\nd(n,p,q)=\\frac{(p+q+n-1)(p+n-2)!(q+n-2)!}{p!q!(n-1)!(n-2)!}.\n\\end{equation}\nPick any orthogonal basis $(h_{p,q})$ of $\\mathscr{H}(p,q)$ and introduce the set \n\\begin{equation}\\label{3.4}\nD_{\\nu} := \\{\\lambda\\in\\mathbb{C},\\ \\frac{n\\pm \\lambda}{2}+\\nu\\in\\mathbb{Z}_{-}\\}\\cup \\{\\lambda\\in\\mathbb{C},\\ \\frac{n\\pm \\lambda}{2}-\\nu\\in\\mathbb{Z}_{-}\\}.\n\\end{equation}\nFrom \\cite{Haf-Int}, p.147, the eigenspace $A^{\\nu}_{m}(\\mathbb{CP}^n)=\\{0\\}$ if $\\lambda\\not\\in D_{\\nu}$; otherwise, it is nontrivial if and only if $\\lambda$ has the form $\\lambda=\\pm (2(m+\\nu)+n)$ for some $m\\in\\mathbb{Z}_{+}$. When this condition is fulfilled, any function $F(z)$ in $A^{\\nu}_{m}(\\mathbb{CP}^n)$ admits the expansion:\n\\begin{equation}\\label{3.5}\nF(z)=\\left(1+|z|^2\\right)^{-(m+\\nu)}\\sum_{\\mathclap{\\substack{0\\leq p\\leq m\\\\ 0\\leq q\\leq m+2\\nu}}} \\text{ }_2F_1\\left(p-m,q-m-2\\nu,n+p+q;-|z|^2\\right)h_{p,q}(z,\\bar{z}),\n\\end{equation}\nwhere $_2F_1(a,b,c;x)$ is the Gauss hypergeometric function (\\cite{AAR}) and $F$ satisfies the growth condition\n\\begin{equation}\\label{3.6}\n\\lim_{r\\to\\infty}F(r\\omega)=\\sum_{0\\leq p\\leq m} (-1)^{m-p}\\frac{\\Gamma(m-p+1)\\Gamma(n+2p+2\\nu)}{\\Gamma(m+n+p+2\\nu)}h_{p,p+2\\nu}(\\omega,\\bar{\\omega}), \\quad z=r\\omega,\\ r>0, \\,\\omega \\in S^{2n-1}.\n\\end{equation}\nMoreover, $A^{\\nu}_{m}(\\mathbb{CP}^n)$ is finite-dimensional and its dimension is given by\n\\begin{equation}\\label{3.7}\n(2m+n+2\\nu)\\frac{\\Gamma(m+n)\\Gamma(m+n+2\\nu)}{n \\Gamma^2(n)\\Gamma(m+1)\\Gamma(m+2\\nu+1)}.\n\\end{equation}\nLet \n\\begin{equation}\\label{1.8}\nd\\mu_n(w)=\\frac{1}{(1+|w|^2)^{n+1}} dw\n\\end{equation}\nwhere $dw$ is the Lebesgue measure on $\\mathbb{C}^n$; then $A^{\\nu}_{m}(\\mathbb{CP}^n)$ admits a reproducing kernel given 
by\n\\begin{equation}\\label{3.10}\nK_m^{\\nu}(z,w) := c^{\\nu,n}_{m}\\left[\\cos d_{FS}(z,w)\\right]^{2\\nu}P_m^{(n-1,2\\nu)}(\\cos 2d_{FS}(z,w)),\n\\end{equation}\nwhere $d_{FS}(z,w)$ is the Fubini-Study distance:\n\\begin{equation}\\label{3.11}\n\\cos^2d_{FS}(z,w)=\\frac{|1+ \\langle z,w \\rangle|^2}{(1+|z|^2)(1+|w|^2)}\n\\end{equation}\nand \n\\begin{equation}\\label{Const}\nc^{\\nu,n}_{m} := \\frac{(2m+2\\nu+n)\\Gamma(m+n+2\\nu)}{\\pi^n\\Gamma(m+2\\nu+1)}. \n\\end{equation}\n\nWhen $\\nu=0$, the operator $\\Delta_{\\nu}$ in \\eqref{Oper} reduces to the Fubini-Study Laplacian:\n\\begin{equation}\\label{2.5}\n\\Delta_0=4(1+|z|^2)\\sum_{i,j}^n(\\delta_{ij}+z_i\\bar{z}_j)\\frac{\\partial^2}{\\partial z_i\\partial\\bar{z}_j}\n\\end{equation}\nwhich has a discrete spectral decomposition with eigenvalues $(-4k(k+n))_{k \\geq 0}$. Besides, each eigenspace is finite-dimensional and has an orthonormal basis given by homogeneous spherical harmonics of degree zero. \nIn particular, the kernel of the orthogonal projection from $L^2(\\mathbb{C}P^n, d_{FS})$ onto the $k$-th eigenspace reads:\n\\begin{equation}\\label{SpecFun}\n\\psi_n(k;z,w):= \\frac{(2k+n)\\Gamma(n+k)}{\\pi^n k!} \\frac{P_k^{n-1, 0}(\\cos 2d_{FS}(z,w))}{P_k^{n-1, 0}(1)}.\n\\end{equation}\nThus, the spectral theorem implies that for any suitable function $u$, the operator $u(-\\Delta_{\\textit{FS}})$ is an integral operator whose kernel is given by: \n\\begin{equation}\\label{Spect}\n\\sum_{k \\geq 0} u(4k(k+n))\\psi_n(k;z,w) .\n\\end{equation}\n\n\\section{Berezin transforms attached to spherical Landau levels}\nNow, we specialize the definition \\eqref{CohSta} of coherent states to the generalized Bergman space $A_m^{\\nu}(\\mathbb{CP}^n)$ by taking: \n\\begin{itemize}\n\\item $X = \\mathbb{CP}^n$ endowed with $d\\mu_n$. \n\\item $x \\rightarrow z \\in \\mathbb{CP}^n$.\n\\item $\\mathcal{A} \\rightarrow A_m^{\\nu}(\\mathbb{CP}^n)$. 
\n\\item An orthonormal basis $(\\Phi_{p,q,j}^{\\nu,m})_{p,q,j}$ of $A_m^{\\nu}(\\mathbb{CP}^n)$, where $1 \\leq j \\leq d(n,p,q)$ and $0 \\leq q \\leq m+2\\nu, 0 \\leq p \\leq m$. \n\\item The Hilbert space $\\mathscr{H}$ carrying the quantum states of some physical system; its basis $(\\phi_{p,q,j})_{p,q,j}$ will be specified when needed. \n\\end{itemize}\nWith these data, we define the following: \n\\begin{definition}\nFor any $n \\geq 1, 2\\nu \\in \\mathbb{Z}_+$ and any $m \\in \\mathbb{Z}_+$, the system $\\mid z; \\nu, m >$ of coherent states attached to $A_m^{\\nu}(\\mathbb{CP}^n)$ is defined by \n\\begin{equation*}\n\\mid z; \\nu, m > := (\\mathcal{N}^{\\nu,m}(z))^{-1\/2}\\sum_{\\mathclap{\\substack{0\\leq p\\leq m\\\\ 0 \\leq q \\leq m+2\\nu \\\\ 1 \\leq j \\leq d(n,p,q)}}} \\overline{\\Phi_{p,q,j}^{\\nu,m}(z)} \\mid \\phi_{p,q,j} >\n\\end{equation*}\nwhere $\\mathcal{N}^{\\nu,m}(z)$ is the normalizing factor given in \\eqref{Norm}. \n\\end{definition}\nObserve that $\\mathcal{N}^{\\nu,m}(z)$ is the diagonal of the reproducing kernel $K_m^{\\nu}(z,w)$ in \\eqref{3.10}: \n\\begin{equation*}\n\\mathcal{N}^{\\nu,m}(z) = \\sum_{\\mathclap{\\substack{0\\leq p\\leq m\\\\ 0 \\leq q \\leq m+2\\nu \\\\ 1 \\leq j \\leq d(n,p,q)}}} \\overline{\\Phi_{p,q,j}^{\\nu,m}(z)}\\Phi_{p,q,j}^{\\nu,m}(z) = K_m^{\\nu}(z,z)\n\\end{equation*}\nso that \n\\begin{equation*}\n\\mathcal{N}^{\\nu,m}(z) = \\frac{(2m+2\\nu+n)\\Gamma(m+n+2\\nu)}{\\pi^n\\Gamma(m+2\\nu+1)}P_m^{(n-1,2\\nu)}(1).\n\\end{equation*}\nUsing the special value (\\cite{AAR}):\n\\begin{equation}\\label{JacVal}\nP_m^{(n-1,2\\nu)}(1) = \\frac{(n)_m}{m!},\n\\end{equation}\nwe readily get \n\\begin{equation*}\n\\mathcal{N}^{\\nu,m}(z) = \\frac{(2m+2\\nu+n)\\Gamma(m+n+2\\nu)}{\\pi^{n}\\Gamma(m+2\\nu+1)}\\frac{(n)_m}{m!}.\n\\end{equation*}\nThe states defined above satisfy the resolution of the identity \n\\begin{equation*}\n{\\bf 1}_{\\mathscr{H}} = \\int_{\\mathbb{CP}^n} \\mid z; \\nu, m > \\, < z; \\nu, m \\mid 
\\mathcal{N}^{\\nu,m}(z)d\\mu_n(z)\n\\end{equation*}\nand allow for the quantization scheme described in Section 2. As a matter of fact, we can define the Berezin transform of a classical observable $f$ in the usual way: we first associate to $f$ the operator-valued integral \n\\begin{equation*}\nA_f = \\int_{\\mathbb{CP}^n} \\mid w; \\nu, m > \\, < w; \\nu, m \\mid f(w) \\mathcal{N}^{\\nu,m}(w)d\\mu_n(w)\n\\end{equation*}\nand then take the expectation $\\langle z; \\nu, m \\mid A_f \\mid z; \\nu, m \\rangle$ with respect to the coherent state $\\mid z; \\nu, m >$. \n\\begin{definition}\nThe Berezin transform $B_m^{\\nu}[f]$ of $f \\in L^{\\infty}(\\mathbb{CP}^n, d\\mu_n)$, attached to the generalized Bergman space $A_m^{\\nu}(\\mathbb{CP}^n)$, is defined by \n\\begin{equation}\\label{1.7}\nB_m^{\\nu}[f](z)= \\frac{c^{\\nu,n}_{m}}{P^{(n-1,2\\nu)}_{m}(1)} \\int\\displaylimits_{\\mathbb{C}^n}\\left(\\cos^2d_{FS}(z,w)\\right)^{2\\nu}\\left(P^{(n-1,2\\nu)}_{m}(\\cos 2d_{FS}(z,w))\\right)^2 f(w) d\\mu_n(w).\n\\end{equation}\n\\end{definition}\n\nNotice that the kernel of the Berezin transform $B_m^{\\nu}$ depends only on the geodesic distance $d_{FS}$ and is therefore an $SU(n+1,\\mathbb{C})$-biinvariant function. It follows that $B_m^{\\nu}$ commutes with the translation operators defined by group elements and, in turn, \nthat it is a function of the Fubini-Study operator $\\Delta_{FS}$ (\\cite{Ber2}, p.353). In the subsequent section, we determine this function explicitly, relying on the spectral theory of $\\Delta_{FS}$. \n\n\n\n\\section{Variational formula for the Berezin transform}\nWe seek a function $W = W_m^{\\nu}$ depending on $m,\\nu$ such that \n\\begin{equation*}\nB_m^{\\nu} = W(-\\Delta_{FS}). 
\n\\end{equation*}\nAppealing to \\eqref{Spect}, the function $W$ should solve the equation\n\\begin{equation*}\n \\frac{c^{\\nu,n}_{m}}{P^{(n-1,2\\nu)}_{m}(1)} \\left(\\cos^2d_{FS}(z,w)\\right)^{2\\nu}\\left(P^{(n-1,2\\nu)}_{m}(\\cos 2d(z,w))\\right)^2 = \\sum_{k=0}^{\\infty}W(\\lambda_k)\\psi_n(k,z,w)\n\\end{equation*}\nwhere $\\lambda_k := k(k+n)$. \nIn order to get a solution to this equation, we need to expand the product of Jacobi polynomials in the left-hand side as a series of Jacobi polynomials $(P_k^{(n-1,0)})_{k\\geq 0}$. To proceed, we make use of the following instance of formula (52) in \\cite{Sri}, p.4467: for any $\\alpha_1, \\alpha, \\beta > -1$, any $m, \\mu \\in \\mathbb{Z}_+$ and any $t \\in [0,1]$, one has:\n\\begin{align}\\label{5.8}\nt^{\\mu}P_{m}^{(\\alpha_1,\\beta)}&(1-2t)P_{m}^{(\\alpha_1,\\beta)}(1-2t)\\nonumber\\\\\n&=(\\alpha+1)_{\\mu}\\binom{\\alpha_1+m}{m}^2\n\\,\\,\\sum_{k=0}^{+\\infty}\\frac{(\\alpha+\\beta+2k+1)(-\\mu)_k}{(\\alpha\n+1)_k(\\alpha+\\beta+k+1)_{\\mu+1}}P_k^{(\\alpha,\\beta)}(1-2t)\\nonumber\\\\\n&F^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\n\\mu+1,\\ \\alpha+\\mu+1\\ :\\hfill -m\\ ,\\hfill \\alpha_1+\\beta+m+1\\ ,\\hfill -m\\ \n,\\hfill \\alpha_1+\\beta+m+1\\hfill \\\\ \n\\mu-k+1\\ ,\\hfill \\alpha+\\beta+\\mu+2+k\\ :\\hfill \\alpha_1+1\\ ,\\hfill \n\\alpha_1+1\n\\end{matrix}\n\\, \\middle\\vert \\,\n1,1\n\\right]\n\\end{align}\nwhere \n\\begin{equation*}\nF^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\na_1, a_2 :\\hfill b_1, b_2, b_3, b_4 \\\\ \nc_1, c_2 :\\hfill d_1, d_2\n\\end{matrix}\n\\, \\middle\\vert \\,\nx,y\n\\right] = \\sum_{s,l = 0}^{\\infty} \\frac{(a_1)_{l+s}(a_2)_{l+s}}{(c_1)_{l+s}(c_2)_{l+s}}\\frac{(b_1)_l(b_2)_l(b_3)_s(b_4)_s}{(d_1)_l(d_2)_s} \\frac{x^ly^s}{l!s!}\n\\end{equation*}\nis the Kamp\\'e de F\\'eriet function (\\cite{Man-Sri}). 
The outcome of our computations is recorded in the following proposition: \n\\begin{proposition}\nThe function $W = W_m^{\\nu}$ can be chosen as: \n\\begin{multline*}\nW(\\lambda) = \\gamma_{n,m,\\nu}\\frac{\\Gamma(R_n(\\lambda)+1)}{\\Gamma(n+R_n(\\lambda))\\Gamma(2\\nu-R_n(\\lambda)+1)\\Gamma(n+R_n(\\lambda)+2\\nu+1)} \n\\\\ \\sum_{s=0}^m \\frac{(-m)_s(2\\nu+1)_s(2\\nu+m+n)_s}{s! (2\\nu-R_n(\\lambda)+1)_{s}(n+2\\nu+R_n(\\lambda)+1)_{s}} \n{}_4F_3\\left[\n\\begin{matrix}\n-m,\\ 2\\nu+1+s,\\ 2\\nu+1+s,\\ 2\\nu+m+n \n\\\\ 2\\nu-R_n(\\lambda)+1+s, n+2\\nu+1+R_n(\\lambda)+s, 2\\nu+1\n\\end{matrix}; 1\\right],\n\\end{multline*}\nwhere \n\\begin{equation*}\n\\gamma_{n,m,\\nu} := (2m+2\\nu+n) (m+2\\nu+1)_{n-1} \\frac{((2\\nu+m)!)^2(n-1)!}{(n)_m m!} \n\\end{equation*}\nand \n\\begin{equation*}\nR_n(\\lambda) := \\frac{\\sqrt{n^2+\\lambda} - n}{2}, \\quad \\lambda \\geq 0.\n\\end{equation*}\nConsequently, the Berezin transform is given by $B_m^{\\nu} = W_m^{\\nu}(-\\Delta_{FS})$. \n\\end{proposition}\n\n\\begin{proof} \nSpecializing (\\ref{5.8}) with\n\\begin{equation*}\n\\mu= \\alpha_1 = 2\\nu, \\, \\, \\beta = n-1, \\, \\alpha =0, \\, t=\\cos^2d_{FS}(z,w), \n\\end{equation*}\nand using the symmetry relation $P_k^{(\\alpha,\\beta)}(-x) = (-1)^k P_k^{(\\beta,\\alpha)}(x)$ (\\cite{AAR}), we readily obtain\n\\begin{align*}\n\\left(\\cos^2d_{FS}(z,w)\\right)^{2\\nu}\\left(P^{(n-1,2\\nu)}_{m}(\\cos 2d_{FS}(z,w))\\right)^2 &=(1)_{2\\nu} \\binom{2\\nu+m}{m}^2\\sum_{k=0}^{\\infty}\\frac{(2k+n) (-2\\nu)_k}{(n)_k(n+k)_{2\\nu+1}}(-1)^k P_k^{(n-1,0)}(\\cos 2d_{FS}(z,w)) \\nonumber \n\\\\ & \\times F^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\n2\\nu+1,\\ 2\\nu+1 :\\hfill -m\\ ,\\hfill 2\\nu+m+n ,\\hfill -m\\ \n,\\hfill 2\\nu+m+n\\hfill \\\\ \n2\\nu-k+1\\ ,\\hfill n+2\\nu+k+1 :\\hfill 2\\nu+1\\ ,\\hfill 2\\nu+1\n\\end{matrix}\n\\, \\middle\\vert \\,\n1,1\n\\right]\\nonumber\\\\\n&=\\pi^{n}(1)_{2\\nu} \\binom{2\\nu+m}{m}^2\\sum_{k=0}^\\infty \\frac{(-1)^k(-2\\nu)_kk!}{(n)_k(n+k)_{2\\nu+1}(n+k-1)!} 
\\psi_n(k;z,w)\n\\\\&\\times F^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\n2\\nu+1,\\ 2\\nu+1 :\\hfill -m\\ ,\\hfill 2\\nu+m+n ,\\hfill -m\\ \n,\\hfill 2\\nu+m+n\\hfill \\\\ \n2\\nu-k+1\\ ,\\hfill n+2\\nu+k+1 :\\hfill 2\\nu+1\\ ,\\hfill 2\\nu+1\n\\end{matrix}\n\\, \\middle\\vert \\,\n1,1\n\\right]\n\\\\& =\\pi^{n}(1)_{2\\nu} \\binom{2\\nu+m}{m}^2\\sum_{k=0}^\\infty \\frac{(-1)^k(-2\\nu)_kk!}{(n)_k\\Gamma(n+k+2\\nu+1)} \\psi_n(k;z,w)\n \\nonumber\\\\\n&\\times F^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\n2\\nu+1,\\ 2\\nu+1 :\\hfill -m\\ ,\\hfill 2\\nu+m+n ,\\hfill -m\\ \n,\\hfill 2\\nu+m+n\\hfill \\\\ \n2\\nu-k+1\\ ,\\hfill n+2\\nu+k+1 :\\hfill 2\\nu+1\\ ,\\hfill 2\\nu+1\n\\end{matrix}\n\\, \\middle\\vert \\,\n1,1\n\\right].\n\\end{align*}\n\nObserve that the sum over $k$ terminates at $2\\nu$ and that \n\\begin{equation*}\n(-2\\nu)_k = (-1)^k \\frac{(2\\nu)!}{(2\\nu-k)!}. \n\\end{equation*}\nSimilarly, the Kamp\\'e de Feriet series terminates at $m$ and using the relation \n\\begin{equation*}\n(a)_{l+s} = (a+s)_l(a)_s\n\\end{equation*}\nsatisfied by the Pochhammer symbol, we derive \n\\begin{multline*}\nF^{2:2,2}_{2:1,1}\\left[\n\\begin{matrix}\n2\\nu+1,\\ 2\\nu+1 :\\hfill -m\\ ,\\hfill 2\\nu+m+n ,\\hfill -m\\ \n,\\hfill 2\\nu+m+n\\hfill \\\\ \n2\\nu-k+1\\ ,\\hfill n+2\\nu+k+1 :\\hfill 2\\nu+1\\ ,\\hfill 2\\nu+1\n\\end{matrix}\n\\, \\middle\\vert \\,\n1,1\n\\right] = \\sum_{l,s=0}^m \\frac{(2\\nu+1)_{l+s}(2\\nu+1)_{l+s}}{(2\\nu-k+1)_{l+s}(n+2\\nu+k+1)_{l+s}} \\\\ \\times \\frac{(-m)_l(2\\nu+m+n)_l(-m)_s(2\\nu+m+n)_s}{(2\\nu+1)_l(2\\nu+1)_s} \\frac{1}{l!s!}\n= \\sum_{s=0}^m \\frac{(-m)_s(2\\nu+1)_s(2\\nu+m+n)_s}{s! (2\\nu-k+1)_{s}(n+2\\nu+k+1)_{s}} \n\\\\ \\sum_{l=0}^m \\frac{(-m)_l ((2\\nu+1+s)_{l})^2 (2\\nu+m+n)_l}{(2\\nu-k+1+s)_{l}(n+2\\nu+k+1+s)_{l}(2\\nu+1)_l} \\frac{1}{l!}\n= \\sum_{s=0}^m \\frac{(-m)_s(2\\nu+1)_s(2\\nu+m+n)_s}{s! 
(2\\nu-k+1)_{s}(n+2\\nu+k+1)_{s}} \\\\\n{}_4F_3\\left[\n\\begin{matrix}\n-m,\\ 2\\nu+1+s,\\ 2\\nu+1+s,\\ 2\\nu+m+n \n\\\\ 2\\nu-k+1+s, n+2\\nu+1+k+s, 2\\nu+1 \n\\end{matrix}; 1\\right]\n\\end{multline*}\nAs a result, we should have \n\\begin{multline*}\nW(k(k+n)) = \\frac{((2\\nu)!)^2\\pi^n c^{\\nu,n}_{m}}{P^{(n-1,2\\nu)}_{m}(1)} \\binom{2\\nu+m}{m}^2 \\frac{k!}{(n)_k\\Gamma(2\\nu-k+1)\\Gamma(n+k+2\\nu+1)} \n\\\\ \\sum_{s=0}^m \\frac{(-m)_s(2\\nu+1)_s(2\\nu+m+n)_s}{s! (2\\nu-k+1)_{s}(n+2\\nu+k+1)_{s}} \n{}_4F_3\\left[\n\\begin{matrix}\n-m,\\ 2\\nu+1+s,\\ 2\\nu+1+s,\\ 2\\nu+m+n \n\\\\ 2\\nu-k+1+s, n+2\\nu+1+k+s, 2\\nu+1\n\\end{matrix}; 1\\right].\n\\end{multline*}\nKeeping in mind \\eqref{Const} and \\eqref{JacVal}, the last expression simplifies as \n\\begin{multline*}\nW(k(k+n)) = (2m+2\\nu+n) (m+2\\nu+1)_{n-1} \\frac{((2\\nu+m)!)^2}{m!} \\frac{k!}{(n)_k(n)_m\\Gamma(2\\nu-k+1)\\Gamma(n+k+2\\nu+1)} \n\\\\ \\sum_{s=0}^m \\frac{(-m)_s(2\\nu+1)_s(2\\nu+m+n)_s}{s! (2\\nu-k+1)_{s}(n+2\\nu+k+1)_{s}} \n{}_4F_3\\left[\n\\begin{matrix}\n-m,\\ 2\\nu+1+s,\\ 2\\nu+1+s,\\ 2\\nu+m+n \n\\\\ 2\\nu-k+1+s, n+2\\nu+1+k+s, 2\\nu+1\n\\end{matrix}; 1\\right].\n\\end{multline*}\nSolving the equation $k(k+n) = \\lambda$ in the variable $k \\geq 0$ for $\\lambda \\geq 0$, we are done. \n\\end{proof}\n\nWhen $m=0, n=1$, we retrieve Berezin's formula in the case of the Riemann sphere. Indeed, for these parameters, the function $W_m^{\\nu}$ reduces to \n\\begin{equation*}\nW(k(k+1)) = \\gamma_{1,0,\\nu} \\frac{1}{\\Gamma(2\\nu-k+1)\\Gamma(k+2\\nu+2)}. \n\\end{equation*}\nNow, recall the Weierstrass product for the Gamma function: \n\\begin{equation*}\n\\frac{1}{\\Gamma(s+1)} = e^{\\gamma s} \\prod_{p \\geq 1} \\left(1+\\frac{s}{p}\\right)e^{-s\/p}\n\\end{equation*}\nwhere $\\gamma$ is the Euler constant. 
\nIt follows that \n\\begin{equation*}\nW(k(k+1)) = \\gamma_{1,0,\\nu}e^{\\gamma(4\\nu+1)}\\prod_{p \\geq 1}\\left(1+\\frac{2\\nu-k}{p}\\right)\\left(1+\\frac{2\\nu+1+k}{p}\\right)e^{-(4\\nu+1)\/p}. \n\\end{equation*}\nWriting \n\\begin{equation*}\n\\left(1+\\frac{2\\nu-k}{p}\\right)\\left(1+\\frac{2\\nu+1+k}{p}\\right) = \\left(1+\\frac{2\\nu}{p}\\right)\\left(1+\\frac{2\\nu+1}{p}\\right) \\left(1-\\frac{k}{p+2\\nu}\\right)\\left(1+\\frac{k}{p+2\\nu+1}\\right)\n\\end{equation*}\nand using again the Weierstrass product \n\\begin{equation*}\ne^{\\gamma(4\\nu+1)}\\prod_{p \\geq 1}\\left(1+\\frac{2\\nu}{p}\\right)\\left(1+\\frac{2\\nu+1}{p}\\right)e^{-(4\\nu+1)\/p} = \\frac{1}{\\Gamma(2\\nu+1)\\Gamma(2\\nu+2)},\n\\end{equation*}\nwe get \n\\begin{equation*}\nW(k(k+1)) = \\frac{\\gamma_{1,0,\\nu}}{\\Gamma(2\\nu+1)\\Gamma(2\\nu+2)}\\prod_{p \\geq 1} \\left(1- \\frac{k(k+1)}{(p+2\\nu)(p+2\\nu+1)}\\right). \n\\end{equation*}\nAs a matter of fact, $W$ can be chosen as \n\\begin{equation*}\nW(\\lambda) = \\frac{((2\\nu)!)^2(2\\nu+1)}{\\Gamma(2\\nu+1)\\Gamma(2\\nu+2)}\\prod_{p \\geq 1} \\left(1- \\frac{\\lambda}{(p+2\\nu)(p+2\\nu+1)}\\right), \\quad \\lambda \\geq 0 \n\\end{equation*}\nso that (we use the identity $\\Gamma(s+1) = s\\Gamma(s)$)\n\\begin{equation*}\nB_0^{\\nu} = W(-\\Delta_{FS}) = \\prod_{p \\geq 1} \\left(1 + \\frac{\\Delta_{FS}}{(p+2\\nu)(p+2\\nu+1)}\\right).\n\\end{equation*}\nIdentifying $2\\nu$ with $1\/h$ in the notation of \\cite{Ber1} (see eq. (5.9) p. 171), we get Berezin's formula: \n\\begin{equation*}\nB_0^{\\nu} = \\prod_{p \\geq 1} \\left(1 + h^2\\frac{\\Delta_{FS}}{(1+ph)(1+(p+1)h)}\\right).\n\\end{equation*}\nA similar formula holds for $m=0$ and general $n \\geq 1$. 
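The Gamma-ratio identity behind this product manipulation can be checked numerically. The sketch below is our own sanity check (the truncation length P and the tolerance are arbitrary choices): it compares the truncated product prod_{p=1..P} (1 - k(k+1)/((p+2nu)(p+2nu+1))) with the closed form Gamma(2nu+1)Gamma(2nu+2)/(Gamma(2nu-k+1)Gamma(2nu+k+2)) for small integer values of 2nu and 0 <= k <= 2nu.

```python
import math

def lhs_product(lam, two_nu, P=200000):
    """Truncated product  prod_{p=1..P} (1 - lam / ((p + 2nu)(p + 2nu + 1)))."""
    out = 1.0
    for p in range(1, P + 1):
        out *= 1.0 - lam / ((p + two_nu) * (p + two_nu + 1))
    return out

def rhs_gamma(k, two_nu):
    """Closed form  Gamma(2nu+1)Gamma(2nu+2) / (Gamma(2nu-k+1)Gamma(2nu+k+2))."""
    return (math.gamma(two_nu + 1) * math.gamma(two_nu + 2)
            / (math.gamma(two_nu - k + 1) * math.gamma(two_nu + k + 2)))

for two_nu in (2, 3):
    for k in range(two_nu + 1):   # k <= 2nu keeps all Gamma arguments positive
        lam = k * (k + 1)
        assert abs(lhs_product(lam, two_nu) - rhs_gamma(k, two_nu)) < 1e-3
```

The truncation error of the partial product decays like lam/P, which is why a fairly large P is used for a loose tolerance.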
\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Supplementary Materials}\n\n\\subsection{Proof of~\\Cref{thm:biased-weighted-maj-vote-mistake-bound}}\n\n\\medskip\n\\noindent\\textbf{\\Cref{thm:biased-weighted-maj-vote-mistake-bound}} (Restated)\\textbf{.}\\emph{\n \\Cref{alg:biased-weighted-maj-vote} makes at most $e(\\Delta+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT})$ mistakes against any adversary.\n}\n\n\\medskip\n\n\\begin{proof}\nTo begin with, we show that if a mistake is made in round $t$, then the weights get updated such that $W_{t+1}\\leq W_t\\big(1-\\gamma\/(\\Delta+2)\\big)$. %\nMoreover, the algorithm penalizes an expert only if it made a mistake. In other words, the algorithm never over-penalizes experts who do not make a mistake.\n\nFirst, suppose a mistake is made on a true negative. In this case, $v_t$ is labeled as positive by $h_t$, so the total weight of experts predicting positive on $v_t$ is at least $W_t\/(\\Delta+2)$, and each of their weights is decreased by a factor of $\\gamma$. \nAs a result, we have $W_{t+1}\\leq W_t\\big(1-\\gamma\/(\\Delta+2)\\big)$.\nMoreover, for each classifier $h$ that gets penalized, we have $h(v_t)=+1$, so $v_t$ belongs to the positive region $S_h$, which implies that the initial node $u_t\\in N[v_t]$ is able to reach the positive region $S_h$. Therefore, our previous observation indicates that $u_t$ would have ended up being predicted as positive had it best responded to $h$, so $h$ had also made a mistake.\n\nNext, consider the case of making a mistake on a true positive. As in the realizable case, we argue that the agent has not moved from a different location to $v_t$ to get classified as negative, so $v_t=u_t$. Since the agent did not move, none of the vertices in $N[v_t]$ was labeled positive by the algorithm, implying that for each $x\\in N[v_t]$, the total weight of experts labeling $x$ as positive is less than $W_t\/(\\Delta+2)$. 
Therefore, taking the union of all $x\\in N[v_t]$, we conclude that the total weight of experts predicting negative on all $x\\in N[v_t]$ is at least $W_t\\Big(1-(\\Delta+1)\/(\\Delta+2)\\Big) = W_t\/(\\Delta+2)$. \nAll these experts are making a mistake as $v_t=u_t$ cannot reach the positive region of any of these experts, so they all end up classifying agent $u_t$ as negative. As a result, the algorithm cuts their weight by a factor of $\\gamma$, resulting in $W_{t+1}\\leq W_t-(\\gamma W_t)\/(\\Delta+2)$.\n\nLet $M=\\mathsf{Mistake}(T)$ denote the number of mistakes made by the algorithm. Since the initial weights are all set to 1, we have $W_0=|\\mathcal{H}|$. Together with the property that $W_{t+1}\\leq W_t\\left(1-\\frac{\\gamma}{\\Delta+2}\\right)$ on each mistake, we have $W_T\\leq |\\mathcal{H}|\\left(1-\\frac{\\gamma}{\\Delta+2}\\right)^M$.\n\nNow we show that $W_T\\ge \\gamma^{\\mathsf{OPT}}$. We have proved that whenever the algorithm decreases the weight of an expert, they must have made a mistake. However, it can be the case that an expert makes a mistake, but the algorithm does not detect that. In other words, the algorithm may under-penalize an expert, but it would never over-penalize. \nLet $h^\\star\\in\\mathcal{H}$ denote the best expert that achieves the minimum number of mistakes $\\mathsf{OPT}$. Suppose the algorithm detects $q$ of the rounds where $h^\\star$ makes a mistake, then we have $q\\leq \\mathsf{OPT}$. Therefore, after $T$ rounds, $W_T\\ge w_T(h^\\star)=\\gamma^{q}\\geq \\gamma^{\\mathsf{OPT}}$, since $0\\leq\\gamma\\leq 1$. 
Finally, we have:\n\\begin{align*}\n&\\gamma^{\\mathsf{OPT}}\\leq W_T\\leq |\\mathcal{H}|\\left(1-\\frac{\\gamma}{\\Delta+2}\\right)^M\\\\\n\\Rightarrow\\ &\\mathsf{OPT}\\cdot\\ln{\\gamma}\\leq \\ln{|\\mathcal{H}|}+M\\ln{\\Big(1-\\frac{\\gamma}{\\Delta+2}\\Big)}\\leq \\ln{|\\mathcal{H}|}-M\\frac{\\gamma}{\\Delta+2}\\\\\n\\Rightarrow\\ & M\\leq \\frac{\\Delta+2}{\\gamma}\\ln{|\\mathcal{H}|}-\\frac{\\ln{\\gamma}(\\Delta+2)}{\\gamma}\\mathsf{OPT}\\\\\n\\end{align*}\nBy setting $\\gamma=1\/e$, we bound the total number of mistakes as $M\\leq e(\\Delta+2)(\\ln{|\\mathcal{H}|}+\\mathsf{OPT})$.\n\\end{proof}\n\n\\subsection{Improving the Upper Bound}\n\\label{sec:improving-upper-bound}\nIn this section, we propose a pre-processing step to improve the mistake bound of~\\Cref{alg:halving} in some cases, depending on the structure of the underlying manipulation graph. We leave it open to get a general mistake bound that depends on other characteristics of the manipulation graph besides the maximum degree. Consider the case where the manipulation graph $G(\\mathcal{X}, \\mathcal{E})$ is a complete graph, and the hypothesis class $\\mathcal{H}$ includes all possible labelings of $\\mathcal{X}$, i.e. $|\\mathcal{H}|=2^{|\\mathcal{X}|}$. However,~\\Cref{prop:effective-hypothesis-class-complete-graphs} shows that all the examples $(u_t,y_t)$ arriving over time get labeled the same: either all positive or all negative. Therefore, the size of the \\emph{effective} hypothesis class is $2$.\n\n\\begin{proposition}\n\\label{prop:effective-hypothesis-class-complete-graphs}\nIf the manipulation graph $G(\\mathcal{X},\\mathcal{E})$ is a complete undirected graph, then all the examples arriving over time are labeled the same, i.e. all positive or all negative.\n\\end{proposition}\n\n\\begin{proof}\nConsider a hypothesis $h$ that labels at least one node $v\\in \\mathcal{X}$ as positive. Then any example $u_t$ arriving at time-step $t$ can reach $v$ and get classified as positive. 
Hence, $h$ classifies all the examples as positive. On the other hand, if $h$ labels all the nodes $v\\in \\mathcal{X}$ as negative, then it would classify all the examples arriving over time as negative.\n\\end{proof}\n\n\\Cref{alg:halving} has a mistake bound of $\\mathcal{O}(\\Delta \\ln|\\mathcal{H}|)$ in the realizable case. However, when the manipulation graph is complete, we can get a mistake bound of $1$ as follows: initially starting with an all-positive classifier, if a mistake happens, switch to an all-negative classifier. The case of complete graphs shows that depending on the underlying manipulation graph, there can be a large gap between the upper bound given by~\\Cref{alg:halving} and the best achievable bound. \\Cref{alg:improvement-halving} is a pre-processing step to improve this gap. %\n\n\\begin{algorithm}\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\n\\Input{$G(\\mathcal{X},\\mathcal{E})$, hypothesis class $\\mathcal{H}$}\n\\For{$t=1,\\cdots,T$}{\nCommit to $h_t$ that labels all nodes as positive\\;\n\\tcc{Observe $(v_t,y_t)$}\n\\If{$y_t\\neq h_t(v_t)$}{\\tcc{when the first mistake happens, remove all the hypotheses that make a mistake}\n$\\mathcal{H}'\\gets\\mathcal{H}\\setminus \\{h:\\ \\exists v\\in N[v_t],\\ h(v)=+1\\}$\\;\nBreak\\;\n}\n}\nRun~\\Cref{alg:halving} on $(G, \\mathcal{H}')$\\;\n\\caption{A pre-processing step to improve the mistake bound of~\\Cref{alg:halving}}\n\\label[algo]{alg:improvement-halving}\n\\end{algorithm}\n\n\\Cref{alg:improvement-halving} initially starts with an all-positive classifier. When the first mistake happens on a node $v_t$, it means that $v_t$ and all its neighbors need to be classified as negative. Hence, we exclude all the hypotheses $h\\in \\mathcal{H}$ that classify any node $v\\in N[v_t]$ as positive from $\\mathcal{H}$. After filtering $\\mathcal{H}$, we run~\\Cref{alg:halving} on the new set $\\mathcal{H}'$. 
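To make the pre-processing step concrete, here is a minimal Python sketch. The adjacency-dict graph encoding, the function names, and the toy data are our own illustrations, not code from the paper:

```python
# Illustrative sketch of the pre-processing step, assuming hypotheses are
# callables over nodes and the manipulation graph is an adjacency dict.

def closed_neighborhood(graph, v):
    """N[v]: the node v together with its neighbors."""
    return {v} | set(graph.get(v, ()))

def preprocess(graph, hypotheses, examples):
    """Play the all-positive classifier until its first mistake at v_t,
    then keep only the hypotheses labeling all of N[v_t] negative."""
    for v_t, y_t in examples:
        if y_t == +1:
            continue  # the all-positive classifier is correct here
        nbhd = closed_neighborhood(graph, v_t)  # first mistake: a true negative
        return [h for h in hypotheses if all(h(v) == -1 for v in nbhd)]
    return list(hypotheses)  # the all-positive classifier never erred

# Toy manipulation graph: a path 0 - 1 - 2.
graph = {0: [1], 1: [0, 2], 2: [1]}
h_all_neg = lambda v: -1
h_pos_at_2 = lambda v: +1 if v == 2 else -1
H = [h_all_neg, h_pos_at_2]

# A true negative observed at node 0: N[0] = {0, 1}, and both hypotheses
# label those nodes negative, so both survive the filtering.
H_prime = preprocess(graph, H, [(0, -1)])
```

After this filtering step, the halving procedure would be run on the surviving class, exactly as in the last line of the pseudocode above.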
\nWe now restate the guarantee of \\Cref{alg:improvement-halving} that we presented in \\Cref{thm:mistake-bound-improved-halving}, and show its proof.\n\n\n\\medskip\n\\noindent\\textbf{\\Cref{thm:mistake-bound-improved-halving}} (Restated)\\textbf{.}\\emph{\n \\Cref{alg:improvement-halving} makes at most $\\min\\{n-\\delta, 1+\\Delta\\cdot \\min\\{\\ln|\\mathcal{H}|, n-\\delta-1\\}\\}$ mistakes, where $n=|\\mathcal{X}|$ and \n $\\delta$ is the minimum degree of $G(\\mathcal{X},\\mathcal{E})$.\n}\n\\medskip\n\n\\begin{proof}\nAfter the first mistake happens on $v_t$,~\\Cref{alg:improvement-halving} only keeps the hypotheses that label all the nodes in $N[v_t]$ as negative. Since $\\big|N[v_t]\\big|\\geq \\delta+1$, the number of such hypotheses is at most $2^{n-(\\delta+1)}$. Therefore, the filtered hypothesis set $\\mathcal{H}'$ satisfies $|\\mathcal{H}'|\\leq \\min\\{|\\mathcal{H}|, 2^{n-\\delta-1}\\}$. Hence, the number of mistakes that~\\Cref{alg:halving} makes on the filtered hypothesis set is at most:\n\\[1+\\Delta\\cdot\\ln(|\\mathcal{H}'|)\\leq1+\\Delta\\cdot\\min\\{\\ln|\\mathcal{H}|, \\ln(2^{n-\\delta-1})\\} \\leq 1+\\Delta\\cdot\\min\\{\\ln|\\mathcal{H}|, n-\\delta-1\\}\\]\n\nSuppose that $n-\\delta<1+\\Delta\\cdot\\min\\{\\ln|\\mathcal{H}|, n-\\delta-1\\}$. After the first mistake happens on $v_t$ and the labels of $N[v_t]$ get flipped, for the remaining graph $G\\setminus N[v_t]$, the labels can get flipped one by one whenever a mistake is observed. Therefore, the total number of mistakes is at most:\n\\[\\min\\{n-\\delta, 1+\\Delta\\cdot \\min\\{\\ln|\\mathcal{H}|, n-\\delta-1\\}\\}\\]\n \n\\end{proof}\n\n\\begin{remark}\nWhen the manipulation graph is dense, the mistake bound in \\Cref{thm:mistake-bound-improved-halving} can greatly outperform that given in \\Cref{thm:baseline-realizable-upper-bound}. 
For instance, in complete graphs where both the minimum degree and the maximum degree are $n-1$, \\Cref{thm:mistake-bound-improved-halving} guarantees that \\Cref{alg:improvement-halving} makes at most one mistake, whereas \\Cref{alg:halving} could end up making $n$ mistakes in total, one on each vertex.\n\\end{remark}\n\n\\subsection{Extension of the Deterministic Model to Directed Manipulation Graphs}\n\\label{sec:directed-graphs}\n\nSuppose that the manipulation graph $G(\\mathcal{X},\\mathcal{E})$ is a directed graph. We show how to modify~\\Cref{alg:halving,alg:biased-weighted-maj-vote} to work in the case of directed manipulation graphs and get a regret bound that depends on $\\Delta_{\\text{out}}$ instead of $\\Delta$, where $\\Delta_{\\text{out}}$ is the maximum out-degree of all the nodes $v\\in\\mathcal{X}$.\n\\begin{proposition}\nIn the realizable case,~\\Cref{alg:halving} can be modified to make at most $(\\Delta_{\\text{out}}+2)\\ln |\\mathcal{H}|$ mistakes. \n\\end{proposition}\n\\begin{proof}\nFirst, we need to change the threshold of the majority vote for classifying a node as positive from $1\/(\\Delta+2)$ to $1\/(\\Delta_{\\text{out}}+2)$. Now, if a mistake on a true negative happens, then a $1\/(\\Delta_{\\text{out}}+2)$ fraction of the remaining hypotheses gets discarded, namely the set of experts that predict the observable node as positive. On the other hand, if a mistake on a true positive happens, it means that the agent was classified as negative and did not move. Therefore, all the nodes in the reachable out-neighborhood were classified as negative by the algorithm. The number of reachable nodes from the starting node is at most $\\Delta_{\\text{out}}+1$, and for each of them, less than $(1\/(\\Delta_{\\text{out}}+2))|\\mathcal{H}|$ experts classified them as positive. 
Therefore, a total of $|\\mathcal{H}|\\Big(1-(\\Delta_{\\text{out}}+1)\/(\\Delta_{\\text{out}}+2)\\Big)=(1\/(\\Delta_{\\text{out}}+2))|\\mathcal{H}|$ remaining hypotheses are classifying the entire reachable set as negative, and they are all making a mistake. As a result, whenever a mistake happens, a $1\/(\\Delta_{\\text{out}}+2)$ fraction of the hypotheses can get discarded. This results in a mistake bound of $(\\Delta_{\\text{out}}+2)\\ln{|\\mathcal{H}|}$.\n\\end{proof}\n\nSimilarly, we can show that \\Cref{alg:biased-weighted-maj-vote} can be modified to get a mistake bound that depends on $\\Delta_{\\text{out}}$ instead of $\\Delta$, as shown in the following proposition. \n\\begin{proposition}\nIn the unrealizable case,~\\Cref{alg:biased-weighted-maj-vote} can be modified to make at most $e(\\Delta_{\\text{out}}+2)(\\ln |\\mathcal{H}|+\\mathsf{OPT})$ mistakes. \n\\end{proposition}\n\n\n\\subsection{Regret bound of $\\widetilde{\\mathcal{O}}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}} |\\mathcal{H}|\\right)$ against an adaptive adversary}\n\\label{sec:adaptive-adversaries}\n\nIn this section, we present an algorithm (\\Cref{alg:reduction-adaptive}) based on the idea of full-information acceleration, and prove a regret bound of $\\widetilde{\\mathcal{O}}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}} |\\mathcal{H}|\\right)$ against general adaptive adversaries in \\Cref{thm:regret-alg-adaptive-reduction}. 
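The core update of this reduction, probing with the all-positive classifier with probability $\gamma$ and feeding the importance-weighted loss estimate to exponential weights, can be sketched in Python as follows. This is an illustrative sketch under our own naming; the table `losses[t][h]` stands in for the strategic feedback $\ell(h,\mathsf{BR}_h(v_t),y_t)$, which a real run would observe online:

```python
import math
import random

def hedge_with_probe(hypotheses, losses, T, gamma, eta, rng=random.Random(0)):
    """With probability gamma, play the all-positive probe: agents do not
    move, every expert's loss is observed, and the importance-weighted
    estimate losses[t][h] / gamma is unbiased.  Otherwise sample an expert
    proportionally to its exponential weight."""
    w = {h: 1.0 for h in hypotheses}
    total_loss = 0.0
    for t in range(T):
        if rng.random() < gamma:
            for h in hypotheses:
                w[h] *= math.exp(-eta * losses[t][h] / gamma)
            total_loss += 1.0  # worst case: the probe misclassifies a negative
        else:
            W = sum(w.values())
            r, acc = rng.random() * W, 0.0
            for h in hypotheses:  # sample h with probability w[h] / W
                acc += w[h]
                if acc >= r:
                    total_loss += losses[t][h]
                    break
    return total_loss, w
```

On a toy instance where one expert never errs, the exponential weighting drives all the relative weight toward that expert while the probe rounds contribute the $\gamma T$ overhead in the regret bound.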
The proof of this theorem requires a more careful analysis of the difference between the estimated loss sequence and the actual loss sequence using martingale difference sequences, which borrows similar ideas from \\citet{mcmahan2004online}.\n\n\\begin{algorithm}[!ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\nInitialize $w_1(h)\\gets1,\\ \\forall h\\in\\mathcal{H}$\\;\nInitialize step size $\\eta\\gets \\sqrt{\\frac{8\\ln|\\mathcal{H}|}{T}}$, exploration coefficient $\\gamma\\gets T^{-\\frac{1}{4}}\\ln^{\\frac{1}{4}}(T|\\mathcal{H}|)$\\;\nLet $h^+$ be an all-positive classifier\\;\n\\For{$t\\in [T]$}{\n\\tcc{Commit to a distribution $\\mathcal{D}_t$ defined as follows, then draw classifier $h_t\\sim\\mathcal{D}_t$}\n Let $\\mathcal{D}_t$ be a distribution over $\\mathcal{H}\\cup\\{h^+\\}$ specified by probabilities $p_t(h^+)=\\gamma$, and $p_t(h)=(1-\\gamma)\\frac{w_t(h)}{W_t}$ for all $h\\in\\mathcal{H}$, where $W_t=\\sum_{h'\\in\\mathcal{H}} w_t(h')$\\;\n \\tcc{Observe agent $(v_t,y_t)$.}\n \\tcc{Construct an estimated loss vector and use it to update the weights:}\n \\For{$h\\in \\mathcal{H}$}{\n $\\hat{\\ell}_t(h)\\gets\\frac{\\ell(h,\\mathsf{BR}_h(v_t),y_t)\\cdot\\indicator{h_t=h^+}}{\\gamma}$\\;\n $w_{t+1}(h)\\leftarrow w_t(h) e^{-\\eta \\cdot\\hat{\\ell}_t(h)}$.\n }\n}\n\\caption{Randomized algorithm against adaptive adversaries}\n\\label[algo]{alg:reduction-adaptive}\n\\end{algorithm}\n\n\n\n\n\n\\begin{theorem}\n \\Cref{alg:reduction-adaptive} achieves a regret of $\\widetilde{\\mathcal{O}}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}} |\\mathcal{H}|\\right)$ against any adaptive adversary.\n \\label{thm:regret-alg-adaptive-reduction}\n\\end{theorem}\n\\begin{proof}\n Similar to the proof of \\Cref{thm:regret-alg-oblivious}, we first show that at every round $t$ and for all experts $h\\in\\mathcal{H}$, $\\hat{\\ell}_t(h)$ is an unbiased estimate of $\\ell_t(h)$, where $\\ell_t(h)=\\ell(h,\\mathsf{BR}_{h}(u_t),y_t)$ is the 
true loss of $h$. \n Since we are dealing with adaptive adversaries, we show that for every $h\\in\\mathcal{H}$, $\\left(\\hat{\\ell}_t(h)-\\ell_t(h)\\right)_{t=1}^T$ is a Martingale Difference Sequence: let $\\mathcal{F}_t$ denote the $\\sigma$-algebra generated by the randomness up to time $t$, then\n \\begin{align}\n \\E\\left[\\left.\\hat{\\ell_t}(h)-\\ell_t(h)\\right|\\mathcal{F}_{t-1}\\right]=&\\E\\left[\\left.\\gamma\\cdot \\frac{\\ell(h,\\mathsf{BR}_h(v_t),y_t)}{\\gamma}\n -\\ell(h,\\mathsf{BR}_h(u_t),y_t)\\ \n \\right|\\mathcal{F}_{t-1}\\right]=0.\n \\label{eq:adaptive-unbiased}\n \\end{align}\n Here, the first equality is because \n $\\hat{\\ell}_t\\neq0$ only when $h_t=h^+$ is an all-positive classifier, which happens with probability $\\gamma$.\n The second equality is because the agent would not move under $h^+$, resulting in $u_t=v_t$. Moreover, from the definition of $\\hat{\\ell}_t$, the term $\\hat{\\ell}_t(h)-\\ell_t(h)$ is bounded in absolute value by $\\frac{1}{\\gamma}$.\n Now we calculate the expected cumulative loss of \\Cref{alg:reduction-adaptive}. 
\n \\begin{align}\n \\E\\left[\\sum_{t=1}^T \\ell_t(h_t)\\right]=&\n \\E\\left[\\sum_{t=1}^T \\E\\left[\\ell_t(h_t)|\\mathcal{F}_{t-1}\\right] \\right]\\label{eq:tower-property}\\\\\n =&\\E\\left[\\sum_{t=1}^T \\E\\left[\\left.\\gamma\\cdot\\indicator{y_t\\neq1}+ (1-\\gamma)\\cdot\n \\E_{h\\sim \\frac{w_t(\\cdot)}{W_t}}\\left[\\ell_t(h)\\right]\\ \\right|\\ \\mathcal{F}_{t-1}\\right] \\right]\\nonumber\\\\\n \\le &\\E\\left[\\sum_{t=1}^T \\gamma+\\E_{h\\sim p_t'}\\left[\\ell_t(h)\\right] \\right],\\qquad\\qquad\\qquad\\small{ \\text{where }p_t'(h)\\triangleq \\frac{w_t(h)}{W_t},\\ \\forall h\\in\\mathcal{H}};\n \\label{eq:red-tmp-1}\\\\\n =&\\gamma T+ \\E\\left[\\sum_{t=1}^T \\E_{h\\sim p_t'}\\left[\\hat{\\ell}_t(h)\\right] \\right]+\\E\\left[\\sum_{t=1}^T \\E_{h\\sim p_t'}\\left[\\ell_t(h)-\\hat{\\ell}_t(h)\\right] \\right].\n \\label{eq:red-tmp-2}\n \\end{align}\n In the above equations, \\Cref{eq:tower-property} is from the tower property of conditional expectations,\n \\Cref{eq:red-tmp-1} is because $\\indicator{y_t\\neq1}\\le1$ and $1-\\gamma\\le1$, where we also use the tower property to remove the conditional expectations. Finally, \\Cref{eq:red-tmp-2} is because we add and subtract the second term. \n \n Now, for the third term in \\eqref{eq:red-tmp-2}, note that $p_t'$ is defined on $\\mathcal{F}_{t-1}$, so $\\left(\\E_{h\\sim p_t'}\\left[\\ell_t(h)-\\hat{\\ell}_t(h)\\right]\\right)_{t=1}^T$ is also a martingale difference sequence with respect to the filtration $\\left(\\mathcal{F}_{t}\\right)_{t=1}^T$. 
Again, from the tower property, this term is always zero:\n\n \\begin{align}\n \\E\\left[\\sum_{t=1}^T \\E_{h\\sim p_t'}\\left[\\ell_t(h)-\\hat{\\ell}_t(h)\\right] \\right]=\\E\\left[\\sum_{t=1}^T \\E\\left[\\E_{h\\sim p_t'}\\left[\\left.\\ell_t(h)-\\hat{\\ell}_t(h)\\ \\right|\\mathcal{F}_{t-1}\\right]\\right] \\right]=0.\n \\label{eq:concentration-loss}\n \\end{align}\n Since $p_1',\\cdots,p_T'$ are exactly the same as the strategies generated by running Hedge on the estimated loss sequence $\\hat{\\ell}_1,\\cdots,\\hat{\\ell}_T$, and the magnitudes of the losses are all bounded by $\\frac{1}{\\gamma}$, we have the following regret guarantee from~\\citet{freund1997decision}:\n \\begin{align}\n \\E\\left[\\sum_{t=1}^T \\E_{h\\sim p_t'}\\left[\\hat{\\ell}_t(h)\\right] -\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T \\hat{\\ell}_t(h^\\star)\\right]\\le {\\mathcal{O}}\\left(\\frac{1}{\\gamma}\\sqrt{T\\ln|\\mathcal{H}|}\\right).\\label{eq:hedge-guarantee}\n \\end{align}\n Putting \\Cref{eq:red-tmp-2,eq:concentration-loss,eq:hedge-guarantee} together gives us the bound on expected loss:\n \\begin{align}\n \\E\\left[\\sum_{t=1}^T \\ell_t(h_t) \\right]\\le &\\E\\left[\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T \\hat{\\ell}_t(h^\\star)\\right] +{\\mathcal{O}}\\left(\\gamma T+\\frac{1}{\\gamma}\\sqrt{T\\ln|\\mathcal{H}|}\\right).\\label{eq:tmp10}\n \\end{align}\n We define\n \\begin{align*}\n \\widehat{\\mathsf{OPT}}\\triangleq\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T \\hat{\\ell}_t(h^\\star),\n \\end{align*}\n then the above inequality \\eqref{eq:tmp10} implies \n \\begin{align}\n \\E[\\mathsf{Regret}]=\\E\\left[\\sum_{t=1}^T \\ell_t(h_t) -\\mathsf{OPT}\\right]\\le \\E\\left[\\widehat{\\mathsf{OPT}}-\\mathsf{OPT}\\right]+{\\mathcal{O}}\\left(\\gamma T+\\frac{1}{\\gamma}\\sqrt{T\\ln|\\mathcal{H}|}\\right).\n \\label{eq:tmp23}\n \\end{align}\n Now, the last step is to bound the expected difference between $\\widehat{\\mathsf{OPT}}$ and the true optimal 
$\\mathsf{OPT}=\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T\\ell_t(h^\\star)$. We have:\n \\begin{align*}\n \\E\\left[\\widehat{\\mathsf{OPT}}-\\mathsf{OPT}\\right]\n =\\E\\left[\\min_{\\hat{h}}\\max_{h}\\sum_{t=1}^T \\hat{\\ell}_t(\\hat{h})-\\ell_t(h)\\right]\n \\le\\E\\left[\\max_{h\\in\\mathcal{H}} \\sum_{t=1}^T\\hat{\\ell}_t(h)-\\ell_t(h)\\right].\n \\end{align*}\n Since $\\left(\\hat{\\ell}_t(h)-\\ell_t(h)\\right)_{t=1}^T$ is a martingale difference sequence for any fixed $h$, we use the Azuma-Hoeffding inequality together with the union bound to obtain\n \\begin{align*}\n \\Pr\\left[\\max_{h\\in\\mathcal{H}} \\sum_{t=1}^T\\hat{\\ell}_t(h)-\\ell_t(h)\\ge\\frac{1}{\\gamma}\\sqrt{2T\\ln\\left(\\frac{1}{\\delta}\\right)}\\right]\\le\\delta|\\mathcal{H}|.\n \\end{align*}\n Setting $\\delta=\\frac{1}{T|\\mathcal{H}|}$ gives us\n \\begin{align}\n \\E\\left[\\widehat{\\mathsf{OPT}}-\\mathsf{OPT}\\right]\\le\\E\\left[\\max_{h\\in\\mathcal{H}} \\sum_{t=1}^T\\hat{\\ell}_t(h)-\\ell_t(h)\\right]\\le \\frac{1}{\\gamma}\\sqrt{2T\\ln\\left(\\frac{1}{\\delta}\\right)}+\\delta|\\mathcal{H}|\\cdot T\\le\\mathcal{O}\\left(\\frac{1}{\\gamma}\\sqrt{2T\\ln\\left(T|\\mathcal{H}|\\right)}\\right).\n \\label{eq:opt-concentration}\n \\end{align}\n Finally, by putting \\Cref{eq:tmp23,eq:opt-concentration} together, and setting \n $\\gamma=T^{-\\frac{1}{4}}\\ln^{\\frac{1}{4}}(T|\\mathcal{H}|)$,\n we derive the desired regret bound:\n \\begin{align*}\n \\E[\\mathsf{Regret}]=\\E\\left[\\sum_{t=1}^T \\ell_t(h_t) -\\mathsf{OPT}\\right]\\le \\mathcal{O}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}}(T|\\mathcal{H}|)\\right).\n \\end{align*}\n \n\\end{proof}\n\n\n\n\\subsection{Strategic online linear classification}\n\\label{sec:strategic-perceptron}\n\nIn this section, we propose an algorithm for the problem of online linear classification in the presence of strategic behavior. 
\nIn this setting, each original example $\\mathbf{z}_t$ can move an $\\ell_2$ distance of at most $\\alpha$ to reach a new observable state $\\mathbf{x}_t$, and each example moves the minimum distance that results in a positive classification.\n\\citet{ahmadi2021strategic} propose an algorithm for the case where the original examples are linearly separable; in the case of inseparable examples, they get a mistake bound in terms of the hinge loss of \\emph{manipulated} examples, and leave it as an open problem to obtain a mistake bound in terms of the hinge loss of \\emph{original} examples.\n\nIn this section, we propose an algorithm for the inseparable case that obtains a bound in terms of the hinge loss of \\emph{original} examples. However, our mistake bound has an additional $\\mathcal{O}(\\sqrt{T})$ additive term compared to the bound obtained by \\citet{ahmadi2021strategic} in the separable case. The idea behind this algorithm is to use an all-positive classifier at random time steps to observe the un-manipulated examples. 
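This probing idea can be sketched in a few lines of Python. The encoding below is our own illustration, not the paper's code; only the probe rounds are simulated, since only they update the Perceptron state:

```python
import random

def probed_perceptron(examples, T, K, rng=random.Random(0)):
    """Partition [T] into K equal blocks; in each block choose one random
    probe round and play the all-positive classifier there, so the agent
    does not move and the original point z_t is observed.  Run a standard
    Perceptron update on these un-manipulated probe examples only.
    examples[t] = (z_t, y_t), with z_t a list of floats and y_t in {-1, +1}."""
    d = len(examples[0][0])
    w = [0.0] * d
    block = T // K
    for j in range(K):
        tau = rng.randrange(j * block, (j + 1) * block)  # probe round in B_j
        z, y = examples[tau]
        score = sum(wi * zi for wi, zi in zip(w, z))
        if y * score <= 0:  # probe example misclassified by current w
            w = [wi + y * zi for wi, zi in zip(w, z)]  # Perceptron update
    return w
```

On the remaining rounds, the learner would play the shifted classifier $\text{sgn}(\mathbf{w}_j^\mathsf{T}\mathbf{x}/|\mathbf{w}_j|-\alpha)$; those rounds are omitted from the sketch because they never change $\mathbf{w}_j$.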
\nUsing the un-manipulated examples, the standard Perceptron algorithm suffices to deal with inseparable data.\nFor simplicity, we present the algorithm for oblivious adversaries and remark that a similar bound could be obtained for the case of adaptive adversaries using similar techniques as in \\Cref{sec:adaptive-adversaries}.\n\n\\begin{algorithm}[!ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\nPartition the timeline $1,\\cdots, T$ into $K$ consecutive blocks $B_1,\\cdots,B_K$ where $B_j=[\\frac{(j-1)T}{K}+1,\\frac{jT}{K}]$\\;\nInitialize $w_1\\gets\\mathbf{0}$\\;\n\\For{$j\\in[K]$}{\n Sample $\\tau_j \\in B_j$ uniformly at random\\;\n\\For{$t\\in B_j$}{\n \\eIf{$t=\\tau_j$}{\n Use classifier $h_t\\gets h^+$, where $h^+(\\mathbf{x})=+1\\ \\forall \\mathbf{x}$\\;\n }\n {\n Use classifier $h_t\\gets h^j$, where $h^j(\\mathbf{x})=\\text{sgn}\\left(\\frac{\\mathbf{w}_j^\\mathsf{T} \\mathbf{x}}{|\\mathbf{w}_j|}-\\alpha\\right)$\\;\n }\n \\tcc{Observe example $(\\mathbf{x}_t,y_t)$}\n}\n\\If{$y_{\\tau_j}\\neq h_{\\tau_j}(\\mathbf{x}_{\\tau_j})$}{\n$\\mathbf{w}_{j+1}\\gets {\\mathbf{w}_j}+y_{\\tau_j}\\mathbf{x}_{\\tau_j}$\\;\n}\n}\n\\caption{Algorithm for online linear strategic classification when original examples are inseparable}\n\\label[algo]{alg:reduction-MAB-FIB-hinge-loss}\n\\end{algorithm}\n\n\\begin{theorem}\nLet $S=\\{(\\mathbf{z}_t,y_t)\\}_{t=1}^T$ be the set of original data points, where $\\max_t|\\mathbf{z}_t|\\le R $. 
For any $\\mathbf{w}^\\star$, \\Cref{alg:reduction-MAB-FIB-hinge-loss} with parameter $K=\\sqrt{T} R\\|\\mathbf{w}^\\star\\|$ satisfies\n\\begin{align}\n \\E\\left[\\mathsf{Mistake}(T)\\right]\\le 2L_{\\text{hinge}}(\\mathbf{w}^\\star,S)+2\\sqrt{T}R\\|\\mathbf{w}^\\star\\|,\n\\end{align}\nwhere the hinge loss is defined as\n$$L_{\\text{hinge}}(\\mathbf{w}^\\star,S)\\triangleq \\sum_{(\\mathbf{z}_t,y_t)\\in S} \\max\\left\\{0,1-y_t (\\mathbf{z}_t^\\mathsf{T} \\mathbf{w}^\\star)\\right\\}.$$\n\\end{theorem}\n\\begin{proof}\nWe use $\\ell_t(h)=\\indicator{y_t\\neq h(\\mathsf{BR}_{h}(\\mathbf{z}_t))}$ to denote the loss of classifier $h$ had agent $(\\mathbf{z}_t,y_t)$ best responded to $h$. \n\n In each block $B_j$, we have $\\ell_{\\tau_j}(h_{\\tau_j})=\\indicator{y_{\\tau_j}\\neq +1}\\le1$ on the all-positive step $\\tau_j$. On the other steps $t\\neq \\tau_j$, since $h^j$ is obtained by shifting the boundary $\\mathbf{w}_j$ by $\\alpha$,\n an agent can reach the positive region of $h^j$ if and only if its original features $\\mathbf{z}_t$ have a nonnegative dot product with $\\mathbf{w}_j$. 
Thus we have\n $$\\ell_t(h^j)=\\indicator{h^j(\\mathbf{x}_t)\\neq y_{t}}=\\indicator{\\text{sgn}\\left(\\frac{\\mathbf{w}_j^\\mathsf{T} \\mathbf{x}_t}{|\\mathbf{w}_j|}-\\alpha\\right)\\neq y_{t}}\n =\\indicator{\\text{sgn}\\left(\\frac{\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_t}{|\\mathbf{w}_j|}\\right)\\neq y_t}=\\indicator{\\text{sgn}\\left({\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_t}\\right)\\neq y_t}.$$\n\n As a result, we can bound the number of mistakes as follows:\n \\begin{align}\n \\E\\left[\\mathsf{Mistake}(T) \\right]\n =&\n \\E\\left[\\sum_{j=1}^K\n \\sum_{t\\in B_j}\\ell_t(h_t)\\right]\n {\\le}\\E\\left[\\sum_{j=1}^K\\left(1+\n \\sum_{t\\in B_j} \n \\ell_t(h^j)\n \\right)\\right]\\nonumber\\\\\n {=}& K+\\sum_{j=1}^K \\sum_{t\\in B_j} \\indicator{\\text{sgn}\\left({\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_t}\\right)\\neq y_t}\\nonumber\\\\\n {\\le}& K+\\frac{T}{K}\\sum_{j=1}^K\n \\E_{\\tau_j\\sim B_j}\\indicator{\\text{sgn}\\left({\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_{\\tau_j}}\\right)\\neq y_{\\tau_j}},\\label{tmp::5}\n \\end{align}\n where the last step is because $\\tau_j$ is sampled uniformly at random from $B_j$.\n\n Note that $w_1,\\cdots,w_K$ is obtained from running the standard Perceptron algorithm on examples $S_\\tau\\triangleq\\{(\\mathbf{x}_{\\tau_1},y_{\\tau_1}),\\cdots, (\\mathbf{x}_{\\tau_K},y_{\\tau_K})\\}$.\n Since at each $\\tau_j$, the learner uses an all-positive classifier to stop the agents from moving, we have $\\mathbf{x}_{\\tau_j}=\\mathbf{z}_{\\tau_j}$, and $S_\\tau=\\{(\\mathbf{z}_{\\tau_1},y_{\\tau_1}),\\cdots, (\\mathbf{z}_{\\tau_K},y_{\\tau_K})\\}$.\n From \\citet{block1962perceptron}, we have\n $$\\sum_{j=1}^K\n \\indicator{\\text{sgn}\\left({\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_{\\tau_j}}\\right)\\neq y_{\\tau_j}}\\le R^2\\|\\mathbf{w}^\\star\\|^2+2L_{\\text{hinge}}(\\mathbf{w}^\\star,S_\\tau).$$\n Taking the expectation over $\\tau_1,\\cdots,\\tau_K$, we have the following mistake bound on the standard perceptron algorithm:\n 
\n \\begin{align}\n \\frac{T}{K}\\sum_{j=1}^K\n \\E_{\\tau_j\\sim B_j}\\indicator{\\text{sgn}\\left({\\mathbf{w}_j^\\mathsf{T} \\mathbf{z}_{\\tau_j}}\\right)\\neq y_{\\tau_j}}\\le& \\frac{T}{K}R^2\\|\\mathbf{w}^\\star\\|^2+2\n \\frac{T}{K}\\sum_{j=1}^K\n \\E_{\\tau_j\\sim B_j}\n L_{\\text{hinge}}(\\mathbf{w}^\\star,(\\mathbf{z}_{\\tau_j},y_{\\tau_j}))\\nonumber\\\\\n =&\\frac{T}{K}R^2\\|\\mathbf{w}^\\star\\|^2+2\n \\frac{T}{K}\\sum_{j=1}^K \\frac{1}{|B_j|}\\sum_{t\\in B_j}L_{\\text{hinge}}(\\mathbf{w}^\\star,(\\mathbf{z}_t,y_t))\\label{tmp::8}\\\\\n =&\\frac{T}{K}R^2\\|\\mathbf{w}^\\star\\|^2+2\n \\sum_{j=1}^K \\sum_{t\\in B_j}L_{\\text{hinge}}(\\mathbf{w}^\\star,(\\mathbf{z}_t,y_t))\\label{tmp::4}\\\\\n =&\\frac{T}{K}R^2\\|\\mathbf{w}^\\star\\|^2+2 L_{\\text{hinge}}(\\mathbf{w}^\\star,S).\\label{tmp::3}\n \\end{align}\n In the above inequalities, \\Cref{tmp::8} follows from the fact that $\\tau_j$ is distributed uniformly at random in block $B_j$, and \\Cref{tmp::4} is because every block has size $|B_j|=\\frac{T}{K}$.\n \n Now we plug \\Cref{tmp::3} back into \\eqref{tmp::5} and obtain\n \\begin{align*}\n \\E\\left[\\mathsf{Mistake}(T) \\right]\\le K+\\frac{T}{K}R^2\\|\\mathbf{w}^\\star\\|^2+2L_{\\text{hinge}}(\\mathbf{w}^\\star,S).\n \\end{align*}\n Finally, letting $K=\\sqrt{T} R\\|\\mathbf{w}^\\star\\|$ yields the desired bound.\n\\end{proof}\n\n\\input{two-populations}\n\n\n\\subsection*{Acknowledgements}\nThis work was supported in part by the National Science Foundation under grant CCF-2212968 {and grant CCF-2145898}, by the Simons Foundation under the Simons Collaboration on the Theory of Algorithmic Fairness, by the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003, {by a C3.AI Digital Transformation Institute grant, and a Berkeley AI Research (BAIR) Commons award}. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred. 
Approved for public release; distribution is unlimited.\n\\bibliographystyle{plainnat}\n\n\\section{Conclusion and Open Problems}\n\\label{sec:open-problems}\nIn this paper, we studied the problem of online strategic classification under manipulation graphs. We showed fundamental differences between the strategic and non-strategic settings in both the deterministic and randomized models.\nIn the deterministic model, we showed that, in contrast to the nonstrategic setting where an $O(\\ln|\\mathcal{H}|)$ bound is achievable by the simple $\\mathsf{Halving}$ algorithm, in the strategic setting, mistake\/regret bounds are closely characterized by the maximum degree $\\Delta$, even when $|\\mathcal{H}|=O(\\Delta)$. In the randomized model, we showed that, unlike the nonstrategic setting where withholding random bits can benefit the learner, in the strategic setting, hiding the random choices necessarily incurs $\\Omega(\\Delta)$-multiplicative regret, whereas revealing the random choices to the strategic agents can provably bypass this barrier. We also designed generic deterministic algorithms that achieve $O(\\Delta)$-multiplicative regret and randomized algorithms that achieve $o(T)$ regret against both oblivious and adaptive adversaries.\n\nOur work suggests several open problems. The first is to design a deterministic algorithm in the realizable setting that achieves a mistake bound in terms of generic characteristics of the manipulation graph other than the maximum degree. Recall that our upper bound of $O(\\Delta\\ln|\\mathcal{H}|)$ and lower bound of $\\Omega(\\Delta)$ are not matching, so it would be interesting to tighten either the upper or lower bound in this setting. 
The second open question is to incorporate the graph structure into randomized algorithms and achieve an $o(T)$ regret bound that depends on the characteristics of the graph, such as the maximum degree.\n\n\\section{Deterministic Classifiers}\n\\label{sec:deterministic}\n\\subsection{Realizable Case}\n\\label{sec:realizable}\n\n\nIn the realizable case, we assume that there exists a perfect expert $h^\\star\\in\\mathcal{H}$ with zero mistakes, i.e., $\\mathsf{OPT}=0$. This implies that for all time steps $t\\in [T]$, we have $\\ell(h^\\star,\\mathsf{BR}_{h^\\star}(u_t),y_t)=0$. \nIn this case, our goal of bounding the Stackelberg regret coincides with bounding the number of mistakes:\n\\begin{align}\n \\mathsf{Mistake}(T)\\triangleq\\sum_{t=1}^T \\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t).\n\\end{align}\nFor notational convenience, let $S^\\star$ denote the set of nodes in $\\mathcal{X}$ with positive labels under $h^\\star$, namely $S^\\star\\triangleq\\left\\{u\\in\\mathcal{X}:\\ h^\\star(u)=+1\\right\\}$. Then realizability implies that $S^\\star$ must satisfy two properties: (1) all the true positives can reach $S^\\star$ within no more than one hop;\n(2) no true negatives can reach $S^\\star$ in one hop.\nWe formalize these two properties in \\Cref{prop:dominating-set}.\n\n\\begin{proposition}\n \\label{prop:dominating-set}\n In the realizable case, there exists a subset of nodes $S^\\star\\subseteq \\mathcal{X}$ such that $S^\\star$ is a \\emph{dominating set} for all the true positives $u_t$, i.e. $\\mathsf{dist}(u_t, S^\\star)\\leq 1$. Additionally, none of the true negatives $u_t$ are dominated by $S^\\star$, i.e. 
$\\mathsf{dist}(u_t, S^\\star)>1$, where $\\mathsf{dist}(u, S^\\star)$ represents the minimum distance from node $u$ to the set $S^\\star$.\n\n\\end{proposition}\n\\subsubsection{The failure of vanilla Halving}\nIn the problem of nonstrategic online classification with expert advice, the well-known $\\mathsf{Halving}$ algorithm achieves a mistake bound of $\\mathsf{Mistake}(T)=\\mathcal{O}(\\ln{|\\mathcal{H}|})$. In each iteration, $\\mathsf{Halving}$ uses the majority vote of remaining experts to make predictions on the next instance, which ends up reducing the number of remaining experts by at least half on each mistake. Since there are $|\\mathcal{H}|$ experts at the beginning and at least one expert at the end, the total number of mistakes is bounded by $\\mathcal{O}(\\ln{|\\mathcal{H}|})$. However, in the following example, we show that when agents are strategic, the vanilla $\\mathsf{Halving}$ algorithm may suffer from an infinite number of mistakes, as do\ntwo extensions of vanilla $\\mathsf{Halving}$ that consider the best response function before taking majority votes.\nMoreover, our construction indicates that these algorithms fail even when the sequence of agents is chosen by an oblivious adversary.\n\n\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}\n\\def 20 {20}\n\\def 2cm {2cm}\n\n\\def \\margin {8} %\n\n\\node[draw, circle] at (360:0mm) (ustar) {$x_0$};\n\\node at (352:2cm*0.3) {\\textcolor{red}{$-$}};\n\\foreach \\i [count=\\ni from 0] in {\\Delta,1,2,3,4}{\n \\node[draw, circle] at ({120-\\ni*36}:2cm) (u\\ni) {$x_{\\i}$};\n \\node at ({116-\\ni*36}:2cm*1.3) {\\textcolor{red}{$-$}};\n \\draw[thick] (ustar)--(u\\ni);\n}\n\\node[draw, circle] at ({225}:2cm) (ui) {$x_{i}$};\n\\node at ({225}:2cm*1.3) {\\textcolor{red}{+}};\n\\draw[thick] (ustar)--(ui);\n\\foreach \\i in {5,6,8,9}{\n \\node[circle] at ({120-\\i*36}:2cm) (aux) {\\phantom{$u_{5}$}};\n \\draw[dotted, thick, shorten >=1mm, shorten <=2mm] (ustar)--(aux);\n}\n\n\\draw[dotted, 
semithick, red] (-40:2cm\/2) arc[start angle=-40, end angle=-120, radius=2cm\/2];\n\\draw[dotted, semithick, red] (-150:2cm\/2) arc[start angle=-150, end angle=-230, radius=2cm\/2];\n\\end{tikzpicture}\n\\caption{Expert $h^i$}\n\\label{fig:lower-bound-deterministic}\n\\end{figure}\n\n\\begin{example}\n \\label{example:halving-fails}\nConsider the following manipulation graph $G(\\mathcal{X},\\mathcal{E})$ and hypothesis class $\\mathcal{H}$: $G(\\mathcal{X},\\mathcal{E})$ is a star that includes a central node $x_0$, and $\\Delta$ leaves $x_1,\\cdots,x_{\\Delta}$. Hypothesis class $\\mathcal{H}=\\{h^1,\\cdots,h^{\\Delta}\\}$, where each $h^i\\in \\mathcal{H}$ assigns positive to $x_i$, and negative to all other nodes in $\\mathcal{X}$ (see \\Cref{fig:lower-bound-deterministic}). The perfect expert is $h^\\star=h^j\\in \\mathcal{H}$ for some $j\\in[\\Delta]$ unknown to the learner.\n\\end{example}\n\nNow consider three algorithms: the vanilla $\\mathsf{Halving}$ algorithm and two strategic variants, one that incorporates the best response function into the vote and one that performs an expansion of the positive region on top of $\\mathsf{Halving}$.\n\n\\begin{enumerate}\n \\item \\textbf{Vanilla $\\mathsf{Halving}$}.\n \n Consider the following sequence of agents: at every time $t$, the same agent with initial position $u_t=x_0$ and label $y_t=+1$ arrives. We claim that the $\\mathsf{Halving}$ algorithm makes mistakes on each agent regardless of the total number of rounds executed. First, note that this sequence is realizable with respect to class $\\mathcal{H}$: for all $h^i\\in \\mathcal{H}$, we have $\\mathsf{BR}_{h^i}(x_0)=x_i$ and $h^i(x_i)=+1$, so each $h^i$ classifies $(x_0,+1)$ correctly in isolation. Therefore, any expert in $\\mathcal{H}$ achieves %\n zero mistakes on this sequence of agents. \n \n Now consider the vanilla $\\mathsf{Halving}$ algorithm. Initially, for each node $x\\in\\mathcal{X}$, there is at most one expert in $\\mathcal{H}$ that labels it as positive. 
Therefore, the majority vote classifier of $\\mathcal{H}$ labels every node as negative. In response to this all-negative majority vote classifier, the first agent $(x_0,+1)$ stays put and is mistakenly classified as negative.\n However, %\n we know that each classifier $h^i$ predicts correctly on $(x_0,+1)$. As a result, none of the experts get discarded. Therefore, a mistake is made by the learner, but no progress is made in terms of shrinking the set $\\mathcal{H}$. The same agent appears at every round, resulting in the $\\mathsf{Halving}$ algorithm %\n making mistakes in each round.\n\n \\item \\textbf{A strategic variant of $\\mathsf{Halving}$}.\n \n\n \n Now consider a different voting rule for taking the majority-vote classifier based on the best-response function: Let $\\overline{h}(u)=h(\\mathsf{BR}_{h}(u))$ for all $h\\in\\mathcal{H}$ and $u\\in\\mathcal{X}$, and suppose that the learner runs $\\mathsf{Halving}$ on the hypothesis class $\\overline{\\mathcal{H}}=\\{\\overline{h^1},\\cdots,\\overline{h^\\Delta}\\}$. Specifically, for each $h^i\\in \\mathcal{H}$, $\\overline{h^i}(x_0)=h^i(\\mathsf{BR}_{h^i}(x_0))=h^i(x_i)=+1$; therefore the majority-vote classifier predicts positive on $x_0$. On the other hand, the majority-vote classifier predicts negative on all the leaves.\n Now, suppose the adversary secretly chooses $j\\in[\\Delta]$ and constructs a sequence in which $h^j$ is realizable as follows: at each time step $t$, the adversary selects an example with true label $y_t=-1$ and initial position $u_t=x_i\\in \\mathcal{X}\\setminus\\{x_0,x_j\\}$.\n Note that all classifiers in ${\\mathcal{H}}$ except ${h^i}$ will classify $(u_t,y_t)$ correctly.\n However, the majority-vote classifier will make a mistake because $u_t$ can manipulate to $x_0$ and get classified as positive. Once the mistake is made, had the true location $u_t=x_i$ been observable, the learner could have shrunk the size of $\\mathcal{H}$ by discarding $h^i$. 
However, $u_t$ is hidden from the learner, so the learner would not know which classifier is making a mistake. Therefore, it cannot make progress by excluding at least one expert from $\\mathcal{H}$ in each round. \n\n \n\n \n\n \\item \\textbf{Another strategic variant of $\\mathsf{Halving}$.}\n \n The positive region of $h^{\\text{maj}}$ in the previous variation can be reached by all the nodes in the graph, which makes gaming too easy for the agents. Now, suppose the learner's goal is to shrink the positive region of $h^{\\text{maj}}$ and get a new classifier $h$, such that the positive region of $h$ can only be reached by the true positives under $h^{\\text{maj}}$, but none of the true negatives.\n \n We use the same example as above to show the failure of this algorithm because such $h$ does not exist. Recall that the positive region of $h^{\\text{maj}}$ contains only the central node $x_0$. \n Suppose such an $h$ exists; then $x_0$ cannot belong to the positive region of $h$, because it can be reached by all leaf nodes $x_i$, which are true negatives under $h^{\\text{maj}}$. \n In addition, no leaf node should be included in the positive region of $h$ either. This implies that the positive region of $h$ is empty, which contradicts the assumption that the true positive node $x_0$ can reach it.\n For this reason, the learner is unable to find an $h$ satisfying this property. %\n\n\\end{enumerate}\n\n \n\n\n\n\n\n\\Cref{example:halving-fails} indicates that taking majority votes fails in the strategic setting. One crucial point is that the leaves do not meet the threshold for \\emph{majority}, and therefore they are always negative under the majority vote classifier (whether we consider the best response function or not) and thus indistinguishable, weakening the learner's leverage to identify the optimal expert. 
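The vanilla-$\\mathsf{Halving}$ failure described above can also be checked mechanically. The following is a minimal Python sketch of the star instance (the dictionary encoding of experts, the tie-breaking in the best response, and all names are our own illustrative assumptions, not the paper's implementation):

```python
# Minimal sketch of the star example: vanilla Halving never eliminates
# an expert and errs on every round. All encodings below (dictionaries,
# tie-breaking by node name) are illustrative assumptions.

DELTA = 5
nodes = ["x0"] + [f"x{i}" for i in range(1, DELTA + 1)]
nbrs = {"x0": set(nodes)}  # closed neighborhoods N[v] of the star
for i in range(1, DELTA + 1):
    nbrs[f"x{i}"] = {f"x{i}", "x0"}

# Expert h^i labels x_i positive and every other node negative.
experts = [{v: (+1 if v == f"x{i}" else -1) for v in nodes}
           for i in range(1, DELTA + 1)]

def majority(H, v):
    # Vanilla (unbiased) majority vote: positive only with > 1/2 support.
    return +1 if sum(h[v] == +1 for h in H) > len(H) / 2 else -1

def best_response(label, u):
    # Move to a positively labeled node in N[u] if one exists, else stay.
    return next((v for v in sorted(nbrs[u]) if label(v) == +1), u)

H, mistakes = list(experts), 0
for _ in range(10):  # the same true-positive agent (x0, +1) keeps arriving
    v = best_response(lambda x: majority(H, x), "x0")
    mistakes += majority(H, v) != +1
    # Evaluated under its own best response, every expert is correct on
    # (x0, +1), so Halving has nothing to discard and H never shrinks.
    H = [h for h in H if h[best_response(lambda x: h[x], "x0")] == +1]

print(mistakes, len(H))  # a mistake every round, no expert removed
```

Running the loop for any number of rounds gives one mistake per round while all $\\Delta$ experts remain, matching the argument for the vanilla algorithm above.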
In fact, in this example, the only evidence for removing an expert is a false positive agent at the corresponding leaf node, so the learner should classify the leaves as positive in order to make progress. Therefore, one needs to lower the threshold for majority votes to increase the likelihood of false positives and make more room for improvement.\n\nIn the next section, we propose an algorithm based on the idea of \\emph{biased majority vote} in favor of positive predictions, which provably achieves finite mistake bounds against any adversarial sequence of strategic agents. We show that compared to the nonstrategic setting, the extra number of mistakes made by the learner is closely characterized by the maximum degree of the manipulation graph.\n\n\n\n\\subsubsection{Upper Bound: Biased Majority-Vote Algorithm}\nIn this section, we propose a biased version of the majority vote algorithm for the realizable strategic setting. The algorithm proceeds in rounds as follows: At each round $t$, a new agent arrives and is observed as $v_t$. From the remaining set of experts, if at least a $1\/(\\Delta+2)$ fraction of them classify $v_t$ as positive, then the algorithm predicts positive. If the algorithm makes a mistake, all the experts that predicted positive are removed from $\\mathcal{H}$. \nIf fewer than a $1\/(\\Delta+2)$ fraction of the experts classify $v_t$ as positive, the algorithm predicts negative. If the prediction was wrong, then each expert that labeled all the vertices in the neighborhood of $v_t$, i.e. 
$N[v_t]$, as negative gets removed from $\\mathcal{H}$.\nWe present this algorithm in \\Cref{alg:halving} and analyze its mistake bound in \\Cref{thm:baseline-realizable-upper-bound}.\n\n\n\n\n\\begin{algorithm}[!ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\n\\Input{$G(\\mathcal{X},\\mathcal{E})$, hypothesis class $\\mathcal{H}$}\n\\For{$t=1,2,\\cdots$}{\n \\tcc{learner commits to a classifier $h_t$ that is constructed as follows:}\n \\For{$v\\in \\mathcal{X}$}{\n \\eIf{$|\\{h\\in \\mathcal{H}:h(v)=+1\\}|\\geq |\\mathcal{H}|\/(\\Delta+2)$}{\n $h_t(v)\\leftarrow+1$\\;\n }\n {\n $h_t(v)\\leftarrow-1$\\;\n }\n }\n \\tcc{example $v_t$ is observed.}\n predict according to $h_t(v_t)$\\;\n \\tcc{If there was a mistake:}\n \\If{$h_t(v_t)\\neq y_t$}{\n \\eIf{$y_t=-1$}{\n $\\mathcal{H}\\leftarrow \\mathcal{H}\\setminus \\{h\\in \\mathcal{H}:h(v_t)=+1\\}$\\tcp*{at least $|\\mathcal{H}|\/(\\Delta+2)$ experts are removed.} \n }\n {\n $\\mathcal{H}\\leftarrow \\mathcal{H}\\setminus \\{h\\in \\mathcal{H}: \\forall x\\in N[v_t], h(x)=-1\\}$\\tcp*{at least $|\\mathcal{H}|\/(\\Delta+2)$ experts are removed.}\n }\n }\n}\n\\caption{Biased majority-vote algorithm.}\n\\label[algo]{alg:halving}\n\\end{algorithm}\n\n\\begin{theorem}\n\\label{thm:baseline-realizable-upper-bound}\nIf there exists at least one perfect expert under manipulation, \\Cref{alg:halving} makes at most $(\\Delta+2)\\ln{|\\mathcal{H}|}$ mistakes.\n\\end{theorem}\n\n\\begin{proof}\nWe show that whenever a mistake is made, at least a $1\/(\\Delta+2)$ fraction of the remaining experts get excluded from $\\mathcal{H}$, %\nbut the realizable classifier $h^\\star$ is never excluded.\n\nFirst, consider the case of making a mistake on a true negative, i.e. $y_t=-1$. In this case, at least $|\\mathcal{H}|\/(\\Delta+2)$ of the experts are predicting positive on $v_t$, and all of them are excluded from $\\mathcal{H}$. 
%\nOn the other hand, according to \\Cref{prop:dominating-set}, all nodes in the closed neighborhood $N[u_t]$ are labeled as negative by $h^\\star$. Since $v_t\\in N[u_t]$, this implies that $h^\\star$ must have labeled $v_t$ as negative, so $h^\\star$ will not be excluded.\n\nNext, consider the case of making a mistake on a true positive, i.e. $y_t=+1$. Since the algorithm is predicting negative on $v_t$, the agent would not have moved from a different location to $v_t$ only to be classified as negative. Hence, it must be the case that $v_t=u_t$. Since the agent did not move, none of the vertices in its neighborhood has been labeled positive by the algorithm, which means each of the vertices in $N[v_t]$ is labeled positive by fewer than $|\\mathcal{H}|\/(\\Delta+2)$ of the experts. Since there are at most $(\\Delta+1)$ vertices in $N[v_t]$, at least $|\\mathcal{H}|(1-(\\Delta+1)\/(\\Delta+2)) = |\\mathcal{H}|\/(\\Delta+2)$ experts are predicting negative on all vertices in $N[v_t]$, all of which will be excluded. \nOn the other hand, by \\Cref{prop:dominating-set} again, $u_t=v_t$ is dominated by the positive region of $h^\\star$, so at least one vertex in $N[u_t]$ is labeled positive by $h^\\star$, which implies that $h^\\star$ will not be excluded from $\\mathcal{H}$.\n\nIn either case, when a mistake is made, at least a $1\/(\\Delta+2)$ fraction of the remaining experts get excluded, but the perfect expert never gets excluded. Therefore, the total number of mistakes $M=\\mathsf{Mistake}(T)$ can be bounded as follows:\n\\begin{align*}\n&\\left(1-\\frac{1}{\\Delta+2}\\right)^M|\\mathcal{H}|\\ge 1\n\\quad\\Rightarrow\\quad M\\leq (\\Delta+2)\\ln|\\mathcal{H}|.\n\\end{align*}\n\n\\end{proof}\n\n\\paragraph{Improving the Upper Bound}\nIn~\\Cref{sec:improving-upper-bound}, we propose a pre-processing step (\\Cref{alg:improvement-halving}) that improves the mistake bound of \\Cref{alg:halving} when the underlying manipulation graph is dense, i.e., the minimum degree of all the vertices is large. 
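To make the prediction and elimination steps of the biased majority-vote algorithm concrete, here is a minimal Python sketch (the graph and expert encodings are illustrative assumptions, not the paper's code):

```python
# Sketch of the biased majority-vote rule: the positive threshold is a
# 1/(delta+2) fraction of experts rather than a strict majority.
# Dictionary encodings below are illustrative assumptions.

def biased_vote(H, v, delta):
    # Predict +1 iff at least a 1/(delta+2) fraction of experts label v +1.
    return +1 if sum(h[v] == +1 for h in H) >= len(H) / (delta + 2) else -1

def eliminate(H, v, y, nbrs):
    # Called only after a mistake on observed node v with true label y.
    if y == -1:  # false positive: drop every expert that labels v positive
        return [h for h in H if h[v] == -1]
    # false negative: drop every expert labeling all of N[v] negative
    return [h for h in H if any(h[x] == +1 for x in nbrs[v])]

# On the star example with DELTA = 3, every leaf now clears the biased
# threshold, so a true positive at the center has a positive neighbor
# to move to, unlike under the vanilla majority vote.
DELTA = 3
nodes = ["x0"] + [f"x{i}" for i in range(1, DELTA + 1)]
nbrs = {"x0": set(nodes)}
for i in range(1, DELTA + 1):
    nbrs[f"x{i}"] = {f"x{i}", "x0"}
experts = [{v: (+1 if v == f"x{i}" else -1) for v in nodes}
           for i in range(1, DELTA + 1)]

labels = {v: biased_vote(experts, v, DELTA) for v in nodes}
print(labels)  # the center stays negative, but every leaf is positive
```

Each leaf is labeled positive by exactly one of the three experts, and $1\/(\\Delta+2)=1\/5$ of the experts suffices for a positive prediction, so the committed classifier marks all leaves positive while the center remains negative.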
We achieve the following upper bound:\n\n\\begin{theorem}[Improving the number of mistakes]\n\\label{thm:mistake-bound-improved-halving}\n\\Cref{alg:improvement-halving} makes at most $\\min\\{n-\\delta, 1+\\Delta\\cdot \\min\\{\\ln|\\mathcal{H}|, n-\\delta-1\\}\\}$ mistakes, where $n=|\\mathcal{X}|$ and \n$\\delta$ is the minimum degree of $G(\\mathcal{X},\\mathcal{E})$.\n\\end{theorem}\n\nWe leave open the problem of obtaining a general instance-dependent upper bound that potentially depends on other characteristics of the manipulation graph besides the maximum\/minimum degree.\n\n\\subsection{Unrealizable Case}\nIn the unrealizable (agnostic) case, we remove the assumption that there exists a perfect classifier under manipulation. Our goal is to design an adaptive algorithm that does not make too many mistakes compared to $\\mathsf{OPT}$ (which is the minimum number of mistakes achieved by any classifier in $\\mathcal{H}$), without a priori knowledge of the value of $\\mathsf{OPT}$ or the optimal classifier that achieves this value. \nBefore presenting our algorithm, we first observe that for any classifier $h$ with positive region $S_h$, $h$ makes a mistake on a true positive agent $u$ if and only if $u$ cannot reach $S_h$, and $h$ makes a mistake on a true negative agent $u$ if and only if $u$ can reach $S_h$.\n\n\n\n\n\n\\subsubsection{Upper Bound: Biased Weighted Majority-Vote Algorithm}\n\\label{sec:unrealizable}\nNext, we propose an algorithm for the unrealizable setting. The algorithm is adapted from the \\emph{weighted majority vote} algorithm, which maintains a weight for each hypothesis in $\\mathcal{H}$, initially set to 1.\nSimilar to~\\Cref{alg:halving}, at each round $t$, a new example arrives and is observed as $v_t$. Let $W_+^t$ and $W_-^t$ denote the sum of weights of experts that predict $v_t$ as positive and negative, respectively. Let $W_t = W_+^t+W_-^t$. 
If $W_+^t\\geq W_t\/(\\Delta+2)$, the algorithm predicts %\npositive, otherwise it predicts %\nnegative. If the algorithm makes a mistake on a true negative, then we decrease the weights of all experts that predicted $v_t$ as positive by a factor of $\\gamma$. If the algorithm makes a mistake on a true positive, then we decrease the weights of all experts that labeled all the vertices in $N[v_t]$ as negative by a factor of $\\gamma$. We formally present this algorithm in \\Cref{alg:biased-weighted-maj-vote} and its mistake bound guarantee in \\Cref{thm:biased-weighted-maj-vote-mistake-bound}.\n\n\n\\begin{algorithm}[!ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\n\\Input{$G(\\mathcal{X},\\mathcal{E})$, $\\mathcal{H}$}\nSet weights $w_0(h)\\leftarrow 1$ for all classifiers $h\\in \\mathcal{H}$\\;\nLet $\\gamma=\\frac{1}{e}$\\;\n\\For{$t=1,2,\\cdots$}{\n \\tcc{the learner commits to a classifier $h_t$ that is constructed as follows:}\n \\For{$v\\in V$}{\n Let $W_t^+(v) = \\sum_{h\\in\\mathcal{H}:h(v)=+1}w_t(h)$, $W_t^-(v) = \\sum_{h\\in\\mathcal{H}:h(v)=-1}w_t(h)$, and $W_t = W_t^+(v)+W_t^-(v) = \\sum_{h\\in\\mathcal{H}}w_t(h)$\\;\n \\eIf{$W_t^+(v)\\geq W_t\/(\\Delta+2)$}{\n $h_t(v)\\leftarrow +1$\\;\n }\n {\n $h_t(v)\\leftarrow -1$\\;\n }\n }\n \\tcc{example $v_t$ is observed.}\n output prediction $h_t(v_t)$\\;\n \\tcc{If there was a mistake:}\n \\If{$h_t(v_t)\\neq y_t$}{\n \\eIf{$y_t=-1$}{\n \\tcc{\n penalize the experts that label $v_t$ as positive.}\n $\\mathcal{H'}\\leftarrow \\{h\\in \\mathcal{H}: h(v_t)=+1\\}$\\;\n } \n {\n \\tcc{\n penalize the experts that label all nodes in $N[v_t]$ as negative.}\n $\\mathcal{H'}\\leftarrow \\{h\\in \\mathcal{H}: \\forall x\\in N[v_t], h(x)=-1\\}$\\;\n }\n if $h\\in \\mathcal{H}'$, then $w_{t+1}(h)\\leftarrow\\gamma\\cdot w_t(h)$;\n otherwise, $w_{t+1}(h)\\gets w_t(h)$\\;\n }\n}\n\\caption{Biased weighted majority-vote 
algorithm.}\n\\label[algo]{alg:biased-weighted-maj-vote}\n\\end{algorithm}\n\\begin{theorem}\n\\label{thm:biased-weighted-maj-vote-mistake-bound}\n\\Cref{alg:biased-weighted-maj-vote} makes at most $e(\\Delta+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT})$ mistakes against any adversary.\n\\end{theorem}\n\n\\subsection{Lower Bound}\n\\label{sec:lower-bound-deterministic}\n\n\n\nIn this section, we show lower bounds on the number of mistakes made by any \\emph{deterministic} learner against an adaptive adversary in both realizable and agnostic settings. We present the lower bounds in \\Cref{thm:deterministic-lower-bound}.\n\n\\begin{theorem}\n\\label{thm:deterministic-lower-bound}\n There exists a manipulation graph $G(\\mathcal{X},\\mathcal{E})$, a hypothesis class $\\mathcal{H}:\\mathcal{X}\\to\\mathcal{Y}$, and an adaptive adversary, such that any deterministic learning algorithm has to make at least $\\Delta-1$ mistakes in the realizable setting and $\\Delta\\cdot\\mathsf{OPT}$ mistakes in the agnostic setting, where $\\mathsf{OPT}$ captures the minimum number of mistakes made by any classifier in the hypothesis class $\\mathcal{H}$. \n\\end{theorem}\n\n\\begin{proof}\nHere, we use the same manipulation graph $G$ and expert class $\\mathcal{H}$ as shown in \\Cref{example:halving-fails}. The manipulation graph $G(\\mathcal{X},\\mathcal{E})$ is a star that includes a central node $x_0$, and $\\Delta$ leaves $x_1,\\cdots,x_{\\Delta}$. Hypothesis set $\\mathcal{H}=\\{h^1,\\cdots,h^{\\Delta}\\}$, where each $h^i\\in \\mathcal{H}$ assigns $+1$ to $x_i$, and $-1$ to all other nodes in $G$ (\\Cref{fig:lower-bound-deterministic}). \n\nIn the agnostic setting, we construct an adaptive adversary that can always pick a bad example $(u_t,y_t)$ upon observing $h_t$, such that $h_t$ fails to classify this example correctly (i.e., $\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)=1$), but this example can be successfully classified by all but one expert. 
The detailed construction is as follows:\n\n\\begin{enumerate}\n \\item If $h_t(x_0)=+1$, then the adversary picks $(u_t = x_j,y_t=-1)$ for an arbitrary $j\\in[\\Delta]$. Since $u_t$ can move to $x_0$ and get classified as positive by $h_t$, we have $\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)=1$. \n On the other hand, all experts except for $h^j$ classify this example correctly.\n \\item If $h_t(x)=-1$ for all nodes $x\\in\\mathcal{X}$, then the adversary picks $(u_t=x_0,y_t=+1)$. In this case,\n $\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)=1$. However, $\\forall h^i\\in \\mathcal{H}$, we have $\\mathsf{BR}_{h^i}(u_t)=x_i$, so this example can receive a positive classification and therefore\n $\\ell(h^i,\\mathsf{BR}_{h^i}(u_t),y_t)=0$.\n \n \\item If $h_t(x_0)=-1$ and there exists $j\\in[\\Delta]$ such that $h_t(x_j)=+1$, then the adversary picks $(u_t=x_j,y_t=-1)$. In this case, $h_t$ will classify this example as a false positive. On the other hand, all experts except for $h^j$ will correctly classify it as negative.\n\\end{enumerate}\nFollowing the above construction, the learner is forced to make a mistake in every round; however, in each round, at most one of the experts makes a mistake, implying that the total number of mistakes made by all experts is at most $T$. Since the number of experts is $\\Delta$, by the pigeon-hole principle there exists an expert that makes at most $T\/\\Delta$ mistakes. Therefore, $\\mathsf{OPT}\\le T\/\\Delta$, implying a mistake lower bound of $\\Delta\\cdot\\mathsf{OPT}$.\n\nIn the realizable setting, we use the same construction but only focus on the first $\\Delta-1$ time steps, such that the learner is forced to make $\\Delta-1$ mistakes, but there exists at least one expert that has not made a mistake so far; let $h^i$ be one such expert. 
After the first $\\Delta-1$ steps, the adversary keeps showing the same agent $(x_i,+1)$ to the learner, such that expert $h^i$ is still realizable.\n\\end{proof}\nWe remark that \\Cref{thm:deterministic-lower-bound} implies that no deterministic algorithm is able to achieve $o(\\Delta)$ mistakes in the realizable setting or $o(\\Delta)$-multiplicative regret in the agnostic setting. \n{Moreover, the construction shows that any deterministic algorithm is forced to err at every round in the worst-case agnostic setting, resulting in an $\\Omega(T)$ regret as long as $\\Delta\\ge2$.}\n\n\\section{Fractional Classifiers}\n\n\\label{sec:fractional-model}\n\\subsection{Model}\n\nIn this section, we consider the randomized model where the learner uses a deterministic algorithm to output a probability distribution over classifiers at each round. After the learner commits to a distribution, an agent $(u_t,y_t)$ (which is chosen by an adversary) best responds to this distribution by selecting $v_t$ that maximizes the expected utility. 
In particular, let $P_{h_t}(v)\\in[0,1]$ denote the induced probability of $h_t$ classifying node $v$ as positive; then the agent's best response function can be written as:\n\\begin{align}\n v_t\\in \\mathsf{BR}_{h_t}(u_t)\\triangleq\\arg\\max_{v\\in \\mathcal{X}} \\Big[P_{h_t}(v)-\\mathsf{Cost}(u_t,v)\\Big].\\label{eq:fractional-agent-util}\n\\end{align}\nAs a result of manipulation, the observable feature $v_t\\in \\mathsf{BR}_{h_t}(u_t)$ is revealed to the learner, and the learner suffers an expected loss of \n\\begin{align}\n \\E\\left[\\ell(h_t,v_t,y_t)\\right]=\\Pr\\left[{y_t\\neq h_t(v_t)}\\right]=\\begin{cases}\n P_{h_t}(v_t),&\\text{if }{y_t=-1};\\\\\n 1-P_{h_t}(v_t),&\\text{if }{y_t=+1}.\n \\end{cases}\n \\label{eq:fractional-learner-util}\n\\end{align}\nFrom \\Cref{eq:fractional-agent-util,eq:fractional-learner-util}, we can see that the set of induced probabilities $P_{h_t}(u)$ for each $u\\in\\mathcal{X}$ serves as a sufficient statistic for both the learner and the agent.\nTherefore, instead of committing to a distribution and having the agents calculate the set of induced probabilities, the learner can directly commit to a \\emph{fractional classifier} $h_t$ \nthat explicitly specifies the probabilities $P_{h_t}(u)\\in[0,1]$ for each $u\\in\\mathcal{X}$. Then, after the agent best responds to these fractions and reaches $v_t$, a random label $h_t(v_t)$ is realized according to the proposed probability $P_{h_t}(v_t)$.\n\nWe remark that deterministic classifiers are special cases of fractional classifiers in which $P_h(u)\\in\\{0,1\\}$. 
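As a concrete illustration of the best response in \\Cref{eq:fractional-agent-util}, the following Python sketch computes $\\mathsf{BR}_{h_t}$ on a toy weighted star (the instance, the edge weight, and the numeric fractions are our own illustrative assumptions):

```python
# Sketch of the agent's best response to a fractional classifier:
# maximize P(v) - Cost(u, v), breaking ties toward the node with the
# higher probability of a positive label. The star instance, the edge
# weight W, and the numeric fractions are illustrative assumptions.

def best_response(P, cost, u, nodes):
    return max(nodes, key=lambda v: (P[v] - cost(u, v), P[v]))

W = 0.55  # every edge of the star has the same weight

def star_cost(u, v):
    # Shortest-path cost on a star: leaf-to-leaf routes pass the center.
    if u == v:
        return 0.0
    return W if "x0" in (u, v) else 2 * W

nodes = ["x0", "x1", "x2", "x3"]
P = {"x0": 0.9, "x1": 0.2, "x2": 0.1, "x3": 0.6}

# From leaf x1, moving to the center yields 0.9 - 0.55 = 0.35 > 0.2,
# so the agent manipulates; from x3, staying (0.6) beats any move.
print(best_response(P, star_cost, "x1", nodes),
      best_response(P, star_cost, "x3", nodes))
```

The sketch shows how the manipulation cost, not only the fractions themselves, determines whether an agent moves: the same classifier induces manipulation from one leaf but not from another.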
Since the experts in $\\mathcal{H}$ are all deterministic, the benchmark $\\mathsf{OPT}$, which is the minimum number of mistakes achieved by the best expert in hindsight, is still a deterministic value.\n\nIn this setting, we consider two cost functions: the \\emph{weighted-graph} cost function, where the manipulation cost {from $u$ to $v$} is defined as the total weight on the shortest path {from $u$ to $v$}; and the \\emph{free-edges} model, where the first hop is free and the second hop costs infinity.\nRecall that agents break ties by preferring features with higher expected values, so the agents in the \\emph{free-edges} cost model will move to a neighbor $v_t\\in N[u_t]$ with the highest probability of getting classified as positive.\n\n In \\Cref{sec:fractional-lower-bound}, we show that this type of randomness is of limited help because it can only reduce the $\\Delta$-multiplicative regret by a constant factor. This is evidenced by our lower bounds in \\Cref{thm:fractional-onehop-lower-bound,thm:fractional-multi-hops-lower-bound}, which state that any algorithm using this type of randomness must suffer $\\frac{\\Delta}{2}$-multiplicative regret in the free-edges model and $\\frac{{\\Delta}}{4}$-multiplicative regret in the weighted-graph model.\n We also complement this result by providing nearly-matching upper bounds in \\Cref{sec:fractional-upper-bound}.\n\n\\subsection{Lower Bound}\n\\label{sec:fractional-lower-bound}\n\\begin{theorem}\n\\label{thm:fractional-onehop-lower-bound}\nIn the model of ``free edges'' cost functions, for any sequence of fractional classifiers chosen by a deterministic algorithm, there exists an adaptive adversary such that the learner must make at least $\\frac{\\Delta}{2}\\cdot \\mathsf{OPT}$ mistakes in expectation.\n\\end{theorem}\n\n\\begin{proof}\nConsider a manipulation graph $G(\\mathcal{X},\\mathcal{E})$ that is a star with a central node $x_0$, and $\\Delta$ leaves $x_1,\\cdots,x_{\\Delta}$. 
Hypothesis set $\\mathcal{H}=\\{h^1,\\cdots,h^{\\Delta}\\}$, where each $h^i\\in \\mathcal{H}$ assigns positive to $x_i$ and negative to all other nodes in $G$, as shown in \\Cref{fig:lower-bound-deterministic}.\nWe construct an adversary that picks $(u_t,y_t)$ upon receiving the fractional classifier $h_t$ at each round, such that $h_t$ makes a mistake with probability at least 0.5 whereas all but one expert predicts correctly. Our detailed construction is as follows:\n\\begin{enumerate}\n \\item If $P_{h_t}(x_0)\\geq 0.5$, then the adversary picks $(u_t = x_j,y_t=-1)$ for an arbitrary $j\\in[\\Delta]$. Since $x_0\\in N[u_t]$ and $v_t$ is the node in $N[u_t]$ that achieves the largest success probability, we have $\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]=P_{h_t}(v_t)\\ge P_{h_t}(x_0)\\ge0.5$. On the other hand, only $h^j\\in \\mathcal{H}$ makes a mistake on $(x_j,-1)$, and all other experts classify it correctly.\n \n \\item If $P_{h_t}(x_0)<0.5$ but there exists $j\\in[\\Delta]$ such that $P_{h_t}(x_j)\\geq 0.5$, then the adversary picks $(u_t=x_j,y_t=-1)$. Since the closed neighborhood of $x_j$ only contains $\\{x_j,x_0\\}$, we have $v_t=u_t$ and $\\E[\\ell(h_t,v_t,y_t)]\\geq 0.5$. In addition, all experts but $h^j$ classify this example correctly.\n \n \\item If neither of the above two conditions holds, i.e., $P_{h_t}(v)<0.5$ for all nodes $v\\in \\mathcal{X}$, then the adversary picks $(u_t=x_0,y_t=+1)$. In this case, no matter how the agent chooses $v_t$, the probability $P_{h_t}(v_t)$ cannot exceed 0.5. As a result,\n the learner suffers an expected loss of $\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]=1-P_{h_t}(v_t)\\ge 0.5$. 
On the other hand, all experts classify this example correctly because $x_0$ can move to the corresponding leaf node and get classified as positive.\n\\end{enumerate}\nAs a result, the learner has an expected loss of at least $0.5$ in each round, which implies\n$$\\E[\\mathsf{Mistake}(T)]=\\E\\left[\\sum_{t=1}^T \\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)\\right]\\geq T\/2.$$ However, in each round, at most one expert makes a mistake. Following the same arguments as in \\Cref{thm:deterministic-lower-bound}, we conclude that $\\mathsf{OPT}\\le \\frac{T}{\\Delta}$. Putting it all together, the expected number of mistakes made by the learner is at least $$\\E[\\mathsf{Mistake}(T)]\\ge\\frac{T}{2}=\\frac{\\Delta}{2}\\cdot\\frac{T}{\\Delta}\\ge \\frac{\\Delta}{2}\\cdot \\mathsf{OPT}.$$\n\\end{proof}\n\n\n\\begin{theorem}\n In weighted graphs, for any sequence of fractional classifiers chosen by a deterministic algorithm, there exists an adaptive adversary such that the learner must make at least $\\frac{{\\Delta}}{4}\\cdot \\mathsf{OPT}$ mistakes in expectation.\n \\label{thm:fractional-multi-hops-lower-bound}\n\\end{theorem}\n\\begin{proof}\n Again, we consider the same graph structure as in \\Cref{thm:fractional-onehop-lower-bound}, where $G(\\mathcal{X},\\mathcal{E})$ is a star with central node $x_0$ and leaf nodes $x_1,\\cdots,x_{\\Delta}$. Assume each edge $e\\in\\mathcal{E}$ has the same weight $w(e)=w$, where $w\\triangleq 0.5+\\epsilon$ for an infinitesimal constant $\\epsilon$. 
\n Note that in this graph, no agent has the incentive to travel more than one edge, because it would cost them more than $1$.\n \n We work with the hypothesis set $\\mathcal{H}=\\{h^1,\\cdots,h^{\\Delta}\\}$, assuming each $h^i\\in \\mathcal{H}$ assigns positive to $x_i$ and negative to all other nodes in $\\mathcal{X}$.\n We construct an adversary that picks $(u_t,y_t)$ upon receiving the fractional classifier $h_t$ as follows, such that $h_t$ makes a mistake with probability at least $\\frac{1}{4}$, whereas all but one expert predicts correctly. Our detailed construction is as follows:\n\n Let $p=\\max_{x\\in\\mathcal{X}}P_{h_t}(x)$ denote the maximum fraction on any node. If $p<w$, then no move can be profitable for any agent (every move costs at least $w$ while the probability gain is at most $p<w$), so the adversary picks $(u_t=x_0,y_t=+1)$; the agent stays at $v_t=x_0$, and the learner suffers an expected loss of $$\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]=1-P_{h_t}(v_t)\\ge 1-p>1-w>\\frac{1}{4}.$$\n As for the experts, all of them classify this agent correctly. Therefore, it suffices to consider the case of $p\\ge w$ for the rest of the proof. We consider the following cases depending on where $p$ is achieved:\n\n \\begin{enumerate}\n \\item If $p$ is achieved at a leaf node $x_i$ (i.e., $P_{h_t}(x_i)=p$) for some $i\\in[\\Delta]$, then\n the adversary chooses $(u_t=x_i,y_t=-1)$. \n We claim that $u_t=v_t=x_i$: the agent is already placed at the node with the highest fraction, so it has no reason to pay a positive cost to reach a node with a weakly smaller fraction. As a result, we have $\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]=P_{h_t}(x_i)=p\\ge w>\\frac{1}{4}$.\n On the other hand, all but expert $h^i$ classify this agent correctly.\n \\item If $p$ is achieved at the central node $x_0$, i.e., $P_{h_t}(x_0)=p$, then every leaf node has a fraction of at most $p$. We first assume at least one leaf node $x_i$ ($i\\in[\\Delta]$) satisfies $P_{h_t}(x_i)< p-w$. In this case, the adversary chooses $(u_t=x_i,y_t=-1)$. \n Since $P_{h_t}(x_0)-\\mathsf{Cost}(u_t,x_0)=p-w > P_{h_t}(u_t)-\\mathsf{Cost}(u_t,u_t)$, the agent will select $v_t=x_0$ as the best response, and achieve a success probability of $P_{h_t}(x_0)=p$. 
Therefore, the learner has expected loss $\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]=p\\ge w>\\frac{1}{4}$. On the other hand, all but expert $h^i$ label this agent correctly.\n \\item \n Now we consider the last case where $p$ is achieved at the central node $x_0$ and all the leaf nodes have fractions at least $p-w$. In this case, no agent has the incentive to move regardless of their initial positions. The adversary can select the next agent as follows: if $1-p\\ge p-w$, then choose $(u_t=x_0,y_t=+1)$ and make the learner err with probability $1-P_{h_t}(x_0)=1-p$; otherwise, choose $(u_t=x_i,y_t=-1)$ for an arbitrary $i\\in[\\Delta]$ and make the learner err with probability $P_{h_t}(x_i)\\ge p-w$. In either case, the learner has to suffer an expected loss of $\\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]\\ge\\max\\{1-p,p-w\\}\\ge\\frac{1-w}{2}=\\frac{1}{4}-\\frac{\\epsilon}{2}$.\n As for the experts, at most one of them is making a mistake.\n \\end{enumerate}\n\nPutting together all the possible cases and letting $\\epsilon\\to0$, the learner is forced to make mistakes with probability at least $\\frac{1}{4}$ in each round, i.e., $\\sum_{t=1}^T \\E[\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)]\\geq T\/4$. However, in each round, at most one of the experts makes a mistake, implying that $\\mathsf{OPT}\\le\\frac{T}{\\Delta}$ as proved in \\Cref{thm:deterministic-lower-bound}. As a result, the total loss made by the learner is bounded as\n$$\\E[\\mathsf{Mistake}(T)]=\\E\\left[\\sum_{t=1}^T \\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)\\right]\\geq \\frac{T}{4}\n\\ge\\frac{\\Delta}{4}\\cdot\\mathsf{OPT}.$$ \nThe proof is thus complete.\n\\end{proof}\n\n\\begin{remark}\n In a related work,\n \\citet{Braverman2020TheRO} %\n showed that introducing randomization in their classification rule can increase the learner's classification accuracy, and the optimal randomized classifier has the structure that agents are better off not manipulating. 
\n In case (3) of the proof of \\Cref{thm:fractional-multi-hops-lower-bound}, we show that even when learners choose \n such an ``optimal'' classifier under which agents have no incentive to manipulate,\n the adversary is still able to impose a high misclassification error. This example shows the limitations of fractional classifiers.\n\\end{remark}\n\n\\subsection{Upper Bound}\n\\label{sec:fractional-upper-bound}\nIn this section, we show how to use the idea of \\Cref{alg:biased-weighted-maj-vote} to obtain upper bounds in the randomized classifiers model.\n\\begin{proposition}\n In the free-edges model, \\Cref{alg:biased-weighted-maj-vote} achieves a mistake bound of \n \\[\\mathsf{Mistake}(T)\\le e(\\Delta+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT}).\\]\n\\end{proposition}\n\\begin{proof}\n To prove this proposition, it suffices to show that if the learner uses deterministic classifiers as a special case of fractional classifiers, then the \\emph{free-edges} cost model and \\emph{unweighted graph} cost model result in the same best response functions. In fact, in both models, agents manipulate their features if and only if their original nodes are labeled as negative and there exists a neighbor that is labeled as positive. Therefore, the two cost models yield the same best response behaviors to deterministic classifiers. 
As a result, \\Cref{alg:biased-weighted-maj-vote} suffers from the mistake bound of $e(\\Delta+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT})$.\n\\end{proof}\n\nNow we consider weighted manipulation graphs.\nIn this setting, we can run \\Cref{alg:biased-weighted-maj-vote} \non the expanded manipulation graph $\\Tilde{G}$ that is an unweighted graph constructed from $G$ by connecting all pairs $u,v$ of vertices in $G$ such that $\\mathsf{Cost}(u,v)\\leq 1$.\nAs a result, we obtain a mistake bound in terms of $\\Tilde{\\Delta}$ instead of $\\Delta$, where $\\Tilde{\\Delta}$ is the maximum degree of $\\Tilde{G}$.\n\n\\begin{proposition}\n\\label{prop:fractional-multi-hop-upper-bound}\n Given a weighted manipulation graph $G$, running \\Cref{alg:biased-weighted-maj-vote} on the expanded graph $\\tilde{G}$ achieves a mistake bound of $\\mathsf{Mistake}(T)\\le e(\\Tilde{\\Delta}+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT})$, where $\\Tilde{\\Delta}$ is the maximum degree of $\\Tilde{G}$.\n\\end{proposition}\n\n\\begin{proof} \nAfter constructing $\\Tilde{G}$, we can see that under any deterministic classifier, a manipulation from $u$ to $v$ happens in the weighted graph $G$ if and only if the same manipulation happens in the unweighted graph $\\Tilde{G}$. Therefore, by running \\Cref{alg:biased-weighted-maj-vote} on $\\Tilde{G}$, we obtain a mistake bound in the original manipulation graph $G$ of $e(\\Tilde{\\Delta}+2)(\\ln|\\mathcal{H}|+\\mathsf{OPT})$, where $\\Tilde{\\Delta}$ is the maximum degree of $\\Tilde{G}$. \n\\end{proof}\n\n\n\\section{Introduction}\n\n\\emph{Strategic classification} concerns the problem of learning classifiers that are robust to gaming by self-interested agents~\\cite{10.1145\/2020408.2020495,Hardt2016}. An example is deciding who should qualify for a loan and who should be rejected. 
Since applicants would like to be approved for a loan, they may spend effort on activities that do not truly change their underlying loan-worthiness but may cause the classifier to label them as positive. An example of such efforts is holding multiple credit cards. Such gaming behaviors have nothing to do with their true qualification but could increase their credit score and therefore their chance of getting a loan. \n Strategic classification is particularly challenging in the \\emph{online} setting where data points arrive in an online manner. In this scenario, the way that examples manipulate depends on the \\emph{current classifier}. %\nTherefore, the examples' behavior changes over time, and it may differ from that of examples with similar features observed in previous rounds. Additionally, there is no useful source of unmanipulated data, since there is no assumption that the unmanipulated data comes from an underlying distribution.\n\n\n\nStrategic agents are modeled as having bounded manipulation power and a goal of receiving a positive classification. The set of plausible manipulations has been characterized in two different ways in the literature. The first model considers a geometric setting where each example is a point in the space that can move in a ball of bounded radius (e.g., %\n~\\citet{dong2018strategic,chen2020learning,haghtalab-ijcai2020,ahmadi2021strategic,ghalme2021strategic}). Another model is an abstraction of feasible manipulations using a \\emph{manipulation graph}, which was first introduced by \\citet{zhang2021incentive}. %\nWe follow the second model and formulate possible manipulations using a graph. \nEach possible feature vector is modeled as a node in this graph, and an edge from $\\vec{x}\\rightarrow \\vec{x'}$ in the manipulation graph implies that an agent with feature vector $\\vec{x}$ may modify their features to $\\vec{x'}$ if it helps them to receive a positive classification. 
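The manipulation-graph abstraction above can be made concrete with a small sketch (an illustration only, with hypothetical node names; in an unweighted graph an agent moves at most one hop, to a neighbor the classifier labels positive):

```python
# Illustrative sketch of a manipulation graph and the one-hop best response
# to a deterministic classifier (hypothetical nodes; not code from the paper).

def best_response(graph, h, u):
    """Node reported by an agent at u under deterministic classifier h."""
    if h[u] == +1:
        return u                    # already classified positive: stay put
    for v in sorted(graph[u]):
        if h[v] == +1:
            return v                # one-hop manipulation to a positive node
    return u                        # no positively labeled neighbor: stay

# Toy star graph: center x0 adjacent to leaves x1 and x2.
graph = {"x0": {"x1", "x2"}, "x1": {"x0"}, "x2": {"x0"}}
h = {"x0": -1, "x1": +1, "x2": -1}

assert best_response(graph, h, "x0") == "x1"  # gaming: x0 moves to x1
assert best_response(graph, h, "x2") == "x2"  # no positive neighbor: stays
```

The learner then only observes the reported node, which is what makes manipulated agents indistinguishable from truthful ones.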
We consider the problem of online strategic classification given an underlying manipulation graph. Our goal is to minimize the \\emph{Stackelberg regret}, which is the difference between the learner's cumulative loss and the cumulative loss of the best fixed hypothesis against the same sequence of agents, but best-responding to this fixed hypothesis. \n\nIn this paper, we consider three models with different levels of randomization. First, we consider the scenario where the learner can pick \\emph{deterministic} classifiers. A well-known deterministic algorithm in the context of online learning is the \\emph{halving} algorithm, which classically makes at most $O(\\ln|\\mathcal{H}|)$ mistakes when the target function belongs to class $\\mathcal{H}$. First, we show that when agents are strategic, the \\emph{halving} algorithm fails completely and may end up making mistakes at every round, even in this realizable case. Moreover, we show that no deterministic algorithm can achieve a mistake bound of $o(\\Delta)$ in the strategic setting, where $\\Delta$ is the maximum degree of the manipulation graph, even when $|\\mathcal{H}|=O(\\Delta)$. We complement this result with a general algorithm achieving a mistake bound of $O(\\Delta\\ln|\\mathcal{H}|)$ in the strategic setting.\nWe further extend this algorithm to achieve $O(\\Delta)$ multiplicative regret bounds in the non-realizable (agnostic) strategic setting, giving matching lower bounds as well.\n\nOur next model is a {\\em fractional} model where at each round the learner chooses a probability distribution over classifiers, inducing expected values on each vertex (the probability of each vertex being classified as positive), which the strategic agents respond to.\nThe agents' goal is to maximize their utility by reaching a state that maximizes their chance of getting classified as positive minus their modification cost. For this model, we show regret upper and lower bounds similar to the deterministic case. 
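The fractional best response just described can be sketched as follows (a minimal illustration with hypothetical fractions and costs; the agent moves to the node maximizing its probability of a positive label minus the manipulation cost):

```python
# Sketch of an agent's best response to a fractional classifier
# (hypothetical fractions and costs; not code from the paper).

def fractional_best_response(P, cost, u):
    """P[v]: probability that node v is classified positive;
    cost[(u, v)]: manipulation cost from u to v.
    Ties are broken toward higher probability, then toward staying put."""
    return max(P, key=lambda v: (P[v] - cost[(u, v)], P[v], v == u))

P = {"x0": 0.6, "x1": 0.3}

# Moving from x1 to x0 costs 0.2: utility 0.6 - 0.2 beats staying (0.3).
cost = {("x1", "x1"): 0.0, ("x1", "x0"): 0.2}
assert fractional_best_response(P, cost, "x1") == "x0"

# With a higher cost, staying put is better: 0.6 - 0.4 < 0.3.
cost = {("x1", "x1"): 0.0, ("x1", "x0"): 0.4}
assert fractional_best_response(P, cost, "x1") == "x1"
```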
\n\nIn the last model, the learner again picks a probability distribution over classifiers, but now, while the adversary who selects the next agent must respond to this probability distribution, the agent responds to the actual classifier drawn from this distribution. That is, in this model, the random draw occurs after the adversary's selection of the agent but before the agent responds, whereas in the fractional model the random draw occurs after the agent responds. Surprisingly, we show this model is %\nnot only more transparent to the agents, but also\nmore advantageous to the learner than the fractional model. \n{ We argue that transparency can make the learner and agents cooperate against the adversary in a way that would be more beneficial to both parties, which is an interesting phenomenon that differentiates the strategic setting from the nonstrategic one.} %\n{In this model, }we design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries. \nWe give a detailed overview of our results in~\\Cref{sec:overview-results}. \n\n\n\n\\section{Randomized Algorithms}\nIn this section, we propose another model of randomization. \n Unlike the fractional classifiers model discussed in \\Cref{sec:fractional-model}, we show that this randomized model induces a different type of manipulation behavior, which success probabilities (fractions) no longer suffice to characterize.\n In this model, the interaction between the learner, the adversary, and the agents proceeds as follows:\nAt each round $t$, the learner commits to a probability distribution $\\mathcal{D}_t$ over a set of deterministic classifiers $\\{h:\\mathcal{X}\\to\\mathcal{Y}\\}$, and promises to use $h_t\\sim\\mathcal{D}_t$. 
\nBased on this mixed strategy $\\mathcal{D}_t$ (and before the random classifier $h_t$ gets realized), the adversary specifies the next agent to be $(u_t,y_t)$.\nThen comes the most important step that differentiates this model from the {fractional classifiers} setting: the learner samples $h_t\\sim \\mathcal{D}_t$ and \\emph{releases it to the agent}, who then best responds to the true $h_t$ by modifying its features from $u_t$ to $v_t$.\nThe learner aims to minimize the (pseudo) regret with respect to class $\\mathcal{H}$:\n\\begin{align}\n \\E\\left[\\mathsf{Regret}(T)\\right]\\triangleq \\E\\left[\\sum_{t=1}^T\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)\\right]-\\min_{h^\\star\\in\\mathcal{H}}\\left[\\sum_{t=1}^T \\ell(h^\\star,\\mathsf{BR}_{h^\\star}(u_t),y_t)\\right].\n\\end{align}\n\n We show that, surprisingly, releasing the random choices to the agents can help the learner to surpass the\n {$\\Omega(\\Delta\\cdot\\mathsf{OPT})$} lower bound.\n In this model, we propose three algorithms that achieve $o(T)$ regret, which does not depend on $\\mathsf{OPT}$ or $\\Delta$.\n\n\n\n\\subsection{Between bandit and full-information feedback}\nBefore presenting our algorithms, we first investigate the feedback information available to the learner at the end of each round. \nAfter the agents respond, the learner observes not only the loss of the realized expert ($\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)$), but also the best response state $v_t=\\mathsf{BR}_{h_t}(u_t)$ and the true label $y_t$.\nHowever, because the original state $u_t$ is hidden, the losses of other experts $\\ell(h',\\mathsf{BR}_{h'}(u_t),y_t)$ for $h'\\neq h_t$ are not fully observable.\nFor this reason, the feedback structure is potentially richer than bandit feedback, which only contains $\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)$ for the realized expert $h_t$; but sparser than full-information feedback, which contains $\\ell(h',\\mathsf{BR}_{h'}(u_t),y_t)$ for all $h'\\in\\mathcal{H}$. 
\n\nNevertheless, we remark that the learner \nis capable of going beyond the bandit feedback using the additional information $(v_t,y_t)$. For instance, if $h'$ fully agrees with the realized $h_t$ on the entire 2-hop neighborhood of $v_t$, then $\\ell(h',\\mathsf{BR}_{h'}(u_t),y_t)=\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)$. Another scenario is when the agent ends up reporting truthfully ($u_t=v_t$), so the learner can explicitly calculate the best response $\\mathsf{BR}_{h'}(u_t)$ and the loss $\\ell(h',\\mathsf{BR}_{h'}(u_t),y_t)$ for all $h'\\in\\mathcal{H}$.\n\nIn \\Cref{sec:mixed-strategy-adaptive}, we consider a learning algorithm that discards additional information and only uses bandit feedback, which achieves $\\mathcal{O}\\left(\\sqrt{T |\\mathcal{H}|\\ln|\\mathcal{H}|}\\right)$ regret.\nTo remove the polynomial dependency on $|\\mathcal{H}|$, we propose a generic algorithmic idea that uses an all-positive classifier at random time steps to encourage the truthful reporting of agents. In this way, the learner can obtain full-information feedback on these time steps, which accelerates the learning process. \nIn \\Cref{sec:mixed-strategy-oblivious}, we use this idea to achieve $\\mathcal{O}\\left(T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|\\right)$ regret against any oblivious adversary. In \\Cref{sec:adaptive-adversaries}, we extend this idea to the general case of adaptive adversaries and obtain a bound of $\\widetilde{\\mathcal{O}}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}} |\\mathcal{H}|\\right)$.\nWe also show that this framework could be useful in other strategic settings as well. 
For example, in \\Cref{sec:strategic-perceptron}, we apply it to the setting of strategic online linear classification and obtain a mistake bound in terms of the hinge loss of the original examples when the original data points are not linearly separable.\n\n\n\n\\subsection{Algorithm based on bandit feedback}\n\\label{sec:mixed-strategy-adaptive}\n\nAs a warmup, we show the learner can use the vanilla EXP3 algorithm~\\citep{auer2002nonstochastic}, which is a standard multi-armed bandit algorithm, to obtain sublinear regret. This algorithm works by maintaining a distribution over $\\mathcal{H}$ from which classifier $h_t$ is sampled, where the weight of each expert is updated according to $$p_{t+1}(h)\\propto p_t(h)\\cdot \\exp\\left(-\\eta\\cdot\\frac{\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)\\cdot\\indicator{h=h_t}}{p_t(h)}\\right),\\ \\forall h\\in\\mathcal{H}.$$\nIt is known that running EXP3 with learning rate $\\eta=\\sqrt{\\frac{2\\ln|\\mathcal{H}|}{|\\mathcal{H}|T}}$ achieves a regret bound of $O(\\sqrt{T|\\mathcal{H}|\\ln|\\mathcal{H}|})$; see \\citet{auer2002nonstochastic} for a proof.\n\n\n\n\n\n\\subsection{Algorithm based on full-information acceleration}\n\\label{sec:mixed-strategy-oblivious}\nIn this section, we provide an algorithm with $\\mathcal{O}\\left(T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|\\right)$ regret against \\emph{oblivious adversaries}. \nAn oblivious adversary is one who chooses the sequence of agents $\\{(u_t,y_t)\\}_{t=1}^T$ before the interaction starts, irrespective of the learner's decisions during the game.\nOur algorithm (\\Cref{alg:reduction-MAB-FIB}) uses a\nreduction from the partial-information model to the full-information model, which is similar in spirit to \\cite{awerbuch2004adaptive} and \\citet[Chapter 4.6]{blum_mansour_2007}. 
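For concreteness, one step of the EXP3 update described in \\Cref{sec:mixed-strategy-adaptive} can be sketched as follows (an illustration with hypothetical expert names; only the sampled expert receives a nonzero, importance-weighted loss estimate):

```python
import math

# One EXP3 weight update (a sketch of the stated update rule;
# expert names are hypothetical).

def exp3_update(p, sampled, loss, eta):
    """p: current distribution over experts; sampled: the expert drawn this
    round; loss: its observed loss; eta: learning rate."""
    est = {h: (loss / p[sampled] if h == sampled else 0.0) for h in p}
    w = {h: p[h] * math.exp(-eta * est[h]) for h in p}
    z = sum(w.values())
    return {h: w[h] / z for h in p}

p = {"h1": 1 / 3, "h2": 1 / 3, "h3": 1 / 3}
p = exp3_update(p, sampled="h1", loss=1, eta=0.1)

assert p["h1"] < p["h2"] == p["h3"]       # the losing expert is downweighted
assert abs(sum(p.values()) - 1.0) < 1e-9  # still a probability distribution
```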
The main idea is to divide the timeline $1,\\cdots, T$ into $K$ consecutive blocks $B_1,\\cdots,B_K$, where $B_j=\\{(j-1)(T\/K)+1,\\cdots,j(T\/K)\\}$, and simulate a full-information online learning algorithm (Hedge) with each block representing a single step. \nWithin each block $B_j$, our algorithm uses the same distribution over the experts, except that it will also pick one time-step $\\tau_j\\sim B_j$ uniformly at random, and assign an all-positive classifier to $\\tau_j$. \nThe intention of this time step $\\tau_j$ is to prevent the agent from manipulating and simultaneously obtain the loss of every expert. This observed loss then serves as an unbiased estimate of the average loss over the same block.\nIn the remainder of this section, we formally present this algorithm in \\Cref{alg:reduction-MAB-FIB} and provide its regret guarantee in \\Cref{thm:regret-alg-oblivious}.\n\n\n\n\n\\begin{algorithm}[!ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\SetNoFillComment\n$K\\gets T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|$\\;\nPartition the timeline $\\{1,\\cdots, T\\}$ into $K$ consecutive blocks $B_1,\\cdots,B_K$ where $B_j=\\left\\{(j-1)\\cdot\\frac{T}{K}+1,\\cdots, j\\cdot\\frac{T}{K}\\right\\}$\\;\nInitialize $w_1(h)\\gets1,\\ \\forall h\\in\\mathcal{H}$\\;\n\\For{$1\\leq j\\leq K$}{\n Sample $\\tau_j \\in B_j$ uniformly at random\\;\n\\For{$t\\in B_j$}{\n \\tcc{Commit to drawing classifier $h_t\\sim\\mathcal{D}_t$, with $\\mathcal{D}_t$ defined as follows:}\n \\eIf{$t=\\tau_j$}{\n $\\mathcal{D}_t$ puts all weight on a classifier that labels every node as positive\\;\n }\n {\n $\\mathcal{D}_t$ is a distribution over $\\mathcal{H}$, where $p_j(\\cdot)=\\frac{w_j(\\cdot)}{W_j}$, $W_j=\\sum_{h\\in\\mathcal{H}} w_j(h)$\\;\n }\n \\tcc{Observe agent $(v_t,y_t)$.}\n}\n\\tcc{Update the distribution at the end of $B_j$}\n\\For{$h\\in \\mathcal{H}$}{\n$w_{j+1}(h)\\leftarrow w_j(h) e^{-\\eta \\cdot\\hat{\\ell}_j(h)}$, where 
$\\hat{\\ell}_j(h)=\\ell(h,\\mathsf{BR}_h(v_{\\tau_j}),y_{\\tau_j})$\\;\n}\n\n}\n\\caption{Randomized algorithm against oblivious adversaries \n}\n\\label[algo]{alg:reduction-MAB-FIB}\n\\end{algorithm}\n\n\n\n\\begin{theorem}\n\\Cref{alg:reduction-MAB-FIB} with parameter $K=T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|$ achieves a regret bound of $\\mathcal{O}\\left(T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|\\right)$ against any oblivious adversary.\n\\label{thm:regret-alg-oblivious}\n\\end{theorem}\n\n\\begin{proof}\nFor notational convenience, we denote the average loss over block $B_j$ as $\\bar{\\ell}_j(h)=\\frac{\\sum_{t\\in B_j}\\ell_t(h)}{|B_j|}$, where for each expert $h\\in\\mathcal{H}$, $\\ell_t(h)=\\ell(h,\\mathsf{BR}_h(u_t),y_t)$.\nWe first claim that for all $h$, $\\hat{\\ell}_j(h)=\\ell(h,\\mathsf{BR}_h(v_{\\tau_j}),y_{\\tau_j})$\\footnote{Note that here $v_{\\tau_j}$ is the best-response to $h_{\\tau_j}$ and $\\mathsf{BR}_h(v_{\\tau_j})$ is the best-response to $h$ when the agent is at location $v_{\\tau_j}$ (which we will show is the same as $u_{\\tau_j}$).} is an unbiased estimator of the average loss $\\bar{\\ell}_j(h)$, i.e., $\\E_{\\tau_j\\sim B_j}[\\hat{\\ell}_j(h)]=\\bar{\\ell}_j(h)$. \nThis is because the algorithm predicts positive on every state at time $\\tau_j$, so the agent reports truthfully ($u_{\\tau_j}=v_{\\tau_j}$), thus %\n$\\hat{\\ell}_j(h)=\\ell(h,\\mathsf{BR}_h(u_{\\tau_j}),y_{\\tau_j})=\\ell_{\\tau_j}(h)$ can be observed for any expert $h$. 
\nSince $\\tau_j$ is sampled from $B_j$ uniformly at random, we have\n$$\\E_{\\tau_j\\sim B_j}[\\hat{\\ell}_j(h)]=\\E_{\\tau_j\\sim B_j}[\\ell_{\\tau_j}(h)]=\\bar{\\ell}_j(h).$$\n\nSince the choice of $\\tau_j$ is sampled independently after the distribution $p_j$ is chosen, the above claim implies that for any block $B_j$ and any $p_j$:\n\\begin{align}\n \\E_{h\\sim p_j}\\left[\\bar{\\ell}_j(h)\\right] = \n \\E_{\\tau_j}\\E_{h\\sim p_j}\\left[\\hat{\\ell}_j(h)\\right].\n \\label{eq:claim-unbiased}\n\\end{align}\nTherefore, inside each block $B_j$ and conditioning on the history before block $B_j$, the total loss of \\Cref{alg:reduction-MAB-FIB} can be bounded as follows:\n\\begin{align}\n \\E\\left[\\sum_{t\\in B_j}\\ell_t(h_t)\\right]=&\\indicator{\\Tilde{y}_{\\tau_j}\\neq y_{\\tau_j}}+\\sum_{t\\in B_j,\\ t\\neq\\tau_j}\\E_{h\\sim p_j}\\left[{\\ell}_t(h)\\right]\\nonumber\\\\\n \\le&1+\\sum_{t\\in B_j}\\E_{h\\sim p_j}\\left[{\\ell}_t(h)\\right]=1+|B_j|\\cdot\\E_{h\\sim p_j}\\left[\\bar{\\ell}_j(h)\\right]\\label{eq:reduction-tmp1}\\\\\n =&1+\\frac{T}{K}\\E_{\\tau_j}\\E_{h\\sim p_j}\\left[\\hat{\\ell}_j(h)\\right],\\label{eq:reduction-tmp2}\n\\end{align}\nwhere the inequality \\eqref{eq:reduction-tmp1} is because $\\indicator{\\Tilde{y}_{\\tau_j}\\neq y_{\\tau_j}}\\le1$ and the loss $\\ell_t$ is always nonnegative, and \\eqref{eq:reduction-tmp2} is because of the claim in \\eqref{eq:claim-unbiased}. 
Summing over $K$ blocks and taking the expectation over $\\tau_1,\\cdots,\\tau_K$, we obtain an upper bound on the expected total loss of \\Cref{alg:reduction-MAB-FIB}:\n\\begin{align}\n \\E\\left[\\sum_{t=1}^T\\ell_t(h_t)\\right]=&\\sum_{j=1}^K\\E\\left[\\sum_{t\\in B_j}\\ell_t(h_t)\\right]\n \\le K+\\frac{T}{K}\\E_{\\tau_1,\\cdots,\\tau_K}\\left[\\sum_{j=1}^K\\E_{h\\sim p_j}\\left[\\hat{\\ell}_j(h)\\right]\\right].\\label{eq:reduction-tmp6}\n\\end{align}\nFrom the regret guarantee of Hedge applied to the loss sequence $\\hat{\\ell}_1,\\cdots,\\hat{\\ell}_K$, we have\n \\begin{align}\n {\\sum_{j=1}^K \n \\E_{h\\sim p_j}\\left[\\hat{\\ell}_j(h)\\right]}\n -{\\min_{h^\\star \\in \\mathcal{H}}\\sum_{j=1}^K \\hat{\\ell}_j(h^\\star)}\n \\le \\mathcal{O}\\Big(\\sqrt{K\\ln|\\mathcal{H}|}\\Big).\n \\end{align}\nTherefore, taking the expectation over $\\tau_1,\\cdots,\\tau_K$, we obtain\n\\begin{align}\n \\E_{\\tau_1,\\cdots,\\tau_K}\\left[{\\sum_{j=1}^K \n \\E_{h\\sim p_j}\\left[\\hat{\\ell}_j(h)\\right]}\\right]\n \\le& \\E_{\\tau_1,\\cdots,\\tau_K}\\left[{\\min_{h^\\star \\in \\mathcal{H}}\\sum_{j=1}^K \\hat{\\ell}_j(h^\\star)}\\right]\n + \\mathcal{O}\\Big(\\sqrt{K\\ln|\\mathcal{H}|}\\Big)\\nonumber\\\\\n \\le& \\min_{h^\\star \\in \\mathcal{H}}\\E_{\\tau_1,\\cdots,\\tau_K}\\left[{\\sum_{j=1}^K \\hat{\\ell}_j(h^\\star)}\\right]\n + \\mathcal{O}\\Big(\\sqrt{K\\ln|\\mathcal{H}|}\\Big)\\label{eq:reduction-tmp3}\\\\\n =&\\min_{h^\\star \\in \\mathcal{H}}\\left[{\\sum_{j=1}^K \\bar{\\ell}_j(h^\\star)}\\right]\n + \\mathcal{O}\\Big(\\sqrt{K\\ln|\\mathcal{H}|}\\Big).\\label{eq:reduction-tmp4}\n \\end{align}\nIn the above equations, \\eqref{eq:reduction-tmp3} is due to Jensen's inequality %\nand \\eqref{eq:reduction-tmp4} is from the unbiasedness property established in \\Cref{eq:claim-unbiased}.\nFinally, putting \\Cref{eq:reduction-tmp6,eq:reduction-tmp4} together, and using the definition of the average loss $\\bar{\\ell}_j$, we conclude that\n\\begin{align*}\n 
\\E\\left[\\sum_{t=1}^T\\ell_t(h_t)\\right]\\le& K+\\frac{T}{K}\\left(\\min_{h^\\star \\in \\mathcal{H}}\\left[{\\sum_{j=1}^K \\bar{\\ell}_j(h^\\star)}\\right]\n + \\mathcal{O}\\Big(\\sqrt{K\\ln|\\mathcal{H}|}\\Big)\\right)\\\\\n =&\\min_{h^\\star\\in \\mathcal{H}}\\sum_{j=1}^K\\sum_{t\\in B_j}{\\ell}_t(h^\\star)+\\mathcal{O}\\left(T\\sqrt{\\frac{\\ln|\\mathcal{H}|}{K}}\\right)+K.\n\\end{align*}\n\nSetting $K=T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}} |\\mathcal{H}|$ gives the final regret bound of $\\mathcal{O}\\left( T^{2\/3}\\ln^{1\/3}|\\mathcal{H}|\\right)$.\n\\end{proof}\n\n\n\\subsection{Discussion on transparency}\n\\label{sec:discussion-transparency}\nIn the sections above, we have shown that making random choices \\emph{fully transparent} to strategic agents can provably help the learner to achieve sublinear regret. This is in contrast to the fractional model, where we have lower bound examples showing that keeping random choices \\emph{fully opaque} to the agents leads to linear regret.\nThe contrasting results in these two models reveal a fundamental difference between strategic and non-strategic (adversarial) settings: unlike the adversarial setting, where learners benefit more from hiding the randomness, in the strategic setting the learner benefits more from being transparent.\nAt a high level, this is because the relationship between the learner and strategic agents is not completely opposing: instead, the utility of the learner and agents can align to a certain degree. \n\nTo be more specific, in our online strategic classification setting, there are three effective players in the game: the learner who selects the classification rule, the adversary who chooses the initial features of agents, and the strategic agents who best respond to the classification rule. 
\nFrom the learner's perspective, the only malicious player is the adversary, whereas the agent has a known, controllable best response rule.\nIn the fractional classifiers model, both the adversary and the agents face the same information (namely, the set of fractions). Although the opacity can prevent the adversary from selecting worst-case agents that force the learner to err with probability $1$ (as in the lower bound examples of \\Cref{thm:deterministic-lower-bound}), it also reduces the learner's control over the strategic behavior of agents.\nAs a result, the potentially rich structure of randomness collapses to the set of deterministic fractional values, and the learner is still forced to make mistakes with a constant probability.\n\nOn the contrary, in the randomized algorithms model, the learner can increase her leverage in controlling the agents' best response behavior, and simultaneously reduce the adversary's ability to use the strategic nature of agents to hide information from the learner. Both are achieved by giving the agents more information (i.e., the realized classifiers). In other words, the learner benefits from ``colluding'' with the agents and competing against the malicious adversary in unity. This idea is demonstrated by \\Cref{alg:reduction-MAB-FIB,alg:reduction-adaptive}, where the learner occasionally uses an all-positive classifier to encourage the truthful reporting of agents, thus making the adversary unable to benefit from hiding the true features from the learner.\n\\section{Model}\n\n\\subsection{Strategic Classification}\nLet $\\mathcal{X}$ denote the space of agents' features, and $\\mathcal{Y}=\\{+1,-1\\}$ denote the space of labels. %\nWe consider the task of sequential classification, where the learner aims to classify a sequence of agents $\\{u_t,y_t\\}_{t=1}^T$ that arrive in an online fashion. 
Here, we assume $u_t\\in\\mathcal{X}$ is the true feature vector of agent $t$ and $y_t\\in\\mathcal{Y}$ is the true label. We call agents with $y_t=+1$ \\emph{true positives}, and the ones with $y_t=-1$ \\emph{true negatives}. %\nImportantly, we make minimal assumptions on the sequence of agents, and our results apply to the case of adversarially chosen sequences.\nA hypothesis $h:\\mathcal{X}\\rightarrow \\mathcal{Y}$ (also called a {\\em classifier} or an {\\em expert}) is a function that assigns labels to the agents $u\\in\\mathcal{X}$. \nGiven a hypothesis class $\\mathcal{H}\\subseteq\\{h:\\mathcal{X}\\to\\mathcal{Y}\\}$, our goal is to bound the total number of mistakes made by the learner, compared to the best classifier $h^\\star\\in\\mathcal{H}$ in hindsight. \n\nTo model the gaming behavior in real-life classification tasks, we work with the setting of \\emph{strategic classification}, in which agents have the ability to modify their features at a given cost and reach a new observable state.\nFormally, strategic classification can be described as a repeated Stackelberg game between the learner (leader) and the agents (followers).\nAt each step $t\\in[T]$, the learner first publicly commits to a classifier $h_t$. Then, the $t$-th agent $(u_t,y_t)$ arrives and responds to $h_t$ by modifying their features from $u_t$ to $v_t$. As a result of manipulation, only $v_t$ (instead of $u_t$) is observable to the learner. 
\n\nWe assume that $v_t$ is chosen as a best response ($\\mathsf{BR}$) to the announced rule $h_t$, such that the agent's utility is maximized:\n\\begin{align}\n v_t\\in\\mathsf{BR}_{h_t}(u_t)\\triangleq\\arg\\max_{v\\in \\mathcal{X}} \\Big[\\mathsf{Value}(h_t(v))-\\mathsf{Cost}(u_t,v)\\Big].\n\\end{align}\nHere, $\\mathsf{Value}(h_t(v))$ indicates the value of outcome $h_t(v)$, which is a binary function that takes the value of $1$ when $h_t(v)=+1$, and $0$ when $h_t(v)=-1$.\nIn~\\Cref{sec:fractional-model}, we consider the generalization of agents best responding to a probability distribution over classifiers, where $\\mathsf{Value}(h_t(v))$ becomes the induced expectation on node $v$, i.e., the probability of $v$ getting classified as positive by $h_t$. Equivalently, we refer to $h_t$ as a \\emph{fractional classifier} and the induced probabilities as \\emph{fractions}.\n$\\mathsf{Cost}(u_t,v)$ is a known, non-negative cost function that captures the cost of modifying features from $u_t$ to $v$. It is natural to assume $\\mathsf{Cost}(u,u)=0$ for all $u\\in \\mathcal{X}$. We use $v_t \\in \\mathsf{BR}_{h_t}(u_t)$ to denote the best response of agent $u_t$ at time-step $t$. Ties are broken by always preferring features with higher $\\mathsf{Value}(h_t(\\cdot))$, and \npreferring to stay put, i.e., $v_t=u_t$, if $u_t$ is among the set of best responses that achieve the highest value.\n\n\\textbf{Learner's Objective:}\nThe learner's loss is defined as the misclassification error on the observable %\nstate: $\\ell(h_t,v_t,y_t)=\\indicator{y_t\\neq h_t(v_t)}$. Since $v_t \\in \\mathsf{BR}_{h_t}(u_t)$ and has the highest value of $h_t(\\cdot)$ according to the tie-breaking rule, we also abuse the notation and write $\\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)=\\indicator{y_t\\neq\\Big. 
\\max\\left\\{ h_t(v):\\ {v\\in \\mathsf{BR}_{h_t}(u_t)}\\right\\}}$.\nThe learner's goal is to minimize the Stackelberg regret with respect to the best hypothesis $h^\\star\\in\\mathcal{H}$ in hindsight, had the agents best responded to $h^\\star$:\n\\begin{align}\n \\mathsf{Regret}(T)\\triangleq\\sum_{t=1}^T \\ell(h_t,\\mathsf{BR}_{h_t}(u_t),y_t)-\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T \\ell(h^\\star,\\mathsf{BR}_{h^\\star}(u_t),y_t).\n\\end{align}\nFor notational convenience, we use $\\mathsf{OPT}$ to denote the optimal loss achieved by the best hypothesis:\n\\begin{align}\n \\mathsf{OPT}\\triangleq\\min_{h^\\star\\in\\mathcal{H}}\\sum_{t=1}^T \\ell(h^\\star,\\mathsf{BR}_{h^\\star}(u_t),y_t).\n\\end{align}\nWhen $\\mathsf{OPT}=0$, we call the sequence of agents \\emph{realizable}, meaning that there exists a perfect classifier that never makes a mistake had agents best responded to it. Otherwise, when $\\mathsf{OPT}>0$, we call it \\emph{unrealizable} or \\emph{agnostic}.\n\n\\subsection{Manipulation Graph}\nWe use a graph $G(\\mathcal{X},\\mathcal{E})$ to characterize the set of \\emph{plausible manipulations}. In graph $G$, each node in $\\mathcal{X}$ corresponds to a state (i.e., features), and\neach edge $e=(u,v)\\in\\mathcal{E}$ captures a plausible manipulation from $u$ to $v$. \nThe cost function $\\mathsf{Cost}(u,v)$ is defined as the sum of costs on the shortest path from $u$ to $v$, or $+\\infty$ if such a path does not exist.\nWe present our results for the case of undirected manipulation graphs and show how they can be extended to the case of directed graphs (\\Cref{sec:directed-graphs}).\n\nTo model the cost of each edge, we consider \\emph{weighted graphs} in which each edge $e\\in \\mathcal{E}$ is associated with a nonnegative weight $w(e)\\in[0,1]$. \nAs a special case of the weighted graphs, we also consider \\emph{unweighted graphs}, in which each edge takes unit cost, i.e., $w(e)=1$. 
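The shortest-path cost function $\\mathsf{Cost}(u,v)$ above can be computed by a standard Dijkstra pass over the weighted manipulation graph; the following is a minimal sketch (node names and weights are hypothetical):

```python
import heapq

# Single-source shortest-path costs in a weighted manipulation graph
# (a standard Dijkstra sketch; hypothetical nodes and weights).
# Nodes absent from the result are unreachable, i.e., Cost(u, v) = +infinity.

def manipulation_costs(adj, u):
    """adj[x]: list of (neighbor, edge weight) pairs, weights in [0, 1]."""
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue                  # stale heap entry
        for y, w in adj[x]:
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return dist

# Path graph a - b - c with edge weights 0.4 and 0.5.
adj = {"a": [("b", 0.4)], "b": [("a", 0.4), ("c", 0.5)], "c": [("b", 0.5)]}
costs = manipulation_costs(adj, "a")
assert costs["b"] == 0.4 and abs(costs["c"] - 0.9) < 1e-12
```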
We remark that in unweighted graphs, agents move at most one hop, because manipulating the features can increase the value of the classification outcome by at most $1$, while each hop costs $1$. To be specific, let $N[u]$ denote the closed neighborhood of state $u\\in\\mathcal{X}$; then agent $u$ responds to classifier $h$ as follows: \nif $h(u)$ is negative and there exists a neighbor $v\\in N[u]$ with positive $h(v)$, then $u$ will move to $v$; otherwise, $u$ does not move. As a result, the loss function in unweighted graphs can be equivalently expressed as\n\\[\n\\ell(h,\\mathsf{BR}_h(u),y)=\n\\begin{cases}\n1 &\\quad y=+1 \\text{, } \\forall v\\in N[u]: h(v)=-1;\\\\\n1 &\\quad y=-1 \\text{, } \\exists v\\in N[u]: h(v)=+1;\\\\\n0 &\\quad \\text{otherwise}.\n\\end{cases}\n\\label{def-loss}\n\\]\n\nWhen fractional classifiers are used, we also consider the \\emph{free-edge} manipulation model. In this model, we restrict the agent to moving at most one hop, where the cost of moving is zero. \nSpecifically, each pair of nodes $(u,v)\\in\\mathcal{X}^2$ has zero manipulation cost if $(u,v)\\in\\mathcal{E}$; otherwise, the cost is infinity. 
\nWhen agents best respond to classifier $h$ under this cost function, they will move to a one-hop neighbor of their initial state that has the highest probability of getting classified as positive.\n\n\\subsection{Related Work}\nOur work builds upon a growing line of research, initiated by \\citet{10.1145\/1014052.1014066,dekel2008incentive,10.1145\/2020408.2020495}, that studies learning from data provided by strategic agents.~\\citet{Hardt2016} differentiated the field of strategic classification from the more general area of learning under adversarial perturbations;\nthey introduced the problem of \\emph{strategic classification} and modeled it as a sequential game between a jury that deploys a classifier and an agent that best responds to the classifier by modifying their features at a cost.\n\nFollowing the framework of \\citet{Hardt2016}, recent works have focused on both the offline setting, where examples come from underlying distributions~\\citep{zhang2021incentive,sundaram2021pac,lechner2022learning,pmlr-v119-perdomo20a}, and the online setting, where examples are chosen by an adversary in a sequential manner~\\citep{dong2018strategic,chen2020learning,ahmadi2021strategic}.\n\\citet{milli-etal,10.1145\/3287560.3287597} extend the setting considered by \\citet{Hardt2016} to the case that heterogeneous sub-populations of strategic agents have different manipulation costs and study other objectives such as social burden and fairness. 
A number of other works focus on incentivizing agents to take improvement actions that increase their true qualification as opposed to gaming actions~\\citep{10.1145\/3417742,Alon2020MultiagentEM,haghtalab-ijcai2020,ahmadi_et_al:LIPIcs.FORC.2022.3,bechavod2022information}.\nThe works by \\citet{pmlr-v119-shavit20a,pmlr-v119-perdomo20a,bechavod2021gaming} study causal relationships between observable features and outcomes in strategic classification.~\\citet{levanon2021strategic} provide a practical learning framework for strategic classification.\nRecent works relax the assumption that strategic agents best respond to the classifiers and consider alternative assumptions such as noisy response~\\citep{jagadeesan2021alternative}, learning agents~\\citep{zrnic2021leads}, and non-myopic agents~\\citep{haghtalab2022learning}.\n\nOur work is most closely related to that of~\\citet{zhang2021incentive,lechner2022learning}, which captures the set of plausible manipulations using an underlying \\emph{manipulation graph}, where each edge $\\vec{x}\\rightarrow \\vec{x}'$ represents a plausible manipulation from features $\\vec{x}$ to $\\vec{x}'$. This formulation is in contrast to a geometric model where agents' features are vectors in a $d$-dimensional space, with manipulation cost captured by some distance metric. As a result, agents in the geometric setting move in a ball of bounded radius~\\citep{dong2018strategic,chen2020learning,haghtalab-ijcai2020,ahmadi2021strategic,ghalme2021strategic,sundaram2021pac}. However, the work of \\citet{zhang2021incentive,lechner2022learning} focuses on the \\emph{offline} PAC learning setting. Our work can be considered a generalization of their model to the \\emph{online learning} setting.\n\nOur work is also connected to the line of work that studies randomness and transparency in strategic classification. 
In terms of \\emph{classification accuracy} in the offline setting, \\citet{Braverman2020TheRO} shows that in a one-dimensional feature space, both committed randomness (probabilistic classifiers) and noisy features under deterministic classifiers can improve accuracy, and the optimal randomized classifier has a structure where agents are better off not manipulating. On the other hand, \\citet{ghalme2021strategic} gives sufficient conditions under which \\emph{transparency} is the recommended policy for improving predictive accuracy. Our paper combines the insights of both papers in the online setting, where we show that randomness is necessary against the adversary that selects agents, but transparency is more advantageous when it comes to the strategic agents themselves (see \\Cref{sec:discussion-transparency} for more discussion). In addition to accuracy, there are also studies of the societal implications of randomization and transparency in the presence of multiple subpopulations, such as information discrepancy~\\citep{bechavod2022information} and fairness~\\citep{immorlica2019access,kannan2019downstream,frankel2022improving,Braverman2020TheRO}.\n\n\\section{Overview of Results}\n\\label{sec:overview-results}\n\n\\begin{table}\n \\centering\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n \\parbox[c][][c]{2cm}{Type of \\\\Randomness} &\\parbox[c][][c]{2cm}{Manipulation\\\\ Graph} & {Upper Bound} &Lower Bound\\\\\\hhline{====}\n \\parbox[c][][c]{1.9cm}{Deterministic}&\n {Unweighted}\n &\n \\begin{tabular}{l|l}\n \\multirow{2}{*}{Realizable} & $O(\\Delta\\ln|\\mathcal{H}|)$\\\\\n &\\Cref{alg:halving} (\\Cref{thm:baseline-realizable-upper-bound})\\\\\\hline\n \\multirow{2}{*}{Agnostic} & $O(\\Delta\\cdot\\mathsf{OPT}+\\Delta\\ln|\\mathcal{H}|)$\n \\\\\n & \\Cref{alg:biased-weighted-maj-vote} (\\Cref{thm:biased-weighted-maj-vote-mistake-bound})\n \\end{tabular}\n &\n \\begin{tabular}{l|c}\n \\multirow{2}{*}{Realizable} & $\\Delta-1$\\\\\n 
&\\Cref{thm:deterministic-lower-bound}\\\\\\hline\n \\multirow{2}{*}{Agnostic} & $\\Delta\\cdot\\mathsf{OPT}$\n \\\\\n & \\Cref{thm:deterministic-lower-bound}\n \\end{tabular}\n \\\\\\hhline{====}\n \\multirow{2}{*}{\\parbox[c][][c]{1.9cm}{Fractional \\\\Classifiers \\\\{\\tiny (random choice after agents respond)}}}&Free-edges&\n \\parbox[c][1cm][c]{4cm}{$O(\\Delta\\cdot\\mathsf{OPT}+\\Delta\\ln|\\mathcal{H}|)$ \\\\\n \\Cref{alg:biased-weighted-maj-vote} (\\Cref{thm:biased-weighted-maj-vote-mistake-bound})}\n &\n \\parbox[c][1cm][c]{3cm}{\n $\\frac{\\Delta}{2}\\cdot\\mathsf{OPT}$\\\\\n \\Cref{thm:fractional-onehop-lower-bound}\n }\\\\\\hhline{~---}\n &Weighted&\n \\parbox[c][1cm][c]{4cm}{$O(\\tilde{\\Delta}\\cdot\\mathsf{OPT}+\\tilde{\\Delta}\\ln|\\mathcal{H}|)$ \\\\\n \\Cref{prop:fractional-multi-hop-upper-bound}}\n &\n \\parbox[l][1cm][c]{3cm}{\n {$\\frac{{\\Delta}}{4}\\cdot\\mathsf{OPT}\\ \\left(\\frac{\\Tilde{\\Delta}}{4}\\cdot\\mathsf{OPT}\\right)$ }\\\\\n \\Cref{thm:fractional-multi-hops-lower-bound}\n }\\\\\\hhline{====}\n \\parbox[c][][c]{1.9cm}{Randomized\\\\ Algorithms\n \\\\{\\tiny (random choice before agents respond)}}&{\n Unweighted}&\n \\begin{tabular}{l|l}\n \\multirow{2}{*}{Oblivious} & $O\\left(T^{\\frac{2}{3}}\\ln^{\\frac{1}{3}}|\\mathcal{H}|\\right)$\\\\\n &\\Cref{alg:reduction-MAB-FIB} (\\Cref{thm:regret-alg-oblivious})\\\\\\hline\n \\multirow{4}{*}{Adaptive} & $\\widetilde{O}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}}|\\mathcal{H}|\\right)$\n \\\\\n & \\Cref{alg:reduction-adaptive} (\\Cref{thm:regret-alg-adaptive-reduction})\\\\\\hhline{~-}\n & $\\widetilde{O}\\left(\\sqrt{T|\\mathcal{H}|\\ln|\\mathcal{H}|}\\right)$\n \\\\\n & Vanilla EXP3 Algorithm\n \\end{tabular}\n &Open\\\\\\hline\n \\end{tabular}\n \\caption{\\small{This table summarizes the main results of this paper for the model of deterministic classifiers, fractional classifiers, and randomized algorithms respectively. 
We use $\\Delta$ to denote the maximum degree of the manipulation graph, and $\\tilde{\\Delta}$ to denote the maximum degree of the expanded manipulation graph, which is constructed from a weighted graph by connecting all %\n pairs of nodes $(u,v)\\in \\mathcal{X}^2$ where $\\mathsf{Cost}(u,v)\\leq 1$.\n Although the table is presented in terms of undirected graphs, we remark that all the upper and lower bounds can be extended to the setting of directed graphs, with the degrees replaced by the corresponding out-degrees; %\n see \\Cref{sec:directed-graphs} for an example in the setting of deterministic classifiers}.\n $\\mathsf{OPT}$ stands for the optimal number of mistakes\n achieved by the best hypothesis in class $\\mathcal{H}$.\n }\n \\label{table-of-results}\n\\end{table}\n\nOur work considers three types of randomness: deterministic, fractional classifiers, and randomized algorithms. In the \\emph{deterministic} model, the learner is constrained to using deterministic algorithms to output a sequence of deterministic classifiers. In the \\emph{fractional classifiers} model, the learner is allowed to output a probability distribution over classifiers at every round, inducing a fraction on every node that represents its probability of being classified as positive. The agents best respond to these fractions before the random labels are realized. In the last \\emph{randomized algorithms} model, the learner outputs a probability distribution over classifiers as in the fractional model, and the adversary may pick $u_t$ based on these probabilities, but now the agents respond to the true realized classifier in selecting $v_t$. We summarize our main results \nin \\Cref{table-of-results}.\n\n\\textbf{Deterministic Classifiers:} \nIn the case of deterministic classifiers, we model strategic manipulations by unweighted graphs that have unit cost on all edges. 
We first consider the realizable setting where the perfect classifier lies in a finite hypothesis class $\\mathcal{H}$, and show fundamental differences between the non-strategic and strategic settings.\nIn the non-strategic setting, the deterministic algorithm $\\mathsf{Halving}$ achieves an $O(\\ln|\\mathcal{H}|)$ mistake bound. However, in the strategic setting, we show in \\Cref{example:halving-fails} that the same algorithm can suffer an infinite number of mistakes. \n\nIn \\Cref{sec:deterministic}, we analyze the strategic setting and provide upper and lower bounds on the number of mistakes, both characterized by the \\emph{maximum degree} of vertices in the manipulation graph, which we denote by $\\Delta$.\nOn the lower bound side, we show in \\Cref{thm:deterministic-lower-bound} that no deterministic algorithm is able to achieve an $o(\\Delta)$ mistake bound, and this barrier exists even when $|\\mathcal{H}|=O(\\Delta)$.\nOn the upper bound side, we propose \\Cref{alg:halving}, which achieves a mistake bound of $O(\\Delta\\ln|\\mathcal{H}|)$ by incorporating the graph structure into the vanilla $\\mathsf{Halving}$ algorithm.\n\nWe then move to the agnostic strategic setting and propose \\Cref{alg:biased-weighted-maj-vote}, which achieves a mistake bound of $O(\\Delta\\cdot\\mathsf{OPT}+\\Delta\\ln|\\mathcal{H}|)$, where $\\mathsf{OPT}$ denotes the minimum number of mistakes made by the best classifier in $\\mathcal{H}$. \nThis bound is a factor of $\\Delta$ larger than the bound achieved by the weighted majority vote algorithm in the non-strategic setting.\nWe complement this result with a lower bound showing that no deterministic algorithm can achieve an $o(\\Delta\\cdot\\mathsf{OPT})$ mistake bound. 
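The flavor of a biased weighted majority vote for this setting can be sketched in Python as follows. This is a hedged illustration only: the threshold $\theta=1/(\Delta+2)$ and the discount factor $1/e$ are illustrative choices consistent with the description here, and \Cref{alg:biased-weighted-maj-vote} may differ in its exact constants and bookkeeping.

```python
import math

def biased_weighted_majority(graph, hypotheses, stream, delta):
    """Sketch of a biased weighted majority vote against strategic agents.

    graph: dict state -> list of neighbors (unweighted manipulation graph).
    hypotheses: dict name -> dict mapping each state to a label in {+1, -1}.
    stream: iterable of (v_t, y_t): the observed (possibly manipulated)
    state and the true label. delta: maximum degree of the graph.
    """
    gamma = 1 / math.e          # illustrative discount factor
    theta = 1 / (delta + 2)     # illustrative bias threshold
    w = {name: 1.0 for name in hypotheses}
    mistakes = 0
    for v_t, y_t in stream:
        total = sum(w.values())
        # Bias toward positive: predict +1 whenever even a theta-fraction
        # of the weight votes +1, which discourages false negatives.
        pos = sum(w[n] for n in w if hypotheses[n][v_t] == +1)
        pred = +1 if pos >= theta * total else -1
        if pred == y_t:
            continue
        mistakes += 1
        if pred == +1:
            # False positive: discount the experts that predicted +1 on v_t
            # (at least a theta-fraction of the total weight).
            guilty = [n for n in w if hypotheses[n][v_t] == +1]
        else:
            # False negative: the agent did not move, so every expert that
            # predicts -1 on the whole closed neighborhood N[v_t] is wrong.
            nbrs = [v_t] + list(graph[v_t])
            guilty = [n for n in w
                      if all(hypotheses[n][x] == -1 for x in nbrs)]
        for n in guilty:
            w[n] *= gamma
    return mistakes
```

The key design choice is the asymmetric penalty: a false positive implicates a guaranteed $\theta$-fraction of the weight, while a false negative implicates every expert that is negative on the entire neighborhood, which is how the graph structure enters the analysis.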
\nIn order to overcome the $\\Delta$-multiplicative barrier, we study the use of randomization in the next two models.\n\n\\textbf{Fractional Classifiers:} %\nIn this setting, we consider \ntwo models of the cost function:\nthe \\emph{free-edges} cost model, where traversing one edge is free but a second edge costs infinity, and the \\emph{weighted graph} model, where agents can travel multiple edges and pay the sum of the edge costs.\nIn the free-edges model, we show that no learner can beat a mistake lower bound of $\\frac{\\Delta}{2}\\cdot\\mathsf{OPT}$, and provide an upper bound of $O(\\Delta\\cdot\\mathsf{OPT}+\\Delta\\ln|\\mathcal{H}|)$ based on~\\Cref{alg:biased-weighted-maj-vote}.\nIn the weighted graph model, we show a mistake lower bound of $\\frac{{\\Delta}}{4}\\cdot\\mathsf{OPT}$, and an upper bound of $O(\\Tilde{\\Delta}\\cdot\\mathsf{OPT}+\\Tilde{\\Delta}\\ln|\\mathcal{H}|)$, which is obtained by running \\Cref{alg:biased-weighted-maj-vote} on the \\emph{expanded manipulation graph} $\\Tilde{G}$ that is constructed by connecting all pairs of nodes $(u,v)\\in \\mathcal{X}^2$ where $\\mathsf{Cost}(u,v)\\leq 1$; here $\\Tilde{\\Delta}$ denotes the maximum degree of $\\Tilde{G}$.\nIn particular, our construction for the lower bound satisfies $\\Tilde{\\Delta}=\\Delta$, so this result also implies that no learner can make fewer than $\\frac{\\Tilde{\\Delta}}{4}\\cdot\\mathsf{OPT}$ mistakes.\n\nOur results in this setting indicate that using fractional classifiers cannot help the learner achieve $o(\\Delta\\cdot\\mathsf{OPT})$ regret. 
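The expanded graph $\Tilde{G}$ used in the weighted-graph upper bound can be built with a cost-bounded shortest-path computation. The sketch below assumes that $\mathsf{Cost}(u,v)$ is the shortest-path sum of edge costs in an undirected weighted graph; the function and variable names are illustrative, not from the paper:

```python
import heapq

def expanded_graph(nodes, edge_cost):
    """Connect every pair (u, v) whose manipulation cost is at most 1.

    nodes: iterable of states. edge_cost: dict {(u, v): cost} over
    undirected weighted edges. Returns dict node -> set of neighbors in
    the expanded graph. Since agents gain at most 1 from flipping their
    classification, only targets of total cost <= 1 are ever reached.
    """
    adj = {u: [] for u in nodes}
    for (u, v), c in edge_cost.items():
        adj[u].append((v, c))
        adj[v].append((u, c))
    expanded = {}
    for s in nodes:
        # Dijkstra from s, pruned once the total cost exceeds 1.
        dist, heap = {s: 0.0}, [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale queue entry
            for v, c in adj[u]:
                nd = d + c
                if nd <= 1.0 and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        expanded[s] = set(dist) - {s}
    return expanded
```

The maximum degree of the returned graph plays the role of $\Tilde{\Delta}$ in the bounds above.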
\nTo resolve this issue, we move on to the randomized algorithms model, where the learner reveals its random choices transparently to the agents.\n\n\\textbf{Randomized Algorithms:} \nIn this setting, the learner uses randomized algorithms that produce a probability distribution over deterministic classifiers at each round.\nThe key difference from the fractional classifiers setting is that, although the adversary still chooses the agent $(u_t,y_t)$ based on the distribution, the agent best responds to the realized classifier after it is sampled from the distribution.\nSurprisingly, we show that revealing the random choices to the agents can make the interaction more fruitful for both the agents and the learner, as the learner is now able to achieve vanishing regret without the multiplicative dependency on $\\Delta$ or $\\Tilde{\\Delta}$. This demonstrates an interesting difference between strategic and non-strategic settings from the learner's perspective: whereas delaying the realization of random bits is helpful in non-strategic settings, it is more helpful to reveal the random choices \\emph{before agents respond} in the strategic setting. 
We refer the reader to \\Cref{sec:discussion-transparency} for more discussion of this difference.\n\nAs for algorithms and upper bounds in this setting, we first show that the vanilla EXP3 algorithm on the expert set $\\mathcal{H}$ gives a regret upper bound of $O\\left(\\sqrt{T|\\mathcal{H}|\\ln|\\mathcal{H}|}\\right)$.\nTo improve the dependency on $|\\mathcal{H}|$, we design two algorithms that simultaneously observe the loss of all experts by using an all-positive classifier at random time steps to stop the manipulations.\nIn particular, \\Cref{alg:reduction-MAB-FIB} achieves a regret upper bound of $O\\left(T^{\\frac{2}{3}}\\ln^\\frac{1}{3}|\\mathcal{H}|\\right)$ against oblivious adversaries, and \\Cref{alg:reduction-adaptive} achieves a regret bound of $\\widetilde{O}\\left(T^{\\frac{3}{4}}\\ln^{\\frac{1}{4}}|\\mathcal{H}|\\right)$ against general adaptive adversaries.\nWe also extend this algorithmic idea to the linear classification setting where the original examples are not separable, and obtain an upper bound in terms of the hinge loss of the original data points, resolving an open problem posed in \\citet{ahmadi2021strategic}, although our mistake bound has an extra $O(\\sqrt{T})$ additive term compared to their bound for the case in which the original data points are separable.\n\n\\textbf{Two Populations:}\nWe propose an extension to our model in which agents are divided into two populations with heterogeneous manipulation power: group $A$ agents face a cost of 0.5 on each edge, whereas group $B$ agents face a cost of 1. 
\nWe assume that group membership is a protected feature, and is observable only after the classifier is published.\nIn \\Cref{sec:two-populations}, we present an algorithm with a $\\min\\left\\{\\Delta+1+\\frac{1}{\\beta},\\ \\Delta^2+2\\right\\}$-multiplicative regret, where $\\beta$ is the probability that agents are assigned to group $B$.\n\n\n\n\\subsection{Two populations}\n\\label{sec:two-populations}\n\nIn this section, we study extensions of the unit-edge cost function in our baseline model. We assume there are two populations with different manipulation costs: agents of group $A$ face a cost of $0.5$ on each edge, whereas agents of group $B$ face a cost of $1$. As a result, in response to deterministic classifiers, agents from group $A$ move within their two-hop distance neighborhood, whereas agents from group $B$ only move inside their one-hop distance neighborhood.\n\nWe suppose each agent has fixed probabilities of belonging to each group, regardless of the initial position and the label chosen by the adversary. In other words, at every round $t$, after the adversary picks the next agent $(u_t,y_t)$, we assume nature independently assigns this agent to group $c_t=B$ with probability $\\beta$ and $c_t=A$ with probability $\\alpha=1-\\beta$. The agent's best response to classifier $h_t$ is a function of $u_t$ and $c_t$:\n\n\n\\begin{align*}\n v_t\\in\\mathsf{BR}_{h_t}(u_t,c_t)\\triangleq\\begin{cases}\n \\arg\\max_{v\\in \\mathcal{X}} \\Big[\\mathsf{Value}(h_t(v))-\\mathsf{Cost}_A(u_t,v)\\Big], & \\text{if } c_t=A\\\\\n \\arg\\max_{v\\in \\mathcal{X}} \\Big[\\mathsf{Value}(h_t(v))-\\mathsf{Cost}_B(u_t,v)\\Big], & \\text{if } c_t=B.\n \\end{cases}\n\\end{align*}\nAs a result of manipulation, the learner suffers loss $\\ell(h_t,v_t,y_t)=\\ell(h_t,\\mathsf{BR}_{h_t}(u_t,c_t),y_t)$ and observes $(v_t,y_t)$ together with group membership $c_t$. 
The learner's goal is to bound the expected number of mistakes in terms of the optimal number of mistakes in expectation, where the expectations are taken over the random group assignments and the possible randomness in the learning algorithm and the adversary's choices. \n\\begin{align*}\n \\E[\\mathsf{Mistake}(T)]=\\E\\left[\\sum_{t=1}^T\\ell(h_t,\\mathsf{BR}_{h_t}(u_t,c_t),y_t)\\right],\\quad\n \\E[\\mathsf{OPT}]=\\min_{h\\in\\mathcal{H}}\\E\\left[\\sum_{t=1}^T\\ell(h,\\mathsf{BR}_{h}(u_t,c_t),y_t)\\right].\n\\end{align*}\n\n {We propose \\Cref{alg:two-populations} that} is based on the idea of biased weighted majority vote (\\Cref{alg:biased-weighted-maj-vote}), with\na \\emph{group-independent} threshold for the biased majority votes, and\na \\emph{group-dependent} way of penalizing experts. We state the mistake bound guarantee in \\Cref{thm:two-populations}.\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n \\SetNoFillComment\n \\Input{$G(\\mathcal{X},\\mathcal{E})$, $\\mathcal{H}$}\n Set initial weights $w_1(h)\\leftarrow 1$ for all experts $h\\in \\mathcal{H}$\\;\n Set discount factor $\\gamma=\\frac{1}{e}$, threshold $\\theta=\\max\\left\\{\\frac{1}{\\Delta+1+\\frac{1}{\\beta}},\\ \\frac{1}{\\Delta^2+2}\\right\\}$\\;\n \\For{$t=1,2,\\cdots$}{\n \\tcc{The learner commits to a classifier $h_t$ that is constructed as follows:}\n \\For{$v\\in V$}{\n Let $W_t^+(v) = \\sum_{h\\in\\mathcal{H}:h(v)=+1}w_t(h)$, $W_t^-(v) = \\sum_{h\\in\\mathcal{H}:h(v)=-1}w_t(h)$, and $W_t= W_t^+(v)+W_t^-(v)$\\;\n \\eIf{$W_t^+(v)\\geq \\theta\\cdot W_t$}{\n $h_t(v)\\leftarrow +1$\\;\n }\n {\n $h_t(v)\\leftarrow -1$\\;\n }\n }\n \\tcc{Unlabeled example $v_t$ is observed.}\n output prediction $h_t(v_t)$\\;\n \\tcc{The true label $y_t$ and group membership $c_t$ are observed}\n \\tcc{If there was a mistake:}\n \\If{$h_t(v_t)\\neq y_t$}{\n \\eIf{$h_t(v_t)=+1$}{\n for all $h\\in \\mathcal{H}:h(v_t)=+1$, $w_{t+1}(h)\\leftarrow \\gamma\\cdot 
w_t(h)$\\tcp*{false positive} \n }\n {\n \\eIf{$c_t=A$}{$\\mathcal{H'}\\leftarrow \\{h\\in \\mathcal{H}: \\forall x\\in N^2[v_t], h(x)=-1\\}$\\tcp*{$N^2[\\cdot]$ is the 2-hop neighborhood}}\n {$\\mathcal{H'}\\leftarrow \\{h\\in \\mathcal{H}: \\forall x\\in N[v_t], h(x)=-1\\}$}\n If $h\\in \\mathcal{H}'$, $w_{t+1}(h)\\leftarrow\\gamma\\cdot w_t(h)$; otherwise $w_{t+1}(h)\\leftarrow w_t(h)$.\\tcp*{false negative}\n }\n }\n }\n \\caption{Biased weighted majority-vote algorithm for two populations.}\n \\label[algo]{alg:two-populations}\n \\end{algorithm}\n\n \\begin{theorem}\n \\label{thm:two-populations}\n In the setting of two populations, where an agent belongs to population $B$ with probability $\\beta$, \\Cref{alg:two-populations} achieves the following expected mistake bound:\n \\begin{align*}\n \\E[\\mathsf{Mistake}(T)]\\le e\\cdot\\min\\left\\{\\Delta+1+\\frac{1}{\\beta},\\ \\Delta^2+2\\right\\}\\left(\\ln|\\mathcal{H}|+\\E[\\mathsf{OPT}]\\right).\n \\end{align*}\n \\end{theorem}\n\n\\begin{remark} \n In \\Cref{thm:two-populations}, when all agents can make two hops (i.e., $\\beta=0$), the mistake bound reduces to the guarantee provided by \\Cref{thm:biased-weighted-maj-vote-mistake-bound} with $\\Delta^2$ as the maximum degree. In this case, \\Cref{alg:two-populations} is equivalent to \\Cref{alg:biased-weighted-maj-vote} running on the expanded neighborhood graph $\\widetilde{G}$ in which every two nodes of distance at most two are connected by an edge. 
Here, $\\Delta^2$ is an upper bound on the maximum degree of $\\widetilde{G}$. In contrast, when all agents can only make one hop (i.e., $\\beta=1$), the problem reduces to the baseline model, and \\Cref{thm:two-populations}'s guarantee becomes the same as that of \\Cref{thm:biased-weighted-maj-vote-mistake-bound} with the same set of parameters.\n For values of $\\beta$ between 0 and 1, the mistake bound smoothly interpolates between the guarantees of the two extreme cases.\n\\end{remark}\n \\begin{proof}[Proof of~\\Cref{thm:two-populations}]\n \n We show that whenever a mistake is made, we can reduce the total weight of experts ($W_t$) by a constant fraction in expectation.\n\n First, consider the case of a false positive. \n Since $h_t(v_t)=+1$, the total weight of experts that predict positive on $v_t$ is at least $\\theta W_t$, and the weight of each of them is multiplied by the discount factor $\\gamma$; that is, a $\\lambda\\triangleq 1-\\gamma$ fraction of their weight is removed. \n Let $\\mathcal{F}_t$ be the $\\sigma$-algebra generated by the random variables up to time $t$; then\n we have \n \\begin{align}\n \\E[W_{t+1}\\ |\\ \\mathcal{F}_{t-1},\\text{ false positive}]\\le W_t (1-\\lambda \\theta).\n \\label{eq:cut-false-positive}\n \\end{align}\n\n Next, consider the case of a false negative. Since $h_t(v_t)=-1$, we know that the agent did not move, i.e., $v_t=u_t$. The algorithm updates as follows: if $c_t=B$, it reduces the weight of experts who predict negative on all the nodes in the one-hop neighborhood of $v_t$, i.e., $N[v_t]$; if $c_t=A$, then it reduces the weight of experts who predict negative on all the nodes in the two-hop neighborhood of $v_t$, i.e., $N^2[v_t]$. 
We claim that:\n \n \\begin{align*}\n &\\frac{\\Pr(c_t=B\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})}{\\beta}\\ge \\frac{\\Pr(c_t=A\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})}{1-\\beta}\\\\ \\Rightarrow\\ &\\Pr(c_t=B\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})\\ge\\beta.\n \\end{align*}\n To see this, we can use Bayes' rule to calculate the conditional probability of group assignments: for $X\\in \\{A,B\\}$, we have\n \\begin{align}\n \\Pr(c_t=X\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})=\\frac{\\Pr(\\text{false negative}\\ |\\ \\mathcal{F}_{t-1},c_t=X)\\cdot\\Pr(c_t=X\\ |\\ \\mathcal{F}_{t-1})}{\\Pr(\\text{false negative}\\ |\\ \\mathcal{F}_{t-1})}.\n \\end{align}\n Since the group membership $c_t$ is independently realized after the adversary chooses $(u_t,y_t)$, we have\n \\begin{align*}\n &\\frac{\\Pr(c_t=B\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})}{\\beta}-\\frac{\\Pr(c_t=A\\ |\\ \\mathcal{F}_{t-1},\\text{false negative})}{1-\\beta}\\\\\n =&\\frac{1}{\\Pr(\\text{false negative}\\ |\\ \\mathcal{F}_{t-1})}\\Big(\\Pr(\\text{false negative}\\ |\\ \\mathcal{F}_{t-1},c_t=B)-\\Pr(\\text{false negative}\\ |\\ \\mathcal{F}_{t-1},c_t=A)\\Big)\\ge0,\n \\end{align*}\n where the last step is because agents of population $A$ have more manipulation power, so under every possible classifier, group $A$ is able to get classified as positive whenever group $B$ is; therefore, group $A$ agents are less likely to become false negatives. We have thus established the claim.\n\n Now we turn to the total weight that is reduced in this scenario. If $c_t=A$, then there are at most $(\\Delta^2+1)$ nodes in the two-hop neighborhood, all of which are predicted negative. Therefore, the total weight of experts who predict negative on all of them is at least $W_t(1-\\theta(\\Delta^2+1))_+$. 
On the other hand, if $c_t=B$, then the total weight of experts who predict negative on the one-hop neighborhood is at least $W_t(1-\\theta(\\Delta+1))_+$.\n Putting the two cases together and conditioning on a false negative, the expected fraction of the total weight that gets discounted is at least\n \\begin{align}\n &\\Pr(c_t=B\\ |\\ \\text{false negative},\\mathcal{F}_{t-1})\\cdot {(1-(\\Delta+1)\\theta)_+}\n \\nonumber\\\\&\\qquad\\qquad\n +\\Pr(c_t=A\\ |\\ \\text{false negative},\\mathcal{F}_{t-1})\\cdot (1-(\\Delta^2+1)\\theta)_+\\nonumber\\\\\n \\ge& \\max\\left\\{\\beta(1-(\\Delta+1)\\theta)_+,(1-(\\Delta^2+1)\\theta)_+\\right\\},\n \\label{eq:tttmp}\n \\end{align}\n where the first term in \\eqref{eq:tttmp} is due to the claim we just established, and the second term follows from $(1-(\\Delta+1)\\theta)_+\\ge(1-(\\Delta^2+1)\\theta)_+$ together with\n $\\Pr(c_t=B\\ |\\ \\text{false negative},\\mathcal{F}_{t-1})+\\Pr(c_t=A\\ |\\ \\text{false negative},\\mathcal{F}_{t-1})=1$.\n From \\Cref{eq:tttmp}, we obtain\n \\begin{align}\n \\E[W_{t+1}\\ |\\ \\mathcal{F}_{t-1},\\text{ false negative}]\\le W_t\\left(1-\\lambda\\cdot\\max\\left\\{\\beta(1-(\\Delta+1)\\theta)_+,(1-(\\Delta^2+1)\\theta)_+\\right\\}\\right).\\label{eq:cut-false-negative}\n \\end{align}\nFinally, we optimize the threshold $\\theta$ to equalize the decrease in the cases of a false positive (\\Cref{eq:cut-false-positive}) and a false negative (\\Cref{eq:cut-false-negative}). 
As a result, the optimal $\\theta$ is obtained by solving the following equation:\n\\begin{align}\n \\underbrace{\\theta}_{f(\\theta)}=\\max\\left\\{\\underbrace{\\beta(1-(\\Delta+1)\\theta)}_{f_1(\\theta)},\\,\\underbrace{1-(\\Delta^2+1)\\theta}_{f_2(\\theta)},\\,0\\right\\}.\\nonumber\n\\end{align}\nSince $f,f_1$, and $f_2$ are all linear functions where $f_1,f_2$ have a negative slope and $f$ has a positive slope, the intersection between $f$ and $\\max\\{f_1,f_2\\}$ coincides with the maximum value between the intersection of $\\{f,f_1\\}$ and the intersection of $\\{f,f_2\\}$. Moreover, $\\theta=0$ is not a valid solution because the other two intersections have strictly positive values.\nThus we obtain\n\\begin{align*}\n \\theta\\triangleq\\max\\left\\{\\frac{1}{\\Delta+1+\\frac{1}{\\beta}},\\ \\frac{1}{\\Delta^2+2}\\right\\}.\n\\end{align*}\nCorrespondingly, on each mistake, the optimal amount of decrease in the total weight is\n\\begin{align*}\n \\E\\left[\\left.\\frac{W_{t+1}}{W_t}\\ \\right|\\ \\mathcal{F}_{t-1},\\text{ mistake}\\right]\\le \\min\\left\\{1-\\frac{\\lambda}{\\Delta+1+\\frac{1}{\\beta}},\\ 1-\\frac{\\lambda}{\\Delta^2+2}\\right\\}.\n\\end{align*}\nBy Jensen's inequality, we further obtain that if a mistake is made at time $t$, then\n\\begin{align}\n \\E\\left[\\ln \\left.\\frac{W_{t+1}}{W_t}\\ \\right|\\ \\mathcal{F}_{t-1},\\text{ mistake}\\right]\\le&\\ln\\E\\left[\\left.\\frac{W_{t+1}}{W_t}\\ \\right|\\ \\mathcal{F}_{t-1},\\text{ mistake}\\right]\\nonumber\\\\\n \\le &\\ln\\left(\\min\\left\\{1-\\frac{\\lambda}{\\Delta+1+\\frac{1}{\\beta}},\\ 1-\\frac{\\lambda}{\\Delta^2+2}\\right\\}\\right).\\label{eq:decrease:mistake}\n\\end{align}\nThe last step is to telescope \\Cref{eq:decrease:mistake} over all mistakes.\nNote that the algorithm only penalizes the experts that make mistakes, so the same argument as \\Cref{thm:biased-weighted-maj-vote-mistake-bound} implies that $W_T\\ge\\gamma^{\\mathsf{OPT}}$. 
Thus we have\n\\begin{align*}\n\\E[\\ln(\\gamma^{\\mathsf{OPT}})-\\ln|\\mathcal{H}|] \\le \\E[\\ln W_T-\\ln|\\mathcal{H}|]\n \\le \\E[\\mathsf{Mistake}(T)]\\cdot\\ln\\left(\\min\\left\\{1-\\frac{\\lambda}{\\Delta+1+\\frac{1}{\\beta}},\\ 1-\\frac{\\lambda}{\\Delta^2+2}\\right\\}\\right).\n\\end{align*}\n \n Rearranging the above inequality, and using $\\lambda=1-\\gamma=1-1\/e\\ge 1\/e$ together with $\\ln(1-x)\\leq -x$, gives us an expected mistake bound of \n \\begin{align*}\n \\E[\\mathsf{Mistake}(T)]\\le e\\cdot\\min\\left\\{\\Delta+1+\\frac{1}{\\beta},\\ \\Delta^2+2\\right\\}\\left(\\ln|\\mathcal{H}|+\\E[\\mathsf{OPT}]\\right).\n \\end{align*}\n This completes the proof.\n \\end{proof}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nColour is a vital component in the successful delivery of information, whether it is\nin marketing a commercial product \\cite{SableA10}, designing webpages \\cite{Meier88,Pribadi90}, or visualizing information \\cite{Christ75,CardMS99}.\nSince real-world concepts have associations with certain colour categories\n(for example, {\\it danger} with red, and {\\it softness} with pink),\ncomplementing linguistic and non-linguistic information with appropriate colours has a number of benefits, including:\n(1) strengthening the message (improving semantic coherence), \n(2) easing cognitive load on the receiver,\n(3) conveying the message quickly, and\n(4) evoking the desired emotional response.\nConsider, for example, the use of red in stop signs.\nDrivers are able to recognize the sign faster, and it evokes a subliminal emotion pertaining to danger, which is entirely appropriate in the context.\nThe use of red to show areas of high crime rate in a visualization is another example of good use of colour to draw an emotional response.\nOn the other hand, improper use of colour can be more detrimental to understanding than using no colour \\cite{Marcus82,Meier88}.\n\nMost languages have expressions involving colour, and many of 
these express sentiment. Examples in English include:\n{\\it green with envy}, {\\it blue blood} (an aristocrat), {\\it greener pastures} (better avenues),\n{\\it yellow-bellied} (cowardly), {\\it red carpet} (special treatment), and {\\it looking through rose-tinted glasses} (being optimistic).\nFurther, new expressions are continually coined, for example,\n{\\it grey with uncertainty} from Bianca Marsden's poem {\\it Confusion}.\\footnote{http:\/\/www.biancaday.com\/confusion.html}\nThus, knowledge of concept--colour associations may also be useful for automatic natural language systems\nsuch as textual entailment, paraphrasing, machine translation, and sentiment analysis.\n\n A word has a strong association with a colour when the colour is a salient feature of the concept\n the word refers to, or because the word is related to such a concept.\nMany concept--colour associations, such as {\\it swan} with white and {\\it vegetables} with green, involve\nphysical entities.\nHowever, even abstract notions\nand emotions\nmay have colour associations ({\\it honesty}--white, {\\it danger}--red, {\\it joy}--yellow, {\\it anger}--red).\nFurther, many associations are culture-specific \\cite{Gage99,Chen05}. 
For example, {\\it prosperity} is associated with\nred in much of Asia.\n\nUnfortunately, there exists no lexicon with any significant coverage that captures concept--colour associations,\nand a number of questions remain unanswered, such as the extent to which humans agree on these associations,\nand whether physical concepts are more likely to have a colour association than abstract ones.\nWe expect that word--colour associations manifest themselves as co-occurrences\nin text and speech, but there have been no studies to show the extent to which words co-occur more with associated colours\nthan with other colours.\n\nIn this paper, we describe how we created a large word--colour association lexicon by crowdsourcing with effective quality control measures (Section 3).\nWe used a word-choice question to guide the annotators toward the desired senses of the target words, and also\nto determine if the annotators know the meanings of the words.\n\nWe conducted several experiments to measure the consensus in word--colour associations, and how these associations manifest themselves in language.\nSpecifically, we show that:\n\n \\begin{myitemize}\n \\item More than 30\\% of terms\n have a strong colour association (Section 4). \n\\item \nAbout 33\\% of thesaurus categories have strong colour associations (Section 5).\n\\item Abstract terms have colour associations almost as often as physical entities do (Section 6).\n\\item There is a strong association of emotions and polarities with colours (Section 7).\n\\item Word--colour association manifests itself as closeness in WordNet (to a smaller extent), and as high co-occurrence in text (to a greater extent) (Section 8).\n\\end{myitemize}\n\\noindent Finally, we present an automatic method to determine word--colour association that relies on co-occurrence and polarity cues, but\nno labeled information about word--colour associations.\nIt obtains an accuracy of more than 60\\%. 
Comparatively, the random-choice and most-frequent-class supervised\nbaselines obtain only 9.1\\% and 33.3\\%, respectively.\nSuch approaches can be used \nfor creating similar lexicons in other languages.\n\n\\section{Related Work}\n The relation between language and cognition has received considerable attention\n over the years, mainly on answering whether language impacts thought,\n and if so, to what extent. Experiments with colour categories have been used both to show\n that language has an effect on thought \\cite{BrownL54,Ratner89}\n and that it does not \\cite{Bornstein85}. However, that line of work does not\nexplicitly deal with word--colour associations. \nIn fact, we did not find any other academic work that gathered large sets of word--colour associations. There is, however,\na commercial endeavor---Cymbolism\\footnote{http:\/\/www.cymbolism.com\/about}.\n\nChild et al.\\@ \\shortcite{Child68}, Ou et al.\\@ \\shortcite{Ou11}, and others show that people of different ages and genders\nhave different colour preferences. (See also the online study by Joe Hallock\\footnote{http:\/\/www.joehallock.com\/edu\/COM498\/preferences.html}.)\nIn this work, we are interested in identifying words that have a strong association with a colour\ndue to their meaning; associations that are not affected by age and gender preferences.\n\nThere is substantial work on inferring the emotions evoked by colour \\cite{Luscher69,Xin04,Kaya04}.\nStrapparava and Ozbal \\shortcite{StrapparavaO10} compute corpus-based semantic similarity between emotions and colours.\n We combine the word--colour and word--emotion association lexicons to determine the correlation between \n emotion-associated words and colours.\n\nBerlin and Kay \\shortcite{BerlinK69}, and later Kay and Maffi \\shortcite{KayM99}, showed that colour\nterms often appear in languages in certain groups. If a language has only two colour terms, then they\nare white and black. 
\nIf a language has three colour terms, then they are white, black, and red.\nIf a language has four colour terms, then they are white, black, red, and green, and so on\nup to eleven colours. From these groupings, the colours can be ranked as follows:\n \\begin{quote}\n1. white, 2. black, 3. red, 4. green, 5. yellow, 6. blue, 7. brown, 8. pink, 9. purple, 10. orange, 11. grey \\hspace{2.7cm} (1)\n\\end{quote}\n\\noindent We will refer to the above ranking as the {\\it Berlin and Kay (B\\&K) order}.\nThere are hundreds of different words for colours.\\footnote{See http:\/\/en.wikipedia.org\/wiki\/List\\_of\\_colors}\n To make our task feasible, we needed to choose\n a relatively small list of basic colours. We chose to\n use the eleven basic colour words of Berlin and Kay (1969).\n\nThe MRC Psycholinguistic Database \\cite{Coltheart81} has, among other information, the {\\it imageability\nratings} for 9240 words.\\footnote{http:\/\/www.psy.uwa.edu.au\/mrcdatabase\/uwa\\_mrc.htm}\nThe imageability rating is a score given by human judges that reflects\nhow easy it is to visualize the concept.\nIt is a scale from 100 (very hard to visualize) to 700 (very easy to visualize).\nWe use the ratings in our experiments to determine whether there is a correlation between imageability and\nstrength of colour association.\n\n\\begin{table*}[]\n\\centering\n {\\small\n\\begin{tabular}{l rrrr rrrr rrr}\n\\hline\n \n & white &black &red &green &yellow &blue &brown &pink &purple &orange &grey\\\\\n\\hline\n overall &11.9 &12.2 &11.7 &12.0 &11.0 &9.4 &9.6 &8.6 &4.2 &4.2 &4.6\\\\\n voted &22.7 &18.4 &13.4 &12.1 &10.0 &6.4 &6.3 &5.3 &2.1 &1.5 &1.3\\\\\n\\hline\n\\end{tabular}\n}\n \\vspace*{-2mm}\n\\caption{ Percentage of terms marked as being associated with each colour.}\n\\label{tab:col votes}\n\\end{table*}\n\n\n\n\n\\section{Crowdsourcing}\n\nAmazon's Mechanical Turk (AMT) is an online crowdsourcing platform that is especially\nwell suited for tasks that can be done over the 
Internet through a computer\nor a mobile device.\\footnote{Mechanical Turk: www.mturk.com}\nIt is already being used to obtain human annotation\non various linguistic tasks \\cite{SnowOJN08,CallisonBurch09}.\nHowever, one must define the task carefully to obtain annotations of\nhigh quality. Several checks must be put in place to ensure that random and\nerroneous annotations are discouraged, rejected, and re-annotated.\n\nWe used Mechanical Turk to obtain\nword--colour association annotations on a large scale.\nEach task is broken into small, independently solvable units called {\\it HITs\n(Human Intelligence Tasks)} and uploaded on the Mechanical Turk\nwebsite. \nThe people who provide responses to these HITs are called {\\it Turkers}. \nThe annotation provided by a Turker for a HIT is called an {\\it assignment}.\n\n\nWe used the {\\it Macquarie Thesaurus} \\cite{Bernard86} as the source for terms to be annotated.\nThesauri, such as the {\\it Roget's} and {\\it Macquarie}, group related words into categories.\nThe {\\it Macquarie} has about a thousand categories, each having about a hundred or so related terms.\nEach category has a {\\it head word} that best represents the words in it.\nThe categories can be thought of as coarse senses or concepts \\cite{Yarowsky92}.\nIf a word is ambiguous, then it is listed in more than one category.\nSince a word may have different colour associations when used in different senses,\nwe obtained annotations at the word-sense level.\nWe chose to annotate words that had \none to five senses in the {\\it Macquarie Thesaurus} and occurred\nfrequently in the {\\it Google N-gram Corpus}.\nWe annotated more than 10,000 of these word--sense pairs \nby creating HITs as described below. \n\nEach HIT has a set of questions, all of\nwhich are to be answered by the same person. 
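The term-selection step just described can be sketched in a few lines of Python; the data structures and the frequency threshold are our illustrative assumptions, not the authors' actual code:

```python
def select_terms(word_categories, ngram_freq, min_freq=1000):
    """Pick words with one to five thesaurus senses that also occur
    frequently in the n-gram corpus; emit one (word, category) unit
    per sense, since annotation is done at the word-sense level."""
    selected = []
    for word, categories in word_categories.items():
        if 1 <= len(categories) <= 5 and ngram_freq.get(word, 0) >= min_freq:
            selected.extend((word, cat) for cat in categories)
    return selected
```

Each selected (word, category) pair then becomes the basis of one HIT.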
\nWe requested annotations from five different Turkers for each HIT.\n(A Turker cannot attempt multiple assignments for the same term.)\nA complete HIT\nis shown below:\n\n\\rule{7.6cm}{0.4mm}\n\n{\\small\n \n\n\n\\vspace*{1mm}\n\\indent Q1. Which word is closest in meaning \nto {\\it sleep}?\n\n\n\\vspace*{1mm}\n \\begin{minipage}[t]{4mm}\n \\end{minipage}\n \\begin{minipage}[t]{1.7cm}\n \\begin{myitemize}\n \\item {\\it car}\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.9cm}\n \\begin{myitemize}\n \\item {\\it tree}\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.7cm}\n \\begin{myitemize}\n \\item {\\it nap}\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.6cm}\n \\vspace*{-0.7mm}\n \\begin{myitemize}\n \\item {\\it olive}\\\\\n \\end{myitemize}\n \\end{minipage}\n\n\\vspace*{-3mm}\nQ2. What colour is associated with {\\it sleep}?\n\n\n\n \\vspace*{1mm}\n \\begin{minipage}[t]{4mm}\n \\end{minipage}\n \\begin{minipage}[t]{1.7cm}\n \\begin{myitemize}\n \\item black\n \\item blue\n \\item brown\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.8cm}\n \\begin{myitemize}\n \\item green\n \\item grey\n \\item orange\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.7cm}\n \\begin{myitemize}\n \\item purple\n \\item pink\n \\item red\n \\end{myitemize}\n \\end{minipage}\n \\begin{minipage}[t]{1.9cm}\n \\begin{myitemize}\n \\item white\n \\item yellow\\\\\n \\end{myitemize}\n \\end{minipage}\n\n\\rule{7.6cm}{0.4mm}\\\\\n}\n\n\\vspace*{1mm}\n\\noindent \nQ1 is a word-choice question generated automatically by taking a near-synonym\nfrom the thesaurus and random distractors. 
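A question of this form can be generated automatically as sketched below; the distractor-sampling policy (three random words drawn from other categories) is our assumption for illustration:

```python
import random

def make_word_choice_question(target, thesaurus, rng=random):
    """Build a Q1-style question: one near-synonym of the target drawn
    from its thesaurus category, plus three random distractors drawn
    from other categories."""
    synonyms = [w for w in thesaurus[target] if w != target]
    near_synonym = rng.choice(synonyms)
    pool = [w for key, words in thesaurus.items() if key != target
            for w in words if w != target and w not in synonyms]
    options = rng.sample(pool, 3) + [near_synonym]
    rng.shuffle(options)
    return near_synonym, options

thesaurus = {"sleep": ["nap", "slumber"], "car": ["auto"],
             "tree": ["oak"], "olive": ["pimento"]}
answer, options = make_word_choice_question("sleep", thesaurus, random.Random(0))
```

An annotator who picks anything other than the near-synonym fails the question, and their Q2 response can be discarded.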
\nThe near-synonym also guides the annotator to the desired sense of the word.\nFurther, it encourages the annotator to think clearly about the target word's meaning; we believe this improves the quality of the annotations in Q2.\n If a word has multiple senses, that is, it is listed in more than one thesaurus category,\n then separate questionnaires are generated for each sense.\n Thus we obtain colour associations at a word-sense level.\n\nIf an annotator answers Q1 incorrectly,\nthen we discard information obtained from both Q1 and Q2. \nThus, even though we do not have correct answers to Q2, likely incorrect annotations are filtered out.\nAbout 10\\% of the annotations were discarded because of an incorrect answer to Q1. \nTerms with fewer than three valid annotations were removed from further analysis.\nEach of the remaining terms had, on average, 4.45 distinct annotations.\n\nThe colour options in Q2 were presented in random order.\nObserve that we do not provide a ``not associated with any colour'' option. This encourages\ncolour selection even if the annotator felt the association was weak. 
If there is no association\nbetween a word and a colour, then we expect low agreement amongst the annotators.\n The survey was approved by the ethics board at the authors' institution.\n\n\n\n\n\n\\section{Word--Colour Association}\n\nThe information from multiple annotators\nwas combined\nby taking the majority vote, resulting in a lexicon of 8,813 entries.\nEach entry contains a unique word--synonym pair (from Q1), the majority-voted colour, and\na confidence score---the number of votes for the majority colour divided by the total number of votes.\n(For the analyses in the rest of the paper, ties were broken by picking one colour at random.)\nA separate version of the lexicon that includes entries for all of the valid annotations by each of the annotators\nis also available.\\footnote{Please contact the author to obtain a copy of the lexicon.}\n\n \nThe first row, {\\it overall}, in Table \\ref{tab:col votes} shows the percentage of times different colours were\nassociated with the target term.\nThe second row, {\\it voted}, shows percentages after taking a majority vote from multiple annotators.\nObserve that even though the colour options were presented in random order,\nthe order of the most frequently associated colours\nis identical to the Berlin and Kay order\n(Section 2:(1)).\n\n\n\n\nTable \\ref{tab:col agreement} shows how often the size of the majority class in colour associations\nis one, two, three, four, and five.\nSince the annotators were given eleven colour options to choose from, \nif we assume independence, then the chance that none of the five annotators agrees with each other\n(majority class size of one) is $1 \\times 10\/11 \\times 9\/11 \\times 8\/11 \\times 7\/11 = 0.344$.\nThus, if there were no correlation among any of the terms and colours, then 34.4\\% of the time\nnone of the annotators would have agreed. 
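This chance-agreement figure is easy to verify directly:

```python
from fractions import Fraction

# Probability that five annotators, each independently and uniformly
# choosing one of eleven colours, all choose *different* colours
# (a majority class of size one): 1 x 10/11 x 9/11 x 8/11 x 7/11.
p_all_distinct = Fraction(1, 1)
for k in range(1, 5):
    p_all_distinct *= Fraction(11 - k, 11)

print(float(p_all_distinct))   # ~0.3442, i.e., 34.4%
```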
However, this happens only 15.1\\% of the time.\nA large number of terms have a majority class size $\\geq$ 2 (84.9\\%), and thus a greater-than-chance association with a colour.\nOne can argue that terms with a majority class size $\\geq$ 3 (32\\%) have {\\it strong} colour associations.\n\n\\begin{table}[t]\n\\centering\n {\\small\n\\begin{tabular}{rrrrrrr}\n\\hline\n \\multicolumn{7}{c}{\\bf majority class size} \\\\\n one &two &three &four &five &$\\geq$ two &$\\geq$ three\\\\\n\\hline\n 15.1 &52.9 &22.4 &7.3 &2.1 &84.9 &32.0 \\\\\n\\hline\n\\end{tabular}\n }\n \\vspace*{-2mm}\n\\caption{Percentage of terms in different majority classes.}\n\\label{tab:col agreement}\n\\end{table}\n\n\n\n\n\nBelow are some reasons why agreement values are much lower than those obtained for certain other tasks,\nfor example, part-of-speech tagging:\n\\begin{myitemize}\n\\item The annotators were not given a ``not associated with any colour'' option.\nLow agreement for certain instances is an indicator that these words have weak, if any, colour association.\n\\item Words are associated with colours to different degrees.\nSome words may be associated with more than one colour to comparable degrees, and there might be higher disagreement for such instances.\n\\item The target word is presented out of context. 
We expect higher agreement if\nwe provided words in particular contexts, but words can occur in innumerable contexts,\nand annotating too many instances of the same word is costly.\n\\end{myitemize}\n\\noindent Nonetheless, the term--colour association lexicon is useful for downstream applications because\nany of the following strategies may be employed: (1) choosing colour associations from only those instances\nwith high agreement, (2) assuming low-agreement terms have no colour association, (3) determining colour association of\na category through information from many words, as described in the next section.\n\n\n \\begin{figure*}[t]\n \\begin{center}\n \\includegraphics[width=1.4\\columnwidth]{band-labels.png}\n \\end{center}\n \n \n \\vspace*{-4mm}\n \\caption{Scatter plot of thesaurus categories. The area of high colour association is shaded. Some points are labeled.}\n \\label{fig:scatter}\n \\vspace*{-3mm}\n \\end{figure*}\n\n\n\n\\section{Category--Colour Association}\n\nWords within a thesaurus category may not be strongly associated with any colour, or they may each be associated with many different colours.\nWe now describe experiments to determine whether there exist categories where the semantic coherence carries over to a strong common association with one colour. \n\n We determine the strength of colour association of a category by first determining the colour $c$ most associated with the terms in it, \n and then calculating the ratio of the number of times a word from the category is associated with $c$ to the number of words in the category associated with any colour. \nOnly categories that had at least four words that also appear in the word--colour lexicon were considered; 535 of the 812 categories from {\\it Macquarie Thesaurus} met this condition. 
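The category-level strength computation described above amounts to the following; the lexicon and category shown are toy examples:

```python
from collections import Counter

def category_colour_strength(category_words, word_colour, min_words=4):
    """Return (colour, strength) for a thesaurus category: the most
    common colour among the category's words that appear in the
    word--colour lexicon, and the fraction of those words sharing it.
    Categories with fewer than min_words lexicon words are skipped."""
    colours = [word_colour[w] for w in category_words if w in word_colour]
    if len(colours) < min_words:
        return None
    colour, count = Counter(colours).most_common(1)[0]
    return colour, count / len(colours)

lexicon = {"leaf": "green", "grass": "green", "mint": "green", "ruby": "red"}
result = category_colour_strength(["leaf", "grass", "mint", "ruby", "xyzzy"], lexicon)
```

Here `result` is `("green", 0.75)`: three of the four lexicon words in the category share the colour green.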
\n\nIf a category has exactly four words that appear in the colour lexicon, and if all four words are associated with different colours, then the category has the lowest possible strength of colour association---0.25 (1\/4). \n19 categories had a score of 0.25. No category had a score less than 0.25.\nAny score above 0.25 shows more than random chance association with a colour. There were 516 such categories (96.5\\%). 177 categories (33.1\\%) had a score 0.5 or above, that is,\nhalf or more of the words in these categories are associated with one colour. We consider these to be strong associations,\nand a gold standard for automatic measures of association.\n\n\n\n\n\n\n\n\\begin{table*}[t]\n\\centering\n {\\small\n\\begin{tabular}{l rrrr rrrr rrr}\n\\hline\n &white &black &red &green &yellow &blue &brown &pink &purple &orange &grey\\\\\n\\hline\n\nanger words &2.1 &{\\bf 30.7} &{\\bf 32.4} &5.0 &5.0 &2.4 &6.6 &0.5 &2.3 &2.5 &9.9\\\\\nanticipation words &{\\bf 16.2} &7.5 &11.5 &{\\bf 16.2} &10.7 &9.5 &5.7 &5.9 &3.1 &4.9 &8.4\\\\\ndisgust words &2.0 &{\\bf 33.7} &{\\bf 24.9} &4.8 &5.5 &1.9 &9.7 &1.1 &1.8 &3.5 &10.5\\\\\nfear words &4.5 &{\\bf 31.8} &{\\bf 25.0} &3.5 &6.9 &3.0 &6.1 &1.3 &2.3 &3.3 &11.8\\\\\njoy words &{\\bf 21.8} &2.2 &7.4 &{\\bf 14.1} &13.4 &11.3 &3.1 &11.1 &6.3 &5.8 &2.8\\\\\nsadness words &3.0 &{\\bf 36.0} &{\\bf 18.6} &3.4 &5.4 &5.8 &7.1 &0.5 &1.4 &2.1 &16.1\\\\\nsurprise words &11.0 &13.4 &{\\bf 21.0} &8.3 &{\\bf 13.5} &5.2 &3.4 &5.2 &4.1 &5.6 &8.8\\\\\ntrust words &{\\bf 22.0} &6.3 &8.4 &14.2 &8.3 &{\\bf 14.4} &5.9 &5.5 &4.9 &3.8 &5.8\\\\\n\\hline\n\\end{tabular}\n }\n \\vspace*{-2mm}\n\\caption{Colour signature of emotive terms: percentage of terms associated with each colour. For example, 32.4\\% of the anger terms are associated with red. 
The two most associated colours are shown in bold.}\n\\label{tab:col sig1}\n \\vspace*{-1mm}\n\\end{table*}\n\n \\begin{table*}[t]\n \\centering\n\n {\\small\n \\begin{tabular}{l rrrr rrrr rrr}\n \\hline\n &white &black &red &green &yellow &blue &brown &pink &purple &orange &grey\\\\\n \\hline\n \n negative &2.9 \t\t&{\\bf 28.3} &{\\bf 21.6} \t&4.7 \t\t&6.9 \t&4.1 \t&{\\bf 9.4} &1.2 \t&2.5 \t&3.8 \t&{\\bf 14.1}\\\\\n positive &{\\bf 20.1} \t&3.9 \t&8.0 \t\t\t&{\\bf 15.5} \t&{\\bf 10.8} &{\\bf 12.0} &4.8 \t\t&{\\bf 7.8} &{\\bf 5.7} &{\\bf 5.4} &5.7\\\\\n \\hline\n \\end{tabular}\n }\n \\vspace*{-2mm}\n \\caption{Colour signature of positive and negative terms: percentage terms associated with each colour. For example, 28.3\\% of the negative terms are associated with black. The highest values in each column are shown in bold.}\n \\label{tab:col sig2}\n \\vspace*{-1mm}\n \\end{table*}\n\n\n\n\n\n\n\\section{Imageability and Colour Association}\n\n\nIt is natural for physical entities of a certain colour to be associated with that colour. However, abstract concepts such as {\\it danger} and {\\it excitability} are also associated with colours---red and orange, respectively.\\\\\n{\\mbox Figure \\ref{fig:scatter}} displays an experiment to determine whether there is a correlation between imageability and association with colour.\n\nWe define imageability of a thesaurus category to be the average of\nthe imageability ratings of words in it. We calculated imageability\nfor the 535 categories described in the previous section using\nonly the words that appear in the colour lexicon. Figure\n\\ref{fig:scatter} shows the scatter plot of these categories on\nthe imageability and strength of colour association axes. \nThe colour association was calculated as described in the previous section.\n\nIf higher imageability correlated with\ngreater tendency to have a colour association, then we would \nsee most of the points along the diagonal moving up from left to\nright. 
Instead, we observe that the strongly associated categories (points in the shaded region) are spread across the imageability axis,\nimplying that there is only weak, if any, correlation between\nimageability and strength of association with colour.\nImageability and colour association have a Pearson's product\nmoment correlation of 0.116, and a Spearman rank order correlation of\n0.102.\n\n\n\n\n\\section{The Colour of Emotion Words}\n\nEmotions such as joy and anger are abstract concepts dealing with one's psychological state.\nMohammad and Turney \\shortcite{MohammadT10} created a crowdsourced term--emotion association lexicon\nconsisting of associations of over 10,000 word-sense pairs with eight emotions---joy, sadness, anger, fear,\ntrust, disgust, surprise, and anticipation---argued to be the\nbasic and prototypical emotions \\cite{Plutchik80}.\nWe combine their term--emotion association lexicon \nand our term--colour lexicon to\ndetermine the colour signature of different emotions---the rows in Table \\ref{tab:col sig1}.\n The top two most frequently associated colours with each of the eight emotions are shown in bold.\nFor example, the ``anger\" row shows the percentage of anger terms associated with different colours. \n\nWe see that all of the emotions have strong associations with certain colours. \nObserve that anger is associated most with red.\nOther negative emotions---disgust, fear, sadness---go strongest with black.\nAmong the positive emotions: anticipation is most frequently associated with white and green;\njoy with white, green, and yellow; and trust with white, blue, and green.\nThus, colour can add to the emotional potency of visualizations.\n\nThe Mohammad and Turney \\shortcite{MohammadT10} lexicon also has associations with\npositive and negative polarity. \nWe combine these term--polarity associations with term--colour associations to\nshow the colour signature for positive and \nnegative terms---the rows of Table \\ref{tab:col sig2}. 
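Colour signatures like the rows of these tables are essentially a join of the two crowdsourced lexicons; a minimal sketch with hypothetical entries:

```python
from collections import Counter

def colour_signature(words, word_colour):
    """Percentage of the given words associated with each colour."""
    counts = Counter(word_colour[w] for w in words if w in word_colour)
    total = sum(counts.values())
    return {colour: 100.0 * n / total for colour, n in counts.items()}

# Toy stand-ins for the term--emotion and term--colour lexicons.
word_emotion = {"rage": "anger", "fury": "anger", "gloom": "sadness"}
word_colour = {"rage": "red", "fury": "red", "gloom": "black"}

anger_words = [w for w, e in word_emotion.items() if e == "anger"]
signature = colour_signature(anger_words, word_colour)
```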
\nWe observe that some colours tend to, more often than not, have strong positive associations (white, green, yellow, blue, pink, and orange),\nwhereas others have strong negative associations (black, red, brown, and grey).\n\n\n \\begin{table*}[t]\n \\centering\n \\resizebox{0.85\\textwidth}{!}{\n \\begin{tabular}{l rrrr rrrr rrr}\n \\hline\n \n colour \t& white \t&black \t&red \t&green \t&yellow \t&blue \t&brown \t&pink \t&purple \t&orange \t&grey\\\\\n\\hline\n\\# of senses\t\t&25\t\t&22\t\t&7\t\t&14\t\t&8\t\t&16\t\t&8\t\t&7\t\t&7\t\t&6\t\t&13\\\\ %\n \\hline\n \\end{tabular}\n }\n \\vspace*{-2mm}\n \\caption{The number of senses of colour terms in WordNet.}\n \\label{tab:colour senses}\n \\vspace*{1mm}\n \\end{table*}\n\n \\begin{table*}[t]\n \\centering\n \\resizebox{0.97\\textwidth}{!}{\n \\begin{tabular}{lr rrrr rrrr rrr rr}\n \\hline\n \n& \t\t\t& white \t&black \t&red \t&green \t&yellow \t&blue \t&brown \t&pink \t&purple \t&orange \t&grey \t& &$\\rho$\\\\\t\n\\multicolumn{2}{r}{B\\&K rank:}\t&1\t\t&2\t\t&3\t\t&4\t\t&5\t\t\t&6\t\t&7\t\t&8\t\t&9\t\t\t&10\t\t\t&11\t\t& &\\\\\n\\hline\n\\multirow{2}{*}{\\it BNC} \t&freq: \t&1480\t\t&3460\t\t&2070\t\t&1990\t&270\t&1430\t&1170\t&450\t&180\t&360\t&800\t& &\\rule{0pt}{14pt}\t\t\t\\\\\n& rank:\t\t&4\t\t\t&1\t\t\t&2\t\t\t&3\t\t&10\t\t&5\t\t&6\t\t&8\t\t&11\t\t&9\t\t&7\t\t& &0.727\t\t\t\\\\\n\\multirow{2}{*}{\\it GNC} \t&freq: \t&205\t\t&239\t\t&138\t\t&106\t&80\t\t&123\t&63\t\t&41\t\t&16\t\t&36\t\t&18\t\t& &\\rule{0pt}{14pt} \\\\\n& \trank:\t&2\t\t\t&1\t\t\t&3\t\t\t&5\t\t&6\t\t&4\t\t&7\t\t&8\t\t&11\t\t&9\t\t&10\t\t& &0.884\t\t\t\\\\\n\\multirow{2}{*}{\\it GBC} &freq: &233\t\t&188\t\t&130\t\t&86\t\t&44\t\t&75\t\t&72\t\t&14\t\t&11\t\t&19\t\t&22\t\t& &\\rule{0pt}{14pt} \t\t\\\\\n& rank:\t\t&1\t\t\t&2\t\t\t&3\t\t\t&4\t\t&7\t\t&5\t\t&6\t\t&9\t\t&10\t\t&11\t\t&8\t\t& &0.918\t\t\t\\\\\n \\hline\n \\end{tabular}\n }\n \\vspace*{-2mm}\n \\caption{Frequency and ranking of colour terms per 1,000,000 words in the {\\it British 
National Corpus (BNC), Google N-gram Corpus (GNC),} and {\\it Google Books Corpus (GBC)}. The last column lists the Spearman rank order correlation ($\\rho$) of the rankings with the Berlin and Kay (B\\&K) ranks. }\n \\label{tab:uni colours}\n \\vspace*{-3mm}\n \\end{table*}\n\n\n\n\n\n\\section{Manifestation of Concept--Colour Association in WordNet and in Text}\n\n\\subsection{Closeness in WordNet}\nColour terms are listed in WordNet and, interestingly, they are fairly ambiguous, appearing in many different synsets\n(see Table \\ref{tab:colour senses}).\nA casual examination of WordNet reveals that some synsets (or concepts) are\nclose to their associated colour's synset. For example, {\\it darkness} is a hypernym of black\nand {\\it inflammation} is one hop away from red.\nIt is plausible that if a concept is strongly associated with a certain colour, then \nsuch concept--colour pairs will be close to each other in a semantic\nnetwork such as WordNet.\nIf so, the semantic closeness of a word with each of the eleven basic colours in WordNet\ncan be used to automatically determine the colour most associated with the 177 thesaurus categories\nfrom the gold standard described earlier in Section 5.\nWe determine closeness using two similarity measures---Jiang and Conrath \\shortcite{JiangC97} and Lin \\shortcite{Lin97}---and two relatedness measures---Lesk \\cite{BanerjeeP03} and gloss vector overlap \\cite{PedersenPM04a}---from the\nWordNet Similarity package.\n\nFor each thesaurus category--colour pair, we summed the WordNet closeness of each of the terms in the category to\nthe colour. The colour with the highest sum is chosen as the one closest to the thesaurus category.\nSections (c) and (d) of Table~\\ref{tab:auto results} show how often the closest colours are also the colours most associated with the gold standard categories.\nSection (a) lists some unsupervised baselines. 
The random-choice baseline is the score obtained when a colour is chosen at random (1\/11 = 9.1\\%).\nAnother baseline is a system that always chooses the most frequent colour in a corpus.\nSection (a) reports three such baseline scores obtained by choosing the most frequently occurring colour in three separate corpora.\nSection (b) lists a supervised baseline obtained by choosing the colour most commonly associated with the categories in the gold standard.\nThe automatic measures listed in sections (c) through (f) do not have access to this information.\n\nObserve that the relatedness measures \nare markedly better\nthan the similarity measures\nat identifying the true associated colour.\nYet, for a majority of the thesaurus categories \nthe closest colour in WordNet is not the most associated colour.\n\n\n\n\\subsection{Co-occurrence in Text}\nPhysical entities that typically have a certain colour tend to be associated with that colour. For example, leaves are associated with green.\nIntuition suggests that these entities will co-occur with the associated colours more often than with any other colour.\nAs language has expressions such as {\\it green with envy} and {\\it feeling blue}, we also expect\nthat certain abstract notions, such as {\\it envy} and {\\it sadness}, will co-occur with their\nassociated colours, green and blue respectively, more often than with any other colour.\nWe now describe experiments to determine the extent to which target concepts\nco-occur in text most often with their associated colours.\n\nWe selected three corpora to investigate occurrences of colour terms: the {\\it British National Corpus (BNC)} \\cite{Burnard00u}, \nthe {\\it Google N-gram Corpus (GNC)}, and the {\\it Google Books Corpus (GBC)} \\cite{GBC}.\\footnote{The {\\it BNC}\nis available at: http:\/\/www.natcorp.ox.ac.uk.\\\\ The {\\it GNC} is available through the Linguistic Data Consortium.\\\\ The {\\it GBC} is available at 
http:\/\/ngrams.googlelabs.com\/datasets.}\nThe {\\it BNC}, a 100 million word corpus, is considered to be fairly balanced with text from various domains.\nThe {\\it GNC} is a trillion-word web corpus.\nThe {\\it GBC} is a digitized version of about 5.2 million books,\nand the English portion has about 361 billion words.\nThe {\\it GNC} and {\\it GBC} are distributed as collections of 1-gram to 5-gram files.\n\nTable \\ref{tab:uni colours} shows the frequencies and ranks of the eleven basic colour terms in the {\\it BNC} and the unigram files\nof {\\it GNC} and {\\it GBC}. The ranking is from the most frequent to the least frequent colour in the corpus.\nThe last column lists the Spearman rank order correlation ($\\rho$) of the rankings with the Berlin and Kay ranks \\shortcite{BerlinK69} (listed in Section 2:(1)).\nObserve that the order of the colours from most frequent to least frequent in the {\\it GNC} and {\\it GBC} has a strong correlation\nwith the order proposed by Berlin and Kay, especially so for the rankings obtained from counts in the {\\it Google Books Corpus}.\n\n\\begin{table}[t]\n\n\\centering\n \\resizebox{0.5\\textwidth}{!}{\n\\begin{tabular}{lr}\n\\hline\n Automatic method for choosing colour &Accuracy \\\\\n\\hline\n\t\t(a) Unsupervised baselines: \\rule{0pt}{10pt} & \\\\\n\t\t$\\;\\;\\;\\;\\;$ - randomly choosing a colour \t\t\t&9.1\\\\\n\t\t$\\;\\;\\;\\;\\;$ - most frequent colour in {\\it BNC} (black)\t\t\t&23.2\\\\\n\t\t$\\;\\;\\;\\;\\;$ - most frequent colour in {\\it GNC} (black)\t\t\t&23.2\\\\\n\t\t$\\;\\;\\;\\;\\;$ - most frequent colour in {\\it GBC} (white)\t\t&33.3\\\\\n\t\t(b) Supervised baseline: \\rule{0pt}{10pt} & \\\\\n\t\t\\multicolumn{2}{l}{$\\;\\;\\;\\;\\;$ - colour most often associated }\\\\\n\t\t$\\;\\;\\;\\;\\;\\;\\;$ with categories (white) \t\t\t\t\t\t\t\t\t&33.3\\\\\n (c) WordNet similarity measures: \\rule{0pt}{10pt} & \\\\\n\t\t$\\;\\;\\;\\;\\;$ - Jiang--Conrath measure \t\t\t \t&15.7\\\\\n\t\t$\\;\\;\\;\\;\\;$ - Lin's 
measure \t\t\t\t\t \t&15.7\\\\\n (d) WordNet relatedness measures: \\rule{0pt}{10pt} & \\\\\n\t\n\t\t$\\;\\;\\;\\;\\;$ - Lesk measure \t\t\t\t\t \t&24.7\\\\\n\t\t$\\;\\;\\;\\;\\;$ - gloss vector measure \t\t\t \t&28.6\\\\\n (e) Co-occurrence in text: \\rule{0pt}{10pt} \t\t\t\t\t\t\t& \\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word)$ in {\\it BNC} \t\t\t\t&31.4\\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word)$ in {\\it GNC} \t\t\t&37.9\\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word)$ in {\\it GBC} \t\t\t\t&38.3\\\\\n \\multicolumn{2}{l}{(f) Co-occurrence and polarity: \\rule{0pt}{10pt}}\\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word,polarity)$ in {\\it BNC} \t\t\t\t\t\t\t&51.4\\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word,polarity)$ in {\\it GNC} \t\t\t\t&47.6\\\\\n\t\t$\\;\\;\\;\\;\\;$ - $p(colour|word,polarity)$ in {\\it GBC} \t\t\t\t&{\\bf 60.1}\\\\\n \n\\hline\n\\end{tabular}\n }\n\\vspace*{-3mm}\n\\caption{Percentage of times the colour chosen by an automatic method is also the\ncolour identified by annotators as most associated with a thesaurus category.\n}\n\\label{tab:auto results}\n\\vspace*{-2mm}\n\\end{table}\n\n\n\n\n\nFor each of the 177 gold standard thesaurus categories,\nwe determined the conditional probability of co-occurring with different colour terms in the {\\it BNC, GNC,} and {\\it GBC}. \nThe total co-occurrence frequency of a category with a colour was calculated by summing up the co-occurrence\nfrequency of each of the terms in it with the colour term. \nWe used a four-word window as context. 
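The windowed co-occurrence counting can be sketched as follows; a toy tokenized text stands in for the corpora, whose real counts come from the BNC and the n-gram files:

```python
COLOURS = ["white", "black", "red", "green", "yellow", "blue",
           "brown", "pink", "purple", "orange", "grey"]

def most_cooccurring_colour(tokens, target, window=4):
    """Pick the colour term that occurs most often within `window`
    words of the target term, i.e., the argmax of p(colour | target)."""
    counts = {c: 0 for c in COLOURS}
    for i, tok in enumerate(tokens):
        if tok == target:
            neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            for n in neighbours:
                if n in counts:
                    counts[n] += 1
    return max(counts, key=counts.get) if any(counts.values()) else None

text = "the leaves turned green while the green moss grew near the blue pond".split()
```

Here `most_cooccurring_colour(text, "leaves")` returns `"green"`, since green is the only colour term within four words of {\it leaves}.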
The counts from {\\it GNC} and {\\it GBC} were determined using the five-gram files.\nSection (e) in Table~\\ref{tab:auto results} shows how often the colour with the highest conditional probability is also the colour most associated with a category.\nThese numbers are higher than the baselines (a and b), as well as the scores obtained by the WordNet approaches (c and d).\n\nFrom Table \\ref{tab:col sig2} in Section 7, we know that some colours tend to be strongly positive and others negative.\nWe wanted to determine how useful these polarity cues can be in identifying \nthe colour most associated with a category. We used the automatically generated Macquarie Semantic Orientation Lexicon (MSOL) \\cite{MohammadDD09}\nto determine if a thesaurus category is positive or negative.\\footnote{MSOL is available at http:\/\/www.umiacs.umd.edu\/$\\sim$saif\/\\\\WebPages\/ResearchInterests.html\\#semanticorientation.}\nA category is marked as negative if it has more negative words than positive words; otherwise it is marked as positive.\nIf a category is positive, then co-occurrence cues were used to select a colour from only the positive colours (white, green, yellow,\nblue, pink, and orange), whereas if a category is negative, then co-occurrence cues were used to select a colour from only the negative colours (black, red, brown,\nand grey).\nSection (f) of Table~\\ref{tab:auto results} provides results with this method.\nObserve that these numbers are a marked improvement over the Section (e) numbers, suggesting\nthat polarity cues can be very useful in determining concept--colour association.\n\nCounts from the {\\it GNC} yielded poorer results compared to the much smaller {\\it BNC} and \nthe somewhat smaller {\\it GBC}, possibly because frequency counts from {\\it GNC} are available\nonly for those n-grams that occur at least thirty times. 
\nFurther, {\\it GBC} and {\\it BNC} are both collections of edited texts, and so \nare expected to be cleaner than the {\\it GNC}, which is a corpus extracted from the World Wide Web.\n\n\n\n\n\n\n\n\n\\section{Conclusions and Future Work}\nWe created a large word--colour association lexicon by crowdsourcing, which we will make publicly available.\nWord-choice questions were used to guide the annotators to the desired senses of the target words, and also as gold\nquestions for identifying malicious annotators (a common problem in\ncrowdsourcing).\nWe found that more than 32\\% of the words and 33\\% of the {\\it Macquarie Thesaurus} categories have a strong association with one of the eleven colours\n chosen for the experiment. \nWe analyzed abstract concepts, emotions in particular, and showed that they\ntoo have strong colour associations.\nThus, using the right colours in tasks such as information visualization and web development can not only improve semantic coherence but also\ninspire the desired emotional response.\n\nInterestingly, we found that the frequencies of colour associations follow the same order in which colour terms occur in different languages \\cite{BerlinK69}.\nThe frequency-based ranking of colour terms in the \n{\\it BNC, GNC}, and {\\it GBC} also had a high correlation with the Berlin and Kay order.\n\nFinally, we showed that automatic methods that rely on co-occurrence and polarity cues alone, and\nno labeled information of word--colour association,\ncan accurately estimate the colour associated with a concept more than 60\\% of the time. 
The random choice and supervised\nbaselines for this task are 9.1\\% and 33.3\\%, respectively.\nWe are interested in using word--colour associations as a feature in sentiment analysis and for paraphrasing.\n\n\n\n\\section*{Acknowledgments}\n\\vspace*{-2mm}\nThis research was funded by the National Research Council Canada.\nGrateful thanks to Peter Turney, Tara Small, and the reviewers for many wonderful ideas.\n Thanks to the thousands of people who answered the colour survey with diligence and care.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzocf b/data_all_eng_slimpj/shuffled/split2/finalzocf new file mode 100644 index 0000000000000000000000000000000000000000..735060aa4e2cf410e0ab798e4ad82f31e6e6ded4 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzocf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\subsection{Overview}\n\nTwo multi-kilometer interferometric gravitational-wave (GW) detectors\nare presently in operation: LIGO\\footnote{http:\/\/www.ligo.caltech.edu}\nand Virgo\\footnote{http:\/\/www.virgo.infn.it}. They are sensitive to\nGWs produced by the coalescence of two neutron stars to a distance of\nroughly 30 Mpc, and to the coalescence of a neutron star with a\n$10M_{\\odot}$ black hole to roughly 60 Mpc. Over the next several\nyears, these detectors will undergo upgrades which are expected to\nextend their range by a factor $\\sim 10$. Most estimates suggest that\ndetectors at advanced sensitivity should measure at least a few, and\npossibly a few dozen, binary coalescences every year (e.g.,\n\\citealt{koppa08,ligo_rates}).\n\nIt has long been argued that neutron star-neutron star (NS-NS) and\nneutron star-black hole (NS-BH) mergers are likely to be accompanied\nby a gamma-ray burst \\citep{eichler89}. 
Recent evidence supports the\nhypothesis that many short-hard gamma-ray bursts (SHBs) are indeed\nassociated with such mergers (\\citealt{fox05}, \\citealt{nakar06},\n\\citealt{berger07}, \\citealt{perleyetal08}). This suggests that it\nmay be possible to simultaneously measure a binary coalescence in\ngamma rays (and associated afterglow emission) and in GWs\n\\citep{dietz09}. The combined electromagnetic and gravitational view\nof these objects will teach us substantially more than what we learn\nfrom either data channel alone. Because GWs track a system's global\nmass and energy dynamics, it has long been known that measuring GWs from a coalescing binary allows us to determine, in the\nideal case, ``intrinsic'' binary properties such as the masses and spins\nof its members with exquisite\naccuracy (\\citealt{fc93}, \\citealt{cf94}). As we describe in\nthe following subsection, it has also long been appreciated that GWs\ncan determine a system's ``extrinsic'' properties\n\\citep{schutz86} such as location on the sky and distance to the\nsource. In particular, the amplitude of a binary's GWs directly\nencodes its luminosity distance. Direct measurement of a coalescing\nbinary could thus be used as a cosmic distance measure: Binary\ninspiral would be a ``standard siren'' (the GW equivalent of a\nstandard candle, so-called due to the sound-like nature of GWs) whose\ncalibration depends only on the validity of general\nrelativity~\\citep{HolzHughes05,dalaletal}.\n\nUnfortunately, GWs alone do not measure extrinsic parameters as\naccurately as the intrinsic ones. As we describe in more detail in the\nfollowing section, GW observation of a binary measures a complicated\ncombination of its distance, its position on the sky, and its\norientation, with overall fractional accuracy $\\sim\n1\/\\mbox{signal-to-noise}$. As distance is degenerate with these\nangles, using GWs to measure absolute distance to a source requires a\nmechanism to break the degeneracy. 
Associating the GW coalescence\nwaves with a short-hard gamma-ray burst (SHB) is a near-perfect way to\nbreak some of these degeneracies.\n\nIn this paper we explore the ability of the near-future advanced\nLIGO-Virgo detector network to constrain binary parameters (especially\ndistance), when used in conjunction with electromagnetic observations\nof the same event (such as an associated SHB). We also examine how\nwell these measurements can be improved if planned detectors in\nWestern Australia (AIGO\\footnote{http:\/\/www.gravity.uwa.edu.au}) and\nin Japan's Kamioka mine\n(LCGT\\footnote{http:\/\/gw.icrr.u-tokyo.ac.jp:8888\/lcgt\/}) are\noperational. This paper substantially updates and improves upon\nearlier work \\citep[hereafter DHHJ06]{dalaletal}, using a more\nsophisticated parameter estimation technique. In the next section we\nreview standard sirens, and in Sec.\\ {\\ref{sec:dalaletal}} we briefly\nsummarize DHHJ06. The next subsection describes the\norganization and background relevant for the rest of the paper.\n\n\\subsection{Standard sirens}\n\\label{sec:sirens}\n\nIt has long been recognized that GW inspiral measurements could be\nused as powerful tools for cosmology. {\\cite{schutz86}} first\ndemonstrated this by analyzing how binary coalescences allow a direct\nmeasurement of the Hubble constant; {\\cite{markovic93}} and\n{\\cite{fc93}} subsequently generalized this approach to include other\ncosmological parameters. More recently, there has been much interest\nin the measurements enabled when GWs from a merger are accompanied by\na counterpart in the electromagnetic spectrum~(\\citealt{bloometal09},\n\\citealt{phinney09}, \\citealt{kk09}). In this paper we focus\nexclusively on GW observations of binaries that have an independent\nsky position furnished by electromagnetic observations.\n\nWe begin by examining gravitational waves from binary inspiral as\nmeasured in a single detector. 
We only present here the lowest order\ncontribution to the waves; in subsequent calculations our results are\ntaken to higher order (see Sec.\\ \\ref{sec:gwform}). The leading\nwaveform generated by a source at luminosity distance $D_L$,\ncorresponding to redshift $z$, is given by\n\\begin{eqnarray}\nh_+ &=& \\frac{2(1 + z){\\cal M}}{D_L}\\left[\\pi(1 + z){\\cal\nM}f\\right]^{2\/3}\\times\\nonumber\\\\\n& & \\qquad\\qquad\\qquad\\left(1 + \\cos^2\\iota\\right)\\cos2\\Phi_N(t)\\;,\n\\nonumber\\\\\nh_\\times &=& -\\frac{4(1 + z){\\cal M}}{D_L}\\left[\\pi(1 + z){\\cal\nM}f\\right]^{2\/3}\\cos\\iota \\sin2\\Phi_N(t)\\;,\n\\nonumber\\\\\n\\Phi_N(t) &=& \\Phi_c - \\left[\\frac{t_c - t}{5(1 + z){\\cal\nM}}\\right]^{5\/8}\\;,\\qquad f \\equiv \\frac{1}{\\pi}\\frac{d\\Phi_N}{dt}\\;.\n\\label{eq:NewtQuad}\n\\end{eqnarray}\nHere $\\Phi_N$ is the lowest-order contribution to the orbital phase,\n$f$ is the GW frequency, and ${\\cal M} = m_1^{3\/5} m_2^{3\/5}\/(m_1 +\nm_2)^{1\/5}$ is the binary's ``chirp mass,'' which sets the rate at\nwhich $f$ changes. We use units with $G = c = 1$; handy conversion\nfactors are $M_\\odot \\equiv GM_\\odot\/c^2 = 1.47\\,{\\rm km}$, and\n$M_\\odot \\equiv GM_\\odot\/c^3 = 4.92 \\times 10^{-6}\\,{\\rm seconds}$.\nThe angle $\\iota$ describes the inclination of the binary's orbital\nplane to our line-of-sight: $\\cos\\iota = \\mathbf{\\hat\nL}\\cdot\\mathbf{\\hat n}$, where $\\mathbf{\\hat L}$ is the unit vector\nnormal to the binary's orbital plane, and $\\mathbf{\\hat n}$ is the\nunit vector along the line-of-sight to the binary. The parameters\n$t_c$ and $\\Phi_c$ are the time and orbital phase when \n$f$ diverges in this model. 
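As a quick numerical illustration of Eq.\\ (\\ref{eq:NewtQuad}), the short Python sketch below evaluates the chirp mass and the leading-order polarization amplitudes. The 100 Mpc distance, 100 Hz frequency, and face-on inclination are illustrative values, not drawn from this analysis.

```python
import math

MSUN_S = 4.92e-6   # G*Msun/c^3 in seconds, as quoted in the text
MPC_S = 1.029e14   # 1 Mpc expressed in light-travel seconds; helper constant

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 m2)^(3/5) / (m1 + m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def leading_amplitudes(mc_z_sec, dl_sec, f, iota):
    """Peak amplitudes of h+ and hx from Eq. (eq:NewtQuad), geometric units.
    The oscillatory factors cos(2 Phi_N), sin(2 Phi_N) are omitted."""
    common = (2.0 * mc_z_sec / dl_sec) * (math.pi * mc_z_sec * f) ** (2.0 / 3.0)
    return common * (1.0 + math.cos(iota) ** 2), 2.0 * common * abs(math.cos(iota))

mc = chirp_mass(1.4, 1.4)        # ~1.22 Msun for a 1.4 + 1.4 Msun binary
hp, hx = leading_amplitudes(mc * MSUN_S, 100.0 * MPC_S, 100.0, 0.0)
```

For a face-on binary at this illustrative distance, both amplitudes come out at a few times $10^{-23}$ at 100 Hz, the typical scale of advanced-detector strain sensitivities.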
We expect finite size effects to impact\nthe waveform before this divergence is reached.\n\nA given detector measures a linear combination of the polarizations:\n\\begin{equation}\nh_{\\rm meas} = F_+(\\theta, \\phi, \\psi) h_+ + F_\\times(\\theta, \\phi,\n\\psi) h_\\times\\;,\n\\label{eq:hmeas}\n\\end{equation}\nwhere $\\theta$ and $\\phi$ describe the binary's position on the sky,\nand the ``polarization angle'' $\\psi$ sets the inclination of the\ncomponents of $\\mathbf{\\hat L}$ orthogonal to $\\mathbf{\\hat n}$. The\nangles $\\iota$ and $\\psi$ fully specify the orientation vector\n$\\mathbf{\\hat L}$. For a particular detector geometry, the antenna\nfunctions $F_+$ and $F_\\times$ can be found in~\\cite{300yrs}. In\nSec.\\ \\ref{sec:gwmeasure} we give a general form for the gravitational\nwaveform without appealing to a specific detector, following the\nanalysis of \\citealt{cf94} (hereafter abbreviated CF94).\n\nSeveral features of Eqs.\\ (\\ref{eq:NewtQuad}) and (\\ref{eq:hmeas}) are\nworth commenting upon. First, note that the phase depends on the {\\it\nredshifted}\\\/ chirp mass. Measuring phase thus determines the\ncombination $(1 + z){\\cal M}$~\\citep{fc93}, not ${\\cal M}$ or $z$\nindependently. To understand this, note that ${\\cal M}$ controls how\nfast the frequency evolves: using Eq.\\ (\\ref{eq:NewtQuad}), we find\n$\\dot f \\propto f^{11\/3}{\\cal M}^{5\/3}$. The chirp mass enters the\nsystem's dynamics as a timescale $\\tau_c = G{\\cal M}\/c^3$. For a\nsource at cosmological distance, this timescale is redshifted; the\nchirp mass we infer is likewise redshifted. Redshift and chirp mass\nare inextricably degenerate. This remains true even when higher order\neffects (see, e.g., \\citealt{blanchet06}) are taken into account:\nparameters describing a binary impact its dynamics as timescales which\nundergo cosmological redshift, so we infer redshifted values for those\nparameters. 
{\\it GW observations on their own cannot directly\ndetermine a source's redshift.}\n\nNext, note that the amplitude depends on $(1 + z){\\cal M}$, the angles\n$(\\theta, \\phi, \\iota, \\psi)$, and the luminosity distance $D_L$.\nMeasuring the amplitude thus measures a combination of these\nparameters. By measuring the phase, we measure the redshifted chirp\nmass sufficiently well that $(1 + z){\\cal M}$ essentially decouples\nfrom the amplitude. More concretely, matched filtering the data with\nwaveform templates should allow us to determine the phase with\nfractional accuracy $\\delta\\Phi\/\\Phi \\sim 1\/[(\\mbox{signal-to-noise})\n\\times (\\mbox{number of measured cycles})]$; $(1 + z){\\cal M}$ should\nbe measured with similar fractional accuracy. NS-NS binaries will\nradiate roughly $10^4$ cycles in the band of advanced LIGO, and NS-BH\nbinaries roughly $10^3$ cycles, so the accuracy with which phase and\nredshifted chirp mass can be determined should be exquisite\n(\\citealt{fc93}, CF94).\n\nAlthough $(1 + z){\\cal M}$ decouples from the amplitude, the distance,\nposition, and orientation angles remain highly coupled. To determine\nsource distance we must break the degeneracy that the amplitude's\nfunctional form sets on these parameters. One way to break these\ndegeneracies is to measure the waves with multiple detectors. Studies\n{\\citep{sylvestre, cavalieretal, blairetal, fairhurst09, wenchen10}}\nhave shown that doing so allows us to determine source position to\nwithin a few degrees in the best cases, giving some information about\nthe source's distance and inclination.\n\nPerhaps the best way to break some of these degeneracies is to measure\nthe event electromagnetically. An EM signature will pin down the\nevent's position far more accurately than GWs alone. The position\nangles then decouple, much as the redshifted chirp mass decoupled.\nUsing multiple detectors, we can then determine the source's\norientation and its distance. 
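The cycle counts quoted above follow from the leading-order sweep $\\dot f \\propto f^{11\/3}{\\cal M}^{5\/3}$, since $N = \\int f\\,(df\/dt)^{-1}\\,df$ has a closed form. A rough sketch (the 10 Hz lower cutoff is an assumed advanced-detector band edge, and the upper cutoffs are approximate ISCO frequencies):

```python
import math

MSUN_S = 4.92e-6  # G*Msun/c^3 in seconds

def n_cycles(mc_sun, f_low, f_high):
    """GW cycles accumulated between f_low and f_high, N = int f/(df/dt) df,
    using the leading-order sweep df/dt = (96/5) pi^(8/3) Mc^(5/3) f^(11/3)."""
    mc = mc_sun * MSUN_S
    return (math.pi ** (-8.0 / 3.0) / 32.0) * mc ** (-5.0 / 3.0) * (
        f_low ** (-5.0 / 3.0) - f_high ** (-5.0 / 3.0))

n_nsns = n_cycles(1.22, 10.0, 1570.0)  # 1.4+1.4 Msun: ~1.6e4 cycles
n_nsbh = n_cycles(3.0, 10.0, 390.0)    # 1.4+10 Msun:  ~3.6e3 cycles
```

The counts are dominated by the low-frequency end of the band, which is why the exact upper cutoff barely matters here.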
This gives us a direct,\ncalibration-free measure of the distance to a cosmic event. The EM\nsignature may also provide us with the event's redshift, directly\nputting a point on the Hubble diagram. In addition, if\nmodeling or observation give us evidence for beaming of the SHB\nemission, this could strongly constrain the source inclination.\n\n\\subsection{This work and previous analysis}\n\\label{sec:dalaletal}\n\nOur goal is to assess how well we can determine the luminosity\ndistance $D_L$ to SHBs under the assumption that they are associated\nwith inspiral GWs. We consider both NS-NS and NS-BH mergers as\ngenerators of SHBs, and consider several plausible advanced detector\nnetworks: the current LIGO\/Virgo network, upgraded to advanced\nsensitivity; LIGO\/Virgo plus the proposed Australian AIGO; LIGO\/Virgo\nplus the proposed Japanese LCGT; and LIGO\/Virgo plus AIGO plus LCGT.\n\nThe engine of our analysis is a probability function that describes\nhow inferred source parameters $\\boldsymbol{\\theta}$ should be\ndistributed following GW measurement. (Components $\\theta^a$ of the\nvector $\\boldsymbol{\\theta}$ are physical parameters such as a\nbinary's masses, distance, sky position angles, etc.; our particular\nfocus is on $D_L$.) Consider one detector which measures a datastream\n$s(t)$, containing noise $n(t)$ and a GW signal\n$h(t,{\\boldsymbol{\\hat\\theta}})$, where $\\boldsymbol{\\hat\\theta}$\ndescribes the source's ``true'' parameters. 
In the language of\n\\cite{finn92}, we assume ``detection'' has already occurred; our goal\nin this paper is to focus on the complementary problem of\n``measurement.''\n\nAs shown by {\\cite{finn92}}, given a model for our signal\n$h(t,\\boldsymbol{\\theta})$, and assuming that the noise statistics are\nGaussian, the probability that the parameters $\\boldsymbol{\\theta}$\ndescribe the data $s$ is\n\\begin{equation}\np(\\boldsymbol{\\theta} | s) =\np_0(\\boldsymbol{\\theta})\\exp\\left[-\\left(( h(\\boldsymbol{\\theta}) - s)\n|( h(\\boldsymbol{\\theta}) - s )\\right)\/2\\right]\\;.\n\\label{eq:likelihood}\n\\end{equation}\nThe inner product $(a|b)$ describes the noise weighted\ncross-correlation of $a(t)$ with $b(t)$, and is defined precisely\nbelow. The distribution $p_0(\\boldsymbol{\\theta})$ is a {\\it prior\nprobability distribution}; it encapsulates what we know about our\nsignal prior to measurement. We define $\\boldsymbol{\\tilde\\theta}$ to\nbe the parameters that maximize Eq.\\ (\\ref{eq:likelihood}).\n\nDHHJ06 did a first pass on the analysis we describe here. They\nexpanded the exponential to second order in the variables\n$(\\boldsymbol{\\theta} - \\boldsymbol{\\hat\\theta})$; we will henceforth\nrefer to this as the ``Gaussian'' approximation (cf.\\\n\\citealt{finn92}):\n\\begin{eqnarray}\n\\label{eq:gaussian}\n& &\\exp\\left[-\\left( h(\\boldsymbol{\\theta}) - s | h(\\boldsymbol{\\theta})\n - s\\right)\/2\\right] \\simeq\n\\nonumber\\\\\n& &\\qquad\\qquad\\qquad\n\\exp\\left[-\\frac{1}{2}\\left(\\frac{\\partial h}{\\partial\\theta^a} \\Biggl|\n \\frac{\\partial h}{\\partial\\theta^b}\\right)\\delta\\theta^a\n \\delta\\theta^b\\right]\\;,\n\\end{eqnarray}\nwhere $\\delta\\theta^a = \\theta^a - \\hat\\theta^a$. In this limit,\n$\\boldsymbol{\\tilde\\theta} = \\boldsymbol{\\hat\\theta}$ (at least for\nuniform priors). 
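The inner products appearing in the exponent above can be evaluated numerically for a toy model. A minimal sketch, assuming white noise (so the inner product reduces to a sum over time samples) and a simple two-parameter sinusoidal signal in place of a real inspiral template:

```python
import numpy as np

def signal(theta, t):
    # Toy two-parameter signal: amplitude and frequency of a sinusoid.
    amp, f0 = theta
    return amp * np.sin(2.0 * np.pi * f0 * t)

def inner(a, b):
    # White-noise inner product: proportional to a sum over time samples.
    return float(np.dot(a, b))

def information_matrix(theta, t, eps=1e-6):
    """Gamma_ab = (dh/dtheta_a | dh/dtheta_b), with the partial derivatives
    taken by central finite differences."""
    derivs = []
    for a in range(len(theta)):
        up = np.array(theta, dtype=float)
        dn = np.array(theta, dtype=float)
        up[a] += eps
        dn[a] -= eps
        derivs.append((signal(up, t) - signal(dn, t)) / (2.0 * eps))
    return np.array([[inner(da, db) for db in derivs] for da in derivs])

t = np.linspace(0.0, 1.0, 4096)
gamma = information_matrix([1.0, 10.0], t)
sigma = np.linalg.inv(gamma)  # covariance: diagonal entries are variances
```

In a real analysis the inner product carries a noise weighting in the frequency domain; the structure of the calculation is unchanged.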
The matrix\n\\begin{equation}\n\\Gamma_{ab} \\equiv \\left(\\frac{\\partial h}{\\partial\\theta^a} \\Biggl|\n \\frac{\\partial h}{\\partial\\theta^b}\\right)\n\\label{eq:Fisherdef}\n\\end{equation}\nis the {\\it Fisher information matrix}. Its inverse $\\Sigma^{ab}$ is\nthe covariance matrix. Diagonal entries $\\Sigma^{aa}$ are the\nvariance of parameter $\\theta^a$; off-diagonal entries describe\ncorrelations.\n\nThe Gaussian approximation to Eq.\\ (\\ref{eq:likelihood}) is known to\nbe accurate when the signal-to-noise ratio (SNR) is large. However,\nit is not clear what ``large'' really means {\\citep{vallis}}. Given\ncurrent binary coalescence rate estimates, it is expected that most\nevents will come from $D_L \\sim \\mbox{a few} \\times 100\\,{\\rm Mpc}$.\nIn such cases, we can expect an advanced detector SNR $\\sim 10$. It\nis likely that this value is not high enough for the ``large SNR''\napproximation to be appropriate.\n\nIn this analysis we avoid the Gaussian approximation. We instead\nuse Markov-Chain Monte-Carlo (MCMC) techniques (in particular, the\nMetropolis-Hastings algorithm) to explore our parameter distributions.\nA brief description of this technique is given in Sec.\\\n\\ref{sec:estimate}, and described in detail in \\cite{lewis02}. We\nfind that the Gaussian approximation to Eq.\\ (\\ref{eq:likelihood}) is\nindeed failing in its estimate of extrinsic parameters (though it\nappears to do well for intrinsic parameters such as mass).\n\n\\subsection{Organization of this paper}\n\nWe begin in Sec.\\ {\\ref{sec:inspiralwaves}} by summarizing how GWs\nencode the distance to a coalescing binary. We first describe the\npost-Newtonian (PN) gravitational waveform we use in Sec.\\\n{\\ref{sec:gwform}}, and then describe how that wave interacts with a\nnetwork of detectors in Sec.\\ {\\ref{sec:gwmeasure}}. 
Our discussion\nof the network-wave interaction is heavily based on the notation and\nformalism used in Sec.\\ 4 of CF94, as well as the analysis of\n\\cite{abcf01}. Section {\\ref{sec:gwmeasure}} is sufficiently dense\nthat we summarize its major points in Sec.\\\n{\\ref{sec:gwmeasure_summary}} before concluding, in Sec.\\\n{\\ref{sec:detectors}}, with a description of the GW detectors which we\ninclude in our analysis.\n\nWe outline parameter estimation in Sec.\\ \\ref{sec:estimate}. In Sec.\\\n{\\ref{sec:formaloverview}} we describe in more detail how to construct\nthe probability distributions describing parameter measurement. We\nthen give, in Sec.\\ \\ref{sec:selectionandpriors}, a brief description\nof our selection procedure based on SNR detection thresholds. This\nprocedure sets physically motivated priors for some of our parameters.\nThe Markov-Chain Monte-Carlo technique we use to explore this function\nis described in Sec.\\ {\\ref{sec:mhmc}}. How to appropriately average\nthis distribution to give ``noise averaged'' results and to compare\nwith previous literature is discussed in Sec.\\\n{\\ref{sec:averagedPDF}}.\n\nIn Sec.\\ {\\ref{sec:valid}} we discuss the validation of our code. We\nbegin by attempting to reproduce some of the key results on distance\nmeasurement presented in CF94. Because of the rather different\ntechniques used by Cutler \\& Flanagan, we do not expect exact\nagreement. It is reassuring to find, nonetheless, that we can\nreconstruct with good accuracy all of the major features of their\nanalysis. We then examine how these results change as we vary the\namplitude (moving a fiducial test binary to smaller and larger\ndistances), as we vary the number of detectors in our network, and as\nwe vary the source's inclination.\n\nOur main results are given in Sec.\\ {\\ref{sec:main_results}}. 
We\nconsider several different plausible detector networks and examine\nmeasurement errors for two ``fiducial'' binary systems, comprising\neither two neutron stars (NS-NS) with physical masses of $m_1 = m_2 =\n1.4\\,M_{\\odot}$, or a neutron star and black hole (NS-BH) system with\nphysical masses $m_1 = 1.4\\,M_{\\odot}$ and $m_2 = 10\\,M_{\\odot}$.\nAssuming a constant comoving cosmological density, we distribute\npotential GW-SHB events on the sky, and select from this distribution\nusing a detection threshold criterion set for the entire GW detector\nnetwork. We summarize some implications of our results in Sec.\\\n\\ref{sec:summary}. A more in-depth discussion of these implications,\nparticularly with regard to what they imply for cosmological\nmeasurements, will be presented in a companion paper.\n\nThroughout this paper, we use units with $G = c = 1$. We define\nthe shorthand $m_z = (1 + z)m$ for any mass parameter $m$.\n\n\\section{Measuring gravitational waves from inspiraling binaries}\n\\label{sec:inspiralwaves}\n\nIn this section we review the GW description we use, the formalism\ndescribing how these waves interact with a network of detectors, and\nthe properties of the detectors.\n\n\\subsection{GWs from inspiraling binaries}\n\\label{sec:gwform}\n\nThe inspiral and merger of a compact binary's members can be divided\ninto three consecutive phases. The first and longest is a gradual\nadiabatic {\\it inspiral}, when the members slowly spiral together due\nto the radiative loss of orbital energy and angular momentum.\nPost-Newtonian (PN) techniques (an expansion in gravitational\npotential $M\/r$, or equivalently for bound systems, orbital speed\n$v^2$) allow a binary's evolution and its emitted GWs to be modeled\nanalytically to high order; see \\cite{blanchet06} for a review. When\nthe bodies come close together, the PN expansion is no longer valid,\nand direct numerical calculation is required. 
Recent breakthroughs in\nnumerical relativity now make it possible to fully model the\nstrong-field, dynamical {\\it merger} of two bodies into one; see\n\\cite{pretorius05}, \\cite{su06}, and \\cite{etienne08} for discussion.\nIf the end state is a single black hole, the final waves from the\nsystem should be described by a {\\it ringdown} as the black hole\nsettles down to the Kerr solution.\n\nIn this work we are concerned solely with the inspiral, and will\naccordingly use the PN waveform to describe our waves. In particular,\nwe use the so-called ``restricted'' PN waveform; following CF94, the\ninspiral waveform may be written schematically\n\\begin{equation}\nh(t) = \\mathrm{Re}\\left(\\sum_{x,m}\nh^x_m(t)e^{im\\Phi_{\\mathrm{orb}}(t)}\\right) \\, .\n\\label{eq:hPN}\n\\end{equation}\nHere $x$ indicates PN order [$h^x$ is computed to $O(v^{2x})$ in\norbital speed], $m$ denotes harmonic order (e.g., $m = 2$ is\nquadrupole), and $\\Phi_{\\rm orb}(t) = \\int^t \\Omega(t') dt'$ is\norbital phase [with $\\Omega(t)$ the orbital angular frequency]. The\n``restricted'' waveform neglects all PN amplitude terms beyond the\nleading one, and considers only the dominant $m = 2$ contribution to\nthe phase. The phase is computed to high PN order.\n\nLet the unit vector $\\hat{\\bf n}$ point to a binary on the sky (so\nthat the waves propagate to us along $-\\hat{\\bf n}$), and let the unit\nvector $\\hat{\\bf L}$ denote the normal along the binary's orbital\nangular momentum. 
The waveform is fully described by the two\npolarizations:\n\\begin{eqnarray}\nh_+(t) & = & \\frac{2{\\mathcal M}_z}{D_L}\n\\left[\\pi{\\cal M}_z f(t)\\right]^{2\/3}\n[1+(\\mathbf{\\hat{L}}\\cdot \\mathbf{\\hat{n}})^2]\\cos[\\Phi(t)]\\;,\n\\nonumber\\\\\n&\\equiv& \\frac{4{\\mathcal M}_z}{D_L}\n\\left[\\pi{\\cal M}_z f(t)\\right]^{2\/3}\n{\\cal A}_+(\\hat{\\bf n},\\hat{\\bf L})\\cos[\\Phi(t)]\\;;\n\\label{eq:hplus}\\\\\nh_{\\times}(t) & = & - \\frac{4{\\mathcal M}_z}{D_L}\n[\\pi{\\cal M}_z f(t)]^{2\/3}\n(\\mathbf{\\hat{L}}\\cdot \\mathbf{\\hat{n}}) \\sin[\\Phi(t)]\\;,\n\\nonumber\\\\\n&\\equiv& \\frac{4{\\mathcal M}_z}{D_L}\n\\left[\\pi{\\cal M}_z f(t)\\right]^{2\/3}\n{\\cal A}_\\times(\\hat{\\bf n},\\hat{\\bf L})\\sin[\\Phi(t)]\\;.\n\\label{eq:hcross}\n\\end{eqnarray}\nEquations (\\ref{eq:hplus}) and (\\ref{eq:hcross}) are nearly identical\nto those given in Eq.\\ (\\ref{eq:NewtQuad}); only the phase $\\Phi(t)$\nis different, as described below. ${\\cal M}_z$ is the binary's\nredshifted chirp mass, $D_L$ is its luminosity distance, and we have\nwritten the inclination angle $\\cos\\iota$ using the vectors $\\hat{\\bf\nn}$ and $\\hat{\\bf L}$. The functions ${\\cal A}_{+,\\times}$ compactly\ngather all dependence on sky position and orientation. In Sec.\\\n{\\ref{sec:gwmeasure}} we discuss how these polarizations interact\nwith our detectors.\n\nIn these forms of $h_+$ and $h_\\times$, the phase is computed to\n2nd-post-Newtonian (2PN) order \\citep{bdiww95}:\n\\begin{eqnarray}\n\\Phi(t) &=& 2\\pi \\int f(t')\\,dt' = 2\\pi \\int \\frac{f}{df\/dt}df\\;,\n\\\\\n\\frac{df}{dt} &=& \\frac{96}{5}\\pi^{8\/3}\n\\mathcal{M}_z^{5\/3}f^{11\/3}\\left[1 -\n\\left(\\frac{743}{336} + \\frac{11}{4}\\eta\\right)(\\pi M_z f)^{2\/3}\n\\right.\n\\nonumber\\\\\n& &\\quad\\left. +\n4\\pi(\\pi M_z f) \\right. \\nonumber \\\\\n& &\\quad \\left. 
+\n\\left(\\frac{34103}{18144} + \\frac{13661}{2016}\\eta +\n\\frac{59}{18}\\eta^2 \\right)(\\pi M_z f)^{4\/3}\\right] \\;.\n\\label{eq:freqchirp}\n\\end{eqnarray}\nHigher order results for $df\/dt$ are now known (\\citealt{bij02,\nbfij02, blanchet04}), but 2PN order will be adequate for our purposes.\nSince distance measurements depend on accurate amplitude\ndetermination, we do not need a highly refined model of the wave's\nphase. The rate of sweep is dominantly determined by the chirp mass,\nbut there is an important correction due to $\\eta = \\mu \/M = m_1\nm_2\/(m_1 + m_2)^2$, the reduced mass ratio. Note that $\\eta$ is not\nredshifted; both $\\mu$ and $M$ (the reduced mass and total mass,\nrespectively) acquire $(1 + z)$ corrections, so their ratio is the\nsame at all $z$. Accurate measurement of the frequency sweep can thus\ndetermine both ${\\cal M}_z$ and $\\eta$ (or ${\\cal M}_z$ and $\\mu_z$).\n\nWe will find it useful to work in the frequency domain, using the\nFourier transform ${\\tilde h}(f)$ rather than $h(t)$:\n\\begin{equation}\n{\\tilde h}(f) \\equiv \\int_{-\\infty}^{\\infty}\\, e^{2\\pi i f t}h(t)\\, dt\\;.\n\\label{eq:fourierT}\n\\end{equation}\nAn approximate result for ${\\tilde h}(f)$ can be found using\nstationary phase \\citep{fc93}, which describes the Fourier transform\nwhen $f$ changes slowly:\n\\begin{eqnarray}\n\\tilde{h}_+(f) &=& \\sqrt{\\frac{5}{96}}\\frac{\\pi^{-2\/3} {\\mathcal\nM}_z^{5\/6}}{D_L}{\\cal A}_+ f^{-7\/6} e^{i\\Psi(f)} \\, ,\n\\label{eq:freqdomainhp}\n\\\\\n\\tilde{h}_\\times(f) &=& \\sqrt{\\frac{5}{96}}\\frac{\\pi^{-2\/3} {\\mathcal\nM}_z^{5\/6}}{D_L}{\\cal A}_\\times f^{-7\/6} e^{i\\Psi(f) - i\\pi\/2} \\, .\n\\label{eq:freqdomainhc}\n\\end{eqnarray}\n``Slowly'' means that $f$ does not change very much over a single wave\nperiod $1\/f$, so that $(df\/dt)\/f \\ll f$. 
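Equation (\\ref{eq:freqchirp}) is straightforward to evaluate numerically. The sketch below (masses converted to seconds via $M_\\odot = 4.92\\times 10^{-6}$ s, redshift factor omitted for illustration) also checks the stationary-phase condition $(df\/dt)\/f \\ll f$ at 100 Hz:

```python
import math

MSUN_S = 4.92e-6  # G*Msun/c^3 in seconds

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def dfdt_2pn(f, mc_z, m_z, eta):
    """2PN frequency sweep of Eq. (eq:freqchirp); mc_z and m_z in seconds."""
    v2 = (math.pi * m_z * f) ** (2.0 / 3.0)   # (pi M_z f)^(2/3)
    newtonian = (96.0 / 5.0) * math.pi ** (8.0 / 3.0) \
        * mc_z ** (5.0 / 3.0) * f ** (11.0 / 3.0)
    return newtonian * (1.0
                        - (743.0 / 336.0 + 11.0 * eta / 4.0) * v2
                        + 4.0 * math.pi * v2 ** 1.5
                        + (34103.0 / 18144.0 + 13661.0 * eta / 2016.0
                           + 59.0 * eta ** 2 / 18.0) * v2 ** 2)

m1, m2 = 1.4, 1.4
eta = m1 * m2 / (m1 + m2) ** 2        # 0.25 for equal masses; not redshifted
mc_z = chirp_mass(m1, m2) * MSUN_S
m_z = (m1 + m2) * MSUN_S

f = 100.0
fdot = dfdt_2pn(f, mc_z, m_z, eta)    # ~17 Hz/s for a NS-NS binary at 100 Hz
```

Since $\\dot f \/ f \\sim 0.2$ Hz here while $f = 100$ Hz, the stationary-phase approximation is comfortably satisfied in the heart of the band.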
The validity of this\napproximation for the waveforms we consider, at least until the last\nmoments before merger, has been demonstrated in previous work\n{\\citep{droz99}}. The phase function $\\Psi(f)$ in Eqs.\\\n(\\ref{eq:freqdomainhp}) and (\\ref{eq:freqdomainhc}) is given by\n\\begin{eqnarray}\n\\Psi(f) &=& 2\\pi f t_c - \\Phi_c - \\frac{\\pi}{4} + \\frac{3}{128}(\\pi\n{\\mathcal M}_z f)^{-5\/3}\n\\times\\nonumber\\\\\n& &\\left[1 + \\frac{20}{9}\\left(\\frac{743}{336} +\n\\frac{11}{4}\\eta\\right)(\\pi M_z f)^{2\/3} \\right.\n\\nonumber\\\\\n& &\\left. -16\\pi( \\pi M_z f)\\right.\n\\nonumber\\\\\n& &\\left.+ 10\\left(\\frac{3058673}{1016064} +\n\\frac{5429}{1008}\\eta + \\frac{617}{144}\\eta^2 \\right)(\\pi\nM_z f)^{4\/3}\\right] \\, .\n\\nonumber\\\\\n\\label{eq:PNpsi}\n\\end{eqnarray}\nAs in Eq.\\ (\\ref{eq:NewtQuad}), $t_c$ is called the ``time of\ncoalescence'' and defines the time at which $f$ diverges within the PN\nframework; $\\Phi_c$ is similarly the ``phase at coalescence.'' We\nassume an abrupt and unphysical transition between inspiral and merger\nat the innermost stable circular orbit (ISCO), $f_{\\rm ISCO}=(6\n\\sqrt{6} \\pi M_z)^{-1}$. For NS-NS, $f_{\\rm ISCO}$ occurs at high\nfrequencies where detectors have poor sensitivity. As such, we are\nconfident that this abrupt transition has little impact on our\nresults. For NS-BH, $f_{\\rm ISCO}$ is likely to be in a band with\ngood sensitivity, and better modeling of this transition will be\nimportant.\n\nIn this analysis we neglect effects which depend on spin. In general\nrelativity, spin drives precessions which can ``color'' the waveform\nin important ways, and which can have important observational effects\n(see, e.g., \\citealt{v04}, \\citealt{lh06}, \\citealt{vandersluys08}).\nThese effects are important when the dimensionless spin parameter, $a\n\\equiv c|{\\bf S}|\/GM^2$, is fairly large. 
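The ISCO cutoff $f_{\\rm ISCO} = (6\\sqrt{6}\\,\\pi M_z)^{-1}$ quoted above is easy to evaluate; a sketch (redshift set to zero purely for illustration):

```python
import math

MSUN_S = 4.92e-6  # G*Msun/c^3 in seconds

def f_isco(total_mass_sun, z=0.0):
    """ISCO gravitational-wave frequency, f_ISCO = 1/(6*sqrt(6)*pi*M_z)."""
    m_z = (1.0 + z) * total_mass_sun * MSUN_S
    return 1.0 / (6.0 * math.sqrt(6.0) * math.pi * m_z)

f_nsns = f_isco(2.8)    # ~1.6 kHz, where detector sensitivity is poor
f_nsbh = f_isco(11.4)   # ~390 Hz, inside the sensitive band
```

The two values make the asymmetry in the text concrete: the NS-NS cutoff sits well above the detectors' sweet spot, while the NS-BH cutoff does not.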
Neutron stars are unlikely\nto spin fast enough to drive interesting precession during the time\nthat they are in the band of GW detectors. To show this, write the\nmoment of inertia of a neutron star as\n\\begin{equation}\nI_{\\rm NS} = \\frac{2}{5}\\kappa M_{\\rm NS} R_{\\rm NS}^2\\;,\n\\end{equation}\nwhere $M_{\\rm NS}$ and $R_{\\rm NS}$ are the star's mass and radius,\nand the parameter $\\kappa$ describes the extent to which its mass is\ncentrally condensed (compared to a uniform sphere). Detailed\ncalculations with different equations of state indicate $\\kappa \\sim\n0.7$--$1$ [cf.\\ \\cite{cook94}, especially the slowly rotating\nconfigurations in their Tables 12, 15, 18, and 21]. For a neutron\nstar whose spin period is $P_{\\rm NS}$, the Kerr parameter is given by\n\\begin{eqnarray}\na_{\\rm NS} &=& \\frac{c}{G}\\frac{I_{\\rm NS}}{M_{\\rm NS}^2}\n\\frac{2\\pi}{P_{\\rm NS}}\n\\nonumber\\\\\n&\\simeq& 0.06 \\kappa \\left(\\frac{R_{\\rm\nNS}}{\\rm{12\\,km}}\\right)^2\\left(\\frac{1.4\\,M_\\odot}{M_{\\rm NS}}\\right)\n\\left(\\frac{10\\,{\\rm msec}}{P_{\\rm NS}}\\right)\\;.\n\\end{eqnarray}\nAs long as the neutron star spin period is longer than $\\sim10$ msec,\n$a_{\\rm NS}$ is small enough that spin effects can be neglected in our\nanalysis. We {\\it should}\\\/ include spin in our models of BH-NS\nbinaries; we leave this to a later analysis. Van der Sluys et al.\\\n(2008) included black hole spin effects in an analysis which did not\nassume known source position. They found that spin-induced modulations\ncould help GW detectors to localize a source. 
This and companion\nworks (\\citealt{raymond09}, \\citealt{vandersluys09}) suggest that, if\nposition is known, spin modulations could improve our ability to\nmeasure source inclination and distance.\n\nOur GWs depend on nine parameters: two masses ${\\cal M}_z$ and\n$\\mu_z$, two sky position angles (which set $\\hat{\\bf n}$), two\norientation angles (which set $\\hat{\\bf L}$), time at coalescence\n$t_c$, phase at coalescence $\\Phi_c$, and luminosity distance $D_L$.\nWhen sky position is known, the parameter set is reduced to seven: $\\{\n{\\cal M}_z, \\mu_z, D_L, t_c, \\cos \\iota, \\psi, \\Phi_c \\}$.\n\n\\subsection{Measurement of GWs by a detector network}\n\\label{sec:gwmeasure}\n\nWe now examine how the waves described in Sec.\\ {\\ref{sec:gwform}}\ninteract with a network of detectors. We begin by introducing a\ngeometric convention, which follows that introduced in CF94 and in\n\\cite{abcf01}. A source's sky position is given by a unit vector\n$\\hat{\\bf n}$ (which points from the center of the Earth to the\nbinary), and its orientation is given by a unit vector $\\hat{\\bf L}$\n(which points along the binary's orbital angular momentum). 
We\nconstruct a pair of axes which describe the binary's orbital plane:\n\\begin{equation}\n\\hat{\\bf X} = \\frac{\\hat{\\bf n}\\times\\hat{\\bf L}}{|\\hat{\\bf\nn}\\times\\hat{\\bf L}|}\\;,\\quad\n\\hat{\\bf Y} = -\\frac{\\hat{\\bf n}\\times\\hat{\\bf X}}{|\\hat{\\bf\nn}\\times\\hat{\\bf X}|}\\;.\n\\label{eq:XYvectors}\n\\end{equation}\nWith these axes, we define the {\\it polarization basis tensors}\n\\begin{eqnarray}\n{\\bf e}^+ &=& \\hat{\\bf X} \\otimes \\hat{\\bf X} - \\hat{\\bf Y} \\otimes\n\\hat{\\bf Y}\\;,\n\\label{eq:plusbasis}\n\\\\\n{\\bf e}^\\times &=& \\hat{\\bf X} \\otimes \\hat{\\bf Y} + \\hat{\\bf Y}\n\\otimes \\hat{\\bf X}\\;.\n\\label{eq:timesbasis}\n\\end{eqnarray}\nThe transverse-traceless metric perturbation describing our source's\nGWs is then\n\\begin{equation}\nh_{ij} = h_+ e^+_{ij} + h_\\times e^\\times_{ij}\\;.\n\\label{eq:wavetensor}\n\\end{equation}\n\nWe next characterize the GW detectors. Each detector is an $L$-shaped\ninterferometer whose arms define two-thirds of an orthonormal triple.\nDenote by $\\hat{\\bf x}_a$ and $\\hat{\\bf y}_a$ the unit vectors along\nthe arms of the $a$-th detector in our network; we call these the $x$-\nand $y$-arms. (The vector $\\hat{\\bf z}_a = \\hat{\\bf x}_a \\times\n\\hat{\\bf y}_a$ points radially from the center of the Earth to the\ndetector's vertex.) 
These vectors define the {\\it response tensor}\\\/\nfor detector $a$:\n\\begin{equation}\nD^{ij}_a = \\frac{1}{2}\\left[(\\hat{\\bf x}_a)^i (\\hat{\\bf x}_a)^j -\n(\\hat{\\bf y}_a)^i (\\hat{\\bf y}_a)^j\\right]\\;.\n\\label{eq:detresponse}\n\\end{equation}\nThe response of detector $a$ to a GW is given by\n\\begin{eqnarray}\nh_a &=& D^{ij}_a h_{ij}\n\\nonumber\\\\\n&\\equiv& e^{- 2 \\pi i ({\\bf n}\\cdot{\\bf r}_a) f} (F_{a,+}h_+ +\nF_{a,\\times}h_\\times)\\;,\n\\label{eq:measuredwave}\n\\end{eqnarray}\nwhere $\\bf{r}_a$ is the position of the detector $a$ and the factor\n$({\\bf n}\\cdot {\\bf r}_a)$ measures the time of flight between it and\nthe coordinate origin. The second form of Eq.\\\n(\\ref{eq:measuredwave}) shows how the antenna functions introduced in\nEq.\\ (\\ref{eq:hmeas}) are built from the wave tensor and the response\ntensor.\n\nOur discussion has so far been frame-independent, in that we have\ndefined all vectors and tensors without reference to coordinates. We\nnow introduce a coordinate system for our detectors following\n\\cite{abcf01} [who in turn use the WGS-84 Earth model\n{\\citep{althouse_etal}}]. The Earth is taken to be an oblate\nellipsoid with semi-major axis $a = 6.378137 \\times 10^6$ meters, and\nsemi-minor axis $b = 6.356752314 \\times 10^6$ meters. Our coordinates\nare fixed relative to the center of the Earth. 
The $x$-axis (which\npoints along ${\\bf i}$) pierces the Earth at latitude $0^\\circ$ North,\nlongitude $0^\\circ$ East (normal to the equator at the prime\nmeridian); the $y$-axis (along ${\\bf j}$) pierces the Earth at\n$0^\\circ$ North, $90^\\circ$ East (normal to the equator in the Indian\nocean somewhat west of Indonesia); and the $z$-axis (along ${\\bf k}$)\npierces the Earth at $90^\\circ$ North (the North geographic pole).\n\nA GW source at $(\\theta,\\phi)$ on the celestial sphere has sky\nposition vector $\\hat{\\bf n}$:\n\\begin{equation}\n\\hat{\\bf n} = \\sin\\theta\\cos\\phi{\\bf i} + \\sin\\theta\\sin\\phi{\\bf j} +\n\\cos\\theta{\\bf k}\\;.\n\\end{equation}\nThe {\\it polarization angle}, $\\psi$, is the angle (measured clockwise\nabout $\\hat{\\bf n}$) from the orbit's line of nodes to the source's\n$\\hat{\\bf X}$-axis. In terms of these angles, the vectors $\\hat{\\bf\nX}$ and $\\hat{\\bf Y}$ are given by {\\citep{abcf01}}\n\\begin{eqnarray}\n\\hat{\\bf X} &=& (\\sin\\phi\\cos\\psi - \\sin\\psi\\cos\\phi\\cos\\theta) {\\bf\ni}\\nonumber\\\\\n& & - (\\cos\\phi\\cos\\psi + \\sin\\psi\\sin\\phi\\cos\\theta){\\bf j}\n+\\sin\\psi\\sin\\theta {\\bf k}\\;,\n\\nonumber\\\\\n\\label{eq:Xvector2}\\\\\n\\hat{\\bf Y} &=& (-\\sin\\phi\\sin\\psi - \\cos\\psi\\cos\\phi\\cos\\theta) {\\bf\ni}\\nonumber\\\\\n& & + (\\cos\\phi\\sin\\psi - \\cos\\psi\\sin\\phi\\cos\\theta){\\bf j} +\n\\cos\\psi\\sin\\theta {\\bf k}\\;.\n\\nonumber\\\\\n\\label{eq:Yvector2}\n\\end{eqnarray}\nThe angle $\\phi$ is related to right ascension $\\alpha$ by $\\alpha =\n\\phi + {\\rm GMST}$ (where GMST is the Greenwich mean sidereal time at\nwhich the signal arrives), and $\\theta$ is related to declination\n$\\delta$ by $\\delta = \\pi\/2 - \\theta$ (cf.\\ \\citealt{abcf01}, Appendix\nB). 
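The coordinate expressions in Eqs.\\ (\\ref{eq:Xvector2}) and (\\ref{eq:Yvector2}) can be checked numerically: $\\hat{\\bf n}$, $\\hat{\\bf X}$, and $\\hat{\\bf Y}$ should form an orthonormal triple, and the polarization basis tensors built from them should be symmetric and trace-free. A sketch (the angles are arbitrary test values):

```python
import numpy as np

def nhat(theta, phi):
    # Sky position unit vector.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def principal_axes(theta, phi, psi):
    """X-hat and Y-hat of Eqs. (eq:Xvector2) and (eq:Yvector2)."""
    X = np.array([np.sin(phi) * np.cos(psi) - np.sin(psi) * np.cos(phi) * np.cos(theta),
                  -(np.cos(phi) * np.cos(psi) + np.sin(psi) * np.sin(phi) * np.cos(theta)),
                  np.sin(psi) * np.sin(theta)])
    Y = np.array([-np.sin(phi) * np.sin(psi) - np.cos(psi) * np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(psi) - np.cos(psi) * np.sin(phi) * np.cos(theta),
                  np.cos(psi) * np.sin(theta)])
    return X, Y

theta, phi, psi = 0.7, 1.2, 0.4   # arbitrary test angles
n = nhat(theta, phi)
X, Y = principal_axes(theta, phi, psi)
eplus = np.outer(X, X) - np.outer(Y, Y)     # e^+ basis tensor
ecross = np.outer(X, Y) + np.outer(Y, X)    # e^x basis tensor
```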
Combining Eqs.\\ (\\ref{eq:Xvector2}) and (\\ref{eq:Yvector2}) with\nEqs.\\ (\\ref{eq:plusbasis})--(\\ref{eq:wavetensor}) allows us to write\n$h_{ij}$ for a source in coordinates adapted to this problem.\n\nWe now similarly describe our detectors using convenient coordinates.\nDetector $a$ is at East longitude $\\lambda_a$ and North latitude\n$\\varphi_a$ (not to be confused with sky position angle $\\phi$). The\nunit vectors pointing East, North, and Up for this detector are\n\\begin{eqnarray}\n{\\bf e}^{\\rm E}_a &=& -\\sin\\lambda_a{\\bf i} + \\cos\\lambda_a{\\bf j}\\;,\n\\label{eq:eastunitvec}\n\\\\\n{\\bf e}^{\\rm N}_a &=& -\\sin\\varphi_a\\cos\\lambda_a{\\bf i} -\n\\sin\\varphi_a\\sin\\lambda_a{\\bf j} + \\cos\\varphi_a{\\bf k}\\;,\n\\label{eq:northunitvec}\n\\\\\n{\\bf e}^{\\rm U}_a &=& \\cos\\varphi_a\\cos\\lambda_a{\\bf i} +\n\\cos\\varphi_a\\sin\\lambda_a{\\bf j} + \\sin\\varphi_a{\\bf k}\\;.\n\\label{eq:upunitvec}\n\\end{eqnarray}\nThe $x$-arm of detector $a$ is oriented at angle $\\Upsilon_a$ North of\nEast, while its $y$-arm is at angle $\\Upsilon_a + \\pi\/2$. Thanks to\nthe Earth's oblateness, the $x$- and $y$-arms are tilted at angles\n$\\omega^{x,y}_a$ to the vertical. 
The unit vectors $\\hat{\\bf x}_a$,\n$\\hat{\\bf y}_a$ can thus be written\n\\begin{eqnarray}\n\\hat{\\bf x}_a &=& \\cos\\omega^x_a \\cos\\Upsilon_a{\\bf e}^{\\rm E}_a +\n\\cos\\omega^x_a\\sin\\Upsilon_a{\\bf e}^{\\rm N}_a + \\sin\\omega^x_a{\\bf\ne}^{\\rm U}_a\\;,\n\\nonumber\\\\\n\\label{eq:detectorxhat}\n\\\\\n\\hat{\\bf y}_a &=& -\\cos\\omega^y_a\\sin\\Upsilon_a{\\bf e}^{\\rm E}_a +\n\\cos\\omega^y_a\\cos\\Upsilon_a{\\bf e}^{\\rm N}_a + \\sin\\omega^y_a{\\bf\ne}^{\\rm U}_a\\;.\n\\nonumber\\\\\n\\label{eq:detectoryhat}\n\\end{eqnarray}\nCombining Eqs.\\ (\\ref{eq:detectorxhat}) and (\\ref{eq:detectoryhat})\nwith Eq.\\ (\\ref{eq:detresponse}) allows us to write the response\ntensor for each detector in our network.\n\n\\subsection{Summary of the preceding section}\n\\label{sec:gwmeasure_summary}\n\nSection {\\ref{sec:gwmeasure}} is sufficiently dense that a brief\nsummary may clarify its key features, particularly with respect to the\nquantities we hope to measure. From Eq.\\ (\\ref{eq:measuredwave}), we\nfind that each detector in our network measures a weighted sum of the\ntwo GW polarizations $h_+$ and $h_\\times$. 
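The weights in that sum follow from the detector geometry just described. As an illustrative sketch (not our analysis code), the fragment below assembles a detector's arm vectors and response tensor; it assumes the standard interferometer form D_ij = (x_i x_j - y_i y_j)/2 for the response tensor, which readers should check against their own conventions.

```python
import numpy as np

def response_tensor(lam, varphi, Ups, wx=0.0, wy=0.0):
    """Response tensor of a detector at East longitude lam, North
    latitude varphi (radians), x-arm oriented Ups North of East, with
    arm tilts wx, wy.  Assumes D_ij = (x_i x_j - y_i y_j) / 2."""
    e_E = np.array([-np.sin(lam), np.cos(lam), 0.0])
    e_N = np.array([-np.sin(varphi) * np.cos(lam),
                    -np.sin(varphi) * np.sin(lam),
                    np.cos(varphi)])
    e_U = np.array([np.cos(varphi) * np.cos(lam),
                    np.cos(varphi) * np.sin(lam),
                    np.sin(varphi)])
    x_hat = (np.cos(wx) * np.cos(Ups) * e_E
             + np.cos(wx) * np.sin(Ups) * e_N + np.sin(wx) * e_U)
    y_hat = (-np.cos(wy) * np.sin(Ups) * e_E
             + np.cos(wy) * np.cos(Ups) * e_N + np.sin(wy) * e_U)
    return 0.5 * (np.outer(x_hat, x_hat) - np.outer(y_hat, y_hat))
```

For any placement the tensor is symmetric and traceless, a handy sanity check when entering the site data of Table 1.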
Following \\cite{cutler98},\nwe can rewrite the waveform detector $a$ measures as\n\\begin{equation}\nh_a = \\frac{4{\\cal M}_z}{D_L}{\\cal A}_p\\left[\\pi {\\cal M}_z\nf(t)\\right]^{2\/3} \\cos\\left[\\Phi(t) + \\Phi_p\\right]\\;,\n\\label{eq:measuredwave2}\n\\end{equation}\nwhere we have introduced detector $a$'s ``polarization amplitude''\n\\begin{equation}\n{\\cal A}_p = \\sqrt{ \\left(F_{a,+}{\\cal A}_+\\right)^2 +\n\\left(F_{a,\\times}{\\cal A}_\\times\\right)^2}\\;,\n\\label{eq:polamp}\n\\end{equation}\nand its ``polarization phase''\n\\begin{equation}\n\\tan\\Phi_p = \\frac{F_{a,\\times}{\\cal A}_\\times}{F_{a,+}{\\cal A}_+}\\;.\n\\label{eq:polphase}\n\\end{equation}\nThe intrinsic GW phase, $\\Phi(t)$, is a strong function of the\nredshifted chirp mass, ${\\cal M}_z$, the redshifted reduced mass,\n$\\mu_z$, the time of coalescence, $t_c$, and the phase at coalescence,\n$\\Phi_c$. Measuring the phase determines these four quantities,\ntypically with very good accuracy.\n\nConsider for a moment measurements by a single detector. The\npolarization amplitude and phase depend on the binary's sky position,\n$(\\theta,\\phi)$ or $\\hat{\\bf n}$, and orientation, $(\\psi,\\iota)$ or\n$\\hat{\\bf L}$. [They also depend on detector position, $(\\lambda_a,\n\\varphi_a)$, orientation, $\\Upsilon_a$, and tilt, $(\\omega^x_a,\n\\omega^y_a)$. These angles are known and fixed, so we ignore them in\nthis discussion.] If the angles $(\\theta,\\phi,\\psi,\\iota)$ are not\nknown, a single detector cannot separate them, nor can it separate the\ndistance $D_L$.\n\nMultiple detectors can, at least in principle, separately determine\nthese parameters. Each detector measures its own amplitude and\npolarization phase. Combining their outputs, we can fit to the\nunknown angles and the distance. 
Various works have analyzed how well\nthis can be done assuming that the position and orientation are\ncompletely unknown (\\citealt{sylvestre, cavalieretal, blairetal}).\nVan der Sluys et al.\\ (2008) performed such an analysis for\nmeasurements of NS-BH binaries, including the effect of orbital\nprecession induced by the black hole. This precession effectively\nmakes the angles $\\iota$ and $\\psi$ time dependent, thereby breaking the\ndegeneracy among these angles and $D_L$.\n\nIn what follows, we assume that an electromagnetic identification pins\ndown the angles $(\\theta,\\phi)$, so that they do not need to be\ndetermined from the GW data. We then face the substantially less\nchallenging problem of determining $\\psi$, $\\iota$, and $D_L$. We\nwill also examine the impact of a constraint on the inclination,\n$\\iota$. Long bursts are believed to be strongly collimated, emitting\ninto jets with opening angles of just a few degrees. Less is known\nabout the collimation of SHBs, but it is plausible that their emission\nmay be primarily along a preferred axis (presumably the progenitor\nbinary's orbital angular momentum axis).\n\n\\subsection{GW detectors used in our analysis}\n\\label{sec:detectors}\n\nHere we briefly summarize the properties of the GW detectors that we\nconsider.\n\n\\noindent\n{\\it LIGO}: The Laser Interferometer Gravitational-wave Observatory\nconsists of two 4 kilometer interferometers located in Hanford,\nWashington (US) and Livingston, Louisiana (US). These instruments\nhave achieved their initial sensitivity goals. 
An upgrade to\n``advanced'' configuration is expected to be completed around 2014,\nwith tuning for best sensitivity to be undertaken in the years\nfollowing\\footnote{http:\/\/www.ligo.caltech.edu\/advLIGO\/scripts\/summary.shtml}.\nWe show the anticipated noise limits from fundamental noise sources in Fig.\\\n{\\ref{fig:aligonoise}} for a broad-band tuning {\\citep{ligo_noise}}.\nThis spectrum is expected to be dominated by quantum sensing noise\nabove the low-frequency cut-off at $f = 10$ Hz, with a contribution from thermal noise\nin the test mass coatings in the band from 30--200 Hz.\n\n\\begin{figure}\n\\centering \n\\includegraphics[angle=90,width=0.98\\columnwidth]{fig1.eps}\n\\caption{Anticipated noise spectrum for Advanced LIGO\n({\\citealt{ligo_noise}}; cf.\\ their Fig.\\ 3). Our calculations assume\nno astrophysically interesting sensitivity below a low frequency\ncut-off of 10 Hz. The features at $f \\simeq 10$ Hz and a few hundred\nHz are resonant modes of the mirror suspensions driven by thermal\nnoise.}\n\\label{fig:aligonoise}\n\\end{figure}\n\n\\noindent\n{\\it Virgo}: The Virgo detector \\citep{acernese08} near Pisa, Italy has slightly shorter\narms than LIGO (3 kilometers), but should achieve similar advanced\nsensitivity on roughly the same timescale as the LIGO\ndetectors\\footnote{http:\/\/www.ego-gw.it\/public\/virgo\/virgo.aspx}. For\nsimplicity, we will take Virgo's sensitivity to be the same as LIGO's.\n\nOur baseline detector network consists of the LIGO Hanford and\nLivingston sites, and Virgo; these are instruments which are running\ntoday, and will be upgraded over the next decade. We also examine the\nimpact of adding two proposed interferometers to this network:\n\n\\noindent\n{\\it AIGO}: The Australian International Gravitational Observatory \\citep{barriga10} is\na proposed multi-kilometer interferometer that would be located in\nGingin, Western Australia. 
AIGO's proposed site in Western Australia\nis particularly favorable due to low seismic and human activity.\n\n\\noindent\n{\\it LCGT}: The Large-scale Cryogenic Gravitational-wave Telescope \\citep{kuroda10} is\na proposed multi-kilometer interferometer that would be located in the\nKamioka observatory, 1 kilometer underground. This location takes\nadvantage of the fact that local ground motions tend to decay rapidly\nas we move away from the Earth's surface. They also plan to use\ncryogenic cooling to reduce thermal noise.\n\nAs with Virgo, we will take the sensitivity of AIGO and LCGT to be the\nsame as LIGO for our analysis. Table {\\ref{tab:detectors}} gives the\nlocation and orientation of these detectors, needed to compute\neach detector's response function. It's worth mentioning that more\nadvanced detectors are in the early planning stages. Particularly\nnoteworthy is the European proposal for the ``Einstein Telescope,''\ncurrently undergoing design studies. It is being designed to study\nbinary coalescence to high redshift ($z \\gtrsim 5$) {\\citep{sathya09}}.\n\n\\begin{widetext}\n\n\\begin{deluxetable}{lccccc}\n\\tablewidth{17.5cm}\n\\tablecaption{GW detectors (positions and orientations).}\n\\tablehead{\n\\colhead{Detector} &\n\\colhead{East Long.\\ $\\lambda$} &\n\\colhead{North Lat.\\ $\\varphi$} &\n\\colhead{Orientation $\\Upsilon$} &\n\\colhead{$x$-arm tilt $\\omega^x$} &\n\\colhead{$y$-arm tilt $\\omega^y$}\n}\n\n\\startdata\n\nLIGO-Han & $-119.4^\\circ$ & $46.5^\\circ$ & $126^\\circ$ & $ (-6.20 \\times 10^{-4})^\\circ$ & $(1.25 \\times 10^{-5})^\\circ$ \\\\\nLIGO-Liv & $-90.8^\\circ$ & $30.6^\\circ$ & $198^\\circ$ & $ (-3.12 \\times 10^{-4})^\\circ$ & $(-6.11 \\times 10^{-4} )^\\circ$ \\\\\nVirgo & $10.5^\\circ$ & $43.6^\\circ$ & $70^\\circ$ & $0.0^\\circ$ & $0.0^\\circ$ \\\\\nAIGO & $115.7^\\circ$ & $-31.4^\\circ$ & $0^\\circ$ & $0.0^\\circ$ & $0.0^\\circ$ \\\\\nLCGT & $137.3^\\circ$ & $36.4^\\circ$ & $25^\\circ$ & $0.0^\\circ$ & $0.0^\\circ$ 
\\\\\n\n\\enddata\n\n\\label{tab:detectors}\n\\end{deluxetable}\n\\end{widetext}\n\n\n\\section{Estimation of binary parameters}\n\\label{sec:estimate}\n\n\\subsection{Overview of formalism}\n\\label{sec:formaloverview}\n\nWe now give a brief summary of the parameter estimation formalism we\nuse. Further details can be found in \\cite{finn92}, \\cite{krolak93},\nand CF94.\n\nAssuming detection has occurred, the datastream of detector $a$,\n$s_a(t)$, has two contributions: The true GW signal\n$h_a(t;{\\boldsymbol{\\hat \\theta}})$ (constructed by contracting the GW\ntensor $h_{ij}$ with detector $a$'s response tensor $D^{ij}_a$; cf.\\\nSec.\\ {\\ref{sec:gwmeasure}}), and a realization of detector noise\n$n_a(t)$,\n\\begin{equation}\n\\label{eq:sig}\ns_a(t) = h_a(t; {\\boldsymbol{\\hat \\theta}}) + n_a(t)\\;.\n\\end{equation}\nThe incident gravitational wave strain depends on (unknown) true\nparameters ${\\boldsymbol{\\hat \\theta}}$. As in Sec.\\\n\\ref{sec:dalaletal}, $\\boldsymbol{\\hat\\theta}$ is a vector whose\ncomponents are binary parameters. Below we use a vector ${\\bf s}$\nwhose components $s_a$ are the datastreams of each detector.\nLikewise, ${\\bf h}$ and ${\\bf n}$ are vectors whose components are the\nGW and noise content of each detector.\n\nWe assume the noise to be stationary, zero mean, and Gaussian. This\nlets us characterize it using the spectral density as follows. First,\ndefine the noise correlation matrix:\n\\begin{eqnarray}\nC_n(\\tau)_{ab} &=& \\langle n_a(t + \\tau) n_b(t) \\rangle -\n\\langle n_a(t + \\tau) \\rangle \\, \\langle n_b(t) \\rangle\n\\nonumber\\\\\n&=& \\langle n_a(t + \\tau) n_b(t) \\rangle\\;,\n\\label{eq:autocov}\n\\end{eqnarray}\nwhere the angle brackets are ensemble averages over noise\nrealizations, and the zero mean assumption gives us the simplified\nform on the second line. For $a = b$, this is the auto-correlation of\ndetector $a$'s noise; otherwise, it describes the correlation between\ndetectors $a$ and $b$. 
The (one-sided) power spectral density matrix\nis the Fourier transform of this:\n\\begin{equation}\n\\label{eq:sn_def}\nS_n(f)_{ab} = 2 \\int_{-\\infty}^{\\infty} d \\tau \\, e^{2 \\pi i f \\tau}\nC_n(\\tau)_{ab}\\;.\n\\end{equation}\nThis is defined for $f > 0$ only. For $a = b$, it is the spectral\ndensity of noise power in detector $a$; for $a \\ne b$, it again\ndescribes correlations between detectors. From these definitions, one\ncan show that\n\\begin{equation}\n\\label{eq:noisestats}\n\\langle {\\tilde n}_a(f) \\, {\\tilde n}_b(f^\\prime)^* \\rangle = {1 \\over 2}\n\\delta(f - f^\\prime) S_n(f)_{ab}.\n\\end{equation}\nFor Gaussian noise, this statistic completely characterizes our\ndetector noise. No real detector is completely Gaussian, but by using\nmultiple, widely-separated detectors non-Gaussian events can be\nrejected. For this analysis, we assume the detectors' noises are\nuncorrelated such that Eq.\\ (\\ref{eq:noisestats}) becomes\n\\begin{equation}\n\\label{eq:noisestatsunocrrelated}\n\\langle {\\tilde n}_a(f) \\, {\\tilde n}_b(f^\\prime)^* \\rangle = {1 \\over\n2} \\delta_{ab} \\delta(f - f^\\prime) S_n(f)_a.\n\\end{equation}\nFinally, for simplicity we assume that $S_n(f)_a$ has the universal\nshape $S_n(f)$ projected for advanced LIGO, shown in Fig.\\\n\\ref{fig:aligonoise}.\n\nMany of our assumptions are idealized (Gaussian noise; identical noise\nspectra; no correlated noise between interferometers), and will\ncertainly not be achieved in practice. These idealizations greatly\nsimplify our analysis, however, and are a useful baseline. 
It would\nbe useful to revisit these assumptions and understand the quantitative\nimpact that they have on our analysis, but we do not expect a major\nqualitative change in our conclusions.\n\nThe central quantity of interest in parameter estimation is the\nposterior probability distribution function (PDF) for\n${\\boldsymbol{\\theta}}$ given detector output {\\bf s}, which is\ndefined as\n\\begin{equation}\n\\label{eq:postPDF}\np({\\boldsymbol \\theta} \\, | \\, {\\bf s}) = {\\cal N} \\, p^{(0)}\n({\\boldsymbol{\\theta}}) {\\cal L}_{\\rm TOT} ({\\bf s} \\, |\n\\,{\\boldsymbol{\\theta}}) \\,.\n\\end{equation}\n${\\cal N}$ is a normalization constant,\n$p^{(0)}({\\boldsymbol{\\theta}})$ is the PDF that represents the prior\nprobability that a measured GW is described by the parameters\n$\\boldsymbol{\\theta}$, and ${\\cal L}_{\\rm TOT} ({\\bf s} \\, | \\,\n{\\boldsymbol \\theta} )$ is the total {\\it likelihood function} (e.g.,\n\\citealt{mackay03}). The likelihood function measures the relative\nconditional probability of observing a particular dataset ${\\bf s}$,\ngiven a signal ${\\bf h}$ that depends on some unknown set of\nparameters $\\boldsymbol{\\theta}$ and a noise realization ${\\bf n}$. 
Because\nwe assume that the noise is independent and uncorrelated at each\ndetector site, we may take the total likelihood function to be the\nproduct of the individual likelihoods at each detector:\n\\begin{equation}\n\\label{eq:totLike}\n{\\cal L}_{\\rm TOT} ({\\bf s} \\, | \\,{\\boldsymbol \\theta}) = \\prod_{a}\n{\\cal L}_a (s_a \\, | \\,{\\boldsymbol \\theta})\\;,\n\\end{equation}\nwhere ${\\cal L}_a$, the likelihood for detector $a$, is given by\n\\citep{finn92}\n\\begin{equation}\n\\label{eq:Like}\n{\\cal L}_a \\, (s \\, | \\,{\\boldsymbol \\theta}) = \\, e^{ -\n\\big( h_a({\\boldsymbol \\theta}) - s_a \\, \\big| \\, h_a({\\boldsymbol\n\\theta}) - s_a \\big)\/2 } \\, .\n\\end{equation}\nThe inner product $\\left( \\ldots | \\ldots \\right)$ on the vector space\nof signals is defined as\n\\begin{equation}\n(g|h) = 2 \\int_0^{\\infty} df \\frac{\\tilde{g}^*(f)\\tilde{h}(f) +\n\\tilde{g}(f)\\tilde{h}^*(f)}{S_n(f)} \\, .\n\\label{eq:innerproduct}\n\\end{equation}\nThis definition means that the probability of the noise $n(t)$\ntaking some realization $n_0(t)$ is\n\\begin{equation}\n\\label{eq:noise_distribution}\np(n = n_0) \\, \\propto \\, e^{- \\left( n_0 | n_0 \\right) \/ 2 }.\n\\end{equation}\n\nFor clarity, we distinguish between various definitions of SNR. The\n{\\it true}\\\/ SNR at detector $a$, associated with a given instance of\nnoise for a measurement at a particular detector, is defined as (CF94)\n\\begin{eqnarray}\n\\left({S\\over N}\\right)_{a, {\\rm true}} & = & { \\left( h_a \\, |\n\\, s_a \\right) \\over \\sqrt{ \\left( h_a \\, | \\, h_a\\right) } }\\;.\n\\label{eq:snr_true}\n\\end{eqnarray}\nThis is a random variable with Gaussian PDF of unit variance. 
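On a discrete frequency grid, the inner product just defined reduces to a weighted sum. A minimal sketch (illustrative, not our pipeline; the grid spacing df and one-sided PSD S_n are inputs):

```python
import numpy as np

def inner_product(g_tilde, h_tilde, S_n, df):
    """Discrete approximation to the noise-weighted inner product (g|h):
    2 * integral of (g* h + g h*) / S_n  =  4 * integral of Re(g* h) / S_n,
    evaluated on a one-sided frequency grid with spacing df."""
    return 4.0 * df * np.sum((np.conj(g_tilde) * h_tilde).real / S_n)
```

The quantity (h|h)**0.5 built this way is the building block of the SNR definitions used in this section.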
For an\nensemble of realizations of the detector noise $n_a$, the {\\it\naverage} SNR at detector $a$ is given by\n\\begin{equation}\n\\label{eq:snr_ave}\n\\left({S\\over N}\\right)_{a, {\\rm ave}} = {{(h_a | h_a)}\\over\n{ {\\rm rms}\\ (h_a|n_a)}} = (h_a|h_a)^{1\/2}.\n\\end{equation}\nConsequently, we can define the combined {\\it true} and {\\it average}\nSNRs of a coherent network of detectors:\n\\begin{eqnarray}\n\\left({S\\over N}\\right)_{{\\rm true}} & = & \\sqrt{\\sum_a \\left({S\\over\nN}\\right)^2_{a, {\\rm true}}}\\ \\ ,\n\\label{eq:snr_total}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\left({S\\over N}\\right)_{{\\rm ave}} & = & \\sqrt{\\sum_a \\left({S\\over\nN}\\right)^2_{a, {\\rm ave}}}\\ \\ .\n\\label{eq:snr_tot_ave}\n\\end{eqnarray}\n\nEstimating the parameter set ${\\boldsymbol{\\theta}}$ is often done\nusing a ``maximum likelihood'' method following either a Bayesian\n(\\citealt{loredo89}, \\citealt{finn92}, CF94, \\citealt{poisson95}) or\nfrequentist point of view (\\citealt{krolak93}, CF94). We do not\nattempt to review these philosophies, and instead refer to Appendix A2\nof CF94 for detailed discussion. It is worth noting that, in the GW\nliterature, the ``maximum likelihood'' or ``maximum a posteriori'' are\noften interchangeably referred to as ``best-fit'' parameters. 
The\nmaximum a posteriori estimate is the parameter set\n$\\boldsymbol{\\tilde\\theta}_{\\rm MAP}$ which maximizes the full\nposterior probability, Eq.\\ (\\ref{eq:postPDF}); likewise, the maximum\nlikelihood estimate is the parameter set $\\boldsymbol{\\tilde\\theta}_{\\rm ML}$\nwhich maximizes the likelihood function, Eq.\\ (\\ref{eq:totLike}).\n\nFollowing the approach advocated by CF94, we introduce the Bayes\nestimator ${\\tilde \\theta}_{\\rm BAYES}^i({\\bf s})$,\n\\begin{equation}\n\\label{eq:Bayes}\n{\\tilde \\theta}_{\\rm BAYES}^i({\\bf s}) \\equiv \\int {\\theta}^i\\,\np(\\boldsymbol{\\theta} \\, | \\, {\\bf s}) d\\boldsymbol{\\theta}\\;.\n\\end{equation} \nThe integral is performed over the whole parameter set\n$\\boldsymbol{\\theta}$; $d\\boldsymbol{\\theta} = d\\theta^1d\\theta^2\\dots\nd\\theta^n$. Similarly, we define the rms measurement errors\n$\\Sigma_{\\rm BAYES}^{ij}$\n\\begin{equation}\n\\label{eq:sigma_bayes}\n\\Sigma_{\\rm BAYES}^{ij} = \\int ({\\theta}^i - {\\tilde \\theta}^i_{\\rm\nBAYES}) \\, ({\\theta}^j - {\\tilde \\theta}^j_{\\rm BAYES}) \\,\np(\\boldsymbol{ \\theta} \\, | \\, {\\bf s}) d\\boldsymbol{\\theta} .\n\\end{equation}\nTo understand the meaning of ${\\tilde\\theta}_{\\rm BAYES}^i({\\bf s})$,\nconsider a single detector which records an arbitrarily large ensemble\nof signals. This ensemble will contain a sub-ensemble in which the\nvarious $s(t)$ are identical to one another. Each member of the\nsub-ensemble corresponds to a GW signal with different true parameters\n$\\boldsymbol{\\hat \\theta}$, but with a noise realization $n(t)$ that\nconspires to produce the same $s(t)$. In this case, ${\\tilde\n\\theta}_{\\rm BAYES}^i({\\bf s})$ is the expectation of $\\theta^i$\naveraged over the sub-ensemble. 
The principle disadvantage of the\nBayes estimator is the computational cost to evaluate the\nmulti-dimensional integrals in Eqs.\\ (\\ref{eq:Bayes}) and\n(\\ref{eq:sigma_bayes}).\n\nFor large SNR it can be shown that the estimators\n$\\boldsymbol{\\tilde\\theta}_{\\rm ML}$, $\\boldsymbol{\\tilde\\theta}_{\\rm\nMAP}$, and $\\boldsymbol{\\tilde\\theta}_{\\rm BAYES}$ agree with one\nanother (CF94), and that Eq.\\ (\\ref{eq:postPDF}) is well-described by\na Gaussian form [cf.\\ Eq.\\ (\\ref{eq:gaussian})]. However, as\nillustrated in Sec.\\ IVD of CF94, effects due to prior information and\nwhich scale nonlinearly with $1\/\\mbox{SNR}$ contribute significantly\nat low SNR. The Gaussian approximation then tends to underestimate\nmeasurement errors by missing tails or multimodal structure in\nposterior distributions.\n\nWe emphasize that in this analysis we do not consider systematic\nerrors that occur due to limitations in our source model or to\ngravitational lensing effects. A framework for analyzing systematic\nerrors in GW measurements has recently been presented by \\cite{cv07}.\nAn important follow-on to this work will be to estimate systematic\neffects and determine whether they significantly change our\nconclusions.\n\n\\subsection{Binary Selection and Priors}\n\\label{sec:selectionandpriors}\n\nWe now describe how we generate a sample of detectable GW-SHB events.\nWe assume a constant comoving density (\\citealt{peebles93},\n\\citealt{hogg99}) of GW-SHB events, in a $\\Lambda$CDM Universe with\n$H_0=70.5\\ \\mbox{km}\/\\mbox{sec}\/\\mbox{Mpc}$, $\\Omega_{\\Lambda}=0.726$,\nand $\\Omega_{m}=0.2732$ \\citep{komatsu09}. We distribute $10^6$\nbinaries uniformly in volume with random sky positions and\norientations to redshift $z = 1$ ($D_L \\simeq 6.6$ Gpc). We then\ncompute the average SNR, Eq.\\ (\\ref{eq:snr_ave}), for each binary at\neach detector, and use Eq.\\ (\\ref{eq:snr_tot_ave}) to compute the\naverage total SNR for each network we consider. 
We assume prior\nknowledge of the merger time (since we have assumed that the inspiral is\ncorrelated with an SHB), so we set a threshold SNR for the {\\it total}\ndetector network, $\\mbox{SNR}_{\\rm total} = 7.5$ (see discussion in\nDHHJ06). This is somewhat reduced from the threshold we would set in\nthe absence of a counterpart, since prior knowledge of merger time and\nsource position reduces the number of search templates we need by a\nfactor $\\sim 10^{5}$ (\\citealt{kp93}, \\citealt{Owen96}). Using the\naverage SNR to set our threshold introduces a slight error into our\nanalysis, since the true SNR will differ from the average. Some events\nwhich we identify as above threshold could be moved below threshold\ndue to a measurement's particular noise realization. However, some\nsub-threshold events will likewise be moved above threshold, and the net\neffect is not expected to be significant.\n\nOur threshold selects detectable GW-SHB events for each detector\nnetwork. We define ``total detected binaries'' to mean binaries which\nare detected by a network of all five detectors---both LIGO\nsites, Virgo, AIGO, and LCGT. Including AIGO and LCGT substantially\nincreases the number of detected events, as compared to just using the two LIGO\ndetectors and Virgo. Assuming that all binary orientations are\nequally likely given an SHB (i.e., no beaming), we find that a LIGO-Virgo\nnetwork detects \n$50\\%$ of the total detected binaries; LIGO-Virgo-AIGO detects $74\\%$\nof the total; and LIGO-Virgo-LCGT detects $72\\%$ of the total. 
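The selection cut described above can be caricatured with a toy Monte Carlo. In the sketch below the fiducial SNR normalization (snr_at_1gpc) and the Euclidean 1/D_L scaling are placeholders for the full waveform and antenna-pattern calculation, not numbers from our analysis; only the structure (uniform in volume, threshold on network SNR) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_selection(n_draw, snr_at_1gpc=30.0, d_max=6.6, threshold=7.5):
    """Toy thresholding: sources uniform in (Euclidean) volume out to
    d_max Gpc, with a made-up fiducial network SNR at 1 Gpc scaled as
    1/D_L; keep events with SNR >= threshold."""
    # uniform in volume: p(r) proportional to r^2, so r = d_max * u**(1/3)
    r = d_max * rng.random(n_draw) ** (1.0 / 3.0)
    return r[snr_at_1gpc / r >= threshold]
```

With these placeholder numbers the detection horizon is 4 Gpc, so roughly (4/6.6)^3, about 22%, of the drawn events survive the cut.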
Figure\n\\ref{fig:detected_binaries} shows the sky distribution of detected binaries for\nvarious detector combinations.\nNetworks which include LCGT tend to have rather uniform sky coverage.\nThose with AIGO cover the quadrants $\\cos\\theta > 0$, $\\phi > \\pi$ and\n$\\cos\\theta < 0$, $\\phi < \\pi$ particularly well.\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=0.9\\columnwidth]{fig2.eps}\n\\caption{Detected NS-NS binaries for our various detector networks as\na function of sky position $(\\cos\\theta,\\phi)$. The lower right panel\nshows the binaries detected by a five-detector network (both LIGO\nsites, Virgo, AIGO, and LCGT). We find that LIGO plus Virgo (our\n``base'' network) only detects $50\\%$ of the five-detector events;\nLIGO, Virgo, and AIGO detect $74\\%$ of these events; and LIGO, Virgo,\nand LCGT, detect $72\\%$ of these events. Detections are more\nuniformly distributed on the sky in networks that include LCGT; AIGO\nimproves coverage in two of the sky's quadrants. Our coordinate\n$\\phi$ is related to right ascension $\\alpha$ by $\\phi=\\alpha-$GMST,\nwhere GMST is Greenwich Mean Sidereal Time; $\\theta$ is related to\ndeclination $\\delta$ by $\\theta = \\pi\/2 - \\delta$.}\n\\label{fig:detected_binaries}\n\\end{figure}\n\nOur selection method implicitly sets a prior distribution on our\nparameters. For example, the thresholding procedure results in a\nsignificant bias in detected events toward face-on binaries, with\n$\\mathbf{\\hat L}\\cdot \\mathbf{\\hat n} \\rightarrow \\pm 1$. Figure\n{\\ref{fig:marg2DDLcosinc}} shows the distribution of detectable NS-NS\nbinaries for the parameters $\\left(\\cos\\iota, D_L\\right)$. Since we\nuse an unrealistic mass distribution $(1.4\\,M_\\odot$--$1.4\\,M_\\odot$\nNS-NS and $1.4\\,M_\\odot$--$10\\,M_\\odot$ NS-BH binaries), instead of a\nmore astrophysically realistic distribution, the implicit mass prior\nis uninteresting. 
Figure \\ref{fig:SNRvsDL} shows the average total\nSNR versus the true $D_L$ of our sample of detectable NS-NS and NS-BH\nbinaries for our ``full'' network (LIGO, Virgo, AIGO, LCGT). Very few\ndetected binaries have SNR above 30 for NS-NS, and above 70 for NS-BH.\nIt is interesting to note the different detectable ranges between the\ntwo populations: NS-BH binaries are detectable to over twice the\ndistance of NS-NS binaries.\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=0.98\\columnwidth]{fig3.eps}\n\\caption{The 2-D marginalized prior distribution in luminosity\ndistance $D_L$ and cosine inclination $\\cos \\iota$. Each point\nrepresents a detected NS-NS binary for a network comprising all five\ndetectors. Notice the bias toward detecting face-on binaries\n($\\cos\\iota \\to \\pm 1$)---they are detected to much larger distances\nthan edge-on ($\\cos\\iota \\to 0$).}\n\\label{fig:marg2DDLcosinc}\n\\end{figure}\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=0.98\\columnwidth]{fig4.eps}\n\\caption{Average network SNR versus luminosity distance of the total detected\nNS-NS and NS-BH binaries. This assumes an idealized network consisting\nof both LIGO detectors,\nVirgo, AIGO, and LCGT. Left panel shows all detected NS-NS binaries\n(one point with SNR above 100 is omitted); right panel shows all\ndetected NS-BH binaries (one point with SNR above 350 is omitted).\nNotice the different axis scales: NS-BH binaries are detected to more\nthan twice the distance of NS-NS. The threshold SNR for the {\\it total}\ndetector network is 7.5, $\\mbox{SNR}_{\\rm total} = 7.5$.}\n\\label{fig:SNRvsDL}\n\\end{figure}\n\nWe are also interested in seeing the impact that prior knowledge of\nSHB collimation may have on our ability to measure these events. 
To\ndate there exist only two tentative observations which suggest that\nSHBs may be collimated (\\citealt{grupeetal06},\n\\citealt{burrowsetal06}, \\citealt{soderbergetal06}); we therefore\npresent results for moderate collimation and for isotropic SHB\nemission. To obtain a sample of beamed SHBs, we assume that the burst\nemission is collimated along the orbital angular momentum axis, where\nbaryon loading is minimized. Following DHHJ06, we use a distribution\nfor $\\cos\\iota \\equiv v$ of $dP\/dv \\propto \\exp[-(1 -\nv)^2\/2\\sigma_v^2]$, with $\\sigma_v=0.05$. This corresponds to a\nbeamed population with $68\\%$ of its distribution having a jet\nopening angle within roughly $25^\\circ$. We construct a beamed subsample\nby selecting events from the total sample of detected events such that\nthe final distribution in inclination angle follows $dP\/dv$. Joint\nmeasurements of SHBs and GW-driven inspirals should enable us to\nconstrain beaming angles by comparing the measured rates for these two\npopulations.\n\n\\subsection{Markov-Chain Monte-Carlo approach}\n\\label{sec:mhmc}\n\nThe principal disadvantage of the Bayes estimators\n$\\tilde\\theta^i_{\\rm BAYES}$ and $\\Sigma^{ij}_{\\rm BAYES}$ is the high\ncomputational cost of evaluating the multi-dimensional integrals which\ndefine them, Eqs.\\ (\\ref{eq:Bayes}) and (\\ref{eq:sigma_bayes}). To\nget around this problem, we use Markov-Chain Monte-Carlo (MCMC)\nmethods to explore the PDFs describing the seven parameters $\\{ {\\cal\nM}_c, \\mu, D_L, \\cos \\iota, \\psi, t_c, \\Phi_c \\}$. MCMC methods are\nwidely used in diverse astrophysical applications, ranging from high\nprecision cosmology (e.g.\\ \\citealt{dunkley09}, \\citealt{sievers09})\nto extra-solar planet studies (e.g.\\ \\citealt{ford05},\n\\citealt{winn07}). 
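The MCMC machinery itself is conceptually simple. A toy Metropolis-Hastings sampler with a symmetric Gaussian proposal (a minimal stand-in for the CosmoMC-derived code we actually use, not a transcription of it) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_post, theta0, step, n_steps):
    """Toy Metropolis-Hastings sampler: symmetric Gaussian proposal of
    width `step`; accept a proposal with probability min(1, L'/L).
    Returns a chain of shape (n_steps, ndim) sampling exp(log_post)."""
    theta = np.asarray(theta0, dtype=float)
    logp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_post(prop)
        if np.log(rng.random()) < logp_prop - logp:  # accept step
            theta, logp = prop, logp_prop
        chain[i] = theta                             # reject: repeat state
    return chain
```

Sample means and covariances over the post-burn-in chain then approximate the Bayes estimator integrals that this section set out to evaluate.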
They have seen increased use in GW measurement and\nparameter estimation studies in recent years (e.g.,\n\\citealt{stroeer06}, \\citealt{wickham06}, \\citealt{cornish07},\n\\citealt{porter08}, \\citealt{rover07}, \\citealt{vandersluys08}).\n\nMCMC generates a random sequence of parameter states that sample the\nposterior distribution, $p(\\boldsymbol{\\theta} | \\mathbf{s})$. Let\nthe $n$th sample in the sequence be $\\boldsymbol{\\theta}^{(n)}$. If\none draws a total of $N$ random samples, Eqs.\\ (\\ref{eq:Bayes}) and\n(\\ref{eq:sigma_bayes}) can then be approximated as sample averages:\n\\begin{eqnarray}\n{\\tilde\\theta}^i_{\\rm BAYES} &\\simeq& \\frac{1}{N} \\sum_{n = 1}^N\n(\\theta^i)^{(n)}\\;,\n\\label{eq:approxBayes}\\\\\n\\Sigma^{ij}_{\\rm BAYES} &\\simeq& \\frac{1}{N} \\sum_{n = 1}^N\n\\left(\\tilde\\theta^i_{\\rm BAYES} - (\\theta^i)^{(n)}\\right)\n\\left(\\tilde\\theta^j_{\\rm BAYES} - (\\theta^j)^{(n)}\\right)\\;.\n\\nonumber\\\\\n\\label{eq:approxsigma_bayes}\n\\end{eqnarray}\nThe key to making this technique work is drawing a sequence that\nrepresents the posterior PDF. We use the Metropolis-Hastings\nalgorithm to do this (\\citealt{metropolis53}, \\citealt{hastings70});\nsee \\cite{neal93}, \\cite{gilks96}, \\cite{mackay03}, and \\cite{cml04}\nfor in-depth discussion. The MCMC algorithm we use is based on a\ngeneric version of CosmoMC\\footnote{See\nhttp:\/\/cosmologist.info\/cosmomc\/}, described in \\cite{lewis02}.\n\nAppropriate priors are crucial to any MCMC analysis. We take the\nprior distributions in chirp mass ${\\cal M}_z$, reduced mass $\\mu_z$,\npolarization angle $\\psi$, coalescence time $t_c$, and coalescence\nphase $\\Phi_c$ to be {\\it flat} over the region of sample space where\nthe binary is detectable according to our selection procedure. 
More\nspecifically, we choose\n\\begin{itemize}\n\n\\item $p^{(0)}({\\cal M}_z) = {\\rm constant}$ over the range\n$[1\\,M_\\odot$, $2\\,M_\\odot]$ for NS-NS; and over the range\n$[2.5\\,M_\\odot$, $4.9\\,M_\\odot]$ for NS-BH. (The true chirp masses in\nthe binaries' rest frames are $1.2\\,M_\\odot$ for NS-NS and\n$3.0\\,M_\\odot$ for NS-BH.)\n\n\\item $p^{(0)}(\\mu_z) = {\\rm constant}$ over the range\n$[0.3\\,M_\\odot$, $2\\,M_\\odot]$ for NS-NS; and over the range\n$[0.5\\,M_\\odot$, $3.5\\,M_\\odot]$ for NS-BH. (The true reduced masses\nin the binaries' rest frames are $0.7\\,M_\\odot$ for NS-NS and\n$1.2\\,M_\\odot$ for NS-BH.)\n\n\\item $p^{(0)}(\\psi) = {\\rm constant}$ over the range $[0,\\pi]$.\n\n\\item $p^{(0)}(t_c) = {\\rm constant}$ over the range $[-100\\,{\\rm\nsec}, 100\\,{\\rm sec}]$. Since we assume that $t_c$ is close to the\ntime of the SHB event, it is essentially the time offset between the\nsystem's final GWs and its SHB photons. We find that the range in\n$t_c$ we choose is almost irrelevant, as long as the prior is flat and\nincludes the true value. No matter how broad we choose the prior in\n$t_c$, our posterior PDF ends up narrowly peaked around $\\hat t_c$.\n\n\\item $p^{(0)}(\\Phi_c) = {\\rm constant}$ over the range $[0, 2\\pi]$.\n\n\\end{itemize}\nThe prior distribution for $D_L$ is inferred by taking the density of\nSHBs to be uniform per unit comoving volume over the luminosity\ndistance range [0, 2 Gpc] for NS-NS binaries, and over the range [0, 5\nGpc] for NS-BH binaries. For our sample with isotropic inclination\ndistribution, we put $p^{(0)}(\\cos \\iota) = {\\rm constant}$ over the\nrange $[-1,1]$. 
When we assume SHB collimation, our prior in\n$\\cos\\iota \\equiv v$ is the same as the one that we used in our\nselection procedure discussed previously:\n\\begin{equation}\n\\frac{dp^{(0)}}{dv}(v) \\propto e^{-(1 -v)^2\/2\\sigma_v^2}\\;,\n\\end{equation}\nwith $\\sigma_v = 0.05$.\n\nWe then map out full distributions for each of our seven parameters,\nassessing the mean values [Eq.\\ (\\ref{eq:Bayes})] and the standard\ndeviations [Eq.\\ (\\ref{eq:sigma_bayes})]. We generate four chains\nwhich run in parallel on the CITA ``Sunnyvale'' Cluster. Each chain\nruns for a maximum of $10^7$ steps; we find that the mean and median\nnumber of steps are $\\sim 10^5$ and $\\sim 10^4$, respectively. Each\nevaluation of the likelihood function takes $\\sim0.3$ seconds. We use\nthe first 30\\% of a chain's sample states for ``burn in,'' and\ndiscard that data. Our chains start at random offset parameter\nvalues, drawn from Gaussians centered on the true parameter values. We\nassess convergence by testing whether the multiple chains have\nproduced consistent parameter distributions. Following standard\npractice, we use the Gelman-Rubin convergence criterion, defining a\nsequence as ``converged'' if the statistic $R < 1.1$ on the last half\nof our samples; see \\cite{gr92} for more details. We use convergence\nas our stopping criterion. Each binary's simulation runs for between one and\nforty-eight hours; the mean and median runtimes are eight\nand three hours, respectively.\n\n\\subsection{The ``averaged'' posterior PDF}\n\\label{sec:averagedPDF}\n\nCentral to the procedure outlined above is the use of the datastream\n${\\bf s} = {\\bf h}(\\boldsymbol{\\theta}) + {\\bf n}$ which enters the\nlikelihood function ${\\cal L}_{\\rm TOT}({\\bf s} |\n\\boldsymbol{\\theta})$. The resulting posterior PDF, and the\nparameters one infers, thus depend on the noise ${\\bf n}$ which one\nuses. 
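For reference, the Gelman-Rubin statistic used as our stopping criterion above can be written compactly. This sketch follows the standard definition (within-chain variance W, between-chain variance B) and is illustrative rather than a copy of our production code.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin statistic R for one parameter, from m parallel
    chains (the rows of `chains`), computed on the last half of each
    chain; R < 1.1 is our convergence criterion."""
    half = chains[:, chains.shape[1] // 2:]
    n = half.shape[1]
    W = half.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = n * half.mean(axis=1).var(ddof=1)      # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Chains sampling the same distribution give R near 1; chains stuck in different regions of parameter space give R well above the 1.1 cut.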
One may want to evaluate statistics that are in a well-defined\nsense ``typical'' given the average noise properties, rather than\ndepending on a particular noise instance. Such averaging is\nappropriate, for example, when forecasting how well an instrument\nshould be able to measure the properties of a source or process. We\nhave also found it necessary to average when trying to compare our\nMCMC code's output with previous work.
The\njoint likelihood for these measurements is\n\\begin{equation}\n{\\cal L}^{\\rm joint}_{\\rm TOT}({\\bf s}_1, {\\bf s}_2, \\ldots {\\bf\ns}_M | \\boldsymbol{\\theta} ) = \\prod_{i=1}^M {\\cal L}_{\\rm TOT} ({\\bf s}_i | \\boldsymbol{\\theta})\\;.\n\\label{eq:pjointdef}\n\\end{equation}\nLet us define the ``average'' PDF as the product of the prior\ndistribution of the parameters multiplied by the geometric mean of the\nlikelihoods which describe these measurements:\n\\begin{equation}\np_{\\rm ave}(\\boldsymbol{\\theta} | {\\bf s}) \\equiv {\\cal N} \\, p^{(0)} \n{\\cal L}^{\\rm joint}_{\\rm TOT}({\\bf s}_1, {\\bf s}_2, \\ldots {\\bf\ns}_M| \\boldsymbol{\\theta})^{1\/M}\\;.\n\\label{eq:pavedef}\n\\end{equation}\nExpanding this definition, we find\n\\begin{eqnarray}\np_{\\rm ave} (\\boldsymbol{\\theta} | \\bf{s}) & \\equiv & {\\cal N} \\,\np^{(0)} \\prod_{i=1}^{M}\\, \\left[ {\\cal L}_{\\rm TOT} ({\\bf s}_i |\n \\boldsymbol{\\theta}) \\right]^{1\/M}\\;, \n\\label{eq:avepostPDF}\n\\end{eqnarray}\nwhere the subscript $i$ denotes the $i$th noise realization in our set\nof $M$ observations. 
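The effect of this geometric mean is easy to see in a deliberately simple toy problem: one datum, unit-variance Gaussian noise, and a hypothetical one-parameter signal model $h(\theta) = \theta$. The average log-likelihood then approaches the zero-noise log-likelihood offset by a constant, anticipating the result derived below:

```python
import random

random.seed(2)
theta_hat = 0.7      # true parameter; toy signal model h(theta) = theta
M = 200000           # number of independent noise realizations
noise = [random.gauss(0.0, 1.0) for _ in range(M)]

def avg_log_like(theta):
    """(1/M) sum_i log L_i(theta): the log of the geometric mean of the
    single-realization Gaussian likelihoods for data s_i = theta_hat + n_i."""
    return sum(-(theta - theta_hat - n) ** 2 / 2.0 for n in noise) / M

def zero_noise_log_like(theta):
    """Log-likelihood with the noise set to zero, shifted by the constant
    -<n^2>/2 = -1/2 for this unit-variance toy."""
    return -(theta - theta_hat) ** 2 / 2.0 - 0.5

for theta in (0.0, 0.7, 1.5):
    # the two quantities agree up to O(1/sqrt(M)) sampling error
    print(theta, avg_log_like(theta), zero_noise_log_like(theta))
```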
The ``ensemble average likelihood function'' can\nin turn be expanded as\n\\begin{eqnarray}\n\\prod_{i=1}^{M} \\left[{\\cal L}_{\\rm TOT} ({\\bf s}_i |\n{\\boldsymbol{\\theta}})\\right]^{1\/M} & = & \\prod_{a} \\prod_{i=1}^{M}\n\\left[{\\cal L}_{a} (s_{a,i} \\, | \\,{\\boldsymbol \\theta})\\right]^{1\/M}\n\\nonumber\\\\\n&=& \\prod_{a} \\prod_{i=1}^{M} e^{ - \\big(h_a({\\boldsymbol\n \\theta}) - s_{a,i} \\, \\big| \\, h_a({\\boldsymbol \\theta}) - s_{a,i}\n \\big)\/2M }\n\\nonumber\\\\\n& = & \\prod_{a} e^{-\\big( h_a({\\boldsymbol \\theta}) -\n h_a(\\boldsymbol{\\hat{\\theta}}) \\, \\big| \\, h_a({\\boldsymbol \\theta})\n - h_a (\\boldsymbol{\\hat{\\theta}})\\big)\/2}\n\\nonumber\\\\\n&\\times&\n\\prod_{i=1}^M \\exp\\left[\\frac{1}{M}\\left( n_{a,i} \\, \\bigg|\n h_{a}(\\boldsymbol{\\theta}) - h_{a}(\\boldsymbol{\\hat\\theta})\\right)\n \\right]\n\\nonumber\\\\\n&\\times&\n\\prod_{i=1}^M \\exp\\left[-\\frac{1}{2M}\\left( n_{a,i} \\, \\bigg|\nn_{a,i}\\right)\\right]\\;.\n\\label{eq:multi_obs_likelihood}\n\\end{eqnarray}\nBy taking $M$ to be large, the last two lines of Eq.\\\n(\\ref{eq:multi_obs_likelihood}) can be evaluated as follows:\n\\begin{eqnarray}\n& &\\prod_{i=1}^M \\exp\\left[\\frac{1}{M}\\left( n_{a,i} \\, \\bigg|\n h_{a}(\\boldsymbol{\\theta}) - h_{a}(\\boldsymbol{\\hat\\theta})\\right)\n \\right] \n\\nonumber\\\\\n& &\\qquad\\qquad = \\exp\\left[\\frac{1}{M}\\sum_{i = 1}^M \\left(\n n_{a,i} \\, \\bigg| h_{a}(\\boldsymbol{\\theta}) -\n h_{a}(\\boldsymbol{\\hat\\theta})\\right) \\right]\n\\nonumber\\\\\n& &\\qquad\\qquad \\simeq \\exp\\left[\\left\\langle \\left( n_{a} \\,\n \\bigg| h_{a}(\\boldsymbol{\\theta}) -\n h_{a}(\\boldsymbol{\\hat\\theta})\\right)\\right\\rangle \\right]\n\\nonumber\\\\\n& & \\qquad\\qquad = 1\\;.\n\\end{eqnarray}\nHere, $\\langle \\ldots \\rangle$ denotes an ensemble average over noise\nrealizations (cf.\\ Sec.\\ {\\ref{sec:formaloverview}}), and we have used\nthe fact that our noise has zero mean. 
Similarly, we find\n\begin{eqnarray}\n\prod_{i=1}^M \exp\left[-\frac{1}{2M}\left( n_{a,i} \, \bigg|\n n_{a,i}\right)\right] &=& \exp\left[-\frac{1}{2M}\sum_{i = 1}^M\n \left( n_{a,i} \, \bigg| n_{a,i}\right) \right]\n\nonumber\\ &\simeq&\n \exp\left[-\frac{1}{2}\left\langle \left( n_{a} \, \bigg|\n n_{a}\right)\right\rangle \right]\n \nonumber\\ &=& e^{-1}\;.\n\end{eqnarray}\nThis uses $\langle (n_a | n_a) \rangle = 2$, which can be proved using\nthe noise properties (\ref{eq:autocov}), (\ref{eq:sn_def}), and\n(\ref{eq:noisestats}).\n\nPutting all this together, we finally find\n\begin{equation}\np_{\rm ave}(\boldsymbol{\theta} | {\bf s}) = {\cal N}\n p^{(0)}(\boldsymbol{\theta}) \prod_{a} e^{ -\big( h_a({\boldsymbol\n \theta}) - h_a (\boldsymbol{\hat{\theta}}) \, \big| \,\n h_a({\boldsymbol \theta}) - h_a (\boldsymbol{\hat{\theta}}) \big)\/2\n }\;,\n\label{eq:avepostPDF_final2}\n\end{equation}\nwhere we have absorbed $e^{-1}$ into the normalization ${\cal N}$.\nThe posterior PDF, averaged over noise realizations, is simply\nobtained by evaluating Eq.\ (\ref{eq:postPDF}) with the noise ${\bf\nn}$ set to zero.\n\n\section{Results I: Validation and Testing}\n\label{sec:valid}\n\nWe now validate and test our MCMC code against results from CF94. In\nparticular, we examine the posterior PDF for the NS-NS binary which was\nstudied in detail in CF94. We also explore the dependence of distance\nmeasurement accuracies on the detector network and luminosity\ndistance, focusing on the strong degeneracy that exists between $\cos\n\iota$ and $D_L$.\n\n\subsection{Comparison with CF94}\n\label{sec:cf}\n\nValidation of our MCMC results requires comparing to work which goes\nbeyond the Gaussian approximation and Fisher matrix estimators. In\nSection IVD of CF94, Cutler \& Flanagan investigate effects that are\nnon-linear in $1\/\mbox{SNR}$. 
They show that such effects\nhave a significant impact on distance measurement accuracies for low\nSNR. In particular, they find that Fisher-based estimates understate\ndistance measurement errors for a network of two LIGO detectors\nand Virgo.\n\nBecause they go beyond a Fisher matrix analysis, the results of CF94\nare useful for comparing to our results. Their paper is also useful\nin that they take source position to be known. Our approach is\nsufficiently different from CF94 that we do not expect perfect\nagreement, however. The most important difference is that we directly map out\nthe posterior PDF and compute sample averages using Eqs.\\\n(\ref{eq:Bayes}) and (\ref{eq:sigma_bayes}), for the full parameter\nset $\{ {\cal M}_z, \mu_z, D_L, \cos \iota, \psi, t_c, \Phi_c \}$. In\ncontrast, CF94 estimate measurement errors only for $D_L$, using an\napproximate analytic Bayesian derivation of the marginalized\nPDF for $D_L$. Specifically, Cutler \& Flanagan expand the\nexponential factor in Eq.\ (\ref{eq:postPDF}) beyond second order in\nterms of some ``best-fit'' maximum-likelihood parameters. Their\napproximation treats strong correlations between the parameters $D_L$\nand $\cos \iota$ that are non-linear in 1\/SNR. However, other\ncorrelations between $D_L$ and $(\psi, \Phi_c)$ are considered only to\nlinear order. They obtain an analytic expression for the posterior PDF\nof the variables $D_L$ and $\cos \iota$ in terms of their ``best-fit''\nmaximum-likelihood values $\tilde{D}_L$ and $\cos \tilde{\iota}$ [see\nEq.\ (4.57) of CF94]. The marginalized 1-D posterior PDFs for $D_L$\nare then computed by numerically integrating over $\cos \iota$. 
The\n1-D marginalized PDF we compute in parameter $\\theta_i$ is\n\\begin{equation}\n\\label{eq:margPDF}\np_{\\rm marg}(\\theta_i | {\\bf s}) = \\int \\dots \\int\np(\\boldsymbol{\\theta} | {\\bf s}) d\\theta_1 \\dots d\\theta_{i-1}\\;\nd\\theta_{i+1} \\dots d\\theta_N\n\\end{equation}\nwhere $p(\\boldsymbol{\\theta} | \\bf{s})$ is the posterior PDF given by\nEq.\\ (\\ref{eq:postPDF}) and $N$ is the number of dimensions of our\nparameter set.\n\nIn addition to this rather significant difference in techniques, there\nare some minor differences which also affect our comparison:\n\n\\begin{itemize}\n\n\\item We use the restricted 2PN waveform; CF94 use the leading\n``Newtonian, quadrupole'' waveform that we used for pedagogical\npurposes in Sec.\\ \\ref{sec:sirens}. Since distance is encoded in the\nwaveform's amplitude, we do not expect that our use of a higher-order\nphase function will have a large impact. However, to avoid any easily\ncircumvented mismatch, we adopt the Newtonian-quadrupole waveform for\nthese comparisons. This waveform does not depend on reduced mass\n$\\mu$, so {\\it for the purpose of this comparison only}, our parameter\nspace is reduced from 7 to 6 dimensions.\n\n\\item We use the projected advanced sensitivity noise curve shown in\nFig.\\ (\\ref{fig:aligonoise}); CF94 use an analytical form [their Eq.\\\n(2.1)\\footnote{Note that it is missing an overall factor of $1\/5$ (E.\\\nE.\\ Flanagan, private communication).}] based on the best-guess for\nwhat advanced sensitivity would achieve at the time of their analysis.\nCompared to the most recent projected sensitivity, their curve\nunderestimates the noise at middle frequencies ($\\sim 40$--$150$ Hz)\nand overestimates it at high frequencies ($\\gtrsim 200$ Hz). We adopt\ntheir noise curve for this comparison. Because of these differences,\nCF94 rather seriously overestimates the SNR for NS-NS inspiral. 
Using\ntheir noise curve, the average SNR for the binary analyzed in their Fig.\\\n10 is 12.4\\footnote{CF94 actually report an SNR of 12.8. The\ndiscrepancy is due to rounding the parameter $r_0$ in their Eq.\\\n(4.28). Adjusting to their preferred value (rather than computing\n$r_0$) gives perfect agreement.}; using our up-to-date model for\nadvanced LIGO, it is 5.8. As such, the reader should view the numbers\nin this section of our analysis as useful {\\it only} for validation\npurposes.\n\n\\item The two analyses use different priors. As extensively discussed\nin Sec.\\ {\\ref{sec:mhmc}}, we set uniform priors on the chirp mass\n${\\cal M}_z$, on the time $t_c$ and phase $\\Phi_c$ at coalescence, and\non the polarization phase $\\psi$. For this comparison, we assume\nisotropic emission and set a flat prior on $\\cos\\iota$. We\nassume our sources are uniformly distributed in constant comoving\nvolume. However, our detection threshold depends on the total network\nSNR, and effectively sets a joint prior on source inclination and\ndistance. CF94 use a prior distribution only for the set $\\{ D_L, \\cos\n\\iota, \\psi, \\Phi_c \\}$ that is flat in polarization phase,\ncoalescence phase, and inclination. They assume a prior that is\nuniform in volume, but that cuts off the distribution at a distance\n$D_{L,{\\rm max}} \\simeq 6.5\\,{\\rm Gpc}$.\n\n\\end{itemize}\n\nOur goal here is to reproduce the 1-D marginalized posterior PDF in\n$D_L$ for the binary shown in Fig.\\ 10 of CF94. We call this system\nthe ``CF binary.'' Each NS in the CF binary has $m_z = 1.4\\,M_\\odot$ and\nsky position $(\\theta, \\phi) = (50^{\\circ}, 276^{\\circ})$;\nthe detector network comprises LIGO Hanford, LIGO Livingston and\nVirgo. 
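On a grid, the marginalization of Eq.\ (\ref{eq:margPDF}) amounts to summing the posterior over the nuisance dimensions and renormalizing. A sketch with a hypothetical correlated two-parameter density standing in for $p(D_L, \cos\iota \,|\, {\bf s})$ (the numbers are illustrative only, not our waveform likelihood):

```python
import math

# Toy correlated 2-D posterior on a grid; the strong positive correlation
# mimics the D_L--cos(iota) degeneracy but is otherwise arbitrary.
nd, nc = 200, 200
d_grid = [200.0 + 800.0 * i / (nd - 1) for i in range(nd)]   # Mpc
c_grid = [-1.0 + 2.0 * j / (nc - 1) for j in range(nc)]      # cos(iota)

def post(d, c):
    # unnormalized bivariate Gaussian with correlation rho = 0.9
    u, v = (d - 450.0) / 150.0, (c - 0.3) / 0.3
    rho = 0.9
    return math.exp(-(u * u - 2 * rho * u * v + v * v) / (2 * (1 - rho ** 2)))

dd = d_grid[1] - d_grid[0]
dc = c_grid[1] - c_grid[0]
# marginalize over cos(iota) by direct summation, then normalize so the
# resulting 1-D PDF integrates to one
p_marg = [sum(post(d, c) for c in c_grid) * dc for d in d_grid]
norm = sum(p_marg) * dd
p_marg = [p / norm for p in p_marg]

print(sum(p_marg) * dd)   # ~1.0: a normalized 1-D marginalized PDF in D_L
```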
CF94 report the ``best-fit'' maximum-likelihood values\n($\tilde{D}_L$, $\cos\tilde{\iota}$, $\tilde{\Psi}$) to be ($432\,{\rm\nMpc}$, $0.31$, $101.5^{\circ}$), where $\Psi = \psi + \Delta \psi\n({\bf n})$, and where $\Delta \psi ({\bf n})$ depends on the preferred\nbasis of ${\bf e}^{+}$ and ${\bf e}^{\times}$ set by the detector\nnetwork [see Eqs.\ (4.23)--(4.25) of CF94\footnote{Note that Eq.\\\n(4.25) of CF94 should read $\tan(4\Delta\psi) =\n2\Theta_{+\times}\/(\Theta_{++} - \Theta_{\times\times})$. In addition,\n$\tilde\Psi = 56.5^{\circ}$ should read $\tilde\Psi = 101.5^{\circ}$\nin the caption of Fig.\ 10. (We have changed notation from\n$\bar\psi$ in CF94 to $\Psi$ to avoid multiple accents on the best-fit\nvalue.) We thank \'Eanna Flanagan for confirming these\ncorrections.}]. To compare our distribution with theirs, we assume\nthat $\boldsymbol{\hat\theta} = \boldsymbol{\tilde\theta}_{\rm ML}$\nfor the purpose of computing the likelihood function ${\cal\nL}(\boldsymbol{\theta} | {\bf s})$. This is a reasonable assumption\nwhen the priors are uniform over the relevant parameter space. As\nalready mentioned, for this comparison we use their advanced detector\nnoise curve and the Newtonian-quadrupole waveform. Finally, we\ninterpret the solid curve in Fig.\ 10 of CF94 as the marginalized 1-D\nposterior PDF in $D_L$ for an average of posterior PDFs of parameters\n(given an ensemble of many noisy observations for a particular event).\nWe compute the average PDF as described in Sec.\\\n{\ref{sec:averagedPDF}}, and then marginalize over all parameters\nexcept $D_L$, using Eq.\ (\ref{eq:margPDF}).\n\nThe left-hand panels of Fig.\ \ref{fig:CFbinary} show the resulting\n1-D marginalized PDFs in $D_L$ and $\cos \iota$. 
Its shape has a broad\nstructure not dissimilar to the solid curve shown in Fig.\\ 10 of CF94:\nThe distribution has a small bump near $D_L \\approx 460\\,{\\rm Mpc}$, a\nmain peak at $D_L \\approx 700\\,{\\rm Mpc}$, and extends out to roughly\n1 Gigaparsec. Because of the broad shape, the Bayes mean ($\\tilde\nD_{L,\\rm{BAYES}} = 694\\,{\\rm Mpc}$) is significantly different from\nboth the true value ($\\hat D_L = 432\\,{\\rm Mpc}$ in our calculation)\nand from the maximum likelihood ($\\tilde D_{L,\\rm{ML}} = 495\\,{\\rm\nMpc}$). Thanks to the marginalization, the peak of this curve does\nnot coincide with the maximum likelihood.\n \n\\begin{figure}\n\\centering \n\\includegraphics[width=1.00\\columnwidth]{fig5.eps}\n\\caption{1-D and 2-D marginalized posterior PDFs for $D_L$ and\n$\\cos \\iota$ averaged over noise (as described in Sec.\\\n\\ref{sec:averagedPDF}) for the ``CF binary.'' Our goal is to\nreproduce, as closely as possible, the non-Gaussian limit summarized\nin Fig.\\ 10 of CF94. Top left panel shows the 1-D marginalized\nposterior PDF in $D_L$ (the true value $\\hat D_L = 432\\,{\\rm Mpc}$ is\nmarked with a solid black line); bottom left panel illustrates the 1-D\nmarginalized posterior PDF in $\\cos \\iota$ (true value $\\cos \\hat\n\\iota = 0.31$ likewise marked). The right-hand panel shows the 2-D\nmarginalized posterior PDF for $D_L$ and $\\cos \\iota$; the true values\n($\\hat D_L = 432\\,{\\rm Mpc}, \\cos \\hat \\iota = 0.31$) are marked with a cross. The contours\naround the dark and light areas indicate the 68 and 95\\% interval\nlevels, respectively. The true values lie within the 68\\% interval.\nThe Bayes mean and rms measurement accuracies are (694.4 Mpc, 0.70)\nand (162 Mpc, 0.229) for ($D_L$, $\\cos \\iota$), respectively.}\n\\label{fig:CFbinary}\n\\end{figure}\n\nWe further determine the 2-D marginalized posterior PDFs in $D_L$ and\n$\\cos \\iota$ for the CF binary. 
Figure \\ref{fig:CFbinary} illustrates\ndirectly the very strong degeneracy between these parameters, as\nexpected from the form of Eqs.\\ (\\ref{eq:hplus}) and\n(\\ref{eq:hcross}), as well as from earlier works (e.g.,\n\\citealt{markovic93}, CF94). It's worth noting that, as CF94 comment,\nthis binary is measured particularly poorly. This is largely due to\nthe fact that one polarization is measured far better than the other,\nso that the $D_L$--$\\cos\\iota$ degeneracy is essentially unbroken.\nThis degeneracy is responsible for the characteristic tail to large\n$D_L$ we find in the 1-D marginalized posterior PDF in $D_L$, $p(D_L |\n\\bf{s})$, which we investigate further in the following section.\n\n\\subsection{Test 1: Varying luminosity distance and number of detectors}\n\\label{sec:cf_vary}\n\nWe now examine how well we measure $D_L$ as a function of distance to\nthe CF binary and the properties of the GW detector network. Figures\n\\ref{fig:CFbinaryvaryingDLa} and \\ref{fig:CFbinaryvaryingDLb} show the\n1-D and 2-D marginalized posterior PDFs in $D_L$ and $\\cos \\iota$ for\nthe CF binary at $\\hat{D}_L = \\{100$, $200$, $300$, $400$, $500$,\n$600\\}$ Mpc. For all these cases, we keep the binary's sky position,\ninclination, and polarization angle fixed as in Sec.\\ \\ref{sec:cf}.\nThe average network SNRs we find for these six cases are (going from\n$\\hat D_L = 100\\,{\\rm Mpc}$ to $600\\,{\\rm Mpc}$) 53.6, 26.8, 17.9,\n13.4, 10.7, and 8.9 (scaling as $1\/\\hat D_L$). Interestingly, the\nmarginalized PDFs for both distance and $\\cos\\iota$ shown in Figs.\\\n{\\ref{fig:CFbinaryvaryingDLa}} and {\\ref{fig:CFbinaryvaryingDLb}} have\nfairly Gaussian shapes for $\\hat D_L = 100$ and 200 Mpc, but have very\nnon-Gaussian shapes for $\\hat D_L \\ge 300\\,{\\rm Mpc}$. This can be\nconsidered ``anecdotal'' evidence that the Gaussian approximation for\nthe posterior PDF breaks down at ${\\rm SNR} \\lesssim 25$ or so, at\nleast for this case. 
For lower SNR, the degeneracy between\n$\\cos\\iota$ and $D_L$ becomes so severe that the 1-D errors on these\nparameters become quite large.\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=1.1\\columnwidth]{fig6.eps}\n\\caption{1-D and 2-D marginalized PDFs for $D_L$ and $\\cos \\iota$,\naveraged (as described in Sec.\\ \\ref{sec:averagedPDF}) over noise\nensembles for the ``CF binary'' at different values of true luminosity\ndistance $\\hat{D}_L$: [100 Mpc, 200 Mpc, 300 Mpc] (top to bottom).\nTrue parameter values are marked with a solid black line or a black\ncross. The Bayes means and rms errors on luminosity distance are\n[101.0 Mpc, 212.1 Mpc, 411.2 Mpc] and [3.6 Mpc, 21.4 Mpc, 110.0 Mpc],\nrespectively. The corresponding means and errors for $\\cos \\iota$ are\n[0.317, 0.357, 0.562] and [0.033, 0.089, 0.247]. The dark and light\ncontours in the 2-D marginalized PDF plots indicate the 68 and 95\\%\ninterval levels, respectively. The true value always lies within the\n68\\% contour region of the 2-D marginalized area at these distances.}\n\\label{fig:CFbinaryvaryingDLa}\n\\end{figure}\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=1.1\\columnwidth]{fig7.eps}\n\\caption{Same as Fig.\\ {\\ref{fig:CFbinaryvaryingDLa}}, but for true\nluminosity distance $\\hat{D}_L =$ [400 Mpc, 500 Mpc, 600 Mpc] (top to\nbottom). True parameter values are marked with a solid black line or a black\ncross. In this case, the Bayes means and rms errors for luminosity\ndistance are [627.17 Mpc, 857.3 Mpc, 1068 Mpc] and [148.8 Mpc, 198.1\nMpc, 262.2 Mpc], respectively. The means and errors for $\\cos \\iota$\nare [0.686, 0.745, 0.746] and [0.237, 0.209, 0.218]. The dark and\nlight contours in the 2-D marginalized PDF plots indicate the 68 and\n95\\% interval levels, respectively. 
The true value lies within the\n68\\% contour region for $D_L = 400$ Mpc, but moves outside this region\nfor larger values.}\n\\label{fig:CFbinaryvaryingDLb}\n\\end{figure}\n\nNext, we examine measurement accuracy versus detector network. For\nthe CF binary, adding detectors does not substantially increase the\ntotal SNR. We increase the average total SNR from 12.4 to 14.6\n(adding only AIGO), to 12.4 (adding only LCGT; its contribution is so\nsmall that the change is insignificant to the stated precision), or to\n14.7 (adding both AIGO and LCGT). The average SNR in our detectors is\n8.23 for LIGO-Hanford, 8.84 for LIGO-Livingston, 2.91 for Virgo, 8.71\nfor AIGO, and 1.1 for LCGT. This pathology is an example of a fairly\ngeneral trend that we see; it is common for the SNR to be quite low in\none or more detectors.\n\nIn the case of the CF binary, we find that adding detectors does not improve the\nmeasurement enough to break the $D_L$--$\\cos\\iota$ degeneracy. The\nmarginalized PDFs as functions of $D_L$ and $\\cos\\iota$ remain very\nsimilar to Fig.\\ {\\ref{fig:CFbinary}}, so we do not show them. As a\nconsequence, even with additional detectors, the distance errors\nremain large and biased. The bias is because we tend to find\n$\\cos\\iota$ to be larger than the true (relatively edge-on) value (cf.\\ lower left-hand\npanel of Fig.\\ {\\ref{fig:CFbinary}}). Thanks to the\n$D_L$--$\\cos\\iota$ degeneracy, we likewise overestimate distance.\n\n\n\\subsection{Test 2: Varying source inclination}\n\\label{sec:faceon_cf}\n\nOne of the primary results from the CF binary analysis is a strong\ndegeneracy between $\\cos \\iota$ and $D_L$. As Fig.\\\n\\ref{fig:CFbinary} shows, this results in a tail to large distance in\nthe 1-D marginalized posterior PDF $p(D_L | \\bf{s})$, with a Bayes\nmean $\\tilde D_L = 694\\,{\\rm Mpc}$ (compared to $\\hat D_L = 432\\,{\\rm\nMpc}$). 
Such a bias is of great concern for using binary sources as\nstandard sirens.\n\nThe CF binary has $\\cos\\hat\\iota = 0.31$, meaning that it is nearly\nedge-on to the line of sight. Hypothesizing that the large tails may\nbe due to its nearly edge-on nature, we consider a complementary\nbinary that is nearly face on: We fix all of the parameters to those\nused for the CF binary, except for the inclination, which we take to\nbe $\\cos\\hat\\iota = 0.98$. We call this test case the ``face-on'' CF\nbinary. Changing to a more nearly face-on situation substantially\naugments the measured SNR; the average SNR for the face-on CF binary\nmeasured by the LIGO\/Virgo base network is 24.3 (versus 12.4 for the\nCF binary). We thus expect some improvement simply owing to the\nstronger signal.\n\nFigure \\ref{fig:SDbinary} shows the 1-D and 2-D marginalized posterior\nPDFs in $D_L$ and $\\cos \\iota$. As expected, these distributions are\ncomplementary to those we found for the CF binary. In particular, the\npeak of the 1-D marginalized posterior PDF in $D_L$ is shifted to\nlower values in $D_L$, and the Bayes mean is much closer to the true\nvalue: $\\tilde D_L = 376.3\\,{\\rm Mpc}$. The shape of the 1-D\nmarginalized posterior PDF in $\\cos \\iota$ is abruptly cut off by the\nupper bound of the physical prior $\\cos\\iota \\le 1$, and the tail extends to\nlower distances (the opposite of the CF binary). The Bayes mean\nfor the inclination is $\\cos\\tilde\\iota = 0.83$.\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=1.1\\columnwidth]{fig8.eps}\n\\caption{Same as Fig.\\ {\\ref{fig:CFbinary}}, but for the ``face-on''\nCF binary. The Bayes mean and rms errors are (376.3 Mpc, 0.83) and\n(51.3 Mpc, 0.12) for ($D_L$, $\\cos \\iota$), respectively. 
Top left\nshows the 1-D marginalized posterior PDF in $D_L$ ($\\hat D_L =\n432\\,{\\rm Mpc}$ is marked with a solid black line); bottom left shows\nthe marginalized PDF in $\\cos \\iota$ (solid black line marks $\\cos\n\\hat \\iota = 0.98$). The right panel shows the 2-D marginalized\nposterior PDF; the cross marks the true source parameters ($\\hat D_L =\n432\\,{\\rm Mpc}$, $\\cos \\hat\\iota = 0.98$). As\nwith the CF binary, the true values lie within the 68\\% region.}\n\\label{fig:SDbinary}\n\\end{figure}\n\nJust as we varied distance and detector network for the CF binary, we\nalso do so for the face-on CF binary, with very similar results. In\nparticular, varying network has little impact on the marginalized 1-D\nPDFs in $D_L$ and $\\cos\\iota$. Varying distance, we find that the\nmarginalized 1-D PDFs are nearly Gaussian in shape for small\ndistances, but become significantly skewed (similar to the left-hand\npanels of Fig.\\ {\\ref{fig:SDbinary}}) when $\\hat D_L > 200$ Mpc. The\ndistributions in $\\cos\\iota$ are particularly skewed thanks to the\nhard cut-off at $\\cos\\iota = 1$. Interestingly, in this case we tend\nto infer a value of $\\cos\\iota$ that is smaller than the true value.\nWe likewise find a Bayes mean $\\tilde D_L$ that is smaller than $\\hat\nD_L$.\n\n\\subsection{Summary of validation tests}\n\\label{sec:testing_discuss}\n\nThe main result from our testing is that the posterior PDFs we find\nhave rather long tails, with strong correlations between $\\cos\\iota$\nand $D_L$. Except for cases with very high SNR, the 1-D marginalized\nposterior PDF in $\\cos\\iota$ tends to be rather broad. The Bayes mean\nfor $\\cos\\iota$ thus typically suggests that a binary is at\nintermediate inclination. As such, we tend to {\\it underestimate}\n$\\cos\\iota$ for nearly face-on binaries, and to {\\it overestimate} it\nfor nearly edge-on binaries. 
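Part of this effect is simple geometry of the bounded parameter: a broad posterior truncated at the physical boundary $|\cos\iota| \le 1$ has its mean pulled away from the boundary. A crude Gaussian-likelihood toy (the widths are illustrative, and the full effect also involves the likelihood's dependence on both polarizations):

```python
import math

def bayes_mean_cos_iota(c_true, width, n=20001):
    """Posterior mean of cos(iota) for a Gaussian toy likelihood of the
    given width, truncated to the physical prior range [-1, 1]."""
    num = den = 0.0
    for k in range(n):
        c = -1.0 + 2.0 * k / (n - 1)
        w = math.exp(-(c - c_true) ** 2 / (2.0 * width ** 2))
        num += c * w
        den += w
    return num / den

# Broad posterior (low SNR): the hard cut at cos(iota) = 1 pulls the Bayes
# mean for a nearly face-on binary down toward intermediate inclinations...
m_broad = bayes_mean_cos_iota(0.98, 0.4)
print(m_broad)             # well below the true 0.98
# ...while a narrow posterior (high SNR) shows almost no such bias.
m_narrow = bayes_mean_cos_iota(0.98, 0.02)
print(m_narrow)
```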
Overcoming this limitation requires us\neither to break the $D_L$--$\cos\iota$ degeneracy (such as by setting\na prior on binary inclination) or to measure a population of\ncoalescences. Measuring a population will make it possible to sample\na wide range of the $\cos\iota$ distribution, so that the\nevent-by-event bias is averaged away in the sample.\n\n\section{Results II: Survey of standard sirens}\n\label{sec:main_results}\n\nWe now examine how well various detector networks can measure an\nensemble of canonical GW-SHB events. We randomly choose events from\nour sample of {\it detected}\\\/ NS-NS and NS-BH binaries (where the\nselection is detailed in Sec.\ \ref{sec:selectionandpriors}). We set\na total detector network threshold of 7.5. Crudely speaking, one\nmight imagine that this implies, on average, a threshold per detector\nof $7.5\/\sqrt{5}=3.4$ for a five-detector network. Such a crude ``per\ndetector threshold'' is useful for getting a rough idea of the range\nto which our network can measure events. Averaging Eq.\\\n(\ref{eq:snr_ave}) over all sky positions and orientations yields\n(DHHJ06)\n\begin{eqnarray}\n\label{eq:snr_ave_sky_orien}\n\left({S\over N}\right)_{a,\ {\rm sky-ave}} &=& \frac{8}{5}\n\sqrt{\frac{5}{96}} \frac{c}{D_L} \frac{1}{\pi^{2\/3}} \left(\frac{G\n{\cal M}_z}{c^3}\right)^{5\/6}\times\n\nonumber\\\n& &\qquad \int_{f_{\rm low}}^{f_{\rm ISCO}}\n\frac{f^{-7\/3}}{S_h(f)} df\;.\n\end{eqnarray}\nFor a total detector network threshold of 7.5, a five-detector network\nhas an average range of about $600\,{\rm Mpc}$ for NS-NS events, and\nabout $1200\,{\rm Mpc}$ for NS-BH events. If SHBs are associated with\nface-on binary inspiral, these numbers are increased by a factor\n$\sqrt{5\/2} \simeq 1.58$. (This factor is incorrectly stated to be\n$\sqrt{5\/4} \simeq 1.12$ in DHHJ06.)\n\nLet us assume a constant comoving rate of 10 SHBs Gpc$^{-3}$ yr$^{-1}$\n\citep{nakar06}. 
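The quadrature combination behind this crude per-detector threshold, and the face-on range factor $\sqrt{5/2}$, can be checked directly:

```python
import math

def network_snr(snrs):
    """Network SNR as the quadrature sum of single-detector SNRs."""
    return math.sqrt(sum(r * r for r in snrs))

# a network threshold of 7.5 spread evenly over five comparable detectors
per_detector = 7.5 / math.sqrt(5.0)
print(round(per_detector, 1))            # the 3.4 quoted in the text
print(network_snr([per_detector] * 5))   # recovers the 7.5 network threshold

# face-on binaries are louder by sqrt(5/2) ~ 1.58; since SNR scales as
# 1/D_L, the same factor multiplies the average range
print(round(math.sqrt(5.0 / 2.0), 2))
```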
If these events are all NS-NS binary mergers, and\nthey are isotropically oriented, we expect the full\nLIGO-Virgo-AIGO-LCGT network to measure 6 GW-SHB events per year. If\nthese events are instead all NS-BH binaries, the full network is\nexpected to measure 44 events per year. If these events are beamed,\nthe factor $1.58$ increases the expected rate to 9 NS-NS or 70 NS-BH\nGW-SHB events per year. We stress that these numbers should be taken\nas rough indicators of what the network may be able to measure.\nNot all SHBs will be associated with binary inspiral. Those events\nthat are will likely include both NS-NS and NS-BH systems, with\nparameters differing from our canonical choices. We also do not\naccount for the fraction of SHBs which will be missed due to\nincomplete sky coverage.\n\nIn all cases we build our results by constructing the posterior\ndistribution for an event given a unique noise realization at each\ndetector. We keep the noise realization in a given detector and for a\nspecific binary constant as we add other detectors. This allows us\nto make meaningful comparisons between the performance of different\ndetector networks.\n\n\subsection{NS-NS binaries}\n\nWe begin by imagining a population of six hundred detected NS-NS\nbinaries, either isotropically distributed in inclination angle or\nfrom our beamed subsample, using a network with all five\ndetectors. Figure \ref{fig:DeltaDLNSNS} shows scatter plots of the\ndistance measurement accuracies for our unbeamed (blue crosses) and\nbeamed events (black dots), with each panel corresponding to a\ndifferent detector network. The distance measurement error is defined\nas the ratio of the rms measurement error to the true\nvalue\footnote{Our definition differs from that given in CF94, their\nEq.\ (4.62). Their distance measurement error is described as the\nratio of the rms measurement error to the Bayes mean. 
We prefer to\nuse Eq.\ (\ref{eq:measaccdefn}) as we are interested primarily in the\nmeasurement error given a binary at its true luminosity distance.}\n$\hat{D}_L$:\n\begin{equation}\n\frac{\Delta D_L}{\hat{D}_L} = \frac{\sqrt{\Sigma^{D_L\nD_L}}}{\hat{D}_{L}}\;.\n\label{eq:measaccdefn}\n\end{equation}\n$\Sigma^{D_L D_L}$ is computed using Eq.\ (\ref{eq:approxsigma_bayes}). We\nemphasize some general trends in Fig.\ \ref{fig:DeltaDLNSNS} which\nare particularly relevant to standard sirens:\n\n\begin{itemize}\n\n\item {\it The unbeamed total sample and the beamed subsample separate\ninto two distinct distributions.} As anticipated, the beamed\nsubsample improves measurement errors in $D_L$ significantly, by\na factor of two or more. This is due to the beaming\nprior, which constrains the\ninclination angle, $\cos \iota$, to $\sim 3\%$, thereby breaking the\nstrong $D_L$--$\cos \iota$ degeneracy. By contrast, when no beaming\nprior is assumed, we find absolute errors of $0.1$--$0.3$ in\n$\cos\iota$ for the majority of events. The strong $D_L$--$\cos\n\iota$ degeneracy then increases the distance errors. A significant\nfraction of binaries randomly selected from our sample have $0.5\n\lesssim |\cos\hat\iota| < 1$. 
As discussed in Sec.\\\n{\\ref{sec:selectionandpriors}}, this is due to the SNR selection\ncriterion: At fixed distance, face-on binaries are louder and tend to\nbe preferred.\n\n\\item {\\it Beamed subsample scalings.} We fit linear scalings\nto our beamed subsample:\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(2.15\\,\\rm{Gpc})$ for LIGO + Virgo\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(2.71\\,\\rm{Gpc})$ for LIGO + Virgo + AIGO\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(2.38\\,\\rm{Gpc})$ for LIGO + Virgo + LCGT\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(2.82\\,\\rm{Gpc})$ for LIGO + Virgo + AIGO + LCGT\n\n\\item {\\it When isotropic emission is assumed, we find a large scatter\nin distance measurement errors for all events, irrespective of network\nand true distance.} We find much less scatter when we assume a\nbeaming prior. This is illustrated very clearly by the upper-right\npanel of Fig.\\ {\\ref{fig:DeltaDLNSNS}}. In that panel, we show the\nscatter of distance measurement error versus true distance for the\nLIGO, Virgo, AIGO detector network, comparing to the\nFisher-matrix-derived linear scaling trend found in DHHJ06. For the\nunbeamed case, our current results scatter around the linear trend;\nfor the beamed case, most events lie fairly close to the trend. This\ndemonstrates starkly the failure of Fisher methods to estimate\ndistance accuracy, especially when we cannot set a beaming prior.\n\n\\item {\\it Adding detectors to the network considerably increases the\nnumber of detected binaries, but does not significantly improve the\naccuracy with which those binaries are measured.} The increase we see\nin the number of detected binaries is particularly significant for\nGW-SHB standard sirens. For instance, an important application is\nmapping out the posterior PDF for the Hubble constant, $H_0$. As the\nnumber of events increases, the resulting joint posterior PDF in $H_0$\nwill become increasingly well constrained. 
Additional detectors also\nincrease the distance to which binaries can be detected. This can be\nseen in Fig.\ \ref{fig:DeltaDLNSNS}: for the LIGO and Virgo network,\nour detected events extend to $\hat D_L \sim 600\,{\rm Mpc}$; the\nlarger networks all go somewhat beyond this. Interestingly, networks\nwhich include the AIGO detector seem to reach somewhat farther out.\n\n\end{itemize}\n\nIt is perhaps disappointing that increasing the number of detectors\ndoes not greatly improve measurement accuracy. We believe this is due\nto two effects. First, a larger network tends to detect more weak\nsignals. These additional binaries are poorly constrained. Second,\nthe principal limitation to distance measurement is the\n$D_L$--$\cos\iota$ degeneracy. A substantial improvement in distance\naccuracy on individual events would require breaking this degeneracy.\nWe find that adding detectors does not do this, but the beaming prior\ndoes.\n\n\begin{figure}\n\centering \n\includegraphics[width=1.0\columnwidth]{fig9.eps}\n\caption{Distance measurement errors versus true luminosity distance\nfor our sample of NS-NS binaries. Colored crosses assume isotropic\nemission; black points assume our beaming prior. The dashed lines\nshow the linear best-fit to the beamed sample (see text for\nexpressions). In the LIGO+Virgo+AIGO panel we also show the\nFisher-matrix-derived linear scaling given in DHHJ06: $\Delta\nD_L\/\hat{D}_L \simeq \hat{D}_L\/(4.4\,\rm{Gpc})$ assuming beaming (solid), and\n$\Delta D_L\/\hat{D}_L \simeq \hat{D}_L\/(1.7\,\rm{Gpc})$ for isotropic\nemission (dotted).}\n\label{fig:DeltaDLNSNS}\n\end{figure}\n\n\subsection{NS-BH binaries}\n\nWe now repeat the preceding analysis for six hundred detected NS-BH\nbinaries. Figure~\ref{fig:DeltaDLNSBH} shows scatter plots of\nmeasurement accuracies for unbeamed and beamed NS-BH binaries. 
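Because these fits have the form $\Delta D_L/\hat{D}_L \simeq \hat{D}_L/a$, the absolute distance error grows quadratically with distance, $\Delta D_L \simeq \hat{D}_L^2/a$. Evaluating the NS-NS beamed-subsample fits quoted above at a fiducial $\hat{D}_L = 600$ Mpc:

```python
# NS-NS beamed-subsample fit coefficients a (in Gpc) from the text,
# one per detector network: Delta D_L / D_L ~ D_L / a
fits_gpc = {
    "LIGO+Virgo":           2.15,
    "LIGO+Virgo+AIGO":      2.71,
    "LIGO+Virgo+LCGT":      2.38,
    "LIGO+Virgo+AIGO+LCGT": 2.82,
}

def frac_error(d_gpc, a_gpc):
    """Fractional distance error implied by the linear fit."""
    return d_gpc / a_gpc

for network, a in fits_gpc.items():
    # ~20-30% fractional errors at the edge of the NS-NS range (600 Mpc)
    print(network, round(frac_error(0.6, a), 3))
```

The spread of the coefficients shows the modest gain from enlarging the network: even the full network only reduces the fractional error at 600 Mpc from roughly 28\% to 21\%.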
We\nfind similar trends to the NS-NS case:\n\n\\begin{itemize}\n\n\\item {\\it The unbeamed and beamed samples separate into two distinct\ndistributions.} Notice, however, that outliers exist in measurement\nerrors at high $D_L$ for several beamed events for all networks. This\nis not too surprising, given that we expect beamed sources at higher\nluminosity distances and lower SNR. Such events are more likely to\ndeviate from the linear relationship predicted by the Fisher matrix.\n\n\\item {\\it We see substantial scatter in distance measurement,\nparticularly when isotropic emission is assumed}. As with the NS-NS case, the\nscatter is not as severe when we assume beaming, and in that case lies\nfairly close to a linear trend, as would be predicted by a Fisher\nmatrix. This trend is shallower in slope than for NS-NS binaries,\nthanks to the larger mass of the system.\n\n\\item {\\it We do not see substantial improvement in distance\nmeasurement as we increase the detector network.} As with NS-NS binaries,\nadding detectors increases the range of the network; AIGO appears\nto particularly add events at large $\\hat D_L$ (for both the isotropic\nand beamed samples). However, adding detectors does not break the\nfundamental $D_L$--$\\cos\\iota$ degeneracy, and doesn't improve errors. 
From our\nfull posterior PDFs, we find absolute errors of $0.1$--$0.3$ in $\\cos\\iota$,\nwhich is very similar to the NS-NS case.\n\n\\item {\\it Beamed subsample scalings.} The linear scalings\nfor our beamed subsample are:\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(4.83\\,\\rm{Gpc})$ for LIGO + Virgo\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(6.14\\,\\rm{Gpc})$ for LIGO + Virgo + AIGO\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(5.20\\,\\rm{Gpc})$ for LIGO + Virgo + LCGT\\\\\n$\\Delta D_L\/\\hat{D}_L \\simeq \\hat{D}_L\/(6.76\\,\\rm{Gpc})$ for LIGO + Virgo + AIGO + LCGT\n\n\\end{itemize}\n\n\\begin{figure}\n\\centering \n\\includegraphics[width=1.0\\columnwidth]{fig10.eps}\n\\caption{Distance measurement errors versus true luminosity distance\nfor our sample of NS-BH binaries. Colored crosses assume isotropic\nemission; black points use our beaming prior. The lower right-hand\npanel shows the sample detected by our ``full'' network\n(LIGO+Virgo+AIGO+LCGT). Upper left is LIGO+Virgo; upper right is\nLIGO+Virgo+AIGO; and lower left is LIGO+Virgo+LCGT. The dashed lines\nshow the linear best-fit to the beamed sample (see text for\nexpressions).}\n\\label{fig:DeltaDLNSBH}\n\\end{figure}\n\n\\section{Summary discussion}\n\\label{sec:summary}\n\nIn this analysis we have studied how well GWs can be used to measure\nluminosity distance, under the assumption that binary inspiral is\nassociated with (at least some) short-hard gamma ray bursts. We examine two\nplausible compact binary SHB progenitors, and a variety of plausible\ndetector networks. We emphasize that {\\it we assume sky position is\nknown}. We build on the previous study of DHHJ06, which used the\nso-called Gaussian approximation of the posterior PDF. This\napproximation works well for large SNR, but the limits of its validity are\npoorly understood. 
In particular, since the SNR of events measured by ground-based\ndetectors is likely to be of order 10, the Gaussian limit may be inapplicable. \nWe examine the posterior PDF for the parameters of observed\nevents using Markov chain Monte Carlo (MCMC) techniques, which do not rely on\nthis approximation. We also introduce a well-defined noise-averaged\nposterior PDF that does not depend on a particular noise\ninstance. Such a quantity is useful to predict how well a detector\nshould be able to measure the properties of a source.\n\nWe find that the Gaussian approximation substantially underestimates\ndistance measurement errors. We also find that the main limitation\nfor individual standard siren measurements is the strong degeneracy\nbetween distance to the binary and the binary's inclination to the line of\nsight; a similar discussion of this issue is given in a recent analysis\nby \\cite{ajithbose09}. Adding detectors to a network only slightly\nimproves distance measurement for a given single event. When we\nassume that the SHB is isotropic (so that we cannot infer anything\nabout the source's inclination from the burst), we find that Fisher\nmatrix estimates of distance errors are very inaccurate. Our\ndistributions show large scatter about the Fisher-based predictions.\n\nThe situation improves dramatically if we assume that SHBs are\ncollimated, thereby giving us a prior on the orientation of the\nprogenitor binary. By assuming that SHBs are preferentially emitted\ninto an opening angle of roughly $25^\\circ$, we find that the\ndistance--inclination correlation is substantially broken. The Fisher\nmatrix estimates are then much more reasonable, giving a good sense of\nthe trend with which distances are determined (albeit with a moderate\nscatter about that trend). 
This illustrates the importance of\nincorporating prior knowledge, at least for individual measurements.\n\nOur distance measurement results are summarized by\nFig.\\ {\\ref{fig:DeltaDLNSNS}} (for NS-NS SHB progenitors) and\nFig.\\ {\\ref{fig:DeltaDLNSBH}} (for NS-BH). Assuming isotropy, we find\nthe distance to NS-NS binaries is measured with a fractional error of\nroughly $20$--$60$\\%, with most events in our distribution clustered\nnear $20$--$30$\\%. Beaming improves this by roughly a factor of two,\nand eliminates much of the high error tail from our sample. NS-BH\nevents are measured somewhat more accurately: the distribution of\nfractional distance errors runs from roughly $15$--$50$\\%, with most\nevents clustered near $15$--$25$\\%. Beaming again gives roughly a\nfactor of two improvement, eliminating most of the high error tail.\n\nIt is worth emphasizing that these results describe the outcome of\n{\\it individual siren measurements}. When these measurements are used\nas cosmological probes, we will be interested in constructing the\njoint distribution, following observation of $N$ GW-SHB events.\nIndeed, preliminary studies show that our ability to constrain $H_0$\nimproves dramatically as the number of measured binaries is\nincreased. In our most pessimistic scenario (the SHB is assumed to be\na NS-NS binary, with no prior on inclination, and measured by the\nbaseline LIGO-Virgo network), we find that $H_0$ can be measured with\n$\\sim 13\\%$ fractional error with $N = 4$, improving to $\\sim 5\\%$ for\n$N = 15$. This is because multiple measurements allow us to sample\nthe inclination distribution, and thus average out the bias\nintroduced by the tendency to overestimate distance for edge-on binaries,\nand underestimate it for face-on binaries. Details of this analysis\nwill be presented in a follow-up paper.\n\nIncreasing the number of measured events will thus be crucial for\nmaking cosmologically interesting measurements. 
To this end, it is\nimportant to note that increasing the number of detectors in our\nnetwork enables a considerable increase in the number of detected\nbinaries. This is due to increases in both the sky coverage and the\ntotal detection volume. Going from a network which includes all four\ndetectors (LIGO, Virgo, AIGO, and LCGT) to our baseline network of\njust LIGO and Virgo entails a $\\sim$~50\\% reduction in the number of\ndetected binaries. Eliminating just one of the proposed detectors\n(AIGO or LCGT) leaves us with $\\sim$~75\\% of the original detected\nsample.\n\nAside from exploring the cosmological consequences, several other\nissues merit careful future analysis. One general result is\nthe strong influence that priors have on the posterior PDF. We plan to\nexamine this in some detail, identifying the parameters which particularly\ninfluence the final result, and which uncertainties can be ascribed to\nan inability to set relevant priors. Another issue is the importance of\nsystematic errors in these models. We have used the second-post-Newtonian\ndescription of a binary's GWs in this analysis, and have ignored all\nbut the leading quadrupole harmonic of the waves (the ``restricted''\npost-Newtonian waveform). Our suspicion is that a more complete\npost-Newtonian description of the phase would have little impact on\nour results, since such effects won't impact the $D_L$--$\\cos\\iota$\ndegeneracy. In principle, including additional (non-quadrupole)\nharmonics could have an impact, since these other\nharmonics encode different information about the inclination angle\n$\\iota$. 
In practice, we expect that they won't have much effect on\nGW-SHB measurements, since these harmonics are measured with very low\nSNR (the next strongest harmonic is roughly a factor of 10 smaller in\namplitude than the quadrupole).\n\nAs discussed previously, we confine our analysis to the inspiral.\nInspiral waves are terminated at the innermost stable circular orbit\nfrequency, $f_{\\rm ISCO}=1\/(6^{3\/2} \\pi M_z)$, where $M_z$ is the\nredshifted total mass in units with $G = c = 1$. For NS-NS binaries,\n$f_{\\rm ISCO} \\simeq 1600\\,{\\rm Hz}$. At this frequency, detectors\nhave fairly poor sensitivity, so we are confident that terminating the\nwaves has little impact on our NS-NS results. However, for our\nassumed NS-BH binaries, $f_{\\rm ISCO} \\simeq 400\\,{\\rm Hz}$.\nDetectors have good sensitivity in this band, so it may be quite\nimportant to improve our model for the waves' termination in this\ncase.\n\nPerhaps the most important follow-up would be to include the impact of\nspin. Although the impact of neutron star spin is likely to be small,\nit may not be negligible; and, for NS-BH systems, the impact of the\nblack hole's spin is likely to be significant. Spin induces\nprecession which makes the orbit's orientation, $\\hat{\\bf L}$,\ndynamical. That makes the observed inclination dynamical as well,\nwhich can break the $D_L$--$\\cos\\iota$ degeneracy.\nVan der Sluys et al.\\ (2008) have already shown that spin precession\nphysics can improve the ability of ground-based detectors to determine\na source's position on the sky. 
We are confident that a similar\nanalysis which assumes known sky position will find that measurements\nof source distance and inclination can likewise be improved.\n\n\\acknowledgments\n\nIt is a pleasure to acknowledge useful discussions with K.\\ G.\\ Arun,\nYoicho Aso, Duncan Brown, Curt Cutler, Jean-Michel D{\\'e}sert,\nAlexander Dietz, L.\\ Samuel Finn, Derek Fox, \\'Eanna Flanagan, Zhiqi\nHuang, Ryan Lang, Antony Lewis, Ilya Mandel, Nergis Mavalvala,\nSzabolcs M\\'arka, Phil Marshall, Cole Miller, Peng Oh, Ed Porter,\nAlexander Shirokov, David Shoemaker, and Pascal Vaudrevange. We are\ngrateful to Neil Cornish in particular for early guidance on the\ndevelopment of our MCMC code, to Michele Vallisneri for careful\nreading of the manuscript, and to Phil Marshall for his detailed\ncomments on the ensemble averaged likelihood function. We also are\ngrateful for the hospitality of the Kavli Institute for Theoretical\nPhysics at UC Santa Barbara, and to the Aspen Center for Physics,\nwhere portions of the work described here were formulated.\nComputations were performed using the Sunnyvale computing cluster at\nthe Canadian Institute for Theoretical Astrophysics, which is funded\nby the Canadian Foundation for Innovation. SAH is supported by NSF\nGrant PHY-0449884, and the MIT Class of 1956 Career Development Fund.\nHe gratefully acknowledges the support of the Adam J.\\ Burgasser Chair\nin Astrophysics.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nWithin the thousands of brown dwarfs discovered in wide-field surveys over the last 20 years lies a population of young, free-floating, brown dwarfs \\citep[e.g.][]{2006ApJ...639.1120K,2008AJ....136.1290R}. 
These brown dwarfs show redder near- and mid-infrared colors than their older field counterparts and display peculiar spectral features indicative of youth \\citep[e.g.][]{2008ApJ...689.1295K, 2009AJ....137.3345C, 2013ApJ...772...79A, 2013AJ....145....2F}. Many of them have their youth confirmed by being kinematically linked to local young moving groups \\citep{2013ApJ...777L..20L,2014ApJ...783..121G,2016ApJS..225...10F} which helps constrain their ages, temperatures, and masses. Around the same time as the discovery of young field brown dwarfs, the first directly-imaged exoplanets (2M 1207 b, HR 8799 bcd, and $\\beta$ Pic b) were discovered around young (10\u2013100 Myr) stars \\citep{2005A&A...438L..25C, 2008Sci...322.1348M,2009A&A...493L..21L}. Brown dwarfs with similar absolute magnitudes exhibit prominent near-infrared (hereafter near-IR) methane absorption bands but these exoplanets do not display the blue near-IR colors indicative of cloud-free, methane-rich atmospheres.\n\nObservations of free-floating young brown dwarfs and directly-imaged gas giant planets have revealed a number of important similarities between the two populations including very red near-IR colors compared to the older ($\\sim$Gyr) field brown dwarfs, a triangular-shaped H-band continuum, lower effective temperatures than field brown dwarfs of the same spectral type, and peculiar mid-infrared colors relative to field brown dwarfs \\citep{2017AJ....153..182C,2013ApJ...772...79A, 2008Sci...322.1348M, 2013ApJ...777L..20L,2014ApJ...786...32M}. Given the relative ease of observing free-floating young brown dwarfs, they make ideal analogs with which to study the atmospheric properties of directly-imaged planets.\n\nThe 3 to 4 $\\mu$m wavelength range is a particularly important spectral region because it is very sensitive to the various physical processes that control the emergent spectrum. 
In particular, it contains the $\\nu_3$ fundamental band of methane which is very sensitive to both disequilibrium chemistry due to vertical mixing within the atmosphere and variations in the cloud properties \\citep{2003IAUS..211..345S,2012ApJ...753...14S}. However, observing these wavelengths from the ground is difficult because of the strong telluric absorption and high thermal background and so there are relatively few brown dwarf spectra, and in particular young brown dwarf spectra, at these wavelengths. \n\nIn this paper, we present $L$-band spectra of a sample of eight young brown dwarfs. We discuss the sample and observations in Section \\ref{sec:obs}, and go into detail about the reduction process in Section \\ref{sec:data}. In Section \\ref{sec:analysis} we analyze the effects of spectral type on the $L$-band spectra, fit our sample to various model suites, and build a sequence of objects from the same young moving group.\n\n\n\n\\section{Sample and Observations} \\label{sec:obs}\n\n\\begin{table*}\n\\caption{Our Sample of Young L Dwarfs (All parallaxes are from \\citet{2016ApJ...833...96L})} \\label{tbl:samp}\n\\begin{tabular}{cllcccr}\n\\toprule[1.5pt]\n Name &\n SpT &\n SpT &\n YMG &\n Mass Range &\n Refs &\n Parallax \\\\\n &\n Opt &\n NIR &\n Membership &\n $M_\\mathrm{Jup}$ &\n &\n mas\\\\\n\\toprule[1.5pt]\n2MASS J00452143+1634446\t&\tL2 $\\beta$\t & L2 \\textsc{vl-g} &\tArgus &\t20--29 &\tC09, AL13, L16, F16\t&\t65.9\t$\\pm$\t1.3\t\\\\\nWISEP J004701.06+680352.1\t&\tL7~pec\t & L7 \\textsc{int-g} &\tAB~Dor& 9--15\t&\tG15, F16\t&\t82.3\t$\\pm$\t1.8\t\\\\\n2MASS J010332.03+1935361\t&\tL6 $\\beta$\t & L6 \\textsc{int-g} &\tArgus?&\t5--21&\tF12, AL13, F16\t&\t46.9\t$\\pm$\t7.6\t\\\\\n2MASS J03552337+1133437\t&\tL5 $\\gamma$\t & L3 \\textsc{vl-g} &\tAB Dor\t& 15--27 &\tC09, F13, AL13, F16\t&\t109.5\t$\\pm$\t1.4\t\\\\\n2MASS J05012406$-$0010452\t&\tL4 $\\gamma$\t & L3 \\textsc{vl-g} &\t\\nodata & 9--35\t&\tC09, AL13, 
F16\t&\t48.4\t$\\pm$\t1.4\t\\\\\nG~196$-$3B\t&\tL3 $\\beta$\t & L3 \\textsc{vl-g} &\t\\nodata\t& 22--53 &\tC09, AL13, F16\t&\t49.0\t$\\pm$\t2.3\t\\\\\nPSO J318.5338$-$22.8603\t&\t\\nodata\t & L7 \\textsc{vl-g} &\t$\\beta$~Pic\t& 5--7 &\tL13, A16, F16\t&\t45.1\t$\\pm$\t1.7\t\\\\\n2MASS J22443167+2043433\t&\tL6.5~pec\t & L6 \\textsc{vl-g} &\tAB~Dor\t& 9--12&\tK08, A16, V18\t&\t58.7\t$\\pm$\t1.0\t\\\\\n\\end{tabular}\n\\tablerefs{(A16)~\\citet{2016ApJ...819..133A}, (AL13)~\\citet{2013ApJ...772...79A},\n(C09)~\\citet{2009AJ....137.3345C}, \n(F12)~\\citet{2012ApJ...752...56F}, (F13)~\\citet{2013AJ....145....2F},\n(F16)~\\citet{2016ApJS..225...10F},\n(G15)~\\citet{2015ApJ...799..203G},\n(K08)~\\citet{2008ApJ...689.1295K},\n(L16)~\\citet{2016ApJ...833...96L}, (L13)~\\citet{2013ApJ...777L..20L},\n(V18)~\\citet{2018MNRAS.474.1041V}\n}\n\\end{table*}\n\nOur sample consists of eight young L dwarfs with spectral types ranging from L2 to L7. Table \\ref{tbl:samp} gives the full identifications (hereafter we abbreviate the full designations of 2MASS and WISE sources as HHMM+DD), spectral types (red optical and near-IR), young moving group memberships where applicable, estimated masses, and parallaxes. The near-IR spectra of all our objects exhibit features of low gravity, including weaker alkali and FeH features, triangular-shaped $H$-band spectra, and redder near-IR colors than field brown dwarfs of the same spectral type (See Figure \\ref{fig:cmd}). As further evidence of youth, six members of our sample also are kinematically linked to young moving groups (e.g. Argus, AB Dor, and $\\beta$ Pic) with ages ranging from 12 to 125 Myr.\n\n\\begin{figure}\n\\includegraphics[trim = {.55cm .95cm 1.35cm 2.25cm},clip,width = 3.35in]{J-KDirectlyImagedFinal.pdf}\n\\centering\n\\caption{A J$-$K color-magnitude diagram showing the difference in the near-IR color of older field dwarfs \\citep{2012ApJS..201...19D}, and young objects. 
Included are young brown dwarfs from the UltracoolSheet \\citep{Best}, directly imaged exoplanets from the NASA Exoplanet Archive, and our sample.}\\label{fig:cmd}\n\\end{figure}\n\n\nWe obtained $L$-band spectra of seven of these young brown dwarfs between 2014 October 27 and December 8 using the Gemini Near-InfraRed Spectrograph \\citep[GNIRS;][]{2006SPIE.6269E..4CE} at the Gemini North Observatory on Maunakea. Details of our observations are listed in Table \\ref{tbl:obs}. For these seven objects, we used the 10.44~lines~mm\\textsuperscript{$-$1} grating with the 0\\farcs05~pix\\textsuperscript{--1} camera, resulting in a continuous 2.98--3.96~$\\mu$m spectrum. For most of our targets, we used a slit with a 0\\farcs3 width, resulting in a spectral resolving power ($R = {\\lambda}\/{\\Delta \\lambda}$) varying linearly with wavelength from $\\sim$500 at 2.98~$\\mu$m to $\\sim$675 at 3.96~$\\mu$m, with an average of $R\\approx 590$. For our two faintest targets, we used a 0\\farcs675-wide slit, resulting in $R\\sim$225 to $\\sim$300 from 2.98 to 3.96~$\\mu$m, with an average of $R\\approx 260$. For our science target observations, we used a maximum exposure time of 10~sec and 5~sec for observations taken with the 0\\farcs3 and 0\\farcs675 slits, respectively, and took 10--60 coadds to reach the integration times listed in Table \\ref{tbl:obs}. An ABBA nodding pattern was used to enable dark, bias, and sky subtraction. Observations of telluric standard stars with spectral types ranging from B9~V to A0.5~V were taken adjacent to observations of our targets. 
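As a quick consistency check on the quoted resolving powers, the linear variation of $R$ with wavelength can be sketched as follows. This is only an illustration built from the endpoint values quoted above, not part of the reduction pipeline.

```python
import numpy as np

# R = lambda / (Delta lambda) varies roughly linearly across the GNIRS
# 2.98--3.96 micron setting; endpoint values are those quoted in the text.
def resolving_power(lam_um, r_blue, r_red, lam_blue=2.98, lam_red=3.96):
    """Linearly interpolate R between the band edges."""
    return r_blue + (r_red - r_blue) * (lam_um - lam_blue) / (lam_red - lam_blue)

lam = np.linspace(2.98, 3.96, 1001)
r_narrow = resolving_power(lam, 500.0, 675.0)  # 0.3"   slit
r_wide = resolving_power(lam, 225.0, 300.0)    # 0.675" slit
print(r_narrow.mean(), r_wide.mean())  # band averages near the quoted ~590 and ~260
```

For a linear relation the band-averaged $R$ is just the midpoint of the endpoint values, which reproduces the averages quoted in the text to within rounding.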
\n\nOur sample also includes 2MASS~J22443167+2043433, a young brown dwarf with a published $L$-band spectrum \\citep[3.0$-$4.1 $\\mu$m, $R\\approx$ 460;][]{2009ApJ...702..154S} obtained with the Gemini Near-InfraRed Imager and Spectrograph \\citep[NIRI:][]{2003PASP..115.1388H}.\n\\begin{table*}\n\\caption{GNIRS Observations} \\label{tbl:obs}\n\\begin{tabular}{llrrlr}\n\\toprule[1.5pt]\nName &\nDate &\nSlit &\nN $\\times$ Int. Time &\nTelluric Standard &\n$\\Delta$ Airmass\\\\\n &\n(UT) &\n(arcsec) &\n(sec) &\n &\n \\\\\n\\toprule[1.5pt]\n2MASS~0045+16& 2014 Nov 28 & 0.300&8$\\times$120 &HIP 117927& 0.025\\\\\nWISE~0047+68 & 2014 Nov 30 & 0.300&8$\\times$300 &HIP 8016 & 0.080\\\\\n2MASS~0103+19& 2014 Nov 25 & 0.675&8$\\times$300 &HIP 117927& 0.076\\\\\n2MASS~0103+19& 2014 Dec 01 & 0.675&8$\\times$300 &HIP 117927& 0.043\\\\\n2MASS~0355+11& 2014 Oct 27 & 0.300&8$\\times$100 &HIP 18907 & 0.013\\\\\n2MASS~0501$-$00& 2014 Oct 27 & 0.300&8$\\times$300 &HIP 24607 & 0.028\\\\\n2MASS~0501$-$00& 2014 Dec 08 & 0.300&8$\\times$300 &HIP 24607 & 0.185\\\\\nG~196$-$3B & 2014 Dec 08 & 0.300&8$\\times$300 &HIP 51697 & 0.164\\\\\nPSO 318.5$-$22& 2014 Nov 28 & 0.675&10$\\times$300& HIP 108542& 0.131\\\\\nPSO 318.5$-$22& 2014 Nov 30 & 0.675&12$\\times$300& HIP 108542& 0.210\\\\\n\\end{tabular}\n\\end{table*}\n\n\\section{Data Reduction} \\label{sec:data}\nWe reduced our data using the \\texttt{REDSPEC}\\footnote{\\url{https:\/\/www2.keck.hawaii.edu\/inst\/nirspec\/redspec}} package, modified for use with GNIRS. \nFor spatial map creation, we used the \\texttt{spatmap} procedure on the bright telluric standard, which allowed us to reorient the spectra (of both the science target and telluric standard) along the detector rows. 
To create a spectral map and wavelength calibrate our data, we compared sky emission spectra from our telluric standard observations to a model sky emission spectrum\\footnote{\\url{http:\/\/www.gemini.edu\/sciops\/telescopes-and-sites\/observing-condition-constraints\/ir-background-spectra}}.\n\nWe found that the brightness of the sky background made fitting individual sky emission lines challenging, and thus chose to fit the entire 2.98--3.96~$\\mu$m spectrum simultaneously. We first Gaussian-smoothed the model sky to match our observed spectral resolving power. We then ran both the observed and model sky spectra through a high-pass filter to remove thermal emission. We established a 3rd order polynomial wavelength solution by specifying the wavelengths of four evenly-spaced (in the spectral direction) anchor pixels. We then used IDL's \\texttt{AMOEBA} to iteratively solve for the wavelengths of the anchor pixels that minimize the difference between our observed (with a wavelength solution determined from a 3rd order fit to our four anchor pixels) and model sky emission spectra. This was done for five spatial slices of the observed sky, which allowed us to align each column with a specific wavelength. After creating the spatial and spectral maps, we rectified all raw frames so that the spectral direction ran along image rows and the spatial (along the slit) dimension was aligned with image columns.\n\nA pair of spectra were extracted for each AB pair with the \\texttt{redspec} process. This process first collapses the rectified images in the spectral direction to create a spatial profile of the A$-$B subtracted image. We fit a pair of Gaussian curves centered on the positive and negative peaks in the spatial profile. Residual sky background was subtracted by fitting a line (at each spectral column) to regions of the spatial profile containing no object flux. 
We extracted the spectra by summing the flux within an aperture equal to 1.4$\\times$ the FWHM of the Gaussian fit to the spatial profile, as we found that this value provided the highest signal-to-noise ratio. The noise for each pixel in our spectra was determined by tracking Poisson noise contributions throughout the reduction and extraction process.\n \nThese extracted spectra were combined using a robust weighted mean in \\texttt{xcombspec} \\citep{2004PASP..116..362C}, and then were corrected for telluric absorption and flux calibrated using the \\texttt{xtellcor} package \\citep{2003PASP..115..389V}. Though our telluric correction is excellent overall, we removed regions of the spectra where atmospheric transmission is modeled to fall below 20\\%, as the signal-to-noise ratio in these regions is low and the telluric correction less reliable. This consists of the region around the 3.3 $\\mu$m $Q$-branch of methane. Three of our targets had spectra taken on two separate nights. We reduced each night's data individually, and then combined the final telluric-corrected spectra using \\texttt{xcombspec} with a weighted mean. \n\n\n\\begin{figure*}\n\\includegraphics[trim = {.85cm .8cm 6.05cm 12.5cm},clip,width=.65\\textwidth]{2021LbandStacked.pdf}\n\\centering\n\\caption{The reduced $L$-band spectra of our objects, normalized and offset to the dotted line. The highlighted region marks the $Q$-branch of methane, which appears in our objects at a spectral type of L6. The gaps in our spectra are cuts made where atmospheric transmission is below 20\\% and our telluric correction is less reliable.}\\label{fig:LBandAll}\n\\end{figure*}\n\nWe calibrated the absolute flux of our spectra using published Spitzer\/IRAC Channel 1 (3.6 $\\mu$m) photometry (Table \\ref{tbl:irac} in Appendix \\ref{app:irac}), as the IRAC Channel 1 bandpass sits comfortably inside the spectral range of our GNIRS observations. 
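The photometric scaling step can be sketched as follows. This is a minimal stand-in for the Rayner et al. (2005) procedure: the IRAC Channel 1 response is approximated as a top hat, and the zero-magnitude flux used in the example is a placeholder, so both are assumptions rather than the actual calibration inputs.

```python
import numpy as np

def flux_scale(wave_um, flam, mag, f_zero, band=(3.13, 3.92)):
    """Factor C such that C*flam reproduces the catalog magnitude.

    wave_um, flam : spectrum wavelengths (micron) and fluxes (relative units)
    mag           : catalog [3.6] magnitude
    f_zero        : zero-magnitude band-averaged flux (W m^-2 um^-1)
    band          : top-hat stand-in for the IRAC Ch. 1 response (assumption)
    """
    in_band = (wave_um >= band[0]) & (wave_um <= band[1])
    f_syn = flam[in_band].mean()               # band-averaged synthetic flux
    f_target = f_zero * 10.0 ** (-0.4 * mag)   # flux implied by the magnitude
    return f_target / f_syn

# Example with a flat placeholder spectrum (f_zero is a made-up value):
wave = np.linspace(2.98, 3.96, 500)
spec = np.ones_like(wave)
c = flux_scale(wave, spec, mag=12.892, f_zero=6.5e-10)
```

Multiplying the observed spectrum by the returned factor puts it in absolute units; by construction, recomputing a synthetic magnitude from the scaled spectrum returns the input magnitude.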
PSO 318.5$-$22 does not have a [3.6] magnitude, so we used its spectral type, WISE W1 magnitude \\citep{2013ApJ...772...79A,2013ApJ...777L..20L}, and a spectral type vs.~W1$-$[3.6] color relation to compute a [3.6] magnitude of $12.892\\pm0.035$~mags for PSO 318.5$-$22. The spectral type vs.~W1$-$[3.6] color relation was created from a linear least-squares fit to the values for young L dwarfs listed in Table \\ref{tbl:irac}, which includes some previously unpublished photometry. The photometry, fit, and covariance matrix for the fit can be found in Appendix \\ref{app:irac}. With these [3.6] magnitudes and the zero-magnitude flux for [3.6], we were able to calculate the scaling factor needed to convert our spectra to absolute units of $\\mathrm{W~m}^{-2}~\\mu \\mathrm{m}^{-1}$ using the process found in \\citet{2005PASP..117..978R}. Our final reduced spectra are presented in Figure \\ref{fig:LBandAll}. \n\n\\section{Analysis}\\label{sec:analysis}\n\n\\subsection{The Young, L-band Spectral Sequence}\n\nIn general, the $L$-band spectra of our targets do not show the deep atomic and molecular absorption features that are prominent in near-IR spectra of brown dwarfs \\citep[e.g.][]{2005ARA&A..43..195K, 2017ApJ...838...73M}. There are some weak water features around 3.0--3.2 $\\mu$m, but the strongest absorption feature observed in our spectra is the $Q$-branch of the $\\nu_3$ fundamental band of methane at 3.3 $\\mu$m, which appears only in our objects with spectral types of L6 or later. Moving later in spectral type also corresponds with the spectra becoming redder, shifting from a slightly negative slope with respect to wavelength at L2 to a nearly flat spectrum at L7 when using units of $f_\\lambda$ (Figure \\ref{fig:LBandAll}). \nThis reddening, along with the onset of methane occurring somewhere between L3 and L6, is consistent with tendencies observed in the spectra of older field dwarfs \\citep{2000ApJ...541L..75N, 2008ApJ...678.1372C, 2019RNAAS...3c..52J}. 
\n\n\n\\subsection{Model Comparisons: Best-Fit Parameters}\nFitting spectra to the predictions of atmospheric models has long been a fruitful method for illuminating the physical and chemical processes that occur in stellar and substellar atmospheres \\citep[e.g.][]{2010ApJS..186...63R}. Unfortunately, the medium-resolution $L$-band spectra of L dwarfs lack deep features (excluding the $Q$-branch of methane), so a broad range of model parameters has been shown to give statistically good fits when fitting just the $L$-band spectra alone \\citep[e.g.][]{2008ApJ...678.1372C}. As such, we chose to combine our $L$-band spectra with the published near-IR spectra listed in Table \\ref{tbl:nirsamp}, and then fit both the combined spectrum and just the near-IR spectra in order to gauge how the addition of the $L$-band spectra affects the best-fit parameters for young L dwarfs.\n\n\\begin{table}\n\\caption{IRTF\/SpeX Near-IR Spectra} \\label{tbl:nirsamp}\n\\begin{tabular}{lccr}\n\\toprule[1.5pt]\nName &\nWavelength &\n$<{\\lambda}\/{\\Delta \\lambda}>$ &\nReferences\\\\\n &\nRange ($\\mu$m) &\n &\n\\\\\n\\toprule[1.5pt]\n2MASS~J0045+16& 0.939$-$2.425& 750 & \\citet{2013ApJ...772...79A}\\\\\nWISEP~J0047+68 & 0.643$-$2.550& 120 &\\citet{2012AJ....144...94G}\\\\\n2MASSI~J0103+19& 0.659$-$2.565& 120 & \\citet{2004yCat..51262421C}\\\\\n2MASS~J0355+11& 0.735$-$2.517& 120 & \\citet{2013AJ....145....2F}\\\\\n2MASS~J0501$-$00& 0.850$-$2.502& 120 & \\citet{2010ApJ...715..561A}\\\\\nG~196$-$3B & 0.644$-$2.554& 120& \\citet{2010ApJ...715..561A}\\\\\nPSO~J318.5$-$22& 0.646$-$2.553& 100& \\citet{2013AJ....145....2F}\\\\\n2MASS J2244+20& 0.651$-$2.564& 100&\\citet{2008ApJ...686..528L}\\\\\n\\end{tabular}\n\\end{table}\n\nTo combine the $L$-band and near-IR spectra, we performed an absolute flux calibration on the near-IR spectra using 2MASS $J$, $H$, and $K_S$ photometry \\citep{2003tmc..book.....C} with the zero-point fluxes from \\citet{2003AJ....126.1090C}. 
We determined the scaling factor necessary to convert the spectra to units of $\\mathrm{W~m}^{-2}~\\mu \\mathrm{m}^{-1}$ for each of the three filters, and then used the weighted average of those factors to scale and stitch the near-IR to the $L$ band. PSO 318.5$-$22 lacked 2MASS photometry, so we used MKO $J$,$H$, and $K$ band photometry \\citep{2013ApJ...777L..20L}, with the zero-points coming from \\citet{2005PASP..117..421T}. The average percent difference between the three values and the weighted mean was always under 5\\%, and often under 1\\%.\n\nSeveral grids of atmospheric models were used, allowing us to compare how various approaches to 1-D atmospheric models fare. The models we included were:\n\\begin{itemize}\n\n\\item BT-Settl CIFIST and BT-Settl AGSS Models \\citep{2012RSPTA.370.2765A}: These models include clouds simulated via detailed dust micro-physics, and an estimation of the diffusion process based on 2D hydrodynamic simulations. These models are in chemical equilibrium, though two different chemical abundances were used for AGSS09 and CIFIST11 \\citep[][respectively]{2009ARA&A..47..481A,2011SoPh..268..255C}. \n\nAGSS Parameter ranges: \\teff~ranges from 1000--2600 K at 100 K intervals, \\logg{} from 3.5--5.5 [cm\/s$^{2}$] at 0.5 dex intervals, and [Fe\/H] = 0.0. Above 2000 K, \\logg{} extends to $-$0.5, and [Fe\/H] ranges from +0.5 to $-$4 in 0.5 dex increments, and includes [Fe\/H] = +0.3. \n\nCIFIST Parameter ranges: \\teff~ranges from 1000--2900 K at 50 K intervals, \\logg{} from 3.5--5.5 [cm\/s$^{2}$] at 0.5 dex intervals, and [Fe\/H] = 0.0.\n\n\\item Tremblin Models \\citep{2017ApJ...850...46T}: These models do not include clouds, instead recreating the effects attributed to clouds by changing the adiabatic index in the layer above the convective zone, artificially heating this portion of the atmosphere and simulating convective fingering. They also include a \\kzz~mixing parameter that keeps the model from coming to chemical equilibrium. 
A higher \\kzz~value models more vigorous mixing.\n\nParameter ranges: \\teff~ranges from 1200--2400 K at 200 K intervals, all with \\logg{} = 3.5 [cm\/s$^{2}$], log \\kzz~= 6 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$], [Fe\/H] = 0.0, and an effective adiabatic index ($\\gamma$) of 1.03. We also included four models that had already been made for the near-IR fits of specific objects (P. Tremblin, private communication), including two from our sample: PSO 318.5$-$22 (\\teff~= 1275 K, \\logg{} = 3.7 [cm\/s$^{2}$], log \\kzz~= 5 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$], [Fe\/H] = +0.4, and a $\\gamma$ of 1.03) and 2MASS 0355+11 (\\teff~= 1400 K, \\logg{} = 3.5 [cm\/s$^{2}$], log \\kzz~= 6 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$], [Fe\/H] = +0.2, and a $\\gamma$ of 1.01).\n\n\\item Saumon \\& Marley Models \\citep{2008ApJ...689.1327S}: These models account for clouds using an \\mbox{$f_{\\mathrm{sed}}$}~parameterization, which describes the efficiency of sedimentation in comparison to turbulent mixing, with a lower \\mbox{$f_{\\mathrm{sed}}$}~value implying thicker clouds. The Saumon \\& Marley models also include a \\kzz~mixing parameter to account for disequilibrium chemistry, with a higher \\kzz~once again modeling more vigorous mixing.\n\nParameter ranges: \\teff~ranges from 700--2400 K at 100 K intervals, and down to 500 K at 50 K intervals, \\logg{} ranges from 4.5--5.5 [cm\/s$^{2}$] at approximately 0.5 dex intervals, with log \\kzz~of 0, 2, 4, and 6 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$], [Fe\/H] = 0.0, and \\mbox{$f_{\\mathrm{sed}}$}~of 1, 2, 3, and 4, along with a cloud-free model (nc). Additionally, there were cloud-free models with [Fe\/H] = +0.3 and +0.5 for 500--600 K, as well as \\logg{} of 5.25 [cm\/s$^{2}$] from 500--700 K and 4.0 [cm\/s$^{2}$] from 1500--1700 K. 
From 800--1500 K, the \\mbox{$f_{\\mathrm{sed}}$}~= 1 and 2 models include \\logg{} of 4.0 and 4.25 [cm\/s$^{2}$], and the \\mbox{$f_{\\mathrm{sed}}$}~= 2 models include a \\logg{} of 4.75 [cm\/s$^{2}$].\n\n\\item Drift-Phoenix Models \\citep{2009A&A...506.1367W}: These models simulate clouds using detailed dust micro-physics, with a more robust focus on both the seeding and subsequent growth of the cloud-forming grains, and are allowed to come into chemical equilibrium.\n\nParameter ranges: \\teff~ranges from 1000--2200 K at 100 K intervals, \\logg{} from 3.5--5.5 [cm\/s$^{2}$] at 0.5 dex intervals, and [Fe\/H] = 0.0.\n\n\\item Madhusudhan Models \\citep{2011ApJ...737...34M}: These models include clouds that are parameterized by variable dust grain sizes and various upper altitude cutoffs, ranging from a sharp cutoff (E) to extending fully to the top of the atmosphere (A), with AE and AEE ranging between the two. The models are also in chemical equilibrium.\n\nParameter ranges: \\teff~ranges from 700--1700 K at 100 K intervals, \\logg{} = 4.0 [cm\/s$^{2}$], with [Fe\/H] of 0.0 and +0.5, and dust grain sizes of 30, 60, and 100 $\\mu$m. The AE cloud set has some finer \\teff~and \\logg{} spacing, with a 25 K interval from 750--1050 K and \\logg{} ranging from 3.75--4.25 [cm\/s$^{2}$] with a 0.25 dex interval across the whole range.\n\\end{itemize} \nWe found the best fits by first calculating the scaling factor $C$ (where $C$ corresponds physically to Radius$^2$\/Distance$^2$) that minimized \\gk, a goodness-of-fit statistic \\citep[]{2008ApJ...678.1372C}, for each model spectrum. For each model set, the synthetic spectrum with the minimum \\gk~was then selected as the best-fit model. We report the parameters of these models, for both the near-IR and combined spectra, in Table \\ref{tbl:bestfits}. 
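\n\nThe goodness-of-fit statistic and scaling factor take the standard forms of \\citet{2008ApJ...678.1372C}; as a sketch (writing $f_i$ for the observed fluxes, $F_{k,i}$ for the fluxes of model $k$, $\\sigma_i$ for the uncertainties, and $w_i$ for the weights),\n\\begin{equation}\nG_k = \\sum_i w_i \\left( \\frac{f_i - C_k F_{k,i}}{\\sigma_i} \\right)^2 ,\n\\end{equation}\nand setting $\\partial G_k \/ \\partial C_k = 0$ gives the scaling factor that minimizes \\gk~for each model,\n\\begin{equation}\nC_k = \\frac{\\sum_i w_i f_i F_{k,i} \/ \\sigma_i^2}{\\sum_i w_i F_{k,i}^2 \/ \\sigma_i^2} .\n\\end{equation}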
\n\nFor the sake of compactness, PSO 318.5$-$22 and 2MASS 0355+11 are presented as a representative sample for the rest of the paper, with PSO 318.5$-$22 representing the cooler and later spectral types and 2MASS 0355+11 the hotter and earlier. The best near-IR fits of 2MASS 0355+11 and PSO 318.5$-$22 can be seen in Figures \\ref{fig:0355NIR} and \\ref{fig:PSONIR}, respectively. Note that the $L$-band spectra are included in these figures but were not used in this fitting process. The combined best fits for these objects can be seen in Figures \\ref{fig:0355Fits} and \\ref{fig:318Fits}, respectively. The combined fits for the remaining objects can be found in Appendix \\ref{app:data}.\n\nWhen we calculate the best fits to just the published near-IR spectra of our sample, we find that the models fit portions of the near-IR well, though there are a variety of deviations for each set of models, which matches what was seen for young brown dwarfs in \\citet{2014A&A...564A..55M}. However, 8 of the 12 fits are very poor matches to the $L$ band, as can be seen in Figures \\ref{fig:0355NIR} and \\ref{fig:PSONIR}. The BT-Settl CIFIST and Tremblin models fit the $L$ band well enough for hotter objects like 2MASS 0355+11, but underestimate the $L$-band flux at lower temperatures. Similarly, the Saumon \\& Marley models provide one of the better $L$-band fits for PSO 318.5$-$22, but for the hotter objects they predict strong water features in the $L$ band that are not present. For these same hotter objects the Drift-Phoenix best-fit models predict too much flux emerging from the $L$ band. Between \\teff~of 1600 K and 1400 K the Drift-Phoenix models transition from being bright in the $J$ and $H$ bands to bright in the $K$ and $L$ bands (this transition can be seen between Figures \\ref{fig:0355NIR}, \\ref{fig:0355Fits}, and \\ref{fig:0355_EVO}, which show the 1500, 1600, and 1400 K models, respectively). 
This rapid reddening causes most of the cooler objects to be fit by the Drift-Phoenix models at either 1600 or 1500 K, and leads to Drift-Phoenix having the best $L$-band fit of PSO 318.5$-$22, even though its near-IR fit is the worst. All of these discrepancies show that a reasonable fit at near-IR wavelengths does not necessarily mean the same model spectra will also fit the $L$-band region well.\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021BestFit0355NIR.pdf}\n\\centering\n\\caption{The combined spectrum of 2MASS 0355+11 (black) compared to the model spectra (colored) with parameters that best fit only the shaded near-IR portion of the spectrum.}\n\\label{fig:0355NIR}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021BestFit318NIR.pdf}\n\\caption{Same as Fig. \\ref{fig:0355NIR}, except using the cooler PSO 318.5$-$22.}\n\\label{fig:PSONIR}\n\\end{figure*}\n\n\\subsubsection{Overall Summary of the Combined Fits}\nWhen we fit to the combined spectra (see Figures \\ref{fig:0355Fits} and \\ref{fig:318Fits} and Appendix \\ref{app:data}), we find that no single model set fits all the objects well. The Saumon \\& Marley and Tremblin models are the most consistent across the whole spectral range, and provide the best fits for the later spectral types. For the earlier types, the BT-Settl CIFIST models are often better, but for the later-type objects their best-fit models have muted $J$, $H$, and $K$ peaks and shallower troughs between the peaks, as seen in Figure \\ref{fig:318Fits}. The Drift-Phoenix models tend to fit the $L$ band fairly well, particularly the flux levels between the near-IR and $L$ band, yet their near-IR fits are poor. Figure \\ref{fig:318Fits} is a good example of both the flux level match and the near-IR mismatch. 
The Madhusudhan and BT-Settl AGSS models are not the best-fitting models for any of our objects' combined spectra. The Madhusudhan models' poor fits are caused by a humped $L$ band, as well as under-predicting the flux emerging from the $J$ and $H$ bands, which can be seen in both Figures \\ref{fig:0355Fits} and \\ref{fig:318Fits}. For the BT-Settl AGSS models, the poor fits arise because when the flux levels between the near-IR and the $L$ band match the observed spectra, the near-IR spectral morphology is off (often too bright in the $K$ band and too dim in the $J$ band), as seen in Figure \\ref{fig:0355Fits}. \n\n\\subsubsection{Temperature}\n\nWhen we calculate the best fits for the combined spectra, we find the best-fit temperatures are generally $\\sim$100 K colder compared to the fits to just the near-IR, as seen in Table \\ref{tbl:bestfits}. The lower temperature of fits that include the $L$ band can also be seen in Figure \\ref{fig:GK}, which compares the \\gk~values for PSO 318.5$-$22 for a range of model temperatures and surface gravities. The shapes of the \\gk~curves for temperature and surface gravity are similar for both the combined and near-IR spectra, but including the $L$ band offsets the curve to colder temperatures. This temperature drop is caused in part by the higher temperature models under-predicting the flux ratio between the near-IR and the $L$ band. Even cases where a model's best-fit temperature increased when the $L$ band was added show this 100 K shift (for example, the BT-Settl CIFIST fit of PSO 318.5$-$22). In these cases there are two local minima: a hotter one with muted troughs and peaks in the near-IR, and a cooler one that is less muted. Both minima shift to cooler temperatures when the $L$ band is added, but the absolute minimum switches from the colder minimum to the hotter one. 
\n\nThe combined best-fit temperatures also divide our models into two groups: those models that generally give higher temperature fits for our objects (BT-Settl and Drift-Phoenix) and those that fit these same objects with lower temperatures (Madhusudhan, Tremblin, and Saumon \\& Marley). The colder-model fits all share a similar feature in their $P\/T$ profile: moving radially outward, they track with similar models until at some height they show a sharp decrease in pressure over a small temperature range, after which they continue along similar models except with hotter temperatures in the upper layers of the atmosphere (\\citealp[Examples can be seen in Fig. 3 in][]{2011ApJ...737...34M}\\citealp[, Fig. 4 in][]{2016ApJ...817L..19T}\\citealp[, and Fig. 1 in][]{2010ApJ...723L.117M}). The cause of this thermal perturbation is not the same for all models, as the modified adiabatic index produces the perturbation in the Tremblin models while the existence of clouds produces it in the Madhusudhan and Saumon \\& Marley models. It should be noted that not all the Madhusudhan and Saumon \\& Marley models have this thermal perturbation, only the ones with thick clouds (Type A for Madhusudhan, and lower \\mbox{$f_{\\mathrm{sed}}$}~values for Saumon \\& Marley).\n\n\\subsubsection{Clouds}\nTwo of the model grids also allow us to look at the effects of variations in cloud properties. For both sets of models the thickest clouds were consistently the best fits (\\mbox{$f_{\\mathrm{sed}}$}~= 1 for Saumon \\& Marley and cloud type A for Madhusudhan), with the fits getting progressively worse for thinner clouds. Thick clouds cause an increase in opacity, which blocks flux coming from the deeper and hotter layers of the atmosphere. As such, the near-IR flux, which in cloudless models originates below the added cloud deck, instead emerges from a cooler region just above the cloud deck \\citep{2001ApJ...556..872A}. 
With less flux emerging in the near-IR, the upper atmosphere must heat up to maintain a level of radiation consistent with the \\teff~of the object. Since the $L$ band originates from these upper layers, we get an increase in $L$-band flux, which, when combined with the lower near-IR flux, results in better fits. However, the thick clouds (combined with a high \\kzz~for Saumon \\& Marley) also create a turndown at the long end of the $L$ band that we do not see in our data. This can be seen in Figure \\ref{fig:318Fits}, and is also an issue for several of the other models. Still, overall it seems that if clouds are the answer to the spectral reddening of our young objects, as suggested by \\citet{2014A&A...564A..55M} and \\citet{2016ApJS..225...10F}, then they will need to be thick rather than thin to best fit the full spectrum.\n\n\\begin{landscape}\n\\begin{deluxetable}{lrrrcccr|rrrcccr}\n\\tabletypesize{\\footnotesize}\n\\tablewidth{0pt}\n\\tablecolumns{15}\n\\tablecaption{Atmospheric Model Fits\\label{tbl:bestfits}}\n\\tablehead{\n\\colhead{Model} &\n\\multicolumn{7}{c}{\\underline{Best Fit to Near-IR only}} &\n\\multicolumn{7}{c}{\\underline{Best Fit to Near-IR + $L$ Band}} \\\\\n\\colhead{} &\n\\colhead{\\teff} &\n\\colhead{\\logg{}} &\n\\colhead{[Fe\/H]} &\n\\colhead{\\mbox{$f_{\\mathrm{sed}}$}} &\n\\colhead{\\kzz} &\n\\colhead{Dust Size} &\n\\colhead{\\gk}&\n\\colhead{\\teff} &\n\\colhead{\\logg{}} &\n\\colhead{[Fe\/H]} &\n\\colhead{\\mbox{$f_{\\mathrm{sed}}$}} &\n\\colhead{\\kzz} &\n\\colhead{Dust Size} &\n\\colhead{\\gk}\\\\\n\\cmidrule(lr){2-8}\\cmidrule(lr){9-15}\n\\colhead{} &\n\\colhead{(K)} &\n\\colhead{[cm\/s$^{2}$]} &\n\\colhead{} &\n\\colhead{} &\n\\colhead{[$\\mathrm{cm}^2 \\mathrm{s}^{-1}$]} &\n\\colhead{($\\mu$m)} &\n\\colhead{}&\n\\colhead{(K)} &\n\\colhead{[cm\/s$^{2}$]} &\n\\colhead{} &\n\\colhead{} &\n\\colhead{[$\\mathrm{cm}^2 \\mathrm{s}^{-1}$]} &\n\\colhead{($\\mu$m)} &\n\\colhead{}\n}\n\\startdata\n\\sidehead{\\underline{2MASS 
0045+16:}}\n~~BT-Settl AGSS09 &2000&4.50&+0.5& \\nodata& \\nodata& \\nodata & {253710} & 2000&5.00&+0.5& \\nodata& \\nodata& \\nodata & {493972} \\\\\n~~BT-Settl CIFIST11 &1800&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {184374} & 1800&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {202512} \\\\\n~~ Drift-Phoenix &1700&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {217628} & 1800&4.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {359250} \\\\\n~~Saumon \\& Marley &1600&4.48& 0.0\\tablenotemark{a}&1&6& \\nodata & {174446} & 1600&4.48& 0.0\\tablenotemark{a}&1&6& \\nodata & {231808} \\\\\n~~Madhusudhan &1700&4.0&0.0& \\nodata & \\nodata &100 & {485212} & 1700&4.0&0.0& \\nodata & \\nodata &100 & {612891} \\\\\n~~Tremblin & 2000&3.5\\tablenotemark{a}& 0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {174357} & 2000&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & {366177} \\\\\n\\sidehead{\\underline{G196$-$3B:}}\n~~BT-Settl AGSS09 &1700&3.50&0.0& \\nodata& \\nodata& \\nodata & {15589} & 1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {42102} \\\\\n~~BT-Settl CIFIST11 &1750&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {14745} & 1700&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {40166} \\\\\n~~ Drift-Phoenix &1600&4.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {28231} & 1600&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {43899} \\\\\n~~Saumon \\& Marley &1400&4.25& 0.0\\tablenotemark{a}&1&4& \\nodata & {21832} & 1400&4.25& 0.0\\tablenotemark{a}&1&4& \\nodata & {49006} \\\\\n~~Madhusudhan &1400&4.0&0.0& \\nodata & \\nodata &60 & {47408} & 1400&4.0&0.0& \\nodata & \\nodata &100 & {103123} \\\\\n~~Tremblin & 1600&3.5\\tablenotemark{a}& 0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {15486} & 1600&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & 
{23985} \\\\\n\\sidehead{\\underline{2MASS 0501$-$00:}}\n~~BT-Settl AGSS09 &2000&5.50&+0.5& \\nodata& \\nodata& \\nodata & {23214} & 1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {105845} \\\\\n~~BT-Settl CIFIST11 &1650&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {18834} & 1600&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {106892} \\\\\n~~ Drift-Phoenix &1600&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {18862} & 1600&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {35401} \\\\\n~~Saumon \\& Marley &1400&4.00& 0.0\\tablenotemark{a}&1&6& \\nodata & {15106} & 1400&4.25& 0.0\\tablenotemark{a}&1&4& \\nodata & {106145} \\\\\n~~Madhusudhan &1500&4.0&0.0& \\nodata & \\nodata &60 & {53237} & 1400&4.0&0.0& \\nodata & \\nodata &100 & {222405} \\\\\n~~Tremblin & 1800&3.5\\tablenotemark{a}& 0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {12864} & 1600&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & {70538} \\\\\n\\sidehead{\\underline{2MASS 0355+11:}}\n~~BT-Settl AGSS09 &1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {13796} & 1600&3.50&0.0& \\nodata& \\nodata& \\nodata & {75661} \\\\\n~~BT-Settl CIFIST11 &1550&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {5305} & 1550&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {21312} \\\\\n~~ Drift-Phoenix &1500&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {11847} & 1600&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {94024} \\\\\n~~Saumon \\& Marley &1200&4.25& 0.0\\tablenotemark{a}&1&6& \\nodata & {12298} & 1200&5.47& 0.0\\tablenotemark{a}&1&4& \\nodata & {77846} \\\\\n~~Madhusudhan &1300&3.75&0.0& \\nodata & \\nodata &60 & {17793} & 1200&4.25&0.0& \\nodata & \\nodata &60 & {187057} \\\\\n~~Tremblin & 1400&3.5\\tablenotemark{a}& +0.2\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {7631} & 1400&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& 
\\nodata & 6\\tablenotemark{a} &\\nodata & {67283} \\\\\n\\sidehead{\\underline{2MASS 0103+19:}}\n~~BT-Settl AGSS09 &1500&4.50&0.0& \\nodata& \\nodata& \\nodata & {5615} & 1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {24475} \\\\\n~~BT-Settl CIFIST11 &1650&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {2912} & 1500&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {15322} \\\\\n~~ Drift-Phoenix &1600&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {4581} & 1600&4.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {11271} \\\\\n~~Saumon \\& Marley &1400&4.25& 0.0\\tablenotemark{a}&1&6& \\nodata & {2551} & 1300&4.00& 0.0\\tablenotemark{a}&1&2& \\nodata & {17097} \\\\\n~~Madhusudhan &1400&4.0&0.0& \\nodata & \\nodata &60 & {7222} & 1300&4.0&0.0& \\nodata & \\nodata &100 & {28273} \\\\\n~~Tremblin & 1800&3.5\\tablenotemark{a}& 0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {4101} & 1600&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & {17127} \\\\\n\\sidehead{\\underline{2MASS 2244+20:}}\n~~BT-Settl AGSS09 &1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {47015} & 1600&3.50&0.0& \\nodata& \\nodata& \\nodata & {109410} \\\\\n~~BT-Settl CIFIST11 &1400&4.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {25443} & 1400&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {58998} \\\\\n~~ Drift-Phoenix &1500&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {47917} & 1500&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {63373} \\\\\n~~Saumon \\& Marley &1200&4.00& 0.0\\tablenotemark{a}&1&4& \\nodata & {10359} & 1000&4.00& 0.0\\tablenotemark{a}&1&6& \\nodata & {27231} \\\\\n~~Madhusudhan &1300&4.0&0.0& \\nodata & \\nodata &60 & {44486} & 1200&4.0&0.0& \\nodata & \\nodata &60 & {78465} \\\\\n~~Tremblin & 1400&3.5\\tablenotemark{a}& +0.2\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} & \\nodata & {15470} & 
1400&3.5\\tablenotemark{a}&+0.2\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & {40499} \\\\\n\\sidehead{\\underline{PSO 318.5$-$22:}}\n~~BT-Settl AGSS09 &1600&3.50&0.0& \\nodata& \\nodata& \\nodata & {3633} & 1600&3.50&0.0& \\nodata& \\nodata& \\nodata & {54994} \\\\\n~~BT-Settl CIFIST11 &1350&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {2861} & 1550&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {11876} \\\\\n~~ Drift-Phoenix &1500&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {5423} & 1500&5.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {9482} \\\\\n~~Saumon \\& Marley &1000&4.25& 0.0\\tablenotemark{a}&1&6& \\nodata & {2122} & 900&5.00& 0.0\\tablenotemark{a}&1&4& \\nodata & {8704} \\\\\n~~Madhusudhan &1100&4.0&0.0& \\nodata & \\nodata &30 & {5440} & 1000&4.0&+0.5& \\nodata & \\nodata &60 & {21581} \\\\\n~~Tremblin & 1275&3.7\\tablenotemark{a}& +0.4\\tablenotemark{a}& \\nodata & 5\\tablenotemark{a} & \\nodata & {1694} & 1200&3.5\\tablenotemark{a}&0.0\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} &\\nodata & {12564} \\\\\n\\sidehead{\\underline{WISE 0047+68:}}\n~~BT-Settl AGSS09 &1700&4.50&0.0& \\nodata& \\nodata& \\nodata & {23472} & 1600&3.50&0.0& \\nodata& \\nodata& \\nodata & {131724} \\\\\n~~BT-Settl CIFIST11 &1400&4.00& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {13212} & 1500&3.50& 0.0\\tablenotemark{a} & \\nodata& \\nodata& \\nodata & {57278} \\\\\n~~ Drift-Phoenix &1500&3.50& 0.0\\tablenotemark{a}& \\nodata& \\nodata& \\nodata & {25152} & 1500&4.00& 0.0\\tablenotemark{a}& \\nodata& \\nodata& \\nodata & {47773} \\\\\n~~Saumon \\& Marley &1100&4.47& 0.0\\tablenotemark{a}&1&6& \\nodata & {5711} & 1000&4.00& 0.0\\tablenotemark{a}&1&6& \\nodata & {20615} \\\\\n~~Madhusudhan &1200&4.25&0.0& \\nodata & \\nodata &60 & {19659} & 1100&4.0&+0.5& \\nodata & \\nodata &100 & {76802}\\\\\n~~Tremblin & 1400&3.5\\tablenotemark{a}& +0.2\\tablenotemark{a}& \\nodata & 6\\tablenotemark{a} 
& \\nodata & {5935} & 1275&3.7\\tablenotemark{a}&+0.4\\tablenotemark{a}& \\nodata & 5\\tablenotemark{a} &\\nodata & {25983} \\\\\n\\enddata\n\\tablenotetext{a}{Not a free parameter in the fit}\n\\end{deluxetable}\n\\end{landscape}\n\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021BestFit2MJ0355+11LBand.pdf}\n\\centering\n\\caption{The combined spectrum of 2MASS 0355+11 (black) compared to the model spectra (colored) with parameters that best fit the entire combined spectrum. For ease of comparison, the near-IR best-fit models are shown in grey.}\\label{fig:0355Fits}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021BestFitPSO-318LBand.pdf}\n\\centering\n\\caption{Same as Figure \\ref{fig:0355Fits}, except using the cooler PSO 318.5$-$22.}\n\\label{fig:318Fits}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[trim = {0 5cm 1cm 8.5cm}, clip,width=.75\\textwidth]{2022PS0MP.pdf}\n\\centering\n\\caption{The goodness-of-fit parameter (\\gk) for PSO 318.5$-$22 plotted as a function of temperature and \\logg{}. Note that the shape does not change drastically when including the $L$ band, but it does move to cooler temperatures ($\\sim$100 K). This held true across all of our objects.}\n\\label{fig:GK}\n\\end{figure*}\n\n\\subsubsection{Mixing Rate}\n\nWe can see how the vertical mixing rate (\\kzz) tends to vary with the inclusion of the $L$ band through the Saumon \\& Marley fits. With the inclusion of the $L$-band spectra, there is a shift to lower \\kzz~values (from the maximum possible 6 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$] to 4 or even 2) for the combined spectra, which at face value disagrees with the conclusion of \\citet{2018ApJ...869...18M} that a high \\kzz~(around $10^8$ $\\mathrm{cm}^2 \\mathrm{s}^{-1}$) is needed to explain $L$-band features. 
However, when the models' thickest cloud parameter was used (\\mbox{$f_{\\mathrm{sed}}$}~= 1), as was the case in the best fit for all of our objects, changing the log \\kzz~value from 6 to 4 had little effect on the model spectra in the near-IR and $L$ band. The effect on \\gk~was at most a 0.5 percent increase, and sometimes as small as 0.01 percent. For some temperatures, the insensitivity of the model to \\kzz~extends down to \\kzz~of 2 [$\\mathrm{cm}^2 \\mathrm{s}^{-1}$], such as at the best-fit temperature of 2MASS 0103+19 (1300 K), but often the lowered vertical mixing would start to deepen the $L$-band methane features at this point, making the fit noticeably worse. When a thinner cloud parameter was set, the best-fit \\kzz~values tended to stay high. Overall, the fits have low sensitivity among the higher \\kzz~values, but we can still say we agree with \\citet{2018ApJ...869...18M} that higher values in general create better fits. \n\n\\subsection{Model Comparisons: Evolutionary Parameters} \\label{EvoParam}\nSome of our objects are members of young moving groups and therefore have ages associated with them. These ages, along with measured $L_{\\mathrm{bol}}$ and evolutionary models, give well-defined evolutionary parameters for five of our objects: 2MASS 0045+16, 2MASS 2244+20, WISE 0047+68, and 2MASS 0355+11 \\citep[all from][]{2016ApJS..225...10F}, and PSO 318.5$-$22 \\citep{2016ApJ...819..133A}. The memberships, ages, masses, effective temperatures, and surface gravities for all of these objects can be found in Table \\ref{tbl:EVO}. We can use these \\teff~and \\logg{} values to select synthetic spectra that match these parameters, which we will hereafter refer to as the evolutionary effective temperature and surface gravity. For each model set, we took the model spectra with the effective temperature and surface gravity closest to each object's evolutionary effective temperature and surface gravity, and of these selected the model parameters and $C$ which minimized \\gk. 
The Madhusudhan and Tremblin models do not fully cover the range of temperatures and gravities needed to encompass all of the objects' evolutionary temperatures and gravities. When this occurred, the closest available parameters were used. The comparisons for these objects can be seen in Figures \\ref{fig:0045_EVO} through \\ref{fig:PSO_EVO}. For ease of reference, the combined best-fit model from the previous section is also plotted in grey.\n\n\\begin{table*}\n\\caption{Our Sample of Young L Dwarfs With Known Moving Groups and Ages} \\label{tbl:EVO}\n\\begin{tabular}{lcccccc}\n\\toprule[1.5pt]\nName & YMG & Age & Mass Range & \\teff & \\logg{} & Refs \\\\\n & Membership & (Myrs) & ($M_\\mathrm{Jup}$) & (K) & [cm\/s$^{2}$] & \\\\\n\\toprule[1.5pt]\n2MASS J0045+16\t&\tArgus & 40 & 20--29& $2059 \\pm 45$& $4.22 \\pm 0.10$ &\tC09, F15, L16, F16\t\\\\\n2MASS J0355+11\t&\tAB Dor\t& 110--130 & 15--27& $1478\\pm 58$ & $4.58^{+0.07}_{-0.17}$ &\tC09, F13, AL13, F16\t\\\\\nWISEP J0047+68 &\tAB~Dor & 110--130 & 9--15\t& $1230 \\pm 27$& $4.21 \\pm 0.10$ &\tG15, F15, F16\\\\\n2MASS J2244+20\t&\tAB~Dor\t& 110--130 & 9--12 &$1184 \\pm 10$& $4.18 \\pm 0.08$ &\tK08, F15, A16 \t\\\\\nPSO J318.5$-$22\t&$\\beta$~Pic& 20--26 & 5--7 & $1127^{+24}_{-26}$ & 4.01 $\\pm$ 0.03 &\tL13, A16, F16\t\n\\end{tabular}\n\\tablerefs{(A16)~\\citet{2016ApJ...819..133A}, (AL13)~\\citet{2013ApJ...772...79A},\n(C09)~\\citet{2009AJ....137.3345C}, \n(F12)~\\citet{2012ApJ...752...56F}, (F13)~\\citet{2013AJ....145....2F},\n(F15)~\\citet{2015ApJ...810..158F},\n(F16)~\\citet{2016ApJS..225...10F},\n(G15)~\\citet{2015ApJ...799..203G},\n(K08)~\\citet{2008ApJ...689.1295K},\n(L16)~\\citet{2016ApJ...833...96L}, (L13)~\\citet{2013ApJ...777L..20L},\n(V18)~\\citet{2018MNRAS.474.1041V}\n}\n\n\\end{table*}\nFor all our objects except 2MASS 0045+16, we see that the Drift-Phoenix and BT-Settl AGSS models do not fit well when the evolutionary effective temperature and surface gravity are used. 
The Drift-Phoenix model over-predicts the flux in the $L$ band, with the peaks of the $J$ and $H$ bands being almost non-existent, which can be seen most clearly in Figure \\ref{fig:2244_EVO}. The AGSS model has the opposite problem, as already at 1400 K in Figure \\ref{fig:0355_EVO} it looks remarkably like a T dwarf with its deep methane absorption in the near-IR and $L$ band. The fact that the methane absorption is this strong in the AGSS model at such high temperatures is part of why the best-fit temperatures were hot for our young L dwarfs. The exception is our hottest object, 2MASS 0045+16 (Figure \\ref{fig:0045_EVO}), for which the AGSS model arguably provides the best fit. Meanwhile, the Drift-Phoenix model inverts its color, with the $J$ and $H$ bands too strong compared to the weaker $L$ and $K$ bands, though not quite as dramatically at this temperature.\n\nThe CIFIST model set is one of the better fits for 2MASS 0355+11 when using the evolutionary effective temperature and surface gravity, where its biggest issues are predicting stronger water absorption features than we observe, both between the near-IR bands and in the $L$ band, and missing the sharp triangular shape of the $H$ band. This issue is present for all the objects, but it is most pronounced for the colder ones, as in Figure \\ref{fig:0047_EVO}, where the predicted water absorption becomes even stronger and is accompanied by strong methane features as well. This increased absorption drops the model's $L$-band flux level too low. In general, the BT-Settl CIFIST model set seems to do comparatively well for young L dwarfs at higher temperatures, but struggles when transitioning to lower temperatures. \n\nThe selected Madhusudhan models have the opposite problem, fitting colder objects better than hotter ones. 
When using the evolutionary effective temperatures and surface gravities of 2MASS 0355+11 and 2MASS 0045+16, we see the differences in the flux levels of the near-IR and $L$ band are much greater in the models than in the observed spectra. This disparity is not there for the three colder objects, though other issues occur, such as over-estimation of the $K$-band flux and a turndown at the long end of the $L$ band. However, these issues are minor compared to the flux level spread between the bands at higher temperatures, which explains the Madhusudhan models' tendency to give colder best-fit temperatures.\n\nThe Tremblin and Saumon \\& Marley model sets are the most consistently good fits, though both still have some discrepancies. The Saumon \\& Marley models tend to overestimate the flux in the $J$ and $H$ bands, especially for 2MASS 0045+16 and 2MASS 0355+11. Models with lower temperatures have lower near-IR fluxes that are closer to the observed values, such as in Figure \\ref{fig:0047_EVO}, which explains why the models' best fits trend towards colder temperatures. The Tremblin models are off in the near-IR about as much as the Saumon \\& Marley models, but with no consistent issues. For the three coldest objects, the Tremblin models match the $L$-band flux level, yet exhibit strong absorption in the $P$, $Q$, and $R$ branches of methane unseen in the observed spectra. Both also under-predict the $L$-band flux for 2MASS 0045+16, though this is consistent across all the models.\n\nLooking more generally, three of the four models without disequilibrium chemistry (both BT-Settl models and Drift-Phoenix) show an onset of methane at higher temperatures than we observe, such as in Figure \\ref{fig:0355_EVO}. This agrees with what we saw from the best-fit models. This effect is especially notable in the BT-Settl AGSS models, whose spectra at this temperature look more similar to T dwarfs than L dwarfs. 
The Madhusudhan model with the thickest clouds (Type A) does not have this early appearance of methane, but the less thick AE clouds do, with $Q$-branch absorption being strong even at 1400 K. As for the models with disequilibrium chemistry, the $Q$-branch of methane is well fit when using the evolutionary effective temperature and surface gravity, and overall these models are generally better fits than the equilibrium models. We also find that cloudless models tend to fit the colder objects at their evolutionary temperatures worse than those with clouds, as seen in Figures \\ref{fig:0047_EVO}-\\ref{fig:PSO_EVO}. The exception to the cloudy models fitting better is the Drift-Phoenix models, which are poor past 1500 K due to so little flux coming out in the $J$ and $H$ bands. This is evidence that some combination of clouds and disequilibrium chemistry is necessary for modeling brown dwarf atmospheres with accurate effective temperatures.\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{EVOComp0045.pdf}\n\\centering\n\\caption{The spectra of 2MASS 0045+16 compared to the spectra from each model set that best matches the known evolutionary effective temperature and surface gravity found in Table \\ref{tbl:EVO}. Note that in this case the Tremblin models do not extend to \\logg{} = 4.5 [cm\/s$^{2}$] and the Madhusudhan models do not extend to 2000 K. The combined best-fit spectrum is also shown in gray (in the case of the Tremblin models, the best-fit and evolutionary model parameters are the same).}\n\\label{fig:0045_EVO}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021EVOComp0355.pdf}\n\\centering\n\\caption{The spectra of 2MASS 0355+11 compared to the spectra from each model set that best matches the known evolutionary effective temperature and surface gravity found in Table \\ref{tbl:EVO}. 
Note that in this case the Tremblin models do not extend to \\logg{} = 4.5 [cm\/s$^{2}$]. The combined best-fit spectrum is also shown in gray (in the case of the Tremblin models, the best-fit and evolutionary model parameters are the same).}\n\\label{fig:0355_EVO}\n\\end{figure*}\n\n\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{EVOComp0047.pdf}\n\\centering\n\\caption{The spectra of WISE 0047+68 compared to the spectra from each model set that best matches the known evolutionary effective temperature and surface gravity found in Table \\ref{tbl:EVO}. Note that in this case the Tremblin models do not extend to \\logg{} = 4.0 [cm\/s$^{2}$]. The combined best-fit spectrum is also shown in gray.}\n\\label{fig:0047_EVO}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{EVOComp2244.pdf}\n\\centering\n\\caption{The spectra of 2MASS 2244+20 compared to the spectra from each model set that best matches the known evolutionary effective temperature and surface gravity found in Table \\ref{tbl:EVO}. Note that in this case the Tremblin models do not extend to \\logg{} = 4.0 [cm\/s$^{2}$]. The combined best-fit spectrum is also shown in gray (in the case of the Madhusudhan models, the best-fit and evolutionary model parameters are the same).}\n\\label{fig:2244_EVO}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[trim = {.25cm 4.7cm 3.5cm 2.75cm},clip,width=.9\\textwidth]{2021EVOComp318.pdf}\n\\centering\n\\caption{The spectra of PSO 318.5$-$22 compared to the spectra from each model set that best matches the known evolutionary effective temperature and surface gravity found in Table \\ref{tbl:EVO}. Note that in this case the Tremblin models do not extend to 1100 K or \\logg{} = 4.0 [cm\/s$^{2}$]. 
In gray is also the combined best-fit spectrum (in the case of the Tremblin models, the best-fit and evolutionary model parameters are the same).}\n\\label{fig:PSO_EVO}\n\\end{figure*}\n\n\\subsection{AB Dor Sub-population}\nThree of our targets, 2MASS 2244+20, WISE 0047+68, and 2MASS 0355+11, are members of the AB Doradus young moving group, and thus can be assumed to have similar compositions and ages \\citep[$\\sim$120 Myr,][]{2013ApJ...766....6B} due to their shared origin and formation history. WISE 0047+68 and 2MASS 2244+20 have approximately the same mass, 9-15 and 9-12 \\mbox{$M_{\\rm{Jup}}$}{} respectively, but 2MASS 0355+11 is almost twice as massive at 15-27 \\mbox{$M_{\\rm{Jup}}$}. This means Figure \\ref{fig:ABDOR}, which shows the spectra of these three objects, is a mass sequence at a fixed age and composition, though admittedly an abbreviated one. 2MASS 0355+11 is more luminous than WISE 0047+68 and 2MASS 2244+20, whose spectra lie nearly on top of each other due to having similar masses (see Table \\ref{tbl:EVO}). Of particular interest is the 3.3 $\\mu$m $Q$-branch methane feature, which has a similar depth in the two cooler objects, even with different infrared spectral types and gravity classifications (WISE 0047+68 as L7 INT-G and 2MASS 2244+20 as L6 VL-G). This feature is absent in the more massive (and hotter) 2MASS 0355+11, showing the temperature dependence of the methane feature for a fixed age and composition. AB Dor has several other ultracool members \\citep{2016ApJS..225...10F}, which means it will be possible to expand this mass sequence. \n\\begin{figure*}\n\\includegraphics[width = \\textwidth]{ABDOR.pdf}\n\\centering\n\\caption{Spectra of three members of the young moving group AB Dor, placed at 10 pc. The two objects of approximately the same mass lie almost on top of each other, with a $Q$-branch methane feature of similar depth. 
This feature is absent in the hotter and more massive 2MASS 0355+11.}\\label{fig:ABDOR}\n\\end{figure*}\n\n\\section{Conclusions}\\label{sec:con}\n\nFrom this work, we can see that $L$-band spectra are not particularly strong diagnostic tools on their own for young L dwarfs due to a lack of strong features (with the exception of the $Q$-branch of methane). However, when used in tandem with near-IR spectra, they are extremely useful for understanding objects' atmospheric properties. The inclusion of the $L$ band highlights where models are falling short, and what physical processes models need to include to match observations. We find that adding the $L$ band lowers the best-fit effective temperature of our models by $\\sim$100 K. For these fits, when clouds are included in the models, thick ones are preferred, and when a vertical mixing rate is used (\\kzz), higher values give the best results. We also note that for models to fit the combined spectra well at the published evolutionary effective temperatures for these objects, we need models with disequilibrium chemistry and\/or clouds. Overall, we find the Tremblin and Saumon \\& Marley models fit the full spectrum of the young objects best. These results show the power of wide spectral coverage, matching conclusions from \\citet{2021MNRAS.506.1944B}, and we recommend that our data also be used to enhance future retrievals of these objects to parse their diversity. In particular, though, these observations show the value of the $L$ band for understanding the atmospheres of young brown dwarfs and of the giant exoplanets for which they act as proxies, and they give us a preview of the insights that the James Webb Space Telescope will bring. 
\n\n\\section*{Acknowledgements}\nBased on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\u00eda e Innovaci\u00f3n Productiva (Argentina), and Minist\u00e9rio da Ci\u00eancia, Tecnologia e Inova\u00e7\u00e3o (Brazil). KNA and SAB are grateful for support from the Isaac J. Tressler fund for astronomy at Bucknell University, and the Doreen and Lyman Spitzer Graduate Fellowship in Astrophysics at the University of Toledo. This work benefited from the Exoplanet Summer Program in the Other Worlds Laboratory (OWL) at the University of California, Santa Cruz, a program funded by the Heising-Simons Foundation. We appreciated conversations with Brittney Miles which enhanced this work, and we would like to thank Denise Stephens for contributing the 2MASS 2244+20 spectrum, and Pascal Tremblin for allowing us access to unpublished models. This work also benefited from The UltracoolSheet, maintained by Will Best, Trent Dupuy, Michael Liu, Rob Siverd, and Zhoujian Zhang, and developed from compilations by Dupuy \\& Liu (2012, ApJS, 201, 19), Dupuy \\& Kraus (2013, Science, 341, 1492), Liu et al. (2016, ApJ, 833, 96), Best et al. (2018, ApJS, 234, 1), and Best et al. 
(2020b, AJ, in press).\n\n\\section*{Data Availability}\nThe combined spectra for all of these objects are available in the online supplementary material for this article.\n\n\n\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nWe have carried out a high statistics ($2 \\times 10^9$ events)\nsearch for ultra-high energy\ngamma-ray emission from the X-ray binary sources Cygnus X-3 and\nHercules X-1.\nUsing data taken with the CASA-MIA detector over a five year period\n(1990-1995), we find no evidence for steady emission from either\nsource.\nThe derived 90\\% c.l. upper limit to the steady integral flux\nof gamma-rays from Cygnus X-3\nis $\\Phi (E > 115\\,{\\rm TeV}) \n< 6.3 \\times 10^{-15}$ photons cm$^{-2}$ sec$^{-1}$,\nand from Hercules X-1 it is $\\Phi (E > 115\\,{\\rm TeV}) \n< 8.5 \\times 10^{-15}$\nphotons cm$^{-2}$ sec$^{-1}$.\nThese limits are more than two orders of magnitude lower than earlier\nclaimed detections and are better than those of recent experiments\noperating in the same energy range.\nWe have also searched for transient emission on time periods of\none day and $0.5\\,$hr and find no evidence for such emission from\neither source.\nThe typical daily limit on the integral gamma-ray flux\nfrom Cygnus X-3 or Hercules X-1 is\n$\\Phi_{{\\rm daily}} (E > 115\\,{\\rm TeV}) < 2.0 \\times 10^{-13}$\nphotons cm$^{-2}$ sec$^{-1}$.\nFor Cygnus X-3, we see no evidence for emission correlated with the\n$4.8\\,$hr\nX-ray periodicity or with the occurrence of large\nradio flares.\nUnless one postulates that these sources were very active earlier\nand are now dormant,\nthe limits presented here put into question the earlier results,\nand highlight the difficulties that possible future experiments\nwill have in detecting gamma-ray signals at ultra-high energies.\n\n\n\\section{Introduction}\n\\label{sec:intro}\nCosmic ray particles span a remarkable range of energies, from\nthe MeV scale to more than 
$10^{20}\\,$eV (eV = electron Volt).\nAt energies above $1\\,$TeV ($10^{12}\\,$eV), we know that\ncosmic rays do not originate from local sources in or nearby our\nSolar System.\nTherefore, high energy cosmic rays must come from acceleration sites in the\nGalaxy at large or from outside the Galaxy.\nRemarkably, after many years of research, the exact sites of\nhigh energy cosmic ray acceleration remain unknown.\n\nThere are several difficulties that plague efforts to pinpoint the\norigins of high energy cosmic rays.\nFirst, since the bulk of the cosmic radiation is electrically\ncharged, any source information contained in the directions of the\narriving particles is lost due to deflection in the \nGalactic magnetic field.\nA second difficulty concerns the energetics of \nthe proposed cosmic ray acceleration\nmechanisms.\nFor example, \nalthough models based on shock acceleration in supernova remnants offer\nplausible explanations for the cosmic ray origin up to $10^{14}\\,$eV\n(and perhaps up to $10^{15}\\,$eV), these models become less\nsatisfying and less realistic at energies above $10^{15}\\,$eV.\nSince cosmic ray origins remain\nmysterious, it is natural to search for neutral\nradiation from point sources \nwhich, if seen,\ncould pinpoint possible\nacceleration sites.\nThe question of cosmic ray origin is thus a prime motivation for high\nenergy neutral particle (gamma-ray or neutrino) astronomy.\n\nIn addition to supernova remnants, possible galactic sources of high\nenergy particles include pulsars and compact binary systems.\nGamma-radiation has been unambiguously detected from the Crab\nNebula (a supernova remnant) at energies up to $10\\,$TeV by\nground-based detectors \\cite{ref:Snowmass}, but, historically,\nthe compact binary sources Cygnus X-3 and Hercules X-1 have received\nconsiderably greater attention in the ground-based astronomical community.\nIn the period 1975-1990, literally dozens of \ngamma-ray detections from Cygnus X-3 and Hercules 
X-1\nwere reported by numerous experiments.\nThe detections spanned\na wide range of energies ($100\\,$MeV to $10^{17}\\,$eV)\n\\cite{ref:Reviews},\nwere generally of low statistical significance\n(typically three to four standard deviations), and episodic in nature.\nOften a statistically significant signal could only be extracted\nas a result of a periodicity analysis, where the\ndata were phase-locked to a known source X-ray periodicity.\nIn spite of these difficulties, the sheer number of reports made\nit difficult to dismiss the detections as being\nentirely due to statistical fluctuation \\cite{ref:Protheroe}.\nIn fact, by the late 1980's, it was generally established that\nCygnus X-3 and Hercules X-1 were powerful\nemitters of high energy gamma-rays (although \ncontrary interpretations of the data existed \\cite{ref:Chardin}).\nA number of new, more sensitive\nground-based air shower arrays were commissioned\nat this time, including the CASA-MIA experiment in Dugway, Utah (USA).\nThis paper describes long-term (1990-1995) observations of Cygnus\nX-3 and Hercules X-1 by CASA-MIA.\n\nIn the following section, we\noutline the experimental techniques of gamma-ray astronomy.\nWe summarize the properties of\nCygnus X-3 and Hercules X-1, and\nreview previous observations\nof these sources and the astrophysical ramifications from the\nobservations.\nWe then turn our attention to the experimental apparatus,\nthe event reconstruction procedures, and the methods used to\nselect gamma-ray candidate events.\nWe present results from a \ndata sample of $2 \\times 10^9$ events, and\nconclude with comparisons of our results to those of\nearlier and contemporaneous experiments.\n\n\\section{Experimental Techniques of Gamma-Ray Astronomy}\n\\label{sec:techniques}\nGamma-ray sources typically exhibit power law spectra;\nfluxes fall rapidly with increasing energy.\nSpace-borne experiments (on satellites or balloons) \ncurrently have sufficient\nsensitivity to detect gamma-rays 
up to an energy of $\\sim 10\\,$GeV.\nAstronomy at higher energies requires very large collection\nareas available only to ground-based telescopes.\nGround-based instruments rely upon the fact that\nhigh energy gamma-rays interact in the Earth's atmosphere to\nproduce extensive air showers.\nAt energies near $1\\,$TeV, \natmospheric Cherenkov telescopes \nuse optical techniques to\ndetect the Cherenkov radiation in\nthe shower.\nAt higher energies ($\\sim 10\\,$TeV and above),\nthe charged particles in the shower penetrate to ground level.\nHere, air shower arrays \nrecord the arrival times and particle densities of the\ncharged particles.\nAt energies above $10^{17}\\,$eV, there is enough energy in the shower\nto allow the detection of \nnitrogen fluorescence in the atmosphere.\nThe faint near ultraviolet fluorescence signal can be optically detected at\nnight by experiments such as the Fly's Eye.\n\n\\section{Discussion of the Sources and Earlier Results}\n\\label{sec:sources}\n\\subsection{Cygnus X-3}\n\\label{subsec:cygnus}\nCygnus X-3 is one of the most luminous X-ray sources in our\nGalaxy \\cite{ref:Giacconi}.\nThe X-ray emission is characterized by a $4.8\\,$hr \nperiodicity,\nwhich is assumed to be associated with the orbital motion of\na compact object (neutron star or black hole) around its binary\ncompanion.\nThe periodicity has been well studied; a complete ephemeris\nis available for the period from 1970-1995 \n\\cite{ref:Parsignault,ref:vanderKlis1,ref:vanderKlis2,ref:Kitamoto}.\nIn addition to being a powerful X-ray source, Cygnus X-3 is seen\nin the infrared and is a strong and variable radio source.\nRadio flares have been detected in which the output from the source\nincreases by two to three orders of magnitude on the time scale of\ndays \n\\cite{ref:Gregory1,ref:Johnston,ref:Waltman1,ref:Waltman2}.\nThese flares were first detected in 1972 and the\noutbursts have continued through 1994.\nSince\nCygnus X-3 lies in the galactic plane, its optical 
emission\nis largely obscured by interstellar material.\nThe lack of a strong optical signal makes determination of\nthe distance to Cygnus X-3 difficult, but general\nconsensus places it near $10\\,$kpc \\cite{ref:Lauque}.\n\nThe first published result claiming the detection of\ngamma-ray emission from Cygnus X-3 \ncame in 1977 from the SAS-2 satellite \nat low gamma-ray energies (E$> 35\\,$MeV) \\cite{ref:Lamb1}.\nThis result made use of an apparent correlation between the\ngamma-ray arrival times and the $4.8\\,$hr\nX-ray periodicity.\nLater observations by the COS-B satellite \n\\cite{ref:Bennett}\nwith a much larger\nsource exposure failed to confirm the SAS-2 result, and the COS-B\nauthors argued that the initial detection \\cite{ref:Lamb1}\nwas flawed because of an incorrect treatment of the diffuse\ngamma-ray component \\cite{ref:Hermsen}.\nTo complicate matters, there have been re-examinations\nof both the SAS-2 \\cite{ref:Fichtel} and COS-B \\cite{ref:Li}\ndata sets which claim that \nsignals indeed exist in both cases.\nMost recently, the EGRET experiment on the Compton Gamma-Ray Observatory\n(CGRO) failed to detect gamma-ray emission from Cygnus\nX-3 at a sensitivity level comparable to COS-B\n\\cite{ref:Michelson}.\nTo summarize,\nthere exists some controversy as to whether low energy\ngamma-rays have {\\it ever} been detected from Cygnus X-3.\nRegardless, it can be reasonably concluded\nthat the source is not a strong emitter of gamma-rays in the energy\nrange between $30\\,$MeV and $10\\,$GeV.\n\nThe first published report of very-high energy\ngamma-ray emission from Cygnus X-3\ncame from the Crimean Observatory \nusing an atmospheric Cherenkov telescope at energies above $2\\,$TeV \n\\cite{ref:Neshpor}.\nThis result was based on data taken between 1972 and 1977 and \nthe emission was claimed to be correlated with the\n$4.8\\,$hr\nX-ray period.\nFrom 1980 to 1990, there were numerous additional detections\nof Cygnus X-3 by atmospheric Cherenkov 
telescopes \n\\cite{ref:Danaher,ref:Lamb2,ref:Dowthwaite1,ref:Cawley1,%\nref:Chadwick,ref:Bhat,ref:Brazier}.\nThe detections were generally episodic in nature and \nusually required the\nuse of the \n$4.8\\,$hr\nperiodicity to extract a signal.\nEvidence for a 12.6 msec gamma-ray pulsar inside the Cygnus X-3 system\nwas claimed on more than one occasion\n\\cite{ref:Chadwick,ref:Brazier,ref:Gregory2}.\n\nAt the higher energies accessible by ground arrays, \nevidence for ultra-high energy gamma-ray emission from\nCygnus X-3 was presented by the Kiel array \\cite{ref:Samorski}\nand subsequently by the Haverah Park experiment \\cite{ref:Lloyd}.\nThese results were based on data taken between 1976 and 1980.\nThe gamma-ray emission was apparently steady over this time\nperiod and was\ncorrelated with the X-ray\nperiodicity.\nAdditional evidence \nfor gamma-ray emission from Cygnus X-3\nwas later reported by other air shower detectors\n\\cite{ref:Kifune,ref:Alexeenko,ref:Baltrusaitis1,%\nref:Tonwar1,ref:Morello}.\n\nAt extremely-high energies (E $> 5 \\times 10^{17}\\,$eV),\nevidence was presented for neutral particles from the direction\nof Cygnus X-3 by the Fly's Eye \\cite{ref:Cassiday1} and\nby Akeno \\cite{ref:Teshima} groups, based on data taken \nduring the periods 1981-1989 and 1984-1989, respectively.\nThese data apparently indicated steady emission of neutral\nparticles from Cygnus X-3 that was uncorrelated with the X-ray periodicity.\nThe Haverah Park experiment, operating in the same energy range, and\nduring much of the same period in time,\nfound no evidence for such emission\n\\cite{ref:Lawrence}.\n\nThe evidence \nfrom ground-based experiments \nfor gamma-ray emission from Cygnus X-3 from 1975 to 1990\nis shown in Figure~\\ref{fig:OldCyg}.\nHere,\nthe integral gamma-ray fluxes\nare plotted as a function of energy.\nAlso\nshown is a single power law fit of the\nform E$^{-1.1}$.\nThe fact that the gamma-ray fluxes at widely varying energies\ncould be 
approximately fit by a single power law was taken by\nsome as evidence of a unified acceleration mechanism at the source.\nOne should note, however, that all results\nshown in Figure~\\ref{fig:OldCyg} represent {\\it integral}\nflux measurements by experiments incapable of accurately measuring\ndifferential fluxes.\nSince the detections were generally\nonly marginally statistically significant, the reported fluxes\nequally represent the three to four standard deviation\nsensitivity of each instrument at a fixed energy.\nThe fluxes\ntherefore would naturally fall on an E$^{-1}$ power law,\nif the sensitivities of the experiments scaled linearly\nwith energy (which was approximately true for these\nfirst-generation experiments).\nIt has also been pointed out that even if the source\nmechanism produced emission with a single power law form,\nthe {\\it detected} flux at Earth would have a significant\ndip between $10^{3}$ and $10^{4}\\,$TeV \nbecause of absorption of gamma-rays by\nthe cosmic microwave radiation \\cite{ref:Cawley2}.\n\nStarting with the CYGNUS experiment in 1988 \\cite{ref:Dingus1},\na number of more sensitive ground-based experiments were\nunable to detect gamma-ray emission from Cygnus X-3,\nat levels significantly lower than the earlier reports.\nUpper limits on the flux were reported for experiments using both the\natmospheric Cherenkov technique \\cite{ref:Fegan},\nas well as the ground-array technique\n\\cite{ref:Cassiday2,ref:Alexandreas1}.\nUsing parts of the eventual CASA-MIA detector, some\nof us reported limits for data taken in 1988-1989\n\\cite{ref:Ciampa} and in 1989 \\cite{ref:Cronin}.\nThe general trend of a ``quiet'' Cygnus X-3\ncontinued into the early 1990's, although\nthere were several publications claiming gamma-ray emission based\nlargely on data that had been taken in the previous decade\n\\cite{ref:Muraki,ref:Bowden,ref:Tonwar2}.\n\n\\subsection{Hercules X-1}\n\\label{subsec:hercules}\nLike Cygnus X-3, Hercules X-1 is a 
powerful binary X-ray \nsource \\cite{ref:Tananbaum}.\nThe X-ray emission is modulated on a time scale of 1.7 days\nwhich is assumed to result from the\norbital motion of the binary pair.\nUnlike Cygnus X-3, Hercules X-1 is not seen in radio, but\nhas been observed for many years in the optical \nrange \\cite{ref:Jones} and a $5.8\\,$kpc distance\nto the source has been determined \\cite{ref:Forman}.\nIn addition, Hercules X-1 contains an X-ray pulsar\nwith a period of $1.24\\,$sec \\cite{ref:Tananbaum}, but\nwhose ephemeris is relatively poorly determined because of\nunpredictable variations in the spin-up rate \\cite{ref:Deeter}.\nHercules X-1 has not been detected by space-borne gamma-ray instruments.\n\nThe first evidence from a ground-based observatory for gamma-ray emission\nfrom Hercules X-1 was reported in 1984 by the Durham group using\nthe atmospheric Cherenkov technique \\cite{ref:Dowthwaite2}.\nThe reported gamma-ray emission came in the form of a short\nburst ($\\sim 3$ minute duration) that exhibited 1.24 sec\nperiodicity.\nFollowing this report, additional pulsed emission was claimed by\nCherenkov detectors operating at ultra-high energies (E $> 500\\,$TeV)\n\\cite{ref:Baltrusaitis2} and at TeV energies\n\\cite{ref:Gorham1,ref:Gorham2}.\n\nThe most intriguing evidence for gamma-ray emission from\nHercules X-1 came from data taken in 1986 by three\nexperiments.\nData taken between April and July of 1986 by \nthe Haleakala \\cite{ref:Resvanis}\nand Whipple \\cite{ref:Lamb3} telescopes operating\nnear $1\\,$TeV, and by the CYGNUS experiment \\cite{ref:Dingus2}\nin July of 1986 operating at energies above\n$50\\,$TeV, all indicated evidence for gamma-ray emission from Hercules\nX-1 in the form of short bursts of approximately 0.5 hr duration.\nIn addition, the emission detected by each experiment exhibited\na common periodicity near $1.2358\\,$sec,\nwhich differed by a significant amount \n($\\sim 0.16\\%$ lower)\nfrom the known X-ray period.\nThe data 
from the CYGNUS experiment was further puzzling\nbecause the events from the direction of Hercules X-1\nhad a muon content that was similar to the cosmic ray background events,\nwhereas gamma-ray events should have contained\nsignificantly fewer muons.\nLater, two groups with somewhat poorer sensitivity presented\nadditional evidence for gamma-ray emission from Hercules X-1 at\ndifferent times in 1986 \nat TeV \\cite{ref:Vishwanath} and $100\\,$TeV \\cite{ref:Gupta}\nenergies.\n\nSince the advent of upgraded and improved experiments in the early\n1990's, the gamma-ray signals from Hercules X-1 disappeared\nfrom the published literature.\nThe Whipple group, using a more sensitive Cherenkov imaging\ntechnique, failed to detect emission from Hercules X-1, and found \nno statistically significant evidence for gamma-ray\nemission from Hercules X-1 over a six year period, \neven including their data from 1986 \\cite{ref:Reynolds}.\nAn enlarged and improved\nCYGNUS experiment also failed to see gamma-rays from\nHercules X-1 \nin the period between 1987 and 1991\n\\cite{ref:Alexandreas2}.\nUsing data taken in 1989,\nwe reported upper limits on the emission of gamma-rays from\nHercules X-1 using part of the eventual CASA-MIA experiment \\cite{ref:Cronin}.\n\n\\subsection{Theoretical Implications}\n\\label{subsec:theory}\nThe many claims of very high energy gamma-ray\nemission from the binary systems Cygnus X-3 and Hercules\nX-1 fueled great interest in the development of astrophysical\nmodels to explain such emission.\nThere were also non-standard particle physics models put forward\nto explain the observations; these models will not be discussed here.\n \nFor the case of Cygnus X-3, where the gamma-ray emission was\ngenerally observed with a \n$4.8\\,$hr\nperiodicity, the astrophysical\nmodels needed to incorporate the orbital dynamics of the\nbinary system.\nModels in which an energetic pulsar alone served as the power source\nfor the gamma-rays \\cite{ref:Cheng1}\nor in 
which accretion powered the gamma-rays\n\\cite{ref:Chanmugam}\nwere proposed.\nThese models generally had\ndifficulty\nin producing gamma-rays at energies above $10^{15}\\,$eV.\nMore popular were a \ngeneral class of models in which the gamma-rays were produced from\nthe decays of $\\pi^0$ mesons made in hadronic collisions\n\\cite{ref:Vestrand,ref:Eichler1,ref:Kazanas}.\nThe hadronic beam \nresulted from\ndiffusive shock acceleration,\nperhaps near the neutron star, and possible beam targets included\nthe\natmosphere of the companion star or material in the accretion disk.\nSuch ``beam-dump'' models were capable of explaining\ngamma-rays at ultra-high\nenergies and were also able to accommodate the observed periodicities of\nthe gamma-ray signals.\nSeveral authors recognized\nthat in order to explain\nthe ultra-high energy gamma-ray fluxes initially seen,\nthe required luminosity of Cygnus X-3 \nwould also be sufficient to account for\na substantial fraction of the high energy cosmic ray flux\n\\cite {ref:Wdowczyk}.\nHillas pointed out that if Cygnus X-3 consisted of a $10^{17}\\,$eV\naccelerator with a luminosity of $\\sim 10^{39}\\,$ergs\/sec,\nonly one such object like it would be required to explain the origin\nof cosmic rays above $10^{16}\\,$eV \\cite{ref:Hillas}.\n\nUnlike Cygnus X-3, the gamma-ray emission from Hercules X-1 was not\nseen to be correlated with the orbital motion of the binary system,\nbut instead with the pulsar periodicity.\nThis observation, along with evidence that the emission appeared in\nthe form of short bursts,\nled naturally to models in which the pulsar itself was the \npower source.\nIn such models,\nthe gamma-rays were produced by the interaction\nof a charged particle beam with the accretion disk\n\\cite{ref:Eichler2,ref:Gorham3}.\nMore difficult to explain were the 1986 observations of\ngamma-ray emission at a slightly shorter period than the X-ray period.\nThe anomalous gamma-ray periodicity was\nexplained by the presence of 
matter in the accretion disk\nwhich periodically obscured the gamma-ray interaction region\n\\cite{ref:Cheng2,ref:Slane}.\n\nIn summary, although theoretical difficulties existed\nin explaining the apparent signals of gamma-rays from\nCygnus X-3 and Hercules X-1, the signals were\ntantalizing because of the possibility that they\nrevealed important sources of the ultra-high\nenergy cosmic ray flux.\n\n\\section{Experimental Procedure}\n\\label{sec:experiment}\n\\subsection{The CASA-MIA Experiment}\n\\label{subsec:casamia}\nThe CASA-MIA experiment is located in Dugway, Utah, USA \n($40.2^\\circ\\,$N, $112.8^\\circ\\,$W) at an altitude\nof $1450\\,$m above sea level ($870\\,$g\/cm$^2$ atmospheric depth).\nCASA-MIA consists of two major components: the Chicago\nAir Shower Array (CASA), a large surface array of scintillation detectors,\nand the Michigan Array (MIA), a buried array of scintillation counters\nsensitive to the muonic component of air showers.\n\nCASA consists of 1089 scintillation detectors placed on a \n$15\\,$m square grid and enclosing an area of $230,400\\,$m$^2$.\nConstruction on the array started in 1988 and a small portion\n(5\\%) of the experiment operated in 1989. 
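The quoted station count and enclosed area are consistent with a simple square layout. As a quick sanity check (assuming the 1089 stations form a full 33 x 33 grid, which the 15 m spacing and the 230,400 m$^2$ enclosed area imply), the geometry can be verified numerically:

```python
# Consistency check for the CASA surface-array geometry described above.
# Assumption (not stated explicitly in the text): the 1089 stations form a
# complete 33 x 33 square grid with 15 m spacing between neighbors.
stations_per_side = 33
spacing_m = 15.0

n_stations = stations_per_side ** 2                # 33^2 = 1089 detectors
side_m = (stations_per_side - 1) * spacing_m       # 32 * 15 m = 480 m outer span
enclosed_area_m2 = side_m ** 2                     # 480^2 = 230,400 m^2

print(n_stations)        # 1089
print(enclosed_area_m2)  # 230400.0, matching the enclosed area quoted above
```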
A more\nsubstantial portion ($\\sim$50\\%) of it was completed by early 1990.\nData collection with this portion \nstarted on March 1, 1990.\nAdditional detectors were added in 1990 \nto complete the construction.\n\nMIA consists of 1024 scintillation counters located beneath CASA in\n16 groupings (patches).\nThe total active scintillator area is $2,500\\,$m$^2$ and the counters\nare buried beneath $\\sim 3.5\\,$m of soil.\nThis depth corresponds to a muon threshold energy of approximately $0.8\\,$GeV.\nParts of MIA were operational as early as 1987, 50\\% of the experiment\nwas completed by early 1990, and the entire array was working\nby early 1991.\nThe CASA-MIA\nexperiment was turned off temporarily in 1991 for repair due to\nlightning damage, but has operated essentially uninterrupted since\nthat time.\nTable~\\ref{tab:array} summarizes the size and detector makeup of \nCASA-MIA as a function of time.\n\n\\begin{table}\n\\begin{center}\n\\caption{\nSize and makeup of CASA-MIA experiment as a function of time\nfor the data sample used in this analysis.\nA range of values indicates that the experiment was\nbeing enlarged during this period of time.\nData taken after August 1995 are not used in this analysis.}\n\\label{tab:array}\n\\vspace{10pt}\n\\begin{tabular}{|cccc|}\\hline\nEpoch & Enclosed Area (m$^2$) & CASA Detectors & MIA Counters \\\\\n\\hline\nMar. 1990 -- Oct. 1990 & 108,900 & 529 & 512 \\\\\nOct. 1990 -- Apr. 1991 & 108,900--230,400 & 529--1089 & 512--1024 \\\\\nJan. 1992 -- Aug. 1993 & 230,400 & 1089 & 1024 \\\\\nAug. 1993 -- Aug. 
1995 & 216,225 & 1056 & 1024 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nFigure~\\ref{fig:array} shows a plan view of the experimental site.\nIn addition to CASA-MIA, there are other installations\nat the same site.\nThe other equipment used in this analysis is an array\nof five tracking Cherenkov telescopes.\nOne telescope is located at the center of CASA-MIA, and the other\nfour are 120$\\,$m away from the center along the major axes.\nEach telescope consists of a $35\\,$cm diameter mirror which focuses\nCherenkov radiation onto a single $5.1\\,$cm photomultiplier tube (PMT).\nThe signals from the PMTs are digitized to record the amplitude and\ntime of arrival of the Cherenkov wavefront at each telescope location.\nThe shower direction is reconstructed\nby fitting the Cherenkov arrival times to a conical wavefront.\n\nA complete description of the CASA-MIA experiment can be found\nelsewhere \\cite{ref:NIMPaper}; here we briefly describe some aspects\nof the experiment that are relevant for this analysis.\nEach CASA station consists of four scintillation counters connected\nto a local electronics board.\nA station is {\\it alerted} when at least two of the four counters\nfire within a $30\\,$nsec window and it is {\\it triggered} when at least\nthree of the counters fire.\nIf three or more stations trigger within a time period of approximately\n$3\\,\\mu$sec, an {\\it array trigger} is said to have occurred.\nThe array trigger rate depends on operating conditions\n(e.g. 
atmospheric pressure), but is\n$\\sim$20 Hz for the full CASA-MIA experiment.\n\nUpon an array trigger, the Universal Time (UT) is latched and\nrecorded by either a GOES Satellite Receiver Clock (1990-1993)\nwith a precision of $\\pm 1\\,$msec, or by a Global Positioning System (GPS)\nclock (1993-1995) with a precision of $\\pm 100\\,$nsec.\nFor each array trigger,\na command is broadcast to the array\ninstructing each alerted CASA station to digitize\nand record its data, and\na signal is generated to stop time-to-digital\nconverters (TDCs) on each MIA counter.\nThe TDCs have a range of $4\\,\\mu$sec and \na least bit precision of $4\\,$nsec.\nThe data from each CASA station consist of the arrival times and pulse-height\namplitudes of the pulses from each scintillation counter, as well\nas the arrival times of pulses from the four nearest neighbor stations.\nThe data from each MIA counter consist of the \ntime of arrival of the array trigger\nrelative to the passage of a muon through the counter.\nThe CASA-MIA data and the Universal Time recorded as the result\nof an array trigger\ncorrespond to a single air shower {\\it event}.\n\n\\subsection{Event Reconstruction}\n\\label{sec:recon}\nWe briefly summarize some of the important aspects of the\nCASA-MIA event reconstruction; full details can be found\nelsewhere \\cite{ref:NIMPaper}.\nThe data from the experiment are accumulated in runs of six hours\nduration.\nAll calibrations and offsets are \ndetermined for each run separately.\nAt the start of a run, the timing constants associated with the CASA\nstation electronics are calibrated by an internal oscillator.\nTiming constants are corrected for the effects of temperature\nby studying the constants over the span of a week.\nThe CASA counter particle gains are determined for each run from\nthe abundant cosmic ray air showers.\nThe counter gains are found from the PMT amplitude distributions\nof those counters hit in stations with two out of four counters hit.\nA 
statistical correction of $\\sim 20\\%$\naccounts for the fact that\non average slightly more than one particle passes through a counter\nin this situation.\nThe CASA cable and electronic delays \nare determined from the zenith angle distributions of the detected\nevents.\nThe relative delay between the CASA and MIA trigger systems is\ndetermined by centering the peak of the muon arrival time distribution\nrelative to the position of the CASA trigger time.\n\nWe estimate the shower\n{\\it core position} \nby the location on the ground with the highest particle density.\nThe total number of particles in the shower, or \n{\\it shower size}, is determined\nby fitting the density samples obtained from the CASA stations to\na lateral distribution of fixed form.\nThe mean number of alerted CASA stations is 19 and the\nmean shower size is $\\sim 25,000$ equivalent minimum ionizing\nparticles.\n\nThe {\\it shower direction} is determined from the timing information\nrecorded by CASA.\nThe relative times between pairs of adjacent alerted CASA stations\nare determined.\nEach relative time gives a measure of the shower\ndirection along one axis of the experiment.\nThe times are weighted by an empirical function of the\nlocal particle density and distance to the shower core, and\nare fit to a wavefront which accounts for\nthe conical shape of the shower front.\nThe cone slope is approximately $0.07\\,$nsec\/m.\nThe shower direction in local coordinates is defined by two\nangles.\nThe zenith angle, $\\theta$, is measured with respect to the vertical\ndirection and the azimuthal angle, $\\phi$, is measured with respect\nto East in a counter-clockwise manner.\n\nIn order to be confident of any astronomical results, it is essential\nto measure the {\\it angular resolution} of the experiment.\nThe resolution has two parts.\nThe statistical part largely derives from\nthe intrinsic \nfluctuations in the arrival times of the\nshower particles\nand from the timing resolution of the 
CASA counters and electronics.\nThe systematic contribution derives primarily from the\naccuracies of the experiment survey and of\nthe calculation of timing delays and offsets.\n\nThe statistical contribution to the angular resolution\nis determined by three different techniques.\nFirst, on an event-by-event basis, we divide the array into\ntwo overlapping sub-arrays and compare the\nshower directions that are reconstructed by each sub-array.\nUsing an air shower and detector simulation, we estimate\nthe statistical correction required to derive the angular\nresolution from the sub-array direction comparison.\nSecond, we compare the shower direction as determined by CASA with\nthe direction determined by the five Cherenkov telescopes for\nevents in which both CASA and the telescopes triggered.\nBy statistically removing the angular resolution of the telescopes\nfrom this comparison, we estimate the CASA resolution.\nThird, we have detected the shadow that the Moon casts in the cosmic\nrays \\cite{ref:Moon}. 
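The statistics behind the first (sub-array) technique can be illustrated with a short simulation. The sketch below assumes each sub-array reconstructs the true direction with an independent Gaussian error along one axis; the resolution value and event count are arbitrary illustrative choices, not CASA-MIA numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each sub-array reconstructs the shower direction with an
# independent Gaussian pointing error along one axis (illustrative value).
sigma_sub = 1.0          # assumed per-sub-array resolution, degrees
n_events = 100_000
err_a = rng.normal(0.0, sigma_sub, n_events)   # sub-array A
err_b = rng.normal(0.0, sigma_sub, n_events)   # sub-array B

# The spread of the A-B direction difference is sqrt(2) times the
# per-sub-array resolution, so the latter can be recovered from data
# without knowing the true shower directions.
sigma_diff = np.std(err_a - err_b)
sigma_sub_est = sigma_diff / np.sqrt(2.0)
print(f"recovered sub-array resolution: {sigma_sub_est:.3f} deg")
```

In the real analysis, the final step of relating the sub-array resolution to the full-array resolution is taken from the air shower and detector simulation rather than from a naive $\sqrt{2}$ scaling.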
\nFor data taken between 1990 and 1995, the Moon \nshadow is shown in Figure~\\ref{fig:moon}.\nBy deconvolving the size of the Moon from the measured shadow,\nwe obtain another estimate of the angular resolution.\nFigure~\\ref{fig:resolution} shows the resolution estimates\nfrom these different techniques.\nThe agreement between the various methods is good, which allows\nus to determine a single parametric form for the resolution as\na function of the number of alerted CASA stations.\n\nThe systematic contribution to the angular resolution \nof the experiment has been checked by two different\ntechniques.\nFirst, for data taken in coincidence with the tracking\nCherenkov telescopes, we examine \nthe angular difference between the directions determined by CASA\nand by the telescopes.\nThe distribution of these differences indicates\nthat the systematic offset between\nCASA and the telescope array is very small ($< 0.1^\\circ$).\nThe alignment of the telescope array has been verified by the\nobservation of a number of stars.\nA second check on the pointing accuracy of CASA comes from the\nMoon shadow. 
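The Moon-shadow pointing check can be mimicked with a toy Monte Carlo: cosmic rays absorbed by the Moon leave a deficit whose smeared centroid should sit at the Moon's known position, so any displacement of the centroid measures a pointing offset. All numbers below are illustrative assumptions, not the experiment's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Moon-shadow pointing check (all values illustrative).
moon_pos = np.array([0.0, 0.0])   # known Moon position (deg)
moon_radius = 0.26                # angular radius of the Moon (deg)
sigma_psf = 0.7                   # assumed Gaussian angular resolution (deg)
n_blocked = 50_000                # cosmic rays absorbed by the Moon

# Positions (relative to the Moon) of the rays the Moon removed:
# uniform over the lunar disk, then smeared by the resolution.
r = moon_radius * np.sqrt(rng.random(n_blocked))
phi = 2.0 * np.pi * rng.random(n_blocked)
true_xy = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
smeared = true_xy + rng.normal(0.0, sigma_psf, (n_blocked, 2))

# The centroid of the deficit estimates the apparent Moon position;
# its displacement from the known position measures the pointing error.
centroid = smeared.mean(axis=0)
offset = np.hypot(*(centroid - moon_pos))
print(f"pointing offset estimate: {offset:.3f} deg")
```

With unbiased pointing, the recovered offset is consistent with zero to within the statistical precision of the centroid, roughly $\sigma/\sqrt{N}$.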
The center of the Moon shadow image is within \n$0.1^\\circ$ of the known position of the Moon.\nWe conclude that\nthe pointing uncertainty of CASA is negligible\nin comparison with the experiment's angular resolution.\n\nThe {\\it muon content} of the shower is determined from the\ndata recorded by MIA.\nSince MIA records \nthe times of counter hits over an \ninterval of $4\,\\mu$sec, it is sensitive to muons\nproduced by showers arriving at any location of the\narray and from any direction.\nDuring the same time interval, MIA\nalso records accidental counter hits produced by PMT noise and by\nnatural radioactivity in the ground.\nThe average number of accidental hits is approximately \nsixteen per event,\nwhile\nthe average number of real muons associated with air showers \nis approximately nine per event.\n\nReal muons arrive within 100$\,$nsec of the shower front arrival, while\nthe accidental hits occur randomly over the $4\,\\mu$sec interval.\nWe greatly reduce the acceptance for accidental muon hits by narrowing the\ntime window for accepting muons.\nThe width of the window is determined from the distribution of\nmuon times for each six-hour run.\nWe set the width to encompass 95\\% of the real shower muons;\non average, it is $\\sim 150\,$nsec.\nThe position of the window is found on an event-by-event basis by\nmeans of a clustering algorithm.\nThe algorithm searches for a cluster of three or more muons within an\ninterval of $40\,$nsec.\nIn approximately 25\\% of the events, no cluster is found and the\nwindow position is placed at the center of the muon time distribution\nas determined for the entire run.\nAs a result of tightening the time window for muon hit acceptance,\nthe average number of real muons recorded is 8.5 per event,\nwhile the average number of accidentals is 0.63 per event.\n\nThe CASA-MIA data undergo several stages of processing and compression.\nIn the most highly compressed format upon which this analysis is based,\nthe data 
records are 26 Bytes per event and\ninclude the following information for each event:\nUniversal Time (UT), number of alerted CASA stations, number of in-time\nmuon hits, core location, arrival direction, shower size,\nand muon shower size (not used here).\n\n\\section{Analysis}\n\\label{sec:anal}\n\\subsection{Data Sample}\n\\label{subsec:sample}\nThe data used in this analysis were taken between March 4, 1990 and\nAugust 10, 1995, with a gap of 255 days in 1991.\nThe experiment had usable data on 1627 days\nwith the remainder of the days lost \nlargely because of power\noutages at the site and computer problems.\nThe experiment has an instrumental deadtime of approximately\n5.4\\% which is due to a number of effects, including \ndata acquisition computer latency and\nthe time needed to digitize the CASA station data.\nCalibration runs of approximately six minutes in length taken\nat the start of data runs,\nlosses due to $8\,$mm tape failures,\nand downtime from array maintenance led\nto an additional reduction in the live time to a total of \n1378.4 days (84.7\\% of the total).\nAfter the reduction and processing of the data, the final data sample\nconsisted of $2.0878 \\times 10^9$ reconstructed events.\n\nThe size of the CASA-MIA data sample is unprecedented in air shower physics.\nTo ensure data integrity, we\nimpose a comprehensive\nset of data quality cuts.\nThe cuts are tailored separately for the data sample in which we\nonly use information from the surface array \n({\\it all-data}) and the sample in which we use information from both the\nsurface and muon arrays ({\\it muon-data}).\nFor each of these samples, we make quality cuts on an event-by-event basis\nand on a run-by-run basis.\nCuts are applied to runs and events only in cases\nwhere there is evidence of an instrumental bias.\nThe efficiencies of the \ncuts are summarized in Table~\\ref{tab:events}.\n\n\\begin{table}\n\\begin{center}\n\\caption{Quality cut efficiencies and event totals for 
\nthe CASA-MIA data sample.\nThe muon-data quality cuts are applied after the all-data quality\ncuts.\nThe data sets (all and muon) are described in the text.}\n\\label{tab:events}\n\\vspace{10pt}\n\\begin{tabular}{|ccc|}\\hline\n Category & All-Data Sample & Muon-Data Sample \\\\\n\\hline\nInitial Event Total & 2087.8M & 1925.8M \\\\\nRun Cut Efficiency & 0.935 & 0.929 \\\\\nEvent Cut Efficiency & 0.986 & 0.896 \\\\\n\\hline\nOverall Efficiency & 0.922 & 0.832 \\\\\n\\hline\nFinal Event Total & 1925.8M & 1602.7M \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nFor the all-data sample, the run and event cuts have a combined efficiency\nof 92.2\\%, which yields a final sample of $1.9258\\times 10^9$ events.\nThe most restrictive run cut requires\na minimum fraction of the CASA stations to be working reliably and\nremoves 2.2\\% of the data, largely because of instances in which\nisolated parts of the array failed.\nFor the muon-data sample, the run cuts have an efficiency of 92.9\\%.\nA cut which requires a sufficient fraction of the muon\ncounters to be working removes 4.8\\% of the data.\nThe event cuts have an additional efficiency of\n89.6\\%.\nThe most restrictive event cut eliminates 3.2\\% of the events\nbecause they have no muon information due to deadtime of the\nMIA data acquisition system.\nThe overall efficiency of the muon-data cuts is 83.2\\%, which yields\na final sample of $1.6027\\times 10^9$ events.\n\n\\subsection{Gamma-Ray Selection}\n\\label{subsec:select}\nFrom prior observations of Cygnus X-3 and Hercules X-1, \nwe expect that gamma-ray\nfluxes, if present, will be\nsmall in comparison with the isotropic cosmic ray flux.\nTherefore, we need to enhance the presence of a possible gamma-ray signal\nby eliminating as many cosmic ray air showers as possible, while\nkeeping a high fraction of the gamma-ray air showers.\nTo do this, we select\nthose showers with a reconstructed direction\nconsistent with the position of the sources 
(within the angular resolution\nof the experiment) and with a muon content\nconsistent with that expected from a gamma-ray primary.\n\n\\subsubsection{Angular Search Bin}\n\\label{subsec:search}\nWe define a circular search bin whose size is based on the estimated angular\nresolution of the experiment.\nFor a sufficiently large number of events, the\nbin which optimizes the signal-to-noise has a size equal to\n$1.59$ times the angular resolution and contains 72\\% of the signal.\nThe CASA-MIA angular resolution depends on\nthe number of alerted CASA stations in an event, and therefore we use\na variable-sized search bin which scales with the number of alerts.\nFor simplicity, we use seven different bin sizes that range from\n$2.45^\\circ$ radius for showers with the least number of alerts,\nto $0.41^\\circ$ radius for showers with the largest number of alerts.\nThese bin sizes are shown in Table~\\ref{tab:bins}, along with the\nfraction of events in each alert range.\n\n\\begin{table}\n\\begin{center}\n\\caption{Angular search bin sizes and event fractions \nas a function of the number of CASA alerts.\nThe search bin is a circular region in equatorial coordinates whose\nradius is equal to $1.59$ times the angular resolution.}\n\\label{tab:bins}\n\\vspace{10pt}\n\\begin{tabular}{|ccc|}\\hline\nAlert Range & Event Fraction & Search Bin Radius \\\\\n\\hline\n{\\ 3 - 10 } & 0.331 & $2.45^\\circ$ \\\\\n{ 11 - 15 } & 0.224 & $1.88^\\circ$ \\\\\n{ 16 - 20 } & 0.121 & $1.40^\\circ$ \\\\\n{ 21 - 30 } & 0.150 & $1.05^\\circ$ \\\\\n{ 31 - 40 } & 0.064 & $0.78^\\circ$ \\\\\n{ 41 - 60 } & 0.058 & $0.60^\\circ$ \\\\\n{ $>60$ } & 0.052 & $0.41^\\circ$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsubsection{Muon Content}\n\\label{subsec:muon}\nAir showers created by gamma-ray primaries are expected to\ncontain far fewer muons than showers initiated by cosmic ray\nnuclei.\nThis expectation results because the\ncross section for\nphoto-pion production\nis much 
smaller than the cross section for electron-positron\npair production \\cite{ref:HERA}.\nTherefore,\nthe interaction of a high energy gamma-ray in the atmosphere\nis much more likely to produce an electromagnetic cascade\nthan it is to create a hadronic cascade.\nConversely,\ncosmic ray nuclei preferentially\ninteract to create hadronic cascades.\nShowers initiated by gamma-rays are thus expected to\ncontain far fewer\nhadrons than those initiated by cosmic rays.\nSince air shower muons are predominantly produced from the\ndecays of pions and kaons in the hadronic cascade,\ngamma-ray air showers should contain far fewer muons as well.\nSimulations have been done to estimate the muon content of\nair showers\n\\cite{ref:Chatelet,ref:Halzen}.\nOur own simulation indicates that an air shower initiated by a \n$100\,$TeV gamma-ray contains, on average, 3-4\\% of the number of muons\nin a shower initiated by a proton of the same energy.\n\nThe muon content of showers should in principle\nbe a powerful tool in rejecting cosmic ray background events.\nIn our experiment, the rejection capability is limited\nby the collection area of the muon array and, to a lesser\nextent, by the presence of a small number of accidental\nmuon hits.\nThe muon array (MIA) is significantly larger than any other\nair shower muon detector built to date, \nbut its active area still corresponds to only\n$\\sim 1\\%$ of the enclosed area of the experiment.\nAs shown in Figure~\\ref{fig:in_time_muons}, the average \nof the distribution of the number of in-time muons\nis $\\sim$8.5, but the shape of the distribution\nis such that its mode is three, and a substantial fraction of events\nhave zero muons.\nIn Figure~\\ref{fig:in_time_muons}, we also show the estimated number of\nmuons for showers initiated by gamma-rays, including the contribution\nfrom accidental\nmuon hits.\nFor gamma-ray showers, we expect, on average, 0.28 real muons per event\nand 0.63 accidental muons per 
event.\n\nIn order to enhance a possible gamma-ray signal,\nwe wish to select {\\it muon-poor} events, i.e. events that\nhave fewer muons than the average expected number.\nTo do this, we make the assumption that any gamma-ray\nsignal in the data is much smaller than the flux of cosmic rays.\nWe can therefore use\nthe muon information from the detected events\nto describe the muon content of the background, and \nour simulation to describe the muon content of the gamma-ray\nsignal.\n\nThe number of muons in a shower depends on a number of observable\nquantities, for example,\nthe number of alerted CASA\nstations, shower zenith angle, and core position.\nWe develop a parameterization for the average number of muons\nas a function of these quantities\nby examining a large ensemble of actual showers.\nWe then determine the relative muon content of a specific shower\nby comparing the observed muon number, $({\\rm n}_\\mu)_{{\\rm obs}}$,\nto the expected number of muons, $<{\\rm n}_\\mu>_{{\\rm exp}}$,\nfor showers\nhaving similar zenith angles, core positions, and numbers of alerts.\nThe relative muon content, ${\\rm r}_\\mu$, is\ndefined by:\n\n\\begin{equation} \n{\\rm r}_\\mu \\ \\ \\equiv\\ \\ {\\rm Log_{\\rm 10}} \\\n\\Biggl[\n{ { ({\\rm n}_\\mu)_{\\rm obs} } \\over\n { <{\\rm n}_\\mu>_{\\rm exp} } }\n\\Biggr] \\ . 
\n\\label{eq:rmu}\n\\end{equation}\n\n\\noindent Figure~\\ref{fig:rmu} shows the \ndistributions of ${\\rm r}_\\mu$ for \nobserved events and for simulated\ngamma-ray events.\nMuon-poor events are defined as those having \n${\\rm r}_\\mu$ values less than some cut value.\nThe position of the cut is chosen to reject as many background\nevents as possible, while keeping a high fraction of the gamma-ray\nevents.\nThe cut value depends weakly on the number of CASA alerts because\nthe separation between the signal and background \n${\\rm r}_\\mu$ distributions improves\nas the showers get larger.\n\nTable~\\ref{tab:rmucut}\nshows the ${\\rm r}_\\mu$ cut values for various samples of data\nalong with the fractions of signal and background events retained,\nand the sensitivity improvement achieved from making a cut.\nFor the entire data set, the sensitivity is improved by a factor\nof 2.94 by cutting on the shower muon content.\nThe quality factor increases to 29.7 for events having more than\n80 alerted CASA stations.\n\n\\begin{table}\n\\begin{center}\n\\caption{\nQuantities associated with the selection of muon-poor events.\nMuon-poor events are those having a relative muon content,\n${\\rm r}_\\mu$ (defined in the text),\nless than a cut value.\nThe cut values\nare given in the\nsecond column, and\nthe third and fourth columns give the efficiencies for\npassing the cut for gamma-ray signal events and\nfor hadronic background events, respectively.\nThe fifth column gives the quality factor, Q, or the\nimprovement in flux sensitivity from making the cut.}\n\\label{tab:rmucut}\n\\vspace{10pt}\n\\begin{tabular}{|ccccc|}\\hline\nData Set & ${\\rm r}_\\mu$ cut Value & Signal $\\epsilon$ &\nBackground $\\epsilon$ & Q \\\\\n\\hline\nAll & $-0.75$ & 0.72 & 0.0600 & 2.94 \\cr\n$\\le 10$ Alerts & $-0.50$ & 0.69 & 0.1644 & 1.70 \\cr\n$ > 10$ Alerts & $-0.75$ & 0.76 & 0.0362 & 3.99 \\cr\n$ > 40$ Alerts & $-1.00$ & 0.71 & $1.77\\times 10^{-3}$ & 16.9\\cr\n$ > 80$ Alerts & $-1.00$ & 
0.77 & $0.67\\times 10^{-3}$ & 29.7 \\cr\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Background Estimation}\n\\label{subsec:back}\nWe select gamma-ray candidate events \n({\\it on-source} events)\nbased on their\nreconstructed arrival direction \nin equatorial coordinates (right ascension, $\\alpha$, and \ndeclination, $\\delta$) and on their muon content.\nIn order to derive the significance of a possible gamma-ray signal,\nwe need to determine the expected number of background cosmic ray\nevents \n({\\it off-source} events)\nthat would arrive from the same direction in the sky as the source\nand would have a similar muon content as gamma-ray events.\nAgain,\nwe make the assumption that the detected air showers\nare predominantly caused by background cosmic ray events.\nWe thus use the detected events themselves to estimate the\nexpected background.\n\nA common method to estimate the expected number of background events\nis to use off-source bins having the same declination as the source,\nbut having different right ascension values.\nThis method, which assumes a uniform experiment\nexposure over declination,\nwas satisfactory for earlier smaller experiments.\nHowever, given our\nlarge event sample, \nthis technique is not reliable for CASA-MIA\nbecause of small, but non-negligible, systematic biases\n(e.g. 
diurnal variations).\nFor the CASA-MIA data sample,\na source at a declination of $40^\\circ$\noccupies an angular bin with\n$\\sim 1.8 \\times 10^6$ events.\nThe fractional statistical uncertainty corresponding to \none standard deviation in the number of events is\n0.075\\%.\nIn order to accurately estimate the number\nof background events, the relative systematic uncertainty must\nbe well below this level.\nAs a result, an accurate and robust way to determine\nthe expected background is needed.\nSeveral methods have been developed \nby other groups \n\\cite{ref:Cassiday1,ref:Alexandreas3}\nand the method that we use \nis similar to these.\n\nThe detection rate of an air shower array \ntriggering on cosmic rays \nis determined by the properties of the cosmic ray flux\nand by the properties\nof the array itself.\nAssuming that the cosmic ray parameters do not change with time,\nany variation in the detection rate is caused only by\nchanges in the detector or in the atmospheric conditions.\nOver short intervals of time ($\\sim 1\\,$hour),\nthe relative detection efficiency as a function of the shower \ndirection in local coordinates, ($\\theta$,$\\phi$),\nis largely determined by the array geometry (placement of detectors,\nuniformity of terrain, etc.) 
and is almost constant, and\nthe time variation of the detector response may be estimated from\nthe trigger rate.\nTherefore,\nwe separate the detection rate per unit solid\nangle in local coordinates, $ N (\\theta,\\phi,t)$, into\ntwo terms:\n\n\\begin{equation} \nN (\\theta,\\phi,t) \\ \\ = \\ \\ D (\\theta,\\phi) \\cdot R (t) \\ \\ ,\n\\label{eq:rate}\n\\end{equation}\n\n\\noindent \nwhere $ D (\\theta,\\phi) $ is the efficiency per unit solid angle\nof detecting a shower from a given direction in the sky, and\n $R (t) $ is the trigger rate as a function of time.\nThe factor $ D (\\theta,\\phi) $ is determined by maps made\nfrom the arrival directions of cosmic ray showers over\ngiven periods of time.\nThe time dependent term, $R(t)$, is determined from\nthe arrival times of the actual events.\nThe expected number of events for a given bin in the sky \nis then determined by integrating $ N (\\theta,\\phi,t)$\nover the time interval in question.\nTo determine the expected number of events for a bin in\nequatorial coordinates, ($\\alpha,\\delta$), we integrate \n$ N (\\theta,\\phi,t)$ over the time interval and over local\ncoordinate space.\n\nMore explicitly, the background estimation\nis done by the following procedure.\nFor intervals of $4,200\\,$sec,\nwe accumulate the arrival directions of \ncosmic ray events into 2,700 bins segmented in local coordinate space\n(30 bins in $\\theta$, 90 bins in $\\phi$).\nWe use the binned data to construct maps of the relative acceptance\nof any point in the sky over this time interval.\nSeparate maps are calculated for each data sample\nused in the source search (e.g. 
all-data and muon-poor data).\nTo generate simulated background data,\nwe discard the directional information of an event\nand associate the event time with a local coordinate direction\nobtained by sampling from the appropriate sky map.\nWe then\ncompute artificial values for the\nequatorial coordinates \nand determine if this simulated event falls into a search\nbin of a source.\nBy sampling more than once from the sky map for each event time,\nwe increase the statistics on the \nsimulated data sample.\nNegligible\nsystematic bias is introduced by such oversampling.\nFor this work, we oversample by a factor of ten, an amount that \nis limited only by computational resources.\n\nWe have checked that our background estimation method is free from\nbias by comparing the detected numbers of events in an\nangular bin to our expected number for bins that do not contain\nCygnus X-3 and Hercules X-1.\nFor each bin, we compute the statistical significance of any\nexcess or deficit in the number of detected events relative\nto the number we predict.\nThe distribution of these significances is in close agreement\nwith that expected from statistics, which, because of \nbackground oversampling,\nis dominated by the statistical uncertainty on the number\nof detected events.\n\n\\subsection{Energy Response}\n\\label{subsec:energy}\nAir shower arrays trigger on the shower size, i.e.\nthe number of charged particles in the shower at ground level.\nFor each shower, we determine a shower size\nfrom the particle densities measured in the CASA stations.\nFor astrophysical interpretations of flux measurements or flux limits,\nhowever, it is necessary to translate from the measured shower\nparameters (size and zenith angle) to an estimate of the energy of\nthe primary particle.\nSince there are large fluctuations in shower size \nfor showers initiated by particles at fixed energy,\nit is difficult for air shower experiments to measure accurately\n{\\it differential} primary 
spectra.\nTraditionally, therefore, \nflux measurements\nhave been quoted as {\\it integral} intensities above\na fixed energy point.\nAlthough to some degree the energy value at which to quote\nthe intensity is arbitrary, we desire to use\nan energy at which the experiment has a significant\ndegree of sensitivity.\nWe choose to quote flux measurements at the {\\it median} energy\nof the experiment, which reduces the dependence of the flux on\nthe assumed spectral index \n\\cite{ref:Ciampa,ref:Gaisser1}.\n\nWe estimate the energy response of the experiment by the constant\nintensity method, which has been used by other experiments\n\\cite{ref:Nagano}, as well as by our own group \\cite{ref:McKay}.\nThe constant intensity procedure is described in more detail\nelsewhere \\cite{ref:Newport}. \nBriefly,\nwe determine the relationship between shower size and\nenergy by comparing the detected flux of showers above a given size\nto an assumed form of the all-particle cosmic ray spectrum.\nThe comparison is done on a run-by-run basis to account for\nchanges in the detector response.\nThe cosmic ray flux is\nderived from\nmeasurements made by other space-borne \n\\cite{ref:Asakimori,ref:Swordy}\nand ground-based \n\\cite{ref:Nagano}\nexperiments.\nThe assumed integral cosmic ray intensity above $100\,$TeV is\n$6.57 \\times 10^{-9}\,$particles cm$^{-2}$ s$^{-1}$ sr$^{-1}$.\n\nWe use the relationship between energy and shower size to\ndetermine the most likely energy for each shower coming from the\ndirection of Cygnus X-3 or Hercules X-1 in an angular bin of fixed\nradius.\nThe medians of the energy distributions determine\nthe median energies for \ncosmic ray particles from the direction of\nCygnus X-3 and Hercules X-1 that would trigger the experiment and\npass all selection criteria.\nBy normalizing our energy scale to the cosmic ray flux,\nwe make the assumption that the primary particle\nhas the same spectral index as the detected cosmic rays.\nThis assumption is 
reasonable when dealing with\nsources like Cygnus X-3 and Hercules X-1 for which there are\nno well-established measurements of spectral indices.\nThe median energy of particles from the direction of Cygnus\nX-3 is $114\,$TeV, and from Hercules X-1, it is $116\,$TeV.\nSince the difference in the energies for the two sources\nis negligible, we report our measurements at\na common energy of $115\,$TeV.\n\n\\subsection{Search Strategy}\n\\label{subsec:strategy}\nWe carry out searches for particle emission from a particular source\nby comparing the number of events found\nwithin a circular angular bin\naround the source to the number of events estimated by our background\nprocedure.\nThe angular bin sizes vary as a function of the number of alerted CASA\nstations, as itemized in Table~\\ref{tab:bins}.\nSource positions (J1992) are taken to be \n$(\\alpha,\\delta) = (308.04^\\circ,40.93^\\circ)$ for\nCygnus X-3, and\n$(\\alpha,\\delta) = (254.39^\\circ,35.35^\\circ)$ for\nHercules X-1.\n\nSeparate searches are made based on particle type and energy.\nBy using the {\\it all-data} sample, we are sensitive to any type of\nneutral particle that would create air showers.\nWith the {\\it muon-poor} sample, we are specifically sensitive to\nthe emission of gamma-rays.\nWe carry out three separate searches with various integral cuts on the\nnumber of alerted CASA stations,\nin addition to a search with no cuts.\nThis procedure takes advantage of the correlation between primary\nenergy and size (as represented by the number of alerts),\nand improves our sensitivity to possible emission\nthat might be present at either low or high energies.\nThe data samples selected by cutting on the alert number\nand their corresponding\nmedian energies are shown in Table~\\ref{tab:energycuts}.\n\n\\begin{table}\n\\begin{center}\n\\caption{Data samples selected by integral cuts on the number\nof CASA alerts.\nThe cut values are given in the first column and the fractions\nof events 
surviving the cut \n(and within the angular search region)\nare shown in the second column.\nThe median energies for events coming from either Cygnus X-3\nor Hercules X-1 are listed in the third column.}\n\\label{tab:energycuts}\n\\vspace{10pt}\n\\begin{tabular}{|crr|}\\hline\n Alert Cut &\\ \\ Event Fraction\\ \\ &\\ \\ Median Energy\\ \\ \\\\\n\\hline\nNone & 100.00\\%\\ \\ \\ & $115\,$TeV\\ \\ \\ \\\\\n\\hline\n$\\le 10$ & 62.87\\%\\ \\ \\ & $85\,$TeV\\ \\ \\ \\\\\n$> 40$ & 0.58\\%\\ \\ \\ & $530\,$TeV\\ \\ \\ \\\\\n$> 80$ & 0.09\\%\\ \\ \\ & $1175\,$TeV\\ \\ \\ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Results}\n\\label{sec:results}\nWe search for evidence of neutral \n(gamma-ray or other) particle emission from Cygnus X-3 and\nHercules X-1.\nSeparate searches are carried out for steady and\ntransient emission from either source.\nIn addition, we search for periodic emission from Cygnus X-3\nat the \n$4.8\,$hr\nX-ray periodicity and for emission from Cygnus\nX-3 that was coincident with the occurrence of large radio flares.\nNo compelling evidence for emission from either source is found\nin any of the searches, and consequently we\nset upper limits on the fluxes of particles from the sources.\n\n\\subsection{Steady Emission}\n\\label{subsec:steady}\nThe numbers of on-source and background events for the various\nsearches from Cygnus X-3 are shown in Table~\\ref{tab:CygEvents}.\nThe results from similar searches carried out on Hercules X-1\nare shown in Table~\\ref{tab:HerEvents}.\nFor each search, we also calculate\nthe statistical significance of\nany excess or deficit in the number of\nevents observed relative to background by\nthe prescription of Li and Ma \\cite{ref:LiMa}, using\nan oversampling factor of 10.\nNo significant excess is observed for any search from either source.\nTherefore, for each search, we calculate\nan upper limit, $N_{90}$, on the number of excess events\nfrom the source at the 90\\% 
confidence level\n\\cite{ref:Helene,ref:PDG}.\nEach $N_{90}$ value is converted to a limit on the fractional\nexcess of events from the source, $f_{90}$, \nby dividing by the estimated number of background events, which is\nassumed to represent the background cosmic-ray level.\nSince\nthe $f_{90}$ values are independent of the absolute flux normalization,\nthey are useful in comparing\nresults between different experiments.\n\n\\begin{table}\n\\begin{center}\n\\caption{Steady emission search results for Cygnus X-3 using\nthe all-data sample (top) and muon-poor sample (bottom).\nThe number of events observed on-source and the number\nexpected from background are given in the second and third\ncolumns, respectively.\nThe fourth column gives the statistical significance of\nany excess or deficit.\nThe 90\\% c.l. upper limit on the number of excess events,\n$N_{90}$, and the upper limit on the fractional excess,\n$f_{90}$, are given in the last two columns.\nThe methods used to calculate statistical significances\nand upper limits are outlined in the text.\nThe data samples at $85\,$TeV, $530\,$TeV, and $1175\,$TeV\nare subsets of the data sample at $115\,$TeV.}\n\\label{tab:CygEvents}\n\\vspace{10pt}\n\\begin{tabular}{|rrrcrc|}\n\\multicolumn{6}{c}{ All-Data Sample} \\\\\n\\hline\nEnergy & On-Source & Background & Signif. & $N_{90}$ &$f_{90}$ \\\\\n\\hline\n $85\,$TeV\\ \\ & 1119469\\ \\ & 1119987\\ \\ & $-0.48\\sigma$ & \n 1502.1 &\\ \\ $1.34\\times 10^{-3}$ \\\\\n $115\,$TeV\\ \\ & 1780594\\ \\ & 1781479\\ \\ & $-0.66\\sigma$ & \n 1774.9 &\\ \\ $9.96\\times 10^{-4}$ \\\\\n $530\,$TeV\\ \\ & 10286\\ \\ & 10235\\ \\ & $+0.49\\sigma$ & \n 205.3 &\\ \\ $2.01\\times 10^{-2}$ \\\\\n$1175\,$TeV\\ \\ & 1583\\ \\ & 1580\\ \\ & $+0.08\\sigma$ & \n 68.9 &\\ \\ $4.36\\times 10^{-2}$ \\\\\n\\hline\n\\multicolumn{6}{c}{\\hphantom{dummy}} \\\\\n\\multicolumn{6}{c}{ Muon-Poor Sample} \\\\\n\\hline\nEnergy & On-Source & Background & Signif. 
& $N_{90}$ &$f_{90}$ \\\\\n\\hline\n $85\\,$TeV\\ \\ & 149676\\ \\ & 149863\\ \\ & $-0.57\\sigma$ & \n 548.1 &\\ \\ $5.90\\times 10^{-4}$ \\\\\n $115\\,$TeV\\ \\ & 121409\\ \\ & 121594\\ \\ & $-0.37\\sigma$ & \n 485.4 &\\ \\ $3.28\\times 10^{-4}$ \\\\\n $530\\,$TeV\\ \\ & 20\\ \\ & 21.0\\ \\ & $-0.21\\sigma$ & \n 8.2 &\\ \\ $9.47\\times 10^{-4}$ \\\\\n$1175\\,$TeV\\ \\ & 1\\ \\ & 0.6\\ \\ & $+0.44\\sigma$ & \n 3.5 &\\ \\ $2.67\\times 10^{-3}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\begin{center}\n\\caption{Steady emission search results for Hercules X-1 using\nthe all-data sample (top) and muon-poor sample (bottom).}\n\\label{tab:HerEvents}\n\\vspace{10pt}\n\\begin{tabular}{|rrrcrc|}\n\\multicolumn{6}{c}{ All-Data Sample} \\\\\n\\hline\nEnergy & On-Source & Background & Signif. & $N_{90}$ &$f_{90}$ \\\\\n\\hline\n $85\\,$TeV\\ \\ & 1058904\\ \\ & 1057583\\ \\ & $+1.12\\sigma$ & \n 2738.1 &\\ \\ $2.59\\times 10^{-3}$ \\\\\n $115\\,$TeV\\ \\ & 1681708\\ \\ & 1681392\\ \\ & $+0.23\\sigma$ & \n 2387.6 &\\ \\ $1.42\\times 10^{-3}$ \\\\\n $530\\,$TeV\\ \\ & 9579\\ \\ & 9532\\ \\ & $+0.46\\sigma$ & \n 196.5 &\\ \\ $2.06\\times 10^{-2}$ \\\\\n$1175\\,$TeV\\ \\ & 1419\\ \\ & 1459\\ \\ & $-0.98\\sigma$ & \n 44.0 &\\ \\ $3.02\\times 10^{-2}$ \\\\\n\\hline\n\\multicolumn{6}{c}{\\hphantom{dummy}} \\\\\n\\multicolumn{6}{c}{ Muon-Poor Sample} \\\\\n\\hline\nEnergy & On-Source & Background & Signif. 
& $N_{90}$ &$f_{90}$ \\\\\n\\hline\n $85\\,$TeV\\ \\ & 139580\\ \\ & 139670\\ \\ & $-0.24\\sigma$ & \n 577.1 &\\ \\ $6.57\\times 10^{-4}$ \\\\\n $115\\,$TeV\\ \\ & 113360\\ \\ & 113244\\ \\ & $+0.37\\sigma$ & \n 643.4 &\\ \\ $4.62\\times 10^{-4}$ \\\\\n $530\\,$TeV\\ \\ & 14\\ \\ & 16.8\\ \\ & $-0.67\\sigma$ & \n 6.3 &\\ \\ $7.96\\times 10^{-4}$ \\\\\n$1175\\,$TeV\\ \\ & 0\\ \\ & 0.5\\ \\ & $-0.98\\sigma$ & \n 2.3 &\\ \\ $1.90\\times 10^{-3}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nFigure~\\ref{fig:CygScan} shows scans in right ascension for\nbands of declination centered on Cygnus X-3 for the all-data\nand muon-poor samples.\nNo significant excess above background \nis seen in either sample for the bin\ncontaining Cygnus X-3.\nThe background estimation \nagrees well with the data in the off-source region.\nSimilar scans for Hercules X-1 are shown in \nFigure~\\ref{fig:HerScan}, and\nagain the background estimation agrees well with the observed data\nand no excesses are seen.\n\n\\subsubsection{Flux Limit Calculation}\n\\label{subsec:flux}\nIn the absence of a statistically significant excess\nfrom either Cygnus X-3 or Hercules X-1, we set upper limits\non the flux of particles from each source.\nSeparate limits are set for neutral and gamma-ray primaries.\nFor gamma-ray primaries, the 90\\% c.l. 
upper limit, \n$\\Phi_\\gamma (E)$,\non the integral flux is calculated from the measured\nfractional excess by normalizing to the cosmic ray flux:\n\n\\begin{equation}\n\\Phi_\\gamma\\ (E) \\ \\ = \\ \\ \n { { f_{90}\\ \\bar{\\Omega} } \\over {\\epsilon\\ R_\\gamma} }\n \\ J (E)\\ .\n\\label{eq:limit}\n\\end{equation}\n\n\\noindent Here, \n$\\bar{\\Omega}$ is the mean solid angle used in the search,\n$\\epsilon$ is the fraction of events that would pass cuts\nand fall into the search bin,\n$J (E)$ is the integral cosmic ray intensity above energy $E$, \nand $R_\\gamma$ is a factor which accounts for the relative trigger\nefficiency for gamma-rays as opposed to cosmic rays.\nThe value of $\\bar{\\Omega}$ ranges from $5.74\\times 10^{-3}\\,$sr for\nthe lowest energy data set to\n$1.60\\times 10^{-4}\\,$sr for the highest energy data set.\nThe $\\epsilon$ factor accounts for the fraction of gamma-rays that\nwould end up in the angular search bin (0.72) and the fraction\nthat would pass the muon-poor selection\ncriterion (Table~\\ref{tab:rmucut}).\nThe value of $R_\\gamma$ was determined by Monte Carlo\nsimulations to be 1.6.\n\nTo determine an upper limit on the integral\nflux of any neutral particle from\na source, $\\Phi_N (E)$, we use\nEq.~\\ref{eq:limit}, except $\\epsilon$ is now 0.72\nand $R_\\gamma$ is 1.0.\nIn this calculation, we assume that the neutral particle \nwould interact in the atmosphere to create air showers\nin a similar manner to cosmic rays.\n\nTable~\\ref{tab:CygHerLimits} gives the flux limits obtained from the\nvarious searches for steady emission from \nthe two sources.\nLimits are not calculated for the data samples with median energies\nof $85\\,$TeV because these are {\\it not} integral energy samples.\n\n\\begin{table}\n\\begin{center}\n\\caption{Flux limits from searches for steady emission from \nCygnus X-3 (top) and\nHercules X-1 (bottom). The second and third columns give the 90\\% c.l. 
upper\nlimit on the integral flux of any neutral or gamma-ray particles\nfrom the source, respectively.\nThe units of flux are particles cm$^{-2}$ sec$^{-1}$.}\n\\label{tab:CygHerLimits}\n\\vspace{10pt}\n\\begin{tabular}{|rcc|}\n\\multicolumn{3}{c}{ Cygnus X-3} \\\\\n\\hline\nEnergy & $\\Phi_N\\ (E) $ & $\\Phi_\\gamma\\ (E)$ \\\\\n\\hline\n $115\\,$TeV\\ \\ & $2.20 \\times 10^{-14}$ & $6.26 \\times 10^{-15}$ \\\\\n $530\\,$TeV\\ \\ & $1.43 \\times 10^{-15}$ & $1.21 \\times 10^{-16}$ \\\\\n$1175\\,$TeV\\ \\ & $1.04 \\times 10^{-15}$ & $5.19 \\times 10^{-17}$ \\\\\n\\hline\n\\multicolumn{3}{c}{\\hphantom{dummy}} \\\\\n\\multicolumn{3}{c}{ Hercules X-1} \\\\\n\\hline\nEnergy & $\\Phi_N\\ (E) $ & $\\Phi_\\gamma\\ (E)$ \\\\\n\\hline\n $115\\,$TeV\\ \\ & $3.04 \\times 10^{-14}$ & $8.55 \\times 10^{-15}$ \\\\\n $530\\,$TeV\\ \\ & $2.87 \\times 10^{-15}$ & $9.75 \\times 10^{-17}$ \\\\\n$1175\\,$TeV\\ \\ & $6.91 \\times 10^{-16}$ & $3.56 \\times 10^{-17}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Transient Emission}\n\\label{subsec:transient}\nWe search for transient emission of particles\nfrom Cygnus X-3 and Hercules X-1 on daily (single transit)\ntime scales.\nFor each transit of the source, we compare the number of \non-source events to the number of expected background events\nand calculate a significance based on\nthe prescription of Li and Ma \\cite{ref:LiMa}.\nWe require the live time fraction during the transit to be\nat least 0.20 to remove transits in which the experiment was\noperational for only a small fraction of the time.\nFor each source,\nwe make separate studies of the transit significances for the\nall-data and muon-poor samples, corresponding to possible emission\nfrom any neutral and gamma-ray particles, respectively.\n\nFor Cygnus X-3, the number of good transits in the\nall-data sample is 1500.\nIn the muon-poor sample, it is 1291.\nThe distributions of significances for \nthe two samples of Cygnus X-3 transits\nare 
shown in Figure~\\ref{fig:Cyg_Trans}.\nEach\ndistribution agrees well with a Gaussian\ndistribution\nof mean zero and unit width.\nThere is no\nevidence for any excess of events at high values of significance\n(either positive or negative).\n\nFor Hercules X-1, there are 1492 good transits in the all-data\nsample, and 1271 good transits in the muon-poor sample.\nThe significance distributions for Hercules X-1 are shown in\nFigure~\\ref{fig:Her_Trans}, and again, no evidence for \nsignificant excesses exists.\n\nBased on the lack of statistically significant excesses,\nwe place limits on the daily\nfluxes of neutral and gamma-ray particles\nfrom Cygnus X-3 and Hercules X-1.\nThese limits are calculated by a similar procedure as used\nfor the steady searches.\nThe limit values depend on the actual statistical significance\nof the search on a given day, and also on the \nepoch of data taking.\nAs shown in Table~\\ref{tab:array}, the size of the experiment has\nchanged with time, and the sensitivity changed accordingly.\nIn Table~\\ref{tab:daily}, we give typical daily flux\nlimits for the two sources for different epochs of the experiment.\nSince the numbers of events detected per transit are the\nsame for the two sources to within 5\\%, the limits for\nCygnus X-3 and Hercules X-1 are virtually identical.\nTypical 90\\% c.l. 
limits on the integral flux\nusing the full experiment\nare $\\Phi_N ( E > 115\\,{\\rm TeV} ) < 9.7\\times 10^{-13}$\nneutral particles cm$^{-2}$ sec$^{-1}$ and\n $\\Phi_\\gamma ( E > 115\\,{\\rm TeV} ) < 2.0\\times 10^{-13}$\nphotons cm$^{-2}$ sec$^{-1}$.\n\n\\begin{table}\n\\begin{center}\n\\caption{Typical daily\nupper flux limits (90\\% c.l.)\nfor emission of neutral and\ngamma-ray particles from Cygnus X-3 and Hercules X-1.\nThe flux limits are calculated for two different epochs assuming\nthe same number of on-source events as off-source.\nThe third column gives\nthe typical number of\nevents observed on-source during the different epochs.\nEpoch I corresponds to March 1990 to October 1990.\nEpoch II corresponds to January 1992 to August 1993.\nFlux limits for the remaining periods of time of operation\nare close to those for Epoch II.\nUnits of flux are particles cm$^{-2}$ sec$^{-1}$.}\n\\label{tab:daily}\n\\vspace{10pt}\n\\begin{tabular}{|rcrrc|}\n\\multicolumn{5}{c}{Epoch I} \\\\\n\\hline\nEnergy & Particle &\\ Events\\ \\ & $f_{90}\\ \\ $ & $\\Phi_{daily}(E)$ \\\\\n\\hline\n$115\\,$TeV & Any & 575 & 0.071 & $\\ \\ 1.6\\times 10^{-12}$ \\\\\n$530\\,$TeV & Any & 3.2 & 1.34 & $\\ \\ 1.9\\times 10^{-13}$ \\\\\n$1175\\,$TeV & Any & 0.56 & 4.11 & $\\ \\ 9.6\\times 10^{-14}$ \\\\\n$115\\,$TeV & $\\gamma$-ray & 45 & 0.026 & $\\ \\ 3.6\\times 10^{-13}$ \\\\\n\\hline \n\\multicolumn{5}{c}{\\hphantom{dummy}} \\\\\n\\multicolumn{5}{c}{Epoch II} \\\\\n\\hline\nEnergy & Particle &\\ Events\\ \\ & $f_{90}\\ \\ $ & $\\Phi_{daily}(E)$ \\\\\n\\hline\n$115\\,$TeV & Any & 1450 & 0.044 & $\\ \\ 9.7\\times 10^{-13}$ \\\\\n$530\\,$TeV & Any & 8.1 & 0.76 & $\\ \\ 1.1\\times 10^{-13}$ \\\\\n$1175\\,$TeV & Any & 1.3 & 2.56 & $\\ \\ 5.9\\times 10^{-14}$ \\\\\n$115\\,$TeV & $\\gamma$-ray & 96 & 0.015 & $\\ \\ 2.0\\times 10^{-13}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nWe have also carried out searches for transient emission \non the shorter time scale of 
\n$0.5\\,$hr.\nHere, we compare the number of events observed on-source\nto the expected background level for ten \n$0.5\\,$hr\ntime intervals on either side of the time of source culmination.\nThe typical number of on-source events, and therefore the flux\nsensitivity, depends strongly on the source zenith angle.\nFor example, for an overhead source\nnear culmination, the experiment observes\n$\\sim 175$ events per \n$0.5\\,$hr,\nwhereas at four hours from\nculmination the rate is $\\sim 15$ events per \n$0.5\\,$hr.\nRegardless of the rate, for each \n$0.5\\,$hr\ninterval, we calculate the significance in\nthe number of on-source events relative to the background\nand combine all such significances into\na single distribution.\nThe resulting\nsignificance distributions are consistent with those expected\nfrom background processes for both sources in both the all-data\nand muon-poor samples.\nThe typical 90\\% c.l. upper limits on the fluxes\nfrom either source are\n$\\Phi_N ( E > 115\\,{\\rm TeV} ) < 3.1\\times \n10^{-12}$ neutral particles cm$^{-2}$ sec$^{-1}$\nand\n$\\Phi_\\gamma ( E > 115\\,{\\rm TeV} ) < 7.1\\times 10^{-13}$\nphotons cm$^{-2}$ sec$^{-1}$\nfor $0.5\\,$hr\nperiods within one hour of culmination.\n\n\\subsubsection{Cygnus X-3 Radio Flares}\n\\label{subsec:radio}\nWe study showers from the direction of Cygnus X-3 during\nthe occurrence of large radio flares at the source.\nWe define large flares\nas those times when the radio\noutput at 8.3 GHz exceeded 2 Jy, a level which is\ntwo orders of magnitude above the typical quiescent level.\nDuring the period of CASA-MIA operations, there\nwere six large flares, as listed in Table~\\ref{tab:flares}\n\\cite{ref:Waltman1,ref:Waltman2}.\n\n\\begin{table}\n\\begin{center}\n\\caption{Large radio flares of Cygnus X-3 from 1990 to 1995,\ncoincident with the operational time of CASA-MIA.\nThe flare number is an arbitrary index used for this work.\nThe peak radio flux values (8.3 GHz)\ncome from 
\n\\protect\\cite{ref:Waltman1,ref:Waltman2}.\nThe March 1994 flare was actually a prolonged event that\nextended for the ten days following March 1, 1994.}\n\\label{tab:flares}\n\\vspace{10pt}\n\\begin{tabular}{|crr|}\\hline\nFlare & Date\\ \\ \\ \\ \\ \\ \\ \\ & Peak Flux (Jy) \\\\\n\\hline\n1 &\\ \\ Aug. 15, 1990\\ \\ & 7.5\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n2 & Oct. 05, 1990 & 10.2\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n3 & Jan. 21, 1991 & 14.8\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n4 & Sep. 04, 1992 & 4.1\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n5 & Feb. 20, 1994 & 4.9\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n6 & Mar. 09, 1994 & 5.2\\ \\ \\ \\ \\ \\ \\ \\ \\ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nWe examine the daily significances for Cygnus X-3 on\nthe day of each large flare, as well as on the\nday preceding and following each flare.\nTable~\\ref{tab:flare_results} lists the numbers of\nobserved events, the expected background, and the\nLi-Ma significances\nfor the examined days.\nThere is no\ncompelling evidence for any statistical excess in the \nobserved number of\nevents from Cygnus X-3 for either the all-data or muon-poor samples.\nOn one day (Feb. 
20, 1994) the Li-Ma\nsignificance is 2.26$\\sigma$ for the all-data sample.\nThe probability that we would get a day with this level of\nsignificance or greater is 28.2\\% after accounting for the\nfifteen days in which we searched.\nIn addition, on this same day, there is no evidence for any\nexcess in the muon-poor data, while\nwe would expect the statistical\nsignificance to increase by a factor of 3.3 if it were\ndue to a gamma-ray signal.\nTable~\\ref{tab:flare_results} also lists the derived \nfractional excess values, $f_{90}$, as well as the upper limits\nto the integral flux of neutral or gamma-ray particles from\nCygnus X-3 during the flares.\n\n\\begin{table}\n\\begin{center}\n\\caption{CASA-MIA search results for emission from\nCygnus X-3 near the time of large radio flares.\nThe flare numbers are defined in \nTable~\\protect\\ref{tab:flares}.\nThe $-$ and $+$ designations refer to the days preceding\nand following the flare day, respectively.\nThe significances (columns 4 and 8) are \nstandard deviation values calculated using\nthe prescription of Li and Ma \n\\protect\\cite{ref:LiMa}.\nThe last two columns give the 90\\% c.l. upper limits on the\nintegral flux above $115\\,$TeV\nof any neutral particle and gamma-rays, respectively,\nin units of \n$10^{-12}$ particles cm$^{-2}$ sec$^{-1}$.\nEntries having only a dash indicate the absence of\nany usable data.}\n\\label{tab:flare_results}\n\\vspace{10pt}\n\\begin{tabular}{|c|rrrr|rrrr|cc|}\\hline\n\\ \\ & \\multicolumn{4}{c|}{ All-Data Sample} &\n\\multicolumn{4}{c|}{ Muon-Poor Sample} &\n\\multicolumn{2}{c|}{ Flux Limits} \\\\\n\\hline\nFlare & On & Back & Signif. & $f_{90}$(\\%) &\n On & Back & Signif. 
& $f_{90}$(\\%) &\n $\\Phi_N(E)$ & $\\Phi_\\gamma(E) $ \\\\\n\\hline\n$-$& 226 & 241.2 & $-0.94$ & 7.8\\ \\ & --\\ \\ & --\\ \\ & --\\ \\ & --\\ \\ & 1.7 & \n --\\ \\ \\\\\n 1 & 249 & 249.5 & $-0.03$ & 10.8\\ \\ & --\\ \\ & --\\ \\ & --\\ \\ & --\\ \\ & 2.4 & \n --\\ \\ \\\\\n$+$& 252 & 248.1 & $+0.25$ & 12.1\\ \\ & 22 & 19.4 & $+0.58$ & 5.2\\ \\ & \n2.7 & 0.73 \\\\\n\\hline\n$-$& 850 & 863.1 & $-0.43$ & 4.9\\ \\ & 121 & 113.4 & $+0.67$ & 3.4\\ \\ & \n1.1 & 0.48 \\\\\n 2 & 884 & 849.1 & $+1.13$ & 9.0\\ \\ & 94 & 111.7 & $-1.65$ & 1.5\\ \\ & \n2.0 & 0.21 \\\\\n$+$& 872 & 901.4 & $-0.94$ & 3.9\\ \\ & 103 & 124.5 & $-1.90$ & 1.4\\ \\ & \n0.9 & 0.20 \\\\\n\\hline\n$-$&1279 &1260.1 & $+0.51$ & 5.8\\ \\ & 227 & 230.7 & $-0.23$ & 2.3\\ \\ & \n1.3 & 0.32 \\\\\n 3 &1297 &1267.3 & $+0.80$ & 6.4\\ \\ & 255 & 235.5 & $+1.19$ & 4.0\\ \\ & \n1.4 & 0.56 \\\\\n$+$&1265 &1235.2 & $+0.81$ & 6.6\\ \\ & 216 & 227.2 & $-0.71$ & 1.9\\ \\ & \n1.5 & 0.27 \\\\\n\\hline\n$-$& 884 & 992.8 & $+0.04$ & 5.8\\ \\ & 59 & 50.2 & $+1.15$ & 2.8\\ \\ & \n1.3 & 0.39 \\\\\n 4 & 711 & 756.9 & $-1.61$ & 3.4\\ \\ & 45 & 53.4 & $-1.13$ & 1.4\\ \\ & \n0.8 & 0.20 \\\\\n$+$& 735 & 773.9 & $-1.35$ & 3.7\\ \\ & 30 & 43.4 & $-2.07$ & 0.9\\ \\ & \n0.8 & 0.13 \\\\\n\\hline\n$-$&1708 &1734.0 & $-0.60$ & 3.2\\ \\ & 119 & 126.9 & $-0.68$ & 1.1\\ \\ & \n0.7 & 0.15 \\\\\n 5 &1692 &1596.2 & $+2.26$ & 9.4\\ \\ & 119 & 124.4 & $-0.47$ & 1.2\\ \\ & \n2.1 & 0.17 \\\\\n$+$&1485 &1447.1 & $+0.95$ & 6.4\\ \\ & 127 & 119.1 & $+0.68$ & 2.1\\ \\ & \n1.4 & 0.29 \\\\\n\\hline\n$-$&1382 &1375.2 & $+0.18$ & 4.9\\ \\ & 92 & 102.6 & $-1.02$ & 1.1\\ \\ & \n1.1 & 0.15 \\\\\n 6 &1212 &1225.3 & $-0.36$ & 4.2\\ \\ & 90 & 95.1 & $-0.50$ & 1.4\\ \\ & \n0.9 & 0.20 \\\\\n$+$&1390 &1391.7 & $-0.06$ & 4.4\\ \\ & 108 & 99.1 & $+0.84$ & 2.1\\ \\ & \n1.0 & 0.29 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Periodic Emission}\n\\label{subsec:periodic}\nSeveral previous observations of Cygnus X-3 claimed evidence 
for\nsteady emission correlated with the \n$4.8\\,$hr\nX-ray periodicity of the source.\nFor this reason, we carry out a search for such emission using the\nentire CASA-MIA data set.\nThe event arrival times (UT) are corrected to the barycenter\nof the solar system using the JPL DE200 planetary ephemeris\n\\cite{ref:Standish}.\nThe corrected times are folded with the \n$4.8\\,$hr\nX-ray ephemeris\nof van der Klis and Bonnet-Bidaud \\cite{ref:vanderKlis2}.\nA slight correction is made for newer X-ray data from the ASCA\nsatellite, as reported by Kitamoto {\\it et al.} \\cite{ref:Kitamoto}.\nEach event is then assigned a phase value\nin the interval (0,1) representing the fraction of a period \nthat the event is from the X-ray minimum. The phase values\nare accumulated in twenty bins of 0.05 phase units each for\nboth the on-source and\ngenerated background events.\n\nFigure~\\ref{fig:CygPhase} shows the \n$4.8\\,$hr\nperiodicity\ndistribution of events from\nthe direction of Cygnus X-3 for the all-data and muon-poor samples.\nAlso shown is the phase distribution expected from the background events.\nNo compelling excesses are seen at any particular phase interval\nfor either sample.\nWe carry out similar periodicity analyses using data\nat higher energies selected by the number of alerted CASA stations.\nThese searches also do not indicate any significant excesses\nat any phase interval.\nIn Table~\\ref{tab:CygPhase}, we list flux limits for the various\nsearches at the phase intervals (0.2,0.3) and (0.6,0.7).\nThese intervals were ones in which\nnumerous earlier experiments had reported detections.\n\n\\begin{table}\n\\begin{center}\n\\caption{CASA-MIA search results for \n$4.8\\,$hr\nperiodic emission from\nCygnus X-3.\nFlux limits are given for selected phase intervals in which\nearlier experiments had reported detections.\nColumns 3 and 4 give the 90\\% c.l. 
upper limits to the integral\nflux of neutral and gamma-ray particles, respectively, in\nunits of particles cm$^{-2}$ sec$^{-1}$.\nA blank entry corresponds to a data set having\ninsufficient data with which to calculate a limit.}\n\\label{tab:CygPhase}\n\\vspace{10pt}\n\\begin{tabular}{|crcc|}\\hline\nPhase Interval & Energy & $\\Phi_N(E)$ & $\\Phi_\\gamma(E)$ \\\\\n\\hline\n$0.2 - 0.3 $ & $115\\,$TeV\\ & $8.9\\times 10^{-14}$ & $2.3\\times 10^{-14}$ \\\\\n & $530\\,$TeV & $4.5\\times 10^{-15}$ & $6.9\\times 10^{-16}$ \\\\\n &$1175\\,$TeV & $3.5\\times 10^{-15}$ & --\\ \\ \\\\\n\\hline\n$0.6 - 0.7 $ & $115\\,$TeV & $1.4\\times 10^{-13}$ & $3.5\\times 10^{-14}$ \\\\\n & $530\\,$TeV & $3.8\\times 10^{-15}$ & $3.4\\times 10^{-16}$ \\\\\n &$1175\\,$TeV & $6.5\\times 10^{-15}$ & --\\ \\ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Comparison with Other Results}\n\\label{sec:comparison}\nAs described earlier, the many\ndetections of Cygnus X-3 and Hercules X-1 by experiments\noperating between 1975 and 1990 varied greatly in their\ncharacteristics.\nSome results were steady and some were episodic;\nsome exhibited apparent periodicity and others did not.\nWe thus choose to compare the results of this work to a\ngeneralized picture of the earlier results and to more recent\nwork.\n\n\\subsection{Cygnus X-3}\n\\label{subsec:cygnus_comparison}\nIn Figure~\\ref{fig:NewCyg}, we \nplot the flux limits reported here on \nsteady emission from Cygnus X-3.\nWe also show\npublished results from other experiments\nusing data taken at times which overlap our observation period.\nThe other results come from the \nTibet air shower array in Yangbajing, China \n\\cite{ref:Amenomori}, the\nCYGNUS array in New Mexico, USA\n\\cite{ref:Alexandreas4},\nthe EAS-TOP array at Gran Sasso, Italy\n\\cite{ref:Aglietta},\nand the HEGRA experiment on the Canary Island La Palma\n\\cite{ref:Karle}. 
\nWe do not show earlier results from data taken by \na portion of our experiment\nin 1989 \\cite{ref:Cronin} or the results of our all-sky survey\nfor northern hemisphere point sources using data taken\nin 1990-1991 \\cite{ref:McKay}.\nIn Figure~\\ref{fig:CygFract}, we show a similar comparison of\nthe limits on the fractional excess of events from Cygnus X-3 relative to\nthe cosmic ray background.\n\nThe data from the recent experiments are consistent; no steady\nemission of ultra-high energy\nparticles (gamma-ray or otherwise) has been\ndetected from Cygnus X-3 at levels which are considerably lower\nthan earlier reports.\nAt TeV energies, the results from the Whipple Telescope\n\\cite{ref:Whipple} are also considerably lower than\nthe earlier reports.\nThe limits presented here are a factor of\n130 lower at $115\\,$TeV, and a factor of 900 at $1175\\,$TeV,\nthan the spectrum \nplotted in Figure~\\ref{fig:OldCyg}.\nOur results are also inconsistent with emission reported by\na smaller experiment using data \ntaken during a time that overlapped our observations \\cite{ref:Muraki}.\n\nThe limits presented here on transient emission from Cygnus X-3\nare lower than, but in agreement with, those reported by\nother air shower experiments.\nThere have been no compelling reports of transient emission of\ngamma-rays from Cygnus X-3 over the period 1990-1995,\nincluding during large radio flares from the source.\nThere was an observation of underground muons from the direction\nof Cygnus X-3 during the January 1991 radio flare \\cite{ref:Thomson}.\nThe reported flux for this observation was $7.5\\times 10^{-10}$\nmuons cm$^{-2}$ sec$^{-1}$,\nfor muon energies above $0.7\\,$TeV.\nIf the muons were produced in air showers by the interaction\nof a hypothetical neutral particle from Cygnus X-3, we would expect\na typical neutral particle energy of $\\sim 10\\,$TeV \n\\cite{ref:Gaisser2}.\nAssuming that the particle spectrum continues to energies\ndetectable by CASA-MIA\n(and 
conservatively using \na soft spectrum comparable to the cosmic rays),\none derives an expected flux of $\\sim 10^{-11}$\nparticles cm$^{-2}$ sec$^{-1}$ \nfor energies above $115\\,$TeV.\nThis flux is\na factor of 5 to 10 above the flux limits set by CASA-MIA on\nthe emission of any neutral particle during the January 1991 flare \n(Table~\\ref{tab:flare_results}).\n\nWe have also shown that there is no evidence \nfor $4.8\\,$hr\nperiodic emission from Cygnus X-3.\nThis result is consistent with reports by other experiments\nover the same period of time.\nThe limits on pulsed gamma-ray emission presented here for\nthe phase intervals of 0.2-0.3 and 0.6-0.7 (Table~\\ref{tab:CygPhase})\nare lower at $115\\,$TeV, and considerably lower at $530\\,$TeV,\nthan the fluxes predicted by a recent \ntheoretical paper \\cite{ref:Mitra}.\n\n\\subsection{Hercules X-1}\n\\label{subsec:hercules_comparison}\nThe limits on steady emission of gamma-rays from Hercules X-1\npresented here are in agreement with those from other\nexperiments, as shown in Figure~\\ref{fig:NewHer}.\nGamma-ray emission from Hercules X-1 was typically seen by\nearlier experiments as transient emission over short time\nscales (e.g. 
the 1986 outbursts).\nWe have no evidence for such emission over the entire period 1990-1995.\nIn Figure~\\ref{fig:HerTran1994}, we compare the \ndaily event totals\nobserved by CASA-MIA from the direction of Hercules X-1 \nto the total expected assuming the flux \nof an earlier reported outburst \\cite{ref:Dingus2}.\nClearly, no evidence for\nemission at even\nmuch weaker levels than this outburst is seen during this time.\nThe flux reported in Ref.~\\cite{ref:Dingus2} was $\\sim 2\\times 10^{-11}$\nparticles cm$^{-2}$ sec$^{-1}$ for minimum energies of $100\\,$TeV.\nThis flux is about a factor of 45 larger than the typical\nlimits placed by CASA-MIA during the early part of operations\nand about a factor of 80 larger than the typical daily gamma-ray\nlimits placed by the full CASA-MIA experiment (Table~\\ref{tab:daily}).\nSince we have no evidence for transient emission from Hercules X-1,\nwe choose not to carry out a periodicity analysis based on the\nX-ray pulsar period of $1.24\\,$sec.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have carried out a high-statistics search for ultra-high energy\nneutral and gamma-ray particle emission from Cygnus X-3 and\nHercules X-1 between 1990 and 1995.\nWe have no evidence for steady or transient emission from\neither source, and for Cygnus X-3, we have no evidence for\n$4.8\\,$hr\nperiodic emission or emission correlated with large radio\nflares.\nThese results are in agreement with those from other experiments\noperating during the same period of time, but are in stark\ncontrast to earlier (1975-1990) reports.\n\nThe apparent disappearance of Cygnus X-3 and Hercules X-1 from\nthe ultra-high energy gamma-ray sky can be interpreted in\ntwo ways.\nAn optimistic view \\cite{ref:Protheroe}\nis that the earlier results indicated the presence of\nultra-high energy gamma-rays (or particles) from Cygnus X-3\nand Hercules X-1, and that the sources, which are episodic on long time\nscales, are now dormant.\nA more 
pessimistic view is that the earlier reported detections were\nlargely, if not entirely, statistical fluctuations, and that no compelling\nevidence exists for ultra-high energy gamma-rays from \nany astrophysical source.\nWe point out that an earlier all-sky survey using a portion of our data\nsample indicates that the northern hemisphere does not contain\nany steady\npoint sources of gamma-rays with fluxes comparable to those\nreported from X-ray binaries in the 1980's\n\\cite{ref:McKay}.\nWe have presented an update on this analysis at a conference \\cite{ref:Nitz},\nthe results of which are consistent with the absence of bright 100 TeV gamma-ray\npoint sources.\nWe are in the process of completing a final all-sky survey on the\nfive-year CASA-MIA data sample.\n\nThe pessimistic interpretation of the ultra-high energy point source\nquestion, if correct,\nhighlights the difficulties in detecting gamma-rays from sources at\nother (high) energies and in detecting neutrinos as well.\nIn addition, without compelling evidence for high energy particle\nacceleration at point sources, the difficulties in explaining\nthe origins of cosmic rays above $10^{14}\\,$eV remain.\n\n\n\n\\vspace{5mm}\n\\noindent {\\bf Acknowledgements}\n\\vspace{3mm}\n\nWe acknowledge the assistance of the command and\nstaff of Dugway Proving Ground, and the University of Utah\nFly's Eye group.\nSpecial thanks go to M. Cassidy.\nWe also wish to thank P. Burke, S. Golwala, M. Galli, J. He, H. Kim, L. Nelson,\nM. Oonk, M. Pritchard, P. Rauske, K. Riley, and Z. Wells \nfor assistance with data processing.\nThis work is supported by the U.S. \nNational\nScience Foundation and the U.S. Department of Energy.\nJWC and RAO wish to acknowledge the support of\nthe W.W. Grainger Foundation.\nRAO acknowledges additional support from the Alfred P. 
Sloan\nFoundation.\n\n\\vspace{10mm}\n\n\\noindent $^*$ Present Address: Department of Physics, Massachusetts Institute\nof Technology, Cambridge, MA 02139, USA.\n\n\\noindent $^\\dag$ Present Address: Department of Physics and Astronomy,\nIowa State University, Ames, IA 50011, USA.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
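The significances quoted throughout the searches above follow the prescription of Li and Ma. As a minimal sketch, the function below implements Eq. (17) of Li & Ma (1983); the on/off exposure ratio `alpha` is an assumed input, since the text does not quote the value used by CASA-MIA.

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Significance of an on-source excess, Eq. (17) of Li & Ma (1983).

    n_on  -- counts in the on-source bin
    n_off -- counts in the off-source (background) region
    alpha -- ratio of on-source to off-source exposure
             (assumed input; not quoted in the text)
    """
    if n_on == 0 and n_off == 0:
        return 0.0
    n_tot = n_on + n_off
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / n_tot) if n_on > 0 else 0.0
    term_off = n_off * math.log((1 + alpha) * n_off / n_tot) if n_off > 0 else 0.0
    # max() guards against tiny negative round-off when the excess is zero.
    s = math.sqrt(max(0.0, 2.0 * (term_on + term_off)))
    # Sign convention: positive for an excess, negative for a deficit.
    return s if n_on >= alpha * n_off else -s

# A 30% excess over an expected background of alpha * n_off = 100 events
# comes out a bit under 3 sigma; zero excess gives zero significance.
print(li_ma_significance(130, 1000, 0.1))
print(li_ma_significance(100, 1000, 0.1))
```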
However, PClean's approach is inspired by domain-specific PPLs, such as Stan \\cite{Carpenter2017} and Picture \\cite{Kulkarni2015}: it aims not to serve all conceivable relational modeling needs, but rather to enable fast inference, concise model specification, and accurate cleaning on large-scale problems. It does this via three modeling and inference contributions:\n\\vspace{-2.5mm}\n\\begin{enumerate}\n \\item PClean introduces a domain-general non-parametric prior on the number of latent objects and their link structure. PClean programs customize the prior via a relational schema and via generative models for objects' attributes.\n \\vspace{-2mm}\n \\item PClean inference is based on a novel sequential Monte Carlo (SMC) algorithm, to initialize the latent database with plausible guesses, and novel rejuvenation updates to fix mistakes.\n \\vspace{-2mm}\n \\item PClean provides a proposal compiler that generates near-optimal SMC proposals and Metropolis-Hastings rejuvenation proposals given the user's dataset, PClean program, and inference hints. These proposals improve over generic top-down PPL inference by incorporating local Bayesian reasoning within user-specified subproblems and heuristics from traditional cleaning systems.\n \n\\end{enumerate}\n \\vspace{-2mm}\nTogether, this paper's innovations improve over generic PPL inference techniques, and enable fast and accurate cleaning of challenging real-world datasets with millions of rows.\n \n\n\\subsection{Related work}\n\nMany researchers have proposed generative models for data cleaning in specific datasets~\\cite{Pasula2003, Kubica2003, MayfieldJenniferNeville2009, Matsakis2010, Xiong2011, Hu2012, Zhao2012, Abedjan2016, De2016, Steorts2016, Winn2017, DeSa2019}. Generative formulations specify a prior over latent ground truth data, and a likelihood that models how the ground truth is noisily reflected in dirty datasets. 
In contrast, PClean's PPL makes it easy to write short (\\textless 50 line) programs to specify custom priors for new datasets, and yield inference algorithms that deliver fast, accurate cleaning results. \n\n\nThere is a rich literature on Bayesian approaches to modeling relational data~\\cite{Friedman1999}, including `open-universe' models with identity and existence uncertainty~\\cite{Milch2006}. Several PPLs could express data cleaning models \\cite{milch2005, goodman2012church, dippl, Tolpin2016, Mansinghka2014, bingham2019, Cusumano-Towner2019}, but in practice, generic PPL inference is often too slow. This paper introduces new algorithms that enable PClean to scale better, and demonstrates external validity of the results by calibrating PClean's runtime and accuracy against SOTA data-cleaning baselines~\\cite{Dallachiesat2013,Rekatsinas2017} that use machine learning and weighted logic (typical of discriminative approaches~\\cite{Mccallum2003, Wellner2004, Wick2013}). Some of PClean's inference innovations have close analogues in traditional cleaning systems; for example, PClean's preferred values from Section~\\ref{sec:hints} are related to HoloClean's notion of domain restriction. In fact, PClean can be viewed as a scalable, Bayesian, domain-specific PPL implementation of the PUD framework from \\cite{DeSa2019} (which abstractly characterizes the HoloClean implementation from~\\cite{Rekatsinas2017}, but does not itself include PClean's modeling or inference innovations).\n\n\n\\section{Modeling}\n\\label{sec:modeling}\nIn this section, we present the PClean modeling language, which is designed for encoding domain-specific knowledge about data and likely errors into concise generative models. 
PClean programs specify (i) a prior distribution $p(\\mathbf{R})$ over a latent ground-truth relational database of entities underlying the user's dataset, and (ii) an observation model $p(\\mathbf{D} \\mid \\mathbf{R})$ describing how the attributes of entities from $\\mathbf{R}$ are reflected in the observed flat data table $\\mathbf{D}$. Unlike general-purpose probabilistic programming languages, PClean does not afford the user complete freedom in specifying $p(\\mathbf{R})$. Instead, we impose a novel domain-general structure prior $p(\\mathbf{S})$ on the \\textit{skeleton} of the relational database: $\\mathbf{S}$ determines how many entities are in each latent database table, and which entities are related. The user's program specifies $p(\\mathbf{R} \\mid \\mathbf{S})$, a probabilistic relational model over the attributes of the objects whose existence and relationships are given by $\\mathbf{S}$. This decomposition limits the PClean model class, but enables the development of an efficient sequential Monte Carlo inference algorithm, presented in Section~\\ref{sec:inference}.\n\n\n\\subsection{PClean Modeling Language}\n\\label{sec:language}\n\n\n\n\\begin{figure}[t]\n\\includegraphics[width=\\linewidth]{physician_program_tall.pdf}\n\\caption{An example PClean program. PClean programs define: (i) an \\textit{acyclic} relational schema, comprising a set of classes $\\mathcal{C}$, and for each class $C$, sets $\\mathcal{A}(C)$ of attributes and $\\mathcal{R}(C)$ of reference slots; (ii) a probabilistic relational model $\\Pi$ encoding uncertain assumptions about object attributes; and (iii) a query $\\mathbf{Q}$ (last line of program), specifying how latent object attributes are observed in the flat data table $\\mathbf{D}$. 
\\textit{Inference hints} in gray do not affect the model's semantics.}\n\\label{fig:example-program}\n\\end{figure}\n\nA PClean program (Figure~\\ref{fig:example-program}) defines a set of \\textit{classes}\n$\\mathcal{C} = (C_1, \\dots, C_k)$ representing the types of object \nthat underlie the user's data\n(e.g. Physician, City), as well as a \\textit{query} $\\mathbf{Q}$ that describes how a latent object database informs the observed flat dataset $\\mathbf{D}$.\n\n\\textbf{Class declarations.} The declaration of a PClean class $C$ may include\nthree kinds of statement: \\textit{reference statements} ($Y \\sim C'$), which define a foreign key or reference slot $C.Y$ that connects objects of class $C$ to objects of a target class $T(C.Y) = C'$; \\textit{attribute statements} ($X \\sim \\phi_{C.X}(\\dots)$), which define a new field or \\textit{attribute} $C.X$ that objects of the class possess, and declare an assumption about the probability distribution $\\phi_{C.X}$ that the attribute typically follows; and \\textit{parameter statements} ($\\textbf{parameter } \\theta_C \\sim p_{\\theta_C}(\\dots)$), which introduce mutually independent hyperparameters shared among all objects of the class $C$, to be learned from the noisy dataset. The distribution $\\phi_{C.X}$ of an attribute may depend on the values of a \\textit{parent set} $Pa(C.X)$ of attributes, potentially accessed via reference slots. For example, in Figure~\\ref{fig:example-program}, the \\textit{Physician} class has a \\textit{school} reference slot with target class \\textit{School}, and a \\textit{degree} attribute whose value depends on \\textit{school.degree\\_dist}. 
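As a concrete reading of that last example: sampling \textit{degree} follows the slot chain to the parent value stored on the referenced \textit{School} object and draws from the resulting categorical CPD. A minimal Python sketch, with dicts standing in for objects (the field names are illustrative, not PClean's API):

```python
import random

def sample_degree(physician, rng):
    """Sample a Physician's `degree`: the CPD's parameter (degree_dist)
    lives on the School object reached through the `school` reference slot."""
    dist = physician["school"]["degree_dist"]  # parent accessed via a slot chain
    degrees = list(dist)
    return rng.choices(degrees, weights=[dist[d] for d in degrees])[0]
```

A school that overwhelmingly awards DO degrees thus makes DO the likely draw for its physicians, mirroring the paper's PCOM example.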
Together, the attribute statements specify a \\textit{probabilistic relational model} $\\Pi$ for the user's schema (possibly parameterized by hyperparameters $\\{\\theta_C\\}_{C \\in \\mathcal{C}}$)~\\cite{Friedman1999}.\n\n\\textbf{Query.} After its class declarations, a PClean program ends with a \\textit{query}, connecting the schema of the latent relational database to the fields of the observed dataset. The query has the form \\textbf{observe} $(U_1 \\textbf{ as } x_1), \\cdots, (U_k \\textbf{ as } x_k) \\textbf{ from } C_{obs}$, where $C_{obs}$ is a class that models the records of the observed dataset (\\textit{Record}, in Figure~\\ref{fig:example-program}), $x_i$ are the names of the columns in the observed dataset, and $U_i$ are dot-expressions (e.g., \\textit{physician.school.name}) picking out an attribute accessible via zero or more reference slots from $C_{obs}$. We assume that each observed data record represents an observation of selected attributes of a distinct object in $C_{obs}$ (or objects related to it), and that these attributes are observed directly in the dataset. This means that errors are modeled as \\textit{part} of the latent relational database $\\mathbf{R}$, rather than as a separate stage of the generative process. 
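Concretely, applying the query amounts to following each dot-expression from a latent record object and emitting a flat row. An illustrative sketch, with dicts standing in for latent objects (this is not PClean's implementation):

```python
def follow(obj, slot_chain):
    """Resolve a dot-expression such as 'physician.school.name'."""
    for field in slot_chain.split("."):
        obj = obj[field]
    return obj

def project_row(record, query):
    """Apply an `observe ... as ... from C_obs` query to one latent record;
    `query` is a list of (slot_chain, column_name) pairs."""
    return {col: follow(record, chain) for chain, col in query}
```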
For example, Figure~\\ref{fig:example-program} models systematic typos in the \\textit{City} field, by associating each \\textit{Practice} with a possibly misspelled version \\textit{bad\\_city} of the name of the city in which it is located.\n\n\\begin{figure}\n\\begin{algorithmic}[0]\n\\Model{GenerateSkeleton}{$\\mathcal{C}, |\\mathbf{D}|$}\n\\LineComment{Create one $C_{obs}$ object per observed record}\n\\State $\\mathbf{S}_{C_{obs}} := \\{1, \\dots, |\\mathbf{D}|\\}$\n\\LineComment{Generate a class \\textit{after} all referring classes:}\n\\For{class $C \\in \\textsc{TopoSort}(\\mathcal{C} \\setminus \\{C_{obs}\\})$}\n \\LineComment{Collect references to class $C$}\n \\State $\\mathbf{Ref}_\\mathbf{S}(C) := \\{(r, Y) \\mid r \\in \\mathbf{S}_{C'}, T(C'.Y) = C\\}$\n \\LineComment{Generate targets of those references}\n \\State $\\mathbf{S}_C \\sim \\textsc{GenerateObjectSet}(C, \\mathbf{Ref}_\\mathbf{S}(C))$\n \\LineComment{Assign reference slots pointing to $C$}\n \\For{object $r' \\in \\mathbf{S}_C$}\n \t\\For{referring object $(r, Y) \\in r'$}\n \t\t\\State $r.Y := r'$\n \t\\EndFor\n \\EndFor\n\\EndFor\n\\LineComment{Return the skeleton}\n\\State \\Return $\\{\\mathbf{S}_C\\}_{C \\in \\mathcal{C}}, (r, Y) \\mapsto r.Y$\n\\EndModel\n\n\\Model{GenerateObjectSet}{$C, \\mathbf{Ref}_{\\mathbf{S}}(C)$}\n\\State $s_C \\sim \\textit{Gamma}(1, 1)$; $d_C \\sim \\textit{Beta}(1, 1)$\n\\LineComment{\\parbox[t]{.9\\linewidth}{Partition $\\mathbf{Ref}_{\\mathbf{S}}(C)$ into disjoint co-referring subsets; each represents an object}}\n\\State $\\mathbf{S}_C \\sim CRP(\\mathbf{Ref}_{\\mathbf{S}}(C), s_C, d_C)$\n\\EndModel\n\\end{algorithmic}\n\n\\caption{PClean's non-parametric structure prior $p(\\mathbf{S})$ over the relational skeleton $\\mathbf{S}$ for a schema $\\mathcal{C}$.}\n\\label{fig:skeleton-generator}\n\\end{figure}\n\n\\subsection{Non-parametric Structure Prior $p(\\mathbf{S})$}\n\nA PClean program's class declarations specify a probabilistic relational model 
that can be used to generate the attributes of objects in the latent database, but does not encode a prior over how many objects exist in each class or over their relationships. (The one exception is $C_{obs}$, the designated observation class, whose objects are assumed to be in one-to-one correspondence with the rows of the observed dataset $\\mathbf{D}$.) In this section, we introduce a domain-general structure prior $p(\\mathbf{S}; |\\mathbf{D}|)$ that encodes a non-parametric generative process over the \\textit{object sets} $\\mathbf{S}_C$ associated with each class $C$, and over the values of each object's reference slots. The parameter $|\\mathbf{D}|$ is the number of observed data records; $p(\\mathbf{S}; |\\mathbf{D}|)$ places mass only on relational skeletons in which there are exactly $|\\mathbf{D}|$ objects in $C_{obs}$ and every object in another class is connected via some chain of reference slots to one of them.\n\nPClean's generative process for relational skeletons is shown in Figure~\\ref{fig:skeleton-generator}. First, with probability 1, we set $\\mathbf{S}_{C_{obs}} = \\{1, \\dots, |\\mathbf{D}|\\}$. (The objects here are natural numbers, but any choice will do; all that matters is the cardinality of the set $\\mathbf{S}_{C_{obs}}$.) 
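The generator in Figure~\ref{fig:skeleton-generator} visits classes so that every class is processed only after all classes referring to it. That ordering over the reference graph can be sketched as follows (a minimal sketch, assuming the graph is given as a class-to-targets map and is acyclic):

```python
def topo_order(refs):
    """Order classes so that each class appears after every class holding
    reference slots that target it. refs: class -> iterable of target classes."""
    order, done, visiting = [], set(), set()

    def visit(c):
        if c in done:
            return
        if c in visiting:
            raise ValueError("reference graph must be acyclic")
        visiting.add(c)
        for target in refs.get(c, ()):
            visit(target)
        visiting.discard(c)
        done.add(c)
        order.append(c)

    for c in refs:
        visit(c)
    # Post-order lists targets first; reversing puts referrers before targets.
    return list(reversed(order))
```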
PClean requires that the directed graph with an edge $(C, T(C.Y))$ for each reference slot $C.Y$ is acyclic, which allows us to generate the remaining object sets class-by-class, processing a class only after processing any classes with reference slots targeting it.\nIn order to generate an object set for class $C$, we first consider the reference set $\\mathbf{Ref}_\\mathbf{S}(C)$ of all objects with reference slots that point to it:\n$$\\mathbf{Ref}_\\mathbf{S}(C) = \\{(r, Y) \\mid Y \\in \\mathcal{R}(C') \\wedge T(C'.Y) = C\\wedge r \\in \\mathbf{S}_{C'}\\}$$\nThe elements of $\\mathbf{Ref}_\\mathbf{S}(C)$ are pairs $(r, Y)$ of an object and a reference slot; if a single object has two reference slots targeting class $C$, then the object will appear twice in the reference set. The point is to capture all of the places in $\\mathbf{S}$ that will refer to objects of class $C$.\n\nNow, instead of first generating an object set $\\mathbf{S}_C$ and then assigning the reference slots in $\\mathbf{Ref}_\\mathbf{S}(C)$, we directly model the \\textit{co-reference partition} of $\\mathbf{Ref}_\\mathbf{S}(C)$, i.e., we will partition the references to objects of class $C$ into disjoint subsets, within each of which we will take all references to point to the same target object. To do this, we use the two-parameter Chinese restaurant process $CRP(X, s, d)$, which defines a non-parametric distribution over partitions of its set-valued parameter $X$. The strength $s$ and discount $d$ control the sizes of the clusters.\nWe can use the CRP to generate a partition of all references to class $C$. 
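Sampling a two-parameter CRP partition sequentially is straightforward: each new reference joins an existing block with probability proportional to the block's size minus the discount, or opens a new block with probability proportional to the strength plus the discount times the number of blocks. An illustrative Python sketch (not PClean's implementation):

```python
import random

def crp_partition(items, s=1.0, d=0.0, seed=0):
    """Partition `items` by a two-parameter CRP(s, d):
    join block t w.p. proportional to |t| - d, or open a new block
    w.p. proportional to s + d * (#blocks)."""
    rng = random.Random(seed)
    blocks = []
    for x in items:
        weights = [len(t) - d for t in blocks] + [s + d * len(blocks)]
        k = rng.choices(range(len(blocks) + 1), weights=weights)[0]
        if k == len(blocks):
            blocks.append([x])   # open a new block (a new latent object)
        else:
            blocks[k].append(x)  # co-refer with an existing block
    return blocks
```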
\\textit{We treat the resulting partition as the object set $\\mathbf{S}_C$}, i.e., each component defines one object of class $C$:\n$$\\mathbf{S}_C \\mid \\mathbf{Ref}_\\mathbf{S}(C) \\sim CRP(\\mathbf{Ref}_{\\mathbf{S}}(C), s, d)$$\nTo set the reference slots $r.Y$ with target class $T(\\mathbf{Class}(r).Y) = C$, we simply look up which partition component $(r, Y)$ (viewed as an element of $\\mathbf{Ref}_{\\mathbf{S}}(C)$) was assigned to. Since we have equated these partition components with objects of class $C$, we can directly set $r.Y$ to point to the component (object) that contains $(r, Y)$ as an element:\n$$r.Y := \\textrm{the unique } r' \\in \\mathbf{S}_{T(\\mathbf{Class}(r).Y)} \\textrm{ s.t. } (r, Y) \\in r'$$\nThis procedure can be applied iteratively to generate object sets for every relevant class, and simultaneously to fill all these objects' reference slots.\n\n\n\n\n\\section{Inference}\n\\label{sec:inference}\n\nPClean's non-parametric structure prior ensures that PClean models admit a sequential\nrepresentation, which can be used as the basis of a resample-move sequential Monte Carlo inference scheme\n (Section~\\ref{sec:smc}). However, if the SMC and rejuvenation proposals are made\nfrom the model prior, as is typical in PPLs, inference will still require\nprohibitively many particles to deliver accurate results. To address this issue, PClean uses\na \\textit{proposal compiler} that exploits conditional independence in the model\nto generate fast enumeration-based proposal kernels for both SMC and MCMC rejuvenation (Section~\\ref{sec:proposals}). 
Finally, to help users scale these proposals to large data,\nwe introduce \\textit{inference hints}, lightweight annotations in the PClean program that\ncan divide variables into subproblems to be separately handled by the proposal, or direct\nthe enumerator to focus its efforts on a dynamically computed subset of a large discrete domain (Section~\\ref{sec:hints}).\n\n\\begin{figure}\n\\begin{algorithmic}[0]\n\\Model{GenerateDataset}{$\\Pi$, $\\mathbf{Q}$, $|\\mathbf{D}|$}\n\\State $\\mathbf{R}^{(0)} \\leftarrow \\emptyset$ \\Comment{Initialize empty database}\n\\For{observation $i \\in \\{1, \\dots, |\\mathbf{D}|\\}$}\n \n \\State $\\Delta_i^\\mathbf{R} \\leftarrow \\textsc{GenerateDbIncr}(\\mathbf{R}^{(i-1)}, C_{obs})$\n \\State $\\mathbf{R}^{(i)} \\leftarrow \\mathbf{R}^{(i-1)} \\cup \\Delta_i^\\mathbf{R}$\n\n \\State $r \\leftarrow$ the unique object of class $C_{obs}$ in $\\Delta_i^\\mathbf{R}$\n \\State $d_i \\leftarrow \\{X \\mapsto r.\\mathbf{Q}(X),\\,\\, \\forall X \\in \\mathcal{A}(\\mathbf{D})\\}$\n\\EndFor\n\\State \\Return $\\mathbf{R} = \\mathbf{R}^{(|\\mathbf{D}|)}, \\mathbf{D} = (d_1, \\dots, d_{|\\mathbf{D}|})$\n\\EndModel\n\\Model{GenerateDbIncr}{$\\mathbf{R}^{(i-1)}$, root class $C$}\n \\State $\\Delta \\leftarrow \\emptyset$; $r_* \\leftarrow$ a new object of class $C$\n \\For{each reference slot $Y \\in \\mathcal{R}(C)$}\n \t\\State $C' \\leftarrow T(C.Y)$\n \t\\For{each object $r \\in \\mathbf{R}^{(i-1)}_{C'} \\cup \\Delta_{\\mathbf{R}_{C'}}$}\n \t\t\\State $n_r \\leftarrow |\\{r'\\mid r' \\in \\mathbf{R}^{(i-1)} \\cup \\Delta \\wedge \\exists \\tau, r'.\\tau = r\\}|$\n \t\\EndFor\n \t\\State $r_*.Y \\leftarrow r$ w.p. $\\propto {n_r - d_{C'}}$, or $\\star$ w.p. 
$\\propto {s_{C'} + d_{C'}|\\mathbf{R}^{(i-1)}_{C'} \\cup \\Delta_{\\mathbf{R}_{C'}}|}$\n \t\\If{$r_*.Y = \\star$}\n \t\t\\State $\\Delta' \\leftarrow \\textsc{GenerateDbIncr}(\\mathbf{R}^{(i-1)} \\cup \\Delta, C')$\n \t\t\\State $\\Delta \\leftarrow \\Delta \\cup \\Delta'$\n \t\t\\State $r_*.Y \\leftarrow $ the unique $r'$ of class $C'$ in $\\Delta'$\n \t\\EndIf\n \\EndFor\n \\For{each $X \\in \\mathcal{A}(C)$, in topological order}\n \t\\State $r_*.X \\sim \\phi_{C.X}(\\cdot \\mid \\{r_*.U\\}_{U \\in Pa(C.X)})$\n \\EndFor\n \\State \\Return $\\Delta \\cup \\{r_*\\}$\n\\EndModel\n\n\\end{algorithmic}\n\\vspace{-4mm}\n\\caption{Sequential model representation.}\n\\label{fig:seqrep}\n\\end{figure}\n\n\\subsection{Per-observation sequential Monte Carlo with per-object rejuvenation}\n\\label{sec:smc}\n\n\\begin{algorithm*}[th!]\n\\caption{Compiling SMC proposal to Bayesian network}\n\\label{alg:increment-bayes-net}\n\\begin{algorithmic}\n\\Procedure{GenerateIncrementBayesNet}{partial instance $\\mathbf{R}^{(i-1)}$, data $d_i$}\n \\LineComment{Set the vertices to all attributes and reference slots accessible from $C_{obs}$}\n \\State $U \\leftarrow \\mathcal{A}(C_{obs}) \\cup \\{K \\mid C_{obs}.K \\textrm{ is a valid slot chain} \\} \\cup \\{K.X \\mid X \\in \\mathcal{A}(T(C_{obs}.K))\\}$ \n \\LineComment{Determine parent sets and CPDs for each variable}\n \\For{each variable $u \\in U$}\n \t\\If{$u \\in \\mathcal{A}(C_{obs})$}\n \t\t\\State Set $Pa(u) = Pa^{\\Pi}(C.u)$\n \t\t\\State Set $\\phi_{u}(v_u \\mid \\{v_{u'}\\}_{u' \\in Pa(u)}) = \\phi^\\Pi_{C.u}(v_u \\mid \\{v_{u'}\\}_{u' \\in Pa(u)})$\n \t\\ElsIf{$u = K.X$ for $X \\in \\mathcal{A}(T(C_{obs}.K))$}\n \t\t\\State Set $Pa(u) = Pa^{\\Pi}(T(C_{obs}.K).X) \\cup \\{K\\} \\cup \\{u'.X \\mid u' \\textrm{ already processed} \\wedge T(C_{obs}.u') = T(C_{obs}.K)\\}$\n \t\t\\State Set \\[ \\phi_u(v_u \\mid \\{v_{u'}\\}_{u' \\in Pa(u)}) = \\begin{cases} \n \\mathbf{1}[v_u = v_K.X] & v_K \\in \\mathbf{R}^{(i-1)} \\\\\n 
\\phi^\\Pi_{T(C_{obs}.K).X}(v_u \\mid \\{v_{u'}\\}_{u' \\in Pa^\\Pi(T(C_{obs}.K).X)}) & v_K = \\textbf{new}_K \\\\\n \\mathbf{1}[v_u = v_{u'.X}] & v_K = \\textbf{new}_{u'}, u' \\neq K\n \\end{cases}\n\\]\n \t\\Else\n \t \\State Set $Pa(u)$ to already-processed slot chains $u'$ s.t. $T(C.u') = T(C.u)$\n \t \\State Set domain $V(u) = \\mathbf{R}^{(i-1)}_{T(C.u)} \\cup \\{\\textbf{new}_{u'} \\mid u' \\in Pa(u) \\cup \\{u\\}\\}$\n \t \\State Set $\\phi_u(v_u \\mid \\{v_{u'}\\}_{u' \\in Pa(u)})$ according to CRP\n \t\\EndIf\n \\EndFor\n\n \\For{attribute $x \\in \\mathcal{A}(\\mathbf{D})$}\n \t\\State Change node $\\mathbf{Q}(x)$ to be observed with value $d_i.x$, \\textbf{unless} $d_i.x$ is missing\n \\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm*}\n\n\nOne representation of the PClean model's generative process was given in Section~\\ref{sec:modeling}: a skeleton can be generated from $p(\\mathbf{S})$, then attributes can be filled in using the user-specified probabilistic relational model $p_{\\Pi}(\\mathbf{R} \\mid \\mathbf{S})$. Finally, an observed dataset $\\mathbf{D}$ can be generated from $\\mathbf{R}$ according to the query $\\mathbf{Q}$. But a key feature of our model is that it also admits a sequential representation, in which the latent relational database $\\mathbf{R}$ is built in stages: at each stage, a single record is\nadded to the observation class $C_{obs}$, along with any new objects in other classes that it refers to. Using this representation, we can run sequential Monte Carlo on the model, building a particle approximation to the posterior that incorporates one observation at a time.\n\n\\textbf{Database increments.} Let $\\mathbf{R}$ be a database with designated observation class $C_{obs}$. Assume $\\mathbf{R}_{C_{obs}}$, the object set for the class $C_{obs}$, is $\\{1, \\dots, |\\mathbf{D}|\\}$. 
Then the database's $i^{\\textrm{th}}$ \\textit{increment} $\\Delta_\\mathbf{R}^i$ is the object set\n$$\\{r \\in \\mathbf{R} \\mid \\exists K, \\, i.K = r \\wedge \\forall K', \\forall j < i, j.K' \\neq r\\},$$\nalong with their attribute values and targets of their reference slots. Objects in $\\Delta_\\mathbf{R}^i$ \nmay refer to other objects within the increment, or in earlier increments.\nThat is, the $i^{\\textrm{th}}$ increment of a database is the set of objects referenced by the $i^{\\textrm{th}}$ observation object, but \\textit{not} from any other observation object $j < i$.\n\n\\textbf{Sequential generative process.} Figure~\\ref{fig:seqrep} shows a generative process equivalent to the one in Section~\\ref{sec:modeling}, but which generates the attributes and reference slots of each increment sequentially. Intuitively, the database is generated via a Chinese-restaurant `social network': Consider a collection of restaurants, one for each class $C$, where each table serves a dish $r$ representing an object of class $C$.\nUpon entering a restaurant, customers either sit at an existing\ntable or start a new one, as in the usual generalized CRP construction.\nBut these restaurants require that to start a new table, customers must first send $\n|\\mathcal{R}(C)|$ friends to \\textit{other} restaurants (one to the target of each reference slot). Once they are seated at these \\textit{parent} restaurants, they phone the original customer to help decide what to order, i.e., how to sample the attributes $r.X$ of the new table's object, informed by \\textit{their} dishes (the objects $r.Y$ of class $T(C.Y)$). 
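The recursive step of this construction, filling each reference slot with either a popular existing object or a freshly generated one, can be sketched as follows (illustrative Python; the schema representation and field names are hypothetical, not PClean's internals):

```python
import random

def new_object(db, schema, cls, s=1.0, d=0.5, rng=None):
    """Create one object of class `cls`. Each reference slot either reuses an
    existing target (w.p. proportional to n_r - d, with n_r its incoming-reference
    count) or recursively creates a fresh target (w.p. proportional to
    s + d * #existing objects), as in GenerateDbIncr."""
    rng = rng or random.Random(0)
    obj = {"class": cls, "refs": 0}
    for slot, target_cls in schema.get(cls, []):
        pool = db.setdefault(target_cls, [])
        weights = [r["refs"] - d for r in pool] + [s + d * len(pool)]
        k = rng.choices(range(len(pool) + 1), weights=weights)[0]
        target = (new_object(db, schema, target_cls, s, d, rng)
                  if k == len(pool) else pool[k])
        target["refs"] += 1
        obj[slot] = target
    db.setdefault(cls, []).append(obj)
    return obj
```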
\nThe process starts with $|\\mathbf{D}|$ customers at the observation class $C_{obs}$'s restaurant,\nwho sit at separate tables; each customer who sits down triggers the sampling of one increment.\n\n\\textbf{SMC inference with object-wise rejuvenation.}\nThe sequential representation yields a sequence of intermediate unnormalized target densities $\\tilde{\\pi}_i$ for SMC:\n$$\\tilde{\\pi}_i(\\mathbf{R}) = \\prod_{j=1}^i p(\\Delta_j^\\mathbf{R} \\mid \\Delta_1^\\mathbf{R}, \\dots, \\Delta_{j-1}^\\mathbf{R}) p(d_j \\mid \\Delta_1^\\mathbf{R}, \\dots, \\Delta_j^\\mathbf{R}).$$\nParticles are initialized to hold an empty database, to which proposed increments $\\Delta_i^\\mathbf{R}$ are added each iteration. As is typical in SMC, at each step, the particles are reweighted according to how well they explain the new observed data, and resampled to cull low-weight particles while cloning and propagating promising ones. This process allows the algorithm to hypothesize new latent objects as needed to explain each new observation, but not to revise earlier inferences about latent objects (or delete previously hypothesized objects) in light of new observations; we address this problem with MCMC rejuvenation moves. These moves select an object $r$, and update all $r$'s attributes and reference slots in light of all relevant data incorporated so far. In doing so, these moves may also lead to the ``garbage collection'' of objects that are no longer connected to the observed dataset, or to the insertion of new objects as targets of $r$'s reference slots.\n\n\\subsection{Compiling data-driven SMC proposals}\n\\label{sec:proposals}\nProposal quality is the determining factor for the quality of SMC inference: at each step of the algorithm, a proposal $Q_i(\\Delta_i^\\mathbf{R}; \\mathbf{R}^{(i-1)}, d_i)$ generates proposed additions $\\Delta_i^\\mathbf{R}$ to the existing latent database $\\mathbf{R}^{(i-1)}$ to explain the $i^\\textrm{th}$ observed data point, $d_i$. 
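The surrounding reweight-resample loop is standard; a model-agnostic toy sketch follows (here with blind proposals drawn from the prior, the weakness that data-driven proposals address; the toy model and helper names are hypothetical):

```python
import random

def smc(observations, propose, weight, n_particles=200, seed=0):
    """Per-observation SMC: each particle proposes an increment for the new
    observation, particles are reweighted by how well they explain it, and
    then resampled (cloning promising particles, culling low-weight ones)."""
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for obs in observations:
        incs = [propose(p, obs, rng) for p in particles]
        ws = [weight(p, inc, obs) for p, inc in zip(particles, incs)]
        extended = [p + [inc] for p, inc in zip(particles, incs)]
        # Multinomial resampling; copies decouple clones before further growth.
        particles = [list(p) for p in rng.choices(extended, weights=ws, k=n_particles)]
    return particles
```

On a toy model where each observation is a noisy copy of a per-row latent bit, most surviving particles match the data after a few reweight-resample steps, even with prior proposals.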
A key limitation of the sequential Monte Carlo implementations in most general-purpose PPLs today is that the proposals $Q_i$ are not \\textit{data-driven}, but rather based only on the prior: they make blind guesses as to the latent variable values and thus tend to make proposals that explain the data poorly.\nBy contrast, PClean compiles proposals that use exact enumerative inference to propose discrete variables in a data-driven way. This approach extends ideas from \\cite{arora2012gibbs} to the block Gibbs rejuvenation and block SMC setting, with user-specified blocking hints. These proposals are \\textit{locally optimal} for models that contain only discrete finite-domain variables, meaning that of all possible proposals $Q_i$ they minimize the divergence \n$$KL(\\pi_{i-1}(\\mathbf{R}^{(i-1)}) Q_i(\\Delta_i^\\mathbf{R}; \\mathbf{R}^{(i-1)}, d_i) || \\pi_i(\\mathbf{R}^{(i-1)} \\cup \\Delta_i^\\mathbf{R})).$$\nThe distribution on the left represents a perfect sample $\\mathbf{R}^{(i-1)}$ from the target given the first $i - 1$ observations, extended with the proposal $Q_i$. The distribution on the right is the target given the first $i$ data points.\nIn our setting the locally optimal proposal is given by\n\\begin{align*}\nQ_i(\\Delta_i^\\mathbf{R};& \\mathbf{R}^{(i-1)}, d_i) \\propto \\\\\n&p(\\Delta_i^\\mathbf{R} \\mid \\Delta_1^\\mathbf{R}, \\dots, \\Delta_{i-1}^\\mathbf{R})p(d_i \\mid \\Delta_1^\\mathbf{R}, \\dots, \\Delta_{i}^\\mathbf{R}).\n\\end{align*}\nAlgorithm~\\ref{alg:increment-bayes-net} shows how to compile this distribution to a Bayesian network; when the latent attributes have finite domains, the normalizing constant can be computed and the locally optimal proposal can be simulated (and evaluated) exactly. 
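For a single finite-domain latent variable, the enumeration reduces to normalizing the product of prior and likelihood over the domain; a minimal sketch:

```python
def optimal_proposal(domain, prior, lik, obs):
    """Enumerate a finite domain to build the locally optimal proposal
    Q(v) proportional to p(v) * p(obs | v); also returns the
    normalizing constant, which serves as the particle weight."""
    unnorm = {v: prior(v) * lik(obs, v) for v in domain}
    z = sum(unnorm.values())
    return {v: w / z for v, w in unnorm.items()}, z
```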
\nThis is possible because there are only a finite number of instantiations of the random increment $\\Delta_i^\\mathbf{R}$ to consider.\nThe compiler generates efficient enumeration code separately for each pattern of missing values it encounters in the dataset, exploiting conditional independence relationships in each Bayes net to yield potentially exponential savings over naive enumeration.\nA similar strategy can be used to compile data-driven object-wise rejuvenation proposals, and to handle some continuous variables with conjugate priors; see the supplement for details.\n\n\n\n\\subsection{Scaling to large data with inference hints}\n\\label{sec:hints}\nScaling to models with large-domain variables and to datasets with many rows is a key challenge.\nIn PClean, users can specify lightweight \\textit{inference hints} to the proposal compiler, shown in gray in Figure~\\ref{fig:example-program}, to speed up inference without changing the model's meaning.\n\n\\textbf{Programmable subproblems.} First, users may group attribute and reference statements into blocks by wrapping them in the syntax $\\textbf{subproblem begin}\\dots\\textbf{end}$. This partitions the attributes and reference slots of a class into an ordered list of \\textit{subproblems}, which SMC uses as intermediate target distributions. This makes enumerative proposals faster to compute, at the cost of considering less information at each step; rejuvenation moves can often compensate for short-sighted proposals.\n\n\n\\textbf{Adaptive mixture proposals with dynamic preferred values.}\nA random variable within a model may be intractable to enumerate. For example, $\\texttt{string\\_prior(1, 100)}$ is a distribution over all strings between 1 and 100 letters long.\nTo handle these, PClean programs may declare \\textit{preferred values hints}. 
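The mechanism replaces an intractable CPD with a surrogate concentrated on the preferred list, lumping all remaining probability mass into a single catch-all token; a minimal sketch (illustrative Python, not PClean's API):

```python
def surrogate_cpd(phi, preferred):
    """Collapse a (possibly huge) CPD onto a preferred-value list:
    keep each preferred value's probability and assign the leftover
    mass to a catch-all token, to be resampled from the prior later."""
    hat = {v: phi.get(v, 0.0) for v in preferred}
    hat["<other>"] = 1.0 - sum(hat.values())
    return hat
```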
Instead of $X \\sim d(E,\\dots,E)$, the user can write $X \\sim d(E,\\dots,E) \\textbf{ preferring } E,$ where the final expression gives a list of values $\\xi_X$ on which the posterior mass is expected to concentrate. \nWhen enumerating, PClean replaces the CPD $\\phi_X$ with a surrogate $\\hat{\\phi}_X$, which is equal to $\\phi_X$ for preferred value inputs in $\\xi_X$, but 0 for all other values. The mass not captured by the preferred values, $1 - \\sum_{x \\in \\xi_{X}} \\phi_X(x)$, is assigned to a special $\\textbf{other}$ token.\nEnumeration yields a partial proposal $\\hat{Q}$ over a modified domain; the full proposal $Q$ first draws from $\\hat{Q}$ then replaces $\\textbf{other}$ tokens with samples from the appropriate CPDs $\\phi_X(\\cdot \\mid Pa(X))$. This yields a mixture proposal between the enumerative posterior on preferred values and the prior: when none of the preferred values explain the data well, $\\textbf{other}$ will dominate, causing the attribute to be sampled from its prior. But if any of the preferred values are promising, they will almost certainly be proposed.\n\n\n\\begin{figure*}[t]\n\\includegraphics[width=0.95\\linewidth]{inference_plot.pdf}\n\\vspace{-4mm}\n\\caption{Median accuracy vs. runtime for five runs of alternative inference algorithms on the \\textit{Hospital} dataset \\cite{Chu2013}, with an additional 20\\% of cells artificially deleted so as to test both repair and imputation.}\n\\label{fig:inference-comparison}\n\\end{figure*}\n\n\\section{Experiments}\n\\label{sec:results}\n\n\nIn this section, we demonstrate empirically that (1) PClean's inference works when standard PPL inference strategies fail, (2) short PClean programs suffice to compete with existing data cleaning systems in both runtime and accuracy, and (3) PClean can scale to large real-world datasets. 
Experiments were run on a laptop with a 2.6 GHz CPU and 32 GB of RAM.\n\n\n\\textbf{(1) Comparison to Generic PPL Inference.}\nWe evaluate PClean's inference against\nstandard PPL inference algorithms reimplemented\nto work on PClean models, on a popular benchmark from the data cleaning literature (Figure~\\ref{fig:inference-comparison}). We do not compare directly to other PPLs' implementations, because many (e.g. BLOG) cannot represent PClean's non-parametric prior. Some languages (e.g. Turing) have explicit support for non-parametric distributions, but could not express PClean's recursive use of CRPs. Others could in principle express PClean's model, but would complicate an algorithm comparison in other ways: Venture's dynamic dependency tracking is thousands of times slower than SOTA; Pyro's focus is on variational inference, hard to apply in PClean models; and Gen supports non-parametrics only via the use of mutation in its slower dynamic modeling language (making SMC $O(N^2)$) or via low-level extensions that would amount to reimplementing PClean using Gen's abstractions. Nonetheless, the algorithms in Figure~\\ref{fig:inference-comparison} are inspired by the generic automated inference provided in many PPLs, which use top-down proposals from the prior for SMC, MH~\\cite{dippl,ritchie2016c3}, and PGibbs~\\cite{wood2014new,Murray2015,Mansinghka2014}. 
Our results show that PClean suffices for fast, accurate inference where generic techniques fail, and also demonstrate why inference hints are necessary for scalability: without subproblem hints, PClean takes much longer to converge, even though it eventually arrives at a similar $F_1$ value.\n\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|lc|ccccc|}\n\\hline\n\\multicolumn{1}{|l}{\\textbf{\\footnotesize{Task}}} & \\multicolumn{1}{l|}{\\footnotesize{\\textbf{Metric}}} & \\multicolumn{1}{l}{\\textbf{\\footnotesize{PClean}}} & \\multicolumn{1}{l}{\\begin{tabular}[c]{@{}c@{}}\\footnotesize{\\textbf{HoloClean}}\\\\ \\footnotesize{\\textbf{(Unpublished)}} \\end{tabular}} &\n\\multicolumn{1}{l}{\\footnotesize{\\textbf{HoloClean}}} & \\multicolumn{1}{l}{\\footnotesize{\\textbf{NADEEF}}} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}c@{}}\\footnotesize{\\textbf{NADEEF + Manual}}\\\\ \\footnotesize{\\textbf{Java Heuristics}} \\end{tabular}} \\\\ \\hline \n\\multirow{2}{*}{\\textbf{\\footnotesize{Flights}}} \n& $F_1$ & \\textbf{0.90} & 0.64 & 0.41 & 0.07 & \\textbf{0.90}\n\\\\\n & {Time } & \\textbf{3.1s} & { 45.4s} & { 32.6s} & { 9.1s} & 14.5s\n \\\\\\hline\n\\multirow{2}{*}{\\textbf{\\footnotesize{Hospital}}} \n& $F_1$ & \\textbf{0.91} & 0.90 & 0.83 & 0.84 & 0.84 \n\\\\\n & { Time } & \\textbf{{ 4.5s}} & {1m 10s} & { 1m 32s} & { 27.6s} & 22.8s\n \\\\\\hline\n\\multirow{2}{*}{\\textbf{\\footnotesize{Rents}}} \n& $F_1$ & \\textbf{0.69} & 0.48 & 0.48 & 0 & 0.51\n\\\\ \n & { Time } & {1m 20s} & { 20m 16s} & {13m 43s} & { 13s} & \\textbf{{7.2s}}\n\n \\\\\\hline\n\\end{tabular}\n\\caption{Results of PClean and various baseline systems on three diverse cleaning tasks.}\n\\vspace{-1mm}\n\\label{tab:results}\n\\end{table*}\n\\textbf{(2) Applicability to Data Cleaning.} \nTo check that PClean's modeling and inference capabilities are good for data cleaning \\textit{in absolute terms} (rather than relative to generic PPL inference), we contextualize PClean's accuracy and runtime 
against two SOTA data-cleaning systems on three benchmarks with known ground truth (Table~\\ref{tab:results}), described in detail in the supplement. Briefly, the datasets are \\textit{Hospital}, a standard benchmark with artificial typos in 5\\% of cells; \\textit{Flights}, a standard benchmark resolving flight details from conflicting real-world data sources; and \\textit{Rents}, a synthetic dataset based on census data, with continuous and discrete values. The systems are \\textit{HoloClean}~\\cite{Rekatsinas2017}, based on probabilistic machine learning, and \\textit{NADEEF}, which uses MAX-SAT solvers to adjudicate between user-defined cleaning rules~\\cite{Dallachiesat2013}. For HoloClean, we consider both the original code and the authors' latest (unpublished) version on GitHub; for NADEEF, we include results both with NADEEF's built-in rules interface alone and with custom, handwritten Java rules.\n\nTable~\\ref{tab:results} reports $F_1$ scores and cleaning speed (see supplement for precision\/recall). We do not aim to anoint a single `best cleaning system,' since optimality depends on the available domain knowledge and the user's desired level of customization. Further, while we followed system authors' per-dataset recommendations where possible, a pure system comparison is difficult, since each system relies on its own rule configuration. Rather, we note that short (\\textless 50-line) PClean programs can encode knowledge useful in practice for cleaning diverse data, and inference is good enough to achieve $F_1$ scores as good as or better than SOTA data-cleaning systems on all three datasets, often in less wall-clock time. Additionally, PClean programs are concise, and e.g. could encode in a single line what required 50 lines of Java for NADEEF (see supplement).\n\n\n\\textbf{(3) Scalability to large, real-world data.}\nWe ran PClean on the Medicare Physician Compare National dataset, shown earlier in Figure~\\ref{fig:physicians_results}. 
It contains 2.2 million records, each listing a clinician and a practice location; the same clinician may work at multiple practices, and many clinicians may work at the same practice. NULL values and systematic errors are common (e.g. consistently misspelled city names for a practice).\n\nRunning PClean took 7h36m, changing 8,245 values and imputing 1,535,415 missing cells. In a random sample of 100 imputed cells, 90\\% agreed with manually obtained ground truth. We also manually checked PClean's changes, and 7,954 (96.5\\%) were correct. Of these, some were correct normalizations (e.g. choosing a single spelling for cities whose names could be spelled multiple ways). To calibrate, NADEEF changed only 88 cells across the whole dataset, and HoloClean did not initialize within 24 hours, using the configuration provided by HoloClean's authors.\n\nFigure~\\ref{fig:physicians_results} shows PClean's real behavior on four rows. Consider the misspelling \\textit{Abington, MD}, which appears in 152 entries. The correct spelling \\textit{Abingdon, MD} occurs in only 42. However, PClean recognizes \\textit{Abington, MD} as an error because all 152 instances share a single practice address, and errors are modeled as happening systematically at the practice level. Next, consider PClean's correct inference that K. Ryan's degree is \\textit{DO}. PClean leverages the fact that her school \\textit{PCOM} awards more DOs than MDs, even though more \\textit{Family Medicine} doctors are MDs than DOs. All parameters enabling this reasoning are learned from the dirty data.\n\n\n\\section{Discussion}\nPClean, like other domain-specific PPLs, aims to be more automated and scalable than general-purpose PPLs, by leveraging structure in its restricted model class to deliver fast inference. At the same time, it aims to be expressive enough to concisely solve a broad class of real-world data cleaning problems. 
\n\nOne direction for future research is to quantify the ease-of-implementation, runtime, accuracy, and program length tradeoffs that PClean users can achieve, given varying levels of expertise. Rigorous user studies could calibrate these results against other data cleaning, de-duplication, and record linkage systems. One challenge is to account for the subtle differences in the knowledge representation approach between PClean (causal and generative) and most other data cleaning systems (based on learning and\/or weighted logic)\\footnote{For example, correspondence with some HoloClean authors yielded ways to improve HoloClean's performance beyond previously published results, but did not yield ways for HoloClean to encode all forms of knowledge that PClean scripts can encode.}.\n\nIt may be possible to relax PClean's modeling restrictions without sacrificing inference performance and accuracy. One approach could be to integrate custom open-universe priors with explicit number statements and recursive object-level generative processes\\footnote{See supplement for a discussion of this direction in the context of data cleaning; many datasets with cyclic links among classes (e.g. people who are friends with other people) can be modeled in PClean by introducing additional latent classes.}, or to embed PClean in a general-purpose PPL such as Gen, to allow deeper customization of the model and inference. Another important direction is to explore learnability of PClean programs, especially for tables with large numbers of columns\/attributes. It seems potentially feasible to apply automated error modeling techniques \\cite{Heidari2019} or probabilistic program synthesis~\\cite{saad-popl-2019,choi2020group} to partially automate PClean program authoring. 
It also could be fruitful to develop hierarchical variants of PClean that enable parameters and latent objects inferred by PClean programs to transfer across datasets.\n\n\\subsubsection*{Acknowledgements}\nThe authors are grateful to Zia Abedjan, Marco Cusumano-Towner, Raul Castro Fernandez, Cameron Freer, Divya Gopinath, Christina Ji, Tim Kraska, George Matheos, Feras Saad, Michael Stonebraker, Josh Tenenbaum, and Veronica Weiner for useful conversations and feedback, as well as to anonymous referees on earlier versions of this work. This work is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1745302; DARPA, under the Machine Common Sense (MCS) and Synergistic Discovery and Design (SD2) programs; gifts from the Aphorism Foundation and the Siegel Family Foundation; a research contract with Takeda Pharmaceuticals; and financial support from Facebook, Google, and the Intel Probabilistic Computing Center. \n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we are concerned with the optimal decay rate\nof global small solution to the Cauchy problem for the compressible Navier-Stokes (CNS) equations with and without external force in three-dimensional whole space.\nThus, our first result is to investigate the optimal decay rate\nfor the CNS equations without external force as follows:\n\\begin{equation}\\label{ns1}\n\\left\\{\\begin{array}{lr}\n\t\\rho_t +\\mathop{\\rm div}\\nolimits(\\rho u)=0,\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\\\\\n\t(\\rho u)_t+\\mathop{\\rm div}\\nolimits(\\rho u \\otimes u)+ \\nabla p-\\mu\\tri u-(\\mu+\\lam)\\nabla\\mathop{\\rm div}\\nolimits u = 0,\n\t\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\n\\end{array}\\right.\n\\end{equation}\nwhere $\\rho$, $u$ and $p$ represent the unknown density, velocity and\npressure, respectively.\nThe initial data is given 
by\n\\begin{equation}\\label{initial1}\n(\\rho,u)|_{t=0}=(\\rho_0,u_0)(x),\\quad x\\in \\mathbb{R}^3.\n\\end{equation}\nFurthermore, as the space variable tends to infinity, we assume\n\\begin{equation}\\label{boundary1}\n\\lim\\limits_{|x|\\rightarrow+\\infty}(\\rho,u)=(\\bar \\rho,0),\n\\end{equation}\nwhere $\\bar\\rho$ is a positive constant.\nThe pressure $p(\\rho)$ here is assumed to be a smooth function\nin a neighborhood of $\\bar\\rho$ with $p'(\\bar\\rho)>0$.\nThe constant viscosity coefficients $\\mu$ and $\\lambda$ satisfy the physical conditions\\begin{equation}\\label{physical-condition}\n\\mu>0,~~~~2\\mu+3\\lambda\\geq 0. \\end{equation}\nThe CNS system \\eqref{ns1} is a well-known model which describes\nthe motion of compressible fluid.\nIn the following, we will introduce some mathematical results related to the CNS equations, including the local and global-in-time well-posedness, large time behavior and so on.\n\nWhen the initial data are away from the vacuum, the local existence and\nuniqueness of classical solutions have been obtained in \\cite{{Serrin-1959},{Nash-1962}}.\nIf the initial density may vanish in open sets, the well-posedness theory also has been studied in \\cite{{Cho-Choe-Kim-2004},{Choe-Kim-2003},{Cho-Kim-2006},{Li-Liang-2014},{Salvi-Straskraba-1993}} when the initial data satisfy some compatibility conditions.\nThe global classical solution was first obtained by Matsumura and Nishida \\cite{Matsumura-Nishida-1980} as initial data is closed to a non-vacuum\nequilibrium in some Sobolev space $H^s$.\nThis celebrated result requires that the solution has small oscillation from a uniform non-vacuum state such that the density is strictly away from vacuum.\nFor general data, one has to face a tricky problem of the possible appearance of vacuum.\nAs observed in \\cite{Li2019,rozanova2008blow,Xin1998,Xin2013}, the strong (or smooth) solution for the CNS equations will blow up in finite time.\nIn order to solve this problem, it is 
important to study some blow-up criteria of strong solutions, refer to \\cite{Huang2011blowcriter1,{Sun-Wang-Zhang-2011},\nHuang2011blowcriter2,Wen2013}.\nIt is worth noting that the bounds established by the above papers are dependent on time.\nThus, under the assumption that $\\sup_{t\\in \\mathbb R^+}\\|\\rho(t, \\cdot)\\|_{C^\\alpha}\\le M$ for some $0<\\alpha<1$, He, Huang and Wang \\cite{he-huang-wang} proved global stability of large solution and built the decay rate for the global solution as it tends to the\nconstant equilibrium state.\nLater, Gao, Wei and Yao studied the optimal decay rate for this class of\nglobal large solution itself and its derivatives\nin a series of articles \\cite{{gao-wei-yao-D},{gao-wei-yao-NA},{gao-wei-yao-pre}}.\nIn the presence of vacuum, Huang, Li and Xin\\cite{Huang-Li-Xin-2012} established the global existence and uniqueness of strong solution for the CNS equations in three-dimensional space in the condition that the initial energy is small.\nRecently, Li and Xin \\cite{Li-Xin-2019-pde} obtained similar results for the dimension two,\nin addition, they also studied the large time behavior of the solution for the CNS system with small initial data but allowing large oscillations.\nSome other related results with respect to the global well-posedness theory can be found in \\cite{Li2020,Wen2017}.\n\nThe large time behavior of the solutions to the isentropic compressible Navier-Stokes system has been studied extensively.\nThe optimal decay rate for strong solution to the CNS system was first derived by Matsumura and Nishida \\cite{nishida2}, and later by Ponce \\cite{ponce} for the optimal $L^p(p\\ge 2)$ decay rate.\nWith the help of the study of Green function, the optimal $L^p$ ($1\\le p\\le\\infty$) decay rates in $\\mathbb R^n(n\\ge 2)$ were obtained in \\cite{{hoff-zumbrun},{liu-wang}} when the small initial perturbation bounded in $H^s\\cap L^1$ with the integer $s\\ge[n\/2]+3$.\nAll these decay results mentioned above 
are restricted to the perturbation framework, that is, if the initial data is a small perturbation of the constant equilibrium in $L^1\cap H^3$,\nthe decay rate of the global solution to system \eqref{ns1} in $L^2$-norm is\n$$\|\rho(t)-\bar\rho\|_{L^2}+\|u (t)\|_{L^2}\le C(1+t)^{-\frac34}.$$\nFurthermore, Gao, Tao and Yao \cite{gao2016} applied the Fourier splitting method,\ndeveloped by Schonbek \cite{Schonbek1985}, to establish the optimal decay rate for\nthe higher-order spatial derivatives of the global small solution.\nSpecifically, they established the decay rate as follows:\n\begin{equation}\label{Decay-Gao-Tao-Yao}\n\|\nabla^k(\rho-\bar\rho)(t)\|_{H^{N-k}}\n+\|\nabla^k u(t)\|_{H^{N-k}}\n\leq C_0 (1+t)^{-\f34-\f k2},\quad \text{for}~~ 0\leq k\leq N-1,\n\end{equation}\nif the initial perturbation belongs to $H^N(\mathbb{R}^3)\cap L^1(\mathbb{R}^3)$.\nObviously, the decay rate for the $N-$th order spatial derivative of the global solution\nin \eqref{Decay-Gao-Tao-Yao} is still not optimal.\n\textbf{Recently, this tricky problem was addressed simultaneously in a series of articles\n\cite{{chen2021},{Wang2020},{wu2020}} by using the spectral analysis of\nthe linearized part.}\nIn the perturbation setting, the approach to proving the decay estimate for the solution of the CNS system relies heavily on the analysis of the linearization of the system.\nMore precisely, most of these decay results were proved by combining the optimal linear decay rates from spectral analysis with the energy method.\nFrom another point of view, under the assumption that the initial perturbation is bounded in $\dot H^{-s}(s\in [0, \frac32))$, Guo and Wang \cite{guo2012} obtained the optimal decay rate of the solution and its spatial derivatives of system \eqref{ns1} under the $H^N(N\ge 3)-$framework by using a pure energy method.\nMore precisely, they established the following decay estimate\n\begin{equation}\label{Decay-Guo}\n\|\nabla^l (\rho-\bar{\rho})(t)\|_{H^{N-l}}\n+\|\nabla^l u(t)\|_{H^{N-l}}\le C_0(1+t)^{-(l+s)}, ~{\rm for~}-s< l \le N-1.\n\end{equation}\nThis method in \cite{guo2012} mainly combined the energy estimates with the interpolation between negative and positive Sobolev norms, and hence does not use the analysis of the linearized part.\nFrom the decay rate of \eqref{Decay-Guo}, it is easy to see that the decay rate of the $N-th$ order derivative of the solution $(\rho-\bar{\rho}, u)$ coincides with the lower-order one.\nHowever, the $N-th$ order spatial derivative of the heat equation has the optimal decay rate $(1+t)^{-(N+s)}$ rather than $(1+t)^{-(N-1+s)}$ (see Theorem $1.1$ in \cite{guo2012}).\n\textbf{\textit{Thus, the first purpose in this paper is to investigate the optimal decay rate for the quantity $\nabla^N (\rho, u)$ as it converges to zero in $L^2-$norm.}}\n\nNow, we state the previously established decay result for the CNS equations.\n\n\begin{prop}\label{hs1}(\cite{guo2012})\nAssume that $(\rho_0-\bar\rho,u_0)\in H^{N}$ for an integer $N\geq 3$.\nThen there exists a constant $\delta_0$ such that if\n\t\[\|(\rho_0-\bar\rho,u_0)\|_{H^{[\f N2]+2}}\leq \delta_0, \]\n\tthen the problem \eqref{ns1}--\eqref{boundary1} admits a unique global solution $(\rho, u)$\n satisfying that for all $t \ge 0$\n\t\begin{equation}\label{energy-01}\n\t\|(\rho-\bar\rho)(t)\|_{H^m}^2\n +\|u(t)\|_{H^m}^2\n +\int_0^t(\|\nabla\rho\|_{H^{m-1}}^2+\|\nabla u\|_{H^m}^2)d\tau\n \leq C (\|\rho_0-\bar\rho \|_{H^m}^2+\|u_0\|_{H^m}^2),\n\t\end{equation}\n where $[\f N2]+2\leq m\leq N$.\n If further, $(\rho_0-\bar\rho,u_0)\in \dot H^{-s}$ for some\n $s \in [0, \f32)$, then for all $t \ge 0$\n\t\t\begin{equation}\n\t\t\|\Lambda^{-s}(\rho-\bar\rho)(t)\|_{L^2}^2\n +\|\Lambda^{-s}u (t)\|_{L^2}^2\leq 
C_0,\n\t\t\end{equation}\n\t\tand\n\t\t\begin{equation}\label{sde1}\n\t\t\|\nabla^l(\rho-\bar\rho)(t)\|_{H^{N-l}}^2+\|\nabla^l u(t)\|_{H^{N-l}}^2\n \leq C_0 (1+t)^{-(l+s)}, {~\rm for~}-s0$. The stationary solution $(\rho^*(x),u^*(x))$ for\nthe CNS equations \eqref{ns3} is given by $(\rho^*(x),0)$ satisfying\n\begin{equation}\label{p}\n\int_{\rho_{\infty}}^{\rho^*(x)}\f{p'(s)}{s}ds+\phi(x)=0.\n\end{equation}\nThe details of the derivation of the stationary solution can be found in \cite{mat1983}.\nFirst, Matsumura and Nishida \cite{mat1983} obtained the global existence of solutions to system \eqref{ns3} near the steady state $(\rho^*(x),0)$ with initial perturbation \nunder the $H^3-$framework.\nIn addition, they also showed that the global solution \nconverges to the stationary state as time tends to infinity.\nThe first work to give an explicit decay estimate for the solution was \npresented by Deckelnick \cite{Deckelnick1992}.\nSpecifically, Deckelnick was concerned with the isentropic case and showed that\n\begin{equation*}\n\sup_{x\in\mathbb{R}^3}|(\rho(t,x)-\rho^*(x),u(t,x))|\leq C(1+t)^{-\f14}.\n\end{equation*}\nThis was then improved by Shibata and Tanaka for more general external forces in \cite{Shibata2003,Shibata2007} to $(1+t)^{-\f12+\kappa}$ for any small positive constant $\kappa$ when the initial perturbation belongs to $H^3\cap L^{\f65}$.\nLater, Duan, Liu, Ukai and Yang \cite{duan2007} investigated the optimal $L^p-L^q$ convergence rates for this system when the initial perturbation is also bounded in $L^p$ with $1\leq p<\f65$. Specifically, they established the decay rate as follows:\n\begin{equation}\label{highdecayL1}\n\|(\rho-\rho^*,u)(t)\|_{L^2}\leq C(1+t)^{-\frac32(\frac1p-\frac12)},\n\quad\n\|\nabla^k(\rho-\rho^*,u)(t)\|_{L^2}\n\leq C(1+t)^{-\frac32(\frac1p-\frac12)-\frac{1}{2}}, ~\text{for}~~k=1,2,3.\n\end{equation}\nFor more results on the decay estimates for the CNS equations with potential force, \none may refer to \cite{{Ukai2006},{duan-ukai-yang-zhao2007},{Okita2014},{Li2011},\n{Wang2017},{Matsumura1992},{Matsumura2001}}.\nObviously, the decay rates of the second and third order spatial derivatives \nin \eqref{highdecayL1} are only the same as the first one.\n\textbf{\textit{In this paper, our second target is to investigate the optimal decay rate for\nthe $k-th$ $(k\geq2)$ order spatial derivative of the solution to the \nCNS equations with potential force.}}\n\nFinally, we aim to investigate the lower bounds of decay rates for the solution\nitself and its spatial derivatives.\nThe decay rate is called optimal in the sense that this rate \ncoincides with that of the linearized part.\nThus, the study of the lower decay rate, which is the same as the upper one, can\nhelp us obtain the optimal decay rate of the solution.\nAlong this direction, Schonbek addressed the lower bound of the decay rate for the solution\nof the classical incompressible Navier-Stokes equations \cite{Schonbek1986,Schonbek1991}\n(see also the MHD equations \cite{Schonbek-Schonbek-Suli}).\nBased on so-called Gevrey estimates, Oliver and Titi \cite{Oliver2000} established\nthe lower and upper bounds of decay rates for the higher order derivatives\nof the solution to the incompressible Navier-Stokes equations in the whole space.\nFor the case of compressible flow, there are many results on the lower bound of the\ndecay rate for the solution itself to the CNS equations and related\nmodels, such as the CNS equations \cite{lizhang2010,Kagei2002},\ncompressible Navier-Stokes-Poisson equations 
\cite{limatsumura2010,Zhang2011},\nand compressible viscoelastic flows \cite{Hu-Wu-2013}.\nHowever, these lower bounds mentioned above only consider the solution itself\nand do not involve the derivatives of the solution.\nRecently, many efforts have been devoted to studying the lower bound\nof the decay rate for the derivatives of the solution;\nwe refer to \cite{{chen2021},{wu2020},{gao-lyu-Yao-2019},{gao-wei-yao-pre}}.\n\textbf{\textit{Thus, our third target is to establish lower bounds of decay rates for\nthe global solution itself and its spatial derivatives.}}\nThese lower bounds of decay estimates, which coincide with the upper ones,\nshow that they are really optimal.\n\nNow, our second result can be stated as follows.\n\n\begin{theo}\label{them3}\n\tLet $(\rho^*(x),0)$ be the stationary solution of the initial value problem \eqref{ns3}--\eqref{initial-boundary} and let $(\rho_0-\rho^*,u_0)\in H^N$ for $N\geq3$. There exists a constant $\delta$ such that if the potential function $\phi(x)$ satisfies\n\t\begin{equation}\label{phik}\n\t\sum_{k=0}^{N+1}\|(1+|x|)^{k}\nabla^k\phi\|_{L^2\cap L^\infty}\leq \delta,\n\t\end{equation}\n\tand the initial perturbation satisfies\n\t\begin{equation}\label{initial-H2}\n\t\|(\rho_0-\rho^*,u_0)\|_{H^{N}}\leq \delta,\n\t\end{equation}\n\tthen there exists a unique global solution $(\rho,u)$ of the initial value problem \eqref{ns3}--\eqref{initial-boundary} satisfying\n\begin{equation}\label{energy-thm}\n\begin{split}\n\|(\rho-\rho^*,u)(t)\|_{H^N}^2\n+\int_0^t\big(\|\nabla(\rho-\rho^*)\|_{H^{N-1}}^2+\|\nabla u\|_{H^N}^2\big)ds\n\leq C\|(\rho_0-\rho^*,u_0)\|_{H^N}^2 ,\quad t\geq0,\n\end{split}\n\end{equation}\nwhere $C$ is a positive constant independent of time $t$.\nIf further\n\begin{equation*}\n\|(\rho_0-\rho^*,u_0)\|_{L^1}<\infty,\n\end{equation*}\nthen there exist constants $\delta_0>0$ and $\bar C_0>0$ such that for any $0<\delta\leq\delta_0$, we have\n\begin{equation}\label{kdecay}\n\|\nabla^k(\rho-\rho^*)(t)\|_{L^2}+\|\nabla^k u(t)\|_{L^2}\n\leq \bar C_0(1+t)^{-\f34-\f k2},\quad\text{for}~~0\leq k \leq N.\n\end{equation}\n\end{theo}\n\n\begin{rema}\nThe global well-posedness theory of the CNS equations with potential force\nin three-dimensional whole space was studied in \cite{duan2007} under the $H^3-$framework.\nFurthermore, they also established the decay estimate \eqref{highdecayL1} if\nthe initial data belongs to $L^p$ with $1\le p < \frac65$.\nThus, the advantage of the decay rate \eqref{kdecay} in Theorem \ref{them3} is that\nthe decay rates of the global solution $(\rho-\rho^*,u)$ itself and\nits spatial derivatives of any order are optimal.\n\end{rema}\n\n\nFinally, we have the following result concerning the lower bounds of decay rates for the solution and its spatial derivatives of the CNS equations with potential force.\n\begin{theo}\label{them4}\nSuppose the assumptions of Theorem \ref{them3} hold. Furthermore, we assume that at least one of $\int_{\R^3}(\rho_0-\rho^*)(x) d x$ and $\int_{\R^3}u_0(x) d x$ is nonzero.\nThen, the global solution $(\rho,u)$ obtained in Theorem \ref{them3} has the following decay rates for any sufficiently large $t$:\n\begin{equation}\n\begin{aligned}\label{kdecaylow}\n&{c_0}(1+t)^{-\f34-\f k2}\le \|\nabla^k(\rho-\rho^*)(t)\|_{L^2}\le {c_1}(1+t)^{-\f34-\f k2};\\\n&{c_0}(1+t)^{-\f34-\f k2}\le \|\nabla^ku(t)\|_{L^2}\le {c_1}(1+t)^{-\f34-\f k2};\n\end{aligned}\n\end{equation}\nfor all $0\leq k\leq N$.\nHere $c_0$ and $c_1$ are positive constants independent of time $t$.\n\end{theo}\n\n\begin{rema}\n\tThe decay rates shown in \eqref{kdecay} and \eqref{kdecaylow} imply that the $k-th$ $(0\leq k\leq N)$ order spatial derivative of the solution converges to the equilibrium state $(\rho^*, 0)$ at the $L^2-$rate $(1+t)^{-\f34-\f k2}$. In other words, these decay rates of the solution itself and its spatial derivatives obtained in \eqref{kdecay} and \eqref{kdecaylow} are optimal.\n\end{rema}\n\n\textbf{Notation:} Throughout this paper, for $1\leq p\leq +\infty$ and $s\in\mathbb{R}$, we simply denote $L^p(\mathbb{R}^3)$ and $H^s(\mathbb{R}^3)$ by $L^p$ and $H^s$, respectively.\nThe constant $C$ denotes a generic constant which may vary in different estimates.\n$\widehat{f}(\xi)=\mathcal F(f(x))$ represents the usual Fourier transform of the function $f(x)$ with respect to $x\in\mathbb{R}^3$. $\mathcal F^{-1}(\widehat{f}(\xi))$ means the inverse Fourier transform of $\widehat{f}(\xi)$ with respect to $\xi\in\mathbb{R}^3$. For the sake of simplicity, we write $\int f dx:=\int _{\mathbb{R}^3} f dx$.\n\n\nThe rest of the paper is organized as follows. In Section \ref{approach}, we introduce the difficulties and our approach to proving the results. In Section \ref{pre}, we recall some important lemmas, which will be used in later analysis. Section \ref{h-s} is devoted to the proof of Theorem \ref{hs2}. 
Finally, Theorem \\ref{them3} and Theorem \\ref{them4} are proved in Section \\ref{rhox}.\n\n\n\\section{Difficulties and outline of our approach}\\label{approach}\nThe main goal of this section is to explain the main difficulties of\nproving Theorems \\ref{hs2}, \\ref{them3} and \\ref{them4} as well as our\nstrategies for overcoming them.\nIn order to establish optimal decay estimate for the CNS equations,\nthe main difficulty comes from the system \\eqref{ns1} or \\eqref{ns3}\nsatisfying hyperbolic-parabolic coupling equations, such that\nthe density only can obtain lower dissipation estimate.\n\nFirst of all, let us introduce our strategy to prove the Theorem \\ref{hs2}.\nIndeed, applying the classical energy estimate, it is easy\nto establish following estimate:\n\\begin{equation}\\label{energy-N-1}\n\\begin{split}\n\\f{d}{dt}\\|\\nabla^N(n,u)\\|_{L^2}^2+\\|\\nabla^{N+1}u\\|_{L^2}^2\n\\leq \\|\\nabla (n, u)\\|_{H^1}^2 \\|\\nabla^N(n, u)\\|_{L^2}^2\n+\\text{some~good~terms}.\n\\end{split}\n\\end{equation}\nIn order to control the first term on the right handside of \\eqref{energy-N-1},\nthe idea in \\cite{guo2012} is to establish the dissipative for the density $\\nabla^N n$,\nwhich gives rise to the cross term $\\frac{d}{dt}\\int \\nabla^{N-1} u\\cdot\\nabla^{N}n dx$\nin energy part.\nThis is the reason why the decay rate of the $N-th$ order spatial derivative of\nsolution of the CNS equations can only attain the decay rate as the $(N-1)-th$ one.\nIn order to settle this problem, our strategy is to apply the time integrability\nof the dissipative term of density rather than absorbing it by the dissipative term.\nMore precisely, applying the weighted energy method to the estimate\n\\eqref{energy-N-1} and using decay \\eqref{sde1}, it holds true\n\\begin{equation}\\label{energy-N-2}\n\\begin{split}\n&(1+t)^{N+\\s+\\ep_0}\\|\\nabla^N (n,u)\\|_{L^2}^2\n+\\int_{0}^{t}(1+\\tau)^{N+\\s+\\ep_0}\\|\\nabla^{N+1}u\\|_{L^2}^2d\\tau \\\\\n\\leq& \\|\\nabla^N 
(n_0,u_0)\\|_{L^2}^2\n+\\int_0^t (1+\\tau )^{N-1+\\s+\\ep_0}\\|\\nabla^N (n,u)\\|_{L^2}^2 d\\tau\n+\\text{some~good~terms}.\n\\end{split}\n\\end{equation}\nThus, we need to control the second term on the right handside of \\eqref{energy-N-2}.\nOn the other hand, it is easy to check that\n\\begin{equation}\\label{energy-02}\n\\begin{split}\n\\f{d}{dt}\\mathcal{E}^{N-1}(t)\n+C(\\|\\nabla^{N}n\\|_{L^2}^2+\\|\\nabla^{N}u\\|_{H^1}^2)\\leq 0.\n\\end{split}\n\\end{equation}\nHere $\\mathcal{E}^{N-1}(t)$ is equivalent to $\\|\\nabla^{N-1}(n,u)\\|_{H^1}^2$.\nThe combination of \\eqref{energy-02} and decay estimate \\eqref{sde1} yields directly\n\\begin{equation}\\label{energy-N-3}\n(1+t)^{N-1+\\s+\\ep_0}\\mathcal{E}^{N-1}(t)\n+\\int_0^t(1+\\tau)^{N-1+\\s+\\ep_0}\n\\big(\\|\\nabla^N n\\|_{L^2}^2+\\|\\nabla^N u\\|_{H^1}^2\\big)d\\tau\n\\leq C(1+t)^{\\ep_0}.\n\\end{equation}\nThus, we apply the time integrability of the dissipative term of density in \\eqref{energy-N-3}\nto control the second term on the right handside of \\eqref{energy-N-2}.\nTherefore, we can obtain the optimal decay rate for $\\nabla^N(n, u)$ as it converges to zero.\n\nSecondly, we will establish the optimal decay rate, including in Theorem \\ref{them3},\nfor the higher order spatial derivative of global solution to the CNS equations\nwith external potential force.\nDue to the influence of potential force, the equilibrium state of global solution will\ndepend on the spatial variable. 
This will create some fundamental difficulties\nwhen we establish the energy estimates, see Lemmas \ref{enn-1}, \ref{enn} and \ref{ennjc}.\nSimilar to the decay estimate \eqref{highdecayL1} (cf. \cite{duan2007}),\none can combine the energy estimate and the decay rate of the linearized system\nto obtain the following decay estimates:\n\begin{equation}\label{basic-decay-d}\n\|\nabla^k (\rho-\rho^*)(t)\|_{H^{N-k}}+\|\nabla^k u(t)\|_{H^{N-k}}\n\le C(1+t)^{-(\frac34+\frac{k}{2})},\quad k=0,1,\n\end{equation}\nif the initial data $(\rho_0-\rho^*, u_0)$ belongs to $H^N \cap L^1$.\nTo prove that this decay estimate is true for $k\in\{2,\cdots,N-1\}$, we proceed\nby mathematical induction.\nThus, assuming that the decay estimate \eqref{basic-decay-d} holds for $k=l\in\{1,\cdots,N-2\}$,\nour target is to prove the validity of \eqref{basic-decay-d} for $k=l+1$.\nThis logical relationship can be guaranteed by using the classical Fourier splitting\nmethod (cf. \cite{gao2016}). However, similar to the method of the proof of Theorem \ref{hs2},\nwe guarantee this logical relationship by using the time-weighted method;\nsee Lemma \ref{N-1decay} for details.\nDue to the presence of the potential force term $\rho \nabla \phi$, we cannot\napply the time-weighted method mentioned above to establish the optimal decay\nrate for the $N-th$ order spatial derivative of the global solution.\nMotivated by \cite{wu2020}, we establish some energy estimate for the quantity\n$\int_{|\xi|\geq\eta}\widehat{\nabla^{N-1}v}\cdot \overline{\widehat{\nabla^{N}n}}d\xi$,\nnamely the higher frequency part, rather than $\int \nabla^{N-1} u\cdot\nabla^{N}n dx$.\nHere $\widehat{\nabla^{N-1}v}$ and $\widehat{\nabla^{N}n}$ stand for the Fourier transforms of\n$\nabla^{N-1}v$ and $\nabla^{N}n$, respectively.\nThe advantage is that the quantity $\|\nabla^{N}(n,v)\|_{L^2}^2-\eta_3\int_{|\xi|\geq\eta}\widehat{\nabla^{N-1}v}\cdot \overline{\widehat{\nabla^{N}n}}d\xi$\nis equivalent to $\|\nabla^{N}(n, v)\|_{L^2}^2$.\nThen, the combination of energy estimates and decay estimates leads to\nthe following inequality:\n\begin{equation}\label{highesthigh}\n\begin{split}\t&\f{d}{dt}\Big\{\|\nabla^{N}(n,v)\|_{L^2}^2-\eta_3\int_{|\xi|\geq\eta}\widehat{\nabla^{N-1}v}\cdot \overline{\widehat{\nabla^{N}n}}d\xi \Big\}+\|\nabla^{N}v^h\|_{L^2}^2+\eta_3\|\nabla^{N}n^h\|_{L^2}^2\\\n\t\leq& C\|\nabla^{N}(n^l, v^l)\|_{L^2}^2+C(1+t)^{-3-N}.\n\end{split}\n\end{equation}\nThen, one has to estimate the decay rate of the low-frequency term\n$\|\nabla^{N}(n^l, v^l)\|_{L^2}^2$.\nIndeed, Duhamel's principle and the decay estimates of the $k-th$ $(0\leq k\leq N)$ order spatial derivatives of the solution obtained above allow us to obtain\begin{equation*}\n\|\nabla^N(n^l, v^l)\|_{L^2}\leq C\delta\sup_{0\leq\tau\leq t}\|\nabla^N (n,v)\|_{L^2}+C(1+t)^{-\f34-\f N2}.\n\end{equation*}\nThis, together with \eqref{highesthigh} and the smallness of $\d$,\nallows us to obtain the optimal decay rate for the $N-th$ order spatial derivative\nof the global solution to the CNS equations with external potential force.\n\nFinally, we will establish the lower bounds of decay rates, coinciding with the upper ones,\nfor the global solution itself and its spatial derivatives.\nIt is noticed that the system in terms of density and momentum is often adopted to establish the lower bounds of decay rates for the global solution and its spatial derivatives in many previous works; one may refer to \cite{chen2021,lizhang2010}.\nHowever, the appearance of the potential force term (i.e., $\rho \nabla \phi$)\nprevents us from taking this approach to solve the problem.\nThus, let $(n, v)$ and $(\tilde{n}, \tilde{v})$ be the solutions of the nonlinear\nand linearized problems, respectively.\nDefine the difference $(n_\d, v_\d)\overset{def}{=}(n-\widetilde{n}, v-\widetilde{v})$; then it holds that\n\begin{equation*}\n\|n\|_{L^2}\ge \|\widetilde{n}\|_{L^2}-\|{n}_\d\|_{L^2}\n\quad \text{and} \quad\n\|v\|_{L^2} \ge \|\widetilde{v}\|_{L^2}-\|{v}_\d\|_{L^2}.\n\end{equation*}\nIf these quantities obey the assumptions\n$\|(n_\d,v_\d)\|_{L^2}\leq \widetilde C\d(1+t)^{-\f34}$\nand\n$\min\{\|\widetilde{n}\|_{L^2},\|\widetilde{v}\|_{L^2}\}\geq \widetilde{c}(1+t)^{-\f34}$,\nwhere $\d$ is a small constant independent of $\widetilde{c}$,\nthen we can choose $\d$ small enough to obtain the decay rate\n$\min\{\|{n}\|_{L^2},\|{v}\|_{L^2}\}\geq {c_1}(1+t)^{-\f34}$.\nSimilarly, it is easy to check that the decay estimate \eqref{kdecaylow} holds true for $k=1$.\nBased on the lower bound of the decay rate for the first order spatial derivative of the solution and the upper bound of the decay rate for the solution itself, we can deduce the lower bound of the decay rate for the $k-th$ $(2\leq k\leq N)$ order spatial derivative of the solution by using the following Sobolev interpolation inequality:\begin{equation*}\n\|\nabla^k(n,v)\|_{L^2}\geq C\|\nabla(n,v)\|_{L^2}^k\|(n,v)\|_{L^2}^{-(k-1)},\quad\text{for}~~2\leq k\leq N.\n\end{equation*}\nMore details of the proof of Theorem \ref{them4} can be found in Section \ref{lower} below.\n\n\section{Preliminary}\label{pre}\nIn this section, we collect some elementary inequalities, which will be extensively used in later sections.\nFirst of all, in order to estimate the terms involving $\bar\rho(x)$ in the CNS equations with a potential force, we need the following Hardy inequality.\n\begin{lemm}[Hardy inequality]\label{hardy}\n\tFor $k\geq1$, suppose that $\f{\nabla\phi}{(1+|x|)^{k-1}}\in L^2$; then $\f{\phi}{(1+|x|)^{k}}\in L^2$, with the estimate\n\t\begin{equation*}\n\t\begin{split}\n\t\t\|\f{\phi}{(1+|x|)^{k}}\|_{L^2}\leq C\|\f{\nabla\phi}{(1+|x|)^{k-1}}\|_{L^2}.\n\t\end{split}\n\t\end{equation*}\n\\end{lemm}\nThe proof of Lemma \ref{hardy} is simple and we omit it here. 
We will frequently use the following Sobolev interpolation inequality of Gagliardo-Nirenberg type in the energy estimates; more details can be found in \cite{guo2012}.\n\begin{lemm}[Sobolev interpolation inequality]\label{inter}\n\tLet $2\leq p\leq +\infty$ and $0\leq l,k\leq m$. If $p=+\infty$, we require furthermore that $l\leq k+1$ and $m\geq k+2$. Then if $\nabla^l\phi\in L^2$ and $\nabla^m \phi\in L^2$, we have $\nabla^k\phi\in L^p$. Moreover, there exists a positive constant $C$ depending only on $k,l,m,p$ such that\n\t\begin{equation}\label{Sobolev}\n\t\|\nabla^k\phi\|_{L^p}\leq C\|\nabla^l\phi\|_{L^2}^{\theta}\|\nabla^m\phi\|_{L^2}^{1-\theta},\n\t\end{equation}\n\twhere $0\leq\theta\leq1$ satisfies\n\t\begin{equation*}\n\t\f k3-\f1p=\Big(\f l3-\f12\Big)\theta+\Big(\f m3-\f12\Big)(1-\theta).\n\t\end{equation*}\n\\end{lemm}\n\nThen we recall the following commutator estimate, which is used frequently in the energy estimates. For the proof and more details, we refer to \cite{majda2002}.\n\begin{lemm}\label{commutator}\n\tLet $k\geq1$ be an integer and define the commutator\begin{equation*}\n [\nabla^k,f]g=\nabla^k(fg)-f\nabla^kg.\n \end{equation*}\n Then we have\n \begin{equation*}\n \|[\nabla^k,f]g\|_{L^2}\leq C\|\nabla f\|_{L^\infty}\|\nabla^{k-1}g\|_{L^2}+C\|\nabla^k f\|_{L^2}\|g\|_{L^\infty},\n \end{equation*}\n where $C$ is a positive constant depending only on $k$.\n\\end{lemm}\n\nFinally, we conclude this section with the following lemma. For the proof and more details, we refer to \cite{chen2021}.\n\begin{lemm}\label{tt2}\nLet $r_1,r_2>0$ be two real numbers. Then, for any $0<\ep_0<1$, we have\n\begin{equation*}\n\begin{split}\n\t\int_0^{\f t2}(1+t-\tau)^{-r_1}(1+\tau)^{-r_2}d\tau\leq C& \left\{\begin{array}{l}\n\t\t(1+t)^{-r_1},\quad \text{for}~~ r_2>1,\\\n\t\t(1+t)^{-r_1+\ep_0},\quad~~ \text{for}~~ r_2=1, \\\n\t\t(1+t)^{-(r_1+r_2-1)},\quad \text{for}~~ r_2<1,\n\t\end{array}\right.\\\n\t\int_{\f t2}^{t}(1+t-\tau)^{-r_1}(1+\tau)^{-r_2}d\tau\leq C& \left\{\begin{array}{l}\n\t\t(1+t)^{-r_2},\quad \text{for}~~ r_1>1,\\\n\t\t(1+t)^{-r_2+\ep_0},\quad~~ \text{for}~~ r_1=1, \\\n\t\t(1+t)^{-(r_1+r_2-1)},\quad \text{for}~~ r_1<1,\n\t\end{array}\right.\n\end{split}\n\end{equation*}\nwhere $C$ is a positive constant independent of $t$.\n\end{lemm}\n\n\section{The proof of Theorem \ref{hs2}}\label{h-s}\n\nIn this section, we study the optimal decay rate of the $N-th$ order\nspatial derivative of the global small solution to the initial value problem \eqref{ns1}--\eqref{boundary1}.\nLet us write $n\overset{def}{=} \rho-\bar\rho$; then the original system \eqref{ns1}--\eqref{initial1} can be rewritten in the perturbation form as\n\begin{equation}\label{ns2}\n\left\{\begin{array}{lr}\n\tn_t +\bar\rho\mathop{\rm div}\nolimits u=S_1,\quad (t,x)\in \mathbb{R}^{+}\times \mathbb{R}^3,\\\n\tu_t+\gamma \bar\rho\nabla n-\bar\mu\tri u-(\bar\mu+\bar\lam) \nabla \mathop{\rm div}\nolimits u=S_2,\quad (t,x)\in \mathbb{R}^{+}\times \mathbb{R}^3,\\\n\t(n,u)|_{t=0}=(\rho_0-\bar\rho,u_0),\quad x\in \mathbb{R}^3,\n\end{array}\right.\n\end{equation}\nwhere the functions $f(n), g(n)$ and the source terms $S_i(i=1,2)$ are defined by\n\begin{equation*}\label{fgdefine}\n\begin{split}\n &f(n)\overset{def}{=}\f{n}{n+\bar\rho},\quad \quad\n 
g(n)\\overset{def}{=}\\f{p'(n+\\bar\\rho)}{n+\\bar\\rho}-\\f{p'(\\bar\\rho)}{\\bar\\rho},\\\\\n\t&S_1 \\overset{def}{=}-n\\mathop{\\rm div}\\nolimits u-u\\cdot\\nabla n,\\\\\n\t&S_2 \\overset{def}{=} -u\\cdot \\nabla u\n -f(n)\\big(\\bar\\mu\\tri u-(\\bar\\mu+\\bar\\lam) \\nabla \\mathop{\\rm div}\\nolimits u\\big)-g(n)\\nabla n.\n\\end{split}\n\\end{equation*}\nHere the coefficients $\\bar\\mu, \\bar\\lam$ and $\\gamma$\nare defined by\n$\\bar\\mu=\\f{\\mu}{\\bar\\rho}, \\\n\\bar\\lam=\\f{\\lam}{\\bar\\rho}, \\\n\\gamma=\\f{p'(\\bar\\rho)}{\\bar\\rho^2}.$\nDue the the uniform estimate \\eqref{energy-01}, then there exists a positive\nconstant $C$ such that for any $1\\leq k\\leq N$,\n\\begin{equation*}\n\\begin{split}\n\t|f(n)|\\leq C|n|,\\quad |g(n)|\\leq C|n|,\\quad\n\t|f^{(k)}(n)|\\leq C,\\quad |g^{(k)}(n)|\\leq C.\n\\end{split}\n\\end{equation*}\nNext, in order to estimate the $L^2-$norm of the spatial derivatives of $f(n)$ and $g(n)$, we shall record the following lemma, which will be used frequently in later estimate.\n\\begin{lemm}\\label{hrholem}\n\tUnder the assumptions of Theorem \\ref{hs2}, $f(n)$ and $g(n)$ are two functions of $n$ defined by \\eqref{fgdefine}, then for any integer $1\\leq m\\leq N-1$, it holds true\n\t\\begin{equation}\\label{hrho}\n\t\\|\\nabla^mf(n)\\|_{L^2}^2+\\|\\nabla^mg(n)\\|_{L^2}^2\\leq C(1+t)^{-(m+s)},\n\t\\end{equation}\n\twhere $C$ is a positive constant independent of time.\n\\end{lemm}\n\\begin{proof}\n\tWe only control the first term on the left handside of \\eqref{hrho}, and the other one can be controlled similarly.\n\tNotice that for $m\\geq 1$,\n\t\\[\\nabla^mf(n)=\\text{a sum of products}~~f^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_j}(n)\\nabla^{\\gamma_1}n\\cdots\\nabla^{\\gamma_j}n\\]\n\twith the functions $f^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_j}(n)$ are some derivatives of $f(n)$ and $1\\leq \\gamma_{i}\\leq m$, $i=1,2,\\cdots,j$, $\\gamma_1+\\gamma_2+\\cdots+\\gamma_j=m$, $j\\geq 1$. 
Without loss of generality, we assume that $1\\leq \\gamma_1\\leq\\gamma_2\\leq\\cdots\\leq\\gamma_j\\leq m$. Thus, if $j\\geq2$, we have $\\gamma_{j-1}\\leq m-1\\leq N-2$.\n\tIt follows from the decay estimate \\eqref{sde1} that for $1\\le m\\leq N-1$,\\begin{equation*}\n\t\\|\\nabla^mn\\|_{L^2}\\leq C(1+t)^{-\\f{m+s}{2}},\n\t\\end{equation*}\n\twhich, together with Sobolev inequality and the fact that $j\\geq1$, yields\n\t\\begin{equation*}\n\t\\begin{split}\n\t&\\|f^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_j}(n)\\nabla^{\\gamma_1}n\\cdots\\nabla^{\\gamma_j}n\\|_{L^2}\\\\\n\t\t\\leq & C \\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_{j-1}}n\\|_{L^\\infty}\\|\\nabla^{\\gamma_{j}}n\\|_{L^2}\\\\\n\t\t\\leq &C \\|\\nabla^{\\gamma_1+1}n\\|_{H^1}\\cdots\\|\\nabla^{\\gamma_{j-1}+1}n\\|_{H^1} \\|\\nabla^{\\gamma_j}n\\|_{L^2}\\\\\n\t\t\\leq&C(1+t)^{-\\f{m+j s+(j-1)}{2}}\n\t\t\\leq C(1+t)^{-\\f{m+ s}{2}}.\n\t\\end{split}\n\t\\end{equation*}\n Consequently, the proof of this lemma is completed.\n\\end{proof}\nNow, based on Lemma \\ref{hrholem}, we give the following lemma, which provides the time integrability of the $N$-th order spatial derivative of the solution $(n,u)$.\n\\begin{lemm}\\label{integrability}\nUnder the assumptions in Theorem \\ref{hs2},\nfor any fixed constant $0<\\ep_0<1$, we have\n\\begin{equation}\\label{Enc} (1+t)^{N+s-1}\\|\\nabla^{N-1}(n,u)\\|_{H^1}^2\n+(1+t)^{-\\ep_0}\\int_0^t(1+\\tau)^{N+s+\\ep_0-1}\n\\big(\\|\\nabla^N n\\|_{L^2}^2+\\|\\nabla^N u\\|_{H^1}^2\\big)d\\tau\n\\leq C,\n\\end{equation}\nwhere $C$ is a positive constant independent of $t$.\n\\end{lemm}\n\\begin{proof}\nFrom Lemmas 3.2, 3.3 and 3.4 in \\cite{guo2012},\nthe following estimates hold for all $0\\leq k\\leq N-1$,\n\\begin{align}\n\\label{in01} \\f{d}{dt}\\|\\nabla^k(n,u)\\|_{L^2}^2+\\|\\nabla^{k+1}u\\|_{L^2}^2\\leq\n& C\\delta_0 \\|\\nabla^{k+1}(n,u)\\|_{L^2}^2,\\\\\n\\label{in02} \\f{d}{dt}\\|\\nabla^{k+1}(n,u)\\|_{L^2}^2+\\|\\nabla^{k+2}u\\|_{L^2}^2\n\\leq& 
C\\delta_0 \\big(\\|\\nabla^{k+1}(n,u)\\|_{L^2}^2+\\|\\nabla^{k+2}u\\|_{L^2}^2 \\big),\\\\\n\\label{in03} \\f{d}{dt}\\int \\nabla^ku\\cdot\\nabla^{k+1}n dx+\\|\\nabla^{k+1}n\\|_{L^2}^2\n\\leq& C \\big( \\|\\nabla^{k+1}u\\|_{L^2}^2+\\|\\nabla^{k+2}u\\|_{L^2}^2\\big),\n\\end{align}\nwhere $C$ is a positive constant independent of time.\nBased on the estimates \\eqref{in01}, \\eqref{in02} and \\eqref{in03}, multiplying \\eqref{in03} by a sufficiently small constant $\\eta_1>0$, adding the result to \\eqref{in01} and \\eqref{in02}, and using the smallness of $\\delta_0$, it holds for all $0\\leq k\\leq N-1$,\n\\begin{equation}\\label{e1}\n\\begin{split}\n\\f{d}{dt}\\big(\\|\\nabla^k(n,u)\\|_{H^1}^2\n+\\eta_1 \\int \\nabla^k u \\cdot\\nabla^{k+1}n dx\\big)\n+\\eta_1\\|\\nabla^{k+1}n\\|_{L^2}^2+\\|\\nabla^{k+1}u\\|_{H^1}^2\\leq 0.\n\\end{split}\n\\end{equation}\nTaking $k=N-1$ in \\eqref{e1}, we have\n\\begin{equation}\\label{e2}\n\\f{d}{dt}\\mathcal{E}^{N-1}(t)\n+\\eta_1\\|\\nabla^{N}n\\|_{L^2}^2+\\|\\nabla^{N}u\\|_{H^1}^2\\leq 0,\n\\end{equation}\nwhere the energy $\\mathcal{E}^{N-1}(t)$ is defined by\n$$\n\\mathcal{E}^{N-1}(t)\\overset{def}{=}\n\\|\\nabla^{N-1}(n,u)(t)\\|_{H^1}^2+\\eta_1 \\int \\nabla^{N-1} u\\cdot\\nabla^{N}n dx.\n$$\nThen, due to the smallness of $\\eta_1$, we have the following equivalence\n\\begin{equation}\\label{xih}\nc_1\\|\\nabla^{N-1}(n, u)(t)\\|_{H^1}^2\n\\le \\mathcal{E}^{N-1} (t)\n\\le c_2\\|\\nabla^{N-1}(n, u)(t)\\|_{H^1}^2,\n\\end{equation}\nwhere the constants $c_1$ and $c_2$ are independent of time.\nFor any fixed $\\ep_0$ with $0<\\ep_0<1$,\nmultiplying the inequality \\eqref{e2} by $(1+t)^{N+s+\\ep_0-1}$\ngives\n\\begin{equation}\\label{Ek1}\n\\begin{split}\n\\f{d}{dt}\\big\\{(1+t)^{N+s+\\ep_0-1}\\mathcal{E}^{N-1}(t)\\big\\}\n +(1+t)^{N+s+\\ep_0-1}\n \\big(\\|\\nabla^{N}n\\|_{L^2}^2+\\|\\nabla^{N}u\\|_{H^1}^2\\big)\n\\leq C (1+t)^{N+s+\\ep_0-2}\\mathcal{E}^{N-1}(t).\n\\end{split}\n\\end{equation}\nThe decay estimate \\eqref{sde1} and the equivalence\n\\eqref{xih} lead us to get that for $00$, one 
obtains that\\begin{equation}\\label{h}\n\t|\\h(n,\\bar\\rho)|\\leq C|n|.\n\t\\end{equation}\n\tNext, let us deal with the derivatives of $\\h$. In view of the definition of $\\h$ and $\\wg$, it then follows from \\eqref{hg-relation} that for any $l\\geq 1$,\\begin{equation}\\label{wideh}\n\t\\begin{split}\n\t\t&\\nabla^l\\h(n,\\bar\\rho)\n\t\t=\\nabla^l\\wg(n+\\bar\\rho)-\\nabla^l\\wg(\\bar\\rho)\\\\\n\t\t=&\\text{a sum of products}~~\\big\\{\\wg^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_{i+j}}(n+\\bar\\rho)\\nabla^{\\gamma_1}n\\cdots\\nabla^{\\gamma_i}n\\nabla^{\\gamma_{i+1}}\\bar\\rho\\cdots\\nabla^{\\gamma_{i+j}}\\bar\\rho\\\\\n\t\t&\\quad-\\wg^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_{i+j}}(\\bar\\rho)\\nabla^{\\gamma_1}\\bar\\rho\\cdots\\nabla^{\\gamma_{i+j}}\\bar\\rho\\big\\}\n\t\\end{split}\n \\end{equation}\n\twhere the functions $\\wg^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_{i+j}}$ are certain derivatives of $\\wg$ and $1\\leq \\gamma_{{\\beta}}\\leq l$, ${\\beta}=1,2,\\cdots,i+j$; $\\gamma_1+\\gamma_2+\\cdots+\\gamma_i=m$, $0\\leq m\\leq l$ and $\\gamma_1+\\gamma_2+\\cdots+\\gamma_{i+j}=l$.\n\tFor the case that $m=0$, we use the mean value theorem to find that there exists a $\\xi$ between $\\bar\\rho$ and $n+\\bar\\rho$, such that\\begin{equation*}\n\t\\begin{split}\n\t\t&\\wg^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_{j}}(n+\\bar\\rho)\\nabla^{\\gamma_1}\\bar\\rho\\cdots\\nabla^{\\gamma_{j}}\\bar\\rho-\\wg^{\\gamma_1,\\gamma_2,\\cdots,\\gamma_{j}}(\\bar\\rho)\\nabla^{\\gamma_1}\\bar\\rho\\cdots\\nabla^{\\gamma_{j}}\\bar\\rho\n\t\t=\\wg^{(\\gamma_1,\\gamma_2,\\cdots,\\gamma_{j})+1}(\\xi)n\\nabla^{\\gamma_1}\\bar\\rho\\cdots\\nabla^{\\gamma_{j}}\\bar\\rho,\n\t\\end{split}\n\t\\end{equation*}\n which, together with \\eqref{wideh}, yields the following estimate\n \\begin{equation}\\label{h2}\n\t\\begin{split}\n\t\t|\\nabla^l\\h(n,\\bar\\rho)|\\leq C|n|\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}|\\nabla^{\\gamma_{1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{j}}\\bar\\rho|+ 
C\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho|.\n\t\\end{split}\n\t\\end{equation}\n\tThis, together with the estimate \\eqref{h}, gives\\begin{equation}\\label{j4}\n\t\\begin{split}\n\t\t|J_4|\\leq&C\\int|\\h(n,\\bar\\rho)||\\nabla^{k}\\bar\\rho||\\nabla^{k+1}v|dx+C\\sum_{1\\leq l\\leq k-1}\\int|\\nabla^{l}\\h(n,\\bar\\rho)||\\nabla^{k-l}\\bar\\rho||\\nabla^{k+1}v|dx\\\\\n\t\t\\leq &C\\int|n||\\nabla^{k}\\bar\\rho||\\nabla^{k+1}v|dx+C\\sum_{1\\leq l\\leq k-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\int|n||\\nabla^{\\gamma_1}\\bar\\rho|\\cdots|\\nabla^{\\gamma_j}\\bar\\rho||\\nabla^{k-l}\\bar\\rho||\\nabla^{k+1}v|dx\\\\\n\t\t&\\quad+C\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho||\\nabla^{k-l}\\bar\\rho||\\nabla^{k+1}v|dx\\\\\n\t\t\\overset{def}{=}&J_{41}+J_{42}+J_{43}.\n\t\\end{split}\n\t\\end{equation}\n\tBy virtue of Sobolev and Hardy inequalities, it is easy to deduce\n\t\\begin{equation}\\label{j41}\n\t\\begin{split}\n\t\tJ_{41}+J_{42}\\leq& C \\|\\f{n}{(1+|x|)^{k}}\\|_{L^6}\\Big(\\|(1+|x|)^{k}\\nabla^{k}\\bar\\rho\\|_{L^3}+\\sum_{1\\leq l\\leq k-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\|(1+|x|)^{\\gamma_{1}}\\nabla^{\\gamma_{1}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\quad\\cdots\\|(1+|x|)^{\\gamma_{j}}\\nabla^{\\gamma_{j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{k-l}\\nabla^{k-l}\\bar\\rho\\|_{L^3}\\Big)\\|\\nabla^{k+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\Big(\\|\\f{\\nabla n}{(1+|x|)^{k}}\\|_{L^2}+\\|\\f{n}{(1+|x|)^{k+1}}\\|_{L^2}\\Big)\\|\\nabla^{k+1}v\\|_{L^2}\\\\\n\t\t\\leq& C\\delta \\|\\nabla^{k+1}(n,v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tIt 
then follows from Sobolev inequality that\n\t\\begin{equation}\\label{J43}\n\t\\begin{split}\n\t\tJ_{43}\\leq& C\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{1+j}=l\\\\\\gamma_1=m\\\\1\\leq m\\leq l}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{k-m}}\\|_{L^6}\\|(1+|x|)^{\\gamma_{2}}\\nabla^{\\gamma_{2}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{1+j}}\\nabla^{\\gamma_{1+j}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\times\\|(1+|x|)^{k+1-l}\\nabla^{k-l}\\bar\\rho\\|_{L^3}\\|\\nabla^{k+1}v\\|_{L^2}+C\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,~i\\geq2}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{k-m}}\\|_{L^6}\\|\\nabla^{\\gamma_2}n\\|_{L^\\infty}\\cdots\\\\\n\t\t&\\times\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{k+1-l}\\nabla^{k-l}\\bar\\rho\\|_{L^3}\\|\\nabla^{k+1}v\\|_{L^2}\\\\\n\t\t\\overset{def}{=}&J_{431}+J_{432}.\n\t\\end{split}\n\t\\end{equation}\n\tThanks to Hardy inequality, one can deduce that\n\t\\begin{equation*}\n\t\\begin{split}\n\t\tJ_{431}\\leq& C\\delta\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{1+j}=l\\\\\\gamma_1=m\\\\1\\leq m\\leq l}}\\Big(\\|\\f{\\nabla^{\\gamma_1+1}n}{(1+|x|)^{k-m}}\\|_{L^2}+\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{k-m+1}}\\|_{L^2}\\Big)\\|\\nabla^{k+1}v\\|_{L^2}\\\\\n\t\t\\leq& C\\delta\\|\\nabla^{k+1}n\\|_{L^2}\\|\\nabla^{k+1}v\\|_{L^2}\\leq C\\delta\\|\\nabla^{k+1}(n,v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation*}\n\tWith the help of Sobolev interpolation inequality \\eqref{Sobolev}\n\tin Lemma \\ref{inter} and Hardy inequality, one obtains\n\t\\begin{equation*}\n\t\\begin{split}\n\t\tJ_{432}\\leq&C\\delta\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq 
l,~i\\geq2}}\\|\\nabla^{k+1-m+\\gamma_1}n\\|_{L^2}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_2}{2(k+1)}}\\|\\nabla^{k+1}n\\|_{L^2}^{\\f{3+2\\gamma_2}{2(k+1)}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_i}{2(k+1)}}\\|\\nabla^{k+1}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2(k+1)}}\\|\\nabla^{k+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq k-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,~i\\geq2}}\\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\theta}\\|\\nabla^{k+1}n\\|_{L^2}^{1-\\theta}\\|n\\|_{L^2}^{1-\\theta}\\|\\nabla^{k+1}n\\|_{L^2}^{\\theta}\\|\\nabla^{k+1}v\\|_{L^2}\n\t\t\\leq C\\delta\\|\\nabla^{k+1}(n,v)\\|_{L^2}^2,\n\t\\end{split}\n\t\\end{equation*}\n\twhere\n\t$$\\theta=\\f{3(i-1)+2(m-\\gamma_1)}{2(k+1)},\\quad{\\alpha}=\\f{3(i-1)(k+1)}{3(i-1)+2(m-\\gamma_1)}\\leq \\f{3}{5}(k+1)\\leq \\f35 N\\leq N,$$\n\tprovided that $i\\geq2$ and $i-1\\leq m-\\gamma_1$.\n Inserting the estimates of $J_{431}$ and $J_{432}$ into \\eqref{J43}, it follows immediately\n\t\\begin{equation}\\label{j43}\n\tJ_{43}\\leq C\\delta\\|\\nabla^{k+1}(n,v)\\|_{L^2}^2.\n\t\\end{equation}\n\tThus, substituting \\eqref{j41} and \\eqref{j43} into \\eqref{j4}, we deduce that\\begin{equation*}\n\t\\begin{split}\n\t\t|J_4|\\leq C\\delta\\|\\nabla^{k+1}(n,v)\\|_{L^2}^2,\n\t\\end{split}\n\t\\end{equation*}\n\twhich, together with \\eqref{j1}, \\eqref{j2} and \\eqref{j3}, gives\n\t\\begin{equation}\\label{s2}\n\t\\begin{split}\n\t\t\\int \\nabla^k\\widetilde{S}_2\\cdot \\nabla^kv dx\n\t\t\\leq C\\delta\\|\\nabla^{k+1}(n,v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tWe finally utilize \\eqref{s1} and \\eqref{s2} in \\eqref{ehk} to obtain \\eqref{en1} directly.\n\tThus, we complete the proof of this lemma.\n\\end{proof}\n\nWe then derive the energy estimate for the $N$-th order spatial derivative of the solution.\n\\begin{lemm}\\label{enn}\n\tUnder the assumptions in Theorem \\ref{them3}, we 
have\n\t\\begin{equation}\\label{en2}\n\t\\f{d}{dt}\\|\\nabla^{N}(n,v)\\|_{L^2}^2+\\|\\nabla^{N+1}v\\|_{L^2}^2\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\t\\end{equation}\n\twhere $C$ is a positive constant independent of time.\n\\end{lemm}\n\\begin{proof}\nApplying the differential operator $\\nabla^{N}$ to $\\eqref{ns5}_1$ and $\\eqref{ns5}_2$,\nmultiplying the resulting equations by $\\nabla^{N}n$ and $\\nabla^{N}v$, respectively,\nand integrating over $\\mathbb{R}^3$, it holds\n\\begin{equation}\\label{ehk1}\n\\begin{split}\n\t\t\\f12\\f{d}{dt}\\|\\nabla^{N}(n,v)\\|_{L^2}^2+\\mu_1\\|\\nabla^{N+1} v\\|_{L^2}^2+\\mu_2\\|\\nabla^{N}\\mathop{\\rm div}\\nolimits v\\|_{L^2}^2\n\t\t= \\int \\nabla^{N}\\widetilde{S}_1\\cdot \\nabla^{N}n dx+\\int \\nabla^{N}\\widetilde{S}_2\\cdot \\nabla^{N}v dx.\n\\end{split}\n\\end{equation}\nNow we estimate the two terms on the right-hand side of \\eqref{ehk1} separately.\nIn view of the definition of $\\widetilde{S}_1$, we have\n\\begin{equation}\\label{ks2}\n\\begin{split}\n\\int \\nabla^{N}\\widetilde{S}_1\\cdot \\nabla^{N}n dx\n=\n&C\\int \\nabla^{N}( v\\cdot\\nabla n )\\cdot \\nabla^{N}n dx\n +C\\int \\nabla^{N}( n\\mathop{\\rm div}\\nolimits v)\\cdot \\nabla^{N}n dx\\\\\n& +C\\int \\nabla^{N}( v\\cdot\\nabla\\bar\\rho )\\cdot \\nabla^{N}n dx\n +C\\int \\nabla^{N}( \\bar\\rho \\mathop{\\rm div}\\nolimits v)\\cdot \\nabla^{N}n dx\\\\\n\\overset{def}{=} &K_1+K_2+K_3+K_4.\n\\end{split}\n\\end{equation}\nSobolev inequality and integration by parts yield\\begin{equation*}\n\\begin{split}\nK_1=&C\\sum_{0\\leq l\\leq N}\\int\\nabla^{l+1}n\\nabla^{N-l}v\\nabla^{N}n dx\\\\\n=&C\\int\\nabla n\\nabla^{N}v\\nabla^{N}ndx\n +C\\sum_{1\\leq l\\leq N-2}\\int\\nabla^{l+1} n\\nabla^{N-l}v\\nabla^{N}n dx\n +C\\int \\nabla^{N}n\\nabla v\\nabla^{N}ndx-C\\int \\mathop{\\rm div}\\nolimits v|\\nabla^{N}n|^2dx\\\\\n\\leq&C\\|\\nabla v\\|_{L^\\infty}\\|\\nabla^{N}n\\|_{L^2}^2\n +C\\|\\nabla 
n\\|_{L^3}\\|\\nabla^{N}v\\|_{L^6}\\|\\nabla^{N}n\\|_{L^2}\n +C\\sum_{2\\leq l\\leq N-1}\\|\\nabla^ln\\|_{L^6}\\|\\nabla^{N+1-l}v\\|_{L^3}\\|\\nabla^{N}n\\|_{L^2}\\\\\n\\leq&C \\|\\nabla^2 v\\|_{H^1}\\|\\nabla^{N}n\\|_{L^2}^2\n +C\\|\\nabla n\\|_{H^1}\\|\\nabla^{N+1}v\\|_{L^2}\\|\\nabla^{N}n\\|_{L^2}\n +C\\sum_{2\\leq l\\leq N-1}\\|\\nabla^ln\\|_{L^6}\\|\\nabla^{N+1-l}v\\|_{L^3}\\|\\nabla^{N}n\\|_{L^2}.\n\\end{split}\n\\end{equation*}\nUsing the Sobolev interpolation inequality \\eqref{Sobolev}\nin Lemma \\ref{inter}, the third term on the right-hand side\nof the above inequality can be estimated as follows\n\\begin{equation}\\label{lnv}\n\\begin{aligned}\n&\\|\\nabla^ln\\|_{L^6}\\|\\nabla^{N+1-l}v\\|_{L^3}\\|\\nabla^{N}n\\|_{L^2}\\\\\n\\leq &C\\|n\\|_{L^2}^{1-\\f{l+1}{N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{l+1}{N}}\n \\|\\nabla^{\\alpha} v\\|_{L^2}^{\\f{l+1}{N}}\n \\|\\nabla^{N+1}v\\|_{L^2}^{1-\\f{l+1}{N}}\\|\\nabla^{N}n\\|_{L^2}\\\\\n\\leq &C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\\end{aligned}\n\\end{equation}\nwhere ${\\alpha}$ is given by\n$${\\alpha}=\\f{3N}{2(l+1)}+1\\leq \\f{N}{2}+1,$$\nsince $l\\geq 2$.\nThus, we have\n\\begin{equation*}\n|K_1| \\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{equation*}\nIt follows in a similar way to $K_1$ that\n\\begin{equation*}\n\\begin{split}\nK_2= &C\\sum_{0\\leq l\\leq N}\\int\\nabla^l n\\nabla^{N+1-l}v\\nabla^{N}n dx\\\\\n \\leq &C\\Big(\\|n\\|_{L^\\infty}\\|\\nabla^{N+1} v\\|_{L^2}+\\|\\nabla n\\|_{L^3}\\|\\nabla^{N} v\\|_{L^6}\n +\\sum_{2\\leq l\\leq N-1}\\|\\nabla^ln\\|_{L^6}\\|\\nabla^{N+1-l}v\\|_{L^3}\n +\\|\\nabla v\\|_{L^\\infty}\\|\\nabla^{N}n\\|_{L^2}\\Big)\\|\\nabla^{N}n\\|_{L^2}\\\\\n \\leq&C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\\end{split}\n\\end{equation*}\nwhere we have used \\eqref{lnv} in the last inequality above.\nThanks to Hardy inequality, we compute\n\\begin{equation*}\n\\begin{split}\n|K_3| \\leq& C\\sum_{0\\leq l\\leq 
N}\\|(1+|x|)^{l+1}\\nabla^{l+1}\\bar\\rho\\|_{L^{\\infty}}\n \\|\\f{\\nabla^{N-l}v}{(1+|x|)^{l+1}}\\|_{L^2}\\|\\nabla^{N}n\\|_{L^2}\n\t\t\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\\\\\n|K_4| \\leq& C\\sum_{0\\leq l\\leq N}\\|(1+|x|)^{l}\\nabla^{l}\\bar\\rho\\|_{L^{\\infty}}\n \\|\\f{\\nabla^{N+1-l}v}{(1+|x|)^{l}}\\|_{L^2}\\|\\nabla^{N}n\\|_{L^2}\n\t\t\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{split}\n\\end{equation*}\nSubstituting the estimates of the terms $K_1$--$K_4$ into \\eqref{ks2}, we have\n\\begin{equation}\\label{ws11}\n\\begin{split}\n\\left|\\int \\nabla^{N}\\widetilde{S}_1\\cdot \\nabla^{N}n dx \\right|\n\\leq&C \\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{split}\n\\end{equation}\nRecalling the definition of $\\widetilde{S}_2$ and integrating by parts, we find\n\\begin{equation}\\label{ks22}\n\\begin{split}\n&\\int \\nabla^{N}\\widetilde{S}_2\\cdot \\nabla^{N}v dx\\\\\n=&\\int \\nabla^{N-1}\\Big(\\f{\\mu_1\\gamma}{\\mu}v\\cdot \\nabla v \\Big)\\cdot \\nabla^{N+1}v dx\n+\\int \\nabla^{N-1}\\Big\\{\\wf(n+\\bar\\rho)\\big(\\mu_1\\tri v+\\mu_2\\nabla\\mathop{\\rm div}\\nolimits v\\big) \\Big\\}\\cdot \\nabla^{N+1}v dx\\\\\n&+\\int \\nabla^{N-1}\\Big(\\wg(n+\\bar\\rho) \\nabla n \\Big)\\cdot \\nabla^{N+1}v dx\n+\\int \\nabla^{N-1}\\Big(\\h(n,\\bar\\rho)\\nabla\\bar\\rho\\Big)\\cdot \\nabla^{N+1}v dx\\\\\n\\overset{def}{=}&L_1+L_2+L_3+L_4.\n\\end{split}\n\\end{equation}\nAccording to the Sobolev interpolation inequality \\eqref{Sobolev}\nin Lemma \\ref{inter}, we deduce that\n\\begin{equation}\\label{l1}\n\\begin{split}\n|L_1|\n\\leq &C\\sum_{0\\leq l\\leq N-1}\\|\\nabla^{N-1-l}v\\|_{L^3}\\|\\nabla^{l+1}v\\|_{L^6}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq &C\\sum_{0\\leq l\\leq N-1}\\|\\nabla^{{\\alpha}}v\\|_{L^2}^{\\f{l+2}{N+1}}\n \\|\\nabla^{N+1}v\\|_{L^2}^{1-\\f{l+2}{N+1}}\n \\|v\\|_{L^2}^{1-\\f{l+2}{N+1}}\\|\\nabla^{N+1}v\\|_{L^2}^{\\f{l+2}{N+1}}\n \\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq &C\\delta 
\\|\\nabla^{N+1}v\\|_{L^2}^2,\n\\end{split}\n\\end{equation}\nwhere ${\\alpha}$ is given by\n$$\n{\\alpha}=\\f{N+1}{2(l+2)}\\leq \\f{N+1}{4}.\n$$\nNext, we estimate the term $L_2$ and\ndivide it into two terms as follows\n\\begin{equation}\\label{L2}\n|L_2|\n\\leq C\\int|\\wf(n+\\bar\\rho)||\\nabla^{N+1}v|^2dx\n+C\\sum_{1\\leq l\\leq N-1}\\int|\\nabla^{l}\\wf(n+\\bar\\rho)||\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\n\\overset{def}{=}L_{21}+L_{22}.\n\\end{equation}\n\tBy Sobolev inequality, one can deduce directly that\\begin{equation}\\label{l21}\n\t\\begin{split}\n\t\tL_{21}\\leq C\\|\\wf(n+\\bar\\rho)\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}^2\\leq C\\delta\\|\\nabla^{N+1}v\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\nIt follows from \\eqref{widef} that $L_{22}$ can be\nsplit up into three terms as follows:\n\\begin{equation}\\label{l22}\n\\begin{split}\nL_{22} \\leq\n&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_i=l}\n \\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\\\\\n&+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_j=l}\n \\int|\\nabla^{\\gamma_1}\\bar\\rho|\\cdots|\\nabla^{\\gamma_j}\\bar\\rho||\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\\\\\n&+C\\sum_{1\\leq l\\leq N-1}\n\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\n\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n|\n |\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho|\n |\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\\\\\n\\overset{def}{=} & L_{221}+L_{222}+L_{223}.\n\\end{split}\n\\end{equation}\nWe then divide $L_{221}$ into two terms as follows:\n\\begin{equation*}\n\\begin{aligned}\nL_{221}\n\\leq & C\\sum_{1\\leq l\\leq N-1}\n \\int|\\nabla^{l}n||\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\\\\\n &+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\n 
\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N+1-l}v||\\nabla^{N+1}v|dx\\\\\n\\overset{def}{=}&L_{2211}+L_{2212}.\n\\end{aligned}\n\\end{equation*}\nUsing the Sobolev inequality and estimate \\eqref{lnv}, it holds\n\\begin{equation*}\n\\begin{split}\nL_{2211}\n\\leq& C\\Big(\\|\\nabla n\\|_{L^3}\\|\\nabla^{N}v\\|_{L^6}\n +\\sum_{2\\leq l\\leq N-1}\\|\\nabla^{l}n\\|_{L^6}\\|\\nabla^{N+1-l}v\\|_{L^3}\\Big)\n \\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq& C\\Big(\\|\\nabla n\\|_{H^1}\\|\\nabla^{N+1}v\\|_{L^2}\n +\\sum_{2\\leq l\\leq N-1}\\|n\\|_{L^2}^{1-\\f{l+1}{N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{l+1}{N}}\n \\|\\nabla^{\\alpha} v\\|_{L^2}^{\\f{l+1}{N}}\\|\\nabla^{N+1}v\\|_{L^2}^{1-\\f{l+1}{N}}\\Big)\n \\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq& C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{split}\n\\end{equation*}\nSince $i\\geq2$ in the term $L_{2212}$, we have $\\gamma_{\\beta}\\leq N-2$ for ${\\beta}=1,2,\\cdots,i$.\nThus, we can use the Sobolev interpolation inequality \\eqref{Sobolev} in\nLemma \\ref{inter} to find\n\\begin{equation*}\n\t\\begin{split}\n\t\tL_{2212}\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|\\nabla^{N+1-l}v\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_1}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_1}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_{i}}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N+1-l}v\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\sum_{1\\leq l\\leq 
N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\\|n\\|_{L^2}^{1-\\f{3i+2l}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3i+2l}{2N}}\\|\\nabla^{{\\alpha}}v\\|_{L^2}^{\\f{3i+2l}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}^{1-\\f{3i+2l}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2},\n\t\\end{split}\n\t\\end{equation*}\n\twhere ${\\alpha}$ is given by\n\t$${\\alpha}=1+\\f{3iN}{3i+2l}\\leq 1+\\f35N\\leq N,$$\n\tprovided $l\\geq 1$, $i\\geq2$, $i\\leq l$ and $N\\geq3$. Therefore, it follows from the estimates above that\n\\begin{equation*}\nL_{221}\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2}.\n\\end{equation*}\nThe application of Hardy inequality yields directly\n\\begin{equation*}\n\\begin{split}\nL_{222}\n\\leq &C \\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_j=l}\n \\|(1+|x|)^{\\gamma_1}\\nabla^{\\gamma_1}\\bar \\rho\\|_{L^{\\infty}}\\cdots\\|(1+|x|)^{\\gamma_j}\\nabla^{\\gamma_j}\\bar \\rho\\|_{L^{\\infty}}\n \\|\\f{\\nabla^{N+1-l}v}{(1+|x|)^{l}}\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq & C\\delta \\|\\nabla^{N+1}v\\|_{L^2}^2.\n\\end{split}\n\\end{equation*}\nUsing Sobolev interpolation inequality \\eqref{Sobolev} in\nLemma \\ref{inter} again, we have\n\\begin{equation*}\n\\begin{split}\nL_{223}\n\\leq& C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\times\\|\\f{\\nabla^{N+1-l}v}{(1+|x|)^{l-m}}\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq 
l-1}}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_1}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_1}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_{i}}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N+1-m}v\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\\|n\\|_{L^2}^{1-\\f{3i+2m}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3i+2m}{2N}}\\|\\nabla^{{\\alpha}}v\\|_{L^2}^{\\f{3i+2m}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}^{1-\\f{3i+2m}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2},\n\t\\end{split}\n\t\\end{equation*}\nwhere ${\\alpha}$ is given by\n$${\\alpha}=1+\\f{3iN}{3i+2m}\\leq 1+\\f35N\\leq N,$$\nprovided $i\\geq 1$, $m\\geq 1$ and $i\\leq m$ (indeed, $i\\leq m$ implies $\\f{3i}{3i+2m}\\leq\\f35$).\nThen, substituting the estimates of the terms $L_{221}, L_{222}$ and $L_{223}$\ninto \\eqref{l22}, we have\n\\begin{equation}\\label{L22}\n\\begin{split}\nL_{22}\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2}.\n\\end{split}\n\\end{equation}\nThe combination of the estimates \\eqref{L2}, \\eqref{l21} and \\eqref{L22} yields directly\n\\begin{equation}\\label{l2}\n|L_2|\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2}.\n\\end{equation}\nNow we give the estimate for the term $L_3$. 
Indeed, it is easy to check that\n\\begin{equation}\\label{L3}\n\\begin{split}\n|L_3|\n &\\leq C\\int|\\wg(n+\\bar\\rho)||\\nabla^{N}n||\\nabla^{N+1}v|dx\n +C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_i=l}\n \\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N-l}n||\\nabla^{N+1}v|dx\\\\\n\t&\\quad+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_j=l}\n \\int|\\nabla^{\\gamma_1}\\bar\\rho|\\cdots\n |\\nabla^{\\gamma_j}\\bar\\rho||\\nabla^{N-l}n||\\nabla^{N+1}v|dx\\\\\n\t&\\quad+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\n \\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\n \\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n|\n |\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho|\n |\\nabla^{N-l}n||\\nabla^{N+1}v|dx\\\\\n\t&\\overset{def}{=}L_{31}+L_{32}+L_{33}+L_{34}.\n\\end{split}\n\\end{equation}\nBy Sobolev inequality, it is easy to check that\n\\begin{equation}\\label{L31}\n\\begin{split}\nL_{31}\n\\leq C\\|\\wg(n+\\bar\\rho)\\|_{L^\\infty}\\|\\nabla^{N}n\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\n\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{split}\n\\end{equation}\nFor $L_{32}$, we divide it into the following two terms: \\begin{equation*}\n\\begin{split}\nL_{32}\\leq& C\\sum_{1\\leq l\\leq N-1}\\int|\\nabla^{l}n||\\nabla^{N-l}n||\\nabla^{N+1}v|dx\\\\\n &+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\n \\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N-l}n||\\nabla^{N+1}v|dx\\\\\n\t\\overset{def}{=}&L_{321}+L_{322}.\n\t\\end{split}\n\t\\end{equation*}\nBy Sobolev inequality, it holds\n\\begin{equation*}\n\\begin{split}\nL_{321}\n\\leq& C\\Big(\\|\\nabla n\\|_{L^3}\\|\\nabla^{N-1}n\\|_{L^6}\n +\\sum_{2\\leq l\\leq N-1}\\|\\nabla^{l}n\\|_{L^6}\\|\\nabla^{N-l}n\\|_{L^3}\\Big)\n \\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq& C\\Big(\\|\\nabla 
n\\|_{L^3}\\|\\nabla^{N-1}n\\|_{L^6}\n +\\sum_{2\\leq l\\leq N-1}\n \\|n\\|_{L^2}^{1-\\f{l+1}{N}}\n \\|\\nabla^{N}n\\|_{L^2}^{\\f{l+1}{N}}\n \\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\f{l+1}{N}}\n \\|\\nabla^{N}n\\|_{L^2}^{1-\\f{l+1}{N}}\\Big)\n \\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\\leq& C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\\end{split}\n\\end{equation*}\nwhere ${\\alpha}$ is defined by ${\\alpha}=\\f{3N}{2(l+1)}\\leq \\f{N}{2}$.\nFor the term $L_{322}$, the fact $i\\geq2$ implies\n$\\gamma_{\\beta}+2\\leq l+1\\leq N$ for ${\\beta}=1,2,\\cdots,i$.\nThen, it follows from Sobolev interpolation inequality \\eqref{Sobolev} in Lemma \\ref{inter} that\n\\begin{equation*}\n\t\\begin{split}\n\t\tL_{322}\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|\\nabla^{N-l}n\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_1}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_1}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_{i}}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N-l}n\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N}n\\|_{L^2}^{\\theta}\\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\theta}\\|\\nabla^{N}n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2},\n\t\\end{split}\n\t\\end{equation*}\nwhere we denote\n$$\\theta=\\f{3i+2l}{2N},\\quad{\\alpha}=\\f{3iN}{3i+2l}\\leq \\f35N\\leq N,$$\nprovided $l\\geq1$, $i\\leq l$ and $i\\geq2$.\nThen, the combination of the estimates of $L_{321}$ and $L_{322}$ implies directly\n\\begin{equation}\\label{L32}\n\tL_{32}\\leq 
C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2}.\n\\end{equation}\n For the term $L_{33}$, by Hardy inequality, we obtain\\begin{equation}\\label{L33}\n\t\\begin{split}\n\t\tL_{33}\n\t\t\\leq&C \\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_j=l}\\|(1+|x|)^{\\gamma_1}\\nabla^{\\gamma_1}\\bar \\rho\\|_{L^{\\infty}}\\cdots\\|(1+|x|)^{\\gamma_j}\\nabla^{\\gamma_j}\\bar \\rho\\|_{L^{\\infty}}\\|\\f{\\nabla^{N-l}n}{(1+|x|)^{l}}\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq& C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tIn view of Sobolev interpolation inequality \\eqref{Sobolev} in\n\tLemma \\ref{inter} and Hardy inequality, one deduces that\\begin{equation}\\label{L34}\n\t\\begin{split}\n\t\tL_{34}\\leq& C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\times\\|\\f{\\nabla^{N-l}n}{(1+|x|)^{l-m}}\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_1}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_1}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_{i}}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N-m}n\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq 
l-1}}\\|n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N}n\\|_{L^2}^{\\theta}\\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\theta}\\|\\nabla^{N}n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2},\n\t\\end{split}\n\t\\end{equation}\n\twhere\n\t$$\\theta=\\f{3i+2m}{2N},\\quad{\\alpha}=\\f{3iN}{3i+2m}\\leq \\f35N\\leq N,$$\n\tprovided $i\\geq 1$, $m\\geq 1$ and $i\\leq m$. Substituting \\eqref{L31}--\\eqref{L34} into \\eqref{L3}, we find that\n\t\\begin{equation}\\label{l3}\n\t\\begin{split}\n\t\t|L_3|\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^{2}.\n\t\\end{split}\n\t\\end{equation}\n\tFor the last term $L_4$, with the aid of the estimates \\eqref{h} and \\eqref{h2} of $\\h$, it is easy to deduce\\begin{equation}\\label{LL4}\n\t\\begin{split}\n\t\t|L_4|\\leq&C\\int|\\h(n,\\bar\\rho)||\\nabla^{N}\\bar\\rho||\\nabla^{N+1}v|dx+C\\sum_{1\\leq l\\leq N-1}\\int|\\nabla^{l}\\h(n,\\bar\\rho)||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx\\\\\n\t\t\\leq &C\\int |n||\\nabla^{N}\\bar\\rho||\\nabla^{N+1}v|dx+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\int|n||\\nabla^{\\gamma_1}\\bar\\rho|\\cdots|\\nabla^{\\gamma_j}\\bar\\rho||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx\\\\\n\t\t&\\quad+C\\sum_{0\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_i=l}\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx\\\\\n\t\t&\\quad+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx\\\\\n\t\t\\overset{def}{=}&L_{41}+L_{42}+L_{43}+L_{44}.\n\t\\end{split}\n\t\\end{equation}\n\tAccording to Hardy inequality, we obtain immediately\n\t\\begin{equation*}\n\t\\begin{split}\n\t\tL_{41}\\leq& C
\\|\\f{n}{(1+|x|)^{N}}\\|_{L^2}\\|(1+|x|)^{N}\\nabla^{N}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\\\\\n\t\tL_{42}\\leq& \\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_j=l} \\|\\f{n}{(1+|x|)^{N}}\\|_{L^2}\\|(1+|x|)^{\\gamma_1}\\nabla^{\\gamma_1}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_j}\\nabla^{\\gamma_j}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\quad\\times\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq& C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation*}\n\tNext, let us deal with $L_{43}$. It is easy to deduce that\\begin{equation*}\n \\begin{split}\n \tL_{43}\\leq& C\\sum_{0\\leq l\\leq N-1}\\int|\\nabla^{l}n||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx+C\\sum_{0\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_i=l\\\\i\\geq2}}\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_i}n||\\nabla^{N-l}\\bar\\rho||\\nabla^{N+1}v|dx\\\\\n \t\\overset{def}{=}&L_{431}+L_{432}.\n \\end{split}\n \\end{equation*}\n We employ Hardy inequality once again to get\n \\begin{equation*}\n \\begin{split}\n \tL_{431}\\leq \\sum_{0\\leq l\\leq N-1} \\|\\f{\\nabla^ln}{(1+|x|)^{N-l}}\\|_{L^2}\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n \\end{split}\n \\end{equation*}\n\tThe fact $i\\geq2$ implies that $\\gamma_{\\beta}+2\\leq l+1\\leq N$, for ${\\beta}=1,\\cdots,i$. 
Applying Sobolev interpolation inequality \\eqref{Sobolev} in Lemma \\ref{inter} and Hardy inequality, we obtain\\begin{equation*}\n\t\\begin{split}\n\t\tL_{432}\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-l}}\\|_{L^2}\\|\\nabla^{\\gamma_2}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|\\nabla^{N-l+\\gamma_1}n\\|_{L^2}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_2}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_2}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i}=l\\\\i\\geq2}}\\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\theta}\\|\\nabla^{N}n\\|_{L^2}^{1-\\theta}\\|n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N}n\\|_{L^2}^{\\theta}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\t\\end{split}\n\t\\end{equation*}\n\twhere\n\t$$\\theta=\\f{3(i-1)+2(l-\\gamma_1)}{2N},\\quad {\\alpha}=\\f{3(i-1)N}{3(i-1)+2(l-\\gamma_1)}\\leq \\f{3}{5}N\\leq N,$$\n\tprovided that $i\\geq2$ and $i-1\\leq l-\\gamma_1$.\n\tThe estimates of the terms $L_{431}$ and $L_{432}$ give\\begin{equation*}\n\tL_{43}\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{equation*}\n\tThe last term on the right-hand side of \\eqref{LL4} can be divided into two terms:\n\t\\begin{equation*}\n\t\\begin{split}\n\tL_{44}\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{1+j}=l\\\\\\gamma_1=m\\\\1\\leq m\\leq 
l-1}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-m}}\\|_{L^2}\\|(1+|x|)^{\\gamma_{2}}\\nabla^{\\gamma_{2}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{1+j}}\\nabla^{\\gamma_{1+j}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t&\\times\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1,~i\\geq2}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-m}}\\|_{L^2}\\|\\nabla^{\\gamma_2}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\\\\\n\t&\\times\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\\overset{def}{=}&L_{441}+L_{442}.\n\t\\end{split}\n \\end{equation*}\n We use Hardy inequality to find\\begin{equation*}\n \\begin{split}\n \tL_{441}\\leq C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{1+j}=l\\\\\\gamma_1=m\\\\1\\leq m\\leq l-1}}\\|\\nabla^{N}n\\|_{L^2}\\|\\nabla^{N+1}v\\|_{L^2}\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n \\end{split}\n \\end{equation*}\n To deal with the term $L_{442}$, by virtue of Sobolev interpolation inequality \\eqref{Sobolev} in Lemma \\ref{inter} and Hardy inequality, we arrive at\n\t\\begin{equation*}\n\t\\begin{split}\n\tL_{442}\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1,~i\\geq2}}\\|\\nabla^{N-m+\\gamma_1}n\\|_{L^2}\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_2}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_2}{2N}}\\cdots\\|n\\|_{L^2}^{1-\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N}n\\|_{L^2}^{\\f{3+2\\gamma_i}{2N}}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq 
l-1,~i\\geq2}}\\|\\nabla^{{\\alpha}}n\\|_{L^2}^{\\theta}\\|\\nabla^{N}n\\|_{L^2}^{1-\\theta}\\|n\\|_{L^2}^{1-\\theta}\\|\\nabla^{N}n\\|_{L^2}^{\\theta}\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\\leq&C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2,\n\t\\end{split}\n\t\\end{equation*}\n\twhere\n\t$$\\theta=\\f{3(i-1)+2(m-\\gamma_1)}{2N},\\quad{\\alpha}=\\f{3(i-1)N}{3(i-1)+2(m-\\gamma_1)}\\leq \\f{3}{5}N\\leq N,$$\n\tprovided that $i\\geq2$ and $i-1\\leq m-\\gamma_1$.\n\tHence, the combination of estimates of terms $L_{441}$ and $L_{442}$ implies directly\n\t\\begin{equation*}\n\tL_{44}\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{equation*}\n\tWe substitute estimates of terms from $L_{41}$ to $L_{44}$ into \\eqref{LL4} to find\\begin{equation}\\label{l4}\n\t\\begin{split}\n\t\t|L_4|\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tInserting \\eqref{l1}, \\eqref{l2}, \\eqref{l3} and \\eqref{l4} into \\eqref{ks22}, we thereby deduce that\\begin{equation}\\label{ws22}\n\t\\begin{split}\n\t\t\\Big|\\int \\nabla^{N}\\widetilde{S}_2\\cdot \\nabla^{N}v dx\\Big|\\leq C\\delta\\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tPlugging \\eqref{ws11} and \\eqref{ws22} into \\eqref{ehk1} gives \\eqref{en2} directly. 
Therefore, the proof of this lemma is completed.\n\\end{proof}\nFinally, we aim to recover the dissipation estimate for $n$.\n\\begin{lemm}\\label{ennjc}\n\tUnder the assumptions in Theorem \\ref{them3}, for $1\\leq k\\leq N-1$, we have\n\t\\begin{equation}\\label{en3}\n\t\\f{d}{dt}\\int \\nabla^k v\\cdot\\nabla^{k+1}ndx+\\|\\nabla^{k+1}n\\|_{L^2}^2\\leq C_2\\|(\\nabla^{k+1}v,\\nabla^{k+2}v)\\|_{L^2}^2,\n\t\\end{equation}\n\twhere $C_2$ is a positive constant independent of $t$.\n\\end{lemm}\n\\begin{proof}\n\tApplying the differential operator $\\nabla^k$ to $\\eqref{ns5}_2$, multiplying the resulting equation by $\\nabla^{k+1}n$, and integrating over $\\mathbb{R}^3$, one arrives at\\begin{equation}\\label{vnjc}\n\t\\begin{split}\n\t\t\\int \\nabla^{k}v_t\\cdot\\nabla^{k+1}n dx+\\|\\nabla^{k+1}n\\|_{L^2}^2\\leq C \\|\\nabla^{k+2}v\\|_{L^2}^2+\\int \\nabla^k\\widetilde{S}_2\\cdot\\nabla^{k+1}n dx.\n\t\\end{split}\n\t\\end{equation}\n\tIn order to deal with $\\int \\nabla^{k}v_t\\cdot\\nabla^{k+1}n dx$, we transfer\n\tthe time derivative from the velocity to the density.\n\tThen, applying the differential operator $\\nabla^k$ to the mass equation $\\eqref{ns5}_1$, we find\n\t\\[\\nabla^kn_t+\\gamma\\nabla^{k}\\mathop{\\rm div}\\nolimits v=\\nabla^k \\widetilde{S}_1.\\]\n\tHence, we can transform the time derivative into a spatial derivative, i.e.,\\begin{equation*}\n\t\\begin{split}\n\t\t\\int \\nabla^{k}v_t\\cdot\\nabla^{k+1}n dx=&\\f{d}{dt}\\int \\nabla^{k}v\\cdot\\nabla^{k+1}n dx-\\int \\nabla^{k}v\\cdot\\nabla^{k+1}n_t dx\\\\\n\t\t=&\\f{d}{dt}\\int \\nabla^{k}v\\cdot\\nabla^{k+1}n dx+\\gamma\\int \\nabla^{k} v\\cdot\\nabla^{k+1}\\mathop{\\rm div}\\nolimits v dx-\\int \\nabla^{k} v\\cdot\\nabla^{k+1}\\widetilde{S}_1 dx\\\\\n\t\t=&\\f{d}{dt}\\int \\nabla^{k}v\\cdot\\nabla^{k+1}n dx-\\gamma\\|\\nabla^{k}\\mathop{\\rm div}\\nolimits v\\|_{L^2}^2-\\int \\nabla^{k+1}\\mathop{\\rm div}\\nolimits v\\cdot\\nabla^{k-1}\\widetilde{S}_1 dx.\n\t\\end{split}\n\t\\end{equation*}\n\tSubstituting the 
identity above into \\eqref{vnjc} and integrating by parts yield\\begin{equation}\\label{nvjc2}\n\t\\begin{split}\n\t\t&\\f{d}{dt}\\int \\nabla^{k}v\\cdot\\nabla^{k+1}n dx+\\|\\nabla^{k+1}n\\|_{L^2}^2\\\\\n\t\t\\leq& C \\|(\\nabla^{k+1}v,\\nabla^{k+2}v)\\|_{L^2}^2+C\\int \\nabla^{k+1}\\mathop{\\rm div}\\nolimits v\\cdot\\nabla^{k-1}\\widetilde{S}_1dx-C\\int \\nabla^k\\widetilde{S}_2\\cdot\\nabla^{k+1}n dx.\n\t\\end{split}\n\t\\end{equation}\n\tAs for the term involving $\\widetilde{S}_1$, it follows, in a similar way to the estimates of the terms $I_1$-$I_4$ in Lemma \\ref{enn-1}, that\n\t\\begin{equation}\\label{ss1}\n\t\\begin{split}\n\t\t\\Big|\\int \\nabla^{k+1}\\mathop{\\rm div}\\nolimits v\\cdot\\nabla^{k-1}\\widetilde{S}_1 dx\\Big|\\leq C\\|\\nabla^{k+2}v\\|_{L^2}\\|\\nabla^{k-1}\\widetilde{S}_1\\|_{L^2}\n\t\t\\leq C \\delta \\|\\nabla^{k+1}n\\|_{L^2}^2+C\\|(\\nabla^{k+1}v,\\nabla^{k+2}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tTo deal with the term involving $\\widetilde{S}_2$, we only need to follow the ideas used in the estimates of the terms $L_1$-$L_4$ in Lemma \\ref{enn}. 
Hence, we\n\tgive the estimates as follows:\n\t\\begin{equation}\\label{ss2}\n\t\\begin{split}\n\t\t\\Big|\\int \\nabla^k\\widetilde{S}_2\\cdot\\nabla^{k+1}n dx\\Big|\\leq C \\|\\nabla^k\\widetilde{S}_2\\|_{L^2}\\|\\nabla^{k+1}n\\|_{L^2}\\leq C \\delta\\|(\\nabla^{k+1}n,\\nabla^{k+2}v)\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tWe then utilize \\eqref{ss1} and \\eqref{ss2} in \\eqref{nvjc2} to deduce \\eqref{en3} directly.\n\\end{proof}\n\n\\underline{\\noindent\\textbf{The proof of Proposition \\ref{nenp}.}}\nWith the help of Lemmas \\ref{enn-1}-\\ref{ennjc},\nit is easy to establish the estimate \\eqref{eml}.\nTherefore, we complete the proof of Proposition \\ref{nenp}.\n\n\\subsection{Optimal decay of higher order derivatives}\nIn this subsection, we will study the optimal decay rate for the\n$k$-th $(2 \\leq k\\leq N-1)$ order spatial derivatives of the global solution.\nIn order to achieve this target, the optimal decay rate of the higher order spatial\nderivatives will be established from that of the lower order ones.\nIn this direction, the Fourier splitting method, developed by Schonbek (see \\cite{Schonbek1985}),\nwas applied to establish the optimal decay rate for higher order derivatives\nof global solutions in \\cite{{Schonbek-Wiegner},{gao2016}}.\nHere, however, we use a time-weighted energy method and mathematical induction to solve this problem.\n\n\\begin{lemm}\\label{N-1decay}\n\tUnder the assumption of Theorem \\ref{them3}, for $0\\leq k\\leq N-1$, we have\n\t\\begin{equation}\\label{n1h1}\n\t\\|\\nabla^k(n,v)\\|_{H^{N-k}}\\leq C (1+t)^{-\\f34-\\f k2},\n\t\\end{equation}\n\twhere $C$ is a positive constant independent of time.\n\\end{lemm}\n\\begin{proof}\nWe will argue by induction to prove\nthe decay rate \\eqref{n1h1}. 
In fact, the decay rate \\eqref{basic-decay}\nimplies \\eqref{n1h1} holds true for the cases $k=0, 1$.\nFor the induction step, assume that the decay rate \\eqref{n1h1}\nholds for the case $k=m$, i.e.,\n\t\\begin{equation}\\label{inducass}\n\t\\|\\nabla^m (n,v)\\|_{H^{N-m}}\\leq C (1+t)^{-\\f34-\\f m2},\n\t\\end{equation}\nfor $m=1,...,N-2$.\nChoosing the integer $l=m$ in \\eqref{eml} and\nmultiplying it by $(1+t)^{\\f32+m+\\ep_0}$ $(0<\\ep_0<1)$, we have\n\\begin{equation*}\n\\begin{split}\n\\f{d}{dt}\\Big\\{(1+t)^{\\f32+m+\\ep_0} \\mathcal{E}^N_m(t)\\Big\\}\n+(1+t)^{\\f32+m+\\ep_0}\\big(\\|\\nabla^{m+1}n\\|_{H^{N-m-1}}^2\n+\\|\\nabla^{m+1}v\\|_{H^{N-m}}^2\\big)\\leq C(1+t)^{\\f12+m+\\ep_0} \\mathcal{E}^N_m(t).\n\\end{split}\n\\end{equation*}\nIntegrating with respect to $t$, using the equivalent relation \\eqref{emleq} and the decay estimate \\eqref{inducass}, one obtains\n\\begin{equation}\\label{energy2}\n\\begin{split}\n&(1+t)^{\\f32+m+\\ep_0} \\mathcal{E}^N_m(t)\n +\\int_0^t(1+\\tau)^{\\f32+m+\\ep_0}\\big(\\|\\nabla^{m+1}n\\|_{H^{N-m-1}}^2\n +\\|\\nabla^{m+1}v\\|_{H^{N-m}}^2\\big)d\\tau\\\\\n\\leq& \\mathcal{E}^N_m(0)+C\\int_0^t(1+\\tau)^{\\f12+m+\\ep_0} \\mathcal{E}^N_m(\\tau)d\\tau\\\\\n\\leq&C\\|\\nabla^m(n_0,v_0)\\|_{H^{N-m}}^2\n +C\\int_0^t(1+\\tau)^{\\f12+m+\\ep_0} \\|\\nabla^m(n,v)\\|_{H^{N-m}}^2d\\tau\\\\\n\\leq&C\\|\\nabla^m(n_0,v_0)\\|_{H^{N-m}}^2\n +C\\int_0^t(1+\\tau)^{-1+\\ep_0}d\\tau\\leq C(1+t)^{\\ep_0}.\n\\end{split}\n\\end{equation}\nOn the other hand, taking $l=m+1$ in \\eqref{eml}, we have\n\\begin{equation}\\label{Ekk}\n\\begin{split}\\f{d}{dt}\\mathcal{E}^{N}_{m+1}(t)\n+\\|\\nabla^{m+2}n\\|_{H^{N-m-2}}^2+\\|\\nabla^{m+2}v\\|_{H^{N-m-1}}^2\\leq 0.\n\\end{split}\n\\end{equation}\nMultiplying \\eqref{Ekk} by $(1+t)^{\\f52+m+\\ep_0}$,\nintegrating over $[0, t]$ and using estimate \\eqref{energy2}, we 
find\n\\begin{equation*}\n\\begin{split}\n&(1+t)^{\\f52+m+\\ep_0}\\mathcal{E}^{N}_{m+1}(t)\n+\\int_0^t(1+\\tau)^{\\f52+m+\\ep_0}\\big(\\|\\nabla^{m+2}n\\|_{H^{N-m-2}}^2\n+\\|\\nabla^{m+2}v\\|_{H^{N-m-1}}^2\\big)d\\tau\\\\\n\\leq& \\mathcal{E}^{N}_{m+1}(0)\n+\\int_0^t(1+\\tau)^{\\f32+m+\\ep_0}\\mathcal{E}^{N}_{m+1}(\\tau)d\\tau\\\\\n\\leq&C\\|\\nabla^{m+1}(n_0,v_0)\\|_{H^{N-m-1}}^2\n+\\int_0^t(1+\\tau)^{\\f32+m+\\ep_0}\\|\\nabla^{m+1}(n,v)\\|_{H^{N-m-1}}^2d\\tau\n \t\\leq C(1+t)^{\\ep_0},\n\\end{split}\n\\end{equation*}\nwhich, together with the equivalent relation \\eqref{emleq}, yields immediately\n\\begin{equation*}\n\\|\\nabla^{m+1}(n,v)\\|_{H^{N-m-1}}\\leq C(1+t)^{-\\f34-\\f {m+1}2}.\n\\end{equation*}\nThen, the decay estimate \\eqref{n1h1} holds for the case $k=m+1$.\nBy the general step of induction, we complete the proof of this lemma.\n\\end{proof}\n\n\n\\subsection{Optimal decay of critical derivative}\n\nIn this subsection, our target is to establish the optimal decay rate for the\n$N$-th order spatial derivative of the global solution $(n, v)$.\nThe decay rate of the $N$-th order derivative of the global solution $(n, v)$ obtained in Lemma \\ref{N-1decay}\nis not optimal, since it only coincides with that of the lower order derivatives.\nThe loss of time decay comes from the appearance of the cross term\n$\\frac{d}{dt}\\int \\nabla^{N-1} v \\cdot \\nabla^N n dx$ in the energy functional\nwhen we set up the dissipation estimate for the density.\nNow let us introduce some notations that will be used frequently in this subsection.\nLet $0\\leq\\varphi_0(\\xi)\\leq1$ be a function in $C_0^{\\infty}(\\mathbb{R}^3)$ such that\\begin{equation*}\n\\begin{split}\n\\varphi_0(\\xi)=\\left\\{\n\\begin{array}{ll}\n1,\\quad \\text{for}~~|\\xi|\\leq \\f{\\eta}{2},\\\\[1ex]\n0,\\quad\\text{for}~~|\\xi|\\geq \\eta, \\\\[1ex]\n\\end{array}\n\\right.\n\\end{split}\n\\end{equation*}\nwhere $\\eta$ is a fixed positive constant. 
Based on the Fourier transform, we can define a low-high-frequency decomposition $(f^l(x),f^h(x))$ for a function $f(x)$ as follows:\n\\begin{equation}\\label{def-h-l}\nf^l(x)\\overset{def}{=}\\mathcal{F}^{-1}(\\varphi_0(\\xi)\\widehat{f}(\\xi))~~\\text{and}~~f^h(x)\\overset{def}{=}f(x)-f^l(x).\n\\end{equation}\n\n\\begin{lemm}\\label{highfrequency}\nUnder the assumptions of Theorem \\ref{them3},\nthere exists a positive small constant $\\eta_3$, such that\n\\begin{equation}\\label{en6}\n\\begin{split}\t&\\f{d}{dt}\\Big\\{\\|\\nabla^{N}(n,v)\\|_{L^2}^2-\\eta_3\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}v}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi \\Big\\}+\\|\\nabla^{N}v^h\\|_{L^2}^2+\\eta_3\\|\\nabla^{N}n^h\\|_{L^2}^2\\\\\n\t\t\\leq& C_4\\|\\nabla^{N}(n^l, v^l)\\|_{L^2}^2+C(1+t)^{-3-N},\n\\end{split}\n\\end{equation}\n\twhere $C_4$ is a positive constant independent of time.\n\\end{lemm}\n\t\\begin{proof}\nApplying the differential operator $\\nabla^{N-1}$ to the system \\eqref{ns5}, we obtain\n\\begin{equation}\\label{ns6}\n\\left\\{\\begin{array}{lr}\n\\nabla^{N-1}n_t +\\gamma\\nabla^{N-1}\\mathop{\\rm div}\\nolimits v=\\nabla^{N-1}\\widetilde S_1,\\\\\n\\nabla^{N-1}v_t+\\gamma\\nabla^{N} n-\\mu_1\\nabla^{N-1}\\tri v-\\mu_2\\nabla^{N}\\mathop{\\rm div}\\nolimits v\n=\\nabla^{N-1}\\widetilde S_2.\n\\end{array}\\right.\n\\end{equation}\nTaking the Fourier transform of $\\eqref{ns6}_2$, multiplying the resulting equation by $\\overline{\\widehat{\\nabla^{N}n}}$ and integrating over $\\{\\xi:|\\xi|\\geq \\eta\\}$, we get\n\\begin{equation}\\label{f1}\n\\begin{split}\n&\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}v_t}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi+\\gamma\\int_{|\\xi|\\geq\\eta}|\\widehat{\\nabla^Nn}|^2d\\xi\\\\\n=&\\int_{|\\xi|\\geq\\eta}\\big(\\mu_1\\widehat{\\nabla^{N-1}\\tri v}+\\mu_2\\widehat{\\nabla^N\\mathop{\\rm div}\\nolimits v}\\big)\\cdot 
\\overline{\\widehat{\\nabla^{N}n}}d\\xi+\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}\\widetilde S_2}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi.\n\\end{split}\n\\end{equation}\nIt is easy to deduce from $\\eqref{ns6}_1$ that\n\\begin{equation*}\n\\begin{split}\n\t\t\\widehat{\\nabla^{N-1}v_t}\\cdot \\overline{\\widehat{\\nabla^{N}n}}=&-i\\xi\\widehat{\\nabla^{N-1}v_t}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}}=-\\widehat{\\nabla^Nv_t}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}}\\\\\n\t\t=&-{\\partial}_t(\\widehat{\\nabla^Nv}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}})+\\widehat{\\nabla^Nv}\\cdot \\overline{\\widehat{\\nabla^{N-1}n_t}}\\\\\n\t\t=&-{\\partial}_t(\\widehat{\\nabla^Nv}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}})-\\gamma\\widehat{\\nabla^Nv}\\cdot\n\\overline{\\widehat{\\nabla^{N-1}\\mathop{\\rm div}\\nolimits v}}+\\widehat{\\nabla^Nv}\\cdot \\overline{\\widehat{\\nabla^{N-1}\\widetilde{S}_1}}.\n\t\\end{split}\n\t\\end{equation*}\nThen, substituting this identity into identity \\eqref{f1}, we have\n\\begin{equation}\\label{f2}\n\\begin{split}\n\t\t&-\\f{d}{dt}\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N}v}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}}d\\xi+\\gamma\\int_{|\\xi|\\geq\\eta}|\\widehat{\\nabla^Nn}|^2d\\xi\\\\\n\t\t=&\\int_{|\\xi|\\geq\\eta}\\big(\\mu_1\\widehat{\\nabla^{N-1}\\tri v}+\\mu_2\\widehat{\\nabla^N\\mathop{\\rm div}\\nolimits v}\\big)\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi+\\gamma\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N}v}\\cdot \\overline{\\widehat{\\nabla^{N-1}\\mathop{\\rm div}\\nolimits v}}d\\xi \\\\\n\t\t&\\quad-\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N}v}\\cdot \\overline{\\widehat{\\nabla^{N-1}\\widetilde S_1}}d\\xi +\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}\\widetilde S_2}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi\\\\\n\t\t\\overset{def}{=}&M_1+M_2+M_3+M_4.\n\\end{split}\n\\end{equation}\nThe application of Cauchy inequality yields 
directly\n\\begin{equation}\\label{N1}\n\\begin{split}\n|M_1|\n\\leq& C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N+1}|\\widehat{v}||\\widehat{n}|d\\xi\n\\leq\\ep \\int_{|\\xi|\\geq\\eta}|\\xi|^{2N}|\\widehat{n}|^2d\\xi\n +C_{\\ep}\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N+1)}|\\widehat{v}|^2d\\xi,\n\\end{split}\n\\end{equation}\nfor some small $\\ep$, which will be determined later.\nObviously, it holds true\n\\begin{equation}\\label{N2}\n|M_2| \\leq C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N}|\\widehat{v}|^2d\\xi.\n\\end{equation}\nUsing the Cauchy inequality and definition of $\\widetilde{S}_1$, it is easy to check that\n\\begin{equation}\\label{m3}\n\\begin{split}\n|M_3|\n\\leq&C \\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N+1)}|\\widehat{v}|^2d\\xi\n +C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-2)}|\\widehat{\\widetilde S_1}|^2 d\\xi \\\\\n\\leq&C \\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N+1)}|\\widehat{v}|^2d\\xi\n +C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-2)}|\\widehat{\\nabla nv+n\\nabla v}|^2d\\xi\\\\\n & +C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-2)}|\\widehat{\\nabla\\bar\\rho v+\\bar\\rho\\nabla v}|^2d\\xi\\\\\n\\overset{def}{=}&\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N+1)}|\\widehat{v}|^2d\\xi+M_{31}+M_{32}.\n\\end{split}\n\\end{equation}\nUsing Plancherel Theorem and Sobolev inequality, it is easy to check that\n\\begin{equation}\\label{m31}\n\\begin{split}\nM_{31}\n\\le &C\\|\\nabla^{N-2}(\\nabla nv+n\\nabla v)\\|_{L^2}^2\\\\\n\\leq &C\\big(\\|\\nabla n\\|_{L^\\infty}^2\\|\\nabla^{N-2}v\\|_{L^2}^2\n +\\|\\nabla^{N-1}n\\|_{L^2}^2\\|v\\|_{L^\\infty}^2\n +\\|n\\|_{L^\\infty}^2\\|\\nabla^{N-1}v\\|_{L^2}^2\n +\\|\\nabla^{N-2}n\\|_{L^2}^2\\|\\nabla v\\|_{L^\\infty}^2\\big)\\\\\t\t\n\\leq&C\\big(\\|\\nabla^{2}(n,v)\\|_{H^1}^2\\|\\nabla^{N-2}(n,v)\\|_{L^2}^2\n +\\|\\nabla(n,v)\\|_{H^1}^2\\|\\nabla^{N-1}v\\|_{L^2}^2\\big)\\\\\n\\leq&C(1+t)^{-3-N},\n\\end{split}\n\\end{equation}\nwhere we have used the decay \\eqref{n1h1} in the last inequality.\nSimilarly, we also apply Hardy's inequality to 
obtain\n\\begin{equation}\\label{m32}\n\\begin{split}\nM_{32}\n\\leq &C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-1)}|\\widehat{\\nabla\\bar\\rho v+\\bar\\rho\\nabla v}|^2d\\xi\n\\le C\\|\\nabla^{N-1}(\\nabla \\bar\\rho v+\\bar\\rho\\nabla v)\\|_{L^2}^2 \\\\\n\t\t\\leq&C\\sum_{0\\leq l\\leq N-1}\\Big(\\|(1+|x|)^{l+1}\\nabla^{l+1}\\bar\\rho\\|_{L^\\infty}\\|\\f{\\nabla^{N-1-l}v}{(1+|x|)^{l+1}}\\|_{L^2}+\\|(1+|x|)^{l}\\nabla^{l}\\bar\\rho\\|_{L^\\infty}\\|\\f{\\nabla^{N-l}v}{(1+|x|)^{l}}\\|_{L^2}\\Big)\\|\\nabla^{N+1}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\|(\\nabla^{N}v,\\nabla^{N+1}v)\\|_{L^2}^2,\n\\end{split}\n\\end{equation}\nwhere we have used the fact that for any suitable function $\\phi$,\nthere exists a positive constant $C$ dependent only on $\\eta$ such that\n\\begin{equation*}\n\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-2)}|\\widehat{\\phi}|^2d\\xi\\leq C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2(N-1)}|\\widehat{\\phi}|^2d\\xi.\n\\end{equation*}\nSubstituting the estimates \\eqref{m31} and \\eqref{m32} into \\eqref{m3},\none can get that\n\\begin{equation}\\label{N3}\n|M_3|\\leq C\\delta\\|(\\nabla^{N}v,\\nabla^{N+1}v)\\|_{L^2}^2 +C(1+t)^{-3-N}.\n\\end{equation}\nUsing Cauchy inequality and the definition of $\\widetilde S_2$, we have\n\\begin{equation}\\label{m4}\n\\begin{split}\n|M_4|\n\\leq &C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N-1}|\\widehat{\\widetilde S_2}|| {\\widehat{n}}|d\\xi\\\\\n\t\t\\leq&C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N-1}| {\\widehat{n}}||\\widehat{v\\cdot \\nabla v}|d\\xi\n +C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N-1}| {\\widehat{n}}\n |\\big(\\mu_1|\\widehat{\\wf(n+\\bar\\rho)\\tri v}|\n +\\mu_2|\\widehat{\\wf(n+\\bar\\rho)\\nabla\\mathop{\\rm div}\\nolimits v}|\\big)d\\xi\\\\\n&\\quad+C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N-1}| {\\widehat{n}}|\n |\\widehat{\\wg(n+\\bar\\rho)\\nabla n}|d\\xi\n +C\\int_{|\\xi|\\geq\\eta}|\\xi|^{2N-1}| {\\widehat{n}}|\n |\\widehat{\\h(n,\\bar\\rho)\\nabla 
\\bar\\rho}|d\\xi\\\\\n\\overset{def}{=}&M_{41}+M_{42}+M_{43}+M_{44}.\n\\end{split}\n\\end{equation}\nIn view of the Plancherel theorem, the Sobolev inequality and\nthe commutator estimate in Lemma \\ref{commutator}, we find\n\\begin{equation}\\label{M41}\n\\begin{split}\nM_{41}\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|\\nabla^{N-1}(v\\cdot\\nabla v)\\|_{L^2}^2\\\\\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|v\\|_{L^\\infty}^2\\|\\nabla^Nv\\|_{L^2}^2\n +C_{\\ep}\\|[\\nabla^{N-1},v]\\cdot\\nabla v\\|_{L^2}^2\\\\\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|\\nabla v\\|_{H^1}^2\\|\\nabla^Nv\\|_{L^2}^2\n +C_{\\ep}\\|\\nabla v\\|_{L^\\infty}^2\\|\\nabla^{N-1} v\\|_{L^2}^2\\\\\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}(1+t)^{-3-N},\n\\end{split}\n\\end{equation}\nwhere we have used the estimate \\eqref{n1h1} in the last inequality.\nSimilarly, it is easy to check that\n\\begin{equation}\\label{m42}\n\\begin{split}\nM_{42}\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|\\nabla^{N-1}\\big(\\wf(n+\\bar\\rho)\n (\\mu_1\\tri v+\\mu_2\\nabla\\mathop{\\rm div}\\nolimits v)\\big)\\|_{L^2}^2\\\\\n\\leq &\\ep\\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|(n,\\bar\\rho)\\|_{L^\\infty}^2\n \\|\\nabla^{N+1}v\\|_{L^2}^2+C_{\\ep}\\|\\nabla n\\|_{L^3}^2\\|\\nabla^{N}v\\|_{L^6}^2\n +C_{\\ep}\\|(1+|x|)\\nabla \\bar\\rho\\|_{L^\\infty}^2 \\|\\f{\\nabla^{N}v}{1+|x|}\\|_{L^2}^2\\\\\n &\\quad+C_{\\ep}\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wf(n+\\bar\\rho)\\nabla^{N+1-l}v\\|_{L^2}^2\\\\\n\\leq &\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\delta\\|\\nabla^{N+1}v\\|_{L^2}^2\n +C_{\\ep}\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wf(n+\\bar\\rho)\\nabla^{N+1-l}v\\|_{L^2}^2.\n\\end{split}\n\\end{equation}\nNow let us deal with the last term on the right-hand side\nof \\eqref{m42}. 
Indeed, it is easy to deduce that\n\\begin{equation*}\n\\begin{split}\n&\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wf(n+\\bar\\rho)\\nabla^{N+1-l}v\\|_{L^2}\\\\\n\\leq& C\\sum_{2\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\0\\leq m\\leq l}}\\||\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_{i}}n||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho||\\nabla^{N+1-l}v|\\|_{L^2}.\n\\end{split}\n\\end{equation*}\nFor $m=0$, we apply the Hardy inequality to obtain\n\\begin{equation*}\n\\begin{split}\n\\sum_{2\\leq l\\leq N-1}\n\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\|(1+|x|)^{\\gamma_{1}}\\nabla^{\\gamma_{1}}\\bar\\rho\\|_{L^\\infty}\n\\cdots\\|(1+|x|)^{\\gamma_{j}}\\nabla^{\\gamma_{j}}\\bar\\rho\\|_{L^\\infty}\n\\|\\f{\\nabla^{N+1-l}v}{(1+|x|)^{l}}\\|_{L^2}\n\\leq C\\delta\\|\\nabla^{N+1}v\\|_{L^2}.\n\\end{split}\n\\end{equation*}\nFor $1\\leq m\\leq l-1$, the Sobolev inequality\nand decay estimate \\eqref{n1h1} imply\n\\begin{equation*}\n\\begin{split}\n&\\sum_{2\\leq l\\leq N-1}\n\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\n \\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_{i}}n\\|_{L^\\infty}\n \\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\n \\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\n \\|\\f{\\nabla^{N+1-l}v}{(1+|x|)^{l-m}}\\|_{L^2}\\\\\n\\leq&C\\sum_{2\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots\n +\\gamma_{i}=m\\\\1\\leq m\\leq l-1}}\n\\|\\nabla^{\\gamma_1+1}n\\|_{H^1}\\cdots\\|\\nabla^{\\gamma_{i}+1}n\\|_{H^1}\\|\\nabla^{N+1-m}v\\|_{L^2}\n\\leq C(1+t)^{-\\f{4+N}{2}}.\n\\end{split}\n\\end{equation*}\nFor $m=l$, the Sobolev inequality\nand decay estimate \\eqref{n1h1} yield directly\n\\begin{equation*}\n\\begin{split}\n&\\sum_{2\\leq l\\leq 
N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{i}=l}\n\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\n\\cdots\\|\\nabla^{\\gamma_{i-1}}n\\|_{L^\\infty}\n\\|\\nabla^{\\gamma_{i}}n\\|_{L^3}\\|\\nabla^{N+1-l}v\\|_{L^6}\\\\\n\\leq\n&\\sum_{2\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{i}=l}\\|\\nabla^{\\gamma_1+1}n\\|_{H^1}\\cdots\n\\|\\nabla^{\\gamma_{i-1}+1}n\\|_{H^1}\\|\\nabla^{\\gamma_{i}}n\\|_{H^1}\\|\\nabla^{N+1-l}v\\|_{H^1}\n\\leq C(1+t)^{-\\f{4+N}{2}}.\n\\end{split}\n\\end{equation*}\nTherefore, we can obtain the following estimate\n\\begin{equation}\\label{m421}\n\\begin{split}\n\t\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wf(n+\\bar\\rho)\\nabla^{N+1-l}v\\|_{L^2}\n\t\\leq C\\delta\\|\\nabla^{N+1}v\\|_{L^2}+C(1+t)^{-\\f{4+N}{2}},\n\\end{split}\n\\end{equation}\nwhich, together with the estimate \\eqref{m42}, yields directly\n\\begin{equation}\\label{M42}\nM_{42}\n\t\\leq \\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\delta \\|\\nabla^{N+1}v\\|_{L^2}^2+C_{\\ep}(1+t)^{-4-N}.\n\\end{equation}\nOne can deal with the term $M_{43}$ in the manner of $M_{42}$. 
It holds true\n\\begin{equation}\\label{m43}\n\\begin{split}\n\tM_{43}\n\t\\leq&\\ep \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\|(n,\\bar\\rho)\\|_{L^\\infty}^2\\|\\nabla^{N}n\\|_{L^2}^2+C_{\\ep}\\sum_{1\\leq l\\leq N-1}\\|\\nabla^l\\wg(n+\\bar\\rho)\\nabla^{N-l}n\\|_{L^2}^2\\\\\n\t\\leq&(\\ep+C_\\ep\\delta) \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wg(n+\\bar\\rho)\\nabla^{N-l}n\\|_{L^2}^2.\n\\end{split}\n\\end{equation}\nSimilar to the estimate \\eqref{m421}, it is easy to check that\n\\begin{equation*}\n\\begin{split}\n\t\\sum_{2\\leq l\\leq N-1}\\|\\nabla^l\\wg(n+\\bar\\rho)\\nabla^{N-l}n\\|_{L^2}\n\t\\leq C\\delta\\|\\nabla^{N}n\\|_{L^2}+C(1+t)^{-\\f{3+N}{2}},\n\\end{split}\n\\end{equation*}\nwhich, together with the inequality \\eqref{m43}, yields directly\n\\begin{equation}\\label{M43}\n\\begin{split}\n\tM_{43}\n\t\\leq&(\\ep+C_\\ep\\delta) \\|\\nabla^Nn\\|_{L^2}^2+C_{\\ep}(1+t)^{-3-N}.\n\\end{split}\n\\end{equation}\nFinally, let us deal with the term $M_{44}$.\nIndeed, the Hardy inequality and identity \\eqref{h} yield directly\n\\begin{equation}\\label{m44}\n\t\\begin{split}\n\t\tM_{44}\t\t\\leq&\\ep\\|\\nabla^{N}n\\|_{L^2}^2+C_{\\ep}\\|\\h(n,\\bar\\rho)\\nabla^{N}\\bar\\rho\\|_{L^2}^2+C_{\\ep}\\sum_{1\\leq l\\leq N-1}\\|\\nabla^l\\h(n,\\bar\\rho)\\nabla^{N-l}\\bar\\rho\\|_{L^2}^2\\\\\n\t\t\\leq&\\ep\\|\\nabla^{N}n\\|_{L^2}^2+C_{\\ep}\\|\\f{n}{(1+|x|)^{N}}\\|_{L^2}^2\\|(1+|x|)^{N}\\nabla^{N}\\bar\\rho\\|_{L^\\infty}^2+C_{\\ep}\\sum_{1\\leq l\\leq N-1}\\|\\nabla^l\\h(n,\\bar\\rho)\\nabla^{N-l}\\bar\\rho\\|_{L^2}^2\\\\\n\t\t\\leq&(\\ep+C\\delta)\\|\\nabla^{N}n\\|_{L^2}^2+C_{\\ep}\\sum_{1\\leq l\\leq N-1}\\|\\nabla^l\\h(n,\\bar\\rho)\\nabla^{N-l}\\bar\\rho\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tTo deal with the last term on the right side of \\eqref{m44}, we employ the estimate \\eqref{h2} of $\\h$ to 
find\\begin{equation*}\n\t\\begin{split}\n\t\t&\\|\\nabla^l\\h(n,\\bar\\rho)\\nabla^{N-l}\\bar\\rho\\|_{L^2}\\\\\n\t\t\\leq&C\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{1+j}=l\\\\\\gamma_1=m\\\\0\\leq m\\leq l}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-m}}\\|_{L^2}\\|(1+|x|)^{\\gamma_{2}}\\nabla^{\\gamma_{2}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{1+j}}\\nabla^{\\gamma_{1+j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\quad+C\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,i\\geq2}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-m}}\\|_{L^2}\\|\\nabla^{\\gamma_2}n\\|_{L^\\infty}\n\\cdots\\|\\nabla^{\\gamma_i}n\\|_{L^\\infty}\n\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\\\\n&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t\\overset{def}{=}&M_{441}+M_{442}.\n\t\\end{split}\n\t\\end{equation*}\nIn view of Hardy inequality, we can obtain the estimate\n\\begin{equation*}\n\\begin{split}\nM_{441}\n\t\\leq& C\\delta\\|\\nabla^N n\\|_{L^2}.\n\\end{split}\n\\end{equation*}\nTo deal with the term $M_{442}$, we can apply Hardy inequality and the decay estimate \\eqref{n1h1} to obtain\n\t\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\tM_{442}\n\t\t\t\\leq& C\\delta\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,~i\\geq2}}\\|\\nabla^{N-m+\\gamma_1}n\\|_{L^2}\\|\\nabla^{\\gamma_2+1}n\\|_{H^1}\\cdots\\|\\nabla^{\\gamma_i+1}n\\|_{H^1}\n\t\t\t\\leq C(1+t)^{-\\f{4+N}{2}}.\n\t\t\\end{split}\n\t\t\\end{equation*}\nThe combination of the estimates of the terms $M_{441}$ and $M_{442}$ yields directly\n\\begin{equation*}\n\\sum_{1\\leq l\\leq N-1}\\|\\nabla^l\\h(n,\\bar\\rho)\\nabla^{N-l}\\bar\\rho\\|_{L^2}\n\\leq 
C\\delta\\|\\nabla^{N}n\\|_{L^2}+C(1+t)^{-\\f{4+N}{2}},\n\\end{equation*}\nwhich, together with the inequality \\eqref{m44}, yields directly\n\t\\begin{equation}\\label{M44}\n\t\\begin{split}\n\t\tM_{44}\n\t\t\\leq(\\ep+C_{\\ep}\\delta)\\|\\nabla^{N}n\\|_{L^2}^2+C_{\\ep}(1+t)^{-4-N}.\n\t\\end{split}\n\t\\end{equation}\nConsequently, by virtue of the estimates \\eqref{m4}, \\eqref{M41},\n\\eqref{M42}, \\eqref{M43} and \\eqref{M44}, it holds true\n\\begin{equation}\\label{M4}\nM_4 \\leq (\\ep+C_{\\ep}\\delta) \\|\\nabla^N n\\|_{L^2}^2\n +C_{\\ep}\\delta \\|\\nabla^{N+1}v\\|_{L^2}^2+C_{\\ep}(1+t)^{-3-N}.\n\\end{equation}\nSubstituting the estimates \\eqref{N1}-\\eqref{N3} and \\eqref{M4} into \\eqref{f2}, we find\n\\begin{equation*}\n\\begin{split}\n&-\\f {d}{dt}\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N}v}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}}d\\xi+\\gamma\\int_{|\\xi|\\geq\\eta}|\\widehat{\\nabla^Nn}|^2d\\xi\\\\\n&\\leq(\\ep+C_{\\ep}\\delta)\\|\\nabla^N n\\|_{L^2}^2+C_{\\ep}\\|(\\nabla^N v,\\nabla^{N+1}v)\\|_{L^2}^2+C_{\\ep}(1+t)^{-3-N}.\n\t\\end{split}\n\t\\end{equation*}\nDue to the definition \\eqref{def-h-l}, there exists a positive constant $C$ such that\n\t\\begin{equation}\\label{vhvl}\n\t\\begin{split}\n\t\t\\|\\nabla^N v^h\\|_{L^2}^2\\leq C\\|\\nabla^{N+1}v^h\\|_{L^2}^2,\\quad \\|\\nabla^{N+1} v^l\\|_{L^2}^2\\leq C \\|\\nabla^{N}v^l\\|_{L^2}^2.\n\t\\end{split}\n\t\\end{equation}\n\tChoosing $\\ep$ and $\\delta$ suitably small, we deduce that\n\t\\begin{equation}\\label{en5}\n\t\\begin{split}\n\t\t&-\\f{d}{dt}\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N}v}\\cdot \\overline{\\widehat{\\nabla^{N-1}n}}d\\xi+\\gamma\\int_{|\\xi|\\geq\\eta}|\\widehat{\\nabla^Nn}|^2d\\xi\n\t\t\\leq C\\|\\nabla^N (n,v)^l\\|_{L^2}^2+C_{3} \\|\\nabla^{N+1}v^h\\|_{L^2}^2+C(1+t)^{-3-N}.\n\t\\end{split}\n\t\\end{equation}\nRecalling the estimate \\eqref{en2} in Lemma \\ref{enn}, one has the following 
estimate\n\\begin{equation}\\label{en4}\n\\f{d}{dt}\\|\\nabla^{N}(n,v)\\|_{L^2}^2+\\|\\nabla^{N+1}v\\|_{L^2}^2\\leq C\\delta \\|(\\nabla^{N}n,\\nabla^{N+1}v)\\|_{L^2}^2.\n\\end{equation}\nMultiplying \\eqref{en5} by $\\eta_3$, adding the result to \\eqref{en4}, and choosing $\\delta$ and $\\eta_3$ suitably small, we have\n\\begin{equation*}\n\\begin{aligned}\t\t&\\f{d}{dt}\\Big\\{\\|\\nabla^{N}(n,v)\\|_{L^2}^2-\\eta_3\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}v}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi \\Big\\}\n+\\|\\nabla^{N+1}v\\|_{L^2}^2+\\eta_3\\|\\nabla^{N}n^h\\|_{L^2}^2\\\\\n\\leq &C_4\\|\\nabla^{N}(n^l,v^l)\\|_{L^2}^2+C(1+t)^{-3-N}.\n\\end{aligned}\n\\end{equation*}\nUsing \\eqref{vhvl} once again, we obtain that\n\\begin{equation*}\n\\begin{aligned}\t\n&\\f{d}{dt}\\Big\\{\\|\\nabla^{N}(n,v)\\|_{L^2}^2\n -\\eta_3\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}v}\\cdot\n \\overline{\\widehat{\\nabla^{N}n}}d\\xi \\Big\\}\n +\\|\\nabla^{N}v^h\\|_{L^2}^2+\\eta_3\\|\\nabla^{N}n^h\\|_{L^2}^2\\\\\n\\leq & C_4\\|\\nabla^{N}(n^l, v^l)\\|_{L^2}^2+C(1+t)^{-3-N}.\n\\end{aligned}\n\\end{equation*}\nTherefore, we complete the proof of this lemma.\n\\end{proof}\n\n\n In view of the right-hand side of the estimate \\eqref{en6} in Lemma \\ref{highfrequency}, we still need to estimate the low-frequency part of $\\nabla^N(n,v)$.\n For this purpose, we need to analyze the initial value problem for the linearized system of \\eqref{ns5}:\n \\begin{equation}\\label{linear}\n \\left\\{\\begin{array}{lr}\n\t\\widetilde n_t +\\gamma\\mathop{\\rm div}\\nolimits\\widetilde v=0,\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\\\\\n\t\\widetilde v_t+\\gamma\\nabla \\widetilde n-\\mu_1\\tri \\widetilde v-\\mu_2\\nabla\\mathop{\\rm div}\\nolimits \\widetilde v =0,\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\\\\\n\t(\\widetilde n,\\widetilde v)|_{t=0}=(n_0,v_0),\\quad x\\in \\mathbb{R}^3.\n \\end{array}\\right.\n \\end{equation}\n In terms of the semigroup theory for evolutionary 
equations, the solution $(\\widetilde n,\\widetilde v)$ of the linearized system \\eqref{linear} can be expressed as\n \\begin{equation}\\label{U}\n \\left\\{\\begin{array}{lr}\n \\widetilde U_t=A\\widetilde U,\\quad t\\geq0,\\\\\n \\widetilde U(0)=U_0,\n\\end{array}\\right.\\end{equation}\n where $\\widetilde U \\overset{def}{=}(\\widetilde{n},\\widetilde{v})^t$,\n $U_0 \\overset{def}=(n_0,v_0)$\n and the matrix-valued differential operator $A$ is given by\n \\begin{equation*}\n A = {\\left(\n\t\\begin{matrix}\n\t\t0 & -\\gamma \\mathop{\\rm div}\\nolimits \\\\\n\t\t-\\gamma \\nabla & \\mu_1\\tri+\\mu_2\\nabla\\mathop{\\rm div}\\nolimits\n\t\\end{matrix}\n\t\\right).}\n \\end{equation*}\n Denote $S(t)\\overset{def} =e^{tA}$, then the system \\eqref{U} gives rise to\n \\begin{equation}\\label{uexpress}\\widetilde U(t)=S(t)U_0=e^{tA} U_0,\\quad t\\geq0. \\end{equation}\n Then, it is easy to check that the following estimate holds\n \\begin{equation}\\label{linearlow}\n \\|\\nabla^N(S(t)U_0)\\|_{L^2}\\leq C(1+t)^{-\\f34-\\f N2}\\|U_0\\|_{L^1\\cap H^N},\n \\end{equation}\n where $C$ is a positive constant independent of time.\n The estimate \\eqref{linearlow} can be found in \\cite{chen2021,duan2007}.\n Finally, let us denote $F(t)=(\\widetilde{S_1}(t),\\widetilde{S_2}(t))^{tr}$, then\n the system \\eqref{ns5} can be rewritten as follows:\n \\begin{equation}\\label{nonlinear}\n \\left\\{\\begin{array}{lr}\n \tU_t=A U+F,\\\\\n \tU(0)=U_0.\n \\end{array}\\right.\n \\end{equation}\n Then we can use Duhamel's principle to represent the solution of system \\eqref{ns5} in terms of the semigroup\n \\begin{equation}\\label{Uexpress}\n U(t)=S(t)U_0+\\int_0^tS(t-\\tau)F(\\tau)d\\tau.\n \\end{equation}\n Now, one can establish the estimate for the low frequency of $\\nabla^N(n,v)$ as follows:\n\\begin{lemm}\\label{lowfrequency}\nUnder the assumption of Theorem \\ref{them3}, we have\n\\begin{equation}\\label{lowfre}\n\\|\\nabla^N(n^l, v^l)(t)\\|_{L^2}\n\\leq C \\delta 
\\sup_{0\\leq s\\leq t}\\|\\nabla^N(n,v)(s)\\|_{L^2}+C(1+t)^{-\\f34-\\f N2},\n\\end{equation}\nwhere $C$ is a positive constant independent of time.\n\\end{lemm}\n\\begin{proof}\n\tIt follows from the formula \\eqref{Uexpress} that\n \\begin{equation*}\n\t\\nabla^N(n, v)=\\nabla^N(S (t)U_0)+\\int_0^t\\nabla^{N}[S(t-\\tau)F(\\tau)]d\\tau,\n\t\\end{equation*}\n\twhich yields directly\n\t\\begin{equation}\\label{nvexpress}\n\t\\|\\nabla^N(n^l, v^l)\\|_{L^2}\n \\leq \\|\\nabla^N(S (t)U_0)^l\\|_{L^2}+\\int_0^t\\|\\nabla^{N}[S(t-\\tau)F(\\tau)]^l\\|_{L^2}d\\tau.\n\t\\end{equation}\n\tSince the initial data $U_0=(n_0,v_0)\\in L^1\\cap H^N$, it follows from the\n estimate \\eqref{linearlow} that\n \\begin{equation}\\label{U0es}\n\t\\|\\nabla^N(S (t)U_0)^l\\|_{L^2}\\leq C(1+t)^{-\\f34-\\f N2}\\|U_0\\|_{L^1\\cap H^N}.\n\t\\end{equation}\n\tThen we compute by means of Sobolev inequality that\\begin{equation}\\label{nvl}\n\t\\begin{split}\n\t\t&\\int_0^t\\|\\nabla^{N}[S(t-\\tau)F(\\tau)]^l\\|_{L^2}d\\tau\n\t\t\\leq\\int_0^t\\||\\xi|^{N}|\\widehat{S}(t-\\tau)||\\widehat F(\\tau)|\\|_{L^2(|\\xi|\\leq \\eta)}d\\tau\\\\\n\t\t\\leq&\\int_0^{\\f t2}\\||\\xi|^{N}|\\widehat{S}(t-\\tau)|\\|_{L^2(|\\xi|\\leq \\eta)}\\|\\widehat F(\\tau)\\|_{L^\\infty(|\\xi|\\leq \\eta)}d\\tau+\\int_{\\f t2}^t\\||\\xi||\\widehat{S}(t-\\tau)|\\|_{L^2(|\\xi|\\leq \\eta)}\\||\\xi|^{N-1}\\widehat F(\\tau)\\|_{L^\\infty(|\\xi|\\leq \\eta)}d\\tau\\\\\n\t\t\\leq&\\int_0^{\\f t2}(1+t-\\tau)^{-\\f34-\\f N2}\\|\\widehat F(\\tau)\\|_{L^\\infty(|\\xi|\\leq \\eta)}d\\tau+\\int_{\\f t2}^t(1+t-\\tau)^{-\\f54}\\||\\xi|^{N-1}\\widehat F(\\tau)\\|_{L^\\infty(|\\xi|\\leq \\eta)}d\\tau\\\\\n\t\t\\overset{def}{=}&N_1+N_2.\n\t\\end{split}\n\t\\end{equation}\n\tNow we estimate the first term on the right-hand side of \\eqref{nvl} as follows:\n\\begin{equation}\\label{ss1N}\n\\begin{split}\nN_1=\n\\int_0^{\\f t2}(1+t-\\tau)^{-\\f34-\\f N2}\\|F\\|_{L^1}d\\tau\n\\leq C\\int_0^{\\f t2}(1+t-\\tau)^{-\\f34-\\f N2} 
\\big(\\|\\widetilde{S}_1\\|_{L^1}+\\|\\widetilde{S}_2\\|_{L^1}\\big)d\\tau.\n\\end{split}\n\\end{equation}\nIn view of the definitions of $\\widetilde{S}_i(i=1,2)$\nand decay estimate \\eqref{n1h1}, we have\n\\begin{equation}\\label{SS1}\n\\|\\widetilde{S}_1\\|_{L^1}\\leq C\\|\\nabla (n,v)\\|_{L^2}\\|(n,v)\\|_{L^2}+C\\|(1+|x|)\\nabla \\bar\\rho\\|_{L^2}\\|\\f{v}{1+|x|}\\|_{L^2}+C\\|\\bar\\rho\\|_{L^2}\\|\\nabla v\\|_{L^2}\\leq C\\delta (1+t)^{-\\f54},\n\\end{equation}\nand\n\\begin{equation}\\label{SS2}\n\\begin{split}\n\t\t\\|\\widetilde{S}_2\\|_{L^1}\\leq& C\\|v\\|_{L^2}\\|\\nabla v\\|_{L^2}+C\\|\\wf(n+\\bar\\rho)\\|_{L^2}\\|\\nabla^2 v\\|_{L^2}+C\\|\\wg(n+\\bar\\rho)\\|_{L^2}\\|\\nabla n\\|_{L^2}+C\\|\\h(n,\\bar\\rho)\\|_{L^2}\\|\\nabla \\bar\\rho\\|_{L^2}\\\\\n\t\t\\leq&C\\|v\\|_{L^2}\\|\\nabla v\\|_{L^2}+C\\|n\\|_{L^2}\\|\\nabla^2 v\\|_{L^2}+C\\|\\bar\\rho\\|_{L^2}\\|\\nabla^2 v\\|_{L^2}+C\\|n\\|_{L^2}\\|\\nabla n\\|_{L^2}\\\\\n\t\t&\\quad+C\\|\\bar\\rho\\|_{L^2}\\|\\nabla n\\|_{L^2}+C\\|\\f{n}{1+|x|}\\|_{L^2}\\|(1+|x|)\\nabla \\bar\\rho\\|_{L^2}\\\\\n\t\t\\leq&C\\delta (1+t)^{-\\f54}.\n\t\\end{split}\n\t\\end{equation}\nSubstituting the estimates \\eqref{SS1} and \\eqref{SS2} into \\eqref{ss1N},\nand using the estimate in Lemma \\ref{tt2}, it holds\n\\begin{equation}\\label{F1}\n\t\\begin{split}\n\t\tN_1\\leq C \\int_0^{\\f t2}(1+t-\\tau)^{-\\f34-\\f N2}(1+\\tau)^{-\\f54}d\\tau\\leq C (1+t)^{-\\f34-\\f N2}.\n\\end{split}\n\\end{equation}\nNext, let us deal with the $N_2$ term.\nFor any smooth function $\\phi$, there exists a positive constant $C$\ndependent only on $\\eta$, such that\n$$\\||\\xi|^{N-1}\\widehat \\phi\\|_{L^\\infty(|\\xi|\\leq \\eta)}\\leq C\\||\\xi|^{N-2}\\widehat \\phi\\|_{L^\\infty(|\\xi|\\leq \\eta)},$$\n\tthen we find that\n\\begin{equation}\\label{L}\n\t\\begin{split}\n\t\t\\||\\xi|^{N-1}\\widehat F\\|_{L^\\infty(|\\xi|\\leq \\eta)}\n\t\t\\leq& C \\|[\\nabla^{N-1}\\widetilde {S}_1]^l\\|_{L^1}+C\\|[\\nabla^{N-1}\\widetilde 
{S}_2]^l\\|_{L^1}\\\\\n\t\t\\leq&C\\|[\\nabla^{N-1}(\\nabla nv+n\\nabla v)]^l\\|_{L^1}+C\\|[\\nabla^{N-1}(\\nabla \\bar\\rho v+\\bar\\rho\\nabla v)]^l\\|_{L^1}+C\\|[\\nabla^{N-1}(v\\nabla v)]^l\\|_{L^1}\\\\\n\t\t&\\quad+C\\|[\\nabla^{N-2}\\big(\\wf(n+\\bar\\rho)(\\mu_1\\tri v+\\mu_2\\nabla \\mathop{\\rm div}\\nolimits v)\\big)]^l\\|_{L^1}+C\\|[\\nabla^{N-1}(\\wg(n+\\bar\\rho)\\nabla n)]^l\\|_{L^1}\\\\\n\t\t&\\quad+C\\|[\\nabla^{N-1}(\\h(n,\\bar\\rho)\\nabla \\bar\\rho)]^l\\|_{L^1}\\\\\n \\overset{def}{=}&N_{21}+N_{22}+N_{23}+N_{24}+N_{25}+N_{26}.\n\t\\end{split}\n\t\\end{equation}\n\tFirst of all, applying the decay estimate \\eqref{n1h1}, then the term $N_{21}$ can be estimated as follows\n \\begin{equation}\\label{L1}\n\t\\begin{split}\n\t\tN_{21}\\leq C\\sum_{0\\leq l\\leq N-1}\\big(\\|\\nabla^{l+1}n\\|_{L^2}\\|\\nabla^{N-l-1}v\\|_{L^2}+\\|\\nabla^{l}n\\|_{L^2}\\|\\nabla^{N-l}v\\|_{L^2}\\big)\n\t\t\\leq C (1+t)^{-1-\\f N2}.\n\t\\end{split}\n\t\\end{equation}\n\tBy virtue of Hardy inequality, we have\\begin{equation}\\label{L222}\n\t\\begin{split}\n\t\tN_{22}\\leq&C\\sum_{0\\leq l\\leq N-1}\\Big(\\|(1+|x|)^{l+1}\\nabla^{l+1}\\bar\\rho\\|_{L^2}\\|\\f{\\nabla^{N-l-1}v}{(1+|x|)^{l+1}}\\|_{L^2}+\\|(1+|x|)^{l}\\nabla^{l}\\bar\\rho\\|_{L^2}\\|\\f{\\nabla^{N-l}v}{(1+|x|)^{l}}\\|_{L^2} \\Big)\\\\\n\t\t\\leq&C \\delta\\|\\nabla^N v\\|_{L^2}.\n\t\\end{split}\n\t\\end{equation}\n\tIn view of the decay estimate \\eqref{n1h1}, it follows directly\n\t\\begin{equation}\\label{LL3}\n\t\\begin{split}\n\t\tN_{23}\\leq C\\sum_{0\\leq l\\leq N-1}\\|\\nabla^{l}v\\|_{L^2}\\|\\nabla^{N-l}v\\|_{L^2}\n\t\t\\leq C (1+t)^{-1-\\f N2}.\n\t\\end{split}\n\t\\end{equation}\n\tApplying the estimate \\eqref{widef} of function $\\wf$, we deduce that\\begin{equation}\\label{L4}\n\t\\begin{split}\n\t\tN_{24}\n\t\t\\leq&C\\int|\\wf(n+\\bar\\rho)||\\nabla^{N}v|dx+C\\sum_{1\\leq l\\leq 
N-2}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\int|\\nabla^{\\gamma_{1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{j}}\\bar\\rho||{\\nabla^{N-l}v}|dx\\\\\n\t\t&\\quad+C\\sum_{1\\leq l\\leq N-2}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}\\int|\\nabla^{\\gamma_1}n|\\cdots|\\nabla^{\\gamma_{i}}n||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho||{\\nabla^{N-l}v}|dx\\\\\n\t\t\\overset{def}{=}&N_{241}+N_{242}+N_{243}.\n\t\\end{split}\n \\end{equation}\n\tIt follows from H{\\\"o}lder inequality that\\begin{equation*}\n\t\\begin{split}\n\t\tN_{241}\\leq&C\\|(n+\\bar\\rho)\\|_{L^2}\\|\\nabla^N v\\|_{L^2}\\leq C\\delta\\|\\nabla^N v\\|_{L^2}.\n\t\\end{split}\n \\end{equation*}\n\tBy Hardy inequality, it is easy to deduce\\begin{equation*}\n\t\\begin{split}\n\t\tN_{242}\\leq& C\\sum_{1\\leq l\\leq N-2}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\|(1+|x|)^{\\gamma_{1}}\\nabla^{\\gamma_{1}}\\bar\\rho\\|_{L^2}\\|(1+|x|)^{\\gamma_{2}}\\nabla^{\\gamma_{2}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{j}}\\nabla^{\\gamma_{j}}\\bar\\rho\\|_{L^\\infty}\\|\\f{\\nabla^{N-l}v}{(1+|x|)^{l}}\\|_{L^2}\\\\\n\t\t\\leq& C\\delta\\|\\nabla^N v\\|_{L^2}.\n\t\\end{split}\n\t\\end{equation*}\n\tWithout loss of generality, we assume that $1\\leq \\gamma_1\\leq\\cdots\\leq \\gamma_{i}\\leq N-2$. The fact $i\\geq2$ implies $\\gamma_{i-1}\\leq N-3\\leq N-2$. 
Thus, we can exploit Hardy inequality, Sobolev interpolation inequality \\eqref{Sobolev} in Lemma \\ref{inter} and the decay estimate \\eqref{n1h1} to obtain\\begin{equation*}\\begin{split}\n\t\tN_{243}\\leq&C\\sum_{1\\leq l\\leq N-2}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_{i-1}}n\\|_{L^\\infty}\\|\\nabla^{\\gamma_{i}}n\\|_{L^2}\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t\t&\\quad\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\|\\f{\\nabla^{N-l}v}{(1+|x|)^{l-m}}\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-2}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}\\|\\nabla^{\\gamma_1+1}n\\|_{H^1}\\cdots\\|\\nabla^{\\gamma_{i-1}+1}n\\|_{H^1}\\|\\nabla^{\\gamma_i}n\\|_{L^2}\\|\\nabla^{N-m}v\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-2}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}(1+t)^{-\\f N2-\\f{5i}{4}-\\f74}\n\t\t\\leq C (1+t)^{-\\f32-\\f N2}.\n\t\\end{split}\n\t\\end{equation*}\n\tSubstituting the estimates of $N_{241}$, $N_{242}$ and $N_{243}$ into \\eqref{L4}, we arrive at\n\t\\begin{equation}\\label{L24}\n\tN_{24}\\leq C (1+t)^{-\\f32-\\f N2}+C\\delta\\|\\nabla^N v\\|_{L^2}.\n\t\\end{equation}\n\tOne can deal with the term $N_{25}$ in the same manner as $N_{24}$.\n Then, it is easy to check that\n \\begin{equation}\\label{L5}\n\tN_{25}\\leq C(1+t)^{-\\f32-\\f N2}+C\\delta \\|\\nabla^N n\\|_{L^2}.\n\t\\end{equation}\n\tFinally, let us deal with $N_{26}$. 
By virtue of the estimates \\eqref{h} and \\eqref{h2} of $\\h$, we have\\begin{equation}\\label{l6}\\begin{split}\n\t\tN_{26}\\leq& C\\int|n||\\nabla^N \\bar\\rho|dx+ C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\int|n||\\nabla^{\\gamma_{1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{j}}\\bar\\rho||\\nabla^{N-l}\\bar\\rho|dx\\\\\n\t\t&\\quad+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l}}\\int|\\nabla^{\\gamma_1}n|\\cdots|{\\nabla^{\\gamma_i}n}||\\nabla^{\\gamma_{i+1}}\\bar\\rho|\\cdots|\\nabla^{\\gamma_{i+j}}\\bar\\rho||\\nabla^{N-l} \\bar\\rho|dx\\\\\n\t\t\\overset{def}{=}& N_{261}+N_{262}+N_{263}.\n\t\\end{split}\n\t\\end{equation}\n\tIt follows from Hardy inequality that\\begin{equation}\\label{N261-2}\n\t\\begin{split}\n\t\tN_{261}+N_{262}\\leq& C\\|\\f{n}{(1+|x|)^N}\\|_{L^2}\\|(1+|x|)^N\\nabla^N \\bar\\rho\\|_{L^2}+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\gamma_1+\\cdots+\\gamma_{j}=l}\\|\\f{n}{(1+|x|)^{N}}\\|_{L^2}\\\\\n\t\t&\\quad\\times\\|(1+|x|)^{\\gamma_{1}}\\nabla^{\\gamma_{1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{j}}\\nabla^{\\gamma_{j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l} \\bar\\rho\\|_{L^2}\\\\\n\t\t\\leq& C\\delta\\|\\nabla^Nn\\|_{L^2}.\n\t\\end{split}\n\t\\end{equation}\n\tFor the term $N_{263}$, it is easy to check that\\begin{equation}\\label{N263}\n\t\\begin{split}\n\tN_{263}\\leq&C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1=m\\\\1\\leq m\\leq l}}\\|\\f{\\nabla^{\\gamma_1}n}{(1+|x|)^{N-\\gamma_1}}\\|_{L^2}\\|(1+|x|)^{\\gamma_{2}}\\nabla^{\\gamma_{2}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{1+j}}\\nabla^{\\gamma_{1+j}}\\bar\\rho\\|_{L^\\infty}\\\\\n\t&\\quad\\|(1+|x|)^{N-l}\\nabla^{N-l} \\bar\\rho\\|_{L^2}+C\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq 
l,i\\geq2}}\\|\\nabla^{\\gamma_1}n\\|_{L^\\infty}\\cdots\\|\\nabla^{\\gamma_{i-1}}n\\|_{L^\\infty}\\|\\f{\\nabla^{\\gamma_i}n}{(1+|x|)^{N-m}}\\|_{L^2}\\\\\n\t&\\quad\\times\\|(1+|x|)^{\\gamma_{i+1}}\\nabla^{\\gamma_{i+1}}\\bar\\rho\\|_{L^\\infty}\\cdots\\|(1+|x|)^{\\gamma_{i+j}}\\nabla^{\\gamma_{i+j}}\\bar\\rho\\|_{L^\\infty}\\|(1+|x|)^{N-l}\\nabla^{N-l} \\bar\\rho\\|_{L^2}\\\\\n\t\\overset{def}{=}&N_{2631}+N_{2632}.\n\t\\end{split}\n\t\\end{equation}\n\tWe employ Hardy inequality once again to discover\n\t\\begin{equation*}\n\t\\begin{split}\n\t\tN_{2631}\n\t\t\\leq& C\\d\\|\\nabla^Nn\\|_{L^2}.\n\t\\end{split}\n\t\\end{equation*}\n\tWithout loss of generality, we assume that $1\\leq \\gamma_1\\leq\\cdots\\leq \\gamma_{i}\\leq N-1$. The fact $i\\geq2$ implies $\\gamma_{i}\\leq m-1\\leq N-2$. For the term $N_{2632}$, by virtue of Hardy inequality, Sobolev interpolation inequality \\eqref{Sobolev} in Lemma \\ref{inter} and the decay estimate \\eqref{n1h1}, we deduce\n\t\\begin{equation*}\n\t\\begin{split}\n\t\tN_{2632}\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,i\\geq2}}\\|\\nabla^{\\gamma_1+1}n\\|_{H^1}\\cdots\\|\\nabla^{\\gamma_{i-1}+1}n\\|_{H^1}\\|\\nabla^{N-m+\\gamma_i}n\\|_{L^2}\\\\\n\t\t\\leq&C\\delta\\sum_{1\\leq l\\leq N-1}\\sum_{\\substack{\\gamma_1+\\cdots+\\gamma_{i+j}=l\\\\\\gamma_1+\\cdots+\\gamma_{i}=m\\\\1\\leq m\\leq l,i\\geq2}} (1+t)^{-\\f N2+\\f12-\\f{5i}{4}}\n\t\t\\leq C (1+t)^{-2-\\f N2}.\n\t\\end{split}\n\t\\end{equation*}\n\tSubstituting the estimates of terms $N_{2631}$ and $N_{2632}$ into \\eqref{N263}, we find\\begin{equation*}\n\tN_{263}\\leq C\\d\\|\\nabla^Nn\\|_{L^2}+C(1+t)^{-2-\\f N2}.\n\t\\end{equation*}\n\tInserting \\eqref{N261-2} and \\eqref{N263} into \\eqref{l6}, we get immediately\\begin{equation}\\label{L6}\n\tN_{26}\\leq C\\delta \\|\\nabla^N n\\|_{L^2}+C(1+t)^{-2-\\f N2}.\n\t\\end{equation}\n\tWe then conclude from 
\\eqref{L}-\\eqref{LL3}, \\eqref{L24}, \\eqref{L5} and \\eqref{L6} that\\begin{equation*}\n\t\\begin{split}\n\t\t\\||\\xi|^{N-1}\\widehat F\\|_{L^\\infty(|\\xi|\\leq \\eta)}\\leq&C\\delta\\|\\nabla^N (n,v)\\|_{L^2}+ (1+t)^{-1-\\f N2},\n\t\\end{split}\n\t\\end{equation*}\n\twhich, together with the definition of term $N_2$ and\n the estimate in Lemma \\ref{tt2}, yields directly\n \\begin{equation}\\label{kF1}\n\t\\begin{split}\n\t\tN_2\n\t\t\\leq&C \\int_{\\f t2}^t(1+t-\\tau)^{-\\f54}\\big(\\delta\\|\\nabla^N (n,v)\\|_{L^2}+ (1+\\tau)^{-1-\\f N2}\\big)d\\tau\\\\\n\t\t\\leq&C\\delta\\sup_{0\\leq\\tau\\leq t}\\|\\nabla^N (n,v)\\|_{L^2}\\int_{\\f t2}^t(1+t-\\tau)^{-\\f54}d\\tau+C(1+t)^{-1-\\f N2}\\\\\n\t\t\\leq&C\\delta\\sup_{0\\leq\\tau\\leq t}\\|\\nabla^N (n,v)\\|_{L^2}+C(1+t)^{-1-\\f N2}.\n\t\\end{split}\\end{equation}\n\tSubstituting \\eqref{F1} and \\eqref{kF1} into \\eqref{nvl}, it holds true\n \\begin{equation}\\label{Unon}\n\t\\int_0^t\\|\\nabla^{N}[S(t-\\tau)F(U(\\tau))]^l\\|_{L^2}d\\tau\n \\leq C\\delta\\sup_{0\\leq\\tau\\leq t}\\|\\nabla^N (n,v)\\|_{L^2}+C(1+t)^{-\\f34-\\f N2}.\n\t\\end{equation}\n\tInserting \\eqref{U0es} and \\eqref{Unon} into \\eqref{nvexpress}, one obtains immediately that\n\t\\begin{equation*}\n\t\\|\\nabla^N(n^l,v^l)\\|_{L^2}\\leq C \\delta \\sup_{0\\leq s\\leq t}\\|\\nabla^N(n,v)\\|_{L^2}+C(1+t)^{-\\f34-\\f N2}.\n\t\\end{equation*}\n\tThis completes the proof of this lemma.\n\\end{proof}\n\nFinally, we focus on establishing the optimal decay rate for the $N$-th order spatial derivative of the solution.\n\\begin{lemm}\\label{optimaln}\n\tUnder the assumption of Theorem \\ref{them3}, we have\n\t\\begin{equation}\\label{n1h2}\n\t\\|\\nabla^N(n,v)(t)\\|_{L^2}\\leq C (1+t)^{-\\f34-\\f N2},\n\t\\end{equation}\n\twhere $C$ is a positive constant independent of $t$.\n\\end{lemm}\n\\begin{proof}\n\tWe may rewrite the estimate \\eqref{en6} in Lemma \\ref{highfrequency} as\\begin{equation}\\label{ddt}\n\t\\f{d}{dt}\\widetilde{\\mathcal 
E}^N(t)+\\|\\nabla^{N}v^h\\|_{L^2}^2+\\eta_3\\|\\nabla^{N}n^h\\|_{L^2}^2\n\t\\leq C_4\\|\\nabla^{N}(n,v)^l\\|_{L^2}^2+C(1+t)^{-3-N},\n\t\\end{equation}\n\twhere the energy $\\widetilde{\\mathcal E}^N(t)$ is defined by\n\t\\[\\widetilde{\\mathcal E}^N(t)\\overset{def}{=}\\|\\nabla^{N}(n,v)\\|_{L^2}^2-\\eta_3\\int_{|\\xi|\\geq\\eta}\\widehat{\\nabla^{N-1}v}\\cdot \\overline{\\widehat{\\nabla^{N}n}}d\\xi.\\]\n\tWith the help of Young inequality, by choosing $\\eta_3$ small enough, one obtains the equivalent relation\\begin{equation}\\label{endj}\n\tc_5\\|\\nabla^{N}(n,v)\\|_{L^2}^2\\leq\\widetilde{\\mathcal E}^N(t)\\leq c_6 \\|\\nabla^{N}(n,v)\\|_{L^2}^2,\n\t\\end{equation}\n\twhere the constants $c_5$ and $c_6$ are independent of time.\n\tThen adding $\\|\\nabla^{N}(n^l,v^l)\\|_{L^2}^2$ to both sides of \\eqref{ddt}\n and applying the estimate \\eqref{lowfre} in Lemma \\ref{lowfrequency}, we find\n \\begin{equation*}\n\t\\begin{split}\n\t\t\\f{d}{dt}\\widetilde{\\mathcal E}^N(t)+\\|\\nabla^{N}(n,v)\\|_{L^2}^2\n\t\t\\leq ( C_4+1)\\|\\nabla^{N}(n^l,v^l)\\|_{L^2}^2+C(1+t)^{-3-N}\\leq C\\delta \\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)\\|_{L^2}^2+C(1+t)^{-\\f32-N}.\n\t\\end{split}\n\t\\end{equation*}\n By virtue of the equivalent relation \\eqref{endj}, we have\\begin{equation}\\label{en7}\n\t\\begin{split}\n\t\t\t\\f{d}{dt}\\widetilde{\\mathcal E}^N(t)+\\widetilde{\\mathcal E}^N(t)\n\t\t\\leq C\\delta \\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)\\|_{L^2}^2+C(1+t)^{-\\f32-N},\n\t\\end{split}\n\t\\end{equation}\n\twhich, using Gronwall inequality, gives immediately\\begin{equation}\\label{estimateE}\n\t\\begin{split}\n\t\t\\widetilde{\\mathcal E}^N(t)\\leq e^{-t} \\widetilde{\\mathcal E}^N(0)+C\\delta\\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)\\|_{L^2}^2\\int_0^te^{\\tau-t}d\\tau+C\\int_0^te^{\\tau-t}(1+\\tau)^{-\\f32-N}d\\tau.\n\t\\end{split}\n\t\\end{equation}\n\tIt is easy to deduce that\n $$\n\t\\int_0^te^{\\tau-t}d\\tau\\leq C\n $$\n and\n 
$$\\int_0^te^{\\tau-t}(1+\\tau)^{-\\f32-N}d\\tau\\leq C(1+t)^{-\\f32-N}.$$\n\tFrom the equivalent relation \\eqref{endj} and \\eqref{estimateE}, it holds\n\t\\begin{equation*}\n\t\\begin{split}\n\t\t\\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)(\\tau)\\|_{L^2}^2\\leq Ce^{-t}\\|\\nabla^N(n_0,v_0)\\|_{L^2}^2+C\\delta\\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)\\|_{L^2}^2+ C(1+t)^{-\\f32-N}.\n\t\\end{split}\n\t\\end{equation*}\n\tApplying the smallness of $\\delta$, one arrives at\n \\begin{equation*}\n\t\t\\sup_{0\\leq \\tau\\leq t}\\|\\nabla^N(n,v)(\\tau)\\|_{L^2}^2\\leq C(1+t)^{-\\f32-N}.\n\t\\end{equation*}\n\tTherefore, we complete the proof of this lemma.\n\\end{proof}\n\n\\underline{\\noindent\\textbf{The Proof of Theorem \\ref{them3}.}}\nCombining the estimate \\eqref{n1h1} in Lemma \\ref{N-1decay} with the\nestimate \\eqref{n1h2} in Lemma \\ref{optimaln}, we can obtain the\ndecay rate \\eqref{kdecay} in Theorem \\ref{them3}.\n Consequently, we finish the proof of Theorem \\ref{them3}.\n\n\n\\subsection{Lower bound of decay rate}\\label{lower}\nIn this subsection, we aim to establish the lower bound of decay rate for the global solution and its spatial derivatives of the initial value problem \\eqref{ns5}.\nIn order to achieve this target, we need to analyze the linearized system \\eqref{linear}.\nWe obtain the following proposition immediately, whose proof is similar to \\cite{chen2021} and standard, so we omit it here.\n\\begin{prop}\\label{lamma-lower}\nLet $U_0=(n_0,v_0)\\in L^1(\\R^3)\\cap H^l(\\R^3)$ with $l\\geq3$,\nand assume that at least one of $M_n\\overset{def}{=} \\int_{\\R^3}n_0(x) d x$\nand $M_v\\overset{def}{=}\\int_{\\R^3}v_0(x) d x$\nis nonzero.\nThen there exists a positive constant $\\widetilde c$ independent of time\nsuch that for any large enough $t$, the global solution $(\\widetilde n,\\widetilde v)$\nof the linearized system \\eqref{linear} satisfies\n\\begin{equation}\\label{linearnudecay}\n\\min\\{\\|{\\partial}_{x}^{k} 
\\widetilde n(t)\\|_{L^2(\\R^3)},\n \\|{\\partial}_{x}^{k} \\widetilde v(t)\\|_{L^2(\\R^3)}\\}\n\\geq \\widetilde c(1+t)^{-\\f34-\\f{k}{2}},\\quad\\text{for}~~ 0\\leq k\\leq l.\n\\end{equation}\nHere $\\widetilde c$ is a positive constant depending only on $M_n$ and $M_v$.\n\\end{prop}\n\nDefine the difference $(n_{\\d},v_{\\d})\\overset{def}{=}(n-\\widetilde n,v-\\widetilde v)$,\nthen the quantity $(n_{\\d},v_{\\d})$ satisfies the following system:\n\\begin{equation}\\label{ns7}\n\\left\\{\\begin{array}{lr}\n\tn_{\\d t} +\\gamma\\mathop{\\rm div}\\nolimits v_{\\d}=\\widetilde S_1,\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\\\\\n\tv_{\\d t}+\\gamma\\nabla n_{\\d}-\\mu_1\\tri v_{\\d}-\\mu_2\\nabla\\mathop{\\rm div}\\nolimits v_{\\d} =\\widetilde S_2,\\quad (t,x)\\in \\mathbb{R}^{+}\\times \\mathbb{R}^3,\\\\\n\t(n_\\d,v_\\d)|_{t=0}=(0,0).\n\\end{array}\\right.\n\\end{equation}\nBy virtue of the formula \\eqref{Uexpress}, the solution of system \\eqref{ns7}\ncan be represented as\\begin{equation*}\n(n_\\d,v_\\d)^{t}=\\int_0^tS(t-\\tau)F(U(\\tau))d\\tau.\n\\end{equation*}\nNext, we aim to establish the upper bounds of decay rates for the difference $(n_\\d,v_\\d)$ and its first order spatial derivative.\n\\begin{lemm}\n\tUnder the assumptions of Theorem \\ref{them4}, let $(n_\\d,v_\\d)$ be the smooth solution of the initial value problem \\eqref{ns7}. 
Then, it holds for $t\\geq0$,\n\t\\begin{equation}\\label{nuddecay}\n\t\\|\\nabla^k(n_\\d,v_\\d)(t)\\|_{L^2}\\leq \\widetilde C\\d(1+t)^{-\\f34-\\f k2},\\quad \\text{for}~~k=0,1,\n\t\\end{equation}\n\twhere $\\widetilde C$ is a constant independent of time.\n\\end{lemm}\n\\begin{proof}\n\tBy Duhamel's principle, it holds for $k\\geq0$,\\begin{equation}\\label{nvd}\n\t\\|\\nabla^k(n_\\d,v_\\d)(t)\\|_{L^2}\\leq \\int_0^t(1+t-\\tau)^{-\\f34-\\f k2}\\big(\\|(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^1}+\\|\\nabla^k(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^2}\\big)d\\tau.\n\t\\end{equation}\n\tIt then follows from decay estimates \\eqref{SS1} and \\eqref{SS2} that\\begin{equation}\\label{s1s2}\n\t\\|(\\widetilde {S}_1,\\widetilde {S}_2)\\|_{L^1}\\leq C\\d(1+t)^{-\\f54}.\n\t\\end{equation}\nIn view of the Sobolev and Hardy inequalities, it is easy to deduce that\n\\begin{equation}\\label{ds1}\n\\|\\widetilde {S}_1\\|_{L^2}\n\\leq C\\|\\bar\\rho\\|_{L^\\infty}\\|\\nabla n\\|_{L^2}\n +C\\|(1+|x|)\\nabla\\bar\\rho\\|_{L^\\infty}\\|\\f{n}{1+|x|}\\|_{L^2}\n\\leq C\\delta\\|\\nabla n\\|_{L^2}\\leq C\\d(1+t)^{-\\f54},\n\\end{equation}\nand\n\\begin{equation}\\label{ds2}\n\\begin{aligned}\n\\|\\widetilde {S}_2\\|_{L^2}\n\\leq& C\\|v\\|_{L^\\infty}\\|\\nabla v\\|_{L^2}\n +\\|(n+\\bar\\rho)\\|_{L^\\infty}\\|(\\nabla^2 v,\\nabla n)\\|_{L^2}\n +\\|\\f{n}{1+|x|}\\|_{L^\\infty}\\|(1+|x|)\\nabla \\bar\\rho\\|_{L^2}\\\\\n\\leq& C\\delta\\|(\\nabla n,\\nabla v,\\nabla^2v)\\|_{L^2}\\leq C\\delta(1+t)^{-\\f54}.\n\\end{aligned}\n\\end{equation}\nThen, substituting the decay estimates \\eqref{s1s2}, \\eqref{ds1}\nand \\eqref{ds2} into \\eqref{nvd} with $k=0$, it holds\n\\begin{equation*}\n\\begin{split}\n\\|(n_\\d,v_\\d)(t)\\|_{L^2}\\leq& \\int_0^t(1+t-\\tau)^{-\\f34}\\big(\\|(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^1}+\\|(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^2}\\big)d\\tau\\\\\n\t\t\\leq& C\\d 
\\int_0^t(1+t-\\tau)^{-\\f34}(1+\\tau)^{-\\f54}d\\tau\\\\\n\t\t\\leq& C\\d(1+t)^{-\\f34},\n\\end{split}\n\\end{equation*}\nwhere we have used Lemma \\ref{tt2} in the last inequality.\nSimilar to \\eqref{ds1} and \\eqref{ds2}, one arrives at\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\|\\nabla\\widetilde {S}_1\\|_{L^2}\\leq& C\\|\\bar\\rho\\|_{L^\\infty}\\|\\nabla^2 n\\|_{L^2}+C\\|(1+|x|)\\nabla\\bar\\rho\\|_{L^\\infty}\\|\\f{\\nabla n}{1+|x|}\\|_{L^2}+C\\|(1+|x|)^2\\nabla^2\\bar\\rho\\|_{L^\\infty}\\|\\f{n}{(1+|x|)^2}\\|_{L^2}\\\\\n\t\t\\leq& C\\delta\\|\\nabla^2 n\\|_{L^2}\\leq C\\d(1+t)^{-\\f74},\\\\\n\t\t\\|\\nabla\\widetilde {S}_2\\|_{L^2}\\leq& C\\|v\\|_{L^\\infty}\\|\\nabla^2 v\\|_{L^2}+\\|\\nabla v\\|_{L^3}\\|\\nabla v\\|_{L^6}+\\|n+\\bar\\rho\\|_{L^\\infty}\\|(\\nabla^3 v,\\nabla^2 n)\\|_{L^2}+\\|\\nabla n\\|_{L^3}\\|(\\nabla^2v,\\nabla n)\\|_{L^6}\\\\\n\t\t&+\\|(1+|x|)\\nabla\\bar\\rho\\|_{L^\\infty}\\|(\\f{\\nabla^2v}{1+|x|},\\f{\\nabla n}{1+|x|})\\|_{L^2}+\\|(1+|x|)\\nabla \\bar\\rho\\|_{L^\\infty}^2\\|\\f{ n}{(1+|x|)^2}\\|_{L^2}\\\\\n\t\t\\leq& C\\delta\\|(\\nabla^2 n,\\nabla^2 v,\\nabla^3v)\\|_{L^2}\\leq C\\delta(1+t)^{-\\f74},\n\t\\end{split}\n\t\\end{equation*}\nwhich, together with \\eqref{nvd}, \\eqref{s1s2} and Lemma \\ref{tt2},\nyields directly\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\|\\nabla(n_\\d,v_\\d)(t)\\|_{L^2}\\leq& \\int_0^t(1+t-\\tau)^{-\\f54}\\big(\\|(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^1}+\\|\\nabla(\\widetilde {S}_1,\\widetilde {S}_2)(\\tau)\\|_{L^2}\\big)d\\tau\\\\\n\t\t\\leq&C\\d\\int_0^t(1+t-\\tau)^{-\\f54}(1+\\tau)^{-\\f54}d\\tau\\leq C\\d(1+t)^{-\\f54}.\n\t\\end{split}\n\t\\end{equation*}\n\tTherefore, the proof of this lemma is completed.\n\\end{proof}\nFinally, we establish the lower bound of decay rate for the global solution and its spatial derivatives of the compressible Navier-Stokes equations with potential force.\nThen, the estimate \\eqref{lower-estimate} in Lemma \\ref{lemmalower} below will\nyield 
the optimal decay estimate in Theorem \\ref{them4}.\n\\begin{lemm}\\label{lemmalower}\nUnder the assumptions of Theorem \\ref{them4}, for any large enough $t$, we have\n\\begin{equation}\\label{lower-estimate}\n\\min\\{\\|\\nabla^kn(t)\\|_{L^2},\\|\\nabla^kv(t)\\|_{L^2} \\}\n\\geq c_1(1+t)^{-\\f34-\\f k2},\\quad \\text{for}~~ 0\\leq k\\leq N,\n\\end{equation}\nwhere $c_1$ is a positive constant independent of time.\n\\end{lemm}\n\\begin{proof}\nBy virtue of the definition of $n_\\d$, it holds true\n\\begin{equation*}\n\\|\\nabla^k\\widetilde n\\|_{L^2}\\leq \\|\\nabla^kn\\|_{L^2}+\\|\\nabla^kn_\\d\\|_{L^2},\n\\end{equation*}\nwhich, together with the lower bound decay \\eqref{linearnudecay} and upper bound decay \\eqref{nuddecay}, yields directly\n\\begin{equation}\\label{lowlow-01}\n\\|\\nabla^kn\\|_{L^2}\\geq \\|\\nabla^k\\widetilde n\\|_{L^2}-\\|\\nabla^kn_\\d\\|_{L^2}\n\\geq \\widetilde c(1+t)^{-\\f34-\\f k2}-\\widetilde C\\d (1+t)^{-\\f34-\\f k2},\n\\end{equation}\nwhere $k=0,1$. 
It is worth noting that the small constant $\\delta$ is used\nto control the upper bound of initial data in $L^2-$norm instead of $L^1$ one\n(see \\eqref{phik} and \\eqref{initial-H2}).\nFrom the estimate \\eqref{linearnudecay} in Lemma \\ref{lamma-lower},\nthe constant $\\widetilde c$ in \\eqref{lowlow-01} only depends on the quantities $M_n$ and $M_v$.\nThen, we can choose $\\delta$ small enough such that\n$\\widetilde C \\d\\leq\\f{1}{2} \\widetilde c$, and hence,\nit follows from \\eqref{lowlow-01} that\n\\begin{equation}\\label{lowlow}\n\\|\\nabla^k n\\|_{L^2} \\geq \\frac12\\widetilde c(1+t)^{-\\f34-\\f k2}, \\quad k=0,1.\n\\end{equation}\nBy virtue of the Sobolev interpolation inequality in Lemma \\ref{inter},\nit holds true for $k\\geq2$\n\\begin{equation*}\n\\|\\nabla n\\|_{L^2}\\leq C\\|n\\|_{L^2}^{1-\\f1k}\\|\\nabla^kn\\|_{L^2}^{\\f1k},\n\\end{equation*}\nwhich, together with the lower bound decay \\eqref{lowlow}\nand upper bound decay \\eqref{kdecay}, implies directly\n\\begin{equation}\\label{highlow}\n\\|\\nabla^kn\\|_{L^2}\n\\geq C\\|\\nabla n\\|_{L^2}^k\\|n\\|_{L^2}^{-(k-1)}\n\\geq C(1+t)^{-\\f{5k}{4}}(1+t)^{\\f{3(k-1)}{4}}\n\\geq c_1(1+t)^{-\\f34-\\f k2},\n\\end{equation}\nfor all $k \\geq 2$.\nIn the same manner, it is easy to deduce that\n\\begin{equation}\\label{vhighlow}\n\\|\\nabla^kv\\|_{L^2}\\geq c_1(1+t)^{-\\f34-\\f k2},\\quad \\text{for}~~k \\geq 0.\n\\end{equation}\nThen, the combination of estimates \\eqref{lowlow}, \\eqref{highlow}\nand \\eqref{vhighlow} yields the estimate \\eqref{lower-estimate}.\nTherefore, we complete the proof of this lemma.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\n\n\n\nThis research was partially supported by NNSF of China(11801586, 11971496, 12026244), Guangzhou Science and technology project of China(202102020769), Natural Science Foundation of Guangdong Province of China (2020A1515110942),\nNational Key Research and Development Program of 
China(2020YFA0712500).\n\n\n\\phantomsection\n\\addcontentsline{toc}{section}{\\refname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzofo b/data_all_eng_slimpj/shuffled/split2/finalzofo new file mode 100644 index 0000000000000000000000000000000000000000..d190bf5a402cdad7adf9934f99a90a11d93855d4 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzofo @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nNormalized B-bases (a comprehensive study of which can be found in\n\\cite{Pena1999} and references therein) are normalized totally positive bases\nthat imply optimal shape preserving properties for the representation of\ncurves described as linear combinations of control points and basis functions.\nSimilarly to the classical Bernstein polynomials\n\\[\n\\mathcal{B}_{n}=\\left\\{ \\dbinom{n}{i}u^{i}\\left( 1-u\\right) ^{n-i}%\n:u\\in\\left[ 0,1\\right] \\right\\} _{i=0}^{n}%\n\\]\nof degree $n\\in%\n\\mathbb{N}\n$ -- that in fact form the normalized B-basis of the vector space of\npolynomials\n\\[\n\\mathbb{P}_{n}=\\left\\{ 1,u,\\ldots,u^{n}:u\\in\\left[ 0,1\\right] \\right\\}\n\\]\nof degree at most $n$ on the compact interval $\\left[ 0,1\\right] $, cf.\n\\cite{Carnicer1993} -- normalized B-bases provide shape preserving properties\nlike closure for the affine transformations of the control polygon, convex\nhull, variation diminishing (which also implies convexity preserving of plane\ncontrol polygons), endpoint interpolation, monotonicity preserving, hodograph\nand length diminishing, and a recursive corner cutting algorithm (also called\nB-algorithm) that is the analogue of the de Casteljau algorithm of B\\'{e}zier\ncurves. Among all normalized totally positive bases of a given vector space of\nfunctions a normalized B-basis is the least variation diminishing and the\nshape of the generated curve more mimics its control polygon. 
Important curve\ndesign algorithms like evaluation, subdivision, degree elevation or knot\ninsertion are in fact corner cutting algorithms that can be treated in a\nunified way by means of B-algorithms induced by B-bases.\n\nThese advantageous properties make normalized B-bases ideal blending function system\ncandidates for curve (and surface) modeling. Using B-basis functions, our\nobjective is to provide control point based exact description for higher order\nderivatives of trigonometric and hyperbolic curves specified with coordinate\nfunctions given in traditional parametric form, i.e., in vector spaces%\n\\begin{equation}\n\\mathbb{T}_{2n}^{\\alpha}=\\operatorname{span}\\mathcal{T}_{2n}^{\\alpha\n}=\\operatorname{span}\\left\\{ \\cos\\left( ku\\right) ,\\sin\\left( ku\\right)\n:u\\in\\left[ 0,\\alpha\\right] \\right\\} _{k=0}^{n}\n\\label{truncated_Fourier_vector_space}%\n\\end{equation}\nor%\n\\begin{equation}\n\\mathbb{H}_{2n}^{\\alpha}=\\operatorname{span}\\mathcal{H}_{2n}^{\\alpha\n}=\\operatorname{span}\\left\\{ \\cosh\\left( ku\\right) ,\\sinh\\left( ku\\right)\n:u\\in\\left[ 0,\\alpha\\right] \\right\\} _{k=0}^{n},\n\\label{hyperbolic_vector_space}%\n\\end{equation}\nwhere $\\alpha$ is a fixed strictly positive shape (or design)\nparameter which is either strictly less than $\\pi$, or it is unbounded from above in\nthe trigonometric and hyperbolic cases, respectively. 
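As a quick numerical sanity check of the dimension count above (a sketch of ours, not taken from the paper): the canonical system $\{\cos(ku),\sin(ku)\}_{k=0}^{n}$ is linearly independent on $[0,\alpha]$, so a collocation matrix sampled at $2n+1$ distinct parameters is nonsingular.

```python
import math

# Check that {cos(k u), sin(k u) : k = 0, ..., n} spans a (2n + 1)-dimensional
# space on [0, alpha] by verifying a collocation matrix has full rank.

def canonical_basis(n, u):
    vals = [1.0]
    for k in range(1, n + 1):
        vals.extend([math.cos(k * u), math.sin(k * u)])
    return vals

def rank(mat, eps=1e-9):
    # plain Gaussian elimination with partial pivoting
    m = [row[:] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) < eps:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n, alpha = 3, math.pi / 2
dim = 2 * n + 1
us = [alpha * (j + 1) / (dim + 1) for j in range(dim)]
M = [canonical_basis(n, u) for u in us]
assert rank(M) == dim  # the 2n + 1 canonical functions are independent
```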
The obtained results will also be\nextended to the control point based exact description of the rational\ncounterpart of these curves and of higher dimensional multivariate (rational)\nsurfaces that are also specified by coordinate functions given in traditional\ntrigonometric or hyperbolic form along each of their variables.\n\n\\begin{remark}\nFrom the point of view of control point based exact description of smooth\n(rational) trigonometric closed curves and surfaces (i.e., when $\\alpha=2\\pi\n$), articles \\cite{RothJuhasz2010}\\ and \\cite{JuhaszRoth2010} already provided\ncontrol point configurations by using the so-called cyclic basis functions\nintroduced in \\cite{RothEtAl2009}. Although this special cyclic basis of\n$\\mathbb{T}_{2n}^{2\\pi}$ fulfills some important properties (like positivity,\nnormalization, cyclic variation diminishing, cyclic symmetry, singularity free\nparametrization, efficient explicit formula for arbitrary order elevation), it\nis not totally positive, hence it is not a B-basis, since, as shown in\n\\cite{Pena1997}, the vector space (\\ref{truncated_Fourier_vector_space}) has\nno normalized totally positive bases when $\\alpha\\geq\\pi$. Therefore, by using\nthe B-basis of (\\ref{truncated_Fourier_vector_space}), the control point based\nexact description of arcs, patches or volume entities of higher dimensional\n(rational) trigonometric curves and multivariate surfaces given in traditional\nparametric form remained, at least for us, an interesting and challenging question.\n\\end{remark}\n\nThe rest of the paper is organized as follows. 
Section\n\\ref{sec:special_parametrizations} briefly recalls some basic properties of\nrational B\\'{e}zier curves and points out that curves described as linear\ncombinations of control points and B-basis functions of vector spaces\n(\\ref{truncated_Fourier_vector_space}) or (\\ref{hyperbolic_vector_space}) are\nin fact special reparametrizations of specific classes of rational B\\'{e}zier\ncurves. This section also defines control point based (rational) trigonometric\nand hyperbolic curves of finite order, briefly reviews some of their\n(geometric) properties like order elevation and asymptotic behavior and at the\nsame time also describes their subdivision algorithm which, to the best of our\nknowledge, were either totally not detailed or not described with full\ngenerality for these type of curves in the literature. Based on multivariate\ntensor products of trigonometric and hyperbolic curves, Section\n\\ref{sec:multivariate_surfaces} defines higher dimensional multivariate\n(rational) trigonometric and hyperbolic surfaces. Section\n\\ref{sec:basis_transformations} provides efficient and parallely implementable\nrecursive formulae for those base changes that transform the normalized B-bases of vector\nspaces (\\ref{truncated_Fourier_vector_space}) and\n(\\ref{hyperbolic_vector_space}) to their corresponding canonical (traditional)\nbases, respectively. Using these transformations, theorems and algorithms of\nSection \\ref{sec:exact_description} provide control point configurations for\nthe exact description of large classes of higher dimensional (rational)\ntrigonometric or hyperbolic curves and multivariate (hybrid) surfaces. All\nexamples included in this section emphasize the applicability and usefulness\nof the proposed curve and surface modeling tools. 
Finally, Section\n\\ref{sec:final_remarks} closes the paper with our final remarks.\n\n\\section{Special parametrizations of a class of rational B\\'{e}zier\ncurves\\label{sec:special_parametrizations}}\n\nUsing Bernstein polynomials, a rational B\\'{e}zier curve of even degree $2n$\ncan be described as%\n\\begin{equation}\n\\mathbf{r}_{2n}\\left( v\\right) =\\frac{%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\nw_{i}\\mathbf{d}_{i}B_{i}^{2n}\\left( v\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\nw_{j}B_{j}^{2n}\\left( v\\right) },~v\\in\\left[ 0,1\\right]\n,\\label{rational_Bezier_curve}%\n\\end{equation}\nwhere $\\left[ \\mathbf{d}_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}_{1,2n+1}%\n\\left(\n\\mathbb{R}\n^{\\delta}\\right) $ is a user defined control polygon ($\\delta\\geq2$), while\n$\\left[ w_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}_{1,2n+1}\\left(\n\\mathbb{R}\n_{+}\\right) $ is also a user specified non-negative weight vector of rank $1$\n(i.e., $\\sum_{i=0}^{2n}w_{i}\\neq0$).\n\nFor any fixed ratio $v\\in\\left[ 0,1\\right] $, the recursive relations%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}%\nw_{i}^{r}\\left( v\\right) =\\left( 1-v\\right) w_{i}^{r-1}\\left( v\\right)\n+vw_{i+1}^{r-1}\\left( v\\right) ,\\\\\n\\\\\n\\mathbf{d}_{i}^{r}\\left( v\\right) =\\left( 1-v\\right) \\dfrac{w_{i}%\n^{r-1}\\left( v\\right) }{w_{i}^{r}\\left( v\\right) }\\mathbf{d}_{i}%\n^{r-1}\\left( v\\right) +v\\dfrac{w_{i+1}^{r-1}\\left( v\\right) }{w_{i}%\n^{r}\\left( v\\right) }\\mathbf{d}_{i+1}^{r-1}\\left( v\\right) ,~r=1,2,\\ldots\n,2n,~i=0,1,\\ldots,2n-r\n\\end{array}\n\\right. \\label{rational_de_Casteljau}%\n\\end{equation}\nwith initial conditions%\n\\[\nw_{i}^{0}\\left( v\\right) \\equiv w_{i},~\\mathbf{d}_{i}^{0}\\left( v\\right)\n\\equiv\\mathbf{d}_{i},~i=0,1,\\ldots,2n\n\\]\ndefine the B-algorithm (or rational de Casteljau algorithm) of the curve\n(\\ref{rational_Bezier_curve}) (cf. 
\\cite{Farin2002}).\n\nWe will produce a control point based exact description of trigonometric and\nhyperbolic curves; therefore, we need proper bases for vector spaces\n(\\ref{truncated_Fourier_vector_space}) and (\\ref{hyperbolic_vector_space}) of\ntrigonometric and hyperbolic polynomials of order at most $n$ (or of degree at\nmost $2n$), respectively. In what follows, $\\overline{\\mathcal{T}}%\n_{2n}^{\\alpha}$ and $\\overline{\\mathcal{H}}_{2n}^{\\alpha}$ denote the normalized B-bases\nof vector spaces $\\mathbb{T}_{2n}^{\\alpha}$ and $\\mathbb{H}_{2n}^{\\alpha}$, respectively.\n\n\\subsection{Trigonometric curves and their rational\ncounterpart\\label{sec:trigonometric_curves}}\n\nLet $\\alpha\\in\\left( 0,\\pi\\right) $ be an arbitrarily fixed parameter and\nconsider the linearly reparametrized version of the B-basis\n\\begin{equation}\n\\overline{\\mathcal{T}}_{2n}^{\\alpha}=\\left\\{ T_{2n,i}^{\\alpha}\\left(\nu\\right) :u\\in\\left[ 0,\\alpha\\right] \\right\\} _{i=0}^{2n}=\\left\\{\nt_{2n,i}^{\\alpha}\\sin^{2n-i}\\left( \\frac{\\alpha-u}{2}\\right) \\sin^{i}\\left(\n\\frac{u}{2}\\right) :u\\in\\left[ 0,\\alpha\\right] \\right\\} _{i=0}^{2n}\n\\label{Sanchez_basis}%\n\\end{equation}\nof order $n$ (degree $2n$) specified in \\cite{Sanchez1998}, where the\nnon-negative normalizing coefficients%\n\\[\nt_{2n,i}^{\\alpha}=\\frac{1}{\\sin^{2n}\\left( \\frac{\\alpha}{2}\\right) }%\n\\sum_{r=0}^{\\left\\lfloor \\frac{i}{2}\\right\\rfloor }\\binom{n}{i-r}\\binom\n{i-r}{r}\\left( 2\\cos\\left( \\frac{\\alpha}{2}\\right) \\right) ^{i-2r}%\n,~i=0,1,\\ldots,2n\n\\]\nfulfill the symmetry property%\n\\begin{equation}\nt_{2n,i}^{\\alpha}=t_{2n,2n-i}^{\\alpha},~i=0,1,\\ldots,n\\text{.}\n\\label{symmetry_of_Sanchez_Reyes_constants}%\n\\end{equation}\n\n\n\\begin{definition}\n[Trigonometric curves]A trigonometric curve of order $n$ (degree $2n$) can be\ndescribed as the convex combination%\n\\begin{equation}\n\\mathbf{t}_{n}^{\\alpha}\\left( u\\right) 
=\\sum_{i=0}^{2n}\\mathbf{d}%\n_{i}T_{2n,i}^{\\alpha}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right] ,\n\\label{trigonometric_curve}%\n\\end{equation}\nwhere $\\left[ \\mathbf{d}_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}_{1,2n+1}%\n\\left(\n\\mathbb{R}\n^{\\delta}\\right) $ defines its control polygon.\n\\end{definition}\n\nAs stated in Remark \\ref{rem:trigonometric_reparametrization} curves of type\n(\\ref{trigonometric_curve}) can also be obtained as a special trigonometric\nreparametrization of a class of rational B\\'{e}zier curves of even degree.\n\n\\begin{remark}\n[Trigonometric reparametrization]\\label{rem:trigonometric_reparametrization}%\nUsing the function%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}%\nv:\\left[ 0,\\alpha\\right] \\rightarrow\\left[ 0,1\\right] ,\\\\\n\\\\\nv\\left( u\\right) =\\dfrac{1}{2}+\\dfrac{\\tan\\left( \\frac{u}{2}-\\frac{\\alpha\n}{4}\\right) }{2\\tan\\left( \\frac{\\alpha}{4}\\right) }=\\dfrac{\\sin\\left(\n\\frac{u}{2}\\right) }{2\\cos\\left( \\frac{\\alpha}{4}-\\frac{u}{2}\\right)\n\\sin\\left( \\frac{\\alpha}{4}\\right) }%\n\\end{array}\n\\right. \\label{trigonometric_reparametrization}%\n\\end{equation}\nand weights%\n\\begin{equation}\nw_{i}=\\frac{t_{2n,i}^{\\alpha}}{\\binom{2n}{i}},~i=0,1,\\ldots,2n,\n\\label{trigonometric_weights}%\n\\end{equation}\none can reparametrize the rational B\\'{e}zier curve\n(\\ref{rational_Bezier_curve}) into the trigonometric form\n(\\ref{trigonometric_curve}). 
Indeed, one has that%\n\\begin{align*}\nw_{i}B_{i}^{2n}\\left( v\\left( u\\right) \\right) & =\\frac{t_{2n,i}%\n^{\\alpha}}{\\binom{2n}{i}}\\binom{2n}{i}v^{i}\\left( u\\right) \\left(\n1-v\\left( u\\right) \\right) ^{2n-i}\\\\\n& =t_{2n,i}^{\\alpha}\\cdot\\dfrac{\\sin^{i}\\left( \\frac{u}{2}\\right) }%\n{2^{i}\\cos^{i}\\left( \\frac{\\alpha}{4}-\\frac{u}{2}\\right) \\sin^{i}\\left(\n\\frac{\\alpha}{4}\\right) }\\cdot\\frac{\\sin^{2n-i}\\left( \\frac{\\alpha-u}%\n{2}\\right) }{2^{2n-i}\\cos^{2n-i}\\left( \\frac{\\alpha}{4}-\\frac{u}{2}\\right)\n\\sin^{2n-i}\\left( \\frac{\\alpha}{4}\\right) }\\\\\n& =\\frac{1}{2^{2n}\\cos^{2n}\\left( \\frac{\\alpha}{4}-\\frac{u}{2}\\right)\n\\sin^{2n}\\left( \\frac{\\alpha}{4}\\right) }\\cdot T_{2n,i}^{\\alpha}\\left(\nu\\right)\n\\end{align*}\nfor all $i=0,1,\\ldots,2n$ and $u\\in\\left[ 0,\\alpha\\right] $, therefore%\n\\[\n\\mathbf{r}_{2n}\\left( v\\left( u\\right) \\right) =\\frac{%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\nw_{i}\\mathbf{d}_{i}B_{i}^{2n}\\left( v\\left( u\\right) \\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\nw_{j}B_{j}^{2n}\\left( v\\left( u\\right) \\right) }=\\frac{%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\n\\mathbf{d}_{i}T_{2n,i}^{\\alpha}\\left( u\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\nT_{2n,j}^{\\alpha}\\left( u\\right) }=%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\n\\mathbf{d}_{i}T_{2n,i}^{\\alpha}\\left( u\\right) =\\mathbf{t}_{n}^{\\alpha\n}\\left( u\\right) ,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\nsince the function system (\\ref{Sanchez_basis}) is normalized, i.e.,\n$\\sum_{j=0}^{2n}T_{2n,j}^{\\alpha}\\left( u\\right) \\equiv1$, $\\forall\nu\\in\\left[ 0,\\alpha\\right] $. 
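The identities above are easy to verify numerically. The following sketch (our own code, with the coefficient formulas transcribed from the definitions) checks that the basis $\overline{\mathcal{T}}_{2n}^{\alpha}$ sums to one and that $w_{i}B_{i}^{2n}(v(u))$ equals $T_{2n,i}^{\alpha}(u)$ up to a common, $i$-independent positive factor.

```python
import math

# Numerical check of the Sanchez-Reyes basis identities (our own sketch).

def t_coeff(n, i, alpha):
    # normalizing coefficients t_{2n,i}^alpha; math.comb(n, k) is 0 for k > n
    s = sum(math.comb(n, i - r) * math.comb(i - r, r)
            * (2.0 * math.cos(alpha / 2)) ** (i - 2 * r)
            for r in range(i // 2 + 1))
    return s / math.sin(alpha / 2) ** (2 * n)

def T(n, i, alpha, u):
    return (t_coeff(n, i, alpha)
            * math.sin((alpha - u) / 2) ** (2 * n - i)
            * math.sin(u / 2) ** i)

def v(alpha, u):
    # trigonometric reparametrization [0, alpha] -> [0, 1]
    return 0.5 + math.tan(u / 2 - alpha / 4) / (2 * math.tan(alpha / 4))

n, alpha, u = 3, math.pi / 2, 0.7
# partition of unity of the basis
assert abs(sum(T(n, i, alpha, u) for i in range(2 * n + 1)) - 1.0) < 1e-9

# w_i * B_i^{2n}(v(u)) is proportional to T_{2n,i}^alpha(u)
vu = v(alpha, u)
ratios = []
for i in range(2 * n + 1):
    w = t_coeff(n, i, alpha) / math.comb(2 * n, i)
    B = math.comb(2 * n, i) * vu ** i * (1 - vu) ** (2 * n - i)
    ratios.append(w * B / T(n, i, alpha, u))
assert max(ratios) - min(ratios) < 1e-9  # common factor, independent of i
```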
Basis functions (\\ref{Sanchez_basis}) and the\nreparametrization function (\\ref{trigonometric_reparametrization}) were\nrepeatedly applied in articles \\cite{Sanchez1990}, \\cite{Sanchez1997} and\n\\cite{Sanchez1998}; however, the (inverse) transformation between the bases\n$\\overline{\\mathcal{T}}_{2n}^{\\alpha}$ and $\\mathcal{T}_{2n}^{\\alpha}$ was\ncalculated only up to second order in \\cite[p. 916]{Sanchez1998} with the aid\nof a computer algebra system; moreover, the subdivision of such curves was\ndetailed only for very special control point configurations in\n\\cite{Sanchez1990}.\n\\end{remark}\n\n\\begin{remark}\n[B-algorithm of trigonometric curves]\\label{rem:trigonometric_subdivision}Due\nto Remark \\ref{rem:trigonometric_reparametrization}, the subdivision algorithm\nof trigonometric curves of type (\\ref{trigonometric_curve}) is a simple\ncorollary of the rational de Casteljau algorithm (\\ref{rational_de_Casteljau}%\n). One has to apply the parameter transformation\n(\\ref{trigonometric_reparametrization}) and initial weights\n(\\ref{trigonometric_weights}) in the recursive formulae\n(\\ref{rational_de_Casteljau}). Fig. \\ref{fig:subdivsion}(a) shows the steps of\nthis special variant of the classical rational corner cutting algorithm in the\ncase of a third order trigonometric curve.\n\\end{remark}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.243in,\nwidth=6.2388in\n]%\n{trigonometric_and_hyperbolic_subdivision_u_pi_per_4_alpha_pi_per_2.pdf}%\n\\caption{Consider the fixed control polygon $\\left[ \\mathbf{d}_{i}\\right]\n_{i=0}^{6}$ and the shape parameter $\\alpha=\\frac{\\pi}{2}$ that generate third\norder trigonometric and hyperbolic curves of the type\n(\\ref{trigonometric_curve}) and (\\ref{hyperbolic_curve}), respectively. Cases\n(\\emph{a}) and (\\emph{b}) illustrate the subdivision of these curves at the\ncommon parameter value $u=\\frac{\\pi}{4}$. 
(The parameter value $\\left.\nv\\left( u\\right) \\right\\vert _{u=\\frac{\\pi}{4}}=\\frac{1}{2}$ is generated by\nreparametrization functions (\\ref{trigonometric_reparametrization}) and\n(\\ref{hyperbolic_reparametrization}), respectively.)}%\n\\label{fig:subdivsion}%\n\\end{center}\n\\end{figure}\n\\qquad\n\nThe order elevation of trigonometric curves of type\n(\\ref{trigonometric_curve}) was also considered in \\cite[Section\n4.2]{Sanchez1998}. This method will be one of the main auxiliary tools used in\nthe present paper; therefore, we briefly recall this process using our notation.\n\n\\begin{remark}\n[Order elevation of trigonometric curves]%\n\\label{rem:trigonometric_order_elevation}Multiplying the curve\n(\\ref{trigonometric_curve}) by the first order constant function%\n\\[\n1\\equiv T_{2,0}^{\\alpha}\\left( u\\right) +T_{2,1}^{\\alpha}\\left( u\\right)\n+T_{2,2}^{\\alpha}\\left( u\\right) ,~\\forall u\\in\\left[ 0,\\alpha\\right]\n\\]\nand applying the product rule%\n\\[\nT_{2n,i}^{\\alpha}\\left( u\\right) T_{2m,j}^{\\alpha}\\left( u\\right)\n=\\frac{t_{2n,i}^{\\alpha}t_{2m,j}^{\\alpha}}{t_{2\\left( n+m\\right)\n,i+j}^{\\alpha}}T_{2\\left( n+m\\right) ,i+j}^{\\alpha}\\left( u\\right)\n,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\none obtains the trigonometric curve%\n\\[\n\\mathbf{t}_{n+1}^{\\alpha}\\left( u\\right) =\\sum_{r=0}^{2\\left( n+1\\right)\n}\\mathbf{e}_{r}T_{2\\left( n+1\\right) ,r}^{\\alpha}\\left( u\\right)\n,~u\\in\\left[ 0,\\alpha\\right]\n\\]\nof order $n+1$ such that%\n\\[\n\\mathbf{t}_{n+1}^{\\alpha}\\left( u\\right) =\\mathbf{t}_{n}^{\\alpha}\\left(\nu\\right) ,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\nwhere%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{lcl}%\n\\mathbf{e}_{0} & = & \\mathbf{d}_{0}\\dfrac{t_{2n,0}^{\\alpha}t_{2,0}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,0}^{\\alpha}}=\\mathbf{d}_{0},\\\\\n& & \\\\\n\\mathbf{e}_{1} & = & \\mathbf{d}_{0}\\dfrac{t_{2n,0}^{\\alpha}t_{2,1}^{\\alpha}%\n}{t_{2\\left( 
n+1\\right) ,1}^{\\alpha}}+\\mathbf{d}_{1}\\dfrac{t_{2n,1}^{\\alpha\n}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}},\\\\\n& & \\\\\n\\mathbf{e}_{r} & = & \\mathbf{d}_{r-2}\\dfrac{t_{2n,r-2}^{\\alpha}t_{2,2}%\n^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\mathbf{d}_{r-1}%\n\\dfrac{t_{2n,r-1}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}}+\\mathbf{d}_{r}\\dfrac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n& & \\\\\n\\mathbf{e}_{2n+1} & = & \\mathbf{d}_{2n-1}\\dfrac{t_{2n,2n-1}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\mathbf{d}%\n_{2n}\\dfrac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n& & \\\\\n\\mathbf{e}_{2\\left( n+1\\right) } & = & \\mathbf{d}_{2n}\\dfrac{t_{2n,2n}%\n^{\\alpha}t_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2\\left( n+1\\right)\n}^{\\alpha}}=\\mathbf{d}_{2n}.\n\\end{array}\n\\right. \\label{trigonometric_order_elevation}%\n\\end{equation}\n\n\\end{remark}\n\nDue to normality of functions systems $\\overline{\\mathcal{T}}_{2}^{\\alpha}$,\n$\\overline{\\mathcal{T}}_{2n}^{\\alpha}$ and $\\overline{\\mathcal{T}}_{2\\left(\nn+1\\right) }^{\\alpha}$, one has the simple equality%\n\\[\n1^{n+1}=\\sum_{r=0}^{2\\left( n+1\\right) }T_{2\\left( n+1\\right) ,r}^{\\alpha\n}\\left( u\\right) =\\left( \\sum_{i=0}^{2}T_{2,i}^{\\alpha}\\left( u\\right)\n\\right) \\left( \\sum_{j=0}^{2n}T_{2n,j}^{\\alpha}\\left( u\\right) \\right)\n=1\\cdot1^{n},~\\forall u\\in\\left[ 0,\\alpha\\right]\n\\]\nfrom which follows that%\n\\begin{align*}\n1 & =\\frac{t_{2n,0}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right)\n,0}^{\\alpha}}=\\frac{t_{2n,2n}^{\\alpha}t_{2,2}^{\\alpha}}{t_{2\\left(\nn+1\\right) ,2\\left( n+1\\right) }^{\\alpha}},\\\\\n1 & =\\frac{t_{2n,0}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,1}^{\\alpha}}+\\frac{t_{2n,1}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( 
n+1\\right)\n,1}^{\\alpha}}=\\frac{t_{2n,2n-1}^{\\alpha}t_{2,2}^{\\alpha}}{t_{2\\left(\nn+1\\right) ,2n+1}^{\\alpha}}+\\frac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}},\\\\\n1 & =\\frac{t_{2n,r-2}^{\\alpha}t_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}}+\\frac{t_{2n,r-1}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left(\nn+1\\right) ,r}^{\\alpha}}+\\frac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left(\nn+1\\right) ,r}^{\\alpha}},~r=2,3,\\ldots,2n,\n\\end{align*}\ni.e., all combinations that appear in the order elevation process\n(\\ref{trigonometric_order_elevation}) are convex. This observation implies\nthat the order elevated control polygon is closer to the shape of the curve\nthan its original one. Therefore, repeatedly increasing the order of the\ntrigonometric curve (\\ref{trigonometric_curve}) from $n$ to $n+z$ ($z\\geq1$),\nwe obtain a sequence of control polygons that converges to the curve generated\nby the starting control polygon. This geometric property is illustrated in\nFig. \\ref{fig:trigonometric_order_elevation} and it will be essential in case\nof control point based exact description of higher dimensional rational\ntrigonometric curves and multivariate surfaces given in traditional parametric form.%\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.0268in,\nwidth=2.9551in\n]%\n{trigonometric_order_elevation_alpha_pi_per_2.pdf}%\n\\caption{A plane third order trigonometric curve ($\\alpha=\\frac{\\pi}{2}$) with\nits original and order elevated control polygons which form a sequence that\nconverges to the curve generated by the original control polygon.}%\n\\label{fig:trigonometric_order_elevation}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{remark}\n[Asymptotic behavior]\\label{rem:trigonometric_asymptotic_behavior}As proved in\n\\cite[Proposition 2.1, p. 
249]{JuhaszRoth2014}, the basis $\\overline\n{\\mathcal{T}}_{2n}^{\\alpha}$ degenerates to the classical Bernstein polynomial\nbasis $\\mathcal{B}_{2n}$ defined over the unit compact interval as the shape\nparameter $\\alpha$ tends to $0$ from above. In this case the trigonometric\ncurve (\\ref{trigonometric_curve}) becomes a classical B\\'{e}zier curve of\ndegree $2n$, while the subdivision algorithm presented in Remark\n\\ref{rem:trigonometric_subdivision} degenerates to the classical non-rational\nde Casteljau algorithm. Fig. \\ref{fig:effect_of_trigonometric_shape_parameter}\nillustrates the effect of the shape parameter $\\alpha\\in\\left( 0,\\pi\\right)\n$ on the image of a third order trigonometric curve.\n\\end{remark}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.0398in,\nwidth=3.0952in\n]%\n{effect_of_trigonometric_shape_parameter.pdf}%\n\\caption{Effect of the design parameter $\\alpha\\in\\left( 0,\\pi\\right) $ on\nthe shape of a third order trigonometric curve. 
In the limiting case\n$\\alpha\\rightarrow0$ the curve becomes a classical B\\'{e}zier curve of degree\n$6$ (as it is expected, in this special case, the B-algorithm of the\ntrigonometric curve degenerates to the classical corner cutting de Casteljau\nalgorithm, i.e., each subdivision point is determined by the same ratio along\nthe edges of the control polygon.)}%\n\\label{fig:effect_of_trigonometric_shape_parameter}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{definition}\n[Rational trigonometric curves]The non-negative weight vector $\\mathbf{%\n\\boldsymbol{\\omega}%\n}=\\left[ \\omega_{i}\\right] _{i=0}^{2n}$ of rank $1$ associated with the\ncontrol polygon $\\left[ \\mathbf{d}_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}%\n_{1,2n+1}\\left(\n\\mathbb{R}\n^{\\delta}\\right) $ and the normalized linearly independent rational (or\nquotient) functions%\n\\[\nR_{2n,i}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\frac{\\omega_{i}T_{2n,i}^{\\alpha}\\left( u\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\n\\omega_{j}T_{2n,j}^{\\alpha}\\left( u\\right) },~u\\in\\left[ 0,\\alpha\\right]\n,~i=0,1,\\ldots,2n\n\\]\ndefine the rational counterpart%\n\\begin{equation}\n\\mathbf{t}_{n}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\sum_{i=0}^{2n}\\omega_{i}\\mathbf{d}_{i}R_{2n,i}%\n^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\frac{%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\n\\omega_{i}\\mathbf{d}_{i}T_{2n,i}^{\\alpha}\\left( u\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\n\\omega_{j}T_{2n,j}^{\\alpha}\\left( u\\right) },~u\\in\\left[ 0,\\alpha\\right]\n\\label{rational_trigonometric_curve}%\n\\end{equation}\nof the trigonometric curve (\\ref{trigonometric_curve}).\n\\end{definition}\n\n\\begin{remark}\n[Pre-image of rational trigonometric curves]The rational trigonometric curve\n(\\ref{rational_trigonometric_curve}) can also be considered as the central\nprojection of the higher dimensional 
curve%\n\\begin{equation}\n\\mathbf{t}_{n,\\mathcal{\\wp}}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\sum_{i=0}^{2n}\\left[\n\\begin{array}\n[c]{c}%\n\\omega_{i}\\mathbf{d}_{i}\\\\\n\\omega_{i}%\n\\end{array}\n\\right] T_{2n,i}^{\\alpha}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right]\n\\label{trigonometric_pre_image}%\n\\end{equation}\nin the $\\delta+1$ dimensional space from the origin onto the $\\delta$\ndimensional hyperplane $x^{\\delta+1}=1$ (assuming that the coordinates of $%\n\\mathbb{R}\n^{\\delta+1}$ are denoted by $x^{1},x^{2},\\ldots,x^{\\delta+1}$). The curve\n(\\ref{trigonometric_pre_image}) is called the pre-image of the rational curve\n(\\ref{rational_trigonometric_curve}), while the vector space $%\n\\mathbb{R}\n^{\\delta+1}$ is called its pre-image space. This concept will be useful in\ncase of control point based exact description of smooth rational trigonometric\ncurves given in traditional parametric form expressed in the canonical basis\n$\\mathcal{T}_{2n}^{\\alpha}$.\n\\end{remark}\n\n\\subsection{Hyperbolic curves and their rational counterpart}\n\nIn this case, let $\\alpha>0$ be an arbitrarily fixed parameter and consider\nthe B-basis%\n\\begin{equation}\n\\overline{\\mathcal{H}}_{2n}^{\\alpha}=\\left\\{ H_{2n,i}^{\\alpha}\\left(\nu\\right) :u\\in\\left[ 0,\\alpha\\right] \\right\\} _{i=0}^{2n}=\\left\\{\nh_{2n,i}^{\\alpha}\\sinh^{2n-i}\\left( \\frac{\\alpha-u}{2}\\right) \\sinh\n^{i}\\left( \\frac{u}{2}\\right) :u\\in\\left[ 0,\\alpha\\right] \\right\\}\n_{i=0}^{2n} \\label{Wang_basis}%\n\\end{equation}\nof order $n$ (degree $2n$) of the vector space (\\ref{hyperbolic_vector_space})\nintroduced in \\cite{ShenWang2005}, where the non-negative normalizing coefficients%\n\\[\nh_{2n,i}^{\\alpha}=\\frac{1}{\\sinh^{2n}\\left( \\frac{\\alpha}{2}\\right) }%\n\\sum_{r=0}^{\\left\\lfloor \\frac{i}{2}\\right\\rfloor }\\binom{n}{i-r}\\binom\n{i-r}{r}\\left( 2\\cosh\\left( \\frac{\\alpha}{2}\\right) \\right) 
^{i-2r}%\n,~i=0,1,\\ldots,2n\n\\]\nfulfill the symmetry property%\n\\begin{equation}\nh_{2n,i}^{\\alpha}=h_{2n,2n-i}^{\\alpha},~i=0,1,\\ldots,n\\text{.}%\n\\end{equation}\n\n\n\\begin{definition}\n[Hyperbolic curves]The convex combination%\n\\begin{equation}\n\\mathbf{h}_{n}^{\\alpha}\\left( u\\right) =\\sum_{i=0}^{2n}\\mathbf{d}%\n_{i}H_{2n,i}^{\\alpha}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right] ,\n\\label{hyperbolic_curve}%\n\\end{equation}\ndefines a hyperbolic curve of order $n$ (degree $2n$), where $\\left[\n\\mathbf{d}_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}_{1,2n+1}\\left(\n\\mathbb{R}\n^{\\delta}\\right) $ forms a control polygon.\n\\end{definition}\n\nSimilarly to Subsection \\ref{sec:trigonometric_curves} it is easy to observe\nthat curves of type (\\ref{hyperbolic_curve}) are in fact special\nreparametrizations of a class of rational B\\'{e}zier curves of even degree\n$2n$. Instead of trigonometric sine, cosine, and tangent functions one has to\napply the hyperbolic variant of these functions, i.e., instead of parameter\ntransformation (\\ref{trigonometric_reparametrization}) and weights\n(\\ref{trigonometric_weights}) one has to substitute the reparametrization\nfunction%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}%\nv:\\left[ 0,\\alpha\\right] \\rightarrow\\left[ 0,1\\right] ,\\\\\n\\\\\nv\\left( u\\right) =\\dfrac{1}{2}+\\dfrac{\\tanh\\left( \\frac{u}{2}-\\frac{\\alpha\n}{4}\\right) }{2\\tanh\\left( \\frac{\\alpha}{4}\\right) }=\\dfrac{\\sinh\\left(\n\\frac{u}{2}\\right) }{2\\cosh\\left( \\frac{\\alpha}{4}-\\frac{u}{2}\\right)\n\\sinh\\left( \\frac{\\alpha}{4}\\right) }%\n\\end{array}\n\\right. 
\\label{hyperbolic_reparametrization}%\n\\end{equation}\nand weights%\n\\begin{equation}\nw_{i}=\\frac{h_{2n,i}^{\\alpha}}{\\binom{2n}{i}},~i=0,1,\\ldots,2n\n\\label{hyperbolic_weights}%\n\\end{equation}\ninto the rational B\\'{e}zier curve (\\ref{rational_Bezier_curve}), respectively.\n\nUsing observations similar to Remarks \\ref{rem:trigonometric_subdivision},\n\\ref{rem:trigonometric_order_elevation} and\n\\ref{rem:trigonometric_asymptotic_behavior}, the subdivision, order elevation\nand asymptotic behavior of hyperbolic curves of type (\\ref{hyperbolic_curve})\ncan also be formulated. With the exception of the subdivision algorithm, and\nwithout observing the parameter transformation\n(\\ref{hyperbolic_reparametrization}) and special weight settings\n(\\ref{hyperbolic_weights}), the asymptotic behavior and the order elevation of\nhyperbolic curves were first studied in \\cite{ShenWang2005}. The steps of the\nsubdivision of a third order hyperbolic curve are presented in Fig.\n\\ref{fig:subdivsion}(\\emph{b}).\n\nThe rational variant of the hyperbolic curve (\\ref{hyperbolic_curve}) and its\npre-image can also be easily described.\n\n\\begin{definition}\n[Rational hyperbolic curves]Consider the non-negative weight vector\n$\\mathbf{%\n\\boldsymbol{\\omega}%\n}=\\left[ \\omega_{i}\\right] _{i=0}^{2n}$ of rank $1$ associated with the\ncontrol polygon $\\left[ \\mathbf{d}_{i}\\right] _{i=0}^{2n}\\in\\mathcal{M}%\n_{1,2n+1}\\left(\n\\mathbb{R}\n^{\\delta}\\right) $. 
Normalized quotient basis functions%\n\\[\nS_{2n,i}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\frac{\\omega_{i}H_{2n,i}^{\\alpha}\\left( u\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\n\\omega_{j}H_{2n,j}^{\\alpha}\\left( u\\right) },~u\\in\\left[ 0,\\alpha\\right]\n,~i=0,1,\\ldots,2n\n\\]\ngenerate the rational counterpart%\n\\begin{equation}\n\\mathbf{h}_{n}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\sum_{i=0}^{2n}\\omega_{i}\\mathbf{d}_{i}S_{2n,i}%\n^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\frac{%\n{\\displaystyle\\sum\\limits_{i=0}^{2n}}\n\\omega_{i}\\mathbf{d}_{i}H_{2n,i}^{\\alpha}\\left( u\\right) }{%\n{\\displaystyle\\sum\\limits_{j=0}^{2n}}\n\\omega_{j}H_{2n,j}^{\\alpha}\\left( u\\right) },~u\\in\\left[ 0,\\alpha\\right]\n\\label{rational_hyperbolic_curve}%\n\\end{equation}\nof the hyperbolic curve (\\ref{hyperbolic_curve}), the pre-image of which is%\n\\begin{equation}\n\\mathbf{h}_{n,\\mathcal{\\wp}}^{\\alpha,\\mathbf{%\n\\boldsymbol{\\omega}%\n}}\\left( u\\right) =\\sum_{i=0}^{2n}\\left[\n\\begin{array}\n[c]{c}%\n\\omega_{i}\\mathbf{d}_{i}\\\\\n\\omega_{i}%\n\\end{array}\n\\right] H_{2n,i}^{\\alpha}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right] .\n\\label{hyperbolic_pre_image}%\n\\end{equation}\n\n\\end{definition}\n\n\\section{(Hybrid) (rational) trigonometric and hyperbolic multivariate\nsurfaces\\label{sec:multivariate_surfaces}}\n\nBy means of tensor products of curves of type (\\ref{trigonometric_curve}) and\n(\\ref{hyperbolic_curve}) one can introduce the following multivariate higher\ndimensional surface modeling tools. 
Let $\\delta\\geq2$ and $\\kappa\\geq0$ be\narbitrarily fixed natural numbers and consider the fixed vector\n$\\mathbf{n}=\\left[ n_{j}\\right] _{j=1}^{\\delta}$ of orders, where $n_{j}%\n\\geq1$ for all $j=1,2,\\ldots,\\delta$.\n\n\\begin{definition}\n[Trigonometric surfaces and their rational counterpart]%\n\\label{trigonometric_surface_definitions}Let\n\\[\n\\mathbf{%\n\\boldsymbol{\\alpha}%\n=}\\left[ \\alpha_{j}\\right] _{j=1}^{\\delta}\\in\\times_{j=1}^{\\delta}\\left(\n0,\\pi\\right)\n\\]\nbe a fixed vector of shape parameters and consider the multidimensional\ncontrol grid\n\\begin{equation}\n\\left[ \\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}\\right] _{i_{1}%\n=0,i_{2}=0,\\ldots,i_{\\delta}=0}^{2n_{1},2n_{2},\\ldots,2n_{\\delta}}%\n\\in\\mathcal{M}_{2n_{1}+1,2n_{2}+1,\\ldots2n_{\\delta}+1}\\left(\n\\mathbb{R}\n^{\\delta+\\kappa}\\right) . \\label{trigonometric_control_grid}%\n\\end{equation}\nThe multivariate surface%\n\\begin{align}\n\\mathbf{t}_{\\mathbf{n}}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n}}\\left( \\mathbf{u}\\right) & =\\mathbf{t}_{n_{1},n_{2},\\ldots,n_{\\delta}%\n}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta}}\\left( u_{1},u_{2}%\n,\\ldots,u_{\\delta}\\right) \\label{trigonometric_surface}\\\\\n& =\\sum_{i_{1}=0}^{2n_{1}}\\sum_{i_{2}=0}^{2n_{2}}\\cdots\\sum_{i_{\\delta}%\n=0}^{2n_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}T_{2n_{1},i_{1}%\n}^{\\alpha_{1}}\\left( u_{1}\\right) T_{2n_{2},i_{2}}^{\\alpha_{2}}\\left(\nu_{2}\\right) \\cdot\\ldots\\cdot T_{2n_{\\delta},i_{\\delta}}^{\\alpha_{\\delta}%\n}\\left( u_{\\delta}\\right) ,~\\mathbf{u}=\\left[ u_{j}\\right] _{j=1}^{\\delta\n}\\in\\times_{j=1}^{\\delta}\\left[ 0,\\alpha_{j}\\right] \\nonumber\n\\end{align}\nis called a $\\delta$-variate trigonometric surface of order $\\mathbf{n}$ taking\nvalues in $%\n\\mathbb{R}\n^{\\delta+\\kappa}$. 
Assigning the non-negative multidimensional weight matrix\n\\begin{equation}\n\\boldsymbol{\\Omega}=\\left[ \\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\right] _{i_{1}%\n=0,i_{2}=0,\\ldots,i_{\\delta}=0}^{2n_{1},2n_{2},\\ldots,2n_{\\delta}}%\n\\in\\mathcal{M}_{2n_{1}+1,2n_{2}+1,\\ldots2n_{\\delta}+1}\\left(\n\\mathbb{R}\n_{+}\\right) \\label{trigonometric_weight_grid}%\n\\end{equation}\nof rank at least $1$ to the control grid (\\ref{trigonometric_control_grid}),\none obtains the $\\delta$-variate rational trigonometric surface%\n\\begin{align}\n\\mathbf{t}_{\\mathbf{n}}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n},\\boldsymbol{\\Omega}}\\left( \\mathbf{u}\\right) & =\\mathbf{t}_{n_{1},n_{2}%\n,\\ldots,n_{\\delta}}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta},\\boldsymbol{\\Omega}\n}\\left( u_{1},u_{2},\\ldots,u_{\\delta}\\right)\n\\label{trigonometric_rational_surface}\\\\\n& =\\frac{%\n{\\displaystyle\\sum\\limits_{i_{1}=0}^{2n_{1}}}\n{\\displaystyle\\sum\\limits_{i_{2}=0}^{2n_{2}}}\n\\cdots%\n{\\displaystyle\\sum\\limits_{i_{\\delta}=0}^{2n_{\\delta}}}\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots\n,i_{\\delta}}T_{2n_{1},i_{1}}^{\\alpha_{1}}\\left( u_{1}\\right) T_{2n_{2}%\n,i_{2}}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\\ldots\\cdot T_{2n_{\\delta\n},i_{\\delta}}^{\\alpha_{\\delta}}\\left( u_{\\delta}\\right) }{%\n{\\displaystyle\\sum\\limits_{j_{1}=0}^{2n_{1}}}\n{\\displaystyle\\sum\\limits_{j_{2}=0}^{2n_{2}}}\n\\cdots%\n{\\displaystyle\\sum\\limits_{j_{\\delta}=0}^{2n_{\\delta}}}\n\\omega_{j_{1},j_{2},\\ldots,j_{\\delta}}T_{2n_{1},j_{1}}^{\\alpha_{1}}\\left(\nu_{1}\\right) T_{2n_{2},j_{2}}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\n\\ldots\\cdot T_{2n_{\\delta},j_{\\delta}}^{\\alpha_{\\delta}}\\left( u_{\\delta\n}\\right) }\\nonumber\n\\end{align}\nof the same order, which is the central projection of the pre-image%\n\\begin{align}\n\\mathbf{t}_{\\mathbf{n},\\wp}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n},\\boldsymbol{\\Omega}}\\left( 
\\mathbf{u}\\right) & =\\mathbf{t}_{n_{1},n_{2}%\n,\\ldots,n_{\\delta},\\wp}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta},\\boldsymbol{\\Omega}\n}\\left( u_{1},u_{2},\\ldots,u_{\\delta}\\right) \\label{trigonometric_preimage}%\n\\\\\n& =\\sum_{i_{1}=0}^{2n_{1}}\\sum_{i_{2}=0}^{2n_{2}}\\cdots\\sum_{i_{\\delta}%\n=0}^{2n_{\\delta}}\\left[\n\\begin{array}\n[c]{c}%\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots\n,i_{\\delta}}\\\\\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}%\n\\end{array}\n\\right] T_{2n_{1},i_{1}}^{\\alpha_{1}}\\left( u_{1}\\right) T_{2n_{2},i_{2}%\n}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\\ldots\\cdot T_{2n_{\\delta},i_{\\delta\n}}^{\\alpha_{\\delta}}\\left( u_{\\delta}\\right) \\nonumber\n\\end{align}\nin $%\n\\mathbb{R}\n^{\\delta+\\kappa+1}$ from its origin onto the $\\delta+\\kappa$ dimensional\nhyperplane $x^{\\delta+\\kappa+1}=1$ (provided that the coordinates of $%\n\\mathbb{R}\n^{\\delta+\\kappa+1}$ are labeled by $x^{1},x^{2},\\ldots,x^{\\delta+\\kappa+1}$).\n\\end{definition}\n\n\\begin{remark}\n[$3$-dimensional $2$-variate trigonometric surfaces]The simplest variant of\nmultivariate surfaces introduced in Definition\n\\ref{trigonometric_surface_definitions} corresponds to $\\delta=2$ and\n$\\kappa=1$, when the $2$-variate trigonometric surface\n(\\ref{trigonometric_surface}) is a $3$-dimensional traditional tensor product\nsurface of curves of the type (\\ref{trigonometric_curve}). 
In this special\ncase, the grid (\\ref{trigonometric_control_grid}) of control points\\ and the\nmultidimensional weight matrix (\\ref{trigonometric_weight_grid}) degenerate to\na traditional control net and rectangular weight matrix, respectively.\n\\end{remark}\n\n\\begin{remark}\n[$3$-dimensional trigonometric volumes]Using settings $\\delta=3$ and\n$\\kappa=0$, Definition \\ref{trigonometric_surface_definitions} describes\n$3$-dimensional volumes (solids) by means of $3$-variate tensor product of\ncurves of the type (\\ref{trigonometric_curve}).\n\\end{remark}\n\n\\begin{definition}\n[Hyperbolic surfaces and their rational counterpart]%\n\\label{trigonometric_surface_definitions copy(1)}Let\n\\[\n\\mathbf{%\n\\boldsymbol{\\alpha}%\n=}\\left[ \\alpha_{j}\\right] _{j=1}^{\\delta}\\in\\times_{j=1}^{\\delta}\\left(\n0,+\\infty\\right)\n\\]\nbe a fixed vector of shape parameters and consider the non-negative\nmultidimensional weight matrix\n\\[\n\\boldsymbol{\\Omega}=\\left[ \\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\right] _{i_{1}%\n=0,i_{2}=0,\\ldots,i_{\\delta}=0}^{2n_{1},2n_{2},\\ldots,2n_{\\delta}}%\n\\in\\mathcal{M}_{2n_{1}+1,2n_{2}+1,\\ldots2n_{\\delta}+1}\\left(\n\\mathbb{R}\n_{+}\\right)\n\\]\n(of rank at least $1$) associated with the control grid\n\\[\n\\left[ \\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}\\right] _{i_{1}%\n=0,i_{2}=0,\\ldots,i_{\\delta}=0}^{2n_{1},2n_{2},\\ldots,2n_{\\delta}}%\n\\in\\mathcal{M}_{2n_{1}+1,2n_{2}+1,\\ldots2n_{\\delta}+1}\\left(\n\\mathbb{R}\n^{\\delta+\\kappa}\\right) .\n\\]\nThe multivariate hyperbolic surface of order $\\mathbf{n}$, its rational\ncounterpart, and the pre-image of the rational variant are%\n\\begin{align}\n\\mathbf{h}_{\\mathbf{n}}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n}}\\left( \\mathbf{u}\\right) & =\\mathbf{h}_{n_{1},n_{2},\\ldots,n_{\\delta}%\n}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta}}\\left( u_{1},u_{2}%\n,\\ldots,u_{\\delta}\\right) \\label{hyperbolic_surface}\\\\\n& 
=\\sum_{i_{1}=0}^{2n_{1}}\\sum_{i_{2}=0}^{2n_{2}}\\cdots\\sum_{i_{\\delta}%\n=0}^{2n_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}H_{2n_{1},i_{1}%\n}^{\\alpha_{1}}\\left( u_{1}\\right) H_{2n_{2},i_{2}}^{\\alpha_{2}}\\left(\nu_{2}\\right) \\cdot\\ldots\\cdot H_{2n_{\\delta},i_{\\delta}}^{\\alpha_{\\delta}%\n}\\left( u_{\\delta}\\right) ,\\nonumber\\\\\n\\mathbf{u} & =\\left[ u_{j}\\right] _{j=1}^{\\delta}\\in\\times_{j=1}^{\\delta\n}\\left[ 0,\\alpha_{j}\\right] ,\\nonumber\\\\\n& \\nonumber\\\\\n\\mathbf{h}_{\\mathbf{n}}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n},\\boldsymbol{\\Omega}}\\left( \\mathbf{u}\\right) & =\\mathbf{h}_{n_{1},n_{2}%\n,\\ldots,n_{\\delta}}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta},\\boldsymbol{\\Omega}\n}\\left( u_{1},u_{2},\\ldots,u_{\\delta}\\right)\n\\label{rational_hyperbolic_surface}\\\\\n& =\\frac{%\n{\\displaystyle\\sum\\limits_{i_{1}=0}^{2n_{1}}}\n{\\displaystyle\\sum\\limits_{i_{2}=0}^{2n_{2}}}\n\\cdots%\n{\\displaystyle\\sum\\limits_{i_{\\delta}=0}^{2n_{\\delta}}}\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots\n,i_{\\delta}}H_{2n_{1},i_{1}}^{\\alpha_{1}}\\left( u_{1}\\right) H_{2n_{2}%\n,i_{2}}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\\ldots\\cdot H_{2n_{\\delta\n},i_{\\delta}}^{\\alpha_{\\delta}}\\left( u_{\\delta}\\right) }{%\n{\\displaystyle\\sum\\limits_{j_{1}=0}^{2n_{1}}}\n{\\displaystyle\\sum\\limits_{j_{2}=0}^{2n_{2}}}\n\\cdots%\n{\\displaystyle\\sum\\limits_{j_{\\delta}=0}^{2n_{\\delta}}}\n\\omega_{j_{1},j_{2},\\ldots,j_{\\delta}}H_{2n_{1},j_{1}}^{\\alpha_{1}}\\left(\nu_{1}\\right) H_{2n_{2},j_{2}}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\n\\ldots\\cdot H_{2n_{\\delta},j_{\\delta}}^{\\alpha_{\\delta}}\\left( u_{\\delta\n}\\right) }\\nonumber\n\\end{align}\nand%\n\\begin{align*}\n\\mathbf{h}_{\\mathbf{n},\\wp}^{\\mathbf{%\n\\boldsymbol{\\alpha}%\n},\\boldsymbol{\\Omega}}\\left( \\mathbf{u}\\right) & 
=\\mathbf{h}_{n_{1},n_{2}%\n,\\ldots,n_{\\delta},\\wp}^{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{\\delta},\\boldsymbol{\\Omega}\n}\\left( u_{1},u_{2},\\ldots,u_{\\delta}\\right) \\\\\n& =\\sum_{i_{1}=0}^{2n_{1}}\\sum_{i_{2}=0}^{2n_{2}}\\cdots\\sum_{i_{\\delta}%\n=0}^{2n_{\\delta}}\\left[\n\\begin{array}\n[c]{c}%\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}\\mathbf{d}_{i_{1},i_{2},\\ldots\n,i_{\\delta}}\\\\\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}%\n\\end{array}\n\\right] H_{2n_{1},i_{1}}^{\\alpha_{1}}\\left( u_{1}\\right) H_{2n_{2},i_{2}%\n}^{\\alpha_{2}}\\left( u_{2}\\right) \\cdot\\ldots\\cdot H_{2n_{\\delta},i_{\\delta\n}}^{\\alpha_{\\delta}}\\left( u_{\\delta}\\right) ,\n\\end{align*}\nrespectively.\n\\end{definition}\n\n\\begin{remark}\n[Hybrid multivariate surfaces]Naturally, one can\\ also mix the trigonometric\nor hyperbolic type of B-basis functions in directions $\\left[ u_{j}\\right]\n_{j=1}^{\\delta}$, i.e., one can also define higher dimensional hybrid\nmultivariate (rational) surfaces.\n\\end{remark}\n\n\\section{ Basis transformations\\label{sec:basis_transformations}}\n\nWe are going to derive recursive formulae for the transformation of B-bases\n$\\overline{\\mathcal{T}}_{2n}^{\\alpha}$ and $\\overline{\\mathcal{H}}%\n_{2n}^{\\alpha}$ to the canonical bases $\\mathcal{T}_{2n}^{\\alpha}$ and\n$\\mathcal{H}_{2n}^{\\alpha}$ of the vector spaces $\\mathbb{T}_{2n}^{\\alpha}$\nand $\\mathbb{H}_{2n}^{\\alpha}$, respectively.\n\n\\subsection{The trigonometric\ncase\\label{sec:trigonometric_basis_transformation}}\n\nLet $k\\in\\left\\{ 0,1,\\ldots,n\\right\\} $ be an arbitrarily fixed natural\nnumber. 
Assume that the unique representations of trigonometric functions\n$\\sin\\left( ku\\right) $ and $\\cos\\left( ku\\right) $ in the basis\n(\\ref{Sanchez_basis}) of order $n$ are%\n\\begin{equation}\n\\sin\\left( ku\\right) =\\sum_{i=0}^{2n}\\lambda_{k,i}^{n}T_{2n,i}^{\\alpha\n}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right] \\label{sine_form}%\n\\end{equation}\nand%\n\\begin{equation}\n\\cos\\left( ku\\right) =\\sum_{i=0}^{2n}\\mu_{k,i}^{n}T_{2n,i}^{\\alpha}\\left(\nu\\right) ,~u\\in\\left[ 0,\\alpha\\right] , \\label{cosine_form}%\n\\end{equation}\nrespectively, where coefficients $\\left\\{ \\lambda_{k,i}^{n}\\right\\}\n_{i=0}^{2n}$ and $\\left\\{ \\mu_{k,i}^{n}\\right\\} _{i=0}^{2n}$ are unique real\nnumbers. The basis transformation from the first order B-basis $\\overline\n{\\mathcal{T}}_{2}^{\\alpha}$ to the first order trigonometric canonical basis\n$\\mathcal{T}_{2}^{\\alpha}$ can be expressed in the matrix form%\n\\[\n\\left[\n\\begin{array}\n[c]{c}%\n1\\\\\n\\sin\\left( u\\right) \\\\\n\\cos\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{ccc}%\n\\mu_{0,0}^{1} & \\mu_{0,1}^{1} & \\mu_{0,2}^{1}\\\\\n\\lambda_{1,0}^{1} & \\lambda_{1,1}^{1} & \\lambda_{1,2}^{1}\\\\\n\\mu_{1,0}^{1} & \\mu_{1,1}^{1} & \\mu_{1,2}^{1}%\n\\end{array}\n\\right] \\left[\n\\begin{array}\n[c]{c}%\nT_{2,0}^{\\alpha}\\left( u\\right) \\\\\nT_{2,1}^{\\alpha}\\left( u\\right) \\\\\nT_{2,2}^{\\alpha}\\left( u\\right)\n\\end{array}\n\\right] ,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\nwhere%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}%\n\\mu_{0,0}^{1}=\\mu_{0,1}^{1}=\\mu_{0,2}^{1}=1,\\\\\n\\\\\n\\lambda_{1,0}^{1}=0,~\\lambda_{1,1}^{1}=\\tan\\left( \\frac{\\alpha}{2}\\right)\n,~\\lambda_{1,2}^{1}=\\sin\\left( \\alpha\\right) ,\\\\\n\\\\\n\\mu_{1,0}^{1}=\\mu_{1,1}^{1}=1,~\\mu_{1,2}^{1}=\\cos\\left( \\alpha\\right) .\n\\end{array}\n\\right. 
\\label{trigonometric_initial_conditions}%\n\\end{equation}\n\n\nUsing initial conditions (\\ref{trigonometric_initial_conditions}), our\nobjective is to derive recursive formulae for the matrix elements of the\nlinear transformation that changes the higher order B-basis $\\overline\n{\\mathcal{T}}_{2\\left( n+1\\right) }^{\\alpha}$ to the canonical trigonometric\nbasis $\\mathcal{T}_{2\\left( n+1\\right) }^{\\alpha}$.\n\nPerforming order elevation on functions (\\ref{sine_form}) and\n(\\ref{cosine_form}), one obtains that%\n\\[\n\\sin\\left( ku\\right) =\\sum_{r=0}^{2\\left( n+1\\right) }\\lambda_{k,r}%\n^{n+1}T_{2\\left( n+1\\right) ,r}^{\\alpha}\\left( u\\right)\n\\]\nand%\n\\[\n\\cos\\left( ku\\right) =\\sum_{r=0}^{2\\left( n+1\\right) }\\mu_{k,r}%\n^{n+1}T_{2\\left( n+1\\right) ,r}^{\\alpha}\\left( u\\right) ,\n\\]\nwhere%\n\\begin{align*}\n\\lambda_{k,0}^{n+1} & =\\lambda_{k,0}^{n},\\\\\n\\lambda_{k,1}^{n+1} & =\\lambda_{k,0}^{n}\\frac{t_{2n,0}^{\\alpha}%\nt_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}}+\\lambda_{k,1}^{n}%\n\\frac{t_{2n,1}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}%\n},\\\\\n\\lambda_{k,r}^{n+1} & =\\lambda_{k,r-2}^{n}\\frac{t_{2n,r-2}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\lambda_{k,r-1}%\n^{n}\\frac{t_{2n,r-1}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}}+\\lambda_{k,r}^{n}\\frac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n\\lambda_{k,2n+1}^{n+1} & =\\lambda_{k,2n-1}^{n}\\frac{t_{2n,2n-1}^{\\alpha\n}t_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\lambda_{k,2n}%\n^{n}\\frac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n\\lambda_{k,2\\left( n+1\\right) }^{n+1} & =\\lambda_{k,2n}^{n}%\n\\end{align*}\nand%\n\\begin{align*}\n\\mu_{k,0}^{n+1} & =\\mu_{k,0}^{n},\\\\\n\\mu_{k,1}^{n+1} & 
=\\mu_{k,0}^{n}\\frac{t_{2n,0}^{\\alpha}t_{2,1}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,1}^{\\alpha}}+\\mu_{k,1}^{n}\\frac{t_{2n,1}^{\\alpha\n}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}},\\\\\n\\mu_{k,r}^{n+1} & =\\mu_{k,r-2}^{n}\\frac{t_{2n,r-2}^{\\alpha}t_{2,2}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\mu_{k,r-1}^{n}\\frac{t_{2n,r-1}%\n^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\mu_{k,r}%\n^{n}\\frac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n\\mu_{k,2n+1}^{n+1} & =\\mu_{k,2n-1}^{n}\\frac{t_{2n,2n-1}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\mu_{k,2n}^{n}%\n\\frac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n\\mu_{k,2\\left( n+1\\right) }^{n+1} & =\\mu_{k,2n}^{n},\n\\end{align*}\nrespectively. Moreover, due to initial conditions\n(\\ref{trigonometric_initial_conditions}) and simple trigonometric identities\n\\begin{align}\n\\sin\\left( a+b\\right) & =\\sin\\left( a\\right) \\cos\\left( b\\right)\n+\\cos\\left( a\\right) \\sin\\left( b\\right) ,\\label{sin_of_sum}\\\\\n\\cos\\left( a+b\\right) & =\\cos\\left( a\\right) \\cos\\left( b\\right)\n-\\sin\\left( a\\right) \\sin\\left( b\\right) , \\label{cos_of_sum}%\n\\end{align}\none has that%\n\\begin{align*}\n\\sin\\left( \\left( n+1\\right) u\\right) & =\\left( \\sum_{i=0}^{2n}%\n\\lambda_{n,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right) \\right) \\left(\n\\sum_{j=0}^{2}\\mu_{1,j}^{1}T_{2,j}^{\\alpha}\\left( u\\right) \\right) +\\left(\n\\sum_{i=0}^{2n}\\mu_{n,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right) \\right)\n\\left( \\sum_{j=0}^{2}\\lambda_{1,j}^{1}T_{2,j}^{\\alpha}\\left( u\\right)\n\\right) \\\\\n& =\\sum_{r=0}^{2\\left( n+1\\right) }\\lambda_{n+1,r}^{n+1}T_{2\\left(\nn+1\\right) ,r}^{\\alpha}\\left( u\\right) ,\\\\\n\\cos\\left( \\left( n+1\\right) u\\right) & =\\left( \\sum_{i=0}^{2n}%\n\\mu_{n,i}^{n}T_{2n,i}^{\\alpha}\\left( 
u\\right) \\right) \\left( \\sum\n_{j=0}^{2}\\mu_{1,j}^{1}T_{2,j}^{\\alpha}\\left( u\\right) \\right) -\\left(\n\\sum_{i=0}^{2n}\\lambda_{n,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right) \\right)\n\\left( \\sum_{j=0}^{2}\\lambda_{1,j}^{1}T_{2,j}^{\\alpha}\\left( u\\right)\n\\right) \\\\\n& =\\sum_{r=0}^{2\\left( n+1\\right) }\\mu_{n+1,r}^{n+1}T_{2\\left( n+1\\right)\n,r}^{\\alpha}\\left( u\\right) ,\n\\end{align*}\nwhere%\n\\begin{align*}\n\\lambda_{n+1,0}^{n+1}= & \\lambda_{n,0}^{n}\\mu_{1,0}^{1}+\\mu_{n,0}^{n}%\n\\lambda_{1,0}^{1},\\\\\n\\lambda_{n+1,1}^{n+1}= & \\left( \\lambda_{n,0}^{n}\\mu_{1,1}^{1}+\\mu\n_{n,0}^{n}\\lambda_{1,1}^{1}\\right) \\frac{t_{2n,0}^{\\alpha}t_{2,1}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,1}^{\\alpha}}+\\left( \\lambda_{n,1}^{n}\\mu_{1,0}%\n^{1}+\\mu_{n,1}^{n}\\lambda_{1,0}^{1}\\right) \\frac{t_{2n,1}^{\\alpha}%\nt_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}},\\\\\n\\lambda_{n+1,r}^{n+1}= & \\left( \\lambda_{n,r-2}^{n}\\mu_{1,2}^{1}%\n+\\mu_{n,r-2}^{n}\\lambda_{1,2}^{1}\\right) \\frac{t_{2n,r-2}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\left( \\lambda\n_{n,r-1}^{n}\\mu_{1,1}^{1}+\\mu_{n,r-1}^{n}\\lambda_{1,1}^{1}\\right)\n\\frac{t_{2n,r-1}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right) ,r}%\n^{\\alpha}}\\\\\n& +\\left( \\lambda_{n,r}^{n}\\mu_{1,0}^{1}+\\mu_{n,r}^{n}\\lambda_{1,0}%\n^{1}\\right) \\frac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n\\lambda_{n+1,2n+1}^{n+1}= & \\left( \\lambda_{n,2n-1}^{n}\\mu_{1,2}^{1}%\n+\\mu_{n,2n-1}^{n}\\lambda_{1,2}^{1}\\right) \\frac{t_{2n,2n-1}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\left(\n\\lambda_{n,2n}^{n}\\mu_{1,1}^{1}+\\mu_{n,2n}^{n}\\lambda_{1,1}^{1}\\right)\n\\frac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n\\lambda_{n+1,2\\left( n+1\\right) }^{n+1}= & 
\\lambda_{n,2n}^{n}\\mu_{1,2}%\n^{1}+\\mu_{n,2n}^{n}\\lambda_{1,2}^{1}%\n\\end{align*}\nand%\n\\begin{align*}\n\\mu_{n+1,0}^{n+1}= & \\mu_{n,0}^{n}\\mu_{1,0}^{1}-\\lambda_{n,0}^{n}%\n\\lambda_{1,0}^{1},\\\\\n\\mu_{n+1,1}^{n+1}= & \\left( \\mu_{n,0}^{n}\\mu_{1,1}^{1}-\\lambda_{n,0}%\n^{n}\\lambda_{1,1}^{1}\\right) \\frac{t_{2n,0}^{\\alpha}t_{2,1}^{\\alpha}%\n}{t_{2\\left( n+1\\right) ,1}^{\\alpha}}+\\left( \\mu_{n,1}^{n}\\mu_{1,0}%\n^{1}-\\lambda_{n,1}^{n}\\lambda_{1,0}^{1}\\right) \\frac{t_{2n,1}^{\\alpha}%\nt_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right) ,1}^{\\alpha}},\\\\\n\\mu_{n+1,r}^{n+1}= & \\left( \\mu_{n,r-2}^{n}\\mu_{1,2}^{1}-\\lambda\n_{n,r-2}^{n}\\lambda_{1,2}^{1}\\right) \\frac{t_{2n,r-2}^{\\alpha}t_{2,2}%\n^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\left( \\mu_{n,r-1}^{n}%\n\\mu_{1,1}^{1}-\\lambda_{n,r-1}^{n}\\lambda_{1,1}^{1}\\right) \\frac\n{t_{2n,r-1}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right) ,r}^{\\alpha}}\\\\\n& +\\left( \\mu_{n,r}^{n}\\mu_{1,0}^{1}-\\lambda_{n,r}^{n}\\lambda_{1,0}%\n^{1}\\right) \\frac{t_{2n,r}^{\\alpha}t_{2,0}^{\\alpha}}{t_{2\\left( n+1\\right)\n,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n\\mu_{n+1,2n+1}^{n+1}= & \\left( \\mu_{n,2n-1}^{n}\\mu_{1,2}^{1}-\\lambda\n_{n,2n-1}^{n}\\lambda_{1,2}^{1}\\right) \\frac{t_{2n,2n-1}^{\\alpha}%\nt_{2,2}^{\\alpha}}{t_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\left( \\mu\n_{n,2n}^{n}\\mu_{1,1}^{1}-\\lambda_{n,2n}^{n}\\lambda_{1,1}^{1}\\right)\n\\frac{t_{2n,2n}^{\\alpha}t_{2,1}^{\\alpha}}{t_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n\\mu_{n+1,2\\left( n+1\\right) }^{n+1}= & \\mu_{n,2n}^{n}\\mu_{1,2}^{1}%\n-\\lambda_{n,2n}^{n}\\lambda_{1,2}^{1},\n\\end{align*}\nrespectively. 
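The order elevation and angle-addition recursions above translate directly into a numerical procedure. The following sketch is illustrative only: it assumes the product form $T_{2n,i}^{\\alpha}\\left( u\\right) =t_{2n,i}^{\\alpha}\\sin^{2n-i}\\left( \\frac{\\alpha-u}{2}\\right) \\sin^{i}\\left( \\frac{u}{2}\\right) $ of the normalized B-basis (\\ref{Sanchez_basis}), with first order normalizing constants $t_{2,0}^{\\alpha}=t_{2,2}^{\\alpha}=\\sin^{-2}\\left( \\frac{\\alpha}{2}\\right) $, $t_{2,1}^{\\alpha}=2\\cos\\left( \\frac{\\alpha}{2}\\right) \\sin^{-2}\\left( \\frac{\\alpha}{2}\\right) $, from which the higher order constants follow by the convolution $t_{2\\left( n+1\\right) ,r}^{\\alpha}=\\sum_{i+j=r}t_{2n,i}^{\\alpha}t_{2,j}^{\\alpha}$ implied by the partition-of-unity property.

```python
import math

def trig_basis_transform(alpha, n):
    """Rows lam[k], mu[k] (k = 0, ..., n) such that, on [0, alpha],
    sin(k*u) = sum_i lam[k][i] * T_{2n,i}(u) and
    cos(k*u) = sum_i mu[k][i]  * T_{2n,i}(u)."""
    s, c = math.sin(alpha / 2.0), math.cos(alpha / 2.0)
    t2 = [1.0 / s ** 2, 2.0 * c / s ** 2, 1.0 / s ** 2]  # first order constants
    # normalizing constants of all orders: t_{2(m+1),r} = sum_{i+j=r} t_{2m,i} t_{2,j}
    t = [None, t2]
    for m in range(1, n):
        nxt = [0.0] * (2 * (m + 1) + 1)
        for i, ti in enumerate(t[m]):
            for j, tj in enumerate(t2):
                nxt[i + j] += ti * tj
        t.append(nxt)
    lam1 = [0.0, math.tan(alpha / 2.0), math.sin(alpha)]  # initial conditions
    mu1 = [1.0, 1.0, math.cos(alpha)]
    lam, mu = [[0.0] * 3, lam1[:]], [[1.0] * 3, mu1[:]]
    for m in range(1, n):
        tm, tm1 = t[m], t[m + 1]

        def prod(a, b):
            """Coefficients of (sum_i a_i T_{2m,i}) * (sum_j b_j T_{2,j})."""
            out = [0.0] * (2 * (m + 1) + 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    out[i + j] += ai * bj * tm[i] * t2[j] / tm1[i + j]
            return out

        one = [1.0, 1.0, 1.0]                        # 1 = sum_j T_{2,j}(u), so
        lam_new = [prod(row, one) for row in lam]    # prod(., one) performs the
        mu_new = [prod(row, one) for row in mu]      # order elevation step
        # the two new rows come from the angle-addition formulas
        lam_new.append([x + y for x, y in zip(prod(lam[m], mu1), prod(mu[m], lam1))])
        mu_new.append([x - y for x, y in zip(prod(mu[m], mu1), prod(lam[m], lam1))])
        lam, mu = lam_new, mu_new
    return lam, mu, t[n]
```

For example, `trig_basis_transform(alpha, n)` returns the $\\lambda$- and $\\mu$-rows of the order $n$ transformation matrix together with the normalizing constants $\\left[ t_{2n,i}^{\\alpha}\\right] _{i=0}^{2n}$.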
Summarizing all calculations above, we have proved the next theorem.\n\n\\begin{theorem}\n[Trigonometric basis transformation]%\n\\label{thm:trigonometric_basis_transformation}The matrix form of the linear\ntransformation from the normalized B-basis $\\overline{\\mathcal{T}}_{2\\left( n+1\\right)\n}^{\\alpha}$ to the canonical trigonometric basis $\\mathcal{T}_{2\\left(\nn+1\\right) }^{\\alpha}$ is%\n\\[\n\\left[\n\\begin{array}\n[c]{c}%\n1\\\\\n\\sin\\left( u\\right) \\\\\n\\cos\\left( u\\right) \\\\\n\\vdots\\\\\n\\sin\\left( \\left( n+1\\right) u\\right) \\\\\n\\cos\\left( \\left( n+1\\right) u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{cccccc}%\n1 & 1 & 1 & \\cdots & 1 & 1\\\\\n\\lambda_{1,0}^{n+1} & \\lambda_{1,1}^{n+1} & \\lambda_{1,2}^{n+1} & \\cdots &\n\\lambda_{1,2n+1}^{n+1} & \\lambda_{1,2\\left( n+1\\right) }^{n+1}\\\\\n\\mu_{1,0}^{n+1} & \\mu_{1,1}^{n+1} & \\mu_{1,2}^{n+1} & \\cdots & \\mu\n_{1,2n+1}^{n+1} & \\mu_{1,2\\left( n+1\\right) }^{n+1}\\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots\\\\\n\\lambda_{n+1,0}^{n+1} & \\lambda_{n+1,1}^{n+1} & \\lambda_{n+1,2}^{n+1} & \\cdots\n& \\lambda_{n+1,2n+1}^{n+1} & \\lambda_{n+1,2\\left( n+1\\right) }^{n+1}\\\\\n\\mu_{n+1,0}^{n+1} & \\mu_{n+1,1}^{n+1} & \\mu_{n+1,2}^{n+1} & \\cdots &\n\\mu_{n+1,2n+1}^{n+1} & \\mu_{n+1,2\\left( n+1\\right) }^{n+1}%\n\\end{array}\n\\right] \\left[\n\\begin{array}\n[c]{c}%\nT_{2\\left( n+1\\right) ,0}^{\\alpha}\\left( u\\right) \\\\\nT_{2\\left( n+1\\right) ,1}^{\\alpha}\\left( u\\right) \\\\\nT_{2\\left( n+1\\right) ,2}^{\\alpha}\\left( u\\right) \\\\\n\\vdots\\\\\nT_{2\\left( n+1\\right) ,2n+1}^{\\alpha}\\left( u\\right) \\\\\nT_{2\\left( n+1\\right) ,2\\left( n+1\\right) }^{\\alpha}\\left( u\\right)\n\\end{array}\n\\right]\n\\]\nfor all parameters $u\\in\\left[ 0,\\alpha\\right] $.\n\\end{theorem}\n\n\\begin{remark}\nThe matrix of the $\\left( n+1\\right) $th order basis transformation that appears in Theorem 
\\ref{thm:trigonometric_basis_transformation} can be efficiently calculated by parallel programming since its rows and their entries are independent of each other. Based on the entries of the first and $n$th order transformation matrices that are already calculated in previous steps, each thread block has to build up a single row of the $\\left( n+1\\right) $th order basis transformation matrix, while each thread within a block has to calculate a single entry of the corresponding row.\n\\end{remark}\n\n\\subsection{The hyperbolic case}\n\nIn this case we can proceed as in Subsection\n\\ref{sec:trigonometric_basis_transformation}. Naturally, instead of\ntrigonometric sine, cosine, tangent functions and identities (\\ref{sin_of_sum}%\n) and (\\ref{cos_of_sum}) one has to apply the hyperbolic variant of these\nfunctions and identities, respectively. The only difference consists in a sign\nchange in the hyperbolic counterpart of the identity (\\ref{cos_of_sum}),\nsince\n\\begin{equation}\n\\cosh\\left( a+b\\right) =\\cosh\\left( a\\right) \\cosh\\left( b\\right)\n+\\sinh\\left( a\\right) \\sinh\\left( b\\right) . \\label{cosh_of_sum}%\n\\end{equation}\nLet $k\\in\\left\\{ 0,1,\\ldots,n\\right\\} $ be an arbitrarily fixed natural\nnumber and denote the representations of hyperbolic functions $\\sinh\\left(\nku\\right) $ and $\\cosh\\left( ku\\right) $ in the B-basis (\\ref{Wang_basis})\nby%\n\\begin{equation}\n\\sinh\\left( ku\\right) =\\sum_{i=0}^{2n}\\sigma_{k,i}^{n}H_{2n,i}^{\\alpha\n}\\left( u\\right) ,~u\\in\\left[ 0,\\alpha\\right]\n\\end{equation}\nand%\n\\begin{equation}\n\\cosh\\left( ku\\right) =\\sum_{i=0}^{2n}\\rho_{k,i}^{n}H_{2n,i}^{\\alpha}\\left(\nu\\right) ,~u\\in\\left[ 0,\\alpha\\right]\n\\end{equation}\nrespectively, where coefficients $\\left\\{ \\sigma_{k,i}^{n}\\right\\}\n_{i=0}^{2n}$ and $\\left\\{ \\rho_{k,i}^{n}\\right\\} _{i=0}^{2n}$ are unique\nscalars. 
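These coefficients obey the same order elevation and product recursions that are derived below from the first order initial conditions. As a numerical illustration (under the assumption that the normalized B-basis (\\ref{Wang_basis}) has the product form $H_{2n,i}^{\\alpha}\\left( u\\right) =h_{2n,i}^{\\alpha}\\sinh^{2n-i}\\left( \\frac{\\alpha-u}{2}\\right) \\sinh^{i}\\left( \\frac{u}{2}\\right) $, with first order constants $h_{2,0}^{\\alpha}=h_{2,2}^{\\alpha}=\\sinh^{-2}\\left( \\frac{\\alpha}{2}\\right) $ and $h_{2,1}^{\\alpha}=2\\cosh\\left( \\frac{\\alpha}{2}\\right) \\sinh^{-2}\\left( \\frac{\\alpha}{2}\\right) $), they can be generated as in the following sketch; note the plus sign in both new rows, reflecting identity (\\ref{cosh_of_sum}).

```python
import math

def hyperbolic_basis_transform(alpha, n):
    """Rows sig[k], rho[k] (k = 0, ..., n) such that, on [0, alpha],
    sinh(k*u) = sum_i sig[k][i] * H_{2n,i}(u) and
    cosh(k*u) = sum_i rho[k][i] * H_{2n,i}(u)."""
    s, c = math.sinh(alpha / 2.0), math.cosh(alpha / 2.0)
    h2 = [1.0 / s ** 2, 2.0 * c / s ** 2, 1.0 / s ** 2]  # first order constants
    h = [None, h2]
    for m in range(1, n):  # h_{2(m+1),r} = sum_{i+j=r} h_{2m,i} h_{2,j}
        nxt = [0.0] * (2 * (m + 1) + 1)
        for i, hi in enumerate(h[m]):
            for j, hj in enumerate(h2):
                nxt[i + j] += hi * hj
        h.append(nxt)
    sig1 = [0.0, math.tanh(alpha / 2.0), math.sinh(alpha)]  # initial conditions
    rho1 = [1.0, 1.0, math.cosh(alpha)]
    sig, rho = [[0.0] * 3, sig1[:]], [[1.0] * 3, rho1[:]]
    for m in range(1, n):
        hm, hm1 = h[m], h[m + 1]

        def prod(a, b):
            """Coefficients of (sum_i a_i H_{2m,i}) * (sum_j b_j H_{2,j})."""
            out = [0.0] * (2 * (m + 1) + 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    out[i + j] += ai * bj * hm[i] * h2[j] / hm1[i + j]
            return out

        one = [1.0, 1.0, 1.0]
        sig_new = [prod(row, one) for row in sig]  # order elevation
        rho_new = [prod(row, one) for row in rho]
        # new rows via the sinh/cosh addition formulas: both carry a plus sign
        sig_new.append([x + y for x, y in zip(prod(sig[m], rho1), prod(rho[m], sig1))])
        rho_new.append([x + y for x, y in zip(prod(rho[m], rho1), prod(sig[m], sig1))])
        sig, rho = sig_new, rho_new
    return sig, rho, h[n]
```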
The basis transformation from the first order B-basis $\\overline\n{\\mathcal{H}}_{2}^{\\alpha}$ to the first order hyperbolic canonical basis\n$\\mathcal{H}_{2}^{\\alpha}$ can be written in the matrix form%\n\\[\n\\left[\n\\begin{array}\n[c]{c}%\n1\\\\\n\\sinh\\left( u\\right) \\\\\n\\cosh\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{ccc}%\n\\rho_{0,0}^{1} & \\rho_{0,1}^{1} & \\rho_{0,2}^{1}\\\\\n\\sigma_{1,0}^{1} & \\sigma_{1,1}^{1} & \\sigma_{1,2}^{1}\\\\\n\\rho_{1,0}^{1} & \\rho_{1,1}^{1} & \\rho_{1,2}^{1}%\n\\end{array}\n\\right] \\left[\n\\begin{array}\n[c]{c}%\nH_{2,0}^{\\alpha}\\left( u\\right) \\\\\nH_{2,1}^{\\alpha}\\left( u\\right) \\\\\nH_{2,2}^{\\alpha}\\left( u\\right)\n\\end{array}\n\\right] ,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\nwhere%\n\\begin{equation}\n\\left\\{\n\\begin{array}\n[c]{l}%\n\\rho_{0,0}^{1}=\\rho_{0,1}^{1}=\\rho_{0,2}^{1}=1,\\\\\n\\\\\n\\sigma_{1,0}^{1}=0,~\\sigma_{1,1}^{1}=\\tanh\\left( \\frac{\\alpha}{2}\\right)\n,~\\sigma_{1,2}^{1}=\\sinh\\left( \\alpha\\right) ,\\\\\n\\\\\n\\rho_{1,0}^{1}=\\rho_{1,1}^{1}=1,~\\rho_{1,2}^{1}=\\cosh\\left( \\alpha\\right) .\n\\end{array}\n\\right. 
\\label{hyperbolic_initial_conditions}%\n\\end{equation}\nUsing initial conditions (\\ref{hyperbolic_initial_conditions}) and normalizing\nconstants $\\left[ h_{2,j}^{\\alpha}\\right] _{j=0}^{2},\\left[ h_{2n,i}%\n^{\\alpha}\\right] _{i=0}^{2n}$ and $\\left[ h_{2\\left( n+1\\right)\n,r}^{\\alpha}\\right] _{r=0}^{2\\left( n+1\\right) }$ instead of $\\left[\nt_{2,j}^{\\alpha}\\right] _{j=0}^{2},\\left[ t_{2n,i}^{\\alpha}\\right]\n_{i=0}^{2n}$ and $\\left[ t_{2\\left( n+1\\right) ,r}^{\\alpha}\\right]\n_{r=0}^{2\\left( n+1\\right) }$, respectively, one obtains recursive formulae\nfor the unique order elevated coefficients $\\left[ \\sigma_{k,r}^{n+1}\\right]\n_{k=0,r=0}^{n+1,2\\left( n+1\\right) }$ and $\\left[ \\rho_{k,r}^{n+1}\\right]\n_{k=0,r=0}^{n,2\\left( n+1\\right) }$ in a similar way as it was done in the\ntrigonometric case for constants $\\left[ \\lambda_{k,r}^{n+1}\\right]\n_{k=0,r=0}^{n+1,2\\left( n+1\\right) }$ and $\\left[ \\mu_{k,r}^{n+1}\\right]\n_{k=0,r=0}^{n,2\\left( n+1\\right) }$, respectively, while applying identity\n(\\ref{cosh_of_sum}) for constants $\\left[ \\rho_{n+1,r}^{n+1}\\right]\n_{r=0}^{2\\left( n+1\\right) }$ we have that%\n\\begin{align*}\n\\rho_{n+1,0}^{n+1}= & \\rho_{n,0}^{n}\\rho_{1,0}^{1}+\\sigma_{n,0}^{n}%\n\\sigma_{1,0}^{1},\\\\\n\\rho_{n+1,1}^{n+1}= & \\left( \\rho_{n,0}^{n}\\rho_{1,1}^{1}+\\sigma_{n,0}%\n^{n}\\sigma_{1,1}^{1}\\right) \\frac{h_{2n,0}^{\\alpha}h_{2,1}^{\\alpha}%\n}{h_{2\\left( n+1\\right) ,1}^{\\alpha}}+\\left( \\rho_{n,1}^{n}\\rho_{1,0}%\n^{1}+\\sigma_{n,1}^{n}\\sigma_{1,0}^{1}\\right) \\frac{h_{2n,1}^{\\alpha}%\nh_{2,0}^{\\alpha}}{h_{2\\left( n+1\\right) ,1}^{\\alpha}},\\\\\n\\rho_{n+1,r}^{n+1}= & \\left( \\rho_{n,r-2}^{n}\\rho_{1,2}^{1}+\\sigma\n_{n,r-2}^{n}\\sigma_{1,2}^{1}\\right) \\frac{h_{2n,r-2}^{\\alpha}h_{2,2}^{\\alpha\n}}{h_{2\\left( n+1\\right) ,r}^{\\alpha}}+\\left( \\rho_{n,r-1}^{n}\\rho\n_{1,1}^{1}+\\sigma_{n,r-1}^{n}\\sigma_{1,1}^{1}\\right) 
\\frac{h_{2n,r-1}%\n^{\\alpha}h_{2,1}^{\\alpha}}{h_{2\\left( n+1\\right) ,r}^{\\alpha}}\\\\\n& +\\left( \\rho_{n,r}^{n}\\rho_{1,0}^{1}+\\sigma_{n,r}^{n}\\sigma_{1,0}%\n^{1}\\right) \\frac{h_{2n,r}^{\\alpha}h_{2,0}^{\\alpha}}{h_{2\\left( n+1\\right)\n,r}^{\\alpha}},~r=2,3,\\ldots,2n,\\\\\n\\rho_{n+1,2n+1}^{n+1}= & \\left( \\rho_{n,2n-1}^{n}\\rho_{1,2}^{1}%\n+\\sigma_{n,2n-1}^{n}\\sigma_{1,2}^{1}\\right) \\frac{h_{2n,2n-1}^{\\alpha}%\nh_{2,2}^{\\alpha}}{h_{2\\left( n+1\\right) ,2n+1}^{\\alpha}}+\\left( \\rho\n_{n,2n}^{n}\\rho_{1,1}^{1}+\\sigma_{n,2n}^{n}\\sigma_{1,1}^{1}\\right)\n\\frac{h_{2n,2n}^{\\alpha}h_{2,1}^{\\alpha}}{h_{2\\left( n+1\\right)\n,2n+1}^{\\alpha}},\\\\\n\\rho_{n+1,2\\left( n+1\\right) }^{n+1}= & \\rho_{n,2n}^{n}\\rho_{1,2}%\n^{1}+\\sigma_{n,2n}^{n}\\sigma_{1,2}^{1}.\n\\end{align*}\nSummarizing all calculations, one can formulate the following theorem.\n\n\\begin{theorem}\n[Hyperbolic basis transformation]\\label{thm:hyperbolic_basis_transformation}%\nThe matrix form of the linear transformation from the normalized B-basis $\\overline\n{\\mathcal{H}}_{2\\left( n+1\\right) }^{\\alpha}$ to the canonical hyperbolic\nbasis $\\mathcal{H}_{2\\left( n+1\\right) }^{\\alpha}$ is%\n\\[\n\\left[\n\\begin{array}\n[c]{c}%\n1\\\\\n\\sinh\\left( u\\right) \\\\\n\\cosh\\left( u\\right) \\\\\n\\vdots\\\\\n\\sinh\\left( \\left( n+1\\right) u\\right) \\\\\n\\cosh\\left( \\left( n+1\\right) u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{cccccc}%\n1 & 1 & 1 & \\cdots & 1 & 1\\\\\n\\sigma_{1,0}^{n+1} & \\sigma_{1,1}^{n+1} & \\sigma_{1,2}^{n+1} & \\cdots &\n\\sigma_{1,2n+1}^{n+1} & \\sigma_{1,2\\left( n+1\\right) }^{n+1}\\\\\n\\rho_{1,0}^{n+1} & \\rho_{1,1}^{n+1} & \\rho_{1,2}^{n+1} & \\cdots &\n\\rho_{1,2n+1}^{n+1} & \\rho_{1,2\\left( n+1\\right) }^{n+1}\\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots\\\\\n\\sigma_{n+1,0}^{n+1} & \\sigma_{n+1,1}^{n+1} & \\sigma_{n+1,2}^{n+1} & \\cdots &\n\\sigma_{n+1,2n+1}^{n+1} & \\sigma_{n+1,2\\left( n+1\\right) 
}^{n+1}\\\\\n\\rho_{n+1,0}^{n+1} & \\rho_{n+1,1}^{n+1} & \\rho_{n+1,2}^{n+1} & \\cdots &\n\\rho_{n+1,2n+1}^{n+1} & \\rho_{n+1,2\\left( n+1\\right) }^{n+1}%\n\\end{array}\n\\right] \\left[\n\\begin{array}\n[c]{c}%\nH_{2\\left( n+1\\right) ,0}^{\\alpha}\\left( u\\right) \\\\\nH_{2\\left( n+1\\right) ,1}^{\\alpha}\\left( u\\right) \\\\\nH_{2\\left( n+1\\right) ,2}^{\\alpha}\\left( u\\right) \\\\\n\\vdots\\\\\nH_{2\\left( n+1\\right) ,2n+1}^{\\alpha}\\left( u\\right) \\\\\nH_{2\\left( n+1\\right) ,2\\left( n+1\\right) }^{\\alpha}\\left( u\\right)\n\\end{array}\n\\right]\n\\]\nfor all parameters $u\\in\\left[ 0,\\alpha\\right] $.\n\\end{theorem}\n\n\\section{Control point based exact description\\label{sec:exact_description}}\n\nUsing curves of the type (\\ref{trigonometric_curve}) or\n(\\ref{hyperbolic_curve}), the following subsections provide control point\nconfigurations for the exact description of higher order (mixed partial)\nderivatives of any smooth parametric curve (or higher dimensional multivariate\nsurface) specified by coordinate functions given in traditional parametric\nform, i.e., in the vector spaces (\\ref{truncated_Fourier_vector_space}) or\n(\\ref{hyperbolic_vector_space}), respectively. The obtained results will also\nbe extended to the control point based exact description of the rational\ncounterparts of these curves and multivariate surfaces. The core results of\nthis section are formulated in the next two lemmas.\n\n\\begin{lemma}\n[Exact description of trigonometric polynomials]\\label{lem:tp}Consider the\ntrigonometric polynomial%\n\\begin{equation}\ng\\left( u\\right) =\\sum_{p\\in P}c_{p}\\cos\\left( pu+\\psi_{p}\\right)\n+\\sum_{q\\in Q}s_{q}\\sin\\left( qu+\\varphi_{q}\\right) ,~u\\in\\left[\n0,\\alpha\\right] ,~\\alpha\\in\\left( 0,\\pi\\right)\n\\label{trigonometric_polynomial}%\n\\end{equation}\nof order at most $n$, where $P,Q\\subset%\n\\mathbb{N}\n$ and $c_{p},\\psi_{p},s_{q},\\varphi_{q}\\in%\n\\mathbb{R}\n$. 
Then, we have the equality%\n\\[\n\\frac{\\text{\\emph{d}}^{r}}{\\text{\\emph{d}}u^{r}}g\\left( u\\right) =\\sum\n_{i=0}^{2n}d_{i}\\left( r\\right) T_{2n,i}^{\\alpha}\\left( u\\right) ,~\\forall\nu\\in\\left[ 0,\\alpha\\right] ,~\\forall r\\in%\n\\mathbb{N}\n,\n\\]\nwhere trigonometric ordinates $\\left[ d_{i}\\left( r\\right) \\right]\n_{i=0}^{2n}$ are of the form%\n\\begin{align}\nd_{i}\\left( r\\right) = & \\sum_{p\\in P}c_{p}p^{r}\\left( \\mu_{p,i}^{n}%\n\\cos\\left( \\psi_{p}+\\frac{r\\pi}{2}\\right) -\\lambda_{p,i}^{n}\\sin\\left(\n\\psi_{p}+\\frac{r\\pi}{2}\\right) \\right) \\label{trigonometric_ordinates}\\\\\n& +\\sum_{q\\in Q}s_{q}q^{r}\\left( \\lambda_{q,i}^{n}\\cos\\left( \\varphi\n_{q}+\\frac{r\\pi}{2}\\right) +\\mu_{q,i}^{n}\\sin\\left( \\varphi_{q}+\\frac{r\\pi\n}{2}\\right) \\right) .\\nonumber\n\\end{align}\n\n\\end{lemma}\n\n\\begin{proof}\nThe $r$th order derivative of the trigonometric polynomial \\textbf{(}%\n\\ref{trigonometric_polynomial}\\textbf{)} can be written in the form%\n\\begin{align*}\n\\frac{\\text{d}^{r}}{\\text{d}u^{r}}g\\left( u\\right) = & \\sum_{p\\in P}%\nc_{p}p^{r}\\cos\\left( pu+\\psi_{p}+\\frac{r\\pi}{2}\\right) +\\sum_{q\\in Q}%\ns_{q}q^{r}\\sin\\left( qu+\\varphi_{q}+\\frac{r\\pi}{2}\\right) \\\\\n& \\\\\n= & \\sum_{p\\in P}c_{p}p^{r}\\left( \\cos\\left( pu\\right) \\cos\\left(\n\\psi_{p}+\\frac{r\\pi}{2}\\right) -\\sin\\left( pu\\right) \\sin\\left( \\psi\n_{p}+\\frac{r\\pi}{2}\\right) \\right) \\\\\n& +\\sum_{q\\in Q}s_{q}q^{r}\\left( \\sin\\left( qu\\right) \\cos\\left(\n\\varphi_{q}+\\frac{r\\pi}{2}\\right) +\\cos\\left( qu\\right) \\sin\\left(\n\\varphi_{q}+\\frac{r\\pi}{2}\\right) \\right) \\\\\n& \\\\\n= & \\sum_{p\\in P}c_{p}p^{r}\\cos\\left( \\psi_{p}+\\frac{r\\pi}{2}\\right)\n\\cos\\left( pu\\right) -\\sum_{p\\in P}c_{p}p^{r}\\sin\\left( \\psi_{p}+\\frac\n{r\\pi}{2}\\right) \\sin\\left( pu\\right) \\\\\n& +\\sum_{q\\in Q}s_{q}q^{r}\\cos\\left( \\varphi_{q}+\\frac{r\\pi}%\n{2}\\right) \\sin\\left( qu\\right) 
+\\sum_{q\\in Q}s_{q}q^{r}\\sin\\left(\n\\varphi_{q}+\\frac{r\\pi}{2}\\right) \\cos\\left( qu\\right) \\\\\n& \\\\\n= & \\sum_{p\\in P}c_{p}p^{r}\\cos\\left( \\psi_{p}+\\frac{r\\pi}{2}\\right)\n\\left( \\sum_{i=0}^{2n}\\mu_{p,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right)\n\\right) -\\sum_{p\\in P}c_{p}p^{r}\\sin\\left( \\psi_{p}+\\frac{r\\pi}{2}\\right)\n\\left( \\sum_{i=0}^{2n}\\lambda_{p,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right)\n\\right) \\\\\n& +\\sum_{q\\in Q}s_{q}q^{r}\\cos\\left( \\varphi_{q}+\\frac{r\\pi}{2}\\right)\n\\left( \\sum_{i=0}^{2n}\\lambda_{q,i}^{n}T_{2n,i}^{\\alpha}\\left( u\\right)\n\\right) +\\sum_{q\\in Q}s_{q}q^{r}\\sin\\left( \\varphi_{q}+\\frac{r\\pi}%\n{2}\\right) \\left( \\sum_{i=0}^{2n}\\mu_{q,i}^{n}T_{2n,i}^{\\alpha}\\left(\nu\\right) \\right)\n\\end{align*}\nfor all parameters $u\\in\\left[ 0,\\alpha\\right] $, where we have applied\nTheorem \\ref{thm:trigonometric_basis_transformation} for order $n$. Collecting\nthe coefficients of basis functions $\\left\\{ T_{2n,i}^{\\alpha}\\right\\}\n_{i=0}^{2n}$, one obtains the ordinates specified by\n(\\ref{trigonometric_ordinates}).\n\\end{proof}\n\n\\begin{lemma}\n[Exact description of hyperbolic polynomials]\\label{lem:hp}Consider the\nhyperbolic function%\n\\[\ng\\left( u\\right) =\\sum_{p\\in P}c_{p}\\cosh\\left( pu+\\psi_{p}\\right)\n+\\sum_{q\\in Q}s_{q}\\sinh\\left( qu+\\varphi_{q}\\right) ,~u\\in\\left[\n0,\\alpha\\right] ,~\\alpha>0\n\\]\nof order at most $n$, where $P,Q\\subset%\n\\mathbb{N}\n$ and $c_{p},\\psi_{p},s_{q},\\varphi_{q}\\in%\n\\mathbb{R}\n$. 
Then, one has that%\n\\[\n\\frac{\\text{\\emph{d}}^{r}}{\\text{\\emph{d}}u^{r}}g\\left( u\\right) =\\sum\n_{i=0}^{2n}d_{i}\\left( r\\right) H_{2n,i}^{\\alpha}\\left( u\\right) ,~\\forall\nu\\in\\left[ 0,\\alpha\\right] ,~\\forall r\\in%\n\\mathbb{N}\n,\n\\]\nwhere%\n\\begin{equation}\nd_{i}\\left( r\\right) =\\left\\{\n\\begin{array}\n[c]{ll}%\n{\\displaystyle\\sum\\limits_{p\\in P}}\nc_{p}p^{r}\\left( \\rho_{p,i}^{n}\\cosh\\left( \\psi_{p}\\right) +\\sigma\n_{p,i}^{n}\\sinh\\left( \\psi_{p}\\right) \\right) & \\\\\n+%\n{\\displaystyle\\sum\\limits_{q\\in Q}}\ns_{q}q^{r}\\left( \\sigma_{q,i}^{n}\\cosh\\left( \\varphi_{q}\\right) +\\rho\n_{q,i}^{n}\\sinh\\left( \\varphi_{q}\\right) \\right) , & r=2z,\\\\\n& \\\\%\n{\\displaystyle\\sum\\limits_{p\\in P}}\nc_{p}p^{r}\\left( \\sigma_{p,i}^{n}\\cosh\\left( \\psi_{p}\\right) +\\rho\n_{p,i}^{n}\\sinh\\left( \\psi_{p}\\right) \\right) & \\\\\n+%\n{\\displaystyle\\sum\\limits_{q\\in Q}}\ns_{q}q^{r}\\left( \\rho_{q,i}^{n}\\cosh\\left( \\varphi_{q}\\right) +\\sigma\n_{q,i}^{n}\\sinh\\left( \\varphi_{q}\\right) \\right) , & r=2z+1.\n\\end{array}\n\\right. \\label{hyperbolic_ordinates}%\n\\end{equation}\n\n\\end{lemma}\n\n\\begin{proof}\nUsing the basis transformation formulated in Theorem\n\\ref{thm:hyperbolic_basis_transformation} for order $n$ and applying\nderivative formulae%\n\\begin{align*}\n\\frac{\\text{d}^{r}}{\\text{d}u^{r}}\\cosh\\left( au+b\\right) & =\\left\\{\n\\begin{array}\n[c]{ll}%\na^{r}\\cosh\\left( au+b\\right) , & r=2z,\\\\\n& \\\\\na^{r}\\sinh\\left( au+b\\right) , & r=2z+1,\n\\end{array}\n\\right. 
\\\\\n& \\\\\n\\frac{\\text{d}^{r}}{\\text{d}u^{r}}\\sinh\\left( au+b\\right) & =\\left\\{\n\\begin{array}\n[c]{ll}%\na^{r}\\sinh\\left( au+b\\right) , & r=2z,\\\\\n& \\\\\na^{r}\\cosh\\left( au+b\\right) , & r=2z+1\n\\end{array}\n\\right.\n\\end{align*}\nwith hyperbolic identities (\\ref{cosh_of_sum}) and%\n\\[\n\\sinh\\left( a+b\\right) =\\sinh\\left( a\\right) \\cosh\\left( b\\right)\n+\\cosh\\left( a\\right) \\sinh\\left( b\\right) ,\n\\]\none can follow the steps of the proof of the previous Lemma \\ref{lem:tp}.\n\\end{proof}\n\n\\subsection{Description of (rational) trigonometric curves and\nsurfaces\\label{sec:exact_description_trigonometric}}\n\nConsider the smooth parametric curve%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[ g^{\\ell}\\left( u\\right) \\right]\n_{\\ell=1}^{\\delta},~u\\in\\left[ 0,\\alpha\\right] ,~\\alpha\\in\\left(\n0,\\pi\\right) \\label{traditional_trigonometric_curve}%\n\\end{equation}\nwith coordinate functions of the form%\n\\[\ng^{\\ell}\\left( u\\right) =\\sum_{p\\in P_{\\ell}}c_{p}^{\\ell}\\cos\\left(\npu+\\psi_{p}^{\\ell}\\right) +\\sum_{q\\in Q_{\\ell}}s_{q}^{\\ell}\\sin\\left(\nqu+\\varphi_{q}^{\\ell}\\right)\n\\]\nand the vector space (\\ref{truncated_Fourier_vector_space}) of order\n\\[\nn\\geq n_{\\min}=\\max\\left\\{ z:z\\in\\cup_{\\ell=1}^{\\delta}\\left( P_{\\ell}\\cup\nQ_{\\ell}\\right) \\right\\} ,\n\\]\nwhere $P_{\\ell},Q_{\\ell}\\subset%\n\\mathbb{N}\n$ and $c_{p}^{\\ell},\\psi_{p}^{\\ell},s_{q}^{\\ell},\\varphi_{q}^{\\ell}\\in%\n\\mathbb{R}\n$.\n\n\\begin{theorem}\n[Control point based exact description of trigonometric curves]%\n\\label{thm:cpbed_trigonometric_curves}The $r$th ($r\\in%\n\\mathbb{N}\n$) order\\ derivative of the curve (\\ref{traditional_trigonometric_curve}) can\nbe written in the form%\n\\[\n\\frac{\\text{\\emph{d}}^{r}}{\\text{\\emph{d}}u^{r}}\\mathbf{g}\\left( u\\right)\n=\\sum_{i=0}^{2n}\\mathbf{d}_{i}\\left( r\\right) T_{2n,i}^{\\alpha}\\left(\nu\\right) ,~\\forall u\\in\\left[ 
0,\\alpha\\right] ,\n\\]\nwhere control points $\\mathbf{d}_{i}\\left( r\\right) =\\left[ d_{i}^{\\ell\n}\\left( r\\right) \\right] _{\\ell=1}^{\\delta}$ are determined by coordinates%\n\\begin{align}\nd_{i}^{\\ell}\\left( r\\right) = & \\sum_{p\\in P_{\\ell}}c_{p}^{\\ell}%\np^{r}\\left( \\mu_{p,i}^{n}\\cos\\left( \\psi_{p}^{\\ell}+\\frac{r\\pi}{2}\\right)\n-\\lambda_{p,i}^{n}\\sin\\left( \\psi_{p}^{\\ell}+\\frac{r\\pi}{2}\\right) \\right)\n\\label{trigonometric_control_points}\\\\\n& +\\sum_{q\\in Q_{\\ell}}s_{q}^{\\ell}q^{r}\\left( \\lambda_{q,i}^{n}\\cos\\left(\n\\varphi_{q}^{\\ell}+\\frac{r\\pi}{2}\\right) +\\mu_{q,i}^{n}\\sin\\left(\n\\varphi_{q}^{\\ell}+\\frac{r\\pi}{2}\\right) \\right) .\\nonumber\n\\end{align}\n(In the case of zeroth-order derivatives we will use the simpler notation\n$\\mathbf{d}_{i}=\\left[ d_{i}^{\\ell}\\right] _{\\ell=1}^{\\delta}=\\left[\nd_{i}^{\\ell}\\left( 0\\right) \\right] _{\\ell=1}^{\\delta}=\\mathbf{d}%\n_{i}\\left( 0\\right) $ for all $i=0,1,\\ldots,2n$.)\n\\end{theorem}\n\n\\begin{proof}\nUsing Lemma \\ref{lem:tp}, the $r$th order derivative of the $\\ell$th\ncoordinate function of the curve (\\ref{traditional_trigonometric_curve}) can\nbe written in the form%\n\\[\n\\frac{\\text{d}^{r}}{\\text{d}u^{r}}g^{\\ell}\\left( u\\right) =\\sum_{i=0}%\n^{2n}d_{i}^{\\ell}\\left( r\\right) T_{2n,i}^{\\alpha}\\left( u\\right)\n,~\\forall u\\in\\left[ 0,\\alpha\\right] ,\n\\]\nwhere the $i$th ordinate has exactly the form of\n(\\ref{trigonometric_control_points}). 
Repeating this reformulation for all\n$\\ell=1,2,\\ldots,\\delta$ and collecting the coefficients of basis functions\n$\\left\\{ T_{2n,i}^{\\alpha}\\right\\} _{i=0}^{2n}$ one obtains all coordinates\nof control points $\\mathbf{d}_{i}\\left( r\\right) =\\left[ d_{i}^{\\ell\n}\\left( r\\right) \\right] _{\\ell=1}^{\\delta}$ that can be substituted into\nthe description of trigonometric curves of the type (\\ref{trigonometric_curve}).\n\\end{proof}\n\n\\begin{example}\n[Application of Theorem \\ref{thm:cpbed_trigonometric_curves} -- plane\ncurves]Cases (a) and (b) of Fig. \\ref{fig:ed_non_rational_tpc} show the\ncontrol point based exact descriptions of the hypocycloidal arc%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{l}%\n4\\cos\\left( u-\\frac{\\pi}{3}\\right) +\\cos\\left( 4u-\\frac{\\pi}{3}\\right) \\\\\n\\\\\n4\\sin\\left( u-\\frac{\\pi}{3}\\right) -\\sin\\left( 4u-\\frac{\\pi}{3}\\right)\n\\end{array}\n\\right] ,~u\\in\\left( 0,\\frac{3\\pi}{4}\\right) \\label{hypocycloid}%\n\\end{equation}\nand of the arc\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\frac{1}{2}\\left[\n\\begin{array}\n[c]{l}%\n\\sin\\left( u-\\frac{\\pi}{12}\\right) +\\sin\\left( 3u-\\frac{\\pi}{4}\\right) \\\\\n\\\\\n\\cos\\left( u-\\frac{\\pi}{12}\\right) -\\cos\\left( 3u-\\frac{\\pi}{4}\\right)\n\\end{array}\n\\right] ,~u\\in\\left( 0,\\frac{2\\pi}{3}\\right) \\label{quadrifolium}%\n\\end{equation}\nof a quadrifolium, respectively.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.4636in,\nwidth=6.058in\n]%\n{ed_non_rational_tpc.pdf}%\n\\caption{Order elevated control point based exact description of trigonometric\ncurves (\\ref{hypocycloid}) and (\\ref{quadrifolium}) by means of 
Theorem\n\\ref{thm:cpbed_trigonometric_curves}. (\\emph{a}) A hypocycloidal arc\n($\\alpha=\\frac{3\\pi}{4}$). (\\emph{b}) An arc of a quadrifolium ($\\alpha\n=\\frac{2\\pi}{3}$).}%\n\\label{fig:ed_non_rational_tpc}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{example}\n[Application of Theorem \\ref{thm:cpbed_trigonometric_curves} -- space\ncurve]Fig. \\ref{fig:torus_knot} illustrates the control point based exact\ndescriptions of the arc%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right) \\\\\n\\\\\ng^{3}\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{l}%\n\\frac{1}{2}\\cos\\left( u\\right) +2\\cos\\left( 3u\\right) +\\frac{1}{2}%\n\\cos\\left( 5u\\right) \\\\\n\\\\\n\\frac{1}{2}\\sin\\left( u\\right) +2\\sin\\left( 3u\\right) +\\frac{1}{2}%\n\\sin\\left( 5u\\right) \\\\\n\\\\\n\\sin\\left( 2u\\right)\n\\end{array}\n\\right] ,~u\\in\\left( 0,\\frac{\\pi}{2}\\right) , \\label{torus_knot}%\n\\end{equation}\nof a torus knot.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.8553in,\nwidth=4.9882in\n]%\n{torus_knot_n_5_7_9_alpha_pi_per_2_pch_0.pdf}%\n\\caption{Order elevated control point based exact description of an arc\n($\\alpha=\\frac{\\pi}{2}$) of a knot that lies on the surface of the torus\n$\\left[ \\left( R+r\\sin\\left( u_{1}\\right) \\right) \\cos\\left(\nu_{2}\\right) ,\\left( R+r\\sin\\left( u_{1}\\right) \\right) \\sin\\left(\nu_{2}\\right) ,r\\cos\\left( u_{1}\\right) \\right] $, where $\\left( u_{1}%\n,u_{2}\\right) \\in\\left[ 0,2\\pi\\right] \\times\\left[ 0,2\\pi\\right] $, $R=2$\nand $r=1$. 
Control points were obtained by using Theorem\n\\ref{thm:cpbed_trigonometric_curves}.}%\n\\label{fig:torus_knot}%\n\\end{center}\n\\end{figure}\n\n\nConsider now the rational trigonometric curve%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\frac{1}{g^{\\delta+1}\\left( u\\right) }\\left[\ng^{\\ell}\\left( u\\right) \\right] _{\\ell=1}^{\\delta},~u\\in\\left[\n0,\\alpha\\right] , \\label{rational_trigonometric_curve_to_be_described}%\n\\end{equation}\ngiven in traditional parametric form\n\\[\ng^{\\ell}\\left( u\\right) =\\sum_{p\\in P_{\\ell}}c_{p}^{\\ell}\\cos\\left(\npu+\\psi_{p}^{\\ell}\\right) +\\sum_{q\\in Q_{\\ell}}s_{q}^{\\ell}\\sin\\left(\nqu+\\varphi_{q}^{\\ell}\\right) ,~P_{\\ell},Q_{\\ell}\\subset%\n\\mathbb{N}\n,~c_{p}^{\\ell},\\psi_{p}^{\\ell},s_{q}^{\\ell},\\varphi_{q}^{\\ell}\\in%\n\\mathbb{R}\n,~\\ell=1,2,\\ldots,\\delta+1,\n\\]\nwhere%\n\\[\ng^{\\delta+1}\\left( u\\right) >0,~\\forall u\\in\\left[ 0,\\alpha\\right] .\n\\]\n\n\n\\begin{algorithm}\n[Control point based exact description of rational trigonometric\ncurves]\\label{alg:cpbed_rational_trigonometric_curves}The process that\nprovides the control point based exact description of the rational curve\n(\\ref{rational_trigonometric_curve_to_be_described}) consists of the following operations:\n\n\\begin{itemize}\n\\item let%\n\\[\nn\\geq n_{\\min}=\\max\\left\\{ z:z\\in\\cup_{\\ell=1}^{\\delta+1}\\left( P_{\\ell}\\cup\nQ_{\\ell}\\right) \\right\\}\n\\]\nbe an arbitrarily fixed order;\n\n\\item apply Theorem \\ref{thm:cpbed_trigonometric_curves} to the pre-image\n\\begin{equation}\n\\mathbf{g}_{\\wp}\\left( u\\right) =\\left[ g^{\\ell}\\left( u\\right) \\right]\n_{\\ell=1}^{\\delta+1},~u\\in\\left[ 0,\\alpha\\right]\n\\label{pre_image_of_rational_trigonometric_curve_to_be_described}%\n\\end{equation}\nof the curve (\\ref{rational_trigonometric_curve_to_be_described}), i.e.,\ncompute control points\n\\[\n\\mathbf{d}_{i}^{\\wp}=\\left[ d_{i}^{\\ell}\\right] 
_{\\ell=1}^{\\delta+1}\\in%\n\\mathbb{R}\n^{\\delta+1},~i=0,1,\\ldots,2n\n\\]\nfor the exact trigonometric representation of\n(\\ref{pre_image_of_rational_trigonometric_curve_to_be_described}) in the\npre-image space $%\n\\mathbb{R}\n^{\\delta+1}$;\n\n\\item project the obtained control points onto the hyperplane $x^{\\delta+1}=1$,\nwhich yields the control points\n\\[\n\\mathbf{d}_{i}=\\frac{1}{d_{i}^{\\delta+1}}\\left[ d_{i}^{\\ell}\\right] _{\\ell\n=1}^{\\delta}\\in%\n\\mathbb{R}\n^{\\delta},~i=0,1,\\ldots,2n\n\\]\nand weights%\n\\[\n\\omega_{i}=d_{i}^{\\delta+1},~i=0,1,\\ldots,2n\n\\]\nneeded for the rational trigonometric representation\n(\\ref{rational_trigonometric_curve}) of\n(\\ref{rational_trigonometric_curve_to_be_described});\n\n\\item the above generation process does not necessarily ensure the positivity\nof all weights, since the last coordinate of some control points in the\npre-image space $%\n\\mathbb{R}\n^{\\delta+1}$ can be negative; if this is the case, one can increase the order\nof the trigonometric curve used for the control point based exact description\nof the pre-image $\\mathbf{g}_{\\wp}$ in $%\n\\mathbb{R}\n^{\\delta+1}$, since -- as\nstated in Remark \\ref{rem:trigonometric_order_elevation} -- order elevation\ngenerates a sequence of control polygons that converges to $\\mathbf{g}_{\\wp}$,\nwhich is a geometric object consisting of a single branch that does not\nintersect the vanishing plane $x^{\\delta+1}=0$ (i.e., the $\\left( \\delta\n+1\\right) $th coordinates of all its points are of the same sign); therefore,\nit is guaranteed that there exists a finite minimal order $n+z$ ($z\\geq1$) for\nwhich all weights are positive.\n\\end{itemize}\n\\end{algorithm}\n\n\\begin{example}\n[Application of Algorithm \\ref{alg:cpbed_rational_trigonometric_curves} --\nrational curves]Fig. 
\\ref{fig:bernoulli_s_lemniscate} shows the control point\nbased description of an arc of the rational trigonometric curve%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\frac{1}{g^{3}\\left( u\\right) }\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\frac{1}{\\frac{3}{2}-\\frac{1}{2}\\cos\\left( 2u\\right) }\\left[\n\\begin{array}\n[c]{r}%\n\\cos\\left( u\\right) \\\\\n\\\\\n\\frac{1}{2}\\sin\\left( 2u\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,\\frac{2\\pi}{3}\\right] ,\n\\label{Bernoulli_s_lemniscate}%\n\\end{equation}\nalso known as Bernoulli's lemniscate.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.7239in,\nwidth=6.346in\n]%\n{Bernoulli_s_Lemniscate.pdf}%\n\\caption{Using Algorithm \\ref{alg:cpbed_rational_trigonometric_curves} with\nshape parameter $\\alpha=\\frac{2\\pi}{3}$, the left image shows the control\npoint based exact description of an arc of Bernoulli's lemniscate\n(\\ref{Bernoulli_s_lemniscate}) by means of a fourth order rational\ntrigonometric curve $\\mathbf{g}$ (green) and its pre-image $\\mathbf{g}_{\\wp}$\n(red). Control points $\\mathbf{d}_{i}^{\\wp}=\\left[\n\\protect\\begin{array}\n[c]{cc}%\n\\omega_{i}\\mathbf{d}_{i} & \\omega_{i}%\n\\protect\\end{array}\n\\right] \\in\\mathbb{R} ^{3}$ ($i=0,1,\\ldots,8$) of the pre-image\n$\\mathbf{g}_{\\wp}=\\left[ g^{\\ell}\\left( u\\right) \\right] _{\\ell=1}^{3}$\nwere obtained by the application of Theorem\n\\ref{thm:cpbed_trigonometric_curves}, while control points $\\left[\n\\mathbf{d}_{i}\\right] _{i=0}^{8}$ and weights $\\left[ \\omega_{i}\\right]\n_{i=0}^{8}$ needed for the rational representation\n(\\ref{rational_trigonometric_curve}) were obtained by the central projection\nof points $\\left[ \\mathbf{d}_{i}^{\\wp}\\right] _{i=0}^{8}$ onto the\nhyperplane $x^{3}=1$. 
The right image also illustrates the control polygons of\nthe second and third order exact rational trigonometric representations of the\nsame arc. (For interpretation of the references to color in this figure\nlegend, the reader is referred to the web version of this paper.)}%\n\\label{fig:bernoulli_s_lemniscate}%\n\\end{center}\n\\end{figure}\n\n\nTheorem \\ref{thm:cpbed_trigonometric_curves} can also be used to provide\ncontrol nets (grids) for the control point based exact description of the\nhigher order mixed partial derivatives of a general class of multivariate surfaces the\nelements of which can be expressed in the form%\n\\begin{equation}\n\\mathbf{s}\\left( \\mathbf{u}\\right) =\\left[\n\\begin{array}\n[c]{cccc}%\ns^{1}\\left( \\mathbf{u}\\right) & s^{2}\\left( \\mathbf{u}\\right) & \\cdots &\ns^{\\delta+\\kappa}\\left( \\mathbf{u}\\right)\n\\end{array}\n\\right] \\in%\n\\mathbb{R}\n^{\\delta+\\kappa},~\\mathbf{u}=\\left[ u_{j}\\right] _{j=1}^{\\delta}\\in\n\\times_{j=1}^{\\delta}\\left[ 0,\\alpha_{j}\\right] ,~\\alpha_{j}\\in\\left(\n0,\\pi\\right) ,~\\kappa\\geq0 \\label{trigonometric_surface_to_be_described}%\n\\end{equation}\nwhere%\n\\[\ns^{\\ell}\\left( \\mathbf{u}\\right) =\\sum_{\\zeta=1}^{m_{\\ell}}\\prod\n_{j=1}^{\\delta}\\left( \\sum_{p\\in P_{\\ell,\\zeta,j}}c_{p}^{\\ell,\\zeta,j}%\n\\cos\\left( pu_{j}+\\psi_{p}^{\\ell,\\zeta,j}\\right) +\\sum_{q\\in Q_{\\ell\n,\\zeta,j}}s_{q}^{\\ell,\\zeta,j}\\sin\\left( qu_{j}+\\varphi_{q}^{\\ell,\\zeta\n,j}\\right) \\right) ,~\\ell=1,2,\\ldots,\\delta+\\kappa\n\\]\nand%\n\\[\nP_{\\ell,\\zeta,j},Q_{\\ell,\\zeta,j}\\subset%\n\\mathbb{N}\n,~m_{\\ell}\\in%\n\\mathbb{N}\n\\setminus\\left\\{ 0\\right\\} ,~c_{p}^{\\ell,\\zeta,j},\\psi_{p}^{\\ell,\\zeta\n,j},s_{q}^{\\ell,\\zeta,j},\\varphi_{q}^{\\ell,\\zeta,j}\\in%\n\\mathbb{R}\n.\n\\]\nIndeed, we have the next theorem.\n\n\\begin{theorem}\n[Control point based exact description of trigonometric surfaces]%\n\\label{thm:cpbed_trigonometric_surfaces}The control 
point based exact\ndescription of the $\\left( r_{1}+r_{2}+\\ldots+r_{\\delta}\\right) $th order\nmixed partial derivative of the surface\n(\\ref{trigonometric_surface_to_be_described}) fulfills the equality%\n\\begin{equation}\n\\dfrac{\\partial^{r_{1}+r_{2}+\\ldots+r_{\\delta}}}{\\partial u_{1}^{r_{1}%\n}\\partial u_{2}^{r_{2}}\\cdots\\partial u_{\\delta}^{r_{\\delta}}}\\mathbf{s}\\left(\n\\mathbf{u}\\right) =%\n{\\displaystyle\\sum\\limits_{i_{1}=0}^{2n_{1}}}\n{\\displaystyle\\sum\\limits_{i_{2}=0}^{2n_{2}}}\n\\cdots%\n{\\displaystyle\\sum\\limits_{i_{\\delta}=0}^{2n_{\\delta}}}\n\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}\\left( r_{1},r_{2},\\ldots\n,r_{\\delta}\\right)\n{\\displaystyle\\prod\\limits_{j=1}^{\\delta}}\nT_{2n_{j},i_{j}}^{\\alpha_{j}}\\left( u_{j}\\right)\n\\label{exact_trigonometric_surface_represantation}%\n\\end{equation}\nfor all parameter vectors $\\mathbf{u}\\in\\times_{j=1}^{\\delta}\\left[\n0,\\alpha_{j}\\right] $, where%\n\\begin{align*}\nn_{j} & \\geq n_{\\min}^{j}=\\max\\left\\{ z_{j}:z_{j}\\in\\cup_{\\ell=1}%\n^{\\delta+\\kappa}\\cup_{\\zeta=1}^{m_{\\ell}}\\left( P_{\\ell,\\zeta,j}\\cup Q_{\\ell\n,\\zeta,j}\\right) \\right\\} ,~j=1,2,\\ldots,\\delta,\\\\\n\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}\\left( r_{1},r_{2},\\ldots\n,r_{\\delta}\\right) & =\\left[ d_{i_{1},i_{2},\\ldots,i_{\\delta}}^{\\ell\n}\\left( r_{1},r_{2},\\ldots,r_{\\delta}\\right) \\right] _{\\ell=1}%\n^{\\delta+\\kappa}\\in%\n\\mathbb{R}\n^{\\delta+\\kappa},\\\\\nd_{i_{1},i_{2},\\ldots,i_{\\delta}}^{\\ell}\\left( r_{1},r_{2},\\ldots,r_{\\delta\n}\\right) & =\\sum_{\\zeta=1}^{m_{\\ell}}\\prod_{j=1}^{\\delta}d_{i_{j}}%\n^{\\ell,\\zeta}\\left( r_{j}\\right) ,~\\ell=1,2,\\ldots,\\delta+\\kappa\n\\end{align*}\nand%\n\\begin{align*}\nd_{i_{j}}^{\\ell,\\zeta}\\left( r_{j}\\right) = & \\sum_{p\\in P_{\\ell,\\zeta,j}%\n}c_{p}^{\\ell,\\zeta,j}p^{r_{j}}\\left( \\mu_{p,i_{j}}^{n_{j}}\\cos\\left(\n\\psi_{p}^{\\ell,\\zeta,j}+\\frac{r_{j}\\pi}{2}\\right) 
-\\lambda_{p,i_{j}}^{n_{j}%\n}\\sin\\left( \\psi_{p}^{\\ell,\\zeta,j}+\\frac{r_{j}\\pi}{2}\\right) \\right) \\\\\n& +\\sum_{q\\in Q_{\\ell,\\zeta,j}}s_{q}^{\\ell,\\zeta,j}q^{r_{j}}\\left(\n\\lambda_{q,i_{j}}^{n_{j}}\\cos\\left( \\varphi_{q}^{\\ell,\\zeta,j}+\\frac{r_{j}%\n\\pi}{2}\\right) +\\mu_{q,i_{j}}^{n_{j}}\\sin\\left( \\varphi_{q}^{\\ell,\\zeta\n,j}+\\frac{r_{j}\\pi}{2}\\right) \\right) ,\\\\\n\\ell & =1,2,\\ldots,\\delta+\\kappa,~\\zeta=1,2,\\ldots,m_{\\ell},~j=1,2,\\ldots\n,\\delta.\n\\end{align*}\n(In the case of zeroth-order partial derivatives we will use the simpler notation\n\\[\n\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}=\\left[ d_{i_{1},i_{2}%\n,\\ldots,i_{\\delta}}^{\\ell}\\right] _{\\ell=1}^{\\delta+\\kappa}=\\left[\nd_{i_{1},i_{2},\\ldots,i_{\\delta}}^{\\ell}\\left( 0,0,\\ldots,0\\right) \\right]\n_{\\ell=1}^{\\delta+\\kappa}=\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}\\left(\n0,0,\\ldots,0\\right)\n\\]\nfor all $i_{j}=0,1,\\ldots,2n_{j}$ and $j=1,2,\\ldots,\\delta$.)\n\\end{theorem}\n\n\\begin{proof}\nObserve, by using Lemma \\ref{lem:tp}, that equality%\n\\begin{align*}\n& \\frac{\\partial^{r_{1}+r_{2}+\\ldots+r_{\\delta}}}{\\partial u_{1}^{r_{1}%\n}\\partial u_{2}^{r_{2}}\\cdots\\partial u_{\\delta}^{r_{\\delta}}}s^{\\ell}\\left(\n\\mathbf{u}\\right) \\\\\n= & \\frac{\\partial^{r_{1}+r_{2}+\\ldots+r_{\\delta}}}{\\partial u_{1}^{r_{1}%\n}\\partial u_{2}^{r_{2}}\\cdots\\partial u_{\\delta}^{r_{\\delta}}}\\sum_{\\zeta=1}^{m_{\\ell}%\n}\\prod_{j=1}^{\\delta}\\left( \\sum_{p\\in P_{\\ell,\\zeta,j}}c_{p}^{\\ell,\\zeta\n,j}\\cos\\left( pu_{j}+\\psi_{p}^{\\ell,\\zeta,j}\\right) +\\sum_{q\\in\nQ_{\\ell,\\zeta,j}}s_{q}^{\\ell,\\zeta,j}\\sin\\left( qu_{j}+\\varphi_{q}%\n^{\\ell,\\zeta,j}\\right) \\right) \\\\\n= & \\sum_{\\zeta=1}^{m_{\\ell}}\\prod_{j=1}^{\\delta}\\frac{\\text{d}^{r_{j}}%\n}{\\text{d}u_{j}^{r_{j}}}\\left( \\sum_{p\\in P_{\\ell,\\zeta,j}}c_{p}^{\\ell\n,\\zeta,j}\\cos\\left( pu_{j}+\\psi_{p}^{\\ell,\\zeta,j}\\right) 
+\\sum_{q\\in\nQ_{\\ell,\\zeta,j}}s_{q}^{\\ell,\\zeta,j}\\sin\\left( qu_{j}+\\varphi_{q}%\n^{\\ell,\\zeta,j}\\right) \\right) \\\\\n= & \\sum_{\\zeta=1}^{m_{\\ell}}\\prod_{j=1}^{\\delta}\\left( \\sum_{i_{j}%\n=0}^{2n_{j}}d_{i_{j}}^{\\ell,\\zeta}\\left( r_{j}\\right) T_{2n_{j},i_{j}%\n}^{\\alpha_{j}}\\left( u_{j}\\right) \\right) \\\\\n= & \\sum_{i_{1}=0}^{2n_{1}}\\sum_{i_{2}=0}^{2n_{2}}\\cdots\\sum_{i_{\\delta}%\n=0}^{2n_{\\delta}}\\left( \\sum_{\\zeta=1}^{m_{\\ell}}\\prod_{j=1}^{\\delta}%\nd_{i_{j}}^{\\ell,\\zeta}\\left( r_{j}\\right) \\right) T_{2n_{1},i_{1}}%\n^{\\alpha_{1}}\\left( u_{1}\\right) T_{2n_{2},i_{2}}^{\\alpha_{2}}\\left(\nu_{2}\\right) \\cdot\\ldots\\cdot T_{2n_{\\delta},i_{\\delta}}^{\\alpha_{\\delta}%\n}\\left( u_{\\delta}\\right)\n\\end{align*}\nholds for all parameter vectors $\\mathbf{u}=\\left[ u_{j}\\right]\n_{j=1}^{\\delta}\\in\\times_{j=1}^{\\delta}\\left[ 0,\\alpha_{j}\\right] $ and\ncoordinate functions $\\ell=1,2,\\ldots,\\delta+\\kappa$, i.e., $d_{i_{j}}%\n^{\\ell,\\zeta}\\left( r_{j}\\right) $ can be calculated by means of formula\n(\\ref{trigonometric_control_points}) for all indices $\\ell=1,2,\\ldots\n,\\delta+\\kappa$, $\\zeta=1,2,\\ldots,m_{\\ell}$ and $j=1,2,\\ldots,\\delta$.\n\\end{proof}\n\n\\begin{example}\n[Application of Theorem \\ref{thm:cpbed_trigonometric_surfaces} --\nsurfaces]Using trigonometric surfaces of type (\\ref{trigonometric_surface})\nand applying Theorem \\ref{thm:cpbed_trigonometric_surfaces}, Figs.\n\\ref{fig:torus} and \\ref{fig:surface_of_revolution_star} show several control\npoint constellations for the exact description of the toroidal patch%\n\\begin{align}\n\\mathbf{s}\\left( u_{1},u_{2}\\right) & =\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2}\\right) \\\\\n\\\\\ns^{2}\\left( u_{1},u_{2}\\right) \\\\\n\\\\\ns^{3}\\left( u_{1},u_{2}\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{c}%\n\\left( 3+\\frac{15}{8}\\left( \\sqrt{5}-1\\right) \\sin\\left( u_{1}\\right)\n\\right) 
\\cos\\left( u_{2}\\right) \\\\\n\\\\\n\\left( 3+\\frac{15}{8}\\left( \\sqrt{5}-1\\right) \\sin\\left( u_{1}\\right)\n\\right) \\sin\\left( u_{2}\\right) \\\\\n\\\\\n\\frac{15}{8}\\left( \\sqrt{5}-1\\right) \\cos\\left( u_{1}\\right)\n\\end{array}\n\\right] ,\\label{torus}\\\\\n\\left( u_{1},u_{2}\\right) & \\in\\left[ 0,\\frac{3\\pi}{4}\\right]\n\\times\\left[ 0,\\frac{\\pi}{2}\\right] \\nonumber\n\\end{align}\nand of the patch%\n\\begin{align}\n\\mathbf{s}\\left( u_{1},u_{2}\\right) & =\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2}\\right) \\\\\ns^{2}\\left( u_{1},u_{2}\\right) \\\\\ns^{3}\\left( u_{1},u_{2}\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{c}%\n\\left( 12+6\\sin\\left( u_{1}\\right) -\\sin\\left( 6u_{1}\\right) \\right)\n\\cos\\left( u_{2}\\right) \\\\\n\\left( 12+6\\sin\\left( u_{1}\\right) -\\sin\\left( 6u_{1}\\right) \\right)\n\\sin\\left( u_{2}\\right) \\\\\n6\\cos\\left( u_{1}\\right) +\\cos\\left( 6u_{1}\\right)\n\\end{array}\n\\right] ,\\label{surface_of_revolution_star}\\\\\n& \\left( u_{1},u_{2}\\right) \\in \\left[ 0,\\frac{\\pi}{2}\\right]\n\\times\\left[ 0,\\frac{2\\pi}{3}\\right] \\nonumber\n\\end{align}\nthat also lies on a surface of revolution generated by the rotation of the\nhypocycloid%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{c}%\n6\\sin\\left( u\\right) -\\sin\\left( 6u\\right) \\\\\n6\\cos\\left( u\\right) +\\cos\\left( 6u\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,2\\pi\\right] \\label{star_hypocycloid}%\n\\end{equation}\nabout the axis $z$, respectively.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=4.094900in,\nnatwidth=6.298400in,\nheight=3.6729in,\nwidth=6.3261in\n]%\n{torus_merged+.pdf}%\n\\caption{Control point based exact description of the toroidal patch\n(\\ref{torus}) by means of 
$3$-dimensional $2$-variate trigonometric surfaces\nof the type (\\ref{trigonometric_surface}). Control nets corresponding to\ndifferent orders were obtained by using Theorem\n\\ref{thm:cpbed_trigonometric_surfaces} with parameter settings $\\delta=2$,\n$\\kappa=1$; $\\alpha_{1}=\\frac{3\\pi}{4}$, $\\alpha_{2}=\\frac{\\pi}{2}$;\n$m_{1}=m_{2}=m_{3}=1$; $P_{1,1,1}=\\left\\{ 0\\right\\} $, $P_{1,1,2}=\\left\\{\n1\\right\\} $, $Q_{1,1,1}=\\left\\{ 1\\right\\} $,~$Q_{1,1,2}=\\varnothing$,\n$c_{0}^{1,1,1}=3$, $c_{1}^{1,1,2}=1$, $s_{1}^{1,1,1}=\\frac{15}{8}\\left(\n\\sqrt{5}-1\\right) $, $\\psi_{0}^{1,1,1}=\\psi_{1}^{1,1,2}=\\varphi_{1}%\n^{1,1,1}=0$; $P_{2,1,1}=\\left\\{ 0\\right\\} $, $P_{2,1,2}=\\varnothing$,\n$Q_{2,1,1}=Q_{2,1,2}=\\left\\{ 1\\right\\} $, $c_{0}^{2,1,1}=3$, $s_{1}%\n^{2,1,1}=\\frac{15}{8}\\left( \\sqrt{5}-1\\right) $, $s_{1}^{2,1,2}=1$,\n$\\psi_{0}^{2,1,1}=\\varphi_{1}^{2,1,1}=\\varphi_{1}^{2,1,2}=0$; $P_{3,1,1}%\n=\\left\\{ 1\\right\\} $, $P_{3,1,2}=\\left\\{ 0\\right\\} $, $Q_{3,1,1}%\n=Q_{3,1,2}=\\varnothing$, $c_{1}^{3,1,1}=\\frac{15}{8}\\left( \\sqrt{5}-1\\right)\n$, $c_{0}^{3,1,2}=1$, $\\psi_{1}^{3,1,1}=\\psi_{0}^{3,1,2}=0$.}%\n\\label{fig:torus}%\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=2.546900in,\nnatwidth=6.298400in,\nheight=2.6247in,\nwidth=6.3261in\n]%\n{surface_of_revolution_star_merged+.pdf}%\n\\caption{Control point based exact description of the trigonometric patch\n(\\ref{surface_of_revolution_star}) that lies on a surface of revolution\ngenerated by the rotation of the hypocycloid (\\ref{star_hypocycloid}) about\nthe axis $z$. 
Control nets were generated by formulae described in Theorem\n\\ref{thm:cpbed_trigonometric_surfaces} ($\\delta=2$, $\\kappa=1$; $\\alpha\n_{1}=\\frac{\\pi}{2}$, $\\alpha_{2}=\\frac{2\\pi}{3}$; $m_1=m_2=m_3=1$).}%\n\\label{fig:surface_of_revolution_star}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{example}\n[Application of Theorem \\ref{thm:cpbed_trigonometric_surfaces} -- volumes]%\n$3$-dimensional trigonometric volumes can also be exactly described by means\nof $3$-variate tensor product surfaces of the type\n(\\ref{trigonometric_surface}). Figs.\n\\ref{fig:non_rational_trigonometric_volume} and\n\\ref{fig:non_rational_trigonometric_volume2} illustrate control grids that\ngenerate the volumes%\n\\begin{align}\n\\mathbf{s}\\left( u_{1},u_{2},u_{3}\\right) & =\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{2}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{3}\\left( u_{1},u_{2},u_{3}\\right)\n\\end{array}\n\\right] \\nonumber\\\\\n& =\\left[\n\\begin{array}\n[c]{c}%\n\\left( 6+\\cos\\left( u_{1}+\\frac{\\pi}{3}\\right) \\right) \\cos\\left(\nu_{2}-\\frac{\\pi}{6}\\right) \\cos\\left( u_{3}+\\frac{\\pi}{3}\\right) \\\\\n\\\\\n\\left( 6+\\cos\\left( u_{1}+\\frac{\\pi}{3}\\right) \\right) \\cos\\left(\nu_{2}-\\frac{\\pi}{6}\\right) \\sin\\left( u_{3}+\\frac{\\pi}{3}\\right) \\\\\n\\\\\n\\cos\\left( u_{1}+\\frac{\\pi}{3}\\right) \\sin\\left( u_{2}-\\frac{\\pi}%\n{6}\\right)\n\\end{array}\n\\right] ,\\label{non_rational_trigonometric_volume1}\\\\\n\\left( u_{1},u_{2},u_{3}\\right) & \\in\\left[ 0,\\frac{\\pi}{2}\\right]\n\\times\\left[ 0,\\frac{\\pi}{2}\\right] \\times\\left[ 0,\\frac{2\\pi}{3}\\right]\n\\nonumber\n\\end{align}\nand%\n\\begin{align}\n\\mathbf{s}\\left( u_{1},u_{2},u_{3}\\right) & =\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{2}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{3}\\left( u_{1},u_{2},u_{3}\\right)\n\\end{array}\n\\right] \\nonumber\\\\\n& =\\left[\n\\begin{array}\n[c]{c}%\n\\left( 
2+\\frac{3}{4}\\sin\\left( u_{1}\\right) -\\frac{1}{4}\\sin\\left(\n3u_{1}\\right) \\right) \\cos\\left( u_{2}\\right) \\left( \\frac{3}{2}-\\frac\n{1}{2}\\cos\\left( 2u_{3}\\right) \\right) \\\\\n\\\\\n\\left( \\frac{5}{2}-\\frac{1}{2}\\cos\\left( 2u_{1}\\right) \\right) \\sin\\left(\nu_{2}\\right) \\left( 1+\\frac{3}{4}\\sin\\left( u_{3}\\right) -\\frac{1}{4}%\n\\sin\\left( 3u_{3}\\right) \\right) \\\\\n\\\\\n\\left( \\frac{1}{2}+\\frac{3}{8}\\cos\\left( u_{1}\\right) +\\frac{1}{2}%\n\\cos\\left( 2u_{1}\\right) +\\frac{1}{8}\\cos\\left( 3u_{1}\\right) \\right)\n\\left( \\frac{3}{2}+\\cos\\left( u_{2}\\right) +\\sin\\left( u_{3}\\right)\n\\right)\n\\end{array}\n\\right] ,\\label{non_rational_trigonometric_volume2}\\\\\n\\left( u_{1},u_{2},u_{3}\\right) & \\in\\left[ 0,\\frac{\\pi}{2}\\right]\n\\times\\left[ 0,\\frac{2\\pi}{3}\\right] \\times\\left[ 0,\\frac{\\pi}{2}\\right]\n,\\nonumber\n\\end{align}\nrespectively.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=2.801100in,\nnatwidth=6.298400in,\nheight=2.8011in,\nwidth=6.2984in\n]%\n{non_rational_trigonometric_volume_merged+.pdf}%\n\\caption{Different views of the same $3$-dimensional trigonometric volume\n(\\ref{non_rational_trigonometric_volume1}) along with its control grid\ncalculated by means of Theorem \\ref{thm:cpbed_trigonometric_surfaces}\n($\\delta=3$, $\\kappa=0$; $\\alpha_{1}=\\frac{\\pi}{2}$, $\\alpha_{2}=\\frac{\\pi}%\n{2}$, $\\alpha_{3}=\\frac{2\\pi}{3}$; $m_1=m_2=m_3=1$).}%\n\\label{fig:non_rational_trigonometric_volume}%\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=2.633300in,\nnatwidth=6.138400in,\nheight=2.6602in,\nwidth=6.1661in\n]%\n{non_rational_trigonometric_volume2_merged+.pdf}%\n\\caption{Control point based exact description of the $3$-dimensional\ntrigonometric volume (\\ref{non_rational_trigonometric_volume2}) by means of\n$3$-variate tensor product surfaces of the type 
(\\ref{trigonometric_surface})\nwith different orders. Control grids were obtained by using Theorem\n\\ref{thm:cpbed_trigonometric_surfaces} ($\\delta=3$, $\\kappa=0$; $\\alpha\n_{1}=\\frac{\\pi}{2}$, $\\alpha_{2}=\\frac{2\\pi}{3}$, $\\alpha_{3}=\\frac{\\pi}{2}%\n$; $m_1=m_2=1$, $m_3=2$).}%\n\\label{fig:non_rational_trigonometric_volume2}%\n\\end{center}\n\\end{figure}\n\n\nTheorem \\ref{thm:cpbed_trigonometric_surfaces} can also be used to provide\ncontrol point based exact description of the rational trigonometric surface%\n\\begin{equation}\n\\mathbf{s}\\left( \\mathbf{u}\\right) =\\frac{1}{s^{\\delta+\\kappa+1}\\left(\n\\mathbf{u}\\right) }\\left[\n\\begin{array}\n[c]{cccc}%\ns^{1}\\left( \\mathbf{u}\\right) & s^{2}\\left( \\mathbf{u}\\right) & \\cdots &\ns^{\\delta+\\kappa}\\left( \\mathbf{u}\\right)\n\\end{array}\n\\right] \\in%\n\\mathbb{R}\n^{\\delta+\\kappa}, \\label{rational_trigonometric_surface_to_be_described}%\n\\end{equation}\nwhere%\n\\begin{align*}\n\\mathbf{u} & =\\left[ u_{j}\\right] _{j=1}^{\\delta}\\in\\times_{j=1}^{\\delta\n}\\left[ 0,\\alpha_{j}\\right] ,~\\alpha_{j}\\in\\left( 0,\\pi\\right)\n,~\\kappa\\geq0,\\\\\ns^{\\ell}\\left( \\mathbf{u}\\right) & =\\sum_{\\zeta=1}^{m_{\\ell}}\\prod\n_{j=1}^{\\delta}\\left( \\sum_{p\\in P_{\\ell,\\zeta,j}}c_{p}^{\\ell,\\zeta,j}%\n\\cos\\left( pu_{j}+\\psi_{p}^{\\ell,\\zeta,j}\\right) +\\sum_{q\\in Q_{\\ell\n,\\zeta,j}}s_{q}^{\\ell,\\zeta,j}\\sin\\left( qu_{j}+\\varphi_{q}^{\\ell,\\zeta\n,j}\\right) \\right) ,\\\\\n\\ell & =1,2,\\ldots,\\delta+\\kappa+1,\\\\\nP_{\\ell,\\zeta,j},Q_{\\ell,\\zeta,j} & \\subset%\n\\mathbb{N}\n,~m_{\\ell}\\in%\n\\mathbb{N}\n\\setminus\\left\\{ 0\\right\\} ,~c_{p}^{\\ell,\\zeta,j},\\psi_{p}^{\\ell,\\zeta\n,j},s_{q}^{\\ell,\\zeta,j},\\varphi_{q}^{\\ell,\\zeta,j}\\in%\n\\mathbb{R}%\n\\end{align*}\nand%\n\\[\ns^{\\delta+\\kappa+1}\\left( \\mathbf{u}\\right) >0,~\\forall\\mathbf{u}\\in\n\\times_{j=1}^{\\delta}\\left[ 0,\\alpha_{j}\\right] .\n\\]\nSimilarly to Algorithm 
\\ref{alg:cpbed_rational_trigonometric_curves}, one can\nformulate the following process.\n\n\\begin{algorithm}\n[Control point based exact description of rational trigonometric\nsurfaces]\\label{alg:cpbed_rational_trigonometric_surfaces}Operations that\nensure the control point based exact description of the surface\n(\\ref{rational_trigonometric_surface_to_be_described}) are as follows:\n\n\\begin{itemize}\n\\item let%\n\\[\nn_{j}\\geq n_{\\min}^{j}=\\max\\left\\{ z_{j}:z_{j}\\in\\cup_{\\ell=1}^{\\delta+\\kappa+1}%\n\\cup_{\\zeta=1}^{m_{\\ell}}\\left( P_{\\ell,\\zeta,j}\\cup Q_{\\ell,\\zeta,j}\\right)\n\\right\\} ,~j=1,2,\\ldots,\\delta\n\\]\nbe arbitrarily fixed orders in directions $u_{1},u_{2},\\ldots,u_{\\delta}$;\n\n\\item apply Theorem \\ref{thm:cpbed_trigonometric_surfaces} to the pre-image\n\\begin{equation}\n\\mathbf{s}_{\\wp}\\left( \\mathbf{u}\\right) =\\left[ s^{\\ell}\\left(\n\\mathbf{u}\\right) \\right] _{\\ell=1}^{\\delta+\\kappa+1}\\in%\n\\mathbb{R}\n^{\\delta+\\kappa+1},~\\mathbf{u}=\\left[ u_{j}\\right] _{j=1}^{\\delta}\\in\n\\times_{j=1}^{\\delta}\\left[ 0,\\alpha_{j}\\right]\n\\label{pre_image_of_rational_trigonometric_surface_to_be_described}%\n\\end{equation}\nof the surface (\\ref{rational_trigonometric_surface_to_be_described}), i.e.,\ncompute control points%\n\\[\n\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}^{\\wp}=\\left[ d_{i_{1},i_{2}%\n,\\ldots,i_{\\delta}}^{\\ell}\\right] _{\\ell=1}^{\\delta+\\kappa+1}\\in%\n\\mathbb{R}\n^{\\delta+\\kappa+1},~i_{j}=0,1,\\ldots,2n_{j},~j=1,2,\\ldots,\\delta\n\\]\nfor the exact trigonometric representation of\n(\\ref{pre_image_of_rational_trigonometric_surface_to_be_described}) in the\npre-image space $%\n\\mathbb{R}\n^{\\delta+\\kappa+1}$;\n\n\\item project the obtained control points onto the hyperplane $x^{\\delta\n+\\kappa+1}=1$,\nwhich yields the control points\n\\[\n\\mathbf{d}_{i_{1},i_{2},\\ldots,i_{\\delta}}=\\frac{1}{d_{i_{1},i_{2}%\n,\\ldots,i_{\\delta}}^{\\delta+\\kappa+1}}\\left[ 
d_{i_{1},i_{2},\\ldots,i_{\\delta\n}}^{\\ell}\\right] _{\\ell=1}^{\\delta+\\kappa}\\in%\n\\mathbb{R}\n^{\\delta+\\kappa},~i_{j}=0,1,\\ldots,2n_{j},~j=1,2,\\ldots,\\delta\n\\]\nand weights%\n\\[\n\\omega_{i_{1},i_{2},\\ldots,i_{\\delta}}=d_{i_{1},i_{2},\\ldots,i_{\\delta}%\n}^{\\delta+\\kappa+1},~i_{j}=0,1,\\ldots,2n_{j},~j=1,2,\\ldots,\\delta\n\\]\nneeded for the rational trigonometric representation\n(\\ref{trigonometric_rational_surface}) of\n(\\ref{rational_trigonometric_surface_to_be_described});\n\n\\item if not all weights are positive, try to increase the components of the\norder $\\left( n_{1},n_{2},\\ldots,n_{\\delta}\\right) $ of the trigonometric\nsurface used for the control point based exact description of the pre-image\n$\\mathbf{s}_{\\wp}$ in $%\n\\mathbb{R}\n^{\\delta+\\kappa+1}$ and repeat the previous projectional and weight\ndetermination step.\n\\end{itemize}\n\\end{algorithm}\n\n\\begin{example}\n[Application of Algorithm \\ref{alg:cpbed_rational_trigonometric_surfaces} --\nrational surfaces]Using surfaces of the type\n(\\ref{trigonometric_rational_surface}), Fig.\n\\ref{fig:rational_trigonometric_surface}\\ shows the control point based exact\ndescription of the rational trigonometric patch%\n\\begin{equation}\n\\mathbf{s}\\left( u_{1},u_{2}\\right) =\\frac{1}{s^{4}\\left( u_{1}%\n,u_{2}-\\gamma\\right) }\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2}-\\gamma\\right) \\\\\ns^{2}\\left( u_{1},u_{2}-\\gamma\\right) \\\\\ns^{3}\\left( u_{1},u_{2}-\\gamma\\right)\n\\end{array}\n\\right] ,~\\left( u_{1},u_{2}\\right) \\in\\left[ 0,\\frac{\\pi}{4}\\right]\n\\times\\left[ 0,\\frac{\\pi}{3}\\right] , \\label{rational_trigonometric_8}%\n\\end{equation}\nwhere%\n\\begin{align*}\ns^{1}\\left( u_{1},u_{2}\\right) = & \\cos\\left( u_{1}\\right) \\left(\n173\\cos\\left( u_{2}\\right) -10\\sin\\left( 3u_{2}\\right) -10\\sin\\left(\n5u_{2}\\right) +\\cos\\left( 7u_{2}\\right) +\\cos\\left( 9u_{2}\\right) \\right)\n\\\\\n& +\\sin\\left( 
u_{1}\\right) \\left( -173\\sin\\left( u_{2}\\right)\n+10\\cos\\left( 3u_{2}\\right) -10\\cos\\left( 5u_{2}\\right) +\\sin\\left(\n7u_{2}\\right) -\\sin\\left( 9u_{2}\\right) \\right) ,\\\\\n& \\\\\ns^{2}\\left( u_{1},u_{2}\\right) = & \\cos\\left( u_{1}\\right) \\left(\n173\\sin\\left( u_{2}\\right) -10\\cos\\left( 3u_{2}\\right) +10\\cos\\left(\n5u_{2}\\right) -\\sin\\left( 7u_{2}\\right) +\\sin\\left( 9u_{2}\\right) \\right)\n\\\\\n& +\\sin\\left( u_{1}\\right) \\left( 173\\cos\\left( u_{2}\\right)\n-10\\sin\\left( 3u_{2}\\right) -10\\sin\\left( 5u_{2}\\right) +\\cos\\left(\n7u_{2}\\right) +\\cos\\left( 9u_{2}\\right) \\right) ,\\\\\n& \\\\\ns^{3}\\left( u_{1},u_{2}\\right) = & 20\\cos\\left( u_{1}\\right) \\left(\n5\\cos\\left( u_{2}\\right) +\\sin\\left( 3u_{2}\\right) +\\sin\\left(\n5u_{2}\\right) \\right) \\\\\n& +20\\sin\\left( u_{1}\\right) \\left( 5\\sin\\left( u_{2}\\right)\n+\\cos\\left( 3u_{2}\\right) -\\cos\\left( 5u_{2}\\right) \\right) ,\\\\\n& \\\\\ns^{4}\\left( u_{1},u_{2}\\right) = & 20\\cos\\left( u_{1}\\right) \\left(\n5\\sin\\left( u_{2}\\right) +\\cos\\left( 3u_{2}\\right) -\\cos\\left(\n5u_{2}\\right) \\right) \\\\\n& -20\\sin\\left( u_{1}\\right) \\left( 5\\cos\\left( u_{2}\\right)\n+\\sin\\left( 3u_{2}\\right) +\\sin\\left( 5u_{2}\\right) \\right) +200\n\\end{align*}\nand $\\gamma=\\frac{\\pi}{3}$.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=2.753600in,\nnatwidth=6.298400in,\nheight=2.7812in,\nwidth=6.3261in\n]%\n{rational_trigonometric_surface_merged+.pdf}%\n\\caption{Control point based exact description of the patch\n(\\ref{rational_trigonometric_8}) by means of $3$-dimensional $2$-variate\nrational trigonometric patches of different orders. 
Control nets were obtained\nby following the steps of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_surfaces} ($\\delta=2$, $\\kappa=1$;\n$\\alpha_{1}=\\frac{\\pi}{4}$, $\\alpha_{2}=\\frac{\\pi}{3}$; $m_{1}=m_{2}=m_{3}=2$,\n$m_{4}=3$).}%\n\\label{fig:rational_trigonometric_surface}%\n\\end{center}\n\\end{figure}\n\n\n\\subsection{Description of (rational) hyperbolic curves and\nsurfaces\\label{sec:exact_description_hyperbolic}}\n\nAssume that the smooth parametric curve%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[ g^{\\ell}\\left( u\\right) \\right]\n_{\\ell=1}^{\\delta},~u\\in\\left[ 0,\\alpha\\right] ,~\\alpha>0\n\\label{traditional_hyperbolic_curve}%\n\\end{equation}\nhas coordinate functions of the form%\n\\[\ng^{\\ell}\\left( u\\right) =\\sum_{p\\in P_{\\ell}}c_{p}^{\\ell}\\cosh\\left(\npu+\\psi_{p}^{\\ell}\\right) +\\sum_{q\\in Q_{\\ell}}s_{q}^{\\ell}\\sinh\\left(\nqu+\\varphi_{q}^{\\ell}\\right) ,\n\\]\nwhere $P_{\\ell},Q_{\\ell}\\subset%\n\\mathbb{N}\n$ and $c_{p}^{\\ell},\\psi_{p}^{\\ell},s_{q}^{\\ell},\\varphi_{q}^{\\ell}\\in%\n\\mathbb{R}\n$ and consider the vector space (\\ref{hyperbolic_vector_space}) of order\n\\begin{equation}\nn\\geq n_{\\min}=\\max\\left\\{ z:z\\in\\cup_{\\ell=1}^{\\delta}\\left( P_{\\ell}\\cup\nQ_{\\ell}\\right) \\right\\} . 
\\label{minimal_hyperbolic_order}%\n\\end{equation}\n\n\nUsing Lemma \\ref{lem:hp} and performing calculations similar to the proof of\nTheorem \\ref{thm:cpbed_trigonometric_curves} one obtains the next statement.\n\n\\begin{theorem}\n[Control point based exact description of hyperbolic curves]%\n\\label{thm:cpbed_hyperbolic_curves}For any arbitrarily fixed order\n(\\ref{minimal_hyperbolic_order}) the curve (\\ref{traditional_hyperbolic_curve}%\n) given in traditional hyperbolic parametric form has a unique control point\nbased exact description, more precisely one has that\n\\[\n\\frac{\\text{\\emph{d}}^{r}}{\\text{\\emph{d}}u^{r}}g^{\\ell}\\left( u\\right)\n=\\sum_{i=0}^{2n}d_{i}^{\\ell}\\left( r\\right) H_{2n,i}^{\\alpha}\\left(\nu\\right) ,\n\\]\nwhere%\n\\[\nd_{i}^{\\ell}\\left( r\\right) =\\left\\{\n\\begin{array}\n[c]{ll}%\n{\\displaystyle\\sum\\limits_{p\\in P_{\\ell}}}\nc_{p}^{\\ell}p^{r}\\left( \\rho_{p,i}^{n}\\cosh\\left( \\psi_{p}^{\\ell}\\right)\n+\\sigma_{p,i}^{n}\\sinh\\left( \\psi_{p}^{\\ell}\\right) \\right) & \\\\\n+%\n{\\displaystyle\\sum\\limits_{q\\in Q_{\\ell}}}\ns_{q}^{\\ell}q^{r}\\left( \\sigma_{q,i}^{n}\\cosh\\left( \\varphi_{q}^{\\ell\n}\\right) +\\rho_{q,i}^{n}\\sinh\\left( \\varphi_{q}^{\\ell}\\right) \\right) , &\nr=2z,\\\\\n& \\\\%\n{\\displaystyle\\sum\\limits_{p\\in P_{\\ell}}}\nc_{p}^{\\ell}p^{r}\\left( \\sigma_{p,i}^{n}\\cosh\\left( \\psi_{p}^{\\ell}\\right)\n+\\rho_{p,i}^{n}\\sinh\\left( \\psi_{p}^{\\ell}\\right) \\right) & \\\\\n+%\n{\\displaystyle\\sum\\limits_{q\\in Q_{\\ell}}}\ns_{q}^{\\ell}q^{r}\\left( \\rho_{q,i}^{n}\\cosh\\left( \\varphi_{q}^{\\ell}\\right)\n+\\sigma_{q,i}^{n}\\sinh\\left( \\varphi_{q}^{\\ell}\\right) \\right) , & r=2z+1\n\\end{array}\n\\right.\n\\]\ndenotes the $\\ell$th coordinate of the $i$th control point $\\mathbf{d}_{i}$\nneeded for the hyperbolic curve description (\\ref{hyperbolic_curve}).\n\\end{theorem}\n\n\\begin{example}\n[Application of Theorem \\ref{thm:cpbed_hyperbolic_curves} -- 
curves]Fig.\n\\ref{fig:equilateral_hyperbola} shows the control point based description of\nthe arc%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{r}%\n\\sinh\\left( u-\\frac{3}{2}\\right) \\\\\n\\\\\n\\cosh\\left( u-\\frac{3}{2}\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,3\\right] \\label{equilateral_hyperbola}%\n\\end{equation}\nof an equilateral hyperbola.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=2.7121in,\nwidth=5.028in\n]%\n{hyperbola_n_1_3_5_alpha_3_pch_m1p5.pdf}%\n\\caption{Using Theorem \\ref{thm:cpbed_hyperbolic_curves}, the image shows the\ncontrol point based exact description of the hyperbolic arc\n(\\ref{equilateral_hyperbola}) by means of hyperbolic curves of the type\n(\\ref{hyperbolic_curve}) of varying order and fixed shape parameter $\\alpha\n=3$.}%\n\\label{fig:equilateral_hyperbola}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{remark}\n[Hyperbolic counterpart of Theorem \\ref{thm:cpbed_trigonometric_surfaces}%\n]Higher order (mixed) partial derivatives of a non-rational higher dimensional\nmultivariate hyperbolic surface can also be exactly described by means of\nTheorem \\ref{thm:cpbed_hyperbolic_curves}; one would simply obtain the\nhyperbolic counterpart of Theorem \\ref{thm:cpbed_trigonometric_surfaces}.\nMoreover, one can combine Theorems \\ref{thm:cpbed_trigonometric_curves} and\n\\ref{thm:cpbed_hyperbolic_curves} in order to exactly describe patches of\nhybrid multivariate surfaces%\n\\[\n\\mathbf{s}\\left( \\mathbf{u}\\right) =\\left[\n\\begin{array}\n[c]{cccc}%\ns^{1}\\left( \\mathbf{u}\\right) & s^{2}\\left( \\mathbf{u}\\right) & \\cdots &\ns^{\\delta+\\kappa}\\left( \\mathbf{u}\\right)\n\\end{array}\n\\right] \\in%\n\\mathbb{R}\n^{\\delta+\\kappa},~\\mathbf{u}=\\left[ u_{j}\\right] _{j=1}^{\\delta}\\in\n\\times_{j=1}^{\\delta}\\left[ 
0,\\alpha_{j}\\right] ,~\\alpha_{j}\\in\\left(\n0,\\beta_{j}\\right) ,~\\kappa\\geq0\n\\]\nthat are either trigonometric or hyperbolic in case of a fixed direction\n$u_{j}$, where $\\beta_{j}$ is either $\\pi$ or $+\\infty$ depending on the\ntrigonometric or hyperbolic type of the coordinate function $s^{j}$,\nrespectively. (Along a selected direction each coordinate function must be of\nthe same type).\n\\end{remark}\n\n\\begin{example}\n[Combination of Theorems \\ref{thm:cpbed_trigonometric_curves} and\n\\ref{thm:cpbed_hyperbolic_curves} \\ -- hybrid surfaces]Fig. \\ref{fig:catenoid}\nillustrates the control point based exact description of the patch%\n\\begin{equation}\n\\mathbf{s}\\left( u_{1},u_{2}\\right) =\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2}\\right) \\\\\n\\\\\ns^{2}\\left( u_{1},u_{2}\\right) \\\\\n\\\\\ns^{3}\\left( u_{1},u_{2}\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{c}%\n\\left( 1+\\cosh\\left( u_{1}-\\frac{3}{2}\\right) \\right) \\sin\\left(\nu_{2}\\right) \\\\\n\\\\\n\\left( 1+\\cosh\\left( u_{1}-\\frac{3}{2}\\right) \\right) \\cos\\left(\nu_{2}\\right) \\\\\n\\\\\n\\sinh\\left( u_{1}-\\frac{3}{2}\\right)\n\\end{array}\n\\right] ,~\\left( u_{1},u_{2}\\right) \\in\\left[ 0,3\\right] \\times\\left[\n0,\\frac{2\\pi}{3}\\right] \\label{catenoid}%\n\\end{equation}\nthat lies on a surface of revolution (also called hyperboloid) obtained\nby the rotation of the equilateral hyperbolic arc%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\left[\n\\begin{array}\n[c]{r}%\n\\cosh\\left( u-\\frac{3}{2}\\right) \\\\\n\\\\\n\\sinh\\left( u-\\frac{3}{2}\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,3\\right] \\label{catenary}%\n\\end{equation}\nalong the axis 
$z$.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=3.164400in,\nnatwidth=6.298400in,\nheight=3.192in,\nwidth=6.3261in\n]%\n{catenoid_merged+.pdf}%\n\\caption{Control point based exact description of the hyperboloidal patch\n(\\ref{catenoid}) with hybrid surfaces of different orders. In order to\nformulate the hybrid variant of Theorem \\ref{thm:cpbed_trigonometric_surfaces}%\n, in directions $u_{1}$ and $u_{2}$ the results of Theorems\n\\ref{thm:cpbed_hyperbolic_curves} ($\\alpha_{1}=3$) and\n\\ref{thm:cpbed_trigonometric_curves} ($\\alpha_{2}=\\frac{2\\pi}{3}$) were\napplied ($m_1=m_2=m_3=1$), respectively.}%\n\\label{fig:catenoid}%\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{remark}\n[Hyperbolic counterpart of Algorithms\n\\ref{alg:cpbed_rational_trigonometric_curves} and\n\\ref{alg:cpbed_rational_trigonometric_surfaces}]Any smooth rational hyperbolic\ncurve\/surface that is given in traditional parametric form (with a\nnon-vanishing function in its denominator) can also be exactly described by\nmeans of rational hyperbolic curves\/surfaces of the type\n(\\ref{rational_hyperbolic_curve})\/(\\ref{rational_hyperbolic_surface}); one\nsimply has to apply the hyperbolic counterpart of Algorithms\n\\ref{alg:cpbed_rational_trigonometric_curves} or\n\\ref{alg:cpbed_rational_trigonometric_surfaces}. 
Moreover, combining the\npresented trigonometric algorithms and their hyperbolic counterparts, higher\ndimensional hybrid multivariate rational surfaces can also be exactly described\nby means of multivariate hybrid tensor product surfaces.\n\\end{remark}\n\n\\begin{example}\n[Applying the hyperbolic counterpart of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_curves}]Cases (a) and (b) of Fig.\n\\ref{fig:ed_rational_hpc}\\ show the control point based exact description of\nthe rational hyperbolic arcs%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\frac{1}{g^{3}\\left( u\\right) }\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\frac{1}{4+3\\cosh\\left( u-1\\right) +\\cosh\\left( 3u-3\\right)\n}\\left[\n\\begin{array}\n[c]{r}%\n4\\cosh\\left( 2u-2\\right) \\\\\n8\\sinh\\left( u-1\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,3.1\\right] \\label{rational_hyperbolic_curve_1}%\n\\end{equation}\nand%\n\\begin{equation}\n\\mathbf{g}\\left( u\\right) =\\frac{1}{g^{3}\\left( u\\right) }\\left[\n\\begin{array}\n[c]{c}%\ng^{1}\\left( u\\right) \\\\\n\\\\\ng^{2}\\left( u\\right)\n\\end{array}\n\\right] =\\frac{1}{11+4\\cosh\\left( 2u-\\frac{3}{2}\\right) +\\cosh\\left(\n4u-3\\right) }\\left[\n\\begin{array}\n[c]{c}%\n16\\cosh\\left( u-\\frac{3}{4}\\right) \\\\\n\\\\\n4\\sinh\\left( 2u-\\frac{3}{2}\\right)\n\\end{array}\n\\right] ,~u\\in\\left[ 0,2.5\\right] \\label{rational_hyperbolic_curve_2}%\n\\end{equation}\nrespectively. 
(Note that in both cases $\\lim\\limits_{u\\rightarrow\\pm\\infty\n}\\mathbf{g}\\left( u\\right) =\\mathbf{0}$.)\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nheight=3.2162in,\nwidth=6.1004in\n]%\n{ed_rational_hpc.pdf}%\n\\caption{Using rational hyperbolic curves of the type\n(\\ref{rational_hyperbolic_curve}), cases (\\emph{a}) and (\\emph{b}) illustrate\nthe control point based exact description of arcs\n(\\ref{rational_hyperbolic_curve_1}) and (\\ref{rational_hyperbolic_curve_2}),\nrespectively. Control polygons were determined by the hyperbolic counterpart\nof Algorithm \\ref{alg:cpbed_rational_trigonometric_curves}.}%\n\\label{fig:ed_rational_hpc}%\n\\end{center}\n\\end{figure}\n\n\n\\begin{example}\n[Applying the hyperbolic counterpart of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_surfaces}]Using surfaces of the type\n(\\ref{rational_hyperbolic_surface}), Fig.\n\\ref{fig:rational_hyperbolic_butterfly} illustrates several control point\nconfigurations for the exact description of the rational hyperbolic surface\npatch%\n\\begin{equation}\n\\mathbf{s}\\left( u_{1},u_{2}\\right) =\\frac{1}{s^{4}\\left( u_{1}%\n,u_{2}\\right) }\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2}\\right) \\\\\ns^{2}\\left( u_{1},u_{2}\\right) \\\\\ns^{3}\\left( u_{1},u_{2}\\right)\n\\end{array}\n\\right] ,~\\left( u_{1},u_{2}\\right) \\in\\left[ 0,6\\right] \\times\\left[\n0,10\\right] ,\\label{rational_hyperbolic_butterfly}%\n\\end{equation}\nwhere%\n\\begin{align*}\ns^{1}\\left( u_{1},u_{2}\\right) & =6\\left( \\cosh\\left( 2u_{1}-2\\right)\n+\\sinh\\left( u_{2}-5\\right) \\right) ,\\\\\ns^{2}\\left( u_{1},u_{2}\\right) & =\\frac{1}{10}\\sinh\\left( u_{1}-1\\right)\n\\cosh\\left( 2u_{2}-10\\right) ,\\\\\ns^{3}\\left( u_{1},u_{2}\\right) & =2\\left( \\sinh\\left( 2u_{1}-2\\right)\n+\\cosh\\left( 2u_{1}-2\\right) \\right) \\cosh\\left( u_{2}-5\\right) ,\\\\\ns^{4}\\left( u_{1},u_{2}\\right) & 
=275+100\\cosh(2u_{1}-2)+25\\cosh\n(4u_{1}-4).\n\\end{align*}\n\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=5.475100in,\nnatwidth=6.298400in,\nheight=5.5019in,\nwidth=6.3261in\n]%\n{rational_hyperbolic_surface_merged+.pdf}%\n\\caption{Control point based exact description of the patch\n(\\ref{rational_hyperbolic_butterfly}) by means of rational hyperbolic surfaces\nof the type (\\ref{rational_hyperbolic_surface}). Control nets were obtained by\nthe hyperbolic counterpart of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_surfaces} ($\\delta=2$, $\\kappa=1$;\n$\\alpha_{1}=6$, $\\alpha_{2}=10$; $m_1=2$, $m_2=m_3=m_4=1$).}%\n\\label{fig:rational_hyperbolic_butterfly}%\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{example}\n[Hybrid counterpart of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_surfaces} -- hybrid rational\nvolumes]Using multivariate rational tensor product surfaces specified by\nfunctions that are exclusively either trigonometric or hyperbolic in each of\ntheir variables, Fig. 
\\ref{fig:rational_hybrid_volume} shows the control point\nbased exact description of the $3$-dimensional $3$-variate rational surface\nelement (volume)%\n\\begin{equation}\n\\mathbf{s}\\left( \\mathbf{u}\\right) =\\frac{1}{s^{4}\\left( u_{1},u_{2}%\n,u_{3}\\right) }\\left[\n\\begin{array}\n[c]{c}%\ns^{1}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{2}\\left( u_{1},u_{2},u_{3}\\right) \\\\\ns^{3}\\left( u_{1},u_{2},u_{3}\\right)\n\\end{array}\n\\right] ,~\\left( u_{1},u_{2},u_{3}\\right) \\in\\left[ 0,2\\right]\n\\times\\left[ 0,\\frac{3\\pi}{4}\\right] \\times\\left[ 0,\\frac{\\pi}{2}\\right]\n\\label{rational_hybrid_volume}%\n\\end{equation}\nwhere functions%\n\\begin{align*}\ns^{1}\\left( u_{1},u_{2},u_{3}\\right) = & \\frac{5}{4}\\cosh\\left(\nu_{1}-1\\right) \\cos\\left( u_{2}\\right) \\left( \\frac{3}{2}+\\frac{3}{4}%\n\\sin\\left( u_{3}\\right) -\\frac{1}{2}\\cos\\left( u_{3}\\right) -\\frac{1}%\n{4}\\sin\\left( 3u_{3}\\right) \\right) ,\\\\\ns^{2}\\left( u_{1},u_{2},u_{3}\\right) = & \\cosh\\left( u_{1}-1\\right)\n\\sin\\left( u_{2}\\right) \\left( \\frac{5}{2}+\\frac{3}{4}\\sin\\left(\nu_{3}\\right) -\\frac{1}{4}\\sin\\left( 3u_{3}\\right) \\right) ,\\\\\ns^{3}\\left( u_{1},u_{2},u_{3}\\right) = & -\\frac{5}{4}\\sinh\\left(\nu_{1}-1\\right) \\left( \\frac{7}{4}+\\frac{1}{4}\\cos\\left( 2u_{3}\\right)\n\\right) ,\\\\\ns^{4}\\left( u_{1},u_{2},u_{3}\\right) = & 1-\\frac{3}{32}\\sqrt{2-\\sqrt{2}%\n}\\sin\\left( u_{3}\\right) -\\frac{3}{32}\\sqrt{2+\\sqrt{2}}\\cos\\left(\nu_{3}\\right)\n-\\frac{1}{32}\\sqrt{2+\\sqrt{2}}\\sin\\left( 3u_{3}\\right) -\\frac{1}{32}%\n\\sqrt{2-\\sqrt{2}}\\cos\\left( 3u_{3}\\right)\n\\end{align*}\nare hyperbolic or trigonometric in the first and the last two variables, respectively.\n\\end{example}\n\n\n\\begin{figure}\n[!h]\n\\begin{center}\n\\includegraphics[\nnatheight=2.910100in,\nnatwidth=6.298400in,\nheight=2.9378in,\nwidth=6.3261in\n]%\n{rational_hybrid_volume_merged+.pdf}%\n\\caption{Control point based exact description of the 
hybrid $3$-dimensional\nrational volume element (\\ref{rational_hybrid_volume}). The control grid was\ncalculated by using the hybrid counterpart of Algorithm\n\\ref{alg:cpbed_rational_trigonometric_surfaces} ($\\delta=3$, $\\kappa=0$;\n$\\alpha_{1}=2$, $\\alpha_{2}=\\frac{3\\pi}{4}$, $\\alpha_{3}=\\frac{\\pi}{2}$; $m_1=m_2=m_3=m_4=1$).}%\n\\label{fig:rational_hybrid_volume}%\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Final remarks\\label{sec:final_remarks}}\n\nThe subdivision algorithms for trigonometric and hyperbolic curves detailed in\nSection \\ref{sec:special_parametrizations} can also be easily extended to\nhigher dimensional multivariate (rational) trigonometric and hyperbolic\nsurfaces, respectively. Therefore, similarly to standard rational B\\'{e}zier\ncurves and surfaces that are present in the core of major CAD\/CAM systems, all\nimportant subdivision-based curve and surface design algorithms (such as\nevaluation or intersection detection) can be both mathematically and\nprogrammatically treated in a unified way by means of normalized B-bases\n(\\ref{Sanchez_basis}) and (\\ref{Wang_basis}). Considering the large variety of\n(rational) curves and multivariate surfaces that can be exactly described by\nmeans of control points and the fact that classical (rational) B\\'{e}zier\ncurves and multivariate surfaces are special limiting cases of the\ncorresponding curve and surface modeling tools defined in Sections\n\\ref{sec:special_parametrizations} and \\ref{sec:multivariate_surfaces}, it is\nworthwhile to incorporate all proposed techniques and algorithms presented in\nSection \\ref{sec:exact_description} into present-day CAD systems.\n\n\\begin{acknowledgements}\n\\'{A}goston R\\'{o}th was supported by the European Union and the State of\nHungary, co-financed by the European Social Fund in the framework of\nT\\'{A}MOP-4.2.4.A\/2-11\/1-2012-0001 'National Excellence Program'. 
All assets,\ninfrastructure and personnel support of the research team led by the author\nwas contributed by the Romanian national grant CNCS-UEFISCDI\/PN-II-RU-TE-2011-3-0047.\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the Cold Dark Matter (CDM) cosmogony, galactic stellar haloes are\nbuilt up in large part from the debris of tidally disrupted satellites\n(e.g. Searle \\& Zinn 1978; White \\& Springel 2000; Bullock \\& Johnston 2005; Cooper {et~al.} 2010). Discovering\nand quantifying halo structures around the Milky Way may provide a\nuseful diagnostic of the Galaxy's merger history\n(e.g. Helmi \\& de~Zeeuw 2000; Johnston {et~al.} 2008; G{\\'{o}}mez \\& Helmi 2010). Upcoming Milky Way\nsurveys (for example with PanSTARRS1, LAMOST, HERMES and the LSST)\nwill provide large datasets in which to search for structure. The\n\\textit{Gaia} mission will determine six-dimensional phase-space\ncoordinates for all stars brighter than $V\\sim17$, from which it\nshould be possible to untangle even well-mixed streams in the nearby\nhalo (G{\\'{o}}mez {et~al.} 2010).\n\nTesting the CDM model by comparing these observations to numerical\nsimulations of stellar halo formation requires a straightforward\ndefinition for the `abundance of substructure', one that can be\nquantified with a method equally applicable to simulations and\nobservations. Algorithms already exist for identifying substructure in\nhuge multidimensional datasets (e.g. Sharma \\& Johnston 2009), such as the\ndata expected from \\textit{Gaia} supplemented by chemical abundance\nmeasurements (Freeman \\& Bland-Hawthorn 2002). These algorithms can\nalso be applied to simulations, although this is not\nstraightforward. One problem is that current (cosmological)\nhydrodynamic simulations still fall short of the star-by-star\n`resolution' of the \\textit{Gaia} data, particularly in the Solar\nneighbourhood (e.g. 
Brown, Vel{\\'{a}}zquez \\& Aguilar 2005).\n\nIn the outer halo, longer mixing times allow ancient structures to\nremain coherent in configuration space for many gigayears. However, 6D\n\\textit{Gaia} data will be restricted to relatively bright stars. In\nthe near future, studies of the outer halo (beyond $\\sim20$~kpc) will\ncontinue to rely on a more modest number of `tracers' (giant and\nhorizontal branch stars). For these stars, typically only angular\npositions and (more uncertain) estimates of distance and radial\nvelocity are available. In this regime, current simulations contain as\nmany particles as there are (rare) tracer stars in observational\nsamples, and can be compared directly with data that are already\navailable. Here we focus on quantifying the degree of structure in\nrare tracer stars with a generic method, which we apply to\nobservational data and to simulations of Milky Way-like stellar\nhaloes.\n\nMost studies of spatial and kinematic structure in the Milky Way halo\nhave given priority to the discovery of individual overdensities\n(exceptions include Gould 2003, Bell {et~al.} 2008, Xue, Rix \\& Zhao 2009,\nXue {et~al.} 2011 and Helmi {et~al.} 2011a). Relatively few\nhave investigated global statistical quantities for the entire stellar\nhalo, although several authors have suggested an approach based on\nclustering statistics. Re~Fiorentin {et~al.} (2005) analysed the\nvelocity-space clustering of a small number of halo stars in the Solar\nneighbourhood, using a correlation function statistic. Following early\nwork by Doinidis \\& Beers (1989), Brown {et~al.} (2004) examined the angular\ntwo-point correlation function of photometrically selected blue\nhorizontal branch (BHB) stars in the Two Micron All Sky Survey,\nprobing from $\\sim2-9$~kpc. 
They detected no significant correlations\nat latitudes $|b|\\ga50\\degr$, but did detect correlations on small\nscales ($1\\degr$, $\\sim100$~pc) at lower latitudes, which they\nattributed to structure in the thick disc. Lemon {et~al.} (2004) performed a\nsimilar analysis for nearby F-type stars in the Millennium Galaxy\nCatalogue and found no significant clustering.\n\nStarkenburg {et~al.} (2009) used a correlation function in \\textit{four}\ndimensions to identify substructures in the Spaghetti pencil-beam\nsurvey of the distant halo (Morrison {et~al.} 2000; Dohm-Palmer {et~al.} 2000).\nWith this method they obtained a significant detection of clustering\nand set a lower limit on the number of halo stars in all\nsubstructures. Similarly, Schlaufman {et~al.} (2009) constrained the mass\nfraction of the halo in detectable substructure by estimating the\ncompleteness of their overdensity detection\nalgorithm. Starkenburg {et~al.} and Schlaufman {et~al.}\nconcluded that $>10\\%$ (by number of stars) and $\\sim30\\%$ (by volume)\nof the Milky Way halo belongs to groups meeting their respective\ndefinitions of phase space substructure. These methods were tested on\n`mock catalogues' of tracer stars based on simplified models of the\nstellar halo.\n\nThe work of Xue {et~al.} (2009, 2011) is particularly\n relevant. Xue {et~al.} (2009) considered the pairwise radial velocity\n separation of a sample of 2558 halo BHB stars as a function of their\n separation in space, but found no evidence of clustering. From\n comparisons to the simulations of Bullock \\& Johnston (2005),\n Xue {et~al.} concluded that a pairwise velocity statistic was\n not capable of detecting structure against a more smoothly\n distributed background in phase space (made up from stars in\n phase-mixed streams). However, their observed signal was not\n compared to an expected signal from random realizations. 
More\n recently, Xue {et~al.} (2011) studied an enlarged catalogue of\n BHBs comprising more than 4000 stars. They quantified clustering\n in this larger sample using a four-dimensional metric\n (similar to that of Starkenburg {et~al.} 2009), finding a significant\n excess of clustering on small scales by comparison to smooth\n models. The conclusions of this more recent study by\n Xue {et~al.} agree with our own, as we discuss further\n in \\mnsec{sec:conclusion}.\n\nThe statistic we develop in this paper also builds on the\n approach of Starkenburg {et~al.} (2009). We define a two-point correlation\n function based on a metric combining pairwise separations in four\n readily obtained phase space observables for halo stars (angular\n position, radial distance and radial velocity). We apply this\n statistic to the catalogue of BHB stars published by\n Xue {et~al.} (2008)\\footnote{The larger sample used by Xue {et~al.} (2011)\n was not publicly available at the time this paper went to press.}\n and demonstrate that a significant clustering signal can be\n recovered from these data.\n\nA clustering metric of the kind we propose can be tuned to probe a\nspecific scale in phase space by adjusting the weight given to each of\nits components. In the stellar halo, however, many `components' may be\nsuperimposed with a complex assortment of scales and morphologies in\nphase space (Johnston {et~al.} 2008; Cooper {et~al.} 2010). For this reason, it is not\nclear, a priori, what sort of signal to expect, or which scales are\nmost relevant. We find that we cannot identify an `optimal'\n metric. Instead, we make a fiducial choice based on the\nself-consistent accreted halo models of Cooper {et~al.} (2010). These\nincorporate an ab initio \\lcdm{} galaxy formation model using\nhigh-resolution cosmological N-body simulations from the Aquarius\nproject (Springel {et~al.} 2008). We apply our fiducial metric to the\ndata and to these models. 
We find that even though both the metric and\nour choice of scaling are simple, this approach has the power to\ndiscriminate quantitatively between qualitatively different stellar\nhaloes.\n\nWe describe the basis of our method in \\mnsec{sec:method} and the SDSS\nDR6 BHB catalogue of Xue {et~al.} (2008) in \\mnsec{sec:obsdata}. In\n\\mnsec{sec:sims} we describe our simulations\n(\\mnsec{sec:nbodygalform}) and our procedure for constructing mock\ncatalogues (\\mnsec{sec:tracer_stars}). \\mnsec{sec:fiducial} describes\nour choice of a fiducial metric. In \\mnsec{sec:segue} we apply our\nmethod to quantify clustering in the SDSS data and compare this with\nour mock catalogues. Our conclusions are given in\n\\mnsec{sec:conclusion}. \n\n\n\\section{A metric for phase-space distance}\n\\label{sec:method}\n\nThe most readily obtained phase-space observables for halo stars are\ntheir Galactic angular coordinates, $l$ and $b$, heliocentric radial\ndistance, $r_{\\mathrm{hel}}$, and heliocentric line-of-sight velocity,\n$v_{\\mathrm{hel}}$. Using its angular position and distance estimate,\neach star can be assigned a three-dimensional position vector in\ngalactocentric Cartesian coordinates, $\\bmath{r}\\,(X,Y,Z)$, and a\nradial velocity corrected for the Solar and Local Standard of Rest\nmotions, $v$. We begin by defining a scaling relation (metric),\n$\\Delta$, which combines these observables into a simple `phase-space\nseparation' between two stars:\n\n\\begin{equation}\n{\\Delta}_{ij}^2=\n{|\\bmath{r_i}-\\bmath{r_j}|}^2+w_{v}^2({v_{i}-v_{j}})^2.\n\\label{eqn:delta_metric}\n\\end{equation}\n\nHere, $|\\bmath{r_i}-\\bmath{r_j}|$ is the separation of a pair of stars\nin coordinate space (in kiloparsecs), and $v_{i}-v_{j}$ is the\ndifference in their radial velocities (in kilometres per second). The\nscaling factor $w_{v}$ has units of $\\mathrm{kpc\\,km^{{-}1}\\,s}$, such\nthat $\\Delta$ has units of kpc. 
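In practice, the separation defined by Equation (\ref{eqn:delta_metric}) is cheap to evaluate for an entire sample at once. The sketch below is ours (it does not appear in the original analysis); it assumes NumPy, Galactocentric Cartesian positions in kpc and corrected radial velocities in km/s, and the function name is illustrative:

```python
import numpy as np

def pairwise_delta(r, v, w_v):
    """All pairwise phase-space separations Delta_ij (kpc).

    r   : (N, 3) Galactocentric Cartesian positions [kpc]
    v   : (N,)   corrected radial velocities [km/s]
    w_v : velocity scaling factor [kpc km^-1 s]
    """
    # |r_i - r_j|^2 for every pair, via broadcasting (N x N matrix)
    dr2 = np.sum((r[:, None, :] - r[None, :, :]) ** 2, axis=-1)
    # w_v^2 (v_i - v_j)^2 for every pair
    dv2 = (w_v * (v[:, None] - v[None, :])) ** 2
    return np.sqrt(dr2 + dv2)
```

For example, two stars 5 kpc apart with a 10 km/s velocity difference and $w_{v}=0.5\,\mathrm{kpc\,km^{-1}\,s}$ are separated by $\Delta=\sqrt{25+25}\approx7.1$ kpc.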
The choice of $w_{v}$ is arbitrary\nunless a particular `phase space scale' of interest can be\nidentified. This is not straightforward; we discuss some possible\nchoices below.\n \nThe aim of this paper is to explore $\\xi({\\Delta})$, the cumulative\ntwo-point correlation function of halo stars in the metric defined by\nEquation \\ref{eqn:delta_metric}. Throughout, we use the\nestimator \\begin{equation} 1+\\xi({\\Delta}) =\n \\frac{DD(<{\\Delta})}{\\left \\langle RR(<{\\Delta}) \\right\n \\rangle}. \\label{eqn:estimator}\n\\end{equation} Here $DD(<\\Delta)$ counts the number of pairs in the\nsample separated by less than $\\Delta$, and $\\left \\langle RR(<\\Delta)\n\\right \\rangle$ is the equivalent count for pairs of random points\nwithin the survey volume, averaged over several realizations. To\n generate these realizations we `shuffle' the data by randomly\n reassigning $(r_{\\mathrm{hel}},v_{\\mathrm{hel}})$ pairs to different\n $(l,b)$ coordinates drawn from the original catalogue\\footnote{The\n same randomisation procedure was used by\n Starkenburg {et~al.} (2009). Xue {et~al.} (2011) use a similar\n procedure, but for each galaxy they re-assign $r_{\\mathrm{hel}}$\n and $v_{\\mathrm{hel}}$ \\textit{separately} to different $(l,b)$\n coordinates.}.\n\n\nSimilar methods for quantifying the clustering of stars in a\n four-dimensional space are described by Starkenburg {et~al.} (2009, applied to a sample\n of giant stars from the Spaghetti survey) and\n Xue {et~al.} (2011, applied to a sample of BHB stars from\n SDSS). Our $\\Delta$ metric is very similar to the\n $\\delta_{4\\mathrm{d}}$ metric of Starkenburg et al. in the limit of\n small angular separations\\footnote{Starkenburg et al. developed\n their metric with the aim of identifying `true' pairs of stars\n with high confidence. 
In their definition (equation 1 of\n Starkenburg {et~al.} 2009), the $\\delta_{4\\mathrm{d}}$ separation\n between two stars depends not only on their actual phase-space\n coordinates, but also on how accurately those coordinates are\n measured. For example, moving two stars 10 kpc further apart and\n simultaneously decreasing the error on their distance measurements\n by a factor of 10 (relative to the average of the sample) results\n in the same $\\delta_{4\\mathrm{d}}$. Thus $DD\/RR$ for separations\n in $\\delta_{4\\mathrm{d}}$ is not a straightforward measurement of\n physical clustering.}. We have verified that our metric gives\n similar results when we repeat the analysis of Starkenburg et\n al. using the Spaghetti sample of 101 halo RGB stars. For the rest\n of this paper we will focus on clustering in the SDSS DR6 BHB\n catalogue of Xue {et~al.} (2008).\n\n\n\\section{Observational data}\n\\label{sec:obsdata}\n\nXue {et~al.} (2008) have published a catalogue of 2558 stars from SDSS DR6,\nwhich they identify as high-confidence halo BHBs (contamination\n$<10\\%$) using a combination of colour cuts and Balmer line\ndiagnostics. This sample ranges in distance from $4-60\\,\\mathrm{kpc}$;\na cut on distance error excluded more distant stars observed in\nSDSS. Xue {et~al.} (2008) claim distance errors of $\\sim10\\%$ and radial\nvelocity errors of $5-20\\,\\mathrm{km\\,s^{-1}}$. This catalogue is not\na complete sample of halo BHBs. In particular, Xue {et~al.} note\nthat the targeting of SDSS spectroscopy is biased against the\nfollow-up of more distant stars.\n\nTo isolate stars representing the kinematics of the halo (in order to\nstudy the Galactic circular velocity profile) Xue {et~al.} (2008)\nfurther restricted their sample to stars at heights\n$|z|>4\\,\\mathrm{kpc}$ above the plane (avoiding the thick disc). We\nalso apply this cut (which excludes thick disc stars and low-latitude\nhalo stars alike), leaving 2401 stars in the sample. 
Finally we\nexclude 9 stars in the sample identified by Xue {et~al.} (2009) as probable\nglobular cluster members. Thus, the sample against which we compare our\nmodels contains 2392 stars from the 2558 stars in the Xue {et~al.} (2008)\ncatalogue. The effects of these refinements to the sample on the\nrecovered \\deltacf{} signal are discussed in\n\\mnsec{sec:obsdata_clustering}.\n\n\\section{Stellar Halo Simulations}\n\\label{sec:sims}\n\n\\subsection{N-body and galaxy formation model}\n\\label{sec:nbodygalform}\n\nThe mock observations that we use to test the \\deltacf{} correlation\nfunction are derived from simulations of the accreted stellar halo\npresented in Cooper {et~al.} (2010). These simulations approximate the\ndynamics of stars in dwarf satellites of Milky Way-like galaxies by\n`tagging' appropriate particles (i.e. those strongly bound within\nsubhaloes) in the Aquarius suite of high-resolution N-body simulations\n(Springel {et~al.} 2008). Each `tag' associates a dark matter (DM)\nparticle with a particular stellar population of a given mass, age\nand metallicity. This `tagging' technique is reasonable in the regime\nof high mass-to-light ratios, which is supported in this case by\nobservations of stellar kinematics in dwarf galaxies\n(e.g. Walker {et~al.} 2009).\n\nThe tagging method has a single free parameter, the fraction of the\nmost-bound particles chosen in each DM halo for each assignment of\nnewly-formed stars (for further details see Cooper {et~al.} 2010). The\nvalue of this parameter (1 per cent) was fixed by requiring the\npopulation of \\textit{surviving} satellites (at the present day) to\nhave a distribution of half-light radius as a function of luminosity\nconsistent with Milky Way and M31 observations\\footnote{The luminosity\n function of surviving satellites in these models also agrees with MW\n and M31 data. This agreement is mostly due to the underlying galaxy\n formation model.}. 
The Cooper {et~al.} models differ from the
earlier models of Bullock \& Johnston (2005) in that they treat the full
cosmological evolution of all satellites self-consistently in a single
N-body simulation, and use a comprehensive semi-analytic model of
galaxy formation (Bower {et~al.} 2006) constrained by data on large scales
and compatible with the observed MW satellite luminosity
function. Both the Cooper {et~al.} and the
Bullock \& Johnston simulations produce highly structured stellar
haloes built from the debris of disrupted dwarf galaxies. As we
discuss further below, other halo components formed {\it in situ} may
be present in real galaxies
(e.g. Abadi, Navarro \& Steinmetz 2006; Zolotov {et~al.} 2009; Font {et~al.} 2011) but these are expected
to be more smoothly distributed than the accreted component
(e.g. Helmi {et~al.} 2011a).

\begin{figure*}
\includegraphics[height=60mm,clip=True,trim=1cm 0cm 1cm 0 ]{fig1a.pdf}
\includegraphics[height=60mm,clip=True]{fig1b.pdf}
\includegraphics[height=60mm,clip=True,trim=1cm 0cm 1cm 0 ]{fig1c.pdf}
\includegraphics[height=60mm,clip=True, trim=0 0 0 0 ]{fig1d.pdf}

\caption{\textit{Left panels:} An example of the sky distribution of
  halo BHB stars in Aq-A from the perspective of a `Solar' observer,
  shown as Mollweide projections in galactic coordinates centred on
  $(l,b)=(0,0)$. Colours indicate the mean heliocentric distance of
  stars in each pixel, in kiloparsecs. Pixels outside the SDSS DR6
  footprint are shown with lower contrast. The upper panel includes
  all BHB stars from $6$--$60$~kpc, the lower panel includes only
  stars in the range $30$--$60$~kpc. Our fiducial choice of the
  Galactic $Z$ axis with respect to the dark halo has been applied,
  but distances in these panels are not convolved with observational
  errors. The normalization of BHB stars per unit stellar mass in the
  halo has been increased in these panels to emphasise the
  distribution of structure.
\\textit{Right panels:} Blue histograms\n show the distribution of heliocentric distances (above) and\n heliocentric velocities (below) for the fiducial observer\n corresponding to the left panels, after convolution with\n observational errors (see text). Orange histograms are the\n distributions for stars in the Xue {et~al.} (2008) catalogue. To compare\n the shape of the two distributions, the normalization of the mock\n distributions in these panels has been matched to that of the\n observations for $r<20$~kpc, where the observations are most likely\n to be complete.}\n\\label{fig:fiducial}\n \\end{figure*}\n\nAs in Springel {et~al.} (2008) and Cooper {et~al.} (2010), we refer to our\nsix simulations as haloes \nAq-A, Aq-B, Aq-C, Aq-D, Aq-E and Aq-F. From these simulations, we\nconstruct catalogues of tracer stars (BHB stars, for example) by\nconverting the stellar mass assigned to each dark matter particle into\nan appropriate number of stars.\n\nEach DM particle can give rise to many tracer stars if it is tagged\nwith sufficient stellar mass. We therefore interpolate the positions\nand velocities of these tracer stars between nearby tagged DM\nparticles. To accomplish this, the 32 nearest phase space neighbours\nof each tagged particle are identified using a procedure which we\ndescribe below. The mean dispersion in each of the six phase-space\ncoordinates is then calculated for each particle as an average over\nits neighbours. These dispersions define a 6D ellipsoidal Gaussian\nkernel centred on the particle, from which the positions and\nvelocities of its tracer stars are drawn randomly. Each progenitor\ngalaxy (a set of tagged DM particles accreted by the main `Milky Way'\nhalo as members of a single subhalo) is treated individually in this\nsmoothing operation, i.e. particles are smoothed using only neighbours\nfrom the same progenitor (so there is no `cross talk' between streams\nfrom different progenitors). 
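This interpolation scheme can be sketched as follows. The sketch is illustrative only: the function name, the brute-force neighbour search and the input layout are our own choices, and the input arrays are assumed to contain the tagged particles of a single progenitor.

```python
import numpy as np

def resample_tracers(pos, vel, n_stars, alpha, n_neigh=32, rng=None):
    """Draw tracer stars from 6D Gaussian kernels centred on tagged particles.

    pos, vel : (N, 3) positions (kpc) and velocities (km/s) of the tagged
               particles of ONE progenitor (no cross-talk between streams).
    n_stars  : (N,) number of tracer stars to draw around each particle.
    alpha    : scaling mapping velocities into position units, used only to
               define phase-space distance for the neighbour search.
    """
    rng = np.random.default_rng(rng)
    phase = np.hstack([pos, alpha * vel])          # 6D coordinates
    stars_pos, stars_vel = [], []
    for i in range(len(pos)):
        d2 = ((phase - phase[i]) ** 2).sum(axis=1)
        neigh = np.argsort(d2)[1:n_neigh + 1]      # nearest neighbours, excluding self
        sig_x = pos[neigh].std(axis=0)             # dispersion in each coordinate
        sig_v = vel[neigh].std(axis=0)
        # 6D ellipsoidal (axis-aligned) Gaussian kernel centred on the particle
        stars_pos.append(rng.normal(pos[i], sig_x, size=(n_stars[i], 3)))
        stars_vel.append(rng.normal(vel[i], sig_v, size=(n_stars[i], 3)))
    return np.vstack(stars_pos), np.vstack(stars_vel)
```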
This procedure can be thought of as a\ncrude approximation to running our original simulation again including\neach tracer star as a test particle.\n\n\n\nThe `distance in phase space' used to identify neighbours in the\ninterpolation scheme is defined by a scaling relation between\ndistances in configuration space and velocity space\\footnote{In this\n part of the calculation, we are only interested in finding\n neighbours, so the absolute values of these distances are not\n important. This scaling of velocity space to configuration space for\n the purpose of resampling the simulations should not be confused\n with the $\\Delta$ metric we define for our analysis of\n clustering.}. For each progenitor, we adopt an individual scaling\nwhich corresponds to making the median pairwise interparticle\nseparation of its particles in configuration space (at $z=0$) equal to\ntheir median separation in velocity space. In practice, the value of\nthis scaling parameter makes very little difference to the results we\npresent, when compared to the extreme choice of selecting only 32\nvelocity or position neighbours (disregarding the other three\ncoordinates in each case). Giving more weight to configuration-space\nneighbours smears out velocity substructure within the debris of a\nprogenitor (for example, where two wraps of a stream pass near one\nanother). Giving more weight to velocity neighbours has the opposite\neffect -- stars can be interpolated over arbitrarily large separations\nin configuration space, but coherent velocity structures are\npreserved. Therefore, the `optimal' choice is the scaling which\nbalances smoothing in configuration space against smoothing in\nvelocity space.\n\nTo quantify this balance between smoothing in configuration and\nvelocity space, we compute six smoothing lengths for each particle,\n$\\epsilon_{x,i}$ and $\\epsilon_{v,i}$, where $i$ represents a single\ndimension in space or velocity. 
To compute these dispersions,
we use the 32 nearest phase-space neighbours of each particle. We
define the `optimum' choice of scaling for \textit{each} progenitor
galaxy as that which minimises the quantity

\begin{equation}
\sigma_{\epsilon}^{2} = \left (\frac{1}{\bar{\epsilon}_{x,\mathrm{min}}} \sum_{i=1}^{3}{\epsilon_{x,i}} \right )^{2} + \left ( \frac{1}{\bar{\epsilon}_{v,\mathrm{min}}} \sum_{i=1}^{3}{\epsilon_{v,i}}\right )^{2}.
\end{equation} This is the sum in quadrature of the smoothing lengths in
configuration and velocity space, normalized respectively by
$\bar{\epsilon}_{x,\mathrm{min}}$, the `minimal' mean smoothing length
in configuration space (obtained from the 32 nearest configuration space
neighbours) and $\bar{\epsilon}_{v,\mathrm{min}}$, the `minimal' mean
smoothing length in velocity space (obtained from the 32 nearest
velocity space neighbours). We find that the scaling obtained by
matching the median interparticle separations in position and velocity
as described above is typically a good approximation to this optimal
value -- a similar result is discussed in more detail by
Maciejewski {et~al.} (2009).

In the Cooper {et~al.} model, when a given stellar population is
formed, the most bound 1\% of DM particles in the corresponding dark
halo at that time are chosen as dynamical tracers of that
population. Hence, each DM \textit{particle} to which stars are
assigned has an individual mass-to-light ratio, M/L, which can be as
high as $\sim1$ (i.e. $M_{\mathrm{stellar}}\sim10^{4}\,M_{\sun}$) and
as low as $\sim10^{-6}$. This will affect the density of stars seeded
by a DM particle independently of the density of its neighbours in
phase space (i.e. a low M/L particle will create a denser `cluster' of
tracers relative to a high M/L particle with the same neighbouring
positions and velocities).
We have tested an alternative approach in\nwhich the M\/L of each particle in a given progenitor is resampled by\ndistributing the total stellar mass of the progenitor evenly amongst\nits tagged particles\\footnote{This is almost equivalent to choosing\n M\/L only once, at the time in the simulation when the progenitor\n falls into the main halo (similar to the lower-resolution model of\n De~Lucia \\& Helmi 2008).}. We find that the extra clustering due to a\nfew `hot' particles in our default approach makes no difference to our\nresults. \n\n\n\n\\subsection{Tracer stars and mock observations}\n\\label{sec:tracer_stars}\n\nEach N-body dark matter particle in our simulation contributes a\nnumber of tracer stars to our mock observations, based on the stellar\npopulation with which it has been `tagged'. In the specific case of\nour comparisons to SDSS, these tracers are BHB stars meeting the selection\ncriteria of Xue {et~al.} (2008). Here we assume a global scaling between the\nstellar mass associated with each N-body particle, $M_{\\star}$, and\nthe number of BHBs it contributes to our mock catalogues, i.e.\n$N_{\\mathrm{BHB}} = f_{\\mathrm{BHB}}M_{\\star}$ where\n$f_{\\mathrm{BHB}}$ is the number of tracer stars per unit mass of the\noriginal stellar population\\footnote{We do this because we prefer to\n make a straightforward comparison with the observational data in\n this paper. In principle, the age and metallicity information\n associated with each stellar population in our model could be used\n to populate an individual colour-magnitude diagram for each N-body\n particle, and make a detailed prediction for the appropriate number\n of tracers. The `bias' of BHBs relative to the total stellar mass\n distribution of the halo (Bell {et~al.} 2010) may affect the\n clustering statistic recovered (Xue {et~al.} 2011), but this effect\n is beyond the scope of the present paper. }. 
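This conversion from tagged stellar mass to a discrete number of tracers can be sketched as follows (an illustrative Python fragment; the function name is ours, and the Poisson draw anticipates the sampling step described in the text):

```python
import numpy as np

def n_bhb_tracers(m_star, f_bhb_inv=3000.0, rng=None):
    """Poisson-sample the number of BHB tracer stars per tagged particle.

    m_star    : array of stellar masses tagged to each N-body particle (Msun).
    f_bhb_inv : stellar mass per BHB tracer star (fiducial 3000 Msun/star).
    """
    rng = np.random.default_rng(rng)
    expected = np.asarray(m_star, dtype=float) / f_bhb_inv  # N_BHB = f_BHB * M_star
    return rng.poisson(expected)
```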
For each N-body
particle, the actual number of BHB stars generated is drawn from a
Poisson distribution with mean $N_{\mathrm{BHB}}$. The correlation
function results described below are not sensitive to the choice of
$f_{\mathrm{BHB}}$, provided that the underlying distribution is
well-sampled at a given scale. We have therefore selected a fiducial
value of $f_{\mathrm{BHB}}^{-1} = 3000\,M_{\sun}/\mathrm{star}$. In
creating the mock catalogue, we do not include any stars
gravitationally bound to satellites. However, we do include stars in
their tidal tails (which, by our definition, are part of the stellar
halo).

Using our simulated BHB catalogues, we create mock observations for
comparison to the Xue {et~al.} (2008) data as follows. First we locate the
observer at a radius $r_{\sun}=8\,\mathrm{kpc}$ from the centre of the
halo. For our main comparison to the data, we restrict all observers
to the same `Galactic plane', with each random vantage point differing
only in its azimuthal angle in this plane and in the `polarity' of the
Galactic rotation axis (the $Z$ coordinate). However, the orientation
of the rotation axis cannot be directly constrained by the simulation,
which only models the \textit{accreted} component of the halo and the
bulge, and not the in situ formation of a stellar disc. As described
in Cooper {et~al.} (2010), the accreted `bulge' is prolate or mildly
triaxial. We define the minor axis of this bulge component
(conservatively defined by all accreted stars within $r<3$~kpc;
Cooper {et~al.} 2010) as the Galactic $Z$ axis. This axis is essentially
identical to the minor axis of the dark halo within $r<3$~kpc. There
are other plausible choices of Galactic plane (for example, relative
to the shape or spin vectors of the entire dark halo, rather than the
stars in its inner regions).
However, any choice is somewhat arbitrary\nwithout a self-consistent simulation of disc formation\\footnote{In a\n full hydrodynamic simulation the effects of feedback and adiabatic\n contraction may also make the dark halo itself more spherical\n (e.g. Tissera {et~al.} 2010; Abadi {et~al.} 2010).}.\n\nHaving chosen a location for the observer, we select all tracer stars\nwithin the spectroscopic footprint of SDSS DR6 having galactocentric\ndistance in the range 20--60~kpc (our principal comparison will focus\non the outer halo as defined by this distance range, although we also\nstudy the ranges 5--60~kpc and 5--20~kpc below). Galactic longitude\nand latitude are defined such that $(l,b)=(0,0)$ is the vector\ndirected from the observer to the centre of the halo. We set the\nheliocentric velocity components of each star assuming a Solar motion\nof $U,V,W = (10.0,5.2,7.2)\\:\\mathrm{km\\,s^{{-}1}}$ (Dehnen \\& Binney 1998) and\na velocity of the Local Standard of Rest about the Galactic centre\n$v_{\\mathrm{LSR}}=220\\mathrm{\\:km\\,s^{{-}1}}$. We compute $(X,Y,Z)$\nand $v_{\\mathrm{los}}$ as described by Xue {et~al.} (2008). Finally,\ndistances and velocities are convolved with Gaussian observational\nerrors of $\\sigma_{d}= 10\\%$ and $\\sigma_{v}=15\\;\\mathrm{km\\,s^{-1}}$\nrespectively (Xue {et~al.} 2008).\n\nIn both the mock observations and the real data, the average random\npair count $\\langle RR \\rangle$ is calculated by reshuffling distances\nand velocities among the positions on the sky of stars many times. 
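The estimator and the shuffling procedure can be sketched as follows. This is a simplified illustration: the observer position, the omission of the solar-motion correction, and all function names are our own assumptions; $(r_{\mathrm{hel}}, v)$ pairs are reassigned jointly to different sky positions by permutation, as in the text.

```python
import numpy as np

def pair_counts(r_xyz, v, bins, w_v=0.04):
    """Cumulative pair counts DD(<Delta) in the metric of Equation 1."""
    dr = r_xyz[:, None, :] - r_xyz[None, :, :]
    dv = v[:, None] - v[None, :]
    delta = np.sqrt((dr ** 2).sum(-1) + (w_v * dv) ** 2)
    iu = np.triu_indices(len(v), k=1)              # each pair counted once
    return np.array([(delta[iu] < b).sum() for b in bins])

def xi_delta(lb_unit, r_hel, v, bins, w_v=0.04, n_random=500, rng=None):
    """Estimate 1 + xi(<Delta) using shuffled random catalogues.

    lb_unit : (N, 3) unit vectors giving each star's sky position (l, b).
    r_hel   : (N,) heliocentric distances (kpc); v : (N,) velocities (km/s).
    """
    rng = np.random.default_rng(rng)
    r_obs = np.array([-8.0, 0.0, 0.0])             # assumed observer position (kpc)
    to_xyz = lambda u, d: r_obs + d[:, None] * u   # heliocentric -> galactocentric
    dd = pair_counts(to_xyz(lb_unit, r_hel), v, bins, w_v)
    rr = np.zeros(len(bins))
    for _ in range(n_random):
        p = rng.permutation(len(v))                # reassign (r_hel, v) pairs jointly
        rr += pair_counts(to_xyz(lb_unit, r_hel[p]), v[p], bins, w_v)
    return dd / (rr / n_random)
```

The default of 500 random catalogues matches the number quoted in the text.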
We find that\nby using 500 random catalogues to calculate $\\langle RR \\rangle$ for\neach mock observation and performing 500 mock observations in each\nhalo, we obtain a sufficiently well-converged estimate of the\ndistribution of \\deltacf{}.\n\n\\fig{fig:fiducial} illustrates the structure of one of our haloes and\nverifies that our mock observations can result in distributions of\nheliocentric distance and heliocentric radial velocity similar to the\nSDSS data of Xue {et~al.} (2008). In this figure we have specifically chosen\nan observer orientation in halo Aq-A such that the distributions of\ndistance and velocity we recover are close to those of the data, after\nconvolution with typical observational errors. We have aligned\nthe Galactic $Z$ axis of the mock observer with the minor axis of \nthe dark halo, as described above. This confines the most prominent\nstructures in the stellar halo to low Galactic latitudes, outside the\nSDSS DR6 spectroscopic footprint. Of course, the simulated haloes are\ninhomogeneous on large scales, and there are many choices of observer\nin each halo that \\textit{do not} resemble the SDSS data\\footnote{As\n discussed by Xue {et~al.} 2008, the completeness of the data declines\n at larger distances ($r_{\\sun}>20$~kpc). Mock catalogue distributions\n that match the observed distributions well at $r_{\\sun}<20$~kpc\n typically show a flatter profile with identifiable overdensities\n (streams) at larger distances. As the SDSS spectroscopic selection\n function for the data we use is difficult to quantify (Xue {et~al.} 2008),\n we do not explore the effects of incompleteness in this paper.}.\n\n\n\\section{Distance - velocity scaling in the $\\Delta$ metric}\n\\label{sec:fiducial}\n\n\\begin{figure}\n\\includegraphics[width=84mm,clip=True]{fig2.pdf}\n\\caption{Correlation functions in spatial separation (blue) and velocity\n separation (red) for stars in halo Aq-A. 
The velocity separation\n correlation function has been scaled by a factor\n $w_{v}=0.04\\,\\wvunits$ to match the turnover in the configuration\n space separation correlation function.}\n\\label{fig:accretedscale} \n\\end{figure}\n\nBefore \\deltacf{} can be computed, a value must be chosen for the\n velocity-to-distance scaling $w_{v}$ in Equation\n \\ref{eqn:delta_metric}. There is no clearly well-motivated way to\n choose this value, and in the absence of a physical justification,\nit must be treated as a free parameter. The choice of $w_{v}$\ndetermines the scale of substructure to which the correlation function\nis most sensitive. Naively, we expect this to be the typical width and\ntransverse velocity dispersion of a `stream'. It is preferable to fix\nthis parameter in a universal manner that does not depend on the\n details of a particular survey. We make a fiducial choice of\n$w_{v}$ as follows.\n\nIn each simulated halo we adopt the SDSS-like survey configuration\ndescribed in \\mnsec{sec:tracer_stars} (without observational errors or\nassumptions about the orientation of the Galactic plane). We\nconstruct one dimensional distributions of the separation in radial\ndistance and velocity between stars. We generate many random\nrealizations of these distributions by first convolving each simulated\nstar with Gaussian smoothing kernels of width $8\\,\\kpc$ (distance) and\n$80\\,\\kms$ (velocity), and then drawing randomly from these `smoothed'\ndistributions. The smoothing scales were chosen as a compromise\nbetween signal (diminished by oversmoothing) and noise (increased by\nundersmoothing). Using these random realizations we construct\none-dimensional correlation functions for each distribution. These two\ncorrelation functions are shown for halo Aq-A in\n\\fig{fig:accretedscale}. Although the signals are intrinsically weak,\nthey have a very similar shape for both distributions, each with a\ncharacteristic `turnover' scale. 
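The construction of these one-dimensional correlation functions can be sketched as follows (an illustrative fragment; the function name is ours, and we draw from the smoothed distribution by jittering a bootstrap resample of the data, one simple way to realize the convolution described above):

```python
import numpy as np

def corr_1d(x, smooth, bins, n_random=100, rng=None):
    """Cumulative 1D correlation function of pairwise separations in a
    single observable, measured against Gaussian-smoothed randoms.

    x      : e.g. radial distances (smooth ~ 8 kpc) or velocities (~ 80 km/s).
    smooth : width of the Gaussian smoothing kernel.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)

    def cum_counts(a):
        sep = np.abs(a[:, None] - a[None, :])[np.triu_indices(len(a), k=1)]
        return np.array([(sep < b).sum() for b in bins], dtype=float)

    dd = cum_counts(x)
    rr = np.zeros(len(bins))
    for _ in range(n_random):
        # one realization drawn from the smoothed distribution: bootstrap + jitter
        samp = x[rng.integers(0, len(x), len(x))] + rng.normal(0.0, smooth, len(x))
        rr += cum_counts(samp)
    return dd / (rr / n_random) - 1.0   # xi(<separation)
```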
Matching this scale in the two\ncorrelation functions corresponds to $w_{v}\\sim0.04\\pm0.01\\,\\wvunits$\nfor the six haloes, which we adopt as a fiducial value. We caution\nthat although the scales on which we match the one-dimensional\ncorrelation functions are somewhat smaller than the smoothing scales\nwe adopt to create the random distributions, this does not guarantee\nthat our choice of $w_{v}$ is unaffected by our choice of smoothing.\n\nClearly, there are other ways of fixing $w_{v}$. In practice, however,\nour conclusions are not highly sensitive to the value of this\nparameter. Values of the order of $w_{v}\\sim0.01$--$1.0\\,\\wvunits$\nresult in very similar $\\xi({\\Delta})$ correlation functions. Values\nlower than $0.01\\,\\wvunits$ recover very little signal. Values above\n$1\\,\\wvunits$ treat $1\\,\\kms$ velocity differences as equivalent to\n$>1\\,\\kpc$ separations in space, and so make the cumulative\ncorrelation function very noisy on small scales for only a marginal\nincrease in the overall signal. (This noise, in turn, increases the\nscatter between signals measured by different observers.) We find that\nour choice of $w_{v}\\sim0.04\\,\\wvunits$ is a reasonable\ncompromise. Our method for choosing $w_{v}$ can be compared with that\nof Starkenburg {et~al.} (2009), who determine the equivalent of $w_{v}$ in\ntheir metric to be the ratio of the Spaghetti survey limits in radial\ndistance and velocity ($0.26\\,\\wvunits$). Either value is acceptable\nto illustrate our approach and compare to simulations. We therefore\nadopt $w_{v}\\sim0.04\\,\\wvunits$.\n\n\\section{Clustering of SDSS BHB stars}\n\\label{sec:segue}\n\n\n\\subsection{Clustering in the Xue et al. sample}\n\\label{sec:obsdata_clustering}\n\n\\begin{figure}\n\\includegraphics[width=84mm,trim= 0.0cm 0cm 0cm 0.0cm,\n clip=True]{fig3.pdf}\n\\caption{\\deltacf{} correlation function for the SDSS BHB sample of\n Xue {et~al.} (2008). 
Black points (with Poisson error bars) show
  \deltacf{} computed for all stars at galactocentric distances
  greater than 20~kpc. Grey points show the result for all stars in
  the main sample (galactocentric distances of 4--60~kpc). The blue
  squares include 9 stars suspected to belong to globular clusters,
  while red circles include stars at low Galactic $|z|$ heights
  (possible thick disc stars). Neither of these contributions is
  relevant for the more distant selection shown by the black points.}
\label{fig:sdss_real}
\end{figure}

\fig{fig:sdss_real} shows \deltacf{} computed for 2392 BHB stars in
the Xue {et~al.} (2008) sample (\mnsec{sec:obsdata}; grey points). Stars at
small separations in the metric of \eqn{eqn:delta_metric}
($\Delta<4\,\mathrm{kpc}$) show significant clustering. The amplitude
of the signal increases if we restrict the sample to larger
galactocentric distances, $r>20$~kpc (black points). At larger
distances substructure is expected to be dynamically young and to have
undergone less phase mixing. Our finding of stronger clustering for
more distant halo stars is in qualitative agreement with the results
of Xue {et~al.} (2011).

Although we appear to recover a significant clustering signal in
\fig{fig:sdss_real}, we have only one SDSS survey. The observed signal
may be an artifact of the particular structures covered by the SDSS
footprint. Other parts of the halo may be smoother or more structured,
or may appear so when viewed from different points around the Solar
circle. We will address this issue of sample variance in the following
section using our mock catalogues.

We show two further permutations of the Xue {et~al.} (2008) sample in
\fig{fig:sdss_real}. The first of these (red open circles) includes
stars close to the Galactic plane, $|Z|<4\,\mathrm{kpc}$.
These were
excluded from the main sample of Xue {et~al.} (2008) to excise the thick
disc.

Although only $\sim150$ stars are excluded by the cut on
$|Z|$, they make a substantial difference to the correlation
function, suppressing the clustering signal on scales below
$\sim8\,\mathrm{kpc}$. In the SDSS data the majority of
low-$|Z|$ stars are at small heliocentric radii. These stars
constitute a foreground `screen' with a relatively smooth
distribution, which may dilute the signal of correlated stars.

The final sample shown in \fig{fig:sdss_real} (blue open squares)
includes all stars from the main sample (grey points) and a further
nine BHB stars identified as globular cluster members by
Xue {et~al.} (2009). Two of these are from one cluster, and seven from
another. Including these stars marginally increases the clustering
signal in the smallest-separation bin. This shows that the technique
is sensitive to the clustering of stars on these scales, which
correspond to separations comparable to the distance and velocity
errors of the data.

\subsection{Comparison with Mock Catalogues}
\label{sec:mock_clustering}

\begin{figure}
\includegraphics[width=84mm, clip=True]{fig4.pdf}
\caption{\deltacf{} for halo stars at galactocentric distances
  $20<r<60$~kpc.}
\label{fig:sdss_mock_pa}
\end{figure}

Beyond $20$~kpc, the sky of one of our haloes is dominated by a single
radial stream (see figures 6 and 7 of Cooper {et~al.} 2010).

We find that two haloes, Aq-E and Aq-F (red), are consistent with the
observed \deltacf{} on all scales. The structure of Aq-F is atypical
for the sample -- most of its stars are accreted in a late 3:1 merger
and its surface brightness at the Solar radius is substantially higher
than current estimates for the Milky Way halo. In projection, Aq-F
resembles the `shell'-dominated haloes observed in a number of nearby
elliptical galaxies.
Meanwhile haloes Aq-A (black), Aq-B (cyan) and\nAq-D (green) are marginally inconsistent with the data: below\n$\\Delta\\sim4$~kpc, $\\sim90$ per cent of mock observations in these\nhaloes imply a greater degree of clustering than we find for the Milky\nWay, particularly on small scales. Aq-C (purple) is entirely\ninconsistent with the Milky Way observations on all scales, showing a\nmuch higher degree of clustering. Beyond $20$~kpc, the sky of an\nobserver in Aq-C is dominated by two bright tidal streams on wide\n($\\sim100$~kpc) orbits. Although their orbital planes are\napproximately coincident with our definition of the Galactic plane,\nnevertheless sections of these streams intrude on the SDSS footprint\nat low Galactic latitudes.\n\n\\begin{figure*}\n\\includegraphics[height=65mm,clip=True]{fig5a.pdf}\n\\includegraphics[height=65mm,trim=1.3cm 0 0 0,clip=True]{fig5b.pdf}\n\\includegraphics[height=65mm,trim=1.3cm 0 0 0,clip=True]{fig5c.pdf}\n\\caption{\\deltacf{} for mock observations, as\n \\fig{fig:sdss_mock_pa}. From left to right, we vary our modelling\n assumptions as follows: (a) no restriction on the alignment of the\n Galactic plane with respect to the dark halo (the observer is\n located randomly on a sphere of radius $r_{\\sun}=8$~kpc) and no\nconvolution of the data with the expected observational errors; (b) the\n observer is restricted to the Galactic plane as in\n \\fig{fig:sdss_mock_pa}, but the mock observations are \\textit{not}\n convolved with expected observational errors; (c) as panel (a), but\n mock observations \\textit{are} convolved with errors. Colours are as\n \\fig{fig:sdss_mock_pa}}\n\\label{fig:sdss_mock_no} \n\\end{figure*}\n\nThe DR6 footprint and the cut on extra-planar height in the\nXue {et~al.} (2008) sample exclude stars near the Galactic plane from our\nclustering analysis. \\fig{fig:sdss_mock_no} illustrates how our\ndefinition of the Galactic plane influences the halo clustering\nsignal. 
In panel (a) the orientation of the Galactic plane with
respect to the halo is chosen randomly for each of the 500 mock
observers (i.e. observers are distributed over a sphere of radius
$r_{\sun}=8$~kpc), whereas in panel (b) the Galactic $Z$ direction
is aligned with the minor axis of the halo for all observers as in
\fig{fig:sdss_mock_pa}. To focus on the effects of this alignment, the
distances and velocities of stars in these two panels have
\textit{not} been convolved with observational errors.

The systematically higher clustering signals in panel (b) of
\fig{fig:sdss_mock_no} suggest that the plane perpendicular to the
minor axis of the dark matter halo is special. In Cooper {et~al.} (2010) and
above, we have noted the strong correlation between the shape of the
dark halo and the inner regions of the stellar halo. This alignment of
halo structure also extends, more loosely, to other prominent stellar
halo structures at large distances. An overall flattening of the
stellar halo arises because our dark matter haloes are embedded in
long-lived filaments of the cosmic web. Typically one or two such
filaments dominate the infall directions of both satellite galaxies
and smoothly accreted dark matter, which also contributes to the shape
of the dark halo
(e.g. Libeskind {et~al.} 2005; Lovell {et~al.} 2011; Wang {et~al.} 2011; Vera-Ciro {et~al.} 2011). The
distribution of stars stripped from infalling satellites echoes the
large-scale correlation of their orbital planes.

Because of this flattened global structure, halo stars in our choice
of Galactic plane tend to be more smoothly distributed (i.e. this
plane contains more diffuse phase-mixed material as well as coherent
substructure).
Panel (b) demonstrates how
the `contrast' of small-scale substructure in the outer halo is
enhanced when these smoother components are excluded from the
clustering analysis (through a combination of the SDSS footprint and
the cut on $|Z|$). This is particularly true in the case of Aq-F,
where the majority of the mass in the halo is contributed by one
extensive and relatively `smooth' component. By contrast, in Aq-C the
average clustering amplitude \textit{decreases} on large scales when
we fix the Galactic plane. As noted above, in this case the massive
coherent streams that dominate the clustering signal of this halo
mostly fall outside the SDSS footprint.

Panel (c) of \fig{fig:sdss_mock_no} shows the randomly aligned
observations of panel (a) convolved with observational errors in
distance and velocity. These errors `smooth out' the halo, suppress
the clustering signal overall and increase the variance between
observers on small scales. Again the effect is most pronounced for
Aq-F, where blurring of the dominant smooth component further
decreases the contrast of substructure. In most cases these two
effects (alignment and observational errors) counteract each other to
produce the distribution of signals shown in
\fig{fig:sdss_mock_pa}. In the case of Aq-E the signal suffers
disproportionately from errors in the aligned configuration, perhaps
because this signal is due to a small number of pairs at large
distances.

Finally, in \fig{fig:sdss_mock_distance} we examine differences
between nearby and more distant halo stars (also discussed by
Xue {et~al.} 2011). The left-hand panel corresponds to the full
range of the Xue {et~al.} (2008) sample ($5<r<60$~kpc). The
clustering signal is stronger when the sample is restricted to stars
at galactocentric distances $r>20$~kpc. This finding of stronger phase
space correlations between stars in the outer halo is in agreement
with that of Xue {et~al.} (2011).

\section{Conclusions}
\label{sec:conclusion}
\n\nTo test models of the accreted components of stellar haloes and\nunderstand the effects of sample variance, we have computed \deltacf{}\nfor mock observations constructed from the six $\Lambda$CDM\nsimulations of Cooper {et~al.} (2010) in which only the stellar haloes\nproduced by disrupted satellites are considered. Our statistic\ndistinguishes quantitatively between these six qualitatively different\nhalo realizations. When only stars with $r>20$~kpc are considered,\nfive of our six simulations show statistically significant\ncorrelations on scales in our metric equivalent to $\sim 1-8$~kpc (for\nall observers on the Solar circle). Most of the models are consistent\nwith the Milky Way data for the outer halo, $r>30$~kpc. For the inner\nhalo, however, particularly at galactocentric distances smaller than\n20~kpc, the simulations tend to be significantly more strongly\nclustered than the data. One possible explanation for this is a\ndeficiency of smoothly distributed halo stars in the models, perhaps\nattributable to the absence of so-called `in situ' halo stars. These\nstars may be scattered from the Galactic disc, or born on eccentric\norbits (in streams of accreted gas or an unstable cooling flow, for\nexample). Neither of these processes is included in our model of the\naccreted halo.\n\nAlthough it seems reasonable to expect that in situ haloes are\ndistributed with spherical or axial symmetry and have a low degree of\ncoherence in phase space, models of such components and predictions\nfor the fraction of stars they contain are not well constrained. Most\nhypotheses for in situ halo formation limit these stars to an `inner'\nhalo and predict that the accreted component (which we simulate)\ndominates at larger radii\n(e.g. Abadi {et~al.} 2006; Zolotov {et~al.} 2009; Font {et~al.} 2011). However, the fraction\nof the halo formed in situ and the boundary between `inner' and\n`outer' halo are highly model-dependent. 
Detections of observable\n`dichotomies' in the Milky Way halo (Carollo {et~al.} 2007) are still\ndebated (e.g. Sch{\\\"{o}}nrich, Asplund \\& Casagrande 2011; Beers {et~al.} 2011). It is\npossible to place broad limits on the fraction of stars in a `missing'\nsmooth component, for example by comparing the RMS variation of\nprojected star-counts in our models (Helmi {et~al.} 2011b) to the Milky\nWay (Bell {et~al.} 2008). However, the uncertainties involved are substantial.\n\nAnother factor in the discrepancy between the models and the data may\nbe the absence of a baryonic (disc) contribution to the gravitational\npotential. A massive disc could alter the process of satellite\ndisruption in the inner halo and might make the potential within\n$30$~kpc more spherical (Kazantzidis, Abadi \\& Navarro 2010), possibly distributing\nmore inner halo stars into the SDSS footprint (on the other hand, a\nmore axisymmetric or spherical dark halo might also result in fewer\nchaotic orbits, hence more coherent streams). Because of these\nmodelling uncertainties, our application of the \\deltacf{} statistic\ncan presently serve only as a basic test for the abundance of\nsubstructure in the simulations.\n\nSeveral aspects of our approach could be improved. \nIt seems desirable to use well-measured radial velocity data to\nenhance clustering signals such as our correlation function relative\nto those based on configuration space data alone. However, so far, no\nproposal for \nincluding these velocity data is well-supported by theory. \nHere, we have used a straightforward choice of\nparametrised metric to illustrate the concept of scaling radial\nvelocity separations to `equivalent' configuration space separations,\nand this is empirically useful in recovering a measurable\nsignal. Nevertheless, we have not found any compelling or generic way\nto select the scaling parameter\n($w_{v}$). 
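The parametrised metric discussed here can be made concrete with a short sketch. Everything below is illustrative: the function name, coordinate conventions and example numbers are ours, not the paper's, and $w_{v}$ simply rescales a radial-velocity difference into an `equivalent' configuration-space separation added in quadrature:

```python
import numpy as np

def pair_separation(x1, x2, vr1, vr2, w_v):
    """Illustrative pair 'distance' combining configuration space and
    radial velocity: d = sqrt(|x1 - x2|^2 + (w_v * (vr1 - vr2))^2).
    Positions in kpc, radial velocities in km/s, w_v in kpc/(km/s);
    these conventions are assumptions for the sketch."""
    dx2 = np.sum((np.asarray(x1, float) - np.asarray(x2, float))**2)
    return float(np.sqrt(dx2 + (w_v * (vr1 - vr2))**2))

# Two stars ~1.4 kpc apart in space, separated by 30 km/s in radial velocity:
d = pair_separation([8.0, 0.0, 2.0], [9.0, 1.0, 2.0], 150.0, 120.0, w_v=0.05)
print(d)  # sqrt(2 + 1.5^2) ~ 2.06 kpc
```

The difficulty noted in the text is precisely that no principled prescription fixes $w_{v}$; the value above is arbitrary.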
Improving either the definition of the metric itself or the\nmeans of fixing this parameter is a clear priority for extensions of\nthis approach. A similar issue affects the weighting of velocity\ninformation in clustering algorithms (e.g. Sharma {et~al.} 2010).\n\nFinally, further comparisons between stellar halo models and\nobservational data should also account for selection effects such as\nspectroscopic incompleteness and the potential bias of BHB stars as a\ntracer of the stellar halo (Bell {et~al.} 2010; Xue {et~al.} 2011). For\nstatistical analysis, there is a pressing requirement for\nobservational samples with well-understood selection functions, even\nif they do not probe the most distant halo. The LAMOST Galactic survey\nis likely to be the first to approach this goal.\n\nIn summary, we have taken a first step in adapting a well-studied\ncosmological statistic, the two-point correlation function, to the\nstudy of the Milky Way halo. Our comparisons highlight the complexity\nof statistical analysis in the stellar halo, and the importance of\ninterpreting observational results in the context of realistic models\nof halo assembly. We have compared the SDSS data with the stellar\nhalos produced by disrupted satellites in {\em ab initio} galaxy\nformation models constructed from the Aquarius N-body simulations of\ngalactic dark halos in the $\Lambda$CDM cosmology. With further\nrefinements and more data, our statistical approach to quantifying the\nsmoothness of the halo can provide a practical and productive way to\nstudy the structure of the Milky Way halo and compare with theoretical\nexpectations.\n\n\section*{Acknowledgements}\n\nThe authors thank Heather Morrison and the Spaghetti Survey team for\nmaking their data available prior to publication, and Xiangxiang Xue\nand Sergey Koposov for their assistance. We thank the referee for\ntheir helpful comments which greatly improved the structure of this\npaper. 
APC acknowledges an STFC studentship and thanks Else\nStarkenburg for useful discussions. SMC acknowledges the support of a\nLeverhulme Research Fellowship. CSF acknowledges a Royal Society\nWolfson Research Merit Award and ERC Advanced Investigator grant\n267291- COSMIWAY. AH acknowledges funding support from the European\nResearch Council under ERC-StG grant GALACTICA-24027. This work was\nsupported in part by an STFC rolling grant to the ICC. The\ncalculations for this paper were performed on the ICC Cosmology\nMachine, which is part of the DiRAC Facility jointly funded by STFC,\nthe Large Facilities Capital Fund of BIS, and Durham University.Fig. 1\nwas produced with the {\\tt HEALPy} implementation of {\\tt HEALPix}\n[http:\/\/healpix.jpl.nasa.gov, G{\\'{o}}rski {et~al.} 2005].\n\n{}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{introduction}\nIron selenide based superconductors have come to attract particular attention after the discovery that the binary material FeSe avoids the formation of the magnetic order but becomes superconducting at $T_c$~=~8~K. \\cite{hsu2008superconductivity} The value of $T_c$ can be enhanced both by applying pressure, reaching a maximum value of about 37~K,\\cite{medvedev2009electronic,margadonna2009pressure} or in single layer films of FeSe grown on an SrTiO$_3$ substrate, \\cite{qing2012interface,FeSe2013,FeSe2015} and it is suggested that large enhancements of $T_c$ arise due to an increased electron carrier density. \\cite{FeSeCarr} Another method of enhancing the superconductivity is by intercalating alkali metals between FeSe layers, as in the case of $A_x$Fe$_{2-y}$Se$_2$ ($A$~=~K, Rb, Cs,Tl\/Rb, Tl\/K) \\cite{guo2010superconductivity,2011CsSCRep,wang2011superconductivity,2011TlRbSCRep,2011TlKSCRep}. 
A notable difference from both the iron arsenide based materials and the bulk binary compound FeSe is the absence of the hole pocket in the Fermi surface at the Brillouin zone center.\cite{qian2011absence} This appears to contradict the s$_{\pm}$ superconducting state often applied to iron arsenide superconductors, where there is nesting and a sign change of the order parameter between the hole and electron pockets.\cite{mazin2008unconventional,kuroki2008unconventional} While there have been a variety of alternative proposals for the pairing state, \cite{mazin2011symmetry,khodas2012interpocket,nica2015orbital} studying the intrinsic superconducting properties is greatly complicated by the clear evidence for phase separation between non-superconducting regions with an ordered arrangement of Fe vacancies and vacancy free superconducting regions. \cite{li2012phase,PhaseSep2011DL,PhaseSepNMR}\n\nRecently, a new iron selenide based superconductor (Li$_{1-x}$Fe$_x$)OHFeSe ($x\approx0.2$) was discovered, \cite{lu2015coexistence} with a high transition temperature of $T_c\approx$~40~K. It has a quasi-two-dimensional crystal structure, with layers of both (Li$_{1-x}$Fe$_x$)OH and superconducting FeSe. The material displays coexistence between superconductivity and antiferromagnetism, \cite{lu2015coexistence} while in the phase diagram superconductivity occurs in close proximity to spin-density wave order. \cite{dong2014phase} In common with $A_x$Fe$_{2-y}$Se$_2$, the hole pocket is also absent,\cite{niu2015surface,zhao2016common} but the samples are much more homogeneous, indicating that (Li$_{1-x}$Fe$_x$)OHFeSe is a good candidate for probing iron based superconductors without a hole pocket at the zone center. 
Angle resolved photoemission spectroscopy (ARPES) measurements are consistent with the presence of nodeless superconductivity with a single isotropic energy gap, but disagree over the gap magnitudes.\\cite{niu2015surface,zhao2016common} However, scanning tunneling spectroscopy (STS) studies show two distinct features in the conductance spectra, suggesting the presence of multiple gaps. \\cite{du2016scrutinizing,yan2016surface} Meanwhile the in-plane superfluid density obtained from muon-spin rotation ($\\mu$SR) measurements is consistent with either one or two gaps, but very different behavior is seen in the out-of-plane component, which shows a much more rapid drop of the superfluid density with temperature. \\cite{khasanov2016proximity} Furthermore, inelastic neutron scattering (INS) measurements give evidence for the presence of a spin resonance peak, \\cite{Davies2016spin,pan2016structure} consistent with a sign change of the order parameter across the Fermi surface, while a lack of a sign change was suggested from an STS study, on the basis of quasi-particle interference (QPI) results and the effect of impurities.\\cite{yan2016surface} \n\n\n\nTo further characterize the superconducting gap structure, we report London penetration depth measurements of (Li$_{1-x}$Fe$_x$)OHFeSe single crystals using a tunnel-diode-oscillator (TDO) based technique, from which we obtain the temperature dependence of the London penetration depth shift $\\Delta\\lambda(T)$. The low temperature $\\Delta\\lambda(T)$ gives clear evidence for nodeless superconductivity, while a single-gap isotropic $s$-wave model is unable to account for the superfluid density. The superfluid density is well fitted by a two-gap $s$-wave model, as well as models with anisotropic gaps.\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\columnwidth]{Fig1.eps}\n\\end{center}\n\t\\caption{Magnetic susceptibility $\\chi$ of (Li$_{1-x}$Fe$_x$)OHFeSe samples from two batches ($\\#$A and $\\#$B). 
Both field-cooled (FC) and zero-field-cooled (ZFC) measurements were performed in an applied magnetic field of 1~mT. The inset shows the magnetization as a function of applied field at 2~K. The solid line shows the linear fit at low fields while the arrow marks the position of the lower critical field $H_{c1}$, where there is deviation from linear behavior.}\n \\label{samplecharacterization}\n\\end{figure}\n\\section{Experimental details}\nSingle crystals of (Li$_{1-x}$Fe$_x$)OHFeSe (with $x\\approx0.2$) from two batches prepared by two different groups were measured ($\\#$A and $\\#$B), where the crystals were synthesized following Ref.~\\onlinecite{DongSynth}. Using the parameters from Ref.~\\onlinecite{DongSynth} ($\\rho_0=0.1~$m$\\Omega$-cm and a carrier density of $n=1.04\\times10^{21}$cm$^{-3}$), we estimate a mean free path of $l=12.4$~nm using $l=\\hbar(3\\pi^2)^{\\frac{1}{3}}\/e^2n^{\\frac{2}{3}}\\rho_0$. This is considerably larger than the Ginzburg-Landau coherence length $\\xi_{GL}=2$~nm calculated from an upper critical field of $H_{c2}(0)=79$~T, \\cite{DongSynth} and therefore the material is in the clean limit. Magnetization measurements of samples from both batches were performed utilizing a SQUID magnetometer (MPMS-5T). The temperature dependence of the London penetration depth shift $\\Delta\\lambda(T)~=~\\lambda(T)-\\lambda(0)$ was measured in a ${^3}$He cryostat from 42~K down to a base temperature of about 0.5~K using a tunnel-diode-oscillator based method, with an operating frequency of about 7~MHz. The samples were cut into a regular shape and mounted on a sapphire rod which was inserted into the coil without any contact. A very small ac field of about 2~$\\mu T$ is applied to the sample along the $c$~axis, which is much smaller than the lower critical field $\\mu_0H_{c1}$ and therefore the sample is always in the Meissner state. 
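The clean-limit estimate quoted above is straightforward to reproduce. A minimal check in Python (SI units throughout; $\rho_0$ and $n$ are the values taken from Ref.~\onlinecite{DongSynth}):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
e = 1.602176634e-19      # elementary charge (C)

rho0 = 0.1e-3 * 1e-2     # residual resistivity: 0.1 mOhm cm -> Ohm m
n = 1.04e21 * 1e6        # carrier density: cm^-3 -> m^-3

# l = hbar * (3 pi^2)^(1/3) / (e^2 * n^(2/3) * rho0)
l = hbar * (3.0 * math.pi**2)**(1.0 / 3.0) / (e**2 * n**(2.0 / 3.0) * rho0)
print(f"l = {l * 1e9:.1f} nm")  # ~12.4 nm >> xi_GL = 2 nm, i.e. clean limit
```

This reproduces the 12.4~nm quoted in the text, an order of magnitude above the Ginzburg-Landau coherence length.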
As such the shift in the resonant frequency from zero temperature $\\Delta f(T)$ is related to the penetration depth shift in the $ab$~plane $\\Delta\\lambda(T)$ via $\\Delta\\lambda(T)$~=~G$\\Delta f(T)$, where the calibration constant $G$ is calculated using the geometry of the coil and sample \\cite{Gfactor}. \n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.8\\columnwidth]{Fig2.eps}\n\\end{center}\n\t\\caption{Temperature dependence of the London penetration depth $\\Delta\\lambda(T)$ of two samples from (a) batch $\\#$A and (b) batch $\\#$B. The main panels show the low temperature behavior, where the data are fitted below $T_c\/3$ with a fully gapped model and a power law dependence. The insets display the frequency shift $\\Delta f(T)$ from the lowest temperature up to above the superconducting transition temperature, normalized by the value at 40~K.}\n \\label{penetrationdepth}\n\\end{figure}\n\n\\section{Results and discussion}\n\n Both the field-cooled (FC) and zero-field-cooled (ZFC) magnetic susceptibility [$\\chi(T)$] measurements are shown in Fig.~\\ref{samplecharacterization}, from above the superconducting transition temperature down to 2~K under a small applied field of 10~Oe, where corrections for the demagnetization effect were applied. The ZFC $\\chi(T)$ of both samples show sharp superconducting transitions onsetting at around 40~K\nand 37~K for samples $\\#$A and $\\#$B, respectively. At low temperatures the data for $\\#$A flattens, while\nfor $\\#$B there continues to be a slight decrease with decreasing temperature. This indicates the high quality of the single crystals from batch $\\#$A, whereas those from batch $\\#$B show evidence for some inhomogeneity. Meanwhile the superconducting shielding fraction is around 100$\\%$ in both samples. The inset displays the field dependence of the magnetization at 2~K, for a field applied in-plane so that the demagnetization effect is very small. 
There is a deviation from the low-field linear behavior at $\mu_0H_{c1}\approx 4.5$~mT, confirming that $H_{c1}$ is significantly larger than the ac field applied in the penetration depth measurements.\n\nFigure~\ref{penetrationdepth} displays $\Delta\lambda(T)$ for single crystal samples from two batches, with samples from $\#$A and $\#$B displayed in (a) and (b) respectively. The insets of both panels display the temperature dependence of the frequency shift $\Delta f(T)$ from above the superconducting transition at 42~K down to 0.5~K. The superconducting transition onsets at respective temperatures of 40~K and 39~K in samples $\#$A and $\#$B, while the corresponding endpoints of the transition are around 34~K and 35~K, and the latter values of $T_c$ are used in the subsequent analysis of the superfluid density. The main panels of Fig.~\ref{penetrationdepth} display the low temperature behavior of $\Delta\lambda(T)$. It can be seen in Fig.~\ref{penetrationdepth}(a) that $\Delta\lambda(T)$ for the $\#$A samples decreases with decreasing temperature before flattening below about $0.1T_c$, indicating a nodeless gap structure in (Li$_{1-x}$Fe$_x$)OHFeSe with a lack of low energy excitations, which is consistent with previous results. \cite{niu2015surface,zhao2016common,du2016scrutinizing,yan2016surface,khasanov2016proximity} Furthermore, when fitted with a power law dependence $\Delta\lambda(T)\sim T^n$, exponents of $n=2.83$ and $n=2.46$ are obtained for samples $\#$A-1 and $\#$B-1 respectively. In the case of nodal superconductors, $n=1$ for line nodes and $n=2$ for point nodes is generally anticipated. While impurity scattering, non-local effects and quantum fluctuations can all lead to $n\approx2$ at low temperatures for $d$-wave superconductors with line nodes,\cite{Hirschfeld1993,Kosztin1997,Benfatto2001} our observation that $n$ is significantly larger than two gives further evidence for fully gapped behavior. 
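The power-law analysis above amounts to a straight-line fit in log-log space. The sketch below uses synthetic data: the amplitude, noise level and temperature grid are our own illustrative choices, not the measured $\Delta\lambda(T)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for low-temperature Delta-lambda(T): a pure T^2.83
# power law with 1% multiplicative noise (all values illustrative).
T = np.linspace(1.0, 11.0, 30)                     # K, roughly up to Tc/3
dlam = 0.05 * T**2.83 * (1.0 + 0.01 * rng.standard_normal(T.size))

# Delta-lambda ~ T^n is a straight line in log-log space, so the exponent
# is the slope of a linear fit to (log T, log Delta-lambda).
n_fit, log_amp = np.polyfit(np.log(T), np.log(dlam), 1)
print(f"fitted exponent n = {n_fit:.2f}")          # recovers the input ~2.83
```

An exponent recovered well above 2, as for sample $\#$A-1, disfavors both line and point nodes.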
For a fully-gapped superconductor at $T\ll T_c$, the penetration depth can be described by $\Delta\lambda(T)=\lambda_{eff}(0)\sqrt{\pi\Delta(0)\/2k_BT}\textrm{exp}[-\frac{\Delta(0)}{k_BT}]$, where $\Delta(0)$ is the superconducting gap magnitude at zero temperature and $\lambda_{eff}(0)$ is an effective zero temperature penetration depth. The low temperature data for sample $\#$A-1 is well fitted with a gap magnitude of $\Delta(0)=0.87k_BT_c$. A similar value of $\Delta(0)=0.78k_BT_c$ is obtained from the fitting for sample $\#$B-1, although there is a small deviation of the fitted curve for this sample at the lowest temperatures, where a weak anomaly is observed in the data, the origin of which is not clear. The values of the fitted gaps are much smaller than the value from BCS theory of $1.76k_BT_c$ for weakly coupled isotropic $s$-wave superconductors, suggesting multi-gap superconductivity or gap anisotropy in (Li$_{1-x}$Fe$_x$)OHFeSe. The fitted values of $\lambda_{eff}(0)$ of 636~\AA\ and 1228~\AA\ for samples $\#$A-1 and $\#$B-1, respectively, are significantly different from the value of $\lambda(0)\approx2800$~\AA\ from $\mu$SR measurements. \cite{khasanov2016proximity} Such a difference is also expected for multi-gap or anisotropic superconductors. \cite{Malone2009}\n\n\nIn order to obtain further information about the superconducting pairing state, the normalized superfluid density [$\rho_s(T)$] was calculated from the penetration depth using $\rho_s(T)$~=~$[\lambda(0)$\/$\lambda(T)]^2$, where $\lambda(0)\approx$~2800~\AA\ was estimated from $\mu$SR measurements.\cite{khasanov2016proximity} Since the samples from batch $\#$A are higher quality and show a sharper superconducting transition, measurements from this batch were used in the subsequent analysis. The $\rho_s(T)$ of sample $\#$A-1 is displayed in Fig.~\ref{superfluiddensity}. 
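The activated low-temperature form used in these fits is easy to evaluate directly. A sketch with the sample $\#$A-1 fit values ($\Delta(0)=0.87k_BT_c$, $\lambda_{eff}(0)=636$~\AA), taking $T_c=34$~K as in the analysis above:

```python
import math

def dlambda_lowT(T, Tc=34.0, gap_ratio=0.87, lam_eff=636.0):
    """Low-T form for a fully gapped superconductor:
    Delta-lambda = lam_eff * sqrt(pi*Delta0/(2*kB*T)) * exp(-Delta0/(kB*T)),
    parametrised by gap_ratio = Delta(0)/(kB*Tc).  T, Tc in K; the result
    is in the units of lam_eff (Angstrom for these fit values)."""
    x = gap_ratio * Tc / T                 # Delta(0) / (kB*T)
    return lam_eff * math.sqrt(math.pi * x / 2.0) * math.exp(-x)

# The exponential factor produces the observed flattening below ~0.1*Tc:
print(dlambda_lowT(0.3 * 34.0))   # T = 0.3 Tc: tens of Angstroms
print(dlambda_lowT(0.1 * 34.0))   # T = 0.1 Tc: well below 1 Angstrom
```

The shift is suppressed by more than two orders of magnitude between $0.3T_c$ and $0.1T_c$, which is why a small gap is directly visible in low-temperature $\Delta\lambda(T)$ data.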
The superfluid density was modelled using \n\\begin{equation}\n\\rho_{\\rm s}(T) = 1 + 2 \\left\\langle\\int_{\\Delta_k}^{\\infty}\\frac{E{\\rm d}E}{\\sqrt{E^2-\\Delta_k^2}}\\frac{\\partial f}{\\partial E}\\right\\rangle_{\\rm FS},\n\\label{equation2}\n\\end{equation}\n\\noindent where $f(E, T)=[1+{\\rm exp}(E\/k_BT)]^{-1}$ is the Fermi function and $\\left\\langle\\ldots\\right\\rangle_{\\rm FS}$ denotes an average over the Fermi surface. The temperature dependence of the gap function $\\Delta_k$ is approximated by \\cite{carrington2003magnetic}\n\\begin{equation}\n\\delta(T)={\\rm tanh}\\left\\{1.82\\left[1.018\\left(T_c\/T-1\\right)\\right]^{0.51}\\right\\},\n\\label{equation3}\n\\end{equation}\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\columnwidth]{Fig3.eps}\n\\end{center}\n\t\\caption{Normalized superfluid density $\\rho_s$ as a function of the reduced temperature $T\/T_c$ for sample $\\#$A-1 of (Li$_{1-x}$Fe$_x$)OHFeSe. The dashed, dashed-dotted and solid lines show fits to models with a single isotropic $s$-wave gap, two isotropic gaps and the $s\\times\\tau_3$ state respectively. The inset shows $\\rho_s$ upon varying $\\lambda(0)$ by $\\pm30\\%$, along with fits to a two-gap model.}\n \\label{superfluiddensity}\n\\end{figure}\n\n\\noindent As discussed previously, the behavior of $\\Delta\\lambda(T)$ at low temperatures and previous experimental results indicate nodeless superconductivity in (Li$_{1-x}$Fe$_x$)OHFeSe and therefore we fitted $\\rho_s(T)$ with various fully-gapped models. The simplest model is to assume an isotropic superconducting gap, with $\\Delta_k(T)~=~\\Delta(0)\\delta(T)$. The fitted curve for such an isotropic single band $s$-wave model with $\\Delta(0)~=~1.72k_BT_c$ is shown by the dashed line in Fig.~\\ref{superfluiddensity}, where although there is reasonable agreement above 0.5$T_c$, there is a significant discrepancy in the intermediate temperature range between 0.2$T_c$ and 0.5$T_c$. 
This difference arises due to the data dropping more quickly than the calculated $\\rho_s(T)$, indicating the presence of an additional lower energy scale in the gap structure, which is consistent with the smaller gap value obtained from fitting $\\Delta\\lambda(T)$ at low temperatures. Therefore the data are fitted with a two-gap model, as has been applied to many iron based superconductors. For this model the total superfluid density is given by the weighted sum of two components with different gaps,\n\\begin{equation}\n\\rho_{\\rm s}(T) = \\alpha\\rho{_{\\rm s}^1}(\\Delta_k^1, T) + (1-\\alpha)\\rho{_{\\rm s}^2}(\\Delta_k^2, T),\n\\label{equation4}\n\\end{equation}\n\n\\noindent where $\\rho_s^i (i=1, 2)$ is the normalized superfluid density with a gap function $\\Delta_k^i (i=1, 2)$ and $\\alpha$ is the relative weight for the component $\\rho_s^1$ ($0\\leq\\alpha\\leq1$). The fitting result is shown in Fig.~\\ref{superfluiddensity} by the dashed-dotted line, which agrees well with the data across the whole temperature range. The fitted parameters are $\\Delta_1(0)=0.8k_BT_c$, $\\Delta_2(0)=1.9k_BT_c$ and $\\alpha=0.13$, where the value of the small gap is close to that found from fitting $\\Delta\\lambda(T)$, which is further consistent with two-gap superconductivity. In order to consider possible uncertainties in the calibration constant $G$ or $\\lambda(0)$, in the inset we show $\\rho_s(T)$ upon varying $\\lambda(0)$ by $\\pm30\\%$. The data can still be fitted by a two gap model with slightly different parameters of $\\Delta_1(0)=0.8k_BT_c$, $\\Delta_2(0)=1.6k_BT_c$ and $\\alpha=0.15$ for $\\lambda(0)$~=~2000~\\AA ~and $\\Delta_1(0)=0.8k_BT_c$, $\\Delta_2(0)=2.1k_BT_c$ and $\\alpha=0.12$ for $\\lambda(0)$~=~3600~\\AA. 
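The superfluid-density integral, the tanh gap interpolation and the weighted two-gap sum above can be evaluated numerically as follows. This is a sketch of the fitting functions only (no data); energies are in units of $k_BT_c$, and the substitution $E=\sqrt{\epsilon^2+\Delta^2}$ removes the inverse-square-root singularity of the BCS density of states:

```python
import numpy as np

def rho_one_gap(t, d0, eps_max=40.0, n_pts=200001):
    """Normalized superfluid density for one isotropic gap d0 (units of
    kB*Tc) at reduced temperature t = T/Tc:
      rho_s = 1 - (1/2t) * int_0^inf sech^2( sqrt(eps^2 + Delta(t)^2)/2t ) d eps
    with Delta(t) = d0 * tanh{1.82 [1.018 (1/t - 1)]^0.51}."""
    if t >= 1.0:
        return 0.0
    d = d0 * np.tanh(1.82 * (1.018 * (1.0 / t - 1.0))**0.51)
    eps = np.linspace(0.0, eps_max, n_pts)
    x = np.minimum(np.sqrt(eps**2 + d**2) / (2.0 * t), 350.0)  # avoid cosh overflow
    integrand = 1.0 / np.cosh(x)**2
    integral = 0.5 * (eps[1] - eps[0]) * np.sum(integrand[1:] + integrand[:-1])
    return 1.0 - integral / (2.0 * t)

def rho_two_gap(t, d1=0.8, d2=1.9, alpha=0.13):
    """Weighted two-gap sum with the fitted parameters quoted in the text."""
    return alpha * rho_one_gap(t, d1) + (1.0 - alpha) * rho_one_gap(t, d2)

# The small gap depletes rho_s faster at intermediate temperatures than the
# single isotropic gap Delta(0) = 1.72 kB*Tc does:
print(rho_two_gap(0.3), rho_one_gap(0.3, 1.72))
```

Sweeping $t$ over $(0,1)$ with these two functions reproduces the qualitative difference between the dashed and dashed-dotted curves of Fig.~\ref{superfluiddensity}.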
The data were also well fitted with an anisotropic single band model with $\Delta_k(T,\phi)=\Delta(0)(1+r{\rm cos}2\phi)\delta(T)$, using $\Delta(0)=1.32k_BT_c$ and $r=0.65$, which is not shown for the sake of clarity.\n\nARPES measurements indicate that the Fermi surface consists of electron pockets at the Brillouin zone corners, without the presence of hole pockets at the zone center \cite{niu2015surface,zhao2016common}. From a recent STM study it was proposed on the basis of QPI measurements, as well as examining the effects of impurities, that there is no sign change of the superconducting gap across the Fermi surface \cite{yan2016surface}, in which case such a two-gap $s$-wave model readily explains the data. However, INS measurements show evidence for a spin resonance peak,\cite{Davies2016spin,pan2016structure} which indicates that there is a sign change of the order parameter between regions of the Fermi surface connected by the resonance wave vector. This would be incompatible with two-gap $s$-wave superconductivity with no sign change and is also difficult to account for with the $s_{\pm}$ state proposed for many iron arsenide superconductors, due to both the lack of a hole pocket at the zone center and a different nesting wave vector for the spin resonance. \cite{mazin2008unconventional,kuroki2008unconventional} On the other hand, a different sign changing $s_{\pm}$ state has been suggested for $A_x$Fe$_{2-y}$Se$_2$ (A~=~K, Rb, Cs), \cite{khodas2012interpocket,mazin2011symmetry} with a very similar Fermi surface to (Li$_{1-x}$Fe$_x$)OHFeSe.\n\nAnother proposed pairing state for nodeless sign changing superconductivity in iron based superconductors with only electron pockets is an orbital selective $s\times\tau_3$ state. \cite{nica2015orbital} In this scenario, intraband pairing has $d_{x^2-y^2}$ symmetry, while the interband pairing has $d_{xy}$ symmetry. 
As a result, the zeroes of the intraband and interband pairing are offset by an angle of $\pi\/4$ and therefore the gap remains nodeless. A simple model for the gap function of this state taking into account the Fermi surface is $\Delta_k(T,\phi)=[\Delta_1(0)^2+(\Delta_2(0)\sin\phi)^2]^{1\/2}\delta(T)$ \cite{nica2015orbital,SiPrivate}. As shown in Fig.~\ref{superfluiddensity}, this model can also fit the experimental data well, with fitted parameters of $\Delta_1(0)=1.05k_BT_c$ and $\Delta_2(0)=3.2k_BT_c$. In this case the gap minimum $\Delta_1(0)$ is slightly larger than the smaller gap from the two-gap $s$-wave fit. Therefore the data can be well accounted for by fitting with models with either two gaps or a strong gap anisotropy, and our results are compatible with two-gap behavior, anisotropic $s$-wave superconductivity or an orbital selective $s\times\tau_3$ state. It is often very difficult to distinguish between scenarios with multiple gaps and those where there is one anisotropic gap. While the isotropic nature of the gap inferred from ARPES would favor the two-gap scenario over an anisotropic gap, this still requires further study. \cite{niu2015surface,zhao2016common} Furthermore, the superfluid density is only sensitive to the gap magnitude rather than the phase, and different measurements are required to clarify the presence of a sign change and to determine the nature of the pairing state.\n\nWe note that there have been conflicting reports about the nature of the gap structure of (Li$_{1-x}$Fe$_x$)OHFeSe, with only a single gap being resolved from ARPES measurements \cite{niu2015surface,zhao2016common}, while two gaps are found from STS \cite{du2016scrutinizing,yan2016surface} and the in-plane superfluid density from $\mu$SR measurements is compatible with both single-gap and two-gap models. 
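The nodeless but strongly anisotropic character of this $s\times\tau_3$ gap function is simple to verify numerically; a sketch at zero temperature with the fitted values:

```python
import numpy as np

# Zero-temperature angular dependence of the orbital-selective s x tau3 gap,
# Delta(phi) = sqrt(Delta1^2 + (Delta2 * sin(phi))^2), with the fitted values.
d1, d2 = 1.05, 3.2                           # in units of kB*Tc
phi = np.linspace(0.0, 2.0 * np.pi, 3601)    # dense sampling around the pocket
gap = np.sqrt(d1**2 + (d2 * np.sin(phi))**2)

print(gap.min())   # = Delta1 at phi = 0, pi: anisotropic but never zero (nodeless)
print(gap.max())   # = sqrt(Delta1^2 + Delta2^2) at phi = pi/2, 3pi/2
```

The minimum gap equals $\Delta_1(0)$, which is why this model mimics the small gap of the two-gap fit in the low-temperature response.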
\cite{khasanov2016proximity} The gap values we obtain from fitting the in-plane superfluid density are smaller than those reported from previous measurements, particularly in the case of the smaller gap. Evidence for this smaller gap is clearly observed from our measurements of $\Delta\lambda(T)$ and $\rho_s$ at low temperatures, which may be a result of the high sensitivity of the TDO-based technique. A further reason for the discrepancies could be the non-stoichiometric nature of (Li$_{1-x}$Fe$_x$)OHFeSe, where the exact composition and homogeneity may influence the doping level, $T_c$ and the gap magnitudes. In addition, the doping level can be different between the surface and the bulk, \cite{niu2015surface} and therefore probes which primarily measure the surface properties may give different results. Indeed, different results have been found from different measurements of the gap structure of other iron-based superconductors, \cite{evtushinsky2009momentum} and significant sample dependence has been suggested for Ba$_{1-x}$K$_x$Fe$_2$As$_2$.\cite{hashimoto2009microwave} \n\n\n\section{summary}\n\nWe have measured the temperature dependence of the London penetration depth $\Delta\lambda(T)$ and the derived superfluid density $\rho_s(T)$ of the recently discovered high-$T_c$ iron-based superconductor (Li$_{1-x}$Fe$_x$)OHFeSe. The behavior of $\Delta\lambda(T)$ at low temperatures gives clear evidence for nodeless superconductivity, with a relatively small energy gap, while the analysis of $\rho_s(T)$ is consistent with both two-gap superconductivity and models with significant gap anisotropy such as an orbital selective $s\times\tau_3$ or anisotropic $s$-wave state.\n\nWe thank Q. Si, E. M. Nica, R. Yu and X. Lu for helpful discussions and valuable suggestions. This work was supported by the National Natural Science Foundation of China (No.11574370), National Key Research and Development Program of China (No. 
2016YFA0300202), and the Science Challenge Project of China. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Kronecker tensor structures for complex fields}\n\\label{App:ProjA}\nIn this appendix we demonstrate how a product of two operators can be decomposed onto irreps of $U(m)\\times U(n)$ in the picture where we work with complex fields. For simplicity we will start with just $U(n)$. The generalization to $U(m)\\times U(n)$ is trivial. The main utility of working with complex fields is that the projectors take their simplest form possible, namely as combinations of Kronecker deltas with the least number of indices possible. The real field picture can also have its projectors expressed in terms of Kronecker deltas, albeit at the cost of more indices. We hope that presenting our projectors in three different pictures will make our work more intuitive. In order to capture all irreps that can appear in the real field notation, and hence not miss any information in the bootstrap algorithm, we need to consider two OPEs, namely $\\Phi_i^\\dagger \\times \\Phi_j$ and $\\Phi_i \\times \\Phi_j$. Below we show how they may be decomposed onto irreps:\n\\begin{equation}\n\\begin{split}\n \\Phi_i^\\dagger \\times \\Phi_j &\\sim \\Big(\\Phi_i^\\dagger \\Phi_j - \\frac{1}{n}\\delta_{ij} \\Phi_k^\\dagger \\Phi_k \\Big) +\\frac{1}{n}\\delta_{ij} \\Phi_k^\\dagger \\Phi_k\\,, \\\\\n \\Phi_i \\times \\Phi_j &\\sim (\\Phi_i \\Phi_j + \\Phi_j \\Phi_i) + (\\Phi_i \\Phi_j - \\Phi_j \\Phi_i)\\,.\n\\end{split}\n\\label{complexOPE}\n\\end{equation}\nThe first line in \\eqref{complexOPE} shows the decomposition into the adjoint ($R$) and singlet ($S$) representations, where as the second line shows the decomposition into the symmetric ($T$) and antisymmetric representations ($A$). 
To read off the projectors from \\eqref{complexOPE} it is useful to remember\n\\begin{equation}\n\\begin{aligned}\n O^X_{ij} \\sim P^X_{ijkl} \\Phi_k \\Phi_l\\,,\\qquad\n O^X_{ij} \\sim P^X_{ijkl} \\Phi^\\dagger_k \\Phi_l\\,,\n\\end{aligned}\n\\label{projdefinitionscomplex}\n\\end{equation}\nwhere $O^X$ is the exchanged field in some irrep, e.g.\\ $O^T_{12} \\sim \\Phi_1 \\Phi_2 + \\Phi_2 \\Phi_1$. The first relation in \\eqref{projdefinitionscomplex} can be used to read off the $T$ and $A$ projectors, whereas the second can be used for $S$ and $R$. Notice that we have implicitly assumed that fields are inserted at different positions in order for antisymmetric irreps to not vanish identically. The projectors can be read off as\\footnote{Note that we take the correlator to be $\\langle \\Phi_i^\\dagger \\Phi_j \\Phi_k \\Phi_l^\\dagger \\rangle$ which is why the projector of the adjoint representation is equal to $P^R_{ijkl} =\\delta_{ik}\\delta_{jl}-\\frac{1}{n}\\delta_{ij}\\delta_{kl}$ instead of $P^R_{ijkl} =\\delta_{il}\\delta_{jk}-\\frac{1}{n}\\delta_{ij}\\delta_{kl}$.}\n\\begin{align}\n P^S_{ijkl} &=\\frac{1}{n}\\delta_{ij}\\delta_{kl}\\,,\\qquad\n P^R_{ijkl} =\\delta_{ik}\\delta_{jl}-\\frac{1}{n}\\delta_{ij}\\delta_{kl}\\,, \\\\\n P^T_{ijkl} &=\\frac{1}{2}(\\delta_{ik}\\delta_{jl} +\\delta_{il}\\delta_{jk})\\,,\\qquad\n P^A_{ijkl}= \\frac{1}{2}(\\delta_{ik}\\delta_{jl} -\\delta_{il}\\delta_{jk})\\,.\n\\label{complexprojectorsUn}\n\\end{align}\nThe dimensions of the corresponding irreps are $(1,(n-1)(n+1),\\frac12n(n+1),\\frac12n(n-1))$. As the reader may have observed from the main text, or the next appendix, when going from the complex field picture to the real field picture the above dimensions get multiplied by a factor of $2$. For example $\\frac{n(n+1)}{2}$ becomes $n(n+1)$. This is because each of the initial $\\frac{n(n+1)}{2}$ complex elements contains two real elements. The last step now is to write down the projectors for $U(m) \\times U(n)$. 
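The projectors in \eqref{complexprojectorsUn} and the quoted dimensions are easy to verify numerically. A sketch with NumPy (the choice $n=4$ is arbitrary); composing two projectors contracts the middle pair of indices, and the trace $P_{ijij}$ counts the dimension of the corresponding irrep:

```python
import numpy as np

n = 4                                        # arbitrary illustrative choice
d = np.eye(n)
I = np.einsum('ik,jl->ijkl', d, d)           # identity on index pairs

PS = np.einsum('ij,kl->ijkl', d, d) / n                  # singlet
PR = I - PS                                              # adjoint
PT = 0.5 * (I + np.einsum('il,jk->ijkl', d, d))          # symmetric
PA = 0.5 * (I - np.einsum('il,jk->ijkl', d, d))          # antisymmetric

def compose(P, Q):
    # (PQ)_{ijkl} = P_{ijmn} Q_{mnkl}
    return np.einsum('ijmn,mnkl->ijkl', P, Q)

for P in (PS, PR, PT, PA):
    assert np.allclose(compose(P, P), P)     # projectors are idempotent
assert np.allclose(compose(PS, PR), 0.0)     # and mutually orthogonal

# Traces P_{ijij} give the dimensions (1, (n-1)(n+1), n(n+1)/2, n(n-1)/2):
print([float(np.einsum('ijij->', P)) for P in (PS, PR, PT, PA)])  # [1.0, 15.0, 10.0, 6.0]
```

For $U(m)\times U(n)$ the product projectors are outer products of these building blocks, so their traces simply multiply, e.g. $P^{RR}$ has trace $(m^2-1)(n^2-1)$.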
This is trivial, in the sense that they are just products of $U(m)$ with $U(n)$ projectors. We have\n\\begin{equation}\n\\begin{aligned}\n P^{S}_{ijklmnop} &=P^S_{ijkl}P^S_{mnop} \\,,\\quad\n P^{RS}_{ijklmnop} =P^R_{ijkl}P^S_{mnop} \\,,\\quad\n P^{SR}_{ijklmnop} =P^S_{ijkl}P^R_{mnop} \\,,\\\\\n P^{RR}_{ijklmnop} &=P^R_{ijkl}P^R_{mnop} \\,,\\quad\n P^{TT}_{ijklmnop} =P^T_{ijkl}P^T_{mnop} \\,,\\quad\n P^{TA}_{ijklmnop} =P^T_{ijkl}P^A_{mnop} \\,,\\\\\n P^{AT}_{ijklmnop} &=P^A_{ijkl}P^T_{mnop} \\,,\\quad\n P^{AA}_{ijklmnop} =P^A_{ijkl}P^A_{mnop}\\,.\n\\end{aligned}\n\\label{complexprojectorsUnII}\n\\end{equation}\nThe sum rules that can be derived with the above projectors can be checked to be completely equivalent to those derived from the projectors of real fields outlined in the main text. Another observation is that, compared to real fields, we do not need separate projectors for even and odd spins.\n\n\\section{Kronecker tensor structures for real fields}\n\\label{App:ProjB}\nThe projectors that correspond to a four-point function of real fields can be intuitively presented in terms of Kronecker deltas if we add an additional index. This form is useful since one may directly extract the form of exchanged operators, as we will show. Knowing the form of the exchanged operators is useful since it can guide us with respect to assumptions we may impose. Also, we expect it to be easier to work with in a mixed correlator system. We start by labeling the real and imaginary parts of an operator $\\Phi_i$ with an index (we start with $U(n)$ for simplicity)\n\\begin{equation}\n\\Phi_i = \\phi^1_{i} + i\\hspace{1pt}\\phi^2_{i}\\,,\n\\label{complextoreal}\n\\end{equation}\nwhere the upper case $\\Phi$ denotes the complex operator and the lower case $\\phi$ denotes real fields. We must now simply plug \\eqref{complextoreal} into the expressions for the representations of the previous appendix. 
For simplicity we will do this for the singlet representation, and then quote the results for the rest of the representations. Note that implicitly we consider the two external fields of the OPE at different positions, for otherwise the antisymmetric combinations would vanish identically. We have\n\\begin{equation}\n \\Phi_i^\\dagger \\Phi_i = (\\phi^1_i \\phi^1_i + \\phi^2_i \\phi^2_i ) + i (\\phi^1_i \\phi^2_i - \\phi^2_i \\phi^1_i)\\,,\n\\end{equation}\nwhere the first parenthesis corresponds to what was called $S_{\\text{even}}$ in the main text, and the second parenthesis corresponds to what was called $S_{\\text{odd}}$. As expected, $S_{\\text{odd}}$ vanishes identically if we don't insert powers of derivatives between the operators. The projectors are now very straightforward to write down by recalling the relation\n\\begin{equation}\n O^X_{ij;ab} = P^X_{ijkl;abcd}\\hspace{1pt} \\phi^c_k \\phi^d_l\\,,\n\\label{projrelation}\n\\end{equation}\nwhere $X$ stands for some specific irrep and indices from the beginning of the Latin alphabet take the values $1,2$. Notice that \\eqref{projrelation} is simply the statement that projectors must project products of operators onto irreps. We have\n\\begin{equation}\n\\begin{split}\n P^{S_{\\text{even}}}_{ijkl;abcd} &= \\frac{1}{2n} \\delta_{ab}\\delta_{cd}\\delta_{ij}\\delta_{kl}\\,, \\\\\n P^{S_{\\text{odd}}}_{ijkl;abcd} &= \\frac{1}{2n}(\\delta_{ac}\\delta_{bd}-\\delta_{ad}\\delta_{bc})\\delta_{ij}\\delta_{kl}\\,.\n\\end{split}\n\\end{equation}\nIndeed, one may confirm that, for example,\n\\begin{equation}\n O^{S_{\\text{even}}}_{11;11}\\sim(\\phi^1_i \\phi^1_i + \\phi^2_i \\phi^2_i ) \\sim P^{S_{\\text{even}}}_{11kl;11cd} \\hspace{1pt}\\phi^{c}_k \\phi^{d}_l\\,.\n\\end{equation}\nThis procedure can be repeated for the rest of the irreps. 
The resulting projectors are\n\\begin{equation}\n\\begin{split}\n P^{S_{\\text{even}}}_{ijkl;abcd} &= \\frac{1}{2n} \\delta_{ab}\\delta_{cd}\\delta_{ij}\\delta_{kl}\\,, \\\\\n P^{S_{\\text{odd}}}_{ijkl;abcd} &= \\frac{1}{2n}(\\delta_{ac}\\delta_{bd}-\\delta_{ad}\\delta_{bc})\\delta_{ij}\\delta_{kl}\\,,\\\\\n P^{R_{\\text{even}}}_{ijkl;abcd} &= \\tfrac{1}{2}\\delta_{ab}\\delta_{cd}\\Big(\\delta_{ik}\\delta_{jl}-\\frac{1}{n}\\delta_{ij}\\delta_{kl}\\Big)\\,,\\\\\n P^{R_{\\text{odd}}}_{ijkl;abcd} &= \\tfrac{1}{2}(\\delta_{ac}\\delta_{bd}-\\delta_{ad}\\delta_{bc})\\Big(\\delta_{ik}\\delta_{jl}-\\frac{1}{n}\\delta_{ij}\\delta_{kl}\\Big)\\,,\\\\\n P^{T_{\\text{even}}}_{ijkl;abcd} &= \\tfrac{1}{4}(\\delta_{ac}\\delta_{bd}+\\delta_{ad}\\delta_{bc}-\\delta_{ab}\\delta_{cd})(\\delta_{ik}\\delta_{jl}+\\delta_{il}\\delta_{jk})\\,,\\\\\n P^{A_{\\text{odd}}}_{ijkl;abcd} &= \\tfrac{1}{4}(\\delta_{ac}\\delta_{bd}+\\delta_{ad}\\delta_{bc}-\\delta_{ab}\\delta_{cd})(\\delta_{ik}\\delta_{jl}-\\delta_{il}\\delta_{jk})\\,.\n\\end{split}\n\\end{equation}\nThe dimensions of the corresponding irreps are $(1,1,(n-1)(n+1),(n-1)(n+1),n(n+1),n(n-1))$.\n\nUsing the above expressions, it is now trivial to write down the $U(m)\\times U(n)$ projectors:\n\\begin{align}\n P^{S_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{S_{\\text{even}}}_{ijkl;abcd}\\hspace{1pt} P^{S_{\\text{even}}}_{mnop;efgh}+ P^{S_{\\text{odd}}}_{ijkl;abcd}\\hspace{1pt} P^{S_{\\text{odd}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{S_{\\text{odd}}}_{ijklmnop;abcdefgh} &= P^{S_{\\text{even}}}_{ijkl;abcd}\\hspace{1pt} P^{S_{\\text{odd}}}_{mnop;efgh}+ P^{S_{\\text{odd}}}_{ijkl;abcd}\\hspace{1pt} P^{S_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{RS_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{R_{\\text{even}}}_{ijkl;abcd}P^{S_{\\text{even}}}_{mnop;efgh} + P^{R_{\\text{odd}}}_{ijkl;abcd}P^{S_{\\text{odd}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{RS_{\\text{odd}}}_{ijklmnop;abcdefgh} &= 
P^{R_{\\text{even}}}_{ijkl;abcd}P^{S_{\\text{odd}}}_{mnop;efgh} + P^{R_{\\text{odd}}}_{ijkl;abcd}P^{S_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{SR_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{S_{\\text{even}}}_{ijkl;abcd}P^{R_{\\text{even}}}_{mnop;efgh} + P^{S_{\\text{odd}}}_{ijkl;abcd}P^{R_{\\text{odd}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{SR_{\\text{odd}}}_{ijklmnop;abcdefgh} &= P^{S_{\\text{even}}}_{ijkl;abcd}P^{R_{\\text{odd}}}_{mnop;efgh} + P^{S_{\\text{odd}}}_{ijkl;abcd}P^{R_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{RR_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{R_{\\text{even}}}_{ijkl;abcd}P^{R_{\\text{even}}}_{mnop;efgh} + P^{R_{\\text{odd}}}_{ijkl;abcd}P^{R_{\\text{odd}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{RR_{\\text{odd}}}_{ijklmnop;abcdefgh} &= P^{R_{\\text{even}}}_{ijkl;abcd}P^{R_{\\text{odd}}}_{mnop;efgh} + P^{R_{\\text{odd}}}_{ijkl;abcd}P^{R_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{TT_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{T_{\\text{even}}}_{ijkl;abcd}P^{T_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{TA_{\\text{odd}}}_{ijklmnop;abcdefgh} &= P^{T_{\\text{even}}}_{ijkl;abcd}P^{A_{\\text{odd}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{AT_{\\text{odd}}}_{ijklmnop;abcdefgh} &= P^{A_{\\text{odd}}}_{ijkl;abcd}P^{T_{\\text{even}}}_{mnop;efgh}\\,,\\nonumber\\\\\n P^{AA_{\\text{even}}}_{ijklmnop;abcdefgh} &= P^{A_{\\text{odd}}}_{ijkl;abcd}P^{A_{\\text{odd}}}_{mnop;efgh}\\,.\n\\end{align}\nFrom these expressions we can also see explicitly that when $m=n$, if we choose to consider the two $U(n)$ symmetries as indistinguishable (which we remind the reader is not strictly necessary), the $RS$ irreps become the same as the $SR$ irreps. The same also happens for $TA$ and $AT$.\n\n\\section{Numerical parameters}\n\\label{App:C}\nFor most of our plots, the bounds are obtained with the use of\n\\texttt{PyCFTBoot}~\\cite{Behan:2016dtz} and \\texttt{SDPB}~\\cite{Landry:2019qug}. 
We use the numerical parameters $\\texttt{m\\_max}=6, \\texttt{n\\_max}=9, \\texttt{k\\_max}=36$ in \\texttt{PyCFTBoot}, and we include spins up to $\\texttt{l\\_max}=26$. The binary precision for the produced \\texttt{xml} files is 896 digits. \\texttt{SDPB} is run with the options \\texttt{-}\\texttt{-precision=896}, \\texttt{-}\\texttt{-detectPrimalFeasibleJump}, \\texttt{-}\\texttt{-detectDualFeasibleJump} and default values for other parameters. We refer to this set of parameters as ``\\!\\emph{A}''. Unless otherwise stated, our plots are run with parameters ``\\!\\emph{A}''.\n\nFor some of the plots we used $\\texttt{m\\_max}=5, \\texttt{n\\_max}=7, \\texttt{l\\_max}=36, \\texttt{k\\_max}=42$ and\n$\\texttt{m\\_max}=6, \\texttt{n\\_max}=9, \\texttt{l\\_max}=36, \\texttt{k\\_max}=42$, referred to as ``\\!\\emph{B}'' and ``\\hspace{-1pt}\\emph{C}\\hspace{1pt}'' respectively. Lastly, we also used \\texttt{qboot} \\cite{Go:2020ahx}, with $\\Lambda=15$, $\\texttt{n\\_max}=500$, $\\nu\\texttt{\\_max}=25$ and $l=\\{0\\text{--}49,55,56, 59, 60, 64, 65, 69, 70, 74, 75,\\\\ 79, 80, 84, 85, 89, 90\\}$ referred to as ``\\hspace{-1pt}\\emph{D}\\hspace{0.5pt}''.\n\n\n\n\n\n\\end{appendices}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent years, data involving large numbers of features have become increasingly prevalent. Broadly speaking, there are two main approaches to analyzing such data: \\textit{large-scale testing} and \\textit{regression modeling}. 
The former entails conducting separate tests for each feature, while the latter considers all features simultaneously in a single model.\nA major advance in large-scale testing has been the development of methods for estimating \\textit{local false discovery rates}, which provide an assessment of the significance of individual features while controlling the false discovery rate across the multiple tests.\nWe present here an approach for extending local false discovery rates to penalized regression models such as the lasso, thereby quantifying each feature's importance in a way that has been absent in the field of penalized regression until now.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=.95\\textwidth]{Fig1}\n\\caption{\\label{Fig:Example} The rows of this figure correspond to two simulated datasets. The column on the left shows the usual lasso coefficient path, while the column on the right displays our method's feature-specific local false discovery rates ($\\mfdr$) along the lasso path. The triangles prior to the start of the mfdr path are the traditional local false discovery rate estimates resulting from large-scale testing. Along the mfdr path, dashed lines indicate the portion of the path where a feature is inactive ($\\bh = 0$) in the model. The vertical dotted line shows the value of the penalty parameter, $\\lambda$, chosen by cross validation.}\n\\end{figure}\n\nFigure~\\ref{Fig:Example} shows two simulated examples to illustrate the information provided by local false discovery rates in the penalized regression context. In both examples, there are two features causally related to the outcome, two features that are correlated with the causal variables, and 96 features that are purely noise. 
The panels on the left show the lasso coefficient estimates returned by standard software packages such as \\texttt{glmnet} \\citep{Friedman2010}, while the panels on the right display the local false discovery rate, with a dotted line at $\\lambda_{\\CV}$, the point along the path which minimizes cross validation error. In the upper left panel, the model at $\\lambda_{\\CV}$ contains several noise variables, but the two causal features clearly stand out from others in the coefficient path. The $\\mfdr$ plot in the upper right panel confirms this visual assessment -- the two causal features have much lower false discovery rates than the other features. The dataset in the second row demonstrates a more challenging case. Here, it is not obvious from the coefficient path which features are significant and which may be noise. The $\\mfdr$ plot in the lower right panel lends clarity to this situation, showing at $\\lambda_{\\CV}$ that the truly causal features are considerably less likely to be false discoveries.\n\nThe two $\\mfdr$ paths of Figure~\\ref{Fig:Example} also illustrate the connection between the $\\mfdr$ approach and traditional large-scale testing approach to local false discovery rates. These univariate local false discovery rates are denoted by the triangles prior to the start of the $\\mfdr$ path, and are equivalent to the $\\mfdr$ estimates at the beginning of the $\\mfdr$ path when no features are active in the model. Initially, each method identifies both causal features, along with some of the correlated features, as important. 
However, as $\\lambda$ decreases, and the causal features become active in the model, the regression-based $\\mfdr$ method reveals that the correlated features are only indirectly related to the outcome.\n\nHaving presented an initial case for the utility of $\\mfdr$ and an illustration of the connections it shares with both traditional local false discovery rates and lasso regression, we structure the remainder of the paper as follows: Section~\\ref{Sec:mfdr_back} gives a more formal introduction to false discovery rate approaches in the context of both large-scale testing and model-based approaches to high-dimensional data. Section~\\ref{Sec:method} introduces our lasso-based $\\mfdr$ estimator in the linear regression setting and extends the approach to a more general class of penalized likelihood models including penalized logistic and Cox regression models. Section~\\ref{Sec:loc_sims} studies the $\\mfdr$ approach using simulation, comparing it to existing methods commonly used in high-dimensional analysis, and Section~\\ref{Sec:loc_case} explores two real data case studies where the method proves to be useful.\n\n\\section{Background}\n\\label{Sec:mfdr_back}\n\nIn the context of both large-scale testing and model-based approaches, this paper will focus on false discovery rates, a common approach to inference in high-dimensional data analysis. There are two main types of false discovery rates: tail-area approaches, which describe the expected rate of false discoveries for all features beyond a given threshold, and local approaches, which describe the density of false discoveries at a specific point. We adopt the general convention throughout, used by many other authors, of using Fdr to refer to tail-area approaches, and fdr to refer to local approaches, reflecting the traditional use of $F$ and $f$ to refer to distribution and density functions. Both Fdr and fdr have been well studied in the realm of large-scale testing. 
The seminal Fdr procedure \\citep{BH_1995} remains a widely popular approach to Fdr and has led to many extensions and related approaches \\citep[][and many others]{Storey2004, Genovese2004, Strimmer2008}. Using an empirical Bayes framework, \\citet{Efron2001} introduced the idea of local false discovery rates, a proposal which has also been extended in many ways \\citep[e.g.,][]{Muralidharan2010,Stephens2017}. In this section, we provide a brief overview of false discovery rate estimation from an empirical Bayes estimation perspective; for more information, including other ways of motivating FDR estimation and control, see \\citet{Farcomeni2008} and \\citet{Efron_book}.\n\nRecently, false discovery rates have been considered in the realm of high-dimensional modeling, although thus far this research has focused exclusively on tail-area approaches. The majority of this work has been concentrated on lasso regression \\citep{tibshirani_1996}, a popular modeling approach which naturally performs variable selection by using $L_1$ regularization. The issue of false discovery rate control is particularly important in this case, as false discoveries can be quite prevalent for lasso models \\citep{Su2017}.\n\nThe false discovery rate control provided by large-scale testing approaches is marginal in the sense that a feature $X_j$ is considered a false discovery only if that feature is marginally independent of the outcome $Y$: $X_j \\independent Y$. In regression, where many features are being considered simultaneously, the issue is more complicated and can involve various kinds of conditional independence. For example, we can adopt a \\textit{fully conditional} perspective, which considers a feature $X_j$ to be a false discovery if it is independent of the outcome conditional upon all other features: $X_j \\independent Y | X_{k \\neq j}$. 
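To see the difference between the marginal and fully conditional perspectives concretely, consider a small simulation (an illustrative sketch; the variables here are hypothetical): a feature that is merely correlated with a causal one is marginally associated with the outcome, yet its coefficient in a joint fit is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)                                   # causal feature
x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)  # corr(x1, x2) ~ 0.7, not causal
y = x1 + rng.standard_normal(n)

# Marginally, x2 is clearly associated with y ...
r_marginal = np.corrcoef(x2, y)[0, 1]                         # roughly 0.7 / sqrt(2)

# ... but conditionally on x1 it is not: its joint-fit coefficient is ~ 0
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(r_marginal, 2), round(b[2], 2))
```

Under the marginal definition $x_2$ is a true discovery here, while under the fully conditional definition it is a false one.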
Several approaches to controlling the fully conditional false discovery rate have been proposed, including procedures based on the bootstrap \\citep{Dezeure2017}, de-biasing \\citep{Javanmard2014}, sample splitting \\citep{Wasserman2009,Meinshausen2009}, and the knock-off filter \\citep{barber2015, candes2018}.\n\nAn alternative for penalized models is the \\textit{pathwise conditional} perspective. Pathwise approaches focus on the point in the regularization path at which feature $j$ first becomes active and condition only on the other variables present in the model (denote this set $M_j$) when assessing whether or not variable $j$ is a false discovery: $X_j \\independent Y | M_j$. The methods of \\citet{CovTest} and \\citet{Selective_Inference} used in conjunction with the sequential stopping rule of \\citet{GSell2016} allow for control over the pathwise conditional Fdr. \n\nLess restrictive approaches to false discovery rates for penalized regression models have also been proposed. \\citet{Breheny2019} developed an analytic method which bounds the \\textit{marginal} false discovery rate (mFdr) of penalized linear regression models. \\citet{Miller2019} extended this approach to a more general class of penalized likelihood models, while \\citet{HuangFDR} addressed a similar question using a Monte Carlo approach.\n\nIn this paper we combine the marginal perspective on false discoveries employed by \\citet{Breheny2019} and \\citet{Miller2019} with the idea of local false discovery rates. 
The resulting method provides a powerful new inferential tool for penalized regression models, allowing one to assess each individual feature's probability of having been selected into the model by chance alone.\n\n\\subsection{Large-scale testing, Fdr, and fdr}\n\nConsider data of the usual form $(\\y, \\X)$, where $\\y$ denotes the response for $i = \\{1, \\ldots, n\\}$ independent observations and $\\X$ is an $n$ by $p$ matrix containing the values of $j = \\{1, \\ldots, p\\}$ explanatory features. We presume that only a subset of the available features have a non-null relationship with the outcome, and the goal of our analysis is to correctly identify those important features.\n\nLet $\\beta_j$ denote the effect of feature $j$ in relation to $\\y$; this paper focuses on regression coefficients, but $\\beta_j$ could also represent a difference in means or some other measure. Large-scale univariate testing considers $p$ separate null hypotheses, $H_j:\\beta_j = 0$, each corresponding to a single feature, and conducts a univariate test on each of those hypotheses. These tests are usually performed by scaling the estimated effects $\\{\\bh_1, \\bh_2, \\ldots, \\bh_p\\}$ by their standard errors $\\{\\hat{s}_1, \\hat{s}_2, \\ldots, \\hat{s}_p\\}$ to obtain test statistics $\\{t_1, t_2, \\ldots, t_p\\}$ which are used to calculate p-values $\\{p_1, p_2, \\ldots, p_p\\}$. Alternatively, these test statistics can be converted to $z$-values defined by $z_j = \\Phi^{-1}(F_t(t_j))$, where $\\Phi$ is the standard normal CDF, so that the $z$-values for features with true null hypotheses will follow a N(0,1) distribution.\n\nThe false discovery rate is a tool for meaningfully aggregating the results of these tests while quantifying the expected proportion of false positives. 
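The testing pipeline just described can be sketched in a few lines (simulated data; an illustrative sketch rather than any particular package's implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                           # 5 non-null features, 495 nulls
y = X @ beta + rng.standard_normal(n)

# One univariate test per feature: t_j = bh_j / se_j
t = np.empty(p)
for j in range(p):
    fit = stats.linregress(X[:, j], y)
    t[j] = fit.slope / fit.stderr

# z_j = Phi^{-1}(F_t(t_j)): null z-values are approximately N(0, 1)
z = stats.norm.ppf(stats.t.cdf(t, df=n - 2))
```

The resulting $z$-values are the raw material for the false discovery rate estimates discussed next.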
In this paper, our primary focus is on local false discovery rates, or estimates of $\\Pr(H_j | \\bh_j, \\hat{s}_j) = \\Pr(H_j | z_j)$, the probability that feature $j$ has a null relationship with the outcome given the observed data.\n\nA natural framework for estimating this probability is to assume that features arise from two classes with prior probabilities $\\pi_0$ and $\\pi_1$, with $f_0(z)$ denoting the density of the $z$-values for features in the null class and $f_1(z)$ denoting the density for features in the non-null class, with the mixture density $f(z) = \\pi_0f_0(z) + \\pi_1f_1(z)$ giving the marginal distribution of $z$. In addition, let $\\cZ$ denote any subset of the real line, $F_0(\\cZ) = \\int_{\\cZ}^{}f_0(z)dz$ denote the probability of observing $z \\in \\cZ$ for a null feature and $F_1(\\cZ) = \\int_{\\cZ}^{}f_1(z)dz$ the probability for a non-null feature.\n\nGiven the framework described in the preceding paragraph, applying Bayes' rule yields\n\\begin{align} \\label{eq:FdrBayes}\n\\Pr(\\text{Null}|z \\in \\cZ) = \\frac{\\pi_0F_0(\\cZ)}{F(\\cZ)} = \\Fdr(\\cZ),\n\\end{align}\nwhere $F(\\cZ) = \\pi_0F_0(\\cZ) + \\pi_1F_1(\\cZ)$. In a typical application, $\\cZ$ would denote a tail area such as $z > 3$ or $\\abs{z} > 3$, and would allow the analyst to control the Fdr through the choice of $\\cZ$.\n\nInstead of focusing on the tail area condition $z \\in \\cZ$, we may alternatively consider the limit as $\\cZ$ approaches the single point $z_j$, in which case the distribution functions become density functions and we have\n\\begin{align} \\label{eq:fdrzch2}\n\\Pr(\\text{Null}|z = z_j) = \\frac{\\pi_0f_0(z_j)}{f(z_j)} = \\fdr(z_j).\n\\end{align}\nThis quantity is typically referred to as the ``local'' false discovery rate, and describes feature $j$ specifically, as opposed to the collection of features whose $z$-statistics fall in $\\cZ$.\n\nThere are several important connections between Fdr and fdr \\citep{Efron_locfdr}. 
Of particular interest is the relationship\n\\begin{equation} \\label{eq:Fdr-fdr}\n\\Fdr(\\cZ) = \\Ex\\big(\\fdr(z)| z \\in \\cZ\\big).\n\\end{equation}\nIn words, the Fdr of the set of features whose normalized test statistics fall in the tail region $\\cZ$ is equal to the average fdr of the features in $\\cZ$. This ensures that selecting individual features using a threshold $fdr(z) < \\alpha$ also limits Fdr below $\\alpha$ for the entire set of features defined by the threshold. Additional detail on the links between Fdr and fdr can be found in \\citet{Efron_locfdr} and \\citet{Strimmer2008}.\n\nBroadly speaking, there are two main approaches to estimating local false discovery rates given the observed collection $\\{z_j\\}_{j=1}^p$. Recall that by construction, $f_0$ is the density function of the standard normal distribution; thus, $\\pi_0$ and $f$ are the only quantities that must be estimated. The first approach, originally proposed in \\citep{Efron_locfdr} but extended and modified by many authors since then, focuses on estimating the marginal density $f$. Replacing $\\pi_0$ with its upper bound of 1, and estimating $f(z)$ using any nonparametric density estimation method (e.g., kernel density estimation), we have\n\\begin{align} \\label{eq:fdrhat1}\n\\widehat{\\fdr}(z_j) = \\frac{f_0(z_j)}{\\hat{f}(z_j)}.\n\\end{align}\n\nAlternatively, one can explicitly model the mixture distribution of $z$ (in this approach, there is typically one null distribution and many non-null distributions), obtaining the estimates $\\hat{\\pi}_0, \\hat{\\pi}_1, \\hat{\\pi}_2, \\ldots$. 
The estimated local fdr is therefore\n\\begin{align} \\label{eq:fdrhat2}\n\\widehat{\\fdr}(z_j) = \\frac{\\hat{\\pi}_0f_0(z_j)}{\\sum_{k=0}^K\\hat{\\pi}_k\\hat{f}_k(z_j)},\n\\end{align}\nwhere $K$ is the number of non-null mixture components and $\\hat{f}$ must be estimated for the non-null components; estimation of $\\pi_k$ and $f_k$ is typically accomplished with maximum likelihood via the EM algorithm. This approach was originally proposed by \\citet{Muralidharan2010}, but as with the marginal density approach, has been explored by many other authors since then. In particular, we focus on a mixture model proposed by \\citet{Stephens2017}, which requires that all non-null components have a mode of 0 (Stephens refers to this as the ``unimodal assumption''). One attractive aspect of this model, which is implemented in the R package \\texttt{ashr}, is that the resulting fdr increases monotonically as $z$ moves away from 0 in either direction; this is typically not true for marginal density estimates of the form \\eqref{eq:fdrhat1}.\n\nIn the sections that follow we refer to fdr estimates based on \\eqref{eq:fdrhat2} as the ``ashr'' approach, and fdr estimates found using \\eqref{eq:fdrhat1} as the ``density'' approach. Both ashr and density approaches estimate the posterior probability, given the observed data, that feature $j$ is a false discovery, but their assumptions can lead to different estimates. We further discuss the relative strengths and weaknesses of these two approaches in Section~\\ref{Sec:density_est}.\n\n\\subsection{Penalized regression and mFdr}\n\\label{Sec:background_mfdr}\n\nIn contrast with the univariate nature of large-scale testing, regression models simultaneously relate all of the explanatory features in $\\X$ with $\\y$ using a probability model involving coefficients $\\bb$. In what follows we assume the columns of $\\X$ are standardized such that each variable has a mean of $0$ and $\\sum_i \\x_{ij}^2 = n$. 
The fit of a regression model can be summarized using the log-likelihood, which we denote $\\ell(\\bb|\\X,\\y)$. In the classical setting, $\\bb$ is estimated by maximizing $\\ell(\\bb|\\X,\\y)$. However, this approach is unstable when $p > n$ unless an appropriate penalty is imposed on the size of $\\bb$.\nIn the case of the lasso penalty, estimates of $\\bb$ are found by minimizing the objective function:\n\\al{eq:obj}{\nQ(\\bb|X,\\y) = -\\frac{1}{n} \\ell(\\bb|X,\\y) + \\lambda||\\bb||_1\n}\n\nThe maximum likelihood estimate is found by setting the score, $\\u(\\bb) = \\nabla \\ell(\\bb|\\X,\\y)$, equal to zero. The lasso estimate, $\\bbh$, can be found similarly, although allowances must be made for the fact that the penalty function is typically not differentiable. These penalized score equations are known as the Karush-Kuhn-Tucker (KKT) conditions in the convex optimization literature, and are both necessary and sufficient for a solution $\\bbh$ to minimize $Q(\\bb|\\X,\\y)$. \n\nAn important property of the lasso is that it naturally performs variable selection. The lasso estimates are sparse, meaning that $\\bh_j = 0$ for a large number of features, with only a subset of the available features being active in the model. The regularization parameter $\\lambda$ governs the degree of sparsity, with smaller values of $\\lambda$ leading to more variables having non-zero coefficients.\n\nThe KKT conditions can be used to develop an upper bound for the number of features expected to be selected in a lasso model by random chance. Heuristically, if feature $j$ is marginally independent of $\\y$, then $\\Pr(\\bh_j \\neq 0)$ is approximately equal to $\\Pr(\\tfrac{1}{n}\\abs{u_j(\\bb)} > \\lam)$, where $\\u_j$ denotes the $j^{th}$ component of the score function (gradient of the log-likelihood). Classical likelihood theory provides asymptotic normality results and allows for estimation of this tail probability, which in turn provides a bound on the mFdr. 
For additional details and proofs, see \\citet{Miller2019}.\n\nThis approach provides an overall assessment of model selection, but it does not offer any specific information about individual features.\nIt is often the case that among the selected features, some appear to be clearly related to the outcome while others are of borderline significance.\nFor example, as suggested by \\eqref{eq:Fdr-fdr}, we may select two features, one with a 1\\% fdr and the other with a 39\\% fdr, but the overall Fdr of the model is 20\\%. Providing this level of feature-specific inference is the major motivation for estimating local false discovery rates for penalized regression.\n\nThe ability to provide feature-specific false discovery rates also allows one to overcome the tension between predictive accuracy and selection reliability. For lasso models, it is typically the case that the number of features that can be selected under Fdr restrictions is much smaller than the number of active features in the model that achieves maximum predictive performance as determined by cross-validation.\nThis poses something of a dilemma, as we must choose between a model with sub-optimal predictions and one with a high proportion of false discoveries.\nLocal fdr, however, allows us to use the most predictive model while retaining the ability to identify features that are unlikely to be false discoveries.\n\n\\section{Estimating mfdr}\n\\label{Sec:method}\n\nWe begin by mentioning that the elements of $\\bbh$ are not directly suitable for local false discovery rate estimation. In particular, for most choices of $\\lam$, $\\bh_j$ is exactly zero for many features, making it impossible to construct statistics with a N(0,1) distribution under the null. Instead, we use the KKT conditions, which mathematically characterize feature selection at a given value of $\\lambda$, to construct normally distributed statistics appropriate for the given model. 
Section~\\ref{Sec:linear} addresses linear regression, while Section~\\ref{Sec:glm_cox} addresses GLM and Cox regression models.\n\n\\subsection{Linear regression} \n\\label{Sec:linear}\n\nConsider the linear regression setting:\n\\begin{align*}\n\\y = \\X \\bb + \\pmb{\\epsilon}, \\qquad \\epsilon_i \\sim N(0, \\sigma^2).\n\\end{align*}\nAs mentioned in Section~\\ref{Sec:background_mfdr}, the lasso solution, $\\bbh$, is mathematically characterized by the KKT conditions, which are given by \\citep{lasso_kkt}:\n\\begin{alignat*}{2}\n\\frac{1}{n}\\x_j^T(\\y - \\X\\bbh) &= \\lambda \\textrm{ sign}(\\bh_j) \\qquad & & \\text{for all } \\bh_j \\ne 0 \\\\\n\\frac{1}{n}|\\x_j^T(\\y - \\X\\bbh)| &\\leq \\lambda & & \\text{for all } \\bh_j = 0.\n\\end{alignat*}\n\nWe define the partial residual as $\\rj = \\y - \\X_{-j}\\bbh_{-j}$, where the subscript $-j$ indicates the removal of the $j^{th}$ feature. Using this definition it follows directly from the KKT conditions that:\n\\begin{alignat*}{2}\n\\frac{1}{n}|\\x_j^T \\rj| &> \\lambda \\qquad && \\text{for all } \\bh_j \\ne 0 \\\\\n\\frac{1}{n}|\\x_j^T \\rj| &\\leq \\lambda && \\text{for all } \\bh_j = 0.\n\\end{alignat*}\nThe quantity $\\frac{1}{n}\\x_j^T \\rj$ governs the selection of the $j^{th}$ feature: if its absolute value is large enough, relative to $\\lambda$, feature $j$ is selected.\nIn this manner, $\\frac{1}{n}\\x_j^T \\rj$ can be considered analogous to a test statistic in the hypothesis testing framework.\n\n\nIn the special case of orthonormal design where $\\frac{1}{n}\\X^T \\X = \\I$, it is straightforward to show that $\\frac{1}{n}\\x_j^T \\rj \\sim N(\\beta_j, \\sigma^2\/n)$ \\citep{Breheny2019}. 
Under the null hypothesis that $\\beta_j = 0$, this result can be used to construct the normalized test statistic\n\\begin{align} \\label{eq:fdr_linearch2}\nz_j = \\frac{\\frac{1}{n}\\x_j^T \\rj}{\\hat{\\sigma}\/\\sqrt{n}},\n\\end{align}\nwhere $\\hat{\\sigma}$ is an estimate of $\\sigma$; for the sake of simplicity, we use the residual sum of squares divided by the model degrees of freedom \\citep{zou2007}, but many other possibilities exist \\citep{reid2016}. These statistics are then used to estimate local false discovery rates using either \\eqref{eq:fdrhat1} or \\eqref{eq:fdrhat2}.\n\nIn practice, the design matrix will not be orthonormal and the result $\\frac{1}{n}\\x_j^T \\rj \\sim N(\\beta_j, \\sigma^2\/n)$ will not hold exactly. Nevertheless, it still holds approximately under reasonable conditions. To understand these conditions we explore the relationship between $\\frac{1}{n}\\X^T \\X$ and $z_j$:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:remainder}\n\\frac{1}{n}\\x_j^T \\rj &= \\frac{1}{n}\\x_j^T (\\X\\bb + \\bep - \\X_{-j}\\bbh_{-j}) \\\\\n&= \\frac{1}{n}\\x_j^T\\bep + \\beta_j + \\frac{1}{n}\\x_j^T \\X_{-j} (\\bb_{-j} - \\bbh_{-j}).\n\\end{aligned}\n\\end{equation}\nThe component $\\frac{1}{n}\\x_j^T\\bep + \\beta_j$ is unaffected by the structure of $\\frac{1}{n}\\X^T \\X$; thus the estimator in \\eqref{eq:fdr_linearch2} will be accurate in situations where the final term, $\\frac{1}{n}\\x_j^T \\X_{-j} (\\bb_{-j} - \\bbh_{-j})$, is negligible (in orthonormal designs, this term is exactly zero). 
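To make the construction concrete, the following end-to-end sketch simulates data, fits the lasso, forms the statistics in \eqref{eq:fdr_linearch2}, and applies a kernel density estimate of $f$ as in \eqref{eq:fdrhat1}. All settings, including the use of \texttt{scikit-learn}'s lasso and a simple variance estimate, are illustrative assumptions rather than our software's defaults.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)     # mean 0, sum_i x_ij^2 = n
beta = np.zeros(p)
beta[:2] = 1.0                               # two causal features, rest null
y = X @ beta + rng.standard_normal(n)

lam = 0.1
bhat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_  # alpha plays the role of lambda

# z_j = (x_j' r_{-j} / n) / (sigma_hat / sqrt(n)), with r_{-j} the partial residual
resid = y - X @ bhat
sigma_hat = np.sqrt(resid @ resid / (n - np.sum(bhat != 0)))  # a simple variance estimate
c = np.array([X[:, j] @ (resid + X[:, j] * bhat[j]) for j in range(p)]) / n
z = c / (sigma_hat / np.sqrt(n))

# density-style estimate: mfdr_hat(z_j) = f_0(z_j) / f_hat(z_j), truncated at 1
f_hat = gaussian_kde(z)(z)
mfdr_hat = np.minimum(norm.pdf(z) / f_hat, 1.0)
```

With this setup the two causal features receive mfdr estimates near zero, while the null features' estimates are near one.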
If feature $j$ is independent of all other features, then $\\frac{1}{n}\\x_j^T \\X_{-j}$ will converge to zero as $n$ increases, making the term asymptotically negligible provided $\\sqrt{n}(\\bb_{-j} - \\bbh_{-j})$ is bounded in probability.\n\nIf pairwise correlations exist between features, $\\frac{1}{n}\\x_j^T \\X_{-j}$ will not converge to zero, and the null distribution of $z_j$ will not follow a standard normal distribution; in particular, as discussed in \\citet{Breheny2019}, its distribution will have thinner tails than a standard normal distribution. This causes the local mfdr estimates to be somewhat conservative in the presence of strong correlation; this phenomenon is explored in depth in Section~\\ref{Sec:loc_sims}.\n\nWhen $\\frac{1}{n}\\x_j^T \\rj \\sim N(\\beta_j, \\sigma^2\/n)$ holds exactly, the mfdr estimator of \\eqref{eq:fdr_linearch2} shares an important relationship with the mFdr estimator proposed by \\citet{Breheny2019}, captured in the following theorem, whose proof appears in the appendix:\n\n\\begin{theorem}\n\\label{Thm:Efdr}\nLet $\\cM_{\\lam}$ denote the set of nonzero coefficients selected by a lasso model, and let $C_j = n^{-1}\\x_j^T\\rj$ denote the random variable governing the selection of a given feature. If $C_j$ has density $g = \\pi_0 g_0 + (1 - \\pi_0) g_1$, where $g_0$ is the $\\Norm(0, \\sigma^2\/n)$ density, then\n\\begin{align*}\n\\mFDR(\\cM_\\lambda) = \\EX\\left\\{\\mfdr(\\tfrac{C_j}{\\sigma\/\\sqrt{n}}) | j \\in \\cM_\\lambda\\right\\}.\n\\end{align*}\n\\end{theorem}\n\nNoting that $z_j$ from \\eqref{eq:fdr_linearch2} is a standardized version of $C_j$, dividing by $\\sigma\/\\sqrt{n}$ in order to have unit variance, Theorem~\\ref{Thm:Efdr} states that, on average, the marginal false discovery rate of a model is the average local false discovery rate of its selections. 
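The identity in Theorem~\ref{Thm:Efdr} can be checked numerically in a simple two-group setting on the $z$-scale. The sketch below (hypothetical mixture parameters, not tied to any analysis in the paper) integrates both sides of the identity over the selection region:

```python
import math

pi0, mu, t = 0.8, 3.0, 1.5        # null proportion, alternative mean, threshold

def phi(z, m=0.0):                 # standard normal density shifted by m
    return math.exp(-0.5 * (z - m) ** 2) / math.sqrt(2 * math.pi)

def g0(z):                         # null density on the z-scale
    return phi(z)

def g(z):                          # two-group mixture density
    return pi0 * g0(z) + (1 - pi0) * 0.5 * (phi(z, mu) + phi(z, -mu))

def mfdr(z):                       # local (marginal) false discovery rate
    return pi0 * g0(z) / g(z)

# integrate over the selection region |z| > t on a fine grid
zs = [-10 + 20 * k / 20000 for k in range(20001)]
dz = zs[1] - zs[0]
sel = [z for z in zs if abs(z) > t]
p_sel = sum(g(z) for z in sel) * dz                         # P(selected)
mFDR = pi0 * sum(g0(z) for z in sel) * dz / p_sel           # marginal FDR
avg_mfdr = sum(mfdr(z) * g(z) for z in sel) * dz / p_sel    # E[mfdr | selected]
assert abs(mFDR - avg_mfdr) < 1e-6                          # the two sides agree
```

The agreement is exact up to numerical integration error: averaging $\pi_0 g_0/g$ against $g$ over the selection region recovers $\pi_0$ times the null mass of that region.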
Alternatively, this result implies that the expected number of false discoveries in a model can be decomposed into the sum of each selected feature's mfdr.\n\nThe above theorem assumes known values for $\\pi_0$, $\\sigma$, and $f$; when these quantities are estimated from the data, the equality no longer holds. In our experience, the average mfdr is typically close to the mFdr, although this depends on how the above quantities are estimated.\n\n\\subsection{GLM and Cox models}\n\\label{Sec:glm_cox}\n\nWe now consider the more general case where $\\y$ need not be normally distributed. Specifically, we focus our attention on binary outcomes (logistic regression) and survival outcomes (Cox regression), although the approach is general and can also be applied to other likelihood-based models.\n\nSimilar to the linear regression setting, we can develop a local false discovery rate estimator by studying the minimization of the objective function, $Q(\\bb|X,\\y)$, as defined in \\eqref{eq:obj}. When $\\y$ is not normally distributed, $\\ell(\\bb|X,\\y)$ is no longer a quadratic function. However, we can construct a quadratic approximation by taking a Taylor series expansion of $\\ell(\\bb|X,\\y)$ about a point $\\bbt$. 
In the context of this approach, it is useful to work in terms of the linear predictor $\\be = \\X\\bb$ (and $\\bet = \\X\\bbt$), noting that we can equivalently express the likelihood in terms of $\\be$ and expand about $\\bet$:\n\\begin{align*}\n\\ell(\\be|\\y) & \\approx \\ell(\\bet) + (\\be - \\bet)^T \\ell'(\\bet) + \\frac{1}{2}(\\be - \\bet)^T \\ell''(\\bet) (\\be - \\bet) \\\\\n& = \\frac{1}{2}(\\yt - \\be)^T \\ell''(\\bet) (\\yt - \\be) + \\text{const}.\n\\end{align*}\nHere $\\yt = \\bet - \\ell''(\\bet)^{-1} \\ell'(\\bet)$ serves as a pseudo-response in the weighted least squares expression.\nThe KKT conditions here are very similar to those in the linear regression setting, differing only by the inclusion of a weight matrix $\\W = \\ell''(\\bet)$ and the fact that $\\y$ has been replaced by $\\yt$. \n\nProceeding similarly to Section~\\ref{Sec:linear}, we define the partial pseudo-residual $\\rj = \\yt - \\X_{-j}\\bbh_{-j}$, which implies:\n\\begin{alignat*}{2}\n\\frac{1}{n}|\\x_j^T \\W \\rj| &> \\lambda \\qquad & &\\text{for all } \\bh_j \\ne 0 \\\\\n\\frac{1}{n}|\\x_j^T \\W \\rj| &\\leq \\lambda & &\\text{for all } \\bh_j = 0.\n\\end{alignat*}\n\\citet{Miller2019} show that, under appropriate regularity conditions,\n\\begin{align}\\label{eq:glm_fdr}\nz_j=\\frac{\\tfrac{1}{n}\\x_j^T\\W\\rj}{\\hat{s}_j\/\\sqrt{n}} \\inD N(0, 1),\n\\end{align}\nwhere $\\hat{s}_j = \\sqrt{\\x_j^T\\W\\x_j\/n}$. As in Section~\\ref{Sec:linear}, these statistics can be used to estimate local false discovery rates using either \\eqref{eq:fdrhat1} or \\eqref{eq:fdrhat2}.\n\nFor the most part, the regularity conditions required for \\eqref{eq:glm_fdr} to hold are the same as those required for asymptotic normality in classical likelihood theory, with one additional requirement. 
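To make the construction concrete before turning to those conditions, the pure-Python sketch below (simulated logistic data; all names and settings are hypothetical, not the paper's R implementation) forms the pseudo-response and weights about an intercept-only fit and computes the standardized statistic from \eqref{eq:glm_fdr}:

```python
import math, random

random.seed(2)
n, p = 500, 4
beta = [1.0, 0.0, 0.0, 0.0]                  # one true signal, three noise features
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-sum(x[j] * beta[j] for j in range(p)))) else 0
     for x in X]

# quadratic approximation about the intercept-only fit, eta_tilde = logit(ybar)
ybar = sum(y) / n
eta0 = math.log(ybar / (1 - ybar))
p_hat = 1 / (1 + math.exp(-eta0))
w = p_hat * (1 - p_hat)                       # IRLS weight (constant at this fit)
ytil = [eta0 + (yi - p_hat) / w for yi in y]  # pseudo-response

# standardized statistic z_j = ((1/n) x_j' W r_j) / (s_j / sqrt(n)),
# with r_j the partial pseudo-residual; here bhat = 0, so r = ytil - eta0
r = [yt - eta0 for yt in ytil]
zstats = []
for j in range(p):
    num = sum(X[i][j] * w * r[i] for i in range(n)) / n
    s_j = math.sqrt(sum(w * X[i][j] ** 2 for i in range(n)) / n)
    zstats.append(num / (s_j / math.sqrt(n)))
```

In this simulation, the signal feature yields a large $|z_j|$ while the noise features behave approximately like $N(0,1)$ draws, which is what the local fdr estimation step relies on.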
Just as the mfdr estimator in Section~\\ref{Sec:linear} relies on feature independence, the estimator in \\eqref{eq:glm_fdr} relies on an assumption of vanishing correlation; consequently, it will be accurate in the case of independent features but tends to result in conservative false discovery rate estimates when features are correlated. The assumption of vanishing correlation is unlikely to be literally true in practice; its purpose is to establish a hypothetical worst-case scenario in terms of false discovery selection so that an fdr can be estimated. We investigate how robust the estimator is to this assumption in Section~\\ref{Sec:loc_sims}.\n\n\\section{Simulation studies}\n\\label{Sec:loc_sims}\n\nIn this section we conduct a series of simulations studying the behavior of the mfdr estimators resulting from \\eqref{eq:fdr_linearch2} and \\eqref{eq:glm_fdr}. We investigate the internal validity of the method, in terms of whether the estimate accurately reflects the probability that a feature is a false discovery, and compare the results of mfdr-based inference with other approaches to inference in the high-dimensional setting.\n\nFor the mfdr approach, we present results for two different values of $\\lambda$. The first, $\\lambda_{\\CV}$, characterizes the model with the lowest cross-validated error. The second, $\\lambda_{1\\SE}$, characterizes the most parsimonious model within one standard error of the lowest cross-validated error. Unless otherwise indicated, we estimate mfdr using the approach of \\citet{Stephens2017} as implemented in the R package \\texttt{ashr}.\n\nWe generate data from three models: linear regression, logistic regression, and Cox regression. 
For each model we present results for two data-generating scenarios: ``Assumptions Met'', where the underlying assumption of independent features holds and, furthermore, $n > p$, which improves the asymptotic approximations; and ``Assumptions Violated'', where features are correlated in a manner consistent with real data and $p > n$.\n\n\\textbf{Assumptions Met:} In this scenario, $n > p$ and all features are independent of each other. Both of these factors should lead to a small remainder term in \\eqref{eq:remainder}.\n\n\\begin{itemize}\n\\item $n = 1000, p=600$\n\\item Covariate values $x_{ij}$ independently generated from the standard normal distribution.\n\\item Response variables are generated as follows:\n\\begin{itemize}\n\\item Linear regression, $\\y = \\X\\bb + \\bep$ where $\\epsilon_i \\sim N(0,\\sigma^2)$, $\\bb_{1:60} = 4$, $\\bb_{61:600} = 0$, and $\\sigma = \\sqrt{n}$\n\\item Logistic regression, $y_i \\sim \\textrm{Bin}\\bigg(1, \\pi_i = \\frac{\\exp(\\x_i^T \\bb)} {1 + \\exp(\\x_i^T \\bb)}\\bigg)$, $\\bb_{1:60} = .15$ and $\\bb_{61:600} = 0$\n\\item Cox regression, $y_i \\sim \\textrm{Exp}\\big(\\exp(\\x_i^T \\bb)\\big)$, $\\bb_{1:60} = .15$, $\\bb_{61:600} = 0$, and 10\\% random censoring\n\\end{itemize}\n\\end{itemize}\n\n\n\\textbf{Assumptions Violated:} In this scenario, we impose an association structure motivated by the causal diagram below.\n\\begin{center}\n\\begin{tikzpicture}[node distance=1cm]\n\n\\node(b)[text centered] {$B$};\n\\node(u)[below of = b, text centered] {$ $};\n\\node(a)[left of = u, text centered, xshift = -1.5cm] {$A$};\n\\node(c)[right of = u, text centered, xshift = 1.5cm] {$C$};\n\\node(y)[below of = u, text centered] {$Y$};\n\\draw [arrow] (a) -- (b);\n\\draw [arrow] (a) -- (y);\n\n\\end{tikzpicture} \\\\\n\\end{center}\nHere, variable $A$ has a direct causal relationship with the outcome variable $Y$, variable $B$ is correlated with $Y$ through its relationship with $A$, but is not causally related, and 
variable $C$ is unrelated to all of the other variables and the outcome. In terms of the false discovery perspectives introduced in Section~\\ref{Sec:mfdr_back}, all of the perspectives agree that $A$ would never be a false discovery and that $C$ would always be a false discovery. However, selecting $B$ is considered a false discovery by the fully conditional perspective, but not the marginal perspective. From the pathwise conditional perspective, whether $B$ is a false discovery depends on whether $A$ has entered the model or not.\n\n\\begin{itemize}\n\\item $n = 200, p=600$\n\\item Covariates generated with the following dependence structure:\n\\begin{itemize}\n\\item 6 causative features ($A$), which are independent of each other\n\\item 54 correlated features ($B$), grouped such that 9 are related to each causative feature with $\\rho = 0.5$\n\\item 540 noise features ($C$), which are correlated with each other by an autoregressive correlation structure where $\\textrm{Cor}(\\x_j, \\x_k) = 0.8^{|j - k|}$\n\\end{itemize}\n\\item Response variables ($Y$) are generated from the same models described in the Assumptions Met scenario; however, $\\bb$ differs to reflect the change in sample size:\n\\begin{itemize}\n\\item Linear regression, $\\bb_{1:6} = (6,-6,5,-5,4,-4)$, and $\\bb_{7:600} = 0$, and $\\sigma = \\sqrt{n}$\n\\item Logistic regression, $\\bb_{1:6} = (1.1,-1.1,1,-1,.9,-.9)$, and $\\bb_{7:600} = 0$\n\\item Cox regression, $\\bb_{1:6} = (.6, -.6, .5, -.5, .4, -.4)$, and $\\bb_{7:600} = 0$, and 10\\% random censoring\n\\end{itemize}\n\\end{itemize}\n\nTo summarize each of these scenarios in terms of the diagram above, the Assumptions Met scenario consists of 60 features akin to variable $A$, 0 features akin to variable $B$, and 540 features akin to variable $C$, while the Assumptions Violated scenario consists of 6 features akin to variable $A$, 54 features akin to variable $B$, and 540 features akin to variable $C$. 
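The autoregressive correlation structure $\textrm{Cor}(\x_j, \x_k) = 0.8^{|j-k|}$ used for the noise features can be generated column-by-column with an AR(1) recursion. The sketch below uses hypothetical sizes, not the simulation settings above:

```python
import math, random

random.seed(3)
n, p, rho = 5000, 10, 0.8
X = []
for _ in range(n):
    row = [random.gauss(0, 1)]
    for j in range(1, p):
        # AR(1) recursion yields Cor(x_j, x_k) = rho^{|j-k|} with unit variances
        row.append(rho * row[-1] + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))
    X.append(row)

# empirical check of the lag-2 correlation (should be near rho^2 = 0.64)
def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)

c2 = corr([row[0] for row in X], [row[2] for row in X])
assert abs(c2 - rho ** 2) < 0.05
```

The recursion keeps each column marginally standard normal while the correlation decays geometrically with the distance between column indices.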
The Assumptions Violated scenario also imposes an autoregressive correlation structure on the noise features, undermining the assumption of vanishing correlation.\n\nFor comparison, we also include results for the traditional univariate approach to fdr throughout our simulations, defining the univariate procedure to consist of fitting a univariate regression model to each of the $j \\in \\{1, \\ldots, p\\}$ features, extracting the test statistic, $t_j$, corresponding to the test on the hypothesis that $\\beta_j = 0$, then normalizing these test statistics such that $z_j = \\Phi^{-1}(Pr(T < t_j))$. The \\texttt{ashr} package was then used to calculate local false discovery rates.\n\n\\subsection{Calibration}\n\nHow well do our proposed estimates reflect the true probability that a feature is purely noise (i.e., unrelated to the outcome either directly or indirectly)? We address this question through calibration plots comparing our mfdr estimates to the observed proportion of noise features across the full spectrum of false discovery rates. For example, an estimate of $0.2$ is well-calibrated if 20\\% of features with $\\widehat{\\text{mfdr}} = .2$ are observed to be false discoveries. \n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=.85\\textwidth]{Fig2}\n\\caption{\\label{Fig:Calibr} The expected proportion of false discoveries at a given estimated local false discovery rate after smoothing for the linear regression setting. 
Univariate estimates along with two lasso estimates (at different values of $\\lam$) are shown.\n \n}\n\\end{figure}\n\nFigure~\\ref{Fig:Calibr} displays results for the linear regression setting for both scenarios.\nStrip plots of individual mfdr estimates are shown along with smoothed estimates of the calibration relationships; a 45-degree line is provided for reference.\nSimilar results are available in the Supplemental Material for the logistic and Cox models; the same patterns hold for all three models.\n\nWhen all assumptions are met, the mfdr estimates are very accurate at both $\\lambda_{\\CV}$ and $\\lambda_{1\\SE}$, showing essentially perfect correspondence with the empirical proportion of false discoveries. The traditional univariate method also appears to be very well calibrated, only slightly underestimating the proportion of noise features at the upper end of the mfdr spectrum.\n\nIn the Assumptions Violated scenario, the lasso mfdr estimates tend to be accurate in the regions near 0 and 1, which are typically of greatest practical interest. In between, the estimates based on $\\lambda_{\\CV}$ are conservative. For example, among features with an estimated mfdr of 35\\%, only about 20\\% were actually noise features. The lasso mfdr estimates based on $\\lambda_{1\\SE}$ and the univariate estimates, on the other hand, slightly underestimated the probability that a given feature was a false discovery.\n\nIt is worth noting that the presence of correlated ``B'' variables in the Violated scenario complicates the assessment of calibration, as features cannot be unambiguously separated into noise and signal. Since our method is based upon the marginal perspective, we do not treat the ``B'' variables as false (noise) discoveries here. 
However, as one would expect, they tend to have much higher mfdr estimates than the causative (``A'') variables; this should be kept in mind when interpreting these calibration plots.\n\nThe three methods depicted in Figure~\\ref{Fig:Calibr} show markedly different potential with respect to classifying features as noise versus signal. At $\\lambda_{\\CV}$, the mfdr estimates of noise features are tightly clustered near 1 and the mfdr estimates of causal features are tightly clustered near 0, as seen in the strip plots at the top and bottom of the figure, respectively. In contrast, the traditional univariate method yields far more intermediate estimates for variables of both types. In other words, the lasso estimates allow one to much more confidently identify signals compared to a univariate analysis. As expected, the results for $\\lambda_{1\\SE}$ fall somewhere in between the results at $\\lambda_{\\CV}$ and the univariate approach.\n\n\\begin{table}[!htb]\n\\centering\n\\caption{\\label{Table:hard} Local false discovery rate accuracy results for the Assumptions Violated scenario. 
Features are binned based upon their estimated fdr and the observed proportion of noise variables in each bin is reported in the body of the table for each method.}\n\\begin{tabular}{ l c c c c c }\n\\hline\nLinear & (0, 0.2] & (0.2, 0.4] & (0.4, 0.6] & (0.6, 0.8] & (0.8, 1] \\\\\n\\hline\nUnivariate $\\widehat{\\text{fdr}}$ & 0.11 & 0.40 & 0.63 & 0.85 & 0.95 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{1\\SE}$ & 0.03 & 0.36 & 0.60 & 0.84 & 0.92 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{\\CV}$ & 0.03 & 0.25 & 0.32 & 0.49 & 0.91 \\\\ \n\\hline\nLogistic & (0, 0.2] & (0.2, 0.4] & (0.4, 0.6] & (0.6, 0.8] & (0.8, 1] \\\\\n\\hline\nUnivariate $\\widehat{\\text{fdr}}$ & 0.15 & 0.45 & 0.68 & 0.87 & 0.95 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{1\\SE}$ & 0.00 & 0.05 & 0.15 & 0.29 & 0.91 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{\\CV}$ & 0.00 & 0.00 & 0.07 & 0.31 & 0.91 \\\\ \n\\hline \nCox & (0, 0.2] & (0.2, 0.4] & (0.4, 0.6] & (0.6, 0.8] & (0.8, 1] \\\\\n\\hline\nUnivariate $\\widehat{\\text{fdr}}$ & 0.08 & 0.39 & 0.65 & 0.86 & 0.95 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{1\\SE}$ & 0.00 & 0.00 & 0.07 & 0.29 & 0.91 \\\\ \n$\\widehat{\\mfdr}$ at $\\lambda_{\\CV}$ & 0.00 & 0.00 & 0.00 & 0.26 & 0.91 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{Table:hard} displays an alternative representation of the calibration results for the Assumptions Violated scenario. Here, features are sorted into five equally spaced bins based upon their estimated local false discovery rate under a given procedure. Within each bin we calculate the proportion of noise (i.e., ``C'') variables; for a well-calibrated estimator, the proportion of noise features within a bin should remain within the range of the bin. For example, in the linear regression case, 25\\% of the features with $\\widehat{\\mfdr}$ estimates between 0.2 and 0.4 at $\\lam_\\CV$ were truly noise. 
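This binning amounts to only a few lines of code. The sketch below uses made-up estimates and truth labels, purely for illustration:

```python
# hypothetical fdr estimates and noise indicators (1 = truly a noise feature);
# in practice these would come from a fitted model and the simulation truth
est = [0.05, 0.10, 0.30, 0.35, 0.55, 0.62, 0.70, 0.75, 0.90, 0.97]
is_noise = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]

bins = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]
prop = []
for lo, hi in bins:
    idx = [i for i, e in enumerate(est) if lo < e <= hi]
    # observed proportion of noise features among those binned here
    prop.append(sum(is_noise[i] for i in idx) / len(idx) if idx else float("nan"))
print(prop)
```

For a well-calibrated estimator, each entry of `prop` should fall within its bin's endpoints, which is exactly the criterion applied to Table~\ref{Table:hard}.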
Overall, the mfdr estimates at $\\lam_\\CV$ were always either well-calibrated or conservative, with the estimates for logistic and Cox regression more conservative than those for linear regression. In comparison, the univariate approach occasionally underestimated the false discovery probability. Again, results for $\\lam_{1\\SE}$ were intermediate between the other two approaches.\n\n\\subsection{Power compared to univariate fdr}\n\nIn this section we compare the number of mfdr feature selections of each type, $A$, $B$, and $C$, for the two previously described scenarios to the selections based on univariate local false discovery rates.\n\nUsing a local false discovery rate threshold of $0.1$, Figure~\\ref{Fig:power} shows that the regression-based mfdr approach leads to increased selection of causally important variables. At $\\lambda_{\\CV}$, the lasso mfdr approach selects 61\\% more causal (``A'') variables than the univariate approach in the Assumptions Met scenario. In the Assumptions Violated scenario, lasso mfdr ($\\lam_\\CV$) remains slightly more powerful than the univariate approach, selecting on average 5\\% more $A$ variables (3.63 vs. 3.45).\n\n\\begin{figure} [htb]\n\\centering\n\\includegraphics[width=.8\\textwidth]{Fig3}\n\\caption{\\label{Fig:power} The average number of features of each type with estimated local false discovery rates of less than 0.10 for each method in the two scenarios.}\n\\end{figure}\n\nIn addition to improving the power to detect causal variables, the mfdr approach drastically reduces the number of correlated, non-causal features with low local false discovery rate estimates. 
This is most notable when comparing mfdr at $\\lambda_{\\CV}$ with univariate fdr, where the number of $B$ variables with fdr $< 0.1$ is \\textit{36 times higher} for the univariate approach.\nFurthermore, in the presence of correlated (``B'') features, a univariate approach also selects 6.6 times as many features that are purely noise (``C'') when compared to mfdr at $\\lambda_{\\CV}$.\nThus, the penalized regression mfdr approach proposed here results in selecting increased numbers of causally important variables while also reducing the number of correlated and noise features selected.\n\nThe results shown in Figure~\\ref{Fig:power} use a somewhat arbitrary threshold of $0.1$.\nTo illustrate performance over the entire spectrum of classification thresholds, we also performed an ROC analysis. Specifically, we considered the number of false positives, defined as noise features classified as significant at a given threshold, and false negatives, defined as important features classified as noise at a given threshold, and assessed discriminatory power using the area under the ROC curve (AUC). For the Assumptions Violated scenario, we omit $B$ variables from these calculations.\nAt $\\lambda_{\\CV}$, the mfdr approach results in average AUC values of 0.936 and 0.990, respectively, for the Met and Violated Scenarios. This is an improvement over the average AUC values of 0.908 and 0.966 for the univariate procedure, further demonstrating the advantages of regression-based mfdr over univariate approaches.\n\n\n\\subsection{Comparisons with other inferential approaches for penalized regression}\n\nTheorem~\\ref{Thm:Efdr} indicates that mfdr can be used to control mFdr, motivating a comparison of the mfdr method and existing Fdr control approaches for lasso regression models. In this simulation, we use mfdr to select features with $\\widehat{\\mfdr} < 0.1$. 
This is a conservative approach to mFdr control -- recalling the relationship between mfdr and mFdr given in Theorem~\\ref{Thm:Efdr}, if $\\widehat{\\mfdr} < 0.1$ for every feature, then the average mfdr will be $\\ll 0.1$ -- but serves to illustrate the most salient differences between local mfdr and other inferential approaches.\n\nWe compare our results with the selective inference approach of \\citet{Selective_Inference} using the ForwardStop rule \\citep{GSell2016}, which controls the pathwise Fdr at 10\\%, the repeated sample splitting method as implemented by the {\\tt hdi} package \\citep{Dezeure2015}, which controls the fully conditional Fdr at 10\\%, and the (Model-X) knock-off filter method implemented in the {\\tt knockoff} package \\citep{candes2018}, which also controls the fully conditional Fdr at 10\\%.\n\n\\begin{table}[!htb]\n\\centering\n\\caption{\\label{Tab:SelectiveInference} Simulation results comparing the average number of selections of causal, correlated, and noise variables, as well as the proportion of noise variable selections, for various model-based false discovery rate control procedures. The ``exact'', ``spacing'', ``mod-spacing'', and ``covtest'' methods are related tests performed by the {\\tt selectiveInference} package. 
Noise rate here refers to the fraction of selected variables that come from the ``Noise'' group of features (i.e., the mFdr).}\n\\begin{tabular}{l r r r r @{\\hskip 0.5in} r r r}\n& \\multicolumn{4}{c}{Assumptions Violated} & \\multicolumn{3}{c}{Assumptions Met}\\\\\n\\hline\n& Causal & Correlated & Noise & Noise rate & Causal & Noise & Noise rate\\\\\n& (of 6) & (of 54) & (of 540) & (mFdr) & (of 60) & (of 540) & (mFdr)\\\\\n\\hline\nmfdr (CV) & 3.84 & 0.60 & 0.14 & 3.0 \\% & 16.25 & 0.6 & 4.0 \\% \\\\\nmulti-split & 2.03 & 0.05 & 0 & 0\\% & 10.53 & 0 & 0 \\% \\\\\nknock-off & 0.42 & 0.16 & 0.02 & 5.0 \\% & 3.50 & 0 & 0\\% \\\\\nexact & 0.84 & 0.060 & 0 & 0\\% & 0.84 & 0 & 0\\%\\\\\nspacing & 1.45 & 0.070 & 0 & 0 \\% & 0.98 & 0 & 0\\% \\\\\nmod-spacing & 1.45 & 0.10 & 0 & 0 \\% & 0.98 & 0 & 0\\% \\\\\ncovtest & 1.43 & 0.10 & 0 & 0 \\% & 0.97 & 0 & 0\\% \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{Tab:SelectiveInference} shows the average number of causal and correlated features selected along with the false discovery rate for the aforementioned approaches. As they are intended to, the conditional Fdr approaches greatly limit the number of correlated, non-causal (type B) features present in the model compared to the proposed marginal fdr approach. However, this added control comes at a considerable cost in terms of power to discover causal features of interest. Overall, the pathwise and fully conditional approaches discovered at least 50\\% fewer causative features than the marginal approach, with the fully conditional approaches discovering up to 95\\% fewer in some cases. 
These results demonstrate that conditional Fdr control approaches tend to be much more conservative than marginal approaches, even in moderate dimensions.\n\n\\subsection{Differences in the ashr and density approaches}\n\\label{Sec:density_est}\n\nIn Sections \\ref{Sec:mfdr_back} and \\ref{Sec:method}, we discussed two methods of local false discovery rate estimation: the ``ashr'' approach, which estimates fdr using the posterior distribution of feature effects under a mixture model \\citep{Stephens2017}, and the ``density'' approach, which estimates fdr as the ratio of the theoretical null density to an empirically estimated mixture density. It is beyond the scope of this paper to exhaustively review the differences between these approaches, but we did explore their performance in the simulations described in Section~\\ref{Sec:loc_sims}. Overall, the approaches provide very similar results in most cases (see Supplemental Material), although each one has strengths and weaknesses.\n\nThe primary weakness of the mixture modeling approach is that when a very high proportion of features are null (i.e., when $\\pi_0 \\approx 1$), there is very little information with which to estimate the non-null mixture components. In particular, it is not uncommon for the mixture modeling approach used by \\texttt{ashr} to estimate $\\pi_0=1$ even when non-null features are present, especially when the signal is weak or the sample size is small. Certainly, this does not invalidate the procedure -- one could argue that it is wise to be cautious in the presence of weak signals -- although it does contribute, in some scenarios, to the method being conservative.\n\n\\begin{figure} [htb]\n\\centering\n\\includegraphics[width=.95\\textwidth]{Fig5}\n\\caption{\\label{Fig:density_comparison}The relationship between feature $z$-values and their estimated local false discovery rates in the assumptions violated simulation scenario when using the ashr and density approaches. 
The right panel shows the empirical distribution of $z$-values (histogram and rug plot), the estimated mixture density using kernel density estimation (solid line), and the theoretical N(0,1) null distribution (dashed line). The left panel shows the relationship between $z$-values and mfdr estimates for each method.}\n\\end{figure}\n\nThe primary weakness of the density modeling approach is that the resulting mfdr is not a monotone function of the normalized test statistic $z$ and, as a result, is somewhat prone to artifacts, as shown in Figure~\\ref{Fig:density_comparison}. The figure depicts results from a single dataset simulated under the assumptions violated scenario. Due to the numerous pairwise correlations between noise features, the null distribution of $z$ is more tightly concentrated around zero than the theoretical $N(0,1)$ suggests. For the density modeling approach, this has the undesirable consequence that features with $z$-values near zero have fdr estimates smaller than those of features with $z$-values further away from zero. The mixture modeling approach of \\texttt{ashr}, which uses unimodal mixture components, protects against this and leads to a monotonic relationship between a feature's local false discovery rate estimate and how far its $z$-value is from zero.\n\nFor this reason, we consider it safer in general to use the \\texttt{ashr} approach and make it the default method in the \\texttt{ncvreg} package (provided the \\texttt{ashr} package is installed). However, as the figure suggests, the two approaches tend to produce very similar results in the tails of the $z$ distribution. 
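This artifact is easy to reproduce. The sketch below (simulated $z$-values with under-dispersed nulls; all settings hypothetical, and a simple kernel density estimate stands in for whatever density estimator one might use in practice) computes a density-based fdr as the ratio of the theoretical null to the estimated mixture:

```python
import math, random

random.seed(4)
# z-values: nulls tighter than N(0,1) (as happens under correlation), plus signals
z = [random.gauss(0, 0.8) for _ in range(900)] + [random.gauss(4, 1) for _ in range(100)]

def phi(x, m=0.0, s=1.0):                    # normal density
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

h = 0.3                                      # kernel bandwidth (hypothetical choice)
def f_hat(x):                                # Gaussian kernel density estimate
    return sum(phi(x, m=zi, s=h) for zi in z) / len(z)

pi0 = 0.9
def fdr(x):                                  # theoretical null over estimated mixture
    return min(pi0 * phi(x) / f_hat(x), 1.0)

for x in (0.0, 1.0, 2.0, 5.0):
    print(x, round(fdr(x), 3))
```

Because the nulls are more concentrated than $N(0,1)$, the ratio $\pi_0\phi(x)/\hat{f}(x)$ can be smaller at $x=0$ than at moderate $|x|$, producing the non-monotone behavior described above, while the estimates in the tails remain well-behaved.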
If one is careful to avoid artifacts away from the tails, both methods provide valid results; in the next section, we present analyses of real data using both approaches.\n\n\\section{Case studies}\n\\label{Sec:loc_case}\n\n\\subsection{BRCA1 Gene Expression}\n\nOur first case study examines the gene expression of breast cancer patients from The Cancer Genome Atlas (TCGA) project. The data set is publicly available at \\url{http:\/\/cancergenome.nih.gov}, and consists of 17,814 gene expression measures for 536 subjects. One of these genes, BRCA1, is a tumor suppressor that plays a critical role in the development of breast cancer. When BRCA1 is under-expressed, the risk of breast cancer is significantly increased, which makes genes that are related to BRCA1 expression interesting candidates for future research.\n\nOne would expect a large number of genes to have indirect relationships with BRCA1, and relatively few genes to directly affect BRCA1 expression, as in the following diagram:\n\\begin{center}\n\\begin{tikzpicture}[node distance=1cm]\n\n\\node(b)[text centered] {Correlated Gene};\n\\node(u)[below of = b, text centered] {$ $};\n\\node(a)[left of = u, text centered, xshift = -1.5cm] {Promoter\/Repressor Gene};\n\\node(y)[below of = u, text centered] {BRCA1};\n\\draw [arrow] (a) -- (b);\n\\draw [arrow] (a) -- (y);\n\\draw [dashed] (b) to [bend left](y);\n\n\\end{tikzpicture} \\\\\n\\end{center}\nUnsurprisingly, at an fdr threshold of $0.10$, the univariate approach selects 8,431 genes, clearly picking up on a large number of indirect associations.\n\nAlternatively, we may use lasso regression to jointly model the relationship between BRCA1 and the remaining 17,813 genes. Here, cross validation selects a model containing 96 features. This model, however, has a high false discovery rate -- the average mfdr of these features, estimated using the ashr approach, is $0.760$. 
One can lower the false discovery rate by choosing a larger value of $\\lam$ and selecting fewer features; for example, at $\\lam_{1\\SE}$, the model selects 49 features with an average mfdr of 0.026. However, this smaller model is considerably less accurate, raising the cross validation error by 11\\%. Using the feature-specific inference that mfdr provides, however, we can base our analysis on the model with the greatest predictive accuracy and still identify which of the 96 features are likely to be false discoveries. In this example, 16 of those genes have local false discovery rates under 10\\%.\n\n\\begin{table}[!htb]\n\\centering\n\\caption{\\label{Table:BRCA1}The top 10 selected genes from the univariate approach and their local false discovery rate estimates (ashr) at each $\\lambda$ value.}\n\\begin{tabular}{l c |c c c }\nGene & Chromosome & Univariate $\\widehat{\\fdr}$ & $\\widehat{\\mfdr}$ at $\\lambda_{1\\SE}$ & $\\widehat{\\mfdr}$ at $\\lambda_{\\CV}$ \\\\\n\\hline\nC17orf53 & 17 &$<$0.0001 & 0.0005 & 0.31987 \\\\\nTUBG1 & 17 &$<$0.0001 & 0.0310 & 0.3507 \\\\\nDTL & 1 &$<$0.0001 & $<$0.0001 & $<$0.0001 \\\\\nVPS25 & 17 &$<$0.0001 & $<$0.0001 & 0.0211 \\\\\nTOP2A & 17 &$<$0.0001 & 0.0002 & 0.0020 \\\\\nPSME3 & 17 &$<$0.0001& $<$0.0001 & 0.0004 \\\\\nTUBG2 & 17 &$<$0.0001 & 0.0508 & 0.3599 \\\\\nTIMELESS & 12 &$<$0.0001& 0.0123 & 0.3196 \\\\\nNBR2 & 17 &$<$0.0001 & $<$0.00001 & $<$0.00001\\\\\nCCDC43 & 17 &$<$0.0001 & 0.0169 & 0.3326 \\\\\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{Table:BRCA1} displays the 10 genes with the lowest univariate local false discovery rates, along with their mfdr estimates at $\\lambda_{1\\SE}$ and $\\lambda_{\\CV}$ using the ashr approach. Similar results were found at both $\\lambda$ values using the density approach. 
Many of the genes with the lowest fdr estimates according to the univariate analysis have biological roles with no apparent connection to BRCA1, but are located near BRCA1 on chromosome 17 and are therefore all correlated with each other: TUBG1, TUBG2, NBR2, VPS25, TOP2A, PSME3, and CCDC43.\nAlmost all of these genes are estimated to have much higher false discovery rates in the simultaneous regression model than in the univariate approach.\n\nOther selections that have low local false discovery rates in both the univariate and lasso approaches have very plausible relationships with BRCA1. For example, PSME3 encodes a protein that is known to interact with p53, a protein that is widely regarded as playing a crucial role in cancer formation \\citep{zhang_p53}. Another example is DTL, which interacts with p21, another protein known to have a role in cancer formation \\citep{DTL_p21}. These results demonstrate the potential of the mfdr approach to identify more scientifically relevant relationships by reducing the number of features only indirectly associated with the outcome.\n\n\\subsection{Lung Cancer Survival}\n\n\\citet{Shedden2008} studied the survival of 442 early-stage lung cancer subjects. Researchers collected expression data for 22,283 genes as well as information regarding several clinical covariates: age, race, gender, smoking history, cancer grade, and whether or not the subject received adjuvant chemotherapy. The goal of our analysis is to identify genetic features that are associated with survival after adjusting for the clinical covariates. \n\nWe first analyze the data using the traditional univariate fdr approach, which is based upon the test statistics from 22,283 separate Cox regression models. Each of these models contains a single genetic feature in addition to the clinical covariates. 
Note that although these models contain more than one variable, we will refer to this as the ``univariate approach'' to indicate how the high-dimensional features are being treated.\n\nWe compare results from the univariate approach with the proposed local mfdr approach. Here, the clinical covariates are included in the model as unpenalized covariates along with the 22,283 features, to which a lasso penalty is applied. We consider both cross validation and the one standard error rule as methods of selecting $\\lambda$ and estimate mfdr using the ``density'' approach. Cross validation selects $\\lambda=0.095$, while the 1SE approach suggests $\\lambda=0.155$, corresponding to 43 and 1 selected genetic features, respectively.\n\n\\begin{table}[!htb] \n\\centering\n\\caption{\\label{Tab:Shedden}Local false discovery rate estimates for the top ten features under each approach for the Shedden survival data.}\n\\begin{tabular}{l l | l l c | l l c }\nUnivariate fdr & & mfdr at $\\lambda_{1\\SE}$ & & & mfdr at $\\lambda_{\\CV}$ & & \\\\\n\\hline \nFeature & $\\widehat{\\text{fdr}}$ & Feature & $\\widehat{\\mfdr}$ & $\\bh_j \\ne 0$ & Feature & $\\widehat{\\mfdr}$ & $\\bh_j \\ne 0$ \\\\\n\\hline\nZC2HC1A & 0.0010 & FAM117A & 0.0576 & * & FAM117A & 0.0678 & * \\\\\nFAM117A & 0.0014 & TERF1 & 0.1938 & & NUDT6 & 0.4238 & * \\\\\nSCGB1D1 & 0.0016 & PTGER3 & 0.1951 & & RAB2A & 0.8658 & * \\\\\nCHEK1 & 0.0022 & CDC42 & 0.1955 & & MAP1A & 0.8658 & * \\\\\nHILPDA & 0.0027 & BHLHB9 & 0.1996 & & RHOA & 0.8658 & * \\\\\nCSRP1 & 0.0039 & NDST1 & 0.2005 & & PLP1 & 0.8658 & * \\\\\nPDPK1 & 0.0050 & CPT1A & 0.2137 & & GUK1 & 0.8658 & * \\\\\nBSDC1 & 0.0051 & AFFX-M27830 & 0.2161 & & PREP & 0.8658 & * \\\\\nXPNPEP1 & 0.0051 & BSDC1 & 0.2233 & & SCN7A & 0.8658 & * \\\\\nARHGEF2 & 0.0051 & ETV5 & 0.2269 & & BTBD1 & 0.8658 & * \\\\\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{Tab:Shedden} displays the ten features with the lowest local false discovery rates for the 
univariate and lasso mfdr approaches. Here, we present mfdr estimates based on the density modeling approach. Using the \\texttt{ashr} approach, results are very similar at $\\lambda_{1\\SE}$, but at $\\lambda_{\\CV}$, all features were estimated to have local false discovery rates near 1. As discussed in Section~\\ref{Sec:density_est}, this may result from not having enough features to estimate the non-null mixture components.\n\nWe observe that one feature, FAM117A, stands out in all approaches, albeit with different estimates. We also notice that the univariate fdr estimates tend to be smaller than those from the mfdr approach at $\\lambda_{1\\SE}$, which in turn tend to be smaller than those at $\\lambda_{\\CV}$. This illustrates a key aspect of local mfdr in practice: although the development of the fdr estimator is concerned only with marginal false discoveries, the regression model is certainly making conditional adjustments. At $\\lam_{\\max}$, the lasso mfdr and univariate fdr are equivalent, but as $\\lam$ decreases and the model grows larger, more extensive conditional adjustments are being performed.\n\n\\begin{figure} [!htb]\n\\centering\n\\includegraphics[width=.85\\textwidth]{Fig4}\n\\caption{\\label{locfdr_shedden_density} The mixture density estimates, $\\hat{f}(z)$, for the different methods applied to the Shedden data. We observe that the distribution of test statistics more closely resembles the null as more features are adjusted for by the model. }\n\\end{figure}\n\nFigure~\\ref{locfdr_shedden_density} shows the estimated marginal density, $\\hat{f}$, for each method. With the univariate approach, which does not account for any correlations between features, we see that the distribution of univariate test statistics is quite different from the null distribution. In the lasso model at $\\lambda_{1\\SE}$, the model adjusts for gene FAM117A and, consequently, the distribution narrows relative to that of the univariate approach. 
When cross validation is used to select $\\lambda$, the model adjusts for 43 genes and the distribution narrows even further to the point where it closely resembles the null.\nAs predictors enter the model and help to explain the outcome, the residuals (or pseudo-residuals) increasingly resemble white noise and exhibit no correlation with the remaining features.\n\n\\section{Discussion}\n\nLocal approaches to marginal false discovery rates for penalized regression models provide a very useful way of quantifying the reliability of individual feature selections after a model is fit. The estimator can be quickly computed, even in high dimensions, using quantities that are easily obtained from the fitted model. This makes it a convenient and informative way to carry out inference after fitting a lasso model. The method is currently implemented in the {\\tt summary} function of the R package {\\tt ncvreg} \\citep{Breheny2011}. By default, {\\tt summary} reports local false discovery rates for all features selected at a given value of $\\lambda$, but includes options to report all variables that meet a specified mfdr threshold or to report a specified number of features in order of mfdr significance. For more information on using {\\tt ncvreg} to calculate mfdr, see the package vignette or the online documentation at \\href{http:\/\/pbreheny.github.io\/ncvreg}{http:\/\/pbreheny.github.io\/ncvreg}.\n\nLike any estimate, the local mfdr has limitations. Although it has clear advantages over univariate hypothesis testing in many cases, a regression approach is not practical in some situations in which high-throughput testing arises, such as two-group comparisons with $n<5$ in each group. 
Likewise, the local mfdr is far more powerful than other approaches to inference for the lasso, such as selective inference and sample splitting, but this is because it targets a weaker notion of fdr control -- namely, it can only claim to limit the number of selections that are purely noise and does not attempt to eliminate features that are marginally associated with the outcome. Finally, although we have introduced causal ideas and diagrams to motivate the approach here, any attempt to infer causal relationships from observational data in practice should be taken with a grain of salt.\n\nNevertheless, the local mfdr approach that we propose here addresses a critical need for feature-specific inference in high-dimensional penalized regression models, especially in the GLM and Cox regression settings where few other options have been proposed.\n\n\\section{Appendix}\n\\subsection{Proof of Theorem~\\ref{Thm:Efdr}}\n\n\\begin{proof}\n\n Let $Z=\\sqrt{n}C_j\/\\sigma$, so that $Z$ has density $f=\\pi_0 f_0 + (1-\\pi_0)f_1$, with $f_0$ the standard normal density, and let $\\cZ = (-\\infty, -\\lam\\sqrt{n}\/\\sigma] \\cup [\\lam\\sqrt{n}\/\\sigma, \\infty)$, so that $Z \\in \\cZ$ is equivalent to $C_j \\in \\cM_\\lam$. Now,\n \\begin{align*}\n \\EX\\left\\{\\mfdr(\\tfrac{C_j}{\\sigma\/\\sqrt{n}}) | C_j \\in \\cM_\\lam\\right\\} &= \\EX\\left\\{\\mfdr(Z) | Z \\in \\cZ\\right\\} \\\\\n &= \\int_\\cZ \\frac{\\mfdr(z)f(z)}{F(\\cZ)}\\,\\mathrm{d}z\\\\\n \\intertext{(where $F(\\cZ)$ denotes the probability assigned to the set $\\cZ$ by the distribution function $F$)}\n &= \\int_\\cZ \\frac{\\pi_0f_0(z)}{f(z)}\\frac{f(z)}{F(\\cZ)} \\,\\mathrm{d}z\\\\\n &= \\frac{\\pi_0F_0(\\cZ)}{F(\\cZ)} \\\\\n &= \\frac{2\\pi_0\\Phi(-\\lam\\sqrt{n}\/\\sigma)}{F(\\cZ)},\n \\end{align*}\n where $\\Phi$ denotes the standard Gaussian CDF. 
The mFdr estimator consists of replacing $F(\\cZ)$ with its empirical estimate $\\abs{\\cM_\\lam}\/p$, in which case the above quantity yields the one given in Section 2.1 of \\cite{Breheny2019}.\n\\end{proof}\n\n\n\\section*{Reproducibility} A repository containing code to reproduce all results in this paper is located at \\url{https:\/\/github.com\/remiller1450\/loc-mfdr-paper}.\n\n\\bibliographystyle{ims}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaocf b/data_all_eng_slimpj/shuffled/split2/finalzzaocf new file mode 100644 index 0000000000000000000000000000000000000000..ddf6c78bc07db818d32e01e93a33c739024699be --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaocf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nAn interesting byproduct of the intense pursuit of materials that can host spin-liquids has been the discovery of nominally pure crystalline solids with frozen short range correlated magnetism.\\cite{Lee1996,Zaliznyak1999,Cava2001,Nakatsuji2005,Hiroi2009} In some cases quenched disorder simply alters the ground state and defines a short spin correlation length, but for materials such as two dimensional SCGO\\cite{Lida2014} and $\\rm NiGa_2S_4$\\cite{Nakatsuji2005} where the spin correlation length is much shorter than a plausible impurity spacing, such explanations seem untenable. 
Instead, in the present study, we propose that spin disorder in frustrated low dimensional magnets can result from a complex thermalization process in the absence of quenched disorder.\n\nComprising two types of Ising spin chains with nearest neighbor ($J_1$) and next nearest neighbor ($J_2$) interactions [axial next-nearest-neighbor Ising (ANNNI) models\\cite{[][{, and references therein.}]Selke1988213}] organized on a honeycomb-like lattice, $\\rm SrHo_2O_4$ provides a striking example.\\cite{Karunadasa2005,Ghosh2011,Hayes2012,Young2012,Young2013,Poole2014} We show that the chains straddle the $J_2\/J_1=1\/2$ critical point so that ``red'' chains have a ground state that doubles the unit cell ($\\uparrow\\downarrow\\uparrow\\downarrow$) while the ground state for ``blue'' chains is a double-N\\'eel state ($\\uparrow\\uparrow\\downarrow\\downarrow$) [as illustrated in Fig.~\\ref{fig:1}(b)]. While red chains develop three-dimensional (3D) long range order (LRO), blue chains in the very same crystal cease further equilibration towards their more complex ground state when the red spins saturate in an ordered state.\n\n$\\rm SrHo_2O_4$ belongs to a family of iso-structural rare-earth strontium oxides, $\\rm SrRE_2O_4$.\\cite{Karunadasa2005,Ghosh2011,Hayes2012,Young2012,Young2013,Poole2014,PhysRevB.78.184410,PhysRevB.84.174435,PhysRevB.86.064203,Petrenko2014,Li2014} Recent experiments on this series of materials have revealed low temperature magnetic states ranging from a disordered state in $\\rm SrDy_2O_4$\\cite{Poole2014}\nto noncollinear 3D LRO in $\\rm SrYb_2O_4$\\cite{PhysRevB.86.064203}. A very unusual coexistence of 3D LRO and one-dimensional (1D) short range order (SRO) was discovered in polycrystalline and single crystalline samples of $\\rm SrHo_2O_4$\\cite{Young2012,Young2013,Poole2014} and $\\rm SrEr_2O_4$\\cite{PhysRevB.78.184410,PhysRevB.84.174435}. No explanation for the coexistence of two drastically different types of correlations over different length scales in the same crystal has so far been provided. 
With the additional experimental results and analysis presented here, we are able to provide an explanation for a partially ordered magnetic\nstate where quenched disorder does not play an essential role.\n\n\\section{Experimental Methods}\n\n\\subsection{Crystal structure and synthesis}\n\nSrHo$_2$O$_4$ crystallizes in space group $Pnam$\\cite{Karunadasa2005} with two inequivalent Ho sites [Fig.~\\ref{fig:1}(a)]. Both are Wyckoff $4c$ sites with mirror planes perpendicular to the $\\bf{c}$ direction, where Ho is surrounded by six oxygen atoms forming a distorted octahedron. The magnetic lattice consists of zig-zag ladders which extend along $\\bf{c}$ and form a honeycomb-like pattern in the $\\bf{a-b}$ plane [Fig.~\\ref{fig:1}(b)].\n\nPolycrystalline powders of $\\rm SrHo_2O_4$ were prepared by solid state synthesis using $\\rm Ho_2O_3$ (99.99\\%) and $\\rm SrCO_3$ (99.99\\%) as starting materials. The starting powders were mixed, pressed into pellets, and heated at 900, 1000 and 1100 $^\\circ$C in air, each for 10 h with intermediate grinding. An extra 4\\% of $\\rm SrCO_3$ was added to the starting materials to account for evaporation of Sr during single crystal growth. After grinding and phase identification, synthesized powders were compacted into a rod using a hydraulic press. The feed rods were then sintered at 1200 $^\\circ$C for 10 h in air. Single crystals with diameter 4-5 mm and length up to 60 mm were grown from the polycrystalline feed rods in a four-mirror optical floating zone furnace (Crystal Systems Inc. FZ-T-12000-X-VPO-PC) with 4$\\times$3 kW xenon lamps. The growth rate was 4 mm\/h with rotation rates of 5 rpm for the growing crystal (lower shaft) and 0 rpm for the feed rod (upper shaft) in a 4 bar purified argon atmosphere. 
Growth under oxygen-containing atmospheres led to significant evaporation of Sr-containing phases, and $\\rm Ho_2O_3$ formed as second-phase inclusions in the as-grown crystals.\n\n \\begin{figure}[t]\n \\includegraphics[width=3.3 in]{fig1}%\n \\caption{\\label{fig:1}(a) Crystallographic unit cell of SrHo$_2$O$_4$. Sr atoms are omitted for clarity. Red and blue spheres show the two distinct Ho sites, and the corresponding arrows show the Ising spin directions. (b) Magnetic lattice formed by Ho and a schematic representation of the spin structure determined by neutron diffraction.}\n \\end{figure}\n\n\\subsection{Thermomagnetic and neutron scattering measurements}\n\nThe thermomagnetic properties of $\\rm SrHo_2O_4$ were measured using a Physical Properties Measurement System from Quantum Design, Inc.~with a dilution refrigerator option for measurements below $1.8$~K. Heat capacity measurements were performed using the quasi-adiabatic heat-pulse technique on a thin plate of polished single crystalline $\\rm SrHo_2O_4$. Temperature dependent magnetization measurements between 2 K and 300 K along three crystalline axes were carried out using a vibrating sample magnetometer option in a magnetic field of 200 Oe.\n\nElastic neutron scattering (ENS) maps for $T$ down to 1.5 K were measured on MACS\\cite{MACS2008} at the NIST Center for Neutron Research with $E_i=E_f=5$~meV neutrons. A 4.3 g single crystal was mounted for consecutive experiments in the $(HK0)$ and $(0KL)$ planes. Single crystal ENS measurements for $T$ down to 0.28 K in a $^3$He insert were conducted on the HB-1A instrument at the High Flux Isotope Reactor of ORNL. To reduce the effects of neutron absorption, samples cut into small cubes with masses of 0.34 g and 0.19 g were used for measurements in the $(0KL)$ and $(H0L)$ planes, respectively. The sample mount was made from oxygen-free copper to ensure good thermal contact at low $T$. 
Temperature dependent measurements were carried out upon warming after cooling to the $0.28$~K base temperature of the pumped $^3$He system.\n\n\\section{Experimental Results}\n\n\\subsection{Thermomagnetic measurements}\n\nThe magnetic susceptibility $\\chi$ of $\\rm SrHo_2O_4$, approximated by $M\/H$ with a measuring field of $H=200$~Oe, is shown in Fig.~\\ref{fig:2}. $\\chi_a$ is found to be an order of magnitude smaller than $\\chi_b$ and $\\chi_c$, indicating strong magnetic anisotropy with a hard axis along $\\bf{a}$. $\\chi_b$ and $\\chi_c$ increase upon cooling and form broad peaks for $T\\sim 5$~K that are characteristic of antiferromagnetic (AFM) SRO.\\cite{Karunadasa2005,Ghosh2011,Hayes2012} The inset in Fig.~\\ref{fig:2} shows the inverse magnetic susceptibility $\\chi^{-1}$ for $T$ between $2$~K and $300$~K. A Curie-Weiss analysis of the approximately linear regime between $100$~K and $300$~K results in effective moment sizes $P_{\\rm eff}$ of $10.1(1)~\\mu_{\\rm B}$, $10.5(1)~\\mu_{\\rm B}$, and $11.8(1)~\\mu_{\\rm B}$, and Weiss temperatures $\\Theta_{\\rm CW}$ of $-63.5(3)$~K, $16.4(1)$~K, and $-27.6(2)$~K for the $\\bf{a}$, $\\bf{b}$, and $\\bf{c}$ directions, respectively. These $P_{\\rm eff}$ values are consistent with\nthe $^5I_8$ ground multiplet of $4f^{10}$ $\\rm Ho^{3+}$. While $\\Theta_{\\rm CW}$ is usually associated with inter-spin interactions, such an interpretation is not straightforward here since the high temperature $\\chi$ is also affected by the crystalline electric field (CEF)\\cite{jensen1991} level scheme for $\\rm Ho^{3+}$ in $\\rm SrHo_2O_4$.\n\n \\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig2}%\n \\caption{\\label{fig:2} Magnetic susceptibility of SrHo$_2$O$_4$ along three axes measured under 200 Oe. Dashed lines show fits to $J_1-J_2$ Ising models. The inset shows the inverse magnetic susceptibility up to 300 K. 
}\n \\end{figure}\n\nThe temperature dependent specific heat of $\\rm SrHo_2O_4$ is shown in Fig.~\\ref{fig:3}(a). Filled symbols show the experimental specific heat $C_{\\rm total}$. The broad peak for $T\\sim$ 5 K corresponds to the SRO also indicated by the maximum in $\\chi$. A second anomaly is observed at $T\\sim$ 0.2 K. Holmium has a single stable isotope, $^{165}\\rm Ho$, with a finite nuclear spin $I=7\/2$, the multiplet of which can be split through hyperfine interactions with electronic spins. This produces a peak in the specific heat at low temperatures, the so-called nuclear Schottky anomaly ($C_{\\rm nuc}$).\\cite{tari2003} By assuming a 7.7(2) $\\mu_{\\rm B}$ static magnetic moment for the electronic spins of $\\rm Ho^{3+}$, the low temperature $C_{\\rm total}$ peak can be very well accounted for by the nuclear Schottky anomaly, as shown by the magenta long dashed line in Fig.~\\ref{fig:3}(a). After subtracting $C_{\\rm nuc}$ from $C_{\\rm total}$, the contribution to the specific heat from electronic spins ($C_{\\rm mag}$) is obtained and shown as open symbols in Fig.~\\ref{fig:3}(a). The phonon contribution to the specific heat is negligible in the temperature range probed.\n\nA kink in $C_{\\rm mag}$ is observed at $T_{\\mathrm{N}}=0.61(2)$~K, which is even more apparent as a sharp peak in $C_{\\rm mag}\/T$, indicating a bulk phase transition [Fig.~\\ref{fig:3}(b)]. The magnetic entropy ($S$) inferred from the area under the $C_{\\rm mag}\/T$ curve is shown in Fig.~\\ref{fig:3}(c).\n\n \\begin{figure}[t]\n \\includegraphics[width=3.4 in]{fig3}%\n \\caption{\\label{fig:3}(a) Specific heat of $\\rm SrHo_2O_4$ as a function of $T$. Filled symbols show the experimental total specific heat; the magenta long dashed line shows the calculated specific heat due to a nuclear Schottky anomaly; open symbols show the magnetic specific heat obtained by subtracting the nuclear Schottky anomaly from the total specific heat. (b) Magnetic specific heat divided by $T$ versus $T$. 
(c) shows the entropy versus $T$. The dashed line shows the entropy of an Ising doublet.}\n \\end{figure}\n\n\\subsection{Neutron scattering}\n\n\\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig4}%\n \\caption{\\label{fig:4} $T-$dependent elastic magnetic neutron scattering indicating quasi-one-dimensional short range order in $\\rm SrHo_2O_4$. (a) and (c) show measurements in the $(HK0)$ reciprocal lattice plane while (b) and (d) are from the $(0KL)$ plane. Measurements at 50 K were subtracted to eliminate nuclear scattering. (e) and (f) show $J_1-J_2$ model calculations at 1.4 K with exchange constants determined from Fig.~\\ref{fig:2}.}\n\\end{figure}\n\nFor a direct view of the short range spin correlations indicated by the thermodynamic anomalies for $T\\sim 5$~K, Fig.~\\ref{fig:4}(a)-(d) show ENS intensity maps covering the $(HK0)$ and $(0KL)$ planes. The strongly anisotropic nature of the wave vector dependence indicates quasi-1D correlations along the $\\bf{c}$-axis consistent with previous data.\\cite{Ghosh2011,Young2013} The modulation in the $(HK0)$ plane takes the form of a checker-board-like structure [Fig.~\\ref{fig:4}(a) and (c)] and is associated with intra-ladder correlations. The fact that this scattering occurs for $\\bm{q_c}=0$ but vanishes near $\\bm{q}=0$ indicates an AFM structure that is not modulated along $\\bf{c}$. The $(0KL)$ intensity map reveals another type of correlation with $\\bm{q_c}\\sim 0.5\\bf{c^*}$, where $\\rm c^*\\equiv \\frac{2\\pi}{c}$. The intensity maximum near $\\bm{q_c}\\sim 0.5\\bf{c^*}$ indicates that spins displaced by $\\bf{c}$ are anti-parallel. As will be discussed in Sec.~\\ref{sec:CEF}, the single ion magnetic anisotropy in $\\rm SrHo_2O_4$ allows an unambiguous association of red sites with $\\bm{q_c}=0$ correlations, while spins on blue sites host $\\bm{q_c}\\sim 0.5\\bf{c^*}$ type correlations. 
For clarity, this assignment will be employed from this point on, though it will not be justified until Sec.~\\ref{sec:CEF}. The correlation lengths along $\\bf{c}$ can be estimated as $2({\\rm FWHM}_{\\rm expt}^2-{\\rm FWHM}_{\\rm reso}^2)^{-\\frac{1}{2}}$, where ${\\rm FWHM}_{\\rm expt}$ and ${\\rm FWHM}_{\\rm reso}$ are the experimental and instrumental full widths at half maximum. At $T=1.5$~K we find the correlation lengths along $\\bf{c}$ are indistinguishable at 13.3(3)~\\AA\\ and 13.6(3)~\\AA\\ for red and blue ladders, respectively.\n\n\\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig5}%\n \\caption{\\label{fig:5}Thermal evolution of inter-chain correlations for red and blue sites probed by neutrons. (a) shows $(0,K,0)$ scans probing the correlations along $\\bf{b}$ between red chains at several temperatures. (b) and (c) respectively show $(0,K,\\frac{1}{2})$ and $(H,0,\\frac{1}{2})$ scans that probe correlations along $\\bf{b}$ and $\\bf{a}$ for blue chains. Horizontal bars represent the instrumental resolution. Sharp peaks at $K \\sim 5.5$ in (b) and at $H \\sim 4.6$ and $H \\sim 5.4$ in (c) are the results of Bragg scattering from the copper sample mount. }\n\\end{figure}\n\nThe evolution of these two kinds of short range correlations across $T_\\mathrm{N}$ was probed by single crystal ENS in the $(0KL)$ and $(H0L)$ planes down to $T \\sim 0.3$~K. First, $L-$scans at all accessible $(\\mathrm{n}_a,\\mathrm{n}_b,\\mathrm{n}_c)$ and $(\\mathrm{n}_a,\\mathrm{n}_b,\\mathrm{n}_c+1\/2)$ magnetic peaks ($\\mathrm{n}_a, \\mathrm{n}_b, \\mathrm{n}_c$ being integers) are resolution limited at $T=0.28$~K. This is evidence of quasi-1D correlations over length scales exceeding 286(5)~\\AA\\ and 100(1)~\\AA\\ for red and blue ladders, respectively. These lower bounds were obtained from $L-$scans at $(030)$ and $(00\\frac{1}{2})$ respectively. 
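The resolution correction used for these correlation lengths amounts to a quadrature deconvolution of the instrumental width. The helper below is our own illustration (the example width is hypothetical, chosen only to land on the $\\sim$13~\\AA\\ scale quoted above); widths in \\AA$^{-1}$ give a length in \\AA:

```python
import math

def corr_length(fwhm_expt, fwhm_reso):
    """Correlation length xi = 2 * (FWHM_expt^2 - FWHM_reso^2)^(-1/2),
    removing the instrumental width in quadrature from the measured one."""
    return 2.0 / math.sqrt(fwhm_expt**2 - fwhm_reso**2)

# With a negligible resolution width the formula reduces to xi = 2 / FWHM:
xi = corr_length(0.15, 0.0)  # hypothetical FWHM of 0.15 1/Angstrom
```

Note that deconvolving a finite resolution width always increases the inferred correlation length relative to the naive $2\/{\\rm FWHM}_{\\rm expt}$.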
The different limits arise because the $L-$scan is a rocking scan for red ladders but a longitudinal scan for blue ladders, which results in better resolution for red ladders.\n\nTemperature dependent $(0K0)$ scans for $K\\in [0.5,4.5]$ between 0.28 K and 10 K were acquired to probe correlations between red chains with relative displacement along $\\bf{b}$ [Fig.~\\ref{fig:5}(a)]. Upon cooling, a broad intensity modulation develops and turns into resolution limited Bragg peaks for $T$ below $T_\\mathrm{N}=0.68(2)$~K. Measurements at $(200)$ and $(030)$ indicate that the correlation length for red spins exceeds 57(1)~\\AA\\ along $\\bf a$ and 64(1)~\\AA\\ along $\\bf b$ at $T=0.28$~K.\n\nDespite a spin correlation length exceeding 100(1)~\\AA\\ along $\\bf c$, the blue chains fail to develop conventional long range inter-chain correlations [Fig.~\\ref{fig:5}(b)(c)]. While these inter-chain correlations are enhanced at low $T$, as manifested by a sharpening of the peaks, the peaks remain much broader than the instrumental resolution shown by horizontal bars. A detailed view of $H-$ and $K-$scans through $(00\\frac{1}{2})$ at $T=0.28$~K is provided in Fig.~\\ref{fig:6}. The broad modulations are well described by Lorentzian fits that correspond to correlation lengths along $\\bf{a}$ and $\\bf{b}$ of just 6.0(1)~\\AA\\ and 17.5(3)~\\AA, respectively. The $H-$scan, however, also includes a curious small sharp component that indicates some correlation between blue chains persists to a separation of 165(9)~\\AA\\ perpendicular to their easy axis. This observation merits further theoretical and experimental exploration.\n\n\\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig6}%\n \\caption{\\label{fig:6}Inter-chain correlations for blue sites in $\\rm SrHo_2O_4$ probed by neutrons. Filled and empty circles are from $H-$ and $K-$scans at $(00\\frac{1}{2})$ measured at 0.28 K. The solid lines show fits to the data. 
The inset shows the sharp component of the $H-$scan in detail; the horizontal bar is the instrumental resolution. Data from different sample orientations are normalized to the peak count rate.}\n\\end{figure}\n\n\\section{Analysis}\n\n\\subsection{Single ion anisotropy}\n\\label{sec:CEF}\n\nAs has been suggested in previous sections, magnetic anisotropy plays an all-important role in the magnetic properties of $\\rm SrHo_2O_4$. In the anisotropic environment of the solid, the $J=8$ multiplet of Ho$^{3+}$ is split into multiple levels resulting in the anisotropic susceptibility\\cite{jensen1991} shown in Fig.~\\ref{fig:2}. The $C_{s}$ point group symmetry of both red and blue $\\rm Ho^{3+}$ sites implies an easy magnetic axis either along $\\bf{c}$ or within the $\\bf{ab}$ plane. A CEF calculation based on the point charge approximation\\cite{Hutchings1964}, where the CEF at the Ho site is approximated by the static electric field of the six surrounding $\\rm O^{2-}$ ions, shows that both red and blue sites have a doublet ground state. The single ion magnetic susceptibility calculated based on the point charge CEF level schemes (Fig.~\\ref{fig:10}) reveals that the Ising doublet ground states for red Ho1 (blue Ho2) sites have an easy axis along $\\bf{c}$ ($\\bf{b}$). We infer that the temperature dependent susceptibilities (Fig.~\\ref{fig:2}) for fields along $\\bf{c}$ and $\\bf{b}$ are due to red and blue sites, respectively. For blue sites a finite moment along $\\bf{a}$ is allowed by symmetry; however, since $\\chi_a$ is minimal it will be neglected.\n\nWith this information about the magnetic anisotropy for the two Ho sites, it can be inferred that the scattering in the $(HK0)$ plane in Fig.~\\ref{fig:4}(a) and (c) is due to red ladders because there is no decrease in intensity for $\\bm{q} \\parallel (0K0)$, which would be the case for easy $\\bf{b}$-axis blue sites. 
This is a consequence of the polarization factor in magnetic neutron scattering which ensures that neutrons only probe magnetic moments perpendicular to $\\bm{q}$.\\cite{squires2012} An analogous polarization argument shows that the scattering in $(0KL)$ plane with $\\bm{q_c}$ $\\sim 0.5\\bf{c^*}$ arises from blue sites.\n\n\\begin{figure}[t]\n \\includegraphics[width=3.4 in]{fig10}%\n \\caption{\\label{fig:10}Calculated single ion magnetic susceptibility for (a) red Ho1 site and (b) blue Ho2 site in $\\rm SrHo_2O_4$ along three axes based on CEF level schemes calculated according to point charge approximation.}\n\\end{figure}\n\n\\subsection{ANNNI model}\n\n \\begin{table}[t\n \\caption{\\label{tab:1}Magnitude of dipolar interaction energies between neighboring spins in $\\rm SrHo_2O_4$ assuming Ising moment size of 6.2 $\\mu_{\\rm B}$ and 9.9 $\\mu_{\\rm B}$ on red and blue sites respectively. The numbering of $\\rm Ho^{3+}$ ions is as shown in Fig.~\\ref{fig:1}(b). Intra-ladder dipolar energies (row 1-4) are positive\/negative for FM\/AFM interactions. 
For comparison, the corresponding ANNNI model exchange constants inferred from susceptibility fits are in the third column.}\n \\begin{ruledtabular}\n \\begin{tabular}{l c r}\n Pair of Spins & Dipolar Energy (meV) & $J_{\\rm ANNNI}$ (meV) \\\\\n \\textcolor{black}{Ho1(1)} - \\textcolor{black}{Ho1(3)} & -0.01 & \\textcolor{black}{$J_{r1}$} -0.10(2) \\\\\n \\textcolor{black}{Ho1(1)} - \\textcolor{black}{Ho1(1)}+\\bf{c} & ~0.10 & \\textcolor{black}{$J_{r2}$}~~0.04(3) \\\\\n \\textcolor{black}{Ho2(1)} - \\textcolor{black}{Ho2(3)} & ~0.08 & \\textcolor{black}{$J_{b1}$} -0.14(3) \\\\\n \\textcolor{black}{Ho2(1)} - \\textcolor{black}{Ho2(1)}+\\bf{c} & -0.13 & \\textcolor{black}{$J_{b2}$} -0.21(1)\\\\\n \\textcolor{black}{Ho1(1)} - \\textcolor{black}{Ho2(3)} & ~0.05 &\\\\\n \\textcolor{black}{Ho1(4)} - \\textcolor{black}{Ho2(1)} & ~0.00 &\\\\\n\n \\end{tabular}\n \\end{ruledtabular}\n \\end{table}\n\nThe dominant intra-ladder SRO (Fig.~\\ref{fig:4}) suggests the minimal model for $\\rm SrHo_2O_4$ is a collection of two types of independent Ising zig-zag spin ladders with different inter-leg interactions $J_{r1},J_{b1}$ and intra-leg interactions $J_{r2},J_{b2}$ for the red and blue chains respectively [Fig.~\\ref{fig:1}(b)]. The underlying model for both chains is the exactly solvable 1D ANNNI model\\cite{Selke1988213}:\n\\begin{eqnarray} \\label{eq:1}\n H=\\sum_{i}-J_1S_iS_{i+1}-J_2S_iS_{i+2},\n\\end{eqnarray}\nwhere $S_i=\\pm 1$. Dipolar interactions between spins in $\\rm SrHo_2O_4$ can be comparable to $T_{\\rm N}$ because of the large magnetic moment size for $\\rm Ho^{3+}$ ions. To be specific, consider magnetic moment size of $6.2~\\mu_{\\rm B}$ and $9.9~\\mu_{\\rm B}$ oriented along the easy axes of red and blue $\\rm Ho^{3+}$ sites respectively. These are the magnetic moment sizes for the two Ho sites estimated in this study that will be justified in detail in Sec.~\\ref{sec:momentsize}. 
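Dipolar energies of the kind listed in Table~\\ref{tab:1} follow from the classical dipole-dipole expression $E=\\frac{\\mu_0}{4\\pi d^3}\\left[\\bm{m}_1\\cdot\\bm{m}_2-3(\\bm{m}_1\\cdot\\hat{\\bm{r}})(\\bm{m}_2\\cdot\\hat{\\bm{r}})\\right]$. The sketch below is our own illustration, not the analysis code used here, with moments in $\\mu_{\\rm B}$, distances in \\AA, and the prefactor $\\mu_0\\mu_{\\rm B}^2\/4\\pi\\approx 0.0537$~meV\\,\\AA$^3$; the $\\sim$3.4~\\AA\\ spacing in the example is a hypothetical repeat distance:

```python
import numpy as np

MU0_MUB2_OVER_4PI = 0.0537  # mu0 * mu_B^2 / (4 pi) in meV * Angstrom^3

def dipolar_energy(m1, m2, r):
    """Classical dipole-dipole energy in meV between moments m1 and m2
    (3-vectors in units of mu_B) separated by the vector r (Angstrom)."""
    m1, m2, r = (np.asarray(v, dtype=float) for v in (m1, m2, r))
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_MUB2_OVER_4PI * (m1 @ m2 - 3 * (m1 @ rhat) * (m2 @ rhat)) / d**3

# Two 9.9 mu_B Ising moments along b, displaced by ~3.4 Angstrom along c,
# give an energy on the ~0.1 meV scale of the entries in Table 1:
E = dipolar_energy([0, 9.9, 0], [0, 9.9, 0], [0, 0, 3.4])
```

A positive energy for the parallel configuration shown means antiparallel alignment is favored, i.e., an AFM coupling, which would carry a negative sign in the convention of the table caption.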
The corresponding dipolar interaction energies between neighboring $\\rm Ho^{3+}$ ions are listed in Table~\\ref{tab:1}. These energies are of the order of 1~K and extend with considerable strength to further neighbors. The ANNNI model should thus be considered as a minimal effective model to describe each of the spin ladders in $\\rm SrHo_2O_4$. Antisymmetric Dzyaloshinskii-Moriya (DM) interactions are also allowed in the low crystalline symmetry of $\\rm SrHo_2O_4$. The strong Ising anisotropy, however, extinguishes intra-ladder DM interactions because all spins within each type of ladder are oriented along the same easy axis. While DM interactions between red and blue ladders are allowed, the different modulation wave vectors for the two types of ladders (0 and $0.5\\bf{c^*}$ respectively) render these and all other inter-ladder interactions ineffective at the mean field level. This may explain why the simple model of independent ANNNI chains that we shall explore in the following provides a good basis for describing the magnetism of $\\rm SrHo_2O_4$ outside of the critical regime near $T_{\\rm N}$.\n\nTo determine the exchange constants $J_1$ and $J_2$ we fit the anisotropic susceptibility to the analytical result for the susceptibility of the ANNNI model. The exchange constants for red and blue chains are denoted by $J_{r1,2}$ and $J_{b1,2}$ respectively. The uniform magnetic susceptibility $\\chi$ can be related to the two point correlation function as follows\n\\begin{eqnarray}\n\\chi\\equiv\\lim_{h \\to 0}\\frac{\\partial \\langle M\\sum_i S_i \\rangle}{\\partial h}=NM^2\\beta\\tilde{G}(\\bm{q}=0),\n\\end{eqnarray}\nwhere $N$ is the number of sites in the spin chain, $M$ is the dipole moment of each spin, $h$ is the magnetic field, $\\beta=1\/k_{\\mathrm{B}}T$, $\\tilde{G}(\\bm{q}=0)=\\sum_iG(i)$, and $G(i)\\equiv \\braket{S_0S_i}$ is the two-point correlation function for the 1D ANNNI model. 
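The susceptibility in this expression need not be evaluated in closed form: per site, it can be obtained numerically from the largest eigenvalue of a $4\\times4$ transfer matrix acting on spin pairs $(S_i,S_{i+1})$, by differentiating the free energy with respect to a small field. The sketch below is our own illustration (dimensionless units with $k_{\\rm B}=1$ and the moment $M$ absorbed into the field), not the fitting code behind Fig.~\\ref{fig:2}:

```python
import numpy as np

def annni_chi(J1, J2, T, h=1e-4):
    """Zero-field susceptibility per site of the 1D ANNNI chain
    H = -sum_i (J1*S_i*S_{i+1} + J2*S_i*S_{i+2} + h*S_i), S_i = +/-1,
    from chi = -d^2 f/dh^2 with f = -T*ln(lambda_max) per site."""
    pairs = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]

    def free_energy(field):
        Tm = np.zeros((4, 4))
        for a, (s1, s2) in enumerate(pairs):
            for b, (s2p, s3) in enumerate(pairs):
                if s2 == s2p:  # successive pairs must share the middle spin
                    Tm[a, b] = np.exp((J1 * s2 * s3 + J2 * s1 * s3
                                       + field * s3) / T)
        return -T * np.log(np.max(np.linalg.eigvals(Tm).real))

    # central second difference of the free energy in the field
    return -(free_energy(h) - 2.0 * free_energy(0.0) + free_energy(-h)) / h**2

# Sanity check: for J2 = 0 this is the plain Ising chain, chi = exp(2*J1/T)/T
chi = annni_chi(0.3, 0.0, 1.0)
```

For $J_2=0$ the result reproduces the exact nearest-neighbor Ising chain susceptibility $\\chi=\\beta e^{2\\beta J_1}$, a useful check before fitting the frustrated case.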
The fits were restricted to data points with $T\\le20$ K where the influence from higher CEF levels can be neglected.\n\nFor red chains (accessible through $\\chi_c$), the best fit is achieved for $J_{r1}=-0.10(2)$ meV and $J_{r2}=0.04(3)$ meV. The corresponding calculated susceptibility is shown as a red dashed line in Fig.~\\ref{fig:2}. These exchange parameters define an unfrustrated ANNNI chain where all interactions are simultaneously satisfied by the N\\'{e}el structure ($\\uparrow\\downarrow\\uparrow\\downarrow$). $\\chi_b$ for blue chains is best fit with $J_{b1}=-0.14(3)$ meV and $J_{b2}=-0.21(1)$ meV. While these competing interactions produce incommensurate short range correlations at finite temperatures, the ground state is the double N\\'{e}el structure ($\\uparrow\\uparrow\\downarrow\\downarrow$).\\cite{Selke1988213} The magnetic moment sizes extracted from fitting the susceptibility data for red and blue sites are 5.5(3) $\\mu_{\\rm B}$ and 8.1(2) $\\mu_{\\rm B}$ respectively, which are consistent with neutron diffraction measurements.\\cite{Poole2014,Young2012} These effective Ising exchange constants are compared to the dipolar interaction strengths in Table~\\ref{tab:1}. The significant discrepancies might be accounted for by contributions to the effective ANNNI interactions from superexchange, longer range dipole interactions, as well as higher order effects from inter-ladder interactions.\n\nA critical test of the quasi-1D ANNNI model for $\\rm SrHo_2O_4$ is offered by Fig.~\\ref{fig:4}. Frames (e) and (f) show a calculation of the diffuse magnetic neutron scattering intensity at the given temperature from such spin chains based on the Fourier transformation of the two-point correlation function $G(r)$ for the exchange constants derived from $\\chi_c$ and $\\chi_b$ and the particular crystalline structure of the ladders. 
Only an overall scale factor and a constant background were varied to achieve the excellent account of the ENS data in Fig.~\\ref{fig:4}(c)-(d). Though no correlations between spin chains are included, the finite width of the zig-zag spin ladders and the two different red ladder orientations in $\\rm SrHo_2O_4$ [Fig.~\\ref{fig:1}(b)] produce the checkerboard-like structure in the $(HK0)$ plane. It is remarkable that a purely 1D model can account for the magnetism of a dense 3D assembly of spin chains. Contributing to this are surely the different spin orientations for red and blue sites and the incompatible modulation wave vectors.\n\n\\subsection{Low temperature magnetic structure}\n\\label{sec:momentsize}\n\n\\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig7}%\n \\caption{\\label{fig:7}Magnetic structure refinement for $\\rm SrHo_2O_4$ based on $L-$integrated peak intensities measured for $T= 0.3$~K. (a) and (b) show the refinement results for red and blue sites respectively. Long dashed lines indicate $y=x$.}\n\\end{figure}\n\nWhile the strictly 1D ANNNI model can only form LRO at $T=0$ K, finite inter-chain interactions can induce LRO at finite $T_{\\rm N}$. ENS (Fig.~\\ref{fig:5}) reveals that the red sites form 3D LRO while the blue sites develop long range correlations only along the chain direction. To characterize the corresponding magnetic structures and to estimate the static magnetic moment sizes, a magnetic structure refinement was carried out based on ENS measured for $T=0.3$~K.\n\nWe first consider the 3D ordered red sites. Since the propagation vector $\\bm{q_c}=0$, magnetic Bragg scattering overlaps with nuclear Bragg peaks. 
For the unpolarized neutrons employed in this study, the total Bragg scattering intensity is simply the sum of these two contributions.\n\nRepresentation analysis\\cite{Wills2000680} shows that for $\\bm{q_c}=0$ there are two magnetic structures that are compatible with both the easy $\\bf{c}$-axis anisotropy and the short range magnetic order inferred from the diffuse neutron scattering pattern (Fig.~\\ref{fig:4}). Using the labeling of red Ho1 atoms in the unit cell as shown in Fig.~\\ref{fig:1}(b), these two structures can be represented as $\\phi_{1,2}\\equiv (\\bm{m}_1,\\bm{m}_2,\\bm{m}_3,\\bm{m}_4)=\\mathrm{m}\\bm{\\hat{c}}(1,\\pm1,-1,\\mp1)$, where $\\bm{m}_i$ is the magnetic moment on atom $i$, $\\mathrm{m}$ is the moment size, and $\\bm{\\hat{c}}$ is a unit vector along the $\\textbf{c}$-axis. The experimental observation that all nuclear-forbidden Bragg peaks in the $(H0L)$ plane with even $H$ indices have negligible magnetic scattering intensity shows that $\\phi_1$ is the appropriate magnetic structure, since $\\phi_2$ would give rise to magnetic Bragg peaks at such locations.\n\nTo extract the magnetic moment size for Ho1, the $L-$integrated intensities for all accessible Bragg peaks with integer indices, which contain both nuclear and magnetic scattering contributions, were compared to the calculated neutron diffraction intensity for the $\\phi_1$ structure. The nuclear structure factors were calculated according to the crystal structure determined in a previous study.\\cite{Karunadasa2005} Measurements in the $(0KL)$ and $(H0L)$ planes were co-refined while keeping the ratio of scale factors in the two reciprocal lattice planes fixed at the mass ratio for the samples employed in each reciprocal lattice plane. The best fit shown in Fig.~\\ref{fig:7}(a) yields a moment size of $6.8(4)~\\mu_\\mathrm{B}$ for the red Ho1 sites. 
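A minimal sketch of one step of such a refinement, under the simplifying assumptions of a known scale factor, unweighted residuals, and hypothetical per-peak input arrays (this is not the actual refinement code), is:

```python
import numpy as np

def refine_moment(I_obs, F2_nuc, F2_mag_unit, scale):
    """Least-squares moment size from integrated Bragg intensities, assuming
    I_model = scale * (F2_nuc + m**2 * F2_mag_unit) per peak, where
    F2_mag_unit is the magnetic intensity calculated for a 1 mu_B moment
    in the assumed magnetic structure (all arrays are hypothetical)."""
    y = np.asarray(I_obs) / scale - np.asarray(F2_nuc)
    x = np.asarray(F2_mag_unit)
    m2 = (x @ y) / (x @ x)          # closed-form linear least squares for m^2
    return np.sqrt(max(m2, 0.0))
```

In practice each reciprocal lattice plane carries its own scale factor (with the ratio fixed by the sample masses), and measured uncertainties weight the fit.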
The correspondence with the red chain moment of 5.5(3) $\\mu_\\mathrm{B}$ derived from ANNNI fits to $\\chi_c$ corroborates the assumptions that underlie this analysis.\n\nFor the blue sites, where the correlation length is short along the $\\bf{a}$ and $\\bf{b}$ directions, we approximate the peak shape as the product of a sharp Gaussian along $\\bf{c}$ with two broad Lorentzians along $\\bf{a}$ and $\\bf{b}$. For measurements in the $(0KL)$ plane, the peak width of the Gaussian and the in-plane Lorentzian were obtained by fitting the experimental data, while the peak width for the out-of-plane Lorentzian was assumed to be identical at all peak positions and was approximated by the average peak width along $\\bf{a}$ measured in the $(H0L)$ plane. Note that when taking the average, each peak width was weighted by the corresponding integrated intensity so that stronger peaks contribute with larger weight to the average: $\\overline{\\mathrm{FWHM}}=\\sum_i \\mathrm{FWHM}_i\\cdot I_i\/\\sum_i I_i$. Following the same procedure, the out-of-plane Lorentzian width for peaks within the $(H0L)$ plane was obtained from measurements in the $(0KL)$ plane.\n\nRepresentation analysis for $\\bm{q_c}=0.5\\bf{c^*}$ allows spin structures of the form $\\psi=\\mathrm{m}_1\\bm{\\hat{b}}(1,1,0,0)+\\mathrm{m}_2\\bm{\\hat{b}}(0,0,1,1)$. Since there is no 3D long range order, this magnetic structure only reflects the locally ordered pattern. It is assumed that $|\\mathrm{m}_1|=|\\mathrm{m}_2|$ since all four Ho2 ions in the unit cell are equivalent in the paramagnetic phase and no significant improvement in refinement was obtained by allowing $\\mathrm{m}_1$ and $\\mathrm{m}_2$ to vary independently. With this constraint there are still two different magnetic structures $\\psi_{1,2}=\\mathrm{m}\\bm{\\hat{b}}(1,1,\\pm1,\\pm1)$, which can be regarded as two domain types that are related by mirror reflection about the $\\bf{ab}$ plane. 
With no \\emph{a priori} reason to favor one domain over the other, it is assumed that both domains contribute equally. Using the same scale factor obtained from refinement of the red Ho1 moment size, a least squares fit to the $L-$integrated intensities of all accessible peaks of the form $(0K\\frac{2n+1}{2})$ and $(H0\\frac{2n+1}{2})$ was conducted. This resulted in a moment size of $11.6(8)~\\mu_\\mathrm{B}$ for blue Ho2. Fig.~\\ref{fig:7}(b) provides a comparison of the measured and calculated integrated intensities.\n\nThe analysis of anisotropic diffuse scattering that is the basis for the moment sizes extracted for blue sites is subject to systematic uncertainties that are not reflected in the error bars. We therefore consider the magnetic moment sizes extracted from the neutron measurements [6.8(4) $\\mu_{\\rm B}$ and 11.6(8) $\\mu_{\\rm B}$ for red and blue Ho] consistent with those obtained from the ANNNI susceptibility fits [5.5(3) $\\mu_{\\rm B}$ and 8.1(2) $\\mu_{\\rm B}$ respectively]. Combining these results, which are subject to different systematic errors, leads to an experimental average of 6.2(3) $\\mu_{\\rm B}$ and 9.9(4) $\\mu_{\\rm B}$ for red and blue sites respectively. The difference in the moment sizes for red and blue sites reflects the different CEF environments of the two sites. The moment on blue sites is close to the maximal magnetic moment size of 10 $\\mu_{\\rm B}$ for a Ho$^{3+}$ ion, which indicates a strong Ising character for blue Ho2 ions.\n\n\n\\subsection{Temperature dependent spin correlations}\n\n\\begin{figure}[t]\n \\includegraphics[width=3.2 in]{fig8}%\n \\caption{\\label{fig:8}Spin correlations versus $T$ in $\\rm SrHo_2O_4$ probed by neutrons. (a)-(e) show results from $L-$scans at $(030)$ (red) and $(00\\frac{1}{2})$ (blue) that probe correlations along red and blue chains respectively. (a) and (b) show integrated intensities and peak widths for $(030)$. The green dashed line in (b) shows the instrumental resolution. 
(c) shows integrated intensities for $(00\\frac{1}{2})$. (d) shows the peak shift from $(00\\frac{1}{2})$. The inset shows the peak shift along $\\bf c^*$ from $(0K\\frac{1}{2})$. The dashed line shows the predicted shift based on the $J_1-J_2$ model. (e) shows the inverse correlation length $\\kappa_c$ derived as the half width at half maximum of resolution convoluted Lorentzian fits. (f) shows $\\kappa_b$ extracted from $K-$scans at $(00\\frac{1}{2})$. Black dashed lines indicate $T_\\mathrm{N}$ and $T_\\mathrm{S}$. Blue dashed lines in (d) and (e) are ANNNI model calculations based only on the exchange constants obtained from the data in Fig.~\\ref{fig:2}.}\n\\end{figure}\n\nTo probe the interplay between the two types of spin ladders, scans for a range of $T$ were carried out through the $(030)$ and $(00\\frac{1}{2})$ peaks. These peaks respectively arise from red and blue chains (Fig.~\\ref{fig:8}). The result for red sites is shown in Fig.~\\ref{fig:8}(a) and (b). The integrated intensity of the peak, a measure of the staggered magnetization squared, grows below $T_\\mathrm{N}$ then saturates at $T_\\mathrm{S}=0.52(2)$~K. This is consistent with a second order phase transition in a uniaxial spin system where the gap in the magnetic excitation spectrum produces a characteristic saturation temperature. Near $T_\\mathrm{N}$, the critical exponent $\\beta=0.36(2)$ is consistent with that for the 3D Ising model [$\\beta_{3DI}=0.3258(14)$], but also indistinguishable from the 3D XY [$\\beta_{3DXY}=0.3470(14)$] and 3D Heisenberg models [$\\beta_{3DH}=0.3662(25)$].\\cite{PhysRevD.60.085001} The peak width [Fig.~\\ref{fig:8}(b)] decreases markedly upon cooling towards $T_\\mathrm{N}$ signalling the development of commensurate long range correlations among red spins.\n\nA rather different situation is found for the blue chains. 
There is no anomaly in the temperature-dependent $L-$integrated intensity of the $(00\\frac{1}{2})$ peak at $T_\\mathrm{N}$ but a gradual increase upon cooling that terminates at $T_\\mathrm{S}$ [Fig.~\\ref{fig:8}(c)]. The nature of spin correlations along the blue chains is probed by the position and width of the $(00\\frac{1}{2})$ peak. Both evolve continuously across $T_\\mathrm{N}$ in semi-quantitative agreement with the ANNNI model, using the parameters that also describe the susceptibility and diffuse neutron scattering data. The trend, however, ceases at $T_\\mathrm{S}$ with a peak center position of $0.501\\bf{c^*}$. The deviation, $\\Delta q_c$, from the commensurate position $0.5\\bf{c^*}$ is significant, and a long-wavelength modulated structure is apparent as an oscillation of the centers for other $(0K\\frac{1}{2})$-type peaks with $K=1,2,3,4,5$ [inset to Fig.~\\ref{fig:8}(d)]. This is consistent with $G(r)$ for the incommensurate zig-zag ladder, which is indicated by the dashed line in the inset.\n\n\\section{Discussion and Conclusion}\n\\label{sec:discussion}\n\n\\begin{figure}[t]\n \\includegraphics[width=3.4 in]{fig9}%\n \\caption{\\label{fig:9}Ground state degeneracy and domain walls of ANNNI chains in $\\rm SrHo_2O_4$. (a) and (b) show the twofold degenerate ground states for red chains and the corresponding domain wall. (c) illustrates that a $R1$ to $R1^*$ domain wall is identical to a $R1^*$ to $R1$ domain wall (up to time reversal). (d) and (e) show the more complicated situation for blue chains where the ground state is fourfold degenerate and the domain walls are chiral. (f) illustrates the chiral character of the domain walls in blue chains: a $B1$ to $B2$ domain wall is different from a $B2$ to $B1$ domain wall. Refer to Sec.~\\ref{sec:discussion} for a detailed description. 
Dashed rectangles in (c) and (f) encircle spins whose bond energies are affected by transition to a different ground state, and thus represent the domain walls.}\n\\end{figure}\n\nTo understand the very different behaviors of red and blue spin chains, we consider the ground state degeneracies and the corresponding domain wall structures of these two weakly coupled spin systems (Fig.~\\ref{fig:9}). For the red chains there are two ground states, which are time-reversal partners. These are shown as $R1$ and $R1^*$ in Fig.~\\ref{fig:9}(a). A transition between these structures involves a domain wall that costs a finite energy of $\\Delta E=-2J_{r1}+4J_{r2}$. The situation is more complicated in the blue chains. In their ground state there are four sites per magnetic unit cell and therefore four different types of domains that correspond to shifting the ($\\uparrow\\uparrow\\downarrow\\downarrow$) motif with respect to the origin. We label these as $B1$, $B2$ and their time-reversal partners $B1^*$ and $B2^*$ in Fig.~\\ref{fig:9}(d). Low energy domain walls correspond to transitions between ground states that are shifted by only one lattice site. Their ``chiral'' character can be appreciated by comparing the energy cost of a domain wall that goes from $B1$ to $B2$ [counterclockwise in Fig.~\\ref{fig:9}(e)] to a domain wall that effects a transition from $B2$ to $B1$ [clockwise in Fig.~\\ref{fig:9}(e)]. By convention, we choose the right direction as the positive direction of the spin chain. The domain walls are chiral because their energy depends on handedness in Fig.~\\ref{fig:9}(e): $\\Delta E_{\\rm ccw}=J_{b1}-2J_{b2}$ and $\\Delta E_{\\rm cw}=-J_{b1}-2J_{b2}$. 
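The red-chain wall energy quoted above follows from counting the bonds frustrated at the wall, which can be verified by a direct numerical splice of the two ground states. The following is a minimal sketch with a hypothetical function name; energies are in the same units as $J_{1,2}$.

```python
def red_wall_energy(J1, J2, W=30):
    """Energy cost of a Neel -> time-reversed-Neel domain wall in the chain
    H = -J1*sum s_i s_{i+1} - J2*sum s_i s_{i+2}, by direct bond counting
    over a window of 2*W sites around the wall."""
    neel = lambda i: 1 if i % 2 == 0 else -1         # ...up dn up dn...
    wall = lambda i: neel(i) if i < 0 else -neel(i)  # domain wall at i = 0
    def energy(s):
        return sum(-J1 * s(i) * s(i + 1) - J2 * s(i) * s(i + 2)
                   for i in range(-W, W))
    # far from the wall the two configurations have identical bond energies,
    # so the difference converges to the wall energy
    return energy(wall) - energy(neel)
```

One nearest-neighbor bond and two next-nearest-neighbor bonds change sign at the wall, reproducing $\Delta E=-2J_{r1}+4J_{r2}=0.36$~meV for the fitted red-chain exchange constants.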
The larger ground state degeneracy and the correspondingly more complex domain wall structures for blue chains complicate their attainment of thermodynamic equilibrium.\n\nIt is useful to consult the 3D ANNNI model\\cite{Selke1988213} to understand how the different domain wall structures may affect the low temperature magnetic ordering. For $J_{r2}\/J_{r1}=-0.4(3)$, the mean field phase diagram of the 3D ANNNI model features a single phase transition from a paramagnetic (PM) phase to 3D N\\'eel order, as we observe for the red chains in $\\rm SrHo_2O_4$. The exchange parameters of the blue chains [$J_{b2}\/J_{b1}=1.5(3)$], however, place them in a complicated part of the 3D ANNNI phase diagram. Between the PM phase and 3D double-N\\'eel order exists a large number of 3D LRO phases with different modulation wave vectors. These can be described in terms of different arrangements of domain wall defects within the double N\\'{e}el structure. Effective interactions between defects stabilize these various phases at different temperatures.\n\nWith this picture in mind, the continuous peak center shift observed in Fig.~\\ref{fig:8}(d) reflects domain wall rearrangement along blue chains. Focusing on domain walls, full 3D LRO requires registry in the placement of transitions between domains along all blue spin chains. Such collective domain wall motion requires rearrangement of large numbers of spins and so can be a slow process. Further, in the non-Kramers doublet ground state of Ho$^{3+}$, a spin flip can only take place through tunneling or a thermal process involving higher CEF levels. In the recently proposed CEF level scheme,\\cite{Poole2014} blue sites have a large energy gap ($\\sim$ 12 meV) to the first excited state (compared to $\\sim$ 1 meV for red sites). 
This can be expected to reduce the tunneling and thermal rate for blue spin flips at low temperatures.\n\nWe now return to the important experimental observation that the spin configuration on blue chains ceases to evolve when red chains become fully ordered for $T