diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdduf" "b/data_all_eng_slimpj/shuffled/split2/finalzzdduf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdduf" @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\\label{sec:intr}\n\nA considerable amount of work has been done over the last few years to improve the simulations of MPGDs. A full understanding of the micro-physics associated with MPGDs operation is vital for the improvement of such detectors, whose some behaviours are still not completely understood.\\cite{COliveira2012JINST,Ozkan2010JINST,spark_simulation}.\n\n\nSome works report that there is a transient period during which the effective gain changes, after voltages are applied and the detector irradiated\\cite{Azmoun200611,THGEM_operation_Ne_CH4}. \nThe gain ends up stabilizing after minutes or even hours, depending on the MPGD and the rates of irradiation.\n\nMPGDs were developed to detect radiation, and their main applications are for high energy physics, astrophysics, rare-event searches and medical imaging\\cite{mpgd_progression,natal_luz}. \nThe \\mbox{\\textit{Gas Electron Multiplier}} (GEM)\\cite{Sauli1997531} has been largely used in many of those applications. \n\nThe device consists of a thin polyimide (insulator) foil typically 50 $\\mu$m thick. The foil is covered on both sides with 5 $\\mu$m thick layers of a conductor and etched with an hexagonal pattern of holes. The device operates inside a gas medium and suitable electric potentials are applied between the upper and the lower electrodes of the structure. In this way, a very high electric field is created inside the holes. Electrons created in the drift region by interaction of external radiation, travel towards the micro-structure, being focused into the holes and accelerated. They acquire enough energy to ionize atoms\/molecules of the gas, creating new ionizations. The secondary electrons undergo the same process while inside the hole and an avalanche ends up being produced, the Townsend avalanche.\n\nThe charges produced during multiplication have two possible destinations: they may be collected by conductor electrodes, both those of the GEM itself or of any other readout setup; and a fraction of them ends up accumulating in the insulator surfaces.\nThe number of simulated avalanches is correlated with the number of primary electrons that undergo to the hole, since we assume that no charges will drift in the insulator. \n\nEffective gain is defined as the number of secondary electrons, for each primary electron, that are collected in an electrode plane located below the GEM (figure~\\ref{fig:plane_section}).\n \n The electronic affinity of the polyimide usually used in these devices is high, 1.4 eV \\cite{poly_affinity}. Once electrons are trapped, it is unlikely that they are able to leave the surface. The same process happens with ions, according to \\cite{sessler}. \n\nThe effective gain of the detector strongly depends on the intensity of the electric field produced in the multiplication region. Charges accumulated in the insulator surfaces change locally the electric field, changing the amplification gain. This is known as the charging-up effect in the insulator.\n\n\nDeposited charges can flow through the insulator surface and insulator bulk under the action of the electric field. 
\nPrevious studies propose that positive ions are not captured on insulator surfaces; instead, they transfer their charge to intrinsic carriers of the insulator, and the conduction should be carried by electrons and holes\\cite{sessler}. \nThe time for charge evacuation is of the order of several hours to days, so we did not include this effect in our method, i.e. all deposited charges will remain on the same surface during the whole simulation time. \nThis approach should remain valid if the charging-up process is much faster than the draining of charges.\n\nSimulations of the charging-up influence on the GEM transparency have already been reported\\cite{Alfonsi20126}. \nIn order to study the contribution of charging-up to the effective gain variations, two methods to simulate the charge accumulation in the detector are presented.\nWe also compare our results with available experimental data. \n\n\n\\section*{Calculations}\\label{sec:gem_thgem}\n\n\n\\subsection*{Geometry}\\label{subsec:mpgds_geom}\n\nA hexagonal hole pattern with a distance between holes of 140 $\\mu$m was considered; the insulator thickness is 50 $\\mu$m and the metal electrodes are 5 $\\mu$m thick. Figure \\ref{fig:cross_section} shows the GEM geometry.\nThe corresponding electric field configuration is depicted in figure \\ref{fig:plane_section}. The hole has a bi-conical shape: the outer diameter is 70 $\\mu$m, while the narrower part has a diameter of 50 $\\mu$m.\n\n\\begin{figure}[htp]\n\\centering\n\\subfloat[]{\\includegraphics[width=.45\\textwidth]{gem_cross_section_dimensoes.pdf}\n\\label{fig:cross_section}}\\quad\n\\subfloat[]{\\includegraphics[width=.45\\textwidth]{gem_cross_section_plano.pdf}\n\\label{fig:plane_section}}\n\\caption{a) Cross sections of a GEM, with the dimensions and voltages between electrodes. b) Simulated configuration and applied drift and induction fields.}\n\\end{figure}\n\n\\subsection*{Gases}\\label{subsec:gases}\n\nIn order to compare simulation results with our measurements, we simulated a gas mixture of $\\mathrm{Ar \\ 70\\% \\ \/ \\ CO_2 \\ 30\\%}$. This is a Penning mixture: the presence of the quencher molecule CO$_{2}$ opens new de-excitation channels for the previously excited noble gas atoms. If this excess of energy is above the ionization threshold of the quencher molecule, an ionization may occur, with a probability called the Penning probability. We used a Penning probability of 0.7 in these simulations, based on previous calculations for the $\\mathrm{Ar \\ 70\\% \\ \/ \\ CO_2 \\ 30\\%}$ mixture\\cite{Ozkan2010JINST,sven,penning1,penning2}.\n\nThe drift and induction fields (electric fields applied above and below the GEM, respectively) were 0.2 and 0.3 kVcm$^{-1}$.\nTaking into account that the computational time strongly depends on the gain, because a higher number of electrons needs to be tracked within the avalanche, a potential of 400 V between electrodes was used for the first simulation tests, corresponding to gains of $\\sim$10$^{2}$.\nAll simulations were performed considering a temperature of 293 K and a pressure of 760 Torr.\n\n\\section*{Simulation details}\n\n\\subsection*{Software platforms}\n\nThe Monte Carlo calculations involved three programs. 
Due to the complex shape of the GEM structure, an analytic solution for the electric field in the region of interest cannot be obtained.\nTo overcome this problem, the electric field is computed with \\textit{Finite Element Method} (FEM) software, which calculates the electric potential at discrete nodes of a mesh, using boundary conditions. \nWe used ANSYS$^\\circledR$\\footnote{www.ansys.com} to produce potential maps, which we generically call field maps, selecting curved tetrahedral elements as our mesh elements because they easily fit the sharply curved surfaces present in GEMs.\n\nTo simulate the drift and transport properties of electrons and ions in the MPGD gas medium, we used Garfield++ \\cite{garfieldpp}. As input, this software requires the electric field configuration in the MPGD, the gas mixture, temperature, pressure and initial conditions of the primary charges (position, velocity and energy). \n\nRegarding the electric field configuration, we used Garfield++ to read the potential maps calculated with ANSYS$^\\circledR$ and to calculate the electric field at any point of space by interpolation between nodes.\n\nA microscopic approach is used to simulate the drift of the charges. This uses Monte Carlo methods to calculate the probability of each type of collision occurring during the drift (elastic, excitation or ionization). \nThe cross sections associated with each collision type are obtained from Magboltz\\cite{magboltz,Biagi1999234}.\n\nEach primary electron starts with an assigned position $\\vec{r}_\\mathrm{start}=(x_\\mathrm{s},y_\\mathrm{s},z_\\mathrm{s})$, velocity $\\vec{v}_\\mathrm{start}=(v_\\mathrm{x,s},v_\\mathrm{y,s},v_\\mathrm{z,s})$ and kinetic energy $E_\\mathrm{start}$, drifting through the gas and producing secondary charges as it crosses the multiplication region.\nThe final position of each secondary charge, $\\vec{r}_\\mathrm{end}$, and the effective gain are the observables of interest that are recorded for further analysis of the charging-up effect.\n\n\n\\subsection*{Initial attempts}\n \nTo start our simulations, we randomly distributed 10$^{4}$ primary electrons on the surface of a plane parallel to the GEM, located 100 $\\mu$m above it, indicated as the start plane in figure \\ref{fig:cross_section}. \n\n\nIn order to determine the number of collected and deposited electrons and ions, the final positions of the electrons and ions from the avalanches are analysed:\n\\begin{itemize}\n \\item Electrons are collected if their final z coordinate is -100 $\\mu$m, i.e. at the collection plane located 100 $\\mu$m below the GEM (represented in figure \\ref{fig:plane_section}).\n \\item Ions are collected if their final z coordinate is at the top electrode of the GEM.\n \\item Electrons and ions are deposited on the insulator surface if, after the drift, their z coordinate is between -25 $\\mu$m and +25 $\\mu$m.\n\\end{itemize} \n\nGEMs prior to charging-up, i.e. without deposited charges, are called uncharged GEMs, while charged GEMs are those that already have charges deposited due to charging-up. \nThe deposition distributions of charges (electrons and ions separately) on the insulator, for the case prior to charging-up (figure \\ref{fig:depo_hist_with_no_charg}), show that the charges are not deposited uniformly on the hole surface. \nIn addition, the number of deposited electrons is higher than the number of deposited ions. The reason is related to the mass of each particle. 
Ions are heavier than electrons and tend to follow the field lines closely, in the direction of the electrodes. Electrons have a much more chaotic movement, due to their lower mass, and have a higher probability of ending up on the insulator surfaces. This originates variations in the local electric field.\n\n\\begin{figure}[htp]\n \\centering\n \\subfloat[Uncharged GEM]{\\includegraphics[width=.45\\textwidth]{deposition_400V_gem_uncharged.eps}\\label{fig:depo_hist_with_no_charg}}\\quad\n \\subfloat[Charged GEM]{\\includegraphics[width=.45\\textwidth]{deposition_400V_gem_charged.eps}\\label{fig:depo_hist_with_charg}}\\\\\n \\caption{Spatial distribution of charges deposited on the insulator surface of the GEM detector, before (\\ref{fig:depo_hist_with_no_charg}) and after (\\ref{fig:depo_hist_with_charg}) simulation of primary avalanches, at $V_\\mathrm{GEM}=$ 400 V.}\n \\label{depo_hist_with_charg}\n\\end{figure}\n\nAfter some avalanches, the distributions of new electrons and ions that reach the insulator tend to compensate each other, due to the Coulomb attraction between previously and subsequently deposited charges (figure \\ref{fig:depo_hist_with_charg}). The local variation in the electric field will therefore vanish and a stable configuration will be achieved.\n\n\nIn order to simulate the effective gain variation as avalanches happen, we needed to iteratively include this charge deposition in the potential maps computed with ANSYS$^\\circledR$. The software does not provide the option to put single charges at their exact deposition positions on the insulator surface. In addition, this scenario would lead to discontinuities and numerical issues.\nInstead, we created small slice surfaces on the insulator foil and added the corresponding charge density to each surface. Due to the shape of the deposition, and to computational limitations of field map files for very small finite elements, we used 24 different slices in the insulator, achieving in this way a good balance between the detail of the calculations and the needed computing power. The slices are not regularly distributed, as shown in figure \\ref{fig:gem_slice}, in order to match the z profile of the charge deposition histograms (figure \\ref{fig:depo_hist_with_no_charg}).\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=.3\\textwidth]{gem_pictures003-eps-converted-to.pdf}\n\\caption{Unit cell of a GEM, used to calculate the field maps with ANSYS$^\\circledR$. 24 slices of different sizes were non-uniformly distributed, reflecting the non-uniformity of the charge deposition on the insulator surface shown in figure \\ref{fig:depo_hist_with_no_charg}.\\label{fig:gem_slice}}\n\\end{figure}\n\n\n \n\\subsection*{Constant Step Method}\\label{subsec:const_meth}\n\n\nThe flow-chart of the first iterative algorithm used to simulate the charging-up is shown in figure \\ref{fig:cont_Step_meth}.\n\n\\begin{figure}[htp]\n \\centering\n {\\includegraphics[width=.5\\textwidth]{charg_corrigido.pdf}\\caption{Constant step method flow-chart.\\label{fig:cont_Step_meth}}}\n\\end{figure}\n\nAt the first iteration, we compute the electric field map assuming no charges on the insulator surface. Then, we import that field map into Garfield++, simulate $10^{4}$ primary avalanches and determine the charge density deposited on each insulator slice surface.\nA new field map is then created, including the contribution of the previously deposited charges. 
The charge density in each slice is calculated taking into account the contributions of both the ions and the electrons ending up on the insulator surface. \nA new set of $10^{4}$ primary avalanches is then simulated and the process is repeated iteratively.\n\nIt was found that the statistical fluctuations in the calculated gain depend on the number of simulated avalanches per step, but the number of deposited charges per avalanche seems to be less sensitive to fluctuations.\nA small step-size of $10^{4}$ primary electrons\nwas chosen in order to obtain good detail in the time evolution of the charging-up. However, this small step implies hundreds of iterations until stabilization, which leads to a very heavy computation.\n\nSince the number of deposited charges is responsible for the local variation in the electric field, we use that observable as our control function for the iterative simulation, i.e. we stop our iterations when its value stabilizes over the iterations (corresponding also to a gain stabilization).\n \n\n\\subsection*{Dynamic Step Method}\n\nIn order to accelerate the simulation process, we developed an extended method that uses a dynamic step-size in each iteration.\nThis step-size is smaller when the number of deposited charges per avalanche changes quickly, and is larger when this quantity is more constant, i.e. when the deposition stabilizes.\n\nTo constrain the size of the step, we defined that the maximum total charge (the signed sum of the ion and electron charges) that can be added to the new field map should not be larger than 2$\\times$10$^{4}$ $q_{e}$ (where $q_{e}$ is the elementary charge, 1.6$\\times10^{-19}$ C). In this way, the maximum allowed number of avalanches per step is equal to $\\frac{2\\times10^{4}q_{e}}{G_{tot}}$, where $G_{tot}$ is the absolute gain in each iteration. The output of this calculation gives us an upper limit for the step-size, considering the maximum charge that can be added to the new potential maps in each iteration. \nOur attempts show that this upper limit is an acceptable value in terms of the convergence and speed of the method, but other limits can be defined.\nThe dynamic method is briefly described in the flow-chart in figure \\ref{fig:din_Step_meth}.\n\\begin{figure}[htb]\n \\centering\n {\\includegraphics[width=.65\\textwidth]{charg_din_corrigido.pdf}\\caption{Dynamic step method diagram.\\label{fig:din_Step_meth}}}\n\\end{figure}\n\n\nThe method starts with an uncharged ANSYS$^\\circledR$ field map of the GEM. In each iteration we simulate $10^{3}$ primary avalanches, which is a good compromise between statistical fluctuations and computational time. \n\n\nThe number of deposited charges per avalanche, in each slice of the insulator surface, is multiplied by the variable step. \nFor the first iterations, steps between $0.5\\times10^{3}$ and $10^{3}$ primary avalanches were used. \n\nAfter the first 5 iterations (a number sufficient to allow a reasonable fit), we fit the number of deposited charges per avalanche to a first-order polynomial and calculate, for a given step, what the value of that function should be at the new iteration. 
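\n\nAs an illustration of the procedure described so far, a minimal Python-like sketch of the iterative loop is given below; the routines \\texttt{compute\\_field\\_map}, \\texttt{simulate\\_avalanches} and \\texttt{deposition\\_stabilized} are hypothetical stand-ins for the ANSYS$^\\circledR$ field map calculation, the Garfield++ avalanche simulation and the stopping criterion, and are not part of the actual implementation.\n\\begin{verbatim}\nMAX_CHARGE = 2.0e4   # cap on the total charge added per iteration,\n                     # in units of the elementary charge q_e\nN_SLICES   = 24      # insulator slices used in the field maps\n\nslice_charge = [0.0] * N_SLICES              # accumulated charge per slice (q_e)\nfield_map = compute_field_map(slice_charge)  # uncharged GEM at the start\nstep = 1.0e3                                 # initial step, in primary avalanches\n\nwhile not deposition_stabilized():\n    # 10^3 primary avalanches per iteration (Garfield++ stand-in); returns the\n    # deposited charge per avalanche in each slice and the total gain\n    deposits, gain_tot = simulate_avalanches(field_map, n_primaries=1000)\n    # upper limit on the step so that at most MAX_CHARGE*q_e is added per iteration\n    step = min(step, MAX_CHARGE / gain_tot)\n    # scale the deposition per avalanche by the step and update the slices\n    for i in range(N_SLICES):\n        slice_charge[i] += step * deposits[i]\n    field_map = compute_field_map(slice_charge)  # new ANSYS field map\n\\end{verbatim}\nHow the step is subsequently halved or doubled, based on the comparison with the polynomial fit, is specified next.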
\n\nWe then simulate iteration number 6, and compare it with the value predicted by the fit:\n\n\\begin{itemize}\n\\item If the difference between the simulated and the fitted value is larger than the maximum defined step, the iteration is discarded, the step is halved, and the iteration is repeated.\n\n\\item If the difference between the simulated and the fitted value is smaller than the maximum defined step, the iteration is saved and the step is doubled. A new iteration is then calculated, and the new fit considers only the last 5 valid iterations.\n\\end{itemize} \n\n\\section*{Results}\\label{GEM_sub}\n\\subsection*{Comparison between methods}\nThe sum of all electric charges (the integral of the deposition histograms in figures \\ref{fig:depo_hist_with_no_charg} and \\ref{fig:depo_hist_with_charg}) deposited on the insulator surface, per primary avalanche, is shown in figure \\ref{fig:depo_2method} for both methods, as a function of the charge produced by each avalanche per hole (which is simply the number of simulated primary electrons in each hole multiplied by the total gain).\n\n\\begin{figure}[htb]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.45\\textwidth]{deposition400V.eps}\\label{fig:depo_2method}}\\quad\n\\subfloat[]{\\includegraphics[width=0.45\\textwidth]{cons_vs_dyn_method_400V.eps}\\label{fig:gain400V}}\n\\caption{a) Total number of deposited charges per produced secondary charge per hole, for both the constant and the dynamic method. b) Comparison of the total gain (total number of secondary electrons produced per avalanche) between the constant and the dynamic method. Both plots were obtained for V$_{\\mathrm{GEM}}=400$ V. \\label{fig:compar}}\n\\end{figure}\n\nThe agreement between both methods is clear. However, the dynamic-step method saves computational resources, using about one tenth of the iterations.\n\n\nFigure \\ref{fig:gain400V} shows the total gain evolution for the two methods. We observe an increase in effective gain, followed by a stabilization plateau, reached with both methods. \nGiven these results, from now on we will only consider the dynamic-step method for the calculations.\n\n\n\\subsection*{Charging-up effect in the GEM transmission}\nPrimary electrons produced by incident radiation and drifting towards the GEM holes can be collected on the top electrode, thus not producing avalanches. \nThe ratio between the number of primary electrons that enter the holes, producing avalanches, and the total number of simulated primary electrons is defined as the electron transmission, shown in figure \\ref{fig:trasmission} for several voltages applied to the GEM electrodes. \n\n\nThe contribution of the charging-up effect to the electron transmission is more important when low voltages (<400 V) are used and negligible when higher electric potentials are used.\n\n\\subsection*{Effective gain with and without charging-up}\nThe dependence of the effective gain on the voltage applied between the electrodes of the GEM detector is shown in figure \\ref{fig:gain_vs_vgem}. \nThe gain, after charging-up stabilization, is 10-15\\% higher than in the situation without charging-up.\n\n\n\\begin{figure}[htb]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.42\\textwidth]{transmission_vs_vgem.eps}\\label{fig:trasmission}}\\quad\n\\subfloat[]{\\includegraphics[width=0.43\\textwidth]{effgain_vs_vgem.eps}\\label{fig:gain_vs_vgem}}\n\\caption{a) Electron transmission as a function of the voltage applied between the GEM electrodes. 
b) Effective gain comparison between charged (red) and uncharged (green) GEM, for different voltages between the electrodes.}\\label{fig:gain_and_transmission_with_charging}\n\\end{figure}\n\n\n\\subsection*{Electric field intensity variation}\nA 2D representation of the electric field in the GEM is shown in figure~\\ref{fig:efield_variation}. Each plot is obtained by calculating the intensity of the electric field on a plane corresponding to a vertical cross section of the GEM hole, at four different stages of the charging-up process.\n\n\\begin{figure}[htp]\n\\centering\n\\subfloat[Without charging-up.]{\\includegraphics[width=0.45\\textwidth]{Efile0.jpg}\\label{fig:efield_a}}\\quad\n\\subfloat[3 $\\times 10^{6}$ avalanches.]{\\includegraphics[width=0.45\\textwidth]{Efile30.jpg}\\label{fig:efield_b}}\\quad\n\\subfloat[6 $\\times 10^{6}$ avalanches.]{\\includegraphics[width=0.45\\textwidth]{Efile60.jpg}\\label{fig:efield_c}}\\quad\n\\subfloat[10 $\\times 10^{6}$ avalanches.]{\\includegraphics[width=0.45\\textwidth]{Efile100.jpg}\\label{fig:efield_d}}\\quad\n\\caption{Evolution of the intensity of the electric field, in a GEM cross section, computed with ANSYS$^\\circledR$. The colorbar refers to the logarithm of \\textbf{E}. Only intensities above 100 kVcm$^{-1}$ ($\\ln (100)\\simeq 4.6$) are colored. }\\label{fig:efield_variation}\n\\end{figure}\n\n We can observe that the largest change in the electric field occurs near the electrodes. While the intensity of the electric field near the top (negatively polarized) electrode decreases, it increases near the center of the hole and near the bottom (positively polarized) electrode. \nThe development of an avalanche inside the hole follows a nearly exponential model. The largest fraction of secondary electrons is produced at the exit of the hole, in the last stages of the avalanche. There, the electric field is higher due to the charging-up effect, and thus the effective gain increases as a result of this process.\n \n \\subsection*{Comparison with experimental results}\n\nExperimental measurements were performed at CERN. The physical parameters of the GEM and the gas mixture used for the measurements correspond to the simulation settings. X-ray photons were used as ionizing radiation. $\\mathrm{K_{\\alpha}}$ and $\\mathrm{K_{\\beta}}$ photons, with energies of 8.0 $\\mathrm{keV}$ and 8.9 $\\mathrm{keV}$ respectively, were emitted by the X-ray tube, which employed a copper target. \n\nCollimators were used to control the photon flux in order to regulate the rate of charging-up. The gain was measured over time, for a constant irradiation flux. The GEM structure was housed inside an air-tight chamber, shown in figure \\ref{fig:Timing}, which had a constant circulation of the gas mixture, with a flow rate of $\\mathrm{6\\ l.h^{-1}}$. The chamber pressure was maintained at $\\mathrm{760\\ Torr}$. Figure \\ref{fig:Flowchart} shows the schematic setup for gain calibration (left) and gain measurement (right). In order to compare with the simulations, the experimental results were normalized from the time scale to charges per hole. 
This is done by multiplying the number of primary electrons produced per second by the absolute gain and dividing by the number of irradiated holes of the GEM.\n\nFor a detailed description of the measurement procedure, refer to \\cite{mythra}.\n\n\n\\begin{figure}[ht]\n{\\centering \\resizebox*{3in}{3in}{\\includegraphics{TimingGEM.jpg}} \\par}\n\\caption{GEM foil being mounted on a timing-GEM chamber.}\n\\label{fig:Timing}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{minipage}[b]{0.5\\linewidth}\n{\\centering \\resizebox*{3in}{3in}{\\includegraphics{Gain_calibration.png}} \\par}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n{\\centering \\resizebox*{3in}{3in}{\\includegraphics{Gain_measurement.png}} \\par}\n\\end{minipage}\n\\caption{Flow chart depicting the setup for gain calibration (left) and gain measurement (right).}\n\\label{fig:Flowchart}\n\\end{figure}\n \nA comparison between the Monte Carlo calculations and the measurements is shown in figure~\\ref{fig:mythra_gain_comp}.\nIn both cases the total gain increases as the GEM starts to be irradiated, then reaching a plateau.\nBoth data sets were normalized to their plateaus in order to directly compare the time evolution of the Monte Carlo simulations and the experimental values, as shown in figure \\ref{fig:mythra_gain_comp_norm}. From this observation, the Monte Carlo simulations reproduce the time evolution of the gain.\n\nOn the other hand, the values of the total gain still do not match. This can be related to the mobility of the charges on the insulator surfaces and in the bulk, numerical issues associated with the finite element method computed with ANSYS$^\\circledR$, impurities in the gas, and imperfections in the GEM dimensions introduced during production.\n\n\\begin{figure}[ht]\n\\centering\n\\subfloat[]{\\includegraphics[width=0.42\\textwidth]{comparation_mytra_380V.eps}\\label{fig:mythra_gain_comp}}\\quad\n\\subfloat[]{\\includegraphics[width=0.42\\textwidth]{comparation_mytra_380V_normalized.eps}\\label{fig:mythra_gain_comp_norm}}\\quad\n\\caption{a) Absolute gain comparison between measured (red) and simulated (green) results. Same situation as figure \\ref{fig:gain400V} but with $V_\\mathrm{GEM}$=380 V. b) Same plot as figure \\ref{fig:mythra_gain_comp} but with the gain normalized. Experimental data taken by Mythra Varun Nemallapudi at the RD51 facilities, CERN.\\label{fig:mythra_gain_comp_tot}}\n\\end{figure}\n\n\n\n\n\\section*{Conclusion and future work}\nIn this work we have presented two iterative methods for the simulation of the insulator surface charging-up in GEMs, allowing a better understanding of their response.\n\nBoth methods agree with each other. However, the dynamic-step method saves computational resources.\nThe functional time behaviour of the gain obtained from the Monte Carlo simulations as the GEM is irradiated reproduces that observed experimentally. \nHowever, the absolute scales do not yet agree. \n\nThe primary electron transmission should be affected by the charging-up at lower voltages between the electrodes of the GEM, but for higher voltages, used in regular applications, it does not play an important role.\n\n\nFuture work will include the application of the presented charging-up simulation methods to other MPGDs (e.g. THGEM) and the study of new geometries and detectors that could take advantage of this effect or minimize it.\n\n\nThe simulation of the mobility of deposited charges on the insulator surfaces could contribute to obtaining more precise values. 
Refining the electric field calculations (with ANSYS$^\\circledR$ or another method) can also be important in order to reach agreement between the absolute simulated and measured gain values.\n\n\\acknowledgments\n\\noindent\nThis work was partially supported by projects CERN\/FP\/123604\/2011 and PTDC\/FIS\/110925\/2009 through COMPETE, FEDER and FCT (Lisbon) programs.\n\n\\noindent P.M.M. Correia was supported by FCT (Lisbon) grant BIC\/UI96\/5496\/2011.\n\n\\noindent C.D.R. Azevedo was supported by FCT (Lisbon) grant SFHR\/BPD\/79163\/2011.\n\n\n\\bibliographystyle{PedroJINST}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIt is believed that supermassive black holes (SMBHs) reside in the central\nnuclei of many galaxies, and that they occasionally capture a stellar mass\ncompact object (SMCO) from their surroundings.\nGravitational waves from such binary systems with extreme mass\nratios bring us information on the orbits of SMCOs and the spacetime\nstructure near black holes.\nTherefore, such systems are considered to be among the most important\ntargets of the LISA space-based gravitational wave detector\n\\cite{LISA}.\nIn order to detect gravitational waves emitted by extreme mass\nratio inspirals (EMRIs) and to extract physical information from them \nefficiently, we need to predict accurate theoretical waveforms \nin advance.\nOur goal along the lines of this paper \nis to precisely calculate theoretical waveforms from EMRIs.\n\nTo investigate gravitational waves from EMRIs,\nour strategy is to adopt the black hole perturbation method\n\\cite{Mino:1997bx}:\nwe consider metric perturbations induced by a SMCO in\nthe black hole spacetime of a SMBH. \nWe also assume that the SMCO is described by a point particle, \nneglecting its internal structure. \nUnder the above approximations, we can calculate the metric perturbation\nevaluated at infinity to predict gravitational waveforms.\nAt the lowest order with respect to the mass ratio, \nwe may calculate the metric perturbation \nby approximating the particle's orbit by\na background geodesic.\nTo go a step further, \nwe consider the orbital shift from the background geodesic by \ntaking account of the self-force induced by the particle itself. \n\nIn a Schwarzschild background, we can assume that the \norbit is in the equatorial plane without loss \nof generality, because of the symmetry. \nHence, the orbital velocity can be specified solely by the energy \nand the azimuthal angular momentum of the particle.\nNamely, we can evaluate the orbital evolution from \nthe change rates of the energy and the angular momentum.\nTheir averaged change rates \ncan be evaluated by using the balance argument; \nthe energy and the angular momentum that a particle loses are\nequal to the ones that are radiated to infinity or across the\nhorizon as gravitational waves, because of the conservation laws.\nIn the limit of a large mass ratio, the averaged change rates will \nbe sufficient to determine the leading order effects on the \norbital evolution due to the self-force. 
\nIn this sense the leading order effects can be read from \nthe asymptotic \nbehavior of the metric perturbation in the Schwarzschild case.\n\nOn the other hand, the third constant of motion, i.e., the Carter\nconstant, is necessary in addition to the energy and the azimuthal \nangular momentum to specify a geodesic in a Kerr background.\nHowever, there is no known conserved current composed of gravitational\nwaves that is associated with the Carter constant, \nand hence we cannot use the balance argument to evaluate\nthe change rate of the Carter constant. \nTherefore we have to calculate the self-force acting on a particle\n\\cite{Ori:1995ed,Ori:1997be}.\nWhen we calculate the self-force, we are faced with the regularization\nproblem.\nAlthough the formal expression for the regularized \nself-force had been derived\n\\cite{Mino:1996nk,Quinn:1996am,Detweiler:2002mi},\ndoing explicit calculation is not so straightforward.\n\nGal'tsov \\cite{Gal'tsov82} proposed a method of calculating\nthe loss rates of the energy and angular momentum of a particle\nby using the radiative part of metric perturbation, which was\nintroduced earlier by Dirac \\cite{Dirac:1938nz}.\nThe radiative field is defined by half retarded field minus half\nadvanced one, which is a homogeneous solution of the field equation.\nIt was shown that the time-averaged loss rates of the energy and\nangular momentum evaluated by using the radiative field \nare identical with the results obtained from the balance argument.\nRecently, Mino proved that the Gal'tsov's scheme also gives \nthe correct averaged change rate of the Carter constant\n\\cite{Mino:2003yg}.\nThe Gal'tsov-Mino method has a great advantage that \nwe do not need any regularization procedure because\nthe radiative field is free from divergence from the beginning. \nIn Ref.~\\citen{Sago:2005gd},\nwe briefly reported that the formula for the adiabatic evolution\nof the Carter constant based on Gal'tsov-Mino method\ncan be largely simplified. \nIn this paper, we explain the derivation of this new formula\nin detail.\nApplying our new formula, \nwe explicitly calculate\nthe change rate of the Carter constant for orbits with \nsmall eccentricities and inclinations. 
\n\nThis paper is organized as follows.\nIn Sec.~\\ref{sec:geodesic},\nwe give a brief review of the Kerr geometry and the geodesic motion.\nNext, we show a practical prescription to calculate the time-averaged\nchange rates of the constants of motion in Sec.~\\ref{sec:COMdot}.\nWe also derive a simplified expression for the change rate\nof the Carter constant.\nIn Sec.~\\ref{sec:example}, applying our prescription,\nwe calculate the change rates of the constants of motion\nand then show the analytic formulae of them\nfor slightly eccentric and inclined orbits.\nFinally we devote Sec.~\\ref{sec:summary} to summarize\nthis paper.\nIn Appendix~\\ref{sec:radiative}, we show the derivation of\nthe radiative part of metric perturbation.\nAnd we also give short reviews on analytic methods \nof solving the radial Teukolsky equation and \nobtaining the spheroidal harmonics in Appendices~\\ref{sec:MST}\nand \\ref{sec:spheroidal}.\n\n\n\\section{Geodesic motion in the Kerr spacetime} \\label{sec:geodesic}\nIn this section, we give a brief review on geodesics in the \nKerr geometry.\nThe metric of the Kerr spacetime in the Boyer-Lindquist\ncoordinates is \n\\begin{eqnarray}\nds^2 &=&\n-\\left(1-\\frac{2Mr}{\\Sigma}\\right)dt^2\n-\\frac{4Mar\\sin^2\\theta}{\\Sigma}dtd\\varphi\n+\\frac{\\Sigma}{\\Delta}dr^2\n\\nonumber \\\\ &&\n\\hspace*{2cm} +\\Sigma d\\theta^2\n+\\left(r^2+a^2+\\frac{2Ma^2r}{\\Sigma}\\sin^2\\theta\\right)\n\\sin^2\\theta d\\varphi^2, \\label{eq:Kerr}\n\\end{eqnarray}\nwhere\n\\[\n\\Sigma=r^2+a^2\\cos^2\\theta, \\quad\n\\Delta=r^2-2Mr+a^2.\n\\]\n$M$ and $aM$ are the mass and angular momentum of\nthe black hole, respectively.\nThere are two Killing vectors reflecting the stationary\nand axisymmetric properties of the Kerr geometry:\n\\begin{equation}\n\\xi_{(t)}^{\\mu}=(1,0,0,0), \\quad\n\\xi_{(\\varphi)}^{\\mu}=(0,0,0,1).\n\\end{equation}\nIn addition, the Kerr spacetime possesses a Killing tensor,\n\\begin{equation}\nK_{\\mu\\nu}=2\\Sigma l_{(\\mu}n_{\\nu)}+r^2g_{\\mu\\nu},\n\\end{equation}\nwhich satisfies $K_{(\\mu\\nu;\\rho)}=0$, where the parenthese\noperating on the indices is the notation for symmetric part of\ntensors. Here we have introduced null vectors,\n\\begin{eqnarray}\nl^{\\mu} &:=&\n\\left(\\frac{r^2+a^2}{\\Delta},1,0,\\frac{a}{\\Delta} \\right), \\quad\nn^{\\mu}:=\n\\left(\\frac{r^2+a^2}{2\\Sigma},-\\frac{\\Delta}{2\\Sigma},\n0,\\frac{a}{2\\Sigma}\\right),\n\\nonumber \\\\\nm^{\\mu} &:=&\n\\frac{1}{\\sqrt{2}(r+ia\\cos\\theta)}\n\\left(ia\\sin\\theta,0,1,\\frac{i}{\\sin\\theta}\\right). 
\n\\end{eqnarray}\nWe consider a point particle moving in the Kerr geometry:\n\\[\nz^{\\alpha}(\\tau) =\n\\left(t_z(\\tau),r_z(\\tau),\\theta_z(\\tau),\\varphi_z(\\tau)\\right),\n\\]\nwhere $\\tau$ is the proper time along the orbit.\nHere we introduce quantities defined by \n\\begin{eqnarray}\n\\hat{E} &:=&\n-u^{\\alpha}\\xi_{\\alpha}^{(t)}=\n\\left(1-\\frac{2Mr_z}{\\Sigma}\\right)u^t\n+\\frac{2Mar_z\\sin^2\\theta_z}{\\Sigma}u^{\\varphi},\n\\label{eq:Energy} \\\\\n\\hat{L} &:=&\nu^{\\alpha}\\xi_{\\alpha}^{(\\varphi)}=\n-\\frac{2Mar_z\\sin^2\\theta_z}{\\Sigma}u^t\n+\\frac{(r_z^2+a^2)^2-\\Delta a^2\\sin^2\\theta_z}{\\Sigma}\n\\sin^2\\theta_z u^\\varphi, \\label{eq:Momentum} \\\\\n\\hat{Q} &:=&\nK_{\\alpha\\beta}u^{\\alpha}u^{\\beta}\n=\\frac{(\\hat{L}-a\\hat{E}\\sin^2\\theta_z)^2}{\\sin^2\\theta_z}\n+a^2\\cos^2\\theta_z+\\Sigma^2 (u^\\theta)^2, \\label{eq:Carter}\n\\end{eqnarray}\nwhere $u^{\\alpha}=dz^{\\alpha}\/d\\tau$.\nThese quantities remain constant as long as the orbit is a geodesic.\n$\\hat{E}$ and $\\hat{L}$ represent the energy and the (azimuthal) angular\nmomentum per unit mass, respectively.\n$\\hat{Q}$ is called the Carter constant.\nDenoting the mass of a particle by $\\mu$, \nthe energy, the angular momentum and the Carter constant\nof a particle are \n$E\\equiv\\mu\\hat{E}$, $L\\equiv\\mu\\hat{L}$ and $Q\\equiv\\mu^2\\hat{Q}$,\nrespectively.\nAnother notation for the Carter constant defined by \n\\begin{equation}\nC \\equiv\nQ-(aE-L)^2, \\label{eq:Carter2}\n\\end{equation}\nis also convenient since $C$ vanishes for orbits in the \nequatorial plane.\nWe also use $\\hat{C}\\equiv C\/\\mu^2$.\n\nWe can specify an orbit of a particle \nby using three constants of motion,\nthe total energy, angular momentum and Carter constant.\nIntroducing a new parameter $\\lambda$ by \n$d\\lambda=d\\tau\/\\Sigma$,\nthe equations of motion are given as \n\\begin{eqnarray}\n\\frac{dt_z}{d\\lambda} &=&\n-a(a\\hat{E}\\sin^2\\theta_z-\\hat{L})\n+\\frac{r_z^2+a^2}{\\Delta}P(r_z), \\label{eq:eom_t}\\\\\n\\left(\\frac{dr_z}{d\\lambda}\\right)^2 &=&\nR(r_z), \\label{eq:eom_r} \\\\\n\\left(\\frac{d\\cos\\theta_z}{d\\lambda}\\right)^2 &=&\n\\Theta(\\cos\\theta_z), \\label{eq:eom_theta} \\\\\n\\frac{d\\varphi_z}{d\\lambda} &=&\n-\\left(a\\hat{E}-\\frac{\\hat{L}}{\\sin^2\\theta_z}\\right)\n+\\frac{a}{\\Delta}P(r_z), \\label{eq:eom_phi}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nP(r)&:=&\\hat{E}(r^2+a^2)-a\\hat{L}, \\\\\nR(r)&:=&[P(r)]^2-\\Delta[r^2+(a\\hat{E}-\\hat{L})^2+\\hat{C}], \\\\\n\\Theta(\\cos\\theta)&:=&\n\\hat{C} - (\\hat{C}+a^2(1-\\hat{E}^2)+\\hat{L}^2)\\cos^2\\theta\n+ a^2(1-\\hat{E}^2)\\cos^4\\theta.\n\\end{eqnarray}\nIt should be noted that the equations for $r_z$ and\n$\\theta_z$ are completely decoupled by using $\\lambda$. 
\nMoreover, $R(r)$ and $\\Theta(\\cos\\theta)$\nare quartic functions of $r$ and $\\cos\\theta$, respectively.\n\nWe first consider the radial component of the geodesic equations.\nWhen the radial motion is bounded by the \nminimal and the maximal radii \n$r_{{\\rm min}}$ and $ r_{{\\rm max}}$,\n$r_z(\\lambda)$ becomes a periodic function which satisfies \n$r_z(\\lambda+\\Lambda_r) = r_z(\\lambda)$ \nwith period \n\\begin{equation}\n\\Lambda_r =\n2 \\int_{r_{{\\rm min}}}^{r_{{\\rm max}}} \\frac{dr}{\\sqrt{R(r)}}.\n\\end{equation}\nTherefore, we can expand the radial motion in a Fourier series as\n\\begin{equation}\nr_z(\\lambda) = \\sum_n \\tilde{r}_n e^{-in\\Omega_r\\lambda}\n\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n\\Omega_r = {2\\pi\/\\Lambda_r}.\n\\end{equation}\n\nWe can deal with the motion in $\\theta$-direction \nin a similar manner.\nWhen the minimum of $\\theta$ is given by \n$\\theta_{{\\rm min}} (\\le \\pi\/2)$, the maximum is \n$\\theta_{{\\rm max}} = \\pi - \\theta_{{\\rm min}}$\nbecause of the symmetry with respect to the equatorial plane.\nAs in the case of the radial motion, \n$\\cos\\theta_z(\\lambda)$ becomes a periodic function which satisfies \n$\\cos\\theta_z(\\lambda+\\Lambda_\\theta) = \\cos\\theta_z(\\lambda)$ \nwith period \n\\begin{equation}\n\\Lambda_\\theta =\n4\\int_0^{\\cos\\theta_{{\\rm min}}}\n\\frac{d(\\cos\\theta)}{\\sqrt{\\Theta(\\cos\\theta)}}.\n\\end{equation}\nWe can expand $\\cos\\theta_z(\\lambda)$ in a Fourier series as\n\\begin{equation}\n\\cos\\theta_z(\\lambda) = \\sum_n \\tilde{z}_n e^{-in\\Omega_\\theta \\lambda}\n\\,,\n\\end{equation}\nwhere $\\Omega_\\theta = {2\\pi\/\\Lambda_\\theta}$.\n\n\nNext, we consider the $t$- and $\\varphi$-components of geodesic equations.\nEqs.~(\\ref{eq:eom_t}) and (\\ref{eq:eom_phi}) can be integrated as\n\\begin{eqnarray}\nt_z(\\lambda)&=&t^{(r)}(\\lambda)+\n t^{(\\theta)}(\\lambda)+\n \\left\\langle {dt_z\\over d\\lambda}\\right\\rangle \\lambda, \\\\\n\\varphi_z(\\lambda)&=&\\varphi^{(r)}(\\lambda)+\n \\varphi^{(\\theta)}(\\lambda)+\n \\left\\langle {d\\varphi_z\\over d\\lambda}\\right\\rangle \\lambda, \n\\end{eqnarray}\nwhere\n\\begin{eqnarray*}\nt^{(r)}(\\lambda) &:=&\n\\int d\\lambda \\left[ \\frac{(r_z^2+a^2)P(r_z)}{\\Delta(r_z)}\n- \\left\\langle \\frac{(r_z^2+a^2)P(r_z)}{\\Delta(r_z)}\n \\right\\rangle \\right], \\\\\nt^{(\\theta)}(\\lambda) &:=&\n-\\int d\\lambda \\left[ a^2 \\hat{E}\\sin^2\\theta_z - a \\hat{L}\n- \\left\\langle a^2 \\hat{E}\\sin^2\\theta_z - a \\hat{L} \\right\\rangle\n\\right], \\\\\n\\varphi^{(r)}(\\lambda) &:=&\n\\int d\\lambda \\left[ \\frac{aP(r_z)}{\\Delta(r_z)}\n- \\left\\langle \\frac{aP(r_z)}{\\Delta(r_z)} \\right\\rangle\n\\right], \\\\\n\\varphi^{(\\theta)}(\\lambda) &:=&\n\\int d\\lambda \\left[ \\frac{\\hat{L}}{\\sin^2\\theta_z} -a\\hat{E}\n- \\left\\langle \\frac{\\hat{L}}{\\sin^2\\theta_z} -a\\hat{E} \\right\\rangle\n\\right]. 
\\label{eqs:tphi-motion}\n\\end{eqnarray*}\n$\\langle \\cdots \\rangle$ represents the time average along the \ngeodesic:\n\\[\n\\left\\langle F(\\lambda) \\right\\rangle\n:=\n\\lim_{T\\to\\infty}\\frac{1}{2T}\\int_{-T}^{T}\nd\\lambda' \\, F(\\lambda').\n\\]\nHere, $t^{(r)}(\\lambda)$ and $\\varphi^{(r)}(\\lambda)$ are\nperiodic functions with period $\\Lambda_r$, while\n$t^{(\\theta)}(\\lambda)$ and $\\varphi^{(\\theta)}(\\lambda)$ are \nthose with period $\\Lambda_\\theta$.\n\n\n\\section{The Time Evolution of the Constants of Motion}\n\\label{sec:COMdot}\nIf the timescale of the orbital evolution due to gravitational\nradiation reaction is much longer than the typical dynamical\ntimescale, we may be able to approximate the particle's motion by \nthe geodesic in the background spacetime that is momentarily \ntangential to the orbit \n(osculating geodesic approximation).\nUnder this assumption, we evaluate the change rates of the constants\nof motion at each moment. \nFor bound orbits \nwe can express the change rates of the constants of motion, \n$I^i=\\left\\{E,L,Q\\right\\}$, as \n\\begin{eqnarray}\n{d I^i\\over d\\lambda} \n=\n\\left\\langle \\frac{dI^i}{d\\lambda} \\right\\rangle \n+ \\sum_{(n_r,n_\\theta)\\not=(0,0)}\n \\dot{I}^{i(n_r,n_\\theta)}\n \\exp\\left[ - i(n_r\\Omega_r+n_\\theta\\Omega_\\theta) \\lambda\\right]. \n\\label{eq:bareQdot}\n\\end{eqnarray}\nThe first term on the right hand side is a time-independent dissipative\ncontribution due to radiation reaction, while \nthe others are oscillating. \nIntegrating over a long period, the first term \nbecomes dominant. \nIn the same spirit as in Ref.~\\citen{Mino:2003yg},\nhere we define the 'adiabatic' evolution as an approximation \nwhich takes account of only the first term. \nNamely, the adiabatic evolution is solely determined by \nthe time-averaged change rates of the constants\nof motion.\n\nOwing to the argument given in Ref.~\\citen{Mino:2003yg}, \nwe can evaluate the averaged change rates of the constants of \nmotion by using\nthe radiative field of the metric perturbation\n\\begin{equation}\n \\left\\langle \\frac{dI^i}{d\\lambda} \\right\\rangle =\n\\lim_{T\\to\\infty}\\frac{1}{2T}\\int_{-T}^{T}d\\lambda\\,\\Sigma \n {\\partial I^i\\over\\partial u^\\alpha}\n {f}^{\\alpha}[h_{\\mu\\nu}^{\\rm rad}],\n\\label{eq:meanQdot}\n\\end{equation}\nwhere $h_{\\mu\\nu}^{{\\rm rad}}$ is the radiative part of the\nmetric perturbation, defined by half the retarded field minus half \nthe advanced field, i.e., \n$\nh_{\\mu\\nu}^{\\rm rad}:=\n(h_{\\mu\\nu}^{\\rm ret}-h_{\\mu\\nu}^{\\rm adv})\/2. \n$\nThe radiative field is a solution of the source-free vacuum \nEinstein equation. The singular parts contained in both the retarded \nand the advanced fields cancel out. \nTherefore we can avoid the tedious issue of regularizing \nthe self-force. 
${f}^{\\alpha}$ is a differential operator,\n\\begin{equation}\nf^{\\alpha}[h_{\\mu\\nu}]:=\n-\\frac{1}{2}(g^{\\alpha\\beta}+u^{\\alpha}u^{\\beta})\n(h_{\\beta\\gamma;\\delta}+h_{\\beta\\delta;\\gamma}-h_{\\gamma\\delta;\\beta})\nu^{\\gamma}u^{\\delta}.\n\\end{equation}\nThis operator with its index lowered reduces to \n\\begin{eqnarray}\nf_\\alpha [h_{\\mu\\nu}]\n&=& g_{\\alpha\\beta} f^\\beta [h_{\\mu\\nu}] \\nonumber \\\\\n&=&\n\\frac{1}{2}\\left( \\partial_\\alpha h_{\\gamma\\delta} \\right)\nu^\\gamma u^\\delta\n-\\frac{d}{d\\tau}\\left( h_{\\alpha\\gamma}u^\\gamma \\right)\n-\\frac{1}{2}u_\\alpha \\frac{d}{d\\tau}\n\\left( h_{\\gamma\\delta} u^\\gamma u^\\delta \\right)\n+ O(\\mu^2),\n\\end{eqnarray}\nignoring the second order terms.\n\n\n\\subsection{Calculation of $dE\/dt$ and $dL\/dt$}\nFrom Eq.~(\\ref{eq:meanQdot}), we obtain\n\\begin{eqnarray}\n \\left\\langle \\frac{dE}{d\\lambda} \\right\\rangle &=&\n\\lim_{T\\to\\infty}\\frac{\\mu}{2T}\\int_{-T}^{T}d\\lambda\\,\\Sigma \n\\left( -\\xi_\\alpha^{(t)} \\right)\n {f}^{\\alpha}[h_{\\mu\\nu}^{\\rm rad}] \\nonumber \\\\\n&=&\n\\lim_{T\\to\\infty}\\frac{-\\mu}{2T}\\int_{-T}^{T}d\\lambda\n\\left[ \\frac{\\Sigma}{2}\\left( \\partial_t h_{\\gamma\\delta}^{{\\rm rad}} \\right)\n u^\\gamma u^\\delta\n -\\frac{d}{d\\lambda}\\left(\n h_{t \\gamma}^{{\\rm rad}}u^\\gamma \\right)\n +\\frac{\\hat{E}}{2}\\frac{d}{d\\lambda}\\left(\n h_{\\gamma\\delta}^{{\\rm rad}}u^\\gamma u^\\delta \\right)\n\\right] \\nonumber \\\\\n&=&\n\\lim_{T\\to\\infty}\\frac{-\\mu}{2T}\\int_{-T}^{T}d\\lambda\n\\left[ \\frac{\\Sigma}{2}\\left( \\partial_t h_{\\gamma\\delta}^{{\\rm rad}} \\right)\n u^\\gamma u^\\delta\n\\right]. \\label{eq:Edot-int}\n\\end{eqnarray}\nIn the last equality, \nthe total derivative terms are neglected.\n\nNext, we introduce a vector field\n$\\tilde{u}^\\mu (x)$ by~\\cite{Sago:2005gd}\n\\begin{equation}\n(\\tilde{u}_t, \\tilde{u}_r, \\tilde{u}_\\theta, \\tilde{u}_\\varphi)\n:=\n\\left( -\\hat{E}, \\pm\\frac{\\sqrt{R(r)}}{\\Delta(r)},\n \\pm\\frac{\\Theta(\\cos\\theta)}{\\sin\\theta}, \\hat{L} \\right).\n\\end{equation}\nThis vector field is a natural extension of the \nfour-velocity of a particle. 
\nIn fact, it satisfies\n$\\tilde{u}_\\mu (z(\\lambda)) = u_\\mu(\\lambda)$.\n$\\tilde{u}_\\mu$ depends only on $r$ and $\\theta$.\nFurthermore, since $\\tilde{u}_r$ and $\\tilde{u}_\\theta$ depend\nonly on $r$ and $\\theta$, respectively, we have the relation\n$u_{\\mu;\\nu} = u_{\\nu;\\mu}$.\n\nUsing this vector field, we can rewrite Eq.~(\\ref{eq:Edot-int})\nas\n\\begin{equation}\n \\left\\langle \\frac{dE}{d\\lambda} \\right\\rangle =\n\\lim_{T\\to\\infty}\\frac{-\\mu}{2T}\\int_{-T}^{T}d\\lambda \\left[\n\\partial_t \\left(\n\\frac{\\Sigma}{2}h_{\\gamma\\delta}^{{\\rm rad}}\\tilde{u}^\\gamma\\tilde{u}^\\delta\n\\right) \\right]_{x\\to z(\\lambda)},\n\\label{eq:dEdl}\n\\end{equation}\nwhere we used the fact that $\\Sigma$ and $\\tilde{u}_\\mu$\nare independent of $t$ (and $\\varphi$).\n\nAs shown in Appendix~\\ref{sec:radiative}\n(Eq.~(\\ref{hrad})),\nthe radiative field of metric perturbation is given by\n\\begin{eqnarray}\nh_{\\mu\\nu}^{{\\rm rad}}(x) &=&\n\\mu \\int d\\omega \\sum_{\\ell m} \\frac{1}{2i\\omega^3} \\bigg\\{\n|N_{s}^{{\\rm out}}|^2 {}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm out}}(x)\n\\int d\\lambda \\left[\n\\Sigma {}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm out}}(z(\\lambda))\nu^{\\alpha}u^{\\beta} \\right]\n\\cr &&\n+ \\frac{\\omega}{k}\n|N_{s}^{{\\rm down}}|^2 {}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm down}}(x)\n\\int d\\lambda \\left[\n\\Sigma {}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm down}}(z(\\lambda))\nu^{\\alpha}u^{\\beta} \\right] \\bigg\\}\n+ ({\\rm c.c.}),\n\\end{eqnarray}\nwhere $\\Lambda=\\{\\ell m \\omega\\}$,\n$k=\\omega-ma\/2Mr_+$ and $r_+=M+\\sqrt{M^2-a^2}$.\n$ {}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm (out)}}(x) $ \nand $ {}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm (down)}}(x) $ \nare the out-going and down-going mode solutions \nfor $h_{\\mu\\nu}$, respectively.\n$N_s^{{\\rm out}}$ and $N_s^{{\\rm down}}$\nare normalization factors, given by\nEqs.~(\\ref{eq:Namp-out}) and (\\ref{eq:Namp-down}). \nA bar represents complex conjugation. \nUsing this formula, we obtain \n\\begin{eqnarray}\n\\psi^{{\\rm rad}}(x) &:=&\n\\frac{1}{2}\\Sigma h_{\\gamma\\delta}^{{\\rm rad}}\n\\tilde{u}^\\gamma \\tilde{u}^\\delta\n\\cr &=&\n\\mu \\int d\\omega \\sum_{\\ell m} \\frac{1}{4i\\omega^3} \\bigg[\n\\phi_\\Lambda^{{\\rm out}}(x)\n\\int d\\lambda' \\bar{\\phi}_\\Lambda^{{\\rm out}}(z(\\lambda'))\n\\cr && \\hspace*{2cm}\n+ \\frac{\\omega}{k} \\phi_\\Lambda^{{\\rm down}}(x)\n\\int d\\lambda' \\bar{\\phi}_\\Lambda^{{\\rm down}}(z(\\lambda'))\n\\bigg] + ({\\rm c.c.}),\n\\label{eq:huu}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\phi_\\Lambda^{{\\rm (out\/down)}}(x) :=\nN_s^{{\\rm (out\/down)}}\n\\Sigma(x) {}_s\\Pi_{\\Lambda,\\gamma\\delta}^{{\\rm (out\/down)}}(x)\n\\tilde{u}^\\gamma(x) \\tilde{u}^\\delta(x).\n\\label{eq:def-phi-out}\n\\end{equation}\nFor a bound orbit, we can expand $\\phi_\\Lambda^{{\\rm out}}$\nin a Fourier series as:\n\\begin{equation}\n\\phi_\\Lambda^{{\\rm (out\/down)}}(z(\\lambda)) =\n\\frac{1}{2\\pi} \\!\n\\left\\langle\\frac{dt_z}{d\\lambda}\\right\\rangle\n\\!\\! 
\\sum_{n_r, n_\\theta} \\!\n\\bar{\\tilde{Z}}_{\\ell m n_r n_\\theta}^{{\\rm (out\/down)}}(\\omega)\n\\exp\\left[ i\\left\\langle\\frac{dt_z}{d\\lambda}\\right\\rangle\n(\\omega - \\omega_{mn_r n_\\theta})\\lambda \\right],\n\\label{eq:phi-out-ft}\n\\end{equation}\nwhere\n\\begin{equation}\n\\omega_{m n_r n_\\theta} :=\n\\left\\langle\\frac{dt_z}{d\\lambda}\\right\\rangle^{-1}\n\\left( m \\left\\langle\\frac{d\\varphi_z}{d\\lambda}\\right\\rangle\n + n_r \\Omega_r + n_\\theta \\Omega_\\theta \\right).\n\\label{eq:disc-omega}\n\\end{equation}\nSubstituting Eqs.~(\\ref{eq:huu}) and (\\ref{eq:phi-out-ft})\ninto (\\ref{eq:dEdl}), we obtain:\n\\begin{equation}\n\\left\\langle\n\\frac{dE}{dt}\\right\\rangle_{\\!\\!t} =\n- \\mu^2 \\sum_{\\ell m n_r n_\\theta}\n\\frac{1}{4\\pi\\omega_{m n_r n_\\theta}^2}\n\\left( \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right), \n\\label{eq:ad-Edot}\n\\end{equation}\nwhere $k_{m n_r n_\\theta}=\\omega_{m n_r n_\\theta}-ma\/2Mr_+$,\n$\n\\langle F(t) \\rangle_t :=\n\\lim_{T\\to\\infty}\\frac{1}{2T}\\int_{-T}^{T} dt \\, F(t)\n$, and\n\\begin{equation}\nZ_{\\ell m n_r n_\\theta}^{{\\rm (out\/down)}} \\equiv\n\\tilde{Z}_{\\ell m n_r n_\\theta}^{{\\rm (out\/down)}}(\\omega_{m n_r n_\\theta}).\n\\label{eq:disc-Z}\n\\end{equation}\nIn a similar manner, the formula for the loss rate of the angular\nmomentum is given by \n\\begin{equation}\n\\left\\langle\\frac{dL}{dt}\\right\\rangle_{\\!\\!t} =\n- \\mu^2 \\sum_{\\ell m n_r n_\\theta}\n\\frac{m}{4\\pi\\omega_{m n_r n_\\theta}^3} \\left(\n\\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right).\n\\label{eq:ad-Ldot}\n\\end{equation}\n\n\n\\subsection{Calculation of $dQ\/dt$}\nTo obtain the change rate of the Carter constant, \nwe need to evaluate \n\\begin{equation}\n \\left\\langle \\frac{dQ}{d\\lambda} \\right\\rangle =\n\\lim_{T\\to\\infty}\\frac{\\mu^2}{2T}\\int_{-T}^{T}d\\lambda\n2\\Sigma K_{\\beta}^{\\alpha}u^{\\beta}\n{f}_{\\alpha}[h_{\\mu\\nu}^{\\rm rad}].\n\\end{equation}\nUsing the vector field $\\tilde{u}^{\\alpha}(x)$,\nwhich was introduced in the previous subsection,\nwe obtain \n\\begin{eqnarray}\n2 K_\\beta^\\alpha u^\\beta f_\\alpha &=&\n\\lim_{x\\rightarrow z}\\left[\nK_\\beta^\\alpha \\tilde{u}^\\beta \\partial_\\alpha\n(h_{\\gamma\\delta}\\tilde{u}^\\gamma \\tilde{u}^\\delta)\n+2h_{\\gamma\\delta}\\tilde{u}^\\beta \\tilde{u}^\\gamma\n(K^\\delta_{\\beta;\\alpha}\\tilde{u}^\\alpha\n -K^\\alpha_\\beta \\tilde{u}^\\delta_{;\\alpha})\n\\right],\n\\end{eqnarray}\nto the first order in perturbation, excluding \ntotal derivative terms with respect to $\\tau$. \nThose total derivative terms do not contribute after\ntaking a long-time average. 
\nFurthermore, one can show that \nthe second term also vanishes by using \n$K_{(\\alpha\\beta;\\gamma)}=0$ and \n$\\tilde{u}_{\\alpha;\\beta}=\\tilde{u}_{\\beta;\\alpha}$.\nAfter all, we find \n\\begin{eqnarray}\n \\left\\langle \\frac{dQ}{d\\lambda} \\right\\rangle &=&\n\\lim_{T\\to\\infty}\\frac{\\mu^2}{2T}\\int_{-T}^{T}d\\lambda\n\\left[\n2 \\Sigma K_\\beta^\\alpha \\tilde{u}^\\beta \\partial_\\alpha\n\\left( \\frac{\\psi^{{\\rm rad}}(x)}{\\Sigma} \\right)\n\\right]_{x\\to z(\\lambda)} \\nonumber \\\\\n& = &\n\\lim_{T\\to\\infty}\\frac{-\\mu^2}{T}\\int_{-T}^{T}d\\lambda\n\\nonumber \\\\ && \\times\n\\left[\\left\\{\n {P(r)\\over \\Delta}\\left(\n (r^2+a^2)\\partial_t+a\\partial_{\\varphi}\\right)\n + {dr_z\\over d\\lambda}\\partial_r\n\\right\\} \\psi^{{\\rm rad}}(x)\\right]_{x\\to z(\\lambda)}~~.\n\\label{eq:Qdot-form1}\n\\end{eqnarray}\nTo obtain the last term in the last line, \nthe term with $\\tilde u^\\mu\\partial_\\mu$ was rewritten \ninto $\\Sigma^{-1}{d\/d\\lambda}$, and integration by parts \nwas applied. \n\nSubstituting Eqs.~(\\ref{eq:huu}) and (\\ref{eq:phi-out-ft})\ninto Eq.~(\\ref{eq:Qdot-form1}), we obtain:\n\\begin{eqnarray}\n \\left\\langle \\frac{dQ}{d\\lambda} \\right\\rangle &=&\n\\lim_{T\\to\\infty}\\frac{-\\mu^3}{2T} \\int_{-T}^{T}d\\lambda\n\\int d\\omega \\sum_{\\ell m n_r n_\\theta}\n\\frac{1}{2i\\omega^3}\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\cr && \\hspace*{-5mm}\n\\times \\bigg[\nZ_{\\ell m n_r n_\\theta}^{{\\rm out}}\n\\bigg\\{ \\frac{P(r)}{\\Delta}\n\\big( (r^2+a^2)\\partial_t + a\\partial_\\varphi \\big)\n+\\frac{dr_z}{d\\lambda}\\partial_r \\bigg\\}\n\\phi_\\Lambda^{{\\rm out}}(x)\n\\cr && \\hspace*{-2mm}\n+ \\frac{\\omega}{k}\nZ_{\\ell m n_r n_\\theta}^{{\\rm down}}\n\\bigg\\{ \\frac{P(r)}{\\Delta}\n\\big( (r^2+a^2)\\partial_t + a\\partial_\\varphi \\big)\n+\\frac{dr_z}{d\\lambda}\\partial_r \\bigg\\}\n\\phi_\\Lambda^{{\\rm down}}(x) \\bigg]_{x\\to z(\\lambda)}\n\\nonumber \\\\ && \\hspace*{-2mm} + ({\\rm c.c.}).\n\\label{eq:Qdot-form2}\n\\end{eqnarray}\nNow we focus on the $r$-derivative term in the curly brackets.\nSince $\\phi_\\Lambda^{{\\rm out}}$ and $\\phi_\\Lambda^{{\\rm down}}$\ndepend on $t$ and $\\varphi$ only through an exponential function\n$e^{-i\\omega t + im\\varphi}$, we can write \n\\begin{eqnarray}\n&& \\hspace*{-1.5cm}\n\\phi_\\Lambda(z(\\lambda))\n\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\nonumber \\\\ &&\n= f(r_z(\\lambda), \\cos\\theta_z(\\lambda))\n\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\cr && \\hspace*{5mm} \\times\n\\exp\\bigg[-i\\omega_{m n_r n_\\theta} \\left(\n\\left\\langle \\frac{dt_z}{d\\lambda} \\right\\rangle \\lambda\n+ t^{(r)}(\\lambda) + t^{(\\theta)}(\\lambda) \\right)\n\\cr && \\hspace*{2.5cm}\n+ im\\left( \\left\\langle \\frac{d\\varphi_z}{d\\lambda} \\right\\rangle \\lambda\n+ \\varphi^{(r)}(\\lambda) + \\varphi^{(\\theta)}(\\lambda) \\right) \\bigg]\n\\nonumber \\\\ &&\n= f(r_z(\\lambda), \\cos\\theta_z(\\lambda))\n\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\cr && \\hspace*{5mm} \\times\n\\exp\\Big[ - in_r\\Omega_r\\lambda\n - i\\omega_{m n_r n_\\theta} t^{(r)}(\\lambda)\n\t + im\\varphi^{(r)}(\\lambda)\n\\cr && \\hspace*{2.5cm}\n - in_\\theta\\Omega_\\theta\\lambda\n - i\\omega_{m n_r n_\\theta} t^{(\\theta)}(\\lambda)\n\t + im\\varphi^{(\\theta)}(\\lambda)\n\\Big],\n\\end{eqnarray}\nwhere $f(r,\\cos\\theta)$ represents the \ndependence on $r$ and $\\cos\\theta$ in \n$\\phi_\\Lambda(x)$. 
\n$r_z(\\lambda)$, $t^{(r)}(\\lambda)$ and $\\varphi^{(r)}(\\lambda)$\nare periodic functions with period $\\Lambda_r$, while\n$\\theta_z(\\lambda)$, $t^{(\\theta)}(\\lambda)$ and\n$\\varphi^{(\\theta)}(\\lambda)$ are \nthose with period $\\Lambda_\\theta$.\nWe introduce two different time variables $\\lambda_r$\nand $\\lambda_\\theta$. We use them instead of $\\lambda$ for functions\nwith period $\\Lambda_r$ and $\\Lambda_\\theta$.\nThen, by using these new variables, we can replace \nthe infinitely long time average with a double integral\nover a finite region:\n\\begin{eqnarray}\n&& \\hspace*{-1cm}\n\\lim_{T\\to\\infty}\\frac{1}{2T}\\int_{-T}^{T} \\!\\! d\\lambda\n\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\frac{dr_z}{d\\lambda}\n\\partial_r \\phi_\\Lambda(z(\\lambda))\n\\nonumber \\\\\n&&\n= \\frac{1}{\\Lambda_r\\Lambda_\\theta}\n\\int_0^{\\Lambda_r} \\!\\!\\! d\\lambda_r\n\\int_0^{\\Lambda_\\theta} \\!\\!\\! d\\lambda_\\theta\n\\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\frac{dr_z}{d\\lambda_r} \\partial_r\\bigg\\{\nf(r_z(\\lambda_r), \\cos\\theta_z(\\lambda_\\theta))\n\\cr && \\hspace*{2cm} \\times\n\\exp\\bigg[ - in_r\\Omega_r\\lambda_r\n - i\\omega_{m n_r n_\\theta} t^{(r)}(\\lambda_r)\n + im\\varphi^{(r)}(\\lambda_r) \n\\cr && \\hspace*{3.5cm}\n - i n_\\theta\\Omega_\\theta\\lambda_\\theta\n - i \\omega_{m n_r n_\\theta} t^{(\\theta)}(\\lambda_\\theta)\n + i m \\varphi^{(\\theta)}(\\lambda_\\theta)\n\\bigg]\\bigg\\}.\n\\label{3.22}\n\\end{eqnarray}\nWe only need to integrate over one cycle for each of $\\lambda_r$ and \n$\\lambda_\\theta$. Using the relation\n\\begin{eqnarray*}\n&& \\hspace*{-1cm}\n\\frac{d}{d\\lambda_r}\\left\\{\nf(r_z(\\lambda_r), \\cos\\theta_z(\\lambda_\\theta))\n\\exp[ - in_r\\Omega_r\\lambda_r\n - i\\omega_{m n_r n_\\theta} t^{(r)}\n + im\\varphi^{(r)} ] \n\\right\\}\n\\nonumber \\\\ &&\n= \\bigg[ \\frac{dt^{(r)}}{d\\lambda_r}\\partial_t\n + \\frac{dr_z}{d\\lambda_r}\\partial_r\n + \\frac{d\\varphi^{(r)}}{d\\lambda_r}\\partial_\\varphi\n + \\partial_{\\lambda_r} \\bigg]\n\\cr && \\hspace*{5mm} \\times\nf(r_z(\\lambda_r), \\cos\\theta_z(\\lambda_\\theta))\n\\exp\\big[ - in_r\\Omega_r\\lambda_r\n - i\\omega_{m n_r n_\\theta} t^{(r)}\n + im\\varphi^{(r)} \\big],\n\\end{eqnarray*}\n$\\lambda_r$-integral in (\\ref{3.22}) can be rewritten as \n\\begin{eqnarray}\n&& \\hspace*{-15mm} \n\\int_0^{\\Lambda_r} \\!\\!\\! d\\lambda_r\n\\frac{dr_z}{d\\lambda_r}\\partial_r\\big\\{\nf(r_z(\\lambda_r), \\cos\\theta_z(\\lambda_\\theta))\n\\cr && \\hspace*{1cm} \\times\n\\exp\\big[ - i n_r \\Omega_r \\lambda_r\n - i \\omega_{m n_r n_\\theta} t^{(r)}\n + i m \\varphi^{(r)}\n\\big] \\big\\}\n\\nonumber \\\\ &=&\n\\int_0^{\\Lambda_r} \\!\\!\\! d\\lambda_r \\bigg[\n- \\frac{dt^{(r)}}{d\\lambda_r}\\partial_t\n- \\frac{d\\varphi^{(r)}}{d\\lambda_r}\\partial_\\varphi\n+ i n_r \\Omega_r \\lambda_r\n\\bigg]\n\\nonumber \\\\\n&& \\times \\big\\{\nf(r_z(\\lambda_r), \\cos\\theta_z(\\lambda_r))\n\\exp\\big[ - i n_r \\Omega_r \\lambda_r\n - i \\omega_{m n_r n_\\theta} t^{(r)}\n + i m \\varphi^{(r)}\n\\big] \\big\\} .\n\\end{eqnarray}\nEliminating the $r$-derivative term from (\\ref{eq:Qdot-form2}) \nby using the above relations,\nwe obtain \n\\begin{eqnarray}\n\\hspace*{-5mm}\\left\\langle {dQ\\over d\\lambda}\\right\\rangle\n& = & \n\\lim_{T\\to\\infty}\\frac{-\\mu^3}{2T} \\int_{-T}^T \\!\\!\\! d\\lambda\n\\int d\\omega \\!\\!\\! \\sum_{\\ell m n_r n_\\theta}\n\\!\\! 
\\frac{1}{2i\\omega^3} \\delta(\\omega-\\omega_{m n_r n_\\theta})\n\\cr &&\n\\times \\bigg[\nZ_{\\ell m n_r n_\\theta}^{{\\rm out}} \\left\\{\n\\left\\langle {(r^2+a^2)P\\over \\Delta}\\right\\rangle \\partial_t\n+ \\left\\langle {a P\\over \\Delta}\\right\\rangle \\partial_\\varphi\n+ i n_r\\Omega_r \\right\\} \\phi_\\Lambda^{{\\rm out}}(x)\n\\cr &&\n\\quad + \\frac{\\omega}{k}\nZ_{\\ell m n_r n_\\theta}^{{\\rm down}} \\left\\{\n\\left\\langle {(r^2+a^2)P\\over \\Delta}\\right\\rangle \\partial_t\n+ \\left\\langle {a P\\over \\Delta}\\right\\rangle \\partial_\\varphi\n+ i n_r\\Omega_r \\right\\} \\phi_\\Lambda^{{\\rm down}}(x)\n\\bigg]_{x\\to z(\\lambda)}\n\\cr && \\quad+ ({\\rm c.c.})\n\\nonumber \\\\\n&=&\n- 2 \\mu^3 \\left\\langle \\frac{dt_z}{d\\lambda} \\right\\rangle\n\\!\\! \\sum_{\\ell m n_r n_\\theta} \\!\\!\\!\n\\frac{1}{4\\pi\\omega_{m n_r n_\\theta}^2}\n\\cr && \\hspace*{1cm} \\times\n\\bigg[ - \\left\\langle {(r^2+a^2)P\\over \\Delta}\\right\\rangle\n + \\frac{m}{\\omega_{m n_r n_\\theta}}\n \\left\\langle {a P\\over \\Delta}\\right\\rangle\n + \\frac{n_r \\Omega_r}{\\omega_{m n_r n_\\theta}}\n\\bigg]\n\\cr && \\hspace*{1cm}\n\\times \\left(\n|Z_{\\ell m n_r n_\\theta}^{{\\rm out}}|^2\n+\\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n|Z_{\\ell m n_r n_\\theta}^{{\\rm down}}|^2\n\\right).\n\\end{eqnarray}\nHere we used Eqs.~(\\ref{eq:ad-Edot}) and (\\ref{eq:ad-Ldot})\nin the last equality. Finally, we obtain:\n\\begin{eqnarray}\n\\left\\langle {dQ\\over dt}\\right\\rangle_{\\!\\!t}\n& = & \n2\\mu\\left\\langle {(r^2+a^2)P\\over \\Delta}\\right\\rangle\n \\left\\langle{dE\\over dt}\\right\\rangle_{\\!\\!t}\n-2\\mu\\left\\langle {a P\\over \\Delta}\\right\\rangle\n \\left\\langle{dL\\over dt}\\right\\rangle_{\\!\\!t}\n\\cr &&\n+ \\mu^3 \\!\\!\\! \\sum_{\\ell m n_r n_\\theta} \\!\\!\\!\n \\frac{n_r \\Omega_r}{2 \\pi \\omega_{m n_r,n_\\theta}^3}\n\\bigg(\n|Z_{\\ell m n_r n_\\theta}^{{\\rm out}}|^2\n+\\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n|Z_{\\ell m n_r n_\\theta}^{{\\rm down}}|^2\n\\bigg).\n\\label{eq:ad-Qdot}\n\\end{eqnarray}\n\n\n\\subsection{Consistency of our formulae in simple cases}\nIn this subsection, we examine our formulae in a few simple cases.\nFirst, we consider circular orbits. \nWe know that a circular orbit remains circular\nunder radiation \nreaction\\cite{Kennefick:1995za}. \nThis condition fixes $dQ\/dt$ for circular orbits as \n\\begin{equation}\n\\frac{dQ}{dt} =\n\\frac{2\\mu(r^2+a^2)P}{\\Delta} \\frac{dE}{dt}\n- \\frac{2\\mu aP}{\\Delta} \\frac{dL}{dt}.\n\\label{eq:check-cir}\n\\end{equation}\nSince we have $Z_{\\ell m n_r n_\\theta}^{{\\rm out\/down}}=0$\nfor $n_r\\ne 0$ in the case of a circular orbit,\nthe last term in Eq.~(\\ref{eq:ad-Qdot}) vanishes. \nThus Eq.~(\\ref{eq:ad-Qdot}) is consistent with the above condition\nthat a circular orbit remains circular. \n\nNext, we consider orbits in the equatorial plane.\nAn orbit in the equatorial plane should not \nleave the plane by symmetry. This can be \nconfirmed by rewriting the above formula in terms of $C$. \nFrom the definition of $\\omega_{m n_r n_\\theta}$\n(\\ref{eq:disc-omega}), we obtain the following identity:\n\\begin{eqnarray*}\n&& \\hspace*{-1cm}\n\\mu^2\\!\\!\\sum_{\\ell m n_r n_\\theta}\n\\!\\! 
\\frac{n_r \\Omega_r}{4\\pi\\omega_{m n_r n_\\theta}^3}\n\\left(\n\\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right) \\\\\n&=&\n\\mu^2\\!\\!\\sum_{\\ell m n_r n_\\theta}\n\\!\\! \\frac{1}{4\\pi\\omega_{m n_r n_\\theta}^2} \\left(\n\\left\\langle \\frac{dt_z}{d\\lambda} \\right\\rangle\n-\\frac{m}{\\omega_{m n_r n_\\theta}}\n\\left\\langle \\frac{d\\varphi_z}{d\\lambda} \\right\\rangle\n-\\frac{n_\\theta \\Omega_\\theta}{4\\pi\\omega_{m n_r n_\\theta}}\n\\right)\n\\cr && \\hspace*{2.5cm} \\times \\left(\n\\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right) \\\\\n&=&\n-\\left\\langle \\frac{dt_z}{d\\lambda} \\right\\rangle\n\\left\\langle \\frac{dE}{dt} \\right\\rangle_{\\!\\!t}\n+\\left\\langle \\frac{d\\varphi_z}{d\\lambda} \\right\\rangle\n\\left\\langle \\frac{dL}{dt} \\right\\rangle_{\\!\\!t}\n\\cr && \\hspace*{1cm}\n- \\mu^2 \\!\\! \\sum_{\\ell m n_r n_\\theta}\n\\!\\! \\frac{n_\\theta \\Omega_\\theta}{4\\pi\\omega_{m n_r n_\\theta}^3}\n\\left(\n\\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right),\n\\end{eqnarray*}\nwhere we used the the expressions of\n$\\langle{dE\/dt}\\rangle_t$ and $\\langle{dL\/dt}\\rangle_t$ \ngiven in Eqs.~(\\ref{eq:ad-Edot}) and (\\ref{eq:ad-Ldot}).\nUsing this identity, we have\n\\begin{eqnarray}\n\\left\\langle {dC\\over dt}\\right\\rangle_{\\!\\!t}\n& = &\n\\left\\langle {dQ\\over dt}\\right\\rangle_{\\!\\!t}\n-2(aE-L)\\left(\na\\left\\langle {dE\\over dt}\\right\\rangle_{\\!\\!t}\n-\\left\\langle {dL\\over dt}\\right\\rangle_{\\!\\!t}\n\\right)\n\\nonumber \\\\ &=&\n- 2\\left\\langle a^2 E \\cos^2\\theta_z\\right\\rangle\n \\left\\langle{dE\\over dt}\\right\\rangle_{\\!\\!t}\n+ 2\\left\\langle {L \\cot^2\\theta_z}\\right\\rangle\n \\left\\langle{dL\\over dt}\\right\\rangle_{\\!\\!t}\n\\cr &&\n-\\mu^3 \\!\\!\\!\\!\\!\\sum_{\\ell,m,n_r,n_\\theta}\\!\\!\\!\\!\\!\n {n_\\theta\\Omega_\\theta\\over 2\\pi\\omega^3_{m n_r n_\\theta}} \n\\left(\n\\left|Z_{\\ell m n_r n_\\theta}^{{\\rm out}} \\right|^2\n+ \\frac{\\omega_{m n_r n_\\theta}}{k_{m n_r n_\\theta}}\n \\left|Z_{\\ell m n_r n_\\theta}^{{\\rm down}} \\right|^2 \\right),\n\\end{eqnarray}\nwhere we have used the following relations:\n\\begin{eqnarray*}\n\\left\\langle \\frac{dt_z}{d\\lambda} \\right\\rangle &=&\n-a(a\\hat{E}-\\hat{L})\n+ \\left\\langle a^2 \\hat{E} \\cos^2\\theta_z \\right\\rangle\n+ \\left\\langle \\frac{r_z^2+a^2}{\\Delta}P \\right\\rangle, \\\\\n\\left\\langle \\frac{d\\varphi_z}{d\\lambda} \\right\\rangle &=&\n-a\\hat{E} + \\hat{L}\n+ \\left\\langle \\hat{L} \\cot^2\\theta_z \\right\\rangle\n+ \\left\\langle \\frac{aP}{\\Delta} \\right\\rangle.\n\\end{eqnarray*}\nFrom this equation, it is found\nthat $\\langle dC\/dt\\rangle_t=0$ when $\\theta=\\pi\/2$.\nNote that we have\n$Z_{\\ell m n_r n_\\theta}^{{\\rm out\/down}} \\ne 0$\nonly for $n_\\theta=0$ in the case of equatorial orbits. 
\n\n\n\n\\section{Application of our formulation to orbits with\nsmall eccentricity and inclination} \\label{sec:example}\nIn this section, as an application of our formulation,\nwe consider a slightly eccentric orbit with\nsmall inclination from the equatorial plane.\nSince, in this case, we can expand an orbit with respect to\nthe eccentricity and inclination, we can analytically \ncalculate the change rates of the constants of motion.\n\n\\subsection{Orbits}\nHere we define $r_0$ so that the potential in $r$-direction \n$R(r)$ takes its minimum at $r=r_0$:\n\\begin{equation}\n\\left. \\frac{dR}{dr}\\right|_{r=r_0}=0. \\label{eq:min-condition}\n\\end{equation}\nWe denote the outer turning point by $r_0(1+e)$. \nNamely, \n\\begin{equation}\nR(r_0(1+e))=0, \\label{eq:turn-condition}\n\\end{equation}\nwhich gives the definition of the eccentricity $e$. \nWe also define a parameter $y=C\/L^2$, which is \nrelated to the inclination angle. For orbits in the equatorial \nplane, we have $y=0$. \nFurther, we introduce a new parameter $v=\\sqrt{M\/r_0}$.\nFor circular orbits \n$v$ represents the orbital velocity at the Newtonian order. \nHence, we regard $v$ as a parameter whose power indicates \ntwice the post-Newtonian (PN) order.\n\nSolving (\\ref{eq:min-condition}) and\n(\\ref{eq:turn-condition}) for $\\hat{E}$ and $\\hat{L}$, \nthey are expressed in terms of \n$e$ and $y$ as \n\\begin{eqnarray}\n\\hat{E} &=&\n1-\\frac{1}{2} v^2 + \\frac{3}{8} v^4 - q v^5\n- \\left( \\frac{1}{2} v^2 - \\frac{1}{4} v^4 + 2 q v^5 \\right) e^2\n+\\frac{1}{2} q v^5 y + q v^{5} e^2 y,\n\\label{eq:E_exp} \\\\\n\\hat{L} &=& r_0 v \\bigg[\n1 + \\frac{3}{2} v^2 -3 q v^3\n+ \\frac{27}{8} v^4 + q^2 v^4\n- \\frac{15}{2} q v^5\n\\cr && \\hspace*{1cm}\n+ \\Big( - 1 + \\frac{3}{2} v^2 - 6 q v^3\n + \\frac{81}{8} v^4 + \\frac{7}{2} q^2 v^4\n - \\frac{63}{2} q v^5 \\Big) e^2\n\\cr && \\hspace*{1cm}\n+ \\Big( -\\frac{1}{2} - \\frac{3}{4} v^2 + 3 q v^3 \n - \\frac{27}{16} v^4 - \\frac{3}{2} q^2 v^4\n + \\frac{15}{2} q v^5 \\Big) y\n\\cr && \\hspace*{1cm}\n+ \\Big( \\frac{1}{2} - \\frac{3}{4} v^2 + 6 q v^3\n - \\frac{81}{16} v^4 - \\frac{19}{4} q^2 v^4\n + \\frac{63}{2} q v^5 \\Big) e^2 y\n\\bigg], \n\\label{eq:L_exp}\n\\end{eqnarray}\nwhere $q:=a\/M$.\nHereafter we keep terms up to $O(v^5 e^2 y)$\nrelative to the leading order.\n\nWith the initial condition set to $r_z(\\lambda=0)=r_0(1+e)$,\nthe solution for $r_z(\\lambda)$ is obtained in an expansion \nwith respect to $e$ as \n\\begin{eqnarray}\nr_z(\\lambda) &=&\nr_0[1+er^{(1)}+e^2r^{(2)}],\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nr^{(1)} &=& \\cos\\Omega_r\\lambda, \\nonumber \\\\\nr^{(2)} &=&\np^{(1)} (1-\\cos\\Omega_r\\lambda)\n+ p^{(2)} (1-\\cos 2\\Omega_r\\lambda), \\nonumber \\\\\n\\Omega_r &=& r_0 v \\bigg[\n1 - \\frac{3}{2} v^2 + 3 q v^3\n- \\frac{45}{8} v^4 - \\frac{3}{2} q^2 v^4\n+ \\frac{33}{2} q v^5\n\\cr && \\hspace*{5mm}\n- \\bigg\\{ 1 + \\frac{3}{2} v^2 - 6 q v^3\n + \\Big( \\frac{165}{8} + \\frac{9}{2} q^2 \\Big) v^4\n - \\frac{165}{2} q v^5 \\bigg\\} e^2\n\\cr && \\hspace*{5mm}\n- \\Big( \\frac{3}{2} q v^3 - 2 q^2 v^4\n + \\frac{33}{4} q v^5 \\Big) y\n- \\Big( 3 q v^3 - \\frac{27}{4} q^2 v^4 \n + \\frac{165}{4} q v^5 \\Big) e^2 y\n\\bigg], \\cr\np^{(1)} &=&\n - 1 - v^2 + 2 q v^3 -6 v^4 - q^2 v^4 + 20 q v^5 \n- \\left( q v^3 - 2 q^2 v^4 + 10 q v^5 \\right) y, \\cr\np^{(2)} &=&\n- \\frac{1}{2} - \\frac{1}{2} v^2 + q v^3 - 3 v^4\n- \\frac{1}{2} q^2 v^4 + 10 q v^5\n- \\Big( \\frac{1}{2} q v^3 - q^2 v^4 + 5 q v^5 \\Big) y.\n\\nonumber\n\\end{eqnarray}\n\nWe 
also compute $\\cos\\theta_z(\\lambda)$ in a series expansion \nin $y$ as\n\\begin{equation}\n\\cos\\theta_z(\\lambda) =\n\\sqrt{y}[ c_z^{(0)}(\\lambda) + y c_z^{(1)}(\\lambda)],\n\\end{equation}\nwhere\n\\begin{eqnarray*}\nc_z^{(0)} &=&\n\\Big( 1 - \\frac{1}{2} q^2 v^4 - \\frac{3}{2}q^2 v^4 e^2 \\Big)\n\\sin\\Omega_\\theta \\lambda, \\\\\nc_z^{(1)} &=&\n\\Big( - \\frac{1}{2} + \\frac{13}{16} q^2 v^4\n + \\frac {39}{16} q^2 v^4 e^2 \\Big)\n\\sin\\Omega_\\theta \\lambda\n+ \\Big( \\frac{1}{16} q^2 v^4 + \\frac{3}{16} q^2 v^4 e^2 \\Big)\n\\sin 3\\Omega_\\theta \\lambda, \\\\\n\\Omega_\\theta &=& r_0 v \\bigg[\n1 + \\frac{3}{2} v^2 - 3 q v^3 + \\frac{27}{8} v^4\n+ \\frac{3}{2} q^2 v^4 - \\frac{15}{2} q v^5\n\\cr && \\hspace*{5mm}\n+ \\Big( - 1 + \\frac{3}{2} v^2 -6 q v^3\n + \\frac{81}{8} v^4 + \\frac{9}{2} q^2 v^4\n - \\frac{63}{2} q v^{5} \\Big) e^2\n\\cr && \\hspace*{5mm}\n+ \\Big( \\frac{3}{2} q v^3 - \\frac{7}{4} q^2 v^4 + \\frac{15}{4} q v^5 \\Big) y\n+ \\Big( 3 q v^3 - \\frac{9}{2} q^2 v^4 + \\frac {63}{4} q v^{5} \\Big) e^2 y\n\\bigg].\n\\end{eqnarray*}\nHere the solution satisfies the condition,\n$\\cos\\theta_z(\\lambda=0) = 0$.\n\nSubstituting $r_z$ and $\\cos\\theta_z$\ninto Eqs.(\\ref{eq:eom_t}), (\\ref{eq:eom_phi})\nand (\\ref{eqs:tphi-motion}), we obtain\n\\begin{eqnarray}\nt^{(r)} &=& \\frac{r_0 e}{v} \\bigg[\n\\Big\\{\n( 2 + 4 v^2 - 6 q v^3 + 17 v^4 + 3 q^2 v^4 - 54 q v^5)\n\\cr && \\hspace*{1cm}\n+ ( 2 + 6 v^2 - 10 q v^3\n + 33 v^4 + 5 q^2 v^4 - 108 q v^5) e\n\\cr && \\hspace*{1cm}\n+ ( 3 q v^3 - 4 q^2 v^4 + 27 q v^5) y\n\\cr && \\hspace*{1cm}\n+ ( 5 q v^3 - 8 q^2 v^4 + 54 q v^5) e y\n\\Big\\} \\sin \\Omega_r \\lambda\n\\cr && \\hspace*{1cm}\n+ \\Big\\{\n\\Big( \\frac{3}{4} + \\frac{7}{4} v^2 - \\frac{13}{4} q v^3\n + \\frac{81}{8} v^4 + \\frac{13}{8} q^2 v^4\n - \\frac{135}{4} q v^5\n\\Big) e\n\\cr && \\hspace*{1.5cm}\n+ \\Big( \\frac{13}{8} q v^3 - \\frac{5}{2} q^2 v^4\n + \\frac{135}{8} q v^5 \n\\Big) e y\n\\Big\\} \\sin 2\\Omega_r \\lambda\n\\bigg], \\\\\nt^{(\\theta)} &=& q^2 v^3 r_0 y \\bigg[\n\\Big\\{\n\\Big( -\\frac{1}{4} + \\frac{1}{2} v^2\n - \\frac{3}{4} q v^3 + \\frac{5}{8} q^2 v^4 + q v^5\n\\Big)\n\\cr && \\hspace*{1.5cm}\n+ \\Big( - \\frac{1}{4} + \\frac{11}{8} v^2 -3 q v^3\n + \\frac{1}{2} v^4 + \\frac{23}{8} q^2 v^4\n + \\frac{9}{2} q v^5\n\\Big) e^2\n\\Big\\} \\sin 2\\Omega_\\theta \\lambda\n\\bigg], \\\\\n\\left\\langle {dt_z\\over d\\lambda}\\right\\rangle &=&\nr_0^2 \\Big[\n1 + \\frac{3}{2} v^2 + \\frac{27}{8} v^4 - 3 q v^5\n- \\Big( \\frac{5}{2} + \\frac{21}{4} v^2 - 6 q v^3\n + \\frac{315}{16} v^4 + 3 q^2 v^4\n - \\frac{123}{2} q v^5\n\\Big) e^2\n\\cr && \\hspace*{5mm}\n+ \\Big( \\frac{1}{2} q^2 v^4 + \\frac{3}{2} q v^5\n\\Big) y\n+ \\Big( -3 q v^3 + 6 q^2 v^4 - \\frac{123}{4} q v^5\n\\Big) e^2 y \\Big], \\\\\n\\varphi^{(r)} &=&\nq v^3 e \\bigg[ \\Big\\{\n( - 2 + 2 q v - 10 v^2 + 18 q v^{3})\n+ ( - 2 + 2 q v - 12 v^2 + 24 q v^3) e\n\\cr && \\hspace*{3cm}\n- ( q v + 9 q v^3) y \n- ( q v +12 q v^3 ) e y\n\\Big\\} \\sin\\Omega_r \\lambda\n\\cr && \\hspace*{1cm}\n+ \\Big\\{\n\\Big( - \\frac{1}{4} q v + \\frac{1}{2} v^2 - \\frac{3}{4} q v^3 \\Big) e\n+ \\Big( \\frac{1}{8} q v + \\frac{3}{8} q v^3 \\Big) e y\n\\Big\\} \\sin 2\\Omega_r \\lambda \\bigg], \\\\\n\\varphi^{(\\theta)} &=&\ny \\bigg[\n\\Big( - \\frac{1}{4} + \\frac{3}{8} q^2 v^4 \\Big)\n+ \\frac{9}{8} q^2 v^4 e^2 \\bigg]\n\\sin 2\\Omega_\\theta \\lambda, \\\\\n\\left\\langle {d\\varphi_z\\over d\\lambda}\\right\\rangle &=&\nr_0 v \\bigg[\n1 + \\frac{3}{2} v^2 - q v^3\n+ \\frac{27}{8} v^4 - 
\\frac{9}{2} q v^5\n- \\Big( 1 - \\frac{3}{2} v^2 + 2 q v^3\n - \\frac{81}{8} v^4 + \\frac{27}{2} q v^5\n\\Big) e^{2}\n\\cr && \\hspace*{1cm}\n+ \\Big(\n\\frac{3}{2} q v^3 - q^2 v^4 + \\frac{15}{4} q v^5\n\\Big) y\n+ \\Big(\n3 q v^3 - \\frac{9}{4} q^2 v^4 + \\frac{63}{4} q v^5\n\\Big) e^{2} y \\bigg].\n\\end{eqnarray}\n\n\n\\subsection{Calculation of $Z_{\\ell m n_r n_\\theta}^{{\\rm out\/down}}$}\nIn order to obtain the averaged change rates of the energy,\nangular momentum and Carter constant, we have to calculate\n$Z_{\\ell m n_r n_\\theta}^{{\\rm out\/down}}$ defined by\nEq.~(\\ref{eq:disc-Z}) with Eq.~(\\ref{eq:phi-out-ft}). \nIntegrating Eq.~(\\ref{eq:def-phi-out}) with respect to\n$\\lambda$, we obtain\n\\begin{eqnarray}\n&&\\hspace*{-1cm}\n\\hat{Z}_{\\Lambda}^{{\\rm (out\/down)}}\n\\equiv\n\\int d\\lambda \\bar{\\phi}_\\Lambda^{{\\rm (out\/down)}} (z(\\lambda))\n\\nonumber \\\\\n&=&\nN_s^{{\\rm (out\/down)}} \\int d^4x \\sqrt{-g(x)} \n~{}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm (out\/down)}}(x)\n\\int d\\tau\n\\frac{\\tilde{u}^\\alpha(x)\\tilde{u}^\\beta(x)}{\\sqrt{-g(x)}}\n\\delta^{(4)}(x-z(\\lambda))\n\\nonumber \\\\\n&=&\n{N_s^{\\rm (out\/down)}\\over \\mu} \\int d^4x \\sqrt{-g(x)}\n~{}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm (out\/down)}}(x)\nT^{\\alpha\\beta}(x),\n\\end{eqnarray}\nwhere \n\\begin{equation}\nT^{\\alpha\\beta}(x) =\n\\mu \\int d\\tau \\frac{1}{\\sqrt{-g(x)}}\nu^\\alpha u^\\beta \\delta^{(4)}(x-z(\\tau)).\n\\label{ppEM}\n\\end{equation}\nis the energy momentum tensor of a mono-pole particle of mass \n$\\mu$. \nUsing the relation given in Eq.~(\\ref{TT}), \n$\\hat{Z}_{\\Lambda}^{{\\rm (out\/down)}}$ can be also expressed \nin the familiar form which appears as an integration over \nthe source term in the standard Teukolsky formalism as \n\\begin{eqnarray}\n\\hspace*{-5mm}\n\\hat{Z}_{\\Lambda}^{{\\rm (out\/down)}} & = &\n\\frac{N_s^{\\rm (out\/down)}}{\\mu}\\bar{\\zeta}_s \\int d^4x\\sqrt{-g(x)} \n~{}_s R_\\Lambda^{\\rm (in\/up)}(r){}_s\\bar{Z}_\\Lambda(\\theta,\\varphi)\ne^{i\\omega t} {}_s\\hat{T}(x), \n\\label{eq:Zout}\n\\end{eqnarray}\nwhere ${}_s\\hat{T}(x)$ is a projected \nenergy momentum tensor defined by \n${}_s \\hat{T}:={}_s\\tau_{\\mu\\nu} \nT^{\\mu\\nu}$ with (\\ref{taudef}), and \n$~{}_{s} R_\\Lambda^{\\rm (in\/up)}(r)(={}_{-s}\\bar R_\\Lambda^{\\rm (out\/down)}(r))$ and \n${}_s Z_\\Lambda(\\theta,\\varphi)$\nare, respectively, the radial mode functions and the spheroidal\nharmonics introduced in Appendix~\\ref{sec:radiative}.2.\n\n\nIn the following discussion we concentrate on the case with $s=-2$. 
\nSubstituting the explicit forms of \nthe energy-momentum tensor and the projection operator \n${}_{-2}\tau_{\mu\nu} $, we obtain \n\begin{eqnarray}\n\hat{Z}_{\Lambda}^{{\rm (out/down)}}\n= 2 N_s^{{\rm (out/down)}}\bar \zeta_s \n\int_{-\infty}^{\infty} dt e^{i\omega t - im\varphi(t)}\n{\cal I}_\Lambda^{{\rm (in/up)}}(r(t),\theta(t)),\n\end{eqnarray}\nwith\n\begin{eqnarray*}\n{\cal I}_\Lambda &=&\n\bigg[ R_\Lambda ( A_{nn0}+A_{\bar{m}n0}+A_{\bar{m}\bar{m}0} )\n\cr && \hspace*{1cm}\n-\frac{dR_\Lambda}{dr}(A_{\bar{m}n1}+A_{\bar{m}\bar{m}1})\n+\frac{d^2R_\Lambda}{dr^2}A_{\bar{m}\bar{m}2}\n\bigg]_{r=r(t),\theta=\theta(t)}, \\\nA_{nn0}&=& \frac{-2}{\sqrt{2\pi}\Delta^2}\nC_{nn}\bar z^{2}z\mathcal{L}_{1}^{\dag}\n\left\{\bar z^{4}\mathcal{L}_2^{\dagger}(\bar z^{-3} S_\Lambda )\right\},\\\nA_{\bar{m}n0}&=& \frac{2}{\sqrt{\pi}\Delta}\nC_{\bar{m}n}\bar z^{3}\bigg[\n\Big(\frac{iK}{\Delta}+z^{-1}+\bar z^{-1}\Big)\n\mathcal{L}^{\dag}_2 S_\Lambda\n-\frac{K}{\Delta}(z^{-1}-\bar z^{-1})a\sin\theta S_\Lambda \bigg],\\\nA_{\bar{m}\bar{m}0} &=& -\frac{1}{\sqrt{2\pi}}\bar z^{3}z^{-1}\nC_{\bar{m}\bar{m}}S_\Lambda\left[-i\left(\frac{K}{\Delta}\right)_{,r}\n-\frac{K^2}{\Delta^2}+{2i\over\bar z}\frac{K}{\Delta}\right],\\\nA_{\bar{m}n1}&=& \frac{2}{\sqrt{\pi}\Delta}\bar z^{3}\nC_{\bar{m}n}\left[\mathcal{L}^{\dag}_2S_\Lambda\n+ia\sin\theta(z^{-1}-\bar z^{-1})S_\Lambda\right],\\\nA_{\bar{m}\bar{m}1}&=& -\frac{2}{\sqrt{2\pi}}\bar z^{3}z^{-1}\nC_{\bar{m}\bar{m}}S_\Lambda\left(i\frac{K}{\Delta}+\bar z^{-1}\right),\\\nA_{\bar{m}\bar{m}2}&=&-\frac{1}{\sqrt{2\pi}}\bar z^{3}z^{-1}\nC_{\bar{m}\bar{m}}S_\Lambda, \\\nC^{\mu\nu}&=&\frac{u^\mu u^\nu}{\Sigma u^t},\n\end{eqnarray*}\nwhere $S_\Lambda$ represents ${}_{-2}S_{\Lambda}(\theta)$ defined \nin Appendix~\ref{sec:radiative}, and \n\begin{eqnarray*}\nz &=& r+ia\cos\theta, \\\nK &=&\n(r^2+a^2)\omega -ma, \\\n{\cal L}_s &=&\n\partial_\theta + \frac{m}{\sin\theta}\n-a\omega\sin\theta + s\cot\theta.\n\end{eqnarray*}\nHere the dagger ($\dag$) denotes \nan operation that transforms $(m, \omega)$ to $(-m, -\omega)$.\nThe radial functions and the spheroidal harmonics appearing\nin the above equations can be evaluated analytically,\nas shown in Appendices~\ref{sec:MST} and \ref{sec:spheroidal}.\nFor a bound orbit, since \n$e^{-im\varphi(t)}{\cal I}^{\rm (in/up)}_\Lambda(r(t),\theta(t))$ \nis a doubly periodic function, \n$\hat{Z}_\Lambda^{\rm (out/down)}$ has a discrete spectrum \nas \n\begin{equation}\n\hat{Z}_\Lambda^{{\rm out/down}} =\n\sum_{n_r, n_\theta} Z_{\ell m n_r n_\theta}^{{\rm out/down}}\n\delta(\omega - \omega_{mn_r n_\theta}),\n\end{equation}\nwhere the coefficients $Z_{\ell m n_r n_\theta}^{{\rm out/down}}$ are\nthose already introduced in Eq.~(\ref{eq:disc-Z})\nwith Eq.~(\ref{eq:phi-out-ft}).\nAlthough we cannot show all the steps explicitly here,\nit is straightforward to calculate \n$Z_{\ell m n_r n_\theta}^{{\rm out/down}}$\nfor each $\omega_{m n_r n_\theta}$\nby substituting the analytic expansions of the orbits,\nthe radial functions and the spheroidal harmonics.\n\n\n\subsection{Results}\nSubstituting $Z_{\ell m n_r n_\theta}^{{\rm out/down}}$\nobtained by following the scheme explained \nin the preceding subsection into\nEqs.~(\ref{eq:ad-Edot}), (\ref{eq:ad-Ldot}) and (\ref{eq:ad-Qdot}),\nwe 
obtain:\n\\begin{eqnarray}\n\\left\\langle \\frac{dE}{dt} \\right\\rangle_{\\!\\!t}\n&=&\n- \\frac{32}{5}\\left(\\frac{\\mu}{M}\\right)^2 v^{10} \n\\cr && \\times\n\\bigg[\n1 - \\frac{1247}{336} v^2\n- \\bigg( \\frac{73}{12} q - 4 \\pi \\bigg) v^3\n\\cr && \\hspace*{5mm}\n- \\bigg( \\frac{44711}{9072} - \\frac{33}{16} q^2 \\bigg) v^4\n+ \\bigg( \\frac{3749}{336} q - \\frac{8191}{672} \\pi \\bigg) v^5\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ \\frac{277}{24} - \\frac{4001}{84} v^2\n + \\bigg( \\frac{3583}{48} \\pi - \\frac{457}{4} q \\bigg) v^3\n\\cr && \\hspace*{1cm}\n + \\bigg( 42 q^2 - \\frac{1091291}{9072} \\bigg) v^4\n + \\bigg( \\frac{58487}{672} q - \\frac{364337}{1344} \\pi \\bigg) v^5\n\\bigg\\} e^2\n\\cr && \\hspace*{5mm}\n+ \\bigg( \\frac {73}{24} q v^3 - \\frac{527}{96} q^2 v^4\n - \\frac{3749}{672} q v^5 \\bigg) y\n\\cr && \\hspace*{5mm}\n+ \\bigg( \\frac{457}{8} q v^3 - \\frac{5407}{48} q^2 v^4\n - \\frac{58487}{1344} q v^5 \\bigg) e^2 y\n\\bigg], \\\\\n\\left\\langle \\frac{dL}{dt} \\right\\rangle_{\\!\\!t}\n&=&\n- \\frac{32}{5}\\left(\\frac{\\mu^2}{M}\\right) v^{7}\n\\cr && \\times\n\\bigg[\n1 - \\frac{1247}{336} v^2\n- \\bigg( \\frac{61}{12} q - 4 \\pi \\bigg) v^3\n\\cr && \\hspace*{5mm}\n- \\bigg( \\frac{44711}{9072}\n- \\frac{33}{16} q^2 \\bigg) v^4\n+ \\bigg( \\frac{417}{56} q - \\frac{8191}{672} \\pi \\bigg) v^5\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ \\frac{51}{8} - \\frac{17203}{672} v^2\n+ \\bigg( - \\frac{781}{12} q + \\frac{369}{8} \\pi \\bigg) v^3\n\\cr && \\hspace*{1cm}\n+ \\bigg( \\frac{929}{32} q^2 - \\frac{1680185}{18144} \\bigg) v^4\n+ \\bigg( \\frac{1809}{224} q - \\frac{48373}{336} \\pi \\bigg) v^5\n\\bigg\\} e^2\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ - \\frac{1}{2} + \\frac{1247}{672} v^2\n+ \\bigg( \\frac{61}{8} q - 2 \\pi \\bigg) v^3\n\\cr && \\hspace*{1cm}\n- \\bigg( \\frac{213}{32} q^2 - \\frac{44711}{18144} \\bigg) v^4\n- \\bigg( \\frac{4301}{224} q - \\frac{8191}{1344} \\pi \\bigg) v^5\n\\bigg\\} y\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ - \\frac{51}{16} + \\frac{17203}{1344} v^2\n+ \\bigg( \\frac{1513}{16} q - \\frac{369}{16} \\pi \\bigg) v^3\n\\cr && \\hspace*{1cm}\n+ \\bigg( \\frac{1680185}{36288} - \\frac{5981}{64} q^2 \\bigg) v^4\n- \\bigg( 168 q - \\frac{48373}{672} \\pi \\bigg) v^5\n\\bigg\\} e^2 y\n\\bigg], \\\\\n\\left\\langle \\frac{dQ}{dt} \\right\\rangle_{\\!\\!t}\n&=&\n- \\frac{64}{5} \\mu^3 v^{6}\n\\cr && \\times\n\\bigg[\n1 - q v - \\frac{743}{336} v^2\n- \\bigg( \\frac{1637}{336} q - 4 \\pi \\bigg) v^3\n\\cr && \\hspace*{5mm}\n+ \\bigg( \\frac{439}{48} q^2 - \\frac{129193}{18144} - 4 \\pi q \\bigg) v^4\n+ \\bigg( \\frac{151765}{18144} q - \\frac{4159}{672} \\pi\n - \\frac{33}{16} q^3 \\bigg) v^5\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ \\frac{43}{8} - \\frac{51}{8} q v - \\frac{2425}{224} v^2\n - \\bigg( \\frac{14869}{224} q - \\frac{337}{8} \\pi \\bigg) v^3\n\\cr && \\hspace*{1cm}\n - \\bigg( \\frac{453601}{4536} - \\frac{3631}{32} q^2\n + \\frac{369}{8} \\pi q \\bigg) v^4\n\\cr && \\hspace*{1cm}\n + \\bigg( \\frac{141049}{9072} q - \\frac{38029}{672} \\pi\n - \\frac{929}{32} q^3 \\bigg) v^5 \\bigg\\} e^2\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ \\frac{1}{2} q v + \\frac{1637}{672} q v^3\n - \\bigg( \\frac{1355}{96} q^2 - 2 \\pi q \\bigg) v^4\n\\cr && \\hspace*{1cm}\n - \\bigg( \\frac{151765}{36288} q - \\frac{213}{32} q^3 \\bigg) v^5\n\\bigg\\} y\n\\cr && \\hspace*{5mm}\n+ \\bigg\\{ \\frac{51}{16} q v + \\frac{14869}{448} q v^3\n + \\bigg( \\frac{369}{16} \\pi q - \\frac{33257}{192} q^2\n \\bigg) v^4\n\\cr && \\hspace*{1cm}\n + \\bigg( - 
\frac{141049}{18144} q + \frac{5981}{64} q^3 \bigg) v^5\n \bigg\} e^2 y\n\bigg].\n\end{eqnarray}\n\nFrom the above results we can compute \n\begin{eqnarray}\n&& \hspace*{-1cm}\n\left\langle \frac{dQ}{dt} \right\rangle_{\!\!t}\n- \left\langle \frac{2\mu(r^2+a^2)P}{\Delta}\right\rangle\n\left\langle \n \frac{dE}{dt} \right\rangle_{\!\!t}\n+ \left\langle \frac{2\mu aP}{\Delta}\right\rangle\n\left\langle \n \frac{dL}{dt} \right\rangle_{\!\!t}\n\nonumber \\ &=&\n- \frac{64}{5} \mu^3 v^6 e^2 \bigg[\n- \frac{37}{6}\n+ \frac{13435}{672} v^2\n- \bigg( \frac{1561}{48} \pi - \frac{335}{8} q \bigg) v^3\n\cr && \hspace*{2.2cm}\n+ \bigg( \frac{625117}{12096} - \frac{337}{32} q^2 \bigg) v^4\n+ \bigg( \frac {46827}{448} \pi - \frac{1355}{672} q \bigg) v^5\n\cr && \hspace*{2.2cm}\n- \bigg( \frac{335}{16} q v^3 - \frac{7559}{192} q^2 v^4\n - \frac{1355}{1344} q v^5\n\bigg) y\n\bigg].\n\end{eqnarray}\nThe left-hand side of this equation vanishes for circular orbits\n\cite{Kennefick:1995za}.\nIn fact, the right-hand side vanishes for $e=0$.\nBefore we knew how to compute $\langle dQ/dt\rangle_t$, \nthe best guess one could make for $\langle dQ/dt\rangle_t$ \nwas to assume that the left-hand side vanishes for general \norbits\cite{Hughes:1999bq}. \nTherefore this combination quantifies the error introduced by \nthis rough working hypothesis. \nWe can also compute \n\begin{eqnarray}\n\left\langle \frac{dC}{dt} \right\rangle_{\!\!t} &=&\n\left\langle \frac{dQ}{dt} \right\rangle_{\!\!t}\n- 2(aE - L) \bigg(\na \left\langle \frac{dE}{dt} \right\rangle_{\!\!t}\n- \left\langle \frac{dL}{dt} \right\rangle_{\!\!t}\n\bigg)\n\nonumber \\ &=&\n- \frac{64}{5} \mu^3 v^6 y \bigg[\n1 - \frac{743}{336} v^2\n- \bigg( \frac{85}{8} q - 4 \pi \bigg) v^3\n\cr && \hspace*{2.2cm}\n- \bigg( \frac{129193}{18144} - \frac{307}{96} q^2 \bigg) v^4\n+ \bigg( \frac{2553}{224} q - \frac{4159}{672} \pi \bigg) v^5\n\cr && \hspace*{2.2cm}\n+ \bigg\{ \frac{43}{8} - \frac{2425}{224} v^2\n+ \bigg( \frac{337}{8} \pi - \frac{1793}{16} q \bigg) v^3\n\cr && \hspace*{2.7cm}\n- \bigg( \frac{453601}{4536} - \frac{7849}{192} q^2 \bigg) v^4\n\cr && \hspace*{2.7cm}\n+ \bigg( \frac{3421}{224} q - \frac{38029}{672} \pi \bigg) v^5\n\bigg\} e^2\n\bigg].\n\end{eqnarray}\nSince $y=0$ (i.e., $C=0$) corresponds to $\theta=\pi/2$, \n$C$ does not \nevolve for equatorial orbits. This is consistent \nwith the requirement from symmetry that an orbit in the\nequatorial plane stays in the equatorial plane.\n\nFinally, we consider \nthe evolution of the inclination angle $\iota$ defined by \n\cite{Hughes:1999bq},\n\begin{equation}\n\cos\iota = \frac{L}{\sqrt{L^2+C}}, \n\end{equation}\nwhich, roughly speaking, represents the angle between\nthe normal vector of the orbital plane and the rotational\naxis of the central black hole, \nalthough this is not the only possible definition. \nWhile the definition of the inclination angle \nis to some extent arbitrary in the Kerr case, \nthe angle defined in this way correctly reduces to the \nusual one in the $q=0$ Schwarzschild limit.
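\nThe time derivative of $\cos\iota$ used in the next equation follows from the chain rule alone. A quick symbolic check of that step (Python with sympy, treating $L$ and $C$ as generic functions of $t$; this is only a consistency check, not part of the calculation itself):\n\begin{verbatim}\nimport sympy as sp\n\nt = sp.symbols('t')\nL = sp.Function('L')(t)\nC = sp.Function('C')(t)\n\ncos_iota = L / sp.sqrt(L**2 + C)\n\nlhs = sp.diff(cos_iota, t)\nrhs = (2*C*sp.diff(L, t) - L*sp.diff(C, t)) / (2*(L**2 + C)**sp.Rational(3, 2))\n\nprint(sp.simplify(lhs - rhs))   # prints 0\n\end{verbatim}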
\nTaking the average of the time derivative of $\cos\iota$,\nwe obtain\n\begin{eqnarray}\n\left\langle \frac{d\cos\iota}{dt} \right\rangle_{\!\!t} &=&\n\frac{1}{2(L^2+C)^{\frac{3}{2}}}\n\bigg( 2 \left\langle \frac{dL}{dt} \right\rangle_{\!\!t} C\n -L \left\langle \frac{dC}{dt} \right\rangle_{\!\!t}\n\bigg)\n\nonumber \\ &=&\n\frac{32\mu^3 v^6}{5L^2(1+y)^{\frac{3}{2}}}\nq y \bigg[\n\bigg( -\frac{61}{24} v^3 + \frac{13}{96} q v^4\n+ \frac{1779}{224} v^5 \bigg)\n\cr && \hspace*{2.7cm}\n- \bigg( \frac{431}{16} v^3 - \frac{775}{192} q v^4\n- \frac{22431}{224} v^5 \bigg) e^2\n\bigg].\n\end{eqnarray}\nSubstituting $q=0$ into this equation, we can confirm that \n$\iota$ does not change in the Schwarzschild limit, \nwhich must be the case \nbecause of the spherical symmetry of the Schwarzschild spacetime.\n\n\n\section{Summary} \label{sec:summary}\nIn this paper, we have considered a scheme to evaluate the change\nrates of the orbital parameters of a particle orbiting a Kerr black\nhole under the adiabatic approximation.\nWe have adopted the method proposed by Mino \cite{Mino:2003yg},\nin which we use the radiative field instead of the retarded\nfield in order to compute approximately the change rates of ``the \nconstants of motion'' due to radiation reaction. \nBased on Mino's method, we have developed a simplified scheme\nto evaluate the long-term average of the change rates. \nApplying our new scheme, we have performed explicit \ncalculations to present analytic formulas for the change\nrates, $\langle dE/dt \rangle_t$,\n$\langle dL/dt \rangle_t$ and $\langle dQ/dt \rangle_t$,\nfor orbits with small eccentricity and inclination angle.\n\nHere we used expansions with respect to \nthe post-Newtonian order, the eccentricity and the inclination \nangle in evaluating $\langle dE/dt \rangle_t$, $\langle dL/dt \rangle_t$\nand $\langle dQ/dt \rangle_t$. \nAs a next step, therefore, we need to examine how large a region of \nparameter space is covered by our formulae with sufficient accuracy. \nAs for the inclination, we recently found a formulation to obtain\nthe analytic formulae for the change rates \nwithout assuming a small inclination angle~\cite{Ganz:2005}.\nOn the other hand, it is almost certain that \nwe need numerical calculations for the cases with a \nlarge eccentricity.\nDrasco and Hughes \cite{Drasco:2005kz} developed a numerical code to\ncalculate the gravitational wave fluxes of energy and azimuthal\nangular momentum evaluated at infinity and at the event horizon\nfor general geodesic orbits.\nFujita and Tagoshi also developed a numerical code based on\nan analytic method of solving the radial Teukolsky equation.\nBy applying such codes to our scheme, we can evaluate\nthe time-averaged change rate of the Carter constant\nfor general orbits, although the computational cost will not \nbe small because we need to take into account a large number \nof frequency modes. 
\n\nOnce we obtain the change rates of ``the constants of motion'', \nas a next step, we want to use them to trace\nthe evolution of orbits.\nSome strategies to solve the orbital evolution \ntaking into account \nthe radiation reaction effects were proposed\nin Refs.~\citen{Mino:2005an} and \citen{Tanaka:2005ue}.\nHowever, it should be noted that the adiabatic approximation\nused here contains only the dissipative part of the self-force\non a particle, and it does not contain the conservative part.\nIn general, the conservative part also contributes\nto the secular evolution of orbits, though it is not \nthe dominant part in the limit $\mu\to 0$. \nTherefore the adiabatic approximation may not be \nsufficient to evaluate the orbital evolution.\n\nRecently, Pound, Poisson and Nickel showed that\nthe conservative part of the self-force can produce\nsignificant shifts in orbital phases \nin an analogous problem with a charged particle in \nelectromagnetism\cite{Pound:2005fs}. \nThey suggested that the conservative contribution to\nthe phase shift is relatively large in weak-field, slow-motion cases,\nwhile it is suppressed in strong-field, rapid-motion cases.\nFurthermore, there are different types of effects \nof higher order in $\mu$ which may produce significant shifts \nin phases.\nTherefore it is important to quantify the range of validity \nof the adiabatic approximation for appropriate \napplications of the results obtained in this paper. \nAlthough it requires computing second-order perturbations in \n$\mu$ in order to understand all the effects which potentially\ngive phase shifts greater than $O(1)$, \nsome of these effects can be evaluated by studying \nthe first-order self-force at each moment without \naveraging over a long period. \nWe will come back to this issue in one of our forthcoming \npapers~\cite{Hikida:2005}.\n\n\n\n\section*{Acknowledgments}\nWe would like to thank S.~Drasco, S.~Jhingan,\nY.~Mino, T.~Nakamura, M.~Sasaki and H.~Tagoshi\nfor invaluable discussions.\nNS, WH and HN would like to thank all participants of\nthe 8th Capra Meeting at the Rutherford Appleton Laboratory\nin the UK for useful discussions.\nThis work was supported by Monbukagaku-sho Grant-in-Aid for\nScientific Research of the Japanese Ministry of Education,\nCulture, Sports, Science and Technology, Nos. 14047212 and 14047214.\nHN and WH are supported by a JSPS Research Fellowship \nfor Young Scientists, No.~5919 and No.~1756, respectively.\n\n\n\n\begin{appendix}\n\section{Radiative solution for the metric perturbation}\n\label{sec:radiative}\nIn this Appendix, we give a brief review of the Teukolsky formalism, \nfollowed by a derivation of the radiative Green function\nof the linearized Einstein equations.\nThis derivation is based on \nRefs.~\citen{Chrzanowski:1975wv,Wald:1978vm} and \citen{Gal'tsov82}.\n\n\n\subsection{Teukolsky equation}\nAs a master variable we consider the Teukolsky functions defined by\n\begin{eqnarray}\n{}_s\Psi &:=& {}_sD^{\mu\nu}h_{\mu\nu} =\n\left\{\n \begin{array}{ll}\n -C_{\alpha\beta\gamma\delta}l^{\alpha}m^{\beta}l^{\gamma}m^{\delta},\n & s=2, \\\n -\bar{z}^4C_{\alpha\beta\gamma\delta}\n n^{\alpha}\bar{m}^{\beta}n^{\gamma}\bar{m}^{\delta}, & s=-2,\n \end{array}\n\right. 
\n\\label{defPsi}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n{}_2 D^{\\mu\\nu} &=& -{1\\over 2z}\\bigg[{1\\over 2}\n {\\cal L}^{\\dag}_{-1}{\\cal L}^{\\dag}_{0}{1\\over z}l^\\mu l^\\nu\n +{\\cal D}^2_0 zm^\\mu m^\\nu \n\\cr && \\hspace*{1cm}\n -{1\\over 2\\sqrt{2}}\\left( {\\cal D}_0{1\\over z^2}\n {\\cal L}^{\\dag}_{-1}z^2+{\\cal L}^{\\dag}_{-1}{1\\over z^2}\n {\\cal D}_0 z^2\\right)(l^\\mu m^\\nu +m^\\mu l^\\nu)\\bigg],\n\\nonumber \\\\\n{}_{-2} D^{\\mu\\nu} &=& -{1\\over 2z}\\Biggl[{1\\over 2}\n {\\cal L}_{-1}{\\cal L}_{0}z\\bar z^2 n^\\mu n^\\nu\n +{1\\over 4}\\Delta^2 {\\cal D}^{\\dag2}_0{\\bar z^2\\over z} \n \\bar m^\\mu \\bar m^\\nu \n\\cr && \\hspace*{5mm}\n +{\\Delta^2 \\over 4\\sqrt{2}}\\bigg({\\cal D}^{\\dag}_0\n {1\\over \\Delta z^2}\n {\\cal L}_{-1}z^2\\bar z^2+{\\cal L}_{-1}\\frac{1}{z^2}\n {\\cal D}^{\\dag}_0{z^2\\bar z^2\\over \\Delta}\\bigg)\n (n^\\mu \\bar m^\\nu +\\bar m^\\mu n^\\nu)\\Biggr],\n \\label{Ddef}\n\\end{eqnarray}\n$z:=r+ia\\cos\\theta$,\n$\\Delta:=r^2-2Mr+a^2$, and $\\Sigma:=r^2+a^2\\cos^2\\theta$.\n${\\cal D}_n$ and ${\\cal L}_s$ are the differential operators defined by \n\\begin{eqnarray}\n{\\cal D}_n &:=&\n\\partial_r + \\frac{(r^2+a^2)}{\\Delta}\n \\partial_t+\\frac{a}{\\Delta}\\partial_\\varphi\n+ \\frac{2n(r-M)}{\\Delta} , \\\\\n{\\cal L}_s &:=&\n\\partial_{\\theta} - \\frac{i}{\\sin\\theta}\\partial_\\varphi\n-i a\\sin\\theta\\partial_t + s\\cot\\theta, \n\\end{eqnarray}\nand a dagger ($\\dag$) acting on an operator \nmeans transformation \nof $(\\partial_t, \\partial_\\varphi) \\to \n(-\\partial_t, -\\partial_\\varphi)$, \nwhich reduces to the one defined in the main text by \n$(\\omega, m) \\to (-\\omega, -m)$ \nunder the assumption of Fourier expansion.\nThe Teukolsky functions satisfy a separable partial differential\nequation~\\cite{Teukolsky:1973ha}\n\\begin{equation}\n{}_s{\\cal O}~{}_s\\Psi = 4\\pi\\Sigma ~_s\\hat{T}, \n \\label{eq:Teukolsky-eq}\n\\end{equation}\nwhere \n\\begin{equation}\n{}_s \\hat{T}:=~_s\\tau_{\\mu\\nu} T^{\\mu\\nu},\n\\end{equation}\nand\n${}_s{\\cal O}$ is the Teukolsky differential operator, \n\\begin{eqnarray}\n{}_s{\\cal O} &:=& {}_s{\\cal O}_r + {}_s{\\cal O}_\\theta,\n\\end{eqnarray}\nwith \n\\begin{eqnarray}\n{}_s{\\cal O}_r &:=& \n-\\frac{(r^2+a^2)^2}{\\Delta}\\partial_t^2\n+\\Delta^{-s}\\partial_r(\\Delta^{s+1}\\partial_r)\n-\\frac{a^2}{\\Delta}\\partial_{\\varphi}^2\n-\\frac{4Mar}{\\Delta}\\partial_t\\,\\partial_{\\varphi}\n+\\frac{2sa(r-M)}{\\Delta}\\partial_{\\varphi}\n\\cr &&\n+2s\\left(\\frac{M(r^2-a^2)}{\\Delta}-r\\right)\n\\partial_t + s, \\cr\n{}_s{\\cal O}_\\theta &:=& \na^2\\sin^2\\theta\\,\\partial_t^2\n+\\frac{1}{\\sin\\theta}\\,\\partial_{\\theta}(\\sin\\theta\\,\\partial_{\\theta})\n+\\frac{1}{\\sin^2\\theta}\\,\\partial_{\\varphi}^2\n+\\frac{2is\\cos\\theta}{\\sin^2\\theta}\\,\\partial_{\\varphi}\n\\cr\n&&-2isa\\cos\\theta\\,\\partial_t-s^2\\cot^2\\theta,\n\\label{calOs}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n{}_2 \\tau_{\\mu\\nu} & := & {1\\over\\bar z^4 z}\\bigg[\n {1\\over\\sqrt{2}}\\left({\\cal L}^{\\dag}_{-1}{\\bar z^4\\over z^2}\n {\\cal D}_0+{\\cal D}_0{\\bar z^4\\over z^2}{\\cal L}^{\\dag}_{-1}\n \\right)z^2 (l_\\mu m_\\nu +m_\\mu l_\\nu)\n\\cr && \\hspace*{2.5cm}\n -{\\cal L}^{\\dag}_{-1}\\bar z^4{\\cal L}^{\\dag}_0 {1\\over z}l_\\mu l_\\nu\n -2{\\cal D}_0\\bar z^4{\\cal D}_0 z m_\\mu m_\\nu\\bigg],\n \\nonumber \\\\\n _{-2}\\tau_{\\mu\\nu} & := & -{1\\over\\bar z^4 z}\\Biggl[\n {1\\over2\\sqrt{2}}\\Delta\\left({\\cal L}_{-1}{\\bar z^4\\over z^2}\n {\\cal D}^{\\dag}_{-1}+{\\cal 
D}^{\\dag}_{-1}{\\bar z^4\\over z^2}{\\cal L}_{-1}\n \\right)\\Sigma^2 (n_\\mu \\bar m_\\nu +\\bar m_\\mu n_\\nu)\n\\cr && \\hspace*{2cm}\n +{\\cal L}_{-1}\\bar z^4{\\cal L}_0 \\bar z \\Sigma n_\\mu n_\\nu\n +{1\\over 2}\\Delta^2{\\cal D}^{\\dag}_0\\bar z^4{\\cal D}^{\\dag}_0 \n {\\bar z^2\\over z} \\bar m_\\mu \\bar m_\\nu\\Biggr].\n \\label{taudef}\n\\end{eqnarray}\nWe consider the following forms of expansions for ${}_s\\Psi$\nand ${}_s\\hat{T}$:\n\\begin{eqnarray*}\n{}_s\\Psi &=&\n\\int_{-\\infty}^{\\infty}d\\omega\\sum_{\\ell m}\ne^{-i\\omega t}{}_sX_\\Lambda(r){}_sS_\\Lambda(\\theta)\n\\frac{e^{im\\varphi}}{\\sqrt{2\\pi}}, \\\\\n4\\pi\\Sigma~_s\\hat{T} &=&\n\\int_{-\\infty}^{\\infty}d\\omega\\sum_{\\ell m}\ne^{-i\\omega t}~_sT_\\Lambda(r)~_sS_\\Lambda(\\theta)\n\\frac{e^{im\\varphi}}{\\sqrt{2\\pi}},\n\\end{eqnarray*}\nwhere $\\Lambda:=\\{lm\\omega\\}$.\nSubstituting these expressions into the Teukolsky equation \n(\\ref{eq:Teukolsky-eq}),\nwe obtain equations separated for the radial and angular parts\nas\n\\begin{eqnarray}\n\\left[\\Delta^{-s}\\frac{d}{dr}\\left(\\Delta^{s+1}\\frac{d}{dr}\\right)\n+ \\frac{K^2-2is(r-M)K}{\\Delta}+4is\\omega r\n-\\lambda \\right]{}_sX_\\Lambda(r)\n&=& {}_sT_\\Lambda, \\label{eq:radial-Teuk}\\\\\n\\bigg[\\frac{1}{\\sin\\theta}\\frac{d}{d\\theta}\n\\left(\\sin\\theta\\frac{d}{d\\theta}\\right)\n- a^2\\omega^2\\sin^2\\theta\n- \\frac{(m+s\\cos^2\\theta)^2}{\\sin^2\\theta}\n\\hspace*{2cm} && \\cr \n- 2a\\omega s \\cos\\theta + \\lambda + s + 2am\\omega\n\\bigg]{}_sS_\\Lambda(\\theta)\n&=& 0,\n\\label{eq:spheroid-eq}\n\\end{eqnarray}\nwhere $K:=(r^2+a^2)\\omega -ma$ and\n$\\lambda := {}_sE_{\\ell m}(a\\omega) - s(s+1) + a^2\\omega^2 - 2am\\omega$. \nThe eigenvalue ${}_sE_{\\ell m}(a\\omega)$ is determined \nby solving Eq.(\\ref{eq:spheroid-eq}) as an eigenvalue \nproblem imposing regular boundary conditions on ${}_sS_\\Lambda(\\theta)$ \nat $\\theta=\\pm\\pi\/2$. Here $\\ell$ is an index that labels \ndiffernt eigen values. We also give a brief review on \nhow to solve this equation analytically in Appendix C. \n\n\n\n\\subsection{Mode functions}\nWe write mode functions for the Teukolsky equation \n(\\ref{eq:Teukolsky-eq}) in the form \n\\begin{equation}\n{}_s \\Omega_\\Lambda :=\n{}_sR_\\Lambda(r)~_s Z_\\Lambda(\\theta,\\varphi) e^{-i\\omega t},\n \\label{eq:Teuk-mode-fnc}\n\\end{equation}\nwhere ${}_sR_\\Lambda(r)$ is a homogeneous solution of\nthe radial Teukolsky equation (\\ref{eq:radial-Teuk}),\nand $_s Z_\\Lambda(\\theta,\\varphi)$ is the \nspheroidal harmonics\n\\begin{equation}\n_sZ_\\Lambda(\\theta,\\varphi) =\n\\frac{1}{\\sqrt{2\\pi}}~_sS_\\Lambda(\\theta) e^{im\\varphi},\n\\end{equation}\nnormalized as\n\\begin{equation}\n\\int_0^\\pi d\\theta \\sin\\theta |_sS_\\Lambda(\\theta)|^2 =1.\n\\end{equation}\nUsing the symmetry of the radial equation \n(\\ref{eq:radial-Teuk}) under the \nsimultaneous operations of the \ncomplex conjugation and the transformation \nof $(m,\\omega)\\rightarrow(-m,-\\omega)$,\nwe impose \n\\begin{equation}\n{}_s R_\\Lambda=~_s\\bar R^{\\dag}_\\Lambda,\n \\label{Rdag}\n\\end{equation}\nwhere a dagger ($\\dag$) acting on a mode function \nmeans transformation\nof $(\\omega, m) \\to (-\\omega, -m)$. \nIn a similar manner, \nby virtue of the symmetries of\nEq.~(\\ref{eq:spheroid-eq}), we arrange \nthe spheroidal harmonics to satisfy \n\\begin{equation}\n{}_s Z_\\Lambda =(-1)^m~_{-s}\\bar Z_\\Lambda^{\\dag}. 
\n\\label{eq:symm-spheroid}\n\\end{equation}\nIn our later discussions, we also \nneed the well-known Teukolsky-Starobinsky identities:\n\\begin{equation}\n {}_{-s} R_\\Lambda =~ _s U _s R_\\Lambda,\n\\qquad\n(\\mbox{for}~~ |s|=2), \n \\label{Utransform}\n\\end{equation}\nwith\n\\begin{equation}\n {}_{-2} U:={A \\over {\\cal C}}{\\cal D}^4_0,\\qquad \n {}_2 U:={1\\over A\\bar {\\cal C}}\\Delta^2\n{\\cal D}^{\\dag 4}_0 \\Delta^2,\n \\label{Udef}\n\\end{equation}\nwhere \n\\begin{eqnarray}\n {\\cal C}& = &[((\\lambda+s(s+1))^2\n +4a\\omega m -4a^2\\omega^2)\n \\{(\\lambda+s(s+1)-2\n )^2+36a\\omega m-36 a^2\\omega^2\\}\\cr\n &&\\quad +(2\\lambda+2s(s+1)-1)(96a^2\\omega^2-48a\\omega m)-\n 144 a^2\\omega^2]^{1\/2}+12i\\omega M, \n\\end{eqnarray}\nand $A$ is a factor which depends on how we normalize \nthe radial functions.\nIn this paper, we simply adopt $A=1$. \n(This convention is the one used in Ref.~\\citen{Mano:1996gn}). \n\nNow we discuss how to construct mode functions \nfor metric perturbations\nfrom mode functions of the Teukolsky equation. \nThe basic idea owes to \nChrzanowski~\\cite{Chrzanowski:1975wv}. \nHere we follow a more rigorous approach taken by \nWald~\\cite{Wald:1978vm}. \nUsing the relation (\\ref{defPsi}), \nthe Teukolsky equation (\\ref{eq:Teukolsky-eq}) \nis rewritten as \n\\begin{equation}\n {1\\over 4\\pi\\Sigma}{}_s{\\cal O}\\,_sD^{\\mu\\nu}\n h_{\\mu\\nu}={}_s\\hat T.\n\\end{equation}\nOn the other hand, operating ${}_s\\tau_{\\alpha\\beta}$ on \nthe linearlized Einstein equation, which we schematically denote as \n$\n G^{\\alpha\\beta\\mu\\nu} h_{\\mu\\nu}=4\\pi T_{\\alpha\\beta}, \n$\nwe obtain \n\\begin{equation}\n{1\\over 4\\pi}{}_s\\tau_{\\alpha\\beta} G^{\\alpha\\beta\\mu\\nu} \n h_{\\mu\\nu}={}_s\\hat T.\n\\end{equation}\nFrom the comparison of these equations, we find an identity at the \noperator level:\n\\begin{equation}\n{1\\over \\Sigma}{}_s{\\cal O}\\,_sD^{\\mu\\nu}\n ={}_s\\tau_{\\alpha\\beta} G^{\\alpha\\beta\\mu\\nu}.\n\\label{operatorID}\n\\end{equation}\n\nHere we define $O^{*\\mu\\nu}$, \nthe adjoint of an operator $O^{\\mu\\nu}$, so as \nto satisfy \n\\begin{equation}\n \\int\\sqrt{-g} \\bar X O^{\\mu\\nu} Y_{\\mu\\nu} d^4x\n = \\int\\sqrt{-g} Y_{\\mu\\nu} \\overline{ O^{*\\mu\\nu} X} \n d^4x, \n\\end{equation}\nfor arbitrary scalar field $X$ and tensor field $Y_{\\mu\\nu}$. \nThe definition of the adjoint operators for \ndifferent types of tensor operators is a straight forward \ngeneralization of this definition. \nIt will be worth noting $\\sqrt{-g}\\,d^4x=\\sin\\theta\\Sigma \\,\ndt\\,dr\\,d\\theta\\,d\\varphi$, \nand \n\\begin{eqnarray}\n(AB)^*=B^* A^*,\\qquad \n{\\cal D}_n^*=-\\Sigma^{-1}{\\cal D}_{-n}^\\dag\\Sigma,\\qquad\n{\\cal L}_s^*=-\\Sigma^{-1}{\\cal L}_{1-s}^\\dag\\Sigma. \n\\end{eqnarray}\nBy taking adjoint of each side in \nEq.~(\\ref{operatorID}), we obtain \n\\begin{equation}\n{}_sD^{*\\mu\\nu}\n\\left(\\Sigma^{-1}{}_s{\\cal O}\\right)^*\n =G^{\\alpha\\beta\\mu\\nu} {}_s\\tau^*_{\\alpha\\beta}. \n\\end{equation}\nHere we used the fact that the linearlized Einstein \noperator $G^{\\alpha\\beta\\mu\\nu}$ \nis self-adjoint, i.e., $G^{*\\alpha\\beta\\mu\\nu}\n=G^{\\alpha\\beta\\mu\\nu}$. \nThen, from the definition of ${}_s{\\cal O}_r$ and \n${}_s{\\cal O}_\\theta$ given in Eqs.~(\\ref{calOs}), \nit is easy to see that \n\\begin{equation}\n\\left(\\Sigma{}^{-1}~_s{\\cal O}_r\\right)^*=\n \\Sigma{}^{-1} {}_{-s}{\\cal O}_r, \n\\qquad\n\\left(\\Sigma^{-1}~{}_s{\\cal O}_\\theta\\right)^*=\n \\Sigma{}^{-1} {}_s{\\cal O}_\\theta. 
\n\\end{equation}\nTherefore we have \n$(\\Sigma{}_s{\\cal O})^*{}_{-s} R_{\\Lambda}\\,_s Z_{\\Lambda}\\,\ne^{-i\\omega t}=0$, \nwhihc means that \n\\begin{equation}\nG^{\\alpha\\beta\\mu\\nu} {}_s\\tau^*_{\\alpha\\beta}\\,\n {}_{-s}R_{\\Lambda}\\,{}_s Z_{\\Lambda}\\,e^{-i\\omega t}=0. \n\\end{equation}\nHere the explicit form of the \nadjoint operators ${}_s\\tau_{\\mu\\nu}^*$ are \n\\begin{eqnarray}\n{}_2\\tau^*_{\\mu\\nu} & = &\n\\bigg[\n {1\\over\\sqrt{2}}(l_\\mu \\bar m_\\nu +\\bar m_\\mu l_\\nu) \n {\\bar z\\over z}\\left({\\cal D}_0{z^4\\over \\bar z^2}{\\cal L}_2+\n {\\cal L}_2 {z^4\\over \\bar z^2}{\\cal D}_0\\right)\n\\cr && \\hspace*{1.5cm}\n -l_\\mu l_\\nu{1\\over z\\bar z^2}\n {\\cal L}_1 z^4{\\cal L}_2 -2\\bar m_\\mu \\bar m_\\nu {1\\over z}\n {\\cal D}_0 z^4 {\\cal D}_0\\bigg]{1\\over z^3},\n\\\\\n{}_{-2} \\tau^*_{\\mu\\nu} & = &\n-\\bigg[\n {1\\over 2\\sqrt{2}}(n_\\mu m_\\nu +m_\\mu n_\\nu) \n z\\bar z\\left({\\cal D}^{\\dag}_1{z^4\\over \\bar z^2}{\\cal L}^{\\dag}_2+\n {\\cal L}^{\\dag}_2 {z^4\\over \\bar z^2}{\\cal D}^{\\dag}_1\\right)\\Delta\n\\cr && \\hspace*{1.5cm}\n +n_\\mu n_\\nu z{\\cal L}^{\\dag}_1 z^4{\\cal L}^{\\dag}_2 \n +{1\\over 2}m_\\mu m_\\nu {z\\over\\bar z^2}\n {\\cal D}^{\\dag}_0 z^4 {\\cal D}^{\\dag}_0\\Delta^2\\bigg]{1\\over z^3}. \n\\label{taustar}\n\\end{eqnarray}\nHence, \n\\begin{equation}\n{}_s\\Pi_{\\Lambda,\\mu\\nu} :=\n\\zeta_s~_s\\tau_{\\mu\\nu}^{*}\\,\n{}_s\\tilde\\Omega_\\Lambda, \n \\label{Omgtopi}\n\\end{equation}\nwith \n\\begin{equation}\n{}_s\\tilde\\Omega_\\Lambda\n = {}_{-s}R_{\\Lambda}\\,_s Z_\\Lambda \\,e^{-i\\omega t}, \n\\end{equation}\nis a complex-valued homogeneous solution of the \nlinearized Einstein equations. \nHere $\\zeta_s$ is a numerical coefficient which we \ndetermine so as to satisfy \n\\begin{eqnarray}{}_sD^{\\mu\\nu}{}\n\\sum_\\Lambda ({\\cal A}_\\Lambda\\,_s \\Pi_{\\Lambda,\\mu\\nu}\n+\\overline{{\\cal A}_\\Lambda\\,_s\\Pi_{\\Lambda,\\mu\\nu}}) \n=\\sum_\\Lambda {\\cal A}_\\Lambda\\, {}_s\\Omega_{\\Lambda}, \n\\end{eqnarray}\nfor any complex-valued amplitude of each mode, ${\\cal A}_\\Lambda$. \nHere the coplex conjugate term in parentheses\nis necessary to make the metric perturbation real. \nUsing Eqs.~(\\ref{Ddef}) and (\\ref{taustar}), we can verify\n\\begin{eqnarray}\n{}_2 D^{\\mu\\nu}{}_{-2}\\tau^*_{\\mu\\nu} &= &{1\\over 4} \n {\\cal L}^{\\dag}_{-1} {\\cal L}^{\\dag}_0 \n {\\cal L}^{\\dag}_{1} {\\cal L}^{\\dag}_2, \\quad\n{}_{-2}D^{\\mu\\nu}{}_{-2}\\tau^*_{\\mu\\nu}={1\\over 16} \n \\Delta^2{\\cal D}^{\\dag 4}_0\\Delta^2,\n\\nonumber \\\\\n{}_{-2}D^{\\mu\\nu}{}_{-2}\\bar\\tau^{*}_{\\mu\\nu} &=& 0, \\quad\n{}_{2}D^{\\mu\\nu}{}_{2}\\tau^*_{\\mu\\nu}={\\cal D}^{4}_0,\n\\\\\n{}_{-2}D^{\\mu\\nu}{}_{2}\\tau^*_{\\mu\\nu} &=&\n{1\\over 4} {\\cal L}_{-1} {\\cal L}_0 {\\cal L}_{1} {\\cal L}_2, \\quad\n{}_{2}D^{\\mu\\nu}{}_{2}\\bar\\tau^{*}_{\\mu\\nu} = 0. \n\\label{Dtau}\n\\end{eqnarray}\nIn literature ${}_s\\bar\\tau^{*\\dag}_{\\mu\\nu}$ is \nused to represent what we denote here by ${}_s\\bar\\tau^{*}_{\\mu\\nu}$. \nThe difference arises because we use the notation for \nthe differential operators without assuming that \nthey always act on a single Fourier mode. \nNamely, instead of writing $(-i\\omega, im)$, we are using here \n$(\\partial_t,\\partial_\\varphi)$. The complex conjugation \nof the former gives rise a flip of signature, while that \nof the latter does not. 
\nWith the aid of the above relations (\ref{Utransform}) and \n(\ref{Dtau}), we find that the complex conjugate terms vanish, and we obtain \n\begin{eqnarray}\n _2 D^{\mu\nu}{}_2\Pi_{\Lambda,\mu\nu} &=&\n \zeta_2 \,{\cal C}\, {}_2\Omega_\Lambda,\n \nonumber \\\n {}_{-2} D^{\mu\nu}{}_{-2}\Pi_{\Lambda,\mu\nu} &=&\n {\zeta_{-2}\, \bar {\cal C}\over 16}{}_{-2}\Omega_\Lambda.\n\end{eqnarray}\nThus the normalization constants are fixed as \n\begin{equation}\n \zeta_2={1\over {\cal C}}, \qquad \n \zeta_{-2}={16\over \bar {\cal C}}.\label{eq:zeta-norm}\n\end{equation}\n\n\n\subsection{Radiative field}\nHere we explain a method of constructing the radiative field \nfor metric perturbations. The radiative field is a homogeneous solution \nof the field equations. Hence, once we obtain the radiative field \nfor the Teukolsky function, it can be easily transformed into \nthat for metric perturbations by using the relations established \nin the preceding subsection. \nWe therefore first derive the radiative field \nfor the Teukolsky function. \n\nThe retarded Green function of the Teukolsky function \nis defined as a solution of\n\begin{equation}\n{}_s{\cal O} {}_sG(x,x') =\n\frac{\delta^{(4)}(x-x')}{\Delta^s}, \n\end{equation}\nwith the retarded boundary condition: ${}_sG(x,x') =0$ for \n$t<t'$. \nThe radiative Green function is given by half the retarded \nGreen function minus half the advanced one. \nExpanding it into Fourier-harmonic modes in the same manner as \n${}_s\Psi$ above, its radial part takes the following form \nfor $r>r'$:\n\begin{equation}\n{}_sg^{\rm rad}_{\Lambda}(r,r') =\n\frac{1}{2}\left[\n \frac{{}_s R^{\rm up}_\Lambda(r){}_s R^{\rm in}_\Lambda(r')}\n {W({}_s R^{\rm in}_\Lambda,{}_s R^{\rm up}_\Lambda)}-\n \frac{{}_s R^{\rm down}_\Lambda(r){}_s R^{\rm out}_\Lambda(r')}\n {W({}_s R^{\rm out}_\Lambda,{}_s R^{\rm down}_\Lambda)}\n\right]. \n\label{eq1}\n\end{equation}\nWe rewrite this expression in terms of the \ndown-field and the out-field, \neliminating ${}_s R^{\rm up}_\Lambda(r)$\nand ${}_s R^{\rm out}_\Lambda(r')(=\Delta^{-s}\n {}_{-s} \bar R^{\rm in}_\Lambda(r'))$ \nin Eq.~(\ref{eq1}). To this end, we expand \n${}_s R^{\rm up}_\Lambda$ and ${}_{s}R^{\rm out}_\Lambda$ as \n\begin{eqnarray}\n {}_s R^{\rm up}_\Lambda \n & = & \alpha~ {}_s R^{\rm out}_\Lambda\n +\beta~ {}_s R^{\rm down}_\Lambda,\cr\n {}_s R^{\rm out}_\Lambda & = & \gamma~ {}_s R^{\rm up}_\Lambda\n +\delta~ {}_s R^{\rm in}_\Lambda~. \n\label{expansion}\n\end{eqnarray}\nTaking the Wronskians of both sides of Eqs.~(\ref{expansion})\nwith appropriate radial functions, \none can easily obtain \n\begin{eqnarray*}\n && W({}_s R^{\rm up}_\Lambda,{}_s R^{\rm down}_\Lambda)\n =\alpha\, W({}_s R^{\rm out}_\Lambda,{}_s R^{\rm down}_\Lambda),\n\qquad\n W({}_s R^{\rm up}_\Lambda,{}_s R^{\rm out}_\Lambda)\n =\beta\, W({}_s R^{\rm down}_\Lambda,{}_s R^{\rm out}_\Lambda), \n\cr\n && W({}_s R^{\rm out}_\Lambda,{}_s R^{\rm in}_\Lambda)\n =\gamma\, W({}_s R^{\rm up}_\Lambda,{}_s R^{\rm in}_\Lambda),\n\qquad\n W({}_s R^{\rm out}_\Lambda,{}_s R^{\rm up}_\Lambda)\n =\delta\, W({}_s R^{\rm in}_\Lambda,{}_s R^{\rm up}_\Lambda). 
\n\\end{eqnarray*}\nSubstituting these relations, the expression (\\ref{eq1}) \nreduces to\n\\begin{eqnarray}\n_sg^{\\rm rad}_\\Lambda(r,r')&=& {\\Delta^{-s}(r')\\over\n 2 W({}_s R^{\\rm in}_\\Lambda,{}_s R^{\\rm up}_\\Lambda)\n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) }\\cr\n&&\\quad \\times \\Bigl[\n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm in}_\\Lambda) \n \\,_sR^{\\rm down}_{\\Lambda}(r) \n \\,_{-s}\\bar R^{\\rm down}_{\\Lambda}(r')\\cr\n &&\\hspace*{2cm} \n +W({}_s R^{\\rm up}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) \n \\,_s R^{\\rm out}_{\\Lambda}(r) \n \\,_{-s}\\bar{R}^{\\rm out}_{\\Lambda}(r') \n \\Bigr]. \\label{eq2}\n\\end{eqnarray}\nWe can do an analogous reduction for $rr'$. Namely, the step functions \nwhich was present in the retarded and the advanced Green functions \ndo not appear in the radiative Green function. \nThis is consistent with the fact \nthat the radiative field is a source-free \nhomogeneous solution. \n\nSince the radiative field is a homogeneous solution, \nwe can use the method for reconstruction of metric \nperturbation explained in the preceding subsection. \nWhen we consider the metric perturbation by a point mass,\nthe energy-momentum tensor is given by (\\ref{ppEM}). \nIn this case it is easy to verify that the radiative field \nof the metric perturbations is given by \n\\begin{eqnarray}\nh_{\\mu\\nu}^{{\\rm rad}}(x) &=&\n\\mu \\! \\int \\!\\! \nd\\omega \\sum_{\\ell m} \n\\bigg\\{\n{\\cal N}_{s}^{\\rm out} \n{}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm out}}(x)\n\\int \\!\\! d\\tau \\Big[\n{}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm out}}(z(\\lambda))\nu^{\\alpha}u^{\\beta} \\Big]\n\\cr &&\n+ {\\cal N}_{s}^{\\rm down} {}_s\\Pi_{\\Lambda,\\mu\\nu}^{{\\rm down}}(x)\n\\int \\!\\! d\\tau \\Big[\n{}_s\\bar{\\Pi}_{\\Lambda,\\alpha\\beta}^{{\\rm down}}(z(\\lambda))\nu^{\\alpha}u^{\\beta}\n\\Big] \\bigg\\}\n+ ({\\rm c.c.}), \n\\label{eq:rad-field}\n\\end{eqnarray}\nwith \n\\begin{eqnarray}\n {\\cal N}_{s}^{\\rm out} \n & = & { W({}_s R^{\\rm up}_\\Lambda,{}_s R^{\\rm down}_\\Lambda)\n \\over \\bar\\zeta_s \n W({}_s R^{\\rm in}_\\Lambda,{}_s R^{\\rm up}_\\Lambda)\n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) },\\cr\n {\\cal N}_{s}^{\\rm down} \n & = & { W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm in}_\\Lambda)\n \\over \\bar\\zeta_s \n W({}_s R^{\\rm in}_\\Lambda,{}_s R^{\\rm up}_\\Lambda)\n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) }. \n\\end{eqnarray}\nIn fact, if we apply ${}_s{\\cal D}^{\\mu\\nu}$, we correctly \nrecover\n${}_s \\Psi^{\\rm rad}(x)=4\\pi\\int \nG^{\\rm rad}(x,x') \\Sigma(x')$\n$\n\\Delta^s(x') {}_s\\hat T(x') d^4 x$. \nTo show this, we also used\n\\begin{eqnarray}\n\\bar\\zeta_s \\int \\sqrt{-g}\n {}_s\\bar{\\tilde{\\Omega}}_{\\Lambda} \n \\,{}_s\\hat T d^4x\n&=&\\bar\\zeta_s\\int \\sqrt{-g} \n \\overline{\\left({}_s\\tau^{*}_{\\mu\\nu}\\, \n {}_s\\tilde\\Omega_{\\Lambda}\\right)} T^{\\mu\\nu} d^4x \\cr\n&=&\\mu\\int d\\tau\\,\n u^{\\mu} u^{\\nu} \n \\bar\\Pi_{\\Lambda,\\mu\\nu}(z(\\tau)). \n\\label{TT}\n\\end{eqnarray}\n\nIt is more convenient to rewrite ${\\cal N}_s$ written \nin terms of Wronskians by using the coefficients \nin the asymptotic forms of radial functions. 
\nThe radial functions take the asymptotic forms,\n\\begin{eqnarray}\n{}_s R_\\Lambda^{{\\rm in}} &:=&\n\\left\\{\n\\begin{array}{ll}\n {}_sB_{\\Lambda}^{{\\rm inc}}\\displaystyle r^{-1} e^{-i\\omega r^{*}}\n +{}_sB_{\\Lambda}^{{\\rm ref}} r^{-2s-1} e^{i\\omega r^{*}}, \\hskip1cm\n&\n {\\rm for}~~r^{*}\\rightarrow\\infty,\n\\\\\n \\displaystyle\n {}_sB_{\\Lambda}^{{\\rm trans}}\\Delta^{-s}e^{-ik r^{*}},\n&\n {\\rm for}~~r^{*}\\rightarrow -\\infty,\n\\end{array}\n\\right. \\label{eq:asymptotic-in} \\\\\n{}_s R_\\Lambda^{{\\rm up}} &:=&\n\\left\\{\n\\begin{array}{ll}\n \\displaystyle\n {}_sC_{\\Lambda}^{{\\rm trans}}r^{-2s-1} e^{i\\omega r^{*}}, \\hskip3cm\n\\hspace*{7mm}\n&\n {\\rm for}~~r^{*}\\rightarrow\\infty, \n\\\\\n \\displaystyle\n {}_sC_{\\Lambda}^{{\\rm up}}e^{ik r^{*}}\n + {}_sC_{\\Lambda}^{{\\rm ref}}\\Delta^{-s}e^{-ik r^{*}},\n&\n {\\rm for}~~r^{*}\\rightarrow -\\infty, \n\\end{array}\n\\right.\n\\label{eq:asymptotic-up}\n\\end{eqnarray}\nwhere $r^*$ is the tortoise coordinate defined by\n$dr^*\/dr=(r^2+a^2)\/\\Delta$.\nUsing the relations \n${}_s R^{\\rm out}_\\Lambda=\n\\Delta^{-s}{}_{-s} \\bar R^{\\rm out}_\\Lambda$\nand\n${}_s R^{\\rm down}_\\Lambda=\n\\Delta^{-s}{}_{-s} \\bar R^{\\rm up}_\\Lambda$, \nwe can describe the asymptotic forms\nof out- and down- fields with the same coefficients\nthat appear in Eqs.~(\\ref{eq:asymptotic-in}) and\n(\\ref{eq:asymptotic-up}).\nThen, the Wronskians that we need to evaluate are \n\\begin{eqnarray}\n W({}_s R^{\\rm in}_\\Lambda\n,{}_s R^{\\rm up}_\\Lambda) \n& = & \n 2i\\omega\\, {}_{s}B_\\Lambda^{\\rm inc} \\,\n {}_{s}C_\\Lambda^{\\rm trans}, \n\\cr \n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) \n& = & \n -2i\\omega\\, {}_{-s}\\bar C_\\Lambda^{\\rm trans} \\,\n {}_{-s}\\bar B_\\Lambda^{\\rm inc}, \\cr \n W({}_s R^{\\rm out}_\\Lambda,{}_s R^{\\rm in}_\\Lambda) \n& = & \n -4ikM r_+\\kappa_s \\, {}_{s} B_\\Lambda^{\\rm trans} \\,\n {}_{-s}\\bar B_\\Lambda^{\\rm trans}, \\cr \n W({}_s R^{\\rm up}_\\Lambda,{}_s R^{\\rm down}_\\Lambda) \n& = & \n -2i\\omega\\, {}_{-s}\\bar C_\\Lambda^{\\rm trans} \\,\n {}_{s} C_\\Lambda^{\\rm trans}, \n\\end{eqnarray}\nwhere $\\displaystyle\\kappa_s:=1-{is(r_+-M)\/ 2kMr_{+}}$.\nThe coefficients with $(-s)$-spin can be erased by using \nthe Teukolsky-Starobinsky identities~(\\ref{Utransform}). 
\nSubstituting the asymptotic forms (\\ref{eq:asymptotic-in}) and\n(\\ref{eq:asymptotic-up}) into Eqs.~(\\ref{Utransform}), we obtain\n\\begin{eqnarray}\n {}_{-2}B_\\Lambda^{\\rm inc} &= \n &{{\\cal C}\\over(2\\omega)^4}{}_{2}B_\\Lambda^{\\rm inc}, \\qquad\n {}_{-2}B_\\Lambda^{\\rm trans}\n = \\left({1\\over 4M r_{+} k}\\right)^4\n {{\\cal C}\n \\over \\kappa_{-2}\\kappa_{-1}\\kappa_1}{}_{2}B_\\Lambda^{\\rm trans}, \n \\nonumber \\\\\n {}_{-2}C_\\Lambda^{\\rm trans} \n &= &{(2\\omega)^4\\over \\bar {\\cal C}}\n {}_{2}C_\\Lambda^{\\rm trans} , \\qquad\n {}_{-2}B_\\Lambda^{\\rm ref} \n ={(2\\omega)^4 \\over \\bar {\\cal C}} {}_{2}B_\\Lambda^{\\rm ref}.\n \\label{relcoeff}\n\\end{eqnarray}\nUsing the above relations, the coefficients \n${\\cal N}_s$ are rewritten as \n\\begin{equation}\n {\\cal N}^{\\rm out}_s={1\\over 2i\\omega^3}|N_s^{\\rm out}|^2 ,\n\\qquad\n {\\cal N}^{\\rm down}_s={1\\over 2i\\omega^2 k}|N_s^{\\rm down}|^2 ,\n\\end{equation}\nwith\n\\begin{eqnarray}\n|N_s^{{\\rm out}}|^2 \n&\\equiv&\n{2^{3s-2}\\omega^{2s+2} \\over |{\\cal C}|^{s\/2-1}}\n{1\\over |{}_sB^{\\rm inc}_\\Lambda|^{2}}, \n\\label{eq:Namp-out} \\\\\n|N_s^{{\\rm down}}|^2 &\\equiv&\n{2^{-3s-2} k^{-2s+2} \n|{\\cal C}|^{s\/2+1}\\over |\\kappa_2|^{s\/2-1}|\\kappa_1|^{s}\n(2Mr_+)^{2s-1}}\n {|{}_sB^{\\rm trans}_\\Lambda|^{2}\\over \n |{}_sB^{\\rm inc}_\\Lambda|^{2}\\, \n |{}_sC^{\\rm trans}_\\Lambda|^{2}}.\n\\label{eq:Namp-down}\n\\end{eqnarray}\nHence, we finally obtain \n\\begin{eqnarray}\nh_{\\mu\\nu}^{\\rm rad} &= & \\mu\\int d\\omega \n \\sum_{\\ell m}{1\\over 2i\\omega^3}\\Bigl(\n N^{\\rm out}_s \n {}_s\\Pi^{\\rm out}_{\\Lambda,\\mu\\nu}(x) \n \\int {d\\tau\\over \\Sigma}\\, \\bar \\phi_{\\Lambda}^{out}(\\tau)\n\\cr &&\\qquad\\qquad\n + {\\omega\\over k}N^{\\rm down}_s \n {}_s\\Pi^{\\rm down}_{\\Lambda,\\mu\\nu}(x) \n \\int {d\\tau\\over\\Sigma}\n \\, \\bar \\phi_{\\Lambda}^{\\rm down}(\\tau)\n \\Bigr)+({\\rm c.c.}),\n\\label{hrad}\n\\end{eqnarray}\nwhere \n\\begin{equation}\n\\phi_\\Lambda^{{\\rm (out\/down)}}(\\tau) :=\nN_s^{{\\rm (out\/down)}} \\Sigma(z(\\tau))\n{}_s\\Pi_{\\Lambda,\\gamma\\delta}^{{\\rm (out\/down)}}(z(\\tau))\nu^\\gamma(\\tau) u^\\delta(\\tau), \n\\end{equation}\nwhose extension to a field is \n$ \\phi^{\\rm (out\/down)}_{\\Lambda}(x)$ \ndefined in Eq.~(\\ref{eq:def-phi-out}). \n\n\n\\section{Mano-Suzuki-Takasugi method} \\label{sec:MST}\nMano, Suzuki and Takasugi formulated a method of\nconstructing a homogeneous solution for the radial Teukolsky\nequation in two kinds of series by using\nthe Coulomb wave function and the hypergeometric functions\n\\cite{Mano:1996vt,Mano:1996gn,Sasaki:2003xr}.\nBy applying this method under slow motion approximation,\nwe can express homogeneous solutions in an analytic form.\nFurthermore, this method determines the asymptotic amplitudes \nof homogeneous solutions without numerical integration.\nThis allows us to compute the gravitational wave flux\nat infinity and on the horizon with a high accuracy\n\\cite{Fujita:2004rb}.\nWe summarize this method in this appendix. 
\n\n\\subsection{Outer solution of radial Teukolsky equation}\nAccording to \\citen{Mano:1996gn,Mano:1996vt,Sasaki:2003xr},\nwe can expand ${}_sR_C^{\\nu}$, a homogeneous solution\nof the radial Teukolsky equation (\\ref{eq:radial-Teuk}),\nin terms of the Coulomb wave functions as \n\\begin{eqnarray}\n{}_sR_C^{\\nu} \n&=&\n\\frac{\\Gamma(\\nu+1-s+i\\epsilon)}{\\Gamma(2\\nu+2)}\n\\hat{z}^{-s}(2\\hat{z})^{\\nu} e^{-i\\hat{z}}\n\\left(1-\\frac{\\epsilon\\kappa}{\\hat{z}}\\right)^{-s-i\\epsilon_+}\n\\cr && \\times\n\\sum_{n=-\\infty}^{\\infty} \\!\\! (-2i\\hat{z})^n\n\\frac{(\\nu+1+s-i\\epsilon)_n}{(2\\nu+2)_{2n}} a_n^{\\nu,s}\n\\cr && \\hspace*{1.5cm} \\times\n{}_1F_1(n+\\nu+1-s+i\\epsilon,2n+2\\nu+2 ; 2i\\hat{z}),\n\\label{eq:Coulomb-series}\n\\end{eqnarray}\nwhere $\\epsilon=2M\\omega$, $\\epsilon_+=\\epsilon+\\tau$,\n$\\tau=\\kappa^{-1}(\\epsilon-ma\/M)$,\n$\\kappa=\\sqrt{1-(a\/M)^2}$, $(x)_n:=\\Gamma(x+n)\/\\Gamma(x)$,\n$\\hat{z}:=\\omega(r-r_-)$, and $r_-=M-\\sqrt{M^2-a^2}$.\nThe coefficients $a_n^{\\nu,s}$ satisfies the following \nthree term recurrence relation, \n\\begin{equation}\n\\alpha_n^{\\nu} a_{n+1}^{\\nu,s}+\\beta_n^{\\nu} a_n^{\\nu,s}\n+\\gamma_n^{\\nu} a_{n-1}^{\\nu,s}=0,\n\\label{eq:recurrence}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\alpha_n^{\\nu} &=& \n\\frac{i\\epsilon \\kappa\n(n+\\nu+1+s+i\\epsilon)(n+\\nu+1+s-i\\epsilon)(n+\\nu+1+i\\tau)}\n{(n+\\nu+1)(2n+2\\nu+3)}, \\nonumber \\\\\n\\beta_n^{\\nu} &=& \n-\\lambda-s(s+1)+(n+\\nu)(n+\\nu+1)\n+\\epsilon^2+\\epsilon(\\epsilon-mq) \n +\\frac{\\epsilon (\\epsilon-mq)(s^2+\\epsilon^2)}{(n+\\nu)(n+\\nu+1)},\n\\nonumber \\\\\n\\gamma_n^{\\nu} &=&\n-\\frac{i\\epsilon \\kappa (n+\\nu-s+i\\epsilon)\n(n+\\nu-s-i\\epsilon)(n+\\nu-i\\tau)}\n{(n+\\nu)(2n+2\\nu-1)},\n\\label{eq:coeff-Cwave-fnc}\n\\end{eqnarray}\nand $q=a\/M$. 
The renormalized angular momentum $\\nu$ is determined\nby the conditions \n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\nn\\frac{a_n^{\\nu,s}}{a_{n-1}^{\\nu,s}} = \\frac{i\\epsilon\\kappa}{2}, \n\\qquad\n\\lim_{n\\rightarrow -\\infty}\nn\\frac{a_n^{\\nu,s}}{a_{n+1}^{\\nu,s}} = -\\frac{i\\epsilon\\kappa}{2}.\n\\end{equation}\nUnder this condition, the series of Coulomb wave functions\n(\\ref{eq:Coulomb-series}) converges for any $r>r_+$.\n\nFrom the equations in (\\ref{eq:coeff-Cwave-fnc}),\nwe can show that $\\alpha_{-n}^{-\\nu-1}=\\gamma_n^\\nu$ and\n$\\beta_{-n}^{-\\nu-1}=\\beta_n^\\nu$.\nBy using these relations, we can find that\n$a_n^{-\\nu-1,s}=a_{-n}^{\\nu,s}$ and\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\nn\\frac{a_n^{-\\nu-1,s}}{a_{n-1}^{-\\nu-1,s}} = \\frac{i\\epsilon\\kappa}{2}, \n\\qquad\n\\lim_{n\\rightarrow -\\infty}\nn\\frac{a_n^{-\\nu-1,s}}{a_{n+1}^{-\\nu-1,s}} = -\\frac{i\\epsilon\\kappa}{2}.\n\\end{equation}\nThis fact shows that ${}_sR_C^{-\\nu-1}$ is also a solution of the\nradial Teukolsky equation, which converges within the region $r>r_+$.\n\n\n\\subsection{In-going and up-going solutions}\nThe in-going solution of the radial Teukolsky equation is given\nin terms of the Coulomb type solutions (\\ref{eq:Coulomb-series}) \nas \n\\begin{equation}\n_sR_\\Lambda^{{\\rm in}} =\nA_s e^{i\\epsilon\\kappa}(\nK_{s,\\nu}~_sR_{C}^{\\nu} + K_{s,-\\nu-1}~_sR_{C}^{-\\nu-1}),\n\\label{eq:Rin}\n\\end{equation}\nwhere\n\\begin{eqnarray}\nA_2 &=& \\bar{{\\cal C}}\\left(\\frac{\\omega}{\\epsilon\\kappa}\\right)^4\n\\frac{\\Gamma(3-2i\\epsilon_+)}{\\Gamma(-1-2i\\epsilon_+)}\n\\left| \\frac{\\Gamma(\\nu-1+i\\epsilon)}\n{\\Gamma(\\nu+3+i\\epsilon)}\\right|^2, \\quad A_{-2} = 1, \\\\\nK_{s,\\nu} &=&\n\\frac{(2\\epsilon\\kappa)^{s-\\nu-r}2^{-s} i^r}\n{(\\nu+1+i\\tau)_r(\\nu+1+s+i\\epsilon)_r}\n\\cr && \\times\n\\frac{\\Gamma(1-s-2i\\epsilon_+)\\Gamma(n+2\\nu+2)\\Gamma(n+2\\nu+1)}\n{\\Gamma(r+\\nu+1-s+i\\epsilon)\\Gamma(\\nu+1-s-i\\epsilon)\n\\Gamma(\\nu+1-i\\tau)} \\cr\n&& \\times \\left[\\sum_{n=r}^{\\infty} (-1)^n\n\\frac{(r+2\\nu+1)_n(\\nu+1+s+i\\epsilon)_n(\\nu+1+i\\tau)_n}\n{(n-r)!~(\\nu+1-s-i\\epsilon)_n(\\nu+1-i\\tau)_n}a_n^{\\nu,s}\\right] \\cr\n&& \\times \\left[\\sum_{n=-\\infty}^{r}\n\\frac{(-1)^n}{(r-n)!~(r+2\\nu+2)_n}\n\\frac{(\\nu+1+s-i\\epsilon)_n}{(\\nu+1-s+i\\epsilon)_n}\na_n^{\\nu,s}\\right]^{-1}.\n\\end{eqnarray}\nHere $r$ is an arbitrary integer and $K_{s,\\nu}$ is independent\nof the choice of $r$.\n\nNext, we consider the up-going solution.\n$_sR_C^{\\nu}$ can be divided into two parts as\n\\begin{equation}\n{}_sR_C^{\\nu} = {}_sR_+^{\\nu} + {}_sR_-^{\\nu},\n\\end{equation}\nwhere\n\\begin{eqnarray}\n{}_sR_+^{\\nu} &=&\ne^{-\\pi\\epsilon}e^{i\\pi(\\nu+1-s)}e^{-i\\hat{z}}\n(2\\hat{z})^{\\nu}\\hat{z}^{-s}\n\\left(1-\\frac{\\epsilon\\kappa}{\\hat{z}}\\right)^{-s-i\\epsilon_+}\n\\frac{\\Gamma(\\nu+1-s+i\\epsilon)}{\\Gamma(\\nu+1+s-i\\epsilon)} \\cr\n&& \\times\n\\sum_{n=-\\infty}^{\\infty}(2i\\hat{z})^n a_n^{\\nu,s}\n\\Psi(n+\\nu+1-s+i\\epsilon,2n+2\\nu+2; 2i\\hat{z}), \\\\\n{}_sR_-^{\\nu} &=&\ne^{-\\pi\\epsilon}e^{-i\\pi(\\nu+1+s)}e^{i\\hat{z}}\n(2\\hat{z})^{\\nu}\\hat{z}^{-s}\n\\left(1-\\frac{\\epsilon\\kappa}{\\hat{z}}\\right)^{-s-i\\epsilon_+}\n\\cr && \\times\n\\sum_{n=-\\infty}^{\\infty} \\!\\! (2i\\hat{z})^n\n\\frac{(\\nu+1+s-i\\epsilon)_n}{(\\nu+1-s+i\\epsilon)_n}a_n^{\\nu,s}\n\\cr && \\hspace*{1.5cm} \\times\n\\Psi(n+\\nu+1+s-i\\epsilon,2n+2\\nu+2; -2i\\hat{z}), \n\\end{eqnarray}\nand $\\Psi(a,c;x)$ \nis the irregular confluent hypergeometric function. 
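As an implementation note, both kinds of confluent hypergeometric functions appearing in these series can be evaluated to arbitrary precision with standard libraries; for instance, with the (externally chosen) mpmath package and purely illustrative arguments,
\begin{verbatim}
from mpmath import mp, hyp1f1, hyperu

mp.dps = 30                        # working precision in digits
a, c, z = 1.5 + 0.3j, 4.0, 2.0j    # illustrative arguments only
regular   = hyp1f1(a, c, z)        # 1F1(a, c; z), as in Eq. (Coulomb-series)
irregular = hyperu(a, c, z)        # Psi(a, c; z), as in sR_+ and sR_-
\end{verbatim}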
\nFrom the asymptotic form of $\\Psi(a,c;x)$, \n\\begin{equation}\n\\Psi(a,c; x) \\rightarrow x^{-a}, \\qquad\n(|x|\\rightarrow\\infty),\n\\end{equation}\nthe asymptotic forms of $_sR_+^{\\nu}$ and $_sR_-^{\\nu}$ \nbecome \n\\begin{equation}\n_sR_+^{\\nu} = ~_sA_+^{\\nu}z^{-1}e^{-i(z+\\epsilon\\ln z)},\n\\qquad\n_sR_-^{\\nu} = ~_sA_-^{\\nu}z^{-1-2s}e^{i(z+\\epsilon\\ln z)},\n\\end{equation}\nwhere\n\\begin{eqnarray}\n{}_sA_+^{\\nu} &=& \ne^{-\\pi\\epsilon\/2}e^{i\\pi(\\nu+1-s)\/2}2^{s-1-i\\epsilon}\n\\frac{\\Gamma(\\nu+1-s+i\\epsilon)}{\\Gamma(\\nu+1+s-i\\epsilon)}\n\\sum_{n=-\\infty}^{\\infty} \\!\\! a_n^{\\nu,s}, \\\\\n{}_sA_-^{\\nu} &=&\ne^{-\\pi\\epsilon\/2}e^{-i\\pi(\\nu+1+s)\/2}2^{-s-1+i\\epsilon}\n\\sum_{n=-\\infty}^{\\infty} \\!\\! (-1)^n\n\\frac{(\\nu+1+s-i\\epsilon)_n}{(\\nu+1-s+i\\epsilon)_n}\na_n^{\\nu,s}.\n\\end{eqnarray}\nThis shows that $_sR_-^{\\nu}$ ($_sR_+^{\\nu}$) satisfies the\nup-going (down-coming) boundary condition at infinity.\nSo we can take the up-going solution as\n\\begin{equation}\n{}_sR_\\Lambda^{{\\rm up}} = B_s {}_sR_-^{\\nu},\n\\label{eq:Rup}\n\\end{equation}\nwhere $B_2=\\bar{C}\\omega^{2s}$ and $B_{-2}=1$.\nTaking the limit $r^*\\to\\pm\\infty$ \nin Eqs.~(\\ref{eq:Rin}) and (\\ref{eq:Rup}) \nby means of the asymptotic form of $r^*$, \n\\begin{eqnarray}\n\\omega r^* &\\rightarrow&\n\\hat{z}+\\epsilon\\ln\\hat{z}-\\epsilon\\ln\\epsilon \\quad\n(r\\rightarrow\\infty), \\\\\nkr^* &\\rightarrow&\n\\epsilon_+\\ln(-x) + \\kappa\\epsilon_+\n+\\frac{2\\kappa\\epsilon_+}{1+\\kappa}\\ln\\kappa \\quad\n(r\\rightarrow r_+),\n\\end{eqnarray}\nwe find that the coefficients which appear in the asymptotic\nforms of Eqs.~(\\ref{eq:asymptotic-in}) and (\\ref{eq:asymptotic-up})\nare given by\n\\begin{eqnarray}\n{}_sB_\\Lambda^{{\\rm inc}} &=&\n\\frac{A_s e^{i\\epsilon\\kappa}}{\\omega}\n\\left[ K_{s,\\nu} - ie^{-i\\pi\\nu}\n\\frac{\\sin\\pi(\\nu-s+i\\epsilon)}{\\sin\\pi(\\nu+s-i\\epsilon)}\nK_{s,-\\nu-1} \\right] {}_sA_+^{\\nu}, \\\\\n{}_sB_\\Lambda^{{\\rm trans}} &=&\nA_s\\left(\\frac{\\epsilon\\kappa}{\\omega}\\right)^{2s}\n\\sum_{n=-\\infty}^{\\infty} a_n^{\\nu,s}, \\\\\n{}_sC_\\Lambda^{{\\rm trans}} &=&\n\\omega^{-1-2s}e^{i\\epsilon\\ln\\epsilon} {}_sA_-^{\\nu}. \n\\end{eqnarray}\n\n\n\\section{Spheroidal harmonics} \\label{sec:spheroidal}\nHere, we review the formalism to represent the spin-weighted spheroidal \nharmonics in a series of Jacobi polynomials based on\nRef.~\\citen{Fackerell}, which was slightly improved in \nRef.~\\citen{Fujita:2004rb}.\n\nWe first transform the angular part of the Teukolsky equation \n(\\ref{eq:spheroid-eq}) as \n\\begin{eqnarray}\n\\bigg[(1-x^{2})\\frac{d^2}{dx^{2}}-2x\\frac{d}{dx}+{\\xi}^{2} x^{2}\n\\hspace*{4cm} && \\cr\n-\\frac{m^{2}+s^{2}+2msx}{1-x^{2}}-2s\\xi x+{}_sE_{\\ell m}(\\xi)\n\\bigg]\n{}_sS_{\\ell m}^{\\xi}(x) &=& 0 \\,,\n\\label{eq:Sphe diff}\n\\end{eqnarray}\nwhere $\\xi = a\\,\\omega, x = \\cos\\theta$\nand ${}_sE_{\\ell m}(\\xi)=\\lambda+s(s+1)-\\xi^{2}+2m\\xi$.\nThe angular function $_{s}S_{\\ell m}^{\\xi}(x)$ is called the\nspin-weighted spheroidal harmonics. Equation (\\ref{eq:Sphe diff}) \nis a Sturm-Liouville type eigenvalue equation with\nregular boundary conditions at $x=\\pm 1$. \nSince there are a countable number of eigenvalues \nfor fixed parameters $s$, $m$ and $\\xi$, \nwe introduced an index $\\ell$ starting with max($|m|,|s|$) \nas such a label \nthat sorts the eigenvalues ${}_sE_{\\ell m}(\\xi)$ in an ascending \norder. 
\nWhen $\\xi=0$, $_{s}S_{\\ell m}^{\\xi}(x)$ \nis reduced to the spin-weighted spherical \nharmonics, and the eigenvalue ${}_sE_{\\ell m}(\\xi)$ becomes $\\ell(\\ell+1)$.\nWe normalize the amplitude of $_{s}S_{\\ell m}^{\\xi}(x)$ as \n\\begin{eqnarray}\n\\label{eq:normalSp}\n\\int _{0}^{\\pi}\\left |{}_{s}S_{\\ell m}^{\\xi}\\right |^2\\sin \\theta d\\theta=1 \\,.\n\\end{eqnarray}\n\nThe differential equation (\\ref{eq:Sphe diff}) has singularities at\n$x=\\pm 1$ and at $x=\\infty$. We transform the angular function as \n\\begin{eqnarray}\n_{s}S_{\\ell m}^{\\xi}(x) \\equiv\ne^{\\xi x}\\left(\\frac{1-x}{2}\\right)^{\\frac{\\alpha}{2}}\n\\left(\\frac{1+x}{2}\\right)^{\\frac{\\beta}{2}}\\, _{s}U_{\\ell m}(x) \\,,\n\\label{eq:SpheU}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\label{eq:SpheV}\n_{s}S_{\\ell m}^{\\xi}(x) \\equiv\ne^{-\\xi x}\\left(\\frac{1-x}{2}\\right)^{\\frac{\\alpha}{2}}\n\\left(\\frac{1+x}{2}\\right)^{\\frac{\\beta}{2}}\\, _{s}V_{\\ell m}(x) \\,,\n\\label{Vseries}\n\\end{eqnarray}\nwhere $\\alpha = |m+s|$ and $\\beta = |m-s|$. Then, \nEq.~(\\ref{eq:Sphe diff}) becomes\n\\begin{eqnarray}\n\\label{eq:proto-Jacobi}\n&&(1-x^{2})\\,_{s}U_{\\ell m}''(x)+\\left[\\beta-\\alpha-(2+\\alpha+\\beta)x\\right]\\,_{s}U_{\\ell m}'(x)\n\\nonumber\\\\\n&&\\quad\n+\\left[\\,_{s}E_{\\ell m}(\\xi)-\\frac{\\alpha+\\beta}{2}\\left(\\frac{\\alpha+\\beta}{2}+\n1\\right)\\right]\\,_{s}U_{\\ell m}(x)\\nonumber \\\\\n&&\\quad\n=\\xi\\left[-2(1-x^{2})\\,_{s}U_{\\ell m}'(x)+(\\alpha+\\beta+2s+2)x\\,_{s}U_{\\ell m}(x)\n\\right.\n\\nonumber\\\\\n&&\\quad\\quad\\left. \n-(\\xi+\\beta-\\alpha)\\,_{s}U_{\\ell m}(x)\\right] \\,,\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\label{eq:proto-JacobiV}\n&&(1-x^{2})\\,_{s}V_{\\ell m}''(x)+\\left[\\beta-\\alpha-(2+\\alpha+\\beta)x\\right]\\,\n_{s}V_{\\ell m}'(x)\n\\nonumber\\\\\n&&\\quad\n+\\left[\\,_{s}E_{\\ell m}(\\xi)-\\frac{\\alpha+\\beta}{2}\\left(\\frac{\\alpha+\\beta}{2}+\n1\\right)\\right]\\,_{s}V_{\\ell m}(x)\\nonumber \\\\\n&=&\\xi\\left[2(1-x^{2})\\,_{s}V_{\\ell m}'(x)-(\\alpha+\\beta-2s+2)x\\,_{s}V_{\\ell m}(x)\n\\right.\n\\nonumber\\\\\n&&\\quad\n\\left.\n-(\\xi-\\beta+\\alpha)\\,_{s}V_{\\ell m}(x)\\right] \\,.\n\\end{eqnarray}\nFrom Eqs.~(\\ref{eq:SpheU}) and (\\ref{eq:SpheV}), we find\n\\begin{eqnarray}\n\\label{eq:sphUtoV}\n\\,_{s}V_{\\ell m}(x)={\\rm exp}(2\\xi x)\\,_{s}U_{\\ell m}(x) \\,.\n\\end{eqnarray}\n\nWhen $\\xi=0$, the right-hand sides of Eqs.~(\\ref{eq:proto-Jacobi})\nand (\\ref{eq:proto-JacobiV}) are zero, and they reduce to\nthe differential equation satisfied by the Jacobi polynomials, \n\\begin{eqnarray}\n&&(1-x^{2})\\,P_{n}^{(\\alpha,\\beta)}{}^{''}(x)\n+\\left[\\beta-\\alpha-(2+\\alpha+\\beta)x\\right]\\,P_{n}^{(\\alpha,\\beta)}{}^{'}(x)\n\\nonumber\\\\\n&&\n\\quad +n(n+\\alpha+\\beta+1)\\,P_{n}^{(\\alpha,\\beta)}(x)=0.\n\\label{eq:Jacobi}\n\\end{eqnarray}\nIn this limit, the eigenvalue $_{s}E_{\\ell m}(\\xi)$ in the equation \n(\\ref{eq:proto-Jacobi}) becomes $\\ell(\\ell+1)$, \nwhere $n=\\ell-(\\alpha+\\beta)\/2=\\ell-{\\rm max}(\\mid m\\mid ,\\mid s\\mid )$.\nHere, the Jacobi polynomials are defined by the Rodrigue's formula by\n\\begin{eqnarray}\nP_{n}^{(\\alpha,\\beta)}(x) :=\n\\frac{(-1)^{n}}{2^{n}\\,n!}(1-x)^{-\\alpha}(1+x)^\n{-\\beta}\\left(\\frac{d}{dx}\\right)^{n}\\left[(1-x)^{\\alpha+n}(1+x)^{\\beta+n}\n\\right].\n\\end{eqnarray}\n\nNow, we expand $_{s}U_{\\ell m}(x)$ and $_{s}V_{\\ell m}(x)$ in a series of\nJacobi polynomials: \n\\begin{eqnarray}\n\\label{eq:Jacobi-series}\n_{s}U_{\\ell 
m}(x)&=&\\sum_{n=0}^{\\infty}\\,_{s}A_{\\ell m}^{(n)}(\\xi)\\,P_{n}^{(\\alpha,\\beta)}(x) \\,,\n\\\\ \n\\label{eq:Jacobi-series2}\n_{s}V_{\\ell m}(x)&=&\\sum_{n=0}^{\\infty}\\,_{s}B_{\\ell m}^{(n)} \\,P_{n}^{(\\alpha,\\beta)}(x) \\,.\n\\end{eqnarray}\nThe expansion coefficients $_{s}A_{\\ell m}^{(n)}(\\xi)$ \nand $_{s}B_{\\ell m}^{(n)}(\\xi)$ satisfy the recurrence \nrelations\n\\begin{eqnarray}\n\\alpha^{(0)}\\,_{s}A_{\\ell m}^{(1)}(\\xi)\n+\\beta^{(0)}\\,_{s}A_{\\ell m}^{(0)}(\\xi)&=&0, \n\\label{eq:3termElm0}\\\\\n\\alpha^{(n)}\\,_{s}A_{\\ell m}^{(n+1)}(\\xi)\n+\\beta^{(n)}\\,_{s}A_{\\ell m}^{(n)}(\\xi)\n+\\gamma^{(n)}\\,_{s}A_{\\ell m}^{(n-1)}(\\xi)&=&0, \\, (n\\ge 1) \\,,\n\\label{eq:3termElm}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\alpha^{(n)}&:=&\n\\frac{4\\xi(n+\\alpha+1)(n+\\beta+1)(n+(\\alpha+\\beta)\/2+1-s)}\n{(2n+\\alpha+\\beta+2)(2n+\\alpha+\\beta+3)},\\nonumber \\\\\n\\beta^{(n)}&:=&\n\\,_{s}E_{\\ell m}(\\xi)+\\xi ^2-\\left(n+\\frac{\\alpha+\\beta}{2}\\right)\n\\left(n+\\frac{\\alpha+\\beta}{2}+1\\right)\\nonumber \\\\\n&&+\\frac{2\\xi s(\\alpha-\\beta)(\\alpha+\\beta)}\n{(2n+\\alpha+\\beta)(2n+\\alpha+\\beta+2)},\\nonumber \\\\\n\\gamma^{(n)}&:=&\n-\\frac{4\\xi n(n+\\alpha+\\beta)(n+(\\alpha+\\beta)\/2+s)}\n{(2n+\\alpha+\\beta-1)(2n+\\alpha+\\beta)} \\,,\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n\\tilde{\\alpha}^{(0)}\\,_{s}B_{\\ell m}^{(1)}(\\xi)\n+\\tilde{\\beta}^{(0)}\\,_{s}B_{\\ell m}^{(0)}(\\xi)\n&=&0, \\nonumber \\\\\n\\tilde{\\alpha}^{(n)}\\,_{s}B_{\\ell m}^{(n+1)}(\\xi)\n+\\tilde{\\beta}^{(n)}\\,_{s}B_{\\ell m}^{(n)}(\\xi)\n+\\tilde{\\gamma}^{(n)}\\,_{s}B_{\\ell m}^{(n-1)}(\\xi)&=&0, \\quad (n\\ge 1) \\,,\n\\label{eq:3termElm2}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n\\tilde{\\alpha}^{(n)}&:=&\n-\\frac{4\\xi(n+\\alpha+1)(n+\\beta+1)(n+(\\alpha+\\beta)\/2+1+s)}\n{(2n+\\alpha+\\beta+2)(2n+\\alpha+\\beta+3)},\\nonumber \\\\\n\\tilde{\\beta}^{(n)}&:=&\n\\,_{s}E_{\\ell m}(\\xi)+\\xi ^2-\\left(n+\\frac{\\alpha+\\beta}{2}\\right)\n\\left(n+\\frac{\\alpha+\\beta}{2}+1\\right)\\nonumber \\\\\n&&+\\frac{2\\xi s(\\alpha-\\beta)(\\alpha+\\beta)}{(2n+\\alpha+\\beta)(2n+\\alpha+\\beta+2)},\\nonumber \\\\\n\\tilde{\\gamma}^{(n)}&:=&\n\\frac{4\\xi n(n+\\alpha+\\beta)(n+(\\alpha+\\beta)\/2-s)}\n{(2n+\\alpha+\\beta-1)(2n+\\alpha+\\beta)} \\,.\n\\end{eqnarray}\n\nThe eigenvalues $\\,{_s}E_{\\ell m}(\\xi)$ are determined \nin a way similar to \nthe renormalized angular momentum $\\nu$. \nThe three-term recurrence relation Eq.~(\\ref{eq:3termElm}) \nhas two independent solutions, which respectively behave for large $n$ as\n\\begin{eqnarray}\n&&\nA_{(1)}^{(n)}\\sim \\frac{({\\rm const.})\\,(-\\xi)^n}\n{\\Gamma(n+(\\alpha+\\beta+3)\/2-s)} \\,, \\label{eq:AlmMin}\n\\\\ &&\nA_{(2)}^{(n)}\\sim ({\\rm const.})\\,\\xi^n \\Gamma(n+(\\alpha+\\beta+1)\/2+s)\\,.\n\\label{eq:AlmDom}\n\\end{eqnarray}\nThe first one, $A_{(1)}^{(n)}$, is the minimal solution, and the \nsecond one, $A_{(2)}^{(n)}$, is a dominant solution, since\n$\\displaystyle\\lim_{n\\rightarrow \\infty}A_{(1)}^{(n)}\/A_{(2)}^{(n)}=0$. \nIn the case of the dominant solution\nthese coefficients $A_{(2)}^{(n)}$ increase with $n$, \nand the series~(\\ref{eq:Jacobi-series}) diverges for \nall values of $x$. \nIn the case of the minimal solution \nthis series converges. \nThus, we have to choose $A_{(1)}^{(n)}$ in the \nseries expansion~(\\ref{eq:Jacobi-series}). \nFor a general ${_s}E_{\\ell m}(\\xi)$, \n$A_{(1)}^{n}$ \ndoes not satsify the relation (\\ref{eq:3termElm0}). 
Hence, \nthe requirement to satisfy this condition determines \nthe discrete eigenvalues ${_s}E_{\ell m}(\xi)$.\n\n{}As a practical way to obtain $A_{(1)}^{(n)}$ \nas well as ${_s}E_{\ell m}(\xi)$, \nwe introduce \n\begin{equation}\nR_n\equiv {A_{(1)}^{(n)}\over A_{(1)}^{(n-1)}},\qquad\nL_n\equiv {A_{(1)}^{(n)}\over A_{(1)}^{(n+1)}}.\n\end{equation}\nThe ratio $R_n$ can be expressed as a continued fraction,\n\begin{equation}\nR_n =-{\gamma^{(n)}\over {\beta^{(n)}+\alpha^{(n)} R_{n+1}}}\n=-{\gamma^{(n)}\over \beta^{(n)}-}\n{\alpha^{(n)}\gamma^{(n+1)}\over \beta^{(n+1)}-}\n{\alpha^{(n+1)}\gamma^{(n+2)}\over \beta^{(n+2)}-}\cdots . \n\label{eq:RncontElm}\n\end{equation}\nWe can also express $L_n$ in a similar way as\n\begin{eqnarray}\nL_n&=&-{\alpha^{(n)}\over {\beta^{(n)}+\gamma^{(n)} L_{n-1}}}\n\cr\n&=&-{\alpha^{(n)}\over \beta^{(n)}-}\,\n{\alpha^{(n-1)}\gamma^{(n)}\over \beta^{(n-1)}-}\,\n{\alpha^{(n-2)}\gamma^{(n-1)}\over \beta^{(n-2)}-}\cdots\n{\alpha^{(1)}\gamma^{(2)}\over \beta^{(1)}-}\,\n{\alpha^{(0)}\gamma^{(1)}\over \beta^{(0)}}.\n\label{eq:LncontElm}\n\end{eqnarray}\nThese expressions for $R_n$ and $L_n$ are valid \nif the continued fraction~(\ref{eq:RncontElm}) converges. \n(Notice that the last step of Eq.~(\ref{eq:LncontElm})\nis not a continued fraction, but just a rational function.)\nBy using the properties of the three-term recurrence relations, \nit is proved that the continued fraction~(\ref{eq:RncontElm})\nconverges as long as the eigenvalue $_{s}E_{\ell m}(\xi)$\nis finite. \n\nDividing Eq.~(\ref{eq:3termElm}) by the expansion coefficients \n$_{s}A_{\ell m}^{(n)}$, we obtain \n\begin{eqnarray}\n\beta^{(n)}+\alpha^{(n)}R_{n+1}+\gamma^{(n)}L_{n-1}=0 \,.\n\label{eq:determine_elm}\n\end{eqnarray}\nWe replace $R_{n+1}$ and $L_{n-1}$ by Eqs.~(\ref{eq:RncontElm})\nand (\ref{eq:LncontElm}). \nThen we can determine the eigenvalue $\,_{s}E_{\ell m}$ as a root of \nEq.~(\ref{eq:determine_elm}). There are many roots, and the above \nequations for all values of $n$ are equivalent. \nIn practice, however, \nwe truncate the continued fractions at finite lengths. \nIn this case the most efficient way is to choose the equation with \n$n=n_\ell:=\ell-(\alpha+\beta)\/2$. With this choice all terms in \nEq.~(\ref{eq:determine_elm}) become $O(\xi^2)$, and \nthe length of the continued fractions that we must keep \nto achieve a given accuracy goal is the shortest.\n\nAs was done in Fujita and Tagoshi's paper, in general, we can \nadopt {\rm Brent's algorithm}\cite{Recipes} \nin order to determine $_{s}E_{\ell m}(\xi)$. \nHowever, when $|\xi|$ is not large, \nwe can derive an analytic expression\nfor $_{s}E_{\ell m}(\xi)$. The result is \n\begin{eqnarray}\n_{s}E_{\ell m}(\xi) \n = \ell (\ell +1) -\frac{2 s^2 m}{\ell (\ell +1)} \xi \n\t+ \left[H(\ell+1)-H(\ell)-1\right]\xi^2 +O(\xi^3),\n\end{eqnarray}\nwith\n\begin{eqnarray}\nH(\ell)=\frac{2(\ell^2-m^2)(\ell^2-s^2)^2}{(2\ell-1)\ell^3(2\ell+1)}.\n\end{eqnarray}\n\nAfter we obtain the eigenvalues $_{s}E_{\ell m}(\xi)$, \nwe can easily determine all the coefficients. \nThe coefficient with $n=n_\ell$\nis usually the largest term. The ratio of the other terms \nto the dominant term, i.e.\n$A_{(1)}^{(n)}\/A_{(1)}^{(n_{\ell})}$, \ncan be determined \nin the most efficient way with a minimal error due to truncation \nusing Eqs.~(\ref{eq:RncontElm}) and (\ref{eq:LncontElm})\nfor $n>n_\ell$ and $0\le n<n_\ell$, respectively.
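For concreteness, the procedure just described can be sketched in a few lines of Python: the truncated continued fractions (\ref{eq:RncontElm}) and (\ref{eq:LncontElm}) are evaluated for a trial eigenvalue, and the root of Eq.~(\ref{eq:determine_elm}) is located with Brent's method, bracketing around the small-$\xi$ expansion above. The truncation length and the bracket width below are heuristic choices, adequate only for small, real $\xi$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def spheroidal_eigenvalue(s, l, m, xi, nmax=50):
    al, be = abs(m + s), abs(m - s)
    p = al + be
    n_l = l - max(abs(m), abs(s))

    def a_n(n):   # alpha^{(n)}
        return 4*xi*(n+al+1)*(n+be+1)*(n + p/2 + 1 - s)/((2*n+p+2)*(2*n+p+3))

    def g_n(n):   # gamma^{(n)}
        return -4*xi*n*(n+al+be)*(n + p/2 + s)/((2*n+p-1)*(2*n+p))

    def b_n(n, E):  # beta^{(n)}; the correction term vanishes when alpha = beta
        corr = 2*xi*s*(al-be)*(al+be)/((2*n+p)*(2*n+p+2)) if (2*n+p) else 0.0
        return E + xi**2 - (n + p/2)*(n + p/2 + 1) + corr

    def F(E):
        R = 0.0                                   # tail of Eq. (RncontElm) set to zero
        for n in range(n_l + nmax, n_l, -1):
            R = -g_n(n)/(b_n(n, E) + a_n(n)*R)    # downward recursion gives R_{n_l+1}
        if n_l >= 1:                              # upward recursion gives L_{n_l-1}
            L = -a_n(0)/b_n(0, E)
            for n in range(1, n_l):
                L = -a_n(n)/(b_n(n, E) + g_n(n)*L)
            Lterm = g_n(n_l)*L
        else:
            Lterm = 0.0                           # Eq. (3termElm0): no L term for n_l = 0
        return b_n(n_l, E) + a_n(n_l)*R + Lterm

    H = lambda q_: 2*(q_**2 - m**2)*(q_**2 - s**2)**2/((2*q_-1)*q_**3*(2*q_+1))
    E0 = l*(l+1) - 2*s**2*m*xi/(l*(l+1)) + (H(l+1) - H(l) - 1)*xi**2
    return brentq(F, E0 - 1.0, E0 + 1.0)

# e.g. spheroidal_eigenvalue(-2, 3, 2, 0.1) for a small, real xi
\end{verbatim}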
\n\nThe coefficient of the leading term \n$A_{(1)}^{(n_{\\ell})}$ $\\big({}_{s}A_{\\ell m}^{(n_{\\ell})}\\big)$ \nis determined by the normalization condition. \nSince (\\ref{Vseries}) represents the same eigen function, \nthe series~(\\ref{eq:Jacobi-series2}) should converge \nfor the same eigen values $_{s}E_{\\ell m}(\\xi)$,\nconstituting the minimal solution of the recurrence relation\nEq.~(\\ref{eq:3termElm2}).\nAs in the case of $\\{A_{(1)}^{(n)}\\}$, \nwe have \n\\begin{eqnarray}\n\\frac{B_{(1)}^{(n)}}{B_{(1)}^{(n-1)}}&=&\n-{\\tilde{\\gamma}^{(n)}\\over \\tilde{\\beta}^{(n)}-}\\,\n{\\tilde{\\alpha}^{(n)}\\tilde{\\gamma}^{(n+1)}\\over \\tilde{\\beta}^{(n+1)}-}\\,\n{\\tilde{\\alpha}^{(n+1)}\\tilde{\\gamma}^{(n+2)}\\over \\tilde{\\beta}^{(n+2)}-}\\cdots , \n\\label{eq:RncontElm2}\n\\\\\n\\frac{B_{(1)}^{(n)}}{B_{(1)}^{(n+1)}}&=&\n-{\\tilde{\\alpha}^{(n)}\\over \\tilde{\\beta}^{(n)}-}\\,\n{\\tilde{\\alpha}^{(n-1)}\\tilde{\\gamma}^{(n)}\\over \\tilde{\\beta}^{(n-1)}-}\\,\n{\\tilde{\\alpha}^{(n-2)}\\tilde{\\gamma}^{(n-1)}\\over \\tilde{\\beta}^{(n-2)}-}\\cdots\n{\\tilde{\\alpha}^{(1)}\\tilde{\\gamma}^{(2)}\\over \\tilde{\\beta}^{(1)}-}\\,\n{\\tilde{\\alpha}^{(0)}\\tilde{\\gamma}^{(1)}\\over \\tilde{\\beta}^{(0)}}.\n\\label{eq:LncontElm2}\n\\end{eqnarray}\nFrom these equations, we can determine the ratios of \nall coefficients, $B_{(1)}^{(n)}\/B_{(1)}^{(n_{\\ell})}$. \n\nNow, we determine the values of the two coefficients\n$A_{(1)}^{(n_{\\ell})}$ and $B_{(1)}^{(n_{\\ell})}$ that \ndetermines the overall normalization.\nSince Eq.~(\\ref{eq:sphUtoV}) must hold for any value of $x$, \nwe can set $x=1$ in it to obtain\n\\begin{eqnarray}\n&& \\hspace*{-1cm}\n{}_{s}B_{\\ell m}^{(n_{\\ell})}(\\xi)\\sum_{n=0}^{\\infty}\n\\frac{\\,_{s}B_{\\ell m}^{(n)}(\\xi)}\n{\\,_{s}B_{\\ell m}^{(n_{\\ell})}(\\xi)}\n\\frac{\\Gamma(n+\\alpha+1)}{\\Gamma(n+1)\\,\\Gamma(\\alpha+1)}\n\\nonumber \\\\\n&& \\hspace*{5mm} =\n{\\rm exp}(2\\xi)\\,_{s}A_{\\ell m}^{(n_{\\ell})}(\\xi)\\sum_{n=0}^{\\infty}\n\\frac{\\,_{s}A_{\\ell m}^{(n)}(\\xi)}{\\,_{s}A_{\\ell m}^{(n_{\\ell})}(\\xi)}\n\\frac{\\Gamma(n+\\alpha+1)}{\\Gamma(n+1)\\,\\Gamma(\\alpha+1)}.\n\\label{eq:normalization1}\n\\end{eqnarray}\nOn the other hand, \nfrom the normalization condition~(\\ref{eq:normalSp}), we find\n\\begin{eqnarray}\n\\int_{-1}^{1} \\!\\! dx\n\\bigg(\\frac{1-x}{2}\\bigg)^{\\alpha}\n\\bigg(\\frac{1+x}{2}\\bigg)^{\\beta} \\!\n\\sum_{n_{1}=0}^{\\infty} \\! {}_{s}A_{\\ell m}^{(n_{1})}\nP_{n_{1}}^{(\\alpha,\\beta)}(x) \\!\n\\sum_{n_{2}=0}^{\\infty} \\! 
{}_{s}B_{\\ell m}^{(n_{2})}\nP_{n_{2}}^{(\\alpha,\\beta)}(x)=1.\n\\label{eq:AlmBlm}\n\\end{eqnarray}\nBecause the Jacobi polynomials are orthogonal, we have \n\\begin{eqnarray}\n&& \\hspace*{-1cm}\n\\int_{-1}^{1}{\\rm d}x\\left(\n\\frac{1-x}{2}\\right)^{\\alpha}\\left(\\frac{1+x}{2}\\right)^{\\beta}\nP_{n_{1}}^{(\\alpha,\\beta)}(x)P_{n_{2}}^{(\\alpha,\\beta)}(x)\n\\nonumber\\\\\n&&\\quad\n=\\frac{2\\, \\Gamma(n+\\alpha+1)\\Gamma(n+\\beta+1)\n\\delta_{n_{1},n_{2}}}{(2n+\\alpha+\\beta+1)\n\\Gamma(n+1)\\Gamma(n+\\alpha+\\beta+1)}.\n\\end{eqnarray}\nThen, Eq.~(\\ref{eq:AlmBlm}) reduces to \n\\begin{eqnarray}\n&&\\sum_{n=0}^{\\infty}\\left[\n\\frac{\\,_{s}A_{\\ell m}^{(n)}}{\\,_{s}A_{\\ell m}^{(n_{\\ell})}}\\right]\n\\left[\\frac{\\,_{s}B_{\\ell m}^{(n)}}{\\,_{s}B_{\\ell m}^{(n_{\\ell})}}\\right]\n\\frac{2\\, \\Gamma(n+\\alpha+1)\\Gamma(n+\\beta+1)}\n{(2n+\\alpha+\\beta+1)\\Gamma(n+1)\n\\Gamma(n+\\alpha+\\beta+1)}\n\\nonumber\\\\\n&&\\quad\n=\\frac{1}{\\,_{s}A_{\\ell m}^{(n_{\\ell})}\\,_{s}B_{\\ell m}^{(n_{\\ell})}}.\n\\label{eq:normalization2}\n\\end{eqnarray}\nCombining Eqs.~(\\ref{eq:normalization1}) and (\\ref{eq:normalization2}), \nwe can determine the squares of $\\,_{s}A_{\\ell m}^{(n_{\\ell})}$\nand $\\,_{s}B_{\\ell m}^{(n_{\\ell})}$.\nFinally, we fix the signatures of $\\,_{s}A_{\\ell m}^{(n_{\\ell})}$ and\n$\\,_{s}B_{\\ell m}^{(n_{\\ell})}$ so that \n$_{s}S_{\\ell m}^{\\xi}(x)$ reduces to \nthe spin-weighted spherical harmonics \nin the limit $\\xi\\rightarrow 0$. \n\n\n\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe ability to prepare and probe individual quantum systems in precisely controlled environments is a driving force in modern atomic, molecular and optical physics. Manipulating single atoms~\\cite{Meschede2006}, molecules~\\cite{Moerner2007} and ions~\\cite{Leibfried2003}, for example, is becoming a common practice. At the heart of these experiments are the powerful imaging techniques which have taken on great importance in diverse areas, such as chemical sensing and chemical reaction dynamics~\\cite{Betzig1993,*Xie1994}, probing superconducting materials~\\cite{Yazdani1997,*Pan2000}, and for quantum logic and quantum information processing~\\cite{Schrader2004,*Nelson2007,*Haffner2005}. More recently, new single atom and single site sensitive imaging techniques for optical lattices have opened the door to control and probe complex many-body quantum systems in strongly correlated regimes~\\cite{Gericke2008,*Bakr2009,*Weitenberg2011}.\n\nThe usual approach to detect atoms is to measure the fluorescence or absorption of light by driving a strong optical cycling-transition. Weak or open transitions present a difficulty since the maximum number of scattered photons per atom becomes greatly limited. In the case of long lived states of trapped ions, the technique of electron shelving has been used as an amplifying mechanism in order to directly observe quantum jumps~\\cite{Nagourney86}. Another approach involves the use of an optical cavity to enhance the interaction of the atoms with the light field~\\cite{Bochmann2010,*Gehr2010,*Brahms2011}. 
This makes it possible to reach single-atom sensitivity, but usually at the expense of greatly reduced spatial resolution.\n\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.88\\columnwidth]{figures\/figure1v7.eps}\n\\caption{{\\bf Scheme for imaging individual impurity atoms within a dense atomic gas.} Impurity atoms (crosses) are embedded within a dense two-dimensional atomic gas of background atoms. The background atoms interact with two light fields (coupling and probe) via a two-photon resonance with an excited state $|r\\rangle$. This coupling produces an EIT resonance on the ground-state probe transition. However, strong interactions with an impurity atom lead to a frequency shift $U$ of the resonance within a critical radius $R_c$. The change in absorption properties of many surrounding atoms makes it possible to map the impurity atom distribution to the absorption profile of a probe laser for analysis.} \\label{fig:scheme}\n\\end{figure}\n\nHere we propose a new method to image individual atoms embedded within a dense atomic gas. The concept exploits strong interactions of the atoms with highly polarizable Rydberg states of the surrounding gas. The induced level shifts can then be transferred to a strong optical transition and to many surrounding atoms within a critical radius, thereby providing two mechanisms which greatly enhance the effect of a single impurity on the light field. The Rydberg states could act as non-destructive probes for individual trapped ions, nearby surface charges, dipolar molecules, or other Rydberg atoms. In our approach, the interaction-induced shifts are spatially resolved via an electromagnetically-induced-transparency (EIT) resonance involving a weak probe and a strong coupling laser in a ladder configuration~\\cite{Fleischhauer2005}. Even though the Rydberg state is barely populated, the EIT resonance is extremely sensitive to its properties~\\cite{Mohapatra07,*Weatherill2008,*Pritchard2010,*Schempp2010,Tauschinsky2010}, thereby providing the means to obtain a strong absorption signal and great sensitivity combined with high spatial resolution for detecting individual atoms.\n\nWe exemplify our imaging scheme for the specific case of probing many-body states of strongly-interacting Rydberg atoms in a quasi-two-dimensional atomic gas (depicted in Fig.~\\ref{fig:scheme}). Rydberg atoms are of great interest because their typical interaction ranges are comparable to, or larger than, the typical interatomic separations in trapped quantum gases. Traditionally Rydberg atoms are field ionized and the resulting ions are subsequently detected, which provides rather limited spatial resolution. As a result, much of the work done so far, such as the scaling laws for excitation~\\cite{Low2009}, excitation statistics~\\cite{Liebisch2005,*Amthor2010,*Viteau2011} and light-matter interactions~\\cite{Mohapatra07,*Weatherill2008,*Pritchard2010,*Schempp2010}, has been restricted to the study of cloud averaged properties. M\\\"uller \\emph{et al.} proposed to use a single Rydberg atom to conditionally transfer an ensemble of atoms between two states~\\cite{Muller2009}. Our method exploits the strong Rydberg interactions with a background gas of atoms to realize non-destructive single-shot optical images of Rydberg atoms with high resolution and enhanced sensitivity. 
We anticipate this technique will complement the new optical lattice imaging techniques~\\cite{Gericke2008,Bakr2009,*Weitenberg2011}, but with the capability to directly image many-body systems of Rydberg atoms. We show in particular that this will provide immediate experimental access to spatial correlations in recently predicted crystalline states of highly excited Rydberg atoms~\\cite{Weimer2008,*Pohl2010,*Schachenmayer2010,*vanbijnen2011}.\n\nTo quantitatively describe the absorption of probe light by a background gas of atoms surrounding a Rydberg atom we follow an approach based on the optical Bloch equations~\\cite{Fleischhauer2005}. The Hamiltonian describing the atom-light coupling is\n\\begin{eqnarray}\n\\label{eq1}\nH_0 &=& \\frac{\\hbar}{2}\\big(\\Omega_p | e \\rangle \\langle g | + \\Omega_c | r \\rangle \\langle e | \\nonumber \\\\\n&&+\\Delta_p | e \\rangle \\langle e | + (\\Delta_p+ \\Delta_c) | r \\rangle \\langle r | + \\textrm{h.c.}\\big).\n\\end{eqnarray}\nFor resonant driving $\\Delta_p=\\Delta_c=0$ a dark-state is formed $|dark\\rangle\\approx\\Omega_c|g\\rangle-\\Omega_p|r\\rangle$, which no longer couples to the light field. Consequently, the complex susceptibility $\\chi$ of the probe transition vanishes and the atoms become transparent.\n\nThe presence of a nearby Rydberg atom, however, causes an additional energy shift $U=\\hbar C_6\/|d|^6$ for the state $|r\\rangle$, where $d$ is the distance to the Rydberg atom and the interaction coefficient $C_6$ reflects the sign and strength of interactions on the $|r\\rangle$ state.\nOne should also account for interactions between atoms in state $|r\\rangle$, but these can be neglected for $\\Omega_p\\ll \\Omega_c$ when the population in $|r\\rangle$ becomes small. We also include spontaneous decay from the states $|e\\rangle$ and $|r\\rangle$ with rates $\\Gamma_p$, and $\\Gamma_c$ respectively. From the master equation for the density matrix $\\rho$ we calculate the steady-state absorption and solve for the complex susceptibility of the probe transition numerically.\n\nIn the weak probe limit ($\\Omega_p\\ll \\Omega_c, \\Gamma_{p}$) we assume the population stays mostly in the ground state ($\\rho_{gg}\\approx 1$). In this case we obtain for the susceptibility\n\\begin{eqnarray}\\label{eq:chi}\n\\chi_=\\frac{i\\Gamma_p}{(\\Gamma_p-2i\\Delta_p)+\\Omega_c^2(\\Gamma_c-2i\\Delta)^{-1}},\n\\end{eqnarray}\nwhere $\\Delta=\\Delta_p+\\Delta_c+C_6\/|d|^6$.\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[height=0.45\\columnwidth]{figures\/fig_susceptibility1.eps}\n\\includegraphics[height=0.45\\columnwidth]{figures\/fig_susceptibility2.eps}\n\\caption{{\\bf Probe absorption given by the imaginary part of the susceptibility.} (a) $\\mathrm{Im}[\\chi]$ as a function of probe detuning for $\\Omega_c=1$, $\\Gamma_c=0.05$ and $\\Delta_c=0$ (in units of $\\Gamma_p$) for various distances from the Rydberg atom. The solid line is for $d\\rightarrow\\infty$, dashed corresponds to $d=R_c$ and the dotted line is for $d=R_c\/2$. (b) Dependence of $\\mathrm{Im}[\\chi]$ as a function of distance from the Rydberg atom with $\\Delta_p=0$. } \\label{fig:susceptibility}\n\\end{figure}\n\nFig.~\\ref{fig:susceptibility} shows the probe absorption proportional to the imaginary part of $\\chi$ for different laser parameters and for different distances to a Rydberg atom. Far from the influence of the Rydberg atom ($d\\rightarrow\\infty$), the susceptibility takes on a characteristic shape with vanishing absorption on resonance. 
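Equation~(\ref{eq:chi}) is straightforward to evaluate numerically. As a rough illustration with the same laser parameters as in the figure, and with $C_6$ and the distances expressed in arbitrary units, the on-resonance absorption as a function of the distance $d$ can be obtained as follows.
\begin{verbatim}
import numpy as np

def chi(delta_p, d, gamma_p=1.0, gamma_c=0.05, omega_c=1.0, delta_c=0.0, c6=1.0):
    """Weak-probe susceptibility of Eq. (chi); gamma_p = 1 fixes the units."""
    delta = delta_p + delta_c + c6/d**6
    return 1j*gamma_p/((gamma_p - 2j*delta_p) + omega_c**2/(gamma_c - 2j*delta))

d = np.linspace(0.3, 3.0, 200)           # distance from the Rydberg atom (arb. units)
im_chi_resonant = np.imag(chi(0.0, d))   # absorption at delta_p = 0 versus distance
\end{verbatim}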
For shorter distances, interactions tend to shift the transparency window and the on-resonant susceptibility increases. At a critical distance $d=R_c$, $\\Chi=1\/2$. For $d0 $.\n\nThe precision with which a measurement of $\\Delta T$ can be made depends on the noise in both regions:\n\\begin{eqnarray}\\label{eq:varT1}\n\\var(\\Delta T)\\!\\approx\\! \\frac{\\var(\\NR)\\langle\\NA\\rangle^2}{\\langle\\NR\\rangle^4}\\!+\\!\\frac{\\var(\\NA)}{\\langle\\NR\\rangle^2}\\!+\\!\\frac{2 \\, \\var(\\NR) }{\\langle\\NR\\rangle^2}.\\nonumber\n\\end{eqnarray}\nWe assume Poisson distributed noise for the intensity and density fluctuations, so $\\var(\\NR)=\\nobreak\\langle\\NR\\rangle$ and $\\var(N_{ph})\\approx\\nobreak\\langle T_A\\rangle\\langle\\NR\\rangle+\\langle\\NR\\rangle^2\\var(T_A)$. Atom shot noise is accounted for by $\\var(T_A)=\\nobreak\\sigma_0^2 \\ChiA^2\\langle T_A\\rangle^2 n_{2D}\/ a$, with $a$ the area of each region (for example the area of a pixel),\n\\begin{eqnarray}\n\\var(\\Delta T)\\!=\\!\\frac{\\langle T_A\\rangle\\!+\\!\\langle T_A\\rangle^2}{\\langle\\NR\\rangle}\\!+\\! \\frac{2}{\\langle\\NR\\rangle}\\!+\\!\\frac{\\sigma_0^2n_{2D}}{a} \\ChiA^2\\langle T_A\\rangle^2.\\nonumber\n\\end{eqnarray}\nThe first two terms can be attributed to photon shot noise while the last term is from density fluctuations. Including saturation, $\\ChiA=\\nobreak\\Gamma_p^2\/(\\Gamma_p^2+\\nobreak 2 \\Omega_p^2)$. This suggests that the signal-to-noise ratio (SNR) can be made arbitrarily high for large $\\langle \\NR\\rangle$ and large $n_{2D}$. However, to ensure that interactions between background atoms can be neglected, we require that the density of atoms in the $|r\\rangle$ state is kept low ($\\rho_{rr}n_{2D}\\pi R_c'^2\\lesssim 1$). For strong coupling $\\rho_{rr}\\approx \\Omega_p^2\/\\Omega_c^2$ and this implies $\\langle\\NR\\rangle\\lesssim\\nobreak a \\tau \\Omega_c^2\/\\sigma_0n_{2D} \\pi R_c'^2\\Gamma_p$, with exposure time $\\tau$. In the limit of strong absorption $\\langle T_A\\rangle\\ll 1$, and substituting for the maximum value of $\\langle\\NR\\rangle$:\n\\begin{eqnarray}\\label{eq:varfinal}\n\\var(\\Delta T)&=& \\frac{2 \\sigma_0 \\Gamma_p n_{2D}\\pi R_c'^2}{a \\Omega_c^2 \\tau}\n\\\\ \\nonumber\n&\\times &\n\\biggr(1+\\frac{\\Omega_c^2 \\tau \\sigma_0}{2 \\pi \\Gamma_p R_c'^2}\\ChiA^2\\exp{(-2 \\sigma_0 n_{2D} \\ChiA})\\biggr)\n\\end{eqnarray}\nwith $\\ChiA=\\big(1+2\\Omega_c^2\/\\Gamma_p^2 \\pi R_c'^2 n_{2D} \\big)^{-1}$.\n\nIn general, the best SNR is obtained for large coupling strengths $\\Omega_c$ and long exposure times $\\tau$, but in practice these will be limited by the available laser power and by the required time resolution. To find the optimal values for $n_{2D}$ and $\\Omega_p$ given fixed values of $\\tau$ and $\\Omega_c$ we numerically maximize the SNR using Eq.~\\eqref{eq:varfinal}. The final parameters used in the paper include the additional effect of finite laser linewidths which tends to increase $\\rho_{rr}$ slightly for the same $\\Omega_p$. This shifts the optimum density to slightly lower values. Assuming $\\Omega_c=2\\pi\\times 50$~MHz and $\\tau=10~\\mu$s we find $n_{2D}^{opt}=40 \\mu$m$^{-2}$ (neglecting linewidth $n_{2D}^{opt}\\approx 50 \\mu$m$^{-2}$).\n\n\\subsection{Rydberg excitation model}\n\nTo simulate the excitation of Rydberg atoms by a chirped laser pulse we consider a randomly (thermally) distributed ensemble of atoms. 
Each atom is treated as a point-like classical particle which can be in either the electronic ground state or in a Rydberg state. As the coupling field is swept from low to high detuning, each atom can undergo a transition. The transition probability is estimated using the Landau-Zener (LZ) formula for a sweep through an avoided crossing \\cite{Wittig2005}. The effect of Rydberg-Rydberg interactions causes level shifts for the nearby atoms which subsequently alters their probability to be excited by the laser pulse, giving rise to strong spatial correlations.\n\nThe simulation starts with zero detuning for the excitation laser and one atom is chosen at random to start in the Rydberg state. In the next time step the laser frequency is varied according to a fixed sweep rate, and we calculate all level shifts due to Rydberg-Rydberg interactions. From the atoms which crossed the resonance condition in the previous timestep we randomly select newly excited atoms based on their LZ probabilities. Any successful excitation immediately influences all other surrounding atoms, and thus the simulation also reproduces the excitation blockade effect. For each time step we also solve the Newtonian equations of motion of the Rydberg atoms to account for the interparticle mechanical forces. We do not consider the motion of the ground state atoms for the simulation (frozen gas regime). The simulation returns a list of the final coordinates of all the ground-state and Rydberg atoms within the gas after the laser sweep. These coordinates are then used as inputs to calculate the corresponding absorption image.\n\n\\subsection{Correlation analysis}\n\nTo characterize the translational order of the simulated Rydberg distributions, we define a pair distribution function from the absorption images $n(\\vec r)$:\n\\begin{equation}\nG[n](\\vec r) = \\frac{ \\int d^2r_0 ~n(\\vec r_0) n(\\vec r_0 + \\vec r)} { \\left( \\int d^2r_0 ~ n(\\vec r_0) \\right) ^2}.\n\\end{equation}\nTo account for the inhomogeneous density and finite size of the system we define the following rescaled pair distribution function :\n\\begin{equation}\ng(\\vec r) = \\frac{\\langle G_2[n] \\rangle }{ G_2[\\langle n\\rangle] }\n\\label{eq:g2}\n\\end{equation}\nwhere the brackets reflect averages over independent realisations. For a random distribution of atoms $g(r) \\approx 1$. Larger correlation values indicate an enhanced probability to find two Rydberg atoms at a given separation, while lower values indicate the absence of pairs.\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{figures\/fig-suppl.eps}\n\\caption{\\label{fig:correlation2}\n{\\bf Angular correlation function computed from 15 simulated images.} Angular correlation function $\\Phi(\\theta )$ taken at the radius of the first shell. We observe peaks at angles around $\\pm \\pi \/ 3$ indicating a six-fold symmetry among nearest neighbours.\n}\n\\end{figure}\n\nWe can also extract information about the angular correlations in the images. For this we define the angular correlation function :\n\\begin{equation}\n\\Phi(\\theta)\\!\\propto\\!\\left\\langle\\! \\frac{ \\int\\! d^2r_0~n(\\vec r_0) \\int\\! d \\phi~n(\\vec r_0\\!+\\! 
R_{nn}\\vec{e}_{\\phi} ) n(\\vec r_0\\!+\\!R_{nn} \\vec{e}_{\\phi+\\theta} ) } { \\left(\\int d^2r_0 ~n(\\vec r_0) \\right) ^3} \\!\\right\\rangle\n\\label{eq:g3}\n\\end{equation}\nwhere $\\vec e_{\\phi}$ is defined as the unit vector with angle $\\phi$ with respect to a reference axis $\\vec e_x$, and $R_{nn}$ is the radius of the first positive shell of the pair distribution function. This gives the probability, starting from an atom and one of its nearest neighbours, to find a second nearest neighbour forming an angle $\\theta$ with the first.\n\nFigure~\\ref{fig:correlation2} shows the angular correlation function computed from 15 simulated images at the radius of the first shell. We observe two clear peaks at $\\sim \\pi\/3$ and $\\sim 5\\pi\/3$ reflecting the 6-fold symmetry present among nearest neighbours. The other peaks at $\\theta=2n\\pi\/6,n=2,3,4$ are washed out indicating the absence of true long range orientational order.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDifferential equations are used for an enormous variety of applications, including industrial design and weather prediction. In fact, many of the main applications of supercomputers are in the form of large systems of differential equations \\cite{super}. Therefore quantum algorithms for solving differential equations would be extraordinarily valuable. A quantum algorithm for differential equations was proposed in Ref.\\ \\cite{Leyton08}, but that algorithm had very poor scaling in the time. The complexity of the simulation scaled exponentially in the number of time-steps over which to perform the simulation.\n\nThe algorithm in Ref.\\ \\cite{Leyton08} may have been overly ambitious, because it aimed to solve nonlinear differential equations. A more natural application for quantum computers is \\emph{linear} differential equations. This is because quantum mechanics is described by linear differential equations. We find that, when we restrict to linear differential equations, it is possible to obtain an algorithm that is far more efficient than that proposed in Ref.\\ \\cite{Leyton08}.\n\nWe consider first-order linear differential equations. Using standard techniques, any linear differential equation with higher-order derivatives can be converted to a first-order linear differential equation with larger dimension. A first-order ordinary differential equation may be written as\n\\begin{equation}\n\\dot x(t) = A(t)x(t) + b(t),\n\\end{equation}\nwhere $x$ and $b$ are $N_x$-component vectors, and $A$ is an $N_x\\times N_x$ matrix. Classically, the complexity of solving the differential equation must be at least linear in $N_x$. The goal of the quantum algorithm is to solve the differential equation in time $O(\\poly\\log N_x)$.\n\nQuantum mechanics is described by differential equations of this form, except they are homogeneous ($b(t)=0$), and $A(t)=iH(t)$, where $H(t)$ is Hermitian. This means that the solutions in quantum mechanics only include oscillating terms, whereas more general differential equations have solutions that may grow or decay exponentially. Quantum algorithms for simulating quantum mechanical systems have been extensively studied \\cite{Lloyd96,Aharonov03,Childs04,Berry07,Childs08,Berry09,Wiebe10}.\n\nClassical physics is described by more general differential equations. Large systems of ordinary differential equations are produced by discreti{\\s}ation of partial differential equations. 
Many equations in physics are linear partial differential equations, where the time derivative depends linearly on spatial derivatives and the value of a quantity at some point in physical space. Examples include Stokes equations (for creeping fluid flow), the heat equation, and Maxwell's equations. Discreti{\\s}ation of the partial differential equation on a mesh of points results in an ordinary differential equation with a very large value of $N_x$.\n\nIn the case where $A$ and $b$ are time independent, then one can find the equilibrium solution of the differential equation by solving\n\\begin{equation}\nA x = -b.\n\\end{equation}\nA quantum algorithm for this problem was given by Harrow, Hassadim and Lloyd \\cite{Harrow09}, with runtime that is polynomial in $\\log(N_x)$ and the condition number of $A$. Ambainis has reported development of an improved algorithm \\cite{Ambainis10}, though this algorithm has not yet been released. We consider the more difficult case of solving the time evolution under linear differential equations, rather than just the equilibrium solution. We find that this case can also be solved using a modification of the method of Harrow, Hassadim and Lloyd.\n\n\\section{Trotter formula approach}\n\nBefore explaining that approach, we first describe an approach using Trotter formulae, and the drawback to that approach. This will not be described rigorously, because it is not our main proposal for solving differential equations.\n\nThe homogeneous case, where $b=0$, is analogous to Hamiltonian evolution. If $A$ is antiHermitian, then we can take $A=iH$, where $H$ is a Hermitian Hamiltonian. Evolution under this Hamiltonian can be solved by methods considered in previous work \\cite{Berry07,Berry09}. Another case that can be considered is where $A$ is Hermitian. In this case, the eigenvalues of $A$ are real, and $A$ can be diagonali{\\s}ed in the form $A=V D V^{-1}$, where $D$ is a real diagonal matrix and $V$ is unitary. The formal solution is then, for $A$ independent of time, $x(t)=V e^{D(t-t_0)} V^{-1} x(t_0)$.\n\nThe differential equation can be solved using a similar method to that used in Ref.\\ \\cite{Harrow09}. The value of $x$ is encoded in a quantum state as\n\\begin{equation}\n\\ket{x} = {\\cal N}_x \\sum_{j=1}^{N_x} x^{[j]} \\ket{j},\n\\end{equation}\nwhere $\\ket{j}$ are computational basis states of the quantum computer, $x^{[j]}$ are the components of the vector $x$, and ${\\cal N}_x$ is a normali{\\s}ation constant. The state can be written in a basis corresponding to the eigenvectors of $A$:\n\\begin{equation}\n\\ket{x} = \\sum_{j} \\lambda_j \\ket{\\lambda_j}.\n\\end{equation}\nUsing methods for Hamiltonian simulation, $iA$ can be simulated. By using phase estimation, if the state is an eigenstate $\\ket{\\lambda_j}$, then the eigenvalue $\\lambda_j$ can be determined. Given maximum eigenvalue $\\lambda_{\\rm max}$, we would change the amplitude by a factor of $e^{(t-t_0)(\\lambda_j-\\lambda_{\\rm max})}$. See Ref.\\ \\cite{Harrow09} for the method of changing the amplitude. If this is done coherently, then the final state will encode $x(t)$.\n\nFor more general differential equations, $A$ will be neither Hermitian nor antiHermitian. In this case, one can break $A$ up into Hermitian ($A_{H}$) and antiHermitian ($A_{aH}$) components. The evolution under each of these components can be simulated individually, and the overall evolution simulated by combining these evolutions via the Trotter formula. 
The drawback to this approach is that it appears to give a complexity that increases exponentially with the time interval $\\Delta t = t-t_0$ (though the complexity is still greatly improved over Ref.\\ \\cite{Leyton08}).\n\nIf $A$ were just Hermitian, then the eigenvector (or eigenspace) corresponding to the largest eigenvalue would not decay, and the system would end up in that state. Therefore the amplitude would not drop below the amplitude on the eigenspace corresponding to the largest eigenvalue. That is not the case when $A$ is a more general matrix, because usually the maximum real part of an eigenvalue of $A$ will be strictly less than the maximum eigenvalue of $A_H$. The amplitude must therefore decay exponentially, because we must use the maximum eigenvalue of $A_H$ in simulating evolution under $A_H$.\n\nThe result of this is that the complexity of the simulation will scale exponentially in the time that the differential equation needs to be simulated over, $\\Delta t$. The scaling will be considerably improved over that in Ref.\\ \\cite{Leyton08}, but it is desirable to obtain scaling that is polynomial in $\\Delta t$. Another drawback is that this approach does not enable simulation of inhomogeneous differential equations.\n\n\n\n\\section{Linear systems approach}\n\nTo avoid this problem we propose an approach based on the algorithm for solving linear systems from Ref.\\ \\cite{Harrow09}. The trick is to encode the solution of the differential equation at different times using the one state. That is, we wish to obtain the final state proportional to\n\\begin{equation}\n\\label{eq:fineq}\n\\ket{\\psi} := \\sum_{j=0}^{N_t} \\ket{t_j} \\ket{x_j}.\n\\end{equation}\nThe number $N_t$ is the number of time steps, $t_j$ is the time $t_0+j\\dt$, where $\\dt$ is the time interval in the discreti{\\s}ation of the differential equation, $x_j$ is the approximation of the value of $x$ at time $t_j$, and $\\Delta t$ is the total time interval over which the differential equation is to be solved. We use the subscript $j$ to index the vectors, and superscript for components of these vectors.\n\nOnce this state has been created, the state encoding the solution at the final time $t_0+\\Delta t$ can be approximated by measuring the register encoding the time and getting that time. Just using this method, the probability of obtaining the final time is small ($1\/(N_t+1)$). To obtain a significant probability of success, one can add times beyond $t_0+\\Delta t$ where $x$ is constant. We take $x$ to be constant for $t_0+\\Delta t$ to $t_0+2\\Delta t$, so $N_t=2\\Delta t\/\\dt$. Then any measurement result for the time in this interval will give the state corresponding to the solution. By this method, the probability of success can be boosted significantly, without changing the scaling for $N_t$.\n\nTo numerically solve differential equations, the simplest method is the Euler method, which discreti{\\s}es the differential equation as\n\\begin{equation}\n\\frac{x_{j+1}-x_j}{\\dt} = A(t_j) x_j +b(t_j).\n\\end{equation}\nFor times after $t_0+\\Delta t$, we set $x_{j+1}=x_j$ to ensure that $x$ is constant. The Euler method yields an error that scales as $O(\\dt^2)$ for a single time step. Therefore, we expect that the error in the total simulation is $O(N_t \\dt^2)=O(\\Delta t^2\/N_t)$. To achieve error bounded by $\\epsilon$, we can take $N_t = O(\\Delta t^2\/\\epsilon)$. 
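One classical way to organize this discreti{\s}ation is to collect the Euler steps, together with the constant steps appended after $t_0+\Delta t$, into a single linear system whose solution stacks the whole time history, in the spirit of Eq.~(\ref{eq:fineq}); the matrix actually used for the quantum algorithm may be arranged differently, but the following Python sketch (for time-independent $A$ and $b$) shows the structure.
\begin{verbatim}
import numpy as np

def euler_system(A, b, x0, dt, n_prop, n_pad):
    """Linear system C X = d with X = (x_0, ..., x_{N_t}), N_t = n_prop + n_pad.

    Propagation rows: x_{j+1} - (I + dt A) x_j = dt b   (Euler method),
    padding rows:     x_{j+1} - x_j = 0                 (x held constant).
    """
    Nx = len(x0)
    Nt = n_prop + n_pad
    C = np.zeros(((Nt + 1)*Nx, (Nt + 1)*Nx))
    d = np.zeros((Nt + 1)*Nx)
    C[:Nx, :Nx] = np.eye(Nx)
    d[:Nx] = x0                                  # fixes the initial condition
    for j in range(Nt):
        r, c = (j + 1)*Nx, j*Nx
        C[r:r+Nx, r:r+Nx] = np.eye(Nx)
        if j < n_prop:
            C[r:r+Nx, c:c+Nx] = -(np.eye(Nx) + dt*A)
            d[r:r+Nx] = dt*b
        else:
            C[r:r+Nx, c:c+Nx] = -np.eye(Nx)
    return C, d

# np.linalg.solve(C, d) then reproduces the forward Euler iterates x_0, ..., x_{N_t}
\end{verbatim}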
To show these scalings rigorously requires additional constraints on the problem.\n\nIn particular, to rigorously bound the error it is necessary that the eigenvalues of $A(t_j)$ have no positive real part. Otherwise the error can grow exponentially. In cases where $A(t_j)$ does have an eigenvalue with positive real part, one can simply subtract a multiple of the identity, and rescale the solution. Note that $\\epsilon$ is the error in the solution of the differential equation, and is distinct from error in the solution of linear systems.\n\nMore generally, linear multistep methods have the form \\cite{Butcher,Hairer}\n\\begin{equation}\n\\label{eq:multi}\n\\sum_{\\ell=0}^{k} \\alpha_\\ell x_{j+\\ell} = \\dt \\sum_{\\ell=0}^{k}\\beta_\\ell [A(t_{j+\\ell}) x_{j+\\ell}+b(t_{j+\\ell})].\n\\end{equation}\nMultistep methods can be chosen such that the error is of higher order in $\\dt$, but there is the problem that the method may not be stable. That is, even if the exact solution of the differential equation is bounded, the solution of the difference equation may be unbounded.\n\nTo examine the stability, one defines the generating polynomials\n\\begin{equation}\n\\rho(\\zeta)=\\sum_{j=0}^k \\alpha_j \\zeta^j, \\qquad \\sigma(\\zeta) = \\sum_{j=0}^k \\beta_j \\zeta^j.\n\\end{equation}\nThe stability can be examined via the roots of the equation\n\\begin{equation}\n\\label{eq:stpol}\n\\rho(\\zeta)-\\mu \\sigma(\\zeta) = 0.\n\\end{equation}\nOne defines the set $S$ by\n\\begin{equation}\nS := \\left\\{ \\mu\\in {\\mathbb{C}}; \\begin{array}{*{20}l}\n{{\\rm all~roots~} \\zeta_j(\\mu) {\\rm~of~} \\eqref{eq:stpol} {\\rm~satisfy~} |\\zeta_j(\\mu)|} \\le 1 \\\\\n{{\\rm multiple~roots~satisfy~} |\\zeta_j(\\mu)|< 1} \\\\ \\end{array} \\right\\}.\n\\end{equation}\n$S$ is called the stability domain or stability region of the multistep method. In addition, if the roots of $\\sigma(\\zeta)$ all satisfy $|\\zeta|\\le 1$, and repeated roots satisfy $|\\zeta|<1$, then the method is said to be stable at infinity.\n\nA linear multistep method is said to be order $p$ if it introduces local errors $O(\\dt^{p+1})$. This means that, if it is applied with exact starting values to the problem $\\dot x = t^q$ ($0\\le q \\le p$), it integrates the problem without error. A linear multistep method has order $p$ if and only if \\cite{Butcher}\n\\begin{equation}\n\\rho(e^h)-h\\sigma(e^h) = O(h^{p+1}).\n\\end{equation}\n\nA useful property of linear multistep methods is for them to be $A$-stable \\cite{Hairer,Dahlquist}.\n\\begin{definition}\nA linear multistep method is called $A$-stable if $S \\supset \\mathbb{C}^-$, i.e., if\n\\begin{equation}\n{\\rm Re}\\, \\lambda \\le 0 \\implies \\text{numerical solution for } \\dot x = \\lambda x \\text{ is bounded.}\n\\end{equation}\n\\end{definition}\nThis definition means that, if the solution of the differential equation is bounded, then the approximation given by the multistep method is bounded as well. For a scalar differential equation, the multistep method is bounded whenever $\\lambda$ is in the left half of the complex plane. The Euler method is $A$-stable, but it is not possible to construct arbitrary order $A$-stable multistep methods. The second Dahlquist barrier is that an $A$-stable multistep method must be of order $p\\le 2$ \\cite{Hairer,Dahlquist}. 
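The boundary of the stability domain $S$ can be traced directly from the generating polynomials: setting $\zeta=e^{i\theta}$ in Eq.~(\ref{eq:stpol}) and solving for $\mu$ gives the boundary locus $\mu(\theta)=\rho(e^{i\theta})/\sigma(e^{i\theta})$, which outlines the candidate boundary of $S$ (one must still check on which side the root condition holds). A minimal sketch, using the familiar one-step methods as examples, is
\begin{verbatim}
import numpy as np

def boundary_locus(alpha, beta, ntheta=400):
    """mu(theta) = rho(e^{i theta}) / sigma(e^{i theta}) for a multistep method."""
    zeta = np.exp(1j*np.linspace(0.0, 2*np.pi, ntheta))
    rho = sum(a*zeta**j for j, a in enumerate(alpha))
    sig = sum(b*zeta**j for j, b in enumerate(beta))
    return rho/sig

# coefficients (alpha_l, beta_l) of Eq. (multi) for k = 1:
mu_explicit_euler = boundary_locus([-1, 1], [1, 0])      # stability region |1 + mu| <= 1
mu_implicit_euler = boundary_locus([-1, 1], [0, 1])      # A-stable
mu_trapezoidal    = boundary_locus([-1, 1], [0.5, 0.5])  # order 2, A-stable (Dahlquist barrier)
\end{verbatim}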
As we wish to consider higher-order multistep methods, we relax the condition and require that the linear multistep method is $A(\\alpha)$-stable \\cite{Hairer,Widlund}.\n\n\\begin{definition}\nA linear multistep method is $A(\\alpha)$-stable, $0<\\alpha<\\pi\/2$, if\n\\begin{equation}\nS \\supset S_\\alpha = \\{\\mu ; |\\arg(-\\mu)| < \\alpha, \\mu \\ne 0 \\}.\n\\end{equation}\n\\end{definition}\nThis definition means that, in the case of a scalar differential equation, the multistep method is bounded whenever $\\lambda$ is within a wedge in the left half of the complex plane. For a vector differential equation, the eigenvalues of $A$ should be within this wedge. It is known that, for any $\\alpha<\\pi\/2$ and $k\\in \\mathbb{N}$, there is an $A(\\alpha)$-stable linear $k$-step method of order $p=k$ \\cite{Grigoreff,Butcher}.\n\nThe error in the total solution of the differential equation will be $O(N_t (\\Delta t)^{p+1})$. In order to obtain a rigorous result, we specialise to the case that $A$ and $b$ are independent of time. The relevant bound is given in Theorem 7.6 in Chapter V of Ref.\\ \\cite{Hairer}.\n\\begin{theorem}\n\\label{thm2}\nSuppose a linear multistep method is of order $p$, $A(\\alpha)$-stable and stable at infinity. If the matrix $A$ is diagonali{\\s}able (i.e.\\ there exists a matrix $V$ such that $V^{-1}AV=D=\\diag(\\lambda_1,\\ldots,\\lambda_n)$) with eigenvalues satisfying\n\\begin{equation}\n|\\arg(-\\lambda_i)|\\le \\alpha \\qquad for~i=1,\\ldots,N_x,\n\\end{equation} \nthen there exists a constant $M$ (depending only on the method) such that for all $\\dt>0$ the global error satisfies\n\\begin{equation}\n\\|x(t_m)-x_m\\| \\le M \\kappa_V \\left( \\max_{0\\le j< k} \\| x(t_j)-x_j \\| + \\dt^p \\int_{t_0}^{t_m} \\|x^{(p+1)}(\\xi)\\|d\\xi\\right),\n\\end{equation}\nwhere $\\kappa_V=\\|V\\| \\cdot \\|V^{-1}\\|$ is the condition number of $V$.\n\\end{theorem}\n\nHere the superscript with round brackets denotes repeated derivative. We can use this result to show a lemma on the scaling of the error.\n\n\\begin{lemma}\n\\label{lem:ersca}\nSuppose a linear multistep method is of order $p$, $A(\\alpha)$-stable and stable at infinity. If the matrix $A$ is diagonali{\\s}able (i.e.\\ there exists a matrix $V$ such that $V^{-1}AV=D=\\diag(\\lambda_1,\\ldots,\\lambda_n)$) with eigenvalues satisfying\n\\begin{equation}\n|\\arg(-\\lambda_i)|\\le \\alpha \\qquad for~i=1,\\ldots,N_x,\n\\end{equation} \nand $b$ is constant, then the global error satisfies\n\\begin{equation}\n\\|x(t_m)-x_m\\| = O\\left( \\kappa_V^2 (\\|x_{\\rm init}\\| + \\|b\\|\/\\|A\\|)\\left[ \\kappa_V (\\dt \\|A\\|)^2\n + m(\\dt \\|A\\|)^{p+1} \\right] \\right),\n\\end{equation}\nwhere $\\kappa_V=\\|V\\| \\cdot \\|V^{-1}\\|$ is the condition number of $V$.\n\\end{lemma}\n\n\\begin{proof}\nThe linear multistep method requires a starting method to obtain the values of $x_j$ for $00$, so $|\\alpha_k - \\dt \\lambda_j\\beta_k|^{-1}\\le |\\alpha_k|^{-1}$.\n\nFor the starting method, we have used the Euler method, and the result is simpler. For the Euler method, $E_m$ and $\\Delta_m$ are scalars, and are just $z_m^{[j]}$ and $y_{m+1}^{[j]}$. 
The corresponding result is therefore, for $0< m < k$,\n\\begin{align}\n|z_m^{[j]}| &\\le M_E\\left(|z_0^{[j]}|+\\sum_{\\ell=0}^{m-1} |y_{\\ell+1}^{[j]}| \\right) \\nn\n&= M_E\\sum_{\\ell=0}^{m} |y_{\\ell}^{[j]}|.\n\\end{align}\nHere $M_E$ is the corresponding constant for the Euler method.\nFor the end of the simulation, we have for $N_t\/2\\le m0$\n for all $x \\in {\\bf R}$\n w.r.t.\\ the Lebesgue measure on ${\\bf R}$ whose moments\n of any order are finite.\n Moreover, the stochastic differential equation~(\\ref{sde1})\n characterized by the coefficients $f$, $\\sigma$,\n and by the initial condition $X_0$ admits a unique strong solution.\n\\end{itemize}\n\nExplicit conditions ensuring (A1) are, for example, local Lipschitz continuity\nand linear growth, or the Yamada-Watanabe\ncondition (see e.g. Rogers and Williams (1987), Section V-40).\n\nOnce existence and uniqueness of the solution\nof a SDE have been established, we can analyse\nthe distribution of its solution at all time instants.\nIn describing the evolution of the distribution of a diffusion process,\nthe Fokker--Planck partial differential equation is a fundamental tool.\nWe therefore introduce the following assumption.\n\\begin{itemize}\n \\item[(A2)]\nThe unique solution $X_t$ of ~(\\ref{sde1}) admits a density $p_t$ that is absolutely\ncontinuous with respect to the Lebesgue measure, i.e.,\n\\begin{eqnarray*}\n\\mbox{Prob}\\{X_t\\in A\\} = \\int_A p_t(x) dx, \\ \\ \\mbox{for all Borel sets} \\ \\ A,\n\\end{eqnarray*}\nand that satisfies the Fokker--Planck equation:\n\\begin{eqnarray} \\label{FPESQRT}\n\\frac{\\partial p_t}{\\partial t} = -\\frac{\\partial}{\\partial x} (f_t p_t) + \\inverse{2}\n \\frac{\\partial^2}{\\partial x^2} (a_t p_t), \\ \\ a_t(\\cdot) = \\sigma_t^2(\\cdot) \\ .\n\\end{eqnarray}\n\\end{itemize}\nExamples of assumptions on the coefficients $f$, $a$ and on their\npartial derivatives\nunder which (A2) holds are given in the literature. See for example\nStroock and Varadhan (1979)\nor Friedman (1975).\n\nIn order to appropriately introduce the problem we mentioned at the beginning\nof the section, we now present a definition of exponential family.\n\\begin{definition}\n Let $\\{c_1,\\cdots,c_m\\}$ be scalar functions defined on ${\\bf R}$,\n such that $\\{1,c_1,\\cdots,c_m\\}$ are {\\em linearly independent},\n have at most polynomial growth, are\n twice continuously differentiable and the convex set\n\\begin{displaymath}\n \\Theta_0 := \\left\\{\\theta=\\{\\theta^1,\\ldots,\\theta^m\\}'\\in {\\bf R}^m\\,:\\,\n \\psi(\\theta) = \\log\\; \\int \\exp[ \\theta' c(x) ]\\, d x\n < \\infty \\right\\}\\ ,\n\\end{displaymath}\n has {\\em non--empty interior}, where $c(x)=\\{c_1(x),\\cdots,c_m(x)\\}'$ and\n `` $'$ '' denotes transposition.\n Then\n\\begin{displaymath}\n EM(c) = \\{ p(\\cdot,\\theta)\\,,\\, \\theta \\in \\Theta \\},\n \\hspace{1cm} p(x,\\theta):= \\exp[\\theta' c(x) - \\psi(\\theta)]\\ ,\n\\end{displaymath}\n where $\\Theta \\subseteq \\Theta_0$ is open,\n is called an exponential family of probability densities.\n\\end{definition}\n\nOur problem consists in finding a SDE whose solution $X_t$\nhas a density $p_t$ that follows a prescribed curve in a given exponential family.\nMore precisely, we require the curve $t \\mapsto p_t$, in the space\nof all densities, to coincide with a given curve\n$t \\mapsto p(\\cdot,\\theta_t)$ in a given $EM(c)$.\\footnote{In order to contain space\nand notation the underlying geometric setup\nis not fully developed here. 
We just say that the problem originated from the\nuse of differential geometry and statistics for the nonlinear filtering\nproblem. The reader interested in geometric aspects and other details is referred to\nBrigo (1996), Brigo, Hanzon and Le Gland (1999), or to the tutorial in Brigo (1999).}\n\nThis is formalized in the following.\n\\begin{problem} \\label{fin-dim:pro}\nLet be given an exponential family $EM(c)$, an initial density $p_0$\ncontained in $EM(c)$, and\na diffusion coefficient $a_t(\\cdot) := \\sigma^2_t(\\cdot)$,\n$t\\ge 0$.\nLet ${\\cal U}(p_0,\\sigma)$ denote the set of all drifts\n$f$ such that $p_0$, $f$ and $\\sigma$ and its related SDE~(\\ref{sde1}) satisfy assumptions\n(A1) and (A2). Assume ${\\cal U}(p_0,\\sigma)$ to be non-empty.\n\nThen, given the curve $t \\mapsto p(\\cdot,\\theta_t)$ in $EM(c)$\n(where $t \\mapsto \\theta_t$ is a $C^1$--curve in the parameter space\n$\\Theta$), find a drift in ${\\cal U}(p_0,\\sigma)$ whose related SDE has a solution\nwith density $p_t = p(\\cdot,\\theta_t)$.\n\n\\end{problem}\nThe solution of this problem is given by the following.\n\\begin{theorem} \\label{sol-prob1}\n{\\bf (Solution of Problem~\\ref{fin-dim:pro})}\nAssumptions and notation of Problem~\\ref{fin-dim:pro} in force.\nConsider the stochastic differential equation\n\\begin{eqnarray} \\label{sol:prob1}\nd Y_t &=& u^\\sigma_t(Y_t) dt + \\sigma_t(Y_t) dW_t, \\ \\ Y_0 \\sim p_0, \\nonumber \\\\ \\\\\n u^\\sigma_t(x) &:=& \\inverse{2} \\frac{\\partial a_t}{\\partial x}(x) +\n \\inverse{2} a_t(x) \\theta_t' \\frac{\\partial c}{\\partial x}(x)\n \\nonumber \\\\ \\nonumber \\\\ \\nonumber\n&& - \\left(\\frac{d}{dt}\\theta_t' \\right)\n \\int_{-\\infty}^x \\left(c(\\xi) - \\nabla_{\\theta}\\psi(\\theta_t)\\right)\n \\ \\exp[\\theta_t' (c(\\xi) - c(x))] d\\xi,\n\\end{eqnarray}\nwhere $\\nabla_{\\theta}\\psi(\\theta_t)=\\{\\partial \\psi \/ \\partial\n\\theta^1(\\theta_t),\\ldots,\\partial \\psi \/\\partial \\theta^m(\\theta_t) \\}'$,\nwith the symbol ``$\\sim$'' to be read as ``distributed as''.\n\nIf $u^\\sigma \\in {\\cal U}(p_0,\\sigma)$,\nthen the SDE (\\ref{sol:prob1}) solves Problem~\\ref{fin-dim:pro}, in that\n\\begin{displaymath}\np_{Y_t}(x) = \\exp\\left[\\theta_t' \\ c(x) - \\psi(\\theta_t)\\right],\n\\ \\ t \\ge 0.\n\\end{displaymath}\n\\end{theorem}\nThe proof of the theorem is rather straightforward.\nIt is sufficient to write the\nFokker--Planck equation for the SDE (\\ref{sol:prob1}) and, after\nlengthy computations, verify that indeed\n\\begin{displaymath}\n \\frac{\\partial}{\\partial t} \\exp[\\theta_t' c(x) - \\psi(\\theta_t)]\\\n= - \\frac{\\partial}{\\partial x}\n\\left(u^\\sigma_t(x) \\exp[\\theta_t' c(x) - \\psi(\\theta_t)]\\right)\n+ \\inverse{2} \\frac{\\partial^2}{\\partial x^2}\n\\left( a_t(x) \\exp[\\theta_t' c(x) - \\psi(\\theta_t)] \\right) \\\n\\end{displaymath}\nby substituting the expression for $u$ given in the theorem.\nA different proof can be found in Chapter 7 of Brigo (1996) or in Brigo (2000),\nwhere in deriving the expression for $u$ it was tacitly assumed, as is done here, that\n\\begin{displaymath}\n\\lim_{x\\rightarrow -\\infty}u^\\sigma_t(x)p_t(x) = 0 \\ \\ \\mbox{for all } \\ t \\ge 0.\n\\end{displaymath}\n\nIn the next section, we shall consider an interesting application of this theorem to\nthe option pricing problem. Indeed, we shall use such result\nmore as a ``guiding tool'' rather than applying it immediately\nas it stands. 
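Before turning to that application, a small self-contained numerical sketch may help to fix ideas. The fragment below is purely illustrative and not part of the original derivation: it considers the simplest possible target curve, the Gaussian family $c(x)=\{x,\;x^2\}'$ with a constant parameter $\theta_t\equiv\theta$ corresponding to ${\cal N}(m,v)$, together with a constant diffusion coefficient $\sigma_t(\cdot)\equiv\sigma$. Since $d\theta_t/dt=0$, the drift of Theorem~\ref{sol-prob1} reduces to $u^\sigma(x)=\sigma^2(m-x)/(2v)$, and an Euler--Maruyama simulation started from $Y_0\sim{\cal N}(m,v)$ should preserve the prescribed marginal law at all times. All numerical values below are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np

# Illustrative check of Theorem (sol-prob1), stationary Gaussian case.
# Target family: c(x) = (x, x^2), theta constant, i.e. p(., theta) = N(m, v);
# constant diffusion sigma.  Drift from the theorem: u(x) = sigma^2 (m - x) / (2 v).
# With Y_0 ~ N(m, v), the marginal law of Y_t should remain N(m, v) for all t.
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
m, v, sigma = 1.0, 0.25, 0.6
T, n_steps, n_paths = 2.0, 400, 50_000
dt = T / n_steps

Y = rng.normal(m, np.sqrt(v), size=n_paths)        # Y_0 ~ p_0 = N(m, v)
for _ in range(n_steps):
    drift = sigma**2 * (m - Y) / (2.0 * v)         # u^sigma of the theorem
    Y += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print("sample mean %.3f (target %.3f)" % (Y.mean(), m))
print("sample var  %.3f (target %.3f)" % (Y.var(), v))
\end{verbatim}
In the application of the next section the theorem is used in this same constructive spirit.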
In particular, assumptions (A1) and (A2) will be checked\ndirectly and not via the sufficient conditions usually considered in the literature.\n\n\\section{Alternatives to the Black and Scholes model}\nLet us consider the Black and Scholes (1973) stock price model,\n\\begin{eqnarray} \\label{BeS}\n d S_t = \\mu S_t dt + \\bar{\\sigma} S_t \\ dW_t, \\ \\ S_0 = s_0 , \\ \\\nt \\in [0,T],\n\\end{eqnarray}\nwhere $s_0$ is a positive deterministic initial condition, and $\\mu$,\n$\\bar{\\sigma}$ and $T$ are positive real constants.\n\nThe probability density $p_{S_t}$ of $S_t$, at any time $t>0$,\nis given by\n\\begin{eqnarray} \\label{BeSlnd}\np_{S_t}(x) &=& \\exp\\left\\{\\zeta \\ln\\frac{x}{s_0} + \\rho(t)\n \\ln^2\\frac{x}{s_0} - \\psi(\\zeta,\\rho(t)) \\right\\}, \\ \\ x > 0, \\\\ \\nonumber \\\\ \\nonumber\n\\zeta &=& \\frac{\\mu}{\\bar{\\sigma}^2} - \\frac{3}{2},\n\\ \\ \\rho(t) = - \\frac{1}{2 \\bar{\\sigma}^2 t}, \\\\ \\nonumber \\\\ \\nonumber\n\\psi(\\zeta,\\rho(t)) &=& - \\frac{(\\zeta+1)^2}{4 \\rho(t)}\n+ \\inverse{2} \\ln\\left(\\frac{-\\pi}{\\rho(t)}\\right) + \\ln(s_0).\n\\end{eqnarray}\nWith the notation for exponential families introduced in the previous section,\none writes\n\\begin{eqnarray*}\nc_1(x) = \\ln\\frac{x}{s_0}, \\ \\ c_2(x) = \\ln^2\\frac{x}{s_0} , \\ \\\n\\theta_t = \\{\\zeta, \\ \\rho(t)\\}' \\ .\n\\end{eqnarray*}\n\nOne might wish to model the stock price process by considering a different volatility\nfunction $\\sigma$, instead of $\\bar{\\sigma} S_t$ in (\\ref{BeS})\\footnote{We use\nthe term ``volatility'' to denote the whole diffusion coefficient $\\sigma_t(\\cdot)$ rather\nthan the standard deviation rate of the instantaneous return as usually done in practice.},\nwhile preserving major properties of the original process\n(\\ref{BeS}). The purpose of this section is then the construction of\nalternative stock price dynamics that differs from (\\ref{BeS}), yet sharing\nsimilar features from a probabilistic point of view.\n\nLet us approach this problem by applying Theorem~\\ref{sol-prob1}\nto find a SDE with a given diffusion\ncoefficient $\\sigma_t(\\cdot)$ and whose marginal density is\nequal to the marginal density of $S$ in all time instants\nof the time interval ${\\cal T} = [\\epsilon,T]$, where $0<\\epsilon 0, \\ \\ t\\in [0,T-\\delta].\n\\end{eqnarray}\n\nAlternative models such as (\\ref{sol:bes1}) do not share this\nproperty in general. In fact, identity of the marginal laws alone\ndoes not suffice to ensure (\\ref{BeSret}), for which equality of\nsecond order laws or of transition densities would be sufficient instead.\nHow can we obtain alternative models whose properties concerning log-returns\nare as close as possible to property~(\\ref{BeSret})?\n\nTo tackle this issue, we have to find a compromise between our alternative\nmodel (\\ref{sol:bes1}) and model (\\ref{BeS}). To this end, we consider a\nweaker version of (\\ref{BeSret}) in that we restrict the set of dates for\nwhich the property holds true. 
Precisely, we modify the\ndefinition of $Y$ so that, given the time instants\n${\\cal T}^\\Delta:=\\{0,\\Delta, 2\\Delta, \\ldots, N\\Delta\\}$,\n$\\Delta = T\/N$, $\\Delta > \\epsilon$,\nproperty (\\ref{BeSret}) is satisfied by $Y$ in ${\\cal T}^\\Delta$, i.e.\n\\begin{eqnarray} \\label{yret}\n \\ln \\frac{Y_{i \\Delta}}{Y_{j \\Delta}}\n\\sim {\\cal N}((\\mu - \\inverse{2} \\bar{\\sigma}^2)(i-j) \\Delta ,\\ \\bar{\\sigma}^2(i-j) \\Delta),\n\\ \\ i > j, \\ \\ i=1,\\ldots,N, \\ \\ j=0,\\ldots,N-1.\n\\end{eqnarray}\nLimiting such key property to a finite set of times is not so\ndramatic. Indeed, only discrete time samples are observed in practice,\nso that once the time instants are fixed, our process $Y$ can not be\ndistinguished from Black and Scholes process'. The issue of discrete versus\ncontinuous time will be further developed in Section 5.\n\nThe new definition of $Y$ is still based on Theorem~\\ref{sol-prob1}.\nHowever, we use this theorem ``locally'' in each time interval $[(i-1)\\Delta, \\ i\\Delta)$.\nThis means that in such interval we define iteratively the drift $u^\\sigma$ as in the\ntheorem but\n\\begin{itemize}\n\\item we translate back the time--dependence of a time amount $(i-1)\\Delta$\n (thus locally restoring the dynamics of the original result) and\n\\item we replace the distribution $p_0$ for the initial condition with\n the distribution of the final value of $Y$ relative to the previous interval.\n\\end{itemize}\nWe obtain:\n\\begin{eqnarray}\n\\label{sol:bes2}\nd Y_t &=& u^\\sigma_t(Y_t,Y_{\\alpha(t)},\\alpha(t)) dt + \\sigma_t(Y_t) dW_t, \\ \\\nt \\in [i\\Delta + \\epsilon, (i+1)\\Delta), \\\\ \\nonumber \\\\ \\nonumber\ndY_t &=& \\mu Y_t dt + \\bar{\\sigma} Y_t dW_t, \\ \\ \\mbox{for} \\ \\ t \\in [i\\Delta,i\\Delta+ \\epsilon),\\ \\\n\\alpha(t) = i \\Delta \\ \\ \\mbox{for} \\ \\ t \\in [i\\Delta, \\ (i+1)\\Delta) \\ ,\n\\end{eqnarray}\nwhere $u^\\sigma_t(x,y,\\alpha)$ was defined in (\\ref{sol:bes1}).\n\nIt is clear by construction that the transition densities\nof $S$ and $Y$ satisfy\n$p_{Y_{(i+1) \\Delta}|Y_{i \\Delta}}(x;y) = p_{S_{(i+1) \\Delta}|S_{i\\Delta}}(x;y)$.\nThen, starting from the equality of the marginal laws of $S$ and $Y$ in the first\ninterval that holds by construction, we inductively obtain the equality of the\nmarginal laws also in each other interval.\nAs a consequence, the second order densities are also equal\namong consecutive instants $(i-1)\\Delta, \\ i\\Delta$, i.e.,\n\\begin{displaymath}\np_{[Y_{(i+1) \\Delta},Y_{i\\Delta}]}(x,y) =\np_{[S_{(i+1) \\Delta},S_{i\\Delta}]}(x,y).\n\\end{displaymath}\nIt follows that\n\\begin{eqnarray} \\label{yret1}\n \\ln \\frac{Y_{(i+1) \\Delta}}{Y_{i \\Delta}}\n\\sim {\\cal N}((\\mu - \\inverse{2} \\bar{\\sigma}^2) \\Delta ,\\ \\bar{\\sigma}^2 \\Delta),\n\\ \\ i=0,\\ldots,N-1.\n\\end{eqnarray}\nAt this point we remark that the process $Y$ in (\\ref{sol:bes2}) is not a Markov process\nin $[0, T]$. 
However, it is Markov in all time instants of ${\\cal T}^\\Delta$.\nFormally,\n\\begin{displaymath}\np_{Y_{m \\Delta}|Y_{(m-1)\\Delta},Y_{(m-2)\\Delta},\\ldots,Y_0} =\np_{Y_{m \\Delta}|Y_{(m-1)\\Delta}}.\n\\end{displaymath}\nThis property follows from the fact that in $[(m-1)\\Delta, m \\Delta)$\nthe dynamics of the SDE defining $Y$ does not depend on\n$Y_{(m-2)\\Delta},\\ldots,Y_0$, and that when such equation\nis considered for $t\\in [(m-1)\\Delta, m\\Delta)$, in its drift $u^\\sigma$\nthe local initial condition for the entry $Y$ is set to $Y_{(m-1)\\Delta}$.\n\nFrom now on, we refer to markovianity in ${\\cal T}^\\Delta$ as to\n$\\Delta${\\em --Markovianity}.\n\nWe finally notice that, through the $\\Delta$--Markovianity,\nproperty (\\ref{yret1}) extends to any pair of instants in\n${\\cal T}^\\Delta$, so as to yield (\\ref{yret}).\nMoreover, the inductive application of the $\\Delta$--Markovianity\nand the identity of transition densities in the grid leads to the\nidentity of the finite dimentional distributions of $S$ and $Y$ in the grid.\n\n\\section{Option pricing in continuous-time} \\label{ophct}\nLet us now consider the process $\\{B_t:t\\ge 0\\}$ whose value evolves\naccording to\n\\begin{equation}\ndB_t=B_t r dt,\n\\end{equation}\nwith $B_0=1$ and where $r$ is a positive real number, so that\n$B_t = \\exp(r t)$. The process $B$ is assumed to describe the evolution of\na money market account in a given financial market. The process $Y_t$ in\n(\\ref{sol:bes2}) is instead assumed to model the evolution of some traded\nfinancial (risky) asset, typically a stock.\n\nThe financial market thus defined might admit arbitrage opportunities. As\nis well known, a sufficient condition which ensures\narbitrage-free dynamics is the existence of an equivalent martingale measure\nwith respect to the initially chosen numeraire. In this paper, we use the\nprocess $B$ as a numeraire, so that an equivalent martingale measure is a\nprobability measure that is equivalent to the initial one, $P$, and under\nwhich the process $\\{Y_t\/B_t:t\\ge 0\\}$ is a martingale. A necessary condition for\nthe existence of an equivalent martingale measure is the semimartingale\nproperty for the process $Y$. The process $Y$ is indeed a semimartingale\nunder $P$ for sufficiently well behaved volatility functions $\\sigma_t(\\cdot)$.\n\nWe denote by $\\cal S$ the set of all volatility functions $\\sigma_t(\\cdot)$\nsuch that $u^\\sigma \\in {\\cal U}(p_0,\\sigma)$ and for which there exists a unique\nequivalent martingale measure.\n\nThe set $\\cal S$ is obviously non-empty, since it contains at least the\nBlack and Scholes volatility function $\\sigma_t(x)=\\bar{\\sigma} x$. Moreover, as we will\nprove in the sequel, all volatilities functions of type $\\nu I$, $\\nu >0$, belong\nto $\\cal S$, with $I$ denoting the identity map. 
An interesting example of\nvolatility functions which do not belong to ${\\cal S}$ is instead provided in the appendix.\n\nWe now assume that we have chosen $\\sigma\\in {\\cal S}$ and the\ncorresponding equivalent martingale measure $Q^\\sigma$.\nSince $\\{Y_t\/B_t:t\\ge 0\\}$ is a martingale under such a measure, it easily\nfollows that under $Q^\\sigma$ the process $Y$ satisfies the SDE\n\\begin{eqnarray*}\nd Y_t &=& r Y_t \\ dt + \\bar{\\sigma} Y_t \\ d \\widetilde{W}_t, \\ \\ t \\in [i\\Delta, i\\Delta+\\epsilon), \\\\\nd Y_t &=& r Y_t \\ dt + \\sigma_t(Y_t) \\ d \\widetilde{W}_t, \\ \\ t \\in [i\\Delta+\\epsilon, (i+1)\\Delta),\n\\end{eqnarray*}\nwhere $\\widetilde{W}$ is a standard Brownian motion under\n$Q^\\sigma$.\n\nFurthermore, under the assumption that\ni) there are no-transaction costs, ii) the borrowing and lending rates are both equal to $r$, iii)\nshort selling is allowed with no restriction or penalty, and iv)\nthe stock is infinitely divisible and pays no dividends,\nthe unique no-arbitrage price for a given contingent claim\n$H\\in L^2(Q^\\sigma)$ is (see Harrison and Pliska (1981, 1983))\n\\begin{equation}\n\\label{optpr}\nV_t=\\frac{B_t}{B_T}E^{Q^\\sigma}\\left\\{\\left.H\\right| {\\cal F}_t \\right\\},\n\\end{equation}\nwhere $\\{{\\cal F}_t:t\\ge 0\\}$ denotes the filtration associated to the\nprocess $Y$.\n\nIn the special case of a European call option, the following are\ninteresting problems to solve.\n\\begin{problems} \\label{infsup}\nLet us assume that the given claim is a European call option, written on\nthe stock, with maturity $T$ and strike $K$. Find:\n\\begin{equation}\n\\label{optsiginf}\n\\inf_{\\epsilon>0,\\sigma\\in {\\cal S}} B_T^{-1}\nE^{Q^\\sigma}\\left\\{\\left.(Y_T-K)^+\\right| {\\cal F}_0 \\right\\},\n\\end{equation}\n\\begin{equation}\n\\label{optsigsup}\n\\sup_{\\epsilon>0,\\sigma\\in {\\cal S}} B_T^{-1}\nE^{Q^\\sigma}\\left\\{\\left.(Y_T-K)^+\\right| {\\cal F}_0 \\right\\}.\n\\end{equation}\n\\end{problems}\nSolving these problems is equivalent to finding the lowest and highest\ntheoretical price of the option for which the underlying stock price has lognormal\nmarginal distribution and normal log-returns on the grid ${\\cal T}^\\Delta$, with\nstandard deviations proportional to $\\bar{\\sigma}$.\n\nIf we denote by $V_{*}$ the value of the infimum in\n(\\ref{optsiginf}),\nand by $V^{*}$ the value of the supremum in (\\ref{optsigsup}),\nthe following inequalities obviously hold:\n\\begin{equation}\n\\label{ineqbs}\n (s_0 - Ke^{-rT})^+ \\le V_{*}\\leq V_{BS}(\\bar{\\sigma})\n \\le V^{*} \\le s_0,\n\\end{equation}\nwhere $V_{BS}(\\bar{\\sigma})$ denotes the option Black and Scholes price at time 0\nas determined by $\\bar{\\sigma}$, the volatility parameter in (\\ref{BeS}).\nIndeed, since $\\bar{\\sigma} I\\in {\\cal S}$, the central inequalities hold by\ndefinition of $V_{*}$ and $V^{*}$, whereas the first and\nthe last ones feature respectively the well known no-arbitrage lower and upper\nbounds for option prices.\n\nIn the next subsection we will show that the first and last inequalities in\n(\\ref{ineqbs}) are actually equalities. To prove this statement, it will be\nsufficient to restrict our analysis to the class of volatilities\n$\\{\\sigma_t(x) = \\nu x$, $\\nu > 0\\}$. 
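Before giving the formal argument, it may be useful to quantify the bounds in (\ref{ineqbs}) on a concrete example. Anticipating the result of the next subsection, namely that in the limit $\epsilon\rightarrow 0$ the time-$0$ price implied by the alternative model with $\sigma_t(x)=\nu x$ tends to the Black and Scholes price with volatility parameter $\nu$, the implied price sweeps essentially the whole no-arbitrage interval as $\nu$ ranges over $(0,+\infty)$. The fragment below is purely illustrative and all parameter values are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Illustration of (s0 - K e^{-rT})^+ <= V_* <= V_BS(sigma_bar) <= V^* <= s0,
# and of how Black-Scholes prices with volatility nu sweep that interval.
# All parameter values are arbitrary and used for illustration only.

def bs_call(s0, K, r, T, vol):
    d1 = (np.log(s0 / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return s0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

s0, K, r, T, sigma_bar = 100.0, 100.0, 0.05, 1.0, 0.20
lower, upper = max(s0 - K * np.exp(-r * T), 0.0), s0

print("no-arbitrage interval: [%.2f, %.2f]" % (lower, upper))
print("Black-Scholes price (sigma_bar = %.2f): %.2f"
      % (sigma_bar, bs_call(s0, K, r, T, sigma_bar)))
for nu in (0.01, 0.20, 1.0, 5.0):
    print("nu = %-5.2f  implied price = %.2f" % (nu, bs_call(s0, K, r, T, nu)))
\end{verbatim}
With these values the implied price moves from roughly the intrinsic-value bound, for very small $\nu$, towards $s_0$, for large $\nu$.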
This result is at first sight surprising.\nIndeed, one would naively expect that the difference\nbetween prices implied by models which are equivalent in the $\\Delta$--grid\nis bounded by a quantity that is somehow related to $\\Delta$, typically\n${\\cal O}(\\Delta^\\lambda)$ for some positive real $\\lambda$. In fact, by\nhalving the size of $\\Delta$, we double the discrete--time instants where\nthe models in our family are equivalent. Accordingly, we would expect the prices\nimplied by these now ``closer'' models to range in a narrower interval. However, as\nwe shall soon see, this is not the case.\n\n\\subsection{A fundamental case} \\label{fundamental}\nWe begin by stating the following.\n\\begin{lemma}\nIn the fundamental case where $\\sigma_t(x) = \\nu x$, $\\nu>0$, the\nprocess $Y^\\nu$ given by\n\\begin{eqnarray} \\label{sol:bes3}\nd Y_t^\\nu &=& u^{\\nu I}_t(Y_t,Y_{\\alpha(t)},\\alpha(t)) dt + \\nu \\ Y_t \\ dW_t,\n\\ \\ t \\in [i\\Delta + \\epsilon, (i+1)\\Delta), \\nonumber \\\\ \\\\ \\nonumber\nu^{\\nu I}_t(y,y_\\alpha,\\alpha) &=&\ny \\left[ \\inverse{4}(\\nu^2 - \\bar{\\sigma}^2) +\n\\frac{\\mu}{2}(\\frac{\\nu^2}{\\bar{\\sigma}^2} + 1) \\right]\n+ \\frac{y}{2(t-\\alpha)} (1-\\frac{\\nu^2}{\\bar{\\sigma}^2} ) \\ln\\frac{y}{y_\\alpha},\n\\\\ \\nonumber \\\\ \\nonumber\n d Y_t^\\nu &=& \\mu Y_t^\\nu dt + \\bar{\\sigma} Y_t^\\nu dW_t \\ \\ \\mbox{for} \\ \\ t \\in [i\\Delta,i\\Delta+ \\epsilon),\\ \\\n\\alpha(t) = i \\Delta \\ \\ \\mbox{for} \\ \\ t \\in [i\\Delta, \\ (i+1)\\Delta) \\\n\\end{eqnarray}\nsolves Problem~\\ref{fin-dim:pro} when $p_t$ is given by (\\ref{BeSlnd}).\nMoreover, the volatility function $\\sigma_t(x) = \\nu x$ belongs to $\\cal S$,\nfor any $\\nu>0$.\n\\end{lemma}\n{\\em Proof}.\nSince $Y^\\nu$ has the same marginal distribution\nas $S$ under $P$, it follows that $Y_t^\\nu>0$. Then, the process $Z_t = \\ln Y_t^\\nu$ is\nwell defined and, by It\\^o's formula, the SDE for $Z_t$ is piecewise linear in a narrow\nsense (in that its diffusion coefficient is purely deterministic), and hence\nadmits a unique strong solution which, for each $t\\in [j\\Delta,(j+1)\\Delta)$,\nis explicitly given by\n\\begin{eqnarray} \\label{intediff}\nZ_t &=& Z_{j\\Delta} + (\\mu -\\inverse{2}\\bar{\\sigma}^2)(t-j\\Delta) \\\\ \\nonumber\n& &\n+\\left\\{\n\\begin{array}{ll}\n\\bar{\\sigma}(W_t-W_{j\\Delta}) & t\\in [j\\Delta,j\\Delta+\\epsilon), \\\\ \\nonumber\n\\left(\\frac{t-j\\Delta}{\\epsilon}\\right)^{\\beta\/2}\n\\left[\\bar{\\sigma}(W_{j\\Delta+\\epsilon}-W_{j\\Delta})+ \\nu\\int_{j\\Delta+\\epsilon}^t\n\\left(\\frac{u-j\\Delta}{\\epsilon}\\right)^{-\\beta\/2} dW_u\\right]\n& t\\in [j\\Delta+\\epsilon,(j+1)\\Delta),\n\\end{array}\n\\right.\n\\end{eqnarray}\nwhere $\\beta = 1-\\frac{\\nu^2}{\\bar{\\sigma}^2}$.\n\nAs a consequence, the assumptions of Problem~\\ref{fin-dim:pro} are satisfied\nso that $u^{\\nu I}$ solves Problem~\\ref{fin-dim:pro} when $p_t$ is given by\n(\\ref{BeSlnd}). Moreover, the Girsanov change of measure from $P$ to $Q^{\\nu I}$\nis well defined since one can show that the Novikov condition is satisfied through\napplication of the ``tower property'' of conditional expectations.\nHence, the measure $Q^{\\nu I}$ exists unique and $\\nu I$ belongs to\n$\\cal S$ for each $\\nu > 0$. 
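For completeness, the distributional claim underlying the lemma can be verified directly from (\ref{intediff}). For $t\in[j\Delta+\epsilon,(j+1)\Delta)$, set $x=(t-j\Delta)/\epsilon$; the two Brownian contributions in (\ref{intediff}) are independent, so the stochastic term is Gaussian with variance
\begin{displaymath}
x^{\beta}\left[\bar{\sigma}^2\epsilon+\nu^2\epsilon\,\frac{x^{1-\beta}-1}{1-\beta}\right]
=x^{\beta}\left[\bar{\sigma}^2\epsilon+\bar{\sigma}^2\epsilon\left(x^{1-\beta}-1\right)\right]
=\bar{\sigma}^2\epsilon\,x=\bar{\sigma}^2(t-j\Delta),
\end{displaymath}
where we used $1-\beta=\nu^2/\bar{\sigma}^2$; for $t\in[j\Delta,j\Delta+\epsilon)$ the same expression is immediate from the Black and Scholes dynamics. Hence, for every $\nu>0$,
\begin{displaymath}
Z_t-Z_{j\Delta}\sim{\cal N}\left((\mu-\inverse{2}\bar{\sigma}^2)(t-j\Delta),\ \bar{\sigma}^2(t-j\Delta)\right),
\end{displaymath}
which, letting $t\uparrow(j+1)\Delta$, is exactly the log-return law (\ref{yret1}).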
For the uniqueness of the measure $Q^{\\nu I}$,\nwe refer for instance to Duffie (1996).\n\n\\begin{theorem}\nIn the fundamental case where $\\sigma_t(x) = \\nu x$, $\\nu>0$,\nthe unique no-arbitrage option price at time $t$ is the Black and Scholes price\n\\begin{equation}\n\\label{bslem}\nU^\\epsilon(t,\\nu)=Y_t^\\nu \\Phi(d_1) - K e^{-r(T-t)}\\Phi(d_2),\n\\end{equation}\nwhere\n\\begin{eqnarray*}\n&& d_1=\\frac{\\ln(Y_t^\\nu\/K)+(r+\\bar{\\sbar}^\\epsilon (t)^2\/2)(T-t)}{\\bar{\\sbar}^\\epsilon (t) \\sqrt{T-t}},\\\\ \\\\\n&& d_2=d_1(t) - \\bar{\\sbar}^\\epsilon (t) \\sqrt{T-t},\n\\end{eqnarray*}\n\\begin{equation}\n\\label{nubar}\n\\bar{\\sbar}^\\epsilon (t)=\n\\left\\{\n\\begin{array}{ll}\n\\sqrt{\\frac{\n\\frac{\\epsilon}{\\Delta}(\\bar{\\sigma}^2-\\nu^2)(T-\\alpha(t))+\\nu^2(T-\\alpha(t))\n+\\bar{\\sigma}^2(\\alpha(t)-t)}{T-t}} & t \\in [\\alpha(t),\\alpha(t)+\\epsilon)\n\\\\ \\sqrt{\\frac{\n\\frac{\\epsilon}{\\Delta}(\\bar{\\sigma}^2-\\nu^2)(T-\\alpha(t)-\\Delta)+\n\\nu^2(T-t)}{T-t}} & t \\in [\\alpha(t)+\\epsilon,\\alpha(t)+\\Delta)\n\\end{array}\n\\right.\n\\end{equation}\n\\end{theorem}\n{\\em Proof}.\nFrom the previous lemma, we infer the existence of a unique\nno-arbitrage option price that can be calculated through (\\ref{optpr}).\nThen (\\ref{bslem}) is obtained by noticing that under the equivalent martingale measure\n\\[ \\ln \\frac{Y_T^\\nu}{Y_t^\\nu}\n\\sim {\\cal N}\\left((r - \\inverse{2} \\bar{\\sbar}^\\epsilon (t)^2)(T-t) ,\\ \\bar{\\sbar}^\\epsilon\n(t)^2 (T-t)\\right), \\ t\\in [0,T],\\]\nwith $\\bar{\\sbar}^\\epsilon$ given by (\\ref{nubar}). This implies that the option price\nat time $t$ corresponding to $Y^\\nu$ is the Black and Scholes price with\nvolatility coefficient $\\bar{\\sbar}^\\epsilon(t)$, i.e., that (\\ref{bslem}) holds.\n\n\\mbox{}\\newline\nThe key point of our result is that, for any given volatility\ncoefficient $\\nu$, we are free to adjust the drift of the SDE defining the dynamics of\nthe stock--price process under the objective measure,\nin such a way that the resulting $Y^\\nu$ has the same distributional properties of\nthe Black and Scholes process on discrete-time dates. As opposed to this,\nthe risk--neutral valuation for pricing options {\\em imposes the drift} $r Y^\\nu$ to\nthe SDE followed by $Y^\\nu$ under the equivalent martingale measure.\nThis causes the option price implied by the alternative model to coincide, at first order\nin $\\epsilon$, with the Black and Scholes price with volatility parameter $\\nu$. In fact,\nimposing the drift $r Y^\\nu$ to $Y^\\nu$ leads to the same risk neutral process as that\nof Black and Scholes' (obtained by imposing the drift $rS$ to $S$), with the only difference\nthat $\\bar{\\sigma}$ is replaced by $\\nu$.\n\n\\begin{remark} {\\bf (Historical versus Implied volatility).}\nA first interesting property that can be deduced from this theorem\napplies in case one believes that option prices trade independently of the\nunderlying stock price. We have in fact been able to\nconstruct a stock price process, the process (\\ref{sol:bes3}), whose marginal\ndistribution and transition density depend on the volatility coefficient $\\bar{\\sigma}$,\nwhereas the corresponding option price, in the limit for $\\epsilon \\rightarrow 0$,\nonly depends on the volatility coefficient $\\bar{\\sbar}$. 
As a consequence, we can provide a\nconsistent theoretical framework which justifies the differences between\nhistorical and implied volatility that are commonly observed in real markets.\n\\end{remark}\n\nStraightforward application of the previous theorem leads to the main result\nof this section which is summarized in the following.\n\\begin{corollary} \\label{inf}\nThe solutions of problems\n(\\ref{optsiginf}) and (\\ref{optsigsup}) are\n\\begin{eqnarray}\n\\label{solinf}\n&V_{*}=(s_0-Ke^{-rT})^+\\nonumber \\\\ \\\\\n&V^{*}=s_0.\\nonumber\n\\end{eqnarray}\nMoreover, for any other candidate price $\\bar{V} \\in (V_{*},V^{*})$\nthere exist a volatility $\\nu$ and an $\\epsilon > 0$ such that\n\\begin{eqnarray*}\nU^\\epsilon(0,\\nu) = \\bar{V} .\n\\end{eqnarray*}\n\\end{corollary}\n{\\em Proof}.\nTo prove (\\ref{solinf}), we simply have to take the limit of expression\n(\\ref{bslem}) (with $t=0$) for $\\epsilon$ going to zero and $\\nu$ either going to zero\nor going to infinity, since\n\\begin{eqnarray*}\n&&\\lim_{v \\rightarrow 0} V_{BS}(v) = (s_0 - Ke^{-rT})^+, \\\\\n&&\\lim_{v \\rightarrow +\\infty} V_{BS}(v) = s_0,\n\\end{eqnarray*}\nand\n\\[ \\lim_{\\epsilon\\rightarrow 0} \\bar{\\sbar}^\\epsilon (0)=\\nu.\\]\nFinally, we remember that, ceteris paribus, $V_{BS}(v)$ is a strictly increasing\nfunction of $v$, so that $V_{BS}(v)=\\hat{V}$ has a unique solution for\n$V \\in (V_{*},V^{*})$, hence $U^\\epsilon(0,\\nu) = \\bar{V}$ has a solution in\n$(0,+\\infty)\\times (V_{*},V^{*})$.\n\n\\begin{remark} {\\bf (Taking $\\epsilon \\rightarrow 0$ ).}\nIt is possible to consider the limit for $\\epsilon \\rightarrow 0$ in the above\nexpressions so as to present our result in a simpler and more elegant way.\nHowever, the treatment with $\\epsilon$ does not involve limit considerations\nand permits to contain analytical effort, so that we decided to keep $\\epsilon > 0$.\nThis can be also useful in numerical implementations.\nWe just observe that, for $\\epsilon \\rightarrow 0$, since $\\beta<1$,\n(\\ref{intediff}) becomes\n\\begin{eqnarray*}\nZ_t &=& Z_{j\\Delta} + (\\mu -\\inverse{2}\\bar{\\sigma}^2)(t-j\\Delta) \\\\ \\nonumber\n& & +\n (t-j\\Delta)^{\\beta\/2} \\nu\\int_{j\\Delta}^t\n (u-j\\Delta)^{-\\beta\/2} dW_u\n\\ \\ \\ t\\in [j\\Delta ,(j+1)\\Delta).\n\\end{eqnarray*}\nThis process is well defined since the integral in the right-hand side exists finite a.s.\neven though its integrand diverges when $u \\rightarrow j \\Delta^+$.\n\nThe above equation can be better compared to the Black and Scholes\nprocess when written in differential form:\n\\begin{eqnarray*}\nd Z_t = (\\mu -\\inverse{2}\\bar{\\sigma}^2)\\ dt +\n\\frac{\\beta}{2}(t-j\\Delta)^{\\beta\/2-1} \\nu\\int_{j\\Delta}^t (u-j\\Delta)^{-\\beta\/2} dW_u\\ dt\n + \\nu \\ dW_t \\ \\ \\ t\\in [j\\Delta ,(j+1)\\Delta).\n\\end{eqnarray*}\nBy observing this last equation we can isolate three terms in the right-hand side.\nThe first term is the same drift as in the log--returns of the Black and Scholes\nprocess. The third term is the same as in the log--returns of the Black and Scholes\nprocess, but the volatility parameter $\\bar{\\sigma}$ is replaced by our $\\nu$.\nFinally, the central term is the term which is needed to have returns equal to\nthe returns in the Black and Scholes process even after changing the volatility\nfrom $\\bar{\\sigma}$ to $\\nu$. 
Note that this term goes to zero for $\\bar{\\sigma} = \\nu$.\nIt is this term that makes our process non-Markov outside the trading time grid.\n\\end{remark}\n\nThe interpretation of the previous theorem and corollary is as\nfollows. If we are given a discretely observed stock price, the particular\nway we use to ``complete'' the model with any of our continuous-time processes $Y$ has a\nheavy impact on the associated option price. The influence is in fact so\nrelevant that such a price can be arbitrarily close to either no-arbitrage bounds\nfor option prices.\n\nFrom an intuitive point of view, the reason why option prices can be so\ndifferent is because the time step in the grid ${\\cal T}^\\Delta$\nis never infinitesimal. In other words, continuous-time option prices\nwould simply reveal the differences existing at an infinitesimal level\namong all the stock price processes $Y^\\nu$.\n\n\\section{Option pricing and hedging in the real world}\nLet us now consider a trader who needs to price an option on a given stock.\nHis usual practice is to resort to continuous-time mathematics to fully\nexploit the richness of its theoretical results, and to model stock returns with\na normal distribution.\n\nLet us denote by $\\delta t$ the length of the smallest time interval\nwhen an actual transaction can occur. Such $\\delta t$ is the best realistic\napproximation of the infinitesimal time distance ``$dt$''.\n\nThe results of the previous section imply that\nthe geometric Brownian motion (\\ref{BeS}) is\njust one of the infinitely many processes $Y$ that possess the\nproperties required by the practitioner along intervals of equal length $\\delta t$.\nHowever, the basic equivalence in the description of\nthe stock price dynamics can not be extended to the corresponding option\nprices. Indeed, any real number in the interval $(V_{*},V^{*})$ can be\nviewed as the unique no-arbitrage option price for some process $Y$.\n\nWhich process $Y$ should then be chosen by the practitioner?\n\nA possible answer to this question can be provided,\nfor example, through the estimation of the option replication\nerror in discrete time.\nTo this end, for any $\\nu >0$, let us denote by $(\\xi^\\nu,\\eta^\\nu)$\nthe self-financing strategy that replicates in continuos time the option payoff for\nthe process $Y^\\nu$, where\n$\\xi_t^\\nu$ and $\\eta_t^\\nu$ are respectively interpreted as the number of stock\nshares and money units held at time $t$.\n\n\nThen, fixing a set of dates $\\tau_0=0<\\tau_1<\\cdots<\\tau_n=T$ such that\n$\\tau_i=i \\delta t$,\\footnote{\nAlthough fixed a priori, the set ${\\cal T}^\\Delta$\nintroduced earlier is arbitrarily chosen, so that we can set\n$\\Delta=\\delta t\/k$, with $k$ any positive integer implying that\n$\\{\\tau_1,...,\\tau_n\\} \\subset {\\cal T}^\\Delta$. 
}\nand, denoting the observed stock price at time $\\tau_j$ by $\\bar{S}_{\\tau_j}$,\n$j=0,\\ldots,n$, the replication error when hedging according to the\nstrategy $(\\xi^\\nu,\\eta^\\nu)$, starting from the endowment $U^\\epsilon(0,\\nu)$, is\n\\begin{eqnarray*}\n\\varepsilon(\\nu):=(\\bar{S}_{T}-K)^+ -U^\\epsilon(0,\\nu)-\\sum_{j=0}^{n-1}\n\\xi^\\nu_{\\tau_j}(\\bar{S}_{\\tau_{j+1}}-\\bar{S}_{\\tau_j})\n-\\sum_{j=0}^{n-1}\\eta^\\nu_{\\tau_j}(e^{r \\tau_{j+1}}-e^{r \\tau_j}).\n\\end{eqnarray*}\nAt this stage, we can solve, for example, an\noptimization problem where $\\varepsilon(\\nu)$ is minimized,\naccording to some criterion, over all $\\nu>0$.\nSuch procedure, however, is justified only to\nmeasure the performance of our continuous-time prices and strategies on the fixed set\nof discrete-time instants.\nMore generally, the issue of deriving a fair $\\delta t$-time option price and hedging\nstrategy should be tackled by resorting to the existing literature on incomplete markets.\n\nMany are the criterions one can choose from for pricing and hedging in\nincomplete markets. We mention for instance those of\nF\\\"{o}llmer and Sondermann (1986), F\\\"{o}llmer and Schweizer (1991),\nSchweizer (1988, 1991, 1993, 1994, 1995, 1996), Sch\\\"{a}l (1994), Bouleau and\nLamberton (1989), Barron and Jensen (1990), El Karoui, Jeanblanc-Picqu\\'{e}\nand Viswanathan (1991), Davis (1994), Frasson and Runggaldier (1997),\nEl Karoui and Quenez (1995), Frittelli (1996), Mercurio (1996), Mercurio\nand Vorst (1997), Bellini and Frittelli (1997), Frey (1998) and F\\\"ollmer and Leukert\n(1998).\n\nHowever, the purpose of this section is not to favor any particular approach.\nWe want, instead, to stress the following innovative feature in the theoretical\nproblem of option pricing. Instead of fixing a stock price process and then deriving\na fair option price and an ``optimal'' hedging strategy, we can in fact consider\na family of processes, that are somehow equivalent in the description of the\nstock price evolution, among which we can select a convenient one by means of our\nfavorite incomplete markets criterion.\n\n\nA comparison between the performances of the strategy $(\\xi^\\nu,\\eta^\\nu)$ and the hedging\nstrategy associated to any incomplete-market criterion is beyond the scope of the paper and is\nleft to future research.\n\n\\section{Conclusions}\nIn the present paper we consider an option-pricing application of a result which\nis based on the construction of nonlinear SDE's with densities evolving\nin a given finite-dimensional exponential family.\nPrecisely, we derive a family of stock price models that behave almost\nequivalently to that of Black and Scholes.\nAll such models share the same distributions for the stock price process and\nits log-returns along any previously fixed `trading time-grid'.\nTherefore, all these models can be viewed as equivalent in the description of the stock\nprice evolution.\n\nHowever, the continuous-time dynamics chosen to `complete' the model\nin between the instants of the trading time-grid,\nreflects heavily on the option price. 
The option price in fact can assume any value\nbetween the option intrinsic value and the underlying stock price.\n\nAs a conclusion, our result points out that no dynamics is the {\\em right}\none a priori, and that an incomplete-market criterion is needed to choose\namong all the different models.\nPractitioners with different criteria can still pick up a model\nfrom our family, so as to match their expectations or to minimize their exposures.\n\nEven though our results are based on the assumption of a lognormal distribution\nfor the stock price and a normal distribution for its log-returns, the generalization\nto many other distributions is possible. Notice, indeed, that\nthe results of Section 2 can be applied to any curve of densities in a given\nexponential family. However, the generalization is not straightforward and heavily relies\non the particular distributions which are considered.\n\n\n\\section*{Appendix}\nIn this appendix we consider the case $\\sigma_t(\\cdot) \\equiv \\nu \\neq 0$\nas an interesting example of a class of volatilities which do not belong to ${\\cal S}$.\nSuppose, for the sake of simplicity,\nthat we require only the marginal law of $Y$ to coincide with\nthe marginal law of $S$, ignoring the returns distributions.\nNow assume by contradiction that $\\nu \\in {\\cal S}$.\nThen by applying~(\\ref{sol:bes1}) we obtain a Markov process~$Y$\n\\begin{eqnarray} \\label{gaussy}\ndY_t = \\inverse{2} \\frac{\\nu^2}{Y_t}\n\\left[ \\zeta + 2 \\rho(t) \\log\\frac{Y_t}{s_0}\\right] \\ dt +\n\\frac{Y_t}{2t} \\left[\\log\\frac{Y_t}{s_0} - \\frac{\\zeta+1}{2 \\rho(t)}\\right]\n\\ dt + \\nu \\ dW_t \\ , \\ \\ \\epsilon \\le t \\le T \\ ,\n\\end{eqnarray}\nwith $Y_t=S_t$ for each $t \\in [0,\\epsilon]$.\n\nLet us focus on $t \\ge \\epsilon$.\nSince we now know that $Y_t$ and $S_t$ have the same\ndistribution under the objective probability measure, $Y_t$ is lognormally\ndistributed, and in particular $Y_t > 0$ for all $t$.\nSince $\\nu \\in {\\cal S}$, there exists an equivalent martingale measure,\nso that, under such a measure,\n\\begin{eqnarray*}\ndY_t = r Y_t \\ dt + \\nu \\ d\\widetilde{W}_t \\ ,\n\\end{eqnarray*}\nwhere $\\widetilde{W}$ is a standard Brownian motion under\nthe martingale measure. Now notice that this last SDE is a linear\nequation, so that its solution has a density whose\nsupport is the whole real line. In other terms, such solution\ncan be negative with positive probability at any fixed\ntime instant, as opposed to what we have seen\nfor $Y$ under the objective measure. However, there cannot be an equivalent measure\nthat transforms a process whose support is the positive halfline\ninto another one whose support is the whole real line. Therefore, we\nhave contradicted our assumption that $\\nu \\in {\\cal S}$.\n\nOur example is similar in spirit to that of Delbaen and Schachermayer (1995)\nwho consider a Bessel process (taking positive values at all times)\nwhich cannot be transformed into a Brownian motion. Some conditions\nfor the existence of an equivalent martingale measure are given\nfor example in Rydberg (1997).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}