diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjgti" "b/data_all_eng_slimpj/shuffled/split2/finalzzjgti" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjgti" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction.}\n\nLong-period comets experience little solar exposure and heating, which means they can be used to reveal conditions that existed during the formation and early evolution of the solar system (Delsemme 1977; Lowry \\&{} Fitzsimmons 2005). The main evidence supporting this argument is that the post-perihelion activity decreases fast after the cut-off of water sublimation at 3 AU (Fern\\'andez 2005), and then the activity usually stops until the next apparition. A few exceptional comets, however, displayed activity far beyond 3 AU, which can be explained by the sublimation of CO, but the processes involved are not well understood (Mazzotta Epifani et al. 2007). Because of this, cometary activity at large heliocentric distances has raised a considerable interest recently, addressing the question of how intact the matter in comets is (Lowry et al. 1999). For example, dust activity throughout the entire orbit could result in a continuous resurfacing of the nucleus, while the surface composition could also be altered. The observed diversity of nucleus colors (Luu 1993; Jewitt 2002) may indicate that this is indeed the case. On the other hand, comets with disguised distant activity will have larger mass loss rate per orbit than we estimate, leading to overestimated comet lifetimes (Lowry et al. 1999; Mazzotta Epifani et al. 2007) and underestimated replenishment rate of the zodiacal dust (Liou et al. 1995).\n\nIn recent years, a number of studies reported on short-period comets active between 3 and 7 AU (e.g. Lowry et al. 1999; Lowry \\&{} Fitzsimmons 2001, 2005; Lowry \\&{} Wiessman 2003; Snodgrass et al. 2006, 2008; Mazzotta Epifani et al. 2006, 2007). These surveys aimed at the investigation of the bared nucleus, but a surprisingly high number of comets showed comae and even dust tails at large ($R>3)$ heliocentric distances, where volatile sublimation is expected to be low. The activity of some long period comets is similar, with ocassionally long dust tails (Szab\\'o et al. 2001, 2002), while 11 Centaur objects are also known with cometary activity (see e.g. Rousselot 2008 and references therein). Chiron is known to be active at solar distances between 8 and 14 AU (Meech et al. 1997), and was seen to display considerable outgassing near aphelion (at 17.8--18.8 AU) between 1969 and 1977 (Bus et al. 2001). Meech et al. (2004) found that the Oort-cloud comet C\/1987 H1 (Shoemaker) displayed an extensive tail at all distances between 5 and 18 AU, which is, as of this writing, the most distant example of cometary activity.\n\nDiscovered at 7.2 AU from the Sun, C\/1995 O1 (Hale--Bopp) has been a prime target for cometary studies. In prediscovery images it had a faint coma 0\\farcm4 in diameter and a total magnitude of $\\sim$18 (McNaught \\&{} Cass 1995), while the dust production rate was $\\approx 500$ kg\/s (Fulle et al. 1998) at a solar distance of 13.1 AU. At 7.0 AU, NIR absorption of water ice was detected (Davies et al. 1997). At that time the activity was driven by CO production (Biver et al. 1996, Jewitt et al. 1996), which switched to a water-driven activity at around 3 AU (Biver et al. 1997, Weaver et al. 1997). Approaching the 0.9 AU perihelion distance, the dust production rate was $2\\times 10^6$ kg\/s (Jewitt \\&{} Matthews, 1999). 
The size distribution of the dust, especially in the jets, showed a dominance of $\\lesssim0.5\\ \\mu$m grains, smaller than in any other comets. This was indicated by the unprecedentedly large superheat ($T_c \/ T_{bb}$ between 1.5--2 [Hayward et al. 2000] or 1.5--1.8 [Gr\\\"un et al. 2001]), and the scattering albedo and polarization (Hayward et al. 2000). The water production was $~10^{31}$ molecule\/s, the largest value ever observed. The dust to gas ratio was very high, between 5 and 10, regardless of the solar distance (Colom et al. 1997, Lisse et al. 1997, Weaver et al. 1999). The production rates observed post-perihelion were similar to those observed pre-perihelion (Capria et al. 2002), foreshadowing long-lasting distant activity. We have indeed detected evidence for cometary activity at 25.7 AU from the Sun; this Letter presents our major findings based on broadband imaging in late 2007.\n\n\\section{Observations}\n\n\nNew observations were taken with the 2.3 m ANU telescope at the Siding Spring Observatory on 2007 October 20, 21 and 22. The solar distance of the comet was 25.7 AU. We took 9$\\times$240 s exposures in Johnson-Cousins $VR_C$ filters with a 2$\\times$2 binned image scale of 0\\farcs67\/pixel. The seeing was 2\\farcs0--2\\farcs5 on the three nigths (see Table 1 for exposure data and ephemerides). \n\nThe images were corrected in a standard fashion, including bias and flat-field correction and fringing correction of the $R_C$ images. Every night we aligned and co-added the images by fitting a coordinate grid to the stars, yielding a ``star field'' image for photometric calibrations. The images were then re-aligned with respect to the proper motion of the comet, to get untrailed ``comet'' images. In this step, the MPC ephemerides at the time of each observation were used to match the individual frames. Fig. \\ref{zsaner} shows the ``comet'' image on 2007 October 21. The estimated size of the coma is 180$\\times 10^3$ km, slightly elongated north\/southward (Fig. \\ref{ME}).\n\n\\subsection{Photometry}\n\nA good proxy to the dust content inside the coma is its brightness, which we have measured by aperture photometry in a single aperture of 14\\arcsec{} across. On October 21, the comet and field stars were calibrated with all-sky photometry using the SA 98 field of Landolt (1992), observed between airmasses 1.17 and 1.65 (Hale--Bopp was at $X=1.72$). Due to the slow apparent motion, we could use the same field stars as local standards on the other two nights as well. The measured brightnesses on October 21 were $V=20\\fm70\\pm0.1$, $R_C=20\\fm04\\pm0.1$. This corresponds to $Af\\rho=30~000$ cm, according to the definition by A'Hearn et al. (1984). For comparison, this value is twice as large as that for 29P\/Schwassmann-Wachmann 1 in outburst (Szab\\'o et al. 2002), and 3 times larger than for 174P\/Echeclus in outburst (Rousselot, 2008).\n\n\\subsection{Morphology}\n\nThe observed brightness can be converted to albedo $\\times$ dust surface, $a_RC$ (Eddington, 1910), which is the cross section of reflecting particles ($C$, in m$^2$) in the aperture, multiplied by the $a_R$ geometric albedo in the $R_C$ photometric band. It is calculated as \n\\begin{equation}\na_RC = {2.22\\times 10^{22} \\pi R^2 \\Delta^2 10^{0.4(m_{\\sun} - m_{\\rm comet})} \\over 10^{-0.4\\alpha\\beta}},\n\\end{equation} \nwhere $m_{\\sun}=-27\\fm11$, the apparent $R_C$ brightness of the Sun, and the $\\beta$ phase coefficient is usually assumed to be 0.04. 
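To make this conversion transparent we note how the numbers enter (a rough illustrative sketch only: the exact ephemerides are listed in Table 1, so the geocentric distance $\\Delta\\approx25$ AU adopted here is an assumed round value, and $\\alpha=2.2^{\\circ}$ is the phase angle quoted in the Discussion; $R$ and $\\Delta$ are in AU):\n\\[\na_RC \\approx {2.22\\times 10^{22}\\,\\pi\\,(25.7)^2\\,(25)^2\\,10^{0.4(-27.11-m_{\\rm comet})} \\over 10^{-0.4\\cdot 2.2\\cdot 0.04}}~{\\rm m}^2 .\n\\]\n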
Substituting the measured total brightness yields $a_RC\\approx 4300$ km$^2$. \nFor comparison, this cross section is 450 times larger than that of the dust cloud ejected in the Deep Impact experiment (e.g. Milani et al. 2007). After calibrating the image flux, the azimuthally averaged comet profile was determined from surface photometry. From this the local filling factor of the dust, $f$, can directly be expressed by replacing $m_{\\rm comet}$ with the $\\mu$ surface brightness:\n\\begin{equation}\na_Rf={1.34\\times 10^{17} R^2 10^{0.4(m_{\\sun} - \\mu)} \\over 10^{-0.4\\alpha\\beta}},\n\\end{equation}\nthat is the measured surface brightness relative to that of a reflecting surface with 1.0 albedo. We found that the surface brightness was 20\\fm3 in the inner coma, corresponding to $a_Rf \\approx 9\\times 10^{-6}$, which remained above $10^{-6}$ in the inner 70~000 km (Fig. 3). \n\n\\section{Discussion}\n\n\nAs we describe below, the observations are consistent with a CO-driven activity. Following Fern\\'andez (2005), the thermal equilibrium of the absorbed radiation, the emitted radiation and the latent heat lost by sublimation can be written as \n\\begin{equation}\n{F_{\\sun}\\over R^2}\\pi r^2 = 4 \\pi r^2 \\sigma T^4 + 4\\pi r^2 f \\zeta(T) l_s,\n\\end{equation} \nwhere $r$ is the radius of the nucleus, $T$ is the temperature, $f<1$ is the fraction of the active area, $\\zeta(T)$ is the gas production rate in molecules\/m$^2$\/s, and $l_s$ is the latent heat loss per one molecule. CO molecules deposited on CO ice (e.g. inclusions) and CO molecules deposited on H$_2$O can sublimate at large solar distances, due to their high volatility. By neglecting the CO ice inclusions, CO is condensed on water ice, for which $l_s=10^{-20}$ J\/molecule (Delsemme 1981), and $\\log\\zeta(T)\\approx 755.7\/T - 35.02$ (Mukai et al. 2001). The heat loss of such a nucleus is plotted in Fig. \\ref{sublimfig} for different values of $f$ and as a function of temperature.\n\nThe equivalent temperature of a freely sublimating ice globe ($f=1$) at 25.7 AU is 48.0 K, which is slightly less than 54.8 K for a blackbody ($f=0$), due to the sublimation of $2\\times 10^{19}$ molecules\/m$^2$\/s. This corresponds to $Q(CO)=4\\pi r^2\\zeta(T)=2.1\\times 10^{20} r^2$ molecule\/s. If the active area covers 1\\%{} of the surface, the equilibrium temperature is 53.1 K, $\\zeta(T)=6.2\\times 10^{20}$ molecule\/s\/m$^2$, $Q(CO)=8\\times 10^{19} r^2$ molecule\/s. The $Q(CO)$ production rate and the temperature just slightly depend on $f$, thus we can get a reasonable estimate assuming $f=0.01$. The thermal velocity of the gas is $u_g=\\sqrt{3kT\/m_{\\rm CO}} = 210$ m\/s, which is enough to carry off small dust grains ($m_{\\rm CO}$ is the mass of one CO molecule).\nThe acting drag force $F_D=\\pi a^2 u_g \\zeta(T) m_{\\rm CO},$ where $a$ is the size of the dust particle. Particles are carried off if the drag force exceeds the gravitation $F_D>F_G=(4 \\pi \/3)^2{G\\rho_n\\rho_p a^3 r},$ $\\rho_n$ and $\\rho_p$ are the density of the nucleus and the dust particle, respectively. Thus, the maximum radius of the escaping dust particles is \n\\begin{equation}\na_{\\rm max} = {9\\over 16 \\pi} {u_g \\zeta(T) m_{\\rm CO} \\over \\rho_n\\rho_p G r}.\n\\end{equation} \nFor an order-of-magnitude estimate we assumed $\\rho_n=1000$ kg\/m$^3$ (Capria et al. 2002), $\\rho_p=2500$ kg\/m$^3$, $r=15$ km (Meech et al. 
2004) or $r=30$ km (Fern\\'andez, 2000), leading to $a_{max}\\approx 100\\ \\mu$m, and $Q(CO)\\approx 1.7\\times 10^{28}\/$s$ = 790$ kg\/s in the case of the $r=15$ km nucleus and $Q(CO)\\approx 6.8\\times 10^{28}\/$s$ = 3160$ kg\/s if $r=30$ km.\n\n\nWith a more sophisticated model (the surface is dominated by water ice instead of CO on water, the majority of CO is present in inclusions, crystallization heats the nucleus, the nucleus rotates), Capria et al. (2002) predicted $Q(CO)=5\\times 10^{27}$ molecule\/s (equivalent to $230$ kg\/s) at 25 AU (see Fig. 4 in their paper). Assuming that the dust loading remained high ($m_{dust}\/m_{gas}=$1--10), the total dust production ranges from 230 to 2300 kg\/s. \nFor example, if 500 kg\/s dust is produced in the form of 1 $\\mu$m-sized particles, the projected area is 0.25 km$^2$. Assuming a 0.05 albedo results in a 12,000 m$^2$ excess of $a_RC$ every second. With these assumptions, the nucleus can produce the observed amount of matter in the coma within $\\sim$5 days. The measured radius of the coma and the time-scale of dust production needed gives a measure of dust ejection velocity as $90\\ 000$~km$\/5$ days$ \\approx 210$~m\/s, which is consistent with the thermal gas velocity derived at 25.7 AU.\n\n\nThis self-consistent picture is also supported by the brightness variation of Hale--Bopp between 10 AU and 26 AU. In Fig. \\ref{lcs}, we plot the observed brightness of Hale--Bopp (collected from ICQ and MPC bulletins) against the solar distance. For comparison, data for 6 dynamically young Oort-comets are also plotted (Meech et al. 2004). Hale--Bopp was consistently 3--5 magnitudes brighter than these Oort-comets at all distances $\\lesssim$15 AU. Beyond that, other comets quickly disappeared as opposed to Hale--Bopp, which kept its slow rate of fading. We also show a theoretical light curve predicted from the CO production curve by Capria et al. (2002), after scaling with $R^{-2}\\Delta^{-2}$ and a constant dust loading. The overall agreement indicates that the distant brightness change is consistent with a CO-driven activity.\n\nAn alternative explanation of the present activity of Hale--Bopp could be that the light halo is not a real coma but a preserved debris tail (e.g. Jenniskens et al. 1997; Sekanina et al., 2001).\nComets with periods over a few thousand years are unlikely to preserve dense debris tail, but, as Lyytinen \\& Jenniskens (2003) remarks, giant comets such as Hale--Bopp can be exception. At the current position, the orbital path of Hale--Bopp is almost parallel to the line of sight, with only 5$^\\circ$ inclination and a solar phase angle of 2.2$^\\circ$. Thus, a hypothetical thin, $\\sim$1--2 million km long debris tail would appear as a $\\sim$100--200 thousand km long ``tail'' in projection, which is approximately the diameter of the light halo around Hale--Bopp. However, this scenario is not likely, because the optically detected dust trails are all very thin, only $\\sim$10--20$\\times 10^3$ km wide, and their projected images always appear as a thin feature pointing out of the nucleus (Ishuguro et al. 2007; Sarugaku et al. 2007). Our observation of a nearly spherical light halo, with the nucleus approximately in the center, is not compatible with such a dust trail. \n\n\n\\section{Conclusion}\nComet Hale--Bopp has been the single most significant comet encountered by modern astronomy and 11 years after perihelion it still displays fascinating phenomena. 
The detected activity can be well explained by theoretical models invoking CO sublimation at large heliocentric distances. Compared to other young Oort-cloud comets, the long-term behaviour of Hale--Bopp seems to suggest genuine differences beyond the difference in nucleus size.\n\nThe main results of this paper can be summarized as follows:\n\\begin{enumerate}\n\\item{} We detected cometary activity of Hale--Bopp at 25.7 AU, which is the most distant activity detection so far. \n\\item{} Our analysis indicates that the extrapolation of the Capria et al. (2002) model works very well for the distant Hale--Bopp, which confirms the physical assumptions of this model.\n\\item{} Further observations with 8 m-class telescopes can help constrain the presence of gas in the coma, effects of superheat due to small dust particles, and, ultimately, the cessation of mass loss processes in Hale--Bopp.\n\\end{enumerate}\n\n\\acknowledgments This research was supported by the ``Bolyai J\\'anos'' Research Fellowship of the Hungarian Academy of Sciences and a Group of Eight European Fellowship (Gy.M.Sz.), a University of Sydney Research Fellowship (L.L.K.) and a Hungarian State ``E\\\"otv\\\"os'' Fellowship (K.S.).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Milky Way disc models}\nThe most elaborate model of the present day Milky Way is the Besan\\c{c}on Galaxy Model (BGM) including extinction, the bar, spiral arms and the warp. The most recent version of the BGM proposes an update of the initial mass function (IMF) and the star formation history (SFR) of the disc (Czekaj et al. 2014). Nevertheless there are still open issues to be solved concerning the degeneracy of the SFR and IMF and a consistent chemical abundance model. Our alternative local disc model (JJ-model) based on the kinematics of main sequence stars (Just \\& Jahrei{\\ss} 2010), the stellar content in the solar neighbourhood (Rybizki \\& Just 2015) and Sloan Digital Sky Survey (SDSS) star counts to the north Galactic pole (Just et al. 2011) has a significantly higher accuracy compared to the old BGM (Gao et al. 2013). In the JJ-model the local SFR, IMF, AVR (age--velocity dispersion relation) are determined self-consistently and it includes a simple chemical enrichment model. In order to extend the JJ-model over the full radial range of the disc, we have used the Jeans equation for the asymmetric drift to connect local dynamics with the radial scale-lengths of stellar sub-populations (Golubov et al. 2013). Based on RAVE (RAdial Velocity Experiment, Kordopatis et al. 2013) data we found an increasing scale-length with decreasing metallicity, which is consistent with a negative overall metallicity gradient of the disc. On the other hand Milky Way-like galaxies show a radial colour gradient of the disc to be bluer and younger in the outer part. Combining both observations immediately shows that the chemical enrichment in the inner disc must be faster\/larger compared to the outer disc. There are different paths of this inside-out growth of the disc. Models based on a Kennicutt-Schmidt law grow faster in the inner part of the disc due to higher densities. Alternatively, there may be a delay in star formation in the outer disc induced by the star formation threshold. It is a challenge to disentangle these different scenarios and to build a consistent disc model. Precise ages of main sequence stars (calibrated by astroseismology, e.g.) 
would be the silver bullet to solve this problem, but currently the more promising support comes from observations of abundances and abundance ratios of heavy elements for large stellar samples distributed over the full disc range. The $\\alpha$-enhancement is a good tracer for the enrichment timescale and can be used to infer the evolution history of the disc. It can also be used to disentangle the thin and thick disc without introducing a kinematic bias (see e.g. Lee et al. 2011).\n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig1}\n\\caption{Normalised present day age distributions of main sequence stars with different lifetimes in the solar neighbourhood. For comparison the normalised SFR and the corresponding gas infall rate are shown.}\n\\label{just-fig1}\n\\end{figure}\n\nObservational data of stellar populations appear in two fundamentally different kinds leading to very different constraints on the models. Star counts (like luminosity functions, colour-magnitude diagrams, metallicity distribution functions (MDF), or velocity distribution functions) provide quantitative information about the stellar populations. They require complete datasets or a detailed understanding of incompleteness. On the other hand, correlations (like the AVR, the $\\alpha$-enhancement as function of metallicity, or the asymmetric drift--metallicity relation) tell us something about the dominating physical processes of the disc evolution. They require an unbiased selection of stars (or a grasp of the biases).\n\n\\section{Chemical enrichment}\n\nMany analytic chemical evolution models (see Matteucci 2012 for an overview) are still local, annuli in case of the disc, and rely at least partly on the instantaneous recycling approximation (IRA). In order to overcome these restrictions, Sch\\\"onrich \\& Binney (2009) quantified the impact of radial mixing of stars and gas in an analytic model, whereas models based on numerical simulations start to incorporate a chemical evolution network to reproduce abundance distributions in detail (e.g. Minchev et al. 2014; Kubryk et al. 2015a,b).\n\nAs a first step to a consistent chemical enrichment in the framework of the JJ-model (default model A of Just \\& Jahrei{\\ss} 2010) we analyse the $\\alpha$-enhancement in a local one-zone model with gas infall. For comparison we use two local volume-complete datasets of main sequence stars, namely the Geneva-Copenhagen sample (GCS), where the $\\alpha$-enhancement was determined by Str\\\"omgren photometry (Casagrande 2011), and the Hipparcos sample, where the abundances were determined by high resolution spectra (Fuhrmann 2011). The chemical evolution depends on the SFR, the IMF and the gas infall rate. For comparison with the local sample, the local age distribution is needed additionally, since the dynamical heating given by the AVR results in an age-dependent vertical dilution factor measured by the vertical thickness of the corresponding sub-populations. \nThe SFR and the local age distribution for main sequence stars with different lifetimes, normalised to an average of 1\/Gyr, are shown in Fig.~\\ref{just-fig1} for the JJ-model. \nIn order to reproduce the age-selection for specific types of stars we feed the IMF and chemical enrichment of the JJ-model combined with a flat SFR into the {\\sc Galaxia} tool (Sharma et al. 2011) with PARSEC isochrones (Bressan et al. 2012) and create a large Mock sample (see Fig.~\\ref{just-fig2} for the CMD). 
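The age weighting applied in the next step can be sketched as follows (a schematic form only, not the exact normalisation of the JJ-model): a star of age $\\tau$ and main sequence lifetime $\\tau_{\\rm MS}$ enters the local sample with a weight\n\\[\nw(\\tau)\\;\\propto\\;\\frac{{\\rm SFR}(t_0-\\tau)}{h(\\tau)}\\,\\Theta(\\tau_{\\rm MS}-\\tau)\\, ,\n\\]\nwhere $t_0$ is the present age of the disc and $h(\\tau)$ is the age-dependent thickness of the sub-population, which follows from the AVR and provides the vertical dilution discussed above.\n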
Then we select the corresponding region in the CMD and weight the selected stars with the age distribution given by the JJ-model for the volume of interest (e.g. the yellow box in Fig.~\\ref{just-fig2} and the red line in Fig.~\\ref{just-fig1} to reproduce the local G dwarf sample).\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig2}\n\\caption{Model CMD with unweighted isochrones corresponding to a uniform age distribution. Red clump and lower main sequence selection boxes are shown in red and yellow, respectively.}\n\\label{just-fig2}\n\\end{figure}\n\nThere are two free parameters in the model. The high-mass slope of the IMF above 8\\,M$_{\\odot}$ was not determined in Rybizki \\& Just (2015), because there are no O- and early B-type stars in the solar neighbourhood. Secondly, the gas infall rate can be varied to reproduce the abundance distributions and the $\\alpha$-enhancement. The only boundary condition here is the present day surface density of gas. \n\nSome $\\alpha$-elements (we use oxygen and magnesium) are predominantly produced by core collapse supernovae (SN2) of massive stars. Due to the short lifetime of the progenitor stars, the IRA is a good approximation and we can use the observed O- and Mg-abundance distributions to derive the gas infall rate. The result of our fiducial model (with high-mass slope -2.7 for the IMF (the Salpeter IMF has a slope of -2.35) and SN2 yields of Fran\\c{c}ois et al. (2004) is shown in Fig.~\\ref{just-fig1} with the same scaling as the SFR. \n\nThe yields of O and Mg depend strongly on the high-mass slope of the IMF. Unfortunately, there are inconsistent SN2 yields published in the literature. This results in a degeneracy of the IMF slope and the yield set in the [O\/Mg] abundance ratio. An example, with the gas infall fixed to the fiducial case, is shown in Fig.~\\ref{just-fig3} with the empirically calibrated yields of Fran\\c{c}ois et al. (2004) and the theoretical yields of Chieffi \\& Limongi (2004). The Chieffi yields combined with an IMF slope of -2.7 (dashed lines) lead to the same [O\/Mg]$\\approx$0.12\\,dex (vertical offset in Fig.~\\ref{just-fig3}) as the Fran\\c{c}ois yields with an IMF slope of -2.3. On the other hand the oxygen yields depend much stronger on the IMF slope than the Mg yields. We conclude that it is required for a consistent disc model to have a more detailed look on the different $\\alpha$-elements (and clearly define, which elements are used to determine the $\\alpha$-enhancement) in order to fix the high-mass IMF slope and to determine the correct yields.\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig3}\n\\caption{The symbols show the $\\alpha$-enhancement for the datasets of Casagrande and Fuhrmann. The full lines (red for Mg, cyan for O) show the Fran\\c{c}ois yields combined with different IMF slopes (-2.3, -2.7, -3.0 from top to bottom). The dashed lines correspond to the Chieffi yields with an IMF slope of -2.7.}\n\\label{just-fig3}\n\\end{figure}\n\nAfter the determination of the gas infall rate and the IMF slope based on the $\\alpha$-element distribution, we derive in the next step the distribution in [Fe\/H]. The yields of supernovae type 1a (SN1a) are usually parametrised by the delay time distribution (DTD) and scaled by a number fraction of planetary nebulae (PN) exploding as SN1a. We have chosen a DTD with maximum at 1\\,Gyr and a decay timescale of 2.5\\,Gyr combined with a fraction of 0.2\\% exploding PNs. 
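In practice this means that the SN1a rate entering the iron enrichment is obtained, schematically (a sketch of the standard parametrisation, not necessarily the exact implementation), as\n\\[\nR_{\\rm Ia}(t)\\;=\\;f_{\\rm Ia}\\int_0^{t}R_{\\rm PN}(t-\\tau)\\,{\\rm DTD}(\\tau)\\,{\\rm d}\\tau\\, ,\\qquad f_{\\rm Ia}=2\\times10^{-3},\n\\]\nwith the DTD normalised to unity, $R_{\\rm PN}$ the formation rate of planetary nebulae following from the SFR and the IMF, and $f_{\\rm Ia}$ the quoted fraction of PNs exploding as SN1a.\n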
The resulting abundance distributions are smoothed by a rms scatter of 0.07\\,dex. In Fig.~\\ref{just-fig4} the [Mg\/H] and [Fe\/H] distributions are shown in comparison to the Casagrande and the Fuhrmann sample. There are still some issues to be solved (the low metallicity bump and the systematic shift at the high metallicity end), but the general element distributions and correlations can be reproduced in the framework of the local JJ-model.\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig4a}\n\\includegraphics[width=0.48\\textwidth]{just-fig4b}\n\\caption{Normalised [Mg\/H] and [Fe\/H] distributions compared to the datasets of Casagrande and Fuhrmann.}\n\\label{just-fig4}\n\\end{figure}\n\n\\section{Age distributions and vertical gradients}\n\nSince different types of stars have different age distributions due to different evolutionary stages, the resulting [Fe\/H] abundance distributions do also vary. The resulting [Fe\/H] distributions in the solar neighbourhood for lower main sequence, red clump, giant, and supergiant stars are shown in the top panel of Fig.~\\ref{just-fig5} as predicted by the JJ-model. Especially the red clump stars are very important, because they can be identified easily in the CMD and observed over large distances.\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig5a}\n\\includegraphics[width=0.48\\textwidth]{just-fig5b}\n\\includegraphics[width=0.48\\textwidth]{just-fig5c}\n\\caption{Top panel: [Fe\/H] distributions of different stellar types in the solar neighbourhood as predicted by the JJ-model.\nMiddle panel: Predicted [Fe\/H] distributions of red clump stars with increasing $|z|$ above the Galactic plane.\nBottom panel: Age distributions of red clump stars at different $|z|$.}\n\\label{just-fig5}\n\\end{figure}\nThe dynamical evolution of the thin disc results in vertical gradients of the stellar age distribution and the corresponding abundance distributions. The middle panel of Fig.~\\ref{just-fig5} shows the strong variation of the [Fe\/H] distribution of red clump stars with distance $|z|$ from the Galactic plane. The bottom panel of Fig.~\\ref{just-fig5} shows the corresponding age distributions. This demonstrates that the intrinsic structure of the thin disc alone leads to strong vertical gradients, e.g. a significant shift of the maximum by -0.2\\,dex at 900\\,pc. The increasing contribution of the thick disc with increasing distance to the mid-plane enhances the gradients additionally (but is not included here).\n\n\\section{Radial gradients and inside-out growth}\n\nThe extension of the chemo-dynamical disc model over the full radial distance range depends on a full 3-dimensional model and requires homogeneous datasets covering a large range of Galactocentric distances. With a Jeans analysis of the asymmetric drift of RAVE data we have shown that the radial scale-length depends on metallicity (Golubov et al. 2013). The radial scale-length decreases from 2.9\\,kpc at low metallicity to 1.6\\,kpc at super-solar metallicity and is essentially independent of colour along the main sequence (see Fig.~\\ref{just-fig6}). Only in the low metallicity bin there is a decline with colour, which may originate from the contribution of the thick disc. A consistent extrapolation of these radial scale-lengths in the framework of a disc formation and enrichment model, where the SFR, AVR, and gas infall depend on Galactocentric distance, can be tested and constraint by using the recent spectroscopic surveys RAVE (Kordopatis et al. 
2013), SDSS\/APOGEE (Hayden et al. 2014, Holtzman et al. 2015), and\nGaia-ESO (GES, Smiljanic et al. 2014; Mikolaitis et al. 2014). Radial and vertical metallicity gradients are well identified in these surveys and the meridional plane distribution of the $\\alpha$-enhancement and other abundance ratios can be derived. \nUltimately, adding the high-precision parallaxes and proper motions expected from the first full data release of the Gaia mission in summer 2017 will allow direct measurements of density profiles and kinematic properties over a couple of kpc in the disc.\nBased on these data we expect to be able to construct a self-consistent evolutionary thin disc model, which allows to determine the inside-out growth of the disc. An important task will also be to disentangle the thin and thick discs, and to quantify the impact of radial migration necessary to understand the chemical and dynamical properties of the stellar disc(s).\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{just-fig6} \n\\caption{Radial scale-lengths at different metallicity bins of RAVE data as determined by the Jeans analysis of the asymmetric drift (from Golubov et al. 2013). Data points are colour bins along the main sequence.}\n\\label{just-fig6}\n\\end{figure}\n\n\\acknowledgements\nThis work was supported by Sonderforschungsbereich SFB 881 \"The Milky Way System\" (sub-project A6) of the German Research Foundation (DFG). The work contains contributions of the undergraduate students Sarah Casura and Simon Sauer.\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe AdS\/CFT duality conjecture \\cite{malda0,klebanov,witten} has passed\nan impressive number of consistency checks \\cite{review}. However, among these\ntests only few are not relying in one or another way on structures enforced by supersymmetry and\/or conformal invariance. In this situation it appears worthwhile to further analyse any possible constraint set by the first principles of\nquantum field theory and to check, whether they are fulfilled by the corresponding dual partners in string theory\/supergravity.\n\nIn this sense the present letter is devoted to the concavity of the potential \nbetween static sources in a gauge theory. In the Euclidean formulation Osterwalder-Schrader reflection\npositivity \\cite{os} ensures this property for potentials derived from Wilson\nloops \\cite{seiler,bachas}. In the AdS\/CFT context the issue of concavity has been raised in ref.\\cite{olesen}. But the discussion so far has not taken into account the degree of freedom connected with the relative orientation of the static sources ($Q\\bar Q$) in internal space.\n\nWe will fill this gap by analysing in some detail the consequences of OS reflection positivity for potentials derived in standard manner from Wilson loops\nfor contours coupling both to the gauge bosons and to a set of scalar\nfields in the adjoint representation. We take the Wilson loop in the form\nsuggested in \\cite{rey,malda} and analysed in various ways in \\cite{gross}.\nFor the case where the gauge bosons and the scalars are just the bosonic\nfields of $D=4$, ${\\cal N}=4$ super-Yang-Mills theory it has been characterised\nas an object of BPS type \\cite{gross}.\n\nOur discussion closely follows \\cite{bachas}. The new input in our\npresentation is the handling of the contour parameter dependent coupling\nto the scalars, which is described by a curve on $S^5$. 
We also take care\nof the fact that the Wilson loop of refs.\\cite{malda,gross} is the trace of \na generically non-unitary matrix.\n\nThe virtue of the arising concavity condition lies in its inequality\nproperty. It has to be fulfilled both for the classical SUGRA approximation\nand for the expressions obtained by adding successive corrections. Therefore,\na violation at any level of approximation on the superstring side\nwould indicate a breakdown of the corresponding duality.\n\\section{Generalised concavity for potentials derived from BPS Wilson loops}\nWe start with the functional ($A\\dot x=A_{\\mu}\\dot x^{\\mu},~\\phi\\theta =\n\\phi _j\\theta ^j,~~~\\mu =0,..,3,~~j=4,..,9 $)\n\\begin{equation}\nU_{ab}[x,\\theta]=\\left (P\\exp\\int \\{iA(x(s))\\dot x (s)+\\phi (x(s))\\theta (s)\n\\vert \\dot x\\vert\\}ds \\right )_{ab}~.\n\\label{10}\n\\end{equation}\nThe expectation value of its trace for a closed path $x(s)$ yields the Wilson loop under investigation \\cite{malda,gross}. $\\theta (s)$ specifies the coupling to the scalars $\\phi $\nalong the contour $x(s)$.\n\nA reflection operation ${\\cal R}$ is defined by\n\\begin{eqnarray}\n({\\cal R} x)^1 (s)=-x^1 (s)~;~~~~({\\cal R} x)^{\\alpha}(s)=x^{\\alpha}(s),~~~ \\alpha\\neq 1\n\\nonumber\\\\\n{\\cal R} U_{ab}[x,\\theta ]=\\overline{U_{ab}[{\\cal R} x, \\theta]}~.\n\\label{11}\n\\end{eqnarray}\nIn addition, it is useful to define in connection with an isometry \n${\\cal I} \\in O(6)$ \nof $S^5$ acting on the path $\\theta (s)$\n\\begin{equation}\n{\\cal I} U_{ab}[x,\\theta ]~=~U_{ab}[x, {\\cal I}\\theta]~.\n\\label{11a}\n\\end{equation} \nFor linear combinations of $U$'s for different contours we extend ${\\cal R} $\nand ${\\cal I}$ linearly.\n\nUsing the hermiticity of the matrices $A,~\\phi$ in the form\n$\\overline{A}=A^t,~~\\overline{\\phi}=\\phi ^t$ we can reformulate the\nr.h.s in the second line of (\\ref{11}) applying the following steps \n\\begin{eqnarray}\n\\overline{U_{ab}[x,\\theta]}&=&\\left (\nP\\exp\\int _{s_i}^{s_f}\\{-iA^t(x(s))\\dot x (s)+\\phi ^t(x(s))\\theta (s)\\vert \\dot x\\vert\\}ds\\right )_{ab}\\nonumber\\\\\n&=&\\left (\n\\hat{P}\\exp\\int _{s_i}^{s_f}\\{-iA(x(s))\\dot x (s)+\\phi (x(s))\\theta (s)\\vert \\dot x\\vert\\}ds\\right )_{ba}~.\n\\label{12}\n\\end{eqnarray}\nHere $P,~\\hat{P}$ denote ordering of matrices from right to left with\nincreasing\/decreasing argument $s$. $\\hat{P}$ applied to the path $x$ yields the\nsame result as $P$ applied to the backtracking path\n\\begin{equation}\n({\\cal B} x)(s)~=~x(s_f+s_i-s),~~~({\\cal B} \\theta )(s)~=~\\theta (s_f+s_i-s)~.\n\\label{13}\n\\end{equation}\nTherefore, we get\n\\begin{equation}\n\\overline{U_{ab}[x,\\theta]}~=~U_{ba}[{\\cal B} x,{\\cal B}\\theta]~.\n\\label{14}\n\\end{equation}\nThis, combined with (\\ref{11}),(\\ref{11a}) yields finally\n\\begin{equation}\n{\\cal R} {\\cal I} U_{ab}[x,\\theta ]~=~U_{ba}[{\\cal B}{\\cal R} x,{\\cal B}{\\cal I}\\theta]~.\n\\label{15}\n\\end{equation}\nIt is worth pointing out that for the result (\\ref{14}) the presence\/absence of the\nfactor $i$ in front of the $A$ and $\\phi $ term in $U$ is crucial. One could\nconsider this as another argument for the choice favoured by the investigations\nof ref. \\cite{gross}.\\\\\n\nWe now turn to a derivation of the basic Osterwalder-Schrader positivity\ncondition in a streamlined form within the continuum functional integral\nformulation. 
All steps can be made rigorously by a translation into a lattice\nversion with local and nearest neighbour interactions.\n\nLet denote $H_{\\pm}=\\{x^{\\mu}\\vert \\pm x^1>0\\},~~~H_{0}=\\{x^{\\mu}\\vert \nx^1=0\\}$. Then we consider for a functional of two paths \n$x^{(1)},x^{(2)}\\in H_+$\n\\begin{equation}\nf[x^{(1)},\\theta ^{(1)};x^{(2)},\\theta ^{(2)}]~=~U_{ab}[x^{(1)},\\theta ^{(1)}]\n~+~\\lambda U_{ab}[x^{(2)},\\theta ^{(2)}],~~~~\\lambda ~~\\mbox{real}~,\n\\label{15a}\n\\end{equation}\n\\begin{eqnarray}\n\\langle f[x,\\theta ]{\\cal R} {\\cal I} f[x,\\theta ]\\rangle&=&\\int {\\cal D} A{\\cal D} \\phi\nf[x,\\theta ]\\overline{f[{\\cal R} x,{\\cal I}\\theta ]}~e^{-S}\n\\\\\n&=&\\int {\\cal D} A^{(0)}{\\cal D} \\phi ^{(0)}~e^{-S_0}\\nonumber\\\\\n&\\cdot &\\int _{(b.c.)}{\\cal D} A^{(+)}{\\cal D} \\phi ^{(+)} f[x,\\theta ]~e^{-S_+}\\cdot\n\\int _{(b.c.)}{\\cal D} A ^{(-)}{\\cal D} \\phi ^{(-)}\\overline{f[{\\cal R} x,{\\cal I}\\theta ]}~e^{-S_-}~.\n\\nonumber\n\\label{15b} \n\\end{eqnarray}\n$\\pm ,~0$ on the fields as well as on the action indicates that\nit refers to points in $H_{\\pm},~H_{0}$. The index for the two paths has been\ndropped, and the boundary condition $(b.c.)$ is\n$$A^{(\\pm )}\\vert _{\\partial H_{\\pm}}=A^{(0)},~~~\\phi ^{(\\pm )}\\vert\n_{\\partial H_{\\pm}}=\\phi ^{(0)}~.$$ \nWith the abbreviation\n\\begin{equation}\nh[A^{(0)},\\phi ^{(0)},x,\\theta ]~=~\\int _{(b.c.)}{\\cal D} A^{(+)}{\\cal D} \\phi ^{(+)}\nf[x,\\theta ]~e^{-S_+}~,\n\\label{15c}\n\\end{equation}\nthe standard reflection properties of the action imply\n\\begin{equation}\n\\langle f[x,\\theta ]{\\cal R} {\\cal I} f[x,\\theta ]\\rangle ~=~\\int {\\cal D} A^{(0)}{\\cal D} \\phi\n^{(0)}~e^{-S_0}~h[A^{(0)},\\phi ^{(0)},x,\\theta ]\\cdot \\overline{h[A^{(0)},\\phi\n ^{(0)},x,{\\cal I}\\theta ] }~.\n\\label{15d}\n\\end{equation}\nFor ${\\cal I} = ${\\bf 1}$ $ the integrand of the final integration over the fields\nin the reflection hyperplane $H_0$ is non-negative, hence\n\\begin{equation}\n\\langle f[x,\\theta ]{\\cal R} f[x,\\theta ]\\rangle ~\\geq ~0~.\n\\label{15e}\n\\end{equation}\n\nFor nontrivial ${\\cal I} $ the situation is by far more involved. If there would be\nno boundary condition, the result of the half-space functional integral in\n(\\ref{15c}) would be invariant with respect to \n$\\theta\\rightarrow{\\cal I}\\theta $. A given boundary configuration in general breaks\n$O(6)$ invariance on $S^5$. But due to the $O(6)$ invariance of the action,\nthe functional integration measure and the $\\phi\\theta$ coupling in $f$, we have instead\n\\begin{equation}\nh[A^{(0)},{\\cal I}\\phi ^{(0)},x,{\\cal I}\\theta ]~=~h[A^{(0)},\\phi ^{(0)},x,\\theta ]~.\n\\label{15g}\n\\end{equation}\nThis implies\n\\begin{eqnarray}\n\\langle f[x,\\theta ]{\\cal R} {\\cal I} f[x,\\theta ]\\rangle ~=~\\int {\\cal D} A^{(0)}{\\cal D} \\phi\n^{(0)}~e^{-S_0}\\label{15h} \\\\\n\\cdot~\\frac{1}{2}\\left (~h[A^{(0)},\\phi ^{(0)},x,\\theta ]~\\overline{\nh[A^{(0)},\\phi ^{(0)},x,{\\cal I}\\theta ] }\\right .&+&\\left . h[A^{(0)},\\phi ^{(0)},\nx,{\\cal I} ^{-1}\\theta ]~\\overline{h[A^{(0)},\\phi ^{(0)},x,\\theta ]}~\\right ) ~,\n\\nonumber\n\\end{eqnarray}\nwhich says us only ($R$ real numbers)\n\\begin{equation}\n\\langle f[x,\\theta ]{\\cal R} {\\cal I} f[x,\\theta ]\\rangle \\in R ~~~~\\mbox{for}~~\n{\\cal I} ^2={\\bf 1}~.\n\\label{15i}\n\\end{equation} \nThe statements (\\ref{15e}) and (\\ref{15i}) are rigorous ones. 
\nBeyond them we found no real proof for sharpening (\\ref{15i}) to an inequality\nof the type (\\ref{15e}) for some nontrivial ${\\cal I} $. For later application\nto the estimate of rectangular Wilson loops we are in particular interested\nin nontrivial isometries keeping the, by assumption common, $S^5 $ position\nof the endpoints of the contours on $H_0 $ fixed. Then ${\\cal I} ={\\cal I}_{\\pi}$,\ndenoting a rotation around this fixpoint with angle $\\pi $, are the only\ncandidates.\n\nAt least for boundary fields $\\phi ^{(0)}$ in (\\ref{15c}), which as a map\n$R^3\\rightarrow S^5$ have a homogeneous distribution of their image points on\n$S^5$, we can expect that for contours of the type discussed in connection \nwith fig.1 below in the limit of large $T$ the orientation of $\\theta $\nrelative to $\\phi ^{(0)}$ becomes unimportant. Therefore, we conjecture for this special situation \n\\begin{equation}\n\\langle f[x,\\theta ]{\\cal R} {\\cal I}_{\\pi} f[x,\\theta ]\\rangle ~\\geq ~0~.\n\\label{15j}\n\\end{equation}\nFrom (\\ref{15e}) and (\\ref{15j}) for any real $\\lambda $ in (\\ref{15a})\nwe get via the \nstandard derivation of Schwarz-type inequalities\n\\begin{eqnarray}\n\\langle U_{ab}[x^{(1)},\\theta ^{(1)} ]~{\\cal R}{\\cal I} U_{ab}[x^{(2)},\\theta ^{(2)} ]\n\\rangle ^2&\\leq &\\langle U_{ab}[x^{(1)},\\theta ^{(1)} ]~{\\cal R}{\\cal I}\nU_{ab}[x^{(1)},\\theta ^{(1)}]\\rangle \\label{15f}\\\\\n&&~~~~~~~~~~~~\\cdot\\langle U_{ab}[x^{(2)},\\theta ^{(2)} ]~{\\cal R}{\\cal I}\nU_{ab}[x^{(2)},\\theta ^{(2)}]\\rangle ~.\n\\nonumber\n\\end{eqnarray}\nThis is a rigorous result for ${\\cal I} ={\\bf 1}$ and a conjecture for ${\\cal I} ={\\cal I} _{\\pi}$. \n\\\\ \n\nLet us continue with the discussion of a Wilson loop for a\nclosed contour which crosses the reflection hyperplane twice and which\nis the result of going first along $x^-\\in H_-$ and then along $x^+\\in H_+$.\nIn addition we restrict to cases of coinciding $S^5$ position at the\nintersection points with $H_0$ and treat in parallel ${\\cal I} ={\\bf 1},{\\cal I} _{\\pi}$\n\\begin{eqnarray}\nW[x^+\\circ x^-,\\theta ^+\\circ \\theta ^-]~=~\\sum _{ab}\n\\langle U_{ab}[x^+,\\theta ^+]~U_{ba}[x^-,\\theta ^-]\\rangle\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\label{19}\\\\\n=~\\sum _{ab}\\langle U_{ab}[x^+,\\theta ^+]~{\\cal R}{\\cal I}\\overline{U_{ba}[{\\cal R} x^-,{\\cal I} ^{-1}\\theta ^-]}\\rangle ~~~~~~~~~~~~~~~~~~~~~~~~~\\nonumber\\\\\n\\leq \\sum _{ab}\\langle U_{ab}[x^+,\\theta ^+]~{\\cal R}{\\cal I} U_{ab}[x^+,\\theta^+]\\rangle ^{\\frac{1}{2}}\\langle\\overline{U_{ba}[{\\cal R} x^-,{\\cal I} ^{-1}\\theta ^-]}~\n{\\cal R}{\\cal I} \\overline{U_{ba}[{\\cal R} x^-,{\\cal I} ^{-1}\\theta ^-]}\\rangle ^{\\frac{1}{2}} \n\\nonumber\\\\\n\\leq (\n\\sum _{ab}\\langle U_{ab}[x^+,\\theta ^+]~{\\cal R}{\\cal I} U_{ab}[x^+,\\theta ^+]\\rangle\n)^{\\frac{1}{2}}\n~(\\sum _{cd}\\langle \\overline{U_{cd}[{\\cal R} x^-,{\\cal I} ^{-1}\\theta ^-]}~{\\cal R}{\\cal I} \n\\overline{U_{cd}[{\\cal R} x^-,{\\cal I} ^{-1}\\theta ^-]}\\rangle\n)^{\\frac{1}{2}}.\n\\nonumber\n\\end{eqnarray}\nWe have used (\\ref{11}), (\\ref{11a}), ${\\cal R}\\cR x=x$, (\\ref{15f}) and the usual Schwarz inequality\nin the last step. 
Now with (\\ref{14}),(\\ref{15}) we get\n\\begin{eqnarray}\nW[x^+\\circ x^-,\\theta ^+\\circ \\theta ^-]\n&\\leq &\\left (\nW[x^+\\circ{\\cal B}{\\cal R} x^+,\\theta ^+\\circ {\\cal B}{\\cal I}\\theta ^+]\n\\right )^{\\frac{1}{2}}\\nonumber\\\\ \n&&~~~~~~~\\cdot ~\\left (\nW[{\\cal B}{\\cal R} x^-\\circ x^-,{\\cal B} {\\cal I} ^{-1}\\theta ^-\\circ \\theta ^-]\\right )^\n{\\frac{1}{2}}~.\n\\label{20}\n\\end{eqnarray} \n\\begin{figure}\n\\begin{center}\n\\mbox{\\epsfig{file=fig1.eps, width=100mm}}\n\\end{center}\n\\noindent {\\bf Fig.1}\\ \\ {\\it From left to right the contours\n$x^+\\circ x^-,~~x^+\\circ {\\cal B}{\\cal R} x^+,~~{\\cal B}{\\cal R} x^-\\circ x^-$.}\n\\end{figure}\n\nTo evaluate the potential between two static sources ($Q\\bar Q $) separated\nby the distance $L$ and located at fixed $S^5 $-positions $\\theta _Q,~\\theta _\n{\\bar Q}$ we need Wilson loops for rectangular contours of extension $L\\times T$ in the large $T$-limit. We choose the $S^5$-position on the two $L$-sides\nlinearly interpolating between $\\theta _Q$ and $\\theta _{\\bar Q}$ on the corresponding great circle. For this restricted set of contours the Wilson loop becomes a function of $L,~T$ and the angle between $\\theta _Q$ and $\\theta _{\\bar Q}$, called $\\Theta $. \n\nIn addition it is useful to restrict ourselves to contours which are situated \nin planes orthogonal to the reflection hyperplane and with $T$-sides running\nparallel to it in a distance $\\frac{L\\pm\\delta}{2}$, see fig.1. Then \n${\\cal I} ={\\cal I}_{\\pi}$ reflects $\\theta ^{\\pm}(s)$, \nwhich both lie on the great circle through $\\theta _Q$ and $\\theta _{\\bar Q}$,\nwith respect to the common $S^5$-position of the points $A$ and $B$, see \nfig.1. As a consequence, (\\ref{20}) implies\n\\begin{equation}\nW(L,T,\\Theta)~\\leq ~\\left (W(L-\\delta,T,\\frac{L-\\delta}{L}\\Theta )\\right )^{\\frac{1}{2}}~\\left (W(L+\\delta,T,\\frac{L+\\delta}{L}\\Theta )\\right )^{\\frac{1}{2}}~,\n\\label{21}\n\\end{equation}\nwhich by standard reasoning yields for the static potential \n\\begin{equation}\nV(L,\\Theta )~\\geq ~\\frac{1}{2}\\left (V(L-\\delta ,\\frac{L-\\delta}{L}\\Theta )~+~\nV(L+\\delta ,\\frac{L+\\delta}{L}\\Theta )\\right )~.\n\\label{22}\n\\end{equation}\nThe last inequality implies the local statement \n$\\frac{d^2}{d\\delta ^2}V(L+\\delta ,\\frac{L+\\delta }{L}\\Theta )\\leq 0$, i.e.\n\\begin{equation}\n\\left ( L^2~\\frac{\\partial ^2}{\\partial L^2}~+~2L\\Theta ~\\frac{\\partial ^2}{\\partial L\\partial \\Theta }~+~\\Theta ^2~\\frac{\\partial ^2}{\\partial \\Theta ^2}\\right )~V(L,\\Theta )~\\leq 0~.\n\\label{24}\n\\end{equation}\nIt means concavity on each straight line across the origin, in the relevant\npart of the $(L,\\Theta )$-plane, $00$ guaranteed by theorem 1 of ref.\\cite{kinar}.\n\\footnote{Our $f$ and $g$ are called $f^2$ and $g^2$ in that paper.}\n\nTherefore, for $\\Theta =0$ standard concavity of $Q\\bar Q$-potentials with respect to the \ndistance in usual space is guaranteed for the wide class of SUGRA backgrounds\ncovered by theorem 1 of ref.\\cite{kinar}.\n\nHowever, due to the more complicated structure of the l.h.s. of (\\ref{31})\nfor $\\Theta\\neq 0$ we did not found a similar general statement in the\ngeneric case. We can only start checking (\\ref{24}) case by case.\\\\\n\nAs our first example we consider the original calculation of Maldacena\n\\cite{malda} for the $AdS_5\\times S^5$ background. 
The result was\n($R^2=\\sqrt{2g^2_{YM}N}$)\n\\begin{equation}\nV(L,\\Theta )~=~-~\\frac{2R^2}{\\pi}\\frac{F(\\Theta )}{L}~,\n\\label{32}\n\\end{equation}\nwith\n\\begin{eqnarray}\nF(\\Theta )&=&(1-l^2)^{\\frac{3}{2}}\\left ( \\int_1^{\\infty}\\frac{dy}{\ny^2\\sqrt{(y^2-1)(y^2+1-l^2)}}\\right )^2~,\\nonumber\\\\\n\\Theta &=&2l\\int _1^{\\infty}\\frac{dy}{\\sqrt{(y^2-1)(y^2+1-l^2)}}~.\n\\label{33}\n\\end{eqnarray}\nDue to this special structure ($L\\frac{\\partial V}{\\partial L}=-V,~L^2\\frac\n{\\partial ^2}{\\partial L^2}V=2V,~ \\frac{\\partial \\Theta}{\\partial u_0}=0$), (\\ref{24}) is equivalent to\n\\begin{equation}\n\\Theta ^3~\\frac{d^2}{d\\Theta ^2}\\left (\\frac{F}{\\Theta }\\right )~\\geq 0~.\n\\label{34}\n\\end{equation}\nA numerical calculation of $\\frac{F}{\\Theta }$ confirms (\\ref{34}) clearly,\nsee fig.2.\\\\\n\nNext we discuss the large $L$ confining potential including internal\nspace dependence and $\\alpha ^{\\prime} $ corrections of the background derived in\n\\cite{dorn}. It has the form ($\\gamma =\\frac{1}{8}\\zeta (3)R^{-6}$, $T$\ntemperature parameter)\n\\begin{equation}\nV(L,\\Theta )~=~\\frac{\\pi R^2T^2}{2}(1-\\frac{265}{8}\\gamma)\\cdot L~+~\n\\frac{R^2}{4\\pi}(1+\\frac{15}{8}\\gamma )~\\frac{\\Theta ^2}{L}~+~O(1\/L^3)~.\n\\label{35}\n\\end{equation}\n\\begin{figure}\n\\begin{center}\n\\mbox{\\epsfig{file=fig2.eps, width=80mm}}\n\\end{center}\n\\noindent {\\bf Fig.2}\\ \\ {\\it $\\frac{F}{\\Theta}$ as a function of $\\Theta$.\nUse has been made of the representation in terms of elliptic integrals given\nin \\cite{malda}.}\n\\end{figure}\nAlthough this potential for $\\Theta \\neq 0$ violates naive concavity\n$\\frac{\\partial ^2V}{\\partial L^2}\\leq 0$, there is $no$ conflict with\nthe correctly generalised concavity (\\ref{24}). Applied\nto (\\ref{35}) the differential operator just produces zero.\n\\section{Concluding remarks}\nThe $Q\\bar Q$-potential derived \\cite{malda} from the classical SUGRA approximation\nfor the type IIB string in $AdS_5\\times S^5$ fulfils our generalised\nconcavity condition at $\\Theta \\geq 0$. This adds another consistency check\nof this most studied case within the AdS\/CFT duality. \n\nPotentials have been almost completely studied only for $\\Theta =0$ in other\nbackgrounds. At least partly, this might\nbe due to the wisdom to approach in some way QCD, where after all there is no place for a parameter like this angle between different orientations in $S^5$.\nHowever, one has to keep in mind that this goal, in the approaches discussed so far, requires some additional limiting procedure. Before the limit the full 10-dimensionality inherited by the string is still present. Fluctuation determinants\nin all 10 directions have to be taken into account for quantum corrections\n\\cite{olesen,theisen} and the $\\Theta $-dependence of the potentials is of \ncourse not switched off. \n\nAlthough we proved in classical SUGRA approximation monotony in $L$ and\n$\\Theta $ as well as concavity at $\\Theta =0$\nfor a whole class of backgrounds, we were not able to get a similar general\nresult on concavity for $\\Theta >0$. Further work is needed to decide, whether at all\ngeneral statements for $\\Theta >0$ are possible. Alternatively one should\nperform case by case studies for backgrounds derived e.g. 
from\nrotating branes \\cite{russo}, type zero strings \\cite{tseytlin} or nonsupersymmetric\nsolutions of type IIB string theory \\cite{sfetsos}.\n\nOn the field theory side further work is necessary to really prove\nthe conjectered inequality (\\ref{15j}), otherwise the available set of\nrigorous constraints on the $L$ $and$ $\\Theta $ dependent potential,\nbeyond the standard concavity at $\\Theta =0$, would contain only the very mild\ncondition (\\ref{24b}).\n\\\\[10mm]\n{\\bf Acknowledgement}\\\\\nH.D. thanks G. Bali, H.-J. Otto and C. Preitschopf for\nuseful related discussions.\nThe work of V.D.P. was supported by GRACENAS grant, project No\n97-6.2-34; RFBR-DFG grant, project No 96-02-00180 and RFBR grant,\nproject No 99-02-16617.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and summary}\nThe pions ($\\pi^+,\\pi^0,\\pi^-$) are the Goldstone bosons of spontaneous chiral \nsymmetry breaking in QCD. Their strong interaction dynamics at low energies \ncan therefore be calculated systematically (and accurately) with chiral \nperturbation theory in form of a loop expansion based on an effective chiral\nLagrangian. The very accurate two-loop predictions \\cite{cola} for the S-wave \n$\\pi\\pi$-scattering lengths, $a_0=(0.220\\pm 0.005)m_\\pi^{-1}$ and $a_2=(-0.044\n\\pm 0.001)m_\\pi^{-1}$, have been confirmed experimentally by analyzing the \n$\\pi\\pi$ final-state interaction effects occurring in various (rare) charged \nkaon decay modes \\cite{bnl,batley,cusp}. Electromagnetic processes with pions \noffer further possibilities to test chiral perturbation theory. For example, \npion Compton scattering $\\pi^- \\gamma \\to\\pi^- \\gamma$ allows one to extract \nthe electric and magnetic polarizabilities ($\\alpha_\\pi$ and $\\beta_\\pi$) of the \ncharged pion. Chiral perturbation theory at two-loop order gives for the \ndominant pion polarizability difference the firm prediction $\\alpha_\\pi-\\beta_\\pi\n=(5.7\\pm1.0)\\cdot 10^{-4}\\,$fm$^3$ \\cite{gasser}. It is however in conflict with \nthe existing experimental results from Serpukhov $\\alpha_\\pi-\\beta_\\pi=(15.6\\pm \n7.8)\\cdot 10^{-4}\\,$fm$^3$ \\cite{serpukov} and MAMI $\\alpha_\\pi-\\beta_\\pi=(11.6\n\\pm 3.4)\\cdot 10^{-4}\\,$fm$^3$ \\cite{mainz} which amount to values more than \ntwice as large. Certainly, these existing experimental determinations of \n$\\alpha_\\pi-\\beta_\\pi$ raise doubts about their correctness since they violate\nthe chiral low-energy theorem notably by a factor 2. It is worth to note that a \nrecent dispersive analysis \\cite{mouss} of the Belle data for $\\gamma \\gamma \n\\to \\pi^+\\pi^-$ gives the fit value $\\alpha_\\pi-\\beta_\\pi=4.7\\cdot 10^{-4}\\,\n$fm$^3$, compatible with chiral perturbation theory. \n\nIn that contradictory situation it is promising that the ongoing COMPASS \nexperiment \\cite{compass} at CERN aims at remeasuring the pion \npolarizabilities, $\\alpha_\\pi$ and $\\beta_\\pi$, with high statistics using the \nPrimakoff effect. The scattering of high-energy negative pions in the \nCoulomb-field of a heavy nucleus (of charge $Z$) gives access to cross \nsections for $\\pi^-\\gamma$ reactions through the equivalent photon method \n\\cite{pomer}. 
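In this framework the small momentum-transfer (Primakoff) part of the pion-nucleus cross section factorises, schematically, into a flux of quasi-real photons and the embedded $\\pi^-\\gamma$ cross section. Up to the nuclear charge form factor $F(Q^2)$ one has (a sketch of the standard equivalent photon relation, with $s$ here the squared $\\pi\\gamma$ center-of-mass energy and $Q^2$ the photon virtuality)\n\\[\n{d\\sigma_{\\pi^- Z\\to XZ} \\over ds\\,dQ^2}\\;\\simeq\\;{Z^2\\alpha \\over \\pi}\\,{Q^2-Q^2_{\\rm min} \\over Q^4}\\,|F(Q^2)|^2\\,{\\sigma_{\\pi^-\\gamma\\to X}(s) \\over s-m_\\pi^2}\\, ,\n\\]\nso that measuring the pion-nucleus reaction at small $Q^2$ gives direct access to the corresponding real-photon cross section. 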
The consistent theoretical framework to extract the pion \npolarizabilities from the measured cross sections for (low-energy) pion \nCompton scattering $\\pi^- \\gamma \\to\\pi^- \\gamma$ or the primary pion-nucleus \nbremsstrahlung process $\\pi^- Z \\to\\pi^- Z \\gamma$ has been described (in\none-loop approximation) in refs.\\cite{picross,comptcor}. It has been stressed\nthat at the same order as the polarizability difference $\\alpha_\\pi-\\beta_\\pi$ \nthere exists a further (partly compensating) pion-structure effect in form of \na unique pion-loop correction (interpretable as photon scattering off the \n''pion-cloud around the pion''). In addition to these strong interaction \neffects, the QED radiative corrections to real and virtual pion Compton \nscattering $\\pi^-\\gamma^{(*)} \\to \\pi^- \\gamma$ have been calculated in \nrefs.\\cite{comptcor,bremscor}. The relative smallness of the \npion-structure effects in low-energy pion Compton scattering \\cite{picross} \nmakes it necessary to include such higher order electromagnetic corrections. The\nCOMPASS experiment is set up to detect simultaneously various (multi-particle) \nhadronic final-states which are produced in the Primakoff scattering of \nhigh-energy pions. The neutral pion production channel $\\pi^-\\gamma\\to \\pi^-\n\\pi^0$ serves as a test of the QCD chiral anomaly by measuring the $\\gamma \n3\\pi$ coupling constant $F_{\\gamma 3\\pi}= e\/(4\\pi^2 f_\\pi^3) = 9.72\\,$GeV$^{-3}$. For \nthe two-body process $\\pi^-\\gamma\\to \\pi^-\\pi^0$ the one-loop \\cite{picross,\nbijnens} and two-loop corrections \\cite{hannah} of chiral perturbation theory \nas well as QED radiative corrections \\cite{ametller} have already been worked \nout. \n\nThe $\\pi^- \\gamma$ reaction with three charged pions in the final-state is used \nin the energy range above 1\\,GeV to study the spectroscopy of non-strange meson \nresonances \\cite{boris} and to search for so-called exotic meson resonances\n\\cite{exotic}. The very high statistics of the COMPASS experiment allows it to \ncontinue the event rates with three pions in the final-state even downward to \nthe threshold. The (differential) cross sections for the $\\pi^-\\gamma\\to 3\\pi$ \nreactions in the low-energy region offer new possibilities to test the strong \ninteraction dynamics of the pions as predicted by chiral perturbation theory. \nIn a recent work \\cite{dreipion} the production amplitudes for $\\pi^- \\gamma \n\\to \\pi^-\\pi^0 \\pi^0$ and $\\pi^- \\gamma \\to \\pi^+\\pi^-\\pi^-$ have been calculated\nanalytically at one-loop order in chiral perturbation theory. It has been \nfound that the next-to-leading order corrections from chiral loops and \ncounterterms enhance sizeably (by a factor $1.5 -1.8$) the total cross section \nfor neutral pion-pair production $\\pi^-\\gamma \\to\\pi^-\\pi^0\\pi^0$. By contrast\nthe total cross section for charged pion-pair production $\\pi^-\\gamma \\to\\pi^+\n\\pi^-\\pi^-$ remains almost unchanged in comparison to its tree-level result. \nThis different behavior can be understood from the varying influence of the\nchiral corrections on the pion-pion final-state interaction ($\\pi^+\\pi^- \\to\n\\pi^0 \\pi^0$ versus $\\pi^-\\pi^- \\to \\pi^-\\pi^-$). \n\nThe purpose of the present paper is to further improve the theoretical\ndescription of the $\\pi^- \\gamma \\to 3\\pi$ reactions by considering the\ncorresponding QED radiative corrections. 
We restrict ourselves here to the\nsimpler case of neutral pion-pair production $\\pi^- \\gamma \\to\\pi^-\\pi^0\\pi^0$, \nfor which the number of contributing one-photon loop diagrams is limited to \nabout a dozen. Another fortunate circumstance is that the (leading-order) \nchiral $\\pi^+\\pi^-\\to \\pi^0\\pi^0$ contact-vertex factors out of all photon-loop\ndiagrams and therefore the radiative corrections to $\\pi^- \\gamma\\to\\pi^-\\pi^0\n\\pi^0$ can be represented simply by a multiplicative correction factor $R\n\\sim \\alpha\/2\\pi$. Infrared finiteness of these virtual radiative corrections \nis achieved (in the standard way) by including soft photon radiation below an \nenergy cut-off $\\lambda$. Taking $\\lambda=5\\,$MeV, we find that the radiative \ncorrections to the total cross section for $\\pi^- \\gamma\\to\\pi^-\\pi^0\\pi^0$ vary\nbetween $+2\\%$ and $-2\\%$ for center-of-mass energies from threshold up to \n$7m_\\pi$. An electromagnetic counterterm (necessary in order to cancel all \nultraviolet divergences generated by the photon-loops) gives an additional \nconstant contribution of about $1\\%$, however with a large uncertainty. The \nradiative corrections to the charged pion-pair production process $\\pi^- \\gamma\n\\to\\pi^+ \\pi^-\\pi^-$ can be roughly estimated to be a factor $2-4$ times larger,\narguing that in this case twice as many charged pions are involved in virtual \nphoton-loops and soft photon bremsstrahlung. \n\n\n\\section{Evaluation of one-photon loop diagrams} \nIn this section we calculate analytically the radiative corrections to the \nneutral pion-pair photoproduction process: \n\\begin{equation}\\pi^-(p_1)+\\gamma(k,\\epsilon\\,) \\to\\pi^-(p_2) +\\pi^0(q_1)+ \n\\pi^0(q_2)\\,,\\end{equation} \nas they arise from one-photon loop diagrams at order $\\alpha$. For a concise\npresentation of our analytical results it is convenient to introduce the \nfollowing dimensionless Mandelstam variables:\n\\begin{equation} s=(p_1+k)^2\/m_\\pi^2\\,, \\quad t=(p_1-p_2)^2\/m_\\pi^2\\,, \\quad \nu=(p_2-k)^2\/m_\\pi^2\\,,\\end{equation}\nwith $m_\\pi =139.570\\,$MeV the charged pion mass. In this (adapted) notation \n$\\sqrt{s}\\,m_\\pi$ is the total center-of-mass energy of the process. We will\nalso use frequently the linear combination: \n\\begin{equation} \\Sigma = s+t+u-2 =(q_1+q_2)^2\/m_\\pi^2\\,,\\end{equation}\nrelated to the squared invariant mass of the produced neutral pion-pair.\nIn the physical region the following inequalities hold: $s>(1+2\\sqrt{r_0})^2$, \n$t<0$, $u<0\\,$\\footnote{The inequality $u<1$ follows immediately from the\ndefinition of $u$. In order to derive the sharper upper bound $u<0$, one uses\nthe relation for $u$ written in eq.(21) and inserts $y_{\\rm max}=1$ and \n$\\omega_{\\rm max} = (s+1-4r_0)\/2\\sqrt{s}$. In the end the condition $r_0>1\/4$ \nturns out to be crucial for $u$ to take on only negative values.} and \n$4r_0< \\Sigma <(\\sqrt{s}-1)^2$ where $r_0=(m_{\\pi^0}\/m_\\pi)^2=0.93526$ denotes\nthe squared ratio between the neutral pion mass $m_{\\pi^0}=134.977\\, $MeV and \nthe charged pion mass $m_\\pi$.\n\nLet us recall the dynamical description of the process $\\pi^- \\gamma \\to \\pi^-\n\\pi^0\\pi^0$ at low energies \\cite{picross,dreipion}. 
When choosing for the \n(transversal) real photon $\\gamma(k,\\epsilon\\,)$ the Coulomb-gauge in the \ncenter-of-mass frame, the conditions $\\epsilon \\cdot p_1 =\\epsilon \\cdot k= 0$ \nimply that all diagrams for which the photon couples to the in-coming pion \n$\\pi^-(p_1)$ vanish identically. Furthermore, in the convenient parametrization\nof the special-unitary matrix-field $U =\\sqrt{1-\\vec \\pi^{\\,2} \/f_\\pi^2}+ i\\vec \n\\tau \\cdot \\vec \\pi\/f_\\pi$ no $\\gamma 4\\pi$\nand $2\\gamma 4\\pi$ contact-vertices exist (at leading order). Under these \nassumptions one is left with one single $u$-channel pole diagram in which the \nchiral $\\pi^+\\pi^-\\to \\pi^0\\pi^0$ contact-vertex is followed by a photon-pion \ncoupling proportional to $\\epsilon\\cdot p_2$.\n\nThe virtual radiative corrections to $\\pi^- \\gamma \\to \\pi^-\\pi^0\\pi^0$ are\nobtained by dressing this tree diagram with a photon-loop in all possible ways \n(see Figs.\\,1-4). A fortunate circumstance is that the (leading order) chiral \n$\\pi^+\\pi^-\\to \\pi^0\\pi^0$ transition amplitude $[(q_1+q_2)^2-m_{\\pi^0}^2]\/f_\\pi^2$ \ndepends only on the $\\pi^0\\pi^0$ invariant mass and thus factors out of all \nphoton-loop diagrams. The Coulomb-gauge ($\\epsilon\\cdot p_1=\\epsilon\\cdot k=0$)\nleaves the scalar product $\\epsilon\\cdot p_2$ as the only possible coupling \nterm for the external real photon. As a consequence of these features the \nradiative corrections due to photon-loops can be represented simply by a \nmultiplicative correction factor. Only its real part is of relevance and it is \ndenoted by $R(s,t,u)$. We use dimensional regularization to treat both \nultraviolet and infrared divergences (where the latter are caused by the \nmasslessness of the photon). Divergent pieces of one-loop integrals show up in \nthe form of the composite constant:\n\\begin{equation} \\xi = {1\\over d-4}+{1\\over 2}(\\gamma_E-\\ln 4\\pi) +\\ln{m_\\pi\n\\over \\mu}\\,, \\end{equation}\ncontaining a simple pole at $d=4$, with $\\mu$ an arbitrary mass scale.\nUltraviolet (UV) and infrared (IR) divergences are distinguished by the\nfeature of whether the condition for convergence of the $d$-dimensional\nintegral is $d<4$ or $d>4$. We discriminate them in the notation by putting\nappropriate subscripts, i.e. $\\xi_{UV}$ and $\\xi_{IR}$. In order to simplify\nall calculations we employ the Feynman gauge, where the photon propagator is \ndirectly proportional to the Minkowski metric tensor $g_{\\mu\\nu}$. We can now\nenumerate the analytical expressions for $R(s,t,u)$ as they emerge from the\neight classes of contributing one-photon loop diagrams. \n\n\n\n\n\\begin{figure}\\begin{center}\n\\includegraphics[scale=1.,clip]{gam3pifig9.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{One-photon loop diagrams (I) and (II) for neutral pion-pair production \n$\\pi^-\\gamma \\to \\pi^-\\pi^0\\pi^0$. Arrows indicate out-going pions.}\n\\end{figure}\n\nThe two diagrams of class (I) shown in Fig.\\,1 introduce the wavefunction\nrenormalization factor $Z_2-1$ of the pion \\cite{comptcor}: \n\\begin{equation}R^{(\\rm I)}={\\alpha \\over \\pi}\\big(\\xi_{IR}-\\xi_{UV}\\big) \\,. \n\\end{equation}\nDiagram (II) involves the once-subtracted (off-shell) selfenergy of the pion\nand leads to the result:\n\\begin{equation}R^{(\\rm II)}={\\alpha \\over \\pi}\\bigg[-\\xi_{UV}+1 -{u+1 \\over 2u}\n\\ln(1-u)\\bigg] \\,. 
\\end{equation}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.,clip]{gam3pifig10.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{One-photon loop diagrams (III), (IV) and (V).}\n\\end{figure}\nDiagram (III) shown in Fig.\\,2 gives rise to a constant vertex correction:\n \\begin{equation}R^{(\\rm III)}={\\alpha \\over 8\\pi}\\big(6\\xi_{UV}-7\\big) \\,, \n\\end{equation}\nwhile diagrams (IV) and (V) generate $u$-dependent vertex corrections:\n\\begin{equation}R^{(\\rm IV)}={\\alpha \\over 8\\pi}\\bigg[6\\xi_{UV}-6 -{1\\over u}+{u-1\n\\over u^2}(3u+1)\\ln(1-u)\\bigg] \\,, \\end{equation}\n\\begin{equation}R^{(\\rm V)}={\\alpha \\over 8\\pi}\\bigg[-4\\xi_{UV}+5 +{1\\over u}+{u^2\n+6u+1\\over u^2} \\ln(1-u)\\bigg] \\,. \\end{equation}\nIt is astonishing that the last four contributions sum up to zero, $R^{(\\rm II)}+R^{(\\rm III)}+\nR^{(\\rm IV)}+R^{(\\rm V)}=0$, as one easily verifies by adding up separately the coefficients of \n$\\xi_{UV}$, $1\/u$, $\\ln(1-u)$ and the constant terms.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.,clip]{gam3pifig11.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{One-photon loop diagrams (VI) and (VII).}\n\\end{figure}\n\nThe reducible $u$-channel pole diagram (VI) shown in Fig.\\,3 includes a \nphotonic vertex correction around the $2\\pi^0$ emission vertex. One finds for\nits contribution to the (real) $R$-factor the following result: \n\\begin{eqnarray} R^{(\\rm VI)} &=& {\\alpha \\over 2\\pi}\\bigg\\{-\\xi_{UV}+1+{1-u\\over\n2u} \\ln(1-u) +\\sqrt{\\Sigma-4\\over \\Sigma}\\ln{\\sqrt{\\Sigma-4}+\\sqrt{\\Sigma}\\over\n2} \\nonumber \\\\ &&+ \\bigg(s+t+{u-7\\over 2}\\bigg) -\\!\\!\\!\\!\\!\\!\\int_0^1 dx \n{\\ln|x^{-1}+\\Sigma(x-1)|-\\ln(1-u)\\over 1+(u-1)x+\\Sigma\\, x(x-1)} \\bigg\\} \\,.\n\\end{eqnarray}\nThe integrand of the principal-value integral $-\\!\\!\\!\\!\\!\\int_0^1 dx$ has \nsimple poles at $x_\\pm=[\\Sigma+1-u\\pm\\sqrt{(\\Sigma+1-u)^2-4\\Sigma}]\/2\\Sigma$,\nbut in the physical region $u<0$, $\\Sigma >4r_0$ only the pole at $x_-$ lies\ninside the unit-interval $0<x_-<1$. Note also that in the physical region one has \n$s-1-\\Sigma>s-1-(\\sqrt{s}-1)^2=2(\\sqrt{s}-1)>0$. By taking\nthe absolute magnitude of the arguments of logarithms one gets directly a \nsuitable representation of the only relevant real part. It is a fortunate\ncircumstance that the Feynman-parameter representation of loop functions leads\nto expressions which can be handled easily numerically in the physical region.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.,clip]{gam3pifig12.eps}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{One-photon loop diagrams (VIII). The black square in the right tree \ndiagram (ct) symbolizes the electromagnetic counterterm for $\\pi^+\\pi^-\\to \n\\pi^0\\pi^0$ scattering.}\n\\end{figure}\n\nFinally, we come to the irreducible $u$-channel diagrams of class (VIII) shown \nin Fig.\\,4. The contribution from the left diagram gets completely absorbed by\na term from the right diagram. 
The resulting contribution to the (real) \n$R$-factor includes an infrared divergent term with a non-trivial \n$t$-dependence and after putting all pieces together it reads: \n\\begin{eqnarray} R^{(\\rm VIII)} &=& {\\alpha \\over 2\\pi}\\Bigg\\{{1-u\\over 2}\\bigg[\n{D(t)-D(\\Sigma)\\over \\Sigma-t}- {1 \\over u}\\ln(1-u) \\nonumber \\\\ && + \n-\\!\\!\\!\\!\\!\\!\\int_0^1 dx {1+4x^2-t\\,x(1+x)\\over 1-t\\,x(1-x)}\\,{\\ln|x^{-1}+\\Sigma\n(x-1)|-\\ln(1-u)\\over 1+(u-1)x+\\Sigma\\,x(x-1)} \\bigg]\\nonumber \\\\ &&+{t-2 \\over \n\\sqrt{t^2-4t}} \\bigg[4\\Big(\\xi_{IR}+\\ln(1-u)\\Big) \\ln{\\sqrt{4-t}+\\sqrt{-t}\\over\n2} +{\\rm Li}_2(w)\\nonumber \\\\ && -{\\rm Li}_2(1-w)+{1\\over 2}\\ln^2 w-{1\\over 2}\n\\ln^2(1-w) + {\\rm Li}_2(h_-)-{\\rm Li}_2(h_+)\\bigg]\\nonumber \\\\ && +(2-t)\n\\int_0^1 dx{\\ln|1+ \\Sigma\\, x(x-1)|\\over 1-t\\,x(1-x)} \\Bigg\\} \\,,\\end{eqnarray}\nwith the abbreviations\n\\begin{equation} w = {1\\over 2}\\Bigg(1-\\sqrt{-t\\over 4-t}\\,\\Bigg)\\,, \\qquad\n h_\\pm = {1\\over 2}\\Big(t \\pm \\sqrt{t^2-4t}\\, \\Big)\\,. \\end{equation}\nOne observes that the term proportional to $D(t)-D(\\Sigma)$ drops out in the \nsum $R^{(\\rm VII)} + R^{(\\rm VIII)}$ and therefore we do not need to specify it. \nLi$_2(w) = \\sum_{n=1}^\\infty n^{-2} w^n= w\\int_1^\\infty dx [x(x-w)]^{-1} \\ln x$ denotes \nthe conventional dilogarithmic function. Several of the results derived in \nsection 3 of ref.\\cite{bremscor} have been useful in order to obtain the \nexpression for $R^{(\\rm VIII)}$ written in eq.(15).\n \n\\section{Infrared finiteness}\nIn the next step we have to consider the infrared divergent terms proportional\nto $\\xi_{IR}$ present in eqs.(5,15). At the level of the measurable cross \nsection these get eliminated by contributions from (undetected) soft photon \nbremsstrahlung. In its final effect, the (single) soft photon radiation off \nthe in- or out-going $\\pi^-$ multiplies the tree-level differential cross \nsection for $\\pi^-\\gamma \\to\\pi^- \\pi^0\\pi^0$ by a (universal) factor \n\\cite{comptcor,bremscor}: \n\\begin{equation} \\delta_{\\rm soft}=2R_{\\rm soft}= \\alpha\\, \\mu^{4-d}\\!\\!\\int\\limits_{\n|\\vec l\\,|<\\lambda} \\!\\!{d^{d-1}l \\over (2\\pi)^{d-2}\\, l_0} \\bigg\\{ {2p_1\\cdot p_2 \n\\over p_1 \\cdot l \\, p_2 \\cdot l} - {m_\\pi^2 \\over (p_1 \\cdot l)^2} - {m_\\pi^2 \n\\over (p_2 \\cdot l)^2} \\bigg\\} \\,, \\end{equation}\nwhich depends on a small energy cut-off $\\lambda$. Working out this momentum \nspace integral by the method of dimensional regularization (with $d>4$) one \nfinds the following contribution from soft photon emission to the $R$-factor: \n\\begin{eqnarray}R_{\\rm soft}^{(\\rm cm)}&=& {\\alpha \\over 4\\pi}\\Bigg\\{4\\bigg[1+{2 t-4 \n\\over\\sqrt{t^2-4t}} \\ln{\\sqrt{4-t}+\\sqrt{-t}\\over 2}\\bigg] \\bigg(\\ln{m_\\pi\\over\n2\\lambda} -\\xi_{IR}\\bigg) \\nonumber \\\\ && + {s+1 \\over s-1} \\ln s + {2\\omega\n\\over \\sqrt{\\omega^2-1}}\\ln\\Big(\\omega+\\sqrt{\\omega^2-1}\\,\\Big)+(t-2) \\nonumber \n\\\\ && \\times \\int_0^1 dx {s+1-\\Sigma\\,x\\over [1-t\\,x(1-x)] \\sqrt{W}} \\ln{ s+1\n-\\Sigma\\,x+\\sqrt{W} \\over s+1-\\Sigma\\,x-\\sqrt{W}}\\Bigg\\} \\,,\\end{eqnarray}\nwith the abbreviation $W= (s+1-\\Sigma\\,x)^2-4s[1-t\\,x(1-x)]$. In order to\nsimplify the last term in eq.(18) we have made use of the relation $\\Sigma= \ns+1-2 \\omega\\sqrt{s}$, where $\\omega$ denotes the center-of-mass energy of the\nout-going negative pion $\\pi^-(p_2)$ divided by $m_\\pi$. 
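The last relation can be checked in one line: in the center-of-mass frame the total \nfour-momentum is $p_1+k=(\\sqrt{s}\\,m_\\pi,\\vec 0\\,)$ and the energy of the out-going $\\pi^-$ \nis $p_2^0=\\omega\\, m_\\pi$, so that \n\\[ \\Sigma\\, m_\\pi^2=(p_1+k-p_2)^2 = (s+1)\\,m_\\pi^2-2\\sqrt{s}\\,\\omega\\, m_\\pi^2\\,, \\] \nwhich is just $\\Sigma= s+1-2\\omega\\sqrt{s}$. 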
Note that the terms \nbeyond those proportional to $\\ln(m_\\pi\/2\\lambda) -\\xi_{IR}$ are specific for \nthe evaluation of the soft photon correction factor $R_{\\rm soft}$ in the \ncenter-of-mass frame with $\\lambda$ an infrared cut-off therein. \n\nIn order to present a concrete example we have evaluated the complete radiative\ncorrection factor $R$ at the threshold in the isospin limit: $s_{\\rm th}=9$, \n$t_{\\rm th}=-4\/3$, $u_{\\rm th}=-5\/3$, $\\Sigma_{\\rm th}=4$, $\\omega_{\\rm th}=1$. In \nthis case one gets numerically:\n\\begin{equation}R_{\\rm th}={\\alpha \\over 2\\pi} \\bigg\\{11.093+3\\bar k+ \\bigg(2-\n{5\\over 2}\\ln 3\\bigg)\\ln{m_\\pi \\over 2\\lambda} -0.725\\bigg\\} \\,, \\end{equation} \nwhere the terms in the curly bracket correspond in the order written to virtual\nphoton-loops, the electromagnetic counterterm, the universal soft photon \ncontribution, and the soft photon contribution specific for imposing an \ninfrared cut-off via $|\\vec l\\,|<\\lambda$ in the center-of-mass frame. \n\\section{Results: radiative corrections to cross sections}\nAfter inclusion of radiative corrections the total cross section for neutral \npion-pair production $\\pi^-\\gamma \\to\\pi^-\\pi^0\\pi^0$ depends also on\nthe infrared cut-off $\\lambda$ for undetected soft photons. We multiply the \nsquared tree-level amplitude by $1+2R(s,t,u,\\lambda)$ and integrate over the \nthree-pion phase space. Applying the usual flux and symmetry factors the\ntotal cross section reads:\n\\begin{eqnarray} \\sigma_{\\rm tot}(s,\\lambda) &=& {\\alpha\\,m_\\pi^2\\over 32\\pi^2f_\\pi^4\n(s-1)}\\int_{1}^{\\omega_{\\rm max}}\\!\\!d\\omega\\,(\\omega^2-1)^{3\/2}\\sqrt{\\Sigma-4r_0 \\over \n\\Sigma }\\nonumber \\\\ && \\times\\int_{-1}^1 dy\\,(1-y^2) \\bigg({\\Sigma-r_0\\over\nu-1} \\bigg)^{\\!\\!2}\\,\\Big[1+2R(s,t,u,\\lambda)\\Big] \\,,\n\\end{eqnarray}\nwith $\\omega_{\\rm max} = (s+1-4r_0)\/2\\sqrt{s}$ the endpoint energy of the \nout-going $\\pi^-$ divided by $m_\\pi$. Using the relations $\\Sigma= s+t+u-2= \ns+1-2 \\omega \\sqrt{s}$ and:\n\\begin{equation} u= 1+{1-s \\over\\sqrt{s}}\\Big(\\omega-y \\sqrt{\\omega^2-1}\\,\\Big)\n\\,, \\qquad t= 2-(s+1){\\omega \\over\\sqrt{s}}+{1-s \\over\\sqrt{s}} y \\sqrt{\n\\omega^2-1}\\,, \\end{equation} \nvalid in the center-of-mass frame, the whole integrand in eq.(20) becomes a\nfunction of $\\omega$ and the directional cosine $y$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=.5,clip]{2piradcor.eps}\n\\end{center}\n\\vspace{-0.6cm}\n\\caption{Radiative corrections to the total cross section for neutral \npion-pair photoproduction $\\pi^- \\gamma \\to \\pi^-\\pi^0\\pi^0$ as a function of \nthe center-of-mass energy $\\sqrt{s}\\,m_\\pi$. The infrared cut-off for soft \nphoton emission has been set to the value $\\lambda = 5\\,$MeV.} \n\\end{figure}\n\n\nFig.\\,5 shows in percent the radiative corrections to the total cross section \nfor neutral pion-pair production $\\pi^-\\gamma \\to\\pi^-\\pi^0\\pi^0$ as a function \nof the center-of-mass energy $\\sqrt{s}\\,m_\\pi$. The dashed-dotted and dashed \ncurves display the separate contributions from soft photon bremsstrahlung and \nvirtual photon-loops. In each case the radiative correction is calculated as \nthe shift in $\\sigma_{\\rm tot}(s,\\lambda)$ induced by the respective \n$R$-factor, divided by the tree-level cross section. 
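Stated differently, since the squared tree-level amplitude is multiplied by \n$1+2R(s,t,u,\\lambda)$, each curve in Fig.\\,5 represents twice the phase-space average of the \nrespective $R$-factor, \n\\[ {\\sigma_{\\rm tot}(s,\\lambda)-\\sigma_{\\rm tree}(s)\\over \\sigma_{\\rm tree}(s)} = \n2\\,\\big\\langle R(s,t,u,\\lambda)\\big\\rangle\\,, \\] \nwhere the average is taken with the tree-level integrand of eq.(20) as weight function. 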
As in ref.\\cite{comptcor} \nthe infrared cut-off $\\lambda$ for undetected soft photons has been set to \n$\\lambda=5\\,$MeV, a value which seems appropriate for the COMPASS experiment. \nThe full line in Fig.\\,5 shows the complete radiative corrections. One observes \nan almost linear decrease which ranges from $+1.6\\%$ at threshold to $-1.6\\%$ \nat the center-of-mass energy $\\sqrt{s}\\,m_\\pi=7m_\\pi$. An interesting feature\nis that the positive radiative corrections from the virtual photon-loops get \ngradually reduced and turned into negative values by the soft photon\ncontributions. \n \nThe finite part of the electromagnetic counterterm $\\bar k$ shifts the \nradiative corrections (displayed by the full curve in Fig.\\,5) by a \nconstant amount of $3\\alpha\\bar k\/\\pi= 0.7\\%\\cdot \\bar k$. In order to give an\nestimate for $\\bar k$ we exploit the elaborate result of ref.\\cite{pionium}\nfor the pionium decay amplitude $(a_0-a_2)m_\\pi+\\varepsilon$. Guided by eq.(13) \nwe identify $(\\alpha\/ 2\\pi)(3\\bar k +1)$ with the ratio $\\varepsilon^{\\rm elm}\/(\na_0-a_2)$, where $\\varepsilon^{\\rm elm}$ is the electromagnetic correction to \nthe pionium decay amplitude. Subtracting from $\\varepsilon= (6.1\\pm 1.6)\\cdot \n10^{-3}$ the contribution $4.8\\cdot 10^{-3}$ due to the (charged and neutral)\npion mass difference (see eqs.(4.28,4.29) in ref.\\cite{pionium}) one gets \n$\\varepsilon^{\\rm elm}= (1.3\\pm 1.6)\\cdot 10^{-3}$. Together with the leading order \nexpression for the $\\pi\\pi$-scattering length difference $a_0-a_2 = 9m_\\pi\/\n(32\\pi f_\\pi^2) = 0.204 m_\\pi^{-1}$ one arrives at the estimate $\\bar k = 1.5\\pm \n2.2$ for the electromagnetic counterterm. Its central value implies a constant \nshift of the radiative corrections to $\\pi^-\\gamma \\to\\pi^-\\pi^0\\pi^0$ by\nabout $1.0\\%$. The large error bar of $\\bar k = 1.5\\pm 2.2$ introduces at the\nsame time a wide error band for the full curve in Fig.\\,5. Still an allowed\noption is to neglect the electromagnetic counterterm, setting $\\bar k=0$. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=.5,clip]{2pi0radspec.eps}\n\\end{center}\n\\vspace{-0.6cm}\n\\caption{Radiative corrections to the $\\pi^0\\pi^0$ mass spectra for neutral \npion-pair production $\\pi^- \\gamma \\to \\pi^-\\pi^0\\pi^0$ as a function of \nthe $\\pi^0\\pi^0$ invariant mass $\\sqrt{\\Sigma}\\,m_\\pi$. The numbers on the\ncurves correspond to $\\sqrt{s}$.} \n\\end{figure}\n\nFinally, we consider radiative corrections to more exclusive observables. An\nobvious candidate is the $\\pi^0\\pi^0$ mass spectrum $d\\sigma\/dm_{00}$ with $m_{00} \n=\\sqrt{\\Sigma}\\,m_\\pi$ the $\\pi^0\\pi^0$ invariant mass. The differential cross \nsection $d\\sigma\/dm_{00}$ is obtained by omitting the $d\\omega$-integration in \neq.(20) and applying the normalization factor $ m_\\pi^{-1}\\sqrt{\\Sigma\/s}$. \nFig.\\,6 shows in percent the radiative corrections to the $\\pi^0\\pi^0$ mass \nspectrum for neutral pion-pair production $\\pi^- \\gamma \\to \\pi^-\\pi^0\\pi^0$ as \na function of the $\\pi^0\\pi^0$ invariant mass $\\sqrt{\\Sigma}\\,m_\\pi$. The \nnumbers (4,\\,5,\\,6,\\,7) on the four rising curves correspond to $\\sqrt{s}$, the \ntotal center-of-mass energy divided by $m_\\pi$. The electromagnetic counterterm\n$\\bar k$ shifts again the whole pattern by a constant $3\\alpha \\bar k\/\\pi$. 
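Numerically, the range $\\bar k = 1.5\\pm 2.2$ estimated above corresponds to a constant shift \nof the curves in Figs.\\,5 and 6 between \n\\[ 3\\alpha \\bar k\/\\pi \\simeq -0.5\\% \\qquad {\\rm and} \\qquad 3\\alpha \\bar k\/\\pi \\simeq +2.6\\%\\,, \\] \nso that this counterterm uncertainty is of the same size as the energy-dependent radiative \ncorrections themselves. 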
\n \nIn summary, we find that the radiative corrections to neutral pion-pair\nproduction $\\pi^- \\gamma \\to \\pi^-\\pi^0\\pi^0$ are comparable in size to those\nfor pion Compton scattering $\\pi^- \\gamma \\to\\pi^- \\gamma $ \\cite{comptcor}. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nPerceiving environments in three dimensions (3D) is important for locating objects, identifying free space, and motion planning in robotics and autonomous vehicles.\nAlthough these domains typically rely on 3D sensors to measure depth and identify free space (e.g., LiDAR \\cite{GaEtAl20} or RGBD cameras \\cite{FeLa19}), classifying and understanding raw 3D data is a challenging and ongoing area of research \\cite{KhEtAl19,SuSh19,LiuEtAl19,pointnet}.\nAlternatively, RGB cameras are less expensive and more ubiquitous than 3D sensors, and there are many more datasets and methods based on RGB images \\cite{ImageNet,maskrcnn,MSCOCO}.\nThus, even when 3D sensors are available, RGB images remain a critical modality for understanding data and identifying objects \\cite{FlCoGr19,WaEtAl19}.\n\n\nTo identify objects in a sequence of images, video object segmentation (VOS) addresses the problem of densely labeling target objects in video.\nVOS is a hotly studied area of video understanding, with frequent developments and improving performance on challenging VOS benchmark datasets \\cite{SegTrackv2,DAVIS,DAVIS17,SegTrack,YTVOS}.\nThese algorithmic advances in VOS support learning object class models~\\cite{OnReVeECCV2014,TaSuYaCVPR2013}, scene parsing~\\cite{LiHeCVPR2015,TiLaIJCV2012}, action recognition~\\cite{LuXuCoCVPR2015,SoIdShICCV2015,SoIdShCVPR2016}, and video editing applications~\\cite{ChChChACMM2012}. \n\n\n\\begin{figure} [t]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{front_vos_1.jpg}\n\t\\caption{ \\textbf{Depth from Video Object Segmentation.}\n\t\tVideo object segmentation algorithms can densely segment target objects in a variety of settings (DAVIS \\cite{DAVIS}, \\textit{left}).\n\t\tGiven object segmentations and a measure of camera movement (e.g., from vehicle odometry or robot kinematics, \\textit{right}), our network can estimate an object's depth\n\t\n\t}\n\t\\label{fig:front_vos}\n\\end{figure}\n\nGiven that many VOS methods perform well in unstructured environments, in this work, we show that VOS can similarly support 3D perception for robots and autonomous vehicles.\nWe take inspiration from work in psychology that establishes how people perceive depth motion from the optical expansion or contraction of objects \\cite{It51,SwGo86}, and we develop a deep network that learns object depth estimation from uncalibrated camera motion and video object segmentation (see Fig.~\\ref{fig:front_vos}).\nWe depict our optical expansion model in Fig.~\\ref{fig:optical_expansion}, which uses a moving pinhole camera and binary segmentation masks for an object in view.\nTo estimate an object's depth, we only need segmentations at two distances with an estimate of relative camera movement.\nNotably, most autonomous hardware platforms already measure movement, and even hand-held devices can track movement using an inertial measurement unit or GPS.\nFurthermore, although we do not study it here, if hardware-based measurements are not available, structure from motion is also plausible to recover camera motion \\cite{KaGaBa19,MuTa17,ScFr16}.\n\nIn recent work \\cite{GrFlCo20}, we use a similar model for VOS-based visual servo control, depth estimation, 
and mobile robot grasping.\nHowever, our previous analytic depth estimation method does not adequately account for segmentation errors.\nFor real-world objects in complicated scenes, segmentation quality can change among frames, with typical errors including: incomplete object segmentation, partial background inclusion, or segmenting the wrong object.\nThus, we develop and train a deep network that learns to accommodate segmentation errors and reduces object depth estimation error from \\cite{GrFlCo20} by as much as 59\\%.\n\nThe first contribution of our paper is developing a learning-based approach to object depth estimation using motion and segmentation, \nwhich we experimentally evaluate in multiple domains.\nTo the best of our knowledge, this work is the first to use a learned, segmentation-based approach to depth estimation, which has many advantages.\nFirst, we use segmentation masks as input, so our network does not rely on application-specific visual characteristics and is useful in multiple domains.\nSecond, we process a series of observations simultaneously, thereby mitigating errors associated with any individual camera movement or segmentation mask.\nThird, our VOS implementation operates on streaming video and our method, using a single forward pass, runs in real-time.\nFourth, our approach only requires a single RGB camera and relative motion (no 3D sensors).\nFinally, our depth estimation accuracy will improve with future innovations in VOS.\n\n\n\\begin{figure} [t]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{optical_expansion_2.pdf}\n\t\\caption{ \\textbf{Optical Expansion and Depth.}\n\t\tAn object's projection onto the image plane scales inversely with the depth between the camera and object.\n\t\tWe determine an object's depth ($d$) using video object segmentation, relative camera movement, and corresponding changes in scale ($\\ell$).\n\t\tIn this example, $d_1 = \\frac{d_2}{2}$, $\\ell_1 = 2 \\ell_2$, and $d_1 \\ell_1 = d_2 \\ell_2$\n\t\n\t\n\t}\n\t\\label{fig:optical_expansion}\n\\end{figure}\n\nA second contribution of our paper is the \\textbf{O}bject \\textbf{D}epth via \\textbf{M}otion and \\textbf{S}egmentation (ODMS) dataset.\\footnote{Dataset and source code website: \\url{https:\/\/github.com\/griffbr\/ODMS}}\nThis is the first dataset for VOS-based depth estimation and enables learning-based algorithms to be leveraged in this problem space. \nODMS data consist of a series of object segmentation masks, camera movement distances, and ground truth object depth. 
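Concretely, each ODMS example can be thought of as a tuple \n$\\big(\\{\\mathbf{M}_i\\}_{i=1}^{n},\\, \\{z_i\\}_{i=1}^{n},\\, d_1\\big)$ of $n$ binary object masks, \nthe corresponding camera positions along the optical axis, and the ground truth depth of the \nobject at the closest camera position, using the notation introduced in \nSection~\\ref{sec:solve}. 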
\nDue to the high cost of data collection and user annotation \\cite{DAVIS2018,ViGr09}, manually collecting training data would either be cost prohibitive or severely limit network complexity to avoid overfitting.\nInstead, we configure our dataset to continuously generate synthetic training data with random distances, object profiles, and even perturbations, so we can train networks of arbitrary complexity.\nFurthermore, because our network input consists simply of binary segmentation masks and distances, we show that domain transfer from synthetic training data to real-world applications is viable.\nFinally, as a benchmark evaluation, we create four ODMS validation and test sets with 15,650 examples in multiple domains, including robotics and driving.\n\n\n\\section{Related Work}\n\nWe use video object segmentation (VOS) to process raw input video and output the binary segmentation masks we use to estimate object depth in this work.\nUnsupervised VOS usually relies on generic object motion and appearance cues \\cite{NLC,GrCoWACV2019,KEY,FST,WeSz17,LIBSVX}, while semi-supervised VOS segments objects that are specified in user-annotated examples \\cite{CINM,PML,GrCo19,PREMVOS,RGMP,OSMN}.\nThus, semi-supervised VOS can learn a specific object's visual characteristics and reliably segment dynamic \\textit{or} static objects.\nTo segment objects in our robot experiments, we use One-Shot Video Object Segmentation (OSVOS) \\cite{OSVOS}.\nOSVOS is state-of-the-art in VOS, has influenced other leading methods \\cite{OSVOS-S,OnAVOS}, and does not require temporal consistency (OSVOS segments frames independently). \nDuring robot experiments, we apply OSVOS models that have been pre-trained with annotated examples of each object rather than annotating an example frame at inference time.\n\nWe take inspiration from many existing datasets in this work.\nVOS research has benefited from benchmark datasets like SegTrackv2 \\cite{SegTrack,SegTrackv2}, DAVIS \\cite{DAVIS,DAVIS17}, and YouTube-VOS \\cite{YTVOS}, which have provided increasing amounts of annotated training data.\nThe recently developed MannequinChallenge dataset \\cite{LiEtAl19} trained a network to predict dense depth maps from videos with people, with improved performance when given an additional human-mask input.\nAmong automotive datasets, Cityscapes \\cite{City} focuses on \\textit{semantic segmentation} (i.e., assigning class labels to all pixels), KITTI \\cite{KITTI} includes benchmarks separate from segmentation for single-image depth completion and prediction, and SYNTHIA \\cite{SYNTHIA} has driving sequences with simultaneous ground truth for semantic segmentation and depth images.\nIn this work, our ODMS dataset focuses on \\textbf{O}bject \\textbf{D}epth via \\textbf{M}otion and \\textbf{S}egmentation, establishing a new benchmark for segmentation-based 3D perception in robotics and driving.\nIn addition, ODMS is arbitrarily extensible, which makes learning-based methods feasible in this problem space.\n\n\n\\section{Optical Expansion Model}\n\\label{sec:solve}\n\nOur optical expansion model (Fig.~\\ref{fig:optical_expansion}) forms the theoretical underpinning for our learning-based approach in Section~\\ref{sec:learn} and ODMS dataset in Section~\\ref{sec:dataset}.\nIn this section, we derive the complete model and analytic solution for segmentation-based depth estimation.\nWe start by defining the inputs we use to estimate depth.\nAssume we are given a set of $n \\geq 2$ observations that consist of masks 
\n\\begin{align}\n\t\\mathsf{M} := \\{\\mathbf{M}_1, \\mathbf{M}_2, \\cdots, \\mathbf{M}_n\\}\n\n\t\\label{eq:masks}\n\\end{align}\nsegmenting an object and corresponding camera positions on the optical axis\n\\begin{align}\n\t\\mathbf{z} := \\{z_1, z_2, \\cdots, z_n\\}.\n\n\t\\label{eq:distances}\n\\end{align}\nEach binary mask image $\\mathbf{M}_i$ consists of pixel-level labels where 1 indicates a pixel belongs to a specific segmented object and 0 is background.\nFor the solutions in this work, the optical axis's origin and absolute position of $\\mathbf{z}$ is inconsequential.\n\n\\subsection{Relating Depth and Scale}\n\nWe use changes in scale of an object's segmentation mask to estimate depth.\nAs depicted in Fig.~\\ref{fig:optical_expansion}, we relate depth and scale across observations using\n\\begin{align}\n\td_i \\ell_i = d_j \\ell_j \\implies \\frac{\\ell_j}{\\ell_i} = \\frac{d_i}{d_j},\n\t\\label{eq:elld}\n\\end{align}\nwhere $\\ell_i$ is the object's projected scale in $\\mathbf{M}_i$, $d_i$ is the distance on the optical axis from $z_i$ to the visible perimeter of the segmented object, and $\\frac{\\ell_j}{\\ell_i}$ is the object's change in scale between $\\mathbf{M}_i$ and $\\mathbf{M}_j$.\nNotably, it is more straightforward to track changes in scale using area (i.e., the sum of mask pixels) than length measurements.\nThus, we use Galileo Galilei's Square-cube law to modify \\eqref{eq:elld} as\n\\begin{align}\n\ta_j = a_i \\bigg(\\frac{\\ell_j}{\\ell_i}\\bigg)^2 \\implies\n\t\\frac{\\ell_j}{\\ell_i} = \\frac{\\sqrt{a_j}}{\\sqrt{a_i}} = \\frac{d_i}{d_j},\n\t\\label{eq:galileo}\n\\end{align}\nwhere $a_i$ is an object's projected area at $d_i$ and $\\frac{\\sqrt{a_j}}{\\sqrt{a_i}}$ is equal to the change in scale between $\\mathbf{M}_i$ and $\\mathbf{M}_j$.\nCombining \\eqref{eq:elld} and \\eqref{eq:galileo}, we relate observations as\n\\begin{align}\n\td_i \\sqrt{a_i} = d_j \\sqrt{a_j} = c,\n\t\\label{eq:c}\n\\end{align}\nwhere $c$ is a constant corresponding to an object's orthogonal surface area.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{optical_expansion_za.pdf}\n\n\t\\caption{ \\textbf{Calculating Object Depth.}\n\t\n\t\tFirst, we define $d_i$ in terms of its component parts $z_{\\text{object}}$ and $z_i$ \\eqref{eq:di}.\n\t\tSecond, we relate measured changes in camera pose $z_i$ and segmentation area $a_i$ across observations \\eqref{eq:cobj}.\n\t\tFinally, we solve for $z_{\\text{object}}$ using \\eqref{eq:zobj\n\t}\n\t\\label{fig:optical_expansion_za}\n\\end{figure}\n\n\\subsection{Object Depth Solution}\n\nTo find object depth $d_i$ in \\eqref{eq:c}, we first redefine $d_i$ in terms of its components as\n\\begin{align}\n\td_i := z_i - z_{\\text{object}},\n\t\\label{eq:di}\n\\end{align}\nwhere $z_{\\text{object}}$ is the object's static position on the optical axis and $\\dot{z}_{\\text{object}}=0$ (see Fig.~\\ref{fig:optical_expansion_za}).\nSubstituting \\eqref{eq:di} in \\eqref{eq:c}, we can now relate observations as\n\\begin{align}\n\t(z_i - z_{\\text{object}}) \\sqrt{a_i} = (z_j - z_{\\text{object}}) \\sqrt{a_j} = c.\n\n\t\\label{eq:cobj}\n\\end{align}\nFrom \\eqref{eq:cobj}, we can solve $z_{\\text{object}}$ from any two unique observations ($z_i \\neq z_j$) as\n\\begin{align}\n\tz_{\\text{object}} = \\frac{z_i \\sqrt{a_i} - z_j \\sqrt{a_j}}{\\sqrt{a_i} - \\sqrt{a_j}} = \\frac{z_i - z_j \\frac{\\sqrt{a_j}}{\\sqrt{a_i}}}{1 - \\frac{\\sqrt{a_j}}{\\sqrt{a_i}}}.\n\t\\label{eq:zobj}\n\\end{align}\nSubstituting 
$z_{\\text{object}}$ in \\eqref{eq:di}, we can now find object depth $d_i$ at any observation.\n\n\\section{Learning Object Depth from Camera Motion and Video Object Segmentation}\n\\label{sec:learn}\n\nUsing the optical expansion model from Section~\\ref{sec:solve}, we design a deep network, \\textbf{O}bject \\textbf{D}epth \\textbf{N}etwork (ODN), \nthat learns to predict the depth of segmented objects given a series of binary masks $\\mathsf{M}$ \\eqref{eq:masks} and changes in camera position $\\mathbf{z}$ \\eqref{eq:distances}.\nTo keep ODN broadly applicable, we formulate a normalized relative distance input in Section~\\ref{sec:norm}.\nIn Sections~\\ref{sec:dist} and \\ref{sec:scale}, we derive three unique losses for learning depth estimation.\nAfter some remarks on using intermediate observations in Section~\\ref{sec:intermediate}, we detail our ODN architecture in Section~\\ref{sec:net}.\n\n\\subsection{Normalized Relative Distance Input}\n\\label{sec:norm}\n\nTo learn to estimate a segmented object's depth, we first derive a normalized relative distance input that increases generalization.\nAs in Section~\\ref{sec:solve}, assume we are given a set of $n$ segmentation masks $\\mathsf{M}$ with corresponding camera positions $\\mathbf{z}$.\nWe can use $\\mathsf{M}$ and $\\mathbf{z}$ as inputs to predict object depth, however, a direct $\\mathbf{z}$ input enables a learned prior based on absolute camera position, which limits applicability at inference. \nTo avoid this, we define a relative distance input\n\\begin{align}\n\t\\Delta \\mathbf{z} := \\{z_2 - z_1, z_3 - z_1, \\cdots, z_n - z_1\\},\n\n\t\\label{eq:rel}\n\\end{align}\nwhere $z_1, z_2, \\cdots, z_n$ are the sorted $\\mathbf{z}$ positions with the minimum $z_1$ closest to the object (see Fig.~\\ref{fig:optical_expansion_za}) and $\\Delta \\mathbf{z} \\in \\mathbb{R}^{n-1}$. \nAlthough $\\Delta \\mathbf{z}$ consists only of relative changes in position, it still requires learning a specific SI unit of distance and enables a prior based on camera movement range.\nThus, we normalize \\eqref{eq:rel} as\n\\begin{align}\n\t\\mathbf{\\bar{z}} := \\Big\\{\\frac{z_i - z_1}{z_n - z_1} | z \\in \\mathbf{z}, 1 < i < n \\Big\\},\n\t\\label{eq:znorm}\t\n\\end{align}\nwhere $z_n - z_1$ is the camera move range, $\\frac{z_i - z_1}{z_n - z_1} \\in (0,1)$, and $\\mathbf{\\bar{z}} \\in \\mathbb{R}^{n-2}$.\n\nUsing $\\mathbf{\\bar{z}}$ as our camera motion input increases the general applicability of ODN.\nFirst, $\\mathbf{\\bar{z}}$ uses the relative difference formulation, so ODN does not learn to associate depth with an absolute camera position.\nSecond, $\\mathbf{\\bar{z}}$ is dimensionless, so our trained ODN can use camera movements on the scale of millimeters or kilometers (it makes no difference). 
\nFinally, $\\mathbf{\\bar{z}}$ is made a more compact motion input by removing the unnecessary constants $\\frac{z_1 - z_1}{z_n - z_1} = 0$ and $\\frac{z_n - z_1}{z_n - z_1} = 1$ in \\eqref{eq:znorm}.\n\n\n\\subsection{Normalized Relative Depth Loss}\n\\label{sec:dist}\n\nOur basic depth loss, given input masks $\\mathsf{M}$ \\eqref{eq:masks} and relative distances $\\Delta \\mathbf{z}$ \\eqref{eq:rel}, is\n\\begin{align}\n\t\\mathcal{L}_d(\\textbf{W}) := d_1 - f_d(\\mathsf{M}, \\Delta \\mathbf{z}, \\mathbf{W}),\n\t\\label{eq:depthloss}\n\\end{align}\nwhere $\\textbf{W}$ are the trainable network parameters, $d_1$ is the ground truth object depth at $z_1$ \\eqref{eq:di}, and $f_d \\in \\mathbb{R}$ is the predicted depth.\nTo use the normalized distance input $\\mathbf{\\bar{z}}$ \\eqref{eq:znorm}, we modify \\eqref{eq:depthloss} and define a normalized depth loss as\n\\begin{align}\n\n\n\n\n\n\n\t\\mathcal{L}_{\\bar{d}}(\\textbf{W}) := \\frac{d_1}{z_n - z_1} - f_{\\bar{d}}(\\mathsf{M}, \\mathbf{\\bar{z}}, \\mathbf{W}),\n\n\t\\label{eq:normloss}\n\\end{align} \nwhere $\\frac{d_1}{z_n - z_1}$ is the normalized object depth and $f_{\\bar{d}}$ is a dimensionless depth prediction that is in terms of the input camera movement range.\nTo use $f_{\\bar{d}}$ at inference, we multiply the normalized output $f_{\\bar{d}}$ by $(z_n - z_1)$ to find $d_1$.\n\n\\subsection{Relative Scale Loss}\n\\label{sec:scale}\n\nWe increase depth accuracy and simplify ODN's prediction by learning to estimate relative changes in segmentation scale.\nIn Section~\\ref{sec:dist}, we define loss functions that use a similar input-output paradigm to the analytic solution in Section~\\ref{sec:solve}.\nHowever, training ODN to directly predict depth requires learning many operations.\nAlternatively, if ODN only predicts the relative change in segmentation scale, we can finish calculating depth using \\eqref{eq:zobj}.\nThus, we define a loss for predicting the relative scale as\n\\begin{align}\n\t\\mathcal{L}_{\\ell}(\\textbf{W}) := \\frac{\\ell_n}{\\ell_1} - f_\\ell(\\mathsf{M}, \\mathbf{\\bar{z}}, \\mathbf{W}),\n\n\t\\label{eq:scaleloss}\n\\end{align}\nwhere $\\frac{\\ell_n}{\\ell_1}=\\frac{d_1}{d_n} \\in (0,1)$ \\eqref{eq:elld} is the ground truth distance-based change in scale between $\\mathbf{M}_n$ and $\\mathbf{M}_1$ and $f_\\ell$ is the predicted scale change.\nTo use $f_\\ell$ at inference, we output $f_\\ell \\approx \\frac{\\ell_n}{\\ell_1}$ and, using \\eqref{eq:galileo} to substitute $\\frac{\\ell_j}{\\ell_i}$ for $\\frac{\\sqrt{a_j}}{\\sqrt{a_i}}$ in \\eqref{eq:zobj}, find $z_{\\text{object}}$ as\n\\begin{align}\n\tz_{\\text{object}} = \\frac{z_1 - z_n f_\\ell }{1 - f_\\ell } \\approx \\frac{z_1 - z_n \\big( \\frac{\\ell_n}{\\ell_1} \\big) }{1 - \\big( \\frac{\\ell_n}{\\ell_1} \\big) }.\n\n\t\\label{eq:zobjscale}\n\\end{align}\nAfter finding $z_{\\text{object}}$ in \\eqref{eq:zobjscale}, we use \\eqref{eq:di} to find object depth as $d_1 = z_1 - z_\\text{object}$.\n\n\\subsection{Remarks on using Intermediate Observations}\n\\label{sec:intermediate}\n\nAlthough the ground truth label $d_1$ in \\eqref{eq:depthloss}-\\eqref{eq:normloss} is determined only by camera position $z_1$ and label $\\frac{\\ell_n}{\\ell_1}$ in \\eqref{eq:scaleloss} is determined only by endpoint masks $\\mathbf{M}_n$, $\\mathbf{M}_1$, we emphasize that intermediate mask and distance inputs are still useful.\nConsider that, first, the ground truth mask scale monotonically decreases across all observations (i.e., $\\forall i, \\ell_{i+1} < 
\\ell_i$).\nSecond, the distance inputs make it possible to extrapolate $d_1$ and $\\frac{\\ell_n}{\\ell_1}$ from intermediate changes in scale.\nThird, if $z_1$, $z_n$, $\\mathbf{M}_1$, or $\\mathbf{M}_n$ have significant errors, intermediate observations provide the best prediction for $d_1$ or $\\frac{\\ell_n}{\\ell_1}$.\nFinally, experiments in Section~\\ref{sec:exp_n} show that intermediate observations improve performance for networks trained on \\eqref{eq:depthloss}, \\eqref{eq:normloss}, or \\eqref{eq:scaleloss}.\n\n\\subsection{Object Depth Estimation Network Architecture}\n\\label{sec:net}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{DN_network_2.pdf}\n\t\\caption{ \\textbf{Object Depth Network Architecture} \n\t}\n\t\\label{fig:network}\n\\end{figure}\n\nOur ODN architecture is shown in Fig.~\\ref{fig:network}.\nThe input to the first convolution layer consists of $n$ 112$\\times$112 binary segmentation masks and, for three configurations in Section~\\ref{sec:radial}, a radial image.\nThe first convolution layer uses 14$\\times$14 kernels, and the remaining convolution layers use 3$\\times$3 kernels in four residual blocks \\cite{HeEtAl16}.\nAfter average pooling the last residual block, the relative camera position (e.g., $\\mathbf{ \\bar{z}}$) is included with the input to the first two fully-connected layers, which use ReLU activation and 20\\% dropout for all inputs during training \\cite{dropout14}.\nAfter the first two fully-connected layers, our ODN architecture ends with one last fully-connected neuron that, depending on chosen loss, is the output object depth $f_d(\\mathsf{M}, \\Delta \\mathbf{z}, \\mathbf{W}) \\in \\mathbb{R}$ using \\eqref{eq:depthloss}, normalized object depth $f_{\\bar{d}}(\\mathsf{M}, \\mathbf{\\bar{z}}, \\mathbf{W})$ using \\eqref{eq:normloss}, or relative scale $f_\\ell(\\mathsf{M}, \\mathbf{\\bar{z}}, \\mathbf{W})$ using \\eqref{eq:scaleloss}.\n\n\\section{ODMS Dataset}\n\\label{sec:dataset}\n\nTo train our object depth networks from Section~\\ref{sec:learn}, we introduce the \\textbf{O}bject \\textbf{D}epth via \\textbf{M}otion and \\textbf{S}egmentation dataset (ODMS).\nIn Section~\\ref{sec:gen_masks}, we explain how ODMS continuously generates new labeled training data, making learning-based techniques feasible in this problem space.\nIn Section~\\ref{sec:sets}, we describe the robotics-, driving-, and simulation-based test and validation sets we develop for evaluation.\nFinally, in Section~\\ref{sec:train}, we detail our ODMS training implementation.\n\n\\subsection{Generating Random Object Masks at Scale}\n\\label{sec:gen_masks}\n\n\\subsubsection{Camera Distance and Depth}\nWe generate new training data by, first, determining $n$ random camera distances (i.e., $\\mathbf{z}$ \\eqref{eq:distances}) for each training example.\nTo make ODMS configurable, assume we are given a minimum camera movement range ($\\Delta z_{\\text{min}}$) and minimum and maximum object depths ($d_{\\text{min}}$, $d_{\\text{max}}$). 
\nUsing these parameters, we define distributions for uniform random variables to find the endpoints\n\\begin{align}\n\t\\label{eq:z1}\n\tz_1 \\sim & ~\\mathcal{U}[d_{\\text{min}},d_{\\text{max}}-\\Delta z_{\\text{min}}], \\\\\n\tz_n \\sim & ~\\mathcal{U}[z_1 + \\Delta z_{\\text{min}},d_{\\text{max}}],\n\t\\label{eq:zn}\n\\end{align}\nand, for $1 < i < n$, the remaining intermediate camera positions \n\\begin{align}\n\tz_i \\sim \\mathcal{U}(z_1, z_n).\n\t\\label{eq:zi}\n\\end{align}\nUsing \\eqref{eq:z1}-\\eqref{eq:zi} to select $\\mathbf{z} = \\{z_1, \\cdots, z_n\\}$ ensures that the random camera movement range is independent of the number of observations $n$.\nFor the object depth label $d_1$, we choose an optical axis such that $z_\\text{object}=0$ and $d_1=z_1$ \\eqref{eq:di}.\nWe generate data in this work using $\\Delta z_\\text{min}=d_\\text{min}=0.1~\\textrm{m}$ and $d_\\text{max}=0.7~\\textrm{m}$.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{data_gen.pdf}\n\t\\caption{ \\textbf{Generating Random Object Masks at Scale.} \n\t\tInitializing from a random number of points within a variable boundary (\\textit{left}), random curves complete the contour of each simulated object (\\textit{middle left}).\n\t\n\t\n\t\tThese contours are then scaled for each simulated distance and output as a filled binary mask (\\textit{right}).\n\t\tEach generated object is unique\n\t\n\t}\n\t\\label{fig:data_gen}\n\\end{figure}\n\n\\subsubsection{Random Object Contour and Binary Masks}\nAfter determining $\\mathbf{z}$, we generate a random object with $n$ binary masks (i.e., $\\mathsf{M}$ \\eqref{eq:masks}) scaled for each distance in $\\mathbf{z}$ (see Fig.~\\ref{fig:data_gen}).\nTo make each object unique, we randomly select parameters that change the object's size ($s_\\mathbf{p}$), number of contour points ($n_\\mathbf{p}$), and contour smoothness ($r_B$, $\\rho_B$).\nIn this work, we randomly select $s_\\mathbf{p}$ from $\\{100, 200, 300, 400\\}$ and $n_\\mathbf{p}$ from $\\{3, 4, \\cdots, 10\\}$.\nUsing $s_\\mathbf{p}$ and $n_\\mathbf{p}$, we select each of the random initial contour points, $\\mathbf{p}_i \\in \\mathbb{R}^2$ for $1 \\leq i \\leq n_\\mathbf{p}$, as\n\\begin{align}\n\t\\mathbf{p}_i= [ x_i,y_i ]',~\n\tx_i \\sim \\mathcal{U}[0,s_\\mathbf{p}],~ y_i \\sim \\mathcal{U}[0,s_\\mathbf{p}].\n\t\\label{eq:xy}\n\\end{align}\n\nTo complete the object's contour, we use cubic B\\'ezier curves with random smoothness to connect each set of adjacent coordinates $\\mathbf{p}_i$, $\\mathbf{p}_j$ from \\eqref{eq:xy}.\nEssentially, $r_B$ and $\\rho_B$ determine polar coordinates for the two intermediate B\\'ezier control points of each curve. 
\n$\\arctan (\\rho_B)$ is the rotation of a control point away from the line connecting $\\mathbf{p}_i$ and $\\mathbf{p}_j$,\nwhile $r_B $ is the relative radius of a control point away from $\\mathbf{p}_i$ (e.g., $r_B=1$ has a radius of $\\lVert \\mathbf{p}_i - \\mathbf{p}_j \\rVert$).\nIn this work, we randomly select $r_B$ from $\\{0.01, 0.05, 0.2, 0.5\\}$ and $\\rho_B$ from $\\{0.01, 0.05, 0.2\\}$ for each object.\nIn general, lower $r_B$ and $\\rho_B$ values result in a more straight-edged contour, while higher values result in a more curved and widespread contour.\nAs two illustrative examples in Fig.~\\ref{fig:data_gen}, the top ``straight-edged\" object uses $r_B=\\rho_B=0.01$ and the bottom ``curved\" object uses $r_B=0.5$ and $\\rho_B=0.2$.\n\nTo simulate object segmentation over multiple distances, we scale the generated contour to match each distance $z_i \\in \\mathbf{z}$ from \\eqref{eq:z1}-\\eqref{eq:zi} and output a set of binary masks $\\mathbf{M}_i \\in \\mathsf{M}$ \\eqref{eq:masks}.\nWe let the initial contour represent the object's image projection at $d_{\\text{min}}$, and designate this initial scale as $\\ell_{\\text{min}}=1$.\nHaving chosen an optical axis such that $z_\\text{object}=0$ in \\eqref{eq:di} (i.e., $d_i=z_i$), we modify \\eqref{eq:elld} to find the contour scale of each mask, $\\ell_i $ for $1 \\leq i \\leq n$, as\n\\begin{align}\n\t\\ell_i = \\frac{d_\\text{min} \\ell_\\text{min}}{d_i}= \\frac{d_\\text{min}}{z_i}.\n\t\\label{eq:gen_scale}\n\\end{align}\nAfter finding $\\ell_i$, we scale, fill, and add the object contour to each mask $\\mathbf{M}_i$.\nIn this work, we position the contour by centering the scaled boundary ($\\ell_i s_\\mathbf{p}$) in a 480$\\times$640 mask.\nOur complete object-generating process is shown in Fig.~\\ref{fig:data_gen}.\n\n\n\\subsection{Robotics, Driving, and Simulation Validation and Test Sets}\n\\label{sec:sets}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{val_test_set.jpg}\n\t\\caption{ \\textbf{Robot Experiment Data.}\n\t\tHSR view of validation (yellow bin) and test set objects (blue bin) using head-mounted RGBD camera (\\textit{left}).\n\t\tUnfortunately, the depth image is missing many objects (\\textit{middle left}).\n\t\tHowever, using 4,400 robot-collected examples (\\textit{middle right}), we find that segmentation-based object depth works (\\textit{right})\n\t}\n\t\\label{fig:val_test}\n\\end{figure}\n\n\nWe test object depth estimation in a variety of settings using four ODMS validation and test sets.\nThese are based on robot experiments, driving, and simulated data with and without perturbations and provide a repeatable benchmark for ablation studies and future methods.\nAll examples include $n \\geq 10$ observations.\n\n\n\\subsubsection{Robot Validation and Test Set}\nOur robot experiment data provide an evaluation for object depth estimation from a physical platform using video object segmentation on real-world objects in a practical use case.\nWe collect data using a Toyota Human Support Robot (HSR), which has a 4-DOF manipulator arm with an end effector-mounted wide-angle grasp camera \\cite{UiYamaguchi2015,HSR_journal}.\nUsing HSR's prismatic torso, we collect 480$\\times$640 grasp-camera images as the end effector approaches an object of interest, with the intent that HSR can estimate the object's depth using motion and segmentation.\nWe use 16 custom household objects for our validation set and 24 YCB objects \\cite{YCB} for our test set 
(Fig.~\\ref{fig:val_test}, left).\nFor each object, we collect 30 images distanced 2~\\textrm{cm} apart of the object in isolation and, as an added challenge, 30 more images in a cluttered setting (see Fig.~\\ref{fig:val_test}, middle right).\nThe ground truth object depth ($d_1$) is manually measured at the closest camera position and propagated to the remaining images using HSR's kinematics and encoder values, which also measure camera positions ($\\mathbf{z}$).\nTo generate binary masks ($\\mathsf{M}$), we segment objects using OSVOS \\cite{OSVOS}, which we fine-tune on each object using three annotated images from outside of the validation and test sets.\nWe vary the input camera movement range between 18-58~\\textrm{cm} and object depth ($d_1$) between 11-60~\\textrm{cm} to generate 4,400 robot object depth estimation examples (1,760 validation and 2,640 test).\n\n\\subsubsection{Driving Validation and Test Set}\nOur driving data provide an evaluation for object depth estimation in a faster moving automotive domain with greater camera movement and depth distances.\nOur goal is to track driving obstacles using an RGB camera, segmentation, and vehicle odometry.\nChallenges include changing object perspectives, camera rotation from vehicle turning, and moving objects.\nWe collect data using the SYNTHIA Dataset \\cite{SYNTHIA}, which includes ground truth semantic segmentation, depth images, and vehicle odometry in a variety of urban scenes and weather conditions.\nTo generate binary masks ($\\mathsf{M}$), we use SYNTHIA's semantic segmentation over a series of 760$\\times$1280 frames for unique instances of pedestrians, bicycles, and cars (see Fig.~\\ref{fig:val_synthia}).\nFor each instance, the ground truth object depth ($d_1$) is the mean depth image values contained within corresponding mask $\\mathbf{M}_1$.\nAs the vehicle moves, we track changes in camera position ($\\mathbf{z}$) along the optical axis of position $z_1$.\nWith an input camera movement range between 4.2-68~\\textrm{m} and object depth ($d_1$) between 1.5-62~\\textrm{m}, we generate 1,250 driving object depth estimation examples (500 validation and 750 test).\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{synthia_val_test_set.jpg}\n\t\\caption{ \\textbf{Driving Test Set Examples and Results.}\n\t\tThe ODN$_\\ell$ object depth error is -6 and -23~\\textrm{cm} for the pedestrians, -10~\\textrm{cm} for the bicycle, and -4~\\textrm{cm} for the car\n\t\n\t\n\t}\n\t\\label{fig:val_synthia}\n\\end{figure}\n\n\\subsubsection{Simulation Validation and Test Sets}\n\\label{sec:sim_test}\nFinally, we generate a set of normal and perturbation-based data for simulated objects.\nThe normal set and the continuously-generated training data we use in Section~\\ref{sec:train} both use the same mask-generating procedure from Section~\\ref{sec:gen_masks}, so the normal set provides a consistent evaluation for the type of simulated objects we use during training.\n\nTo test robustness for segmentation errors, we also generate a set of simulated objects with random perturbations added to each mask, $\\mathbf{M}_i$ for $1 \\leq i \\leq n$, as\n\\begin{align}\n\t{p}_i \\sim \\mathcal{N}(0,1), \n\t~\\mathbf{M}_{i,{p}} = \\begin{cases}\n\t\t\\text{dilate}(\\mathbf{M}_i, \\lfloor {p}_i + 0.5 \\rfloor) & \\text{if} ~ {p}_i \\geq 0\\\\\n\t\t\\text{erode}(\\mathbf{M}_i, \\lfloor {p}_i + 0.5 \\rfloor) & \\text{if} ~ {p}_i < 0\\\\\n\t\\end{cases},\n\t\\label{eq:perturb}\n\\end{align}\nwhere $\\mathcal{N}(0,1)$ is a 
Gaussian distribution with $\\mu=0$, $\\sigma^2=1$, ${p}_i$ randomly determines the perturbation type and magnitude, and $\\mathbf{M}_{i,{p}}$ is the perturbed version of initial mask $\\mathbf{M}_i$.\nNotably, the sign of ${p}_i$ determines a dilation or erosion perturbation, and the rounded magnitude of ${p}_i$ determines the number of iterations using a square connectivity equal to one.\nWhen generating perturbed masks $\\mathbf{M}_{i,{\\rm p}}$, we make no other changes to input data or ground truth labels.\n\nWe generate 5,000 object depth estimation examples (2,000 validation and 3,000 test) for both the normal and perturbation-based simulation sets.\n\n\n\\subsection{Training Object Depth Networks using ODMS}\n\\label{sec:train}\n\nUsing the architecture in Section~\\ref{sec:net}, we train networks for depth loss ${\\mathcal{L}_{d}}$ \\eqref{eq:depthloss}, normalized relative depth loss $\\mathcal{L}_{\\bar{d}}$ \\eqref{eq:normloss}, and relative scale loss $\\mathcal{L}_{\\ell}$ \\eqref{eq:scaleloss}.\nWe call these networks ODN$_d$, ODN$_{\\bar{d}}$, and ODN$_\\ell$ respectively.\nWe train each network with a batch size of 512 randomly-generated training examples using the framework in Section~\\ref{sec:gen_masks} with $n=10$ observations per prediction. We train each network for 5,000 iterations using the Adam Optimizer \\cite{adam14} with a $1\\times10^{-3}$ learning rate, which takes 2.6 days using a single GPU (GTX 1080 Ti).\nNotably, the primary time constraint for training is generating new masks, and we can train a similar configuration with $n=2$ for 5,000 iterations in 15 hours.\n\n\n\\section{Experimental Results}\n\\label{sec:results}\n\nOur primary experiments and analysis use the four ODMS test sets.\nFor each test set, the number of network training iterations is determined by the best validation performance, which we check at every ten training iterations.\nWe determine the effectiveness of each depth estimation method using the mean percent error for each test set, which is calculated for each example as\n\\begin{align}\n\t\\text{Percent Error} = \\left| \\frac{d_1 - \\hat{d}_1}{d_1} \\right| \\times 100 \\%,\n\t\\label{eq:percent}\n\\end{align}\nwhere $d_1$ and $\\hat{d}_1$ are ground truth and predicted object depth at final pose $z_1$.\n\n\\subsection{ODMS Test Results}\n\n\\setlength{\\tabcolsep}{6.75pt}\n\\begin{table} [t]\n\t\\centering\n\t\\small\n\t\\caption{\\textbf{ODMS Test Set Results}\t\n\t}\n\t\\begin{tabular}{| l | c | c | c| c | c | c | c |}\n\t\t\\hline\t\\multicolumn{1}{|c|}{} & Object &\t$n$\t&\t\\multicolumn{5}{c|}{ Mean Percent Error \\eqref{eq:percent} } \\\\\t\\cline{4-8}\n\t\t\\multicolumn{1}{|c|}{Config.}\t& Depth & Input & \\multicolumn{1}{c|}{Robot} & \\multicolumn{1}{c|}{Driving} & \\multicolumn{2}{c|}{Simulated Objects} & All \\\\ \\cline{6-7}\n\t\t\\multicolumn{1}{|c|}{ID} & Method &\tMasks &\t\\multicolumn{1}{c|}{Objects} & \\multicolumn{1}{c|}{Objects}\t&\t\\multicolumn{1}{c|}{Normal} & \\multicolumn{1}{c|}{Perturb} & Sets \\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_\\ell$\t&\t$\\mathcal{L}_{\\ell}$ \\eqref{eq:scaleloss}\t&\t10\t&\t19.3\t& \\bf\t30.1\t&\t8.3\t&\t18.2\t& \\bf\t19.0\t\\\\\t\n\t\tODN$_{\\bar{d}}$\t&\t$\\mathcal{L}_{\\bar{d}}$ \\eqref{eq:normloss} \t&\t10\t&\t18.5\t&\t30.9\t&\t8.2\t&\t18.5\t&\t19.0\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_d$\t&\t${\\mathcal{L}_{d}}$ \\eqref{eq:depthloss}\t&\t10\t& \\bf\t18.1\t&\t47.5\t& \\bf\t5.1\t& \\bf\t11.2\t&\t20.5\t\\\\\t\n\t\tVOS-DE 
\t&\t\\cite{GrFlCo20}\t&\t10\t&\t32.6\t&\t36.0\t&\t7.9\t&\t33.6\t&\t27.5\t\\\\\t\\hline\n\t\n\t\\end{tabular}\n\t\\label{tab:main}\n\\end{table}\n\nObject depth estimation results for all four ODMS test sets are provided in Table~\\ref{tab:main} for our three ODN configurations and VOS-DE \\cite{GrFlCo20}.\nWe use $n=10$ observations, and ``All Sets'' is an aggregate score across all test sets.\nNotably, VOS-DE uses only the largest connected region of each mask to reduce noise.\n\nThe relative scale-based ODN$_\\ell$ performs best on the Driving set and overall.\nWe show a few quantitative depth estimation examples for ODN$_\\ell$ in Fig.~\\ref{fig:val_test} and Fig.~\\ref{fig:val_synthia}.\nNormalized depth-based ODN$_{\\bar{d}}$ comes in second overall, and depth-based ODN$_{d}$ performs best in three categories but worst in driving.\nBasically, ODN$_{d}$ gets a performance boost from a camera movement range- and depth-based prior (i.e., $\\Delta \\mathbf{z}$ and $f_d$ in \\eqref{eq:depthloss}) at the cost of applicability to other domains where the scale of camera input and depth will vary.\nOn the other hand, the generalization of ODN$_{\\bar{d}}$ and ODN$_\\ell$ from small distances in training to large distances in Driving is highly encouraging.\nVOS-DE performs the worst overall, particularly on test sets with segmentation errors or moving objects.\nHowever, VOS-DE does perform well on normal simulated objects, which only have mask discretization errors. \n\n\n\\subsubsection{Results on Changing the Number of Observations}\n\\label{sec:exp_n}\n\nObject depth estimation results for varied number of observations are provided in Table~\\ref{tab:num}.\nWe repeat training and validation for each new configuration to learn depth estimation with less observations.\nAs $n$ changes, each test set example uses the same endpoint observations (i.e., $\\mathbf{M}_1,\\mathbf{M}_n,z_1,z_n$).\nHowever, the $n-2$ intermediate observations are evenly distributed and do change (e.g., $n=2$ has none).\nNotably, at $n=2$, VOS-DE is equivalent to \\eqref{eq:zobj} and $\\mathbf{\\bar{z}} \\in \\mathbb{R}^{n-2}$ \\eqref{eq:znorm} gives no input to ODN$_{\\bar{d}}$, ODN$_\\ell$.\n\nODN$_\\ell$ has the most consistent and best performance for all $n$ settings, aside from a second place to ODN$_{\\bar{d}}$ at $n=5$.\nODN$_\\ell$ also requires the fewest training iterations for all $n$.\nIn general, ODN$_{\\bar{d}}$ and ODN$_{d}$ performance starts to decrease for $n \\leq 3$. \nVOS-DE performance decreases most significantly at $n=2$, having 2.5 times the error of ODN$_\\ell$ at $n=2$.\nAmazingly, all $n=2$ ODN configurations outperform $n=10$ VOS-DE.\nThus, even with significantly less input data, our learning-based approach outperforms prior work.\n\n\\setlength{\\tabcolsep}{4pt} \n\\begin{table} [t]\n\t\\centering\n\t\\small\n\t\\caption{\\textbf{ODMS Test Set Results vs. 
Number of Observations}\t\n\t}\n\t\\begin{tabular}{| l | c | c | c| c | c | c | c | c | c |}\n\t\t\\hline\n\t\t\\multicolumn{1}{|c|}{Config.}\t& Depth & \t\\multicolumn{4}{c|}{ Overall Mean Percent Error } & \\multicolumn{4}{c|}{ Average Training Iterations}\\\\\t\\cline{3-10}\n\t\t\\multicolumn{1}{|c|}{ID} & Method &\t$n=2$ &\t$n=3$ &\t$n=5$ &\t$n=10$ & $n=2$ &\t$n=3$ &\t$n=5$ &\t$n=10$ \\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_\\ell$\t&\t$\\mathcal{L}_{\\ell}$ \\eqref{eq:scaleloss}\t& \\bf\t20.4\t& \\bf\t19.9\t&\t20.0\t& \\bf\t19.0\t& \\bf\t2,590\t& \\bf\t3,460\t& \\bf\t3,060\t& \\bf\t3,138\t\\\\\t\n\t\tODN$_{\\bar{d}}$\t&\t$\\mathcal{L}_{\\bar{d}}$ \\eqref{eq:normloss} \t&\t22.7\t&\t20.9\t& \\bf\t19.9\t&\t19.0\t&\t3,993\t&\t4,330\t&\t3,265\t&\t3,588\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_d$\t&\t${\\mathcal{L}_{d}}$ \\eqref{eq:depthloss}\t&\t21.6\t&\t21.2\t&\t20.5\t&\t20.5\t&\t4,138\t&\t4,378\t&\t4,725\t&\t3,300\t\\\\\t\n\t\tVOS-DE\t&\t \\cite{GrFlCo20}\t&\t50.3\t&\t29.7\t&\t27.6\t&\t27.5\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\n\t\\end{tabular}\n\t\\label{tab:num}\n\\end{table}\n\n\n\n\\setlength{\\tabcolsep}{4.7pt} \n\\begin{table} [t]\n\t\\centering\n\t\\small\n\t\\caption{\\textbf{Test Results with Perturb Training Data and Radial Input Image}\t\n\t\n\t}\n\t\\begin{tabular}{| l | c | c | c | c| c | c | c | c |}\n\t\t\\hline\t\\multicolumn{1}{|c|}{} & Object & Radial& Type of &\t\\multicolumn{5}{c|}{ Mean Percent Error \\eqref{eq:percent} } \\\\\t\\cline{5-9}\n\t\t\\multicolumn{1}{|c|}{Config.}\t& Depth & Input & Training & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{2}{c|}{Simulated} & All \\\\ \\cline{7-8}\n\t\t\\multicolumn{1}{|c|}{ID} & Method & Image & Data &\t\\multicolumn{1}{c|}{Robot} & \\multicolumn{1}{c|}{Driving}\t&\t\\multicolumn{1}{c|}{Normal} & \\multicolumn{1}{c|}{Perturb} & Sets \\\\\t\\hline \n\t\t\\hline \\multicolumn{9}{|c|}{Perturb Training Data}\\\\ \\hline\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell p}$\t&\t$\\mathcal{L}_{\\ell}$ \\eqref{eq:scaleloss}\t&\tNo\t&\tPerturb\t&\t22.2\t& \\bf\t29.0\t&\t11.1\t&\t13.0\t&\t18.8\t\\\\\t\n\t\tODN$_{\\bar{d} p}$\t&\t$\\mathcal{L}_{\\bar{d}}$ \\eqref{eq:normloss} \t&\tNo\t&\tPerturb\t&\t25.8\t&\t31.4\t&\t11.1\t&\t13.2\t&\t20.4\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d p}$\t&\t${\\mathcal{L}_{d}}$ \\eqref{eq:depthloss}\t&\tNo\t&\tPerturb\t&\t20.1\t&\t60.9\t&\t7.3\t& \\bf\t8.2\t&\t24.1\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Radial Input Image}\\\\ \\hline\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell r}$\t&\t$\\mathcal{L}_{\\ell}$ \\eqref{eq:scaleloss}\t&\tYes\t&\tNormal\t& \\bf\t13.1\t&\t31.7\t&\t8.6\t&\t17.9\t& \\bf\t17.8\t\\\\\t\n\t\tODN$_{\\bar{d} r}$\t&\t$\\mathcal{L}_{\\bar{d}}$ \\eqref{eq:normloss} \t&\tYes\t&\tNormal\t&\t15.2\t&\t30.9\t&\t8.4\t&\t18.5\t&\t18.3\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d r}$\t&\t${\\mathcal{L}_{d}}$ \\eqref{eq:depthloss}\t&\tYes\t&\tNormal\t&\t13.4\t&\t48.6\t& \\bf\t5.6\t&\t11.2\t&\t19.7\t\\\\\t\\hline\n\t\n\t\n\t\n\t\n\t\\end{tabular}\n\t\\label{tab:rad}\n\\end{table}\n\n\\subsubsection{Results with Perturbation Training Data}\nWe train each $n=10$ ODN on continuously-generated perturbation data \\eqref{eq:perturb} from Section~\\ref{sec:sim_test}.\nAs shown in Table~\\ref{tab:rad}, this improves performance for each ODN on the Perturb test set, and demonstrates that we can learn robust depth estimation for specific errors.\nThe perturbed ODN$_\\ell$ configuration, ODN$_{\\ell p}$, 
improves performance overall and has the best Driving result of any method.\n\n\\subsubsection{Results with Radial Input Image}\n\\label{sec:radial}\nFor our final ODMS results in Table~\\ref{tab:rad}, we train each $n=10$ ODN with an additional radial input image for convolution.\nPixel values $\\in [0,1]$ are scaled radially from 1 at the center to 0 at each corner (see Fig.~\\ref{fig:network}).\nThis serves a similar purpose to coordinate convolution \\cite{LiEtAl18} but simply focuses on how centered segmentation mask regions are.\nThis improves overall performance for each ODN, particularly on the Robot test set, where objects are generally centered for grasping and peripheral segmentation errors can be ignored.\nNotably, ODN$_{\\ell r}$ has the best Robot and overall result of any method.\n\n\n\\subsection{Robot Object Depth Estimation and Grasping Experiments}\n\\label{sec:robot_exp}\n\nAs a live robotics demonstration, we use ODN$_{\\ell r}$ to locate objects for grasping.\nExperiments start with HSR's grasp camera approaching an object while generating segmentation masks at 1~\\textrm{cm} increments using pre-trained OSVOS.\nOnce ten masks are available, ODN$_{\\ell r}$ starts predicting depth as HSR continues approaching and generating masks.\nBecause ODN$_{\\ell r}$'s prediction time is negligible compared to HSR's data-collection time, we use the median depth estimate over multiple permutations of the collected data to improve robustness against segmentation errors.\nOnce ODN$_{\\ell r}$ estimates the object to be within 20~\\textrm{cm} of grasping, HSR stops collecting data and grasps the object at that depth.\nUsing this active depth estimation process, we successfully locate and grasp consecutive objects at varied heights in a variety of settings, including placing laundry in a basket and clearing garbage off a table (see Fig.~\\ref{fig:front_vos}).\nWe show these robot experiments in our Supplementary Video at: \\url{https:\/\/youtu.be\/c90Fg_whjpI}.\n\n\n\\section{Conclusions}\n\nWe introduce the \\textbf{O}bject \\textbf{D}epth via \\textbf{M}otion and \\textbf{S}egmentation (ODMS) dataset, which continuously generates synthetic training data with random camera motion, objects, and even perturbations.\nUsing the ODMS dataset, we train the first deep network to estimate object depth from motion and segmentation, leading to as much as a 59\\% reduction in error over previous work.\nBy using ODMS's simple binary mask- and distance-based input, our network's performance transfers across sim-to-real and diverse application domains, as demonstrated by our results on the robotics-, driving-, and simulation-based ODMS test sets.\nFinally, we use our network to perform object depth estimation in real-time robot grasping experiments, demonstrating that our segmentation-based approach to depth estimation is a viable tool for real-world applications requiring 3D perception from a single RGB camera.\n\n\\subsection*{Acknowledgements}\nWe thank Madan Ravi Ganesh, Parker Koch, and Luowei Zhou for various discussions throughout this work. Toyota Research Institute (``TRI'') provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.\n\n\\section*{Appendix}\n\n\\subsection*{Least-squares Solution for Object Depth}\n\nIn previous work \\cite{GrFlCo20}, we propose a least-squares object depth solution (VOS-DE) that uses more than two observations to add robustness against camera position and segmentation errors.\nWe include this solution here for reference.\nThe VOS-DE formulation derives an alternative form of \\eqref{eq:cobj} as\n\\begin{align}\n\tz_{\\text{object}} \\sqrt{a_i} + c = z_i \\sqrt{a_i},\n\\end{align}\nwhich over $n$ observations in $\\mathbf{A}\\mathbf{x}=\\mathbf{b}$ form yields\n\\begin{align}\n\t\\begin{bmatrix}\n\t\t\\sqrt{a_1} & 1 \\\\ \\sqrt{a_2} & 1 \\\\ \\vdots & \\vdots \\\\ \\sqrt{a_n} & 1\n\t\\end{bmatrix}\n\t\\begin{bmatrix}\n\t\t\\hat{z}_{\\text{object}} \\\\ \\hat{c}\n\t\\end{bmatrix}\n\t=\n\t\\begin{bmatrix}\n\t\tz_{1} \\sqrt{a_1} \\\\ z_{2} \\sqrt{a_2} \\\\ \\vdots \\\\ z_{n} \\sqrt{a_n}\n\t\\end{bmatrix}.\n\t\\label{eq:axb}\n\\end{align}\nSolving \\eqref{eq:axb} for $\\hat{z}_{\\text{object}}$ does provide a more robust depth estimate than the two-observation solution \\eqref{eq:zobj}.\nHowever, our learning-based approach from Section~\\ref{sec:learn} outperforms both analytic solutions in experiments.\nA minimal numerical sketch of this least-squares solve is provided at the end of the document.\n\n\\subsection*{ODMS Random Object Mask Examples}\n\nWe provide a few random object mask examples using ODMS's data-generation framework from Section~\\ref{sec:gen_masks}.\nThese synthetic object examples are shown in Fig.~\\ref{fig:synth_obj} and demonstrate the B\\'ezier curve behaviors associated with changing parameters $r_B$ and $\\rho_B$.\n\n\\begin{figure} \n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{synth_object.jpg}\n\t\\caption{ \\textbf{ODMS Random Object Mask Examples.}\n\t\tAll examples use $s_\\mathbf{p}=400$, $n_\\mathbf{p}=5$, and $\\ell=\\ell_\\text{min}=1$. 
\n\t\t$r_B$ values are 0.01, 0.05, 0.2, and 0.5 (\\textit{from left to right}) and $\\rho_B$ values are 0.01, 0.05, and 0.2 (\\textit{from top to bottom}).\n\t\tEach generated object is unique\n\t}\n\t\\label{fig:synth_obj}\n\\end{figure}\n\n\\subsection*{ODMS Validation Results}\n\nAs mentioned in Section~\\ref{sec:results}, the number of network training iterations is determined by the best validation performance, which we check at every ten training iterations.\nIn Table~\\ref{tab:all}, we provide the ODMS validation results and corresponding number of training iterations for all configurations.\nIn general, the relative performance of each configuration is consistent between the ODMS validation and test sets.\n\n\\setlength{\\tabcolsep}{1.75pt} \n\\setlength{\\tabcolsep}{4pt} \n\\begin{table}\n\t\\centering\n\t\\scriptsize\n\t\\caption{\\textbf{Complete ODMS Validation and Test Set Results}\t\n\t}\n\t\\begin{tabular}{| l | c | c | c | c | c | c | c | c |}\n\t\t\\hline\n\t\t\\multicolumn{1}{|c|}{Config.}\t & \t\\multicolumn{4}{c|}{ Mean Percent Error (Validation\/Test) } & \\multicolumn{4}{c|}{ Training Iterations}\\\\\t\\cline{2-9}\n\t\t\\multicolumn{1}{|c|}{ID} &\tRobot &\tDriving &\tNormal &\tPerturb & Robot &\tDriving &\tNormal &\tPerturb \\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Standard Configuration}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_\\ell$\t&\t21.6\/19.3\t&\t29.4\/30.1\t&\t8.2\/8.3\t&\t18.4\/18.2\t&\t2390\t&\t1920\t&\t3370\t&\t4870\t\\\\\t\n\t\tODN$_{\\bar{d}}$ \t&\t19.6\/18.5\t&\t32.0\/30.9\t&\t7.9\/8.2\t&\t18.4\/18.5\t&\t4140\t&\t2990\t&\t3690\t&\t3530\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_d$ \t&\t19.9\/18.1\t&\t48.1\/47.5\t& \\bf\t4.9\/5.1\t&\t11.5\/11.2\t&\t2380\t&\t1650\t&\t4740\t&\t4430\t\\\\\t\n\t\t\\scriptsize VOS-DE \t&\t27.4\/32.6\t&\t35.9\/36.0\t&\t7.9\/7.9\t&\t34.1\/33.6\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{$n=5$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell}$ \t&\t23.4\/20.5\t&\t31.5\/30.5\t&\t8.4\/8.6\t&\t20.2\/20.4\t&\t1000\t&\t1850\t&\t4870\t&\t4520\t\\\\\t\n\t\tODN$_{\\bar{d}}$ \t&\t22.8\/19.5\t&\t34.2\/31.1\t&\t8.4\/8.4\t&\t20.5\/20.6\t&\t1510\t&\t3450\t&\t3770\t&\t4330\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d}$\t&\t21.0\/19.4\t&\t44.6\/44.2\t&\t5.4\/5.5\t&\t13.4\/12.9\t&\t4690\t&\t4260\t&\t4980\t&\t4970\t\\\\\t\n\t\t\\scriptsize VOS-DE\t&\t29.5\/35.1\t&\t34.8\/34.6\t&\t7.8\/7.9\t&\t32.8\/32.6\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{$n=3$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell}$\t&\t20.3\/18.6\t&\t31.8\/31.1\t&\t8.4\/8.4\t&\t21.9\/21.6\t&\t1820\t&\t2750\t&\t4890\t&\t4380\t\\\\\t\n\t\tODN$_{\\bar{d}}$ \t&\t19.9\/20.6\t&\t34.7\/33.1\t&\t8.4\/8.4\t&\t21.6\/21.5\t&\t4130\t&\t4320\t&\t4620\t&\t4250\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d}$ \t&\t24.0\/21.8\t&\t45.1\/44.5\t&\t5.4\/5.6\t&\t13.8\/12.9\t&\t4800\t&\t3040\t&\t4990\t&\t4680\t\\\\\t\n\t\t\\scriptsize VOS-DE \t&\t33.7\/41.2\t&\t45.2\/34.0\t&\t8.0\/8.1\t&\t37.0\/35.7\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{$n=2$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell}$\t&\t21.3\/19.2\t&\t30.4\/31.4\t&\t8.7\/8.9\t&\t22.0\/22.0\t&\t1140\t&\t1010\t&\t3910\t&\t4300\t\\\\\t\n\t\tODN$_{\\bar{d}}$ 
\t&\t29.1\/24.2\t&\t39.6\/35.9\t&\t8.6\/8.9\t&\t21.8\/21.8\t&\t3410\t&\t4570\t&\t3370\t&\t4620\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d}$\t&\t23.3\/21.1\t&\t45.3\/44.8\t&\t5.8\/6.0\t&\t14.9\/14.4\t&\t2850\t&\t4120\t&\t4610\t&\t4970\t\\\\\t\n\t\t\\scriptsize VOS-DE \t&\t95.8\/65.5\t&\t55.0\/41.1\t&\t8.2\/8.3\t&\t90.6\/86.2\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Perturb Training Data}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell p}$\t&\t21.4\/22.2\t& \\bf\t28.6\/29.0\t&\t10.7\/11.1\t&\t12.8\/13.0\t&\t100\t&\t140\t&\t5000\t&\t5000\t\\\\\t\n\t\tODN$_{\\bar{d} p}$\t&\t25.6\/25.8\t&\t31.4\/31.4\t&\t11.0\/11.1\t&\t13.1\/13.2\t&\t420\t&\t2760\t&\t2730\t&\t4270\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d p}$\t&\t20.5\/20.1\t&\t59.4\/60.9\t&\t7.0\/7.3\t& \\bf\t8.1\/8.2\t&\t50\t&\t330\t&\t4860\t&\t4780\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Radial Input Image}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell r}$\t& \\bf\t13.8\/13.1\t&\t31.6\/31.7\t&\t8.4\/8.6\t&\t18.2\/17.9\t&\t1710\t&\t870\t&\t4940\t&\t3940\t\\\\\t\n\t\tODN$_{\\bar{d} r}$\t&\t16.6\/15.2\t&\t30.7\/30.9\t&\t8.3\/8.4\t&\t18.6\/18.5\t&\t2010\t&\t4200\t&\t4990\t&\t4440\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d r}$\t&\t14.1\/13.4\t&\t49.0\/48.6\t&\t5.5\/5.6\t&\t11.7\/11.2\t&\t2210\t&\t460\t&\t4870\t&\t4710\t\\\\\t\\hline\n\t\\end{tabular}\n\t\\label{tab:all}\n\\end{table}\n\n\\subsection*{ODMS Absolute Error Results}\n\nIn Table~\\ref{tab:abs}, we provide ODMS test results for the mean absolute error, which is calculated for each example as\n\\begin{align}\n\t\\text{Absolute Error} = \\left| d_1 - \\hat{d}_1 \\right|,\n\t\\label{eq:abs}\n\\end{align}\nwhere $d_1$ and $\\hat{d}_1$ are ground truth and predicted object depth at final pose $z_1$.\nNotably, our motivation to use percent error \\eqref{eq:percent} in the paper is to provide a consistent comparison across domains with markedly different object depth distances.\nFor example, the 6 \\textrm{cm} absolute error from Fig.~\\ref{fig:val_synthia} is much better for the driving domain than it would be for robot grasping.\n\n\\setlength{\\tabcolsep}{1.75pt} \n\\setlength{\\tabcolsep}{4pt} \n\\begin{table}\n\t\\centering\n\t\\scriptsize\n\t\\caption{\\textbf{Complete ODMS Validation and Test Set Results (Absolute Error)}\t\n\t}\n\t\\begin{tabular}{| l | c | c | c | c | c | c | c | c |}\n\t\t\\hline\n\t\t\\multicolumn{1}{|c|}{}\t & \t\\multicolumn{4}{c|}{ Mean Absolute Error (Validation\/Test) } & \\multicolumn{4}{c|}{ Training Iterations}\\\\\t\\cline{2-9}\n\t\t\\multicolumn{1}{|c|}{Config.} &\tRobot &\tDriving &\tNormal &\tPerturb & &\t &\t &\t \\\\\t\n\t\t\\multicolumn{1}{|c|}{ID} &\t\\textrm{(cm)} &\t\\textrm{(m)} &\t\\textrm{(cm)} &\t\\textrm{(cm)} & Robot &\tDriving &\tNormal &\tPerturb \\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Standard Configuration}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_\\ell$\t&\t7.2\/6.6\t&\t3.8\/4.3\t&\t3.4\/3.4\t&\t7.3\/7.2\t&\t2390\t&\t1920\t&\t3370\t&\t4870\t\\\\\t\n\t\tODN$_{\\bar{d}}$ \t&\t6.4\/6.0\t&\t4.1\/4.4\t&\t3.1\/3.1\t&\t7.4\/7.3\t&\t4140\t&\t2990\t&\t3690\t&\t3530\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_d$ \t&\t6.8\/6.3\t&\t7.1\/7.8\t& \\bf\t1.8\/1.8\t&\t3.9\/3.7\t&\t2380\t&\t1650\t&\t4740\t&\t4430\t\\\\\t\n\t\t\\scriptsize VOS-DE \t&\t8.8\/10.0\t&\t5.0\/5.4\t&\t2.8\/2.8\t&\t15.3\/14.9\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline 
\\multicolumn{9}{|c|}{$n=5$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell 5}$ \t&\t7.8\/7.0\t&\t3.9\/4.3\t&\t3.4\/3.5\t&\t8.2\/8.1\t&\t1000\t&\t1850\t&\t4870\t&\t4520\t\\\\\t\n\t\tODN$_{\\bar{d}5}$ \t&\t7.0\/6.1\t&\t4.4\/4.6\t&\t3.3\/3.3\t&\t8.1\/8.0\t&\t1510\t&\t3450\t&\t3770\t&\t4330\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d 5}$\t&\t7.3\/6.8\t&\t6.5\/7.2\t&\t1.9\/2.0\t&\t4.9\/4.7\t&\t4690\t&\t4260\t&\t4980\t&\t4970\t\\\\\t\n\t\t\\scriptsize VOS-DE$_5$\t&\t9.6\/10.8\t&\t5.0\/5.2\t&\t2.9\/2.9\t&\t14.3\/14.2\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{$n=3$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell 3}$\t&\t6.8\/6.3\t&\t4.1\/4.5\t&\t3.4\/3.4\t&\t8.8\/8.6\t&\t1820\t&\t2750\t&\t4890\t&\t4380\t\\\\\t\n\t\tODN$_{\\bar{d}3}$ \t&\t6.8\/7.0\t&\t4.4\/4.7\t&\t3.3\/3.3\t&\t8.6\/8.4\t&\t4130\t&\t4320\t&\t4620\t&\t4250\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d 3}$ \t&\t7.9\/7.3\t&\t6.6\/7.3\t&\t1.9\/1.9\t&\t4.8\/4.4\t&\t4800\t&\t3040\t&\t4990\t&\t4680\t\\\\\t\n\t\t\\scriptsize VOS-DE$_3$ \t&\t11.2\/12.6\t&\t6.3\/5.0\t&\t2.9\/2.9\t&\t15.7\/15.3\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{$n=2$ Observations}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell 2}$\t&\t7.0\/6.4\t&\t3.7\/4.3\t&\t3.5\/3.6\t&\t8.5\/8.4\t&\t1140\t&\t1010\t&\t3910\t&\t4300\t\\\\\t\n\t\tODN$_{\\bar{d}2}$ \t&\t9.2\/7.8\t&\t4.8\/5.0\t&\t3.5\/3.5\t&\t8.6\/8.4\t&\t3410\t&\t4570\t&\t3370\t&\t4620\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d 2}$\t&\t8.0\/7.2\t&\t6.8\/7.5\t&\t2.0\/2.1\t&\t5.4\/5.1\t&\t2850\t&\t4120\t&\t4610\t&\t4970\t\\\\\t\n\t\t\\scriptsize VOS-DE$_2$ \t&\t36.2\/21.9\t&\t8.5\/6.7\t&\t3.0\/3.0\t&\t42.1\/39.7\t&\tN\/A\t&\tN\/A\t&\tN\/A\t&\tN\/A\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Perturb Training Data}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell p}$\t&\t7.0\/6.9\t& \\bf\t3.5\/4.1\t&\t4.3\/4.5\t&\t5.2\/5.2\t&\t100\t&\t140\t&\t5000\t&\t5000\t\\\\\t\n\t\tODN$_{\\bar{d} p}$\t&\t8.4\/8.5\t&\t4.0\/4.4\t&\t4.4\/4.4\t&\t5.2\/5.1\t&\t420\t&\t2760\t&\t2730\t&\t4270\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d p}$\t&\t6.7\/5.8\t&\t8.9\/9.9\t&\t2.4\/2.5\t&\t\\bf 2.8\/2.8\t&\t50\t&\t330\t&\t4860\t&\t4780\t\\\\\t\\hline\n\t\t\\hline \\multicolumn{9}{|c|}{Radial Input Image}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\t\\hline\n\t\t\\rowcolor{rowgray}\tODN$_{\\ell r}$\t& \\bf\t4.4\/4.3\t&\t4.0\/4.5\t&\t3.5\/3.5\t&\t7.4\/7.2\t&\t1710\t&\t870\t&\t4940\t&\t3940\t\\\\\t\n\t\tODN$_{\\bar{d} r}$\t&\t5.6\/5.0\t&\t3.8\/4.3\t&\t3.3\/3.4\t&\t7.5\/7.4\t&\t2010\t&\t4200\t&\t4990\t&\t4440\t\\\\\t\n\t\t\\rowcolor{rowgray}\tODN$_{d r}$\t&\t4.4\/4.4\t&\t7.2\/8.0\t&\t1.9\/1.9\t&\t4.3\/4.0\t&\t2210\t&\t460\t&\t4870\t&\t4710\t\\\\\t\\hline\n\t\\end{tabular}\n\t\\label{tab:abs}\n\\end{table}\n\n\\subsection*{ODMS Robot Test Set Segmentation Examples}\n\nFor the ODMS Robot test set, we intentionally choose challenging objects, spanning from a single die to the 470~\\textrm{mm} long pan.\nNot surprising, segmenting diverse objects presents varied challenges.\nTo illustrate this point, in Fig.~\\ref{fig:test_obj} we show the closest and farthest Robot test set segmentations for the die and pan.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{test_objects.pdf}\n\t\\caption{ \\textbf{ODMS Robot Test Set Segmentation Examples.}\n\t\tThe small die 
segmentation (\\textit{top}) has fragments of other objects in the closest view (\\textit{left}) and completely misses the die in the farthest view (\\textit{right}).\n\t\tOn the other hand, the larger pan segmentation (\\textit{bottom}) misses parts of the handle that are out of the image in the closest view (\\textit{left}) but is fairly accurate in the farthest view (\\textit{right})\n\t}\n\t\\label{fig:test_obj}\n\\end{figure}\n\n\\bibliographystyle{splncs04}\n
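\subsection*{Supplementary Implementation Sketches}

For readers who want a concrete reference, the following is a minimal Python sketch of the VOS-DE least-squares depth estimate in \eqref{eq:axb}; it solves $\mathbf{A}\mathbf{x}=\mathbf{b}$ for $\hat{z}_{\text{object}}$ with NumPy. The variable names \texttt{areas} and \texttt{z\_cam} (the mask areas $a_i$ and camera positions $z_i$) are illustrative assumptions, and this sketch is not the exact code used in our experiments.

\begin{verbatim}
import numpy as np

def vos_de_depth(areas, z_cam):
    # Least-squares object depth from n observations (sketch of Eq. (eq:axb)).
    #   areas: length-n array of segmentation-mask areas a_i (pixels)
    #   z_cam: length-n array of camera positions z_i along the approach axis
    # Returns the estimate of z_object in the same units as z_cam.
    sqrt_a = np.sqrt(np.asarray(areas, dtype=float))
    z_cam = np.asarray(z_cam, dtype=float)
    # Build A x = b with x = [z_object, c]^T, one row per observation.
    A = np.stack([sqrt_a, np.ones_like(sqrt_a)], axis=1)
    b = z_cam * sqrt_a
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0]
\end{verbatim}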
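Similarly, the radial input image from Section~\ref{sec:radial} can be generated as sketched below, with pixel values scaled from 1 at the image center to 0 at each corner; the exact construction used by our trained networks may differ slightly.

\begin{verbatim}
import numpy as np

def radial_image(height, width):
    # Radial input channel: 1.0 at the image center, 0.0 at each corner.
    ys, xs = np.indices((height, width), dtype=float)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx)   # pixel distance to the image center
    r_corner = np.hypot(cy, cx)      # center-to-corner distance (max radius)
    return 1.0 - r / r_corner
\end{verbatim}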