\\section{Introduction}\nMagnetometers based on the nitrogen-vacancy (NV) center in diamond~\\cite{doherty_nitrogen-vacancy_2013, schirhagl_nitrogen-vacancy_2014} provide $\\upmu$T--nT sensitivity for single centers at ambient temperature and mm-to-sub-$\\upmu$m length scales, making them attractive resources for studying biomagnetism~\\cite{gille_quantum_2021}, solid state systems~\\cite{tetienne_quantum_2017, casola_probing_2018} and nanoscale NMR~\\cite{mamin_nanoscale_2013} in challenging real-world sensing environments~\\cite{fu_sensitive_2020}. Since many magnetic phenomena of importance in navigation and biomagnetism manifest as slowly varying or static magnetic fields, intense effort has been devoted in particular to improving dc sensitivity~\\cite{barry_sensitivity_2020}, focusing on the diamond material~\\cite{balasubramanian_ultralong_2009, herbschleb_ultra-long_2019}, photon collection efficiency~\\cite{clevenson_broadband_2015}, quantum control sequences to eliminate decoherence~\\cite{lange_controlling_2012, mamin_multipulse_2014, bauch_ultralong_2018} and more recently, the addition of ferrite flux-concentrators~\\cite{fescenko_diamond_2020, zhang_diamond_2021}. Many approaches to improving the measurement signal are frustrated by a commensurate increase in noise, limiting the attainable sensitivity. \n\nTo date, Ramsey-type interferometry~\\cite{ramsey_molecular_1950} is the optimum dc measurement sequence~\\cite{rondin_magnetometry_2014, barry_sensitivity_2020}, and the sensitivity of Ramsey magnetometry is limited by the ensemble dephasing time $T_2^\\ast$, which reflects the magnitude of low frequency noise in the system. 
In diamond, impurities such as $^{13}$C~\\cite{childress_coherent_2006} or paramagnetic nitrogen~\\cite{bauch_decoherence_2020} are the dominant sources of dephasing, resulting in $T_2^\\ast<1\\,\\upmu$s for readily-available CVD diamond samples. Additionally, $T_2^\\ast$ varies considerably between diamond samples, and often significantly \\emph{within} a single sample, due to gradients or spatial variations of crystal strain or dopant density. This decoherence can be largely eliminated by employing time-reversal dynamical decoupling measurement schemes, such as Hahn spin-echo~\\cite{hahn_spin_1950}, but at the cost of insensitivity to slowly varying or static dc fields~\\cite{taylor_high-sensitivity_2008}. Alternative NV-magnetometry techniques~\\cite{acosta_broadband_2010, jeske_laser_2016, wickenbrock_microwave-free_2016} that eschew conventional quantum sensing protocols have also been proposed and demonstrated, though to date they have yielded sensitivities comparable to those of standard methods. 
\n\n\\begin{figure*}\n\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{fig1.pdf}\n\t\\caption{Diamond rotation up-conversion magnetometry (DRUM) schematic and key results. a) Schematic model of NV center, axis tilted from $z$ by $\\theta_\\text{NV} = 30.2^\\circ$, rotating at $\\omega_\\text{rot}$ with external magnetic field components $B_z$ and $B_x$. b) Rotation of the diamond modulates the Zeeman shift in proportion to $B_x$, at a rate set by the rotation frequency. c) Typical stationary spin-echo (blue circles) and Ramsey (orange squares, detail inset) signals at $B_z = 2.3\\,$mT for the diamond sample used in this work. d) Comparing magnetometry signals: varying an applied dc $B_x$ field (DRUM) or effective $z$-field (Ramsey) at the optimum sensing times traces out fringes. We used 10\\,s and 30\\,s measurement times per point, for Ramsey and DRUM, respectively. (e, zoom-in of d) The DRUM fringes are far faster, as considerably more phase accumulates within the $500\\times$ longer sensing time. f) Allan deviation of DRUM and Ramsey when sensitive (blue) and insensitive (orange, microwaves detuned) to magnetic fields. Both measurements follow $T^{-1\/2}$ scaling for a few hundred seconds, before drifts become dominant.}\n\t\\label{fig:fig1}\n\\end{figure*}\n\nOur method, depicted in Fig. \\ref{fig:fig1}(a,b), is called diamond rotation up-conversion magnetometry (``DRUM\") and was introduced in Ref.~\\cite{wood_t_2-limited_2018}. The NV Hamiltonian in the presence of a magnetic field $\\boldsymbol{B}$ is $H = D_\\text{zfs} S_{z'}^2 + \\gamma_e \\boldsymbol{B}\\cdot\\boldsymbol{S}$, with $D_\\text{zfs}\/2\\pi = 2870\\,$MHz at room temperature, $\\gamma_e\/2\\pi = 2.8\\,$kHz\\,$\\upmu$T$^{-1}$ and $\\boldsymbol{S} = (S_{x'}, S_{y'}, S_{z'})$ the vector of spin-1 Pauli matrices. We have ignored strain, electric fields and coupling to temperature. 
The quantization axis $z'$ is taken as the nitrogen-vacancy axis, lying along one of the four $[111]$ crystallographic axes of the crystal and therefore at an angle $\\theta_\\text{NV}$ to $z$. We use a single-crystal diamond containing an ensemble of NV centers, with a surface normal parallel to the rotation axis. A bias magnetic field is applied along the rotation axis $z$ to spectrally select one NV class and isolate $m_S$ transitions. The energies of the NV $m_S = 0, \\pm1$ spin states for weak applied fields ($\\hbar = 1$) are $\\omega(m_S) \\approx D_\\text{zfs}+m_S \\gamma_e B\\cos(\\theta_\\text{NV})$, with $B = |\\boldsymbol{B}|$. A weak magnetic field $B_x$ is applied along the lab-frame $x$-axis, making the Zeeman shift time-dependent during rotation at an angular frequency $\\omega_\\text{rot}$. Considering the $|m_S = 0\\rangle\\leftrightarrow|m_S = -1\\rangle$ transition, the time-dependent component is \n\\begin{equation}\n\\omega_{-1,0}(t) \\approx \\gamma_e B_x\\sin\\theta_\\text{NV}\\cos(\\omega_\\text{rot}t - \\phi_0),\n\\label{eq:zeemant}\n\\end{equation}\nwith $\\phi_0$ set by the initial orientation of the diamond, and adjusted to maximize sensitivity to either $x$- or $y$-oriented fields. The dc field in the lab frame is now effectively an ac field in the NV frame, with amplitude $B_x\\sin\\theta_\\text{NV}$ and frequency $\\omega_\\text{rot}$. \n\nOur previous realization of DRUM used an NV tilt angle $\\theta_\\text{NV} =4^\\circ$, resulting in up-conversion of only a small fraction of the dc field. In this work, we use a $\\langle 110\\rangle$-cut CVD-grown type IIa diamond with a natural abundance of $^{13}$C and approximately [N] = 1\\,ppm, [NV] = 0.01\\,ppm, mounted on an electric motor that can spin at up to 5.83\\,kHz. 
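Eq.~(\\ref{eq:zeemant}) is simply the projection of the static lab-frame field onto the rotating NV axis. As a quick numerical check (a sketch with illustrative field values, not the experimental parameters), projecting $\\boldsymbol{B}$ onto a unit vector tilted by $\\theta_\\text{NV}$ from $z$ and rotating at $\\omega_\\text{rot}$ reproduces an oscillation at the rotation frequency with amplitude $\\gamma_e B_x \\sin\\theta_\\text{NV}$:

```python
import numpy as np

gamma_e = 2.8                     # kHz per uT (gamma_e / 2pi from the text)
theta = np.deg2rad(30.2)          # NV tilt angle used in this work
Bx, Bz = 10.0, 700.0              # uT; illustrative test and bias fields
f_rot = 3.75                      # kHz, so time below is in ms

t = np.linspace(0.0, 1.0 / f_rot, 2001)          # one rotation period
# Unit vector along the rotating NV axis, tilted by theta from z
n_hat = np.stack([np.sin(theta) * np.cos(2 * np.pi * f_rot * t),
                  np.sin(theta) * np.sin(2 * np.pi * f_rot * t),
                  np.full_like(t, np.cos(theta))])
B = np.array([Bx, 0.0, Bz])
zeeman = gamma_e * (B @ n_hat)                   # shift of the m_S = -1 line, kHz

osc = zeeman - zeeman.mean()                     # time-dependent part
amp = 0.5 * (osc.max() - osc.min())              # oscillation amplitude
print(amp, gamma_e * Bx * np.sin(theta))         # both ~14.08 kHz
```

The constant $\\gamma_e B_z\\cos\\theta_\\text{NV}$ term drops out on subtracting the mean; only the transverse component is up-converted.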
We choose an NV orientation class making an angle of $30.2^\\circ$ to the $z$-axis for our measurements, yielding $\\sin\\theta_\\text{NV} = 0.5$, which is easily resolved and still sufficient to attain sensitivities exceeding those of optimized Ramsey sensing. Typical stationary spin-echo and Ramsey signals are shown in Fig. \\ref{fig:fig1}(c) for $B_z$ = 2.3\\,mT. The $^{13}$C spin bath is the dominant source of decoherence~\\cite{barry_sensitivity_2020}, limiting $T_2^\\ast$ to less than half a microsecond, but the sample exhibits a much longer $T_2 = 250(9)\\,\\upmu$s, with interferometric visibility restricted to revivals spaced at twice the $^{13}$C Larmor period~\\cite{childress_coherent_2006}. Such a wide gulf separating $T_2$ and $T_2^\\ast$ is not unusual, reflecting the much greater sensitivity of $T_2^\\ast$ to the presence of magnetic impurities and sample-specific imperfections~\\cite{bauch_ultralong_2018, bauch_decoherence_2020}. \n\nThe remaining experimental details are similar to those described in Refs.~\\cite{wood_quantum_2018, wood_t_2-limited_2018, wood_anisotropic_2021}, and further details are provided in the Supplementary Material. Briefly, a scanning confocal microscope optically polarizes the NV centers (1\\,mW, 532\\,nm) and reads out their fluorescence (600--800\\,nm, 9$\\times10^6$ cts\/s). Microwave fields are applied along the $z$-axis with a coil antenna to ensure rotational symmetry. Further coils supply the $z$-oriented bias field and create the transverse dc test fields. A fast pulse generator controls the timing of laser and microwave pulses, and is triggered synchronously with the rotation. The laser is pulsed for $3\\,\\upmu$s to optically pump NVs to the $m_S = 0$ state, and the subsequent microwave spin-echo sequence is timed so that the $\\pi$-pulse is applied at the zero-crossing of the up-converted field (Fig. \\ref{fig:fig1}(b)), conferring maximum sensitivity. 
Following the microwave pulses, optical readout is performed after a shuttling time so that the whole sequence takes one rotation period. A second sequence is applied back-to-back, with the final $\\pi\/2$-pulse phase shifted by $180^\\circ$. The detected photoluminescence from each trace is then normalized and the difference computed to extract the contrast $\\mathcal{S}$, which constitutes the DRUM signal.\n\nThe concept of rotational dc up-conversion should be applicable to a wide range of competing magnetometry architectures, including laser- and absorption-based readout schemes. The aim of this paper is to demonstrate the sensitivity advantage of the DRUM technique over standard $T_2^\\ast$-limited Ramsey magnetometry. Our priority is therefore not to improve the ultimate sensitivity of the measurement, but rather to ensure that the two techniques are compared on an equal basis. Using isotopically-purified diamonds with very high NV densities, wide-area collection optics and powerful excitation beams has been shown to be the most effective means to achieve high sensitivities~\\cite{wolf_subpicotesla_2015}, and these techniques should be compatible with diamond rotation, given sufficient engineering effort. The shorter coherence times intrinsic to these NV-dense configurations place additional requirements on rotation speed, and these will be discussed later in this work.\n\nDue to strain and impurity inhomogeneity, $T_2^\\ast$ exhibits considerable spatial variation\\footnote{See Supplementary Material}. To assess the peak Ramsey sensitivity, we locate a region with a comparatively high $T_2^\\ast = 360\\,$ns and determine the sensitivity, $\\delta B = \\left(\\frac{d\\mathcal{S}}{dB}\\right)^{-1} \\sigma(\\mathcal{S}) \\sqrt{T}$, with $d\\mathcal{S}\/dB$ the mid-fringe signal slope, $T$ the total integration time, and $\\sigma(\\mathcal{S})$ the standard deviation of the Ramsey signal. 
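As a sketch of how this estimator behaves, one can feed it synthetic mid-fringe data; all numbers below (slope, noise level, averaging time) are hypothetical placeholders rather than measured values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration of delta_B = (dS/dB)^-1 * sigma(S) * sqrt(T)
dSdB = 0.05        # mid-fringe slope dS/dB, contrast change per uT
sigma_S = 0.01     # standard deviation of the contrast per repetition
T = 10.0           # averaging time per repetition, s

# Simulate 10 repeated mid-fringe measurements, as in the protocol described
signals = rng.normal(0.0, sigma_S, size=10)
delta_B = np.std(signals) / dSdB * np.sqrt(T)    # uT Hz^{-1/2}
print(delta_B)
```

A steeper fringe slope or lower contrast noise both reduce the minimum detectable field, which is the quantity compared between Ramsey and DRUM below.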
For Ramsey, changing the microwave frequency is equivalent to varying a magnetic field exactly parallel to the NV axis. Ramsey fringes as a function of effective magnetic field are shown in Fig. \\ref{fig:fig1}(d). The sensitivity of Ramsey magnetometry was found by detuning to the mid-fringe point and repeating the same 10\\,s averaging interval 10 times. The standard deviation of this data yields our best Ramsey sensitivity, $\\delta B = 0.86\\,\\upmu$T\\,$\\text{Hz}^{-1\/2}$.\n\nNext, we rotated the diamond at 3.75\\,kHz and performed DRUM with a spin-echo time of $\\tau = 180\\,\\upmu$s ($B_z = 0.7\\,$mT), yielding fringes as an applied $x$-field is varied as shown in Fig.~\\ref{fig:fig1}(d,e), with each point averaged for 30\\,s. We calculate the operational sensitivity to be $28\\,\\text{nT}\\,\\text{Hz}^{-1\/2}$, about 30 times better than that of Ramsey in the same diamond sample. DRUM exhibits similar long-time averaging behavior to Ramsey, but towards a much lower minimum detectable field. Figure \\ref{fig:fig1}(f) shows the Allan deviation~\\footnote{A comprehensive description of Allan deviation as it applies to our work is provided in the Supplementary Material.} of DRUM and Ramsey as a function of averaging time. To assess the relative magnitudes of magnetic drifts and intrinsic noise in each technique, we measure while sensitive to magnetic fields, and then again with the microwaves detuned by 15\\,MHz, so that only intrinsic measurement noise is present. Ramsey, sensitive to drifts in temperature and local strain as well as magnetic drifts (common also to DRUM), exhibits a significantly higher Allan deviation compared to DRUM. No amount of averaging time with Ramsey can exceed the performance of DRUM. This demonstration constitutes a significant achievement, showing that $T_2^\\ast$ does not intrinsically limit dc magnetic sensitivity for a spin-based quantum magnetometer. 
In what follows, we describe how the optimum parameters were deduced, and how the ultimate sensitivity depends on parameters we can control.\n\nThe shot-noise limited dc sensitivity of the DRUM measurement is given by~\\footnote{See Supplementary Material}\n\\begin{equation}\n\\delta B = \\frac{\\pi e^{\\left(\\frac{\\tau}{T_2}\\right)^n}}{4 C \\gamma_e \\sin\\theta_\\text{NV}\\sin\\left(\\frac{\\pi\\tau}{2t_\\text{rot}}\\right)}\\frac{1}{\\sqrt{t_\\text{rot}}},\n\\label{eq:sensitivity}\n\\end{equation}\nwith $n\\approx 3$ reflecting sample-specific decoherence processes, $C$ the readout efficiency~\\cite{degen_quantum_2017} and $t_\\text{rot}$ the rotation period. Extracting the optimum performance for DRUM amounts to increasing the slope $d\\mathcal{S}\/dB$, which depends on the rotation speed and spin-echo measurement time, and increasing the measurement signal-to-noise, largely via well-understood means~\\cite{barry_sensitivity_2020} that benefit both Ramsey and DRUM.\n\nThe rotation speed can be chosen to optimize sensitivity. The intrinsic amplitude of the up-converted field is set by the lab-frame dc field, but the phase accumulated by the NV centers depends on the rotation speed. Faster rotation speeds enable the same integration time to sample a greater fraction of the modulated field, and more measurements are possible within a given averaging time, increasing the number of photons collected. However, without recourse to higher-order dynamical decoupling sequences, which supplant spin-echo as the optimum measurement only when $T_2\\gg t_\\text{rot}$, increasing the rotation speed eventually reduces the accumulated phase due to the smaller integrated area under $\\omega_{-1,0}(t)$. \n\nMaximizing the slope also requires minimizing decoherence sources. 
For a natural abundance diamond, the NV-$^{13}$C bath interaction depends on the strength~\\cite{zhao_decoherence_2012, hall_analytic_2014} and direction~\\cite{stanwix_coherence_2010, wood_anisotropic_2021} of the magnetic field. Consequently, the $z$-bias field tunes the spin-echo time $\\tau$, and must be set so that $\\tau = n_c \/ (B\\times 10.71\\,\\text{kHz\\,mT}^{-1}\\pm\\omega_\\text{rot}\/2\\pi)$, $n_c\\in\\{1, 2, ...\\}$. Stronger $z$-fields result in faster decoherence due to the anisotropic NV-$^{13}$C hyperfine coupling~\\cite{stanwix_coherence_2010,wood_anisotropic_2021}. We therefore minimize the requisite $z$-field by ensuring just the \\emph{first} $^{13}$C revival coincides with the desired measurement time, and choose the rotation direction so that the induced magnetic pseudo-field~\\cite{wood_magnetic_2017} adds to the $z$-bias field, \\emph{i.e.} $\\omega_\\text{rot} >0$.\n\nThe slope $d\\mathcal{S}\/dB$ is given by \n\\begin{equation}\n\\frac{d\\mathcal{S}}{dB} = \\frac{4A}{\\omega_\\text{rot}}e^{-\\left(\\frac{\\tau}{T_2}\\right)^n}\\gamma_e \\sin\\theta_\\text{NV}\\sin\\left(\\frac{\\omega_\\text{rot}\\tau}{4}\\right).\n\\label{eq:slope}\n\\end{equation}\nFor comparison with data, we leave $A$ and $T_2$ as free parameters, possibly dependent on rotation speed, and fix $n = 3$, which best describes our observed relaxation and is consistent with other experiments~\\cite{stanwix_coherence_2010, hall_analytic_2014}. We measured the mid-fringe slope of DRUM fringes as a function of $\\tau$ and $\\omega_\\text{rot}$. Fig.~\\ref{fig:fig2}(a, inset) shows DRUM fringes as the spin-echo time is varied for $\\omega_\\text{rot}\/2\\pi = 3.75\\,$kHz, and the slope extracted from fits of Eq.~(\\ref{eq:slope}) to such data is plotted versus $\\tau$ for rotation speeds from 1--6\\,kHz (Fig.~\\ref{fig:fig2}(a)). We find that the peak slope is largest for rotation speeds around 3 to 4\\,kHz. 
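The optimum sensing time implied by Eq.~(\\ref{eq:slope}) can be located numerically; the sketch below uses the illustrative values $T_2 = 250\\,\\upmu$s and $\\omega_\\text{rot}\/2\\pi = 3.75$\\,kHz, and drops the prefactor $4A\\gamma_e\\sin\\theta_\\text{NV}\/\\omega_\\text{rot}$ since it does not shift the position of the maximum:

```python
import numpy as np

# Find the tau that maximizes the slope of Eq. (3), with n = 3 and
# illustrative parameters; the prefactor is omitted as it only rescales.
T2 = 250e-6                        # s
n = 3
w_rot = 2 * np.pi * 3.75e3         # rad/s
t_rot = 2 * np.pi / w_rot          # rotation period, ~267 us

tau = np.linspace(1e-6, t_rot, 20000)
slope = np.exp(-(tau / T2) ** n) * np.sin(w_rot * tau / 4)
tau_opt = tau[np.argmax(slope)]
print(tau_opt * 1e6)               # ~155 us: decoherence pushes tau_opt below T2
```

The competition between the growing trigonometric factor and the $e^{-(\\tau/T_2)^3}$ decay places the optimum below both $T_2$ and $t_\\text{rot}$, consistent with the trend reported in Fig.~\\ref{fig:fig2}(b).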
\n\n\\begin{figure}[t!]\n\t\\centering\n\t\t\\includegraphics[width = \\columnwidth]{fig2.pdf}\n\t\\caption{Optimization of DRUM. (a) Peak slope $d\\mathcal{S}\/dB$ as a function of $\\omega_\\text{rot}$, deduced from fits of Eq.~(\\ref{eq:slope}) to slope-vs-$\\tau$ data gathered for DRUM fringes at different rotation speeds (inset, 3.75\\,kHz). (b) Measured $T_2$ (gray dashed line is linear fit) and spin-echo time $\\tau_\\text{opt}$ where the peak slope occurs: decoherence results in $\\tau_\\text{opt} < T_2$. Dashed lines in (a) and (b) correspond to theoretical predictions from measured $T_2$ data. (c) Increasing photon number per measurement bin (blue circles), and reducing measurement noise (orange squares). (d) Sensitivity $\\delta B$ as a function of rotation speed. Shaded regions denote maximum and minimum ranges from 5 repetitions.}\n\t\\label{fig:fig2}\n\\end{figure}\n\nWe also monitored the coherence time $T_2$ and the time $\\tau_\\text{opt}$ at which the peak slope occurs as a function of rotation speed (Fig. \\ref{fig:fig2}(b)). The measured $T_2$ is higher than that in Fig. \\ref{fig:fig1}(c) due to the overall lower magnetic field strengths, and drops slightly as a function of rotation speed, which we believe is due to imperfectly canceled $y$-fields or the test field itself inducing weak anisotropy in the NV-$^{13}$C interaction~\\footnote{See Supplementary Material}. Numerically maximizing Eq.~(\\ref{eq:slope}) for the measured $T_2$ yields the dashed theoretical predictions in Fig.~\\ref{fig:fig2}(a, b), confirming the key role $T_2$ plays. While the theory accurately reproduces the peak slope, the optimum time $\\tau_\\text{opt}$ where the slope is maximized is lower than predicted; we attribute this to the particular choice of $n = 3$ in Eq.~(\\ref{eq:slope}).\n\nThe variation in slope is tempered by increasing photon collection rates (and hence reduction in shot noise) due to increased duty cycle at higher rotation speeds, as shown in Fig. 
\\ref{fig:fig2}(c). This leads to an almost flat dependence of sensitivity on rotation speed, as shown in Fig. \\ref{fig:fig2}(d). DRUM operates close to the shot-noise-predicted sensitivity limit, 30 times better than the Ramsey sensitivity. This gain is actually greater than $\\sqrt{T_2\/T_2^\\ast} = 26$, due to the $10\\%$ duty cycle of Ramsey measurements (including laser preparation and readout), and the trigonometric factor and integration for less than a full period in DRUM. However, computing the idealized Ramsey sensitivity with unity duty cycle yields $122$\\,nT\\,Hz$^{-1\/2}$, still a factor of 4.5 worse than experimentally demonstrated DRUM with $\\tau < t_\\text{rot}$ and $\\theta_\\text{NV} = 30^\\circ$.\n\n\\emph{Discussion.} We have shown in this work that dc up-conversion magnetometry with a rotating diamond can significantly exceed the sensitivity of conventional Ramsey magnetometry. Unlike previous proposals~\\cite{ajoy_DC_2016}, our scheme up-converts just the magnetic fields of interest and not the deleterious noise that limits quantum coherence, thus definitively improving sensitivity to dc fields. We also retain the vector sensitivity of the NV center, with a timing adjustment to the synchronization allowing for exclusive $y$-field detection. In principle, our scheme is equally applicable to any quantum system where the coupling between qubit and parameter of interest can be modulated in time, and this paper shows that sensitivity exceeding the $T_2^\\ast$ limit is thus possible. \n\nFor up-conversion to be worthwhile, we require $T_2\\gg T_2^\\ast$ and $t_\\text{rot} \\sim T_2$. More practically, however, the $\\sqrt{\\omega_\\text{rot}}$ scaling of sensitivity in Eq.~(\\ref{eq:sensitivity}) requires a commensurate increase in $C$, \\emph{e.g.} by increasing NV density, which in turn reduces $T_2$~\\cite{bauch_decoherence_2020}. 
However, with the simplified assumption that $\\tau = t_\\text{rot} = T_2$ and $\\theta_\\text{NV} = 90^\\circ$, DRUM can confer a sensitivity gain over Ramsey in any diamond sample of $\\delta B_\\text{DRUM}\/\\delta B_R = \\frac{\\pi}{4}\\sqrt{T_2^\\ast\/T_2}$. A tenfold sensitivity improvement over unity-duty-cycle Ramsey then requires $T_2 = 62\\,T_2^\\ast$.\n\nMechanical rotation can be challenging, though not impossible, to achieve for the short $T_2\\sim 5-10\\,\\upmu$s exhibited by NV-dense diamond samples. Commercial NMR magic-angle spinning devices can now spin mm$^3$-scale samples at rates up to 150\\,kHz~\\cite{schledorn_protein_2020}, and demonstrations of ultrafast rotation of optically-trapped microscale structures~\\cite{reimann_GHz_2018, ahn_ultrasensitive_2020} yield GHz rotation frequencies. Since rapid libration can be substituted for rotation, alternative approaches could potentially leverage fast piezoelectric tip-tilt transducers~\\cite{csencsics_fast_2019} for larger samples, or micro-to-nano structures for smaller length scales, for instance in optically or electrically-trapped~\\cite{perdriat_spin-mechanics_2021} micro-to-nano diamonds. Modulation can also be achieved by position displacement in a spatially-varying field, and recent work has demonstrated up-conversion of dc fields to ac using scanning single-NV magnetometry by rapidly modulating the distance between tip and sample~\\cite{huxter_scanning_2022}. Another option could be modulation, \\emph{i.e.} via position displacement, of the sensitivity enhancement conferred by ferrite flux concentrators~\\cite{fescenko_diamond_2020}. \n\nIn conclusion, we have demonstrated that $T_2$-limited dc magnetometry can exceed the sensitivity of $T_2^\\ast$-limited Ramsey magnetometry for diamond-based quantum magnetometers. 
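The numerical factors quoted above follow directly from the gain relation; a short arithmetic check (values taken from the text):

```python
import numpy as np

# Gain relation from the text: dB_DRUM / dB_Ramsey = (pi/4) * sqrt(T2*/T2).
# Required coherence-time ratio for a factor-of-10 gain:
ratio = (10 * np.pi / 4) ** 2      # T2 / T2*
print(round(ratio))                # 62, as quoted

# Raw coherence-time ratio for this sample (T2 = 250 us, T2* = 360 ns):
print(np.sqrt(250e-6 / 360e-9))    # ~26, the sqrt(T2/T2*) factor quoted earlier
```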
We anticipate our work will stimulate other approaches, not necessarily based on sample rotation, to combine up-conversion with existing schemes to improve magnetic sensitivity. Augmented with fast rotation, improvements such as engineered diamonds with higher NV densities and larger optical excitation and collection areas may be sufficient to achieve the much sought-after fT\\,Hz$^{-1\/2}$ regime of dc magnetic sensitivity with room-temperature, microscale diamond sensors, where applications such as magnetoencephalography in unshielded environments become possible~\\cite{boto_moving_2018}.\n\nThis work was supported by the Australian Research Council (DE210101093, DE190100336). We thank R. E. Scholten for insightful discussions and a careful review of the manuscript. The authors (AAW, AS, AMM) are inventors on a United States Patent, App. 16\/533,167, which is based on this work.\n\n\\section{Introduction}\nEvolutionary game theory is used on different levels of biological systems, ranging from the genetic level to ecological systems. The language of game theory allows one to address basic questions of ecology, related to the emergence of cooperation and biodiversity, as well as the formation of coalitions with applications to social systems. Darwinian dynamics, in particular frequency-dependent selection, can be formulated in terms of game-theoretic arguments \\cite{nowak}. The formation of dynamical patterns is considered one of the most important promoters of biodiversity \\cite{may,levin,durret,hassel}. Here we consider games of competition, where the competition is realized as predation among $N$ species, where each species preys on $r$ others in a cyclic way. A subclass of these $(N,r)$-games are cyclic games, that is $(N,1)$, with $(3,1)$ being the famous rock-paper-scissors game. An extensive overview on cyclic games is given in \\cite{szabo,perc}. 
The $(3,1)$-game has been studied in various extensions (spatial, reproduction and deletion, swapping or diffusion, mutation). One of the first studies of a $(3,1)$-game without spatial assignment, but in both deterministic and stochastic realizations, revealed that fluctuations due to a finite number of agents can drastically alter the mean-field predictions, including an estimate of the extinction probabilities at a given time \\cite{reichen1}. This model was extended to include a spatial grid in \\cite{reichen2}, where the role of stochastic fluctuations and spatial diffusion was analyzed both numerically and analytically. The influence of species mobility on species diversity was studied in \\cite{reichen3}, pattern formation close to a bifurcation point was the topic of \\cite{reichen4} (see also \\cite{reichen5}), and the impact of asymmetric interactions was considered in \\cite{reichen6}.\n\nAn extension to four species, first without spatial assignment, shows interesting new features as compared to $(3,1)$: Already in the deterministic limit the trajectories show a variety of possible orbits, and from a certain conserved quantity the late-time behavior can be extrapolated \\cite{durney}. The four species can form alliance pairs similarly to the Game of Bridge \\cite{case}. Under stochastic evolution various extinction scenarios and the competition of the surviving set of species can be analyzed \\cite{28}. Domains and their separating interfaces were studied in \\cite{arXiv:1205.4914}. $(4,1)$ cyclic games on a spatial grid were the topic of \\cite{szabosznaider,luetz}. A phase transition as a function of the concentration of vacant sites is identified between a phase of four coexisting species and a phase with two neutral species that protect each other and extend their domain over the grid. 
For an extension of this model to long-range selection see \\cite{hua}.\n\nIn this paper we focus on the $(6,3)$-game, including both spiral formation inside domains and domain formation. It is a special case of $(N,r)$-games, which were considered for $N\\ge3$ and $r\\ge1$ by \\cite{m1,m2} and more recently by \\cite{m3}. The authors of \\cite{m1,m2} were the first to notice that for certain combinations of $N$ and $r$ one observes the coexistence of both spiral formation and domain formation. However, it should be noticed that our set of reactions, even if we specialize $(N,r)$ to the $(3,1)$-game, is similar to, but not identical with, the versions considered in \\cite{m1,m2,m3} or in \\cite{reichen1}-\\cite{reichen6}. The seemingly minor difference refers to the implementation of an upper threshold to the occupation number of single sites (set to 1 or a finite fixed number), while we use a ``bosonic\" version. We introduce a dynamical threshold, realized via deletion reactions, so that we need not explicitly restrict the occupation number per site. Due to this difference, the bifurcation structure of the mean-field equations is changed.\n\nThe reason why we are interested in the particular combination of $N=6$ and $r=3$ is primarily motivated by two theoretical aspects rather than by concrete applications. As to the first aspect, this game is one of the simplest examples of ``games within games\" in the sense that the domains effectively play a $(2,1)$-game as transient dynamics on a coarse scale (the scale of the domain diameter), while the actors inside the domains play a $(3,1)$-game on the grid scale. Finally, one of the domains gets extinct along with all its actors. 
As such, this game provides a simple, yet non-trivial example of a mechanism that may be relevant for evolution: In our case, due to the spatial segregation of species, the structural complexity of the system increases in the form of patterns of who is chasing whom, appearing as long-living transients, along with a seeming change of the rules of the game that is played between the competing domains on the coarse scale, while the rules which individuals use on the elementary grid sites are not changed at all. As outlined by Goldenfeld and Woese \\cite{goldenfeldwoese}, it is typical for processes in evolution, in particular in ecology, that ``the governing rules are themselves changed\", as the system evolves in time and the rules depend on the state. In our example it is spatial segregation which allows for a change of rules from a coarse perspective, as we shall see.\n\nAs to the second aspect, an interesting feature of such an arrangement is the multitude of time and spatial scales that are dynamically generated. Concretely, in the $(6,3)$-game the largest of the reaction\/diffusion rates sets the basic time unit. When the species segregate and form domains, the next scale is generated: it is the time it takes the two domains to form until both cover the two-dimensional grid or the one-dimensional chain. The domains are not static, but play the $(2,1)$-game that has a winner in the end. So the extinction time of one of the domains sets the third scale. A single domain then survives, including the moving spirals from the remaining $(3,1)$-game inside the domain. The transients can be very long-lived, depending on the interaction rates and the system size. In the very end, however, in a stochastic realization as well as due to the finite accuracy in the numerical solutions even in the mean-field description, only one out of the three species will survive, and the extinction of the other two species sets the fourth scale. 
Along with these events, spatial scales emerge, ranging from the basic lattice constant to the radii of spirals and the extension of the domains.\n\nOne of the challenges is to explore which of the observed features in the Gillespie simulations can be predicted analytically. We shall study the predictions on the mean-field level, which is rather conclusive in our ultralocal implementation of reactions and reproduces the results of the Gillespie simulations quite well, since fluctuations turn out to play a minor role for pattern formation. The deterministic equations are derived as the lowest order of a van Kampen expansion. The eigenvalues of the Jacobian are conclusive for the number of surviving species in a stable state, the composition of the domains, and transient behavior, which is observed in the Gillespie simulations. The mean-field equations, including the diffusion term, will be integrated numerically and compared to the results of the Gillespie simulations.\n\nThe paper is organized as follows. In section~\\ref{sec_reactions} we present the model in terms of basic reactions and the corresponding master equation. For generic $(N,r)$ games we summarize in section~\\ref{sec_vankampen} the derivation of the mean-field equations from a van Kampen expansion, followed by a stability analysis via the Jacobian with and without spatial dependence for the specific $(6,3)$ game, and a derivation of the numerical solutions of the mean-field equations in section~\\ref{sec_Jacobian}. In section~\\ref{sec_numerical} we present our results from the Gillespie simulations in comparison to the mean-field results. Section~\\ref{sec_conclusions} summarizes our conclusions and gives an outlook to further challenges related to this class of games. 
For comparison, the supplementary material contains a detailed stability analysis for the $(3,1)$-game with spiral formation and the $(3,2)$-game with domain formation, as well as the numerical solutions of the mean-field equations and the corresponding Gillespie simulations.\n\n\n\\section{Reactions and Master Equation}\\label{sec_reactions}\nWe start with the simplest set of reactions that represent predation between individuals of different species, followed by reproduction and deletion,\n\\begin{eqnarray}\\label{eq:rec_sys}\n\tX_{\\alpha,i}\\, +\\, X_{\\beta,i} & \\overset{k_{\\alpha\\beta}\/V}{\\longrightarrow} & X_{\\alpha,i} \\label{pred}\\\\\n\tX_{\\alpha,i}\\, & \\overset{r_{\\alpha}}{\\longrightarrow} & 2 X_{\\alpha,i} \\label{repr} \\\\\n\t2X_{\\alpha,i}\\,& \\overset{p_{\\alpha}\/V}{\\longrightarrow} & X_{\\alpha,i} \\label{anih}\n\\end{eqnarray}\nand finally diffusion\n\\begin{eqnarray}\\label{diff}\n\tX_{\\alpha,i}\\, & \\overset{D_{\\alpha}\/h^2}{\\longrightarrow} & X_{\\alpha,j}.\n\\end{eqnarray}\n$X_{\\alpha,i}$ represents an individual of species $\\alpha$ at lattice site $i$, while the total number of individuals of species $\\alpha$ at site $i$ will be denoted by $n_{\\alpha,i}$. (We use lower-case letters $n$ for convenience, although the meaning of $n$ is not a density, but the actual occupation number of a certain species at a certain site.) In view of applications to ecological systems, each lattice site stands for a patch housing a subpopulation of a metapopulation, where the patch is not further spatially resolved. 
Eq.~(\ref{pred}) represents the predation of species $\alpha$ on species $\beta$ with rate $k_{\alpha\beta}/V$, where the parameter $V$ does not stand for the physical volume, but parameterizes the distance from the deterministic limit in the following way: according to our set of reactions, larger values of $V$ lead to higher occupation numbers $n_{\alpha,i}$ of species $\alpha$ at sites $i$, since predation and death events are rescaled with a factor $1/V$, and therefore to a larger total rate. The fluctuations in occupation numbers, realized via the Gillespie algorithm, are independent of $V$ or the occupation numbers of sites, since only relative rates enter the probabilities for a certain reaction to happen. Therefore the size of the fluctuations, relative to the absolute occupation numbers or to the overall $V$, gets reduced for large $V$, that is, in the deterministic limit. Predation is schematically described in figure~\ref{(6,3)}.

Eq.~(\ref{repr}) represents reproduction events with rate $r_{\alpha}$, and Eq.~(\ref{anih}) stands for death processes of species $\alpha$ with rate $p_{\alpha}/V$. Death processes are needed to compensate for the reproduction events, since we do not impose any restriction on the number of individuals that can occupy lattice sites. Here we should remark why we implement death processes in the form of Eq.~(\ref{anih}) rather than in the simpler form $X_{\alpha,i} \overset{p_{\alpha}}{\longrightarrow} \oslash$. The latter choice could be absorbed in a term $(\rho-\gamma)\phi_i\equiv \tilde{\rho}\phi_i$ in the mean-field equation (\ref{eq:pde}) below with uniform couplings $\rho$ and $\gamma$. This choice would not lead to a stable coexistence fixed point \cite{josef} and therefore not to the desired feature of games within games\footnote{For the $(6,3)$-game we would have 40 fixed points, and the sign of the eigenvalues would then only depend on the sign of the parameter $\tilde{\rho}$.
At $\\tilde{\\rho}=0$ all fixed points collide and exchange stability through a multiple transcritical bifurcation. For $\\tilde{\\rho}>0$ (the only case of interest), the system has no stable fixed points, and the numerical integration of the differential equations diverges. (Similarly for the (3,1)-game, for $\\tilde{\\rho}>0$, the trivial fixed point with zero species is always an unstable node, while the coexistence fixed point is always a saddle.)}.\n\nThe species diffuse within a two-dimensional lattice, which we reduce to one dimension for simplicity if we analyze the behavior in more detail. We assume that there can be more than one individual of one or more species at each lattice site. Individuals perform a random walk on the lattice with rate $D_{\\alpha}\/h^d$, where $D_{\\alpha}$ is the diffusion constant, $h$ the lattice constant and $d$ the dimension of the grid. Diffusion is described by Eq.~(\\ref{diff}), where $i$ represents the site from which an individual hops, and $j$ is one of the neighboring sites to which it hops. It should be noticed that diffusion is the only place, which leads to a spatial dependence of the results, since apart from diffusion, species interact on-site, that is, within their patch.\n\nIn summary, the main differences to other related work such as references \\cite{reichen1,reichen2,reichen3,reichen4,reichen5,reichen6,durney,case, 28,m1,m2,m3} are the ultralocal implementation of prey and predation, no swapping, no mutations as considered in \\cite{mobilia1,mobilia2}, and a bosonic version with a dynamically ensured finite occupation number of sites. 
Even if qualitatively similar patterns like spirals or domains are generated in all these versions, the bifurcation diagram, that is, the stability properties and the mode of transition from one regime to another, depends on the specific implementation.

We can now write a master equation for the probability of finding the occupation numbers $\{n\}$ at time $t$ in the system for reaction and diffusion processes, where $\{n\}$ stands for $(n_{1,1},...,n_{N,L^d})$, $N$ is the number of species, and $L^d$ the number of sites:
\begin{eqnarray}\label{eq:me_reac}
\frac{\partial P^{reac} \left( \left\{ n \right\};t \right)}{\partial t} &=&
	 \underset{i}{\sum} \left\{
	 \underset{\alpha,\beta}{\sum} \frac{k_{\alpha\beta}}{V} \left[
	 n_{\alpha,i}\left(n_{\beta,i}+1\right)P\left(n_{\alpha,i},n_{\beta,i}+1,...;t \right)
	 - n_{\alpha,i}n_{\beta,i}P\left(\{n\};t \right) \right] \right.\nonumber \\
	 &+&\left. \underset{\alpha}{\sum} \frac{p_{\alpha}}{V} \left[
	 \left( n_{\alpha,i}+1 \right) n_{\alpha,i} P\left( ...,n_{\alpha,i}+1,...;t \right)
	 - n_{\alpha,i}\left( n_{\alpha,i}-1 \right) P(\{n\};t) \right] \right. \nonumber \\
	 &+& \left. \underset{\alpha}{\sum} r_{\alpha} \left[
	 (n_{\alpha,i}-1)P(n_{\alpha,i}-1,...;t) - n_{\alpha,i}P(\{n\};t) \right]
	 \right\}
\end{eqnarray}
\begin{eqnarray}\label{eq:me_diff}
	\frac{\partial P^{diff} \left( \left\{ n \right\};t \right)}{\partial t} &=& \underset{\alpha}{\sum} \frac{D_\alpha}{h^2} \underset{\left\langle i,j \right\rangle}{\sum} \left[ (n_{\alpha,i}+1)P(...,n_{\alpha,i}+1,n_{\alpha,j}-1,...;t)-n_{\alpha,i}P(\{n\};t)\right. \nonumber\\
	&+&\left. 
(n_{\alpha,j}+1)P(...,n_{\alpha,i}-1,n_{\alpha,j}+1,...;t)-n_{\alpha,j}P(\{n\};t)\right]
\end{eqnarray}
with $n_{\alpha,i}\ge 1$ for all $\alpha,i$, and
\begin{equation}\label{eq7}
	\partial_tP = \partial_tP^{reac}+\partial_tP^{diff}.
\end{equation}

As uniform (with respect to the grid) random initial conditions we assume a Poissonian distribution on each site $i$,
\begin{equation}
	P \left( \{n\} ;0 \right)=\underset{\alpha,i}{\prod}\left( \frac{\overline{n}^{n_{\alpha,i}}_{\alpha,0}}{n_{\alpha,i}!} e^{-\overline{n}_{\alpha,0}} \right),
\end{equation}
where $\overline{n}_{\alpha,0}$ is the mean initial number of individuals of species $\alpha$ per site.

\section{Derivation of the mean-field equations}\label{sec_vankampen}
The master equation is continuous in time and discrete in space. The diffusion term is included as a random walk. Next one takes the continuum limit in space, in which the random-walk part leads to the usual diffusion term in the partial differential equation (pde) for the concentrations $\varphi_\alpha(\vec{x},t)\equiv n_\alpha(\vec{x},t)/V$. The mean-field equations can then be derived by calculating the equations of motion for the first moments $\langle n_\alpha(\vec{x},t)\rangle$ from the master equation, where the average is defined as $\langle n_\alpha(\vec{x},t)\rangle=\sum n_\alpha(\vec{x},t)P(\{n_\alpha(\vec{x},t)\})$ with $P(\{n_\alpha(\vec{x},t)\})$ being a solution of the master equation, and by factorizing higher moments in terms of first-order moments.

Alternatively, we insert the ansatz for the van Kampen expansion, $n_\alpha=V\varphi_\alpha + \sqrt{V}\eta_\alpha$, in the reaction part. To leading order in $V$ we obtain the deterministic pde for the concentrations of the reaction part. Combined with the diffusion part this leads to the full pde that is given as Eq.~(\ref{eq:pde}) in the next section.
While this leading order then corresponds to the mean-field level, the next-to-leading order leads to a Fokker-Planck equation with an associated Langevin equation, from which one can determine the power spectrum of fluctuations. In our realization, the visible patterns are not fluctuation-induced, in contrast to the noise-induced patterns considered in \cite{goldenbutler}. Therefore our power spectrum of fluctuations is buried under the dominating spectrum that corresponds to patterns from the mean-field level, and we do not further pursue the van Kampen expansion here\footnote{For details of a possible derivation of the mean-field equations we refer to \cite{darkathesis}; however, there we derived the mean-field equations via a longer detour towards a field-theoretic formulation, where we read off the mean-field equations as the leading order of a van Kampen expansion, applied not to the master equation, but to a Lagrangian that appears in the path integral derived from the master equation, in analogy to the derivation in \cite{goldenbutler}.}.

\section{Stability analysis of the mean-field equations and their solutions}\label{sec_Jacobian}
We perform a linear stability analysis of the mean-field equations by finding the fixed points of the system of partial differential equations
\begin{equation}\label{eq:pde}
	\partial_t\varphi_\alpha =
	D_\alpha\nabla^2\varphi_\alpha+
	r_\alpha\varphi_\alpha-
	p_\alpha\varphi_\alpha^2 -
	\underset{\beta}{\sum}k_{\beta\alpha}\varphi_\alpha\varphi_\beta,
\end{equation}
with $\varphi_\alpha$ the concentration of species $\alpha$,
by setting $\partial_t\varphi_\alpha =D_\alpha\nabla^2\varphi_\alpha=0$.
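Besides the fixed-point analysis, Eq.~(\ref{eq:pde}) can be integrated directly by the method of lines. The following sketch (illustrative Python with illustrative grid and step sizes, not the integrator used for the figures below) anticipates the homogeneous rates $r_\alpha=\rho$, $p_\alpha=\gamma$, $D_\alpha=\delta$ introduced below, and integrates the system in one dimension with periodic boundaries by an explicit Euler step:

```python
import numpy as np

def rhs(phi, k, rho, gamma, delta, dx):
    """Right-hand side of the mean-field pde for homogeneous rates.
    phi has shape (N, L); k[b, a] is the rate with which species b preys on
    species a; the Laplacian is the periodic three-point stencil."""
    lap = (np.roll(phi, 1, axis=1) - 2.0 * phi + np.roll(phi, -1, axis=1)) / dx**2
    # loss term: sum_b k_{b a} phi_b, evaluated at every site simultaneously
    return delta * lap + rho * phi - gamma * phi**2 - (k.T @ phi) * phi

def integrate(phi, k, rho, gamma, delta, dx, dt, steps):
    """Explicit Euler time stepping; step sizes are for illustration only."""
    for _ in range(steps):
        phi = phi + dt * rhs(phi, k, rho, gamma, delta, dx)
    return phi
```

With a cyclic (6,3) predation matrix and $\kappa/\gamma<1$, every site converges to the coexistence value $\rho/(\gamma+3\kappa)$; for $\kappa/\gamma>2$ the same code develops the domains and oscillations discussed in the following subsection.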
We will focus on the system with homogeneous parameters $r_\alpha=\rho$, $k_{\alpha\beta}=\kappa$ if species $\alpha$ preys on $\beta$ and 0 otherwise, $p_\alpha=\gamma$, and $D_\alpha=\delta$, $\forall\alpha,\beta\in\{1,...,N\}$, and consider the special case of the (6,3)-game. After finding the fixed points, we look at the eigenvalues of the Jacobian $J$ of the system~(\ref{eq:pde}) to determine the stability of the fixed points. We then extend our analysis by a spatial component, analyzing the linearized system in Fourier space, with Jacobian~\cite{cianci}
\begin{equation}
J^{SP}=J+\underline{D}\tilde{\Delta},
\end{equation}
where $\tilde{\Delta}=-k^2$ is the Fourier transform of the Laplacian and $\underline{D}$ is the diffusion matrix evaluated at a given fixed point. In our case, the diffusion matrix is diagonal, $\underline{D}=\delta\mathbb{1}$. This leads to a dependence of the stability of the fixed points on diffusion.

\subsection{Stability analysis and numerical integration for the (6,3)-game}
{\bf Stability analysis of the (6,3)-game.}
The (6,3)-game is given by the system of mean-field equations:
\begin{eqnarray}\label{eq:MF(6,3)}
\frac{\partial\varphi_1}{\partial t} & = & \delta\nabla^2\varphi_1 + \rho\varphi_1 - \gamma\varphi_1^2 - \kappa\varphi_1(\varphi_4+\varphi_5+\varphi_6) \nonumber \\
\frac{\partial\varphi_2}{\partial t} & = & \delta\nabla^2\varphi_2 + \rho\varphi_2 - \gamma\varphi_2^2 - \kappa\varphi_2(\varphi_5+\varphi_6+\varphi_1) \nonumber \\
\frac{\partial\varphi_3}{\partial t} & = & \delta\nabla^2\varphi_3 + \rho\varphi_3 - \gamma\varphi_3^2 - \kappa\varphi_3(\varphi_6+\varphi_1+\varphi_2) \nonumber\\
\frac{\partial\varphi_4}{\partial t} & = & \delta\nabla^2\varphi_4 + \rho\varphi_4 - \gamma\varphi_4^2 - \kappa\varphi_4(\varphi_1+\varphi_2+\varphi_3) \nonumber\\
\frac{\partial\varphi_5}{\partial t} & = & \delta\nabla^2\varphi_5 + \rho\varphi_5 - \gamma\varphi_5^2 - \kappa\varphi_5(\varphi_2+\varphi_3+\varphi_4) \nonumber\\
\frac{\partial\varphi_6}{\partial t} & = & \delta\nabla^2\varphi_6 + \rho\varphi_6 - \gamma\varphi_6^2 - \kappa\varphi_6(\varphi_3+\varphi_4+\varphi_5). \nonumber\\
\end{eqnarray}
In total there are 64 different fixed points $FP_1$ to $FP_{64}$, of which some have the same set of eigenvalues and differ only by a permutation of their coordinates, so that we can sort all fixed points into 12 groups $FP^1$-$FP^{12}$: for example, the fixed points $(\rho/(\gamma+\kappa),0,\rho/(\gamma+\kappa),0,\rho/(\gamma+\kappa),0)$ and $(0,\rho/(\gamma+\kappa),0,\rho/(\gamma+\kappa),0,\rho/(\gamma+\kappa))$ are in the same group $FP^3$. We will refer to all fixed points by the number of the group they belong to, that is to $FP^1$ to $FP^{12}$, instead of $FP_1$ to $FP_{64}$.\\
The zero fixed point $FP^1$, where all components are equal to zero, with all eigenvalues equal to $\rho$ for $\delta=0$, and equal to $\rho-\delta k^2$ for $\delta\neq0$, is unstable for a system without spatial assignment, while it can become stable for a spatial system if $\rho<\delta k^2$, as in the cases of the (3,1)- and (3,2)-games, which are discussed in detail in the supplementary material. In the coexistence fixed point $FP^2$, all components are equal to $\rho/(\gamma+3 \kappa)$. It is stable for $\kappa/\gamma<1$: three of the eigenvalues are always negative, the first one being $-\rho$ and the second and third ones equal to $-\rho\gamma/(\gamma+3\kappa)$; two are complex conjugates, $-\rho(\gamma-\kappa\pm i \sqrt{3}\kappa)/(\gamma+3\kappa)$; and the last one is real, $-\rho(\gamma-\kappa)/(\gamma+3\kappa)$.
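These statements are easy to verify numerically. The following sketch (illustrative Python, not code from this work) builds the Jacobian of the homogeneous system at $FP^2$ for $\delta=0$ and checks that the largest real part of its spectrum changes sign at $\kappa/\gamma=1$:

```python
import numpy as np

def fp2_jacobian(rho, gamma, kappa, N=6, r=3):
    """Jacobian of the homogeneous (6,3) mean-field system (delta = 0) at the
    coexistence fixed point phi* = rho / (gamma + 3 kappa); species a is
    preyed upon by the r species a-1, ..., a-r (mod N)."""
    phi = rho / (gamma + r * kappa)
    J = np.zeros((N, N))
    for a in range(N):
        J[a, a] = rho - 2.0 * gamma * phi - r * kappa * phi   # = -gamma * phi
        for s in range(1, r + 1):
            J[a, (a - s) % N] = -kappa * phi
    return J

def fp2_stable(rho, gamma, kappa):
    """FP^2 is linearly stable iff all eigenvalues have negative real part."""
    return np.linalg.eigvals(fp2_jacobian(rho, gamma, kappa)).real.max() < 0.0
```

For $\rho=\gamma=1$ and $\kappa=0.5$ all real parts are negative, while for $\kappa=2$ the real eigenvalue $-\rho(\gamma-\kappa)/(\gamma+3\kappa)$ has become positive, in agreement with the bifurcation at $\kappa/\gamma=1$.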
At $\\kappa\/\\gamma=1$, $FP^2$ becomes a saddle, three of the six eigenvalues change sign, complex conjugates change sign of their real part, so a Hopf bifurcation occurs, and the direction corresponding to the last eigenvalue becomes unstable.\\\\\nOther fixed points include the survival of one species ($FP^4$), two species (for both $FP^5$ and $FP^6$), three (for $FP^7$ and $FP^8$), four ( for $FP^9$, $FP^{10}$ and $FP^{11}$), and five species (for $FP^{12}$). All fixed points $FP^4$ to $FP^{12}$ are always saddles in the case of $\\delta=0$.\\\\\nFor $\\delta\\neq0$ all eigenvalues get a $(-\\delta k^2)$-term, which can extend the stability regime in the parameter space, as long as $k\\neq0$, and lead to the coexistence of stable fixed points, which cannot be found for $\\delta=0$.\n\n\\begin{figure}[ht]\n\t\\begin{center}\n\t\t\\includegraphics[width=5cm]{fig1.pdf}\n\t\t\\end{center}\n\t\\caption{Diagram of a (6,3)-game. Colors represent species, each preys on three other species in clockwise direction, shown only for the red species by black arrows. Red lines connect species, which form one domain (red, green, and blue), the other three species (cyan, magenta, and yellow) form the second domain. Each species (like the red one) preys on only one species from its own domain (green), and on two species from the other domain (cyan and magenta), this way eliminating all predators of the third species (blue) from the domains. These rules are characteristic for all games, in which a domain forms of three species playing the (3,1)-game. Colors in this scheme will be used throughout the paper to represent species one (red) to six (yellow). }\\label{(6,3)}\n\\end{figure}\n\nIn view of pattern formation we shall distinguish three regimes. Before we go into detail, let us first give an overview of the sequence of events, if we vary the bifurcation parameters $\\kappa$ and $\\gamma$ as $\\kappa\/\\gamma$. 
These are events which we see both in the Gillespie simulations and in the numerical integration of the mean-field equations in space, where space is described by a finite grid upon integration.
\begin{itemize}
\item $\kappa/\gamma<1$: the first regime, with $\kappa/\gamma$ smaller than its value at the first Hopf bifurcation at $\kappa/\gamma=1$, where the 6-species coexistence fixed point becomes unstable. As long as this fixed point is stable, we see no patterns, as the system converges at each site of the grid to the 6-species fixed point without dominance of any species, so that the uniform color is gray.
\item $1<\kappa/\gamma<2$: the second regime, with $\kappa/\gamma$ chosen between the first and second Hopf bifurcations, where the second one happens at $\kappa/\gamma=2$ for the $FP^3$ fixed points. When $\kappa/\gamma=1$ is crossed from below, that is from $\kappa/\gamma<1$, two fixed points belonging to the $FP^3$-group become stable through a transcritical bifurcation; they remain stable until $\kappa/\gamma=2$, where they become unstable through the second Hopf bifurcation. Each of the two predicts the survival of three species, the ones found inside the domains. Each of these fixed points is, of course, a single-site fixed point, so in principle a subset of the nodes of the grid can individually approach one of the two fixed points, while the complementary set of nodes would approach the other fixed point. However, as a transient we see two well separated domains with either even or odd species. At the interfaces between them all six species are present and show small-amplitude oscillations, caused by the first Hopf bifurcation of the six-species coexistence fixed point, where it became a saddle.
Which one of the domains wins the effective (2,1)-game in the end, where a single domain with all its three species survives, depends on the initial conditions and on the fact that diffusion is included; the mere stability analysis only suggests that six species at a site destabilize the interface between domains with either even or odd species. In fact, the numerical integration and the Gillespie simulations both show that one domain gets extinct if the lattice size is small enough and/or the diffusion fast enough. As long as the two fixed points are stable, the (3,1)-game is played at each site of a domain in the sense of three coexisting species which do not chase each other and are related to the neighboring sites only via diffusion, without forming any patterns. Patterns are only visible at the interface of the domains as a remnant of the unstable six-species coexistence fixed point.
\item $\kappa/\gamma>2$: the third regime, which is of most interest for pattern formation.
Starting from random initial conditions, the species first segregate into two domains, each consisting of three species, one with species 1, 3, and 5, the second with species 2, 4, and 6, and inside both domains the three species play a rock-paper-scissors game, chasing each other, since the two fixed points of the $FP^3$ group became unstable at the second Hopf bifurcation. Due to the interactions according to an effective $(2,1)$-game at the interfaces of the domains (here with either two or four species coexisting), also here one of the domains will get extinct, including its three species, while the remaining three survive. Which domain survives again depends on the initial conditions. As we shall see, the temporal trajectories of the concentrations of the three species in the surviving domain show that they still explore the vicinity of the second Hopf bifurcation from time to time, while they are otherwise attracted by the heteroclinic cycle.
The larger the grid on which the species continue playing (3,1), the longer the three species in the surviving domain live. In contrast to the second regime, however, two of the three species in the surviving domain will get extinct as well, and a single one remains in the end. This extinction is caused by fluctuations in the finite population in the stochastic simulation, or by the numerical integration on a spatial grid with finite numerical accuracy, respectively.
\end{itemize}
So the linear stability analysis indicates when we can expect oscillatory trajectories: it is the Hopf bifurcations in the (6,3)-game for the $FP^2$ and $FP^3$ fixed points that induce the creation of limit cycles, which here lead to the {\it formation of spirals} in space in the third regime, and to only temporary patterns at the interfaces in the second regime, before the system converges to one of the $FP^3$ fixed points.

Moreover, it is the two $FP^3$ fixed points in the (6,3)-game that correspond to the {\it formation of two domains}. In both the (3,2)- and the (6,3)-games, one of these fixed points will be approached as a collective fixed point (shared by all sites of the grid), while the domain corresponding to the other one gets extinct, and patterns are seen if this fixed point is unstable. So in the (6,3)-game the existence of the domains, including their very composition, is due to two stable (second regime) or unstable (third regime) fixed points. Their coexistence is in both regimes transient. In the second regime three species will survive in the end, because the three-species coexistence fixed point is stable, and it would need a large fluctuation to kick the system towards a 1-species unstable fixed point. In contrast, only one species will survive in the third regime, where the same fixed point is unstable.
Obviously here it does not need a rare, large fluctuation to kick the system towards the 1-species unstable fixed point, as we always observed a single species to survive in the end, in a relatively short time, both in the Gillespie simulations and in the numerical integration.

We should mention, however, that from our Gillespie simulations we cannot exclude that a large fluctuation would, after all, kick the system also in the second regime from its metastable state towards one of the unstable 1-species fixed points, and in the first regime, when the six-species fixed point is stable in the deterministic limit, to one of the two three-species fixed points or to one of the six 1-species fixed points. So far we have not searched for these rare events, in which two, three or five species would get extinct, respectively.
\vskip5pt
\textbf{Numerical solutions of the (6,3)-game.}

\begin{figure}[tp]
	\begin{center}
		\includegraphics[scale=1]{fig2.pdf}
	\end{center}
	\caption{Evolution of the (6,3)-game in the second regime in one dimension. The parameters are $ \gamma = \rho = 1 $, $ \delta/dx = 0.1 $, and $ \kappa = 1.5 $. The right and middle columns show the species of each domain separately.
For further explanations see the text.}\label{(6,3)_array_DOM}
\end{figure}

In the following we show evolutions of species concentrations in space and time for parameters chosen from the second and third regime of the (6,3)-game. These solutions are obtained from the numerical integration of Eqs.~(\ref{eq:MF(6,3)}).
For the representation on a lattice we will use the following procedure to visualize site occupation: odd species are represented by the rgb (red, green, blue) color scheme, while even species are represented by cmy colors (cyan, magenta, yellow).
The three numbers of species $(r,g,b)$, or $(c,m,y)$, divided by the total sum of all species at the site, give a color in the rgb- or cmy-spectrum that results from a weighted superposition of the individual colors, where the weights (color intensities) depend only on the ratios of occupation numbers, rather than on absolute numbers. Moreover, we display the rgb-color scheme if odd species make up the majority at a site and the cmy-scheme otherwise. We should note that a well mixed occupation of odd (even) species leads to a dark (light) gray color in these color schemes.
Figure~\ref{(6,3)_array_DOM} shows coexisting domains with oscillations at the interfaces in the second regime. To justify the visualization of the data according to the ``majority rule'', we show even and odd species also separately in the two right panels. This way we can see the transitions at the interfaces of the domains between even and odd species more clearly. The light (dark) gray domain corresponds to a well mixed occupancy with even (odd) species, respectively. On the boundaries of the domains all six species are present, and if we zoom into the boundary, we can see small-amplitude oscillations caused by the Hopf bifurcation of the 6-species coexistence fixed point, see figure~\ref{(6,3)_tx_DOM}. Figures~\ref{(6,3)_array_DOM} (a)-(c) show the evolution of the system over the first 100 time units (t.u.), during which the domains already start to form. In panel (a) it is seen how the transient patterns generated by small, short-lived domains fade away shortly after these domains disappear, so that the remaining transient patterns are generated by oscillations at the interfaces. The figure is also reminiscent of the early-time evolution of condensate formation in a zero-range process, where initially many small condensates form, which finally get absorbed into a condensate that is located at a single site with macroscopic occupation in the thermodynamic limit.
Here initially many small and short-lived domains form, which first get absorbed into four domains, as seen in the figure, but later end up in a single domain with three surviving species. So we see a ``condensation'' in species space, where three out of six species get macroscopically occupied as a result of the interaction, diffusion and an unstable interface, while the remaining three species get extinct, so that the symmetry between the species in the cyclic interactions with identical rates gets dynamically broken.
\begin{figure}[tp]
	\begin{center}
		\includegraphics[scale=1]{fig3.pdf}\\
	\end{center}
	\caption{Trajectories of all six species corresponding to figure~\ref{(6,3)_array_DOM} at an interface. Red (1), green (3), and blue (5) represent the odd species; cyan (2), magenta (4), and yellow (6) the even species. (a) and (b) show temporal and spatial trajectories, respectively, at the beginning of the integration, corresponding to (a)-(c) in figure~\ref{(6,3)_array_DOM}, while (c) and (d) refer to late times. For further explanations see the text.}\label{(6,3)_tx_DOM}
\end{figure}

Panels (d)-(f) show the evolution from 10000 to 10100 t.u. The displayed domains were checked to coexist, numerically stably, up to $10^6$ t.u., while for smaller lattices and faster diffusion one domain gets extinct.
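One way to realize the ``majority rule'' coloring described above is the following sketch (illustrative Python; our reading of the procedure, not the plotting code used for the figures):

```python
import numpy as np

def site_color(n):
    """Map occupations n = (n_1, ..., n_6) to an RGB triple: odd species
    (1, 3, 5) carry red/green/blue weights, even species (2, 4, 6) carry
    cyan/magenta/yellow weights, and the scheme of the on-site majority is
    displayed; weights are ratios of occupation numbers, not absolute numbers."""
    n = np.asarray(n, dtype=float)
    total = n.sum()
    if total == 0.0:
        return np.array([1.0, 1.0, 1.0])      # empty site rendered white
    odd, even = n[[0, 2, 4]], n[[1, 3, 5]]
    if odd.sum() >= even.sum():
        return odd / total                    # rgb weights
    return 1.0 - even / total                 # cmy weights converted to rgb
```

A well mixed site of only odd species gives $(1/3,1/3,1/3)$, dark gray, and a well mixed site of only even species gives $(2/3,2/3,2/3)$, light gray, as stated in the text.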
Figures~\ref{(6,3)_array_DOM} (a) and (d) should be compared with figures~\ref{(6,3)1D_2} (a) and (b) of the Gillespie simulations, respectively.

Figure~\ref{(6,3)_tx_DOM} shows the corresponding oscillating concentration trajectories at early (a) and late (c) times at a site of an interface ($x=124$), where all six species oscillate around the coexistence saddle, as indicated by the horizontal black line in (c), while the spatial dependence at early (b) and late (d) times displays the domain formation due to the two stable fixed points, corresponding to figures~\ref{(6,3)_array_DOM} (a) and (d), respectively, so that the oscillations are restricted to the interfaces.
\begin{figure}[tp]
	\begin{center}
		\includegraphics[scale=1]{fig4.pdf}
	\end{center}
	\caption{Evolution of the (6,3)-game in the third regime in one dimension. The parameters are $ \gamma = \rho = 1 $, $ \delta/dx = 0.1 $, and $ \kappa = 4.0 $. The middle and right columns show the species of each domain separately. For further explanations see the text. }\label{(6,3)_array_OSC}
	%
\end{figure}

The evolution of the (6,3)-game in the third, oscillatory regime in one dimension is shown in figure~\ref{(6,3)_array_OSC}. Species are represented in the same way as in figure~\ref{(6,3)_array_DOM}. Panels (a)-(c) show the evolution of the system in the first 100 t.u., panels (d)-(f) in the first 10000 t.u. The two stable fixed points from the second regime have become unstable (saddles) through the second Hopf bifurcation. As in the second regime, at the beginning of the integration there is a separation of odd and even species, but at the same time they start to chase each other, resulting in oscillatory behavior in space and time. Here we no longer see traces of the limit cycle around the six-species coexistence fixed point as in the second regime, since no sites have six species coexisting, not even for a short period of time.
At the interfaces between even and odd species usually three species coexist, either two odd and one even or two even and one odd, but these mixtures are not stable, as the corresponding 3-species coexistence fixed points are saddles in the deterministic limit. It also happens that just two or four species coexist at the interface, but their coexistence fixed points are saddles as well. Therefore also here the coexistence of the domains is not stable; only one of them survives, and which one depends on the initial conditions, resulting in the extinction of the three odd or the three even species. In view of the Gillespie simulations, figures~\ref{(6,3)_array_OSC} (a) (early times) and (b) (late times) should be compared with figures~\ref{(6,3)1D_1} (a) (early) and (b) (late), respectively.

Figure~\ref{(6,3)_tx_OSC} (a) shows the evolution in time at late times, when only one domain survives. All three species oscillate between zero and one, corresponding to the heteroclinic cycle. From time to time the trajectories are also attracted by the saddle limit cycle, which is created by the second Hopf bifurcation of the three-species fixed point (black line), as indicated by the small-amplitude oscillations. Apart from the amplitude, the heteroclinic and saddle limit cycles differ in their frequency: the saddle limit cycle has a higher frequency than the heteroclinic cycle. Panel (b) shows the spatial trajectories at the beginning of the integration, when both domains still coexist. Yet we see no mixing of all six species at a single site; the 6-species coexistence fixed point is no longer felt in this regime.

\begin{figure}[tp]
	\begin{center}
		\includegraphics[scale=1]{fig5.pdf}\\
	\end{center}
	\caption{In correspondence to figure~\ref{(6,3)_array_OSC}: (a) temporal trajectories of the surviving domain at late times, in the interval 10200-10600 t.u., when only even species exist, and (b) spatial trajectories at early times when both domains still exist.
For further explanations see the text.}\label{(6,3)_tx_OSC}
	%
\end{figure}

As we see from figure~\ref{(6,3)_array_OSCend}, the numerical integration evolves to one of the saddles after having spent a finite time on the heteroclinic cycle, in contrast to the analytical prediction, according to which the trajectory would get stuck in one of the saddles connected by the heteroclinic cycle only in the infinite-time limit. According to figure~\ref{(6,3)_array_OSCend} (a), all trajectories get absorbed in one (the pink one) of the 1-species saddles already at finite time, as a result of the finite accuracy of the numerical integration. Yet figure~\ref{(6,3)_array_OSCend} (b) shows the characteristics of a heteroclinic cycle at finite time: the dwell time of the trajectory in the vicinity of the 1-species saddles gets longer and longer in each cycle, before it moves fast towards the next saddle in the cycle. This escape stops after a finite number of cycles, when the concentrations of two of the three species are zero within the numerical accuracy, so that no ``resurrection'' is possible.

\begin{figure}[htbp]
	\begin{center}
		\includegraphics[scale=1]{fig6.pdf}
	\end{center}
	\caption{Extinction of all but one species upon approaching the heteroclinic cycle, (a) (x,t)-diagram, (b) species concentrations as a function of time. For further explanations see the text.}\label{(6,3)_array_OSCend}
	%
\end{figure}

\section{Numerical Methods and Results}\label{sec_numerical}
Going back to the set of reactions, in this section we describe their Gillespie simulations.
We solve the system (\ref{eq:rec_sys})
by stochastic simulations on a regular square lattice as well as on a one-dimensional ring, using the Gillespie algorithm~\cite{gillespie}, combined with the so-called next-subvolume method~\cite{elf}. This method is one option to generalize Gillespie simulations to spatial grids.
We choose periodic boundary conditions on a square $L\times L$ lattice or on a ring with $L$ nodes. In our case nodes, or synonymously sites, represent subvolumes. All reactions except for diffusion happen between individuals on the same site (in the same subvolume), and a diffusion reaction is a jump of one individual to a neighboring site. One event can change the state of only one subvolume (if a reaction happens) or of two neighboring subvolumes (if a diffusion event happens). At each site the initial number of individuals of each species is chosen from a Poisson distribution $P \left( \{n\} ;0 \right)=\underset{\alpha,i}{\prod}\left( \frac{\overline{n}^{n_{\alpha,i}}_{\alpha,0}}{n_{\alpha,i}!} e^{-\overline{n}_{\alpha,0}} \right)$, with a mean $\overline{n}_{\alpha,0}$ that is randomly chosen for each species $\alpha$.

In the next-subvolume method we assign the random times of the Gillespie algorithm to subvolumes rather than to a specific reaction. To each subvolume, or site, we assign a time $\tau$ at which one of the possible events, in our case reactions (predation, birth or death) or diffusion, will happen. The time $\tau$ is calculated as $\tau = -\ln(rn)/r_{total}$, where $rn$ is a random number generated from a uniform distribution between 0 and 1. The total rate $r_{total}$ depends on the reaction rates and on the number of individuals which can participate in the events. Events happen at sites in the order of the assigned times $\tau$. Once it is known at which subvolume the next event happens, the event (reaction or diffusion) is chosen randomly according to the specified reaction rates.

We start the simulations with initial conditions from a Poisson distribution such that each site of the entire lattice is well mixed with all species. We want to study the dynamics of the system in a parameter regime where we expect pattern formation.
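The event scheduling described above can be sketched as follows (illustrative Python, not the simulation code itself); the only non-obvious detail is to draw the uniform variate from $(0,1]$ so that the logarithm stays finite:

```python
import numpy as np

def subvolume_time(r_total, rng):
    """Waiting time tau = -ln(u) / r_total of the next event in a subvolume,
    with u uniform on (0, 1] to avoid log(0)."""
    return -np.log(1.0 - rng.random()) / r_total

def choose_event(propensities, rng):
    """Pick one event with probability proportional to its propensity."""
    events = list(propensities)
    rates = np.array([propensities[e] for e in events], dtype=float)
    return events[rng.choice(len(events), p=rates / rates.sum())]
```

Sites are then processed in the order of their assigned times $\tau$, and after each executed event the propensities and times of the affected subvolumes are updated.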
From the linear stability analysis of the mean-field system we expect stable patterns in the regime without stable fixed points in the (6,3)-game, as long as three species are alive, and transient patterns in the (6,3)-game for coexisting stable fixed points.\nWe use the same color scheme as before to visualize the numerical solutions of the mean-field equations.\nWe count one integration step as one Gillespie step (GS) here.\n\nOur results confirm the predictions from the mean-field analysis: there are on-site oscillations in time in a limit-cycle regime. There are also oscillations in space, which form spirals on two-dimensional lattices. If the evolution approaches the stochastic counterpart of a stable fixed point in the deterministic limit, we refer to it as a ``noisy fixed point'', where the trajectories fluctuate around a value that is the mean-field fixed-point value multiplied by the parameter $V$ as defined before. Of particular interest is the influence of the diffusion, in relation to the ratio $\\kappa\/\\gamma$, on the patterns. It is the ratio $\\kappa\/\\gamma$ that determines the stability of the fixed points. As mentioned earlier, the value of $\\delta k^2$, which enters the stability analysis, can extend the stability regime. So in the Gillespie simulations it is intrinsically hard to disentangle the following two reasons for the absence of patterns in the case of fast diffusion: either the stability regime of a fixed point with only one surviving species is extended, or the diffusion is so fast that the extension of visible patterns is larger than the system size, so that a uniform color may just reflect the homogeneous part within a large pattern.\nAll the mean-field fixed points are proportional to the value of the parameter $\\rho$. If this value is much larger than $\\kappa$ and $\\gamma$, the fixed-point value is very large. This leads to a large occupation of the sites, which slows down the formation of patterns. 
The reason is that the number of reactions which are needed for the system to evolve to stable trajectories, either to oscillations or to fixed points, increases with the number of individuals in the system.\n\n\\vskip5pt\n\nWe study the stochastic dynamics of a (6,3)-game in regimes for which we expect pattern formation, i.e. for $\\gamma<\\kappa$. When the coexistence-fixed point $FP^2$ becomes unstable at $\\gamma=\\kappa$, we find the formation of two domains, each consisting of three species: one domain contains the odd species, represented in the figures by the shared colors red, green, and blue in the rgb-color scheme; the other domain consists of the even species, represented by the shared colors cyan, magenta, and yellow in the cmy-color representation, see figure~\\ref{(6,3)2D}. Inside the domains the three species play the (3,1)-game and form spiral patterns. We have checked that the domains in figure~\\ref{(6,3)2D} are not an artefact of the visualization, and determined, for example, the occupancy on a middle column of the lattice (not displayed here).\nOn sites with oscillations of species from one domain, there is very small or no occupation by species of the second domain, confirming the existence of the domains.\nThe time evolution of the six species on two sites, chosen, for example, from the middle column of the lattice, confirms that the species' trajectories oscillate in time, reflecting the stable limit cycles in the deterministic limit.\n\nHere a remark is in order as to whether radii, propagation velocity or other features of the observed spiral patterns can be predicted analytically. 
Spiral patterns in spatial rock-paper-scissors games were very well predicted via a multi-scale expansion in the work of \\cite{mobilia1,mobilia2}. We therefore performed a multi-scale expansion (see, for example, \\cite{bookkuramoto}) to derive amplitude equations for the time evolution of deviations from the two unstable fixed points, which lose their stability at the two Hopf bifurcations. However, the resulting amplitude equations differ from Ginzburg-Landau equations by a missing imaginary part, which can be traced back to the absence of an explicit constraint on the occupation numbers on sites and the absence of a conserved total number of individuals. As a result, the amplitude equations only predict the transient evolution as long as the trajectory is in the very vicinity of the unstable fixed point, but cannot capture the long-time behavior, which here is determined by an attraction towards the heteroclinic cycle that is responsible for the spiral patterns in our case. So it seems to be this non-local feature in phase space that the multi-scale expansion about the Hopf bifurcation misses.\n\nFor a further discussion of how the patterns depend on the choice of parameters we shall focus on the results on a one-dimensional lattice, since the simulation times are much longer in two dimensions. (In two dimensions, the period of oscillations is as long as about one fifth of the $2^{30}$ Gillespie steps.)\n\n\\begin{figure}[tp]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.9]{fig7.pdf}\n\t\t\\end{center}\n\t\\caption{Pattern formation for a (6,3)-game on a two-dimensional $(64\\times 64)$-lattice for weak diffusion and far from the bifurcation point. Snapshots are taken at $1000\\cdot2^{15}$ (a), $10000\\cdot2^{15}$ (b), and $32000\\cdot2^{15}$ (c) GS. Two domains are formed, each containing three species, indicated by the different color groups. These species play a (3,1)-game inside the domains and evolve spiral patterns. 
The parameters are $\\rho=1$, $\\kappa=1$, $\\delta=1$, and $\\gamma=0.2$.\n}\\label{(6,3)2D}\n\\end{figure}\n\n\n\\begin{figure}[tp]\n\t\\begin{center}\n\t\t\\includegraphics[scale=1]{fig8.pdf}\n\t\t\\end{center}\n\t\\caption{Pattern formation in the (6,3)-game on a one-dimensional lattice of 64 sites for $\\kappa\/2<\\gamma<\\kappa$, that is, in the second regime, where the coexistence-fixed points $FP^3$ are stable, for weak diffusion $\\delta=0.1$ ((a) and (b)) and strong diffusion $\\delta=1$ ((c) and (d)). Domains form for both strengths of the diffusion. In the case of weak diffusion no extinction of domains is observed within the simulation time of $2^{30}$ GS. For strong diffusion, one domain goes extinct after $800\\cdot2^{15}$ GS. Initially, oscillatory patterns appear as remnants of many interfaces between small domains, where within the interfaces six species oscillate due to the unstable 6-species coexistence-fixed point; these patterns fade away with time. This confirms the analytical results that in this parameter regime the $FP^3$-fixed points with $2\\times 3$ coexisting species are both stable, leading to the black color in (c) and (d) for the one surviving domain. Panel (a) shows the time evolution on a lattice for the time interval $(0-1000)\\cdot2^{15}$, (b) for $(30000-31000)\\cdot2^{15}$, (c) for $(0-1000)\\cdot2^{15}$, and (d) for $(10000-11000)\\cdot2^{15}$ GS. The parameters are $\\gamma=\\rho=0.5$, $\\kappa=0.6$.}\\label{(6,3)1D_2}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\begin{center}\n\t\t\\includegraphics[scale=1]{fig9.pdf}\n\t\t\\end{center}\n\t\\caption{Pattern formation in the (6,3)-game on a one-dimensional lattice of size 64, shown on a space-time grid, for $\\gamma<\\kappa\/2$, that is, in the third regime, where the coexistence-fixed points $FP^3$ are unstable, for weak diffusion $\\delta=0.1$ ((a) and (b)) and strong diffusion $\\delta=1$ ((c) and (d)). The parameters are $\\gamma=\\rho=0.5$ and $\\kappa=1.2$. 
Two domains form, of which one goes extinct after $600\\cdot2^{15}$ GS in the case of weak diffusion and after $150\\cdot2^{15}$ GS in the case of strong diffusion. The surviving domain keeps playing the (3,1)-game. For weak diffusion no further extinction is observed within the simulation time of $2^{30}$ GS, while for strong diffusion an extinction of all but one species, here the red one, happens after $1470\\cdot2^{15}$ GS. Panel (a) shows patterns in a time interval of $(0-700)\\cdot2^{15}$, (b) for $(30000-30100)\\cdot2^{15}$, (c) for $(0-200)\\cdot2^{15}$, and (d) for $(0-1470)\\cdot2^{15}$ GS.}\\label{(6,3)1D_1}\n\\end{figure}\n\nAs to diffusion: for stronger diffusion the patterns are more homogeneous and extinction events happen faster; sometimes they happen only for sufficiently strong diffusion, see figure~\\ref{(6,3)1D_2}. The extinction time also depends on the $\\kappa\/\\gamma$-ratio, i.e., on whether the ratio lies in the interval (i) $1<\\kappa\/\\gamma<2$, where $FP^3$ is stable and $FP^2$ is unstable, or (ii) $\\kappa\/\\gamma>2$, where both fixed points are unstable.\n\nFor case (i), the $FP^3$-fixed points are stable, yet at the beginning of the simulations the dynamics shows oscillatory behavior, caused by the interfaces between the small domains, where six species feel the unstable coexistence-fixed point at $\\gamma=\\kappa$; but after about $500\\cdot2^{15}$ Gillespie steps for weak diffusion, and $1000\\cdot2^{15}$ Gillespie steps for strong diffusion, and for our choice of parameters, the patterns fade away and the system evolves to a homogeneous state in both domains as long as they coexist, see figures~\\ref{(6,3)1D_2} (b) and \\ref{(6,3)1D_2} (d). 
The closer the system is to the bifurcation point $\\gamma=\\kappa$, the longer the oscillatory patterns live, and the more strongly the system feels the unstable 6-species fixed point.\n\nFigure~\\ref{(6,3)1D_2} (a) should be compared with the corresponding mean-field solution of figure~\\ref{(6,3)_array_DOM} (a) at early times, and figure~\\ref{(6,3)1D_2} (b) with figure~\\ref{(6,3)_array_DOM} (d) at late times, from which we see that the mean-field solutions reproduce the qualitative features, including the transient patterns.\n\nIn case (ii), the third regime, domains go extinct faster; both domains play a (3,1)-game in their interior. After one domain goes extinct, the surviving one keeps playing the (3,1)-game until only one species survives. Here one should compare figure~\\ref{(6,3)1D_1} (a) with the corresponding mean-field solution of figure~\\ref{(6,3)_array_OSC} (a) at early times and figure~\\ref{(6,3)1D_1} (b) with figure~\\ref{(6,3)_array_OSC} (d) at later times, where one is left with one domain and three species chasing each other; the final extinction of two further species is not visible in this figure due to the weak diffusion and the larger extinction time.\n\n\n\\section{Conclusions and Outlook}\\label{sec_conclusions}\nBeyond the emerging dynamically generated time and spatial scales, the most interesting feature of the (6,3)-game is the fact that the rules of the game, specified initially as (6,3), dynamically change to effectively (2,1) and (3,1) as a result of spatial segregation. In view of evolution, here the rules of the game change while being played. 
They change as a function of the state of the system, where the state corresponds to the spatial distribution of coexisting species over the grid.\n\nIn preliminary studies, we investigated the $(27,17)$-game with the following set of coexisting games in a transient period: from a random start we observe segregation towards nine domains playing $(9,7)$ with each other, and inside the domains again the $(3,1)$-game. From the superficial visualization of Gillespie simulations, this system looks like a fluid with whirling ``vortices'', where the (3,1)-game is played inside the domains. We expect a rich variety of games with new, emerging, effective rules on a coarser scale if we not only increase the number $N$ of species or relax the restriction to cyclic predation, but also allow for different time scales, defined via the interaction rates. So far we chose the same rates for all interactions and always started from random initial conditions.\n\nWe performed a detailed linear stability analysis, which together with the numerical integration of the mean-field equations reproduced all qualitative features of the Gillespie simulations, even extinction events. That the mean-field analysis worked so well is due to the ultralocal implementation of the interactions, so that the spatial dependence enters only via diffusion. The stability analysis already revealed a rather rich structure with 12 groups of in total 64 fixed points for the (6,3)-game. We focussed on coexistence-fixed points of six, three or one species.\n\nAlong with the fixed points' repulsion or attraction properties we observed three types of extinction, whose microscopic realizations are different and deserve further study:\n\n(i) In the second regime of the (6,3)-game, both domains, with either even or odd species, are in principle stable as long as they are not forced to coexist. We have seen a spatial segregation towards a domain with only even and one with only odd species occupying the sites. 
At the interface between both domains, six species cannot escape from playing the (6,3)-game. Since the 6-species coexistence-fixed point is unstable, the unstable interface seems to be the driving force that initiates the extinction of one of the two domains, including its three species, as interface areas should be reduced to a minimal size. From the coarse-grained perspective, one domain preys on the other domain, which is a (2,1)-game.\n\n(ii) In the third regime of the (6,3)-game, the domain structure of odd and even domains is kept, but in the interior of the domains the species follow heteroclinic cycles, which explain the patterns of three species chasing each other inside each domain.\nAt the interface between the domains, two to four species coexist at a site, but for small enough diffusion the coexistence-fixed points of the respective species are always saddles, so also here the instability of the interfaces seems to induce their avoidance, leading\nagain to the extinction of one of the two domains. 
So from the coarse-grained perspective, again a (2,1)-game is played between the domains.\\\\\nIt should be noted that, in contrast to systems where the fate of interfaces between domains is explained in terms of the competition between free energy and interface tension, here the growth of domains and the reduction of interfaces are traced back to the linear stability analysis of the system in the deterministic limit, which is decisive for the dynamics.\n\n(iii) The third type of extinction event was the extinction of two species, occurring when the individual trajectories move either in the vicinity of or along a heteroclinic cycle, and either a fluctuation in the Gillespie simulations or the finite numerical accuracy on the grid (used for integration) captures the trajectory in one of the 1-species saddles.\n\n\nWe have not studied rare large fluctuations, which could induce other extinction events and kick the system out of the basin of attraction of the 6- or 3-species stable coexistence-fixed points when stochastic fluctuations are included. Neither have we measured the scaling of the extinction times or of the domain growth with the system size. This is left for future work.\\\\\nFurthermore, for future work it would be challenging to derive and predict the domain formation on the coarse scale from the underlying $(6,3)$-game on the basic lattice scale, in the spirit of the renormalization group, here, however, applied to differential equations rather than to an action.\n\n\n\\section{Acknowledgments}\nOne of us (D.L.) is grateful to the German Research Foundation (DFG) (ME-1332\/25-1) for financial support during this project. We are also indebted to the German Academic Exchange Service (DAAD) (ID 57129624) for financial support of our visit at Virginia Tech in Blacksburg, where we would like to thank Michel Pleimling for valuable discussions. 
We are also indebted to Michael Zaks (Potsdam University) for useful discussions.\\\\\n\n\\section{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Insights and Conclusions}\n\\label{section:conclusions}\n\\emph{Dimensional circuit synthesis} is a new method for generating\ndigital logic circuits that improve the efficiency of training and\ninference of machine learning models from sensor data. The method\ncomplements prior work on \\emph{dimensional function synthesis}, a\nnew method for learning models from sensor data that enables orders\nof magnitude improvements in training and inference on physics-constrained\nsignal data. Dimensional circuit synthesis, which we present in\nthis paper, implements preprocessing steps required by dimensional\nfunction synthesis in hardware. This article presented the principle\nbehind the methods and the design and implementation of a compiler\nbackend that implements dimensional circuit synthesis for the iCE40,\na low-power miniature FPGA in a wafer-scale\n2.15\\,mm$\\times$2.50\\,mm WLCSP package which is targeted at sensor\ninterfacing tasks and at on-device machine learning. The hardware\naccelerators that the method generates are compact (fewer than four\nthousand gates for all the examples investigated) and low power\n(dissipating less than 6\\,mW even on a non-optimum FPGA). These\nresults show, for the first time, that it could be feasible to\nintegrate physics-inspired machine learning methods within low-cost\nminiaturized sensor integrated circuits, right next to the sensor\ntransducer.\n\n\\section{Introduction}\nSensor integrated circuits are at the forefront of the data pipeline\nfeeding the recent revolution in machine learning systems. Sensors\ntransduce a physical signal such as acceleration, temperature, or\nlight, into a voltage which is then converted by analog-to-digital\nconverters (ADCs) into a numeric representation, for input to\ncomputation. 
Digital preprocessing within sensor integrated circuits,\nor the software that consumes their output, then applies appropriate\ncalibration constants and scaling to convert these digitized voltages\ninto a scaled and dimensionally-meaningful representation of the\nsignal (e.g., acceleration in $m\/s^2$).\n\nFigure~\\ref{fig:introduction:sensors-in-systems} shows how contemporary\nsensor-driven computing systems move the digitized data at the\noutput of signal conversion circuits through many transmission and\nstorage steps before the data are used in training a model or in\ndriving an inference, typically on a server far removed from the\nsensing process. This data movement costs time and energy. As\never-greater volumes of data enable new applications and\ninference models, it will be valuable to perform the necessary\ncomputations as close to the signal acquisition and transduction\nprocess as possible: ideally, in the sensor integrated circuit\nitself (labeled ``\\ding{202}'' in\nFigure~\\ref{fig:introduction:sensors-in-systems}).\n\nHowever, since these sensor integrated circuits are typically\nrequired to be low cost (often under 10\\,USD), have small die area\n(often less than 4\\,mm$^2$), and use minimal power (typically under\n1\\,mW), it is challenging to integrate even the most efficient and\ncompact traditional learning and inference methods into these devices\nthemselves.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, angle=0, width=0.485\\textwidth]{Illustrations\/sensors-in-systems.pdf}\n\\caption{Existing sensing systems typically send data to servers\nfor training models and generating inferences. 
Moving data both\nwithin a system and over networks adds latency and costs energy.}\n\\label{fig:introduction:sensors-in-systems}\n\\end{figure}\n\n\n\n\\subsection{Physics constrains signals from sensors}\nThe values taken on by data from sensors are constrained by the\nlaws of physics and by the dynamics of the structures to which\nsensors are attached. Most physical laws and the governing equations\nfor most system dynamics take the form of sums of product terms\nwith each product term comprising powers of the system's\nvariables~\\cite{feynman1967character, feynman1965feynman,\nBuckingham1914}. Because there are a bounded number of irreducible\nrelative powers in these product terms whose units of measure result\nin meaningful units for the whole expression, it is common in many\nengineering disciplines to use information on units of measure\n(\\textit{dimensional analysis}) to derive candidate relations for\nexperimentally-observed phenomena~\\cite{mahajan2010street,\nBuckingham1914}. Recent work~\\cite{10.1145\/3358218} has used this\nobservation to prune the hypothesis set of functions considered\nduring machine learning to achieve significant improvements in both\ntraining and inference, improving training latency by 8660$\\times$\nand reducing the arithmetic operations in inference over 34$\\times$.\n\nIn this work, we build on these results to develop a new backend\nfor the Newton compiler~\\cite{lim2018newton}. The backend generates\nregister transfer level (RTL) hardware designs for accelerating the\nexecution of the required pre-inference parts of analytic models\nrelating the signals in a multi-sensor system according to the\nspecifications of the physics of a system. 
Because our method uses\ninformation from dimensional analysis and units of measure to\nsynthesize hardware to accelerate inference from sensors, we call\nthe method \\emph{dimensional circuit synthesis}.\n\n\\subsection{Dimensional circuit synthesis: physics-derived pre-inference processing in sensors}\nDimensional circuit synthesis is a compile-time method to generate\ndigital logic circuits for performing pre-inference processing on\nsensor signals. Dimensional circuit synthesis takes as its input a\nspecification of the signals that can be obtained from the sensors\nin a system and their units of measure. Using these specifications,\ndimensional circuit synthesis generates hardware to compute a set\nof physically plausible expressions relating the signals in the\nsystem. The logic circuits which the method generates represent\nsets of monomial expressions which form dimensionless groupings of\nsensor signals (i.e., products whose units cancel out). The\nsynthesized hardware takes as input digital representations of\nsensor readings and generates the computed values of the dimensionless\nexpressions as its output. A machine learning training or inference\nprocess then uses these dimensionless products as its inputs, and\nprior work has shown that this preprocessing can significantly\nimprove both the latency and accuracy of inference. 
An on-device\n(in-sensor) inference engine will integrate the generated RTL that\nperforms pre-processing with either custom RTL or a programmable\ncore implementing the inference using, e.g., a neural network.\nWe evaluate the generated RTL on the Lattice iCE40, a\nstate-of-the-art, ultra-miniature FPGA that meets the size\nand power consumption constraints of in-sensor processing.\n\n\\subsection{Contributions}\nThis article makes two main contributions to on-device and in-sensor \ninference:\n\\begin{itemize}\n\\item We present dimensional circuit synthesis\n(Section~\\ref{sec:methodology}), a new method to generate RTL\nhardware for pre-processing sensor data prior to inference, thereby\nimproving latency and reducing overhead.\n\n\\item We evaluate the generated RTL on the Lattice\niCE40 ultra-miniature FPGA (Section~\\ref{sec:results}) and show\nthat the generated RTL is fast enough to allow real-time processing,\nwhile consuming minimal power.\n\\end{itemize}\n\n\n\n\n\n\\section{Background and Methodology}\n\\label{sec:methodology}\n\\label{subsec:dfs}\nDimensional circuit synthesis takes as input descriptions of the\nunits of measure of the sensor signals in a system.\nFigure~\\ref{figure:methodology:NewtonExample} shows an example\nNewton description for an unpowered UAV (i.e., a glider). Let a\nphysical system for which we want to construct an efficient predictive\nmodel have $k$ symbols corresponding to physical constants or sensor\nsignals. 
From the Buckingham $\\Pi$-theorem~\\cite{Buckingham1914},\nwe can form $N \\le k$ dimensionless products, $\\Pi_1 \\ldots \\Pi_N$,\nand these dimensionless products are the roots of some function\n$\\Phi$, where\n\\begin{align}\n\\label{equation:Phi}\n\\Phi (\\Pi_1, \\Pi_2, \\dots, \\Pi_i, \\ldots, \\Pi_{N}) = 0 .\n\\end{align}\nWang \\textit{et al.}~\\citep{10.1145\/3358218} use\nEquation~\\ref{equation:Phi} as the basis for generating a preprocessing\nstep for offline training and inference of models of physical systems\nand propose an automated framework for generating these dimensionless\nproducts. In a subsequent calibration step, they learn a model for\nthe function $\\Phi$ and demonstrate that learning $\\Phi$ from the\n$\\Pi_1 \\ldots \\Pi_N$ can be both significantly more efficient and\nmore accurate than learning a function from the original $k$ sensor\nsignals directly. The dimensionless products $\\Pi_1 \\ldots \\Pi_N$\nare essential to both training and inference and to achieving the\norders-of-magnitude speedup. In this work, we present a method\nfor generating hardware to efficiently compute these $\\Pi$s. Doing\nso close to the sensor transducer also reduces the data that sensing\nsystems must transmit from the sensor transducer to either a sensor\nhub, microcontroller, or other component performing on-device\ntraining and inference, potentially improving system efficiency and\nperformance. Figure~\\ref{fig:introduction:dfsRTL} shows how the\ngenerated hardware for $\\Pi$ computation fits within an on-device\ninference system.\n\n\n\n\\begin{figure}\n\\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, angle=0, width=1.0\\columnwidth]{Illustrations\/DFS-RTL.pdf}\n\\caption{The hardware generated by dimensional circuit synthesis\npreprocesses $k$ sensor signals to obtain $N \\le k$ \\emph{dimensionless\nproducts} $\\Pi_1 \\ldots \\Pi_N$. 
A predictive model takes these\nproducts as input and generates an inference output.}\n\\label{fig:introduction:dfsRTL}\n\\end{figure}\n\n\n\n\\subsection{Dimensional Circuit Synthesis}\nFigure~\\ref{fig:introduction:dfsRTL} shows how hardware blocks\ngenerated by dimensional circuit synthesis calculate the values of\nthe $\\Pi$ products. The inputs of these modules are the sensor\nsignals corresponding to the physical parameters specified as the\ninput to the dimensional circuit synthesis analysis in the Newton\nspecification language (see, e.g.,\nFigure~\\ref{figure:methodology:NewtonExample}). The calculated\n$\\Pi$ product values correspond to the output of the pre-processing\nstep of the inference function and they feed into any existing\nmethod for classification or regression. This final step could be\na programmable low-power core, such as the 32-bit RISC cores now\nintegrated into some state-of-the-art sensor integrated circuits,\nor a low-power machine learning accelerator such as\nMarlann~\\cite{SymbioticEDA:Marlann}, implemented either in RTL or\nin a miniature FPGA like the one we use in our evaluation in\nSection~\\ref{sec:results}.\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, angle=0, width=2.0\\columnwidth]{Illustrations\/DFS-compilation-process.pdf}\n\\caption{Proposed dimensional circuit synthesis framework. In\nStep~\\ding{202}, the user provides the specification and target\ninference parameter of the examined physical system. The Newton compiler,\nincluding our implemented backend, is executed in Step~\\ding{203}.\nIn Step~\\ding{204} the framework translates the generated RTL modules\nto an FPGA bitstream and, in parallel, we execute a manual calibration\nof dimensional functions~\\cite{10.1145\/3358218} (box with dashed\nborder). 
In Step~\\ding{205} the generated SW and HW modules are\ndownloaded to the in-sensor inference engine.}\n\\label{fig:introduction:dfsCompilationProcess}\n\\end{figure*}\n\n\n\n\n\\subsubsection{Approach and implementation}\nFigure~\\ref{fig:introduction:dfsCompilationProcess} shows the four\nsteps which make up our implementation of dimensional circuit\nsynthesis. We implemented these steps as a new backend of the\nNewton compiler~\\cite{lim2018newton}, but the techniques in\nFigure~\\ref{fig:introduction:dfsCompilationProcess} could in principle\nbe applied to any specification for physical systems that contains\ninformation on units of measure.\n\nIn Step~\\ding{202}, a user of dimensional circuit synthesis creates\na Newton language description such as that in\nFigure~\\ref{figure:methodology:NewtonExample}, specifying the\nphysical signals that describe the target physical system and from\nwhich a machine learning model will eventually be trained.\n\nNext, in Step~\\ding{203}, the user\ninvokes the Newton compiler with our new dimensional circuit synthesis\nbackend activated. Because the method of constructing dimensionless\ngroups can result in multiple dimensionless products ($\\Pi_i$ in\nEquation~\\ref{equation:Phi}), the user specifies which of the physical\nsignals in the input physical system description will be the target\nvariable of a machine learning model for the function $\\Phi$ from\nEquation~\\ref{equation:Phi}. 
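To make concrete what membership in a dimensionless product means, the following sketch checks whether a candidate monomial of signal powers has all base-dimension exponents cancel. The pendulum-style signals (period, length, gravity) and the exponent-vector encoding are illustrative stand-ins, not the Newton backend's internal representation, which constructs the groups automatically:

```python
from fractions import Fraction

# Units of measure encoded as exponent vectors over base dimensions (m, s, kg).
# These example signals are a hypothetical pendulum-like system.
UNITS = {
    "period":  (0, 1, 0),   # s
    "length":  (1, 0, 0),   # m
    "gravity": (1, -2, 0),  # m / s^2
}

def dimensions_of(monomial):
    """Sum the base-dimension exponents of a monomial given as
    {signal: power}; a dimensionless product sums to all zeros."""
    dims = [Fraction(0)] * 3
    for signal, power in monomial.items():
        for i, d in enumerate(UNITS[signal]):
            dims[i] += Fraction(power) * d
    return tuple(dims)

def is_dimensionless(monomial):
    return all(d == 0 for d in dimensions_of(monomial))
```

For instance, the monomial $T^2 g / l$ (powers $\{+2, +1, -1\}$) passes this check, which is the defining property of a $\Pi$ product.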
Our new dimensional circuit synthesis\nbackend identifies the group of dimensionless products where the\ntarget parameter appears in only one of the dimensionless products.\nThe outputs of Step~\\ding{203} are: (i) a function $\\Phi$, defined\nin terms of the dimensionless products $\\Pi_i$, but whose form has\nnot yet been fully defined; (ii) RTL descriptions for hardware to\ncompute the dimensionless products $\\Pi_i$, including RTL descriptions\nof the functional units (multipliers and dividers) that will perform\nthe arithmetic operations of the dimensionless product monomials.\nBecause floating-point operations can be expensive in both resources\nand execution latency on energy-constrained on-device training and\ninference systems, we use a signed fixed-point approximate real\nnumber representation~\\cite{behrooz2000computer} for the signals\nin the dimensionless products computed in the synthesized hardware.\nEach real number is represented by 32 bits, using 1 bit for the\nsign, 16 bits for the integer part and 15 bits for the fractional\npart (i.e., a Q16.15 fixed-point representation). This choice leads\nto fast and lightweight multiplication and division units by\nsacrificing the ability to use an arbitrary-precision floating-point\nrepresentation. The compiler backend is fully parametric with respect\nto the length of the fixed-point representation as well as the precision\nof the fractional part and can generate hardware with arbitrary\nfixed-point representation sizes. 
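A software model of the Q16.15 arithmetic may clarify the representation: a multiplication of two Q16.15 values yields 30 fractional bits, which must be shifted back to 15, and a division pre-shifts the dividend so the quotient keeps 15 fractional bits. This is a behavioral sketch in Python integers, not the generated RTL, which operates on two's-complement words:

```python
FRAC_BITS = 15  # Q16.15: 1 sign bit, 16 integer bits, 15 fractional bits

def to_fixed(x):
    """Encode a real number as a Q16.15 integer (round to nearest)."""
    return int(round(x * (1 << FRAC_BITS)))

def to_float(q):
    """Decode a Q16.15 integer back to a float."""
    return q / (1 << FRAC_BITS)

def q_mul(a, b):
    # The full product carries 30 fractional bits; shift back down to 15.
    return (a * b) >> FRAC_BITS

def q_div(a, b):
    # Pre-shift the dividend so the quotient retains 15 fractional bits.
    return (a << FRAC_BITS) // b
```

With this encoding, e.g., `q_mul(to_fixed(1.5), to_fixed(2.0))` equals `to_fixed(3.0)` exactly, while values such as 0.1 incur a quantization error bounded by half of $2^{-15}$.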
This will allow future designs\nto tailor the precision of the compute modules to the requirements\nof the inference algorithms~\\cite{micikevicius2017mixed}.\n\nIn Step~\\ding{204}, we can train the uncalibrated dimensional\nfunction offline on values of the dimensionless groups $\\Pi_i$\ncomputed offline, as done in prior work~\\cite{10.1145\/3358218}.\n\n\nFinally, in Step~\\ding{205} the outputs of the hardware blocks\ncomputing the dimensionless products feed the models trained offline\nto generate inferences. Alternatively, a system could also use the\nvalues of the dimensionless products to feed in situ training of\nmodels implemented in a processor core, or to feed in situ training\nof a hardware neural network accelerator~\\cite{SymbioticEDA:Marlann}.\n\n\n\n\n\n\n\n\\section{Experimental Evaluation}\n\\label{sec:results}\n\\label{subsec:experimental-setup}\nWe evaluated the hardware generated by the dimensional circuit\nsynthesis backend using a Lattice Semiconductor iCE40 FPGA. The\niCE40 is a low-power miniature FPGA in a wafer-scale\n2.15\\,mm$\\times$2.50\\,mm WLCSP package and is targeted at sensor\ninterfacing tasks and at on-device machine learning. We used the\nfully open-source FPGA design flow, comprising the\nYoSys~\\cite{shah2019yosys+} tool (version 0.8+456) for\nsynthesis and NextPNR~\\cite{shah2019yosys+} (version git sha1\n5344bc3) for placing, routing, and timing analysis.\n\nWe performed our measurements on an iCE40 Mobile Development Kit (MDK),\nwhich includes a 1$\\Omega$ sense resistor in series with each of\nthe supply rails of the FPGA (core, PLL, I\/O banks). 
We measured the current\ndrawn by the FPGA core by measuring the voltage drop across the\nFPGA core supply rail (1.2\\,V) sense resistor using a Keithley DM7510, a\nlaboratory-grade 7\\sfrac{1}{2}-digit multimeter that can measure\nvoltages down to 10\\,nV, and thereby computed the power dissipated\nby the FPGA core for each configured RTL design.\nWe used a pseudorandom number generator to feed the $\\Pi$ computation\ncircuit modules under evaluation with random input data.\n\n\\begin{table*}\n \\centering\n \\caption{Experimental evaluation on iCE40 FPGA of dimensional circuit modules generated from the description of 7 physical systems.} \\label{table:results}\n\t\t%\n \\tiny\n \\begin{tabular}{lm{3.0cm}lllllll}\n \\toprule\n \\textbf{Name}\t\t\t& \\textbf{Description}\t& \\textbf{Target}\t\t&\\textbf{LUT4}\t\t&\\textbf{Gate}\t&\\textbf{Maximum}\t&\\textbf{Execution}\t&\\textbf{Avg. Power}\t\t&\\textbf{Avg. Power}\\\\\n &\t\t\t\t\t\t& \\textbf{Parameter}\t&\\textbf{Cells} \t&\\textbf{Count}\t&\\textbf{Frequency}\t&\\textbf{Latency}\t&\\textbf{at 12 MHz}\t\t&\\textbf{at 6 MHz}\\\\\n \\hline\n\\textbf{Beam}\t\t\t\t\t& Cantilevered beam model, excluding mass of beam\t\t\t\t\t\t& Beam deflection\t& 2958 & 2590 & 16.88\\,MHz & 115 cycles & 3.5\\,mW & 1.8\\,mW\\\\\n\\textbf{Pendulum, static} \t\t& Simple pendulum excluding dynamics and friction\t\t\t\t\t\t& Osc. period \t\t& 1402 & 1239 & 17.07\\,MHz & 115 cycles & 2.0\\,mW & 1.1\\,mW\\\\\n\\textbf{Fluid in Pipe} \t\t\t& Pressure drop of a fluid through a pipe\t\t\t\t\t\t\t\t& Fluid velocity \t& 4258 & 3752 & 15.65\\,MHz & 188 cycles & 5.8\\,mW & 3.0\\,mW\\\\\n\\textbf{Unpowered flight} \t\t& Unpowered flight (e.g., catapulted drone)\t\t\t\t\t\t\t& Position (height)\t& 1930 & 1865 & 16.44\\,MHz & 81 cycles & 2.3\\,mW & 1.2\\,mW\\\\\n\\textbf{Vibrating string}\t\t& Vibrating string\t\t\t\t\t\t\t\t\t\t\t\t\t\t& Osc. 
frequency \t& 2183 & 1787 & 16.67\\,MHz & 183 cycles & 2.5\\,mW & 1.3\\,mW\\\\\n\\textbf{Warm vibrating string}\t& Vibrating string\twith temperature dependence\t\t\t\t\t\t\t& Osc. frequency \t& 3137 & 2718 & 16.77\\,MHz & 269 cycles & 1.9\\,mW & 1.0\\,mW\\\\\n\\textbf{Spring-mass system}\t\t& Vertical spring with attached mass \t\t\t\t\t\t\t\t\t& Spring constant \t& 1419 & 1240 & 16.67\\,MHz & 115 cycles & 3.4\\,mW & 1.8\\,mW\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\\subsection{Results}\nWe evaluated dimensional circuit synthesis on seven different\nphysical systems described in the Newton specification language.\nTable~\\ref{table:results} provides a brief description of the inputs\nas well as a summary of the measurement results. The table also\nincludes the target parameter for each respective execution of the\nNewton compiler. For example, for the physical description of the\npendulum the target parameter was its oscillation period, while for\nthe physical description of a fluid in a pipe the target parameter\nwas the velocity of the fluid. The value of this parameter is\ninferred at run-time by the machine learning model that is fed with\nthe output of the $\\Pi$ computation. The results in\nTable~\\ref{table:results} show the FPGA resource utilization, as\nwell as the resource utilization when mapped to CMOS gates, of each\ngenerated $\\Pi$ computation module, including the fixed-point\narithmetic modules that implement the required arithmetic operations.\n\nThe execution latency column lists the cycles required for\ncompleting the calculation of each of the generated RTL modules.\nWe obtained the number of cycles by simulating the execution of the\nRTL modules for pseudorandom inputs generated by an LFSR. 
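The LFSR-driven stimulus mentioned above can be reproduced in a few lines of code. The sketch below (Python) implements a standard maximal-length 16-bit Galois LFSR; the specific taps and seed are a common textbook choice, not necessarily the ones used in our testbench:

```python
def lfsr16(seed: int = 0xACE1):
    """16-bit Galois LFSR with tap mask 0xB400 (taps 16, 14, 13, 11),
    a maximal-length configuration with period 2**16 - 1."""
    state = seed & 0xFFFF
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield state

# First few pseudorandom input words for the RTL simulation testbench.
gen = lfsr16()
print([hex(next(gen)) for _ in range(4)])
```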
In each\nRTL module, the calculation of the different $\\Pi$ products is parallelized\nbut the required operations per $\\Pi$ product are executed serially.\nAs a result, execution latency is set by the longest serial operation\nchain rather than by total design size: designs with larger resource\nusage in the table, such as the hardware for the unpowered flight model,\ncan conclude faster than smaller designs such as the static pendulum.\nAll modules require\nless than 300 cycles, so for both 6 and 12\\,MHz clocks,\nthe generated hardware can handle sample rates of over 10k\nsamples\/second, permitting real-time operation.\n\nThe last two columns of Table~\\ref{table:results} show the measured\npower dissipation of each design running in the iCE40 FPGA. In all\ncases, the power dissipation is less than 6\\,mW and as low as 1\\,mW,\ndemonstrating the suitability of our method for small form-factor,\nbattery-operated on-device inference at the edge.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMany different techniques have been proposed\nfor the detection of dark matter particles that could \nmake up the halo of our Galaxy. Among the different\npossibilities, the detection of a neutrino flux by\nmeans of neutrino telescopes certainly represents \nan interesting tool, which is already at the level of\nimposing some (mild) constraints on the particle physics\nproperties of the neutralino, the most interesting and\nstudied dark matter candidate \\cite{refer}. This\nparticle is present in all the supersymmetric extensions\nof the standard model as a linear combination\nof the superpartners of the neutral gauge and Higgs fields.\nIn the present paper we will perform our calculations\nin the minimal supersymmetric extension of the standard\nmodel (MSSM), for a definition of which we refer to Ref.\\cite{ICTP}\nand to the references quoted therein. 
\n\n\n\\section{Up--going muons from neutralino annihilation in the Earth}\nThe neutrino flux originates from neutralino pair annihilation inside \nthe Earth, where these dark matter particles \naccumulate after being captured by gravitational \ntrapping. The differential flux is \n\\begin{equation}\n\\Phi^0_{\\stackrel{(-)}{\\nu_\\mu}} (E_\\nu) \\equiv \n\\frac{dN_{\\stackrel{(-)}{\\nu_\\mu}}}{dE_\\nu} =\n\\frac{\\Gamma_A}{4\\pi d^2} \\sum_{F,f}\nB^{(F)}_{\\chi f}\\frac{dN_{f {\\stackrel{(-)}{\\nu_\\mu}}}}{dE_\\nu} \\, , \n\\label{eq:fluxnu}\n\\end{equation}\nwhere $\\Gamma_A$ denotes the annihilation rate,\n$d$ is the distance of the detector from the source (i.e. the\ncenter of the Earth), $F$ lists the \nneutralino pair annihilation final states, and\n$B^{(F)}_{\\chi f}$ denotes the branching ratio into\nheavy quarks, $\\tau$ leptons and gluons in the channel $F$.\nThe spectra $dN_{f {\\stackrel{(-)}{\\nu_\\mu}} }\/dE_{\\nu}$ \nare the differential distributions of the (anti)neutrinos generated \nby the $\\tau$ and by the hadronization of quarks\nand gluons, followed by the semileptonic decays of the\nproduced hadrons. For details, see for instance\nRefs. \\cite{ICTP,noi_nuflux,altri_nuflux}. Here we only\nrecall that the annihilation rate depends, through\nits relation with the capture rate of neutralinos in the Earth,\non some astrophysical parameters, the most relevant of which\nis the local density $\\rho_l$.\n\nThe neutrino flux is produced in the inner part of the Earth \\cite{ICTP}\nand propagates toward a detector,\nwhere it can be detected as a flux of up--going muons\nas a consequence of neutrino--muon conversion inside\nthe rock that surrounds the detector. 
A double differential\nmuon flux can be defined as\n\\begin{eqnarray}\n& & \\frac{d^2 N_\\mu}{d E_\\mu d E_\\nu} = \\\\\n& & N_A \\int_0^\\infty dX \\int_{E_\\mu}^{E_\\nu} d E'_\\mu \ng(E_\\mu, E'_\\mu; X) \\; S(E_\\nu, E'_\\mu) \\nonumber \\, ,\n\\end{eqnarray}\nwhere $N_A$ is Avogadro's number, \n$g(E_\\mu,E'_\\mu; X)$ is the survival\nprobability that a muon of initial energy $E'_\\mu$\nwill have a final energy $E_\\mu$ after propagating\nalong a distance $X$ inside the rock, and\n\\begin{equation}\nS(E_\\nu, E'_\\mu) = \\sum_i\n \\Phi_i(E_\\nu) \\frac{d \\sigma_i(E_\\nu,E'_\\mu)}{d E'_\\mu}\n\\end{equation}\nwhere $i = \\nu_\\mu, \\bar\\nu_\\mu$\nand $d \\sigma_{{\\stackrel{(-)}{\\nu_\\mu}}} (E_\\nu,E'_\\mu) \/ d E'_\\mu$ is\nthe charged current cross--section for the\nproduction of a muon of energy $ E'_\\mu$ from \na neutrino (antineutrino) of energy $E_\\nu$.\n\n\\FIGURE[t]{\n\\epsfig{figure=yieldx.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{Muon response function $d N_\\mu \/ d\\log x$ vs.\nthe parent neutrino fractional energy $x = E_\\nu \/ m_\\chi$ for\nneutralino annihilation in the Earth. Different curves refer to\ndifferent neutralino masses: $m_\\chi = 50$ GeV (solid), \n$m_\\chi = 80$ GeV (dotted), $m_\\chi = 120$ GeV (short--dashed), \n$m_\\chi = 200$ GeV (long--dashed), $m_\\chi = 500$ GeV (dot--dashed).}\n}\n\n\n\nA useful quantity for the discussion in the following Sections is\nthe muon response function\n\\begin{equation}\n\\frac{d N_\\mu}{d E_\\nu} = \\int_{E^{\\mathrm{th}}}^{E_\\nu}\nd E_\\mu \\; \\frac{d^2 N_\\mu}{d E_\\mu d E_\\nu}\n\\end{equation}\nwhere $E^{\\mathrm{th}}$ is the minimal energy for detection\nof up--going muons. For SuperKamiokande and MACRO, \n$E^{\\mathrm{th}} \\simeq 1.5$ GeV \\cite{oscill_exp}. \nThe muon response\nfunction indicates the neutrino energy range\nthat is mostly responsible for the up--going muon signal.\nFig. 
1 shows a few examples of it, plotted as \nfunctions of the variable $x = E_\\nu\/m_\\chi$, where\n$m_\\chi$ denotes the neutralino mass. Fig. 1 shows that\nthe maximum of the muon response occurs for neutrino\nenergies of about $E_\\nu \\simeq (0.4 - 0.6) \\; m_\\chi$, with\na half width that extends from $E_\\nu \\simeq 0.1\\; m_\\chi$\nto $E_\\nu \\simeq 0.8 \\; m_\\chi$. \n\nFinally, the total flux of up--going muons is defined as\n\\begin{equation}\n\\Phi_\\mu = \\int_{E^{\\mathrm th}}^{m_\\chi}\nd E_\\nu \\; \\frac{d N_\\mu}{d E_\\nu}\n\\end{equation}\n\n\n\\FIGURE[t]{\n\\epsfig{figure=flux_earth.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{Flux of up--going muons $\\Phi_\\mu^{\\mathrm{Earth}}$ from\nneutralino annihilation in the Earth, plotted as a function of\n$m_\\chi$. The solid line denotes the present upper limit \\cite{MACRO}.\nDifferent neutralino compositions are shown with different symbols:\ncrosses for gauginos, open circles for higgsinos and dots for mixed\nneutralinos.}\n}\n\n\nThe natural background for this kind of\nsearch is represented by the flux of up--going\nmuons originating from the atmospheric neutrino flux.\nExperimentally, one searches, inside a small angular \ncone around the center of the Earth,\nfor a statistically significant up--going\nmuon excess over the muons of atmospheric $\\nu_\\mu$ origin.\nNo excess has been found so far, and therefore an upper\nlimit on $\\Phi_\\mu$ can be derived. Fig. 2 shows the\npresent most stringent upper limit obtained by the \nMACRO Collaboration \\cite{MACRO}. 
In the same figure \nthe theoretical calculations of $\\Phi_\\mu$ for a scan\nof the supersymmetric parameter space are also displayed.\nThe plot refers to $\\rho_l = 0.3$ GeV cm$^{-3}$\nand is obtained by a variation of the MSSM parameters \nin the following ranges:\n$20\\;\\mbox{GeV} \\leq M_2 \\leq 500\\;\\mbox{GeV}$, \n$20\\;\\mbox{GeV} \\leq |\\mu| \\leq 500\\;\\mbox{GeV}$,\n$80\\;\\mbox{GeV} \\leq m_A \\leq 1000\\;\\mbox{GeV}$,\n$100\\;\\mbox{GeV} \\leq m_0 \\leq 1000\\;\\mbox{GeV}$,\n$-3 \\leq {\\rm A} \\leq +3,\\; 1 \\leq \\tan \\beta \\leq 50$.\nFor further details\nof the calculation, we refer to Ref. \\cite{ICTP}. The comparison\nof the scatter plot with the experimental upper limit would\nimply that a fraction of the supersymmetric configurations\ncould be excluded. However, a variation of the value\nof $\\rho_l$ inside its range of uncertainty can lower the\ntheoretical prediction by about a factor of 3 \\cite{ICTP}. As a consequence,\nwe can conservatively consider that only a small fraction of the susy \nconfigurations can be potentially in conflict with the experimental\nupper limit, when no oscillation effect on the neutrino signal\nis assumed.\n\n\n\n\n\\section{Neutrino oscillation effect on the up--going muon signal}\n\nThe recent data on the atmospheric neutrino deficit indicate that\nthe $\\nu_\\mu$ may oscillate, either into $\\nu_\\tau$ or into\na sterile neutrino $\\nu_s$ \\cite{oscill_exp,oscill_the}. If this\nis the case, the $\\nu_\\mu$ produced by neutralino annihilations\nwould also undergo oscillation. The energies involved in both\natmospheric and neutralino--produced neutrinos are the same. \nThe oscillation baseline of the two neutrino components is\ndifferent, however, since atmospheric neutrinos cross the entire Earth,\nwhile neutrinos produced by neutralino annihilation travel\nfrom the central part of the Earth to the detector\n(we recall once more that neutralinos annihilate in the core of the\nEarth). 
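The two-flavour vacuum formalism used in the next subsection can be checked numerically. The short Python sketch below evaluates the standard survival probability $P = 1 - \sin^2(2\theta_v)\,\sin^2(1.27\,\Delta m^2 L/E_\nu)$ over an Earth-radius baseline, with the representative parameter values used later in the text:

```python
import math

R_EARTH_KM = 6371.0  # baseline: center of the Earth to the detector

def p_survival(e_nu_gev, delta_m2_ev2=5e-3, sin2_2theta=1.0, l_km=R_EARTH_KM):
    """Two-flavor vacuum survival probability:
    P = 1 - sin^2(2 theta_v) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    phase = 1.27 * delta_m2_ev2 * l_km / e_nu_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# The first minimum (from the right) sits where the phase equals pi/2:
e_min_gev = 2.0 * 1.27 * 5e-3 * R_EARTH_KM / math.pi
print(e_min_gev)              # ~25.8 GeV for dm^2 = 5e-3 eV^2
print(p_survival(e_min_gev))  # ~0: maximal suppression at maximal mixing
```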
On the basis of the features of the $\\nu_\\mu$ oscillation\nwhich are required to fit the experimental data on atmospheric\nneutrinos \\cite{oscill_exp,oscill_the}, we expect the\nneutrino flux from dark matter annihilation to be\naffected as well. In the next Sections we will explicitly discuss\nthe $\\nu_\\mu \\rightarrow \\nu_\\tau$ and the\n$\\nu_\\mu \\rightarrow \\nu_s$ cases, in a two neutrino mixing \nscenario \\cite{Ellis}.\n\n\n\\subsection{$\\nu_\\mu \\rightarrow \\nu_\\tau$ vacuum oscillation}\n\\FIGURE[t]{\n\\epsfig{figure=psurv_vacuum.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{$\\nu_\\mu$ survival probability in the case of\n$\\nu_\\mu \\rightarrow \\nu_\\tau$ oscillation. The solid line refers to\n$\\sin^2 (2\\theta_v) = 1$, the dashed line is for $\\sin^2 (2\\theta_v) = 0.8$.\nIn both cases, $\\Delta m^2 = 5\\cdot 10^{-3}$ eV$^2$.}\n}\n\n\\FIGURE[t]{\n\\epsfig{figure=flux_vacuum.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{Scatter plot of the ratio \n$(\\Phi_\\mu)^{\\rm VAC}_{\\rm oscill}\/\\Phi_\\mu$ vs. the neutralino\nmass $m_\\chi$. $(\\Phi_\\mu)^{\\rm VAC}_{\\rm oscill}$ is the up--going\nmuon flux in the case of $\\nu_\\mu \\rightarrow \\nu_\\tau$ oscillation,\nwhile $\\Phi_\\mu$ is the corresponding flux in the case of no oscillation.}\n}\n\nIn the case of $\\nu_\\mu \\rightarrow \\nu_\\tau$ oscillation,\nthe $\\nu_\\mu$ flux is reduced, but \nwe must also take into account that neutralino annihilation \ncan produce $\\nu_\\tau$, which in turn can oscillate into\n $\\nu_\\mu$ and contribute to the up--going muon flux.\nThe $\\nu_\\tau$ flux can be calculated as discussed in Sect. 
2\nfor the $\\nu_\\mu$ flux, and it turns out to be always\na relatively small fraction of the $\\nu_\\mu$ flux.\nThe muon neutrino flux at the conversion region can therefore\nbe expressed as\n\\begin{eqnarray}\n\\Phi_{{\\stackrel{(-)}{\\nu_\\mu}}} (E_\\nu) &=& \n\\Phi^0_{{\\stackrel{(-)}{\\nu_\\mu}}}\\; \nP^{\\mathrm{vac}} ({{\\stackrel{(-)}{\\nu_\\mu}}} \\rightarrow\n{{\\stackrel{(-)}{\\nu_\\mu}}}) \\nonumber \\\\\n& + & \n\\Phi^0_{{\\stackrel{(-)}{\\nu_\\tau}}}\\; \nP^{\\mathrm{vac}} ({{\\stackrel{(-)}{\\nu_\\tau}}} \\rightarrow\n{{\\stackrel{(-)}{\\nu_\\mu}}})\n\\end{eqnarray}\nwhere the vacuum survival probability is\n\\begin{eqnarray}\n& & P^{\\mathrm{vac}} ({{\\stackrel{(-)}{\\nu_\\mu}}} \\rightarrow\n{{\\stackrel{(-)}{\\nu_\\mu}}})\n\\;\\; = \\;\\; \\\\ \n& & 1 - \\sin^2(2\\theta_v)\\sin^2\n\\left (\n\\frac{1.27 \\Delta m^2 (\\mathrm{eV}^2) R(\\mathrm{km})}\n{E_\\nu (\\mathrm{GeV})}\n\\right ) \\nonumber\n\\label{vac}\n\\end{eqnarray}\nwhere $\\Delta m^2$ is the squared mass difference of the\ntwo neutrino mass eigenstates, $\\theta_v$ is the mixing angle\nin vacuum and $R$ is the Earth's radius. \nFig. 3 shows the survival probability for two different values of\nthe neutrino oscillation parameters. Smaller (larger) values of \n$\\Delta m^2$ have the effect of shifting the curves to the left (right).\nComparing Fig. 1 with Fig. 
3, we notice that the reduction of the\nup--going muon flux is stronger when there is matching between the\nenergy \n$E_\\nu^1 (\\mathrm{GeV}) \\simeq 5.2 \\cdot 10^{3} \\Delta m^2 (\\mathrm{eV}^2)$ \nof the first (from the right) minimum of the \nsurvival probability and the energy \n$E_\\nu \\simeq 0.5 m_\\chi$ \nwhich is responsible for most of the muon response in the detector.\nThis implies that a maximum reduction of the signal could\noccur for neutralino masses of the order of \n$m_\\chi (\\mathrm{GeV}) \\simeq 10^4 \\Delta m^2 (\\mathrm{eV}^2)$.\nThe $\\nu_\\tau \\rightarrow \\nu_\\mu$ oscillation makes the\nreduction of the muon flux less severe, but it is not able\nto completely balance the reduction effect because the\noriginal $\\nu_\\tau$ flux at the source is sizeably smaller than\nthe $\\nu_\\mu$ flux. Therefore, the overall effect of\nthe neutrino oscillation is to reduce the up--going muon\nsignal. This effect is summarized in Fig. 4, where the\nratio between the up--going muon signals in the presence\nand in the absence of oscillation is plotted as a function\nof the neutralino mass. The susy parameter space has been\nvaried in the same ranges quoted for Fig. 2. We notice\nthat the strongest effect is present for light neutralinos,\nsince in this case the muon flux is mostly produced from\nneutrinos whose energy is in the range of maximal suppression\nfor the oscillation phenomenon. The suppression factor is between\n0.5 and 0.8 for $m_\\chi \\lsim 100$ GeV.\nIn contrast, the fluxes for larger masses are less \naffected, and the reduction is less than about \n20\\% for $m_\\chi \\gsim 200$ GeV.\n\n\n\\subsection{$\\nu_\\mu \\rightarrow \\nu_s$ matter oscillation}\n\\FIGURE[t]{\n\\epsfig{figure=psurv_matter.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{$\\nu_\\mu$ survival probability in the case of\n$\\nu_\\mu \\rightarrow \\nu_s$ oscillation, for $\\sin^2 (2\\theta_v) = 0.8$\nand $\\Delta m^2 = 5\\cdot 10^{-3}$ eV$^2$. 
The solid line refers to\nneutrinos, the dashed line is for antineutrinos.}\n}\n\n\n\\FIGURE[t]{\n\\epsfig{figure=flux_matter.ps,width=6.5cm,bbllx=50bp,bblly=200bp,bburx=520bp,bbury=650bp,clip=}\n\\caption{Scatter plot of the ratio \n$(\\Phi_\\mu)^{\\rm MAT}_{\\rm oscill}\/\\Phi_\\mu$ vs. the neutralino\nmass $m_\\chi$. $(\\Phi_\\mu)^{\\rm MAT}_{\\rm oscill}$ is the up--going\nmuon flux in the case of $\\nu_\\mu \\rightarrow \\nu_s$ oscillation,\nwhile $\\Phi_\\mu$ is the corresponding flux in the case of no oscillation.}\n}\n\nIn the case of $\\nu_\\mu \\rightarrow \\nu_s$ oscillation, \nthe neutrino flux is simply\n\\begin{equation}\n\\Phi_{{\\stackrel{(-)}{\\nu_\\mu}}} (E_\\nu) = \n\\Phi^0_{{\\stackrel{(-)}{\\nu_\\mu}}}\\; \nP^{\\mathrm{mat}} ({{\\stackrel{(-)}{\\nu_\\mu}}} \\rightarrow\n{{\\stackrel{(-)}{\\nu_\\mu}}})\n\\end{equation}\nand no $\\nu_\\mu$ regeneration is possible from the\nsterile neutrino. In this case, the effective potentials\nof $\\nu_\\mu$ and $\\nu_s$ inside the Earth are different\nand we must solve the evolution equation for propagation\nin the core and in the mantle. Neutrinos\n(produced in the center of the Earth) cross half\nof the core once and then the mantle once. 
By approximating both the\ncore and the mantle as having constant density, we can express the\nsurvival probability as \\cite{akhmedov, Kim}\n\\begin{eqnarray}\n& & P^{\\mathrm{mat}} ({{\\stackrel{(-)}{\\nu_\\mu}}} \\rightarrow\n{{\\stackrel{(-)}{\\nu_\\mu}}}) \\,\\,=\\,\\, \\\\\n& & \\left |\n\\left [\nU(\\theta_c) D(\\phi_c) U^\\dagger (\\theta_c - \\theta_m)\n D(\\phi_m) U^\\dagger (\\theta_m)\n\\right ]_{\\mu\\mu}\n\\right |^2 \\nonumber\n\\label{mat}\n\\end{eqnarray}\nwhere $U$ is the $2 \\times 2$\nneutrino mixing matrix and $\\theta_a$ \n($a=c,m$ for core and mantle, respectively) are \nthe effective mixing angles in matter, related\nto the vacuum mixing angle $\\theta_v$ by\n\\begin{equation}\n\\sin^2 (2\\theta_a) = \\frac{\\sin^2 (2\\theta_v) \\xi_a^2}\n{\\left [\n {(\\xi_a \\cos(2\\theta_v) + 1)^{2} +\n\\xi_a^2 \\sin^2 (2\\theta_v)}\n\\right ]}\n\\end{equation}\nwith $\\xi_a = \\Delta m^2 \/ (2E_\\nu V_a)$;\n$V_a = \\pm G_F N_n^a \/ \\sqrt{2}$ is the matter \npotential in a medium of number density $N_n^a$\nfor neutrinos ($+$) and antineutrinos ($-$).\nIn Eq.(3.4), $D$ is the evolution\nmatrix $D_{ij}(\\phi_a) = \\delta_{ij} d_j^a$,\nwhere $d_1^a = 1$, $d_2^a = \\exp(i \\phi_a)$ and\n\\begin{equation}\n\\phi_a = V_a R_a \n\\left [\n(\\xi_a \\cos(2\\theta_v) + 1)^2 +\n\\xi_a^2 \\sin^2 (2\\theta_v)\n\\right ]^{1\/2}\n\\end{equation}\n\nIn Fig. 5 an example of the $\\nu_\\mu$ and $\\bar \\nu_\\mu$ \nsurvival probability is given for representative\nvalues in the range allowed by the fits on the\natmospheric neutrino data \\cite{oscill_exp,oscill_the}:\n$\\Delta m^2 = 5\\cdot 10^{-3}$ eV$^2$\nand $\\sin^2 (2\\theta_v) = 0.8$. {}From Fig. 5\nand the previous discussion relative to Fig. 
3, we \nexpect that in the case of $\\nu_\\mu \\rightarrow \\nu_s$\nthe reduction of the muon signal is significantly\nless severe than in the case of $\\nu_\\mu \\rightarrow \\nu_\\tau$.\nIn fact, in this case the minima of the survival probability\noccur for lower neutrino energies, and therefore the\noscillation can affect only muon fluxes originating from very\nlight neutralinos. This is manifest in Fig. 6, where\nthe ratio of the up--going muon fluxes in the presence and\nabsence of oscillation is shown. In this case, the reduction \nof the signal is always less than 30\\%. This maximal reduction\noccurs for neutralino masses lower than about 80 GeV. For\nlarger masses, the up--going muon flux is almost unaffected.\n\n\n\\section{Conclusions}\nWe have discussed the effect on the up--going muon\nsignal from neutralino annihilation in the Earth, in the\ncase that the $\\nu_\\mu$ flux produced by neutralinos oscillates\nas indicated by the data on the atmospheric neutrino deficit.\nWhile the experimental upper limit is, at present, practically\nnot affected by the possibility of neutrino \noscillation \\cite{MACRO}, the\ntheoretical predictions are reduced in the presence of\noscillation. 
With the oscillation parameters deduced\n{}from the fits on the atmospheric neutrino data,\nthe effect is always larger for lighter neutralinos.\nIn the case of $\\nu_\\mu \\rightarrow \\nu_\\tau$\nthe suppression factor is between 0.5 and 0.8 for \n$m_\\chi \\lsim 100$ GeV and the reduction is less than about\n20\\% for $m_\\chi \\gsim 200$ GeV.\nIn the case of $\\nu_\\mu \\rightarrow \\nu_s$,\nthe reduction of the signal is up to 30\\% for neutralino \nmasses lower than about 80 GeV and smaller than 10\\% for\nheavier neutralinos.\n\n\n\\acknowledgments\nI wish to thank Sandro Bottino for very stimulating and\ninteresting discussions about the topic of this paper.\nThis work was supported by DGICYT under grant number \nPB95--1077 and by the TMR network grant ERBFMRXCT960090 of \nthe European Union.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nOur understanding of star formation has grown primarily from the study of local regions forming low-mass stars in relative isolation. Nearby regions such as Taurus \\citep[e.g.][]{Goldsmith_Taurus}, $\\rho$ Ophiuchus \\citep[e.g.][]{Young_oph, Johnstone_Oph}, Perseus \\citep[e.g.][]{Enoch2008, Jorgensen_perseus,Kirk_perseus}, Serpens \\citep[e.g.][]{ts98, Harvey_serpensIRAC, Enoch2008}, the Pipe nebula \\citep[e.g.][]{Lombardi_pipe, Muench_pipe}, and Orion \\citep[e.g.][]{Li_orion, Johnstone_OrionB} have been studied at great length using a variety of techniques including dust emission, extinction mapping, and molecular line emission. This past decade of research has shown that star formation regions are assembled hierarchically. 
To describe this hierarchy within giant molecular clouds (tens of parsecs in size, containing 10$^4$-10$^5~M_{\\sun}$), we adopt the nomenclature used by \\citet{BerginTafalla_ARAA2007}, distinguishing ``clouds'' (10$^3-10^4~M_{\\sun}$, 10$^0-10^1$~pc), ``clumps'' (10-10$^3~M_{\\sun}$, 10$^{-1}$-10$^0$~pc), and ``cores'' (10$^{-1}$-10$^1$~$M_{\\sun}$, 10$^{-2}$-10$^{-1}$~pc).\n\nIn studies of nearby regions, it is possible to resolve pre-stellar cores with single dish observations \\citep[e.g.][]{Johnstone_Oph}. This permits the examination of the properties of the fragmentation of the natal molecular clouds into smaller components. A mass function of the cores can then be constructed. The core mass distributions typically derived from dust emission studies are found to be strikingly similar to the mass spectra of stars, implying that the masses of stars are a direct result of the way in which the natal molecular cloud fragments. In contrast, when CO line emission is used as a mass probe for cores \\citep{Kramer_CMF}, a more top-heavy distribution results. \n\nWhile these studies have brought us a deep understanding of isolated low-mass star formation, this is not the complete picture for star formation in the Galaxy. The necessary ingredient for star formation is dense molecular gas. However, the H$_2$ distribution in the Milky Way is not uniform. The primary reservoir is in the Molecular Ring, which resides at 4~kpc from the Galactic Center and contains $\\sim$70\\% of the molecular gas inside the solar circle \\citep{Jackson_GRS}. Thus, the Molecular Ring is the heart of Galactic star formation. Indeed, as \\citet{Robinson1984} show, the peak of Galactic far-infrared emission originates from this region.\n\nIn local clouds, most of the recent progress in our understanding of the early stages of star formation has come from studies of pre-stellar objects. 
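The construction of a core mass function, as discussed above, amounts to fitting a power law $dN/dM \propto M^{-\alpha}$ to the observed core masses. The Python sketch below illustrates this with a standard maximum-likelihood slope estimator; the synthetic masses and the Salpeter reference slope $\alpha = 2.35$ are purely illustrative:

```python
import math, random

def powerlaw_slope(masses, m_min):
    """Maximum-likelihood slope for dN/dM ~ M^-alpha above m_min
    (Pareto MLE: alpha = 1 + n / sum(ln(m_i / m_min)))."""
    sample = [m for m in masses if m >= m_min]
    return 1.0 + len(sample) / sum(math.log(m / m_min) for m in sample)

# Synthetic core masses drawn from a Salpeter-like power law (alpha = 2.35)
# by inverse-transform sampling; purely illustrative data.
random.seed(42)
alpha_true, m_min = 2.35, 1.0
masses = [m_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]
print(round(powerlaw_slope(masses, m_min), 2))  # recovers a value near 2.35
```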
Populations of starless cores have been identified in numerous local regions, and they are universally cold and quiescent, exhibiting thermal line widths \\citep{BerginTafalla_ARAA2007}. Recently, a population of cold, dense molecular clouds within the Molecular Ring was detected against the bright Galactic mid-infrared background \\citep[from 7 to 25~$\\mu$m;][]{egan_msx, carey_msx}. These clouds are opaque to mid-infrared radiation and show little or none of the typical signs of star formation, such as association with IRAS point sources. Initial studies demonstrated that these objects, termed infrared-dark clouds (IRDCs), are dense (n(H$_2$) $> 10^5$ cm$^{-3}$), cold (T $<$ 20~K) concentrations of 10$^3$ - 10$^5~M_{\\sun}$ of molecular gas. Based upon the available mass for star formation, infrared-dark clouds are likely the sites of massive star formation.\n\nSince their discovery, further studies of infrared-dark clouds have established their place as the precursors to clusters. A number of studies have detected the presence of deeply embedded massive protostars using sub-millimeter probes \\citep{Beuther_protostars_IRDC, Rathborne_2007_protostars, Pillai_G11}, which confirms that IRDCs are the birth-sites of massive stars. Detailed molecular surveys show that molecules such as NH$_3$ and N$_2$H$^+$ trace the dense gas extremely well \\citep{ragan_msxsurv, Pillai_ammonia}, as seen in local dense prestellar cores \\citep{Bergin2002}. Furthermore, the molecular emission corresponding to the absorbing structure of infrared-dark clouds universally exhibits non-thermal linewidths on par with massive star formation regions. Other studies have uncovered the presence of masers \\citep{Beuther2002_masers, Wang_ammonia} and outflows \\citep{Beuther_IRDCoutflow}, known indicators of ongoing embedded star formation. 
In order to understand massive star formation, and thus Galactic star formation, it is crucial to understand the structure and evolution of IRDCs.\n\nStudies of infrared-dark clouds to date have left the fundamental properties of cloud fragmentation relatively unexplored. \\citet{rathborne2006} showed that IRDCs exhibit structure with a median size of $\\sim$0.5~pc, but observations of IRDCs with the Spitzer Space Telescope, which we describe in $\\S$\\ref{obs}, reveal that there exists structure well below this level. We characterize the environment in $\\S$\\ref{env} and the highly structured nature of infrared-dark clouds in $\\S$\\ref{clumps} by utilizing the high-resolution imaging capabilities of Spitzer. In $\\S$\\ref{mf}, we analyze the IRDC absorbing structure, derive the clump mass function, and put the results in the context of previous studies. We find the mass function to be shallower than the Salpeter initial mass function (IMF) \\citep{Salpeter_imf} and more closely aligned with that observed using CO in massive star-forming regions. Given the strong evidence for fragmentation and the star formation characteristics of these objects, we suggest they are in the initial stages of fragmentation. The conclusions as well as the broad impact of these results are discussed in $\\S$\\ref{conclusion}. The results of this study provide an important foundation for further studies of IRDCs with the instruments of the future, allowing us to probe the dominant mode of star formation in the Galaxy, which may be fundamentally different from the processes that govern local star formation.\n\n\n\\section{Observations \\& Data Reduction}\n\\label{obs}\n\n\\subsection{Targets}\n\nSearching in the vicinity of ultra-compact HII (UCHII) regions \\citep{wc89} for infrared-dark cloud candidates, \\citet{ragan_msxsurv} performed a survey of 114 candidates in N$_2$H$^+$(1-0), CS(2-1), and C$^{18}$O(1-0) with the FCRAO. 
In order to study substructure with Spitzer, we have selected a sample of targets from the \\citet{ragan_msxsurv} sample that are compact, typically $2\\arcmin \\times 2\\arcmin$ (or $2 \\times 2$~pc at 4~kpc), and opaque, providing the starkest contrast at 8~$\\mu$m (MSX Band A) with which to examine the absorbing structure. The selected objects also exhibit significant emission in transitions of CS and N$_2$H$^+$ that are known to trace high-density gas, based on their high critical densities. By selecting objects with strong emission in these lines, we ensure that their densities are $>$10$^4$ cm$^{-3}$ and their temperatures are less than 20~K. Under these conditions in local clouds, N$_2$H$^+$ is strongest when CO is depleted in the pre-stellar phase \\citep{bl97}, hence a high N$_2$H$^+$\/CO ratio guided our attempt to select the truly ``starless'' dark clouds in the IRDC sample. Our selection criteria are aimed at isolating the earliest stages of star formation, as in local clouds, and give us the best hope of detecting massive starless objects. The eleven IRDCs observed are listed in Table~1 with the distances derived in \\citet{ragan_msxsurv} using a Milky Way rotation curve model \\citep{Fich:1989} assuming the ``near'' kinematic distance. The listed uncertainties in Table~1 arise from the $\\pm$14\\% maximal deviation inherent in the rotation curve model. \n\n\\vspace{0.5in}\n\n\\subsection{Spitzer Observations \\& Data Processing}\n\nObservations of this sample of objects were made on 2005 May 7 -- 9 and September 15 -- 18 with IRAC centered on the coordinates listed in Table~1. Each region was observed 10 times with slightly offset single pointings in the 12s high dynamic range mode. All four IRAC bands were observed over a common 7$' \\times$ 7$'$ field of view. MIPS observations of the objects in this sample were obtained on 2005 April 7 -- 10. Using the ``large\" field size, each region was observed in 3 cycles for 3s at 24~$\\mu$m. 
MIPS observations cover smaller 5.5$' \\times$ 5.5$'$ fields of view, but these are large enough to contain the entire IRDC. Figures~\\ref{fig:g0585}$-$\\ref{fig:g3744} show each IRDC field in all observed wavebands. The absorbing structures of the IRDCs are most prominent at 8~$\\mu$m and 24~$\\mu$m.\n\nWe used IRAC images processed by the Spitzer Science Center (SSC) using pipeline version S14.0.0 to create basic calibrated data (BCD) images. These calibrated data were corrected for bright source artifacts (``banding'', ``pulldown'', and ``muxbleed''), cleaned of cosmic ray hits, and made into mosaics using Gutermuth's WCS-based IRAC post-processing and mosaicking package \\citep[see][for further details]{Gutermuth_ngc1333}. \n\nSource finding and aperture photometry were performed using Gutermuth's PhotVis version 1.10 \\citep{Gutermuth_ngc1333}. We used a 2.4$\\arcsec$ aperture radius and a sky annulus from 2.4$\\arcsec$ to 6$\\arcsec$ for the IRAC photometry. The photometric zero points for the [3.6], [4.5], [5.8], and [8.0] bands were 22.750, 21.995, 19.793, and 20.187 magnitudes, respectively. For the MIPS 24~$\\mu$m photometry, we used a 7.6$\\arcsec$ aperture with 7.6$\\arcsec$ to 17.8$\\arcsec$ sky annulus radii and a photometric zero point of 15.646 magnitudes. All photometric zero points are calibrated for image units of DN and are corrected for the adopted apertures.\n\nTo supplement the Spitzer photometry, we incorporate the source photometry from the Two-Micron All Sky Survey (2MASS) Point Source Catalog (PSC). Source lists are matched into a final catalog by first matching the four IRAC band catalogs using Gutermuth's WCSphotmatch utility, enforcing a 1$\\arcsec$ maximal tolerance for positive matches. Then, the 2MASS sources are matched with tolerance 1$\\arcsec$ to the mean positions from the first catalog using the same WCS-based utility. 
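The conversion from background-subtracted counts to magnitudes with the zero points quoted above follows the standard relation $m = \mathrm{ZP} - 2.5\log_{10}(\mathrm{DN})$; a minimal sketch (Python; the count value is illustrative):

```python
import math

# Photometric zero points (magnitudes, for image units of DN) quoted above.
ZERO_POINTS = {"3.6": 22.750, "4.5": 21.995, "5.8": 19.793,
               "8.0": 20.187, "24": 15.646}

def magnitude(band: str, counts_dn: float) -> float:
    """Magnitude from aperture-summed, background-subtracted counts (DN)."""
    return ZERO_POINTS[band] - 2.5 * math.log10(counts_dn)

# A source yielding 1000 DN in the [3.6] aperture (illustrative value):
print(round(magnitude("3.6", 1000.0), 3))  # 15.25
```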
Finally, the MIPS 24~$\\mu$m catalog is integrated with matching tolerance 1.5$\\arcsec$.\n\n\\section{Stellar Content \\& IRDC Environment}\n\\label{env}\n\nThe tremendous sensitivity of Spitzer has given us the first ability to characterize young stellar populations in detail. Before the Spitzer era, IRAS led the effort in identifying the brightest infrared point sources in the Galaxy. Only one object in this sample, G034.74$-$0.12 (Figure~\\ref{fig:g3474}), has an IRAS point source (18526+0130) in the vicinity. Here, with Spitzer, we have identified tens of young stellar objects (YSOs) in the field of each IRDC.\n\n\\vspace{-0.05in}\n\n\\subsection{Young Stellar Object Identification \\& Classification}\n\nWith this broad spectral coverage from 2MASS to IRAC to MIPS, we apply the robust criteria described in \\citet{Gutermuth_ngc1333} to identify and classify YSOs. Table~2 lists the J, H, K$_s$, 3.6, 4.5, 5.8, 8.0 and 24~$\\mu$m photometry for all stars that met the YSO criteria, and we note the classification as Class I (CI), Class II (CII), embedded protostars (EP), or transition disk objects (TD). A color-color diagram displaying these various classes of YSOs \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g0585}\n\\end{center}\n\\caption{G005.85$-$0.23: {\\it Top Row Right: 3.6$\\micron$. Middle Row Left: 4.5$\\micron$. Middle Row Right: 5.8$\\micron$. Bottom Row Left: 8$\\micron$. 
Bottom Row Right: 24$\\micron$.}}\n\\label{fig:g0585}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g0626}\n\\end{center}\n\\caption{G006.26$-$0.51: Wavelengths as noted in Figure~1.}\n\\label{fig:g0626}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g0916}\n\\end{center}\n\\caption{G009.16$+$0.06: Wavelengths as noted in Figure~1.}\n\\label{fig:g0916}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g0928}\n\\end{center}\n\\caption{G009.28$-$0.15: Wavelengths as noted in Figure~1.}\n\\label{fig:g0928}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g0986}\n\\end{center}\n\\caption{G009.86$-$0.04: Wavelengths as noted in Figure~1. - Embedded Objects (indices 6 and 7 in Table~2 under source G009.86$-$0.04) are labeled. Source G009.86$-$0.04 index 6 is only detectable at 24~$\\mu$m and lies right at the heart of the dust absorption.}\n\\label{fig:g0986}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g1250}\n\\end{center}\n\\caption{G012.50$-$0.22: Wavelengths as noted in Figure~1. - Embedded Object (index 5 in Table~2 under source G012.50$-$0.22) is labeled.}\n\\label{fig:g1250}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g2337}\n\\end{center}\n\\caption{G023.37$-$0.29: Wavelengths as noted in Figure~1. 
- Embedded Objects (indices 9 and 10 in Table~2 under source G023.37$-$0.29) are labeled.}\n\\label{fig:g2337}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g2348}\n\\end{center}\n\\caption{G023.48$-$0.53: Wavelengths as noted in Figure~1.}\n\\label{fig:g2348}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g2405}\n\\end{center}\n\\caption{G024.05$-$0.22: Wavelengths as noted in Figure~1. - Embedded Object (index 1 in Table~2 under source G024.05$-$0.22) is labeled.}\n\\label{fig:g2405}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g3474}\n\\end{center}\n\\caption{G034.74$-$0.12: Wavelengths as noted in Figure~1. - Embedded Object (index 5 in Table~2 under source G034.74$-$0.12) is labeled.}\n\\label{fig:g3474}\n\\end{figure}\n\n\\clearpage\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{g3744}\n\\end{center}\n\\caption{G037.44$+$0.14: Wavelengths as noted in Figure~1.}\n\\label{fig:g3744}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}\n\\hspace{-0.85in}\n\\includegraphics[scale=0.8]{f12}\n\\caption{\\footnotesize{IRAC four-band color-color plot for all objects in the IRDC sample with photometry in all four bands that had errors less than 0.2 magnitudes. Class I protostars are marked with red squares, green circles mark the more-evolved Class II sources, and transition\/debris disk objects are marked with purple circles. The deeply embedded objects identified with this analysis did not have sufficient detections in IRAC bands to appear on the color-color plots. 
The extinction law from \\citet{Flaherty2007} is indicated by the black arrow, and the extinction law from \\citet{Indebetouw2005} is plotted as the blue arrow.}}\n\\label{fig:colorcolor}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[angle=270,scale=0.65]{f13}\n\\caption{\\footnotesize{FCRAO molecular line contours of N$_2$H$^+$~(1-0) (left), C$^{18}$O~(1-0) (center) and $^{12}$CO~(1-0) (right) plotted over the {\\em Spitzer} 8~$\\mu$m image of G012.50$-$0.22. The critical density of the molecular transition decreases from left to right.}}\n\\label{fig:g1250_env}\n\\end{center}\n\\end{figure*}\n\n\n\\noindent in the entire sample is shown in Figure~\\ref{fig:colorcolor}\\footnote{No embedded protostar was detected in all four IRAC bands, so none are plotted in Figure~\\ref{fig:colorcolor}.}. The extinction laws from both \\citet{Flaherty2007} and \\citet{Indebetouw2005} are plotted to show the effect of five magnitudes of visual extinction. The objects associated with these IRDCs are a great distance from us and in the plane of the Galaxy, so they naturally suffer from a great deal of extinction, reddening, and foreground contamination. Furthermore, the reddening law used in this classification scheme and the measures taken to extricate extragalactic contaminants may be inaccurate due to the great distance to IRDCs, as the criteria were originally designed to suit local regions. This may result in misclassification of sources. For example, a highly reddened Class I object might appear as an embedded protostar. Nonetheless, if these objects are indeed protostars, it is likely that they are associated with the IRDC. \n\n\nIn Table~3, we summarize the number of each class of YSO in each IRDC field. We note the number of these YSOs that are spatially coincident with the absorbing IRDC clumps (see $\\S$\\ref{structure}). Only $\\sim$10\\% of the YSOs are associated with the dense gas. 
The rest appear to be a distributed population of stars surrounding the IRDC. This may be because any star directly associated with the IRDC is too heavily obscured to be detected even with the deep Spitzer observations we undertook. Our observations are sensitive to 1-3~$M_{\\sun}$, 1~Myr-old pre-main sequence stars \\citep{Baraffe1998}, or a 1~$L_{\\sun}$ Class 0 protostar at 4~kpc with no extinction \\citep{whitney_protostar}. With extinction, which can reach 1-2 magnitudes in the {\\em Spitzer} bands, embedded YSOs up to 3-4~$M_{\\sun}$ might be present, but hidden from our view. Another possible reason for the lack of YSOs detected coincident with the dense gas is that the IRDC itself could be in a stage prior to the onset of star formation, and the surrounding stars that are observed have disrupted their natal molecular gas.\n\nTable~4 lists all of the objects identified as embedded objects that are spatially coincident with an IRDC. We list the flux density at each Spitzer wavelength and an estimate of the mid-infrared luminosity derived from integrating the spectral energy distribution, which is dominated by emission at 24~$\\mu$m. In the likely event that the embedded objects are extincted, these mid-infrared luminosities will be underestimated. Taking the average extinction estimates, which can be derived most reliably from the measurements of Class II objects, A$_K$ ranges from 1 to 3, which, if the extinction law of \\citet{Flaherty2007} is applied, corresponds to A$_{24}$ of 0.5 to 1.6. As a check, we use a second method to estimate the extinction: based on average values of the optical depth we measure in the IRDCs, we confirm that A$_{24}\\sim$1 is typical in these objects. Given the uncertain extinction properties, and the fact that a large portion of these embedded sources' luminosity will emerge at longer wavelengths not observed here, the luminosities presented in Table~4 are lower limits. 
According to \\citet{Robitaille2006}, luminosities in this range arise from stars of 0.1 to 2~$M_{\\sun}$, though, since these luminosities are lower limits, the true stellar masses are likely greater.\n\n\\subsection{Nebulosity at 8 and 24~$\\mu$m}\n\nFour IRDCs in our sample (G006.26$-$0.51, Figure~\\ref{fig:g0626}; G009.16$+$0.06, Figure~\\ref{fig:g0916}; G023.37$-$0.29, Figure~\\ref{fig:g2337}; G034.74$-$0.12, Figure~\\ref{fig:g3474}) exhibit bright emission nebulosity in the IRDC field at 8 and 24~$\\mu$m. These regions tend to be brightest in the thermal infrared (e.g. 24~$\\mu$m) but show some emission at 8~$\\mu$m, which suggests they are sites of high-mass star cluster formation. To test whether the apparent active star formation is associated with the IRDC in question, or merely in its vicinity, we correlate each instance of bright emission with the molecular observations of the object obtained by \\citet{ragan_msxsurv}. The molecular observations provide velocity information which, due to Galactic rotation, aids in estimating the distance to the mid-infrared emission \\citep{Fich:1989}. This distance compared with the distance to the IRDC enables us to discern whether the IRDC and young cluster are at the same distance or one is in the foreground or background.\n\nIn the case of G006.26$-$0.51 (Figure~\\ref{fig:g0626}), we detect infrared emission at 24~$\\mu$m east of the IRDC. This emission is spatially coincident with, and similar in morphology to, C$^{18}$O~(1-0) emission at a characteristic velocity of 17~km~s$^{-1}$ \\citep{ragan_msxsurv}, corresponding to a distance of about 3$\\pm$0.5~kpc. The IRDC has a velocity of 23~km~s$^{-1}$, which gives a distance of 3.8~kpc, but with an uncertainty of over 500~pc (see Table~1 and \\citet{ragan_msxsurv}). Given the errors inherent in the distance derivation from the Galactic rotation curve, we cannot conclusively confirm or rule out association. 
G009.16$+$0.06 (Figure~\\ref{fig:g0916}) has no distinct velocity component evident in the molecular observations, nor does the molecular emission associated with the IRDC overlap with the 24~$\\mu$m emission. Embedded clusters should be associated with molecular emission, especially C$^{18}$O, which is included in the FCRAO survey. Any emission associated with this object likely lies outside the bandpass of the FCRAO observations, placing it at a greater or lesser distance than the IRDC. The 24~$\\mu$m image of G023.37$-$0.29 (Figure~\\ref{fig:g2337}) shows bright emission to the south of the IRDC and another region slightly south and west of the IRDC. This emission is not prominent in the IRAC images, suggesting that this is potentially an embedded star cluster. Molecular observations show strong emission peaks in both CS~(2-1) and N$_2$H$^+$~(1-0) in the vicinity of the IRAC 8~$\\mu$m and MIPS 24~$\\mu$m emission. However, there are three distinct velocity components evident in the observed bandpass, none of which is more spatially coincident with the 24~$\\mu$m emission than the others. Unfortunately, the spatial resolution of the FCRAO survey is insufficient for definitive correlation. Finally, in G034.74$-$0.12 (Figure~\\ref{fig:g3474}), no molecular emission is distinctly associated with the nebulosity; the most likely scenario for this object is that the molecular emission associated with the nebulosity lies outside the bandpass of the FCRAO observation, and the nebulosity is therefore not associated with the IRDC. \n\n\\subsection{Summary of Stellar Content and IRDC Environment}\n\nWe have characterized the star formation that is {\\em possibly} associated with the IRDCs to the extent that the Spitzer and millimeter data allow. The YSO population is distributed, and only a handful of objects identified are directly spatially associated with the IRDC. 
More explicitly, roughly half (5\/11) of the sample shows no clear evidence for {\\em embedded} sources in the dense absorbing gas; these clouds instead appear sparsely populated with young protostars, whose photometric properties are given in Table~2, and the overall IRDC stellar content is summarized in Table~3. Among those embedded objects correlated with the absorbing structure at 8~$\\mu$m, which are summarized in Table~4, we find a marked lack of luminous sources ($>$5~$L_{\\sun}$) at these wavelengths. There may be significant extinction at 24~$\\mu$m, in which case we would underestimate their luminosity. Further, even in IRDCs with embedded protostars, most of the cloud core mass is not associated with an embedded source. It is our contention that most of the IRDC mass does not harbor significant massive star formation, and hence that IRDCs are in an early phase of cloud evolution. \n\nBright emission nebulosity is evident at 8~$\\mu$m and 24~$\\mu$m in four fields, presumably due to the presence of high-mass stars or a cluster. If the IRDC were associated with the nebulosity, it would be a strong indication that massive star formation is already occurring in the vicinity of the IRDCs. Molecular data give no definitive clues that these regions are associated with the IRDCs. \n\nMost studies, including this one, focus primarily on the dense structures that comprise infrared-dark clouds, yet their connection to the surrounding environment has not yet been discussed in the literature. While it is clear that some star formation is directly associated with the dense material, star formation is also occurring beyond the extent of the IRDC as it appears in absorption. Figure~\\ref{fig:g1250_env} shows molecular line contours from \\citet{ragan_msxsurv} over the {\\em Spitzer} 8~$\\mu$m image. N$_2$H$^+$, a molecule known to trace very dense gas, corresponds exclusively to the dark cloud. 
On the other hand, C$^{18}$O and, to a greater extent, $^{12}$CO, show a much more extended structure, which suggests that the infrared-dark cloud resides within a greater molecular cloud complex. For all of the objects in our sample, the $^{12}$CO emission was present at the edge of the map (up to 2$'$ away from the central position), so it is likely that the emission, and therefore the more diffuse cloud that it probes, extends beyond the mapped area. Thus, the full extent of the surrounding cloud is not probed by our data.\n\n\\section{Tracing mass with dust absorption at 8~$\\micron$}\n\\label{clumps}\n\nEach infrared-dark cloud features distinct absorbing structures evident at all Spitzer wavelengths, but they are most pronounced at 8~$\\mu$m and 24~$\\mu$m due to strong background emission from polycyclic aromatic hydrocarbons (PAHs) and small dust grains in the respective bandpasses \\citep{Draine_dustreview}. The IRDCs in this sample exhibit a range of morphologies and surrounding environments. Figures~\\ref{fig:g0585}-\\ref{fig:g3744} show a morphological mix of filamentary dark clouds (e.g. G037.44$+$0.14, Figure~\\ref{fig:g3744}) and large ``round'' concentrations (e.g. G006.26$-$0.51, Figure~\\ref{fig:g0626}). Remarkably, these detailed structures correspond almost identically between the 8~$\\mu$m and 24~$\\mu$m bands, despite the fact that the background radiation in the two bands arises from separate mechanisms. At 8~$\\mu$m, emission from PAHs dominates on average, while at 24~$\\mu$m the bright background is due to the thermal emission of dust in the Galactic plane. 
Considering this scenario, it is unlikely that we are mistaking random background fluctuations for dense, absorbing gas with the appropriate characteristics to give rise to massive star and cluster formation.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[angle=270,scale=0.6]{f14}\n\\end{center}\n\\caption{\\footnotesize{{\\it Upper left:} Original IRAC 8~$\\mu$m image of G024.05$-$0.22. {\\it Upper right:} Background model using the spatial median filtering technique with a 3$\\arcmin$ radius. The dark cloud is virtually eliminated from the background model, which still accounts for the large-scale variations. {\\it Lower left:} Same as upper right panel, except that a 1$\\arcmin$ radius is used, which models the dark cloud as part of the background. {\\it Lower right:} Same as upper right and lower left panels, except that a 5$\\arcmin$ radius is used, which misses the background variation and is almost a constant value.}}\n\\label{fig:bgmethod}\n\\end{figure*}\n\n\n\\subsection{Modeling the Foreground and Background}\n\\label{bg}\n\nIn the Galactic plane, the 8~$\\mu$m background emission varies on scales of a few arcminutes. To accurately characterize structures seen in absorption, we account for these variations using a spatial median filtering technique, motivated by the methods used in \\citet{Simon2006}. For each pixel in the IRAC image, we compute the median value of all pixels within a variable radius and assign that value to the corresponding pixel in the background model. Figure~\\ref{fig:bgmethod} illustrates an example of several trials of this method, including models with 1$\\arcmin$, 3$\\arcmin$, and 5$\\arcmin$ radii for the pixels included in a given pixel's median calculation. We select the filter size to be as small as possible while ensuring that no absorption features appear in the background model. 
If the radius is too small, most of the included pixels will have low values with few representing the true background in the areas where absorption is concentrated (lower left panel of Figure~\\ref{fig:bgmethod}). The background variations are also not well-represented if we select a radius too large (lower right panel of Figure~\\ref{fig:bgmethod}). Based on our analysis, the best size for the filter is 3$\\arcmin$. The observed 8~$\\mu$m emission is a combination of both background and foreground contributions: \n\n\\begin{equation}\n\\int I^{estimate} d\\lambda = \\int I_{BG}^{true} d\\lambda + \\int I_{FG} d\\lambda\n\\label{eq1}\n\\end{equation}\n\n\\noindent where $\\int I^{estimate}~d\\lambda$ is the intensity that we measure from the method described above, $\\int I_{BG}^{true}~d\\lambda$ is the true background intensity, which can only be observed in conjunction with $\\int I_{FG}~d\\lambda$, the foreground intensity, all at 8~$\\mu$m. The relative importance of the foreground emission is not well-known. For simplicity, we assume the foreground can be approximated by a constant fraction, $x$, of the background emission across each field.\n\n\\begin{equation}\n\\int I_{FG}~d\\lambda = x \\int~I_{BG}^{true}~d\\lambda\n\\end{equation}\n\nOne way to estimate the foreground contribution has already been demonstrated by \\citet{Johnstone_G11}. The authors compare observations of IRDC G011.11$-$0.12 with the {\\em Midcourse Space Experiment (MSX)} at 8~$\\mu$m and the Submillimeter Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope (JCMT) at 850~$\\mu$m (see their Figure~3) and use the point at which the 8~$\\mu$m integrated flux is at its lowest at high values of 850~$\\mu$m flux for the foreground estimate. The top panel of Figure~\\ref{fig:g1111_fluxplot} shows a similar plot to Figure~3 in \\citet{Johnstone_G11}, except our integrated 8~$\\mu$m flux is measured with {\\em Spitzer} and presented here in units of MJy\/sr. 
SCUBA 850~$\\mu$m data for two of the IRDCs in this sample (G009.86$-$0.04 and G012.50$-$0.22) are available as part of the legacy data release \\citep{SCUBA_legacy} and are included in this plot. Just as \\citet{Johnstone_G11} point out, we see a clear trend: where 8~$\\mu$m emission is low along the filament, the 850~$\\mu$m flux is at its highest. In the case of G011.11$-$0.12, where the SCUBA data are of the highest quality, we take the minimum 8~$\\mu$m flux density to be an estimate of the foreground contribution. Assuming this trend is valid for our sample of IRDCs, we use the 8~$\\mu$m emission value measured at the dust opacity peak in each source as our estimation of the foreground level for that object (for the remainder of this paper, we will refer to this method as foreground estimation method ``A''). Given these considerations, we find values for $x$ to range between 2 and 5. Up to 20\\% of this foreground contamination is likely due to scattered light in the detector (S.T. Megeath, private communication). We assume constant foreground flux at this level. As an alternative foreground estimate, we also test a case in which we attribute half of the model flux to the background and half to the foreground. This is equivalent to choosing a value of $x$ of 1, and based on Figure~\\ref{fig:g1111_fluxplot}, is also a reasonable estimate. This method will be referred to as foreground estimation method ``B.'' For most of the following figures and discussion, we use estimation method A and refer to the results from method B in the text when applicable.\n\nWith an estimation of the foreground contribution, the absorption can be quantitatively linked to the optical depth of the cloud. 
The measured integrated flux, $\\int I_m d\\lambda$, at any point in the image, including contributions from both the foreground and background, can then be expressed as \n\n\\begin{equation}\n\\int I_m d\\lambda = \\int I_{BG}^{true}e^{-\\tau_8} d\\lambda + \\int I_{FG} d\\lambda\n\\end{equation}\n\n\\noindent where $\\tau_8$ is the optical depth of the absorbing material. For the subsequent calculations, we use the average intensity, assuming uniform transmission over the IRAC channel 4 passband, and average over the extinction law \\citep[][see Section~\\ref{structure}]{weingartner_draine01} in this wavelength region in order to convert the optical depth into a column density (see discussion in the next section). We note that we make no attempt to correct for the spectral shape of the dominant PAH emission feature in the 8~$\\mu$m Spitzer bandpass, which we assume dominates the background radiation. In addition, clumpy material that may be optically thick and is not resolved by these observations will cause us to underestimate the column density. These factors could introduce an uncertainty in the conversion of optical depth to column density. Still, we will show in Section~\\ref{columnprobe} that dust models compare favorably to our estimation of the dust absorption cross section, lending credence to our use of $\\tau$ as a tracer of column density.\n\n\\begin{figure}\n\\hspace{-0.3in}\n\\includegraphics[scale=0.55]{f15}\n\\caption{\\footnotesize{Spitzer 8~$\\mu$m vs. SCUBA 850~$\\mu$m flux for IRDCs G011.11$-$0.12, G009.86$-$0.04, and G012.50$-$0.22. The horizontal dashed line marks where the 8~$\\mu$m flux density reaches a minimum in G011.11$-$0.12, which is also indicated for the two other IRDCs with available SCUBA data. This flux density serves as an estimate of the foreground emission at 8~$\\mu$m. 
The dash-dotted line indicates the mean 8~$\\mu$m emission.}}\n\\label{fig:g1111_fluxplot}\n\\end{figure}\n\n\n\\subsection{Identification of Structure}\n\\label{structure}\n\nFigure~\\ref{fig:opaccontours} shows a map of optical depth for G024.05$-$0.22. This provides an example of the absorbing substructure in one of the IRDCs in our sample. Owing to the high spatial resolution of Spitzer at 8~$\\mu$m (1 pixel = 0.01~pc at 4~kpc, accounting for oversampling), we see substructures down to very small scales ($\\sim$0.03~pc) in {\\em all} IRDCs in our sample. \n\nIn order to identify independent absorbing structures in the 8~$\\mu$m optical depth map, we employed the {\\tt clumpfind} algorithm \\citep{williams_clumpfind}. In the two-dimensional version, {\\tt clfind2d}, the algorithm calculates the location, size, and the peak and total flux of structures based on specified contour levels. We use the Spitzer PET\\footnote{http:\/\/ssc.spitzer.caltech.edu\/tools\/senspet\/} to calculate the sensitivity of the observations, i.e. to what level the data permit us to discern true variations from noise fluctuations. At 8~$\\mu$m, the observations are sensitive to 0.0934 MJy\/sr, which, on average, corresponds to an optical depth sensitivity (10-$\\sigma$) of $\\sim$0.02. While the clumps take on a variety of morphologies, since {\\tt clumpfind} makes no assumptions about the clump shapes, we approximate the clump ``size'' by its effective radius, \n\n\\begin{equation}\nr_{eff}=\\sqrt{\\frac{n_{pix}~A_{pix}}{\\pi~f_{os}}}\n\\end{equation}\n\n\\noindent where n$_{pix}$ is the number of pixels assigned to the clump by {\\tt clumpfind}, and A$_{pix}$ is the area subtended by a single pixel. The correction factor for oversampling, $f_{os}$, accounts for the fact that the {\\em Spitzer Space Telescope} has an angular resolution of 2.4$''$ at 8~$\\mu$m, while the pixel scale on the IRAC chip is 1.2$''$, resulting in oversampling by a factor of 4. 
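As a concrete illustration of this bookkeeping, the effective radius can be evaluated as follows (a minimal sketch; the function name and defaults are ours, using the 1.2$''$ IRAC pixel scale, the factor-of-4 oversampling correction, and a nominal 4~kpc distance):

```python
import math

ARCSEC_PER_RADIAN = 206265.0  # small-angle conversion factor

def effective_radius_pc(n_pix, dist_pc=4000.0, pix_arcsec=1.2, f_os=4.0):
    """r_eff = sqrt(n_pix * A_pix / (pi * f_os)), in parsecs.

    n_pix      -- number of pixels assigned to the clump by clumpfind
    dist_pc    -- assumed distance to the IRDC (4 kpc fiducial)
    pix_arcsec -- IRAC pixel scale at 8 um (1.2 arcsec)
    f_os       -- oversampling correction (2.4 arcsec resolution on
                  1.2 arcsec pixels gives a factor of 4 in area)
    """
    pix_pc = dist_pc * pix_arcsec / ARCSEC_PER_RADIAN  # pixel size at the cloud
    a_pix = pix_pc ** 2                                # area per pixel, pc^2
    return math.sqrt(n_pix * a_pix / (math.pi * f_os))
```

For a clump of a few tens of pixels at 4~kpc this gives $r_{eff}$ of a few hundredths of a parsec, consistent with the typical {\tt clumpfind} clump sizes quoted below.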
\n\nThe number and size of structures identified with {\\tt clumpfind} vary depending on the number of contouring levels between the fixed lower threshold, which is set by the sensitivity of the observations, and the highest level set by the deepest absorption. We set the lowest contour level to 10$\\sigma$ above the average background level. In general, increasing the number of contour levels serves to increase the number of clumps found. In all cases, we reach a number of levels beyond which the addition of further contour levels reveals no additional structures; we therefore select the number of contour levels at which the clump count levels off. We also remove those clumps found at the image edge or bordering a star, as the background estimation is likely inaccurate and\/or at least a portion of the clump is probably obscured by the star, rendering any estimation of the optical depth inaccurate.\n\nUsing {\\tt clumpfind}, each IRDC decomposes into tens of clumps, ranging in size from tens to hundreds of pixels per clump. The average clump size is 0.04~pc. Typically, there are one or two central most-massive clumps and multiple smaller clumps in close proximity. In some instances, clumps are strung along a filamentary structure, while in other cases, clumps are radially distributed about a highly-concentrated center. Figure~\\ref{fig:numberedclumps} shows an example of how the clumps are distributed spatially in G024.05$-$0.22 as {\\tt clumpfind} identifies them. \n\nWith reliable identification of clumps, we next calculate individual clump masses. As described, {\\tt clumpfind} gives the total optical depth measured at 8~$\\mu$m, $\\tau_{8,tot}$, within the clump boundary, along with the clump size and position. 
This can be directly transformed into $N(H)_{tot}$ via the relationship \n\n\\begin{equation}\n\\label{colequation}\nN(H)_{tot} = {\\frac{\\tau_{8,tot}}{\\sigma_8~f_{os}}}\n\\end{equation}\n\n\\noindent where $\\sigma_8$ is the dust absorption cross section at 8~$\\mu$m. We derive an average value of $\\sigma_8$ over the IRAC channel 4 bandpass using dust models that take into account higher values of R$_V$ corresponding to dense regions in the ISM. Following \\citet{weingartner_draine01}, we adopt $R_V$ = 5.5, case B values, which agree with recent results from \\citet{Indebetouw2005}. We find the value of $\\sigma_8$ to be 2.3$\\times$10$^{-23}$~cm$^2$. \n\nThe column density can then be used with the average clump size and the known distance to the IRDC, assuming all clumps are at approximately the IRDC distance, to find the clump mass. The mass of a clump is given as \n\n\\begin{equation}\nM_{clump} = 1.16 m_H N(H)_{tot} A_{clump} \n\\end{equation}\n\n\\noindent where m$_H$ is the mass of the hydrogen atom, N(H)$_{tot}$ is the total column density of hydrogen, the factor 1.16 is the correction for helium, and A$_{clump}$ is the area of the clump. Table~6 gives the location, calculated mass and size of all the clumps identified with {\\tt clumpfind}. We also note which clumps are in the vicinity of candidate young stellar objects (Table~2) or foreground stars, thereby subjecting the given clump properties to greater uncertainty. On average (for foreground estimation method A), 25\\% of clumps border a field star, and these clumps are flagged and not used in the further analysis. In each infrared-dark cloud, we find between 3000$M_{\\sun}$ and 10$^4M_{\\sun}$ total mass in clumps, and typically $\\sim$15\\% of that mass is found in the most massive clump. \n\nWe perform the same analysis on the maps produced with foreground estimation method B. The foreground assumption in this case leads to lower optical depths across the map. 
Due to the different dynamic range in the optical depth map, {\\tt clumpfind} does not exactly reproduce the clumps found with method A. The discrepancy arises in how {\\tt clumpfind} assigns pixels in crowded regions of the optical depth map, so while by and large the same material is counted as a clump, the exact assignment of pixels to specific clumps varies somewhat. On average, the clumps found in the ``method B'' maps tend to have lower masses by a factor of 2, though the sizes do not differ appreciably from those found with foreground estimation method A. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{f16}\n\\end{center}\n\\caption{\\footnotesize{G024.05$-$0.22. 8~$\\mu$m optical depth with contours highlighting the structures.}}\n\\label{fig:opaccontours}\n\\medskip\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=270, scale=0.4]{f17}\n\\end{center}\n\\caption{\\footnotesize{G024.05$-$0.22. Results of the {\\tt clumpfind} algorithm plotted over Spitzer 8~$\\mu$m image. Absorption identified as a ``clump'' is denoted by a number.}\n\\label{fig:numberedclumps}}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[angle=270,scale=0.6]{f18}\n\\end{center}\n\\caption{\\footnotesize{Original optical depth image of G024.05$-$0.22 (left) and the wavelet subtracted image (right) of the same region.}}\n\\label{fig:wavelet}\n\\end{figure*}\n\n\n\n\\subsection{Resolving Inaccuracy in Clump Mass Calculation}\n\\label{masserr}\n\nThe clumps identified in this fashion include a contribution from the material in the surrounding envelope. As a result, a portion of the low-mass clump population may not be detected, and the amount of material in a given clump may be overestimated. To examine this effect, we use the {\\tt gaussclumps} algorithm \\citep{StutzkiGusten1990} to identify clumps while accounting for the contribution from the cloud envelope. 
This method was designed to decompose three-dimensional molecular line observations by deconvolving the data into clumps fit by Gaussians. To use the algorithm here without altering the code, we fabricated a data cube, essentially by mimicking a third (velocity) dimension, thus simulating three-dimensional clumps that were all centered in velocity on a single central plane. \\citet{mookerjea2004} and \\citet{Motte2003} have used similar techniques to simulate a third dimension to their dust continuum data sets. The {\\tt gaussclumps} algorithm inherently accounts for an elevated baseline level, which can be used to approximate the envelope. Applied to our data set, {\\tt gaussclumps} finds that 15-50\\% of the material is in the envelope. Further discussion of the envelope contribution, including its effect on the mass function, is given in $\\S$\\ref{envelope}.\n\nThe {\\tt clumpfind} and {\\tt gaussclumps} methods result in nearly one-to-one clump identification in the central region of the IRDC. However, because the contribution from the cloud envelope falls off further away from the central concentration of mass in the IRDC, {\\tt gaussclumps} fails to find low-mass clumps on the outskirts of IRDCs as successfully as {\\tt clumpfind}, even though such clumps are statistically valid relative to their local background. We conclude that {\\tt gaussclumps} is not suitable for identifying structure in the outskirts of the IRDCs where the envelope is below the central level. \n\nAnother method commonly employed in the literature to account for the extended structures in which dense cores reside is a ``wavelet subtraction'' technique, which is described in \\citet{alves_cmfimf}. To address the varying levels of background across the optical depth map, we use the wavelet transform of the image to extract the dense cores. For one IRDC in our sample, G024.05$-$0.22, we (with the help of J. Alves, private communication) perform the wavelet analysis on the optical depth map. 
Figure~\\ref{fig:wavelet} shows a comparison between the original optical depth map and the wavelet-subtracted map. With the removal of the ``envelope'' contribution in this fashion, the clumps are up to 90\\% less massive on average, and their average size decreases by 25\\%, or $\\sim$0.02~pc. \n\nBoth the {\\tt gaussclumps} and wavelet subtraction methods of extracting clumps show that the contribution of the cloud envelope is not yet well-constrained quantitatively. Not only is the cloud envelope more difficult to detect, but its structure is likely not as simple as these first-order techniques assume in modeling it. As such, for the remainder of the paper, we will not attempt to correct the clump masses on an individual basis, but rather focus our attention on the clump population properties as a whole. In $\\S$\\ref{validate}, we employ several techniques to calibrate our mass estimation methods. We will show in $\\S$\\ref{mf} that the effect of the envelope is systematic and does not skew the derived relationships, such as the slope of the mass function. \n\n\\subsection{Validating 8~$\\micron$ absorption as a Tracer of Mass}\n\\label{validate}\n\nIn previous studies, molecular clouds have been predominantly probed using the emission of warm dust at sub-millimeter wavelengths. While there are inherent uncertainties in the conversion of flux density to mass, the emission mechanism is well-understood. The absorption method described above is a powerful way to trace mass in molecular clouds. To understand the extent of its usefulness, here we validate dust absorption as a mass tracer by drawing comparisons between it and results using more established techniques. First, we relate the dust absorption to dust emission as probes of column density. Second, we use observations of molecular tracers of dense gas not only to further cement the validity of the absorbing structures, but also to place the IRDCs in context with their surroundings. 
Finally, we show that the sensitivity of the technique does not have a strong dependence on distance. \n\n\\subsubsection{Probing Column Density at Various Wavelengths}\n\\label{columnprobe}\n\nAs we discussed in $\\S$\\ref{bg}, there is an excellent correlation between the 8~$\\mu$m and 850~$\\mu$m flux densities in IRDC G011.11$-$0.12. Figure~\\ref{fig:g1111_fluxplot} shows the point-to-point correlation between the SCUBA 850~$\\mu$m flux density and the Spitzer 8~$\\mu$m flux density. This correspondence itself corroborates the use of absorption as a dust tracer. In addition, the fit to the correlation can confirm that the opacity ratio, $\\kappa_8\/\\kappa_{850}$, is consistent with dust behavior in high density environments. Relating the 8~$\\mu$m flux density \n\n\\begin{equation}\nf_{8}=f_{bg}e^{-\\kappa_{8}\\Sigma(x)}+f_{fg}\n\\end{equation}\n\n\\noindent where $\\kappa_8$ is the 8~$\\mu$m dust opacity, $\\Sigma(x)$ is the mass column density of emitting material, and $f_{bg}$ and $f_{fg}$ are the background and foreground flux density estimates, respectively (from $\\S$\\ref{bg}), and the 850~$\\mu$m flux density \n\n\\begin{equation}\nf_{850}=B_{850}(T_d=13~{\\rm K}) \\kappa_{850} \\Sigma(x) \\Omega\n\\end{equation}\n\n\\noindent where $B_{850}$ is the Planck function at 850~$\\mu$m evaluated for a dust temperature of 13~K, $\\kappa_{850}$ is the dust opacity at 850~$\\mu$m and $\\Omega$ is the solid angle subtended by the JCMT beam at 850~$\\mu$m, one can find a simple relation between the two by solving each for $\\Sigma(x)$ and equating them. 
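The elimination of the column density can also be checked numerically. The sketch below solves the absorption expression and the emission expression for the column, equates the two, and returns the opacity ratio; the synthetic inputs in the usage note are illustrative, not measured fluxes.

```python
import numpy as np

def opacity_ratio(f8, f850, f_bg, f_fg, b850_omega):
    """kappa_8 / kappa_850 from matched 8 and 850 micron flux densities.

    Absorption gives  Sigma = -ln((f8 - f_fg) / f_bg) / kappa_8,
    emission gives    Sigma = f850 / (B_850 * kappa_850 * Omega);
    equating the two column densities and rearranging yields the ratio.
    """
    return (b850_omega / f850) * np.log(f_bg / (f8 - f_fg))
```

Feeding in synthetic fluxes constructed so that the 8 micron optical depth is 1 while the 850 micron optical depth is 1/500 recovers a ratio of 500, the order of the value measured for G011.11$-$0.12.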
The opacity ratio, put in terms of the flux density measurements, is as follows:\n\n\\begin{equation}\n\\frac{\\kappa_{8}}{\\kappa_{850}}=\\frac{B_{850}~\\Omega}{f_{850}} \\ln \\left( \\frac{f_{bg}}{f_{8}-f_{fg}}\\right)\n\\end{equation}\n\n\\noindent From our data, we confirm this ratio is considerably lower ($\\sim$500) in cold, high-density environments than in the diffuse interstellar dust, as found by \\citet{Johnstone_G11}.\n\nWe perform another consistency check between our data and dust models. With maps at both 8 and 24~$\\mu$m, both showing significant absorbing structure against the bright Galactic background (albeit at lower resolution at 24~$\\mu$m), we can calculate the optical depth at 24~$\\mu$m in the same way we did in Section~\\ref{bg}. The optical depth is the product of the dust opacity and the column density ($\\tau_{\\lambda} \\propto \\kappa_{\\lambda} N(H)$), so for a given line of sight the ratio of optical depths is equal to the dust opacity ratio. We find that the typical ratio as measured by Spitzer in IRDCs is\n\n\\begin{equation}\n\\frac{\\kappa_{8}}{\\kappa_{24}} = \\frac{\\tau_8}{\\tau_{24}} \\sim 1.2\n\\end{equation}\n\n\\noindent which is comparable to 1.6, the \\citet{weingartner_draine01} prediction (for R$_V$ = 5.5, case B), and to the 1-1.2 predicted by \\citet{ossenkopf_henning} in the high-density case. We conclude that the dust properties we derive are consistent with the trends that emerge from models of dense environments typical of infrared-dark clouds.\n\n\\begin{figure*}\n\\begin{center}\n\\hbox{\n\\vspace{1.0cm}\n\\hspace{1.5cm}\n\\psfig{figure=f19a.ps,angle=270,height=7.0cm}\n\\hspace{0.4cm}\n\\psfig{figure=f19b.ps,angle=90,height=7.0cm,width=7.0cm}\n}\n\\vspace{-1.0cm}\n\\end{center}\n\\caption{\\footnotesize{Left: Contours of integrated intensity of N$_2$H$^+$ plotted over the IRAC 8~$\\mu$m image of IRDC G012.50$-$0.22. Right: Point-to-point correlation of the N$_2$H$^+$ integrated intensity and the 8~$\\mu$m optical depth. 
Points with high integrated intensity but low optical depth correspond to stars, whose presence leads to the underestimation of optical depth in the vicinity.}}\n\\label{fig:g1250_bimaplot}\n\\end{figure*}\n\n\n\\begin{figure}\n\\hspace{-1in}\n\\includegraphics[scale=0.8]{f20}\n\\caption{\\footnotesize{Comparison of the total mass derived from N$_2$H$^+$ maps from \\citet{ragan_msxsurv} and total clump mass as derived from dust absorption at 8~$\\micron$, where the black diamonds represent the mass using foreground estimation method A and the grey squares show the masses derived using foreground estimation method B (see $\\S$\\ref{bg}). Three of the IRDCs in the sample did not have adequate N$_2$H$^+$ detections. Error bars for 30\\% systematic errors in the mass are plotted for the clump mass estimates, and a factor of 5 uncertainty is plotted for the N$_2$H$^+$ mass estimates. The dashed line shows a one-to-one correspondence for reference.}\n\\label{fig:n2hpclumps}}\n\\end{figure}\n\n\\subsubsection{Molecular Line Tracers}\n\nMolecular lines are useful probes of dense clouds, with particular molecules being suited for specific density ranges. For instance, chemical models show that N$_2$H$^+$ is an excellent tracer of dense gas in pre-stellar objects \\citep{bl97}. In support of these models, observations of low-mass dense cores \\citep{tafalla_dep, Bergin2002} demonstrate that N$_2$H$^+$ highlights regions of high central density (n$\\sim$10$^6$~cm$^{-3}$), while CO readily freezes out onto cold grains (when n~$>~10^4$~cm$^{-3}$), rendering it undetectable in the central denser regions of the cores. CO is a major destroyer of N$_2$H$^+$, and its freeze-out leads to the rapid rise in N$_2$H$^+$ abundance in cold gas. When a star is born, the CO evaporates from grains and N$_2$H$^+$ is destroyed in the proximate gas \\citep{lbe04}. 
Thus, N$_2$H$^+$ is a preferential tracer of the densest gas that has not yet collapsed to form a star in low-mass pre-stellar cores.\n\nWhile N$_2$H$^+$ has been used extensively as a probe of the innermost regions of local cores, where densities can reach 10$^6$~cm$^{-3}$ \\citep[e.g.][]{taf04}, this chemical sequence has not yet been observationally proven in more massive star forming regions. Nonetheless, recent surveys \\citep[e.g.][]{Sakai2008, ragan_msxsurv} confirm that N$_2$H$^+$ is prevalent in IRDCs, and mapping by \\citet{ragan_msxsurv} shows that N$_2$H$^+$ more closely follows the absorbing gas than CS or C$^{18}$O, which affirms that the density is sufficient for appreciable N$_2$H$^+$ emission. These single dish surveys do not have sufficient resolution to confirm the tracer's reliability on the clump or pre-stellar core scales in IRDCs. Interferometric observations will be needed to validate N$_2$H$^+$ as a probe of the chemistry and dynamics of individual clumps (Ragan et al., in prep.). \n\nFor one of the objects in our sample, G012.50$-$0.22, we have prior BIMA observations of N$_2$H$^+$ emission with $8'' \\times 4.8''$ spatial resolution. The BIMA data were reduced using the standard MIRIAD pipeline reduction methods \\citep{MIRIAD}. As in nearby clouds \\citep[e.g.][]{wmb04}, the integrated intensity of N$_2$H$^+$ relates directly to the dust (measured here in absorption) in this infrared-dark cloud. Figure~\\ref{fig:g1250_bimaplot} illustrates the quality of N$_2$H$^+$ as a tracer of dense gas, both in the N$_2$H$^+$ contours plotted over the 8~$\\mu$m {\\em Spitzer} image and in the point-to-point correlation between the 8~$\\mu$m optical depth and the integrated intensity of N$_2$H$^+$. The points that lie above the average line, with high integrated intensities but low optical depth, are all in the vicinity of a foreground star in the 8~$\\mu$m image, which lowers our estimate for optical depth. 
Across the sample, however, we have shown that the foreground stars and the young stellar population are largely unassociated with the absorption. \n\nTwo trends are apparent in Figure~\\ref{fig:g1250_bimaplot}. First, for $\\tau~<~0.25$ there is a lack of N$_2$H$^+$ emission. This suggests that the absorption may be picking up a contribution from a lower density extended envelope that is incapable of producing significant N$_2$H$^+$ emission. This issue is discussed in greater detail in $\\S$\\ref{envelope}. Alternatively, the interferometer may filter out extended N$_2$H$^+$ emission. The second trend evident in Figure~\\ref{fig:g1250_bimaplot} is that for $\\tau~>~0.25$, there is an excellent overall correlation, confirming that mid-infrared absorption in clouds at distances of 2 to 5~kpc is indeed tracing the column density of the {\\em dense} gas likely dominated by pre-stellar clumps.\n\nIn addition to directly tracing the dense gas in IRDCs, molecular observations can be brought to bear on critical questions regarding the use of absorption against the Galactic mid-infrared background and how best to calibrate the level of foreground emission. One way to approach this is to use the molecular emission as a tracer of the total core mass and compare this to the total mass estimated from 8~$\\mu$m absorption with differing assumptions regarding the contributions of foreground and background (see $\\S$~\\ref{bg}). In \\citet{ragan_msxsurv} we demonstrated that the distribution of N$_2$H$^+$ emission closely matches that of the mid-infrared absorption (see also $\\S$~\\ref{clumps}). This parallels the close correspondence between N$_2$H$^+$ and dust continuum emission in local pre-stellar cores \\citep[e.g.][]{BerginTafalla_ARAA2007}. Thus we can use the mass estimated from the rotational emission of N$_2$H$^+$ to set limits on viable models of the foreground. 
In \\citet{ragan_msxsurv} we directly computed a mass using an N$_2$H$^+$ abundance assuming local thermodynamic equilibrium (LTE) and using the H$_2$ column density derived from the MSX 8~$\\mu$m optical depth. However, this estimate is highly uncertain, as the optical depth was derived assuming no foreground emission, and the N$_2$H$^+$ emission may not be in LTE. Here, instead, we use chemical theory and observations of clouds to set limits.\n\nN$_2$H$^+$ emission is strong in dense pre-stellar gas due to the freeze-out of CO, its primary destruction route. Detailed theoretical models of this process in gas with densities in excess of 10$^5$~cm$^{-3}$ \\citep{aikawa_be}, as expected for IRDCs, suggest a typical abundance should be $\\sim$10$^{-10}$ with respect to H$_2$ \\citep{maret_n2, aikawa_be, Pagani2007}. This value is consistent with that measured in dense gas in several starless cores \\citep{tafalla_dep, maret_n2}. Using this value we now have a rough test of our foreground and background estimates. For example, in G024.05$-$0.22 we find a total mass of 4100~$M_{\\sun}$ (foreground estimation method A). Using the data in \\citet{ragan_msxsurv}, we find that the total mass traced by N$_2$H$^+$ is 4400~$M_{\\sun}$, providing support for our assumptions. Figure~\\ref{fig:n2hpclumps} shows the relationship between the total clump mass derived from absorption and the total mass derived from our low-resolution maps of N$_2$H$^+$ for the eight IRDCs in our sample that were detected in N$_2$H$^+$. In general, there is good agreement. We plot a 30\\% systematic error in the total clump masses (abscissa) and a factor of 5 for the total N$_2$H$^+$ mass estimate (ordinate). In the cases where the estimates differ, the N$_2$H$^+$ mass estimate tends to be greater than the total mass derived from the dust absorption clumps. This discrepancy likely arises in large part from an underestimation of the N$_2$H$^+$ abundance and\/or non-LTE conditions. 
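The abundance-based check amounts to scaling an N$_2$H$^+$ column density map by the assumed abundance and summing over the map. A minimal sketch follows: the abundance of 10^-10 is the value quoted above, while the mean mass of 2.8 hydrogen masses per H$_2$ molecule (helium included) and the column density in the test are our illustrative assumptions.

```python
import numpy as np

M_SUN_G = 1.989e33   # solar mass (g)
M_H_G = 1.674e-24    # hydrogen atom mass (g)
MU_PER_H2 = 2.8      # mean mass per H2 molecule in units of m_H (He included)
PC_CM = 3.086e18     # parsec (cm)

def mass_from_n2hp(col_n2hp, pixel_pc, x_n2hp=1e-10):
    """Total gas mass (M_sun) from an N2H+ column density map (cm^-2),
    assuming a uniform abundance x_n2hp = N(N2H+)/N(H2)."""
    n_h2 = np.asarray(col_n2hp, dtype=float) / x_n2hp   # H2 column per pixel
    pix_area_cm2 = (pixel_pc * PC_CM) ** 2              # pixel area in cm^2
    return n_h2.sum() * pix_area_cm2 * MU_PER_H2 * M_H_G / M_SUN_G
```

With these assumptions, a single 0.1~pc pixel carrying an N$_2$H$^+$ column of 10^12 cm^-2 corresponds to roughly 2 solar masses of gas; summing a real map gives the totals compared in the figure.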
All the same, the consistency of the mass estimates, together with the morphological correspondence, reaffirms that we are probing the dense clumps in IRDCs and that our mass probe is reasonably calibrated.\n\nWe find no discernible difference between methods A and B of foreground estimation. However, we note that both are substantially better than assuming no foreground contribution. We therefore believe that method A is an appropriate estimate of the foreground contribution (see $\\S$\\ref{bg}). \n\n\\subsubsection{Effects of Distance on Sensitivity}\n\\label{sens}\n\nInfrared-dark clouds are much more distant than the local, well-studied clouds such as Taurus or $\\rho$ Ophiuchi. As such, a clear concern is that the distance to IRDCs may preclude a well-defined census of the clump population. The most likely way in which our survey is incomplete is the under-representation of low-mass objects due to their relatively small size, blending of clumps along the line of sight, or insensitivity to their absorption against the background. One observable consequence of this effect, assuming IRDCs are a structurally homogeneous class of objects, might be that more distant IRDCs should exhibit a greater number of massive clumps at the expense of multiple smaller clumps. Another possible effect is that the greater the distance to the IRDC, the less sensitive we become to small clumps, and clumps should appear to blend together (i.e. neighboring clumps will appear as one giant clump). Due to this effect, we expect that the most massive clumps of the population will be over-represented. As a test, we examine the distribution of masses and sizes of clumps as a function of IRDC distance, which is shown in Figure~\\ref{fig:masssens}. This sample, with IRDCs ranging in distance from 2.4 to 4.9~kpc, does not show a strong trend of this nature. 
We show the detection limit for clumps to illustrate the very good sensitivity of this technique: while it does impose a lower boundary on clump detectability, most clumps lie well above this value. We found no strong dependence of clump mass or size on the distance to the IRDC and conclude that blending of clumps does not have a great effect on the mass sensitivity.\n\nTypical low-mass star forming cores range in size from 0.03 to 0.1~pc \\citep{BerginTafalla_ARAA2007}. If one were to observe such objects at 4~kpc, they would only subtend a few arcseconds. For example, if L1544, a prototypical pre-stellar core, resided at the typical distance to the IRDCs in the sample, it would show sufficient absorption \\citep[based on reported column density measurements by][]{bacmann_iso} against the Galactic background, but according to \\citet{Williams_2006}, would subtend 3$''$ in diameter at our fiducial 4~kpc distance, which is very close to our detection limit. In addition, very low mass clumps could be blended into any extended low-density material that is included in our absorption measurement. These effects should limit our sensitivity to the very low-mass end of our clump mass function. \n\nTo first order, we have shown distance is not a major factor because the high resolution offered by {\\em Spitzer} improves our sensitivity to small structures. However, infrared-dark clouds are forming star clusters and by nature are highly structured and clustered. As such, we cannot rule out significant line-of-sight structure. Since independent clumps along the line-of-sight might have distinct characteristic velocities, the addition of kinematical information from high-resolution molecular data (Ragan et al., in prep.) will help disentangle such structure.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,scale=0.4]{f21}\n\\end{center}\n\\caption{\\footnotesize{Top: The range in clump mass as a function of distance. 
The median clump mass for each IRDC in the sample is indicated with a diamond. Bottom: The range in clump size as a function of distance. The median clump size for each IRDC in the sample is indicated with a diamond. The resolution limit is plotted as a solid line, and it shows the boundary at which {\\tt clumpfind} defines a ``clump'' for an object at the distance of the indicated host IRDC.}}\n\\vspace{0.2in}\n\\label{fig:masssens}\n\\end{figure}\n\n\\section{Mass Function}\n\\label{mf}\n\nA primary goal of this study is to explore the mass function of clumps in infrared-dark clouds and compare it to that of massive star formation regions, local star formation regions, and the stellar IMF. We note that there is some ambiguity in the literature about the ``clump'' versus the ``core'' mass functions. In the following description, a ``core'' mass function refers to the mass spectrum of objects with masses in the ``core'' regime (10$^{-1}$-10$^1$~$M_{\\sun}$, 10$^{-2}$-10$^{-1}$~pc), and a ``clump'' mass function to that of objects in the larger ``clump'' regime (10$^{1}$-10$^{3}$~$M_{\\sun}$, 10$^{-1}$-10$^{0}$~pc), as summarized in \\citet{BerginTafalla_ARAA2007}. Here we present the infrared-dark cloud clump mass function. We describe the relevance of this result in the context of Galactic star formation and discuss several methods we use to test its validity.\n\n\\subsection{Mass Function in Context}\n\\label{context}\n\nA fundamental property of the star formation process is the mass spectrum of stars, and, more recently, the mass function of pre-stellar objects. The mass spectrum in either case is most typically characterized by a power law, taking the form $dN\/dM \\propto M^{-\\alpha}$, known as the differential mass function (DMF). In other contexts, the mass function can be described as a function of the logarithm of mass, conventionally presented as $dN\/d(\\log M) \\propto M^{\\Gamma}$, in which case $\\Gamma=-(\\alpha-1)$. 
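The relation between the two conventions can be verified numerically: binning a synthetic power-law sample in equal logarithmic bins and fitting both forms recovers Gamma = -(alpha - 1). A minimal sketch, in which the sample size, mass range, and binning are arbitrary choices:

```python
import numpy as np

def sample_power_law(alpha, m_min, m_max, n, rng):
    """Draw n masses from dN/dM ~ M^-alpha by inverse-transform sampling."""
    a = 1.0 - alpha
    u = rng.random(n)
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def fit_slopes(masses, bins=25):
    """Fit alpha (dN/dM ~ M^-alpha) and Gamma (dN/dlogM ~ M^Gamma)
    from the same histogram taken in equal logarithmic bins."""
    log_m = np.log10(masses)
    edges = np.linspace(log_m.min(), log_m.max(), bins + 1)
    counts, _ = np.histogram(log_m, bins=edges)
    centers = 0.5 * (edges[1:] + edges[:-1])
    widths = 10.0 ** edges[1:] - 10.0 ** edges[:-1]   # linear bin widths dM
    good = counts > 0
    # slope of log10(dN/dM) against log10(M) gives -alpha
    alpha = -np.polyfit(centers[good], np.log10(counts[good] / widths[good]), 1)[0]
    # the constant bin width in log M drops out of the dN/dlogM slope
    gamma = np.polyfit(centers[good], np.log10(counts[good]), 1)[0]
    return alpha, gamma
```

For a sample drawn with alpha = 1.76, for instance, the fitted Gamma comes out at -(alpha - 1) = -0.76, as required by the identity.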
In the results that follow, we present the slope of the clump mass function in terms of $\\alpha$.\n\nA commonly used method for studying mass functions of pre-stellar cores is observation of dust thermal continuum emission in nearby star-forming clouds. Cold dust emission is optically thin at millimeter and sub-millimeter wavelengths, and can therefore be used as a direct tracer of mass. A number of surveys of local clouds \\citep[e.g.][]{Johnstone_Oph, motte_rhooph} have been performed with single-dish telescopes covering large regions in an effort to get a complete picture of the mass distribution of low-mass clouds. This is an extremely powerful technique, but as \\citet{Goodman_col} demonstrate, it suffers from some limitations, chief among them poor spatial resolution (in single-dish studies), the required knowledge of dust temperatures \\citep{Pavlyuchenkov_2007}, and insensitivity to diffuse extended structures. \n\nAnother technique that has been used to map dust is near-infrared extinction mapping \\citep{alves_cmfimf, Lombardi_pipe}, which is a way of measuring $A_V$ due to dark clouds by probing the color excesses of background stars \\citep{Lombardi_NICER}. This method is restricted to nearby regions of the Galaxy because of sensitivity limitations and the intervention of foreground stars, both of which worsen with greater distance. Also, the dynamic range of $A_V$ in such studies is limited to $\\sim$1-60 \\citep{Lombardi_NICER}, while our technique probes from $A_V$ of a few to $\\sim$100.\n\nThe dust-probing methods mentioned above, both thermal emission from the grains and extinction measures using background stars, often find a core mass function (CMF) that is similar in shape to the stellar initial mass function (IMF), as described by \\citet{Salpeter_imf}, where $\\alpha=2.35$ ($\\Gamma=-1.35$), or \\citet{Kroupa_imf}. 
This potentially suggests a one-to-one mapping between the CMF and IMF, perhaps scaled by a constant ``efficiency'' factor \\citep[e.g.][]{alves_cmfimf}. Also, both techniques are difficult to apply to regions such as infrared-dark clouds due to their much greater distance. As we show in $\\S$\\ref{structure}, absorbing structure exists below the spatial resolution limit of single-dish surveys. Sensitivity limitations and foreground contamination preclude use of extinction mapping to probe IRDCs.\n\nStructural analyses using emission from CO isotopologues find a somewhat different character to the distribution of mass in molecular clouds. \\citet{Kramer_CMF} determined that the clump mass function in molecular clouds follows a power law with $\\alpha$ between 1.4 and 1.8 ($-0.8 < \\Gamma < -0.4 $). This is significantly shallower than the Salpeter-like slope for clumps found in works using dust as a mass probe. This disagreement may be due to an erroneous assumption about one technique or the other, or it may be that the techniques are revealing how the fragmentation process proceeds from large scales, probed by CO, to small scales, probed by dust. Another possible explanation is that most of the objects in \\citet{Kramer_CMF} are massive star forming regions, and star formation in these regions may be intrinsically different from that in typical regions studied in the local neighborhood (e.g. Taurus, Serpens).\n\nSub-millimeter observations of more distant, massive star-formation regions have been undertaken \\citep[e.g.][]{reid_2, Li_orion, mookerjea2004, rathborne2006} with a mixture of results regarding the mass function shape. \\citet{rathborne2006}, for example, performed IRAM observations of a large sample of infrared-dark clouds. Each cloud in that sample is comprised of anywhere from 2 to 18 cores with masses ranging from 8 to 2000~$M_{\\sun}$. They find a Salpeter-like ($\\alpha$~$\\sim$2.35) mass function for IRDC cores. 
However, our Spitzer observations reveal significant structure below the spatial resolution scales of \\citet{rathborne2006}. As we will show (see Section~\\ref{mf}), the mass function within a fragmenting IRDC is shallower than Salpeter and closer to the mass function derived from CO emission.\n\nGiven the strong evidence for fragmentation, it is clear that IRDCs are the precursors to massive clusters. We then naturally draw comparisons between the characteristics of fragmenting IRDCs and the nearest region forming massive stars, Orion. At $\\sim$500~pc, it is possible to resolve what are likely to be pre-stellar objects in Orion individually with current observational capabilities. With the high resolution of our study, we can examine star formation regions (IRDCs) at a similar level of detail as single-dish telescopes can survey Orion. For example, we detect structures on the same size scale ($\\sim$0.03~pc) as the quiescent cores found by \\citet{Li_orion} in the Orion Molecular Cloud; however, the most massive core in their study is $\\sim$50~$M_{\\sun}$. These cores account for only a small fraction of the total mass in Orion.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.3]{f22}\n\\end{center}\n\\caption{\\footnotesize{\nDifferential mass function of the ensemble IRDC sample. Black filled circles indicate results of the {\\tt clumpfind} technique, and the green open triangles denote the results of the {\\tt gaussclumps} clump-finding method. The fits are broken power laws. 
On the high-mass end, the slope of the {\\tt gaussclumps} method mass function ($\\alpha=1.15\\pm0.04$) is shallower than the slope of the {\\tt clumpfind} mass function ($\\alpha=1.76\\pm0.05$).}}\n\\label{fig:cfgccompareMF}\n\\end{figure}\n\n\n\\subsection{Results: Differential Mass Function}\n\\label{dmf}\n\nWe use the IRDC clump masses calculated in $\\S$\\ref{structure} (using {\\tt clumpfind} and foreground estimation method A) to construct an ensemble mass function in Figure~\\ref{fig:cfgccompareMF}. The mass function that results from using foreground estimation method B is shifted to lower masses by a factor of 2, but the shape is identical. Because IRDCs appear to be in a roughly uniform evolutionary state over the sample (i.e. they are all likely associated with the Molecular Ring, and they possess similar densities and temperatures), we merge all the clumps listed in Table~6 as an ensemble and present a single mass function for all the objects at a range of distances. This assumes that the character of the mass function is independent of the distance to a given IRDC. Recall that we see no evidence (see Figure~\\ref{fig:masssens}) for the mass distributions to vary significantly with distance.\n\nFor the calculation of the errors in the DMF we have separately accounted for the error in the mass calculation and the counting statistics. We used a method motivated by \\citet{reid_1} to calculate the mass error. We have assumed that the clump mass error is dominated by the systematic uncertainty of 30\\% in the optical depth to mass correction. For each clump we have randomly sampled a Gaussian probability function within the 1$\\sigma$ envelope defined by the percentage error. With these new clump masses we have re-determined the differential mass function. This process is repeated 10$^4$ times, and the standard deviation of the DMF induced by the error in the mass is calculated relative to the original DMF. 
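This resampling step can be sketched as follows. The bin edges, trial count, and seed below are placeholders, and we approximate the sampling with an unbounded Gaussian of width 30% of each mass:

```python
import numpy as np

def dmf_mass_errors(masses, edges, frac_err=0.30, n_trials=10_000, seed=0):
    """Monte Carlo error on a binned DMF from a fractional mass uncertainty.

    Each trial perturbs every clump mass by a Gaussian of width
    frac_err * mass and rebins; the per-bin standard deviation over
    all trials estimates the error induced by the mass uncertainty.
    """
    rng = np.random.default_rng(seed)
    counts = np.empty((n_trials, len(edges) - 1))
    for i in range(n_trials):
        perturbed = masses * (1.0 + frac_err * rng.standard_normal(masses.size))
        counts[i], _ = np.histogram(perturbed, bins=edges)
    return counts.std(axis=0)
```

The counting error per bin, the square root of the unperturbed counts, can then be combined in quadrature, e.g. with `np.hypot(dmf_mass_errors(masses, edges), np.sqrt(np.histogram(masses, bins=edges)[0]))`.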
This error is added in quadrature to the error introduced by counting statistics. The quoted errors are 1$\\sigma$, with the caveat that the value assumed for the systematic uncertainty is open to debate. Consequently, when there are large numbers of objects in a given mass bin, the error is dominated by the mass uncertainty; conversely, when there are few objects in a mass bin, the error is dominated by counting.\n\nThe IRDC clump mass function for this sample spans nearly four orders of magnitude in mass. We fit the mass function with a broken power law weighted by the uncertainties. At masses greater than $\\sim$40~$M_{\\sun}$, the mass function is fit with a power law of slope $\\alpha$=1.76$\\pm$0.05. Below $\\sim$40~$M_{\\sun}$, the slope becomes much shallower, $\\alpha$=0.52$\\pm$0.04. We also include in Figure~\\ref{fig:cfgccompareMF} the mass function of clumps found with the {\\tt gaussclumps} algorithm, with errors calculated in the identical fashion. Performing fits in the equivalent mass regimes results in a shallower slope for masses greater than 40~$M_{\\sun}$ ($\\alpha$=1.15$\\pm$0.04), while the behavior at low masses is similar. As discussed in $\\S$\\ref{structure}, the clumps found with {\\tt clumpfind} and {\\tt gaussclumps} are in good agreement in the central region of each IRDC, but tend to disagree on the outskirts. This is a consequence of the failure of {\\tt gaussclumps} to model the varying background. Examination of the images reveals that the contribution of the diffuse material varies across the image, thereby setting the background level too high for outer clumps (where the envelope contributes less) to be detected. In fact, these clumps appear to lie preferentially in the 30 to 500~$M_{\\sun}$ range, and a mass function constructed with the {\\tt gaussclumps} result is significantly shallower than that derived with {\\tt clumpfind} (see Figure~\\ref{fig:cfgccompareMF}). 
We conclude that {\\tt gaussclumps} is not suited to identifying structure away from the central region of the IRDC, where the envelope level is below the central level. This is further supported by the wavelet analysis, which is capable of accounting for a variable envelope contribution. It is worth noting that, for the one IRDC for which we have the wavelet analysis, the slope of the derived mass function shows little appreciable change and agrees with the {\\tt clumpfind} result. \n\nTo put the mass function into context with known Galactic star formation, we plot the clump mass function of all clumps in our sample in Figure~\\ref{fig:literature_MF} along with the core\/clump mass functions of a number of other studies probing various mass ranges. We select four studies, each probing massive star forming regions at different wavelengths and resolutions, including quiescent cores in Orion \\citep{Li_orion}, clumps in M17 \\citep{reid_2}, clumps in RCW 106 \\citep{mookerjea2004}, and clumps in the massive star formation region NGC 6334 \\citep{munoz_ngc6334}. Each study presents the mass function in a different way, making it difficult to compare the results directly to one another. Here, we recompute the mass function for the published masses in each work uniformly (including the treatment of errors, see above). Each of the mass functions is fit with a power law. Figure~\\ref{fig:literature_MF} highlights the uniqueness of our study in that it spans a much larger range in masses than any other study to date. \n\nAt the high-mass end, the mass function agrees well with the \\citet{mookerjea2004} and \\citet{munoz_ngc6334} studies, which probed to lower mass limits of 30~$M_{\\sun}$ and 4~$M_{\\sun}$, respectively. The fall-off from the steep slope at the high mass end to a shallower slope at the low mass end immediately suggests that completeness, enhanced contribution from the envelope, and\/or clump blending become an issue. 
However, the slope at the low mass end compares favorably with \\citet{Li_orion} and \\citet{reid_2}, which probe mass ranges of 0.1 to 46~$M_{\\sun}$ and 0.3 to 200~$M_{\\sun}$, respectively. In addition to the general DMF shape at both the high mass and low mass ends, the ``break'' in the mass function falls in the 10~$M_{\\sun}$ to 50~$M_{\\sun}$ range for the ensemble of studies, including ours. If this is a real feature of the evolving mass spectrum, it can shed some light on the progression of the fragmentation process from large, massive objects to the numerous low-mass objects like we see in the local neighborhood. The characteristic ``break'' mass could also be a superficial artifact of differences in binning, mass determination technique, and observational sensitivity. Our study is the only one that spans both mass regimes, and further such work is needed to explore the authenticity of this feature. However, in $\\S$\\ref{conclusion} we speculate that this may be an intrinsic feature. \n\nIt is possible that the slope of the IRDC clump mass function might be an artifact of a limitation in our technique. With the great distances to these clouds, one would expect the effect of clump blending to play a role in the shape of their mass spectrum. We have shown in $\\S$~\\ref{sens} that distance does not dramatically hinder the detection of small clumps. Our study samples infrared-dark clouds from 2.4~kpc to 4.9~kpc, and we find that the number of clumps does not decrease with greater distance, nor does the median mass tend to be significantly greater with distance. Furthermore, with the present analysis, we see no evidence that including clumps from IRDCs at various distances affects the shape of the mass function.\n\nFrom past studies of local clouds there has been a disparity between the mass function slopes derived from dust emission and from CO \\citep[e.g. compare][]{Johnstone_OrionB, Kramer_CMF}. 
Our result suggests that massive star forming regions have mass functions with slopes in good agreement with those derived from CO isotopologues, e.g. $\\alpha$=1.8. This comparison is crucial because CO observations contain velocity information, which allows the clumps to be decomposed along the line-of-sight; even with this advantage, those authors find a shallow slope in agreement with ours. We conclude that clump blending, while unavoidable to some extent, does not skew the shape of the mass function as derived from dust emission or absorption. A close look at the \\citet{Kramer_CMF} results finds that the majority of objects studied are massive star formation regions. Given the general agreement of the clump mass function of this sample of IRDCs with other studies of massive star formation regions, we believe this result represents the true character of these objects, not an artifact of the observing technique. \n\nSeveral studies of pre-stellar cores in the local neighborhood show a mass distribution that mimics the shape of the stellar IMF. That the slope of the mass function in infrared-dark clouds is considerably shallower than the stellar IMF should not be surprising. The masses we estimate for these clumps are unlikely to give rise to single stars. Instead, the clumps themselves must fragment further and eventually form a star cluster, likely containing multiple massive stars. Unlike Orion A, for example, which contains $\\sim$10$^4$~$M_{\\sun}$ distributed over a 380 square parsec (6.2 square degrees at 450~pc) region \\citep{Carpenter2000}, in IRDCs a similar amount of mass is concentrated in clumps extending over only a 1.5 square parsec area. 
Therefore, we posit that IRDCs are not distant analogues of Orion, but more compact complexes capable of star formation on a more massive scale.\n\nThe high masses estimated for infrared-dark clouds, combined with the lack of evidence for the massive stars they must form, perhaps indicate that we see them {\\em because} we are capturing them just before the onset of star formation. Such a selection effect would mean that we preferentially observe these dark objects because massive stars have yet to drastically disrupt their natal cloud in the process of protostar formation. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.3]{f23}\n\\end{center}\n\\caption{\n\\footnotesize{ \nDifferential mass function of this IRDC sample (black filled circles) fit with a single power law for M$_{clump}>30M_{\\sun}$ ($\\alpha = 1.76\\pm0.05$), compared with various star formation regions in the high mass regime and their respective single power-law fit slopes. At the high mass end, our fit agrees well with that of other studies: {\\it Open purple diamonds} from \\citet{munoz_ngc6334} ($\\alpha = 1.64\\pm0.06$); {\\it Open green inverted triangles} from \\citet{mookerjea2004} ($\\alpha = 1.59\\pm0.10$). At the low mass end, we fit a second power law for the bins with M$_{clump}<30M_{\\sun}$ ($\\alpha = 0.52\\pm0.04$), which agrees well with other studies in this mass regime: {\\it Open blue diamonds} from \\citet{reid_2} ($\\alpha = 0.80\\pm0.07$); {\\it Open red circles} for quiescent Orion cores from \\citet{Li_orion} ($\\alpha = 0.82\\pm0.09$). Note that only this study spans the entire range of masses, so the reality of the apparent break at $\\sim$30$M_{\\sun}$ is in question. 
}}\n\\label{fig:literature_MF}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\hbox{\n\\vspace{1.0cm}\n\\hspace{-1cm}\n\\psfig{figure=f24a.eps,height=9cm}\n\\hspace{-4cm}\n\\psfig{figure=f24b.eps,height=9cm}\n}\n\\vspace{-1.0cm}\n\\end{center}\n\\caption{\\footnotesize{Left: The mass-radius relationship for {\\tt clumpfind} clumps (foreground method A) in the entire sample of IRDCs (gray), with the clumps found only in G024.05$-$0.22 highlighted in black, and the clumps found in the wavelet-subtracted image (red). The solid line denotes the critical Bonnor-Ebert mass-radius relation for T$_{internal}$=15~K. The dashed line is the M~$\\propto$~R$^{2.2}$ relation from the \\citet{Kramer1996} CO multi-line study of Orion. The dash-dotted line is taken from \\citet{williams_clumpfind}, which finds M~$\\propto$~R$^{2.7}$. Right: The mass-radius relationship for IRDC clumps, including a comparison to all the studies of massive star forming regions included in Figure~\\ref{fig:literature_MF}.}}\n\\label{fig:massrad}\n\\end{figure*}\n\n\n\\subsection{The Contribution from the IRDC Envelope}\n\\label{envelope}\n\nLike nearby clouds, infrared-dark clouds are structured hierarchically, consisting of dense condensations embedded in a more diffuse envelope. Here we present several attempts to estimate the fraction of the total cloud mass that resides in dense clumps compared to the extended cloud. First, we use archival $^{13}$CO data to probe the diffuse gas and use it to estimate the envelope mass. To further explore the contribution of the envelope, we demonstrate that a wavelet analysis, a technique designed to remove extended structures from emission maps, gives a similar relationship between envelope and dense clump mass. Alternatively, applying the {\\tt gaussclumps} algorithm to the data provides an average threshold that describes the diffuse structure. 
\n\nWe use $^{13}$CO~(1-0) molecular line data from the Galactic Ring Survey \\citep{Jackson_GRS} in the area covered by our {\\em Spitzer} observations of G024.05$-$0.22 to probe the diffuse material in the field. The $^{13}$CO emission is widespread, covering the entire IRAC field; thus these data do not probe the full extent of the cloud. Assuming local thermodynamic equilibrium (LTE) at a temperature of 15~K and a $^{13}$CO abundance relative to H$_2$ of $4\\times10^{-6}$ \\citep{Goldsmith_Taurus}, we find that the clump mass is $\\sim$20\\% of the total cloud mass. This demonstrates that IRDCs are the densest regions of much larger molecular cloud complexes; moreover, because the full extent of the cloud is not probed with these data, this fraction is an upper limit.\n\nIn $\\S$\\ref{masserr}, we discuss two ways in which we account for the envelope in the clump-finding process. First, the {\\tt gaussclumps} algorithm is an alternative method of identifying clumps, and in $\\S$\\ref{dmf} we examine the effect this method has on the clump mass function. The algorithm is insensitive to clumps on the outskirts of the IRDC, thereby flattening the mass function. While {\\tt gaussclumps} may oversimplify the structure of the envelope for the purposes of identifying clumps, it does provide an envelope {\\em threshold}, above which optical depth peaks are fit as clumps and below which emission is subtracted. This threshold approximates the level of the envelope, and as a result, {\\tt gaussclumps} finds that 15-50\\% of the optical depth level is from the diffuse envelope. 
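The arithmetic behind the 13CO-based mass estimate above reduces to scaling a column-density map by the assumed abundance; a minimal sketch follows, where the uniform toy column density and pixel size are illustrative assumptions, not the GRS data:

```python
import numpy as np

# Toy column-density map of 13CO [cm^-2]; a real map would come from an
# LTE analysis of the survey spectra. Values here are illustrative only.
n13co = np.full((10, 10), 1.0e16)          # uniform 1e16 cm^-2 map
pixel_area_cm2 = (3.086e18 / 10.0) ** 2    # 0.1 pc pixels, in cm^2

X_13CO = 4.0e-6     # 13CO/H2 abundance assumed in the text
MU_H2 = 2.8         # mean molecular weight per H2 molecule (includes He)
M_H = 1.6726e-24    # hydrogen mass [g]
M_SUN = 1.989e33    # solar mass [g]

# N(H2) = N(13CO) / X; mass = sum over pixels of N(H2) * mu * m_H * area.
n_h2 = n13co / X_13CO
mass_g = (n_h2 * MU_H2 * M_H * pixel_area_cm2).sum()
mass_msun = mass_g / M_SUN
print(f"toy cloud mass = {mass_msun:.1f} Msun")
```

The clump-to-cloud mass fraction quoted above is then the ratio of this cloud mass to the summed {\tt clumpfind} masses over the same area.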
The wavelet subtraction technique results in clumps that are on average 90\\% less massive and smaller in size by 25\\% ($\\sim$0.02~pc) than those extracted from the unaltered map.\n\nThese analyses of the IRDC envelope show that our technique samples only 20-40\\% of the cloud's total mass and, at the same time, that the clump masses themselves include a contribution from the surrounding envelope. Because of these factors, the different methods for isolating ``clumps'' have varying levels of success. For example, the {\\tt gaussclumps} method allows us to parametrically remove the envelope component from each clump, but due to the underlying assumption of the baseline level, it misses many clumps that {\\tt clumpfind} identifies successfully. The mass function that results from using the {\\tt gaussclumps} method is shallower than that from {\\tt clumpfind}, as {\\tt gaussclumps} fails to find clumps on the periphery of the dominant (often central) concentration of clumps, where the envelope level is lower. \n\nWhile both the {\\tt clumpfind} and {\\tt gaussclumps} methods have their drawbacks, it is clear that IRDCs have significant structure on a large range of scales. The relatively shallow mass function for IRDC clumps and other massive star forming regions shows that there is a great deal of mass in large objects, and future work is needed to understand the detailed relationship between the dense clumps and their surroundings.\n\n\n\n\\section{Mass-Radius Relation}\n\nNext we investigate the relationship between the mass and size of the clumps found in IRDCs, which informs us of the overall stability of the clump structures. Figure~\\ref{fig:massrad} shows the mass-radius relationship of the {\\tt clumpfind}-identified clumps, highlighting the results for G024.05$-$0.22 and the wavelet-subtracted case. 
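The stability benchmark plotted as the solid line in the mass-radius figure is the critical Bonnor-Ebert mass, M(R) = 2.4 R a^2 / G; a rough numerical illustration is given below, where the 0.1 pc radius is an assumed example value rather than a measured clump size:

```python
# Critical Bonnor-Ebert mass, M_BE = 2.4 * R * a^2 / G, evaluated for the
# sound speed a ~ 0.2 km/s adopted in the text (T_internal = 15 K).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
PC = 3.086e16        # parsec [m]
M_SUN = 1.989e30     # solar mass [kg]

a = 0.2e3            # sound speed [m/s]
R = 0.1 * PC         # example clump radius [m] -- illustrative assumption

m_be = 2.4 * R * a**2 / G / M_SUN
print(f"M_BE(0.1 pc) = {m_be:.1f} Msun")
```

A 0.1 pc clump more massive than this critical value (roughly a few solar masses) cannot be supported by thermal pressure alone, which is the sense in which the clumps lying above the solid line are unstable.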
The clumps extracted from the wavelet-subtracted map are indeed shifted down in mass by 90\\% and down in size by 25\\%, but the relationship between the two quantities does not change. We plot the relation for simple self-gravitating Bonnor-Ebert spheres ($M(R) = 2.4 R a^2 \/ G$, where $a$ is the sound speed, set to 0.2~km~s$^{-1}$; solid line) and also the mass-radius relationship observed in a multi-line CO survey of Orion \\citep[M~$\\propto$~R$^{2.2}$,][]{Kramer1996} (dashed line). For comparison, Figure~\\ref{fig:massrad} also shows these properties from the other studies of massive star formation regions. We note that the spatial resolution of the comparison studies is coarser than that of this study. The relationships for Orion \\citep{Li_orion}, M17 \\citep{reid_2}, NGC 6334 \\citep{munoz_ngc6334} and RCW 106 \\citep{mookerjea2004} all agree with the \\citet{Kramer1996} relationship, which is consistent with the mass function agreement with CO studies (see $\\S$\\ref{dmf}). \n\nThe IRDC clumps are clearly gravitationally unstable, showing higher densities than their local Bonnor-Ebert sphere counterparts. The relationship for clumps in IRDCs shows a steeper trend, one closer to the \\citet{williams_clumpfind} relationship, $M~\\propto~R^{2.7}$. Also, dust extinction at 8~$\\mu$m has greater sensitivity to high densities than CO, which is known to freeze out at high densities. Hence, while the IRDC clumps are clearly Jeans unstable, the slope of the relation may simply be a reflection of the different mass probe used here.\n\n\\section{Discussion \\& Conclusion} \n\\label{conclusion}\n\nThe {\\em Spitzer Space Telescope} affords us the ability to probe a spatial regime of massive clouds in the Galactic Ring at a resolution comparable to that applied in the numerous studies of local, low-mass star formation. 
In this way, we can extend the frontier of detailed star formation studies to include regions the likes of which are not available in the solar neighborhood. This study demonstrates a powerful method for characterizing infrared-dark clouds, the precursors to massive stars and star clusters. These objects provide a unique look at the initial conditions of star formation in the Galactic Ring, the dominant mode of star formation in the Galaxy.\n\nWe present new {\\em Spitzer} IRAC and MIPS 24~$\\mu$m photometric measurements supplemented with 2MASS J, H, K$_s$ photometry of the distributed young stellar population observed in the Spitzer fields. Rigid color criteria are applied to identify candidate young stellar objects that are potentially associated with the infrared-dark clouds. In all, 308 young stellar objects were identified (see Table~2), seven of which are classified as embedded protostars. For those objects, we set lower limits on the infrared luminosities. One IRDC has an IRAS source in the field, which is the best candidate for an associated massive star. Otherwise, our observations provide no evidence for massive star formation in IRDCs, though sensitivity limitations prevent us from ruling out the presence of low mass stars and heavily extincted stars. Nebulosity at 8 and 24~$\\mu$m was detected in four of the fields, but when correlated with molecular data, these regions do not appear to be associated with the IRDCs. On average, 25\\% of clumps are in the vicinity of stars and $\\sim$10\\% are near YSOs, which are the most likely sources to be associated with the infrared-dark cloud. Most of the mass, however, is not associated with any indicator of star formation. This leads us to conclude that IRDCs are at an earlier stage than, say, the nearest example of massive star formation, the Orion Nebula, and these results are powerful clues to the initial conditions of star cluster formation. 
\n\nWe detail our method of probing mass in IRDCs using dust absorption as a direct tracer of column density. We perform the analysis using two different assumptions (methods A and B) for the foreground contribution to the 8~$\\mu$m flux. To validate our method in the context of others, we compare and find good agreement between the 8~$\\mu$m absorption and other tracers of dust, such as sub-millimeter emission from dust grains measured with SCUBA and N$_2$H$^+$ molecular line emission measured with FCRAO and BIMA. We show that distance does not play a role in the effectiveness of the technique. The high resolution {\\em Spitzer} observations allow us to probe the absorbing structures in infrared-dark clouds at sub-parsec spatial scales. We apply the {\\tt clumpfind} algorithm to identify independent absorbing structures and use the output to derive the mass and size of the clumps. Tens of clumps are detected in each IRDC, ranging in mass from 0.5 to a few $\\times$ 10$^3~M_{\\sun}$ with sizes from 0.02 to 0.3~pc in diameter. We also apply the {\\tt gaussclumps} algorithm to identify clumps. The structures in the central region of the IRDC correspond almost perfectly to the {\\tt clumpfind} result, but {\\tt gaussclumps} misses clumps on the outskirts because it fails to account for a spatially variable background level. \n\nThe existence of substructure -- from 10$^3$~$M_{\\sun}$ clumps down to 0.5~$M_{\\sun}$ ``cores'' -- indicates that IRDCs are undergoing fragmentation and will ultimately form star clusters. The typical densities (n $>$ 10$^5$ cm$^{-3}$) and temperatures (T~$<$~20 K) of IRDCs are consistent with massive star forming regions, but they lack the stellar content seen in more active massive star formation regions, such as the Orion molecular cloud or W49. The mass available in the most massive clumps, however, leads us to conclude that IRDCs will eventually form multiple massive stars. 
\n\nThe IRDC clump mass function, with slope $\\alpha = 1.76\\pm0.05$ for masses greater than $\\sim$40$M_{\\sun}$, agrees with the mass function we calculate based on data from other studies of massive objects. The mass function for both IRDCs and these massive clump distributions is shallower than the Salpeter-like core mass function reported in local regions. In fact, the IRDC clump mass function is more consistent with that found when probing molecular cloud structure using CO line emission ($\\alpha = 1.6-1.8$), again supporting the assertion that these objects are at an earlier phase of fragmentation. At the low-mass end ($M < 40M_{\\sun}$), we find a much shallower slope, $\\alpha = 0.52\\pm0.04$, which is somewhat flatter than found by other studies that cover the same range in masses. This could be due in part to incomplete sampling of the fields. Alternatively, the apparent flattening of the clump mass function around 40~$M_{\\sun}$ could indicate a transition between objects that will generate clustered star formation and those that give rise to more distributed star formation \\citep{AdamsMyers2001}.\n\nIRDC clumps are generally not in thermodynamic equilibrium, but rather are undergoing turbulent fragmentation. The mass spectrum is consistent with the predictions of gravoturbulent fragmentation of molecular clouds \\citep{Klessen2001}. The dynamic Molecular Ring environment could naturally be conducive to producing concentrated cluster-forming regions.\n\nJust as in all surveys of IRDCs to date, this study is subject to the blending of clumps, which could alter the shape of the mass function by over-representing the most massive clumps at the expense of lower-mass clumps. To the extent that this sample allows, we find that blending does not drastically affect the shape of the mass function. 
Other studies of cloud fragmentation that have the advantage of a third dimension of information also find a shallower clump mass function slope \\citep{Kramer_CMF}. We therefore conclude that this result is a true reflection of the structure in IRDCs and the nature of massive star formation. \n\nInfrared-dark clouds are already well-established candidates for the precursors to stellar clusters and exhibit significant structure down to 0.02~pc scales. The properties of IRDCs provide powerful constraints on the initial conditions of massive and clustered star formation. We suggest that the mass function is an evolving entity, with infrared-dark clouds marking one of the earliest stages of cluster formation. The mass distribution is top-heavy, with most of the mass in the largest structures. As the massive clumps fragment further, the mass function will evolve and become steeper. The clumps will ultimately fragment to the stellar scale and {\\em then} take on the Salpeter core mass function that has been observed so prevalently in local clouds. For example, following the (mostly) starless IRDC phase of cluster evolution, the mass spectrum will evolve into a steeper form, aligning with the mass function of local embedded clusters \\citep{Lada_araa03} or star clusters in the Large and Small Magellanic Clouds \\citep{Hunter2003}, with a slope $\\alpha\\sim2$. As fragmentation proceeds on smaller scales, the mass function will take on the still steeper character observed in core mass functions \\citep[e.g.][]{alves_cmfimf} and, ultimately, in stars \\citep[e.g.][]{Kroupa_imf}. \n\n\\acknowledgements\nSR is indebted to Doug Johnstone, Fabian Heitsch, Lori Allen and Lee Hartmann for useful suggestions on this work. SR and EB thank Darek Lis, Carsten Kramer and Joseph Weingartner for their invaluable assistance in the analysis. This research was supported by {\\em Spitzer} under program ID 3434. 
This work was also supported by the National Science Foundation under Grant 0707777.\n\n\\bibliographystyle{apj}\n