diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzggpg" "b/data_all_eng_slimpj/shuffled/split2/finalzzggpg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzggpg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe origin of ultra-high energy cosmic-rays is an unsolved mystery. Somewhere, astrophysical particle accelerators accelerate protons or heavier ions to energies above $10^{20}$ eV. Unfortunately, nuclear cosmic-rays are bent by interstellar magnetic fields, so their arrival directions on Earth do not point back to their sources. Despite more than 60 years of studies of ultra-high energy cosmic rays, we have not yet found definitive evidence of any specific source or source classes. One way to find these accelerators is to search for them using a different type of particle: the neutrino. Neutrinos are electrically neutral, so travel in straight lines, and they have small enough interaction cross-sections to escape from even dense sources. They can be produced when nuclei undergoing acceleration interact with either gas or photons in or near the accelerator. The number of neutrinos depends on the density of the gas or photons. \n\nBecause they interact so weakly, a large detector is needed to observe astrophysical neutrinos. Two types of calculations have been used to estimate the neutrino flux, and, from that the required detector size. One used the measured cosmic-ray flux and estimates of the beam-gas or beam-photon density. The maximum neutrino flux occurs when the source is just dense enough to absorb all of the energy from the proton beam; this is known as the Waxman-Bahcall limit \\cite{Bahcall:1999yr}. The other used the measured gamma-ray flux, assuming that the gamma-rays come from $\\pi^0\\rightarrow\\gamma\\gamma$. Both calculations found that a 1 km$^3$ detector should be large enough to find astrophysical neutrino sources. These results drove the size of the IceCube neutrino detector.\n\nIceCube consists of 1 km$^3$ of Antarctic ice at the South Pole, instrumented with 5,160 digital optical modules (DOMs) \\cite{Aartsen:2016nxy}. The DOMs observe the Cherenkov radiation emitted by the charged particles that are produced when neutrinos interact in the ice \\cite{Halzen:2010yj}. They are deployed on 86 vertical strings, each holding 60 DOMs. 78 of the strings are distributed on a 125 m triangular grid, covering about 1 km$^2$, with the DOMs spaced every 17 meters between 1450 and 2450 m below the surface. The remaining 8 strings, called ``Deep Core\" are deployed near the center of the array. They have smaller string-to-string and DOM-to-DOM spacings, giving Deep Core a lower energy threshold than the rest of the detector. IceCube was constructed between 2005 and 2010 by an international collaboration.\n\nEach DOM consists of a 25.4 cm photomultiplier tube in a glass pressure vessel, along with data acquisition, calibration and communications systems \\cite{Abbasi:2008aa}. The DOMs operate autonomously, receiving power and control signals and sending packetized digital data to the surface. A calibration system exchanges pulses with the surface, maintaining the DOM-to-DOM timing calibrations within 3 nsec. Thirteen on-board LEDs are used for PMT and inter-DOM calibrations, including to measure the optical properties of the Antarctic ice \\cite{Aartsen:2013rt}. \n\n\\section{Atmospheric muons and neutrino backgrounds}\n\nAstrophysical neutrino searches must contend with two types of backgrounds. 
The first is downward-going cosmic-ray muons, which are produced in cosmic-ray air showers. These are far more numerous than neutrinos, with IceCube triggering at about 2800 Hz, mostly from these muons. Two ways to avoid this background are to select upward-going tracks, since the Earth acts as a muon shield, or to select interactions that originate within the detector, with no sign of an incident track. Because atmospheric neutrinos are produced in cosmic-ray air showers, they are likely to be accompanied by air shower particles and cosmic-ray muons; these additional particles may be used to veto atmospheric neutrinos. \n\nThe second background comes from atmospheric neutrinos, i.e., neutrinos produced in cosmic-ray air showers. Conventional atmospheric neutrinos come from the decay of pions and kaons. These neutrinos are mostly $\\nu_\\mu$, with slightly more $\\nu$ than $\\overline\\nu$. Because IceCube cannot generally differentiate between $\\nu$ and $\\overline\\nu$, we will not further distinguish between them here. The conventional atmospheric neutrino energy spectrum depends on the cosmic-ray air shower spectrum. Pions and kaons are relatively long-lived, so they may interact in the atmosphere before they can decay. This competition between interaction and decay softens the neutrino energy spectrum, leading to a conventional spectrum that roughly goes as $dN_\\nu\/dE_\\nu \\propto E_\\nu^{-3.7}$ for $E_\\nu< 100$ TeV, softening to $dN_\\nu\/dE_\\nu \\propto E_\\nu^{-4.0}$ at higher energies. The competition also affects the angular distribution: near-horizontal mesons spend more time in the thin upper atmosphere and are more likely to decay before interacting, so conventional atmospheric neutrinos are concentrated around the horizon. IceCube uses calculations of particle production in the atmosphere to model neutrino production \\cite{Honda:2006qj}. As Fig. \\ref{fig:atmos} shows, the calculations are in good agreement with the $\\nu_\\mu$ \\cite{Aartsen:2017nbu} and $\\nu_e$ \\cite{Aartsen:2015xup,Aartsen:2012uu} data. \n\n\\begin{figure}[t]\n\\includegraphics[height=2.2in]{atmosphericnu}\n\\includegraphics[height=2.2in]{astroflux.png}\n\\caption{(Left) Atmospheric $\\nu_e$ and $\\nu_\\mu$ flux measurements, compared with the calculations used by IceCube. The red band shows the prompt flux, which has not yet been observed. From Ref. \\cite{Aartsen:2015xup}.\n(Right) The differential astrophysical flux measured using contained events (points) and a fit to that data (blue\/purple line and band), compared with the best fit obtained from through-going $\\nu_\\mu$ (pink line and band). From Ref. \\cite{Aartsen:2017mau}.\n}\n\\label{fig:atmos}\n\\end{figure}\n\nPrompt atmospheric neutrinos come from the decay of charm and bottom quarks. Because they are short-lived, they are unlikely to interact, so they follow the cosmic-ray air shower energy spectrum, and they are nearly isotropic. Prompt neutrinos have yet to be observed, but IceCube has set flux limits that are not too much higher than the theoretical predictions. \n\n\\section{Finding Astrophysical Neutrinos}\n\nMany approaches have been used to search for a small flux of astrophysical neutrinos above these large backgrounds. One approach is to search for point sources, which produce a local excess that stands out above the smooth backgrounds. Another is to search for particularly energetic neutrinos, since astrophysical neutrinos are expected to have a spectrum that roughly goes as $dN_\\nu\/dE_\\nu \\propto E_\\nu^{-2.0}$, harder than that of the atmospheric neutrinos. A third approach is to search for downward-going neutrinos that are unaccompanied by cosmic-ray air showers and atmospheric muons. Finally, IceCube is also searching for $\\nu_\\tau$, which are very rare in air showers, coming only from $D_s^+$ and $B$-meson decays. In contrast, they are expected to make up 1\/3 of the astrophysical flux, since neutrino oscillations in transit convert $\\nu_\\mu$ and $\\nu_e$ into $\\nu_\\tau$. IceCube has yet to see a clear $\\nu_\\tau$ signal.\n\n
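As a back-of-the-envelope illustration of the 1\/3 expectation (assuming pion-dominated production, with $\\pi^+ \\rightarrow \\mu^+\\nu_\\mu$ followed by $\\mu^+ \\rightarrow e^+\\nu_e\\overline\\nu_\\mu$, plus the charge-conjugate chains), the flavor ratio at the source is\n\\[\n\\nu_e : \\nu_\\mu : \\nu_\\tau = 1 : 2 : 0 ,\n\\]\nand oscillations averaged over astronomical baselines convert this to approximately $1:1:1$ at Earth.\n\n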
The first hint of an astrophysical signal came from two neutrinos, Bert and Ernie, that were found in a search for extremely-high energy neutrinos \\cite{Aartsen:2013bka}. Each was a well-contained cascade with an energy around 1 PeV, an essentially golden event. The predicted atmospheric background was $0.082\\pm 0.004$ (stat.) $^{+0.06}_{-0.04}$ events. \n\nThese events prompted a search for more events that originated within the detector. This ``High-Energy Starting Event\" (HESE) analysis divided the detector into an outer veto region (covering the top 10 DOMs in most strings, all of the outer strings, and the bottom DOMs in most strings) and an inner signal region. A dusty layer in the middle of the detector and a surrounding buffer were also included in the veto region, to eliminate muon tracks that entered undetected. It selected events that deposited more than 6,000 observed photoelectrons in the detector, but where the first significant deposition was in the signal region. The two-year HESE search found strong evidence for astrophysical neutrinos, while the three-year search crossed the $5\\sigma$ discovery threshold. Here, I discuss a newer search which found 82 events in 2078 live days (over 6 years) of data \\cite{Aartsen:2017mau}. The expected background from downward-going muons was $25\\pm 7$ events, determined by adding a second, inner veto layer and comparing the pass rates in the two layers. The estimated conventional atmospheric neutrino background was $16^{+11}_{-4}$ events, including prompt neutrinos, which were constrained by a previous IceCube $\\nu_\\mu$ study \\cite{Aartsen:2013eka}.\n\nAn additional cut to remove most of the muon background required that events have more than 60 TeV of energy deposited in the detector. The astrophysical component was then fit to a power law $dN_\\nu\/dE_\\nu \\propto E_\\nu^{-\\gamma}$, and the best fit value $\\gamma= 2.92\\pm 0.3$ was found. This index is somewhat softer than expected; most models based on Fermi acceleration predict $\\gamma \\approx 2$. The arrival distribution of the events was consistent with isotropy, as expected.\n\nAn independent search for astrophysical neutrinos used energetic upward-going, through-going muons. Through-going muons offer very good angular resolution, but poor energy resolution, because of the uncertainty about how far outside the detector the neutrino interacted. A fit to the measured muon energy spectrum found a clear $5.6\\sigma$ excess over atmospheric expectations \\cite{Aartsen:2016xlq} with a best-fit spectral index $\\gamma = 2.2\\pm 0.1$. The index is consistent with Fermi acceleration, but in significant tension with the HESE sample. However, as Fig. \\ref{fig:atmos} shows, the through-going muon analysis samples more energetic neutrinos than the HESE analysis. In the energy region where the two samples overlap, the flux estimates agree reasonably well. One possible explanation is that the energy spectrum is not a single power law. 
However, fits to the HESE sample do not show a preference for a double power law. Another possible explanation is that the track and cascade energy spectral indices are different; this would likely point to an energy-dependent acceleration mechanism, a non-standard oscillation scenario, or a non-standard acceleration scenario.\n\nTo study these possibilities, we performed two tests on a separate sample of contained events \\cite{Aartsen:2018vez}. It consisted of 2650 starting tracks and 965 cascade events, selected using slightly different criteria, to extend the analysis to lower energies while still removing the atmospheric muon background. The starting tracks were subjected to a new energy reconstruction method which used machine learning to separately reconstruct the cascade and track energies, and, from them, determine the visible inelasticity. The inelasticity is discussed below, but here we present two astrophysical neutrino results. First, we perform a fit to the conventional, prompt and astrophysical neutrino fluxes, but we allow the astrophysical fluxes to vary separately in the cascade and track samples. This fit finds $\\gamma=2.43^{+0.28}_{-0.30}$ for the track fit and $\\gamma= 2.62\\pm 0.08$ for the cascades, compared to $\\gamma=2.62 \\pm 0.07$ for the combined sample. The cascade and combined-sample spectral indices are consistent with previous measurements, while the track spectrum is in between the HESE and through-going muon results, but with large enough error bars to encompass both. A second fit to the sample allowed the astrophysical flavor ratio to vary from the standard $\\nu_e:\\nu_\\mu:\\nu_\\tau = 1:1:1$. Figure \\ref{fig:flavor} (left) shows the result of this fit as the relative likelihood of different flavor ratios. 100\\% $\\nu_\\mu$ and 100\\% $\\nu_e$ are ruled out at greater than $5\\sigma$, but the analysis cannot differentiate between different standard acceleration models. This study uses tracks and cascades of comparable energies, so it is less dependent on the neutrino energy spectrum than the previous global fit \\cite{Aartsen:2015knd}.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[height=2.2in]{flav_scan}\n\\includegraphics[height=2.2in]{split_fit_inel}\n\\caption{(Left) The neutrino flavor triangle, showing the allowed regions based on the comparison of starting tracks and cascades. (Right) The measured mean inelasticity as a function of neutrino energy, compared with the standard-model expectations. \nBoth from Ref. \\cite{Aartsen:2018vez}. \n } \n\\label{fig:flavor}\n\\end{figure}\n\n\\section{Point source searches}\n\nIceCube has made many searches for neutrinos from different point sources, and from different classes of objects. So far, the only statistically significant positive result is from the blazar TXS 0506+056 \\cite{IceCube:2018dnn}. On Sept. 22, 2017, IceCube observed a neutrino which was energetic enough to be of likely astrophysical origin. IceCube therefore issued a rapid alert, which led several observatories to perform targeted observations in that direction. Data from the Fermi telescope showed that the position was consistent with a known blazar which was emitting gamma-rays with an energy above 1 GeV. The blazar was in an active state when the neutrino was observed, with higher than average gamma-ray emission. Observations from the MAGIC telescope showed that the source was also emitting photons with energies above 100 GeV. A subsequent search in archival IceCube data showed that the source had emitted a burst of neutrinos during the period Sept. 2014 to March 2015 \\cite{IceCube:2018cha}. Although confirmation is needed, it appears that we have finally located at least one astrophysical particle accelerator. \n\n\\section{Particle and Nuclear Physics with IceCube}\n\nIceCube can use the energetic neutrinos that it observes to study neutrino interactions at energies far above the reach of particle accelerators. One analysis used the zenith angle distributions of neutrinos observed in IceCube to study neutrino absorption in the Earth, and, from that, measure the neutrino-nucleon cross-section \\cite{Aartsen:2017kpd}. It used a two-dimensional fit to the zenith angle and muon energy distribution, where the cross-section was a free parameter in the fit. Neutral-current interactions were included by treating absorption as a two-dimensional problem: neutrino energy entering the Earth, and neutrino energy observed in IceCube. Near-horizontal neutrinos provided a nearly absorption-free baseline. The fit assumed that the charged-current and neutral-current interactions were a single multiple, $R$, of the standard model cross-sections \\cite{CooperSarkar:2011pa}, and found $R=1.30^{+0.21}_{-0.19}$ (stat.)$^{+0.39}_{-0.43}$ (syst.) for energies from 6.3 to 980 TeV.\n\n
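The principle behind this measurement can be illustrated with a simplified relation (ignoring the neutral-current energy degradation and the detailed Earth density profile, both of which the analysis does treat): the survival probability for a neutrino of energy $E_\\nu$ crossing the Earth at zenith angle $\\theta$ is approximately\n\\[\nP_{\\rm surv}(E_\\nu,\\theta) \\approx \\exp\\left[-R\\,\\sigma_{\\rm SM}(E_\\nu)\\,N_A\\,X(\\theta)\\right],\n\\]\nwhere $N_A$ is Avogadro's number and $X(\\theta)$ is the column depth, in g\/cm$^2$, along the chord. Because $R$ rescales the exponent, the observed zenith-angle distribution directly constrains $R$.\n\n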
A second analysis used the starting-track study mentioned above to measure the inelasticity distribution of neutrinos with energies from 1 TeV up to above 100 TeV \\cite{Aartsen:2018vez}. Figure \\ref{fig:flavor} (right) shows the mean inelasticity, $\\langle y \\rangle$, as a function of energy. The mean inelasticities are in good agreement with the standard model predictions \\cite{CooperSarkar:2011pa}.\n\n\\section{Conclusions}\n\nIceCube has measured a strong diffuse neutrino flux. In the energy region where they overlap, two independent methods show some tension in the spectral index, but give similar flux measurements. We have observed an energetic, likely-astrophysical neutrino from the direction of the blazar TXS 0506+056, emitted during a time when the blazar was in outburst. Using archival data, we then found one period, from Sept. 2014 to March 2015, when the source was emitting a significant flux of neutrinos. Together, the two observations point to this blazar as an astrophysical particle accelerator.\n\nIceCube has used its sample of high-energy neutrinos to study neutrino interactions at energies far above those accessible at accelerators. Two analyses have measured the neutrino-nucleon cross-section and the inelasticity distribution for charged-current interactions and found them in good agreement with the standard model.\n\nLooking ahead, we expect to collect more data and extend these studies to higher energies with a future Gen-2 upgrade \\cite{Ackermann:2017pja}.\n\n\\section*{Acknowledgments}\nThis work was supported in part by the U.S. National Science Foundation under grant PHY-1307472 and by the U.S. Department of Energy under contract number DE-AC02-05-CH11231.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\\begin{figure}[t]\n\\centering\n\\centerline{\\includegraphics[width=8.25cm]{mnist_none_hist_scatter.pdf}}\n\\caption{\nA single unit's activation histogram (upper three plots) and two randomly chosen units' activation scatter plots (lower three plots) for MNIST. 
For a 6-layer Multilayer Perceptron (MLP), the fifth layer's representation vectors calculated using 10,000 test samples were used to generate the plots. For the baseline model, a substantial overlap among different classes can be observed at the time of initialization as shown in (a). Even after 50 epochs of training, a substantial overlap can still be observed as shown in (b). When class information is used to regularize the representation shapes, the overlap is significantly reduced as shown in (c). Note that a slight correlation between each pair of classes can be observed in the scatter plot of (b), but not in that of (c) due to the use of cw-CR. The figures are best viewed in color.\n}\n\\label{fig:mnist_none_hist_scatter}\n\\end{figure}\n\nFor deep learning, a variety of regularization techniques have been developed by focusing on the \\textit{weight parameters}. A classic example is the use of L2 \\cite{hoerl1970ridge} and L1 \\cite{tibshirani1996regression} weight regularizers. They have been popular because they are easy to use, computationally light, and often result in performance enhancements. Another example is the parameter sharing technique that enforces the same weight values, as in Convolutional Neural Networks (CNNs). \nRegularization techniques that focus on the \\textit{representation} (the activations of the units in a deep network), however, have been less popular even though the performance of deep learning is known to depend heavily on the learned representation. \n\nFor representation shaping (regularization), some of the promising methods for performance and interpretability include \\cite{glorot2011deep,cogswell2015reducing,liao2016learning}.\n\\cite{glorot2011deep} considers increasing representational sparsity, \\cite{cogswell2015reducing} focuses on reducing covariance among hidden units, and \\cite{liao2016learning} forces parsimonious representations using k-means-style clustering. While all of them are effective representation regularizers, none of them explicitly use class information for the regularization. A few recent works \\cite{wen2016discriminative,belharbi2017neural,yang2018robust} do utilize class information, and their approaches are based on \\textit{hidden layer activation vectors}. The method of \\cite{belharbi2017neural} is computationally expensive because pair-wise dissimilarities need to be calculated among the same-class samples in each mini-batch. \n\nIn this work, two computationally light representation regularizers, cw-CR (class-wise Covariance Regularizer) and cw-VR (class-wise Variance Regularizer), that utilize class information are introduced and studied. We came up with the design ideas by observing typical histograms and scatter plots of deep networks as shown in Figure \\ref{fig:mnist_none_hist_scatter}. In Figure \\ref{fig:mnist_none_hist_scatter} (b), different classes substantially overlap even after the training is complete. If we directly use class information in regularization, as opposed to using it only for the cross-entropy cost calculation, we can specifically reduce such overlaps or pursue a desired representation characteristic. An example of cw-CR reducing class-wise covariance is shown in Figure \\ref{fig:mnist_none_hist_scatter} (c), and later we will show that cw-VR can notably reduce class-wise variance, resulting in minimal overlaps. The two class-wise regularizers are very simple and computationally efficient, and therefore they can be used as easily as the highly popular L1 or L2 weight regularizers. 
\n\\subsection{Our Contributions}\nThe contributions of this work can be summarized as follows.\n\n\\subsubsection{Introduction of three new representation regularizers} \nWe introduce two representation regularizers that utilize class information. cw-CR and cw-VR reduce per-class covariance and variance, respectively. In this work, their penalty loss functions are defined, and their gradients are analyzed and interpreted. Also, we investigate VR, which is cw-VR's all-class counterpart. Intuitively, reducing the variance of each unit's activations does not make sense unless it is applied per class, but we have tried VR for the sake of completeness and found that VR is useful for performance enhancement. cw-CR's all-class counterpart, CR, is analyzed as well, but CR turns out to be the same as DeCov, which was already studied in depth in \\cite{cogswell2015reducing}. \n\n\\subsubsection{Performance improvement with the new representation regularizers}\nRather than trying to beat a single state-of-the-art record, we performed an extensive set of experiments on the most popular datasets (MNIST, CIFAR-10, CIFAR-100) and architectures (MLP, CNN). Additionally, ResNet \\cite{he2016deep} was tested as an example of a sophisticated network, and an image reconstruction task using an autoencoder was tested as an example of a different type of task. We have tested a variety of scenarios with different optimizers, numbers of classes, network sizes, and data sizes. The results show that our representation regularizers outperform the baseline (no regularizer) and L1\/L2 weight regularizers for almost all the scenarios that we have tested. More importantly, class-wise regularizers (cw-CR, cw-VR) usually outperformed their all-class counterparts (CR, VR). Typically, cw-VR was the best-performing regularizer, and it achieved the best performance for the autoencoder task, too.\n\n\\subsubsection{Effects of representation regularization}\nThrough visualizations and quantitative analyses, we show that the new representation regularizers indeed shape representations in the ways that we have intended. The quantitative analysis of representation characteristics, however, indicates that each regularizer affects multiple representation characteristics together; therefore, the regularizers cannot be used to control a single representation characteristic without at least mildly affecting some of the others. \n\n\n\n\\section{Related Works}\n\n\\subsection{Regularization for Deep Learning}\nThe classic regularizers apply L2 \\cite{hoerl1970ridge} and L1 \\cite{tibshirani1996regression} \npenalties to the \\textit{weights} of models, and they are widely used for Deep Neural Networks (DNNs) as well. \n\\cite{wen2016learning} extended L1 regularizers by using group lasso to regularize \nthe structures of DNNs (i.e., filters, channels, filter shapes, and layer depth).\n\\cite{srivastava2014dropout} devised dropout, which randomly applies activation masking \nover the units.\nWhile dropout is applied in a multiplicative manner, \\cite{glorot2011deep} used L1 penalty \nregularization on the activations to encourage sparse representations.\nXCov proposed by \\cite{cheung2014discovering} minimizes the covariance between \nautoencoding units and label encoding units of the same layer such that \nrepresentations can be disentangled. \nBatch normalization (BN) proposed by \\cite{ioffe2015batch} exploits mini-batch statistics \nto normalize activations. 
It was developed to accelerate training by reducing \ninternal covariate shift, but it was also found to be a useful regularizer.\nIn line with batch normalization, weight normalization, developed by \\cite{salimans2016weight}, \nreparameterizes weight vectors to decouple their norm from their direction. \nLayer normalization proposed by \\cite{ba2016layer} is an RNN-friendly variant of batch normalization,\nwhich computes the mean and variance used for normalization from all of the summed\ninputs to the units in a layer on a single training case.\nThere are many other publications on regularization techniques for deep learning,\nbut we still do not fully understand how they really affect the performance. \nRecent work by \\cite{zhang2016understanding}\nshows that the traditional concept of controlling generalization error by regularizing the effective capacity does not apply to modern DNNs. \n\n\n\\subsection{Penalty Regularization on Representations}\nSome of the existing regularization methods explicitly shape representations by adopting a penalty regularization term.\nDeCov \\cite{cogswell2015reducing} is a penalty regularizer that minimizes the off-diagonals of a layer's representation covariance matrix. DeCov reduces co-adaptation of a layer's units by encouraging the units to be decorrelated. In this work, \nit is called CR (Covariance Regularizer) for consistent naming.\nA recent work \\cite{liao2016learning} used a clustering-based regularization that encourages parsimonious representations. In their work, similar representations in sample, spatial, and channel dimensions are clustered and used for regularization such that similar representations are encouraged to become even more similar. While their work can be applied to unsupervised as well as supervised tasks, our work utilizes a much simpler and computationally efficient method of directly using class labels during training, avoiding k-means-like clustering. \n\n\\subsection{Class-wise Learning}\nTrue class information has rarely been used directly in regularization methods.\nTraditionally, the class information has been used only for evaluating the correctness of\npredictions and for the relevant cost function terms. Some of the recent works, however, \nhave adopted the class-wise concept in more sophisticated ways. In those works, \nclass information is used as a switch or for emphasizing the discriminative aspects over different classes. \nAs an example, \\cite{li2008kernel} proposed a kernel learning method using class information to model the manifold structure. They modified locality preserving projection to be class-dependent. \\cite{jiang2011learning} \nadded label-consistent regularizers for learning a discriminative dictionary. \n\\cite{wen2016discriminative} developed a regularizer called center loss that reduces the activation vector distance between representations and their corresponding class centers for face recognition tasks.\n\\cite{yang2018robust} designed a loss function named prototype loss that improves the intra-class compactness of representations to enhance the robustness of CNNs.\nAnother recent work by \\cite{belharbi2017neural} directly uses class labels to encourage similar representations per class as in our work, but it is computationally heavy as explained earlier. \nBesides the pair-wise computation, two optimizers are used for handling the supervised loss term and the hint term separately. \nClass information is used for autoencoder tasks as well. 
\\cite{shi2016learning} implicitly reduced the intra-class variation of reconstructed samples by minimizing pair-wise distances among same-class samples.\nLike the strategies listed above, our cw-VR and cw-CR use class-wise information to control the statistical characteristics of representations. However, our methods are simple because only one optimizer is used, and computationally efficient because pair-wise computation is not required.\n\n\n\n\\section{Class-wise Representation Regularizers: cw-CR and cw-VR}\n\nIn this section, we first present basic statistics of representations. Then, three representation regularizers, cw-CR, cw-VR, and VR, are introduced with their penalty loss functions and gradients. Interpretations of the loss functions and gradients are provided as well. \n\n\\subsection{Basic Statistics of Representations}\n\\label{subsection:stats}\nFor layer $l$, the output activation vector of the layer is defined as \n$\\mathbf{z}_l = \\max(\\mathbf{W}^\\top_l \\mathbf{z}_{l-1} + \\mathbf{b}_l, 0)$ using the Rectified Linear Unit (ReLU)\nactivation function. Because we will be focusing on layer $l$ for most of the explanations, \nwe drop the layer index. \nThen, $z_i$ is the $i^{th}$ element of $\\mathbf{z}$ (i.e., the activation of the $i^{th}$ unit). \n\nTo use statistical properties of representations, we define the mean of unit $i$, $\\mu_i$, and the covariance \nbetween unit $i$ and unit $j$, $\\textit{c}_{i,j}$, using the $N$ samples in each mini-batch. \n\\begin{align}\n \\mu_i &= \\frac{1}{N} \\sum_n z_{i,n} \\label{eq:mean} \\\\\n \\textit{c}_{i,j} &= \\frac{1}{N} \\sum_n (z_{i,n} - \\mu_i)(z_{j,n} - \\mu_j) \\label{eq:covariance}\n\\end{align}\nHere, $z_{i,n}$ is the activation of unit $i$ for the $n^{th}$ sample in the mini-batch. \nFrom equation (\\ref{eq:covariance}), the variance of unit $i$ can be written as follows. \n\\begin{align}\n \\textit{v}_{i} &= \\textit{c}_{i,i} \\label{eq:variance}\n\\end{align}\nWhen class-wise statistics need to be considered, we choose a single label $k$ from the $K$ labels\nand evaluate the mean, covariance, and variance using only the data samples with true label $k$\nin the mini-batch. \n\\begin{align}\n \\mu_i^k &= \\frac{1}{|S_k|} \\sum_{n \\in S_k} z_{i,n} \\label{eq:mean_cw} \\\\\n \\textit{c}_{i,j}^k &= \\frac{1}{|S_k|} \\sum_{n \\in S_k} (z_{i,n} - \\mu_i^k)(z_{j,n} - \\mu_j^k) \\label{eq:covariance_cw} \\\\ \n \\textit{v}_{i}^k &= \\textit{c}_{i,i}^k \\label{eq:variance_cw}\n\\end{align}\nHere, $S_k$ is the set containing the indices of the samples whose true label is $k$, \nand $|S_k|$ is the cardinality of the set $S_k$.\n\n\\begin{table*}[t]\n\\caption{Penalty loss functions and gradients of the representation regularizers. All the penalty loss functions are normalized with the number of units ($I$) and the number of classes ($K$) such that the value of $\\lambda$ can have a consistent meaning. 
CR and cw-CR are standardized using the number of distinct covariance combinations.}\n\\centering\n\\begin{tabular}{rlrl}\n\t\t\\hline\n\t\t\\multicolumn{2}{c}{Penalty loss function} & \\multicolumn{2}{c}{Gradient} \\\\ \\hline\n\t\t$\\displaystyle{\\Omega}_{CR}$ & $\\displaystyle=\\frac{2}{I(I-1)}\\sum_{i\\neq j} (c_{i,j})^{2} $ & $\\displaystyle\\frac{\\partial{{\\Omega}_{CR}}}{\\partial{z_{i,n}}}$ & $\\displaystyle=\\frac{4}{NI(I-1)}\\sum_{j\\neq{i}}c_{i,j}(z_{j,n}-\\mu_{j})$ \\\\ \n\t\t\n\t\t$\\displaystyle{\\Omega}_{cw{\\text -}CR}$ & $\\displaystyle=\\frac{2}{KI(I-1)}\\sum_k \\sum_{i\\neq j} (c_{i,j}^{k})^{2} $ & $\\displaystyle\\frac{\\partial{{\\Omega}_{cw{\\text -}CR}}}{\\partial{z_{i,n}}}$ & $\\displaystyle=\\frac{4}{KI(I-1)|S_k|}\\sum_{j\\neq{i}}c_{i,j}^{k}(z_{j,n}-\\mu_{j}^{k}), n \\in S_k$ \\\\ \n\t\t\n\t\t$\\displaystyle{\\Omega}_{VR}$ & $\\displaystyle=\\frac{1}{I}\\sum_i v_{i}$ & $\\displaystyle\\frac{\\partial{{\\Omega}_{VR}}}{\\partial{z_{i,n}}}$ & $\\displaystyle=\\frac{2}{NI}(z_{i,n}-\\mu_{i})$ \\\\\n\t\t\n\t\t$\\displaystyle{\\Omega}_{cw{\\text -}VR}$ & $\\displaystyle=\\frac{1}{K I}\\sum_k \\sum_i v_{i}^k $ & $\\displaystyle\\frac{\\partial{{\\Omega}_{cw{\\text -}VR}}}{\\partial{{z}_{i,n}}}$ & $\\displaystyle =\\frac{2}{KI|S_k|}({z}_{i,n}-{\\mu}_{i}^{k}), n \\in S_k$ \\\\ \\hline\n\t\\end{tabular}\n\\label{table:loss_function}\n\\end{table*}\n\n\\subsection{cw-CR}\ncw-CR uses the off-diagonal terms of the per-class mini-batch covariance matrix of activations as the penalty term: ${\\Omega}_{cw{\\text -}CR}=\\sum_k \\sum_{i\\neq j} (c_{i,j}^{k})^{2}$. This term is added to the original cost function $J$, and the total cost function $\\widetilde{J}$ can be denoted as \n\\begin{align}\n \\widetilde{J}=J+\\lambda{\\Omega}_{cw{\\text -}CR}(\\mathbf{z}),\n\\end{align}\nwhere $\\lambda$ is the penalty loss weight ($\\lambda \\in [0, \\infty)$). The penalty loss weight balances the original cost function $J$ against the penalty loss term $\\Omega$. When $\\lambda$ is equal to zero, $\\widetilde{J}$ is the same as $J$, and cw-CR does not influence the network. When $\\lambda$ is a positive number, the network is regularized by cw-CR, and the performance is affected. In practice, we have observed that deep networks with too large a $\\lambda$ cannot be trained at all.\n\n\\subsection{cw-VR}\nA very intuitive way of enforcing distinct representations per class is to maximize the inter-class distances in the representation space. \nBecause the inter-class distance needs to be maximized, the corresponding penalty term can be inverted or multiplied by -1 before it is minimized with the original cost function. \nWe tried such approaches, but the optimization became unstable (failed to converge).\nAn alternative way is to reduce the intra-class (same-class) variance. By applying this idea, the penalty loss term of cw-VR can be formulated as ${\\Omega}_{cw{\\text -}VR}=\\sum_k \\sum_i v_{i}^k$. \n\nWhile designing cw-VR, we naturally arrived at VR, the all-class counterpart of cw-VR. VR minimizes the activation variance of each unit, and it is mostly the same as cw-VR except that it does not use the class information. We expected VR to hurt the performance of deep networks because it encourages all classes to have similar representations in each unit. VR, however, turned out to be effective and useful for performance enhancement. We provide a possible explanation in the Experiments section.\n\n
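To make the definitions concrete, the following NumPy sketch computes the cw-VR and cw-CR penalty losses of Table \\ref{table:loss_function} (a reference implementation for illustration only; our released code is in TensorFlow, and here $\\sum_{i\\neq j}$ is taken over distinct unit pairs):\n\\begin{verbatim}\nimport numpy as np\n\ndef cw_vr_penalty(z, y, num_classes):\n    # z: (N, I) layer activations, y: (N,) integer labels\n    # Omega_cw-VR = (1 \/ (K * I)) * sum_k sum_i v_i^k\n    total = 0.0\n    for k in range(num_classes):\n        zk = z[y == k]\n        if zk.shape[0] > 0:\n            total += np.mean(np.var(zk, axis=0))  # (1\/I) * sum_i v_i^k\n    return total \/ num_classes\n\ndef cw_cr_penalty(z, y, num_classes):\n    # Omega_cw-CR = (2 \/ (K*I*(I-1))) * sum_k sum_{i<j} (c_ij^k)^2\n    i_units = z.shape[1]\n    total = 0.0\n    for k in range(num_classes):\n        zk = z[y == k]\n        if zk.shape[0] > 1:\n            c = np.cov(zk, rowvar=False, bias=True)  # biased, as in eq. (5)\n            off = c - np.diag(np.diag(c))            # zero out the diagonal\n            total += np.sum(off ** 2) \/ 2.0          # distinct pairs only\n    return 2.0 * total \/ (num_classes * i_units * (i_units - 1))\n\\end{verbatim}\nEither penalty, evaluated on the regularized layer's mini-batch activations, is what enters the total cost as $\\lambda\\Omega$.\n\n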
\\subsection{Penalty Loss Functions and Gradients}\n\nThe penalty loss functions of cw-CR and cw-VR are similar to those of CR and VR, respectively, except that the values are calculated for each class using the mini-batch samples with the same class label. Also, the gradients of CR and cw-CR are related to those of VR and cw-VR as shown in Table \\ref{table:loss_function}. We investigate the details of the equations in the following.\n\n\\subsubsection{Interpretation of the gradients}\nAmong the gradient equations shown in Table \\ref{table:loss_function}, the easiest to understand is VR's gradient. It contains the term ${z}_{i,n}-{\\mu}_{i}$, indicating that the representation ${z}_{i,n}$ of each sample $n$ is encouraged to become closer to the mean activation ${\\mu}_{i}$. In this way, each unit's variance can be reduced. For cw-VR, the equation contains ${z}_{i,n}-{\\mu}_{i}^{k}$ instead of ${z}_{i,n}-{\\mu}_{i}$. Therefore, the representation ${z}_{i,n}$ of a class $k$ sample is encouraged to become closer to the \\textit{class} mean activation ${\\mu}_{i}^{k}$. Clearly, the variance reduction is applied per class by cw-VR. \n\nFor CR, the equation is less straightforward. As explained in \\cite{cogswell2015reducing}, a possible interpretation is that the covariance term $c_{i,j}$ is encouraged to be reduced, where $z_{j,n}-\\mu_j$ acts as the weight. But another possible interpretation is that $z_{j,n}$ is encouraged to become closer to $\\mu_j$ just as in the case of VR, where $c_{i,j}$ acts as the weight. Note that VR's mechanism is straightforward: each unit's variance is directly addressed in the gradient equation of activation $i$. CR's mechanism is slightly more complicated: the deviations of all the other units $j$ ($j=1,...,I$, where $j \\neq i$) are collectively addressed through the summation term. Thus, one can interpret CR as a hybrid regularizer that encourages either or both of covariance and variance to be reduced. This can be the reason why the visualizations of CR and VR are similar, as will be shown in Figure \\ref{fig:representation} later. \n\nFor cw-CR, the gradient can be interpreted similarly. As in the relationship between VR and cw-VR, cw-CR is the class-wise counterpart of CR, as can be confirmed in the gradient equation: cw-CR has $c_{i,j}^k({z}_{j,n}-{\\mu}_{j}^{k})$ instead of $c_{i,j}({z}_{j,n}-{\\mu}_{j})$. As in our explanation of CR, cw-CR can also be interpreted as trying to reduce either or both of covariance and variance. The visualizations of cw-CR and cw-VR turn out to be similar as well. \n\nThe interpretations can be summarized as follows. VR and cw-VR aim to reduce activation variance, whereas CR and cw-CR additionally aim to reduce covariance. CR and VR do not distinguish among different classes, but cw-CR and cw-VR explicitly perform representation shaping per class.\n\n\\subsubsection{Activation squashing effect}\nThere is another important effect that is not necessarily obvious from the gradient formulations.\nFor L1W (L1 weight regularization) and L2W (L2 weight regularization), the gradients contain the weight terms, and therefore the weights are explicitly encouraged to become smaller. Similarly, our representation regularizers include the activation terms $z_{i,n}$, and therefore the activations are explicitly encouraged to become smaller (when activations become close to zero, the mean terms become close to zero as well). Thus, a simple way to reduce the penalty loss is to scale the activations to small values instead of satisfying the balance between the terms in the gradient equations. \nThis means that there is a chance for the learning algorithm to squash activations just so that the representation regularization term can be ignored. As we will see in the next section, activation squashing indeed happens when our regularizers are applied. Nonetheless, we will also show that the desired statistical properties are sufficiently manifested anyway. One might be able to prevent activation squashing with another regularization technique, but such an experiment was beyond the scope of this work.\n\n
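The gradient expressions in Table \\ref{table:loss_function} can be verified numerically. Below is a small, self-contained check for cw-VR (illustrative only; the sizes and variable names are arbitrary) comparing the analytic gradient with a central finite difference:\n\\begin{verbatim}\nimport numpy as np\n\nN, I, K = 12, 5, 3\nz = np.random.default_rng(0).normal(size=(N, I))\ny = np.arange(N) % K         # every class occurs in the mini-batch\n\ndef omega(z):                # Omega_cw-VR = (1\/(K*I)) sum_k sum_i v_i^k\n    return sum(np.mean(np.var(z[y == k], axis=0)) for k in range(K)) \/ K\n\n# analytic gradient: 2 \/ (K * I * |S_k|) * (z_{i,n} - mu_i^k)\ngrad = np.zeros_like(z)\nfor k in range(K):\n    idx = y == k\n    grad[idx] = 2.0 \/ (K * I * idx.sum()) * (z[idx] - z[idx].mean(axis=0))\n\n# central finite-difference gradient\neps, num = 1e-6, np.zeros_like(z)\nfor n in range(N):\n    for i in range(I):\n        zp, zm = z.copy(), z.copy()\n        zp[n, i] += eps\n        zm[n, i] -= eps\n        num[n, i] = (omega(zp) - omega(zm)) \/ (2 * eps)\n\nprint(np.max(np.abs(grad - num)))  # agrees to within ~1e-9\n\\end{verbatim}\nThe other gradients in the table can be checked with the same recipe.\n\n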
\\section{Experiments}\n\\label{sec:experiments}\nIn this section, we investigate the performance improvements from the four representation regularizers; the baseline, L1W, L2W, CR, cw-CR, VR, and cw-VR are evaluated on image classification and reconstruction tasks. When a regularizer (including L1W and L2W) was used for an evaluation scenario, the penalty loss weight $\\lambda$ was determined as one of \\{0.001, 0.01, 0.1, 1, 10, 100\\} using 10,000 validation samples. Once $\\lambda$ was determined, the performance evaluation was repeated five times. Code is made available at\n\\url{https:\/\/github.com\/snu-adsl\/class_wise_regularizer}.\n\n
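As a usage illustration, a cw-VR penalty can be attached to a chosen layer with a few lines of TensorFlow 1.x (a hedged sketch rather than our exact released code; the layer sizes and $\\lambda$ below are placeholders, and classes absent from a mini-batch simply contribute zero):\n\\begin{verbatim}\nimport tensorflow as tf  # 1.x API, as used in our experiments\n\ndef cw_vr_loss(z, labels, num_classes):\n    # z: [N, I] activations of the regularized layer, labels: [N] int32\n    ones = tf.ones_like(labels, dtype=tf.float32)\n    counts = tf.unsorted_segment_sum(ones, labels, num_classes)      # [K]\n    denom = tf.expand_dims(tf.maximum(counts, 1.0), 1)               # [K, 1]\n    mu_k = tf.unsorted_segment_sum(z, labels, num_classes) \/ denom   # [K, I]\n    sq = tf.square(z - tf.gather(mu_k, labels))    # (z_{i,n} - mu_i^k)^2\n    v_k = tf.unsorted_segment_sum(sq, labels, num_classes) \/ denom   # [K, I]\n    return tf.reduce_mean(v_k)       # (1\/(K*I)) * sum_k sum_i v_i^k\n\nx = tf.placeholder(tf.float32, [None, 784])\ny = tf.placeholder(tf.int32, [None])\nh = tf.layers.dense(x, 100, activation=tf.nn.relu)  # layer to regularize\nlogits = tf.layers.dense(h, 10)\nce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n    labels=y, logits=logits))\ntotal_loss = ce + 0.1 * cw_vr_loss(h, y, 10)   # lambda = 0.1 (placeholder)\ntrain_op = tf.train.AdamOptimizer(1e-4).minimize(total_loss)\n\\end{verbatim}\n\n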
\\subsection{Image Classification Task}\n\\begin{table}[t]\n\\centering\n\\caption{Error performance (\\%) for the CIFAR-10 CNN model.}\n\\label{table:cifar-10}\n\\begin{tabular}{ccc}\n\\hline\n\\multirow{2}{*}{Regularizer} & \\multicolumn{2}{c}{Optimizer} \\\\ \\cline{2-3} \n & Adam & Momentum \\\\ \\hline\nBaseline & $26.64 \\pm 0.16$ & $25.78 \\pm 0.37$ \\\\ \\hline\nL1W & $26.46 \\pm 0.39$ & $25.73 \\pm 0.40$ \\\\\nL2W & $25.71 \\pm 0.98$ & $26.35 \\pm 0.54$ \\\\ \\hline\nCR & $24.96 \\pm 0.63$ & $26.72 \\pm 0.61$ \\\\ \ncw-CR & $22.99 \\pm 0.58$ & $25.93 \\pm 0.59$ \\\\\nVR & \\pmb{$21.44 \\pm 0.88$} & $25.01 \\pm 0.41$ \\\\\ncw-VR & $21.58 \\pm 0.21$ & \\pmb{$24.42 \\pm 0.31$} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\begin{table}[t]\n\\centering\n\\caption{Error performance (\\%) for the CIFAR-100 CNN model.}\n\\label{table:cifar-100}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{cccc}\n\\hline\n\\multirow{2}{*}{Regularizer} & \\multicolumn{3}{c}{Number of Classes} \\\\ \\cline{2-4}\n & 16 & 64 & 100 \\\\ \\hline\nBaseline & $45.75 \\pm 0.73$ & $58.02 \\pm 0.40$ & $61.26 \\pm 0.52$ \\\\ \\hline\nL1W & $45.08 \\pm 1.53$ & $58.08 \\pm 1.18$ & $60.97 \\pm 0.64$ \\\\\nL2W & $45.28 \\pm 1.59$ & $57.47 \\pm 0.66$ & $60.23 \\pm 0.31$ \\\\ \\hline\nCR & $44.55 \\pm 1.10$ & $56.76 \\pm 0.86$ & $59.88 \\pm 0.50$ \\\\ \ncw-CR & $43.50 \\pm 1.21$ & $54.24 \\pm 0.64$ & $57.03 \\pm 0.73$ \\\\\nVR & $42.33 \\pm 1.03$ & $54.32 \\pm 0.40$ & $57.68 \\pm 0.94$ \\\\\ncw-VR & \\pmb{$41.38 \\pm 0.53$} & \\pmb{$54.23 \\pm 1.06$} & \\pmb{$56.75 \\pm 0.64$} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\nThree popular datasets (MNIST, CIFAR-10, and CIFAR-100) were used as benchmarks. An MLP model was used for MNIST, and a CNN model was used for CIFAR-10\/100. The details of the architecture hyperparameters can be found in Section A of the supplementary materials. All the regularizers were applied to the fifth layer of the 6-layer MLP model and to the fully connected layer of the CNN model; the reason will be explained in the Layer Dependency section. For L1W and L2W, we also applied regularization to all the layers for comparison, but the performance was comparable to applying them only to the fifth layer. The mini-batch size was increased to 500 for CIFAR-100 so that the class-wise operations could be performed appropriately, but it was kept at the default value of 100 for MNIST and CIFAR-10. We have tested a total of 20 scenarios where the choice of an optimizer, number of classes, network size, or data size was varied.\n\nThe results for two CIFAR-10 CNN scenarios are shown in Table \\ref{table:cifar-10}, and those for three CIFAR-100 CNN scenarios are shown in Table \\ref{table:cifar-100}. The rest of the scenarios, including the full cases of the MNIST MLP, can be found in Section B of the supplementary materials. In Tables \\ref{table:cifar-10} and \\ref{table:cifar-100}, it can be seen that cw-VR achieves the best performance in 4 out of 5 cases and that the class-wise regularizers perform better than their all-class counterparts except for one case. For the scenarios shown in Table \\ref{table:cifar-100}, we initially guessed that the performance of the class-wise regularizers would be sensitive to the number of classes, but cw-VR performed well for all three cases. Over the 20 scenarios that were tested, the best-performing regularizer was cw-VR for 11 cases, VR for 5 cases, cw-CR for 2 cases, and CR for 1 case. L1W and L2W were never the best, and the baseline (no regularization) performed the best for only one case. \n\nAs mentioned earlier, in general, VR did not hurt performance compared to the baseline. There are two possible explanations. First, representation characteristics other than variance are affected together by VR (see Table \\ref{table:statistical_property} in the next section), and VR might have indirectly created a positive effect. Second, the cross-entropy term limits how much variance reduction VR performs, and the overall effects might be more complicated than a simple variance reduction.\n\n\nTo test a more sophisticated DNN architecture, we tried the four representation regularizers on ResNet-32\/110. ResNet is known as one of the best performing deep networks for CIFAR-10, and we applied the four representation regularizers to the output layer without modifying the network's architecture or hyperparameters. The results are shown in Table \\ref{table:resnet-110}. All four turned out to have positive effects, with cw-VR showing the best performance again. \n\n\\begin{table}[t]\n\\centering\n\\caption{Error performance (\\%) for ResNet-32\/110 (CIFAR-10). \nFor ResNet-32, the average of two experiments is shown. For ResNet-110,\nwe experimented five times and \\lq best (mean$\\pm$std)\\rq \\ is reported as in \\cite{he2016deep}.\n}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{lcc}\n\\hline\n\\multicolumn{1}{c}{Model \\& Regularizer} & He et al. 
& Ours \\\\ \n\\hline\nResNet-32 & 7.51 & 7.39 \\\\ \nResNet-32 + CR & & 7.27 \\\\ \nResNet-32 + cw-CR & & 7.21 \\\\\nResNet-32 + VR & & 7.22 \\\\\nResNet-32 + cw-VR & & \\textbf{7.17} \\\\ \\hline\nResNet-110 & 6.43 \\small{(6.61$\\pm$0.16)} & 6.12 \\small{(6.31$\\pm$0.14)} \\\\ \nResNet-110 + CR & & 6.17 \\small{(6.26$\\pm$0.05)} \\\\ \nResNet-110 + cw-CR & & 6.10 \\small{(6.18$\\pm$0.10)} \\\\\nResNet-110 + VR & & 6.10 \\small{(6.17$\\pm$0.05)} \\\\\nResNet-110 + cw-VR & & \\textbf{6.00} \\small{(6.18$\\pm$0.15)} \\\\ \\hline\n\\end{tabular}\n}\n\\label{table:resnet-110}\n\\end{table}\n\n\\begin{figure*}[t]\n\\centering\n\\centerline{\\includegraphics[width=1\\textwidth]{representation_characteristics.pdf}}\n\\caption{Visualization of the learned representations for MNIST. The plots in the top and middle rows were generated in the same way as in Figure \\ref{fig:mnist_none_hist_scatter}. The plots in the bottom row show the top three principal components of the representations. \n}\n\\label{fig:representation}\n\\end{figure*}\n\n\\begin{table*}[t]\n\\centering\n\\caption{Quantitative evaluations of representation characteristics. \n}\n\\label{table:statistical_property}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{cccccccc}\n\\hline\nRegularizer &\t Test error (\\%) & \t \\textsc{Activation\\_amplitude} &\t \\shortstack{\\textsc{Covariance} \\\\ (CR)} &\t \\shortstack{\\textsc{Correlation} \\\\ (CR)} & \t \\shortstack{\\textsc{cw\\_Correlation} \\\\ (cw-CR)} &\t \\shortstack{\\textsc{Variance} \\\\ (VR)} &\t \\shortstack{\\textsc{N\\_cw\\_Variance} \\\\ (cw-VR)} \\\\ \\hline\nBaseline &\t $2.85 \\pm 0.11$ & \t 4.93 & \t2.08 &\t 0.27 & \t 0.21 & \t 9.05 & \t 1.33 \\\\ \\hline\nL1W & \t $2.85 \\pm 0.06$ & \t 4.53 & \t1.95 &\t 0.28 & \t 0.22 & \t 7.78 & \t 1.33 \\\\\nL2W & \t $3.02 \\pm 0.40$ & \t 4.76 & \t2.23 &\t 0.29 & \t 0.21 & \t 8.38 & \t 1.36 \\\\ \\hline\nCR &\t $2.50 \\pm 0.05$ & \t \\textit{0.50} & \t0.01 &\t \\textbf{0.19} & \t 0.15 & \t 0.04 & \t 1.37 \\\\\ncw-CR & \t $2.49 \\pm 0.10$ & \t \\textit{0.63} & \t0.02 &\t 0.31 & \t \\textbf{0.19} & \t 0.06 & \t 0.95 \\\\\nVR & \t $2.65 \\pm 0.11$ & \t \\textit{1.35} & \t0.15 &\t 0.26 & \t 0.17 & \t \\textbf{0.58} & \t 1.52 \\\\\ncw-VR & \t \\pmb{$2.42 \\pm 0.06$} &\t \\textit{0.63} & \t0.02 &\t 0.36 & \t 0.25 & \t 0.05 & \t \\textbf{0.74} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table*}\n\n\\subsection{Image Reconstruction Task}\nIn order to test a completely different type of task, we examined an image reconstruction task where a deep autoencoder is used. Class information is used for representation regularization only. A 6-hidden-layer autoencoder with a standard L2 objective function was used. Representation regularizers were applied only to the third layer because the representations of that layer are considered latent variables. The other experiment settings are the same as for the image classification tasks in the previous subsection. The reconstruction error of the baseline is $1.44 \\times 10^{-2}$, and it is reduced to $1.19 \\times 10^{-2}$ when cw-VR is applied. Result details can be found in Section B of the supplementary materials.\nAs in the classification tasks, the class-wise regularizers performed better than their all-class counterparts.\n\n\n\n\n\n\n\n\n\\section{Representation Characteristics}\n\n\nIn this section, we investigate the representation characteristics that arise when the regularizers are applied. 
\\subsection{Visualization}\nIn Figure \\ref{fig:representation}, the $50^{th}$ epoch plots are shown for the baseline and the four representation regularizers. L1W and L2W are excluded because their plots are very similar to those of the baseline.\nPrincipal Component Analysis (PCA) was also performed over the learned representations, and the plots in the bottom row show the top three principal components of the representations (before ReLU).\nThe first thing that can be noticed is that the representation characteristics are quite different depending on which regularizer is used. Clearly, the regularizers are effective at shaping representation characteristics. \nIn the first row, it can be seen that cw-VR minimizes the activation overlaps among different classes as intended. Because the gradient equation of cw-CR is related to that of cw-VR, cw-CR also shows reduced overlaps. CR and VR still show substantial overlaps because class information was not used by them. \nIn the second row, a linear correlation can be observed in the scatter plot of the baseline, but such a linear correlation is mostly removed for CR, as expected. For VR, linear correlations can still be observed. For cw-CR and cw-VR, it is difficult to judge because many points do not belong to the main clusters, and their effects on correlation are difficult to guess. As we will see in the following quantitative analysis section, in fact, correlation was not reduced for cw-CR and cw-VR.\nIn the third row, it can be seen that cw-VR has the least overlap when the first three principal components are considered. Interestingly, a needle-like shape can be observed for each class in cw-VR's plot. The plots using the learned representations after ReLU are included in Section C of the supplementary materials. Overall, cw-VR shows the most distinct shapes compared to the baseline. \n\n\\subsection{Quantitative Analysis}\nFor the same MNIST task that was used to plot Figure \\ref{fig:mnist_none_hist_scatter} and Figure \\ref{fig:representation}, the quantitative values of the representation characteristics were evaluated, and the results are shown in Table \\ref{table:statistical_property}. Each value is calculated using only positive activations and is the average of the corresponding per-unit statistics. For example, \\textsc{Activation\\_amplitude} is the mean of the positive activations in a layer.\nIn the third column (\\textsc{Activation\\_amplitude}), it can be confirmed that the four representation regularizers indeed cause activation squashing. Nonetheless, the error performance is improved, as shown in the second column. For CR, covariance is supposed to be reduced. In the fourth column (\\textsc{Covariance}), it can be confirmed that the covariance of CR is much smaller than that of the baseline. The small value, however, is mostly due to the activation squashing. In the fifth column (\\textsc{Correlation}), the normalized version of covariance is shown. The correlation of CR is confirmed to be smaller than that of the baseline, but the reduction is much smaller than for the covariance, which was affected by the activation squashing. In any case, CR indeed reduces correlation among hidden units. For cw-CR, the class-wise correlation (\\textsc{cw\\_Correlation}) is expected to be small, and this is confirmed in the sixth column. The value 0.19, however, is larger than CR's 0.15 or VR's 0.17. This is an example where not only cw-CR but also the other representation regularizers end up reducing \\textsc{cw\\_Correlation}, because the regularizers' gradient equations are related. For VR, variance should be reduced. In the seventh column (\\textsc{Variance}), the variance of VR is indeed much smaller than that of the baseline, but again the other representation regularizers have even smaller values because their activation squashing is more severe than that of VR. For cw-VR, the class-wise variance is supposed to be small. The normalized class-wise variance is shown in the last column (\\textsc{N\\_cw\\_Variance}), and it is confirmed that cw-VR is capable of reducing \\textsc{N\\_cw\\_Variance}. (Normalization was performed by mapping the activation range of each hidden unit to [0,10] so that the activation squashing effect is removed.)\n\n
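For concreteness, the following NumPy sketch shows one plausible way to compute the \\textsc{Correlation} and \\textsc{cw\\_Correlation} columns (this reflects our reading of the metric definitions; details such as the handling of dead units are assumptions):\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_offdiag_corr(acts):\n    # acts: (N, I) activations; assumes at least two varying units\n    live = acts[:, acts.std(axis=0) > 0]   # drop constant (dead) units\n    c = np.corrcoef(live, rowvar=False)\n    iu = np.triu_indices_from(c, k=1)      # distinct unit pairs\n    return np.abs(c[iu]).mean()\n\ndef mean_cw_corr(acts, labels, num_classes):\n    return np.mean([mean_offdiag_corr(acts[labels == k])\n                    for k in range(num_classes)])\n\\end{verbatim}\n\n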
\\section{Layer Dependency}\nIn the previous sections, we have consistently applied the representation regularizers to the upper layers that are closer to the output layer. This is because we have found that it is better to target the upper layers, and two exemplary results are shown in Figure \\ref{fig:layer_dependency}. In Figure \\ref{fig:layer_dependency} (a), the performance improvement becomes larger as the representation regularization targets upper layers. In fact, the best performance is observed when the output layer is regularized. In Figure \\ref{fig:layer_dependency} (b), similar patterns can be seen over the convolutional layers, but the performance degrades when the regularization is applied to the fully connected or output layers. This phenomenon is probably related to how representations develop in deep networks. Because the lower layers often represent many simpler concepts, regularizing the shapes of their representations can be harmful. In the upper layers, a smaller number of more complex concepts are represented, and therefore controlling representation characteristics (e.g., reduction of activation overlaps) might have a better chance of improving the performance. \n\n\\begin{figure}[t]\n \\centering\n \\subfloat[MNIST]{{\\includegraphics[width=4.1cm]{layer_mnist.png} }}%\n \\hspace{-0.4\\baselineskip}\n \\subfloat[CIFAR-100]{{\\includegraphics[width=4.1cm]{layer_cifar100.png} }}%\n \\caption{Layer dependency of representation regularizers. The x-axis indicates the layers where the regularizers are applied. CR and cw-CR are excluded in (b) due to the high computational burden of applying them to the convolutional layers. The result for CIFAR-10 can be found in Section D of the supplementary materials.}%\n \\label{fig:layer_dependency}%\n\\end{figure}\n\n\\section{Discussion and Conclusion}\nA well-known representation regularizer is the L1 representation regularizer (L1R), whose penalty loss function can be written as ${\\Omega}_{L1R}=\\frac{1}{NI}\\sum_n \\sum_i |z_{i,n}|$. L1R is known to increase representational sparsity. CR and VR have second-order terms in their penalty loss functions, but L1R does not. As a consequence, L1R's class-wise counterpart turns out to have the same penalty function as L1R's (this is trivial to prove). So, one might say that L1R is also a class-wise representation regularizer just like cw-CR and cw-VR. When it is used, however, there is no need for the true class information. For instance, when true label information is not available, as in an autoencoder problem, one might use L1R and still have a chance to obtain the benefits of class-wise regularization. 
In our study, we did not include L1R so that we could better focus on the difference between all-class and class-wise regularizers. When cw-VR was directly compared with L1R in terms of performance, we found that cw-VR performs better than L1R for 12 out of the 21 test scenarios (ResNet-110 and an autoencoder were not tested). Overall, however, it looks like both L1R and cw-VR are very effective representation regularizers for improving the performance of deep networks. \n\nDropout and batch normalization are very popular regularizers, but they are fundamentally different because they are not \\lq penalty cost function' regularizers. Instead, they are implemented by directly affecting the feedforward calculations during training. Dropout has been shown to have effects similar to ensembling and data augmentation through its noisy training procedure, and such benefits are not obtainable with a penalty regularizer. On the other hand, there is a common belief that \\lq dropout reduces co-adaptation (or pair-wise correlation).\\rq \\,Reducing correlation is something that can be done by penalty regularizers, as we have shown in this work. When we applied the same quantitative analysis to the test scenarios while using dropout, however, we found that dropout does not really reduce the correlation. This indicates that the belief might be a myth. \nBatch normalization has been known to have a stabilization effect because it can compensate for covariate shift even when the network is in the early stage of training. Thus, a higher learning rate can be used for faster training. Such an effect is not something that can be achieved with a penalty regularizer. But when dropout and batch normalization were directly compared with the two representation regularizers cw-VR and L1R in terms of performance, we found that at least one of cw-VR and L1R outperforms both dropout and batch normalization for 16 out of the 20 test cases (ResNet-32\/110 and an autoencoder were not tested).\nDespite the performance results for our benchmark scenarios, it is important to recognize that dropout and batch normalization might be able to play completely different roles that cannot be addressed by the penalty regularizers. When such additional roles are not important for a task, as in our test scenarios, there is a very high chance of the penalty regularizers outperforming dropout and batch normalization.\n\nPerformance improvement through representation regularizers, especially by utilizing class information, has been addressed in this work and in other previous works. The underlying mechanism for the improvement, however, is still unclear. Recently, \\cite{choi2018statistical} showed that\nsome of the statistical properties of representations cannot be the direct cause of performance improvement. The representation regularizers might have tuning effects instead. \n\nWith the enormous efforts of the research community, deep learning is becoming better understood, and regularization techniques are evolving with this deeper understanding. In this work, we have addressed the fundamentals of using class information for penalty representation regularization. 
The results indicate that class-wise representation regularizers are efficient and effective, and they should be considered as important, high-potential configurations for training deep networks.\n\n\\section*{Acknowledgments}\nThis work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2017R1E1A1A03070560) and by SK telecom Co., Ltd.\n\n\n\\section*{A\\quad Architectures and Hyperparameters}\n\n\\bigskip\n\n\\subsection*{A.1\\quad Default Settings}\nBy default, we chose ReLU activations, mini-batch gradient descent with the Adam optimizer, and a learning rate of 0.0001 for the networks. The mini-batch size is set to 100 by default and to 500 for CIFAR-100. We evaluated validation performance for penalty loss coefficients in \\{0.001, 0.01, 0.1, 1, 10, 100\\} and chose the one \nwith the best performance for each regularizer and condition.\nThen, performance was evaluated through five training runs \nusing the chosen coefficient value. In the case of CIFAR-10 and CIFAR-100, \nthe last 10,000 instances of the 50,000 training examples were used as the validation data,\nand after the coefficient values were fixed, the validation data was merged back into the training data. All experiments in this work were carried out using TensorFlow 1.5.\n\n\\bigskip\n\n\\subsection*{A.2\\quad MNIST}\nFor classification tasks, a 6-layer MLP with 100 hidden units per layer was used. For the image reconstruction task, a 6-layer autoencoder was used. The numbers of hidden units are 400, 200, 100, 200, 400, and 784, in layer order. \n\n\\bigskip\n\n\\subsection*{A.3\\quad CIFAR-10 and CIFAR-100}\nA CNN with four convolutional layers and one fully connected layer was used for both CIFAR-10 and CIFAR-100. Detailed architecture hyperparameters are shown in Table 6.\n\n\\begin{table}[htbp]\n\\centering\n\\captionsetup{labelformat=empty}\n\\caption{Table 6: Default architecture hyperparameters of CIFAR-10\/100 CNN model.}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{cccccc}\n\\hline\nLayer & \\# of filters (or units) & Filter size & Conv. stride & Pooling size & Pooling stride \\\\ \\hline\nConvolutional layer-1 & 32 & 3 $\\times$ 3 & 1 & - & - \\\\\nConvolutional layer-2 & 64 & 3 $\\times$ 3 & 1 & - & - \\\\\nMax-pooling layer-1 & - & - & - & 2 $\\times$ 2 & 2 \\\\\nConvolutional layer-3 & 128 & 3 $\\times$ 3 & 1 & - & - \\\\\nMax-pooling layer-2 & - & - & - & 2 $\\times$ 2 & 2 \\\\\nConvolutional layer-4 & 128 & 3 $\\times$ 3 & 1 & - & - \\\\\nMax-pooling layer-3 & - & - & - & 2 $\\times$ 2 & 2 \\\\ \nFully connected layer & 128 & - & - & - & - \\\\ \\hline \n\\end{tabular}\n}\n\\label{table:hyperparameters}\n\\end{table}\n\n\n\n\\clearpage\n\n\\section*{B\\quad Result Details}\n\\begin{table*}[ht]\n\\captionsetup{labelformat=empty}\n\\caption{Table 7: Results for MNIST MLP model.
\nThe best-performing regularizer in each condition (each column) is shown in bold.\nFor the default condition, the standard values of data size=50k and layer width=100 were used.}\n\\vskip -0.8in\n\\begin{center}\n\\begin{small}\n\\begin{tabular}{lccccc}\n\\hline\n\\multirow{2}{*}{Regularizer} & \\multirow{2}{*}{Default} & \\multicolumn{2}{c}{Data size} & \\multicolumn{2}{c}{Layer width} \\\\ \\cmidrule{3-6} \n & & 1k & 5k & 2 & 8 \\\\ \\hline\nBaseline & $2.85 \\pm 0.11$ & $11.41 \\pm 0.19$ & $6.00 \\pm 0.07$ & $31.62 \\pm 0.07$ & $10.52 \\pm 0.57$ \\\\ \\hline\nL1W & $2.85 \\pm 0.06$ & $11.64 \\pm 0.27$ & $5.96 \\pm 0.11$ & $31.67 \\pm 0.15$ & $11.02 \\pm 0.58$ \\\\ \nL2W & $3.02 \\pm 0.40$ & $11.38 \\pm 0.18$ & $5.86 \\pm 0.10$ & $31.66 \\pm 0.13$ & $10.65 \\pm 0.23$ \\\\ \\hline\nCR (DeCov) & $2.50 \\pm 0.05$ & $11.63 \\pm 0.24$ & $6.05 \\pm 0.06$ & $34.80 \\pm 0.25$ & $10.25 \\pm 0.74$ \\\\ \ncw-CR & $2.49 \\pm 0.10$ & $10.62 \\pm 0.05$ & \\pmb{$5.80 \\pm 0.15$} & $31.50 \\pm 0.11$ & $10.81 \\pm 1.11$ \\\\ \nVR & $2.65 \\pm 0.11$ & $14.42 \\pm 0.14$ & $6.90 \\pm 0.22$ & $32.39 \\pm 0.13$ & \\pmb{$9.22 \\pm 0.28$} \\\\ \ncw-VR & \\pmb{$2.42 \\pm 0.06$} & \\pmb{$10.44 \\pm 0.18$} & $5.90 \\pm 0.12$ & \\pmb{$30.34 \\pm 0.06$} & $10.01 \\pm 0.63$ \\\\ \n\\hline\n\\end{tabular}\n\\label{appendix_mnist}\n\\end{small}\n\\end{center}\n\\vskip 0.1in\n\\bigskip\n\n\\centering\n\\captionsetup{labelformat=empty}\n\\caption{Table 8: Results for CIFAR-10 CNN model. \nThe best-performing regularizer in each condition (each column) is shown in bold.\nFor the default condition, the standard values of data size=50k and layer width=128 were used \nand the Adam optimizer was applied.}\n\\vskip -0.8in\n\\begin{center}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{cccccccc}\n\\hline\n\\multirow{2}{*}{Regularizer} & \\multirow{2}{*}{Default} & \\multicolumn{2}{c}{Data size} & \\multicolumn{2}{c}{Layer width} & \\multicolumn{2}{c}{Optimizer} \\\\ \\cmidrule{3-8}\n & & 1k & 5k & 32 & 512 & {Momentum} & {RMSProp} \\\\ \\midrule\nBaseline & $26.64 \\pm 0.16$ & $56.07 \\pm 0.36$ & $43.95 \\pm 0.43$ & $28.54 \\pm 0.63$ & $28.52 \\pm 1.06$ & $25.78 \\pm 0.37$ & $28.52 \\pm 1.21$ \\\\ \\hline\nL1W & $26.46 \\pm 0.39$ & $56.64 \\pm 0.91$ & $44.32 \\pm 0.66$ & $28.65 \\pm 1.14$ & $27.96 \\pm 0.72$ & $25.73 \\pm 0.40$ & $28.30 \\pm 0.99$ \\\\\nL2W & $25.71 \\pm 0.98$ & $56.57 \\pm 0.22$ & $44.87 \\pm 0.81$ & $28.54 \\pm 0.30$ & $27.79 \\pm 0.83$ & $26.35 \\pm 0.54$ & $28.02 \\pm 0.88$ \\\\ \\hline\nCR (DeCov) & $24.96 \\pm 0.63$ & $57.40 \\pm 2.11$ & $45.16 \\pm 0.94$ & $26.45 \\pm 0.22$ & $28.65 \\pm 1.21$ & $26.72 \\pm 0.61$ & $27.94 \\pm 0.43$ \\\\\ncw-CR & $22.99 \\pm 0.58$ & $53.50 \\pm 1.05$ & \\pmb{$42.15 \\pm 0.64$} & $26.40 \\pm 0.62$ & $28.54 \\pm 1.01$ & $25.93 \\pm 0.59$ & $27.77 \\pm 0.88$ \\\\\nVR & \\pmb{$21.44 \\pm 0.88$} & $53.90 \\pm 0.97$ & $42.33 \\pm 0.57$ & \\pmb{$24.96 \\pm 0.26$} & $26.61 \\pm 0.47$ & $25.01 \\pm 0.41$ & \\pmb{$26.06 \\pm 0.72$} \\\\\ncw-VR & $21.58 \\pm 0.21$ & \\pmb{$51.93 \\pm 1.09$} & $43.00 \\pm 0.95$ & $25.81 \\pm 0.64$ & \\pmb{$26.46 \\pm 0.25$} & \\pmb{$24.42 \\pm 0.31$} & $26.19 \\pm 1.35$ \\\\\n\\hline\n\\end{tabular}%\n}\n\\label{cifar10_dependency}\n\\end{center}\n\\vskip 0.1in\n\n\\bigskip\n\n\n\\centering\n \\captionsetup{labelformat=empty}\n\\caption{Table 9: Results for CIFAR-100 CNN model. The best-performing regularizer in each condition (each column) is shown in bold.
For the default condition, the standard values of data size=50k, layer width=128, and number of classes=100 were used.}\n\\vskip -0.8in\n\\begin{center}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{ccccccccc}\n\\hline\n\\multirow{2}{*}{Regularizer} & \\multirow{2}{*}{Default} & \\multicolumn{2}{c}{Data Size} & \\multicolumn{2}{c}{Layer Width} & \\multicolumn{3}{c}{Classes} \\\\ \\cmidrule{3-9}\n & & 1k & 5k & 32 & 512 & 4 & 16 & 64 \\\\ \\midrule\nBaseline & $61.26 \\pm 0.52$ & $90.89 \\pm 0.30$ & $82.21 \\pm 0.72$ & $62.41 \\pm 0.34$ & $61.30 \\pm 0.64$ & \\pmb{$24.95 \\pm 2.36$} & $45.75 \\pm 0.73$ & $58.02 \\pm 0.40$ \\\\ \\hline\nL1W & $60.97 \\pm 0.64$ & $91.33 \\pm 0.37$ & $82.30 \\pm 0.60$ & $62.23 \\pm 0.58$ & $60.92 \\pm 0.47$ & $26.75 \\pm 2.04$ & $45.08 \\pm 1.53$ & $58.08 \\pm 1.18$ \\\\\nL2W & $60.23 \\pm 0.31$ & $90.53 \\pm 0.39$ & $82.05 \\pm 0.70$ & $62.78 \\pm 0.36$ & $61.55 \\pm 0.99$ & $26.90 \\pm 1.24$ & $45.28 \\pm 1.59$ & $57.47 \\pm 0.66$ \\\\ \\hline\nCR (DeCov) & $59.88 \\pm 0.50$ & $91.70 \\pm 0.14$ & $82.47 \\pm 0.41$ & \\pmb{$60.47 \\pm 0.63$} & $60.70 \\pm 0.94$ & $27.25 \\pm 1.51$ & $44.55 \\pm 1.10$ & $56.76 \\pm 0.86$ \\\\\ncw-CR & $57.03 \\pm 0.73$ & $90.85 \\pm 0.29$ & $81.29 \\pm 0.62$ & $61.41 \\pm 0.67$ & $58.02 \\pm 0.25$ & $26.35 \\pm 1.04$ & $43.50 \\pm 1.21$ & $54.24 \\pm 0.64$ \\\\ \nVR & $57.68 \\pm 0.94$ & $91.43 \\pm 0.32$ & $81.85 \\pm 0.38$ & $61.35 \\pm 0.45$ & \\pmb{$56.87 \\pm 0.74$} & $26.10 \\pm 1.81$ & $42.33 \\pm 1.03$ & $54.32 \\pm 0.40$ \\\\\ncw-VR & \\pmb{$56.75 \\pm 0.64$} & \\pmb{$90.45 \\pm 0.22$} & \\pmb{$81.03 \\pm 0.57$} & $60.67 \\pm 0.59$ & $56.91 \\pm 0.73$ & $26.40 \\pm 1.08$ & \\pmb{$41.38 \\pm 0.53$} & \\pmb{$54.23 \\pm 1.06$} \\\\ \\hline\n\n\\end{tabular}%\n}\n\\label{cifar100_dependency}\n\\end{center}\n\\vskip 0.1in\n\n\\bigskip\n\n\n\\centering\n\\captionsetup{labelformat=empty}\n\\caption{Table 10: Mean squared error of the deep autoencoder.}\n\\begin{tabular}{cc}\n\\hline\nRegularizer & Mean Squared Error \\\\ \\hline\nBaseline & $1.44 \\times 10^{-2} \\pm 3.36 \\times 10^{-4} $ \\\\ \\hline\nCR & $1.29 \\times 10^{-2} \\pm 2.44 \\times 10^{-4} $ \\\\ \ncw-CR & $1.22 \\times 10^{-2} \\pm 3.63 \\times 10^{-4} $ \\\\ \nVR & $1.29 \\times 10^{-2} \\pm 5.16 \\times 10^{-4} $ \\\\\ncw-VR & \\pmb{$1.19 \\times 10^{-2} \\pm 2.48 \\times 10^{-4} $} \\\\ \\hline\n\\end{tabular}\n\\label{table:autoencoder}\n\\vskip -1.2in\n\\end{table*}\n\n\\clearpage\n\n\n\\section*{C\\quad Principal Component Analysis of Learned Representations}\n\n\\begin{figure}[htbp]\n \\centering\n \\quad\\subfloat[Baseline (Before ReLU)]{{\\includegraphics[width=4.1cm]{none_fc5.png} }}%\n \\qquad\\qquad\\qquad\\quad\n \\subfloat[Baseline (After ReLU)]{{\\includegraphics[width=4.1cm]{none_fc5a.png} }} \n \n \\subfloat[L1W (Before ReLU)]{{\\includegraphics[width=4.5cm]{l1w_fc5.png} }}%\n \\qquad\\qquad\\qquad\n \\subfloat[L1W (After ReLU)]{{\\includegraphics[width=4.5cm]{l1w_fc5a.png} }} \n \n \\subfloat[L2W (Before ReLU)]{{\\includegraphics[width=4.5cm]{l2w_fc5.png} }}%\n \\qquad\\qquad\\qquad\n \\subfloat[L2W (After ReLU)]{{\\includegraphics[width=4.5cm]{l2w_fc5a.png} }} \n \n \\captionsetup{labelformat=empty}\n\\caption{Figure 4: The top three principal components of learned representations (Baseline, L1W, and L2W).
Note that the representation characteristics of L1W and L2W are very similar to those of the baseline because weight decay methods do not directly shape representations.}%\n \\label{fig:pca_1}%\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\subfloat[CR (Before ReLU)]{{\\includegraphics[width=4.5cm]{cr_fc5.png} }}%\n \\qquad\\qquad\\qquad\n \\subfloat[CR (After ReLU)]{{\\includegraphics[width=4.5cm]{cr_fc5a.png} }}\n \n \\subfloat[cw-CR (Before ReLU)]{{\\includegraphics[width=4.5cm]{cw_cr_fc5.png} }}%\n \\qquad\\qquad\\qquad\n \\subfloat[cw-CR (After ReLU)]{{\\includegraphics[width=4.5cm]{cw_cr_fc5a.png} }}\n \n \\subfloat[VR (Before ReLU)]{{\\includegraphics[width=4.5cm]{vr_fc5.png} }}%\n \\qquad\\qquad\\qquad\n \\subfloat[VR (After ReLU)]{{\\includegraphics[width=4.5cm]{vr_fc5a.png} }} \n \n \\qquad\\subfloat[cw-VR (Before ReLU)]{{\\includegraphics[width=4.cm]{cw_vr_fc5.png} }}%\n \\qquad\\qquad\\qquad\\qquad\n \\subfloat[cw-VR (After ReLU)]{{\\includegraphics[width=4.cm]{cw_vr_fc5a.png} }} \n \\captionsetup{labelformat=empty}\n\\caption{Figure 5: The top three principal components of learned representations (representation regularizers).}%\n \\label{fig:pca_2}%\n\\end{figure}\n\n\n\\clearpage\n\n\\section*{D\\quad Layer Dependency}\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\centerline{\\includegraphics[width=4.1cm]{layer_cifar10.png}}\n\\captionsetup{labelformat=empty}\n\\caption{Figure 6: Layer dependency of representation regularizers on the CIFAR-10 CNN model. The x-axis indicates the layers where regularizers are applied. CR and cw-CR are excluded because of the high computational burden of applying them to the convolutional layers.}\n\\label{fig:layer_dependency_cifar10}\n\\end{center}\n\\end{figure}\n\n\n\n\n \n \n \n \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe warm ionized medium (WIM) is a major component of our Galaxy's interstellar medium (ISM) and is an important tracer of energy transport in star-forming galaxies. Primarily composed of ionized hydrogen, the WIM has a characteristic temperature of $8000$ K, one-third of the surface-mass density of neutral hydrogen (\\ion{H}{1}), and a vertical scale height of $h_z \\approx 1$ kpc \\citep{Haffner2009, Savage2009}. The power requirement of the WIM is equivalent to the total kinetic energy input to the ISM from supernovae \\citep{Reynolds1991}. After the existence of a low-density ($\\unit[10^{-3}]{cm^{-3}}$) WIM was suggested by \\citet{Hoyle1963}, observations of radio pulsar dispersions \\citep{Taylor1993,Reynolds1989} and faint optical emission lines \\citep{Reynolds1998} have been the primary methods of studying this component of the ISM. \n\nDeep H$\\alpha$~imaging shows the existence of extraplanar ionized layers in the disk and halo of other star-forming galaxies \\citep{Dettmar1990, Rand1990}. Extended WIM layers become less prevalent in galaxies with lower star formation rates, demonstrating the complicated disk-halo connection. In normal star-forming galaxies, the WIM component of the ISM accounts for $59\\% \\pm 19\\%$ of the observed H$\\alpha$~flux \\citep{Oey2007} and $\\gtrsim 90\\%$ of the H$^+$ mass \\citep{Haffner2009}. At larger heights above the plane of galaxies, the WIM, along with the hot ($\\approx \\unit[10^6]{K}$) phase, becomes increasingly dominant relative to the cold neutral phase \\citep{Reynolds1991}. 
\n\nThe primary source of ionization in the WIM is believed to be ionizing radiation from O stars, with photoionization models showing their radiation escaping from \\ion{H}{2} regions and following extended paths cleared out through feedback processes and turbulence \\citep{Reynolds1990, Ciardi2002, Wood2005, Wood2010}. Although it originates from the same ionizing sources, the WIM is physically distinct from classical \\ion{H}{2} regions. Discrete \\ion{H}{2} regions in the plane of the Galaxy have a much higher dust content, a much smaller scale height ($\\approx \\unit[50]{pc}$), and a lower temperature ($\\approx \\unit[6000]{K}$) than the diffuse gas of the WIM \\citep{Madsen2006, Kreckel2013}. Additionally, the ionization states of the gas differ between the WIM and classical \\ion{H}{2} regions, with the WIM containing mostly ions such as O$^+$ and S$^+$, as opposed to O$^{++}$ and S$^{++}$ in \\ion{H}{2} regions \\citep{Reynolds1995, Reynolds1998}.\n\nH$\\alpha$~emission provides the bulk of the information about the mass and distribution of the WIM, with $\\gtrsim 80\\%$ of the observed faint H$\\alpha$~flux originating in the WIM and $\\lesssim 20\\%$ from scattered light originating in \\ion{H}{2} regions \\citep{Reynolds1973, Wood1999, Witt2010, Brandt2012, Barnes2014}. The behavior of the classically forbidden, collisionally excited $\\left[ \\text{\\ion{S}{2}} \\right]$~$\\lambda6716$ and $\\left[ \\text{\\ion{N}{2}} \\right]$~$\\lambda6584$ emission lines (hereafter referred to as $\\left[ \\text{\\ion{S}{2}} \\right]$~and $\\left[ \\text{\\ion{N}{2}} \\right]$) traces variations in the ionization state and temperature of the emitting gas. \n\nThe $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~and $\\left[ \\text{\\ion{N}{2}} \\right]$~\/ H$\\alpha$~line ratios both increase with decreasing H$\\alpha$~intensity in the Milky Way and other galaxies \\citep{Haffner1999, Madsen2006, Haffner2009}, suggesting an increase in the gas temperature at lower gas densities. A positive correlation between these line ratios and height above the plane is also observed in many other galaxies \\citep{Domgoergen1997, Otte2002, Hoopes2003}. However, because the H$\\alpha$~intensity follows an inverse relationship with height, it is difficult to disentangle the physical cause of changes in the observed line ratios, which could be due to changes in gas temperature, gas density, or height above the plane.\n\nRecently, the Wisconsin H-Alpha Mapper (WHAM) has provided an all-sky velocity-resolved map of H$\\alpha$~emission in the Milky Way \\citep{Haffner2003,Haffner2010}. Ongoing multi-wavelength observations of H$\\beta$, $\\left[ \\text{\\ion{S}{2}} \\right]$, $\\left[ \\text{\\ion{N}{2}} \\right]$, and other optical emission lines allow the physical conditions of the WIM to be characterized. In this paper, we use WHAM observations to study the spatial and physical properties of the WIM throughout the Sagittarius-Carina arm. Our vantage point within the Galaxy provides an edge-on perspective to study the vertical structure of the WIM in detail. The analysis performed here is motivated by previous work on the Perseus and Scutum-Centaurus arms \\citep{Haffner1999, Hill2014}, but incorporates a novel method of kinematically isolating emission along the spiral arm. \n\nIn Section \\ref{obs}, we describe our observations and Section \\ref{sec_map} presents our spectroscopic maps of the Sagittarius-Carina arm.
Section \\ref{vert} details our derivation of the scale height of the electron density squared (or emission measure, EM) throughout the spiral arm, and Section \\ref{ratio_stat} begins to analyze the physical conditions of the WIM within the Carina arm using $\\left[ \\text{\\ion{S}{2}} \\right]$~emission line data as a tracer of temperature variations. In Section \\ref{disc}, we discuss some of our surprising results for the scale height in the far Carina arm and argue that the bulk of the observed emission corresponds to in-situ emission of photoionized gas. Finally, we wrap up with conclusions and a summary in Section \\ref{summary}. \n\n\\section{Observations} \\label{obs}\n\nWHAM is a dual-etalon Fabry-Perot spectrometer designed to obtain highly sensitive observations of the WIM. WHAM has a $12$ km\\,s$^{-1}$$~$spectral resolution, a $1\\degree$ beam, and observes faint optical emission of H$\\alpha$, $\\left[ \\text{\\ion{S}{2}} \\right]$, $\\left[ \\text{\\ion{N}{2}} \\right]$, H$\\beta$, and other lines. Our H$\\alpha$$~$observations are from the WHAM Sky Survey, which combines observations taken at the Kitt Peak National Observatory \\citep[see][]{Haffner2003} and at the Cerro Tololo Inter-American Observatory \\citep[see][]{Haffner2010}. Additionally, we use preliminary WHAM $\\left[ \\text{\\ion{S}{2}} \\right]$$~$observations from a survey of the southern Galactic plane \\citep{Gotisha2013}. $\\left[ \\text{\\ion{S}{2}} \\right]$~observations are only available along the Carina arm direction in the fourth quadrant of the Galaxy.\n\nH$\\alpha$$~$and $\\left[ \\text{\\ion{S}{2}} \\right]$$~$data are observed using 30-second and 60-second exposures, respectively, through a $1\\degree$ beam, producing an average spectrum of emission over this area. For both H$\\alpha$$~$and $\\left[ \\text{\\ion{S}{2}} \\right]$$~$data, we have applied a flat-field correction, an atmospheric template, and a constant baseline subtraction to reach a $3\\sigma$ sensitivity of $\\approx \\unit[0.1]{R}$ ($\\unit[1]{Rayleigh(R)}~=~\\unit[\\nicefrac{10^6}{4 \\pi}]{photons~s^{-1}~cm^{-2}~sr^{-1}}$; 1 R corresponds to an emission measure of $EM = \\unit[2.25]{cm^{-6} pc}$ for $T_e = \\unit[8000]{K}$). The H$\\alpha$$~$data have had a single Gaussian term subtracted, corresponding to the geocoronal H$\\alpha$$~$emission line. No corrections for dust extinction have been applied to these data; analyzed sight-lines are restricted to Galactic latitudes $\\left|b\\right| > 5\\degree$. Details on the instrument, its design, observation modes, and data reduction methods are described in \\citet{Haffner2003}.\n\n\n\\section{Isolating Spiral Arm Emission} \\label{sec_map}\n\\begin{figure}[htb!]\n\\label{gal_map}\n\\epsscale{1.25}\n\\plotone{galaxy_arms_map-eps-converted-to.pdf}\n\\caption{Spiral structure model of the Milky Way. Galactic center (black cross) is at ($\\unit[0]{kpc}$, $\\unit[0]{kpc}$) with circles (black dotted lines) spaced at $4$, $8$, $12$, and $16$ kpc. The Sun (solid black circle) is at ($\\unit[-8.34]{kpc}$, $\\unit[0]{kpc}$) and Galactic rotation is in the clockwise direction. Dashed lines are logarithmic spiral arm fits for the Outer arm (red), Perseus arm (yellow), Local arm (blue), Sagittarius arm (magenta), and Scutum arm (cyan) from \\citet{Reid2014}. Solid black lines are the spiral arm positions used in this work, based on CO emission and the longitude-velocity tracks defined by \\citet{Reid2016}.
This work assumes the Sagittarius and Carina arms are connected and part of the same coherent spiral structure (also referred to as the Sagittarius-Carina arm).}\n\\end{figure}\n\nGalactic longitude-velocity diagrams of neutral hydrogen and molecular gas emission have long been the traditional method for identifying spiral structure in the Milky Way \\citep{Weaver1970, Cohen1985, Dame2007, Reid2014}. However, moving from velocity space into physical distances is not straightforward and requires many assumptions. Figure \\ref{gal_map} displays the idealized spiral structure model used in this work as viewed from the north Galactic pole. Molecular gas, as traced by CO emission, is used to define tracks of peak emission in longitude-velocity space (frequently called longitude-velocity, or l-v, diagrams) for different Galactic structure features in \\citet[][see Appendix]{Reid2016}, as reproduced in Figure \\ref{co_sag} and Figure \\ref{co}. Their work also provides detailed information on distances and Galactocentric radii along these longitude-velocity tracks, where maser parallaxes are used to constrain the distances. \n\n\\begin{figure*}[htb!]\n\\label{co_sag}\n\\epsscale{.9}\n\\plotone{co_sag_lv-eps-converted-to.pdf}\n\\caption{Figure reproduced from Figure 7 of \\citet{Reid2016}. CfA CO survey longitude-velocity diagram integrated from $b = +1\\degree$ to $-1\\degree$ showing traces of the Sagittarius arm (red) and the Aquila Rift (blue). The solid and dotted lines trace far and near portions of the structure, respectively.}\n\\end{figure*}\n\nIn this work, these spiral arm tracks are taken as a standard, and kinematic distances assuming $v_{circ} = \\unit[220]{km s^{-1}}$ and $R_{\\odot} = \\unit[8.34]{kpc}$ are used when a parallax-based distance constraint is not given. Since the longitude-velocity tracks are defined solely using observed emission of CO and H\\rom{1} gas, purely kinematic distances do not convert into a smooth spiral shape but rather into a jagged pattern, as seen along the near and far Carina arm in Figure \\ref{gal_map}. For more details on how longitude-velocity diagrams can be used to derive spiral structure in the Milky Way, see \\citet[][Sections 2 and 3]{Weaver1970}. \n\n\\begin{figure*}[htb!]\n\\label{co}\n\\epsscale{1.1}\n\\plotone{co_lv-eps-converted-to.pdf}\n\\caption{Figure reproduced from Figure 13 of \\citet{Reid2016}. CfA CO survey longitude-velocity diagram integrated from $b = +5\\degree$ to $-5\\degree$ showing a trace of the Carina arm (red). The solid and dotted lines trace far and near portions of the structure, respectively.}\n\\end{figure*}\n\n\n\\begin{figure}[htb!]\n\\label{spectra}\n\\epsscale{1.15}\n\\plotone{Spectra-eps-converted-to.pdf}\n\\caption{Representative spectra as observed with WHAM along a line of sight toward the Sagittarius arm (upper panel) and the Carina arm (lower panel). The solid line shows H$\\alpha$~emission and the dashed line shows $\\left[ \\text{\\ion{S}{2}} \\right]$~emission multiplied by a factor of three. The shaded regions enclose the integrated region of the spectra used to isolate emission from the near (blue) and far (red) portions of the spiral arm (see Section \\ref{sec_map}, Figures \\ref{maps} and \\ref{mapss} for the resulting channel maps). Blue dotted lines show the locations of the fitted Gaussian peaks and red dashed lines show the locations of the local maxima (see Figure \\ref{bv_peak}).
An offset in the peak velocity of H$\\alpha$~emission for the far Carina arm from the peak CO emission velocity (red shaded region) is seen (see Section \\ref{disc_far}).}\n\\end{figure}\n\n\\begin{figure}[htb!]\n\\label{bv_peak}\n\\epsscale{1.2}\n\\plotone{bb-bv-eps-converted-to.pdf}\n\\caption{Galactic latitude as a function of LSR velocity for WHAM Gaussian peaks. Blue points represent Gaussian peaks manually fit to the data during the atmospheric subtraction process. Red points represent local maxima in the observed spectra. The shaded vertical box encloses the velocity window over which the data are integrated for this longitude slice (see Section \\ref{sec_map}, Figures \\ref{maps} and \\ref{mapss} for the resulting channel maps). Some positive velocity peaks in emission are not covered by this CO l-v track (see Section \\ref{disc_far}).}\n\\end{figure}\n\n\n\nThe presence of the far Carina arm in WHAM observations was first noticed in the spectra, as seen in Figure \\ref{spectra}. Faint emission at positive local standard of rest (LSR) velocities led us to investigate the data through individual Gaussian components of the spectra, as seen in Figure \\ref{bv_peak}. The Gaussian components, shown as blue points, are those fit to the data during the reduction process. Red points are local maxima in the spectra. The collection of points at positive velocities along these longitudes suggests a detection of ionized gas at Galactocentric radii of $R_G > R_{\\odot} = \\unit[8.34]{kpc}$ and closely corresponds with the CO longitude-velocity (l-v) track of the far Carina arm. This relationship led us to use CO emission as a guide to kinematically isolate the H$\\alpha$~emission from the spiral arm as a function of longitude. This method separates emission from the near and far portions of the Carina arm along the same line of sight. \n\n\nWHAM data are integrated over a $\\unit[16]{km~s^{-1}}$ window centered around the CO l-v tracks for the Sagittarius-Carina arm \\citep[see Figure \\ref{co_sag}, Figure \\ref{co} and][]{Reid2016}. The $\\unit[16]{km~s^{-1}}$ width selects peaks of emission rather than encompassing full Galactic emission features, which typically have a width around $\\unit[20 - 30]{km~s^{-1}}$. The narrower width better separates the arm emission from local sources ($v_{LSR} \\approx \\unit[0]{km~s^{-1}}$). Figure \\ref{sag_map} shows the velocity-channel map of $I_{\\text{H}\\alpha}$~for the near Sagittarius arm ($20\\degree < l < 52\\degree$). Velocity-channel maps of $I_{\\text{H}\\alpha}$, $I_{\\text{\\stwo}}$, and the line ratio $\\left[ \\text{\\ion{S}{2}} \\right]$$~$\/ H$\\alpha$$~$for the near and far portions of the Carina arm are in Figures \\ref{maps}, \\ref{mapss}, and \\ref{ratiomap} ($282\\degree < l < 332\\degree$). \n\n\n\\begin{figure}[htb!]\n\\label{sag_map}\n\\epsscale{1.2}\n\\plotone{Ha_Sagittarius_color-eps-converted-to.pdf}\n\\caption{Smoothed map of $I_{\\text{H}\\alpha}$~along the near Sagittarius arm in Galactic coordinates. Data are integrated over a $\\unit[16]{km~s^{-1}}$ window centered around the CO l-v track from \\citet{Reid2016}, reproduced in Figure \\ref{co_sag} and Figure \\ref{co}. The solid white lines show different levels of constant height, $z$, above and below the midplane at the assumed distances for the spiral arm structure. Central CO-informed velocities are shown along the upper axis and assumed distances are shown along the lower axis.
This direction of the sky shows strong extinction from the Aquila Rift, as seen by the drop in $I_{\\text{H}\\alpha}$$~$near the midplane. The red circle shows the size of the $1\\degree$ WHAM beam.}\n\\end{figure}\n\n\\begin{figure*}[htb!]\n\\label{maps}\n\\epsscale{1.1}\n\\plotone{Ha_Carina_color-eps-converted-to.pdf}\n\\caption{Smoothed maps of $I_{\\text{H}\\alpha}$~throughout the near (left) and far (right) portions of the Carina arm in Galactic coordinates. Data are integrated over a $\\unit[16]{km~s^{-1}}$ window centered around the CO l-v track from \\citet{Reid2016}, reproduced in Figure \\ref{co_sag} and Figure \\ref{co}. The solid white lines show different levels of constant height, $z$, above and below the midplane at the assumed distances for the spiral arm. Central CO-informed velocities are shown along the upper axis and assumed distances are shown along the lower axis. The red circle shows the size of the $1\\degree$ WHAM beam.}\n\\end{figure*}\n\n\\begin{figure*}[htb!]\n\\label{mapss}\n\\epsscale{1.1}\n\\plotone{Sii_Carina_color-eps-converted-to.pdf}\n\\caption{Smoothed maps of $I_{\\text{\\stwo}}$~throughout the near (left) and far (right) portions of the Carina arm in Galactic coordinates. Data are integrated over a $\\unit[16]{km~s^{-1}}$ window centered around the CO l-v track from \\citet{Reid2016}, reproduced in Figure \\ref{co_sag} and Figure \\ref{co}. The solid white lines show different levels of constant height, $z$, above and below the midplane at the assumed distances for the spiral arm. Central CO-informed velocities are shown along the upper axis and assumed distances are shown along the lower axis. The red circle shows the size of the $1\\degree$ WHAM beam.}\n\\end{figure*}\n\nThe far Carina arm shows a strong perspective effect in H$\\alpha$~and $\\left[ \\text{\\ion{S}{2}} \\right]$~emission, as the spiral arm increases in distance with increasing Galactic longitude (see Figure \\ref{maps} and Figure \\ref{mapss}). Our confidence in observing this distant spiral arm is explained in detail in Section \\ref{disc_far}. The near Sagittarius arm shows a similar perspective effect, as the distance to that arm segment also increases with Galactic longitude (see Figure \\ref{sag_map}). Significant extinction from the Aquila Rift is seen across the near Sagittarius arm near the midplane. \n\nThe map of $I_{\\text{\\stwo}}$$~$for the Carina arm is incomplete, and observations are in progress for $b \\lesssim -20\\degree$ and for other portions of the sky. The points in the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio maps show the size of the $1\\degree$ WHAM beam. The $\\left[ \\text{\\ion{S}{2}} \\right]$$~$\/ H$\\alpha$$~$line ratio along the far Carina arm appears to follow the same perspective effect, and it generally increases with height above the midplane (see Section \\ref{ratio_stat}). \n\n\\begin{figure*}[htb!]\n\\label{ratiomap}\n\\epsscale{1.15}\n\\plotone{Sii_Ha-eps-converted-to.pdf}\n\\caption{Map of the $\\left[ \\text{\\ion{S}{2}} \\right]$$~$\/ H$\\alpha$$~$line ratio in Galactic coordinates throughout the near (left) and far (right) portions of the Carina arm. The solid white lines show different levels of constant height, $z$, above and below the midplane at the assumed distances for the spiral arm. Central CO-informed velocities are shown along the upper axis and assumed distances are shown along the lower axis.
The red circle shows the size of the $1\\degree$ WHAM beam.} \n\\end{figure*}\n\n\\section{Vertical Extent of the WIM} \\label{vert}\n\nWe assume $I_{\\text{H}\\alpha}$, which traces the electron density squared ($n_e^2$; see Section \\ref{v_dis}), follows an exponential drop with height above the midplane, \n\n\\begin{equation}\nn_e^2 \\left(z\\right) = \\left(n_e^2\\right)_0 \\exp{\\left(- \\frac{\\left|z\\right|}{H_{n_e^2}}\\right)}\n \\label{expha}\n\\end{equation}\n\n\\noindent where $H_{n_e^2}$ is the scale height of the electron density squared (or EM scale height) and $z$ is the height above the midplane ($H_{n_e} = 2 H_{n_e^2}$ for a constant temperature and filling fraction; see Section \\ref{v_dis}). Figure \\ref{fits_2} and Figure \\ref{fits} show a sample of observed $I_{\\text{H}\\alpha}$$~$along a fixed Galactic longitude as a function of Galactic latitude. Following \\citet{Haffner1999}, we fit\n\n\\begin{equation}\n\\ln{I_\\text{H$\\alpha$}} = \\ln{I_0} - \\frac{D}{H_{n_e^2}} \\tan{\\left|b\\right|}\n \\label{hfit}\n\\end{equation}\nto the data, where $I_0$ is the midplane intensity (at $b = 0\\degree$), $D$ is the distance to the arm in the midplane, and $\\tan{\\left(b\\right)}$ is the tangent of the Galactic latitude ($\\left|z\\right| = D \\tan{\\left|b\\right|}$). The slope of the data, shown in Figure \\ref{fits_2} and Figure \\ref{fits}, is a direct measure of $D \/ H_{n_e^2}$. Fits are constrained to Galactic latitudes $\\left| b \\right| \\gtrsim 5\\degree$ to leave out \\ion{H}{2} regions in the plane. The range in Galactic latitudes is allowed to vary slightly while fitting to account for a shifting midplane (relative to $b = 0\\degree$) and to mask local sources of emission. \n\nEstimates of distances and Galactocentric radii for the Sagittarius arm are from \\citet{Reid2016}, where constraints are made using parallax measurements of masers. The Carina arm does not have parallax-based distance constraints, so kinematic distances are used assuming $v_{circ} = \\unit[220]{km s^{-1}}$ and $R_{\\odot} = \\unit[8.34]{kpc}$, slightly adjusted to the Bayesian distance estimates from \\citet{Reid2016}. Distance uncertainties for the Carina arm are inherently large, and these systematic errors are not considered in the rest of our analysis. \n\nThe following sections show derived EM scale heights for the near Sagittarius arm and the near and far portions of the Carina arm (see Figures \\ref{sag}, \\ref{near_car}, and \\ref{far_car} and Table \\ref{sum}). Each data point corresponds to the slope of a $1\\degree$ vertical slice of the $I_{\\text{H}\\alpha}$$~$map along positive (red) and negative (blue) Galactic latitude. Occasional gaps in data points are the result of local contamination or data approaching background noise levels. All error bars are statistical errors on the measured slope and assume no uncertainty in distance. \n\n\\begin{figure}[htb!]\n\\label{fits_2}\n\\epsscale{1.15}\n\\plotone{sag_h-eps-converted-to.pdf}\n\\caption{H$\\alpha$~intensity as a function of $\\tan{\\left(b\\right)}$ between $43.5\\degree < l < 44.5\\degree$ for the near Sagittarius arm. The red and blue lines show fits of Equation \\ref{hfit} above and below the plane, respectively. Note the large extinction feature near $b = 0\\degree$ caused by the Aquila Rift.
The significantly different profiles above and below the plane show how the filamentary structure extends far below the midplane.}\n\\end{figure}\n\n\\begin{figure}[htb!]\n\\label{fits}\n\\epsscale{1.15}\n\\plotone{h_fit-eps-converted-to.pdf}\n\\caption{H$\\alpha$~intensity as a function of $\\tan{\\left(b\\right)}$ between $307\\degree < l < 308\\degree$ for the near (top) and far (bottom) Carina arm. The red and blue lines show fits of Equation \\ref{hfit} above and below the plane, respectively. The dashed line marks the location of the midplane, as determined by fitting a single Gaussian to the data. Note the lack of significant extinction near the midplane.}\n\\end{figure}\n\n\n\n\\subsection{Near Sagittarius arm} \\label{nearsag}\n\nResults for the near Sagittarius arm are in Figure \\ref{sag}, with both $\\nicefrac{D}{H_{n_e^2}}$ and $H_{n_e^2}$ shown as a function of Galactic longitude and Galactocentric radius. Our derived EM scale height generally agrees with the scale heights for the Scutum-Centaurus and Perseus arms, shown as dashed and dotted lines, respectively, in Figure \\ref{sag} \\citep{Hill2014, Haffner1999}. \n\nIf a constant scale height is assumed along this section, the trend for $\\nicefrac{D}{H_{n_e^2}}$ to increase at higher Galactic longitude agrees with known distances to the arm. A drop in the value of $\\nicefrac{D}{H_{n_e^2}}$ at larger Galactocentric radii is seen, but the derived EM scale heights do not show this trend. The relationship between $\\nicefrac{D}{H_{n_e^2}}$ and Galactocentric radius is therefore likely the result of differences in the distance $D$ to the spiral arm segment. \n\nFilament-like structures extending towards negative Galactic latitudes from the midplane are seen around $40\\degree < l < 46\\degree$ and $28\\degree < l < 35\\degree$. Figure \\ref{sag} shows how the negative Galactic latitude slope measurements (in blue) are generally flattened out for $l > 40\\degree$ when compared with positive Galactic latitudes (in red). Many of the measured scale height outliers (in blue) correspond to these Galactic longitudes, where this filament-like structure strongly extends the height of the ionized gas below the plane. The filamentary features can also be seen in the full-sky H$\\alpha$$~$map from \\citet{Finkbeiner2003}, which combined preliminary WHAM velocity-channel maps with higher spatial resolution H$\\alpha$$~$imaging from the Virginia Tech Spectral Line Survey \\citep[VTSS;][]{Dennison1998} and the Southern H-Alpha Sky Survey Atlas \\citep[SHASSA;][]{Gaustad2001}. Further analysis of these filament features is beyond the scope of this paper. \n\n\\begin{figure*}[htb!]\n\\label{sag}\n\\epsscale{1.15}\n\\plotone{Sag-eps-converted-to.pdf}\n\\caption{Plot showing the measured ratio, $\\nicefrac{D}{H_{n_e^2}}$, of distance ($D$) to the EM scale height ($H_{n_e^2}$), and the estimated $H_{n_e^2}$, along the near Sagittarius arm as a function of Galactic longitude and Galactocentric radius. Red and blue crosses indicate positive and negative Galactic latitudes, respectively. Uncertainties for $\\nicefrac{D}{H_{n_e^2}}$ are all smaller than the plotting symbols. All errors here assume zero uncertainty in $D$. The shaded regions show the median ($\\pm$ median absolute deviation from the median) of $H_{n_e^2}$ above (red) and below (blue) the midplane. The dashed and dotted lines represent $H_{n_e^2}$ for the Scutum-Centaurus (extinction-corrected) and Perseus arms, respectively \\citep{Hill2014,Haffner1999}.
For $l > 40\\degree$ ($R_G \\lesssim \\unit[6.62]{kpc}$), we see evidence for an extended filamentary structure below the plane, resulting in inconsistent measurements above and below the plane. }\n\\end{figure*}\n\n\\subsection{Near Carina Arm}\n\nFigure \\ref{near_car} shows results for the near Carina arm. The top panel of Figure \\ref{near_car} shows $\\nicefrac{D}{H_{n_e^2}}$ as directly measured from WHAM, and does not make any assumptions on distance. $\\nicefrac{D}{H_{n_e^2}}$ has a minimum value near $l = 320\\degree$, corresponding with the expected Galactic longitude of minimum distance to the near Carina arm. Estimated distances are generally low ($\\approx \\unit[1-2]{kpc} $) and the l-v track is not well separated from local emission at $v_{LSR} = 0$ km\\,s$^{-1}$. Local emission sources and \\ion{H}{2} regions dominate the observed emission up to significant Galactic latitudes ($\\left|b\\right| \\approx 10\\degree$) and cause some asymmetric measurements about the midplane. However, there is still good agreement, albeit with larger scatter, with the EM scale heights along the Scutum-Centaurus and Perseus arms \\citep{Hill2014, Haffner1999}.\n\n\\begin{figure*}\n\\label{near_car}\n\\epsscale{1.15}\n\\plotone{Near_Car-eps-converted-to.pdf}\n\\caption{Plot showing the measured ratio, $\\nicefrac{D}{H_{n_e^2}}$, of distance ($D$) to the EM scale height ($H_{n_e^2}$), and the estimated $H_{n_e^2}$, along the near Carina arm as a function of Galactic longitude and Galactocentric radius. Red and blue crosses indicate positive and negative Galactic latitudes, respectively. Uncertainties for $\\nicefrac{D}{H_{n_e^2}}$ are all smaller than the plotting symbols. All errors here assume zero uncertainty in $D$. Uncertainties in $D$ for the near Carina arm are large and estimates of both $D$ and Galactocentric radii are used. The shaded regions show the median ($\\pm$ median absolute deviation from the median) of $H_{n_e^2}$ above (red) and below (blue) the midplane. The dashed and dotted lines represent $H_{n_e^2}$ for the Scutum-Centaurus and Perseus arms, respectively \\citep{Hill2014,Haffner1999}. The gap in points from $315$\\degree to $318$\\degree is due to an inability to fit a linear regression to the data.}\n\\end{figure*}\n\n\\subsection{Far Carina Arm}\n\nFigure \\ref{far_car} shows results for the far Carina arm. $\\nicefrac{D}{H_{n_e^2}}$ is roughly constant around $282\\degree < l < 310\\degree$, and generally larger for $l > 310\\degree$, especially at negative Galactic latitudes (blue points). The far Carina arm increases in distance as a function of Galactic longitude, resulting in an EM scale height that increases with increasing Galactic longitude and Galactocentric radius. All derived EM scale heights along this far portion of the Carina arm are significantly larger than those of the Scutum-Centaurus and Perseus arms \\citep{Hill2014, Haffner1999}. EM scale heights reach up to $H_{n_e^2} \\approx \\unit[2.5]{kpc}$ for $l > 302\\degree$. Near tangency, scale heights are lower, but still much larger than in other spiral arm sections, with $H_{n_e^2} \\approx \\unit[0.5 - 1]{kpc}$. This surprising result is further discussed in Section \\ref{disc}.\n\n\n\\begin{figure*}\n\\label{far_car}\n\\epsscale{1.15}\n\\plotone{Far_Car-eps-converted-to.pdf}\n\\caption{Plot showing the measured ratio, $\\nicefrac{D}{H_{n_e^2}}$, of distance ($D$) to the EM scale height ($H_{n_e^2}$), and the estimated $H_{n_e^2}$, along the far Carina arm as a function of Galactic longitude and Galactocentric radius.
Red and blue crosses indicate positive and negative Galactic latitudes, respectively. Uncertainties for $\\nicefrac{D}{H_{n_e^2}}$ are all smaller than the plotting symbols. All errors here assume zero uncertainty in $D$. Uncertainties in $D$ for the far Carina arm are large and estimates of both $D$ and Galactocentric radii are used. The shaded regions show the median ($\\pm$ median absolute deviation from the median) of $H_{n_e^2}$ above (red) and below (blue) the midplane. The dashed and dotted lines represent $H_{n_e^2}$ for the Scutum-Centaurus and Perseus arms, respectively \\citep{Hill2014,Haffner1999}. The gap in points from $315$\\degree to $318$\\degree is due to an inability to fit a linear regression to the data.}\n\\end{figure*}\n\n\\subsection{Other Spiral Arms}\n\nTable \\ref{sum} and Figure \\ref{summary} show a summary of $\\nicefrac{D}{H_{n_e^2}}$ and $H_{n_e^2}$ for the Sagittarius-Carina arm, along with the Scutum-Centaurus and Perseus arms from \\citet{Hill2014} and \\citet{Haffner1999}. For the near Sagittarius arm, we report a median value for positive and negative Galactic latitudes for $20\\degree < l < 52\\degree$. Positive Galactic latitude measurements are likely more representative of the WIM, as the negative side shows filament-like structures at $l > 40\\degree$. Median values are shown for the near Carina arm at $286\\degree < l < 332\\degree$, excluding the tangency region from $282\\degree < l < 286\\degree$. For the far Carina arm, four median values are shown for $H_{n_e^2}$ across $282\\degree < l < 332\\degree$.\n\n\\section{Discussion} \\label{disc}\n\n\\subsection{Detection of the Far Carina Arm} \\label{disc_far}\n\nObserving a spiral arm at large distances ($D > \\unit[10]{kpc}$) is inherently difficult. WHAM observations clearly detect emission at positive LSR velocities in the fourth quadrant of the Milky Way, where most observed emission is expected to be at negative LSR velocities. In the observed directions, this emission most closely corresponds with the CO l-v track of the far Carina arm from \\citet{Reid2016}. However, it is difficult to be certain if the observed faint emission is from this spiral structure or from a local expanding bubble or high-velocity cloud. The CO track is known not to follow a constant Galactocentric radius, and similar H$\\alpha$~emission is not seen beyond the Carina arm tangency point, ruling out the possibility of a ring structure. The following paragraphs list the features and tests we use to argue for and against the emission corresponding to a distant spiral structure. The arguments in favor tend to outweigh those against, and possible explanations to account for the counterarguments are provided. \n\n\\textbf{1. Observed receding perspective effect.} Although distances to the far Carina arm are not well constrained, the heliocentric distance to the spiral arm should increase significantly as a function of Galactic longitude beyond the tangency point. The H$\\alpha$$~$and $\\left[ \\text{\\ion{S}{2}} \\right]$$~$maps of the far Carina arm clearly show this perspective effect (see solid white lines of constant height, $z$, in Figure \\ref{maps}). \n\n\\textbf{2. Non-axisymmetric structure.} It is well understood that spiral structures in galaxies are not symmetric across the minor axis. To check for this, we make a symmetric map along the first quadrant, where negative LSR velocities kinematically imply $R_G > R_{\\odot}$. WHAM data are integrated along the same l-v track used in the fourth quadrant, but with longitude values reflected into the first quadrant and the sign of the velocity flipped to negative values.
The resulting map, along with the original map in the fourth quadrant, is shown in Figure \\ref{axisym}. The two maps are distinct, and the first quadrant does not show a similar structure in H$\\alpha$~emission.\n\n\\begin{figure*}[htb!]\n\\label{axisym}\n\\epsscale{1}\n\\plotone{ha_axisym-eps-converted-to.pdf}\n\\caption{Map of $I_{\\text{H}\\alpha}$~showing the non-axisymmetric nature of the observed far Carina arm structure. The right side shows the far Carina arm in the fourth quadrant of the Galaxy, as seen in Figure \\ref{maps}, and the left side shows the equivalent map in the first quadrant of the Galaxy. Data are integrated following the same l-v track and velocity width used for the far Carina arm, but with longitude reflected to the first quadrant and velocity reflected to negative values. These maps are significantly different.}\n\\end{figure*}\n\n\\textbf{3. Symmetric across the midplane.} Spiral arms trace the midplane of the Galaxy. The H$\\alpha$$~$and $\\left[ \\text{\\ion{S}{2}} \\right]$$~$emission is generally symmetric in Galactic latitude, as seen in the maps of Figure \\ref{maps} and Figure \\ref{mapss} and in the Galactic latitude profiles of H$\\alpha$$~$emission plotted in Figure \\ref{fits}. \n\n\\textbf{4. Location of midplane in Galactic latitude.} Figure \\ref{fits} shows that the midplane of the Galaxy at the near and far Carina arms lies at distinctly different latitudes, with the near arm leaning towards a midplane location of $b > 0\\degree$ and the far arm towards $b < 0\\degree$. A single Gaussian fit to the intensity profile as a function of Galactic latitude locates the midplane. This increases our confidence in separating the near and far components of the arm along a single line of sight. The distinct behavior of these two physical regions helps rule out the possibility of observing an extended wing of closer, negative velocity gas reaching out to positive velocities in the fourth quadrant. The LAB survey \\citep{Kalberla2005} shows, following the same method, that the midplane location of the neutral gas in the far Carina arm also tends towards $b < 0\\degree$. Additionally, at the large distances associated with the far Carina arm, the dust lane in the plane would appear to be very narrow (within one WHAM beam). The intensity as a function of height still shows exponential behavior down to low latitudes close to $b \\approx 0\\degree$.\n\n\\textbf{5. Peaks in emission spectra.} Figure \\ref{bv_peak} shows a sample of peaks of spectra decomposed into Gaussian components, shown in blue. Local maxima of the observed spectra are shown in red. Seeing local maxima correspond to peaks in the Gaussian components increases our confidence that the observed emission feature is from a real Galactic source, rather than a wide wing from local emission at $v_{LSR} \\approx \\unit[0]{km~s^{-1}}$.\n\n\\textbf{6. Reverse distance argument.} Our method provides a direct measure of the ratio of the distance to the arm segment, $D$, to the EM scale height, $H_{n_e^2}$. If we assume the scale height of the far Carina arm is consistent with other spiral arm segments (see Table \\ref{sum}), then we can estimate a distance to the arm segment based on our measured values. Using a \"nominal\" value of $H_{n_e^2} = \\unit[0.3]{kpc}$, the average distance is $D \\approx \\unit[2]{kpc}$. This places the far Carina arm at around the same distance as the near Carina arm.
However, this \"reverse distance argument\" is only valid if the scale height of the WIM around all spiral arm segments is constant and consistent with previous measurements. The Sagittarius-Carina arm is much different than both the Perseus and Scutum-Centaurus arms, so a direct comparison may not be valid. The Carina arm shows up much more clearly in gas diagnostics \\citep{Cohen1985, Dame2007, Reid2016} than through counts of old stars, in which the Perseus and Scutum-Centaurus arms stand out \\citep{Benjamin2009}. This difference suggests the Perseus and Scutum-Centaurus arms have different potential wells than the Carina arm and contain more dense gas and more star formation activity. An explanation of the correlations between our anomalous scale height measurements and other oddities in the stellar and molecular gas structure of the Carina arm requires further study and is beyond the scope of this paper.\n\n\\textbf{7. CO and H$\\alpha$$~$velocity offset.} Our l-v track for the Carina arm is defined using CO emission but the velocity centroids of the H$\\alpha$$~$spectra for the far Carina arm are offset from the CO emission to slightly more positive velocities (see Figure \\ref{bv_peak}). However, there is no reason to believe the cold molecular components of the ISM are spatially and kinematically coincident with the H$\\alpha$~emitting WIM. Other galaxies, such as M$51$ show such a physical offset between CO gas and diffuse ionized gas \\citep{Schinnerer2013}. This offset could be consistent with where star formation takes place within a spiral arm. Additionally, the trends in the H$\\alpha$~emission as a function of longitude are still closely correlated with the CO data.\n\nThere is clearly-detected emission at velocities that most closely lie near the expected velocities for the far Carina arm, and a negative velocity counterpart is not seen as would be expected for an expanding local bubble. Based on these arguments and tests, we see strong evidence for the emission originates in a distant spiral arm. \n\n\\subsection{Physical Conditions}\n\nThe $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio is known to show large variations while still having a strong correlation with H$\\alpha$~intensity \\citep{Haffner2009}. The narrow confidence intervals despite the large scatter in Figure \\ref{ratio_iha} illustrates this. A power-law between the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio and $I_{\\text{H}\\alpha}$~is well supported along the Carina arm (see Figure \\ref{ratio_iha}), along with local gas, the Perseus arm, and the Scutum-Centaurus arm \\citep{Haffner1999, Hill2014}. The most likely physical explanation for this relationship is a change in the temperature of the gas. However, this attribution is not straightforward, unlike with the $\\left[ \\text{\\ion{N}{2}} \\right]$~\/ H$\\alpha$~line ratio which strongly correlates with the temperature of the gas \\citep{Haffner1999, Madsen2006, Haffner2009}. 
\n\n\\citet{Otte2002} showed the line ratio depends on many physical conditions:\n\n\\begin{equation}\n\\frac{\\text{$\\left[ \\text{\\ion{S}{2}} \\right]$}}{\\text{H$\\alpha$}} = \\left( 7.49 \\times 10^5 \\right) T_4^{0.4}~e^{\\nicefrac{2.14}{T_4}}~\\frac{\\text{H}}{\\text{H}^+}~\\frac{\\text{S}}{\\text{H}}~\\frac{\\text{S}^+}{\\text{S}}\n \\label{eq_ratio}\n\\end{equation}\n\n\\noindent where $T_4$ is the temperature of the emitting gas (in units of $\\unit[10^4]{K}$), $\\nicefrac{\\text{H}^+}{\\text{H}}$ and $\\nicefrac{\\text{S}^+}{\\text{S}}$ are the hydrogen and sulfur ionization fractions, and $\\nicefrac{\\text{S}}{\\text{H}}$ is the sulfur abundance. Following previous work \\citep{Hill2014}, we assume the sulfur abundance does not undergo large variations in the ISM, and adopt the \\citet{Reynolds1998} result of a hydrogen ionization fraction of $\\nicefrac{\\text{H}^+}{\\text{H}} \\gtrsim 0.9$ in the WIM. Then, variations in the line ratio arise either from changes in the gas temperature or from changes in the sulfur ionization fraction. \n\nIn-depth photoionization modeling of sulfur would disentangle this relationship, but such models are difficult to construct due to the poorly constrained temperature dependence of the dielectronic recombination rate of sulfur \\citep{Ali1991,Barnes2014}. Instead, previous studies \\citep{Haffner1999, Madsen2006, Haffner2009} show that variations in the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio largely track variations in temperature (larger $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~$\\implies$ larger $T_4$) by showing strong correlation with variations in the $\\left[ \\text{\\ion{N}{2}} \\right]$~\/ H$\\alpha$~line ratio. Meanwhile, the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ $\\left[ \\text{\\ion{N}{2}} \\right]$~line ratio traces changes in the sulfur ionization state. Future work will incorporate ongoing $\\left[ \\text{\\ion{N}{2}} \\right]$~observations with WHAM to fully understand the temperature distribution of the WIM throughout the arm. \n\nFollowing this reasoning and the results of statistical tests (see Section \\ref{ratio_stat}), we conclude that the temperature of the emitting gas (as traced by the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio) is more strongly correlated with in-situ electron density (as traced by $I_{\\text{H}\\alpha}$~and the EM) than with height above\/below the midplane ($\\left|z\\right|$). These results further support the conclusions of \\citet{Hill2014}: the heating mechanisms involved for the WIM at different $n_e$ do not vary greatly with $\\left|z\\right|$. Mechanisms other than photoionization heating are necessary \\citep[such as cosmic ray heating, magnetic reconnection, dissipation of turbulence, photoelectric emission from dust grains;][]{Reynolds1992,Reynolds1999,Otte2002, Barnes2014, Rand1998, Wiener2013}.\n\nThis behavior further suggests $I_{\\text{H}\\alpha}$~more closely correlates with the electron density in the emitting gas, rather than with distance from the plane. The observed enhancement in the line ratio at larger heights is strong evidence for the observed H$\\alpha$~emission originating from the diffuse in-situ gas of the WIM, as opposed to primarily consisting of scattered light from higher density \\ion{H}{2} regions as modeled by \\citet{Seon2012}. Their model suggests an increase in distance from O and B stars in \\ion{H}{2} regions explains the observed line ratio enhancement.
However, our results for the Sagittarius-Carina arm, together with those for the Scutum-Centaurus arm \\citep{Hill2014}, suggest the line ratio more closely depends on the gas density as opposed to height. In our preferred picture, the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio traces variations in temperature and gas density within the WIM, not distance from OB stars as expected by scattered-light models. \n\n\\section{Conclusions and Summary} \\label{summary}\n\nWHAM is used to study the structure and physical conditions of the warm $\\left( \\approx \\unit[8000]{K} \\right)$ ionized medium throughout the Sagittarius-Carina arm. Faint emission from the spiral arm is kinematically isolated using a CO-informed longitude-velocity track. Emission is detected across a large range in Galactic longitude (near Sagittarius: $20\\degree < l < 52\\degree$; near\/far Carina: $282\\degree < l < 332\\degree$), spanning a large range in Galactocentric radius $(\\unit[6]{kpc} \\lesssim R \\lesssim \\unit[11]{kpc})$ and heliocentric distance $(\\unit[1]{kpc} \\lesssim D \\lesssim \\unit[15]{kpc})$. The H$\\alpha$~intensity as a function of height above and below the plane suggests an EM scale height ($H_{n_{e}^2} \\approx \\unit[300]{pc}$) consistent with other spiral arms in the Galaxy (Scutum-Centaurus, Perseus) for the near Sagittarius and near Carina arms. Emission seen around the far Carina arm suggests significantly larger scale heights ($H_{n_{e}^2} > \\unit[1000]{pc}$). \n\nThe anomalously large scale heights along the far Carina arm suggest a significant physical difference in the environment surrounding this region of the Galaxy. We offer a few potential explanations for the large scale heights observed, but a complete explanation requires a more in-depth study of the trends in other ISM components within and around this distant spiral arm. At large Galactocentric radii, the scale height of the WIM in the far Carina arm tends to increase rapidly with increasing radius. However, as seen in Figure \\ref{summary}, the Perseus arm has a much smaller scale height at similar Galactocentric radii. \nIndependent measures of the scale height of ionized gas from pulsar dispersion measures \\citep{Gaensler2008, Savage2009} match well with the observed trend of increasing scale height as a function of Galactocentric radius along the Sagittarius-Carina arm (see Figure \\ref{summary}).\n\nThe large scale heights for the far Carina arm suggest the star formation rate is much different here than within the Perseus or Scutum-Centaurus arm, but there is no direct evidence or measurement of the star formation activity along the far reaches of this arm. Further study of the differences in the midplane conditions and star formation along the far Carina arm is beyond the scope of this work. Future work will incorporate ongoing H$\\beta$~emission observations from WHAM to better analyze the ionized gas conditions near the midplane.\n\nStatistical analysis of the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio throughout the near and far Carina arm shows a stronger correlation between the line ratio and H$\\alpha$~intensity (inverse power law) than with height, $z$, above the midplane (linear). This supports the interpretation that the $\\left[ \\text{\\ion{S}{2}} \\right]$~\/ H$\\alpha$~line ratio primarily traces variations in gas temperature and density.
We interpret this as evidence that the majority ($\\gtrsim 80\\%$) of the observed diffuse H$\\alpha$~emission originates from in-situ gas in the WIM, as opposed to scattered light from \\ion{H}{2} regions. \n\nFuture work will use $\\left[ \\text{\\ion{N}{2}} \\right]$~emission line observations from WHAM to more closely analyze the temperature behavior of the WIM around this spiral arm structure. Ongoing observations of H$\\beta$~will allow for accurate dust extinction corrections throughout all observed lines of sight. \n\n\\acknowledgments\nWe acknowledge the support of the U.S. National Science Foundation (NSF) for WHAM development, operations, and science activities. The survey observations and work presented here were funded by NSF awards AST-0607512 and AST-1108911. R.A.B. acknowledges support from the NASA grants NNX10AI70G and HST-GO-13721.001-A. A.S.H. acknowledges support from NASA grant HST-AR-14297.001-A. We would also like to acknowledge the support of NSF REU Site grant AST-1004881 and the contributions of undergraduate researchers Peter Doze (Texas Southern University), Andrew Eagon (University of Wisconsin-Whitewater), and Alex Orchard (University of Wisconsin-Madison) to the initial analysis of the WHAM all-sky survey results.\n\n\n\\bibliographystyle{aasjournal}\n\n\\subsection{Related Work}\nSince the original proposal by \\textcite{greydanus}, HNNs have generated much scientific interest. To name a few, generative~\\parencite{HGNs}, recurrent~\\parencite{Chen2020SymplecticRNN} and constrained~\\parencite{Zhong2020SymplecticODENet} versions, as well as Lagrangian Neural Networks~\\parencite{LNN}, have been proposed.\n\n\\paragraph{Improvements of HNNs using symplectic integrators}\nAn adapted loss function for HNNs, derived from a symplectic numerical integration scheme, has been concurrently explored in several recent publications. \\textcite{zhu2020deep} use the implicit midpoint rule in an adapted loss and show that, in this case, the SHNN can learn an exact Hamiltonian given by the modified equation. Similarly, \\textcite{Chen2020SymplecticRNN} make a modification using the leapfrog integrator before proposing a recurrent, multi-step training method, but only for separable Hamiltonians. Neither paper uses the modified equation to correct the learned Hamiltonian. Independently, \\textcite{xiong2021nonseparable} use explicit higher-order symplectic methods during training, although they need to consider an augmented Hamiltonian on an augmented phase space with double the dimensionality.\nFinally, \\textcite{dipietro-ssinn} use strong assumptions (including separability) to inform their architecture and train with a fourth-order symplectic integrator that is then explicit. In return, they succeed in training with very small, sparse datasets.\n\n\\paragraph{Learning Hamiltonians from data using other architectures}\nBeyond the immediate improvements of HNNs, there exist other proposals, notably other architectures, to learn Hamiltonians from data. For example, \\textcite{Zhong2020SymplecticODENet} learn both a strongly parametrized and a general Hamiltonian in conjunction with the Neural ODE~\\parencite{chen-neural-ode} model. They also learn systems under the influence of external forces. Further, \\textcite{jin-sympnets} introduce a new architecture to learn any symplectic map and prove corresponding approximation theorems. They use SHNNs with the implicit midpoint rule as their baseline.
\\textcite{taylornets} also directly predict a future state of the system, but they use a separable Hamiltonian in conjunction with a fourth-order symplectic method that is then explicit.\n\n\n\\subsection{Hamiltonian Neural Networks}\\label{subsec:analysis-hnn}\n\nIn contrast to obtaining trajectories from a known Hamiltonian, the purpose of Hamiltonian Neural Networks (HNNs) \\parencite{greydanus} is to learn a Hamiltonian from data, composed of observed trajectories $y(t)$ which solve Hamilton's equation~\\eqref{eq:hamilton}. More specifically, we consider a data point to be a couple $(y_0, y_1 = \\phi_h(y_0))$ of two consecutive snapshots of the state of the system separated by a time $\\Delta t = h$. Having two such snapshots is crucial to obtain information about the evolution of the system and to calculate a finite difference approximation $(y_1-y_0)\/h$ of the time derivative $\\dot y$. Since any trajectory with $n$ snapshots can be split into $n-1$ such couples, we shall consider all our data to be in this form.\n\nDenote the HNN by the function $\\hat H(p, q)$, which implicitly also depends on all weights and biases of the chosen neural network; see Figure~\\ref{fig:hnn-architecture} for a sketch of the architecture of HNNs. In the spirit of \\textcite{greydanus}, its loss function\\footnote{Note that \\textcite{greydanus} used the analytic gradient of the true Hamiltonian as the target for most tasks, which yields a different mathematical problem, i.e. learning a known scalar function from its gradient. The present article only uses finite differences as would be obtained from real data.} for one data point $(y_0, y_1)$ is\n\\begin{equation}\\label{eq:loss-hnn}\n\t\\mathcal L_\\text{HNN} = \\norm\\Big{\\frac{y_1 - y_0}{h} - J^{-1}\\grad \\hat H(y_0)}^2_{L^2}\n\\end{equation}\nwhich we shall rewrite into the form\n\\begin{equation}\\label{eq:loss-hnn-rewritten}\n\t\\mathcal L_\\text{HNN} = h^{-2} \\norm\\Big{ y_1 - \\smash{\\underbrace{\\qty(y_0 + hJ^{-1}\\grad \\hat H(y_0))}_{=\\, \\hat y_1}} }^2_{L^2}.\n\\end{equation}\n\\vspace{1ex} %\n\nNow it becomes clear that, up to a constant factor of $h^2$, we are effectively integrating the HNN's prediction $\\hatH$ using the forward Euler method and comparing the result $\\hat y_1$ to the real observation $y_1$ in the loss function. We can do better than forward Euler!\n\n\\begin{remarks}\\leavevmode\n\\begin{enumerate}[label={(\\roman*)},nosep]\n\t\\item In fact, using the forward Euler method here means that there does not even exist a function~$\\hatH$ such that the loss could be identically zero everywhere in phase space --- an artificial lower bound for the loss is introduced, frustrating the training procedure. This impossibility manifests itself as a mismatch of order $h$ in the mixed second derivatives $\\grad_{pq} \\hatH$ and $\\grad_{qp} \\hatH$ of the function to be learned, see Appendix~\\ref{ap:non-existence-H} for the details.\n\t\\item One may argue that, instead, the true derivative $\\dot y$ should be more accurately approximated by higher-order difference quotients. Yet, this would only involve operations on the dataset without the HNN itself, so this point of view is rather limited. One would not be able to exploit the real system's symplecticity as explained next.\n\\end{enumerate}\n\\end{remarks}
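\n\nTo make this concrete, the baseline loss~\\eqref{eq:loss-hnn-rewritten} can be sketched in a few lines of PyTorch. This is a minimal sketch rather than the reference implementation: \\texttt{H\\_net} stands for any network mapping $y = (p, q)$ to one scalar per sample, the helper names are ours, and averaging over the batch is one possible choice of norm.\n\n\\begin{verbatim}\nimport torch\n\ndef hnn_vector_field(H_net, y):\n    # J^{-1} grad H(y) for canonical coordinates y = (p, q)\n    y = y.requires_grad_(True)\n    grad_H = torch.autograd.grad(H_net(y).sum(), y, create_graph=True)[0]\n    dH_dp, dH_dq = grad_H.chunk(2, dim=-1)\n    return torch.cat([-dH_dq, dH_dp], dim=-1)   # (dp/dt, dq/dt)\n\ndef hnn_loss(H_net, y0, y1, h):\n    # forward-Euler prediction y0 + h J^{-1} grad H(y0)\n    y1_pred = y0 + h * hnn_vector_field(H_net, y0)\n    return ((y1 - y1_pred) ** 2).mean() / h**2\n\\end{verbatim}\n\n\\noindent Training with this loss runs into the artificial lower bound discussed in remark (i), which motivates the symplectic schemes introduced next.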
\n\n\n\\subsection{Symplectic methods to the rescue}\\label{subsec:shnn-introduction}\nAccording to the theory of geometric numerical integration~\\parencite{GNI}, the forward Euler method should be replaced by a symplectic method. The two simplest symplectic methods are the \\emph{symplectic Euler method}\n\\begin{equation}\\label{eq:symp-Euler}\n\tp_1 = p_0 - h \\grad_q H(p_1, q_0), \\qquad q_1 = q_0 + h \\grad_p H(p_1, q_0)\n\\end{equation}\nand the \\emph{implicit midpoint rule}\n\\begin{equation}\\label{eq:midpoint}\n\ty_1 = y_0 + h J^{-1} \\grad H\\qty\\Big(\\frac{y_0 + y_1}{2}).\n\\end{equation}\nBoth methods are implicit and are obtained by only changing the point of evaluation of the Hamiltonian vector field. Abstracting the choice of a specific integration scheme by a function $s(y_0, y_1)$ (this covers all methods of interest in this article), we obtain the loss function for Symplectic Hamiltonian Neural Networks (SHNNs):\n\\begin{equation}\\label{eq:loss-shnn}\n\\begin{gathered}\n\t\\mathcal L_\\text{SHNN} = \\norm\\Big{\\frac{y_1 - y_0}{h} - J^{-1}\\grad \\hat H(s(y_0, y_1))}^2_{L^2} \\\\\n\t\\text{(provided that $s(y_0, y_1)$ gives rise to a symplectic integration method)}\n\\end{gathered}\t\n\\end{equation}\n\nUsing a symplectic method ensures that there exists a function $\\hatH$, the \\emph{modified Hamiltonian}, which the SHNN can theoretically learn to arbitrary precision (assuming that the neural network is large enough and that the dataset sufficiently covers the input space), as encapsulated by the following proposition. Note that the solutions of a general Hamiltonian system are not necessarily well-behaved for long times, so all following theoretical results are only rigorously valid for small enough $h$.\n\n\\begin{proposition}\\label{prop:1}\n\tLet $H : \\RR^{2n} \\rightarrow \\RR$ be a smooth Hamiltonian and $\\dot y = J^{-1} \\grad H(y)$ be the corresponding Hamiltonian system. Fix $h>0$. Denote the true flow of the system after time $h$ by $\\phi_h$, such that any initial condition $y_0 = (p_0, q_0)$ uniquely defines $\\phi_h(y_0) =: y_1 = (p_1, q_1)$. Then,\n\t\\begin{enumerate}[label={(\\alph*)}]\n\t\t\\item (symplectic Euler) there exists a smooth function $\\hatHse : \\RR^{2n} \\rightarrow \\RR$ such that\n\t\t\\begin{equation}\n\t\t\tJ^{-1}\\grad \\hatHse (p_1, q_0) = \\frac{y_1 - y_0}{h} %\n\t\t\\end{equation}\n\t\tfor all $y_0 \\in \\RR^{2n}$ where $y_1 = \\phi_h(y_0)$ is well-defined.\n\t\t\n\t\t\\item (implicit midpoint) there exists a smooth function $\\hatHmp : \\RR^{2n} \\rightarrow \\RR$ such that\n\t\t\\begin{equation}\n\t\t\tJ^{-1}\\grad \\hatHmp \\qty\\Big(\\frac{y_1 + y_0}{2}) = \\frac{y_1 - y_0}{h} %\n\t\t\\end{equation}\n\t\tfor all $y_0 \\in \\RR^{2n}$ where $y_1 = \\phi_h(y_0)$ is well-defined.\n\t\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n\tThis is exactly the result of Lemma 5.3 in \\parencite[Ch.~VI]{GNI}, using $\\hatHse = \\frac{1}{h} S^1$ or $\\hatHmp = \\frac{1}{h} S^3$.\n\\end{proof}\n\n\\begin{remarks}\\leavevmode\n\\begin{enumerate}[label={(\\roman*)},nosep]\n\t\\item If the Hamiltonian is not globally defined on $\\RR^{2n}$, technical issues regarding the domain of definition of $\\hatH$ come into play. For a star-shaped, open domain of definition of $H$, the result can be adapted, but more generally, the function $\\hatH$ will only exist locally.\n\t\\item A more general result of existence for any general symplectic integration method is given by \\textcite[Sec.~2.2]{chartier} using so-called B-series, a generalized, formal Taylor expansion.
What they call the ``modified differential equation,'' written with a modified Hamiltonian, is exactly what an SHNN learns.\n\t\\item Note that when numerically solving ODEs, implicit methods like the symplectic Euler method and the midpoint rule normally require fixed point iterations at each step. However, during the training of HNNs we are solving the reverse problem, for which the true trajectories $(y_0, y_1)$ are already known. This means that symplectic training with implicit methods comes at the same computational cost as training with an explicit method.\n\t\\item One property of the learned Hamiltonian should be noted: \\emph{Integrating the modified Hamilton equation of the learned $\\hatH$ with that same method and same time step $h$ as used during training, one will obtain the true flow $\\phi_H^h$ of the real Hamiltonian.} This is the statement of Theorems~5.7 and 5.8 of \\parencite[Ch.~VI]{GNI} (noting that the necessary assumptions are satisfied due to Proposition~\\cref{prop:2} below). However, there is no free lunch: The true intermediate states of the system will not be accessible nor predictable with this method. In fact, reducing the integration time step in this case will worsen the quality of the trajectory.\n\\end{enumerate}\n\\end{remarks}\n\n\n\\subsection{Correction of the learned Hamiltonian}\\label{subsec:shnn-correction}\nWhat's more, we can derive a formal series for the modified Hamiltonian $\\hatH$ learned by an SHNN in terms of the real Hamiltonian $H$ and its derivatives, depending on the integration method used. This allows us not only to understand exactly what our model learns and where it draws its predictive power from, but also to correct the model after training to an arbitrary order. This way, we can learn the real, physical Hamiltonian $H(p, q)$ to arbitrary precision, purely from discretized snapshots of trajectory data, without any information about the true gradients or vector fields.\n\nMathematically, Hamilton's equation is the characteristic equation of the Hamilton-Jacobi partial differential equation (PDE), obtained by considering the flow of the true Hamiltonian system after a variable time $t \\in \\RR$. Rendering explicit the fact that the modified Hamiltonian naturally depends on the time step $h$ fixed in Proposition~\\cref{prop:1}, we write $\\hatH(p, q, t=h)$. Since the flow is a smooth function of time when it exists, $\\hatH$ will also be smooth in all its variables. Calculating its time derivative leads to the following.\n\n\\begin{proposition}\\label{prop:2}\n\tLet $H : \\RR^{2n} \\rightarrow \\RR$ be a smooth Hamiltonian. For both cases of Proposition~\\ref{prop:1}, there exists a neighborhood of $t=0$ where the respective time-dependent modified Hamiltonian $\\hatH(p, q, t)$ solves a Hamilton-Jacobi PDE.
Explicitly,\n\t\\begin{enumerate}[label={(\\alph*)}]\n\t\t\\item (symplectic Euler) there exists an open neighborhood $U \\ni 0$ such that $\\forall t \\in U$, \n\t\t\\begin{equation}\n\t\t\t\\pdv{t}\\qty\\big(t\\hatHse(p, q, t)) = H\\qty(p, q + t \\pdv{\\hatHse}{p} (p, q, t))\n\t\t\\end{equation}\n\t\t\\item (implicit midpoint) there exists an open neighborhood $U \\ni 0$ such that $\\forall t \\in U$, \n\t\t\\begin{equation}\n\t\t\t\\pdv{t}\\qty\\big(t \\hatHmp(y, t)) = H\\qty(y + \\frac{t}{2} J^{-1} \\grad_y \\hatHmp (y, t))\n\t\t\\end{equation}\n\t\t%\n\t\t%\n\t\\end{enumerate}\n\tIntegrating either of these relations from $0$ to $h \\in U$ and Taylor expanding the right-hand side generates a formal power series in $h$ whose coefficients depend on $H$ and its (partial) derivatives.\n\\end{proposition}\n\n\\begin{proof}\n\tThis is a direct calculation using generating functions~\\parencite[Sec.~VI.5.3]{GNI}.\n\\end{proof}\n\nThis proposition allows the computation of $\\hatH$ learned by an SHNN if the Hamiltonian $H$ of the original problem is known (see also~\\parencite[Sec.~VI.5.4]{GNI}). However, in practice, we will want to reconstruct $H$ from the modified $\\hatH$ learned from data. Hence, the formal power series needs to be inverted to the desired order. Since the series is purely formal, this can easily be done with the help of symbolic computation programs. For the symplectic Euler method, abbreviating $H = H(p,q)$ and $\\hatH = \\hatHse(p, q, h)$, this yields\n\\begin{equation}\\label{eq:correction-symp-euler}\n\\begin{split}\n\tH = \\hatH &- \\frac{h}{2} \\grad_p \\hatH \\cdot \\grad_q \\hatH \\\\\n\t&+ \\frac{h^2}{12}\\qty\\Big(\\grad_{pp} \\hatH (\\grad_q \\hatH)^2 + 4\\grad_{pq} \\hatH(\\grad_p \\hatH,\\grad_q \\hatH) + \\grad_{qq} \\hatH (\\grad_p \\hatH)^2) + \\mathcal O(h^3).\n\\end{split}\n\\end{equation}\nSimilarly, for the implicit midpoint method, one obtains\n\\begin{equation}\\label{eq:correction-midpoint}\n\tH = \\hatH - \\frac{h^2}{24} \\grad^2 \\hatH \\qty(J^{-1} \\grad H, J^{-1} \\grad H) + \\mathcal O(h^4).\n\\end{equation}\n\n\\subsection{Methods}\\label{subsec:methods}\n\nFor each task, four different models were trained based on the loss function~\\eqref{eq:loss-shnn}, for different choices of the scheme function $s = s(y_0, y_1)$ and post-training correction: forward Euler, symplectic Euler, implicit midpoint, and corrected symplectic Euler.\n\nThe forward Euler scheme $s = y_0$ replicates HNNs \\parencite{greydanus} trained with discretized data and represents our baseline. The symplectic Euler scheme $s = (p_1, q_0)$ is a symplectic method of order 1, whereas the implicit midpoint rule \\mbox{$s = (y_0 + y_1)\/2$} is a symplectic method of order 2. Finally, we also trained an SHNN with the symplectic Euler method but afterwards corrected its Hamiltonian using $\\hatH - \\smash{\\frac{h}{2}} \\grad_p \\hatH \\cdot \\grad_q \\hatH$, obtaining a Hamiltonian correct up to second order.\n\nFor each task, we defined a bounded subregion $\\Omega_d$ of the full phase space $\\Omega$ to generate the data from. Given a fixed time step $h > 0$, we generated a dataset of $K$ data points. Each point is given by a couple $(y_0, y_1)$, where $y_0$ is a random initial state chosen uniformly from $\\Omega_d$ and $y_1 = \\phi_h(y_0)$ represents a snapshot of the system's true solution at a time $h$ later. Note that friction was neglected for all tasks; in fact, the architecture of HNNs prevents them from learning any change of the total energy with time.
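\n\nThese scheme choices translate directly into code. A minimal sketch, reusing the hypothetical \\texttt{hnn\\_vector\\_field} helper from above:\n\n\\begin{verbatim}\nimport torch\n\ndef shnn_loss(H_net, y0, y1, h, scheme):\n    # SHNN loss with evaluation point s = scheme(y0, y1)\n    s = scheme(y0, y1)\n    return (((y1 - y0) / h - hnn_vector_field(H_net, s)) ** 2).mean()\n\ndef forward_euler(y0, y1):          # baseline HNN: s = y0\n    return y0\n\ndef symplectic_euler(y0, y1):       # s = (p1, q0)\n    p1, _ = y1.chunk(2, dim=-1)\n    _, q0 = y0.chunk(2, dim=-1)\n    return torch.cat([p1, q0], dim=-1)\n\ndef implicit_midpoint(y0, y1):      # s = (y0 + y1) / 2\n    return (y0 + y1) / 2\n\ndef corrected_H(H_net, y, h):\n    # first-order post-training correction of the symplectic Euler model\n    y = y.requires_grad_(True)\n    g = torch.autograd.grad(H_net(y).sum(), y, create_graph=True)[0]\n    g_p, g_q = g.chunk(2, dim=-1)\n    return H_net(y) - (h / 2) * (g_p * g_q).sum(dim=-1, keepdim=True)\n\\end{verbatim}\n\n\\noindent As noted in the remarks above, no fixed-point iteration is needed during training, since both $y_0$ and $y_1$ are known.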
The full data set was separated using a test split of 20\\%.\n\nWe used fully connected neural networks with a $\\tanh$ activation function, $L$ hidden layers and $M$ neurons per hidden layer for all tasks. All models were trained with the AdamW optimizer \\parencite{Adam,AdamW} as implemented in PyTorch~\\parencite{pytorch} using default coefficients, a learning rate of $10^{-3}$ and weight decay of $10^{-2}$, for 5000 epochs without mini-batches (i.e. batch size = $K$). Only the model with the best test loss was saved after training. Table~\\cref{tab:model-parameters} in Appendix~\\cref{ap:training-parameters} summarizes the different choices of model and dataset size for all tasks. Training was performed on a single GPU (Nvidia Tesla K80, cloud hosted) using the CUDA framework version 11.2~\\parencite{cuda-toolkit} as integrated in PyTorch~\\parencite{pytorch}.\n\nThe test and train $L^2$ losses were tracked per epoch while training the models. Afterwards, we measured our principal metric: the average error $\\epsilon_H$ of the learned Hamiltonian over a region $\\Omega_m \\subset \\Omega_d$ of phase space. This quantity was measured as\n\\begin{equation}\\label{eq:h-err}\n\t\\smash{\\epsilon_H = \\Big\\langle \\abs\\big{\\hatH - H - \\ev{\\hatH - H}_{\\Omega_m}} \\Big\\rangle_{\\Omega_m}},\n\\end{equation}\nwhere the mean difference between $\\hatH$ and $H$ is removed inside the absolute value because the neural network only learns $\\hatH$ up to some constant. Note that since this constant is global, one might intuitively remove it by evaluating at a single point, as in $\\hatH - H - (\\hatH(p) - H(p))$. However, this would add the local error at $y=p$ to the function everywhere, whereas the mean does not suffer from any locality issues.
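\n\nThe metric~\\eqref{eq:h-err} itself reduces to a few lines. In the sketch below, \\texttt{H\\_true} stands for the known ground-truth Hamiltonian of the task, and \\texttt{low}, \\texttt{high} are tensors delimiting the hypercube $\\Omega_m$:\n\n\\begin{verbatim}\nimport torch\n\ndef h_error(H_net, H_true, low, high, n=2000):\n    # average error over Omega_m, removing the unlearnable constant\n    y = low + (high - low) * torch.rand(n, low.numel())\n    with torch.no_grad():\n        diff = H_net(y).squeeze(-1) - H_true(y)\n    return (diff - diff.mean()).abs().mean()\n\\end{verbatim}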
As an additional metric, we roll out long-term predictions of the trained models from random initial points, using the explicit Dormand--Prince Runge-Kutta method of order 5(4)~\\parencite{rk45} implemented in the \\texttt{scipy.integrate} module~\\parencite{scipy}. Those trajectories are analyzed in two different ways. The shape of the trajectories in phase space, which follow the level curves of the modified Hamiltonian, provides insight into this modification with respect to the true Hamiltonian. Independently, measuring the mean squared $L^2$ error (MSE) between the long-term predictions of our models and the true solution provides insight into the quality of the predictions.\n\n\\medskip\n\\begin{remark}\nThe measuring region $\\Omega_m$ was chosen as the hypercube centered and contained within the data region $\\Omega_d \\subseteq \\RR^{2n}$, with side lengths divided by $\\smash{\\sqrt{2}}$. The average error was not measured directly on $\\Omega_d$ because our models perform drastically worse close to the boundary of $\\Omega_d$ (see Appendix~\\ref{ap:hamiltonian-error-distribution}).\n\\end{remark}\n\n\n\\subsection{Results}\\label{subsec:results}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figs\/losses}\n\t\\caption{Training (dashed lines) and testing (solid lines) losses as a function of the training epoch for the three chosen tasks, the different integration methods and different discretization time steps $h \\in \\{0.05, 0.1, 0.2, 0.4, 0.8\\}$. For each method, the darkest shade of its color corresponds to the largest $h=0.8$ and the lightest shade to the smallest $h=0.05$.}\\label{fig:losses}\t\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figs\/hamiltonian-error}\n\t\\caption{Average error of the learned Hamiltonian $\\epsilon_H$ as a function of the discretization time step $h$, for the three chosen tasks and the different integration methods ($N=2000$). For each point the mean (solid marker) and quartiles (transparent region) are plotted, i.e. 50\\% of all data points respectively lie inside the colored transparent regions. Note that, for every point, the standard error of the mean is too small to be visible. Two reference lines $\\epsilon = h$ and $\\epsilon = h^2$ have been added in grey.}\\label{fig:h-err}\n\\end{figure}\n\n\nThe SHNN models trained well on all datasets, with only minimal overfitting. Figure~\\cref{fig:losses} shows the training and test loss as a function of the training epoch, for an HNN trained with the forward Euler scheme and two SHNNs trained with the symplectic Euler and implicit midpoint schemes, respectively. Remarkably, the HNN losses plateau very quickly and, depending on the chosen time step $h$, do not descend below a certain threshold. These lower bounds of the squared $L^2$ loss are proportional to $h^2$, which confirms the theoretical result of Appendix~\\cref{ap:non-existence-H}. Since the bounds vary by constant factors, they appear as constant offsets on the logarithmic scale. In contrast, for the trained SHNNs and all three tasks, the losses descend independently of $h$ down to a level of numerical accuracy.\n\nFurther, we analyzed the average error of the learned Hamiltonian, which allows us to draw inferences on the order of the numerical method used. The averages of equation~\\eqref{eq:h-err} were calculated using $N=2000$ points uniformly drawn from $\\Omega_m$. The mean and quartiles are shown in Figure~\\cref{fig:h-err} on a double logarithmic scale, on which an error of order $h^p$ appears as a straight line with slope $p$. This figure shows that using the forward or symplectic Euler methods yields an error of order $h$, and that using the implicit midpoint method yields an error of order $h^2$, as expected. The results also confirm that the post-training correction to the SHNN trained with the symplectic Euler method yields an error of order $h^2$.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figs\/phase-space-and-mses-spring-h08-ann}\n\t\\includegraphics[width=\\linewidth]{figs\/phase-space-and-mses-pendulum-h08-ann}\n\t\\caption{Analysis of individual trajectories of the pendulum and harmonic oscillator, with different SHNNs, trained with a discretization step $h=0.8$. The explicit trajectories drawn in phase space started at $y_0 = (0, 1.5)$ for the harmonic oscillator and $y_0 = (0, 2.5 \\text{ rad})$ for the pendulum. The displayed regions of phase space are $\\Omega_d$ in each case. \\textbf{A}: True vector field and solution. \\textbf{B}: Vector field as learned by an SHNN (trained with symplectic Euler) and predicted trajectories with all methods. \\textbf{C}: Long-time results.
MSE and standard error of the mean of $N=50$ trajectories, with initial points drawn randomly from $\\Omega_m$, under the additional restriction that the system does not have enough energy to leave $\\Omega_d$, since our models were not trained outside of $\\Omega_d$.}\\label{fig:trajs}\n\t%\n\\end{figure}\n\nFinally, Figure~\\cref{fig:trajs} shows exemplary long-term trajectories for the spring (Task 1) and the non-linear pendulum (Task 2) with a large step $h=0.8$, which makes the differences between the integration schemes clearly visible. Several observations can be made. First, while the true Hamiltonian vector field is in general well learned by the SHNN, relatively large errors are visible at the edge of $\\Omega_d$. Second, the model trained with the implicit midpoint rule predicts the most accurate trajectories, followed by the corrected symplectic Euler method. This is the case both for the shape of the trajectory and for the long-term MSE. Third, SHNNs with the symplectic Euler method learn a highly eccentric shape in phase space. This reflects the asymmetry of the method due to the evaluations at $s = (p_1, q_0)$, and in the case of the harmonic oscillator, this explains the perfect ellipse that results. In contrast, the symmetric implicit midpoint method does not show this behavior. Fourth, correcting after training with symplectic Euler does improve the result (especially in the MSE) but ``overshoots'' the goal in phase space --- the corrected ellipse is eccentric in the opposite direction, as expected from the alternating signs in equation~\\eqref{eq:correction-symp-euler}.\n\nNote that the baseline HNN performs exceptionally well on Task 1 due to its simplicity. However, as Figure~\\cref{fig:h-err} also shows, the more complex the system, the worse the performance of an HNN trained with the forward Euler method. We also point to Appendix~\\cref{ap:dephasing}, which helps in understanding the oscillations of the MSE.\n\n\n\\subsection{Limitations}\\label{subsec:limitations}\nReal-world data is never perfect. The principal limitation of the present, theory-guided article is that it does not yet account for noisy data, which will deteriorate the quality of the learned Hamiltonian. Before SHNNs can be used to extract the behavior of real physical systems (see below), this effect will need to be quantified.\n\nFurther, since we are learning a continuous function with a neural network, the dataset has to densely cover the relevant region in the input space (phase space) to obtain a high-quality model. Such dense and vast datasets may not be available in reality. Yet, especially for high-dimensional systems, restricting to small regions where data is available does not preclude solid results, even when these regions have holes or consist of multiple disconnected components.\n\n\\subsection{Outlook}\nUsing symplectic training and corrections of the modified Hamiltonian makes HNNs more powerful, but many further generalizations of our method could be considered. For example, directly learning the time-dependent generating functions $\\hatH(p, q, t)$ of Proposition~\\cref{prop:2} from a data set $\\{(y_0, t_0, y_1, t_1)_i\\}$ with variable time steps is an interesting question for future research. Alternatively, generalizations to Poisson systems $\\dot y = B(y) \\grad H(y)$ (with suitable conditions on the matrix $B(y)$), which model, e.g.,
interactions with electromagnetism or allow one to express Hamiltonian mechanics in non-canonical coordinates~\\parencite[Sec.~VII.2]{GNI}, seem like another fruitful subject.\n\nIn conclusion, Symplectic Hamiltonian Neural Networks are a promising ``grey-box'' approach, using physics priors to build better machine learning algorithms and simultaneously explain why they work. Applications to almost all fields of physics are imaginable, and seem especially exciting in data-rich yet hard-to-model disciplines like the Earth's climate or space weather.\n\n\n\n\\section{Introduction}\n\\input{s1-intro}\n\n\n\\section{Theory}\\label{sec:theory}\n\\input{s2-theory}\n\n\n\\section{Numerical Experiments}\n\\input{s3-experiments}\n\n\n\\section{Discussion and Conclusion}\n\\input{s4-discussion}\n\n\n\n\\begin{ack}\nThe authors would like to express their deep gratitude to SpaceAble for sponsoring, supporting and encouraging this project. In particular, thank you to Issao Ueda, Arnaud Bellizzi, Louis Celier, Quentin Gueho and Julien Cantegreil for all their help.\n\nThe authors further thank Philippe Chartier for many interesting discussions and pointers in the right directions.\n\\end{ack}\n\n\n\\printbibliography\n\n\\section{Introduction}\nCREAM is a balloon-borne experiment designed to perform \ndirect measurements of the energy spectra and elemental composition of cosmic rays (CR)\nup to the PeV scale. Two instruments, launched from McMurdo in 2004 \nand 2005, flew over Antarctica for 42 and 28 days, respectively.\nBoth instruments achieved single-element discrimination \nby means of multiple measurements of the particle charge provided by \na pixelated silicon charge detector (SCD), a segmented \ntiming-based particle-charge detector (TCD) and a Cherenkov detector (CD). \nThe particle energy was measured by a thin ionization calorimeter (CAL)\npreceded by a graphite target. \nDuring the first flight, the payload was equipped \nwith a Transition Radiation Detector (TRD), \nthus allowing redundant energy measurements.\nA detailed description of the instrument can be found\nelsewhere \\cite{ref1}. \nIn this paper, we present an analysis, based on the first flight data, \nthat shows how it is possible to cross-calibrate the TRD and the calorimeter \nto assess the absolute scale of energy measurements in CREAM.\n\\section{Complementary techniques for particle energy measurement}\nDirect measurements of charged CR are based on identification \nof the incoming particle and measurement of its energy.\nAt present, the main active techniques for the determination of CR energy at the TeV scale \nare based on Ionization Calorimeters (IC) \nand TRDs. A combination of an IC and a TRD \nwas implemented in the first CREAM payload.\\\\\nThe CREAM-1 TRD is made of 512 single-wire mylar thin-walled proportional tubes \ninserted in a polystyrene foam radiator structure and arranged in 8 layers\n with alternating X\/Y orientations. \nThe 2 cm diameter tubes \nare filled with a mixture of 95\\% xenon and 5\\% methane at 1 atm,\nwhich has a high efficiency for TR x-rays of a few tens of keV. \nThe TRD can measure the energy of primary nuclei with Z$>$3 by means of multiple independent samplings of \nthe energy deposit per unit pathlength (dE\/dx) in the tubes. The ionization energy loss \nincreases logarithmically with the Lorentz factor $\\gamma$ in the relativistic rise region, which extends from minimum ionization (MIP) \nto the Fermi plateau.
In the case of Xe the ratio plateau\/MIP is $\\sim$ 1.5. \nAt energies higher than a few hundred GeV\/n, the ionization energy loss of a charged particle in Xe saturates. \nNevertheless, the energy can be determined from the additional ionization produced in the tubes by the \nTR photons emitted as the particle crosses the foam radiator. \nA reliable estimate of the energy deposit \nrequires a precise measurement of the pathlength of the primary particle traversing the TRD.\nFor this purpose, the detector\nhas been designed to provide accurate particle tracking, with the impact point \nof the primary particle on the TCD reconstructed to better than 2 mm. This allows the response of the TCD and CD to be corrected for \nspatial non-uniformities; it is also essential to \nidentify, with a low probability of confusion, the TCD paddles and SCD pixels traversed by the \nprimary particle and hence reconstruct its charge.\nAlthough the main purpose of the CD \nis to provide, combined with the TCD signal, \na trigger for relativistic high-Z nuclei, \nit can also be used \nto measure the velocity of particles at low energies in the range \nfrom the Cherenkov threshold ($\\gamma\\sim1.35$) up to saturation ($\\gamma\\sim10$). \nFor a detailed description of the TRD and its performance \nduring the flight \nsee \\cite{ref2}.\\\\\nThe CAL is a stack of 20 tungsten plates (50$\\times$50 cm$^{2}$, \neach 1 X$_0$ thick) with interleaved active layers \ninstrumented with 1 cm wide ribbons of 0.5 mm diameter scintillating fibers.\nA 0.47 $\\lambda_{int}$ thick carbon target preceding the calorimeter induces a nuclear \ninteraction of the primary particle which initiates a hadronic shower. \nThe electromagnetic (e.m.) core of the shower is imaged by the CAL, which is sufficiently thick \nto contain the shower maximum and finely grained to provide \nshower axis reconstruction. \nThe resolution \nof the impact point \non the SCD is about 1 cm.\nThe IC concept is imposed by the requirement of \nweight reduction, which makes it practically impossible to fly a conventional ``total containment'' hadronic calorimeter.\nIn a thin calorimeter, where only the e.m. core of the hadronic shower is sampled, the energy resolution \nis affected by the statistical fluctuations \nin the fraction of energy carried by $\\pi^0$ secondaries\nproduced in the shower, whose decays generate the e.m. cascade.\nAs a result, the energy resolution is poor by the standards of total containment hadron calorimetry\nin experiments at accelerators. \nNevertheless, it is sufficient to reconstruct the steep energy spectra of CR\nnuclei with a nearly energy-independent resolution.\n\\section{Calibrations with particle beams}\nBoth the TRD and the CAL were calibrated independently at CERN before the final integration \nin the payload. \nThe CAL was tested \nwith beams of protons, electrons and heavy ions. \nWhile protons and electrons were mainly used to equalize the single ribbons for non-uniformity in \nlight output and gain differences among the photodetectors, \na beam of ion fragments \nwas used to verify\nthe linear response of the CAL up to about 8.2 TeV and to measure \na nearly flat resolution\nat energies above 1 TeV \\cite{CALbeamtest}.\nThe TRD was tested with protons, electrons and pions in a range of Lorentz factors\nfrom $\\sim$150 to 3$\\times$10$^5$. This allowed the instrument to be calibrated \nin two separate intervals along the\n specific ionization curve: on the Fermi plateau and in the region of TR saturation.
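\nConceptually, once such a response curve is established, inferring the Lorentz factor from a measured dE\/dx amounts to inverting a tabulated monotonic function. The Python sketch below uses a toy stand-in for the curve (a linear rise from the MIP value to the Fermi plateau, with plateau\/MIP $\\sim$ 1.5 as quoted above); in the real analysis the curve comes from the simulation and the beam-test points.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy calibration curve: mean TRD dE/dx (MIP = 1) vs. log10(gamma),\n# rising monotonically in the relativistic rise region.\nlog_gamma_tab = np.linspace(0.5, 3.0, 26)\ndedx_tab = 1.0 + 0.5 * (log_gamma_tab - 0.5) / 2.5\n\ndef estimate_log_gamma(dedx_measured):\n    """Invert the tabulated, monotonic curve by interpolation."""\n    return np.interp(dedx_measured, dedx_tab, log_gamma_tab)\n\nprint(estimate_log_gamma(1.25))  # ~1.75 in this toy model\n\\end{verbatim}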
\nA Monte Carlo (MC) simulation of the apparatus based on GEANT4, including \na model of the TR emission from the radiator, showed a remarkable agreement \nwith the experimental data and was used to extend the calibration of \nthe detector response\nto lower $\\gamma$ values than those available at the beam test, i.e. \nto the relativistic rise region (10-500 GeV\/n) \\cite{Swordy}. However, an independent calibration based on flight data \nis preferable in order to validate the MC and to avoid systematic errors in the energy measurement of \nCR nuclei of a few hundred GeV\/n. In fact, the TRD's capability to provide a precise energy determination\nin the relativistic rise region \nis essential for an accurate measurement\nof the flux ratio of secondary to primary elements in CR, \nwhich is one of the main CREAM goals. \n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[scale=0.425]{figure1.eps}\n\\caption{Correlation of dE\/dx measurements from the TRD and the CD signals, both expressed in arbitrary units,\nfor different nuclei populations from flight data. \nThe black line is the average TRD response for O nuclei as a function of the CD signal.\n}\n\\label{fig1} \n\\end{center}\n\\vspace{-0.4cm}\n\\end{figure}\n\\begin{figure*}\n\\begin{center}\n\\subfigure[]\n{\n\\includegraphics[scale=0.35]{figure2a.eps}\n\\label{fig2a} \n}\n\\subfigure[]\n{\n\\includegraphics[scale=0.35]{figure2b.eps}\n\\label{fig2b} \n}\n\\vspace{-0.3cm}\n\\caption{(a) dE\/dx measured with the TRD vs. the Lorentz factor $\\gamma$ (in Log$_{10}$ scale), calculated from the energy deposit in the CAL, for the O sample from flight data. The superimposed curve is the GEANT4 prediction for the specific ionization in xenon.\n(b) Distribution of the reconstructed $\\gamma$ for the O selection.}\n\\label{fig2} \n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure*}\n\\section{Cross-calibration with flight data}\nThe TRD can be calibrated with flight data in energy intervals not covered at the beam test,\nby correlating its response with the energy measurements provided by the CD and the CAL.\n\\begin{figure*}\n\\begin{center}\n\\hspace{-0.2cm}\n\\includegraphics[scale=0.5]{figure3.eps}\n\\caption{TRD energy calibration with O (filled circles) and C (open squares) samples from flight data. \nThe energy is measured with the CD below the minimum of ionization (green circles) and with the CAL \nin the relativistic rise region (red circles and blue squares). The dotted line\nrepresents the specific ionization curve in xenon predicted by GEANT4. \n}\n\\label{fig3} \n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure*}\nEvents were selected by requiring that the primary particle track reconstructed by the TRD\nwas within the TCD acceptance and had at least four proportional tubes hit in each view. \nThe pulse heights of the track-matched TCD paddles \nwere combined with the CD signal to obtain a measurement of the particle charge.\nAn excellent separation of the charge peaks for elements from beryllium to silicon was obtained, \nwith a charge resolution for carbon and oxygen better than 0.2 $e$ \\cite{Coutu}.\nThe energy deposit per unit pathlength (dE\/dx) in the TRD was extracted with a likelihood fit, taking into account\nthe impact parameters of the primary particle track and the signal in each tube.
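\nThe structure of such a fit can be illustrated with a strongly simplified stand-in: if each tube signal is modeled as Gaussian smearing around the product of dE\/dx and the track pathlength in that tube (the actual fit uses the full response distribution of the gas), the maximum-likelihood estimate reduces to a weighted least-squares expression.\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_dedx(signals, pathlengths):\n    """Simplified stand-in for the likelihood fit: with tube signals\n    s_i ~ Gauss(x * L_i, sigma) for x = dE/dx and pathlengths L_i,\n    maximizing the likelihood reduces to weighted least squares."""\n    s = np.asarray(signals, dtype=float)\n    L = np.asarray(pathlengths, dtype=float)\n    return np.sum(s * L) / np.sum(L * L)\n\n# Example: eight tubes crossed with different pathlengths (cm)\nprint(fit_dedx([4.1, 2.0, 3.8, 1.2, 2.9, 3.1, 0.9, 2.2],\n               [2.0, 1.0, 1.9, 0.6, 1.4, 1.5, 0.5, 1.1]))  # ~2.0\n\\end{verbatim}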
\nEvents were rejected if the two measurements of dE\/dx, obtained independently from the X and Y views of the TRD, \ndisagreed by more than 20\\%.\nThe correlation of the measured dE\/dx and the CD signal (Figure \\ref{fig1})\nallowed the TRD response to be calibrated\nin the region below the minimum of the specific ionization curve.\nSix different intervals of $\\gamma$ were selected with the CD,\nand in each interval the average dE\/dx was measured.\nThe scale factor to convert from arbitrary units (a.u.)\nto MeV\/cm was obtained \nby matching the minimum ionization of O nuclei \nto the corresponding point of the MC simulated curve \n(Figure \\ref{fig3}).\nThe Cherenkov emission yield saturates above $\\gamma \\sim$10; therefore, the \ncalibration of the TRD in the relativistic rise region has to rely on the CAL energy measurement.\nFor this purpose, two samples of C and O nuclei were identified with the primary particle\ncrossing the TRD and then generating a shower in the CAL module. \nThe dE\/dx measurement was correlated with the particle\nenergy measured with the CAL. The scatter plot for the O sample is shown in Figure \\ref{fig2} \ntogether with its projection on the horizontal axis, which represents \nthe energy distribution reconstructed by the CAL. At values of Log$_{10} \\gamma >$ 1.5, \nit exhibits the typical power-law behaviour expected from\nthe energy dependence of the differential cosmic-ray spectrum.\nThe range of measured $\\gamma$ was divided into 7 bins, \nin each of which the mean $\\gamma$ and dE\/dx values were calculated.\nIn this way, the relativistic rise of the energy loss distribution was sampled, as shown in Figure \\ref{fig3}. \nThe carbon points have been rescaled by taking into account the Z$^2$ dependence of dE\/dx, \nin order to plot them on the same scale as the oxygen data. \nThe TRD calibration based on the CAL energy measurement\nshows excellent agreement with the MC simulation. \nIn this way, we proved that the GEANT4 prediction for the specific ionization in Xe can be\nused as a reliable calibration to infer the primary particle energy from the dE\/dx measured with the TRD, \neven at energies where the detector was not tested at accelerator beams. \nMoreover, the correct understanding of the absolute scale of the CAL energy measurement was confirmed.\n\\section{Conclusions}\nA preliminary analysis of the data from the first flight of CREAM confirmed the possibility\nof cross-calibrating the energy measurements of the TRD and the calorimeter.\n\\section{Acknowledgments}\nThis work is supported by NASA, NSF, INFN, PNRA, KICOS, MOST and CSBF.