diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzobgo" "b/data_all_eng_slimpj/shuffled/split2/finalzzobgo" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzobgo" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nOne of the long-standing problems in modern astronomy is the curious division of globular clusters (GCs) into two groups, according to the mean period ($\\left$) of type ab RR Lyrae variables \\citep{Oos39}. Understanding this phenomenon, ``the Oosterhoff dichotomy'', is intimately related to the population II distance scale and the formation of the Milky Way halo \n\\citep[][and references therein]{San81,Lee90,Yoo02,Cat09}.\n\n\\citet{van73} first suggested ``hysteresis mechanism\", which explains the dichotomy as a difference in mean temperature between the type ab RR Lyraes in two groups. That this cannot be the whole explanation for the dichotomy became clear when \\citet{San81} found a period-shift at given temperature between the RR Lyraes in two GCs representing each of the Oosterhoff groups, M15 and M3. Sandage suggested that this shift is due to the luminosity difference, which, however, required that the RR Lyraes in M15 are abnormally enhanced in helium abundance. \\citet[][hereafter LDZ I]{Lee90}, on the other hand, found that RR Lyraes evolved away from the zero-age horizontal-branch (ZAHB) can explain the observed period-shift when the HB type \\citep[][hereafter LDZ II]{Lee94} is sufficiently blue. In the case of M15, which has a blue tail (or extreme blue HB; EBHB) in addition to normal blue HB \\citep{Buo85}, it was not clear though whether this evolution effect alone can reproduce the period-shift. More recent colour-magnitude diagrams (CMDs) for other metal-poor Oosterhoff group II GCs, NGC~4590, 5053, and 5466 \\citep{Wal94,Nem04,Cor99}, also show that the HB types for these GCs are too red (HB~Type $\\approx$ 0.5) to have enough evolution effect. This suggests that a significant fraction of RR Lyraes in these GCs, including M15, are probably near the ZAHB. Therefore, the complete understanding of the difference between the two Oosterhoff groups still requires further investigation.\n\nRecent discovery of multiple populations in GCs is throwing new light on this problem. Even in ``normal'' GCs without signs of supernovae enrichments, observations and population models suggest the presence of two or more subpopulations differing in helium and light elements abundances, including CNO \\citep[][and references therein]{Gra12a}. It was suggested to be due to the chemical pollution and enrichment by intermediate-mass asymptotic giant branch (AGB) stars, fast-rotating massive stars, and\/or rotating AGB stars \\citep{Ven09,Dec07,Dec09}. Since the colour of the HB is sensitively affected by age, helium and CNO abundances (see LDZ II), each subpopulation in a GC would be placed in a different colour regime on the HB. Similarly, this would affect the period of RR Lyraes as the variation in chemical composition would change the luminosity and mass of a HB star within the instability strip, by which the period is determined when temperature is fixed \\citep{van73}. 
The purpose of this Letter is to suggest that, in the multiple populations paradigm, the difference in period between the two Oosterhoff groups can be reproduced as the instability strip is progressively occupied by different subpopulations with increasing metallicity.\n\n\\section{Population shift within the instability strip}\n\n\nPhotometry of M15 shows, ignoring a few red HB stars, three distinct subgroups on the HB: RR Lyraes, blue HB, and the blue tail (see Fig.~\\ref{fig:hb}, upper panel). Interestingly, other metal-poor group II GCs without the blue tail, such as NGC~5466 and NGC~4590, also show distinct gaps between the blue HB and RR Lyraes \\citep{Cor99,Wal94}. We assume that these subgroups and gaps originate from distinct subpopulations in these GCs. According to this picture, M15, for example, would contain three subpopulations, while NGC 5466 and 4590 consist of two subpopulations. Most of the colour spread on the HB is then due to the presence of multiple populations, rather than the mass dispersion \\citep{Cat04}. Therefore, in our population modeling, the mass dispersion on the HB was assumed to be very small ($\\sigma_{M}$ = 0.009 $M_{\\rm \\sun}$). Figure~\\ref{fig:hb} (upper panel) and Figure~\\ref{fig:acs} compare our models for M15 with the observations. Our models are based on the updated $Y^{\\rm 2}$ isochrones and HB evolutionary tracks, including the cases of enhanced helium and CNO abundances. For the details of our model construction, readers are referred to \\citet{Joo13}.\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\vspace{-10pt}\n\\includegraphics[width=0.53\\textwidth]{figure1.eps}\n\n\\caption{Comparison of our HB models for M15, M3-like, and NGC 6441-like with the observed CMDs (data from \\citealt{Buo85,Buo94,Bin84,Pri03}). In our model for M15 (group II), the blue HB belongs to G1, while the RR Lyrae variables (open circles) are produced by helium and CNO enhanced G2. The blue tail hotter than the normal blue HB is the progeny of the more helium-rich G3. When this model is shifted redward by increasing metallicity, the HB morphologies similar to M3 and NGC 6441 are obtained, and the instability strip becomes progressively populated by G1 and G3, respectively. Adopted distance modulus and reddening are ($m$~-~$M$)$_{\\rm V}$ = 15.40, 15.24, \\& 17.25 mag and $E$($B$~-~$V$) = 0.08, 0.03, \\& 0.45 mag, for M15, M3, and NGC 6441, respectively.\\label{fig:hb}}\n\\end{figure}\n\n\n\n\n\n\\begin{table*}\n\n\n\n\\setlength{\\tabcolsep}{11pt}\n\n\\centering\n\n\\begin{minipage}{160mm}\n\n\n\\caption{Parameters from our best-fit simulation of M15}\n\n\n\n\n\n\\begin{tabular}{@{}cccccccc@{}}\n\n\n \\hline\n \\hline\n\nPopulation& {[Fe\/H]\\footnote{[$\\alpha$\/Fe] = 0.3 for G1 \\& G2, 0.5 for G3.}}& $\\Delta$$Z_{\\rm CNO}$ & $Y$ & Age & Mass Loss\\footnote{Mean mass loss on the RGB for $\\eta$ = 0.42.} & $\\left\\langle M_{\\rm HB}\\right\\rangle$\\footnote{Mean mass on the HB.} & Fraction \\\\\n&&&&(Gyr)&($M_{\\rm \\sun}$)&($M_{\\rm \\sun}$)& \\\\\n\n\n \\hline\n\nG1&-2.2&0&0.230&12.5&0.140&0.686&0.36\\\\\nG2&-2.2&0.00026&0.245~$\\pm$~0.008&11.4~$\\pm$~0.2&0.142&0.684&0.22\\\\\nG3&-2.2&0&0.327~$\\pm$~0.008&11.3~$\\pm$~0.2&0.129&0.589&0.42\\\\\n\n\\hline\n\n\\vspace{-20pt}\n\n\\label{tbl:pm}\n\n\\end{tabular}\n\n\n\\end{minipage}\n\\end{table*}\n\n\n\n\n\n\n\n\n\n\nIn our modeling for M15, we start by placing the first generation stars (G1) on the HB (see Fig.~\\ref{fig:hb}, upper panel). 
By adopting [Fe\/H] = -2.2 \\citep{Har96} and the \\citet{Rei77} mass-loss parameter $\\eta$ = 0.42\\footnote{This value of $\\eta$ is somewhat smaller than the value (0.53) suggested by \\citet{Joo13}. This is because helium-enhanced subpopulations are included in their GC sample used in the $\\eta$ calibration, while here we are adopting the value that would be obtained when they are excluded.}, we find G1 would be placed on the blue HB at the age of 12.5 Gyr. When the blue tail is excluded, the HB type of M15 becomes 0.46 \\citep{Wal94}, which is too red to have enough period-shift from the evolution effect alone. This suggests that most, if not all, RR Lyraes in M15 are produced by the second generation stars (G2) with chemical compositions favorable to produce RR Lyraes with longer periods. Both theories and observations suggest G2 would be somewhat enhanced in both helium and CNO abundances, preferably in metal-poor GCs \\citep{Ven09,Dec09,Alv12,Mar12,Gra12b}. While the enhanced helium abundance shifts the HB to blue, both CNO enhancement and younger age for G2 favor redder HB \\citep[LDZ II;][]{Joo13}. Thus, when $\\Delta$$Y$ is relatively small, the effects from CNO and younger age would overwhelm the helium effect, and the net effect will move the HB to red. We find that $\\Delta$$Y$ = 0.015, $\\Delta$$Z_{\\rm CNO}$ = 0.00026 ($\\Delta$[CNO\/Fe] = 0.47 dex), and $\\Delta$$t$ = 1.1 Gyr between G1 and G2 would best match the observations, both distribution of RR Lyraes on the CMD (see Fig.~\\ref{fig:hb}, upper panel) and the period-shift (see below). Some evidence for the possible difference in CNO abundance between G1 and G2 in M15 is provided by \\citet{Coh05}, who found a large spread in [N\/Fe] ($\\sim$~2~dex) among stars having identical [C\/Fe] in M15. This suggests that CNO sum could be different if variation in [O\/Fe] is not significant among them. It is also interesting to note that out of 6 RGB stars observed in M15, \\citet{Sne97} found one ($\\sim$~17~$\\%$) shows significant enhancement ($\\sim$~0.4~dex) in CNO abundance, roughly consistent with our ratio of G2 ($\\sim$~20~$\\%$). Certainly, spectroscopy with a large sample of stars, preferably at the lower RGB, is required to confirm this scenario. Finally, many previous works suggest that the blue tail (EBHB) would be produced by super-helium-rich subpopulation \\citep{D'A04,Lee05,Gra12b,Joo13,Kun13a}, and therefore we assign more helium-rich third and later generations of stars (hereafter collectively G3) for the progenitor of EBHB. It is evident that some helium spread within G3 is required to reproduce the observed extension of the blue tail, which is mimicked here by increasing mass dispersion ($\\sigma_{M}$ = 0.023 $M_{\\rm \\sun}$). Table~\\ref{tbl:pm} lists our best fit input parameters for M15, where the population ratio is from distinct subgroups observed on the HB of \\citet{Buo85}\n\n\n\n\\begin{figure}\n\\centering\n\\vspace{2pt}\n\\includegraphics[width=0.47\\textwidth]{figure2.eps}\n\\vspace{-5pt}\n\\caption{Comparison of our population model with the observed CMD for M15 \\citep[data from][]{And08}. Adopted distance modulus and reddening are ($m$~-~$M$)$_{\\rm F814W}$ = 15.35 mag and $E$(F606W~-~F814W) = 0.065 mag.\\label{fig:acs}}\n\\end{figure}\n\n\n\n\n\n\n\nIt is interesting to see that when our HB model for M15 is shifted redward by increasing metallicity, the HB morphology similar to M3 (HB type = 0.08) is naturally obtained. 
This is illustrated by a more metal-rich model in Figure~\\ref{fig:hb} (middle panel). For this model, identical values of ages, $\\Delta$$Z_{\\rm CNO}$, and mass-loss parameter $\\eta$ adopted for M15 have been used, while the metal abundance varies from [Fe\/H] = -2.2 to [Fe\/H] = -1.7. The helium abundances for G1 and G2 have also been fixed, while that for G3 has been reduced to Y = 0.28, because otherwise our model would produce a blue HB that is too blue compared to the observation of M3. Note that, because of the dispersion in helium abundance among G3, at the red end of the blue HB for M3, which is the critical regime for the \\citet{Cat09b} test, the enhancement in Y would be only $\\sim$~0.015 compared to the red HB. Because the overall HB morphology is shifted to red, the instability strip is now mostly occupied by G1, while the red HB is populated by both G1 and G2. Consequently, the gap, which was placed between the EBHB (G3) and blue HB (G1) in our M15 model, is likewise shifted into the instability strip, which agrees well with the observation for M3 \\citep[see Fig. 14 of][]{Buo94}. Some evolved stars from G1, G2 and G3 are also placed within the instability strip, which would explain the presence of a minority population of brighter RR Lyraes with longer periods observed in this GC \\citep{Cac05}. Further fine tuning of input parameters, such as age, $Z_{\\rm CNO}$, helium abundance, and population ratio for each subpopulation would be required to obtain a better match with the observation. \n\n\n\n\nWhen the metallicity is increased further to [Fe\/H] = - 0.7, our models yield the HB morphology that is analogous to NGC 6441 (HB type = - 0.77), the Oosterhoff III GC with the longest $\\left\\langle P_{\\rm ab}\\right\\rangle$ (lower panel). Again, $\\Delta$$t$ and $\\Delta$$Z_{\\rm CNO}$ among subpopulations, $\\Delta$$Y$ between G1 and G2, and the mass-loss parameter $\\eta$ are held identical to those adopted for M15. For this model, however, absolute ages have been increased by $\\sim$~1 Gyr compared to the M3-like model, and the mean value of the helium abundance for G3 has been adopted to be Y = 0.30. Note that the red HB is populated by all three subpopulations, while G3 is penetrating into the instability strip, producing RR Lyrae variables and some blue HB stars.\n\n\n\n\n\n\n\nFigure~\\ref{fig:zahb} explains, using ZAHB models, the origin of the Sandage period-shift effect between M15 (group II) and M3 (group I) in the new paradigm. In our model for M15, G2, which is enhanced in helium and CNO abundances, is in the instability strip. While enhancement in helium increases luminosity, enhancement in CNO reduces the mass of HB stars at a given temperature. Both of these effects play a role in increasing the period of RR Lyrae variables \\citep{van73} in our model for M15. Consequently, in our synthetic HB models, where the full evolutionary tracks are employed in addition to ZAHB models, the period-shift between M15 and M3 is predicted to be $\\Delta$log~$P$ = 0.040, where $P$ is in days. Adopting the temperature shift of the fundamental blue edge ($\\Delta$log~$T_{\\rm eff}$ = 0.02) as a function of metallicity \\citep{San06}, the same models yield the difference in mean period of type ab RR Lyraes of $\\Delta$$\\left\\langle P_{\\rm ab}\\right\\rangle$ = 0.087 day. These values agree well with the observed period-shift ($\\Delta$log~$P$ = 0.044~$\\pm$~0.01; LDZ I) and the difference in $\\left\\langle P_{\\rm ab}\\right\\rangle$ ($\\Delta$$\\left\\langle P_{\\rm ab}\\right\\rangle$ = 0.082~$\\pm$~0.02 day; \\citealt{Cle01}) between these GCs. 
Despite the uncertainty in the fundamental blue edge, the fraction of c type RR Lyraes ($f_{\\rm c}$) is predicted to be 0.48 and 0.13 for M15 and M3, respectively, which should be compared with the observed values, 0.53 and 0.18 \\citep{Cle01}. For our NGC 6441-like model, we obtain $\\Delta$$\\left\\langle P_{\\rm ab}\\right\\rangle$ = 0.18 day and $f_{\\rm c}$ = 0.22, which is in reasonable agreement with the observation \\citep[$\\Delta$$\\left\\langle P_{\\rm ab}\\right\\rangle$ = 0.20~$\\pm$~0.02 day and $f_{\\rm c}$ = 0.33;][]{Pri03}.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.43\\textwidth]{figure3.eps}\n\\vspace{0pt}\n\\caption{Zero-age HB models explaining the origin of the period-shift between M15 and M3 at a given temperature (log~$T_{\\rm eff}$ = 3.83, vertical dashed line). While enhancement in helium increases luminosity (middle panel), enhancement in CNO reduces the mass of HB stars at a given temperature (lower panel). Both of these effects play a role in increasing the period-shift of RR Lyraes in M15 (see the text).\\label{fig:zahb}}\n\\end{figure}\n\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\vspace{-10pt}\n\\includegraphics[width=0.51\\textwidth]{figure4.eps}\n\\vspace{-3pt}\n\\caption{Schematic diagram illustrating the ``population shift'' within the instability strip (thick dashed lines) with increasing metallicity. For the most general cases, most of the RR Lyraes are produced by G1, G2 (helium \\& CNO enhanced), and G3 (most helium-rich), respectively, for the Oosterhoff groups I, II, and III (see the text).\\label{fig:oo}}\n\\end{figure}\n\n\n\n\\section{Discussion}\n\n\n\nFigure~\\ref{fig:oo} shows a schematic diagram that explains the main points from our models. As discussed above, RR Lyraes in the metal-poor group II cluster M15 are produced by helium and CNO enhanced G2 (lower panel). When the HB of this metal-poor model is shifted redward with increasing metallicity, the instability strip becomes mostly populated by G1 having Oosterhoff I characteristics (middle panel). When this redward shift continues as metallicity increases further, our models indicate that the instability strip would be more populated by G3, first producing the transition case between groups I and III, with mildly helium-enhanced (Y $\\approx$~0.26) RR Lyraes showing Oosterhoff-intermediate characteristics, such as NGC 1851, as suggested by \\citet{Kun13a, Kun13b}. Then, in the most metal-rich regime, if G3 are present, GCs with RR Lyraes having more enhanced helium abundance (Y $\\approx$~0.30) and the longest periods, like NGC 6441 (group III), would be produced (upper panel). Note that the placements of G1 and G2 in this schematic diagram are valid only for GCs where $\\Delta$$t$ between G1 and G2 is similar to that in the case of M15. If $\\Delta$$t$ is much smaller than the $\\sim$~1 Gyr adopted in our models, for example, the RR Lyraes in group I GCs would be more dominated by G2, while the red HB becomes more populated by G1. It appears unlikely, however, that these variations are common, because otherwise most group I GCs would have Oosterhoff-intermediate characteristics in period-shift and $\\left\\langle P_{\\rm ab}\\right\\rangle$. \n\n\nSome GCs like NGC 6397 show a very small spread in the colour on the HB, as well as in the Na-O plane \\citep{Car09}, which suggests that these GCs probably consist of only G1. Therefore, there are cases\/systems where the suggested population shift cannot be a viable explanation for the Oosterhoff dichotomy. 
For these GCs, evolution away from the ZAHB (LDZ I) and some hysteresis mechanism \\citep{van73} are probably at work in producing the difference in $\\left\\langle P_{\\rm ab}\\right\\rangle$ between the groups I and II.\n\n\n\nThe Na-O anticorrelations observed in some HB stars in GCs can provide an important test of our placements of G1, G2, and G3 on the HB. For example, for M5, when the division of G1 and G2 is made properly at [Na\/Fe] = 0.1 as suggested by \\citet{Car09}, Fig. 9 of \\citet{Gra13} shows that $\\sim$~65~$\\%$ of stars in the red HB have G2 characteristics, while $\\sim$~35~$\\%$ are in the G1 regime. This is in agreement with the ratio of G2\/G1 ($\\sim$~2; see Fig. 1) in the red HB of our model for M3-like GCs. For more metal-rich GCs, our models predict that the red HB is roughly equally populated by G1 and G2, which agrees well with the Na-O observations for NGC 1851 and NGC 2808 \\citep{Gra12b,Mar14}. For the blue HB stars, these observations confirm that they belong to Na-rich and He-rich G3. It is interesting to note that one RR Lyrae variable in NGC 1851 was observed to be Na-rich like the blue HB stars (G3), which is consistent with our suggestion that NGC 1851 is a transition case between the Oosterhoff groups I and III. For the group II GCs, the Na-O observation of HB stars is available only for M22 \\citep{Gra14}, which shows that the blue HB stars right next to the RR Lyraes have G1 characteristics. While this is in qualitative agreement with our model for M15, the interpretation is more complicated as this GC was also affected by supernova enrichment \\citep[and references therein]{Joo13}. Certainly, spectroscopic observations for a large sample of RR Lyrae and horizontal-branch stars in GCs representing the two Oosterhoff groups, such as M15 and M3, are urgently required to confirm our models.\n\n\n\n\n\\section*{Acknowledgments}\n\nWe thank the referee for a number of helpful suggestions. We also thank Robert Zinn, Pierre Demarque, Suk-Jin Yoon, Chul Chung, Sang-Il Han, and Dong-Goo Roh for helpful discussions. Support for this work was provided by the National Research Foundation of Korea to the Center for Galaxy Evolution Research.\\\\\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\subsection{Right Multiplication} \\label{appendix:right_multiplication}\n We present how to compute $A\\cdot M,$ where $M$ is an uncompressed matrix and $A$ is a \n compressed matrix. This is an extension of right multiplication with vector in \\S~\\ref{right_multiplication_vector}. 
\n \\begin{theorem}\n Let $A \\in \\Re^{n \\times m}$, $M \\in \\Re^{m \\times p}$, \\textbf{D} be the output of TOC on $A$,\n $\\mathbf{C'}$ be the prefix tree built for decoding, $\\mathbf{C'}[i].seq$ be the sequence of the tree node defined in \n \\S~\\ref{sec:api}, $\\mathbf{C'}[i].key$ be the key of the tree node defined in \\S~\\ref{sec:build_prefix_tree},\n and $\\mathbf{C'}[i].parent$ be the parent index of the tree node defined in \\S~\\ref{sec:build_prefix_tree}.\n Note that $\\mathbf{C'}[i].key$ and $\\mathbf{C'}[i].seq$ are both sparse representations of vectors\n (i.e., $\\mathbf{C'}[i].key \\in \\Re^{1 \\times m}$ and $\\mathbf{C'}[i].seq \\in \\Re^{1 \\times m}$).\n Define function $\\mathcal{F}(x): \\aleph \\rightarrow \\Re^{1\\times p}$ to be\n \n \\begin{align} \\label{equation:matrix_times_uncompress_definition}\n \t\\mathcal{F}(x) = \\mathbf{C'}[x].seq \\cdot M, x = 1, 2, ..., \\mathrm{len}(\\mathbf{C'})-1.\n \\end{align}\n Then, we have\n \\begin{align} \\label{equation:matrix_times_uncompress_first_equation}\n A[i, :] \\cdot M = & \\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} \\mathcal{F}(\\mathbf{D}[i][j]), i = 0, 1, ..., n-1.\n \\end{align}\n \\end{theorem}\n \\begin{proof}\n Without loss of generality, we use a specific row $A[i, :]$ in the proof. First, we substitute $A[i, :]$ with sequences stored in prefix tree $\\mathbf{C'}$, then\n \\begin{align} \\label{equation:matrix_times_uncompress_substitution}\n \tA[i, :] \\cdot M =\\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} \\mathbf{C'}[\\mathbf{D}[i,j]].seq \\cdot M\n \\end{align}\n Plug Equation~\\ref{equation:matrix_times_uncompress_definition} into Equation~\\ref{equation:matrix_times_uncompress_substitution},\n we get Equation~\\ref{equation:matrix_times_uncompress_first_equation}.\n \\end{proof}\n \n Algorithm~\\ref{alg:matrix_times_uncompress_matrix} shows the details.\n First, we scan $\\mathbf{C'}$ similar to right multiplication with vector, and we use \\textbf{H}[$i$,:] to remember the computed\n value of $\\mathcal{F}(i)$.\n \n Second, we scan \\textbf{D} to compute $A \\cdot M$ stored in \\textbf{R}. For $k$th column of the result \\textbf{R} and each \\textbf{D}[$i$][$j$], we simply add \n \\textbf{H}[\\textbf{D}[$i$][$j$]][$k$] to \\textbf{R}[$i$][$k$]. 
Because \\textbf{H}[\\textbf{D}[$i$][$j$]][$k$] is a random access of \\textbf{H}, we let the loop of going over each column be the most inner\n loop so that we can scan \\textbf{D} only once and have better cache performance.\n\n\\begin{algorithm} [th!]\n\\caption{Execute $A \\cdot M$ on the TOC output.} \\label{alg:matrix_times_uncompress_matrix}\n\\begin{algorithmic}[1]\n\\Function{MatrixTimesUncompressedMatrix}{\\textbf{D}, \\textbf{I}, $M$}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer of\n \\textbf{I}, encoded table \\textbf{D}, and uncompressed matrix $M$\n \\State \\textbf{outputs:} the result of $A \\cdot M$ in \\textbf{R}\n \\State $\\mathbf{C'} \\gets$ \\Call{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n \\State \\textbf{H} $\\gets \\left[0\\right]$ \\Comment{initialize as a zero matrix}\n \\For {$i$ = 1 to len($\\mathbf{C'}$)-1} \\Comment{scan $\\mathbf{C'}$ to compute \\textbf{H}}\n \\For {$j$ = 0 to num\\_of\\_columns($M$)-1}\n \t\\State \\textbf{H}[$i][j] \\gets \\mathbf{C'}[i].key \\cdot M[:,j] + \\textbf{H}[\\mathbf{C'}[i].parent][j$]\n \\EndFor\n \\EndFor\n \\State \\textbf{R} $\\gets \\left[0\\right]$ \\Comment{initialize as a zero matrix}\n \\For {$i$ = 0 to len(\\textbf{D})-1} \\Comment{scan \\textbf{D} to compute \\textbf{R}}\n\t \\For {$j$ = 0 to len(\\textbf{D}[$i$,:])-1}\n \t\t\\For {$k$ = 0 to num\\_of\\_columns($M$)-1}\n \t\\State $\\textbf{R}[i][k] \\gets \\textbf{R}[i][k] + \\textbf{H}[\\textbf{D}[i][j]][k]$\n \\EndFor\n \\EndFor\n \\EndFor\n \\State \\textbf{return(R)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Left Multiplication}\\label{appendix:left_multiplication}\n We discuss how to compute $M\\cdot A$ where $M$ is an uncompressed matrix and $A$ is\n a compressed matrix. This is an extension of left multiplication with vector in \\S~\\ref{left_multiplication_vector}. \n \n \\begin{theorem}\n Let $A \\in \\Re^{n \\times m}$, $M \\in \\Re^{p \\times n}$, \\textbf{D} be the output of TOC on $A$,\n $\\mathbf{C'}$ be the prefix tree built for decoding, $\\mathbf{C'}[i]$.seq be the sequence of the tree node defined in \n \\S~\\ref{sec:api}, $\\mathbf{C'}[i].key$ be the key of the tree node defined in \\S~\\ref{sec:build_prefix_tree},\n and $\\mathbf{C'}[i].parent$ be the parent index of the tree node defined in \\S~\\ref{sec:build_prefix_tree}.\n Note that $\\mathbf{C'}[i].key$ and $\\mathbf{C'}[i].seq$ are both sparse representations of vectors\n (i.e., $\\mathbf{C'}[i].key \\in \\Re^{1 \\times m}$ and $\\mathbf{C'}[i].seq \\in \\Re^{1 \\times m}$).\n Define function $\\mathcal{G}(x): \\aleph \\rightarrow \\Re^{p \\times 1}$ to be\n \n \\begin{align}\n \t\\mathcal{G}(x) = \\sum_{\\substack{\\mathbf{D}[i, j]=x,\n \t\t\t\t\t\\forall i \\in \\aleph,\n \\forall j \\in \\aleph}}\n M[:,i],~x = 1, 2, ..., \\mathrm{len}(\\mathbf{C'})-1. 
\\label{equation:uncompress_times_matrix_definition}\n \\end{align}\n Then, we have\n \n \\begin{align}\n \tM \\cdot A =& \\sum_{i=1}^{\\mathrm{len}(\\mathbf{C'})-1} \\mathbf{C'}[i].seq \\cdot \\mathcal{G}(i).\n \\label{equation:uncompress_times_matrix_first_equation}\n \\end{align}\n \\end{theorem}\n \n \\begin{proof}\n We substitute $A$ with sequences stored in $\\mathbf{C'}$\n \\begin{align}\n \tM \\cdot A & = \\sum_{i=0}^{n-1} M[:,i] \\cdot A[i, :] \\nonumber \\\\\n & = \\sum_{i=0}^{n-1} \\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} M[:,i] \\cdot \\mathbf{C'}[\\mathbf{D}[i][j]].seq.\n \\label{equation:uncompress_times_matrix_substitution}\n \\end{align}\n Merge terms in Equation~\\ref{equation:uncompress_times_matrix_substitution} with same sequences\n \\begin{align}\n \tM \\cdot A = \\sum_{i=1}^{\\mathrm{len}(\\mathbf{C'})-1} \\mathbf{C'}[i].seq \\cdot (\\sum_{\\substack{\\mathbf{D}[i_k, j_k]=i,\n \t\t\t\t\t\\forall i_k \\in \\aleph,\n \\forall j_k \\in \\aleph}}\n M[:,i_k]) \\label{equation:uncompress_times_matrix_merge_terms}\n \\end{align}\n Plug Equation~\\ref{equation:uncompress_times_matrix_definition} into Equation~\\ref{equation:uncompress_times_matrix_merge_terms}, we get Equation~\\ref{equation:uncompress_times_matrix_first_equation}.\n \\end{proof}\n Algorithm~\\ref{alg:uncompress_matrix_times_matrix} shows the details. First, we similarly scan \\textbf{D} as left multiplication with vector. Specifically, we initialize $\\mathbf{H}$ as a zero matrix,\n and then add $M[k][i]$ to $\\mathbf{H}[\\mathbf{D}[i][j]][k]$ for each $\\mathbf{D}[i][j]$. Note that the \\textbf{H} here is stored in transposed manner so that we only need to scan $\\mathbf{D}$ once and have good cache performance at the same time.\n \n Second, we scan $\\mathbf{C'}$ backwards to actually compute $M \\cdot A$ stored in \\textbf{R}. 
Specifically,\n for $i$th row and each $\\mathbf{C'}[j]$, we add $\\mathbf{C'}[j]$.key * $\\mathbf{H}[i][j]$ to the result of $i$th row \\textbf{R}[i,:] and add $\\mathbf{H}[i][j]$ to\n $\\mathbf{H}[i][\\mathbf{C'}[j]$.parent].\n\n\\begin{algorithm} [th!]\n\\caption{Execute $M \\cdot A$ on the TOC output.}\n\\label{alg:uncompress_matrix_times_matrix}\n\\begin{algorithmic}[1]\n\\Function{UncompressedMatrixTimesMatrix}{\\textbf{D}, \\textbf{I}, $M$}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer of\n \\textbf{I}, encoded table \\textbf{D}, and uncompressed matrix $M$\n \\State \\textbf{outputs:} the result of $M \\cdot A$ in \\textbf{R}\n \\State $\\mathbf{C'} \\gets$ \\Call{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n \\State \\textbf{H} $\\gets \\left[0\\right]$ \\Comment{initialize as a zero matrix}\n \\For {$i$ = 0 to len(\\textbf{D})-1} \\Comment{scan \\textbf{D} to compute \\textbf{H}}\n \\For {$j$ = 0 to len(\\textbf{D}[$i$,:]) -1}\n \\For {$k$ = 0 to num\\_of\\_rows($M$) -1}\n \t\\State $\\textbf{H}[\\textbf{D}[i][j]][k] \\gets M[k][i] + \\textbf{H}[\\textbf{D}[i][j]][k]$\n \\EndFor\n \\EndFor\n \\EndFor\n \\State \\textbf{R} $\\gets \\left[0\\right]$ \\Comment{initialize as a zero matrix}\n \\For {$i$ = len($\\mathbf{C'}$) -1 to 1} \\Comment{scan $\\mathbf{C'}$ to compute \\textbf{R}}\n \\For{$j$ = 0 to num\\_of\\_rows($M$) -1}\n \t\\State $\\textbf{R}[j,:] \\gets \\textbf{R}[j,:] + \\mathbf{C'}[i]$.key * $\\textbf{H}[i][j]$\n \\State Add $\\textbf{H}[i][j]$ to $\\textbf{H}[\\mathbf{C'}[i]$.parent$][j]$\n \n \\EndFor\n \\EndFor\n \\State \\textbf{return(R)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Integration TOC into Bismarck}~\\label{appendix:integration}\nWe integrated TOC into Bismarck and replaced its existing matrix kernels.\nThere are three key parts of the integration. First, we allocate an arena space in\nBismarck shared memory for storing the ML models. Second, we replace the existing\nBismarck matrix kernel with the TOC matrix kernel for updating the ML models.\nThird, a database table is used to store the TOC compressed mini-batches and\nthe serialized bytes of each TOC compressed mini-batch are stored as a bytes field\nof variable length in the row. After all these, we modified the UDF\nof ML training to read the compressed mini-batch from the table\nand use the replaced matrix kernel to update the ML model in the arena.\n\n\\begin{table*}[th!]\n \\vspace{1mm}\n\\caption{End-to-end MGD runtimes (in minutes) for training machine learning models: Neural network(NN),\nLogistic regression (LR), and Support vector machine (SVM) on datasets Census and Kdd99. 
Census15m and Census290m is\n 7GB and 140GB respectively, while Kdd7m and Kdd200m is 7GB and 200GB respectively.}\n\\vspace{-3mm}\n\\label{tab:model_runtimes_census_and_kdd}\n\\centering\n \\begin{tabulary}{1.0\\textwidth}{|L||L|L|L|L|L|C|L|L|L|L|L|L|}\n \\hline\n \\multirow{2}{*}{\\textbf{Methods}} & \\multicolumn{3}{c|}{\\textbf{Census15m}} &\n \\multicolumn{3}{c|}{\\textbf{Census290m}} & \\multicolumn{3}{c|}{\\textbf{Kdd7m}}\n & \\multicolumn{3}{c|}{\\textbf{Kdd200m}} \\\\\n & NN & LR & SVM & NN & LR & SVM & NN & LR & SVM & NN & LR & SVM \\\\\n \\hline\n \\hline\n TOC (ours) & \\textbf{35} & \\textbf{0.8} & \\textbf{0.7} & \\textbf{702} & \\textbf{16} & \\textbf{14} & 16.1 & \\textbf{0.2} & \\textbf{0.2} & \\textbf{323} & \\textbf{6.1} & \\textbf{5.9} \\\\\n \\hline\n DEN & 39 & 4.0 & 4.0 & 1108 & 253 & 251 & 29 & 4.6 & 4.4 & 1003 & 608 & 615 \\\\\n \\hline \n CSR & 38 & 1.8 & 1.8 & 942 & 161 & 167 & 19.2 & 0.4 & 0.4 & 438 & 56 & 53 \\\\\n \\hline \n CVI & 37 & 1.1 & 1.0 & 844 & 80 & 67 & 18.5 & 0.3 & 0.3 & 422 & 31 & 30 \\\\\n \\hline \n DVI & 38 & 1.2 & 1.1 & 800 & 46 & 43 & 28.4 & 1.2 & 1.1 & 611 & 71 & 71 \\\\\n \\hline \n Snappy & 41 & 4.7 & 4.6 & 905 & 121 & 115 & 27.2 & 3.5 & 3.5 & 616 & 127 & 128 \\\\\n \\hline \n Gzip & 46 & 11.1 & 11.1 & 965 & 244 & 241 & 33.5 & 7.5 & 7.5 & 683 & 235 & 235 \\\\\n \\hline\n \\hline\n BismarckTOC & 38 & 0.87 & 0.88 & 742 & 17.4 & 14.8 & 16.8 & 0.3 & 0.31 & 329 & 6.4 & 6.3 \\\\\n \\hline\n BismarckDEN & N\/A & 4.2 & 4.3 & N\/A & 321 & 310 & N\/A & 4.0 & 3.8 & N\/A & 645 & 644 \\\\\n \\hline \n BismarckCSR & N\/A & 3.2 & 3.2 & N\/A & 222 & 234 & N\/A & 0.9 & 0.9 & N\/A & 114 & 115 \\\\\n \\hline \n ScikitLearnDEN & 73.2 & 7.3 & 6.6 & 1715 & 604 & 580 & 42 & 5 & 4.6 & 1797 & 771 & 772 \\\\\n \\hline \n ScikitLearnCSR & 105.1 & 5.7 & 5.1 & 2543 & 421 & 408.8 & 44 & 1.7 & 1.5 & 1476 & 166 & 160 \\\\\n \\hline \n TensorFlowDEN & 38.1 & 9.4 & 10.5 & 1073 & 638 & 610 & 21.4 & 5.5 & 5.1 & 1199 & 781 & 779 \\\\\n \\hline \n TensorFlowCSR & 54.7 & 15.1 & 14.0 & 1244 & 681 & 661 & \\textbf{15.2} & 4.1 & 4.4 & 577 & 300 & 274 \\\\\n \\hline\n \\end{tabulary}\n\\end{table*}\n\n\\subsection{End-to-End MGD Runtimes}~\\label{appendix:mgd_runtimes}\nMGD runtimes on Census and Kdd99 are reported in Table~\\ref{tab:model_runtimes_census_and_kdd}. Overall, the results are similar to those presented in \\S~\\ref{sec:mgd_runtimes}. On small datasets like Census15m and Kdd7m, TOC has comparable performance with other methods. On large datasets like Census290m and Kdd200m, TOC is up to 1.8x\/17.8x\/18.3x faster than the state-of-the-art compression schemes for NN\/LR\/SVM respectively. We leave the results of datasets Rcv1 and Deep1Billion because of their extreme sparsity\/density such that we do not expect better performance from TOC.\n\n\\subsection{Theorem~\\ref{theorem:matrix_times_vector}} \\label{appendix:matrix_times_vector}\n\\begin{proof}\nWithout loss of generality, we use a specific row $A[i, :]$ in the proof. 
First, we substitute $A[i, :]$ with sequences stored in the prefix tree $\\mathbf{C'}$, then\n\\begin{align} \\label{equation:matrix_times_vector_substitution}\n\tA[i, :] \\cdot v =\\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} \\mathbf{C'}[\\mathbf{D}[i,j]].seq \\cdot v\n\\end{align}\nPlug Equation~\\ref{equation:matrix_times_vector_definition} into Equation~\\ref{equation:matrix_times_vector_substitution},\nwe get Equation~\\ref{equation:matrix_times_vector_first_equation}.\nFollowing the definition of the sequence of the tree node,\nwe immediately get Equation~\\ref{equation:vector_times_matrix_second_equation}.\n\\end{proof}\n\n\\subsection{Theorem~\\ref{theorem:vector_times_matrix}}\n\\label{appendix:vector_times_matrix}\n\\begin{proof}\nWe substitute $A$ with sequences stored in $\\mathbf{C'}$\n\\begin{align}\n\tv \\cdot A & = \\sum_{i=0}^{n-1} v[i] \\cdot A[i, :] \\nonumber \\\\\n & = \\sum_{i=0}^{n-1} \\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} v[i] \\cdot \\mathbf{C'}[\\mathbf{D}[i][j]].seq.\n \\label{equation:vector_times_matrix_substitution}\n\\end{align}\nMerge terms in Equation~\\ref{equation:vector_times_matrix_substitution} with same sequences\n\\begin{align}\n\tv \\cdot A = \\sum_{i=1}^{\\mathrm{len}(\\mathbf{C'})-1} \\mathbf{C'}[i].seq \\cdot (\\sum_{\\substack{\\mathbf{D}[i_k, j_k]=i,\n \t\t\t\t\t\\forall i_k \\in \\aleph,\n \\forall j_k \\in \\aleph}}\n v[i_k]) \\label{equation:vector_times_matrix_merge_terms}\n\\end{align}\nPlug Equation~\\ref{equation:vector_times_matrix_definition} into Equation~\\ref{equation:vector_times_matrix_merge_terms}, we get Equation~\\ref{equation:vector_times_matrix_first_equation}.\n\\end{proof}\n\\section{Running Examples} \\label{running_example}\n\n\\section{Proof of Theorems} \\label{thoerem_proof}\n\\vspace{1mm}\n\\input{appendix\/theorem_proofs}\n\n\\section{More Algorithms} \\label{more_algorithms}\n\\vspace{1mm}\n\\input{appendix\/more_algorithms}\n\n\\section{More Time Complexity Analysis} \\label{more_complexity_analysis}\n\\input{appendix\/more_complexity_analysis}\n\n\\section{More Experiments} \\label{more_experiments}\n\\vspace{1mm}\n\\input{appendix\/more_experiments}\n\n\\subsection{Machine Learning Training}\n\\subsubsection{Empirical Risk Minimization}\nWe begin with a description of ML training in the generalized setting based on standard ML texts~\\cite{SSSSS10,shai}. Formally, we have a hypothesis space $\\mathcal{H}$,\nan instance set $\\mathcal{Z}$, and a loss function $\\ell: \\mathcal{H} \\times \\mathcal{Z} \\mapsto \\bbR$.\nGiven a training set\n$\\mathcal{S} = \\{z_1, z_2, ..., z_n\\}$ which are $n$ i.i.d. draws based on a distribution $\\mathcal{D}$ on $\\mathcal{Z}$, and a hypothesis\n$h \\in \\mathcal{H}$, our goal is to minimize\nthe {\\em empirical risk} over the training set $\\mathcal{S}$ defined as\n\\begin{equation} \\label{eq:general}\n L_{\\mathcal{S}}(h) = \\frac{1}{n}\\sum_{i=1}^n \\ell(h, z_i).\n\\end{equation}\n\nMany ML models including Logistic\/Linear regression, Support vector machine, and Neural network fit into this generalized setting~\\cite{SSSSS10}.\n\n\\subsubsection{Gradient Descent}\nML training can be viewed as the process to find the optimal $\\hat{h} \\in \\mathcal{H}$ such that $\\hat{h} = \\textrm{argmin}\\ L_{\\mathcal{S}}(h).$ This is essentially an optimization problem, and gradient descent is a common and established class of algorithms for solving this problem. 
There\nare three main variants of gradient descent: batch gradient descent, stochastic gradient descent,\nand mini-batch stochastic gradient descent.\n\n\\noindent \\textbf{Batch Gradient Descent (BGD)}. BGD uses all the training data to compute the gradient and update $h$ per iteration.\n\n\\noindent \\textbf{Stochastic Gradient Descent (SGD)}. SGD uses a single tuple to compute the gradient and update $h$ per iteration.\n\n\\noindent \\textbf{Mini-batch Stochastic Gradient Descent (MGD)}. MGD uses a small batch of tuples (typically tens or hundreds of tuples) to compute the gradient and update $h$ per iteration:\n\n\\begin{equation} \\label{equation:gradient_descent}\nh^{(t)} \\leftarrow h^{(t-1)} - \\lambda \\frac{1}{|B^t|}\\sum_{z \\in B^t}\\frac{\\partial \\ell(h, z)}{\\partial h},\n\\end{equation}\nwhere $B^t$ is the current $t$-th mini-batch we visit, $z$ is a tuple from $B^t$, and $\\lambda$ is the learning rate.\n\nNote that MGD can cover the spectrum of\ngradient descent methods by setting the mini-batch size $|B^t|$. For examples, MGD morphs into SGD and BGD by setting $|B^t| = 1$ and $|B^t| = |\\mathcal{S}|$, respectively.\n\nMGD gains its popularity due to its fast convergence rate and statistical stability. It typically requires fewer epochs (the whole pass over a dataset is an epoch) to\nconverge than BGD and is more stable than SGD~\\cite{ruder2016overview}. Figure~\\ref{fig:optimization_methods_efficiency} illustrates the optimization efficiencies of these gradient descent variants, among which MGD with hundreds of rows in a mini-batch achieves the best balance between fast convergence rate and statistical stability. Thus, in this paper, we focus on MGD with mini-batch size ranging from tens to hundreds of tuples.\n\n\\begin{figure}[th!]\n\\centering\n \\includegraphics[width=0.97\\linewidth]{.\/plots\/gd_efficiencies.pdf}\n\\vspace{2mm}\n \\caption{Optimization efficiencies of BGD, SGD, and MGD for training a neural network with one hidden layer (no convolutional layers) on Mnist. MGD (250 rows)\n has 250 rows in a mini-batch. MGD-20\\%, MGD-50\\%, and MGD-80\\% has 20, 50, and 80 percent of rows of the\n whole dataset in each mini-batch, respectively.}\n\\label{fig:optimization_methods_efficiency}\n\\vspace{-3mm}\n\\end{figure}\n\n\\subsubsection{Shuffle Once v.s. Shuffle Always}\nThe random sampling for tuples in SGD\/MGD is typically done without replacement, which is achieved by shuffling the dataset before an epoch~\\cite{bengio2012practical}. However, shuffling data at every epoch (shuffle always) is expensive and incurs a high overhead. Thus, we follow the standard technique of shuffling once~\\cite{bismarck,wu2017bolt,bengio2012practical} (i.e., shuffling the data once upfront) to improve the ML training efficiency.\n\n\\subsubsection{Core Matrix Operations for Gradient Descent}\nThe core operations, which dominate the CPU time, for using gradient descent to optimize many ML models (e.g., Linear\/Logistic regression, Support vector machine, and Neural network) are matrix\noperations~\\cite{elgohary2016compressed}. We illustrate this point using an example of Linear regression, and summarize the core matrix operations for these ML models in Table~\\ref{tab:matrix_operations_for_gradient_descent}.\n\\begin{table}[th!]\n \\vspace{1mm}\n \\caption{The core matrix operations when using gradient descent to optimize popular ML models. $A = [x_1^T; x_2^T; ...; x_{|B|}^T]$ is\n a batch of data for updating models where $(x_i, y_i) \\in B$. 
$v$ and $M$ are either ML model parameters or intermediate results for\n computing gradients. We use logistic loss for Logistic regression, hinge loss for Support vector machine, and mean squared loss for Linear regression and Neural network. For the sake\n of simplicity, our neural network structure has a feed forward structure with a single hidden layer.}\n \\vspace{-1mm}\n \\label{tab:matrix_operations_for_gradient_descent}\n \\centering\n \\begin{tabular}{|c||c|c|c|c|}\n \\hline\n \\textbf{ML models} & $A \\cdot v$ & $v \\cdot A$ & $A \\cdot M$ & $M \\cdot A$ \\\\\n \\hline\n \\hline\n Linear regression & $\\checkmark$ & $\\checkmark$ & & \\\\\n \\hline\n Logistic regression & $\\checkmark$ & $\\checkmark$ & & \\\\\n \\hline\n Support vector machine & $\\checkmark$ & $\\checkmark$ & & \\\\\n \\hline\n Neural network & & & $\\checkmark$ & $\\checkmark$ \\\\\n \\hline\n\\end{tabular}\n\n \\end{table}\n\\vspace{1mm}\n\n\\noindent \\textbf{Example}. Consider a supervised classification algorithm Linear regression\nwhere $\\mathcal{Z} = \\mathcal{X} \\times \\mathcal{Y}$, $\\mathcal{X} \\subseteq \\bbR^d$, $\\mathcal{Y} = \\bbR$,\n$\\mathcal{H} = \\bbR^d$, and $\\ell(h, z) = \\frac{1}{2} (y-x^Th)^2$. Let matrix $A = [x_1^T; x_2^T; ...; x_{|B|}^T]$, vector $Y = [y_1; y_2; ...; y_{|B|}]$, then the aggregated \ngradients of the loss function is:\n\\begin{equation} \\label{equation:loss_function}\n\\sum_{ z \\in B} \\frac{\\partial \\ell(h, z)}{\\partial h} = \\sum_{(x, y) \\in B} (x^Th - y)x\n = ((Ah-Y)^TA)^T.\n\\end{equation}\nThus, there are two core matrix operations---matrix times vector and vector times matrix---to compute\nEquation~\\ref{equation:loss_function}.\n\n\\subsection{Data Compression}\nData compression, also known as source coding, is an important technique to reduce data sizes.\nThere are two main components in a data compression scheme, an encoding process that encodes the\ndata into coded symbols (hopefully with fewer bits), and a decoding process that reconstructs the\noriginal data from the compressed representation.\n\nBased on whether the reconstructed data differs from the original data, data compression schemes\nusually can be classified into {\\em lossless} compression or {\\em lossy} compression. In this paper,\nwe propose a lossless compression scheme called tuple-oriented compression (TOC) which is inspired by a classical lossless string\/text compression scheme that has both gained academic influence and industrial popularity,\nLempel-Ziv-Welch~\\cite{welch1984technique,ziv1977universal,ziv1978compression}. For examples, Unix file compression utility\\footnote{ncompress.sourceforge.net} and GIF~\\cite{wiggins2001image} image format are based on LZW.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Sparse-safe Element-wise Operations}\nTo execute sparse-safe element-wise operations (e.g., $A.* c$ and $A.^2$) on the TOC output directly, one can simply scan and modify\n\\textbf{I} because all the unique column\\_index:value pairs in $A$ are stored in \\textbf{I}. Algorithm\n~\\ref{alg:matrix_times_scalar} demonstrates how to execute matrix times scalar operation (i.e., $A. * c$) on the TOC output.\nAlgorithms for other sparse-safe element-wise operations are similar.\n\n\\begin{algorithm} [th!]\n\\caption{Execute matrix times scalar operation (i.e., $A. 
* c$) on the TOC output.} \\label{alg:matrix_times_scalar}\n\\begin{algorithmic}[1]\n\\Function{MatrixTimesScalar}{\\textbf{I}, c}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer\n \\Statex of the prefix tree \\textbf{I} and a scalar c\n \\State \\textbf{outputs:} the modified \\textbf{I}\n\t\\For {$i \\gets $ 0 to len(\\textbf{I}) -1}\n \t\\State $\\textbf{I}[i]$.value $\\gets$ $\\textbf{I}[i]$.value * c\n \\EndFor\n \\State \\textbf{return(I)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Right Multiplication Operations}\n\\label{right_multiplication_vector}\nWe first do some mathematical analysis to transform the uncompressed execution of right multiplication operations to the compressed execution that operates directly on the TOC output without decoding the original table. The analysis also proves the algorithm correctness since the algorithm follows the transformed form directly. Then, we demonstrate the detailed algorithm. In the rest of this subsection, we use $A \\cdot v$ as an example. We put the result of $A \\cdot M$ (similar to $A \\cdot v$) in Appendix~\\ref{more_algorithms} for brevity.\n\n\\begin{theorem}\\label{theorem:matrix_times_vector}\nLet $A \\in \\Re^{n \\times m}$, $v \\in \\Re^{m \\times 1}$, \\textbf{D} be the output of TOC on $A$,\n$\\mathbf{C'}$ be the prefix tree built for decoding, $\\mathbf{C'}[i].seq$ be the sequence of the tree node defined in\n\\S~\\ref{sec:api}, $\\mathbf{C'}[i].key$ be the key of the tree node defined in \\S~\\ref{sec:build_prefix_tree},\nand $\\mathbf{C'}[i].parent$ be the parent index of the tree node defined in \\S~\\ref{sec:build_prefix_tree}.\nNote that $\\mathbf{C'}[i].key$ and $\\mathbf{C'}[i].seq$ are both sparse representations of vectors\n(i.e., $\\mathbf{C'}[i].key \\in \\Re^{1 \\times m}$ and $\\mathbf{C'}[i].seq \\in \\Re^{1 \\times m}$).\nDefine function $\\mathcal{F}(x): \\aleph \\rightarrow \\Re$ to be\n\n\\begin{align} \\label{equation:matrix_times_vector_definition}\n\t\\mathcal{F}(x) = \\mathbf{C'}[x].seq \\cdot v, x = 1, 2, ..., \\mathrm{len}(\\mathbf{C'})-1.\n\\end{align}\nThen, we have\n\\begin{align} \\label{equation:matrix_times_vector_first_equation}\nA[i, :] \\cdot v = & \\sum_{j=0}^{\\mathrm{len}(\\mathbf{D}[i, :])-1} \\mathcal{F}(\\mathbf{D}[i][j]), i = 0, 1, ..., n-1\n\\end{align}\nand\n\\begin{align}\n\t\\mathbf{C'}[i].seq =&~\\mathbf{C'}[i].key + \\mathbf{C'}[\\mathbf{C'}[i].parent].seq, & \\nonumber \\\\\n & i = 1, 2, ..., \\mathrm{len}(\\mathbf{C'})-1.\n \\label{equation:vector_times_matrix_second_equation}\n\\end{align}\n\\end{theorem}\n\n\\begin{proof}\nSee Appendix~\\ref{appendix:matrix_times_vector}\n\\end{proof}\n\n\\noindent \\textbf{\\em Remark on Theorem~\\ref{theorem:matrix_times_vector}}. $A \\cdot v$ can be directly executed on the TOC output following \nEquation~\\ref{equation:matrix_times_vector_first_equation} by scanning $\\mathbf{C'}$ first and scanning $\\mathbf{D}$ second. \nThe detailed steps are demonstrated in Algorithm~\\ref{alg:matrix_times_vector}.\n\nFirst, we scan $\\mathbf{C'}$ to compute $\\mathcal{F}$ function defined in Equation~\\ref{equation:matrix_times_vector_definition}\n(lines 5-7 in Algorithm~\\ref{alg:matrix_times_vector}). The dynamic programming technique is used following\nEquation~\\ref{equation:vector_times_matrix_second_equation}. Specifically, we use \\textbf{H}[$i$] to remember the computed\nvalue of $\\mathcal{F}(i)$. 
We compute each\n\\textbf{H}[$i$] as the sum of $\\mathbf{C'}[i].key \\cdot v$ and \\textbf{H}[$\\mathbf{C'}$[$i$].parent], which is computed already.\n\nSecond, we scan \\textbf{D} to compute $A \\cdot v$ and store it in \\textbf{R} following Equation~\\ref{equation:matrix_times_vector_first_equation}\n(lines 8-11 in Algorithm~\\ref{alg:matrix_times_vector}). For each \\textbf{D}[$i$][$j$], we simply add \n\\textbf{H}[\\textbf{D}[$i$][$j$]] to \\textbf{R}[$i$].\n\n\\begin{algorithm} [th!]\n\\caption{Execute matrix times vector operation (i.e., $A \\cdot v$) on the TOC output.} \\label{alg:matrix_times_vector}\n\\begin{algorithmic}[1]\n\\Function{MatrixTimesVector}{\\textbf{D}, \\textbf{I}, $v$}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer of\n the prefix tree \\textbf{I}, encoded table \\textbf{D}, and vector $v$\n \\State \\textbf{outputs:} the result of $A \\cdot v$ in \\textbf{R}\n \\State $\\mathbf{C'} \\gets$ \\Call{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n \\State \\textbf{H} $\\gets \\overrightarrow{0}$ \\Comment{initialize as a zero vector}\n \\For {$i$ = 1 to len($\\mathbf{C'}$)-1} \\Comment{scan $\\mathbf{C'}$ to compute \\textbf{H}}\n \t\\State \\textbf{H}[$i$] $ \\gets \\mathbf{C'}$[$i$].key $\\cdot v$ + \\textbf{H}[$\\mathbf{C'}$[$i$].parent]\n \\EndFor\n \\State \\textbf{R} $\\gets \\overrightarrow{0}$ \\Comment{initialize as a zero vector}\n \\For {$i$ = 0 to len(\\textbf{D})-1} \\Comment{scan \\textbf{D} to compute \\textbf{R}}\n \t\\For {$j$ = 0 to len(\\textbf{D}[$i$,:])-1}\n \t\\State \\textbf{R}[$i$] $\\gets$ \\textbf{R}[$i$] + \\textbf{H}[\\textbf{D}[$i$][$j$]]\n \\EndFor\n \\EndFor\n \\State \\textbf{return(R)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\subsection{Left Multiplication Operations}\n\\label{left_multiplication_vector}\nWe first give the mathematical analysis and then present the detailed algorithm. The reason for doing so is similar to that is given in \\S~\\ref{right_multiplication_vector}. We only demonstrate the result of $v \\cdot A$ and put the result of $M \\cdot A$ to Appendix~\\ref{more_algorithms} for brevity.\n\n\n\\begin{theorem} \\label{theorem:vector_times_matrix}\nLet $A \\in \\Re^{n \\times m}$, $v \\in \\Re^{1 \\times n}$, \\textbf{D} be the output of TOC on $A$,\n$\\mathbf{C'}$ be the prefix tree built for decoding, $\\mathbf{C'}[i]$.seq be the sequence of the tree node defined in \n\\S~\\ref{sec:api}, $\\mathbf{C'}[i].key$ be the key of the tree node defined in \\S~\\ref{sec:build_prefix_tree},\nand $\\mathbf{C'}[i].parent$ be the parent index of the tree node defined in \\S~\\ref{sec:build_prefix_tree}.\nNote that $\\mathbf{C'}[i].key$ and $\\mathbf{C'}[i].seq$ are both sparse representations of vectors\n(i.e., $\\mathbf{C'}[i].key \\in \\Re^{1 \\times m}$ and $\\mathbf{C'}[i].seq \\in \\Re^{1 \\times m}$).\nDefine function $\\mathcal{G}(x): \\aleph \\rightarrow \\Re$ to be\n\n\\begin{align}\n\t\\mathcal{G}(x) = \\sum_{\\substack{\\mathbf{D}[i, j]=x,\n \t\t\t\t\t\\forall i \\in \\aleph,\n \\forall j \\in \\aleph}}\n v[i],~x = 1, 2, ..., \\mathrm{len}(\\mathbf{C'})-1. 
\\label{equation:vector_times_matrix_definition}\n\\end{align}\nThen, we have\n\n\\begin{align}\n\tv \\cdot A =& \\sum_{i=1}^{\\mathrm{len}(\\mathbf{C'})-1} \\mathbf{C'}[i].seq \\cdot \\mathcal{G}(i).\n \\label{equation:vector_times_matrix_first_equation}\n\\end{align}\n\\end{theorem}\n\n\\begin{proof}\nSee Appendix~\\ref{appendix:vector_times_matrix}.\n\\end{proof}\n\n\\noindent \\textbf{\\em Remark on Theorem~\\ref{theorem:vector_times_matrix}}. We can compute $v \\cdot A$ following \nEquation~\\ref{equation:vector_times_matrix_first_equation} by simply scanning \\textbf{D} first and scanning $\\mathbf{C'}$ second. Algorithm~\\ref{alg:vector_times_matrix} presents the detailed steps.\nFirst, we scan \\textbf{D} to compute function $\\mathcal{G}$ \ndefined in Equation~\\ref{equation:vector_times_matrix_definition}. Specifically, we initialize $\\mathbf{H}$ as a zero vector,\nand then add $v[i]$ to $\\mathbf{H}[\\mathbf{D}[i][j]]$ for each $\\mathbf{D}[i][j]$\n(lines 6-8 in Algorithm~\\ref{alg:vector_times_matrix}). After this step is done, $\\mathcal{G}(i) = \\mathbf{H}[i], i$ = 1, 2, $\\dots$, len$(\\mathbf{C'})-1$.\n\nSecond, we scan $\\mathbf{C'}$ backwards to actually compute $v \\cdot A$ and store it in \\textbf{R} following Equation~\\ref{equation:vector_times_matrix_first_equation} (lines 10-12 in Algorithm~\\ref{alg:vector_times_matrix}). The dynamic\nprogramming technique is used following Equation~\\ref{equation:vector_times_matrix_second_equation}. Specifically,\nfor each $\\mathbf{C'}[i]$, we add $\\mathbf{C'}[i].key \\cdot \\mathbf{H}[i]$ to \\textbf{R} and add $\\mathbf{H}[i]$ to\n$\\mathbf{H}[\\mathbf{C'}[i].parent]$.\n\n\\begin{algorithm} [th!]\n\\caption{Execute vector times matrix operation (i.e., $v \\cdot A$) on the TOC output.} \\label{alg:vector_times_matrix}\n\\begin{algorithmic}[1]\n\\Function{VectorTimesMatrix}{\\textbf{D}, \\textbf{I}, $v$}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer of\n the prefix tree \\textbf{I}, encoded table \\textbf{D}, and vector $v$\n \\State \\textbf{outputs:} the result of $v \\cdot A$ in \\textbf{R}\n \\State $\\mathbf{C'} \\gets$ \\Call{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n \\State \\textbf{H} $\\gets \\overrightarrow{0}$ \\Comment{initialize as a zero vector}\n \\For {$i$ = 0 to len(\\textbf{D})-1} \\Comment{scan \\textbf{D} to compute \\textbf{H}}\n \t\\For {$j$ = 0 to len(\\textbf{D}[$i$,:]) -1}\n \t\\State \\textbf{H}[\\textbf{D}[$i$][$j$]] $\\gets$ $v$[$i$] + \\textbf{H}[\\textbf{D}[$i$][$j$]]\n \\EndFor\n \\EndFor\n \\State \\textbf{R} $\\gets \\overrightarrow{0}$ \\Comment{initialize as a zero vector}\n \\For {$i$ = len($\\mathbf{C'}$) -1 to 1} \\Comment{scan $\\mathbf{C'}$ to compute \\textbf{R}}\n \t\\State \\textbf{R} $\\gets$ \\textbf{R} + $\\mathbf{C'}$[$i$].key $\\cdot$ \\textbf{H}[$i$]\n \\State \\textbf{H}[$\\mathbf{C'}[i]$.parent] $\\gets$ \\textbf{H}[$\\mathbf{C'}$[i].parent] + \\textbf{H}[$i$]\n \\EndFor\n \\State \\textbf{return(R)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Sparse-unsafe Element-wise Operations}\nFor sparse-unsafe element-wise operations (e.g., $A.+c$ and $A+M$), we need to fully decode $A$ first and then\nexecute the operations on $A$. Although this process is slow due to the tedious\ndecoding step, fortunately, sparse-unsafe element-wise operations are less likely to be used for training ML models because the input data is changed. 
Algorithm~\\ref{alg:matrix_plus_scalar} presents the detailed steps.\n\n\\begin{algorithm} [th!]\n\\caption{Execute matrix plus scalar element-wise operation (i.e., $A.+c$) on the TOC output.}\n\\label{alg:matrix_plus_scalar}\n\\begin{algorithmic}[1]\n\\Function{MatrixPlusScalar}{\\textbf{D}, \\textbf{I}, $c$}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer of\n the prefix tree \\textbf{I}, encoded table \\textbf{D}, and scalar $c$\n \\State \\textbf{outputs:} the result of $A.+c$ in \\textbf{R}\n \\State $\\mathbf{C'} \\gets$ \\Call{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n \\For {$i$ = 0 to len(\\textbf{D})-1}\n \t\\State \\textbf{B}[$i$] $\\gets$ [] \\Comment{initialize \\textbf{B}[$i$] as an empty vector}\n \\For {$j$ = 0 to len(\\textbf{D}[$i$,:])-1}\n \t\\State reverse\\_seq $\\gets$ []\n \\State tree\\_index $\\gets$ \\textbf{D}[$i$][$j$]\n \\While{tree\\_index $\\neq$ 0} \\Comment{backtrack $\\mathbf{C'}$ to get the reversed sequence of the tree node \\textbf{D}[$i$][$j$]}\n \t\\State reverse\\_seq.Append($\\mathbf{C'}$[tree\\_index].key)\n \\State tree\\_index $\\gets \\mathbf{C'}$[tree\\_index].parent\n \\EndWhile\n \\For {$k$ = len(reverse\\_seq)-1 to 0}\n \t\\State \\textbf{B[$i$]}.Append(reverse\\_seq[$k$])\n \\EndFor\n \\EndFor\n \\EndFor\n \\State $A \\gets$ TransformToDense(\\textbf{B})\n \\State \\textbf{R} $\\gets A.+c$\n \\State \\textbf{return(R)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Access Values of I and D From Physical Bytes}\nAs shown in Figure~\\ref{fig:decoding_overview}, executing many matrix operations requires scanning \\textbf{I} or \\textbf{D}, which are encoded to physical bytes using bit packing and value indexing as explained in \\S~\\ref{sec:physical-encoding}.\nThus, we briefly discuss how to access values of \\textbf{I} and \\textbf{D} from encoded physical bytes.\n\nTo access a non-negative integer encoded using bit packing, one\ncan simply seek to the starting position of the integer, and cast its bytes to uint\\_8, uint\\_16, or \nuint\\_32 respectively. Unfortunately, most programming languages do not support uint\\_24 natively. \nNevertheless, one can copy the bytes into an uint\\_32 and mask its leading byte as zero.\n\nTo access values encoded using value indexing, one can look up the array which stores the unique values using the value indexes.\n\n\\subsubsection{Build Prefix Tree For Decoding} \\label{sec:build_prefix_tree}\nAs shown in Figure~\\ref{fig:decoding_overview}, executing all matrix operations except for\nsparse-safe element-wise operations needs to build the prefix tree $\\mathbf{C'}$, which is a simplified\nvariant of the prefix tree \\textbf{C} built during encoding.\nEach node in $\\mathbf{C'}$ has the same index and key with the node in \n\\textbf{C}. The difference is that each node in $\\mathbf{C'}$ stores the index to its parent, but it does NOT store\nindexes to its children. 
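As a concrete illustration (a minimal Python sketch, not the implementation used in our experiments), $\\mathbf{C'}$ can be held as two flat arrays, one for keys and one for parent indexes, and a node's sequence is recovered by backtracking its parent links to the root; the arrays below are filled with the example tree of Table~\\ref{tab:c_prime_example}:
\\begin{verbatim}
# Sketch of C' as two flat arrays; node 0 is the root.  The keys and
# parent indexes correspond to the example decoding tree in the text.
key    = [None, (1, 1.1), (2, 2), (3, 3), (4, 1.4), (2, 1.1),
          (2, 2), (3, 3), (4, 1.4), (3, 3), (3, 3)]
parent = [None, 0, 0, 0, 0, 0, 1, 2, 3, 6, 5]

def sequence(i):
    """Recover C'[i].seq by walking parent links back to the root."""
    seq = []
    while i != 0:
        seq.append(key[i])   # keys are collected leaf-to-root
        i = parent[i]
    seq.reverse()            # reorder them root-to-leaf
    return seq

# sequence(9) == [(1, 1.1), (2, 2), (3, 3)], i.e. the pairs 1:1.1, 2:2, 3:3
\\end{verbatim}
Storing only the parent index is what the dynamic programming in Algorithms~\\ref{alg:matrix_times_vector} and~\\ref{alg:vector_times_matrix} relies on: partial results are propagated between a node and its parent in a single scan of $\\mathbf{C'}$, without ever materializing the sequences themselves.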
Table~\\ref{tab:c_prime_example} demonstrates an example of $\\mathbf{C'}$.\n\n\\begin{table}[th!]\n \\vspace{1mm}\n \\caption{An example of $\\mathbf{C'}$, which is a simplified variant of \\textbf{C} in Figure~\\ref{fig:encoding-overview}.\n Each node in $\\mathbf{C'}$ only stores the index to its parent but NOT\n indexes to its children.}\n\\label{tab:c_prime_example}\n \\centering\n \n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\small{\\textbf{Index}} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\\n \\hline\n \\small{\\textbf{Key}} & & 1:1.1 & 2:2 & 3:3 & 4:1.4 & 2:1.1 & 2:2 & 3:3 & 4:1.4 & 3:3 & 3:3 \\\\\n \\hline\n \\small{\\textbf{ParentIndex}} & & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 3 & 6 & 5 \\\\\n \\hline\n \\end{tabular}\n\n \\end{table}\n\nAlgorithm~\\ref{alg:build_prefix_tree_c_prime} presents how to build $\\mathbf{C'}$. There are two main phases in \nAlgorithm~\\ref{alg:build_prefix_tree_c_prime}. In phase \\rom{1}, $\\mathbf{C'}$ and \\textbf{F} are both initialized by\n\\textbf{I}, where \\textbf{F} stores the first column\\_index:value pair of the sequence represented by each tree node.\n\nIn phase \\rom{2}, we scan \\textbf{D} to build $\\mathbf{C'}$ mimicking how \\textbf{C} is built in \nAlgorithm~\\ref{alg:prefix_tree_encoding}. From line 11 to line 13 of\nAlgorithm~\\ref{alg:build_prefix_tree_c_prime}, we add a new prefix tree node indexed by idx\\_seq\\_num. More specifically,\nthe new tree node is a child of the tree node indexed by \\textbf{D}[$i$][$j$] (line 11), the first column\\_index:value \npair of the sequence represented by the new tree node is the same with its parent (line 12), and the key of the new tree node\nis the first column\\_index:value pair of the sequence represented by the next tree node indexed by \\textbf{D}[$i$][$j+1$] (line 13).\n\n\\begin{algorithm} [th!]\n\\caption{Build Prefix Tree $\\mathbf{C'}$} \\label{alg:build_prefix_tree_c_prime}\n\\begin{algorithmic}[1]\n\\Function{BuildPrefixTree}{\\textbf{I}, \\textbf{D}}\n\t\\State \\textbf{inputs:} column\\_index:value pairs in the first layer\n \\Statex of the prefix tree \\textbf{I} and encoded table \\textbf{D}\n \\State \\textbf{outputs:} A prefix tree used for decoding $\\mathbf{C'}$\n \\For {$i \\gets$ 1 to len(\\textbf{I})} \\Comment{phase \\rom{1}: initialize with \\textbf{I}}\n \t\\State $\\mathbf{C'}[i]$.key $\\gets \\textbf{I}[i-1]$\n \\State $\\mathbf{C'}[i]$.parent $\\gets 0$\n \\State $\\textbf{F}{[i]} \\gets \\textbf{I}[i-1]$\n \\Comment{\\textbf{F} stores the first column\\_index:value pair of the sequence of the node}\n\n \\EndFor\n \\State idx\\_seq\\_num $\\gets$ len(\\textbf{I}) + 1\n \\For {$i \\gets $ 0 to len(\\textbf{D}) - 1} \\Comment{phase \\rom{2}: build $\\mathbf{C'}$}\n \\For {$ j \\gets $ 0 to len(\\textbf{D}[$i$]) -2 } \\Comment{skip last element}\n \t\\State $\\mathbf{C'}$[idx\\_seq\\_num].parent $\\gets$ \\textbf{D}[$i$][$j$]\n \\State \\textbf{F}[idx\\_seq\\_num] $\\gets$ \\textbf{F}[\\textbf{D}[$i$][$j$]]\n \\State $\\mathbf{C'}$[idx\\_seq\\_num].key $\\gets$ \\textbf{F}[\\textbf{D}[$i$][$j+1$]]\n \\State idx\\_seq\\_num $\\gets$ idx\\_seq\\_num + 1\n \\EndFor \n \\EndFor\n \\State \\textbf{return ($\\mathbf{C'}$)}\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Time Complexity Analysis}\nWe give detailed time complexity analysis of different matrix operations except for $A \\cdot M$ and $M \\cdot A$, which we put to Appendix~\\ref{more_complexity_analysis} for brevity.\nFor $A.*c$, we only need to scan \\textbf{I}, so the time 
complexity is $\\mathcal{O}(|\\mathbf{I}|)$.\n\nFor $A \\cdot v$ and $v \\cdot A$, we need to build $\\mathbf{C'}$ , scan $\\mathbf{C'}$, and scan \\textbf{D}. As shown in Algorithm~\\ref{alg:build_prefix_tree_c_prime}, building $\\mathbf{C'}$\nneeds to scan \\textbf{I} and \\textbf{D}, and $|\\mathbf{C'}| = |\\mathbf{I}| + |\\mathbf{D}|$. So the complexity of building and scanning $\\mathbf{C'}$ are $\\mathcal{O}(|\\mathbf{I}| + |\\mathbf{D}|)$.\nOverall, the complexity of $A \\cdot v$ and $v \\cdot A$ are $\\mathcal{O}(|\\mathbf{I}| + |\\mathbf{D}|)$. This indicates that the computational redundancy incurred by the data redundancy\nis generally avoided by TOC matrix execution algorithms. Thus, theoretically speaking, TOC matrix execution algorithms have good performance when there are many data redundancies.\n\nFor $A.+c$, we need to decompress \\textbf{A} first. Similar to LZW, the decompression of TOC is linear in the sense that each element has to be outputted and the cost of\neach output element is constant. Thus, the complexity of decompressing \\textbf{A} is $\\mathcal{O}(|\\mathbf{A}|)$. Overall, the complexity of $A.+c$ is also $\\mathcal{O}(|\\mathbf{A}|)$.\n\n\n\n\n\\subsection{Shared Operators}\n\\input{decoding\/shared_operations}\n\n\n\\input{decoding\/la_operations}\n\n\\input{decoding\/computational_complexity}\n\n\n\\section{Discussion}\n{\\bf Redundancy in data}: \nSome datasets e.g. [] have little redundancy inside and cannot be efficiently compressed using TOC compression. Nevertheless, we have tested other compression methods for example [] and found that dataset like [] cannot be compressed efficiently using these compression methods either. In this case, using the uncompressed data as the input seems to be the reasonable way to do SGD.\\par\n\n{\\bf BGD vs. SGD}:\nAlthough TOC compression is proposed aiming at performance optimization during SGD in machine learning, it is also suitable for BGD. In fact, the performance improvement is more significant in the BGD case because there is more redundancy if the training dataset is not sliced into mini-batches.\\par\n\n{\\bf Multi-layer Neural Network}:\nIn our experiment sec X, we use a neural network with only one hidden layer. Given the proliferation of deep learning, it is worthwhile to consider how to apply TOC compression in deep neural network, which has many hidden layers. Although TOC compression and its corresponding matrix operation accelerations can only be applied to the first hidden layer in this case, it is still beneficial to apply TOC because it significantly reduces the memory footprint required by the data and the I\/O time needed. How to optimize the computations in the middle layers can be an interesting future work. In addition, our technique can be applied to other variants of neural network, for example convolutional neural network, recurrent neural network, etc.\\par\n\n\n\n\n\\subsubsection{Prefix Tree Structure and APIs}\n\\label{sec:api}\nEach node of the prefix tree has an index. Except for the root node, each node stores a column\\_index:value pair as its key. Each node also represents a sequence of column\\_index:value pairs,\nwhich are obtained by concatenating the keys from the prefix tree root to the node itself. 
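As a minimal illustration of this structure (a sketch in Python with our own names, not the paper's implementation), a node only needs its index, its key, and a link to its parent; the sequence it represents is recovered by walking up to the root:

\\begin{verbatim}
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PrefixTreeNode:
    index: int                         # unique node index
    key: Optional[str] = None          # a column_index:value pair; None for the root
    parent: Optional['PrefixTreeNode'] = None
    children: dict = field(default_factory=dict)   # child key -> child node

def sequence(node: PrefixTreeNode) -> list:
    # Concatenate the keys on the path from the root down to this node.
    keys = []
    while node is not None and node.key is not None:
        keys.append(node.key)
        node = node.parent
    return list(reversed(keys))
\\end{verbatim}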
For example, in the prefix tree \\textbf{C} in Figure~\\ref{fig:encoding-overview}, the left bottom tree node has index 9, \nstores key 3:3, and represents the sequence of column\\_index:value pairs [1:1.1, 2:2, 3:3].\n\n\nThe prefix tree supports two main APIs: \\textbf{AddNode} and \\textbf{GetIndex}.\n\n\\begin{packeditems}\n\\item $n' = \\textbf{AddNode}(n, k)$. This API creates a new prefix tree node which has key $k$ and is \na child of the tree node with index $n$. It also returns the index of the newly created tree node in $n'$, which is\nassigned from a sequence number starting from 0.\n\n\\item $n' = \\textbf{GetIndex}(n, k)$. This API looks up the tree node which has key $k$ and is a child of the tree node with index $n$.\nIt also returns the index of the found tree node in $n'$. If there is no such node, it returns -1.\n\\end{packeditems}\n\nThe implementation of \\textbf{AddNode} is straightforward. The implementation of \\textbf{GetIndex} is more involved, and we use a standard technique reported in~\\cite{blelloch2001introduction}. In essence, for each tree node, we create a hash map mapping from its child node \nkeys to its child node indexes.\n\n\\subsubsection{Prefix Tree Encoding Algorithm}\n\\label{sec:toc_alg}\nOur prefix tree encoding algorithm encodes the sparse encoded table (e.g., \\textbf{B} in Figure~\\ref{fig:encoding-overview})\nto an encoded table (e.g., \\textbf{D} in Figure~\\ref{fig:encoding-overview}). During the encoding\nprocess, we build a prefix tree (e.g., \\textbf{C} in Figure~\\ref{fig:encoding-overview}) and each original tuple is encoded as a vector of indexes pointing to prefix tree nodes.\nAlgorithm~\\ref{alg:prefix_tree_encoding} presents the pseudo-code of the algorithm. Figure~\\ref{fig:encoding-overview} presents a running example of executing the algorithm and encoding table \\textbf{B} to table \\textbf{D}.\n\nThe prefix tree encoding algorithm has two main phases. In phase \\rom{1} (line 5 to line 8 of \nAlgorithm~\\ref{alg:prefix_tree_encoding}), we initialize the prefix tree\nwith all the unique ~column\\_index:value pairs in the sparse encoded table as the children of the root \nnode. \n\nIn phase \\rom{2} (line 9 to line 17 of Algorithm~\\ref{alg:prefix_tree_encoding}), we leverage the repeated sequences of the tuples so that the same sequence, for example\nR2 and R4 in Figure~\\ref{fig:encoding-overview} both have the sequence [1:1, 2:2], is encoded as the same index to the prefix tree node.\nAt its heart, we scan all the tuples to detect if part of the tuple can match a sequence that already exists in the prefix tree and build up the prefix tree along the way. We use the function \\textsc{LongestMatchFromTree} in Algorithm~\\ref{alg:prefix_tree_encoding}, to find the longest sequence in the prefix tree that matches\nthe sequence in the tuple $\\textbf{t}$ starting from the position $i$. The function returns the tree node index of the longest match in $n$, and the next matching starting position in $j$. If $j \\neq len(\\textbf{t})$, we add a new\nnode to the prefix tree which is the child of the tree node with index $n$ and has key $\\textbf{t}[j]$ to capture this new sequence in the tuple $\\textbf{t}$. In this way, later tuples can leverage this new sequence. Note that the longest match found is at least of length one because \nwe store all the unique column\\_index:value pairs as the children of the root node in phase \\rom{1}. 
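To make the two phases concrete, the following is a compact Python sketch of the encoder. It is illustrative only: the function and variable names, and the representation of \\textbf{C} as a dictionary of (parent index, key) edges, are ours rather than the paper's implementation.

\\begin{verbatim}
def prefix_tree_encode(B):
    # B: list of tuples; each tuple is a list of column_index:value pairs.
    # Returns (I, D): first-layer keys of the prefix tree and the encoded table.
    children = {}                  # (parent index, key) -> child index; root is 0
    next_index = 1

    def get_index(parent, key):
        return children.get((parent, key), -1)

    def add_node(parent, key):
        nonlocal next_index
        children[(parent, key)] = next_index
        next_index += 1

    I, D = [], []
    # Phase I: one child of the root per unique column_index:value pair.
    for t in B:
        for pair in t:
            if get_index(0, pair) == -1:
                add_node(0, pair)
                I.append(pair)

    # Phase II: encode each tuple by repeatedly taking the longest match.
    for t in B:
        codes, i = [], 0
        while i < len(t):
            n, j = get_index(0, t[i]), i + 1       # match the first element
            while j < len(t) and get_index(n, t[j]) != -1:
                n, j = get_index(n, t[j]), j + 1   # extend the match
            codes.append(n)
            if j < len(t):
                add_node(n, t[j])                  # grow the tree for later tuples
            i = j
        D.append(codes)
    return I, D
\\end{verbatim}

Running this sketch on the four tuples of \\textbf{B} in Figure~\\ref{fig:encoding-overview} reproduces the appended codes and the newly added tree nodes 6--10 listed in Table~\\ref{tab:encoding_example}.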
Table~\\ref{tab:encoding_example} gives a running example of executing Algorithm~\\ref{alg:prefix_tree_encoding} on table \\textbf{B} in Figure~\\ref{fig:encoding-overview}.\n\n\nOur prefix tree encoding and LZW are both linear algorithms in the sense that each\ninput unit is read at most twice and the operation on each input unit is constant. So the time complexity of Algorithm~\\ref{alg:prefix_tree_encoding} is $\\mathcal{O}(|\\textbf{B}|)$, where $|\\textbf{B}|$ is the number of column\\_index:value pairs in the sparse encoded table \\textbf{B}.\n\n\\begin{algorithm}\n\\caption{Prefix Tree Encoding Algorithm} \\label{alg:prefix_tree_encoding}\n\\begin{algorithmic}[1]\n\\Function{PrefixTreeEncode}{\\textbf{B}}\n \\State \\textbf{inputs:} sparse encoded table \\textbf{B}\n \\State \\textbf{outputs:} column\\_index:value pairs in the first layer\n \\Statex of the prefix tree \\textbf{I} and encoded table \\textbf{D}\n \\State Initialize \\textbf{C} with a root node with index 0.\n \\For {each tuple \\textbf{t} in \\textbf{B}} \\Comment{phase \\rom{1}: initialization}\n \t\\For {each column\\_index:value pair \\textbf{t}[$i$] in \\textbf{t}}\n \t\\If{\\textbf{C.GetIndex}($0, \\textbf{t}[i]$) = -1}\n \t\\State \\textbf{C.AddNode}($0, \\textbf{t}[i]$)\n \\EndIf\n \\EndFor\n \\EndFor\n \n \\For{each tuple \\textbf{t} in \\textbf{B}} \\Comment{phase \\rom{2}: encoding}\n \t\\State $i \\gets 0$ \\Comment{set the matching starting position}\n \\State $\\textbf{D[t]} \\gets []$ \\Comment{initialize as an empty vector}\n \\While {$i <$ len(\\textbf{t})}\n \t\\State $(n, j) \\gets $ \\Call{LongestMatchFromTree}{\\textbf{t}, $i$, \\textbf{C}}\n \\State \\textbf{D[t]}.append($n$)\n \\If {$j <$ len(\\textbf{t})}\n \t\\State \\textbf{C.AddNode}($n$, \\textbf{t}[$j$])\n \\EndIf\n \\State $i \\gets j$ \n \\EndWhile\n \\EndFor\n \\State \\textbf{I} $\\gets$ first\\_layer(\\textbf{C})\n \\State \\textbf{return(I, D)}\n\\EndFunction\n\\State\n\\Function{LongestMatchFromTree}{\\textbf{t}, $i$, \\textbf{C}}\n\t\\State \\textbf{inputs:} input tuple $\\textbf{t}$, matching starting position $i$\n \\Statex in \\textbf{t}, and prefix tree \\textbf{C}\n \\State \\textbf{outputs:} index of the tree node of the longest match\n \\Statex $n$ and next matching starting position $j$\n \\State $j \\gets i$\n \\State $n' \\gets $ \\textbf{C.GetIndex}($0$, \\textbf{t}[$j$]) \\Comment{matching 1st element}\n \\Do\n \t\\State $n \\gets n'$\n \\State $j \\gets j + 1$ \\Comment{try matching the next element}\n \t\\If {$j <$ len\\textbf{(t)}}\n \t\\State $n' \\gets $ \\textbf{C.GetIndex}($n$, \\textbf{t}[$j$]) \\Comment {return -1 if such a tree node does not exist}\n \\Else\n \t\\State $n' \\gets -1$ \\Comment{reaching the end of tuple \\textbf{t}}\n \\EndIf\n \\doWhile {$n' \\neq -1$}\n\t\\State \\textbf{return($n$, $j$)}\n\\EndFunction\n \n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{table}[bth]\n \\vspace{1mm}\n \\caption{\n We show the steps of running Algorithm~\\ref{alg:prefix_tree_encoding}\n on table \\textbf{B} in Figure~\\ref{fig:encoding-overview}.\n We omit the phase \\rom{1} of the algorithm, which initializes the prefix tree with nodes:\n 1 $\\rightarrow$ [1:1.1], 2 $\\rightarrow$ [2:2], 3 $\\rightarrow$ [3:3], 4 $\\rightarrow$ [4:1.4], and 5 $\\rightarrow$ [2:1.1],\n where the left side of the arrow is the tree node index and the right side of the arrow is the sequence of column\\_index:value pairs represented by the tree node.\n Each entry here illustrates an iteration of the while loop\n (line 12 - line 17) 
of Algorithm~\\ref{alg:prefix_tree_encoding}.\n Column $i$ is the starting position of the tuple that\n we try to match the sequence in the prefix tree. Column \\textbf{LMFromTree} shows the index and the\n corresponding sequence of the found longest match by the function \\textsc{LongestMatchFromTree}.\n Column \\textbf{App} is the appended tree node index for encoding the tuples in table \\textbf{B}.\n Column \\textbf{AddNode} shows the index and the corresponding sequence of the newly added tree node.}\n\n\n\n\n\n\n\\label{tab:encoding_example}\n\n \\centering\n \n \\begin{tabular}{|c||c|c|c|c|}\n \\hline\n & $i$ & \\textbf{LMFromTree} & \\textbf{App} & \\textbf{AddNode} \\\\\n \\hline\n \\hline\n \\multirow{4}{*}{\\textbf{R1}} & 0 & 1 $\\rightarrow$ [1:1.1] & 1 & 6 $\\rightarrow$ [1:1.1, 2:2] \\\\\n \\cline{2-5}\n & 1 & 2 $\\rightarrow$ [2:2] & 2 & 7 $\\rightarrow$ [2:2, 3:3] \\\\\n \\cline{2-5}\n & 2 & 3 $\\rightarrow$ [3:3] & 3 & 8 $\\rightarrow$ [3:3, 4:1.4] \\\\\n \\cline{2-5}\n & 3 & 4 $\\rightarrow$ [4:1.4] & 4 & NOT called \\\\\n \\hline\n \\multirow{2}{*}{\\textbf{R2}} & 0 & 6 $\\rightarrow$ [1:1.1, 2:2] & 6 & 9 $\\rightarrow$ [1:1.1, 2:2, 3:3] \\\\\n \\cline{2-5}\n & 2 & 3 $\\rightarrow$ [3:3] & 3 & NOT called \\\\\n \\hline\n \\multirow{2}{*}{\\textbf{R3}} & 0 & 5 $\\rightarrow$ [2:1.1] & 5 & 10 $\\rightarrow$ [2:1.1, 3:3] \\\\\n \\cline{2-5}\n & 1 & 8 $\\rightarrow$ [3:3, 4:1.4] & 8 & NOT called \\\\\n \\hline\n \\textbf{R4} & 0 & 6 $\\rightarrow$ [1:1.1, 2:2] & 6 & NOT called \\\\\n \\hline\n \\end{tabular}\n\n\\vspace{-3mm}\n\\end{table}\n\n\\subsubsection{Comparisons with Lempel-Ziv-Welch (LZW)}\n\\label{sec:lzw_comparisons}\n Our prefix tree encoding algorithm is inspired by the classical compression scheme LZW. However, a key difference between LZW and our algorithm is that we preserve the row and column boundaries in the underlying tabular data, which is crucial to directly operate matrix operations on the compressed representation. For examples, our algorithm encodes each tuple separately (although the dictionary is shared) to respect the row boundaries, and the compression unit is a column\\_index:value pair to\nrespect the column boundaries. In contrast, LZW simply encodes a blob of bytes without preserving any structure information. The reason for that is LZW was invented primarily for\nstring\/text compression. There are other several noticeable differences between our algorithm and LZW, which are summarized in Table~\\ref{tab:differences}.\n\n\\begin{table}[th!]\n \\vspace{1mm}\n \\caption{Differences between LZW and our prefix tree encoding. 
c-v stands for column-index:value.}\n \\vspace{-1mm}\n \\label{tab:differences}\n \\centering\n \\begin{tabular}{|c|c|c|}\n \\hline\n & \\textbf{LZW} & \\textbf{Ours} \\\\\n \\hline\n \\hline\n \\textbf{Input} & bytes & sparse encoded table \\\\\n \\hline\n \\textbf{Encode unit} & 8 bits & c-v pair \\\\\n \\hline\n \\textbf{Tree init.} & all values of 8 bits & all unique c-v pairs \\\\\n \\hline\n \\textbf{Tuple bound.} & lost & preserved \\\\\n \\hline\n \\textbf{Output} & a vector of codes & encoded table \\& prefix tree first layer\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\\vspace{1mm}\n\n\n\\subsection{Logical Encoding} \\label{sec:logical_encoding}\n\\input{encoding\/logical_encoding}\n\n\\subsection{Physical Encoding} \\label{sec:physical-encoding}\n\\input{encoding\/physical_encoding}\n\n\n\n\n\n\\subsection{Compression Ratios} \\label{compression_ratios}\n\\vspace{1mm}\n\\input{experiments\/compression_ratios}\n\n\n\\subsection{Matrix Operation Runtimes} \\label{matrix_operations}\n\\vspace{1mm}\n\\input{experiments\/la_operations}\n\n\\subsection{End-to-End MGD Runtimes} \\label{sec:mgd_runtimes}\n\\vspace{1mm}\n\\input{experiments\/mgd_runtimes}\n\n\\subsection{Compression and Decompression Runtimes} \\label{compression_and_decompression_runtimes}\n\\vspace{1mm}\n\\input{experiments\/compression_and_decompression_runtimes}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{intro}\n\n\\section{Background}\\label{sec:preliminaries}\n\\input{background}\n\n\\section{Tuple-oriented Compression}\\label{sec:encoding}\n\\input{encoding}\n\n\\section{Matrix Operation Execution} \\label{sec:la}\n\\input{decoding}\n\n\\section{Experiments} \\label{sec:expts}\n\\input{experiments}\n\n\\section{Discussion} \\label{sec:discuss}\n\\input{discussion}\n\n\\section{Related Work} \\label{sec:related}\n\\input{related}\n\n\\section{Conclusion and Future Work} \\label{sec:conclusion}\n\\input{conclusion}\n\n\\section*{Acknowledgments}\nWe thank all the anonymous reviewers. This work was partially supported by a gift from Google\n\n\\bibliographystyle{abbrv}\n\n\\subsection{A Paradigm for Accelerating Machine Learning using Coding}\n\\label{sec:overview:paradigm}\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.6\\columnwidth]{plots\/overview_plot.pdf}\n\\captionof{figure} {An illustration of our paradigm for using coding to accelerate ML algorithms.}\n\\label{fig:overview}\n\\end{figure}\n\nWe first explain our paradigm for accelerating ML algorithms using coding.\nCommon with such accelerations for low-level data processing, our goal is to \\textit{encode} the input data\nand then modify the computations to operate directly over the {\\em encoded data}.\nTypically, for ML algorithms, the structure of computations is more complicated\nthan that of the low-level data processing operations. For most ML techniques,\nwe have a loss function that is determined by the model structure\n(for example, the logistic loss for logistic regression), the model parameters,\nand the input data points. These pieces jointly determine the values of the loss function\nfor date points in the training set. 
Then, an optimizer updates the model (typically, iteratively)\nbased on the loss until a convergence criterion is met.\nIn the end, the final model parameters are output.\n\nFigure~\\ref{fig:overview} illustrates our paradigm: the codewords, instead of the raw data input, \nare used as inputs to an ML algorithm along with its model parameters and model structure.\nThen, an optimization algorithm iteratively optimizes the model.\nFrom this paradigm, we can see that a key challenge is that the effectiveness of the encoding in \nimproving runtime performance depends on {\\em both the model structure (most importantly, how \nthe loss function is defined)} and {\\em how the optimization algorithm updates the model during \nthe evaluation steps of the loss function}.\n\n\\subsection{Running Example: Lloyd's Algorithm}\n\\label{sec:overview:lloyd}\n\nWe use the basic version of Lloyd's algorithm for k-means to illustrate our ideas.\nGiven a training set $S$ of $m$ data points in a $d$-dimensional space, this algorithm obtains $k$ \nclusters under the $L_2$-distance metric. Note that Lloyd's algorithm is iterative. There are \nthree main computations in each iteration. First, given a set of clusters $C_1, C_2,\\cdots, C_k$ \nand their corresponding centroids $w_1, \\dots, w_k$, we compute the \\textit{loss function}\nfor each $x$ and each $w_i$:\n\\begin{align}\n \\label{eq:k-means-loss}\n \\ell(w_i, x) = \\|w_i - x\\|_2^2 = \\sum_{l=1}^d(w_{i,l} - x_l)^2,\n\\end{align}\nHere, $w_{i,l}$ is the $l$-th component of centroid $w_i$, and $x_l$ is the $l$-th component (feature) of $x$. \nSecond, each data point is assigned to the cluster $C_j: j \\in \\arg\\min_{i} \\ell(w_i, x), i = 1, \\dots, k$. \nThis is known as the {\\em assignment step} for $x$; we assign each $x$ to its ``best current'' cluster. \nFinally, $w_i$ is \\textit{updated} as $w_i = \\sum_{x \\in C_i}{x} \/ |C_i| $. \nOverall, $O(mkd)$ arithmetic operations are performed per iteration.\n\n\\eat{\nOur first observation is that, while the above description of the assignment step\ndescribes computations in a row-wise access pattern \n(i.e. we evaluate $\\ell(w_i, x)$ for a {\\em fixed} $x$ and over all $w_i$), we can also view this computation\nfrom a column-wise, perspective, and equivalently view the computation as {\\em fixing a single $w$ in $\\{w_1, \\dots, w_k\\}$,\n and computing $\\ell(w, x_j)$ for every $x_j$ ($j=1, \\dots, m$) in $S$.}\n}\n\n\\noindent\\textbf{Opportunity I: Exploiting ``Data Redundancy'' in the Training Data}.\nWe now arrive at our first key observation:\n\\textit{in many datasets, there are often a lot of ``data redundancies'' among the tuples}. 
\nIn other words, the feature vectors of two examples, $x_i$ and $x_j$ may share values.\nSince the computation of $\\|w - x_j\\|^2$ can be {\\em decomposed} into computations over individual \ncomponents (Equation (\\ref{eq:k-means-loss})), this implies that we can effectively leverage \nsuch data redundancy among the tuples in the training set to avoid {\\em redundant computations}.\nMore specifically, given a coding scheme with encoding function $enc$ and decoding \nfunction $dec$, we define the {\\em encoded loss function} as follows:\n\n\\begin{definition}[Encoded Loss Function]\n Given a coding scheme, the encoded loss function $\\widehat{\\ell}$,\n which receives two parameters, a hypothesis $w$ and a codeword $enc(x)$ of $x$, is defined as\n $\\widehat{\\ell}(w, enc(x)) = \\ell(w, x)$.\n\\end{definition}\n\nNow, suppose that we use a coding scheme to encode the training set $S$ and that all data \npoints can be encoded using only a small universe of $t$ words, $U = \\{c_1, \\dots, c_t\\}$, \nwhere $t \\ll m$. In other words, for every $j$, $enc(x_j)$ can be represented as a tuple \n$\\langle c^j_{i_1}, c^j_{i_2}, \\dots, c^j_{i_{s}} \\rangle$:\nfor some $i_1, \\dots, i_s \\in \\{1, \\dots, t\\}$.\nThen, {\\em $\\widehat{\\ell}(w, enc(x))$ can be computed without decoding.}\nBy remembering the squared $L_2$-distance between $c^j_{i}$ and the \ncorresponding component of $w$ in a {\\em cache table} $T$,\nwe can compute $\\widehat{\\ell}(w, enc(x))$, which is $\\ell(w, x)$,\nusing $s$ queries to $T$. Note that $T$ is of size $kt$\nwhere we remember for every $w_i$ and every $c_s$,\nthe squared $L_2$-distance between $c_s$ and the corresponding parts in $w$.\nFilling $T$ takes at most $O(tkd)$ arithmetic operations.\nNote that $tkd \\ll mkd$ as $t \\ll m$.\n\nSince the {\\em average} number of words $d'$ to encode $x_j$ is much smaller than $d$\n(i.e. $d' \\ll d$), we can compute {\\em all} $\\ell(w_i, x_j)$,\nusing $\\widehat{\\ell}(w_i, enc(x_j))$ and $T$,\nwith a total of at most $O(tkd + mkd')$ arithmetic operations.\nThis is potentially much smaller than $O(mkd)$ as $t \\ll m$ and $d' \\ll d$.\n\n\\noindent\\textbf{Opportunity II: Exploiting ``Redundancies'' in the Codeword Universe}.\nUntil now, our discussion applies to {\\em any} source coding scheme. However, we note that\nfor the special class of {\\em prefix source coding}, such as LZW,\nwe can exploit an additional opportunity. Specifically, for prefix codes,\nthe universe $U = \\{c_1, \\dots, c_t\\}$ has the essential property that for $c, c' \\in U$,\n{\\em $c$ can be a prefix of $c'$}.\nFurthermore, this prefix relationship is encoded in the prefix tree\nwhere each tree node corresponds to a word $c \\in U$, and there is a tree edge from $c$ to $c'$\nif $c$ is a prefix of $c'$. 
This observation leads to the second opportunity for acceleration:\n{\\em We can fill in the cache table $T$ faster than filling them separately.}\nSpecifically, once we compute the distance between $c$ and $w$,\nwe can {\\em reuse} the result to compute the distance between $c'$ and $w$,\ninstead of redoing the arithmetic operations on the prefix $c$.\nAs a result, one can fill the table $T$ by {\\em following the topological order,\n that is level by level, of the prefix tree}, exploiting the prefix redundancy.\nAs a result, one can aim to fill in $T$ with a total number of $N \\ll O(tkd)$ arithmetic operations.\nThis gives the final arithmetic complexity $O(N + mkd')$, combining both opportunities,\nwhich satisfies that $N + mkd' \\ll tkd + mkd' \\ll mkd$.\n\n\\subsection{Generalizations}\n\\label{sec:overview:generalizations}\nWe now extend the ideas from the last subsection to GLMs.\n\n\\noindent\\textbf{Unification.} We begin by unifying the above two opportunities for the Lloyd's algorithm.\nWe have two observations: (1) In both cases, the {\\em core task} is to evaluate a certain function $f$\nover a dataset $D = \\{z_1, \\dots, z_n\\}$. In the first case, $D$ is the training set $S$,\nand in the second case $D$ is the universe of the coding scheme $U$.\n(2) Suppose that the coding scheme has encoding function $enc$ and decoding function $dec$.\nObserve that what is common to both opportunities is that $f$ is ``decomposable'' in the sense that\nfor any $z \\in D$, if $z = z_1 \\circ z_2$ (where $\\circ$ is the concatenation operator),\nthen $f(z) = F(f(z_1), f(z_2))$ where $F$ is some combining function.\nIn the Lloyd's algorithm, for both opportunities, $f$ is the squared $L_2$-distance function $f(\\cdot) = \\ell(w, \\cdot)$\n(strictly speaking, one should understand this as follows:\nif $enc(x)=\\langle c_1,\\dots,c_s \\rangle$, then $\\ell(w, c_1)$ is defined as\nthe squared 2-norm distance between $dec(c_1)$ and {\\em corresponding components} in $w$).\nNote that we can replace the $L_2$-distance function by any other common distance function,\nsuch as the $L_p$-distance, and the same argument goes through.\n\nNow, with both (1) and (2), if there is significant data redundancy in $D$,\nwe can hope to accelerate $f$ on $D$ using our cache table.\nIn the first opportunity for Lloyd's algorithm, there is redundancy because $D$ is the entire training set.\nIn the second opportunity, $D$ is $U$ and there is redundancy in $U$ as exposed by the prefix tree.\n\n\\noindent\\textbf{Generalization to GLMs.}\nWe note that common optimization algorithms for training GLMs are based on gradient methods,\nfor example {\\em steepest descent}, {\\em conjugate gradient descent}, or {\\em L-BFGS}.\nFor these optimization algorithms, we are in a similar situation as the Lloyd's algorithm\nby just replacing the {\\em $L_2$-distance} function by the {\\em inner product} function.\nAs a result, the same two opportunities can be exploited for GLMs with these algorithms as well.\n\n\\subsection{Discussion}\n\\label{sec:overview:discussions}\nWe have shown how to exploit the same opportunities to use coding techniques to accelerate \nboth k-means trained with the Lloyd's algorithm and GLMs trained with gradient methods. 
This is intriguing, \nand somewhat surprising, because k-means and GLMs are superficially quite different ML techniques.\nWe are encouraged by this result, as to the best of our knowledge, this is the first case to expose the \nopportunities in accelerating multiple ML algorithms using the same coding techniques. Not only do we\nfind that such acceleration is {\\em possible}, but that there are also {\\em common structures} that can be \nexploited in two fundamental and different ML algorithms.\n\nIt is natural then to wonder if our ideas can be extended to more complicated model structures and\nother optimization algorithms. For example, one could consider kernel methods\nwith appropriate convex optimization algorithms.\nEven more ambitiously, one could consider deep convolutional neural networks trained\nwith stochastic gradient descent. We are currently exploring these ideas and\nwhile we do not have answers to these other models yet, we suspect there could be similar\n\\textit{structures} in those models too that can be exploited using coding techniques.\nFor example, in a convolutional neural network, this might be possible\nas the same small kernel is used in all sliding windows in one convolutional layer\nand the input data also has high redundancy~\\cite{nielsen15, Goodfellow-et-al-2016, NIPS2012_4824}. \nOverall, we hope our initial ideas and results presented in this paper spur more interest in the community to \nsystematically explore such techniques.\n\n\\noindent\\textbf{Comparisons with Data Sketching}.\nOur discussion so far may remind astute readers about data sketching techniques such as\nGaussian Random Projection~\\cite{bingham2001random}, where we ``compress'' the data into lower dimensions,\nand thus save computation and storage costs. To this end, we have the following points:\n{\\bf (1)} Data sketching techniques are ``data independent''\n(e.g. we apply a random linear transformation $A$ to the data vectors,\nwhere $A$ is sampled independently of the data), while our technique is ``data dependent.''\n{\\bf (2)} Our technique guarantees that\n``exactly the same thing is learned as without our technique.'' Data sketching techniques,\non the other hand, guarantees ``approximate'' utility with respect to a specific measure,\nsuch as distances between points or test accuracy.\nIn that sense, our guarantees of losslessness are stronger.\n{\\bf (3)} Intriguingly, some data sketching techniques can also be applied together with our technique.\nFor example, with a linear transformation $A$, one may hope that $Ax$ and $Ax'$ have similar redundancies\nif $x$ and $x'$ have a lot of redundancies.\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{intro}\n\n\\section{Background}\\label{sec:preliminaries}\n\\input{background}\n\n\\section{Tuple-oriented Compression}\\label{sec:encoding}\n\\input{encoding}\n\n\\section{Matrix Operation Execution} \\label{sec:la}\n\\input{decoding}\n\n\\section{Experiments} \\label{sec:expts}\n\\input{experiments}\n\n\\section{Discussion} \\label{sec:discuss}\n\\input{discussion}\n\n\\section{Related Work} \\label{sec:related}\n\\input{related}\n\n\\section{Conclusion and Future Work} \\label{sec:conclusion}\n\\input{conclusion}\n\n\\section*{Acknowledgments}\nWe thank all the anonymous reviewers. 
This work was partially supported by a gift from Google\n\n{\\small\n \\bibliographystyle{abbrv}\n \n\n\\subsection{Basic TOC-Learning}\n\\vspace{1mm}\n\\noindent \\textbf{TOC-Lloyd.}\nOne iteration of TOC-Lloyd is presented in Algorithm~\\ref{alg:kmeans-decoding}. At each iteration, we pre-compute the pair-wise squared distances between the dictionary entries and centroids and store them in the 2-dimensional array \\textbf{B}. Note that the starting index for each dictionary entry is used to align the features of the dictionary entry with the correct feature dimensions in the centroids. The notation $c_{|e.values}$ means that only the dimensions corresponding to the features present in $e.values$ are used. The partial distances are then used while computing the full distances between the data points and centroids. The array $\\textbf{DC}$ tracks how many times each dictionary entry needs to be added (equivalent to adding the examples directly) when computing the new centroids. Thus, TOC-Lloyd might avoid a lot of computational redundancies in both the distance computations and centroid updates.\n\n\n\\vspace{1mm}\n\\noindent \\textbf{TOC-GLM.}\nOne iteration of TOC-GLM with gradient descent is presented in Algorithm~\\ref{alg:Compress_LogisticRegression}. At each iteration, we pre-compute the partial inner products between the dictionary entries and the weights and store them in array \\textbf{B}. Again, the notation $w_{|e.values}$ means only the dimensions corresponding to the features present in $e.values$ are used. Then, the gradient is computed in one pass over the compressed data, reusing the pre-computed inner-products. We also aggregate the scalar values after applying the scalar function $g$ for each dictionary entry and update the weights by using each dictionary entry once. Recall that $g(a,b)$ is GLM-specific, e.g., for logistic regression it is $-b\/(1+exp(ab))$, while for linear regression, it is $(a-b)$. Overall, TOC-GLM might avoid a lot of computational redundancy in the inner product computations and weight updates.\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \\Input{Compressed \\textbf{T'}, Dict.~\\textbf{D}, Centroids \\textbf{C}}\n \/\/Initialize new centroids \\textbf{C'}, \\textbf{B}, \\textbf{DC} \\\\\n \\For {each $e$ in \\textbf{D}} { \n \\For {each $c$ in \\textbf{C}} {\n $\\textbf{B}[e.code, c.CID] = l_2(e.values - c_{|e.values})^2$ \\\\\n }\n }\n \\For {each $t$ in \\textbf{T'}} {\n \\For {each $c$ in \\textbf{C}} {\n $s_c$ = 0 \\\\\n \\For {each $e$ in $t$} {\n $s_c$ += $\\textbf{B}[e.code, c.CID]$\n }\n }\n $\\hat{c}$ = $\\textnormal{argmin}_{c \\in \\textbf{C}} s_c$\\\\\n $Count[\\hat{c}.CID] += 1$\\\\\n \\For {each $e$ in $t$} {\n $\\textbf{DC}[e.code, \\hat{c}.CID]$ += 1\n }\n }\n \\For {each $e$ in \\textbf{D}} { \n \\For {each $c$ in \\textbf{C'}} {\n $c_{|e.values}$ += $e.values \\times \\textbf{DC}[e.code, c.CID]$\n }\n }\n \\For {each $c$ in \\textbf{C'}} { \n $c.x = c.x \/ Count[c.CID]$\n }\n\\caption{One Iteration of TOC-Lloyd}\n\\label{alg:kmeans-decoding}\n\\end{algorithm}\n\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \\Input{Compressed \\textbf{T'}, Dict. 
\\textbf{D}, Weights $w$, Labels $Y$}\n \/\/Initialize new gradient $G$, \\textbf{B}, \\textbf{DW} \\\\\n \\For {each $e$ in \\textbf{D}} {\n $\\textbf{B}[e.code] = w_{|e.values}' e.values $ \\\\\n }\n \\For {each $t$ in \\textbf{T'} \\textnormal{and corresponding} $y$ in \\textbf{Y} } {\n $z = 0$\\\\\n \\For {each $e$ in $t$} {\n $z$ += $\\textbf{B}[e.code]$\n }\n \n $z = g(z, y)$ \\hfill \/\/GLM-specific gradient function $g$\\\\\n \\For {each $e$ in $t$} {\n $\\textbf{DW}[e.code]$ += $z$\n }\n }\n \\For {each $e$ in \\textbf{D}} {\n $G_{|e.values}$ += $e.values \\times \\textbf{DW}[e.code]$\n }\n\\caption{One Iteration of TOC-GLM}\n\\label{alg:Compress_LogisticRegression}\n\\end{algorithm}\n\n\n\\subsection{Exploiting Redundancies in Codewords Universe}\n\\noindent \\textbf{Efficient Precomputing}.\nWe now explain how we efficiently precompute the distances\/inner-products between dictionary entries and centroids\/weights. \nThese precomputations happen in lines 2-4 in Algorithm~\\ref{alg:kmeans-decoding} and lines 2-3 in Algorithm~\\ref{alg:Compress_LogisticRegression}.\nThere are two structures that we leverage to achieve this: the prefix tree structure of the dictionary in TOC, and the structure of the function we evaluate on the dictionary entries.\n\n\\noindent \\textbf{Structure I:}\nTOC stores the dictionary entries in a prefix tree. That is, a dictionary entry $(start\\_idx, [x_0, x_1, ..., x_n])$ will be stored as a path of nodes starting from root. Note that $start\\_idx$ will also be stored as a tree node (with special indicating tag).\n\n\\noindent \\textbf{Structure II:} The functions $dist$\/$weighted\\_sum$ over dictionary entries for Lloyd\/GLM respectively are decomposable. Let $X$ be a dictionary entry, $X'$ be its longest prefix, and $c$ be the last attribute,\n\\begin{equation*} \\label{eq:prefix-reuse}\n dist(X) = dist(X') + dist(c)\n\\end{equation*}\n\\vspace{-4mm}\n\\begin{equation*}\n weighted\\_sum(X) = weighted\\_sum(X') + weighted\\_sum(c)\n\\end{equation*}\n\nThus, we evaluate the dictionary entries in the topological\/breadth-first traversal order of the prefix tree. Each time we evaluate a tree node, we ask for its parent's evaluated value and add it to the evaluation of the single last attribute, which makes computations faster. Our request for the parent's value is never denied because the structure of the prefix tree makes sure its parent must have been evaluated. The detailed algorithm is shown in Algorithm \\ref{alg:dictionary_calculations}.\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \/\/ Step1: This is pre-computing. 
Once per ML \\\\\n \\Input{The root of the prefix tree during the compression stage of LZW}\n\ttree-node-list = [root] \\\\\n\tdic-entry-array = [] \\\\\n\tidx = 0 \\\\\n\t\\While {tree-node-list not empty} {\n\t\tnode = tree-node-list.pop\\_front() \\\\\n\t\t\\If {node has been outputed as a code} {\n\t\t\tdict-entry-array[idx] = node \\\\\n\t\t\tidx += 1\n\t\t}\n\t\t\\For {child of node} {\n\t\t\tchild.set\\_prefix\\_idx(idx) \\\\\n\t\t\ttree-node-list.push\\_back(child)\n\t\t}\n\t}\n \/\/ Step2: This is invoked basically once per iteration \\\\\n calculated-values=[] \\\\\n \\For {i = 0 to len(dic-entry-array)} {\n \\eIf {dic-entry-array[i] has prefix\\_idx} {\n calculated-values[i] = calculated-values[prefix\\_idx] + func(last attr value of dic-entry-array[i])\n } {\n calculated-values[i] = func(last attr value of dic-entry-array[i])\n }\n }\n \\caption{Efficiently Precompute Intermediate Results on Dictionary Entries}\n \\label{alg:dictionary_calculations}\n\\end{algorithm}\n\n\n\\noindent \\textbf{Complexity of Our Solution}\nBy plugging $dist$\/$weighted\\_sums$ in $func$ in Algorithm \\ref{alg:dictionary_calculations}, it is easy to compute the complexity of our solution as $O(|D|*k)$ and $O(|D|)$ respectively, where $|D|$ is the dictionary and k is the number of centroids. The naive way to evaluate $dist$\/$weighted\\_sums$ separately for each dictionary entry has complexity $O(|D|*k*\\text{avg. len of dict entries})$ and $O(|D| * \\text{avg. len of dict entries})$. Thus, theoretically speaking, our solution achieves a factor of the average length of dictionary entries speedup.\n\nIt's also important to note that our solution is cache friendly. In the second step of Algorithm \\ref{alg:dictionary_calculations}, the access to calculated-values is consecutive because adjacent families of children are spawned by adjacent parents in the tree.\n\n\\noindent \\textbf{Efficient Updating}. We introduce an efficient method to update the dictionary entries into the centroids\/weight for Lloyd\/GLMs. These updates happen at lines 14-16 in Algorithm~\\ref{alg:kmeans-decoding} and lines 11-12 in Algorithm~\\ref{alg:Compress_LogisticRegression}.\n\n\nWe discover similar structures as in efficient precomputing. That is, the updating operation using a dictionary entry X is ``decomposable'' and can be done by updating using the last attribute of X, and then postponing and aggregating the remaining attributes as a whole to the parent of X in the prefix tree.\n\nThus, we update machine learning models using the dictionary entries layer by layer from bottom to top in the prefix tree. For each entry, we only update using the last attribute and postpone\/aggregate the remaining part of attributes to its parent. In this process, we only need to update each entry once because the structure of the prefix tree makes sure all the parents of the entry must have been updated beforehand. Algorithm \\ref{alg:efficient_updating} shows the details of our solution.\n\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \/\/ Step1: We reuse the dic-entry-array in Algorithm \\ref{alg:dictionary_calculations}. 
\\\\\n \/\/ Step2: This is invoked basically once per iteration \\\\\n update-scalar-vec \/\/ this should be pre-poluated by ML models \\\\\n \\For {i = dic-entry-array.end to head} {\n \\If {dic-entry-array[i] has prefix\\_idx} {\n update-scalar-vec[prefix\\_idx] += update-scalar-vec[i]\n }\n \\textbf{update}(last attr value of dic-entry-array[i], update-scalar-vec[i])\n }\n \\caption{Efficiently Update Machine Learning Models Using Dictionary Entries}\n \\label{alg:efficient_updating}\n\\end{algorithm}\n\nBy plugging updating functions for Lloyd\/GLM in \\textbf{update} in Algorithm \\ref{alg:efficient_updating}, the complexity of our updating algorithm is $O(|D|*k)$ and $(|D|)$ for Lloyd\/GLM respectively, where $|D|$ is the dictionary and k is the number of centroids. On the other hand, the naive way to update the ML models using each dictionary entry separately has complexity $O(|D|*k*\\text{avg. len of dict entries})$ and $O(|D|*\\text{avg. len of dict entries})$ for Lloyd and GLM. Thus, again, our method could be the average length of dictionary entries times faster in theory.\n\n\\eat{\nA key insight that we observe is if X exists in the dictionary, then all its prefixes\/dependencies must also exist in the dictionary. This is formally stated as the following theorem.\n\\begin{theorem}\nSuppose there is a dictionary entry \n\n$X=[start\\_idx, x_0, x_1, ..., x_n]$, then all the prefixes of $X$, \n\n$[start\\_idx, x_0]$, $[start\\_idx, x_0, x_1]$, ..., \n\n$[start\\_idx, x_0, x_1, ..., x_{n-1}]$ must also exist in the dictionary.\n\\end{theorem}\n\\begin{proof}\nWe assume that the statement is false, then there must be a longest prefix $P$ of $X$ that doesn't exist in the dictionary. Denote $P$ as $[start\\_idx, x_0, x_1, ..., x_p]$, then $[start\\_idx, x_0, x_1, ..., x_p, x_{p+1}]$ must exist in the dictionary. In Algorithm \\ref{alg:encoding}, when $[start\\_idx, x_0, x_1, ..., x_p, x_{p+1}]$ is added to the prefix tree, $P$ must already exist in the prefix tree and is going to be outputed. Thus $P$ is a dictionary. This contradicts with the assumption that $P$ is not in the dictionary. Thus all the prefixes of $X$ must be in the dictionary.\n\\end{proof}\nSince all the prefixes\/dependencies of all the dictionary entries are also dictionary entries, we can solve this dynamic programming problem by evaluating the dictionary entries in the topological order of the dependency graph of the dictionary entries (for example, $[start\\_idx, x_0, x_1, ..., x_n]$ depends on $[start\\_idx, x_0, x_1, ..., x_{n-1}]$).\n}\n\n\\eat{\nWe can easily verify this by examing $dist\/weighted\\_sums$ functions explicitly.\n\\begin{equation}\n\\begin{split}\n dist(X) =\n \\begin{bmatrix}\n l_2(X.values, C_{|X.values}^0)^2 & \\\\\n l_2(X.values, C_{|X.values}^1)^2 & \\\\\n \\vdots \\\\\n l_2(X.values, C_{|X.values}^k)^2 &\n \\end{bmatrix}\n\\end{split}\n\\end{equation}\n\nand\n\n\\begin{equation}\n weighted\\_sums(X) = W_{|X.values}' X.values\n\\end{equation}\n, where $C_{|X.values}^i$ denotes the corresponding values in ith centroids for X.values and $W_{|X.values}$ denotes the corresponding weights for X.values. \n}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Two Departures from LZW} \\label{sec:naive_encoding}\n\nThe first difference is that TOC respects tuple boundaries. Drawing inspiration from LZW, we treat the given training dataset \\textbf{T} as a data file with the data serialized in row-major (tuple-oriented) order. 
However, as a key departure from LZW and text compression, we also track and maintain tuple boundaries in \\textbf{T}, which are needed for ML. Thus, we introduce a new coding scheme we call tuple-oriented coding (TOC). Using a shared \\textit{dictionary}, we encode the data file but prevent the encoding from crossing tuple boundaries.\n\nThe second difference is that dictionary entries in TOC are more complicated. Each entry in the dictionary has a \\textit{code number} and a $2$-tuple: $(startIndex, values)$, wherein $values$ is a sequence of feature values that occurs contiguously in some tuple of \\textbf{T} and $startIndex$ is the column index of the first feature in $values$. $startIndex$ is needed to align $values$ with ML model parameters. For example, if the sequence $[c,d]$ of the tuple $[a,b,c,d,e]$ is chosen to be represented as a code, the 2-tuple for it in the dictionary will be $(2, [c,d])$. The code numbers start from $0$ and identify each entry uniquely.\n\n\\subsection{TOC Encoding}\nTOC starts by initializing with one entry for each unique feature value in \\textbf{T}, i.e., sequences of length one each. The encoding algorithm adds new entries to the dictionary on the fly such that each new sequence is obtained by suffixing one feature value to some pre-existing entry's sequence. For example, $(2, [c,d])$ is added only if $(2, [c])$ is already present in the dictionary. Apart from being able to access the entry for a given code number, the dictionary also exposes the following two functions:\n\\begin{packeditems}\n\\item $C'$ = \\textbf{AddDict}(C, v). Create a new dictionary entry by extending the sequence of an existing entry with code number C with v and return the new code number. \nIf C is -1, the added entry's sequence is just v.\n\n\\item $C'$ = \\textbf{GetIndex}(C, v). Return the code number of the sequence obtained by extending the sequence of an existing entry with code number C with v. \nIf no such extended entry exists, return -1.\n\n\\end{packeditems}\nThe encoding scheme is presented in Algorithm~\\ref{alg:encoding}. The first step initializes the dictionary with an entry for each starting index (lines 2-3) and then each entry for each unique feature value in \\textbf{T} (lines 4-7). \nIn the second step, for each tuple, for each feature yet to be encoded, we find the longest match $W$ in the dictionary using the subroutine \\textbf{GetLongestMatch} shown in Algorithm~\\ref{alg:long}. \nOnce we hit a feature $t[i]$ ($i$ is returned in line 12) that cannot be matched, we add a new entry to the dictionary with the sequence $[W, t[i]]$. We then output the code number corresponding to $W$ to represent the compressed portion of the tuple. This process continues till the whole tuple is encoded. 
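As an illustrative sketch of this dictionary bookkeeping (class and field names are ours, not the paper's code), \\textbf{AddDict} and \\textbf{GetIndex} can be backed by a list of $(startIndex, values)$ entries plus a hash map keyed by (code number, value); here a bare starting index $i$ is stored as the entry $(i, [\\,])$, which differs only cosmetically from the $i~(-1, i)$ notation used in Figure~\\ref{fig:encoding_example}.

\\begin{verbatim}
class TOCDictionary:
    # Sketch only: the entry with code number c is stored as self.entries[c].
    def __init__(self):
        self.entries = []   # code number -> (start_index, values)
        self.ext = {}       # (code number, value) -> code of the extended entry

    def add_dict(self, c, v):
        # Extend entry c with v; if c is -1, v is a bare starting index.
        if c == -1:
            start_index, values = v, []
        else:
            start_index, values = self.entries[c]
            values = values + [v]
        self.entries.append((start_index, values))
        new_code = len(self.entries) - 1
        self.ext[(c, v)] = new_code
        return new_code

    def get_index(self, c, v):
        # Code number of entry c extended by v, or -1 if it does not exist.
        return self.ext.get((c, v), -1)
\\end{verbatim}

With the starting indexes added first (code numbers $0$ to $d-1$), the call \\textbf{GetIndex}($i$, t[$i$]) in Algorithm~\\ref{alg:long} aligns the match with column $i$ as described above.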
We denote the encoded table \\textbf{T} as \\textbf{T'}.\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \\Input{Denormalized table \\textbf{T}, Number of features d}\n \/\/Step 1: Initialize dictionary \\\\\n \\For {i = 0; i $<$ d; i++} {\n \\textbf{AddDict}(-1, i) \\hfill \/\/ Add starting indexes \\\\\n }\n \\For {tuple t in \\textbf{T}} {\n \\For{i = 0; i $<$ d; i++} {\n \\If{\\textbf{GetIndex}(i, t[i]) = -1} {\n \\textbf{AddDict}(i, t[i]) \\hfill \/\/ Unique feature values\n }\n }\n }\n \/\/Step 2: Compress \\textbf{T} \\\\\n \\For {tuple t in \\textbf{T}} {\n i = 0 \\\\\n \\While{i $<$ d} {\n (C, i) = \\textbf{GetLongestMatch}(i, t, d) \\\\\n \\If{i $<$ d} {\n \\textbf{AddDict}(C, t[i])\n } \n \\textbf{Output}(C)\n }\n }\n \\caption{TOC Encoding}\n \\label{alg:encoding}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n \\SetKwInOut{Input}{Input}\n \\Input{Starting index i, Tuple t, d}\n C' = \\textbf{GetIndex}(i, t[i]) \\\\\n \\While {C' $\\neq$ -1} {\n (C, i) = (C', i + 1) \\\\\n \\eIf {i $<$ d} {\n C'= \\textbf{GetIndex}(C, t[i])\n } {\n C' = -1\n }\n}\n\\Return{(C, i)}\n\\caption{\\textbf{GetLongestMatch}}\n\\label{alg:long}\n\\end{algorithm}\n\n\\vspace{1mm}\n\\noindent \\textbf{Example.} Figure~\\ref{fig:encoding_example} gives a step-by-step example of how TOC encoding works on a toy dataset with two tuples.\n\n\\begin{figure}[th!]\n\\centering\n \\includegraphics[width=0.6\\linewidth]{.\/plots\/cexample.pdf}\n\\vspace{2mm}\n \\caption{Illustration of TOC encoding. \\textbf{T} has two tuples: $[a, b, c, d, e]$ and $[f, g, c, d, e]$. \\textbf{GetLM} is the output of \\textbf{GetLongestMatch} for the symbol t[i]. \\textbf{AddDict} shows the entry - $code\\_number (start\\_idx, [attrs])$ added to the dictionary upon finding a longest match, where $start\\_idx$ is the column index of the first attribute. We omit Step 1, which initializes the dictionary with starting indexes: $0 (-1, 0)$, $1 (-1, 1)$, $2 (-1, 2)$, $3 (-1, 3)$, and $4 (-1, 4)$, as well as single-symbol sequences: $5 (0, [a])$, $6 (1, [b])$, $7 (2, [c])$, $8 (3, [d])$, $9 (4, [e])$, $10 (0, [f])$, and $11 (1, [g])$. Subsequent steps add more entries that extend these sequences. As an example, the first row shows that the longest match for $a$ is code number $5$, which resulted in the addition of a new entry $(0, [a, b])$ with code number $12$. The highlighted row in Step 1 and 2 is where we keep the boundaries of the tuple. That is, when the boundary of a tuple is reached, we stopped matching more characters in \\textbf{GetLongestMatch} or adding a new entry in \\textbf{AddDict}. 
Overall, the compressed tuples are $[5, 6, 7, 8, 9]$ and $[10, 11, 14, 9]$}\n\\label{fig:encoding_example}\n\\vspace{-3mm}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\partial{\\partial}\n\\def\\tfrac#1#2{{\\textstyle{#1\\over #2}}}\n\\def\\tfrac{1}{2}{\\tfrac{1}{2}}\n\\def\\times{\\times}\n\\def\\otimes{\\otimes}\n\\def\\cal{h}{\\cal{h}}\n\\def\\cal{i}{\\cal{i}}\n\\def{\\alpha^\\prime}{{\\alpha^\\prime}}\n\\def\\mbox{Tr}{\\mbox{Tr}}\n\\def\\mbox{Str}{\\mbox{Str}}\n\n\\def\\ensuremath{\\mathbb{N}}{\\ensuremath{\\mathbb{N}}}\n\\def\\ensuremath{\\mathbb{Z}}{\\ensuremath{\\mathbb{Z}}}\n\\def\\ensuremath{\\mathbb{Q}}{\\ensuremath{\\mathbb{Q}}}\n\\def\\ensuremath{\\mathbb{R}}{\\ensuremath{\\mathbb{R}}}\n\\def\\ensuremath{\\mathbb{C}}{\\ensuremath{\\mathbb{C}}}\n\n\\def{\\mbox{\\tiny E}}{{\\mbox{\\tiny E}}}\n\\def{\\cal KK}{{\\cal KK}}\n\\defOrt{\\'\\i}n {Ort{\\'\\i}n }\n\n\\def\\cal A} \\def\\hA{\\hat A} \\def\\tA{\\tilde A} \\def\\dA{\\dot A{\\cal A} \\def\\hA{\\hat A} \\def\\tA{\\tilde A} \\def\\dA{\\dot A}\n\\def\\cal B} \\def\\hB{\\hat B} \\def\\tB{\\tilde B} \\def\\dB{\\dot B{\\cal B} \\def\\hB{\\hat B} \\def\\tB{\\tilde B} \\def\\dB{\\dot B}\n\\def\\cal C} \\def\\hC{\\hat C} \\def\\tC{\\tilde C{\\cal C} \\def\\hC{\\hat C} \\def\\tC{\\tilde C}\n\\def\\cal D} \\def\\hD{\\hat D} \\def\\tD{\\tilde D{\\cal D} \\def\\hD{\\hat D} \\def\\tD{\\tilde D}\n\\def\\cal E} \\def\\hE{\\hat E} \\def\\tE{\\tilde E{\\cal E} \\def\\hE{\\hat E} \\def\\tE{\\tilde E}\n\\def\\cal F} \\def\\hF{\\hat F} \\def\\tF{\\tilde F{\\cal F} \\def\\hF{\\hat F} \\def\\tF{\\tilde F}\n\\def\\cal G} \\def\\hG{\\hat G} \\def\\tG{\\tilde G{\\cal G} \\def\\hG{\\hat G} \\def\\tG{\\tilde G}\n\\def\\cal H} \\def\\hH{\\hat H} \\def\\tH{\\tilde H} \\def\\dH{\\dot H{\\cal H} \\def\\hH{\\hat H} \\def\\tH{\\tilde H} \\def\\dH{\\dot H}\n\\def\\cal I} \\def\\hI{\\hat I} \\def\\tI{\\tilde I{\\cal I} \\def\\hI{\\hat I} \\def\\tI{\\tilde I}\n\\def\\cal J} \\def\\hJ{\\hat J} \\def\\tJ{\\tilde J{\\cal J} \\def\\hJ{\\hat J} \\def\\tJ{\\tilde J}\n\\def\\cal K} \\def\\hK{\\hat K} \\def\\tK{\\tilde K{\\cal K} \\def\\hK{\\hat K} \\def\\tK{\\tilde K}\n\\def\\cal L} \\def\\hL{\\hat L} \\def\\tL{\\tilde L{\\cal L} \\def\\hL{\\hat L} \\def\\tL{\\tilde L}\n\\def\\cal M} \\def\\hM{\\hat M} \\def\\tM{\\tilde M{\\cal M} \\def\\hM{\\hat M} \\def\\tM{\\tilde M}\n\\def\\cal N} \\def\\hN{\\hat N} \\def\\tN{\\tilde N{\\cal N} \\def\\hN{\\hat N} \\def\\tN{\\tilde N}\n\\def\\cal O} \\def\\hO{\\hat O} \\def\\tO{\\tilde O{\\cal O} \\def\\hO{\\hat O} \\def\\tO{\\tilde O}\n\\def\\cal P} \\def\\hP{\\hat P} \\def\\tP{\\tilde P{\\cal P} \\def\\hP{\\hat P} \\def\\tP{\\tilde P}\n\\def\\cal Q} \\def\\hQ{\\hat Q} \\def\\tQ{\\tilde Q{\\cal Q} \\def\\hQ{\\hat Q} \\def\\tQ{\\tilde Q}\n\\def\\cal R} \\def\\hR{\\hat R} \\def\\tR{\\tilde R{\\cal R} \\def\\hR{\\hat R} \\def\\tR{\\tilde R}\n\\def\\cal S} \\def\\hS{\\hat S} \\def\\tS{\\tilde S{\\cal S} \\def\\hS{\\hat S} \\def\\tS{\\tilde S}\n\\def\\cal T} \\def\\hT{\\hat T} \\def\\tT{\\tilde T} \\def\\dT{\\dot T{\\cal T} \\def\\hT{\\hat T} \\def\\tT{\\tilde T} \\def\\dT{\\dot T}\n\\def\\cal U} \\def\\hU{\\hat U} \\def\\tU{\\tilde U{\\cal U} \\def\\hU{\\hat U} \\def\\tU{\\tilde U}\n\\def\\cal V} \\def\\hV{\\hat V} \\def\\tV{\\tilde V{\\cal V} \\def\\hV{\\hat V} \\def\\tV{\\tilde V}\n\\def\\cal W} \\def\\hW{\\hat W} \\def\\tW{\\tilde W} \\def\\dW{\\dot W{\\cal W} \\def\\hW{\\hat W} \\def\\tW{\\tilde W} \\def\\dW{\\dot W}\n\\def\\cal X} \\def\\hX{\\hat X} \\def\\tX{\\tilde X{\\cal X} \\def\\hX{\\hat X} 
\\def\\tX{\\tilde X}\n\\def\\cal Y} \\def\\hY{\\hat Y} \\def\\tY{\\tilde Y{\\cal Y} \\def\\hY{\\hat Y} \\def\\tY{\\tilde Y}\n\\def\\cal Z} \\def\\hZ{\\hat Z} \\def\\tZ{\\tilde Z{\\cal Z} \\def\\hZ{\\hat Z} \\def\\tZ{\\tilde Z}\n\n\\def\\ddot A{\\ddot A}\n\\def\\ddot T{\\ddot T}\n\n\\def\\bar R{\\bar R}\n\n\\def\\tilde{\\hat A}{\\tilde{\\hat A}}\n\\def\\tilde{\\hat B}{\\tilde{\\hat B}}\n\\def\\tilde{\\hat C}{\\tilde{\\hat C}}\n\\def\\tilde{\\hat D}{\\tilde{\\hat D}}\n\\def\\tilde{\\hat E}{\\tilde{\\hat E}}\n\\def\\tilde{\\hat F}{\\tilde{\\hat F}}\n\\def\\tilde{\\hat G}{\\tilde{\\hat G}}\n\\def\\tilde{\\hat H}{\\tilde{\\hat H}}\n\\def\\tilde{\\hat R}{\\tilde{\\hat R}}\n\\def\\tilde{\\hat V}{\\tilde{\\hat V}}\n\n\\def\\hat a} \\def\\ta{\\tilde a} \\def\\da{{\\dot a}{\\hat a} \\def\\ta{\\tilde a} \\def\\da{{\\dot a}}\n\\def\\hat b} \\def\\tb{\\tilde b{\\hat b} \\def\\tb{\\tilde b}\n\\def\\hat c} \\def\\tc{\\tilde c{\\hat c} \\def\\tc{\\tilde c}\n\\def\\hat d} \\def\\td{\\tilde d{\\hat d} \\def\\td{\\tilde d}\n\\def\\hat e} \\def\\te{\\tilde e{\\hat e} \\def\\te{\\tilde e}\n\\def\\hat f} \\def\\tf{\\tilde f{\\hat f} \\def\\tf{\\tilde f}\n\\def\\hat g} \\def\\tg{\\tilde g} \\def\\dg{{\\dot g}{\\hat g} \\def\\tg{\\tilde g} \\def\\dg{{\\dot g}}\n\\def{\\bar g}{{\\bar g}}\n\\def\\hat h} \\def\\th{\\tilde h} \\def\\dh{{\\dot h}{\\hat h} \\def\\th{\\tilde h} \\def\\dh{{\\dot h}}\n\\def{\\bar h}{{\\bar h}}\n\\def\\hat \\imath} \\def\\ti{\\tilde \\imath{\\hat \\imath} \\def\\ti{\\tilde \\imath}\n\\def\\hat \\jmath} \\def\\tj{\\tilde \\jmath{\\hat \\jmath} \\def\\tj{\\tilde \\jmath}\n\\def\\hat k} \\def\\tk{\\tilde k{\\hat k} \\def\\tk{\\tilde k}\n\\def\\hat l} \\def\\tl{\\tilde l{\\hat l} \\def\\tl{\\tilde l}\n\\def\\hat m} \\def\\tm{\\tilde m{\\hat m} \\def\\tm{\\tilde m}\n\\def\\hat n} \\def\\tn{\\tilde n{\\hat n} \\def\\tn{\\tilde n}\n\\def\\hat o} \\def\\to{\\tilde o{\\hat o} \\def\\to{\\tilde o}\n\\def\\hat p} \\def\\tp{\\tilde p{\\hat p} \\def\\tp{\\tilde p}\n\\def\\hat q} \\def\\tq{\\tilde q{\\hat q} \\def\\tq{\\tilde q}\n\\def\\hat r} \\def\\tr{\\tilde r{\\hat r} \\def\\tr{\\tilde r}\n\\def\\hat s} \\def\\ts{\\tilde s{\\hat s} \\def\\ts{\\tilde s}\n\\def\\hat t} \\def\\tildet{\\tilde t{\\hat t} \\def\\tildet{\\tilde t}\n\\def\\hat u} \\def\\tu{\\tilde u{\\hat u} \\def\\tu{\\tilde u}\n\\def\\hat v} \\def\\tv{\\tilde v{\\hat v} \\def\\tv{\\tilde v}\n\\def\\hat w} \\def\\tw{\\tilde w{\\hat w} \\def\\tw{\\tilde w}\n\\def\\hat x} \\def\\tx{\\tilde x} \\def\\dx{{\\dot x}{\\hat x} \\def\\tx{\\tilde x} \\def\\dx{{\\dot x}}\n\\def\\hat y} \\def\\ty{\\tilde y} \\def\\dy{{\\dot y}{\\hat y} \\def\\ty{\\tilde y} \\def\\dy{{\\dot y}}\n\\def\\hat z} \\def\\tz{\\tilde z} \\def\\dz{{\\dot z}{\\hat z} \\def\\tz{\\tilde z} \\def\\dz{{\\dot z}}\n\n\\def{\\tilde{\\hat g}}{{\\tilde{\\hat g}}}\n\n\\def\\hat{\\alpha}} \\def\\talpha{\\tilde{\\alpha}{\\hat{\\alpha}} \\def\\talpha{\\tilde{\\alpha}}\n\\def\\hat{\\beta}} \\def\\tbeta{\\tilde{\\beta}{\\hat{\\beta}} \\def\\tbeta{\\tilde{\\beta}}\n\\def\\hat{\\gamma}} \\def\\tgamma{\\tilde{\\gamma}{\\hat{\\gamma}} \\def\\tgamma{\\tilde{\\gamma}}\n\\def\\hat{\\delta}} \\def\\tdelta{\\tilde{\\delta}{\\hat{\\delta}} \\def\\tdelta{\\tilde{\\delta}}\n\\def\\hat{\\epsilon}} \\def\\tepsilon{\\tilde{\\epsilon}{\\hat{\\epsilon}} \\def\\tepsilon{\\tilde{\\epsilon}}\n\\def\\hat{\\varepsilon}{\\hat{\\varepsilon}}\n\\def\\tilde{\\varepsilon}{\\tilde{\\varepsilon}}\n\\def\\hat{\\zeta}} \\def\\tzeta{\\tilde{\\zeta}{\\hat{\\zeta}} \\def\\tzeta{\\tilde{\\zeta}}\n\\def\\hat{\\eta}} 
\\def\\teta{\\tilde{\\eta}{\\hat{\\eta}} \\def\\teta{\\tilde{\\eta}}\n\\def\\hat{\\theta}} \\def\\ttheta{\\tilde{\\theta}{\\hat{\\theta}} \\def\\ttheta{\\tilde{\\theta}}\n\\def\\hat{\\kappa}} \\def\\tkappa{\\tilde{\\kappa}{\\hat{\\kappa}} \\def\\tkappa{\\tilde{\\kappa}}\n\\def\\hat{\\lambda}} \\def\\tlambda{\\tilde{\\lambda}{\\hat{\\lambda}} \\def\\tlambda{\\tilde{\\lambda}}\n\\def\\hat{\\mu}} \\def\\tmu{\\tilde{\\mu}{\\hat{\\mu}} \\def\\tmu{\\tilde{\\mu}}\n\\def\\hat{\\nu}} \\def\\tnu{\\tilde{\\nu}{\\hat{\\nu}} \\def\\tnu{\\tilde{\\nu}}\n\\def\\hat{\\xi}} \\def\\txi{\\tilde{\\xi}{\\hat{\\xi}} \\def\\txi{\\tilde{\\xi}}\n\\def\\hat{\\pi}} \\def\\tpi{\\tilde{\\pi}{\\hat{\\pi}} \\def\\tpi{\\tilde{\\pi}}\n\\def\\hat{\\rho}} \\def\\trho{\\tilde{\\rho}{\\hat{\\rho}} \\def\\trho{\\tilde{\\rho}}\n\\def\\hat{\\sigma}} \\def\\tsigma{\\tilde{\\sigma}{\\hat{\\sigma}} \\def\\tsigma{\\tilde{\\sigma}}\n\\def\\hat{\\tau}} \\def\\ttau{\\tilde{\\tau}{\\hat{\\tau}} \\def\\ttau{\\tilde{\\tau}}\n\\def\\hat{\\upsilon}} \\def\\tupsilon{\\tilde{\\upsilon}{\\hat{\\upsilon}} \\def\\tupsilon{\\tilde{\\upsilon}}\n\\def\\hat{\\phi}} \\def\\tphi{\\tilde{\\phi}{\\hat{\\phi}} \\def\\tphi{\\tilde{\\phi}}\n\\def\\hat{\\varphi}} \\def\\tvarphi{\\tilde{\\varphi}{\\hat{\\varphi}} \\def\\tvarphi{\\tilde{\\varphi}}\n\\def\\hat{\\chi}} \\def\\tchi{\\tilde{\\chi}{\\hat{\\chi}} \\def\\tchi{\\tilde{\\chi}}\n\\def\\hat{\\psi}} \\def\\tpsi{\\tilde{\\psi}{\\hat{\\psi}} \\def\\tpsi{\\tilde{\\psi}}\n\\def\\hat{\\omega}} \\def\\tomega{\\tilde{\\omega}{\\hat{\\omega}} \\def\\tomega{\\tilde{\\omega}}\n\n\\def\\dot{\\phi}{\\dot{\\phi}}\n\\def\\ddot \\phi{\\ddot \\phi}\n\\def{\\tilde{\\hat\\phi}}{{\\tilde{\\hat\\phi}}}\n\n\\def\\hat{\\Gamma}{\\hat{\\Gamma}}\n\\def\\tilde{\\Gamma}{\\tilde{\\Gamma}}\n\\def\\bar{\\Gamma}{\\bar{\\Gamma}}\n\n\\def{\\mu\\nu}{{\\mu\\nu}}\n\\def{\\mu\\nu\\rho}{{\\mu\\nu\\rho}}\n\\def{\\hmu\\hnu}{{\\hat{\\mu}} \\def\\tmu{\\tilde{\\mu}\\hat{\\nu}} \\def\\tnu{\\tilde{\\nu}}}\n\\def{\\hmu\\hnu\\hrho}{{\\hat{\\mu}} \\def\\tmu{\\tilde{\\mu}\\hat{\\nu}} \\def\\tnu{\\tilde{\\nu}\\hat{\\rho}} \\def\\trho{\\tilde{\\rho}}}\n\n\\defg_{\\mu\\nu}{g_{\\mu\\nu}}\n\\defB_{\\mu\\nu}{B_{\\mu\\nu}}\n\\defF_{\\mu\\nu}{F_{\\mu\\nu}}\n\\defH_{\\mu\\nu\\rho}{H_{\\mu\\nu\\rho}}\n\\defR_{\\mu\\nu}{R_{\\mu\\nu}}\n\n\\def{\\hat g}_{{\\hat\\mu}{\\hat\\nu}}{{\\hat g}_{{\\hat\\mu}{\\hat\\nu}}}\n\\def{\\hat B}_{{\\hat\\mu}{\\hat\\nu}}{{\\hat B}_{{\\hat\\mu}{\\hat\\nu}}}\n\\def{\\hat C}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}{{\\hat C}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}}\n\\def{\\hat F}_{{\\hat\\mu}{\\hat\\nu}}{{\\hat F}_{{\\hat\\mu}{\\hat\\nu}}}\n\\def{\\hat H}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}{{\\hat H}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}}\n\\def{\\hat R}_{{\\hat\\mu}{\\hat\\nu}}{{\\hat R}_{{\\hat\\mu}{\\hat\\nu}}}\n\n\\def{\\tilde{\\hat g}}_{{\\hat\\mu}{\\hat\\nu}}{{\\tilde{\\hat g}}_{{\\hat\\mu}{\\hat\\nu}}}\n\\def{\\tilde{\\hat B}}_{{\\hat\\mu}{\\hat\\nu}}{{\\tilde{\\hat B}}_{{\\hat\\mu}{\\hat\\nu}}}\n\\def{\\tilde{\\hat C}}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}{{\\tilde{\\hat C}}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}}\n\\def{\\tilde{\\hat H}}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}{{\\tilde{\\hat H}}_{{\\hat\\mu}{\\hat\\nu}{\\hat\\rho}}}\n\n\\def\\sqrt{|g|}{\\sqrt{|g|}}\n\\def\\sqrt{|g|}\\ e^{-2\\phi}{\\sqrt{|g|}\\ e^{-2\\phi}}\n\\def\\sqrt{|g^{\\mbox{\\tiny E}}|}{\\sqrt{|g^{\\mbox{\\tiny E}}|}}\n\\def\\sqrt{|G|}{\\sqrt{|G|}}\n\n\\def\\sqrt{|{\\hat g}|}{\\sqrt{|{\\hat g}|}}\n\\def\\sqrt{|{\\hat g}|}\\ e^{-2{\\hat \\phi}}{\\sqrt{|{\\hat g}|}\\ 
e^{-2{\\hat \\phi}}}\n\\def\\sqrt{|{\\hat g}^{\\mbox{\\tiny E}}|}{\\sqrt{|{\\hat g}^{\\mbox{\\tiny E}}|}}\n\\def\\sqrt{|{\\hat G}|}{\\sqrt{|{\\hat G}|}}\n\n\\def\\sqrt{|\\gamma|}{\\sqrt{|\\gamma|}}\n\\def\\sqrt{|{\\hat \\gamma}|}{\\sqrt{|{\\hat \\gamma}|}}\n\n\\def\\makeatletter{\\catcode`\\@=11\n\\makeatletter\n\\def\\mathbox#1{\\hbox{$\\m@th#1$}}%\n\\def\\math@ccstyles#1#2#3#4#5#6#7{{\\leavevmode\n \\setbox0\\mathbox{#6#7}%\n \\setbox2\\mathbox{#4#5}%\n \\dimen@ #3%\n \\baselineskip\\z@\\lineskiplimit#1\\lineskip\\z@\n \\vbox{\\ialign{##\\crcr\n \\hfil \\kern #2\\box2 \\hfil\\crcr\n \\noalign{\\kern\\dimen@}%\n \\hfil\\box0\\hfil\\crcr}}}}\n\\def\\math@ccstyles\\maxdimen{\\math@ccstyles\\maxdimen}\n\\def\\math@ccstyles{-\\maxdimen}{\\math@ccstyles{-\\maxdimen}}\n\\def\\unity%\n {\\math@ccstyles{-\\maxdimen}{.45\\ht0}\\z@\\displaystyle\n {\\mathchar\"006C}\\displaystyle 1}\n\n\\begin{document}\n\n\\rightline{December 2007} \\vspace{1truecm}\n\n\\centerline{\\Large \\bf On Uniqueness of supersymmetric Black holes in AdS(5)} \\vspace{1truecm}\n\n\\centerline{\n {\\bf Pedro J. Silva${}$}\\footnote{E-mail address:\n {\\tt psilva@ifae.es}}}\n\n\\vspace{.4truecm} \\centerline{{Institut de Ci\\`encies de l'Espai (IEEC-CSIC) and\nInstitut de F\\'{\\i}sica d'Altes Energies (IFAE),}} \\centerline{{\\it UAB, E-08193 Bellaterra (Barcelona), Spain.}}\n\\vspace{2truecm}\n\n\\centerline{\\bf ABSTRACT}\n\\vspace{.5truecm}\n\n\\noindent We study the possibility of having Black hole of spherical and ring horizon topology with five independent charges in the $U(1)^3$-model of 5D gauge supergravity. To study these possibilities we consider not only the known result obtained by local supersymmetry analysis but include the input coming from non-local properties of the solutions, like the attractor mechanism, the entropy function of Sen, the Euclidean formulation and general properties of the uplift to ten dimension. For the spherical case, we found that there is no room for more general Black holes than the ones already describe in hep-th\/0601156. On the other hand, if a solution of ring horizon topology exists, we conclude that it must be labeled by three independent parameters only, since it has to satisfy two independent constraints that we explicitly find in terms of its chemical potentials. At the end of the article, based on all the local and non-local information, we put forward a conjecture on the constraints that characterize general Black holes dual to ${\\cal N}=4$ SYM.\n\n\n\\section{Introduction}\n\n\\noindent String theory has provided the first microscopic derivation\nof the entropy for some extremal Black holes (Bh) in diverse dimensions (see for example \\cite{Strominger:1996sh,Emparan}). All the calculations are based on the identification of a given set of microscopic states forming an ensemble (label by the Bh charges) with the Bh geometry. In four dimensions we have Bh uniqueness theorems which guaranty that to a given set of asymptotic charges there corresponds only one Bh solution. This is in general not true for higher dimension where such theorems no longer exist. In particular in five dimensions we have Black rings (Br) and spherical Bh solutions as an explicit example of non-uniqueness. Considering the above, it seems important to understand better what kind of Bh exist in string theory compactifications, if we want to understand to which microscopic ensembles these Bh are related. 
Unfortunately, this is a very difficult problem to solve in general, but it can be significantly simplified if we include supersymmetry as an input.\n\nIn fact, supersymmetric Bh in minimal five dimensional supergravity and in its extension with $U(1)^n$ vector multiplets were classified in \\cite{Reall:2002bh,Gutowski:2004bj}. Here, local analysis of supersymmetry was used to classify all the near horizon geometries (NHG), to later match them with asymptotic data that defines the Bh solutions. The main result in these minimal cases is the uniqueness of the corresponding supersymmetric BMPV Bh \\cite{Breckenridge:1996is} and of the supersymmetric Br solutions of \\cite{Elvang:2004rt}.\n\nIn this article, we consider the very important framework of ten dimensional type IIB supergravity with $AdS_5\\otimes S^5$ asymptotic conditions, related by duality to $U(N)$ ${\\cal N}=4$ super Yang-Mills in four dimensions. This theory can be consistently truncated and compactified on the $S^5$ to give $SO(6)$ gauge supergravity in five dimensions, which furthermore can be truncated to the $U(1)^3$-model, which in turn admits a final truncation to minimal gauge supergravity. These different truncations tell us that we can have Bh solutions at four different levels.\n\\begin{enumerate}\n\\item Bh in 10D type IIB supergravity.\n\\item Bh in $SO(6)$ 5D gauge supergravity.\n\\item Bh in $U(1)^3$ 5D gauge supergravity.\n\\item Bh in minimal 5D gauge supergravity.\n\\end{enumerate}\n\nAll supersymmetric Bh should be described in the dual CFT by supersymmetric ensembles of states labeled by the quantum charges corresponding to the $SO(4)$ angular momenta $J_i$, $i=1,2$, and the $SO(6)$ R-charges $Q_I$, $I=1,2,3$. At present, there only exist explicit supersymmetric Bh solutions in the last two truncations (3) and (4) \\cite{sbh1,sbh2,sbh3}, but it is not clear if these solutions are the most general Bh or just a particular family. The key point is that these solutions come with a constraint among the conserved charges that is not apparent in the dual theory \\cite{index}.\n\nRegarding the classification of Bh solutions in this framework, only truncations (3) and (4) have been studied so far \\cite{fhp,NH1,NH2,Ast,NH3}, where unfortunately the local analysis of supersymmetry was too complicated to be fully solved in both cases. More specifically, only the NHG with two $U(1)$ symmetries have been classified, and the necessary connection with the corresponding asymptotic region was not achieved, leaving the classification unsettled.\n\nIn this work we propose a complementary approach to help with the classification of supersymmetric Bh solutions, bypassing the problem of the asymptotic form of the metric. We base our construction on global properties of supersymmetric Bh like the attractor mechanism \\cite{attractor}, their Euclidean formulation, thermodynamics \\cite{yo1,yo2} and general properties of the uplifting of gauge supergravities to 10D \\cite{g}. We explicitly consider only the last two truncations, but these new ideas should also help in the other, more general cases; in fact we discuss this possibility and some of its implications in the last section.\n\nOur main results are that in the $U(1)^3$-model there are no other BPS Bh with $S^3$ horizon than the ones we already know, and that if there is a BPS Br in $AdS$ it is highly constrained, with only three independent parameters.
Even more, for these Br there seems to be an obstruction to taking the ungauged limit (sending the $AdS$ radius to infinity), as the chemical potentials conjugate to the dipole momenta become unbounded. More generally, we find that all BPS Bh in the $U(1)^3$-model present the same single constraint among their chemical potentials. Finally, and at a more speculative level, we propose as a conjecture that all BPS Bh dual to ${\\cal N}=4$ SYM show this same constraint, which is the same relation that appears in the definition of the SYM index of \\cite{index}.\n\nThe plan of this work is the following: we summarize the main results of the local supersymmetric analysis in section \\ref{localsusy}. Then, in section \\ref{attractors}, we consider the information that can be extracted from the attractor mechanism and the entropy function of Sen. In section \\ref{uplift}, we use the extension of the usual thermodynamic relations, the Euclidean formulation and the uplift to 10D of the 5D solutions to derive new, more general global constraints on the Bh geometries. Finally, in section \\ref{end}, we summarize our results, discuss future directions and formulate the above conjecture on BPS Bh constraints.\n\n\\section{Supersymmetry \\& local analysis}\n\\label{localsusy}\n\nAnalysis of local supersymmetry based on Killing spinors is a powerful technique to classify supersymmetric solutions (see for example \\cite{clasification}). This method was adapted in \\cite{Reall:2002bh} to study the NHG of putative Bh solutions of ungauged supergravity. Later, in \\cite{NH1,NH3}, this method was applied first to minimal gauge supergravity and then to its $U(1)^n$ vector multiplet extension. Although the full classification of Bh geometries was not obtained, due to the complicated structure of the theory, the NHG were classified up to one rotational isometry. The main result in the $U(1)^3$-model is that there exist three types of regular NHG corresponding to compact horizons:\n\\begin{itemize}\n\\item $AdS_2\\otimes S^3_{squashed}$,\n\\item $AdS_2\\otimes S^2\\otimes S^1$,\n\\item $AdS_2\\otimes T^3$.\n\\end{itemize}\nThe first case corresponds to Bh with horizons of spherical topology, while the second and third correspond to Bh with horizons of $S^2\\otimes S^1$ and $T^3$ topology. In the original derivation it is assumed that the Bh have isometry group $R\\times U(1)\\times U(1)$, associated with time flow and two rotational symmetries\\footnote{The existence of two $U(1)$ symmetries is not necessary on general grounds, but they are present in all known Bh solutions of gauged and ungauged 5D supergravity. On the other hand, in \\cite{Gutowski:2007ai} all 1\/2 BPS solutions were classified, finding solutions with only one $U(1)$. Therefore the status of this second symmetry is unclear at this point.}.
All these NHG are 1\/2 BPS and depend at most on {\\it four} parameters only.\n\nBased on the above partial classification, Bh in the gauge supergravity of the $U(1)^3$-model could appear in the following different situations:\n\\begin{enumerate}\n\\item There are more general Bh with only one $U(1)$ rotational isometry,\n\\item There are more general Bh with $U(1)^2$ isometry and more parameters that disappear in the NHG,\n\\item The Bh we already know are the most general family of solutions with horizon of $S^3$ topology, apart from Br solutions and Bh with horizon of $T^3$ topology.\n\\end{enumerate}\nRegarding the truncation to minimal gauge supergravity, after a short analysis it is easy to see that the second and third cases cannot be reduced from the $U(1)^3$-model, since they have constraints incompatible with the truncation. We can safely say that there are no supersymmetric Br solutions in minimal gauge supergravity, but there could be supersymmetric Br in the STU-model. On the other hand, the case of spherical topology admits reduction to minimal gauge supergravity, telling us that there can be supersymmetric Bh in both theories. Summarizing, in minimal gauge supergravity we are left with the following possibilities:\n\\begin{enumerate}\n\\item There are more general Bh with only one $U(1)$ rotational isometry,\n\\item There are more general Bh with topological $S^3$ horizon and more parameters that disappear in the NHG,\n\\item The Bh we know are the most general family of solutions.\n\\end{enumerate}\nIt is important to notice that in \\cite{NH3} it was proved that the NHG of the most general known Bh of \\cite{sbh3} coincides with the $AdS_2\\otimes S^3_{squashed}$ NHG of the $U(1)^3$-model, and also with its truncation to minimal gauge supergravity.\n\nThe above information summarizes the results and possibilities that are known using supersymmetry and local analysis. We need to complement these results with extra input data if we want to fully classify all Bh solutions. In the rest of this work we consider the constraints imposed by the attractor mechanism \\cite{attractor} and the entropy function of Sen \\cite{sen}, together with the global properties necessary to define the Euclidean regime in the uplifted 10D metric, to find new restrictions that help in the Bh classification.\n\nIn the following section we study the NHG using the attractor mechanism, and hence we mainly focus on the case of Bh with isometry group $R\\times U(1)\\times U(1)$, unless we explicitly state the contrary.\n\n\\section{The attractor mechanism \\& the entropy function}\n\\label{attractors}\n\nThe attractor mechanism was originally discussed for ${\\cal N}=2$ Bh in four dimensions, where the values of the scalar fields at the horizon are given by the values of the Bh conserved charges and are independent of the asymptotic values of the scalars at infinity. Importantly, the attractor mechanism has provided a new way to calculate the Bh entropy. In a series of articles \\cite{sen}, Sen recovered the entropy of $D$-dimensional BPS Bh using only the near horizon part of the geometry. Basically, in this regime the solution adopts the form $AdS_2 \\otimes \\Upsilon^{D-2}$, with $\\Upsilon^{D-2}$ a compact manifold of the form $\\prod_i S^{n_i}$ with $\\sum_i n_i=D-2$, plus some electric and magnetic fields. The entropy $S$ is obtained by introducing a function $f$ defined as the integral of the corresponding supergravity Lagrangian over $\\Upsilon^{D-2}$.
More concretely, an entropy function is defined as $2\\pi$ times the Legendre transform of $f$ with respect to the electric fields $e$ and rotational parameters $\\alpha$\\footnote{The analysis of the near horizon geometry has been extended to stationary Bh that define squashed $AdS_2\\otimes S^{D-2}$ geometries in the last article of \\cite{sen}.}. Then, an extremization procedure fixes the on-shell BPS values of the different fields of the solution and, in particular, determines the BPS value of the entropy,\n \\begin{eqnarray} S_{bps}=2\\pi\\left( e {\\partial f\\over \\partial e}+\\alpha{\\partial f\\over \\partial \\alpha} -f \\right)_{bps}\\,.\\label{ae}\n\\end{eqnarray}\nMore recently, the extension of the entropy functional to gauge supergravities has been considered in \\cite{Morales,Suryanarayana:2007rk,yo2,Astefanesei:2007vh}. Here, the characteristic Chern-Simons term is included in the discussion and again the entropy of the Bh is recovered with the sole information of the NHG.\n\n\\vspace{.5cm}\n\\noindent{\\bf Entropy for BPS Bh in the $U(1)^3$-model} \\vspace{.5cm}\n\nLet us start with the most general Bh solution possessing isometry group $R\\times U(1) \\times U(1)$. This solution is labeled by a given set of parameters $l_I$, which can be related to physical quantities $L_I$, among which we have the five conserved charges $(J_1,J_2,Q_1,Q_2,Q_3)$\\footnote{Other physical quantities, like possible dipole or higher momenta, should not be excluded from the list.}. We are interested in its entropy $S_{Bh}$, which is in principle a function of all these parameters, i.e. $S_{Bh}(l_{I})$. Let us now focus on the NHG of this Bh solution (which has to be one of those listed in the previous section). As we said before, all these geometries depend on at most four parameters, which we call $\\lambda_i$. Using the entropy functional method we can always calculate the entropy $S_{NHG}$ using just the NHG, and hence it depends only on the parameters $\\lambda_i$; we write it as $S_{NHG}(\\lambda_i)$. On the other hand, by construction both expressions have to be equal, i.e.\n\\begin{equation} S_{Bh}(l_I)= S_{NHG}(\\lambda_i)\\,.\\end{equation}\nThe key point of this argument is to realize that we are looking for Bh solutions that are \\textit{more general} than the ones we already know. In all the known solutions the entropy $S_{Bh}$ is a function of all five conserved charges, and therefore the assumed more general solution should also depend on all of the conserved charges, plus maybe other physical quantities. But we have established that there are at most only four independent parameters; therefore we conclude that the putative more general Bh is a constrained system where the conserved charges are all related by a single equation on top of the BPS equation. This is our first new result, which in other words says\n\n\\vspace{.5cm} \\centerline{\\textit{I. There is no BPS Bh with isometry group $R\\times U(1) \\times U(1)$ and five independent}} \\centerline{\\textit{charges within the $U(1)^3$-model.}}\\vspace{.5cm}\n\n\\noindent {\\bf Conserved charges for BPS Bh} \\vspace{.5cm}\n\nLet us focus on the conserved charges that can be calculated in these NHG. In \\cite{Suryanarayana:2007rk} it was shown how to compute the conserved charges of a given Bh using only the NHG. The computation is based on a Noether procedure for the three dimensional solution obtained by reducing along the two $U(1)$ isometries.
We should nevertheless recall that the calculation of the conserved charges is based on a regular behavior of the solution as we approach the Bh from infinity; to be more precise, we need a smooth fibration along the radial direction between the horizon and the corresponding space-like three-surface at infinity (see section 3 of \\cite{Suryanarayana:2007rk}). This is not a strong condition for squashed horizons, and in fact it is satisfied for all known AdS Bh, but things get more complicated for other topologies. For example, in flat space Br do not satisfy this constraint and therefore the NH analysis does not provide the form of the conserved charges\\footnote{We thank the authors of \\cite{Ha} for pointing out that in that work they explain how to calculate the electric charges and angular momenta for supersymmetric Br in flat space, based only on the NHG. The procedure relies on using more than one coordinate patch to overcome the local divergences of the gauge potentials. It would be interesting to check its implementation in the AdS case.}. Nevertheless, if we assume the above condition to hold, the resulting expression for the conserved charges is not too difficult to calculate in the NHG with topology of squashed $S^3$. Then, it is easy to verify that the NH charges reproduce exactly the same parametric dependence as the five charges of the known Bh solution. In other words,\n\n\\vspace{.5cm} \\centerline{\\textit{II. All BPS Bh with isometry group $R\\times U(1) \\times U(1)$ and horizon of topology $S^3$}} \\centerline{\\textit{have the same constraint among their conserved charges.}}\\vspace{.5cm}\n\n\\noindent Unfortunately, we cannot extend the above result to other horizon topologies, since we know that the NHG may not provide all the information needed to compute the Bh asymptotic charges. \\vspace{.5cm}\n\n\\noindent {\\bf Chemical potentials for BPS Bh in AdS} \\vspace{.5cm}\n\nOn the other hand, we can always characterize the constraints of a given Bh geometry in terms of a different set of thermodynamic variables by changing ensembles. In particular, we switch to the grand canonical ensemble, where all charges have been traded for their conjugate potentials. In this case the Euclidean action $I_{Bh}$ takes, for example, the form\n\\begin{equation} I_{Bh}(\\phi_I,w_i)=\\phi_IQ^I+w_iJ^i-S_{Bh} \\label{qsr}\\end{equation}\nwhere $(\\phi_I,w_i)$ are the chemical potentials conjugate to the corresponding conserved charges. In \\cite{yo1,yo2} a method was developed to calculate the above quantities for any extremal Bh.\n\nThe main point of working in this ensemble is that the calculation of the conjugate chemical potentials can be carried out entirely using only the NHG, and does not require extra assumptions on the regularity of any fibration along the radial direction. These chemical potentials turn out to be identified with the different electric fields appearing in the 3D reduction of the 5D solution (see \\cite{yo2} for a derivation).\n\nTo perform this reduction explicitly along the two $U(1)$ isometries of the different NHG, we follow the notation of \\cite{Suryanarayana:2007rk}. In 3D we end up with five $U(1)$ gauge fields $(a^I,B^i)$, where $I=1,2,3$ is the R-charge index and $i=1,2$ corresponds to the two compactified directions.
Explicitly, the 5D metric $G_{\\mu\\nu}$ and gauge fields $A^I_\\mu$ are rewritten as\n\\begin{eqnarray} G_{\\mu\\nu}=\\begin{array}{cc}\n \\left( \\begin{array}{cc}\n g_{MN}+h_{ij}\\,b^{\\,i}_Mb^{\\,j}_N & h_{in}b^{\\,i}_M\\\\\n h_{im}b^{\\,i}_N & h_{mn}\n \\end{array} \\right) &\\hbox{and}\\quad A^I_\\mu=\n (a^I_M+A_ib^{\\,i}_M ,A_n)\\,,\n \\end{array}\n\\end{eqnarray} where Greek indices are five dimensional and split into capital Roman indices $(M,N,\\ldots)$ corresponding to $(t,r,\\theta)$ and lower-case Roman indices corresponding to the two compactified dimensions $(\\zeta_1,\\zeta_2)$. In particular, the different electric fields come from the field strengths defined as $H^i=db^i$ and $F^I=da^I$.\n\n\\vspace{.5cm}\n\\textbf{The $S^3$ case}: Here the physically distinguishing properties of the constrained system are not related to the difference between the two compactified angular directions $(\\zeta_1,\\zeta_2)$, and therefore it is enough to perform a single reduction along a new angular coordinate defined as $\\varphi=\\zeta_1+\\zeta_2$. Due to the above fact, and to simplify the form of the expressions involved, we work only with the NHG with two equal angular momenta. The resulting squashed $S^3$ NHG can now be recast in terms of three independent parameters $\\mu_I$ as follows,\n\\begin{eqnarray}\n&&ds^2=v_1\\left(-r^2dt^2+{dr^2\\over r^2}\\right)+v_2\\left[\\sigma_1^2+\\sigma_3^2+v_3(\\sigma_3-\\alpha r dt)^2\\right]\\nonumber \\\\\n&&A^I=-e^Irdt+p^I(\\sigma_3-\\alpha r dt)\\,,\\quad X_I=u_I\n\\end{eqnarray}\nwhere $\\sigma_1=\\sin\\varphi d\\theta-\\sin\\theta \\cos\\varphi d\\psi$, $\\sigma_2=\\cos\\varphi d\\theta+\\sin\\theta \\sin\\varphi d\\psi$, $\\sigma_3=d\\varphi+\\cos\\theta d\\psi$, and $\\theta\\in(0,\\pi)$, $\\varphi\\in(0,4\\pi)$ and $\\psi\\in(0,2\\pi)$. The form of the solution in terms of the three independent parameters $\\mu_I$ is,\n\\begin{eqnarray}\n&&u_I={\\mu_I\\over\\gamma_3^{1\/3}}\\,,\\quad v_1={\\gamma_3^{1\/3}\\over 4(1+\\gamma_1)}\\,,\\quad v_2={\\gamma_3^{1\/3}\\over 4}\\,,\\quad v_3=1+\\gamma_1-{\\gamma_2^2\\over 4\\gamma_3}\\,,\\nonumber\\\\\n&&\\alpha={\\gamma_2\\over (1+\\gamma_1)\\sqrt{4\\gamma_3(1+\\gamma_1)-\\gamma_2^2}}\\,,\\quad p_I={(\\gamma_1-\\mu_I)\\mu_I^2-\\gamma_3\\over 4\\mu_I^2}\\,,\\nonumber \\\\\n&&e_I={\\gamma_2(\\mu_I^3-\\gamma_1\\mu_I^2)+[4\\gamma_3(1+\\gamma_1)-\\gamma_2^2]\\mu_I + \\gamma_2\\gamma_3\\over 4\\mu_I^2(1+\\gamma_1)\\sqrt{4\\gamma_3(1+\\gamma_1)-\\gamma_2^2}}\\,,\n\\end{eqnarray}\nwhere $\\gamma_1=\\sum_I\\mu_I$ and $\\gamma_2$, $\\gamma_3$ are the corresponding quadratic and cubic symmetric combinations of the $\\mu_I$.\n\nSuch diffuse light typically appears at surface brightness levels $\\mu > 30\\ \\rm mag\\ arcsec^{-2}$ and is the result of satellites which are disrupted by the tidal forces of the host galaxy and halo. Empirical models and hydrodynamical simulations suggest that this diffuse light represents a significant fraction ($\\gtrsim 10\\%$) of a galaxy's total stellar mass, generally becoming more important at higher masses~\\citep{Bullock2005,Conroy2007, Pillepich2018,Sanderson2018,Behroozi2019}. However, disagreement remains between predictions from hydrodynamical simulations and observations~\\citep{Merritt2016,Monachesi2019,Merritt2020}. It is unclear how to, or even if one should, include this light as part of the total stellar mass of a galaxy.\n\nBeyond the measurement of total flux, photometry across multiple photometric bands is required for use in spectral energy distribution (SED) fitting. A separate method is often used for this, and the results are then ``normalized'' to the total flux measurement.
One popular method is aperture photometry, which measures the flux within a fixed aperture. While these methods are able to produce consistent measurements over multiple photometric bands, they implicitly ignore that galaxies have color gradients~\\citep{Kormendy1989, Saglia2000,LaBarbera2005,Bakos2008, Tortora2010, Guo2012, Dominguezsanchez2019,Suess2020}. Given that these gradients in massive galaxies are generally negative (i.e. bluer colors at larger radii), the total color of a galaxy is likely to be bluer than that measured by aperture photometry. Other studies use parameterized methods like the SDSS \\texttt{model} photometry~\\citep{Ahn2014} or bulge + disk decompositions~\\citep{Mendel2014}; however, these methods have additional issues, as discussed above.\n\n\nIn this study we test the commonly used methods for measuring the photometry of massive galaxies using an independent dataset from the Dragonfly Telephoto Array, Dragonfly for short~\\citep{Abraham2014,Danieli2020}. Dragonfly currently consists of 48 telephoto lenses jointly aligned to image the same patch of sky in both the $g$ and $r$ bands. It operates as a refracting telescope with a 1m aperture and $f\/0.4$ focal ratio. Dragonfly's design is optimized for low surface brightness imaging, routinely reaching $\\mu_g \\approx 30\\rm \\ mag\\ arcsec^{-2}$ on 1 arcmin scales in the $g$ band~\\citep{Merritt2016,Zhang2018,vanDokkum2019,Gilhuly2020}. SDSS remains the main dataset used to study massive galaxies in the local universe~\\citep{Bernardi2017,Kravtsov2018}, and Dragonfly offers a powerful complement to test the methods currently employed. It has superior large scale sky subtraction due, in part, to the use of single CCDs that cover the entire $2.6^{\\circ} \\times 1.9^{\\circ}$ FOV of each lens. Dragonfly's low surface brightness sensitivity allows the light profile to be measured to fainter limits, reducing the amount of extrapolation necessary and thus minimizing the dependence on the choice of parameterization. \n\nWe compare Dragonfly photometry to that of the Galaxy And Mass Assembly survey \\citep[GAMA,][]{Driver2011,Baldry2012}, and others. While GAMA is a spectroscopic survey at its heart, it also involved re-analyzing data from public imaging surveys, like SDSS, and performing its own multi-wavelength imaging surveys. Our goal is to compare Dragonfly photometry to that published by GAMA~\\citep{Kelvin2012,Wright2016} and by other studies that re-analyze SDSS images~\\citep[such as][]{Simard2011,Meert2015}. Specifically, we focus on how these differences affect estimates of the total stellar mass and the measurement of the SMF.\n\nThe rest of the paper is organized as follows: In Section~\\ref{sec:method} we describe and test our method for measuring the total flux of galaxies in the Dragonfly Wide Field Survey. Our galaxy sample is described in Section~\\ref{sec:samp}, with initial results shown in Section~\\ref{sec:results}. We compare our measurements to those of GAMA in Section~\\ref{sec:res_GAMA}, then investigate the effect of the differences on stellar mass estimates in Section~\\ref{sec:res_sm}. We compare Dragonfly measurements to other methods applied to SDSS images in Section~\\ref{sec:meth_comp}.
Our results are discussed in Section~\\ref{sec:disc} and then summarized in Section~\\ref{sec:conc}.\n\n\\section{Measuring the photometry of galaxies in the DWFS}\\label{sec:method}\n\n\\subsection{Data}\nThe main dataset we use in this study is the Dragonfly Wide Field Survey (DWFS), presented in~\\citet{Danieli2020}. This survey imaged $330\\ \\rm deg^2$ in well-studied equatorial fields with superb low surface brightness sensitivity: the typical 1$\\sigma$ depth is 31 mag arcsec$^{-2}$ on 10 arcmin scales. Our galaxy sample will be drawn from the GAMA database, so we will focus on the part of the survey which overlaps with the GAMA equatorial fields~\\citep{Baldry2018}. We use the final co-adds and refer the reader to \\citet{Danieli2020} for details on the instrument, observations and data reduction. One particular detail of note is the sky subtraction procedure. It is performed in two stages, heavily masking all detected sources during the second stage, and fitting a 3rd order polynomial to the entire $1.8^{\\circ} \\times 1.2^{\\circ}$ frame. This preserves any emission features on scales $\\lesssim 0.6^{\\circ}$, ensuring that the outskirts are retained for all the galaxies in our sample ($0.1< z < 0.2$, $0.5^{\\prime \\prime} \\lesssim r_{\\rm eff} \\lesssim 8^{\\prime \\prime}$).\n\n\\begin{table}[h]\n\\centering\n\\begin{threeparttable}\n\\caption{The fields from the DWFS used in this study.}\n\\begin{tabular}{l|cc}\nField name & RA range (deg) & Dec range (deg) \\\\ \\hline\nG09\\_130.5\\_1 & 128.5 - 132.5 & -0.5 - 2.5 \\\\ \nG09\\_136.5\\_1$^*$ & 134.5 - 138.5 & -0.5 - 2.5 \\\\ \nG09\\_139.5\\_1$^*$ & 137.5 - 141.5 & -0.5 - 2.5 \\\\ \nG09\\_139.5\\_m1$^*$ & 137.5 - 141.5 & -2.5 - 0.5 \\\\ \nG12\\_175.5\\_1$^\\dag$ & 173.5 - 177.5 & -0.5 - 2.5 \\\\ \nG12\\_175.5\\_m1$^\\dag$ & 173.5 - 177.5 & -2.5 - 0.5 \\\\ \nG12\\_178.5\\_m1$^\\dag$ & 176.5 - 180.5 & -2.5 - 0.5 \\\\ \nG15\\_213\\_1 & 211.0 - 215.0 & -0.5 - 2.5 \\\\ \nG15\\_222\\_1$^\\ddag$ & 220.0 - 224.0 & -0.5 - 2.5 \\\\ \nG15\\_222\\_m1$^\\ddag$ & 220.0 - 224.0 & -2.5 - 0.5 \\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\footnotesize\n\\item $^*,^\\dag,^\\ddag$ indicate fields with overlap.\n\\end{tablenotes}\n\\label{tab:fields}\n\\end{threeparttable}\n\\end{table}\n\nThe specific fields we use are listed in Table~\\ref{tab:fields}. Each field is $4^\\circ \\times 3^\\circ$, which is larger than the FOV of a single DF lens due to the dithering pattern adopted for the DWFS~\\citep{Danieli2020}. These fields overlap with the G09, G12 and G15 GAMA fields. The ten DWFS fields represent 104 $\\rm deg^2$, with overlap between several of the fields. These overlapping regions, totalling roughly 16 $\\rm deg^2$, will be useful later to test and validate our measurements. The DWFS images have a pixel scale of $2.5\\ ^{\\prime \\prime}\/\\rm pixel$ and the typical full width at half maximum (FWHM) of the point spread function (PSF) is $5^{\\prime \\prime}$.\n\nWe develop a method that allows for a non-parametric measurement of the photometry of galaxies. The method is summarized in Figure~\\ref{fig:method} and will be described in detail below. Given the limitations of the Dragonfly data, mainly the poor spatial resolution, we make quality cuts at certain steps of the method where the photometry of some galaxies cannot be measured accurately. Thus, the result is not a complete sample of all galaxies in the field.
As we show below, we verify that these cuts do not introduce significant biases.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{intro_method.pdf}\n\n\\caption{Illustration of the four major steps involved in our method of measuring the total flux of galaxies in the DWFS. For full details see Section~\\ref{sec:method} of the text. We show two example galaxies in the two rows. \\textbf{1st Column:} The raw Dragonfly $r$ band images are shown. \\textbf{2nd:} The Dragonfly images after the \\texttt{mrf} procedure. The red lines show the isophotes along which the surface brightness profile is measured. \\textbf{3rd:} The measured surface-brightness profile is shown. The blue solid and orange dashed lines show the exponential + background fit and just the exponential portion, respectively. \\textbf{4th:} The measured curve of growth is shown. The orange dashed line shows the extrapolation calculated using the exponential fit. $r_{\\rm extp}$, the radius beyond which we extrapolate the profile, is also shown.}\n \\label{fig:method}\n\\end{figure*}\n\n\\subsection{Using \\texttt{mrf} to isolate galaxies}\n\n\nThe first step in our method is isolating the galaxies from other emission using the multi-resolution filtering (\\texttt{mrf}) algorithm\\footnote{\\url{https:\/\/github.com\/AstroJacobLi\/mrf}}. This algorithm, developed by \\citet{vanDokkum2019a}, is designed to isolate low surface brightness features in low-resolution data, like Dragonfly, by using an independent, higher resolution image to remove compact, high surface brightness emission. A kernel is derived to match the PSFs of the two datasets with different spatial resolution. The high-resolution data is then degraded to the resolution of Dragonfly and used to remove the emission due to compact sources.\n\nFor the high resolution data we use images from the Dark Energy Camera Legacy Survey (DECaLS) \\citep{Dey2019} data release 8\\footnote{Downloaded from \\url{http:\/\/legacysurvey.org\/dr8\/}}. The \\textit{DECaLS} images act as the high-resolution data (pixel scale = $0.262\\ ^{\\prime \\prime }\/\\rm pix$) to run \\texttt{mrf} on our DWFS images (pixel scale = $2.5\\ ^{\\prime \\prime}\/\\rm pix$). We use $0.7^{\\circ} \\times 0.7^{\\circ}$ cutouts of the DWFS and \\textit{DECaLS} data centered on each galaxy. This area is large enough, containing enough unsaturated stars, for the \\texttt{mrf} algorithm to consistently derive an accurate kernel between the two images. Regions where very bright objects have been removed are additionally masked. We use the masked results of this \\texttt{mrf} procedure in all the following analysis.\n\n\\subsection{Measuring the 1-D surface brightness profile}\n\nOnce we have run \\texttt{mrf} on the galaxy cutout, we measure the 1-D surface brightness profile. This is done using the python package \\texttt{photutils}\\footnote{\\url{https:\/\/github.com\/astropy\/photutils}} \\citep{photutils}. In particular, we measure the surface brightness profile using the \\texttt{ELLIPSE} method, based on the algorithm developed in \\citet{Jedrzejewski1987}. This is performed in a non-iterative mode using the sky positions, position angles and axis ratios for each galaxy determined in the GAMA S\\'ersic photometry of the SDSS images~\\citep{Kelvin2012}. These quantities are fixed at all radii.
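\nTo make this step easy to reproduce, the short sketch below shows a plain \\texttt{numpy} implementation of fixed-geometry elliptical-annulus binning. It is only a simplified stand-in for the \\texttt{photutils} \\texttt{ELLIPSE} call used in our pipeline: the function name, the median and scatter estimators and the minimum-pixel threshold are illustrative choices rather than the actual DWFS code.\n\\begin{verbatim}\nimport numpy as np\n\ndef elliptical_profile(img, mask, x0, y0, pa, ba, r_max, step=1.0):\n    # Azimuthally averaged profile in elliptical annuli whose center,\n    # position angle (pa, radians from the image x-axis) and axis\n    # ratio (ba) are held fixed at all radii.\n    yy, xx = np.indices(img.shape)\n    dx, dy = xx - x0, yy - y0\n    xr = dx * np.cos(pa) + dy * np.sin(pa)\n    yr = -dx * np.sin(pa) + dy * np.cos(pa)\n    r_ell = np.sqrt(xr**2 + (yr \/ ba)**2)   # elliptical radius (pixels)\n    edges = np.arange(0.0, r_max + step, step)\n    r_mid, prof, err = [], [], []\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        sel = (r_ell >= lo) & (r_ell < hi) & (~mask)\n        if sel.sum() < 5:                    # skip heavily masked annuli\n            continue\n        r_mid.append(0.5 * (lo + hi))\n        prof.append(np.median(img[sel]))\n        err.append(np.std(img[sel]) \/ np.sqrt(sel.sum()))\n    return np.array(r_mid), np.array(prof), np.array(err)\n\\end{verbatim}\n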
To ensure the axis ratio is applicable for DWFS observations, where the resolution is roughly 10 times lower, we ``convolve'' the intrinsic axis ratio following \\citet{Suess2019}:\n\n\\begin{equation}\n b\/a _{\\rm obs.} = \\sqrt{ \\frac{(b\/a_{\\rm intr}\\, r_{\\rm eff} )^2 + r_{psf}^2 }{{r_{\\rm eff}}^2 + r_{psf}^2} }\n\\end{equation}\n\nHere, $b\/a_{\\rm intr}$ is the intrinsic axis ratio, $r_{psf}$ is the PSF HWHM, which we assume to be $2.5^{\\prime \\prime}$ for the DWFS, and $r_{\\rm eff}$ is the half-light radius measured from the S\\'ersic fits performed on the SDSS images by~\\citet{Kelvin2012}. We measure the surface brightness profile in 2.5 arcsec steps (corresponding to 1 Dragonfly pixel) from the center of the galaxy out to $20\\ r_{\\rm eff}$. During this procedure we discard galaxies where the algorithm fails to converge, which happens for roughly 30\\% of galaxies. This generally occurs because a significant fraction of pixels are masked near the center of the galaxy. \n\n\\subsection{Background subtraction}\nWe use the 1-D surface brightness profile to calculate the total flux of each galaxy. First, a constant sky background is subtracted from the profile. This is done by fitting an exponential plus a constant background to the outskirts of the galaxy profile as follows,\n\n\\begin{equation}\n \\label{eqn:bg_model}\n I(r)\\ =\\ \\alpha\\, e^{-\\beta\\, r} + c\n\\end{equation}\n\nHere $\\alpha$, $\\beta$ and $c$ are the free parameters to be fit. We only fit regions of the galaxy profile that are fainter than $28.5\\ \\rm mag\\ arcsec^{-2}$ in either band. In the $r$ band this typically occurs at radii larger than $7\\, r_{\\rm eff}$. For a small fraction of galaxies this fit does not converge, and these galaxies are discarded. The constant background ($c$) is subtracted from the entire surface brightness profile, which is then used in the next step to calculate the total flux. We test this method below, in Section~\\ref{sec:vtest}, by injecting artificial galaxies, and show that it successfully subtracts the background regardless of the galaxy profile.\n\n\n\\subsection{Measurement of total flux}\n\nNext, we find where the background-subtracted surface brightness profile first drops below a signal-to-noise ratio (SNR) of 2. We call that radius $r_{\\rm extp}$. We integrate up to this point to calculate the observed flux, $S_{\\rm obs}$, as shown below:\n\\begin{equation}\n S_{\\rm obs.} = 2\\pi(b\/a _{\\rm obs}) \\int_0^{r_{\\rm extp} } r\\, f(r)\\, dr\n\\end{equation}\nHere, $f(r)$ is the background-subtracted surface brightness profile, and the integral is performed using a simple trapezoidal rule. Although we aim to be non-parametric, we still need to extrapolate the profile to account for any light beyond $r_{\\rm extp}$. To accomplish this we use the exponential part of the fit described in Eqn.~\\ref{eqn:bg_model} and integrate from $r_{\\rm extp}$ to infinity. We call this the extrapolated flux, and it is calculated as:\n\n \\begin{equation}\\label{eqn:extp_int}\n \\begin{split}\nS_{\\rm extp} &= 2 \\pi (b\/a _{\\rm obs}) \\int_{r_{\\rm extp} }^{\\infty} r\\, \\alpha\\, \\exp( -\\beta\\, r) dr\\\\ \n&= 2 \\pi\\, (b\/a _{\\rm obs})\\, \\alpha\\, e^{ -\\beta\\, r_{\\rm extp} } \\left( \\frac{ 1 + \\beta\\, r_{\\rm extp} } {\\beta^2} \\right)\n \\end{split}\n\\end{equation}\n\nHere, $\\alpha$ and $\\beta$ are the best-fit values from the model of the galaxy profile, as shown in Equation~\\ref{eqn:bg_model}.
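\nTo make the full procedure concrete, the following minimal Python sketch (using \\texttt{numpy} and \\texttt{scipy}) strings together the exponential-plus-constant background fit, the trapezoidal integration of the observed flux and the analytic exponential extrapolation. The function and variable names, the initial guesses and the handling of the faint-outskirts mask are illustrative assumptions, not the actual DWFS pipeline code.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef outskirts_model(r, alpha, beta, c):\n    # Exponential galaxy outskirts plus a constant residual sky level\n    return alpha * np.exp(-beta * r) + c\n\ndef total_flux(r, prof, prof_err, fit_mask, ba_obs):\n    # r, prof, prof_err: 1-D profile from the isophote step\n    # fit_mask: True where the profile is fainter than 28.5 mag\/arcsec^2\n    p0 = (prof[fit_mask][0], 0.1, 0.0)\n    (alpha, beta, c), _ = curve_fit(outskirts_model, r[fit_mask],\n                                    prof[fit_mask], p0=p0)\n    clean = prof - c                  # background-subtracted profile\n    snr = clean \/ prof_err\n    low = np.where(snr < 2)[0]        # first radius with SNR < 2\n    i_ext = low[0] if low.size else len(r) - 1\n    r_ext = r[i_ext]\n    # Observed flux: trapezoidal integral of 2*pi*(b\/a)*r*f(r)\n    s_obs = 2 * np.pi * ba_obs * np.trapz(\n        r[:i_ext + 1] * clean[:i_ext + 1], r[:i_ext + 1])\n    # Analytic extrapolation of the exponential beyond r_ext\n    s_ext = (2 * np.pi * ba_obs * alpha * np.exp(-beta * r_ext)\n             * (1 + beta * r_ext) \/ beta**2)\n    return s_obs, s_ext, s_ext \/ (s_obs + s_ext)\n\\end{verbatim}\nThe last returned quantity corresponds to the extrapolated fraction $f_{\\rm extp}$ used in the quality cuts described in Section~\\ref{sec:samp}.\n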
\n\nWe have made a choice to use an exponential extrapolation, but it remains our largest systematic uncertainty. We show below in Section~\\ref{sec:vtest}, through comparing results from overlapping regions and injection-recovery tests, that it produces accurate and reliable results for S\\'ersic profiles. However, this does not necessarily reflect reality. This issue is further discussed in Section~\\ref{sec:disc}. We also tested a power-law extrapolation. In this case we restrict the power-law slope to $< -3$, to ensure the integral converges. This achieved similar results to the exponential extrapolation. We also tried a S\\'ersic-like profile with an additional shape parameter, $n$. However, this fit failed because the data could not constrain the $n$ parameter consistently. \n\nFinally, the data are brought to the same photometric system as the SDSS photometry. Even though the $g$ and $r$ filters used on Dragonfly are very similar to those of SDSS, a small correction needs to be applied. We derive corrections for the $g$ and $r$ bands in Appendix~\\ref{sec:filt_conv} by comparing the photometry of standard stars in SDSS, Dragonfly and Gaia~\\citep{Gaia2018}. For all of the galaxies we convert the Dragonfly magnitudes to the SDSS filter system based on the observed Dragonfly colors. The median corrections are $\\Delta_g = 0.06$ mag and $\\Delta_r = 0.03$ mag for the $g$ and $r$ bands, respectively, where $m_{\\rm SDSS} = m_{\\rm DF} + \\Delta_x$. The Dragonfly colors are calculated using the total flux measurements once they have been adjusted to match the SDSS filter system.\n\n\\subsection{Validation tests}\n\\label{sec:vtest}\nWe perform two separate tests of our method. The first is an injection-recovery test. We generate 1500 single component S\\'ersic models~\\citep{Sersic1963} using \\texttt{galsim}\\footnote{\\url{https:\/\/github.com\/GalSim-developers\/GalSim}} with sizes, total magnitudes, S\\'ersic indices and axis ratios drawn from distributions similar to those of the observed galaxies. We inject the models into the DWFS frames and run the entire analysis pipeline, described in Sec.~\\ref{sec:samp}, on the simulated galaxies, including all the quality cuts and the \\texttt{mrf} procedure. After all the quality cuts, including the bright neighbour cut described below in Sec.~\\ref{sec:samp}, the photometry is measured for roughly $60 \\%$ of the simulated galaxies. \n \n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.9\\columnwidth]{injrev_results.pdf}\n \\caption{\\textbf{Top:} Comparison of injected and recovered single component S\\'ersic models using our pipeline. The black dashed line shows the one-to-one relation. We find our method recovers the input magnitude well for all galaxies except a few outliers, for which the recovered magnitude is much brighter than the input. \\textbf{Bottom:} The distribution of differences between the injected and recovered magnitudes. The black dashed line shows a Gaussian fit to the distribution.}\n \\label{fig:inj_rev}\n\\end{figure}\n\nFigure~\\ref{fig:inj_rev} displays the results of these tests, focusing on the $g$ band. We find that our non-parametric method works well in general. The mean of the $m_{\\rm recovered} - m_{\\rm injected}$ distribution is near zero, $\\mu = 0.005 \\pm 0.004$ mag, suggesting there is no systematic bias, and the scatter is 0.08 mag. We note that the distribution appears slightly skewed to positive magnitude differences.
There is also a small fraction of outliers ($\\sim 3\\%$) for which the recovered magnitude is much brighter than the injected magnitude. This seems to be caused by nearby bright objects that are just outside the $10\\, r_{\\rm eff}$ cut-off or not present in the SDSS catalog. When we increase the size of this cut-off to $20\\, r_{\\rm eff}$, the fraction of outliers decreases, but the total number of galaxies in our sample drops drastically. Therefore we decided to keep the cut-off at $10\\, r_{\\rm eff}$ and accept that there will be some outliers.\n\nThe second test we performed is to compare the photometry of galaxies which lie in the region of overlap between multiple survey fields. These galaxies have had their photometry independently measured and represent a good test of the reliability and uncertainty of our method. There are 169 galaxies which have their photometry successfully measured in multiple fields. We note that the overlap is naturally at the edges of the DWFS fields, where the noise is higher; therefore this represents a conservative test. The distribution of magnitude differences between the independent measurements of the same galaxy is shown in Figure~\\ref{fig:dups}. We show the magnitude differences divided by $\\sqrt{2}$, which, assuming a Gaussian error distribution, corresponds to the typical $1\\sigma$ uncertainty of a single measurement.\n\nThis distribution is centered at 0 with a width of $\\sigma = 0.046$ mag for the $g$ band and $0.033$ mag for the $r$ band, calculated as the bi-weight scale. Similar to the distribution of recovered magnitudes above, the distribution of the magnitude differences is well approximated by a Gaussian near the center; however, there are outliers. Specifically, 8\\% (4\\%) of this sample has $|\\Delta\\, {\\rm mag}|\/\\sqrt{2} > 0.15$ in the $g$ ($r$) band, which greatly exceeds the expectation for a Gaussian distribution. \n\nWe note that the uncertainty implied by comparing measurements of the same galaxy is significantly smaller than that implied by the injection-recovery tests. In a sense they are measuring two different things. Comparing independent measurements takes the uncertainties in the Dragonfly data, reduction and calibration into account. On the other hand, the injection-recovery test probes the systematic uncertainty caused by, among other aspects of our method, our choice of an exponential extrapolation.\n\nIn short, the error in our photometry measurements is still uncertain and depends on the true surface-brightness profile in the outskirts of galaxies, specifically on how well our choice of an exponential extrapolation matches realistic galaxy profiles. We plan to investigate this in future work by stacking Dragonfly data.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\columnwidth]{dup_comp.pdf}\n \\caption{Comparison of independent measurements of the same galaxies which are present in two different fields. We divide the magnitude difference by $\\sqrt{2}$ as we are using the differences to probe the underlying uncertainty of a given measurement.}\n \\label{fig:dups}\n\\end{figure}\n\n\\section{Galaxy Sample}\n\\label{sec:samp}\nThe construction of the galaxy sample relies on the GAMA survey \\citep[specifically DR3,][]{Baldry2018}.
We select galaxies within the DWFS footprint with $\\log M_*\/M_\\odot> 10.75$ and $0.1 < z < 0.2$, along with the additional quality cuts $\\texttt{SpecAll.nQ} > 2$ and $\\texttt{StellarMasses.nbands} > 3$, as suggested on the data access website\\footnote{\\url{http:\/\/www.gama-survey.org\/dr3\/schema\/}}. This mass-selected galaxy sample is $> 99\\%$ complete~\\citep{Taylor2011}. The parent sample contains 3979 galaxies over the ten DWFS fields used.\n\nNext we remove galaxies that have nearby bright objects. This step is done to avoid confusion of the galaxy's light with light from nearby objects. We use the \\texttt{PhotoObjAll} table from the SDSS DR14 database\\footnote{\\url{http:\/\/skyserver.sdss.org\/dr14\/}} to search for sources near the galaxies in question. If there is another source within $10 \\times r_{\\rm eff}$ \\citep[as measured by the S\\'ersic fits in][]{Kelvin2012} that is at least 0.5 times as bright in either the $g$ or the $r$ band, then the galaxy is discarded. This selection is done to remove galaxies in close pairs or near bright stars, which could contaminate the photometry of the galaxy. This cut removes about $40 \\%$ of the galaxies from the parent sample. \n\nWe obtain photometry for the remaining galaxies. During this process an additional $\\sim 30 \\%$ of the parent sample is discarded while measuring the isophotes. This is often due to there being too many masked pixels near the center of the galaxy. Another small fraction, $< 2 \\%$, of the galaxies in the parent sample is discarded because the exponential + background fit to the outskirts of the 1-D profile does not converge. As a final cut, we do not include any galaxy where the extrapolated fraction of the total flux, $f_{\\rm extp} = S_{\\rm extp}\/ (S_{\\rm extp} + S_{\\rm obs})$, exceeds 10\\%. This is done to remove galaxies with spurious background fits or with low signal-to-noise in the Dragonfly data. Overall, this cut removes a small fraction of galaxies ($< 1\\%$) from the parent sample. For the remainder of this paper, we use the analysis galaxy sample for which the photometry is fully measured as described above, containing 1188 galaxies, about 30\\% of the parent sample.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = \\textwidth]{hist_samples.pdf}\n \\caption{Distributions of galaxy properties for the parent sample of galaxies (blue), drawn directly from the GAMA DR3 database, and our analysis sample (orange), which has passed all of the quality cuts. The effective radius and magnitude measurements come from the 2-D S\\'ersic fits performed by~\\citet{Kelvin2012} on SDSS data. We find the distributions of the two samples are similar, suggesting that the analysis sample is representative of the parent sample.}\n \\label{fig:samples_hist}\n\\end{figure*}\n\nFigure~\\ref{fig:samples_hist} displays the distributions of galaxy properties for both the parent and analysis galaxy samples. The analysis sample contains galaxies that have passed all the quality cuts and for which the Dragonfly photometry has been successfully measured in both bands. While the analysis sample only contains $\\sim 30\\%$ of the parent sample, the distributions of galaxy properties are similar.
This implies that we have not biased the sample significantly while performing the quality cuts described in the method above, and that it is representative of the parent sample.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.85\\textwidth]{intro_sbp.pdf}\n \\caption{Imaging of example galaxies from \\textit{DECaLS} and Dragonfly. All 3 galaxies have $\\log M_* \/ M_\\odot \\sim 11$ and $z = 0.19,0.15,0.12$ from left to right. \\textbf{Top}: False color image based on \\textit{DECaLS} $g$, $r$ and $z$ imaging. \\textbf{Second Row:} \\textit{DECaLS} $r$ band image with logarithmic scaling to highlight the outskirts of the galaxy. White displays 23 mag arcsec$^{-2}$ and black shows 32 mag arcsec$^{-2}$. \\textbf{Third Row:} Dragonfly $r$ band images after performing the \\texttt{mrf} procedure, with scaling matched to the surface brightness limits of the \\textit{DECaLS} image above. Blue contours are placed every $7.5$ arcsec (3 DF pixels) and are matched between the two images. \\textbf{Bottom}: Background subtracted surface brightness profiles measured from the Dragonfly data. The grey line shows an example of the Dragonfly PSF.}\n \\label{fig:sbp}\n\\end{figure*}\n\n\\section{Photometry measurements from Dragonfly}\n\\label{sec:results}\nIn Figure~\\ref{fig:sbp} we show some example galaxies in both high-resolution and Dragonfly data. We show $grz$ false color images along with logarithmically stretched deep $r$ band images from DECaLS. The false color image is zoomed to roughly 1 arcmin per side while the $r$ band images are 2 arcmin per side. To compare directly, we plot the Dragonfly $r$ band images using a logarithmic stretch matching the surface-brightness limits of the DECaLS image. The isophotes are matched across the two images.\n\nComparing the DECaLS and Dragonfly images illustrates two important points. First, Dragonfly has a much poorer spatial resolution compared to most modern optical surveys. Therefore, we are not able to spatially resolve the centers of galaxies on scales $\\leq 5^{ \\prime \\prime}$. However, the strength of Dragonfly is in studying the low surface-brightness outskirts of these galaxies. Even visually, one can see that the galaxies in the Dragonfly images are more extended, as a result of the superior low surface-brightness performance compared to the high-resolution data.\n\nThe final row shows the $g$ and $r$ band surface brightness profiles measured in the DWFS, along with an example of the $g$ band PSF. In detail, the PSF varies between the bands, different fields and different observing nights (see Liu et al. in prep), but it remains qualitatively similar. Comparing the galaxy profiles to the PSF further demonstrates that many of the galaxies in our sample are barely resolved. However, this should not affect our measurements of the total flux. One of the advantages of Dragonfly is that it has an extremely well controlled PSF with very little scattered light at large radii; therefore we are able to recover the total flux of galaxies even if they are not well resolved.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\columnwidth]{f_extrap_both.pdf}\n \\caption{The fraction of the total flux contained in the exponential extrapolation, $f_{\\rm extp}$, as a function of the DWFS measured magnitude for both the $g$ and $r$ bands. In both bands $f_{\\rm extp}$ approximately follows a log-normal distribution with a mean of 10$^{-2.5}$ and width of 0.6 dex.
}\n \\label{fig:f_extp}\n\\end{figure}\n\nAnother way to assess our pipeline is to investigate the fraction of the total flux contained in the exponential extrapolation, $f_{\\rm extp}$. Ideally this will be very small, so that the details of the extrapolation do not affect the total flux measurement significantly. In Figure~\\ref{fig:f_extp} we show the $g$ and $r$ band extrapolated fractions as a function of the observed magnitude.\n\nIn both the $g$ and $r$ bands, $f_{\\rm extp}$ has a log-normal distribution with a mean of roughly $10^{-2.5}$ and a standard deviation of 0.6 dex. Interestingly, there does not appear to be a strong correlation with the observed magnitude of the galaxy. $f_{\\rm extp}$ is generally below 1\\%, implying that we are directly measuring more than 99\\% of the total flux of a galaxy with Dragonfly. This result is somewhat dependent on the form of the extrapolation, but it is nonetheless encouraging. \nWe make the catalog of photometry measurements for the final analysis sample publicly available online\\footnote{\\url{https:\/\/tbmiller-astro.github.io\/data\/}}.\n\n\\section{Comparing Dragonfly photometry to GAMA}\n\\label{sec:res_GAMA}\n\nIn this section we compare the photometry measured in the DWFS to that measured by GAMA DR3. We compare the DWFS total flux measurements to GAMA measurements that use S\\'ersic photometry truncated at $10\\ r_{\\rm eff}$. We also compare the Dragonfly color, measured using the total fluxes, to the \\texttt{AUTO} color~\\citep{Kelvin2012,Driver2016}. While the database contains more sophisticated \\texttt{LAMBDAR} photometric measurements that perform better in the UV and IR, where the resolution differs greatly from the optical, for the $g$ and $r$ colors they produce nearly identical results to the \\texttt{AUTO} measurements~\\citep{Wright2016}. We have chosen to focus on these methods of measuring the total flux and color as they form the basis of the stellar mass measurements used to calculate the SMF in \\citet{Baldry2012} and \\citet{Wright2017}.\n\n\\subsection{Comparing the total flux to S\\'ersic Photometry}\n\\label{sec:res_sersic}\nThe first comparison we make is between the total flux measured by Dragonfly and the S\\'ersic fits performed by \\citet{Kelvin2012}. The single component S\\'ersic models were found by running \\texttt{GALFIT} on SDSS imaging data. The $r$ band S\\'ersic model, truncated at $10\\ r_{e}$, is used by \\citet{Baldry2012} and \\citet{Wright2017} as the total flux normalization of the stellar masses of galaxies measured from SED fitting. Here we compare to the truncated S\\'ersic magnitudes in the $g$ and $r$ bands. We will refer to these magnitudes as $g_{\\rm GAMA}$ and $r_{\\rm GAMA}$.\n\nIn Figure~\\ref{fig:dmag} we compare DWFS to the GAMA S\\'ersic photometry for both the $g$ and $r$ bands. In both bands we find that, on average, galaxies are brighter in Dragonfly, i.e. have negative $m_{\\rm DF} - m_{\\rm GAMA}$. In the $r$ band, we find there is little dependence of $r_{\\rm DF} - r_{\\rm GAMA}$ on observed magnitude. In the $g$ band the difference grows for galaxies fainter than $g_{\\rm DF} \\approx 18.5$: for brighter galaxies there is no dependence on $g_{\\rm DF}$, but for fainter galaxies $g_{\\rm DF} - g_{\\rm GAMA}$ continues to decrease.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.98\\textwidth]{dmag_gama_comb.pdf}\n \\caption{Comparison of Dragonfly to S\\'ersic photometry performed by~\\citet{Kelvin2012} as part of the GAMA survey.
We show the distribution of the magnitude differences between our Dragonfly and the GAMA magnitudes as a function of the Dragonfly magnitude. The red squares show the median magnitude difference as a function of Dragonfly magnitude. Both bands follow a roughly Gaussian distribution. The black dotted line in each histogram panel shows the result of a Gaussian fit to the distribution, with parameters displayed in each panel. On average, galaxies are brighter in Dragonfly compared to the GAMA S\\'ersic measurements.}\n \\label{fig:dmag}\n\\end{figure*}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.99 \\columnwidth]{dmag_n_galfit.pdf}\n \\caption{The difference between Dragonfly and GAMA S\\'ersic photometry as a function of S\\'ersic index. The S\\'ersic index is calculated in \\citet{Kelvin2012} using SDSS data. The red squares show the median magnitude difference as a function of S\\'ersic index. There is no significant trend with S\\'ersic index.}\n \\label{fig:dmag_n}\n\\end{figure}\n\nThe overall distributions of $g_{\\rm DF} - g_{\\rm GAMA}$ and $r_{\\rm DF} - r_{\\rm GAMA}$ are also shown in Figure~\\ref{fig:dmag}. Each distribution is roughly Gaussian, with a mean of $-0.12$ mag in the $g$ band and $-0.05$ mag in the $r$ band. The widths of the two distributions are similar, with $\\sigma = 0.10$ and $\\sigma = 0.09$ mag for the $g$ and $r$ bands, respectively.\n\nSince we are using an exponential extrapolation (see Sec.~\\ref{sec:method}), one might expect there to be a systematic bias as a function of the S\\'ersic index, such that high-$n$ galaxies have their total flux underestimated. On the other hand, possible truncations in the light profile \\citep{Pohlen2000,Trujillo2005} could work in the opposite direction. To investigate whether there is a correlation with a galaxy's structure, in Figure~\\ref{fig:dmag_n} we show how the difference between Dragonfly and GAMA S\\'ersic photometry depends on the measured S\\'ersic index of a galaxy. We do not find a significant trend, suggesting our method is robust against such a systematic effect. Although our exponential extrapolation does not match the true profile of high-$n$ galaxies, since $f_{\\rm extp}$ is $< 10^{-2}$ for many of our galaxies this potential bias is minimized.\n\n\n\\subsection{Comparing $g-r$ colors to aperture photometry}\n\\label{sec:res_auto}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\columnwidth]{dcol_comb.pdf}\n \\caption{Comparison of Dragonfly measured $g-r$ colors to GAMA colors measured by~\\citet{Driver2016} using the aperture matched \\texttt{AUTO} method on SDSS data. The red line displays the running median color difference as a function of $(g-r)_{\\rm DF}$. The black dashed line in the right panel shows the result of a Gaussian fit to the distribution, with parameters $\\mu = -0.05$ and $\\sigma = 0.08$. In general, Dragonfly measures bluer colors compared to the aperture matched technique.}\n \\label{fig:dcol}\n\\end{figure}\n\n\nHere we compare the Dragonfly measured $(g-r)$ color to aperture matched photometry. The Dragonfly colors are calculated using the total flux measurements. We compare to the GAMA \\textit{SExtractor} \\texttt{AUTO} photometry performed on SDSS images~\\citep{Bertin1996, Driver2016}. The flux is measured across the multiple bands within the Kron radius of the $r$-band image, typically $1\\text{--}1.5\\ r_{\\rm eff, r}$~\\citep{Kron1980}.
This color will be referred to as $(g-r)_{\\rm GAMA}$.\n\n\nThe difference between GAMA and Dragonfly measured $g-r$ colors is shown in Figure~\\ref{fig:dcol}. We find there is little dependence of $(g-r)_{\\rm DF} - (g-r)_{\\rm GAMA}$ on the observed color. While there may appear to be a correlation, especially at the extremes of the distribution, it is important to remember that $(g-r)_{\\rm DF}$ is plotted on both the $x$ and $y$ axes. Therefore this apparent correlation is likely caused by outliers in the $(g-r)_{\\rm DF}$ distribution and does not reflect an inherent relationship. Also shown is the distribution of $(g-r)_{\\rm DF} - (g-r)_{\\rm GAMA}$ for all galaxies in the analysis sample. The distribution appears relatively Gaussian with a mean of $-0.06$ mag, consistent with the difference between the $g_{\\rm DF} - g_{\\rm GAMA}$ and $r_{\\rm DF} - r_{\\rm GAMA}$ offsets shown in Figure~\\ref{fig:dmag}.\n\nInterestingly, the width of the distribution of $(g-r)_{\\rm DF} - (g-r)_{\\rm GAMA}$ is smaller than that of $g_{\\rm DF} - g_{\\rm GAMA}$ or $r_{\\rm DF} - r_{\\rm GAMA}$ individually. While not explicitly measuring the same thing, this implies there is some correlation between $g_{\\rm DF} - g_{\\rm GAMA, AUTO}$ and $r_{\\rm DF} - r_{\\rm GAMA, AUTO}$. Indeed, we calculate the Pearson correlation coefficient between $g_{\\rm DF} - g_{\\rm GAMA, AUTO}$ and $r_{\\rm DF} - r_{\\rm GAMA, AUTO}$ to be 0.4, suggesting a moderate correlation. \n\nTo gain insight into what is causing the difference between the Dragonfly and GAMA colors, we show color profiles measured by Dragonfly. Here we focus on galaxies with S\\'ersic measured $r_{\\rm eff, r} > 3\\rm \\ arcsec$ (compared to the median value of $\\sim 1.5\\rm \\ arcsec$). Since the Dragonfly PSF HWHM is $\\sim 2.5$ arcsec, the profiles shown should be well resolved at $r \\gtrsim r_{\\rm eff, r}$, but it is important to keep in mind that these profiles are not deconvolved from the PSF.\n\nColor profiles measured by Dragonfly are shown in Figure~\\ref{fig:mu_comp}. For each galaxy we normalize the color profile by $(g-r)_{\\rm GAMA}$ and the radius by $R_{\\rm eff, r}$. At small radii, all of the Dragonfly measurements agree with the GAMA colors. At larger radii the scatter in the individual color profiles grows. We also show the median normalized profiles of galaxies binned by the difference in total color, $(g-r)_{\\rm DF} - (g-r)_{\\rm GAMA}$. Galaxies where the integrated Dragonfly color is bluer (i.e. negative $(g-r)_{\\rm DF} - (g-r)_{\\rm GAMA}$) generally have negative color gradients at $R > R_{\\rm eff}$. Conversely, galaxies with redder Dragonfly colors have positive color gradients.\n\nThe presence of these color gradients, and their correlation with the difference in total color, provides an explanation for the difference between the GAMA colors and the Dragonfly colors. The GAMA colors are measured within the Kron radius of the SDSS images, which is typically 1--1.5 $r_{\\rm eff, r}$. This means they do not capture the effect of the color gradients. In massive galaxies these color gradients are generally negative, and Dragonfly is better able to capture their full effect; therefore Dragonfly colors are on average bluer.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.99 \\columnwidth]{norm_gmr_median_legend.pdf}\n \\caption{Normalized color profiles measured by Dragonfly.
The color profile of each galaxy is normalized by its effective radius and its GAMA measured color. Thin grey lines show individual profiles and the dotted line shows the median color profile of the entire sample. The colored lines show median normalized profiles of galaxies binned by the difference between Dragonfly and GAMA measured total color. Galaxies which are bluer in Dragonfly, i.e., have a negative color difference, show a negatively sloped color profile, and vice versa for galaxies which are redder in Dragonfly. Note that this analysis only includes galaxies with $r_{\\rm eff, r} > 3\\rm \\ arcsec$.}\n \\label{fig:mu_comp}\n\\end{figure}\n\n\\section{Implications for Stellar Mass Estimates}\n\\label{sec:res_sm}\n\\subsection{Deriving corrections to GAMA stellar mass estimates}\nIn this section we investigate the effects that the observational differences discussed in Sections~\\ref{sec:res_sersic} and~\\ref{sec:res_auto} have on the estimate of the total stellar mass of a galaxy. We consider two effects that alter the stellar mass estimate based on the Dragonfly observations. The first is the change in the total luminosity, and the second is the change in mass-to-light ratio due to the change in color.\n\nThe first effect is relatively straightforward to account for. We simply compare the total flux observed by Dragonfly in the $r$ band to that measured by GAMA. Again, this is the flux contained within $10\\, r_{\\rm eff}$ of the $r$ band single component S\\'ersic model~\\citep{Kelvin2012}. The ratio of $r$ band fluxes then directly translates into the ratio of total luminosities.\n \n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.99\\columnwidth]{ML_gmr_log.pdf}\n \\caption{The mass-to-light ratio for our sample of galaxies as measured by~\\citet{Taylor2011}, as a function of the observed $g-r$ color. We fit a log-linear relation, with a term for redshift evolution, to these data; the fit is shown by the black dashed lines. The best fit parameters are given in Eqn.~\\ref{eqn:MLr}.}\n \\label{fig:ML_gmr}\n\\end{figure}\n \nThe second effect is more challenging to account for, as the mass-to-light ratio is usually calculated using SED fitting of many photometric bands, whereas Dragonfly only measures $g$ and $r$ band photometry. To approximate this, we assume a log-linear relationship between the mass-to-light ratio and the $g-r$ color. We fit a linear relation between $\\log M\/L_r$, as calculated in \\citet{Taylor2011}, and the $(g-r)_{\\rm GAMA}$ color that was used to derive it. Since we are using the observed color, we also introduce a redshift term which accounts for the shifting bandpass. The data and fit are shown in Figure~\\ref{fig:ML_gmr}. The best fit relation is:\n \n \\begin{equation}\n \\log M_* \/ L_r = -0.12\\, (1\\, +\\, 15.7\\, z) + 0.74\\, (g-r)_{\\rm obs.}\n \\label{eqn:MLr}\n \\end{equation}\n\nIn this form, the difference between two mass-to-light ratios at fixed redshift depends only on the slope of the relation. Comparing the mass-to-light ratios implied by two different colors, $(g-r)$ and $(g-r)^{\\prime}$, we find:\n\n \\begin{equation}\n \\frac{ {M_* \/ L_r}_{(g-r)^\\prime} } { {M_* \/ L_r}_{(g-r)} } = 10^{ 0.74\\, \\Delta (g-r)}\n \\end{equation}\n\nHere, $\\Delta (g-r) = (g-r)^\\prime - (g-r)$ represents the difference between the colors. Using this relation we can directly compare the mass-to-light ratios inferred from Dragonfly colors and from other measurements.
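\n\nFor concreteness, a minimal sketch of how these two relations are applied in practice is given below. The function names and inputs are hypothetical placeholders; the coefficients are simply those of Eqn.~\\ref{eqn:MLr}.\n\n\\begin{verbatim}\ndef log_ml_r(gmr_obs, z):\n    # Log-linear fit of Eqn. (MLr):\n    # log M*/L_r = -0.12 (1 + 15.7 z) + 0.74 (g-r)_obs\n    return -0.12 * (1.0 + 15.7 * z) + 0.74 * gmr_obs\n\ndef ml_ratio(gmr_prime, gmr):\n    # Ratio of the mass-to-light ratios implied by two colors at the\n    # same redshift; the intercept and redshift terms cancel, so only\n    # the slope (0.74) enters.\n    return 10.0 ** (0.74 * (gmr_prime - gmr))\n\\end{verbatim}\n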
\\begin{figure*}\n \\centering\n \\includegraphics[width = \\textwidth]{M_ratio_hist.pdf}\n \\caption{The effect of Dragonfly photometry on the GAMA stellar mass estimates. We show the distribution of differences in total luminosity (left), implied mass-to-light ratio (middle) and the combined effect of the two on the total stellar mass (right). Dashed lines show Gaussian fits to each distribution, with the parameters listed in each panel.}\n \\label{fig:sm_ratio}\n\\end{figure*}\n\nFigure~\\ref{fig:sm_ratio} shows how the Dragonfly measurements affect the estimate of the total stellar mass. We show the distributions of total luminosity and mass-to-light ratio, comparing Dragonfly to the values implied by the GAMA measurements. We also show the total effect on the stellar mass measurement when accounting for both effects. The difference in $r$ band magnitude results in the Dragonfly estimate of the total luminosity being 5\\% higher, whereas the mass-to-light ratio inferred by Dragonfly is 10\\% lower due to the bluer colors. These two results oppose each other, and the total effect is to lower the stellar mass by $7\\%$ compared to the methods used by the GAMA survey. In the final panel we display the standard deviation of the $M_* ({\\rm DF}) \/ M_* ({\\rm GAMA})$ distribution, $\\sigma = 0.149$. Also shown is the error on the mean, $\\sigma_{\\bar{x}} = \\sigma \/ \\sqrt{N}$, where $N$ is the total number of galaxies in our sample. For the $M_* ({\\rm DF}) \/ M_* ({\\rm GAMA})$ distribution, $\\sigma_{\\bar{x}} = 0.004$.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = \\textwidth]{Delta_phys_mstar.pdf}\n \\caption{Observational differences between the DWFS and GAMA as a function of the GAMA stellar mass (left) and redshift (right). The grey regions show the $2\\sigma_{\\bar{x}}$ range of the combined effect on the stellar mass in each stellar mass or redshift bin.}\n \\label{fig:sm_phys}\n\\end{figure*}\n\nThe median of these ratios for galaxies in different stellar mass and redshift bins is shown in Figure~\\ref{fig:sm_phys}. The median of $M_* ({\\rm DF}) \/ M_* ({\\rm GAMA})$ appears to increase slightly up to $\\log M_* \/ M_\\odot \\approx 11.2$. At higher masses it appears to remain constant, but there are few galaxies in this mass range so we cannot confirm this trend with confidence. This trend is driven mostly by a change in mass-to-light ratio. A possible explanation is that the slopes of the color gradients, which are responsible for the difference between the GAMA and Dragonfly mass-to-light ratios, vary systematically with stellar mass~\\citep{Wang2019,Suess2020}. These ratios also depend on redshift. While the ratio of total luminosities increases slightly with redshift, this change again appears to be mainly due to the difference in mass-to-light ratio, which evolves more rapidly. The reason for this rapid evolution at $z>0.15$ is not immediately clear. One possible explanation is a simple difference in signal-to-noise ratio. At higher redshifts, the surface brightness profiles outside of $r_{\\rm eff}$ may be below the noise limit of SDSS. If more of the profile is below the noise limit, the failure to account for color gradients becomes more pronounced, increasing the difference between the GAMA aperture colors and Dragonfly colors.
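\n\nPutting the pieces of this subsection together, the following is a minimal sketch of how the per-galaxy correction and the summary statistics quoted above can be computed. The input magnitudes and colors are toy values chosen only for illustration, not actual measurements.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef mass_ratio(r_df, r_gama, gmr_df, gmr_gama):\n    # Luminosity ratio from the r-band magnitude difference, multiplied\n    # by the mass-to-light correction implied by the color difference.\n    lum_ratio = 10.0 ** (-0.4 * (np.asarray(r_df) - np.asarray(r_gama)))\n    ml_corr = 10.0 ** (0.74 * (np.asarray(gmr_df) - np.asarray(gmr_gama)))\n    return lum_ratio * ml_corr\n\n# Toy example: two galaxies, each slightly brighter and bluer in Dragonfly.\nratios = mass_ratio([17.00, 18.10], [17.05, 18.14], [0.64, 0.71], [0.70, 0.75])\nmu = ratios.mean()\nerr_mean = ratios.std() / np.sqrt(ratios.size)  # error on the mean\n\\end{verbatim}\n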
\\subsection{Effect on the measured SMF}\n\\label{sec:smf}\nIn this section we investigate how the systematic differences in stellar mass estimates, highlighted in Figures~\\ref{fig:sm_ratio} and~\\ref{fig:sm_phys}, affect the stellar mass function. Since our analysis sample is incomplete due to a complex series of quality cuts, we cannot simply re-measure the SMF using traditional methods and our updated measurements. Instead we calculate the effects implied by the updated photometry using a previously measured SMF. We use the double Schechter fit to the bolometric masses found in~\\citet{Wright2017} using data from the GAMA survey. \\citeauthor{Wright2017} use LAMBDAR photometry in SED fitting to calculate the mass-to-light ratio normalized to the total flux of the $r$-band S\\'ersic model. Using their measured SMF as the probability distribution, we randomly sample a set of $10^7$ galaxies. We then apply a multiplicative ``Dragonfly correction'' to each galaxy's stellar mass based on the results in Figure~\\ref{fig:sm_ratio}. For simplicity, we assume that this correction is independent of stellar mass. We then re-measure the SMF based on the ``corrected'' mass measurements and compare to the original SMF.\n\nWe use two different procedures based on the interpretation of the width of the $\\rm M_*(DF)\/M_*(GAMA)$ distribution. If the width of this distribution is caused largely by observational errors, then the mean value should be applied as the correction to all galaxies. Conversely, if the width is entirely caused by intrinsic variation within the galaxy population, then it would be correct to apply a different correction to each galaxy, with each correction drawn from the distribution shown. In other words, the former is akin to treating the correction as a systematic error, whereas the latter is akin to a random error. The truth is likely somewhere in between these two cases. Since the origin of the uncertainties on our total flux measurements is unclear (see Section~\\ref{sec:vtest}), we cannot discriminate between these two scenarios and therefore consider both as limiting cases.\n\nIn the first scenario we apply a single correction to all galaxies. This procedure is repeated $10^3$ times, drawing the correction from a Gaussian distribution with $\\mu = 0.931$ and $\\sigma = 0.004$, the error on the mean measured from the distribution of $\\rm M_*(DF)\/M_*(GAMA)$. In the second case we apply a different multiplicative correction to each galaxy. These corrections are drawn from a Gaussian distribution with $\\mu = 0.931$ and $\\sigma = 0.15$, again derived from the results in Figure~\\ref{fig:sm_ratio}. This process is also repeated $10^3$ times. For each scenario we show the median and 5\\%--95\\% percentile range of the DF-corrected SMF at a given stellar mass in Figure~\\ref{fig:SMF_comp}.\n\nBoth procedures show a relatively minor effect on the stellar mass function. At stellar masses less than $10^{11}\\ M_\\odot$ the effect is less than $5\\%$, and it only reaches a maximum of 30\\% at $M_* \\sim 10^{11.75}\\ M_\\odot$. Such a change does not significantly alter the overall shape of the SMF. The two limiting cases have slightly different effects, with the intrinsic scatter scenario resulting in a smaller difference overall.
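\n\nA minimal sketch of this Monte Carlo exercise is given below. The double Schechter parameters here are illustrative placeholders rather than the \\citet{Wright2017} fit, and the sample size is smaller than the $10^7$ galaxies used in the text.\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef double_schechter(logm, logm_star, phi1, alpha1, phi2, alpha2):\n    # Double Schechter function per dex:\n    # Phi(logM) = ln(10) exp(-x) [phi1 x^(alpha1+1) + phi2 x^(alpha2+1)],\n    # with x = M / M_star.\n    x = 10.0 ** (logm - logm_star)\n    return np.log(10.0) * np.exp(-x) * x * (phi1 * x**alpha1 + phi2 * x**alpha2)\n\n# Placeholder parameters, for illustration only.\npars = dict(logm_star=10.8, phi1=3e-3, alpha1=-0.6, phi2=1e-3, alpha2=-1.5)\n\n# Draw masses from the SMF evaluated on a grid.\ngrid = np.arange(9.0, 12.5, 0.01)\nweights = double_schechter(grid, **pars)\nlogm = rng.choice(grid, size=10**6, p=weights / weights.sum())\n\n# Scenario 1: one systematic correction for the whole sample.\nlogm_sys = logm + np.log10(rng.normal(0.931, 0.004))\n\n# Scenario 2: an independent correction for each galaxy.\ncorr = np.clip(rng.normal(0.931, 0.15, size=logm.size), 1e-3, None)\nlogm_rand = logm + np.log10(corr)\n\n# Histogramming logm_sys and logm_rand and dividing by the histogram of\n# logm gives the ratio of the corrected SMF to the input SMF.\n\\end{verbatim}\n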
We have parameterized the Dragonfly correction to the \\citet{Wright2017} SMF in Appendix~\\ref{sec:smf_param}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\columnwidth]{SMF_comp.pdf}\n \\caption{The top panel displays the SMF measured by~\\citet{Wright2017} using GAMA data and a modified SMF simulating the effects of Dragonfly-like photometry. We do this using two methods, assuming that the width of the $\\rm M_*(DF)\/M_*(GAMA)$ distribution shown in Figure~\\ref{fig:sm_ratio} is driven by either intrinsic scatter or observational uncertainty. The bottom panel shows the ratio of the simulated Dragonfly corrected SMF to the original \\citeauthor{Wright2017} measurement. The grey region shows the 16th--84th percentile range of $10^4$ bootstrapping samples.}\n \\label{fig:SMF_comp}\n\\end{figure}\n\n\n\\section{Comparison to other methods}\n\\label{sec:meth_comp}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.95\\textwidth]{obs_comp_mag.pdf}\n \\caption{Comparing Dragonfly measured $r$ band magnitudes to various methods performed on SDSS imaging. We compare to \\texttt{model}, \\texttt{cmodel} and \\texttt{Petro} measurements produced by the SDSS pipeline, along with S\\'ersic or bulge + disk decompositions performed by \\citet{Meert2015} and \\citet{Simard2011}.}\n \\label{fig:obs_comp_mag}\n\\end{figure*}\n\nAs the SDSS data are publicly available, many studies have re-analyzed the raw imaging data. We compare our total flux measurements to \\citet{Simard2011} and \\citet{Meert2015}. Both apply updated sky-subtraction algorithms and fit bulge + disk decompositions or single component S\\'ersic models to extract the photometry of galaxies. \\citet{Meert2015} parameterize the bulge as a S\\'ersic profile, with $n$ as a free parameter, while the disk is fixed as an exponential. For the \\citet{Simard2011} deVExp measurements, the bulge is fixed as a de Vaucouleurs profile and the disk is fixed as an exponential. The \\citet{Meert2015} SerExp photometry measurements are employed as the total flux measurements used to calculate the SMF in \\citet{Bernardi2013} and \\citet{Bernardi2017}. The \\citet{Simard2011} deVExp measurements are used to calculate the stellar masses used by \\citet{Thanjavur2016} to measure the SMF. Additionally, we compare to the \\texttt{model}, \\texttt{cmodel} and \\texttt{Petro} measurements from the SDSS photometric catalog~\\citep{Ahn2014}. The \\texttt{model} photometry is often used across multiple bands when performing SED fitting, while the \\texttt{cmodel} and \\texttt{Petro} measurements have been employed as measurements of total flux~\\citep{Bell2003,Li2009,Moustakas2013}.\n\nComparisons between their photometry and our Dragonfly photometry are shown in Figure~\\ref{fig:obs_comp_mag}. The total magnitudes from the bulge + disk decompositions by \\citet{Meert2015} and \\citet{Simard2011} both agree well with our measurements. However, the magnitude difference between the Dragonfly and \\citet{Simard2011} measurements appears to decrease for brighter galaxies. The single component S\\'ersic models by both of these studies are brighter on average than the Dragonfly measurements. These are the total fluxes of the S\\'ersic models (i.e., integrated to infinity), which may explain the difference from the GAMA measurements, which are truncated at $10\\, r_{\\rm eff}$.
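\n\nTo illustrate how much this matters, a minimal sketch of the analytic S\\'ersic curve of growth is given below; it estimates the fraction of the model flux that lies beyond $10\\, r_{\\rm eff}$ as a function of S\\'ersic index. The approximation used for $b_n$ is the standard one, assumed valid for roughly $0.5 \\lesssim n \\lesssim 10$.\n\n\\begin{verbatim}\nfrom scipy.special import gammainc\n\ndef sersic_enclosed_fraction(n, r_over_reff):\n    # Fraction of the total flux of a Sersic profile enclosed within\n    # r_over_reff effective radii:\n    # F(<R)/F_tot = P(2n, b_n (R/r_e)^(1/n)),\n    # with b_n ~ 1.9992 n - 0.3271 (Capaccioli 1989 approximation).\n    b_n = 1.9992 * n - 0.3271\n    return gammainc(2.0 * n, b_n * r_over_reff ** (1.0 / n))\n\nfor n in (1, 2, 4, 6, 8):\n    outside = 1.0 - sersic_enclosed_fraction(n, 10.0)\n    print(f'n = {n}: flux beyond 10 r_eff = {outside:.3f}')\n\\end{verbatim}\n\nFor exponential-like profiles the flux beyond $10\\, r_{\\rm eff}$ is negligible, while for $n \\gtrsim 4$ it reaches the few per cent level, consistent with the sense of the difference described above.\n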
The \\texttt{model}, \\texttt{cmodel} and \\texttt{Petro} magnitudes reported in the SDSS database severely underestimate the flux measured by Dragonfly, by up to 0.3 mag for the brightest galaxies, echoing the results of previous studies~\\citep{Bernardi2013,Dsouza2015}. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.95\\columnwidth]{obs_comp_col.pdf}\n \\caption{Dragonfly colors compared to the SDSS \\texttt{model} and \\texttt{cmodel} measurements and the deVExp measurements performed by \\citet{Simard2011}. Similar to the aperture matched measurements, Dragonfly measures bluer colors than these other methods based on SDSS imaging.}\n \\label{fig:obs_comp_col}\n\\end{figure}\n\nIn addition to aperture photometry, the \\texttt{model} and \\texttt{cmodel} photometry provided by the SDSS pipeline is also commonly used in SED fitting to calculate stellar masses~\\citep{Li2009,Moustakas2013,Bernardi2013}. \\citet{Mendel2014} use the \\citet{Simard2011} deVExp decompositions in their SED fitting. Figure~\\ref{fig:obs_comp_col} shows how these methods compare to our Dragonfly color measurements. Similar to the GAMA measurements, the Dragonfly $g-r$ colors are on average bluer than these SDSS-based measurements. The mean color difference is $-0.09$ mag and $-0.04$ mag when comparing the Dragonfly colors to the \\texttt{model} and \\texttt{cmodel} colors, respectively. The Dragonfly colors are also bluer on average than the \\citet{Simard2011} two component decompositions, with a mean color difference of $-0.06$ mag. Unlike the \\texttt{AUTO} method employed by GAMA, these are not aperture measurements, so the cause of this discrepancy cannot be attributed as simply to the color gradients described above. \n\n\n\\section{Discussion} \\label{sec:disc}\nIn this study we present photometry of galaxies measured in the DWFS and compare to previous methods for measuring the photometry of massive galaxies at $\\log M_*\/ M_\\odot > 10.75$. Our aim was to develop a flexible and non-parametric method which takes advantage of Dragonfly's low surface-brightness sensitivity. We then compare the total flux and color measurements to results from GAMA, derived from SDSS imaging, and to other methods which use the same imaging dataset. Our measurements provide an independent test of the photometric methods which are currently used for massive galaxies.\n\nPerhaps the most interesting result of this study is that the S\\'ersic or SerExp models performed by \\citet{Simard2011}, \\citet{Kelvin2012} and \\citet{Meert2015} generally match our Dragonfly total flux measurements well. This mirrors previous results from studies with Dragonfly~\\citep{vanDokkum2014,Merritt2016}, which often find a smaller than expected stellar halo around nearby spiral galaxies, the so-called missing outskirts problem~\\citep{Merritt2020}. On average we measure profiles out to $10\\, r_{\\rm eff}$, or roughly 15\\%--20\\% of $r_{\\rm vir}$, and down to $\\mu_g \\approx 31\\ \\rm mag\\ arcsec^{-2}$. Yet, we find little evidence for significant variation ($\\gtrsim 5\\%$) from a S\\'ersic or S\\'ersic-Exponential profile. Given that simulations and empirical models suggest a large amount of mass in the IHL~\\citep{Pillepich2018,Behroozi2019}, this is perhaps surprising. In a recent study, \\citet{Merritt2020} suggest that the TNG100 cosmological simulation over-predicts the amount of light in the stellar halos of Milky Way mass galaxies.
In future studies we plan to examine the profiles of individual galaxies and compare them to theoretical models predicting the amount and extent of the diffuse stellar halo or IHL. \n\nOur method of integrating the 1-D surface brightness profile is similar to those applied by \\citet{Huang2018} and \\citet{Wang2019} to Hyper Suprime-Cam (HSC) imaging of massive galaxies. \\citeauthor{Huang2018} measure the profiles of individual massive galaxies at $z\\sim0.4$ out to, and beyond, 100 kpc. By integrating these profiles, the authors calculate the total flux. They find that using oversimplified assumptions or shallow imaging misses a significant fraction ($\\gtrsim 20 \\%$) of the total light. Moreover, this discrepancy depends on both the stellar mass and halo mass of the galaxy. \\citet{Wang2019} perform a stacking analysis to study the stellar halos of isolated central galaxies. Applying a similar method of integrating the 1-D profile, they find that the \\texttt{cmodel} method underestimates the total light of galaxies at $10 < \\log M_* \/ M_\\odot < 11.5$ by $\\lesssim 10\\%$ and, interestingly, that $\\sim 10\\%$ of the total light is beyond the noise limit of a single (non-stacked) HSC image. Our results generally agree with both of these studies in that the \\texttt{cmodel} method underestimates the total light by up to $20\\%$. \n\nOur results are also consistent with~\\citet{Bernardi2017}, who conclude that different methods of calculating total flux only alter the SMF at the level of $\\sim 0.1$ dex (or $\\sim 20\\%$). This is secondary, the authors argue, to differences caused by different treatments and assumptions of stellar populations used in SED fitting, which result in systematic variations in the SMF of order $\\lesssim 0.5$ dex. It is important to clarify that this is a separate issue from what we discuss in this study. \\citeauthor{Bernardi2017} discuss how different methods derive different mass-to-light ratios from, mostly, the same photometry, whereas we focus on how systematically biased photometry can alter the implied mass-to-light ratio. Along with the measurement of total flux, the accurate measurement of colors across multiple bands introduces an additional systematic issue into the stellar mass estimates of massive galaxies. Compared to several established techniques, Dragonfly consistently measures bluer colors, implying a lower mass-to-light ratio and therefore a lower stellar mass.\n\nWe show that the discrepancy between the Dragonfly and aperture measured colors is caused by color gradients. This is an inherent shortcoming of the aperture photometry technique. The bluer colors measured by Dragonfly are also consistent with the redder colors of bulges compared to disks~\\citep{Lackner2012} and with the general trend of negative color gradients in massive galaxies~\\citep{Saglia2000,LaBarbera2005,Tortora2010}. \\citet{Huang2018} and \\citet{Wang2019} also observe negative color gradients in massive galaxies out to $\\sim 50$ kpc. \\citeauthor{Wang2019} find some evidence of an upturn in the color profile at larger radii; however, the authors note that this result is sensitive to the details of PSF deconvolution and the masking of nearby sources. They additionally find that the gradient becomes shallower for more massive galaxies.
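\n\nTo make the profile-integration approach discussed above concrete, the following is a schematic curve-of-growth calculation for a 1-D surface brightness profile. It assumes circular annuli and hypothetical input arrays; it is not the actual DWFS pipeline, which also involves masking, sky subtraction, and an extrapolation beyond the last measured point.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef curve_of_growth(r, intensity):\n    # Enclosed flux F(<R) = 2 pi * integral of I(r) r dr for circular\n    # annuli, accumulated with the trapezoid rule.\n    integrand = 2.0 * np.pi * intensity * r\n    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)\n    return np.concatenate(([0.0], np.cumsum(steps)))\n\ndef converged_flux(r, intensity, tol=0.05):\n    # Total flux taken at the first radius where the enclosed flux grows\n    # by less than a fraction tol per step (a simple convergence criterion\n    # in the spirit of a curve-of-growth analysis); falls back to the\n    # outermost value if the criterion is never met.\n    cog = curve_of_growth(r, intensity)\n    growth = np.diff(cog) / np.maximum(cog[1:], 1e-30)\n    converged = np.where(growth < tol)[0]\n    return cog[converged[0] + 1] if converged.size else cog[-1]\n\\end{verbatim}\n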
On-going surveys such as \\textit{DECaLS}~\\citep{Dey2019}, \\textit{DES}~\\citep{DES2018}, HSC-SSP~\\citep{Aihara2019} and the upcoming Rubin Observatory~\\citep{Ivezic2019} are set to provide higher quality images over an area comparable to SDSS. It is unclear if current methods applied to these data will produce more accurate results. \\citet{Huang2018} and \\citet{Wang2019} compare their non-parametric measurements to the \\texttt{cmodel} method applied to HSC images. They find the HSC \\texttt{cmodel} magnitudes underestimate the total flux by $10\\% \\text{--} 25\\%$. This is likely due to the rigidity of the \\texttt{cmodel} parameterization, which does not match the surface brightness profiles of massive galaxies. In order to implement a non-parametric method, these surveys will need to measure galaxy surface brightness profiles down to $\\mu_r \\lesssim 30.5\\ \\rm mag\\ arcsec^{-2}$; we find that, on average, 99\\% of the total flux of a galaxy is brighter than this limit. Additionally, deeper data will allow colors to be measured within larger apertures, limiting the effect of color gradients. For example, \\citet{Bellstedt2020} calculate the flux using a curve of growth and define convergence when the flux changes by $<5\\%$, which might lead to uncertainties on the order of 5\\%. An additional issue with deeper data is that there is more contamination from neighboring objects, making de-blending more of a challenge.\n\nWhile we designed our method to be as non-parametric as possible, some extrapolation is still necessary. Injection-recovery tests performed in Section~\\ref{sec:method} show that our method accurately recovers the total flux of S\\'ersic-like profiles; however, real galaxies do not necessarily follow such profiles. If there is an over (under) abundance of light below our noise limit of 31 mag arcsec$^{-2}$ compared to these profiles, we could be systematically under (over) predicting the total flux of these galaxies. Currently, the only reliable method to probe below this limit is individual star counts in nearby halos; however, this is very resource intensive~\\citep{Radburn-Smith2011}. Using data from the GHOSTS survey, \\citet{Harmsen2017} show that the minor axis profiles of six nearby disk galaxies are consistent with a power law with a slope of $-2$ to $-3.7$, but with a large amount of intrinsic scatter, consistent with \\citet{Merritt2016} and \\citet{Merritt2020}. However, this is not a perfect comparison. With extremely deep exposures ($\\gtrsim 100$ hr) or stacking techniques it may be possible to reach below 31 mag arcsec$^{-2}$ with Dragonfly, but this too will require significant effort.\n\n\\section{Summary} \\label{sec:conc}\nIn this work we present measurements of the $g$ and $r$ band photometry of massive galaxies in the DWFS. We focus on galaxies with $\\log M_* \/ M_\\odot > 10.75$ at $0.1