diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzevgo" "b/data_all_eng_slimpj/shuffled/split2/finalzzevgo" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzevgo" @@ -0,0 +1,5 @@ +{"text":"\\section*{Experimental setup}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.85\\textwidth]{.\/Figs\/fig_1_setup.pdf}\n \\caption{\n \\textbf{Experimental setup. a,}\n Alice and Bob generate light beams from their local continuous-wave laser sources (LSs) and send them onto their beam splitters.\n From each beam splitter, one output beam goes to the Encoders and is used to perform TF-QKD.\n The other is used to lock the users' LSs through a service fibre, depicted in orange.\n Alice sends part of her light to Bob through the service fibre.\n Bob interferes it with his own, after shifting its frequency by 80~MHz through an acousto-optic modulator (AOM).\n The beating signal resulting from the interference is detected with a phase-sensitive detector (PSD) whose current is proportional to the phase difference between Alice's and Bob's light beams.\n An electronic feedback is then given to Bob's LS based on the detected difference to lock its phase to Alice's one.\n This constitutes a heterodyne optical phase-locked loop (OPLL).\n \\textbf{b,}\n In the Encoder modules, the continuous-wave light prepared locally by Alice's and Bob's LSs, seeds a gain-switched laser diode (LD) that carves it into pulses.\n The optical pulses are either rapidly modulated or finely controlled in phase by the phase modulators (PMs).\n After crossing the electrical polarisation controller (EPC) and the intensity controller (INT), part of the pulses is directed to the power detector (PD) for monitoring the intensity and the other part travels through the quantum channel towards Charlie's beam splitter (BS).\n Here, they interfere with the other user's pulses.\n The outcome of the interference is registered by the SNSPD detectors D1, D2 and D3.\n Variable optical attenuators (VOAs) add losses to the quantum channel.\n C, circulator; SF, spectral filter; FA, fixed attenuator; PL, polariser; PBS, polarising beam splitter.\n }\n \\label{fig:setup}\n\\end{figure*}\n\nIn our realisation of TF-QKD, we consider a generalised protocol (see Methods) that can be modified to encompass various TF-QKD protocols based on coherent states~\\cite{LYDS18,Tamaki.,Ma.,Wang.2018,CYW+18,CAL18}.\nTo experimentally validate these protocols and overcome the SKC$_0$ bound, Alice and Bob should use two separate lasers to prepare coherent states in a given phase and polarisation state, with various intensities.\nThe two separate lasers should be phase-locked to let the users reconcile their phase values.\nWe represent Alice's states as $\\ket{\\sqrt{\\mu_a} e^{i\\varphi_a}}$, where $\\mu_a$ is the intensity and $\\varphi_a \\in [0,2\\pi)$ is the phase.\nBob prepares similar states with the subscript $a$ replaced by $b$.\nThe phases $\\varphi_{a,b}$ include both the bit information and the random values needed in coherent-state TF-QKD.\nThe optical pulses emitted by the users should interfere with high visibility in the intermediate station after having travelled through a pair of highly lossy channels.\nHigh loss is needed to overcome the SKC$_0$~\\cite{LYDS18}.\nThe optical phase should remain stable in time, which is challenging when the channel loss reduces the amount of detected counts.\n\nWe implement these features using the experimental setup shown in Fig.~1.\nEach user is endowed with a 
continuous-wave laser source (LS).\nAlice's LS acts as the phase reference.\nIts light is split in two at a first beam splitter (BS).\nOne part is sent to Bob through a service fibre, depicted in orange in Fig.~\na, and is used to lock Bob's LS via a heterodyne optical phase-locked loop (OPLL~\\cite{Bordonalli.1999}, see Supplementary Section~1 for further details).\nIn Bob's module, the reference light interferes with another light beam prepared by Bob and shifted by 80~MHz by an acousto optic modulator (AOM).\nA photodiode acts as a phase-sensitive detector (PSD), whose intensity is mapped onto a phase difference $\\delta\\varphi$ between the reference light and Bob's local light.\nA feedback is then given to Bob's laser based on the value of $\\delta\\varphi$.\nWith this OPLL, Bob's laser is locked to Alice's with a phase error less than $5^{\\circ}$, which includes a potential phase fluctuation in the fibre connecting Alice to Bob.\nAn attacker could modify the reference light while it travels from Alice to Bob, but that would not affect the security of the scheme.\nAny modification would translate into a different value of $\\delta\\varphi$, which is equivalent to Eve introducing phase noise on the main channels going from the users to Charlie (see also~\\cite{Koa04} for a similar argument applied to QKD).\nHowever, we do not claim here the robustness of Alice's and Bob's modules to side-channel attacks, which requires more scrutiny, similarly to the one ongoing for the MDI-QKD sending modules.\n\nThe fraction of each user's light not involved in the phase-locking mechanism is directed to the Encoder, depicted in Fig.~1b.\nHere it enters the cavity of a slave laser diode (LD) that is periodically gain-switched to produce a pulse train at 2~GHz.\nThis ensures that each pulse will inherit the phase of the injected optical field, which is locked to the reference light.\nMoreover, Alice and Bob's LDs will emit pulses as narrow as \\SI{70}{ps} at 1548.92~nm, with high extinction ratio and constant intensity due to the strength of optical injection into the slave laser being 1,400 times weaker than the electrical injection, as we measured. After the LD, the optical pulses pass through an in-line phase modulator (PM), which applies fast modulation from an RF signal to encode the phase values required by the specific TF-QKD protocol, and a slow correction from a DC signal to compensate the phase noise on the paths linking to Charlie.\nAfter setting the optical pulses' polarisation and intensity, the pulses pass through 15~GHz filters that clean their spectral mode~\\cite{CLF+16a}, thus ensuring high visibility interference between the twin-fields. 
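As a point of reference for the visibilities quoted below (this idealised two-mode estimate is ours, assuming a lossless 50:50 beam splitter and perfectly matched spectral, temporal and polarisation modes), the intensities at the two outputs of Charlie's beam splitter are\n\begin{equation*}\nI_{1,2} \propto \mu_a + \mu_b \pm 2\sqrt{\mu_a \mu_b} \, \cos(\varphi_a - \varphi_b),\n\end{equation*}\nso that for $\mu_a = \mu_b$ the first-order interference visibility $(I_{\rm max}-I_{\rm min})\/(I_{\rm max}+I_{\rm min})$ would be unity; the measured visibilities reported in the Results quantify how closely the phase-locked twin fields approach this ideal.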
Then they are sent to variable optical attenuators (VOAs) that vary the loss of the channel connecting the users to Charlie.\n\nAlice and Bob's optical fields interfere on Charlie's BS and are eventually detected by superconducting nanowire single photon detectors (SNSPDs, Single Quantum EOS 410 CS) cooled at 3.2~K, featuring 22~Hz dark count rate and 44\\% detection efficiency.\nDetector D1 is associated with a 100~ps resolution time tagger and is used to extract the raw key rate.\nD2 monitors the optical field leakage into the non-intended polarisation, which is minimised by Alice and Bob through their polarisation controllers.\nD3 is sampled by a photon counter at a minimum interval of 10~ms to stabilise the overall phase.\n\n\\section*{Results}\n\nThe first task is providing the users with weak coherent pulses that are locked to a common phase reference and capable of interfering on Charlie's BS. Then part of these pulses have to be phase randomised with respect to the phase reference. In some TF-QKD protocols~\\cite{LYDS18,Tamaki.,Ma.,Wang.2018}, it is necessary that the users know the values of the random phases, whereas in others~\\cite{CYW+18,CAL18,LL18} this is not mandatory.\n\nIn the current setup we randomise the phase in an active way and obtain a first-order interference visibility at Charlie of 96.4\\% when the OPLL is active and the two PMs in Alice and Bob encode equal phases.\nWe encode a pseudo-random pattern containing $2^{10}$ symbols having $2^5$ modulation levels through the PMs driven by high-speed 12~GSa\/s digital-to-analogue converters (DACs) with 8-bit amplitude resolution.\nThe number of phases we chose is sufficiently close to a phase randomisation with infinite random phases~\\cite{CZLM15}.\nHowever, to further demonstrate a full phase randomisation, we performed a parallel experiment using a continuous phase randomisation from a master gain-switched laser~\\cite{JCS+11,YLD+14} (see Supplementary Section~2).\nFor that, we removed the OPLL and active phase randomisation from the setup, while leaving active phase encoding.\nThen we locked Alice and Bob's lasers to the main master laser by optical injection locking~\\cite{YPJ+03,CLF+16a} and obtained a visibility of 97.5\\%, 1.1\\% higher than in the previous case.\nWe attribute this difference to the absence of errors from the OPLL and the active phase randomisation.\nThis result shows that the overall visibility is not affected by the absolute number of encoded phases (it is higher with more phases than less) as much as by the components used to implement it.\nThe base quantum bit error rate (QBER) of the system remains in all cases smaller than 1.8\\% and there is no in-principle limitation to increasing the number of the encoded phases.\n\n\nThe ambient temperature fluctuations cause the experimentally obtained interference to drift.\nEven with 40~metres of optical fibre, the environmental fluctuations cause a relative phase drift in our setup of \\SI{0.7}{rad\/s}.\nThis requires a feedback control every 10 -- 100~ms to avoid detrimental effects on the QBER.\nTo implement this phase control, we doubled the pulse pattern to $2048$ bits and temporally interleaved phase-encoded pulses and unmodulated reference pulses, with equal duty cycles and intensities.\nThis is done by clocking the phase modulators in Alice and Bob's setup at 2~GHz and by actively switching between reference pulses and encoding pulses.\nThis reduces the effective clock rate of the TF-QKD protocol to 1~GHz.\nThe photon detection clicks are 
recorded by Charlie with time-tagging electronics and are grouped in post-processing according to their phase values.\nFrom these events we retrieve the gain and the QBER of the system (Supplementary Section~4).\nThe phase correction from reference pulses is designed to keep Alice and Bob's optical fields locked on Charlie's BS at a constant $\\pi\/2$ phase difference.\nThis is the most efficient solution as it exploits the linear part of the response function (see Supplementary Section~3 for further discussion).\nThe phase offset is continuously monitored by detector D3 and corrected by acting on the DC level of one of the PMs in the transmitting modules.\nThis is equivalent to having an extra PM in Charlie's station, as proposed in ref.~\\cite{LYDS18}.\n\nA main advantage of TF-QKD is the scaling property of the secret key rate with the square-root of the channel transmission, $\\eta^{1\/2}$.\nThis would be impossible without correspondingly having the square-root scaling of the detection rate.\nWe verified this essential feature of TF-QKD directly and summarised the result in Fig.~2\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{.\/Figs\/fig_2_gain_qber.pdf}\n \\caption{\n \\textbf{Gain and QBER of TF-QKD.}\n The gain is the detection probability per encoding gate.\n Gain and QBER are plotted against the channel loss $1-\\eta$.\n The equivalent fibre length on the top axis pertains to an ultra-low-loss (ULL) fibre with attenuation coefficient 0.16~dB\/km.\n The different scaling laws of the gain for a QKD-like single-path quantum transmission and for a TF-QKD-like double-path quantum transmission are apparent.\n The upward and downward triangular (square) points on the dotted (dashed) line are the single-path (double-path) experimental detection rate recorded for different channel loss.\n The circle points on the solid line are the double-path experimental QBER of the interfering pulses.\n All the experimental data agrees well with the theoretical curves.\n }\n \\label{fig:gain_TFQKDvsQKD}\n\\end{figure}\nThe data corresponding to a direct-link quantum transmission were taken by shutting off one arm of our experimental setup, thus allowing a single user at a time to signal to Charlie's station.\nThe data for double-path transmission, on the other hand, were taken with both arms open.\nAs is apparent from the figure, the single-path gain (triangular points on the dotted line) scales linearly with the loss, $1-\\eta$, whereas the double-path gain (square points on the dashed line) scales with the square-root.\nAt any given gain, the double-path TF-QKD can tolerate channel loss twice as large than single-path direct-link QKD.\nIn the same figure, we also report the experimental QBER of TF-QKD, which is composed of three main contributions: the quantum state preparation, detectors' dark counts and feedback routine.\nIn our setup, the last two terms significantly affect the overall QBER only at losses higher than 70~dB.\n\nOur experimental results are independent of the specific security analysis adopted to extract a key rate.\nHence they can be used as a reference to test the performance of any TF-QKD-like protocol.\nHere we analyse the data for three TF-QKD protocols, two~\\cite{LYDS18,Wang.2018} over the whole loss range and one~\\cite{CAL18} at a specific loss around 70~dB.\nThe protocol in ref.~\\cite{LYDS18} is the original TF-QKD scheme and acts as a reference.\nThe protocols in refs.~\\cite{Wang.2018,CAL18}, on the other hand, have been conceived to be 
unconditionally secure.\nWe have implemented them using three intensities, which is practical as compared with infinite intensities in their initial proposals.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{.\/Figs\/fig_3_SKR.pdf}\n \\caption{\n \\textbf{TF-QKD key rates.}\n Secret key rates are plotted against the channel loss (lower horizontal axis) and the corresponding ULL fibre distance (upper horizontal axis).\n The markers show the acquired experimental data whereas the solid lines follow from simulations.\n The ideal repeaterless SKC$_0$~\\cite{PLOB17} (dashed line) and the realistic one (dotted line) are plotted along with the key rates of the original TF-QKD protocol~\\cite{LYDS18} (red circles), the protocol in ref.~\\cite{Wang.2018} (blue triangles) and the protocol in ref.~\\cite{CAL18} (yellow square).\n The TF-QKD supremacy region is in pink shades.\n %\n The simulations assume 1~GHz effective clock rate.\n The realistic repeaterless bound assumes a total detection efficiency of 35\\% plus 3~dB loss due to having one detector in Charlie's module.\n Other parameters are:\n $\\alpha=0.16$~dB\/km, ULL fibre attenuation;\n $f_{EC}=1.15$, error correction coefficient.\n $\\eta_{C} = 30.8\\%$, total transmission of Charlie's module, resulting from $\\eta_{det}=44\\%$ and $\\eta_{\\textrm{coupling}}=70\\%$;\n $P_{dc} = 22$~Hz, dark count rate.\n Charlie is assumed to be at equal distance from Alice and Bob.\n The photon fluxes are specified in the Methods.\n }\n \\label{fig:SKR}\n\\end{figure*}\n\nIn Fig.~3\nwe plot the secure key rates (SKRs) versus the channel loss for the protocols analysed.\nWe also plot two lines for the repeaterless secret key capacity, which we call \\textit{ideal} and \\textit{realistic} SKC$_0$, respectively.\nThe former is the expression given in~\\cite{PLOB17}, $\\log_{2}[1\/(1-\\eta)]$, with detectors implicitly assumed to be 100\\% efficient.\nThis SKC$_0$ is impossible to overcome without an intermediate repeater.\nThe latter represents a direct comparison with our experiment and assumes a QKD performed with one detector and efficiency slightly larger than in our setup.\n\nThe darker-pink (lighter-pink) shaded area is the supremacy region where the SKR of the protocol in~\\cite{Wang.2018} (the protocol in~\\cite{LYDS18}) surpasses the realistic SKC$_0$.\nThis region extends from about 50~dB to 83~dB, limited only by detectors' dark counts.\nIn this range, our TF-QKD scheme provides more SKR than a QKD scheme with the same components.\nThis is remarkable in light of the fact that TF-QKD is more secure than QKD, as it protects against attacks directed at the detection devices.\nEven more interestingly, there are experimental points that fall beyond the \\textit{ideal} SKC$_0$.\nAt 71.1~dB, for instance, the SKRs of the protocols in refs.~\\cite{Wang.2018} and \\cite{CAL18} are $213.0$~bit\/s and $270.7$~bit\/s, respectively, i.e.~1.90 times and 2.42 times larger than the corresponding ideal SKC$_0$ ($112.0$~bit\/s).\nThis is the first time that this fundamental limit has been overcome experimentally.\nIt is worth mentioning that all the reported SKRs are quite conservative as they include the penalty due to an imperfect error-correction ($f_{EC}=1.15$).\nThe maximum channel loss over which we can stabilise the phase and obtain a positive key rate is 90.8~dB (rightmost red circle in Fig.~\n).\nThis is equivalent to 454~km and 567~km of standard and ultralow-loss (ULL, 0.16~dB\/km) single-mode optical fibre, respectively, 
connecting the users.\nWe incidentally notice that values up to 0.1419~dB\/km are currently achievable in fibres at 1560~nm~\\cite{TSM+18}.\n\nIt is interesting to compare these results with the current record distances~\\cite{BBR+18,YCY+16} obtained in QKD and MDI-QKD experiments over long-haul fibres in the finite-size scenario.\nThe QKD record distance is 421.1~km in ULL fibre, with a key rate of 0.25~bit\/s over a total channel loss of 71.9~dB~\\cite{BBR+18}.\nThis was made possible by a 2.5~GHz clock rate and a detector dark count rate of 0.1~Hz.\nFor a similar channel loss, with our 1~GHz effective clock rate, the SKR of any of the TF-QKD protocols analysed is three orders of magnitude higher.\nThe longest demonstration of MDI-QKD is on 404~km of ULL optical fibre obtained with the protocol in~\\cite{ZYW16}.\nWith a clock rate of 75~MHz, it provided an SKR of \\num{3.2e-4}~bit\/s over a total channel loss of 64.64~dB~\\cite{YCY+16}, which is six orders of magnitude smaller than the TF-QKD key rates at 71.1~dB.\nAlthough our results have been achieved in the asymptotic regime and do not include finite-size effects or long-haul real fibres, as the experiments in~\\cite{YCY+16,BBR+18}, the improvement they entail appears to be substantial.\n\n\\section*{Conclusions}\n\nIn TF-QKD, quantum information is carried by the optical fields prepared by Alice and Bob.\nFields can tolerate larger loss than photons and can potentially increase the rate and range of quantum communications~\\cite{LYDS18}.\nIn the present work, we have provided the first experimental evidence of this potential, surpassing, for the first time, the rate-loss limit of direct-link quantum communications~\\cite{PLOB17} with an intermediate untrusted node.\n\nTo achieve this goal, the users have to prepare light pulses that are phase stabilised and at the same time phase randomised respect to a shared phase reference. The phase stabilisation between the lasers was achieved by an optical phase-locked loop, which also guarantees that the reference light emitted by Alice and possibly manipulated by Eve is securely transferred to Bob.\nThe phase stabilisation across the channels linking Alice and Bob to Charlie was achieved using efficient feedback based on unmodulated reference optical pulses of the same intensity as the quantum pulses used for key generation.\nThis maintains interference stability to overcome the rate-loss theoretical limit and extract a positive key rate over a 90~dB loss link, which is about 20~dB larger than in any other previous quantum communication test.\n\nOur proof-of-concept experiment shows that TF-QKD can greatly enhance the range and rate of quantum communications using presently available technology.\n\n\\section*{Methods}\n\\smallskip\n\n{\\small\n\\noindent\\textbf{Generalised TF-QKD protocol}\n\\smallskip\n\n\\textit{State Preparation} -- Alice randomly selects: the bit value $\\alpha_a=\\{0,1\\}$, with probability $p_{\\alpha_a}$; the basis value $\\beta_a=\\{0,1\\}=\\{Z,X\\}$, with probability $p_{\\beta_a}$; the global phase value $\\phi_a\\in [0,2\\pi)$, with uniform probability $p_{\\phi_a}$; the intensity value $\\mu_a=\\{u_a,v_a,w_a\\}$, with probability $p_{\\mu_a}$. She uses the setup in Fig.~\nto prepare a coherent state $\\ket{\\sqrt{\\mu_a}e^{i\\varphi_a}}$, where $\\varphi_a=\\phi_a+\\alpha_a\\pi+\\beta_a\\pi\/2$. 
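Concretely (this enumeration is ours and simply unpacks the rule above), the four bit and basis combinations add the offsets\n\begin{equation*}\n\alpha_a\pi+\beta_a\pi\/2 \in \{0, \, \pi, \, \pi\/2, \, 3\pi\/2\} \quad \textrm{for} \quad (\alpha_a,\beta_a) \in \{(0,Z), (1,Z), (0,X), (1,X)\},\n\end{equation*}\non top of the uniformly random global phase $\phi_a$.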
Bob does the same, with subscripts $a$ replaced by $b$ and same values for the parameters, unless explicitly stated.\n\nThis step represents the state preparation of the original TF-QKD protocol~\\cite{LYDS18} and of the one in ref.~\\cite{Tamaki.}.\nTo increase the asymptotic key rate, the probability of the majority basis, $p_Z$, can be set arbitrarily close to 1, similarly to the efficient version of the BB84 protocol~\\cite{LCA05}.\nAlong the same line, the state preparation of the TF-QKD protocol in ref.~\\cite{Ma.} can be obtained by simply setting the probability of the minority basis, $p_{X}$, equal to 0.\nIn ref.~\\cite{CYW+18} and in the Protocol~3 of ref.~\\cite{CAL18}, which relies on coherent states, the global phase is randomised only in the `test' basis, which we choose here equal to $X$, while it is constant in the `encoding' basis, which we choose here equal to $Z$.\nFor the state preparation of the protocol in ref.~\\cite{Wang.2018}, we can treat the $X$ basis as in the original TF-QKD protocol whereas for the $Z$ basis we set $w_{a,b}=0$ and $p_{u_{a,b}}=1-p_{w_{a,b}}=\\epsilon$, $p_{v_{a,b}}=0$.\nA close-to-optimal value for $\\epsilon$ is $10\\%$. In our simulations, we set it equal to 7.8\\%. We then relate the bit value $0$ ($1$) to the instances of the $Z$ basis where Alice encoded $w_a$ ($u_a$) and Bob encoded $u_b$ ($w_b$).\n\nIn the above preparation of the intensity, we considered for simplicity only three values, as opposed to the infinite values considered in most of the TF-QKD proposals.\nOn one hand, this setting can easily be generalised to any number of intensities, even infinite, like in the original decoy-state QKD~\\cite{LMC05}.\nOn the other hand, the possibility to import the decoy-state technique into TF-QKD is the main motivation for having multiple intensities in the protocol.\nThe only exception is ref.~\\cite{LL18}, which we could not include in the above description as it does not resort to phase randomisation and decoy states to extrapolate a key rate from the acquired sample.\nFor the other protocols, we describe below how to apply the decoy-state technique with three intensities.\n\n\\smallskip\n\n\\textit{Measurement} -- Alice and Bob send their optical pulses to the central relay station, Charlie, who does not need to be honest.\nA honest Charlie would send the incoming pulses on his beam splitter, measure the output pulses and report which of his two detectors clicked.\nA dishonest Charlie, however, could use any detection scheme he pleases.\nThis would not affect the security of the protocol as TF-QKD's security, similarly to MDI-QKD, is independent of the detection scheme.\nThis implies that the secret key rate extracted by the users when Charlie is dishonest is always lower than or equal to the one they would extract if Charlie is honest.\nIn our experiment, Charlie announces counts from the detector D1 only.\n\n\\smallskip\n\n\\textit{Announcement} -- After repeating the above steps many times, the honest Charlie announces over a public authenticated channel the events where one and only one detector clicked. Alice and Bob announce their intensity values $\\mu_{a,b}$, their basis values $\\beta_{a,b}$ and their global phases $\\phi_{a,b}$.\n\nThis step holds for protocols in~\\cite{LYDS18,Ma.}, even if the announcement of the basis is redundant in~\\cite{Ma.} because $p_X=0$. 
For the protocols in~\\cite{Wang.2018,CYW+18,CAL18}, the intensity values are announced only in the $X$ basis, whereas the global phases are announced only in the $X$ basis for \\cite{Wang.2018} and never announced in \\cite{CYW+18,CAL18}. This latter feature is remarkable and can entail a great experimental simplification. For the protocol in \\cite{Tamaki.}, Alice selects two modes of execution. In the `test mode' the global phases are never disclosed, thus allowing a rigorous application of the decoy-state method; in the `code mode', the global phases are announced to let the users reconcile their bit values. The bases are always announced unless a particular event occurs in the code mode.\n\\smallskip\n\n\\textit{Sifting} -- Among the announced successful detections, Alice and Bob keep the events that have matching values.\nIn all cases, either for a specific basis or for both bases, they keep those events whose phases are `twins', i.e., no more different than a certain tolerance level $\\Delta$ modulo $\\pi$, due to the symmetry of TF-QKD with respect to the addition of $\\pi$ to the phase values~\\cite{Ma.}.\nAfter sifting their data, the users keep $\\alpha_a$ and $\\alpha_b$ ($\\alpha_b \\oplus 1$) as their raw key bits if Charlie announced a detection related to a $0$ ($\\pi$) phase difference between Alice's and Bob's phases.\n\nThis holds for protocols in~\\cite{LYDS18,Ma.,Tamaki.} with minor differences. For the protocol in \\cite{Wang.2018}, it holds in the $X$ basis. The raw key bit, however, is obtained from the $Z$ basis when single clicks are announced by Charlie, irrespective of which detector clicked.\nFor the protocols in \\cite{CYW+18,CAL18}, it holds with $\\Delta=0$ in the encoding basis.\n\n\\smallskip\n\n\\textit{Parameter Estimation} -- A raw key is formed by concatenating the raw key bits obtained in the previous step. All the remaining data unrelated to the key bits can be fully disclosed to estimate the decoy-state parameters related to security.\n\nUp to minor differences, this step is the same in all TF-QKD protocols.\nIn fact, they all use the decoy-state technique~\\cite{Hwang.2003,Wan05,LMC05} to estimate the single-photon quantities related to security, with the exception of \\cite{LL18}, which was already kept out of our description.\nEven in~\\cite{Ma.} decoy states are extensively used to estimate the photon-number dependent quantities appearing in the phase error rate of the protocol. In this case, having only three intensities for $\\mu_{a,b}$ might be insufficient to obtain a tight estimation of the phase error rate.\nA similar argument applies to ref.~\\cite{CYW+18}, where four intensity levels were used to obtain a good key rate.\nHere we find that three intensity levels are sufficient to extract good key rates from the protocols in Refs.~\\cite{LYDS18,Wang.2018} and \\cite{CAL18}.\n\n\\smallskip\n\n\\textit{Key distillation} -- The users run classical post-processing procedures such as error correction and privacy amplification to distil the final secure key from the raw key.\n\nThe amount of privacy amplification in this step is specific to each TF-QKD protocol as it depends on the detailed security analysis. In the present work, we only consider two specific key distillation rates for exemplificative purposes, given in Refs.~\\cite{LYDS18,Wang.2018}. 
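For reference (the definition is standard and left implicit in the text), the binary entropy function appearing in the key rate expressions below is\n\begin{equation*}\nh(x) = -x\log_2 x - (1-x)\log_2 (1-x).\n\end{equation*}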
For both protocols we consider the standard decoy state equations in the asymptotic scenario, providing\nthe lower bound for the 0-photon yield, $\\underline{y}_0 = (v Q_w e^w - w Q_v e^v)\/(v-w)$;\nthe lower bound for the 1-photon yield, $\\underline{y}_1 = [u^2 Q_v e^v - u^2 Q_w e^w - (v^2-w^2) ( Q_u e^u - \\underline{y}_0 )] \/ [u(u v-u w-v^2+w^2)]$;\nand the upper bound for the 1-photon error rate, $\\overline{e}_1 = (E_v Q_v e^v - E_w Q_w e^w)\/[(v-w)\\underline{y}_1]$.\nIn these equations, $u=u_a+u_b$, $v=v_a+v_b$ and $w=w_a+w_b$ are the total intensity values for the signal state, the decoy state and the vacuum state, respectively.\nFor the total intensity, we also use the symbol $\\mu=\\mu_a+\\mu_b=\\{u,v,w\\}$.\nIn our experiment, we set $u=0.4$, $v=0.16$ and $w=10^{-5}$ for the protocols in \\cite{LYDS18,Wang.2018} and $u_a=u_b=0.02$, $v_a=v_b=0.2$, $w=10^{-5}$ for the protocol in \\cite{CAL18}.\nThe parameters $\\underline{y}_1$ ($\\underline{y}_0$) and $\\overline{e}_1$ are, respectively, the lower bound for the single-photon (zero-photon) yield and the upper bound for the single-photon phase error rate; $Q_\\mu$ and $E_\\mu$ are the gain and QBER measured by detector D1 in Fig.~\na.\nThen for the original TF-QKD key rate we use\n\\begin{equation}\\label{SKR_TFQKD}\n R^{\\prime}=\\{\\underline{Q}_1 [1-h(\\overline{e}_1)] - f_{EC} Q_{u} h(E_u) \\}\/M^{\\prime}.\n\\end{equation}\nHere, $\\underline{Q}_1 = \\mu e^{-\\mu} \\underline{y}_1$ is the lower bound for the single-photon gain;\n$f_{EC}$ is the error correction factor, set equal to 1.15 in our simulations;\n$h$ the binary entropy function;\n$M^{\\prime}=M\/2$ and $M$ is the number of phase slices used to reconcile the global random phase, set equal to 16 in our simulations.\nIn Eq.~\\eqref{SKR_TFQKD}, $E_u$ includes $E_M$, the intrinsic misalignment error of TF-QKD, which is equal to 1.275\\% for $M=16$~\\cite{LYDS18}.\nThe key rate in Eq.~\\eqref{SKR_TFQKD} is secure under the conditions clarified in ref.~\\cite{LYDS18}. When these conditions are not met, Eve can perform other attacks like the `collective beam splitting' (CBS) attack~\\cite{LYDS18} or the one described in \\cite{WHY18}.\n\nFor the `Send-Not Send' TF-QKD protocol by Wang \\textit{et al.}~\\cite{Wang.2018} we use the key rate equation\n\\begin{equation}\\label{SKR_SNSTFQKD}\n R^{\\prime\\prime}=\\underline{Q}_0^z + \\underline{Q}_1^z [1-h(\\overline{e}_1)] - f_{EC} Q^z h(E^z),\n\\end{equation}\nwith $Q^z = \\epsilon^2 Q_{u} + \\epsilon (1 - \\epsilon) (Q_{u_a} + Q_{u_b}) + (1 - \\epsilon)^2 Q_0$, $ E^z = [\\epsilon^2 Q_{u} + (1 - \\epsilon)^2 Q_0]\/{Q^z}$.\nThe parameter $\\epsilon$ has been defined in the state preparation step of the protocol; $Q_{u}$, $Q_{u_a}$, $Q_{u_b}$, $Q_{0}$ are the gains (i.e. 
ratio of successfully detected events to sent optical pulses) of the protocol, measured in the experiment when both of the users, only Alice, only Bob, none of the users, respectively, send out optical pulses.\nThe values for these quantities at each attenuation level are reported the Supplementary Information.\nThe single-photon gain in the $Z$ basis assumes the form $\\underline{Q}_1^z = \\[\\epsilon (1 - \\epsilon) (u_a e^{-u_a}+u_b e^{-u_b}) + \\epsilon^2 (u e^{-u})\\] \\underline{y}_1$ and $\\underline{Q}_0=\\[ (1-\\epsilon)^2 + \\epsilon (1-\\epsilon) (e^{-u_a}+e^{-u_b}) + \\epsilon^2 e^{-u} \\]\\underline{y}_0$.\nThe single-photon quantities $\\underline{y}_0$, $\\underline{y}_1$ and $\\overline{e}_1$ are drawn from the $X$ basis of the protocol using the equations written above.\nIn this specific protocol, the number of phase slices is large, leading to no misalignment error in the $X$ basis.\nAlso, unlike the original send-not send TF-QKD~\\cite{Wang.2018}, we consider here three intensities to implement the decoy-state technique, which is practical.\nFinally, we include an extra term $\\underline{Q}_0^z$ in the key rate equation \\eqref{SKR_SNSTFQKD}, accounting for the fact that Eve cannot extract any useful information from the vacuum pulses prepared by the users~\\cite{Lo05,Koa06}.\n\nFor the protocol by Curty \\textit{et al.}~\\cite{CAL18} we use the key rate equation written for their Protocol~3, after adding the error correction factor $f_{EC}$, to make the proposal more practical:\n\\begin{equation}\\label{SKR_Curty}\n R^{\\prime\\prime\\prime} = Q^z [1-h(\\overline{e}^x_{1})] - f_{EC} Q^z h(E^z).\n\\end{equation}\nThis extra term makes the key rate smaller, so it is even more difficult to overcome the SKC$_0$ when it is taken into account.\nThe counts for the raw key come from detector D1 in the setup of Fig.~\na, as for the other protocols.\nMost quantities in Eq.~\\eqref{SKR_Curty} are similar to those already introduced for the other protocols, with the exception of the phase error rate $\\overline{e}^x_{1}$, which deserves a specific discussion.\nIt has been taken from Eq.~(15) of ref.~\\cite{CAL18} and amounts to\n\\begin{align}\\label{e1Cur}\n\\nonumber \\overline{e}^x_{1} &=\\frac{1}{Q^z}\\sum_{j=0,1}\\[\\sum_{m,n=0}^{\\infty} c_{m}^{(j)} c_{n}^{(j)}\\sqrt{\\overline{Y}^x_{mn}}\\]^2 \\\\\n\\nonumber &\\leq \\frac{1}{Q^z}\\sum_{j=0,1}\\[\\sum_{m,n=0}^{\\infty} c_{m}^{(j)} c_{n}^{(j)}\\sqrt{g_{mn}(\\overline{Y}^x_{mn},Y_{\\textrm{cut}})}\\]^2\\\\\n &\\simeq \\frac{1}{Q^z}\\sum_{j=0,1}\\[\\sum_{m,n=0}^{N_{\\textrm{cut}}} c_{m}^{(j)} c_{n}^{(j)}\\sqrt{g_{mn}(\\overline{Y}^x_{mn},Y_{\\textrm{cut}})}\\]^2.\n\\end{align}\nIn Eq.~\\eqref{e1Cur}, the coefficient $c_{k}^{(0)}$ ($c_{k}^{(1)}$) is defined as $c_{k}^{(0)}=e^{-\\mu\/2} \\mu^{k\/2}\/\\sqrt{k!}$ when the integer $k$ is even (odd) and $0$ otherwise~\\cite{CAL18};\n$g_{mn}(\\overline{Y}^x_{mn},Y_{\\textrm{cut}})$ is a function equal to $\\overline{Y}^x_{mn}$ if $m+n1$. In the vertical direction, the\nupward component of the aerodynamic drag force $F_{d,\\perp}$ is\ncounterbalanced by the excess of the gravitational attraction over the\nair buoyancy force \n\\begin{equation}\n F_g = \\frac{1}{6} \\ \\pi \\ D_V^3 \\ (\\rho_{\\rm H_2O} - \\rho_{\\rm air})\n \\ g \\, ,\n\\end{equation}\nwhere $\\rho_{\\rm H_2O} \\simeq 997~{\\rm kg}\/{\\rm m}^3$ and $g \\simeq\n9.8~{\\rm m\/s}^2$ is the acceleration of gravity. 
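For orientation (this numerical aside is ours, evaluated with the ideal gas law quoted in the Appendix at the ambient conditions adopted below, $P = 101$~kPa and $T \sim 293~{\rm K}$),\n\begin{equation*}\n\rho_{\rm air} = \frac{P}{R_g T} \simeq 1.2~{\rm kg}\/{\rm m}^3 \simeq 1.2 \times 10^{-3} \, \rho_{\rm H_2O}.\n\end{equation*}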
Since $\\rho_{\\rm air}\n\\ll \\rho_{\\rm H_2O}$ the air buoyancy force\nbecomes negligible, and so $F_g \\approx M_V g$, with $M_V$ the aerosol\nmass. When the upward\naerodynamic drag force\nequals the gravitational attraction the droplet reaches mechanical\nequilibrium and starts falling with a terminal speed\n\\begin{equation}\nv_{V,f,\\perp} \\approx \\frac{M_V \\ g \\ \\varkappa}{3 \\pi \\ \\eta \\ D_V} \\, .\n\\end{equation}\nThe terminal speed is $\\propto D_V^2$ (due to the diameter dependence of the mass), and hence larger droplets\nwould have larger terminal velocities thereby reaching the ground\nfaster. The terminal speed for various particle sizes is given in\nTable~\\ref{tabla1}. The time $t_f$ it will take the virus to fall to the\nground is simply given by the distance to the ground divided by\n$v_{V,f,\\perp}$. For an initial height, $h \\sim 2~{\\rm m}$, we find\nthat for $D_V = 2~\\mu{\\rm m}$,\n\\be\nt_f = \\frac{h}{v_{V,f,\\perp}} \\sim 4~{\\rm hr} \\, .\n\\ee\nThe time scale as a function of the droplet size and heigth is shown in Fig.~\\ref{fig:1}.\n\nThe aerodynamic drag force holds for rigid spherical particles moving\nat constant velocity relative to the gas flow. To determine the\nstopping range, in the next section we model the elastic scattering of\nthe turbulent puff cloud with the air molecules.\n\n\n\\begin{table}\n\\caption{Cunningham slip correction factor and terminal speed. \\label{tabla1}}\n \\begin{tabular}{ccc}\n \\hline\n \\hline\n $D_V~(\\mu{\\rm m})$ & $\\varkappa$ & $v_{V,f,\\perp}~({\\rm m\/s})$ \\\\ \n\\hline\n $\\phantom{1}0.001$ & $215.3$ & $6.51 \\times 10^{-9}$ \\\\\n $\\phantom{1}0.010$ & $\\phantom{2}22.05$ & $6.67 \\times 10^{-8}$ \\\\\n $\\phantom{1}0.100$ & $\\phantom{22}2.851$ & $8.62 \\times 10^{-7}$ \\\\\n $\\phantom{1}0.500$ & $\\phantom{22}1.327$ & $1.00 \\times 10^{-5}$ \\\\\n $\\phantom{1}1.000$ & $\\phantom{22}1.163$ & $3.52 \\times 10^{-5}$ \\\\\n $\\phantom{1}1.500$ & $\\phantom{22}1.109$ & $7.54 \\times 10^{-5}$ \\\\\n $\\phantom{1}2.000$ & $\\phantom{22}1.081$ & $1.31 \\times 10^{-4}$ \\\\\n $\\phantom{1}3.000$ & $\\phantom{22}1.054$ & $2.87 \\times 10^{-4}$ \\\\\n $\\phantom{1}5.000$ & $\\phantom{22}1.033$ & $7.81 \\times 10^{-4}$ \\\\\n $\\phantom{1}7.000$ & $\\phantom{22}1.023$ & $1.52 \\times 10^{-3}$ \\\\\n ~~~~~~~~~$10.000$~~~~~~~~~ & ~~~~~~~~~$\\phantom{22}1.016$~~~~~~~~~ & ~~~~~~~~~$3.07 \\times 10^{-3}$~~~~~~~~~ \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{figure}[tb]\n\\postscript{height-vs-diameter.pdf}{0.9}\n\\caption{Contours of the time $t_f$ in minutes in the\n $h-D_V$ plane. \\label{fig:1}}\n\\end{figure}\n\n \n\\section{Stopping Range}\n\\label{sec:3}\n\nRespiratory particles of saliva and mucus are expelled together\nwith a warm and humid air, which generates a convective current. The\naerosols and droplets are initially transported as part of a \ncoherent gas puff of buoyant fluid. The ejected puff of air remains\ncoherent in a volume that varies from $0.00025$ to $0.0025~{\\rm m}^3$~\\cite{Balachandar}.\nThis corresponds to a puff size $0.78 \\lesssim D_P\/{\\rm m} \\lesssim\n1.68$, where follwoing~\\cite{Balachandar} we have taken an entrainment\ncoefficient~\\cite{Morton} of $\\alpha = 0.1$. \nThe puff is ejected with $1 \\lesssim v_{V,0,\\parallel}\/{\\rm\n (m\/s)} \\lesssim 10$~\\cite{Balachandar}. \nThe turbulent puff cloud consists of an admixture of moist exhaled air and\nmucosalivary filaments. 
Next, in line with our stated plan, we use the\nexperimental data to calculate the range of the average\ndensity of the buoyant fluid in the turbulent cloud.\n\nThe mass ratio of the average air molecule compared to the aerosol, $m_{\rm{air}}\/ M_V$, is roughly $10^{-12} $\n(since the size of the aerosol and the mass for its chief constituent,\nH$_2$O, compared to the air molecule are $10^4$ and $10^{3}$), though\nthere is an obvious variation with aerosol size at constant\ndensity. If we consider instead the mass inside the puff $M_P$ the ratio $R \equiv\nm_{\rm{air}}\/ M_{P}$ is even smaller. Due to the enormous mass ratio, the virions inside\nthe puff will not undergo large angular deflections, so we will treat the virions as having the same direction for their initial and final velocities (since we are looking at a stopping distance, this is a reasonable assumption). Starting with the non-relativistic one-dimensional equation for the virus velocity $\beta$ we have in the lowest nontrivial order (in $R \ll 1$) and any frame\n\bea \n\left(\n\begin{array}{c}\n\beta_1\\\nv_{{\rm air},f}\n\end{array}\n\right)\n= \mathbb{M} \left(\begin{array}{c} \beta_0 \\ v_{{\rm air},0} \end{array} \right) \,,\n\eea\nwhere the matrix $\mathbb{M}$ is derived by imposing conservation of energy and momentum, and\nis given by\n\bea\n\mathbb{M} = \left(\n\begin{array}{cc}\n~~1-2R~~ & ~~2R~~ \\\n~~2~~ & ~~ -1~~\n\end{array}\n\right) \,,\n\eea\nwith $\beta_0 = v_{V,0,\parallel}$, and $v_{{\rm air},0}$ and $v_{{\rm air},f}$ the initial and final velocities of the air molecule, respectively. \nWith each interaction the velocity $ \beta $ falls by the same small fraction $2R$, because the target particle is a new air molecule at\neach interaction.\n\n\nThough individual air molecules are traveling at an average speed of a few hundred meters per\nsecond, throughout we assume the medium to be stationary. In analogy\nwith the description of the slowing down of alpha particles in matter (which assumes the\nelectronic cloud is at rest), we can describe the\nscattering of the puff in the frame in which the air molecule is at rest, i.e.,\n$v_{{\rm air},0} = 0$ (in essence, adopting a stationary medium on average). The stopping power is given by the velocity-loss equation\n\bea\n -d\beta\/dx = \Delta\beta\/\lambda^V_{\rm mfp} =\n2R \beta\/\lambda^V_{\rm mfp} \,,\n\eea\nwhich integrates to \n$ \beta(x) = \beta_0 \, e^{-2Rx\/\lambda^V_{\rm mfp}} $. Finally, we have\nfor the stopping distance\n\bea \nL=\lambda^V_{\rm mfp} \ \frac{1}{2R} \ \ln\left(\frac{\beta_0}{\beta_f} \right) \,,\n\label{stopdis}\n\eea\nwith $\beta_f \equiv v_{V,f,\parallel}$. Note that $L\/\lambda_{\rm mfp}^V$ is not only the number of mean free paths traversed by the fiducial virus, but also, equivalently, the number of interactions of the virus with air molecules. \n\nSince the velocity-loss equation is homogeneous in $\beta$ and the mass ratio $R$ is a constant for\na given puff size $D_P$, the stopping distance takes the simple form above. The mass ratio $R$ is very small, and\n$(2R)^{-1}$ is correspondingly very large. A tremendous number of mean free paths\/interactions is involved as the bowling-ball-like puff of virions rolls over the air molecules.\n\nFinally, we must calculate \n$ \lambda^V_{\rm mfp} = 1\/ (n_{\rm air} \sigma) $. 
The air molecules\nact collectively as a fluid, so the volume per air molecule, $1\/n_{\rm air}$, is\ngiven by the ideal gas law as $ k_{\rm B}T\/P $, where $P$ is the\npressure, $T$ the temperature, and $k_{\rm B}$ is the Boltzmann\nconstant. We assume a contact interaction equal to the cross-sectional hard-sphere size of the puff, \ni.e. $ \sigma = \pi (D_P\/2)^2 $. Substituting into Eq.~(\ref{stopdis}) we\nobtain the final result for the stopping distance\n\begin{equation}\nL= \frac{k_{\rm B}T}{P} \ \frac{1}{\pi (D_P\/2)^2} \frac{1}{2R} \\n\ln \left(\frac{\beta_{0}}{\beta_{f}} \right) \, .\n\label{buga}\n\end{equation}\nWe take the sneeze or cough which causes the droplet expulsion to be\nat a standard ambient air pressure of $P = 101$~kPa and\na temperature of $T \sim 293~{\rm K}$. It is important to stress that {\it temperature variation could cause an $\mathcal{O}(\lesssim \pm8 \%)$ effect in $L$ for extreme ambient cold or warmth}. We now proceed to fit the experimental data. For $L \sim 8~{\rm m}$ and\ntaking $v_{V,f,\parallel} \sim 3~{\rm mm\/s}$~\cite{Bourouiba},\n we obtain $1.8 <\n\rho_P\/\rho_{\rm air} < 4.0$ for $0.78 \lesssim D_P\/{\rm m} \lesssim\n1.68$, \nwhere\n$\rho_P$ is the average density of the fluid in the puff. \n\nA point worth noting at this juncture is that our model provides an\neffective description of the turbulent puff cloud. Note that\nindependently of their size and their initial velocity all respiratory\nparticles in the cloud experience both gravitational settling and\nevaporation. Aerosols and droplets of all sizes are subject to\ncontinuous settling, but those with settling speed smaller than the\nfluctuating velocity of the surrounding puff would remain trapped\nlonger within the puff. Actually, because of evaporation the water\ncontent of the respiratory particles is monotonically decreasing. At\nthe point of almost complete evaporation the settling velocity of the\naerosols is sufficiently small that they can remain trapped in the puff\nand get advected by ambient air currents and dispersed by ambient\nturbulence. The size of the puff then continuously grows in\ntime~\cite{Balachandar}. Our result can equivalently be interpreted in\nterms of the effective coherence length of the turbulent cloud\nassuming $\rho_P \sim \rho_{\rm air}$. The effective size of the puff\nand its effective density are entangled in Eq.~(\ref{buga}). Numerical\nsimulations show that during propagation\nthe puff edge grows $\propto t^{1\/4}$~\cite{Bourouiba}. After 100~s the puff would grow by\na factor of 3 (see Fig.~7 in~\cite{Balachandar}), in agreement with\nour analytical estimates. In\nclosing, we note that if we ignore the motion of the air puff carrying\nthe aerosols, as in the analysis of~\cite{Wells}, it is\nstraightforward to see, by substituting $R$ with $m_{\rm air}\/M_V \sim\n10^{-12}$ in Eq.~(\ref{buga}), that the individual aerosols would not travel\nmore than a few cm away from the exhaler, even under conditions of\nfast ejections, such as in a sneeze. This emphasizes the relevance of\nincorporating the complete multiphase flow physics in the modeling of\nrespiratory emissions when ascertaining the risk of SARS-CoV-2 airborne\ninfection.\n\n\n\n\n\section{Conclusions}\n\label{sec:4}\n \nWe have carried out a physics modeling study for SARS-CoV-2 transport\nin air. We have developed a nuclear physics analogy-based modeling of\nthe complex gas cloud and its payload of pathogen-virions. 
Using our\npuff model we estimated the average density of the fluid in the\nturbulent cloud is in the range $1.8 < \\rho_P\/\\rho_{\\rm air} < 4.0$. We have also shown that aerosols and droplets can remain suspended\nfor hours in the air. Therefore, once the puff slows down sufficiently,\nand its coherence is lost, the eventual spreading of the infected\naerosols becomes dependent on the ambient air currents and\nturbulence. De facto, as it was first pointed out in~\\cite{Anchordoqui} and later developed in~\\cite{Augenbraun,Evans} airflow\nconditions strongly influence the distribution of viral particles in\nindoor spaces, cultivating a health threat from COVID-19 airborne\ninfection. \n\n\nAltogether, it seems reasonable to adopt additional infection-control\nmeasures for airborne transmission in high-risk settings, such as the\nuse of face masks when in public. \nIf the results of this study - $t_f$ of ${\\cal O} ({\\rm hr})$ for aerosols, for example - are borne out by experiment, then these findings should be taken into account in policy decisions going forward as we continue to grapple with this pandemic.\n\n\n\n\\section*{Appendix}\n\nThere are important considerations in the development of Stokes' law,\nincluding the hypothesis that the gas at particle\nsurface has zero velocity relative to the particle. This hypothesis\nholds well when the diameter of the particle is much larger than the mean\nfree path of gas molecules. The mean free path $\\lambda_{\\rm mfp}^{\\rm\n air}$ is the average distance\ntraveled by a gas molecule between two successive collisions. In\nanalyses of the interaction between gas molecules and particles, it is\nconvenient to use the Knudsen number ${\\rm Kn} = 2 \\lambda_{\\rm mfp}^{\\rm\n air}\/D_V$, a dimensionless number\ndefined as the ratio of the mean free path to particle radius. For ${\\rm Kn}\n\\agt 1$, the drag force is smaller than\npredicted by Stokes' law. Conventionally this condition is described as\na result of slip on the particle surface. The so-called slip\ncorrection is estimated to be~\\cite{Crowder}\n\\begin{equation}\n\\varkappa = 1 + {\\rm Kn} \\left[1.257 + 0.4 \\ \\exp(-1.1\/{\\rm Kn}) \\right] \\, .\n\\end{equation}\nIn our calculations we take \n\\bea\n\\lambda_{\\rm mfp}^{\\rm air} = \\frac{\\eta_{\\rm air}}{\\rho_{\\rm\n air}}\\left(\\frac{\\pi m_{\\rm air}}{2 \\, k_{\\rm B} \\, T}\\right)^{1\/2} \\,,\n\\eea\nwhere $k_{\\rm B}$ is the Boltzmann constant, $T$ is the temperature in\nKelvin, and the density of air is given by\n\\bea\n\\rho_{\\rm air} = \\frac{P}{R_gT} \\,,\n\\eea\nwith $P = 101$ kPa, and where $R_g = 287.058$~J\/(kg~$\\cdot$~K) is the ideal gas constant. The molar mass of air is\n$m_{\\rm mol} = 29\\; {\\rm g}\/{\\rm mol}$, which leads to $m_{\\rm air} = 4.8\\times10^{-26}\\;{\\rm kg}\/{\\rm molecule}$.\\\\\n\n\n\\noindent{\\bf Funding\/Support:} The research of L.A.A. is supported by the U.S. National Science\nFoundation (NSF Grant PHY-1620661). J.B.D. acknowledges support from\nthe National Science Foundation under Grant No. NSF PHY182080. The\nwork of T.J.W. was supported in part by the U.S. Department of Energy\n(DoE grant No. DE-SC0011981).\n\n\\noindent {\\bf Role of the Funder\/Sponsor:} The sponsors had no role in\nthe preparation, review or approval of the manuscript and decision to\nsubmit the manuscript for publication. 
Any opinions,\nfindings, and conclusions or recommendations expressed in this\narticle are those of the authors and do not necessarily reflect the\nviews of the NSF or DOE.\n\n\\noindent{\\bf Declaration of Competing Interest:} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.\n\n\\noindent{\\bf Ethical Approval:} The manuscript does not contain experiments on animals and humans; hence ethical permission not required.\n\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nMostow rigidity implies that for hyperbolic $3$-manifolds, the\nhyperbolic metric is a topological invariant, so one might hope that\nthe topological and metric complexities are related. We shall show\nthat this is indeed the case for certain definitions of topological\nand metric complexity. We first describe the notions of complexity we\nshall use, and then give a brief outline of the arguments used to\nrelate topological and metric complexity in the subsequent sections.\nIn \\cite{hoffoss-maher} we considered the linear version of these\ninvariants, while in this paper we consider the more general case of\ninvariants constructed from maps to graphs. It will be convenient to\nwork with the collection of hyperbolic $3$-manifolds which are\ncomplete, but do not contain cusps, and are not necessarily of finite\nvolume. The reason for not considering manifolds with cusps is that\nin the cusped case the surfaces separating the $3$-dimensional regions\nin the topological decomposition we construct from the metric might\nhave essential intersection with the cusps. In other words, if the\ncusps are truncated to form a hyperbolic manifold with torus boundary\ncomponents, then the dividing surfaces may be surfaces with essential\nboundary components on the boundary tori. However, the currently\navailable versions of the topological decomposition results we use,\ndue to Scharlemann-Thompson and Saito-Scharlemann-Schultens, assume\nthat the dividing surfaces are closed.\n\nThis paper is not entirely self-contained, and relies on the results\nof \\cite{hoffoss-maher}, however we review the main definitions and\nresults from \\cite{hoffoss-maher} for the convenience of the reader.\n\n\n\\subsection{Metric complexity}\n\n\n\nIn \\cite{hoffoss-maher}, we considered the following definition of\nmetric complexity. Let $M$ be a closed Riemannian $3$-manifold, and\nlet $f \\colon M \\to \\mathbb{R}$ be a Morse function, i.e. $f$ is a smooth\nfunction, all critical points are non-degenerate, and distinct\ncritical points have distinct images in $\\mathbb{R}$. We define the\n\\emph{area} of $f$ to the maximum area of any level set\n$F_t = f^{-1}(t)$ over all points $t \\in \\mathbb{R}$. We define the\n\\emph{Morse area} of $M$ to be the infimum of the area of all Morse\nfunctions $f \\colon M \\to \\mathbb{R}$.\n\nMore generally, we may consider maps $f \\colon M \\to X$, where $X$ is\na trivalent graph. Recall that for a Morse function $f \\colon M \\to\n\\mathbb{R}$ there are singularities of index $0, 1, 2$ and $3$. The\nsingularities of index $0$ and $3$ are known as birth or death\nsingularities respectively, and the level set foliation near the\nsingular point in $M$ is locally homeomorphic to the level sets of the\nfunction $x^2 + y^2 + z^2$ close to the origin in $\\mathbb{R}^3$. 
For\nsingularities of index $1$ and $2$, the level sets near the singular\npoint in $M$ are locally homeomorphic to the level sets of the\nfunction $x^2 + y^2 - z^2$ close to the origin in $\\mathbb{R}^3$. \n\nIn the case of index $1$ or $2$, there is a map from a small open ball\ncontaining the singular point to the leaf space of the level set\nfoliation. As the singular leaf divides a small ball about the\nsingular point into three connected components, the leaf space is a\ntrivalent graph with a single vertex and three edges, and we call such\na map a \\emph{trivalent singularity}. If $X$ is a trivalent graph, we\nsay a map $f \\colon M \\to X$ is \\emph{Morse} if it is a Morse function\non the interior of each edge of $X$, and at each trivalent vertex $v$\nof $X$ the pre-image under $f$ is locally homeomorphic to a trivalent\nsingularity. We say that the area of $f$ is the maximum area of\n$F_t$, as $t$ runs over all points $t \\in X$. The \\emph{Gromov area}\nof $M$ is the infimum of the area of $f \\colon M \\to X$ over all\ntrivalent graphs $X$, and all Morse functions $f \\colon M \\to X$.\n\n\nThis definition of metric complexity is a variant of Uryson width,\nstudied by Gromov in \\cite{gromov}, though we consider the area of the\nlevel sets instead of the diameter. Alternatively, one may consider\nit to be a variant of the definition of the waist of a manifold, but\nwe prefer to call it area, as the dimension of our spaces is fixed,\nand the fibers have dimension two.\n\n\n\n\\subsection{Topological complexity}\n\nWe now describe the notions of topological complexity we shall\nconsider. A \\emph{handlebody} is a compact $3$-manifold with boundary,\nhomeomorphic to a regular neighborhood of a graph in\n$\\mathbb{R}^3$. Up to homeomorphism, a handlebody is determined by the\ngenus $g$ of its boundary surface. Every $3$-manifold $M$ has a\n\\emph{Heegaard splitting}, which is a decomposition of the manifold\ninto two handlebodies. This immediately gives a notion of complexity\nfor a $3$-manifold, called the \\emph{Heegaard genus}, which is the\nsmallest genus of any Heegaard splitting of the $3$-manifold.\n\nThere is a refinement of this, due to Scharlemann and Thompson\n\\cite{st}, which we now describe. A \\emph{compression body} $C$ is a\ncompact $3$-manifold with boundary, constructed by gluing some number\nof $2$-handles to one side of a compact (but not necessarily\nconnected) surface cross interval and capping off any resulting\n$2$-sphere components with $3$-balls. The side of the surface cross\ninterval with no attached $2$-handles is called the \\emph{top\n boundary} of the compression body and denoted by $\\partial_+ C$, and\nany other boundary components are called the \\emph{lower boundary} of\nthe compression body, and denoted by $\\partial_- C$. 
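As a simple piece of bookkeeping (this observation is ours, not taken from \\cite{st}), if $C$ is built from a surface $\\Sigma \\times I$ by attaching $n$ $2$-handles and capping off $s$ resulting $2$-sphere components, then\n\\begin{equation*}\n\\chi(\\partial_- C) = \\chi(\\partial_+ C) + 2n - 2s,\n\\end{equation*}\nso each $2$-handle increases the Euler characteristic of the boundary by two; in particular, a connected compression body with empty lower boundary is a handlebody, of genus equal to the genus of $\\partial_+ C$.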
A \\emph{linear\n generalized Heegaard splitting},\\footnote{We warn the reader that\n these are often referred to as generalized Heegaard splittings in\n the literature; however we wish to distinguish them from a more\n general notion described subsequently, which is also occasionally\n referred to in the literature as a generalized Heegaard splitting.}\nwhich we shall abbreviate to \\emph{linear splitting}, is a\ndecomposition of a $3$-manifold $M$ into a linearly ordered sequence\nof (not necessarily connected) compression bodies\n$C_1, \\ldots C_{2n}$, such that the top boundary of an odd numbered\ncompression body $C_{2i+1}$ is equal to the top boundary of the\nsubsequent compression body $C_{2i+2}$, and the lower boundary of\n$C_{2i+1}$ is equal to the lower boundary of the previous compression\nbody $C_{2i}$. Let $H_i$ be the sequence of surfaces consisting of\nthe top boundaries of the compression bodies $C_{2i-1}$ and\n$C_{2i}$. The complexity $c(H_i)$ of the surface $H_i$ is the sum of\nthe genera of each connected component, and the complexity of the\nlinear splitting is the collection of integers $\\{c(H_i)\\}$, arranged\nin decreasing order. We order these complexities with the\nlexicographic ordering. The \\emph{width} of the linear splitting is\nthe maximum value (i.e. the first value) of $c(H_i)$ in the collection\n$\\{c(H_i)\\}$. The \\emph{linear width} of a $3$-manifold $M$ is the\nminimum width over all possible linear generalized Heegaard\nsplittings. As a Heegaard splitting is a special case of a linear\nsplitting, the Heegaard genus of $M$ is an upper bound for the linear\nwidth of $M$. A linear splitting which gives the minimum complexity\nof all possible linear splittings is called the \\emph{thin position}\nlinear splitting.\n\n\nThere is a further refinement of this, described in Saito, Scharlemann\nand Schultens \\cite{sss}. A \\emph{graph generalized Heegaard\n splitting}, which we shall abbreviate to \\emph{graph splitting}, and\nis called a \\emph{fork complex} in \\cite{sss}, is a decomposition of a\ncompact $3$-manifold $M$ into compression bodies $\\{ C_i \\}$, such\nthat for each compression body $C_i$, there is a compression body\n$C_j$ such that the top boundary of $C_i$ is equal to the top boundary\nof $C_j$. Furthermore, for each component of the lower boundary of\n$C_i$, there is a compression body $C_k$, such that that component of\nthe lower boundary of $C_i$ is equal to a component of the lower\nboundary of $C_k$. We emphasize that different components of the lower\nboundary of $C_i$ may be attached to lower boundary components of\ndifferent compression bodies. Let $\\{ H_i \\}$ be the collection of top\nboundary surfaces. The complexity of the graph splitting is the\ncollection of integers $\\{c(H_i)\\}$, arranged in decreasing\norder. Again, we put the lexicographic ordering on these\ncomplexities. A graph splitting which realizes the minimum complexity\nis called a \\emph{thin position graph splitting}. The \\emph{width} of\nthe graph splitting is the maximum integer (i.e the first integer)\nthat appears in the complexity. The \\emph{graph width} of a\n$3$-manifold $M$ is the minimum width over all possible graph\nsplittings of $M$. As a linear splitting is a special case of a graph\nsplitting, the linear width of $M$ is an upper bound for the graph\nwidth of $M$. 
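Collecting the comparisons above (the display is ours and only restates them), for any closed $3$-manifold $M$ we have\n\\begin{equation*}\n\\text{graph width}(M) \\leqslant \\text{linear width}(M) \\leqslant \\text{Heegaard genus}(M).\n\\end{equation*}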
The graph corresponding to the graph splitting is the\ngraph whose vertices are compression bodies, with edges connecting\npairs of compression bodies with common boundary components.\n\n\n\n\n\n\\subsection{Results}\n\n\nIn order to bound metric complexity in terms of topological complexity\nwe shall assume the following result announced by Pitts and Rubinstein\n\\cite{pr} (see also Rubinstein \\cite{rubinstein}).\n\n\\begin{theorem} \\cites{pr,rubinstein} \\label{conjecture:pr} %\nLet $M$ be a Riemannian $3$-manifold with a strongly irreducible\nHeegaard splitting. Then the Heegaard surface is isotopic to a\nminimal surface, or to the boundary of a regular neighborhood of a\nnon-orientable minimal surface with a small tube attached vertically\nin the I-bundle structure.\n\\end{theorem}\n\nA full proof of this result has not yet appeared in the literature,\nthough recent progress has been made by Colding and De Lellis\n\\cite{cdl}, De Lellis and Pellandrini \\cite{dlp}, and Ketover\n\\cite{ketover}.\n\nIn \\cite{hoffoss-maher} we showed:\n\n\\begin{theorem} \\label{theorem:linear} %\nThere is a constant $K > 0$, such that for any closed hyperbolic\n$3$-manifold,\n\\begin{equation} \nK ( \\text{linear width}(M) ) \\leqslant \\text{Morse area}(M) \\leqslant 4 \\pi\n(\\text{linear width}(M)), \n\\end{equation}\nwhere the right hand bounds hold assuming Theorem\n\\ref{conjecture:pr}.\n\\end{theorem}\n\nIn this paper we show:\n\n\\begin{theorem} \\label{theorem:graph} %\nThere is a constant $K > 0$, such that for any closed hyperbolic\n$3$-manifold,\n\\begin{equation}\n K ( \\text{graph width}(M) ) \\leqslant \\text{Gromov area}(M) \\leqslant 4 \\pi\n(\\text{graph width}(M)), \n\\end{equation}\nwhere the right hand bounds hold assuming Theorem\n\\ref{conjecture:pr}.\n\\end{theorem}\n\nWe also expect there to be upper and lower bounds on topological\ncomplexity in terms of Uryson width, i.e. using diameter instead of\narea, but we do not expect them to be linear. \n\n\n\n\\subsection{Related work in $3$-manifolds}\n\n\n\n\n\nIt may be of interest to compare our results with recent work of\nBrock, Minsky, Namazi and Souto \\cite{bmns} on manifolds with bounded\ncombinatorics. Let $C_1, \\ldots C_n$ be a finite collection of\nhomeomorphism types of compact $3$-manifolds with marked boundary,\nwhich we shall refer to as \\emph{model pieces}, and fix a metric on\neach one. A $3$-manifold $M$ is said to have \\emph{bounded\n combinatorics} if it is a union of (possibly infinitely many) model\npieces glued together by homeomorphisms along their boundaries, with\ncertain restrictions on the gluing maps, which we do not describe in\ndetail here. In particular, a manifold with bounded combinatorics is a\nmanifold of bounded topological width. They show that such a manifold\n$M$ is hyperbolic, with a lower bound on the injectivity radius, and\nthe hyperbolic metric is $K$-bilipshitz homeomorphic to the induced\nmetric on $M$ arising from the metrics on the model pieces. A choice\nof foliation with compact leaves, containing the boundary leaves, on\neach model piece then shows that the metric complexity is linearly\nrelated to the topological complexity for this class of manifolds,\nwhere the linear constants depend on the collection of model pieces.\n\nNote that in our context, a bound on the topological width of the\nmanifold implies that the manifold is a union of compression bodies of\nbounded genus, and there are finitely many of these up to\nhomeomorphism. 
Their result assumes restrictions on the gluing maps,\nbut then shows the resulting manifold is hyperbolic, but the\nbilipshitz constant $K$ depends on the width of $M$, i.e the genus of\nthe compression bodies. We assume that the manifold $M$ is compact and\nhyperbolic, and make no restriction on the gluing maps between the\ncompression bodies, but we show that the linear constants relating\ntopological and metric complexities are independent of the genus of\nthe compression bodies.\n\n\n\n\\subsection{Outline}\n\nIn \\cite{hoffoss-maher} we considered the linear case, in which the\nrange of the Morse function $f \\colon M \\to \\mathbb{R}$ is $\\mathbb{R}$. Such a Morse\nfunction has the property that for each $t \\in \\mathbb{R}$, the pre-image\n$f^{-1}(t)$ is compact and separating. For the case in which the\nrange of the Morse function $f \\colon M \\to X$ is a graph, one may\nconsider the lifted Morse function\n$\\widetilde f \\colon \\widetilde M \\to \\widetilde X$, where\n$\\widetilde X$ is the universal cover of $X$, and $\\widetilde M$ is\nthe corresponding cover of $M$. This lifted Morse function has the\nproperty that for each $t \\in \\widetilde X$, each pre-image\n$\\widetilde f^{-1}(x)$ is compact and separating, and so many of the\narguments from \\cite{hoffoss-maher} go through directly in this case.\nIn particular, we construct polyhedral approximations to the level\nsets of $\\widetilde f$, and show that they have bounded topological\ncomplexity, as we now describe.\n\nA choice of Margulis constant $\\mu$ determines a thick-thin\ndecomposition for $M$, in which the thin part is a disjoint union of\nMargulis tubes. We also choose a Voronoi decomposition determined by\na maximal $\\epsilon$-separated collection of points in $M$. This implies\nthat every Voronoi cell has diameter at most $\\epsilon$, and, given $\\mu$,\nwe may choose $\\epsilon$ small enough such that every Voronoi cell that\nintersects the thick part contains an embedded ball of radius $\\epsilon\/2$.\nThe thick-thin decomposition of $M$, and the Voronoi decomposition of\n$M$, lift to thick-thin decompositions and Voronoi decompositions of\nthe cover $\\widetilde M$. We give the details of this construction in\nSections \\ref{section:tree}, \\ref{section:voronoi} and\n\\ref{section:thickthin}.\n\nA separating surface $F$ in $\\widetilde M$ determines a partition of\nthe Voronoi cells, depending on which side of the surface the majority\nof the volume of the (metric) ball of radius $\\epsilon\/2$ inside the Voronoi\ncell lies. We will call the boundary between these two sets of\nVoronoi cells a \\emph{polyhedral surface} $S$, which is a union of\nfaces of Voronoi cells, and we can think of this as a combinatorial\napproximation to the original surface $F$.\n\nA key observation from \\cite{hoffoss-maher} is that the number of\nfaces of the polyhedral surface in the thick part is bounded by the\narea of $F$. This is because in the thick part of $M$, the metric\nball of radius $\\epsilon\/2$ in each Voronoi cell is embedded, so moving the\nball along a geodesic connecting the centers of the two Voronoi\nproduces at some point a metric ball whose volume is divided exactly\nin two, giving a lower bound to the area of $F$ near that point.\nThere are bounds on the number of vertices and edges of any Voronoi\ncell in terms of $\\epsilon$, so a bound on the number of faces of $S$ in the\nthick part gives a bound on the Euler characteristic of $S$. 
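\n\nSchematically, each face of $S$ in the thick part is witnessed by an embedded ball of radius $\\epsilon\/2$ whose volume is divided exactly in half by $F$, and such a ball forces a definite amount of area of $F$, so, suppressing the bookkeeping of overlapping balls, which is carried out in \\cite{hoffoss-maher}, the face count satisfies a bound of the form\n\\[ \\#\\{ \\text{faces of } S \\text{ in the thick part} \\} \\leqslant K \\, \\text{area}(F), \\]\nwhere the constant $K$ depends only on $\\epsilon$ and $\\mu$; this is stated precisely as Proposition \\ref{cor:bound} below.\n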
We are\nunable to control the number of faces in the thin part, so we cap off\nthe part of $S$ in the thick part with surfaces of bounded Euler\ncharacteristic contained in the thin part. This produces surfaces of\nbounded genus, which we call \\emph{capped surfaces}.\n\nIn this way, the lift of a Morse function\n$\\widetilde f \\colon \\widetilde M \\to \\widetilde X$ gives rise to a\ncollection of polyhedral surfaces in $\\widetilde M$ of bounded genus.\nThese surfaces are constant except at finitely many points of\n$\\widetilde X$, which we call \\emph{cell splitters}, where a level set\ndivides the ball contained in a Voronoi cell exactly in half. We give\nthe details of the construction of the capped surfaces and the\nproperties of the cell splitters in Sections \\ref{section:splitter}\nand \\ref{section:capped}.\n\nThe key step, in Section \\ref{section:equivariant}, is to show that we\nmay construct these surfaces equivariantly in $\\widetilde M$, so they\nproject down to embedded surfaces in $M$, with the same bounds on\ntheir topological complexity.\n\nFinally, in Section \\ref{section:bounded}, by considering the local\nconfiguration near a cell splitter, we show that the regions between\nthe capped surfaces may be constructed using a number of handles\nbounded in terms of the area of the level sets $\\widetilde f^{-1}(t)$,\nand so this the bounds topological complexity of the decomposition of\n$M$ given by the capped surfaces in terms of metric complexity of $M$.\n\nThe bound in the other direction is a direct consequence of the bound\nfrom \\cite{hoffoss-maher}, though we review the argument in the\nSection \\ref{section:sss} for the convenience of the reader.\n\n\n\n\n\n\\subsection{Acknowledgements}\n\nThe authors would like to thank Dick Canary, David Futer, David Gabai,\nJoel Hass, Daniel Ketover, Sadayoshi Kojima, Yair Minksy, Yo'av Rieck\nand Dylan Thurston for helpful conversations, and the Tokyo Institute\nof Technology for its hospitality and support. The second author was\nsupported by the Simons Foundation and PSC-CUNY. This material is\nbased upon work supported by the National Science Foundation under\nGrant No.~DMS-1440140 while the second author was in residence at the\nMathematical Sciences Research Institute in Berkeley, California,\nduring the Fall 2016 semester.\n\n\n\n\\section{Gromov area bounds graph width} \n\\label{section:metric bounds topology}\n\nIn this section we show that we can bound the topological complexity of\nthe manifold in terms of its metric complexity, i.e. we show that\ngraph width is bounded in terms of Gromov area.\n\n\\begin{theorem}\nThere is a constant $K$, such that for any closed hyperbolic\n$3$-manifold $M$,\n\\[ \\text{graph width}(M) \\leqslant K ( \\text{Gromov area}(M) ). \\]\n\\end{theorem}\n\nLet $f \\colon M \\to X$ be a Morse function onto a graph $X$, such that\nthe Gromov area of $f$ is arbitrarily close to the Gromov area of $M$.\nAny metric graph is arbitrarily close to a trivalent metric graph, so\nwe may assume the graph is trivalent. We now show that we may assume\nthe level sets of $f$ are connected.\n\n\\begin{proposition}\nLet $M$ be a Riemannian manifold, and let $f \\colon M \\to X$ be a\nMorse function onto a trivalent graph $X$. 
Then there is a trivalent\ngraph $X'$, and a Morse function $f' \\colon M \\to X'$ with connected\nlevel sets, with $\\text{Gromov area}(f') \\leqslant \\text{Gromov area}(f)$.\n\\end{proposition}\n\n\\begin{proof}\nThe level sets of the function $f$ give a singular foliation of $M$\nwith compact leaves, which we shall call the \\emph{level set\n foliation}, and the leaves of this foliation are precisely the\nconnected components of the pre-images of points in $M$. Consider the\nleaf space $L$ of the level set foliation, i.e. the space obtained\nfrom $M$ by identifying points in the same leaf. As all leaves are\ncompact, the leaf space is Hausdorff. The leaf space is a trivalent\ngraph, with vertices corresponding to vertex singularities, and the\nmaximum area of the pre-images of the quotient map is less than or\nequal to the maximum area of the pre-images of $f$. Therefore, we may\nchoose $f'$ to be the leaf space quotient map $f' \\colon M \\to L$,\nwhich is a Morse function onto a trivalent graph, and has connected\nlevel sets, with the property that the area of the level sets of $f'$\nis bounded by the area of the level sets of $f$.\n\\end{proof}\n\nIn particular, this means that the vertices of $X$ are precisely the\ncritical points of the Morse function $f$ in which a connected level\nset splits into two connected components.\n\n\n\n\\subsection{Morse functions to trees} \n\\label{section:tree}\n\nWe would like to work in the cover $\\widetilde M$ of $M$ corresponding\nto the universal cover $\\widetilde X$ of the graph $X$, which will\nhave the key advantage that all pre-image surfaces are separating in\n$\\widetilde M$. In fact, the induced map on fundamental groups $f_*\n\\colon \\pi_1 M \\to \\pi_1 X$ is surjective, but as we do not use this\nproperty, we omit the proof.\n\nLet $p \\colon \\twid{M} \\rightarrow M$ be the cover of $M$\ncorresponding to the kernel of the induced map $f_* \\colon \\pi_1 M\n\\rightarrow \\pi_1 X $, and let $c:\\twid{X}\\rightarrow X$ be the\nuniversal cover of $X$, so $\\twid{X}$ is a tree. Then the map $f \\circ\np \\colon \\twid{M} \\rightarrow X$ lifts to a map $h = \\twid{f \\circ p}\n: \\twid{M} \\rightarrow \\twid{X}$. Since each leaf $F_t$ in $M$ maps\nto a single point in $X$, the fundamental group of each leaf is\ncontained in $\\ker(f)$. Therefore, each leaf in $M$ lifts to a leaf\nin $\\twid{M}$, and as the cover is regular, the pre-image of a point\n$t \\in \\widetilde{X}$ is a disjoint union of homeomorphic copies of\n$F_{c(t)}$. In particular, the area bound for the leaves $F_t$ in\n$M$ is also an area bound for the leaves $H_t = h^{-1}(t)$ in\n$\\twid{M}$.\n\n\\begin{center}\n\\begin{tikzpicture}[node distance=2cm, auto]\n \\node (A) {$\\twid{M}$};\n \\node (B) [right of=A] {$\\twid{X}$};\n \\node (C) [below of=A] {$M$};\n \\node (D) [below of=B] {$X$};\n \\draw[->] (A) to node {$h = \\twid{f \\circ p}$} (B);\n \\draw[->] (A) to node {$p$} (C);\n \\draw[->] (B) to node {$c$} (D);\n \\draw[->] (C) to node {$f$} (D);\n\\end{tikzpicture}\n\\end{center}\n\nAs $\\widetilde{X}$ is a tree, every point is separating, and so every pre-image\nsurface $H_t = h^{-1}(t)$ is also separating. \n\n\n\n\n\n\\subsection{Voronoi cells} \\label{section:voronoi}\n\nWe will approximate the level sets of $f$ by surfaces consisting of\nfaces of Voronoi cells. We now describe in detail the Voronoi cell\ndecompositions we shall use, and their properties. 
The definitions in\nthis section are taken verbatim from \\cite{hoffoss-maher}, but we\ninclude them in this section for the convenience of the reader.\n\nA \\emph{polygon} in $\\mathbb{H}^3$ is a bounded subset of a hyperbolic plane\nwhose boundary consists of a finite number of geodesic segments. A\n\\emph{polyhedron} in $\\mathbb{H}^3$ is a convex topological $3$-ball in $\\mathbb{H}^3$\nwhose boundary consists of a finite collection of polygons. A\n\\emph{polyhedral cell decomposition} of $\\mathbb{H}^3$ is a cell decomposition\nin which which every $3$-cell is a polyhedron, each $2$-cell is a\npolygon, and the edges are all geodesic segments. We say a cell\ndecomposition of a complete hyperbolic manifold $M$ is\n\\emph{polyhedral} if its pre-image in the universal cover $\\mathbb{H}^3$ is\npolyhedral.\n\nLet $X = \\{ x_i \\}$ be a discrete collection of points in\n$3$-dimensional hyperbolic space $\\mathbb{H}^3$. The Voronoi cell $V_i$\ndetermined by $x_i \\in X$ consists of all points of $M$ which are\ncloser to $x_i$ than any other $x_j \\in X$, i.e.\n\\[ V_i = \\{ x \\in \\mathbb{H}^3 \\mid d(x, x_i) \\leqslant d(x, x_j) \\text{ for all }\nx_j \\in \\widetilde{X} \\}. \\]\nWe shall call $x_i$ the \\emph{center} of the Voronoi cell $V_i$, and\nwe shall write ${\\cal{V}} = \\{ V_i \\}$ for the collection of Voronoi cells\ndetermined by $X$. Voronoi cells are convex sets in $\\mathbb{H}^3$, and hence\ntopological balls. The set of points equidistant from both $x_i$ and\n$x_j$ is a totally geodesic hyperbolic plane in $\\mathbb{H}^3$. A \\emph{face}\n$\\Phi$ of the Voronoi decomposition consists of all points which lie\nin two distinct Voronoi cells $V_i$ and $V_j$, so $\\Phi$ is contained\nin a geodesic plane. An \\emph{edge} $e$ of the Voronoi decomposition\nconsists of all points which lie in three distinct Voronoi cells\n$V_i, V_j$ and $V_k$, which is a geodesic segment, and a \\emph{vertex}\n$v$ is a point lying in four distinct Voronoi cells $V_i, V_j, V_k$\nand $V_l$. By general position, we may assume that all edges of the\nVoronoi decomposition are contained in exactly three distinct faces,\nthe collection of vertices is a discrete set, and there are no points\nwhich lie in more than four distinct Voronoi cells. We shall call such\na Voronoi decomposition a \\emph{regular} Voronoi decomposition, and it\nis a polyhedral decomposition of $\\mathbb{H}^3$. As each edge is $3$-valent,\nand each vertex is $4$-valent, this implies that the dual cell\nstructure is a simplicial triangulation of $\\mathbb{H}^3$, which we shall\nrefer to as the \\emph{dual triangulation}. The dual triangulation may\nbe realised in $\\mathbb{H}^3$ by choosing the vertices to be the centers $x_i$\nof the Voronoi cells and the edges to be geodesic segments connecting\nthe vertices, and we shall always assume that we have done this. In\nthis case the triangles and tetrahedra are geodesic triangles and\ntetrahedra in $\\mathbb{H}^3$.\n\nGiven a collection of points $X = \\{ x_i \\}$ in a hyperbolic\n$3$-manifold $M$, let $\\widetilde{X}$ be the pre-image of $X$ in the universal\ncover of $M$, which is isometric to $\\mathbb{H}^3$. As $\\widetilde{X}$ is equivariant,\nthe corresponding Voronoi cell decomposition ${\\cal{V}}$ of $\\mathbb{H}^3$ is also\nequivariant. The distance condition implies that the interior of each\nVoronoi cell $V$ is mapped down homeomorphically by the covering\nprojection, though the covering projection may identify faces, edges\nor vertices of $V_i$ under projection into $M$. 
By abuse of notation,\nwe shall refer to the resulting polyhedral decomposition of $M$ as the\nVoronoi decomposition ${\\cal{V}}$ of $M$. By general position, we may assume\nthat ${\\cal{V}}$ is regular. The dual triangulation is also equivariant, and\nprojects down to a triangulation of $M$, which we will also refer to\nas the dual triangulation, though this triangulation may no longer be\nsimplicial.\n\nWe shall write $B(x, r)$ for the closed metric ball of radius $r$\nabout $x$ in $M$, i.e.\n\\[ B(x, r) = \\{ y \\in M \\mid d(x, y) \\leqslant r \\}. \\]\nA metric ball in $M$ need not be a topological ball in general. We\nshall write $\\text{inj}_M(x)$ for the injectivity radius of $M$ at $x$,\ni.e. the radius of the largest embedded ball in $M$ centered at $x$.\nThen the injectivity radius of $M$, denoted $\\text{inj}(M)$, is defined to\nbe\n\\[ \\text{inj}(M) = \\inf_{x \\in M} \\text{inj}_M(x). \\]\n\nWe say a collection $\\{ x_i \\}$ of points in $M$ is\n\\emph{$\\epsilon$-separated} if the distance between any pair of points\nis at least $\\epsilon$, i.e. $d(x_i, x_j) \\geqslant \\epsilon$, for all $i\n\\not = j$. Let $\\{ x_i \\}$ be a maximal collection of\n$\\epsilon$-separated points in $M$, and let ${\\cal{V}}$ be the corresponding\nVoronoi cell division of $M$. Since the collection $\\{ x_i \\}$ is\nmaximal, each Voronoi cell is contained in a metric ball of radius\n$\\epsilon$ about its center. Furthermore, if the injectivity radius at the\ncenter $x_i$ is at least $2\\epsilon$, then as the points $x_i$ are distance\nat least $\\epsilon$ apart, each Voronoi cell contains a topological ball of\nradius $\\epsilon\/2$ about its center, i.e.\n\\[ B(x_i, \\epsilon\/2 ) \\subset V_i \\subset B(x_i, \\epsilon). \\]\n\n\\begin{definition}\nLet $M$ be a complete hyperbolic $3$-manifold. We say a Voronoi\ndecomposition ${\\cal{V}}$ is $\\epsilon$-regular, if it is regular, and it arises\nfrom a maximal collection of $\\epsilon$-separated points.\n\\end{definition}\n\nA \\emph{simple arc} in the boundary of a tetrahedron is a properly\nembedded arc in a face of the tetrahedron with endpoints in distinct\nedges. A \\emph{triangle} in a tetrahedron is a properly embedded disc\nwhose boundary is a union of three simple arcs, and a\n\\emph{quadrilateral} is a properly embedded disc whose boundary is the\nunion of four simple arcs. A \\emph{normal surface} in a triangulated\n$3$-manifold is a surface that intersects each tetrahedron in a union\nof normal triangles and quadrilaterals.\n\nOne useful property of $\\epsilon$-regular Voronoi decompositions is that the\nboundary of any union of Voronoi cells is an embedded surface, in fact\nan embedded normal surface in the dual triangulation.\n\n\\begin{proposition} \\cite{hoffoss-maher}*{Proposition 2.2} %\nLet $M$ be a complete hyperbolic manifold without cusps, and let ${\\cal{V}}$\nbe an $\\epsilon$-regular Voronoi decomposition. Let $P$ be a union of\nVoronoi cells in ${\\cal{V}}$, and let $S$ be the boundary of $P$. Then $S$ is\nan embedded surface in $M$.\n\\end{proposition}\n\nIn \\cite{hoffoss-maher} this result is stated for compact hyperbolic\n$3$-manifolds, but the proof works for complete hyperbolic\n$3$-manifolds without cusps.\n\nWe shall say a Voronoi cell $V_i$ with center $x_i$ is an\n\\emph{$\\epsilon$-deep} Voronoi cell if the injectivity radius at $x_i$ is at\nleast $4\\epsilon$, i.e. $\\text{inj}_M(x_i) \\geqslant 4\\epsilon$, and in particular this\nimplies that the metric ball $B(x_i, 3\\epsilon)$ is a topological ball. 
We\nshall also call centers, faces, edges and vertices of $\\epsilon$-deep\nVoronoi cells $\\epsilon$-deep.\nIn the next section we will choose a fixed $\\epsilon$ independent of the\nmanifold $M$, and we will just say \\emph{deep} instead of\n$\\epsilon$-deep. We shall write $\\mathcal{W}$ for the subset of ${\\cal{V}}$ consisting of\ndeep Voronoi cells. If $\\epsilon < \\tfrac{1}{4}\\text{inj}(M)$, then ${\\cal{V}} = \\mathcal{W}$ and\nall Voronoi cells are deep.\n\n\n\nFinally, we recall that there are bounds, which only depend on\n$\\epsilon$, on the number of vertices, edges and faces of a deep\nVoronoi cell.\n\n\\begin{proposition} \\cite{hoffoss-maher}*{Proposition 2.3}\\label{prop:bound} %\nLet $M$ be a complete hyperbolic $3$-manifold with an $\\epsilon$-regular\nVoronoi decomposition ${\\cal{V}}$, and let $\\mathcal{W}$ be the collection of deep\nVoronoi cells. Then there is a number $J$, which only depends on\n$\\epsilon$, such that each deep Voronoi cell $W_i \\in \\mathcal{W}$ has at most $J$\nfaces, edges and vertices.\n\\end{proposition}\n\nAgain, in \\cite{hoffoss-maher}, these results are stated for compact\nhyperbolic $3$-manifolds, but the proofs work for complete hyperbolic\n$3$-manifolds without cusps.\n\n\n\n\n\n\n\\subsection{Margulis tubes} \\label{section:thickthin}\n\n\n\n\n\nWe will use the Margulis Lemma and the \\emph{thick-thin} decomposition\nfor finite volume hyperbolic $3$-manifolds, and we now review these\nresults.\n\nGiven a number $\\mu > 0$, let $X_\\mu = M_{[\\mu, \\infty)}$ be the\n\\emph{thick part} of $M$, i.e. the union of all points $x$ of $M$ with\n$\\text{inj}_M(x) \\geqslant \\mu$. We shall refer to the closure of the complement\nof the thick part as the \\emph{thin part} and denote it by $T_\\mu =\n\\overline{M \\setminus X}$.\n\nThe Margulis Lemma states that there is a constant $\\mu_0 > 0$, such\nthat for any compact hyperbolic $3$-manifold, the thin part is a\ndisjoint union of solid tori, called \\emph{Margulis tubes}, and each\nof these solid tori is a regular metric neighborhood of an embedded\nclosed geodesic of length less than $\\mu_0$. In the case in which $M$\nis complete without cusps, there is an extra possibility, as a\ncomponent of the thin part may also be the universal cover of such a\nsolid torus, and we shall refer to such a component as an\n\\emph{infinite Margulis tube}. We shall call a number $\\mu_0$ for\nwhich this result holds a \\emph{Margulis constant} for\n$\\mathbb{H}^3$. If $\\mu_0$ is a Margulis constant for $\\mathbb{H}^3$,\nthen so is $\\mu$ for any $0 < \\mu < \\mu_0$, and furthermore, given\n$\\mu$ and $\\mu_0$ there is a number $\\delta > 0$ such that\n$N_{\\delta}(T_{\\mu}) \\subseteq T_{\\mu_0}$. For the remainder for this\nsection we shall fix a pair of numbers $(\\mu, \\epsilon)$ such that there are\nMargulis constants $0 < \\mu_1 < \\mu < \\mu_2$, a number $\\delta$ such\nthat $N_{\\delta}(T_{\\mu}) \\subseteq T_{\\mu_2} \\setminus T_{\\mu_1}$,\nand $\\epsilon = \\tfrac{1}{4} \\min \\{ \\mu_1, \\delta \\}$. We shall call $(\\mu,\n\\epsilon)$ a choice of \\emph{MV}-constants for $\\mathbb{H}^3$.\n\nLet $(\\mu, \\epsilon)$ be a choice of $MV$-constants, and consider an\n$\\epsilon$-regular Voronoi decomposition of $M$. 
The fact that\n$N_{\\delta}(T_{\\mu}) \\subseteq T_{\\mu_2} \\setminus T_{\\mu_1}$ means\nthat we may adjust the boundary of $T_{\\mu}$ by an arbitrarily small\nisotopy so that it is transverse to the Voronoi cells, and we will\nassume that we have done this for the remainder of this section. Our\nchoice of $\\epsilon$ implies that the thick part $X_\\mu$ is contained in the\nVoronoi cells in the deep part, i.e. $X_\\mu \\subset \\bigcup_{W_i \\in\n \\mathcal{W}} W_i$, so in particular $\\partial X_\\mu = \\partial T_\\mu$ is\ncontained in the deep part. Furthermore, each deep Voronoi cell hits\nat most one component of $T_\\mu$.\n\n\n\n\\subsection{Cell splitters}\\label{section:splitter}\n\nThe polyhedral surfaces we construct will be constant, except for a\ndiscrete collection of points in $Y$, which roughly speaking\ncorrespond to points $t \\in Y$ for which the level set $f^{-1}(t)$\ndivide a Voronoi cell in half. For technical reasons, we use points\nwhich divide a ball of fixed size in the Voronoi cell in half, as we\nnow describe.\n\nLet $t$ be a point in a trivalent tree $Y$. We shall write $Y_t^{c_i}$\nfor the closures of the connected components of $Y \\setminus t$, and\nwe shall call these the \\emph{complements} of $t$. If $t$ lies in the\ninterior of an edge, then there are precisely two complements, while\nif $t$ is a vertex, there are precisely three complements.\n\nLet $M$ be a complete hyperbolic $3$-manifold without cusps, and let\n$h \\colon M \\rightarrow Y$ be a Morse function onto a trivalent tree\n$Y$. Given $t \\in Y$, let $H_t^{c_i} = h^{-1}(Y_t^{c_i})$, and we\nshall refer to these as the \\emph{complements} of $H_t$ in $M$. As\nbefore, there are either two or three complementary regions depending\non whether $t$ lies in the interior of an edge, or is a vertex in $Y$.\n\n\\begin{definition} \\label{definition:splitter} %\nLet $M$ be a complete hyperbolic $3$-manifold without cusps, with an\n$\\epsilon$-regular Voronoi decomposition ${\\cal{V}}$. Let $h \\colon M \\to Y$ be a\nMorse function to a tree $Y$, and let $V$ be a Voronoi cell with\ncenter $x$. Suppose that a point $t \\in Y$ has the property that for\neach complementary region $H_t^{c_i}$, the volume of\n$H_t^{c_i} \\cap B(x, \\epsilon\/2) \\cap V$ is at most half the volume of\nthe topological ball $B(x, \\epsilon\/2) \\cap V$. Then we say that $t$\nis a \\emph{cell splitter} for the Voronoi cell $V$.\n\\end{definition}\n\n\\begin{proposition}\nLet $M$ be a complete hyperbolic $3$-manifold without cusps, with an\n$\\epsilon$-regular Voronoi decomposition ${\\cal{V}}$. Let $h \\colon M \\to Y$ be a\nMorse function to a tree, and let $V$ be a Voronoi cell with center\n$x$. Then there is a unique cell splitter $t \\in Y$ for $V$.\n\\end{proposition}\n\n\\begin{proof}\nWe first show existence. Let $B$ be the topological ball $B(x, \\epsilon\/2)\n\\cap V$, and let $v$ be the volume of this ball. Consider $h(B)\n\\subset Y$. If there is a vertex of $Y$ which is a cell splitter,\nthen we are done. Otherwise, suppose no vertex of $h(B)$ is a cell\nsplitter. If $t$ is a vertex in $h(B)$ which is not a cell splitter,\nthen there is at least one complementary region $Y_t^{c_i}$ such that\n$H_t^{c_i} \\cap B(x, \\epsilon\/2) \\cap V$ has volume more than\n$\\tfrac{1}{2}v$, and $Y_t^{c_i} \\cap h(B)$ has at least one fewer\nvertex. 
So proceeding by induction, we may reduce to the case in which\n$h(B)$ contains an interval $I$ with no vertices such that $h^{-1}(I)\n\\cap B(x, \\epsilon\/2) \\cap V$ has volume at least $\\tfrac{1}{2} v$. In this\ncase, let $t_0$ and $t_1$ be the endpoints of $I$, and consider\n$h^{-1}([t_0, s])$, for $s \\in I$. When $s = t_0$, this has volume\nless than $\\tfrac{1}{2} v$, and has volume greater than\n$\\tfrac{1}{2}v$ when $s = t_1$. As the volume changes continuously\nwith $s$, there is a point $t'$ such that $H_{t'}$ divides $B$ into\ntwo regions, each of which has volume exactly $\\tfrac{1}{2}v$, so $t'$\nis a cell splitter for $V$.\n\nWe now show uniqueness. First suppose $t$ is a cell splitter which is\nnot a vertex. Then there are precisely two complementary regions\n$H_t^{c_1}$ and $H_t^{c_2}$, each of which must have exactly half the\nvolume of $B(x, \\epsilon\/2) \\cap V$, and we shall denote this volume by\n$v$. Any other point $t'$ has a complementary region which contains at\nleast one of these complements, and so has volume greater than\n$\\tfrac{1}{2} v$, and so can not be a cell splitter.\n\nFinally suppose $t$ is a cell splitter which is a vertex. Then there\nare three complements $H_t^{c_1}, H_t^{c_2}$ and $H_t^{c_3}$, each of\nwhich has volume at most $\\tfrac{1}{2}v$. As each region has volume at\nmost $\\tfrac{1}{2}v$, any two regions must have total volume at least\n$\\tfrac{1}{2}v$. Any other point $t' \\in Y$ must have a complementary\nregion which contains at least two of the complements of $H_t$, and so\nhas a complement with volume strictly greater than $\\tfrac{1}{2}v$,\nand so can not be a cell splitter.\n\\end{proof}\n\n\\begin{definition} \\label{definition:generic} %\nWe say that a Morse function $f \\colon M \\to Y$ to a tree $Y$ is\n\\emph{generic} with respect to a Voronoi decomposition $\\mathcal{V}$\nif the cell splitters for distinct Voronoi cells $V_i$ correspond to\ndistinct points $t_i \\in Y$. We say a point $t \\in Y$ is\n\\emph{generic} if it is not a critical point for the Morse function,\nand is not a cell splitter.\n\\end{definition}\n\nWe may assume that $f$ is generic by an arbitrarily small perturbation\nof $f$, and we shall always assume that $f$ is generic from now on.\nFinally, we remark that a trivalent vertex in $Y$ is not necessarily a\ncell splitter.\n\n\n\n\n\\subsection{Polyhedral and capped surfaces} \\label{section:capped}\n\n\nLet $Q$ be a $3$-dimensional submanifold of a complete hyperbolic\n$3$-manifold $M$ without cusps, with boundary an embedded separating\nsurface $F$. In this section we show how to approximate $Q$ by a union\nof Voronoi cells, which in turn gives an approximation to $F$ by an\nembedded surface $S$ which is a union of faces of Voronoi cells.\n\nWe say a region $R$ is \\emph{generic} if for every Voronoi cell $V_i$\nwith center $x_i$, the region consisting of the intersection of\n$B(x_i, \\epsilon\/2)$ with the interior of $V_i$ does not have exactly half\nits volume lying in $R$. We say a separating surface $F$ in $M$ is\n\\emph{generic} if it bounds a generic region.\n\nLet $P$ be the collection of Voronoi cells for which at least half of\nthe volume of $B(x_i, \\epsilon\/2) \\cap \\text{interior}(V_i)$ lies in $Q$. We\nsay the $P$ is the \\emph{polyhedral region} determined by $Q$. The\npolyhedral region $P$ may be empty, even if $Q$ is non-empty. 
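\n\nIn symbols, writing $B_i = B(x_i, \\epsilon\/2) \\cap \\text{interior}(V_i)$ for the ball associated to the Voronoi cell $V_i$, the polyhedral region determined by $Q$ is the union\n\\[ P = \\bigcup \\{ V_i \\in {\\cal{V}} \\mid \\text{vol}( B_i \\cap Q ) \\geqslant \\tfrac{1}{2} \\text{vol}( B_i ) \\}, \\]\nand genericity of $Q$ ensures that the inequality is never an equality.\n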
The\nboundary of $P$ is a polyhedral surface $S$, which we shall call the\n\\emph{polyhedral surface} associated to $F = \\partial Q$, and is a\nnormal surface in the dual triangulation. We will use the following\nbound on the number of faces and boundary components of the\nintersection of the polyhedral surface $S$ with the thick part of the\nmanifold, in terms of the area of the corresponding surface $F$. If\n$S$ is a surface, we will write $\\norm{\\partial S}$ for the number of\nboundary components of $S$, and if $S'$ is a subset of a polyhedral\nsurface $S$, we will write $\\| S' \\|$ for the number of faces of $S$\nwhich intersect $S'$.\n\n\n\n\\begin{proposition} \n\\cite{hoffoss-maher}*{Proposition 2.10, 2.13} \\label{cor:bound} %\nLet $(\\mu, \\epsilon)$ be $MV$-constants, and let $M$ be a complete\nhyperbolic $3$-manifold without cusps, with an $\\epsilon$-regular Voronoi\ndecomposition ${\\cal{V}}$ with deep part $\\mathcal{W}$ and thick part $X_\\mu$. Then\nthere is a constant $K$, which only depends on the $MV$-constants,\nsuch that for any generic embedded separating surface $F$ in $M$, the\ncorresponding polyhedral surface $S$ satisfies:\n\\[ \\| S \\cap X_\\mu \\| \\leqslant K \\text{area}(F), \\]\nand\n\\[ \\norm{\\partial ( S \\cap X_\\mu ) } \\leqslant K \\text{area}(F). \\]\n\\end{proposition}\n\nIn \\cite{hoffoss-maher}, this result is stated for the level set of a\nMorse function $F$ on a compact hyperbolic manifold, and one may then\nobserve that every separating surface is the level set of some Morse\nfunction, though in fact, the proof only uses the fact that $F$ is\nseparating. In \\cite{hoffoss-maher}*{Proposition 2.10} the bound is\nstated in terms of $S \\cap \\mathcal{W}$. However, as $S \\cap X_\\mu \\subset S\n\\cap \\mathcal{W}$, the stated bound follows immediately.\n\n\nFor a polyhedral surface $S$, each boundary component of the surface\n$S \\cap X_\\mu$ is contained in $\\partial T_\\mu$, so $S \\cap X_\\mu$ is a\nproperly embedded surface in $X_\\mu$. We now wish to cap off the\nproperly embedded surfaces $S \\cap X_\\mu$ with properly embedded\nsurfaces in $T_\\mu$ to form closed surfaces. We warn the reader that\nthe following definition differs slightly from the definition in\n\\cite{hoffoss-maher}, as we extend the definition to include the case\nin which $T_\\mu$ has infinite components.\n\n\\begin{definition}\nA separating surface $F$ in $M$ gives rise to a polyhedral surface\n$S$, which meets $\\partial T_\\mu$ transversely, and intersects\n$\\partial T_\\mu$ in a collection of simple closed curves which is\nseparating in $\\partial T_\\mu$. We replace $S$ inside the thin part\nby surfaces $\\{ U_i \\}$ which we now describe. For each torus\ncomponent $T_i$ in $\\partial T_\\mu$ choose a subsurface $U_i$ bounded\nby $S \\cap \\partial T_i$. For each infinite component $T_i$, choose a\nnot necessarily connected surface $U_i$ as follows: for each essential\ncurve in the annulus $\\partial T_i$ choose a disc it bounds in $T_i$,\nand then let $U_i$ be the union of these discs with the planar surface\nbounded by the remaining inessential curves. 
We call the resulting\nsurface a \\emph{capped surface} $S^+ = (S \\cap X_\\mu) \\cup \\bigcup_i\nU_i$.\n\\end{definition}\n\n\nWe will use the following property of the capped surfaces.\n\n\\begin{proposition} \\label{prop:capped}\nLet $(\\mu, \\epsilon)$ be $MV$-constants, and let $M$ be a complete\nhyperbolic $3$-manifold without cusps, with thin part $T_\\mu$, and\nwith with an $\\epsilon$-regular Voronoi decomposition ${\\cal{V}}$. Then there is a\nconstant $K$, which only depends on $\\epsilon$, such that for any generic\nembedded separating surface $F$ in $M$, the corresponding capped\nsurface $S^+$ satisfies:\n\\[ \\text{genus}(S^+) \\leqslant K \\text{area}(F). \\]\n\\end{proposition}\n\nThe proof of this result is essentially the same as the proof of\n\\cite{hoffoss-maher}*{Proposition 2.14}, and instead of repeating the\nentire argument, we explain the minor extension needed. The only\ndifference is that \\cite{hoffoss-maher}*{Proposition 2.14} is stated\nfor closed hyperbolic manifolds, whereas Proposition \\ref{prop:capped}\nis stated for complete hyperbolic manifolds without cusps, so the thin\npart of $M$ may have infinite Margulis tubes. This makes no\ndifference to the estimates of the number of faces and boundary\ncomponents of the resulting polyhedral surface in terms of the area of\nthe original surface. The extension of the definition of capped\nsurface to the infinite case only involves capping off with planar\nsurfaces, so the same genus bounds hold.\n\n\n\n\n\n\n\\subsection{Disjoint equivariant surfaces}\\label{section:equivariant}\n\nEach collection of points $t_i$ in $Y$ corresponds to a collection\n$S^+_i$ of capped surfaces. In this section we show that if the\ncollection of points is equivariant, then we may arrange for the\ncapped surfaces to be disjoint and equivariant.\n\nLet $M$ be a $3$-manifold which admits a group of covering\ntranslations $G$. We say a subset $U \\subset M$ is \\emph{equivariant}\nif it is preserved by $G$. We say a Voronoi decomposition ${\\cal{V}}$ of $M$\nis \\emph{equivariant} if the centers of the Voronoi cells form an\nequivariant set in $M$.\n\nLet $W$ be an equivariant collection of points in $\\widetilde X$, none\nof which are either cell splitters or critical points of the Morse\nfunction $h$. We say two points $t_i$, $t_j$ in $W$ are\n\\emph{adjacent} if the geodesic connecting them in the tree $\\widetilde{X}$ does\nnot contain any other point of $W$. We may choose $W$ such that the\ngeodesic in $\\widetilde{X}$ connecting any pair of adjacent points in $W$\ncontains either a single cell splitter, a single trivalent trivalent\nvertex of $\\widetilde{X}$, or neither of these two types of points.\n\nConsider the collection $S$ of polyhedral surfaces $S_t$, as $t$ runs\nover $W$. As the collection $W$ is equivariant, $S$ is also\nequivariant. Note that although each surface in $S$ is individually\nembedded, each surface in $S$ will share many common faces with other\nsurfaces in $S$. We will now make this collection simultaneously\nequivariantly disjoint, so that we may push them down to $M$ to obtain\na collection of disjoint surfaces which will act as our splitting\nsurfaces in a graph splitting of $M$.\n\n\\begin{proposition}\nLet $M$ be a closed hyperbolic 3-manifold of injectivity radius at\nleast $2\\epsilon$, with an $\\epsilon$-regular Voronoi decomposition ${\\cal{V}}$, and a\ngeneric Morse function $f : M \\rightarrow X$ onto a trivalent graph\n$X$ with connected level sets. 
Let $p \\colon \\twid{M} \\rightarrow M$\nbe the cover of $M$ corresponding to the kernel of the induced map\n$f_* \\colon \\pi_1 M \\rightarrow \\pi_1 X $, and let\n$c:\\twid{X}\\rightarrow X$ be the universal cover of $X$. Let $W$ be a\ndiscrete equivariant collection of points in $\\widetilde X$. Then the\ncollection of polyhedral surfaces $\\{ S_w \\mid w \\in W \\}$ in\n$\\twid{M}$ is equivariantly isotopic to a disjoint collection of\nsurfaces $\\{ \\Sigma_w \\mid w \\in W \\}$, and furthermore this equivariant\nisotopy may be chosen to be supported in a neighborhood of the\n2-skeleton of the induced Voronoi decomposition of $\\widetilde M$.\n\\end{proposition}\n\n\n\\begin{proof}\nWe now give a recipe for constructing surfaces $\\Sigma_t$, for $t \\in\nW$. Each individual surface $\\Sigma_t$ will be isotopic to the original\n$S_t$, but the union of the surfaces $\\Sigma_t$ will be equivariantly\ndisjointly embedded in $\\twid{M}$.\n\nWe first show that there is a canonical ordering of the polyhedral\nsurfaces $\\Sigma_t$ which share a common face. Let $\\Phi$ be a face of a\nVoronoi cell in $\\widetilde M$, and let $V(x_1)$ and $V(x_2)$ be the\nadjacent Voronoi cells. Let $t_1$ and $t_2$ be cell splitters for\n$V(x_1)$ and $V(x_2)$, so that $H_{t_i} = h^{-1}(t_i)$ is the surface\nwhich divides $B_{\\epsilon\/2}(x_i)$ precisely in half, for $i = 1,2$.\n\nWe say a point in $\\widetilde{X}$ is \\emph{regular} if it is not a\ncell splitter, and not a critical point for the Morse function $h$.\n\n\\begin{claim}\nThe collection of regular points in $\\widetilde X$ corresponding to\npolyhedral surfaces $\\Sigma_t$ which contain the face $\\Phi$ is precisely\nthe regular points contained in the geodesic in $\\widetilde{X}$ from $t_1$ to\n$t_2$.\n\\end{claim}\n\n\\begin{proof}\nThe two embedded surfaces $H_{t_1}$ and $H_{t_2}$ divide $\\widetilde M$ into\nthree parts; call them $A, B$ and $C$, with $A$ the part only hitting\n$H_{t_1}$, and $B$ the part hitting both $H_{t_1}$ and $H_{t_2}$.\n\nLet $\\gamma$ be the geodesic in $\\widetilde{X}$ from $t_1$ to $t_2$. Each point\n$t$ in $\\gamma$ corresponds to a surface $H_t$ dividing $\\widetilde M$ at most\n3 parts, one of which contains $A$, and another containing $C$. Let\n$P_t$ be the part containing $A$. Then, writing $\\norm{A}$ for the\nvolume of a region $A$, \n\\[ \\norm{ B_{\\epsilon\/2}(x_1) \\cap P_t} \\geqslant \\norm{B_{\\epsilon\/2}(x_1)\n \\cap A} \\geqslant \\frac{1}{2} \\norm{ B_{\\epsilon\/2}(x_1) }\n\\]\nand \n\\[ \\norm{ B_{\\epsilon\/2}(x_2)\n \\setminus P_t} \\geqslant \\norm{B_{\\epsilon\/2}(x_2) \\setminus C} \\geqslant \\frac{1}{2} \\norm{\n B_{\\epsilon\/2}(x_2) }.\n\\]\nTherefore the two Voronoi cells $V(x_1)$ and $V(x_2)$ lie in different\npartitions of the Voronoi cells determined by $t$, and so $\\Phi$ lies\nin the polyhedral surface $\\Sigma_t$.\n\nConversely, suppose $t$ does not lie on the path $\\gamma$, then $t$\ndivides $\\widetilde{X}$ into at most three parts, and $\\gamma$ is contained in\nexactly one of these parts. This means that $H_{t_1}$ and $H_{t_2}$\nare contained in the same complementary component of $H_t$, and so $\\Phi$\ncannot be a face of $\\Sigma_t$.\n\\end{proof}\n\nIt suffices to show that we can isotope the normal surfaces,\npreserving the fact that they are normal, so that they have disjoint\nintersection in the $2$-skeleton of the dual triangulation.\n\nLet $e$ be an edge of the dual triangulation, with vertices $x_1$ and\n$x_2$, with corresponding cell splitters $t_1$ and $t_2$. 
A normal\nsurface $S_i$ intersects $e$ if and only if the corresponding point\n$w_i$ lies in the geodesic $[t_1, t_2]$ in $\\widetilde{X}$ connecting $t_1$ and\n$t_2$. The points $w_i$ in $e$ therefore inherit an order from $[t_1,\nt_2]$, and we may isotope the normal surfaces by a normal isotopy so\nthat they intersect the edge $e$ in the same order. As the interiors\nof each edge have disjoint images under the covering translations, and\nthe collection of edges is equivariant, we may do this normal isotopy\nequivariantly.\n\nLet $\\Phi$ be a triangle in the dual triangulation, with vertices\n$x_1, x_2$ and $x_3$, and corresponding cell splitters $t_1, t_2$ and\n$t_3$. As above, the collection of normal surfaces which intersect an\nedge $[x_i, x_j]$ of $\\Phi$ corresponds to those $w_i$ lying in the\ngeodesic $[t_i, t_j]$ in $\\widetilde{X}$. The union of the three geodesics\n$[t_i, t_j]$ forms a minimal spanning tree for the three cell\nsplitters in $\\widetilde{X}$. Let $t_0$ be the center of this tree, i.e. the\nunique point that lies in all three geodesics. Note that the tree may\nbe degenerate, so $t_0$ may be equal to one of the other vertices.\n\n\\begin{figure}[H]\n\\begin{center} \n\\begin{tikzpicture}\n\n\\draw (0, 0) node [below] {$x_1$} -- \n (1, 4) node [right] {$x_3$} node [midway, right] {$\\Phi$}-- \n (-2, 2) node [left] {$x_2$}-- cycle;\n\n\n\n\\draw [thick] ( $(0,0)!0.833!(1,4)$ ) -- node [midway, below] {$S_1$}\n($(-2,2)!0.833!(1,4)$ );\n\n\\draw [thick] ( $(0,0)!0.666!(1,4)$ ) -- node [midway, below] {$S_2$}\n($(-2,2)!0.666!(1,4)$ );\n\n\\draw [thick] ( $(-2,2)!0.25!(1,4)$ ) -- node\n[midway, right] {$S_3$} ($(0,0)!0.75!(-2,2)$ );\n\n\\draw [thick] ( $(0,0)!0.25!(1,4)$ ) -- node [midway, above] {$S_4$}\n($(0,0)!0.25!(-2,2)$ );\n\n\\begin{scope}[xshift=1cm, yshift=0.5cm, scale=1.4]\n\\filldraw[black] (3,0) circle (0.05cm) node [right]{$t_1$};\n\\filldraw[black] (2.25,2) circle (0.05cm) node [left]{$t_2$};\n\\filldraw[black] (3.75,2) circle (0.05cm) node [right]{$t_3$};\n\\filldraw[black] (3,1) circle (0.05cm) node [right]{$t_0$};\n\\filldraw[black] ( $(3.75,2)!0.333!(3, 1)$ ) circle (0.05cm) node [right] {$w_1$};\n\\filldraw[black] ( $(3.75,2)!0.666!(3, 1)$ ) circle (0.05cm) node [right] {$w_2$};\n\\filldraw[black] ( $(2.25,2)!0.5!(3, 1)$ ) circle (0.05cm) node [left] {$w_3$};\n\\filldraw[black] (3,0.5) circle (0.05cm) node [right]{$w_4$};\n\n\\draw (3,0) -- (3, 1) -- (3.75, 2);\n\n\\draw (2.25, 2) -- (3, 1); \n\n\\end{scope}\n\n\\end{tikzpicture}%\n\\end{center} \n\\caption{Example of normal surfaces intersecting a face of the dual\n triangulation.}\n\\label{pic:normal tree} \n\\end{figure}\n\nNormal arcs parallel to the edge $[x_2, x_3]$ correspond to surfaces\nwhich hit both of the edges $[x_1, x_2]$ and $[x_1, x_3]$, so\ncorrespond points $w_i$ which lie in both $[t_1, t_2]$ and $[t_1,\nt_3]$, and similarly for the other two cases. The intersection of\nthese two geodesics in $\\widetilde{X}$ is $[t_1, t_0]$, and so the corresponding\nsurfaces appear in the same order on each of the edges in $\\Phi$, and so\nthe arcs are disjoint. The same argument applies to each vertex of\n$\\Phi$.\n\\end{proof}\n\nAs the resulting surfaces in $\\twid{M}$ are disjoint and equivariant,\nthey project down to disjoint surfaces in $M$.\n\nWe now show that the polyhedral surfaces, and their complements,\nproject down homeomorphically into $M$. 
As the level set surfaces lift\nhomeomorphically to $\\twid{M}$, the area bound for the level sets of\n$f$ is also an area bound for the level sets of $h$. Therefore, each\npolyhedral surface contains a bounded number of faces. The deck\ntransformation group of the universal cover of a graph is equal to the\nfundamental group of the graph, which is a free group, so the orbit of\nany face consists of infinitely many disjoint translates. If two lie\nin the same connected component of a polyhedral surface, then that\npath corresponds to a covering translation, which has infinite order,\nso in fact the connected component contains infinitely many faces,\nwhich contradicts the fact that there is a bound on the number of\nfaces in each component.\n\nEach complementary region is compact, so the same argument applied to\nthe complementary regions shows that they are all mapped down\nhomeomorphically as well.\n\n\n\n\n\\subsection{Bounded handles}\\label{section:bounded}\n\nWe now bound the number of handles in a complementary region of the\ncapped surfaces, which contains a single cell splitter. The following\nresult will complete the proof of the left hand inequality in Theorem\n\\ref{theorem:graph}.\n\n\\begin{proposition}\\label{bounded_handles} \nLet $(\\mu, \\epsilon)$ be $MV$-constants, and let $M$ be a complete\nhyperbolic $3$-manifold without cusps, with an $\\epsilon$-regular Voronoi\ndecomposition ${\\cal{V}}$, and a generic Morse function $h \\colon M \\to Y$,\nwhere $Y$ is a tree. Let $\\{ u_i \\}$ be a collection of points in $Y$,\nwhich separate the cell splitters in $Y$, and let $\\{ S^+_i \\}$ be the\ncorresponding collection of capped surfaces. If $P$ is a\ncomplementary component of the capped surfaces in $M$, the region $P$\nhas at most three boundary components, $S^+_{i_1}, S^+_{i_2}$ and\n$S^+_{i_3}$ say, where the final surface may be empty. Then $P$ is\nhomeomorphic to a manifold with a handle decomposition containing at\nmost\n\\[ K \\text{Gromov area} ( M ) \\]\nhandles, where $K$ depends only on the $MV$-constants.\n\\end{proposition}\n\nWe start with the observation that attaching a compression body $P$ to\na $3$-manifold $Q$ by a subsurface $S$ of the upper boundary component\nof $P$, requires a number of handles which is bounded in terms of the\nHeegaard genus of $P$, and the number of boundary components of the\nattaching surface.\n\n\\begin{proposition}\\label{bounded_handles3}\\cite{hoffoss-maher}*{Proposition 2.16}\nLet $Q$ be a compact $3$-manifold with boundary, and let $R = Q \\cup\nP$, where $P$ is a compression body of genus $g$, attached to $Q$ by a\nhomeomorphism along a (possibly disconnected) subsurface $S$ contained\nin the upper boundary component of $P$ of genus $g$. Then $R$ is\nhomeomorphic to a $3$-manifold obtained from $Q$ by the addition of at\nmost $(4g + 2 \\norm{\\partial S})$ $1$-and $2$-handles, where\n$\\norm{\\partial S}$ is the number of boundary components of $S$.\n\\end{proposition}\n\n\\begin{proof}[Proof (of Proposition \\ref{bounded_handles}).]\nIf $P$ has two boundary components, then the argument is exactly the\nsame as \\cite{hoffoss-maher}*{Proposition 2.15}, so we now consider the\ncase in which $P$ has three boundary components, which, without loss\nof generality we may relabel $S^+_{1}, S^+_{2}$ and $S^+_3$. Let $t$\nbe the cell splitter corresponding to the region $P$, and let $V$ be\nthe corresponding Voronoi cell. 
As $P$ has three boundary components,\n$t$ must be a vertex of $Y$.\n\nWe first consider the case in which the Voronoi region $V$\ncorresponding to the cell splitter $t$ in $h(P)$ is disjoint from the\nthin part $T_\\mu$. Consider the three polyhedral surfaces $S_1, S_2$\nand $S_3$, corresponding to the three capped surfaces, and let\n$\\Sigma = \\cup S_i \\cup V$ be the union of the polyhedral surfaces,\ntogether with the Voronoi cell $V$. By Proposition \\ref{cor:bound},\nthere is a constant $K$, which only depends on the $MV$-constants,\nsuch that the number of faces of $\\Sigma$ in the thick part is at most\n$3 K_1 \\text{Gromov area}(M)$, i.e.\n\\[ \\| \\Sigma \\cap X_\\mu \\| \\leqslant 3 K_1 \\text{Gromov area}(M), \\]\nwhere $K_1$ is the constant from Proposition \\ref{cor:bound}. The\nnumber of boundary components of each surface $S_i \\cap X_\\mu$ is also\nbounded by Proposition \\ref{cor:bound}, and by Proposition\n\\ref{prop:bound}, the Voronoi cell $V$ has a bounded number $J$ of\nvertices, edges and faces, where $J$ depends only on the\n$MV$-constants. In particular, there is a constant $A$, depending\nonly on the $MV$-constants, such that $P \\cap X_\\mu$ has a handle\nstructure with at most $A ( \\text{Gromov area}(M) )$ handles.\n\nTo bound the number of handles contained in $P$, we observe that $P$\nis a regular neighbourhood of the $3$-complex obtained from capping\noff the boundary components of $\\Sigma \\cap X_\\mu$, using the parts of the\ncapped surfaces in the thin part, i.e. the union of the components of\n$S^+_i \\cap T_\\mu$ over all three capped surfaces. Each component of\n$S^+_i \\cap T_\\mu$ has genus at most one, and the number of boundary\ncomponents of $\\Sigma \\cap X_\\mu$ is bounded linearly in terms of\n$\\text{Gromov area}(M)$, therefore, there is a constant $B$, depending\nonly on the $MV$-constants, such that the number of handles in $P$ is\nat most $B ( \\text{Gromov area}(M) )$, as required. \n\nWe now consider the case in which the region $P$ has image $h(P)$ in\n$Y$ which contains the cell splitter $t$, and the corresponding\nVoronoi cell $V$ intersects $T_\\mu$. In this case, the connected\ncomponents of $V \\cap X_\\mu$ need not be topological balls, and there\nmay be connected components of $P \\cap T_\\mu$ whose boundary components\nare not parallel.\n\nThe connected components of $V \\cap X_\\mu$ are handlebodies of bounded\ngenus, as show in the following result of Kobayashi and Rieck\n\\cite{kobayashi-rieck}. We state a simplified version of their result\nwhich suffices for our purposes, see \\cite{hoffoss-maher} for further\ndetails.\n\n\\begin{proposition}\\cite{kobayashi-rieck}\nLet $\\mu$ be a Margulis constant for $\\mathbb{H}^3$, $M$ be a finite\nvolume hyperbolic $3$-manifold, let $0 < \\epsilon < \\mu$, and let ${\\cal{V}}$ be a\nregular Voronoi decomposition of $M$ arising from a maximal collection\nof $\\epsilon$-separated points. 
Then there is a number $G$, depending only\non $\\mu$ and $\\epsilon$, such that for any Voronoi cell $V_i$, there are at\nmost $G$ connected components of $V_i \\cap X_\\mu$, each of which is a\nhandlebody of genus at most $G$, attached to $T_\\mu$ by a surface with\nat most $G$ boundary components.\n\\end{proposition}\n\nRecall that attaching a handlebody of genus $G$ to a $3$-manifold\nalong a subsurface of the boundary with at most $G$ boundary\ncomponents requires at most $6G$ handles:\n\n\\begin{proposition}\\cite{hoffoss-maher}*{Proposition 2.16}\nLet $Q$ be a compact $3$-manifold with boundary and let $R = Q \\cup P$,\nwhere $P$ is a compression body of genus $g$, attached to $Q$ by a\nhomeomorphism along a (possibly disconnected) subsurface $S$ contained\nin the upper boundary component of $P$ of genus $g$. Then $R$ is\nhomeomorphic to a $3$-manifold obtained from $Q$ by the addition of at\nmost $(4g + 2 \\norm{\\partial S})$ $1$- and $2$-handles,\nwhere $\\norm{\\partial S}$ is the number of boundary components of $S$.\n\\end{proposition}\n\nTherefore, adding a Voronoi cell which intersects $\\partial T_\\mu$ may\nbe realized by at most $6 G^2$ handles.\n\nIf the Voronoi cell intersects $T_\\mu$, then there may be components\nof $P \\cap T_\\mu$ whose boundary surfaces are not parallel. This case\nis considered in the proof of \\cite{hoffoss-maher}*{Proposition 2.15},\nwhen the manifold has no infinite Margulis tubes, so it suffices to\nconsider the case of a component of $P$ contained in an infinite\nMargulis tube. However, the case of an infinite Margulis tube in\nwhich neither surface is an essential disc is the same as the ordinary\nMargulis tube case, and if both surfaces are essential discs, then they are\nparallel. Finally, if exactly one surface is an essential disc, then\nthe other surface lies in the same homology class, via the component\nof $P$ in the infinite Margulis tube, and so, after surgering\ninessential boundary components, is also an essential disc. However,\nthe number of boundary components is at most $K_1 \\text{Gromov\n area}(M)$, and so the total number of extra handles over all\ncomponents of $P$ in the infinite Margulis tubes is also bounded by\n$K_1 \\text{Gromov area}(M)$.\n\nWe may choose the constant $K$ to be the maximum of the constants\narising from the two cases considered above, thus completing the proof\nof Proposition \\ref{bounded_handles}.\n\\end{proof}\n\n\n\n\n\n\\section{Topological complexity bounds metric complexity}\\label{section:sss}\n\n\nIn this section we will show bounds for metric complexity in terms of\ntopological complexity, i.e. the right hand inequality in Theorem\n\\ref{theorem:graph}, assuming the Pitts and Rubinstein result, Theorem\n\\ref{conjecture:pr}.\n\nWe start by reminding the reader of the topological properties of thin\nposition for generalized Heegaard splittings, as shown by Scharlemann\nand Thompson \\cite{st} for the linear case and Saito, Scharlemann and\nSchultens \\cite{sss} for the graph case.\n\n\\begin{theorem} \\cites{st, sss} \\label{theorem:sss} %\nLet $H$ be a graph splitting that is in thin position. Then every even\nsurface is incompressible in $M$ and the odd surfaces form strongly\nirreducible Heegaard surfaces for the components of $M$ cut along the\neven surfaces.\n\\end{theorem}\n\nWe will use the following result due to Gabai and Colding\n\\cite{colding-gabai}*{Appendix A}, building on recent work of Colding\nand Minicozzi \\cite{colding-minicozzi}. 
It is not stated explicitly\nin their paper, but see \\cite{hoffoss-maher}*{Theorem 3.2} for further\ndetails.\n\n\\begin{theorem} \\cite{colding-gabai} \\label{theorem:minimal}\nLet $M$ be a hyperbolic manifold, with (possibly empty) least area\nboundary, with a minimal Heegaard splitting $H$ of genus $g$. Then,\nassuming Theorem \\ref{conjecture:pr}, the manifold $M$ has a\n(possibly singular) foliation by compact leaves, containing the\nboundary surfaces as leaves, such that each leaf has area at most $4\n\\pi g$.\n\\end{theorem}\n\nBy Theorem \\ref{theorem:sss}, we may consider the compression bodies in the\ngraph splitting in pairs, glued along strongly irreducible Heegaard\nsplittings, and then Theorem \\ref{theorem:minimal} guarantees that\neach pair has a foliation with each leaf having area at most $4 \\pi\ng$. These foliations contain the boundary surfaces as leaves, and so\nthe foliations on each pair extend to foliations of the entire\nmanifold, as required.\n\n\n\n\n\n\n\n\n\n\\begin{bibdiv}\n\\begin{biblist}\n\n\\bib{bmns}{article}{\n author={Brock, Jeffrey},\n author={Minsky, Yair},\n author={Namazi, Hossein},\n author={Souto, Juan},\n title={Bounded combinatorics and uniform models for hyperbolic\n 3-manifolds},\n journal={J. Topol.},\n volume={9},\n date={2016},\n number={2},\n pages={451--501},\n issn={1753-8416},\n}\n\n\\bib{cdl}{article}{\n author={Colding, Tobias H.},\n author={De Lellis, Camillo},\n title={The min-max construction of minimal surfaces},\n conference={\n title={Surveys in differential geometry, Vol.\\ VIII},\n address={Boston, MA},\n date={2002},\n },\n book={\n series={Surv. Differ. Geom., VIII},\n publisher={Int. Press, Somerville, MA},\n },\n date={2003},\n pages={75--107},\n}\n\n\\bib{colding-gabai}{article}{\n author={Colding, Tobias H.},\n author={Gabai, David},\n title={Effective Finiteness of irreducible Heegaard splittings of non Haken 3-manifolds},\n eprint={arXiv:1411.2509},\n date={2014},\n}\n\n\\bib{colding-minicozzi}{article}{\n author={Colding, Tobias Holck},\n author={Minicozzi, William P., II},\n title={The singular set of mean curvature flow with generic\n singularities},\n journal={Invent. Math.},\n volume={204},\n date={2016},\n number={2},\n pages={443--471},\n issn={0020-9910},\n}\n\n\\bib{dlp}{article}{\n author={De Lellis, Camillo},\n author={Pellandini, Filippo},\n title={Genus bounds for minimal surfaces arising from min-max\n constructions},\n journal={J. Reine Angew. Math.},\n volume={644},\n date={2010},\n pages={47--99},\n issn={0075-4102},\n}\n\n\n\\bib{gromov}{article}{\n author={Gromov, M.},\n title={Width and related invariants of Riemannian manifolds},\n language={English, with French summary},\n note={On the geometry of differentiable manifolds (Rome, 1986)},\n journal={Ast\\'erisque},\n number={163-164},\n date={1988},\n pages={6, 93--109, 282 (1989)},\n issn={0303-1179},\n}\n\n\n\\bib{hoffoss-maher}{article}{\n author={Hoffoss, Diane},\n author={Maher, Joseph},\n title={Morse area and Scharlemann-Thompson width for hyperbolic\n 3-manifolds},\n journal={Pacific J. Math.},\n volume={281},\n date={2016},\n number={1},\n pages={83--102},\n issn={0030-8730},\n}\n\n\n\n\n\\bib{kobayashi-rieck}{article}{\n author={Kobayashi, Tsuyoshi},\n author={Rieck, Yo'av},\n title={A linear bound on the tetrahedral number of manifolds of bounded\n volume (after J\\o rgensen and Thurston)},\n conference={\n title={Topology and geometry in dimension three},\n },\n book={\n series={Contemp. Math.},\n volume={560},\n publisher={Amer. Math. 
Soc., Providence, RI},\n },\n date={2011},\n pages={27--42},\n}\n\n\n\\bib{ketover}{article}{\n\tauthor={Ketover, Daniel},\n\ttitle={Degeneration of Min-Max Sequences in 3-manifolds},\n\tdate={2013},\n\teprint={arXiv:1312.2666},\n}\n\n\n\\bib{pr}{article}{\n author={Pitts, Jon T.},\n author={Rubinstein, J. H.},\n title={Existence of minimal surfaces of bounded topological type in\n three-manifolds},\n conference={\n title={},\n address={Canberra},\n date={1985},\n },\n book={\n series={Proc. Centre Math. Anal. Austral. Nat. Univ.},\n volume={10},\n publisher={Austral. Nat. Univ.},\n place={Canberra},\n },\n date={1986},\n pages={163--176},\n}\n\n\n\\bib{rubinstein}{article}{\n author={Rubinstein, J. Hyam},\n title={Minimal surfaces in geometric 3-manifolds},\n conference={\n title={Global theory of minimal surfaces},\n },\n book={\n series={Clay Math. Proc.},\n volume={2},\n publisher={Amer. Math. Soc., Providence, RI},\n },\n date={2005},\n pages={725--746},\n}\n\n\n\\bib{sss}{book}{\n author={Scharlemann, Martin},\n author={Schultens, Jennifer},\n author={Saito, Toshio},\n title={Lecture notes on generalized Heegaard splittings},\n note={Three lectures on low-dimensional topology in Kyoto},\n publisher={World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ},\n date={2016},\n pages={viii+130},\n isbn={978-981-3109-11-7},\n}\n\n\n\n\\bib{st}{article}{\n author={Scharlemann, Martin},\n author={Thompson, Abigail},\n title={Thin position for $3$-manifolds},\n conference={\n title={Geometric topology},\n address={Haifa},\n date={1992},\n },\n book={\n series={Contemp. Math.},\n volume={164},\n publisher={Amer. Math. Soc.},\n place={Providence, RI},\n },\n date={1994},\n pages={231--238},\n}\n\n\n\n\n\n\n\n\\end{biblist}\n\\end{bibdiv}\n\n\n\\bigskip\n\n\\noindent Diane Hoffoss \\\\\nUniversity of San Diego \\\\\n\\url{dhoffoss@sandiego.edu} \\\\\n\n\\noindent Joseph Maher \\\\\nCUNY College of Staten Island and CUNY Graduate Center \\\\\n\\url{joseph.maher@csi.cuny.edu} \\\\\n\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{EXPERIMENTAL DETAILS} \\label{appx:experimental_details}\n\\subsection{Workflow}\n\nInput: Reference data $\\mathcal{D}_{ref}$ , synthetic data $\\mathcal{D}_{syn}$ and test data $\\mathcal{D}_{test}$.\\\\\nOutput: $\\hat{m}$ for all $x\\in \\mathcal{D}_{test}$.\\\\\nSteps:\n\\begin{enumerate}\n\\item Train density model $p_R(X)$ on $\\mathcal{D}_{ref}$.\n\\item Train density model $p_G(X)$ on $\\mathcal{D}_{syn}$.\n\\item Compute $A_{DOMIAS}(x)=\\frac{p_G(x)}{p_R(x)}$ for all $x\\in\\mathcal{D}_{test}$\n\\item Choose threshold $\\tau$, e.g. $\\tau = median \\{A_{DOMIAS}(x)|x\\in\\mathcal{D}_{test}\\}$\n\\item Infer \n\\begin{equation*}\n \\hat{m} =\\begin{cases}1, &\\text{if } A_{DOMIAS}(x)>\\tau,\\\\ 0, &\\text{ otherwise},\n \\end{cases}\n\\end{equation*}\nfor all $x\\in\\mathcal{D}_{test}$.\n\\end{enumerate}\n\n\\subsection{Data}\nWe use the California housing \\citep{Pace1997SparseAutoregressions} (license: CC0 public domain) and Heart Failure (private) datasets, see Table \\ref{tab:datasets} and Figure \\ref{fig:data_correlation} for statistics. 
All data is standardised.\n\n\\begin{table}[hbt]\n \\centering\n \\caption{Dataset statistics}\n \\label{tab:datasets}\n \\begin{tabular}{l|c|c} \\toprule\n & California Housing & Heart Failure \\\\ \\midrule\n Number of samples & 20640 & 40300\\\\\n Number of features & 8 & 35\\\\\n - binary & 0 & 25\\\\\n - continuous & 8 & 10\\\\ \\bottomrule\n \\end{tabular}\n\n\\end{table}\n\n\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/correlation_housing.png}\n \\caption{Housing}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/correlation_maggic.png}\n \\caption{Heart Failure}\n \\end{subfigure}\n \\caption{Correlation matrices of features within Housing and Heart Failure datasets. The first feature of the Heart Failure dataset is used for defining the minority group in Section 5.3.}\n \\label{fig:data_correlation}\n\\vspace{-0.25cm}\\end{figure*}\n\n\n\\subsection{Experimental settings}\n\nAll results reported in our paper are based on $8$ repeated runs, with shaded area denoting standard deviations. We experiment on a machine with 8 Tesla K80 GPUs and 32 Intel(R) E5-2640 CPUs. We shuffle the dataset and split the dataset into training set, test set, and reference set. The attack performance is computed over a test set consisting of $50\\%$ training data (i.e. samples from $\\mathcal{D}_{mem})$ and $50\\%$ non-training data. Choices of sizes for those sets are elaborated below\n\n\\paragraph{Experimental Details for Section 5.1}\nIn this section, we experimented on the California Housing Dataset to compare different MIA performance with DOMIAS. For the experiment varying the number of members in the training dataset (i.e. left panel of Figure 3), we use a fixed training epoch $2000$, a fixed number of reference example $|\\mathcal{D}_{ref}|=10000$ and a fixed number of generated example $|\\mathcal{D}_{syn}|=10000$. For the experiment varying the number of training epochs of TVAE (i.e. the right panel of Figure 3), we use a fixed training set size $|\\mathcal{D}_{mem}|=500$, a fixed number of reference example $|\\mathcal{D}_{ref}|=10000$ and a fixed number of generated example $|\\mathcal{D}_{syn}|=10000$. Training with a single seed takes $2$ hours to run in our machine with BNAF as the density estimator. \n\nIn BNAF density estimation, the hyper-parameters we use are listed in Table~\\ref{tab:hyper-param-bnaf}. Our implementation of TVAE is based on the source code provided by ~\\citep{Xu2019ModelingGAN}.\n\\begin{table}[hbt]\n \\centering\n \\caption{Hyperparameters for BNAF}\n \\label{tab:hyper-param-bnaf}\n \\begin{tabular}{c|c}\n \\toprule\n batch-dim & 50 \\cr\n n-layer & 3 \\\\\n hidden-dim & 32 \\\\\n flows & 5 \\\\\n learning rate &$0.01$ \\\\\n epochs & 50 \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\paragraph{Experimental Details for Section 5.2}\nIn our experiments varying the number of reference data $n_{ref}$, i.e. results reported in the left panel of Figure 4, we fix the training epoch to be $2000$, set $n_{syn}=10000$ and $n_{M}=500$. In the experiments varying the number of generated data $n_{syn}$, i.e. results reported in the right panel of Figure 4, we set $n_{ref}=10000$, training epoch to be $2000$, and $n_{mem}=500$. Our implementation of the kernel density estimation is based on \\textit{sklearn} with an automated adjusted bandwidth. 
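As a minimal illustration of the workflow at the start of this appendix, the KDE-based variant of the attack can be sketched as follows; $\mathcal{D}_{syn}$ is assumed to have been sampled from the trained generative model, and the Scott's-rule bandwidth shown is an illustrative stand-in for the exact bandwidth setting used.

\begin{verbatim}
# DOMIAS attack with Gaussian KDEs standing in for the density estimators
# (BNAF is used in most experiments; any density estimator fits this template).
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_kde(data):
    n, d = data.shape
    bandwidth = n ** (-1.0 / (d + 4))    # Scott's rule on standardised data
    return KernelDensity(bandwidth=bandwidth).fit(data)

p_R = fit_kde(D_ref)                     # step 1: density of the reference data
p_G = fit_kde(D_syn)                     # step 2: density of the synthetic data

# step 3: DOMIAS score p_G(x)/p_R(x), computed in log-space for stability
log_ratio = p_G.score_samples(D_test) - p_R.score_samples(D_test)

# steps 4-5: threshold at the median score and infer membership
m_hat = (log_ratio > np.median(log_ratio)).astype(int)
\end{verbatim}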
Training with a single seed takes $0.5$ hours to finish in our machine with the kernel density estimator.\n\n\\paragraph{Experimental Details for Section 5.3} Based on results of Section 5.2, the attacking performance on different subgroups can be immediately calculated by adopting appropriate sample weights.\n\n\\paragraph{Experimental Details for Section 5.4}\nIn the Additive-Noise baseline curve, results are generated with the following noise values: \n$[0.7,\n0.9,\n1.1,\n1.3,\n1.5,\n1.7,\n1.9,\n2.3,\n2.5,\n2.9,\n3.5,\n3.9]$. In the ADS-GAN curve, results are generated with the following privacy parameter $\\lambda = [0.2, 0.5, 0.7, 1.0, 1.1, 1.3, 1.5]$. \nIn the WGAN-GP we use a gradient penalty coefficient $10.0$. All the other methods are implemented with recommended hyper-parameter settings. Training different generative models are not computational expensive and take no more than $10$ minutes to finish in our machine. Using a kernel density estimator and evaluating all baseline methods take another $20$ minutes, while using a BNAF estimator takes around $1.5$ more hours.\n\n\n\n\n\\section{ADDITIONAL EXPERIMENTS} \\label{appx:additional_results}\n\n\\subsection{Experiment 5.1 and 5.2 on Heart Failure dataset}\nWe repeat the experiments of Section 5.1 and 5.2 on the Heart Failure dataset, see Figures \\ref{fig:domias_vs_baselines_heart_failure} and \\ref{fig:ablation_results_heart_failure}. Results are noisier, but we observe the same trends as in Sections 5.1 and 5.2\n\n\n\\begin{figure*}[hbt]\n\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/MAGGIC_n_M.jpg}\n \n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/MAGGIC_t_epochs.jpg}\n \n \\end{subfigure}\n \\caption{\\textit{DOMIAS outperforms baselines on Heart Failure dataset.} MIA performance of DOMIAS and baselines versus the generative model training set size $|\\mathcal{D}_{mem}|$ and training time $t_{epochs}$, evaluated on Heart Failure datasets. The same trends are observed as in Section 5.1.}\n \\label{fig:domias_vs_baselines_heart_failure}\n\\vspace{-0.25cm}\n\\end{figure*}\n\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\includegraphics[width=0.9\\textwidth]{figures\/MAGGIC_n_ref.jpg}\n \n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/MAGGIC_n_G.jpg}\n \n \\end{subfigure}\n \\caption{\\emph{DOMIAS source of gain.} Ablation study of DOMIAS on Heart Failure dataset, with attack performance as a function of the reference dataset size (left) and the synthetic dataset size (right). Similar to Section 5.2, we see that the MIA performance of DOMIAS is largely due to assumption Eq.2 vs Eq. 1, i.e. the value of the reference dataset.}\n \\label{fig:ablation_results_heart_failure}\n\\vspace{-0.25cm}\n\\end{figure*}\n\n\n\n\\subsection{Experiment 5.4: Results other attackers}\nIn Figure \\ref{fig:other_attackers_against_gen} we include the results of experiment 5.4 for all attacks, including error bars. Indeed, we see that DOMIAS outperforms all baselines against most generative models. 
This motivates using DOMIAS for quantifying worst-case MIA vulnerability.\n\n\\begin{figure*}[hbt]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/5.4_other_attackers.jpg}\n \\caption{DOMIAS consistently outperforms baseline attackers at attacking the different generative models.}\n \\label{fig:other_attackers_against_gen}\n\\end{figure*}\n\n\n\\subsection{CelebA image data} \\label{sec:celeba}\nWe include additional results for membership inference attacks against the image dataset CelebA. Results indicate DOMIAS is significantly better at attacking this high-dimensional data than baseline methods.\n\n\\paragraph{Set-up} We use \\href{https:\/\/mmlab.ie.cuhk.edu.hk\/projects\/CelebA.html}{CelebA}~\\citep{Liu2015DeepWild}, a large-scale face attributes dataset with more than 200K celebrity images. We generate a synthetic dataset with 10k examples using a convolutional VAE with a training set containing the first 1k examples, and use the following 1k examples as test set. Then the following 10k examples are used as reference dataset. As training the BNAF density estimator is computational expensive (especially when using deeper models), we conduct dimensionality reduction with a convolutional auto-encoder with $128$ hidden units in the latent representation space (i.e. output of the encoder) and apply BNAF in such a representation space. The hyper-parameters and network details we use in VAE are listed in Table~\\ref{tab:hyper-param-vae} and Table \\ref{tab:vae_architecture}.\n\n\n\n\n\\begin{table}[hbt]\n \\centering\n \n \\caption{Hyperparameters for VAE}\n \\label{tab:hyper-param-vae}\n \\begin{tabular}{c|c}\n \\toprule\n batch size & 128 \\cr\n n-layer & 5 \\cr\n Optimizer & Adam \\cr\n learning rate &$0.002$ \\cr\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\begin{table*}\n \\centering\n \\caption{Architecture of VAE}\n \\label{tab:vae_architecture}\n \\begin{subtable}[t]{0.48\\textwidth}\n \n \\caption{Network Structure for Encoder}\n \\label{tab:stru-encoder}\n \\begin{tabular}{c|c}\n \\toprule\n Layer & Params (PyTorch-Style) \\cr\n \\hline\n Conv1 & $(3,64,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n Conv2 & $(64,128,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n Conv3 & $(128,256,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n Conv4 & $(256,256,4,2,1)$ \\cr\n ReLU &$\\cdot$ \\cr\n Linear1 & $(256*4*4,256)$ \\cr\n ReLU & $\\cdot$\\cr\n Linear2 & $(256,256)$ \\cr\n ReLU & $\\cdot$\\cr\n Linear3 & $(256,128*2)$ \\cr\n \\bottomrule\n \\end{tabular}\n \\end{subtable}\n \\hspace{\\fill}\n \\begin{subtable}[t]{0.48\\textwidth}\n \n \\caption{Network Structure for Decoder}\n \\label{tab:stru-decoder}\n \\begin{tabular}{c|c}\n \\toprule\n Layer & Params (PyTorch-Style) \\cr\n \\hline\n Linear1 & $(128,256)$ \\cr\n ReLU & $\\cdot$\\cr\n Linear2 & $(256,256)$ \\cr\n ReLU & $\\cdot$\\cr\n Linear3 & $(256,256*4*4)$ \\cr\n ReLU & $\\cdot$\\cr\n ConvTranspose1 & $(256,256,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n ConvTranspose2 & $(256,128,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n ConvTranspose3 & $(128,64,4,2,1)$ \\cr\n ReLU & $\\cdot$\\cr\n ConvTranspose4 & $(64,3,4,2,1)$ \\cr\n Tanh & $\\cdot$\\cr\n \\bottomrule\n \\end{tabular}\n \\end{subtable}\n\\end{table*}\n\n\\paragraph{Results}\nFigure \\ref{fig:celeba} includes the attacking AUC of DOMIAS and baselines of 8 runs. DOMIAS consistently outperforms other MIA methods, most of which score not much better than random guessing. 
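For reference, the generative VAE summarised in the tables above corresponds to the following PyTorch sketch; it is a direct reading of the listed layer parameters, and the use of \texttt{Flatten}/\texttt{Unflatten} and the overall module layout are illustrative.

\begin{verbatim}
# Convolutional VAE encoder/decoder for 3 x 64 x 64 CelebA images,
# following the layer parameters listed in the architecture tables.
import torch.nn as nn

latent_dim = 128

encoder = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(256, 256, 4, 2, 1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, latent_dim * 2),      # mean and log-variance
)

decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256 * 4 * 4), nn.ReLU(),
    nn.Unflatten(1, (256, 4, 4)),
    nn.ConvTranspose2d(256, 256, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)
\end{verbatim}

Both DOMIAS and the baseline attackers are then applied to the 128-dimensional auto-encoder representations rather than to the raw pixels.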
These methods fail to attack the 128-dimensional representations of the data (originally $64\\times 64$ pixel images), due to most of them using nearest neighbour or KDE-based approaches. On the other hand, DOMIAS is based on the flow-based density estimator BNAF \\citep{deCao2019BlockFlow}, which is a deeper model that is more apt at handling the high-dimensional data.\n\\begin{figure*}[hbt]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/CelebA_results.png}\n \\caption{\\emph{Attacking performance on CelebA.} DOMIAS scores significantly better at attacking image data compared to baselines.}\n \\label{fig:celeba}\n\\end{figure*}\n\n\n\n \n\\section{HIGH-LEVEL PRIOR KNOWLEDGE} \\label{appx:gaussian_prior}\nIf we have no reference data at all, we can still perform more successful attacks compared to baselines if we have high-level statistics of the underlying distribution. Effectively, any informed prior can improve upon methods that use Eq. \\ref{eq:assumption_prev}; this being a special case of Eq. \\ref{eq:assumption_domias}, where one assumes a uniform prior on $p_R$. In this Appendix, we use the Housing dataset and we assume that we only know the mean and standard deviation of the first variable, median income. This is a very realistic setting in practice, since an adversary can relatively easily acquire population statistics for individual features. We subsequently model the reference dataset distribution $p_{ref}$ as a normal distribution of only the age higher-level statistics---i.e. not making any assumptions on any of the other variables, implicitly putting a uniform prior on these when modelling $p_{ref}$. Otherwise, we use the same training settings as in Experiment 5.1 (left panel Figure 3). In Figure \\ref{fig:high-level statistics}. We see that even with this minimal assumption, we still outperform its ablated versions. These results indicate that a relatively weak prior on the underlying distribution without any reference data, can still provide a relatively good attacker model.\n\n\\begin{figure}[hbt]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figures\/appx_prior_statistics.png}\n \\caption{\\textit{Using DOMIAS with no reference data but high-level statistics of the underlying data.} Using just the mean and standard deviation of the population's median income, DOMIAS outperforms its ablated counterparts that are based on Eq. \\ref{eq:assumption_prev}. }\n \\label{fig:high-level statistics}\n\\end{figure}\n\n\n\\section{HIGH-PRECISION ATTACKS} \\label{appx:high_precision attacks}\n\\citet{Hu2021MembershipRegions} focus on high-precision membership attacks, i.e. can we attack a small set of samples with high certainty. This is an interesting question, since the risk of high-precision attacks may be hidden if one only looks at overall attacking performance. Their work is not applicable to our setting, e.g. they assume full generator and discriminator access. In this section, we show that even in the full black-box setting high-precision MIAs are a serious risk.\n\n\\subsection{Tabular data}\n\n\\paragraph{Set-up} We assume the same dataset and generative model set-up as in Section 5.3. We study which samples the different methods give the highest score, i.e. mark as most likely to be in $\\mathcal{D}_{mem}$. Let $\\mathcal{D}_{test}$ be a test set consisting for 50\\% of samples $x^i$ in $\\mathcal{D}_{mem}$ and 50\\% samples not in $\\mathcal{D}_{mem}$, respectively denoted by $m=1$ and $m=0$. 
Let $\\hat{m} = A(x)$ be the attacker's prediction, and let $S(A, \\mathcal{D}_{test},q) = \\{x\\in\\mathcal{D}_{test}|\\hat{m}>Quantile(\\{\\hat{m}^i\\}_i,1-q)\\}$ be the set of samples that are given the $q$-quantile's highest score by attacker $A$. We are interested in the mean membership of this set, i.e. the precision if threshold $Quantile(\\{\\hat{m}^i|x^i\\in\\mathcal{D}_{test}\\},1-q)$ is chosen. We include results for DOMIAS and all baselines. Results are averaged over 8 runs.\n\n\\paragraph{Results} In Figure \\ref{fig:high-precision} we plot the top-score precision-quantile curve for each method for each MIA method, i.e. $P(A, \\mathcal{D}_{test}, q) = \\text{mean}(\\{m|x\\in S(A, \\mathcal{D}_{test}, q)\\})$ as a function of $q$. These figures show the accuracy of a high-precision attacker, if this attacker would choose to attack only the top $q$-quantile of samples. We see that unlike other methods, the precision of DOMIAS goes down almost linearly and more gradually. Though MC and GAN-Leaks are able to find the most overfitted examples, they do not find all---resulting from their flawed underlying assumption Eq. 1 that prohibits them from finding overfitted examples in low-density regions. \n\n\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_DOMIAS.png}\n \\caption{DOMIAS}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_Eq.1_BNAF.jpg}\n \\caption{Eq. 1 (BNAF)}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_hayes-0.png}\n \\caption{LOGAN 0}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_hayes-D1.png}\n \\caption{LOGAN D1}\n \\end{subfigure} \n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_gan-leaks.png}\n \\caption{GAN-leaks 0}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_gan-leaks-cal.png}\n \\caption{GAN-leaks CAL}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_mc.png}\n \\caption{MC}\n \\end{subfigure}\n \\caption{\\emph{DOMIAS is better at high-precision attacks than baselines on heart failure dataset.} Plotting the top-quantile precision $P(A, \\mathcal{D}_{test}, q)$ versus $q$. For example, if the attacker decides to attack only the $20\\%$ highest samples, we get DOMIAS is significantly more precise ($86.2\\pm 5.5\\%$) compared to baselines---LOGAN D0 ($51.0\\pm 3.9\\%$), LOGAN D1 ($72.6\\pm 5.3\\%$), MC ($74.2\\pm 3.0\\%$), GAN-leaks ($74.9\\pm 3.1\\%$), GAN-Leaks CAL ($57.0\\pm 4.1\\%$). Additionally included is Eq. 1 (BNAF), the ablation attacker that does not make use of the reference data. We see that the reference data helps DOMIAS attack a a larger group with high precision.}\n \\label{fig:high-precision}\n\\end{figure*}\n\n\\subsection{Image data}\nLet us run the same high-precision attack on the CelebA dataset---see Appendix \\ref{sec:celeba}, including settings. 
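For clarity, the top-quantile precision plotted in these precision-quantile curves can be computed directly from the attacker's scores; a minimal sketch is given below, with \texttt{scores} denoting $A(x)$ on $\mathcal{D}_{test}$, \texttt{m\_true} the ground-truth membership labels, and all names illustrative.

\begin{verbatim}
# Top-quantile precision P(A, D_test, q): the precision obtained if the
# attacker only flags the fraction q of test samples with the highest scores.
import numpy as np

def top_quantile_precision(scores, m_true, q):
    threshold = np.quantile(scores, 1.0 - q)
    attacked = scores > threshold        # the top q-quantile of scores
    return m_true[attacked].mean()       # fraction of true members among them

# e.g. precision when attacking only the 20% highest-scoring samples:
# top_quantile_precision(scores, m_true, q=0.2)
\end{verbatim}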
Again, we see that high-precision attacks are more successful when using DOMIAS, see Figure \\ref{fig:high-precision-celeba}\n\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_DOMIAS_celeba.jpg}\n \\caption{DOMIAS}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_Eq.1_BNAF_celeba.jpg}\n \\caption{Eq. 1 (BNAF)}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_LOGAN_0_celeba.jpg}\n \\caption{LOGAN 0}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_LOGAN_D1_celeba.jpg}\n \\caption{LOGAN D1}\n \\end{subfigure} \n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_gan-leaks_celeba.jpg}\n \\caption{GAN-leaks 0}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_GAN-Leaks_CAL_celeba.jpg}\n \\caption{GAN-leaks CAL}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/high_precision_mc_celeba.jpg}\n \\caption{MC}\n \\end{subfigure}\n \\caption{\\emph{DOMIAS is better at high-precision attacks than baselines on CelebA image data.} For example, an attacker could attack only the examples with top 2\\% scores, and get a precision of $P=65.7\\pm11.6\\%$---much higher than the second-best method LOGAN 0, scoring $P=54.8\\pm6.5\\%$.}\n \\label{fig:high-precision-celeba}\n\\end{figure*}\n\n\n\\section{DISTRIBUTION SHIFT $\\mathcal{D}_{ref}$ AND $\\mathcal{D}_{mem}$} \\label{appx:distributional_shift}\nThere may exist a distributional shift between reference and training data. Because DOMIAS is primarily intended as a tool for data publishers to test their own synthetic data vulnerability, it is recommended that testing is conducted with a reference dataset from the same distribution (e.g. a hold-out set): this effectively tests the worst-case vulnerability. Hence, our work focused on the case where there is no shift.\n\nNonetheless, reference data may not always come from the same target distribution. For example, reference data may come from a different country, or synthetic data may be created by intentionally changing some part of the real data distribution, e.g. to include fairness guarantees \\citep{xu2019achieving,vanBreugel2021DECAF:Networks}. Thus, let us assume there is a shift and that the reference data $\\mathcal{D}_{ref}$ comes from $\\tilde{p}_R$, a shifted version of $p_R$ (i.e. the distribution from which $\\mathcal{D}_{mem}$ is drawn). We give a specific example and run an experiment to explore how this could affect DOMIAS attacking performance.\n\nLet us assume there is a healthcare provider that publishes $\\mathcal{D}_{syn}$, a synthetic dataset of patients suffering from diabetes, based on underlying data $\\mathcal{D}_{mem}\\sim p_R$. Let us assume there is an attacker that has their own data $\\mathcal{D}_{ref}\\sim \\tilde{p}_R$, for which some samples have diabetes ($A=1$), but others do not ($A=0$). We assume that $A$ itself is latent and unobserved (s.t. the attacker cannot just train a classification model) and that there is a shift in the distribution of $A$ (i.e. with a slight abuse of notation $\\tilde{p}_R(A=1)<1$). 
Diabetes is strongly correlated with other features $X$ in the data, additionally we assume the actual condition distribution $p_R(X|A)$ is fixed across datasets. This implies the reference and membership set distributions can be written respectively as:\n\\begin{align}\n \\tilde{p}_R(X) &= \\tilde{p}_R(A=1)p(X|A=1) + \\tilde{p}_R(A=0)p(X|A=0) \\\\\n p_R(X) &= p(X|A=1)\n\\end{align}\nSince $p_R(X|A=1)\\neq p_R(X|A=0)$ and $\\tilde{p}_R(A=1)\\neq 1$, there is a distributional shift between $\\tilde{p}_R$ and $p_R$.\n\nNow let us see how different attackers perform in this setting as a function of the amount of shift. Evidently, since some of the baselines do not use reference data, some attackers will be unaffected, but we should expect DOMIAS performance to degrade. We take the Heart Failure dataset, which indeed has a feature denoting diabetes,. We vary the amount of shift of $\\tilde{p}_R$ w.r.t. $p_R$, from $\\tilde{p}(A=0)=0$ (no shift), to $\\tilde{p}(A=0)=0.8$ (a large shift and the original Heart Failure non-diabetes prevalence). Let us assume test data follows the attacker's existing dataset, i.e. $\\tilde{p}_R$. This gives Figure \\ref{fig:distributional_shift}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures\/covariate_shift1_non-avg.png}\n \\caption{\\textit{Effect of distributional shift on DOMIAS performance.} A distributional shift between $\\mathcal{D}_{mem}$ and $\\mathcal{D}_{ref}$ degrades attacking performance, but preliminary experiments show that for small to moderate shifts it is still preferable to use reference data even though it is slightly shifted.}\n \\label{fig:distributional_shift}\n\\end{figure*}\n\nWe see performance of DOMIAS degrades with increasing shift, due to it approximating $p_R$ with $\\tilde{p}_R$, affecting its scores (Eq. 2). However, we see that for low amounts of shift this degradation is minimal and we still perform beter than not using the reference dataset (baseline Eq. 1 (BNAF)). This aligns well with the results from 5.2, Figure 4, that showed that an inaccurate approximation of $p_R$ due to few samples is still preferable over not using any reference data.\n\n\\subsubsection*{\\bibname}}\n\n\n\n\n\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc} \n\\usepackage{hyperref} \n\\usepackage{url} \n\\usepackage{booktabs} \n\\usepackage{enumerate}\n\n\\usepackage{graphicx}\n\\usepackage{caption\n\\usepackage{subcaption}\n\\usepackage[table]{xcolor}\n\\usepackage{amssymb}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{tablefootnote}\n\n\\def\\do\\\/\\do-{\\do\\\/\\do-}\n\n\n\\usepackage{dsfont}\n\n\\newcommand{\\mathds{1}}{\\mathds{1}}\n\\newcommand{\\mathds{E}}{\\mathds{E}}\n\n\\newcommand{\\mathbf{x}}{\\mathbf{x}}\n\\newcommand{\\mathds{R}}{\\mathds{R}}\n\\newcommand{\\mathcal{D}}{\\mathcal{D}}\n\\newcommand{\\cmark}{\\checkmark}%\n\\newcommand{\\xmark}{$\\times$}%\n\n\n\\newtheorem{assumption}{Assumption}\n\\newtheorem{theorem}{Theorem}\n\\newtheorem{remark}{Remark}\n\\newtheorem{example}{Example}\n\n\n\\begin{document}\n\n\n\n\\twocolumn[\n\n\\aistatstitle{Membership Inference Attacks against Synthetic Data through Overfitting Detection}\n\n\\aistatsauthor{ Boris van Breugel \\And Hao Sun \\And Zhaozhi Qian \\And Mihaela van der Schaar }\n\n\\aistatsaddress{ University of Cambridge \\And University of Cambridge \\And University of Cambridge \\And University of Cambridge \\\\Alan Turing Institute } ]\n\n\\begin{abstract}\nData is the foundation of most science. 
Unfortunately, sharing data can be obstructed by the risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a potential solution. It aims to generate data that has the same distribution as the original data, but that does not disclose information about individuals. Membership Inference Attacks (MIAs) are a common privacy attack, in which the attacker attempts to determine whether a particular real sample was used for training of the model. Previous works that propose MIAs against generative models either display low performance---giving the false impression that data is highly private---or need to assume access to internal generative model parameters---a relatively low-risk scenario, as the data publisher often only releases synthetic data, not the model. In this work we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution. We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model. Experimentally we show that DOMIAS is significantly more successful at MIA than previous work, especially at attacking uncommon samples. The latter is disconcerting since these samples may correspond to underrepresented groups. We also demonstrate how DOMIAS' MIA performance score provides an interpretable metric for privacy, giving data publishers a new tool for achieving the desired privacy-utility trade-off in their synthetic data.\n\\end{abstract}\n\n\n\\section{INTRODUCTION}\nReal data may be privacy-sensitive, prohibiting open sharing of data and in turn hindering new scientific research, reproducibility, and the development of machine learning itself. Recent advances in generative modelling provide a promising solution, by replacing the \\textit{real} dataset with a \\textit{synthetic} dataset---which retains most of the distributional information, but does not violate privacy requirements. \n\n\\textbf{Motivation} The motivation behind synthetic data is that data is generated \\emph{from scratch}, such that no synthetic sample can be linked back to any single real sample. However, how do we verify that samples indeed cannot be traced back to a single individual? Some generative methods have been shown to memorise samples during the training procedure, which means the synthetic data samples---which are thought to be genuine---may actually reveal highly private information \\citep{Carlini2018TheNetworks}. To mitigate this, we require good metrics for evaluating privacy, and this is currently one of the major challenges in synthetic data \\citep{Jordon2021Hide-and-SeekRe-identification, Alaa2022HowModels}. Differential privacy (DP) \\citep{Dwork2014ThePrivacy} is a popular privacy definition and used in several generative modelling works \\citep{Ho2021DP-GAN:Nets,Torkzadehmahani2020DP-CGAN:Generation,Chen2020GS-WGAN:Generators,Jordon2019PATE-GAN:Guarantees,Long2019G-PATE:Discriminators,Wang2021DataLens:Aggregation,Cao2021DontDivergence}. However, even though DP is theoretically sound, its guarantees are difficult to interpret and many works \\citep{Rahman2018MembershipModel,Jayaraman2019EvaluatingPractice,Jordon2019PATE-GAN:Guarantees,Ho2021DP-GAN:Nets} reveal that for many settings, either the theoretical privacy constraint becomes meaningless ($\\epsilon$ becomes too big), or utility is severely impacted. This has motivated more lenient privacy definitions for synthetic data, e.g. see \\citep{Yoon2020Anonymizationads-gan}. 
We take an adversarial approach by developing a privacy attacker model---usable as synthetic data evaluation metric that quantifies the practical privacy risk. \n\n\\textbf{Aim} Developing and understanding privacy attacks against generative models are essential steps in creating better private synthetic data. There exist different privacy attacks in machine learning literature---see e.g. \\citep{Rigaki2020ALearning}---but in this work we focus on Membership Inference Attacks (MIAs) \\citep{Shokri2017MembershipModels}. The general idea is that the attacker aims to determine whether a particular sample they possess was used for training the machine learning model. Successful MIA poses a privacy breach, since mere membership to a dataset can be highly informative. For example, an insurance company may possess a local hospital's synthetic cancer dataset, and be interested to know whether some applicant was used for generating this dataset---disclosing that this person likely has cancer \\citep{Hu2022MembershipSurvey}. Additionally, MIAs can be a first step towards other privacy breaches, like profiling or property inference \\citep{DeCristofaro2021ALearning}. \n\nPrevious work in MIA attacks against generative models is inadequate, conveying a false pretense of privacy. In the NeurIPS 2020 Synthetic Data competition \\citep{Jordon2021Hide-and-SeekRe-identification}, none of the attackers were successful at MIA.\\footnote{Specifically, none performed better than random guessing in at least half of the datasets.} Similar negative results were found in the black-box results of \\citep{Liu2019PerformingModels,Hayes2019LOGAN:Models,Hilprecht2019MonteModels, Chen2019GAN-Leaks:Models}, where additional assumptions were explored to create more successful MIAs. Most of these assumptions (see Sec. \\ref{sec:related}) rely on some access to the generator, which we deem relatively risk-less since direct access is often avoidable in practice. Nonetheless, we show that even in the black-box setting---in which we only have access to the synthetic data---MIA can be significantly more successful than appears in previous work, when we assume the attacker has some independent data from the underlying distribution. In Sec. \\ref{sec:MIA_formalism} we elaborate further on why this is a realistic assumption. Notably, it also allows an attacker to perform significantly better attacks against underrepresented groups in the population (Sec. \\ref{sec:underrepresented}).\n\n\\textbf{Contributions} This paper's main contributions are the following.\n\\begin{enumerate}\n \\item We propose DOMIAS: a membership inference attacker model against synthetic data, that incorporates density estimation to detect generative model overfitting. DOMIAS improves upon prior MIA work by i) leveraging access to an independent reference dataset and ii) incorporating recent advances in deep density estimation.\n \\item We compare the MIA vulnerability of a range of generative models, showcasing how DOMIAS can be used as a metric that enables generative model design choices\n \\item We find that DOMIAS is more successful than previous MIA works at attacking underrepresented groups in synthetic data. 
This is disconcerting and strongly motivates further research into the privacy protection of these groups when generating synthetic data.\n \n\\end{enumerate}\n\n\n\n\\section{MEMBERSHIP INFERENCE: FORMALISM AND ASSUMPTIONS} \\label{sec:MIA_formalism\n\\textbf{Formalism for synthetic data MIA}\nMembership inference aims to determine whether a given sample comes from the training data of some model \\citep{Shokri2017MembershipModels}. Let us formalise this for the generative setting. Let random variable $X$ be defined on $\\mathcal{X}$, with distribution $p_R(X)$. Let $\\mathcal{D}_{mem}\\overset{iid}{\\sim} p_R(X)$ be a training set of independently sampled points from distribution $p_R(X)$. Now let $G:\\mathcal{Z}\\rightarrow \\mathcal{X}$ be a generator that generates data given some random (e.g. Gaussian) noise $Z$. Generator $G$ is trained on $\\mathcal{D}_{mem}$, and is subsequently used to generate synthetic dataset $\\mathcal{D}_{syn}$. Finally, let $A:\\mathcal{X}\\rightarrow [0,1]$ be the attacker model, that possesses the synthetic dataset $\\mathcal{D}_{syn}$, some test point $x^*$, with $X^*\\sim p_R(X)$, and possibly other knowledge---see below. Attacker $A$ aims to determine whether some $x^*\\sim p_R(X)$ they possess, belonged to $\\mathcal{D}_{mem}$, hence the perfect attacker outputs $A(x^*)=\\mathds{1}[x^*\\in\\mathcal{D}_{mem}]$. The MIA performance of an attacker can be measured using any classification metric. \n\n\\textbf{Assumptions on attacker access} The strictest black-box MI setting assumes the attacker only has access to the synthetic dataset $\\mathcal{D}_{syn}$ and test point $x^*$. In this work we assume access to a real data set that is independently sampled from $p_R(X)$, which we will call the reference dataset and denote by $\\mathcal{D}_{ref}$. The main motivation of this assumption is that an attacker needs some understanding of what real data looks like to infer MI---in Sec. \\ref{sec:method} we will elaborate further on this assumption's benefits. Similar assumptions have been made in the supervised learning MI literature, see e.g. \\citep{Shokri2017MembershipModels, Ye2021EnhancedModels}.\nThis is a realistic scenario to consider for data publishers: though they can control the sharing of their own data, they cannot control whether attackers acquires similar data from the general population. A cautious data publisher would assume the attacker has access to a sufficiently large $\\mathcal{D}_{ref}$ to approximate $p_R(X)$ accurately, since this informally bounds the MIA risk from above. Related MI works \\citep{Liu2019PerformingModels,Hayes2019LOGAN:Models,Hilprecht2019MonteModels, Chen2019GAN-Leaks:Models} consider other assumptions that all require access to the synthetic data's generative model.\\footnote{Though with varying extents, see \\citep{Chen2019GAN-Leaks:Models}} These settings are much less dangerous to the data publisher, since these can be avoided by only publishing the synthetic data. Individual assumptions of related works are discussed further in Sec. 
\\ref{sec:related}.\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/example_transform_dep.png}\n \n \\caption{Generative distribution in original space}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/example_transform_dep_2.png}\n \n \\caption{Distribution in log-transformed space}\n \\end{subfigure}\n \\caption{Should we infer membership $m=1$ for point $A$? Consider the generative distribution for two representations of $X$, optimal methods based on Eq. \\ref{eq:assumption_prev} will infer $m=1$ for green and $m=0$ for red areas. This is problematic; it implies inference of these methods is dependent on the (possibly arbitrary) representation of variable $X$. \\emph{Conclusion: it does not make sense to focus on mere density, MIA needs to target local overfitting directly}. This requires data from (or assumptions on) the underlying distribution.}\n \\label{fig:toy_example_eq1}\n\\end{figure*}\n\n\\section{DOMIAS} \\label{sec:method\n\\subsection{Rethinking the black-box setting: why $\\mathcal{D}_{syn}$ alone is insufficient} \nThe most popular black-box setting assumes only access to $\\mathcal{D}_{syn}$. This gives little information, which is why previous black-box works \\citep{Hayes2019LOGAN:Models, Hilprecht2019MonteModels, Chen2019GAN-Leaks:Models} implicitly assume: \n\\begin{equation}\n\\label{eq:assumption_prev}\n A_{prev}(x^*) = f(p_G(x^*)),\n\\end{equation}\nwhere $A$ indicates the attacker's MIA scoring function, $p_G(\\cdot)$ indicates the generator's output distribution and $f:\\mathds{R}\\rightarrow [0,1]$ is some monotonically increasing function.\nThere are two reasons why Eq. \\ref{eq:assumption_prev} is insufficient. First, the score does not account for the intrinsic distribution of the data. Consider the toy example in Figure \\ref{fig:toy_example_eq2}a. There is a local density peak at $x=4$, but without further knowledge we cannot determine whether this corresponds to an overfitted example or a genuine peak in the real distribution. \\textbf{It is thus naive to think we can do MI without background knowledge}. \n\nSecond, the RHS of Eq. \\ref{eq:assumption_prev} is not invariant w.r.t. bijective transformations of the domain. Consider the left and right plot in Figure \\ref{fig:toy_example_eq1}. Given the original representation, we would infer $M=0$ for any point around $x=4$, whereas in the right plot we would infer $M=1$ for the same points. This dependence on the representation is highly undesirable, as any invertible transformation of the representation should contain the same information. \n\nHow do we fix this? We create the following two desiderata: i) the MI score should target overfitting \\textit{w.r.t. 
the real distribution}, and ii) it should be independent of representation.\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/example_Eq2_with_baseline.png}\n \n \n \\caption{Original space}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/example_Eq2_transformed_with_baseline.png}\n \n \n \\caption{Log-transformed space}\n \\end{subfigure}\n \\caption{\\emph{DOMIAS scores are not dependent on the feature representation.} This is the same toy example as in Figure \\ref{fig:toy_example_eq1}, where we now assume the bump at $x=4$ has been caused by overfitting in the generator, s.t. this part of the space has become overrepresented w.r.t. the original distribution. DOMIAS infers MI by weighting the generative and real distribution, inferring $m=1$ ($m=0$) for green (red) areas. Note the difference with Figure \\ref{fig:toy_example_eq1}: whereas MI predictions of previous works that use Eq. \\ref{eq:assumption_domias} are dependent on the representation, DOMIAS scores are the same in both domains (Theorem \\ref{theorem:representation}).}\n \\label{fig:toy_example_eq2}\n\\end{figure*}\n\n\n\\subsection{DOMIAS: adding knowledge of the real data.}\nWe need to target overfitting directly. We propose the DOMIAS framework: Detecting Overfitting for Membership Inference Attacks against Synthetic Data. \n\nLet us assume we know the true data distribution $p_R(X)$. We change Eq. \\ref{eq:assumption_prev} to:\n\\begin{equation}\n\\label{eq:assumption_domias}\n A_{\\mathrm{DOMIAS}}(x^*) = f(\\frac{p_G(x^*)}{p_R(x^*)}),\n\\end{equation}\nthat is, we weight Eq. \\ref{eq:assumption_prev} by the real data distribution $p_R(X)$.\\footnote{This work focuses on relative scores, hence we ignore choosing $f$---see Sec. \\ref{sec:discussion}.} Figure \\ref{fig:toy_example_eq2} shows the difference between DOMIAS and previous work using Eq. \\ref{eq:assumption_prev}, by considering the same toy example as in Figure \\ref{fig:toy_example_eq1}. Effectively, Eq. \\ref{eq:assumption_domias} distinguishes between the real and generative distribution, similar in vain to global two-sample tests (e.g. see \\cite{Gretton2012ATest,Arora2019ALearning, Gulrajani2019TowardsGeneralization}). The probability ratio has the advantage that (cf. e.g. probability difference) it is independent of the specific representation of the data:\n\\begin{theorem} \\label{theorem:representation}\nLet $X_G$ and $X_R$ be two random variables defined on $\\mathcal{X}$, with distributions $p_G(X)$ and $p_R(X)$, s.t. $p_G\\ll p_R$, i.e. $p_R$ dominates $p_G$. Let $g:\\mathcal{X}\\rightarrow \\tilde{\\mathcal{X}}, x\\mapsto g(x)$ be some invertible function, and define representations $\\tilde{X}_G = g(X_G)$ and $\\tilde{X}_R=g(X_R)$ with respective distribution $\\tilde{p}_G(\\tilde{X})$ and $\\tilde{p}_R(\\tilde{X})$. Then $\\frac{p_G(X)}{p_R(X)} = \\frac{\\tilde{p}_G(g(X))}{\\tilde{p}_R(g(X))}$, i.e. the same score is obtained for either data representations.\n\\end{theorem}\n\\begin{proof}\nWithout loss of generalisation let us assume continuous variables and almost everywhere continuous $g$. Using the chain rule, we have $\\tilde{p}_{\\cdot}(g(x)) = \\frac{p_\\cdot(x)}{|J(x)|}$ with Jacobian $J(x) = \\frac{dg}{dx}(x)$. 
Hence we see:\n\\begin{equation*}\n \\frac{\\tilde{p}_G(g(x))}{\\tilde{p}_R(g(x))}= \\frac{p_G(x)\/|J(x)|}{p_R(x)\/|J(x)|} = \\frac{p_G(x)}{p_R(x)}, a.e.\n\\end{equation*}\nas desired.\n\\end{proof}\n\n\n\\textbf{DOMIAS does not purport false privacy safety for underrepresented groups} Figure \\ref{fig:toy_example_eq1}a pinpoints a problem with previous works: methods that rely on assumption Eq. \\ref{eq:assumption_prev} cannot attack low-density regions. As a result, one might conclude that samples in these regions are safer. Exactly the opposite is true: in Figure \\ref{fig:toy_example_eq2} we see DOMIAS infers MI successfully for these samples, whatever the representation. This is distressing, as low-density regions may correspond to underrepresented groups in the population, e.g. ethnic minorities. We will explore this further in the experimental section. \n\n\n\\subsection{Illustrative attacker examples}\nAny density estimator can be used for approximating $p_G(X)$ and $p_R(X)$---e.g. fitting of some parametric family, training a generative model with Monte Carlo Integration, or a deep density estimator. The choice of density estimator should largely depend whether prior knowledge is available---e.g. $p_R$ falls in some parametric family---and on the size of the datasets---for a large dataset a more powerful and more flexible density estimator can be used, whereas for little data this is not suitable as it might lead to overfitting. In the experimental section, we illustrate DOMIAS using the flow-based BNAF \\citep{deCao2019BlockFlow} density estimator, chosen for its training efficiency. For the ablation study in Sec. \\ref{sec:ablation} we also include a Gaussian KDE-based method as a non-parametric alternative.\n\n\\section{RELATED WORK} \\label{sec:related}\n\\textbf{MIAs against generative models} Most of the literature on privacy attacks is focused on discriminative models, not generative models. The few works that are concerned with generative models all focus on membership inference (MIA) \\citep{Shokri2017MembershipModels}. Here we focus on works that can be applied to our attacker setting, see Table \\ref{tab:attacks}.\n\n\\citet{Hayes2019LOGAN:Models} propose LOGAN, a range of MIA attacks for both white-box and black-box access to the generative model, including possible auxiliary information. Two attacks can be applied to our setting. They propose a full black-box attack without auxiliary knowledge (i.e. no reference dataset). This model trains a GAN model on the synthetic data, after which the GAN's discriminator is used to compute the score for test examples. They also propose an attack that assumes an independent test set, similar to DOMIAS' $\\mathcal{D}_{ref}$---see Section 4.1 \\citep{Hayes2019LOGAN:Models}, discriminative setting 1 (D1). Their attacker is a simple classifier that is trained to distinguish between synthetic and test samples. \\citet{Hilprecht2019MonteModels} introduce a number of attacks that focus on approximating the generator distribution at each test point. \nImplicitly, they make assumption \\ref{eq:assumption_prev}, and approximate the probability by using Monte Carlo integration, i.e. counting the proportion of generated points that fall in a given neighbourhood. They do not consider the possible attacker access to a reference dataset. Choosing a suitable distance metric for determining neighbourhoods is non-trivial, however this is somewhat alleviated by choosing a better space in which to compute metrics, e.g. 
\\citeauthor{Hilprecht2019MonteModels} show that using the Euclidean distance is much more effective when used in conjunction with Principal Component Analysis (PCA). We refer to their method as MC, for Monte Carlo integration.\n\n\\citet{Chen2019GAN-Leaks:Models} give a taxonomy of MIAs against GANs and propose new MIA method GAN-leaks that relies on Eq. \\ref{eq:assumption_prev}. For each test point $x^*$ and some $k\\in\\mathbb{N}$, they sample $S^k_G = \\{x_i\\}_{i=1}^k$ from generator $G$ and use score $A(x^*;G) = \\min_{x_i\\in S^k_G} L_2(x^*, x_i)$ as an unnormalised surrogate for $p_G(x^*)$. \nThey also introduce a calibrated method that uses a reference dataset $\\mathcal{D}_{ref}$ to train a generative reference model $G_{ref}$, giving calibrated score $A(x^*;G,k)-A(x^*;G_{ref},k)$. \nThis can be interpreted as a special case of DOMIAS---Eq. \\ref{eq:assumption_domias}---that approximates $p_R$ and $p_G$ with Gaussian KDEs with infinitesimal kernel width, trained on a random subset of $k$ samples from $\\mathcal{D}_{ref}$ and $\\mathcal{D}_{syn}$. At last, we emphasise that though \\citep{Hayes2019LOGAN:Models, Chen2019GAN-Leaks:Models} consider $\\mathcal{D}_{ref}$ too, they (i) assume this implicitly and just for one of their many models, (ii) do not properly motivate or explain the need for having $\\mathcal{D}_{ref}$, nor explore the effect of $n_{ref}$, and (iii) their MIAs are technically weak and perform poorly as a result, leading to incorrect conclusions on the danger of this scenario (e.g. \\citet{Hayes2019LOGAN:Models} note in their experiments that their D1 model performs no better than random guessing).\n\n\\begin{table*}[bt]\n \\centering\n \\caption{Membership Inference attacks on generative models. (1) Underlying ML method (GAN: generative adversarial network, NN: (weighted) Nearest neighbour, KDE: kernel density estimation, MLP: multi-layer perceptron, DE: density estimator); (2) uses $\\mathcal{D}_{ref}$; (3) approximates Eq. \\ref{eq:assumption_prev} or \\ref{eq:assumption_domias}; (4) by default does not need generation access to generative model---only synthetic data itself. \\textit{\\textsuperscript{\\textdagger}GAN-leaks calibrated is a heuristic correction to GAN-leaks, but implicitly a special case of Eq. \\ref{eq:assumption_domias}.}} \n \\label{tab:attacks}\n \\begin{tabular}{lcccc} \\toprule\n Name & (1) & (2) & (3) & (4)\\\\ \\midrule\n LOGAN 0\\citep{Hayes2019LOGAN:Models} & GAN & \\xmark & Eq. \\ref{eq:assumption_prev}& \\cmark\\\\\n LOGAN D1 \\citep{Hayes2019LOGAN:Models} & MLP & \\cmark & N\/A (heuristic) & \\cmark\\\\\n MC \\citep{Hilprecht2019MonteModels}& NN\/KDE & \\xmark & Eq. \\ref{eq:assumption_prev}& \\xmark\\\\\n GAN-leaks 0 \\citep{Chen2019GAN-Leaks:Models} & NN\/KDE & \\xmark & Eq. \\ref{eq:assumption_prev}& \\xmark\\\\\n GAN-leaks CAL \\citep{Chen2019GAN-Leaks:Models} & NN\/KDE & \\cmark & \\ \\ Eq. \\ref{eq:assumption_domias}\\textsuperscript{\\textdagger} &\\xmark\\\\ \\hline\n DOMIAS (Us) & any DE & \\cmark & Eq. \\ref{eq:assumption_domias}& \\cmark\\\\ \\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\n\\textbf{Stronger attacker access assumptions} Other methods in \\citep{Hayes2019LOGAN:Models, Hilprecht2019MonteModels, Chen2019GAN-Leaks:Models} make much stronger assumptions on attacker access. \\citep{Hayes2019LOGAN:Models} propose multiple attacks with a subset of the training set known, which implies that there has already been a privacy breach---this is beyond the scope of this work. 
They also propose an attack against GANs that uses the GANs discriminator to directly compute the MIA score, but discriminators are usually not published. \\citet{Chen2019GAN-Leaks:Models} propose attacks with white-box access to the generator or its latent code, but this scenario too can be easily avoided by not publishing the generative model itself. All methods in \\citep{Hilprecht2019MonteModels, Chen2019GAN-Leaks:Models} assume unlimited generation access to the generator (i.e. infinitely-sized $\\mathcal{D}_{syn}$), which is unrealistic for a real attacker---either on-demand generation is unavailable or there is a cost associated to it that effectively limits the generation size \\citep{DeCristofaro2021ALearning}. These methods can still be applied to our setting by sampling from the synthetic data directly.\n\n\n\\textbf{Tangential work}\nThe following MIA work is not compared against. \\citet{Liu2019PerformingModels,Hilprecht2019MonteModels} introduce \\textit{co-membership} \\citep{Liu2019PerformingModels} or \\textit{set MIA} \\citep{Hilprecht2019MonteModels} attacks, in which the aim is to determine for a whole set of examples whether either all or none is used for training. Generally, this is an easier attack and subsumes the task of single attacks (by letting the set size be 1).\n\\citet{Webster2021ThisFaces} define the \\textit{identity} membership inference attack against face generation models, which aims to infer whether some person was used in the generative model (but not necessarily a specific picture of that person). This requires additional knowledge for identifying people in the first place, and does not apply to our tabular data setting. \\citet{Hu2021MembershipRegions} focus on performing high-precision attacks, i.e. determining MIA for a small number of samples with high confidence. Similar to us they look at overrepresented regions in the generator output space, but their work assumes full model access (generator and discriminator) and requires a preset partitioning of the input space into regions. \\citep{Zhang2022MembershipData} is similar to \\citep{Hilprecht2019MonteModels}, but uses contrastive learning to embed data prior to computing distances. In higher dimensions, this can be an improvement over plain data or simpler embeddings like PCA---something already considered by \\citep{Hilprecht2019MonteModels}. However, the application of contrastive learning is limited when there is no \\textit{a priori} knowledge for performing augmentations, e.g. in the unstructured tabular domain. \n \nOn a final note, we like to highlight the relation between MIA and the evaluation of overfitting, memorisation and generalisation of generative models. The latter is a non-trivial task, e.g. see \\citep{Gretton2012ATest,Lopez-Paz2016RevisitingTests,Arora2017GeneralizationGANs,Webster2019DetectingRecovery, Gulrajani2019TowardsGeneralization}. DOMIAS targets overfitting directly and locally through Eq. \\ref{eq:assumption_domias}, a high score indicating local overfitting. \nDOMIAS differs from this line of work by focusing on MIA, requiring sample-based scores. DOMIAS scores can be used for interpreting overfitting of generative models, especially in the non-image domain where visual evaluation does not work. \n \n\\section{EXPERIMENTS} \\label{sec:experiments}\nWe perform experiments showing DOMIAS' value and use cases. In Sec. \\ref{sec:domias_vs_baselines} we show how DOMIAS outperforms prior work, in Sec. \\ref{sec:ablation} we explore why. Sec. 
\\ref{sec:underrepresented} demonstrates how underrepresented groups in the population are most vulnerable to DOMIAS attack, whilst Sec. \\ref{sec:generative_model_comparison} explores the vulnerability of different generative models---showcasing how DOMIAS can be used as a metric to inform synthetic data generation. For fair evaluation, the same experimental settings are used across MIA models (including $n_{ref}$). Details on experimental settings can be found in Appendix \\ref{appx:experimental_details}.\\footnote{Code is available at \\\\ \\href{https:\/\/github.com\/vanderschaarlab\/DOMIAS}{https:\/\/github.com\/vanderschaarlab\/DOMIAS}} \n\n\\subsection{DOMIAS outperforms prior MIA methods} \\label{sec:domias_vs_baselines}\n\\begin{figure*}[hbt]\n\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/n_M.png}\n \n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/t_epochs.png}\n \n \\end{subfigure}\n \\caption{\\emph{DOMIAS outperforms baselines.} MIA performance of DOMIAS and baselines versus the generative model training set size $|\\mathcal{D}_{mem}|$ and training time $t_{epochs}$ on the California Housing dataset. We observe how MIA AUC goes up for fewer training samples and long generative model training time, as both promote overfitting.}\n \\label{fig:domias_vs_baselines}\n\\end{figure*}\n\n\\textbf{Set-up} We use the California Housing Dataset \\citep{Pace1997SparseAutoregressions} and use TVAE \\citep{Xu2019ModelingGAN} to generate synthetic data. In this experiment we vary the number of TVAE training samples $|\\mathcal{D}_{mem}|$ and TVAE number of training epochs. We compare DOMIAS against LOGAN 0 and LOGAN D1 \\citep{Hayes2019LOGAN:Models}, MC \\citep{Hilprecht2019MonteModels}, and GAN-Leaks 0 and GAN-Leaks CAL \\citep{Chen2019GAN-Leaks:Models}---see Table \\ref{tab:attacks}.\n\n\\textbf{DOMIAS consistently outperforms baselines}\nFigure \\ref{fig:domias_vs_baselines}(a) shows the MIA accuracy of DOMIAS and baselines against TVAE's synthetic dataset, as a function of the number of training samples TVAE $n_{mem}$. For small $n_{mem}$ TVAE is more likely to overfit to the data, which is reflected in the overall higher MIA accuracy. Figure \\ref{fig:domias_vs_baselines}(b) shows the MIA accuracy as a function of TVAE training epochs. Again, we see TVAE starts overfitting, leading to higher MIA for large number of epochs. \n\nIn both plots, we see DOMIAS consistently outperforms baseline methods. Similar results are seen on other datasets and generative models, see Appendix \\ref{appx:additional_results}. Trivially, DOMIAS should be expected to do better than GAN-Leaks 0 and LOGAN 0, since these baseline methods do not have access to the reference dataset and are founded on the flawed assumption of Eq. \\ref{eq:assumption_prev}---which exposes the privacy risk of attacker access to a reference dataset.\n\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/n_ref.png}\n \n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/n_G.png}\n \n \\end{subfigure}\n \\caption{\\emph{DOMIAS source of gain.} Ablation study of DOMIAS on the California Housing dataset, with attack performance as a function of the reference dataset size (left) and the synthetic dataset size (right). 
We see that the MIA performance of DOMIAS is largely due to assumption Eq. \\ref{eq:assumption_domias} vs. Eq. \\ref{eq:assumption_prev}, i.e. the value of the reference dataset. The deep flow-based density estimator delivers gains over the simpler KDE approach when enough samples are available.}\n \\label{fig:results_ablation}\n\\end{figure*}\n\n\\subsection{Source of gain} \\label{sec:ablation}\nUsing the same set-up as before, we perform an ablation study on the value of i) DOMIAS' use of the reference set, and ii) the deep density estimator. For the first, we compare using the DOMIAS assumption (Eq. \\ref{eq:assumption_domias}) vs the assumption employed in many previous works (Eq. \\ref{eq:assumption_prev}). For the latter, we compare the results for density estimation based on the flow-based BNAF \\citep{deCao2019BlockFlow} versus a Gaussian kernel density estimator---kernel width given by the heuristic from \\citep{Scott1992MultivariateEstimation}. \n\nFigure \\ref{fig:results_ablation} shows the MIA performance as a function of $n_{syn}$ and $n_{ref}$. Evidently, the source of the largest gain is the use of Eq. \\ref{eq:assumption_domias} over Eq. \\ref{eq:assumption_prev}. As expected, the deep density estimator gives further gains when enough data is available. For lower amounts of data, the KDE approach is more suitable. This is especially true for the approximation of $p_R$ (the denominator of Eq. \\ref{eq:assumption_domias})---small noise in the approximated $p_R$ can lead to large noise in MIA scores. Also note in the right plot that MIA performance goes up with $|\\mathcal{D}_{syn}|$ across methods due to the better $p_G$ approximation; this motivates careful consideration for the amount of synthetic data published.\n\n\\subsection{Underrepresented group MIA vulnerability} \\label{sec:underrepresented}\n\\textbf{Set-up} We use a private medical dataset on heart failure, containing around $40,000$ samples with $35$ mixed-type features (see Appendix \\ref{appx:experimental_details}). We generate synthetic data using TVAE \\citep{Xu2019ModelingGAN}.\n\n\\begin{figure*}[bt]\n\n \\centering\n \n \n \\includegraphics[width=\\textwidth]{figures\/experiment_53.png}\n \\caption{\\emph{DOMIAS is more successful at attacking patients taking high-blood pressure medication. }(left) T-SNE plot of Heart Failure test dataset. There is a cluster of points visible in the top right corner, which upon closer inspection corresponds to subjects who take ARB medication. (right, bottom) Attacking accuracy of DOMIAS and baselines on majority and minority group (averaged over 8 runs). DOMIAS is significantly better at attacking the minority group than the general population. Except for GAN-leaks CAL, baselines fail to capture the excess privacy risk to the patients with blood pressure medication. Comparing DOMIAS with Eq. 1 (BNAF) (see Sec. \\ref{sec:ablation}), we see that the minority vulnerability is largely due to the availability of the reference data. (right, top) Single run attacking success of different MIA methods on these underrepresented samples; correctly inferred membership in green, incorrectly inferred in red.}\n \\label{fig:vulnerable}\n\\end{figure*}\n\n\n\\textbf{Minority groups are most vulnerable to DOMIAS attack} As seen in Sec. \\ref{sec:method}, the assumption underlying previous work (Eq. \\ref{eq:assumption_prev}) will cause these methods to never infer membership for low-density regions. 
This is problematic, as it gives a false sense of security for these groups---which are likely to correspond to underrepresented groups.\n\nThe left side of Figure \\ref{fig:vulnerable} displays a T-SNE embedding of the Heart Failure dataset, showing one clear minority group, drawn in blue, which corresponds to patients that are on high-blood pressure medication---specifically, Angiotensin II receptor blockers. The right side of Figure \\ref{fig:vulnerable} shows the performance of different MIA models. DOMIAS is significantly better at attacking this vulnerable group compared to the overall population, as well as compared to other baselines. This is not entirely surprising; generative models are prone to overfitting regions with few samples. Moreover, this aligns well with supervised learning literature that finds additional vulnerability of low-density regions, e.g. \\citep{Kulynych2019DisparateAttacks, Bagdasaryan2019DifferentialAccuracy}. Importantly, most MIA baselines give the false pretense that this minority group is \\textit{less vulnerable}. Due to the correspondence of low-density regions and underrepresented groups, \\emph{these results strongly urge further research into privacy protection of low-density regions when generating synthetic data.} \n\n\\begin{figure}[hbt]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/Eval_GANs.png}\n \\caption{\\emph{DOMIAS can be used to quantify synthetic data MIA vulnerability.} We plot the synthetic data quality versus DOMIAS AUC for different generative models on the California Housing dataset. There is a clear trade-off: depending on the tolerated MIA vulnerability, different synthetic datasets are best.}\n \\label{fig:generative_model_comparison}\n \n\\end{figure}\n\n\\subsection{DOMIAS informs generative modelling decisions} \\label{sec:generative_model_comparison} \n\\textbf{Set-up} Again we use the California Housing dataset, this time generating synthetic data using different generative models. We evaluate the quality and MIA vulnerability of GAN, \\citep{Goodfellow2014GenerativeNetworks}, WGAN-GP \\citep{Arjovsky2017WassersteinNetworks,Gulrajani2017ImprovedGANs}, CTGAN and TVAE \\citep{Xu2019ModelingGAN},\nNFlow \\citep{Durkan2019NeuralFlows}, PATE-GAN \\citep{Jordon2019PATE-GAN:Guarantees}, PrivBayes \\citep{Zhang2017Privbayes:Networks}, and ADS-GAN \\citep{Yoon2020Anonymizationads-gan}. As a baseline, we also include the anonymization method of sampling from training data and adding Gaussian noise. For ADS-GAN and the additive noise model, we vary the privacy level by raising the hyperparameter $\\lambda$ and noise variance, respectively. Results for other attackers are found in Appendix \\ref{appx:additional_results}.\n\n\\textbf{DOMIAS quantifies MIA vulnerability}\nFigure \\ref{fig:generative_model_comparison} presents the DOMIAS MIA AUC against the data quality (in terms of Wasserstein Distance to an independent hold-out set), averaged over eight runs. We see a clear privacy-utility trade-off, with the additive noise model giving a clean baseline. The NeurIPS 2020 Synthetic Data competition \\citep{Jordon2021Hide-and-SeekRe-identification} concluded that disappointingly, adding noise usually outperformed generative models in terms of the privacy-utility trade-off. Though we find this is true for WGAN-GP, PATE-GAN and CTGAN---which fall on the right side of the additive noise curve---other methods do yield better synthetic datasets. 
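To make the privacy-utility evaluation above concrete, the following is a minimal sketch of how a DOMIAS-style attack score and a rough quality proxy could be computed for one synthetic dataset. This is not the released implementation: the scikit-learn KDE stands in for the BNAF density estimator, the per-feature one-dimensional Wasserstein average is only a simplification of the Wasserstein distance to a hold-out set reported above, and all function names are ours.
\\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.metrics import roc_auc_score
from scipy.stats import wasserstein_distance

def domias_scores(x_test, d_syn, d_ref, bandwidth=0.5):
    # DOMIAS-style membership score: log p_G(x) - log p_R(x),
    # with KDE standing in for the BNAF density estimator.
    log_p_g = KernelDensity(bandwidth=bandwidth).fit(d_syn).score_samples(x_test)
    log_p_r = KernelDensity(bandwidth=bandwidth).fit(d_ref).score_samples(x_test)
    return log_p_g - log_p_r

def mia_auc(x_test, is_member, d_syn, d_ref):
    # Attack AUC: higher scores should correspond to member (training) samples.
    return roc_auc_score(is_member, domias_scores(x_test, d_syn, d_ref))

def quality_proxy(d_syn, d_holdout):
    # Crude utility proxy: mean per-feature 1-D Wasserstein distance to a hold-out set.
    return np.mean([wasserstein_distance(d_syn[:, j], d_holdout[:, j])
                    for j in range(d_syn.shape[1])])
\\end{verbatim}
Sweeping the resulting (quality, AUC) pair over generative models, or over a privacy knob such as ADS-GAN's $\\lambda$ or the additive noise variance, traces out a trade-off curve of the kind shown in Figure \\ref{fig:generative_model_comparison}.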
\n\nADS-GAN is based on WGAN-GP, hence for small $\\lambda$ (the privacy regularizer) it gets a similar score. Increasing $\\lambda$ promotes a higher distance between generated and training data, hence this reduces vulnerability. At first, it also leads to an increase in quality---raising $\\lambda$ leads to lower overfitting---but when $\\lambda$ increases further the generative distribution is distorted to the point that quality is significantly reduced. In contrast to \\citep{Hilprecht2019MonteModels}, we do not find evidence that VAEs are more vulnerable to MIAs than GANs. The Pareto frontier is given by the additive noise method, TVAE, NFlow and PrivBayes, hence the best synthetic data model will be one of these, depending on the privacy requirements.\n\n\\section{DISCUSSION} \\label{sec:discussion}\n\\textbf{DOMIAS use cases} DOMIAS is primarily a tool for evaluating and interpreting generative model privacy. The overall DOMIAS attacking success is a metric for MIA vulnerability, and may hence guide generative model design choices---e.g. choosing privacy parameters---or aid evaluation---including for competitions like \\citep{Jordon2021Hide-and-SeekRe-identification}. Since DOMIAS provides a sample-wise metric, its scores can also provide insight into privacy and overfitting of specific samples or regions in space---as seen in Sec. \\ref{sec:underrepresented}. Future work may adopt DOMIAS for active privacy protection, e.g. as a loss during training or as an auditing method post-training---removing samples that are likely overfitted.\n\n\\textbf{Underrepresented groups are more vulnerable to MIA attacks} Generative models are more likely to overfit low-density regions, and we have seen DOMIAS is indeed more successful at attacking these samples. This is distressing, since these regions can correspond to underrepresented groups in the population. Similar results have been found in supervised learning literature, e.g. \\citep{Kulynych2019DisparateAttacks, Bagdasaryan2019DifferentialAccuracy}. Protecting against this vulnerability is a trade-off, as outliers in data can often be of interest to downstream research. It is advisable data publishers quantify the excess MIA risk to specific subgroups.\n\n\\textbf{Attacker calibration} In practice, it will often be unknown how much of the test data was used for training. Just like related works, we have ignored this. This challenge is equivalent to choosing a suitable threshold, or suitable $f$ in Eq. \\ref{eq:assumption_domias} and relates closely to calibration of the attacker model, which is challenging for MIA since---to an attacker---usually no ground-truth labels are available. Future work can explore assumptions or settings that could enable calibrated attacks. In Appendix \\ref{appx:high_precision attacks} we include results for high-precision attacks.\n\n\\textbf{High-dimensionality and image data} Traditional density estimation methods (e.g. KDE) perform notoriously poorly in high dimensions. Recent years have seen a rise in density estimation methods that challenge this conception. Domain-specific density estimators, e.g. that define density on lower-dimensional embeddings, can be readily used in DOMIAS. We include preliminary results for the high-dimensional CelebA image dataset in Appendix \\ref{sec:celeba}.\n\n\\textbf{Training data size} We have seen that for large number of training samples, the performance of all attackers goes down to almost 0.5. 
The same is observed for large generative image models, Appendix \\ref{sec:celeba}. This is reassuring for synthetic data publishers, for whom this indicates a relatively low privacy risk globally. However, global metrics may hide potential high-precision attacks on a small number of individuals, see Appendix \\ref{appx:high_precision attacks}.\n\n\\textbf{Availability of reference dataset} DOMIAS assumes the presence of a reference dataset that enables approximating the true distribution $p_R(X)$. In case there is not sufficient data for the latter, more prior knowledge can be included in the parametrisation of $p_R$; e.g. choose $p_R(X)$ to lie in a more restrictive parametric family. Even in the absence of any data $\\mathcal{D}_{ref}$, an informed prior (e.g. Gaussian) based on high-level statistics can already improve upon related works that rely on assumption Eq. \\ref{eq:assumption_prev}---see Appendix \\ref{appx:gaussian_prior} for results. In Appendix \\ref{appx:distributional_shift} we include further experiments with distributional shifts between the $\\mathcal{D}_{ref}$ and $\\mathcal{D}_{mem}$, in which we find that even with moderate shifts the use of a reference dataset is beneficial.\n\n\n\n\\textbf{Publishing guidelines} Synthetic data does not guarantee privacy, however the risk of MIA attacks can be lessened when synthetic data is published considerately. Publishing just the synthetic data---and not the generative model---will in most cases be sufficient for downstream research, while avoiding more specialised attacks that use additional knowledge. Further consideration is required with the amount of data published: increasing the amount of synthetic data leads to higher privacy vulnerability (Figure \\ref{fig:results_ablation}b and see \\citep{Gretton2012ATest}). Though the amount of required synthetic data is entirely dependent on the application, DOMIAS can aid in finding the right privacy-utility trade-off.\n\n\\textbf{Societal impact} We believe DOMIAS can provide significant benefits to the future privacy of synthetic data, and that these benefits outweigh the risk DOMIAS poses as a more successful MIA method. On a different note, we highlight that success of DOMIAS implies privacy is not preserved, but not vice versa. Specifically, DOMIAS should not be used as a certificate for data privacy. Finally, we hope the availability of a reference dataset is a setting that will be considered in more ML privacy work, as we believe this is more realistic in practice than many more popular MIA assumptions (e.g. white-box generator), yet still poses significant privacy risks. \n\n\n\n\n\\subsubsection*{Acknowledgements}\nWe would like to thank the Office of Navel Research UK, who funded this research.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec1}}\n\n\nThis paper is motivated by a class of problems in graph deep learning, where the\nprimary task is either graph classification or graph regression. \nIn either case, the result should be invariant to arbitrary permutations of graph nodes.\n\nAs we explain below, the mathematical problem analyzed in this paper is a special case \nof the permutation invariance issue described above. 
To set the notation, consider the\nvector space ${\\mathbb{R}}^{n\\times d}$ of $n\\times d$ matrices endowed with the Frobenius norm \n $\\norm{X}=\\left(trace(XX^T)\\right)^{1\/2}$\nand its associated Hilbert-Schmidt scalar product, $\\ip{X}{Y}=trace(XY^T)$.\n Let ${\\mathcal S}_n$ denote the symmetric group of $n\\times n$ permutation matrices. \n ${\\mathcal S}_n$ is a finite group of size $|{\\mathcal S}_n|=n!$.\n\nOn ${\\mathbb{R}}^{n\\times d}$ we consider the equivalence relation $\\sim$ \ninduced by the symmetric group of permutation matrices ${\\mathcal S}_n$ as follows. Let $X,Y\\in{\\mathbb{R}}^{n\\times d}$. \nThen we say $X\\sim Y$ if there is $P\\in{\\mathcal S}_n$ so that $Y=PX$. In other words, two matrices are equivalent if one is a row permutation of the other. \nThe equivalence relation induces a natural distance on the quotient space \n${\\widehat{\\Rnd}}:={\\mathbb{R}}^{n\\times d}\/\\sim$,\n\\begin{equation}\n \\label{eq:1.1}\nd: {\\widehat{\\Rnd}}\\times {\\widehat{\\Rnd}} \\rightarrow\\mathbb{R} ~~,~~d(\\hat{X},\\hat{Y})=\\min_{\\Pi\\in{\\mathcal S}_n}\\norm{X-\\Pi Y} \n\\end{equation}\nThis makes $({\\widehat{\\Rnd}},d)$ a complete metric space.\n\nOur main problem can now be stated as follows:\n\\begin{prob}\\label{prob1}\nGiven $n,d\\geq 1$ positive integers, find $m$ and a bi-Lipschitz map\n $\\hat{\\alpha}:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^m,\\norm{\\cdot}_2)$.\n\\end{prob}\nExplicitly, the problem can be restated as follows. One is asked to construct a \nmap $\\alpha:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m$ that satisfies the following conditions:\n\\begin{enumerate}\n \\item If $X,Y\\in{\\mathbb{R}}^{n\\times d}$ so that $X\\sim Y$ then $\\alpha(X)=\\alpha(Y)$\n \\item If $X,Y\\in{\\mathbb{R}}^{n\\times d}$ so that $\\alpha(X)=\\alpha(Y)$ then $X\\sim Y$\n \\item There are constants $0<a\\leq b<\\infty$ so that, for every $X,Y\\in{\\mathbb{R}}^{n\\times d}$,\n\\[ a\\, d(\\hat{X},\\hat{Y}) \\leq \\norm{\\alpha(X)-\\alpha(Y)}_2 \\leq b\\, d(\\hat{X},\\hat{Y}). \\]\n\\end{enumerate}\nConditions (1) and (2) say that $\\alpha$ lifts to a well-defined and injective map ${\\hat{\\alpha}}$ on ${\\widehat{\\Rnd}}$, while Condition (3) is the bi-Lipschitz requirement. Two classical constructions of permutation invariant maps serve as a starting point.\n\\begin{enumerate}\n \\item {\\em Algebraic Embedding}.\n A data matrix $X\\in{\\mathbb{R}}^{n\\times d}$ with rows $x_1,\\ldots,x_n\\in\\mathbb{R}^d$ can be identified with the discrete measure\n\\begin{equation}\\label{eq:measure}\n a_{\\infty}(X)=\\frac{1}{n}\\sum_{k=1}^n \\delta_{x_k} ,\n\\end{equation}\nwhich is invariant to row permutations of $X$. In the unidimensional case $d=1$, the coefficients of the monic polynomial\n\\begin{equation}\\label{eq:poly}\n p_x({\\bf t})=\\prod_{k=1}^n ({\\bf t}-x_k)\n\\end{equation}\ndetermine $x\\in\\mathbb{R}^n$ up to a permutation of its entries, and hence provide a map satisfying Conditions (1) and (2). Section \\ref{sec2} extends this algebraic construction to the case $d>1$. \n \\item {\\em Sorting Embedding}.\n For $x\\in\\mathbb{R}^n$, consider the sorting map\n\\begin{equation}\\label{eq:ord}\n \\downarrow:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n ~~,~~\n \\downarrow(x)=(x_{\\pi(1)},x_{\\pi(2)},\\ldots,\n x_{\\pi(n)})^T \n\\end{equation}\n where the permutation $\\pi$ is so that\n $x_{\\pi(1)}\\geq x_{\\pi(2)}\\geq\\cdots\\geq x_{\\pi(n)}$. It is obvious that $\\downarrow$ satisfies Conditions (1) and (2) and therefore lifts to an injective map on ${\\widehat{\\Rnd}}$. As we see in Section \\ref{sec3}, the map $\\downarrow$ is bi-Lipschitz. In fact it is isometric, and hence produces an ideal embedding. Our work in Section \\ref{sec3} is to extend this construction to the more general \n case $d>1$.\n\\end{enumerate}\nThe algebraic embedding is a special case of the more general {\\em kernel method} that can be thought of as a projection of the measure \n$a_{\\infty}(X)$ onto a finite dimensional space, e.g., the space of polynomials spanned by $\\{X,X^2,\\cdots,X^n\\}$. In applications such a kernel method is known as a ``Readout Map\" \\cite{deepsets}, based on ``Sum Pooling\".\n\nThe sorting embedding has been used in applications under the name of ``Pooling Map\" \\cite{deepsets}, based on ``Max Pooling\". A na\\\"{\\i}ve extension of the unidimensional map (\\ref{eq:ord}) to the case $d>1$ might employ the lexicographic order: sort the rows monotonically decreasing according to the first column, and break ties using the next column. While this gives rise to an injective map, it is easy to see it is not even continuous, let alone Lipschitz. 
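For a concrete instance of this failure of continuity (the example is ours, added for illustration), take $n=d=2$ and
\\[ X_\\epsilon = \\left[ \\begin{array}{cc} \\epsilon & 0 \\\\ 0 & 1 \\end{array} \\right]. \\]
For $\\epsilon>0$ the lexicographic rule keeps the rows in place, while for $\\epsilon<0$ it swaps them; hence the lexicographically sorted representative of $X_\\epsilon$ has two different one-sided limits as $\\epsilon\\rightarrow 0$, at Frobenius distance $\\sqrt{2}$ from each other, even though $X_\\epsilon$ itself converges. No choice of value at $\\epsilon=0$ can make such a map continuous.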
The main work in this paper is to extend the sorting embedding to the case $d>1$ using a three-step procedure, first embed ${\\mathbb{R}}^{n\\times d}$ into a larger vector space $\\mathbb{R}^{n\\times D}$, then apply $\\downarrow$ in each column independently, and then perform a dimension reduction by a linear map into $\\mathbb{R}^{2nd}$. Similar to the phase retrieval problem (\\cite{bcmn,bod,balan16}), the redundancy introduced in the first step counterbalances the loss of information (here, relative order of one column with respect to another) in the second step. \n\nA summary of main results presented in this paper is contained in the following result.\n\\begin{thm}\\label{t1}\nConsider the metric space $({\\widehat{\\Rnd}},d)$.\n\\begin{enumerate}\n\\item (Polynomial Embedding) There exists a Lipschitz injective map\n\\[ {\\hat{\\alpha}}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m \\]\nwith $m=\\left( \\begin{array}{c}\n\\mbox{$d+n$} \\\\\n\\mbox{$d$}\n\\end{array} \\right)$. Two explicit constructions of this map are given in (\\ref{eq:alpha1}) and (\\ref{eq:alpha2}).\n\\item (Sorting based Embedding) There exists a class of bi-Lipschitz maps \n\\[ {\\hat{\\beta}}_{A,B}:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^m,\\norm{\\cdot}) ~,~ {\\hat{\\beta}}_{A,B}(\\hat{X})=B\\left({\\hat{\\beta}}_A(\\hat{X})\\right) \\]\nwith $m=2nd$, where each map ${\\hat{\\beta}}_{A,B}$ is the composition of two bi-Lipschitz maps: a full-rank linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow \\mathbb{R}^m$, with the nonlinear bi-Lipschitz map ${\\hat{\\beta}}_A:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{n\\times D}$\n parametrized by a matrix $A\\in\\mathbb{R}^{d\\times D}$ called \"key\". \n Explicitly,\n${\\hat{\\beta}}(\\hat{X})=\\downarrow(XA)$, where $\\downarrow$ acts column-wise.\n These maps are characterized by the following properties:\n \\begin{enumerate}\n\\item For $D=1+(d-1)n!$, any\n$A\\in\\mathbb{R}^{d\\times (1+(d-1)n!)}$ whose columns form a full spark frame defines a bi-Lipschitz map ${\\hat{\\beta}}_A$ on ${\\widehat{\\Rnd}}$. \nFurthermore, a lower Lipschitz constant is given by the smallest $d^{th}$ singular value among all $d\\times d$ sub-matrices of $A$,\n $\\min_{J\\subset[D],|J|=d}s_d(A[J])$.\n\\item For any matrix (``key\") $A\\in\\mathbb{R}^{d\\times D}$ such that the map \n${\\hat{\\beta}}_A$ is injective, then ${\\hat{\\beta}}_A:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^{n\\times D},\\norm{\\cdot})$ is bi-Lipschitz. Furthermore, an upper Lipschitz constant is given by $s_1(A)$, the largest singular value of $A$.\n\\item Assume $A\\in\\mathbb{R}^{d\\times D}$ is such that the map \n${\\hat{\\beta}}_A$ is injective (i.e., a \"universal key\"). Then for almost any linear map $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$ the map ${\\hat{\\beta}}_{A,B}=B\\circ{\\hat{\\beta}}_A$ is\nbi-Lipschitz.\n\\end{enumerate}\n\\end{enumerate}\n\\end{thm}\n\nAn immediate consequence of this result is the following corollary whose proof is included in subsection \\ref{subsec4.4}:\n\\begin{cor}\n\\label{c0}\nLet $\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}^m$ induce a bi-Lipschitz embedding ${\\hat{\\beta}}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m$ of the\nmetric space $({\\widehat{\\Rnd}},d)$ into $(\\mathbb{R}^m,\\norm{\\cdot}_2)$. 
\n\\begin{enumerate}\n\\item For any continuous function $f:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ \ninvariant to row-permutation (i.e., $f(PX)=f(X)$ for every \n$X\\in\\mathbb{R}^{n\\times d}$ and $P\\in{\\mathcal S}_n$) there exists a continuous\nfunction $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ such that $f=g\\circ\\beta$.\nConversely, for any $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ continuous function, the \nfunction $f=g\\circ\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ is continuous\nand row-permutation invariant.\n\\item For any Lipschitz continuous function $f:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ \ninvariant to row-permutation (i.e., $f(PX)=f(X)$ for every \n$X\\in\\mathbb{R}^{n\\times d}$ and $P\\in{\\mathcal S}_n$) there exists a Lipschitz continuous\nfunction $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ such that $f=g\\circ\\beta$.\nConversely, for any $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ Lipschitz continuous function, the \nfunction $f=g\\circ\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ is Lipschitz continuous\nand row-permutation invariant.\n\\end{enumerate}\n\\end{cor}\n\\vspace{5mm}\n\n\nThe structure of the paper is as follows. Section \\ref{sec2} contains the algebraic embedding method and encoders $\\alpha$ described at part (1) of Theorem \\ref{t1}. Corollary \\ref{cor2} contains part (1) of the main result stated above. Section \\ref{sec3} introduces the sorting based embedding procedure and describes the key-based encoder $\\beta$. Necessary and sufficient conditions for key universality are presented in Proposition\n\\ref{prop3.8}; the injectivity of the encoder described at part (2.a) of Theorem \\ref{t1} is proved in Theorem \\ref{t4}; the bi-Lipschitz property of any universal key described at part (2.b) of Theorem \\ref{t1} is shown in Theorem \\ref{t5}; the dimension reduction statement (2.c) of Theorem \\ref{t1}\nis included in Theorem \\ref{t6}. Proof of Corollary \\ref{c0} is presented in subsection \\ref{subsec4.4}. Section \\ref{sec4} contains applications to graph deep learning. These application use Graph Convolution Networks and the numerical experiments are carried out on two graph data sets: a chemical compound data set (QM9) and a protein data set (PROTEINS\\_FULL). \n\nWhile the motivation of this analysis is provided by graph deep learning applications,\nthis is primarily a mathematical paper. Accordingly the formal theory is presented first, and then is followed by the machine learning application. Those interested in the application (or motivation) can skip directly to Section \\ref{sec4}. \n\n{\\bf Notations}. For an integer $d\\geq 1$, $[d]=\\{1,2,\\ldots,d\\}$. For a matrix $X\\in{\\mathbb{R}}^{n\\times d}$,\n $x_1,\\ldots x_d\\in\\mathbb{R}^n$ denote its columns, $X=[x_1\\vert\\cdots\\vert x_d]$. All norms are Euclidean; for a matrix $X$, $\\norm{X}=\\sqrt{trace(X^TX)}=\\sqrt{\\sum_{k,j}|X_{k,j}|^2}$ denotes the Frobenius norm; for vectors $x$, $\\norm{x}=\\norm{x}_2=\\sqrt{\\sum_{j} |x_j|^2}$. \n \n\n\\subsection{Prior Works}\n\nSeveral methods for representing orbits of vector spaces under the action of permutation (sub)groups have been studied in literature. Here we describe some of these results, without claiming an exhaustive literature survey.\n\nA rich body of literature emanated from the early works on \nsymmetric polynomials and group invariant representations of \nHilbert, Noether, Klein and Frobenius. They are part of standard\ncommutative algebra and finite group representation theory. 
\n\nPrior works on permutation invariant mappings have predominantly employed some form of summing procedure, though some have alternatively employed some form of sorting procedure.\n\nThe idea of summing over the output nodes of an equivariant network has been well studied. \nThe algebraic invariant theory goes back to Hilbert and Noether (for finite groups) and then continuing with the continuous invariant function theory of \nWeyl and Wigner (for compact groups), \nwho posited that a generator function $\\psi:X\\rightarrow\\mathbb{R}$ gives rise to a function $E:X\\rightarrow\\mathbb{R}$ invariant to the action of a finite group $G$ on $X$, $(g,x)\\mapsto g.x$, via the averaging formula $E(x)=\\frac{1}{|G|}\\sum_{g\\in G} \\psi(g.x)$.\n\nMore recently, this approach provided the framework for universal approximation results of $G$-invariant functions. \\cite{maron2018invariant} showed that invariant or equivariant networks must satisfy a fixed point condition. The equivariant condition is naturally realized by GNNs. The invariance condition is realized by GNNs when followed by summation on the output layer, as was further shown in \\cite{keriven2019universal}, \\cite{pmlr-v97-maron19a} and \\cite{lipman2022}. Subsequently, \\cite{yarotsky2021universal} proved universal approximation results over compact sets for continuous functions invariant to the action of finite or continuous groups. In \\cite{geerts2022}, the authors\nobtained bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests by tensorizing the input-output mapping. \n\\cite{sannai2020universal} studied approximations of equivariant maps, while \\cite{NEURIPS2019_71ee911d} showed that if a GNN with sufficient expressivity is well trained, it can solve the graph isomorphism problem.\n\nThe authors of \\cite{OrderMatters_2015arXiv151106391V} designed an algorithm for processing sets with no natural orderings. The algorithm applies an attention mechanism to achieve permutation invariance with the attention keys being generated by a Long-Short Term Memory (LSTM) network. Attention mechanisms amount to a weighted summing and therefore can be considered to fall within the domain of summing based procedures.\n\nIn \\cite{GGsNN_2015arXiv151105493L}, the authors designed a permutation invariant mapping for graph embeddings. The mapping employs two separate neural networks, both applied over the feature set for each node. One neural network produces a set of new embeddings, the other serves as an attention mechanism to produce a weighed sum of those new embeddings.\n\n\n\n\nSorting based procedures for producing permutation invariant mappings over single dimensional inputs have been addressed and used by \\cite{deepsets}, notably in their {\\it max pooling} procedure.\n\nThe authors of \\cite{qi2017pointnet} developed a permutation\ninvariant mapping \n$pointnet$ for point sets that is based on a $max$ function. The mapping takes in a set of vectors, processes each vector through a neural network followed by an scalar output function, and takes the maximum of the resultant set of scalars.\n\nThe paper \\cite{zhang2018end} introduced {\\it SortPooling}. {\\it SortPooling} orders the latent embeddings of a graph according to the values in a specific, predetermined column. All rows of the latent embeddings are sorted according to the values in that column. While this gives rise to an injective map, it is easy to see it is not even continuous, let alone Lipschitz. 
The same issue\narises with any lexicographic ordering, including the well-known Weisfeiler-Leman embedding \\cite{wl}.\nOur paper introduces a novel method that bypasses this issue.\n\nAs shown in \\cite{pmlr-v97-maron19a}, the sum pooling-based GNNs provides universal approximations for of any permutation invariant continuous function but only on \\emph{compacts}. Our sorting based embedding removes the compactness restriction as well as it extends to all Lipschitz maps.\n\nWhile this paper is primarily mathematical in nature, methods developed here are applied to two graph data sets, QM9 and PROTEINS\\_FULL. Researchers have applied various graph deep learning techniques to both data sets. In particular, \\cite{Gilmer_2017arXiv170401212G} studied extensively the QM9 data set, and compared their method with many other algorithms\nproposed by that time.\n\n\\section{Algebraic Embeddings\\label{sec2}}\n\nThe algebraic embedding presented in this section can be thought of a special kernel to project equation (\\ref{eq:measure}) onto.\n\n\\subsection{Kernel Methods}\nThe kernel method employs a family of continuous kernels (test) functions, $\\{K(x;y)~;~x\\in\\mathbb{R}^d~,~y\\in Y\\}$ parametrized\/indexed by a set $Y$. \nThe measure representation $\\mu=a_{\\infty}(X)$ in (\\ref{eq:measure}) yields a nonlinear map\n\\[ \\alpha:\\mathbb{R}^{n\\times d} \\rightarrow C(Y)\n~~,~~X \\mapsto F(y)=\\int_{R^d} K(x;y)d\\mu \\]\ngiven by\n\\[ \\alpha(X)(y)= \\frac{1}{n}\\sum_{k=1}^n K(x_k;y) \\]\nThe embedding problem \\ref{prob1}) can be restated as follows. One is asked\nto find a finite family of kernels $\\{K(x;y)~;~x\\in\\mathbb{R}^d~,~y\\in Y\\}$, \n $m=|Y|$ so that\n\\begin{equation}\n\\label{eq:kernel}\n{\\hat{\\alpha}}:({\\widehat{\\Rnd}},d) \\rightarrow l^2(Y)\\sim (\\mathbb{R}^m,\\norm{\\cdot}_2) ~~,~~ ({\\hat{\\alpha}}(\\hat{X}))_y = \\frac{1}{n} \\sum_{k=1}^n K(x_k;y)\n\\end{equation}\nis injective, Lipschitz or bi-Lipschitz. \n\nTwo natural choices for the kernel $K$ are the Gaussian kernel and the complex exponential (or, the Fourier) kernel:\n\\[ K_{G}(x,y) = e^{-\\norm{x-y}^2\/\\sigma^2} ~~,\nK_{F}(x,y) = e^{2\\pi i \\ip{x}{y}}\n\\]\nwhere in both cases $Y\\subset\\mathbb{R}^d$. \nIn this paper we analyze a different kernel, namely the polynomial kernel $K_P(x,y)=x_1^{y_1}x_2^{y_2}\\cdots x_d^{y_d}$, $Y\\subset\\{0,1,2,\\ldots,n\\}^d$. \n\n\\subsection{The Polynomial Embedding}\n\nSince the polynomial representation is intimately related to the Hilbert-Noether algebraic invariants theory \\cite{compuinvar} and the Hilbert-Weyl theorem, it is advantageous to start our construction from a different perspective. \n\nThe linear space ${\\mathbb{R}}^{n\\times d}$ is isomorphic to $\\mathbb{R}^{nd}$ by stacking the columns one on top of each other. In this case, the action of the permutation group $S_n$ can be recast as the action of the subgroup $I_d\\otimes S_n$ of the bigger group $S_{nd}$ on $\\mathbb{R}^{nd}$. Specifically, let us denote by $\\sim_G$ the equivalence relation\n\\[ x,y\\in\\mathbb{R}^{nd}~~,~~x\\sim_G y \\Longleftrightarrow y=\\Pi x~,~{\\rm for ~ some}~\\Pi\\in G \\]\ninduced by a subgroup $G$ of $S_{nd}$. In the case\n $G=I_d\\otimes S_n=\\{diag_d(P)~,~P\\in S_n\\}$ of block diagonal permutation obtained by repeating $d$ times the same $P\\in S_n$ permutation along the main diagonal, two vectors $x,y\\in\\mathbb{R}^{nd}$ are $\\sim_G$ equivalent iff there is a permutation matrix $P\\in S_n$ so that $y(1+(k-1)n:kn) = Px(1+(k-1)n:kn)$ for each $1\\leq k\\leq d$. 
In other words, each disjoint $n$-subvectors in $y$ and $x$ are related by the same permutation. In this framework, the Hilbert-Weyl theorem (Theorem 4.2, Chapter XII, in \\cite{BifTheory2}) states that the ring of invariant polynomials is finitely generated. The G\\\"{o}bel's algorithm (Section 3.10.2 in \\cite{compuinvar}) provides a recipe to find a complete set of invariant polynomials. In the following we provide a direct approach to construct a complete set of polynomial invariants. \n \n Let $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,...,{\\bf x}_d]$ denote the algebra of polynomials in $d$-variables with real coefficients. \nLet us denote $X\\in{\\mathbb{R}}^{n\\times d}$ a generic data matrix.\nEach row of this matrix defines a \nlinear form over ${\\bf x}_1,...{\\bf x}_d$,\n $\\lambda_k = X_{k,1}{\\bf x}_1+\\cdots + X_{k,d}{\\bf x}_d$.\n Let us denote by $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d][{\\bf t}]$ the algebra of polynomials in variable ${\\bf t}$ with coefficients in the ring $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]$. Notice $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d][{\\bf t}]=\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$ \n by rearranging the terms according to degree in ${\\bf t}$. \n Thus $\\lambda_k\\in\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]\\subset\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d][{\\bf t}]$ can be encoded as zeros of a polynomial $P_X$ of degree $n$ in variable ${\\bf t}$ with coefficients in $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]$:\n \\begin{equation}\n \\label{eq:polyencoding}\n P_X({\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d) = \\prod_{k=1}^n ({\\bf t}-\\lambda_k({\\bf x}_1,\\ldots,{\\bf x}_d))\n =\\prod_{k=1}^n ({\\bf t}-X_{k,1}{\\bf x}_1-\\ldots -X_{k,d}{\\bf x}_d)\n \\end{equation}\n Due to identification $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d][{\\bf t}]=\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$,\n we obtain that \\\\\n $P_X\\in \\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$ is a homogeneous polynomial of degree $n$ in $d+1$ variables. Let $\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ denote the vector space of homogeneous polynomials in $d+1$ variables of degree $n$ with real coefficients. Notice the real dimension of this vector space is \n \\begin{equation}\n \\label{eq:dimRn}\n \\dim_\\mathbb{R} \\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d] = \\left( \n \\begin{array}{c}\n n+d \\\\\n d\n \\end{array}\n \\right) = \\left(\n \\begin{array}{c}\n n+d \\\\\n n\n \\end{array}\n \\right).\n \\end{equation}\nBy noting that $P_X$ is monic in ${\\bf t}$ (the coefficient of ${\\bf t}^n$ is always 1) we obtain an injective embedding of ${\\widehat{\\Rnd}}$ into $\\mathbb{R}^m$ with \n$m=\\dim_\\mathbb{R} \\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]-1$ via the coefficients of $P_X$ similar to (\\ref{eq:poly}). This is summarized in the following theorem:\n\\begin{thm}\n\\label{t2}\nThe map $\\alpha_0:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^{m-1}$ with $m=\\left(\\begin{array}{c}n+d \\\\ d \\end{array} \\right)$ given by the (non-trivial) coefficients of polynomial $P_X\\in\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ lifts to an analytic embedding ${\\hat{\\alpha}}_0$ of $({\\widehat{\\Rnd}},d)$ into $\\mathbb{R}^m$. 
Specifically, for $X\\in{\\mathbb{R}}^{n\\times d}$ expand the polynomial\n\\begin{equation} \\label{eq:PA}\nP_X({\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d) = \\prod_{k=1}^n ({\\bf t}-X_{k,1}{\\bf x}_1-\\ldots -X_{k,d}{\\bf x}_d)\n = {\\bf t}^n + \\hspace{-10mm}\\sum_{\\begin{array}{c}\n \\mbox{$p_0,p_1,...,p_d\\geq 0$} \\\\\n \\mbox{$p_0+\\cdots+p_d=n$} \\\\ \n \\mbox{$p_0<n$}\n \\end{array}} c_{p_0,p_1,\\ldots,p_d}(X)\\,{\\bf t}^{p_0}{\\bf x}_1^{p_1}\\cdots{\\bf x}_d^{p_d}\n\\end{equation}\nand let $\\alpha_0(X)=\\left(c_{p_0,p_1,\\ldots,p_d}(X)\\right)_{p_0<n}$ denote the vector of these coefficients, listed in a fixed order.\n\\end{thm}\nSince each coefficient $c_{p_0,p_1,\\ldots,p_d}(X)$ is a polynomial in the entries of $X$, the map $\\alpha_0$ is Lipschitz on bounded sets. Let $L_0$ denote the Lipschitz constant of $\\alpha_0$ restricted to the closed unit ball $B_1({\\mathbb{R}}^{n\\times d})$. Let\n\\begin{equation}\n\\varphi_0:[0,\\infty)\\rightarrow (0,1]~~,~~\\varphi_0(x)=\\left\\{ \\begin{array}{ccc}\n\\mbox{$1$} & \\mbox{if} & \\mbox{$0\\leq x\\leq 1$} \\\\\n\\mbox{$\\frac{1}{x}$} & \\mbox{if} & \\mbox{$x>1$}\n \\end{array}\\right.\n\\end{equation}\nbe a Lipschitz monotone decreasing function with Lipschitz constant 1.\n\\begin{cor}\\label{cor2}\nConsider the map:\n\\begin{equation}\\label{eq:alpha1}\n\\alpha_1:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m~~,~~\n\\alpha_1(X) = \\left( \\begin{array}{c}\n\\mbox{$\\alpha_0\\bigg (\\varphi_0(\\norm{X})X \\bigg )$} \\\\\n\\mbox{$\\norm{X}$}\n\\end{array}\\right),\n\\end{equation}\nwith $m=\\left(\\begin{array}{c}n+d \\\\ d \\end{array} \\right)$.\nThe map $\\alpha_1$ lifts to an injective and globally Lipschitz map ${\\hat{\\alpha}}_1:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m$ with Lipschitz constant $Lip({\\hat{\\alpha}}_1) \\leq \\sqrt{1+L_0^2}$.\n\\end{cor}\n{\\bf Proof}\n\nClearly $\\alpha_1(\\Pi X)=\\alpha_1(X)$ for any $\\Pi\\in{\\mathcal S}_n$ and $X\\in{\\mathbb{R}}^{n\\times d}$. Assume now that $\\alpha_1(X)=\\alpha_1(Y)$. Then $\\norm{X}=\\norm{Y}$ and, since ${\\hat{\\alpha}}_0$ is injective on ${\\widehat{\\Rnd}}$, it follows that $\\varphi_0(\\norm{X})X = \\Pi \\varphi_0(\\norm{Y})Y$ for some $\\Pi\\in{\\mathcal S}_n$. Since $\\varphi_0(\\norm{X})=\\varphi_0(\\norm{Y})>0$, this gives $X=\\Pi Y$. Thus $X\\sim Y$, which proves $\\alpha_1$ lifts to an injective map on ${\\widehat{\\Rnd}}$. \n\nNow we show ${\\hat{\\alpha}}_1$ is Lipschitz on $({\\widehat{\\Rnd}},d)$ with the appropriate Lipschitz constant. Let $X,Y'\\in{\\mathbb{R}}^{n\\times d}$ and $\\Pi_0\\in{\\mathcal S}_n$ so that $d(\\hat{X},\\hat{Y'})=\\norm{X-\\Pi_0 Y'}$. Let $Y=\\Pi_0 Y'$ so that $d(\\hat{X},\\hat{Y})=\\norm{X-Y}$. \n\nIt is therefore enough to show that, for any two matrices $X,Y\\in{\\mathbb{R}}^{n\\times d}$, $\\norm{\\alpha_1(X)-\\alpha_1(Y)}\\leq \\sqrt{1+L_0^2}\n\\norm{X-Y}$.\nThis follows from two observations: \n\n(i) The map\n\\[ X \\mapsto \\rho(X):=\\varphi_0(\\norm{X})X \\]\nis the nearest-point map to (or, the metric projection map onto) the convex closed set $B_1({\\mathbb{R}}^{n\\times d})$. This means $\\norm{\\varphi_0(\\norm{X})X - Z}\\leq \\norm{X-Z}$ for any $Z\\in B_1({\\mathbb{R}}^{n\\times d})$. \n\n(ii) The nearest-point map to a convex closed subset of a Hilbert space is Lipschitz with constant 1, i.e. it shrinks distances, see \\cite{phelps56}.\n\nThese two observations yield:\n\\begin{multline*} \n\\norm{\\alpha_1(X)-\\alpha_1(Y)}^2 = \\norm{\\alpha_0(\\rho(X))\n- \\alpha_0(\\rho(Y) )}^2 + |\\norm{X}-\\norm{Y}|^2 \\\\\n \\leq \nL_0^2 \\norm{\\rho(X)-\\rho(Y) }^2 + \\norm{X-Y}^2 \\leq (1+L_0^2)\\norm{X-Y}^2 .\n\\end{multline*}\nThis concludes the proof of this result. $\\qed$\n\\vspace{5mm}\n\nA simple modification of $\\varphi_0$ can produce a $C^\\infty$ map by smoothing it out around $x=1$.\n\nOn the other hand, the lower Lipschitz constant of ${\\hat{\\alpha}}_1$ is 0 due to terms of the form $X_{i,j}^k$ with $k\\geq 2$. \nIn \\cite{Cahill19}, the authors built a Lipschitz map by a retraction to the unit sphere instead of the unit ball. 
\nInspired by their construction, a modification of $\\alpha_0$ in their spirit reads:\n\\begin{equation}\n \\label{eq:alpha2}\n\\alpha_2:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m~~,~~\n\\alpha_2(X)=\\left( \\begin{array}{c}\n\\mbox{$\\norm{X}\\alpha_0\\bigg ( \\frac{X}{\\norm{X}} \\bigg )$} \\\\\n\\mbox{$\\norm{X}$}\n\\end{array}\\right)~,~if~X\\neq 0~~,~and~~\\alpha_2(0)=0.\n\\end{equation}\nIt is easy to see that $\\alpha_2$ satisfies the non-parallel property in \\cite{Cahill19} and is Lipschitz with a slightly better constant than $\\alpha_1$ (the constant is determined by the tangential derivatives of $\\alpha_0$). \nBut, for the same reasons as in \\cite{Cahill19} this map is not bi-Lipschitz. \n\n\\subsection{Dimension reduction in the case $d=2$ and consequences}\n\nIn this subsection we analyze the case $d=2$. \nThe embedding dimension for $\\alpha_0$ is $\\left( \\begin{array}{c} n \\\\ 2 \\end{array}\\right)-1=\\frac{n(n-1)}{2}-1$. \nOn the other hand, consider the following approach. \nEach row of $X$ defines a complex number $z_1=X_{1,1}+i\\,X_{1,2}$, ... , $z_n=X_{n,1}+i\\,X_{n,2}$ that\ncan be encoded by one polynomial of degree $n$ with complex coefficients $Q\\in\\mathbb{C}_n[t]$,\n\\[ Q({\\bf t}) = \\prod_{k=1}^n ({\\bf t}-z_k) = {\\bf t}^n + \\sum_{k=0}^{n-1}\n{\\bf t}^k q_k \\]\nThe coefficients of $Q$ provide a $2n$-dimensional real embedding $\\zeta_0$,\n\\[ \\zeta_0:\\mathbb{R}^{n\\times 2}\\rightarrow\\mathbb{R}^{2n}~~,~~\\zeta_0(X)=(Re(q_{n-1}),Im(q_{n-1}),\\ldots,Re(q_{0}),Im(q_0)) \\]\nwith properties similar to those of $\\alpha_0$. \nOne can similarly modify this embedding to obtain a globally Lipschitz embedding $\\hat{\\zeta}_1$ of $\\hat{R_{n,2}}$ \ninto $\\mathbb{R}^{2n+1}$. \n\nIt is instructive to recast this embedding in the framework of commutative algebras. Indeed, let $\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle$ denote\nthe ideal generated by polynomials ${\\bf x}_1-1$ and ${\\bf x}_2^2+1$\nin the algebra $\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]$. Consider the quotient space\n $\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle$ and the quotient map\n $\\sigma:\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\\mapsto \\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle{\\bf x}_1-1,{\\bf x}_2^2+1\\rangle$.\n In particular, let $S=\\sigma(\\mathbb{R}_n[{\\bf t},{\\bf x}_1,{\\bf x}_2])$ denote the vector space projected through this quotient map.\nThen a basis for $S$ is given by $\\{1,{\\bf t},\\ldots,{\\bf t}^n,{\\bf x}_2,{\\bf x}_2 {\\bf t},\\ldots,{\\bf x}_2 {\\bf t}^{n-1},{\\bf x}_2 {\\bf t}^n\\}$. Thus $\\dim S=2n+2$. \nLet \n$\\mathfrak{S}=\\{P_X~,~X\\in\\mathbb{R}^{n\\times 2} \\}\\subset\\mathbb{R}_2[{\\bf t},{\\bf x}_1,{\\bf x}_2]$ \ndenote the set of polynomials realizable as in (\\ref{eq:PA}).\nThen the fact that $\\hat{\\zeta}_0:\\mathbb{R}^{n\\times 2}\\rightarrow\\mathbb{R}^{2n}$ \nis injective is equivalent to the fact that $\\sigma{\\vert}_{\\mathfrak{S}}:\\mathfrak{S}\\rightarrow S$ is injective.\nOn the other hand note \n\\[\n\\sigma(\\mathfrak{S})\\subset {\\bf t}^n+\nspan_\\mathbb{R}\\{1,{\\bf t},\\ldots,{\\bf t}^{n-1},{\\bf x}_2,{\\bf x}_2 {\\bf t}, \\ldots,{\\bf x}_2 {\\bf t}^{n-1} \\} \\]\nwhere the last linear subspace is of dimension $2n$. 
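As a quick sanity check of the $d=2$ construction, the following small numerical sketch (ours, not part of the original text) verifies the permutation invariance of $\\zeta_0$; it assumes NumPy, whose \\texttt{poly} routine returns the coefficients of the monic polynomial with prescribed roots.
\\begin{verbatim}
import numpy as np

def zeta0(X):
    # Rows of X in R^{n x 2} become complex numbers; return the real and
    # imaginary parts of the n non-leading coefficients of the monic
    # polynomial whose roots are those numbers (2n real numbers in total).
    z = X[:, 0] + 1j * X[:, 1]
    coeffs = np.poly(z)[1:]
    return np.concatenate([coeffs.real, coeffs.imag])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
P = rng.permutation(5)
assert np.allclose(zeta0(X), zeta0(X[P]))  # invariant under row permutations
\\end{verbatim}
Injectivity of $\\hat{\\zeta}_0$ corresponds to the fact that the roots, i.e. the rows of $X$, are recovered from these coefficients up to ordering.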
\n\nIn the case $d=2$ we obtain the identification\n$\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle = \\mathbb{C}[{\\bf t}]$ due to uniqueness of polynomial factorization.\n\nThis observation raises the following {\\em open problem}:\n\nFor $d>2$, is there a non-trivial ideal \n$I=\\langle Q_1,\\ldots,Q_r \\rangle \\subset\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$\nso that the restriction $\\sigma{\\vert}_{\\mathfrak{S}}$\nof the quotient map $\\sigma:\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]\\rightarrow\n\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]\/I$ is injective? Here $\\mathfrak{S}$ denotes the set of polynomials in $\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ realizable via (\\ref{eq:PA}).\n\\begin{rmk}\nOne may ask whether the quaternions can be\nutilized in the case $d=4$. While the quaternions form an associative division algebra, unfortunately polynomials over the quaternions have in general an infinite number of factorizations. This prevents an immediate extension of the previous construction to the case $d=4$. \n\\end{rmk}\n\n\\begin{rmk}\nSimilar to the construction in \\cite{Cahill19}, a linear dimension reduction technique may be applicable here (which, in fact, may answer the open problem above) and would reduce the embedding dimension to $m=2nd+1$ (twice the intrinsic dimension plus one for the homogenization variable). \nHowever, we did not explore this approach since, even if possible, it would not produce a bi-Lipschitz embedding. \nInstead, we analyze the linear dimension reduction technique in the next section in the context of sorting based embeddings. \n\\end{rmk}\n\n\\section{Sorting based Embedding\\label{sec3}}\n\nIn this section we present the extension of the sorting embedding (\\ref{eq:ord}) to the case $d>1$.\n\nThe embedding is performed by a linear-nonlinear transformation that resembles the phase retrieval problem. \nConsider a matrix $A\\in\\mathbb{R}^{d\\times D}$ and the induced nonlinear \ntransformation:\n\n\\begin{equation}\n\\label{eq:qA}\n\\beta_A:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^{n\\times D}~~,~~\\beta_A(X)=\\downarrow (XA)\n\\end{equation}\nwhere $\\downarrow$ is the monotone decreasing sorting operator acting in each column independently. Specifically, let \n$Y=XA\\in\\mathbb{R}^{n\\times D}$ and denote its column vectors by\n$Y=[y_1,y_2,\\ldots,y_D]$. Then \n\\[ \\beta_A(X)=\\left[ \\begin{array}{cccc}\n\\mbox{$\\Pi_1 y_1$} & \\mbox{$\\Pi_2 y_2$} & \\cdots & \\mbox{$\\Pi_D y_D$}\n\\end{array} \\right] \\]\nfor some $\\Pi_1,\\Pi_2,\\ldots,\\Pi_D\\in{\\mathcal S}_n$ so that each column is sorted monotonically decreasing:\n\\[ (\\Pi_k y_k)_1\\geq (\\Pi_k y_k)_2\\geq \\cdots\\geq (\\Pi_k y_k)_n. \\]\nNote the obvious invariance $\\beta_A(\\Pi X)=\\beta_A(X)$ for any $\\Pi\\in{\\mathcal S}_n$ and $X\\in{\\mathbb{R}}^{n\\times d}$. Hence $\\beta_A$ \nlifts to a map $\\hat{\\beta_A}$ on ${\\widehat{\\Rnd}}$. \n\\begin{rmk}\nNotice the similarity to the phase retrieval problem, e.g., \\cite{balan16}, where the data is obtained via a linear transformation of the \ninput signal followed by the nonlinear operation of taking the absolute value of the frame coefficients. Here the nonlinear transformation is implemented by sorting the coefficients. \nIn both cases it represents the action of a particular\nsubgroup of the unitary group. 
\n\\end{rmk}\n\n\nIn this section we analyze necessary and sufficient conditions so that maps of type (\\ref{eq:qA}) are injective, or injective almost everywhere. \nFirst a few definitions.\n\n\\begin{defn}\nA matrix $A\\in\\mathbb{R}^{d\\times D}$ is called a \\emph{universal key} (for ${\\mathbb{R}}^{n\\times d}$) if $\\hat{\\beta_A}$ is injective\non ${\\widehat{\\Rnd}}$.\n\\end{defn}\nIn general we refer to $A$ as a {\\em key} for encoder $\\beta_A$. \n\\begin{defn}\nFix a matrix $X\\in{\\mathbb{R}}^{n\\times d}$. A matrix $A\\in\\mathbb{R}^{d\\times D}$ is said \\emph{admissible} (or an {\\em admissible key}) for $X$ if for any $Y\\in{\\mathbb{R}}^{n\\times d}$ so that $\\beta_A(X)=\\beta_A(Y)$ then $Y=\\Pi X$ for some $\\Pi\\in{\\mathcal S}_n$. \n\\end{defn}\nIn other words, $\\hat{\\beta_A}^{-1}(\\hat{\\beta_A}(\\hat{X}))=\\{\\hat{X}\\}$.\nWe let ${\\mathcal{A}}_{D}(X)$, or simply ${\\mathcal{A}}(X)$, denote the set of admissible keys for $X$. \n\\begin{defn}\nFix $A\\in\\mathbb{R}^{d\\times D}$. A matrix $X\\in{\\mathbb{R}}^{n\\times d}$ is said to be {\\em separated} by $A$ if $A\\in{\\mathcal{A}}(X)$.\n\\end{defn}\nFor a key $A$, we let $\\mathfrak{S}_{n}(A)$, or simply $\\mathfrak{S}(A)$, denote the set of {\\em matrices separated by $A$}. Thus a matrix $X\\in\\mathfrak{S}_n(A)$ if and only if, for any matrix $Y\\in\\mathbb{R}^{n\\times d}$, if $\\beta_A(X)=\\beta_A(Y)$ then $X\\sim Y$.\n\nThus a key $A$ is universal if and only if $\\mathfrak{S}_n(A)={\\mathbb{R}}^{n\\times d}$.\n\nOur goal is to produce keys that are admissible for all matrices in ${\\mathbb{R}}^{n\\times d}$, or at least for almost every data matrix.\nAs we show in Proposition \\ref{prop3.6} below this requires that $D\\geq d$ and $A$ is full rank. In particular this means that the columns of $A$ form a frame for $\\mathbb{R}^d$. \n\n\\subsection{Characterizations of ${\\mathcal{A}}(X)$ and $\\mathfrak{S}(A)$}\n\nWe start off with simple linear manipulations of sets of admissible keys and separated data matrices.\n\n\\begin{prop}\\label{prop3.5}\nFix $A\\in\\mathbb{R}^{d\\times D}$ and $X\\in{\\mathbb{R}}^{n\\times d}$.\n\\begin{enumerate}\n \\item For an invertible $d\\times d$ matrix $T\\in\\mathbb{R}^{d\\times d}$,\n \\begin{equation}\\label{eq:TA}\n \\mathfrak{S}_n(TA) = \\mathfrak{S}_n(A)T^{-1}.\n \\end{equation}\n In other words, if $X$ is separated by $A$ then $XT^{-1}$ is separated by $TA$.\n \n \\item For any permutation matrix $L\\in\\SS_D$ and diagonal invertible matrix $\\Lambda\\in\\mathbb{R}^{D\\times D}$,\n \\begin{equation}\\label{eq:AL}\n \\mathfrak{S}_n(AL\\Lambda)=\\mathfrak{S}_n(A\\Lambda L) = \\mathfrak{S}_n(A).\n \\end{equation}\n In other words, if $X$ is separated by $A$ then $X$ is separated also by $AL\\Lambda$ as well as by $A\\Lambda L$.\n \n \\item Assume $T\\in\\mathbb{R}^{d\\times d}$ is a $d\\times d$ invertible matrix. Then\n \\begin{equation}\\label{eq:XT}\n {\\mathcal{A}}_D(XT)=T^{-1}{\\mathcal{A}}_D(X).\n \\end{equation}\n In other words, if $A$ is an admissible key for $X$ then $T^{-1}A$ is an admissible key for $XT$.\n\\end{enumerate}\n\\end{prop}\n{\\bf Proof}\n\nThe proof is immediate, but we include it here for convenience of the reader. \n\n(1) Denote $B=TA$. Let $Y\\in\\mathbb{R}^{n\\times d}$. Then\n\\[ \\beta_B(Y)=\\beta_B(X) \\Longleftrightarrow \\downarrow(XB)=\\downarrow(YB) \\Longleftrightarrow \\downarrow(XTA)=\\downarrow(YTA)\n\\Longleftrightarrow\\beta_A(XT)=\\beta_A(YT). 
\\]\nThus, if $X\\in\\mathfrak{S}_n(A)$ and $Y'\\in{\\mathbb{R}}^{n\\times d}$ so that $\\beta_B(Y')=\\beta_B(X')$ with $X'=XT^{-1}$, then $\\beta_A(Y'T)=\\beta_A(X)$. Therefore there exists $\\Pi\\in{\\mathcal S}_n$ so that $Y'T=\\Pi X$. Thus $Y'\\sim X'$. Hence $X'\\in\\mathfrak{S}_n(B)$.\nThis shows $\\mathfrak{S}_n(A)T^{-1}\\subset \\mathfrak{S}_n(TA)$. The reverse include follows by replacing $A$ with $TA$ and $T$ with $T^{-1}$. Together they prove (\\ref{eq:TA}).\n\n(2) Let $Y\\in{\\mathbb{R}}^{n\\times d}$ such that $\\beta_{AL\\Lambda}(X)=\\beta_{AL\\Lambda}(Y)$. \nFor every $1\\leq j\\leq D$ let $k\\in [D]$ be so that $L_{jk}=1$. \n\nIf $\\Lambda_{kk}>0$ then $\\downarrow((XA)_j)=\\downarrow((YA)_j)$. \n\nIf $\\Lambda_{kk}<0$ then $\\downarrow(-(XA)_j)=\\downarrow(-(YA)_j)$.\nBut this implies also $\\downarrow((XA)_j)=\\downarrow((YA)_j)$ since\n$\\downarrow(-z)=L_0\\downarrow(z)$ where $L_0$ is the permutation matrix that has 1 on its main antidiagonal.\n\nEither way, $\\downarrow((XA)_j)=\\downarrow((YA)_j)$. Hence \n$\\downarrow(XA)=\\downarrow(YA)$. Therefore $X\\sim Y$ and thus $X\\in \\mathfrak{S}_n(AL\\Lambda)$. This shows $\\mathfrak{S}_n(A)\\subset\\mathfrak{S}_n(AL\\Lambda)$.\nthe reverse inclusion follows by a similar argument.\nFinally, notice $\\{L\\Lambda\\}$ forms a group since $L^{-1}\\Lambda L$ is also a diagonal matrix. This shows $\\mathfrak{S}_n(A\\Lambda L)=\\mathfrak{S}(AL\\Lambda')$ for some diagonal matrix $\\Lambda'$, and the conclusion (\\ref{eq:AL}) then follows.\n\n(3) The relation (\\ref{eq:XT}) follows from noticing $\\beta_{T^{-1}A}(Y)=\\beta_A(YT)$. $\\qed$\n\nRelation (\\ref{eq:AL}) shows that, since $A$ is assumed full rank, without loss of generality we can assume the first $d$ columns are linearly independent. Let $V$ denote the first $d$ columns of $A$ so that\n\\begin{equation}\\label{eq:AA}\nA = V\\left[ \\begin{array} {ccc}\n\\mbox{$I$} & \\mbox{$\\vert$} & \\mbox{$\\tilde{A}$} \n\\end{array} \\right] \n\\end{equation}\nwhere $\\tilde{A}\\in\\mathbb{R}^{d\\times (D-d)}$. \nThe following result shows that, unsurprisingly, when $D=d>1$, almost every matrix $X$ is not separated by $A$. By Proposition \\ref{prop3.5} we can reduce the analysis to the case $A=I$ by a change of coordinates.\n\\begin{prop}\\label{prop3.6}\nAssume $D=d>1$, $n>1$. Then\n\\begin{enumerate}\n \\item The set of data matrices not separated by $I_d$ includes:\n\\begin{equation}\\label{eq:SA1}\n \\mathbb{B}:=\\{ X\\in\\mathbb{R}^{n\\times d}~,~\\exists i,j,k,l~,~ 1\\leq i0$ and $b_0>0$ so that\n for all $X,Y\\in{\\mathbb{R}}^{n\\times d}$,\n \\begin{equation}\n \\label{eq:Lipbeta2}\n a_0\\, d(\\hat{X},\\hat{Y})\\leq \\norm{\\beta_A(X)-\\beta_A(Y)}\\leq b_0\\,\n d(\\hat{X},\\hat{Y})\n \\end{equation}\n where all are Frobenius norms.\n Furthermore, an estimate for $b_0$ is provided by the largest singular value of $A$, $b_0= s_1(A)$.\n\\end{thm}\n\n{\\bf Proof}\n\nThe upper bound in (\\ref{eq:Lipbeta2}) follows as in the proof of Theorem \\ref{t4}, from equations (\\ref{eq:betaA}) and (\\ref{eq:betaAA}). Notice that\nno property is assumed in order to obtain the upper Lipschitz bound.\n\nThe lower bound in (\\ref{eq:Lipbeta2}) is more difficult. \nIt is shown by contradiction following the strategy \nutilized in the \nComplex Phase Retrieval problem \\cite{balazou}.\n\nAssume $\\inf_{X\\not\\sim Y}\\frac{\\norm{\\beta_A(X)-\\beta_A(Y)}_2^2}{d(\\hat{X},\\hat{Y})^2}=0$. 
\n\n{\\em Step 1: Reduction to local analysis.} \nSince $d(\\hat{tX},\\hat{tY})=t\\,d(\\hat{X},\\hat{Y})$ for all $t>0$, the \nquotient $\\frac{\\norm{\\beta_A(X)-\\beta_A(Y)}_2}{d(\\hat{X},\\hat{Y})}$ \nis scale invariant. Therefore, there are sequences $(X^t)_t,(Y^t)_t$\nwith $\\norm{Y^t}\\leq\\norm{X^t}=1$ and $d(\\hat{X^t},\\hat{Y^t})>0$ so that\n$\\lim_{t\\rightarrow\\infty} \\frac{\\norm{\\beta_A(X^t)-\\beta_A(Y^t)}_2}{d(\\hat{X^t},\\hat{Y^t})} = 0$.\nBy compactness of the closed unit ball, one can extract convergence subsequences. For easiness of notation, assume $(X^t)_t,(Y^t)_t$ are\nthese subsequences. Let ${X^{\\infty}}=\\lim_t X^t$ and ${Y^{\\infty}} = \\lim_t Y^t$ denote their limits. Notice $\\lim_t \\norm{\\beta_A(X^t)-\\beta_A(Y^t)}_2=0$.\nThis implies $\\norm{\\beta_A({X^{\\infty}})-\\beta_A({Y^{\\infty}})}=0$ and thus $\\beta_A({X^{\\infty}})=\\beta_A({Y^{\\infty}})$. Since $\\widehat{\\beta_A}$ is assumed injective, it follows that $\\widehat{{X^{\\infty}}}=\\widehat{{Y^{\\infty}}}$. \n\nThis means that, if the lower Lipschitz bound vanishes, then this \nis achieved by vanishing of a local lower Lipschitz bound. To follow the terminology in \\cite{balazou}, the type I local lower Lipschitz bound vanishes at some \n$Z_0\\in{\\mathbb{R}}^{n\\times d}$, with $\\norm{Z_0}=1$:\n\\begin{equation}\n \\label{eq:lb}\n{A}(Z_0):= \\lim_{r\\rightarrow 0} \\inf_{\n\\begin{array}{c}\n\\hat{X}\\neq\\hat{Y} \\\\\nd(\\hat{X},\\hat{Z_0})0$ by the definition of $G$.\n\nConsider $X=Z_0+U$ and $Y=Z_0+V$ where $U,V\\in{\\mathbb{R}}^{n\\times d}$ are ``aligned\" in the sense that $d(\\hat{X},\\hat{Y})=\\norm{U-V}$. This property requires that $\\norm{U-V}\\leq\\norm{PX-Y}$, for every $P\\in{\\mathcal S}_n$. \nNext result replaces equivalently this condition \nby requirements involving $(U,V)$ and the group $G$ only.\n\\begin{lem}\n\\label{l3.1}\nAssume $\\norm{U},\\norm{V}<\\frac{1}{4}\\delta_0$, where $\\delta_0=\\min_{P\\in{\\mathcal S}_n\\setminus G} \\norm{(I_n-P)Z_0}$. Let $X=Z_0+U$, $Y=Z_0+V$. \nThen:\n\\begin{enumerate}\n \\item $d(\\hat{X},\\hat{Z_0})=\\norm{U}$ and $d(\\hat{Y},\\hat{Z_0})=\\norm{V}$.\n \\item $d(\\hat{X},\\hat{Y})=\\min_{P\\in G}\\norm{U-PV}=\\min_{P\\in G}\\norm{PU-V}$\n \\item The following\nare equivalent:\n\\begin{enumerate}\n \\item $d(\\hat{X},\\hat{Y})=\\norm{U-V}$.\n \\item For every $P\\in G$, $\\norm{U-V}\\leq \\norm{PU-V}$.\n \\item For every $P\\in G$, $\\ip{U}{V}\\geq \\ip{PU}{V}$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{lem}\n{\\bf Proof of Lemma \\ref{l3.1}}.\n(1)\n\nNote that is $U=0$ then the claim follows. Assume $U\\neq 0$. Then\n\\[ d(\\hat{X},\\hat{Z_0})=\\min_{P\\in{\\mathcal S}_n} \\norm{X-PZ_0}\n= \\min_{P\\in {\\mathcal S}_n}\\norm{(I_n-P)Z_0 + U}\\leq \\norm{U} \\]\nOn the other hand, assume the minimum is achieved for a permutation $P_0\\in{\\mathcal S}_n$. If $P_0\\in G$ then\n$d(\\hat{X},\\hat{Z_0})=\\norm{(I_n-P_0)Z_0+U}=\\norm{U}$. If $P_0\\not\\in G$ then \n\\[ d(\\hat{X},\\hat{Z_0})\\geq \\norm{(I_n-P_0)Z_0}-\\norm{U}>\\frac{3\\delta_0}{4}>\\norm{U}\\geq d(\\hat{X},\\hat{Z_0}) \\]\nwhich yields a contradiction.\nHence $d(\\hat{X},\\hat{Z_0})=\\norm{U}$. Similarly, one shows $d(\\hat{X},\\hat{Z_0})=\\norm{V}$. 
\n\n(2) Clearly\n\\[ d(\\hat{X},\\hat{Y})=\\min_{P\\in{\\mathcal S}_n}\\norm{PX-Y}\\leq \\min_{P\\in G}\\norm{PX-Y}=\\min_{P\\in G}\\norm{PU-V} \\]\nOn the other hand, for $P\\in{\\mathcal S}_n\\setminus G$ and $Q\\in G$,\n\\[ \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\\geq \\norm{(I_n-P)Z_0} - \\norm{U}-\\norm{V}\\geq \\]\n\\[ \\geq \\delta_0\n-2\\norm{U}-2\\norm{V}+\\norm{QU-V}\\geq \\min_{Q\\in G}\\norm{QU-V}\\geq d(\\hat{X},\\hat{Y}). \\]\n\n(3)\n\n(a)$\\Rightarrow$(b).\n\nIf $d(\\hat{X},\\hat{Y})=\\norm{U-V}$ then\n\\[ \\norm{U-V}\\leq \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\n~~,~~\\forall P\\in {\\mathcal S}_n. \\]\nIn particular, for $P\\in G$, $(P-I_n)Z_0=0$ and\nthe above inequality reduces to (b).\n\n(b)$\\Rightarrow$(a).\n\nAssume (b). For $P\\in G$,\n\\[ \\norm{U-V}=\\norm{X-Y}\\leq\\norm{PU-V}=\\norm{PX-Y}. \\]\nFor $P\\in{\\mathcal S}_n\\setminus G$,\n\\[ \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\\geq \\norm{(I_n-P)Z_0} - \\norm{U}-\\norm{V}\\geq \\]\n\\[ \\geq \\delta_0\n-2\\norm{U}-2\\norm{V}+\\norm{U-V}\\geq \\norm{U-V}=\\norm{X-Y}. \\]\nThis shows $d(\\hat{X},\\hat{Y})=\\norm{X-Y}=\\norm{U-V}$.\n\n(b)$\\Longleftrightarrow$(c). This is immediate after squaring (b) and simplifying the terms.\n\n$\\Box$\n\\vspace{3mm}\n\nConsider now sequences $(\\hat{X^t})_t,(\\hat{Y^t})_t$ that converge to $\\hat{Z_0}$ \nand achieve lower bound 0 as in (\\ref{eq:lb}).\nChoose\nrepresentatives $X_t$ and $Y_t$ in their equivalence classes that satisfy the hypothesis of Lemma \\ref{l3.1} so that $X_t=Z_0+U_t$, $Y_t=Z_0+V_t$, $\\norm{U_t},\\norm{V_y}<\\frac{1}{4}\\delta_0$,\n $d(\\hat{X_t},\\hat{Z_0})=\\norm{U_t}$, $d(\\hat{Y_t},\\hat{Z_0})=\\norm{V_t}$ and $d(\\hat{X_t},\\hat{Y_t})=\\norm{U_t-V_t}>0$.\n With $A=[a_1 |\\cdots|a_D]$ we obtain:\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 = \\sum_{j=1}^D \\norm{\\downarrow(X_t a_j)-\\downarrow(Y_t a_j)}_2^2 =\n\\sum_{j=1}^D \\norm{(Z_0+U_t)a_j-\\Pi_{j,t}(Z_0+V_t)a_j}_2^2 \\]\nfor some $\\Pi_{j,t}\\in{\\mathcal S}_n$. In fact $\\Pi_{j,t}\\in argmin_{\\Pi\\in H_j}\\norm{U_t-\\Pi V_t)a_j}_2$. \nPass to sub-sequences (that will be indexed by $t$ for an easier notation) so that $\\Pi_{j,t}=\\Pi_j$ for some $\\Pi_j\\in{\\mathcal S}_n$. Thus\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 =\n\\sum_{j=1}^D \\norm{(I_n-\\Pi_j)Z_0a_j + (U_t-\\Pi_j V_t)a_j}_2^2 \\]\nSince the above sequence must converge to $0$ as $t\\rightarrow\\infty$, while $U_t,V_t\\rightarrow 0$, it follows that necessarily $\\Pi_j\\in H_j$ and the\nexpressions simplify to\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 =\n\\sum_{j=1}^D \\norm{(U_t-\\Pi_j V_t)a_j}_2^2 \\]\nThus equation (\\ref{eq:lb}) implies that for\nevery $j\\in[D]$,\n\\begin{equation}\n\\label{eq:lb2}\n\\lim_{t\\rightarrow\\infty} \\frac{\\norm{(U_t-\\Pi_j V_t)a_j}_2^2}{\\norm{U_t-V_t}^2} = 0\n\\end{equation} \nwhere $\\Pi_j\\in H_j$, $\\norm{U_t},\\norm{V_t}\\rightarrow 0$, and $U_t,V_t$ are aligned so that $\\ip{U_t}{V_t}\\geq \\ip{PU_t}{V_t}$ for every $P\\in G$.\nEquivalently, relation (\\ref{eq:lb}) can be restated as:\n\\begin{equation}\n \\label{eq:opt2}\n \\inf_{\\begin{array}{c} U,V\\in{\\mathbb{R}}^{n\\times d} \\\\ s.t. \\\\\n U\\neq V \\\\\n \\ip{U}{V}\\geq \\ip{PU}{V} , \\forall P\\in G\n \\end{array} } \\frac{\\sum_{j=1}^D \\norm{(U-\\Pi_j V)a_j}_2^2}{\\norm{U-V}^2} = 0\n\\end{equation}\nfor some permutations $\\Pi_j\\in H_j$, $j\\in[D]$.\nBy Lemma \\ref{l3.1} the constraint in the optimization problem above implies $\\norm{U-V}=\\min_{P\\in G}\\norm{U-PV}$. 
Hence (\\ref{eq:opt2}) implies:\n\\begin{equation}\n \\label{eq:opt3}\n \\inf_{\\begin{array}{c} U,V\\in{\\mathbb{R}}^{n\\times d} \\\\ s.t. \\\\\n U\\neq P V , \\forall P\\in G\n \\end{array} }\n \\max_{P\\in G} \\frac{\\sum_{j=1}^D \\norm{(U-\\Pi_j V)a_j}_2^2}{\\norm{U-P V}^2} = 0\n\\end{equation}\nfor same permutation matrices $\\Pi_j$'s.\nWhile the above optimization problem seems a relaxation of (\\ref{eq:opt2}), in fact (\\ref{eq:opt3}) implies (\\ref{eq:opt2})\nwith a possibly change of permutation matrices $\\Pi_j$, but \nremaining still in $H_j$.\n\\vspace{5mm}\n\n\n{\\em Step 3. Existence of a Minimizer.} \n \n\n\nThe optimization problem (\\ref{eq:opt2}) is a Quadratically Constrained Ratio of Quadratics (QCRQ) optimization problem. A significant number of papers \nhave been published on this topic \\cite{teb06,teb10}. \nIn particular, \\cite{QCRQbook} presents\na formal setup for analysis of QCRQ problems. \nOur interest is to utilize some of these techniques in order to establish the existence of a minimizer for (\\ref{eq:opt2}) or (\\ref{eq:opt3}). Specifically we show:\n\\begin{lem}\\label{l3.2}\nAssume the key $A$ has linearly independent rows (equivalently, the columns of $A$ form a frame for $\\mathbb{R}^d$) and the lower Lipschitz bound of ${\\hat{\\beta}}_A$ is $0$. Then there are $\\tilde{U},\\tilde{V}\\in{\\mathbb{R}}^{n\\times d}$ so that:\n\\begin{enumerate}\n \\item $\\tilde{U}\\neq P \\tilde{V}$, for every $P\\in G$;\n \\item For every $j\\in[D]$, $(\\tilde{U}-\\Pi_j \\tilde{V})a_j=0$.\n\\end{enumerate}\n\\end{lem}\n{\\bf Proof of Lemma \\ref{l3.2}}\n\n\n\nWe start with the formulation (\\ref{eq:opt3}). Therefore there are sequences \n$(U_t,V_t)_{t\\geq 1}$ so that $U_t\\neq PV_t$ for any $P\\in G, t\\geq 1$, and yet for any $P\\in G$,\n\\[ \\lim_{t\\rightarrow\\infty} \\frac{\\sum_{j=1}^D\\norm{(U_t-\\Pi_j V_t)a_j}_2^2}{\\norm{U_t-P V_t}^2} = 0. \\]\nLet $E=\\{(U,V)\\in{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}~,~(U-\\Pi_j)V)a_j=0~,~\\forall j\\in[D]\\}$ denote the null space of the linear operator \n\\[ T:{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}\\rightarrow \\mathbb{R}^D~,~(U,V)\\mapsto \\left[\\begin{array}{ccccc}\n(U-\\Pi_1 V)a_1 & \\vert & \\cdots & \\vert & (U-\\Pi_D V)a_D\n\\end{array}\\right],\n\\]\nassociated to the numerator of the above quotient. Let $F_P=\\{(U,V)\\in{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}~,~U-PV=0\\}$ be the null space of the linear operator \n\\[ R_P:{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}\\rightarrow {\\mathbb{R}}^{n\\times d}~,~(U,V)\\mapsto U-P V. \\]\nA consequence of (\\ref{eq:opt3}) is that for every $P\\in G$, $E\\setminus F_P\\neq\\emptyset$. \nIn particular, $F_p\\cap E$ is a subspace of $E$ of positive codimension. Using the \nBaire category theorem (or more elementary linear algebra arguments), we conclude that\n\\[ E\\setminus \\left(\\cup_{P\\in G} F_P\\right) \\neq \\emptyset. \\]\nLet $(\\tilde{U},\\tilde{V})\\in E\\setminus\\left(\\cup_{P\\in G}F_P\\right)$. This pair satisfies the\nconclusions of Lemma \\ref{l3.2}.\n\n\n\\ignore{\nFirst we rewrite (\\ref{eq:opt2}) in terms of new matrices. Let $S_1=S_1({\\mathbb{R}}^{n\\times d})$ denote the unit sphere of ${\\mathbb{R}}^{n\\times d}$. Let $W=U-V$. Since $W\\neq 0$, let $W_0=\\frac{1}{\\norm{W}}W$ with $\\norm{W_0}=1$.\nLet also $V_0\\in S_1$ be so that $V=t\\norm{W}V_0$\n for some $t\\geq 0$. 
Then the objective function in (\\ref{eq:opt2}), i.e., the quotient of the two quadratics, simplifies to\n \\[ \\sum_{j=1}^D \\norm{(W_0 + t(I-\\Pi_j)V_0)a_j}_2^2. \\]\n Let $\\Gamma_t$ define the constraints set:\n \\[ \\Gamma_t =\\cap_{P\\in G} \n \\{ (W_0,V_0)\\in S_1\\times S_1~:~\n t^2 \\ip{V_0}{(I-P)V_0}+t\\ip{W_0}{(I-P)V_0}\\geq 0 \\}.\n \\]\nNotice that for each $t\\geq 0$, $\\Gamma_t$ is a closed and hence a compact subset of $S_1\\times S_1$. It may be an empty set for some values of $t$. The problem (\\ref{eq:opt2}) is equivalent to:\n\\[ \\inf_{t\\geq 0} \\inf_{(W_0,V_0)\\in\\Gamma_t}\n\\sum_{j=1}^D \\norm{W_0 a_j + t(I-\\Pi_j)V_0a_j}_2^2 = 0 \\]\nLet $(W(t_k),V(t_k),t_k)\\in S_1\\times S_1\\times [0,\\infty)$, $k\\geq 1$, be a sequence that achieves the lower bound $0$. Extract a subsequence indexed again by $k$ so that\n $\\lim_{k\\rightarrow\\infty} W(t_k)=W_\\infty\\in S_1$ and $\\lim_{k\\rightarrow\\infty}V(t_k)=V_\\infty$. Thus, for all $j\\in [D]$, $\\lim_{k\\rightarrow\\infty} \\norm{W_\\infty a_j + t_k(I-\\Pi_j)V(t_k) a_j}_2 = 0$, which implies\n \\[ \\lim_{k\\rightarrow\\infty} t_k(I-\\Pi_j)V(t_k) a_j = -W\\infty a_j. \\]\n \n Case 1. $\\liminf_{k\\rightarrow\\infty} t_k<\\infty$. In this case extract a subsequence, say\n $(t_{k_l})_l$, so that\n $\\lim_{l\\rightarrow\\infty} t_{k_l}=t_\\infty\\in[0,\\infty)$. \nThis implies\n \\[ W_\\infty a_j +t_\\infty(I-\\Pi_j)V_\\infty a_j = 0~,~\\forall j\\in[D]. \\]\nNotice $(W_\\infty,V_\\infty)\\in \\Gamma_{t_\\infty}$.\nTherefore $\\tilde{U}=W_\\infty + t_\\infty V_\\infty$ and $\\tilde{V}=t_\\infty V_\\infty$ satisfy the conclusions (1),(2), and (3) and lemma \\ref{l3.2} is proved.\n\n\nCase 2. $\\liminf_{k\\rightarrow\\infty} t_k=\\infty$.\n\nIn the rest of the proof of this lemma, we construct an inductive process which ends with a scenario that either satisfies Case 1, or produces a (geo)metric contradiction. \n\nTo simplify notation we shall reuse the index $k$ at each stage.\n\n{\\em Initialization:} Set $p=1$. \nLet $V^{(1)}_{\\infty}=V_\\infty$, $t^{(1)}_k=t_k$, and $R^{(1)}_k=V(t_k)-V_\\infty$.\n\n{\\em Preamble:} Sequences $(t^{(p)}_k,R^{(p)}_k)$ satisfy, for every $j\\in[D]$:\n\\begin{equation}\n\\label{eq:sequences}\n\\lim_{k\\rightarrow\\infty}t^{(p)}_k = +\\infty, \\lim_{k\\rightarrow\\infty}R^{(p)}_k = 0, \\norm{R^{(p)}_k+V^{(p)}_\\infty}=1=\\norm{V^{(p)}_\\infty} , \\lim_{k\\rightarrow\\infty}t^{(p)}_k(I-\\Pi_j)R^{(p)}_k a_j = -W_\\infty a_j \n\\end{equation}\n\n{\\em Refinement:} Extract a subsequence \nindexed again by $k$ that\nsatisfies additionally:\n\\begin{equation}\n\\label{eq:seq2}\n\\norm{R^{(p)}_k}\\leq \\frac{1}{p}~,~ \\lim_{k\\rightarrow\\infty} \\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}}\\in S_1\n\\end{equation}\n\n{\\em Setting up the next iteration:} Set\n\\[ t^{(p+1)}_k=t^{(p)}_k\\norm{R^{(p)}_k} ~,~V^{(p+1)}_\\infty = \\lim_{k\\rightarrow\\infty} \\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}} ~,~ \nR^{(p+1)}_k=\\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}} - V^{(p+1)}_\\infty \\]\n\n{\\em Testing:} If $\\liminf_{k\\rightarrow\\infty}t^{(p+1)}_k<\\infty$ then proceed with Case 1 above, which ends the proof of this lemma.\n\nOtherwise $\\lim_{k\\rightarrow\\infty}t^{(p+1)}_k=\\infty$. Thus $(I-\\Pi_j)V^{(p+1)}_\\infty a_j=0$ for all $j\\in[D]$. \nSet $p\\leftarrow p+1$. The {\\em preamble} conditions (\\ref{eq:sequences}) are again satisfied \nfor all $j\\in[D]$. Then proceed by going to the {\\em refinement} step and iterate. 
\n\nIf the iterative process described above does not end at some finite $p$, then we construct sequences doubly indexed $(t^{(p)}_k,R^{(p)}_k)_{p,k}$ that satisfy (\\ref{eq:sequences}) and (\\ref{eq:seq2}). \n}\n\n $\\Box$\n\n\n\n\\vspace{5mm}\n\n{\\em Step 4. Contradiction with the universality property of the key.}\n\nSo far we obtained that if the lower Lipschitz bound of ${\\hat{\\beta}}_A$ vanishes than there are $Z_0,\\tilde{U},\\tilde{V}\\in{\\mathbb{R}}^{n\\times d}$ with $Z_0\\neq 0$ and $\\tilde{U}\\neq P \\tilde{V}$, for all $P\\in G$ that satisfy the conclusions of Lemma \\ref{l3.2}. Notice $\\ip{Z_0}{Z_0}=\\ip{PZ_0}{Z_0}$\n for all $P\\in G$ and $(Z_0-\\Pi_j Z_0)a_j=0$ for all $j\\in[D]$. Choose $s>0$ but small enough so that $s\\norm{\\tilde{U}},s\\norm{\\tilde{V}}<\\frac{1}{4}\\delta_0$ with $\\delta_0=\\min_{P\\in{\\mathcal S}_n\\setminus G} \\norm{(I_n-P)Z_0}$.\n Let $X=Z_0+s \\tilde{U}$ and $Y=Z_0+s\\tilde{V}$.\n Then Lemma \\ref{l3.1} implies $d(\\hat{X},\\hat{Y})=\\min_{P\\in G}\\norm{\\tilde{U}-P\\tilde{V}}>0$. \n Hence $\\hat{X}\\neq\\hat{Y}$. On the other hand,\n for every $j\\in[D]$, $Xa_j = \\Pi_j Ya_j$. Thus\n ${\\hat{\\beta}}_A(\\hat{X})={\\hat{\\beta}}_A(\\hat{Y})$. \n Contradiction with the assumption that ${\\hat{\\beta}}_A$ is injective.\n \n This ends the proof of Theorem \\ref{t5}.\n\n$\\Box$\n\\ignore{\n\\begin{rem}\nThe proof of the previous theorem provides estimates for \nboth type I and type II local lower and upper Lipschitz bounds.\n\\end{rem}\n}\n\n\\subsection{Dimension Reduction}\n\nTheorem \\ref{t4} provides an Euclidean bi-Lipschitz embedding of very high dimension, $D=1+(d-1)n!$. On the other hand, Theorem \\ref{t5} shows that any universal key $A\\in\\mathbb{R}^{d\\times D}$ for ${\\widehat{\\Rnd}}$, \nand hence any injective map $\\hat{\\beta}_A$ is bi-Lipschitz. In this subsection we show that \nany bi-Lipschitz Euclidean embedding $\\hat{\\beta}_A:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{n\\times D}$ with $D>2d$ \ncan be further compressed to a smaller dimension space $\\mathbb{R}^m$ with $m=2nd$ thus yielding\nbi-Lipschitz Euclidean embeddings of redundancy 2. This is shown in the next result.\n\n\\begin{thm}\n \\label{t6} Assume $A\\in\\mathbb{R}^{d\\times D}$ is a universal key for ${\\widehat{\\Rnd}}$ with $D\\geq 2d$. \n Then, for $m\\geq 2nd$, a generic linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{m}$ with respect to Zariski topology on\n $\\mathbb{R}^{n\\times D\\times m}$, the map\n \\begin{equation}\n \\label{eq:AB1}\n \\hat{\\beta}_{A,B}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{2nd}~,~ \\hat{\\beta}_{A,B}(\\hat{X})=B\\left(\\hat{\\beta}_A(\\hat{X})\\right)\n \\end{equation}\n is bi-Lipschitz. In particular, almost every full-rank linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$ produces such a \n bi-Lipschitz map.\n\\end{thm}\n\n\\begin{rmk}\nThe proof shows that, in fact, the complement set of linear operators $B$ that produce bi-Lipschitz embeddings is included \nin the zero-set of a polynomial. \n\\end{rmk}\n\n\\begin{rmk}\nPutting together Theorems \\ref{t4}, \\ref{t5}, \\ref{t6} we obtain that the metric space ${\\widehat{\\Rnd}}$ admits\na global bi-Lipschitz embedding in the Euclidean space $\\mathbb{R}^{2nd}$. 
This result is compatible\nwith a Whitney embedding theorem (see \\S 1.3 in \\cite{hirsh}) with the important caveat that the Whitney embedding result\napplies to smooth manifolds, whereas here ${\\widehat{\\Rnd}}$ is merely a non-smooth algebraic variety.\n\\end{rmk}\n\n\\begin{rmk}\nThese three theorems are summarized in part (2) of Theorem \\ref{t1} presented in\nthe first section.\n\\end{rmk}\n\n\\begin{rmk}\nWhile the embedding dimension grows linearly in $nd$, in fact $m=2nd$, the computational complexity of constructing ${\\hat{\\beta}}_{A,B}$ grows with the $1+(d-1)n!$ intermediary dimension and is therefore not polynomial in $n$.\n\\end{rmk}\n\n\\begin{rmk}\nAs the proofs show, for $D\\geq 1+(d-1)n!$, a generic $(A,B)$ with\nrespect to the Zariski topology, $A\\in\\mathbb{R}^{d\\times D}$ and linear map $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$, produces a bi-Lipschitz embedding $({\\hat{\\beta}}_{A,B},d)$ of ${\\widehat{\\Rnd}}$ into $(\\mathbb{R}^{2nd},\\norm{\\cdot}_2)$. \n\\end{rmk}\n{\\bf Proof of Theorem \\ref{t6} }\n\nThe proof follows an approach similar to that of Theorem 3 in \\cite{Cahill19}.\nSee also \\cite{DUFRESNE20091979}.\n\nWithout loss of generality, assume $m=2nd$. For a $2D$-tuple of permutations $\\gamma=(\\Pi_1,\\ldots,\\Pi_D,\\Pi'_1,\\ldots,\\Pi'_D)\\in({\\mathcal S}_n)^{2D}$ consider the linear operator\n\\[ L_\\gamma:{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^{n\\times D}~,~ L_\\gamma(X,Y)=\\left[\\begin{array}{ccccc}\n\\Pi_1 Xa_1-\\Pi'_1 Ya_1 & \\vert & \\cdots & \\vert & \\Pi_D Xa_D-\\Pi'_D Ya_D\n\\end{array}\\right]. \\]\nThe range of each $L_\\gamma$ is a linear subspace of $\\mathbb{R}^{n\\times D}$ of dimension at most $2nd=m$. Hence, for a generic linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^m$ with respect to the Zariski topology, $\\ker B$ intersects the range of each of the finitely many $L_\\gamma$ only at $0$, and therefore for each $\\gamma$ there is $a_\\gamma>0$ so that for every $X,Y\\in\\mathbb{R}^{n\\times d}$, \n$\\norm{B(L_{\\gamma}(X,Y))}\\geq a_\\gamma \\norm{L_{\\gamma}(X,Y)}$.\nLet $a_\\infty = \\min_{\\gamma}a_\\gamma >0$. Thus\n\\[ \\norm{\\beta_{A,B}(X) - \\beta_{A,B}(Y)} =\\norm{B(L_{\\gamma_0}(X,Y))}\n\\geq a_\\infty \\norm{L_{\\gamma_0}(X,Y)}=a_\\infty \\norm{\\beta_A(X)-\\beta_A(Y)} \\]\nwhere $\\gamma_0\\in({\\mathcal S}_n)^{2D}$ is a particular $2D$-tuple of permutations, namely the one realizing the column sortings of $XA$ and $YA$. This shows that \n$B{\\vert}_{\\beta_A(\\mathbb{R}^{n\\times d})}:\\beta_A(\\mathbb{R}^{n\\times d})\\rightarrow\\mathbb{R}^m$ is bi-Lipschitz.\nBy Theorem \\ref{t5}, the map ${\\hat{\\beta}}_A$ is bi-Lipschitz. Therefore\n${\\hat{\\beta}}_{A,B}$ is bi-Lipschitz as well.\n$\\Box$\n\n\n\\subsection{Proof of Corollary \\ref{c0}\\label{subsec4.4}}\n\n(1) It is clear that any continuous $f$ induces a continuous $\\varphi:\\beta(\\mathbb{R}^{n\\times d})\\rightarrow\\mathbb{R}$ via $\\varphi(\\beta(X))=f(X)$. Furthermore, \n$F:=\\beta(\\mathbb{R}^{n\\times d})={\\hat{\\beta}}({\\widehat{\\Rnd}})$\nis a closed subset of $\\mathbb{R}^m$ since ${\\hat{\\beta}}$ is bi-Lipschitz. \nThen a consequence of the Tietze extension theorem \n(see problem 8 in \\S 12.1 of \\cite{roydenfitzpatrick})\nimplies that $\\varphi$ admits a continuous extension $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$. Thus $g(\\beta(X))=f(X)$ \nfor all $X\\in\\mathbb{R}^{n\\times d}$. The converse is trivial.\n\n(2) As in part (1), the Lipschitz continuous function $f$ induces a Lipschitz continuous function $\\varphi:F\\rightarrow\\mathbb{R}$. Since $F\\subset\\mathbb{R}^m$ is a subset of a Hilbert space, by the Kirszbraun\nextension theorem (see \\cite{WelWil75}), $\\varphi$ \nadmits a Lipschitz continuous extension \n(even with the same Lipschitz constant!)\n $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ so that $g(\\beta(X))=f(X)$ for every $X\\in\\mathbb{R}^{n\\times d}$. The converse is trivial. $\\Box$\n\n\n\\section{Applications to Graph Deep Learning\\label{sec4}}\n\nIn this section we take an empirical look at the permutation invariant mappings presented in this paper. We focus on the problems of graph classification, for which we employ the PROTEINS\\_FULL dataset \\cite{DobsonDoing_proteins}, and graph regression, for which we employ the quantum chemistry QM9 dataset\n\\cite{ramakrishnan2014quantum}. 
In both problems we want to estimate a function $F: (A,Z) \\rightarrow p$, where $(A,Z)$ characterizes a graph: $A \\in \\mathbb{R}^{n\\times n}$ is an adjacency matrix and $Z \\in \\mathbb{R}^{n\\times r}$ is an associated feature matrix whose $i^{th}$\nrow encodes an array of $r$ features associated with the $i^{th}$ node. $p$ is a scalar output, with $p \\in \\{0,1\\}$ for binary classification and $p \\in \\mathbb{R}_+$ for regression.\n\nWe estimate $F$ using a deep network that is trained in a supervised manner. The network is composed of three successive components applied in series: $\\Gamma$, $\\phi$, and $\\eta$. $\\Gamma$ represents a graph deep network \\cite{GCN},\nwhich produces a set of embeddings $X \\in \\mathbb{R}^{N\\times d}$ across the nodes in the graph. Here $N\\geq n$ is chosen to accommodate the graph with the largest number of nodes. In this case, the last $N-n$ rows of $X$ are filled with 0's. $\\phi: \\mathbb{R}^{N\\times d} \\rightarrow \\mathbb{R}^{m}$ represents a permutation invariant mapping such as those proposed in this paper. $\\eta: \\mathbb{R}^{m} \\rightarrow \\mathbb{R}$ is a fully connected neural network. The entire end-to-end network is shown in Figure \\ref{fig:gcn_end2end}.\n\nIn this paper, we model $\\Gamma$ using a Graph Convolutional Network (GCN) outlined in \\cite{GCN}. \nLet ${\\bf D} \\in \\mathbb{R}^{n \\times n}$ be the associated degree matrix for our graph $\\mathcal{G}$. Also let $\\tilde{A}$ be the associated adjacency matrix of $\\mathcal{G}$ with added self connections: $\\tilde{A}=I+A$, where $I$ is the $n \\times n$ identity matrix, and $\\tilde{{\\bf D}}={\\bf D}+I$. Finally, we define the modified adjacency matrix $\\hat{A}=\\tilde{{\\bf D}}^{-1\/2} \\tilde{A} \\tilde{{\\bf D}}^{-1\/2}$. A GCN layer is defined as $H^{(l+1)}=\\sigma (\\hat{A}H^{(l)}W^{(l)})$.\nHere $H^{(l)}$ represents the GCN state coming into the layer, $\\sigma$ represents a chosen nonlinear element-by-element operation such as ReLU, and $W^{(l)}$ represents a matrix of trainable weights assigned to the $l^{th}$ layer, whose number of rows matches the number of columns of $H^{(l)}$ and whose number of columns is set to the size of the embeddings at the $(l+1)^{th}$ layer. The initial state $H^{(0)}$ of the network is set to the feature set of the nodes of the graph, $H^{(0)}=Z$. \n\n\nFor $\\phi$ we employ seven different methods that are described next.\n\\begin{enumerate}\n \\item ordering: For the ordering method, we set $D=d+1$, $\\phi_{ordering}(X)=\\beta_A(X)=\\downarrow(XA)$ with $A=[I~1]$ the identity matrix followed by a column of ones. The ordering and identity-based mappings have the notable disadvantage of not producing the same output embedding size for different sized graphs. To accommodate this and have consistently sized inputs for $\\eta$, we choose to zero-pad $\\phi(X)$ for these methods to produce a vector in $\\mathbb{R}^{m}$, where $m=ND=N(d+1)$ and $N$ is the size of the largest graph in the dataset.\n \\item kernels: For the kernels method, \n $$(\\phi_{kernel}(X))_j=\\sum_{k=1}^n K_G(x_k,a_j)=\\sum_{k=1}^n \\exp(-\\norm{x_k-a_j}^2),\n ~~j\\in[m],$$ \n for $X=[x_1|\\cdots|x_n]^T$, where the kernel vectors $a_1,\\ldots,a_m\\in\\mathbb{R}^d$ \n are generated randomly: each element of each vector is drawn from a standard normal distribution. Each resultant vector is then normalized to produce a kernel vector of magnitude one. 
When inputting the embedding $X$ to the kernels mapping, we first normalize the embedding for each respective node.\n \\item identity: In this case $\\phi_{id}(X)=X$, which is obviously not a permutation invariant map.\n \\item data augmentation: In this case $\\phi_{data\\;augment}(X)=X$ but data augmentation is used. Our data augmentation scheme works as follows. We take the training set and create multiple permutations of the adjacency and associated feature matrix for each graph in the training set. We add each permuted graph to the training set to be included with the original graphs. In our experiments we use four added permutations for each graph when employing data augmentation.\n \\item sum pooling: The sum pooling method sums the feature values across the set of nodes: $\\phi_{sum\\;pooling}(X)=\\mathbf{1}_{n\\times 1}^T X$.\n \\item sort pooling: The sort pooling method reorders the rows of $X$ so that the last column is sorted in decreasing order, $\\phi_{sort\\;pool}(X)=\\Pi X$ where $\\Pi\\in{\\mathcal S}_n$ is so that $\\Pi\\,X(:,d)=\\downarrow(X(:,d))$. \n \\item set-2-set: This method employs a recurrent neural network\n that achieves permutation invariance through attention-based weighted summations. It was introduced in \\cite{OrderMatters_2015arXiv151106391V}.\n\\end{enumerate}\n\nFor our deep neural network $\\eta$ we use a simple multilayer perceptron of the size described below.\n\nSize parameters related to the $\\Gamma$ and $\\eta$ components are largely held constant across the different implementations.\nHowever, the network parameters are trained independently for each method.\n\n\\begin{figure}[!htbp]\n \\centering\n\t\\includegraphics[width=.8\\linewidth]{Results\/GCN_end2end_2.png}\n\t\\caption[End-to-end network]{The end-to-end network: the GCN $\\Gamma$ produces node embeddings $X$, the permutation invariant map $\\phi$ produces a fixed-size vector, and the fully connected network $\\eta$ produces the output $p$.}\n\t\\label{fig:gcn_end2end}\n\\end{figure}\n\n\n\\subsection{Graph Classification}\n\n\\subsubsection{Methodology}\nFor our experiments in graph classification we consider the PROTEINS\\_FULL dataset obtained from \\cite{KKMMN2016} and originally introduced in \\cite{DobsonDoing_proteins}. \nThe dataset consists of 1113 proteins falling into one of two classes: those that function as enzymes and those that do not. Across the dataset there are 450 enzymes in total. \nThe graph for each protein is constructed such that the nodes represent amino acids and the edges represent the bonds between them. The number of amino acids (nodes) varies from around 20 to a maximum of 620 per protein, with an average of 39.06. \nEach protein comes with a set of features for each node. \nThe features represent characteristics of the associated amino acid represented by the node. The number of features is $r=29$.\nWe run the end-to-end model with three GCN layers in $\\Gamma$, each with 50 hidden units. \n$\\eta$ consists of three dense multi-layer perceptron layers, each with 150 hidden units. \nWe set $d$ equal to 1, 10, 50 and 100.\n\nFor each method and embedding size we train for 300 epochs. Note though that the data augmentation method will have experienced five times as many training steps due to the increased size of its training set. We use a batch size of 128 graphs. 
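\n\nFor concreteness, the following is a minimal NumPy sketch of four of the pooling maps $\\phi$ listed above (a hypothetical illustration under the definitions given here, not the code used in the experiments; all names are illustrative).\n\\begin{verbatim}\nimport numpy as np\n\ndef phi_ordering(X, N):\n    # ordering: beta_A(X) = columnwise descending sort of X A, with key A = [I | 1]\n    n, d = X.shape\n    A = np.hstack([np.eye(d), np.ones((d, 1))])\n    Y = -np.sort(-(X @ A), axis=0)          # sort each column in decreasing order\n    out = np.zeros((N, d + 1))              # zero-pad up to the largest graph size N\n    out[:n, :] = Y\n    return out.ravel()\n\ndef phi_kernel(X, kernel_vectors):\n    # kernels: sum_k exp(-||x_k - a_j||^2) for each random unit-norm kernel vector a_j\n    sq = ((X[:, None, :] - kernel_vectors[None, :, :]) ** 2).sum(axis=2)\n    return np.exp(-sq).sum(axis=0)\n\ndef phi_sum_pooling(X):\n    # sum pooling: 1^T X\n    return X.sum(axis=0)\n\ndef phi_sort_pool(X):\n    # sort pooling: reorder the rows so that the last column is in decreasing order\n    return X[np.argsort(-X[:, -1])]\n\n# toy check: permuting the rows of X leaves these maps unchanged (no ties in the random data)\nrng = np.random.default_rng(0)\nX = rng.normal(size=(5, 3))\nP = rng.permutation(5)\na = rng.normal(size=(8, 3))\na /= np.linalg.norm(a, axis=1, keepdims=True)\nassert np.allclose(phi_ordering(X, 10), phi_ordering(X[P], 10))\nassert np.allclose(phi_kernel(X, a), phi_kernel(X[P], a))\nassert np.allclose(phi_sum_pooling(X), phi_sum_pooling(X[P]))\nassert np.allclose(phi_sort_pool(X), phi_sort_pool(X[P]))\n\\end{verbatim}\n\n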
The loss function minimized during training is the binary cross entropy loss (BCE) defined as\n\\begin{equation}\n\\label{eq:BCE}\nBCE = -\\frac{1}{B}\\sum_{t=1}^B \\left[ p_t \\log(\\sigma(\\eta(\\phi(X^{(t)}))))+(1-p_t)\\log(1-\\sigma(\\eta(\\phi(X^{(t)})))) \\right] \n\\end{equation} \nwhere $B=128$ is the batch size, $p_t=1$ when the $t^{th}$ graph\n(protein) is an enzyme and $p_t=0$ otherwise, and $\\sigma(x)=\\frac{1}{1+e^{-x}}$ is the sigmoid function that maps the output $\\eta(\\phi(X^{(t)}))$ of the 3-layer fully connected network $\\eta$ to $[0,1]$. Three performance metrics were computed: accuracy (ACC), area under the receiver operating characteristic curve (AUC), and average precision (AP), the area under the precision-recall curve. These measures are defined as follows (see the scikit-learn sklearn.metrics module documentation, or \\cite{roc}).\n\nFor a threshold $\\tau\\in[0,1]$, the classification decision $\\hat{p}_t(\\tau)$ is given by:\n\\begin{equation}\n \\hat{p}_t(\\tau) = \\left\\{\n \\begin{array}{rcl}\n 1 & \\mbox{if} & \\mbox{$\\sigma(\\eta(\\phi(X^{(t)})))\\geq \\tau$} \\\\\n 0 & & \\mbox{otherwise}\n \\end{array}\\right. .\n\\end{equation}\nBy default $\\tau=\\frac{1}{2}$. For a given threshold, one computes the four scores, true positive (TP), false positive (FP), true negative (TN) and false negative (FN):\n\\begin{equation}\n TP(\\tau) = \\frac{1}{B_1}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 1}1_{p_t = 1} ~~, ~~\n TN(\\tau) = \\frac{1}{B_0}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 0}1_{p_t = 0}\n\\end{equation}\n\\begin{equation}\n FP(\\tau) = \\frac{1}{B_0}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 1}1_{p_t = 0} = 1-TN(\\tau) ~~,~~\n FN(\\tau) = \\frac{1}{B_1}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 0}1_{p_t = 1} = 1-TP(\\tau)\n\\end{equation}\nwhere $B_0=\\sum_{t=1}^B 1_{p_t = 0}$ and $B_1=\\sum_{t=1}^B 1_{p_t = 1}=B-B_0$.\n\nThese four statistics determine the Precision $P(\\tau)$, Recall $R(\\tau)$ (also known as sensitivity or true positive rate), and Specificity $S(\\tau)$ (also known as true negative rate):\n\\begin{equation} P(\\tau) = \\frac{TP(\\tau)}{TP(\\tau)+FP(\\tau)}\n~~,~~R(\\tau) = \\frac{TP(\\tau)}{TP(\\tau)+FN(\\tau)}\n~~,~~S(\\tau) = \\frac{TN(\\tau)}{TN(\\tau)+FP(\\tau)}\n\\end{equation}\n\nAccuracy (ACC) is defined as the fraction of correct classifications at the default threshold $\\tau=\\frac{1}{2}$ over the set of batch samples:\n\\begin{equation}\n\\label{eq:ACC}\n ACC = \\frac{1}{B}\\sum_{t=1}^B 1_{p_t = \\hat{p}_t(\\frac{1}{2})} =\\frac{B_0}{B} TN(\\frac{1}{2}) + \\frac{B_1}{B} TP(\\frac{1}{2}) \n\\end{equation}\nThe area under the receiver operating characteristic curve (AUC) is computed from prediction scores as the area under the true positive rate (TPR) vs. false positive rate (FPR) curve, i.e., the recall vs. $1-$specificity curve:\n\\begin{equation}\n\\label{eq:AUC}\n AUC = \\frac{1}{2}\\sum_{k=1}^K (S(\\tau_{k-1})-S(\\tau_k))(R(\\tau_{k-1})+R(\\tau_k))\n\\end{equation}\nwhere $K$ is the number of thresholds.\nAverage precision (AP) summarizes a precision-recall curve as the weighted mean of precision achieved at each threshold, with the increase in recall from the previous threshold used as the weight:\n\\begin{equation}\n\\label{eq:AP}\n AP = \\sum_{k=1}^K (R(\\tau_k) - R(\\tau_{k-1}))P(\\tau_k).\n\\end{equation}\n\nWe track the binary cross entropy (BCE) through training and we compute it on the holdout set and on a random node permutation of the holdout set (see Figures \\ref{fig:prot1} and \\ref{fig:prot2}). 
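\n\nIn practice, the three metrics above can be computed directly with scikit-learn; the following minimal sketch (hypothetical code with illustrative names, not the experiment pipeline) assumes the sigmoid outputs $\\sigma(\\eta(\\phi(X^{(t)})))$ are collected in an array of scores.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.metrics import (accuracy_score, roc_auc_score,\n                             average_precision_score)\n\ndef classification_metrics(p_true, scores, tau=0.5):\n    # ACC at threshold tau, AUC of the ROC curve, and average precision (AP)\n    p_hat = (scores >= tau).astype(int)      # thresholded decision p_hat_t(tau)\n    acc = accuracy_score(p_true, p_hat)\n    auc = roc_auc_score(p_true, scores)      # area under recall vs. 1-specificity\n    ap = average_precision_score(p_true, scores)\n    return acc, auc, ap\n\n# toy usage with made-up labels and scores\np_true = np.array([1, 0, 1, 1, 0, 0])\nscores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1])\nprint(classification_metrics(p_true, scores))\n\\end{verbatim}\n\n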
For the BCE, the lower the value the better.\n\nWe look at the three performance metrics on the training set, the holdout set, and a random node permutation of the holdout set: see Figures \\ref{fig:prot3} and \\ref{fig:prot4} for accuracy (ACC); see Figures \\ref{fig:prot5} and \\ref{fig:prot6} for area under the receiver operating characteristic curve (AUC); and see Figures \\ref{fig:prot7} and \\ref{fig:prot8} for average precision (AP). For all these performance metrics, the higher the score the better.\n\n\\subsubsection{Discussion}\n\nTables \\ref{table:t1}-\\ref{table:t12} list values of the three performance metrics (ACC, AUC, AP) at the end of training (after 300 epochs). \nPerformances over the course of training are plotted in Figures \\ref{fig:prot1} through \\ref{fig:prot8}.\n\nThe authors of \\cite{KKMMN2016} utilized a Support Vector Machine (1-layer perceptron) for classification and \nobtained an accuracy (ACC) of 77\\% on the entire data set \nusing 52 features, and an accuracy of 80\\% on a smaller set of 36 features. By comparison, our data augmentation method for $d=100$ achieved an accuracy of 97.5\\% on the training data set,\nbut dropped dramatically to 73\\% on holdout data, and 72\\% on the \nholdout data set with randomly permuted nodes. \nOn the other hand, both the kernels method and the sum-pooling\nmethod with $d=50$ achieved an accuracy of around 79\\% on the\ntraining data set, while dropping by \nonly about 2\\%, to around\n77\\%, on holdout data (as well as on holdout data with permuted nodes).\n\nFor $d=1$, data augmentation performed the best on the training set with an area under the receiver operating characteristic (AUC) of 0.896, followed closely by the identity method with an AUC of 0.886. On the permuted holdout set, however, sort-pooling performed the best with an AUC of 0.803.\n\nFor $d=10$, sum-pooling, ordering, and kernels performed well on the permuted holdout set with AUCs of 0.821, 0.820, and 0.818 respectively. The high performance of the identity method, data augmentation, and sort-pooling on the training set did not translate to the permuted holdout set at $d=10$. By $d=100$, sum-pooling still performed the best on the permuted holdout set with an AUC of 0.817. This was followed by the kernels method, which achieved an AUC of 0.801 on the permuted holdout set.\n\nFor experiments where $d>1$, the identity method and data augmentation show a notable drop in performance from the training set to the holdout set. This trend is also, to a lesser extent, visible in the sort pooling and ordering methods. On the permuted holdout set we see significant oscillations in the performance of both the identity and data augmentation methods.\n\n\\subsection{Graph Regression}\n\n\\subsubsection{Methodology}\nFor our experiments in graph regression we consider the QM9 dataset \\cite{ramakrishnan2014quantum}. This dataset consists of 134 thousand molecules represented as graphs, where the nodes represent atoms and the edges represent the bonds between them. \n\nEach graph has between 3 and 29 nodes, $3\\leq n\\leq 29$. Each node has 11 features, $r=11$. We hold out 20 thousand of these molecules for evaluation purposes. 
The dataset includes 19 quantitative features for each molecule.\n\nFor the purposes of our study, we focus on the electron energy gap (in $eV$), denoted $\\Delta\\varepsilon$ in \\cite{DFTpaper}, whose chemical accuracy is $0.043 eV$ and which is the feature predicted least accurately by existing machine learning techniques.\nThe best existing estimator for this feature is enn-s2s-ens5 from \\cite{Gilmer_2017arXiv170401212G}\n and has a mean absolute error (MAE) of $0.0529eV$, which is $1.23$ times larger than the chemical accuracy. \n We run the end-to-end model with three GCN layers in $\\Gamma$, each with 50 hidden units. $\\eta$ consists of three multi-layer perceptron layers, each with 150 hidden units. We use rectified linear units as our nonlinear activation function. Finally, we vary $d$, the size of the node embeddings that are outputted by $\\Gamma$. We set $d$ equal to 1, 10, 50 and 100.\n\nFor each method and embedding size we train for 300 epochs. Note though that the data augmentation method will have experienced five times as many training steps due to the increased size of its training set. We use a batch size of 128 graphs. The loss function minimized during training is the mean square error (MSE) between the ground truth and the network output \n (see Figures \\ref{fig:a1}, \\ref{fig:a2}) \n\\begin{equation}\n\\label{eq:MSE}\nMSE = \\frac{1}{B}\\sum_{t=1}^B |\\Delta\\varepsilon_t -\\eta(\\phi(X^{(t)}))|^2\n\\end{equation}\nwhere $B=128$ is the batch size and $\\Delta\\varepsilon_t$ is the electron energy gap of the $t^{th}$ graph (molecule). The performance\nmetric is the Mean Absolute Error (MAE)\n\\begin{equation}\n\\label{eq:MAE}\nMAE = \\frac{1}{B}\\sum_{t=1}^B |\\Delta\\varepsilon_t -\\eta(\\phi(X^{(t)}))|.\n\\end{equation}\nWe track the mean absolute error through the course of training. We look at this performance metric on the training set, the holdout set, and a random node permutation of the holdout set (see Figures \\ref{fig:b1} and \\ref{fig:b2}). \n\n\n\\subsubsection{Discussion}\n\nNumerical results at the end of training (after 300 epochs) are included in Tables \\ref{table:a}, \\ref{table:b}, \\ref{table:c} and \\ref{table:d}.\nFrom the results we see that the ordering method performed best for $d=100$,\nfollowed closely by the data augmentation method. Both the ordering method and the kernels method performed well for $d=10$, though both fell slightly short of data augmentation, which performed marginally better on both the training data and the holdout data, albeit with significantly more training iterations. For $d=1$, the kernels method failed to train adequately. The identity mapping performed relatively well on the training data (for $d=100$ it achieved the smallest MAE among all methods and all parameters) and even on the holdout data; however, it lost its performance on the permuted holdout data. The identity mapping's failure to generalize across permutations of the holdout set is likely exacerbated by the fact that the QM9 data as presented to the network comes ordered in its node positions from heaviest atom to lightest. Data augmentation notably kept its performance despite this, due to training on many permutations of the data. \n\nFor $d=100$, our ordering method achieved a MAE of $0.155eV$ on the training data set and $0.187eV$ on the holdout data set, which are $3.6$ and $4.35$ times larger than the chemical accuracy ($0.043eV$\\ignore{ cf. Supplementary material of \\cite{Gilmer_2017arXiv170401212G}}), respectively. 
This is worse than the enn-s2s-ens5 method in \\cite{Gilmer_2017arXiv170401212G} (current best method) that achieved a MAE $0.0529$ (eV), $1.23$ larger than the chemical accuracy, \nbut better than the Coulomb Matrix (CM) representation in \\cite{PhysRevLett.108.058301} that achieved a MAE $5.32$ larger than the chemical accuracy whose features were optimized for this task.\n\n\n\\bibliographystyle{amsplain}\n\n\\section{Introduction\\label{sec1}}\n\n\nThis paper is motivated by a class of problems in graph deep learning, where the\nprimary task is either graph classification or graph regression. \nIn either case, the result should be invariant to arbitrary permutations of graph nodes.\n\nAs we explain below, the mathematical problem analyzed in this paper is a special case \nof the permutation invariance issue described above. To set the notations consider the\nvector space ${\\mathbb{R}}^{n\\times d}$ of $n\\times d$ matrices endowed with the Frobenius norm \n $\\norm{X}=\\left(trace(XX^T)\\right)^{1\/2}$\nand its associated Hilbert-Schmidt scalar product, $\\ip{X}{Y}=trace(XY^T)$.\n Let ${\\mathcal S}_n$ denote the symmetric group of $n\\times n$ permutation matrices. \n ${\\mathcal S}_n$ is a finite group of size $|{\\mathcal S}_n|=n!$.\n\nOn ${\\mathbb{R}}^{n\\times d}$ we consider the equivalence relation $\\sim$ \ninduced by the symmetric group of permutation matrices ${\\mathcal S}_n$ as follows. Let $X,Y\\in{\\mathbb{R}}^{n\\times d}$. \nThen we say $X\\sim Y$ if there is $P\\in{\\mathcal S}_n$ so that $Y=PX$. In other words, two matrices are equivalent if one is a row permutation of the other. \nThe equivalence relation induces a natural distance on the quotient space \n${\\widehat{\\Rnd}}:={\\mathbb{R}}^{n\\times d}\/\\sim$,\n\\begin{equation}\n \\label{eq:1.1}\nd: {\\widehat{\\Rnd}}\\times {\\widehat{\\Rnd}} \\rightarrow\\mathbb{R} ~~,~~d(\\hat{X},\\hat{Y})=\\min_{\\Pi\\in{\\mathcal S}_n}\\norm{X-\\Pi Y} \n\\end{equation}\nThis makes $({\\widehat{\\Rnd}},d)$ a complete metric space.\n\nOur main problem can now be stated as follows:\n\\begin{prob}\\label{prob1}\nGiven $n,d\\geq 1$ positive integers, find $m$ and a bi-Lipschitz map\n $\\hat{\\alpha}:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^m,\\norm{\\cdot}_2)$.\n\\end{prob}\nExplicitly the problem can be restated as follows. One is asked to construct a \nmap $\\alpha:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m$ that satisfies the following conditions:\n\\begin{enumerate}\n \\item If $X,Y\\in{\\mathbb{R}}^{n\\times d}$ so that $X\\sim Y$ then $\\alpha(X)=\\alpha(Y)$\n \\item If $X,Y\\in{\\mathbb{R}}^{n\\times d}$ so that $\\alpha(X)=\\alpha(Y)$ then $X\\sim Y$\n \\item There are constants $01$. \n \\item {\\em Sorting Embedding}.\n For $x\\in\\mathbb{R}^n$, consider the sorting map\n\\begin{equation}\\label{eq:ord}\n \\downarrow:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n ~~,~~\n \\downarrow(x)=(x_{\\pi(1)},x_{\\pi(2)},\\ldots,\n x_{\\pi(n)})^T \n\\end{equation}\n where the permutation $\\pi$ is so that\n $x_{\\pi(1)}\\geq x_{\\pi(2)}\\geq\\cdots\\geq x_{\\pi(n)}$. It is obvious that $\\downarrow$ satisfies Conditions (1) and (2) and therefore lifts to an injective map on ${\\widehat{\\Rnd}}$. As we see in Section \\ref{sec3}, the map $\\downarrow$ is bi-Lipschitz. In fact it is isometric, and hence produces an ideal embedding. 
Our work in Section \\ref{sec3} is to extend such construction to the more general \n case $d>1$.\n\\end{enumerate}\nThe algebraic embedding is a special case of the more general {\\em kernel method} that can be thought of as a projection of the measure \n$a_{\\infty}(X)$ onto a finite dimensional space, e.g., the space of polynomials spanned by $\\{X,X^2,\\cdots,X^n\\}$. In applications such kernel method is known as a ``Readout Map\" \\cite{deepsets}, based on ``Sum Pooling\".\n\nThe sorting embedding has been used in applications under the name of ``Pooling Map\" \\cite{deepsets}, based on ``Max Pooling\". A na\\\"{\\i}ve extension of the unidimensional map (\\ref{eq:ord}) to the case $d>1$ might employ the lexicographic order: order monotone decreasing the rows according to the first column, and break the tie by going to the next column. While this gives rise to an injective map, it is easy to see it is not even continuous, let alone Lipschitz. The main work in this paper is to extend the sorting embedding to the case $d>1$ using a three-step procedure, first embed ${\\mathbb{R}}^{n\\times d}$ into a larger vector space $\\mathbb{R}^{n\\times D}$, then apply $\\downarrow$ in each column independently, and then perform a dimension reduction by a linear map into $\\mathbb{R}^{2nd}$. Similar to the phase retrieval problem (\\cite{bcmn,bod,balan16}), the redundancy introduced in the first step counterbalances the loss of information (here, relative order of one column with respect to another) in the second step. \n\nA summary of main results presented in this paper is contained in the following result.\n\\begin{thm}\\label{t1}\nConsider the metric space $({\\widehat{\\Rnd}},d)$.\n\\begin{enumerate}\n\\item (Polynomial Embedding) There exists a Lipschitz injective map\n\\[ {\\hat{\\alpha}}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m \\]\nwith $m=\\left( \\begin{array}{c}\n\\mbox{$d+n$} \\\\\n\\mbox{$d$}\n\\end{array} \\right)$. Two explicit constructions of this map are given in (\\ref{eq:alpha1}) and (\\ref{eq:alpha2}).\n\\item (Sorting based Embedding) There exists a class of bi-Lipschitz maps \n\\[ {\\hat{\\beta}}_{A,B}:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^m,\\norm{\\cdot}) ~,~ {\\hat{\\beta}}_{A,B}(\\hat{X})=B\\left({\\hat{\\beta}}_A(\\hat{X})\\right) \\]\nwith $m=2nd$, where each map ${\\hat{\\beta}}_{A,B}$ is the composition of two bi-Lipschitz maps: a full-rank linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow \\mathbb{R}^m$, with the nonlinear bi-Lipschitz map ${\\hat{\\beta}}_A:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{n\\times D}$\n parametrized by a matrix $A\\in\\mathbb{R}^{d\\times D}$ called \"key\". \n Explicitly,\n${\\hat{\\beta}}(\\hat{X})=\\downarrow(XA)$, where $\\downarrow$ acts column-wise.\n These maps are characterized by the following properties:\n \\begin{enumerate}\n\\item For $D=1+(d-1)n!$, any\n$A\\in\\mathbb{R}^{d\\times (1+(d-1)n!)}$ whose columns form a full spark frame defines a bi-Lipschitz map ${\\hat{\\beta}}_A$ on ${\\widehat{\\Rnd}}$. \nFurthermore, a lower Lipschitz constant is given by the smallest $d^{th}$ singular value among all $d\\times d$ sub-matrices of $A$,\n $\\min_{J\\subset[D],|J|=d}s_d(A[J])$.\n\\item For any matrix (``key\") $A\\in\\mathbb{R}^{d\\times D}$ such that the map \n${\\hat{\\beta}}_A$ is injective, then ${\\hat{\\beta}}_A:({\\widehat{\\Rnd}},d)\\rightarrow(\\mathbb{R}^{n\\times D},\\norm{\\cdot})$ is bi-Lipschitz. 
Furthermore, an upper Lipschitz constant is given by $s_1(A)$, the largest singular value of $A$.\n\\item Assume $A\\in\\mathbb{R}^{d\\times D}$ is such that the map \n${\\hat{\\beta}}_A$ is injective (i.e., a \"universal key\"). Then for almost any linear map $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$ the map ${\\hat{\\beta}}_{A,B}=B\\circ{\\hat{\\beta}}_A$ is\nbi-Lipschitz.\n\\end{enumerate}\n\\end{enumerate}\n\\end{thm}\n\nAn immediate consequence of this result is the following corollary whose proof is included in subsection \\ref{subsec4.4}:\n\\begin{cor}\n\\label{c0}\nLet $\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}^m$ induce a bi-Lipschitz embedding ${\\hat{\\beta}}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m$ of the\nmetric space $({\\widehat{\\Rnd}},d)$ into $(\\mathbb{R}^m,\\norm{\\cdot}_2)$. \n\\begin{enumerate}\n\\item For any continuous function $f:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ \ninvariant to row-permutation (i.e., $f(PX)=f(X)$ for every \n$X\\in\\mathbb{R}^{n\\times d}$ and $P\\in{\\mathcal S}_n$) there exists a continuous\nfunction $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ such that $f=g\\circ\\beta$.\nConversely, for any $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ continuous function, the \nfunction $f=g\\circ\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ is continuous\nand row-permutation invariant.\n\\item For any Lipschitz continuous function $f:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ \ninvariant to row-permutation (i.e., $f(PX)=f(X)$ for every \n$X\\in\\mathbb{R}^{n\\times d}$ and $P\\in{\\mathcal S}_n$) there exists a Lipschitz continuous\nfunction $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ such that $f=g\\circ\\beta$.\nConversely, for any $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ Lipschitz continuous function, the \nfunction $f=g\\circ\\beta:\\mathbb{R}^{n\\times d}\\rightarrow\\mathbb{R}$ is Lipschitz continuous\nand row-permutation invariant.\n\\end{enumerate}\n\\end{cor}\n\\vspace{5mm}\n\n\nThe structure of the paper is as follows. Section \\ref{sec2} contains the algebraic embedding method and encoders $\\alpha$ described at part (1) of Theorem \\ref{t1}. Corollary \\ref{cor2} contains part (1) of the main result stated above. Section \\ref{sec3} introduces the sorting based embedding procedure and describes the key-based encoder $\\beta$. Necessary and sufficient conditions for key universality are presented in Proposition\n\\ref{prop3.8}; the injectivity of the encoder described at part (2.a) of Theorem \\ref{t1} is proved in Theorem \\ref{t4}; the bi-Lipschitz property of any universal key described at part (2.b) of Theorem \\ref{t1} is shown in Theorem \\ref{t5}; the dimension reduction statement (2.c) of Theorem \\ref{t1}\nis included in Theorem \\ref{t6}. Proof of Corollary \\ref{c0} is presented in subsection \\ref{subsec4.4}. Section \\ref{sec4} contains applications to graph deep learning. These application use Graph Convolution Networks and the numerical experiments are carried out on two graph data sets: a chemical compound data set (QM9) and a protein data set (PROTEINS\\_FULL). \n\nWhile the motivation of this analysis is provided by graph deep learning applications,\nthis is primarily a mathematical paper. Accordingly the formal theory is presented first, and then is followed by the machine learning application. Those interested in the application (or motivation) can skip directly to Section \\ref{sec4}. \n\n{\\bf Notations}. For an integer $d\\geq 1$, $[d]=\\{1,2,\\ldots,d\\}$. 
For a matrix $X\\in{\\mathbb{R}}^{n\\times d}$,\n $x_1,\\ldots x_d\\in\\mathbb{R}^n$ denote its columns, $X=[x_1\\vert\\cdots\\vert x_d]$. All norms are Euclidean; for a matrix $X$, $\\norm{X}=\\sqrt{trace(X^TX)}=\\sqrt{\\sum_{k,j}|X_{k,j}|^2}$ denotes the Frobenius norm; for vectors $x$, $\\norm{x}=\\norm{x}_2=\\sqrt{\\sum_{j} |x_j|^2}$. \n \n\n\\subsection{Prior Works}\n\nSeveral methods for representing orbits of vector spaces under the action of permutation (sub)groups have been studied in literature. Here we describe some of these results, without claiming an exhaustive literature survey.\n\nA rich body of literature emanated from the early works on \nsymmetric polynomials and group invariant representations of \nHilbert, Noether, Klein and Frobenius. They are part of standard\ncommutative algebra and finite group representation theory. \n\nPrior works on permutation invariant mappings have predominantly employed some form of summing procedure, though some have alternatively employed some form of sorting procedure.\n\nThe idea of summing over the output nodes of an equivariant network has been well studied. \nThe algebraic invariant theory goes back to Hilbert and Noether (for finite groups) and then continuing with the continuous invariant function theory of \nWeyl and Wigner (for compact groups), \nwho posited that a generator function $\\psi:X\\rightarrow\\mathbb{R}$ gives rise to a function $E:X\\rightarrow\\mathbb{R}$ invariant to the action of a finite group $G$ on $X$, $(g,x)\\mapsto g.x$, via the averaging formula $E(x)=\\frac{1}{|G|}\\sum_{g\\in G} \\psi(g.x)$.\n\nMore recently, this approach provided the framework for universal approximation results of $G$-invariant functions. \\cite{maron2018invariant} showed that invariant or equivariant networks must satisfy a fixed point condition. The equivariant condition is naturally realized by GNNs. The invariance condition is realized by GNNs when followed by summation on the output layer, as was further shown in \\cite{keriven2019universal}, \\cite{pmlr-v97-maron19a} and \\cite{lipman2022}. Subsequently, \\cite{yarotsky2021universal} proved universal approximation results over compact sets for continuous functions invariant to the action of finite or continuous groups. In \\cite{geerts2022}, the authors\nobtained bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests by tensorizing the input-output mapping. \n\\cite{sannai2020universal} studied approximations of equivariant maps, while \\cite{NEURIPS2019_71ee911d} showed that if a GNN with sufficient expressivity is well trained, it can solve the graph isomorphism problem.\n\nThe authors of \\cite{OrderMatters_2015arXiv151106391V} designed an algorithm for processing sets with no natural orderings. The algorithm applies an attention mechanism to achieve permutation invariance with the attention keys being generated by a Long-Short Term Memory (LSTM) network. Attention mechanisms amount to a weighted summing and therefore can be considered to fall within the domain of summing based procedures.\n\nIn \\cite{GGsNN_2015arXiv151105493L}, the authors designed a permutation invariant mapping for graph embeddings. The mapping employs two separate neural networks, both applied over the feature set for each node. 
One neural network produces a set of new embeddings, the other serves as an attention mechanism to produce a weighed sum of those new embeddings.\n\n\n\n\nSorting based procedures for producing permutation invariant mappings over single dimensional inputs have been addressed and used by \\cite{deepsets}, notably in their {\\it max pooling} procedure.\n\nThe authors of \\cite{qi2017pointnet} developed a permutation\ninvariant mapping \n$pointnet$ for point sets that is based on a $max$ function. The mapping takes in a set of vectors, processes each vector through a neural network followed by an scalar output function, and takes the maximum of the resultant set of scalars.\n\nThe paper \\cite{zhang2018end} introduced {\\it SortPooling}. {\\it SortPooling} orders the latent embeddings of a graph according to the values in a specific, predetermined column. All rows of the latent embeddings are sorted according to the values in that column. While this gives rise to an injective map, it is easy to see it is not even continuous, let alone Lipschitz. The same issue\narises with any lexicographic ordering, including the well-known Weisfeiler-Leman embedding \\cite{wl}.\nOur paper introduces a novel method that bypasses this issue.\n\nAs shown in \\cite{pmlr-v97-maron19a}, the sum pooling-based GNNs provides universal approximations for of any permutation invariant continuous function but only on \\emph{compacts}. Our sorting based embedding removes the compactness restriction as well as it extends to all Lipschitz maps.\n\nWhile this paper is primarily mathematical in nature, methods developed here are applied to two graph data sets, QM9 and PROTEINS\\_FULL. Researchers have applied various graph deep learning techniques to both data sets. In particular, \\cite{Gilmer_2017arXiv170401212G} studied extensively the QM9 data set, and compared their method with many other algorithms\nproposed by that time.\n\n\\section{Algebraic Embeddings\\label{sec2}}\n\nThe algebraic embedding presented in this section can be thought of a special kernel to project equation (\\ref{eq:measure}) onto.\n\n\\subsection{Kernel Methods}\nThe kernel method employs a family of continuous kernels (test) functions, $\\{K(x;y)~;~x\\in\\mathbb{R}^d~,~y\\in Y\\}$ parametrized\/indexed by a set $Y$. \nThe measure representation $\\mu=a_{\\infty}(X)$ in (\\ref{eq:measure}) yields a nonlinear map\n\\[ \\alpha:\\mathbb{R}^{n\\times d} \\rightarrow C(Y)\n~~,~~X \\mapsto F(y)=\\int_{R^d} K(x;y)d\\mu \\]\ngiven by\n\\[ \\alpha(X)(y)= \\frac{1}{n}\\sum_{k=1}^n K(x_k;y) \\]\nThe embedding problem \\ref{prob1}) can be restated as follows. One is asked\nto find a finite family of kernels $\\{K(x;y)~;~x\\in\\mathbb{R}^d~,~y\\in Y\\}$, \n $m=|Y|$ so that\n\\begin{equation}\n\\label{eq:kernel}\n{\\hat{\\alpha}}:({\\widehat{\\Rnd}},d) \\rightarrow l^2(Y)\\sim (\\mathbb{R}^m,\\norm{\\cdot}_2) ~~,~~ ({\\hat{\\alpha}}(\\hat{X}))_y = \\frac{1}{n} \\sum_{k=1}^n K(x_k;y)\n\\end{equation}\nis injective, Lipschitz or bi-Lipschitz. \n\nTwo natural choices for the kernel $K$ are the Gaussian kernel and the complex exponential (or, the Fourier) kernel:\n\\[ K_{G}(x,y) = e^{-\\norm{x-y}^2\/\\sigma^2} ~~,\nK_{F}(x,y) = e^{2\\pi i \\ip{x}{y}}\n\\]\nwhere in both cases $Y\\subset\\mathbb{R}^d$. \nIn this paper we analyze a different kernel, namely the polynomial kernel $K_P(x,y)=x_1^{y_1}x_2^{y_2}\\cdots x_d^{y_d}$, $Y\\subset\\{0,1,2,\\ldots,n\\}^d$. 
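\n\nAs an illustration of the kernel method with the Gaussian kernel $K_G$, the following minimal NumPy sketch (hypothetical code, with the sampling points $y$ collected as the rows of a matrix) evaluates the map (\\ref{eq:kernel}) and checks its invariance to row permutations of $X$.\n\\begin{verbatim}\nimport numpy as np\n\ndef kernel_embedding(X, Y, sigma=1.0):\n    # (alpha-hat(X))_y = (1/n) * sum_k exp(-||x_k - y||^2 / sigma^2),\n    # evaluated at every row y of Y; the x_k are the rows of X\n    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)\n    return np.exp(-sq / sigma**2).mean(axis=0)\n\n# toy check of permutation invariance\nrng = np.random.default_rng(1)\nX = rng.normal(size=(6, 3))      # n = 6 points in R^d, d = 3\nY = rng.normal(size=(10, 3))     # m = 10 sampling points y\nP = rng.permutation(6)\nassert np.allclose(kernel_embedding(X, Y), kernel_embedding(X[P], Y))\n\\end{verbatim}\n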
\n\n\\subsection{The Polynomial Embedding}\n\nSince the polynomial representation is intimately related to the Hilbert-Noether algebraic invariants theory \\cite{compuinvar} and the Hilbert-Weyl theorem, it is advantageous to start our construction from a different perspective. \n\nThe linear space ${\\mathbb{R}}^{n\\times d}$ is isomorphic to $\\mathbb{R}^{nd}$ by stacking the columns one on top of each other. In this case, the action of the permutation group $S_n$ can be recast as the action of the subgroup $I_d\\otimes S_n$ of the bigger group $S_{nd}$ on $\\mathbb{R}^{nd}$. Specifically, let us denote by $\\sim_G$ the equivalence relation\n\\[ x,y\\in\\mathbb{R}^{nd}~~,~~x\\sim_G y \\Longleftrightarrow y=\\Pi x~,~{\\rm for ~ some}~\\Pi\\in G \\]\ninduced by a subgroup $G$ of $S_{nd}$. In the case\n $G=I_d\\otimes S_n=\\{diag_d(P)~,~P\\in S_n\\}$ of block diagonal permutation obtained by repeating $d$ times the same $P\\in S_n$ permutation along the main diagonal, two vectors $x,y\\in\\mathbb{R}^{nd}$ are $\\sim_G$ equivalent iff there is a permutation matrix $P\\in S_n$ so that $y(1+(k-1)n:kn) = Px(1+(k-1)n:kn)$ for each $1\\leq k\\leq d$. In other words, each disjoint $n$-subvectors in $y$ and $x$ are related by the same permutation. In this framework, the Hilbert-Weyl theorem (Theorem 4.2, Chapter XII, in \\cite{BifTheory2}) states that the ring of invariant polynomials is finitely generated. The G\\\"{o}bel's algorithm (Section 3.10.2 in \\cite{compuinvar}) provides a recipe to find a complete set of invariant polynomials. In the following we provide a direct approach to construct a complete set of polynomial invariants. \n \n Let $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,...,{\\bf x}_d]$ denote the algebra of polynomials in $d$-variables with real coefficients. \nLet us denote $X\\in{\\mathbb{R}}^{n\\times d}$ a generic data matrix.\nEach row of this matrix defines a \nlinear form over ${\\bf x}_1,...{\\bf x}_d$,\n $\\lambda_k = X_{k,1}{\\bf x}_1+\\cdots + X_{k,d}{\\bf x}_d$.\n Let us denote by $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d][{\\bf t}]$ the algebra of polynomials in variable ${\\bf t}$ with coefficients in the ring $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]$. Notice $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d][{\\bf t}]=\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$ \n by rearranging the terms according to degree in ${\\bf t}$. \n Thus $\\lambda_k\\in\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]\\subset\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d][{\\bf t}]$ can be encoded as zeros of a polynomial $P_X$ of degree $n$ in variable ${\\bf t}$ with coefficients in $\\mathbb{R}[{\\bf x}_1,\\ldots,{\\bf x}_d]$:\n \\begin{equation}\n \\label{eq:polyencoding}\n P_X({\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d) = \\prod_{k=1}^n ({\\bf t}-\\lambda_k({\\bf x}_1,\\ldots,{\\bf x}_d))\n =\\prod_{k=1}^n ({\\bf t}-X_{k,1}{\\bf x}_1-\\ldots -X_{k,d}{\\bf x}_d)\n \\end{equation}\n Due to identification $\\mathbb{R}[{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d][{\\bf t}]=\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$,\n we obtain that \\\\\n $P_X\\in \\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_d]$ is a homogeneous polynomial of degree $n$ in $d+1$ variables. Let $\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ denote the vector space of homogeneous polynomials in $d+1$ variables of degree $n$ with real coefficients. 
Notice the real dimension of this vector space is \n \\begin{equation}\n \\label{eq:dimRn}\n \\dim_\\mathbb{R} \\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d] = \\left( \n \\begin{array}{c}\n n+d \\\\\n d\n \\end{array}\n \\right) = \\left(\n \\begin{array}{c}\n n+d \\\\\n n\n \\end{array}\n \\right).\n \\end{equation}\nBy noting that $P_X$ is monic in ${\\bf t}$ (the coefficient of ${\\bf t}^n$ is always 1) we obtain an injective embedding of ${\\widehat{\\Rnd}}$ into $\\mathbb{R}^m$ with \n$m=\\dim_\\mathbb{R} \\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]-1$ via the coefficients of $P_X$ similar to (\\ref{eq:poly}). This is summarized in the following theorem:\n\\begin{thm}\n\\label{t2}\nThe map $\\alpha_0:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^{m-1}$ with $m=\\left(\\begin{array}{c}n+d \\\\ d \\end{array} \\right)$ given by the (non-trivial) coefficients of polynomial $P_X\\in\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ lifts to an analytic embedding ${\\hat{\\alpha}}_0$ of $({\\widehat{\\Rnd}},d)$ into $\\mathbb{R}^m$. Specifically, for $X\\in{\\mathbb{R}}^{n\\times d}$ expand the polynomial\n\\begin{equation} \\label{eq:PA}\nP_X({\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d) = \\prod_{k=1}^n ({\\bf t}-X_{k,1}{\\bf x}_1-\\ldots -X_{k,d}{\\bf x}_d)\n = {\\bf t}^n + \\hspace{-10mm}\\sum_{\\begin{array}{c}\n \\mbox{$p_0,p_1,...,p_d\\geq 0$} \\\\\n \\mbox{$p_0+\\cdots+p_d=n$} \\\\ \n \\mbox{$p_0 1$}\n \\end{array}\\right.\n\\end{equation}\nbe a Lipschitz monotone decreasing function with Lipschitz constant 1.\n\\begin{cor}\\label{cor2}\nConsider the map:\n\\begin{equation}\\label{eq:alpha1}\n\\alpha_1:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m~~,~~\n\\alpha_1(X) = \\left( \\begin{array}{c}\n\\mbox{$\\alpha_0\\bigg (\\varphi_0(\\norm{X})X \\bigg )$} \\\\\n\\mbox{$\\norm{X}$}\n\\end{array}\\right),\n\\end{equation}\nwith $m=\\left(\\begin{array}{c}n+d \\\\ d \\end{array} \\right)$.\nThe map $\\alpha_1$ lifts to an injective and globally Lipschitz map ${\\hat{\\alpha}}_1:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^m$ with Lipschitz constant $Lip({\\hat{\\alpha}}_1) \\leq \\sqrt{1+L_0^2}$.\n\\end{cor}\n{\\bf Proof}\n\nClearly $\\alpha_1(\\Pi X)=\\alpha_1(X)$ for any $\\Pi\\in{\\mathcal S}_n$ and $X\\in{\\mathbb{R}}^{n\\times d}$. Assume now that $\\alpha_1(X)=\\alpha_1(Y)$. Then $\\norm{X}=\\norm{Y}$ and since ${\\hat{\\alpha}}_0$ is injective on ${\\widehat{\\Rnd}}$ it follows $\\varphi(\\norm{X})X = \\Pi \\varphi(\\norm{Y})Y$ for some $\\Pi\\in{\\mathcal S}_n$. Thus $X\\sim Y$ which proves $\\alpha_1$ lifts to an injective map on ${\\widehat{\\Rnd}}$. \n\nNow we show ${\\hat{\\alpha}}_1$ is Lipschitz on $({\\widehat{\\Rnd}},d)$ of appropriate Lipschitz constant. Let $X,Y'\\in{\\mathbb{R}}^{n\\times d}$ and $\\Pi_0\\in{\\mathcal S}_n$ so that $d(\\hat{X},\\hat{Y'})=\\norm{X-\\Pi_0 Y'}$. Let $Y=\\Pi_0 Y'$ so that $d(\\hat{X},\\hat{Y})=\\norm{X-Y}$. \n\nChoose two matrices $X,Y\\in{\\mathbb{R}}^{n\\times d}$. We claim $\\norm{\\alpha_1(X)-\\alpha_1(Y)}\\leq \\sqrt{1+L_0^2}\n\\norm{X-Y}$.\nThis follows from two observations: \n\n(i) The map\n\\[ X \\mapsto \\rho(X):=\\varphi_0(\\norm{X})X \\]\nis the nearest-point map to (or, the metric projection map onto) the convex closed set $B_1({\\mathbb{R}}^{n\\times d})$. This means $\\norm{\\varphi_0(\\norm{X})X - Z}\\leq \\norm{X-Z}$ for any $Z\\in B_1({\\mathbb{R}}^{n\\times d})$. \n\n(ii) The nearest-point map to a convex closed subset of a Hilbert space is Lipschitz with constant 1, i.e. 
it shrinks distances, see \\cite{phelps56}.\n\nThese two observations yield:\n\\begin{multline*} \n\\norm{\\alpha_1(X)-\\alpha_1(Y)}^2 = \\norm{\\alpha_0(\\rho(X))\n- \\alpha_0(\\rho(Y))}^2 + |\\norm{X}-\\norm{Y}|^2 \\\\\n \\leq \nL_0^2 \\norm{\\rho(X)-\\rho(Y) }^2 + \\norm{X-Y}^2 \\leq (1+L_0^2)\\norm{X-Y}^2 .\n\\end{multline*}\nThis concludes the proof of this result. $\\qed$\n\\vspace{5mm}\n\nA simple modification of $\\varphi_0$ can produce a $C^\\infty$ map by smoothing it out around $x=1$.\n\nOn the other hand, the lower Lipschitz constant of ${\\hat{\\alpha}}_1$ is 0 due to terms of the form $X_{i,j}^k$ with $k\\geq 2$. \nIn \\cite{Cahill19}, the authors built a Lipschitz map by a retraction to the unit sphere instead of the unit ball. \nInspired by their construction, a modification of $\\alpha_0$ in that spirit reads:\n\\begin{equation}\n \\label{eq:alpha2}\n\\alpha_2:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^m~~,~~\n\\alpha_2(X)=\\left( \\begin{array}{c}\n\\mbox{$\\norm{X}\\alpha_0\\bigg ( \\frac{X}{\\norm{X}} \\bigg )$} \\\\\n\\mbox{$\\norm{X}$}\n\\end{array}\\right)~,~\\mbox{if } X\\neq 0~~,~\\mbox{and}~~\\alpha_2(0)=0.\n\\end{equation}\nIt is easy to see that $\\alpha_2$ satisfies the non-parallel property in \\cite{Cahill19} and is Lipschitz with a slightly better constant than $\\alpha_1$ (the constant is determined by the tangential derivatives of $\\alpha_0$). \nBut, for the same reasons as in \\cite{Cahill19}, this map is not bi-Lipschitz. \n\n\\subsection{Dimension reduction in the case $d=2$ and consequences}\n\nIn this subsection we analyze the case $d=2$. \nThe embedding dimension for $\\alpha_0$ is $\\left( \\begin{array}{c} n+2 \\\\ 2 \\end{array}\\right)-1=\\frac{(n+1)(n+2)}{2}-1$. \nOn the other hand, consider the following approach. \nEach row of $X$ defines a complex number $z_1=X_{1,1}+i\\,X_{1,2}$, ... , $z_n=X_{n,1}+i\\,X_{n,2}$, and these numbers\ncan be encoded by one polynomial of degree $n$ with complex coefficients $Q\\in\\mathbb{C}_n[t]$,\n\\[ Q({\\bf t}) = \\prod_{k=1}^n ({\\bf t}-z_k) = {\\bf t}^n + \\sum_{k=0}^{n-1}\n{\\bf t}^k q_k \\]\nThe coefficients of $Q$ provide a $2n$-dimensional real embedding $\\zeta_0$,\n\\[ \\zeta_0:\\mathbb{R}^{n\\times 2}\\rightarrow\\mathbb{R}^{2n}~~,~~\\zeta_0(X)=(Re(q_{n-1}),Im(q_{n-1}),\\ldots,Re(q_{0}),Im(q_0)) \\]\nwith properties similar to those of $\\alpha_0$. \nOne can similarly modify this embedding to obtain a globally Lipschitz embedding $\\hat{\\zeta}_1$ of $\\hat{R_{n,2}}$ \ninto $\\mathbb{R}^{2n+1}$. \n\nIt is instructive to recast this embedding in the framework of commutative algebras. Indeed, let $\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle$ denote\nthe ideal generated by the polynomials ${\\bf x}_1-1$ and ${\\bf x}_2^2+1$\nin the algebra $\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]$. Consider the quotient space\n $\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle$ and the quotient map\n $\\sigma:\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\\rightarrow \\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle{\\bf x}_1-1,{\\bf x}_2^2+1\\rangle$.\n In particular, let $S=\\sigma(\\mathbb{R}_n[{\\bf t},{\\bf x}_1,{\\bf x}_2])$ denote the vector space projected through this quotient map.\nThen a basis for $S$ is given by $\\{1,{\\bf t},\\ldots,{\\bf t}^n,{\\bf x}_2,{\\bf x}_2 {\\bf t},\\ldots,{\\bf x}_2 {\\bf t}^{n-1},{\\bf x}_2 {\\bf t}^n\\}$. Thus $\\dim S=2n+2$. 
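\n\nBefore continuing, here is a minimal NumPy sketch of the map $\\zeta_0$ above (hypothetical code, not part of the construction): the rows of $X$ are read as complex numbers, and the coefficients of $Q$ are obtained from its roots.\n\\begin{verbatim}\nimport numpy as np\n\ndef zeta0(X):\n    # rows of X in R^{n x 2} -> complex roots z_k = X[k,0] + i*X[k,1];\n    # return real/imaginary parts of the non-leading coefficients of\n    # Q(t) = prod_k (t - z_k), a vector in R^{2n}\n    z = X[:, 0] + 1j * X[:, 1]\n    coeffs = np.poly(z)[1:]          # drop the leading coefficient 1 of t^n\n    return np.column_stack([coeffs.real, coeffs.imag]).ravel()\n\n# toy check: zeta0 is invariant to row permutations of X\nrng = np.random.default_rng(2)\nX = rng.normal(size=(5, 2))\nP = rng.permutation(5)\nassert np.allclose(zeta0(X), zeta0(X[P]))\n\\end{verbatim}\n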
\nLet \n$\\mathfrak{S}=\\{P_X~,~X\\in\\mathbb{R}^{n\\times 2} \\}\\subset\\mathbb{R}_n[{\\bf t},{\\bf x}_1,{\\bf x}_2]$ \ndenote the set of polynomials realizable as in (\\ref{eq:PA}).\nThen the fact that $\\hat{\\zeta}_0$ \nis injective on $\\hat{R_{n,2}}$ is equivalent to the fact that $\\sigma{\\vert}_{\\mathfrak{S}}:\\mathfrak{S}\\rightarrow S$ is injective.\nOn the other hand note \n\\[\n\\sigma(\\mathfrak{S})\\subset {\\bf t}^n+\n{\\rm span}_\\mathbb{R}\\{1,{\\bf t},\\ldots,{\\bf t}^{n-1},{\\bf x}_2,{\\bf x}_2 {\\bf t}, \\ldots,{\\bf x}_2 {\\bf t}^{n-1} \\} \\]\nwhere the last linear subspace is of dimension $2n$. \n\nIn the case $d=2$ we obtain the identification\n$\\mathbb{R}[{\\bf t},{\\bf x}_1,{\\bf x}_2]\/\\langle {\\bf x}_1-1,{\\bf x}_2^2+1 \\rangle = \\mathbb{C}[{\\bf t}]$ due to the uniqueness of polynomial factorization.\n\nThis observation raises the following {\\em open problem}:\n\nFor $d>2$, is there a non-trivial ideal \n$I=\\langle Q_1,\\ldots,Q_r \\rangle \\subset\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$\nso that the restriction $\\sigma{\\vert}_{\\mathfrak{S}}$\nof the quotient map $\\sigma:\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]\\rightarrow\n\\mathbb{R}[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]\/I$ is injective? Here $\\mathfrak{S}$ denotes the set of polynomials in $\\mathbb{R}_n[{\\bf t},{\\bf x}_1,\\ldots,{\\bf x}_d]$ realizable via (\\ref{eq:PA}).\n\\begin{rmk}\nOne may ask whether the quaternions can be\nutilized in the case $d=4$. While the quaternions form an associative division algebra, unfortunately polynomials have in general an infinite number of factorizations. This prevents an immediate extension of the previous construction to the case $d=4$. \n\\end{rmk}\n\n\\begin{rmk}\nSimilar to the construction in \\cite{Cahill19}, a linear dimension reduction technique may be applicable here (which, in fact, may answer the open problem above) and would reduce the embedding dimension to $m=2nd+1$ (twice the intrinsic dimension plus one for the homogenization variable). \nHowever we did not explore this approach since, even if possible, it would not produce a bi-Lipschitz embedding. \nInstead we analyze the linear dimension reduction technique in the next section in the context of sorting based embeddings. \n\\end{rmk}\n\n\\section{Sorting based Embedding\\label{sec3}}\n\nIn this section we present the extension of the sorting embedding (\\ref{eq:ord}) to the case $d>1$.\n\nThe embedding is performed by a linear-nonlinear transformation that resembles the phase retrieval problem. \nConsider a matrix $A\\in\\mathbb{R}^{d\\times D}$ and the induced nonlinear \ntransformation:\n\n\\begin{equation}\n\\label{eq:qA}\n\\beta_A:{\\mathbb{R}}^{n\\times d}\\rightarrow\\mathbb{R}^{n\\times D}~~,~~\\beta_A(X)=\\downarrow (XA)\n\\end{equation}\nwhere $\\downarrow$ is the monotone decreasing sorting operator acting in each column independently. Specifically, let \n$Y=XA\\in\\mathbb{R}^{n\\times D}$ and denote its column vectors\n$Y=[y_1,y_2,\\ldots,y_D]$. Then \n\\[ \\beta_A(X)=\\left[ \\begin{array}{cccc}\n\\mbox{$\\Pi_1 y_1$} & \\mbox{$\\Pi_2 y_2$} & \\cdots & \\mbox{$\\Pi_D y_D$}\n\\end{array} \\right] \\]\nfor some $\\Pi_1,\\Pi_2,\\ldots,\\Pi_D\\in{\\mathcal S}_n$ so that each column is sorted monotonically decreasing:\n\\[ (\\Pi_k y_k)_1\\geq (\\Pi_k y_k)_2\\geq \\cdots\\geq (\\Pi_k y_k)_n. 
\\]\nNote the obvious invariance $\\beta_A(\\Pi X)=\\beta_A(X)$ for any $\\Pi\\in{\\mathcal S}_n$ and $X\\in{\\mathbb{R}}^{n\\times d}$. Hence $\\beta_A$ \nlifts to a map $\\hat{\\beta_A}$ on ${\\widehat{\\Rnd}}$. \n\\begin{rmk}\nNotice the similarity to the phase retrieval problem, e.g., \\cite{balan16}, where the data is obtained via a linear transformation of the \ninput signal followed by the nonlinear operation of taking the absolute value of the frame coefficients. Here the nonlinear transformation is implemented by sorting the coefficients. \nIn both cases the information lost by the nonlinearity corresponds to the action of a particular\nsubgroup of the unitary group. \n\\end{rmk}\n\n\nIn this section we analyze necessary and sufficient conditions so that maps of type (\\ref{eq:qA}) are injective, or injective almost everywhere. \nFirst, a few definitions.\n\n\\begin{defn}\nA matrix $A\\in\\mathbb{R}^{d\\times D}$ is called a \\emph{universal key} (for ${\\mathbb{R}}^{n\\times d}$) if $\\hat{\\beta_A}$ is injective\non ${\\widehat{\\Rnd}}$.\n\\end{defn}\nIn general we refer to $A$ as a {\\em key} for the encoder $\\beta_A$. \n\\begin{defn}\nFix a matrix $X\\in{\\mathbb{R}}^{n\\times d}$. A matrix $A\\in\\mathbb{R}^{d\\times D}$ is said to be \\emph{admissible} (or an {\\em admissible key}) for $X$ if, for any $Y\\in{\\mathbb{R}}^{n\\times d}$ with $\\beta_A(X)=\\beta_A(Y)$, it follows that $Y=\\Pi X$ for some $\\Pi\\in{\\mathcal S}_n$. \n\\end{defn}\nIn other words, $\\hat{\\beta_A}^{-1}(\\hat{\\beta_A}(\\hat{X}))=\\{\\hat{X}\\}$.\nWe let ${\\mathcal{A}}_{D}(X)$, or simply ${\\mathcal{A}}(X)$, denote the set of admissible keys for $X$. \n\\begin{defn}\nFix $A\\in\\mathbb{R}^{d\\times D}$. A matrix $X\\in{\\mathbb{R}}^{n\\times d}$ is said to be {\\em separated} by $A$ if $A\\in{\\mathcal{A}}(X)$.\n\\end{defn}\nFor a key $A$, we let $\\mathfrak{S}_{n}(A)$, or simply $\\mathfrak{S}(A)$, denote the set of {\\em matrices separated by $A$}. Thus a matrix $X\\in\\mathfrak{S}_n(A)$ if and only if, for any matrix $Y\\in\\mathbb{R}^{n\\times d}$, if $\\beta_A(X)=\\beta_A(Y)$ then $X\\sim Y$.\n\nThus a key $A$ is universal if and only if $\\mathfrak{S}_n(A)={\\mathbb{R}}^{n\\times d}$.\n\nOur goal is to produce keys that are admissible for all matrices in ${\\mathbb{R}}^{n\\times d}$, or at least for almost every data matrix.\nAs we show in Proposition \\ref{prop3.6} below, this requires that $D\\geq d$ and that $A$ have full rank. In particular, this means that the columns of $A$ form a frame for $\\mathbb{R}^d$. \n\n\\subsection{Characterizations of ${\\mathcal{A}}(X)$ and $\\mathfrak{S}(A)$}\n\nWe start off with simple linear manipulations of sets of admissible keys and separated data matrices.\n\n\\begin{prop}\\label{prop3.5}\nFix $A\\in\\mathbb{R}^{d\\times D}$ and $X\\in{\\mathbb{R}}^{n\\times d}$.\n\\begin{enumerate}\n \\item For an invertible $d\\times d$ matrix $T\\in\\mathbb{R}^{d\\times d}$,\n \\begin{equation}\\label{eq:TA}\n \\mathfrak{S}_n(TA) = \\mathfrak{S}_n(A)T^{-1}.\n \\end{equation}\n In other words, if $X$ is separated by $A$ then $XT^{-1}$ is separated by $TA$.\n \n \\item For any permutation matrix $L\\in\\SS_D$ and diagonal invertible matrix $\\Lambda\\in\\mathbb{R}^{D\\times D}$,\n \\begin{equation}\\label{eq:AL}\n \\mathfrak{S}_n(AL\\Lambda)=\\mathfrak{S}_n(A\\Lambda L) = \\mathfrak{S}_n(A).\n \\end{equation}\n In other words, if $X$ is separated by $A$ then $X$ is separated also by $AL\\Lambda$ as well as by $A\\Lambda L$.\n \n \\item Assume $T\\in\\mathbb{R}^{d\\times d}$ is an invertible matrix. 
Then\n \\begin{equation}\\label{eq:XT}\n {\\mathcal{A}}_D(XT)=T^{-1}{\\mathcal{A}}_D(X).\n \\end{equation}\n In other words, if $A$ is an admissible key for $X$ then $T^{-1}A$ is an admissible key for $XT$.\n\\end{enumerate}\n\\end{prop}\n{\\bf Proof}\n\nThe proof is immediate, but we include it here for the convenience of the reader. \n\n(1) Denote $B=TA$. Let $Y\\in\\mathbb{R}^{n\\times d}$. Then\n\\[ \\beta_B(Y)=\\beta_B(X) \\Longleftrightarrow \\downarrow(XB)=\\downarrow(YB) \\Longleftrightarrow \\downarrow(XTA)=\\downarrow(YTA)\n\\Longleftrightarrow\\beta_A(XT)=\\beta_A(YT). \\]\nThus, if $X\\in\\mathfrak{S}_n(A)$ and $Y'\\in{\\mathbb{R}}^{n\\times d}$ is so that $\\beta_B(Y')=\\beta_B(X')$ with $X'=XT^{-1}$, then $\\beta_A(Y'T)=\\beta_A(X)$. Therefore there exists $\\Pi\\in{\\mathcal S}_n$ so that $Y'T=\\Pi X$. Thus $Y'\\sim X'$. Hence $X'\\in\\mathfrak{S}_n(B)$.\nThis shows $\\mathfrak{S}_n(A)T^{-1}\\subset \\mathfrak{S}_n(TA)$. The reverse inclusion follows by replacing $A$ with $TA$ and $T$ with $T^{-1}$. Together they prove (\\ref{eq:TA}).\n\n(2) Let $Y\\in{\\mathbb{R}}^{n\\times d}$ such that $\\beta_{AL\\Lambda}(X)=\\beta_{AL\\Lambda}(Y)$. \nFor every $1\\leq j\\leq D$ let $k\\in [D]$ be so that $L_{jk}=1$. \n\nIf $\\Lambda_{kk}>0$ then $\\downarrow((XA)_j)=\\downarrow((YA)_j)$. \n\nIf $\\Lambda_{kk}<0$ then $\\downarrow(-(XA)_j)=\\downarrow(-(YA)_j)$.\nBut this implies also $\\downarrow((XA)_j)=\\downarrow((YA)_j)$ since\n$\\downarrow(-z)=-L_0\\downarrow(z)$ where $L_0$ is the permutation matrix that has 1 on its main antidiagonal.\n\nEither way, $\\downarrow((XA)_j)=\\downarrow((YA)_j)$. Hence \n$\\downarrow(XA)=\\downarrow(YA)$. Therefore $X\\sim Y$ and thus $X\\in \\mathfrak{S}_n(AL\\Lambda)$. This shows $\\mathfrak{S}_n(A)\\subset\\mathfrak{S}_n(AL\\Lambda)$.\nThe reverse inclusion follows by a similar argument.\nFinally, notice that matrices of the form $L\\Lambda$ form a group, since $L^{-1}\\Lambda L$ is also a diagonal matrix. This shows $\\mathfrak{S}_n(A\\Lambda L)=\\mathfrak{S}(AL\\Lambda')$ for some diagonal matrix $\\Lambda'$, and the conclusion (\\ref{eq:AL}) then follows.\n\n(3) The relation (\\ref{eq:XT}) follows from noticing $\\beta_{T^{-1}A}(Y)=\\beta_A(YT^{-1})$. $\\qed$\n\nRelation (\\ref{eq:AL}) shows that, since $A$ is assumed full rank, without loss of generality we can assume that the first $d$ columns of $A$ are linearly independent. Let $V$ denote the first $d$ columns of $A$ so that\n\\begin{equation}\\label{eq:AA}\nA = V\\left[ \\begin{array} {ccc}\n\\mbox{$I$} & \\mbox{$\\vert$} & \\mbox{$\\tilde{A}$} \n\\end{array} \\right] \n\\end{equation}\nwhere $\\tilde{A}\\in\\mathbb{R}^{d\\times (D-d)}$. \nThe following result shows that, unsurprisingly, when $D=d>1$, almost every matrix $X$ is not separated by $A$. By Proposition \\ref{prop3.5} we can reduce the analysis to the case $A=I$ by a change of coordinates.\n\\begin{prop}\\label{prop3.6}\nAssume $D=d>1$, $n>1$. 
Then\n\\begin{enumerate}\n \\item The set of data matrices not separated by $I_d$ includes:\n\\begin{equation}\\label{eq:SA1}\n \\mathbb{B}:=\\{ X\\in\\mathbb{R}^{n\\times d}~,~\\exists i,j,k,l~,~ 1\\leq i0$ and $b_0>0$ so that\n for all $X,Y\\in{\\mathbb{R}}^{n\\times d}$,\n \\begin{equation}\n \\label{eq:Lipbeta2}\n a_0\\, d(\\hat{X},\\hat{Y})\\leq \\norm{\\beta_A(X)-\\beta_A(Y)}\\leq b_0\\,\n d(\\hat{X},\\hat{Y})\n \\end{equation}\n where all are Frobenius norms.\n Furthermore, an estimate for $b_0$ is provided by the largest singular value of $A$, $b_0= s_1(A)$.\n\\end{thm}\n\n{\\bf Proof}\n\nThe upper bound in (\\ref{eq:Lipbeta2}) follows as in the proof of Theorem \\ref{t4}, from equations (\\ref{eq:betaA}) and (\\ref{eq:betaAA}). Notice that\nno property is assumed in order to obtain the upper Lipschitz bound.\n\nThe lower bound in (\\ref{eq:Lipbeta2}) is more difficult. \nIt is shown by contradiction following the strategy \nutilized in the \nComplex Phase Retrieval problem \\cite{balazou}.\n\nAssume $\\inf_{X\\not\\sim Y}\\frac{\\norm{\\beta_A(X)-\\beta_A(Y)}_2^2}{d(\\hat{X},\\hat{Y})^2}=0$. \n\n{\\em Step 1: Reduction to local analysis.} \nSince $d(\\hat{tX},\\hat{tY})=t\\,d(\\hat{X},\\hat{Y})$ for all $t>0$, the \nquotient $\\frac{\\norm{\\beta_A(X)-\\beta_A(Y)}_2}{d(\\hat{X},\\hat{Y})}$ \nis scale invariant. Therefore, there are sequences $(X^t)_t,(Y^t)_t$\nwith $\\norm{Y^t}\\leq\\norm{X^t}=1$ and $d(\\hat{X^t},\\hat{Y^t})>0$ so that\n$\\lim_{t\\rightarrow\\infty} \\frac{\\norm{\\beta_A(X^t)-\\beta_A(Y^t)}_2}{d(\\hat{X^t},\\hat{Y^t})} = 0$.\nBy compactness of the closed unit ball, one can extract convergent subsequences. For ease of notation, assume $(X^t)_t,(Y^t)_t$ are\nthese subsequences. Let ${X^{\\infty}}=\\lim_t X^t$ and ${Y^{\\infty}} = \\lim_t Y^t$ denote their limits. Notice $\\lim_t \\norm{\\beta_A(X^t)-\\beta_A(Y^t)}_2=0$.\nThis implies $\\norm{\\beta_A({X^{\\infty}})-\\beta_A({Y^{\\infty}})}=0$ and thus $\\beta_A({X^{\\infty}})=\\beta_A({Y^{\\infty}})$. Since $\\widehat{\\beta_A}$ is assumed injective, it follows that $\\widehat{{X^{\\infty}}}=\\widehat{{Y^{\\infty}}}$. \n\nThis means that, if the lower Lipschitz bound vanishes, then this \nis achieved by vanishing of a local lower Lipschitz bound. To follow the terminology in \\cite{balazou}, the type I local lower Lipschitz bound vanishes at some \n$Z_0\\in{\\mathbb{R}}^{n\\times d}$, with $\\norm{Z_0}=1$:\n\\begin{equation}\n \\label{eq:lb}\n{A}(Z_0):= \\lim_{r\\rightarrow 0} \\inf_{\n\\begin{array}{c}\n\\hat{X}\\neq\\hat{Y} \\\\\nd(\\hat{X},\\hat{Z_0})<r \\\\\nd(\\hat{Y},\\hat{Z_0})<r\n\\end{array}} \\frac{\\norm{\\beta_A(X)-\\beta_A(Y)}_2^2}{d(\\hat{X},\\hat{Y})^2} = 0 .\n\\end{equation}\n\n{\\em Step 2: Local analysis at $Z_0$.}\nLet $G=\\{P\\in{\\mathcal S}_n~,~PZ_0=Z_0\\}$ denote the stabilizer group of $Z_0$ and, for every $j\\in[D]$, let $H_j=\\{P\\in{\\mathcal S}_n~,~PZ_0a_j=Z_0a_j\\}\\supset G$, where $a_1,\\ldots,a_D$ denote the columns of $A$. Set $\\delta_0=\\min_{P\\in{\\mathcal S}_n\\setminus G}\\norm{(I_n-P)Z_0}$; note that $\\delta_0>0$ by the definition of $G$.\n\nConsider $X=Z_0+U$ and $Y=Z_0+V$ where $U,V\\in{\\mathbb{R}}^{n\\times d}$ are ``aligned'' in the sense that $d(\\hat{X},\\hat{Y})=\\norm{U-V}$. This property requires that $\\norm{U-V}\\leq\\norm{PX-Y}$, for every $P\\in{\\mathcal S}_n$. \nThe next result replaces this condition with equivalent requirements involving only $(U,V)$ and the group $G$.\n\\begin{lem}\n\\label{l3.1}\nAssume $\\norm{U},\\norm{V}<\\frac{1}{4}\\delta_0$, where $\\delta_0=\\min_{P\\in{\\mathcal S}_n\\setminus G} \\norm{(I_n-P)Z_0}$. Let $X=Z_0+U$, $Y=Z_0+V$. 
\nThen:\n\\begin{enumerate}\n \\item $d(\\hat{X},\\hat{Z_0})=\\norm{U}$ and $d(\\hat{Y},\\hat{Z_0})=\\norm{V}$.\n \\item $d(\\hat{X},\\hat{Y})=\\min_{P\\in G}\\norm{U-PV}=\\min_{P\\in G}\\norm{PU-V}$.\n \\item The following\nare equivalent:\n\\begin{enumerate}\n \\item $d(\\hat{X},\\hat{Y})=\\norm{U-V}$.\n \\item For every $P\\in G$, $\\norm{U-V}\\leq \\norm{PU-V}$.\n \\item For every $P\\in G$, $\\ip{U}{V}\\geq \\ip{PU}{V}$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{lem}\n{\\bf Proof of Lemma \\ref{l3.1}}.\n(1)\n\nNote that if $U=0$ then the claim follows. Assume $U\\neq 0$. Then\n\\[ d(\\hat{X},\\hat{Z_0})=\\min_{P\\in{\\mathcal S}_n} \\norm{X-PZ_0}\n= \\min_{P\\in {\\mathcal S}_n}\\norm{(I_n-P)Z_0 + U}\\leq \\norm{U} \\]\nOn the other hand, assume the minimum is achieved for a permutation $P_0\\in{\\mathcal S}_n$. If $P_0\\in G$ then\n$d(\\hat{X},\\hat{Z_0})=\\norm{(I_n-P_0)Z_0+U}=\\norm{U}$. If $P_0\\not\\in G$ then \n\\[ d(\\hat{X},\\hat{Z_0})\\geq \\norm{(I_n-P_0)Z_0}-\\norm{U}>\\frac{3\\delta_0}{4}>\\norm{U}\\geq d(\\hat{X},\\hat{Z_0}) \\]\nwhich yields a contradiction.\nHence $d(\\hat{X},\\hat{Z_0})=\\norm{U}$. Similarly, one shows $d(\\hat{Y},\\hat{Z_0})=\\norm{V}$. \n\n(2) Clearly\n\\[ d(\\hat{X},\\hat{Y})=\\min_{P\\in{\\mathcal S}_n}\\norm{PX-Y}\\leq \\min_{P\\in G}\\norm{PX-Y}=\\min_{P\\in G}\\norm{PU-V} \\]\nOn the other hand, for $P\\in{\\mathcal S}_n\\setminus G$ and $Q\\in G$,\n\\[ \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\\geq \\norm{(I_n-P)Z_0} - \\norm{U}-\\norm{V}\\geq \\]\n\\[ \\geq \\delta_0\n-2\\norm{U}-2\\norm{V}+\\norm{QU-V}\\geq \\min_{Q\\in G}\\norm{QU-V}\\geq d(\\hat{X},\\hat{Y}). \\]\n\n(3)\n\n(a)$\\Rightarrow$(b).\n\nIf $d(\\hat{X},\\hat{Y})=\\norm{U-V}$ then\n\\[ \\norm{U-V}\\leq \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\n~~,~~\\forall P\\in {\\mathcal S}_n. \\]\nIn particular, for $P\\in G$, $(P-I_n)Z_0=0$ and\nthe above inequality reduces to (b).\n\n(b)$\\Rightarrow$(a).\n\nAssume (b). For $P\\in G$,\n\\[ \\norm{U-V}=\\norm{X-Y}\\leq\\norm{PU-V}=\\norm{PX-Y}. \\]\nFor $P\\in{\\mathcal S}_n\\setminus G$,\n\\[ \\norm{PX-Y}=\\norm{(P-I_n)Z_0 + PU-V}\\geq \\norm{(I_n-P)Z_0} - \\norm{U}-\\norm{V}\\geq \\]\n\\[ \\geq \\delta_0\n-2\\norm{U}-2\\norm{V}+\\norm{U-V}\\geq \\norm{U-V}=\\norm{X-Y}. \\]\nThis shows $d(\\hat{X},\\hat{Y})=\\norm{X-Y}=\\norm{U-V}$.\n\n(b)$\\Longleftrightarrow$(c). This is immediate after squaring (b) and simplifying the terms.\n\n$\\Box$\n\\vspace{3mm}\n\nConsider now sequences $(\\hat{X^t})_t,(\\hat{Y^t})_t$ that converge to $\\hat{Z_0}$ \nand achieve the lower bound 0 as in (\\ref{eq:lb}).\nChoose\nrepresentatives $X_t$ and $Y_t$ in their equivalence classes that satisfy the hypothesis of Lemma \\ref{l3.1} so that $X_t=Z_0+U_t$, $Y_t=Z_0+V_t$, $\\norm{U_t},\\norm{V_t}<\\frac{1}{4}\\delta_0$,\n $d(\\hat{X_t},\\hat{Z_0})=\\norm{U_t}$, $d(\\hat{Y_t},\\hat{Z_0})=\\norm{V_t}$ and $d(\\hat{X_t},\\hat{Y_t})=\\norm{U_t-V_t}>0$.\n With $A=[a_1 |\\cdots|a_D]$ we obtain:\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 = \\sum_{j=1}^D \\norm{\\downarrow(X_t a_j)-\\downarrow(Y_t a_j)}_2^2 =\n\\sum_{j=1}^D \\norm{(Z_0+U_t)a_j-\\Pi_{j,t}(Z_0+V_t)a_j}_2^2 \\]\nfor some $\\Pi_{j,t}\\in{\\mathcal S}_n$. In fact $\\Pi_{j,t}\\in \\mathrm{argmin}_{\\Pi\\in H_j}\\norm{(U_t-\\Pi V_t)a_j}_2$. \nPass to sub-sequences (that will be indexed by $t$ for an easier notation) so that $\\Pi_{j,t}=\\Pi_j$ for some $\\Pi_j\\in{\\mathcal S}_n$. 
Thus\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 =\n\\sum_{j=1}^D \\norm{(I_n-\\Pi_j)Z_0a_j + (U_t-\\Pi_j V_t)a_j}_2^2 \\]\nSince the above sequence must converge to $0$ as $t\\rightarrow\\infty$, while $U_t,V_t\\rightarrow 0$, it follows that necessarily $\\Pi_j\\in H_j$ and the\nexpressions simplify to\n\\[ \\norm{\\beta_A(X_t)-\\beta_A(Y_t)}_2^2 =\n\\sum_{j=1}^D \\norm{(U_t-\\Pi_j V_t)a_j}_2^2 \\]\nThus equation (\\ref{eq:lb}) implies that for\nevery $j\\in[D]$,\n\\begin{equation}\n\\label{eq:lb2}\n\\lim_{t\\rightarrow\\infty} \\frac{\\norm{(U_t-\\Pi_j V_t)a_j}_2^2}{\\norm{U_t-V_t}^2} = 0\n\\end{equation} \nwhere $\\Pi_j\\in H_j$, $\\norm{U_t},\\norm{V_t}\\rightarrow 0$, and $U_t,V_t$ are aligned so that $\\ip{U_t}{V_t}\\geq \\ip{PU_t}{V_t}$ for every $P\\in G$.\nEquivalently, relation (\\ref{eq:lb}) can be restated as:\n\\begin{equation}\n \\label{eq:opt2}\n \\inf_{\\begin{array}{c} U,V\\in{\\mathbb{R}}^{n\\times d} \\\\ s.t. \\\\\n U\\neq V \\\\\n \\ip{U}{V}\\geq \\ip{PU}{V} , \\forall P\\in G\n \\end{array} } \\frac{\\sum_{j=1}^D \\norm{(U-\\Pi_j V)a_j}_2^2}{\\norm{U-V}^2} = 0\n\\end{equation}\nfor some permutations $\\Pi_j\\in H_j$, $j\\in[D]$.\nBy Lemma \\ref{l3.1} the constraint in the optimization problem above implies $\\norm{U-V}=\\min_{P\\in G}\\norm{U-PV}$. Hence (\\ref{eq:opt2}) implies:\n\\begin{equation}\n \\label{eq:opt3}\n \\inf_{\\begin{array}{c} U,V\\in{\\mathbb{R}}^{n\\times d} \\\\ s.t. \\\\\n U\\neq P V , \\forall P\\in G\n \\end{array} }\n \\max_{P\\in G} \\frac{\\sum_{j=1}^D \\norm{(U-\\Pi_j V)a_j}_2^2}{\\norm{U-P V}^2} = 0\n\\end{equation}\nfor the same permutation matrices $\\Pi_j$.\nWhile the above optimization problem seems to be a relaxation of (\\ref{eq:opt2}), in fact (\\ref{eq:opt3}) implies (\\ref{eq:opt2})\nwith a possible change of the permutation matrices $\\Pi_j$, which \nstill remain in $H_j$.\n\\vspace{5mm}\n\n\n{\\em Step 3. Existence of a Minimizer.} \n \n\n\nThe optimization problem (\\ref{eq:opt2}) is a Quadratically Constrained Ratio of Quadratics (QCRQ) optimization problem. A significant number of papers \nhave been published on this topic \\cite{teb06,teb10}. \nIn particular, \\cite{QCRQbook} presents\na formal setup for the analysis of QCRQ problems. \nOur interest is to utilize some of these techniques in order to establish the existence of a minimizer for (\\ref{eq:opt2}) or (\\ref{eq:opt3}). Specifically, we show:\n\\begin{lem}\\label{l3.2}\nAssume the key $A$ has linearly independent rows (equivalently, the columns of $A$ form a frame for $\\mathbb{R}^d$) and the lower Lipschitz bound of ${\\hat{\\beta}}_A$ is $0$. Then there are $\\tilde{U},\\tilde{V}\\in{\\mathbb{R}}^{n\\times d}$ so that:\n\\begin{enumerate}\n \\item $\\tilde{U}\\neq P \\tilde{V}$, for every $P\\in G$;\n \\item For every $j\\in[D]$, $(\\tilde{U}-\\Pi_j \\tilde{V})a_j=0$.\n\\end{enumerate}\n\\end{lem}\n{\\bf Proof of Lemma \\ref{l3.2}}\n\n\n\nWe start with the formulation (\\ref{eq:opt3}). Therefore there are sequences \n$(U_t,V_t)_{t\\geq 1}$ so that $U_t\\neq PV_t$ for all $P\\in G$ and $t\\geq 1$, and yet for any $P\\in G$,\n\\[ \\lim_{t\\rightarrow\\infty} \\frac{\\sum_{j=1}^D\\norm{(U_t-\\Pi_j V_t)a_j}_2^2}{\\norm{U_t-P V_t}^2} = 0. 
\\]\nLet $E=\\{(U,V)\\in{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}~,~(U-\\Pi_j V)a_j=0~,~\\forall j\\in[D]\\}$ denote the null space of the linear operator \n\\[ T:{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}\\rightarrow \\mathbb{R}^{n\\times D}~,~(U,V)\\mapsto \\left[\\begin{array}{ccccc}\n(U-\\Pi_1 V)a_1 & \\vert & \\cdots & \\vert & (U-\\Pi_D V)a_D\n\\end{array}\\right],\n\\]\nassociated to the numerator of the above quotient. Let $F_P=\\{(U,V)\\in{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}~,~U-PV=0\\}$ be the null space of the linear operator \n\\[ R_P:{\\mathbb{R}}^{n\\times d}\\times{\\mathbb{R}}^{n\\times d}\\rightarrow {\\mathbb{R}}^{n\\times d}~,~(U,V)\\mapsto U-P V. \\]\nA consequence of (\\ref{eq:opt3}) is that for every $P\\in G$, $E\\setminus F_P\\neq\\emptyset$. \nIn particular, $F_P\\cap E$ is a subspace of $E$ of positive codimension. Using the \nBaire category theorem (or more elementary linear algebra arguments), we conclude that\n\\[ E\\setminus \\left(\\cup_{P\\in G} F_P\\right) \\neq \\emptyset. \\]\nLet $(\\tilde{U},\\tilde{V})\\in E\\setminus\\left(\\cup_{P\\in G}F_P\\right)$. This pair satisfies the\nconclusions of Lemma \\ref{l3.2}.\n\n\n\\ignore{\nFirst we rewrite (\\ref{eq:opt2}) in terms of new matrices. Let $S_1=S_1({\\mathbb{R}}^{n\\times d})$ denote the unit sphere of ${\\mathbb{R}}^{n\\times d}$. Let $W=U-V$. Since $W\\neq 0$, let $W_0=\\frac{1}{\\norm{W}}W$ with $\\norm{W_0}=1$.\nLet also $V_0\\in S_1$ be so that $V=t\\norm{W}V_0$\n for some $t\\geq 0$. Then the objective function in (\\ref{eq:opt2}), i.e., the quotient of the two quadratics, simplifies to\n \\[ \\sum_{j=1}^D \\norm{(W_0 + t(I-\\Pi_j)V_0)a_j}_2^2. \\]\n Let $\\Gamma_t$ define the constraints set:\n \\[ \\Gamma_t =\\cap_{P\\in G} \n \\{ (W_0,V_0)\\in S_1\\times S_1~:~\n t^2 \\ip{V_0}{(I-P)V_0}+t\\ip{W_0}{(I-P)V_0}\\geq 0 \\}.\n \\]\nNotice that for each $t\\geq 0$, $\\Gamma_t$ is a closed and hence a compact subset of $S_1\\times S_1$. It may be an empty set for some values of $t$. The problem (\\ref{eq:opt2}) is equivalent to:\n\\[ \\inf_{t\\geq 0} \\inf_{(W_0,V_0)\\in\\Gamma_t}\n\\sum_{j=1}^D \\norm{W_0 a_j + t(I-\\Pi_j)V_0a_j}_2^2 = 0 \\]\nLet $(W(t_k),V(t_k),t_k)\\in S_1\\times S_1\\times [0,\\infty)$, $k\\geq 1$, be a sequence that achieves the lower bound $0$. Extract a subsequence indexed again by $k$ so that\n $\\lim_{k\\rightarrow\\infty} W(t_k)=W_\\infty\\in S_1$ and $\\lim_{k\\rightarrow\\infty}V(t_k)=V_\\infty$. Thus, for all $j\\in [D]$, $\\lim_{k\\rightarrow\\infty} \\norm{W_\\infty a_j + t_k(I-\\Pi_j)V(t_k) a_j}_2 = 0$, which implies\n \\[ \\lim_{k\\rightarrow\\infty} t_k(I-\\Pi_j)V(t_k) a_j = -W_\\infty a_j. \\]\n \n Case 1. $\\liminf_{k\\rightarrow\\infty} t_k<\\infty$. In this case extract a subsequence, say\n $(t_{k_l})_l$, so that\n $\\lim_{l\\rightarrow\\infty} t_{k_l}=t_\\infty\\in[0,\\infty)$. \nThis implies\n \\[ W_\\infty a_j +t_\\infty(I-\\Pi_j)V_\\infty a_j = 0~,~\\forall j\\in[D]. \\]\nNotice $(W_\\infty,V_\\infty)\\in \\Gamma_{t_\\infty}$.\nTherefore $\\tilde{U}=W_\\infty + t_\\infty V_\\infty$ and $\\tilde{V}=t_\\infty V_\\infty$ satisfy the conclusions (1),(2), and (3) and lemma \\ref{l3.2} is proved.\n\n\nCase 2. $\\liminf_{k\\rightarrow\\infty} t_k=\\infty$.\n\nIn the rest of the proof of this lemma, we construct an inductive process which ends with a scenario that either satisfies Case 1, or produces a (geo)metric contradiction. 
\n\nTo simplify notation we shall reuse the index $k$ at each stage.\n\n{\\em Initialization:} Set $p=1$. \nLet $V^{(1)}_{\\infty}=V_\\infty$, $t^{(1)}_k=t_k$, and $R^{(1)}_k=V(t_k)-V_\\infty$.\n\n{\\em Preamble:} Sequences $(t^{(p)}_k,R^{(p)}_k)$ satisfy, for every $j\\in[D]$:\n\\begin{equation}\n\\label{eq:sequences}\n\\lim_{k\\rightarrow\\infty}t^{(p)}_k = +\\infty, \\lim_{k\\rightarrow\\infty}R^{(p)}_k = 0, \\norm{R^{(p)}_k+V^{(p)}_\\infty}=1=\\norm{V^{(p)}_\\infty} , \\lim_{k\\rightarrow\\infty}t^{(p)}_k(I-\\Pi_j)R^{(p)}_k a_j = -W_\\infty a_j \n\\end{equation}\n\n{\\em Refinement:} Extract a subsequence \nindexed again by $k$ that\nsatisfies additionally:\n\\begin{equation}\n\\label{eq:seq2}\n\\norm{R^{(p)}_k}\\leq \\frac{1}{p}~,~ \\lim_{k\\rightarrow\\infty} \\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}}\\in S_1\n\\end{equation}\n\n{\\em Setting up the next iteration:} Set\n\\[ t^{(p+1)}_k=t^{(p)}_k\\norm{R^{(p)}_k} ~,~V^{(p+1)}_\\infty = \\lim_{k\\rightarrow\\infty} \\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}} ~,~ \nR^{(p+1)}_k=\\frac{R^{(p)}_k}{\\norm{R^{(p)}_k}} - V^{(p+1)}_\\infty \\]\n\n{\\em Testing:} If $\\liminf_{k\\rightarrow\\infty}t^{(p+1)}_k<\\infty$ then proceed with Case 1 above, which ends the proof of this lemma.\n\nOtherwise $\\lim_{k\\rightarrow\\infty}t^{(p+1)}_k=\\infty$. Thus $(I-\\Pi_j)V^{(p+1)}_\\infty a_j=0$ for all $j\\in[D]$. \nSet $p\\leftarrow p+1$. The {\\em preamble} conditions (\\ref{eq:sequences}) are again satisfied \nfor all $j\\in[D]$. Then proceed by going to the {\\em refinement} step and iterate. \n\nIf the iterative process described above does not end at some finite $p$, then we construct sequences doubly indexed $(t^{(p)}_k,R^{(p)}_k)_{p,k}$ that satisfy (\\ref{eq:sequences}) and (\\ref{eq:seq2}). \n}\n\n $\\Box$\n\n\n\n\\vspace{5mm}\n\n{\\em Step 4. Contradiction with the universality property of the key.}\n\nSo far we obtained that if the lower Lipschitz bound of ${\\hat{\\beta}}_A$ vanishes than there are $Z_0,\\tilde{U},\\tilde{V}\\in{\\mathbb{R}}^{n\\times d}$ with $Z_0\\neq 0$ and $\\tilde{U}\\neq P \\tilde{V}$, for all $P\\in G$ that satisfy the conclusions of Lemma \\ref{l3.2}. Notice $\\ip{Z_0}{Z_0}=\\ip{PZ_0}{Z_0}$\n for all $P\\in G$ and $(Z_0-\\Pi_j Z_0)a_j=0$ for all $j\\in[D]$. Choose $s>0$ but small enough so that $s\\norm{\\tilde{U}},s\\norm{\\tilde{V}}<\\frac{1}{4}\\delta_0$ with $\\delta_0=\\min_{P\\in{\\mathcal S}_n\\setminus G} \\norm{(I_n-P)Z_0}$.\n Let $X=Z_0+s \\tilde{U}$ and $Y=Z_0+s\\tilde{V}$.\n Then Lemma \\ref{l3.1} implies $d(\\hat{X},\\hat{Y})=\\min_{P\\in G}\\norm{\\tilde{U}-P\\tilde{V}}>0$. \n Hence $\\hat{X}\\neq\\hat{Y}$. On the other hand,\n for every $j\\in[D]$, $Xa_j = \\Pi_j Ya_j$. Thus\n ${\\hat{\\beta}}_A(\\hat{X})={\\hat{\\beta}}_A(\\hat{Y})$. \n Contradiction with the assumption that ${\\hat{\\beta}}_A$ is injective.\n \n This ends the proof of Theorem \\ref{t5}.\n\n$\\Box$\n\\ignore{\n\\begin{rem}\nThe proof of the previous theorem provides estimates for \nboth type I and type II local lower and upper Lipschitz bounds.\n\\end{rem}\n}\n\n\\subsection{Dimension Reduction}\n\nTheorem \\ref{t4} provides an Euclidean bi-Lipschitz embedding of very high dimension, $D=1+(d-1)n!$. On the other hand, Theorem \\ref{t5} shows that any universal key $A\\in\\mathbb{R}^{d\\times D}$ for ${\\widehat{\\Rnd}}$, \nand hence any injective map $\\hat{\\beta}_A$ is bi-Lipschitz. 
In this subsection we show that \nany bi-Lipschitz Euclidean embedding $\\hat{\\beta}_A:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{n\\times D}$ with $D>2d$ \ncan be further compressed to a lower-dimensional space $\\mathbb{R}^m$ with $m=2nd$, thus yielding\nbi-Lipschitz Euclidean embeddings of redundancy two. This is shown in the next result.\n\n\\begin{thm}\n \\label{t6} Assume $A\\in\\mathbb{R}^{d\\times D}$ is a universal key for ${\\widehat{\\Rnd}}$ with $D\\geq 2d$. \n Then, for $m\\geq 2nd$ and for a generic linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{m}$ with respect to the Zariski topology on\n $\\mathbb{R}^{n\\times D\\times m}$, the map\n \\begin{equation}\n \\label{eq:AB1}\n \\hat{\\beta}_{A,B}:{\\widehat{\\Rnd}}\\rightarrow\\mathbb{R}^{m}~,~ \\hat{\\beta}_{A,B}(\\hat{X})=B\\left(\\hat{\\beta}_A(\\hat{X})\\right)\n \\end{equation}\n is bi-Lipschitz. In particular, almost every full-rank linear operator $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$ produces such a \n bi-Lipschitz map.\n\\end{thm}\n\n\\begin{rmk}\nThe proof shows that, in fact, the complement set of linear operators $B$ that produce bi-Lipschitz embeddings is included \nin the zero-set of a polynomial. \n\\end{rmk}\n\n\\begin{rmk}\nPutting together Theorems \\ref{t4}, \\ref{t5}, \\ref{t6} we obtain that the metric space ${\\widehat{\\Rnd}}$ admits\na global bi-Lipschitz embedding in the Euclidean space $\\mathbb{R}^{2nd}$. This result is compatible\nwith the Whitney embedding theorem (see \\S 1.3 in \\cite{hirsh}), with the important caveat that the Whitney embedding result\napplies to smooth manifolds, whereas here ${\\widehat{\\Rnd}}$ is merely a non-smooth algebraic variety.\n\\end{rmk}\n\n\\begin{rmk}\nThese three theorems are summarized in part two of \nTheorem \\ref{t2} presented in \nthe first section.\n\\end{rmk}\n\n\\begin{rmk}\nWhile the embedding dimension grows linearly in $nd$, in fact $m=2nd$, the computational cost of evaluating ${\\hat{\\beta}}_{A,B}$ grows factorially in $n$ due to the $1+(d-1)n!$ intermediate dimension. \n\\end{rmk}\n\n\\begin{rmk}\nAs the proofs show, for $D\\geq 1+(d-1)n!$, a generic pair $(A,B)$ with\nrespect to the Zariski topology, where $A\\in\\mathbb{R}^{d\\times D}$ and $B:\\mathbb{R}^{n\\times D}\\rightarrow\\mathbb{R}^{2nd}$ is a linear map, produces a bi-Lipschitz embedding ${\\hat{\\beta}}_{A,B}$ of $({\\widehat{\\Rnd}},d)$ into $(\\mathbb{R}^{2nd},\\norm{\\cdot}_2)$. \n\\end{rmk}\n{\\bf Proof of Theorem \\ref{t6} }\n\nThe proof follows an approach similar to that of Theorem 3 in \\cite{Cahill19}.\nSee also \\cite{DUFRESNE20091979}.\n\nWithout loss of generality, assume $m0$ so that for every $X,Y\\in\\mathbb{R}^{n\\times d}$, \n$\\norm{B(L_{\\gamma}(X,Y))}\\geq a_\\gamma \\norm{L_{\\gamma}(X,Y)}$.\nLet $a_\\infty = \\min_{\\gamma}a_\\gamma >0$. Thus\n\\[ \\norm{\\beta_{A,B}(X) - \\beta_{A,B}(Y)} =\\norm{B(L_{\\gamma_0}(X,Y))}\n\\geq a_\\infty \\norm{L_{\\gamma_0}(X,Y)}=a_\\infty \\norm{\\beta_A(X)-\\beta_A(Y)} \\]\nwhere $\\gamma_0\\in(S_n)^{2D}$ is a particular $2D$-tuple of permutations. This shows that \n$B{\\vert}_{\\beta_A(\\mathbb{R}^{n\\times d})}:\\beta_A(\\mathbb{R}^{n\\times d})\\rightarrow\\mathbb{R}^m$ is bi-Lipschitz.\nBy Theorem \\ref{t5}, the map ${\\hat{\\beta}}_A$ is bi-Lipschitz. 
Therefore\n${\\hat{\\beta}}_{A,B}$ is bi-Lipschitz as well.\n$\\Box$\n\n\n\\subsection{Proof of Corollary \\ref{c0}\\label{subsec4.4}}\n\n(1) It is clear that any continuous $f$ induces a continuous $\\varphi:\\beta(\\mathbb{R}^{n\\times d})\\rightarrow\\mathbb{R}$ via $\\varphi(\\beta(X))=f(X)$. Furthermore, \n$F:=\\beta(\\mathbb{R}^{n\\times d})={\\hat{\\beta}}({\\widehat{\\Rnd}})$\nis a closed subset of $\\mathbb{R}^m$ since ${\\hat{\\beta}}$ is bi-Lipschitz. \nThen a consequence of the Tietze extension theorem \n(see problem 8 in \\S 12.1 of \\cite{roydenfitzpatrick})\nimplies that $\\varphi$ admits a continuous extension $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$. Thus $g(\\beta(X))=f(X)$ \nfor all $X\\in\\mathbb{R}^{n\\times d}$. The converse is trivial.\n\n(2) As in part (1), the Lipschitz continuous function $f$ induces a Lipschitz continuous function $\\varphi:F\\rightarrow\\mathbb{R}$. Since $F\\subset\\mathbb{R}^m$ is a subset of a Hilbert space, by the Kirszbraun\nextension theorem (see \\cite{WelWil75}), $\\varphi$ \nadmits a Lipschitz continuous extension \n(even with the same Lipschitz constant!)\n $g:\\mathbb{R}^m\\rightarrow\\mathbb{R}$ so that $g(\\beta(X))=f(X)$ for every $X\\in\\mathbb{R}^{n\\times d}$. The converse is trivial. $\\Box$\n\n\n\\section{Applications to Graph Deep Learning\\label{sec4}}\n\nIn this section we take an empirical look at the permutation invariant mappings presented in this paper. We focus on the problems of graph classification, for which we employ the PROTEINS\\_FULL dataset \\cite{DobsonDoing_proteins}, and graph regression, for which we employ the quantum chemistry QM9 dataset\n\\cite{ramakrishnan2014quantum}. In both problems we want to estimate a function $F: (A,Z) \\rightarrow p$, where $(A,Z)$ characterizes a graph: $A \\in \\mathbb{R}^{n\\times n}$ is an adjacency matrix and $Z \\in \\mathbb{R}^{n\\times r}$ is an associated feature matrix whose $i^{th}$\nrow encodes an array of $r$ features associated with the $i^{th}$ node. The output $p$ is a scalar, with $p \\in \\{0,1\\}$ for binary classification and $p \\in \\mathbb{R}_+$ for regression.\n\nWe estimate $F$ using a deep network that is trained in a supervised manner. The network is composed of three successive components applied in series: $\\Gamma$, $\\phi$, and $\\eta$. $\\Gamma$ represents a graph deep network \\cite{GCN},\nwhich produces a set of embeddings $X \\in \\mathbb{R}^{N\\times d}$ across the nodes in the graph. Here $N\\geq n$ is chosen to accommodate the graph with the largest number of nodes. For a graph with $n<N$ nodes, the last $N-n$ rows of $X$ are filled with zeros. $\\phi: \\mathbb{R}^{N\\times d} \\rightarrow \\mathbb{R}^{m}$ represents a permutation invariant mapping such as those proposed in this paper. $\\eta: \\mathbb{R}^{m} \\rightarrow \\mathbb{R}$ is a fully connected neural network. The entire end-to-end network is shown in Figure \\ref{fig:gcn_end2end}.\n\nIn this paper, we model $\\Gamma$ using a Graph Convolutional Network (GCN), as outlined in \\cite{GCN}. \nLet ${\\bf D} \\in \\mathbb{R}^{n \\times n}$ be the associated degree matrix for our graph $\\mathcal{G}$. Also let $\\tilde{A}$ be the associated adjacency matrix of $\\mathcal{G}$ with added self-connections: $\\tilde{A}=I+A$, where $I$ is the $n \\times n$ identity matrix, and $\\tilde{{\\bf D}}={\\bf D}+I$. Finally, we define the modified adjacency matrix $\\hat{A}=\\tilde{{\\bf D}}^{-1\/2} \\tilde{A} \\tilde{{\\bf D}}^{-1\/2}$. 
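The normalization above is straightforward to implement. The following minimal sketch (in Python with NumPy; an illustration only, not the code used in our experiments) computes $\\hat{A}$ from a binary adjacency matrix:\n\\begin{verbatim}\nimport numpy as np\n\ndef normalized_adjacency(A):\n    # A: (n, n) binary adjacency matrix of the graph G\n    n = A.shape[0]\n    A_tilde = A + np.eye(n)          # adjacency with self-connections, I + A\n    d_tilde = A_tilde.sum(axis=1)    # degrees of A_tilde, i.e. diag(D + I)\n    D_inv_sqrt = np.diag(1.0 \/ np.sqrt(d_tilde))\n    return D_inv_sqrt @ A_tilde @ D_inv_sqrt   # D_tilde^{-1\/2} A_tilde D_tilde^{-1\/2}\n\\end{verbatim}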
A GCN layer is defined as $H^{(l)}=\\sigma (\\hat{A}H^{(l-1)}W^{(l)})$.\nHere $H^{(l-1)}$ represents the GCN state coming into the $l^{th}$ layer, $\\sigma$ represents a chosen element-wise nonlinear operation such as ReLU, and $W^{(l)}$ represents a matrix of trainable weights assigned to the $l^{th}$ layer, whose number of rows matches the number of columns of $H^{(l-1)}$ and whose number of columns is set to the size of the embeddings at the $l^{th}$ layer. The initial state $H^{(0)}$ of the network is set to the feature set of the nodes of the graph, $H^{(0)}=Z$. \n\n\nFor $\\phi$ we employ seven different methods, described next.\n\\begin{enumerate}\n \\item ordering: For the ordering method, we set $D=d+1$, $\\phi_{ordering}(X)=\\beta_A(X)=\\downarrow(XA)$ with $A=[I~1]$ the identity matrix followed by a column of ones. The ordering and identity-based mappings have the notable disadvantage of not producing the same output embedding size for graphs of different sizes. To accommodate this and have consistently sized inputs for $\\eta$, we choose to zero-pad $\\phi(X)$ for these methods to produce a vector in $\\mathbb{R}^{m}$, where $m=ND=N(d+1)$ and $N$ is the size of the largest graph in the dataset.\n \\item kernels: For the kernels method, \n $$(\\phi_{kernel}(X))_j=\\sum_{k=1}^n K_G(x_k,a_j)=\\sum_{k=1}^n \\exp(-\\norm{x_k-a_j}^2),\n ~~j\\in[m],$$ \n for $X=[x_1|\\cdots|x_n]^T$, where the kernel vectors $a_1,\\ldots,a_m\\in\\mathbb{R}^d$ \n are generated randomly, with each entry drawn from a standard normal distribution. Each resultant vector is then normalized to produce a kernel vector of magnitude one. When inputting the embedding $X$ to the kernels mapping, we first normalize the embedding of each node.\n \\item identity: In this case $\\phi_{id}(X)=X$, which is obviously not a permutation invariant map.\n \\item data augmentation: In this case $\\phi_{data\\;augment}(X)=X$ but data augmentation is used. Our data augmentation scheme works as follows. We take the training set and create multiple permutations of the adjacency and associated feature matrix for each graph in the training set. We add each permuted graph to the training set to be included with the original graphs. In our experiments we use four added permutations for each graph when employing data augmentation.\n \\item sum pooling: The sum pooling method sums the feature values across the set of nodes: $\\phi_{sum\\;pooling}(X)=\\mathbf{1}_{n\\times 1}^T X$.\n \\item sort pooling: The sort pooling method permutes entire rows of $X$ so that the last column is sorted in decreasing order, $\\phi_{sort\\;pool}(X)=\\Pi X$ where $\\Pi\\in{\\mathcal S}_n$ is so that $\\Pi\\,X(:,d)=\\downarrow(X(:,d))$. \n \\item set-2-set: This method employs a recurrent neural network\n that achieves permutation invariance through attention-based weighted summations. 
It has been introduced in \\cite{OrderMatters_2015arXiv151106391V}.\n\\end{enumerate}\n\nFor the fully connected network $\\eta$ we use a simple multilayer perceptron whose size is described below.\n\nSize parameters related to the $\\Gamma$ and $\\eta$ components are largely held constant across the different implementations.\nHowever, the network parameters are trained independently for each method.\n\n\\begin{figure}[!htbp]\n \\centering\n\t\\includegraphics[width=.8\\linewidth]{Results\/GCN_end2end_2.png}\n\t\\caption{The end-to-end network: the graph network $\\Gamma$ produces node embeddings, the permutation invariant mapping $\\phi$ aggregates them, and the fully connected network $\\eta$ produces the output.}\n\t\\label{fig:gcn_end2end}\n\\end{figure}\n\n\n\\subsection{Graph Classification}\n\n\\subsubsection{Methodology}\nFor our experiments in graph classification we consider the PROTEINS\\_FULL dataset obtained from \\cite{KKMMN2016} and originally introduced in \\cite{DobsonDoing_proteins}. \nThe dataset consists of 1113 proteins falling into one of two classes: those that function as enzymes and those that do not. Across the dataset there are 450 enzymes in total. \nThe graph for each protein is constructed such that the nodes represent amino acids and the edges represent the bonds between them. The number of amino acids (nodes) varies from around 20 to a maximum of 620 per protein, with an average of 39.06. \nEach protein comes with a set of features for each node. \nThe features represent characteristics of the associated amino acid represented by the node. The number of features is $r=29$.\nWe run the end-to-end model with three GCN layers in $\\Gamma$, each with 50 hidden units. \n$\\eta$ consists of three dense multilayer perceptron layers, each with 150 hidden units. \nWe set $d$ equal to 1, 10, 50 and 100.\n\nFor each method and embedding size we train for 300 epochs. Note though that the data augmentation method will have experienced five times as many training steps due to the increased size of its training set. We use a batch size of 128 graphs. The loss function minimized during training is the binary cross entropy loss (BCE) defined as\n\\begin{equation}\n\\label{eq:BCE}\nBCE = -\\frac{1}{B}\\sum_{t=1}^B \\left[ p_t \\log(\\sigma(\\eta(\\phi(X^{(t)}))))+(1-p_t)\\log(1-\\sigma(\\eta(\\phi(X^{(t)}))))\\right] \n\\end{equation} \nwhere $B=128$ is the batch size, $p_t=1$ when the $t^{th}$ graph\n(protein) is an enzyme and $p_t=0$ otherwise, and $\\sigma(x)=\\frac{1}{1+e^{-x}}$ is the sigmoid function that maps the output $\\eta(\\phi(X^{(t)}))$ of the 3-layer fully connected network $\\eta$ to $[0,1]$. Three performance metrics were computed: accuracy (ACC), area under the receiver operating characteristic curve (AUC), and average precision (AP), i.e., the area under the precision-recall curve computed from precision scores. These measures are defined as follows (see the scikit-learn sklearn.metrics module documentation, or \\cite{roc}).\n\nFor a threshold $\\tau\\in[0,1]$, the classification decision $\\hat{p}_t(\\tau)$ is given by:\n\\begin{equation}\n \\hat{p}_t(\\tau) = \\left\\{\n \\begin{array}{rcl}\n 1 & \\mbox{if} & \\mbox{$\\sigma(\\eta(\\phi(X^{(t)})))\\geq \\tau$} \\\\\n 0 & & \\mbox{otherwise}\n \\end{array}\\right. .\n\\end{equation}\nBy default $\\tau=\\frac{1}{2}$. 
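In practice these quantities are computed directly from the network outputs; the following minimal sketch (using scikit-learn, with placeholder arrays; an illustration only, not our actual evaluation code) assumes a vector of predicted probabilities $\\sigma(\\eta(\\phi(X^{(t)})))$ and the corresponding ground-truth labels $p_t$:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score\n\nprobs = np.array([0.9, 0.2, 0.7, 0.4])   # sigma(eta(phi(X))), placeholder values\nlabels = np.array([1, 0, 1, 1])          # ground-truth p_t\n\npreds = (probs >= 0.5).astype(int)       # decision at the default threshold tau = 1\/2\nacc = accuracy_score(labels, preds)      # ACC\nauc = roc_auc_score(labels, probs)       # area under the ROC curve (AUC)\nap = average_precision_score(labels, probs)   # average precision (AP)\n\\end{verbatim}\nThese library routines compute the same quantities as the formulas given next.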
For a given threshold, one computes the four scores: true positive (TP), false positive (FP), true negative (TN) and false negative (FN):\n\\begin{equation}\n TP(\\tau) = \\frac{1}{B_1}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 1}1_{p_t = 1} ~~, ~~\n TN(\\tau) = \\frac{1}{B_0}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 0}1_{p_t = 0}\n\\end{equation}\n\\begin{equation}\n FP(\\tau) = \\frac{1}{B_0}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 1}1_{p_t = 0} = 1-TN(\\tau) ~~,~~\n FN(\\tau) = \\frac{1}{B_1}\\sum_{t=1}^B 1_{\\hat{p}_t(\\tau) = 0}1_{p_t = 1} = 1-TP(\\tau)\n\\end{equation}\nwhere $B_0=\\sum_{t=1}^B 1_{p_t = 0}$ and $B_1=\\sum_{t=1}^B 1_{p_t = 1}=B-B_0$.\n\nThese four statistics determine Precision $P(\\tau)$, Recall $R(\\tau)$ (also known as sensitivity or true positive rate), and Specificity $S(\\tau)$ (also known as true negative rate):\n\\begin{equation} P(\\tau) = \\frac{TP(\\tau)}{TP(\\tau)+FP(\\tau)}\n~~,~~R(\\tau) = \\frac{TP(\\tau)}{TP(\\tau)+FN(\\tau)}\n~~,~~S(\\tau) = \\frac{TN(\\tau)}{TN(\\tau)+FP(\\tau)}\n\\end{equation}\n\nAccuracy (ACC) is defined as the fraction of correct classifications for the default threshold $\\tau=\\frac{1}{2}$ over the set of batch samples:\n\\begin{equation}\n\\label{eq:ACC}\n ACC = \\frac{1}{B}\\sum_{t=1}^B 1_{p_t = \\hat{p}_t(\\frac{1}{2})} =\\frac{B_0}{B} TN(\\frac{1}{2}) + \\frac{B_1}{B} TP(\\frac{1}{2}) \n\\end{equation}\nThe area under the receiver operating characteristic curve (AUC) is computed from prediction scores as the area under the true positive rate (TPR) vs. false positive rate (FPR) curve, i.e., the recall vs. 1-specificity curve:\n\\begin{equation}\n\\label{eq:AUC}\n AUC = \\frac{1}{2}\\sum_{k=1}^K (S(\\tau_{k-1})-S(\\tau_k))(R(\\tau_{k-1})+R(\\tau_k))\n\\end{equation}\nwhere $K$ is the number of thresholds.\nAverage precision (AP) summarizes a precision-recall curve as the weighted mean of the precision achieved at each threshold, with the increase in recall from the previous threshold used as the weight:\n\\begin{equation}\n\\label{eq:AP}\n AP = \\sum_{k=1}^K (R(\\tau_k) - R(\\tau_{k-1}))P(\\tau_k).\n\\end{equation}\n\nWe track the binary cross entropy (BCE) through training and we compute it on the holdout set and a random node permutation of the holdout set (see Figures \\ref{fig:prot1} and \\ref{fig:prot2}). The lower the value the better.\n\nWe look at the three performance metrics on the training set, the holdout set, and a random node permutation of the holdout set: see Figures \\ref{fig:prot3} and \\ref{fig:prot4} for accuracy (ACC); see Figures \\ref{fig:prot5} and \\ref{fig:prot6} for area under the receiver operating characteristic curve (AUC); and see Figures \\ref{fig:prot7} and \\ref{fig:prot8} for average precision (AP). For all these performance metrics, the higher the score the better.\n\n\\subsubsection{Discussion}\n\nTables \\ref{table:t1}-\\ref{table:t12} list values of the three performance metrics (ACC, AUC, AP) at the end of training (after 300 epochs). \nPerformances over the course of training are plotted in Figures \\ref{fig:prot1} through \\ref{fig:prot8}.\n\nThe authors of \\cite{KKMMN2016} utilized a Support Vector Machine (1-layer perceptron) for classification and \nobtained an accuracy (ACC) of 77\\% on the entire data set \nusing 52 features, and an accuracy of 80\\% on a smaller set of 36 features. By comparison, our data augmentation method for $d=100$ achieved an accuracy of 97.5\\% on the training data set,\nbut dropped dramatically to 73\\% on the holdout data, and 72\\% on the \nholdout data set with randomly permuted nodes. 
\nOn the other hand, both the kernels method and the sum-pooling\nmethod with $d=50$ achieved an accuracy of around 79\\% on the\ntraining data set, while dropping accuracy by \nonly about 2\\% to around\n77\\% on the holdout data (as well as on the holdout data with nodes permuted).\n\nFor $d=1$, data augmentation performed the best on the training set with an area under the receiver operating characteristic (AUC) of 0.896, followed closely by the identity method with an AUC of 0.886. On the permuted holdout set however, sort-pooling performed the best with an AUC of 0.803.\n\nFor $d=10$, sum-pooling, ordering, and kernels performed well on the permuted holdout set with AUCs of 0.821, 0.820, and 0.818 respectively. The high performance of the identity method, data augmentation, and sort-pooling on the training set did not translate to the permuted holdout set at $d=10$. By $d=100$, sum-pooling still performed the best on the permuted holdout set with an AUC of 0.817. This was followed by the kernels method which achieved an AUC of 0.801 on the permuted holdout set.\n\nFor experiments where $d>1$, the identity method and data augmentation show a notable drop in performance from the training set to the holdout set. This trend is also, to a lesser extent, visible in the sort pooling and ordering methods. In the holdout permuted set we see significant oscillations in the performance of both the identity and data augmentation methods.\n\n\\subsection{Graph Regression}\n\n\\subsubsection{Methodology}\nFor our experiments in graph regression we consider the QM9 dataset \\cite{ramakrishnan2014quantum}. This dataset consists of 134 thousand molecules represented as graphs, where the nodes represent atoms and the edges represent the bonds between them. \n\nEach graph has between 3 and 29 nodes, $3\\leq n\\leq 29$. Each node has 11 features, $r=11$. We hold out 20 thousand of these molecules for evaluation purposes. The dataset includes 19 quantitative features for each molecule.\n\nFor the purposes of our study, we focus on the electron energy gap (in units of $eV$), denoted $\\Delta\\varepsilon$ in \\cite{DFTpaper}, whose chemical accuracy is $0.043\\,eV$ and for which machine learning techniques obtain the worst prediction performance among these features.\nThe best existing estimator for this feature is enn-s2s-ens5 from \\cite{Gilmer_2017arXiv170401212G}\n and has a mean absolute error (MAE) of $0.0529\\,eV$, which is $1.23$ times the chemical accuracy. \n We run the end-to-end model with three GCN layers in $\\Gamma$, each with 50 hidden units. $\\eta$ consists of three multi-layer perceptron layers, each with 150 hidden units. We use rectified linear units as our nonlinear activation function. Finally, we vary $d$, the size of the node embeddings output by $\\Gamma$. We set $d$ equal to 1, 10, 50 and 100.\n\nFor each method and embedding size we train for 300 epochs. Note though that the data augmentation method will have experienced five times as many training steps due to the increased size of its training set. We use a batch size of 128 graphs. The loss function minimized during training is the mean square error (MSE) between the ground truth and the network output \n (see Figures \\ref{fig:a1}, \\ref{fig:a2}) \n\\begin{equation}\n\\label{eq:MSE}\nMSE = \\frac{1}{B}\\sum_{t=1}^B |\\Delta\\varepsilon_t -\\eta(\\phi(X^{(t)}))|^2\n\\end{equation}\nwhere $B=128$ is the batch size and $\\Delta\\varepsilon_t$ is the electron energy gap of the $t^{th}$ graph (molecule). 
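The optimization itself is standard; the following minimal sketch (assuming a PyTorch implementation, with hypothetical modules Gamma, phi and eta standing for $\\Gamma$, $\\phi$ and $\\eta$) illustrates one training step with the objective (\\ref{eq:MSE}):\n\\begin{verbatim}\nimport torch\n\ndef train_step(A, Z, target, Gamma, phi, eta, optimizer):\n    # A, Z: batched adjacency and feature tensors; target: Delta-epsilon values\n    X = Gamma(A, Z)                          # node embeddings\n    pred = eta(phi(X)).squeeze(-1)           # one scalar prediction per graph\n    loss = torch.mean((pred - target) ** 2)  # MSE objective\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\\end{verbatim}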
The performance\nmetric is Mean Absolute Error (MAE)\n\\begin{equation}\n\\label{eq:MAE}\nMAE = \\frac{1}{B}\\sum_{t=1}^B |\\Delta\\varepsilon_t -\\eta(\\phi(X^{(t)}))|.\n\\end{equation}\nWe track the mean absolute error through the course of training. We look at this performance metric on the training set, the holdout set, and a random node permutation of the holdout set (see Figures \\ref{fig:b1} and \\ref{fig:b2}). \n\n\n\\subsubsection{Discussion}\n\nNumerical results at the end of training (after 300 epochs) are included in Tables \\ref{table:a}, \\ref{table:b}, \\ref{table:c} and \\ref{table:d}.\nFrom the results we see that the ordering method performed best for $d=100$, followed closely by the data augmentation method. For $d=10$, both the ordering method and the kernels method performed well, though both fell slightly short of data augmentation, which performed marginally better on both the training data and the holdout data, albeit with significantly more training iterations. For $d=1$, the kernels method failed to train adequately. The identity mapping performed relatively well on the training data (for $d=100$ it achieved the smallest MAE among all methods and all parameters) and even on the holdout data; however, it lost its performance on the permuted holdout data. The identity mapping's failure to generalize across permutations of the holdout set is likely exacerbated by the fact that the QM9 data as presented to the network comes ordered in its node positions from heaviest atom to lightest. Data augmentation notably kept its performance despite this due to training on many permutations of the data. \n\nFor $d=100$, our ordering method achieved an MAE of $0.155\\,eV$ on the training data set and $0.187\\,eV$ on the holdout data set, which are $3.6$ and $4.35$ times the chemical accuracy ($0.043\\,eV$\\ignore{ cf. Supplementary material of \\cite{Gilmer_2017arXiv170401212G}}), respectively. This is worse than the enn-s2s-ens5 method in \\cite{Gilmer_2017arXiv170401212G} (the current best method), which achieved an MAE of $0.0529\\,eV$, i.e., $1.23$ times the chemical accuracy, but better than the Coulomb Matrix (CM) representation in \\cite{PhysRevLett.108.058301}, whose features were optimized for this task and which achieved an MAE of $5.32$ times the chemical accuracy.\n\n\n\\bibliographystyle{amsplain}\n