diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfjht" "b/data_all_eng_slimpj/shuffled/split2/finalzzfjht" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfjht" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\IEEEPARstart{M}{illions} of asteroids exist in solar system, many the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets \\cite{kortenkamp_asteroids_2011}. The vast majority of known asteroids orbit within the main asteroid belt located between the orbits of Mars and Jupiter. To further investigate how planets formed and how life began, as well as improve our understanding to asteroids that could impact Earth, some deep exploration programs were proposed, e.g. \\emph{Hayabusa}\\cite{fujiwara_rubble-pile_2006, yoshikawa_hayabusa_2006}, \\emph{Hayabusa2}\\cite{sugita_geomorphology_2019, watanabe_hayabusa2_2017}, and \\emph{OSIRIS-Rex}\\cite{lauretta2017osiris, golish2020ground}. The program objectives involve orbiting observation, autonomous landing, geological sampling, and so on. \n\nTwo near-Earth asteroids, Itokawa \\cite{yoshikawa_hayabusa_2006} and Ryugu \\cite{sugita_geomorphology_2019}, with complex 6-DoF motion are shown in Fig. \\ref{fig1}. It is obvious that 3D visual tracking system is important to explore these two asteroids, which can provide object location, size, and pose. It's also of great significance to spacecraft autonomous navigation, asteroid sample collection, and universe origin study. However, state-of-the-art 4-DoF trackers \\cite{giancolaLeveragingShapeCompletion2019, qiP2BPointtoBoxNetwork2020, yinCenterBased3DObject2021} presented for automous driving are confused about heading angle of asteroid, which makes inaccurate 3D bounding-box estimation. Besides, some 6-DoF tracking methods \\cite{prisacariuPWP3DRealTimeSegmentation2012, prisacariuSimultaneousMonocular2D2013, crivellaroRobust3DObject2018a} under strong assumptions are also impractical to track asteroid. To be honest, constructing an end-to-end deep network that predicts 6-DoF states of asteroid is pretty difficult. We therefore decompose 3D asteroid tracking problem into 3-DoF tracking and pose estimation. And this paper merely focus on the 3-DoF tracking part.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\subfloat[Itokawa]{\n \\includegraphics[width=0.1\\textwidth]{itokawa0003.jpeg} \n \\includegraphics[width=0.1\\textwidth]{itokawa0005.jpeg}\n \\includegraphics[width=0.1\\textwidth]{itokawa0007.jpeg}\n \\includegraphics[width=0.1\\textwidth]{itokawa0010.jpeg}\n\t \\label{fig1_a}\n }\n\t\\hfil\n\t\\subfloat[Ryugu]{\n \\includegraphics[width=0.1\\textwidth]{ryugu0001.jpeg}\n \\includegraphics[width=0.1\\textwidth]{ryugu0004.jpeg}\n \\includegraphics[width=0.1\\textwidth]{ryugu0008.jpeg}\n \\includegraphics[width=0.1\\textwidth]{ryugu0012.jpeg}\n\t \\label{fig1_b}\n }\n\t\\caption{Two mission objectives of \\emph{Hayabusa} and \\emph{Hayabusa2}. (a) Itokawa, a potato-shaped asteroid with 600-meter size, is named after Hideo Itokawa, a Japanese rocket pioneer. 
(b) Ryugu, a more primordial C-type asteroid about 900 meters in size, was discovered in May 1999 by the LINEAR telescope.\n }\n\t\\label{fig1}\n\\end{figure}\n\n\\begin{figure*}[ht]\n \\centering\n\t\\includegraphics[width=\\textwidth]{3D_track_framework.pdf}\n \\caption{The deep-learning based 3D tracking framework for asteroids, named Track3D, which consists of a 2D monocular tracker and an amodal axis-aligned bounding-box network, A3BoxNet. Experiments show that our framework can run at 77 FPS with high accuracy, and that it generalizes well across 2D tracking algorithms.}\n \\label{fig2}\n \\centering\n\\end{figure*}\n\nInspired by the idea of 2D-driven 3D perception, we present a novel deep-learning based 3D asteroid tracking framework, Track3D. As shown in Fig. \\ref{fig2}, it mainly consists of a 2D monocular tracker and a light-weight amodal axis-aligned bounding-box network, A3BoxNet, which can predict an accurate target center and size relying purely on a partial object point cloud. Extensive experiments show that Track3D reaches state-of-the-art 3D tracking results (0.669 $AO^{3d}$ at 77 FPS) and generalizes well across 2D monocular trackers. Moreover, we discover that our framework with the 2D-3D tracking fusion strategy can significantly improve 2D tracking performance. \n\nHowever, there are few studies on 3D asteroid tracking, and no relevant dataset, which has greatly hindered the development of asteroid exploration. To this end, we construct the first large-scale 3D asteroid tracking dataset by acquiring 148,500 binocular images, depth maps, and point clouds of diverse asteroids with various shapes and textures using a physics engine. Benefiting from the power and convenience of the physics engine, all the 2D and 3D annotations are automatically generated. Meanwhile, we also provide an evaluation toolkit, which includes 2D monocular and 3D tracking evaluation algorithms.\n\nOur contributions in this paper are summarized as follows:\n\\begin{itemize}\n \\item Considering different types of asteroids with various shapes and textures, we construct the first large-scale 3D asteroid tracking dataset, including 148,500 binocular images, depth maps, and point clouds. \n\n \\item The first 3-DoF asteroid tracking framework, Track3D, is also presented, which involves a 2D monocular tracker and the A3BoxNet network. The experimental results show the impressive advancement and generalization of our framework, even when it is based on a weak 2D tracking algorithm.\n\n \\item We propose a novel light-weight amodal bounding-box estimation network, A3BoxNet, which can predict an accurate axis-aligned bounding-box of the target merely from a partial object point cloud. Randomly sampling 1024 points as network input, A3BoxNet achieves 0.721 $AO^{3d}$ and 0.345 $ACE^{3d}$ with up to 281.5 FPS real-time performance.\n \n\\end{itemize}\n\nThe rest of this paper is organized as follows. In Section \\ref{section2}, we review related works from two aspects: visual tracking in the aerospace domain and 3D object tracking. In Section \\ref{section3}, the construction details of the 3D asteroid tracking dataset are presented, involving the simulation platform, annotation, and evaluation metrics. We propose the deep-learning based 3D tracking framework in Section \\ref{section4}, which consists of a 2D monocular tracker and a light-weight amodal bounding-box estimation network, A3BoxNet. Section \\ref{section5} reports the performance of the 3D tracking framework and the ablation study. 
Finally, conclusions are drawn in Section \\ref{section6}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{3D_asteroid_tracking_platform.pdf}\n \\caption{The simulation platform for constructing the 3D asteroid tracking dataset. We design three types of 3D asteroid models with 6 colorful textures, whose size and luminosity are modified during data collection. The setup of the visual observer spacecraft and the imaging mechanism of the perspective vision camera are also shown in the upper right corner. We pre-align the depth map and point cloud to the reference frame (i.e. the left camera coordinate system). The left and right vision cameras have the same parameters, including far clipping plane, perspective angle, and resolution. In addition, some screenshots of the 3D asteroid tracking dataset with 2D and 3D annotations are visualized at the bottom right.}\n \\label{fig3}\n \\centering\n\\end{figure*}\n\n\\section{Related Work \\label{section2}}\n\\subsection{Visual Tracking in Aerospace}\nThe goal of visual tracking is to precisely and robustly perceive the real-time states (e.g. location, velocity, and size) of a target of interest in complex and dynamic environments. As the basis for pose estimation, behavior understanding, and scene interpretation, visual object tracking has wide applications in space debris removal \\cite{agliettiActiveSpaceDebris2020,huangDexterousTetheredSpace2017}, space robotic inspection \\cite{fourieFlightResultsVisionBased2014}, and spacecraft rendezvous and docking \\cite{petitVisionbasedSpaceAutonomous2011,sharmaPoseEstimationNoncooperative2018}. \n\nAt present, most visual trackers in the aerospace domain are feature-based. Huang \\cite{huangDexterousTetheredSpace2017} proposed a novel feature tracking algorithm: features are first extracted with a SURF detector; the pyramid Kanade-Lucas-Tomasi (P-KLT) algorithm is then adopted to match key-points between two adjacent frames; finally, an accurate target bounding box is obtained with the Greedy Snake method. A feature-tracking scheme was also presented in \\cite{volpePassiveCameraBased2018} that combines a traditional feature-point detector with frame-wise matching to track a non-cooperative and unknown satellite. Felicetti \\cite{felicettiImagebasedAttitudeManeuvers2018} put forward an active space debris visual tracking method, in which the chaser satellite keeps the moving object in the field of view of the optical camera by continuous pose correction. These feature-based trackers rely heavily on manually designed feature detectors and cannot handle extreme cases in space (e.g. illumination variation, scale variation, fast motion, rotation, truncation, and background clutter).\n\nWith many challenges and benchmarks emerging, such as OTB2015 \\cite{wuObjectTrackingBenchmark2015}, YouTube-BB \\cite{realYouTubeBoundingBoxesLargeHighPrecision2017}, ILSVRC-VID \\cite{russakovskyImageNetLargeScale2015}, GOT-10K \\cite{huangGOT10kLargeHighDiversity2019}, and the Visual Object Tracking challenges \\cite{kristanVisualObjectTracking2017, kristanSeventhVisualObject2019}, generic object tracking has developed rapidly \\cite{lanLearningCommonFeatureSpecific2018}. 
In particular, deep learning based trackers \\cite{bertinettoFullyconvolutionalSiameseNetworks2016, wangFastOnlineObject2018, liSiamRPNEvolutionSiamese2018, danelljanATOMAccurateTracking2018, zhangDeeperWiderSiamese2019, chenSiameseBoxAdaptive2020, ondrasovicSiameseVisualObject2021} have dominated the tracking community in recent years because of their striking performance. Generic object trackers often follow the protocol that no prior knowledge is available. This hypothesis is naturally suitable for asteroid visual tracking, because of the high uncertainty of vision tasks in space. In \\cite{zhou2DVisionbasedTracking2021}, most state-of-the-art generic trackers were evaluated on the space non-cooperative object visual tracking (SNCOVT) dataset, which provides a firm research foundation for our work.\n\nHowever, recent visual trackers mainly cope with RGB \\cite{zhangOceanObjectawareAnchorfree2020, yanLearningSpatioTemporalTransformer2021, chenTransformerTracking2021}, RGB-Thermal \\cite{lanModalitycorrelationawareSparseRepresentation2020, lanLearningModalityConsistencyFeature2019, lanRobustMultimodalityAnchor2019}, and RGB-Depth \\cite{songTrackingRevisitedUsing2013} video sequences, which provide only limited target information in 2D space and heavily restrict the practical applications of visual tracking. In contrast, 3D visual tracking is more promising and competitive. To this end, we propose a novel deep-learning based 3D tracking framework for asteroid exploration.\n\n\n\\begin{table*}[t]\n \\centering\n \\caption{The default object size categories and the corresponding X, Y, Z ratios.}\n \\label{table1}\n \\begin{tabular}{ccccccccccccccc}\n \\toprule\n category & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 \\\\\n \\midrule\n X & 1 & 1\/2 & 1\/3 & 2\/3 & 1 & 1 & 1 & 1 & 1\/2 & 2\/3 & 1\/2 & 2\/3 & 1 & 1\\\\\n Y & 1 & 1 & 1 & 1 & 1\/2 & 2\/3 & 1 & 1 & 1\/2 & 2\/3 & 1 & 1 & 1\/2 & 2\/3 \\\\\n Z & 1 & 1 & 1 & 1 & 1 & 1 & 1\/2 & 2\/3 & 1 & 1 & 1\/2 & 2\/3 & 1\/2 & 2\/3 \\\\\n \\bottomrule \n \\end{tabular}\n \\centering\n\\end{table*}\n\n\\subsection{3D Object Tracking}\nAlthough there are plenty of related works \\cite{prisacariuPWP3DRealTimeSegmentation2012, heldPrecisionTrackingSparse2013, prisacariuSimultaneousMonocular2D2013, asvadi3DObjectTracking2016, crivellaroRobust3DObject2018a, kartObjectTrackingReconstruction2019, giancolaLeveragingShapeCompletion2019, qiP2BPointtoBoxNetwork2020, yinCenterBased3DObject2021}, the concept of 3D object tracking remains ambiguous. Hence, we first define 3D object tracking as obtaining 3D information of a target in real time by leveraging various visual sensors, given its initial state at the beginning. According to the degrees of freedom of the tracking result, 3D trackers can be classified into three categories: 3-DoF, 4-DoF, and 6-DoF trackers. Obviously, the more degrees of freedom, the more difficult the 3D tracking task is. \n\n3-DoF tracking means that both the 3D location and the size of the object (i.e. a 3D axis-aligned bounding-box) should be estimated. However, most studies only consider predicting the 3D object center. In \\cite{heldPrecisionTrackingSparse2013}, a color-augmented search algorithm was presented to obtain the position and velocity of a vehicle. Asvadi et al. \\cite{asvadi3DObjectTracking2016} utilized two parallel mean-shift localizers in image and point-cloud space, and fused the two localizations with a Kalman filter. This algorithm effectively achieves a low 3D center error. 
To the best of our knowledge, the framework proposed in this paper is the first real 3-DoF tracker, and it can be applied to collision avoidance, 3D reconstruction, and pose estimation.\n\nCompared with 3-DoF methods, a 4-DoF tracker also needs to predict the heading angle of the target, a requirement that originates from 3D tracking in autonomous driving. Giancola et al. \\cite{giancolaLeveragingShapeCompletion2019} presented a novel 3D siamese tracker with the regularization of point cloud completion. However, the exhaustive search for candidate shapes in this method incurs a high computational cost. Qi et al. \\cite{qiP2BPointtoBoxNetwork2020} also proposed a point-wise tracking paradigm, P2B, which addresses 3D object tracking by potential center localization, 3D target proposal, and verification. Since this method uses only point cloud data as input, it is sensitive to the initial object point cloud and only achieves 0.562 $AO^{3d}$ at 40 FPS on the KITTI tracking dataset \\cite{geigerAreWeReady2012}. \n\nIn \\cite{prisacariuPWP3DRealTimeSegmentation2012, prisacariuSimultaneousMonocular2D2013}, the 6-DoF tracking task was considered as a joint 2D segmentation and 3D pose estimation problem, and the method looked for the pose that best segmented the target object from the background. Crivellaro et al. \\cite{crivellaroRobust3DObject2018a} presented a novel 6-DoF tracking framework for monocular images, including an expensive 2D detector, local pose estimation, and an extended Kalman filter. However, the non-textured 3D model of the target must be given in this method, which heavily limits its scope of application. Asteroid tracking is inherently a 6-DoF tracking task, and we consider it very hard to construct an end-to-end network that directly and precisely predicts the 6-DoF object state. Therefore, we decompose the asteroid tracking problem into 3-DoF tracking and pose estimation. It is worth noting that this paper focuses only on the 3-DoF tracking task of asteroids.\n\n\n\\section{3D Asteroid Tracking Dataset \\label{section3}}\nTo promote research on 3D asteroid tracking, we construct a large-scale 3D asteroid tracking dataset, including binocular video sequences, depth maps, and point clouds. In addition, a 3D tracking evaluation toolkit is provided for performance analysis. More details about the 3D asteroid tracking dataset are introduced in this section.\n\n\\subsection{Dataset Construction}\nThere is no doubt that collecting real data to create a large-scale 3D asteroid tracking dataset, as done for the KITTI dataset \\cite{geigerAreWeReady2012} and the Princeton RGB-D Tracking dataset \\cite{songTrackingRevisitedUsing2013}, is impractical. Meanwhile, constructing a dataset by ground simulation \\cite{zhou2DVisionbasedTracking2021} is very expensive and limited. Inspired by the UAV123 dataset \\cite{muellerBenchmarkSimulatorUAV2016}, we therefore collect rich 3D tracking data with the virtual physics engine V-rep. The power and convenience of the physics engine make automatic data labelling possible, which greatly reduces the construction cost and improves annotation accuracy.\n\nThe critical foundation of dataset construction based on a physics engine is the 3D modeling of asteroids with diverse shapes and textures. We create three types of 3D asteroid models (i.e. Asteroid04, 05, and 06) with 6 different textures, which are illustrated on the left of Fig. \\ref{fig3}. From our point of view, one asteroid model with different textures can be considered as different fine-grained categories. 
In addition, we introduce 9 simulated space scenes into the dataset construction. All the 3D asteroid models are controlled by scripts to carry out random 6-DoF motion in the simulated scenes. \n\nThe detailed setup of the vision sensors for data collection can be seen in the upper right corner of Fig. \\ref{fig3}. Only two perspective vision cameras, mounted on the observer spacecraft with a 0.4-meter baseline along the x-axis, are used; they not only acquire binocular video sequences during simulation, but also provide aligned depth maps and point clouds from the camera buffer through the API of the physics engine. The camera matrix is vital for our 3D tracking framework, in which point clouds are projected onto the image plane; however, V-rep merely provides the perspective angle $(\\alpha_x, \\alpha_y)$ and the resolution $\\left(W, H\\right)$. To this end, we compute the camera intrinsic matrix $M_i$ following Eq. \\ref{eq_1} (the derivation is given in Appendix \\ref{appendix1}):\n\n\\begin{equation}\n M_{i}=\n \\begin{bmatrix}\n \\frac{W}{2 \\tan(\\alpha_x\/2)} & 0 & \\frac{W}{2} \\\\\n 0 & \\frac{H}{ 2 \\tan(\\alpha_y\/2)} & \\frac{H}{2}\\\\\n 0 & 0 & 1\n \\end{bmatrix} \n \\label{eq_1}\n \\end{equation}\n \nIn total, we collect 360 sequences for the training set and 135 sequences for the testing set. Each sequence has 300 frames of binocular images, depth maps, and point clouds. Following the protocol proposed in \\cite{huangGOT10kLargeHighDiversity2019}, there is no overlap of fine-grained categories between the training and testing sets. Screenshots of the 3D asteroid tracking testing set with 2D and 3D annotations are shown in Fig. \\ref{fig3}.\n\n\\begin{figure}[t]\n \\centering\n \\subfloat[Size classes]{\\includegraphics[width=0.23\\textwidth]{size_class_distribution_total.pdf} \\label{fig4_a}} \\hfil\n \\subfloat[Object volumes]{\\includegraphics[width=0.23\\textwidth]{object_volume_distribution_total.pdf} \\label{fig4_b}} \\vfill\n \\subfloat[Motion speeds]{\\includegraphics[width=0.23\\textwidth]{object_motion_speed_distribution_total.pdf} \\label{fig4_c}} \\hfil\n \\subfloat[Illumination]{\\includegraphics[width=0.23\\textwidth]{object_illumination_distribution_total.pdf} \\label{fig4_d}} \n \\caption{The statistics of the 3D asteroid tracking dataset.}\n \\label{fig4}\n \\centering\n\\end{figure}\n\n\\begin{figure*}[ht]\n \\centering\n\t\\includegraphics[width=0.8\\textwidth]{3D_track_framework_detailed.pdf}\n \\caption{The detailed framework diagram of Track3D.}\n \\label{fig5}\n \\centering\n\\end{figure*}\n\n\\subsection{Dataset Statistics}\nIn fact, our framework predicts the object size class and normalized size residuals, rather than directly outputting an axis-aligned 3D bounding-box. Therefore, size classes should be predefined, and the number of samples in each category should preferably be balanced. To this end, we make a statistical analysis of the size categories of the axis-aligned 3D bounding-boxes in the 3D asteroid tracking dataset and define 14 default size categories, summarized in Table \\ref{table1}. The final distribution of size classes can be seen in Fig. \\ref{fig4_a}. \n\nMeanwhile, the distributions of object volumes and motion speeds are shown in Fig. \\ref{fig4_b} and \\ref{fig4_c}, respectively. Because of random asteroid rotation and manual shape modification, the volume of the samples is widely distributed (16 $m^3$ to 1600 $m^3$). It is worth noting that the object volume mentioned here is the volume of the axis-aligned 3D bounding-box, rather than the real one. In addition, as mentioned above, all the 3D asteroid models are controlled by scripts to carry out random 6-DoF motion; Fig. \\ref{fig4_c} clearly demonstrates that the random translation of the objects obeys a normal distribution.
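\n\nTo make this size encoding concrete, the following minimal sketch (in Python with NumPy) shows one plausible way to map an axis-aligned box size to a category of Table \\ref{table1} and a normalized residual; the function names, the nearest-template assignment, and the use of the largest edge of the enclosing bounding-box as the normalization scale are illustrative assumptions rather than the exact implementation.\n\\begin{verbatim}\nimport numpy as np\n\n# X, Y, Z ratios of the 14 default size categories (Table 1).\nRATIOS = np.array([[1, 1, 1], [1\/2, 1, 1], [1\/3, 1, 1], [2\/3, 1, 1],\n                   [1, 1\/2, 1], [1, 2\/3, 1], [1, 1, 1\/2], [1, 1, 2\/3],\n                   [1\/2, 1\/2, 1], [2\/3, 2\/3, 1], [1\/2, 1, 1\/2],\n                   [2\/3, 1, 2\/3], [1, 1\/2, 1\/2], [1, 2\/3, 2\/3]])\n\ndef encode_size(size, scale):\n    # size: (sx, sy, sz) of the axis-aligned box; scale: largest edge of\n    # the enclosing bounding-box of the point cloud (normalization scale).\n    templates = RATIOS * scale\n    cls = int(np.argmin(np.linalg.norm(templates - np.asarray(size), axis=1)))\n    residual = (np.asarray(size) - templates[cls]) \/ scale\n    return cls, residual\n\ndef decode_size(cls, residual, scale):\n    # Remap the predicted class and normalized residual to the original scale.\n    return RATIOS[cls] * scale + np.asarray(residual) * scale\n\\end{verbatim}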
\n\nAs \\cite{zhou2DVisionbasedTracking2021} has proven that generic trackers are vulnerable to illumination variation, we also take this factor into account in our dataset. Furthermore, we limit the illumination of all samples to low values (see Fig. \\ref{fig4_d}), which conforms to the reality of the space environment.\n\n\\subsection{Evaluation Metrics}\nThe main metric used to analyse 2D tracking performance is the average overlap, $AO$, which is also widely applied in visual tracking benchmarks such as OTB\\cite{wuObjectTrackingBenchmark2015}, VOT\\cite{kristanSeventhVisualObject2019}, and GOT-10k \\cite{huangGOT10kLargeHighDiversity2019}. In this paper, we also use this metric to evaluate the 2D monocular tracker and the 3D tracking results in bird's eye view (i.e. $AO^{2d}$ and $AO^{bev}$). Furthermore, we extend the average overlap measurement to evaluate 3D tracking accuracy, denoted $AO^{3d}$:\n\\begin{align}\n AO^{3d} &= \\frac{1}{M} \\sum_{s=1}^{M} \\left( \\frac{1}{N_{s}} \\sum_{t=1}^{N_{s}} \\Omega_{t}^{3d} \\right);\\\\ \n \\nonumber \\Omega_{t}^{3d} &= \\frac{\\tilde{A}_{t}^{3d} \\cap A_{t}^{3d}}{\\tilde{A}_{t}^{3d} \\cup A_{t}^{3d}};\n\\end{align}\nin which $M$ is the number of sequences in the 3D asteroid tracking test set, $N_{s}$ denotes the length of the \\emph{s}-th sequence, and $\\Omega_{t}^{3d}$ denotes the region overlap between the axis-aligned 3D annotation $\\tilde{A}_{t}^{3d}$ and the 3D tracking result $A_{t}^{3d}$ at the \\emph{t}-th frame. It is worth noting that a tracking restart mechanism is introduced into the 3D tracking evaluation; that is, the 3D tracker is allowed to reinitialize once the Intersection-over-Union (IoU) of the 3D tracking result $\\Omega^{3d}_{t}$ is zero.\n\nBesides the average overlap metric, we also adopt the success rate, $SR$, as an indicator, which denotes the percentage of successfully tracked frames where the overlap exceeds a threshold. In this work, we take 0.5 as the threshold to measure the success rate of a tracker. However, since $SR$ at a single threshold is not fully representative, we further introduce the success plot presented in \\cite{wuObjectTrackingBenchmark2015} into our evaluation tool.\n\nAnother intuitive indicator of a tracker is location precision. Therefore, we propose the 3D average center error metric ($ACE^{3d}$), which measures the mean distance between the predicted trajectories and the ground-truth trajectories:\n\\begin{align}\n ACE^{3d} &= \\frac{1}{M} \\sum_{s=1}^{M} \\Delta \\left( \\Gamma_s, \\tilde{\\Gamma}_s \\right)\n\\end{align}\nwhere $\\Gamma_s = \\left\\{ p_t | t=1, \\ldots, N_s \\right\\}$ and $\\tilde{\\Gamma}_s = \\left\\{ \\tilde{p}_t | t=1, \\ldots, N_s \\right\\}$ are the predictions and ground-truths of the \\emph{s}-th sequence, which consist of a series of object centers in 3D space. $\\Delta \\left( \\Gamma_s, \\tilde{\\Gamma}_s \\right)$ is formulated as:\n\\begin{equation}\n \\Delta\\left(\\Gamma_{s}, \\tilde{\\Gamma}_{s}\\right)=\\frac{1}{N_{s}} \\sum_{t=1}^{N_{s}}\\left\\|p_{t}-\\tilde{p}_{t}\\right\\|_{2}\n\\end{equation}\nin which $\\left\\| \\cdot \\right\\|_{2}$ denotes the Euclidean distance.
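\n\nAs a concrete illustration, the following minimal sketch (in Python with NumPy) shows how the per-frame overlap $\\Omega_{t}^{3d}$ and the per-sequence center error could be computed for axis-aligned boxes parameterized by center and size; the box parameterization and the function names are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\ndef iou_3d(box_a, box_b):\n    # box = (cx, cy, cz, sx, sy, sz): center and edge lengths of an\n    # axis-aligned 3D bounding-box (NumPy arrays).\n    a_min, a_max = box_a[:3] - box_a[3:] \/ 2, box_a[:3] + box_a[3:] \/ 2\n    b_min, b_max = box_b[:3] - box_b[3:] \/ 2, box_b[:3] + box_b[3:] \/ 2\n    edges = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)\n    inter = np.prod(edges)\n    union = np.prod(box_a[3:]) + np.prod(box_b[3:]) - inter\n    return inter \/ union\n\ndef sequence_metrics(pred_boxes, gt_boxes):\n    # Average overlap and average center error over one sequence.\n    overlaps = [iou_3d(p, g) for p, g in zip(pred_boxes, gt_boxes)]\n    errors = [np.linalg.norm(p[:3] - g[:3]) for p, g in zip(pred_boxes, gt_boxes)]\n    return np.mean(overlaps), np.mean(errors)\n\\end{verbatim}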
\n\n\n\\begin{figure*}[t]\n \\centering\n\t\\subfloat[Center regression network]{\\includegraphics[width=0.65 \\textwidth]{Center_Regression_Network.pdf} \\label{fig6_a}} \\hfil\n \\subfloat[Amodal box estimation network]{\\includegraphics[width=0.7 \\textwidth]{Amodal_Box_Estimation.pdf} \\label{fig6_b}}\n \\caption{Two main module of A3BoxNet. (a) predicts coarse object center, (b) is to estimate object center residuals, object size category and normalized size residuals.}\n \\label{fig6}\n \\centering\n\\end{figure*}\n\n\\section{3D Tracking Framework \\label{section4}}\nInspired by the 2D-drieven 3D perception, we propose a deep-learning based 3D tracking framework shown in Fig. \\ref{fig5}, which involves 2D monocular tracker, SiamFC \\cite{bertinettoFullyconvolutionalSiameseNetworks2016} and a novel light-weight amodal axis-aligned bounding-box network, A3BoxNet. Although binocular images and depth maps are also provided in our 3D asteroid tracking dataset, it is worthwhile noting that Track3D only utilizes monocular video sequence and corresponding point clouds, which reduces the complexity of 3D tracking framework and more conforms to real applications in aerospace. In addition, we will introduce a simple but effective 2D-3D tracking fusion strategy in the following subsections. \n\n\\subsection{SiamFC}\nThe SiamFC algorithm utilizes full convolutional layers as an embedding layer, denoted as embedding function $\\varphi$. The exemplar image $z$ and multi-scale search images $X=\\left\\{x_i | i = 1, 2, ..., S \\right\\}$ are mapped to a high-dimensional feature space with $\\varphi$, which can be trained by a general large-scale dataset. And then similarity score maps are generated by cross-correlation between exemplar kernal and the tensors of multi-scale search images, where the 2D tracking result can be reached after post-processing. \n\nFor this algorithm, the key is to learn discriminative embedding function $\\varphi$. In this work, we train the backbone of SiamFC from scratch on ILSVRC-VID \\cite{russakovskyImageNetLargeScale2015} with 50 epochs and further fine-tune it on our 3D asteroid tracking training set. Benefitting from the strong generalization ability of Siamese network, SiamFC can achieve a decent 2D tracking performance. Although there are multifarious deep-learning based 2D monocular trackers, like SiamRPN++ \\cite{liSiamRPNEvolutionSiamese2018}, Ocean \\cite{zhangOceanObjectawareAnchorfree2020}, STARK \\cite{yanLearningSpatioTemporalTransformer2021}, TransT \\cite{chenTransformerTracking2021}, which greatly outperform SiamFC in tracking accuracy, we think weak dependence on high-performance 2D monocular tracker can effectively guarantee the generalization ability of our framework and improve its running speed. \n\nOnce tracking result $A_{t} $ is acquired from 2D monocular tracker, it can be used for frustum proposal extraction from raw points set $P_{\\text{raw}} = \\left\\{ (x_i, y_i, z_i) \\in \\mathbb{R} ^3 | i=1, 2, ..., r \\right\\}$, as shown in Fig. \\ref{fig5}. We first crop out $m$ points from raw point cloud $P_{\\text{crop}} = \\left\\{ (x_i, y_i, z_i) | 1 \\leqslant z_i \\leqslant 45, i=1, 2, ..., r \\right\\}$, and then compute the projected point cloud in image plane $P_{\\text{img}}=\\left\\{ (u_i, v_i) | i=1, 2, ..., m \\right\\}$ with camera matrix $\\mathbf{M} \\in \\mathbb{R}^{3 \\times 4}$. 
\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4 \\textwidth]{center_prediction.pdf}\n \\caption{Two-stage object center prediction in bird's eye view. We first estimate the stage-1 center of the input points with the center regression network. Then the center residuals, also referred to as the stage-2 center, are predicted by the amodal box estimation network.}\n \\label{fig7}\n \\centering\n\\end{figure}\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=0.4 \\textwidth]{2D-3D_fusion_strategy.pdf}\n \\caption{The schematic diagram of the 2D-3D tracking fusion strategy. $A^{proj}_t$ is the projection of the center cross-section plane of $A^{3d}_t$ onto the image plane, and $A_t$ is the result of the 2D monocular tracker at the \\emph{t}-th timestep. We compute the fused tracking result $\\hat{A}_t$ as the weighted average of $A^{proj}_t$ and $A_t$.}\n \\label{fig8}\n \\end{center}\n\\end{figure}\n\n\n\\begin{table*}[t]\n \\centering\n \\caption{The evaluation results on the 3D asteroid tracking dataset. The first two rows are the performances of the two main modules of Track3D, and the third row is a simple 3D tracking baseline for comparison. All the 2D tracking metrics of the 3D trackers are computed using the 2D-3D fusion strategy, which greatly improves 2D tracking performance. The top-2 evaluation results for each performance indicator are highlighted in \\textcolor{green}{green} and \\textcolor{blue}{blue}.}\n \\label{table2}\n \\begin{tabular}{ccccccc}\n \\toprule\n \\multirow{2}{*}{name} & \\multirow{2}{*}{$AO^{2d}$} & $ACE^{2d}$ & \\multirow{2}{*}{$AO^{bev}$} & \\multirow{2}{*}{$AO^{3d}$} & $ACE^{3d}$ & \\multirow{2}{*}{FPS}\\\\\n & & (pixel) & & & (meter) & \\\\\n \\midrule\n SiamFC & 0.513 & 29.986 & - & - & - & 101.3\\\\\n A3BoxNet & - & - & \\textcolor{green}{0.798} & \\textcolor{green}{0.721} & \\textcolor{green}{0.345} & \\textcolor{green}{281.5} \\\\ \\midrule\n 3D Tracking Baseline & \\textcolor{green}{0.568} & \\textcolor{green}{23.767} & 0.309 & 0.165 & 0.876 & \\textcolor{blue}{103.1} \\\\\n Track3D & \\textcolor{blue}{0.541} & \\textcolor{blue}{26.741} & \\textcolor{blue}{0.756} & \\textcolor{blue}{0.669} & \\textcolor{blue}{0.570} & 77.0 \\\\\n \\bottomrule \n \\end{tabular}\n \\centering\n\\end{table*}\n\n\\subsection{A3BoxNet}\nIn this work, we assume that there are no other objects within the range of LiDAR perception, that is, all of the points in the frustum proposal belong to the tracked target, which conforms to practical scenarios in space. Therefore, we propose a novel light-weight amodal axis-aligned bounding-box network, A3BoxNet, which makes predictions directly on the frustum proposal without any point segmentation. Fig. \\ref{fig2} clearly shows that A3BoxNet mainly consists of two modules: the center regression network and the amodal box estimation network. The former is responsible for estimating the stage-1 object center; the latter predicts the object center residuals, the object size category, and the normalized size residuals. It is worth noting that both networks only support a fixed number of input points. To this end, we add random sampling at the beginning of A3BoxNet.
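\n\nA minimal sketch of this resampling step is given below (in Python with NumPy), assuming sampling with replacement when the frustum proposal contains fewer points than required; the function name is illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_fixed_size(points, num_points=1024):\n    # Resample the frustum proposal to a fixed number of points, as\n    # required by the two sub-networks of A3BoxNet.\n    n = points.shape[0]\n    replace = n < num_points   # duplicate points if the proposal is small\n    idx = np.random.choice(n, num_points, replace=replace)\n    return points[idx]\n\\end{verbatim}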
\n\nThe architectures of the center regression network and the amodal box estimation network are illustrated in Fig. \\ref{fig6_a} and \\ref{fig6_b}, respectively. It can be clearly seen that there is no significant difference between the two networks, which are both derived from PointNet \\cite{qiFrustumPointNets3D2018}. In A3BoxNet, we use several MLPs for high-level feature extraction, and a max-pooling layer is introduced to aggregate global features; this symmetric operation is critical for unordered point sets. The global features, concatenated with the one-hot vector of the coarse-grained asteroid category, are further used to predict the stage-1 center, the stage-2 center, and the object size. \n\nWe visualize the center prediction process of A3BoxNet in Fig. \\ref{fig7}. The frustum proposal is first normalized by subtracting the centroid of the point set, which improves the translation invariance of A3BoxNet and speeds up convergence during training. In addition, the figure shows that after the two-stage prediction, the estimated object center is very close to the ground truth. The predicted center is formulated as follows:\n\\begin{equation}\n C_{pred} = \\bar{C} + \\Delta C_1 + \\Delta C_2\n\\end{equation}\nwhere $\\bar{C}$ is the centroid of the point cloud in the frustum proposal, $\\Delta C_1$ is the output of the center regression network, and $\\Delta C_2$ is the center residual predicted by the amodal box estimation network. \n\nIn addition to estimating the center residuals, our amodal box estimation network also classifies the object size into 14 predefined categories (see Table \\ref{table1}) and predicts normalized size residuals ($N_{size}$ scores for size classification, $N_{size} \\times 3$ values for size residual regression). Finally, we remap the predicted size category and normalized size residual to the original scale by multiplying by the largest edge length of the enclosing bounding-box of the point cloud. \n\nTo train A3BoxNet, we use a joint loss function $\\mathcal{L}_{joint}$ to simultaneously optimize the two submodules:\n\\begin{equation}\n \\mathcal{L}_{joint} = \\mathcal{L}_{center-net} + \\mathcal{L}_{box-net}\n\\end{equation}\nin which\n\\begin{equation}\n \\mathcal{L}_{box-net} = \\mathcal{L}_{center\\_res} + \\mathcal{L}_{size\\_cls} + \\mathcal{L}_{size\\_res}\n\\end{equation}\nMore details about the optimization functions of A3BoxNet are given in Appendix \\ref{appendix2}. In addition, A3BoxNet is trained on the 3D asteroid tracking dataset for 25 epochs with a batch size of 32. All inputs are fixed-size point sets randomly sampled from the object point cloud.\n\n\\subsection{2D-3D Tracking Fusion Strategy}\nWe believe that fusing the 3D tracking and 2D monocular tracking results can significantly improve 2D tracking performance. To this end, we propose a simple and effective 2D-3D tracking fusion strategy (as shown in Fig. \\ref{fig8}). First, we calculate the projection of the center cross-section plane of the 3D bounding-box onto the image plane, $A^{proj}_t$. Then, the 2D monocular tracking result $A_t$ is combined with $A^{proj}_t$ by a weighted average:\n\\begin{equation}\n \\hat{A}_t = \\lambda_1 \\cdot A^{proj}_t + \\lambda_2 \\cdot A_t\n\\end{equation}\nIn this work, we set $\\lambda_1 = 0.3$ and $\\lambda_2 = 0.7$.
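\n\nA minimal sketch of this fusion strategy is given below (in Python with NumPy). It assumes that the center cross-section is the plane through the box center parallel to the image plane, that 2D boxes are stored as $(u_1, v_1, u_2, v_2)$ corner coordinates, and that the $3\\times3$ intrinsic matrix of Eq. \\ref{eq_1} is used for the projection; these choices are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\ndef project_center_section(box_3d, K):\n    # box_3d = (cx, cy, cz, sx, sy, sz); project the four corners of the\n    # cross-section plane at the box-center depth with the intrinsics K\n    # and take their enclosing 2D rectangle as A^proj_t.\n    cx, cy, cz, sx, sy, _ = box_3d\n    corners = np.array([[cx - sx \/ 2, cy - sy \/ 2, cz],\n                        [cx + sx \/ 2, cy - sy \/ 2, cz],\n                        [cx - sx \/ 2, cy + sy \/ 2, cz],\n                        [cx + sx \/ 2, cy + sy \/ 2, cz]])\n    uvw = corners @ K.T\n    u, v = uvw[:, 0] \/ uvw[:, 2], uvw[:, 1] \/ uvw[:, 2]\n    return np.array([u.min(), v.min(), u.max(), v.max()])\n\ndef fuse_boxes(box_proj, box_2d, lam1=0.3, lam2=0.7):\n    # Weighted average of A^proj_t and the monocular result A_t.\n    return lam1 * np.asarray(box_proj) + lam2 * np.asarray(box_2d)\n\\end{verbatim}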
\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[3D tracking baseline]{\\includegraphics[width=0.23 \\textwidth, height=0.14 \\textheight]{baseline_success_plot.pdf}\\label{fig11_a}} \n \\hfil\n \\subfloat[Track3D]{\\includegraphics[width=0.23 \\textwidth, height=0.14 \\textheight]{pipeline_success_plot.pdf}\\label{fig11_b}}\n \\hfil\n \\subfloat[2D precision comparison]{\\includegraphics[width=0.23 \\textwidth, height=0.14 \\textheight]{comparing_2d_precision_plot.pdf}\\label{fig11_c}}\n \\hfil\n \\subfloat[3D precision comparison]{\\includegraphics[width=0.23 \\textwidth, height=0.14 \\textheight]{comparing_3d_precision_plot.pdf}\\label{fig11_d}}\n \\caption{The performance comparison between 3D tracking baseline and Track3D. (a) and (b) are respectively success plots of 3D tracking baseline and Track3D with $AO^{2d}$, $AO^{bev}$, and $AO^{3d}$ metrics.}\n \\label{fig10}\n \\centering\n\\end{figure*}\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{tabular}{cccc}\n sequence 0001 & sequence 0022 & sequence 0031 & sequence 0052\n \\\\\n \\includegraphics[width=0.22 \\textwidth]{0001_000050.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0022_000050.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0031_000050.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0052_000050.pdf} \n \\\\\n \\includegraphics[width=0.22 \\textwidth]{0001_000100.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0022_000100.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0031_000100.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0052_000100.pdf} \n \\\\\n \\includegraphics[width=0.22 \\textwidth]{0001_000150.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0022_000150.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0031_000150.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0052_000150.pdf} \n \\\\\n \\includegraphics[width=0.22 \\textwidth]{0001_000200.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0022_000200.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0031_000200.pdf} & \n \\includegraphics[width=0.22 \\textwidth]{0052_000200.pdf} \n \\end{tabular}\n \\caption{The tracking results of Track3D on 3D asteroid tracking test set.}\n \\label{fig11}\n \\centering\n\\end{figure*}\n\n\\begin{figure*}[ht]\n \\centering\n \\subfloat[sequence 0001]{\\includegraphics[width=0.23 \\textwidth]{0001_001_trajectory.pdf}\\label{fig12_a}}\n \\hfil\n \\subfloat[sequence 0022]{\\includegraphics[width=0.23 \\textwidth]{0022_001_trajectory.pdf}\\label{fig12_b}} \n \\hfil\n \\subfloat[sequence 0031]{\\includegraphics[width=0.23 \\textwidth]{0031_001_trajectory.pdf}\\label{fig12_c}} \n \\hfil\n \\subfloat[sequence 0052]{\\includegraphics[width=0.23 \\textwidth]{0052_001_trajectory.pdf}\\label{fig12_d}}\n \n \\caption{3D tracking trajectories on 4 sequences in 3D asteroid tracking test set.}\n \\label{fig12}\n \\centering\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[2D success plots]{\\includegraphics[width=0.4\\textwidth]{monocular_trackers_success_plot.pdf} \\label{fig14_a}} \\hfil\n \\subfloat[2D precision plots]{\\includegraphics[width=0.4\\textwidth]{monocular_trackers_2d_precision_plot.pdf} \\label{fig14_b}}\n \\caption{The 2D tracking performance of 12 classic monocular trackers.} \n \\label{fig14}\n \\centering\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[3D success plots]{\\includegraphics[width=0.4\\textwidth]{Track3Ds_success_plot.pdf} \\label{fig15_a}} \\hfil\n \\subfloat[3D precision 
plots]{\\includegraphics[width=0.4\\textwidth]{Track3Ds_3d_precision_plot.pdf} \\label{fig15_b}} \\vfil\n \\caption{The 3D tracking performance of Track3Ds under 12 different monocular trackers.} \n \\label{fig15}\n \\centering\n\\end{figure*}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Track3D_with_different_2d_trackers.pdf}\n \\caption{the connection between 3D tracking accuracy and 2D tracking performance, which demonstrates great generalization ability of our framework.}\n \\label{fig16}\n \\centering\n\\end{figure}\n\n\n\\section{Experiments \\label{section5}}\nIn this section, we have introduced a simple 3D tracking baseline algorithm as comparison and implemented extensive experiments with Track3D on 3D asteroid tracking dataset to demonstrate the advancement and effectiveness of our framework. We also find out that our framework with 2D-3D fusion strategy can not only handle 3D tracking challenge, but also make improvement on 2D tracking performance. In addition, valuable ablation study is also considered. All the experiments are carried out with Intel i9-9900k@3.60GHz CPU and Nvidia RTX 2080Ti GPU.\n\n\n\\subsection{Framework Performance}\nConsidering that there are a few researh results in the scope of 3-DoF tracking at present, therefore, we replace the A3BoxNet of our framework with minimum enclosing 3D bounding-box algorithm (i.e. computing the length of aixs-aligned 3D bounding-box by $\\max \\left\\{ P_{\\text{frustum}} \\right\\} - \\min \\left\\{ P_{\\text{frustum}} \\right\\}$) to realise a simple 3D tracking baseline as comparison. Both 2D and 3D tracking evaluation results of 3D tracking baseline are summarized at 3rd row in Table \\ref{table2}. And its success plots are shown in Fig. \\ref{fig11_a}. We find out that the 3D tracking baseline with 2D-3D fusion strategy improves 2D monocular tracking performance about 10.5\\%, however, it achieves quite poor performance in 3D space. \n\nThe overall performance of deep-learning based 3D tracking framework, also named as Track3D, is illustrated in Fig. \\ref{fig11_b}. It reaches excellent evaluation results on 3D asteroid tracking test set (0.669 $AO^{3d}$ and 0.570 $ACE^{3d}$) with high real-time performance (77.0 FPS). We visualize part of tracking results of Track3D in Fig. \\ref{fig11}, which demonstrates the effectiveness of our 3D tracking method. It clearly shows that our framework can estimate accurate 3D bounding-box even under extreme truncation (see 150-th frame of sequence 0001 in Fig. \\ref{fig11}). The 3D trajectory plots in Fig. \\ref{fig12} also intuitively show our framework predicts precise 3D object location, which greatly outperforms 3D tracking baseline. In addition, it can be seen in the first colum of Table \\ref{table2} that Track3D with 2D-3D fusion strategy can also make significant improvement on 2D tracking performance. \n\n\\subsection{Module Performance}\nExtensive experiments are implemented in this subsection that explore how two main modules of Track3D (i.e. 2D monocular tracker and A3BoxNet) make influences on final 3D tracking performance, which points the way to design effective 3D tracking framework for future work. 
\n\n\\subsection{Module Performance}\nExtensive experiments are conducted in this subsection to explore how the two main modules of Track3D (i.e. the 2D monocular tracker and A3BoxNet) influence the final 3D tracking performance, which points the way toward designing effective 3D tracking frameworks in future work. \n\nFirst, plenty of classic monocular trackers have been evaluated on the left video sequences of the 3D asteroid tracking dataset, such as SiamFC \\cite{bertinettoFullyconvolutionalSiameseNetworks2016}, SiamRPN \\cite{liHighPerformanceVisual2018}, ECO \\cite{danelljanECOEfficientConvolution2017}, Staple \\cite{bertinettoStapleComplementaryLearners2016}, KCF \\cite{henriquesHighSpeedTrackingKernelized2015}, DAT \\cite{posseggerDefenseColorbasedModelfree2015}, BACF \\cite{kianigaloogahiLearningBackgroundAwareCorrelation2017}, STRCF \\cite{liLearningSpatialTemporalRegularized2018}, MKCFup \\cite{tangHighSpeedTrackingMultiKernel2018}, CSRDCF \\cite{lukezicDiscriminativeCorrelationFilter2017}, CSK \\cite{henriquesExploitingCirculantStructure2012}, and MOSSE \\cite{bolmeVisualObjectTracking2010}. The evaluation results are illustrated in Fig. \\ref{fig14}, which clearly shows that the accuracies of the 12 monocular trackers are distributed over a large range, from 0.375 to 0.746. SiamFC, adopted in our framework, only achieves intermediate 2D tracking performance in both accuracy and precision.\n\nWe then evaluate the whole framework under each of these monocular trackers and plot the 3D success and precision curves in Fig. \\ref{fig15_a} and \\ref{fig15_b}. We find that the 3D evaluation curves are much denser than the 2D curves in Fig. \\ref{fig14}, which indicates that the 2D monocular tracker has only a slight influence on the 3D tracking performance of our framework. In other words, Track3D can still work even when based on a poor 2D monocular tracker. We further plot the relationship between 2D tracking performance and framework accuracy in Fig. \\ref{fig16}, which intuitively demonstrates the generalization ability of Track3D.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4 \\textwidth]{a3boxnet_success_plot.pdf} \\vfil\n \\includegraphics[width=0.4 \\textwidth]{a3boxnet_precision_plot.pdf}\n \\caption{The performance of A3BoxNet with 1024 input points.}\n \\label{fig9}\n \\centering\n\\end{figure}\n\nMeanwhile, we evaluate the amodal bounding-box estimation network, A3BoxNet, on the 3D asteroid tracking dataset by randomly sampling 1024 points from the frustum proposal. Its performance is illustrated in Fig. \\ref{fig9}. It can be clearly seen that A3BoxNet predicts highly accurate axis-aligned bounding-boxes purely from partial object points. Furthermore, the network is able to run at 281.5 FPS, which fully satisfies the requirements of applications on edge computing devices. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{performance_on_point_nums.pdf}\n \\caption{The performance of A3BoxNet under different numbers of input points. There is a trade-off between the accuracy and the precision of A3BoxNet.}\n \\label{fig13}\n \\centering\n\\end{figure}\n\n\\begin{table}[t]\n \\centering\n \\caption{The ablation study on the one-hot category vector.}\n \\label{table3}\n \\begin{tabular}{cccc}\n \\toprule\n name & $AO^{3d}$ & $AO^{bev}$ & $ACE^{3d}$ \\\\\n \\midrule\n A3BoxNet & \\textcolor{green}{0.721} & \\textcolor{green}{0.798} & 0.345 \\\\\n without category & 0.715 & 0.792 & \\textcolor{green}{0.316} \\\\\n \\bottomrule\n \\end{tabular}\n \\centering\n\\end{table}\n\nWe also study how the number of input points affects the performance of the amodal axis-aligned bounding-box network. We retrain A3BoxNet by randomly sampling different numbers of points (i.e. 512, 1024, 2048, 3072, and 4096 points) from the object point clouds in the 3D asteroid tracking training set. 
All the models are trained from scratch for 25 epochs with a batch size of 32. The evaluation results are plotted in Fig. \\ref{fig13}, which clearly shows a trade-off between the accuracy and the precision of A3BoxNet as the number of input points varies: when the accuracy of the model increases, the corresponding precision decreases. \n\nBesides, the influence of the object category on A3BoxNet is further studied. We remove the one-hot vector of the object category from the center regression network and the amodal box estimation network, and retrain A3BoxNet from scratch with 1024 input points. The performance comparison between the original A3BoxNet and A3BoxNet without category information is summarized in Table \\ref{table3}, which shows that the object category information slightly improves the 3D accuracy of A3BoxNet.\n\n\n\\section{Conclusion \\label{section6}}\nIn this work, we construct the first large-scale 3D asteroid tracking dataset, which involves 148,500 binocular images, depth maps, and point clouds. All the 2D and 3D annotations are automatically generated, which greatly guarantees the quality of the tracking dataset and reduces the cost of data collection. The 3D asteroid tracking dataset will be made publicly available on our website (\\url{http:\/\/aius.hit.edu.cn\/12920\/list.htm}). Meanwhile, we propose a deep-learning based 3D visual tracking framework, Track3D, which mainly consists of a classic 2D monocular tracker and a novel light-weight amodal axis-aligned bounding-box network. The state-of-the-art 3D tracking performance and great generalization ability of our framework have been demonstrated by extensive experiments. We also find that Track3D with the 2D-3D tracking fusion strategy improves 2D tracking performance. In future work, we will further apply our Track3D method to more common scenarios such as autonomous driving and robot picking.\n\n\\appendices\n\\section{The perspective camera matrix \\label{appendix1}}\nIn Section \\ref{section3}, we mentioned that the camera matrix is very important for Track3D to extract the frustum proposal. However, the physics engine V-rep only provides the perspective angle $(\\alpha_x, \\alpha_y)$ and the resolution $(W, H)$. To this end, we derive the camera matrix from the perspective projection principle, which is also illustrated in Fig. \\ref{fig3}. \n\nSuppose that the virtual focal lengths of the perspective camera along the x and y axes are $(f_x, f_y)$, the size of the image plane is $(w, h)$, and an object point $P=(X, Y, Z)$ in the camera coordinate system is projected to $p=(x_i, y_i)$ in the image plane. Fig. \\ref{fig3} clearly shows that:\n\\begin{equation}\n \\frac{X}{Z} = \\frac{x_i}{f_x} \\label{eq_9}\n\\end{equation}\nand, \n\\begin{equation}\n \\frac{w\/2}{f_x} = \\tan\\left( \\frac{\\alpha_x}{2}\\right) \\label{eq_10}\n\\end{equation}\nMeanwhile, the transformation from the image coordinate system to the pixel coordinate system along the x-axis is formulated as:\n\\begin{align}\n u &= (x_i + \\frac{w}{2}) \\cdot \\frac{W}{w} \\label{eq_11} \\\\ \n \\nonumber &=x_i \\cdot \\frac{W}{w} + \\frac{W}{2} \n\\end{align}\nSubstituting Eqs. \\ref{eq_9} and \\ref{eq_10} into Eq. \\ref{eq_11}, we obtain:\n\\begin{equation}\n u = \\frac{W}{ 2 \\tan(\\frac{\\alpha_x}{2})Z} X+ \\frac{W}{2} \\label{eq_12}\n\\end{equation}\nin which the parameter $f_x$ is eliminated. Similarly, the transformation along the y-axis from the camera coordinate system to the pixel coordinate system is obtained:\n\\begin{equation}\n v = \\frac{H}{ 2 \\tan(\\frac{\\alpha_y}{2})Z} Y+ \\frac{H}{2} \\label{eq_13}\n\\end{equation}\n\nWe further rewrite Eqs. 
\\ref{eq_12} and \\ref{eq_13} in homogeneous matrix form:\n\\begin{align}\n \\begin{bmatrix}\n u \\\\\n v \\\\\n 1\n \\end{bmatrix}=\n \\begin{bmatrix}\n \\frac{W}{ 2 \\tan(\\frac{\\alpha_x}{2})Z} & 0 & \\frac{W}{2Z} \\\\\n 0 & \\frac{H}{ 2 \\tan(\\frac{\\alpha_y}{2})Z} & \\frac{H}{2Z}\\\\\n 0 & 0 & 1\/Z\n \\end{bmatrix} \n \\begin{bmatrix}\n X \\\\\n Y \\\\\n Z\n \\end{bmatrix} \\label{eq_14}\n\\end{align}\nTo eliminate the $Z$ variable in the transformation matrix, we multiply both sides of Eq. \\ref{eq_14} by $Z$:\n\\begin{align}\n Z\\begin{bmatrix}\n u \\\\\n v \\\\\n 1\n \\end{bmatrix}=\n \\begin{bmatrix}\n \\frac{W}{2 \\tan(\\alpha_x\/2)} & 0 & \\frac{W}{2} \\\\\n 0 & \\frac{H}{ 2 \\tan(\\alpha_y\/2)} & \\frac{H}{2}\\\\\n 0 & 0 & 1\n \\end{bmatrix} \n \\begin{bmatrix}\n X \\\\\n Y \\\\\n Z\n \\end{bmatrix} \n\\end{align}\n\nBecause we set left camera coordinate system as reference frame, the camera matrix of left perspective camera can be formulated as:\n\\begin{align}\n M^L = \\begin{bmatrix}\n \\frac{W}{2 \\tan(\\alpha_x\/2)} & 0 & \\frac{W}{2} & 0\\\\\n 0 & \\frac{H}{ 2 \\tan(\\alpha_y\/2)} & \\frac{H}{2} & 0\\\\\n 0 & 0 & 1 & 0\n \\end{bmatrix} \n\\end{align}\n\n\n\\section{Traning objectives \\label{appendix2}}\nIn this work, we utilize a joint loss function $\\mathcal{L}_{joint}$ to optimize A3BoxNet:\n\\begin{equation}\n \\mathcal{L}_{joint} = \\mathcal{L}_{center-net} + \\mathcal{L}_{box-net}\n\\end{equation}\nwhere, $\\mathcal{L}_{center-net}$ adopt huber loss function:\n\\begin{equation}\n \\mathcal{L}_{center-net}=\n \\left\\{\n \\begin{array}{ll}\n 0.5\\alpha^{2}, & \\alpha<1 \\\\\n \\alpha - 0.5, & \\text {otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nin which $\\alpha = \\left\\| \\hat C - (\\bar C + \\Delta C_1)\\right\\|_2$, $\\hat C$ is 3D center label, $\\bar c$ is the centroid of points in frustum proposal, and $\\Delta C_1$ is the prediction of center regression network. \n\nIn addition, \n\\begin{equation}\n \\mathcal{L}_{box-net} = \\mathcal{L}_{center\\_res} + \\mathcal{L}_{size\\_cls} + \\mathcal{L}_{size\\_res}\n\\end{equation}\nwhere $\\mathcal{L}_{size\\_cls}$ utilizes softmax cross entropy loss function:\n\\begin{equation}\n \\mathcal{L}_{size\\_cls} = -\\sum_{i=1}^{N_{size}} \\hat{y}_i \\cdot \\log \\left( \\frac{e^{y_i}}{\\sum_{j=1}^{N_{size}} e^{y_j}}\\right)\n\\end{equation}\nin which $\\hat{y}$ is $N_{size}$ dimensional one-hot vector of size category label, $y$ is the partial outputs of amodal box estimation network, of which dimension is also $N_{size}$. \n\nFurthermore, $\\mathcal{L}_{center\\_res}$ and $\\mathcal{L}_{size\\_res}$ both use huber loss function. $\\mathcal{L}_{center\\_res}$ is formulated as:\n\\begin{equation}\n \\mathcal{L}_{center-net}=\n \\left\\{\n \\begin{array}{ll}\n 0.5\\beta^{2}, & \\beta<2 \\\\\n 2(\\beta - 1), & \\text {otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nin which, $\\beta = \\left\\| \\hat C - (\\bar C + \\Delta C_1 + \\Delta C_2)\\right\\|_2$, $\\Delta C_2$ is 3D center residuals predicted by amodal box estimation network. And $\\mathcal{L}_{size\\_res}$ is as follows:\n\\begin{equation}\n \\mathcal{L}_{center-net}=\n \\left\\{\n \\begin{array}{ll}\n 0.5 \\gamma^{2}, & \\gamma<1 \\\\\n \\gamma - 0.5, & \\text {otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nwhere $\\gamma = \\left\\| \\hat R - r * \\max \\left\\{\\hat S\\right\\}\\right\\|_2$, $\\hat S$ and $\\hat R$ are the size and size residual label, respectively. 
\n\n\\section*{Acknowledgment}\nThis work was kindly supported by the National Key R\\&D Program of China through grant 2019YFB1312001.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{SecIntro}\n\nVarious approaches have been proposed for simulating the propagation of elastic waves with free boundaries. The first approach is based on variational methods, as done in finite elements \\citep{DAY77}, spectral finite elements \\citep{KOMATITSCH98} and discontinuous Galerkin \\citep{BENJEMAA07}. These methods provide a fine geometrical description of boundaries by adapting the mesh to the boundaries. Boundary conditions are accounted for weakly by the underlying variational formulation. However, a grid-generating tool is required, and small time steps may result from the smallest geometrical elements and from the stability condition. The SAT methods based on energy estimates \\citep{CARPENTER94} avoid these limitations by introducing Cartesian grids and give time-stable high-order schemes with interfaces. However, to the best of our knowledge, these methods have not been applied so far to elastodynamics with free boundaries. \n\nThe second approach used in this context is based on the strong form of elastodynamics, as done in finite differences and spectral methods \\citep{TESSMER94}. In seismology, finite differences are usually implemented on staggered Cartesian grids, either with completely staggered stencils (CSS) or with the recently developed partially staggered stencils (PSS). With CSS, the velocity and stress components are distributed between different node positions \\citep{VIRIEUX86}. With PSS, all the velocity components are computed at a single node, as are the stress components, although the latter are shifted by half a node in two separate grids. Second-order \\citep{SAENGER00, SAENGER04} and fourth-order \\citep{BOHLEN03,CRUZ04} spatially-accurate PSS have been developed; for further discussion, we denote them PSS-2 and PSS-4, respectively. Unlike variational methods, finite differences require special care to incorporate the free boundary conditions strongly. There exist two main strategies for this purpose:\n\\begin{enumerate}\n\\item First, the boundaries can be taken into account implicitly by adjusting the physical parameters locally \\citep{KELLY76,VIRIEUX86,MUIR92}. The best-known implicit approach is the so-called \\textit{vacuum method} \\citep{ZAHRADNIK95,GRAVES96,MOCZO02,GELIS05}. For instance, the vacuum method applied to PSS involves setting the elastic parameters in the vacuum to zero, and using a small density value in the first velocity node in the vacuum to avoid a division by zero. However, this easy-to-implement method gives at best second-order spatial accuracy. In addition, a systematic numerical study has shown that the accuracy of the solution decreases dramatically when the angle between the boundary and the meshing increases \\citep{BOHLEN06}. Lastly, applying the vacuum method sometimes gives rise to instabilities: see for instance PSS-4 \\citep{BOHLEN03}. \n\\item A second idea is to explicitly change the scheme near the boundaries \\citep{KELLY76}. 
The best-known explicit approach is the so-called \\textit{image method}, which was developed for dealing with flat boundaries to fourth-order accuracy \\citep{LEVANDER88} and then extended to variable topographies \\citep{JIH88,ROBERTSSON96,ZHANGW06}. However, image methods require a fine grid to reduce the spurious diffractions to an acceptable level. To avoid this spatial oversampling, various techniques have been proposed, such as grid refinement in the vicinity of the boundary \\citep{RODRIGUES93} or adjusted finite-difference approximations: see \\citep{MOCZO07} for a review on these subjects.\n\\end{enumerate}\n\nThe aim of this paper is to present a finite-difference approach accounting for free boundaries without introducing the aforementioned drawbacks of the vacuum and image methods. The requirements are as follows: smooth arbitrary-shaped boundaries must be treated as simply as straight boundaries; the accuracy of the method must not depend on the position of the boundary relative to the meshing; and lastly, the computations must be stable even with very long integration times. We establish that these requirements can be met by applying an explicit approach involving fictitious values of the solution in the vacuum. In previous studies, interface problems in the context of elastodynamics were investigated in a similar way \\citep{PIRAUX01,LOMBARD04,LOMBARD06}. The fictitious values are high-order Taylor expansions of the boundary values of the solution. Estimating these boundary values involves some mathematical background: computing the high-order boundary conditions; determining a minimal set of independent boundary values; and lastly, performing a least-squares numerical estimation of this minimal set. To help the reader, subroutines in FORTRAN are freely provided at the web page {\\tt http:\/\/w3lma.cnrs-mrs.fr\/\\textasciitilde MI\/Software\/}. These subroutines enable a straightforward implementation of the algorithms detailed in the present paper.\n\nThe disadvantage here is that the above requirements cannot be fully satisfied if staggered-grid schemes are used. Single-grid finite-difference schemes are therefore chosen, where all the unknowns are computed at the same grid nodes. Our numerical experiments are based on the high-order ADER schemes which are widely used in aeroacoustics \\citep{MUNZ05}. Although these schemes are not yet widely used in the field of seismology \\citep{DUMBSER06}, they also have great qualities because of their accuracy and their stability properties: using 10 grid nodes per minimal S-wavelength with a propagation distance of 50 wavelengths gives highly accurate results. Moreover, on Cartesian grids, these methods do not require much more computational memory than staggered-grid schemes.\n\nThis paper is organized as follows. Section 2 deals with the continuous problem: the high-order boundary conditions and compatibility conditions are stated. These conditions are useful for handling the discrete problem presented in section 3, where the focus is on obtaining fictitious values of the solution in the vacuum. In section 4, numerical experiments confirm the efficiency of this method in the case of various topographies. In section 5, conclusions are drawn and some prospects are suggested.\n\n\\section{The continuous problem}\\label{SecContinuous}\n\n\\subsection{Framework}\\label{SubSecFrame}\n\nLet us consider a solid $\\Omega$ separated from the vacuum by a boundary $\\Gamma$ (Figure \\ref{FigPatate}). 
The configuration is in-plane and two-dimensional, with a horizontal axis $x$ and a vertical axis $z$ pointing respectively rightward and downward. The boundary $\\Gamma$ is described by a parametric expression $(x(\\tau),\\,z(\\tau))$ where the parameter $\\tau$ describes the sampling of the boundary. The tangential vector and the normal vector are $\\mitbf{t}=\\,^T(x^{'}(\\tau),\\,z^{'}(\\tau))$ and $\\mitbf{n}=\\,^T(-z^{'}(\\tau),\\,x^{'}(\\tau))$, respectively, with $x^{'}(\\tau)=\\frac{d\\,x}{d\\,\\tau}(\\tau)$, $z^{'}(\\tau)=\\frac{d\\,z}{d\\,\\tau}(\\tau)$, and $T$ refers to the transposed vector. We assume the spatial derivatives at any point of the boundary to be available, as specified below. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.8]{Patate.eps}\n\\caption{{\\it Boundary $\\Gamma$ between a solid $\\Omega$ and vacuum}.}\n\\label{FigPatate}\n\\end{center}\n\\end{figure} \n\nThe solid $\\Omega$ is assumed to be linearly elastic, isotropic, and to have the following constant physical parameters: the density $\\rho$ and the Lam\\'e coefficients $\\lambda$, $\\mu$. The P- and S-wave velocities are $c_p=\\sqrt{(\\lambda+2\\,\\mu)\/\\rho}$ and $c_s=\\sqrt{\\mu\/\\rho}$. A velocity-stress formulation is adopted, hence the unknowns are the horizontal and vertical components of the elastic velocity $\\mitbf{v}=\\,^T(v_x,\\,v_z)$, and the independent components of the elastic stress tensor $\\mitbf{\\sigma}=\\,^T(\\sigma_{xx},\\,\\sigma_{xz},\\,\\sigma_{zz})$. Setting\n$$\n\\begin{array}{l}\n\\displaystyle\n\\mitbf{A}=\n\\left(\n\\begin{array}{ccccc}\n0 & 0 & \\hspace{0.1cm}1\/\\rho & 0 & 0\\\\\n0 & 0 & 0 & 1\/\\rho & 0\\\\\n\\lambda+2\\,\\mu & 0 & 0 & 0 & 0\\\\\n0 & \\mu & 0 & 0 & 0 \\\\\n\\lambda & 0 & 0 & 0 & 0\n\\end{array}\n\\right),\\\\\n\\\\\n\\displaystyle\n\\mitbf{B}=\n\\left(\n\\begin{array}{ccccc}\n0 & 0 & 0 & \\hspace{0.1cm}1\/\\rho & 0\\\\\n0 & 0 & 0 & 0 & 1\/\\rho\\\\\n0 & \\lambda & 0 & 0 & 0\\\\\n\\mu & 0 & 0 & 0 & 0\\\\\n0 & \\lambda+2\\mu & 0 & 0 & 0\n\\end{array}\n\\right), \n\\end{array}\n$$ \nthe solution $\\mitbf{U}=\\,^T(v_x,\\,v_z,\\,\\sigma_{xx},\\,\\sigma_{xz},\\,\\sigma_{zz})$ satisfies the first-order hyperbolic system \\citep{VIRIEUX86}\n\\begin{equation}\n\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,t}\\,\\mitbf{U}=\\mitbf{A}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,x}\\,\\mitbf{U}+\\mitbf{B}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,z}\\,\\mitbf{U}.\n\\label{LC}\n\\end{equation}\n\n\\subsection{High-order boundary conditions}\\label{SubSecBC}\n\nAt any point $P(\\tau)$ on the free surface $\\Gamma$ (Figure \\ref{FigPatate}), the stress tensor satisfies the homogeneous Dirichlet conditions $\\mitbf{\\sigma}.\\mitbf{n}=\\mitbf{0}$. These zero-th order boundary conditions are written compactly\n\\begin{equation}\n\\mitbf{L}^0(\\tau)\\,\\mitbf{U}^0(x(\\tau),\\,z(\\tau),\\,t)=\\mitbf{0},\n\\label{L0U0}\n\\end{equation}\nwhere $\\mitbf{U}^0$ is the limit value of $\\mitbf{U}$ at $P$ and $\\mitbf{L}^0$ is the matrix\n$$\n\\mitbf{L}^0(\\tau)=\n\\left(\n\\begin{array}{ccccc}\n0 & 0 & -z^{'}(\\tau) & x^{'}(\\tau) & 0 \\\\\n0 & 0 & 0 & -z^{'}(\\tau) & x^{'}(\\tau) \n\\end{array}\n\\right).\n$$\nFrom now on, the dependence on $\\tau$ is generally omitted. To determine the boundary conditions satisfied by the first-order spatial derivatives of $\\mitbf{U}$, two tasks are performed. First, the zeroth-order boundary conditions (\\ref{L0U0}) are differentiated in terms of $t$. 
The time derivative is replaced by spatial derivatives using the conservation laws (\\ref{LC}), which gives\n\\begin{equation}\n\\mitbf{L}^0\\left(\\mitbf{A}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,x}\\,\\mitbf{U}^0+\\mitbf{B}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,z}\\,\\mitbf{U}^0\\right)=\\mitbf{0}.\n\\label{L0t}\n\\end{equation}\nSecondly, the zeroth-order boundary conditions (\\ref{L0U0}) are differentiated in terms of the parameter $\\tau$ describing $\\Gamma$. The chain-rule gives\n\\begin{equation}\n\\left(\\frac{\\textstyle d}{\\textstyle d\\,\\tau}\\,\\mitbf{L}^0\\right)\\,\\mitbf{U}^0+\\mitbf{L}^0\\left(x^{'}\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,x}\\,\\mitbf{U}^0+z^{'}\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,z}\\,\\mitbf{U}^0\\right)=\\mitbf{0}.\n\\label{L0tau}\n\\end{equation}\nSince the matrix $d\\,\\mitbf{L}^0\/d\\,\\tau$ in (\\ref{L0tau}) involves $x^{''}$ and $z^{''}$, it accounts for the curvature of $\\Gamma$ at $P$. Setting the block matrix\n$$\n\\mitbf{L}^1=\n\\left(\n\\begin{array}{ccc}\n\\mitbf{L}^0 & \\mitbf{0} & \\mitbf{0}\\\\\n[3pt]\n\\mitbf{0} & \\mitbf{L}^0\\,\\mitbf{A} & \\mitbf{L}^0\\,\\mitbf{B} \\\\\n\\displaystyle\n\\frac{\\textstyle d}{\\textstyle d\\,\\tau}\\,\\mitbf{L}^0 & x^{'}\\mitbf{L}^0 & z^{'}\\mitbf{L}^0\n\\end{array}\n\\right),\n$$\nequations (\\ref{L0U0}), (\\ref{L0t}) and (\\ref{L0tau}) give the boundary conditions up to the first-order \n$$\n\\mitbf{L}^1\\,\\mitbf{U}^1=\\mitbf{0},\n$$\nwith \n$$\n\\mitbf{U}^1=\\lim_{M\\in \\Omega \\rightarrow P}\n\\,^T\\left(\n^T\\mitbf{U},\n\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,x}\\,^T\\mitbf{U},\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,z}\\,^T\\mitbf{U}\\right).\n$$\nLet $k\\geq 1$ be an integer whose value will be discussed in section \\ref{SecDiscrete}. To get the boundary conditions up to the $k$-th order, one deduces from (\\ref{L0U0}) \n\\begin{equation}\n\\frac{\\textstyle \\partial^k}{\\textstyle \\partial\\,\\tau^{k-\\alpha}\\,\\partial\\,t^\\alpha}\\,\\mitbf{L}^0\\,\\mitbf{U}^0=\\mitbf{0},\\qquad \\alpha=0,...,k.\n\\label{DTauDt}\n\\end{equation}\nThe $\\tau$-derivatives are replaced by spatial derivatives by applying $(k-\\alpha)$-times the chain rule. The $t$-derivatives are replaced by spatial derivatives by injecting $\\alpha$-times the conservation laws (\\ref{LC}). The boundary conditions so-obtained up to the $k$-th order can be written compactly\n\\begin{equation}\n\\mitbf{L}^k\\,\\mitbf{U}^k=\\mitbf{0},\n\\label{LkUk}\n\\end{equation}\nwith\n\\begin{equation}\n\\displaystyle\n\\mitbf{U}^k=\\lim_{M\\in \\Omega \\rightarrow P}\n\\,^T\\left(\n^T\\mitbf{U},\n...,\\,\n\\frac{\\textstyle \\partial^\\alpha}{\\textstyle \\partial\\, x^{\\alpha-\\beta}\\,\\partial\\,z^\\beta}\\,^T\\mitbf{U},\n...,\\,\n\\frac{\\textstyle \\partial^k}{\\textstyle \\partial\\,z^k}\\,^T\\mitbf{U}\n\\right),\n\\label{Uk}\n\\end{equation}\nwhere $\\alpha=0,...,\\,k$ and $\\beta=0,...,\\,\\alpha$. The vector $\\mitbf{U}^k$ has $n_v=5(k+1)\\,(k+2)\/2$ components. $\\mitbf{L}^k$ is a $n_l \\times n_v$ matrix, with $n_l=(k+1)\\,(k+2)$. This matrix involves the successive derivatives of the curvature of $\\Gamma$ at $P$. Computing $\\mitbf{L}^k$ with $k>2$ is a tedious task, which can be greatly simplified by using computer algebra tools. 
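For instance, with $k=1$ one gets $n_v=15$ and $n_l=6$, while the value $k=3$ used below for the fourth-order ADER scheme leads to $n_v=50$ boundary values constrained by $n_l=20$ boundary conditions.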
\n\n\\subsection{Compatibility conditions}\\label{SubSecCC}\n\nThe second spatial derivatives of stress components are linked together by the compatibility condition of Barr\\'e-de Saint Venant \\citep{LOVE}\n\\begin{equation}\n\\begin{array}{l}\n\\displaystyle\n\\frac{\\textstyle \\partial^2 \\,\\sigma_{xz}}{\\textstyle \\partial \\,x\\,\\partial\\,z}\n=\\alpha_2\\,\\frac{\\textstyle \\partial^2 \\,\\sigma_{xx}}{\\textstyle \\partial \\,x^2}\n+\\alpha_1\\,\\frac{\\textstyle \\partial^2 \\,\\sigma_{zz}}{\\textstyle \\partial \\,x^2}\n+\\alpha_1\\,\\frac{\\textstyle \\partial^2 \\,\\sigma_{xx}}{\\textstyle \\partial \\,z^2}\n+\\alpha_2\\,\\frac{\\textstyle \\partial^2 \\,\\sigma_{zz}}{\\textstyle \\partial \\,z^2},\n\\end{array}\n\\label{Barre}\n\\end{equation}\nwith\n$$\n\\alpha_1=\\frac{\\textstyle \\lambda+2\\,\\mu}{\\textstyle 4\\,(\\lambda+\\mu)},\\qquad \n\\alpha_2=-\\frac{\\textstyle \\lambda}{\\textstyle 4\\,(\\lambda+\\mu)}.\n$$\nThis compatibility condition is a necessary and sufficient condition for the strain tensor to be symmetrical. If $k\\geq 2$, it can be differentiated $(k-2)$-times in terms of $x$ and $z$. With $k\\geq 2$, one obtains $n_c=k\\,(k-1)\/2$ relations; with $k<2$, $n_c=0$. Unlike the boundary conditions, these compatibility conditions are satisfied everywhere in $\\Omega$: in particular, they are satisfied at $P$ on $\\Gamma$. The vector of boundary values $\\mitbf{U}^k$ can therefore be expressed in terms of a shorter vector $\\hat{\\mitbf{U}}^k$ with $n_v-n_c$ independent components\n\\begin{equation}\n\\mitbf{U}^k=\\mitbf{G}^k\\,\\hat{\\mitbf{U}}^k. \n\\label{MatG}\n\\end{equation}\nAn algorithm for building the $n_v\\times (n_v-n_c)$ matrix $\\mitbf{G}^k$ is given in \\citet{LOMBARD06}. \n\n\\section{The discrete problem}\\label{SecDiscrete}\n\n\\subsection{Numerical scheme}\\label{SubSecScheme}\n\nTo integrate the hyperbolic system (\\ref{LC}), we introduce a single Cartesian lattice of grid points: $(x_i,z_j,t_n)=(i\\,h,j\\,h,\\,n\\,\\Delta\\,t)$, where $h$ is the mesh spacing and $\\Delta\\,t$ is the time step. Unlike with staggered grids, all the unknowns are computed at the same grid nodes. The approximation $\\mitbf{U}_{i,j}^n$ of $\\mitbf{U}(x_i,z_j,t_n)$ is computed using any explicit, two-step, and spatially-centred finite-difference scheme. A review of the huge body of literature on finite-differences is given in \\citet{LEV90} and \\citet{MOCZO07}. \n\nHere we propose to use ADER schemes, which make it easy to reach arbitrarily high orders of accuracy in time and space \\citep{MUNZ05}. On Cartesian grids, these finite-volume integration schemes originally developed for aeroacoustic applications are equivalent to finite-difference Lax-Wendroff-type integration schemes \\citep{LORCHER06}. In the numerical experiments described in section \\ref{SecNum}, we use a fourth-order ADER integration scheme. This scheme is stable under the Courant-Friedrichs-Lewy (CFL) condition $c_p\\,\\Delta\\,t\/h \\leq 0.9$ in 2D; as usual with single-grid schemes, it is slightly dissipative \\citep{MUNZ05}. \n\nMany other single-grid schemes can be used in this context. In particular, the method described in the next subsections has been successfully combined with flux-limiter schemes \\citep{LEV90} and with the standard second-order Lax-Wendroff scheme. 
Difficulties have been encountered with dissipative-free schemes based on centred staggered-grid finite-difference schemes, as we will see in section \\ref{SubSecStag}.\n\n\\subsection{Use of fictitious values}\\label{SubSecExtra1}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{SolMod.eps}\n\\caption{\\textit{Determination of the fictitious value $\\mitbf{U}_{IJ}^*$ required for time-marching at neighboring grid nodes. P is the orthogonal projection of $(x_I,\\,z_J)$ on $\\Gamma$. The $n_p$ grid nodes in $\\Omega$ and inside the circle ${\\cal C}$ centred at $P$ with a radius $d$ are denoted by {\\bf +}}.}\n\\label{FigSolMod}\n\\end{center}\n\\end{figure} \n\nTime-marching at grid-points where the stencil crosses $\\Gamma$ requires fictitious values of the solution in the vacuum, which have to be determined. The question arises as to how to compute, for instance, the fictitious value $\\mitbf{U}_{I,J}^*$ at the grid point $(x_I,\\,z_J)$ in the vacuum, as sketched in Figure \\ref{FigSolMod}. Let $P(\\tau)$ be the orthogonal projection of $(x_I,\\,z_J)$ on $\\Gamma$, with coordinates $(x_P=x(\\tau),\\,z_P=z(\\tau))$. At any grid point $(x_i,\\,z_j)$, we denote\n$$\n\\mitbf{\\Pi}_{i,j}^k=\\left(\\mitbf{I}_5,\n...,\n\\frac{\\textstyle (x_i-x_P)^{\\alpha-\\beta}(z_j-z_P)^\\beta}{\\textstyle (\\alpha-\\beta)\\,!\\,\\beta\\,!}\\mitbf{I}_5,\n...,\n\\frac{\\textstyle (z_j-z_P)^k}{\\textstyle k\\,!}\\mitbf{I}_5\\right)\n$$\nthe $5\\times n_v$ matrix containing the coefficients of $k$-th order Taylor expansions in space at $P$, where $\\mitbf{I}_5$ is the $5\\times5$ identity matrix, $\\alpha=0,...,\\,k$, and $\\beta=0,...,\\,\\alpha$. The fictitious value $\\mitbf{U}_{I,J}^*$ is defined as the Taylor-like extrapolation \n\\begin{equation}\n\\mitbf{U}_{I,J}^*=\\mitbf{\\Pi}_{I,J}^k\\,\\mitbf{U}^k,\n\\label{SolMod0} \n\\end{equation}\nwhere $\\mitbf{U}^k$ defined by (\\ref{Uk}) still remains to be estimated. \n\n\\subsection{Reduced vector of boundary values}\\label{SubSecSVD}\n\nBefore determining $\\mitbf{U}^k$ in (\\ref{SolMod0}), we first reduce the number of independent components it contains. The expressions obtained in section \\ref{SecContinuous} are used for this purpose. The linear homogeneous system following from (\\ref{LkUk}) and (\\ref{MatG}) is\n\\begin{equation}\n\\mitbf{L}^k\\,\\mitbf{G}^k\\,\\hat{\\mitbf{U}}^k=\\mitbf{0}.\n\\label{LGU}\n\\end{equation}\nThis system has fewer equations ($n_l$) than unknowns ($n_v-n_c$). It therefore has an infinite number of possible solutions that constitute a space with the dimension $n_v-n_c-n_l$. Let $\\mitbf{K}_{L^kG^k}$ be a $(n_v-n_c)\\times(n_v-n_c-n_l)$ matrix containing the basis vectors of the kernel of $\\mitbf{L}^k\\,\\mitbf{G}^k$. The general solution of (\\ref{LGU}) is therefore\n\\begin{equation}\n\\hat{\\mitbf{U}}^k=\\mitbf{K}_{L^kG^k} \\overline{\\mitbf{U}}^k,\n\\label{Kernel}\n\\end{equation}\nwhere the $n_v-n_c-n_l$ components of $\\overline{\\mitbf{U}}^k$ are real numbers. Injecting (\\ref{Kernel}) into (\\ref{MatG}) gives\n\\begin{equation}\n\\mitbf{U}^k=\\mitbf{G}^k\\mitbf{K}_{L^kG^k}\\overline{\\mitbf{U}}^k.\n\\label{SVD}\n\\end{equation}\nThe computation of $\\mitbf{K}_{L^kG^k}$ is a key point. For this purpose, we use a classical linear algebra tool: singular value decomposition of $\\mitbf{L}^k\\,\\mitbf{G}^k$. 
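In practice, this kernel computation amounts to a few lines of standard numerical linear algebra. The following minimal sketch in Python with NumPy (not taken from the FORTRAN subroutines mentioned above; a random full-rank matrix merely stands in for $\\mitbf{L}^k\\,\\mitbf{G}^k$, whose actual entries depend on the geometry of $\\Gamma$ at $P$) illustrates the idea:\n\\begin{verbatim}\nimport numpy as np\n\ndef kernel_basis(LG, tol=1e-10):\n    # Orthonormal basis of the kernel of LG = L^k G^k, obtained from\n    # its SVD; the columns play the role of the basis vectors stored\n    # in the matrix K_{L^k G^k}.\n    U, s, Vt = np.linalg.svd(LG)\n    rank = np.sum(s > tol * s[0])\n    return Vt[rank:].T\n\n# Dimensions of the case k = 1: n_l = 6 equations and n_v - n_c = 15\n# unknowns, hence a kernel of dimension 15 - 6 = 9.\nLG = np.random.randn(6, 15)\nK = kernel_basis(LG)\nassert K.shape == (15, 9) and np.allclose(LG @ K, 0.0)\n\\end{verbatim}\n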
Technical details can be found in Appendix A of \\citet{LOMBARD04}.\n\n\\subsection{Computation of fictitious values}\\label{SubSecExtra2}\n\nLet us now consider the $n_p$ grid points of $\\Omega$ in the circle ${\\cal C}$ centred at $P$ with a radius $d$; for instance, $n_p=8$ in Figure \\ref{FigSolMod}. At these points, we write the $k$-th order Taylor expansion in space of the solution at $P$, and then we use the expression (\\ref{SVD}). This gives\n\\begin{equation}\n\\begin{array}{lll}\n\\mitbf{U}(x_i,\\,z_j,\\,t_n)&=& \\displaystyle \\mitbf{\\Pi}_{i,j}^k\\mitbf{U}^k+\\mitbf{O}(h^{k+1}),\\\\\n&&\\\\\n&=&\\displaystyle \\mitbf{\\Pi}_{i,j}^k\\mitbf{G}^k\\mitbf{K}_{L^kG^k}\\overline{\\mitbf{U}}^k+\\mitbf{O}(h^{k+1}).\n\\end{array}\n\\label{Taylor}\n\\end{equation}\nThe set of $n_p$ equations (\\ref{Taylor}) is written compactly via a $5\\,n_p\\times (n_v-n_c-n_l)$ matrix $\\mitbf{M}$\n\\begin{equation}\n\\left(\\mitbf{U}(.,\\,t_n)\\right)_{\\cal C}=\\mitbf{M}\\overline{\\mitbf{U}}^k+\\mitbf{O}(h^{k+1}),\n\\label{MatM}\n\\end{equation}\nwhere $\\left(\\mitbf{U}(.,\\,t_n)\\right)_{\\cal C}$ is the vector containing the exact values of the solution at the grid nodes of $\\Omega$ inside ${\\cal C}$. These exact values are replaced by the known numerical values $\\left(\\mitbf{U}^n\\right)_{\\cal C}$, and the Taylor remainders are removed. From now on, numerical values and exact values of the fields are used interchangeably. The discrete system thus obtained is overdetermined (see remark 1 about $d$ and typical values of $n_p$ in subsection \\ref{SubSecOverview}). We now compute its least-squares solution\n\\begin{equation}\n\\overline{\\mitbf{U}}^k=\\mitbf{M}^{-1}\\left(\\mitbf{U}^n\\right)_{\\cal C},\n\\label{Trace}\n\\end{equation}\nwhere the $(n_v-n_c-n_l)\\times 5\\,n_p$ matrix $\\mitbf{M}^{-1}$ denotes the pseudo-inverse of $\\mitbf{M}$. From (\\ref{SolMod0}), (\\ref{SVD}) and (\\ref{Trace}), the fictitious value in the vacuum at $(x_I,\\,z_J)$ is\n\\begin{equation}\n\\begin{array}{lll}\n\\mitbf{U}_{I,J}^*&=& \\displaystyle \\mitbf{\\Pi}_{I,J}^k\\,\\mitbf{G}^k\\mitbf{K}_{L^kG^k}\\mitbf{M}^{-1}\\left(\\mitbf{U}^n\\right)_{\\cal C}\\\\\n&&\\\\\n&=& \\displaystyle \\mitbf{\\Lambda}_{I,J}\\left(\\mitbf{U}^n\\right)_{\\cal C}.\n\\end{array}\n\\label{Extrapolator}\n\\end{equation}\nThe $5\\times 5\\,n_p$ matrix $\\mitbf{\\Lambda}_{I,J}$ is called the \\textit{extrapolator} at $(x_I,\\,z_J)$. The fictitious values have no clear physical meaning. They only serve, by interpolation with the numerical values inside $\\Omega$, to recover the high-order Dirichlet conditions (\\ref{Uk}).\n\n\\subsection{Comments and practical details}\\label{SubSecOverview}\n\nThe extrapolation method described in section \\ref{SubSecExtra2} has to be applied at each grid point $(I,\\,J)$ in the vacuum where a fictitious value is required for the time-marching procedure. Useful comments are proposed about this method:\n\n\\begin{enumerate}\n\\item The radius $d$ of ${\\cal C}$ must ensure that the number of equations in (\\ref{MatM}) is greater than the number of unknowns: \n\\begin{equation}\n\\varepsilon(k,\\,d)=\\frac{\\textstyle 5\\,n_p}{\\textstyle n_v-n_c-n_l}\\geq 1.\n\\label{Theta}\n\\end{equation}\nNo theoretical results are available about the optimal value of $\\varepsilon$. However, numerical studies have shown that taking $\\varepsilon$ well above 1 ensures long-term stability: typically, $\\varepsilon \\approx 4$. 
Various strategies can be used to ensure (\\ref{Theta}), such as an adaptive choice of $d$ depending on the local geometry of $\\Gamma$ at $P$. Here we adopt a simpler strategy, which consists in using a constant radius $d$. With $k=3$, numerical experiments have shown that $d=3.2\\,h$ is a good candidate for this purpose. In this case, one typically obtains $n_p \\approx 15$.\n\n\\item Since the boundary conditions do not vary with time, the extrapolators $\\mitbf{\\Lambda}_{I,J}$ in (\\ref{Extrapolator}) can be computed and stored during a pre-processing step. At each time step, only small matrix-vector products are required. \n\n\\item The extrapolators $\\mitbf{\\Lambda}_{I,J}$ account for the local geometry of $\\Gamma$ at the projection points $P$ on $\\Gamma$ via $\\mitbf{L}^k$ (section \\ref{SubSecBC}). Moreover, they incorporate the position of $P$ relative to the Cartesian meshing, via $\\mitbf{\\Pi}_{i,j}$ (\\ref{Taylor}) and $\\mitbf{\\Pi}_{I,J}$ (\\ref{Extrapolator}). The set of extrapolators therefore provides a subcell resolution of $\\Gamma$ in the meshing, avoiding the spurious diffractions induced by a naive description of the boundaries.\n\n\\item The stability of the method has not been proved. However, numerical experiments clearly indicate that the CFL condition of stability is not modified compared with the case of a homogeneous medium. The solution does not grow with time, even in the case of long-time simulations (see section \\ref{SubSecTest4}).\n\n\\item In a previous one-dimensional study \\citep{PIRAUX01}, the local truncation error of the method has been rigorously analysed, leading to the following result: using the fictitious values (\\ref{Extrapolator}) ensures a local $r$-th order spatial accuracy if $k\\geq r$, where $r$ is the order of spatial accuracy of the scheme. In 2D configurations with material interfaces \\citep{LOMBARD04,LOMBARD06}, no proof has been conducted, but numerical experiments have shown that the $r$-th order overall accuracy is also maintained by taking $k=r$. Note that a slightly smaller order of extrapolation can be used: $k=r-1$ suffices to provide $r$-th order overall accuracy \\citep{GUSTAFSSON75}. The value $k=3$ is therefore used for the fourth-order ADER scheme.\n\n\\item The extrapolators do not depend on the numerical scheme adopted. They depend only on $k$ and on physical and geometrical features. Standard subroutines for computing the extrapolators $\\mitbf{\\Lambda}_{I,J}$ can therefore be developed and adapted to a wide range of schemes. Subroutines of this kind are freely available in FORTRAN at the web page {\\tt http:\/\/w3lma.cnrs-mrs.fr\/\\textasciitilde MI\/Software\/}. \n\\end{enumerate}\n\n\\subsection{Case of staggered-grid schemes}\\label{SubSecStag}\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n(a) & (b)\\\\\n\\includegraphics[scale=0.56]{FigStagA.eps}&\n\\includegraphics[scale=0.56]{FigStagB.eps}\n\\end{tabular}\n\\end{center}\n\\caption{\\textit{Staggered-grid schemes with a plane boundary $\\Gamma$ parallel to the meshing: two cases can be distinguished, depending on the position of $\\Gamma$ relative to the meshing. 
Case (a), where the fictitious stress is estimated, works well, while case (b), where the fictitious velocity is estimated, leads to long-term instabilities.}}\n\\label{FigStag}\n\\end{figure} \n\nInstead of using a single-grid scheme as proposed in section \\ref{SubSecScheme}, readers may be interested in adapting our approach to staggered-grid schemes such as CSS or PSS (see section \\ref{SecIntro} for the definition of these terms). However, for some positions of the boundary relative to the meshing, computational instabilities occur, especially when long-time integration is considered.\n\nTo understand why this is so, let us consider PSS-2. Taking a simple flat boundary to exist between the medium and the vacuum leads to two typical geometrical configurations. At one position of the free surface, the boundary discretization will require only the stress field to be extrapolated (Figure \\ref{FigStag}-(a)). Our procedure works satisfactorily with this type of discretization at any order $k$. It also yields stable and accurate solutions when dealing with PSS-4, contrary to the vacuum method. Using 10 grid nodes per minimal S-wavelength gives performance similar in this case to that of our numerical experiments based on the ADER scheme, which are shown in section \\ref{SecNum}.\n\nAt another position of the free surface where only extrapolated velocities are required within a wide zone (Figure \\ref{FigStag}-(b)), our procedure results in instabilities. The reason for this problem is as follows: fictitious velocities involve first-order boundary conditions (\\ref{L0t}) and higher-order conditions (see section \\ref{SubSecBC}), but they do not involve the fundamental zeroth-order Dirichlet conditions (\\ref{L0U0}). Since the latter conditions are never enforced, a growing oscillatory drift occurs near the boundary, which invalidates the computations. Similar behavior is observed with PSS-4, but after a longer time: the numerical solution generally works well during a few thousand time steps, before growing in an unstable manner. \n\nThe extrapolation method presented here is therefore not recommended for use with staggered-grid schemes, especially PSS-2, except in the trivial case sketched in Figure \\ref{FigStag}-(a).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.7]{FigCorner.eps}\n\\caption{{\\it Boundary $\\Gamma$ with a corner at $P$, replaced locally by an arc of circle with a radius $\\delta$ between $P_0$ and $P_1$}.}\n\\label{FigCorner}\n\\end{center}\n\\end{figure} \n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{c}\n(a)\\\\\n\\includegraphics[scale=0.36]{T1Carte0.eps}\\\\\n(b)\\\\\n\\includegraphics[scale=0.36]{T1Carte1.eps}\\\\\n(c)\\\\\n\\includegraphics[scale=0.36]{T1Carte2.eps}\n\\end{tabular}\n\\end{center}\n\\caption{\\textit{Test 1: snapshots of $v_x$ at the initial instant (a), at mid-term (b) and at the final instant (c).}}\n\\label{FigTest1A}\n\\end{figure} \n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{c}\n(a)\\\\\n\\includegraphics[scale=0.325]{T1Coupe0.eps}\\\\\n(b)\\\\\n\\includegraphics[scale=0.325]{T1Coupe1.eps}\\\\\n(c)\\\\\n\\includegraphics[scale=0.325]{T1Coupe2.eps}\\\\\n(d)\\\\\n\\includegraphics[scale=0.325]{T1Coupe3.eps}\n\\end{tabular}\n\\end{center}\n\\caption{\\textit{Test 1: time history of $v_x$ (a). 
Zooms on successive time windows, with various discretizations (b,c,d): the number after \\# denotes the number of grid nodes per minimal S-wavelength.}}\n\\label{FigTest1B}\n\\end{figure} \n\n\\subsection{Case of non-smooth geometries}\n\nUp to now, we have assumed that the boundary $\\Gamma$ was sufficiently smooth at the projection points, being at least $C^{k+1}$ at each $P$, where $k\\geq 0$ is the order of differentiation defined in section \\ref{SubSecOverview}. Let us assume now that $\\Gamma$ is only $C^K$ at a point $P$, with $K < k+1$.\n\nOnce the source is switched off ($t>2\\,t_c$), the mechanical energy $E$ is theoretically maintained. It can be written in terms of $\\mitbf{v}$ and $\\mitbf{\\sigma}$\n\\begin{equation}\n\\begin{array}{l}\n\\displaystyle\nE=\\frac{\\textstyle 1}{\\textstyle 2}\\int\\int_\\Omega\\left\\{ \\rho\\,\\mitbf{v}^2+\\frac{\\textstyle \\lambda+2\\,\\mu}{\\textstyle 4\\,\\mu\\,(\\lambda+\\mu)}\\left(\\sigma_{xx}^2+\\sigma_{zz}^2\\right)+\\frac{\\textstyle 1}{\\textstyle \\mu}\\sigma_{xz}^2\\right.\\\\\n[8pt]\n\\displaystyle\n\\hspace{2cm}\\left.-\\frac{\\textstyle \\lambda}{\\textstyle 2\\,\\mu\\,(\\lambda+\\mu)}\\sigma_{xx}\\,\\sigma_{zz}\\right\\}\\,dx\\,dz.\n\\end{array}\n\\label{NRJ}\n\\end{equation}\nAt each time step, the integral in (\\ref{NRJ}) is estimated by a basic trapezoidal rule at the grid nodes inside $\\Omega$. Figure \\ref{FigTest4}-(b) shows the time history of the mechanical energy so obtained. It slightly decreases, due to the numerical diffusion of the scheme, which confirms that the method is stable.\n\n\\section{Conclusion}\\label{SecConclu}\n\nHere we have presented a method of incorporating free boundaries into time-domain single-grid finite-difference schemes for elastic wave simulations. This method is based on fictitious values of the solution in the vacuum, which are used by the numerical integration scheme near boundaries. These high-order fictitious values accurately describe both the boundary conditions and the geometrical features of the boundaries. The method is robust, involving negligible extra computational costs. \n\nUnlike the vacuum method, the quality of the numerical solution thus obtained is almost independent of the angle between the free boundaries and the Cartesian meshing. Since the free boundaries do not introduce any additional artefacts, one can use the same discretization as in homogeneous media. Typically, when a fourth-order ADER scheme is used on a propagation distance of 50 minimal wavelengths, 10 grid nodes per minimal S-wavelength yield a very good level of accuracy. With 5 grid nodes per minimal S-wavelength, the solution is less accurate but still acceptable. \n\nFor the sake of simplicity, we have dealt here with academic cases, considering two-dimensional geometries, constant physical parameters, and simple elastic media. Let us briefly examine the generalization of our approach to more realistic configurations:\n\\begin{enumerate}\n\\item Extending the method to 3-D topographies a priori does not require new tools. The main challenge will concern the computational efficiency of parallelization. A key point is that the determination of each fictitious value is local, using numerical values only at neighboring grid nodes. Particular care will however be required for fictitious values near the interfaces between computational subdomains, in order to minimize data exchanges.\n\\item Near free boundaries, the domains of propagation are usually smoothly heterogeneous. 
To generalize our method to continuously variable media, the main novelty expected concerns the high-order boundary conditions detailed in section \\ref{SubSecBC}. Indeed, with variable matrices $\\mitbf{A}$ and $\\mitbf{B}$ and $k\\geq 2$, the procedure (\\ref{DTauDt}) will involve the following quantities, to be estimated numerically:\n$$\n\\frac{\\textstyle \\partial^{k-1}}{\\textstyle \\partial\\,x^{k-1-\\alpha}\\,\\partial\\,z^\\alpha}\\,\\mitbf{A},\\quad \\frac{\\textstyle \\partial^{k-1}}{\\textstyle \\partial\\,x^{k-1-\\alpha}\\,\\partial\\,z^\\alpha}\\,\\mitbf{B},\\qquad \\alpha=0,...,k-1.\n$$\n\\item Realistic modeling of wave propagation requires incorporating attenuation. The only rheological viscoelastic models able to approximate a constant quality factor over a frequency range are the generalized Maxwell body \\citep{EMMERICH84} and the generalized Zener body \\citep{CARCIONE01}. These two equivalent models \\citep{MOCZO05} lead to additional unknowns called \\textit{memory variables}. In the time domain, the whole set of unknowns satisfies a linear hyperbolic system with source term\n\\begin{equation}\n\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,t}\\,\\mitbf{U}=\\mitbf{A}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,x}\\,\\mitbf{U}+\\mitbf{B}\\,\\frac{\\textstyle \\partial}{\\textstyle \\partial\\,z}\\,\\mitbf{U}-\\mitbf{S}\\,\\mitbf{U},\n\\label{LCatten}\n\\end{equation}\nwhere $\\mitbf{S}$ is a positive definite matrix. Compared with the elastic case (\\ref{LC}) examined in the present paper, the main difference expected concerns the time differentiation of the boundary condition (\\ref{L0U0}). Indeed, equation (\\ref{L0t}) has to be modified according to (\\ref{LCatten}). Similar modifications are also foreseen in the case of poroelasticity in the low-frequency range \\citep{DAI95}, where the evolution equations can be put in the form (\\ref{LCatten}).\n \\end{enumerate}\n\n\\label{lastpage}\n\n\\begin{acknowledgments}\nThe authors thank the reviewers P. Moczo and I. Oprsal for their instructive comments and bibliographic insights.\n\\end{acknowledgments}\n\n\\def\\hskip .11em plus .33em minus .07em{\\hskip .11em plus .33em minus .07em}\n\\bibliographystyle{gji}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOur detailed knowledge of the Universe is mostly based on the study of correlation functions of perturbations around a homogeneous background. A considerable effort has been devoted over the years to the calculation of these correlators during inflation, for the CMB temperature fluctuations and for the present distribution of dark and luminous matter. It is by now well understood that calculations dramatically simplify in the parametric limit in which one (or more) of the momenta (that we call $q$ in this paper) becomes much smaller than the others (denoted by $k$). Recently \\cite{Kehagias:2013yd,Peloso:2013zw,Creminelli:2013mca}, these arguments have been applied to the matter (or $\\Lambda$)-dominated phase to show that the leading term as $q \\to 0$ of any correlation function with $(n+1)$ legs can be written in terms of an $n$-point function: the so-called consistency relations. Although the arguments work in a fully relativistic treatment \\cite{Creminelli:2013mca}, which is mandatory if we want to follow the evolution of the modes back in time and connect with inflation, in this paper we focus on the non-relativistic limit, which is valid deep inside the horizon. 
\n\nThe physical argument behind the consistency relations in the non-relativistic limit is that at leading order in $q$ a long mode gives rise to a homogeneous gravitational field $\\vec \\nabla\\Phi$. The effect of this mode on the short-scale physics can be derived exactly using the equivalence principle and erasing the long mode with a suitable change of coordinates. This logic makes virtually no assumption about the physics at short scales, including the complications due to baryons. However, the cancellation of the long mode by a change of coordinates can be performed only assuming that gravity is all there is: no extra degrees of freedom during inflation (i.e.~single-field inflation) and no extra forces (violation of the equivalence principle) at present. Therefore the consistency relations can be seen as a test of these two assumptions. \n\nIn this paper, which is the natural continuation of \\cite{Creminelli:2013mca}, we extend our study of the subject in two directions. First, we want to extend the consistency relations non-linearly in the long mode (Section~\\ref{sec:resum}). The displacement due to a homogeneous gravitational field scales with time as $\\Delta \\vec x \\sim \\vec \\nabla\\Phi_L \\;t^2$, so that the effect on the short modes of momentum $k$ goes as\n\\begin{equation}\n\\vec k \\cdot \\Delta \\vec x \\sim k \\; q \\;\\Phi_L \\,t^2 \\sim \\frac{k}{q} \\delta_L \\;,\n\\end{equation}\nwhere $\\delta_L$ is the long-mode density contrast\\footnote{This is the leading effect in the non-relativistic limit: relativistic corrections are further suppressed by powers of $aH\/k \\ll 1$ \\cite{Creminelli:2013mca}, which are negligible well inside the horizon.}. Notice that this is parametrically larger than $\\delta_L$, the natural expansion parameter of perturbation theory, and this is why one is able to capture the leading $q \\to 0$ behaviour. \nObviously, the fact that we can erase a homogeneous gravitational field by going to a free falling frame is an exact statement that does not require the gravitational field to be small. This implies that we do not need to expand in $k\/q \\cdot \\delta_L$, which can be large, while we keep $\\delta_L$ small to allow for a perturbative treatment of the long mode. In Section \\ref{sec:resum} we are going to give a resummed version of the consistency relations which is exact in $k\/q \\cdot \\delta_L$. This allows us to discuss the case of multiple soft modes and check the relations with the perturbation theory result. With the same logic, we will study the effect of internal soft modes and loops of soft modes. \n\nThe second topic of the paper (Section \\ref{sec:redshift}) is to derive consistency relations directly in redshift space, since this is where the distribution of matter is measured. We will do so without assuming anything about the short modes, in particular the single-stream approximation that breaks down in virialized objects. The redshift consistency relations contain an extra piece because the long mode, besides inducing a homogeneous gravitational field in real space, also affects the position of the short modes in redshift space along the line-of-sight. The redshift space consistency relations state that the correlation functions vanish at leading order for $q \\to 0$ when the short modes are taken at the same time, as happens in real space. 
Given that it is practically impossible, as we will discuss, to study correlation functions of short modes at different times, it is hard to believe that these relations will be verified with real data. However, if a signal is detected at equal times, the consistency relations are not satisfied and this would indicate that at least one of the assumptions does not hold. This would represent a detection of either multi-field inflation or violation of the equivalence principle (or both!). \n\n\nAs explained in \\cite{Creminelli:2013mca}, one of the conditions for the validity of the consistency relations is that the long mode has always been out of the sound horizon since inflation. Indeed, a well-understood example where the consistency relations are not obeyed is the case of baryons and cold dark matter particles after decoupling. Before recombination, while dark matter follows geodesics, baryons are tightly coupled to photons through Thomson scattering and display acoustic oscillations. Later on, baryons recombine and decouple from photons. Thus, as their sound speed drops they start following geodesics, but with a larger velocity than that of dark matter on comoving scales below the sound horizon at recombination. As discussed in \\cite{Tseliakhovich:2010bj}, the long-wavelength relative velocity between baryons and CDM reduces the formation of early structures on small scales, through a genuinely nonlinear effect. \n\nThe fact that baryons have a different initial large-scale velocity compared to dark matter implies, if the long mode is shorter than the comoving sound horizon at recombination, that the change of coordinates that erases the effect of the long mode is not the same for the two species. Thus the effect of the long mode does not cancel out in the equal-time correlators involving different species \\cite{Bernardeau:2011vy,Bernardeau:2012aq}. In particular, the amplitude of the short-scale equal-time $n$-point functions becomes correlated with the long-wavelength isodensity mode, so that the $(n+1)$-point functions in the squeezed limit do not vanish at equal time. This effect, however, becomes rapidly negligible at low redshifts because the relative comoving velocity between baryons and dark matter decays as the scale factor, $|\\vec v_{\\rm b} - \\vec v_{\\rm CDM}| \\propto 1\/a$.\\footnote{The violation of the consistency relations decays as $(D_{\\rm iso}\/D)^2 \\propto (a^2 H f D )^{-2} \\sim (1+z)^{3\/2}$ where $D_{\\rm iso} \\propto |\\vec v_{\\rm b} - \\vec v_{\\rm CDM}|\/( a H f)$ is the growth function of the long-wavelength isodensity mode, $D$ is the growth function of the long-wavelength adiabatic growing mode, $f$ is the growth rate and $H$ is the Hubble rate (see \\cite{Bernardeau:2011vy} for details); in the last approximate equality we have used matter dominance. 
Thus, the effect is already sub-percent at $z \\sim 40$.} Hence, while a deviation can be sizable at high redshifts, it can be neglected in galaxy surveys and the consistency relations apply also when the long mode is shorter than the comoving sound horizon at recombination.\nWe conclude that the vanishing of the correlation functions at leading order in $q \\to 0$ is very robust.\n\n\n\n\\section{\\label{sec:resum}Resumming the long mode}\n\n\nLet us consider a flat unperturbed FRW universe and add to it a homogenous gradient of the Newtonian potential $\\Phi_L$.\\footnote{Since we are interested in the non-relativistic limit, we do not consider a constant value of $\\Phi_L$, which is immaterial in this limit.} Provided all species feel gravity the same way---namely, assuming the equivalence principle---we can get rid of the effect of $\\vec \\nabla \\Phi_L$ by going into a frame which is free falling in the constant gravitational field. The coordinate change to the free-falling frame is (we are using conformal time $d \\eta \\equiv dt\/a(t)$)\n\\begin{equation}\n\\vec x \\to \\vec x + \\delta \\vec x (\\eta) \\;, \\qquad \\delta \\vec x (\\eta) \\equiv - \\int \\vec v_L (\\tilde \\eta) \\, \\textrm{d} \\tilde \\eta\\;, \\label{displacement}\n\\end{equation}\nwhile time is left untouched. The velocity $\\vec v_L$ satisfies the Euler equation in the presence of the homogenous force,\nwhose solution is \n\\begin{equation}\\label{longv}\n\\vec v_L(\\eta) = - \\frac{1}{a(\\eta)} \\int a (\\tilde \\eta) \\vec \\nabla \\Phi_L (\\tilde \\eta) \\, \\textrm{d}\\tilde\\eta\\;.\n\\end{equation}\n\n\n\nTo derive the consistency relations we start from real space. Here, for definiteness, we denote by $\\delta^{(g)}$ the density contrast of the galaxy distribution. However, the relations that we will derive are more general and hold for any species---halos, baryons, etc., irrespectively of their bias with respect to the underlying dark matter field. \nFollowing the argument above, any $n$-point correlation function of short wavelength modes of $\\delta^{(g)}$ in the presence of a slowly varying $\\Phi_L(\\vec y)$ is equivalent to the same correlation function in displaced spatial coordinates, $\\vec{\\tilde x} \\equiv \\vec x + \\delta \\vec x(\\vec y, \\eta)$, where the displacement field $\\delta \\vec x(\\vec y, \\eta)$ is given by eq.~\\eqref{displacement} and $\\vec y$ is an arbitrary point---e.g., the midpoint between $\\vec x_1, \\ldots, \\vec x_n$---whose choice is irrelevant at order $q\/k$. This statement can be formulated with the following relation,\n\\begin{equation}\n\\begin{split}\n\\langle \\delta^{(g)} (\\vec x_1,\\eta_1) \\cdots \\delta^{(g)} (\\vec x_n,\\eta_n) | {\\Phi_L}(\\vec y)\\rangle &\\approx \\langle \\delta^{(g)} (\\vec{\\tilde x}_1,\\eta_1) \\cdots \\delta^{(g)} (\\vec{\\tilde x}_n,\\eta_n) \\rangle_0\\; \\\\\n& = \\int \\frac{\\textrm{d}^3k_1}{(2\\pi)^3}\\cdots\\frac{\\textrm{d}^3k_n}{(2\\pi)^3} \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle_0 \\, e^{i \\sum_a \\vec k_a \\cdot (\\vec x_a+ \\delta \\vec x (\\vec y, \\eta_a))} \\;, \\label{cr1}\n\\end{split}\n\\end{equation}\nwhere in the last line we have simply taken the Fourier transform of the right-hand side of the first line. 
Here and in the following, by the subscript $0$ after an expectation value we mean that the average is taken setting $\\Phi_L = 0$ (and not averaging over it); while by $\\approx$ we mean an equality that holds in the limit in which there is a separation of scales between long and short modes. \nIn momentum space this holds when the momenta of the soft modes is sent to zero.\nIn other words, corrections to the right-hand side of $\\approx$ are suppressed by ${\\cal O} (q\/k)$. \n\nFrom eq.~\\eqref{displacement} and using the continuity equation $\\delta' + \\vec \\nabla \\cdot \\vec v =0$, we can rewrite each Fourier mode of the displacement field as\n\\begin{equation}\n\\delta \\vec x(\\vec p, \\eta) = - i \\frac{\\vec p}{p^2} \\delta (\\vec p, \\eta) \\equiv - i \\frac{\\vec p}{p^2} D(\\eta) \\delta_0(\\vec p) \\;, \\label{deltax2delta}\n\\end{equation}\nwhere in the second equality we have defined $D(\\eta)$, the growth factor of density fluctuations of the {\\em long} mode\nand $\\delta_0 (\\vec p)$, a Gaussian random field with power spectrum $P_0(p)$ which represents the initial condition of the density fluctuations of the long mode \\cite{Bernardeau:2001qr}. Notice that the first equality of eq.~\\eqref{cr1} is based on the crucial assumption that the long mode is statistically uncorrelated with the short ones. This only works in single-field models of inflation, which we assume throughout. Notice also that eq.~\\eqref{deltax2delta}, when going beyond the linear theory, will only receive corrections of order $\\delta$, that we can neglect for our purposes since we are only interested in corrections which are enhanced by $1\/p$.\n\n\nAt this stage, we can compute an $(n+1)$-point correlation function in the squeezed limit by multiplying the left-hand side of eq.~\\eqref{cr1} by $\\delta_L$ and averaging over the long mode.\nSince the only dependence on $\\Phi_L$ in eq.~\\eqref{cr1} is in the exponential of $i \\sum_a \\vec k_a \\cdot \\delta \\vec x( \\vec y,\\eta_a)$, we obtain\n\\begin{equation}\n\\label{CoRe1}\n\\begin{split}\n\\langle \\delta_L( \\vec x, \\eta) \\langle \\delta^{(g)} (\\vec x_1,\\eta_1) \\cdots \\delta^{(g)} (\\vec x_n,\\eta_n) | \\Phi_L \\rangle \\rangle_{\\Phi_L} \\approx &\\int \\frac{\\textrm{d}^3k_1}{(2\\pi)^3} \\cdots \\frac{\\textrm{d}^3k_n}{(2\\pi)^3} \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle_0 \\, e^{i \\sum_a \\vec k_a \\cdot \\vec x_a} \\\\\n& \\times \\int \\frac{\\textrm{d}^3q}{(2\\pi)^3} e^{i \\vec{q}\\cdot\\vec{x}} \\langle\\delta_{\\vec q} (\\eta) e^{i \\sum_a \\vec k_a \\cdot \\delta \\vec x( \\vec y,\\eta_a)}\\rangle_{\\Phi_L} \\, .\n\\end{split}\n\\end{equation}\nIt is then convenient to rewrite this exponential as\n\\begin{equation}\n\\exp \\Big[ i \\sum_a \\vec k_a \\cdot \\delta \\vec x( \\vec y,\\eta_a) \\Big] =\\exp \\Big[ \\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec p) \\delta_0(\\vec p) \\Big]\\;, \\label{deltax2J}\n\\end{equation}\nwhere\n\\begin{equation}\nJ(\\vec{p}) \\equiv \\sum_a D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec p}{p^2} \\,e^{i \\vec p \\cdot \\vec{y}}\\, . \\label{Jdef}\n\\end{equation}\nThe integral is restricted to soft momenta, smaller than a UV cut-off $\\Lambda$, which must be much smaller than the hard modes of momenta $k_a$.\nAveraging the right-hand side of eq.~\\eqref{deltax2J} over the long wavelength Gaussian random initial condition $\\delta_0(\\vec p)$ yields\\footnote{This result will receive corrections due to primordial non-Gaussianities. 
Indeed, even in single-field models of inflation, the statistics of modes with comparable wavelength can deviate from Gaussianity. We neglect these corrections in the following.}\n\\begin{equation}\n\\label{SumRes}\n\\bigg \\langle \\exp \\Big[ \\int^{\\Lambda}\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec p) \\delta_0(\\vec p) \\Big] \\bigg \\rangle_{\\Phi_L} =\\exp \\bigg[ \\frac12 \\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec p) J(- \\vec p) P_0(p) \\bigg] \\;.\n\\end{equation}\nWe can use this relation to compute the expectation value of $\\delta_L$ with the exponential,\n\\begin{equation}\n\\begin{split}\n\\bigg \\langle \\delta_{\\vec q}(\\eta) \\exp \\Big( i \\sum_a \\vec k_a \\cdot \\delta \\vec x( \\vec y,\\eta_a) \\Big) \\bigg \\rangle_{\\Phi_L} &= (2\\pi)^3 D(\\eta) \\frac{\\delta}{\\delta J(\\vec q)} \\bigg \\langle \\exp \\Big[ \\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec p) \\delta_0(\\vec p) \\Big] \\bigg \\rangle_{\\Phi_L} \\\\& = P(q,\\eta) \\frac{J(- \\vec q)}{D(\\eta)} \n\\exp \\bigg[ \\frac12 \\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec p) J(- \\vec p) P_0(p) \\bigg] \\;,\n\\end{split}\n\\end{equation}\nwhere we have defined the power spectrum at time $\\eta$: $P( q,\\eta) \\equiv D^2(\\eta) P_0(q)$. \nFinally, rewriting eq.~\\eqref{CoRe1} in Fourier space using the above relation and the definition of $J$, eq.~\\eqref{Jdef}, we obtain the resummed consistency relations in the squeezed limit,\n\\begin{equation}\n\\begin{split}\n\\langle \\delta_{\\vec q}(\\eta) \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle' \\approx & - P(q,\\eta) \\, \\sum_a \\frac{D(\\eta_a)}{D(\\eta)} \\frac{\\vec k_a \\cdot \\vec{q}}{q^2} \\; \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle'_0 \\\\\n& \\times \\exp \\bigg[ {- \\frac12 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3} \\bigg( \\sum_a D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec{p}}{p^2} \\, \\bigg)^2 P_0(p)} \\bigg] \\;, \\label{CR}\n\\end{split}\n\\end{equation}\nwhere, here and in the following, primes on correlation functions indicate that the momentum conserving delta functions have been removed.\nHowever, what one observes in practice is not the expectation value $\\langle\\ldots\\rangle_0$ with the long modes set artificially to zero: one wants to rewrite the right-hand side of eq.~\\eqref{CR} in terms of an average over the long modes. Using eq.~\\eqref{SumRes} one gets:\n\\begin{equation}\n\\label{resumloop}\n\\langle \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n) | {\\Phi_L} \\rangle \\rangle_{\\Phi_L} \\approx \\exp \\bigg[ {- \\frac12 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3} \\bigg( \\sum_a D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec{p}}{p^2} \\, \\bigg)^2 P_0(p)} \\bigg] \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle_0 \\;.\n\\end{equation}\nOnce written in terms of the observable quantity the consistency relation comes back to the simple form:\n\\begin{equation}\n\\langle \\delta_{\\vec q}(\\eta) \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle' \\approx - P(q,\\eta) \\, \\sum_a \\frac{D(\\eta_a)}{D(\\eta)} \\frac{\\vec k_a \\cdot \\vec{q}}{q^2} \\; \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle' \\;. 
\\label{CR_easy}\n\\end{equation}\nThis equation has the same form as the consistency relations obtained in Refs.~\\cite{Kehagias:2013yd,Peloso:2013zw,Creminelli:2013mca}, but now it {\\em does not} rely on a linear expansion in the displacement field,\n\\begin{equation}\n\\frac{|\\delta \\vec x|}{|\\vec x|} \\sim \\frac{k}{q} \\delta_L \\ll 1 \\;.\n\\end{equation}\nIndeed, to derive eq.~\\eqref{CR} we have assumed that the long mode is in the linear regime, i.e.~$\\delta_L \\ll 1$, but no assumption has been made on $({k}\/{q}) \\delta_L$, which can be as large as one wishes.\nFor equal-time correlators the right-hand side vanishes at leading order in $q$ because $\\sum_a \\vec k_a=-\\vec q$, in the same way as in the linearized version \\cite{Kehagias:2013yd,Peloso:2013zw,Creminelli:2013mca}. \nThe resummation of long wavelengths in terms of a global translation of spatial coordinates---whose effect vanishes in equal-time correlation functions---was also performed in \\cite{Bernardeau:2011vy,Bernardeau:2012aq} by using the so-called {\\em eikonal} approximation of the equations of motion of standard perturbation theory\\footnote{It is not surprising that the consistency relation eq.~\\eqref{CR_easy} remains the same even non-linearly in $(k\/q) \\delta_L$ working directly in terms of the expectation values $\\langle\\ldots\\rangle$ averaged over the long modes. Indeed, neglecting primordial non-Gaussianities, the effect of the mode with momentum $\\vec q$ is the same as a change of coordinates, even when the short-scale correlation functions are averaged over all long modes. Since, as we discussed, also eq.~\\eqref{deltax2delta} does not require an expansion in $(k\/q) \\delta_L$, eq.~\\eqref{CR_easy} follows.}.\n\n\nIt is important to stress that here we made practically no assumptions on the short modes. We did not assume that they are in the linear regime or that the single-stream approximation holds. The relation also takes into account all complications due to baryon physics and it does not assume a description in terms of a Vlasov-Poisson system. We did not assume any model of bias between the short-scale $\\delta^{(g)}$ and the underlying dark matter distribution $\\delta$. We did not assume that the number of galaxies is conserved at short scales, so the relation is valid including the formation and merging history. We thus believe that our derivation, rooted only in the equivalence principle, is more robust than the one of \\cite{Kehagias:2013yd,Peloso:2013zw} based on the explicit equations for dark matter and for the galaxy fluid. Notice however that, while we are completely general about the short-mode physics, the long mode is treated in perturbation theory including its bias. Of course what enters in the consistency relations is only the velocity field of the long mode eq.~\\eqref{longv}, related to $\\Phi_L$ by the Euler equation. In converting this quantity into the density of some kind of objects, one has to rely on the conservation equation and this introduces the issue of bias and of its time-dependence. 
However, one can measure the large-scale potential in many ways, minimizing the systematic and cosmic-variance uncertainty \\cite{Seljak:2008xr}.\n\nAs shown below, one can straightforwardly extend this procedure and derive consistency relations involving an arbitrary number of soft legs in the correlation functions or use it to study the effect of soft loops and internal lines.\n\n\n\n\n\n\n\n\n\\subsection{Several soft legs}\n\nThe generalisation of the consistency relations above to multiple soft legs (for an analogous discussion in inflation see \\cite{Marko}) relies on taking successive functional derivatives with respect to $J(\\vec q _i)$ of eq.~\\eqref{SumRes}. As an example, we can explicitly compute the consistency relations with two soft modes. In this case the $(n+2)$-point function reads\n\\begin{equation}\n\\label{CoRe2}\n\\begin{split}\n \\langle \\delta_L( \\vec y_1,\\tau_1) \\delta_L( \\vec y_2,\\tau_2) \\delta^{(g)} (\\vec x_1,\\eta_1) &\\cdots \\delta^{(g)} (\\vec x_n,\\eta_n) \\rangle \\approx \\int \\frac{\\textrm{d}^3k_1}{(2\\pi)^3} \\cdots \\frac{\\textrm{d}^3k_n}{(2\\pi)^3} \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle_0 \\,e^{i \\sum_a \\vec k_a \\cdot \\vec x_a} \\\\\n& \\times \\int \\frac{\\textrm{d}^3q_1}{(2\\pi)^3}\\frac{\\textrm{d}^3q_2}{(2\\pi)^3}e^{i (\\vec q_1\\cdot\\vec y_1+ \\vec q_2 \\cdot\\vec y_2)} \\bigg \\langle\\delta_{\\vec q_1}(\\tau_1)\\delta_{\\vec q_2}(\\tau_2) e^{\\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec{p}) \\delta_0( \\vec p)} \\bigg \\rangle\\, .\n\\end{split}\n\\end{equation}\nTo compute the average over the long modes in the last line, it is enough to take two functional derivatives of eq.~\\eqref{SumRes},\n\\begin{equation}\n\\begin{split}\n\\bigg \\langle \\delta_{\\vec q_1}(\\tau_1)\\delta_{\\vec q_2}(\\tau_2) &e^{\\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} J(\\vec{p}) \\delta_0(\\vec p)} \\bigg \\rangle=(2 \\pi)^6D(\\tau_1) D(\\tau_2) \\frac{\\delta}{\\delta J(\\vec q_1)}\\frac{\\delta}{\\delta J(\\vec q_2)}\\bigg \\langle e^{\\int^\\Lambda\\frac{\\textrm{d}^3 p }{(2\\pi)^3} J(\\vec{p}) \\delta_0(\\vec p)}\\bigg \\rangle \\\\\n&= \\frac{J(-\\vec q_1)}{D(\\tau_1)}\\frac{J(-\\vec q_2)}{D(\\tau_2)}P(q_1,\\tau_1)P(q_2,\\tau_2) e^{\\frac12 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3} J(\\vec{p})J(-\\vec{p})P_0(p)}\\, ,\n\\end{split}\n\\end{equation}\nwhere we have assumed $\\vec q_1 + \\vec q_2 \\neq 0$ to get rid of unconnected contributions. 
In Fourier space, this yields\n\\begin{equation}\n\\label{n+2pf}\n\\begin{split}\n\\langle \\delta_{\\vec q_1}(\\tau_1)\\delta_{\\vec q_2}(\\tau_2) \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle' & \\approx P(q_1,\\tau_1)P(q_2,\\tau_2) \\\\ & \\times \\sum_a \\frac{D(\\eta_a)}{D(\\tau_1)} \\frac{\\vec k_a \\cdot \\vec{q_1}}{q_1^2} \\sum_b \\frac{D(\\eta_b)}{D(\\tau_2)} \\frac{\\vec k_b \\cdot \\vec{q_2}}{q_2^2} \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle' \\;,\n\\end{split}\n\\end{equation}\nwhere again we have used eq.~\\eqref{resumloop} to write the result in terms of correlation functions averaged over the long modes.\n\nAs a simple example, let us consider eq.~\\eqref{n+2pf} in the case where $n=2$ and $\\delta^{(g)}$ describes dark matter perturbations, i.e.~$\\delta^{(g)} \\equiv \\delta$.\nIn this case, at lowest order in $\\frac k q \\delta(\\vec q,\\eta)$---i.e.~setting the exponential in the third line to unity---the above relation reduces to\n\\begin{equation}\n\\label{TriSpec}\n\\langle\\delta_{\\vec q_1}(\\tau_1) \\delta_{\\vec q_2}(\\tau_2) \\delta_{\\vec k_1}(\\eta_1) \\delta_{\\vec k_2}(\\eta_2)\\rangle' \\approx \\frac{(D(\\eta_1) - D(\\eta_2) )^2}{D(\\tau_1) D(\\tau_2) } \\frac{\\vec q_1\\cdot \\vec k_1}{q_1^2} \\frac{\\vec q_2\\cdot \\vec k_1}{q_2^2}P(q_1,\\tau_1)P(q_2,\\tau_2) \\langle \\delta_{\\vec k_1}(\\eta_1)\\delta_{\\vec k_2}(\\eta_2)\\rangle'.\n\\end{equation}\nWe can check that this expression correctly reproduces the tree-level trispectrum computed in perturbation theory in the double-squeezed limit. This can be easily computed by summing the two types of diagrams displayed in Fig.~\\ref{fig:trispectrum}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=5in]{trispectrum.pdf}\n\\caption{\\label{fig:trispectrum} \\small{Two diagrams that contribute to the tree-level trispectrum. Left: $T_{1122}$. 
Right: $T_{1113}$.}}\n\\end{figure}\nThe diagram on the left-hand side represents the case where the density perturbations of the short modes are both taken at second order, yielding\n\\begin{equation}\n\\label{cont_t_1}\n\\begin{split}\nT_{1122}=& \\, D(\\tau_1) D(\\tau_2) D(\\eta_1) D(\\eta_2) P_{0}(q_1)P_{0}(q_2) F_2(-\\vec q_1, \\vec k_1+\\vec q_1) F_2(-\\vec q_2, \\vec k_2+\\vec q_2) \\langle \\delta_{\\vec k_1} (\\eta_1)\\delta_{\\vec k_2} (\\eta_2)\\rangle' + {\\rm perms} \\\\\n& \\approx -8 \\frac{\\vec q_1\\cdot \\vec k_1}{2 q_1^2} \\frac{\\vec q_2\\cdot \\vec k_1}{2 q_2^2} \\frac{D(\\eta_1) D(\\eta_2)}{D(\\tau_1) D(\\tau_2)} P(q_1,\\tau_1)P(q_2,\\tau_2) \\langle \\delta_{\\vec k_1}(\\eta_1)\\delta_{\\vec k_2}(\\eta_2)\\rangle' \\, ,\n\\end{split}\n\\end{equation}\nwhere, on the right-hand side of the first line, $ F_2(\\vec p_1,\\vec p_2)$ is the usual kernel of perturbation theory, which in the limit where $p_1 \\ll p_2$ simply reduces to $ \\vec p_1 \\cdot \\vec p_2\/( 2 p_1^2)$ \\cite{Bernardeau:2001qr}.\nThe second type of diagram, displayed on the right-hand side of Fig.~\\ref{fig:trispectrum}, is obtained when one of the short density perturbations is taken at third order; it gives\n\\begin{equation}\n\\label{cont_t_2}\n\\begin{split}\nT_{1113} &= D(\\eta_2)^2 D(\\tau_1) D(\\tau_2) P_{0}(q_1)P_{0}(q_2) F_3(-\\vec q_1, - \\vec q_2, - \\vec k_1) \\langle \\delta_{\\vec k_1}(\\eta_1)\\delta_{\\vec k_2}(\\eta_2)\\rangle' + {\\rm perms} \\\\\n& \\approx 4 \\frac{\\vec q_1\\cdot \\vec k_1}{2 q_1^2} \\frac{\\vec q_2\\cdot \\vec k_1}{2 q_2^2} \\frac{D(\\eta_2)^2}{D(\\tau_1) D(\\tau_2)} P(q_1,\\tau_1)P(q_2,\\tau_2) \\langle \\delta_{\\vec k_1}(\\eta_1)\\delta_{\\vec k_2}(\\eta_2)\\rangle' \\, ,\n\\end{split}\n\\end{equation}\nwhere, on the right-hand side of the first line, $ F_3(\\vec p_1,\\vec p_2,\\vec p_3)$ is the third-order perturbation theory kernel, which in the limit where $p_1, p_2 \\ll p_3$ reduces to $ (\\vec p_1 \\cdot \\vec p_3)(\\vec p_2 \\cdot \\vec p_3) \/( 4 p_1^2 p_2^2 )$ \\cite{Bernardeau:2001qr}.\nAs expected, summing up all the contributions to the connected part of the trispectrum, i.e.~$T_{1122}+T_{1131}+T_{1113}$, using eqs.~\\eqref{cont_t_1} and \\eqref{cont_t_2} and $\\vec k_2 \\approx - \\vec k_1$ one obtains eq.~\\eqref{TriSpec}. \n\n\n\\subsection{Soft Loops}\n\nSo far we have derived consistency relations where the long modes appear explicitly as external legs. We now show that our arguments can also capture the effect on short-scale correlation functions of soft modes running in loop diagrams.\nWe already did this in eq~\\eqref{resumloop}\n\\begin{equation}\n\\langle \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n) | {\\Phi_L} \\rangle \\rangle_{\\Phi_L} \\approx \\exp \\bigg[ {- \\frac12 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3} \\bigg( \\sum_a D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec{p}}{p^2} \\, \\bigg)^2 P_0(p)} \\bigg] \\langle \\delta^{(g)}_{\\vec k_1} (\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_n} (\\eta_n)\\rangle_0 \\;.\n\\end{equation}\nThe exponential in this expression can be expanded at a given order, corresponding to the number of soft loops dressing the $n$-point correlation function. Each loop carries a contribution $\\propto k^2 \\int \\textrm{d} p P_0(p) $ to the correlation function. 
However, \nthis expression makes it very explicit that at all loop orders these contributions have no effect on equal-time correlators, because in this case all the growth factors $D(\\eta_a)$ coincide and momentum conservation, $\\sum_a \\vec k_a = 0$, makes the exponent vanish, so that the exponential on the right-hand side is identically unity. This confirms previous analyses on this subject \\cite{Jain:1995kx,Scoccimarro:1995if,Bernardeau:2011vy,Bernardeau:2012aq,Blas:2013bpa,Carrasco:2013sva}. It is important to notice again, however, that in our derivation this cancellation is more general and robust than in those references, as it takes place independently of the equations of motion for the short modes and is completely agnostic about the short-scale physics. It simply derives from the equivalence principle.\n\n\n\nNevertheless, soft loops contribute to unequal-time correlators. As a check of the expression above, one can compute the contribution of soft modes to the 1-loop unequal-time matter power spectrum, $\\langle \\delta_{\\vec k_1}(\\eta_1) \\delta_{\\vec k_2}(\\eta_2)\\rangle'$, and verify that this reproduces the standard perturbation theory result. Expanding at order $( \\frac kp \\delta)^2$ the exponential in eq.~\\eqref{resumloop} for $n=2$, one obtains the 1-loop contribution to the power spectrum,\n\\begin{equation}\n\\label{1loop}\n\\langle \\delta^{(g)}_{\\vec k}(\\eta_1) \\delta^{(g)}_{-\\vec k}(\\eta_2)\\rangle'_{\\rm 1-soft\\,loop}\\approx -\\frac12\\left({D(\\eta_1)-D(\\eta_2)}\\right)^2 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3}\\bigg(\\frac{\\vec p \\cdot \\vec k}{ p^2}\\bigg)^2 P_0(p) \\langle \\delta^{(g)}_{\\vec k}(\\eta_1) \\delta^{(g)}_{-\\vec k}(\\eta_2)\\rangle'_0 \\, .\n\\end{equation}\nLet us now compute the analogous contribution in perturbation theory. \n\\begin{figure}\n\\centering\n\\includegraphics[width=5in]{loops.pdf}\n\\caption{\\label{fig:loop} \\small Two diagrams that contribute to the 1-loop power spectrum. Left: $P_{22}$. Right: $P_{31}$.}\n\\end{figure}\nTwo types of diagrams are going to be relevant; these are shown in Fig.~\\ref{fig:loop}. The one on the left, usually called $P_{22}$, yields\n\\begin{equation}\nP_{22}\\approx 4 D(\\eta_1) D(\\eta_2) \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3}\\bigg(\\frac{\\vec p \\cdot \\vec k}{2 p^2}\\bigg)^2 P_0(p) \\langle \\delta_{\\vec k}(\\eta_1) \\delta_{-\\vec k}(\\eta_2)\\rangle'_0 \\, ,\n\\end{equation}\nwhile the diagram on the right, $P_{31}$, gives\n\\begin{equation}\nP_{31} \\approx - 2 D(\\eta_1)^2 \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3}\\bigg(\\frac{\\vec p \\cdot \\vec k}{2 p^2}\\bigg)^2 P_0(p) \\langle \\delta_{\\vec k}(\\eta_1) \\delta_{-\\vec k}(\\eta_2)\\rangle'_0 \\, .\n\\end{equation}\nSumming up all the different contributions, $P_{22} + P_{31} + P_{13}$, and using $4 D(\\eta_1) D(\\eta_2)-2 D(\\eta_1)^2-2 D(\\eta_2)^2=-2\\left(D(\\eta_1)-D(\\eta_2)\\right)^2$, one obtains eq.~\\eqref{1loop}.\n\n\\subsection{Soft internal lines}\nAnother kinematical regime in which the consistency relations can be applied is the limit in which the sum of some of the external momenta becomes very small, for instance $|\\vec k_1 +\\cdots + \\vec k_m |\\ll k_1, \\ldots, k_m$. In this limit, the dominant contribution to the $n$-point function comes from the diagram where $m$ external legs of momenta $\\vec k_1,\\ldots,\\vec k_m$ exchange soft modes with momentum $\\vec q=\\vec k_1 +\\cdots + \\vec k_m$ with $n-m$ external legs with momenta $\\vec k_{m+1}, \\ldots, \\vec k_n$ (for an analogous case in inflation see \\cite{Seery:2008ax,Leblond:2010yq}). In the language of our approach, this contribution comes from averaging a product of $m$-point and $(n-m)$-point functions under the effect of long modes. 
\n\n\\def\\delta^{(g)}{\\delta}\n\nIn this case, the $n$-point function in real space can be written as\n\\begin{equation}\n\\begin{split}\n\\langle\\delta^{(g)}(\\vec x_1,\\eta_1) & \\cdots \\delta^{(g)}(\\vec x_m, \\eta_m) \\; \\delta^{(g)}(\\vec x_{m+1},\\eta_{m+1}) \\cdots \\delta^{(g)}(\\vec x_n,\\eta_n) \\rangle \\\\\n& \\approx \\langle \\langle\\delta^{(g)}(\\vec x_1,\\eta_1) \\cdots \\delta^{(g)}(\\vec x_m, \\eta_m) | \\Phi_L \\rangle \\langle \\delta^{(g)}(\\vec x_{m+1},\\eta_{m+1}) \\cdots \\delta^{(g)}(\\vec x_n,\\eta_n) | \\Phi_L \\rangle \\rangle_{\\Phi_L}\\;,\n\\end{split}\n\\end{equation}\nwhere here and in the rest of the section we drop the superscript ${}^{(g)}$ on the galaxy density contrast to lighten the notation.\nNow we can straightforwardly apply the equations from the previous sections. As before, the long mode can be traded for the change of coordinates. Rewriting the right-hand side in Fourier space we get \n\\begin{equation}\n\\label{isl1}\n\\begin{split}\n\\langle\\delta^{(g)}(\\vec x_1,\\eta_1) & \\cdots \\delta^{(g)}(\\vec x_m,\\eta_m)\\; \\delta^{(g)}(\\vec x_{m+1},\\eta_{m+1}) \\cdots \\delta^{(g)}(\\vec x_n,\\eta_n)\\rangle \\\\\n& \\approx \\int \\frac{\\textrm{d}^3k_1}{(2\\pi)^3}\\cdots\\frac{\\textrm{d}^3k_n}{(2\\pi)^3} \\langle\\delta^{(g)}_{\\vec k_1}(\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_m}(\\eta_m) \\rangle_0 \\langle \\delta^{(g)}_{\\vec k_{m+1}}(\\eta_{m+1}) \\cdots \\delta^{(g)}_{\\vec k_n}(\\eta_n)\\rangle_0 \\, e^{i \\sum_a \\vec k_a \\cdot \\vec x_a} \\\\\n& \\times \\left \\langle \\exp\\bigg[i \\sum_{a=1}^m \\vec k_a \\cdot \\delta \\vec x( \\vec y_1,\\eta_a)\\bigg] \\cdot \\exp\\bigg[i \\sum_{a=m+1}^n \\vec k_a \\cdot \\delta \\vec x( \\vec y_2,\\eta_a)\\bigg] \\right \\rangle_{\\Phi_L} \\;,\n\\end{split}\n\\end{equation}\nwhere $\\vec y_1$ and $\\vec y_2$ are two different points respectively close to $(\\vec x_1,\\vec x_2, \\ldots, \\vec x_m )$ and $(\\vec x_{m+1}, \\vec x_{m+2} , \\ldots ,\\vec x_n)$. 
\nThe average over the long mode can be rewritten as\n\\begin{equation}\n\\left \\langle \\exp\\left[\\int^\\Lambda \\frac{\\textrm{d} ^3 \\vec p}{(2\\pi)^3} \\big( J_1(\\vec p)+ J_2(\\vec p) \\big) \\delta_0(\\vec p) \\right] \\ \\right \\rangle_{\\Phi_L}\n\\end{equation}\nwith\n\\begin{equation}\nJ_1(\\vec p) = \\sum_{a=1}^m D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec p}{p^2} e^{i \\vec p \\cdot \\vec y_1} \\;, \\quad J_2(\\vec p) = \\sum_{a=m+1}^n D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec p}{p^2} e^{i \\vec p \\cdot \\vec y_2} \\;.\n\\end{equation}\nTaking the expectation value over the long mode using the expression for averaging the exponential of a Gaussian variable, i.e.~eq.~\\eqref{SumRes}, eq.~\\eqref{isl1} can be written as\n\\begin{equation}\n\\label{sl2}\n\\begin{split}\n\\langle\\delta^{(g)}(\\vec x_1,\\eta_1) & \\cdots \\delta^{(g)}(\\vec x_m,\\eta_m)\\; \\delta^{(g)}(\\vec x_{m+1},\\eta_{m+1}) \\cdots \\delta^{(g)}(\\vec x_n,\\eta_n)\\rangle \\\\\n& \\approx \\int \\frac{\\textrm{d}^3k_1}{(2\\pi)^3}\\cdots\\frac{\\textrm{d}^3k_n}{(2\\pi)^3}\\langle\\delta^{(g)}_{\\vec k_1}(\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_m}(\\eta_m) \\rangle ' \\langle \\delta^{(g)}_{\\vec k_{m+1}}(\\eta_{m+1}) \\cdots \\delta^{(g)}_{\\vec k_n}(\\eta_n)\\rangle' \\, e^{i \\sum_a \\vec k_a \\cdot \\vec x_a}\\\\\n& \\times \\exp \\left[ - \\int^\\Lambda \\frac{\\textrm{d}^3 p}{(2\\pi)^3} J_1(\\vec p) J_2(\\vec p) P_0(\\vec p) \\right] \\;.\n\\end{split}\n\\end{equation}\n\nWe are interested in the soft internal lines, that come from the cross term, i.e.~the last line of eq.~\\eqref{sl2}. Notice that $J_1(\\vec p)$ and $J_2(\\vec p)$ are evaluated at different points $\\vec y_1$ and $\\vec y_2$ separated by a distance $\\vec x$.\\footnote{For definiteness, we can choose $\\vec y_1=\\frac{1}{m}\\sum_{a=1}^m \\vec x_a$ and $\\vec y_2=\\frac{1}{n-m}\\sum_{a=m+1}^n \\vec x_a$.}\nIt is lengthy but straightforward to take the Fourier transform of this equation, which yields\n\\begin{equation}\n\\begin{split}\n\\label{soft_lines_final}\n\\langle\\delta^{(g)}_{\\vec k_1}(\\eta_1)& \\cdots \\delta^{(g)}_{\\vec k_m}(\\eta_m) \\delta^{(g)}_{\\vec k_{m+1}}(\\eta_{m+1})\\cdots \\delta^{(g)}_{\\vec k_n}(\\eta_n) \\rangle' \\\\ &\n\\approx \\langle\\delta^{(g)}_{\\vec k_1}(\\eta_1)\\cdots \\delta^{(g)}_{\\vec k_m}(\\eta_m) \\rangle' \\, \\langle \\delta^{(g)}_{\\vec k_{m+1}}(\\eta_{m+1}) \\cdots \\delta^{(g)}_{\\vec k_n}(\\eta_n)\\rangle'\\\\\n&\\times\\int \\textrm{d}^3 x \\, e^{-i\\sum_{i=1}^m \\vec k_i\\cdot \\vec x} \\exp \\bigg[ - \\int^\\Lambda \\frac{\\textrm{d} ^3 p}{(2\\pi)^3}e^{i\\vec p\\cdot \\vec x} \\, \\sum_{a=1}^{m} D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec{p}}{p^2} \\sum_{a=m+1}^{n} D(\\eta_a) \\frac{\\vec k_a \\cdot \\vec{p}}{p^2} \\, P_0(p)\\bigg]\\,.\n\\end{split}\n\\end{equation}\nThe last line encodes the effect of soft modes with total momentum $\\vec q=\\vec k_1 +\\cdots + \\vec k_m$ exchanged between $m$ external legs of momenta $\\vec k_1,\\ldots,\\vec k_m$ and $n-m$ external legs with momenta $\\vec k_{m+1}, \\ldots, \\vec k_n$, in the limit $q\/k_i \\to 0$. Expanding the exponential at a given order in $P_0(p)$ yields the number of soft lines exchanged. The integral in $\\textrm{d}^3 x$ ensures that the sum of the internal momenta is $\\vec q$.\n\nEquation \\eqref{soft_lines_final} can be easily generalized to consider the case where more than two sums of momenta become small, i.e.~when soft internal lines are exchanged between more than two hard-modes diagrams. 
The conclusion is always the same: soft internal lines do not contribute to equal time correlators at order $\\propto k^2 \\int \\textrm{d} p P_0(p) $. Again, this statement is very general irrespectively of the assumption about the short scales.\n\n\nAs a concrete example, let us consider the case $m=2, n=4$, i.e.~a 4-point function in the collapsed limit $|\\vec k_1 + \\vec k_2 |\\ll k_1, k_2$, and the exchange of a single soft line. In this case, expanding the exponential at first order in $P_0(p)$, the above equation yields\n\\begin{equation}\n\\label{t_wsl}\n\\begin{split}\n\\langle\\delta_{\\vec k_1}(\\eta_1)& \\delta_{\\vec k_2}(\\eta_2) \\delta_{\\vec k_{3}}(\\eta_3) \\delta_{\\vec k_4}(\\eta_4) \\rangle_c' \\\\ &\n\\approx - \\, \\langle\\delta(\\vec k_1,\\eta_1) \\delta(\\vec k_2,\\eta_2) \\rangle ' \\langle \\delta(\\vec k_{3},\\eta_{3}) \\cdots \\delta(\\vec k_4,\\eta_4)\\rangle'\\\\\n&\\times \\int^\\Lambda \\textrm{d} ^3 p \\big( D(\\eta_1) - D(\\eta_2) \\big) \\frac{\\vec k_1 \\cdot \\vec{p}}{p^2} \\, \\big( D(\\eta_3) - D(\\eta_4) \\big) \\frac{\\vec k_3 \\cdot \\vec{p}}{p^2} \\, P_0(p) \\delta_D(\\vec p - \\vec k_1 - \\vec k_2)\\,,\n\\end{split}\n\\end{equation}\nwhere we have considered only the connected diagram and, for simplicity, we are neglecting soft loops attached to each lines.\nTo compare with perturbation theory, we need to compute the tree-level exchange diagram. The contribution from taking $\\vec k_1$ and $\\vec k_3$ at second order yields \n\\begin{equation}\nT_{2121} \\approx - 4 D(\\eta_1) D(\\eta_3) P_0(|\\vec k_1+\\vec k_2|) \n\\frac{\\vec k_1 \\cdot (\\vec k_1 + \\vec k_2 )}{ 2 |\\vec k_1 + \\vec k_2|^2} \\frac{\\vec k_3 \\cdot (\\vec k_1 + \\vec k_2 )}{ 2 |\\vec k_1 + \\vec k_2|^2} \n\\langle \\delta_{\\vec k_1}(\\eta_1) \\delta_{\\vec k_2}(\\eta_2)\\rangle' \\langle \\delta_{\\vec k_3}(\\eta_3) \\delta_{\\vec k_4}(\\eta_4)\\rangle' \\; ,\n\\end{equation}\nand summing up the other permutations lead to\n\\begin{equation}\n\\begin{split}\n\\langle\\delta_{\\vec k_1}(\\eta_1)& \\delta_{\\vec k_2}(\\eta_2) \\delta_{\\vec k_3}(\\eta_3) \\delta_{\\vec k_4}(\\eta_4)\\rangle_c' \\approx - \\big(D(\\eta_1)-D(\\eta_2)\\big) \\big( D(\\eta_3) - D(\\eta_4) \\big) P_0(|\\vec k_1+\\vec k_2|) \\\\ & \\times\n\\frac{\\vec k_1 \\cdot (\\vec k_1 + \\vec k_2 )}{ |\\vec k_1 + \\vec k_2|^2} \\frac{\\vec k_3 \\cdot (\\vec k_1 + \\vec k_2 )}{ |\\vec k_1 + \\vec k_2|^2} \n\\langle \\delta_{\\vec k_1}(\\eta_1) \\delta_{\\vec k_2}(\\eta_2)\\rangle' \\langle \\delta_{\\vec k_3}(\\eta_3) \\delta_{\\vec k_4}(\\eta_4)\\rangle' \\;,\n\\end{split}\n\\end{equation}\nwhich confirms eq.~\\eqref{t_wsl}. \nOne can easily extend this check to the case of several soft-lines.\n\n\n\\def\\delta^{(g)}{\\delta^{(g)}}\n\n\\section{\\label{sec:redshift}Going to redshift space}\n\nThe derivation of the consistency relations has been done in real space, but the galaxy distribution will of course be observed in redshift space. It is thus natural to ask if it is possible to write relations directly in terms of redshift space correlation function. Before doing so, let us stress that it will be difficult---if not impossible---to measure consistency relations at different times. To see the effect of the long mode, one would like to measure at quite different redshifts the short-scale correlation function at a spatial distance which is much smaller than Hubble. This is of course impossible since we can only observe objects on our past lightcone. 
This implies that, although one can check the consistency relations at different times in simulations, for real data we will have to stick to correlation functions at the same time. Given that the consistency relations vanish at equal time, their main phenomenological interest will be to look for their possible violations, which would indicate that one of the assumptions does not hold. This would represent a detection of either multi-field inflation or violation of the equivalence principle (or both!)\n\nThe mapping between real space $\\vec x$ and redshift space $\\vec s$ in the plane-parallel approximation is given by\n\\begin{equation}\n\\vec s = \\vec x + \\frac{v_z}{{\\cal H}} \\hat z\\;, \\label{rs-rs}\n\\end{equation}\nwhere $\\hat z$ is the direction of the line of sight, $v_z \\equiv \\vec v \\cdot \\hat z$, and $\\vec v$ is the peculiar velocity. Also the relation between $z$ and $\\eta$ receives corrections due to peculiar velocities. These corrections are small for sufficiently distant objects for which $v \\ll H x$. Notice that we do not assume that the peculiar velocity is a function of the position $\\vec x$ since this holds only in the single-stream approximation, which breaks down for virialized objects on small scales \\cite{Seljak:2011tx,Vlah:2012ni}. \n\nThe derivation of the consistency relations follows closely what we did in real space, once we observe that also in redshift space the long mode induces a (time-dependent) translation. Indeed we have\n\\begin{align}\n\\vec x &\\to \\vec x + D \\, \\vec \\nabla \\Phi_{0,L}\\;, \\label{traslation} \\\\\n\\vec v & \\to \\vec v+ f {\\cal H} D \\, \\vec \\nabla \\Phi_{0,L} \\label{vL} \\;,\n\\end{align}\nwhere $D(\\eta)$ is the growth factor, $f(\\eta) \\equiv d \\ln D\/d \\ln a$ is the growth rate and $\\vec \\nabla \\Phi_{0,L}$ a homogenous gradient of the initial gravitational potential $\\Phi_{0,L}$, related to $\\delta_0$ defined in eq.~\\eqref{deltax2delta} by $\\nabla^2 \\Phi_{0,L} = \\delta_{0,L}$. \nThis corresponds to a redshift space translation \n\\begin{align}\n\\vec{\\tilde s} &= \\vec s + \\delta \\vec s\\;, \\\\\n\\delta \\vec s & \\equiv D \\, ( \\vec \\nabla \\Phi_{0,L} + f \\nabla_z \\Phi_{0,L} \\hat z) \\label{deltas}\\;,\n\\end{align}\nwhere we have applied to eq.~\\eqref{rs-rs} a spatial translation of the real-space coordinates and a shift of the peculiar velocity along the line of sight, respectively eqs.~\\eqref{traslation} and \\eqref{vL}.\nAs in real space, we can thus conclude that a redshift-space correlation function in the presence of a long mode $\\Phi_L$ is the same as the correlation function in the absence of the long mode but in {\\em translated} redshift-space coordinates:\\begin{equation}\n\\begin{split}\n\\langle \\delta^{(g,s)}(\\vec s_1,\\eta_1) \\cdots \\delta^{(g,s)}(\\vec s_n,\\eta_n) | {\\Phi_L}\\rangle &\\approx \\langle \\delta^{(g,s)}(\\vec{\\tilde s}_1,\\eta_1) \\cdots \\delta^{(g,s)}(\\vec{\\tilde s}_n,\\eta_n) \\rangle\\; \\\\\n& = \\sum_a \\delta \\vec s_a \\langle \\delta^{(g,s)}(\\vec s_1,\\eta_1) \\cdots \\vec \\nabla_a \\delta^{(g,s)}(\\vec s_a,\\eta_a) \\cdots \\delta^{(g,s)}(\\vec s_n,\\eta_n) \\rangle \\;, \\label{expansion}\n\\end{split}\n\\end{equation}\nwhere $\\delta \\vec s_a \\equiv D_a \\, ( \\vec \\nabla_a \\Phi_{0,L} + f_a \\nabla_{a,z} \\Phi_{0,L} \\hat z) $. 
To show this notice that the density in redshift space can be written in terms of the real-space distribution function \\cite{Seljak:2011tx,Vlah:2012ni}\n\\begin{equation}\n\\rho_s(\\vec s) = m a^{-3} \\int \\textrm{d}^3 p \\; f \\left(\\vec s - \\frac{v_z}{\\cal H} \\hat z, \\vec p\\right) \\;,\n\\end{equation}\nwhere $m$ is the mass of the particles and $\\vec p$ is the physical momentum. The statistical properties of $\\rho_s(\\vec s)$ in the presence of the long mode are inherited by its expression in real space\n\\begin{equation}\n\\rho_s(\\vec s)_{\\Phi_L} = \\frac{m}{a^3} \\int \\textrm{d}^3 p \\; f \\left(\\vec s - \\frac{v_z}{\\cal H} \\hat z +\\delta\\vec x, \\vec p +a m \\delta \\vec v\\right) = \\frac{m}{a^3} \\int \\textrm{d}^3 p' \\; f \\left(\\vec s - \\frac{v_z- \\delta v_z}{\\cal H} \\hat z +\\delta\\vec x, \\vec {p'}\\right) = \\rho_s(\\vec s + \\delta \\vec s),\n\\end{equation} \nwhere $\\delta x$ and $\\delta \\vec v$ are given by eqs.~\\eqref{traslation} and \\eqref{vL}.\n\nAgain this statement can be directly applied to the galaxy distribution and it thus includes the bias with respect to the dark matter distribution.\nNotice that in the plane-parallel approximation redshift space is still translationally invariant (although it is not rotationally invariant, since the line-of-sight is a preferred direction): correlation function only depends on the distance between points. This implies that the consistency relations will be zero when the short modes are taken at equal time, since the common translation does not change distances.\n\n\nIn the Fourier space conjugate to redshift space, eq.~\\eqref{expansion} becomes\n\\begin{equation}\n\\langle {\\Phi_{0}}({\\vec q}) \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\cdots \\delta^{(g,s)}_{\\vec k_n}(\\eta_n) \\rangle \\approx P_\\Phi (q) \\sum_a D(\\eta_a) \\big[ \\vec q \\cdot \\vec k_a + f(\\eta_a) q_z \\, k_{a,z} \\big] \n\\langle \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\cdots \\delta^{(g,s)}_{\\vec k_n}(\\eta_n) \\rangle \\;.\n\\end{equation}\nBy using for the long mode the linear relation between the density contrast in redshift space $\\delta$ and the gravitational potential $\\Phi$, i.e.\n\\begin{equation}\n\\delta^{(g,s)}({\\vec q},\\eta) = - (b_1+ f \\mu_{\\vec q}^2) D(\\eta) q^2 \\Phi_0 ({\\vec q}) \\;,\n\\end{equation}\nwhere $b_1$ is a linear bias parameter between galaxies and dark matter and $\\mu_{\\vec k} \\equiv \\vec k \\cdot \\hat z\/k$, the consistency relation above becomes\n\\begin{equation}\n\\label{CR_rs}\n\\begin{split}\n\\langle \\delta^{(g,s)}_{\\vec q}(\\eta) \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\cdots \\delta^{(g,s)}_{\\vec k_n}(\\eta_n) \\rangle \\approx & - \\frac{P_{g,s}(q,\\eta)}{b_1+ f \\mu_{\\vec q}^2} \\sum_a \\frac{D(\\eta_a)}{D(\\eta)} \\frac{k_a}{q} \\big[ \\hat q \\cdot \\hat k_a + f(\\eta_a) \\mu_{\\vec q} \\, \\mu_{\\vec k_a} \\big] \\\\ &\\times \\langle \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\cdots \\delta^{(g,s)}_{\\vec k_n}(\\eta_n) \\rangle \\;.\n\\end{split}\n\\end{equation}\n\nWe can check that this relation holds in perturbative calculation of redshift space distortions. 
The redshift space bispectrum reads \\cite{Bernardeau:2001qr}\n\\begin{equation}\n\\begin{split}\n\\langle \\delta^{(g,s)}_{\\vec q}(\\eta) \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\delta^{(g,s)}_{\\vec k_2}(\\eta_2) \\rangle' & = \\\\ 2 Z_2(-\\vec q, -\\vec k_2;\\eta_1) Z_1(\\vec q;\\eta) Z_1(\\vec k_2;\\eta_2) &\\langle \\delta({\\vec q}, \\eta) \\delta({-\\vec q},\\eta_1) \\rangle' \\langle \\delta({\\vec k_1},\\eta_1) \\delta({\\vec k_2}, \\eta_2) \\rangle' \\text{ + cyclic}\\;,\n\\end{split}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{split}\nZ_1(\\vec k;\\eta) & \\equiv (b_1 + f \\mu_{\\vec k}^2) \\;, \\\\\nZ_2(\\vec k_a,\\vec k_b;\\eta) & \\equiv b_1 F_2(\\vec k_a,\\vec k_b) +f \\mu_{\\vec k}^2 G_2(\\vec k_a,\\vec k_b) + \\frac{f \\mu_{\\vec k} k}{2} \\left[\\frac{\\mu_{\\vec k_a}}{k_a}(b_1+f\\mu_{\\vec k_b}^2)+ \\frac{\\mu_{\\vec k_b}}{k_b}(b_1+f\\mu_{\\vec k_a}^2)\\right] +\\frac{b_2}{2} \\;.\n\\end{split}\n\\end{equation}\nHere $b_1$ and $b_2$ are the linear and non-linear bias parameters and $F_2$ and $G_2$ are the standard second-order perturbation kernels for density and velocity respectively \\cite{Bernardeau:2001qr}.\nIn the limit $q \\to 0$ we have\n\\begin{equation}\n2 Z_2 (-\\vec q, -\\vec k_2;\\eta_1) \\approx (b_1 + f_1 \\mu_{\\vec k_2}^2) \\frac{\\vec q \\cdot \\vec k_2}{q^2} + (b_1 + f_1 \\mu_{\\vec k_2}^2) f_1 \\frac{k_2}{q} \\mu_{\\vec q} \\mu_{\\vec k_2} \\;.\n\\end{equation}\nThis gives\n\\begin{equation}\n\\begin{split}\n\\langle \\delta^{(g,s)}_{\\vec q}(\\eta) & \\delta^{(g,s)}_{\\vec k_1}(\\eta_1) \\delta^{(g,s)}_{\\vec k_2}(\\eta_2) \\rangle' \\\\ & \\approx \\frac{P_{g,s}(q,\\eta)}{b_1+ f \\mu_{\\vec q}^2} \\frac{D(\\eta_1)}{D(\\eta)} (b_1 + f_1 \\mu_{\\vec k_2}^2) \\left( \\frac{\\vec q \\cdot \\vec k_2}{q^2} + f_1 \\frac{k_2}{q} \\mu_{\\vec q} \\mu_{\\vec k_2} \\right) Z_1(\\vec k_2;\\eta_2) \\langle \\delta({\\vec k_1},\\eta_1) \\delta({\\vec k_2}, \\eta_2) \\rangle' \\\\ & \\approx -\\frac{P_{g,s}(q,\\eta)}{b_1+ f \\mu_{\\vec q}^2} \\frac{D(\\eta_1)}{D(\\eta)} \\frac{k_1}{q} (\\hat q \\cdot \\hat k_1 + f_1 \\mu_{\\vec q} \\mu_{\\vec k_1}) \\langle \\delta({\\vec k_1},\\eta_1) \\delta({\\vec k_2}, \\eta_2) \\rangle' \\text{ + $(1 \\leftrightarrow 2)$}\\;.\n\\end{split}\n\\end{equation}\nThe consistency relation is satisfied.\n\nAs in real space, it is possible to derive a resummed version of eq.~\\eqref{CR_rs}. The translation in redshift space introduces a factor\n\\begin{equation}\n\\exp \\Big[ i \\sum_a \\vec k_a \\cdot \\delta \\vec s( \\vec y,\\eta_a) \\Big] =\\exp \\Big[ \\int^\\Lambda\\frac{\\textrm{d}^3p}{(2\\pi)^3} \\sum_a D(\\eta_a)\\left( \\vec p \\cdot \\vec k_a + f(\\eta_a) p_z \\, k_{a,z}\\right) \\,e^{i \\vec p \\cdot \\vec{y}}\\Phi_{0}(\\vec p) \\Big]\\;\n\\end{equation}\nin the correlation functions. It is then straightforward to show that, as in Sec.~\\ref{sec:resum}, the consistency relation in redshift space eq.~\\eqref{CR_rs} remains the same even when the effect of all soft modes is resummed.\nMoreover, using the same procedures developed in the previous section, one can easily extend the consistency relations with multiple soft legs, softs loops and soft internal lines to redshift space.\n\n\n\n\\section{Conclusions}\nIn this paper we showed that one can have a complete control of soft modes at any order in $\\frac{k}{q} \\cdot \\delta_q$. 
The known cancellation of these effects for equal time correlators \\cite{Jain:1995kx,Scoccimarro:1995if,Bernardeau:2011vy,Bernardeau:2012aq,Blas:2013bpa,Carrasco:2013sva} is now on more general grounds: it is physically a consequence of the equivalence principle and the lack of statistical correlation between long and short modes, which holds in single-field inflation. Therefore this cancellation is very robust and holds beyond the single-stream approximation, and including the effects of baryons on short scales. These regimes are beyond the usual arguments based on perturbation theory. Moreover, we now know exactly what is the effect of soft modes on correlators at different times.\nTo make contact with observations one has to understand if the consistency relations can be written directly in redshift space. We showed that this is the case, without adding any assumption about the short modes: for example one does not need to assume the single-stream approximation, which breaks down on short scales.\n\nBesides the theoretical interest of these results, the main conclusion for observations is that a detection in the squeezed limit of a $1\/q$ behaviour at equal time would be a robust detection of either multi-field inflation or a violation of the equivalence principle. The next step is to evaluate how constraining measurements will be for explicit models that do not respect equivalence principle, taking into account that in the data one is obviously limited in the hierarchy between $k$ and $q$. We will come back to this in a future publication \\cite{Creminelli:2013nua}.\n\n\\subsection*{Acknowledgements}\nWhile finishing this paper reference \\cite{Peloso:2013spa} appeared. There is no disagreement with our results: in particular, we both agree that a violation of the EP implies a breaking of the consistency relations in the form of eq.~\\eqref{CR}. We thank M.~Peloso and M.~Pietroni for discussions. We acknowledge related work by A.~Kehagias, J.~Nore\\~na, H.~Perrier and A.~Riotto: where comparison is possible, the results agree.\nIt is a pleasure to thank V.~Desjacques, R.~Scoccimarro and M.~Zaldarriaga for useful discussions, and the anonymous referee for useful comments. JG and FV acknowledge partial support by the ANR {\\it Chaire d'excellence} CMBsecond ANR-09-CEXC-004-01.\n\n\\footnotesize\n\\parskip 0pt\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\label{sec:Introduction}\n\n\\IEEEPARstart{H}{uman} Presence-aware Systems (HPSs) are rapidly\ngrowing as new services become available in various areas of modern\nlife \\cite{Petrov-2019} such as assisted living, ambient intelligence,\nsmart spaces, home automation, human-robot collaboration, safety and\nsecurity, just to cite a few. Among these applications, non-cooperative,\nalso known as passive, HPS are the most attractive since they do not\nrequire the monitored users to carry or wear any electronic device\nor specific sensors. Usually, these systems are vision-based \\cite{dalal-2005,benezeth2011towards,choi-2013};\nhowever, the ubiquitous presence of wireless networks paves the way\ntowards the exploitation of wireless radio-frequency (RF) networks,\nnot only as communication devices, but also as body proximity\/location\nvirtual sensors. 
Last but not least, radio-based HPS are privacy-neutral\nsince they do not reveal any private information about the monitored\npeople.\n\nHPS systems exploit the fact that people, or obstacles, in the surroundings\nof an area covered by a wireless radio network induce signal alterations\nthat can be detected and exploited for body occupancy inference applications.\nFor instance, Device-Free Localization (DFL) systems \\cite{Youssef:2007:CDP:1287853.1287880,savazzi2016magazine}\nexploit a network of RF nodes to detect the presence, to locate and\ntrack the position of moving objects or people in a confined area\ncovered by the wireless network itself. However, a radio-based HPS\nis not only able to localize and track \\cite{wilson10,nicoli2016eusipco,wang-2017}\npeople, or objects, but it has also been proven to efficiently perform\nother tasks such as to count the number of people \\cite{depatla2015jsac},\nto identify and recognize patterns related to their activities \\cite{savazzi2016icassp,wang2015icmcn}\nand intentions \\cite{savazzi2016icassp}, to detect dangerous worker\nconditions and safety status \\cite{kianoush2016iot,Talebpour-2019},\nand to act as a proximity monitor \\cite{montanari2017proximity}. This\nis made possible as the presence of targets (\\ie objects or people)\naffects the propagation of the radio waves in the covered area \\cite{Woyach-et-al,patwari10},\nfor example by inducing predictable alterations of the Received Signal\nStrength (RSS) field that depend on the targets' positions, in both\nstatic \\cite{Seifeldin-et-al} and dynamic \\cite{Saeed-et-al} environments.\n\n\\subsection{Related works}\n\n\\label{subsec:Related-works}\n\nThe effect of the presence of people on the received RF signals is\na well-known topic \\cite{obayashi1998body,villanese2000pedestrian}\nand finds its roots in the research activities about the electromagnetic\n(EM) propagation phenomena caused by natural or artificial obstacles\nduring the first experimental trials at the dawn of the radio era\n\\cite{Brittain1994}. These studies received a strong impetus\nafter the middle of the last century, mostly for outdoor coverage applications\n\\cite{furutsu-1963,vogler-1982,comparative,tzaras-2000}. However,\ndespite some recent attempts to model the body-induced fading effects\non short-range radio propagation \\cite{smith2013propagation}, these\nresearch activities are mostly related to inter- \\cite{Koutitas,de2015analysis}\nand intra-body \\cite{Namjun_Cho-et-al,andreu2016spatial} short-range\nradio communications. The aim of these research activities is to quantify\nthe radio propagation losses in narrow \\cite{humanbody2} or wide-band\n\\cite{Fort-et-al} indoor scenarios with the main purpose of mitigating\nthese effects. 
Only a few research works \\cite{Koutitas,liu2009fading,conducting_cylinder}\nfocus their attention on the geometrical relations between the transmitter\n(TX) and receiver (RX) location, the position and composition of the\nbody, and its size.\n\nA general EM model for the prediction of the mathematical relations\nbetween location, size and composition of a \\emph{single-target} and\nthe corresponding EM field perturbation, is still disputable as shown\nin \\cite{hamilton-et-al,yiugitler2016experimental} or too complex\nto be of practical use as based on ray tracing techniques \\cite{de2015analysis,eleryan-et-al}\nor Uniform Theory of Diffraction (UTD) \\cite{Koutitas,conducting_cylinder}.\nOther EM methods \\cite{kibret2015characterizing,ziri2005prediction,yokota2012propagation}\nand physical-statistical models \\cite{eleryan-et-al,mohamed2017physical,rampa2017em,Hillyard-2020}\nare simpler than the previous ones but still limited to a single target.\nTo the authors knowledge, and according to the current literature,\nan analytical, or semi-analytical, approach towards a \\emph{true}\n\\emph{multi-body} model has never been tackled before. Usually, multi-target\n(\\ie multi-body) problems have been solved by assuming the linear\nsuperposition of the single-body extra attenuations \\cite{patwari10,wilson10}.\nHowever, the mutual effects induced by multiple bodies moving concurrently\nin the same space must be accounted for.\n\nIn \\cite{nicoli2016eusipco} a DFL system has been proposed to track\ntwo targets moving concurrently by using an EM model that is fully\ndescribed in \\cite{Rampa-2019}. On the contrary, in this paper, the\nmodel is generalized to an \\emph{arbitrary number of targets}. A practically-usable\nphysical-statistical model is thus designed for the prediction and\nthe evaluation of the body-induced propagation losses, namely the\nRSS field, found in true $N$-targets scenarios with $N\\geq1$. This\n$N$-body model is able to describe both dominant static component\nand stochastic fluctuations of the power loss as a function of the\nlocations of the $N$ targets, their size, orientation and random\nmovements with respect to the link path.\n\n\\subsection{Original contributions}\n\n\\label{subsec:Original-contributions}\n\nThe paper proposes an EM framework where the field perturbations induced\nby an arbitrary number of human bodies are modelled as a superposition\nof \\emph{diffraction} and\\emph{ multipath} terms. The diffraction\ncomponent is defined according to the scalar diffraction theory and\nis characterized by the geometrical description (\\ie location, size,\norientation) and the movement characteristics (\\ie rotations and\nrandom movements around the nominal position) of $N$ targets according\nto the knife-edge hypothesis \\cite{lee1978path,vogler-1982,Edge_diffraction,deygout1991correction}.\nThe multipath fading term is assumed to impair the radio link due\nto the presence of the bodies placed inside the sensitivity area \\cite{rampa2015letter}\naround the LOS (Line Of Sight) path that connects the transmitter\nand the receiver. 
However, unlike \\cite{rampa2015letter}, where RSS\nperturbations are predicted for a \\emph{single} \\emph{small} target\n\\cite{comparative,rampa2015letter,knife_edge} moving only in the\ncentral part of the LOS path according to the paraxial approximation\n\\cite{rampa2015letter}, this novel model provides a representation\nof the power losses induced by \\emph{multiple} bodies having \\emph{any}\nsize, and placed \\emph{anywhere} in the area surrounding the radio\nlink. The model presented here extends the dual-body case exploited\nin \\cite{nicoli2016eusipco} and then presented in \\cite{Rampa-2019},\nby considering a generic EM scenario with an arbitrary number of human\nbodies in the surroundings of a radio link. In the former reference\n\\cite{nicoli2016eusipco}, the dual-body model is neither derived\nnor justified but it is just introduced to perform DFL tasks and compare\nthe results against other methods. In the latter reference \\cite{Rampa-2019},\nthe dual-body model is derived from first principles and then described\nand discussed in detail. The experimental results presented here\nconfirm that the proposed model can effectively describe the mathematical\nrelations between the target positions and the measured RSS values.\nComparisons with the results obtained with the EM simulator Feko also\nsupport the validity of the proposed model.\n\nThe novel contributions of this paper are: \\emph{i}) the definition\nof a general EM framework for the multi-body scenario; \\emph{ii})\nthe derivation, from first principles, of the full equations for the\nprediction of the global extra attenuation due to \\emph{$N$} bodies,\nor objects, in the LOS area; \\emph{iii}) the derivation of the analytical\nformulas under the paraxial hypothesis for the general $N$ bodies\nscenario; \\emph{iv}) the evaluation of the extra attenuation predictions\nfor the dual-body scenario (\\ie \\emph{$N=2$}) and their comparison\nagainst the results obtained using full EM simulations; and \\emph{v})\ntuning of the dual-body model parameters based on on-field RSS measurement\ntrials and comparisons of the model predictions against the aforementioned\nRSS measurements.\n\nThe paper is organized as follows. The diffraction model that accounts\nfor the deterministic term of the multi-body induced extra attenuation\nis shown in Sect. \\ref{sec:Diffraction-model} for any number $N$\nof targets. The complete physical-statistical model for the prediction\nof the RSS field is illustrated in Sect. \\ref{sec:Physical-statistical-modeling}.\nIn particular, the dual-body model is highlighted as a practical case\nstudy. Sect. \\ref{subsec:Model-calibration} deals with the evaluation\nof the proposed multi-target model featuring a comparative analysis\nagainst experimental measurements and simulation results. The concluding\nremarks are drawn in Sect. \\ref{sec:Conclusions}.\n\n\\begin{figure}[tp]\n\\begin{centering}\n\\includegraphics[clip,scale=0.35]{Fig1}\n\\par\\end{centering}\n\\caption{\\label{fig:layout}a) Generic layout of a HPS-based wireless network\ncomposed of $D$ nodes and $L$ links where $T_{n}$ is the $n$-th\ntarget located inside the monitored area; b) Simplified representation\nof the single-link single-body scenario where the human body $T_{1}$\nis sketched as a 3D cylinder and then simplified as a 2D knife-edge\nsurface $S_{1}$.}\n\n\\vspace{-0.4cm}\n\\end{figure}\n\n\n\\section{Diffraction framework for the multi-body scenario}\n\n\\label{sec:Diffraction-model}\n\nAs sketched in Fig. 
\\ref{fig:layout}.a, a generic HPS consists of\na partially or completely connected wireless mesh network composed\nof $D$ RF nodes \\cite{Youssef:2007:CDP:1287853.1287880,savazzi2016magazine}\nand $L\\leq D\\left(D-1\\right)\/2$ bidirectional links. The HPS-enabled\nnetwork is composed of nodes that are able to perform power measurements\non the RF signal and, after some processing steps, to extract body\noccupancy-related information. We assume that all the RF nodes can\nmeasure the RSS field values at discrete time instants. No specific\nadditional RF hardware \\cite{savazzi2016magazine} is required since\nRSS values are computed in the normal operations of the networked\nRF nodes for channel estimation\/equalization and frequency\/frame synchronization\ntasks.\n\nWithout any loss of generality, as described in Fig. \\ref{fig:layout}.b,\nin what follows, we will focus on the single-link scenario (\\ie $L=1$\nand $D=2$), by introducing the single-target ($T_{1}$ being $N=1$)\nfirst and then the multi-target ($T_{1},...,T_{N}$ with $N>1$) cases.\nHowever, the multi-body model presented here can be exploited in a\ngeneral multi-link scenario with $D$ nodes, $L$ links and $N$ targets\nby using electric field superposition. In addition, it can be extended\nto make use of other physical layer channel information measures such\nas the Channel State Information (CSI) and the Channel Quality Information\n(CQI) \\cite{savazzi2016magazine} as well. However, this discussion\nis outside the scope of this work.\n\nIt is worth noticing that all the proposed models apply to a generic\nlink of the radio network: therefore, they could be easily tailored\nto predict RSS over arbitrarily complex network structures for more\nrobust body positioning, as proposed in device-free localization methods\n\\cite{patwari10,wilson10,savazzi2016magazine}. In addition, modeling\nof RSS is instrumental for link selection operations, namely to identify\nan optimized subset of links that are most influenced by the target\npresence \\cite{nicoli2016eusipco}.\n\n\\begin{figure}[tp]\n\\begin{centering}\n\\includegraphics[clip,scale=0.24]{Fig2}\n\\par\\end{centering}\n\\caption{\\label{fig:single}Geometrical representation of the same scenario\nshown in \\ref{fig:layout}.b, where a horizontal link of length $d$\nis positioned at distance $H$ from the floor and a 2D knife-edge\nsurface $S_{1}$, with variable traversal size $c_{1}$ and height\n$h_{1}$, represents the body $T_{1}$ that is placed on the floor.}\n\n\\vspace{-0.4cm}\n\\end{figure}\n\n\n\\subsection{Single body model (SBM)}\n\n\\label{subsec:Single-body-model}\n\nThe single body model \\cite{rampa2017em} is briefly recalled in this\nsection since it is the starting point for the multi-target model\nthat will be described in Sect. \\ref{subsec:Multi-body-model}. As\noutlined in Fig. \\ref{fig:single}, the human body (\\ie the only\ntarget $T_{1}$ located near the single-link area) is represented\nby a perfectly EM absorbing 3D homogeneous cylinder with an elliptical\nbase of minor and major axes $a_{1}$ and $b_{1}$, respectively,\nthat simulates the human head, torso, legs and arms (placed near the\ntorso). Most references assume a 3D cylinder with a circular base\n\\cite{conducting_cylinder,ghaddar2004modeling} or a prism \\cite{de2015analysis}\nwhile only a few \\cite{liu2009fading,reusens2007path} assume also\nthat the arms can freely move with respect to the torso. 
Considering\na dynamic scenario where the 3D cylinder, modeling the body, can freely\nmove horizontally and rotate around its generic \\emph{nominal} position\n$(x,y,z)$ showing different views, the target is reduced \\cite{rampa2017em}\nto a 2D rectangular blade (\\ie a knife-edge surface) \\cite{Edge_diffraction},\northogonal to the LOS path at distance $d_{1}$ and $d_{2}$ from\nthe TX and RX, respectively. The knife-edge surface is vertically\nplaced close to the link area and can freely rotate and move showing\ndifferent body views during its movements. The presence of the floor\ndoes not imply any influence on the EM propagation and it is used\nhere only for geometrical reasons \\ie to define the height of the\nlink and the placement constraints of the knife-edge surface representing\nthe body. Notice that the knife-edge approximation ignores important\nEM parameters such as polarization, permittivity, conductivity, shape,\nradius of curvature, and surface roughness \\cite{Davis-et-al}.\n\nAccording to the Fig. \\ref{fig:layout}.b, the radio link is horizontally\nplaced at distance $H$ from the floor and the 3D target $T_{1}$,\nthat is placed on the floor, is free to move and rotate around the\nvertical axis in the surroundings of the LOS path. The corresponding\nfirst Fresnel's zone ellipsoid \\cite{Edge_diffraction}, with radius\n$R=\\sqrt{\\lambda d_{1}d_{2}\/d}$, does not have any contact with all\nother parts of the scenario (\\eg walls, ceiling, furniture or other\nobstacles) except for the aforementioned target. Being $R\\leq R_{\\max}=\\sqrt{\\lambda d}\/2$,\nwhere $R_{\\max}$ is the maximum value of the radius $R$, $\\lambda$\nis the wavelength and $d$ is the RX-TX distance (\\ie the link path\nlength), this constraint becomes $2H>\\sqrt{\\lambda d}$. Notice that,\nas stated by standard short-range indoor propagation models \\cite{ITU-indoor},\nground attenuation effects may be safely ignored for a radio link\ninside a single indoor large room\/space \\emph{e.g.} a hall.\n\nThe equivalent 2D knife-edge surface $S_{1}$ has height $h_{1}$,\nwidth $c_{1}$, and it is placed orthogonal to the LOS path at location\n$\\mathbf{X}_{1}=[x_{1},y_{1}]^{T}$. $\\mathbf{X}_{1}$ coincides with\nthe first two coordinates of the $S_{1}$ barycenter $G_{1}=\\left(x_{1},y_{1},z_{1}\\right)$\nof the knife-edge $S_{1}$ since $z_{1}$ assumes the constant value\n$z_{1}=h_{1}\/2-H$. The point $G_{1}^{'}$ is the intersection of\nthe vertical axis passing through $G_{1}$ and the horizontal plane\n$z=0$. In the followings, the position of the target $T_{1}$ (\\ie\nthe position of $G_{1}^{'}$) is thus identified by the off-axis displacement\n$y_{1}$ and the distance $x_{1}=d_{1}$ of $S_{1}$ from the TX.\nHowever, a true person can also turn and make involuntary\/voluntary\nmovements while standing on the floor. Therefore, the 3D target $T_{1}$,\nrepresented by the 2D knife-edge surface $S_{1}$, can assume any\norientation $\\chi_{1}\\in[-\\pi,\\pi]$ with respect to the LOS path.\nIt can make also some small movements $\\Delta\\mathbf{X}_{1}=[\\Delta x_{1},\\Delta y_{1}]^{T}$\naround the nominal location $\\mathbf{X}_{1}$ thus showing a changing\ntraversal size $c_{1}=c_{1}(\\chi_{1})\\in[a_{1},b_{1}]$ and location\n$\\mathbf{X}_{1}+\\Delta\\mathbf{X}_{1}$.\n\nAccording to the scalar theory of diffraction, the electric field\nat the RX, that is generated by the isotropic source in TX is modified\nby the presence of the 2D knife-edge surface $S_{1}$ located in the\nlink area \\cite{rampa2017em}. 
It can be predicted \\cite{comparative}\nas being generated by a virtual array of Huygens' sources located\non the plane containing $S_{1}$ but not belonging to the obstacle itself. In far field\nconditions, the electric field $dE$ at the RX due to the diffraction\neffects caused by the elementary Huygens' source of area $dS_{1}$\nwith generic coordinates $\\left(x,y,z\\right)$ is given by\n\n\\begin{equation}\ndE=j\\,\\frac{E_{0}\\,d}{\\lambda\\,r_{1}\\,r_{2}}\\,\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{1}+r_{2}-d\\right)\\right\\} dS_{1}\\,,\\label{eq:dE_tot}\n\\end{equation}\nwhere $r_{1}$ and $r_{2}$ are the distances of the generic elementary\narea $dS_{1}$ from the TX and RX, respectively. $E_{0}$ is the free-space\nelectric field that is described by the following equation\n\n\\begin{equation}\nE_{0}=-j\\,\\frac{\\eta\\,I\\text{\\ensuremath{\\ell}}}{2\\,\\lambda\\,d}\\,\\exp\\left(-j\\frac{2\\,\\pi\\,d}{\\lambda}\\right)\\,,\\label{eq:E0}\n\\end{equation}\nwith $\\eta$ being the free-space impedance and $I\\text{\\ensuremath{\\ell}}$\nthe momentum of the source. The electric field at the RX is given\nby \\cite{rampa2017em}\n\n\\begin{equation}\n\\begin{array}{ll}\nE= & -j\\,\\frac{\\eta\\,I\\text{\\ensuremath{\\ell}}}{2\\,\\lambda\\,d}\\,\\exp\\left(-j\\frac{2\\,\\pi\\,d}{\\lambda}\\right)\\cdot\\\\\n & \\cdot\\left\\{ 1-j\\,\\frac{d}{\\lambda}\\int\\limits _{S_{1}}\\frac{1}{r_{1}\\,r_{2}}\\,\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{1}+r_{2}-d\\right)\\right\\} dS_{1}\\right\\} \\,,\n\\end{array}\\label{eq:E_single}\n\\end{equation}\nwhere the first term refers to the electric field (\\ref{eq:E0}) due to\nthe free-space propagation in the empty scenario and the second term\nincludes the diffraction effects due to the body presence according\nto Eq. (\\ref{eq:dE_tot}). The integral of the second term is computed\nover the rectangular domain corresponding to the $S_{1}$ region defined\nas $S_{1}=\\left\\{ \\left(x,y,z\\right)\\in\\mathbb{R^{\\textrm{3}}}:\\:x=x_{1}=d_{1}\\right.$,\n$y_{1}-c_{1}\/2\\leq y\\leq y_{1}+c_{1}\/2$, $\\left.-H\\leq z\\leq h_{1}-H\\right\\} $.\nFocussing on the \\emph{extra attenuation} induced by the body \\emph{w.r.t.}\nthe free-space, Eq. (\\ref{eq:E_single}) can be written as\n\n\\begin{equation}\n\\frac{E}{E_{0}}=1-j\\,\\frac{d}{\\lambda}\\int\\limits _{S_{1}}\\frac{1}{r_{1}\\,r_{2}}\\,\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{1}+r_{2}-d\\right)\\right\\} dS_{1}\\,.\\label{eq:E_E0_tot-single}\n\\end{equation}\nAccording to (\\ref{eq:E_E0_tot-single}), the presence of the body\ninduces an extra attenuation $A_{\\textrm{dB}}=-10\\,\\log_{10}\\left|E\/E_{0}\\right|^{2}$\n\\emph{w.r.t.} the free-space propagation. Being a forward-only method,\nthe diffraction model holds only for the generic target $T_{1}$ placed\nin the area $\\mathcal{Y}$ near the radio link where it is $\\mathcal{Y}=\\left\\{ \\left(x,y\\right)\\subset\\mathbb{R^{\\textrm{2}}}:\\:0<x<d\\right\\} $.\n\n\\begin{figure}[tp]\n\\caption{\\label{fig:link}Multi-body ($N>1$) scenario composed\nby a horizontal single-link of length $d$, placed at distance $H$\nfrom the floor, and $N$ different 2D equivalent knife-edge surfaces\n$S_{1},S_{2},$...,$S_{N}$ corresponding to the targets $T_{1},T_{2},$...,$T_{N}$\nlocated in $\\mathbf{X}_{1},\\mathbf{X}_{2},...,\\mathbf{X}_{N}$, respectively.}\n\n\\vspace{-0.4cm}\n\\end{figure}\n\n\\subsection{Multi-body model (MBM)}\n\n\\label{subsec:Multi-body-model}\n\nFor $N>1$ targets, the knife-edge diffraction model (\\ref{eq:dE_tot})\nstill holds true for each \\emph{n-}th surface $S_{n}$, although equation\n(\\ref{eq:E_E0_tot-single}) is no longer valid. 
The LOS path is now\ndivided into $N+1$ segments of length equal to $d_{1},d_{1,2},d_{2,3},\\ldots,d_{N-1,N},d_{N}$\nwith $d=d_{1}+\\sum_{n=1}^{N-1}d_{n,n+1}+d_{N}$. Generalizing the\nmodel of Sect. \\ref{subsec:Single-body-model} for $N$ targets, the\nterm $r_{n,n+1}$ represents the distance between two consecutive\nelementary areas $dS_{n}$ and $dS_{n+1}$ while the terms $r_{1}$\nand $r_{N}$ represent the distance between the transmitter and $dS_{1}$,\nand the distance between $dS_{N}$ and the receiver, respectively.\nThe \\emph{n}-th elementary area $dS_{n}=d\\xi_{n}d\\varsigma_{n}$ is\nlocated on the \\emph{n-}th plane $S_{n}$ that is identified by its\nposition $\\mathbf{X}_{n}$. The coordinate axes $\\xi_{n}$ and $\\varsigma_{n}$\n(for clarity not shown in Fig. \\ref{fig:link}) have the origin in\n$O_{n}^{'}$ and are directed as the $y$ and $z$ axes. As an additional\nhypothesis with respect to the ones of Sect. \\ref{subsec:Single-body-model},\nonly forward propagation from TX to RX is considered with no backward\nscattered waves between any surfaces $S_{n}$ and the RX (\\ie both\nsingle- and multiple-scattering effects between knife-edges are ignored).\n\nIn far field conditions, by assuming only forward propagation, $\\forall T_{n}$\nwith $n=1,\\ldots,N-1$, the elementary electric field $dE_{n+1}$\ndue to the diffraction effects caused by the elementary Huygens' source\nof area $dS_{n}$, is computed at $dS_{n+1}$ by considering the distance\n$r_{n,n+1}$ between the elements $dS_{n}$ and $dS_{n+1}$ according\nto\n\n\\begin{equation}\ndE_{n+1}=j\\,\\frac{dE_{n}\\,dS_{n}}{\\lambda r_{n,n+1}}\\exp\\left(-j\\frac{2\\pi r_{n,n+1}}{\\lambda}\\right)\\,.\\label{eq:dE_iterative}\n\\end{equation}\nThe electric field $E_{1},$ impinging on the first target, is\n\n\\begin{equation}\nE_{1}=E_{0}\\,\\left(\\frac{d}{r_{1}}\\right)\\exp\\left(-j\\frac{2\\pi\\left(r_{1}-d\\right)}{\\lambda}\\right)\\,,\\label{eq:E_first}\n\\end{equation}\nwhile the electric field $dE$, that is measured at the RX node and\ngenerated by the area $dS_{N}$ of the target $N$ closest to the\nRX is given by\n\n\\begin{equation}\ndE=j\\,\\frac{dE_{N}\\,dS_{N}}{\\lambda r_{N}}\\exp\\left(-j\\frac{2\\pi r_{N}}{\\lambda}\\right)\\,.\\label{eq:E_last}\n\\end{equation}\nCombining Eqs. (\\ref{eq:dE_iterative}-\\ref{eq:E_last}), it is\n\n\\begin{equation}\n\\begin{aligned}dE={} & j^{N}\\frac{d}{\\lambda^{N}\\,r_{N}\\,r_{N-1,N}\\,...\\,r_{1,2}\\,r_{1}}\\,E_{0}\\cdot\\\\\n{} & \\cdot\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{N}+r_{N-1,N}+...+r_{1,2}+r_{1}-d\\right)\\right\\} \\cdot\\\\\n{} & \\cdot dS_{1}\\,dS_{2}\\,...\\,dS_{N}\\,.\n\\end{aligned}\n\\label{eq:dE_tot_N}\n\\end{equation}\nTo obtain the electric field $E$, Eq. (\\ref{eq:dE_tot_N}) must be\nintegrated over the domain $S^{\\left(c\\right)}=\\bigcup_{n=1}^{N}S_{n}^{\\left(c\\right)}$\nwhere each region $S_{n}^{\\left(c\\right)}$ corresponds to the 2D\nplane $\\mathcal{P}_{n}\\supset S_{n}$ that does not contain the points\nof the knife-edge surface $S_{n}$. Eq. 
(\\ref{eq:dE_tot_N}) becomes\nnow\n\n\\begin{equation}\n\\begin{aligned}\\frac{E}{E_{0}}={} & j^{N}\\int\\limits _{S_{1}^{^{\\left(c\\right)}}}\\int\\limits _{S_{2}^{^{\\left(c\\right)}}}\\cdots\\int\\limits _{S_{N}^{^{\\left(c\\right)}}}\\frac{d}{\\lambda^{N}\\,r_{N}\\,r_{N-1,N}\\,...\\,r_{1,2}\\,r_{1}}\\cdot\\\\\n{} & \\cdot\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{N}+r_{N-1,N}+...+r_{1,2}+r_{1}-d\\right)\\right\\} \\cdot\\\\\n{} & \\cdot dS_{1}\\,dS_{2}\\,...\\,dS_{N}\\,.\n\\end{aligned}\n\\label{eq:E_E0_tot_N}\n\\end{equation}\n\nWe now define $E^{\\left(n\\right)}$ as the value of the electric field\nat the RX node when only one target, \\ie the \\emph{n-}th body or\nobstacle, is present in the LOS area. Similarly, $E^{\\left(n,m\\right)}$\nrefers to the electric field at the receiver when only the \\emph{n-}th\nand \\emph{m-}th obstacles out of $N$, are in the link area, and so\non. The notation $E^{\\left(1,2,...,N\\right)}$ thus highlights the\ncontributions of the $N$ targets to the link loss: this is the total\nelectric field $E$ at the receiver given by (\\ref{eq:E_E0_tot_N}).\nConsidering the above definitions, Eq. (\\ref{eq:E_E0_tot_N}) may\nbe rewritten to highlight the mutual interactions of the targets,\ngrouped by $\\tbinom{N}{N-1}$ singles, $\\tbinom{N}{N-2}$ pairs, $\\tbinom{N}{N-3}$\ntriples and so on, as\n\n\\begin{equation}\n\\begin{aligned}\\left(-1\\right)^{N}\\,\\frac{E^{\\left(1,2,..,N\\right)}}{E_{0}}{} & =-1+\\underset{\\mathrm{singles}}{\\underbrace{\\sum_{n=1}^{N}\\frac{E^{\\left(n\\right)}}{E_{0}}}}-\\underset{\\mathrm{pairs}}{\\underbrace{\\sum_{n=1}^{N-1}\\sum_{m=n+1}^{N}\\frac{E^{\\left(n,m\\right)}}{E_{0}}}}+\\\\\n+{} & \\underset{\\mathrm{triples}}{\\underbrace{\\sum_{n=1}^{N-2}\\sum_{m=n+1}^{N-1}\\sum_{k=m+1}^{N}\\frac{E^{\\left(n,m,k\\right)}}{E_{0}}}}+...+\\\\\n{} & +\\Psi(S_{1},...,S_{N})\n\\end{aligned}\n\\label{eq:E_E0_tot_N_iter}\n\\end{equation}\nwhere the last term $\\Psi(S_{1},...,S_{N})$: \n\\begin{equation}\n\\begin{aligned}\\Psi(S_{1},...,S_{N}){}{} & =j^{N}\\int\\limits _{S_{1}}\\int\\limits _{S_{2}}...\\int\\limits _{S_{N}}\\frac{d}{\\lambda^{N}\\,r_{N}\\,r_{N-1,N}\\,...\\,r_{1,2}\\,r_{1}}\\cdot\\\\\n{} & \\cdot\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{N}+r_{N-1,N}+...\\right.\\right.\\\\\n{} & \\left.\\left....+r_{1,2}+r_{1}-d\\right)\\right\\} \\cdot dS_{1}dS_{2}...dS_{N}\\,,\n\\end{aligned}\n\\label{eq:integ}\n\\end{equation}\nis the integral computed over the composite domain defined by the\nunion $S^{\\left(1,2,...,N\\right)}=\\bigcup_{n=1}^{N}S_{n}$ of the\n$N$ rectangular knife-edge surfaces $S_{n}$. The knife edge surfaces\n(Fig. 
\\ref{fig:link}) have the following definitions: for $n=1$\nit is $S_{1}=\\left\\{ \\left(x,y,z\\right)\\in\\mathbb{R^{\\textrm{3}}}:\\right.$\n$x=x_{1}=d_{1}$, $y_{1}-c_{1}\/2\\leq$ $y$ $\\leq y_{1}+c_{1}\/2$,\n$\\left.-H\\leq z\\leq h_{1}-H\\right\\} $ while $\\forall n=2,...,N$\nit is also $S_{n}=\\left\\{ \\left(x,y,z\\right)\\in\\mathbb{R^{\\textrm{3}}}:\\right.$\n$x=x_{n}=d_{1}+\\sum_{i=1}^{n-1}d_{i,i+1}$, $y_{n}-c_{n}\/2\\leq$ $y$\n$\\leq y_{n}+c_{n}\/2$, $\\left.-H\\leq z\\leq h_{n}-H\\right\\} $.\n\nUsing (\\ref{eq:E_E0_tot_N_iter}), for a generic number of targets\n$N$, the electric field ratio $\\frac{E^{\\left(1,2,..,N\\right)}}{E_{0}}$\ndue to $N$ obstructing bodies is composed by the single target contributions\n$\\frac{E^{\\left(n\\right)}}{E_{0}}$, for $n=1,...,N$ terms, the target\npairs, $\\frac{E^{\\left(n,m\\right)}}{E_{0}}$, for $n=1,...,N-1,$$m=n+1,...,N,$\nthe triples, $\\frac{E^{\\left(n,m,k\\right)}}{E_{0}}$, for $n=1,...,N-2$,\n$m=n+1,...,N-1$, $k=m+1,...,N$, and so on, up to the contributions\nof the $N-1$ target groups. Likewise PSBM and SBM, MBM gives valid\npredictions when the targets are placed in the area $\\mathcal{Y}$\nnear the LOS path, as defined in Sect. \\ref{subsec:Single-body-model}.\n\nWhen two bodies are in the area $\\mathcal{Y}$, the received electric\nfield $E^{\\left(1,2\\right)}$ embeds the mutual effects of the two\ntargets $T_{1}$ (\\ie $S_{1}$) and $T_{2}$ (\\ie $S_{2}$) on the\nradio propagation. $E^{\\left(1,2\\right)}$ is computed from the single-target\nterms $E^{\\left(1\\right)}$ and $E^{\\left(2\\right)}$ as\n\n\\begin{equation}\n\\begin{aligned}\\frac{E^{\\left(1,2\\right)}}{E_{0}}={} & -1+\\frac{E^{\\left(1\\right)}}{E_{0}}+\\frac{E^{\\left(2\\right)}}{E_{0}}+\\Psi(S_{1},S_{2}).\\end{aligned}\n\\label{eq:E_E0_tot_2}\n\\end{equation}\nwhere the \\emph{mixed} term, that depends on both knife-edges $S_{1}$\nand $S_{2}$, is defined according to Eq. (\\ref{eq:integ}) as\n\\begin{equation}\n\\begin{aligned}\\Psi(S_{1},S_{2}){}{} & =-\\int\\limits _{S_{1}}\\int\\limits _{S_{2}}\\frac{d}{\\lambda^{2}\\,r_{2}\\,r_{1,2}\\,r_{1}}\\cdot\\\\\n{} & \\cdot\\exp\\left\\{ -j\\frac{2\\pi}{\\lambda}\\left(r_{2}+r_{1,2}+r_{1}-d\\right)\\right\\} dS_{1}dS_{2}\\,.\n\\end{aligned}\n\\label{eq:integ_n2}\n\\end{equation}\nIn particular, from Eq. (\\ref{eq:E_E0_tot_2}), the term $E^{\\left(1\\right)}$\nquantifies the effect of the target $T_{1}$ alone in the link area\naccording to Eq. (\\ref{eq:E_E0_tot-single}). It depends on the corresponding\ntarget size $c_{1}$, the target height $h_{1}$, the link height\n$H$ from the floor, and the distances $d_{1}$ and $d-d_{1}=d_{2}+d_{12}$\nof the body $T_{1}$ from the TX an RX, respectively. Likewise, $E^{\\left(2\\right)}$\nrefers to the contributions of target $T_{2}$ only, according to\nits target size $c_{2}$ and height $h_{2}$, the link height $H$\nfrom the floor, and the distances $d_{1}+d_{12}$ and $d_{2}$ of\nthe body $T_{2}$ from the TX an RX, respectively.\n\nFor $N=1$, the proposed multi-body model (MBM) reduces to the single\nbody model (SBM) as expected since all singles, pairs, triples and\nother high order terms of Eq. (\\ref{eq:E_E0_tot_N_iter}) vanish except\nfor the term $\\Psi(S_{1})$. The MBM model for $N=2$ targets (\\ref{eq:E_E0_tot_2})\nhas been initially introduced in \\cite{Rampa-2019} along with some\npreliminary results. This dual-target model can be directly obtained\nfrom the Eq. (\\ref{eq:E_E0_tot_N}) or, equivalently, Eq. 
(\\ref{eq:E_E0_tot_N_iter}).\nModel comparisons are presented in Sect. \\ref{subsec:Model-calibration}.\n\n\\subsection{Paraxial multi-body model (PMBM)}\n\n\\label{subsec:paraxial_sec}\n\nFor HPS applications, paraxial hypotheses are realistic only for small\ntarget(s), namely for small enough $c_{i}$ and $h_{i}$ \\emph{w.r.t.\n}the path length $d$. The approximation also requires the subject\nto move nearby the LOS path, with small enough $y_{i}$ and $z_{i}$,\nor located in the central part of the LOS path. Carrier wavelength\n$\\lambda$ is also much smaller than the distances $x_{i}$ and $d-x_{i}$\n(Sect. \\ref{subsec:Paraxial-single-body}). Since the paraxial approximation\nis useful in several applications, mostly outdoor, in the following\nsection, we will approximate the full Eqs. (\\ref{eq:E_E0_tot_N})\nor (\\ref{eq:E_E0_tot_N_iter}) using paraxial assumptions. Such a\nmodel will be labeled as PMBM (Paraxial MBM).\n\nBased on the paraxial approximation, Eq. (\\ref{eq:E_E0_tot_N}) becomes\n\n\\begin{equation}\n\\begin{aligned}\\frac{E}{E_{0}}={} & \\left(\\frac{j}{2}\\right)^{N}\\int\\limits _{S_{1}^{^{\\left(c\\right)}}}\\int\\limits _{S_{2}^{^{\\left(c\\right)}}}\\cdots\\int\\limits _{S_{N}^{^{\\left(c\\right)}}}\\\\\n{} & \\frac{d\\,d_{1,2}d_{2,3}...d_{N-1,N}}{\\left(d_{1}+d_{1,2}\\right)\\left(d_{1,2}+d_{2,3}\\right)...\\left(d_{N-1,N}+d_{N}\\right)}\\cdot\\\\\n\\cdot{} & \\exp\\left\\{ -j\\frac{\\pi}{2}\\left(u_{1}^{2}+u_{2}^{2}+...+u_{N}^{2}-2\\alpha_{1,2}u_{1}u_{2}...+\\right.\\right.\\\\\n-{} & \\left.\\left.2\\alpha_{N-1,N}u_{N-1}u_{N}\\right)\\right\\} du_{1}du_{2}...du_{N}\\cdot\\\\\n\\cdot{} & \\exp\\left\\{ -j\\frac{\\pi}{2}\\left(v_{1}^{2}+v_{2}^{2}+...+v_{N}^{2}-2\\alpha_{1,2}v_{1}v_{2}...+\\right.\\right.\\\\\n-{} & \\left.\\left.2\\alpha_{N-1,N}v_{N-1}v_{N}\\right)\\right\\} dv_{1}dv_{2}...dv_{N}.\n\\end{aligned}\n\\label{eq:E_E0_tot_N_app}\n\\end{equation}\nIn the Appendix \\ref{sec:Appendix}, we show how to rewrite Eq. (\\ref{eq:E_E0_tot_N_app})\nto reveal the mutual interactions of targets as in Eq. (\\ref{eq:E_E0_tot_N_iter}).\n\nFor the case of $N=2$ targets, Eq. (\\ref{eq:E_E0_tot_N_app}) becomes\nnow analytically tractable. Using the formulation shown in Eq. 
(\\ref{eq:E_E0_tot_2}),\nadapted in Appendix \\ref{sec:Appendix} for paraxial assumptions,\nit is \n\\begin{equation}\n\\begin{aligned}\\frac{E^{\\left(1,2\\right)}}{E_{0}}={} & -1+\\frac{E^{\\left(1\\right)}}{E_{0}}+\\frac{E^{\\left(2\\right)}}{E_{0}}-\\frac{1}{4}\\frac{d\\,d_{1,2}}{\\left(d_{1}+d_{1,2}\\right)\\left(d_{1,2}+d_{2}\\right)}\\cdot\\\\\n\\cdot{} & \\int_{-\\unit{\\sqrt{2}\\mathit{H}}\/R_{1}}^{+\\sqrt{2}\\left(h_{1}-H\\right)\/R_{1}}\\int_{-\\unit{\\sqrt{2}\\mathit{H}\/\\mathit{R_{2}}}}^{+\\sqrt{2}\\left(h_{2}-H\\right)\/R_{2}}\\\\\n{} & \\exp\\left\\{ -j\\frac{\\pi}{2}\\left(u_{1}^{2}+u_{2}^{2}-2\\alpha_{1,2}u_{1}u_{2}\\right)\\right\\} du_{1}du_{2}\\cdot\\\\\n\\cdot{} & \\int_{\\left(\\sqrt{2}y_{1}-c_{1}\/\\sqrt{2}\\right)\/R_{1}}^{\\left(\\sqrt{2}y_{1}+c_{1}\/\\sqrt{2}\\right)\/R_{1}}\\int_{\\left(\\sqrt{2}y_{2}-c_{2}\/\\sqrt{2}\\right)\/R_{2}}^{\\left(\\sqrt{2}y_{2}+c_{2}\/\\sqrt{2}\\right)\/R_{2}}\\\\\n{} & \\exp\\left\\{ -j\\frac{\\pi}{2}\\left(v_{1}^{2}+v_{2}^{2}-2\\alpha_{1,2}v_{1}v_{2}\\right)\\right\\} dv_{1}dv_{2}\\,,\n\\end{aligned}\n\\label{eq:E_Er_2_1_N}\n\\end{equation}\nwhere $u_{1},u_{2}$,$v_{1},v_{2}$ and the constant terms $R_{1},R_{2}$\nand $\\alpha_{1,2}$ defined in the same Appendix \\ref{sec:Appendix}.\nNotice that, some approximated models are already available in the\nliterature \\cite{lee1978path,vogler-1982} for the evaluation of the\nextra attenuation due to multiple semi-infinite knife-edge surfaces.\nThese models can be obtained by using the paraxial approximation over\nthe semi-infinite domains representing the targets (\\ie unlike the\n\\emph{finite} target size assumption adopted here); they are typically\neffective in outdoor scenarios for the prediction of the propagation\nloss over non-regular terrain profiles. The interested reader can\ntake a look at \\cite{tzaras-2000} (and references therein) for a\nbrief discussion and comparisons.\n\n\\subsection{Additive models}\n\n\\label{subsec:EM-vs.-additive}\n\nBased on the analysis of the previous sections, the term $\\left|E^{\\left(1,2,...,N\\right)}\/E_{0}\\right|$\nfor $N$ targets can be used to evaluate the extra attenuation $A_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}$\nwith respect to the free-space (\\ie unobstructed or empty) scenario\nas\n\\begin{equation}\nA_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}=-10\\,\\log_{10}\\left|E^{\\left(1,2,...,N\\right)}\/E_{0}\\right|^{2}.\\label{eq:dbatt}\n\\end{equation}\nFrom Eq. (\\ref{eq:E_E0_tot_N_iter}), it is apparent that the extra\nattenuation terms $\\left|\\frac{E^{\\left(1\\right)}}{E_{0}}\\right|,\\left|\\frac{E^{\\left(2\\right)}}{E_{0}}\\right|,\\,...,\\left|\\frac{E^{\\left(N\\right)}}{E_{0}}\\right|$\nalone or, equivalently, $A_{\\textrm{dB}}^{\\left(1\\right)},\\,A_{\\textrm{dB}}^{\\left(2\\right)},...,\\,A_{\\textrm{dB}}^{\\left(N\\right)}$,\nare not sufficient to evaluate $\\left|\\frac{E^{\\left(1,2,..,N\\right)}}{E_{0}}\\right|$\nsince: \\emph{i}) the phase relations between the terms $\\frac{E^{\\left(n\\right)}}{E_{0}}$\nare unknown, \\emph{ii}) the terms $\\frac{E^{\\left(n,m\\right)}}{E_{0}},\\,\\frac{E^{\\left(n,m,k\\right)}}{E_{0}},\\,...,\\,$\nare not available, and \\emph{iii}) the interaction terms between the\ntargets, that are expressed by the integral of the right side of Eq.\n(\\ref{eq:E_E0_tot_N_iter}), are not known as well. 
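To make point \\emph{i}) above concrete, the short numerical sketch below (Python; the complex field ratios are illustrative placeholders and are \\emph{not} obtained from the knife-edge integrals of the previous sections) combines two single-target ratios and a mixed term according to Eq. (\\ref{eq:E_E0_tot_2}) and then applies Eq. (\\ref{eq:dbatt}).

\\begin{verbatim}
import numpy as np

# Illustrative single-target ratios E^(1)/E0, E^(2)/E0 and mixed term
# Psi(S1,S2); placeholder values, NOT computed from the knife-edge integrals.
E1_over_E0 = 0.62 * np.exp(-1j * 0.8)
E2_over_E0 = 0.55 * np.exp(-1j * 1.9)
Psi_12 = 0.07 * np.exp(1j * 0.3)

def extra_attenuation_dB(E_over_E0):
    # Eq. (dbatt): A_dB = -10 log10 |E/E0|^2
    return -10.0 * np.log10(np.abs(E_over_E0) ** 2)

# Dual-target ratio, Eq. (E_E0_tot_2): -1 + E^(1)/E0 + E^(2)/E0 + Psi(S1,S2)
E12_over_E0 = -1.0 + E1_over_E0 + E2_over_E0 + Psi_12

A1 = extra_attenuation_dB(E1_over_E0)
A2 = extra_attenuation_dB(E2_over_E0)
A12 = extra_attenuation_dB(E12_over_E0)
print(A1 + A2)   # additive hypothesis
print(A12)       # joint dual-target prediction
\\end{verbatim}

Since the single-target terms and the mixed term combine coherently, the two printed values can differ markedly: the phase relations and the interaction term are lost when the attenuations are simply summed in dB.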
These facts prevent\nthe use of single-target measurements for the multiple target case.\nAccording to these considerations, the additive hypothesis, namely\n$A_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}=A_{\\textrm{dB}}^{\\left(1\\right)}+A_{\\textrm{dB}}^{\\left(2\\right)}+...+A_{\\textrm{dB}}^{\\left(N\\right)}$,\nwhich is generally exploited in various forms in the literature \\cite{patwari10,wilson10},\nis a rather superficial approximation. For the case of two targets\n$(N=2)$, in Sect. \\ref{subsec:Model-validation-with} an additive\nSBM model is proposed where the individual extra attenuations $A_{\\textrm{dB}}^{(1)}$,\n$A_{\\textrm{dB}}^{(2)}$,..., $A_{\\textrm{dB}}^{(N)}$ follow the\nSBM model described in Eq. (\\ref{eq:E_E0_tot-single}). Limitations\nof such a representation are highlighted by a comparison with the MBM\nand PMBM models.\n\n\\section{Physical-statistical multi-body model}\n\n\\label{sec:Physical-statistical-modeling}\n\nIn this section, we propose a true multi-target physical-statistical\nmodel that relates the RSS to the link geometry ($d$, $H$), the\nbody locations $\\mathbf{X}$, and their geometrical sizes (\\ie\n$\\mathbf{a}$, $\\mathbf{b}$ and $\\mathbf{h}$). In addition to the\ndiffraction, or physical, component analyzed in Sect. \\ref{sec:Diffraction-model},\nthe statistical component quantifies the uncertainty of\nbody movements, modelled here by small random voluntary\/involuntary\nmotions $\\Delta\\mathbf{X}$ and rotations $\\boldsymbol{\\chi}$ around\nthe nominal position $\\mathbf{X}$, as well as multipath fading, multiple\nscattering between bodies, backward propagation effects, and other\nrandom fluctuations, not included in the diffraction terms. For the\nsake of simplicity, in the following sections, all geometrical parameters\ndefined in Sect. \\ref{subsec:Multi-body-model} will be represented\nby the compact set $\\boldsymbol{\\Lambda}=\\left\\{ \\mathbf{a},\\mathbf{b},\\mathbf{h},d,H\\right\\} $.\n\nLet $P$ be the RSS measurement performed by the receiver RX, expressed\nin logarithmic scale (\\ie usually in dBm). The power measurement\n$P$ can be modeled as the sum of \\emph{i}) the deterministic term\n$P_{L}=10\\,\\log_{10}\\left|E_{0}\\right|^{2}$ due to the free-space\npath-loss; \\emph{ii}) the extra attenuation term $A_{\\textrm{dB}}=A_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}$\nin (\\ref{eq:dbatt}) with respect to the free-space path-loss, caused\nby the body-induced diffraction terms, and \\emph{iii}) the Gaussian\nrandom term $w$ that includes the lognormal multipath effects \\cite{shadowing},\nmeasurement noise and other random disturbances assumed normally distributed.\nAccording to these assumptions, it is\n\n\\begin{equation}\nP=\\left\\{ \\begin{array}{ll}\nP_{L}-A_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}+w & \\;\\textrm{iff }\\exists\\,\\mathbf{X}_{n}\\in\\mathcal{Y}\\\\\nP_{L}+w_{0} & \\;\\textrm{elsewhere}.\n\\end{array}\\right.\\label{eq:model-2}\n\\end{equation}\nThe free-space term $P_{L}$ is a constant that depends only on the\ngeometry of the scenario, the transmitted power, the gain and configuration\nof the antennas, and the propagation coefficients \\cite{knife_edge}.\nThe term $A_{\\textrm{dB}}^{(1,2,...,N)}=A_{\\textrm{dB}}\\left(\\mathbf{X},\\Delta\\mathbf{X},\\boldsymbol{\\chi},\\boldsymbol{\\Lambda}\\right)$\nis the extra attenuation, expressed in dB, due to the body-induced\ndiffraction with respect to the free-space scenario. 
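A minimal simulation sketch of Eq. (\\ref{eq:model-2}) is given below (Python; the numerical values are illustrative only and do not come from calibrated measurements).

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rss_sample(P_L, A_dB, target_in_Y, sigma0, d_mu_C=0.0, d_sigma2_C=0.0):
    # One RSS sample according to the two-regime model of Eq. (model-2):
    #   P_L         free-space term 10*log10(|E0|^2)
    #   A_dB        body-induced extra attenuation A_dB^(1,...,N)
    #   target_in_Y True if at least one target X_n lies in the area Y
    #   d_mu_C, d_sigma2_C  residual multipath mean/variance increments
    if target_in_Y:
        w = rng.normal(d_mu_C, np.sqrt(sigma0 ** 2 + d_sigma2_C))
        return P_L - A_dB + w
    w0 = rng.normal(0.0, sigma0)
    return P_L + w0

print(rss_sample(P_L=-40.0, A_dB=6.5, target_in_Y=True, sigma0=0.8))
print(rss_sample(P_L=-40.0, A_dB=0.0, target_in_Y=False, sigma0=0.8))
\\end{verbatim}

In the sketch, the value of the diffraction term $A_{\\textrm{dB}}^{\\left(1,2,...,N\\right)}$ is taken as given.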
Its argument\n$\\left|E^{\\left(1,2,...,N\\right)}\/E_{0}\\right|^{2}$ is computed using\n(\\ref{eq:E_E0_tot_N}) or (\\ref{eq:E_E0_tot_N_iter}) for MBM and\n(\\ref{eq:E_E0_tot_N_app}) or (\\ref{eq:E_E0_N_tot_app_iter}) for\nPMBM. Propagation effects not included in the diffraction models (\\ref{eq:E_E0_tot_N})\nor (\\ref{eq:E_E0_tot_N_iter}) are modeled by the Gaussian noise $w\\sim\\mathcal{N}\\left(\\Delta\\mu_{\\text{C}},\\sigma_{0}^{2}+\\Delta\\sigma_{C}^{2}\\right)$\nwith $\\Delta\\mu_{\\text{C}}$ and $\\Delta\\sigma_{C}^{2}$ being the\nresidual stochastic body-induced multipath fading mean and variance\nterms \\cite{dfl,rampa2015letter}. $\\sigma_{0}^{2}$ models the power\nfluctuations induced by environmental changes outside the link area,\nand not attributable to body movements around the LOS link.\n\nFor the empty scenario, where nobody is present in the link area,\nnamely the \\emph{background} configuration, the RSS is simply modelled\nas $P=P_{L}+w_{0}$ with $w_{0}\\sim\\mathcal{N}\\left(0,\\sigma_{0}^{2}\\right)$.\nNotice that in HPS systems, $\\mu_{0}=\\mathrm{E}_{w_{0}}\\left[P\\right]=P_{L}$\nand $\\sigma_{0}^{2}=\\mathrm{Var_{w_{0}}}\\left[P\\right]$ can be evaluated\nfrom field measurements during a calibration phase, when there are\nno targets in the link area. On the contrary, the presence of people\nmodifies both the mean $\\mu_{1}\\left(\\mathbf{X}\\right)=\\textrm{\\ensuremath{\\mathrm{E_{\\boldsymbol{\\chi},\\Delta\\mathbf{X},\\mathit{w}}}}}\\left[P\\right]$\nand the variance $\\sigma_{1}^{2}\\left(\\mathbf{X}\\right)=\\textrm{\\ensuremath{\\mathrm{\\textrm{Var\\ensuremath{\\mathrm{_{\\boldsymbol{\\chi},\\Delta\\mathbf{X},\\mathit{w}}}}}}}}\\left[P\\right]$\nterms. Based on Eq. (\\ref{eq:model-2}), the mean $\\mu\\left(P\\right)$\nand variance $\\sigma^{2}\\left(P\\right)$ are defined as\n\n\\begin{equation}\n\\mu\\left(P\\right)=\\left\\{ \\begin{array}{ll}\n\\mu_{1}\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)=P_{L}+\\Delta\\mu\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right) & \\;\\textrm{iff }\\exists\\,\\mathbf{X}_{n}\\in\\mathcal{Y}\\\\\n\\mu_{0}=P_{L} & \\;\\textrm{elsewhere}\n\\end{array}\\right.\\,\\label{eq:mu}\n\\end{equation}\nand\n\n\\begin{equation}\n\\sigma^{2}\\left(P\\right)=\\left\\{ \\begin{array}{ll}\n\\sigma_{1}^{2}\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)=\\sigma_{0}^{2}+\\Delta\\sigma^{2}\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right) & \\;\\textrm{iff }\\exists\\,\\mathbf{X}_{n}\\in\\mathcal{Y}\\\\\n\\sigma_{0}^{2} & \\;\\textrm{elsewhere}\n\\end{array}\\right.\\,\\label{eq:sigma}\n\\end{equation}\nwhich emphasize the dependency of $P$ on the position $\\mathbf{X}_{n}$\nof at least one target $T_{n}$ in the area $\\mathcal{Y}$ and on the\ngeometrical coefficients $\\boldsymbol{\\Lambda}$. 
The RSS average\n$\\Delta\\mu\\left(\\mathbf{X}\\right)=\\mu_{1}\\left(\\mathbf{X}\\right)-\\mu_{0}$\nand variance $\\Delta\\sigma^{2}\\left(\\mathbf{X}\\right)=\\sigma_{1}^{2}\\left(\\mathbf{X}\\right)-\\sigma_{0}^{2}$\nincrements are defined as\n\\begin{equation}\n\\Delta\\mu\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)=\\Delta\\mu_{C}-\\mathrm{E_{\\boldsymbol{\\chi},\\Delta\\mathbf{X}}}\\left[A_{\\textrm{dB}}\\left(\\left.\\mathbf{X}\\right|\\Delta\\mathbf{X},\\boldsymbol{\\chi},\\boldsymbol{\\Lambda}\\right)\\right]\\label{eq: delta P1 mean}\n\\end{equation}\nand\n\n\\begin{equation}\n\\Delta\\sigma^{2}\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)=\\Delta\\sigma_{C}^{2}+\\textrm{\\ensuremath{\\mathrm{\\textrm{Var}{}_{\\boldsymbol{\\chi},\\Delta\\mathbf{X}}}}}\\left[A_{\\textrm{dB}}\\left(\\left.\\mathbf{X}\\right|\\Delta\\mathbf{X},\\boldsymbol{\\chi},\\boldsymbol{\\Lambda}\\right)\\right]\\,.\\label{eq:delta P1 sigma}\n\\end{equation}\nThe term $\\textrm{\\ensuremath{A_{\\textrm{dB}}\\left(\\left.\\mathbf{X}\\right|\\Delta\\mathbf{X},\\boldsymbol{\\chi},\\boldsymbol{\\Lambda}\\right)}}$\nhighlights the fact that, given the geometrical parameters $\\boldsymbol{\\Lambda}$\nand the motion terms $\\Delta\\mathbf{X},\\,\\boldsymbol{\\chi}$, the\nextra attenuation is only a function of the positions $\\mathbf{X}$\nof the bodies. In what follows, we assume that the bodies are positioned\nin $\\mathbf{X}$ but each of them can slightly change its location\nand posture by making small random movements $\\Delta\\mathbf{X}_{n}$\nwith $\\Delta x_{n},\\Delta y_{n}\\sim\\mathcal{U}\\left(-B,+B\\right)$\nand rotations $\\chi_{n}\\sim\\mathcal{U}\\left(-\\pi,+\\pi\\right)$ around\nthe vertical axis. $\\mathcal{U}\\left(\\alpha,\\beta\\right)$ indicates\nthe uniform distribution within the interval $\\left[\\alpha\\:\\beta\\right]$\nwhile, for each \\emph{n}, the set $\\left[-B\\:+B\\right]\\times\\left[-B\\:+B\\right]$\ndefines the 2D area around the nominal coordinate position $\\mathbf{X}_{n}$\nwhere the \\emph{n}-th target can freely move. To determine eqs. (\\ref{eq: delta P1 mean})\nand (\\ref{eq:delta P1 sigma}), the mean $\\mathrm{E_{\\boldsymbol{\\chi},\\Delta\\mathbf{X}}}\\left[\\cdot\\right]$\nand the variance $\\textrm{\\ensuremath{\\mathrm{\\textrm{Var}{}_{\\boldsymbol{\\chi},\\Delta\\mathbf{X}}}}}\\left[\\cdot\\right]$\nare computed over the aforementioned uniform distribution of $\\Delta\\mathbf{X}$\nand $\\boldsymbol{\\chi}$.\n\nThe residual body-induced multipath terms $\\Delta\\mu_{C}$ and $\\Delta\\sigma_{C}^{2}$\nin (\\ref{eq: delta P1 mean}) and (\\ref{eq:delta P1 sigma}), respectively,\ncan be directly evaluated from field measurements performed during\nthe calibration phase. However, these terms are marginally influenced\nby the specific body locations, as also shown in \\cite{shadowing,rampa2015letter},\nand are thus not relevant for HPS applications. 
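The expectation and the variance in Eqs. (\\ref{eq: delta P1 mean}) and (\\ref{eq:delta P1 sigma}) can be approximated numerically by sampling the motion and rotation terms from the uniform distributions defined above. A minimal Monte Carlo sketch is reported below (Python); the attenuation routine passed to it is a placeholder assumed to wrap an MBM or PMBM evaluation, and the toy function in the example only exercises the code.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def delta_mu_sigma(a_dB, X, B, n_draws=2000, d_mu_C=0.0, d_sigma2_C=0.0):
    # Monte Carlo estimate of Eqs. (delta P1 mean) and (delta P1 sigma).
    # a_dB(X, dX, chi) must return the extra attenuation (dB) for nominal
    # positions X (N x 2), offsets dX and rotations chi; it stands in for
    # an MBM/PMBM routine and is not provided here.
    n = X.shape[0]
    samples = np.empty(n_draws)
    for s in range(n_draws):
        dX = rng.uniform(-B, B, size=(n, 2))      # dx_n, dy_n ~ U(-B, +B)
        chi = rng.uniform(-np.pi, np.pi, size=n)  # chi_n ~ U(-pi, +pi)
        samples[s] = a_dB(X, dX, chi)
    delta_mu = d_mu_C - samples.mean()
    delta_sigma2 = d_sigma2_C + samples.var()
    return delta_mu, delta_sigma2

# Toy attenuation used only to exercise the routine.
toy_a_dB = lambda X, dX, chi: 6.0 + 0.5 * np.cos(chi).sum()
print(delta_mu_sigma(toy_a_dB, X=np.array([[1.0, 0.0], [2.5, 0.0]]), B=0.05))
\\end{verbatim}

In the sketch, the calibration terms $\\Delta\\mu_{C}$ and $\\Delta\\sigma_{C}^{2}$ enter only as additive constants.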
On the contrary, the\ndiffraction term $A_{\\textrm{dB}}\\left(\\left.\\mathbf{X}\\right|\\Delta\\mathbf{X},\\boldsymbol{\\chi},\\boldsymbol{\\Lambda}\\right)$\nprovides a simple but effective method to predict the power perturbation\n$\\Delta\\mu\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)$\nand $\\Delta\\sigma^{2}\\left(\\left.\\mathbf{X}\\right|\\boldsymbol{\\Lambda}\\right)$\nas a function of the body position and size.\n\n\\section{Model optimization and validation}\n\n\\label{subsec:Model-calibration}\n\nTo confirm the validity of the proposed multi-body models, several\nEM simulations with the Feko software environment and on-field experiments\nhave been carried out according to the same link scenario sketched\nin Fig. \\ref{fig:link}. Feko\\footnote{The commercial EM simulator designed by Altair Engineering Inc.}\nimplements time and frequency domain full-wave solvers (\\eg MoM,\nFDTD, FEM and MLFMM). For details about the solvers, the interested\nreader may have a look at \\cite{elsherbeni-2014antennabook}.\n\nFirst of all, simulations have been carried out to compare the results\nof the diffraction-based MBM and PMBM models shown in Sect. \\ref{subsec:Multi-body-model}\nand Sect. \\ref{subsec:paraxial_sec}, respectively, with the ones\nobtained with Feko. Next, we have compared the aforementioned models\nagainst the RSS measurements obtained from IEEE 802.15.4 devices \\cite{154E},\ncommonly used in industrial applications \\cite{savazzi2016sensors}.\nConsidering the application to multi-body localization featuring $N=2$\ntargets, the MBM and PMBM model parameters, namely the geometrical\nsizes (\\ie $\\mathbf{a}$, $\\mathbf{b}$ and $\\mathbf{h}$) of the\nknife-edge surfaces, are optimized using a small subset of the experimental\ndata so that they could effectively model the obstructions induced\nby the true targets. The proposed models using optimized sizes of\nthe knife-edges are then validated over different configurations,\nwhere both targets move along the LOS link.\n\nIt is worth noticing that Feko simulations are related to PEC (Perfect\nElectromagnetic Conductor) configurations to describe knife-edge targets.\nOn the contrary, MBM and PMBM 2D models assume perfectly absorbing\nsurfaces. The MBM and PMBM models also ignore important EM parameters\nsuch as polarization, permittivity, conductivity, shape, radius of\ncurvature, and surface roughness \\cite{Davis-et-al} that the Feko\nsimulator is able to tackle.\n\n\\begin{figure}[tp]\n\\begin{centering}\n\\includegraphics[clip,scale=0.4]{Fig4}\n\\par\\end{centering}\n\\caption{\\label{fig:comp-along}Feko PEC simulations for vertically (blue line)\nand horizontally (magenta line) polarized source vs. MBM (black line)\nand PMBM (red line) $A_{\\textrm{dB}}^{\\left(1,2\\right)}$ predictions\nfor targets $T_{1}$ and $T_{2}$ along the LOS path as shown in the\nscenario on the top.}\n\n\\vspace{-0.4cm}\n\\end{figure}\n\n\\begin{figure}[tp]\n\\begin{centering}\n\\includegraphics[scale=0.4]{Fig5}\n\\par\\end{centering}\n\\caption{\\label{fig:comp-across}Feko PEC simulations for vertically (blue\nline) and horizontally (magenta line) polarized source vs. MBM (black\nline) and PMBM (red line) $A_{\\textrm{dB}}^{\\left(1,2\\right)}$ predictions\nfor the target $T_{1}$ along the LOS path and $T_{2}$ across the\nLOS path as shown in the scenario on the top. 
The dotted lines (with\nthe same colors adopted for the dual-target cases) show the extra\nattenuation predicted by the previous models\/simulations due to the\npresence of the target $T_{1}$ only.}\n\n\\vspace{-0.4cm}\n\\end{figure}\n\n\n\\subsection{Model comparison against EM simulations}\n\nThe MBM and PMBM models are evaluated and compared in this section\nwith the results from EM simulations. To simplify the EM simulation\ncomplexity (mostly due to the long Feko runs), in what follows MBM\nand PMBM models are compared in a dual-target scenario ($N=2$) only.\nFig. \\ref{fig:comp-along} and \\ref{fig:comp-across} show the predicted\nvalues of the extra attenuation $A_{\\textrm{dB}}^{\\left(1,2\\right)}=-10\\,\\log_{10}\\left|E^{\\left(1,2\\right)}\/E_{0}\\right|^{2}$\ncomputed according to the models described by eqs. (\\ref{eq:E_E0_tot_2})\nand (\\ref{eq:E_Er_2_1_N}), namely MBM and PMBM, respectively. No\nmovements\/rotations are allowed and both targets are placed in their\nnominal positions: in both figures, the target $T_{1}$ is fixed in\nthe position $\\mathbf{X}_{1}=[1,0]^{\\mathrm{T}}$ while the target\n$T_{2}$ changes its position along and across the LOS path. In Fig.\n\\ref{fig:comp-along}, $T_{2}$ is placed in $\\mathbf{X}_{2}=[x_{2},0]^{\\mathrm{T}}$\nand moves along the LOS path between the TX and the RX.\n\nWe now bound the cost of the optimal clustering in the $k=2$ case, in terms of the necessary number of changes. We utilize Hall's theorem on perfect matchings. \n\n\\begin{proposition}[Hall's Theorem] Let $(P,Q)$ be a bipartite graph. If all subsets $P' \\subseteq P$ have at least $|P'|$ neighbors in $Q$, then there is a matching of size $|P|$.\n\\end{proposition}\n\n\\begin{lemma} \\label{lemma:matching}\n Let $C^1$ and $C^2$ be the optimal clustering of $\\mathcal{X} \\subseteq \\mathbb{R}^d$, and assume that any threshold cut requires $t$ changes.\nFor each $i \\in [d]$, there are $t$ disjoint pairs of vectors $(\\mathbf{p}^j,\\mathbf{q}^j)$ in $\\mathcal{X}$ such that $\\mathbf{p}^j\\in C^1$ and $\\mathbf{q}^j\\in C^2$ and $q^j_i\\leq p^j_i$ for every $j \\in [t]$. \n\\end{lemma}\n\\begin{proof}\n\tLet $\\boldsymbol{\\mu}^1$ and $\\boldsymbol{\\mu}^2$ be the centers for the optimal clusters $C^1$ and $C^2$. Focus on index $i \\in [d]$, and assume without loss of generality that $\\mu^1_i\\leq \\mu^2_i$. The $t$ pairs correspond to a matching in the following bipartite graph $(P,Q)$. Let $Q = C^2$ and define $P \\subseteq C^1$ as the $t$ points in $C^1$ with largest value in their $i$th coordinate. Connect $\\mathbf{p} \\in P$ and $\\mathbf{q} \\in Q$ by an edge if and only if $q_i\\leq p_i.$ \n\tBy construction, a matching with $t$ edges implies our claim.\n\tBy Hall's theorem, we just need to prove that every $P'\\subseteq P$ has at least $|P'|$ neighbors.\n\t\n\tIndex $P = \\{\\mathbf{p}^1,\\ldots, \\mathbf{p}^t\\}$ by ascending value of $i$th coordinate, $p^1_i \\leq \\cdots \\leq p^t_i.$ Now, notice that vertices in $P$ have nested neighborhoods: for all $j > j'$, the neighborhood of $\\mathbf{p}^{j'}$ is a subset of the neighborhood of $\\mathbf{p}^{j}$. It suffices to prove that $\\mathbf{p}^{j}$ has at least $j$ neighbors, because this implies that any subset $P' \\subseteq P$ has at least $|P'|$ neighbors, guaranteeing a matching of size $|P| = t$. Indeed, if $|P'| = b$ then we know that $\\mathbf{p}^{j} \\in P'$ for some $j \\geq b$, implying that $P'$ has at least $j \\geq b = |P'|$ neighbors.\n\n\tAssume for contradiction that $\\mathbf{p}^j$ has at most $j-1$ neighbors. 
We argue that the threshold cut $x_i\\leq p^j_i$ has fewer than $t$ changes, which contradicts the fact that all threshold cuts must make at least $t$ changes. By our assumption, there are at most $j-1$ points that are smaller than $p^j_i$ and belong to the second cluster. \n\tBy the definition of $P$, there are exactly $t-j$ points with a larger $i$th coordinate than $p^j_i$ in the first cluster. Therefore, the threshold cut $x_i\\leq p^j_i$ makes at most $(t-j)+(j-1)=t-1<t$ changes, completing the contradiction.\n\\end{proof}\n\n\\section{Tree-based clustering with $k > 2$ leaves}\n\\label{sec:k-means}\n\nWe provide an efficient algorithm to produce a threshold tree with $k$ leaves that constitutes an approximate $k$-medians or $k$-means clustering of a data set $\\mathcal{X}$. 
Our algorithm, Iterative Mistake Minimization (IMM), starts with a reference set of cluster centers, for instance from a polynomial-time constant-factor approximation algorithm for $k$-medians or $k$-means~\\cite{aggarwal09}, or from a domain-specific clustering heuristic. \n\nWe then begin the process of finding an explainable approximation to this reference clustering, in the form of a threshold tree with $k$ leaves, whose internal splits are based on single features. The way we do this is almost identical for $k$-medians and $k$-means, and the analysis is also nearly the same. Our algorithm is deterministic and its run time is only $O(kdn \\log n)$, after finding the initial centers.\n \n \nAs discussed in Section~\\ref{sec:motivating-examples}, existing decision tree algorithms use greedy criteria that are not suitable for our tree-building process. However, we show that an alternative greedy criterion---minimizing the number of {\\em mistakes} at each split (the number of points separated from their corresponding cluster center)---leads to a favorable approximation ratio to the optimal $k$-medians or $k$-means cost.\n\n\n\\subsection{Our algorithm}\n\\label{sec:k-means-alg}\n\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\\centering\n\\begin{minipage}{.5\\textwidth}\n\\begin{algorithm}[H]\n\\SetKwFunction{Iterative Mistake Minimization}{Iterative Mistakes Minimization}\n\\SetKwFunction{BuildTree}{BuildTree}\n\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\\SetKwInOut{Preprocess}{Preprocess}\n\\Input{%\n\t$\\mathbf{x}^1, \\ldots, \\mathbf{x}^n$ -- vectors in $\\mathbb{R}^d$\\\\\n\t$k$ -- number of clusters\\\\\n }\n\\Output{%\n root of the threshold tree\n}\n\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\n$\\boldsymbol{\\mu}^1, \\ldots \\boldsymbol{\\mu}^k \\leftarrow \\xfunc{k-Means}(\\mathbf{x}^1, \\ldots, \\mathbf{x}^n, k)$\\;\n\n\\ForEach {$j \\in [1, \\ldots, n]$}\n{\n $y^j \\leftarrow \\argmin_{1 \\leq \\ell \\leq k} \\lVert \\mathbf{x}^j - \\boldsymbol{\\mu}^\\ell \\rVert$\\;\n}\n\n\\Return $\\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j=1}^n, \\{y^j\\}_{j=1}^n, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n\\SetKwProg{buildtree}{$\\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j=1}^m, \\{y^j\\}_{j=1}^m, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$:}{}{}\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\\buildtree{}{\n\\If{$\\{y^j\\}_{j=1}^m$ \\text{is homogeneous}}\n{\n $\\xvar{leaf}.cluster \\leftarrow y^1$\\;\n \n \\Return $\\xvar{leaf}$\\;\n}\n\\ForEach {$i \\in [1, \\ldots, d]$}\n{\n $\\ell_i \\leftarrow \\min_{1 \\leq j \\leq m} \\mu^{y^j}_i$\\;\n \n $r_i \\leftarrow \\max_{1 \\leq j \\leq m} \\mu^{y^j}_i$\\;\n}\n$i, \\theta \\leftarrow \\argmin_{i,\\ell_i \\leq \\theta < r_i} \\sum_{j=1}^m \\xfunc{mistake}(\\mathbf{x}^j, \\boldsymbol{\\mu}^{y^j}, i, \\theta)$\\;\\label{ln:k_dynamic}\n\n$\\xvar{M} \\leftarrow \\{j \\mid \\xfunc{mistake}(\\mathbf{x}^j, \\boldsymbol{\\mu}^{y^j}, i, \\theta) = 1\\}_{j=1}^m$\\; \n\n$\\xvar{L} \\leftarrow \\{j \\mid (x^j_i \\leq \\theta) \\wedge (j \\not \\in \\xvar{M})\\}_{j=1}^m$\\;\n\n$\\xvar{R} \\leftarrow \\{j \\mid (x^j_i > \\theta) \\wedge (j \\not \\in \\xvar{M})\\}_{j=1}^m$\\;\n\n$\\xvar{node}.condition \\leftarrow ``x_i \\leq \\theta\"$\\;\n\n$\\xvar{node}.lt \\leftarrow \\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j \\in \\xvar{L}}, \\{y^j\\}_{j \\in \\xvar{L}}, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n$\\xvar{node}.rt \\leftarrow \\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j \\in \\xvar{R}}, \\{y^j\\}_{j \\in \\xvar{R}}, 
\\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n\n\\Return $\\xvar{node}$\\;\n}\n\n\n\\SetKwProg{mistake}{$\\xfunc{mistake}(\\mathbf{x}, \\boldsymbol{\\mu}, i, \\theta)$:}{}{}\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\\mistake{}{\n\\Return $(x_i \\leq \\theta) \\neq (\\mu_i \\leq \\theta)$ ? $1$ : $0$\\;\n}\n\\caption{\\textsc{\\newline Iterative Mistake Minimization}}\n\\label{algo:imm}\n\\end{algorithm}\n\\end{minipage} \\vspace{-2ex}\n\\end{wrapfigure}\n\nAlgorithm~\\ref{algo:imm} takes as input a data set $\\mathcal{X} \\subseteq \\mathbb{R}^d$. The first step is to obtain a reference set of $k$ centers $\\{\\boldsymbol{\\mu}^1, \\ldots, \\boldsymbol{\\mu}^k\\}$, for instance from a standard clustering algorithm. We assign each data point $\\mathbf{x}^j$ the label $y^j$ of its closest center. Then, the {\\tt{build\\_tree}} procedure looks for a tree-induced clustering that fits these labels. The tree is built top-down, using binary splits. Each node $u$ can be associated with the portion of the input space that passes through that node, a hyper-rectangular region $\\mbox{cell}(u) \\subseteq \\mathbb{R}^d$. If this cell contains two or more of the centers $\\boldsymbol{\\mu}^j$, then it needs to be split. We do so by picking the feature $i \\in [d]$ and threshold value $\\theta \\in \\mathbb{R}$ such that the resulting split $x_i \\leq \\theta$ sends at least one center to each side and moreover produces the fewest {\\em mistakes}: that is, separates the fewest points in $\\mathcal{X} \\cap \\mbox{cell}(u)$ from their corresponding centers in $\\{\\boldsymbol{\\mu}^j: 1 \\leq j \\leq k\\} \\cap \\mbox{cell}(u)$. We do not count points whose centers lie outside $\\mbox{cell}(u)$, since they are associated with mistakes in earlier splits. We find the optimal split $(i, \\theta)$ by searching over all pairs efficiently using dynamic programming. We then add this node to the tree, and discard the mistakes (the points that got split from their centers) before recursing on the left and right children. We terminate at a leaf node whenever all points have the same label (i.e., a {\\em homogeneous} subset). As there are $k$ different labels, the resulting tree has exactly $k$ leaves.\nFigure \\ref{fig:imm_example} depicts the operation of Algorithm~\\ref{algo:imm}. \n\nWe first discuss the running time, and we analyze the approximation guarantees of IMM in Section~\\ref{sec:imm-approx-main}.\n\n\n\n\n\\smallskip \\noindent {\\bf Time analysis of tree building.}\nWe sketch how to execute the algorithm in time $O(kdn\\log n)$ for an $n$-point data set. At each step of the top-down procedure, we find a coordinate and threshold pair that minimizes the mistakes at this node (line \\ref{ln:k_dynamic} in \\texttt{build\\_tree} procedure). We use dynamic programming to avoid recomputing the cost from scratch for each potential threshold. For each coordinate $i \\in [d]$, we sort the data and centers. Then, we iterate over possible thresholds. We claim that we can process each node in time $O(dn\\log n)$ because each point will affect the number of mistakes at most twice. Indeed, when the threshold moves, either a data point or a center moves to the other side of the threshold. Since we know the number of mistakes from the previous threshold, we count the new mistakes efficiently as follows. If a single point switches sides, then the number of mistakes changes by at most one. If a center switches sides, which happens at most once, then we update the mistakes for this center. 
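For concreteness, a simplified sketch of the split search and of the recursion of Algorithm~\\ref{algo:imm} is reported below (Python). It is a readability-oriented rendering rather than a reference implementation: in particular, it re-counts the mistakes for every candidate threshold instead of applying the incremental update just described, and it uses the data and center coordinates as the candidate thresholds.

\\begin{verbatim}
import numpy as np

def imm_tree(X, centers):
    # Assign each point to its nearest reference center, then build the tree.
    y = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return _build(X, y, centers)

def _build(X, y, centers):
    if len(np.unique(y)) <= 1:       # homogeneous node (or emptied by mistakes)
        return {"cluster": int(y[0]) if len(y) else -1}
    labels = np.unique(y)
    best = None                      # (mistakes, coordinate, threshold)
    for i in range(X.shape[1]):
        lo = centers[labels, i].min()
        hi = centers[labels, i].max()
        for theta in np.unique(np.concatenate([X[:, i], centers[labels, i]])):
            if not (lo <= theta < hi):          # keep >= 1 center on each side
                continue
            mistakes = int(np.sum((X[:, i] <= theta) != (centers[y, i] <= theta)))
            if best is None or mistakes < best[0]:
                best = (mistakes, i, theta)
    _, i, theta = best
    ok = (X[:, i] <= theta) == (centers[y, i] <= theta)   # drop the mistakes
    left, right = ok & (X[:, i] <= theta), ok & (X[:, i] > theta)
    return {"condition": (int(i), float(theta)),
            "lt": _build(X[left], y[left], centers),
            "rt": _build(X[right], y[right], centers)}

# Example usage (synthetic): X = np.random.rand(200, 5); centers could come
# from any standard k-means routine; tree = imm_tree(X, centers).
\\end{verbatim}

The complexity accounting that follows refers to the incremental variant described above, not to this simplified sketch.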
Overall, each point affects the mistakes at most twice (once when changing sides, and once when its center switches sides). Thus, the running time for each internal node is $O(dn\\log n)$. As the tree has $k-1$ internal nodes, the total time is $O(kdn \\log n)$.\n\n\n\\input{imm-pic-steps}\n\n\\newpage\n\\subsection{Approximation guarantee for the IMM algorithm}\n\\label{sec:imm-approx-main}\n\nOur main theoretical contribution is the following result.\n\n\\begin{theorem}\\label{thm:main-k}\\label{thm:main-k-appendix}\n\tSuppose that IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ of depth $H$. Then, \n\t\\begin{enumerate}\n\t\t\\item The $k$-medians cost is at most $$\\mathrm{cost}(T)\\leq (2H+1) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$$\n\t\t\\item The $k$-means cost is at most $$\\mathrm{cost}(T)\\leq (8Hk+2) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$$\n\t\\end{enumerate}\n\tIn particular, IMM achieves worst case approximation factors of $O(k)$ and $O(k^2)$ by using any $O(1)$ approximation algorithm (compared to the optimal $k$-medians\/means) to generate the initial centers.\n\\end{theorem}\n\nWe state the theorem in terms of the depth of the tree to highlight that the approximation guarantee may depend on the structure of the input data. If the optimal clusters can be easily identified by a small number of salient features, then the tree may have depth $O(\\log k)$. \nWe later provide a lower bound showing that an $\\Omega(\\log k)$ approximation factor is necessary for $k$-medians and $k$-means (Theorem~\\ref{thm:lb-k}). For this data set, our algorithm produces a threshold tree with depth $O(\\log k)$, and therefore, the analysis is tight for $k$-medians. We leave it as an intriguing open question whether the bound can be improved for $k$-means.\n\n\\subsubsection{Proof Overview for Theorem~\\ref{thm:main-k}}\n\nThe proof proceeds in three main steps. First, we rewrite the cost of IMM in terms of the minimum number of mistakes made between the output clustering and the clustering based on the given centers. Second, we provide a lemma that relates the cost of any clustering to the number of mistakes required by a threshold clustering. Finally, we put these two together to show that the output cost is at most an $O(H)$ factor larger than the $k$-medians cost and at most an $O(Hk)$ factor larger than the $k$-means cost, respectively, where $H$ is the depth of the IMM tree, and the cost is relative to $\\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$. \n\nThe approximation bound rests upon a characterization of the excess clustering cost induced by the tree. For any internal node $u$ of the final tree $T$, let $\\mbox{cell}(u) \\subseteq \\mathbb{R}^d$ denote the region of the input space that ends up in that node, and let $B(u)$ be the bounding box of the centers that lie in this node, or more precisely, $B(u) = \\{\\boldsymbol{\\mu}^j: 1 \\leq j \\leq k\\} \\cap \\mbox{cell}(u)$. 
We will be interested in the diameter of this bounding box, measured either by $\\ell_1$ or squared $\\ell_2$ norm, and denoted by $\\mathrm{diam}_1(B(u))$ and $\\mathrm{diam}_2^2(B(u))$, respectively.\n\n\n\\paragraph{Upper bounding the cost of the tree.} The first technical claim (Lemma~\\ref{lemma:tree-cost}) will show that if IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ that incurs $t_u$ mistakes at node $u \\in T$, then \n\\begin{itemize}\n\\item The $k$-medians cost of $T$ satisfies $\\displaystyle \\mathrm{cost}(T)\\leq \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\mathrm{diam}_1(B(u)) $\n\\item The $k$-means cost of $T$ satisfies $\\displaystyle \\mathrm{cost}(T)\\leq 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\mathrm{diam}_2^2(B(u))$\n\\end{itemize}\n\nBriefly, any point $\\mathbf{x}$ that ends up in a different leaf from its correct center $\\boldsymbol{\\mu}^j$ incurs some extra cost. To bound this, consider the internal node $u$ at which $\\mathbf{x}$ is separated from $\\boldsymbol{\\mu}^j$. Node $u$ also contains the center $\\boldsymbol{\\mu}^i$ that ultimately ends up in the same leaf as $\\mathbf{x}$. For $k$-medians, the excess cost for $\\mathbf{x}$ can then be bounded by $\\|\\boldsymbol{\\mu}^i - \\boldsymbol{\\mu}^j\\|_1 \\leq \\mathrm{diam}_1(B(u))$. The argument for $k$-means is similar.\n\nThese $\\sum_u t_u \\mathrm{diam}(B(u))$ terms can in turn be bounded in terms of the cost of the reference clustering. \n\n\\paragraph{Lower bounding the reference cost.} \nWe next need to relate the cost of the centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ to the number of mistakes and the diameter of the cells in the tree. Lemma~\\ref{aux-claim:general_k_upper_bound} will show that if IMM makes $t_u$ mistakes at node $u \\in T$, then\n\\begin{itemize}\n\\item The $k$-medians cost satisfies \n$\\displaystyle \\sum_{u \\in T} t_u \\cdot \\mathrm{diam}_1(B(u)) \\leq 2H \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$\n\\item The $k$-means cost satisfies \n$\\displaystyle \\sum_{u \\in T} t_u \\cdot \\mathrm{diam}_2^2(B(u)) \\leq 4Hk \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$\n\\end{itemize}\n\nThe proof for this is significantly more complicated than the upper bound mentioned above. Moreover, it contains the main new techniques in our analysis of tree-based clusterings. \n\nThe core challenge is that we aim to lower bound the cost of the given centers using only information about the number of mistakes at each internal node. Moreover, the IMM algorithm only minimizes the {\\em number} of mistakes, and not the {\\em cost} of each mistake. Therefore, we must show that if every axis-aligned cut in $B(u)$ separates at least $t_u$ points~$\\mathbf{x}$ from their centers, then there must be a considerable distance between the points in $\\mbox{cell}(u)$ and their centers.\n\nTo prove this, we analyze the structure of points in each cell. Specifically, we consider the single-coordinate projection of points in the box $B(u)$, and we order the centers in $B(u)$ from smallest to largest for the analysis. If there are $k'$ centers in node $u$, we consider the partition of $B(u)$ into $2(k'-1)$ disjoint segments, splitting at the centers and at the midpoints between consecutive centers. 
Since $t_u$ is the minimum number of mistakes, we must in particular have at least $t_u$ mistakes from the threshold cut at each midpoint. We argue that each of these segments is covered at least $t_u$ times by a certain set of intervals. Specifically, we consider the intervals between mistake points and their true centers, and we say that an interval \\textit{covers} a segment if the segment is contained in the interval. This allows us to capture the cost of mistakes at different distance scales. For example, if a point is very far from its true center, then it covers many disjoint segments, and we show that it also implies a large contribution to the cost. \nClaim~\\ref{claim:covering} in Section~\\ref{sec:general_k_upper_bound} provides our main covering result, and we use this to argue that the cost of the given centers can be lower bounded in terms of the distance between consecutive centers in $B(u)$. For $k$-medians, we can directly derive a lower bound on the cost in terms of the $\\ell_1$ diameter $\\mathrm{diam}_1(B(u))$. For $k$-means, however, we employ Cauchy-Schwarz, which incurs an extra factor of $k$ in the bound with $\\mathrm{diam}_2^2(B(u))$. Overall, we sum these bounds over the height $H$ of the tree, leading to the claimed upper bounds in the above lemma. \n\n\\subsubsection{Preliminaries and Notation for Theorem~\\ref{thm:main-k}}\n\nLet $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ be the reference centers, and let $T$ be the resulting IMM tree. Each internal node $u$ corresponds to a value $\\theta_u \\in \\mathbb{R}$ and a coordinate $i \\in [d]$. The tree partitions $\\mathcal{X}$ into $k$ clusters $\\widehat C_1, \\ldots, \\widehat C_k$ based on the points that reach the $k$ leaves in $T$, where we index the clusters so that leaf $j$ contains the centers $\\boldsymbol{\\mu}^j$ and $\\widehat \\boldsymbol{\\mu}^j$, where $\\widehat \\boldsymbol{\\mu}^j$ is the mean of $\\widehat C_j$ for $k$-means and the median of $\\widehat C_j$ for $k$-medians. This provides a bijection between old and new centers (and clusters).\nRecall that the map $c:\\mathcal{X} \\to \\{\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k\\}$ associates each point to its nearest center (i.e., $c(\\mathbf{x})$ corresponds to the cluster assignment given by the centers $\\{\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k\\}$).\n\n\nFor a node $u \\in T$, we let $\\mathcal{X}_u$ denote the surviving data set vectors at node $u \\in T$ based on the thresholds from the root to $u$.\nWe also define $J_u \\subseteq [k]$ be the set of surviving centers at node~$u$ from the set $\\{\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k\\}$, where these centers satisfy the thresholds from the root to~$u$. 
\nDefine $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$ to be the extremal (smallest and largest) coordinate-wise values of the centers in $J_u$, that is, for $i \\in [d]$, we set\n$$\n\\mu^{L,u}_i = \\min_{j \\in J_u} \\mu^j_i,\n\\qquad\\mathrm{and}\\qquad \n\\mu^{R,u}_i = \\max_{j \\in J_u} \\mu^j_i.\n$$\nIn other words, using the previous notation and recalling that $B(u) = \\{\\boldsymbol{\\mu}^1,\\ldots,\\boldsymbol{\\mu}^k\\} \\cap \\mathrm{cell}(u)$, we have that\n$$\n\\mathrm{diam}_1(B(u)) = \\|\\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u}\\|_1\n\\qquad \n\\mbox{\\ and\\ }\n\\qquad\n\\mathrm{diam}_2^2(B(u)) = \\|\\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u}\\|_2^2.\n$$\n\nRecall that $t_u$ for node $u \\in T$ denotes the number of {\\em mistakes} incurred during the threshold cut defined by $u$, where a point $\\mathbf{x}$ is a mistake at node $u$ if $\\mathbf{x}$ reaches $u$, it was not a mistake before, and exactly one of the following two events occurs:\n$$\n\\{c(\\mathbf{x})_i \\leq \\theta_u \\ \\ \\mathrm{and}\\ \\ x_i > \\theta_u\\}\n\\qquad \\mathrm{or} \\qquad\n\\{c(\\mathbf{x})_i > \\theta_u \\ \\ \\mathrm{and}\\ \\ x_i \\leq \\theta_u\\}.\n$$\nLet $\\mathcal{X} = \\cX^{\\mathsf{cor}} \\cup \\cX^{\\mathsf{mis}}$ be a partition of the input data set into two parts, where $\\mathbf{x}$ is in $\\cX^{\\mathsf{cor}}$ if it reaches the same leaf node in $T$ as its center $c(\\mathbf{x})$, and otherwise, $\\mathbf{x}$ is in $\\cX^{\\mathsf{mis}}$. In other words, $\\cX^{\\mathsf{mis}}$ contains all points $\\mathbf{x} \\in \\mathcal{X}$ that are a mistake at any node $u$ in $T$, and the rest of the points are in $\\cX^{\\mathsf{cor}}$. We note that the notion of ``mistakes'' used here is different from the definition of ``changes'' used for the analysis of $2$-means\/medians, even though we reuse some of the same notation.\n\nWe need a standard consequence of the Cauchy\u2013Schwarz inequality to analyze the $k$-means cost.\n\\begin{claim}\\label{clm:cauchy_schwarz_k_means}\n\tFor any $a_1,\\ldots,a_{k}\\in\\mathbb{R},$ it holds that $\\sum_{i=1}^ka_i^2 \\geq \\frac{1}{k}\\left(\\sum_{i=1}^ka_i\\right)^2.$\n\\end{claim}\n\\begin{proof}\n\tDenote by $a$ the vector $(a_1,\\ldots,a_{k})$ and by $b$ the vector $(\\nicefrac{1}{\\sqrt{k}},\\ldots,\\nicefrac{1}{\\sqrt{k}})$, so that $\\|b\\|_{2}=1$. \n\tBy the Cauchy\u2013Schwarz inequality,\n\t$\\frac{1}{k}\\left(\\sum_{i=1}^ka_i\\right)^2=\\inner{a}{b}^2\\leq\\|a\\|_{2}^{2}\\,\\|b\\|_{2}^{2}=\\sum_{i=1}^ka_i^2$.\n\\end{proof}\n\nWe also need two facts, which state that the optimal center for a cluster is the mean or the coordinate-wise median of the points in the cluster, respectively. The proofs of these facts can be found in standard texts~\\cite{schutze2008introduction}.\n\n\\begin{fact}\\label{fact:1-means-optimal center}\nFor any set $S=\\{\\mathbf{x}^1,\\ldots, \\mathbf{x}^n\\}\\subseteq \\mathbb{R}^d$, the optimal center under the $\\ell_2^2$ cost is the mean $\\boldsymbol{\\mu} = \\frac1n\\sum_{\\mathbf{x}\\in S}\\mathbf{x}.$\n\\end{fact}\n\n\\begin{fact}\\label{fact:1-median-optimal center}\nFor any set $S=\\{\\mathbf{x}^1,\\ldots, \\mathbf{x}^n\\}\\subseteq \\mathbb{R}^d$, the optimal center $\\boldsymbol{\\mu}$ under the $\\ell_1$ cost is the coordinate-wise median, defined for $i\\in[d]$ as $\\mu_i = \\mathsf{median}(x^1_i,\\ldots, x^n_i).$\n\\end{fact}\n\n\\subsubsection{The Two Main Lemmas and the Proof of Theorem~\\ref{thm:main-k}}\n\nTo prove the theorem, we state two lemmas that aid in analyzing the cost of the given clustering versus the IMM clustering. 
\nThe theorem will follow from these lemmas, and we will prove the lemmas in the proceeding subsections. We start with the lemma relating the number of mistakes~$t_u$ at each node $u$ and the distance between $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$ to the cost incurred by the given centers.\n\n\\begin{lemma}\\label{lemma:tree-cost}\n\tIf IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ of depth $H$ that incurs $t_u$ mistakes at node $u \\in T$, then \n\t\\begin{enumerate}\n\t\t\\item The $k$-medians cost of the IMM tree satisfies $$\\mathrm{cost}(T)\\leq \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.$$\n\t\t\\item The $k$-means cost of the IMM tree satisfies $$\\mathrm{cost}(T)\\leq 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.$$\n\t\\end{enumerate}\n\\end{lemma}\n\nWe next bound the cost of the given centers in the terms of the number of mistakes in the tree. The key idea is that if there must be many mistakes at each node, then the cost of the given centers must actually be fairly large.\n\n\\begin{lemma}\\label{aux-claim:general_k_upper_bound}\n\tIf IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ of depth $H$ that incurs $t_u$ mistakes at node $u \\in T$, then \n\t\\begin{enumerate}\n\t\t\\item The $k$-medians cost satisfies \n\t\t$$\\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1 \\leq 2H \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\n\t\t\\item The $k$-means cost \n\t\tsatisfies \n\t\t$$\\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2 \\leq 4Hk \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\n\t\\end{enumerate}\n\\end{lemma}\n\nCombining these two lemmas immediately implies Theorem~\\ref{thm:main-k}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:main-k}.]\nFor $k$-medians, Lemmas~\\ref{lemma:tree-cost} and~\\ref{aux-claim:general_k_upper_bound}\ntogether imply that \n$$\\mathrm{cost}(T)\\leq \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1\n\\leq \n(2H+1) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\nFor $k$-means, we have that \n\\begin{eqnarray*}\n\\mathrm{cost}(T)\\leq 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2\n\\leq (8Hk+2) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).\n\\end{eqnarray*}\n\n\\end{proof}\n\n\n\n\\subsubsection{Proof of Lemma~\\ref{lemma:tree-cost}}\n\nWe begin with the $k$-medians proof (the $k$-means proof will be similar). 
Notice that the cost can only increase when measuring the distance to the (suboptimal) center $\\boldsymbol{\\mu}^j$ instead of the (optimal) center $\\widehat \\boldsymbol{\\mu}^j$ for cluster $\\widehat C_j$, and hence,\n$$\n\\mathrm{cost}(T) = \\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\widehat \\boldsymbol{\\mu}^j\\|_1\n\\leq \n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\boldsymbol{\\mu}^j\\|_1.$$\nWe can rewrite this sum using the partition $\\cX^{\\mathsf{cor}}$ and $\\cX^{\\mathsf{mis}}$ of $\\mathcal{X}$, using the fact that\nwhenever $\\mathbf{x}\\in \\cX^{\\mathsf{cor}}$, then the distance is computed with respect to the true center $c(\\mathbf{x})$,\n\\begin{eqnarray*}\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\boldsymbol{\\mu}^j\\|_1 &=& \n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\\\ &=& \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\end{eqnarray*}\nStarting with the above cost bound, and using the triangle inequality, we see\n\\begin{eqnarray*}\n\\mathrm{cost}(T) \n&\\leq&\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\\\ &\\leq& \n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \n(\\|\\mathbf{x}- c(\\mathbf{x})\\|_1 + \\|c(\\mathbf{x})- \\boldsymbol{\\mu}^j\\|_1) \n\\\\ &=& \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|c(\\mathbf{x})- \\boldsymbol{\\mu}^j\\|_1\n\\end{eqnarray*}\n\nTo control the second term in the final line, we must bound the cost of the mistakes. We decompose $\\cX^{\\mathsf{mis}}$ based on the node $u$ where $\\mathbf{x} \\in \\cX^{\\mathsf{mis}}$ is first separated from its true center $c(\\mathbf{x})$ due to the threshold at node~$u$.\nTo this end, consider some point $\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j$, where its distance is measured to the incorrect center $\\boldsymbol{\\mu}^{j} \\neq c(\\mathbf{x})$. Both centers $c(\\mathbf{x})$ and $\\boldsymbol{\\mu}^j$ have survived until node $u$ in the threshold tree $T$, and hence, both vectors are part of the definitions of $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$. 
In particular, we can use the upper bound\n$$\\|c(\\mathbf{x}) - \\boldsymbol{\\mu}^j\\|_1 \\leq \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.$$\nThere are $t_u$ points in $\\cX^{\\mathsf{mis}}$ caused by the threshold at node $u$, and we have that\n$$\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|c(\\mathbf{x}) - \\boldsymbol{\\mu}^j\\|_1 \\leq \n\\sum_{u \\in T} t_u \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.$$\nTherefore, we have, as desired\n\\begin{eqnarray*}\n\\mathrm{cost}(T) &\\leq& \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\\\&\\leq& \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n\\end{eqnarray*}\n\\noindent\nAnalyzing $k$-means is similar; we incur a factor of two by using Claim~\\ref{clm:cauchy_schwarz_k_means} instead of the triangle inequality:\n\\begin{eqnarray*}\n\\mathrm{cost}(T) &\\leq& \n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_2^2 + \n2\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} (\\|\\mathbf{x}- c(\\mathbf{x})\\|_2^2 +\\|c(\\mathbf{x}) - \\boldsymbol{\\mu}^j\\|_2^2)\n\\\\&\\leq& 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|c(\\mathbf{x}) - \\boldsymbol{\\mu}^j\\|_2^2\n\\\\ &\\leq& 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2\n\\end{eqnarray*}\n\n\n\\subsubsection{Proof of Lemma~\\ref{aux-claim:general_k_upper_bound}}\n\\label{sec:general_k_upper_bound}\n\t\nTo prove this lemma, we bound the cost at each node $u$ of tree in terms of the mistakes made at this node. For this lemma, we define $\\cX^{\\mathsf{cor}}_u$ to be the set of points in $\\mathcal{X}$ that reach node $u$ in $T$ along with their center $c(\\mathbf{x})$. We note that $\\cX^{\\mathsf{cor}}_u$ differs from $\\cX^{\\mathsf{cor}} \\cap \\mathcal{X}_u$ because a point $\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u$ may not make it to $\\cX^{\\mathsf{cor}}$ if there is a mistake later on (i.e., $\\cX^{\\mathsf{cor}}$ is the union of $\\cX^{\\mathsf{cor}}_u$ only over leaf nodes).\n\n\\begin{lemma}\\label{lemma:mistake-bound}\nFor any node $u \\in T$, we have that\n\t$$\n\t\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_1\n\t\\geq \n\t\\frac{t_u}{2} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n\t$$\nand\n\t$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_2^2\n\\geq \n\\frac{t_u}{4k} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.\n$$\n\\end{lemma}\n\\begin{proof}\nFix a coordinate $i \\in [d]$ and a node $u \\in T$.\nTo simplify notation, we let $z_1 \\leq \\cdots \\leq z_{k'}$ denote the {\\em sorted} values of $i$th coordinate of the $k' \\leq k$ centers that survive until node $u$ (so that $z_1 = \\mu^{L,u}_i$ and $z_{k'} = \\mu^{R,u}_i$). Observe that for each $\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u$, the center $c(\\mathbf{x})$ must have survived until node $u$, and hence, $c(\\mathbf{x})_i$ equals one of the values $z_j$ for $j\\in[k']$. 
\n\nWe need a definition that allows us to relate the cost in coordinate $i$ to the distances between $z_1$ and $z_{k'}$. For consecutive values $(j,j+1)$, we say that the pair $(j,j+1)$ is {\\em covered} by $\\mathbf{x}$ if either \n\\begin{itemize}\n\t\\item The segment $[z_j, \\frac{z_j + z_{j+1}}{2})$ is contained in the segment $[x_i,c(\\mathbf{x})_i]$, or\n\t\\item The segment $[\\frac{z_j + z_{j+1}}{2}, z_{j+1})$ is contained in the segment $[x_i,c(\\mathbf{x})_i]$.\n\\end{itemize} \n\nWe prove the following claim, which enables us to relate the cost in the $i$th coordinate to the value $z_{k'} - z_1$ by decomposing this value into the distance between consecutive centers.\n\\begin{claim}\\label{claim:covering}\nFor each $j = 1,2,\\ldots, k'-1$, the pair $(j,j+1)$ is {covered} by at least~$t_u$ points $\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u$.\n\\end{claim}\n\\begin{proof}\nSuppose for contradiction that this does not hold. We argue that we can find a threshold value for coordinate $i$ that makes fewer than $t_u$ mistakes. To see this, assume that $(j,j+1)$ is covered by fewer than $t_u$ points $\\mathbf{x} \\in \\mathcal{X}_u$. In particular, setting the threshold to be $\\frac{z_j + z_{j+1}}{2}$ separates fewer than $t_u$ points $\\mathbf{x}$ from their centers $c(\\mathbf{x})$. This implies that there are fewer than $t_u$ mistakes at node $u$, which is a contradiction because the IMM algorithm chooses the coordinate and threshold pair that minimizes the number of mistakes.\n\\end{proof}\n\nNow this claim suffices to prove Lemma~\\ref{lemma:mistake-bound}. The only challenge is that we must string together the covering points $\\mathbf{x}$ to get a bound on $z_{k'}-z_1$.\n\nWe start with the $k$-medians proof. Using the above claim, we can lower bound the contribution of coordinate $i$ to the cost of the given centers. Notice that the values $z_1 \\leq \\cdots \\leq z_{k'}$ partition the interval between $z_1 = \\mu^{L,u}_i$ and $z_{k'} = \\mu^{R,u}_i$. Thus, each time $\\mathbf{x}$ covers a pair $(j,j+1)$, there must be a contribution of $\\frac{z_{j+1} - z_j}{2}$ to the cost $|x_i - c(\\mathbf{x})_i|$. 
Because each pair is covered at least $t_u$ times by Claim~\\ref{claim:covering}, we conclude that \n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|\n\\geq t_u \\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)\n= \\frac{t_u}{2} (z_{k'} - z_1).\n$$\nTo relate the bound to $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$, we note that the above argument holds for each coordinate $i \\in [d]$, and\nwe have that\n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_1\n= \\sum_{i \\in [d]} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |\\mathbf{x}_i - c(\\mathbf{x})_i|\n\\geq \n\\frac{t_u}{2} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n$$\n\nFor the $k$-means proof, we apply the same argument as above, this time using Claim~\\ref{clm:cauchy_schwarz_k_means} to bound the sum of squared values as \n$$\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|^2\n\\geq t_u \\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)^2\n\\geq \\frac{t_u}{k} \\left(\\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)\\right)^2\n= \\frac{t_u}{4k} (z_{k'} - z_1)^2,$$\nand therefore, summing over coordinates $i\\in[d]$, we have\n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_2^2\n= \\sum_{i \\in [d]} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|^2\n\\geq \n\\frac{t_u}{4k} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.\n$$\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{aux-claim:general_k_upper_bound}.]\n\tWe start with the $k$-medians proof.\nThe factor of $H$ arises because the same points $\\mathbf{x} \\in \\mathcal{X}$ can appear in at most $H$ sets $\\cX^{\\mathsf{cor}}_u$ because $H$ is the depth of the tree. More precisely, using Lemma~\\ref{lemma:mistake-bound} for each node $u$, we have that \n$$\nH \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) \\geq \\sum_{u \\in T} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_1\n\\geq \\sum_{u \\in T} \\frac{t_u}{2} \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n$$\t\nApplying the same steps for the $k$-means cost, we have that\n$$\nH \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) \\geq \\sum_{u \\in T} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_2^2\n\\geq \\sum_{u \\in T} \\frac{t_u}{4k} \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.\n$$\n\\end{proof}\n\n\n\n\n\n\\subsection{Approximation lower bound}\n\nTo complement our upper bounds, we show that a threshold tree with $k$ leaves cannot, in general, yield better than an $\\Omega(\\log k)$ approximation to the optimal $k$-medians or $k$-means clustering.\n\n\\begin{theorem}\\label{thm:lb-k}\nFor any $k \\geq 2$, there exists a data set with $k$ clusters such that any threshold tree $T$ with $k$ leaves\nmust have $k$-medians and $k$-means cost at least\n$$\n\\mathrm{cost}(T) \\geq \\Omega(\\log k) \\cdot \\mathrm{cost}(opt).\n$$\n\\end{theorem}\n\nThe data set is produced by first picking $k$ random centers from the hypercube $\\{-1,1\\}^d$, for large enough~$d$, and then using each of these to produce a cluster consisting of the $d$ points that can be obtained by replacing one coordinate of the center by zero. Thus the clusters have size $d$ and radius $O(1)$. 
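This construction is easy to instantiate; the sketch below (Python, with illustrative values of $k$ and $d$; the lower-bound argument additionally requires $d$ to be sufficiently large relative to $k$) generates one such data set.

\\begin{verbatim}
import numpy as np

def hard_instance(k, d, seed=0):
    # k random centers from {-1,+1}^d; each cluster consists of the d points
    # obtained from its center by replacing one coordinate with zero.
    rng = np.random.default_rng(seed)
    centers = rng.choice([-1.0, 1.0], size=(k, d))
    points, labels = [], []
    for j, mu in enumerate(centers):
        for i in range(d):
            p = mu.copy()
            p[i] = 0.0
            points.append(p)
            labels.append(j)
    return np.array(points), np.array(labels)

X, y = hard_instance(k=16, d=256)
print(X.shape)   # (k*d, d): each of the k clusters has d points of radius O(1)
\\end{verbatim}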
To prove the lower bound, we use ideas from the study of pseudo-random binary vectors, showing that the projections of the centers onto any subset of $m\lesssim\log_2 k$ coordinates take on all $2^m$ possible values, with each occurring roughly equally often.\n Then, we show that (i) the threshold tree must be essentially a complete binary tree with depth $\Omega(\log_2 k)$ to achieve a clustering with low cost, and (ii) any such tree incurs a cost that is $\Omega(\log k)$ times the optimal for this data set (for both $k$-medians and $k$-means).\nThe proof of Theorem~\ref{thm:lb-k} appears in Appendix~\ref{sec:general_k_lower_bound}.\n\n\n\section{Motivating Examples}\n\label{sec:motivating-examples}\n\n{\bf Using $k-1$ features may be necessary.}\nWe start with a simple but important bound showing that trees with depth less than~$k-1$ (or fewer than $k-1$ features) can be arbitrarily worse than the optimal clustering. Consider the data set consisting of the $k-1$ standard basis vectors $\mathbf{e}^1,\ldots,\mathbf{e}^{k-1} \in \mathbb{R}^{k-1}$ along with the all zeros vector. As this data set has $k$ points, \nthe optimal $k$-median\/means cost is zero, putting each point in its own cluster. \nUnfortunately, it is easy to see that for this data, depth $k-1$ is necessary for clustering with a threshold tree. Figure~\ref{fig:simplex} depicts an optimal tree for this data set. Shorter trees do not work because projecting onto any $k-2$ coordinates does not separate the data, as at least two points will have all zeros in these coordinates. Therefore, any tree with depth at most $k-2$ will put two points in the same cluster, leading to non-zero cost, whereas the optimal cost is zero. In other words, for this data set, caterpillar trees such as Figure~\ref{fig:simplex} are necessary and sufficient for an optimal clustering. This example also shows that $\Theta(k)$ features are tight for feature selection~\cite{cohen2015dimensionality} and provides a separation from feature extraction methods that use a linear map to only a logarithmic number of dimensions~\cite{becchetti2019oblivious, makarychev2019performance}.\n\n\begin{figure}\n \centering\n \subfloat[Optimal threshold tree for the data set in $\mathbb{R}^{k-1}$ consisting of the $k-1$ standard basis vectors and the all zeros vector. 
Any optimal tree must use all $k-1$ features and have depth~$k-1$.]{\n \\begin{minipage}[t]{0.45\\textwidth}\n \\centering\n \\begin{tikzpicture}\n\t[inner\/.style={shape=rectangle, rounded corners, draw, align=center, top color=white, bottom color=gray!40, scale=.85},\n\tdots\/.style={shape=rectangle, align=center, top color=white, scale=.85},\n\tleaf\/.style={shape=rectangle, rounded corners, draw, align=center, scale=.85},\n\tlevel 1\/.style={sibling distance=2mm},\n\tlevel 2\/.style={sibling distance=2mm},\n\tlevel 3\/.style={sibling distance=2mm},\n\tlevel 4\/.style={sibling distance=2mm},\n\tlevel distance=8mm]\n\t\\Tree\n\t[.\\node[inner]{$x_{i_1} \\leq 0.5$};\n\t[.\\node[inner]{$x_{i_2} \\leq 0.5$};\n\t[.\\node[inner] {$\\ldots$};\n\t[.\\node[inner] {$x_{i_d} \\leq 0.5$};\n\t\\node[leaf]{\\textbf{$\\mathbf{0}$}};\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_d}$}};\n\t] \n\t\\node[leaf]{\\textbf{$\\ldots$}};\n\t]\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_2}$}};\n\t]\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_1}$}};\n\t]\n\t\\node[top color=white, bottom color=white, scale=.75] at (-0.8,-4.1) {};\n\t\\end{tikzpicture}\n\t\\end{minipage}\n\t\\label{fig:simplex}}\n\t\\hfill\n \\subfloat[The ID3 split results in a $3$-means\/medians clustering with arbitrarily worse cost than the optimal because it places the top two points in separate clusters. Our algorithm (Section~\\ref{sec:k-means}) instead starts with the optimal first split.\n ]{\n \\includegraphics[width=.4\\textwidth, trim=-2cm 0 -2cm 0, clip]{our_vs_id3.png}\n \\label{fig:imm_vs_id3}}\n\\caption{Motivating examples showing that (a) threshold trees may need depth $k-1$ to determine $k$ clusters, and (b) standard decision tree algorithms such as ID3 or CART perform very badly on some data sets.}\n\\end{figure}\n\t\n\n\\paragraph{Standard top-down decision trees do not work.}\n\\label{sec:standard-dt-bad}\nA natural approach to building a threshold tree is to (1) find a good $k$-medians or $k$-means clustering using a standard algorithm, then (2) use it to label all the points, and finally (3) apply a supervised decision tree learning procedure, such as ID3~\\cite{quinlan1986induction,quinlan2014c4} to find a threshold tree that agrees with these cluster labels as much as possible. ID3, like other common decision tree algorithms, operates in a greedy manner, where at each step it finds the best split in terms of \\emph{entropy} or \\emph{information gain}. We will show that this is not a suitable strategy for clustering and that the resulting tree can have cost that is arbitrarily bad. \nIn what follows, denote by $\\mathrm{cost}({ID3}_{\\ell})$ the cost of the decision tree with~$\\ell$ leaves returned by ID3 algorithm.\n\n\n\nFigure \\ref{fig:imm_vs_id3} depicts a data set $\\mathcal{X} \\subseteq \\mathbb{R}^2$ partitioned into three clusters $\\mathcal{X} = \\mathcal{X}_0 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_1 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_2$.\nWe define two centers $\\boldsymbol{\\mu}^0=(-2,0)$ and $\\boldsymbol{\\mu}^1=(2,0)$ and for each $i\\in \\{0, 1\\}$, we define~$\\mathcal{X}_i$ as $500$ i.i.d. points $\\mathbf{x} \\sim \\mathcal{N}(\\boldsymbol{\\mu}^i, \\epsilon)$ for some small $\\epsilon > 0$. \nThen, $\\mathcal{X}_2 = \\{(-2, v), (2, v)\\}$ where $v \\to \\infty$. \nWith high probability, we have that the optimal $3$-means clustering is $(\\mathcal{X}_0, \\mathcal{X}_1, \\mathcal{X}_2)$, i.e. 
$\\mathbf{x} \\in \\mathcal{X}$ gets label $y\\in\\{0,1,2\\}$ such that $\\mathbf{x} \\in \\mathcal{X}_y$.\nThe ID3 algorithm minimizes the entropy at each step. In the first iteration, it splits between the two large clusters. As a result $(-2, v)$ and $(2, v)$ will also be separated from one another. Since $ID3_3$ outputs a tree with exactly three leaves, one of the leaves must contain a point from $\\mathcal{X}_2$ together with points from either $\\mathcal{X}_0$ or $\\mathcal{X}_1$, this means that $\\mathrm{cost}(ID3_3)= \\Omega(v) \\to \\infty$.\nNote that $\\mathrm{cost}((\\mathcal{X}_1, \\mathcal{X}_2, \\mathcal{X}_3))$ does not depend on $v$, and hence, it is substantially smaller than $\\mathrm{cost}(ID3_3)$.\nUnlike ID3, the optimal threshold tree first separates $\\mathcal{X}_2$ from $\\mathcal{X}_0 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_1$, and in the second split it separates $\\mathcal{X}_0$ and $\\mathcal{X}_1$. Putting the outliers in a separate cluster is necessary for an optimal clustering. It is easy to extend this example to more clusters or to when ID3 uses more leaves.\n\n\n\\section{Introduction}\nA central direction in machine learning is understanding the reasoning behind decisions made by learned models~\\cite{lipton2018mythos, molnar2019, murdoch2019interpretable}. Prior work on AI explainability focuses on the interpretation of a black-box model, known as {\\em post-modeling} explainability~\\cite{baehrens2010explain, ribeiro2018anchors}. While methods such as LIME~\\cite{ribeiro2016should} or Shapley explanations~\\cite{lundberg2017unified} have made progress in this direction, they do not provide direct insight into the underlying data set, and the explanations depend heavily on the given model. This has raised concerns about the applicability of current solutions, leading researchers to consider more principled approaches to interpretable methods~\\cite{rudin2019stop}.\n\nWe address the challenge of developing machine learning systems that are explainable by design, starting from an {\\em unlabeled} data set. Specifically, we consider {\\em pre-modeling} explainability in the context of {clustering}. \nA common use of clustering is to identify patterns or discover structural properties in a data set by quantizing the unlabeled points. For instance, $k$-means clustering may be used to discover coherent groups among a supermarket's customers. While there are many good clustering algorithms, the resulting cluster assignments can be hard to understand because the clusters may be determined using all the features of the data, and there may be no concise way to explain the inclusion of a particular point in a cluster. This limits the ability of users to discern the commonalities between points within a cluster or understand why points ended up in different clusters. \n\n\n\\input{cluster-pic-intro.tex}\n\n\n\nOur goal is to develop accurate, efficient clustering algorithms with concise explanations of the cluster assignments. There should be a simple procedure using a few features to explain why any point belongs to its cluster. Small decision trees have been identified as a canonical example of an easily explainable model~\\cite{molnar2019, murdoch2019interpretable}, and\nprevious work on explainable clustering uses an unsupervised decision tree~\\cite{bertsimas2018interpretable, fraiman2013interpretable,geurts2007inferring,ghattas2017clustering, liu2005clustering}. 
Each node of the binary tree iteratively partitions the data by thresholding on a single feature. We focus on finding $k$ clusters, and hence, we use trees with $k$ leaves. Each leaf corresponds to a cluster, and the tree is as small as possible. \nWe refer to such a tree as a {\\em threshold tree}.\n\nThere are many benefits of using a small threshold tree to produce a clustering. Any cluster assignment is explained by computing the thresholds along the root-to-leaf path. By restricting to $k$ leaves, we ensure that each such path accesses at most $k-1$ features, independent of the data dimension. \nIn general, a threshold tree provides an initial quantization of the data set, which can be combined with other methods for future learning tasks. While we consider static data sets, new data points can be easily clustered by using the tree, leading to explainable assignments.\nTo analyze clustering quality, we consider the $k$-means and $k$-medians objectives~\\cite{macqueen, steinhaus}. The goal is to efficiently determine a set of $k$ centers that minimize either the squared $\\ell_2$ or the $\\ell_1$ distance, respectively, of the input vectors to their closest center. \n\nFigure~\\ref{fig:optimal_vs_tree} provides an example of standard and explainable $k$-means clustering on the same data set. Figure~\\ref{fig:optimal_clusters} on the left shows an optimal $5$-means clustering. Figure~\\ref{fig:tree_clusters} in the middle shows an explainable, tree-based $5$-means clustering, determined by the tree in Figure~\\ref{fig:decision_tree} on the right. The tree has five leaf nodes, and vectors are assigned to clusters based on the thresholds. Geometrically, the tree defines a set of axis-aligned cuts that determine the clusters. While the two clusterings are very similar, using the threshold tree leads to easy explanations, whereas using a standard $k$-means clustering algorithm leads to more complicated clusters. The difference between the two approaches becomes more evident in higher dimensions, because standard algorithms will likely determine clusters based on all of the feature values.\n\nTo reap the benefits of explainable clusters, we must ensure that the data partition is a good approximation of the optimal clustering. While many efficient algorithms have been developed for $k$-means\/medians clustering, the resulting clusters are often hard to interpret~\\cite{arthur2007k,kanungo02, ostrovsky2013effectiveness, shalev2014understanding}. For example, Lloyd's algorithm alternates between determining the best center for the clusters and reassigning points to the closest center~\\cite{lloyd1982least}. The resulting set of centers depends in a complex way to the other points in the data set. Therefore, the relationship between a point and its nearest center may be the result of an opaque combination of many feature values. This issue persists even after dimension reduction or feature selection, because a non-explainable clustering algorithm is often invoked on the modified data set. As our focus is on pre-modeling explanability, we aim for simple explanations that use the original feature vectors.\n\nEven though Figure~\\ref{fig:optimal_vs_tree} depicts a situation in which the optimal clustering is very well approximated by one that is induced by a tree, it is not clear whether this would be possible in general. 
Our first technical challenge is to understand the {\\em price of explainability} in the context of clustering: that is, the multiplicative blowup in $k$-means (or $k$-medians) cost that is inevitable if we force our final clustering to have a highly constrained, interpretable, form. The second challenge is to actually find such a tree {\\em efficiently}. This is non-trivial because it requires a careful, rather than random, choice of a subset of features. As we will see, the kind of analysis that is ultimately needed is quite novel even given the vast existing literature on clustering.\n\n\n\\subsection{Our contributions}\n\n\nWe provide several new theoretical results on explainable $k$-means and $k$-medians clustering. Our new algorithms and lower bounds are summarized in Table~\\ref{tab:results_summary}. \n\n\\medskip \\noindent {\\bf Basic limitations.} \nA partition into $k$ clusters can be realized by a binary threshold tree with $k-1$ internal splits. This uses at most $k-1$ features, but is it possible to use even fewer, say $\\log k$ features? In Section~\\ref{sec:motivating-examples}, we demonstrate a simple data set that requires $\\Omega(k)$ features to achieve a explainable clustering with bounded approximation ratio compared to the optimal $k$-means\/medians clustering. In particular, the depth of the tree might need to be $k-1$ in the worst case.\n\nOne idea for building a tree is to begin with a good $k$-means (or $k$-medians) clustering, use it to label all the points, and then apply a supervised decision tree algorithm that attempts to capture this labeling. In Section~\\ref{sec:standard-dt-bad}, we show that standard decision tree algorithms, such as ID3, may produce clusterings with arbitrarily high cost. Thus, existing splitting criteria are not suitable for finding a low-cost clustering, and other algorithms are needed.\n\n\\medskip \\noindent {\\bf New algorithms.} \nOn the positive side, we provide efficient algorithms to find a small threshold tree that comes with provable guarantees on the cost. We note that using a small number of clusters is preferable for easy interpretations, and therefore $k$ is often relatively small.\nFor the special case of two clusters ($k=2$), we show (Theorem~\\ref{thm:optimal_2_median_means}) that a single threshold cut provides a constant-factor approximation to the optimal $2$-medians\/means clustering, with a closely-matching lower bound (Theorem~\\ref{clm:2_median_lower_bound}), and we provide an efficient algorithm for finding the best cut. For general $k$, we show how to approximate any clustering by using a threshold tree with $k$ leaves (Algorithm~\\ref{algo:imm}). The main idea is to minimize the number of mistakes made at each node in the tree, where a mistake occurs when a threshold separates a point from its original center. Overall, the cost of the explainable clustering will be close to the original cost up to a factor that depends on the tree depth (Theorem~\\ref{thm:main-k}). In the worst-case, we achieve an approximation factor of $O(k^2)$ for $k$-means and $O(k)$ for $k$-medians compared to the cost of any clustering (e.g., the optimal cost). These results do not depend on the dimension or input size; hence, we get a constant factor approximation when $k$ is constant. \n\n\\paragraph{Approximation lower bounds.}\nSince our upper bounds depend on $k$, it is natural to wonder whether it is possible to achieve a constant-factor approximation, or whether the cost of explainability grows with~$k$. 
On the negative side, we identify a data set such that any threshold tree with $k$ leaves must incur an $\\Omega(\\log k)$-approximation for both $k$-medians and $k$-means (Theorem~\\ref{thm:lb-k}). \nFor this data set, our algorithm achieves a nearly matching bound for $k$-medians.\\vspace{2ex}\n\n\n\\begin{table}[!htb]\n\\renewcommand{\\arraystretch}{1.6}\n\\centering\n\\begin{minipage}{.8\\textwidth}\n\\centering\n \\begin{tabular}{|c|cc|cc|}\n \\hline\n %\n \\rowcolor[HTML]{F1F1F1}\n & \\multicolumn{2}{c|}{\\textbf{$k$-medians}} &\n \\multicolumn{2}{c|}{\\textbf{$k$-means}} \\\\\n \n \n \\rowcolor[HTML]{F1F1F1}\n & $k = 2$ & $k > 2$ & $k = 2$ & $k > 2$ \\\\\n \\hline\n \n \\cellcolor[HTML]{F1F1F1}\n \\textbf{Upper Bound} & {$2$} & {$O(k)$} & {$4$} & {$O(k^2)$} \\\\\n \n \\cellcolor[HTML]{F1F1F1} \\textbf{Lower Bound} & {$2 - \\frac{1}{d}$} & {$\\Omega(\\log k)$} & {$3\\left(1 - \\frac{1}{d} \\right)^2$} & {$\\Omega(\\log k)$} \\\\\n \n \\hline\n \\end{tabular}\n \\caption{Summary of our upper and lower bounds on approximating the optimal $k$-medians\/means clustering with explainable, tree-based clusters. The values express the factor increase compared to the optimal solution in the worst case.\n }\n \\label{tab:results_summary}\n\\end{minipage}\n\\end{table}\n\n\n\\input{related.tex}\n\n\\section{Preliminaries}\n\n\\input{prelim.tex}\n\\input{motivation-examples.tex}\n\n\n\\input{2-means-short.tex}\n\\input{k-means-short-new.tex}\n\n\n\\section{Conclusion}\nIn this paper we discuss the capabilities and limitations of explainable clusters. For the special case of two clusters ($k=2$), we provide nearly matching upper and lower bounds for a single threshold cut. For general $k >2$, we present the IMM algorithm that achieves an $O(H)$ approximation for $k$-medians and an $O(Hk)$ approximation for $k$-means when the threshold tree has depth $H$ and $k$ leaves. \nWe complement our upper bounds with a lower bound showing that any threshold tree with $k$ leaves must have cost at least $\\Omega(\\log k)$ more than the optimal for certain data sets. \nOur theoretical results provide the first approximation guarantees on the quality of explainable unsupervised learning in the context of clustering. Our work makes progress toward the larger goal of explainable AI methods with precise objectives and provable guarantees. \n\n\n\nAn immediate open direction is to improve our results for $k$ clusters, either on the upper or lower bound side. One option is to use larger threshold trees with more than $k$ leaves (or allowing more than $k$ clusters). It is also an important goal to identify natural properties of the data that enable explainable, accurate clusters. For example, it would be interesting to improve our upper bounds on explainable clustering for well-separated data. Our lower bound of $\\Omega(\\log k)$ utilizes clusters with diameter $O(1)$ and separation $\\Omega(d)$, where the hardness stems from the randomness of the centers. In this case, the approximation factor $\\Theta(\\log k)$ is tight because our upper bound proof actually provides a bound in terms of the tree depth (which is about $\\log k$, see Appendix~\\ref{apx:lower_bound_log_k}). Therefore, an open question is whether a $\\Theta(\\log k)$ approximation is possible for any well-separated clusters (e.g., mixture of Gaussians with separated means and small variance). 
Beyond $k$-medians\/means, it would be worthwhile to develop other clustering methods using a small number of features (e.g., hierarchical clustering).\n\n\\paragraph{Acknowledgements.}\nSanjoy Dasgupta has been supported by NSF CCF-1813160. Nave Frost has been funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (Grant agreement No. 804302). The contribution of Nave Frost is part of a Ph.D. thesis research conducted at Tel Aviv University. \n\n\n\\section{Preliminaries}\nThroughout we use bold variables for vectors, and we use non-bold for scalars such as feature values.\nGiven a set of points $\\mathcal{X}=\\{\\mathbf{x}^1,\\ldots,\\mathbf{x}^n\\}\\subseteq\\mathbb{R}^d$ and an integer $k$ the goal of $k$-medians and $k$-means clustering is to partition $\\mathcal{X}$ into $k$ subsets and minimize the distances of the points to the centers of the clusters. It is known that the optimal centers correspond to means or medians of the clusters, respectively. Denoting the centers as $\\boldsymbol{\\mu}^1,\\ldots,\\boldsymbol{\\mu}^k$, the aim of $k$-means is to find a clustering that minimizes the following objective\n$$\\mathrm{cost}_2(\\boldsymbol{\\mu}^1, \\ldots, \\boldsymbol{\\mu}^k)=\\sum_{\\mathbf{x}\\in \\mathcal{X}} \\norm{\\mathbf{x}-c_2(\\mathbf{x})}^2_2,$$ where $c_2(\\mathbf{x})=\\argmin_{\\boldsymbol{\\mu} \\in \\{\\boldsymbol{\\mu}^1,\\ldots,\\boldsymbol{\\mu}^k\\}}{\\norm{\\boldsymbol{\\mu} - \\mathbf{x}}_2}$.\nSimilarly, the goal of $k$-medians is to minimize \n$$\\mathrm{cost}_1(\\boldsymbol{\\mu}^1, \\ldots, \\boldsymbol{\\mu}^k)=\\sum_{\\mathbf{x}\\in \\mathcal{X}} \\norm{\\mathbf{x}-c_1(\\mathbf{x})}_1,$$ where $c_1(\\mathbf{x})=\\argmin_{\\boldsymbol{\\mu} \\in \\{\\boldsymbol{\\mu}^1,\\ldots,\\boldsymbol{\\mu}^k\\}}{\\norm{\\boldsymbol{\\mu} - \\mathbf{x}}_1}$. \nAs it will be clear from context whether we are talking about $k$-medians or $k$-means, we abuse notation and write $\\mathrm{cost}$ and $c(\\mathbf{x})$ for brevity. We also fix the data set and use $opt$ to denote the optimal $k$-medians\/means clustering, where the optimal centers are the medians or means of the clusters, respectively; hence, $\\mathrm{cost}(opt)$ refers to the cost of the optimal $k$-medians\/means clustering. \n\n\n\\subsection{Clustering using threshold trees}\n\n\nPerhaps the simplest way to define two clusters is to use a \\emph{threshold cut}, which partitions the data based on a threshold for a single feature. More formally, the two clusters can be written as $\\widehat C^{\\theta,i}=(\\widehat C^1, \\widehat C^2)$, which is defined using a coordinate $i$ and a threshold $\\theta\\in\\mathbb{R}$ in the following way. For each input point $\\mathbf{x}\\in \\mathcal{X}$, we place $\\mathbf{x}=[x_1, \\ldots, x_d]$ in the first cluster $\\widehat C^1$ if $x_i\\leq\\theta$, and otherwise $\\mathbf{x} \\in \\widehat C^2$.\nA threshold cut can be used to explain $2$-means or $2$-medians clustering because a single feature and threshold determines the division of the data set into exactly two clusters. \n\n\nFor $k > 2$ clusters, we consider iteratively using threshold cuts as the basis for the cluster explanations. More precisely, \nwe construct a binary \\emph{threshold tree}. This tree is an unsupervised variant of a decision tree. Each internal node contains a single feature and threshold, which iteratively partitions the data, leading to clusters determined by the vectors that reach the leaves. 
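To make the induced clustering concrete, the following minimal Python sketch (the \texttt{Node} container and the function name are ours, purely for illustration) routes a point to a leaf and records the threshold conditions along the way, which serve as the explanation of its cluster assignment.
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[int] = None       # internal node: coordinate i
    threshold: Optional[float] = None   # internal node: threshold theta
    left: Optional["Node"] = None       # subtree with x[feature] <= threshold
    right: Optional["Node"] = None      # subtree with x[feature] > threshold
    cluster: Optional[int] = None       # leaf: cluster label

def assign(node, x, conditions=()):
    # Follow the threshold conditions from the root to a leaf; the recorded
    # conditions on the path explain why x lands in its cluster.
    if node.cluster is not None:
        return node.cluster, list(conditions)
    if x[node.feature] <= node.threshold:
        return assign(node.left, x,
                      conditions + (f"x[{node.feature}] <= {node.threshold}",))
    return assign(node.right, x,
                  conditions + (f"x[{node.feature}] > {node.threshold}",))
\end{verbatim}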
We focus on trees with exactly $k$ leaves, one for each cluster $\\{1,2,\\ldots, k\\}$, which also limits the depth and total number of features to at most $k-1$. \n\nWhen clustering using such a tree, it is easy to understand why $\\mathbf{x}$ was assigned to its cluster: we may simply inspect the threshold conditions on the root-to-leaf path for $\\mathbf{x}$. This also ensures the number of conditions for the cluster assignment is rather small, which is crucial for interpretability. These tree-based explanations are especially useful in high-dimensional space, when the number of clusters is much smaller than the input dimension ($k \\ll d$). \nMore formally, a threshold tree $T$ with $k$ leaves induces a $k$-clustering of the data. Denoting these clusters as $\\widehat{C}^j \\subseteq \\mathcal{X}$, we define the $k$-medians\/means cost of the tree as\n$$\n\\mathrm{cost}_1(T) = \\sum_{j=1}^k \\sum_{x \\in \\widehat{C}^j} \\|x - \\mbox{median}(\\widehat{C}^j) \\|_1\n\\qquad \\mbox{} \\qquad \n\\mathrm{cost}_2(T) = \\sum_{j=1}^k \\sum_{x \\in \\widehat{C}^j} \\|x - \\mbox{mean}(\\widehat{C}^j) \\|_2^2 \n$$\nOur goal is to understand when it is possible to efficiently produce a tree $T$ such that $\\mathrm{cost}(T)$ is not too large compared to the optimal $k$-medians\/means cost. Specifically, we say that an algorithm is an {\\em $a$-approximation}, if the cost is at most $a$ times the optimal cost, i.e., if the algorithm returns threshold tree $T$ then we have\n$\\mathrm{cost}(T) \\leq a\\cdot \\mathrm{cost}(opt),$ where $opt$ denotes the optimal $k$-medians\/means clustering.\n\n \n\n\n\\section{Lower bounds for two clusters}\\label{sec:k2_lower_bound}\n\n\nWithout loss of generality we can assume that $d\\geq 2.$ We use the following dataset for both $2$-medians and $2$-means. It consists of $2d$ points, partitioned into two clusters of size $d$, which are the points with Hamming distance exactly one from the vector with all 1 entries and the vector with all $-1$ entries: \n\\begin{center}\n\\begin{tabular}{c c}\n \\textbf{Optimal Cluster 1} & \\textbf{Optimal Cluster 2} \\\\\n $(0,-1,-1,-1\\ldots,-1)$ & $(0,1,1,1\\ldots,1)$ \\\\\n $(-1,0,-1,-1\\ldots,-1)$ & $(1,0,1,1\\ldots,1)$\\\\\n $(-1,-1,0,-1\\ldots,-1)$ & $(1,1,0,1\\ldots,1)$\\\\\n $\\vdots$ & $\\vdots$ \\\\\n $(-1,-1,-1,-1\\ldots,0)$ & $(1,1,1,1\\ldots,0)$\\\\\n\\end{tabular}\n\\end{center}\n\n\nLet $\\widehat C = (\\widehat C^1, \\widehat C^2)$ be the best threshold cut. \n\n\\paragraph{2-medians lower bound.}\nThe cost of the cluster with centers $(1,\\ldots,1)$ and $(-1,\\ldots,-1)$ is $2d$, as each point is responsible for a cost of $1.$ Thus, $\\mathrm{cost}(opt)\\leq 2d.$\n\nThere is a coordinate $i$ and a threshold $\\theta$ that defines the cut $\\widehat C$. For any coordinate $i$, there are only three possible values: $-1,0,1$. Thus $\\theta$ is either in $(-1,0)$ or in $(0,1)$. Without loss of generality, assume that $\\theta\\in(-1,0)$ and $i=1$. 
Thus, the cut is composed of two clusters: one of size $d-1$ and the other of size $d+1$, in the following way:\n\n\begin{center}\n\begin{tabular}{c c}\n $\mathbf{Cluster\ }\widehat C^1$ & $\mathbf{Cluster\ } \widehat C^2$ \\\n $(-1,0,-1,-1\ldots,-1)$ & $(1,0,1,1\ldots,1)$\\\n $(-1,-1,0,-1\ldots,-1)$ & $(1,1,0,1\ldots,1)$\\\n $\vdots$ & $\vdots$ \\\n $(-1,-1,-1,-1\ldots,0)$ & $(1,1,1,1\ldots,0)$\\\n & $(0,1,1,1\ldots,1)$\\\n & $(0,-1,-1,-1\ldots,-1)$\\\n\end{tabular}\n\end{center}\n\n\nUsing Fact~\ref{fact:1-median-optimal center}, an optimal center of the first cluster is all $-1$, and the optimal center for the second cluster is all $1$. The cost of the first cluster is $d-1$, as each point costs $1$. The cost of the second cluster is composed of two terms: a cost of $d$ for the $d$ points that contain a $1$ in at least one coordinate (each such point costs $1$), and a cost of $2(d-1)+1$ for the point $(0,-1,\ldots,-1)$. So the total cost of the cut is $(d-1)+d+\bigl(2(d-1)+1\bigr)=4d-2$. Thus $\mathrm{cost}(\widehat C)\geq (2-1\/d) \mathrm{cost}(opt).$\n\n\paragraph{2-means lower bound.}\nFocus on the clustering with centers $$(\nicefrac{(d-1)}{d},\ldots,\nicefrac{(d-1)}{d}) \qquad \mbox{and} \qquad (-\nicefrac{(d-1)}{d},\ldots,-\nicefrac{(d-1)}{d}).$$ \nThe cost of each point in the data is composed of (1) one coordinate with value zero, whose cost is $\left(\nicefrac{(d-1)}{d}\right)^2$, and (2) $d-1$ coordinates, each with cost $\nicefrac{1}{d^2}$. Thus, each point has a cost of $\nicefrac{(d-1)^2}{d^2}+\nicefrac{(d-1)}{d^2}$, and the total cost over all $2d$ points is $\frac{2(d-1)^2+2(d-1)}{d}=2(d-1)$. This implies that $\mathrm{cost}(opt)\leq 2(d-1).$\n\nAssume without loss of generality that $\widehat C$ is defined using coordinate $i=1$ and threshold $-0.5$. The resulting clusters $\widehat C^1$ and $\widehat C^2$ are as in the case of $2$-medians. The optimal centers are (see Fact~\ref{fact:1-means-optimal center}): \n$$\left(-1,-\frac{d-2}{d-1},\ldots,-\frac{d-2}{d-1}\right) \qquad \mbox{and} \qquad \left(\frac{d-1}{d+1},\frac{d-2}{d+1},\ldots,\frac{d-2}{d+1}\right).$$\nWe want to lower bound $\mathrm{cost}(\widehat C).$ We start with the cost of the first cluster, i.e., $\widehat C^1$. To do so, for each point in $\widehat C^1$ we evaluate the contribution of each coordinate to the cost: (1) the first coordinate adds $0$ to the cost; (2) the coordinate with value $0$ adds $\left(\frac{d-2}{d-1}\right)^2$ to the cost; (3) each of the remaining $d-2$ coordinates adds $\nicefrac{1}{(d-1)^2}$. Thus, each point in $\widehat C^1$ adds to the cost $\left(\frac{d-2}{d-1}\right)^2 + \frac{d-2}{(d-1)^2}=\frac{d-2}{d-1}$. Since $\widehat C^1$ contains $d-1$ points, its total cost is $d-2$.\n\nMoving on to the cost of $\widehat C^2$, the cost of the point $(0,-1,\ldots,-1)$ is composed of two terms: (1) the first coordinate adds $\left(\frac{d-1}{d+1}\right)^2$ to the cost; (2) each of the other $d-1$ coordinates adds $\left(1+\frac{d-2}{d+1}\right)^2$\nto the cost. 
Thus, this point adds $$\left(\frac{d-1}{d+1}\right)^2+(d-1)\left(1+\frac{d-2}{d+1}\right)^2=\frac{(d-1)d(4d-3)}{(d+1)^2}.$$\nSimilarly, the point $(0,1,\ldots,1)$ adds to the cost $$\left(\frac{d-1}{d+1}\right)^2+(d-1)\left(1-\frac{d-2}{d+1}\right)^2=\frac{(d-1)(d+8)}{(d+1)^2}.$$\nFinally, each of the $d-1$ remaining points in $\widehat C^2$ adds to the cost $$\left(1-\frac{d-1}{d+1}\right)^2+\left(\frac{d-2}{d+1}\right)^2+(d-2)\left(1-\frac{d-2}{d+1}\right)^2=\frac{d^2+5d-10}{(d+1)^2}.$$\nThus, the cost of $\widehat C^2$ is $$\frac{(d-1)(5d^2+3d-2)}{(d+1)^2}.$$\nSumming up the costs of $\widehat C^1$ and $\widehat C^2$, for $d\geq 2$ we have\n$$\mathrm{cost}(\widehat C)\geq (d-2)+\frac{(d-1)(5d^2+3d-2)}{(d+1)^2}\geq 6(d-1)\left(1-\frac1d\right)^2\geq 3\left(1-\frac{1}{d}\right)^2\cdot \mathrm{cost}(opt).$$\n\n\n\n\n\section{Upper Bound Proof for 2-Means}\label{sec:k2_upper_bound}\n\nWe show that there is a threshold cut $\widehat C$ with $2$-means cost satisfying\n$\mathrm{cost}(\widehat C)\leq 4 \cdot \mathrm{cost}(opt).$ We could use the same proof idea as in the $2$-medians case, first applying Lemma~\ref{lemma:tree-cost} and then the matching result, Lemma~\ref{lemma:matching}; however, this leads to a $6$-approximation instead of $4$, because it applies Claim~\ref{clm:cauchy_schwarz_k_means} twice, and this claim is not tight. Improving the approximation to $4$ requires us to apply Claim~\ref{clm:cauchy_schwarz_k_means} only once. \n\nSuppose $\boldsymbol{\mu}^1,\boldsymbol{\mu}^2$ are optimal $2$-means centers for the clusters $C^1$ and $C^2$. \nLet $t = \min(|C^1 \Delta \widehat C^1|, |C^1 \Delta \widehat C^2|)$ be the minimum number of changes for any threshold cut $\widehat C^1, \widehat C^2$, and define $\cX^{\mathsf{mis}}$ to be the set of $t$ points in the symmetric difference, where $\mathcal{X} = \cX^{\mathsf{cor}} \cup \cX^{\mathsf{mis}}$ and $\cX^{\mathsf{cor}} \cap \cX^{\mathsf{mis}} = \emptyset$.\n\nUsing the same argument as in the proof of Lemma~\ref{lemma:tree-cost}, we have\n\begin{eqnarray}\label{eq:2_means_upper_bound}\n\mathrm{cost}(\widehat{C}) &\leq& \sum_{j=1}^2 \sum_{\mathbf{x} \in \cX^{\mathsf{cor}} \cap \widehat C_j} \|\mathbf{x}- \boldsymbol{\mu}^j\|^2_2 +\n\sum_{j=1}^2 \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} \|\mathbf{x}- \boldsymbol{\mu}^j\|^2_2 \nonumber\n\\ &=& \sum_{\mathbf{x} \in \cX^{\mathsf{cor}}} \|\mathbf{x}- c(\mathbf{x})\|^2_2 +\n\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_1} \|\mathbf{x}- \boldsymbol{\mu}^1\|^2_2+\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_2} \|\mathbf{x}- \boldsymbol{\mu}^2\|^2_2 \nonumber\n\\ &\leq& \mathrm{cost}(opt) +\n\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_1} \|\mathbf{x}- \boldsymbol{\mu}^1\|^2_2+\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_2} \|\mathbf{x}- \boldsymbol{\mu}^2\|^2_2\n\end{eqnarray}\nThe goal now is to bound the latter two terms using $\mathrm{cost}(opt).$\nThese terms measure the distance of each $\mathbf{x} \in \cX^{\mathsf{mis}}$ from the ``other'' center, i.e., not $c(\mathbf{x})$. 
\n\n\\begin{claim}\\label{claim:aux-2-means}\n\\begin{eqnarray*}\n\t\\mathrm{cost}(opt)\n\t\\geq \\frac13 \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_1} \\|\\mathbf{x}- \\boldsymbol{\\mu}^1\\|^2_2+\\frac13\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_2} \\|\\mathbf{x}- \\boldsymbol{\\mu}^2\\|^2_2\n\\end{eqnarray*}\n\\end{claim}\n\nUsing Claim~\\ref{claim:aux-2-means},\ntogether with Inequality~(\\ref{eq:2_means_upper_bound}) we have\n$$\\mathrm{cost}(\\widehat{C})\\leq \\mathrm{cost}(opt) + 3\\cdot \\mathrm{cost}(opt)=4\\cdot \\mathrm{cost}(opt),$$ \nand this completes the proof. \n\n\n\\begin{proof}[Proof of Claim~\\ref{claim:aux-2-means}.]\nDenote the $t$ points in $\\cX^{\\mathsf{mis}}$ by $\\cX^{\\mathsf{mis}}=\\{\\mathbf{r}^1,\\ldots,\\mathbf{r}^t\\}.$ Assume that the first $\\ell$ points are in the first optimal cluster, $\\mathbf{r}^1,\\ldots,\\mathbf{r}^\\ell\\in C^1$, and the rest are in the second cluster, $\\mathbf{r}^{\\ell+1},\\ldots,\\mathbf{r}^t\\in C^2.$\n\nApplying Lemma~\\ref{lemma:matching} for each coordinate $i\\in[d]$ guarantees $t$ pairs of vectors $(\\mathbf{p}^1,\\mathbf{q}^1), \\ldots, (\\mathbf{p}^t,\\mathbf{q}^t)$ with the following properties. Each $p^j_i$ corresponds to the $i$th coordinate of some point in $C^1$ and $q^j_i$ corresponds to the $i$th coordinate of some point in $C^2$. Furthermore, for each coordinate, the $t$ pairs correspond to $2t$ distinct points in $\\mathcal{X}$.\nFinally, we can assume without loss of generality that\n$\\mu^1_i \\leq \\mu^2_i$ and $q^j_i \\leq p^j_i$. \n\nFor each point $\\mathbf{r}^j$ in the first $\\ell$ points in $\\cX^{\\mathsf{mis}}$, if $r^j_i\\geq p^j_i$ then we can replace $\\mathbf{p}^j$ with $\\mathbf{r}^j$, thus we can assume without loss of generality that $p_i^j\\geq r^j_i.$ We next show that $\\mathrm{cost}(opt)$ is lower bounded by a function of $t$. There will be two cases depending on whether $p^j_i\\leq \\mu^2_i$ or not. The harder case is the first where the improvement of the approximation from $6$ to $4$ arises. Instead of first bounding the distance between $\\mathbf{r}^j$ and its new center using the distance to its original center and then accounting for $\\norm{\\boldsymbol{\\mu}^1-\\boldsymbol{\\mu}^2}^2_2,$ we directly account for the distance between $\\mathbf{r}^j$ and its new center. 
\n\n\n\\paragraph{\\textbf{Case 1:}} if $p^j_i\\leq \\mu^2_i$, then \nClaim~\\ref{clm:cauchy_schwarz_k_means} implies that\n\\begin{eqnarray*} \n(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n&\\geq& \\frac{1}{3}(\\mu^2_i-q^j_i+p^j_i-\\mu_i^1+\\mu_i^1-r^j_i)^2\\\\\n&=& \\frac{1}{3}((\\mu^2_i-q^j_i)+(p^j_i-r^j_i))^2\n\\geq \\frac{1}{3}(\\mu^2_i-r^j_i)^2.\n\\end{eqnarray*} The last inequality follows from $q^j_i \\leq p^j_i$ and $r_i^j\\leq p_i^j,$ which imply that $(\\mu^2_i-q^j_i)+(p^j_i-r^j_i)\n\\geq \\mu^2_i-r^j_i\\geq 0,$ which means $((\\mu^2_i-q^j_i)+(p^j_i-r^j_i))^2\n\\geq (\\mu^2_i-r^j_i)^2.$ \n\\paragraph{\\textbf{Case 2:}} if $\\mu_i^2\\leq p_i^j$, then again Claim~\\ref{clm:cauchy_schwarz_k_means} implies that\n\\begin{eqnarray*} \n(p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2&\\geq& (\\mu_i^2-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2 \\geq \\frac12 (\\mu^2_i-\\mu^1_i+\\mu^1_i-r^j_i)^2=\\frac12(\\mu^2_i-r^j_i)^2,\n\\end{eqnarray*}\nwhere in the first inequality we use $(p^j_i-\\mu^1_i)^2\\geq(\\mu^2_i-\\mu^1_i)^2.$\n\nThe two cases imply that for $1\\leq j\\leq \\ell$ $$(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n\\geq \\frac{1}{3}(\\mu^2_i-r^j_i)^2\n.$$\nSimilarly for each point $\\mathbf{r}^j$ in the last $t-\\ell$ points in $\\cX^{\\mathsf{mis}}$, we have $$(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^2-r^j_i)^2\n\\geq \\frac{1}{3}(\\mu^1_i-r^j_i)^2\n.$$\nPutting these together we have \n\\begin{eqnarray*}\n\t\\mathrm{cost}(opt)&\\geq& \\sum_{i=1}^d\\sum_{j=1}^\\ell(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n\t+\n\\sum_{i=1}^d\\sum_{j=\\ell+1}^t(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^2-r^j_i)^2\\\\\n\t&\\geq& \\frac{1}{3}\\sum_{j=1}^\\ell\\sum_{i=1}^d (\\mu^2_i-r^j_i)^2\n\t+\\frac{1}{3}\\sum_{j=\\ell+1}^t\\sum_{i=1}^d (\\mu^1_i-r^j_i)^2 \n\t\\\\ \t&=&\\frac{1}{3}\\sum_{j=1}^\\ell\\norm{\\mathbf{r}^j-\\boldsymbol{\\mu}^2}_2^2+\\frac{1}{3}\\sum_{j=\\ell+1}^t\\norm{\\mathbf{r}^j-\\boldsymbol{\\mu}^1}_2^2\\\\\n&=&\t\\frac13 \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_1} \\|\\mathbf{x}- \\boldsymbol{\\mu}^1\\|^2_2+\\frac13\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_2} \\|\\mathbf{x}- \\boldsymbol{\\mu}^2\\|^2_2\n\\end{eqnarray*}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Efficient Implementation via Dynamic Programming for $k=2$}\\label{sec:DP_Implementation}\n\n\\subsection{The 2-means case}\nThe psudo-code for finding the best threshold for $k=2$ depicted in Algorithm~\\ref{algo:cut}.\n\n\\begin{figure}[!htb]\n \\centering\n \\begin{minipage}{.6\\linewidth}\n \\begin{algorithm}[H]\n \t\\SetKwFunction{$2$-means Optimal Threshold}{$2$-means Optimal Threshold}\n \t\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n \t\\LinesNumbered\n \t\\Input{%\n \t\t{$\\mathbf{x}^1, \\ldots, \\mathbf{x}^n$} -- vectors in $\\mathbb{R}^d$\n \t}\n \t\\Output{%\n \t\t\\xvbox{2mm}{$i$} -- Coordinate\\\\\n \t\t\\xvbox{2mm}{$\\theta$} -- Threshold\n \t}\n \t\\BlankLine\n \t\\setcounter{AlgoLine}{0}\n \t\n $\\xvar{best\\_cost} \\leftarrow \\infty$\\;\n \t\n \t$\\xvar{best\\_coordinate} \\leftarrow \\textsc{null}$\\;\n \t\n \t$\\xvar{best\\_threshold} \\leftarrow \\textsc{null}$\\;\n \t\n \t$u \\leftarrow \\sum_{j=1}^n \\norm{\\mathbf{x}^j}^2_2$\\;\n \t\n \t\\ForEach {$i \\in [1, \\ldots, d]$}\n \t{\n \t\t$\\mathbf{s} \\leftarrow \\xfunc{zeros}(d)$\\;\n \t\t\n \t\t$\\mathbf{r} \\leftarrow \\sum_{j=1}^n \\mathbf{x}^j$\\;\n \t\t\n \t\t$\\mathcal{X} \\leftarrow \\xfunc{sorted}(\\mathbf{x}^1, \\ldots, \\mathbf{x}^n 
\\text{ by coordinate }i)$\\;\n \t\t\n \t\t\\ForEach {$\\mathbf{x}^j \\in \\mathcal{X}$} \n \t\t{ \n \t\t\t$\\mathbf{s} \\leftarrow \\mathbf{s} + \\mathbf{x}^j$\\;\n \t\t\t\n \t\t\t$\\mathbf{r} \\leftarrow \\mathbf{r} - \\mathbf{x}^j$\\;\n \t\t\t\n \t\t\t$\\xvar{cost} \\leftarrow u - \\frac{1}{j}\\norm{\\mathbf{s}}^2_2- \\frac{1}{n - j}\\norm{\\mathbf{r}}^2_2$\\;\n \t\t\t\n \t\t\t\\If {$\\xvar{cost} < \\xvar{best\\_cost}$ and $x^j_i \\neq x^{j+1}_i$}\n \t\t\t{\n\n \t\t\t\t$\\xvar{best\\_cost} \\leftarrow \\xvar{cost}$\\;\n \t\t\t\t\n \t\t\t\t$\\xvar{best\\_coordinate} \\leftarrow i$\\;\n \t\t\t\t\n \t\t\t\t$\\xvar{best\\_threshold} \\leftarrow x^j_i$\\;\n \t\t\t}\n \t\t}\n \t}\n \t\\Return $\\xvar{best\\_coordinate}, \\xvar{best\\_threshold}$\\;\n \t\\caption{\\textsc{Optimal Threshold for $2$-means}}\n \t\\label{algo:cut}\n \\end{algorithm}\n \\end{minipage}\n\\end{figure}\n\n\nIn time $O(d)$ we can calculate $\\mathrm{cost}(p+1)$ and the new centers by using the value $\\mathrm{cost}(p)$ and the previous centers. Throughout the computation we save in memory \n\\begin{enumerate}\n\t\\item Two vectors $\\mathbf{s}^{p}=\\sum_{j=1}^p \\mathbf{x}^j$ and $\\mathbf{r}^{p}=\\sum_{j=p + 1}^n \\mathbf{x}^j$.\n\t\\item Scalar $u=\\sum_{j=1}^n \\norm{\\mathbf{x}^j}_2^2$\n\\end{enumerate}\nWe also make use of the identity:\n\\begin{eqnarray*}\n\t\\mathrm{cost}(p)\n\t&=& u - \\frac{1}{p}\\norm{\\mathbf{s}^{p}}_2^2 - \\frac{1}{n-p}\\norm{\\mathbf{r}^{p}}_2^2.\n\\end{eqnarray*}\nThis identity is correct because \n\\begin{eqnarray*}\n \\mathrm{cost}(p) &=& \\sum_{j=1}^{p}\\norm{\\mathbf{x}^j-\\boldsymbol{\\mu}^1(p)}^2_2 + \\sum_{j=p+1}^{n}\\norm{\\mathbf{x}^j-\\boldsymbol{\\mu}^2(p)}^2_2\\\\\n &=& \\sum_{j=1}^{p} \\norm{\\mathbf{x}^j}^2_2 - 2\\sum_{j=1}^{p}\\inner{\\mathbf{x}^j}{\\boldsymbol{\\mu}^1(p)} + \\sum_{j=1}^{p}\\norm{\\boldsymbol{\\mu}^1(p)}^2_2 + \\\\\n && \\sum_{j=p+1}^{n} \\norm{\\mathbf{x}^j}^2_2 - 2\\sum_{j=p+1}^{n}\\inner{\\mathbf{x}^j}{\\boldsymbol{\\mu}^2(p)} + \\sum_{j=p+1}^{n}\\norm{\\boldsymbol{\\mu}^2(p)}^2_2\\\\\n &=& \\sum_{j=1}^{n} \\norm{\\mathbf{x}^j}^2_2 - 2\\inner{\\sum_{j=1}^{p}\\mathbf{x}^j}{\\boldsymbol{\\mu}^1(p)} + \\frac{1}{p}\\norm{\\sum_{j=1}^{p}\\mathbf{x}^j}^2_2 - \\\\\n && 2\\inner{\\sum_{j=p+1}^{n}\\mathbf{x}^j}{\\boldsymbol{\\mu}^2(p)} + \\frac{1}{n-p}\\norm{\\sum_{j=p+1}^{n}\\mathbf{x}^j}^2_2\\\\\n &=& \\sum_{j=1}^{n} \\norm{\\mathbf{x}^j}^2_2 - \\frac{2}{p}\\inner{\\mathbf{s}^{p}}{\\mathbf{s}^{p}}+\\frac{1}{p}\\norm{\\mathbf{s}^{p}}^2_2 - \\frac{2}{n-p}\\inner{\\mathbf{r}^{p}}{\\mathbf{r}^{p}}+\\frac{1}{n-p}\\norm{\\mathbf{r}^{p}}^2_2\\\\\n &=& u - \\frac{1}{p}\\norm{\\mathbf{s}^{p}}^2_2 - \\frac{1}{n-p}\\norm{\\mathbf{r}^{p}}^2_2\n\\end{eqnarray*}\n\n\n By invoking this identity, we can quickly compute the cost of placing the first $p$ points in cluster one and the last $n-p$ points in cluster two. Each such partition can be achieved by using a threshold $\\theta$ between $x_i^p$ and $x_i^{p+1}$. Our algorithm computes these costs for each feature $i \\in[d]$. Then, we output the feature $i$ and threshold $\\theta$ that minimizes the cost. This guarantees that we find the best possible threshold cut.\n\n Overall, Algorithm \\ref{algo:cut} iterates over the $d$ features, and for each feature it sorts the $n$ vectors according to their values in the current feature. 
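A compact Python rendering of this per-feature scan, using the running sums and the identity above, is given below (a sketch of Algorithm~\ref{algo:cut} with illustrative variable names, not an optimized implementation).
\begin{verbatim}
import numpy as np

def best_threshold_2means(X):
    # cost(p) = u - ||s||^2 / p - ||r||^2 / (n - p), maintained incrementally
    n, d = X.shape
    u = np.sum(X ** 2)
    best_cost, best_i, best_theta = np.inf, None, None
    for i in range(d):
        Xs = X[np.argsort(X[:, i])]       # sort by coordinate i
        s = np.zeros(d)                   # sum of the first p points
        r = Xs.sum(axis=0)                # sum of the remaining n - p points
        for p in range(1, n):
            s += Xs[p - 1]
            r -= Xs[p - 1]
            if Xs[p - 1, i] == Xs[p, i]:  # no valid threshold between equal values
                continue
            cost = u - s @ s / p - r @ r / (n - p)
            if cost < best_cost:
                best_cost, best_i, best_theta = cost, i, Xs[p - 1, i]
    return best_i, best_theta
\end{verbatim}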
In each such pass, the algorithm iterates over the $n$ vectors and, for each potential threshold, calculates the cost by evaluating the inner product of two $d$-dimensional vectors.\n Overall, its runtime complexity is $O\left(nd^2 + nd\log n\right)$.\n \n \subsection{The 2-medians case}\n The high-level idea for finding an optimal 2-medians cut is similar to the 2-means algorithm. The algorithm goes over all possible thresholds. For each threshold, it finds the optimal centers and calculates the cost accordingly. Then, it outputs the threshold cut that minimizes the $2$-medians cost.\n \n \begin{figure}[!htb]\n \centering\n \begin{minipage}{.6\linewidth}\n \begin{algorithm}[H]\n \t\SetKwFunction{$2$-medians Optimal Threshold}{$2$-medians Optimal Threshold}\n \t\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\n \t\LinesNumbered\n \t\Input{%\n \t\t{$\mathbf{x}^1, \ldots, \mathbf{x}^n$} -- vectors in $\mathbb{R}^d$\n \t}\n \t\Output{%\n \t\t\xvbox{2mm}{$i$} -- Coordinate\\\n \t\t\xvbox{2mm}{$\theta$} -- Threshold\n \t}\n \t\BlankLine\n \t\setcounter{AlgoLine}{0}\n \t\n \t$\xvar{best\_cost} \leftarrow \infty$\;\n \t\n \t$\xvar{best\_coordinate} \leftarrow \textsc{null}$\;\n \t\n \t$\xvar{best\_threshold} \leftarrow \textsc{null}$\;\n \t\n \t\ForEach {$i \in [1, \ldots, d]$}\n \t{\n \t\n \t $\boldsymbol{\mu}^2(0) \leftarrow \xfunc{median}(\mathbf{x}^1, \ldots \mathbf{x}^n)$\;\n \t\n \t $\xvar{cost} \leftarrow \sum_{j=1}^n \norm{\mathbf{x}^j - \boldsymbol{\mu}^2(0)}_1$\;\n \t\n \t $\mathcal{X} \leftarrow \xfunc{sorted}(\mathbf{x}^1, \ldots, \mathbf{x}^n \text{ by coordinate }i)$\;\n \t\n \t\t\ForEach {$j \in [1, \ldots, n - 1]$} \n \t\t{ \n\t \t\t$\boldsymbol{\mu}^1(j) \leftarrow \xfunc{median}(\mathbf{x}^1, \ldots \mathbf{x}^j)$\;\n \t\t\n\t \t\t$\boldsymbol{\mu}^2(j) \leftarrow \xfunc{median}(\mathbf{x}^{j+1}, \ldots \mathbf{x}^n)$\;\n\t \t\t\n \t\t\t$\xvar{cost} \leftarrow \xvar{cost} + \norm{\mathbf{x}^j - \boldsymbol{\mu}^1(j)}_1 - \norm{\mathbf{x}^j - \boldsymbol{\mu}^2(j - 1)}_1$\;\n \t\t\t\n \t\t\t\If {$\xvar{cost} < \xvar{best\_cost}$ and $x^j_i \neq x^{j+1}_i$}\n \t\t\t{\n \t\t\t\t\n \t\t\t\t$\xvar{best\_cost} \leftarrow \xvar{cost}$\;\n \t\t\t\t\n \t\t\t\t$\xvar{best\_coordinate} \leftarrow i$\;\n \t\t\t\t\n \t\t\t\t$\xvar{best\_threshold} \leftarrow x^j_i$\;\n \t\t\t}\n \t\t}\n \t}\n \t\Return $\xvar{best\_coordinate}, \xvar{best\_threshold}$\;\n \t\caption{\textsc{Optimal Threshold for $2$-medians}}\n \t\label{algo:2-medians-cut}\n \end{algorithm}\n \end{minipage}\n\end{figure}\n \n \paragraph{Updating cost.} \n To update the cost, we need to show how to express $\mathrm{cost}(p+1)$ in terms of $\mathrm{cost}(p).$ We know that $\mathrm{cost}(p+1)$ is equal to \n $$\mathrm{cost}(p+1)=\sum_{\mathbf{x}\in C_1}\norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1+\sum_{\mathbf{x}\in C_2}\norm{\mathbf{x}-\boldsymbol{\mu}^2(p+1)}_1.$$ \n For every feature $i \in[d]$, there are $n-1$ thresholds to consider. After sorting by this feature, we can consider all splits into $C_1$ and $C_2$, where $C_1$ contains the $p$ smallest points, and $C_2$ contains the $n-p$ largest points. We increase $p$ from $p=1$ to $p=n-1$, computing the clusters and cost at each step. If $p$ is odd, then the median of $C_1$ (i.e., the optimal center of $C_1$) does not change compared to $p-1$. The only change in the cost comes from the point~$\mathbf{x}$ that moved from $C_2$ to $C_1$. 
If $p$ is even, then at each coordinate there are two cases, depending on whether the median changes or not. If it changes, then let $\Delta$ denote the change in the cost of the points in $C_1$ that are smaller than the median. By symmetry, the change in the cost of the points that are larger is $-\Delta$. Thus, the change in the cost coming from points below the median is exactly balanced by the change coming from points above it. Similar reasoning holds for the other cluster $C_2.$ Therefore, we conclude that moving $\mathbf{x}$ from $C_2$ to $C_1$ changes the cost by exactly $\norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1 - \norm{\mathbf{x}-\boldsymbol{\mu}^2(p)}_1$. Thus, we have the following connection between $\mathrm{cost}(p+1)$ and $\mathrm{cost}(p)$:\n $$\mathrm{cost}(p+1)=\mathrm{cost}(p) + \norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1 - \norm{\mathbf{x}-\boldsymbol{\mu}^2(p)}_1.$$\n \n \paragraph{Updating centers.} For each $p$, the cost update relies on efficient calculations of the centers $\boldsymbol{\mu}^1(p)$ and $\boldsymbol{\mu}^2(p + 1)$. The centers $\boldsymbol{\mu}^1(p), \boldsymbol{\mu}^2(p)$ are the medians of the clusters at the $p$th threshold. Note that moving from the $p$th threshold to the $(p+1)$th will only change the clusters by moving one vector from one cluster to the other. \n We can determine the changes efficiently by using $d$ arrays, one for each coordinate. Each array will contain (pointers to) the input vectors $\mathcal{X}$ sorted by their $i$th feature value. As we move the threshold along a single coordinate, we can read off the partition into two clusters, and we can compute the median of each cluster by considering the midpoint in the sorted list. \n \n Overall, this procedure computes the cost of each threshold, while also determining the partition into two clusters and their centers (medians). The time is $O(nd \log n)$ to sort by each feature, and $O(nd^2)$ to compute $\mathrm{cost}(p)$ for each $p \in [n]$ and each feature. Therefore, the total time for the $2$-medians algorithm is \n $O(nd^2 + nd \log n).$\n \n \n \n \n\n\subsection{Related work}\n\label{sec:related}\n\nThe majority of work on explainable methods considers supervised learning, and in particular, explaining predictions of neural networks and other trained models~\cite{\nalvarez2019weight, deutch2019constraints, garreau2020explaining, kauffmann2019clustering,\nlipton2018mythos,lundberg2017unified, molnar2019, murdoch2019interpretable, ribeiro2016should, ribeiro2018anchors, rudin2019stop, sokol2020limetree}. In contrast, there is much less work on explainable unsupervised learning. Standard algorithms for $k$-medians\/means use iterative procedures to produce a good approximate clustering, but this leads to complicated clusters that depend on subtle properties of the data set~\cite{aggarwal09,arthur2007k, kanungo02, ostrovsky2013effectiveness}. Several papers consider the use of decision trees for explainable clustering~\cite{bertsimas2018interpretable, fraiman2013interpretable,geurts2007inferring,ghattas2017clustering, liu2005clustering}. However, all prior work on this topic is empirical, without any theoretical analysis of quality compared to the optimal clustering. We also remark that the previous results on tree-based clustering have not considered the $k$-medians\/means objectives for evaluating the quality of the clustering, which is the focus of our work. 
It is NP-hard to find the optimal $k$-means clustering~\\cite{aloise2009np, dasgupta2008hardness} or even a very close approximation~\\cite{awasthi2015hardness}. In other words, we expect tree-based clustering algorithms to incur an approximation factor bounded away from one compared to the optimal clustering.\n\nOne way to cluster based on few features is to use dimensionality reduction.\nTwo main types of dimensionality reduction methods have been investigated for $k$-medians\/means. Work on {\\em feature selection} shows that it is possible to cluster based on $\\Theta(k)$ features and obtain a constant factor approximation for $k$-means\/medians~\\cite{boutsidis2009unsupervised, cohen2015dimensionality}. However, after selecting the features, these methods employ existing approximation algorithms to find a good clustering, and hence, the cluster assignments are not explainable. Work on {\\em feature extraction} shows that it is possible to use the Johnson-Lindenstrauss transform to $\\Theta(\\log k)$ dimensions, while preserving the clustering cost~\\cite{becchetti2019oblivious, makarychev2019performance}. Again, this relies on running a $k$-means\/medians algorithm after projecting to the low dimensional subspace. The resulting clusters are not explainable, and moreover, the features are arbitrary linear combinations of the original features.\n\nBesides explainability, many other clustering variants have received recent attention, such as fair clustering~\\cite{ahmadian2020fair, backurs2019scalable, bera2019fair,chiplunkar2020solve, huang2019coresets,kleindessner2019fair, mahabadi2020individual, schmidt2019fair}, online clustering~\\cite{bhaskara20a, cohen2019online,hess2019sequential, liberty2016algorithm, moshkovitz2019unexpected}, and the use of same-cluster queries~\\cite{ailon2018approximate, ashtiani2016clustering, huleihel2019same, mazumdar2017clustering}. An interesting avenue for future work would be to further develop tree-based clustering methods by additionally incorporating some of these other constraints or objectives.\n\n\\section{Threshold trees with $k > 2$ leaves}\n\\label{sec:k-means}\n\nWe provide an efficient algorithm to produce a threshold tree with $k$ leaves that constitutes an approximate $k$-medians or $k$-means clustering of a data set $\\mathcal{X}$. Our algorithm, Iterative Mistake Minimization (IMM), starts with a reference set of cluster centers, for instance from a polynomial-time constant-factor approximation algorithm for $k$-medians or $k$-means~\\cite{aggarwal09}, or from a domain-specific clustering heuristic. \n\nWe then begin the process of finding an explainable approximation to this reference clustering, in the form of a threshold tree with $k$ leaves, whose internal splits are based on single features. The way we do this is almost identical for $k$-medians and $k$-means, and the analysis is also nearly the same. Our algorithm is deterministic and its run time is only $O(kdn \\log n)$, after finding the initial centers.\n \n \nAs discussed in Section~\\ref{sec:motivating-examples}, existing decision tree algorithms use greedy criteria that are not suitable for our tree-building process. 
However, we show that an alternative greedy criterion---minimizing the number of {\\em mistakes} at each split (the number of points separated from their corresponding cluster center)---leads to a favorable approximation ratio to the optimal $k$-medians or $k$-means cost.\n\n\n\\subsection{Our algorithm}\n\\label{sec:k-means-alg}\n\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\\centering\n\\begin{minipage}{.5\\textwidth}\n\\begin{algorithm}[H]\n\\SetKwFunction{Iterative Mistake Minimization}{Iterative Mistakes Minimization}\n\\SetKwFunction{BuildTree}{BuildTree}\n\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\\SetKwInOut{Preprocess}{Preprocess}\n\\Input{%\n\t$\\mathbf{x}^1, \\ldots, \\mathbf{x}^n$ -- vectors in $\\mathbb{R}^d$\\\\\n\t$k$ -- number of clusters\\\\\n }\n\\Output{%\n root of the threshold tree\n}\n\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\n$\\boldsymbol{\\mu}^1, \\ldots \\boldsymbol{\\mu}^k \\leftarrow \\xfunc{k-Means}(\\mathbf{x}^1, \\ldots, \\mathbf{x}^n, k)$\\;\n\n\\ForEach {$j \\in [1, \\ldots, n]$}\n{\n $y^j \\leftarrow \\argmin_{1 \\leq \\ell \\leq k} \\lVert \\mathbf{x}^j - \\boldsymbol{\\mu}^\\ell \\rVert$\\;\n}\n\n\\Return $\\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j=1}^n, \\{y^j\\}_{j=1}^n, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n\\SetKwProg{buildtree}{$\\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j=1}^m, \\{y^j\\}_{j=1}^m, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$:}{}{}\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\\buildtree{}{\n\\If{$\\{y^j\\}_{j=1}^m$ \\text{is homogeneous}}\n{\n $\\xvar{leaf}.cluster \\leftarrow y^1$\\;\n \n \\Return $\\xvar{leaf}$\\;\n}\n\\ForEach {$i \\in [1, \\ldots, d]$}\n{\n $\\ell_i \\leftarrow \\min_{1 \\leq j \\leq m} \\mu^{y^j}_i$\\;\n \n $r_i \\leftarrow \\max_{1 \\leq j \\leq m} \\mu^{y^j}_i$\\;\n}\n$i, \\theta \\leftarrow \\argmin_{i,\\ell_i \\leq \\theta < r_i} \\sum_{j=1}^m \\xfunc{mistake}(\\mathbf{x}^j, \\boldsymbol{\\mu}^{y^j}, i, \\theta)$\\;\\label{ln:k_dynamic}\n\n$\\xvar{M} \\leftarrow \\{j \\mid \\xfunc{mistake}(\\mathbf{x}^j, \\boldsymbol{\\mu}^{y^j}, i, \\theta) = 1\\}_{j=1}^m$\\; \n\n$\\xvar{L} \\leftarrow \\{j \\mid (x^j_i \\leq \\theta) \\wedge (j \\not \\in \\xvar{M})\\}_{j=1}^m$\\;\n\n$\\xvar{R} \\leftarrow \\{j \\mid (x^j_i > \\theta) \\wedge (j \\not \\in \\xvar{M})\\}_{j=1}^m$\\;\n\n$\\xvar{node}.condition \\leftarrow ``x_i \\leq \\theta\"$\\;\n\n$\\xvar{node}.lt \\leftarrow \\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j \\in \\xvar{L}}, \\{y^j\\}_{j \\in \\xvar{L}}, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n$\\xvar{node}.rt \\leftarrow \\xfunc{build\\_tree}(\\{\\mathbf{x}^j\\}_{j \\in \\xvar{R}}, \\{y^j\\}_{j \\in \\xvar{R}}, \\{\\boldsymbol{\\mu}^j\\}_{j=1}^k)$\\;\n\n\n\\Return $\\xvar{node}$\\;\n}\n\n\n\\SetKwProg{mistake}{$\\xfunc{mistake}(\\mathbf{x}, \\boldsymbol{\\mu}, i, \\theta)$:}{}{}\n\\LinesNumbered\n\\setcounter{AlgoLine}{0}\n\\BlankLine\n\\mistake{}{\n\\Return $(x_i \\leq \\theta) \\neq (\\mu_i \\leq \\theta)$ ? $1$ : $0$\\;\n}\n\\caption{\\textsc{\\newline Iterative Mistake Minimization}}\n\\label{algo:imm}\n\\end{algorithm}\n\\end{minipage} \\vspace{-2ex}\n\\end{wrapfigure}\n\nAlgorithm~\\ref{algo:imm} takes as input a data set $\\mathcal{X} \\subseteq \\mathbb{R}^d$. The first step is to obtain a reference set of $k$ centers $\\{\\boldsymbol{\\mu}^1, \\ldots, \\boldsymbol{\\mu}^k\\}$, for instance from a standard clustering algorithm. We assign each data point $\\mathbf{x}^j$ the label $y^j$ of its closest center. 
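As a concrete companion to the pseudocode, the split-selection step (the minimization on line~\ref{ln:k_dynamic} of Algorithm~\ref{algo:imm}) can be sketched in Python as follows. This is our illustration only: it assumes the points and labels are restricted to the current cell, and it counts mistakes by brute force rather than with the dynamic program described below.
\begin{verbatim}
import numpy as np

def mistakes(X, centers, labels, i, theta):
    # A point is a mistake if the cut x_i <= theta separates it from its center.
    return int(np.sum((X[:, i] <= theta) != (centers[labels, i] <= theta)))

def best_split(X, centers, labels):
    # Try every coordinate and every candidate threshold that sends at least
    # one center to each side; return the split with the fewest mistakes.
    best = (np.inf, None, None)
    live = np.unique(labels)
    for i in range(X.shape[1]):
        lo, hi = centers[live, i].min(), centers[live, i].max()
        cands = np.unique(np.concatenate([X[:, i], centers[live, i]]))
        for theta in cands[(cands >= lo) & (cands < hi)]:
            m = mistakes(X, centers, labels, i, theta)
            if m < best[0]:
                best = (m, i, theta)
    return best   # (number of mistakes, coordinate, threshold)
\end{verbatim}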
Then, the {\\tt{build\\_tree}} procedure looks for a tree-induced clustering that fits these labels. The tree is built top-down, using binary splits. Each node $u$ can be associated with the portion of the input space that passes through that node, a hyper-rectangular region $\\mbox{cell}(u) \\subseteq \\mathbb{R}^d$. If this cell contains two or more of the centers $\\boldsymbol{\\mu}^j$, then it needs to be split. We do so by picking the feature $i \\in [d]$ and threshold value $\\theta \\in \\mathbb{R}$ such that the resulting split $x_i \\leq \\theta$ sends at least one center to each side and moreover produces the fewest {\\em mistakes}: that is, separates the fewest points in $\\mathcal{X} \\cap \\mbox{cell}(u)$ from their corresponding centers in $\\{\\boldsymbol{\\mu}^j: 1 \\leq j \\leq k\\} \\cap \\mbox{cell}(u)$. We do not count points whose centers lie outside $\\mbox{cell}(u)$, since they are associated with mistakes in earlier splits. We find the optimal split $(i, \\theta)$ by searching over all pairs efficiently using dynamic programming. We then add this node to the tree, and discard the mistakes (the points that got split from their centers) before recursing on the left and right children. We terminate at a leaf node whenever all points have the same label (i.e., a {\\em homogeneous} subset). As there are $k$ different labels, the resulting tree has exactly $k$ leaves.\nFigure \\ref{fig:imm_example} depicts the operation of Algorithm~\\ref{algo:imm}. \n\nWe first discuss the running time, and we analyze the approximation guarantees of IMM in Section~\\ref{sec:imm-approx-main}.\n\n\n\n\n\\smallskip \\noindent {\\bf Time analysis of tree building.}\nWe sketch how to execute the algorithm in time $O(kdn\\log n)$ for an $n$-point data set. At each step of the top-down procedure, we find a coordinate and threshold pair that minimizes the mistakes at this node (line \\ref{ln:k_dynamic} in \\texttt{build\\_tree} procedure). We use dynamic programming to avoid recomputing the cost from scratch for each potential threshold. For each coordinate $i \\in [d]$, we sort the data and centers. Then, we iterate over possible thresholds. We claim that we can process each node in time $O(dn\\log n)$ because each point will affect the number of mistakes at most twice. Indeed, when the threshold moves, either a data point or a center moves to the other side of the threshold. Since we know the number of mistakes from the previous threshold, we count the new mistakes efficiently as follows. If a single point switches sides, then the number of mistakes changes by at most one. If a center switches sides, which happens at most once, then we update the mistakes for this center. Overall, each point affects the mistakes at most twice (once when changing sides, and once when its center switches sides). Thus, the running time for each internal node is $O(dn\\log n)$. As the tree has $k-1$ internal nodes, the total time is $O(kdn \\log n)$.\n\n\n\\input{imm-pic-steps}\n\n\\newpage\n\\subsection{Approximation guarantee for the IMM algorithm}\n\\label{sec:imm-approx-main}\n\nOur main theoretical contribution is the following result.\n\n\\begin{theorem}\\label{thm:main-k}\\label{thm:main-k-appendix}\n\tSuppose that IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ of depth $H$. 
Then, \n\t\\begin{enumerate}\n\t\t\\item The $k$-medians cost is at most $$\\mathrm{cost}(T)\\leq (2H+1) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$$\n\t\t\\item The $k$-means cost is at most $$\\mathrm{cost}(T)\\leq (8Hk+2) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$$\n\t\\end{enumerate}\n\tIn particular, IMM achieves worst case approximation factors of $O(k)$ and $O(k^2)$ by using any $O(1)$ approximation algorithm (compared to the optimal $k$-medians\/means) to generate the initial centers.\n\\end{theorem}\n\nWe state the theorem in terms of the depth of the tree to highlight that the approximation guarantee may depend on the structure of the input data. If the optimal clusters can be easily identified by a small number of salient features, then the tree may have depth $O(\\log k)$. \nWe later provide a lower bound showing that an $\\Omega(\\log k)$ approximation factor is necessary for $k$-medians and $k$-means (Theorem~\\ref{thm:lb-k}). For this data set, our algorithm produces a threshold tree with depth $O(\\log k)$, and therefore, the analysis is tight for $k$-medians. We leave it as an intriguing open question whether the bound can be improved for $k$-means.\n\n\\subsubsection{Proof Overview for Theorem~\\ref{thm:main-k}}\n\nThe proof proceeds in three main steps. First, we rewrite the cost of IMM in terms of the minimum number of mistakes made between the output clustering and the clustering based on the given centers. Second, we provide a lemma that relates the cost of any clustering to the number of mistakes required by a threshold clustering. Finally, we put these two together to show that the output cost is at most an $O(H)$ factor larger than the $k$-medians cost and at most an $O(Hk)$ factor larger than the $k$-means cost, respectively, where $H$ is the depth of the IMM tree, and the cost is relative to $\\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k)$. \n\nThe approximation bound rests upon a characterization of the excess clustering cost induced by the tree. For any internal node $u$ of the final tree $T$, let $\\mbox{cell}(u) \\subseteq \\mathbb{R}^d$ denote the region of the input space that ends up in that node, and let $B(u)$ be the bounding box of the centers that lie in this node, or more precisely, $B(u) = \\{\\boldsymbol{\\mu}^j: 1 \\leq j \\leq k\\} \\cap \\mbox{cell}(u)$. We will be interested in the diameter of this bounding box, measured either by $\\ell_1$ or squared $\\ell_2$ norm, and denoted by $\\mathrm{diam}_1(B(u))$ and $\\mathrm{diam}_2^2(B(u))$, respectively.\n\n\n\\paragraph{Upper bounding the cost of the tree.} The first technical claim (Lemma~\\ref{lemma:tree-cost}) will show that if IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ that incurs $t_u$ mistakes at node $u \\in T$, then \n\\begin{itemize}\n\\item The $k$-medians cost of $T$ satisfies $\\displaystyle \\mathrm{cost}(T)\\leq \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\mathrm{diam}_1(B(u)) $\n\\item The $k$-means cost of $T$ satisfies $\\displaystyle \\mathrm{cost}(T)\\leq 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\mathrm{diam}_2^2(B(u))$\n\\end{itemize}\n\nBriefly, any point $\\mathbf{x}$ that ends up in a different leaf from its correct center $\\boldsymbol{\\mu}^j$ incurs some extra cost. 
To bound this, consider the internal node $u$ at which $\\mathbf{x}$ is separated from $\\boldsymbol{\\mu}^j$. Node $u$ also contains the center $\\boldsymbol{\\mu}^i$ that ultimately ends up in the same leaf as $\\mathbf{x}$. For $k$-medians, the excess cost for $\\mathbf{x}$ can then be bounded by $\\|\\boldsymbol{\\mu}^i - \\boldsymbol{\\mu}^j\\|_1 \\leq \\mathrm{diam}_1(B(u))$. The argument for $k$-means is similar.\n\nThese $\\sum_u t_u \\mathrm{diam}(B(u))$ terms can in turn be bounded in terms of the cost of the reference clustering. \n\n\\paragraph{Lower bounding the reference cost.} \nWe next need to relate the cost of the centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ to the number of mistakes and the diameter of the cells in the tree. Lemma~\\ref{aux-claim:general_k_upper_bound} will show that if IMM makes $t_u$ mistakes at node $u \\in T$, then\n\\begin{itemize}\n\\item The $k$-medians cost satisfies \n$\\displaystyle \\sum_{u \\in T} t_u \\cdot \\mathrm{diam}_1(B(u)) \\leq 2H \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$\n\\item The $k$-means cost satisfies \n$\\displaystyle \\sum_{u \\in T} t_u \\cdot \\mathrm{diam}_2^2(B(u)) \\leq 4Hk \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$\n\\end{itemize}\n\nThe proof for this is significantly more complicated than the upper bound mentioned above. Moreover, it contains the main new techniques in our analysis of tree-based clusterings. \n\nThe core challenge is that we aim to lower bound the cost of the given centers using only information about the number of mistakes at each internal node. Moreover, the IMM algorithm only minimizes the {\\em number} of mistakes, and not the {\\em cost} of each mistake. Therefore, we must show that if every axis-aligned cut in $B(u)$ separates at least $t_u$ points~$\\mathbf{x}$ from their centers, then there must be a considerable distance between the points in $\\mbox{cell}(u)$ and their centers.\n\nTo prove this, we analyze the structure of points in each cell. Specifically, we consider the single-coordinate projection of points in the box $B(u)$, and we order the centers in $B(u)$ from smallest to largest for the analysis. If there are $k'$ centers in node $u$, we consider the partition of $B(u)$ into $2(k'-1)$ disjoint segments, splitting at the centers and at the midpoints between consecutive centers. Since $t_u$ is the minimum number of mistakes, we must in particular have at least $t_u$ mistakes from the threshold cut at each midpoint. We argue that each of these segments is covered at least $t_u$ times by a certain set of intervals. Specifically, we consider the intervals between mistake points and their true centers, and we say that an interval \\textit{covers} a segment if the segment is contained in the interval. This allows us to capture the cost of mistakes at different distance scales. For example, if a point is very far from its true center, then it covers many disjoint segments, and we show that it also implies a large contribution to the cost. \nClaim~\\ref{claim:covering} in Section~\\ref{sec:general_k_upper_bound} provides our main covering result, and we use this to argue that the cost of the given centers can be lower bounded in terms of the distance between consecutive centers in $B(u)$. For $k$-medians, we can directly derive a lower bound on the cost in terms of the $\\ell_1$ diameter $\\mathrm{diam}_1(B(u))$. 
For $k$-means, however, we employ Cauchy-Schwarz, which incurs an extra factor of $k$ in the bound with $\mathrm{diam}_2^2(B(u))$. Overall, we sum these bounds over the height $H$ of the tree, leading to the claimed upper bounds in the above lemma. 

\subsubsection{Preliminaries and Notation for Theorem~\ref{thm:main-k}}

Let $\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k$ be the reference centers, and let $T$ be the resulting IMM tree. Each internal node $u$ corresponds to a value $\theta_u \in \mathbb{R}$ and a coordinate $i \in [d]$. The tree partitions $\mathcal{X}$ into $k$ clusters $\widehat C_1, \ldots, \widehat C_k$ based on the points that reach the $k$ leaves in $T$, where we index the clusters so that leaf $j$ contains the centers $\boldsymbol{\mu}^j$ and $\widehat \boldsymbol{\mu}^j$; here $\widehat \boldsymbol{\mu}^j$ is the mean of $\widehat C_j$ for $k$-means and the median of $\widehat C_j$ for $k$-medians. This provides a bijection between old and new centers (and clusters).
Recall that the map $c:\mathcal{X} \to \{\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k\}$ associates each point to its nearest center (i.e., $c(\mathbf{x})$ corresponds to the cluster assignment given by the centers $\{\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k\}$).

For a node $u \in T$, we let $\mathcal{X}_u$ denote the set of data vectors that survive to node $u$, that is, those that satisfy the thresholds on the path from the root to $u$.
We also define $J_u \subseteq [k]$ to be the set of surviving centers at node~$u$ from the set $\{\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k\}$, namely, the centers that satisfy the thresholds from the root to~$u$. 
Define $\boldsymbol{\mu}^{L,u}$ and $\boldsymbol{\mu}^{R,u}$ to be the extremal (smallest and largest) coordinate-wise values of the centers in $J_u$, that is, for $i \in [d]$, we set
$$
\mu^{L,u}_i = \min_{j \in J_u} \mu^j_i,
\qquad\mathrm{and}\qquad 
\mu^{R,u}_i = \max_{j \in J_u} \mu^j_i.
$$
In other words, using the previous notation and recalling that $B(u) = \{\boldsymbol{\mu}^1,\ldots,\boldsymbol{\mu}^k\} \cap \mathrm{cell}(u)$, we have that
$$
\mathrm{diam}_1(B(u)) = \|\boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u}\|_1
\qquad 
\mbox{\ and\ }
\qquad
\mathrm{diam}_2^2(B(u)) = \|\boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u}\|_2^2.
$$

Recall that $t_u$ for node $u \in T$ denotes the number of {\em mistakes} incurred during the threshold cut defined by $u$, where a point $\mathbf{x}$ is a mistake at node $u$ if $\mathbf{x}$ reaches $u$, it was not a mistake before, and exactly one of the following two events occurs:
$$
\{c(\mathbf{x})_i \leq \theta_u \ \ \mathrm{and}\ \ x_i > \theta_u\}
\qquad \mathrm{or} \qquad
\{c(\mathbf{x})_i > \theta_u \ \ \mathrm{and}\ \ x_i \leq \theta_u\}.
$$
Let $\mathcal{X} = \cX^{\mathsf{cor}} \cup \cX^{\mathsf{mis}}$ be a partition of the input data set into two parts, where $\mathbf{x}$ is in $\cX^{\mathsf{cor}}$ if it reaches the same leaf node in $T$ as its center $c(\mathbf{x})$, and otherwise, $\mathbf{x}$ is in $\cX^{\mathsf{mis}}$. In other words, $\cX^{\mathsf{mis}}$ contains all points $\mathbf{x} \in \mathcal{X}$ that are a mistake at some node $u$ in $T$, and the rest of the points are in $\cX^{\mathsf{cor}}$.
We note that the notion of ``mistakes'' used here is different from the definition of ``changes'' used for the analysis of $2$-means\/medians, even though we reuse some of the same notation.

We need a standard consequence of the Cauchy-Schwarz inequality to analyze the $k$-means cost.
\begin{claim}\label{clm:cauchy_schwarz_k_means}
	For any $a_1,\ldots,a_{m}\in\mathbb{R}$, it holds that $\sum_{i=1}^m a_i^2 \geq \frac{1}{m}\left(\sum_{i=1}^m a_i\right)^2.$
\end{claim}
\begin{proof}
	Denote by $a$ the vector $(a_1,\ldots,a_{m})$ and by $b$ the vector $(\nicefrac{1}{\sqrt{m}},\ldots,\nicefrac{1}{\sqrt{m}})$. 
	By the Cauchy-Schwarz inequality,
	$\frac{1}{m}\left(\sum_{i=1}^m a_i\right)^2=\inner{a}{b}^2\leq\norm{a}_2^2\norm{b}_2^2=\sum_{i=1}^m a_i^2.$
\end{proof}

We also need two facts, which state that the optimal center of a cluster is the mean or the coordinate-wise median of its points, respectively. The proofs of these facts can be found in standard texts~\cite{schutze2008introduction}.

\begin{fact}\label{fact:1-means-optimal center}
For any set $S=\{\mathbf{x}^1,\ldots, \mathbf{x}^n\}\subseteq \mathbb{R}^d$, the optimal center under the $\ell_2^2$ cost is the mean $\boldsymbol{\mu} = \frac1n\sum_{\mathbf{x}\in S}\mathbf{x}.$
\end{fact}

\begin{fact}\label{fact:1-median-optimal center}
For any set $S=\{\mathbf{x}^1,\ldots, \mathbf{x}^n\}\subseteq \mathbb{R}^d$, the optimal center $\boldsymbol{\mu}$ under the $\ell_1$ cost is the coordinate-wise median, defined for $i\in[d]$ as $\mu_i = \mathsf{median}(x^1_i,\ldots, x^n_i).$
\end{fact}

\subsubsection{The Two Main Lemmas and the Proof of Theorem~\ref{thm:main-k}}

To prove the theorem, we state two lemmas that aid in analyzing the cost of the given clustering versus the IMM clustering. 
The theorem will follow from these lemmas, and we prove the lemmas in the subsequent subsections. We start with the lemma that bounds the cost of the IMM tree in terms of the cost of the given centers, the number of mistakes~$t_u$ at each node $u$, and the distance between $\boldsymbol{\mu}^{L,u}$ and $\boldsymbol{\mu}^{R,u}$.

\begin{lemma}\label{lemma:tree-cost}
	If IMM takes centers $\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k$ and returns a tree $T$ of depth $H$ that incurs $t_u$ mistakes at node $u \in T$, then 
	\begin{enumerate}
		\item The $k$-medians cost of the IMM tree satisfies $$\mathrm{cost}(T)\leq \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + \sum_{u \in T} t_u \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_1.$$
		\item The $k$-means cost of the IMM tree satisfies $$\mathrm{cost}(T)\leq 2\cdot \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + 2\cdot \sum_{u \in T} t_u \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_2^2.$$
	\end{enumerate}
\end{lemma}

We next bound the cost of the given centers in terms of the number of mistakes in the tree.
The key idea is that if there must be many mistakes at each node, then the cost of the given centers must actually be fairly large.\n\n\\begin{lemma}\\label{aux-claim:general_k_upper_bound}\n\tIf IMM takes centers $\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k$ and returns a tree $T$ of depth $H$ that incurs $t_u$ mistakes at node $u \\in T$, then \n\t\\begin{enumerate}\n\t\t\\item The $k$-medians cost satisfies \n\t\t$$\\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1 \\leq 2H \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\n\t\t\\item The $k$-means cost \n\t\tsatisfies \n\t\t$$\\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2 \\leq 4Hk \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\n\t\\end{enumerate}\n\\end{lemma}\n\nCombining these two lemmas immediately implies Theorem~\\ref{thm:main-k}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:main-k}.]\nFor $k$-medians, Lemmas~\\ref{lemma:tree-cost} and~\\ref{aux-claim:general_k_upper_bound}\ntogether imply that \n$$\\mathrm{cost}(T)\\leq \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1\n\\leq \n(2H+1) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).$$\nFor $k$-means, we have that \n\\begin{eqnarray*}\n\\mathrm{cost}(T)\\leq 2\\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) + 2\\cdot \\sum_{u \\in T} t_u \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2\n\\leq (8Hk+2) \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k).\n\\end{eqnarray*}\n\n\\end{proof}\n\n\n\n\\subsubsection{Proof of Lemma~\\ref{lemma:tree-cost}}\n\nWe begin with the $k$-medians proof (the $k$-means proof will be similar). 
Notice that the cost can only increase when measuring the distance to the (suboptimal) center $\\boldsymbol{\\mu}^j$ instead of the (optimal) center $\\widehat \\boldsymbol{\\mu}^j$ for cluster $\\widehat C_j$, and hence,\n$$\n\\mathrm{cost}(T) = \\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\widehat \\boldsymbol{\\mu}^j\\|_1\n\\leq \n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\boldsymbol{\\mu}^j\\|_1.$$\nWe can rewrite this sum using the partition $\\cX^{\\mathsf{cor}}$ and $\\cX^{\\mathsf{mis}}$ of $\\mathcal{X}$, using the fact that\nwhenever $\\mathbf{x}\\in \\cX^{\\mathsf{cor}}$, then the distance is computed with respect to the true center $c(\\mathbf{x})$,\n\\begin{eqnarray*}\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\widehat C_j} \\|\\mathbf{x} - \\boldsymbol{\\mu}^j\\|_1 &=& \n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\\\ &=& \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\end{eqnarray*}\nStarting with the above cost bound, and using the triangle inequality, we see\n\\begin{eqnarray*}\n\\mathrm{cost}(T) \n&\\leq&\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|\\mathbf{x}- \\boldsymbol{\\mu}^j\\|_1\n\\\\ &\\leq& \n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}} \\|\\mathbf{x}- c(\\mathbf{x})\\|_1 +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \n(\\|\\mathbf{x}- c(\\mathbf{x})\\|_1 + \\|c(\\mathbf{x})- \\boldsymbol{\\mu}^j\\|_1) \n\\\\ &=& \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) +\n\\sum_{j=1}^k \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j} \\|c(\\mathbf{x})- \\boldsymbol{\\mu}^j\\|_1\n\\end{eqnarray*}\n\nTo control the second term in the final line, we must bound the cost of the mistakes. We decompose $\\cX^{\\mathsf{mis}}$ based on the node $u$ where $\\mathbf{x} \\in \\cX^{\\mathsf{mis}}$ is first separated from its true center $c(\\mathbf{x})$ due to the threshold at node~$u$.\nTo this end, consider some point $\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_j$, where its distance is measured to the incorrect center $\\boldsymbol{\\mu}^{j} \\neq c(\\mathbf{x})$. Both centers $c(\\mathbf{x})$ and $\\boldsymbol{\\mu}^j$ have survived until node $u$ in the threshold tree $T$, and hence, both vectors are part of the definitions of $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$. 
In particular, we can use the upper bound
$$\|c(\mathbf{x}) - \boldsymbol{\mu}^j\|_1 \leq \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_1.$$
There are $t_u$ points in $\cX^{\mathsf{mis}}$ that are first separated from their centers by the threshold at node $u$, and hence we have that
$$\sum_{j=1}^k \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} \|c(\mathbf{x}) - \boldsymbol{\mu}^j\|_1 \leq 
\sum_{u \in T} t_u \cdot \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_1.$$
Therefore, we have, as desired,
\begin{eqnarray*}
\mathrm{cost}(T) &\leq& \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + \sum_{j=1}^k \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} \|c(\mathbf{x})- \boldsymbol{\mu}^j\|_1
\\&\leq& \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + \sum_{u \in T} t_u \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_1.
\end{eqnarray*}
\noindent
Analyzing $k$-means is similar; we incur a factor of two by using Claim~\ref{clm:cauchy_schwarz_k_means} instead of the triangle inequality:
\begin{eqnarray*}
\mathrm{cost}(T) &\leq& 
\sum_{\mathbf{x} \in \cX^{\mathsf{cor}}} \|\mathbf{x}- c(\mathbf{x})\|_2^2 + 
2\sum_{j=1}^k \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} (\|\mathbf{x}- c(\mathbf{x})\|_2^2 +\|c(\mathbf{x}) - \boldsymbol{\mu}^j\|_2^2)
\\&\leq& 2\cdot \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + 2\cdot \sum_{j=1}^k \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} \|c(\mathbf{x}) - \boldsymbol{\mu}^j\|_2^2
\\ &\leq& 2\cdot \mathrm{cost}(\boldsymbol{\mu}^1,\ldots, \boldsymbol{\mu}^k) + 2\cdot \sum_{u \in T} t_u \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_2^2.
\end{eqnarray*}


\subsubsection{Proof of Lemma~\ref{aux-claim:general_k_upper_bound}}
\label{sec:general_k_upper_bound}
	
To prove this lemma, we bound the cost at each node $u$ of the tree in terms of the mistakes made at this node. For this lemma, we define $\cX^{\mathsf{cor}}_u$ to be the set of points in $\mathcal{X}$ that reach node $u$ in $T$ along with their center $c(\mathbf{x})$. We note that $\cX^{\mathsf{cor}}_u$ differs from $\cX^{\mathsf{cor}} \cap \mathcal{X}_u$ because a point $\mathbf{x} \in \cX^{\mathsf{cor}}_u$ may not make it to $\cX^{\mathsf{cor}}$ if there is a mistake later on (i.e., $\cX^{\mathsf{cor}}$ is the union of $\cX^{\mathsf{cor}}_u$ only over leaf nodes).

\begin{lemma}\label{lemma:mistake-bound}
For any node $u \in T$, we have that
	$$
	\sum_{\mathbf{x} \in \cX^{\mathsf{cor}}_u} \|\mathbf{x} - c(\mathbf{x})\|_1
	\geq 
	\frac{t_u}{2} \cdot \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_1
	$$
and
	$$
\sum_{\mathbf{x} \in \cX^{\mathsf{cor}}_u} \|\mathbf{x} - c(\mathbf{x})\|_2^2
\geq 
\frac{t_u}{4k} \cdot \| \boldsymbol{\mu}^{L,u} - \boldsymbol{\mu}^{R,u} \|_2^2.
$$
\end{lemma}
\begin{proof}
Fix a coordinate $i \in [d]$ and a node $u \in T$.
To simplify notation, we let $z_1 \leq \cdots \leq z_{k'}$ denote the {\em sorted} values of the $i$th coordinate of the $k' \leq k$ centers that survive until node $u$ (so that $z_1 = \mu^{L,u}_i$ and $z_{k'} = \mu^{R,u}_i$). Observe that for each $\mathbf{x} \in \cX^{\mathsf{cor}}_u$, the center $c(\mathbf{x})$ must have survived until node $u$, and hence, $c(\mathbf{x})_i$ equals one of the values $z_j$ for $j\in[k']$.
\n\nWe need a definition that allows us to relate the cost in coordinate $i$ to the distances between $z_1$ and $z_{k'}$. For consecutive values $(j,j+1)$, we say that the pair $(j,j+1)$ is {\\em covered} by $\\mathbf{x}$ if either \n\\begin{itemize}\n\t\\item The segment $[z_j, \\frac{z_j + z_{j+1}}{2})$ is contained in the segment $[x_i,c(\\mathbf{x})_i]$, or\n\t\\item The segment $[\\frac{z_j + z_{j+1}}{2}, z_{j+1})$ is contained in the segment $[x_i,c(\\mathbf{x})_i]$.\n\\end{itemize} \n\nWe prove the following claim, which enables us to relate the cost in the $i$th coordinate to the value $z_{k'} - z_1$ by decomposing this value into the distance between consecutive centers.\n\\begin{claim}\\label{claim:covering}\nFor each $j = 1,2,\\ldots, k'-1$, the pair $(j,j+1)$ is {covered} by at least~$t_u$ points $\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u$.\n\\end{claim}\n\\begin{proof}\nSuppose for contradiction that this does not hold. We argue that we can find a threshold value for coordinate $i$ that makes fewer than $t_u$ mistakes. To see this, assume that $(j,j+1)$ is covered by fewer than $t_u$ points $\\mathbf{x} \\in \\mathcal{X}_u$. In particular, setting the threshold to be $\\frac{z_j + z_{j+1}}{2}$ separates fewer than $t_u$ points $\\mathbf{x}$ from their centers $c(\\mathbf{x})$. This implies that there are fewer than $t_u$ mistakes at node $u$, which is a contradiction because the IMM algorithm chooses the coordinate and threshold pair that minimizes the number of mistakes.\n\\end{proof}\n\nNow this claim suffices to prove Lemma~\\ref{lemma:mistake-bound}. The only challenge is that we must string together the covering points $\\mathbf{x}$ to get a bound on $z_{k'}-z_1$.\n\nWe start with the $k$-medians proof. Using the above claim, we can lower bound the contribution of coordinate $i$ to the cost of the given centers. Notice that the values $z_1 \\leq \\cdots \\leq z_{k'}$ partition the interval between $z_1 = \\mu^{L,u}_i$ and $z_{k'} = \\mu^{R,u}_i$. Thus, each time $\\mathbf{x}$ covers a pair $(j,j+1)$, there must be a contribution of $\\frac{z_{j+1} - z_j}{2}$ to the cost $|x_i - c(\\mathbf{x})_i|$. 
Because each pair is covered at least $t_u$ times by Claim~\\ref{claim:covering}, we conclude that \n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|\n\\geq t_u \\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)\n= \\frac{t_u}{2} (z_{k'} - z_1).\n$$\nTo relate the bound to $\\boldsymbol{\\mu}^{L,u}$ and $\\boldsymbol{\\mu}^{R,u}$, we note that the above argument holds for each coordinate $i \\in [d]$, and\nwe have that\n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_1\n= \\sum_{i \\in [d]} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |\\mathbf{x}_i - c(\\mathbf{x})_i|\n\\geq \n\\frac{t_u}{2} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n$$\n\nFor the $k$-means proof, we apply the same argument as above, this time using Claim~\\ref{clm:cauchy_schwarz_k_means} to bound the sum of squared values as \n$$\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|^2\n\\geq t_u \\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)^2\n\\geq \\frac{t_u}{k} \\left(\\sum_{j = 1}^{k'-1}\\left(\\frac{z_{j+1} - z_j}{2}\\right)\\right)^2\n= \\frac{t_u}{4k} (z_{k'} - z_1)^2,$$\nand therefore, summing over coordinates $i\\in[d]$, we have\n$$\n\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_2^2\n= \\sum_{i \\in [d]} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} |x_i - c(\\mathbf{x})_i|^2\n\\geq \n\\frac{t_u}{4k} \\cdot \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.\n$$\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{aux-claim:general_k_upper_bound}.]\n\tWe start with the $k$-medians proof.\nThe factor of $H$ arises because the same points $\\mathbf{x} \\in \\mathcal{X}$ can appear in at most $H$ sets $\\cX^{\\mathsf{cor}}_u$ because $H$ is the depth of the tree. More precisely, using Lemma~\\ref{lemma:mistake-bound} for each node $u$, we have that \n$$\nH \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) \\geq \\sum_{u \\in T} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_1\n\\geq \\sum_{u \\in T} \\frac{t_u}{2} \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_1.\n$$\t\nApplying the same steps for the $k$-means cost, we have that\n$$\nH \\cdot \\mathrm{cost}(\\boldsymbol{\\mu}^1,\\ldots, \\boldsymbol{\\mu}^k) \\geq \\sum_{u \\in T} \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{cor}}_u} \\|\\mathbf{x} - c(\\mathbf{x})\\|_2^2\n\\geq \\sum_{u \\in T} \\frac{t_u}{4k} \\| \\boldsymbol{\\mu}^{L,u} - \\boldsymbol{\\mu}^{R,u} \\|_2^2.\n$$\n\\end{proof}\n\n\n\n\n\n\\subsection{Approximation lower bound}\n\nTo complement our upper bounds, we show that a threshold tree with $k$ leaves cannot, in general, yield better than an $\\Omega(\\log k)$ approximation to the optimal $k$-medians or $k$-means clustering.\n\n\\begin{theorem}\\label{thm:lb-k}\nFor any $k \\geq 2$, there exists a data set with $k$ clusters such that any threshold tree $T$ with $k$ leaves\nmust have $k$-medians and $k$-means cost at least\n$$\n\\mathrm{cost}(T) \\geq \\Omega(\\log k) \\cdot \\mathrm{cost}(opt).\n$$\n\\end{theorem}\n\nThe data set is produced by first picking $k$ random centers from the hypercube $\\{-1,1\\}^d$, for large enough~$d$, and then using each of these to produce a cluster consisting of the $d$ points that can be obtained by replacing one coordinate of the center by zero. Thus the clusters have size $d$ and radius $O(1)$. 
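For concreteness, the following short snippet (our own illustration, not code from the paper; the function name and the use of NumPy are our choices) constructs one such instance.
\begin{verbatim}
import numpy as np

def lower_bound_instance(k, d, seed=0):
    """k random centers in {-1,1}^d; each cluster consists of the d points
    obtained by zeroing out one coordinate of its center."""
    rng = np.random.default_rng(seed)
    centers = rng.choice([-1.0, 1.0], size=(k, d))
    points = np.repeat(centers, d, axis=0)      # d copies of every center
    for j in range(k):
        for i in range(d):
            points[j * d + i, i] = 0.0          # replace one coordinate by zero
    labels = np.repeat(np.arange(k), d)
    return points, labels, centers
\end{verbatim}
Each of the $kd$ points is at $\ell_1$ distance (and squared $\ell_2$ distance) exactly $1$ from the center that generated it, which is the sense in which every cluster has radius $O(1)$.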
To prove the lower bound, we use ideas from the study of pseudo-random binary vectors, showing that the projections of the centers onto any subset of $m\lesssim\log_2 k$ coordinates take on all $2^m$ possible values, with each value occurring roughly equally often.
 Then, we show that (i) the threshold tree must be essentially a complete binary tree with depth $\Omega(\log_2 k)$ to achieve a clustering with low cost, and (ii) any such tree incurs a cost of $\Omega(\log k)$ times the optimal for this data set (for both $k$-medians and $k$-means).
The proof of Theorem~\ref{thm:lb-k} appears in Appendix~\ref{sec:general_k_lower_bound}.


\section{Motivating Examples}
\label{sec:motivating-examples}

{\bf Using $k-1$ features may be necessary.}
We start with a simple but important bound showing that trees with depth less than~$k-1$ (or with fewer than $k-1$ features) can be arbitrarily worse than the optimal clustering. Consider the data set consisting of the $k-1$ standard basis vectors $\mathbf{e}^1,\ldots,\mathbf{e}^{k-1} \in \mathbb{R}^{k-1}$ along with the all zeros vector. As this data set has $k$ points, 
the optimal $k$-median\/means cost is zero, putting each point in its own cluster. 
Unfortunately, it is easy to see that for this data set, depth $k-1$ is necessary for an optimal clustering with a threshold tree. Figure~\ref{fig:simplex} depicts an optimal tree for this data set. Shorter trees do not work because projecting onto any $k-2$ coordinates does not separate the data, as at least two points will have all zeros in these coordinates. Therefore, any tree with depth at most $k-2$ will put two points in the same cluster (the points in a leaf are determined by the at most $k-2$ coordinates inspected on its root-to-leaf path), leading to non-zero cost, whereas the optimal cost is zero. In other words, for this data set, caterpillar trees such as Figure~\ref{fig:simplex} are necessary and sufficient for an optimal clustering. This example also shows that $\Theta(k)$ features are tight for feature selection~\cite{cohen2015dimensionality} and provides a separation from feature extraction methods that use a linear map to only a logarithmic number of dimensions~\cite{becchetti2019oblivious, makarychev2019performance}.

\begin{figure}
 \centering
 \subfloat[Optimal threshold tree for the data set in $\mathbb{R}^{k-1}$ consisting of the $k-1$ standard basis vectors and the all zeros vector.
Any optimal tree must use all $k-1$ features and have depth~$k-1$.]{\n \\begin{minipage}[t]{0.45\\textwidth}\n \\centering\n \\begin{tikzpicture}\n\t[inner\/.style={shape=rectangle, rounded corners, draw, align=center, top color=white, bottom color=gray!40, scale=.85},\n\tdots\/.style={shape=rectangle, align=center, top color=white, scale=.85},\n\tleaf\/.style={shape=rectangle, rounded corners, draw, align=center, scale=.85},\n\tlevel 1\/.style={sibling distance=2mm},\n\tlevel 2\/.style={sibling distance=2mm},\n\tlevel 3\/.style={sibling distance=2mm},\n\tlevel 4\/.style={sibling distance=2mm},\n\tlevel distance=8mm]\n\t\\Tree\n\t[.\\node[inner]{$x_{i_1} \\leq 0.5$};\n\t[.\\node[inner]{$x_{i_2} \\leq 0.5$};\n\t[.\\node[inner] {$\\ldots$};\n\t[.\\node[inner] {$x_{i_d} \\leq 0.5$};\n\t\\node[leaf]{\\textbf{$\\mathbf{0}$}};\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_d}$}};\n\t] \n\t\\node[leaf]{\\textbf{$\\ldots$}};\n\t]\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_2}$}};\n\t]\n\t\\node[leaf]{\\textbf{$\\mathbf{e}^{i_1}$}};\n\t]\n\t\\node[top color=white, bottom color=white, scale=.75] at (-0.8,-4.1) {};\n\t\\end{tikzpicture}\n\t\\end{minipage}\n\t\\label{fig:simplex}}\n\t\\hfill\n \\subfloat[The ID3 split results in a $3$-means\/medians clustering with arbitrarily worse cost than the optimal because it places the top two points in separate clusters. Our algorithm (Section~\\ref{sec:k-means}) instead starts with the optimal first split.\n ]{\n \\includegraphics[width=.4\\textwidth, trim=-2cm 0 -2cm 0, clip]{our_vs_id3.png}\n \\label{fig:imm_vs_id3}}\n\\caption{Motivating examples showing that (a) threshold trees may need depth $k-1$ to determine $k$ clusters, and (b) standard decision tree algorithms such as ID3 or CART perform very badly on some data sets.}\n\\end{figure}\n\t\n\n\\paragraph{Standard top-down decision trees do not work.}\n\\label{sec:standard-dt-bad}\nA natural approach to building a threshold tree is to (1) find a good $k$-medians or $k$-means clustering using a standard algorithm, then (2) use it to label all the points, and finally (3) apply a supervised decision tree learning procedure, such as ID3~\\cite{quinlan1986induction,quinlan2014c4} to find a threshold tree that agrees with these cluster labels as much as possible. ID3, like other common decision tree algorithms, operates in a greedy manner, where at each step it finds the best split in terms of \\emph{entropy} or \\emph{information gain}. We will show that this is not a suitable strategy for clustering and that the resulting tree can have cost that is arbitrarily bad. \nIn what follows, denote by $\\mathrm{cost}({ID3}_{\\ell})$ the cost of the decision tree with~$\\ell$ leaves returned by ID3 algorithm.\n\n\n\nFigure \\ref{fig:imm_vs_id3} depicts a data set $\\mathcal{X} \\subseteq \\mathbb{R}^2$ partitioned into three clusters $\\mathcal{X} = \\mathcal{X}_0 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_1 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_2$.\nWe define two centers $\\boldsymbol{\\mu}^0=(-2,0)$ and $\\boldsymbol{\\mu}^1=(2,0)$ and for each $i\\in \\{0, 1\\}$, we define~$\\mathcal{X}_i$ as $500$ i.i.d. points $\\mathbf{x} \\sim \\mathcal{N}(\\boldsymbol{\\mu}^i, \\epsilon)$ for some small $\\epsilon > 0$. \nThen, $\\mathcal{X}_2 = \\{(-2, v), (2, v)\\}$ where $v \\to \\infty$. \nWith high probability, we have that the optimal $3$-means clustering is $(\\mathcal{X}_0, \\mathcal{X}_1, \\mathcal{X}_2)$, i.e. 
$\\mathbf{x} \\in \\mathcal{X}$ gets label $y\\in\\{0,1,2\\}$ such that $\\mathbf{x} \\in \\mathcal{X}_y$.\nThe ID3 algorithm minimizes the entropy at each step. In the first iteration, it splits between the two large clusters. As a result $(-2, v)$ and $(2, v)$ will also be separated from one another. Since $ID3_3$ outputs a tree with exactly three leaves, one of the leaves must contain a point from $\\mathcal{X}_2$ together with points from either $\\mathcal{X}_0$ or $\\mathcal{X}_1$, this means that $\\mathrm{cost}(ID3_3)= \\Omega(v) \\to \\infty$.\nNote that $\\mathrm{cost}((\\mathcal{X}_1, \\mathcal{X}_2, \\mathcal{X}_3))$ does not depend on $v$, and hence, it is substantially smaller than $\\mathrm{cost}(ID3_3)$.\nUnlike ID3, the optimal threshold tree first separates $\\mathcal{X}_2$ from $\\mathcal{X}_0 \\mathbin{\\mathaccent\\cdot\\cup} \\mathcal{X}_1$, and in the second split it separates $\\mathcal{X}_0$ and $\\mathcal{X}_1$. Putting the outliers in a separate cluster is necessary for an optimal clustering. It is easy to extend this example to more clusters or to when ID3 uses more leaves.\n\n\n\\section{Introduction}\nA central direction in machine learning is understanding the reasoning behind decisions made by learned models~\\cite{lipton2018mythos, molnar2019, murdoch2019interpretable}. Prior work on AI explainability focuses on the interpretation of a black-box model, known as {\\em post-modeling} explainability~\\cite{baehrens2010explain, ribeiro2018anchors}. While methods such as LIME~\\cite{ribeiro2016should} or Shapley explanations~\\cite{lundberg2017unified} have made progress in this direction, they do not provide direct insight into the underlying data set, and the explanations depend heavily on the given model. This has raised concerns about the applicability of current solutions, leading researchers to consider more principled approaches to interpretable methods~\\cite{rudin2019stop}.\n\nWe address the challenge of developing machine learning systems that are explainable by design, starting from an {\\em unlabeled} data set. Specifically, we consider {\\em pre-modeling} explainability in the context of {clustering}. \nA common use of clustering is to identify patterns or discover structural properties in a data set by quantizing the unlabeled points. For instance, $k$-means clustering may be used to discover coherent groups among a supermarket's customers. While there are many good clustering algorithms, the resulting cluster assignments can be hard to understand because the clusters may be determined using all the features of the data, and there may be no concise way to explain the inclusion of a particular point in a cluster. This limits the ability of users to discern the commonalities between points within a cluster or understand why points ended up in different clusters. \n\n\n\\input{cluster-pic-intro.tex}\n\n\n\nOur goal is to develop accurate, efficient clustering algorithms with concise explanations of the cluster assignments. There should be a simple procedure using a few features to explain why any point belongs to its cluster. Small decision trees have been identified as a canonical example of an easily explainable model~\\cite{molnar2019, murdoch2019interpretable}, and\nprevious work on explainable clustering uses an unsupervised decision tree~\\cite{bertsimas2018interpretable, fraiman2013interpretable,geurts2007inferring,ghattas2017clustering, liu2005clustering}. 
Each node of the binary tree iteratively partitions the data by thresholding on a single feature. We focus on finding $k$ clusters, and hence, we use trees with $k$ leaves. Each leaf corresponds to a cluster, and the tree is as small as possible. 
We refer to such a tree as a {\em threshold tree}.

There are many benefits of using a small threshold tree to produce a clustering. Any cluster assignment is explained by computing the thresholds along the root-to-leaf path. By restricting to $k$ leaves, we ensure that each such path accesses at most $k-1$ features, independent of the data dimension. 
In general, a threshold tree provides an initial quantization of the data set, which can be combined with other methods for future learning tasks. While we consider static data sets, new data points can be easily clustered by using the tree, leading to explainable assignments.
To analyze clustering quality, we consider the $k$-means and $k$-medians objectives~\cite{macqueen, steinhaus}. The goal is to efficiently determine a set of $k$ centers that minimize either the squared $\ell_2$ or the $\ell_1$ distance, respectively, of the input vectors to their closest center. 

Figure~\ref{fig:optimal_vs_tree} provides an example of standard and explainable $k$-means clustering on the same data set. Figure~\ref{fig:optimal_clusters} on the left shows an optimal $5$-means clustering. Figure~\ref{fig:tree_clusters} in the middle shows an explainable, tree-based $5$-means clustering, determined by the tree in Figure~\ref{fig:decision_tree} on the right. The tree has five leaf nodes, and vectors are assigned to clusters based on the thresholds. Geometrically, the tree defines a set of axis-aligned cuts that determine the clusters. While the two clusterings are very similar, using the threshold tree leads to easy explanations, whereas using a standard $k$-means clustering algorithm leads to more complicated clusters. The difference between the two approaches becomes more evident in higher dimensions, because standard algorithms will likely determine clusters based on all of the feature values.

To reap the benefits of explainable clusters, we must ensure that the data partition is a good approximation of the optimal clustering. While many efficient algorithms have been developed for $k$-means\/medians clustering, the resulting clusters are often hard to interpret~\cite{arthur2007k,kanungo02, ostrovsky2013effectiveness, shalev2014understanding}. For example, Lloyd's algorithm alternates between determining the best center for each cluster and reassigning points to the closest center~\cite{lloyd1982least}. The resulting set of centers depends in a complex way on the other points in the data set. Therefore, the relationship between a point and its nearest center may be the result of an opaque combination of many feature values. This issue persists even after dimension reduction or feature selection, because a non-explainable clustering algorithm is often invoked on the modified data set. As our focus is on pre-modeling explainability, we aim for simple explanations that use the original feature vectors.

Even though Figure~\ref{fig:optimal_vs_tree} depicts a situation in which the optimal clustering is very well approximated by one that is induced by a tree, it is not clear whether this would be possible in general.
Our first technical challenge is to understand the {\em price of explainability} in the context of clustering: that is, the multiplicative blowup in $k$-means (or $k$-medians) cost that is inevitable if we force our final clustering to have a highly constrained, interpretable form. The second challenge is to actually find such a tree {\em efficiently}. This is non-trivial because it requires a careful, rather than random, choice of a subset of features. As we will see, the kind of analysis that is ultimately needed is quite novel even given the vast existing literature on clustering.


\subsection{Our contributions}


We provide several new theoretical results on explainable $k$-means and $k$-medians clustering. Our new algorithms and lower bounds are summarized in Table~\ref{tab:results_summary}. 

\medskip \noindent {\bf Basic limitations.} 
A partition into $k$ clusters can be realized by a binary threshold tree with $k-1$ internal splits. This uses at most $k-1$ features, but is it possible to use even fewer, say $\log k$ features? In Section~\ref{sec:motivating-examples}, we demonstrate a simple data set that requires $\Omega(k)$ features to achieve an explainable clustering with bounded approximation ratio compared to the optimal $k$-means\/medians clustering. In particular, the depth of the tree might need to be $k-1$ in the worst case.

One idea for building a tree is to begin with a good $k$-means (or $k$-medians) clustering, use it to label all the points, and then apply a supervised decision tree algorithm that attempts to capture this labeling. In Section~\ref{sec:standard-dt-bad}, we show that standard decision tree algorithms, such as ID3, may produce clusterings with arbitrarily high cost. Thus, existing splitting criteria are not suitable for finding a low-cost clustering, and other algorithms are needed.

\medskip \noindent {\bf New algorithms.} 
On the positive side, we provide efficient algorithms to find a small threshold tree that comes with provable guarantees on the cost. We note that using a small number of clusters is preferable for easy interpretation, and therefore $k$ is often relatively small.
For the special case of two clusters ($k=2$), we show (Theorem~\ref{thm:optimal_2_median_means}) that a single threshold cut provides a constant-factor approximation to the optimal $2$-medians\/means clustering, with a closely matching lower bound (Theorem~\ref{clm:2_median_lower_bound}), and we provide an efficient algorithm for finding the best cut. For general $k$, we show how to approximate any clustering by using a threshold tree with $k$ leaves (Algorithm~\ref{algo:imm}). The main idea is to minimize the number of mistakes made at each node in the tree, where a mistake occurs when a threshold separates a point from its original center. Overall, the cost of the explainable clustering will be close to the original cost up to a factor that depends on the tree depth (Theorem~\ref{thm:main-k}). In the worst case, we achieve an approximation factor of $O(k^2)$ for $k$-means and $O(k)$ for $k$-medians compared to the cost of any clustering (e.g., the optimal cost). These results do not depend on the dimension or input size; hence, we get a constant factor approximation when $k$ is constant. 

\paragraph{Approximation lower bounds.}
Since our upper bounds depend on $k$, it is natural to wonder whether it is possible to achieve a constant-factor approximation, or whether the cost of explainability grows with~$k$.
On the negative side, we identify a data set such that any threshold tree with $k$ leaves must incur an $\Omega(\log k)$-approximation for both $k$-medians and $k$-means (Theorem~\ref{thm:lb-k}). 
For this data set, our algorithm achieves a nearly matching bound for $k$-medians.\vspace{2ex}


\begin{table}[!htb]
\renewcommand{\arraystretch}{1.6}
\centering
\begin{minipage}{.8\textwidth}
\centering
 \begin{tabular}{|c|cc|cc|}
 \hline
 %
 \rowcolor[HTML]{F1F1F1}
 & \multicolumn{2}{c|}{\textbf{$k$-medians}} &
 \multicolumn{2}{c|}{\textbf{$k$-means}} \\
 
 
 \rowcolor[HTML]{F1F1F1}
 & $k = 2$ & $k > 2$ & $k = 2$ & $k > 2$ \\
 \hline
 
 \cellcolor[HTML]{F1F1F1}
 \textbf{Upper Bound} & {$2$} & {$O(k)$} & {$4$} & {$O(k^2)$} \\
 
 \cellcolor[HTML]{F1F1F1} \textbf{Lower Bound} & {$2 - \frac{1}{d}$} & {$\Omega(\log k)$} & {$3\left(1 - \frac{1}{d} \right)^2$} & {$\Omega(\log k)$} \\
 
 \hline
 \end{tabular}
 \caption{Summary of our upper and lower bounds on approximating the optimal $k$-medians\/means clustering with explainable, tree-based clusters. The values express the factor increase compared to the optimal solution in the worst case.
 }
 \label{tab:results_summary}
\end{minipage}
\end{table}


\input{related.tex}

\section{Preliminaries}

\input{prelim.tex}
\input{motivation-examples.tex}


\input{2-means-short.tex}
\input{k-means-short-new.tex}


\section{Conclusion}
In this paper, we discuss the capabilities and limitations of explainable clustering. For the special case of two clusters ($k=2$), we provide nearly matching upper and lower bounds for a single threshold cut. For general $k >2$, we present the IMM algorithm that achieves an $O(H)$ approximation for $k$-medians and an $O(Hk)$ approximation for $k$-means when the threshold tree has depth $H$ and $k$ leaves. 
We complement our upper bounds with a lower bound showing that any threshold tree with $k$ leaves must have cost at least $\Omega(\log k)$ times the optimal for certain data sets. 
Our theoretical results provide the first approximation guarantees on the quality of explainable unsupervised learning in the context of clustering. Our work makes progress toward the larger goal of explainable AI methods with precise objectives and provable guarantees. 



An immediate open direction is to improve our results for $k$ clusters, either on the upper or lower bound side. One option is to use larger threshold trees with more than $k$ leaves (or to allow more than $k$ clusters). It is also an important goal to identify natural properties of the data that enable explainable, accurate clusters. For example, it would be interesting to improve our upper bounds on explainable clustering for well-separated data. Our lower bound of $\Omega(\log k)$ utilizes clusters with diameter $O(1)$ and separation $\Omega(d)$, where the hardness stems from the randomness of the centers. In this case, the approximation factor $\Theta(\log k)$ is tight for $k$-medians because our upper bound proof actually provides a bound in terms of the tree depth (which is about $\log k$ for this data set; see Appendix~\ref{apx:lower_bound_log_k}). Therefore, an open question is whether a $\Theta(\log k)$ approximation is possible for any well-separated clusters (e.g., mixture of Gaussians with separated means and small variance).
Beyond $k$-medians\/means, it would be worthwhile to develop other clustering methods using a small number of features (e.g., hierarchical clustering).

\paragraph{Acknowledgements.}
Sanjoy Dasgupta has been supported by NSF CCF-1813160. Nave Frost has been funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 804302). The contribution of Nave Frost is part of Ph.D. thesis research conducted at Tel Aviv University. 


\section{Preliminaries}
Throughout, we use bold variables for vectors and non-bold variables for scalars such as feature values.
Given a set of points $\mathcal{X}=\{\mathbf{x}^1,\ldots,\mathbf{x}^n\}\subseteq\mathbb{R}^d$ and an integer $k$, the goal of $k$-medians and $k$-means clustering is to partition $\mathcal{X}$ into $k$ subsets and minimize the distances of the points to the centers of the clusters. It is known that the optimal centers correspond to the means or medians of the clusters, respectively. Denoting the centers as $\boldsymbol{\mu}^1,\ldots,\boldsymbol{\mu}^k$, the aim of $k$-means is to find a clustering that minimizes the following objective
$$\mathrm{cost}_2(\boldsymbol{\mu}^1, \ldots, \boldsymbol{\mu}^k)=\sum_{\mathbf{x}\in \mathcal{X}} \norm{\mathbf{x}-c_2(\mathbf{x})}^2_2,$$ where $c_2(\mathbf{x})=\argmin_{\boldsymbol{\mu} \in \{\boldsymbol{\mu}^1,\ldots,\boldsymbol{\mu}^k\}}{\norm{\boldsymbol{\mu} - \mathbf{x}}_2}$.
Similarly, the goal of $k$-medians is to minimize 
$$\mathrm{cost}_1(\boldsymbol{\mu}^1, \ldots, \boldsymbol{\mu}^k)=\sum_{\mathbf{x}\in \mathcal{X}} \norm{\mathbf{x}-c_1(\mathbf{x})}_1,$$ where $c_1(\mathbf{x})=\argmin_{\boldsymbol{\mu} \in \{\boldsymbol{\mu}^1,\ldots,\boldsymbol{\mu}^k\}}{\norm{\boldsymbol{\mu} - \mathbf{x}}_1}$. 
As it will be clear from context whether we are talking about $k$-medians or $k$-means, we abuse notation and write $\mathrm{cost}$ and $c(\mathbf{x})$ for brevity. We also fix the data set and use $opt$ to denote the optimal $k$-medians\/means clustering, where the optimal centers are the medians or means of the clusters, respectively; hence, $\mathrm{cost}(opt)$ refers to the cost of the optimal $k$-medians\/means clustering. 


\subsection{Clustering using threshold trees}


Perhaps the simplest way to define two clusters is to use a \emph{threshold cut}, which partitions the data based on a threshold for a single feature. More formally, the two clusters can be written as $\widehat C^{\theta,i}=(\widehat C^1, \widehat C^2)$, defined using a coordinate $i$ and a threshold $\theta\in\mathbb{R}$ in the following way. For each input point $\mathbf{x}\in \mathcal{X}$, we place $\mathbf{x}=[x_1, \ldots, x_d]$ in the first cluster $\widehat C^1$ if $x_i\leq\theta$, and otherwise $\mathbf{x} \in \widehat C^2$.
A threshold cut can be used to explain $2$-means or $2$-medians clustering because a single feature and threshold determine the division of the data set into exactly two clusters. 


For $k > 2$ clusters, we consider iteratively using threshold cuts as the basis for the cluster explanations. More precisely, 
we construct a binary \emph{threshold tree}. This tree is an unsupervised variant of a decision tree. Each internal node contains a single feature and threshold, which iteratively partitions the data, leading to clusters determined by the vectors that reach the leaves.
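To make this concrete, the following minimal sketch (ours; the tuple-based node layout is a hypothetical encoding, not notation from the paper) assigns a vector to a leaf by following the threshold conditions and evaluates the tree costs defined formally below.
\begin{verbatim}
import numpy as np

def assign(tree, x):
    """Follow threshold conditions from the root until a leaf (a cluster id)."""
    while not isinstance(tree, int):
        i, theta, left, right = tree
        tree = left if x[i] <= theta else right
    return tree

def tree_cost(tree, X, squared=True):
    """k-means cost (squared=True) or k-medians cost of the induced clustering."""
    labels = np.array([assign(tree, x) for x in X])
    total = 0.0
    for j in np.unique(labels):
        C = X[labels == j]
        center = C.mean(axis=0) if squared else np.median(C, axis=0)
        total += ((C - center) ** 2).sum() if squared else np.abs(C - center).sum()
    return total
\end{verbatim}
For example, the tree \texttt{(0, 0.5, 0, (1, 0.3, 1, 2))} first thresholds coordinate $0$ at $0.5$ and then coordinate $1$ at $0.3$, yielding three clusters.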
We focus on trees with exactly $k$ leaves, one for each cluster $\{1,2,\ldots, k\}$, which also limits the depth and the total number of features to at most $k-1$. 

When clustering using such a tree, it is easy to understand why $\mathbf{x}$ was assigned to its cluster: we may simply inspect the threshold conditions on the root-to-leaf path for $\mathbf{x}$. This also ensures that the number of conditions for the cluster assignment is rather small, which is crucial for interpretability. These tree-based explanations are especially useful in high-dimensional space, when the number of clusters is much smaller than the input dimension ($k \ll d$). 
More formally, a threshold tree $T$ with $k$ leaves induces a $k$-clustering of the data. Denoting these clusters as $\widehat{C}^j \subseteq \mathcal{X}$, we define the $k$-medians\/means cost of the tree as
$$
\mathrm{cost}_1(T) = \sum_{j=1}^k \sum_{\mathbf{x} \in \widehat{C}^j} \|\mathbf{x} - \mbox{median}(\widehat{C}^j) \|_1
\qquad \mbox{and} \qquad 
\mathrm{cost}_2(T) = \sum_{j=1}^k \sum_{\mathbf{x} \in \widehat{C}^j} \|\mathbf{x} - \mbox{mean}(\widehat{C}^j) \|_2^2. 
$$
Our goal is to understand when it is possible to efficiently produce a tree $T$ such that $\mathrm{cost}(T)$ is not too large compared to the optimal $k$-medians\/means cost. Specifically, we say that an algorithm is an {\em $a$-approximation} if its cost is at most $a$ times the optimal cost, i.e., if the algorithm returns a threshold tree $T$, then we have
$\mathrm{cost}(T) \leq a\cdot \mathrm{cost}(opt),$ where $opt$ denotes the optimal $k$-medians\/means clustering.

 


\section{Lower bounds for two clusters}\label{sec:k2_lower_bound}


Without loss of generality, we can assume that $d\geq 2$. We use the following data set for both $2$-medians and $2$-means. It consists of $2d$ points, partitioned into two clusters of size $d$, which are the points at Hamming distance exactly one from the all-ones vector or from the all-minus-ones vector: 
\begin{center}
\begin{tabular}{c c}
 \textbf{Optimal Cluster 1} & \textbf{Optimal Cluster 2} \\
 $(0,-1,-1,-1\ldots,-1)$ & $(0,1,1,1\ldots,1)$ \\
 $(-1,0,-1,-1\ldots,-1)$ & $(1,0,1,1\ldots,1)$\\
 $(-1,-1,0,-1\ldots,-1)$ & $(1,1,0,1\ldots,1)$\\
 $\vdots$ & $\vdots$ \\
 $(-1,-1,-1,-1\ldots,0)$ & $(1,1,1,1\ldots,0)$\\
\end{tabular}
\end{center}


Let $\widehat C = (\widehat C^1, \widehat C^2)$ be the best threshold cut. 

\paragraph{2-medians lower bound.}
The cost of the clustering with centers $(1,\ldots,1)$ and $(-1,\ldots,-1)$ is $2d$, as each point contributes a cost of $1$. Thus, $\mathrm{cost}(opt)\leq 2d.$

There is a coordinate $i$ and a threshold $\theta$ that define the cut $\widehat C$. For any coordinate $i$, there are only three possible values: $-1,0,1$. Thus $\theta$ is either in $(-1,0)$ or in $(0,1)$. Without loss of generality, assume that $\theta\in(-1,0)$ and $i=1$.
Thus, the cut is composed of two clusters, one of size $d-1$ and the other of size $d+1$, in the following way:

\begin{center}
\begin{tabular}{c c}
 $\mathbf{Cluster\ }\widehat C^1$ & $\mathbf{Cluster\ } \widehat C^2$ \\
 $(-1,0,-1,-1\ldots,-1)$ & $(1,0,1,1\ldots,1)$\\
 $(-1,-1,0,-1\ldots,-1)$ & $(1,1,0,1\ldots,1)$\\
 $\vdots$ & $\vdots$ \\
 $(-1,-1,-1,-1\ldots,0)$ & $(1,1,1,1\ldots,0)$\\
 & $(0,1,1,1\ldots,1)$\\
 & $(0,-1,-1,-1\ldots,-1)$\\
\end{tabular}
\end{center}


Using Fact~\ref{fact:1-median-optimal center}, an optimal center of the first cluster is the all $-1$ vector, and an optimal center of the second cluster is the all $1$ vector. The cost of the first cluster is $d-1$, as each point costs $1$. The cost of the second cluster is composed of two terms: each of the $d$ points that contains a $1$ in some coordinate costs $1$, for a total of $d$, and the point $(0,-1,\ldots,-1)$ costs $2(d-1)+1$. So the total cost is $4d-2$. Thus $\mathrm{cost}(\widehat C)\geq (2-1\/d) \mathrm{cost}(opt).$

\paragraph{2-means lower bound.}
Focus on the clustering with centers $$(\nicefrac{(d-1)}{d},\ldots,\nicefrac{(d-1)}{d}) \qquad \mbox{and} \qquad (-\nicefrac{(d-1)}{d},\ldots,-\nicefrac{(d-1)}{d}).$$ 
The cost of each point in the data set is composed of two terms: (1) one coordinate has value zero, and this coordinate costs $\left(\nicefrac{(d-1)}{d}\right)^2$; (2) each of the remaining $d-1$ coordinates costs $\nicefrac{1}{d^2}$. Thus, each point has a cost of $\nicefrac{(d-1)^2}{d^2}+\nicefrac{(d-1)}{d^2}$, and the total cost is $\frac{2(d-1)^2+2(d-1)}{d}=2(d-1)$. This implies that $\mathrm{cost}(opt)\leq 2(d-1).$

Assume without loss of generality that $\widehat C$ is defined using coordinate $i=1$ and threshold $-0.5$. The resulting clusters $\widehat C^1$ and $\widehat C^2$ are as in the case of $2$-medians. The optimal centers are (see Fact~\ref{fact:1-means-optimal center}): 
$$\left(-1,-\frac{d-2}{d-1},\ldots,-\frac{d-2}{d-1}\right) \qquad \mbox{and} \qquad \left(\frac{d-1}{d+1},\frac{d-2}{d+1},\ldots,\frac{d-2}{d+1}\right).$$
We want to lower bound $\mathrm{cost}(\widehat C)$, and we start with the cost of the first cluster $\widehat C^1$. To do so, for each point in $\widehat C^1$ we evaluate the contribution of each coordinate to the cost: (1) the first coordinate adds $0$; (2) the coordinate with value $0$ adds $\left(\frac{d-2}{d-1}\right)^2$; (3) each of the remaining $d-2$ coordinates adds $\nicefrac{1}{(d-1)^2}$. Thus, each point in $\widehat C^1$ adds to the cost $\left(\frac{d-2}{d-1}\right)^2 + \frac{d-2}{(d-1)^2}=\frac{d-2}{d-1}$. Since $\widehat C^1$ contains $d-1$ points, its total cost is $d-2$.

Moving on to the cost of $\widehat C^2$, the cost of the point $(0,-1,\ldots,-1)$ is composed of two terms: (1) the first coordinate adds $\left(\frac{d-1}{d+1}\right)^2$ to the cost; (2) each of the other $d-1$ coordinates adds $\left(1+\frac{d-2}{d+1}\right)^2$ to the cost.
Thus, this point adds $$\left(\frac{d-1}{d+1}\right)^2+(d-1)\left(1+\frac{d-2}{d+1}\right)^2=\frac{(d-1)d(4d-3)}{(d+1)^2}.$$
Similarly, the point $(0,1,\ldots,1)$ adds to the cost $$\left(\frac{d-1}{d+1}\right)^2+(d-1)\left(1-\frac{d-2}{d+1}\right)^2=\frac{(d-1)(d+8)}{(d+1)^2}.$$
Finally, each of the $d-1$ remaining points in $\widehat C^2$ adds to the cost $$\left(1-\frac{d-1}{d+1}\right)^2+\left(\frac{d-2}{d+1}\right)^2+(d-2)\left(1-\frac{d-2}{d+1}\right)^2=\frac{d^2+5d-10}{(d+1)^2}.$$
Thus, the cost of $\widehat C^2$ is $$\frac{(d-1)(5d^2+3d-2)}{(d+1)^2}=\frac{(d-1)(5d-2)}{d+1}.$$
Summing up the costs of $\widehat C^1$ and $\widehat C^2$, for $d\geq 2$,
$$\mathrm{cost}(\widehat C)\geq (d-2)+\frac{(d-1)(5d-2)}{d+1}\geq6(d-1)\left(1-\frac1d\right)^2\geq 3\left(1-\frac{1}{d}\right)^2\cdot \mathrm{cost}(opt).$$




\section{Upper Bound Proof for 2-Means}\label{sec:k2_upper_bound}

We show that there is a threshold cut $\widehat C$ with $2$-means cost satisfying
$\mathrm{cost}(\widehat C)\leq 4 \cdot \mathrm{cost}(opt).$ We could use the same proof idea as in the $2$-medians case, which first applies Lemma~\ref{lemma:tree-cost} and then uses the matching result, Lemma~\ref{lemma:matching}. However, this leads to a $6$-approximation instead of $4$, because we would apply Claim~\ref{clm:cauchy_schwarz_k_means} twice, which is not tight. Improving the approximation to $4$ requires us to apply Claim~\ref{clm:cauchy_schwarz_k_means} only once. 

Suppose $\boldsymbol{\mu}^1,\boldsymbol{\mu}^2$ are optimal $2$-means centers for the clusters $C^1$ and $C^2$. 
Let $t = \min(|C^1 \Delta \widehat C^1|, |C^1 \Delta \widehat C^2|)$ be the minimum number of changes for any threshold cut $\widehat C^1, \widehat C^2$, and define $\cX^{\mathsf{mis}}$ to be the set of $t$ points in the symmetric difference, where $\mathcal{X} = \cX^{\mathsf{cor}} \cup \cX^{\mathsf{mis}}$ and $\cX^{\mathsf{cor}} \cap \cX^{\mathsf{mis}} = \emptyset$.

Using the same argument as in the proof of Lemma~\ref{lemma:tree-cost}, we have
\begin{eqnarray}\label{eq:2_means_upper_bound}
\mathrm{cost}(\widehat{C}) &\leq& \sum_{j=1}^2 \sum_{\mathbf{x} \in \cX^{\mathsf{cor}} \cap \widehat C_j} \|\mathbf{x}- \boldsymbol{\mu}^j\|^2_2 +
\sum_{j=1}^2 \sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_j} \|\mathbf{x}- \boldsymbol{\mu}^j\|^2_2 \nonumber
\\ &=& \sum_{\mathbf{x} \in \cX^{\mathsf{cor}}} \|\mathbf{x}- c(\mathbf{x})\|^2_2 +
\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_1} \|\mathbf{x}- \boldsymbol{\mu}^1\|^2_2+\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_2} \|\mathbf{x}- \boldsymbol{\mu}^2\|^2_2 \nonumber
\\ &\leq& \mathrm{cost}(opt) +
\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_1} \|\mathbf{x}- \boldsymbol{\mu}^1\|^2_2+\sum_{\mathbf{x} \in \cX^{\mathsf{mis}} \cap \widehat C_2} \|\mathbf{x}- \boldsymbol{\mu}^2\|^2_2
\end{eqnarray}
The goal now is to bound the latter two terms using $\mathrm{cost}(opt)$.
These terms measure the distance of each $\mathbf{x} \in \cX^{\mathsf{mis}}$ from the ``other'' center, i.e., not $c(\mathbf{x})$.
\n\n\\begin{claim}\\label{claim:aux-2-means}\n\\begin{eqnarray*}\n\t\\mathrm{cost}(opt)\n\t\\geq \\frac13 \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_1} \\|\\mathbf{x}- \\boldsymbol{\\mu}^1\\|^2_2+\\frac13\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_2} \\|\\mathbf{x}- \\boldsymbol{\\mu}^2\\|^2_2\n\\end{eqnarray*}\n\\end{claim}\n\nUsing Claim~\\ref{claim:aux-2-means},\ntogether with Inequality~(\\ref{eq:2_means_upper_bound}) we have\n$$\\mathrm{cost}(\\widehat{C})\\leq \\mathrm{cost}(opt) + 3\\cdot \\mathrm{cost}(opt)=4\\cdot \\mathrm{cost}(opt),$$ \nand this completes the proof. \n\n\n\\begin{proof}[Proof of Claim~\\ref{claim:aux-2-means}.]\nDenote the $t$ points in $\\cX^{\\mathsf{mis}}$ by $\\cX^{\\mathsf{mis}}=\\{\\mathbf{r}^1,\\ldots,\\mathbf{r}^t\\}.$ Assume that the first $\\ell$ points are in the first optimal cluster, $\\mathbf{r}^1,\\ldots,\\mathbf{r}^\\ell\\in C^1$, and the rest are in the second cluster, $\\mathbf{r}^{\\ell+1},\\ldots,\\mathbf{r}^t\\in C^2.$\n\nApplying Lemma~\\ref{lemma:matching} for each coordinate $i\\in[d]$ guarantees $t$ pairs of vectors $(\\mathbf{p}^1,\\mathbf{q}^1), \\ldots, (\\mathbf{p}^t,\\mathbf{q}^t)$ with the following properties. Each $p^j_i$ corresponds to the $i$th coordinate of some point in $C^1$ and $q^j_i$ corresponds to the $i$th coordinate of some point in $C^2$. Furthermore, for each coordinate, the $t$ pairs correspond to $2t$ distinct points in $\\mathcal{X}$.\nFinally, we can assume without loss of generality that\n$\\mu^1_i \\leq \\mu^2_i$ and $q^j_i \\leq p^j_i$. \n\nFor each point $\\mathbf{r}^j$ in the first $\\ell$ points in $\\cX^{\\mathsf{mis}}$, if $r^j_i\\geq p^j_i$ then we can replace $\\mathbf{p}^j$ with $\\mathbf{r}^j$, thus we can assume without loss of generality that $p_i^j\\geq r^j_i.$ We next show that $\\mathrm{cost}(opt)$ is lower bounded by a function of $t$. There will be two cases depending on whether $p^j_i\\leq \\mu^2_i$ or not. The harder case is the first where the improvement of the approximation from $6$ to $4$ arises. Instead of first bounding the distance between $\\mathbf{r}^j$ and its new center using the distance to its original center and then accounting for $\\norm{\\boldsymbol{\\mu}^1-\\boldsymbol{\\mu}^2}^2_2,$ we directly account for the distance between $\\mathbf{r}^j$ and its new center. 
\n\n\n\\paragraph{\\textbf{Case 1:}} if $p^j_i\\leq \\mu^2_i$, then \nClaim~\\ref{clm:cauchy_schwarz_k_means} implies that\n\\begin{eqnarray*} \n(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n&\\geq& \\frac{1}{3}(\\mu^2_i-q^j_i+p^j_i-\\mu_i^1+\\mu_i^1-r^j_i)^2\\\\\n&=& \\frac{1}{3}((\\mu^2_i-q^j_i)+(p^j_i-r^j_i))^2\n\\geq \\frac{1}{3}(\\mu^2_i-r^j_i)^2.\n\\end{eqnarray*} The last inequality follows from $q^j_i \\leq p^j_i$ and $r_i^j\\leq p_i^j,$ which imply that $(\\mu^2_i-q^j_i)+(p^j_i-r^j_i)\n\\geq \\mu^2_i-r^j_i\\geq 0,$ which means $((\\mu^2_i-q^j_i)+(p^j_i-r^j_i))^2\n\\geq (\\mu^2_i-r^j_i)^2.$ \n\\paragraph{\\textbf{Case 2:}} if $\\mu_i^2\\leq p_i^j$, then again Claim~\\ref{clm:cauchy_schwarz_k_means} implies that\n\\begin{eqnarray*} \n(p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2&\\geq& (\\mu_i^2-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2 \\geq \\frac12 (\\mu^2_i-\\mu^1_i+\\mu^1_i-r^j_i)^2=\\frac12(\\mu^2_i-r^j_i)^2,\n\\end{eqnarray*}\nwhere in the first inequality we use $(p^j_i-\\mu^1_i)^2\\geq(\\mu^2_i-\\mu^1_i)^2.$\n\nThe two cases imply that for $1\\leq j\\leq \\ell$ $$(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n\\geq \\frac{1}{3}(\\mu^2_i-r^j_i)^2\n.$$\nSimilarly for each point $\\mathbf{r}^j$ in the last $t-\\ell$ points in $\\cX^{\\mathsf{mis}}$, we have $$(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^2-r^j_i)^2\n\\geq \\frac{1}{3}(\\mu^1_i-r^j_i)^2\n.$$\nPutting these together we have \n\\begin{eqnarray*}\n\t\\mathrm{cost}(opt)&\\geq& \\sum_{i=1}^d\\sum_{j=1}^\\ell(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^1-r^j_i)^2\n\t+\n\\sum_{i=1}^d\\sum_{j=\\ell+1}^t(\\mu^2_i-q^j_i)^2 + (p^j_i-\\mu_i^1)^2 + (\\mu_i^2-r^j_i)^2\\\\\n\t&\\geq& \\frac{1}{3}\\sum_{j=1}^\\ell\\sum_{i=1}^d (\\mu^2_i-r^j_i)^2\n\t+\\frac{1}{3}\\sum_{j=\\ell+1}^t\\sum_{i=1}^d (\\mu^1_i-r^j_i)^2 \n\t\\\\ \t&=&\\frac{1}{3}\\sum_{j=1}^\\ell\\norm{\\mathbf{r}^j-\\boldsymbol{\\mu}^2}_2^2+\\frac{1}{3}\\sum_{j=\\ell+1}^t\\norm{\\mathbf{r}^j-\\boldsymbol{\\mu}^1}_2^2\\\\\n&=&\t\\frac13 \\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_1} \\|\\mathbf{x}- \\boldsymbol{\\mu}^1\\|^2_2+\\frac13\\sum_{\\mathbf{x} \\in \\cX^{\\mathsf{mis}} \\cap \\widehat C_2} \\|\\mathbf{x}- \\boldsymbol{\\mu}^2\\|^2_2\n\\end{eqnarray*}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Efficient Implementation via Dynamic Programming for $k=2$}\\label{sec:DP_Implementation}\n\n\\subsection{The 2-means case}\nThe psudo-code for finding the best threshold for $k=2$ depicted in Algorithm~\\ref{algo:cut}.\n\n\\begin{figure}[!htb]\n \\centering\n \\begin{minipage}{.6\\linewidth}\n \\begin{algorithm}[H]\n \t\\SetKwFunction{$2$-means Optimal Threshold}{$2$-means Optimal Threshold}\n \t\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n \t\\LinesNumbered\n \t\\Input{%\n \t\t{$\\mathbf{x}^1, \\ldots, \\mathbf{x}^n$} -- vectors in $\\mathbb{R}^d$\n \t}\n \t\\Output{%\n \t\t\\xvbox{2mm}{$i$} -- Coordinate\\\\\n \t\t\\xvbox{2mm}{$\\theta$} -- Threshold\n \t}\n \t\\BlankLine\n \t\\setcounter{AlgoLine}{0}\n \t\n $\\xvar{best\\_cost} \\leftarrow \\infty$\\;\n \t\n \t$\\xvar{best\\_coordinate} \\leftarrow \\textsc{null}$\\;\n \t\n \t$\\xvar{best\\_threshold} \\leftarrow \\textsc{null}$\\;\n \t\n \t$u \\leftarrow \\sum_{j=1}^n \\norm{\\mathbf{x}^j}^2_2$\\;\n \t\n \t\\ForEach {$i \\in [1, \\ldots, d]$}\n \t{\n \t\t$\\mathbf{s} \\leftarrow \\xfunc{zeros}(d)$\\;\n \t\t\n \t\t$\\mathbf{r} \\leftarrow \\sum_{j=1}^n \\mathbf{x}^j$\\;\n \t\t\n \t\t$\\mathcal{X} \\leftarrow \\xfunc{sorted}(\\mathbf{x}^1, \\ldots, \\mathbf{x}^n 
 		\ForEach {$j \in [1, \ldots, n - 1]$}
 		{
 			$\mathbf{s} \leftarrow \mathbf{s} + \mathbf{x}^j$\;
 			$\mathbf{r} \leftarrow \mathbf{r} - \mathbf{x}^j$\;
 			$\xvar{cost} \leftarrow u - \frac{1}{j}\norm{\mathbf{s}}^2_2- \frac{1}{n - j}\norm{\mathbf{r}}^2_2$\;
 			\If {$\xvar{cost} < \xvar{best\_cost}$ and $x^j_i \neq x^{j+1}_i$}
 			{
 				$\xvar{best\_cost} \leftarrow \xvar{cost}$\;
 				$\xvar{best\_coordinate} \leftarrow i$\;
 				$\xvar{best\_threshold} \leftarrow x^j_i$\;
 			}
 		}
 	}
 	\Return $\xvar{best\_coordinate}, \xvar{best\_threshold}$\;
 	\caption{\textsc{Optimal Threshold for $2$-means}}
 	\label{algo:cut}
 \end{algorithm}
 \end{minipage}
\end{figure}

In time $O(d)$ we can calculate $\mathrm{cost}(p+1)$ and the new centers by using the value $\mathrm{cost}(p)$ and the previous centers. Throughout the computation we save in memory
\begin{enumerate}
	\item two vectors $\mathbf{s}^{p}=\sum_{j=1}^p \mathbf{x}^j$ and $\mathbf{r}^{p}=\sum_{j=p + 1}^n \mathbf{x}^j$;
	\item the scalar $u=\sum_{j=1}^n \norm{\mathbf{x}^j}_2^2$.
\end{enumerate}
We also make use of the identity
\begin{eqnarray*}
	\mathrm{cost}(p)
	&=& u - \frac{1}{p}\norm{\mathbf{s}^{p}}_2^2 - \frac{1}{n-p}\norm{\mathbf{r}^{p}}_2^2.
\end{eqnarray*}
This identity is correct because
\begin{eqnarray*}
 \mathrm{cost}(p) &=& \sum_{j=1}^{p}\norm{\mathbf{x}^j-\boldsymbol{\mu}^1(p)}^2_2 + \sum_{j=p+1}^{n}\norm{\mathbf{x}^j-\boldsymbol{\mu}^2(p)}^2_2\\
 &=& \sum_{j=1}^{p} \norm{\mathbf{x}^j}^2_2 - 2\sum_{j=1}^{p}\inner{\mathbf{x}^j}{\boldsymbol{\mu}^1(p)} + \sum_{j=1}^{p}\norm{\boldsymbol{\mu}^1(p)}^2_2 + \\
 && \sum_{j=p+1}^{n} \norm{\mathbf{x}^j}^2_2 - 2\sum_{j=p+1}^{n}\inner{\mathbf{x}^j}{\boldsymbol{\mu}^2(p)} + \sum_{j=p+1}^{n}\norm{\boldsymbol{\mu}^2(p)}^2_2\\
 &=& \sum_{j=1}^{n} \norm{\mathbf{x}^j}^2_2 - 2\inner{\sum_{j=1}^{p}\mathbf{x}^j}{\boldsymbol{\mu}^1(p)} + \frac{1}{p}\norm{\sum_{j=1}^{p}\mathbf{x}^j}^2_2 - \\
 && 2\inner{\sum_{j=p+1}^{n}\mathbf{x}^j}{\boldsymbol{\mu}^2(p)} + \frac{1}{n-p}\norm{\sum_{j=p+1}^{n}\mathbf{x}^j}^2_2\\
 &=& \sum_{j=1}^{n} \norm{\mathbf{x}^j}^2_2 - \frac{2}{p}\inner{\mathbf{s}^{p}}{\mathbf{s}^{p}}+\frac{1}{p}\norm{\mathbf{s}^{p}}^2_2 - \frac{2}{n-p}\inner{\mathbf{r}^{p}}{\mathbf{r}^{p}}+\frac{1}{n-p}\norm{\mathbf{r}^{p}}^2_2\\
 &=& u - \frac{1}{p}\norm{\mathbf{s}^{p}}^2_2 - \frac{1}{n-p}\norm{\mathbf{r}^{p}}^2_2.
\end{eqnarray*}

By invoking this identity, we can quickly compute the cost of placing the first $p$ points in cluster one and the last $n-p$ points in cluster two. Each such partition can be achieved by using a threshold $\theta$ between $x_i^p$ and $x_i^{p+1}$. Our algorithm computes these costs for each feature $i \in[d]$. Then, we output the feature $i$ and threshold $\theta$ that minimize the cost. This guarantees that we find the best possible threshold cut.

Overall, Algorithm~\ref{algo:cut} iterates over the $d$ features, and for each feature it sorts the $n$ vectors according to their values in that feature. Next, it iterates over the $n$ vectors and, for each potential threshold, calculates the cost by evaluating the squared norms of two $d$-dimensional vectors (the prefix and suffix sums). Overall, its runtime complexity is $O\left(nd^2 + nd\log n\right)$.
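
For concreteness, the following is a minimal Python sketch of Algorithm~\ref{algo:cut}; it is meant only as an illustration and is not part of the analysis. It assumes the input \texttt{X} is a list of $n$ lists of length $d$, and the function name \texttt{two\_means\_optimal\_threshold} is ours.

\begin{verbatim}
# Illustrative sketch (not from the paper): exhaustive threshold search
# for 2-means using the prefix/suffix-sum identity for cost(p).
def two_means_optimal_threshold(X):
    n, d = len(X), len(X[0])
    u = sum(v * v for x in X for v in x)          # sum of squared norms
    best_cost, best_i, best_theta = float("inf"), None, None
    for i in range(d):                            # candidate cut coordinate
        order = sorted(range(n), key=lambda j: X[j][i])
        s = [0.0] * d                             # prefix sum  (cluster 1)
        r = [sum(X[j][c] for j in range(n)) for c in range(d)]  # suffix sum
        for p in range(n - 1):                    # cluster 1 = first p+1 points
            xp = X[order[p]]
            for c in range(d):
                s[c] += xp[c]
                r[c] -= xp[c]
            cost = (u - sum(v * v for v in s) / (p + 1)
                      - sum(v * v for v in r) / (n - p - 1))
            # only cuts between two distinct values of coordinate i are valid
            if cost < best_cost and X[order[p]][i] != X[order[p + 1]][i]:
                best_cost, best_i, best_theta = cost, i, X[order[p]][i]
    return best_i, best_theta
\end{verbatim}

As in the discussion above, after the per-feature sort each candidate threshold is evaluated in $O(d)$ time via the maintained prefix and suffix sums.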

\subsection{The 2-medians case}
The high-level idea of finding an optimal $2$-medians cut is similar to the $2$-means algorithm. The algorithm goes over all possible thresholds. For each threshold, it finds the optimal centers and calculates the cost accordingly. Then, it outputs the threshold cut that minimizes the $2$-medians cost.

 \begin{figure}[!htb]
 \centering
 \begin{minipage}{.6\linewidth}
 \begin{algorithm}[H]
 	\SetKwFunction{TwoMediansOptimalThreshold}{$2$-medians Optimal Threshold}
 	\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
 	\LinesNumbered
 	\Input{%
 		{$\mathbf{x}^1, \ldots, \mathbf{x}^n$} -- vectors in $\mathbb{R}^d$
 	}
 	\Output{%
 		\xvbox{2mm}{$i$} -- Coordinate\\
 		\xvbox{2mm}{$\theta$} -- Threshold
 	}
 	\BlankLine
 	\setcounter{AlgoLine}{0}
 	$\xvar{best\_cost} \leftarrow \infty$\;
 	$\xvar{best\_coordinate} \leftarrow \textsc{null}$\;
 	$\xvar{best\_threshold} \leftarrow \textsc{null}$\;
 	\ForEach {$i \in [1, \ldots, d]$}
 	{
 	 $\boldsymbol{\mu}^2(0) \leftarrow \xfunc{median}(\mathbf{x}^1, \ldots, \mathbf{x}^n)$\;
 	 $\xvar{cost} \leftarrow \sum_{j=1}^n \norm{\mathbf{x}^j - \boldsymbol{\mu}^2(0)}_1$\;
 	 $\mathcal{X} \leftarrow \xfunc{sorted}(\mathbf{x}^1, \ldots, \mathbf{x}^n \text{ by coordinate }i)$\;
 		\ForEach {$j \in [1, \ldots, n - 1]$}
 		{
	 		$\boldsymbol{\mu}^1(j) \leftarrow \xfunc{median}(\mathbf{x}^1, \ldots, \mathbf{x}^j)$\;
	 		$\boldsymbol{\mu}^2(j) \leftarrow \xfunc{median}(\mathbf{x}^{j+1}, \ldots, \mathbf{x}^n)$\;
 			$\xvar{cost} \leftarrow \xvar{cost} + \norm{\mathbf{x}^j - \boldsymbol{\mu}^1(j)}_1 - \norm{\mathbf{x}^j - \boldsymbol{\mu}^2(j - 1)}_1$\;
 			\If {$\xvar{cost} < \xvar{best\_cost}$ and $x^j_i \neq x^{j+1}_i$}
 			{
 				$\xvar{best\_cost} \leftarrow \xvar{cost}$\;
 				$\xvar{best\_coordinate} \leftarrow i$\;
 				$\xvar{best\_threshold} \leftarrow x^j_i$\;
 			}
 		}
 	}
 	\Return $\xvar{best\_coordinate}, \xvar{best\_threshold}$\;
 	\caption{\textsc{Optimal Threshold for $2$-medians}}
 	\label{algo:2-medians-cut}
 \end{algorithm}
 \end{minipage}
\end{figure}

 \paragraph{Updating cost.}
 To update the cost, we need to express $\mathrm{cost}(p+1)$ in terms of $\mathrm{cost}(p)$. By definition,
 $$\mathrm{cost}(p+1)=\sum_{\mathbf{x}\in C_1}\norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1+\sum_{\mathbf{x}\in C_2}\norm{\mathbf{x}-\boldsymbol{\mu}^2(p+1)}_1.$$
 For every feature $i \in[d]$, there are $n-1$ thresholds to consider. After sorting by this feature, we can consider all splits into $C_1$ and $C_2$, where $C_1$ contains the $p$ smallest points and $C_2$ contains the $n-p$ largest points. We increase $p$ from $p=1$ to $p=n-1$, computing the clusters and the cost at each step. If $p$ is odd, then the median of $C_1$ (i.e., the optimal center of $C_1$) does not change compared to $p-1$. The only contribution to the cost is the point~$\mathbf{x}$ that moved from $C_2$ to $C_1$.
If $p$ is even, then at each coordinate there are two cases, depending on whether the median changes or not. If it changes, then let $\Delta$ denote the change in the cost of the points in $C_1$ that are smaller than the median. By symmetry, the change in the cost of the points that are larger is $-\Delta$. Thus, the changes contributed by the points below and above the median cancel out. Similar reasoning holds for the other cluster $C_2.$ Therefore, we conclude that moving $\mathbf{x}$ from $C_2$ to $C_1$ changes the cost by exactly $\norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1 - \norm{\mathbf{x}-\boldsymbol{\mu}^2(p)}_1$. Thus, we have the following connection between $\mathrm{cost}(p+1)$ and $\mathrm{cost}(p)$:
 $$\mathrm{cost}(p+1)=\mathrm{cost}(p) + \norm{\mathbf{x}-\boldsymbol{\mu}^1(p+1)}_1 - \norm{\mathbf{x}-\boldsymbol{\mu}^2(p)}_1.$$

 \paragraph{Updating centers.} For each $p$, the cost update relies on efficient calculations of the centers $\boldsymbol{\mu}^1(p+1)$ and $\boldsymbol{\mu}^2(p)$. The centers $\boldsymbol{\mu}^1(p), \boldsymbol{\mu}^2(p)$ are the medians of the clusters at the $p$th threshold. Note that moving from the $p$th threshold to the $(p+1)$th only changes the clusters by moving one vector from one cluster to the other.
 We can determine the changes efficiently by using $d$ arrays, one for each coordinate. Each array contains (pointers to) the input vectors $\mathcal{X}$ sorted by the corresponding feature value. As we move the threshold along a single coordinate, we can read off the partition into two clusters, and we can compute the median of each cluster by considering the midpoint of the sorted list.

 Overall, this procedure computes the cost of each threshold, while also determining the partition into two clusters and their centers (medians). The time is $O(nd \log n)$ to sort by all $d$ features, and $O(nd^2)$ to compute $\mathrm{cost}(p)$ for each $p \in [n]$ and each feature. Therefore, the total time for the $2$-medians algorithm is
 $O(nd^2 + nd \log n).$
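
To make the search concrete, here is a small, unoptimized Python reference for the same $2$-medians threshold search; it is only an illustration (the function name \texttt{two\_medians\_optimal\_threshold} is ours). Unlike the incremental scheme above, it recomputes the medians and the cost from scratch for every split, trading the $O(nd^2 + nd \log n)$ running time for transparency.

\begin{verbatim}
# Illustrative, unoptimized reference (not from the paper): it recomputes
# medians and costs from scratch for every candidate split.
from statistics import median

def two_medians_optimal_threshold(X):
    n, d = len(X), len(X[0])
    best_cost, best_i, best_theta = float("inf"), None, None
    for i in range(d):                            # candidate cut coordinate
        order = sorted(range(n), key=lambda j: X[j][i])
        for p in range(1, n):                     # cluster 1 = first p points
            if X[order[p - 1]][i] == X[order[p]][i]:
                continue                          # not a realizable threshold
            left = [X[j] for j in order[:p]]
            right = [X[j] for j in order[p:]]
            mu1 = [median(x[c] for x in left) for c in range(d)]
            mu2 = [median(x[c] for x in right) for c in range(d)]
            cost = (sum(abs(x[c] - mu1[c]) for x in left for c in range(d))
                    + sum(abs(x[c] - mu2[c]) for x in right for c in range(d)))
            if cost < best_cost:
                best_cost, best_i, best_theta = cost, i, X[order[p - 1]][i]
    return best_i, best_theta
\end{verbatim}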

\subsection{Related work}
\label{sec:related}

The majority of work on explainable methods considers supervised learning, and in particular, explaining predictions of neural networks and other trained models~\cite{alvarez2019weight, deutch2019constraints, garreau2020explaining, kauffmann2019clustering, lipton2018mythos, lundberg2017unified, molnar2019, murdoch2019interpretable, ribeiro2016should, ribeiro2018anchors, rudin2019stop, sokol2020limetree}. In contrast, there is much less work on explainable unsupervised learning. Standard algorithms for $k$-medians/means use iterative methods to produce a good approximate clustering, but this leads to complicated clusters that depend on subtle properties of the data set~\cite{aggarwal09, arthur2007k, kanungo02, ostrovsky2013effectiveness}. Several papers consider the use of decision trees for explainable clustering~\cite{bertsimas2018interpretable, fraiman2013interpretable, geurts2007inferring, ghattas2017clustering, liu2005clustering}. However, all prior work on this topic is empirical, without any theoretical analysis of quality compared to the optimal clustering. We also remark that the previous results on tree-based clustering have not considered the $k$-medians/means objectives for evaluating the quality of the clustering, which is the focus of our work. It is NP-hard to find the optimal $k$-means clustering~\cite{aloise2009np, dasgupta2008hardness} or even a very close approximation~\cite{awasthi2015hardness}. Consequently, we expect tree-based clustering algorithms to incur an approximation factor bounded away from one compared to the optimal clustering.

One way to cluster based on a few features is to use dimensionality reduction.
Two main types of dimensionality reduction methods have been investigated for $k$-medians/means. Work on {\em feature selection} shows that it is possible to cluster based on $\Theta(k)$ features and obtain a constant-factor approximation for $k$-means/medians~\cite{boutsidis2009unsupervised, cohen2015dimensionality}. However, after selecting the features, these methods employ existing approximation algorithms to find a good clustering, and hence the cluster assignments are not explainable. Work on {\em feature extraction} shows that it is possible to project to $\Theta(\log k)$ dimensions using the Johnson-Lindenstrauss transform while preserving the clustering cost~\cite{becchetti2019oblivious, makarychev2019performance}. Again, this relies on running a $k$-means/medians algorithm after projecting to the low-dimensional subspace. The resulting clusters are not explainable, and moreover, the features are arbitrary linear combinations of the original features.

Besides explainability, many other clustering variants have received recent attention, such as fair clustering~\cite{ahmadian2020fair, backurs2019scalable, bera2019fair, chiplunkar2020solve, huang2019coresets, kleindessner2019fair, mahabadi2020individual, schmidt2019fair}, online clustering~\cite{bhaskara20a, cohen2019online, hess2019sequential, liberty2016algorithm, moshkovitz2019unexpected}, and the use of same-cluster queries~\cite{ailon2018approximate, ashtiani2016clustering, huleihel2019same, mazumdar2017clustering}. An interesting avenue for future work would be to further develop tree-based clustering methods by additionally incorporating some of these other constraints or objectives.