\section*{Abstract}
In the current work, a problem-splitting approach and a scheme motivated by transfer learning are applied to a structural health monitoring problem. The specific problem in this case is that of localising damage on an aircraft wing. The original experiment is described, together with the initial approach, in which a neural network was trained to localise damage. The results were not ideal, partly because of a scarcity of training data, and partly because of the difficulty in resolving two of the damage cases. In the current paper, the problem is split into two sub-problems and an increase in classification accuracy is obtained. The sub-problems are obtained by separating out the most difficult-to-classify damage cases. A second approach to the problem is considered by adopting ideas from transfer learning (usually applied in much deeper networks) to see if a network trained on the simpler damage cases can help with feature extraction in the more difficult cases. The transfer of a fixed trained batch of layers between the networks is found to improve classification, by making the classes more separable in the feature space, and to speed up convergence.

\textbf{Key words: Structural health monitoring (SHM), machine learning, classification, problem splitting, transfer learning.}


\section{Introduction}
\label{sec:intro}

Structural health monitoring (SHM) refers to the process of implementing a damage-detection strategy for aerospace, civil or mechanical engineering infrastructure \cite{farrar2012structural}. Here, damage is defined as changes introduced into a system/structure, either intentionally or unintentionally, that affect its current or future performance.
Detecting damage is becoming more and more important in modern societies, where everyday activities depend increasingly on engineering systems and structures. On the one hand, safety has to be assured, both for users and for equipment or machinery existing within these structures. On the other hand, infrastructure is often designed for a predefined lifetime, and damage occurrence may reduce the expected lifetime and have a huge economic impact as a result of necessary repairs, or even rebuilding or decommissioning. Damage can be visible on or in structures, but more often it is not, and has to be inferred from signals measured by sensors placed on them.

An increasingly useful tool in SHM is {\em machine learning} (ML) \cite{farrar2012structural}. In many current applications, large sets of data are gathered by sensors or generated by models, and these can be exploited to gain insight into structural dynamics and materials engineering. Machine learning is employed because of its efficiency in classification, function interpolation and prediction using data. Data-driven models are built and used to serve SHM purposes. These models can also be used to further understand how structures react to different conditions and to explain their physics. However, one of the main drawbacks of such methods is the need for large datasets. ML models may have many parameters, which are established during {\em training} on data which may need to span all the health conditions of interest for the given structure or system. Larger datasets assist in better tuning of the models as far as accuracy and generalisation are concerned. However, even if large datasets are available, they sometimes contain very few observations of damaged states, which are important in SHM.
In the current paper, increased accuracy of a data-driven SHM classifier will be discussed in terms of two strategies: splitting the problem into two sub-problems, and attempting transfer of information between the two sub-problems in a manner motivated by transfer learning \cite{Pan2010}.

Transfer learning is the procedure of taking knowledge from a source domain and task and applying it to a different domain and task, to help improve performance on the second task \cite{Pan2010}. Transfer learning is useful because a model trained on one dataset cannot, in general, be applied directly to another, owing to differences in the data distributions; however, it can be further tuned so that it also applies to the second dataset. A representation of the difference between traditional and transfer learning schemes can be seen in Figure \ref{fig:ml_schemes}. The SHM problem herein will be addressed using neural networks \cite{Bishop:1995:NNP:525960}, for which transfer learning has proved quite efficient (although usually in deeper learning architectures \cite{GabrielPuiCheongFung2006,AlMubaid2006}). Because of the layered structure of the networks, after a model has been created for a task, transferring a part of it (e.g.\ some subset of the layers) is easy. The method is used in many disciplines, such as computer vision \cite{oquab2014learning,shin2016deep}. The most commonly-used learners are Convolutional Neural Networks (CNNs), which can be very slow to train and may need a lot of data, which in many cases can be hard to obtain (e.g.\ labelled images). These problems can be dealt with by using the fixed initial layers of pre-trained models to extract features of images, and then training only the last layers to classify in the new context. In this way, the number of trainable parameters, the need for huge datasets and the computation time are all reduced.
Another area in which transfer learning has been used is natural language processing (NLP) \cite{DBLP:journals/corr/BingelS17}, where the same issues of lack of labelled data and large amounts of training time are dealt with by transferring pre-trained models into new tasks. Further examples of the benefits of transfer learning can be found in web document classification \cite{GabrielPuiCheongFung2006,AlMubaid2006}; in these cases, newly-created web sites lack labelled data. To address this problem, even though the new web sites belong to a different domain than the training domain of the existing sites, the same models can be used to help classify documents in the new websites.

In the context of the current work, transfer learning is considered in transferring knowledge from one sub-problem to the other by introducing pre-trained layers into new classifiers. The classification problem that will be presented is related to damage class/location. A model trained to predict a subset of the damage classes (source task) with data corresponding to that subset (source domain) will be used to boost the performance of a second classifier, trained to identify a different subset of damage states.

\begin{figure}[ht!]
	\centering
	\begin{subfigure}{.5\textwidth}
		\centering
		\includegraphics[width=.95\linewidth]{Figure_1a}
		\caption{}
		\label{fig:trad_ml}
	\end{subfigure}%
	\begin{subfigure}{.5\textwidth}
		\centering
		\includegraphics[width=.95\linewidth]{Figure_1b}
		\caption{}
		\label{fig:transfer_ml}
	\end{subfigure}\\
	\caption{Traditional (a) and transfer (b) learning schemes (following \cite{Pan2010}).}
	\label{fig:ml_schemes}
\end{figure}

\section{Problem description}
\label{sec:problem_desc}

As in the aforementioned applications, machine learning is used in SHM for classification and regression.
In data-driven SHM, one tries to identify features that will reveal whether a structure is damaged, or what type of damage is present, and so labelled data are a necessity; the lack of labelled data about damage location or severity is therefore a drawback in SHM applications. SHM problems can be categorised in many ways, but are often broken down according to the hierarchical structure proposed by Rytter \cite{rytter1993vibrational}:

\begin{enumerate}
	\item Is there damage in the system ({\em existence})?
	\item Where is the damage in the system ({\em location})?
	\item What kind of damage is present ({\em type/classification})?
	\item How severe is the damage ({\em extent/severity})?
	\item How much useful (safe) life remains ({\em prognosis})?
\end{enumerate}

A common approach to the first level is to observe the structure in its normal condition and try to find changes in features extracted from measured signals that are sensitive to damage. This approach is called {\em novelty detection} \cite{Worden1997,WORDEN2000}, and it has some advantages and disadvantages. The main advantage is that it is usually an {\em unsupervised} method; that is, it is trained only on data that are considered to come from the undamaged condition of the structure, without a specific target class label. Such methods are thus trained to detect any {\em changes} in the behaviour of the elements under consideration, which can be a disadvantage, since structures can change their behaviour for benign reasons, such as changes in their environmental or operational conditions; such benign changes or {\em confounding influences} can raise false alarms.

In this work, a problem of damage localisation is considered (at Level 2 in Rytter's hierarchy \cite{rytter1993vibrational}); the structure of interest is the wing of a Gnat trainer aircraft.
The problem is one of supervised learning: data for all the damage cases were collected, a classification model was trained accordingly, and the classifier was subsequently used to predict the damage class of newly-presented data. The features used as inputs to the classifier were novelty indices, calculated between frequency intervals of the transmissibilities of the normal condition of the structure (undamaged state) and the testing states. The transmissibility between two points of a structure is given by equation (\ref{eq:transmissibility}), and represents the ratio of two response spectra. This feature is useful because it describes the response of the structure in the frequency domain, without requiring any knowledge of the frequency content of the excitation. The transmissibility is defined as,

\begin{equation} 
\label{eq:transmissibility}
 T_{ij} = \frac{FRF_i}{FRF_j} = 
 \frac{\frac{\mathcal{F}_{i}}{\mathcal{F}_{excitation}}}{\frac{\mathcal{F}_{j}}{\mathcal{F}_{excitation}}} = \frac{\mathcal{F}_{i}}{\mathcal{F}_{j}}
\end{equation}
where $\mathcal{F}_{i}$ is the Fourier transform of the signal given by the $i$th sensor and $FRF_i$ is the {\em Frequency Response Function} (FRF) at the $i$th point.

The experiment was set up as described in \cite{Worden2007}. The wing of the aircraft was excited with Gaussian white noise, using an electrodynamic shaker attached to the bottom surface of the wing. The configuration of the sensors placed on the wing can be seen in Figure \ref{fig:sensors}. Responses were measured with accelerometers on the upper surface of the wing, and the transmissibilities between each sensor and the corresponding reference sensor were calculated. The transmissibilities were recorded in the 1--2 kHz range, as this interval was found to be sensitive to the damage that was going to be introduced to the structure.
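To make the feature concrete, the spectral ratio of equation (\ref{eq:transmissibility}) can be sketched numerically. This is an illustrative fragment only; the signals, sampling and the absence of windowing or averaging are assumptions for the example, not details of the experiment:

```python
import numpy as np

def transmissibility(response_i, response_j, n_lines=2048):
    """Ratio of the response spectra at two measurement points.

    Because the unknown excitation spectrum cancels in the ratio, only
    the two measured responses are needed; windowing and spectral
    averaging, which a practical implementation would add, are omitted.
    """
    F_i = np.fft.rfft(response_i)[:n_lines]
    F_j = np.fft.rfft(response_j)[:n_lines]
    return F_i / F_j

# Pure-tone check: if point i simply responds twice as strongly as
# point j, the transmissibility magnitude at the tone's line is 2.
t = np.arange(4096) / 4096.0
tone = np.sin(2.0 * np.pi * 200.0 * t)   # exactly 200 cycles per record
T = transmissibility(2.0 * tone, tone)
```

Here the 2048 retained values play the role of the spectral lines mentioned below.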
Each transmissibility contained 2048 spectral lines.

\begin{figure}[ht!]
	\centering
	\includegraphics[scale=0.60]{Figure_2.png}
	\caption{Configuration of sensors on the Gnat aircraft wing \cite{manson2003experimental}.}
	\label{fig:sensors}
\end{figure}

Initially, the structure was excited in its normal condition, i.e.\ with no introduced damage. The transmissibilities of this state were recorded and subsequently, to simulate damage, several panels were removed from the wing, one at a time. After each panel removal, the wing was excited again with white Gaussian noise and the transmissibilities were recorded. The panels that were removed are shown in Figure \ref{fig:panels}. The panels differ in size, varying from 0.008 to 0.08 m$^{2}$, and so the localisation of the smaller panels is more difficult, since their removal affects the transmissibilities less than that of the bigger panels. The measurements were repeated 200 times for each damage case, ultimately leading to 1800 data points belonging to nine different damage cases/classes. The data were separated into training, validation and testing subsets, each having 66 points per damage case.

\begin{figure}[h!]
	\centering
	\includegraphics[scale=0.70]{Figure_3.png}
	\caption{Schematic showing wing panels removed to simulate the nine damage cases \cite{manson2003experimental}.}
	\label{fig:panels}
\end{figure}

For the purposes of damage localisation, features had to be selected which would be sensitive to the panel removals; this was initially done manually \cite{manson2003experimental}, selecting by visual `engineering judgement' the intervals of the transmissibilities that appeared to be most sensitive to damage, and calculating the novelty indices of each state by comparison with the transmissibilities of the undamaged state.
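These novelty indices are Mahalanobis squared-distances, defined formally in Equation (\ref{eq:mahal_dist}); a minimal numerical sketch, using randomly generated stand-in features rather than the experimental data, is:

```python
import numpy as np

def mahalanobis_sq(x, X_normal):
    """Squared Mahalanobis distance of a feature vector x from the
    cloud of normal-condition feature data X_normal (one observation
    per row), as used for the novelty indices."""
    mean = X_normal.mean(axis=0)
    S = np.cov(X_normal, rowvar=False)       # sample covariance matrix
    diff = x - mean
    return float(diff @ np.linalg.inv(S) @ diff)

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 3))         # stand-in undamaged features
d_typical = mahalanobis_sq(X_normal[0], X_normal)   # point from the cloud
d_outlier = mahalanobis_sq(np.full(3, 10.0), X_normal)  # clear outlier
```

A point drawn from the normal condition yields a small distance, while a departure from that condition yields a large one; thresholding such distances is the basis of novelty detection.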
The novelty indices were computed using the Mahalanobis squared-distance (MSD) $D^2_{\zeta}$ of the feature vectors $\mathbf{x_{\zeta}}$, which in this case contained the magnitudes of transmissibility spectral lines. The MSD is defined by,

\begin{equation} 
\label{eq:mahal_dist}
 D_{\zeta}^{2} = (\mathbf{x_{\zeta}} - \mathbf{\overline{x}})^{T} S^{-1}(\mathbf{x_{\zeta}} - \mathbf{\overline{x}})
\end{equation}
where $\mathbf{\overline{x}}$ is the sample mean of the normal-condition feature data, and $S$ is the sample covariance matrix.

After selecting `by eye' the most important features for damage detection \cite{manson2003experimental}, a genetic algorithm was used \cite{Worden2008} to choose the most sensitive features, in order to localise/classify the damage. Finally, nine features were chosen as the most sensitive, and an MLP neural network \cite{Bishop:1995:NNP:525960} with nine nodes in the input layer, ten nodes in the hidden layer and nine nodes in the decision layer was trained. The confusion matrix of the resulting classifier is shown in Table \ref{Tab:Initial_network_conf_mat}. It can be seen that the misclassification rate is very low and that the damage cases that are most confused are the ones where the missing panel is Panel 3 or Panel 6, which were the smallest ones.


\section{Problem splitting}
\label{sec:splitting}

As mentioned in \cite{tarassenko1998guide}, the rule-of-thumb for a network that generalises well is that it should be trained with at least ten samples per weight of the network. The aforementioned network had 180 trainable weights (and another 19 bias terms), so the 596 training samples are not ideal for the neural network. As a solution, a splitting of the original problem into two sub-problems is considered here, to try and reduce the misclassification rate on the testing data even further.
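The parameter count behind this rule-of-thumb can be checked directly; a quick sketch for fully-connected architectures like those discussed here:

```python
def mlp_parameter_count(layer_sizes):
    """Number of weights and biases in a fully-connected network with
    the given layer widths (input, hidden..., output)."""
    weights = sum(n_in * n_out
                  for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return weights, biases

# The original nine-class network: 9 inputs, 10 hidden nodes, 9 outputs.
weights, biases = mlp_parameter_count([9, 10, 9])   # -> (180, 19)
```

This reproduces the 180 weights and 19 bias terms quoted above; by the ten-samples-per-weight rule, such a network would ideally see around 1800 training samples.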
The dataset is split into two parts, the first containing all the damage cases except Panels 3 \& 6, and the second containing the data for those two panels. Subsequently, two neural network classifiers were trained separately on the new datasets. This was thought to be good practice, since these two panels are the smallest, and their removal affects the novelty indices less than the other panel removals. The impact is that the corresponding points appear closer to each other in the feature space, and are swamped by points belonging to other classes, so the initial classifier cannot separate them efficiently. By assigning the tasks to different classifiers, an increase in performance is expected, especially in the case of separating the two smallest panel classes.

\begin{table}[h!]
	\centering
	\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| }
		\hline
		Predicted panel & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\
		\hline
		Missing panel 1& 65 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
		Missing panel 2& 0 & 65 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
		Missing panel 3& 1 & 0 & 62 & 0 & 0 & 1 & 0 & 1 & 1 \\
		Missing panel 4& 0 & 0 & 0 & 66 & 0 & 0 & 0 & 0 & 0 \\
		Missing panel 5& 0 & 0 & 0 & 0 & 66 & 0 & 0 & 0 & 0 \\
		Missing panel 6& 0 & 3 & 0 & 0 & 0 & 62 & 0 & 1 & 0 \\
		Missing panel 7& 0 & 0 & 0 & 0 & 0 & 0 & 66 & 0 & 0 \\
		Missing panel 8& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 65 & 0 \\
		Missing panel 9& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 66 \\
		\hline
	\end{tabular}
	\caption{Confusion matrix of the neural network classifier, test set, total accuracy: 98.14\% \cite{Worden2008}.}
	\label{Tab:Initial_network_conf_mat}
\end{table}

To illustrate the data feature space, a visualisation is attempted here.
Since the data belong to a nine-dimensional feature space, principal component analysis (PCA) was performed on the data, and three of the principal components, explaining 71\% of the total variance, are plotted in the scatter plots shown in Figure \ref{fig:all_data_pcs}. Points corresponding to missing Panels 3 and 6 (grey and magenta points respectively) are entangled with points of other classes, causing most of the misclassification shown above.

\begin{figure}[ht!]
	\centering
	\begin{subfigure}{.5\textwidth}
		\centering
		\includegraphics[width=.98\linewidth]{Figure_4a}
		\caption{}
		\label{fig:all_data_pcs}
	\end{subfigure}%
	\begin{subfigure}{.5\textwidth}
		\centering
		\includegraphics[width=.98\linewidth]{Figure_4b}
		\caption{}
		\label{fig:7_classes_pcs}
	\end{subfigure}\\
	\begin{subfigure}{.5\textwidth}
		\centering
		\includegraphics[width=.98\linewidth]{Figure_4c}
		\caption{}
		\label{fig:2_classes_pcs}
	\end{subfigure}
	\caption{Principal components of all samples (a), samples excepting Panels 3 and 6 (b), and samples of Panels 3 and 6 (c).}
	\label{fig:first_pcs}	
\end{figure}

Random initialisation was used for the neural networks: initial values of the weights and biases were sampled from a zero-mean normal distribution. The two networks were initialised several times and trained for different sizes of the hidden layer, to find the optimal structure for the newly-defined problems. After randomly initialising and training multiple neural networks for both cases, and keeping the ones with the minimum loss function value, the best architectures were found to have nine nodes in the hidden layer for both cases, with seven output nodes for the first dataset and two for the second.
The loss function used in training was the categorical cross-entropy function given by,

\begin{equation} 
\label{eq:categorical_crossentropy}
 L(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{n_{cl}} [y_{i, j}\log \hat{y}_{i, j} + (1 - y_{i, j}) \log (1 - \hat{y}_{i, j})]
\end{equation}

In Equation (\ref{eq:categorical_crossentropy}), $N$ is the number of samples during training, $n_{cl}$ is the number of possible classes, $\hat{y}_{i,j}$ is the estimated probability that the $i$th point belongs to the $j$th class, and $y_{i, j}$ is 1 if the $i$th sample belongs to the $j$th class and 0 otherwise.

Confusion matrices on the test sets for the two classifiers are shown in Tables \ref{Tab:First_set_conf_mat} and \ref{Tab:Second_set_conf_mat}. By splitting the dataset into two subsets, the total accuracy is slightly increased, from 98.14\% to 98.82\%. This is best considered in terms of the classification error, which has been reduced from 1.86\% to 1.18\%, an important reduction in SHM terms. The reduction in the number of trainable parameters has certainly contributed to this improvement, since the amount of training data is small. Performance on the task of separating only the two smallest panel classes was also increased, because this is an easier task for the classifier than trying to discriminate these classes among the panel removals with greater impact on the novelty indices.
This fact is also clear in Figure \ref{fig:2_classes_pcs}, where the principal components of samples belonging to the classes of missing Panels 3 and 6 are clearly separable.

\begin{table}[ht!]
	\centering
	\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| }
		\hline
		Predicted panel & 1 & 2 & 4 & 5 & 7 & 8 & 9\\
		\hline
		Missing panel 1& 65 & 1 & 0 & 0 & 0 & 0 & 0 \\
		Missing panel 2& 0 & 63 & 1 & 0 & 0 & 0 & 2 \\
		Missing panel 4& 1 & 0 & 65 & 0 & 0 & 0 & 0 \\
		Missing panel 5& 0 & 0 & 0 & 66 & 0 & 0 & 0 \\
		Missing panel 7& 0 & 0 & 0 & 0 & 66 & 0 & 0 \\
		Missing panel 8& 1 & 0 & 0 & 0 & 0 & 65 & 0 \\
		Missing panel 9& 1 & 0 & 0 & 0 & 0 & 0 & 65 \\
		\hline
	\end{tabular}
	\caption{Confusion matrix of the neural network classifier trained on the first dataset, test set, total accuracy: 98.48\%.}
	\label{Tab:First_set_conf_mat}
\end{table}

\begin{table}[ht!]
	\centering
	\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}
		\hline
		Predicted panel & 3 & 6 \\
		\hline
		Missing panel 3& 66 & 0 \\
		Missing panel 6& 0 & 66 \\
		\hline
	\end{tabular}
	\caption{Confusion matrix of the neural network classifier trained on the second dataset, test set, total accuracy: 100\%.}
	\label{Tab:Second_set_conf_mat}
\end{table}

\section{Knowledge transfer between the two problems}
\label{sec:transfer}

Having split the problem into two sub-problems, a scheme motivated by transfer learning in deeper learners was examined, the idea being to establish whether the features extracted at the hidden layer in one problem could be used for the other. In transfer learning terminology, the seven-class problem specifies the source domain and task, while the two-class problem gives the target domain and task.
The transfer is carried out by using the fixed input and hidden layers from the classifier in the source task as the input and hidden layers for the target task; this means that only the weights between the hidden and output layers remain to be trained for the target task. This strategy reduces the number of parameters considerably. The functional form of the network for the source task is given by,

\begin{equation} 
\label{eq:network_output}
 \mathbf{y} = f_{0}(W_{2}f_{1}(W_{1}\mathbf{x} + b_{1}) + b_{2})
\end{equation}
where $f_0$ and $f_1$ are the nonlinear activation functions of the output layer and the hidden layer respectively, $W_{1,2}$ are the weight matrices of the transformations between the layers, $b_{1,2}$ are the bias vectors of the layers, $\mathbf{x}$ is the input vector and $\mathbf{y}$ the output vector. The {\em softmax} function is chosen as the activation function of the decision layer, as is appropriate for a classification problem. The prediction of the network, concerning which damage class the sample belongs to, is the index that maximises the output vector $\mathbf{y}$; the outputs are interpreted as the {\em a posteriori} probabilities of class membership, so this leads to a Bayesian decision rule. Loosely speaking, one can think of the transformation between the hidden and output layers as the actual classifier, and the transformation from the input layer into the hidden layer as a map to latent states in which the classes are more easily separable. In the context of deep networks, the hope is that the earlier layers carry out an automated feature extraction which facilitates the eventual classifier. In the deep context, transfer between problems is carried out by simply copying the `feature extraction' layers directly into the new network, and only training the later classification layers.
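In code terms, this recipe amounts to freezing the early layers and retraining only the decision layer. The following is a hypothetical NumPy sketch: the layer widths match the networks above, but the frozen weights and the two-class data are randomly generated stand-ins, not quantities from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stand-in for the trained source-task feature extractor; in the real
# scheme W1 and b1 would be copied from the seven-class network and
# then held fixed.
W1, b1 = rng.normal(size=(9, 9)), rng.normal(size=9)

# Toy two-class target data standing in for the Panel 3 / Panel 6 sets.
X = rng.normal(size=(200, 9)) + np.repeat([[-1.0], [1.0]], 100, axis=0)
y = np.repeat([[1.0, 0.0], [0.0, 1.0]], 100, axis=0)

# Frozen feature map, then train only the decision layer (W2, b2) by
# gradient descent on the cross-entropy.
H = np.tanh(X @ W1 + b1)
W2, b2 = np.zeros((9, 2)), np.zeros(2)
for _ in range(500):
    grad = (softmax(H @ W2 + b2) - y) / len(X)   # d(loss)/d(logits)
    W2 -= 0.5 * H.T @ grad
    b2 -= 0.5 * grad.sum(axis=0)

accuracy = (softmax(H @ W2 + b2).argmax(axis=1) == y.argmax(axis=1)).mean()
```

Even though the feature map was never trained on the target classes, a separable layout in the latent space is enough for the single trained layer to classify well, which is the effect examined below.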
The simple idea explored here is whether that strategy helps in the much shallower learner considered in this study. The transfer is accomplished by copying the weights $W_1$ and biases $b_1$ from sub-problem one directly into the network for sub-problem two, and only training the weights $W_2$ and biases $b_2$.

As before, multiple neural networks were trained on the first dataset. In a transfer learning scheme, it is even more important that models should not be overtrained, since that would make the model too case-specific, and it would then be unlikely to carry knowledge over to other problems. To achieve this for the current problem, an early-stopping strategy was followed: models were trained until the point where the decrease in the value of the loss function fell below a percentage of the current value. An example of this can be seen in Figure \ref{fig:early_stoping}, where, instead of training the neural network for 1000 epochs, training stops at the point indicated by the red arrow.

\begin{figure}[ht!]
 	\centering
 	\includegraphics[scale=0.70]{Figure_5}
 	\caption{Training and validation loss histories and the point of early stopping (red arrow).}
 	\label{fig:early_stoping}
\end{figure}

After multiple networks had been trained following the early-stopping scheme above, the network with the lowest validation loss was selected and the transfer learning scheme was applied to the second problem. The nonlinear transformation given by the transition from the input layer to the hidden layer was applied to the data of the second dataset. Consequently, another neural network was trained on the transformed data, having only one input layer and one output/decision layer. To assess the effect of the transformation, another two-layer network was trained on the original second dataset and the results were compared.
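The stopping criterion just described can be sketched as follows; the tolerance value and the synthetic loss curve are illustrative assumptions, not values from the study:

```python
def train_with_early_stopping(run_epoch, max_epochs=1000, tol=0.01):
    """Run training epochs until the improvement in the loss falls
    below the fraction `tol` of the current loss value.

    `run_epoch()` performs one training epoch and returns the loss
    after it.
    """
    previous = run_epoch()
    for epoch in range(1, max_epochs):
        current = run_epoch()
        if previous - current < tol * current:
            return epoch, current              # early stop
        previous = current
    return max_epochs, current

# A synthetic loss curve decaying towards a plateau at 1.0: training
# halts soon after the curve flattens, well before max_epochs.
losses = (1.0 + 100.0 * 0.9 ** k for k in range(1000))
epoch, final_loss = train_with_early_stopping(lambda: next(losses))
```

In practice the criterion would be monitored on the validation loss, consistent with selecting the network by its validation performance.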
\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}\n\t\t\\hline\n\t\tPredicted panel & 3 & 6 \\\\\n\t\t\\hline\n\t\tMissing panel 3& 65 & 1 \\\\\n\t\tMissing panel 6& 2 & 64 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Confusion Matrix of neural network classifier trained on the original data of the second dataset, test set, total accuracy: 97.72\\%}\n\t\\label{Tab:conf_original_data_2_layer}\n\\end{table}\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}\n\t\t\\hline\n\t\tPredicted panel & 3 & 6 \\\\\n\t\t\\hline\n\t\tMissing panel 3& 65 & 1 \\\\\n\t\tMissing panel 6& 3 & 63 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Confusion Matrix of neural network classifier trained on the transformed data of the second dataset, test set, total accuracy: 96.96\\%}\n\t\\label{Tab:conf_trans_data_2_layer}\n\\end{table}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_6a}\n\t\t\\caption{}\n\t\t\\label{fig:first_dataset_features_pcs}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_6b}\n\t\t\\caption{}\n\t\t\\label{fig:first_dataset_features_pcs_transed}\n\t\\end{subfigure}\n\t\\caption{Principal components of original features of the first dataset (a) and transformed features (b).}\n\t\\label{fig:first_pcs_1}\t\n\\end{figure}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_7a}\n\t\t\\caption{}\n\t\t\\label{fig:second_dataset_features_pcs}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_7b}\n\t\t\\caption{}\n\t\t\\label{fig:second_dataset_trans_features_1_pcs}\n\t\\end{subfigure}\n\t\\caption{Principal components of original features of the second dataset 
(a) and transformed features (b).}
	\label{fig:second_pcs}	
\end{figure}

The confusion matrices of the two neural networks on the testing data are given in Tables \ref{Tab:conf_original_data_2_layer} and \ref{Tab:conf_trans_data_2_layer}; the misclassification rates are very similar. However, it is interesting also to look at the effect of the transfer on the convergence rate of the network trained on the transferred data, and to illustrate the feature transformation on the first and second datasets.

\begin{figure}[ht!]
	\centering
	\includegraphics[scale=0.5]{Figure_8.png}
	\caption{Loss histories of the transferred model: train (blue), validation (cyan); and of the model trained on the initial data: train (red), validation (magenta).}
	\label{fig:loss_histories}
\end{figure}

The training histories of the two models can be seen in Figure \ref{fig:loss_histories}. It is clear that the loss history of the model with transformed data (blue and cyan lines) converges faster, especially in the initial part of training, and also reaches a lower minimum value of the loss function in the same number of training epochs. This can be explained by looking at the effect of the learnt transformation on the data, illustrated in Figures \ref{fig:second_pcs} and \ref{fig:first_pcs_1}. (Note that the points are different from those in Figure \ref{fig:first_pcs}, because principal component analysis was performed this time on the data normalised to the interval $[-1, 1]$ for neural network training.) The transformation spreads out the points of the original problem (first dataset) in order to make their separation by the decision layer easier; it is clear that it also accomplishes the same result on the second dataset. The points in Figure \ref{fig:second_dataset_trans_features_1_pcs} are spread out compared to the initial points, and thus their separation by the single-layer neural network is easier.
Furthermore, the points lie further away from the required decision boundary, and this explains both the faster training convergence and the lower minimum achieved. In contrast to the transformation of the first dataset, in the second dataset the transformation does not concentrate points of the same class in specific areas of the feature space. In Figure \ref{fig:first_dataset_features_pcs_transed}, the points are both spread out and concentrated more closely according to the class to which they belong. This probably means that only a part of the physics of the problem is transferred to the second problem through this specific transformation.


\section{Discussion and Conclusions}
\label{sec:conclusions}

For the SHM classification (location) problem considered here, splitting the dataset into two subsets contributed to increasing the classification accuracy by a small percentage. This result was explained by the smaller effect that the small panel removals had on the novelty-index features. The issue arose because the points representing these classes were close to each other, and also to points from other classes -- those corresponding to large panel removal/damage. By considering the two small-panel damage cases as a separate problem, perfect accuracy was achieved in the task of classifying damage to the small panels, and there was also a small increase in the performance of the classifier tasked with identifying the more severe damage states.

An attempt at a crude form of transfer learning was also investigated. Having trained the neural network classifier on the first dataset of the seven damage cases, transfer of knowledge to the second sub-problem was considered. This was accomplished by copying the first two layers of the first classifier -- the `feature extraction' layers -- directly into the second classifier, and only training the connections from the hidden layer to the output.
The result is not particularly profound; the transfer does allow a good classifier, even with the smaller set of trainable parameters, but it is not as good as training the network from scratch. The result is interesting, because it is clear that the source network is carrying out a feature clustering and cluster separation on the source data that is still useful when the target data are presented. This suggests that the main issue with the small-panel damage classification is that the data are masked by the close presence of the large-panel data; separating out the small-panel data is the obvious answer. The results are also interesting because they illustrate, in a `toy' example, how the early layers in deeper networks manipulate features automatically in order to improve the ultimate classification step. The other benefit of the separation into sub-problems was the faster convergence of the network training.

\section{Acknowledgement}
\label{sec:ack}

The authors would like to acknowledge Graeme Manson for providing the data used. Moreover, the authors would like to acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC) and the European Union (EU). G.T. is supported by funding from the EU's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement DyVirt (764547). The other authors are supported by EPSRC grants EP/R006768/1, EP/R003645/1, EP/R004900/1 and EP/N010884/1.

\bibliographystyle{unsrt}

\section{Introduction}
\vspace{-0.25cm}
An intelligent system that interacts with its environment faces the problem of \emph{continual learning}, in which new experience is constantly being acquired, while old experience may still be relevant \cite{ring1997child}. An effective continual learning system must optimize for two potentially conflicting goals.
First, when a previously encountered scenario arises again, performance should immediately be good, ideally as good as it was historically. Second, maintenance of old skills should not inhibit rapid acquisition of a new skill. These simultaneous constraints -- maintaining the old while still adapting to the new -- represent the challenge known as the \\emph{stability-plasticity} dilemma \\cite{grossberg1982does}.\n\nThe quintessential failure mode of continual learning is \\emph{catastrophic forgetting}, in which new knowledge supplants old knowledge, resulting in high plasticity but little stability. Within biological systems, hippocampal replay has been proposed as a systems-level mechanism to reduce catastrophic forgetting and improve generalization, as in the theory of complementary learning systems \\cite{mcclelland1998complementary}. This process complements local consolidation of past experience at the level of individual neurons and synapses, which is also believed to be present in biological neural networks \\cite{benna2016computational, leimer2019synaptic}.\n\nWithin artificial continual learning systems, deep neural networks have become ubiquitous tools and are increasingly applied in situations where catastrophic forgetting becomes relevant. In reinforcement learning (RL) settings, this problem is often circumvented by using massive computational resources to ensure that all tasks are learned simultaneously, instead of sequentially. Namely, in simulation or in self-play RL, fresh data can be generated on demand and trained on within the same minibatch, resulting in a stable data distribution.\n\nHowever, as RL is increasingly applied to continual learning problems in industry or robotics, situations arise where gathering new experience is expensive or difficult, and thus simultaneous training is infeasible. Instead, an agent must be able to learn from one task at a time, and the sequence in which tasks occur is not under the agent's control. 
In fact, boundaries between tasks will often be unknown -- or tasks will deform continuously and not have definite boundaries at all \\cite{kaplanis2019policy}. Such a paradigm for training eliminates the possibility of simultaneously acting upon and learning from several tasks, and leads to the possibility of catastrophic forgetting.\n\nThere has recently been a surge of interest in methods for preventing catastrophic forgetting in RL, inspired in part by neuroscience \\cite{kaplanis2018continual,kirkpatrick2017overcoming,rusu2016progressive, schwarz2018progress}. Remarkably, however, such work has focused on synaptic consolidation approaches, while possibilities of experience replay for reducing forgetting have largely been ignored, and it has been unclear how replay could be applied in this context within a deep RL framework.\n\nWe here demonstrate that replay can be a powerful tool in continual learning. We propose a simple technique, Continual Learning with Experience And Replay (CLEAR), mixing on-policy learning from novel experiences (for plasticity) and off-policy learning from replay experiences (for stability). For additional stability, we introduce behavioral cloning between the current policy and its past self. While it can be memory-intensive to store all past experience, we find that CLEAR is just as effective even when memory is severely constrained. Our approach has the following advantages:\n\\begin{itemize}\n \\item \\textbf{Stability and plasticity.} CLEAR performs better than state-of-the-art (Elastic Weight Consolidation, Progress \\& Compress), almost eliminating catastrophic forgetting.\n \\item \\textbf{Simplicity.} CLEAR is much simpler than prior methods for reducing forgetting and can also be combined easily with other approaches.\n \\item \\textbf{No task information.} CLEAR does not rely upon the identity of tasks or the boundaries between them, by contrast with prior art. 
This means that CLEAR can be applied to a much broader range of situations, where tasks are unknown or are not discrete.\n\\end{itemize}\n\n\\vspace{-0.25cm}\n\\section{Related work}\n\\vspace{-0.25cm}\nThe problem of catastrophic forgetting in neural networks has long been recognized \\cite{grossberg1982does}, and it is known that rehearsing past data can be a satisfactory antidote for some purposes \\cite{french1999catastrophic, mcclelland1998complementary}. Consequently, in the supervised setting that is the most common paradigm in machine learning, forgetting has been accorded less attention than in cognitive science or neuroscience, since a fixed dataset can be reordered and replayed as necessary to ensure high performance on all samples.\n\nIn recent years, however, there has been renewed interest in overcoming catastrophic forgetting in RL and in supervised learning from streaming data, where it is not possible simply to reorder the data (see e.g.~\\cite{ hayes2018memory, lopez2017gradient, parisi2018continual, rios2018closed}). Current strategies for mitigating catastrophic forgetting have primarily focused on schemes for protecting the parameters inferred in one task while training on another. For example, in Elastic Weight Consolidation (EWC) \\cite{kirkpatrick2017overcoming}, weights important for past tasks are constrained to change more slowly while learning new tasks. Progressive Networks \\cite{rusu2016progressive} freezes subnetworks trained on individual tasks, and Progress \\& Compress \\cite{schwarz2018progress} uses EWC to consolidate the network after each task has been learned. Kaplanis et al.~\\cite{kaplanis2018continual} treat individual synaptic weights as dynamical systems with latent dimensions \/ states that protect information. 
Outside of RL, Zenke et al.~\\cite{zenke2017continual} develop a method similar to EWC that maintains estimates of the importance of weights for past tasks, Li and Hoiem \\cite{li2017learning} leverage a mixture of task-specific and shared parameters, and Milan et al.~\\cite{milan2016forget} develop a rigorous Bayesian approach for estimating unknown task boundaries. Notably all these methods assume that task identities or boundaries are known, with the exception of \\cite{milan2016forget}, for which the approach is likely not scalable to highly complex tasks.\n\nRehearsing old data via experience replay buffers is a common technique in RL, but such methods have largely been driven by the goal of data-efficient learning on single tasks \\cite{gu2017interpolated,lin1992self, mnih2015human}. Research in this vein has included prioritized replay for maximizing the impact of rare experiences \\cite{schaul2015prioritized}, learning from human demonstration data seeded into a buffer \\cite{hester2017deep}, and methods for approximating replay buffers with generative models \\cite{shin2017continual}. A noteworthy use of experience replay buffers to protect against catastrophic forgetting was demonstrated in Isele and Cosgun \\cite{isele2018selective} on toy tasks, with a focus on how buffers can be made smaller. Previous works \\cite{gu2017interpolated,o2016combining, wang2016sample} have explored mixing on- and off-policy updates in RL, though these were focused on speed and stability in individual tasks and did not examine continual learning.\n\nWhile it may not be surprising that replay can reduce catastrophic forgetting to some extent, it is remarkable that, as we show, it is powerful enough to outperform state-of-the-art methods. There is a marked difference between reshuffling data in supervised learning and replaying past data in RL. 
Notably, in RL, past data are typically leveraged best by \\emph{off-policy} algorithms since historical actions may come from an out-of-date policy distribution. Reducing this deviation is our motivation for the behavioral cloning component of CLEAR, inspired by work showing the power of policy consolidation \\cite{kaplanis2019policy}, self-imitation \\cite{oh2018self}, and knowledge distillation \\cite{furlanello2018born}.\n\n\\vspace{-0.25cm}\n\\section{The CLEAR Method}\n\\vspace{-0.25cm}\nCLEAR uses actor-critic training on a mixture of new and replayed experiences. We employ distributed training based on the Importance Weighted Actor-Learner Architecture presented in \\cite{espeholt2018impala}. Namely, a single learning network is fed experiences (both novel and replay) by a number of acting networks, for which the weights are asynchronously updated to match those of the learner. Training proceeds as in \\cite{espeholt2018impala} by the \\emph{V-Trace} off-policy learning algorithm, which uses truncated importance weights to correct for off-policy distribution shifts. While V-Trace was designed to correct for the lag between the parameters of the acting networks and those of the learning network, we find it also successfully corrects for the distribution shift corresponding to replay experience. Our network architecture and training hyperparameters are chosen to match those in~\\cite{espeholt2018impala} and are not further optimized.\n\nFormally, let $\\theta$ denote the network parameters, $\\pi_\\theta$ the (current) policy of the network over actions $a$, $\\mu$ the policy generating the observed experience, and $h_s$ the hidden state of the network at time $s$. 
Then, the V-Trace target $v_s$ is given by:\n$$v_s:=V(h_s) + \\sum_{t=s}^{s+n-1} \\gamma^{t-s}\\left(\\prod_{i=s}^{t-1} c_i\\right)\\delta_t V,$$\nwhere $\\delta_t V:= \\rho_t\\left(r_t + \\gamma V(h_{t+1}) - V(h_t)\\right)$ for truncated importance sampling weights $c_i := \\min(\\bar{c}, \\frac{\\pi_\\theta(a_i| h_i)}{\\mu(a_i|h_i)})$, and $\\rho_t = \\min(\\bar{\\rho}, \\frac{\\pi_\\theta(a_t| h_t)}{\\mu(a_t|h_t)})$ (with $\\bar{c}$ and $\\bar{\\rho}$ constants). The policy gradient loss is:\n$$L_\\text{policy-gradient} := -\\rho_s\\log\\pi_\\theta(a_s| h_s)\\left(r_s+\\gamma v_{s+1} - V_\\theta(h_s)\\right).$$\nThe value function update is given by the L2 loss, and we regularize policies using an entropy loss:\n$$L_\\text{value} := \\left(V_\\theta(h_s) - v_s\\right)^2,\\quad L_\\text{entropy} := \\sum_a \\pi_\\theta(a|h_s) \\log \\pi_\\theta(a|h_s).$$\nThe loss functions $L_\\text{policy-gradient}$, $L_\\text{value}$, and $L_\\text{entropy}$ are applied both for new and replay experiences. In addition, we add $L_\\text{policy-cloning}$ and $L_\\text{value-cloning}$ for replay experiences only. In general, our experiments use a 50-50 mixture of novel and replay experiences, though performance does not appear to be very sensitive to this ratio. Further implementation details are given in Appendix \\ref{sec:methods}.\n\nIn the case of replay experiences, two additional loss terms are added to induce behavioral cloning between the network and its past self, with the goal of preventing network output on replayed tasks from drifting while learning new tasks. We penalize (1) the KL divergence between the historical policy distribution and the present policy distribution, (2) the L2 norm of the difference between the historical and present value functions. 
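As a concrete illustration, the V-Trace target above can be computed with the equivalent backward recursion $v_s = V(h_s) + \delta_s V + \gamma c_s\left(v_{s+1} - V(h_{s+1})\right)$, taking $n = T - s$ so the truncated product telescopes over the remaining trajectory. The following is our own NumPy sketch, not the reference implementation of \cite{espeholt2018impala}; names and the scalar-trajectory interface are assumptions.

```python
import numpy as np

def vtrace_targets(values, rewards, is_ratios, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-Trace targets via the backward recursion
    v_s = V(h_s) + delta_s V + gamma * c_s * (v_{s+1} - V(h_{s+1})).

    values:    V(h_s) for s = 0..T (last entry is the bootstrap value V(h_T))
    rewards:   r_s for s = 0..T-1
    is_ratios: untruncated ratios pi_theta(a_s|h_s) / mu(a_s|h_s), s = 0..T-1
    """
    T = len(rewards)
    rho = np.minimum(rho_bar, is_ratios)  # truncated importance weights rho_s
    c = np.minimum(c_bar, is_ratios)      # truncated trace cutoffs c_s
    v = np.array(values, dtype=float)     # v_T defaults to V(h_T)
    for s in reversed(range(T)):
        delta = rho[s] * (rewards[s] + gamma * values[s + 1] - values[s])
        v[s] = values[s] + delta + gamma * c[s] * (v[s + 1] - values[s + 1])
    return v[:T]
```

On-policy (all ratios equal to one) the recursion collapses to the usual bootstrapped $n$-step return $v_s = r_s + \gamma v_{s+1}$, which is a useful sanity check.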
Formally, this corresponds to adding the loss functions:\n$$L_\\text{policy-cloning} := \\sum_a \\mu(a|h_s) \\log\\frac{\\mu(a|h_s)}{\\pi_\\theta(a|h_s)},\\quad L_\\text{value-cloning} := ||V_\\theta(h_s) - V_\\text{replay}(h_s)||_2^2.$$\nNote that computing $\\text{KL}[\\mu||\\pi_\\theta]$ instead of $\\text{KL}[\\pi_\\theta||\\mu]$ ensures that $\\pi_\\theta(a|h_s)$ is nonzero wherever the historical policy is as well.\n\n\\vspace{-0.25cm}\n\\section{Results}\n\\vspace{-0.25cm}\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.33]{figures\/separate.png}\n \\includegraphics[scale=0.33]{figures\/simultaneous.png}\n \\includegraphics[scale=0.33]{figures\/sequential.png}\n \\caption{Separate, simultaneous, and sequential training: the $x$-axis denotes environment steps summed across all tasks and the $y$-axis episode score. In ``Sequential'', thick line segments are used to denote the task currently being trained, while thin segments are plotted by evaluating performance without learning. In simultaneous training, performance on \\texttt{explore\\_object\\_locations\\_small} is higher than in separate training, an example of modest constructive interference. In sequential training, tasks that are not currently being learned exhibit very dramatic catastrophic forgetting. See Appendix \\ref{sec:cumsum} for plots of the same data, showing cumulative performance.}\n \\label{fig:sequential}\n \\vspace{-0.5em}\n\\end{figure*}\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.5]{figures\/clear.png}\n \\includegraphics[scale=0.5]{figures\/clear_without_behavioral_cloning.png}\n \\caption{Demonstration of CLEAR on three DMLab tasks, which are trained cyclically in sequence. CLEAR reduces catastrophic forgetting so significantly that sequential tasks train almost as well as simultaneous tasks (compare to Figure \\ref{fig:sequential}). 
When the behavioral cloning loss terms are ablated, there is still reduced forgetting from off-policy replay alone. As above, thicker line segments are used to denote the task that is currently being trained. See Appendix \\ref{sec:cumsum} for plots of the same data, showing cumulative performance.}\n \\vspace{-1.5em}\n \\label{fig:clear}\n\\end{figure*}\n\n\\subsection{Catastrophic forgetting vs.~interference}\n\\vspace{-0.25cm}\n\\label{subsec:interference}\nOur first experiment (Figure \\ref{fig:sequential}) was designed to distinguish between two distinct concepts that are sometimes conflated, \\emph{interference} and \\emph{catastrophic forgetting}, and to emphasize the outsized role of the latter as compared to the former. Interference occurs when two or more tasks are incompatible (\\emph{destructive interference}) or mutually helpful (\\emph{constructive interference}) within the same model. Catastrophic forgetting occurs when a task's performance goes down not because of incompatibility with another task but because the second task overwrites it within the model. As we aim to illustrate, the two are independent phenomena, and while interference may happen, forgetting is ubiquitous.\\footnote{As an example of (destructive) interference, learning how to drive on the right side of the road may be difficult while also learning how to drive on the left side of the road, because of the nature of the tasks involved. Forgetting, by contrast, results from sequentially training on one and then another task - e.g. 
learning how to drive and then not doing it for a long time.}\n\nWe considered a set of three distinct tasks within the DMLab set of environments \\cite{beattie2016deepmind}, and compared three training paradigms by which a network may be trained to perform these three tasks: (1) training networks on the individual tasks \\emph{separately}, (2) training a single network on examples from all tasks \\emph{simultaneously} (which permits interference among tasks), and (3) training a single network \\emph{sequentially} on examples from one task, then the next task, and so on cyclically. Across all training protocols, the total amount of experience for each task was held constant. Thus, for separate networks training on separate tasks, the $x$-axis in our plots shows the total number of environment frames summed across all tasks. For example, the three-million-frame mark corresponds to one million frames on each of tasks one, two, and three. This allows a direct comparison to simultaneous training, in which the same network was trained on all three tasks.\n\nWe observe that in DMLab, there is very little difference between separate and simultaneous training. This indicates minimal interference between tasks. If anything, there is slight constructive interference, with simultaneous training performing marginally better than separate training. We assume this is a result of (i) commonalities in image processing required across different tasks, and (ii) certain basic exploratory behaviors, e.g.,~moving around, that are advantageous across tasks. (By contrast, destructive interference might result from incompatible behaviors or from insufficient model capacity.) \n\nBy contrast, there is a large difference between either of the above modes of training and sequential training, where performance on a task decays immediately when training switches to another task -- that is, catastrophic forgetting. 
Note that the performance of the sequential training appears at some points to be greater than that of separate training. This is purely because in sequential training, training proceeds exclusively on a single task, then exclusively on another task. For example, the first task quickly increases in performance since the network is effectively seeing three times as much data on that task as the networks training on separate or simultaneous tasks.\n\\begin{figure*}[htb]\n\\centering\n \\begin{tabular}{l|c|c|c}\n &\\small{\\texttt{explore...}}&\\small{\\texttt{rooms\\_collect...}}&\\small{\\texttt{rooms\\_keys...}}\\\\ \\hline\n Separate&29.24&8.79&19.91\\\\ \\hline\n Simultaneous&32.35&8.81&20.56\\\\ \\hline \\hline\n Sequential (no CLEAR)&17.99&5.01&10.87\\\\ \\hline\n CLEAR (50-50 new-replay) &\\textbf{31.40}&\\textbf{8.00}&18.13\\\\ \\hline\n CLEAR w\/o behavioral cloning&28.66&7.79&16.63\\\\ \\hline\n CLEAR, 75-25 new-replay&30.28&7.83&17.86\\\\ \\hline\n CLEAR, 100\\% replay&31.09&7.48&13.39\\\\ \\hline\n CLEAR, buffer 5M&30.33&\\textbf{8.00}&18.07\\\\ \\hline\n CLEAR, buffer 50M&30.82&7.99&\\textbf{18.21}\\\\\n \\end{tabular}\n \\caption{Quantitative comparison of the final cumulative performance between standard training (``Sequential (no CLEAR)'') and various versions of CLEAR on a cyclically repeating sequence of DMLab tasks. We also include the results of training on each individual task with a separate network (``Separate'') and on all tasks simultaneously (``Simultaneous'') instead of sequentially (see Figure \\ref{fig:sequential}). As described in Section \\ref{subsec:interference}, these are no-forgetting scenarios and thus present upper bounds on the performance expected in a continual learning setting, where tasks are presented sequentially. Remarkably, CLEAR achieves performance comparable to ``Separate'' and ``Simultaneous'', demonstrating that forgetting is virtually eliminated. 
See Appendix \\ref{sec:cumsum} for further details and plots.}\n \\vspace{-1.5em}\n \\label{tab:dmlab}\n\\end{figure*}\n\\vspace{-0.25cm}\n\\subsection{Stability}\n\\vspace{-0.25cm}\nWe here demonstrate the efficacy of CLEAR for diminishing catastrophic forgetting (Figure \\ref{fig:clear}). We apply CLEAR to the cyclically repeating sequence of DMLab tasks used in the preceding experiment. Our method effectively eliminates forgetting on all three tasks, while preserving overall training performance (see ``Sequential'' training in Figure \\ref{fig:sequential} for reference). When the task switches, there is little, if any, dropoff in performance when using CLEAR, and the network picks up immediately where it left off once a task returns later in training. Without behavioral cloning, the mixture of new experience and replay still reduces catastrophic forgetting, though the effect is reduced.\n\nIn Figure \\ref{tab:dmlab}, we perform a quantitative comparison of the performance for CLEAR against the performance of standard training on sequential tasks, as well as training on tasks separately and simultaneously. 
In order to perform a comparison that effectively captures the overall performance during continual learning (including the effect of catastrophic forgetting), the reward shown for time $t$ is the average $(1\/t)\\sum_{s0}$\nsuch that if one can protect $B$ vertices at each time\nstep (instead of just $1$), then there is a protection\nstrategy where none of the leaves of the tree\ncatches fire.\nIn this context, $B$ is referred to as\nthe \\emph{number of firefighters}.\n\n\nBoth the Firefighter problem and RMFC---both restricted\nto trees as defined above---are known to be computationally\nhard problems.\nMore precisely, Finbow, King, MacGillivray and\nRizzi~\\cite{FinbowKingMacGillivrayRizzi2007} showed \nNP-hardness for the Firefighter problem on trees with \nmaximum degree three.\nFor RMFC on trees, it is NP-hard to decide\nwhether one firefighter,\ni.e., $B=1$, is sufficient~\\cite{king_2010_firefighter};\nthus, unless $\\textsc{P}=\\textsc{NP}$, there is\nno (efficient) approximation algorithm with an approximation\nfactor strictly better than $2$.\n\nOn the positive side, several approximation algorithms\nhave been suggested for the Firefighter problem and RMFC.\nHartnell and Li~\\cite{HartnellLi2000} showed that a \nnatural greedy algorithm is a $\\frac{1}{2}$-approximation for the\nFirefighter problem. 
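One natural reading of such a greedy on trees -- protect, at each time step, a currently threatened vertex whose subtree contains the most vertices -- can be sketched as follows. This is an illustrative Python paraphrase under our own assumptions (tree given as a child-adjacency dictionary, one firefighter per step), not necessarily the exact rule analysed by Hartnell and Li.

```python
def subtree_sizes(children, root):
    """Number of vertices in the subtree rooted at each vertex."""
    size = {}
    def rec(u):
        size[u] = 1 + sum(rec(c) for c in children.get(u, []))
        return size[u]
    rec(root)
    return size

def greedy_firefighter(children, root):
    """Greedy protection: fire starts at the root; each step, protect the
    threatened vertex (child of a burning vertex) saving the most vertices.
    Returns the total number of vertices saved."""
    size = subtree_sizes(children, root)
    burning = [root]
    saved = 0
    while burning:
        threatened = [c for u in burning for c in children.get(u, [])]
        if not threatened:
            break
        best = max(threatened, key=lambda v: size[v])  # largest subtree
        saved += size[best]                            # its subtree never burns
        burning = [v for v in threatened if v != best] # the rest catch fire
    return saved
```

On the small tree with root $0$, children $1,2$ of the root, leaves $3,4$ under $1$ and leaf $5$ under $2$, the greedy first protects vertex $1$ (saving three vertices) and then vertex $5$, saving four of the six vertices in total.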
This approximation guarantee\nwas later improved by Cai, Verbin\nand Yang~\\cite{CaiVerbinYang2008} to $1-\\frac{1}{e}$,\nusing a natural linear programming (LP) relaxation\nand dependent randomized rounding.\nIt was later observed by Anshelevich, Chakrabarty, Hate \nand Swamy~\\cite{AnshelevichChakrabartyHateSwamy2009}\nthat the Firefighter problem on\ntrees can be interpreted as a monotone submodular\nfunction maximization (SFM) problem subject to a\npartition matroid constraint.\nThis leads to alternative ways to obtain a\n$(1-\\frac{1}{e})$-approximation by using a recent\n$(1-\\frac{1}{e})$-approximation for monotone SFM subject\nto a matroid\nconstraint~\\cite{vondrak_2008_optimal,calinescu_2011_maximizing}.\nThe factor $1-\\frac{1}{e}$ was later only improved for\nvarious restricted tree\ntopologies (see~\\cite{IwaikawaKamiyamaMatsui2011})\nand hence, for arbitrary trees,\nthis is the best known approximation factor to date.\n\nFor RMFC on trees, Chalermsook and\nChuzhoy~\\cite{ChalermsookChuzhoy2010} presented an\n$O(\\log^* n)$-approximation, where $n=|V|$ is the\nnumber of vertices.\\footnote{\n$\\log^* n$ denotes the minimum number $k$ of\nlogs of base two that have to be nested such\nthat $\\underbrace{\\log\\log\\dots\\log}_{k \\text{ logs}} n \\leq 1$.}\nTheir algorithm is based on a natural\nlinear program which is a straightforward\nadaptation of the one used in~\\cite{CaiVerbinYang2008}\nto get a $(1-\\frac{1}{e})$-approximation for the\nFirefighter problem on trees.\n\n\n\nWhereas there are still considerable gaps between\ncurrent hardness results and approximation algorithms\nfor both the Firefighter problem and RMFC on trees, \nthe currently best approximations essentially match\nthe integrality gaps of the underlying LPs.\nMore precisely,\nChalermsook and Vaz~\\cite{chalermsook_2016_new}\nshowed that for any $\\epsilon >0$, the canonical LP used\nfor the Firefighter problem on trees has an integrality\ngap of $1-\\frac{1}{e}+\\epsilon$. 
This generalized\na previous result by Cai, Verbin and Yang~\\cite{CaiVerbinYang2008},\nwho showed the same gap if the integral solution is required\nto lie in the support of an optimal LP solution.\nFor RMFC on trees, the integrality gap\nof the underlying LP\nis~$\\Theta(\\log^* n)$~\\cite{ChalermsookChuzhoy2010}.\n\n\nIt remained open to what extent these integrality\ngaps may reflect the approximation hardnesses of the\nproblems.\nThis question is motivated by two related problems\nwhose hardnesses of approximation indeed matches the\nabove-mentioned integrality gaps for the Firefighter\nproblem and RMFC.\nIn particular, many versions of monotone SFM subject\nto a matroid constraint---which we recall was shown\nin~\\cite{AnshelevichChakrabartyHateSwamy2009}\nto capture the Firefigther problem on\ntrees as a special\ncase---are\nhard to approximate up to a factor of\n$1-1\/e+\\epsilon$ for any constant $\\epsilon >0$.\nThis includes the problem of maximizing an explicitly\ngiven coverage function subject to a single cardinality\nconstraint, as shown by Feige~\\cite{feige_1998_threshold}.\nMoreover, as highlighted in~\\cite{ChalermsookChuzhoy2010},\nthe Asymmetric $k$-center problem is similar in nature\nto RMFC, and has an approximation hardness of\n$\\Theta(\\log^* n)$.\n\n\nThe goal of this paper is to fill the gap between\ncurrent approximation ratios and hardness results\nfor the Firefighter problem and RMFC on trees.\nIn particular, we present approximation ratios\nthat nearly match the hardness results, thus showing\nthat both problems can be approximated to factors\nthat are substantially better than the integrality\ngaps of the natural LPs.\nOur results are based on several new techniques,\nwhich may be of independent interest.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\\subsection{Our results}\n\n\nOur main results show\nthat both the Firefighter\nproblem and RMFC admit strong approximations\nthat essentially match known hardness bounds,\nshowing that 
approximation factors can be achieved that\nare substantially stronger than the integrality\ngaps of the natural LPs.\nIn particular, we obtain the following result\nfor RMFC.\n\\begin{theorem}\\label{thm:O1RMFC}\nThere is a $12$-approximation for RMFC.\n\\end{theorem}\nRecalling that RMFC is hard to approximate\nwithin any factor better than $2$, the above\nresult is optimal up to a constant factor,\nand improves on the previously best\n$O(\\log^* n)$-approximation of\nChalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}.\n\n\nMoreover, our main result for the Firefighter problem\nis the following, which, in view of NP-hardness \nof the problem, is essentially best possible in\nterms of approximation guarantee.\n\\begin{theorem}\\label{thm:PtasFF}\nThere is a PTAS for the Firefighter problem\non trees.\\footnote{A polynomial\ntime approximation scheme (PTAS) is\nan algorithm that, for any constant\n$\\epsilon > 0$, returns in polynomial time\na $(1-\\epsilon)$-approximate solution.}\n\\end{theorem}\n\nNotice that the Firefighter problem does not admit\nan FPTAS\\footnote{A fully polynomial time approximation\nscheme (FPTAS) is a PTAS with running\ntime polynomial in the input size and $\\frac{1}{\\epsilon}$.}\nunless $\\textsc{P}=\\textsc{NP}$, since the optimal\nvalue of any Firefighter problem on a tree of\n$n$ vertices is bounded by $O(n)$.\\footnote{\nThe nonexistence of FPTASs unless $\\textsc{P}=\\textsc{NP}$\ncan often be derived easily from strong\nNP-hardness. 
Notice that the Firefighter problem\nis indeed strongly NP-hard because\nits input size is $O(n)$, in which case NP-hardness is\nequivalent to strong NP-hardness.\n}\nWe introduce several new techniques that allow us\nto obtain approximation factors well beyond \nthe integrality gaps of the natural LPs,\nwhich have been a barrier for previous approaches.\nWe start by providing an overview of these techniques.\n\n\n\\smallskip\n\nDespite the fact that we obtain\napproximation\nfactors beating the integrality gaps, the natural LPs\nplay a central role in our approaches.\nWe start by introducing general transformations\nthat allow for transforming the Firefighter problem\nand RMFC into a more compact and better structured\nform, only losing small factors in terms of\napproximability.\nThese transformations by themselves do not decrease\nthe integrality gaps.\nHowever, they allow us to identify small\nsubstructures, over which we can\noptimize efficiently, and having an optimal solution\nto these subproblems we can define\na residual LP with small integrality gap.\n\nSimilar high-level approaches,\nlike guessing a constant-size\nbut important subset of an optimal solution are well-known\nin various contexts to decrease \nintegrality gaps of natural LPs. 
The best-known example\nmay be classic PTASs for the knapsack problem, where the\nintegrality gap of the natural LP can be decreased to\nan arbitrarily small constant by first guessing a constant\nnumber of heaviest elements of an optimal solution.\nHowever, our approach differs substantially \nfrom this standard enumeration idea.\nApart from the above-mentioned transformations which,\nas we will show later, already lead to new results\nfor both RMFC and the Firefighter problem, we\nwill introduce new combinatorial approaches to gain information\nabout a \\emph{super-constant} subset of an optimal solution.\nIn particular, for the RMFC problem we define\na recursive enumeration\nalgorithm which, despite being very slow for enumerating all\nsolutions, can be shown to reach a good subsolution \nwithin a small recursion depth that can be reached in\npolynomial time.\nThis enumeration procedure explores the space step by\nstep, and at each step we first solve an LP that determines\nhow to continue the enumeration in the next step.\nWe think that this LP-guided enumeration\ntechnique may be of independent interest.\nFor the Firefighter problem, we use a well-chosen\nenumeration procedure to identify a polynomial\nnumber of additional constraints to be added to the\nLP, that improves its integrality gap to\n$1-\\epsilon$.\n\n\n\n\n\n\n\n\n\n\\subsection{Further related results}\nIwaikawa, Kamiyama and Matsui~\\cite{IwaikawaKamiyamaMatsui2011}\nshowed that the approximation guarantee of $1-\\frac{1}{e}$ can be improved \nfor some restricted families of trees, in particular of low maximum degree.\nAnshelevich, Chakrabarty, Hate and \nSwamy~\\cite{AnshelevichChakrabartyHateSwamy2009} studied the approximability\nof the Firefighter problem in general graphs, which they prove admits no \n$n^{1-\\epsilon}$-approximation for any $\\epsilon > 0$, unless\n$\\textsc{P}=\\textsc{NP}$. 
In a \ndifferent model, where the protection also spreads through the graph \n(the \\emph{Spreading Model}), the authors show that the problem \nadmits a polynomial $(1-\\frac{1}{e})$-approximation on general graphs. \nMoreover, for RMFC, an $O(\\sqrt{n})$-approximation for general \ngraphs and an $O(\\log n)$-approximation for directed layered\ngraphs is presented.\nThe\nlatter result was obtained independently by Chalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}.\nKlein, Levcopoulos and Lingas~\\cite{KleinLevcopoulosLingas2014} introduced a \ngeometric variant of the Firefighter problem, proved its NP-hardness and provided \na constant-factor approximation algorithm. \nThe Firefighter problem and RMFC are natural special cases\nof the Maximum Coverage Problem with \nGroup Constraints (MCGC)~\\cite{ChekuriKumar2004} and the \nMultiple Set Cover problem (MSC)~\\cite{ElkinKortsarz2006}, respectively. \nThe input in MCGC is a set system consisting of a finite set $X$\nof elements with nonnegative weights, a\ncollection of subsets $\\mathcal{S} = \\{S_1, \\cdots, S_m\\}$ of $X$ and \nan integer $k$. The sets in $\\mathcal{S}$ are partitioned into \ngroups $G_1, \\cdots, G_l\\subseteq \\mathcal{S}$.\nThe goal is to pick a\nsubset $H\\subseteq \\mathcal{S}$ of $k$ \nsets from $\\mathcal{S}$ whose union covers elements of total\nweight as large as possible with the \nadditional constraint that $|H \\cap G_j| \\leq 1$\nfor all $j\\in [l]\\coloneqq \\{1,\\dots, l\\}$. In MSC,\ninstead of the fixed bounds for groups and the parameter $k$, the goal is\nto choose a subset $H\\subseteq \\mathcal{S}$\nthat covers $X$ completely, while \nminimizing $\\max_{j\\in [l]}|H \\cap G_j|$. The Firefighter\nproblem and RMFC can naturally be interpreted as special cases of the latter\nproblems with a laminar set system $\\mathcal{S}$.\n\n\nThe Firefighter problem admits polynomial time algorithms in some restricted classes \nof graphs. 
Finbow, King, MacGillivray and Rizzi~\\cite{FinbowKingMacGillivrayRizzi2007}\nshowed that, while the problem is NP-hard on trees with maximum degree three, \nwhen the fire starts at a vertex with degree two in a subcubic tree, the problem is solvable \nin polynomial time. Fomin, Heggernes and van Leeuwen~\\cite{FominHeggernes_vanLeeuwen2012}\npresented polynomial algorithms for interval graphs, split graphs,\npermutation graphs and $P_k$-free graphs.\n\nSeveral sub-exponential exact algorithms were developed for the Firefighter\nproblem on trees. Cai, Verbin and Yang~\\cite{CaiVerbinYang2008} presented\na $2^{O(\\sqrt{n}\\log n)}$-time algorithm. \nFloderus, Lingas and Persson~\\cite{FloderusLingasPersson2013} presented a\nsimpler algorithm with a slightly better running time, as well as a\nsub-exponential algorithm for general graphs in the\nspreading model and an $O(1)$-approximation in planar graphs\nunder some further conditions.\n\nAdditional directions of research on the Firefighter problem include parameterized\ncomplexity (Cai, Verbin and Yang~\\cite{CaiVerbinYang2008}, Bazgan, Chopin and \nFellows~\\cite{BazganChopinFellows2011}, Cygan, Fomin and \nvan Leeuwen~\\cite{CyganFomin_vanLeeuwen2012} and \nBazgan, Chopin, Cygan, Fellows, Fomin and \nvan Leeuwen~\\cite{BazganChopinCyganFellowsFomin_van_Leeuwen2014}), generalizations\nto the case of many initial fires and many firefighters (Bazgan, Chopin and \nRies~\\cite{BazganChopinRies2013} and Costa, Dantas, Dourado, Penso and \nRautenbach~\\cite{CostaDantasDouradoPensoRautenbach2013}),\nand the study of potential strengthenings of the canonical LP for\nthe Firefighter problem on trees (Hartke~\\cite{Hartke2006} and Chalermsook and Vaz~\\cite{chalermsook_2016_new}).\n\nComputing the \\emph{Survivability} of a graph is \na further problem closely related to Firefighting\nthat has attracted considerable 
attention\n(see~\\cite{CaiWang2009,CaiChengVerbinZhou2010,Pralat2013,Esperet_Van_Den_HeuvelMaffraySipma2013,Gordinowicz2013,KongZhangWang2014}).\nFor a graph $G$ and a parameter\n$k\\in \\mathbb{Z}_{\\geq 0}$, the $k$-survivability of $G$ \nis the average fraction of nodes that one can save with $k$ firefighters\nin $G$, when the fire starts at a random node.\n\n\n\nFor further references we refer the reader to the survey of Finbow and \nMacGillivray~\\cite{FinbowMacGillivray2009}.\n\n\n\\subsection{Organization of the paper}\n\n\n\nWe start by introducing the classic linear programming\nrelaxations for the Firefighter problem and RMFC\nin Section~\\ref{sec:preliminaries}.\nSection~\\ref{sec:overview} outlines our main\ntechniques and algorithms. Some \nproofs and additional discussion\nare deferred to later sections, namely\nSection~\\ref{sec:proofsCompression}, providing\ndetails on a compression technique that is\ncrucial for both our algorithms, Section~\\ref{sec:proofsFF},\ncontaining proofs for results related to the\nFirefighter problem, and Section~\\ref{sec:proofsRMFC},\ncontaining proofs for results related to RMFC.\nFinally, Appendix~\\ref{apx:trans} contains some basic\nreductions showing how to reduce different variations\nof the Firefighter problem to each other.\n\n\n\n\n\n\n\n\n\\section{Classic LP relaxations and preliminaries}\n\\label{sec:preliminaries}\n\nInterestingly, despite the fact that we\nobtain approximation factors considerably\nstronger than the known integrality gaps of the\nnatural LPs,\nthese LPs still play a central role in our approaches.\nWe thus start by introducing the natural LPs together with\nsome basic notation and terminology.\n\nLet $L\\in \\mathbb{Z}_{\\geq 0}$ be the \\emph{depth} of\nthe tree, i.e., the largest\ndistance---in terms of number of edges---between $r$\nand any other vertex in $G$.\nHence, after at most $L$ time steps, the fire\nspreading process will halt.\nFor $\\ell\\in [L]:=\\{1,\\dots, L\\}$, 
let\n$V_\\ell\\subseteq V$ be the set of all vertices of\ndistance $\\ell$ from $r$, which we call the\n\\emph{$\\ell$-th level} of the instance.\nFor brevity, we use $V_{\\leq \\ell} = \\cup_{k=1}^\\ell V_k$,\nand we define in the same spirit $V_{\\geq \\ell}$, \n$V_{< \\ell}$, and $V_{> \\ell}$.\nMoreover, we denote by\n$\\Gamma \\subseteq V$ the set of all leaves\nof the tree, and for any $u\\in V$, the set\n$P_u\\subseteq V\\setminus \\{r\\}$ denotes\nthe set of all vertices on the unique $u$-$r$ path\nexcept for the root $r$.\n\nThe relaxation for RMFC used in~\\cite{ChalermsookChuzhoy2010}\nis the following:\n\\begin{equation}\\label{eq:lpRMFC}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\min & B & & \\\\\n & x(P_u) &\\geq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &B\\cdot \\ell\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}},\n\\end{array}\\tag{$\\mathrm{LP_{RMFC}}$}\n\\end{equation}\nwhere $x(U):=\\sum_{u\\in U} x(u)$ for any $U\\subseteq V\\setminus \\{r\\}$.\nIndeed, if one enforces $x\\in \\{0,1\\}^{V\\setminus \\{r\\}}$\nand $B\\in \\mathbb{Z}$\nin the above relaxation, an exact description of RMFC is\nobtained where $x$ is the characteristic vector of the\nvertices to be protected and $B$ is the number of\nfirefighters:\nThe constraints $x(P_u)\\geq 1$ for $u\\in \\Gamma$ enforce \nthat for each leaf $u$, a vertex between $u$ and $r$\nwill be protected, which makes sure that $u$ will not\nbe reached by the fire;\nmoreover, the\nconstraints $x(V_{\\leq \\ell})\\leq B\\cdot \\ell$\nfor $\\ell\\in [L]$ describe the vertex sets that can be\nprotected given $B$ firefighters per time step\n(see~\\cite{ChalermsookChuzhoy2010} for more details).\nAlso, as already highlighted in~\\cite{ChalermsookChuzhoy2010},\nthere is an optimal solution to RMFC (and also to the Firefighter\nproblem), that protects with the firefighters available at\ntime step $\\ell$ only vertices 
in $V_\\ell$.\nHence, the above relaxation\ncan be transformed into one with same optimal objective value\nby replacing the constraints\n$x(V_{\\leq \\ell})\\leq B\\cdot \\ell$ \\;$\\forall\\ell\\in [L]$\nby the constraints\n$x(V_\\ell) \\leq B$ \\;$\\forall \\ell\\in [L]$.\n\n\nThe natural LP relaxation for the Firefighter\nproblem, which leads to the previously best\n$(1-1\/e)$-approximation presented in~\\cite{CaiVerbinYang2008},\nis obtained analogously.\nDue to higher generality, and even more importantly\nto obtain more flexibility\nin reductions to be defined later, we work on a slight\ngeneralization of the Firefighter problem on trees,\nextending it in two ways:\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item Weighted version: vertices $u\\in V\\setminus \\{r\\}$ have\nweights $w(u)\\in \\mathbb{Z}_{\\geq 0}$, and the goal\nis to maximize the total weight of vertices not catching\nfire.\nIn the classical Firefighter problem all weights are one.\n\n\\item General budgets\/firefighters:\nWe allow for having a different number of\nfirefighters at each time step, say $B_\\ell \\in \\mathbb{Z}_{>0}$\nfirefighters for time step $\\ell\\in [L]$.\\footnote{Without\nloss of generality we exclude $B_\\ell=0$, since a level\nwith zero budget can be eliminated through a simple\ncontraction operation. 
For more details we refer to\nthe proof of Theorem~\\ref{thm:compressionFF} which,\nas a sub-step, eliminates zero-budget levels.\n}\n\\end{enumerate}\nIndeed, the above generalizations are mostly for convenience\nof presentation, since general budgets can be reduced to\nunit budgets (see Appendix~\\ref{apx:trans} for a proof):\n\\begin{lemma}\\label{lem:genBudgetsToUnit}\nAny weighted Firefighter problem on trees with $n$ vertices\nand general budgets can be transformed efficiently \ninto an equivalent weighted Firefighter problem with\nunit-budgets and $O(n^2)$ vertices.\n\\end{lemma}\nWe also show in Appendix~\\ref{apx:trans} that\nup to an arbitrarily small error in terms\nof objective, any weighted Firefighter instance can be\nreduced to a unit-weighted one.\nIn what follows, we always assume to deal with\na weighted Firefighter instance if not specified\notherwise. Regarding the budgets, we will be\nexplicit about whether we work with unit or\ngeneral budgets, since some techniques are easier\nto explain in the unit-budget case, even though\nit is equivalent to general budgets by\nLemma~\\ref{lem:genBudgetsToUnit}.\n\n\\smallskip\n\nAn immediate extension of the LP relaxation\nfor the unit-weighted unit-budget Firefighter\nproblem used in~\\cite{CaiVerbinYang2008}---which\nis based on an IP formulation\npresented in~\\cite{macgillivray_2003_firefighter}---leads\nto the\nfollowing LP relaxation \nfor the weighted Firefighter problem\nwith general budgets. 
\nFor $u\\in V$, we denote by $T_u\\subseteq V$\nthe set of all vertices in the subtree starting\nat $u$ and including $u$, i.e., all vertices $v$\nsuch that the unique $r$-$v$ path in $G$ contains $u$.\n\n\\begin{equation}\\label{eq:lpFF}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\max & \\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u) & & \\\\\n & x(P_u) &\\leq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &\\sum_{i=1}^\\ell B_i\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\[1.5em]\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}.\n\\end{array}\\tag{$\\mathrm{LP_{FF}}$}\n\\end{equation}\nThe constraints\n$x(P_u)\\leq 1$ exclude redundancies, i.e., a vertex\n$u$ is forbidden of being protected if another vertex above\nit, on the $r$-$u$ path, is already protected. This\nelimination of redundancies allows for writing the objective\nfunction as shown above.\n\nWe recall that the integrality gap of~\\ref{eq:lpRMFC}\nwas shown to be $\\Theta(\\log^* n)$~\\cite{ChalermsookChuzhoy2010},\nand the integrality gap of~\\ref{eq:lpFF} is\nasymptotically $1-1\/e$\n(when $n\\to \\infty$)~\\cite{chalermsook_2016_new}.\n\n\\smallskip\n\nThroughout the paper, all logarithms are of base $2$ if\nnot indicated otherwise.\nWhen using big-$O$ and related notations\n(like $\\Omega, \\Theta, \\ldots$), we will always\nbe explicit about the dependence on \nsmall error terms $\\epsilon$---as used when talking\nabout $(1-\\epsilon)$-approximations---and not consider\nit to be part of the hidden constant.\nTo make statements where $\\epsilon$ is part of the\nhidden constant, we will use the notation\n$O_{\\epsilon}$ and likewise\n$\\Omega_{\\epsilon}, \\Theta_{\\epsilon},\\ldots$.\n\n\n\n\n\n\n\n\\section{Overview of techniques and algorithms}\n\\label{sec:overview}\n\nIn this section, we present our main technical\ncontributions and outline our algorithms.\nWe start by introducing a compression technique in\nSection~\\ref{subsec:compression} that 
works
for both RMFC and the Firefighter problem and allows
for transforming any instance into one on a tree of
only logarithmic depth.
One key property we achieve with compression
is that we can later use (partial)
enumeration techniques with exponential running
time in the depth of the tree.
However, compression on its own already leads to
interesting results. In particular, it allows
us to obtain a QPTAS for
the Firefighter problem, and a quasipolynomial time
$2$-approximation for RMFC.\footnote{The running time
of an algorithm is \emph{quasipolynomial} if it is
of the form $2^{\polylog(\langle \mathrm{input} \rangle)}$,
where $\langle \mathrm{input} \rangle$ is the input
size of the problem. A QPTAS is an algorithm that,
for any constant $\epsilon >0$, returns
a $(1-\epsilon)$-approximation in quasipolynomial
time.}
However, it seems highly
non-trivial to transform these quasipolynomial time
procedures into efficient ones.



To obtain the claimed results, we develop two
(partial) enumeration methods to reduce the integrality
gap of these LPs.
In Section~\ref{subsec:overviewFirefighter}, we provide
an overview of our PTAS for the Firefighter problem,
and Section~\ref{subsec:overviewRMFC} presents
our $O(1)$-approximation for RMFC.




\subsection{Compression}\label{subsec:compression}

Compression is a technique that is applicable to both
the Firefighter problem and RMFC. It allows for reducing
the depth of the input tree at a very small loss in
the objective.
We start by discussing compression in the context of
the Firefighter problem. 
\n\nTo reduce the depth of the tree, we will\nfirst do a sequence of what we call \\emph{down-pushes}.\nEach down-push acts on two levels $\\ell_1,\\ell_2\\in [L]$\nwith $\\ell_1 < \\ell_2$ of the tree, and moves the budget $B_{\\ell_1}$\nof level $\\ell_1$ down to $\\ell_2$, i.e., the new\nbudget of level $\\ell_2$ will be $B_{\\ell_1}+B_{\\ell_2}$,\nand the new budget of level $\\ell_1$ will be $0$.\nClearly, down-pushes only restrict our options for\nprotecting vertices. However, we can show that\none can do a sequence of down-pushes such that\nfirst, the optimal objective value of the new instance\nis very close to the one of the original instance,\nand second,\nonly $O(\\log L)$ levels have non-zero budgets.\nFinally, levels with $0$-budget can easily be removed\nthrough a simple contraction operation, thus leading\nto a new instance with only $O(\\log L)$ depth.\n\nTheorem~\\ref{thm:compressionFF} below\nformalizes our main compression\nresult for the Firefighter problem, which we state for\nunit-budget Firefighter instances for simplicity.\nSince Lemma~\\ref{lem:genBudgetsToUnit}\nimplies that every general-budget\nFirefighter instance with $n$ vertices can\nbe transformed into a unit-budget Firefighter instance\nwith $O(n^2)$ vertices---and thus $O(n^2)$\nlevels---Theorem~\\ref{thm:compressionFF} can also be used to reduce\nany Firefighter instance on $n$ vertices to one\nwith $O(\\frac{\\log n}{\\delta})$\nlevels, by losing a factor of at most $1-\\delta$\nin terms of objective.\n\n\n\\begin{theorem}\\label{thm:compressionFF}\nLet $\\mathcal{I}$ be a unit-budget Firefighter instance \non a tree with depth $L$, and let $\\delta\\in (0,1)$.\nThen one can efficiently construct a general budget\nFirefighter instance $\\overline{\\mathcal{I}}$ with depth\n$L'=O(\\frac{\\log L}{\\delta})$, and such that\nthe following holds, where $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))$\nand $\\val(\\mathsf{OPT}(\\mathcal{I}))$ are the optimal values 
of\n$\\overline{\\mathcal{I}}$ and $\\mathcal{I}$, respectively.\n\n\\smallskip\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item\n$\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))\n \\geq (1-\\delta) \\val(\\mathsf{OPT}(\\mathcal{I}))$, and\n\n\\item any solution to $\\overline{\\mathcal{I}}$ can be\ntransformed efficiently into a solution of $\\mathcal{I}$\nwith same objective value.\n\\end{enumerate}\n\\end{theorem}\n\n\n\n\nFor RMFC we can use a very similar compression technique\nleading to the following.\n\n\\begin{theorem}\\label{thm:compressionRMFC}\nLet $G=(V,E)$ be a rooted tree of depth $L$.\nThen one can construct efficiently a rooted\ntree $G'=(V',E')$ with $|V'|\\leq |V|$ and\ndepth $L'=O(\\log L)$, such that:\n\n\\smallskip\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item If the RMFC problem on $G$ has a \nsolution with budget $B\\in \\mathbb{Z}_{> 0}$ at\neach level, then the RMFC problem on $G'$\nwith non-uniform budgets, where level $\\ell \\geq 1$\nhas a budget of $B_\\ell=2^{\\ell} \\cdot B$, has a solution.\n\n\\item Any solution to the RMFC problem on $G'$,\nwhere level $\\ell$ has budget $B_\\ell=2^{\\ell} \\cdot B$,\ncan be transformed efficiently into an RMFC solution\nfor $G$ with budget $2B$.\n\\end{enumerate}\n\n\\end{theorem}\n\nInterestingly, the above compression results already\nallow us to obtain strong quasipolynomial approximation\nalgorithms for the Firefighter problem and RMFC,\nusing dynamic programming.\nConsider for example the RMFC problem. We can first guess\nthe optimal budget $B$, which can be done efficiently\nsince $B\\in \\{1,\\dots, n\\}$. 
Consider now the instance\n$G'$ claimed by Theorem~\\ref{thm:compressionRMFC}\nwith budgets $B_\\ell = 2^{\\ell} B$.\nBy Theorem~\\ref{thm:compressionRMFC},\nthis RMFC instance is feasible\nand any solution to it can be converted to one of\nthe original RMFC problem with budget $2B$.\nIt is not hard to see that, for the fixed budgets $B_\\ell$,\none can solve the RMFC problem on $G'$ in quasipolynomial\ntime using a bottom-up dynamic programming approach.\nMore precisely, starting with the leaves and moving\nup to the root, we compute for each vertex $u\\in V$\nthe following table. Consider a subset of the available\nbudgets, which can be represented as a vector\n$q\\in [B_1]\\times \\dots \\times [B_{L'}]$. For each such\nvector $q$ we want to know whether or not using the sub-budget\ndescribed by $q$ allows for disconnecting $u$ from all\nleaves below it.\nSince $L'=O(\\log L)$ and the size of each budget $B_\\ell$\nis at most the number of vertices, the table size is\nquasipolynomial.\nMoreover, one can check that these tables can be\nconstructed bottom-up in quasipolynomial time.\nHence, this approach leads to a quasipolynomial time\n$2$-approximation for RMFC. We recall that there is no\nefficient approximation algorithm with an approximation\nratio strictly below $2$,\nunless $\\textsc{P}=\\textsc{NP}$.\nA similar dynamic programming approach for the Firefighter\nproblem on a compressed instance leads to a\nQPTAS.\n\nHowever, our focus is on efficient algorithms, and it\nseems non-trivial to transform the\nabove quasipolynomial time dynamic programming approaches\ninto efficient procedures. 
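For concreteness, the bottom-up computation just described can be phrased as a recurrence; the notation $D_u(q)$ is used here for illustration only. For a vertex $u$ on level $\ell(u)$ with children $v_1,\dots,v_c$, and a sub-budget vector $q\in \{0,\dots,B_1\}\times \dots \times \{0,\dots,B_{L'}\}$, let $D_u(q)$ indicate whether, protecting at most $q_\ell$ vertices on each level $\ell$ within $T_u$, every leaf of $T_u$ can be cut off from the fire. For a leaf $u$ we have $D_u(q)=[q_{\ell(u)}\geq 1]$, and for an internal vertex $u$,
\begin{equation*}
D_u(q) \;=\; \bigl[q_{\ell(u)}\geq 1\bigr] \;\vee
\bigvee_{q^1+\dots+q^c \,\leq\, q}\;
\bigwedge_{i=1}^{c} D_{v_i}(q^i),
\end{equation*}
where $[\cdot]$ denotes the indicator of the bracketed condition, and the disjunction ranges over all componentwise splits of $q$ into vectors $q^1,\dots,q^c$: either $u$ itself is protected with a firefighter of its level, or the sub-budget is distributed among the subtrees of the children of $u$. The instance on $G'$ is then feasible precisely when the full budget $(B_1,\dots,B_{L'})$ can be split among the children of $r$ such that all their table entries evaluate to true.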
To obtain our results,\nwe therefore combine\nthe above compression techniques with\nfurther approaches to be discussed next.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Overview of PTAS for Firefighter problem}\n\\label{subsec:overviewFirefighter}\n\nDespite the fact that~\\ref{eq:lpFF} has a large integrality\ngap\n---which can be shown to be the case even after\ncompression\\footnote{This follows from the fact\nthat through compression with some\nparameter $\\delta\\in (0,1)$, both the optimal\nvalue and optimal LP value change at most\nby a $\\delta$-fraction.}---%\nit is a crucial tool in our PTAS.\nConsider a general-budget\nFirefighter instance,\nand let $x$ be a vertex solution to~\\ref{eq:lpFF}.\nWe say that a vertex $u\\in V\\setminus \\{r\\}$ is\n\\emph{$x$-loose}, or simply \\emph{loose}, if\n$u\\in \\operatorname{supp}(x):=\\{v\\in V\\setminus \\{r\\} \\mid x(v) > 0\\}$\nand $x(P_u) < 1$.\nAnalogously, we call a vertex $u\\in V\\setminus \\{r\\}$\n\\emph{$x$-tight}, or simply \\emph{tight}, if\n$u\\in \\operatorname{supp}(x)$ and $x(P_u)=1$.\nHence, $\\operatorname{supp}(x)$ can be partitioned into\n$\\operatorname{supp}(x)=V^{\\mathcal{L}} \\cup V^{\\mathcal{T}}$,\nwhere $V^{\\mathcal{L}}$ and $V^{\\mathcal{T}}$\nare the set of all loose and tight vertices, respectively.\nUsing a sparsity argument for vertex solutions\nof~\\ref{eq:lpFF} we can bound the number of\n$x$-loose vertices.\n\n\n\\begin{lemma}\\label{lem:sparsityFF}\nLet $x$ be a vertex solution to~\\ref{eq:lpFF}\nfor a Firefighter problem with general budgets.\nThen the number of $x$-loose vertices is at\nmost $L$, the depth of the tree.\n\\end{lemma}\n\nHaving a vertex solution $x$ to~\\ref{eq:lpFF},\nwe can consider a simplified LP obtained from~\\ref{eq:lpFF}\nby only allowing to protect vertices that are $x$-tight.\nA simple yet useful property of $x$-tight vertices is that\nfor any $u,v\\in V^{\\mathcal{T}}$ with $u\\neq v$\nwe have $u\\not\\in P_v$. 
Indeed, if $u\\in P_v$, then\n$x(P_u) \\leq x(P_v) - x(v) < x(P_v)=1$ because $x(v)>0$.\nHence, no two tight vertices lie on the same leaf-root path.\nThus, when restricting~\\ref{eq:lpFF} to $V^{\\mathcal{T}}$,\nthe path constraints $x(P_u) \\leq 1$ for $u\\in \\Gamma$ transform\ninto trivial constraints requiring $x(v)\\leq 1$ for\n$v\\in V^{\\mathcal{T}}$, and one can easily observe that the\nresulting constraint system is totally unimodular because\nit describes a laminar matroid constraint given by the\nbudget constraints (see~\\cite[Volume B]{Schrijver2003} for more details on\nmatroid optimization).\nRe-optimizing over this LP we get an integral solution\nof objective value at least\n$\\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u)\n - \\sum_{u\\in V^{\\mathcal{L}}} x_u w(T_u)$,\nbecause the restriction of $x$ to $V^{\\mathcal{T}}$\nis still feasible for the new LP.\n\n\nIn particular, if $\\sum_{u\\in V^{\\mathcal{L}}} x_u w(T_u)$\nwas at most $\\epsilon\\cdot\\val(\\mathsf{OPT})$, where\n$\\val(\\mathsf{OPT})$ is the optimal value of the instance,\nthen this would lead to a PTAS.\nClearly, this is not true in general, since it would contradict\nthe $(1-\\frac{1}{e})$-integrality gap of~\\ref{eq:lpFF}.\nIn the following, we will present techniques to limit\nthe loss in terms of LP-value when re-optimizing\nonly over variables corresponding to tight vertices $V^{\\mathcal{T}}$.\n\n\nNotice that when we work with a compressed instance, by\nfirst invoking Theorem~\\ref{thm:compressionFF} with $\\delta=\\epsilon$,\nwe have $|V^{\\mathcal{L}}|=O(\\frac{\\log N}{\\epsilon})$, where $N$ is the number of\nvertices in the original instance. 
Hence, a PTAS would
be achieved if for all $u\in V^{\mathcal{L}}$, we had
$w(T_u) = O(\frac{\epsilon^2}{\log N})\cdot \val(\mathsf{OPT})$.
One way to achieve this in quasipolynomial time is
to first guess a subset of $\Theta(\frac{\log N}{\epsilon^2})$ many
vertices of an optimal solution with highest impact,
i.e., among all vertices $u\in \mathsf{OPT}$ we
guess those with largest $w(T_u)$.
This technique has been used in various other settings
(see for example~\cite{ravi_1996_constrained,grandoni_2014_new}
for further details)
and leads to another QPTAS for the Firefighter problem.
Again, it is unclear how
this QPTAS could be turned into an efficient procedure.


The above discussion motivates investigating vertices
$u\in V\setminus \{r\}$ with
$w(T_u) \geq \eta$ for some
$\eta = \Theta(\frac{\epsilon^2}{\log N}) \val(\mathsf{OPT})$.
We call such vertices \emph{heavy};
later, we will provide an explicit definition of $\eta$
that does not depend on the unknown $\val(\mathsf{OPT})$ and
is explicit about the hidden constant.
Let $H=\{u\in V\setminus \{r\} \mid w(T_u) \geq \eta\}$
be the set of all heavy vertices.
Observe that $G[H\cup \{r\}]$---i.e., the induced subgraph
of $G$ over the vertices $H\cup\{r\}$---is a subtree of
$G$, which we call the \emph{heavy tree}.


Recall that by the above discussion, if we work on a
compressed instance with $L=O(\frac{\log N}{\epsilon})$ levels,
and if an optimal vertex solution to~\ref{eq:lpFF}
has no loose vertices that are heavy, then an integral
solution can be obtained of value at
least $(1-\epsilon)$ times the LP value.
Hence, if we were able to guess the
heavy vertices contained in an optimal solution,
the integrality gap of the reduced problem
would be small, since no heavy vertices are
left in the LP and thus none can be loose anymore.

Whereas there are too many options
to enumerate over all possible subsets
of 
heavy vertices that an optimal solution
may contain, we will do a coarser
enumeration.
More precisely, we will partition
the heavy vertices into $O_{\epsilon}(\log N)$
subpaths and guess for each subpath whether
it contains a vertex of $\mathsf{OPT}$.
For this to work out we need
the heavy tree to have a very
simple topology;
in particular, it should only have
$O_{\epsilon}(\log N)$ leaves.
Whereas this does not hold in general,
we can enforce it by a further transformation
making sure that $\mathsf{OPT}$ saves a
constant fraction of $w(V)$, which---as we
will observe next---indeed limits the
number of leaves of the heavy tree to $O_{\epsilon}(\log N)$.
Furthermore, this transformation is useful to
complete our definition of heavy vertices
by explicitly defining the threshold $\eta$.


\begin{lemma}\label{lem:pruning}
Let $\mathcal{I}$ be a general-budget Firefighter instance
on a tree $G=(V,E)$ with weights $w$.
Then for any $\lambda\in \mathbb{Z}_{\geq 1}$,
one can efficiently construct
a new Firefighter instance
$\overline{\mathcal{I}}$ on a subtree $G'=(V',E')$ of $G$
with the same budgets,
by starting from $\mathcal{I}$ and applying
node deletions and weight reductions, such that
\smallskip
\begin{enumerate}[nosep, label=(\roman*)]
 \item\label{item:pruningSmallLoss}
 $\val(\mathsf{OPT}(\overline{\mathcal{I}})) \geq
 \left(1 - \frac{1}{\lambda}\right)
 \val(\mathsf{OPT}(\mathcal{I}))$, and

 \item\label{item:pruningLargeOpt}
 $\val(\mathsf{OPT}(\overline{\mathcal{I}})) \geq
 \frac{1}{\lambda} w'(V')$,
where $w'\leq w$ are the vertex weights
in instance $\overline{\mathcal{I}}$.
\end{enumerate}
\smallskip
The deletion of $u\in V$ corresponds to removing the whole
subtree below $u$ from $G$, i.e., all vertices in $T_u$.
\end{lemma}

Since Lemma~\ref{lem:pruning} constructs a new instance
using only node deletions and weight reductions, any
solution to the new 
instance is also a solution to the
original instance with at least the same objective value.

Our PTAS for the Firefighter problem first applies the
compression Theorem~\ref{thm:compressionFF} with $\delta=\epsilon/3$
and then Lemma~\ref{lem:pruning} with
$\lambda = \lceil\frac{3}{\epsilon} \rceil$ to obtain
a general budget Firefighter instance on a tree $G=(V,E)$.
We summarize the properties of this new instance $G=(V,E)$
below. As before, to avoid confusion, we denote by $N$
the number of vertices of the
original instance.


\begin{property}\leavevmode
\label{prop:preprocessedFF}

\begin{enumerate}[nosep, label=(\roman*)]
\item The depth $L$ of $G$ satisfies $L=O(\frac{\log N}{\epsilon})$.

\item $\val(\mathsf{OPT}) \geq \lceil\frac{3}{\epsilon} \rceil^{-1} w(V)
 \geq \frac{1}{4}\epsilon w(V)$.

\item The optimal value $\val(\mathsf{OPT})$ of the new instance is
at least a $(1-\frac{2}{3}\epsilon)$-fraction of the
optimal value of the original instance.

\item Any solution to the new instance can be transformed
efficiently into a solution of the original instance
with at least the same value.
\end{enumerate}
\end{property}

Hence, to obtain a PTAS for the original instance, it
suffices to obtain, for any $\epsilon >0$, a
$(1-\frac{\epsilon}{3})$-approximation for an instance
satisfying Property~\ref{prop:preprocessedFF}.
In what follows, we assume that we work with an instance
satisfying Property~\ref{prop:preprocessedFF} and show
how to obtain such an approximation.


Due to the lower bound on $\val(\mathsf{OPT})$ provided
by Property~\ref{prop:preprocessedFF}, we now define
the threshold
$\eta = \Theta(\frac{\epsilon^2}{\log N}) \val(\mathsf{OPT})$
in terms of $w(V)$ by
\begin{equation*}
\eta = \frac{1}{12} \frac{\epsilon^2}{L} w(V),
\end{equation*}
which implies that we can afford losing $L$ times a weight
of $\eta$, which will sum up to a total loss of at 
most\n$\\frac{1}{12}\\epsilon^2 w(V) \\leq \\frac{1}{3} \\epsilon \\val(\\mathsf{OPT})$,\nwhere the inequality is due to\nProperty~\\ref{prop:preprocessedFF}.\n\nConsider again the heavy tree $G[H\\cup \\{r\\}]$. Due to\nProperty~\\ref{prop:preprocessedFF} its topology is quite\nsimple. More precisely, the heavy tree has\nonly $O(\\frac{\\log N}{\\epsilon^3})$ leaves.\nIndeed, each leaf $u\\in H$ of the heavy tree fulfills\n$w(T_u) \\geq \\eta$, and two different leaves $u_1,u_2\\in H$\nsatisfy $T_{u_1} \\cap T_{u_2} = \\emptyset$; since the\ntotal weight of the tree is $w(V)$, the heavy tree\nhas at most\n$w(V)\/\\eta = 12 L \/ \\epsilon^2 = O(\\frac{\\log N}{\\epsilon^3})$\nmany leaves.\n\nIn the next step, we define a well-chosen\nsmall subset $Q$ of heavy vertices\nwhose removal (together with $r$) from $G$ will\nbreak $G$ into components of weight at most $\\eta$.\nSimultaneously, we choose $Q$ such that removing\nit together with $r$ from the heavy tree breaks\nit into paths, over which we will do an enumeration\nlater.\n\n\n\\begin{lemma}\\label{lem:setQ}\nOne can efficiently determine a set $Q\\subseteq H$\nsatisfying the following.\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $|Q|=O(\\frac{\\log N}{\\epsilon^3})$.\n\\item $Q$ contains all leaves and all vertices\nof degree at least $3$ of the heavy tree,\nexcept for the root $r$.\n\\item Removing $Q\\cup\\{r\\}$ from $G$ leads to a\ngraph $G[V\\setminus (Q\\cup \\{r\\})]$ where each\nconnected component has vertices whose weight\nsums up to at most $\\eta$.\n\\end{enumerate}\n\n\\end{lemma}\n\n\nFor each vertex $q\\in Q$, let $H_q\\subseteq H$ be\nall vertices that are visited when traversing the\npath $P_q$ from $q$ to $r$\nuntil (but not including) the next\nvertex in $Q\\cup \\{r\\}$.\nHence, $H_q$ is a subpath of the heavy tree such\nthat $H_q\\cap Q = \\{q\\}$, which we call for brevity\na \\emph{$Q$-path}.\nMoreover the set of all $Q$-paths partitions $H$.\n\n\nWe use an enumeration 
procedure to determine on which\n$Q$-paths to protect a vertex. Since $Q$-paths are subpaths\nof leaf-root paths, we can assume that at most one vertex\nis protected in each $Q$-path.\nOur algorithm enumerates over all $2^{|Q|}$ possible\nsubsets $Z \\subseteq Q$, where $Z$\nrepresents the $Q$-paths on which we will protect a\nvertex. Incorporating this guess into~\\ref{eq:lpFF},\nwe get the following linear program~\\ref{eq:lpFFZ}:\n\n\\begin{equation}\\label{eq:lpFFZ}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\max & \\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u) & & \\\\\n & x(P_u) &\\leq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &\\sum_{i=1}^\\ell B_i\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\\n & x(H_q) &= &1 &\\forall q\\in Z\\\\\n & x(H_q) &= &0 &\\forall q\\in Q\\setminus Z\\\\\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}.\n\\end{array}\\tag{$\\mathrm{LP_{FF}}(Z)$}\\labeltarget{eq:lpFFZtarget}\n\\end{equation}\n\n\nWe start with a simple observation regarding~\\ref{eq:lpFFZ}.\n\\begin{lemma}\\label{lem:isFaceFF}\nThe polytope over which~\\ref{eq:lpFFZ} optimizes is a face\nof the polytope describing the feasible\nregion of~\\ref{eq:lpFF}.\nConsequently, any vertex solution of~\\ref{eq:lpFFZ} is a\nvertex solution of~\\ref{eq:lpFF}.\n\\end{lemma}\n\\begin{proof}\nThe statement immediately follows by observing that\nfor any $q\\in Q$, the inequalities $x(H_q)\\leq 1$\nand $x(H_q)\\geq 0$ are valid inequalities\nfor~\\ref{eq:lpFF}.\nNotice that $x(H_q)\\leq 1$ is a valid inequality\nfor~\\ref{eq:lpFF} because $H_q$ is a subpath of\na leaf-root path, and the load on any leaf-root\npath is limited to $1$ in~\\ref{eq:lpFF}.\n\\end{proof}\n\nAnalogously to~\\ref{eq:lpFF} we define loose and tight\nvertices for a solution to~\\ref{eq:lpFFZ}.\nA crucial implication of Lemma~\\ref{lem:isFaceFF} is that\nLemma~\\ref{lem:sparsityFF} also applies to any vertex\nsolution of~\\ref{eq:lpFFZ}.\n\nWe will show in the 
following that for any choice of\n$Z\\subseteq Q$, the integrality gap of~\\ref{eq:lpFFZ}\nis small and we can efficiently obtain an integral solution of\nnearly the same value as the optimal value of~\\ref{eq:lpFFZ}.\nOur PTAS then follows by enumerating all $Z\\subseteq Q$\nand considering the set $Z\\subseteq Q$ of \nall $Q$-paths on which $\\mathsf{OPT}$ protects a vertex.\nThe low integrality gap of~\\ref{eq:lpFFZ} will follow from the fact\nthat we can now limit the impact of loose vertices.\nMore precisely, any loose vertex outside of the heavy\ntree has LP contribution at most $\\eta$ by definition of\nthe heavy tree. Furthermore, for each loose vertex $u$\non the heavy tree, which lies on some $Q$-path $H_q$,\nits load $x(u)$ can be moved to the single tight vertex\non $H_q$. As we will show, such a load redistribution\nwill decrease the LP-value by at most $\\eta$, due to our choice of $Q$.\n\n\nWe are now ready to state our $(1-\\frac{\\epsilon}{3})$-approximation\nfor an instance satisfying Property~\\ref{prop:preprocessedFF},\nwhich, as discussed, implies a PTAS for the Firefighter problem.\nAlgorithm~\\ref{alg:FF} describes our\n$(1-\\frac{\\epsilon}{3})$-approximation.\n\n\n\n\\begin{algorithm}\n\n\\begin{enumerate}[rightmargin=1em]\n\\item Determine heavy vertices $H=\\{u\\in V \\mid w(T_u) \\geq \\eta\\}$,\nwhere $\\eta=\\frac{1}{12} \\frac{\\epsilon^2}{L} w(V)$.\n\n\\item Compute $Q\\subseteq H$ using Lemma~\\ref{lem:setQ}.\n\n\\item For each $Z\\subseteq Q$, obtain an optimal vertex solution\nto~\\ref{eq:lpFFZ}. 
Let $Z^*\\subseteq Q$ be a set for which the\noptimal value\nof~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}\nis largest among\nall subsets of $Q$, and let $x$ be an optimal vertex\nsolution to~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}.\n\n\\item\\label{algitem:reoptFF}\nLet $V^{\\mathcal{T}}$ be the $x$-tight vertices.\nObtain an optimal vertex solution\nto~\\ref{eq:lpFF} restricted to variables\ncorresponding to vertices in $V^{\\mathcal{T}}$.\nThe solution will be a $\\{0,1\\}$-vector, being the\ncharacteristic vector of a\nset $U\\subseteq V^{\\mathcal{T}}$ which we return.\n\\end{enumerate}\n\n\n\\caption{A $(1-\\frac{\\epsilon}{3})$-approximation for\na general-budget Firefighter instance satisfying\nProperty~\\ref{prop:preprocessedFF}.}\n\\label{alg:FF}\n\\end{algorithm}\n\n\nThe following statement completes the proof of\nTheorem~\\ref{thm:PtasFF}.\n\\begin{theorem}\\label{thm:PtasFFProp}\nFor any general-budget Firefighter instance satisfying\nProperty~\\ref{prop:preprocessedFF},\nAlgorithm~\\ref{alg:FF} computes efficiently a feasible\nset of vertices $U\\subseteq V\\setminus \\{r\\}$ to protect\nthat is a $(1-\\frac{\\epsilon}{3})$-approximation. \n\\end{theorem}\n\\begin{proof}\nFirst observe that the linear program solved in\nstep~\\ref{algitem:reoptFF} will indeed lead to\na characteristic vector with only $\\{0,1\\}$-components.\nThis is the case since no two $x$-tight vertices\ncan lie on the same leaf-root path. 
Hence, as discussed\npreviously, the linear program~\ref{eq:lpFF} restricted\nto variables corresponding to $V^{\mathcal{T}}$ has a totally\nunimodular constraint matrix; indeed, the leaf-root path constraints $x(P_u)\leq 1$\nfor $u\in \Gamma$\nreduce to $x(v)\leq 1$ for $v\in V^{\mathcal{T}}$, and\nthe remaining LP corresponds to a linear program over a laminar\nmatroid, reflecting the budget constraints.\nMoreover, the set $U$ is clearly budget-feasible since \nthe budget constraints are enforced by~\ref{eq:lpFF}.\nAlso, Algorithm~\ref{alg:FF} runs in polynomial time\nbecause $|Q|=O(\frac{\log N}{\epsilon^3})$\nby Lemma~\ref{lem:setQ}, and hence\nthe number of subsets of $Q$ is bounded by\n$N^{O(\frac{1}{\epsilon^3})}$.\n\n\nIt remains to show that $U$ is a\n$(1-\frac{\epsilon}{3})$-approximation.\nLet $\mathsf{OPT}$ be an optimal solution to the considered\nFirefighter instance with value $\val(\mathsf{OPT})$.\nObserve first that the value $\nu^*$\nof~\hyperlink{eq:lpFFZtarget}{$\mathrm{LP_{FF}(Z^*)}$}\nsatisfies $\nu^* \geq \val(\mathsf{OPT})$, because\none of the sets $Z\subseteq Q$ corresponds to\n$\mathsf{OPT}$, namely $Z=\{q\in Q \mid H_q\cap \mathsf{OPT} \neq \emptyset\}$,\nand for this $Z$ the characteristic vector\n$\chi^{\mathsf{OPT}}\in \{0,1\}^{V\setminus \{r\}}$\nof $\mathsf{OPT}$ is feasible\nfor~\ref{eq:lpFFZ}.\nWe complete the proof of Theorem~\ref{thm:PtasFFProp}\nby showing that the value $\val(U)$ of $U$ satisfies\n$\val(U) \geq (1-\frac{\epsilon}{3}) \nu^*$.\nFor this we show how to transform an optimal solution\n$x$ of~\hyperlink{eq:lpFFZtarget}{$\mathrm{LP_{FF}(Z^*)}$}\ninto a solution $y$\nto~\hyperlink{eq:lpFFZtarget}{$\mathrm{LP_{FF}(Z^*)}$}\nwith $\operatorname{supp}(y) \subseteq V^{\mathcal{T}}$\nand such that the objective value $\val(y)$ of $y$ satisfies\n$\val(y)\geq (1-\frac{\epsilon}{3}) \nu^*$.\n\nLet $V^{\mathcal{L}} \subseteq \operatorname{supp}(x)$ be the set of\n$x$-loose 
vertices, and let $H$ be all heavy vertices,\nas usual. To obtain $y$, we start with $y=x$\nand first set $y(u)=0$ for each $u\in V^{\mathcal{L}}\setminus H$.\nMoreover, for each $u\in V^{\mathcal{L}}\cap H$ \nwe do the following. Being part of the heavy vertices and\nfulfilling $x(u)>0$, the vertex $u$\nlies on some $Q$-path $H_{q_u}$ for some $q_u\in Z^*$.\nBecause $x(H_{q_u})=1$, there is a tight vertex\n$v\in H_{q_u}$. We move the $y$-value from vertex\n$u$ to vertex $v$, i.e., $y(v) = y(v)+y(u)$ and\n$y(u)=0$. This finishes the construction of $y$.\nNotice that $y$ is feasible\nfor~\hyperlink{eq:lpFFZtarget}{$\mathrm{LP_{FF}(Z^*)}$},\nbecause it was obtained from $x$ by reducing values\nand moving values to lower levels. \n\nTo upper bound the reduction of the LP-value when\ntransforming $x$ into $y$, we show that the modification\ndone for each loose vertex $u\in V^{\mathcal{L}}$ decreased\nthe LP-value by at most $\eta$.\nClearly, for each $u\in V^{\mathcal{L}}\setminus H$,\nsince $u$ is not heavy we have $w(T_u)\leq \eta$; thus\nsetting $y(u)=0$ will have an impact of at most $\eta$\non the LP value.\nSimilarly, for $u\in V^{\mathcal{L}}\cap H$, moving the\n$y$-value of $u$ to the tight vertex $v$ decreases the LP objective value\nby\n\begin{equation*}\ny(u) \cdot \left(w(T_u) - w(T_{v})\right)\n\leq\nw(T_u) - w(T_v)\n= w(T_u \setminus T_{v})\n\leq \eta,\n\end{equation*}\nwhere the last inequality follows by observing\nthat $T_u \setminus T_{v}\subseteq T_u\setminus T_{q_u}$\nare vertices in the\nsame connected component of $G[V\setminus (Q\cup \{r\})]$,\nand thus have a total weight of at most $\eta$\nby Lemma~\ref{lem:setQ}.\n\nHence,\n$\val(x) - \val(y) \leq |V^{\mathcal{L}}|\n \cdot \eta \leq L\cdot \eta$,\nwhere the second inequality follows since\n$|V^{\mathcal{L}}| \leq L$\nby Lemma~\ref{lem:sparsityFF}, and 
thus\n\begin{align*}\n\val(y) &= \val(x) + \left(\val(y) - \val(x)\right)\n\geq \val(\mathsf{OPT}) + \val(y) - \val(x)\n\geq \val(\mathsf{OPT}) - L\cdot \eta\\\n&= \val(\mathsf{OPT}) - \frac{1}{12}\epsilon^2 w(V)\n\geq \left(1-\frac{1}{3}\epsilon\right) \val(\mathsf{OPT}),\n\end{align*}\nwhere the last inequality\nis due to Property~\ref{prop:preprocessedFF}.\n\n\end{proof}\n\n\n\n\n\n\subsection{Overview of $O(1)$-approximation for RMFC}\n\label{subsec:overviewRMFC}\n\nOur $O(1)$-approximation for RMFC also uses the natural LP,\ni.e.,~\ref{eq:lpRMFC}, as a crucial tool to guide the algorithm.\nThroughout this section we will work on a compressed instance\n$G=(V,E)$ of RMFC, obtained through Theorem~\ref{thm:compressionRMFC}.\nHence, the number of levels is $L=O(\log N)$, where $N$ is the\nnumber of vertices of the original instance. Furthermore, the\nbudget on level $\ell\in [L]$ is given by $B_\ell = 2^{\ell} B$.\nThe advantage of working with a compressed instance for\nRMFC is twofold.\nFirst, we will again apply sparsity arguments to bound, in certain\nsettings, the number of loose (badly structured) vertices by the\nnumber of levels of the instance.\nSecond, the fact that low levels---i.e., levels far away from\nthe root---have high budget will allow\nus to protect a large number of loose vertices by only\nincreasing $B$ by a constant.\n\n\nFor simplicity, we work with a slight variation \nof~\ref{eq:lpRMFC}, where we replace, for $\ell\in [L]$,\nthe budget constraints\n$x(V_{\leq \ell}) \leq \sum_{i=1}^{\ell} B_i$\nby $x(V_\ell) \leq B_\ell$.\nFor brevity, we define\n\begin{equation*}\nP_B = \left\{x\in \mathbb{R}_{\geq 0}^{V\setminus \{r\}}\n \;\middle\vert\;\n x(V_\ell) \leq B\cdot 2^\ell \;\;\forall \ell\in [L]\n \right\}.\n\end{equation*}\nAs previously mentioned (and shown\nin~\cite{ChalermsookChuzhoy2010}), the resulting LP\nis equivalent to~\ref{eq:lpRMFC}.\nFurthermore, 
since the budget $B$ for a feasible RMFC solution\nhas to be chosen integral, we require $B\\geq 1$.\nHence, the resulting linear relaxation asks to find\nthe minimum $B\\geq 1$ such that \nthe following polytope is non-empty:\n\\begin{equation*}\n\\bar{P}_B = P_B \\cap\n \\left\\{x\\in \\mathbb{R}^{V\\setminus \\{r\\}}_{\\geq 0}\n\\;\\middle\\vert\\;\nx(P_u)\\geq 1 \\;\\;\\forall u\\in \\Gamma\\right\\}.\n\\end{equation*}\n\n\nWe start by discussing approaches to partially round a\nfractional point $x\\in \\bar{P}_B$, for some fixed budget $B\\geq 1$.\nAny leaf $u\\in \\Gamma$ is fractionally cut off from\nthe root through the $x$-values on $P_u$. A crucial property\nwe derive and exploit is that leaves that are \n(fractionally) cut off from $r$ largely on low levels,\ni.e., there is high $x$-value on $P_u$ on vertices\nfar away from the root, can be cut off from the root\nvia a set of vertices to be protected that are budget-feasible\nwhen increasing $B$ only by a constant.\nTo exemplify the above statement, consider the level\n$h=\\lfloor \\log L \\rfloor$ as a threshold to define\ntop levels $V_\\ell$ as those with indices $\\ell\\leq h$\nand bottom levels when $\\ell > h$. For any leaf\n$u \\in \\Gamma$,\nwe partition the path $P_u$ into its top\npart $P_u \\cap V_{\\leq h}$ and its bottom part\n$P_u \\cap V_{> h}$. Consider all leaves that are cut\noff in bottom levels by at least $0.5$ units:\n$W=\\{u\\in \\Gamma \\mid x(P_u\\cap V_{> h}) \\geq 0.5\\}$.\nWe will show that there is a subset of\nvertices $R\\subseteq V_{>h}$ on bottom levels\nto be protected that\nis feasible for budget $\\bar{B}=2B+1 \\leq 3B$ and cuts off\nall leaves in $W$ from the root.\nWe provide a brief sketch why this result holds,\nand present a formal proof later.\nIf we set all entries of $x$ on top levels $V_{\\leq h}$\nto zero, we get a vector $y$ with $\\operatorname{supp}(y) \\subseteq V_{>h}$\nsuch that $y(P_u) \\geq 0.5$ for $u\\in W$. 
Hence, $2y$ fractionally\ncuts off all vertices in $W$ from the root and is feasible\nfor budget $2B$. To increase sparsity, we can replace $2y$ by\na vertex $\bar{z}$ of the polytope\n\begin{equation*}\nQ=\left\{z\in \mathbb{R}_{\geq 0}^{V\setminus \{r\}}\n \;\middle\vert\;\n z(V_\ell) \leq 2B\cdot 2^\ell \;\;\forall \ell\in [L],\n z(V_{\leq h}) = 0, z(P_u)\geq 1 \;\;\forall u\in W\right\},\n\end{equation*}\nwhich describes possible ways to cut off $W$ from $r$\nonly using levels $V_{> h}$, and $Q$ is non-empty\nsince $2y\in Q$.\nUsing a sparsity argument analogous to the\none used for the Firefighter problem, we can show that\n$\bar{z}$ has no more than $L$ many $\bar{z}$-loose vertices. \nThus, we can first include all $\bar{z}$-loose vertices\nin the set $R$ of vertices to be protected by increasing\nthe budget of each level $\ell > h$ by at most\n$L\leq 2^{h+1} \leq 2^\ell$.\nThe remaining vertices in $\operatorname{supp}(\bar{z})$ are well structured\n(no two of them lie on the same leaf-root path), and an \nintegral solution can be obtained easily.\nThe new budget value is $\bar{B}=2B+1$, where the ``$+1$''\nterm pays for the loose vertices.\n\nThe following theorem formalizes the above reasoning\nand generalizes it in two ways. 
First, for a leaf $u\\in \\Gamma$\nto be part of $W$, we required it to have a total $x$-value\nof at least $0.5$ within the bottom levels; we will allow\nfor replacing $0.5$ by an arbitrary threshold $\\mu\\in (0,1]$.\nSecond, the level $h$ defining what is top and bottom\ncan be chosen to be of the form $h=\\lfloor \\log^{(q)} L\\rfloor$\nfor $q\\in \\mathbb{Z}_{\\geq 0}$, where\n$\\log^{(q)} L \\coloneqq\n\\log\\log\\dots\\log L$ is the value obtained by\ntaking $q$ many logs of $L$, and\nby convention we set $\\log^{(0)}L \\coloneqq L$.\nThe generalization in terms of $h$ can be thought of as\niterating the above procedure on the RMFC instance\nrestricted to $V_{\\leq h}$.\n\n\n\\begin{theorem}\\label{thm:bottomCover}\nLet $B\\in \\mathbb{R}_{\\geq 1}$, $\\mu \\in (0,1]$, \n$q\\in \\mathbb{Z}_{\\geq 1}$, and\n$h = \\lfloor \\log^{(q)} L\\rfloor$.\nLet $x\\in P_B$ with $\\operatorname{supp}(x)\\subseteq V_{> h}$,\nand we define $W=\\{u\\in \\Gamma \\mid x(P_u) \\geq \\mu\\}$.\nThen one can efficiently compute\na set $R\\subseteq V_{>h}$ such that\n\\smallskip\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $R\\cap P_u \\neq \\emptyset \\quad \\forall u\\in W$, and\n\\item $\\chi^R \\in P_{B'}$, where $B'= \\frac{q}{\\mu}B + 1$\nand $\\chi^R\\in \\{0,1\\}^{V\\setminus \\{r\\}}$ is the\ncharacteristic vector of $R$.\n\\end{enumerate}\n\n\\end{theorem}\n\n\nTheorem~\\ref{thm:bottomCover} has several interesting\nconsequences.\nIt immediately implies an\nLP-based $O(\\log^* N)$-approximation for RMFC, thus\nmatching the currently best approximation result\nby Chalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}:\nIt suffices to start with an optimal LP solution $B\\geq 1$\nand $x\\in \\bar{P}_B$ and invoke the above theorem with\n$\\mu=1$, $q=1+\\log^* L$.\nNotice that by definition of $\\log^*$ we have\n$\\log^* L = \\min \\{\\alpha \\in \\mathbb{Z}_{\\geq 0} \\mid\n\\log^{(\\alpha)} L \\leq 1\\}$; hence\n$h=\\lfloor \\log^{(1+\\log^* L)} L\\rfloor = 0$, 
implying that\nall levels are bottom levels.\nSince the integrality gap of the LP\nis~$\Omega(\log^* N)=\Omega(\log^* L)$,\nTheorem~\ref{thm:bottomCover} captures the limits of what\ncan be achieved by techniques based on the standard LP.\n\nInterestingly, Theorem~\ref{thm:bottomCover} also implies\nthat the $\Omega (\log^* L)$ integrality gap is only\ndue to the top levels of the instance. More precisely, if,\nfor some $q=O(1)$ and $h=\lfloor \log^{(q)} L \rfloor$,\none knew which vertices an optimal solution $R^*$ protects\nwithin the levels $V_{\leq h}$, then a constant-factor\napproximation for RMFC follows easily \nby solving an LP on the\nbottom levels $V_{> h}$ and using Theorem~\ref{thm:bottomCover}\nwith $\mu=1$\nto round the obtained solution.\n\n\nAlso, using Theorem~\ref{thm:bottomCover} it is not hard\nto find constant-factor approximation algorithms for RMFC\nif the optimal budget $B_\mathsf{OPT}$ is large enough, say\n$B_\mathsf{OPT} \geq \log L$.\footnote{Actually, the argument we\npresent in the following works for any\n$B_\mathsf{OPT} = \log^{(O(1))}L$. However, we later only\nneed it for $B_\mathsf{OPT}\geq \log L$ and thus focus \non this case.}\nThe main idea is to solve the LP and define\n$h=\lfloor \log L \rfloor$. Leaves that are largely\ncut off by $x$ on bottom levels can be handled using\nTheorem~\ref{thm:bottomCover}. For the remaining leaves,\nwhich are cut off mostly on top levels, we can resolve an\nLP only on the top levels $V_{\leq h}$ to cut them off.\nThis LP solution is sparse and contains at most $h\leq B$\nloose nodes. Hence, all loose vertices can be selected\nby increasing the budget by at most $h\leq B$, leading\nto a well-structured residual problem for which one can\neasily find an integral solution.\nThe following theorem summarizes this discussion.\nA formal proof for Theorem~\ref{thm:bigBIsGood}\ncan be found in Section~\ref{sec:proofsRMFC}. 
\n\n\\begin{theorem}\\label{thm:bigBIsGood}\nThere is an efficient algorithm that computes a\nfeasible solution to a (compressed) instance of\nRMFC with budget $B\\leq 3 \\cdot \\max\\{\\log L, B_{\\mathsf{OPT}}\\}$.\n\\end{theorem}\n\n\n\n\n\\medskip\n\nIn what follows, we therefore assume $B_\\mathsf{OPT} < \\log L$\nand present an efficient way to partially\nenumerate vertices to be protected on top levels, \nleading to the claimed $O(1)$-approximation.\n\n\n\\subsubsection*{Partial enumeration algorithm}\n\nThroughout our algorithm, we set \n\\begin{equation*}\nh=\\lfloor\\log^{(2)} L\\rfloor\n\\end{equation*}\nto be the threshold level defining top vertices $V_{\\leq h}$\nand bottom vertices $V_{> h}$.\nWithin our enumeration procedure we will solve LPs\nwhere we explicitly include some vertex set\n$A\\subseteq V_{\\leq h}$ to be part of the protected\nvertices, and also exclude some set $D\\subseteq V_{\\leq h}$\nfrom being protected. Our enumeration works by growing\nthe sets $A$ and $D$ throughout the algorithm.\nWe thus define the following LP for two disjoint\nsets $A,D \\subseteq V_{\\leq h}$:\n\\begin{equation}\\label{eq:lpRMFCAD}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\min & B & & & \\\\\n & x &\\in & \\bar{P}_B & \\\\\n & B &\\geq & 1 & \\\\\n & x(u) &= & 1 & \\quad \\forall u\\in A\\\\\n & x(u) &= & 0 & \\quad \\forall u\\in D\\enspace .\\\\\n\\end{array}\\tag{$\\mathrm{LP(A,D)}$}\\labeltarget{eq:lpRMFCADtarget}\n\\end{equation}\nNotice that~\\ref{eq:lpRMFCAD} is indeed an LP even though the\ndefinition of $\\bar{P}_B$ depends on $B$ (but it does so linearly).\n\nThroughout our enumeration procedure, the disjoint\nsets $A, D \\subseteq V_{\\leq h}$ that we consider are\nalways such that for any $u\\in A\\cup D$, we have\n$P_u\\setminus\\{u\\} \\subseteq D$. In other words, the vertices\n$A\\cup D \\cup \\{r\\}$ form the vertex set of a subtree\nof $G$ such that no root-leaf path contains two vertices\nin $A$. 
We call a disjoint pair of\nsets $A,D\subseteq V_{\leq h}$ with this property\na \emph{clean pair}.\n\n\n\nBefore formally stating our enumeration procedure,\nwe briefly discuss the main idea behind it.\nLet $\mathsf{OPT}\subseteq V\setminus \{r\}$ be an optimal solution\nto our (compressed) RMFC instance corresponding to some\nbudget $B_{\mathsf{OPT}} \in \mathbb{Z}_{\geq 1}$. We assume without loss\nof generality that $\mathsf{OPT}$ does not contain redundancies, i.e.,\nthere is precisely one vertex of $\mathsf{OPT}$ on each leaf-root\npath.\nAssume that we already guessed some clean pair\n$A,D \subseteq V_{\leq h}$ of vertex sets to be\nprotected and not to be protected, respectively,\nand that this guess is compatible with $\mathsf{OPT}$, i.e.,\n$A\subseteq \mathsf{OPT}$ and $D\cap \mathsf{OPT}=\emptyset$.\nLet $(x,B)$ be an optimal solution to~\ref{eq:lpRMFCAD}.\nBecause the sets $A$ and $D$ are compatible with\n$\mathsf{OPT}$, the pair $(\chi^\mathsf{OPT}, B_\mathsf{OPT})$ is\nfeasible for \ref{eq:lpRMFCAD}, and hence $B\leq B_{\mathsf{OPT}}$. 
We define\n\\begin{equation*}\nW_x = \\left\\{u\\in \\Gamma \\;\\middle\\vert\\;\n x(P_u \\cap V_{> h}) \\geq \\frac{2}{3}\\right\\}\n\\end{equation*}\nto be the set of leaves cut off from the root\nby an $x$-load of at least $\\mu=\\frac{2}{3}$\nwithin bottom levels.\nFor each $u\\in \\Gamma\\setminus W_x$,\nlet $f_u\\in V_{\\leq h}$ be the vertex closest\nto the root among all vertices in\n$(P_u \\cap V_{\\leq h}) \\setminus D$, and we define\n\\begin{equation}\\label{eq:defFx}\nF_x = \\{f_u \\mid u\\in \\Gamma\\setminus W_x\\} \\setminus A.\n\\end{equation}\nNotice that by definition, no two vertices of $F_x$ lie on\nthe same leaf-root path.\nFurthermore, every leaf $u\\in \\Gamma\\setminus W_x$\nis part of the subtree\n$T_f$ for precisely one $f\\in F_x$.\nThe main motivation for considering $F_x$ is that to guess\nvertices in top levels, we can show that it suffices\nto focus on vertices\nlying below some vertex in $F_x$, i.e., vertices\nin the set $Q_x = V_{\\leq h} \\cap (\\cup_{f\\in F_x} T_{f})$.\nTo exemplify this, we first consider the special case\n$\\mathsf{OPT}\\cap Q_x = \\emptyset$, which will also play\na central role later in the analysis of our algorithm.\nWe show that for this case we can get an\n$O(1)$-approximation to RMFC, even though we may only\nhave guessed a proper subset $A\\subsetneq \\mathsf{OPT}\\cap V_{\\leq h}$\nof the $\\mathsf{OPT}$-vertices within the top levels.\n\n\n\\begin{lemma}\\label{lem:goodEnum}\nLet $(A, D)$ be a clean pair of\nvertices that is compatible with $\\mathsf{OPT}$, i.e.,\n$A\\subseteq \\mathsf{OPT}, D\\cap \\mathsf{OPT} = \\emptyset$,\nand let $x$ be an optimal solution\nto~\\ref{eq:lpRMFCAD}.\nMoreover, let $(y,\\bar{B})$ be an optimal solution to\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,V_{\\leq h} \\setminus A)}$}.\nThen, if $\\mathsf{OPT}\\cap Q_x=\\emptyset$, we have\n$\\bar{B}\\leq \\frac{5}{2} B_{\\mathsf{OPT}}$.\n\nFurthermore, if $\\mathsf{OPT}\\cap Q_x = \\emptyset$,\nby applying 
Theorem~\\ref{thm:bottomCover}\nto $y\\wedge \\chi^{V_{> h}}$ with $\\mu=1$ and $q=2$, a set \n$R\\subseteq V_{> h}$ is obtained such that\n$R\\cup A$ is a feasible solution to RMFC with respect\nto the budget $6 \\cdot B_{\\mathsf{OPT}}$.\\footnote{For two vectors\n$a,b\\in \\mathbb{R}^n$ we denote by $a\\wedge b\\in \\mathbb{R}^n$\nthe component-wise minimum of $a$ and $b$.}\n\\end{lemma}\n\\begin{proof}\nNotice that $\\mathsf{OPT}\\cap Q_x=\\emptyset$\nimplies that for each $u\\in \\Gamma \\setminus W_x$,\nwe either have $A\\cap P_u \\neq \\emptyset$ and thus\na vertex of $A$ cuts $u$ off from the root, or\nthe set $\\mathsf{OPT}$ contains a vertex on $P_u \\cap V_{>h}$.\nIndeed, consider a leaf $u\\in \\Gamma \\setminus W_x$\nsuch that $A\\cap P_u = \\emptyset$.\nThen\n$\\mathsf{OPT}\\cap Q_x = \\emptyset$ implies that no vertex\nof $T_{f_u}\\cap V_{\\leq h}$ is part of $\\mathsf{OPT}$.\nFurthermore, $P_{f_u}\\setminus T_{f_u} \\subseteq D$\nbecause $(A,D)$ is a clean pair and $f_u$ is the\ntopmost vertex on $P_u$ that is not in $D$.\nTherefore, $\\mathsf{OPT} \\cap P_u \\cap V_{\\leq h} = \\emptyset$,\nand since $\\mathsf{OPT}$ must contain a vertex in $P_u$, we must\nhave $\\mathsf{OPT}\\cap P_u \\cap V_{>h}\\neq \\emptyset$.\n\nHowever, this observation implies\nthat $z=\\frac{3}{2}(x\\wedge \\chi^{V_{>h}})\n+(\\chi^{\\mathsf{OPT}} \\wedge \\chi^{V_{>h}})+\\chi^A$\nsatisfies\n$z(P_u) \\geq 1$ for all $u\\in \\Gamma$.\nMoreover we have $z\\in P_{\\frac{3}{2}B+B_{\\mathsf{OPT}}}$\ndue to the following.\nFirst, $x\\wedge \\chi^{V_{>h}} \\in P_B$ and\n$\\chi^{\\mathsf{OPT}} \\in P_{B_{\\mathsf{OPT}}}$, which implies\n$z-\\chi^A\\in P_{\\frac{3}{2}B+B_{\\mathsf{OPT}}}$.\nFurthermore, $\\chi^A\\in P_B$, and the vertices in\n$A$ are all on levels $V_{\\leq h}$ which are disjoint\nfrom the levels on which vertices in \n$\\operatorname{supp}(z-\\chi^A)\\subseteq V_{>h}$ lie,\nand thus do not compete\nfor the same budget.\nHence, $(z,\\frac{3}{2}B+B_{\\mathsf{OPT}})$ 
is feasible for\n\hyperlink{eq:lpRMFCADtarget}{$\mathrm{LP(A,V_{\leq h} \setminus A)}$},\nand thus\n$\bar{B} \leq \frac{3}{2}B + B_{\mathsf{OPT}} \leq \frac{5}{2} B_{\mathsf{OPT}}$,\nas claimed.\n\nThe second part of the lemma follows in a straightforward\nway from Theorem~\ref{thm:bottomCover}.\nObserve first that each leaf $u\in \Gamma$ is\nfully cut off from the root by $y$ either on only top levels\nor on only bottom levels: being a solution to\n\hyperlink{eq:lpRMFCADtarget}{$\mathrm{LP(A,V_{\leq h} \setminus A)}$},\nthe vector $y$ is fixed to $\chi^A$ on the top levels $V_{\leq h}$\nand is hence a $\{0,1\}$-solution there.\nReusing the notation in Theorem~\ref{thm:bottomCover},\nlet $W=\{u\in \Gamma \mid (y\wedge \chi^{V_{> h}})(P_u) \geq 1\}$\nbe all leaves cut off from the root by $y\wedge \chi^{V_{>h}}$.\nBy the above discussion, every leaf is thus either part of $W$\nor it is cut off from the root by vertices in $A$. \nTheorem~\ref{thm:bottomCover} guarantees that $R\subseteq V_{>h}$\ncuts off all leaves in $W$ from the root, and hence, $R\cup A$\nindeed cuts off all leaves from the root.\nMoreover, by Theorem~\ref{thm:bottomCover}, the set\n$R\subseteq V_{> h}$ is feasible with respect to the\nbudget $5B_{\mathsf{OPT}} +1 \leq 6 B_{\mathsf{OPT}}$.\nFurthermore, $A$ is feasible for budget $B_{\mathsf{OPT}}$ because\nit is a subset of $\mathsf{OPT}$. 
Since $A\\subseteq V_{\\leq h}$\nand $R\\subseteq V_{> h}$ are on disjoint levels, the\nset $R\\cup A$ is feasible for the budget $6 B_{\\mathsf{OPT}}$.\n\\end{proof}\n\n\nOur final algorithm is based on a recursive enumeration\nprocedure that computes a polynomial\ncollection of clean pairs $(A,D)$\nsuch that there is one pair $(A,D)$ in the collection\nwith a corresponding LP solution $x$ of \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,D)}$}\nsatisfying that the triple $(A,D,x)$ fulfills the conditions of\nLemma~\\ref{lem:goodEnum}, and thus leading to a\nconstant-factor approximation.\nOur enumeration algorithm\n\\hyperlink{alg:enumRMFCtarget}{$\\mathrm{Enum}(A,D,\\gamma)$}\nis described below.\nIt contains a parameter $\\gamma\\in \\mathbb{Z}_{\\geq 0}$\nthat bounds the recursion depth of the enumerations.\n\n\\smallskip\n\n{\n\\renewcommand{\\thealgocf}{}\n\\begin{algorithm}[H]\n\\SetAlgorithmName{$\\bm{\\mathrm{Enum}(A,D,\\gamma)}$\n\\labeltarget{alg:enumRMFCtarget}\n}{}\n\n\\begin{enumerate}[rightmargin=1em]\n\\item Compute optimal solution $(x,B)$ to \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,D)}$}.\n\n\\item\\label{item:stopWhenBLarge}\n\\textbf{If} $B > \\log L$\\textbf{:} \\textbf{stop}.\nOtherwise, continue with step~\\ref{item:addTriple}.\n\n\\item\\label{item:addTriple}\nAdd $(A,D,x)$ to the family of triples to be considered.\n\n\\item\\label{item:enumRecCall} \\tm{if}{\\textbf{I}}%\n\\textbf{f} $\\gamma\\neq 0$ \\textbf{:}\n\\hfill \\texttt{\/\/recursion depth not yet reached \\quad}\n\n\\quad \\tm{for}{\\textbf{F}}\\textbf{or $u\\in F_x$:}\n\\hfill \\texttt{\/\/$F_x$ is defined as in~\\eqref{eq:defFx} \\quad}\n\n\\quad\\quad Recursive call to $\\mathrm{Enum}(A\\cup\\{u\\},D,\\gamma-1)$.\\\\\n\\quad\\quad \\tm[overlay]{end}{}Recursive call\nto $\\mathrm{Enum}(A,D\\cup \\{u\\},\\gamma-1)$.\n\n\\begin{tikzpicture}[overlay, remember picture]\n\\draw (if) ++ (0,-0.5em) |- ($(if |- end) + (0.2,-0.2)$);\n\\draw (for) ++ (0,-0.5em) |- ($(for |- 
end) + (0.2,-0.1)$);\n\end{tikzpicture}\n\n\vspace{-1.5em}\n\n\end{enumerate}\n\n\n\caption{Enumerating triples $(A,D,x)$ to find one \nsatisfying the conditions of Lemma~\ref{lem:goodEnum}.\n}\n\label{alg:enumRMFC}\n\n\end{algorithm}\n\addtocounter{algocf}{-1}\n}%\n\n\smallskip\n\n\n\nNotice that for any clean pair $(A,D)$ and $u\in F_x$,\nthe two pairs $(A\cup \{u\}, D)$ and $(A, D\cup \{u\})$\nare clean, too. Hence, if we start \n\hyperlink{alg:enumRMFCtarget}%\n{$\mathrm{Enum}(A,D,\gamma)$}\nwith a clean pair $(A,D)$, we will encounter\nonly clean pairs during all recursive calls.\n\nThe key property of the above enumeration procedure\nis that, starting from the trivial clean pair $(\emptyset, \emptyset)$,\nonly a small recursion depth $\gamma$ is needed\nto explore a good triple $(A,D,x)$, i.e., one satisfying\nthe conditions of Lemma~\ref{lem:goodEnum}.\nFurthermore, due to step~\ref{item:stopWhenBLarge},\nwe always have $B\leq \log L$ whenever the\nalgorithm is in step~\ref{item:enumRecCall}. 
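For concreteness, the recursion of \hyperlink{alg:enumRMFCtarget}{$\mathrm{Enum}(A,D,\gamma)$} can be sketched in code. This is only an illustration of the branching structure; `solve_lp` and `compute_F` are hypothetical stand-ins for solving $\mathrm{LP(A,D)}$ and computing the set $F_x$, and are not part of the paper's formal development.

```python
# Sketch of Enum(A, D, gamma). `solve_lp` and `compute_F` are hypothetical
# oracles (an LP solver for LP(A, D) and the computation of F_x).
def enum(A, D, gamma, log_L, triples, solve_lp, compute_F):
    x, B = solve_lp(A, D)          # step 1: optimal solution (x, B) of LP(A, D)
    if B > log_L:                  # step 2: stop when the LP budget is too large
        return
    triples.append((A, D, x))      # step 3: record the triple (A, D, x)
    if gamma != 0:                 # step 4: recursion depth not yet reached
        for u in compute_F(x, A, D):
            enum(A | {u}, D, gamma - 1, log_L, triples, solve_lp, compute_F)
            enum(A, D | {u}, gamma - 1, log_L, triples, solve_lp, compute_F)
```

Each $u\in F_x$ spawns two branches (add $u$ to $A$, or add $u$ to $D$), so the call tree has branching width $2|F_x|$ per node and depth at most $\gamma$.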
As we will see\nlater, this allows us to prove that $|F_x|$ is small, which\nbounds the branching width of our recursive calls and leads to\nan efficient procedure, as highlighted in the following lemma.\n\n\n\begin{lemma}\label{lem:enumWorks}\nLet $\bar{\gamma}= 2(\log L)^2 \log^{(2)} L$.\nThe enumeration procedure \hyperlink{alg:enumRMFCtarget}%\n{$\mathrm{Enum}(\emptyset,\emptyset,\bar{\gamma})$}\nruns in polynomial time.\nFurthermore, if $B_\mathsf{OPT} \leq \log L$, then\n\hyperlink{alg:enumRMFCtarget}%\n{$\mathrm{Enum}(\emptyset,\emptyset,\bar{\gamma})$} will\nencounter a triple $(A,D,x)$ satisfying\nthe conditions of Lemma~\ref{lem:goodEnum}, i.e.,\n\begin{enumerate}[nosep, label=(\roman*)]\n\item $(A,D)$ is a clean pair,\n\item $A\subseteq \mathsf{OPT}$,\n\item $D\cap \mathsf{OPT} = \emptyset$, and\n\item $\mathsf{OPT}\cap Q_x = \emptyset$.\n\end{enumerate}\n\end{lemma}\n\n\nHence, combining Lemma~\ref{lem:enumWorks} and\nLemma~\ref{lem:goodEnum} completes our enumeration procedure\nand implies the following result.\n\n\begin{corollary}\label{cor:summaryEnum}\nLet $\mathcal{I}$ be an RMFC instance on $L$ levels\non a graph $G=(V,E)$ with budgets $B_\ell = 2^\ell \cdot B$.\nThen there is a procedure with running time polynomial\nin $2^L$, returning\na solution $(Q,B)$ for $\mathcal{I}$, where\n$Q\subseteq V\setminus \{r\}$ is a set of vertices\nto protect that is feasible for budget $B$,\nsatisfying the following:\nIf the optimal budget $B_{\mathsf{OPT}}$ for $\mathcal{I}$ satisfies\n$B_{\mathsf{OPT}} \leq \log L$, then $B\leq 6 B_\mathsf{OPT}$.\n\end{corollary}\n\begin{proof}\nIt suffices to run \n\hyperlink{alg:enumRMFCtarget}%\n{$\mathrm{Enum}(\emptyset,\emptyset,\bar{\gamma})$} to\nfirst efficiently obtain a family of triples\n$(A_i,D_i,x_i)_i$, where $(A_i, D_i)$ is a clean pair,\nand $x_i$ is an optimal solution to\n\hyperlink{eq:lpRMFCADtarget}{$\mathrm{LP(A_i,D_i)}$}.\nBy 
Lemma~\\ref{lem:enumWorks}, one of these triples\nsatisfies the conditions of Lemma~\\ref{lem:goodEnum}.\n(Notice that these conditions cannot be checked since\nit would require knowledge of $\\mathsf{OPT}$.)\nFor each triple $(A_i,D_i,x_i)$ we obtain a corresponding\nsolution for $\\mathcal{I}$ following the construction\ndescribed in Lemma~\\ref{lem:goodEnum}. More precisely,\nwe first compute an optimal solution $(y_i,\\bar{B}_i)$ to \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A_i,V_{\\leq h} \\setminus A_i)}$}.\nThen, by applying Theorem~\\ref{thm:bottomCover} to\n$y_i\\wedge \\chi^{V_{> h}}$ with $\\mu=1$ and $q=2$,\na set of vertices\n$R_i\\subseteq V_{> h}$ is obtained such that\n$R_i\\cup A_i$ is feasible for $\\mathcal{I}$ for some\nbudget $B_i$.\nAmong all such sets $R_i\\cup A_i$, we return the one\nwith minimum $B_i$.\nBecause Lemma~\\ref{lem:enumWorks} guarantees that\none of the triples $(A_i, D_i, x_i)$ satisfies the\nconditions of Lemma~\\ref{lem:goodEnum}, we have by\nLemma~\\ref{lem:goodEnum} that the best protection\nset $Q=R_j\\cup A_j$ among all $R_i\\cup A_i$ has a\nbudget $B_j$ satisfying $B_j \\leq 6 B_{\\mathsf{OPT}}$.\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection*{Summary of our $O(1)$-approximation for RMFC}\n\nStarting with an RMFC instance $\\mathcal{I}^{\\mathrm{orig}}$\non a tree with $N$ vertices, we\nfirst apply our compression result, Theorem~\\ref{thm:compressionRMFC},\nto obtain an RMFC instance $\\mathcal{I}$ on a graph $G=(V,E)$ with depth\n$L=O(\\log N)$, and non-uniform budgets $B_\\ell = 2^\\ell B$\nfor $\\ell\\in [L]$.\nLet $B_{\\mathsf{OPT}}\\in \\mathbb{Z}_{\\geq 1}$ be the optimal\nbudget value for $B$ for instance $\\mathcal{I}$%\n---recall that $B=B_{\\mathsf{OPT}}$ in instance $\\mathcal{I}$\nimplies that level $\\ell\\in [L]$ has budget $2^{\\ell} \\cdot B_{\\mathsf{OPT}}$---%\nand let $B_{\\mathsf{OPT}}^{\\mathrm{orig}}$\nbe the optimal budget for $\\mathcal{I}^{\\mathrm{orig}}$.\nBy 
Theorem~\\ref{thm:compressionRMFC}, we have\n$B_{\\mathsf{OPT}} \\leq B_{\\mathsf{OPT}}^{\\mathrm{orig}}$, and any solution\nto $\\mathcal{I}$ using budget $B$ can efficiently be transformed\ninto one of $\\mathcal{I}^{\\mathrm{orig}}$ of budget\n$2B$.\n\nWe now invoke\nTheorem~\\ref{thm:bigBIsGood} and Corollary~\\ref{cor:summaryEnum}.\nBoth guarantee that a solution to $\\mathcal{I}$ with certain properties\ncan be computed efficiently.\nAmong the two solutions derived from Theorem~\\ref{thm:bigBIsGood}\nand Corollary~\\ref{cor:summaryEnum}, we consider the one\n$(Q,B)$ with lower budget $B$, where $Q\\subseteq V\\setminus \\{r\\}$\nis a set of vertices to protect, feasible for budget\n$B$.\nIf $B\\geq \\log L$, then Theorem~\\ref{thm:bigBIsGood} implies\n$B\\leq 3 B_{\\mathsf{OPT}}$, otherwise Corollary~\\ref{cor:summaryEnum}\nimplies $B\\leq 6 B_{\\mathsf{OPT}}$. Hence, in any case we have\na $6$-approximation for $\\mathcal{I}$. As mentioned before,\nTheorem~\\ref{thm:compressionRMFC} implies that the solution\n$Q$ can efficiently be transformed into a solution for the\noriginal instance $\\mathcal{I}^{\\mathrm{orig}}$ that is\nfeasible with respect to the budget\n$2 B \\leq 12 B_{\\mathsf{OPT}} \\leq 12 B^{\\mathrm{orig}}_{\\mathsf{OPT}}$,\nthus implying Theorem~\\ref{thm:O1RMFC}.\n\n\n\n\\section{Details on compression results}\\label{sec:proofsCompression}\n\nIn this section, we present the proofs for our compression results,\nTheorem~\\ref{thm:compressionFF} and Theorem~\\ref{thm:compressionRMFC}.\nWe start by proving Theorem~\\ref{thm:compressionFF}. The same ideas are used\nwith a slight adaptation in the proof of Theorem~\\ref{thm:compressionRMFC}. 
\n\nWe call an instance $\\overline{\\mathcal{I}}$ obtained from \nan instance $\\mathcal{I}$ by a sequence of down-push operations\na \\emph{push-down of} $\\mathcal{I}$.\nWe prove Theorem~\\ref{thm:compressionFF} by proving\nthe following result, of which Theorem~\\ref{thm:compressionFF}\nis an immediate consequence, as we will soon show.\n\n\\begin{theorem}\\label{thm:compressionDownPush}\nLet $\\mathcal{I}$ be a unit-budget Firefighter instance\nwith depth $L$, and let $\\delta\\in (0,1)$.\nThen one can efficiently construct a push-down\n$\\overline{\\mathcal{I}}$\nof $\\mathcal{I}$ such that\n\\smallskip\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item\\label{item:closeToOPT}\n $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$, and\n\n\\item\n$\\overline{\\mathcal{I}}$ has nonzero budget\non only $O(\\frac{\\log L}{\\delta})$ levels.\n\\end{enumerate}\n\\end{theorem}\n\nBefore we prove Theorem~\\ref{thm:compressionDownPush}, we show\nhow it implies \nTheorem~\\ref{thm:compressionFF}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:compressionFF}]\nWe start by showing \nhow levels of zero budget can be removed \nthrough the following \\emph{contraction operation}. \nLet $\\ell \\in \\{2,\\dots, L\\}$ be a level whose budget\nis zero. 
For each vertex\n$u \in V_{\ell-1}$ we contract all edges from $u$\nto its children and increase the\nweight $w(u)$ of $u$ by the sum of the weights\nof all of its children.\nFormally, if $u$ has children $v_1, \dots, v_k\in V_\ell$,\nthe vertices $u,v_1, \dots, v_k$ are replaced by a single\nvertex $z$ with weight $w(z) = w(u) + \sum_{i=1}^k w(v_i)$,\nand $z$ is adjacent to the parent of $u$ and to all children\nof $v_1,\dots, v_k$.\nOne can easily observe that this is an ``exact''\ntransformation in the sense that any solution before\nthe contraction remains one after contraction\nand vice versa (when identifying the vertex $z$\nin the contracted version with $u$);\nmoreover, solutions before and\nafter contraction have the same value.\n\nNow, by first applying Theorem~\ref{thm:compressionDownPush}\nand then applying the latter contraction operations level by\nlevel to all levels\n$\ell\in \{2,\dots, L\}$\nwith zero budget (in an arbitrary order),\nwe obtain an equivalent instance with the desired \ndepth, thus satisfying the conditions of\nTheorem~\ref{thm:compressionFF}.\n\end{proof}\n\n\nIt remains to prove Theorem~\ref{thm:compressionDownPush}.\n\n\n\begin{proof}[Proof of Theorem~\ref{thm:compressionDownPush}]\nConsider a unit-budget Firefighter instance on a tree\n$G=(V,E)$ with depth $L$.\nThe push-down $\overline{\mathcal{I}}$ that we construct\nwill have nonzero budgets precisely on the following\nlevels $\mathcal{L} \subseteq [L]$:\n\begin{equation*}\n\mathcal{L} = \left\{\left\lceil(1+\delta)^j\right\rceil\n \;\middle\vert\; j\in \left\{0,\dots,\n \left\lfloor\frac{\log L}{\log(1+\delta)}\n \right\rfloor\right\}\right\}\n \cup \{L\}.\n\end{equation*}\nFor simplicity, let $\mathcal{L}= \{\ell_1,\dots, \ell_k\}$\nwith $1=\ell_1 < \ell_2 < \dots < \ell_k=L$.\nHence,\n$k=O(\frac{\log L}{\log(1+\delta)})\n = O(\frac{\log L}{\delta})$. 
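As a concrete illustration (not part of the proof), the level set $\mathcal{L}$ can be computed directly from its definition; note that the ratio $\frac{\log L}{\log(1+\delta)}$ is independent of the logarithm base, so natural logarithms may be used.

```python
import math

def budget_levels(L, delta):
    """Levels ceil((1+delta)^j) for j = 0, ..., floor(log L / log(1+delta)), plus L."""
    j_max = math.floor(math.log(L) / math.log(1 + delta))
    levels = {math.ceil((1 + delta) ** j) for j in range(j_max + 1)}
    levels.add(L)  # the last level L is always included
    return sorted(levels)
```

For instance, for $L=16$ and $\delta=1$ this yields the geometric levels $1,2,4,8,16$, and the length of the returned list is the $k=O(\frac{\log L}{\delta})$ from the proof.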
The push-down\n$\\overline{\\mathcal{I}}$ is obtained by pushing\nany budget on a level not in $\\mathcal{L}$ down\nto the next level in $\\mathcal{L}$. Formally,\nfor $i\\in [k]$, the budget $B_{\\ell_i}$\nat level $\\ell_i$ is given by\n$B_{\\ell_i} = \\ell_i - \\ell_{i-1}$, where\nwe set $\\ell_{0}=0$.\nMoreover, $B_\\ell=0$ for\n$\\ell\\in [L]\\setminus \\mathcal{L}$.\nClearly, the instance $\\overline{\\mathcal{I}}$ can be\nconstructed efficiently. Furthermore, the number\nof levels with nonzero budget is equal to\n$k=O(\\frac{\\log L}{\\delta})$ as desired. It remains\nto show point~\\ref{item:closeToOPT}\nof Theorem~\\ref{thm:compressionDownPush}.\n\nTo show~\\ref{item:closeToOPT}, consider an optimal\nredundancy-free solution $S^*\\subseteq V$ of $\\mathcal{I}$; hence,\n$\\val(\\mathsf{OPT}(\\mathcal{I})) = \\sum_{u\\in S^*} w(T_u)$ and\nno two vertices of $S^*$ lie on the same leaf-root path.\nWe will show that there is a feasible solution\n$\\overline{S}$ to $\\overline{\\mathcal{I}}$ such that\n$\\overline{S}\\subseteq S^*$ and the value of\n$\\overline{S}$ is at least $(1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$.\nNotice that since $S^*$ is redundancy-free, any subset\nof $S^*$ is also redundancy-free. 
Hence, the value of\nthe set $\\overline{S}$ to construct will be equal\nto $\\sum_{u\\in \\overline{S}} w(T_u)$.\nThe set $S^*$ being (budget-)feasible for $\\mathcal{I}$\nimplies \n\\begin{equation}\\label{eq:SStarFeasible}\n|S^*\\cap V_{\\leq \\ell}| \\leq \\ell\n \\quad \\forall \\ell\\in [L].\n\\end{equation}\nAnalogously, a set $S\\subseteq V$ is feasible for\n$\\overline{\\mathcal{I}}$ if and only if\n\\begin{equation}\\label{eq:SFeasibleFull}\n|S\\cap V_{\\leq \\ell}| \\leq \\sum_{i=1}^\\ell B_i\n \\quad \\forall \\ell\\in [L].\n\\end{equation}\nHence, we want to show that there is a set $\\overline{S}$\nsatisfying the above system and such that\n$\\sum_{u\\in \\overline{S}}w(T_u)\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$.\nNotice that in~\\eqref{eq:SFeasibleFull}, the constraint\nfor any $\\ell\\in [L-1]$ such that $B_{\\ell+1}=0$ is\nredundant due to the constraint for level $\\ell+1$\nwhich has the same right-hand side but a larger\nleft-hand side.\nThus, system~\\eqref{eq:SFeasibleFull} is equivalent\nto the following system\n\\begin{equation}\\label{eq:SFeasibleShort}\n\\begin{aligned}\n|S\\cap V_{\\leq \\ell_{i+1}-1}| &\\leq \\ell_{i} \n \\quad \\forall i\\in [k-1],\\\\\n|S\\cap V| &\\leq L.\n\\end{aligned}\n\\end{equation}\nTo show that there is a good subset\n$\\overline{S}\\subseteq S^*$ that\nsatisfies~\\eqref{eq:SFeasibleShort} we use a\npolyhedral approach.\nObserve that~\\eqref{eq:SFeasibleFull} is the\nconstraint system of a laminar matroid\n(see~\\cite[Volume B]{Schrijver2003} for more information on matroids).\nHence,\nthe convex hull of all characteristic vectors\n$\\chi^S\\in \\{0,1\\}^V$ of sets $S\\subseteq S^*$\nsatisfying~\\eqref{eq:SFeasibleShort} is given\nby the following polytope\n\\begin{equation*}\nP = \\left\\{\nx\\in [0,1]^V \\;\\middle\\vert\\;\n\\begin{minipage}[c]{0.4\\linewidth}\n\\vspace{-1em}\n\\begin{align*}\nx(V_{\\leq \\ell_{i+1}-1}) &\\leq \\ell_{i} \\;\\;\\forall i\\in [k-1],\\\\\nx(V) &\\leq 
L,\\\\\nx(V\\setminus S^*) &= 0\n\\end{align*}\n\\end{minipage}\n\\right\\}.\n\\end{equation*}\nAlternatively, to see that $P$ indeed\ndescribes the correct polytope,\nwithout relying on matroids, one can observe that its\nconstraint matrix is totally unimodular because it\nhas the consecutive-ones property with respect to the\ncolumns.\n\n\nThus there exists a set $\\overline{S}\\subseteq S^*$ with\n$\\sum_{u\\in \\overline{S}} w(T_u) \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$\nif and only if\n\\begin{equation}\\label{eq:polSb}\n\\max\\left\\{\\sum_{u\\in S^*} x(u)\\cdot\n w(T_u) \\;\\middle\\vert\\;\n x\\in P\\right\\}\\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I})).\n\\end{equation}\nTo show~\\eqref{eq:polSb}, and thus complete the proof,\nwe show that $y=\\frac{1}{1+\\delta} \\chi^{S^*}\\in P$.\nThis will indeed imply~\\eqref{eq:polSb} since the\nobjective value of $y$ satisfies\n\\begin{equation*}\n\\sum_{u\\in S^*} y(u) \\cdot w(T_u) =\n \\frac{1}{1+\\delta}\\val(\\mathsf{OPT}(\\mathcal{I}))\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I})).\n\\end{equation*}\n\nTo see that $y\\in P$, notice that\n$y(V\\setminus S^*)=0$ and\n$y(V) = \\frac{1}{1+\\delta} |S^*|\n\\leq \\frac{1}{1+\\delta} L \\leq L$, where the\nfirst inequality follows by $S^*$\nsatisfying~\\eqref{eq:SStarFeasible} for $\\ell=L$.\nFinally, for $i\\in [k-1]$, we have\n\\begin{align*}\ny(V_{\\leq \\ell_{i+1}-1}) &=\n \\frac{1}{1+\\delta}\n |S^* \\cap V_{\\leq \\ell_{i+1}-1}|\n\\leq \\frac{1}{1+\\delta}(\\ell_{i+1}-1),\n\\end{align*}\nwhere the inequality follows from $S^*$\nsatisfying~\\eqref{eq:SStarFeasible}\nfor $\\ell=\\ell_{i+1}-1$.\nIt remains to show $\\ell_{i+1} -1 \\leq (1+\\delta)\\ell_i$\nto prove $y\\in P$.\nLet $\\alpha \\in \\mathbb{Z}_{\\geq 0}$ be the smallest\ninteger for which we have\n$\\ell_{i+1} = \\lceil (1+\\delta)^{\\alpha}\\rceil$. In\nparticular, this implies\n$\\ell_{i}=\\lceil (1+\\delta)^{\\alpha-1}\\rceil$. 
We\nthus obtain\n\\begin{equation*}\n\\ell_{i+1} - 1 \\leq (1+\\delta)^{\\alpha}\n = (1+\\delta) (1+\\delta)^{\\alpha-1}\n \\leq (1+\\delta) \\ell_i,\n\\end{equation*}\nas desired.\n\n\\end{proof}\n\n\n\n\nWe conclude with the proof of Theorem~\\ref{thm:compressionRMFC}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:compressionRMFC}]\n\n We start by describing the construction of $G' = (V',E')$. As is the case\nin the proof of Theorem~\\ref{thm:compressionFF}, we first change the \nbudget assignment of the instance and then contract all levels with zero budgets.\nNotice that, for a given budget $B$ per layer,\nwe can consider an RMFC instance as a Firefighter instance,\nwhere each leaf $u\\in \\Gamma$ has weight $w(u)=1$, and all other\nweights are zero. Since our goal is to save all leaves, we want\nto save vertices of total weight $|\\Gamma|$.\n\n\nFor simplicity of presentation we assume that $L$ is a power of $2$. This assumption does\nnot compromise generality, as one can always augment the original tree with one path starting from the root and going down to level\n$2^{\\lceil\\log L\\rceil}$.\n\nThe set of levels in which the transformed instance will have\nnonzero budget is \n\\begin{equation*}\n\\mathcal{L} = \\left\\{2^j-1 \\,\\middle\\vert\\, j\\in \\{1,\\ldots, \\log L \\} \\right\\}.\n\\end{equation*}\nHowever, instead of down-pushes we will do \\emph{up-pushes} where\nbudget is moved upwards. More precisely, \nthe budget of any level $\\ell\\in [L]\\setminus \\mathcal{L}$\nwill be assigned to the first level in $\\mathcal{L}$ that\nis above $\\ell$, i.e., has a smaller index than $\\ell$.\nAs for the Firefighter case, we now remove all $0$-budget\nlevels using contraction, which will lead to a new\nweight function $w'$ on the vertices. Since our goal\nis to save the weight of the whole tree,\nwe can remove, for each vertex $u$ with $w'(u) > 0$, the\nsubtree below $u$. 
This does not change the problem since\nwe have to save $u$, and thus will anyway also save its subtree.\nThis finishes our construction of $G'=(V',E')$, and the task\nis again to remove all leaves of $G'$.\nNotice that $G'$ has $L' \\leq |\\mathcal{L}| = \\log L $\nmany levels, and level $\\ell\\in [L']$ has a budget of\n$B 2^{\\ell}$ as desired.\nAnalogous to the\ndiscussion for compression in the context of the Firefighter \nproblem, we have that if the original problem is feasible,\nthen so is the RMFC problem on $G'$ with\nbudgets $B 2^{\\ell}$.\nIndeed, before performing the contraction operations (which\ndo not change the problem), the original RMFC problem was\na push-down of the one we constructed.\n\n\nSimilarly, one can observe that before contraction,\nthe instance we obtained is itself a push-down of\nthe original instance with budgets $2B$ on each level.\nHence, analogously to the compression result for\nthe Firefighter case, any solution to the RMFC problem\non $G'$ can \nefficiently be transformed into a solution to the original\nRMFC problem on $G$ with budgets $2B$ on each level.\n\n\\end{proof}\n\n\n\n\n\n\\section{Missing details for Firefighter PTAS}\\label{sec:proofsFF}\n\nIn this section we present the missing proofs for our PTAS for the\nFirefighter problem.\n\n\nWe start by proving Lemma~\\ref{lem:sparsityFF}, showing that\nany vertex solution $x$ to \\ref{eq:lpFF} has\nfew $x$-loose vertices.\nMore precisely, the proof below shows that the number\nof $x$-loose vertices is upper bounded by the number\nof tight budget constraints.\nThe very same reasoning used in the proof of\nLemma~\\ref{lem:sparsityFF} can also be applied\nin further contexts, in particular for the RMFC problem.\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:sparsityFF}}\n\nLet $x$ be a vertex of the polytope defining the feasible set\nof~\\ref{eq:lpFF}.\nHence, $x$ is uniquely defined by\n$|V\\setminus\\{r\\}|$ many linearly independent and tight\nconstraints of this 
polytope.\nNotice that the tight constraints can be partitioned into\nthree groups:\n\\begin{enumerate}[label=(\\roman*),nosep]\n\\item Tight nonnegativity constraints, one for\neach vertex in $\\mathcal{F}_1=\\{u\\in V\\setminus \\{r\\} \\mid x(u) = 0\\}$.\n\n\\item Tight budget constraints, one for each level in\n$\\mathcal{F}_2 = \\{\\ell\\in [L] \\mid x(V_{\\leq \\ell})=\\sum_{i=1}^\\ell B_i\\}$.\n\n\\item Tight leaf constraints, one for each vertex in\n$\\mathcal{F}_3 = \\{u\\in \\Gamma \\mid x(P_u) = 1\\}$.\n\\end{enumerate}\nDue to potential degeneracies of the polytope describing\nthe feasible set of~\\ref{eq:lpFF} there may be several\noptions to describe $x$ as the unique solution to\na full-rank linear subsystem of the constraints described\nby $\\mathcal{F}_1 \\cup \\mathcal{F}_2 \\cup \\mathcal{F}_3$.\nWe consider a system that contains all tight\nnonnegativity constraints, i.e.,\nconstraints corresponding to $\\mathcal{F}_1$, and\ncomplement these constraints with arbitrary subsets\n$\\mathcal{F}'_2\\subseteq \\mathcal{F}_2$ and \n$\\mathcal{F}'_3\\subseteq \\mathcal{F}_3$ of\nbudget and leaf constraints that lead to a full rank\nlinear system corresponding to the constraints\n$\\mathcal{F}_1 \\cup \\mathcal{F}'_2 \\cup \\mathcal{F}'_3$.\nHence\n\\begin{equation}\\label{eq:fullRankSys}\n|\\mathcal{F}_1| + |\\mathcal{F}'_2| + |\\mathcal{F}'_3| = |V| - 1.\n\\end{equation}\n\n\nLet $V^{\\mathcal{L}}\\subseteq \\operatorname{supp}(x)$\nand $V^{\\mathcal{T}}\\subseteq \\operatorname{supp}(x)$\nbe the $x$-loose and $x$-tight vertices, respectively.\nWe first show $|\\mathcal{F}'_3|\\leq |V^{\\mathcal{T}}|$.\nFor each leaf $u\\in \\mathcal{F}'_3$, let $f_u\\in V^\\mathcal{T}$ be \nthe first vertex on the unique $u$-root path that is part of\n$\\operatorname{supp}(x)$. 
In particular, if $u\\in \\operatorname{supp}(x)$ then $f_u=u$.\nClearly, $f_u$ must be an $x$-tight vertex because\nthe path constraint with respect to $u$ is tight.\nNotice that for any distinct vertices $u_1,u_2\\in \\mathcal{F}'_3$,\nwe must have $f_{u_1}\\neq f_{u_2}$. Assume for the sake of\ncontradiction that $f_{u_1}= f_{u_2}$. This implies\n$\\chi^{P_{u_1}} - \\chi^{P_{u_2}}\\in \\spn(\\{\\chi^{v} \\mid v\\in \\mathcal{F}_1\\})$, since \n$P_{u_1} \\Delta P_{u_2} := (P_{u_1} \\setminus P_{u_2})\\cup (P_{u_2} \\setminus P_{u_1}) \\subseteq \\mathcal{F}_1$, and thus leads to a contradiction\nbecause it exhibits a linear dependence among the constraints\ncorresponding to $\\mathcal{F}'_3$ and $\\mathcal{F}_1$.\nHence, $f_{u_1}\\neq f_{u_2}$, which implies that the\nmap $u \\mapsto f_u$ from $\\mathcal{F}'_3$ to $V^{\\mathcal{T}}$\nis injective and thus\n\\begin{equation}\\label{eq:boundLeafConstr}\n|\\mathcal{F}'_3| \\leq |V^{\\mathcal{T}}|.\n\\end{equation}\nWe thus obtain\n\\begin{align*}\n|\\operatorname{supp}(x)| &= |V|-1-|\\mathcal{F}_1|\n && \\text{($\\operatorname{supp}(x)$ consists of all $u\\in V\\setminus \\{r\\}$ with\n $x(u)\\neq 0$, i.e., $u\\not\\in \\mathcal{F}_1$)}\\\\\n &= |\\mathcal{F}'_2| + |\\mathcal{F}'_3|\n && \\text{(by~\\eqref{eq:fullRankSys})}\\\\\n &\\leq |\\mathcal{F}'_2| + |V^{\\mathcal{T}}|\n && \\text{(by~\\eqref{eq:boundLeafConstr})},\n\\end{align*}\nwhich leads to the desired result since\n\\begin{equation*}\n|V^{\\mathcal{L}}| = |\\operatorname{supp}(x)| - |V^{\\mathcal{T}}|\n \\leq |\\mathcal{F}'_2| \\leq L.\n\\end{equation*}\n\n\n\n\n\n\\qed\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:pruning}}\nWithin this proof we focus on protection sets where the budget available\nfor any level is spent on the same level (and not a later one).\nAs discussed, there is always an optimal protection set\nwith this property.\n\nLet $B_\\ell \\in \\mathbb{Z}_{\\geq 0}$ be the budget available at level $\\ell\\in [L]$ and let \n$\\lambda_\\ell 
= \\lambda B_\\ell$.\n We construct the tree $G'$ using the following greedy procedure. Process\nthe levels of $G$ from the first one to the last one. At every level $\\ell\\in [L]$,\npick $\\lambda_\\ell$ vertices $u^\\ell_1, \\cdots, u^\\ell_{\\lambda_\\ell}$ at the $\\ell$-th \nlevel of $G$ greedily, i.e., pick each next vertex such that the subtree corresponding to that \nvertex has largest weight among all remaining vertices in the level. \nAfter each selection of a vertex the greedy procedure can no longer \nselect any vertex in the corresponding subtree in subsequent iterations.\\footnote{\nFor $\\lambda=1$ this procedure produces a set of vertices, which comprise\na $\\frac{1}{2}$-approximation for the Firefighter problem, as it coincides\nwith the greedy algorithm of Hartnell and Li~\\cite{HartnellLi2000}.}\n\nNow, the tree $G'$ is constructed by deleting from $G$ any vertex\nthat is both not contained in any subtree $T_{u^\\ell_i}$, and not \ncontained in any path $P_{u^\\ell_i}$ for $\\ell\\in [L]$ and $i\\in [\\lambda_\\ell]$.\nIn other words, if $U\\subseteq V$ is the set of all leaves\nof $G$ that were disconnected from the root by the greedy\nalgorithm, then we consider the subtree of $G$ induced\nby the vertices $\\cup_{u\\in U}P_u$.\nFinally, the weights of vertices on the paths \n$P_{u^\\ell_i} \\setminus \\{u^\\ell_i\\}$ for $\\ell\\in [L]$ and $i\\in [\\lambda_\\ell]$ are reduced\nto zero. This concludes the construction of $G'=(V',E')$ and the new weight function $w'$. 
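To illustrate the greedy selection rule with purely hypothetical numbers: suppose $\\lambda = 2$ and $B_1 = 1$, and let level $1$ contain vertices $a$, $b$ and $c$ with subtree weights $w(T_a)=5$, $w(T_b)=3$ and $w(T_c)=2$. Then $\\lambda_1 = 2$, so the procedure picks $u^1_1 = a$ and $u^1_2 = b$, and all vertices of $T_a$ and $T_b$ become ineligible for selection at deeper levels; the vertices of $T_c$ are kept in $G'$ only if some later greedy pick lies inside $T_c$.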
Denote\nby $D_\\ell = \\{u^\\ell_1,\\cdots, u^\\ell_{\\lambda_\\ell}\\}$ the set of vertices chosen by the\ngreedy procedure in level $\\ell$, and let $D=\\cup_{\\ell\\in [L]} D_{\\ell}$.\nObserve that by construction we have that each vertex\nwith non-zero weight is in the subtree of a vertex in $D$, i.e.,\n$$\nw'(V') = \\sum_{u\\in D} w'(T'_u).\n$$\nThe latter immediately implies point~\\ref{item:pruningLargeOpt}\nof Lemma~\\ref{lem:pruning} because the vertices $D$ can\nbe partitioned into $\\lambda$ many vertex sets that are\nbudget-feasible and can thus be protected in a Firefighter solution.\nHence an optimal solution to the Firefighter problem\non $G'$ covers at least a $\\frac{1}{\\lambda}$-fraction of the total\nweight of $G'$.\n\n\nIt remains to prove point~\\ref{item:pruningSmallLoss} of the lemma.\nLet $S^* = S^*_1\\cup \\cdots \\cup S^*_L$ be the vertices protected in some optimal\nsolution in $G$, where $S^*_\\ell \\subseteq V_\\ell$ are the vertices protected in level $\\ell$ (and\nhence $|S^*_\\ell| \\leq B_\\ell$). \nWithout loss of generality, we assume that $S^*$ is redundancy-free.\nFor distinct vertices $u,v\\in V$ we say that $u$ \\emph{covers} $v$ if $v\\in T_u \\setminus \\{u\\}$.\n\nFor $\\ell \\in [L]$, let $I_\\ell = S^*_\\ell \\cap D_\\ell$ be the set of vertices protected \nby the optimal solution that are also chosen by the greedy algorithm in level $\\ell$.\nFurthermore, let $J_\\ell \\subseteq S^*_\\ell$\nbe the set of vertices of the optimal solution that are \ncovered by vertices chosen by the greedy algorithm in earlier\niterations, i.e.,\n$J_\\ell = S^*_\\ell \\cap \\bigcup_{u\\in D_1\\cup\\cdots\\cup D_{\\ell -1}} T_u$. \nFinally, let $K_\\ell = S^*_\\ell \\setminus (I_\\ell \\cup J_\\ell)$ be all other\noptimal vertices in level $\\ell$. Clearly, $S^*_\\ell = I_\\ell \\cup J_\\ell \\cup K_\\ell$ \nis a partition of $S^*_\\ell$.\n\nConsider a vertex $u\\in K_\\ell$ for some $\\ell\\in [L]$. 
From the guarantee of the greedy \nalgorithm it holds that for every vertex $v\\in D_\\ell$ we have $w'(T_v) = w(T_v) \\geq w(T_u)$. \nThe same does not necessarily hold for covered vertices. \nOn the other hand, covered vertices\nare contained in $G'$ with their original weights. We exploit these two \nproperties to prove the existence of a solution in $G'$\nof almost the same weight as $S^*$.\n\nTo prove the existence of a good solution we construct\na solution $A = A_1 \\cup \\cdots \\cup A_L$ with $A_\\ell \\subseteq V_\\ell$ and $|A_\\ell| \\leq B_\\ell$\nrandomly, and prove a bound on its expected quality.\nWe process the levels of the tree $G'$ top-down to construct $A$ step\nby step.\nThis clearly does not compromise generality. Recall that we only need to prove the \nexistence of a good solution, and not compute it efficiently. We can hence assume the\nknowledge of $S^*$ in the construction of $A$. To this end assume that all levels\n$\\ell' < \\ell$ were already processed, and the corresponding sets $A_{\\ell'}$ were\nconstructed. The set $A_{\\ell}$ is constructed as follows:\n\n\\begin{enumerate}\n\\item Include in $A_\\ell$ all vertices in $I_\\ell$.\n\\item Include in $A_\\ell$ all vertices in $J_\\ell$ that are not \ncovered by vertices in $A_1\\cup \\cdots \\cup A_{\\ell-1}$ (vertices selected so far).\n\\item Include in $A_\\ell$ a \\emph{uniformly random subset} of $|K_\\ell|$ vertices\nfrom $D_\\ell \\setminus I_\\ell$.\n\\end{enumerate}\n\nIt is easy to verify that the latter algorithm returns a redundancy-free solution, as no two\nchosen vertices in $A$ lie on the same path to the root. 
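As a purely illustrative iteration (all numbers are hypothetical): let $B_\\ell = 3$ and $\\lambda = 2$, so that $|D_\\ell| = 6$, and suppose $I_\\ell = \\{d\\}$ for some $d\\in D_\\ell$, that $J_\\ell$ contains a single vertex which is not covered by $A_1\\cup\\cdots\\cup A_{\\ell-1}$, and that $|K_\\ell| = 1$. Then $A_\\ell$ consists of $d$, the uncovered vertex of $J_\\ell$, and one vertex drawn uniformly at random from the five vertices of $D_\\ell\\setminus I_\\ell$; in particular, $|A_\\ell| = 3 \\leq B_\\ell$.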
Next, we show that the expected\nweight of vertices saved by $A$ is at least $(1-\\frac{1}{\\lambda})\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))$, \nwhich will prove our claim, since then at least one solution has the desired quality.\n\nSince we only need a bound on the expectation we can focus on a single level $\\ell \\in [L]$ \nand show that the contribution of vertices in $A_\\ell$ is in expectation at least $1-\\frac{1}{\\lambda}$\ntimes the contribution of the vertices in $S^*_\\ell$. Observe that the vertices in $I_\\ell$ are\ncontained both in $S^*_\\ell$ and in $A_\\ell$, hence it suffices to show that the contribution\nof $A_\\ell \\setminus I_\\ell$ is at least $1-\\frac{1}{\\lambda}$ times the contribution \nof $S^*_\\ell \\setminus I_\\ell$, in expectation. Also, recall that every vertex in $D_\\ell$\ncontributes at least as much as any vertex in $K_\\ell$, by the greedy selection rule. It follows\nthat the $|K_\\ell|$ randomly selected vertices in $A_\\ell$ have at least as much contribution\nas the vertices in $K_\\ell$. Consequently, to prove the claim it suffices to bound the \nexpected contribution of vertices in $A_\\ell \\cap J_\\ell$ with respect to the contribution of\n$J_\\ell$. Since $A_\\ell \\cap J_\\ell \\subseteq J_\\ell$ it suffices to show that every vertex\n$u\\in J_\\ell$ is also present in $A_\\ell$ with probability at least $1-\\frac{1}{\\lambda}$.\n\nTo bound the latter probability we make use of the random choices in the construction\nof $A$ as follows. Let $\\ell' < \\ell$ be the level at which for some $w\\in D_{\\ell'}$ it \nholds that $u\\in T_w$. In other words, $\\ell'$ is the level that contains the ancestor \nof $u$ that was chosen by the greedy construction of $G'$. 
Now, since $S^*$ is redundancy-free,\nand by the way that $A$ is constructed, it holds that if $u\\not\\in A_\\ell$ \nthen $w\\in A_{\\ell'}$, namely if $u$ is covered, it can only be covered by the \nunique ancestor $w$ of $u$ that was chosen in the greedy construction of $G'$. Furthermore,\nin such a case the vertex $w$ was selected randomly in the third step of the $\\ell'$-th\niteration. Put differently, the probability that the vertex $u$ is covered \nis exactly the probability that its ancestor $w$ is chosen randomly to be part of $A_{\\ell'}$.\nSince these vertices are chosen to be a random subset of $|K_{\\ell'}|$ vertices from the set $D_{\\ell'}\\setminus I_{\\ell'}$,\nthis probability is at most \n$$\n\\frac{|K_{\\ell'}|}{|D_{\\ell'}| - |I_{\\ell'}|} =\n\\frac{|K_{\\ell'}|}{\\lambda B_{\\ell'} - |I_{\\ell'}|} \\leq \n\\frac{1}{\\lambda}, \n$$\nwhere the last inequality follows from $|K_{\\ell'}| + |I_{\\ell'}| \\leq B_{\\ell'}$.\nThis implies that $u\\in A_\\ell$ with probability at least $1-\\frac{1}{\\lambda}$, as required,\nwhich concludes the proof of the lemma.\n\n\n\\qed\n\n\n\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:setQ}}\n\n\n\nWe construct the set $Q$ in two phases as follows. First we construct \na set $\\overline Q \\subseteq H$ of vertices fulfilling the first and the third properties, i.e.,\nit will satisfy $|\\overline Q| = O(\\frac{\\log N}{\\epsilon^3})$, as well as the property that\n$G[V\\setminus (\\overline Q\\cup \\{r\\})]$ has connected components each of weight at most $\\eta$. Then,\nwe add to $\\overline Q$ all vertices of $H$ of degree at least three to arrive\nat the final set $Q$.\n\nIt will be convenient to define heavy vertices and heavy tree with respect to any \nsubtree $G'= (V', E')$ of $G$ which contains the root $r$. Concretely, we \ndefine $H_{G'} = \\{u\\in V'\\setminus \\{r\\} \\,\\mid\\, w(T'_u)\\geq \\eta\\}$ \nto be the set of $G'$-heavy vertices. 
The $G'$-heavy tree is the\nsubtree $G'[H_{G'} \\cup \\{r\\}]$ of $G'$. Observe that $H = H_G$ and that\n$H_{G'} \\subseteq H$ for every subtree $G'$ of $G$.\n\nTo construct $\\overline Q$ we process the tree $G$ in a bottom-up \nfashion starting with $\\overline Q = \\emptyset$. We will also remove\nparts of the tree at the end of every iteration. The first iteration \nstarts with $G' = G$. In every iteration that starts with tree $G'$, include in \n$\\overline Q$ an arbitrary leaf $u\\in H_{G'}$ of the heavy tree and remove $u$ and all vertices\nin its subtree from $G'$. The procedure ends when there is\neither no heavy vertex in $G'$ anymore, or when $r$ is the\nonly heavy vertex in $G'$.\n\nLet us verify that the claimed properties indeed hold. The fact that \n$|\\overline Q| = O(\\frac{\\log N}{\\epsilon^3})$ follows from the fact that at each iteration \nwe remove a $G'$-heavy vertex together with its entire subtree from the \ncurrent tree $G'$. This implies that the total weight of the tree $G'$\ndecreases by at least $\\eta$ in every iteration. Since we only include one \nvertex in every iteration we have\n$|\\overline Q| \\leq \\frac{w(V)}{\\eta} = O(\\frac{\\log N}{\\epsilon^3})$.\n\nThe third property follows from the fact that we always remove a leaf\nof the $G'$-heavy tree. Observe that the connected components of \n$G[V\\setminus (\\overline Q \\cup \\{r\\})]$ are contained in the subtrees\nwe disconnect in every iteration in the construction of $\\overline Q$.\nBy definition of $G'$-heavy leaves, in any such iteration where \na $G'$-heavy leaf $u$ is removed from the tree, the removed subtree $T'_u$ has weight \nat least $\\eta$, but any subtree rooted at any descendant of $u$ has\nweight strictly smaller than $\\eta$ (otherwise this descendant would\nbe $G'$-heavy as well, contradicting the fact that $u$ is a\nleaf of the $G'$-heavy tree). 
Now, since $u$ is included in $\\overline Q$,\nthe connected components are exactly these subtrees, so the property indeed holds.\n\nTo construct $Q$ and conclude the proof it remains to include in $\\overline Q$\nall remaining nodes of degree at least three in the heavy tree. That \nall leaves of the heavy tree are also included in $Q$ is\nreadily implied by the construction of $\\overline Q$, so the second property \nholds for $Q$. Clearly, by removing more vertices from the heavy tree, the sizes\nof connected components only get smaller, so $Q$ also satisfies the third\ncondition, since $\\overline Q$ already did. Finally, the number of \nvertices of degree at least three in the heavy tree is strictly\nless than the number of its leaves, which is $O(\\frac{\\log N}{\\epsilon^3})$;\nfor otherwise the tree would have an average degree of at least $2$,\nwhich is impossible for a tree.\nThis implies that, in\ntotal, $|Q| = O(\\frac{\\log N}{\\epsilon^3})$,\nso the first property also holds.\n\nTo conclude the proof of the lemma it remains to note that the latter\nconstruction can be easily implemented in polynomial time.\n\n\\qed\n\n\n\n\\section{Missing details for $O(1)$-approximation\nfor RMFC}\\label{sec:proofsRMFC}\n\nThis section contains the missing proofs for our\n$12$-approximation for RMFC.\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:bottomCover}}\n\n\n\nTo prove Theorem~\\ref{thm:bottomCover} we first show\nthe following result, based on which Theorem~\\ref{thm:bottomCover}\nfollows quite directly.\n\n\n\\begin{lemma}\\label{lem:sliceCover}\nLet $B\\in \\mathbb{R}_{\\geq 1}$, $\\eta\\in (0,1]$,\n$k \\in \\mathbb{Z}_{\\geq 1}$, and\n$\\ell_1 = \\lfloor \\log^{(k)} L \\rfloor$,\n$\\ell_2 = \\lfloor \\log^{(k-1)} L \\rfloor$.\nLet $x\\in P_B$ with\n$\\operatorname{supp}(x)\\subseteq V_{(\\ell_1,\\ell_2]}\n \\coloneqq V_{>\\ell_1} \\cap V_{\\leq \\ell_2}$,\nand define $Y = \\{u\\in \\Gamma \\mid x(P_u) \\geq \\eta\\}$.\nThen one can 
efficiently compute a\nset $R\\subseteq V_{(\\ell_1,\\ell_2]}$ such\nthat\n\\smallskip\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item\\label{item:scHitPath}\n$R\\cap P_u \\neq \\emptyset \\quad \\forall u\\in Y$, and\n\n\\item\\label{item:scBudgetOk}\n$\\chi^R\\in P_{\\bar{B}}$,\nwhere $\\bar{B} = \\frac{1}{\\eta} B + 1$.\n\\end{enumerate}\n\\end{lemma}\n\n\nWe first observe that Lemma~\\ref{lem:sliceCover} indeed\nimplies Theorem~\\ref{thm:bottomCover}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:bottomCover}]\nFor $k=1,\\dots, q$, let\n$\\ell_1^k = \\lfloor \\log^{(k)} L\\rfloor$ and\n$\\ell_2^k = \\lfloor \\log^{(k-1)} L\\rfloor$, and we define\n$x^k\\in P_B$ by $x^k = x \\wedge \\chi^{V_{(\\ell_1^k, \\ell_2^k]}}$.\nHence, $x=\\sum_{k=1}^q x^k$.\nFor each $k\\in [q]$, we apply Lemma~\\ref{lem:sliceCover} to\n$x^k$ with $\\eta = \\frac{\\mu}{q}$ to obtain a set\n$R^k \\subseteq V_{(\\ell_1^k, \\ell_2^k]}$ satisfying\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item $R^{k}\\cap P_u \\neq \\emptyset$ \\quad\n$\\forall u\\in Y^k=\\{u\\in \\Gamma \\mid x^k(P_u) \\geq \\eta\\}$, and\n\n\\item $\\chi^{R^k} \\in P_{\\bar{B}}$, where\n$\\bar{B} \\coloneqq \\frac{1}{\\eta} B + 1 = \\frac{q}{\\mu} B + 1\n \\eqqcolon B'$.\n\\end{enumerate}\nWe claim that $R=\\cup_{k=1}^q R^k$ is a set satisfying\nthe conditions of Theorem~\\ref{thm:bottomCover}.\nThe set $R$ clearly satisfies $\\chi^R \\in P_{B'}$\nsince $\\chi^{R^k}\\in P_{B'}$ for $k\\in [q]$\nand the sets $R^k$ are on disjoint levels.\nFurthermore, for each $u\\in W=\\{v\\in \\Gamma \\mid x(P_v)\\geq \\mu\\}$\nwe indeed have $P_u\\cap R\\neq\\emptyset$ due to the following.\nSince $x=\\sum_{k=1}^q x^k$ and $x(P_u) \\geq \\mu$ there exists\nan index $j\\in [q]$ such that $x^j(P_u) \\geq \\eta = \\frac{\\mu}{q}$,\nand hence $P_u \\cap R \\supseteq P_u \\cap R^j \\neq \\emptyset$.\n\n\\end{proof}\n\n\n\n\nThus, it remains to prove Lemma~\\ref{lem:sliceCover}.\n\n\\begin{proof}[Proof of 
Lemma~\\ref{lem:sliceCover}]\\leavevmode\n\nLet $\\tilde{B} = \\frac{1}{\\eta} B$.\nWe start by determining an optimal vertex solution $y$\nto the linear program $\\min\\{z(V\\setminus \\{r\\}) \\mid z\\in Q\\}$,\nwhere\n\\begin{equation*}\nQ = \\{z\\in P_{\\tilde{B}}\n \\mid z(u) = 0\\;\\forall u\\in V\n \\setminus (V_{(\\ell_1,\\ell_2]} \\cup \\{r\\}),\\;\\;\nz(P_u) \\geq 1 \\;\\forall u\\in Y\\}.\n\\end{equation*}\nNotice that $Q\\neq \\emptyset$\nsince $\\frac{1}{\\eta} x \\in Q$; hence, the above\nLP is feasible.\nFurthermore, notice that $y(P_u)\\leq 1$ for $u\\in \\Gamma$;\nfor otherwise, there is a vertex $v\\in \\operatorname{supp}(y)$ such that\n$y(P_v) > 1$, and hence $y - \\epsilon \\chi^{\\{v\\}}\\in Q$\nfor a small enough $\\epsilon >0$, violating that \n$y$ is an \\emph{optimal} vertex solution.\n\n\nLet $V^{\\mathcal{L}}$ be all $y$-loose vertices.\nWe will show that the set\n\\begin{equation*}\nR = V^{\\mathcal{L}} \\cup \\{u\\in V\\setminus \\{r\\} \\mid y(u)=1\\}\n\\end{equation*}\nfulfills the properties claimed by the lemma.\nClearly, $R\\subseteq V_{(\\ell_1,\\ell_2]}$ since\n$\\operatorname{supp}(y) \\subseteq V_{(\\ell_1,\\ell_2]}$.\n\nTo see that condition~\\ref{item:scHitPath}\nholds, let $u\\in Y$, and notice that we have $y(P_u)=1$.\nEither $|P_u \\cap \\operatorname{supp}(y)| =1$, in which case\nthe single vertex $v$ in $P_u\\cap \\operatorname{supp}(y)$ satisfies\n$y(v)=1$ and is thus contained in $R$; or $|P_u\\cap \\operatorname{supp}(y)| > 1$,\nin which case $P_u\\cap V^{\\mathcal{L}} \\neq \\emptyset$ which again\nimplies $R\\cap P_u \\neq \\emptyset$.\n\n\nTo show that $R$ satisfies~\\ref{item:scBudgetOk},\nwe have to show that $R$ does not exceed the budget\n$\\bar{B}\\cdot 2^\\ell = (\\frac{1}{\\eta}B + 1) 2^\\ell$ of any\nlevel $\\ell\\in \\{\\ell_1+1,\\dots, \\ell_2\\}$.\nWe have\n\\begin{align*}\n|R\\cap V_\\ell| \\leq y(V_\\ell) + |V^{\\mathcal{L}}|\n\\leq \\tilde{B} 2^\\ell + |V^{\\mathcal{L}}|\n= \\frac{1}{\\eta} B 2^\\ell 
+ |V^{\\mathcal{L}}|,\n\\end{align*}\nwhere the second inequality follows from $y\\in Q$.\nTo complete the proof it suffices to show\n$|V^{\\mathcal{L}}| \\leq 2^\\ell$.\nThis follows by a sparsity reasoning analogous to\nLemma~\\ref{lem:sparsityFF} implying that the number\nof $y$-loose vertices is bounded by the number\nof tight budget constraints, and thus\n\\begin{equation}\\label{eq:budgetBoundsFirstStep}\n|V^{\\mathcal{L}}| \\leq \\ell_2 - \\ell_1 \\leq \\ell_2\n = \\lfloor \\log^{(k-1)} L \\rfloor.\n\\end{equation}\nFurthermore,\n\\begin{align*}\n2^\\ell &\\geq 2^{\\ell_1+1} = 2^{\\lfloor \\log^{(k)} L \\rfloor + 1}\n\\geq 2^{\\log^{(k)} L} = \\log^{(k-1)} L,\n\\end{align*}\nwhich, together with~\\eqref{eq:budgetBoundsFirstStep},\nimplies $|V^{\\mathcal{L}}| \\leq 2^\\ell$ and thus\ncompletes the proof.\n\n\\end{proof}\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:bigBIsGood}}\n\nLet $(y,B)$ be an optimal solution to the RMFC\nrelaxation $\\min\\{B \\mid x\\in \\bar{P}_B\\}$\nand let $h=\\lfloor \\log L \\rfloor$.\nHence, $B\\leq B_\\mathsf{OPT}$.\nWe invoke Theorem~\\ref{thm:bottomCover} with respect\nto the vector $y\\wedge \\chi^{V_{>h}}$ and $\\mu=0.5$\nto obtain a set $R_1\\subseteq V_{>h}$ satisfying\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $R_1\\cap P_u\\neq \\emptyset\n\\quad\\forall u\\in W$,\nand\n\n\\item $\\chi^{R_1} \\in P_{2B+1}$,\n\\end{enumerate}\nwhere\n$W = \\{u\\in \\Gamma \\mid y(P_u\\cap V_{>h}) \\geq 0.5\\}$.\nHence, $R_1$ cuts off all leaves in $W$ from\nthe root by only protecting vertices on\nlevels $V_{> h}$ and using budget bounded by\n$2B+1 \\leq 3B \\leq 3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\n\n\nWe now focus on the leaves $\\Gamma \\setminus W$,\nwhich we will cut off from the root by protecting\na vertex set $R_2 \\subseteq V_{\\leq h}$ feasible\nfor budget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\nLet $(z,\\bar{B})$ be an optimal vertex\nsolution to the\nfollowing linear 
program\n\\begin{equation}\\label{eq:reoptTop}\n\\min\\left\\{\\bar{B} \\;\\middle\\vert\\;\nx\\in P_{\\bar{B}},\\; \nx(P_u) = 1 \\;\\forall u\\in \\Gamma\\setminus W\n\\right\\}.\n\\end{equation}\nFirst, notice that~\\eqref{eq:reoptTop} is feasible\nfor $\\bar{B}\\leq 2B$. This follows by observing\nthat the vector $q= 2(y\\wedge \\chi^{V_{\\leq h}})$\nsatisfies $q\\in P_{2 B}$ since $y\\in P_B$.\nMoreover, for $u\\in \\Gamma\\setminus W$,\nwe have\n\\begin{equation*}\nq(P_u) = 2 y(P_u \\cap V_{\\leq h})\n= 2 (1-y(P_u \\cap V_{> h})) > 1,\n\\end{equation*}\nwhere the last inequality follows from\n$y(P_u\\cap V_{>h}) < 0.5$ because\n$u\\in \\Gamma\\setminus W$.\nFinally, there exists a vector\n$q' < q$ such that\n$q'(P_u) =1$ for $u\\in \\Gamma\\setminus W$.\nThe vector $q'$ can be obtained from $q$ by\nsuccessively reducing values on vertices\n$v\\in \\operatorname{supp}(q)$ satisfying\n$q(P_v) > 1$.\nThis shows that $(q',2B)$ is a feasible\nsolution to~\\eqref{eq:reoptTop} and hence\n$\\bar{B} \\leq 2B$.\n\nConsider the set of all $z$-loose vertices\n$V^{\\mathcal{L}}=\\{u\\in \\operatorname{supp}(z) \\mid z(P_u)<1\\}$.\nWe define\n\\begin{equation*}\nR_2 = V^{\\mathcal{L}} \\cup \n\\{u\\in \\operatorname{supp}(z) \\mid z(u)=1\\}.\n\\end{equation*}\nNotice that for each $u\\in \\Gamma\\setminus W$,\nthe set $R_2$ contains a vertex on the\npath from $u$ to the root. Indeed, either\n$|\\operatorname{supp}(z)\\cap P_u|=1$ in which case there\nis a vertex $v\\in P_u$ with $z(v)=1$, which is\nthus contained in $R_2$, or $|\\operatorname{supp}(z)\\cap P_u|>1$\nin which case the vertex $v\\in \\operatorname{supp}(z)\\cap P_u$\nthat is closest to the root among all vertices in\n$\\operatorname{supp}(z)\\cap P_u$ is a $z$-loose vertex.\nHence, the set $R=R_1\\cup R_2$ cuts off all leaves\nfrom the root. 
It remains to show that it is\nfeasible for budget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\n\nUsing an analogous sparsity reasoning as in\nLemma~\\ref{lem:sparsityFF}, we obtain that\n$|V^{\\mathcal{L}}|$ is bounded by the number\nof tight budget constraints, which is at most\n$h=\\lfloor \\log L \\rfloor \\leq \\log L$.\nHence, for any level $\\ell\\in [h]$, we have\n\\begin{align*}\n|R_2 \\cap V_\\ell| &\\leq |V^{\\mathcal{L}}| + z(V_\\ell) \\\\\n &\\leq \\log L + 2^\\ell \\bar{B} && \\text{($(z,\\bar{B})$ feasible\nfor~\\eqref{eq:reoptTop})}\\\\\n &\\leq \\log L + 2^\\ell \\cdot (2 B) && \\text{($\\bar{B}\\leq 2B$)}\\\\\n &\\leq 2^\\ell \\cdot (3 \\max\\{\\log L, B_\\mathsf{OPT}\\}).\n && \\text{($B\\leq B_\\mathsf{OPT}$)}\n\\end{align*}\nThus, both $R_1$ and $R_2$ are budget-feasible for\nbudget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$, and since they\ncontain vertices on disjoint levels, $R=R_1\\cup R_2$\nis feasible for the same budget.\n\n\\qed\n\n\n\n\n\n\\subsection*{Proof of Lemma~\\ref{lem:enumWorks}}\n\nTo show that the running time of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nis polynomial, we show that there is only a polynomial\nnumber of recursive calls to\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D,\\gamma)$}. 
Notice that the number\nof recursive calls done in one execution of\nstep~\\ref{item:enumRecCall} of the algorithm is equal\nto $2 |F_x|$.\nWe thus start by upper bounding $|F_x|$ for any solution\n$(x,B)$ to \\ref{eq:lpRMFCAD} with $B < \\log L$.\nConsider a vertex $f_u\\in F_x$, where\n$u\\in \\Gamma\\setminus W_x$.\nSince $u$ is a leaf not in $W_x$, we have\n$x(P_u \\cap V_{\\leq h}) > \\frac{1}{3}$, and\nthus\n\\begin{equation*}\nx(T_{f_u}\\cap V_{\\leq h}) > \\frac{1}{3}\n\\quad \\forall f_u \\in F_x.\n\\end{equation*}\nBecause no two vertices of $F_x$ lie on the same\nleaf-root path, the sets $T_{f_u} \\cap V_{\\leq h}$\nare all disjoint for different $f_u\\in F_x$,\nand hence\n\\begin{align*}\n\\frac{1}{3}|F_x| &< \\sum_{f \\in F_x} x(T_{f}\\cap V_{\\leq h})\\\\\n &\\leq x(V_{\\leq h})\n && \\text{(disjointness of sets $T_{f}\\cap V_{\\leq h}$\n for different $f \\in F_x$})\\\\\n &\\leq \\sum_{\\ell=1}^h 2^\\ell B\n && \\text{($x$ satisfies budget constraints of~\\ref{eq:lpRMFCAD} )}\\\\\n &< 2^{h+1} B\\\\\n &< 2 (\\log L)^2.\n && \\text{($h=\\lfloor \\log^{(2)} L \\rfloor$ and $B < \\log L$)}\n\\end{align*}\nSince the recursion depth is\n$\\bar{\\gamma}=2(\\log L)^2 \\log^{(2)} L$,\nthe number of recursive calls is bounded by\n\\begin{align*}\nO\\left((2 |F_x|)^{\\bar{\\gamma}}\\right) &= \n(\\log L)^{O((\\log L)^2 \\log^{(2)} L)}\n=2^{o(L)} = o(N),\n\\end{align*}\nthus showing that\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nruns in polynomial time.\n\nIt remains to show that \\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} finds\na triple satisfying the conditions of Lemma~\\ref{lem:goodEnum}.\nFor this we identify a particular execution path of the\nrecursive procedure \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} that,\nat any point in the algorithm, will maintain a clean\npair $(A,D)$ that is compatible with 
$\\mathsf{OPT}$,\ni.e., $A\\subseteq \\mathsf{OPT}$ and $D\\cap \\mathsf{OPT} = \\emptyset$.\nAt the beginning of the algorithm we clearly have\ncompatibility with $\\mathsf{OPT}$ since $A=D=\\emptyset$.\nTo identify the execution path we are interested\nin, we highlight which recursive call we want to follow\ngiven that we are on the execution path.\nHence, consider a clean pair $(A,D)$\nthat is compatible with $\\mathsf{OPT}$ and assume we are\nwithin the execution of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D,\\gamma)$}.\nLet $(x,B)$ be an optimal solution to~\\ref{eq:lpRMFCAD}.\nNotice that $B \\leq B_\\mathsf{OPT} \\leq \\log L$, because\n$(A,D)$ is compatible with $\\mathsf{OPT}$.\nIf $\\mathsf{OPT}\\cap Q_x=\\emptyset$, then $(A,D,x)$ fulfills the\nconditions of Lemma~\\ref{lem:goodEnum} and we are done.\nHence, assume $\\mathsf{OPT}\\cap Q_x \\neq \\emptyset$, and\nlet $f \\in F_x$ be such that\n$\\mathsf{OPT}\\cap T_{f}\\cap V_{\\leq h}\\neq \\emptyset$.\nIf $f \\in \\mathsf{OPT}$, then consider the execution path\ncontinuing with the call of \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A\\cup \\{f\\},D,\\gamma-1)$}; otherwise,\nif $f\\not\\in \\mathsf{OPT}$, we focus on the call of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D\\cup \\{f\\},\\gamma-1)$}.\nNotice that compatibility with $\\mathsf{OPT}$ is maintained\nin both cases.\n\nTo show that the thus identified execution path of \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nindeed leads to a triple satisfying the conditions\nof Lemma~\\ref{lem:goodEnum}, we measure progress\nas follows. 
For any clean pair $(A,D)$\ncompatible with $\\mathsf{OPT}$, we define\na potential function $\\Phi(A,D)\\in \\mathbb{Z}_{\\geq 0}$\nin the following way.\nFor each $u\\in \\mathsf{OPT}\\cap V_{\\leq h}$,\nlet $d_u\\in \\mathbb{Z}_{\\geq 0}$\nbe the distance of $u$ to the first vertex in\n$A\\cup D \\cup \\{r\\}$ when following the unique\n$u$-$r$ path. We define\n$\\Phi(A,D)= \\sum_{u\\in \\mathsf{OPT} \\cap V_{\\leq h}} d_u$.\nNotice that as long as we have a triple $(A,D,x)$\non our execution path that does\nnot satisfy the conditions of Lemma~\\ref{lem:goodEnum},\nthen the next triple $(A',D',x')$ on our execution\npath satisfies $\\Phi(A',D') < \\Phi(A,D)$.\nHence, either we will encounter a triple on our\nexecution path satisfying\nthe conditions of Lemma~\\ref{lem:goodEnum}\nwhile still having a strictly positive potential,\nor we will encounter a triple $(A,D,x)$ compatible\nwith $\\mathsf{OPT}$ and $\\Phi(A,D)=0$, which implies\n$\\mathsf{OPT}\\cap V_{\\leq h} = A$,\nand we thus correctly guessed all vertices of\n$\\mathsf{OPT}\\cap V_{\\leq h}$ implying that\nthe conditions of Lemma~\\ref{lem:goodEnum}\nare satisfied for the triple $(A,D,x)$.\nSince $\\Phi(A,D)\\geq 0$ for any compatible clean\npair $(A,D)$, this implies that a triple\nsatisfying the conditions of Lemma~\\ref{lem:goodEnum}\nwill be encountered if the recursion depth $\\bar{\\gamma}$\nis at least $\\Phi(\\emptyset,\\emptyset)$.\nTo evaluate $\\Phi(\\emptyset,\\emptyset)$ we have to compute\nthe sum of the distances of all\nvertices $u\\in \\mathsf{OPT}\\cap V_{\\leq h}$\nto the root. The distance of $u$ to the root is at\nmost $h$ since $u\\in V_{\\leq h}$. Moreover, \n$|\\mathsf{OPT} \\cap V_{\\leq h}| < 2^{h+1} B_{\\mathsf{OPT}}$\ndue to the budget constraints. 
Hence,\n\\begin{align*}\n\\Phi(\\emptyset, \\emptyset)\n &< h \\cdot 2^{h+1} \\cdot B_{\\mathsf{OPT}}\\\\\n &\\leq 2 \\log^{(2)} L \\cdot (\\log L)^2\n && \\text{($h=\\lfloor \\log^{(2)} L \\rfloor$ and $B_\\mathsf{OPT} \\leq \\log L$)}\\\\\n &= \\bar{\\gamma},\n\\end{align*}\nimplying that a triple fulfilling the conditions of\nLemma~\\ref{lem:goodEnum} is encountered by\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}.\n\n\n\\qed\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\nWe are grateful to Noy Rotbart for many stimulating discussions\nand for bringing several relevant references to our attention.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCosmological observations have confirmed the big bang cosmology and determined the cosmological parameters precisely~\\cite{Ade:2015xua}. The matter contents of the Universe may be phenomenologically given by the standard model particles, the cosmological constant $\\Lambda$, and cold dark matter (CDM). However, the theoretical explanation of the origin of the extra ingredients, dark matter and dark energy, is still lacking. The theoretically expected value of the cosmological constant is too large to explain the present accelerating expansion. An alternative idea is that the acceleration is driven by the potential of a scalar field instead of $\\Lambda$, and this idea is often called the quintessence model~\\cite{Caldwell:1997ii}. This scalar field could originate from the gravity sector~\\cite{Fujii:2003pa}. A large class of scalar-tensor theories and $f(R)$ theories can be recast in the form of a theory of a canonical scalar field with a potential after the conformal transformation $\\tilde{g}_{\\mu\\nu}=A^2(\\phi) g_{\\mu\\nu}$ and the field redefinition $\\Phi=\\Phi(\\phi)$, where $\\Phi$ is the canonically normalized field. 
The metric $\\tilde{g}_{\\mu\\nu}$ is called the Jordan frame metric, to which the standard model particles are minimally coupled, whereas $g_{\\mu\\nu}$ is the Einstein frame metric, in which the gravitational action is given by the Einstein-Hilbert action. In this case, the scalar field has a non-minimal coupling to the matter fields via the coupling function $A$.\n\n\nDark matter is also one of the biggest mysteries of modern cosmology. Although many dark matter candidates have been proposed in the context of particle physics, no dark matter particle has been discovered yet~\\cite{Agashe:2014kda,Ackermann:2015zua,Ahnen:2016qkx,Ackermann:2015lka,Khachatryan:2014rra,Conrad:2017pms}. The existence of dark matter is confirmed only via gravitational interactions. Hence, exploring dark matter candidates in the context of gravity is also a reasonable approach. Not only dark energy but also dark matter could be explained by modifications of gravity. For instance, a natural extension of general relativity is a theory with a massive graviton (see \\cite{deRham:2014zqa} for a review). If the graviton obtains a mass, the massive graviton can be a dark matter candidate~\\cite{Dubovsky:2004ud,Pshirkov:2008nr,Aoki:2016zgp,Babichev:2016hir,Babichev:2016bxi,Aoki:2017cnz}.\n\n\nA viable dark matter scenario has to explain the present abundance of dark matter, which usually leads to a constraint on the production scenario. However, a question arises: why are the energy densities of dark matter and baryon almost the same? If baryon and dark matter are produced by a common mechanism, almost the same abundance could be obtained naturally. 
On the other hand, if the two production mechanisms are unrelated, the coincidence might indicate that the two energy densities are tuned to the same order of magnitude by some mechanism operating after the production.\n\n\nIn the present paper, we shall combine two ideas from modified gravity by using the proposal of \\cite{DeFelice:2017oym}: the non-minimal coupling of $\\phi$ and the existence of the massive graviton. We call this theory the chameleon bigravity theory; it contains three types of gravitational degrees of freedom: the massless graviton, the massive graviton, and the chameleon field $\\phi$. We identify the massive graviton with dark matter. Since dark matter originates from the gravity sector, the coupling between $\\phi$ and dark matter may be given in a different way from that of the matter sector. We promote parameters in the graviton mass terms to functions of $\\phi$~\\cite{D'Amico:2012zv,Huang:2012pe}, giving rise to a new type of coupling between $\\phi$ and dark matter. In this case, as discussed in \\cite{DeFelice:2017oym}, the field value of $\\phi$ depends on the environment due to the non-minimal coupling, as with the chameleon field \\cite{Khoury:2003aq,Khoury:2003rn}, which makes the graviton mass depend on the environment.\n\n\nWe find that the ratio between the energy densities of dark matter and baryon is dynamically adjusted to the observed value by the motion of $\\phi$, so that the present ratio is independent of the initial value. Hence, our model can explain the coincidence of the abundances of dark matter and baryon. Furthermore, if the potential of $\\phi$ is designed to act as dark energy, the chameleon field $\\phi$ can give rise to the present acceleration of the universe. Both dark energy and dark matter are thus explained by modifications of gravity in our model. \n\n\nThe paper is organized as follows. We introduce the chameleon bigravity theory in Sec.~\\ref{sec_bigravity}. 
In Sec.~\\ref{sec_Fri}, we derive the Friedmann equation, regarding the massive graviton as dark matter. We also point out the reason why the dark matter-baryon ratio can be naturally explained in the chameleon bigravity theory if we consider the massive graviton as dark matter. Some analytic solutions are given in Sec.~\\ref{sec_analytic} and numerical solutions are shown in Sec.~\\ref{sec_numerical}. These solutions reveal that the observed dark matter-baryon ratio is indeed obtained dynamically, independently of the initial ratio. We summarize our results and give some remarks in Sec.~\\ref{summary}. In the Appendix, we detail the derivation of the Friedmann equation.\n\n\n\\section{Chameleon bigravity theory}\n\\label{sec_bigravity}\n\nWe consider the chameleon bigravity theory in which the mass of the massive graviton depends on the environment~\\cite{DeFelice:2017oym}. The action is given by\n\\begin{align}\nS&=\\int d^4x \\sqrt{-g} \\Biggl[ \\frac{M_g^2}{2}R[g]-\\frac{1}{2}\n K^2(\\phi) \ng^{\\mu\\nu}\\partial_{\\mu}\\phi\\partial_{\\nu}\\phi\n\\nn\n&\\qquad \\qquad \\qquad \\quad +M_g^2m^2 \\sum_{i=0}^4 \\beta_i (\\phi) U_i[s] \\Biggr]\n\\nn\n&+\\frac{M_f^2}{2}\\int d^4 x\\sqrt{-f}R[f]+S_{\\rm m}[\\tilde{g},\\psi]\\,, \\label{action}\n\\end{align}\nwhere $\\phi$ is the chameleon field and $S_{\\rm m}$ is the matter action. The functions $K(\\phi)$ and $\\beta_i(\\phi)$ are arbitrary functions of $\\phi$. The matter fields universally couple to the Jordan frame metric $\\tilde{g}_{\\mu\\nu}=A^2(\\phi)g_{\\mu\\nu}$ with a coupling function $A(\\phi)$. 
The potentials $U_i[s] \\, (i=0,\\cdots, 4)$ are the elementary symmetric polynomials of the eigenvalues of the matrix $s^{\\mu}{}_{\\nu}$ which is defined by the relation~\\cite{deRham:2010ik,deRham:2010kj,Hassan:2011zd}\n\\begin{align}\ns^{\\mu}{}_{\\alpha}s^{\\alpha}{}_{\\nu}=g^{\\mu\\alpha}f_{\\alpha\\nu}\\,.\n\\end{align}\nThe potential of $\\phi$ is not added explicitly since the couplings between $\\phi$ and the potentials $U_i$ already yield a potential for $\\phi$, and thus an additional potential would be redundant.\n\nNote that the field $\\phi$ is not a canonically normalized field. The canonical field $\\Phi$ is given by the relation\n\\begin{align}\nd \\Phi = K(\\phi) d\\phi \\,,\n\\end{align}\nso that the function $K$ does not appear explicitly in the action when we write down the theory in terms of $\\Phi$. Since $\\beta_i$ and $A$ are arbitrary functions, we can set $K=1$ by redefinitions of $\\beta_i$ and $A$ without loss of generality. Nevertheless, we shall retain $K$ and discuss the general form of the action \\eqref{action}.\n\nIn general, the functions $\\beta_i(\\phi)$ can be chosen independently. In the present paper, however, we consider the simplest model such that $\\beta_i(\\phi)=-c_i f(\\phi)$, where the $c_i$ are constants while $f(\\phi)$ is a function of $\\phi$. As we will see in the next section, the graviton mass and the potential of $\\phi$ around the cosmological background are given by\n\\begin{align}\nm_T^2(\\phi)&:=\\frac{1+\\kappa}{\\kappa}m^2 f(\\phi)(c_1+2c_2+c_3)\n\\,, \\\\\nV_0(\\phi)&:=m^2M_p^2 f(\\phi)(c_0+3c_1+3c_2+c_3)\n\\,, \\label{bare_potential}\n\\end{align}\nwith $\\kappa=M_f^2\/M_g^2$ and $M_p^2=M_g^2+M_f^2$. In this case, both the potential form of $\\phi$ and the $\\phi$-dependence of the graviton mass are determined by $f(\\phi)$ only.\\footnote{Since we have absorbed the potential of $\\phi$ in the mass term of the graviton, $m_T^2M_p^2$ and $V_0$ may seem to be of the same order of magnitude. 
However, $m_T^2M_p^2$ and $V_0$ do not need to be of the same order because they represent different physical quantities. Indeed, we will assume $V_0 \\ll m_T^2 M_p^2$.} Note that $V_0$ is the bare potential of $\\phi$. The effective potential of $\\phi$ is determined not only by $V_0$ but also by the amplitude of the massive graviton as well as the energy density of matter, due to the non-minimal couplings (see Eq.~\\eqref{effective_potential}).\n\n\n\n\n\\section{Basic equations}\n\\label{sec_Fri}\nIn this section, we derive the basic equations to discuss the cosmological dynamics in the model \\eqref{action}, supposing that the massive graviton is dark matter. We assume the coherent dark matter scenario in which dark matter is obtained from the coherent oscillation of the zero-momentum-mode massive gravitons~\\cite{Aoki:2017cnz}. Since the zero momentum mode of the graviton corresponds to the anisotropy of the spacetime, we study the Bianchi type I universe instead of the Friedmann-Lema{\\^i}tre-Robertson-Walker (FLRW) universe. The ansatz for the spacetime metrics is\n\\begin{align}\nds_g^2&=-dt^2 +a^2[ e^{4\\sigma_g} dx^2+e^{-2\\sigma_g}(dy^2+dz^2)]\\,, \\label{Bianchi_g} \\\\\nds_f^2&=\\xi^2\\left[ -c^2 dt^2+a^2\\{ e^{4\\sigma_f} dx^2+e^{-2\\sigma_f}(dy^2+dz^2)\\} \\right] \\,, \\label{Bianchi_f}\n\\end{align}\nwhere $\\{a,\\xi,c,\\sigma_g,\\sigma_f\\}$ are functions of the time $t$. We assume the matter field is a perfect fluid whose energy-momentum tensor is given by\n\\begin{align}\nT^{\\mu}{}_{\\nu}=A^4(\\phi)\\times {\\rm diag}[-\\rho(t),P(t),P(t),P(t)]\\,, \\label{Tmunu}\n\\end{align}\nwhere $\\rho$ and $P$ are the energy density and the pressure in the Jordan frame, respectively. 
The conservation law of the matter field is\n\\begin{align}\n\\dot{\\rho}+3\\frac{(Aa)^{\\cdot}}{Aa}(\\rho+P)=0\\,, \\label{conservation}\n\\end{align}\nwhere a dot denotes the derivative with respect to $t$.\n\nAs shown in \\cite{Maeda:2013bha,Aoki:2017cnz}, the small anisotropies $\\sigma_g$ and $\\sigma_f$ can be a dark matter component of the universe in the bimetric model without the chameleon field $\\phi$. We generalize their calculations to the present model \\eqref{action}. All equations under the ansatz \\eqref{Bianchi_g} and \\eqref{Bianchi_f} are summarized in the Appendix. Here, we only show the Friedmann equation and the equations of motion of the massive graviton and the chameleon field because the other equations are not important for the following discussion.\n\n\nWe assume the graviton mass $m_T$ is larger than the Hubble expansion rate $H:=\\dot{a}\/a$. After expanding the equations in terms of the anisotropies and the small parameter $\\epsilon:=H\/m_T$, the Friedmann equation is given by\n\\begin{align}\n3M_p^2H^2= \\rho A^4+\\frac{1}{2}\n K^2 \n\\dot{\\phi}^2+V_0+\\frac{1}{2}\\dot{\\varphi}^2+\\frac{1}{2}m_T^2\\varphi^2 \\,, \\label{Fri_no_h}\n\\end{align}\nwhere $\\varphi$ is the massive graviton, which is given by a combination of the anisotropies $\\sigma_g$ and $\\sigma_f$ (see Eqs.~\\eqref{graviton1} and \\eqref{graviton2}).\nThe equations of motion of the massive graviton $\\varphi(t)$ and the chameleon field $\\phi(t)$ are \n\\begin{align}\n\\ddot{\\varphi}+3H \\dot{\\varphi}+m_T^2(\\phi) \\varphi &=0 \\,, \\label{eq_varphi}\\\\\n K \\left( \\ddot{\\phi}+3H\\dot{\\phi} \\right)\n + \\dot{K} \\dot{\\phi} + \\frac{\\partial V_{\\rm eff}}{ \\partial \\phi}&=0\\,, \\label{eq_cham}\n\\end{align}\nwhere the effective potential of the chameleon field is given by\n\\begin{align}\nV_{\\rm eff}:= V_0(\\phi)+\\frac{1}{2}m_T^2(\\phi)\\varphi^2 + \\frac{1}{4}A^4(\\phi) (\\rho-3P)\n\\,. 
\\label{effective_potential}\n\\end{align}\nNote that, although the bigravity theory contains the degree of freedom of the massless graviton (see Eq.~\\eqref{Friedmann_eq}), we neglect its contribution to the Friedmann equation because the energy density of the massless graviton decreases faster than those of the other fields. The effect of the massless graviton is not important for our discussions.\n\n\nWe notice that the basic equations \\eqref{Fri_no_h}, \\eqref{eq_varphi} and \\eqref{eq_cham} are exactly the same as the equations in the theory with two scalar fields given by the action\n\\begin{align}\nS=\\int d^4x \\sqrt{-g} \\Biggl[ &\\frac{M_p^2}{2} R[g]\n-\\frac{1}{2}K^2(\\phi) (\\partial \\phi)^2 -V_0(\\phi) \\nn\n&-\\frac{1}{2} (\\partial \\varphi)^2 -\\frac{1}{2}m_T^2(\\phi) \\varphi^2 \\Biggr]+S_m [\\tilde{g},\\psi]\n\\,. \\label{another_action}\n\\end{align}\nThe cosmological dynamics in \\eqref{action} with $H \\ll m_T$ can be reduced to that in \\eqref{another_action}. Our results obtained below can be straightforwardly generalized to the case of \\eqref{another_action}, as far as the cosmological dynamics is concerned. The action \\eqref{another_action} gives a toy model of the chameleon bigravity theory. However, the equivalence between \\eqref{action} and \\eqref{another_action} holds only for the background dynamics of the universe when $H\\ll m_T$. The equivalence between the two actions does not hold for small-scale perturbations around the cosmological background~\\cite{Aoki:2017cnz}. \n\n\n
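As a quick numerical illustration of this reduced two-field description, one can integrate the equation of motion of $\varphi$, Eq.~\eqref{eq_varphi}, with a constant graviton mass $m_T \gg H$ on a fixed matter-dominated background and check that the cycle-averaged energy density of the oscillating field redshifts as $a^{-3}$, i.e., as dust. The following is a minimal sketch; all numerical values (the mass, background, time span, and initial data) are illustrative assumptions, not taken from the model:

```python
import math

# Sketch (illustrative values): solve ddot(varphi) + 3 H dot(varphi) + m^2 varphi = 0
# with constant mass m on a fixed matter-dominated background a(t) = t^(2/3),
# H = 2/(3t), and check that rho_varphi * a^3 stays approximately constant,
# i.e. the rapidly oscillating field redshifts like dust.
m = 100.0  # mass in units with t_initial = 1, so H/m << 1 throughout

def rhs(t, y):
    """y = (varphi, dvarphi/dt)."""
    phi, v = y
    H = 2.0 / (3.0 * t)
    return (v, -3.0 * H * v - m * m * phi)

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, (y[0] + dt / 2 * k1[0], y[1] + dt / 2 * k1[1]))
    k3 = rhs(t + dt / 2, (y[0] + dt / 2 * k2[0], y[1] + dt / 2 * k2[1]))
    k4 = rhs(t + dt, (y[0] + dt * k3[0], y[1] + dt * k3[1]))
    return (y[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def comoving_energy(t, y):
    a = t ** (2.0 / 3.0)
    return 0.5 * (y[1] ** 2 + m * m * y[0] ** 2) * a ** 3  # rho_varphi * a^3

t, y, dt = 1.0, (1.0, 0.0), 5.0e-5
e_initial = comoving_energy(t, y)
while t < 4.0:
    y = rk4_step(t, y, dt)
    t += dt
ratio = comoving_energy(t, y) / e_initial
print(ratio)  # remains close to 1, so <rho_varphi> scales as a^{-3}
```

Increasing the assumed mass only tightens the agreement, consistent with the corrections being of order $\epsilon=H/m_T$.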
\n\n\nWe first consider a solution $\\phi=\\phi_{\\rm min}=$ constant which is realized when\n\\begin{align}\n\\frac{\\partial V_{\\rm eff}}{\\partial \\phi}\n=\\alpha_f \\left[ V_0 +\\frac{1}{2}m_T^2 \\varphi^2 \\right] +\\alpha_A (\\rho-3P) A^4=0\n\\,,\n\\label{phi=const}\n\\end{align}\nwhere\n\\begin{align}\n\\alpha_A :=\\frac{1}{K}\\frac{ d \\ln A}{d \\phi}=\\frac{ d \\ln A}{d \\Phi}\\,, \\quad\n\\alpha_f:=\\frac{1}{K}\\frac{d \\ln f}{d \\phi}=\\frac{ d \\ln f}{d \\Phi} \\,.\n\\end{align}\nThe equation \\eqref{phi=const} is not always compatible with $\\phi=$ constant since each term in \\eqref{phi=const} has different time dependence in general. Nonetheless, as we shall see below, they can be compatible with each other if $\\epsilon \\ll 1 $. In other words, a common constant value of $\\phi_{\\rm min}$ can be a solution all the way from the radiation dominant (RD) epoch to the matter dominant (MD) epoch of the universe. When the chameleon field is constant, the bare potential $V_0$ acts as a cosmological constant which has to be subdominant in the RD and the MD eras. The constant $\\phi$ implies that the graviton mass does not vary and thus we obtain\n\\begin{align}\n\\langle \\dot{\\varphi}^2 \\rangle_T= \\langle m_T^2 \\varphi^2 \\rangle_T \\propto a^{-3}\n\\,,\\label{eqn:scaling-mTconst}\n\\end{align}\nwhere $\\langle\\cdots \\rangle_T$ represents the time average over an oscillation period. The massive gravitons behave like a dark matter component of the universe. When we focus on the time scales much longer than $m_T^{-1}$, $m_T^2 \\varphi^2$ in Eq.~\\eqref{phi=const} can be replaced with $\\langle m_T^2 \\varphi^2 \\rangle_T$, which scales as \\eqref{eqn:scaling-mTconst}. Since $\\rho-3P$ also scales as $\\propto a^{-3}$ in the RD and the MD, the decaying laws of $\\rho-3P$ and $m_T^2 \\varphi^2$ in \\eqref{phi=const} are the same in this case. 
Hence, when the oscillation timescale of the massive graviton is much shorter than the timescale of the cosmic expansion, i.e., $\\epsilon \\ll 1 $, $\\phi=$ constant can be a solution all the way from the RD to the MD era. The value of $\\phi_{\\rm min}$ is determined by simply solving Eq.~\\eqref{phi=const}.\n\n\nSupposing that the massive graviton is the dominant component of dark matter, Eq.~\\eqref{phi=const} in the RD and MD eras is replaced with\n\\begin{align}\n\\left( \\alpha_f \\rho_G+2\\alpha_A \\rho_{b} \\right) A^4=0\\,,\n\\end{align}\nwhere $\\rho_b$ is the baryon energy density and we have ignored $V_0$. The energy density of the massive graviton in the Jordan frame is defined by\n\\begin{align}\n\\rho_G:=\\frac{1}{2}A^{-4}\\langle \\dot{\\varphi}^2+m_T^2\\varphi^2 \\rangle_T=A^{-4}m_T^2\\langle \\varphi^2 \\rangle_T\\,,\n\\end{align}\nwhich depends on the chameleon $\\phi$. Therefore, if $\\alpha_A$ and $\\alpha_f$ are chosen such that $\\alpha_A\/\\alpha_f \\simeq -5\/2$, the ratio between dark matter and baryon is automatically tuned to the observed value. The dark matter-baryon ratio could thus be naturally explained without any fine-tuning of the production mechanisms of dark matter and baryon.\n\n\n\nOf course, the initial value of $\\phi$ need not be at the bottom of the effective potential $(\\phi=\\phi_{\\rm min})$. We shall study the dynamics of $\\phi$ and discuss whether $\\phi$ approaches $\\phi_{\\rm min}$ before the MD era of the universe. \nAlthough we do not assume $\\phi$ is constant, we assume $\\phi$ does not move rapidly, so that the graviton mass varies adiabatically,\n\\begin{align}\n\\frac{\\dot{m}_T}{m_T^2} \\ll 1 \\,.\n\\label{adiabatic_condition}\n\\end{align}\nUnder the adiabatic condition \\eqref{adiabatic_condition} we can take the adiabatic expansion for the massive graviton:\n\\begin{align}\n\\varphi=u(t)\\cos\\left[ \\int m_T[\\phi(t)] dt \\right]+\\cdots\\,,\n\\end{align}\nwith a slowly varying function $u(t)$. 
The adiabatic condition \\eqref{adiabatic_condition} is indeed satisfied for $\\epsilon \\ll 1$ since, as we will see, the time dependence of $m_T$ is given by a power law in $a$ (see Eq.~\\eqref{time_dep_m} for example). The time average over an oscillation period yields $\\langle \\dot{\\varphi}^2 \\rangle_T=\\langle m_T^2 \\varphi^2 \\rangle_T=m_T^2 u^2\/2$. \n\n\nAfter taking the time average over an oscillation period under the adiabatic condition, the equations reduce to\n\\begin{align}\n3M_p^2H^2= A^4 \\rho_r +A^4 \\rho_b +\\frac{1}{2} K^2 \\dot{\\phi}^2 +V_0+ \\frac{1}{2}m_T^2 u^2 \\,, \\label{Fri}\n\\end{align}\nand\n\\begin{align}\n K \\left( \\ddot{\\phi}+3H \\dot{\\phi} \\right)\n +\\dot{K} \\dot{\\phi} +\\alpha_f V_0 \n& \\nn\n+\\frac{1}{4}\\alpha_f m_T^2 u^2+\\alpha_A A^4\\rho_b&=0\n\\,, \\label{eq_phi} \\\\\n4\\dot{u}+6H u+ \\alpha_f u K\\dot{\\phi}&=0\n\\,, \\label{eq_u}\n\\end{align} \nwhere $\\rho_r$ and $\\rho_b$ are the energy densities of radiation and baryon, which decrease as $\\rho_r\\propto (aA)^{-4}$ and $\\rho_b \\propto (aA)^{-3}$ because of the conservation equation. The dynamics of the scale factor $a$, the chameleon field $\\phi$, and the amplitude of the massive graviton $u$ are determined by solving these three equations.\n\n\nBy using the density parameters, the Friedmann equation is rewritten as\n\\begin{align}\n1=\\Omega_r+\\Omega_b+\\Omega_{\\phi}+\\Omega_G \\,,\n\\end{align}\nwith\n\\begin{align}\n\\Omega_r&:=\\frac{A^4\\rho_r}{3M_p^2H^2} \n\\,, \\\\\n\\Omega_b&:=\\frac{A^4\\rho_b}{3M_p^2H^2} \n\\,, \\\\\n\\Omega_{\\phi}&:=\\frac{K^2\\dot{\\phi}^2+2V_0}{6M_p^2H^2}\n\\,, \\\\\n\\Omega_G&:=\\frac{m_T^2 u^2}{6M_p^2H^2}\\,.\n\\end{align}\nWe also introduce the total equation of state parameter in the Einstein frame\n\\begin{align}\nw_E:=-1-\\frac{2\\dot{H}}{3H^2}\\,.\n\\end{align}\n\nThe above quantities are defined in the Einstein frame. 
Since the matter fields minimally couple to the Jordan frame metric, the observable universe is described by the Jordan frame metric. Hence, we also define the Hubble expansion rate and the effective equation of state parameter in the Jordan frame as\n\\begin{align}\nH_J&:=\\frac{(Aa)^{\\cdot}}{A^2a} \\,, \\\\\nw_{\\rm tot}&:=-1-\\frac{2\\dot{H}_J}{3AH_J^2}\\,.\n\\end{align}\n\n\n\n\n\n\n\n\n\\section{Analytic solutions}\n\\label{sec_analytic}\nIn this section we show some analytic solutions for the simplest case\n\\begin{align}\nK=1\\,, \\quad A=e^{\\beta\\phi\/M_p}\\,, \\quad\nf=e^{-\\lambda \\phi\/M_p}\\,, \\label{model_A}\n\\end{align}\nwith dimensionless constants $\\beta$ and $\\lambda$. In this model the coupling strengths $\\alpha_A$ and $\\alpha_f$ are constant.\nWe consider four stages of the universe: the radiation dominant era, the era around radiation-matter equality, the matter dominant era, and the accelerated expansion era. The analytic solutions in each stage of the universe are found as follows.\n\n\\subsection{Radiation dominant era}\nWe first consider the regime in which the contributions to the Friedmann equation from baryon and dark matter are subdominant, that is, $\\Omega_b, \\Omega_G \\ll 1$. The Hubble expansion rate is then determined by the energy densities of radiation and $\\phi$. Since the effective potential of $\\phi$ is determined by the energy densities of baryon and dark matter, in this situation the potential force can be ignored compared with the Hubble friction term ($V_0$ is assumed to be negligible throughout both the radiation and matter dominated eras).\nThen, we obtain\n\\begin{align}\n\\dot{\\phi}\\propto a^{-3}\n\\,,\n\\end{align}\nwhich indicates that the field $\\phi$ loses its velocity due to the Hubble friction, and then $\\phi$ becomes a constant $\\phi_i$. We can ignore $\\Omega_{\\phi}$ and then recover the standard RD universe. 
At some fixed time deep in the radiation dominant era, we therefore set $\\phi=\\phi_i$ as the initial condition of $\\phi$. We shall then denote the initial values of the energy densities of baryon and the massive graviton as $\\rho_{b,i}$ and $\\rho_{G,i}$, respectively. \n\n\nNote that this constant initial value of $\\phi$ does not necessarily coincide with the potential minimum $\\phi=\\phi_{\\rm min}$, i.e., in general $\\phi_i \\neq \\phi_{\\rm min}$. The ratio $\\rho_{G,i}\/\\rho_{b,i}$ is not tuned to be five at this stage.\n\n\n\n\\subsection{Following-up era}\nWe then discuss the era just before radiation-matter equality, in which we can no longer ignore the potential force on $\\phi$. As discussed in the previous subsection, we find $\\phi=\\phi_i$ in the RD universe. When the potential force on $\\phi$ becomes relevant, the chameleon field $\\phi$ starts to evolve towards the potential minimum $\\phi=\\phi_{\\rm min}$. Due to the motion of $\\phi$, the smaller of $\\rho_G$ and $\\rho_b$ follows up (catches up with) the larger one. We obtain $\\rho_G\/\\rho_b=-2\\alpha_A\/\\alpha_f$ when the chameleon field reaches the minimum $\\phi_{\\rm min}$. We call this era of the universe the following-up era.\n\nIf the initial value $\\phi_i$ is close to the potential minimum $\\phi_{\\rm min}$, the dark matter-baryon ratio is already tuned to almost the value $-2\\alpha_A\/\\alpha_f$, which we set to $\\sim 5$, and thus we do not need to discuss this case. We therefore study the case with $\\phi_i < \\phi_{\\rm min}$ and the case with $\\phi_i > \\phi_{\\rm min}$ (which correspond to $\\rho_{G,i} \\gg \\rho_{b,i}$ and $\\rho_{b,i} \\gg \\rho_{G,i}$, respectively). 
We shall discuss them in order.\n\n\n\n\\subsubsection{$\\rho_{G,i}\\gg \\rho_{b,i}$ before the equality time}\n\\label{DM>>b}\nIf dark matter (i.e., massive gravitons) is over-produced, the equations are reduced to\n\\begin{align}\n\\ddot{\\phi}+3H \\dot{\\phi}-\\frac{\\lambda}{4M_p} m_T^2 u^2 =0\\,,\n\\\\\n3M_p^2 H^2 =A^4 \\rho_r +\\frac{1}{2}\\dot{\\phi}^2+\\frac{1}{2}m_T^2 u^2\n\\,,\n\\end{align}\nand \\eqref{eq_u}, where we have ignored the contributions from baryon. The system admits a scaling solution\n\\begin{align}\n\\phi&=\\frac{M_p}{\\lambda} \\ln t +{\\rm constant}\\,,\n\\nn\nu&\\propto t^{-1\/2}\n\\,,\n\\nn\na&\\propto t^{1\/2}\n\\,, \\label{scaling_DM}\n\\end{align}\nwhere the density parameters in the Einstein frame are given by\n\\begin{align}\n\\Omega_G=\\frac{4}{3\\lambda^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{2}{3 \\lambda^2}\n\\,, \\quad\n\\Omega_r=1-\\frac{2}{\\lambda^2}\n\\,.\n\\end{align}\nThe effective equation of state parameter in the Jordan frame is given by\n\\begin{align}\nw_{\\rm tot}=\\frac{\\lambda-2\\beta}{3(\\lambda+2\\beta)} \\,,\n\\end{align}\nand then $w_{\\rm tot} =-2\/9$ if $2\\beta=5\\lambda$. This solution exists only when $\\lambda^2>2$ since the density parameter has to satisfy $0<\\Omega_r<1$. \n\n\nFor this scaling solution, the graviton mass decreases as\n\\begin{align}\nm_T^2 \\propto a^{-2}\\,, \\label{time_dep_m}\n\\end{align}\nwhich guarantees the adiabatic condition \\eqref{adiabatic_condition} when $\\epsilon \\ll 1$.\nThe energy density of massive gravitons in the Einstein frame decreases as\n\\begin{align}\nA^4 \\rho_G =\\frac{1}{2}m_T^2 u^2 \\propto a^{-4}\n\\,.\n\\end{align}\nOn the other hand, the energy density of baryon in the Einstein frame ``increases'' as\n\\begin{align}\nA^4 \\rho_b \\propto A a^{-3} \\propto a^{-3+2\\frac{\\beta}{\\lambda}}\n\\,;\n\\end{align}\nfor example, we obtain $A^4 \\rho_b \\propto a^2$ when $2\\beta = 5 \\lambda$. 
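These scaling relations can be double-checked with exact rational arithmetic. A small sketch, taking the illustrative values $\lambda=2$ and $\beta=5$ (an assumption chosen so that $2\beta=5\lambda$ and the existence condition $\lambda^2>2$ holds):

```python
from fractions import Fraction

# Illustrative values with 2*beta = 5*lam and lam^2 > 2 (assumed for this check).
lam, beta = Fraction(2), Fraction(5)

# The Einstein-frame density parameters of the scaling solution sum to unity.
Omega_G = Fraction(4, 3) / lam**2
Omega_phi = Fraction(2, 3) / lam**2
Omega_r = 1 - 2 / lam**2
assert Omega_G + Omega_phi + Omega_r == 1 and 0 < Omega_r < 1

# Jordan-frame equation of state of the scaling solution.
w_tot = (lam - 2 * beta) / (3 * (lam + 2 * beta))
assert w_tot == Fraction(-2, 9)

# Einstein-frame baryon density grows as a^(-3 + 2*beta/lam) = a^2 here.
exponent_b = -3 + 2 * beta / lam
assert exponent_b == 2
print(w_tot, exponent_b)
```

Note that the sum of the density parameters equals unity for any $\lambda$, while $w_{\rm tot}=-2/9$ holds for any pair with $2\beta=5\lambda$.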
Therefore, even if baryons are initially negligible, the baryon energy density grows and can no longer be ignored once it becomes comparable to that of dark matter.\n\nNote that the Jordan frame energy density of baryons, $\\rho_b$, always decays as $a_J^{-3}$, where $a_J=Aa$ is the scale factor of the Jordan frame metric. The quantity $A^4 \\rho_b$ is the energy density in the Einstein frame.\n\nIn the Einstein frame, the interpretation of the peculiar behavior of $A^4\\rho_G$ and $A^4\\rho_b$ is that the energy density of massive gravitons is converted into that of baryons through the motion of the chameleon field $\\phi$. Although we have considered non-relativistic massive gravitons, their energy density in the Einstein frame behaves as radiation, which implies that the field $\\phi$ removes the energy of massive gravitons (indeed, the graviton mass decreases due to the motion of $\\phi$). The removed energy is transferred into baryons via the non-minimal coupling.\n\nDuring the scaling solution, the massive graviton never dominates over radiation because the energy densities of the massive graviton and radiation obey the same decaying law $A^4\\rho_r, A^4\\rho_G \\propto a^{-4}$. Hence, the field $\\phi$ can reach the bottom of the effective potential before the MD era. After reaching the bottom of the effective potential, the standard decaying laws for matter, $A^4\\rho_r \\propto a^{-4}$ and $A^4\\rho_G, A^4 \\rho_b \\propto a^{-3}$, are recovered, and the usual dynamics of the universe is obtained with the observed dark matter-baryon ratio. \n\nWe note that the following-up of the baryon energy density can be realized even if the scaling solution does not exist $(\\lambda^2<2)$. 
The dynamics of this case is numerically studied in Sec.~\\ref{sec_numerical}.\n\n\n\n\\subsubsection{$\\rho_{b,i}\\gg \\rho_{G,i}$ before the equality time}\n\\label{b>>DM}\nIn this case, the equations for the scale factor and $\\phi$ form a closed system given by\n\\begin{align}\n\\ddot{\\phi}+3H\\dot{\\phi}+\\frac{\\beta}{M_p} A^4 \\rho_b=0\\,,\n\\\\\n3M_p^2 H^2=A^4 \\rho_r+ A^4 \\rho_b+\\frac{1}{2}\\dot{\\phi}^2\n\\,. \n\\end{align}\nThe scaling solution is then found as\n\\begin{align}\n\\phi&=-\\frac{M_p}{2\\beta} \\ln t +{\\rm constant} \\,,\n\\nn\na &\\propto t^{1\/2}\n\\,,\n\\end{align}\nin which the density parameters are\n\\begin{align}\n\\Omega_b=\\frac{1}{3 \\beta^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{1}{6\\beta^2}\n\\,, \\quad\n\\Omega_r=1-\\frac{1}{2\\beta^2}\n\\,,\n\\end{align}\nwhere $\\beta$ has to satisfy $\\beta^2>1\/2$.\n\nDuring this scaling solution, the universe does not expand in the Jordan frame. Although the Einstein frame scale factor expands as in the RD universe, $a\\propto t^{1\/2}$, the Jordan frame scale factor is given by\n\\begin{align}\na_J=aA={\\rm constant}\n\\,.\n\\end{align}\n\nThe solution for $u$ is found by substituting the scaling solution into \\eqref{eq_u}. We obtain\n\\begin{align}\nu\\propto a^{-\\frac{3}{2}-\\frac{\\lambda}{4\\beta}}\n\\,,\\quad\nm_T^2 \\propto a^{\\lambda\/\\beta}\n\\,,\n\\end{align}\nand then the energy density of massive gravitons varies as\n\\begin{align}\nA^4 \\rho_G \\propto a^{-3+\\lambda\/2\\beta}\\,.\n\\end{align}\nThe adiabatic condition \\eqref{adiabatic_condition} is guaranteed when $\\epsilon \\ll 1$.\nWhen $2\\beta \\simeq 5 \\lambda$, the graviton mass roughly increases as $m_T^2 \\propto a^{2\/5}$ and the energy density of massive gravitons in the Einstein frame decreases as $A^4\\rho_G \\propto a^{-14\/5}$. 
Therefore, even if the energy density of massive gravitons is significantly lower than that of baryons, the correct dark matter-baryon ratio is realized in time, since the energy density of massive gravitons decreases more slowly than that of baryons. \n\n\n\n\\subsection{Matter dominated era}\nAfter $\\phi$ reaches the potential minimum $\\phi_{\\rm min}$, the chameleon field $\\phi$ does not move during the MD era. As shown in \\cite{Aoki:2017cnz}, when $\\phi$ is constant, the massive graviton behaves as CDM and then the standard MD universe is obtained.\n\n\\subsection{Accelerating expansion era}\n\\label{sec_acc}\nAfter the MD era, the universe must undergo accelerating expansion due to dark energy. Although one can introduce a new field to obtain the acceleration, we consider a minimal scenario in which the chameleon field itself plays the role of dark energy, i.e., the accelerating expansion is realized by the potential $V_0$. When $V_0$ becomes relevant to the dynamics of $\\phi$, the chameleon field rolls down again, which leads to a decrease of $m_T$. As a result, the energy density of massive gravitons rapidly decreases, and we can then ignore the contributions from massive gravitons. 
The basic equations during the accelerating expansion are thus given by\n\\begin{align}\n3M_p^2 H^2=A^4 \\rho_b+\\frac{1}{2}\\dot{\\phi}^2 +V_0 \n\\,, \\label{Fri_DE} \\\\\n\\ddot{\\phi}+3H\\dot{\\phi}-\\frac{\\lambda}{M_p}V_0+\\frac{\\beta}{M_p}A^4 \\rho_b=0\n\\,, \\label{eq_phi_DE}\n\\end{align}\nwhich yield a scaling solution\n\\begin{align}\n\\phi&=\\frac{2M_p}{\\lambda} \\ln t +{\\rm constant}\n\\,, \\nn\na&\\propto t^{\\frac{2}{3}(1+\\beta\/\\lambda)} \\,,\n\\end{align}\nin which\n\\begin{align}\n\\Omega_b=\\frac{\\lambda^2+\\beta \\lambda -3}{(\\beta+\\lambda)^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{\\beta^2+\\beta \\lambda +3}{(\\beta+\\lambda)^2}\n\\,,\n\\end{align}\nand\n\\begin{align}\nw_{\\rm tot}=-\\frac{2\\beta}{4\\beta+\\lambda}\n\\,.\n\\end{align}\nThe scaling solution exists when\n\\begin{align}\n\\lambda(\\beta+\\lambda)>3\n\\,. \\label{inequality_DE}\n\\end{align}\nFor $2\\beta=5\\lambda$, we find $w_{\\rm tot}=-5\/11$ and the inequality \\eqref{inequality_DE} reduces to $\\lambda^2 >6\/7$.\n\nThe amplitude of the massive graviton is given by\n\\begin{align}\nu \\propto t^{-\\frac{1}{2}(1+2\\beta\/\\lambda)}\n\\,,\n\\end{align}\nand then the density parameter of the massive graviton decreases as\n\\begin{align}\n\\Omega_G\\propto t^{-1-2\\beta\/\\lambda}\n\\,.\n\\end{align}\nThe energy density of massive gravitons gives only a negligible contribution during this scaling solution, which justifies neglecting it in \\eqref{Fri_DE} and \\eqref{eq_phi_DE}. \n\n\nOn the other hand, when $\\lambda^2<6\/7$, the non-minimal coupling is small, so the field $\\phi$ can be approximated as a standard quintessence field. As a result, the acceleration is obtained by the slow roll of $\\phi$ and the dark energy dominated universe is realized.\n\n\n\n\n\n\n\n\n\n\\section{Cosmic evolutions}\n\\label{sec_numerical}\nIn this section, we numerically solve the equations \\eqref{Fri}-\\eqref{eq_u}. 
We discuss two cases, the over-produced case ($\\rho_{G,i} \\gg \\rho_{b,i}$) and the less-produced case ($\\rho_{G,i}\\ll \\rho_{b,i}$), in order.\n\n\n\\subsection{Over-produced case}\nFirst, we consider the over-produced case $\\rho_{G,i} \\gg \\rho_{b,i}$. We assume \\eqref{model_A}, which we call Model A. The cosmological dynamics is shown in Fig.~\\ref{fig_modelA}. We set $\\rho_{G,i}\/\\rho_{b,i}=\\Omega_{G,i}\/\\Omega_{b,i}=200$ at the start of the numerical calculation. Although dark matter is initially over-produced, the energy density of baryons follows up that of dark matter, and we then obtain $\\rho_G\/\\rho_b\\simeq 5$ when $a_J=Aa\\sim 10^{-4}$, where we normalize the Jordan frame scale factor $a_J$ so that $\\Omega_{\\phi}|_{a_J=1}=0.7$. We note that the following-up of $\\rho_b$ is obtained even if $\\lambda^2>2$ is not satisfied (in Fig.~\\ref{fig_modelA}, we set $\\lambda^2=(6\/5)^2<2$).\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=7cm,angle=0,clip]{Model_A.eps}\n\\caption{The evolution of the density parameters and the total equation of state parameters in terms of the Jordan frame scale factor $a_J=Aa$, which is normalized so that $\\Omega_{\\phi}|_{a_J=1}=0.7$. We set $\\beta=3$ and $\\lambda=2\\beta\/5=6\/5$ in Model A \\eqref{model_A}. We assume the initial ratio between dark matter and baryons to be $\\rho_{G,i}\/\\rho_{b,i}=\\Omega_{G,i}\/\\Omega_{b,i}=200$ with $\\phi_i=0$.\n}\n\\label{fig_modelA}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=7cm,angle=0,clip]{phi.eps}\n\\caption{The evolution of the chameleon field $\\phi$ in Model A and Model B with $\\rho_{G,i}\/\\rho_{b,i}=200$.\n}\n\\label{fig_phi}\n\\end{figure}\n\nThe dynamics of the universe is precisely tested by CMB observations after the decoupling time $a_J \\simeq 10^{-3}$. The evolutions of the total equation of state parameters are shown in Fig.~\\ref{fig_modelA}. 
The dynamics of the observable universe is represented by the Jordan frame quantity $w_{\\rm tot}$ because the visible matter couples to the Jordan frame metric. On the other hand, since dark matter (i.e., massive gravitons) originates from the gravity sector, it feels the dynamics of the Einstein frame, whose equation of state parameter is denoted by $w_E$. Although a large deviation from the standard cosmological dynamics appears before the decoupling time $a_J \\lesssim 10^{-3}$, the standard dust dominated universe is recovered around the decoupling time.\n\n\nWhen we increase the values of $\\beta$ and $\\lambda$, the deviation from the standard evolution is amplified, which is caused by the oscillation of $\\phi$ around $\\phi_{\\rm min}$ as shown in Fig.~\\ref{fig_phi}. Since the Jordan frame scale factor is given by $a_J=Aa=a e^{\\beta\\phi\/M_p}$, the oscillation of $\\phi$ yields an oscillation of $a_J$, which is amplified by increasing $\\beta$.\n\n\nFig.~\\ref{fig_modelA} does not show a dark energy ``dominant'' universe even in the accelerating phase. Instead, the acceleration is realized by the scaling solution, as explained in Sec.~\\ref{sec_analytic}. If this scaling solution can pass the observational constraints, it might give an answer to the other coincidence problem of dark energy: why is the present dark energy density almost the same as that of matter? However, the cosmological dynamics after the decoupling time is strongly constrained by observations. Thus, a dark energy model with the scaling solution is subject to severe constraints (see \\cite{Amendola:1999er,Amendola:2003eq} for examples). 
Furthermore, the large coupling $\\alpha_A \\gtrsim M_p^{-1}$ implies that the Compton wavelength of the chameleon field has to be shorter than a Mpc to screen the fifth force in the Solar System~\\cite{Wang:2012kj}; however, the coupling functions \\eqref{model_A} require a Gpc-scale Compton wavelength to give the current accelerating expansion.\n\n\n\nWe then provide a model in which the couplings $\\alpha_A$ and $\\alpha_f$ are initially large but become small in time. This behavior is realized by the model\n\\begin{align}\nK^2=(1-\\phi^2\/M^2)^{-1} , \\, A=e^{\\beta\\phi\/M_p} ,\\,\nf=e^{-\\lambda \\phi\/M_p}, \\label{model_B}\n\\end{align}\nwhich we call Model B. The only difference from Model A is that $K$ is a function of $\\phi$. If the amplitude of the field $\\phi$ is initially small $(\\phi \\ll M)$, Model B gives the same behavior as Model A. After $\\phi$ starts to roll and $|\\phi| \\rightarrow M$, the kinetic function $K$ increases, which drives the non-minimal couplings to zero, $\\alpha_A,\\alpha_f \\rightarrow 0$ (see Figs.~\\ref{fig_phi} and \\ref{fig_modelB}). 
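As a consistency sketch (assuming the couplings are defined with respect to the canonically normalized field $\chi$, with $d\chi=K\,d\phi$), this suppression can be made explicit:
\begin{align}
\alpha_A = M_p\frac{d\ln A}{d\chi}=\frac{\beta}{K}=\beta\sqrt{1-\phi^2/M^2}
\,,\quad
\alpha_f = M_p\frac{d\ln f}{d\chi}=-\lambda\sqrt{1-\phi^2/M^2}
\,,
\end{align}
so both couplings vanish as $|\phi|\rightarrow M$, while their ratio, and hence the value $-2\alpha_A/\alpha_f=2\beta/\lambda$, is unchanged.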
\nNote that the field value is restricted in the range $-M<\\phi<M$.\n\nThe error probability of the classifier is\n\\begin{eqnarray}\n\\mathcal{P}_E = 1-\\sum_{k}\\mathcal{P}(t_k>t_j,j\\neq k|H_k)\\mathcal{P}(H_k) \\label{ErrorProb}\n\\end{eqnarray}\\par\nIn general, the prior probabilities of the different hypotheses can be assumed to be uniform ($\\mathcal{P}(H_i)=\\mathcal{P}(H_j), i \\neq j$), and the conditional probabilities in (\\ref{ErrorProb}) are assumed to be equal \\cite{KayV2} by symmetry, that is\n\\begin{eqnarray}\n\\mathcal{P}(t_1 > t_2|H_1) =\n \\mathcal{P}(t_2 > t_1|H_2)\n\\end{eqnarray}\nthen the error probability is\n\\begin{eqnarray}\n\\mathcal{P}_E &=& 1 - \\mathcal{P}(t_1 > t_2|H_1)\n\\,.\n\\end{eqnarray}\nThus the probability can be analyzed with respect to the conditional probability $\\mathcal{P}( t_1> t_2|H_1)$ under the $H_1$ hypothesis.\nThe statistics (\\ref{OMPDetct}) under the $H_1$ hypothesis have the following joint distribution\n\\begin{eqnarray}\n\n\\left[ t_1 \\quad t_2 \\right]^T\n\t\t& \\thicksim & \\mathcal{N}(\\bm \\mu_{1,2},\\bm \\Sigma_{1,2})\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\bm \\mu_{1,2}=\\left[ \\begin{array}{c}\n\t\t\t \\frac{1}{2}\\|\\bm{\\Phi s_1}\\|_2^2 \\\\\n\t\t\t \\langle\\bm{\\Phi s_2},\\bm{\\Phi s_1}\\rangle-\\frac{1}{2}\\|\\bm{\\Phi s_2}\\|_2^2\n\t\t\\end{array} \\right] \\nonumber\n\\end{eqnarray}\n\\begin{equation}\n\\bm \\Sigma_{1,2} =\\sigma^2\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\|\\bm{\\Phi^T\\Phi s_1}\\|_2^2 &\\langle \\bm{\\Phi^T\\Phi s_1},\\bm{\\Phi^T\\Phi s_2}\\rangle \\\\\n\t\t\t\t\t\t\t\t\t\\langle \\bm{\\Phi^T\\Phi s_2},\\bm{\\Phi^T\\Phi s_1}\\rangle &\\|\\bm{\\Phi^T\\Phi s_2}\\|_2^2\n\t\t\t\t\t\t\t\t\\end{array}\\right].\\nonumber\n\\end{equation}\nUsing the properties of the Gaussian distribution, the probability of false classification is then\n\\begin{eqnarray}\n\\mathcal{P}_E \n&=&Q(\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{2\\sigma\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2})\\label{CompressDetect}\n\\end{eqnarray}\nwhere $Q(x)=\\int_{x}^{\\infty} {\\frac{1}{\\sqrt{2\\pi}}\\exp{(-\\frac{t^2}{2})}\\mathrm{d}t}$.\n\\end{proof}\n\\par As 
a matter of fact, if there are more than two hypotheses in the Compressive Classification problem (\\ref{CompressHypo}), the error probability of the Compressive Classifier (\\ref{OMPDetct}) and (\\ref{MatchFitler}) may not be as explicit as (\\ref{CompressDetect}), because of the statistical correlation between the different $t_i$'s in (\\ref{OMPDetct}). Nevertheless, similar techniques can be utilized and the same results can be deduced; we will discuss these m-ary ($m>2$) hypothesis scenarios in the next section.\n\\par So, in order to analyze the error probability (\\ref{CompressDetect}) of the classifier (\\ref{OMPDetct}) without the row-orthogonality constraint on the measurement matrices, and for all possible $k$-sparse signals $\\bm s_i,i=1,2$, we have to focus on\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\label{MainTarget}\n\\end{equation}\nfor all $k$-sparse signals $\\bm{s_1,s_2}\\in\\Lambda_k=\\{\\bm{s} \\in\\mathbb{R}^N,\\|\\bm{s}\\|_0\\leq k\\}$. \\par\nIn the case where the measurement matrix satisfies row-orthogonality ($\\bm{\\Phi\\Phi^T}=\\bm I$), (\\ref{MainTarget}) reduces to\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} = \\|\\bm{\\Phi(s_1-s_2)}\\|_2.\n\\end{equation}\nThis is what Davenport \\cite{Davenport06detectionand}\\cite{SPComp} and Zahedi \\cite{Ramin2010}\\cite{Zahedi201264} analyzed in their publications.\n\n\\section{Measurement Matrices and the Error Probability of Compressive Signal Classification}\n\\label{sec:main}\n\\par Although there is plenty of work on the performance analysis of Compressive Classification, all of it shares the same row-orthogonality presumption, without a theoretical explanation. We believe that there are deeper reasons why the row-orthogonality condition is beneficial. 
Here is the main result of this paper:\n\\begin{Theorem}\nIn the Compressive Classification problem (\\ref{CompressHypo}), by tightening, i.e. row-orthogonalizing, the measurement matrix $\\bm \\Phi \\in \\mathbb{R}^{n \\times N},n < N$, the error probability (\\ref{CompressDetect}) of the classifier (\\ref{OMPDetct}) is reduced, which means\n\\begin{equation}\\label{MainIneq}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\leq\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n\\end{equation}\nwhere $\\bm \\Phi \\in \\mathbb{R}^{n \\times N}$ is an arbitrary measurement matrix, and $\\bm {\\hat \\Phi} \\in \\mathbb{R}^{n \\times N}$ is the equi-norm tight frame measurement matrix row-orthogonalized from $\\bm \\Phi$.\n\\end{Theorem}\n\\begin{proof}\nAccording to Section 2, the error probability (\\ref{CompressDetect}) is determined by the expression (\\ref{MainTarget}):\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\nonumber\n\\end{equation}\nfor all $k$-sparse signals $\\bm s_1, \\bm s_2$, where $\\bm \\Phi$ is an arbitrary measurement matrix satisfying the RIP. \\par\nAccording to the basic presumptions on $\\bm \\Phi$ in (\\ref{CompressHypo}), an arbitrary under-determined measurement matrix $\\bm \\Phi \\in \\mathbb{R}^{n \\times N}$, $n < N$, has full row rank; thus the singular value decomposition of $\\bm \\Phi$ is\n\\begin{equation}\\label{singular}\n\\bm \\Phi = \\bm{U \\left[ \\Sigma_n \\quad O\\right] V^T}\n\\end{equation}\nwhere $\\bm \\Sigma_n \\in \\mathbb{R}^{n \\times n}$ is a diagonal matrix whose diagonal entries are the nonzero singular values $\\sigma_j \\neq 0$ $(1\\leq j \\leq n)$ of $\\bm \\Phi$, and $\\bm U \\in \\mathbb{R}^{n \\times n}$, $\\bm V \\in \\mathbb{R}^{N \\times N}$ are orthogonal matrices composed of $\\bm \\Phi$'s left and right singular vectors. 
\\par\nIf an arbitrary equi-norm measurement matrix $\\bm \\Phi$ is to be transformed into an equi-norm tight frame $\\bm{\\hat \\Phi}$, we orthogonalize its row vectors, which is equivalent to\n\\begin{equation}\\label{tighten}\n\\bm{\\hat \\Phi} = \\sqrt{c} \\cdot \\bm{U\\Sigma_n^{-1}U^T\\Phi} = \\sqrt{c}\\cdot \\bm{U \\left[ I_n \\quad O\\right] V^T}\n\\end{equation}\nwhere $\\bm U \\in \\mathbb{R}^{n \\times n}$, $\\bm V \\in \\mathbb{R}^{N \\times N}$ are $\\bm \\Phi$'s singular vector matrices. In a word, row-orthogonalization is equivalent to making all singular values of $\\bm \\Phi$ equal. Thus $\\bm{\\hat \\Phi \\hat \\Phi^T = c \\cdot I_n}$, where $c>0$ is a certain constant for normalization.\\par\nThen\n\\begin{eqnarray}\\label{TF}\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n=\n\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\bm{I_n} &\\bm O\n\t\t\t\t\t\t\t\t\\end{array}\\right] \\bm V^T(\\bm{s_1-s_2})\\|_2.\n\\end{eqnarray}\nFor an arbitrary measurement matrix $\\bm \\Phi$ that may not be row-orthogonal, we have\n\\begin{eqnarray}\n\\lefteqn{\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2}} \\nonumber \\\\\n&=& \\frac{\\|\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2^2}\n\t\t\t{\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n^2 & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2} .\\label{singularfrac}\n\\end{eqnarray}\nDenote $\\bm{V^T(s_1-s_2)}$ by $\\bm u^{(1,2)} = [u_1, u_2, \\cdots ,u_N]^T$. 
Then (\\ref{singularfrac}) becomes\n\\begin{eqnarray}\n\\frac{\\|\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2^2}\n\t\t\t{\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n^2 & \\bm O\\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2}\n\t\t\t\t\t\t\t\t\t\t\t\t= \\frac{\\sum_{j=1}^n {\\sigma_j^2 u^{2}_j}}{\\sqrt{\\sum_{j=1}^n {\\sigma_j^4 u^{2}_j}}}\\nonumber \\\\\n\\leq \\sqrt{\\sum_{j=1}^n {u^{2}_j}}=\n\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\bm{I_n} &\\bm O\n\t\t\t\t\t\t\t\t\\end{array}\\right] \\bm V^T(\\bm{s_1-s_2})\\|_2 . \\label{CIneq}\n\\end{eqnarray}\nThe last inequality follows from the Cauchy-Schwarz inequality. Combining (\\ref{singularfrac}) and (\\ref{CIneq}) with (\\ref{TF}), we have\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\leq\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n\\end{equation}\nwhich means that row-orthogonalization makes (\\ref{MainTarget}) larger and thus brings a lower error probability.\nThe condition for equality is\n\\begin{eqnarray}\\label{condition}\n\\bm{\\left[\\Sigma_n^2\\quad O\\right]}\\bm{V^T}(\\bm{s_1-s_2})\n=c \\cdot \\bm{\\left[I_n\\quad O\\right]}\\bm{V^T}(\\bm{s_1-s_2})\n\\end{eqnarray}\nwhere $c>0$ is a certain constant.\\par\nIt is obvious that the equality in (\\ref{MainIneq}) holds for all $k$-sparse signals $\\bm s_1$ and $\\bm s_2$ if and only if $\\bm{\\Sigma_n^2=c \\cdot I_n}$, which means\n\\begin{equation}\n\\bm{\\Phi\\Phi^T} = c \\cdot \\bm I_n\\label{TightCond}\n\\end{equation}\n\\par So the result (\\ref{MainIneq}) means that when an arbitrary under-determined measurement matrix $\\bm \\Phi \\in \\mathbb{R}^{n \\times N},n < N$, is transformed into an equi-norm tight frame, i.e. 
row-orthogonalized, the value of (\\ref{MainTarget}) does not decrease, which means an improvement of the performance of the Compressive Classifier (\\ref{OMPDetct}).\n\\end{proof}\\par\nThe constant $c>0$ above is an amplitude constant for normalization and can take any value. With the equi-norm presumption on the measurement matrices, the following corollary can be deduced:\n\\begin{Corollary}\nIf a matrix $\\bm \\Phi\\in \\mathbb{R}^{n \\times N}, n < N$, forms an equi-norm tight frame, that is, $\\bm{\\Phi\\Phi^T}=c\\cdot I_n$ and the column vectors satisfy $\\|\\bm \\phi_i\\|_2=\\|\\bm \\phi_j\\|_2=\\psi$, then $c = \\frac{N}{n}\\psi^2$.\n\\end{Corollary}\n\\begin{proof}\nIf $\\bm \\Phi$ has equal column norms and satisfies $\\bm{\\Phi\\Phi^T}=c\\cdot I_n$, then\n\\begin{eqnarray}\ntr(\\bm{\\Phi^T\\Phi})=N\\cdot \\psi ^2= tr(\\bm{\\Phi\\Phi^T})=n\\cdot c.\n\\end{eqnarray}\nAs a result, $c=\\frac{N}{n}\\psi^2$.\n\\end{proof}\\par\nIf we let $c = 1$, then we get $\\|\\bm \\phi_i\\|_2=\\|\\bm \\phi_j\\|_2=\\sqrt{n\/N}$, which coincides with the results of \\cite{Ramin2010} and \\cite{Zahedi201264}.\n\n\\par Before the end of this section, some remarks are in order.\n\\par Remark 1:\nFurther analysis of the result of Theorem 2 indicates that, when the measurement matrix of the commonly used Compressive Classifier (\\ref{OMPDetct}) is \"tightened\", i.e. row-orthogonalized, the inequality (\\ref{MainIneq}) becomes an equality, and the corresponding error probability becomes\n\\begin{eqnarray}\\label{MFeqivalent}\n\\mathcal{P}_E(\\hat{\\bm \\Phi})=\n Q(\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2}{2 c^{1\/2}\\cdot \\sigma})\\nonumber \\\\\n= Q(\\frac{\\|\\bm{P_{\\Phi^T} (s_1-s_2)}\\|_2}{2 \\sigma})\n\\end{eqnarray}\nwhere $\\bm{P_{\\Phi^T} = \\Phi^T ( \\Phi \\Phi^T)^{-1} \\Phi}$. 
The last equality is derived from (\\ref{singular}) and (\\ref{tighten}); this is, as a matter of fact, the error probability of the General Matched Filter Classifier \\cite{SPComp}\\cite{PHDThesis}:\n\\begin{eqnarray}\n\\hat t_i=\n\\langle\\bm{y},\\bm{(\\Phi \\Phi^T)^{-1}\\Phi}\\bm{s_i}\\rangle-\\frac{1}{2}\\|\\bm{P_{\\Phi^T} s_i} \\|_2^2,\\quad i=1,2. \\label{MFDetct}\n\\end{eqnarray}\nThus Theorem 2 and (\\ref{MFeqivalent}) indicate that equi-norm tight frames improve the Compressive Classifier (\\ref{OMPDetct}) to the level of the General Matched Filter Classifier (\\ref{MFDetct}) in the sense of error probability. Although it is obvious that row-orthogonality (\\ref{TightCond}) is sufficient for (\\ref{MFDetct}) to become equivalent to (\\ref{OMPDetct}), the necessity of row-orthogonality, or \"tightness\", for the equivalence between the Compressive Classifier (\\ref{OMPDetct}) and the General Matched Filter Classifier (\\ref{MFDetct}) is not so explicit; it is demonstrated by Theorem 2 and (\\ref{MFeqivalent}). \n\\par Besides, the error probability (\\ref{MFeqivalent}) coincides with the results of Davenport \\cite{Davenport06detectionand}\\cite{SPComp} and Zahedi \\cite{Ramin2010}\\cite{Zahedi201264}, where row-orthogonality $\\bm{\\Phi \\Phi^T = c \\cdot I}$ is constrained and $c = 1$. 
So Theorem 2 explains the benefit of using the row-orthogonality constraint in Compressive Classification.\n Similar discussions of the improvement that equi-norm tight frames bring to oracle estimators can be found in \\cite{UniTightFrame}, which is further support for the advantage of \"tightness\".\n\\par Remark 2:\nAs mentioned in the last section, when there are more than two hypotheses, the m-ary ($m>2$) Compressive Classification problem model becomes\n\\begin{equation}\\label{CompressHypo_mary}\n\\bm{y} =\n\\left \\{ \\begin{array}{ll}\n\t\\bm{\\Phi}(\\bm{s_1} + \\bm{n}) & \\text{Hypothesis $H_1$} \\\\\n\t\\bm{\\Phi}(\\bm{s_2} + \\bm{n}) & \\text{Hypothesis $H_2$} \\\\\n\t\\cdots & \\cdots \\\\\n\t\\bm{\\Phi}(\\bm{s_m} + \\bm{n}) & \\text{Hypothesis $H_m$} \\\\\n\n\\end{array} \\right. .\n\\end{equation}\nUsing the same Compressive Classifier\n\\begin{eqnarray}\nt_i\n=\\langle\\bm{y},\\bm{\\Phi}\\bm{s_i}\\rangle-\\frac{1}{2}\\langle \\bm{\\Phi s_i},\\bm{\\Phi s_i} \\rangle,\\quad i=1,2,\\cdots ,m ,\n\\end{eqnarray}\nthe corresponding error probability is\n\\begin{eqnarray}\n\\mathcal{P}_E \n=1 - \\mathcal{P}( t_T> t_i,\\forall i \\neq T|H_T) ,\\quad T = 1,2,\\cdots,m.\n\\end{eqnarray}\nCombined with the union bound of probability theory, the error probability then satisfies\n\\begin{eqnarray}\n\\mathcal{P}_E \\leq\n\\sum_{i \\neq T}^m Q(\\frac{\\|\\bm{\\Phi(s_T-s_i)}\\|_2^2}{2\\sigma\\|\\bm{\\Phi^T\\Phi(s_T-s_i)}\\|_2}), T = 1,2,\\cdots,m .\\label{mErrorProb}\n\\end{eqnarray}\nThe error probability bound (\\ref{mErrorProb}) is similar to (\\ref{CompressDetect}) except for the inequality due to the use of the union bound. In fact, it may be difficult to get a more accurate result than (\\ref{mErrorProb}), because of the statistical correlation between the different $t_i$'s. It may therefore not be conclusive to deduce the decrease of the error probability brought by equi-norm tight frames using the same proof as Theorem 2 in this m-ary scenario. 
This is due to the inequality in (\\ref{mErrorProb}); however, simulation results in the next section will demonstrate that equi-norm tight frames are still better in the m-ary ($m>2$) Compressive Classification scenario.\n\\par Remark 3: In comparison with the work of Zahedi in \\cite{Ramin2010} and \\cite{Zahedi201264}, where Equiangular Tight Frames (ETFs) are proved to have the best worst-case performance among all tight frames (row-orthogonality-constrained matrices), we prove that for general under-determined measurement matrices, tightening brings a performance improvement for Compressive Classification. Our work is different from theirs: all of Zahedi's analysis is based on the constraint that the measurement matrices are tight, or row-orthogonal, and the advantage of Equiangular Tight Frames (ETFs, \\cite{Waldron2009}) is that ETFs have the best worst-case (maximum of the minimum) performance among all tight frames of the same dimensions, while our result shows that when an arbitrary measurement matrix is \"tightened\", i.e. transformed into an equi-norm tight frame, the performance of Compressive Classification improves. Moreover, the existence and construction of ETFs in certain dimensions remains an open problem (\\cite{Waldron2009}), while row-orthogonalization of arbitrary matrices is very easy and practical. 
So our results provide a convenient approach to improving the performance of compressive classifiers.\n\\par\n\n\\section{Simulations}\n\\label{sec:simulation}\n\n\\begin{figure}\n\\centering\n\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_2Hypo.eps}\n\t\t\\caption{Monte-Carlo simulation of 2-ary compressive classification error probability using non-tight frames and tight frames (for $k=1$ sparse signals)}\n\t\t\\label{figure1}\n\t\t\\end{minipage}\n\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_2Hypok10.eps}\n\t\t\\caption{Monte-Carlo simulation of 2-ary compressive classification error probability using non-tight frames and tight frames (for $k=10$ sparse signals)}\n\t\t\\label{figure2}\n\t\t\\end{minipage}\n\t\n\t\\end{figure}\n\t\\begin{figure}\n\t\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_10Hypo.eps}\n\t\t\\caption{Monte-Carlo simulation of 10-ary compressive classification error probability using non-tight frames and tight frames (for $k=1$ sparse signals)}\n\t\t\\label{figure3}\t\t\n\t\\end{minipage}\\\\\n\t\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_10Hypok10.eps}\n\t\t\\caption{Monte-Carlo simulation of 10-ary compressive classification error probability using non-tight frames and tight frames (for $k=10$ sparse signals)}\n\t\t\\label{figure4}\t\t\n\t\\end{minipage}\n\\end{figure}\n\\par In this section the main result of Theorem 2 is verified by Monte-Carlo simulations. In the simulations, arbitrary $k=1$ and $k=10$ sparse signals are generated and classified using non-tight frames and tight frames. Gaussian random matrices are chosen as the non-tight frames, and their row-orthogonalized versions are chosen as the tight frames. 
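The row-orthogonalization used to produce the tight frames can be sketched in a few lines (a minimal illustration rather than the exact simulation code; the helper name `tighten` and the seed are ours):

```python
import numpy as np

def tighten(Phi, c=1.0):
    # Row-orthogonalize Phi into a tight frame: replace all singular
    # values by the common value sqrt(c), i.e.
    #   Phi_hat = sqrt(c) * U [I_n  O] V^T   (cf. the SVD construction).
    U, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    return np.sqrt(c) * U @ Vt

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 500))   # Gaussian random measurement matrix
Phi_hat = tighten(Phi)

# Tightness: Phi_hat @ Phi_hat^T = c * I_n (here c = 1).
print(np.allclose(Phi_hat @ Phi_hat.T, np.eye(100)))
```

With $c=1$ the rows of the output are orthonormal, matching (\ref{TightCond}).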
Here we choose $N=500$, and the error probabilities of both 2-ary Compressive Classification and 10-ary Compressive Classification are demonstrated in Fig.~\\ref{figure1} and Fig.~\\ref{figure3} for $k=1$ sparse signals, and Fig.~\\ref{figure2} and Fig.~\\ref{figure4} for $k=10$ sparse signals, with the number of measurements $n$ ranging from 100 to 450 and signal-to-noise ratios $\\|\\bm s_i\\|_2^2\/\\sigma^2$ from 5 dB to 20 dB. Each error probability is calculated from the average of 10000 independent experiments with tight or non-tight measurement matrices.\n\\par The simulations show that equi-norm tight frames transformed from general Gaussian random matrices have better Compressive Classification performance than the non-tight Gaussian random matrices over the whole range of $n$, both for the 2-ary and the m-ary ($m>2$) classification scenarios, which is the benefit that \"tightening\" brings.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\par This paper deals with the performance improvement of a commonly used Compressive Classifier (\\ref{OMPDetct}). We prove that the transformation of arbitrary measurement matrices into equi-norm tight frames reduces the probability of false classification of the commonly used Compressive Classifier, and thus improves the classification performance to the level of the General Matched Filter Classifier (\\ref{MFDetct}); this coincides with the row-orthogonality constraint commonly used before. Although there are other proofs that among all equi-norm tight frames the Equiangular Tight Frames (ETFs) achieve the best worst-case classification performance, the existence and construction of ETFs of some dimensions is still an open problem.\nAs the construction of equi-norm tight frames from arbitrary matrices is simple and practical, the conclusion of this paper also provides a convenient approach to implementing an improved measurement matrix for Compressive Classification. 
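As a closing sanity check, the main inequality (\ref{MainIneq}) can be verified numerically on random instances (a sketch; the dimensions $n=20$, $N=50$, the sparsity $k=3$, and the seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 20, 50, 3
Phi = rng.standard_normal((n, N))

# Tightened version: set all singular values equal (row-orthogonalization).
U, _, Vt = np.linalg.svd(Phi, full_matrices=False)
Phi_hat = U @ Vt

def score(M, d):
    # The quantity ||M d||^2 / ||M^T M d|| governing the error probability:
    # larger score => smaller Q(.) => lower error probability.
    return np.linalg.norm(M @ d) ** 2 / np.linalg.norm(M.T @ (M @ d))

# Random k-sparse difference d = s1 - s2.
d = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
d[support] = rng.standard_normal(k)

print(score(Phi, d) <= score(Phi_hat, d) + 1e-12)
```

For the tightened matrix the score reduces to $\|\hat{\bm\Phi}(\bm{s_1}-\bm{s_2})\|_2$, in line with the row-orthogonal case discussed in Section 2.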
\n\n\n\n\n\n\\bibliographystyle{IEEEbib}\n