diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhzpu" "b/data_all_eng_slimpj/shuffled/split2/finalzzhzpu"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhzpu"
@@ -0,0 +1,5 @@
+{"text":"\n\\section{Introduction}\n\n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=\\linewidth]{img\/teasor_new.pdf}\n \\vspace{-5.0mm}\n \\caption{\\textbf{Illustration and challenges of cross-resolution person re-ID.} (\\emph{Top}) Existing methods for cross-resolution person re-ID typically leverage image super-resolution models with pre-defined up-sampling rates followed by person re-ID modules. Methods of this class, however, may not be applicable to query images of varying or unseen resolutions. (\\emph{Bottom}) In contrast, our method learns resolution-invariant representations, allowing our model to re-identify persons in images of varying and even unseen resolutions.}\n \\vspace{-4.0mm}\n \\label{fig:teaser}\n\\end{figure}\n\n\\IEEEPARstart{P}{\\revision{erson}} \\revision{re-identification (re-ID)~\\cite{zheng2016person,zhong2017re,wang2015zero,ye2020deep} aims at recognizing the same person across images taken by different cameras, and is an active research topic in computer vision and machine learning.} A variety of applications ranging from person tracking~\\cite{andriluka2008people}, video surveillance system~\\cite{khan2016person}, urban safety monitoring~\\cite{garcia2015person}, to computational forensics~\\cite{vezzani2013people} are highly correlated this research topic. Nevertheless, due to the presence of background clutter, occlusion, illumination or viewpoint changes, and even uncontrolled human pose variations, person re-ID remains a challenging task for practical applications.\n\nDriven by the recent success of convolutional neural networks (CNNs), several learning based methods~\\cite{lin2017improving,shen2018deep,hermans2017defense,zhong2017camera,si2018dual,chen2018group,zhang2019densely,hou2019interaction,zheng2019joint,zheng2019re} have been proposed to address the challenges in person re-ID. Despite promising performance, these methods are typically developed under the assumption that both query and gallery images are of \\emph{similar} or \\emph{sufficiently high} resolutions. This assumption, however, may not hold in practice since image resolutions would vary drastically due to the varying distances between cameras and persons of interest. For instance, query images captured by surveillance cameras are often of low resolution (LR) whereas those in the gallery set are carefully selected beforehand and are typically of high resolution (HR). As a result, direct matching of LR query images and HR gallery ones would lead to non-trivial \\emph{resolution mismatch} problems~\\cite{jing2015super,wang2016scale,jiao2018deep,wang2018cascaded}.\n\nTo address cross-resolution person re-ID, conventional methods typically learn a shared feature space for LR and HR images to mitigate the resolution mismatch problem~\\cite{li2015multi,jing2015super,wang2016scale}. These approaches, however, adopt hand-engineered descriptors which cannot adapt themselves to the task at hand. The lack of an end-to-end learning pipeline might lead to sub-optimal person re-ID performance. To alleviate this issue, a number of approaches~\\cite{wang2018cascaded,jiao2018deep} employing trainable descriptors are presented. These methods leverage image super-resolution (SR) models to convert LR input images into their HR versions, on which person re-ID is carried out. While performance improvements have been shown, these methods suffer from two limitations. First, each employed SR model is designed to upscale image resolutions by a particular factor. 
Therefore, these methods need to \\emph{pre-determine} the resolutions of LR query images so that the corresponding SR models can be applied. However, designing SR models for each possible resolution input makes these methods hard to scale. Second, in real-world scenarios, query images can be with \\emph{various} resolutions even with the resolutions that are \\emph{not seen} during training. As illustrated in the top of Figure~\\ref{fig:teaser}, query images with varying or unseen resolutions would restrict the applicability of the person re-ID methods that leverage SR models since one cannot assume the resolutions of the input images will be known in advance.\n\nIn this paper, we propose \\emph{Cross-resolution Adversarial Dual Network} (CAD-Net) for cross-resolution person re-ID. The key characteristics of CAD-Net are two-fold. First, to address the resolution variations, CAD-Net derives the \\emph{resolution-invariant representations} via adversarial learning. As shown in the bottom of Figure~\\ref{fig:teaser}, the learned resolution-invariant representations allow our model to handle images of \\emph{varying} and even \\emph{unseen} resolutions. Second, CAD-Net learns to recover the missing details in LR input images. Together with the resolution-invariant features, our model generates HR images that are \\emph{preferable for person re-ID}, achieving the state-of-the-art performance on cross-resolution person re-ID. It is worth noting that the above image resolution recovery and cross-resolution person re-ID are realized by a \\emph{single} model learned in an \\emph{end-to-end} fashion. \n\nMotivated by the multi-scale adversarial learning techniques in semantic segmentation~\\cite{tsai2018learning} and person re-ID~\\cite{chen2019learning}, which have been shown effective in deriving more robust feature representations, we employ multi-scale adversarial networks to align feature distributions between HR and LR images across different feature levels, resulting in consistent performance improvements over single-scale adversarial methods. On the other hand, since there are infinitely many HR images that reduce to the same LR image, it is difficult for a model to simultaneously handle the resolution variations and learn the mapping between LR and HR images. To alleviate this issue, we introduce a consistency loss in the HR feature space to enforce the consistency between the features of the recovered HR images and the corresponding HR ground-truth images, allowing our model to learn HR image representations that are more robust to the variations of HR image recovery. By jointly leveraging the above schemes, our method further improves the performance of cross-resolution person re-ID.\n\nIn addition to person re-ID, the resolution mismatch issue may occur in various applications such as vehicle re-ID~\\cite{kanaci2018vehicle}. To demonstrate the wide applicability of our method, we also evaluate our method on cross-resolution vehicle re-ID and show that our CAD-Net performs favorably against existing cross-resolution vehicle re-ID approaches. Furthermore, to manifest that our formulation is not limited to cross-resolution setting, we show that our proposed algorithm improves person re-ID performance even when no significant resolution variations are present, achieving competitive performance compared to existing person re-ID approaches. Finally, as image labeling process is often labor intensive, we extend our CAD-Net to semi-supervised settings. 
Experimental results further support the use and extension of our method for such practical yet challenging settings.\n\nThe contributions of this paper are highlighted as follows: \n\n\\begin{itemize}\n \\item We propose an end-to-end trainable network which advances adversarial learning strategies for cross-resolution person re-ID.\n \\item Our model learns resolution-invariant representations while being able to recover the missing details in LR input images, resulting in favorable performance in cross-resolution person re-ID.\n \\item Our model is able to handle query images with varying or even unseen resolutions, without the need to pre-determine the input resolutions.\n \\item Extensive experimental results on five person re-ID and two vehicle re-ID datasets show that our method achieves the state-of-the-art performance on both tasks in the cross-resolution setting, and further validate the effectiveness of our approach for real-world person re-ID applications in a semi-supervised manner.\n\\end{itemize}\n\nIn this work, we significantly extend our previous results~\\cite{CAD-Net} and summarize the main differences in the following.\n\n\\begin{itemize}\n \\item \\textbf{Multi-scale adversarial learning for learning resolution-invariant representations.} Unlike our preliminary work that learns the resolution-invariant representations at a single scale, we adopt multi-scale adversarial network components in this work. The resultant model effectively aligns feature distributions in different levels and derives feature representations across image resolutions, achieving performance improvements over the single-scale model in cross-resolution person re-ID. We refer to our improved method as CAD-Net++.\n \\item \\textbf{HR feature space consistency loss.} To allow our model to handle the variations of HR image recovery, we introduce a feature consistency loss that enforces the consistency between the features of the recovered HR images and the corresponding HR ground-truth images. This loss further consistently improves the performance of cross-resolution person re-ID on all five datasets.\n \\item \\textbf{Applications.} In contrast to our preliminary work that focuses on a single task (i.e., cross-resolution person re-ID), we evaluate our proposed method under various settings with extensive ablation studies, including 1) cross-resolution person re-ID, 2) person re-ID with no significant resolution variations (we refer to this setting as standard person re-ID), 3) cross-resolution vehicle re-ID, and 4) semi-supervised cross-resolution person re-ID. Extensive experimental results confirm the effectiveness of our method in a wide range of scenarios.\n\\end{itemize}\n\\section{Related Work}\n\nPerson re-ID has been extensively studied in the literature. We review several topics relevant to our approach in this section.\n\n{\\flushleft {\\bf Person re-ID.}}\nMany existing methods, e.g., \\cite{lin2017improving,shen2018deep,shen2018person,kalayeh2018human,cheng2016person,chang2018multi,chen2018group,li2018adaptation,sun2018beyond,suh2018part}, are developed to address various challenges in person re-ID, such as background clutter, viewpoint changes, and pose variations. For instance, Yang~\\textit{et~al}\\mbox{.}~\\cite{zhong2017camera} learn a camera-invariant subspace to deal with the style variations caused by different cameras. 
Liu~\\textit{et~al}\\mbox{.}~\\cite{liu2018pose} develop a pose-transferable framework based on generative adversarial network (GAN)~\\cite{goodfellow2014generative} to yield pose-specific images for tackling pose variations. Several methods addressing background clutter leverage attention mechanisms to emphasize the discriminative parts~\\cite{li2018harmonious,song2018mask,si2018dual}. \\revision{In addition to these methods that learn global features, a few methods further utilize part-level information~\\cite{sun2018beyond} to learn more fine-grained features, adopt human semantic parsing for learning local features~\\cite{kalayeh2018human}, learn multi-scale re-ID representation~\\cite{li2017person}, or derive part-aligned representations~\\cite{suh2018part} for improving person re-ID.} \n\nAnother research trend focuses on domain adaptation~\\cite{ganin2015unsupervised,ganin2016domain,long2015learning,long2016unsupervised,chen2019crdoco,hoffman2017cycada} for person re-ID~\\cite{wei2018person,image-image18,ge2018fd,li2019cross}. These methods either employ image-to-image translation modules (e.g., CycleGAN~\\cite{zhu2017unpaired}) as a data augmentation technique to generate viewpoint specific images with labels~\\cite{wei2018person,image-image18}, or leverage pose information to learn identity related but pose unrelated representations~\\cite{ge2018fd} or pose-guided yet dataset-invariant representations~\\cite{li2019cross} for cross-dataset person re-ID.\n\nWhile promising performance has been demonstrated, the above approaches typically assume that both query and gallery images are of similar or sufficiently high resolutions, which might not be practical for real-world applications.\n\n{\\flushleft {\\bf Cross-resolution person re-ID.}}\n\\revision{A number of methods have been proposed to address the resolution mismatch issue in person re-ID. These methods can be categorized into two groups depending on the adopted feature descriptors: 1) hand-crafted descriptor based methods~\\cite{li2015multi,jing2015super,wang2016scale} and 2) trainable descriptor based methods~\\cite{jiao2018deep,wang2018cascaded,chen2019learning,mao2019resolution,li2018toward}.} Methods in the first group typically use an engineered descriptor such as HOG~\\cite{HoG} for feature extraction and then learn a shared feature space between HR and LR images. For instance, Li~\\textit{et~al}\\mbox{.}~\\cite{li2015multi} jointly perform multi-scale distance metric learning and cross-scale image domain alignment. Jing~\\textit{et~al}\\mbox{.}~\\cite{jing2015super} develop a semi-coupled low-rank dictionary learning framework to seek a mapping between HR and LR images. Wang~\\textit{et~al}\\mbox{.}~\\cite{wang2016scale} learn a discriminating scale-distance function space by varying the image scale of LR images when matching with the HR ones. Nevertheless, these methods adopt hand-crafted descriptors, which cannot easily adapt the developed models to the tasks of interest, and thus may lead to sub-optimal person re-ID performance.\n\n\\revision{To alleviate this issue, several trainable descriptor based approaches are presented for cross-resolution person re-ID~\\cite{jiao2018deep,wang2018cascaded,chen2019learning,mao2019resolution,li2018toward}.} The network of SING~\\cite{jiao2018deep} is composed of several SR sub-networks and a person re-ID module to carry out LR person re-ID. 
On the other hand, CSR-GAN~\\cite{wang2018cascaded} cascades multiple SR-GANs~\\cite{ledig2017photo} and progressively recovers the details of LR images to address the resolution mismatch problem. Mao~\\textit{et~al}\\mbox{.}~\\cite{mao2019resolution} develop a foreground-focus super-resolution model that learns to recover the resolution loss in LR input images followed by a resolution-invariant person re-ID module. In spite of their promising results, such methods rely on training pre-defined SR models~\\cite{jiao2018deep,wang2018cascaded} or annotating the foreground mask for each training image to guide the learning of image recovery~\\cite{mao2019resolution}. As mentioned earlier, the degree of resolution mismatch, i.e., the resolution difference between the query and gallery images, is typically \\emph{unknown beforehand}. If the resolution of the input LR query is unseen during training, the above methods cannot be easily applied or might not lead to satisfactory performance. On the other hand, the dependence on foreground masks would make such methods hard to scale for real-world applications. Different from the above methods that employ SR models, a recent method motivated by the domain-invariant representations in domain adaptation~\\cite{ganin2015unsupervised,ganin2016domain} is presented~\\cite{chen2019learning}. By advancing adversarial learning strategies in the feature space, the RAIN method~\\cite{chen2019learning} aligns the feature distributions of HR and LR images, allowing the model to be more robust to resolution variations.\n\nSimilar to RAIN~\\cite{chen2019learning}, our method also performs feature distribution alignment between HR and LR images. Our model differs from RAIN~\\cite{chen2019learning} in that our model learns to recover the missing details in LR input images and thus provides more discriminative evidence for person re-ID. By jointly observing features of both modalities in an end-to-end learning fashion, our model recovers HR images that are preferable for person re-ID, resulting in performance improvements on cross-resolution person re-ID. Experimental results demonstrate that our approach can be applied to input images of varying and even unseen resolutions using only a single model with favorable performance.\n\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics[width=\\linewidth]{img\/model.pdf}\n \\vspace{-6.0mm}\n \\caption{\\textbf{Overview of the proposed Cross-resolution Adversarial Dual Network++ (CAD-Net++).} CAD-Net++ comprises cross-resolution GAN (CRGAN) (highlighted in blue) and cross-modal re-ID network (highlighted in green). The former learns resolution-invariant representations and recovers the missing details in LR input images, while the latter considers both feature modalities for cross-resolution person re-ID.}\n \\vspace{-2.0mm}\n \\label{fig:Model}\n\\end{figure*}\n\n{\\flushleft {\\bf Cross-resolution vision applications.}}\nThe issues regarding cross-resolution handling have been studied in the literature. For face recognition, existing approaches typically rely on face hallucination algorithms~\\cite{zhu2016deep,yu2017hallucinating} or SR mechanisms~\\cite{kim2016accurate,dahl2017pixel,dong2016image} to super-resolve the facial details. Unlike the aforementioned methods that focus on synthesizing the facial details, our model learns to recover re-ID oriented discriminative details. For vehicle re-ID, the resolution mismatch issue is also a challenging yet under studied problem~\\cite{kanaci2018vehicle}. 
While several efforts have been made~\\cite{zhou2018aware,wang2017orientation,shen2017learning} to address the challenges (e.g., viewpoint or appearance variations) in vehicle re-ID, these resultant methods are developed under the assumption that both query and gallery images are of similar or sufficiently high resolutions. To carry out cross-resolution vehicle re-ID, MSVF~\\cite{kanaci2018vehicle} designs a multi-branch network that learns a representation by fusing features from images of different scales. Our method differs from MSVF~\\cite{kanaci2018vehicle} in three aspects. First, MSVF~\\cite{kanaci2018vehicle} is tailored for cross-resolution vehicle re-ID while our method is developed to address cross-resolution person re-ID. Second, our model does not need to pre-determine the number of branches. Instead, our model carries out cross-resolution person re-ID using only a single model. Third, our model further learns to recover the missing details in LR input images. Through extensive experiments, we demonstrate that our algorithm performs favorably against existing cross-resolution vehicle re-ID approaches.\n\\section{Proposed Method}\n\nIn this section, we first provide an overview of our proposed approach. We then describe the details of each network component as well as the loss functions.\n\n\\subsection{Algorithmic Overview}\n\nWe first define the notations to be used in this paper. In the training stage, we assume we have access to a set of~$N$ HR images $X_H = \\{x_i^H\\}_{i=1}^N$ and its corresponding label set $Y_H = \\{y_i^H\\}_{i=1}^N$, where $x_i^H \\in {\\mathbb{R}}^{H \\times W \\times 3}$ and $y_i^H \\in {\\mathbb{R}}$ are the $i^\\mathrm{th}$ HR image and its label, respectively. To allow our model to handle images of different resolutions, we generate a \\emph{synthetic} LR image set $X_L = \\{x_i^L\\}_{i=1}^N$ by down-sampling each image in $X_H$, followed by resizing them back to the original image size via bilinear up-sampling (i.e., $x_i^L \\in {\\mathbb{R}}^{H \\times W \\times 3}$), where $x_i^L$ is the synthetic LR image of $x_i^H$. Obviously, the label set $Y_L$ for $X_L$ is identical to $Y_H$.\n\nAs shown in Figure~\\ref{fig:Model}, our network comprises two components: the Cross-Resolution Generative Adversarial Network (CRGAN) and the Cross-Modal Re-ID network. To achieve cross-resolution person re-ID, our CRGAN simultaneously learns a resolution-invariant representation $f \\in {\\mathbb{R}}^{h \\times w \\times d}$ ($h \\times w$ is the spatial size of $f$ whereas $d$ denotes the number of channels) from the input cross-resolution images, while producing the associated HR images as the decoder outputs. The recovered HR output image will be encoded as an HR representation $g \\in {\\mathbb{R}}^{h \\times w \\times d}$ by the HR encoder. For person re-ID, we first concatenate $f$ and $g$ to form a joint representation $\\mathbfit{v} = [f, g] \\in {\\mathbb{R}}^{h \\times w \\times 2d}$. The classifier then takes the joint representation $\\mathbfit{v}$ as input to perform person identity classification. The details of each component are elaborated in the following subsections.\n\nAs for testing, our network takes a query image resized to $H \\times W \\times 3$ as the input, and computes the joint representation $\\mathbfit{v} = [f, g] \\in {\\mathbb{R}}^{h \\times w \\times 2d}$. 
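\n\nBefore detailing the matching procedure, we illustrate the synthetic LR generation described earlier in this subsection with a minimal PyTorch-style sketch. The helper name below is ours, and we assume bilinear interpolation for the down-sampling step as well; the sketch is included only to make the data preparation concrete.\n\\begin{verbatim}\nimport torch.nn.functional as F\n\ndef synthesize_lr(x_hr, r):\n    # x_hr: (B, 3, H, W) HR images; r: down-sampling rate\n    H, W = x_hr.shape[-2:]\n    x_small = F.interpolate(x_hr, size=(H // r, W // r),\n                            mode='bilinear', align_corners=False)\n    # resize back so that the LR image keeps the size H x W\n    return F.interpolate(x_small, size=(H, W),\n                         mode='bilinear', align_corners=False)\n\\end{verbatim}\n\n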
We then apply global average pooling ($\\mathrm{GAP}$) to $\\mathbfit{v}$ for deriving a joint feature vector $\\mathbfit{u} = \\mathrm{GAP}(\\mathbfit{v}) \\in {\\mathbb{R}}^{2d}$, which is applied to match the gallery images via nearest neighbor search with Euclidean distance. It is worth repeating that, the query image during testing can be with varying resolutions or with unseen ones during training (verified in experiments).\n\n\\subsection{Cross-Resolution GAN (CRGAN)}\n\nIn CRGAN, we have a cross-resolution encoder $\\mathcal{E}$ which converts input images across different resolutions into resolution-invariant representations, followed by a high-resolution decoder $\\mathcal{G}$ recovering the associated HR versions.\n\n{\\flushleft {\\bf Cross-resolution encoder $\\mathcal{E}$.}}\nSince our goal is to perform cross-resolution person re-ID, we encourage the cross-resolution encoder $\\mathcal{E}$ to extract resolution-invariant features for input images across resolutions (e.g., HR images in $X_H$ and LR ones in $X_L$). To achieve this, we advance adversarial learning strategies and deploy a resolution discriminator $\\mathcal{D}_{F}$ in the latent \\emph{feature space}. This discriminator $\\mathcal{D}_{F}$ takes the feature maps $f_H$ and $f_L$ as inputs to determine whether the input feature maps are from $X_H$ or $X_L$. \n\nTo be more precise, we define the feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$ as\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}} = &~ \\mathbb{E}_{x_H \\sim X_H}[\\log(\\mathcal{D}_{F}(f_H))]\\\\\n + &~ \\mathbb{E}_{x_L \\sim X_L}[\\log(1 - \\mathcal{D}_{F}(f_L))],\n \\end{aligned}\n \\label{eq:adv_loss_feature}\n\\end{equation}\nwhere $f_H = \\mathcal{E}({x_H})$ and $f_L = \\mathcal{E}({x_L}) \\in {\\mathbb{R}}^{h \\times w \\times d}$ denote the encoded HR and LR image features, respectively.\\footnote{For simplicity, we omit the subscript $i$, denote HR and LR images as $x_H$ and $x_L$, and represent their corresponding labels as $y_H$ and $y_L$.}\n\nWhile aligning feature distributions between HR and LR images at a single feature level has been shown effective to some extent in our previous results~\\cite{CAD-Net}, similar to existing methods for semantic segmentation~\\cite{tsai2018learning} and person re-ID~\\cite{chen2019learning}, we adopt multi-scale adversarial networks and align feature distributions at multiple levels to learn more robust feature representations. In this work, we employ the ResNet-$50$~\\cite{he2016deep} as the cross-resolution encoder $\\mathcal{E}$, which has five residual blocks $\\{R_1, R_2, R_3, R_4, R_5\\}$. The feature maps extracted from the last activation layer of each residual block are denoted as $\\{f^1, f^2, f^3, f^4, f^5\\}$, where $f^j \\in {\\mathbb{R}}^{h_j \\times w_j \\times d_j}$ is of spatial size $h_j \\times w_j$ and with $d_j$ channels.\n\nAs shown in Figure~\\ref{fig:multi-discriminator}, our multi-scale discriminator $\\mathcal{D}_F^j$ takes the feature maps $f_H^j$ and $f_L^j$ extracted at the corresponding feature level as inputs, and determines whether the input feature map is from $X_H$ or $X_L$. 
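\n\nFor illustration, one such resolution discriminator and its binary cross-entropy objective can be sketched in PyTorch as follows. The layer configuration is a simplified stand-in (as noted in the implementation details, the actual architecture follows Tsai~\\textit{et~al}\\mbox{.}~\\cite{tsai2018learning}), and minimizing the binary cross-entropy below is the standard equivalent of maximizing the log-likelihood form in Eq.~\\eqref{eq:adv_loss_feature}; the multi-scale objective simply sums this term over the selected feature levels $j$, with one discriminator per level.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass ResolutionDiscriminator(nn.Module):\n    # simplified stand-in for the discriminator of Tsai et al.\n    def __init__(self, in_channels):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Conv2d(in_channels, 256, 3, padding=1),\n            nn.LeakyReLU(0.2),\n            nn.Conv2d(256, 1, 3, padding=1),\n            nn.Sigmoid())\n\n    def forward(self, f):\n        return self.net(f)\n\nbce = nn.BCELoss()\n\ndef feature_adv_loss(d_f, f_hr, f_lr):\n    # HR features are treated as real (1), LR features as fake (0)\n    p_hr, p_lr = d_f(f_hr), d_f(f_lr)\n    loss_hr = bce(p_hr, torch.ones_like(p_hr))\n    loss_lr = bce(p_lr, torch.zeros_like(p_lr))\n    return loss_hr + loss_lr\n\\end{verbatim}\n\n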
Note that $j \\in \\{1, 2, 3, 4, 5\\}$ is the index of the feature levels, and $f_H^j$ and $f_L^j$ denote the feature maps of $x_H$ and $x_L$, respectively.\n\nTo train the cross-resolution encoder $\\mathcal{E}$ and the multi-scale resolution discriminators $\\{\\mathcal{D}_F^j\\}$ with cross-resolution input images $x_H$ and $x_L$, we extend the adversarial loss in Eq.~\\eqref{eq:adv_loss_feature} from single-scale to multi-scale adversarial learning and define the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$ as\n\\begin{equation}\n \\begin{split}\n \\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}} = \\sum_{j}^{}\\bigg(&~ \\mathbb{E}_{x_H \\sim X_H}[\\log(\\mathcal{D}_F^{j}(f_H^j))] \\\\\n + &~ \\mathbb{E}_{x_L \\sim X_L}[\\log(1 - \\mathcal{D}_F^{j}(f_L^j))]\\bigg).\n \\end{split}\n \\label{eq:adv_multi}\n\\end{equation}\n\nWith the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$, our multi-scale resolution discriminators $\\{\\mathcal{D}_F^j\\}$ align the feature distributions across resolutions, carrying out the learning of resolution-invariant representations.\n\n{\\flushleft {\\bf High-resolution decoder $\\mathcal{G}$.}}\nIn addition to learning the resolution-invariant representation $f$, our CRGAN further synthesizes the associated HR images. This is to recover the missing details in LR inputs, together with the re-ID task to be performed later in the cross-modal re-ID network.\n\nTo achieve this goal, we have an HR decoder $\\mathcal{G}$ in our CRGAN which reconstructs (or recovers) the HR images as the outputs. To accomplish this, we apply an HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ between the reconstructed HR images and their corresponding HR ground-truth images. Specifically, the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ is defined as\n\\begin{equation}\n \\label{eq:rec}\n \\begin{aligned}\n \\mathcal{L}_\\mathrm{rec} = &~ \\mathbb{E}_{x_H \\sim X_H}[\\|\\mathcal{G}(f_H) - x_H\\|_1]\\\\\n + &~ \\mathbb{E}_{x_L \\sim X_L}[\\|\\mathcal{G}(f_L) - x_{H}\\|_1],\n \\end{aligned}\n\\end{equation}\nwhere the HR ground-truth image associated with $x_L$ is $x_H$. Following Huang~\\textit{et~al}\\mbox{.}~\\cite{huang2018munit}, we adopt the $\\ell_1$ norm in the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ as it preserves image sharpness. We note that both $X_H$ and $X_L$ will be shuffled during training. That is, images of the same identity but different resolutions will not necessarily be observed by the CRGAN at the same time.\n\nIt is worth noting that, while the aforementioned HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ could reduce information loss in the latent feature space, we follow Ledig~\\textit{et~al}\\mbox{.}~\\cite{ledig2017photo} and introduce skip connections between the cross-resolution encoder $\\mathcal{E}$ and the HR decoder $\\mathcal{G}$. 
This would facilitate the learning process of image reconstruction, as well as allowing more efficient gradient propagation.\n\nTo make the HR decoder $\\mathcal{G}$ produce more perceptually realistic HR outputs and associate with the task of person re-ID, we further adopt adversarial learning in the \\emph{image space} and introduce an HR image discriminator $\\mathcal{D}_{I}$ which takes the recovered HR images (i.e., $\\mathcal{G}(f_L)$ and $\\mathcal{G}(f_H)$) and their corresponding HR ground-truth images as inputs to distinguish whether the input images are real or fake~\\cite{ledig2017photo,wang2018cascaded}. Specifically, we define the image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ as\n\\begin{equation}\\scriptsize\n \\begin{aligned}\n \\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}} = &~ \\mathbb{E}_{x_H \\sim X_H}[\\log(\\mathcal{D}_{I}(x_H))] + \\mathbb{E}_{x_L \\sim X_L}[\\log(1 - \\mathcal{D}_{I}(\\mathcal{G}(f_L)))] \\\\\n + &~ \\mathbb{E}_{x_H \\sim X_H}[\\log(\\mathcal{D}_{I}(x_H))] + \\mathbb{E}_{x_H \\sim X_H}[\\log(1 - \\mathcal{D}_{I}(\\mathcal{G}(f_H)))].\n \\end{aligned}\n \\label{eq:adv_loss_image}\n\\end{equation}\n\nIt is also worth repeating that the goal of this HR decoder $\\mathcal{G}$ is not simply to recover the missing details in LR inputs, but also to have such recovered HR images aligned with the learning task of interest (i.e., person re-ID). Namely, we encourage the HR decoder $\\mathcal{G}$ to perform \\textit{re-ID oriented} HR recovery, which is further realized by the following cross-modal re-ID network.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{img\/model_2.pdf}\n \\caption{\\textbf{Multi-scale adversarial learning.} We adopt multiple discriminators to effectively align feature distributions between HR and LR images at different feature levels. This multi-scale adversarial learning strategy allows our model to learn resolution-invariant representations that are more robust to resolution variations.}\n \\label{fig:multi-discriminator}\n \\vspace{-4.0mm}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{img\/model_3.pdf}\n \\caption{\\textbf{Illustration of the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$.} The HR encoder $\\mathcal{F}$ takes the recovered HR image $\\tilde{x}_H$ and the corresponding HR ground-truth image $x_H$ as inputs and derives their HR representations $\\tilde{g}$ and $g$, respectively. We then introduce the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ to enforce the consistency between $\\tilde{g}$ and $g$. This consistency loss allows our HR encoder $\\mathcal{F}$ to learn HR representations that are more robust to the variations of HR image recovery.}\n \\label{fig:consist-loss}\n \\vspace{-3.0mm}\n\\end{figure}\n\n\\subsection{Cross-Modal Re-ID}\n\nAs shown in Figure~\\ref{fig:Model}, the cross-modal re-ID network first applies an HR encoder $\\mathcal{F}$, which takes the reconstructed HR image from the CRGAN as input, to derive the HR feature representation $g \\in {\\mathbb{R}}^{h \\times w \\times d}$. 
Then, a classifier $\\mathcal{C}$ is learned to complete person re-ID.\n\nWhile enforcing the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ and the image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_I}$ allows our model to map LR input images of various resolutions into their HR versions to some extent~\\cite{CAD-Net}, it is still difficult for the model to simultaneously handle the resolution variations and learn the mapping between LR and HR images, especially since there are infinitely many mappings between LR and HR images. To address the variations of HR image recovery, we introduce a feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ that enforces the consistency between the features of the recovered HR images and the corresponding HR ground-truth images. As illustrated in Figure~\\ref{fig:consist-loss}, the HR encoder $\\mathcal{F}$ takes the recovered HR image $\\tilde{x}_H$ and its corresponding HR ground-truth image $x_H$ as inputs and derives the HR representations $\\tilde{g} = \\mathcal{F}(\\tilde{x}_H)$ and $g = \\mathcal{F}(x_H)$, respectively. We then enforce the consistency between $\\tilde{g}$ and $g$ using the $\\ell_1$ distance and define the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ as\n\\begin{equation}\n    \\label{eq:consist}\n    \\begin{aligned}\n        \\mathcal{L}_\\mathrm{consist} = &~ \\mathbb{E}_{x_H \\sim X_H}[\\|\\mathcal{F}(\\tilde{x}_H) - \\mathcal{F}(x_H)\\|_1]\\\\\n        + &~ \\mathbb{E}_{x_L \\sim X_L}[\\|\\mathcal{F}(\\tilde{x}_H) - \\mathcal{F}(x_{H})\\|_1].\n    \\end{aligned}\n\\end{equation}\n\nEnforcing the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ allows the HR encoder $\\mathcal{F}$ to derive the HR representation $g$, which is more robust to the variations of HR image recovery.\n\nAs for the input to the classifier $\\mathcal{C}$, we jointly consider the feature representations of two different modalities for person identity classification, i.e., the resolution-invariant representation $f$ and the HR representation $g$. The former preserves content information, while the latter observes the recovered HR details for person re-ID. Thus, we have the classifier $\\mathcal{C}$ take the concatenated feature representation $\\mathbfit{v} = [f, g] \\in {\\mathbb{R}}^{h \\times w \\times 2d}$ as the input. 
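\n\nTo make the interplay of the two modalities concrete, a minimal PyTorch-style sketch of the joint forward pass and the feature consistency term is given below. The module and variable names are ours, and the global average pooling step (which in our implementation resides inside the classifier $\\mathcal{C}$) is written out explicitly for clarity; the sketch is illustrative rather than a faithful description of our implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as nnf\n\ndef cross_modal_forward(E, G, F_hr, C, x, x_hr=None):\n    f = E(x)                      # resolution-invariant feature\n    x_rec = G(f)                  # recovered HR image\n    g = F_hr(x_rec)               # HR feature of the recovered image\n    v = torch.cat([f, g], dim=1)  # joint representation [f, g]\n    u = nnf.adaptive_avg_pool2d(v, 1).flatten(1)  # GAP -> (B, 2d)\n    logits = C(u)                 # identity prediction\n    l_consist = None\n    if x_hr is not None:          # HR ground truth given (training)\n        l_consist = nnf.l1_loss(g, F_hr(x_hr))\n    return logits, u, l_consist\n\\end{verbatim}\nAt test time, the pooled vector corresponds to the joint feature $\\mathbfit{u}$ used for the nearest neighbor matching described in the algorithmic overview.\n\n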
In this work, the adopted classification loss $\\mathcal{L}_\\mathrm{cls}$ is the integration of the identity loss $\\mathcal{L}_\\mathrm{id}$ and the triplet loss $\\mathcal{L}_\\mathrm{tri}$~\\cite{hermans2017defense}, and is defined as\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L}_\\mathrm{cls} = \\mathcal{L}_\\mathrm{id} + \\mathcal{L}_\\mathrm{tri},\n \\end{aligned}\n \\label{eq:cls}\n\\end{equation}\nwhere the identity loss $\\mathcal{L}_\\mathrm{id}$ computes the softmax cross entropy between the classification prediction and the corresponding ground-truth one hot vector, while the triplet loss $\\mathcal{L}_\\mathrm{tri}$ is introduced to enhance the discrimination ability during the re-ID process and is defined as\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L}_\\mathrm{tri}\n = &~ \\mathbb{E}_{(x_H,y_H) \\sim (X_H,Y_H)}\\max(0, \\phi + d_\\mathrm{pos}^H - d_\\mathrm{neg}^H) \\\\\n + &~ \\mathbb{E}_{(x_L,y_L) \\sim (X_L,Y_L)}\\max(0, \\phi + d_\\mathrm{pos}^L - d_\\mathrm{neg}^L),\n \\end{aligned}\n \\label{eq:tri}\n\\end{equation}\nwhere $d_\\mathrm{pos}$ and $d_\\mathrm{neg}$ are the distances between the positive (same label) and the negative (different labels) image pairs, respectively, and $\\phi > 0$ serves as the margin.\n\nIt can be seen that, the above cross-resolution person re-ID framework is very different from existing one like CSR-GAN~\\cite{wang2018cascaded}, which addresses image SR and person re-ID \\emph{separately}. More importantly, the aforementioned identity loss $\\mathcal{L}_\\mathrm{id}$ not only updates the classifier $\\mathcal{C}$, but also refines the HR decoder $\\mathcal{G}$ in our CRGAN. This is the reason why our CRGAN is able to produce \\emph{re-ID oriented} HR outputs, i.e., the recovered HR details preferable for person re-ID.\n\n\\input{exp\/psuedo}\n\n{\\flushleft {\\bf Full training objective.}}\nThe total loss $\\mathcal{L}$ for training our proposed CAD-Net++ is summarized as follows:\n\\begin{equation}\n \\begin{split}\n \\mathcal{L} & = \\mathcal{L}_\\mathrm{cls} + \\lambda_\\mathrm{adv}^{\\mathcal{D}_{F}}\\cdot\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}} + \\lambda_\\mathrm{rec}\\cdot\\mathcal{L}_\\mathrm{rec} \\\\ & + \\lambda_\\mathrm{adv}^{\\mathcal{D}_{I}}\\cdot\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}} + \\lambda_\\mathrm{consist}\\cdot\\mathcal{L}_\\mathrm{consist},\n \\end{split}\n \\label{eq:fullobj}\n\\end{equation}\nwhere $\\lambda_\\mathrm{adv}^{\\mathcal{D}_{F}}$, $\\lambda_\\mathrm{rec}$, $\\lambda_\\mathrm{adv}^{\\mathcal{D}_{I}}$, and $\\lambda_\\mathrm{consist}$ are the hyper-parameters used to control the relative importance of the corresponding losses. We note that $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$, $\\mathcal{L}_\\mathrm{rec}$, and $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ are developed to learn the CRGAN, $\\mathcal{L}_\\mathrm{consist}$ is introduced to update the cross-modal re-ID component, and $\\mathcal{L}_\\mathrm{cls}$ is designed to update the entire framework.\n\nTo learn our network with the HR training images and their down-sampled LR ones, we minimize the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ for updating our CRGAN, the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ for updating the HR encoder $\\mathcal{F}$, and the classification loss $\\mathcal{L}_\\mathrm{cls}$ for jointly updating the CRGAN and the cross-modal re-ID network. 
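\n\nFor completeness, the way these terms are assembled can be sketched as follows. The first function instantiates Eq.~\\eqref{eq:cls} with a softmax cross-entropy and a standard hinge-based triplet criterion (with the margin $\\phi = 2$ used in our experiments), and the second is a direct transcription of Eq.~\\eqref{eq:fullobj} with all weights set to $1$ as in our experiments; the triplet sampling strategy is not prescribed by this sketch.\n\\begin{verbatim}\nimport torch.nn as nn\n\nce = nn.CrossEntropyLoss()              # identity loss\ntri = nn.TripletMarginLoss(margin=2.0)  # triplet loss, margin phi\n\ndef classification_loss(logits, labels, anchor, positive, negative):\n    # L_cls = L_id + L_tri\n    return ce(logits, labels) + tri(anchor, positive, negative)\n\ndef full_objective(l_cls, l_adv_f, l_rec, l_adv_i, l_consist,\n                   lam_adv_f=1.0, lam_rec=1.0,\n                   lam_adv_i=1.0, lam_consist=1.0):\n    return (l_cls + lam_adv_f * l_adv_f + lam_rec * l_rec\n            + lam_adv_i * l_adv_i + lam_consist * l_consist)\n\\end{verbatim}\n\n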
The image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_I}$ is computed for producing perceptually realistic HR images, while the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_F}$ is optimized for learning resolution-invariant representations.\n\n\\revision{The training details of CAD-Net++ are summarized in Algorithm~\\ref{alg:cadnet}. Specifically, we train our CAD-Net++ until all losses converge.}\n\\section{Experiments}\n\nIn this section, we first describe the implementation details, the adopted datasets for evaluation, and the experimental settings. We then present both quantitative and qualitative results, including ablation studies.\n\n\\subsection{Implementation Details}\n\nWe implement our model using PyTorch. The ResNet-$50$~\\cite{he2016deep} pretrained on ImageNet is used to build the cross-resolution encoder $\\mathcal{E}$ and the HR encoder $\\mathcal{F}$. Note that since $\\mathcal{E}$ and $\\mathcal{F}$ work for different tasks, these two components do not share weights. The classifier $\\mathcal{C}$ is composed of a global average pooling layer and a fully connected layer followed by a softmax activation. The architecture of the resolution discriminator $\\mathcal{D}_F$ is the same as that adopted by Tsai~\\textit{et~al}\\mbox{.}~\\cite{tsai2018learning}. The structure of the HR image discriminator $\\mathcal{D}_I$ is similar to the ResNet-$18$~\\cite{he2016deep}. Our HR decoder $\\mathcal{G}$ is similar to that proposed by Miyato~\\textit{et~al}\\mbox{.}~\\cite{miyato2018cgans}. Components $\\mathcal{D}_F$, $\\mathcal{D}_I$, $\\mathcal{G}$, and $\\mathcal{C}$ are all randomly initialized. We use stochastic gradient descent to train the proposed model. For components $\\mathcal{E}$, $\\mathcal{G}$, $\\mathcal{F}$, and $\\mathcal{C}$, the learning rate, momentum, and weight decay are $1 \\times 10^{-3}$, $0.9$, and $5 \\times 10^{-4}$, respectively. For the two discriminators $\\mathcal{D}_F$ and $\\mathcal{D}_I$, the learning rate is set to $1 \\times 10^{-4}$. The batch size is $32$. The margin $\\phi$ in the triplet loss $\\mathcal{L}_\\mathrm{tri}$ is set to $2$. We set the hyper-parameters in all the experiments as follows: $\\lambda_\\mathrm{adv}^{\\mathcal{D}_{F}}$ = 1, $\\lambda_\\mathrm{rec}$ = 1, $\\lambda_\\mathrm{adv}^{\\mathcal{D}_{I}}$ = 1, and $\\lambda_\\mathrm{consist}$ = 1. All images of various resolutions are resized to $256 \\times 128 \\times 3$ in advance. We train our model on a single NVIDIA GeForce GTX $1080$ GPU with $12$ GB memory.\n\n\\subsection{Datasets}\n\nWe adopt five person re-ID datasets, including CUHK03~\\cite{li2014deepreid}, VIPeR~\\cite{gray2008viewpoint}, CAVIAR~\\cite{Cheng:BMVC11}, Market-$1501$~\\cite{zheng2015scalable}, and DukeMTMC-reID~\\cite{zheng2017unlabeled}, and two vehicle re-ID datasets, including VeRi-$776$~\\cite{liu2016deep} and VRIC~\\cite{kanaci2018vehicle}, for evaluation. The details of each dataset are described as follows.\n\n{\\flushleft {\\bf CUHK03~\\cite{li2014deepreid}.}}\nThe CUHK03 dataset is composed of $14,097$ images of $1,467$ identities with $5$ different camera views. Following CSR-GAN~\\cite{wang2018cascaded}, we use the $1,367\/100$ training\/test identity split.\n\n{\\flushleft {\\bf VIPeR~\\cite{gray2008viewpoint}.}}\nThe VIPeR dataset contains $632$ person-image pairs captured by $2$ cameras. Following SING~\\cite{jiao2018deep}, we randomly divide this dataset into two non-overlapping halves based on the identity labels. 
That is, images of a subject belong to either the training set or the test set.\n\n{\\flushleft {\\bf CAVIAR~\\cite{Cheng:BMVC11}.}}\nThe CAVIAR dataset is composed of $1,220$ images of $72$ person identities captured by $2$ cameras. Following SING~\\cite{jiao2018deep}, we discard $22$ people who only appear in the closer camera, and split this dataset into two non-overlapping halves according to the identity labels.\n\n{\\flushleft {\\bf Market-$1501$~\\cite{zheng2015scalable}.}}\nThe Market-$1501$ dataset consists of $32,668$ images of $1,501$ identities with $6$ camera views. We use the widely adopted $751\/750$ training\/test identity split.\n\n{\\flushleft {\\bf DukeMTMC-reID~\\cite{zheng2017unlabeled}.}}\nThe DukeMTMC-reID dataset contains $36,411$ images of $1,404$ identities captured by $8$ cameras. We utilize the benchmarking $702\/702$ training\/test identity split.\n\n{\\flushleft {\\bf VeRi-$776$~\\cite{liu2016deep}.}}\nThe VeRi-$776$ dataset is divided into two subsets: a training set and a test set. The training set is composed of $37,781$ images of $576$ vehicles, and the test set has $11,579$ images of $200$ vehicles. Following the evaluation protocol in \\cite{liu2016deep}, the image-to-track cross-camera search is performed, where we treat one image of a vehicle from one camera as the query, and search for tracks of the same vehicle in other cameras.\n\n{\\flushleft {\\bf VRIC~\\cite{kanaci2018vehicle}.}}\nThe VRIC dataset is a newly collected dataset, which consists of $60,430$ images of $5,656$ vehicle IDs collected from $60$ different cameras in traffic scenes. VRIC differs significantly from existing datasets in that vehicles were captured with variations in image resolution, motion blur, weather condition, and occlusion. The training set has $54,808$ images of $2,811$ vehicles, while the remaining $5,622$ images of $2,811$ identities are used for testing.\n\n\\subsection{Experimental Settings and Evaluation Metrics}\n\nWe evaluate the proposed method under three different settings: (1) \\emph{cross-resolution setting}~\\cite{jiao2018deep,kanaci2018vehicle}, (2) \\emph{standard setting}~\\cite{ge2018fd,zhong2017camera}, and (3) \\emph{semi-supervised setting}~\\cite{chen2019learning}. For the cross-resolution setting, the test (query) set is composed of LR images while the gallery set contains HR images only. For the standard setting (i.e., re-ID with no significant resolution variations), both query and gallery sets contain HR images. For the semi-supervised setting (i.e., re-ID with partially labeled datasets), we follow the cross-resolution setting where the test (query) set consists of LR images while the gallery set comprises HR images only.\n\nIn all of the experiments, we adopt the standard single-shot re-ID setting~\\cite{jiao2018deep,liao2015person}. We note that the cross-resolution re-ID setting analyzes the \\emph{robustness against the resolution variations}, while the standard re-ID setting examines if our method still improves re-ID when no significant resolution variations are present. The semi-supervised re-ID setting aims to investigate whether our proposed algorithm still exhibits sufficient ability in re-identifying images with less supervision.\n\nWe adopt the multi-scale resolution discriminators $\\{\\mathcal{D}_F^j\\}$ which align feature distributions at different feature levels. 
To balance between learning efficiency and performance, we select the index of feature level with $j \\in \\{1, 2\\}$, and denote our method as ``Ours (multi-scale)'' and the variant of our method with single-scale resolution discriminator ($j = 1$) as ``Ours (single-scale)''.\n\nFor performance evaluation, we adopt the average cumulative match characteristic as the evaluation metric. We note that the performance of our method can be further improved by applying pre-\/post-processing methods, attention mechanisms, or re-ranking. For fair comparisons, no such techniques are used in all of our experiments.\n\n\\input{exp\/exp-ReID.tex}\n\n\\subsection{Evaluation of Cross-Resolution Setting}\n\nWe evaluate our proposed algorithm on both person re-ID~\\cite{jiao2018deep,CAD-Net} and vehicle re-ID~\\cite{kanaci2018vehicle} tasks.\n\n\\subsubsection{Cross-Resolution Person Re-ID}\n\nFollowing SING~\\cite{jiao2018deep}, we consider multiple low-resolution (MLR) person re-ID and evaluate the proposed method on \\emph{four synthetic} and \\emph{one real-world} benchmarks. To construct the synthetic MLR datasets (i.e., MLR-CUHK03, MLR-VIPeR, MLR-Market-$1501$, and MLR-DukeMTMC-reID), we follow SING~\\cite{jiao2018deep} and down-sample images taken by one camera by a randomly selected down-sampling rate $r \\in \\{2, 3, 4\\}$ (i.e., the size of the down-sampled image becomes $\\frac{H}{r} \\times \\frac{W}{r} \\times 3$), while the images taken by the other camera(s) remain unchanged. The CAVIAR dataset inherently contains realistic images of multiple resolutions, and is a \\emph{genuine} and more challenging dataset for evaluating MLR person re-ID.\n\n\\revision{We compare our proposed approach (CAD-Net++) with methods developed for cross-resolution person re-ID, including JUDEA~\\cite{li2015multi}, SLD$^2$L~\\cite{jing2015super}, SDF~\\cite{wang2016scale}, RAIN~\\cite{chen2019learning}, DenseNet-121~\\cite{huang2017densely}, SE-ResNet-50~\\cite{hu2018squeeze}, ResNet-50~\\cite{he2016deep}, FFSR~\\cite{mao2019resolution}, RIFE~\\cite{mao2019resolution}, FSRCNN-reID~\\cite{dong2016accelerating} (FSRCNN followed by the same representation learning method as SING~\\cite{jiao2018deep}), SING~\\cite{jiao2018deep}, and CSR-GAN~\\cite{wang2018cascaded}, and approaches developed for standard person re-ID, including PCB~\\cite{sun2018beyond}, SPreID~\\cite{kalayeh2018human}, Part Aligned~\\cite{suh2018part}, CamStyle~\\cite{zhong2017camera}, and FD-GAN~\\cite{ge2018fd}.} For methods developed for cross-resolution person re-ID, the training set contains HR images and LR ones with all three down-sampling rates $r \\in \\{2, 3, 4\\}$ for each person. For methods developed for standard person re-ID, the training set contains HR images for each identity only.\n\n\\revision{\n{\\flushleft {\\bf Results.}}\nTable~\\ref{table:exp-ReID} reports the quantitative results recorded at ranks $1$, $5$, and $10$ on all five adopted datasets. For CSR-GAN~\\cite{wang2018cascaded} on the MLR-CUHK03, CAVIAR, MLR-Market-$1501$, and MLR-DukeMTMC-reID datasets, and PCB~\\cite{sun2018beyond}, SPreID~\\cite{kalayeh2018human}, Part Aligned~\\cite{suh2018part}, CamStyle~\\cite{zhong2017camera}, FSRCNN-reID~\\cite{dong2016accelerating}, and FD-GAN~\\cite{ge2018fd} on all five datasets, their results are obtained by running the official code with the default implementation setup.} For SING~\\cite{jiao2018deep}, we reproduce their results on the MLR-Market-$1501$ and MLR-DukeMTMC-reID datasets. 
\n\nOur method adopting either single-scale or multi-scale resolution discriminators performs favorably against all competing methods on all five adopted datasets. The performance gains can be ascribed to three main factors. First, unlike most existing person re-ID methods, our model performs cross-resolution person re-ID in an end-to-end learning fashion. Second, our method learns the resolution-invariant representations, allowing our model to recognize persons in images of different resolutions. Third, our model learns to recover the missing details in LR images, thus providing additional discriminative evidence for person re-ID.\n\n\\input{exp\/vehicle_reid_mlr.tex}\n\n{\\flushleft {\\bf Effect of multi-scale adversarial learning.}}\nThe effect of adopting multi-scale adversarial learning strategy can be observed by comparing two of the variant methods, i.e., Ours (single-scale) and Ours (multi-scale). We observe that adopting multi-scale adversarial learning strategy consistently improves the performance over adopting single-scale adversarial learning strategy on all five datasets. \n\n{\\flushleft {\\bf Effect of deriving joint representation.}}\nThe advantage of deriving joint representation $\\mathbfit{v} = [f,g]$ can be assessed by comparing with two of our variant methods, i.e., Ours (multi-scale) ($f$ only) and Ours (multi-scale) ($g$ only). In the Ours (multi-scale) ($f$ only) method, the classifier $\\mathcal{C}$ only takes the resolution-invariant representation $f$ as input. In the Ours (multi-scale) ($g$ only) method, the classifier $\\mathcal{C}$ only takes the HR representation $g$ as input. We observe that deriving joint representation $\\mathbfit{v}$ consistently improves the performance over these two variant\/baseline methods. \n\n\\input{exp\/standard_reID.tex}\n\n\\subsubsection{Cross-Resolution Vehicle Re-ID}\n\nSimilar to cross-resolution person re-ID, we also consider multiple low-resolution (MLR) setting for vehicle re-ID and evaluate the proposed method on one \\emph{synthetic} and one \\emph{real-world} benchmarks. To construct the synthetic MLR-VeRi$776$ dataset, we down-sample images taken by one camera by a randomly selected down-sampling rate $r \\in \\{2, 3, 4\\}$, whereas the images taken by the other camera(s) remain unchanged. The VRIC dataset is a \\emph{genuine} and more challenging dataset for evaluating MLR vehicle re-ID and contains realistic images of multiple resolutions.\n\n\\revision{We compare our approach with cross-resolution re-ID methods, including FSRCNN-reID~\\cite{dong2016accelerating}, SING~\\cite{jiao2018deep}, CSR-GAN~\\cite{wang2018cascaded}, and MSVF~\\cite{kanaci2018vehicle}, and standard vehicle re-ID approaches, including Siamese-Visual~\\cite{shen2017learning}, OIFE~\\cite{wang2017orientation}, and VAMI~\\cite{zhou2018aware}.} For cross-resolution re-ID methods, the training set contains HR images and LR ones with all three down-sampling rates $r \\in \\{2, 3, 4\\}$ for each vehicle. For standard vehicle re-ID approaches, the training set comprises only HR images for each vehicle.\n\n{\\flushleft {\\bf Results.}}\nTable~\\ref{table:exp-vehicle-reid-mlr} presents the quantitative results recorded at ranks $1$ and $5$, and mAP on the two adopted datasets. We observe that our proposed algorithm achieves the state-of-the-art performance on both datasets. 
While our method is designed for cross-resolution person re-ID, the favorable performance (about $4\\%$ performance gains at rank 1 on both datasets) over all competing approaches (some of the competing methods are particularly tailored for cross-resolution vehicle re-ID task) demonstrates the generalization of our proposed algorithm.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{img\/Semi-super.png}\n \\caption{\\textbf{Semi-supervised cross-resolution person re-ID on the MLR-CUHK03 dataset (\\%).} Even if the training data is only partially labeled, our method still exhibits sufficient ability in re-identifying person images of various resolutions.}\n \\label{fig:exp-semi}\n \\vspace{-4.0mm}\n\\end{figure}\n\n\\subsection{Evaluation of Standard Setting}\n\n\\revision{To examine if our method still improves re-ID performance when no significant resolution variations are present, we consider standard person re-ID and compare with existing approaches, including JLML~\\cite{Li-2017-IJCAI}, TriNet~\\cite{Hermans-2017-arXiv}, DML~\\cite{Zhang-2018-CVPR}, MGCAM~\\cite{Song-2018-CVPR}, DPFL~\\cite{Cheng-2016-CVPR}, PAN~\\cite{Zheng-2018-TCSVT}, PoseTransfer~\\cite{Zhong-2018-CVPR}, AlignedReID~\\cite{zhang2017alignedreid}, SVDNet~\\cite{sun2017svdnet}, CamStyle~\\cite{zhong2017camera}, PN-GAN~\\cite{qian2017pose}, SPreID~\\cite{kalayeh2018human}, FD-GAN~\\cite{ge2018fd}, Part Aligned~\\cite{suh2018part}, PCB~\\cite{sun2018beyond}, and DG-Net~\\cite{zheng2019joint}, in which the training set contains HR images for each identity only. As for our model, we take the same training set and \\emph{augment} it by down-sampling each image with three down-sampling rates $r \\in \\{2, 3, 4\\}$ (i.e., our training set contains HR images and LR ones with $r \\in \\{2, 3, 4\\}$) per person.}\n\n\\input{exp\/psnr.tex}\n\n\\input{exp\/GAN.tex}\n\n\\revision{To demonstrate that our method can help improve the state-of-the-art methods, we initialize our HR encoder $\\mathcal{F}$ with different pre-trained models as our method employs the same backbone (i.e., ResNet-$50$~\\cite{he2016deep}) as most of these methods. Specifically, we initialize our HR encoder $\\mathcal{F}$ with the pre-trained weights from FD-GAN~\\cite{ge2018fd}, Part Aligned~\\cite{suh2018part}, PCB~\\cite{sun2018beyond}, and DG-Net~\\cite{zheng2019joint}, and denote these variants of our method as Ours (FD-GAN), Ours (Part Aligned), Ours (PCB), and Ours (DG-Net), respectively.}\n\n{\\flushleft {\\bf Results.}}\n\\revision{Table~\\ref{table:exp-standard-reid} reports the quantitative results recorded at rank $1$ and the mAP on the two adopted datasets. We observe that our method results in further improvements over existing methods, achieving the state-of-the-art performance on the Market-$1501$~\\cite{zheng2015scalable} and DukeMTMC-reID~\\cite{zheng2017unlabeled} datasets in the standard person re-ID setting.}\n\nThe above quantitative results demonstrate that when significant resolution variations are present (i.e., cross-resolution setting), methods developed for standard re-ID suffer from the negative effect caused by the resolution mismatch issue. 
When considering the standard re-ID setting, the proposed method achieves further improvements over existing methods.\n\n\\subsection{Evaluation of Semi-Supervised Setting}\n\nIn the following, we conduct a series of semi-supervised experiments, and evaluate whether the proposed CAD-Net++ remains effective for cross-resolution person re-ID when only a subset of the labeled training data is available. Namely, less labeled training data can be used when computing the classification loss $\\mathcal{L}_\\mathrm{cls}$ (Eq. (\\ref{eq:cls})).\n\nWe evaluate the proposed method on the MLR-CUHK$03$ dataset. For performance evaluation, we choose $k\\%$ of the training data and keep their labels, while ignoring the labels of the rest, for $k\\in\\{0, 20, 40, 60, 80, 100\\}$. Note that the unlabeled data are still utilized in optimizing the multi-scale feature-level adversarial loss (Eq. (\\ref{eq:adv_multi})), the HR reconstruction loss (Eq. (\\ref{eq:rec})), the image-level adversarial loss (Eq. (\\ref{eq:adv_loss_image})), and the feature consistency loss (Eq. (\\ref{eq:consist})). We compare our proposed approach with RAIN~\\cite{chen2019learning} and two baseline methods: ``Baseline (train on HR)''~\\cite{chen2019learning} and ``Baseline (train on HR \\& LR)''~\\cite{chen2019learning}.\n\nFigure~\\ref{fig:exp-semi} presents the model performance at rank $1$. We observe that without any label information, our method achieves $8\\%$ at rank $1$. When the fraction of labeled data is increased to $20\\%$, our model reaches $69.2\\%$ at rank $1$, and is even better than SING~\\cite{jiao2018deep} ($67.7\\%$ at rank $1$) learned with $100\\%$ labeled data. When the fraction of labeled data is set to $40\\%$, our model achieves $77.1\\%$ at rank $1$ and compares favorably against most existing approaches reported in Table~\\ref{table:exp-ReID} that are learned with $100\\%$ labeled data.\n\nThe promising results demonstrate that sufficient re-ID ability is exhibited by our method even if only a small portion of labeled training data are available. This favorable property increases the applicability of the proposed CAD-Net++ in real-world re-ID applications. We attribute this property to the elaborately developed loss functions. Except for the classification loss in Eq. (\\ref{eq:cls}), all other loss functions can utilize unlabeled training data to regularize model training.\n\n\\subsection{Evaluation of the Recovered HR Images}\n\nTo demonstrate that our CRGAN is capable of recovering the missing details in LR input images of varying and even unseen resolutions, we evaluate the quality of the recovered HR images on the MLR-CUHK03 \\emph{test set} using SSIM, PSNR, and LPIPS~\\cite{zhang2018unreasonable} metrics. We employ the ImageNet-pretrained AlexNet~\\cite{krizhevsky2012imagenet} when computing LPIPS. We compare our CRGAN with CycleGAN~\\cite{zhu2017unpaired}, SING~\\cite{jiao2018deep}, and CSR-GAN~\\cite{wang2018cascaded}. For CycleGAN~\\cite{zhu2017unpaired}, we train its model to learn a mapping between LR and HR images. 
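\n\nFor reference, the three image-quality metrics can be computed per image pair along the following lines. This is an illustrative sketch based on publicly available implementations (scikit-image for SSIM and PSNR, and the LPIPS package of Zhang~\\textit{et~al}\\mbox{.}~\\cite{zhang2018unreasonable} with the AlexNet backbone); the exact preprocessing used in our evaluation may differ.\n\\begin{verbatim}\nimport lpips\nimport torch\nfrom skimage.metrics import peak_signal_noise_ratio\nfrom skimage.metrics import structural_similarity\n\nlpips_fn = lpips.LPIPS(net='alex')  # ImageNet-pretrained AlexNet\n\ndef to_tensor(img):\n    # img: HxWx3 uint8 array -> 1x3xHxW tensor scaled to [-1, 1]\n    t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()\n    return t / 127.5 - 1.0\n\ndef image_quality(recovered, ground_truth):\n    # use multichannel=True instead for older scikit-image versions\n    s = structural_similarity(recovered, ground_truth, channel_axis=2)\n    p = peak_signal_noise_ratio(ground_truth, recovered)\n    d = lpips_fn(to_tensor(recovered), to_tensor(ground_truth)).item()\n    return s, p, d\n\\end{verbatim}\n\n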
We report the quantitative results of the recovered image quality and person re-ID in Table~\\ref{table:image-comparison} with two different settings: (1) LR images of resolutions seen during training, i.e., $r \\in \\{2, 3, 4\\}$, and (2) LR images of unseen resolution, i.e., $r = 8$.\n\nFor seen resolutions (i.e., left block), we observe that our results using the SSIM and PSNR metrics are slightly worse than those of CSR-GAN~\\cite{wang2018cascaded}, while comparing favorably against SING~\\cite{jiao2018deep} and CycleGAN~\\cite{zhu2017unpaired}. However, our method performs favorably against these three methods using the LPIPS metric and achieves the state-of-the-art performance when evaluated on the cross-resolution person re-ID task. These results indicate that (1) SSIM and PSNR metrics are low-level pixel-wise metrics, which do not reflect high-level perceptual tasks, and (2) the end-to-end learning of cross-resolution person re-ID would result in better re-ID performance and recover more perceptually realistic HR images as reflected by LPIPS. \n\nFor the unseen resolution (i.e., right block), our method performs favorably against all three competing methods on all the adopted evaluation metrics. These results suggest that our method is capable of handling an unseen resolution (i.e., $r = 8$) with favorable performance in terms of both image quality and person re-ID. Note that we only train our model with HR images and LR ones with $r \\in \\{2, 3, 4\\}$.\n\nFigure~\\ref{fig:HR-img} presents six examples. For each person, there are four different resolutions (i.e., $r \\in \\{1, 2, 4, 8\\}$). Note that images with down-sampling rate $r = 1$ indicate that the images retain their original sizes and are the corresponding HR images of the LR ones. We observe that when LR images with down-sampling rate $r = 8$ are given, our model recovers the HR details with the highest quality among all competing methods. In addition, we present four examples in Figure~\\ref{fig:HR-uni} to show that the proposed CRGAN is able to recover the missing details in LR images of \\emph{various} resolutions. In each example, we have eight input images with down-sampling rates $r \\in \\{1, 2, 3, 4, 5, 6, 7, 8\\}$, respectively, where $r \\in \\{1, 2, 3, 4\\}$ are seen during training while the rest are unseen. The corresponding recovered HR images are displayed in Figure~\\ref{fig:HR-uni}. Both quantitative and qualitative results above confirm that our model can handle \\emph{a range of} seen resolutions and generalize well to \\emph{unseen} resolutions using a single model, i.e., CRGAN.\n\n\\input{exp\/recovery.tex}\n\n\\input{exp\/exp-ablation-hrgan.tex}\n\n\\input{exp\/exp-ablation-consis-loss.tex}\n\n\\input{exp\/exp-ablation-multi-scale.tex}\n\n\\subsection{Ablation Study}\n\n\\subsubsection{Loss Functions}\n\nTo analyze the importance of each developed loss function, we conduct an ablation study on the MLR-CUHK03 dataset. Table~\\ref{table:exp-abla} reports the quantitative results of the recovered HR images and the performance of cross-resolution person re-ID recorded at rank $1$.\n\n{\\flushleft {\\bf Multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$.}}\nWithout $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$, our model does not learn the resolution-invariant representations and thus suffers from the resolution mismatch issue. 
Significant drops in both the recovered image quality and the re-ID performance occur, indicating the importance of learning resolution-invariant representations to address the resolution mismatch problem.\n\n{\\flushleft {\\bf HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$.}}\nOnce $\\mathcal{L}_\\mathrm{rec}$ is excluded, there is no explicit supervision to guide the CRGAN to perform image recovery, and the model suffers from information loss in compressing visual images into semantic feature maps. Severe drops in the recovered image quality and the re-ID performance hence occur.\n\n{\\flushleft {\\bf Image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$.}}\nWhen $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ is turned off, our model is not encouraged to produce perceptually realistic HR images as reflected by LPIPS, resulting in a performance drop of $3.6\\%$ at rank $1$.\n\n{\\flushleft {\\bf Classification loss $\\mathcal{L}_\\mathrm{cls}$.}}\nAlthough our model is still able to perform image recovery without $\\mathcal{L}_\\mathrm{cls}$, it cannot perform discriminative learning for person re-ID since data labels are not used during training. Thus, a significant performance drop in person re-ID occurs.\n\n\\revision{\n{\\flushleft {\\bf Consistency loss $\\mathcal{L}_\\mathrm{consist}$.}}\nAs shown in Table~\\ref{table:exp-abla}, once $\\mathcal{L}_\\mathrm{consist}$ is disabled, the quality of the recovered HR images remains almost the same, but a performance drop of $0.8\\%$ at rank $1$ occurs. To further evaluate the contribution of the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$, we report the results of the ``Ours w\/o $\\mathcal{L}_\\mathrm{consist}$'' variant on all five cross-resolution person re-ID datasets (i.e., MLR-CUHK$03$, MLR-VIPeR, CAVIAR, MLR-Market-$1501$, and MLR-DukeMTMC-reID) and both cross-resolution vehicle re-ID datasets (i.e., MLR-VeRi$776$ and VRIC) in Table~\\ref{table:ablation-consis-oss}. We observe that without the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$, our model suffers from performance drops on all seven datasets. While the performance improvement in person re-ID contributed by the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ is marginal on the MLR-CUHK$03$ dataset, our results show that incorporating such a simple loss function consistently improves the performance on all seven datasets of both tasks without increasing the model complexity and capacity (i.e., the number of model parameters remains the same). 
In particular, on the MLR-Market-$1501$ and MLR-DukeMTMC-reID datasets, introducing the feature consistency loss $\\mathcal{L}_\\mathrm{consist}$ results in $3.7\\%$ and $3.5\\%$ performance improvement at Rank $1$, respectively.\n}\n\nThe ablation study demonstrates that the losses $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$, $\\mathcal{L}_\\mathrm{rec}$, and $\\mathcal{L}_\\mathrm{cls}$ are crucial to our method, while the losses $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ and $\\mathcal{L}_\\mathrm{consist}$ are helpful for further improving the performance.\n\n\\vspace{-4.0mm}\n\\revision{\\subsubsection{Multi-scale adversarial learning}}\n\n\\revision{For learning the resolution-invariant representations $f$, we perform an ablation study on the MLR-CUHK$03$ dataset to analyze the effect of introducing feature-level discriminators $\\{\\mathcal{D}_F^j\\}$ at different feature levels (scales) to align feature distributions between HR and LR images. We report the results recorded at rank $1$ for cross-resolution person re-ID, evaluate the quality of the recovered HR images using SSIM, PSNR, and LPIPS~\\cite{zhang2018unreasonable} metrics, and report the numbers of parameters of the feature-level discriminators $\\{\\mathcal{D}_F^j\\}$. As shown in Table~\\ref{table:exp-multi-abla}, the performance in rank $1$ of cross-resolution person re-ID and the quality of the recovered HR images improve as the number of feature distribution alignments increases. We observe that when introducing five feature-level discriminators $\\{\\mathcal{D}_F^j\\}_{j=1}^{5}$ to align feature distributions at all five feature levels, our model reaches the best performance with the best quality of the recovered HR images, but the number of parameters of the feature-level discriminators $\\{\\mathcal{D}_F^j\\}_{j=1}^{5}$ increases accordingly. To balance the parameter number (efficiency) and performance, we select the setting with the first two levels (i.e., $j \\in \\{1, 2\\}$) for our method, and denote it as ``Ours (multi-level)'' for performance comparison.}\n\n\\begin{figure}[t]\n \\vspace{-2.0mm}\n \\centering\n \\includegraphics[width=1.0\\linewidth]{img\/quali_psnr_hr.pdf}\n \\vspace{-6.0mm}\n \\caption{\\revision{\\textbf{Ablation study of the recovered HR images.} We present the recovered HR images obtained from our method as well as the ablation methods on the MLR-CUHK03 \\emph{test set} (i.e., cross-resolution person re-ID setting).}}\n \\label{fig:abl}\n \\vspace{-5.0mm}\n\\end{figure}\n\n\\input{exp\/tsne.tex}\n\n\\input{exp\/tsne-compare.tex}\n\n\\input{exp\/rank-person.tex}\n\n\\input{exp\/rank-vehicle.tex}\n\n\\vspace{3.0mm}\n\n\\subsubsection{The Recovered HR Images}\n\nHere, we present the recovered HR images generated by our method and the ablation methods on the MLR-CUHK03 \\emph{test set} using the cross-resolution person re-ID setting.\n\nWe visualize two examples of the recovered HR images in Figure~\\ref{fig:abl}. In each example, the recovered HR images with different input down-sampling rates $r = \\{1, 2, 4, 8\\}$ are shown. We observe that without applying the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ (Eq. (\\ref{eq:rec})), our model is not able to recover high-quality HR images. Without the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$ (Eq. (\\ref{eq:adv_multi})), our model does not learn resolution-invariant representations. Thus, our model cannot recover the HR details of images of an unseen resolution, e.g., $r = 8$. 
While our model can still reconstruct the feature maps to their HR images without the image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ (Eq. (\\ref{eq:adv_loss_image})), the recovered HR images may not look perceptually realistic, especially for input images with the down-sampling rate $r = 8$.\n\nThe examples in Figure~\\ref{fig:abl} demonstrate that the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ is essential to image recovery. The multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{F}}$ enables our model to deal with images of unseen resolutions while the image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_{I}}$ encourages our model to recover perceptually realistic HR images.\n\n\\input{exp\/paras.tex}\n\n\\subsection{Resolution-Invariant Representation}\n\n\\subsubsection{Effect of Feature-level Adversarial Loss}\n\nTo demonstrate the effectiveness of our model in deriving the resolution-invariant representations $f$, we first apply global average pooling to $f$ to obtain the resolution-invariant feature vector $\\mathbfit{w} = \\mathrm{GAP}(f) \\in {\\mathbb{R}}^d$. We then visualize $\\mathbfit{w}$ on the MLR-CUHK03 \\emph{test set} in Figure~\\ref{fig:tsne}.\n\nWe select $50$ different identities, each of which is indicated by a unique color, as shown in Figure~\\ref{fig:tsne-baseline} and Figure~\\ref{fig:tsne-identity}. In Figure~\\ref{fig:tsne-baseline}, we observe that without the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_F}$ (Eq. (\\ref{eq:adv_multi})), our model cannot establish a well-separated feature space. When loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_F}$ is imposed, the projected feature vectors are well separated as shown in Figure~\\ref{fig:tsne-identity}. These two figures indicate that without loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_F}$, our model does not learn resolution-invariant representations, thus implicitly suffering from the negative impact induced by the resolution mismatch issue.\n\nWe note that the projected feature vectors in Figure~\\ref{fig:tsne-identity} are well separated, suggesting that sufficient re-ID ability can be exhibited by our model. On the other hand, for Figure~\\ref{fig:tsne-resolution}, we colorize each image resolution with a unique color in each identity cluster (four different down-sampling rates $r \\in \\{1, 2, 4, 8\\}$). We observe that the projected feature vectors of the same identity but different down-sampling rates are all well clustered. We note that images with down-sampling rate $r = 8$ are not presented in the training set (i.e., unseen resolution).\n\nThe above visualizations demonstrate that our model learns resolution-invariant representations and generalizes well to unseen image resolution (e.g., $r = 8$) for cross-resolution person re-ID.\n\n\\subsubsection{Visual Comparisons of the Re-ID Feature Vector}\n\nWe visualize the feature vector for person re-ID on the MLR-CUHK$03$ \\emph{test set} via t-SNE and present visual comparisons with SING~\\cite{jiao2018deep} and CSR-GAN~\\cite{wang2018cascaded}. The comparisons are conducted on a subset of $50$ identities. 
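All t-SNE plots in this and the neighboring subsections follow the same standard recipe; a minimal sketch (not the original plotting code; it assumes scikit-learn and matplotlib, with hypothetical arrays \\texttt{features} and \\texttt{labels} holding the pooled feature vectors and their identity or resolution labels) is given below.
\\begin{verbatim}
# Illustrative t-SNE visualization: project pooled re-ID features to 2-D and
# color each point by its identity (or by its down-sampling rate).
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, out_path='tsne.pdf'):
    emb = TSNE(n_components=2, perplexity=30, init='pca',
               random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap='tab20', s=5)
    plt.axis('off')
    plt.savefig(out_path, bbox_inches='tight')
\\end{verbatim}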
Note that for our method, we use the joint feature vector $\\mathbfit{u} = \\mathrm{GAP}(\\mathbfit{v}) \\in {\\mathbb{R}}^{2d}$ for person re-ID.\n\nFigure~\\ref{fig:tsne-identity-comp} shows the visual results of the three competing methods, including SING~\\cite{jiao2018deep} in Figure~\\ref{fig:tsne-identity-naive}, CSR-GAN~\\cite{wang2018cascaded} in Figure~\\ref{fig:tsne-identity-better}, and our method in Figure~\\ref{fig:tsne-identity-ours}. In each figure, we plot the projected re-ID feature vectors of each identity with a specific color. We observe that neither SING~\\cite{jiao2018deep} nor CSR-GAN~\\cite{wang2018cascaded} separates instances of different identities well. In contrast, as shown in Figure~\\ref{fig:tsne-identity-ours}, our method successfully recognizes most identities, indicating the reliable re-ID ability of our proposed method.\n\nIn Figure~\\ref{fig:tsne-reso-comp}, we adopt resolution-specific coloring and display the visual results of SING~\\cite{jiao2018deep} in Figure~\\ref{fig:tsne-reso-naive}, CSR-GAN~\\cite{wang2018cascaded} in Figure~\\ref{fig:tsne-reso-better}, and our method in Figure~\\ref{fig:tsne-reso-ours}. As shown in Figure~\\ref{fig:tsne-reso-naive} and Figure~\\ref{fig:tsne-reso-better}, SING~\\cite{jiao2018deep} and CSR-GAN~\\cite{wang2018cascaded} tend to mix images generated with the down-sampling rate $r = 8$, i.e., those in red, even when these images belong to different identities. The visual results indicate that both SING~\\cite{jiao2018deep} and CSR-GAN~\\cite{wang2018cascaded} suffer from the resolution mismatch problem. Our method, on the other hand, learns resolution-invariant representations. As shown in Figure~\\ref{fig:tsne-reso-ours}, even images with \\emph{unseen} down-sampling rates (e.g., $r = 8$) are well clustered with respect to the identities.\n\nThe above visual comparisons verify that through learning resolution-invariant representations, our method works well on images of diverse and even unseen resolutions.\n\n\\subsection{Top-Ranked Gallery Images}\n\n{\\flushleft {\\bf Person re-ID.}}\nAs shown in Figure~\\ref{fig:rank-person}, given an LR query image with down-sampling rate $r = 4$ or $r = 8$, we present the $7$ top-ranked HR gallery images in Figure~\\ref{fig:rank-person-4} and Figure~\\ref{fig:rank-person-8}, respectively. We compare our method (bottom row) with two approaches developed for cross-resolution person re-ID, i.e., SING~\\cite{jiao2018deep} (top row) and CSR-GAN~\\cite{wang2018cascaded} (middle row). The green and red boundaries indicate correct and incorrect matches, respectively. In Figure~\\ref{fig:rank-person-4}, all three approaches, including ours, achieve mostly correct matches. However, from the results in the top row of Figure~\\ref{fig:rank-person-8}, we observe that SING~\\cite{jiao2018deep} does not have any correct matches, while CSR-GAN~\\cite{wang2018cascaded} achieves $1$ out of $7$ correct matches. In contrast, our method achieves $6$ out of $7$ correct matches, which again verifies the effectiveness and robustness of our model. Note that the resolution ($r = 8$) of the query image is not seen during training.\n\n{\\flushleft {\\bf Vehicle re-ID.}}\nSimilarly, as shown in Figure~\\ref{fig:rank-vehicle}, given an LR query image with down-sampling rates $r = 4$ and $r = 8$, we present the $7$ top-ranked HR gallery images in Figure~\\ref{fig:rank-vehicle-4} and Figure~\\ref{fig:rank-vehicle-8}, respectively. 
We also compare our method (bottom row) with two existing methods, i.e., SING~\\cite{jiao2018deep} (top row) and CSR-GAN~\\cite{wang2018cascaded} (middle row). In Figure~\\ref{fig:rank-vehicle-4}, all three approaches achieve satisfactory re-ID results. We then consider the case where the resolution ($r = 8$) of the query image is unseen during training. In Figure~\\ref{fig:rank-vehicle-8}, SING~\\cite{jiao2018deep} and CSR-GAN~\\cite{wang2018cascaded} achieve only $2$ and $3$ out of $7$ correct matches, respectively. In contrast, our method achieves $6$ out of $7$ correct matches. The comparison with existing methods also supports the applicability of our method to cross-resolution vehicle re-ID.\n\n\\subsection{Sensitivity Analysis of Hyper-parameters}\n\nWe further analyze the sensitivity of our model to the hyper-parameters introduced in the total loss $\\mathcal{L}$ (Eq. (\\ref{eq:fullobj})). We conduct the analysis by varying the value of each hyper-parameter and report the results on the MLR-CUHK$03$ \\emph{validation set} using the cross-resolution person re-ID setting. Figure~\\ref{fig:lambda} presents the experimental results.\n\nFor $\\lambda_\\mathrm{rec}$ and $\\lambda_\\mathrm{adv}^{\\mathcal{D}_F}$, if their values are set to $0$, our model suffers from performance drops, as shown in Figure~\\ref{fig:lambda-rec} and Figure~\\ref{fig:lambda-adv-F}. When $\\lambda_\\mathrm{rec}$ and $\\lambda_\\mathrm{adv}^{\\mathcal{D}_F}$ lie in a certain range (near $1$), the performance of our method is improved and remains stable. However, once their values are too large, e.g., $100$, a significant performance drop occurs since the corresponding losses (i.e., the HR reconstruction loss $\\mathcal{L}_\\mathrm{rec}$ and the multi-scale feature-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_F}$) dominate the total loss. In Figure~\\ref{fig:lambda-adv-I}, we observe that when the value of $\\lambda_\\mathrm{adv}^{\\mathcal{D}_I}$ is set to $0$, a modest performance drop occurs. If $\\lambda_\\mathrm{adv}^{\\mathcal{D}_I}$ varies within a reasonable region, e.g., $0.01\\sim2$, the performance remains stable. However, when the value of $\\lambda_\\mathrm{adv}^{\\mathcal{D}_I}$ is too large, e.g., larger than $10$, a severe performance drop occurs since the image-level adversarial loss $\\mathcal{L}_\\mathrm{adv}^{\\mathcal{D}_I}$ dominates the total loss.\n\nIn sum, the performance of our model remains stable when the values of the hyper-parameters lie in a certain range (near $1$). As a result, we set $\\lambda_\\mathrm{rec} = 1$, \n$\\lambda_\\mathrm{adv}^{\\mathcal{D}_F} = 1$, and $\\lambda_\\mathrm{adv}^{\\mathcal{D}_I} = 1$ without further fine-tuning them, though applying grid search or random search for hyper-parameter optimization might lead to further performance improvement.\n\\section{Conclusions}\n\nWe have presented an \\emph{end-to-end trainable} generative adversarial network, CAD-Net++, for addressing the resolution mismatch issue in person re-ID. The core technical novelty lies in the unique design of the proposed CRGAN, which learns \\emph{resolution-invariant} representations while recovering \\emph{re-ID oriented} HR details preferable for person re-ID. Our cross-modal re-ID network jointly considers the information from two feature modalities, resulting in improved re-ID performance. 
\\revised{Extensive experimental results show that our approach performs favorably against existing cross-resolution person re-ID methods on five challenging benchmarks, achieves competitive performance against existing approaches even when no significant resolution variations are present, and produces perceptually higher-quality HR images using only a \\emph{single} model. Visualization of the resolution-invariant representations further verifies the ability of our model to handle query images of \\emph{varying} or even \\emph{unseen} resolutions. Furthermore, we demonstrate the applicability of our method through the cross-resolution vehicle re-ID task, and the experimental results confirm that our model generalizes well to cross-resolution visual tasks. The extension to semi-supervised settings also demonstrates the superiority of our method over existing approaches. These results support the use of our model in practical re-ID applications.}\n\n\\section*{Acknowledgments}\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nCompressive sensing has triggered significant research activity in recent years. Its central motif is that sparse signals can be recovered from what was previously believed to be highly incomplete\ninformation \\cite{carota06,do06-2}. In particular, it is now known \\cite{carota06,cata06,ruve08,ra07,ra08,ra09-1} that an $s$-sparse trigonometric polynomial of maximal degree $N$ can be recovered from $m \\asymp s \\log^4(N)$ sampling points. These $m$ samples can be chosen as a random subset from the discrete set $\\{j\/N\\}_{j=0}^{N-1}$ \\cite{carota06,cata06,ruve08}, or independently from the uniform measure on $[0,1]$, see \\cite{ra07,ra08,ra09-1}. \n\nUntil now, all sparse recovery results of this type required that the underlying basis be uniformly bounded like the trigonometric system, so as to be \\emph{incoherent} with point samples \\cite{caro07}. As the main contribution of this paper, we show that this condition may be relaxed, obtaining comparable sparse recovery results for any basis that is bounded by a square-integrable envelope function. As a special case, we focus on the Legendre system over the domain $[-1,1]$. To account for the blow-up of the Legendre system near the endpoints of its domain, the random sampling points are drawn according to the Chebyshev probability measure. This aligns with classical results on Lagrange interpolation which support the intuition that Chebyshev points are much better suited for the recovery of polynomials than uniform points are \\cite{br97-1}.\n\nIn order to deduce our main results we establish the \\emph{restricted isometry property} (RIP) for a preconditioned\nversion of the matrix whose entries are the Legendre polynomials evaluated at sample points chosen from the Chebyshev measure. The concept of preconditioning seems to be new in the context\nof compressive sensing, although it has appeared within the larger scope of sparse approximation in a different context in \\cite{sv08}. It is likely that the idea of preconditioning can be exploited in other situations of interest as well.\n\n\nSparse expansions of multivariate polynomials in terms of tensor products\nof Legendre polynomials recently appeared in the problem of numerically \nsolving stochastic or parametric PDEs \\cite{codesc10,alho10}. Our results indeed extend easily to tensor\nproducts of Legendre polynomials, and the application of our techniques in this context \nof numerical solution of SPDEs seems very promising. \nOur results may also be transposed into the setting of function approximation. In particular, we show that the aforementioned sampling and reconstruction procedure is guaranteed to produce near-optimal approximations to functions\nin infinite-dimensional spaces of functions having $\\ell_p$-summable Fourier-Legendre\ncoefficients ($0
0$ will always denote a universal constant that\nmight be different in each occurence. \n\nThe Chebyshev probability measure (also referred to as arcsine distribution) on $[-1,1]$ is given by $d\\nu(x) = \\pi^{-1} (1 - x^2)^{-1\/2}dx$. \nIf a random variable $X$ is uniformly distributed on $[0,\\pi]$, then the random variable $Y = \\cos{X}$ is distributed according to the Chebyshev measure.\n\n\\section{Recovery of Legendre-sparse polynomials from a few samples}\n\nConsider the problem of recovering a polynomial $g$ \nfrom $m$ sample values $g(x_1),\\hdots,g(x_m)$. If the number of sampling points is less than or \nequal to the degree of $g$, such reconstruction is impossible in general due to \ndimension reasons. Therefore, as usual in the compressive sensing literature, \nwe make a sparsity assumption. \nIn order to introduce a suitable\nnotion of sparsity we consider the basis of Legendre polynomials $L_n$ on $[-1,1]$, normalized so as to be orthonormal with respect to the uniform measure on $[-1,1]$, i.e. $\\frac{1}{2} \\int_{-1}^{1} L_n(x) L_{\\ell}(x) dx = \\delta_{n,\\ell}$. \n\n\n\n\n\nAn arbitrary real-valued \npolynomial $g$ of degree $N-1$ can be expanded in terms of Legendre polynomials\n\\begin{equation}\\label{gLegendre}\ng(x) = \\sum_{n=0}^{N-1} c_n L_n(x), \\quad x \\in [-1,1].\n\\end{equation}\nIf the coefficient vector $c \\in {\\mathbb{R}}^{N}$ is $s$-sparse, we call the corresponding polynomial\n\\emph{Legendre $s$-sparse}, or simply Legendre-sparse. If $\\sigma_s(c)_1$ decays \nquickly as $s$ increases, then $g$ is called Legendre--compressible.\n\nWe aim to reconstruct Legendre--sparse polynomials, and more generally Legendre--compressible polynomials, of maximum degree $N-1$ from $m$ samples $g(x_1),\\hdots,g(x_m)$, where $m$ \nis desired to be small -- at least smaller than $N$. Writing $g$ in the form \\eqref{gLegendre}\nthis task clearly amounts to reconstructing the coefficient vector $c \\in {\\mathbb{R}}^{N}$. \n\nTo the set of $m$ sample points $(x_1, \\hspace{1mm} \\hdots \\hspace{1mm}, x_m)$ we associate the $m \\times N$ \\emph{Legendre} matrix $\\Phi$ defined component-wise by \n\\begin{equation}\n\\label{legendre_matrix}\n\\Phi_{j,k} = \nL_{k-1}(x_j), \\quad j \\in [m],\\hspace{2mm} k \\in [N].\n\\end{equation}\nNote that the samples $y_j = g(x_j)$ may be expressed concisely in terms of the coefficient \nvector $c \\in \\mathbb{R}^{N}$ according to\n$$\ny =\n\\Phi c.\n$$\nReconstructing $c$ from the vector $y$ amounts to solving this system of linear\nequations. As we are interested in the underdetermined case $m < N$, this system\ntypically has infinitely many solutions, and our task is to single out the original sparse $c$. The obvious but naive approach for doing this is by solving for the sparsest solution that agrees with the measurements, \n\\begin{equation}\\label{l0:prog}\n\\min_{ z \\in {\\mathbb{R}}^{N}} \\|z\\|_0 \\quad \\mbox{subject to}\\quad \\Phi z = y.\n\\end{equation}\nUnfortunately, this problem is NP-hard in general \\cite{avdama97,aw10}. To overcome this computational\nbottleneck the compressive sensing\nliterature has suggested various tractable alternatives \\cite{gitr07,carota06,netr08}, \nmost notably $\\ell_1$-minimization (basis pursuit) \\cite{chdosa99,carota06,do06-2}, \non which we focus in this paper. 
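For illustration, the sampling model and the Legendre matrix $\\Phi$ in \\eqref{legendre_matrix} can be set up in a few lines. The sketch below is not part of the original text; it assumes NumPy and uses the fact, noted above, that the cosine of a uniform random variable on $[0,\\pi]$ follows the Chebyshev measure, together with the normalization $L_n = \\sqrt{2n+1}\\, P_n$ of the standard Legendre polynomials $P_n$, which is equivalent to the orthonormalization stated above.
\\begin{verbatim}
# Illustrative only: draw m Chebyshev-distributed sampling points and form the
# m x N Legendre matrix Phi_{j,k} = L_{k-1}(x_j), with L_n = sqrt(2n+1) P_n.
import numpy as np
from numpy.polynomial import legendre as leg

def chebyshev_samples(m, rng):
    # cos of a uniform variable on [0, pi] has the Chebyshev (arcsine) law
    return np.cos(rng.uniform(0.0, np.pi, size=m))

def legendre_matrix(x, N):
    V = leg.legvander(x, N - 1)               # columns P_0(x_j), ..., P_{N-1}(x_j)
    return V * np.sqrt(2 * np.arange(N) + 1)  # rescale to the orthonormal L_n

rng = np.random.default_rng(0)
x = chebyshev_samples(20, rng)
Phi = legendre_matrix(x, 80)                  # then y = Phi @ c gives the samples g(x_j)
\\end{verbatim}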
Nevertheless, it follows from our findings that \ngreedy algorithms such as CoSaMP \\cite{netr08} or Iterative Hard Thresholding \\cite{blda09} may also be\nused for reconstruction.\n\n\\medskip\n\nOur main result is that any Legendre $s$-sparse polynomial may be recovered efficiently from a number of samples $m \\asymp s \\log^3(s)\\log(N)$. Note that at least up to the logarithmic factors, this rate is optimal. Also the \ncondition on $m$ is implied by the simpler one $m \\asymp s \\log^4{N}$\nReconstruction is also robust: \\emph{any} polynomial may be recovered efficiently to within a factor of its best approximation by a Legendre $s$-sparse polynomial, and, if the measurements are corrupted by noise, $g(x_1) + \\eta_1, \\hdots, g(x_m) + \\eta_m$, to within an additional factor of the noise level $\\varepsilon = \\| \\eta \\|_{\\infty}$. We have\n\n\\begin{theorem}\n\\label{uniform:noise}\nLet $N,m,s \\in {\\mathbb{N}}$ be given such that\n$$\nm \\geq C s \\log^3(s) \\log(N).\n$$\nSuppose that $m$ sampling points $(x_1, \\hdots, x_m)$ are drawn independently at random from the Chebyshev \nmeasure, and consider the $m \\times N$ Legendre matrix $\\Phi$ with entries $\\Phi_{j,k} = L_{k-1}(x_j)$, and the $m \\times m$ diagonal matrix ${\\cal A}$ with entries $a_{j,j} = (\\pi\/2)^{1\/2}(1-x_j^2)^{1\/4}$. Then with probability exceeding\n$1-N^{-\\gamma \\log^3(s)}$ the following \nholds for all polynomials $g(x) = \\sum_{k=0}^{N-1} c_k L_k(x)$.\nSuppose that noisy sample values $y = \\big( g(x_1) + \\eta_1, \\hdots, g(x_m) + \\eta_m \\big) = \\Phi c + \\eta$ are observed, and $\\|{\\cal A}\\eta\\|_{\\infty} \\leq \\varepsilon$. Then the coefficient vector $c = (c_0, c_1, \\hdots, c_{N-1})$ is recoverable to within a factor of its best $s$-term approximation error \nand to a factor of the noise level by solving the inequality-constrained $\\ell_1$-minimization problem\n\\begin{align}\\label{relaxed}\nc^{\\#} = \\arg \\min_{z \\in {\\mathbb{R}}^N} \\| z \\|_1 \\quad \\mbox{ subject to } \\quad \\| {\\cal A} \\Phi z - {\\cal A} y \\|_2 \\leq \\sqrt{m}\\varepsilon.\n\\end{align}\nPrecisely, \n\\begin{equation}\\label{l1:approx2}\n\\| c -c^{\\#} \\|_{2} \\leq \\frac{C_1 \\sigma_s(c)_1}{\\sqrt{s}} + C_2\\varepsilon,\n\n\\end{equation}\n\nand\n\\begin{equation}\\label{l1:approx}\n\\|c-c^{\\#}\\|_{1} \\leq D_1 \\sigma_s(c)_1 + D_2 \\sqrt{s} \\varepsilon.\n\\end{equation}\nThe constants $C, C_1,C_2,D_1,D_2$, and $\\gamma$ are universal.\n\\end{theorem}\n\n\\begin{remark}\\label{rem22}\n\\begin{itemize}\n\\item[(a)] \\emph{In the noiseless ($\\varepsilon = 0$) and exactly $s$-sparse case ($\\sigma_s(x)_1 = 0$), the above theorem\nimplies exact recovery via} \n\\[\nc^{\\#} = \\arg \\min_{z \\in {\\mathbb{R}}^N} \\| z \\|_1 \\quad \\mbox{ \\emph{subject to} } \\quad \\Phi z = y.\n\\]\n\\item[(b)] \\emph{The condition $\\|{\\cal A}\\eta\\|_{\\infty} \\leq \\varepsilon$ is satisfied in particular if $\\|\\eta\\|_\\infty \\leq \\varepsilon$.}\n\n\\item[(c)]\n\\emph{The proposed recovery method \\eqref{relaxed} is \\emph{noise-aware}, in that it requires knowledge of the noise level $\\varepsilon$ a priori.\nOne may remove this drawback by using other reconstruction algorithms such as \nCoSaMP \\cite{netr08} or Iterative Hard Thresholding \\cite{blda09} which also achieve the reconstruction rates \\eqref{l1:approx2} and \\eqref{l1:approx} under the stated hypotheses, but do not require knowledge of $\\varepsilon$ \\cite{blda09,netr08}. 
Actually, those algorithms always return $2s$-sparse vectors as approximations, in which case the $\\ell_1$-stability result \\eqref{l1:approx} follows immediately from \\eqref{l1:approx2}, see \\cite[p.\\ 87]{b09} for details. }\n\n\n\n\\end{itemize}\n\\end{remark}\n\n\\section{Numerical Experiments}\n\nLet us illustrate the results of Theorem \\ref{uniform:noise}. In Figure $1(a)$ we plot a polynomial $g$ that is $5$-sparse in Legendre basis and with maximal degree $N = 80$ along with $m = 20$ sampling points drawn independently from the Chebyshev measure. This polynomial is reconstructed exactly from the illustrated sampling points as the solution to the $\\ell_1$-minimization problem \\eqref{relaxed} with $\\varepsilon = 0$. In Figure $1(b)$ we plot the same \nLegendre-sparse polynomial in solid line, but the $20$ samples \nhave now been corrupted by zero-mean Gaussian noise $y_j = g(x_j) + \\eta_j$. Specifically, we take ${\\mathbb{E}} \\hspace{1mm} (|\\eta_j|^2 ) = 0.025$, so that the expected noise level $\\varepsilon \\approx 0.16$. In the same figure, we superimpose \nin dashed line the polynomial obtained from these noisy measurements as the solution \nof the inequality-constrained $\\ell_1$-minimization problem \\eqref{relaxed} with noise level \n$\\varepsilon = 0.16$. \n\n\\begin{figure}[h]\n\\label{fig:illustrate}\n\\subfigure{\n\\includegraphics[width=0.45\\textwidth]{P_exact_80_20_5}\n}\\hfill\n\\subfigure{\n\\includegraphics[width=0.45\\textwidth]{Pnoisy80_20_5_recon}\n}\n\\caption{(a) A Legendre-$5$-sparse polynomial of maximal degree $N = 80$, and its exact reconstruction from $20$ samples drawn independently from the Chebyshev distribution. (b) The same polynomial (solid line), and its approximate reconstruction from $20$ samples corrupted with noise (dashed line).}\n\\end{figure}\n\n\nTo be more complete, we plot a phase diagram illustrating, for $N = 300$, and several values of $s\/m$ and $m\/N$ between $0$ and $.7$, the success rate of $\\ell_1$-minimization in exactly recovering Legendre $s$-sparse polynomials $g(x) = \\sum_{k=0}^{N-1} c_k L_k(x)$. \nThe results, illustrated in Figure $2$, show a sharp transition between uniform\nrecovery (in black) and no recovery whatsoever (white). This transition curve is similar to the phase transition curves obtained for other compressive sensing matrix ensembles, e.g.\\ the random partial discrete Fourier matrix or the Gaussian ensemble. For more details, we refer the reader to \\cite{dt}.\n\n\n\\begin{SCfigure}\n\\centering\n{\\label{legendre:plot}\n \\psfrag{ylabel}{$\\frac{s}{m}$}\n \\psfrag{xlabel}{$\\frac{m}{N}$}}\n\\includegraphics[width=7cm, height = 6cm]{phase}\n\\caption{Phase diagram illustrating the transition between uniform recovery (black) and no recovery whatsoever (white) of Legendre-sparse polynomials of sparsity level $s$ and using $m$ measurements, as $s$ and $m$ vary over the range $s \\leq m \\leq N = 300$. In particular, for each pair $(s\/m, m\/N)$, we record the rate of success out of $50$ trials of $\\ell_1$-minimization in recovering $s$-sparse coefficient vectors with random support over $[N]$ and with i.i.d. standard Gaussian coefficients from $m$ measurements distributed according to the Chebyshev measure. }\n\\centering\n\\end{SCfigure}\n\n\n\n\n\\section{Sparse recovery via restricted isometry constants}\n We prove Theorem \\ref{uniform:noise} by showing that the preconditioned Legendre matrix ${\\cal A}\\Phi$ satisfies the \\emph{restricted isometry property} (RIP) \\cite{cata06,carota06-1}. 
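Before turning to the proof machinery, we note that the noiseless experiment of Figure $1(a)$ is easy to reproduce. The following sketch is purely illustrative (it is not the code used to generate the figures and assumes NumPy and the cvxpy package); in the noiseless case the preconditioning by ${\\cal A}$ leaves the constraint set unchanged, cf.\\ Remark \\ref{rem22}(a).
\\begin{verbatim}
# Illustrative only: recover a Legendre-5-sparse coefficient vector (N = 80)
# from m = 20 Chebyshev-distributed samples by basis pursuit.
import numpy as np
import cvxpy as cp
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
N, m, s = 80, 20, 5

c = np.zeros(N)                                    # sparse Legendre coefficients
c[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)

x = np.cos(rng.uniform(0.0, np.pi, m))             # Chebyshev sampling points
Phi = leg.legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)
y = Phi @ c                                        # noiseless samples g(x_j)

z = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(z)), [Phi @ z == y]).solve()
print(np.linalg.norm(z.value - c))                 # ~0: exact recovery w.h.p.
\\end{verbatim}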
To begin, let us recall the notion of restricted isometry constants for a matrix $\\Psi$.\n\n\\begin{definition}[Restricted isometry constants]\nLet $\\Psi \\in {\\mathbb{C}}^{m \\times N}$. For $s \\leq N$, the restricted isometry constant $\\delta_s$ \nassociated to $\\Psi$ is the smallest number $\\delta$ for which\n\\begin{equation}\\label{def:RIP}\n(1-\\delta) \\|c\\|_2^2 \\leq \\|\\Psi c\\|_2^2 \\leq (1+\\delta) \\|c\\|_2^2\n\\end{equation}\nfor all $s$-sparse vectors $c \\in {\\mathbb{C}}^N$.\n\\end{definition}\n\nInformally, the matrix $\\Psi$ is said to have the restricted isometry property if\n$\\delta_s$ is small for $s$ reasonably large compared to $m$.\nFor matrices satisfying the restricted isometry property, the following $\\ell_1$-recovery results can be shown\n\\cite{carota06-1,ca08,fola09,fo09}.\n\n\\begin{theorem}[Sparse recovery for RIP-matrices]\n\\label{thm:l1:stable} \nLet $\\Psi \\in {\\mathbb{C}}^{m \\times N}$. Assume\nthat its restricted isometry constant $\\delta_{2s}$ \nsatisfies\n\\begin{equation}\n\\label{RIP:const}\n\\delta_{2s} < 3\/(4 + \\sqrt{6}) \\approx 0.4652.\n\\end{equation}\nLet $x \\in {\\mathbb{C}}^N$ and assume noisy measurements $y = \\Psi x + \\eta$ are given with $\\|\\eta\\|_2 \\leq \\varepsilon$. Let $x^\\#$ be the minimizer of \n\\begin{align}\\label{l1eps:prog}\n\\arg \\min_{z \\in {\\mathbb{C}}^N} \\quad \\| z \\|_1 \\mbox{ subject to } \\quad \\|\\Psi z - y \\|_2 \\leq \\varepsilon.\n\\end{align}\nThen\n\\begin{align}\n\\label{l2noise}\n\\|x - x^\\#\\|_2 \\leq C_1 \\frac{\\sigma_s(x)_1}{\\sqrt{s}} + C_2 \\varepsilon,\n\\end{align}\nand\n\\begin{align}\n\\label{l1noise}\n\\|x - x^\\#\\|_1 \\leq D_1 \\sigma_s(x)_1 + D_2 \\sqrt{s} \\varepsilon.\n\\end{align}\nThe constants $C_1, D_1, C_2, D_2 >0$ depend only on $\\delta_{2s}$.\nIn particular, if $x$ is $s$-sparse then reconstruction is exact, $x^\\# = x$.\n\\end{theorem}\nThe constant in \\eqref{RIP:const} is\nthe result of several refinements.\nCand{\\`e}s provided the value\n$\\sqrt{2}-1$ in \\cite{ca08}, Foucart and Lai the value $0.45$ in \\cite{fola09}, while\nthe version in \\eqref{RIP:const} was shown in \\cite{fo09}. \nThe proof of \\eqref{l2noise} can be found in \\cite{ca08}. The $\\ell_1$-error bound \\eqref{l1noise} is straightforward from these calculations, but does not seem to appear explicitly in the literature. \n\n\\medskip\n\nSo far, all good constructions of matrices with the restricted isometry property use randomness. \nThe RIP constant for a matrix whose entries are (properly normalized) independent and identically distributed Gaussian or Bernoulli random variables satisfies $\\delta_{s} \\leq \\delta$ with probability\nat least $1- e^{-c_1(\\delta) m}$ provided\n\\begin{equation}\nm \\geq c_2(\\delta) s \\log(N\/s); \n\\label{s}\n\\end{equation}\nsee for example \\cite{badadewa08,cata06,rascva08,ra09-1}. To be more precise, it can be shown that \n$c_1(\\delta) = C_1 \\delta^{2}$ and $c_2(\\delta) = C_2 \\delta^{-2}$. \nLower bounds for Gelfand widths of $\\ell_1$-balls show that the bound \\eqref{s} is optimal \\cite{do06-2,codade09,foparaul10}. 
\n\n If one allows for slightly more measurements than the optimal number \\eqref{s}, the restricted isometry property also holds for a rich class of \\emph{structured} random matrices; the structure of these matrices allows for fast matrix-vector multiplication, which accelerates the speed of reconstruction procedures such as $\\ell_1$ minimization.\nA quite general class of structured random matrices are those associated to \\emph{bounded orthonormal systems}. This concept is introduced in \\cite{ra09-1}, although it is already contained somewhat implicitly\nin \\cite{cata06,ruve08} for discrete systems. Let ${\\cal{D}}$ be a measurable space -- for instance, a measurable subset of\n${\\mathbb{R}}^d$ -- endowed with a probability measure $\\nu$. Further, let $\\{ \\psi_j$, $j \\in [N] \\}$, \nbe an orthonormal system of (real or complex-valued) functions on ${\\cal D}$, i.e.,\n\\begin{equation}\n\\int_{\\cal D} \\psi_j(x) \\overline{\\psi_k(x)} d\\nu(x) = \\delta_{j,k}, \\quad k,j \\in [N].\n\\end{equation}\nIf this orthonormal system is uniformly bounded,\n\\begin{equation}\n\\label{Linf_bound}\n\\sup_{j \\in [N]} \\|\\psi_j\\|_\\infty = \\sup_{j \\in [N]} \\sup_{x \\in {\\cal D}} |\\psi_j(x)| \\leq K\n\\end{equation}\nfor some constant $K\\geq 1$, we call systems $\\{\\psi_j\\}$ satisfying\nthis condition \\emph{bounded orthonormal systems}.\n\n\n\\begin{theorem}[RIP for bounded orthonormal systems]\n\\label{thm:BOS:RIP} \nConsider \nthe matrix $\\Psi \\in {\\mathbb{C}}^{m \\times N}$ with entries\n\\begin{equation}\\label{def:Phi:matrix}\n\\Psi_{\\ell,k} = \\psi_k(x_\\ell), \\quad \\ell \\in [m], k \\in [N],\n\\end{equation}\nformed by i.i.d.\\ samples $x_\\ell$ drawn from the orthogonalization measure $\\nu$\nassociated to the bounded\northonormal system $\\{ \\psi_j$, $j \\in [N] \\}$ having uniform bound $K\\geq 1$ in \\eqref{Linf_bound}. \nIf \n\\begin{equation}\\label{BOS:RIP:cond}\nm \\geq C\\delta^{-2} K^2 s \\log^3(s) \\log(N),\n\\end{equation}\nthen with probability at least \n$1-N^{-\\gamma \\log^3(s)},$\nthe restricted isometry constant $\\delta_s$ of $\\frac{1}{\\sqrt{m}} \\Psi$ satisfies $\\delta_s \\leq \\delta$. The constants $C,\\gamma>0$ are universal.\n\\end{theorem}\n\nWe note that condition \\eqref{BOS:RIP:cond} is stated slightly different in \\cite{ra09-1}, namely as\n\\[\n\\frac{m}{\\log(m)} \\geq C\\delta^{-2} K^2 s \\log^2(s) \\log(N).\n\\]\nHowever, it is easily seen that \\eqref{BOS:RIP:cond} implies this condition (after possibly adjusting constants).\nNote also that \\eqref{BOS:RIP:cond} is implied by the simpler condition\n\\[\nm \\geq C K^2 \\delta^{-2} s \\log^4(N).\n\\]\n\n\\medskip\n\nAn important special case of a bounded orthonormal system is the random partial Fourier matrix, which is formed by choosing a random subset of $m$ rows from the $N \\times N$ discrete Fourier matrix. The continuous analog of this system is the matrix associated to the trigonometric polynomial basis \n$\\{ x \\mapsto e^{2\\pi i n x } , \\quad n = 0, \\hdots, N-1 \\}$ evaluated at $m$ sample points chosen \nindependently from the uniform measure on $[0,1]$. Note that the trigonometric system has \ncorresponding optimal uniform bound $K=1$. Another example is the matrix associated to the \nChebyshev polynomial system evaluated at sample points chosen independently \nfrom the corresponding orthogonalization measure, the Chebyshev measure. In this case, $K = \\sqrt{2}$. 
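For concreteness, the system in this last example can be written down explicitly: with $T_n(x) = \\cos(n \\arccos x)$ denoting the Chebyshev polynomials, the functions
$$
\\psi_0(x) = 1, \\qquad \\psi_n(x) = \\sqrt{2}\\, T_n(x) = \\sqrt{2} \\cos(n \\arccos x), \\quad n \\geq 1,
$$
are orthonormal with respect to the Chebyshev measure $d\\nu(x) = \\pi^{-1} (1-x^2)^{-1\/2} dx$ and satisfy $\\sup_{x \\in [-1,1]} |\\psi_n(x)| = \\sqrt{2}$ for all $n \\geq 1$, which yields the stated bound $K = \\sqrt{2}$.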
\n\n\\section{Proof of Theorem \\ref{uniform:noise}}\n\nAs a first approach towards recovering Legendre-sparse polynomials from random samples, one may try to apply Theorem \\ref{thm:BOS:RIP} directly, selecting the sampling points $\\{ x_j, j \\in [m] \\}$, independently from the normalized Lebesgue measure on $[-1,1]$, the orthogonalization measure for the Legendre polynomials. However, as shown in \\cite{szego}, the $L^{\\infty}$-norms of the Legendre polynomials grow according to $\\| L_n \\|_{\\infty} = | L_n (1) | = |L_n(-1) | = (2n + 1)^{1\/2}$. \nApplying $K = \\| L_{N-1} \\|_{\\infty} = (2N-1)^{1\/2}$ in Theorem \\ref{thm:BOS:RIP} produces a required number of samples\n\\[\nm \\asymp N \\delta^{-2} s \\log^3(s) \\log(N). \n\\]\nOf course, this bound is completely useless, because the required number of samples\nis now larger than $N$ -- an almost trivial estimate. Therefore, in order to deduce\nsparse recovery results for the Legendre polynomials, we must take a different approach.\n\nDespite growing unboundedly with increasing degree at the endpoints $+1$ and $-1$, an \nimportant characteristic of the Legendre polynomials is that they are \nall bounded by the same envelope function.\nThe following result \\cite[Theorem 7.3.3]{szego}, gives a precise estimate \nfor this bound.\n \n\\begin{lemma}\n\\label{thm:growth}\nFor all $n \\geq 1$ and for all $x \\in [-1,1]$,\n\\[\n\\label{uniform_bound}\n(1 - x^2)^{1\/4} | \\thinspace L_n(x) \\thinspace | < 2\\pi^{-1\/2}\\Big( 1 + \\frac{1}{2n} \\Big)^{1\/2}, \\hspace{5mm} -1 \\leq x \\leq 1;\n\\]\nhere, the constant $2 \\pi^{-1\/2}$ cannot be replaced by a smaller one. \n\\end{lemma}\n\n\\begin{proof}[Proof of Theorem \\ref{uniform:noise}]\n\nIn light of Lemma \\ref{thm:growth}, we apply a preconditioning\ntechnique to transform the Legendre polynomial system into a bounded orthonormal system. Consider the functions \n\\begin{equation}\\label{reweight}\nQ_n(x) = (\\pi\/2)^{1\/2} (1 - x^2)^{1\/4} L_n(x).\n\\end{equation}\nThe matrix $\\Psi$ with entries $\\Psi_{j,n}=Q_{n-1}(x_j)$ may be written as $\\Psi = {\\cal A} \\Phi$ where ${\\cal A}$ is the diagonal matrix with entries $a_{j,j} = (\\pi\/2)^{1\/2} (1 - x_j^2)^{1\/4}$ as in Theorem \\ref{thm:growth}, and $\\Phi \\in {\\mathbb{R}}^{m \\times N}$ is the Legendre matrix with entries $\\Phi_{j,n} = L_{n-1}(x_j)$. By Lemma \\ref{thm:growth}, the system $\\{ Q_n \\}$ is uniformly bounded on $[-1,1]$ and satisfies the bound $\\| Q_n \\|_{\\infty} \\leq \\sqrt{2 + \\frac{1}{n}} \\leq \\sqrt{3}$. Due to the orthonormality of the Legendre system with respect to the normalized Lebesgue measure on $[-1,1]$, the $Q_n$ are orthonormal with respect to the Chebyshev probability measure $d \\nu(x) = \\pi^{-1}(1 - x^2)^{-1\/2} dx$ on $[-1,1]$:\n\\begin{eqnarray}\n\\label{ortho}\n\\int_{-1}^1 \\pi^{-1}Q_n(x) Q_{k}(x) (1-x^2)^{-1\/2} dx &=& \\frac{1}{2}\\int_{-1}^1 L_n(x) L_{k}(x) dx = \\delta_{n,k}. \\nonumber\n\\end{eqnarray}\n\nTherefore, the $\\{ Q_n \\}$ form a bounded orthonormal system in the sense of Theorem \\ref{thm:BOS:RIP} with uniform bound $K = \\sqrt{3}$. By Theorem \\ref{thm:BOS:RIP}, the renormalized matrix $\\frac{1}{\\sqrt{m}} \\Psi$ has the restricted isometry property with constant $\\delta_s \\leq \\delta$ with high probability once $m \\geq C\\delta^{-2}s \\log^4(N)$. We then apply Theorem \\ref{thm:l1:stable} to the noisy samples $\\frac{1}{\\sqrt{m}} {\\cal A}y$ where $y = \\big( g(x_1) + \\eta_1, ... 
, g(x_m) + \\eta_m \\big)$ and observe that $\\| {\\cal A} \\eta \\|_{\\infty} \\leq \\varepsilon$ implies $\\frac{1}{\\sqrt{m}} \\| {\\cal A} \\eta \\|_2 \\leq \\varepsilon$. This gives Theorem \\ref{uniform:noise}.\n\n\\end{proof}\n\n\n\n\n\\section{Universality of the Chebyshev measure}\nThe Legendre polynomials are orthonormal with respect to the uniform measure on $[-1,1]$; we may instead consider an arbitrary weight function $v$ on $[-1,1]$, and the polynomials $\\{p_n\\}$ that are orthonormal with respect to $v$. \nSubject to a mild continuity condition on $v$, a result similar to Lemma \\ref{thm:growth} concerning the uniform growth of ${p_n}$ still holds, and the sparse recovery results of Theorem \\ref{uniform:noise} extend to this more general scenario. In all cases, the sampling points are chosen according to the Chebyshev measure. \n\nLet us recall the following general bound, see e.g.\\ Theorem 12.1.4 in Szeg{\\\"o} \\cite{szego}. \n\n\\begin{theorem} \n\\label{Lip:Dini}\nLet $v$ be a weight function on $[-1,1]$ and set\n$f_v(\\theta) = v(\\cos \\theta)|\\sin(\\theta)|$. Suppose that\n$f_v$ satisfies the Lipschitz-Dini condition, that is,\n\\begin{equation}\\label{LipschitzDini}\n|f_v(\\theta+\\delta) - f_v(\\theta)| \\leq L |\\log(1\/\\delta)|^{-1-\\lambda}, \\quad \\mbox{ for all } \\theta \\in [0,2\\pi), \\delta > 0,\n\\end{equation}\nfor some constants $L,\\lambda > 0$. \n Let $\\{p_n, n \\in {\\mathbb{N}}_0\\}$, be the associated orthonormal polynomial system. Then\n\\begin{equation}\\label{weight:estimate}\n(1-x^2)^{1\/4} v(x)^{1\/2} |p_n(x)| \\leq C_v \\quad \\mbox{ for all } n\\in {\\mathbb{N}}, x \\in [-1,1].\n\\end{equation}\nThe constant $C_v$ depends only on the weight function $v$.\n\\end{theorem}\nThe Lipschitz-Dini condition \\eqref{LipschitzDini} is satisfied for a range\nof Jacobi polynomials $p_n = p_n^{(\\alpha,\\beta)}$, $n \\geq 0,$ $\\alpha,\\beta \\geq -1\/2$, \nwhich are orthogonal\nwith respect to the weight function $v(x) = (1-x)^\\alpha(1+x)^\\beta$. \nThe Legendre polynomials are a special case of the Jacobi polynomials corresponding to $\\alpha = \\beta = 0$; more generally, the case $\\alpha = \\beta$ correspond to the ultraspherical polynomials. The Chebyshev polynomials are another important special case of ultraspherical polynomials, corresponding to parameters $\\alpha = \\beta = -1\/2$, and Chebyshev measure. \n\n\nFor any orthonormal polynomial system satisfying a bound of the form \\eqref{weight:estimate}, the following RIP-estimate applies.\n\\begin{theorem}\n\\label{RIP:precondition}\nConsider a positive weight function $v$ on $[-1,1]$ satisfying the conditions of Theorem \\ref{Lip:Dini}, and consider the orthonormal polynomial system $\\{ p_n \\}$ with respect to the probability measure $d\\nu(x) = c\\,v(x) dx$ on $[-1,1]$ where \n$c^{-1} = \\int_{-1}^1 v(x)dx$. \n\nSuppose that $m$ sampling points $(x_1, \\hdots, x_m)$ are drawn independently at random from the Chebyshev \nmeasure, and consider the $m \\times N$ composite matrix $\\Psi = {\\cal A} \\Phi$, where $\\Phi$ is the matrix with entries $\\Phi_{j,n} = p_{n-1}(x_j)$, and ${\\cal A}$ is the diagonal matrix with entries $a_{j,j} = (c\\pi)^{1\/2} (1-x_j^2)^{1\/4} v(x_j)^{1\/2}$. Assume that \n\\begin{equation}\\label{RIP:prec:cond}\nm \\geq C \\delta^{-2} s \\log^3(s) \\log(N). 
\n\\end{equation}\nThen with probability at least $1-N^{-\\gamma \\log^3(s)}$ the restricted isometry\nconstant of the composite matrix $\\frac{1}{\\sqrt{m}} \\Psi = \\frac{1}{\\sqrt{m}} {\\cal A} \\Phi$ satisfies $\\delta_s \\leq \\delta$. The constant $C$ depends only on $v$, and the constant $\\gamma > 0$ is universal.\n\\end{theorem}\n\n\\begin{proof}[Proof of Theorem \\ref{RIP:precondition}]\nObserve that $\\Psi_{j,n} = Q_{n-1}(x_j)$, where \n$$Q_n(x) =(c \\pi)^{1\/2} (1-x^2)^{1\/4} v(x)^{1\/2} p_n(x).$$\nBy Theorem \\ref{Lip:Dini}, the system $\\{ Q_n \\}$ is uniformly bounded on $[-1,1]$ and satisfies the bound $\\| Q_n \\|_{\\infty} \\leq (c\\pi)^{1\/2} C_v$; moreover, due to the orthonormality of the polynomials $\\{ p_n \\}$ with respect to the measure $d\\nu(x)$, the $\\{Q_n\\}$ are orthonormal with respect to the Chebyshev measure:\n\\begin{eqnarray}\n\\label{ortho_nu}\n\\int_{-1}^1 \\pi^{-1}Q_n(x) Q_{k}(x) (1-x^2)^{-1\/2} dx &=& \\int_{-1}^1 c p_n(x) p_{k}(x) v(x)dx = \\delta_{n,k}. \n\\end{eqnarray}\nTherefore, the $\\{ Q_n \\}$ form a bounded orthonormal system with associated matrix $\\Psi$ as in Theorem \\ref{RIP:precondition}, formed from samples $\\{ x_j \\}$ drawn from the Chebyshev distribution. Theorem \\ref{thm:BOS:RIP} implies that the renormalized composite matrix $\\frac{1}{\\sqrt{m}}\\Psi$ has the restricted isometry property as stated.\n\\end{proof}\n\n\\begin{corollary}\n\\label{poly:noise}\nConsider an orthonormal polynomial system $\\{ p_n \\}$ associated to a weight function $v$ satisfying the conditions of Theorem \\ref{Lip:Dini}.\nLet $N,m,s \\in {\\mathbb{N}}$ satisfy the conditions of Theorem \\ref{RIP:precondition}, and \nconsider the matrix $\\Psi = {\\cal{A}} \\Phi$ as defined there. \n\nThen with probability exceeding\n$1-N^{-\\gamma \\log^3(s)}$ the following \nholds for all polynomials $g(x) = \\sum_{k=0}^{N-1} c_k p_k(x)$.\nIf noisy sample values $y = \\big( g(x_1) + \\eta_1, \\hdots, g(x_m) + \\eta_m \\big) = \\Phi c + \\eta$ are observed, and $\\|\\eta\\|_{\\infty} \\leq \\varepsilon$, then the coefficient vector $c = (c_0, c_1, \\hdots, c_{N-1})$ is recoverable to within a factor of its best $s$-term approximation error \nand to a factor of the noise level by solving the inequality-constrained $\\ell_1$-minimization problem\n\\begin{align}\\label{relaxed2}\nc^{\\#} = \\arg \\min_{z \\in {\\mathbb{R}}^N} \\| z \\|_1 \\quad \\mbox{ subject to } \\quad \\| {\\cal A} \\Phi z - {\\cal A} y \\|_2 \\leq \\sqrt{m}\\varepsilon.\n\\end{align}\nPrecisely, \n$$\n\\| c -c^{\\#} \\|_{2} \\leq \\frac{C_1 \\sigma_s(c)_1}{\\sqrt{s}} + D_1\\varepsilon,\n$$\nand\n\\begin{equation}\\label{l1:approx3}\n\\|c-c^{\\#}\\|_{1} \\leq C_2 \\sigma_s(c)_1 + D_2 \\sqrt{s} \\varepsilon.\n\\end{equation}\nThe constants $C_1,C_2,D_1,D_2$, and $\\gamma$ are universal.\n\\end{corollary}\n\n\n\nAs a byproduct of Theorem \\ref{RIP:precondition}, we also obtain condition number estimates\nfor preconditioned orthogonal polynomial matrices that should be of interest in their own right, and improve on the results in \\cite{grpora07}. Theorem \\ref{RIP:precondition} implies that all submatrices of a preconditioned random orthogonal polynomial matrix \n$\\frac{1}{\\sqrt{m}}\\Psi = \\frac{1}{\\sqrt{m}} {\\cal A} \\Phi \\in {\\mathbb{R}}^{m \\times N}$ with at most $s$ columns are simultaneously well-conditioned, provided \\eqref{RIP:prec:cond} holds. 
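As a quick numerical illustration of this well-conditioning (again a sketch not taken from the original text, assuming NumPy and specializing to the Legendre case of Theorem \\ref{uniform:noise}), one can check that a random $s$-column submatrix of $\\frac{1}{\\sqrt{m}} {\\cal A} \\Phi$ has all singular values close to one:
\\begin{verbatim}
# Illustrative only: extreme singular values of a random s-column submatrix of
# the renormalized preconditioned Legendre matrix (1/sqrt(m)) A Phi.
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(1)
N, m, s = 200, 500, 10
x = np.cos(rng.uniform(0.0, np.pi, m))                         # Chebyshev samples
Phi = leg.legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)  # orthonormal Legendre
w = np.sqrt(np.pi / 2) * (1 - x**2) ** 0.25                    # preconditioner diag(A)
Psi = (w[:, None] * Phi) / np.sqrt(m)

cols = rng.choice(N, size=s, replace=False)
sv = np.linalg.svd(Psi[:, cols], compute_uv=False)
print(sv.min(), sv.max())                                      # both close to 1
\\end{verbatim}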
If one is only interested in a particular subset of $s$ columns, i.e., a particular subset of $s$ orthogonal polynomials, the number of measurements in \\eqref{RIP:prec:cond} can be reduced to \n\\begin{equation}\\label{m:est}\nm \\geq C s \\log(s);\n\\end{equation}\nsee Theorem 7.3 in \\cite{ra09-1} for more details. \n\n\\medskip\n\n\\noindent\n{\\bf Stability with respect to the sampling measure.}\n\nThe requirement that sampling points $x_j$ are drawn from the Chebyshev measure in the previous theorems can be relaxed somewhat. In particular, suppose that the sampling points $x_j$ are drawn not from the Chebyshev measure, but from a more general probability measure $d\\nu(x) = \\rho(x) dx$ on $[-1,1]$ with \n$\\rho(x) \\geq c' (1-x^2)^{-1\/2}$ (and $\\int_{-1}^1 \\rho(x) dx = 1$). Now assume a weight function $v$ satisfying the Lipschitz-Dini condition \\eqref{LipschitzDini} and the associated orthonormal polynomials $p_n(x)$ are given. Then, by Theorem \\ref{Lip:Dini} the functions\n\\begin{equation}\\label{Qn:def}\nQ_n(x) =(c \\pi)^{1\/2} \\rho(x)^{-1\/2} v(x)^{1\/2} p_n(x)\n\\end{equation}\nform a bounded orthonormal system with respect to the probability measure $\\tilde{c} \\rho(x) v(x) dx$. Therefore, all previous\narguments are again applicable. \nWe note, however, that taking $\\rho(x) dx$ to be the Chebyshev measure produces the smallest constant $K$\nin the boundedness condition \\eqref{Linf_bound} due to normalization reasons.\n\n\n\\section{Recovery in infinite-dimensional function spaces}\n\nWe can transform the previous results into approximation results on the level of continuous functions. For simplicity, we restrict the scope of this section to the Legendre basis, although all of our results extend to any orthonormal polynomial system with a Lipschitz-Dini weight function, as well as to the trigonometric system, for which related results have not been\nworked out yet, either.\n\nWe introduce the following weighted norm on continuous functions in $[-1,1]$:\n\\[\n\\|f\\|_{\\infty,w} := \\sup_{x \\in [-1,1]} |f(x)| w(x), \\quad w(x) = \\sqrt{\\frac{\\pi}{2}} (1-x^2)^{1\/4}.\n\\]\nFurther, we define\n\\begin{equation}\n\\label{weight:norm}\n\\sigma_{N,s}(f)_{\\infty, w} := \\inf_{c \\in {\\mathbb{R}}^N} \\left\\{ \\sigma_s(c)_{1} + \\sqrt{s} \\|f- \\sum_{k=0}^{N-1} c_k L_k \\|_{\\infty, w}\\right\\}.\n\\end{equation}\nThe above quantity involves the best $s$-term approximation error of $c$, as well as the ability of Legendre coefficients \n$c \\in {\\mathbb{R}}^{N}$ to approximate the given function $f$ in the $L_\\infty$-norm. In some sense, it provides a mixed linear and\nnonlinear approximation error. The $c$ which ``balances'' both error terms determines $\\sigma_{N,s}(f)_\\infty$.\nThe factor $\\sqrt{s}$ scaling the ``linear approximation part'' may seem to lead to non-optimal estimates at first sight, but \nlater on, the strategy will actually be to choose $N$ in dependence of $s$ such that $\\sigma_{N,s}(f)_\\infty$ becomes\nof the same order as $\\sigma_s(c)_1$. In any case, we \nnote the (suboptimal) estimate\n\\[\n\\sigma_{N,s}(f)_{\\infty, w} \\leq \\sqrt{s}\\, \\rho_{N,s}(f)_{\\infty,w},\n\\]\nwhere\n\\[\n\\rho_{N,s}(f)_{\\infty,w} := \\inf_{c \\in {\\mathbb{R}}^N, \\|c\\|_0 \\leq s} \\|f- \\sum_{k=0}^{N-1} c_k L_k \\|_{\\infty, w}.\n\\]\n\nOur aim is to obtain\na good approximation to a continuous function $f$ from $m$ sample values, and to compare the approximation\nerror with $\\sigma_{N,s}(f)_{\\infty,w}$. 
We have\n\n\n\n\\begin{proposition}\\label{thm:function:approx} Let $N,m,s$ be given with\n\\[\nm \\geq C s \\log^3(s) \\log(N).\n\\]\nThen there exist sampling points $x_1,\\hdots,x_m$ (i.e., chosen i.i.d.\\ from the Chebyshev measure) and an efficient \nreconstruction procedure (i.e., $\\ell_1$-minimization), such that for\nany continuous function $f$ with associated error $\\sigma_{N,s}(f)_{\\infty, w}$, the\npolynomial $P$ of degree at most $N$ \nreconstructed from $f(x_1),\\hdots,f(x_m)$ satisfies\n\\[\n\\|f-P\\|_{\\infty,w} \\leq C' \\sigma_{N,s}(f)_{\\infty, w}.\n\\] \nThe constants $C, C'>0$ are universal.\n\\end{proposition}\n\nThe quantity $\\sigma_{N,s}(f)_{\\infty,w}$ involves \nthe two numbers $N$ and $s$.\nWe now describe how $N$ can be chosen in dependence on $s$, reducing the number of parameters to one. \nWe illustrate this strategy below in a more concrete\nsituation. To describe the setup we introduce analogues of the Wiener algebra in the Legendre polynomial setting. \nLet $c(f)$ with entries\n\\[\nc_k(f) = \\frac{1}{2} \\int_{-1}^1 f(x) L_k(x) dx,\\quad k \\in {\\mathbb{N}}_0, \n\\]\ndenote the vector of Fourier-Legendre coefficients of $f$. Then we define\n\\[\nA_p := \\{ f \\in C[-1,1], \\|c(f)\\|_p < \\infty \\},\\quad 0
0$, a weighted Wiener type space $A_{1,\\alpha}$, containing the functions $f \\in C[-1,1]$ with finite norm\n\\[\n\\|f\\|_{A_{1,\\alpha}} := \\sum_{k \\in {\\mathbb{N}}_0} (1+k)^\\alpha |c_k(f)|.\n\\]\nOne should imagine $\\alpha\\ll 1$ very small, so that $f \\in A_{1,\\alpha}$ \ndoes not impose a severe restriction on $f$, compared to $f \\in A_q$. Then instead of\n$f \\in A_q$ we make the slightly stronger requirement $f \\in A_q \\cap A_{1,\\alpha}$, $00$, and $m,s \\in {\\mathbb{N}}$ be given such that\n\\begin{equation}\\label{ms:rel}\nm \\geq C \\alpha^{-1}\\left(\\frac{1}{q}-\\frac{1}{2}\\right) s \\log^4(s).\n\\end{equation}\nThen there exist sampling points $x_1,\\hdots,x_m \\in [-1,1]$ (i.e., random Chebyshev points) such that for every \n$f \\in A_q \\cap A_{1,\\alpha}$ a polynomial $P$ of degree at most \n$N = \\lceil s^{(1\/q-1\/2)\/\\alpha}\\rceil$\ncan be reconstructed from the sample values $f(x_1),\\hdots,f(x_m)$ such that\n\\begin{equation}\\label{recovery:rate}\n\\frac{1}{\\sqrt{3}}\\|f - P\\|_{\\infty,w} \\leq \\|f-P\\|_{A_1} \\leq C(\\|f\\|_{A_q} + \\|f\\|_{A_{1,\\alpha}}) s^{1-1\/q}.\n\\end{equation}\n\\end{theorem}\nNote that up to $\\log$-factors the number of required samples is of the\norder of the number $s$ of degrees of freedom (the sparsity) \nallowed in the estimate \\eqref{Stechkin}, and the reconstruction \nerror \\eqref{recovery:rate} satisfies the same rate. Clearly $\\ell_1$-minimization or greedy alternatives can be used for reconstruction. This result \nmay be considered\nas an extension of the theory of compressive sensing to infinite dimensions (although all the\nkey tools are actually finite dimensional).\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Proof of Proposition \\ref{thm:function:approx}}\n\nLet $P_{opt} = \\sum_{k=0}^{N-1} c_{k,opt} L_k$ denote the polynomial of degree at most $N-1$ whose coefficient vector $c_{opt}$ realizes the approximation error $\\sigma_{N,s}(f)_{\\infty, w}$, as defined in \\eqref{weight:norm}. The samples $f(x_1), \\hdots, f(x_m)$ can be seen as noise corrupted samples of $P_{opt}$, that is, $f(x_{\\ell}) = P_{opt}(x_{\\ell}) + \\eta_{\\ell},$ and $| \\eta_{\\ell} | w(x_{\\ell}) \\leq \\| f - P_{opt} \\|_{\\infty, w} := \\varepsilon$. The preconditioned system reads then $f(x_{\\ell}) w(x_{\\ell}) = \\sum_{k=0}^{N-1} c_{k,opt} L_k(x_{\\ell}) w(x_{\\ell}) + \\varepsilon_{\\ell}$, with $| \\varepsilon_{\\ell} | \\leq \\varepsilon$. According to Theorem \\ref{thm:BOS:RIP} and Theorem \\ref{thm:growth}, the matrix $\\frac{1}{\\sqrt{m}}\\Psi$ consisting of entries $\\Psi_{\\ell,k} = w(x_{\\ell})L_{k-1}(x_\\ell)$ satisfies the RIP with high probability, provided the stated condition on the minimal number of samples holds.\nDue to Theorem \\ref{thm:l1:stable}, an application of noise-aware $\\ell_1$-minimization \\eqref{l1eps:prog} to $y = (f(x_\\ell)w(x_{\\ell}))_{\\ell=1}^m$ with $\\varepsilon$ replaced\nby $\\sqrt{m}\\varepsilon$ yields a coefficient vector $c$ satisfying $\\| c - c_{opt} \\|_1 \\leq C_1 \\sigma_s(c_{opt})_1 + C_2 \\sqrt{s} \\varepsilon$. We denote the polynomial corresponding to this coefficient vector by $P(x) = \\sum_{k=0}^{N-1} c_k L_k(x)$. 
Then \n\\begin{align}\n\\| f - P \\|_{\\infty, w} &\\leq\n \\| f - P_{opt} \\|_{\\infty, w} + \\| P_{opt} - P \\|_{\\infty, w}\n\\leq \\frac{\\sigma_{N,s}(f)_{\\infty, w}}{\\sqrt{s}} + \\sqrt{3} \\| c - c_{opt} \\|_1 \\nonumber \\\\\n&\\leq \\frac{\\sigma_{N,s}(f)_{\\infty, w}}{\\sqrt{s}} + \\sqrt{3} \\Big[ C_1\\sigma_s(c_{opt})_1 + C_2 \\sqrt{s} \\| f - P_{opt} \\|_{\\infty, w}\\Big] \n\\leq C \\sigma_{N,s}(f)_{\\infty, w}.\\nonumber\n\\end{align}\nThis completes the proof.\n\nThe attentive reader may have noticed that our recovery method, noise-aware \n$\\ell_1$-minimization \\eqref{l1eps:prog}, requires knowledge of $\\sigma_{N,s}(f)$, see also\nRemark \\ref{rem22}(c). \nOne may remove this drawback by considering CoSaMP \\cite{netr08} or\nIterative Hard Thresholding \\cite{blda09} instead. The required \nerror estimate in $\\ell_1$ follows from the $\\ell_2$-stability results for these algorithms\nin \\cite{blda09,netr08}, as both algorithms produce a $2s$-sparse vector, \nsee \\cite[p.\\ 87]{b09} for details. \n\n\\subsection{Proof of Theorem \\ref{thm:Ap}}\n\nLet $f \\in A_q \\cap A_{1,\\alpha}$ with Fourier Legendre coefficients $c_k(f)$.\nLet $N > s$ be a number to be chosen later and introduce the truncated Legendre expansion\n\\[\nf_N(x) = \\sum_{k=0}^{N-1} c_k(f) L_k(x),\n\\]\nwhich has truncated Fourier-Legendre coefficient vector $c^{(N)}$ with entries\n$c^{(N)}_k =c_k(f)$ if $k R} } \\!\\!\\!\\!\\! \\dist{p p'} }\n \\leq \\frac{\\eps}{10c} n R +\\frac{\\eps}{10c}\\sum_{p\\in P}\n \\mathbf{d}\\hspace{-1pt}\\pth{p,A} \\leq \\frac{2\\eps}{10c}\n \\nu_A(P) \\leq \\eps \\nu_{\\mathrm{opt}}\\pth{P,k},\n \\]\n since $\\dist{p p'} \\leq \\frac{\\eps}{10c}\\mathbf{d}\\hspace{-1pt}(p,A)$ if\n $\\mathbf{d}\\hspace{-1pt}(p,A) \\geq R$, and $\\dist{p p'} \\leq\n \\frac{\\eps}{10c} R$, if $\\mathbf{d}\\hspace{-1pt}(p,A) \\leq R$, by the\n construction of the grid. This implies\n $\\cardin{\\nu_Y\\pth{P} - \\nu_Y({\\mathcal{S}})} \\leq \\eps\n \\nu_Y \\pth{ P}$, since $\\nu_{\\mathrm{opt}}(P,k) \\leq\n \\nu_Y(P)$.\n\\end{proof}\n\nIt is easy to see that the above algorithm can be easily\nextended for weighted point sets.\n\\begin{theorem}\n \\thmlab{coreset:fast:k:median}%\n \n Given a point set $P$ with $n$ points, and a point set $A$\n with $m$ points, such that $\\nu_A(P) \\leq c \\nu_{\\mathrm{opt}}(P,k)$,\n where $c$ is a constant. Then, one can compute a weighted set\n ${\\mathcal{S}}$ which is a $(k,\\eps)$-coreset for $P$, and\n $\\cardin{{\\mathcal{S}}} = O\\pth{ (\\cardin{A} \\log{n}) \/\\eps^d}$.\n The running time is $O(n )$ if $m=O(\\sqrt{n})$ and $O( n\\log{m})$\n otherwise.\n\n If $P$ is weighted, with total weight $W$, then\n $\\cardin{{\\mathcal{S}}} = O\\pth{ (\\cardin{A} \\log{W})\n \/\\eps^d}$.\n\n\\end{theorem}\n\n\n\n\n\n\n\\subsection{Coreset for $k$-Means}\n\\seclab{k:means:coreset}\n\nThe construction of the $k$-means coreset follows the $k$-median\nwith a few minor modifications. Let $P$ be a set of $n$ points in\n$\\Re^d$, and a $A$ be a point set $A = \\brc{x_1,\\ldots,\nx_m}$, such that $\\mu_A(P) \\leq c \\mu_{\\mathrm{opt}}(P,k)$. Let $R =\n\\sqrt{(\\mu_A(P)\/(c n))}$ be a lower bound estimate of the\naverage mean radius $\\mathrm{R^{\\mu}_{opt}}(P,k) = \\sqrt{ \\mu_{\\mathrm{opt}}(P,k)\/n}$. 
For\nany $p \\in P_i$, we have $\\dist{p x_i} \\leq \\sqrt{c n} R$, since\n$\\dist{p\n x_i}^2 \\leq \\mu_A(P)$, for $i=1,\\ldots, m$.\n\nNext, we construct an exponential grid around each point of $A$,\nas in the $k$-median case, and snap the points of $P$ to this grid,\nand we pick a representative point for such grid cell. See\n\\secref{construction} for details. We claim that the resulting set of\nrepresentatives ${\\mathcal{S}}$ is the required coreset.\n\n\\begin{theorem}\n \\thmlab{k:means:weighted:coreset}%\n %\n Given a set $P$ with $n$ points, and a point set $A$ with $m$\n points, such that $\\mu_A(P) \\leq c \\mu_{\\mathrm{opt}}(P,k)$, where $c$\n is a constant. Then, can compute a weighted set ${\\mathcal{S}}$ which\n is a $(k,\\eps)$-coreset for $P$, and $\\cardin{{\\mathcal{S}}} = O\\pth{\n (m \\log{n}) \/(c\\eps)^d}$. The running time is $O(n )$ if\n $m=O(n^{1\/4})$ and $O( n\\log{m})$ otherwise.\n \n If $P$ is a weighted set with total weight $W$, then the\n size of the coreset is $O\\pth{ (m \\log{W})\/\\eps^d}$.\n\\end{theorem}\n\n\\begin{proof}\n We prove the theorem for an unweighted point set. The\n construction is as in \\secref{k:means:coreset}. As for\n correctness, consider an arbitrary set $B$ of $k$ points in\n $\\Re^d$. The proof is somewhat more tedious than the median case,\n and we give short description of it before plunging into the\n details. We partition the points of $P$ into three sets: (i)\n Points that are close (i.e., $\\leq R$) to both $B$ and\n $A$. The error those points contribute is small because they\n contribute small terms to the summation. (ii) Points that are\n closer to $B$ than to $A$ (i.e., $P_A$). The error\n those points contribute can be charged to an $\\eps$ fraction of\n the summation $\\mu_A(P)$. (iii) Points that are closer to\n $A$ than to $B$ (i.e., $P_B$). The error is here\n charged to the summation $\\mu_B(P)$. Combining those\n three error bounds, give us the required result.\n\n For any $p\\in P$, let $p'$ the image of $p$ in ${\\mathcal{S}}$; namely,\n $p'$ is the point in the coreset ${\\mathcal{S}}$ that represents $p$.\n Now, we have\n \\begin{equation*}\n \\E%\n =%\n \\cardin{\\mu_B(P) - \\mu_B(S) } \\leq\n \\sum_{p\\in P} \\cardin{ {\\mathbf{d}\\hspace{-1pt}(p,B) }^2\n - \\mathbf{d}\\hspace{-1pt}(p', B)^2}%\n \\leq\n \\sum_{p\\in P} \\cardin{ \\pth{\\mathbf{d}\\hspace{-1pt}(p, B)\n - \\mathbf{d}\\hspace{-1pt}(p', B)}\n \\pth{\\Bigl.\\!\\mathbf{d}\\hspace{-1pt}(p, B) + \\mathbf{d}\\hspace{-1pt}(p',B)} }\n \\end{equation*}\n Let\n $P_R = \\Set{ p \\in P }{ \\mathbf{d}\\hspace{-1pt}(p, B ) \\leq R \\text{ and }\n \\mathbf{d}\\hspace{-1pt}(p , A)\\leq R}$,\n $P_A = \\Set{ p \\in P \\setminus P_R }{ \\mathbf{d}\\hspace{-1pt}(p,B) \\leq\n \\mathbf{d}\\hspace{-1pt}(p,A)}$, and let\n $P_B = P \\setminus \\pth{ P_R \\cup P_A}$. By the triangle\n inequality, for $p \\in P$, we have\n $\\mathbf{d}\\hspace{-1pt}(p',B) + \\dist{p p'} \\geq \\mathbf{d}\\hspace{-1pt}(p, B)$ and\n $\\mathbf{d}\\hspace{-1pt}(p, B) + \\dist{p p'} \\geq \\mathbf{d}\\hspace{-1pt}(p',B)$. 
Thus,\n $\\dist{p p'} \\geq \\cardin{ \\mathbf{d}\\hspace{-1pt}(p, B) - \\mathbf{d}\\hspace{-1pt}(p', B)}$.\n \n Also,\n $\\mathbf{d}\\hspace{-1pt}(p, B) + \\mathbf{d}\\hspace{-1pt}(p', B) \\leq 2\\mathbf{d}\\hspace{-1pt}(p, B) +\n \\dist{p p'}$, and thus\n \\begin{align*}\n \\E_R%\n &=%\n \\sum_{p\\in P_R} \\cardin{ \\pth{ \\mathbf{d}\\hspace{-1pt}(p, B) -\n \\mathbf{d}\\hspace{-1pt}(p',B)} \\pth{\\mathbf{d}\\hspace{-1pt}(p, B) + \\mathbf{d}\\hspace{-1pt}(p',B)}\n }%\n \n \\leq %\n \\sum_{p\\in P_R} \\dist{p p'} \\pth{ 2 \\mathbf{d}\\hspace{-1pt}(p, B) + \\dist{p\n {}p'}}%\n \\\\&%\n \\leq%\n \\sum_{p\\in P_R} \\frac{\\eps}{10} R \\pth{ 2 R\n + \\frac{\\eps}{10} R}%\n \n \\leq %\n \\frac{\\eps}{3} \\sum_{p\\in P_R} R^2 \\leq \\frac{\\eps}{3}\n \\mu_{\\mathrm{opt}}(P,k),\n \\end{align*}\n since by definition, for $p \\in P_R$, we have\n $\\mathbf{d}\\hspace{-1pt}(p, A), \\mathbf{d}\\hspace{-1pt}(p,B) \\leq R$.\n \n By construction $\\dist{p p'} \\leq (\\eps\/10c)\\mathbf{d}\\hspace{-1pt}(p,A)$, for\n all $p \\in P_A$, as $\\mathbf{d}\\hspace{-1pt}(p, A) \\geq R$. Thus,\n \\begin{align*}\n \\E_A\n &=%\n \\sum_{p\\in P_A} \\dist{p p'} \\pth{ 2\n \\mathbf{d}\\hspace{-1pt}(p, B) + \\dist{p p'}}\n \\leq \\sum_{p\\in P_A} \\frac{\\eps}{10c}\n \\mathbf{d}\\hspace{-1pt}(p,A) \\pth{ 2 + \\frac{\\eps}{10c}}\n \\mathbf{d}\\hspace{-1pt}(p, A) %\n \\\\&%\n \\leq%\n \\frac{\\eps}{3c} \\sum_{p \\in\n P_A} \\pth{ \\mathbf{d}\\hspace{-1pt}(p,A)}^2 \\leq\n \\frac{\\eps}{3} \\mu_{\\mathrm{opt}}(P,k)\n \\leq \\frac{\\eps}{3} \\mu_B(P).\n \\end{align*}\n\n As for $p \\in P_B$, we have $\\dist{p p'} \\leq\n \\frac{\\eps}{10c}\\mathbf{d}\\hspace{-1pt}(p, B)$, since $\\mathbf{d}\\hspace{-1pt}(p, B) \\geq\n R$, and $\\mathbf{d}\\hspace{-1pt}(p,A) \\leq \\mathbf{d}\\hspace{-1pt}(p, B)$. Implying\n $\\dist{p p'} \\leq (\\eps\/10c)\\mathbf{d}\\hspace{-1pt}(p,B)$ and thus\n \\begin{align*}\n \\E_B\n &=%\n \\sum_{p\\in P_B} \\dist{p p'} \\pth{ 2\n \\mathbf{d}\\hspace{-1pt}(p, B) + \\dist{p p'}}\n \\leq\n \\sum_{p \\in P_B}\n \\frac{\\eps}{10c}\\mathbf{d}\\hspace{-1pt}(p,B)\n \\pth{ 2\n \\mathbf{d}\\hspace{-1pt}(p, B) + \\frac{\\eps}{10c}\\mathbf{d}\\hspace{-1pt}(p,B)}%\n \\\\&%\n \\leq%\n \\sum_{p \\in P_B} \\frac{\\eps}{3} \\mathbf{d}\\hspace{-1pt}(p, B)^2\n \\leq \\frac{\\eps}{3} \\mu_B(P).\n \\end{align*}\n We conclude that $\\E = \\cardin{\\mu_B(P) - \\mu_B(S)\n } \\leq \\E_R + \\E_A + \\E_B \\leq \\frac{3\\eps}{3}\n \\mu_B(P), $ which implies that $(1-\\eps)\\mu_B(P)\n \\leq \\mu_B(S) \\leq (1+\\eps) \\mu_B(P)$, as\n required. It is easy to see that we can extend the analysis\n for the case when we have weighted points.\n\\end{proof}\n\n\n\n\n\n\\section{Fast Constant Factor Approximation Algorithm}\n\\seclab{fast:const:factor}\n\nLet $P$ be the given point set in $\\Re^d$. We want to quickly\ncompute a constant factor approximation to the $k$-means\nclustering of $P$, while using more than $k$ centers. The number\nof centers output by our algorithm is $O\\pth{k \\log^3 n}$.\nSurprisingly, the set of centers computed by the following\nalgorithm is a good approximation for both $k$-median and\n$k$-means. 
To be consistent, throughout this section, we refer to\n$k$-means, although everything holds nearly verbatim for\n$k$-median as well.\n\n\\begin{defn}[bad points]\n For a point set $X$, define a point $p \\in P$ as\n \\emph{bad} with respect to $X$, if the cost it pays in\n using a center from $X$ is prohibitively larger than the\n price $C_{\\mathrm{opt}}$ pays for it; more precisely $\\mathbf{d}\\hspace{-1pt}(p,X)\n \\geq 2 \\mathbf{d}\\hspace{-1pt}(p,C_{\\mathrm{opt}})$. A point $p \\in P$ which is not\n bad, is by necessity, if not by choice, \\emph{good}.\n Here $C_{\\mathrm{opt}} =C_{\\mathrm{opt}}(P,k)$ is a set of optimal $k$-means\n centers realizing $\\mu_{\\mathrm{opt}}(P,k)$.\n\\end{defn}\nWe first describe a procedure which given $P$, computes a\nsmall set of centers $X$ and a large $P'\\subseteq P$ such\nthat $X$ induces clusters $P'$ well. Intuitively we want a\nset $X$ and a large set of points $P'$ which are \\emph{good}\nfor $X$.\n\n\n\\subsection{Construction of the Set $X$ of Centers}\n\n\\seclab{good:subset:centers}\n\nFor $k=O(n^{1\/4})$, we can compute a $2$-approximate $k$-center\nclustering of $P$ in linear time \\cite{h-cm-04}, or alternatively, for\n$k=\\Omega(n^{1\/4})$, in $O(n\\log{k})$ time, using the algorithm of\nFeder and Greene \\cite{fg-oafac-88}. This is the \\emph{min-max\n clustering} where we cover $P$ by a set of $k$ balls such the\nradius of the largest ball is minimized. Let $V$ be the set of $k$\ncenters computed, together with the furthest point in $P$ from those\n$k$ centers.\n\nLet $L$ be the radius of this $2$-approximate clustering. Since both\nthose algorithms are simulating the (slower) algorithm of Gonzalez\n\\cite{g-cmmid-85}, we have the property that the minimal distance\nbetween any points of $V$ is at least $L$. Thus, any $k$-means\nclustering of $P$, must have price at least $(L\/2)^2$, and is at most\nof price $n L^2$, and as such $L$ is a rough estimate of\n$\\mu_{\\mathrm{opt}}(P,k)$. In fact, this holds even if we restrict out attention\nonly to $V$; explicitly $(L\/2)^2 \\leq \\mu_{\\mathrm{opt}}(V, k) \\leq \\mu_{\\mathrm{opt}}(P,k)\n\\leq n L^2$.\n\nNext, we pick a random sample $Y$ from $P$ of size $\\rho = \\gamma k\n\\log^2 n$, where $\\gamma$ is a large enough constant whose value\nwould follow from our analysis. Let $X = Y \\cup V$ be the required\nset of cluster centers. In the extreme case where $\\rho > n$, we just\nset $X$ to be $P$.\n\n\n\n\\subsection{A Large Good Subset for $X$}\n\\seclab{good:subset:points}\n\n\\subsubsection{Bad points are few}\nConsider the set $C_{\\mathrm{opt}}$ of $k$ optimal centers for the $k$-means, and\nplace a ball ${b}_i$ around each point of $c_i\\inC_{\\mathrm{opt}}$, such that\n${b}_i$ contain $\\eta = n\/(20k\\log{n})$ points of $P$. If\n$\\gamma$ is large enough, it is easy to see that with high\nprobability, there is at least one point of $X$ inside every ball\n${b}_i$. Namely, $X \\cap {b}_i \\ne \\emptyset$, for $i=1,\\ldots,\nk$.\n\nLet $P_{\\mathrm{bad}}$ be the set of all bad points of $P$. Assume, that\nthere is a point $x_i \\in X$ inside ${b}_i$, for $i=1,\\ldots,\nk$. Observe, that for any $p \\in P \\setminus {b}_i$, we have\n$\\dist{p x_i} \\leq 2\\dist{p c_i}$. In particular, if $c_i$ is the\nclosest center in $C_{\\mathrm{opt}}$ to $p$, we have that $p$ is good. Thus,\nwith high probability, the only bad points in $P$ are the one that\nlie inside the balls ${b}_1,\\ldots, {b}_k$. But every one of\nthose balls, contain at most $\\eta$ points of $P$. 
It\nfollows, that with high probability, the number of bad points in\n$P$ with respect to $X$ is at most $\\beta= k \\cdot \\eta =\nn\/(20\\log n)$.\n\n\\subsubsection{Keeping Away from Bad Points}\n\nAlthough the number of bad points is small, there is no easy way to\ndetermine the set of bad points. We instead construct a set $P'$\nensuring that the clustering cost of the bad points in $P'$ does not\ndominate the total cost. For every point in $P$, we compute its\napproximate nearest neighbor in $X$. This can be easily done in\n$O(n\\log \\cardin{X} + \\cardin{X} \\log \\cardin{X})$ time using\nappropriate data structures \\cite{amnsw-oaann-98}, or in $O(n +\nn\\cardin{X}^{1\/4} \\log n)$ time using \\corref{batch:n:n:2} (with $D=n\nL$). This stage takes $O(n)$ time, if $k=O(n^{1\/4})$, else it takes\n$O(n \\log{\\cardin{X}} + \\cardin{X} \\log{\\cardin{X}}) = O(n \\log( k\\log\nn))$ time, as $\\cardin{X} \\leq n$.\n\nIn the following, to simplify the exposition, we assume that we\ncompute exactly the distance $r(p) = \\mathbf{d}\\hspace{-1pt}(p,X)$, for $p \\in P$.\n\nNext, we partition $P$ into classes in the following way. Let\n$P[a,b] = \\Set{ p \\in P }{a\\leq r(p)< b}$. Let $P_0 = P[0, L\/(4n)]$,\n$P_\\infty = P[2Ln,\\infty]$ and $P_i = P\\pbrc{2^{i-1} L\/n, 2^{i} L\/n}$,\nfor $i=1,\\ldots, M$, where $M = 2\\ceil{\\lg n} + 3$. This partition of\n$P$ can be done in linear time using the $\\log$ and floor function.\n\nLet $P_\\alpha$ be the last class in this sequence that\ncontains more than $2\\beta = 2(n\/(20\\log{n}))$ points. Let\n$P' = V \\cup \\bigcup_{i \\leq \\alpha} P_i$. We claim that\n$P'$ is the required set. Namely, $\\cardin{P'} \\geq n\/2$\nand $\\mu_X(P') = O(\\mu_{C_{\\mathrm{opt}}}(P'))$, where $C_{\\mathrm{opt}}\n=C_{\\mathrm{opt}}(P,k)$ is the optimal set of centers for $P$.\n\n\\subsubsection{Proof of Correctness}\n\nClearly the set $P'$ contains at least $\\pth{n - \\cardin{P_\\infty}\n- M \\cdot \\pth{2n\/20 \\log{n}}}$ points. Since $P_\\infty \\subseteq\nP_{\\mathrm{bad}}$ and $\\cardin{P_{\\mathrm{bad}}} \\leq \\beta$, hence $|P'| > n\/2$.\n\nIf $\\alpha > 0$, we have $\\cardin{P_\\alpha} \\geq 2\\beta =\n2(n\/(20\\log{n}))$. Since $P'$ is the union of all the classes with\ndistances smaller than the distances in $P_\\alpha$, it follows that\nthe worst case scenario is when all the bad points are in $P_\\alpha$.\nBut with high probability the number of bad points is at most $\\beta$,\nand since the price of all the points in $P_\\alpha$ is roughly the\nsame, it follows that we can charge the price of the bad points in\n$P'$ to the good points in $P_\\alpha$.\n\nFormally, let $Q'= P_\\alpha \\setminus P_{\\mathrm{bad}}$. For any point $p\n\\in P' \\cap P_{\\mathrm{bad}}$ and $q \\in Q'$, we have $\\mathbf{d}\\hspace{-1pt}(p,X) \\leq\n2\\mathbf{d}\\hspace{-1pt}(q,X)$. Further $|Q'| > |P_{\\mathrm{bad}}|$. Thus, $\\mu_X(P' \\cap\nP_{\\mathrm{bad}}) \\leq 4\\mu_X(Q') \\leq 16 \\mu_{C_{\\mathrm{opt}}}(Q') \\leq\n16\\mu_{C_{\\mathrm{opt}}}(P')$. Thus, \n\\[\n\\mu_X(P') = \\mu_X(P' \\cap P_{\\mathrm{bad}}) + \\mu_X(P'\n\\setminus P_{\\mathrm{bad}} ) \\leq 16 \\mu_{C_{\\mathrm{opt}}}(P') +\n4\\mu_{C_{\\mathrm{opt}}}(P') = 20 \\mu_{C_{\\mathrm{opt}}}(P').\n\\]\n\nIf $\\alpha =0$ then for any point $p \\in P'$, we have $(\\mathbf{d}\\hspace{-1pt}(p,X))^2\n\\leq n(L\/4n)^2 \\leq L^2\/(4n)$. 
and thus $\\mu_X(P') \\leq L^2\/4 \\leq\n\\mu_{C_{\\mathrm{opt}}}(V) \\leq \\mu_{C_{\\mathrm{opt}}}(P')$, since $V \\subseteq P'$.\n\nIn the above analysis we assumed that the nearest neighbor data\nstructure returns the exact nearest neighbor. If we were to use an\napproximate nearest neighbor instead, the constants would slightly\ndeteriorate.\n\n\\begin{lemma}\n \\lemlab{good:subset}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, and parameter $k$, one\n can compute sets $P'$ and $X \\subseteq P$ such that, with high\n probability, $\\cardin{P'} \\geq n\/2$, $\\cardin{X} = O(k \\log^2 n)$,\n and $\\mu_{C_{\\mathrm{opt}}}(P') \\geq \\mu_X(P')\/32$, where $C_{\\mathrm{opt}}$ is\n the optimal set of $k$-means centers for $P$. The running time of\n the algorithm is $O(n)$ if $k = O(n^{1\/4})$, and $O(n \\log{(k\n \\log{n})} )$ otherwise.\n\\end{lemma}\n\n\nNow, finding a constant factor $k$-median clustering is easy. Apply\n\\lemref{good:subset} to $P$, remove the subset found, and repeat on\nthe remaining points. Clearly, this would require $O(\\log{n})$\niterations. We can extend this algorithm to the weighted case, by\nsampling $O(k \\log^2 W)$ points at every stage, where $W$ is the total\nweight of the points. Note however, that the number of points no\nlonger shrink by a factor of two at every step, as such the running\ntime of the algorithm is slightly worse.\n\n\\begin{theorem}[Clustering with more centers]\n \\thmlab{k:cluster:const:rough}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, and parameter $k$, one\n can compute a set $X$, of size $O(k\\log^3 n)$, such that\n $\\mu_X(P) \\leq 32 \\mu_{\\mathrm{opt}}(P,k)$. The running time of the\n algorithm is $O(n)$ if $k = O(n^{1\/4})$, and $O(n \\log{(k \\log\n n)})$ otherwise.\n \n Furthermore, the set $X$ is a good set of centers for\n $k$-median. Namely, we have that $\\nu_X(P) \\leq 32\n \\nu_{\\mathrm{opt}}(P,k)$.\n \n If the point set $P$ is weighted, with total weight $W$, then the\n size of $X$ becomes $O(k \\log^3 W)$, and the running time becomes\n $O(n \\log^2 W )$.\n\\end{theorem}\n\n\n\\section{$(1+\\eps)$-Approximation for $k$-Median}\n\\seclab{eps:approx:k:median}\n\nWe now present the approximation algorithm using exactly $k$\ncenters. Assume that the input is a set of $n$ points. We use the set\nof centers computed in \\thmref{k:cluster:const:rough} to compute a\nconstant factor coreset using the algorithm of\n\\thmref{coreset:fast:k:median}. The resulting coreset ${\\mathcal{S}}$, has\nsize $O(k \\log^4 n )$. Next we compute a $O(n)$ approximation to the\n$k$-median for the coreset using the $k$-center (min-max) algorithm\n\\cite{g-cmmid-85}. Let $C_0 \\subseteq {\\mathcal{S}}$ be the resulting set\nof centers. Next we apply the local search algorithm, due to Arya\n\\textit{et~al.}\\xspace{} \\cite{agkmm-lshkm-04}, to $C_0$ and ${\\mathcal{S}}$, where the set\nof candidate points is ${\\mathcal{S}}$. This local search algorithm, at\nevery stage, picks a center $c$ from the current set of centers\n$C_{curr}$, and a candidate center $s \\in {\\mathcal{S}}$, and swaps $c$ out\nof the set of centers and $s$ into the set of centers. 
Next, if the\nnew set of centers\n$C_{curr}' = C_{curr} \\setminus \\brc{c} \\cup \\brc{s}$ provides a\nconsiderable improvement over the previous solution (i.e.,\n$\\nu_{C_{curr}}({\\mathcal{S}}) \\leq\n(1-\\eps\/k)\\nu_{C_{curr}'}({\\mathcal{S}})$ where $\\eps$ here is an\narbitrary small constant), then we set $C_{curr}$ to be $C_{curr}'$.\nArya \\textit{et~al.}\\xspace{} \\cite{agkmm-lshkm-04} showed that the algorithm\nterminates, and it provides a constant factor approximation to\n$\\nu^D_{\\mathrm{opt}}({\\mathcal{S}}, k)$, and as hence to $\\nu_{\\mathrm{opt}}(P,k)$. It is easy to\nverify that it stops after $O(k \\log{n})$ such swaps. Every swap, in\nthe worst case, requires considering $\\cardin{{\\mathcal{S}}}k$\nsets. Computing the price of clustering for every such candidate set\nof centers takes $O\\pth{ \\cardin{{\\mathcal{S}}} k }$ time. Thus, the\nrunning time of this algorithm is\n$O \\pth{ \\cardin{{\\mathcal{S}}}^2 k^{3} \\log n}=O \\pth{ k^5 \\log^9 n\n}$. Finally, we use the new set of centers with\n\\thmref{coreset:fast:k:median}, and get a $(k,\\eps)$-coreset for\n$P$. It is easy to see that the algorithm works for weighted\npoint-sets as well. Putting in the right bounds from\n\\thmref{k:cluster:const:rough} and \\thmref{coreset:fast:k:median} for\nweighted sets, we get the following.\n\n\\begin{lemma}[coreset]\n \\lemlab{k:coreset:small:median}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, one can\n compute a $k$-median $(k,\\eps)$-coreset ${\\mathcal{S}}$ of\n $P$, of size $O\\pth{ (k\/\\eps^d)\\log{n} }$, in time $O\n \\pth{ n + k^5 \\log^9 n }$.\n \n If $P$ is a weighted set, with total weight $W$, the\n running time of the algorithm is $O( n \\log^2 W + k^5\n \\log^9 W)$.\n\\end{lemma}\n\nWe would like to apply the algorithm of Kolliopoulos and Rao\n\\cite{kr-nltas-99} to the coreset, but unfortunately, their\nalgorithm only works for the discrete case, when the medians are\npart of the input points. Thus, the next step is to generate from\nthe coreset, a small set of candidate points in which we can\nassume all the medians lie, and use the (slightly modified)\nalgorithm of \\cite{kr-nltas-99} on this set.\n\n\\begin{defn}[Centroid Set]\n Given a set $P$ of $n$ points in $\\Re^d$, a set $\\mathcal{D}\n \\subseteq \\Re^d$ is an \\emph{$(k,\\eps)$-approximate\n centroid set} for $P$, if there exists a subset\n $C \\subseteq \\mathcal{D}$ of size $k$, such that\n $\\nu_C(P) \\leq (1+\\eps)\\nu_{\\mathrm{opt}}(P,k)$.\n\\end{defn}\n\n\\begin{lemma}\n \\lemlab{k:median:cen:set}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, one can\n compute an $(k,\\eps)$-centroid set $\\mathcal{D}$ of size\n $O(k^2\\eps^{-2d}\\log^2{n})$. The running time of this\n algorithm is $O \\pth{ n + k^5 \\log^9 n +\n k^2\\eps^{-2d}\\log^2{n} }$.\n \n For the weighted case, the running time is\n $O \\pth{ n \\log^2 W + k^5 \\log^9 W + k^2\\eps^{-2d}\\log^2{W} }$,\n and the centroid set is of size $O(k^2\\eps^{-2d}\\log^2{W})$.\n\\end{lemma}\n\n\\begin{proof}\n Compute a $(k,\\eps\/12)$-coreset ${\\mathcal{S}}$ using\n \\lemref{k:coreset:small:median}. We retain the set\n $B$ of $k$ centers, for which $\\nu_B( P )\n =O( \\nu_{\\mathrm{opt}}(P,k) )$, which is computed during the\n construction of ${\\mathcal{S}}$. Further let $R =\n \\nu_B(P) \/ n$.\n \n Next, compute around each point of ${\\mathcal{S}}$, an exponential grid\n using $R$, as was done in \\secref{construction}. This results in\n a point set $\\mathcal{D}$ of size of $O(k^2\\eps^{-2d}\\log^2{n})$. 
We\n claim that $\\mathcal{D}$ is the required centroid set. The proof\n proceeds on similar lines as the proof of\n \\thmref{coreset:fast:k:median}.\n \n Indeed, let $C_{\\mathrm{opt}}$ be the optimal set of $k$ medians.\n We snap each point of $C_{\\mathrm{opt}}$ to its nearest neighbor in\n $\\mathcal{D}$, and let $X$ be the resulting set. Arguing as\n in the proof of \\thmref{coreset:fast:k:median}, we have\n that $\\cardin{\\nu_X({\\mathcal{S}})\n -\\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}})}$ $\\leq\n (\\eps\/12)\\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}})$. On the other hand,\n by definition of a coreset, $\\cardin{\\nu_{C_{\\mathrm{opt}}}(P) -\n \\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}})}\\leq (\\eps\/12)$\n $\\nu_{C_{\\mathrm{opt}}}(P)$ and $\\cardin{\\nu_{X}(P) -\n \\nu_{X}({\\mathcal{S}})}\\leq (\\eps\/12)$ $\\nu_{X}(P)$.\n As such, $\\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}}) \\leq\n (1+\\eps\/12)\\nu_{C_{\\mathrm{opt}}}(P)$ and it follows\n \\begin{equation*}\n \\cardin{\\nu_X({\\mathcal{S}}) -\\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}})}\n \\leq (\\eps\/12)(1+\\eps\/12)\\nu_{C_{\\mathrm{opt}}}(P) \\leq\n (\\eps\/6) \\nu_{C_{\\mathrm{opt}}}(P). \n \\end{equation*}\n As such,\n \\begin{align*}\n \\nu_X(P) %\n &\\leq%\n \\frac{1}{1-\\eps\/12}\n \\nu_X({\\mathcal{S}}) \\leq 2\n \\nu_X({\\mathcal{S}})\n \\leq 2 \\pth{ \\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}}) +\n \\frac{\\eps}{6} \\nu_{C_{\\mathrm{opt}}}(P)}%\n \\\\&%\n \\leq%\n 2 \\pth{\\pth{1+ \\frac{\\eps}{12}} \\nu_{C_{\\mathrm{opt}}}(P) + \\frac{\\eps}{6}\n \\nu_{C_{\\mathrm{opt}}}(P)}\n \\leq%\n 3\\nu_{C_{\\mathrm{opt}}}(P),\n \\end{align*}\n for $\\eps < 1$. We conclude that $\\cardin{\\nu_X(P)\n - \\nu_X({\\mathcal{S}}) } \\leq (\\eps\/12)\\nu_X(P) \\leq\n (\\eps\/3) \\nu_{C_{\\mathrm{opt}}}(P)$. Putting things together,\n we have\n \\begin{align*}\n \\cardin{\\nu_{X}(P) - \\nu_{C_{\\mathrm{opt}}}(P) }\n &\\leq%\n \\cardin{\\nu_X(P) - \\nu_{X}({\\mathcal{S}}) }\n + \\cardin{\\nu_{X}({\\mathcal{S}}) -\n \\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}}) }\n + \\cardin{\\nu_{C_{\\mathrm{opt}}}({\\mathcal{S}}) -\n \\nu_{C_{\\mathrm{opt}}}(P) } \\\\\n &\\leq%\n \\pth{\\frac{\\eps}{3} + \\frac{\\eps}{6} +\n \\frac{\\eps}{12}} \\nu_{C_{\\mathrm{opt}}}(P)\n \\leq \\eps\\nu_{C_{\\mathrm{opt}}}(P).\n \\end{align*}\n\\end{proof}\n\nWe are now in the position to get a fast approximation\nalgorithm. We generate the centroid set, and then we modify\nthe algorithm of Kolliopoulos and Rao so that it considers\ncenters only from the centroid set in its dynamic\nprogramming stage. For the weighted case, the depth of the\ntree constructed in \\cite{kr-nltas-99} is $O(\\log{W})$\ninstead of $O(\\log{n})$. Further since their algorithm works\nin expectation, we run it independently\n$O(\\log(1\/\\delta)\/\\eps)$ times to get a guarantee of\n$(1-\\delta)$.\n\n\\begin{theorem}[\\cite{kr-nltas-99}]\n \\thmlab{kr:algo}%\n %\n Given a weighted point set $P$ with $n$ points in $\\Re^d$, with\n total weight $W$, a centroid set $\\mathcal{D}$ of size at most $n$,\n and a parameter $\\delta > 0$, one can compute\n $(1+\\eps)$-approximate $k$-median clustering of $P$ using only\n centers from $\\mathcal{D}$. The overall running time is\n $O \\pth{ \\varrho}%{C_{\\mathrm{kr}} n (\\log k) (\\log W) \\log (1\/\\delta) }$, where\n $\\varrho}%{C_{\\mathrm{kr}} = \\CkrExp$. 
The algorithm succeeds with probability\n $\\geq 1-\\delta$.\n\n\\end{theorem}\n\\remove{\n\\begin{proof}\n We need to modify the algorithm of \\cite{kr-nltas-99} so that it\n considers only centers from the centroid set. This is a\n straightforward modification of their dynamic programming stage.\n\n We execute the algorithm of \\cite{kr-nltas-99} $M =\n O(\\log(1\/\\delta)\/\\eps)$ times independently. Since the\n algorithm of \\cite{kr-nltas-99} works in expectation, it\n follows by the Markov inequality, in each such execution\n the algorithm succeeds with probability larger $\\eps\/2$.\n Thus, it fails with probability $\\leq (1-\\eps\/2)^M \\leq\n \\exp(-M \\eps\/2) \\leq \\delta$.\n\n As for the running time, the depth of the tree\n constructed by \\cite{kr-nltas-99} is $O(\\log W)$ in this\n case (instead of $O(\\log n)$ as in the original\n settings). Thus, the bound on the running time follows.\n\\end{proof}\n}\n\nThe final algorithm is the following: Using the algorithms\nof \\lemref{k:coreset:small:median} and\n\\lemref{k:median:cen:set} we generate a $(k,\\eps)$-coreset\n${\\mathcal{S}}$ and an $\\eps$-centroid set $\\mathcal{D}$ of $P$,\nwhere $\\cardin{{\\mathcal{S}}} = O(k\\eps^{-d} \\log{n})$ and\n$\\cardin{\\mathcal{D}} = O(k^2\\eps^{-2d} \\log^2{n})$. Next, we\napply the algorithm of \\thmref{kr:algo} on ${\\mathcal{S}}$ and\n$\\mathcal{D}$.\n\\begin{theorem}[$(1+\\eps)$-approx $k$-median]\n \\thmlab{fast:k:median}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, and parameter $k$, one\n can compute a $(1+\\eps)$-approximate $k$-median clustering of $P$\n (in the continuous sense) in\n $O \\pth{ n + k^5 \\log^9 n + \\varrho}%{C_{\\mathrm{kr}} k^2 \\log^5 n }$ time, where\n $\\varrho}%{C_{\\mathrm{kr}} = \\CkrExp$ and $c$ is a constant. The algorithm outputs a\n set $X$ of $k$ points, such that\n $\\nu_X(P) \\leq (1+\\eps)\\nu_{\\mathrm{opt}}(P,k)$. If $P$ is a weighted set,\n with total weight $W$, the running time of the algorithm is\n $O ( n \\log^2 W + k^5 \\log^9 W + \\varrho}%{C_{\\mathrm{kr}} k^2 \\log^5 W )$.\n\\end{theorem}\n\nWe can extend our techniques to handle the discrete median\ncase efficiently as follows.\n\\begin{lemma}\n \\lemlab{k:median:cen:set:discrete}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, one can compute a\n discrete $(k,\\eps)$-centroid set $\\mathcal{D} \\subseteq P$ of size\n $O(k^2\\eps^{-2d}\\log^2{n})$. The running time of this algorithm is\n $ \\displaystyle O \\pth{ n + k^5 \\log^9 n + k^2\\eps^{-2d}\\log^2{n}\n }$ if $k \\leq \\eps^d n^{1\/4}$ and $ \\displaystyle O \\pth{ n\n \\log{n} + k^5 \\log^9 n + k^2\\eps^{-2d}\\log^2{n} }$ otherwise.\n\\end{lemma}\n\n\\begin{proof}\n We compute a $\\eps\/4$-centroid set $\\mathcal{D}$, using\n \\lemref{k:median:cen:set}, and let $m =\\cardin{\\mathcal{D}} =\n O(k^2\\eps^{-2d}\\log^2{n})$. Observe that if $m > n$ then we set\n $\\mathcal{D}$ to be $P$. Next, snap every point in $P$ to its\n (approximate) nearest neighbor in $\\mathcal{D}$, using\n \\corref{batch:n:n:2}. This takes $O(n+ m n^{1\/10} \\log(n) )= O(n\n + k^2 n^{1\/10} \\eps^{-2d} \\log^3 n) = O(n)$ time, if $k \\leq\n \\eps^d n^{1\/4}$, and $O(n \\log n)$ otherwise (then we use the\n data-structure of \\cite{amnsw-oaann-98} to perform the nearest\n neighbor queries). For every point $x\\in\\mathcal{D}$, let $P(x)$ be\n the of points in $P$ mapped to $x$. Pick from every set $P(x)$\n one representative point, and let $U \\subseteq P$ be the resulting\n set. 
Consider the optimal discrete center set $C_{\\mathrm{opt}}$, and\n consider the set $X$ of representative points that corresponds to\n the points of $C_{\\mathrm{opt}}$. Using the same argumentation as in\n \\lemref{k:median:cen:set} it is easy to show that $\\nu_X(P)\n \\leq (1+\\eps)\\nu^D_{\\mathrm{opt}}(P,k)$.\n\\end{proof}\n\nCombining \\lemref{k:median:cen:set:discrete} and\n\\thmref{fast:k:median}, we get the following.\n\\begin{theorem}[Discrete k-medians]\n \\thmlab{approx:k:median:discrete}%\n %\n One can compute an $(1+\\eps)$-approximate discrete $k$-median of a\n set of $n$ points in time $\\displaystyle O \\pth{ n + k^5 \\log^9 n\n + \\varrho}%{C_{\\mathrm{kr}} k^2 \\log^5 n }$, where $\\varrho}%{C_{\\mathrm{kr}}$ is the constant from\n \\thmref{kr:algo}.\n\\end{theorem}\n\n\\begin{proof}\n The proof follows from the above discussion. As for the running\n time bound, it follows by considering separately the case when\n $1\/\\eps^{2d} \\leq 1\/n^{1\/10}$, and the case when $1\/\\eps^{2d} \\geq\n 1\/n^{1\/10}$, and simplifying the resulting expressions. We omit\n the easy but tedious computations.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\section{A $(1+\\eps)$-Approximation Algorithm for $k$-Means}\n\\seclab{eps:approx:k:mean}\n\n\n\\subsection{Constant Factor Approximation}\n\\seclab{exact:k:means:const}\n\nIn this section we reduce the number of centers to be exactly $k$. We\nuse the set of centers computed by \\thmref{k:cluster:const:rough} to\ncompute a constant factor coreset using the algorithm of\n\\thmref{k:means:weighted:coreset}. The resulting coreset ${\\mathcal{S}}$,\nhas size $O(k \\log^4 n)$. Next we compute a $O(n)$ approximation to\nthe $k$-means for the coreset using the $k$-center (min-max) algorithm\n\\cite{g-cmmid-85}. Let $C_0 \\subseteq {\\mathcal{S}}$ be the resulting set\nof centers. Next we apply the local search algorithm, due to Kanungo\n\\textit{et~al.}\\xspace{} \\cite{kmnpsw-lsaak-04}, to $C_0$ and ${\\mathcal{S}}$, where the set\nof candidate points is ${\\mathcal{S}}$. This local search algorithm, at\nevery stage, picks a center $c$ from the current set of centers\n$C_{curr}$, and a candidate center $s \\in {\\mathcal{S}}$, and swaps $c$ out\nof the set of centers and $c$ into the set of centers. Next, if the\nnew set of centers\n$C_{curr}' = C_{curr} \\setminus \\brc{c} \\cup \\brc{s}$ provides a\nconsiderable improvement over the previous solution (i.e.,\n$\\mu_{C_{curr}}({\\mathcal{S}}) \\leq\n(1-\\eps\/k)\\mu_{C_{curr}'}({\\mathcal{S}})$ where $\\eps$ here is an\narbitrary small constant), then we set $C_{curr}$ to be\n$C_{curr}'$. Extending the analysis of Arya \\textit{et~al.}\\xspace{}\n\\cite{agkmm-lshkm-04}, for the $k$-means algorithm, Kanungo \\textit{et~al.}\\xspace{}\n\\cite{kmnpsw-lsaak-04} showed that the algorithm terminates, and it\nprovides a constant factor approximation to $\\mu^D_{\\mathrm{opt}}({\\mathcal{S}}, k)$,\nand as hence to $\\mu_{\\mathrm{opt}}(P,k)$. It is easy to verify that it stops\nafter $O(k \\log{n})$ such swaps. Every swap, in the worst case,\nrequires considering $\\cardin{{\\mathcal{S}}}k$ sets. Computing the price of\nclustering for every such candidate set of centers takes\n$O\\pth{ \\cardin{{\\mathcal{S}}} k }$ time. 
Thus, the running time of this\nalgorithm is\n$O \\pth{ \\cardin{{\\mathcal{S}}}^2 k^{3} \\log n}=O \\pth{ k^5 \\log^9 n }$.\n\n\\begin{theorem}\n \\thmlab{k:means:const:fast}%\n %\n Given a point set $P$ in $\\Re^d$ and parameter $k$, one can\n compute a set $X \\subseteq P$ of size $k$, such that $\\mu_X(P)\n =O(\\mu_{\\mathrm{opt}}(P,k))$. The algorithm succeeds with high probability.\n The running time is $O(n + k^5 \\log^9 n)$ time.\n\n If $P$ is weighted, with total weight $W$, then the\n algorithm runs in time $O ( n + k^5 \\log^4 n$ $\\log^5 W\n )$.\n\\end{theorem}\n\n\\remove{\n\\begin{proof}\n The unweighted case follows by the above discussion. As\n for the weighted case, computing a set $C$ of $O(k\n \\log^2 n \\log W)$ centers, which are constant factor\n approximation to the $k$-means clustering of $P$, can be\n done in $O(n \\log{n} + k \\log^3 n \\log W)$ time, using\n \\thmref{k:cluster:const:rough}. Computing a constant\n factor coreset from $C$, takes $O(n\\log{ \\cardin{C}}) =\n O(n (\\log{n} + \\log\\log W))$ time, using\n \\thmref{k:means:weighted:coreset}, and results in a\n coreset ${\\mathcal{S}}$ of size $O(\\cardin{C} \\log W) = O(k\n \\log^2 n \\log^2 W)$.\n\n Finally, using the local search algorithm of\n \\cite{kmnpsw-lsaak-04} takes\n $O \\pth{ \\cardin{{\\mathcal{S}}}^2 k^{3} \\log W } = O \\pth{k^5 \\log^4 n\n \\log^5 W }$ time.\n\\end{proof}\n}\n\n\n\\subsection{The $(1+\\eps)$-Approximation}\n\\seclab{exact:k:means:eps}\n\nCombining \\thmref{k:means:const:fast} and\n\\thmref{k:means:weighted:coreset}, we get the following result\nfor coresets.\n\\begin{theorem}[coreset]\n \\thmlab{k:coreset:small:means}%\n %\n Given a set $P$ of $n$ points in $\\Re^d$, one can\n compute a $k$-means $(k,\\eps)$-coreset ${\\mathcal{S}}$ of\n $P$, of size $O\\pth{ (k\/\\eps^d)\\log{n} }$, in time $O\n \\pth{ n + k^5 \\log^9 n }$.\n \n If $P$ is weighted, with total weight $W$, then the\n coreset is of size $O\\pth{ (k\/\\eps^d)\\log{W} }$, and the\n running time is $O(n \\log^2 W + k^5 \\log^9 W)$.\n\\end{theorem}\n\n\\begin{proof}\n We first compute a set $A$ which provides a constant factor\n approximation to the optimal $k$-means clustering of $P$, using\n \\thmref{k:means:const:fast}. Next, we feed $A$ into the\n algorithm \\thmref{k:means:weighted:coreset}, and get a\n $(1+\\eps)$-coreset for $P$, of size $O((k\/\\eps^d) \\log{W})$.\n\\end{proof}\n\nWe now use techniques from \\Matousek{} \\cite{m-agc-00} to compute the\n$(1+\\eps)$-approximate $k$-means clustering on the coreset.\n\\begin{defn}[Centroid Set]\n Given a set $P$ of $n$ points in $\\Re^d$, a set $T\n \\subseteq \\Re^d$ is an \\emph{$\\eps$-approximate centroid\n set} for $P$, if there exists a subset $C \\subseteq\n T$ of size $k$, such that $\\mu_C(P) \\leq\n (1+\\eps)\\mu_{\\mathrm{opt}}(P,k)$.\n\\end{defn}\n\n\\Matousek{} showed that there exists an $\\eps$-approximate centroid\nset of size $O(n\\eps^{-d} \\log(1\/\\eps))$. Interestingly enough, his\nconstruction is weight insensitive. In particular, using an\n$(k,\\eps\/2)$-coreset ${\\mathcal{S}}$ in his construction, results in a\n$\\eps$-approximate centroid set of size $O\\pth{ \\cardin{{\\mathcal{S}}}\n \\eps^{-d} \\log(1\/\\eps)}$.\n\n\\begin{lemma}\n For a weighted point set $P$ in $\\Re^d$, with total\n weight $W$, there exists an $\\eps$-approximate centroid\n set of size $O(k\\eps^{-2d}\\log{W} \\log{(1\/\\eps)} )$.\n\\end{lemma}\n\nThe algorithm to compute the $(1+\\eps)$-approximation now follows\nnaturally. 
We first compute a coreset ${\\mathcal{S}}$ of $P$ of size\n$O\\pth{(k\/\\eps^d)\\log{W} }$ using the algorithm of\n\\thmref{k:coreset:small:means}. Next, we compute in\n$O\\pth{\\cardin{{\\mathcal{S}}} \\log \\cardin{{\\mathcal{S}}} + \\cardin{{\\mathcal{S}}}\n e^{-d} \\log{\\frac{1}{\\eps}}}$ time a $\\eps$-approximate centroid\nset $U$ for ${\\mathcal{S}}$, using the algorithm from \\cite{m-agc-00}. We\nhave $\\cardin{U} = O(k\\eps^{-2d}\\log{W} \\log{(1\/\\eps)} )$. Next we\nenumerate all $k$-tuples in $U$, and compute the $k$-means clustering\nprice of each candidate center set (using ${\\mathcal{S}}$). This takes\n$O\\pth{ \\cardin{U}^k \\cdot k \\cardin{{\\mathcal{S}}}}$ time. And clearly,\nthe best tuple provides the required approximation.\n\n\\begin{theorem}[$k$-means clustering]\n \\thmlab{k:means:approx}%\n %\n Given a point set $P$ in $\\Re^d$ with $n$ points, one can compute\n $(1+\\eps)$-approximate $k$-means clustering of $P$ in time\n \\[\n O \\pth{n + k^5 \\log^9 n + {k^{k+2}}{\\eps^{-(2d+1)k}}\n {\\log^{k+1}{n}} \\log^k(\\nfrac{1}{\\eps})}.\n \\]\n For a weighted set, with total weight $W$, the running\n time is\n \\[\n O \\pth{ n \\log^2 W + k^5 \\log^4 n \\log^5 W +k^{k+2}\n \\eps^{-(2d+1)k} \\log^{k+1} W \\log^{k} (1\/\\eps)}.\n \\]\n\\end{theorem}\n\n\n\n\n\n\n\n\n\\section{Streaming}\n\\seclab{streaming}\n\nA consequence of our ability to compute quickly a\n$\\pth{k,\\eps}$-coreset for a point set, is that we can\nmaintain the coreset under insertions quickly.\n\n\\begin{observation}\n (i) If $C_1$ and $C_2$ are the $(k,\\eps)$-coresets for\n disjoint sets $P_1$ and $P_2$ respectively, then\n $C_1\\cup C_2$ is a $(k,\\eps)$-coreset for $P_1\\cup P_2$.\n\n (ii) If $C_1$ is $(k,\\eps)$-coreset for $C_2$, and $C_2$\n is a $(k,\\delta)$-coreset for $C_3$, then $C_1$ is a\n $(k,\\eps+\\delta)$-coreset for $C_3$.\n\\end{observation}\n\nThe above observation allows us to use Bentley and Saxe's technique\n\\cite{bs-dspsd-80} as follows. Let $P = \\pth{p_1,p_2,\\ldots,p_n}$ be\nthe sequence of points seen so far. We partition $P$ into sets $P_0,\nP_1,P_2,\\ldots,P_t$ such that each either $P_i$ empty or $|P_i| =\n2^{i}M$, for $i>0$ and $M = O(k\/\\eps^d)$. We refer to $i$ as the rank\nof $i$.\n\nDefine $\\rho_j = \\eps\/\\pth{c(j+1)^2}$ where c is a large\nenough constant, and $1+\\delta_j = \\prod_{l=0}^j(1+\\rho_l)$,\nfor $j=1,\\ldots, \\ceil{\\lg n}$. We store a $\\pth{k,\\delta_j\n}$-coreset $Q_{j}$ for each $P_j$. It is easy to verify\nthat $1+\\delta_{j} \\leq 1+\\eps\/2$ for $j=1,\\ldots, \\ceil{\\lg\n n}$ and sufficiently large $c$. Thus the union of the\n$Q_i$s is a $(k,\\eps\/2)$-coreset for $P$.\n\nOn encountering a new point $p_{u}$, the update is done in\nthe following way: We add $p_u$ to $P_0$. If $P_0$ has less\nthan $M$ elements, then we are done. Note that for $P_0$ its\ncorresponding coreset $Q_0$ is just itself. Otherwise, we\nset $Q_1' = P_0$, and we empty $Q_0$. If $Q_1$ is present,\nwe compute a $(k,\\rho_2)$ coreset to $Q_1\\cup Q'_1$ and call\nit $Q'_2$, and remove the sets $Q_1$ and $Q'_1$. We continue\nthe process until we reach a stage $r$ where $Q_r$ did not\nexist. We set $Q_r'$ to be $Q_r$. Namely, we repeatedly\nmerge sets of the same rank, reduce their size using the\ncoreset computation, and promote the resulting set to the\nnext rank. The construction ensures that $Q_r$ is a\n$(k,\\delta_r)$ coreset for a corresponding subset of $P$ of\nsize $2^r M$. 
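For concreteness, the merge-and-reduce update just described can be summarized by the following schematic implementation (a minimal sketch under our own naming, not part of the original construction; the routine \texttt{compute\_coreset} is only a stand-in for the grid-based coreset construction, and the constant hidden in $\rho_j$ is chosen arbitrarily):
\begin{verbatim}
# Sketch of Bentley--Saxe style maintenance of (k,eps)-coresets under
# insertions.  compute_coreset is a placeholder for the exponential-grid
# construction; here it simply returns its input.
import numpy as np

def compute_coreset(points, weights, k, eps):
    return points, weights      # stand-in for the real (k,eps)-coreset routine

class StreamingCoreset:
    def __init__(self, k, eps, M):
        self.k, self.eps, self.M = k, eps, M
        self.buffer = []        # the rank-0 set P_0 (its own coreset Q_0)
        self.Q = {}             # rank j -> weighted coreset Q_j

    def rho(self, j):
        c = 10.0                # the constant c of the analysis (arbitrary here)
        return self.eps / (c * (j + 1) ** 2)

    def insert(self, p):
        self.buffer.append(p)
        if len(self.buffer) < self.M:
            return
        pts, wts = np.array(self.buffer), np.ones(self.M)
        self.buffer = []
        j = 1                   # promote and merge equal ranks upwards
        while j in self.Q:
            qp, qw = self.Q.pop(j)
            pts = np.vstack([qp, pts])
            wts = np.concatenate([qw, wts])
            pts, wts = compute_coreset(pts, wts, self.k, self.rho(j + 1))
            j += 1
        self.Q[j] = (pts, wts)

    def current_coreset(self):
        parts = list(self.Q.values())
        if self.buffer:
            parts.append((np.array(self.buffer), np.ones(len(self.buffer))))
        if not parts:
            return np.empty((0, 0)), np.empty(0)
        return (np.vstack([p for p, _ in parts]),
                np.concatenate([w for _, w in parts]))
\end{verbatim}
The weighted set returned by \texttt{current\_coreset} plays the role of the union of the $Q_i$'s above, and an approximate clustering is extracted by running the corresponding approximation algorithm on it.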
It is now easy to verify, that $Q_r$ is a $(k,\n\\prod_{l=0}^j(1+\\rho_l) - 1)$-coreset for the corresponding\npoints of $P$.\n\nWe further modify the construction, by computing a\n$(k,\\eps\/6)$-coreset $R_i$ for $Q_i$, whenever we compute $Q_i$. The\ntime to do this is dominated by the time to compute $Q_i$. Clearly,\n$\\cup R_i$ is a $(k,\\eps)$-coreset for $P$ at any point in time, and\n$\\cardin {\\cup R_i} = O(k\\eps^{-d} \\log^2{n})$.\n\n\\paragraph{Streaming $k$-means}\nIn this case, the $Q_i$s are coresets for $k$-means clustering.\nSince $Q_i$ has a total weight equal to $2^i M$ (if it is not\nempty) and it is generated as a $(1+\\rho_i)$ approximation, by\n\\thmref{k:coreset:small:means}, we have that $|Q_{i}| = O\\pth{k\n\\eps^{-d} \\pth {i+1}^{2d}(i+\\log{M})}$. Thus the total storage\nrequirement is $O\\pth{\\pth{k\\log^{2d+2}{n}}\/\\eps^d}$.\n\nSpecifically, a $(k,\\rho_j)$ approximation of a subset $P_j$ of\nrank $j$ is constructed after every $2^j M$ insertions, therefore\nusing \\thmref{k:coreset:small:means} the amortized time spent for\nan update is\n\\begin{align*}\n &\\hspace{-1cm}\\sum_{i=0}^{\\ceil{\\log{(n\/M)}}} \\frac{1}{2^i M}\n O\\pth{\\cardin{Q_i} \\log^2 \\cardin{P_i} + k^5 \\log^9\n \\cardin{P_i}}%\n \\\\&%\n =%\n \\sum_{i=0}^{\\ceil{\\log{(n\/M)}}} \\frac{1}{2^i M}\n O\\pth{\\pth{ \\frac{k}{\\eps^d} i^{2d}\\pth{ i +\n \\log M}^2 + k^5 \\pth{ i +\n \\log M}^9 }}\n = O \\pth{ \\log^2 (k\/\\eps) + k^5}.\n\\end{align*}\nFurther, we can generate an approximate $k$-means clustering\nfrom the $(k,\\eps)$-coresets, by using the algorithm of\n\\thmref{k:means:approx} on $\\cup_i R_i$, with $W=n$. The\nresulting running time is $O(k^5 \\log^9 n +\n{k^{k+2}}{\\eps^{-(2d+1)k}} {\\log^{k+1}{n}}\n\\log^k(\\nfrac{1}{\\eps}))$.\n\n\\paragraph{Streaming $k$-medians}\nWe use the algorithm of \\lemref{k:coreset:small:median} for\nthe coreset construction. Further we use\n\\thmref{fast:k:median} to compute an\n$(1+\\eps)$-approximation to the $k$-median from the current\ncoreset. The above discussion can be summarized as follows.\n\n\\begin{theorem}\n \\thmlab{k:means:streaming}%\n %\n Given a stream $P$ of $n$ points in $\\Re^d$ and $\\eps > 0$, one\n can maintain a $(k,\\eps)$-coresets for $k$-median and $k$-means\n efficiently and use the coresets to compute a\n $(1+\\eps)$-approximate $k$-means\/median for the stream seen so\n far. The relevant complexities are:\n \\begin{itemize}\n \\item Space to store the information:\n $O\\pth{k\\eps^{-d} \\log^{2d+2}{n}}$.\n\n \\item Size and time to extract coreset of the current set:\n $O(k\\eps^{-d} \\log^2 n)$.\n \n \\item Amortized update time: $O\\pth{ \\log^2 (k\/\\eps)\n + k^5}$.\n\n \\item Time to extract $(1+\\eps)$-approximate $k$-means clustering:\\\\\n $O\\pth{k^5 \\log^9 n + {k^{k+2}}{\\eps^{-(2d+1)k}}\n {\\log^{k+1}{n}} \\log^k(\\nfrac{1}{\\eps})}$.\n \n \\item Time to extract $(1+\\eps)$-approximate $k$-median\n clustering:\\\\ $O\\pth{\\varrho}%{C_{\\mathrm{kr}} k \\log^7 n}$, where $\\varrho}%{C_{\\mathrm{kr}} =\n \\CkrExp$.\n \\end{itemize}\n\\end{theorem}\nInterestingly, once an optimization problem has a coreset, the\ncoreset can be maintained under both insertions and deletions,\nusing linear space. 
The following result follows in a plug and\nplay fashion from \\cite[Theorem 5.1]{ahv-aemp-04}, and we omit the\ndetails.\n\\begin{theorem}\n Given a point set $P$ in $\\Re^d$, one can maintain a\n $(k,\\eps)$-coreset of $P$ for $k$-median\/means, using\n linear space, and in time $O(k \\eps^{-d} \\log^{d+2} n\n \\log \\frac{k \\log{n}}{\\eps} + k^5 \\log^{10} n )$ per\n insertion\/deletions.\n\\end{theorem}\n\n\n\\section{Conclusions}\n\\seclab{conclusions}\n\nIn this paper, we showed the existence of small coresets for\nthe $k$-means and $k$-median clustering. At this point,\nthere are numerous problems for further research. In\nparticular:\n\\begin{enumerate}\n \\item Can the running time of approximate $k$-means clustering be\n improved to be similar to the $k$-median bounds? Can one do FPTAS\n for $k$-median and $k$-means (in both $k$ and $1\/\\eps$)?\n Currently, we can only compute the $(k,\\eps)$-coreset in fully\n polynomial time, but not extracting the approximation itself from\n it.\n\n \\item Can the $\\log{n}$ in the bound on the size of the coreset be\n removed?\n\n \\item Does a coreset exist for the problem of $k$-median and\n $k$-means in high dimensions? There are some partial relevant\n results \\cite{bhi-accs-02}.\n\n \\item Can one do efficiently $(1+\\eps)$-approximate streaming for\n the discrete $k$-median case?\n \n \\item Recently, Piotr Indyk \\cite{i-eabca-04} showed how to\n maintain a $(1+\\eps)$-approximation to $k$-median under insertion\n and deletions (the number of centers he is using is roughly $O(k\n \\log^2{\\Delta})$ where $\\Delta$ is the spread of the point set). \n It would be interesting to see if one can extend our techniques to\n maintain coresets also under deletions. It is clear that there is\n a linear lower bound on the amount of space needed, if one assume\n nothing. 
As such, it would be interesting to figure out what are\n the minimal assumptions for which one can maintain\n $(k,\\eps)$-coreset under insertions and deletions.\n\\end{enumerate}\n\n\\section*{Acknowledgments}\n\nThe authors would like to thank Piotr Indyk, Satish Rao and\nKasturi Varadarajan for useful discussions of problems\nstudied in this paper and related problems.\n\n\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n \\providecommand{\\CNFX}[1]{ {\\em{\\textrm{(#1)}}}}\n \\providecommand{\\tildegen}{{\\protect\\raisebox{-0.1cm}{\\symbol{'176}\\hspace{-0.03cm}}}}\n \\providecommand{\\SarielWWWPapersAddr}{http:\/\/sarielhp.org\/p\/}\n \\providecommand{\\SarielWWWPapers}{http:\/\/sarielhp.org\/p\/}\n \\providecommand{\\urlSarielPaper}[1]{\\href{\\SarielWWWPapersAddr\/#1}{\\SarielWWWPapers{}\/#1}}\n \\providecommand{\\Badoiu}{B\\u{a}doiu}\n \\providecommand{\\Barany}{B{\\'a}r{\\'a}ny}\n \\providecommand{\\Bronimman}{Br{\\\"o}nnimann} \\providecommand{\\Erdos}{Erd{\\H\n o}s} \\providecommand{\\Gartner}{G{\\\"a}rtner}\n \\providecommand{\\Matousek}{Matou{\\v s}ek}\n \\providecommand{\\Merigot}{M{\\'{}e}rigot}\n \\providecommand{\\Hastad}{H\\r{a}stad\\xspace}\n \\providecommand{\\CNFCCCG}{\\CNFX{CCCG}}\n \\providecommand{\\CNFBROADNETS}{\\CNFX{BROADNETS}}\n \\providecommand{\\CNFESA}{\\CNFX{ESA}}\n \\providecommand{\\CNFFSTTCS}{\\CNFX{FSTTCS}}\n \\providecommand{\\CNFIJCAI}{\\CNFX{IJCAI}}\n \\providecommand{\\CNFINFOCOM}{\\CNFX{INFOCOM}}\n \\providecommand{\\CNFIPCO}{\\CNFX{IPCO}}\n \\providecommand{\\CNFISAAC}{\\CNFX{ISAAC}}\n \\providecommand{\\CNFLICS}{\\CNFX{LICS}}\n \\providecommand{\\CNFPODS}{\\CNFX{PODS}}\n \\providecommand{\\CNFSWAT}{\\CNFX{SWAT}}\n \\providecommand{\\CNFWADS}{\\CNFX{WADS}}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nPattern recognition is becoming more important in various fields due to the development of machine learning. Traditional pattern recognition focuses on single-task learning (STL), with multi-task learning (MTL) generally being disregarded. The MTL aims to use helpful information in several related tasks to improve the generalization performance of all tasks. Multi-task learning aims to enhance predictions by exchanging group knowledge amongst related training data sets known as ``tasks''.\nTherefore, multi-task learning is a significant area of research in machine learning. \nThe study of multi-task learning has been of interest in diverse fields, including multi-level analysis~\\cite{bakker2003task}, medical diagnosis~\\cite{bi2008improved}, semi-supervised learning~\\cite{ando2005framework}, web search ranking~\\cite{chapelle2010multi}, speech recognition~\\cite{birlutiu2010multi}, cell biology~\\cite{ren2016multicell}, person identification~\\cite{su2017multi}, drug interaction extraction~\\cite{zhou2018position}, object tracking~\\cite{cheng2015multi}, etc.\n\nSeveral MTL approaches have been proposed throughout the years, which can be classified into several groups. SVM-based methods are among such approaches. Because of the effectiveness of support vector machine (SVM) \\cite{burges1998tutorial} in multi-task learning, several researchers have concentrated on multi-task SVM \\cite{ji2013multitask,shiao2012implementation,xue2016multi,yang2010multi}. Evgeniou et al.~\\cite{evgeniou2004regularized} developed a multi-task learning strategy based on the minimization of regularization functions similar to those used in SVM.\n\nAs we know, Jeyadeva et al.~\\cite{khemchandani2007twin} proposed twin support vector machine (TSVM) for binary classification in 2007, based on the main notion of GEPSVM \\cite{mangasarian2005multisurface}. TSVM divides the positive and negative samples by producing two non-parallel hyperplanes via solving two smaller quadratic programming problems (QPP) rather than one large QPP considered in SVM. In contrast to the substantial research conducted on multi-task support vector machines, there have been few efforts to incorporate multi-task learning into twin support vector machines (TSVM). For instance, inspired by multi-task learning and TSVM, Xie and Sun proposed a multi-task twin support vector machine \\cite{xie2012multitask}. They used twin support vector machines for multi-task learning and referred to the resulting model as directed multi-task twin support vector machine (DMTSVM). Following that, multi-task centroid twin support vector machines (MCTSVM) \\cite{xie2015multitask} were suggested to cope with outlier samples in each task. In addition, motivated by least-squares twin support vector machine (LS-TWSVM) \\cite{kumar2009least}, Mei and Xu~\\cite{mei2019multi} proposed a multi-task least-squares twin support vector machine (MTLS-TWSVM). Instead of the two QPP problems addressed by DMTSVM, MTLS-TWSVM solves only two smaller linear equations, resulting in quick computation. \n\nUniversum data is defined as a set of unlabeled samples that do not belong to any class \\cite{chapelle2007analysis,vapnik200624,weston2006inference}. These data demonstrate the ability to encode past knowledge by providing meaningful information in the same domain as the problem. The Universum data have effectively improved learning performance in classification and clustering. 
By incorporating the Universum data into SVM, Vapnik \\cite{vapnik2006estimation} proposed a novel model and referred to it as support vector machine with Universum ($\\mathfrak{U}$-SVM). Weston et al.~\\cite{weston2006inference} investigated this new framework (Universum data) and proved that the use of these data outperformed approaches that just used labeled samples. \n\nInspired by $\\mathfrak{U}$-SVM, Sinz et al.~\\cite{chapelle2007analysis} introduced least-squares support vector machine with Universum data ($\\mathfrak{U}_{LS}$-SVM). Also, Zhang et al.~\\cite{zhang2008semi} proposed semi-supervised algorithms based on a graph for the learning of labeled samples, unlabeled samples, and Universum data. Various studies confirm the helpfulness of Universum data for supervised and semi-supervised learning. The training procedure incorporates Universum data, which increases the total number of samples and adds substantial computing complexity. As a result, the classical $\\mathfrak{U}$-SVM has disadvantages such as high computational complexity due to facing a larger QPP.\nFortunately, in 2012, Qi et al.~\\cite{qi2012twin} proposed a twin support vector machine using Universum data ($\\mathfrak {U}$-TSVM) that addressed this computational shortcoming. Instead of solving one large QPP, which is done in the standard $\\mathfrak {U}$-SVM algorithm, this approach solves two smaller QPPs. The authors demonstrated that the approach not only reduced the time of computation, but also outperformed the traditional $\\mathfrak{U}$-SVM in terms of classification accuracy. Following that, Xu et al.~\\cite{xu2016least} developed least-squares twin support vector machine using Universum data ($\\mathfrak{U}_{LS}$-TSVM) for classification. They described the way two nonparallel hyperplanes could be found by solving a pair of systems of linear equations. As a result, $\\mathfrak{U}_{LS}$-TSVM works faster than $\\mathfrak{U}$-TSVM. Inspired by $ \\nu$-TSVM and $\\mathfrak{U}$-TSVM, Xu et al.~\\cite{xu2016nu} presented a $ \\nu$-TSVM with Universum data ($\\mathfrak{U}_{\\nu}$-TSVM). It allows the incorporation of the prior knowledge embedded in the unlabeled samples into supervised learning to improve generalization performance. Xiao et al.~\\cite{xiao2021new} established Universum learning in 2021 to make non-target task data behave as previous knowledge, and suggested a novel multi-task support vector machine using Universum data (U-MTLSVM). In general, Universum models have been of interest to many researchers because of their simple structures and good generalization performance \\cite{moosaei2021DC,moosaei2021sparse,richhariya2018improved,richhariya2018eeg}.\n\nDespite the work done in MTL, there is still a need to create more efficient approaches in terms of accuracy and other field measures. Inspired by DMTSVM and Universum data, we present a significant multi-task twin support vector machine using Universum data ($ \\mathfrak{U} $MTSVM).\nThis paper presents two approaches to the solution of the proposed model.\nWe obtain the dual formulation of $\\mathfrak{U}$MTSVM, and try to solve the quadratic programming problems at the first approach. In addition, we propose a least-squares version of the multi-task twin support vector machine with Universum data (referred to as LS-$\\mathfrak{U}$MTSVM) to further increase the generalization performance and reduce the time of computation. 
The LS-$\\mathfrak{U}$MTSVM only deals with two smaller linear equations instead of the two dual quadratic programming problems which are used in $ \\mathfrak{U} $MTSVM. \n\nThe contributions of our research can be summarized as follows. \n\\begin{itemize}\n\t\\item Using Universum data, we present a new multi-task twin support vector machine model. This model naturally extends DMTSVM by adding Universum data.\n\t\\item The proposed model has the same advantages as DMTSVM, and furthermore, improves its performance.\n\t\\item We propose two approaches to finding the solution of the proposed model, namely, solving the dual problem instead of the\n\tprimal problem, and introducing the least-squares version of $\\mathfrak{U}$MTSVM, which is called LS-$\\mathfrak{U}$MTSVM, for solving the primal problem.\n\\end{itemize}\n\nThe remainder of this paper is organized as follows. Following a quick review of\nDMTSVM and MTLS-TWSVM in Section~2, we describe the details of our proposed $\\mathfrak{U}$MTSVM and introduce its dual problem in Section~3. In Section~4, we present a new algorithm, namely, LS-$\\mathfrak{U}$MTSVM. Next, we provide some numerical experiments in Section~5. Finally, we summarize our findings in Section~6 within a brief conclusion.\n\n\\paragraph{Notation.}\nWe use $ \\mathbb{R}^n $ for the $ n$-dimensional real vector space and $I$ for the identity matrix. The transpose and Euclidean norm of a matrix $ A $ are denoted by the symbols $A^T $ and $ \\|\\cdot\\| $, respectively.\n \n \n\n The gradient of the function $f$ with respect to the variable $ x $ is denoted by $ \\nabla_{x} f(x)$ or simply $ \\nabla f(x)$.\n Next, we use $\\langle x,y\\rangle =x^{T}y $ to denote the inner product of two $ n$-dimensional vectors $ x $ and $ y $. The symbol $blkdiag({{P}_{1}},\\ldots,{{P}_{T}})$ denotes the block-diagonal matrix created by ${{P}_{1}},\\ldots,{{P}_{T}}$ matrices.\n\n\n\\section{Related work}\nIn this section, we first give an overview of the multi-task problem. Then, we introduce the direct multi-task twin support vector machine (DMTSVM) and the multi-task least-squares twin support vector machine (MTLS-TWSVM). It is preferable to define the fundamental foundations of these procedures since they serve as a solid basis for our suggested method. For the multi-task problem, we have $T$ training task, and we assume the set $S_{t}$, for $t=1,\\ldots, T$, stores the labeled samples for the $t$-th task. It is given by \n$$S_{t}=\\{(x_{1t}, y_{1t}),\\ldots,(x_{{n_{t}}t}, y_{{n_{t}}t}) \\}\n,$$\nwhere, $n_{t}$ is the number of samples in task $t$, $x_{it}\\in \\mathbb{R}^{n}$, $y_{it}\\in \\{\\pm 1\\},\\,\\, i=1,\\ldots,n_{t}$.\n\n\n\\subsection{Multi-task twin support vector machine}\\label{2.1} \n\nXie et al.~\\cite{xie2012multitask} introduced a new classification method that directly incorporates the regularized multi-task learning (RMTL) \\cite{evgeniou2004regularized} concept into TSVM and called it direct multi-task twin support vector machine (DMTSVM). 
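To fix ideas, the per-task data layout used throughout this section can be sketched as follows (an illustrative fragment of ours, with variable names that are not taken from \cite{xie2012multitask}); it builds the augmented matrices $A_t$ and $B_t$ that appear in the formulation below.
\begin{verbatim}
# Illustrative multi-task data layout: tasks[t] = (X_t, y_t), where X_t has
# shape (n_t, n) and y_t contains labels in {+1, -1} for task t.
import numpy as np

def split_and_augment(tasks):
    """Split each task by label and append a column of ones, giving the
    augmented matrices A_t = [X_pt, e_1t] and B_t = [X_nt, e_2t]."""
    A_list, B_list = [], []
    for X_t, y_t in tasks:
        X_p, X_n = X_t[y_t == 1], X_t[y_t == -1]
        A_list.append(np.hstack([X_p, np.ones((X_p.shape[0], 1))]))
        B_list.append(np.hstack([X_n, np.ones((X_n.shape[0], 1))]))
    # stacking over all tasks gives A = [X_p, e_1] and B = [X_n, e_2]
    return A_list, B_list, np.vstack(A_list), np.vstack(B_list)
\end{verbatim}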
\n\nSuppose positive samples and positive samples in \n$t$-th task are presented by $X_{p}$ and $X_{pt}$, respectively, while $X_{n}$ represents the negative samples and negative samples in $t$-th task are presented by $X_{nt}$, \n that is, $X_p^T=[X_{p1}\\,X_{p2}\\, \\dots X_{pT}]$.\nNow, for every task $t\\in \\{1, \\ldots, T\\}$ we define:\n\\[ A=[X_{p}\\,\\,\\, e_1],~ A_{t}=[X_{pt}\\,\\,\\, e_{1t}],~ B=[X_{n}\\,\\,\\, e_2],~ B_{t}=[X_{nt}\\,\\,\\, e_{2t}], \\]\nwhere $e_1,e_2,e_{1t}$ and $e_{2t}$ are one vectors of appropriate dimensions. Assume that all tasks have two mean hyperplanes in common, i.e., $u_{0}=[w_{1}, b_{1}]^{T}$ and $v_{0}=[w_{2}, b_{2}]^{T}$. The two hyperplanes in the $t$-th task for positive and negative classes are \n$(u_{0}+u_{t})=[w_{1t}, b_{1t}]^{T}$ and $(v_{0}+v_{t})=[w_{2t}, b_{2t}]^{T}$, respectively. The bias between task $t$ and the common mean vectors $u_{0}$ and $v_{0}$ is represented by $u_{t}$ and $v_{t}$, respectively.\n\nThe DMTSVM optimization problems are expressed below:\n\\begin{align}\\label{1} \n\\underset{u_{0}, u_{t}, \\xi_{t}}\\min\\ \n &\\dfrac{1}{2}\\| Au_{0}\\|^{2}\n +\\dfrac{\\mu_{1}}{2T}\\sum_{t=1}^{T} \\| A_{t}u_{t}\\|^{2}+c_{1}\\sum_{t=1}^{T}e_{2t}^{T}\\xi_{t},\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,& -B_{t}(u_{0}+u_{t})+\\xi_{t}\\geq e_{2t},\\nonumber\\\\\n&\\hspace*{2.7cm} \\xi_{t}\\geq 0,\n\\end{align}\nand\n\\begin{align}\\label{2} \n\\underset{v_{0}, v_{t}, \\eta_{t}}\\min\\ &\\dfrac{1}{2}\\| Bv_{0}\\|^{2}+\\dfrac{\\mu_{2}}{2T}\\sum_{t=1}^{T} \\| B_{t}v_{t}\\|^{2}+c_{2}\\sum_{t=1}^{T}e_{1t}^{T}\\eta_{t},\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,& A_{t}(v_{0}+v_{t})+\\eta_{t}\\geq e_{1t},\\nonumber\\\\\n&\\hspace*{2.2cm} \\eta_{t}\\geq 0.\n\\end{align}\n In problems \\eqref{1} and \\eqref{2}, $t\\in\\{1,\\ldots,T\\}$, $c_{1}$, $c_{2}$, and $e_{1t}$ and $e_{2t}$ are one vectors of appropriate dimensions. Next, $\\mu_{1}$ and $\\mu_{2}$ are positive parameters used for correlation of all tasks.\n If $\\mu_{1}$ and $\\mu_{2}$ give small penalty on vectors $ u_{t} $ and $ v_{t} $, then $ u_{t} $ and $ v_{t} $ tend to be larger. As a consequence, the models give less similarity.\n When $\\mu_{1}\\rightarrow \\infty$ and $\\mu_{2}\\rightarrow \\infty$, $ u_{t} $ and $ v_{t} $ tend to be smaller and make the $T$ models similar~\\cite{evgeniou2004regularized}.\n\nBy defining\n\\begin{align*}\n& Q =B{{({{A}^{T}}A)}^{-1}}{{B}^{T}}, \\quad \n {{P}_{t}}={{B}_{t}}{{(A_{t}^{T}A_{t})}^{-1}}B_{t}^{T}, \\\\\n& \\alpha=[\\alpha_{1}^{{T}},\\ldots,\\alpha_{T}^{{T}}]^{T},\\quad\n P=blkdiag({{P}_{1}},\\ldots,{{P}_{T}}), \n\\end{align*}\nthe dual problem of the problem \\eqref{1} may be expressed as follows:\n\\begin{align}\\label{13} \n\\underset{\\alpha}\\max&\\ -\\dfrac{1}{2}\\alpha^{T}(Q+\\textstyle\\frac{T}{\\mu_{1}}P)\\alpha\n +e_{2}^T\\alpha\\nonumber\\\\\n\\text{s.t.}\\,& \\ \\ 0\\leq\\alpha\\leq c_{1}e_{2}.\n\\end{align}\nBy resolving the aforementioned dual problem, we may discover:\n\\begin{align} \\nonumber\n& u_{0}= -(A^{T}A)^{-1}B^{T}\\alpha, \\\\ \n& u_{t}= -\\dfrac{T}{\\mu_{1}}(A^{T}_{t}A_{t})^{-1}B^{T}_{t}\\alpha_{t}. 
\\nonumber\n\\end{align}\n\nSimilarly, we may derive the dual problem of the problem \\eqref{2} as follows:\n\\begin{align}\\label{14} \n\\underset{\\alpha^{\\ast}}\\max&\\ -\\dfrac{1}{2}\\alpha^{\\ast^{T}}(R+\\textstyle\\frac{T}{\\mu_{2}}S)\\alpha^{\\ast}\n+e_{1}^T\\alpha^{\\ast}\\nonumber\\\\\n\\text{s.t.}\\,& \\ \\ 0\\leq\\alpha^{\\ast}\\leq c_{2}e_{1},\n\\end{align}\nwhere $\\alpha^{\\ast}=[\\alpha_{1}^{\\ast^{T}},\\ldots,\\alpha_{T}^{\\ast^{T}}]^{T}$,\n$R=A{{({{B}^{T}}B)}^{-1}}{{A}^{T}}$, and ${{S}_{t}}={{A}_{t}}{{(B_{t}^{T}{{B}_{t}})}^{-1}}A_{t}^{T}$ and $S=blkdiag({{S}_{1}},\\ldots ,{{S}_{T}})$.\nBy solving problem \\eqref{13} and \\eqref{14}, we can set the hyperplanes of every task $(u_{0}+u_{t})$ and \n$(v_{0}+v_{t})$. Meanwhile, a new data point $x$ in the $t$-th task is determined to class $i\\in \\{+1, -1\\}$ by using the following decision function:\n\\begin{align}\\label{15} \nf(x)=\\arg \\underset{k=1,2}\\min\\, |x^{T}w_{kt}+b_{kt}|.\n\\end{align}\n\n\n\\subsection{Multi-task least squares twin support vector machine}\\label{2.2} \nInspired by DMTSVM and the least squares twin support vector machine (LSTWSVM), Mei et al.~\\cite{mei2019multi} proposed a novel multi-task least squares twin support vector machine and called MTLS-TWSVM. \n\nThe notation of $X_{p}, X_{pt},X_{n},X_{nt}, A, A_{t}, B$ and $B_{t}$ is the same as that used in subsection \\eqref{2.1}. The MTLS-TWSVM problems are formulated as follows:\n\\begin{align}\\label{16} \n\\mathop {\\min }_{u_{0}, u_{t}, \\xi_{t}\\,} \\, &\\dfrac{1}{2}\\| Au_{0}\\|^{2} +\\dfrac{\\mu_{1}}{2T}\\sum_{t=1}^{T} \\| A_{t}u_{t}\\|^{2}+\\dfrac{c_{1}}{2}\\sum_{t=1}^{T}\\| \\xi_{t}\\|^{2},\\nonumber\\\\\n\\text{ s.t.} ~~& -B_{t}(u_{0}+u_{t})+\\xi_{t}=e_{2t},\n\\end{align}\nand\n\\begin{align}\\label{17} \n\\mathop {\\min }_{v_{0}, v_{t}, \\eta_{t}\\,} \\,& \\dfrac{1}{2}\\| Bv_{0}\\|^{2} +\\dfrac{\\mu_{2}}{2T}\\sum_{t=1}^{T} \\| B_{t}v_{t}\\|^{2}+\\dfrac{c_{2}}{2}\\sum_{t=1}^{T}\\|\\eta_{t}\\|^{2},\\nonumber\\\\\n\\text{ s.t.} ~~& A_{t}(v_{0}+v_{t})+\\eta_{t}= e_{1t},\n\\end{align}\nwhere $\\mu_{1}, c_{1},\\mu_{2}$ and $c_{2}$ are positive parameters. \nThe Lagrangian function associated with the problem \\eqref{16} is defined by\n\\begin{align}\\label{18} \nL_{1}= \\dfrac{1}{2}\\| Au_{0}\\|^{2}&+\\dfrac{\\mu_{1}}{2T}\\sum_{t=1}^{T} \\parallel A_{t}u_{t}\\parallel^{2}+\\dfrac{c_{1}}{2}\\sum_{t=1}^{T}\\| \\xi_{t}\\parallel^{2}\\nonumber\\\\\n& -\\sum_{t=1}^{T}\\alpha_{t}^{T}\\big(- B_{t}(u_{0}+u_{t})+\\xi_{t}-e_{2t}\\big),\n\\end{align}\nwhere $\\alpha=[\\alpha_{1}^{T},\\ldots,\\alpha_{T}^{T}]^{T}$ are the Lagrangian multipliers. \nAfter writing the partial derivatives of Lagrangian function \\eqref{18} with respect to $ u_{0}$, $ u_{t} $, $ \\xi_{t} $, and $ \\alpha_{t} $, we derive \n\\begin{equation}\\nonumber\n\\alpha = \\left(Q+\\frac{T}{\\mu_{1}}P+\\frac{1}{c_{1}}I\\right)^{-1}e_{2},\n\\end{equation}\nwhere $Q=B(A^{T}A)^{-1}B^{T}$, $P_{t}=B_{t}(A_{t}^{T}A_{t})^{-1}B^{T}_{t}$, and\n$P=blkdiag(P_{1},\\ldots,P_{T})$. 
\nThen we can compute the solution of problem~\\eqref{16}:\n\\begin{align} \\nonumber\nu_{0}=-(A^{T}A)^{-1}B^{T}\\alpha,~~~\n u_{t}=-\\dfrac{T}{\\mu_{1}}(A^{T}_{t}A_{t})^{-1}B^{T}_{t}\\alpha_{t}.\n\\end{align}\n\nSimilarly, the following relations may be used to find the solution of \\eqref{17}:\n\\begin{align}\\nonumber \n\\alpha^{\\ast}=\\left(R+\\frac{T}{\\mu_{2}}S+\\frac{1}{c_{2}}I\\right)^{-1}e_{1},\n\\end{align}\nwhere $R=A(B^{T}B)^{-1}A^{T}$, $S_{t}=A_{t}(B_{t}^{T}B_{t})^{-1}A^{T}_{t}$, \n$S=blkdiag(S_{1},\\ldots,S_{T})$ and $\\alpha^{\\ast}=[\\alpha_{1}^{\\ast^{T}},\\ldots,\\alpha_{T}^{\\ast^{T}}]^{T}$.\nAs a result, the classifier parameters $u_{0}, u_{t},v_{0}$ and $v_{t}$ of the $t$-th task are determined.\n\n\n\\section{Multi-task twin support vector machine with Universum data}\nMotivated by $\\mathfrak{U}$-TSVM and DMTSVM, we introduce a new multi-task model, which we call the multi-task twin support vector machine with Universum data ($\\mathfrak{U}$MTSVM).\n\nFor a multi-task problem, we have $T$ training sets, and suppose that the training set $\\widetilde{T}_{t}$ for $t=1,\\ldots,T$ consists of two subsets, where task $t$ contains $n_{t}$ labeled samples, as follows\n\t$$\\widetilde{T}_{t}=S_{t}\\cup X_{\\mathfrak{U}t},$$\n\twhere\n\t\\begin{align*}\n\t& S_{t}=\\{(x_{1t}, y_{1t}),\\ldots,(x_{{n_{t}}t}, y_{{n_{t}}t}) \\},\n \\\\\n\t& X_{\\mathfrak{U}t}=\\{x_{1t}^{\\ast},\\ldots,x_{u_{t}t}^{\\ast} \\},\n\t\\end{align*}\nwith $x_{it}\\in \\mathbb{R}^{n}$, $y_{it}\\in \\{\\pm 1\\}$ and $i=1,\\ldots,n_{t}$. \nHence, the set $S_{t}$ denotes the labeled\nsamples of the $t$-th task, and $X_{\\mathfrak{U}t}$ contains the Universum data of the $t$-th task. For every task, we aim to build the classifier based on the positive and negative labeled samples as well as the Universum data of this task.\n\n All tasks have two mean hyperplanes $u_{0}=[w_{1},b_{1}]^{T}$ and $v_{0}=[w_{2},b_{2}]^{T}$. \nThe two hyperplanes in the $t$-th task for the positive and negative classes are $(u_{0}+u_{t})=[w_{1t},b_{1t}]^{T}$ and $(v_{0}+v_{t})=[w_{2t},b_{2t}]^{T}$, respectively. \nWe employ the same notation of $X_{p},X_{pt}, X_{n},X_{nt}, A, A_{t}, B$ and $B_{t}$ as we used in subsections~\\ref{2.1} and~\\ref{2.2}. \nIn addition, $X_{\\mathfrak{U}}$ denotes the Universum samples, and the Universum samples of the $t$-th task are collected in the matrix~$X_{\\mathfrak{U}t}$. Then, for every \ntask $t\\in \\{1,\\ldots,T\\}$, we can define:\n\\[\\mathfrak{U}=[X_{\\mathfrak{U}}~e_{u}] ~\\mbox{ and } ~\\mathfrak{U}_t=[X_{\\mathfrak{U}t}~e_{ut}],\\]\nwhere $e_{u}$ and $e_{ut}$ are vectors of ones of appropriate dimensions.\n\n\\subsection{Linear case}\n In this part, we introduce the linear case of our new model ($\\mathfrak{U}$MTSVM). Before defining the $\\delta$-insensitive loss function for Universum data, we recall the hinge loss function $H_{\\delta}[\\theta]=\\max \\lbrace 0,\\delta-\\theta \\rbrace $.\n\t We then define the $\\delta$-insensitive loss $ U^{t}[\\theta]=H^{t}_{-\\delta}[\\theta]+H^{t}_{-\\delta}[-\\theta],~t=1,\\ldots,T, $ for the Universum data of each task.\n\t This loss measures the real-valued outputs of the classifiers \n$ f_{w_{1t},b_{1t}}(x)=w_{1t}^{T}x+b_{1t} $ and $ f_{w_{2t},b_{2t}}(x)=w_{2t}^{T}x+b_{2t} $ on $X_{\\mathfrak{U}}$ and penalizes outputs that are far from zero \\cite{qi2012twin}. 
We then wish to minimize the total losses $ \\sum_{t=1}^{T} \\sum_{j=1}^{u_{t}} U^{t}[f_{w_{1t},b_{1t}}(x^{*}_{jt})] $ and $ \\sum_{t=1}^{T} \\sum_{j=1}^{u_{t}} U^{t}[f_{w_{2t},b_{2t}}(x^{*}_{jt})]$; the smaller these values are, the more plausible the resulting classifiers become \\cite{zhang2008semi}. Therefore, by adding the following terms to the objective functions of DMTSVM, we obtain our new model ($\\mathfrak{U}$MTSVM):\n\\[c_{u} \\sum_{t=1}^{T} \\sum_{j=1}^{u_{t}} U^{t}[f_{w_{1t},b_{1t}}(x^{*}_{jt})], ~~\\mbox{and} ~~c^{*}_{u}\\sum_{t=1}^{T} \\sum_{j=1}^{u_{t}} U^{t}[f_{w_{2t},b_{2t}}(x^{*}_{jt})],\\]\nwhere $ c_{u} $ and $ c^{*}_{u} $ control the loss of the Universum data. Hence, the optimization problems of the proposed $\\mathfrak{U}$MTSVM may be stated as follows:\n\n\\begin{align}\n\\mathop {\\min }_{u_{0},u_{t},\\xi_{t},\\psi _{t}\\,} \\,& \\frac{1}{2}\\|A u_{0}\\|^{2}+\\dfrac{\\mu_{1}}{2T} \\sum_{t=1}^{T}\\|A_{t}u_{t}\\|^{2} +c_{1}\\sum_{t=1}^{T} e_{2t}^{T} \\xi _{t}+c_{u}\\sum_{t=1}^{T} e_{ut}^{T} \\psi _{t}\\nonumber \\\\\n\\text{ s.t.} ~~& - B_{t} (u_{0}+u_{t})+{{\\xi }_{t}}\\geq e_{2t},\\nonumber \\\\ \n& \\mathfrak{U}_{t} (u_{0}+u_{t})+\\psi_{t}\\geq (-1+\\varepsilon)e_{ut}, \\nonumber\\\\ \n& \\xi _{t} \\geq 0, \\ \\ \n \\psi_{t}\\geq 0,\n\\label{26}\n\\end{align}\nand\n\\begin{align}\n\\mathop {\\min }_{v_{0},v_{t},\\eta_{t},\\psi^{*} _{t}\\,} \\,& \\frac{1}{2}\\|B v_{0}\\|^{2}+\\dfrac{\\mu_{2}}{2T} \\sum_{t=1}^{T}\\|B_{t}v_{t}\\|^{2} +c_{2}\\sum_{t=1}^{T} e_{1t}^{T} \\eta_{t}+c^{*}_{u}\\sum_{t=1}^{T} e_{ut}^{T} \\psi _{t}^{*}\\nonumber \\\\\n\\text{ s.t.}~~ & A_{t} (v_{0}+v_{t})+{{\\eta }_{t}}\\geq e_{1t},\\nonumber \\\\ \n& -\\mathfrak{U}_{t} (v_{0}+v_{t})+\\psi^{*}_{t}\\geq (-1+\\varepsilon)e_{ut}, \\nonumber\\\\ \n& \\eta _{t} \\geq 0, \\ \\ \n \\psi^{*}_{t}\\geq 0,\n\\label{27}\n\\end{align}\nwhere $c_{1}, c_{2}, c_{u}$ and $c_{u}^{*}$ are penalty parameters, and $\\xi_{t}$, $\\eta_{t}$, $ \\psi_{t}=(\\psi_{1},\\ldots,\\psi_{u_{t}})$ and $\\psi_{t}^{*}=(\\psi_{1}^{*},\\ldots,\\psi_{u_{t}}^{*})$\nare the corresponding slack vectors. Moreover, $T$ denotes the number of tasks, and $\\mu_{1}$ and $\\mu_{2}$ are positive parameters that control the relatedness of the tasks.
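\n\nTo make the role of the Universum constraints concrete, note that for a fixed pair $(u_{0},u_{t})$ the optimal slack of the Universum constraint in \\eqref{26} is simply $\\psi_{t}=\\max\\{0,(-1+\\varepsilon)e_{ut}-\\mathfrak{U}_{t}(u_{0}+u_{t})\\}$ taken componentwise, and the objective is charged $c_{u}e_{ut}^{T}\\psi_{t}$ for it. A short illustrative sketch (not part of the solver itself) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef universum_penalty(U_aug_t, u0, u_t, c_u, eps):\n    # Universum constraint of problem (26):\n    #   U_t (u0 + u_t) + psi_t >= (-1 + eps) e,   psi_t >= 0,\n    # so the optimal slack is psi_t = max(0, (-1 + eps) - U_t (u0 + u_t)).\n    f = U_aug_t @ (u0 + u_t)          # outputs on the Universum points of task t\n    psi_t = np.maximum(0.0, (-1.0 + eps) - f)\n    return c_u * psi_t.sum(), psi_t   # penalty c_u * e^T psi_t and the slack vector\n\\end{verbatim}\nThe Universum constraint of problem \\eqref{27} is treated in the same way with $-\\mathfrak{U}_{t}(v_{0}+v_{t})$ and the parameter $c_{u}^{*}$.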
\nThe Lagrangian function associated with problem (\\ref{26}) is given by\n\\begin{align}\\label{28}\nL_{1}= & \\frac{1}{2}\\|A u_{0}\\|^{2}+\\dfrac{\\mu_{1}}{2T} \\sum_{t=1}^{T}\\|A_{t}u_{t}\\|^{2} +c_{1}\\sum_{t=1}^{T} e_{2t}^{T} \\xi _{t}\n +c_{u}\\sum_{t=1}^{T} e_{ut}^{T} \\psi _{t}-\\sum_{t=1}^{T} \\alpha_{1t}^{T}(-B_{t}(u_{0}+u_{t})+\\xi_{t}\\nonumber \\\\\n& -e_{2t})- \\sum_{t=1}^{T} \\beta_{1t}^{T}\\xi_{t}-\\sum_{t=1}^{T} \\alpha_{2t}^{T}(\\mathfrak{U}_{t}(u_{0}+u_{t})+\\psi_{t}\n -(-1+\\varepsilon)e_{ut})- \\sum_{t=1}^{T} \\beta_{2t}^{T}\\psi_{t},\n\\end{align}\nwhere $\\alpha_{1t},\\alpha_{2t}, \\beta_{1t}$ and $\\beta_{2t}$ are the Lagrange multipliers.\nThus, denoting $\\alpha_1=[\\alpha_{11}^{T},\\ldots, \\alpha_{1T}^{T}]^{T}$ and $\\alpha_2=[\\alpha_{21}^{T},\\ldots, \\alpha_{2T}^{T}]^{T}$, the KKT necessary and sufficient optimality conditions for \\eqref{26} are given by\n\n\\begin{align}\n\\dfrac{\\partial L_{1}}{\\partial u_{0}}& =A^{T}Au_{0}+B^{T}\\alpha_{1}-\\mathfrak{U}^{T}\\alpha_{2}=0,\\label{29} \\\\\n\\dfrac{\\partial L_{1}}{\\partial u_{t}}& =\\dfrac{\\mu_{1}}{T}A_{t}^{T}A_{t}u_{t}+B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t}=0,\\label{30} \\\\\n\\dfrac{\\partial L_{1}}{\\partial \\xi_{t}}& =c_{1}e_{2}-\\alpha_{1}-\\beta_{1}=0,\\label{31} \\\\\n\\dfrac{\\partial L_{1}}{\\partial \\psi_{t}}& =c_{u}e_{u}-\\alpha_{2}-\\beta_{2}=0.\\label{32} \n\\end{align}\nSince $\\beta_{1}\\geq 0$ and $\\beta_{2}\\geq 0$, from (\\ref{31}) and (\\ref{32}), we have \n\\begin{align} \\nonumber 0\\leq \\alpha_{1}\\leq c_{1}e_{2}, ~~~\n0\\leq \\alpha_{2}\\leq c_{u}e_{u}. \\nonumber\n\\end{align}\nAlso, from equations (\\ref{29}) and (\\ref{30}), we have \n\\begin{align} \\nonumber u_{0}&=-(A^{T}A)^{-1} (B^{T}\\alpha_{1}-\\mathfrak{U}^{T}\\alpha_{2}), \\nonumber \\\\ \nu_{t}&=-\\dfrac{T}{\\mu_{1}}(A_{t}^{T}A_{t})^{-1} (B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t}). 
\\nonumber\n\\end{align}\nThen, substituting $u_{0}$ and $u_{t}$ into (\\ref{28}) :\n\\begin{align*} L_{1}={}&\\dfrac{1}{2}(\\alpha^{T}_{1}B -\\alpha_{2}^{T}\\mathfrak{U})(A^{T}A)^{-1} (B^{T}\\alpha_{1}-\\mathfrak{U}^{T}\\alpha_{2}) \\\\\n&+\\frac{T}{ 2\\mu_{1}}\\sum_{t=1}^{T}(\\alpha_{1t}^{T}B_{t}- \\alpha_{2t}^{T}\\mathfrak{U}_{t}) (A_{t}^{T}A_{t})^{-1}(B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t})\\\\\n& -\\frac{T}{ \\mu_{1}}\\sum_{t=1}^{T}\\alpha_{1t}^{T}B_{t} (A_{t}^{T}A_{t})^{-1} (B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t})\\\\\n&-\\sum_{t=1}^{T}\\alpha_{2t}^{T}\\mathfrak{U}_{t}(A_{t}^{T}A_{t})^{-1} (B ^{T}\\alpha_{1}-\\mathfrak{U}^{T}\\alpha_{2})\\\\\n& -\\dfrac{T}{ \\mu_{1}} \\sum_{t=1}^{T}\\alpha_{1t}^{T}\\mathfrak{U}_{t}(A_{t}^{T}A_{t})^{-1} (B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t})\\\\\n&=-\\dfrac{1}{2}(\\alpha^{T}_{1}B -\\alpha_{2}^{T}\\mathfrak{U})(A^{T}A)^{-1} (B^{T}\\alpha_{1}-\\mathfrak{U}^{T}\\alpha_{2})\\\\\n&-\\dfrac{T}{ 2\\mu_{1}} \\sum_{t=1}^{T}(\\alpha_{1t}^{T}B_{t}- \\alpha_{2t}^{T}\\mathfrak{U}_{t}) (A_{t}^{T}A_{t})^{-1} (B_{t}^{T}\\alpha_{1t}-\\mathfrak{U}_{t}^{T}\\alpha_{2t}).\n\\end{align*}\nDefining\n\\begin{align*}\nQ& =\\left[ \\begin{matrix}\nB\\\\\n\\mathfrak{U}\n\\end{matrix}\\right] (A^{T}A)^{-1}\\left[ \\begin{matrix}\nB^{T}&\\mathfrak{U}^{T}\n\\end{matrix}\\right],\\\\\nP_{t}& =\\left[ \\begin{matrix}\nB_{t}\\\\\n\\mathfrak{U}_{t}\n\\end{matrix}\\right] (A^{T}A)^{-1}\\left[ \\begin{matrix}\nB_{t}^{T}&\\mathfrak{U}^{T}_{t}\n\\end{matrix}\\right],\\\\\nP&= blkdiag \\,(P_{1},\\ldots,P_{T}),\n\\end{align*}\nthe dual problem of (\\ref{26}) may be expressed as \n\\begin{align}\\label{37}\n\\mathop {\\max }_{\\alpha_{1}, \\alpha_{2}} \\,& -\\frac{1}{2}\\left[ \\alpha_{1}^{T}, \\alpha_{2}^{T}\\right] \\left( Q+\\dfrac{T}{\\mu_{1}}P\\right) \\left[ \\begin{matrix}\n\\alpha_{1}\\\\\n\\alpha_{2}\n\\end{matrix}\\right]\n+\\left[ \\alpha_{1}^{T}, \\alpha_{2}^{T}\\right] \\left[ \\begin{matrix}\ne_{2}\\\\\n(-1+\\varepsilon)e_{u}\n\\end{matrix}\\right]\\nonumber\\\\\n\\text{s.t.} \\,\\,\\,\\,\\,& 0\\leq\\alpha_{1}\\leq c_{1}e_{2},\\nonumber \\\\ \n& 0\\leq\\alpha_{2}\\leq c_{u}e_{u}.\n\\end{align}\nSimilarly, by introducing \n\\begin{align*}\n&R =\\left[ \\begin{matrix}\nA\\\\\n\\mathfrak{U}\n\\end{matrix}\\right] (B^{T}B)^{-1}\\left[ \\begin{matrix}\nA^{T}&\\mathfrak{U}^{T}\n\\end{matrix}\\right], ~~\nS_{t} =\\left[ \\begin{matrix}\nA_{t}\\\\\n\\mathfrak{U}_{t}\n\\end{matrix}\\right] (A_{t}^{T}A_{t})^{-1}\\left[ \\begin{matrix}\nA_{t}^{T}&\\mathfrak{U}^{T}_{t}\n\\end{matrix}\\right],\\\\\n&S= blkdiag\\,(S_{1},\\ldots, S_{T}),~~\n\\alpha^{*}_{1}=\\big[(\\alpha_{11}^{\\ast})^T,\\ldots,(\\alpha_{1T}^{\\ast})^T\\big]^{T},~~ \n\\alpha^{*}_{2}=\\big[(\\alpha_{21}^{\\ast})^T,\\ldots,(\\alpha_{2T}^{\\ast})^T\\big]^{T},\n\\end{align*}\nthe dual problem of (\\ref{27}) can be obtained as \n\\begin{align}\\label{38}\n\\mathop {\\max }_{\\alpha^{*}_{1}, \\alpha^{*}_{2}} \\,& -\\frac{1}{2}\\left[ \\alpha_{1}^{*T}, \\alpha_{2}^{*T}\\right] \n\\left( R+\\dfrac{T}{\\mu_{2}}S\\right) \n\\left[ \\begin{matrix}\n\\alpha^{*}_{1}\\\\\n\\alpha^{*}_{2}\n\\end{matrix}\\right]\n+\\left[ \\alpha_{1}^{*T}, \\alpha_{2}^{*T}\\right] \\left[ \\begin{matrix}\ne_{1}\\\\\n(-1+\\varepsilon)e_{u}\n\\end{matrix}\\right]\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,\\, & 0\\leq\\alpha^{*}_{1}\\leq c_{2}e_{1},\\nonumber \\\\ \n& 0\\leq\\alpha^{*}_{2}\\leq c^{*}_{u}e_{u}.\n\\end{align}\nBy solving problems (\\ref{37}) and (\\ref{38}), we find the $\\alpha_{1}, \\alpha_{2}, 
\\alpha_{1}^{*}$ and $\\alpha_{2}^{*}$, and then the classifiers parameters $u_{0}, u_{t}, v_{0}$ and $v_{t}$ of the $t$-th task can be obtained.\nThe label of a new sample $x\\in \\mathbb{R}^{n}$ is determined by (\\ref{15}).\nA linear $\\mathfrak{U}$MTSVM can be obtained by the steps Algorithm~\\ref{A1}.\n\\begin{algorithm} [t] \n\t\\caption{\\label{A1} A linear multi-task twin support vector machine with Universum ($\\mathfrak{U}$MTSVM) }\n\n\t\\algsetup{linenosize=\\normalsize}\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\begin{algorithmic}[1]\n\t\t\\normalsize\n\t\t\\REQUIRE{\\mbox{}\\\\-- The training set $\\tilde{T}$ and Universum data $ X_{\\mathfrak{U}} $;\\\\\n\t\t\t-- Decide on the total number of tasks included in the data set and assign this value to T;\\\\\n\t\t\t-- Select classification task $ S_{t}~(t=1,\\ldots,T) $ in training data set $\\tilde{T}$;\\\\\n\t\t\t-- Divide Universum data $X_{\\mathfrak{U}} $ by $t$-task and get $ X_{\\mathfrak{U}t}~ (t=1,\\ldots,T)$;\\\\\n\t\t\t-- Choose appropriate parameters\n\t\t\t$c_{1}, c_{2},c_{u}$, $c_{u}^{*}$, $ \\mu_{1} $, $ \\mu_{2} $, and parameter $\\varepsilon \\in (0,1)$.}\\\\\n\t\t{\\textbf{The outputs:}}\n\t\t\\begin{list}{--}{}\n\t\t\t\\item $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\end{list}\n\t\t\n\t\t{\\textbf{The process:}}\n\t\t\n\t\t\\STATE\n\t\tSolve the optimization problems (16) and (17), and get $ \\alpha_{1},~\\alpha_{2},~\\alpha^{*}_{1}$, and $\\alpha^{*}_{2}. $\n\t\t\\STATE\n\t\tCalculate $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\STATE\n\t\tBy utilizing the decision function (\\ref{15}), assign a new point $ x $ in the $t$-th task to class $ +1 $ or $ -1 $.\n\t\t\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Nonlinear case}\n\nIt is understandable that a linear classifier would not be appropriate for training data that are linearly inseparable. To deal with such issues, employ the kernel technique. To that end, we introduce the kernel function $K(\\cdot,\\cdot)$ and define\n $D=\\left[ A_{1}^{T}, B_{1}^{T}, A_{2}^{T}, B_{2}^{T},\\ldots, A_{T}^{T},B_{T}^{T}\\right] ^{T}$, $\\overline{A}=\\left[ K(A, D^{T}),e_{1}\\right] $, $\\overline{A}_{t}=\\left[ K(A_{t}, D^{T}),e_{1t}\\right] $, $\\overline{B}=[K(B, D^{T}),e_{2}]$, $\\overline{B}_{t}=\\left[ K(B_{t}, D^{T}),e_{2t}\\right] $, $\\overline{\\mathfrak{U}}=\\left[ K(X_{\\mathfrak{U}}, D^{T}),e_{u}\\right] $ and $\\overline{\\mathfrak{U}}_{t}=\\left[ K(X_{\\mathfrak{U}t}, D^{T}),e_{ut}\\right] $. 
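\n\nThese kernel-augmented matrices can be assembled directly once a kernel is fixed; the following sketch uses the Gaussian kernel employed later in our experiments and treats $D$ as the row-wise stack of the raw training samples (the helper names are ours and serve only as an illustration, not as the implementation used for the reported results).\n\\begin{verbatim}\nimport numpy as np\n\ndef rbf_kernel(X, Y, gamma):\n    # K(x, y) = exp(-gamma * ||x - y||^2) for all pairs of rows of X and Y\n    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T\n    return np.exp(-gamma * sq)\n\ndef kernel_augment(X, D, gamma):\n    # [K(X, D^T), e]: kernel block against the stacked data D plus a column of ones\n    return np.hstack([rbf_kernel(X, D, gamma), np.ones((X.shape[0], 1))])\n\n# e.g. A_bar = kernel_augment(X_p, D, gamma), U_bar_t = kernel_augment(X_Ut, D, gamma)\n\\end{verbatim}\n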
Hence, the nonlinear formulations for $\\mathfrak{U}$MTSVM are defined as follow:\n\\begin{align}\n\\mathop {\\min }_{u_{0},u_{t},\\xi_{t},\\psi _{t}\\,} \\,& \\frac{1}{2}\\|\\bar{A} u_{0}\\|^{2}+\\frac{\\mu_{1}}{2T} \\sum_{t=1}^{T}\\|\\bar{A}_{t}u_{t}\\|^{2} +c_{1}\\sum_{t=1}^{T} e_{2t}^{T} \\xi _{t}\n+c_{u}\\sum_{t=1}^{T} e_{ut}^{T} \\psi _{t}\\nonumber \\\\\n\\text{s.t.} ~~& - \\overline{B}_{t} (u_{0}+u_{t})+{{\\xi }_{t}}\\geq e_{2t},\\nonumber \\\\ \n& \\bar{\\mathfrak{U}}_{t} (u_{0}+u_{t})+\\psi_{t}\\geq (-1+\\varepsilon)e_{ut}, \\nonumber\\\\ \n& \\xi _{t} \\geq 0, \\ \\ \n \\psi_{t}\\geq 0,\n\\label{40}\n\\end{align}\nand\n\\begin{align}\n\\mathop {\\min }_{v_{0},v_{t},\\eta_{t},\\psi^{*} _{t}\\,} \\,& \\frac{1}{2}\\|\\bar{B} v_{0}\\|^{2}+\\frac{\\mu_{2}}{2T} \\sum_{t=1}^{T}\\|\\bar{B}_{t}v_{t}\\|^{2} +c_{2}\\sum_{t=1}^{T} e_{1t}^{T} \\eta _{t}\n+c^{*}_{u}\\sum_{t=1}^{T} e_{ut}^{T} \\psi _{t}\\nonumber \\\\\n\\text{s.t.} ~~& \\bar{A}_{t} (v_{0}+v_{t})+{{\\eta }_{t}}\\geq e_{1t},\\nonumber \\\\ \n& -\\overline{\\mathfrak{U}}_{t} (v_{0}+v_{t})+\\psi^{*}_{t}\\geq (-1+\\varepsilon)e_{ut}, \\nonumber\\\\ \n& \\eta _{t} \\geq 0, \\ \\ \n \\psi^{*}_{t}\\geq 0,\n\\label{41}\n\\end{align}\nhere $c_{1}$, $c_{2}$, $c_{u}$ and $c_{u}^{*}$ are penalty parameters, and\n$\\xi_{t}$, $\\eta_{t}$, $\\psi_{t}$ and $\\psi_{t}^{*}$\nare the corresponding slack vectors. By $T$ we denote the number of task parameters, and $\\mu_{1}$ and $\\mu_{2}$ are the positive\nparameters, which control preference of the tasks.\nAfter using the Lagrange multipliers and KKT conditions, the duals of problems (\\ref{40}) and (\\ref{41}) read as follows:\n\\begin{align}\\label{42}\n\\mathop {\\max }_{\\alpha_{1}, \\alpha_{2}} \\,& -\\frac{1}{2}\\left[ \\alpha_{1}^{T}, \\alpha_{2}^{T}\\right]\n\\left( Q+\\dfrac{T}{\\mu_{1}}P\\right) \\left[ \\begin{matrix}\n\\alpha_{1}\\\\\n\\alpha_{2}\n\\end{matrix}\\right]\n+\\left[ \\alpha_{1}^{T}, \\alpha_{2}^{T}\\right] \\left[ \\begin{matrix}\ne_{2}\\\\\n(-1+\\varepsilon)e_{u}\n\\end{matrix}\\right]\\nonumber \\\\\n\\text{s.t.} \\,\\,\\,\\,\\, & 0\\leq\\alpha_{1}\\leq c_{1}e_{2},\\nonumber \\\\ \n& 0\\leq\\alpha_{2}\\leq c_{u}e_{u},\n\\end{align}\nand\n\\begin{align}\\label{43}\n\\mathop {\\max }_{\\alpha^{*}_{1}, \\alpha^{*}_{2}} \\,& -\\frac{1}{2}\\left[ \\alpha_{1}^{*T}, \\alpha_{2}^{*T}\\right] \n\\left( R+\\dfrac{T}{\\mu_{2}}S\\right) \\left[ \\begin{matrix}\n\\alpha^{*}_{1}\\\\\n\\alpha^{*}_{2}\n\\end{matrix}\\right]\n+\\left[ \\alpha_{1}^{*T}, \\alpha_{2}^{*T}\\right] \\left[ \\begin{matrix}\ne_{1}\\\\\n(-1+\\varepsilon)e_{u}\n\\end{matrix}\\right]\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,\\, & 0\\leq\\alpha^{*}_{1}\\leq c_{2}e_{1},\\nonumber \\\\ \n& 0\\leq\\alpha^{*}_{2}\\leq c^{*}_{u}e_{u},\n\\end{align}\nwhere\n\\begin{align*}\nQ& =\\left[ \\begin{matrix}\n\\bar{B}\\\\\n\\bar{ \\mathfrak{U}}\n\\end{matrix}\\right] ( \\bar{A}^{T} \\bar{A})^{-1}\\left[ \\begin{matrix}\n\\bar{B}^{T}& \\bar{ \\mathfrak{U}}^{T}\n\\end{matrix}\\right],~~\nP_{t} =\\left[ \\begin{matrix}\n\\bar{B}_{t}\\\\\n\\bar{ \\mathfrak{U}}_{t}\n\\end{matrix}\\right] ( \\bar{A}_{t}^{T} \\bar{A}_{t})^{-1}\\left[ \\begin{matrix}\n\\bar{B}_{t}^{T}& \\bar{ \\mathfrak{U}}^{T}_{t}\n\\end{matrix}\\right],\\\\\nP&=blkdiag \\,(P_{1},\\ldots, P_{T}),~~\nR =\\left[ \\begin{matrix}\n\\bar{A}\\\\\n\\bar{ \\mathfrak{U}}\n\\end{matrix}\\right] (\\bar{B}^{T}\\bar{B})^{-1}\\left[ \\begin{matrix}\n\\bar{A}^{T}& \\bar{ \\mathfrak{U}}^{T}\n\\end{matrix}\\right],\\\\\nS_{t}& =\\left[ \\begin{matrix}\n\\bar{A}_{t}\\\\\n\\bar{ 
\\mathfrak{U}}_{t}\n\\end{matrix}\\right] (\\bar{A}_{t}^{T}\\bar{A}_{t})^{-1}\\left[ \\begin{matrix}\n\\bar{A}_{t}^{T}& \\bar{ \\mathfrak{U}}^{T}_{t}\n\\end{matrix}\\right], ~~\nS= blkdiag \\,(S_{1},\\ldots, S_{T}).\n\\end{align*}\n\n\nA new data point $x$ in the $t$-th task is determined to class $i\\in \\{+1, -1\\}$ by using the following decision function:\n\\begin{align}\\label{n15} \nf(x)=\\arg \\underset{k=1,2}\\min\\, \\big|K\\big(x,D^{T}\\big)w_{kt}+b_{kt}\\big|.\n\\end{align}\nThe nonlinear $\\mathfrak{U}$MTSVM is described in the steps of Algorithm~\\ref{A2}.\n\\begin{algorithm} [t] \n\t\\caption{\\label{A2} A nonlinear multi-task twin support vector machine with Universum ($\\mathfrak{U}$MTSVM) }\n\n\t\\algsetup{linenosize=\\normalsize}\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\begin{algorithmic}[1]\n\t\t\\normalsize\n\t\t\\REQUIRE{\\mbox{}\\\\-- The training set $\\tilde{T}$ and Universum data $X_{\\mathfrak{U}} $;\\\\\n\t\t\t-- Decide on the total number of tasks included in the data set and assign this value to T;\\\\\n\t\t\t-- Select classification task $ S_{t}~(t=1,\\ldots,T) $ in training data set $\\tilde{T}$;\\\\\n\t\t\t-- Divide Universum data $X_{\\mathfrak{U}} $ by $t$-task and get $ X_{\\mathfrak{U}t}~ (t=1,\\ldots,T)$;\\\\\n\t\t\t-- Choose appropriate parameters\n\t\t\t$c_{1}, c_{2},c_{u}$, $c_{u}^{*}$, $ \\mu_{1} $, $ \\mu_{2} $, and parameter $\\varepsilon \\in (0,1)$.\\\\\n\t\t-- Select proper kernel function and kernel parameter.\n\t}\\\\\n\t\t{\\textbf{The outputs:}}\n\t\t\\begin{list}{--}{}\n\t\t\t\\item $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\end{list}\n\t\t\n\t\t{\\textbf{The process:}}\n\t\t\n\t\t\\STATE\n\t\tSolve the optimization problems (20) and (21), and get $ \\alpha_{1},~\\alpha_{2},~\\alpha^{*}_{1}$, and $\\alpha^{*}_{2}. $\n\t\t\\STATE\n\t\tCalculate $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\STATE\n\t\tAssign a new point $ x $ in the $t$-th task to class $ +1 $ or $ -1 $ by using decision function~(\\ref{n15}).\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Least squares multi-task twin support vector machine with Universum data}\nIn this section, we introduce the least-squares version of $\\mathfrak{U}$MTSVM for the linear and nonlinear cases, to which we refer as least-squares multi-task twin support vector machine with Universum data (LS-$\\mathfrak{U}$MTSVM). Our proposed method combines the advantages of DMTSVM, $ \\mathfrak{U}_{LS} $-TSVM, and MTLS-TWSVM. In terms of generalization performance, the proposed method is superior to MTLS-TWSVM, because it improves prediction accuracy by absorbing previously embedded knowledge embedded in the Universum data. 
In terms of the time of computation, LS-$\\mathfrak{U}$MTSVM works faster than DMTSVM by solving two systems of linear equations instead of two quadratic programming problems.\n\n\\subsection{Linear case}\nWe modify problems (\\ref{26}) and \n(\\ref{27}) in the least squares sense and replace the inequality constraint with equality requirements as follows\n\\begin{align}\\label{44}\n\\mathop {\\min }_{u_{0},u_{t},\\xi_{t},\\psi _{t}\\,} \\,& \\frac{1}{2}\\|A u_{0}\\|^{2}+\\dfrac{\\mu_{1}}{2T} \\sum_{t=1}^{T}\\|A_{t}u_{t}\\|^{2} +\\dfrac{c_{1}}{2}\\sum_{t=1}^{T} \\| \\xi _{t}\\|^{2}\n+\\dfrac{c_{u}}{2}\\sum_{t=1}^{T} \\|\\psi _{t}\\|^{2}\\nonumber \\\\\n\\text{s.t.} ~~& - B_{t} (u_{0}+u_{t})+{{\\xi }_{t}}= e_{2t},\\nonumber \\\\ \n& \\mathfrak{U}_{t} (u_{0}+u_{t})+\\psi_{t}= (-1+\\varepsilon)e_{ut}, \n\\end{align}\nand\n\\begin{align}\\label{45}\n\\mathop {\\min }_{v_{0},v_{t},\\eta_{t},\\psi^{*} _{t}\\,} \\,& \\frac{1}{2}\\|B v_{0}\\|^{2}+\\dfrac{\\mu_{2}}{2T} \\sum_{t=1}^{T}\\|B_{t}v_{t}\\|^{2} +\\dfrac{c_{2}}{2}\\sum_{t=1}^{T} \\| \\eta_{t}\\|^{2}\n+\\dfrac{c^{*}_{u}}{2}\\sum_{t=1}^{T}\\|\\psi _{t}^{*}\\|^{2}\\nonumber \\\\\n\\text{s.t.}~~ & A_{t} (v_{0}+v_{t})+{{\\eta }_{t}}= e_{1t},\\nonumber \\\\ \n& -\\mathfrak{U}_{t} (v_{0}+v_{t})+\\psi^{*}_{t}= (-1+\\varepsilon)e_{ut}.\n\\end{align}\nHere, $c_{1},c_{2},c_{u}$, and $c_{u}^{\\ast}$ are penalty parameters, $\\xi_{t}, \\eta_{t}, \\psi_{t}$ and $\\psi_{t}^{\\ast}$ are slack variables for $t$-th task and $e_{1t}, e_{2t}$, and $e_{ut}$ are vectors of appropriate dimensions whose all components are equal to $1$. \n\n\nIt is worth noting that the loss functions in \\eqref{44} and \\eqref{45} are the square of the 2-norm of the slack variables $\\psi$ and $\\psi^{*}$ rather than the 1-norm in problems (\\ref{26}) and (\\ref{27}), which renders the constraints $\\psi_{t}\\geq 0$ and $\\psi^{*}_{t}\\geq 0$ superfluous.\n\nThe Lagrangian function for the problem \\eqref{44} can be written as follows: \n\\begin{align}\\label{46} \nL_{1} ={}& \\dfrac{1}{2}\\| Au_{0}\\|^{2}+\\frac{\\mu_{1}}{2T}\\sum_{t=1}^{T} \\| A_{t}u_{t}\\|^{2}+\\frac{c_{1}}{2}\\sum_{t=1}^{T}\\| \\xi_{t}\\|^{2}\\nonumber\\\\\n& -\\sum_{t=1}^{T}\\alpha_{t}^{T}(- B_{t}(u_{0}+u_{t})+\\xi_{t}-e_{2t})+\\dfrac{c_{u}}{2}\\sum_{t=1}^{T}\\| \\psi_{t}\\|^{2}\\nonumber\\\\\n& -\\sum_{t=1}^{T}\\beta_{t}^{T}(\\mathfrak{U}_{t}(u_{0}+u_{t})+\\psi_{t}-(-1+\\varepsilon)e_{ut}),\n\\end{align}\nwhere $\\alpha_{t}$ and $\\beta_{t}$ are the Lagrange multipliers.\nThe Lagrangian function \\eqref{46} is differentiable and the KKT optimally conditions can be obtained as follows:\n\\begin{align}\n&\\dfrac{\\partial L}{\\partial u_{0}}=A^{T}Au_{0}+B^{T}\\alpha -\\mathfrak{U}^{T} \\beta =0,\\label{47} \\\\\n&\\dfrac{\\partial L}{\\partial u_{t}}=\\dfrac{\\mu_{1}}{T}A^{T}_{t}A_{t}u_{t}+B^{T}_{t}\\alpha_{t} - \\mathfrak{U}^{T}_{t} \\beta_{t}=0,\\label{48} \\\\\n&\\dfrac{\\partial L}{\\partial \\xi_{t}}=c_{1}\\xi_{t}-\\alpha_{t} =0, \\label{49} \\\\\n&\\dfrac{\\partial L}{\\partial \\psi_{t}}=c_{u}\\psi_{t}-\\beta_{t} =0, \\label{50} \\\\\n&\\dfrac{\\partial L_{1}}{\\partial \\alpha_{t}}=B_{t}(u_{0}+u_{t})-\\xi_{t}+e_{2t}=0,\\label{51} \\\\\n&\\dfrac{\\partial L_{1}}{\\partial \\beta_{t}}=-\\mathfrak{U}_{t}(u_{0}+u_{t})-\\psi_{t}+(-1+\\varepsilon)e_{ut}=0.\\label{52} \n\\end{align}\nFrom equations \\eqref{47}--\\eqref{50}, we derive\n\\begin{align}\n& u_{0}= -(A^{T}A)^{-1}(B^{T}\\alpha -\\mathfrak{U}\\beta), \\label{53}\\\\ \n& u_{t}= 
-\\dfrac{T}{\\mu_{1}}(A^{T}_{t}A_{t})^{-1}(B^{T}_{t}\\alpha_{t}-\\mathfrak{U}_{t}^{T}\\beta_{t}), \\label{54}\\\\ \n& \\xi_{t}=\\dfrac{\\alpha_{t}}{c_{1}},\\label{55} \\\\\n& \\psi_{t}=\\dfrac{\\beta_{t}}{c_{u}}.\\label{56} \n\\end{align}\nBy substituting $u_{0}$, $u_{t}$, $\\xi_{t}$ and $\\psi_{t}$ into the equations \\eqref{51} and \\eqref{52}, we have \n\\begin{align}\\label{57} \n{{B}_{t}}\\left[ -{{({{A}^{T}}A)}^{-1}}({B}^{T}\\alpha - \\mathfrak{U}^{T}\\beta)\n-\\frac{T}{{{\\mu }_{1}}}{{({{A}^{T}_{t}}A_{t})}^{-1}}(B_{t}^{T}{{\\alpha }_{t}}\n-\\mathfrak{U}_{t}^{T}\\beta_{t}) \\right]\n-\\frac{{{\\alpha }_{t}}}{{{c}_{1}}}\n&= -{{e}_{2t}},\n\\\\%\\end{align}\n\\label{58} \n{ -\\mathfrak{U}_{t}}\\left[ -{{({{A}^{T}}A)}^{-1}}({B}^{T}\\alpha - \\mathfrak{U}^{T}\\beta)\n -\\frac{T}{{{\\mu }_{1}}}{{({{A}^{T}_{t}}A_{t})}^{-1}}(B_{t}^{T}{{\\alpha }_{t}}-\\mathfrak{U}_{t}^{T}\\beta_{t}) \\right]-\\frac{{{\\beta }_{t}}}{{{c}_{u}}}\n &=-(-1 + \\varepsilon){{e}_{ut}},\n\\end{align}\nwhere $t\\in \\{1, \\ldots, T\\}$, $\\alpha=[\\alpha_{1}^{T},\\ldots, \\alpha_{T}^{T}]^{T}$ and $\\beta=[\\beta_{1}^{T},\\ldots, \\beta_{T}^{T}]^{T}$. Here, we define \n\\begin{align*}\n& Q_{1} =B{{({{A}^{T}}A)}^{-1}}{{B}^{T}}, ~~\n Q_{2} =B{{({{A}^{T}}A)}^{-1}}{{\\mathfrak{U}}^{T}}, ~~\nS_{1} =\\mathfrak{U}{{({{A}^{T}}A)}^{-1}}{{B}^{T}}, ~~ S_{2} =\\mathfrak{U}{{({{A}^{T}}A)}^{-1}}{{\\mathfrak{U}}^{T}}, \n\\\\\n& {{P}_{1t}}={{B}_{t}}{{(A_{t}^{T}A_{t})}^{-1}}B_{t}^{T}, ~~ {{P}_{2t}}={{B}_{t}}{{(A_{t}^{T}A_{t})}^{-1}}\\mathfrak{U}_{t}^{T},~~ P_{1}=blkdiag({{P}_{11}},\\ldots ,{{P}_{1T}}),\n\\\\\n& P_{2}=blkdiag({{P}_{21}},\\ldots ,{{P}_{2T}}),~~ R_{1t}=\\mathfrak{U}_{t}{{({{A}^{T}_{t}}A_{t})}^{-1}}{{B}^{T}_{t}},~~ R_{2t}=\\mathfrak{U}_{t}{{({{A}^{T}_{t}}A_{t})}^{-1}}{{\\mathfrak{U}}^{T}_{t}}, \n\\\\\n& R_{1}=blkdiag({{R}_{11}},\\ldots ,{{R}_{1T}}),~~ R_{2}=blkdiag({{R}_{21}},\\ldots ,{{R}_{2T}}).\n\\end{align*}\nThen \\eqref{57} and \\eqref{58} can be converted to the following equations:\n\\begin{align}\n& \\left( {{Q}_{1}}\\alpha -{{Q }_{2}}\\beta \\right)+\\frac{T}{{{\\mu }_{1}}}\\left( {{P}_{1}}\\alpha -{{P}_{2}}\\beta \\right)+\\frac{1}{{{c}_{1}}}{{I}_{1}}\\alpha ={{e}_{2}}, \\label{59} \\\\ \n& \\left( -{{S}_{1}}\\alpha +{{S}_{2}}\\beta \\right)-\\frac{T}{{{\\mu }_{2}}}\\left( {{R}_{1}}\\alpha -{{R}_{2}}\\beta \\right)+\\frac{1}{{{c}_{u}}}{{I}_{2}}\\beta \n=\\left( -1+\\varepsilon \\right){{e}_{u}}. 
\\label{60}\n\\end{align}\nCombining equations \\eqref{59} and\\eqref{60}, we obtain\n\\begin{align}\n\\left[ \\begin{matrix}\nQ_{1}&-Q_{2}\\\\\n-S_{1}&S_{2}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha\\\\\n\\beta\n\\end{matrix} \\right] &+ \\dfrac{T}{\\mu_{1}}\\left[ \\begin{matrix}\nP_{1}&-P_{2}\\\\\n-R_{1}&R_{2}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha\\\\\n\\beta\n\\end{matrix} \\right] \n+ \\left[ \\begin{matrix}\n\\dfrac{1}{c_{1}}I_{1}&0\\\\\n0&\\dfrac{1}{c_{u}}I_{2}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha\\\\\n\\\\\n\\beta\n\\end{matrix} \\right] \n= \\left[\\begin{matrix}\ne_{2}\\\\\n\\\\\n(-1+\\varepsilon)e_{u}\n\\end{matrix} \\right],\\label{61} \n\\end{align}\nthen, we can write\n\\begin{align}\n\\left[ \\begin{matrix}\n\\alpha \\\\\n\\beta \\\\\n\\end{matrix} \\right]&=\\left[ \\left[ \\begin{matrix}\n{{Q}_{1}} & -{{Q }_{2}} \\\\\n-{{S}_{1}} & {{S}_{2}} \\\\\n\\end{matrix} \\right]+\\frac{T}{{{\\mu }_{1}}}\\left[ \\begin{matrix}\n{{P}_{1}} & -{{P}_{2}} \\\\\n-{{R}_{1}} & {{R}_{2}} \\\\\n\\end{matrix} \\right] \n + \\left[ \\begin{matrix}\n\\dfrac{1}{{{c}_{1}}}{{I}_{1}} & 0 \\\\\n0 & \\dfrac{1}{{{c}_{u}}}{{I}_{2}} \\\\\n\\end{matrix} \\right] \\right]^{-1}\\left[ \\begin{matrix}\n{{e}_{2}} \\\\\n(-1+\\varepsilon ){{e}_{u}} \n\\end{matrix} \\right]. \\label{62} \n\\end{align}\nSimilarly, the Lagrangian function for the problem \\eqref{45} can be written as follows:\n\\begin{align}\\label{63} \nL_{2} ={}& \\dfrac{1}{2}\\| Bv_{0}\\|^{2}+\\dfrac{\\mu_{2}}{2T}\\sum_{t=1}^{T} \\| B_{t}v_{t}\\|^{2}+\\dfrac{c_{2}}{2}\\sum_{t=1}^{T}\\| \\eta_{t}\\|^{2}\\nonumber\\\\\n& -\\sum_{t=1}^{T}\\alpha_{t}^{\\ast^{T}}(A_{t}(v_{0}+v_{t})+\\eta_{t}-e_{1t})+\\dfrac{c_{u}^{\\ast}}{2}\\sum_{t=1}^{T}\\| \\psi_{t}^{\\ast}\\|^{2}\\nonumber\\\\\n& -\\sum_{t=1}^{T}\\beta_{t}^{\\ast^{T}}(\\mathfrak{U}_{t}(v_{0}+v_{t})+\\psi_{t}^{\\ast}-(-1+\\varepsilon)e_{ut}).\n\\end{align}\nBy performing a similar process, the following equations are obtained:\n\\begin{align}\n\t(Q _{1}^{*}\\alpha _{1}^{*}-Q _{2}^{*}{{\\beta }^{*}})+\\frac{T}{{{\\mu }_{2}}}\\left( P_{1}^{*}{{\\alpha }^{*}}-P_{2}^{*}{{\\beta }^{*}} \\right)+\\frac{1}{{{c}_{2}}}{{I}_{1}}{{\\alpha }^{*}}\n&={{e}_{1}},\\label{64} \n\\\\%\t\\end{align}\n\\left( {{S}^{*}_{1}}\\alpha^{*}-S_{2}^{*}{{\\beta }^{*}} \\right)+\\frac{T}{{{\\mu }_{2}}}( R_{1}^{*}{{\\alpha }^{*}}\n-R_{2}^{*}{{\\beta }^{*}} )\n-\\frac{1}{{{c}_{u}^{\\ast}}}{{I}_{2}}{{\\beta }^{*}}\n&= -(-1+\\varepsilon)e_{u}, \\label{65} \n\\end{align}\nwhere\n\\begin{align*}\n& Q_{1}^{\\ast} =A{{({{B}^{T}}B)}^{-1}}{{A}^{T}}, ~~ Q_{2}^{\\ast} =A{{({{B}^{T}}B)}^{-1}}{{\\mathfrak{U}}^{T}}, ~~ S_{1}^{\\ast} =\\mathfrak{U}{{({{B}^{T}}B)}^{-1}}{{A}^{T}}, \n\\\\\n&S_{2}^{\\ast} =\\mathfrak{U}{{({{B}^{T}}B)}^{-1}}{{\\mathfrak{U}}^{T}},~~ {{P}_{1t}^{\\ast}}={{A}_{t}}{{(B_{t}^{T}B_{t})}^{-1}}A_{t}^{T}, ~~ {{P}_{2t}^{\\ast}}={{A}_{t}}{{(B_{t}^{T}B_{t})}^{-1}}\\mathfrak{U}_{t}^{T},\\\\\n& P_{1}^{\\ast}=blkdiag({{P}_{11}^{\\ast}},\\ldots ,{{P}_{1T}^{\\ast}}),~~ P_{2}^{\\ast}=blkdiag({{P}_{21}^{\\ast}},\\ldots ,{{P}_{2T}^{\\ast}}),~~ R_{1t}^{\\ast}=\\mathfrak{U}_{t}{{({{B}^{T}_{t}}B_{t})}^{-1}}{{A}^{T}_{t}}, \\\\\n& R_{2t}^{\\ast}=\\mathfrak{U}_{t}{{({{B}^{T}_{t}}B_{t})}^{-1}}{{\\mathfrak{U}}^{T}_{t}}, \n~~ R_{1}^{\\ast}=blkdiag({{R}_{11}^{\\ast}},\\ldots ,{{R}_{1T}^{\\ast}}),\n~~ R_{2}^{\\ast}=blkdiag({{R}_{21}^{\\ast}},\\ldots ,{{R}_{2T}^{\\ast}}).\n\\end{align*}\nCombining equations \\eqref{64} and \\eqref{65}, we have\n\\begin{align}\n\\left[ 
\\begin{matrix}\nQ_{1}^{\\ast}&-Q_{2}^{\\ast}\\\\\nS_{1}^{\\ast}&-S_{2}^{\\ast}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha^{\\ast}\\\\\n\\beta^{\\ast}\n\\end{matrix} \\right] &+ \\dfrac{T}{\\mu_{2}}\\left[ \\begin{matrix}\nP_{1}^{\\ast}&-P_{2}^{\\ast}\\\\\nR_{1}^{\\ast}&-R_{2}^{\\ast}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha^{\\ast}\\\\\n\\beta^{\\ast}\n\\end{matrix} \\right] \n+ \\left[ \\begin{matrix}\n\\dfrac{1}{c_{2}}I_{1}&0\\\\\n0&-\\dfrac{1}{c_{u}^{\\ast}}I_{2}\n\\end{matrix}\\right] \\left[\\begin{matrix}\n\\alpha^{\\ast}\\\\\n\\\\\n\\beta^{\\ast}\n\\end{matrix} \\right] \n= \\left[\\begin{matrix}\ne_{1}\\\\\n\\\\\n-(-1+\\varepsilon)e_{u}\n\\end{matrix} \\right],\n\\end{align}\nthen, we can write\n\\begin{align}\n\\left[ \\begin{matrix}\n\\alpha^{\\ast} \\\\\n\\beta^{\\ast} \\\\\n\\end{matrix} \\right]&=\\left[ \\left[ \\begin{matrix}\n{{Q}_{1}^{\\ast}} & -{{Q }_{2}^{\\ast}} \\\\\n{{S}_{1}^{\\ast}} & {-{S}_{2}^{\\ast}} \\\\\n\\end{matrix} \\right]+\\frac{T}{{{\\mu }_{2}}}\\left[ \\begin{matrix}\n{{P}_{1}^{\\ast}} & -{{P}_{2}^{\\ast}} \\\\\n{{R}_{1}^{\\ast}} & -{{R}_{2}^{\\ast}} \\\\\n\\end{matrix} \\right] + \\left[ \\begin{matrix}\n\\dfrac{1}{{{c}_{2}}}{{I}_{1}} & 0 \\\\\n0 & \\dfrac{1}{{{c}_{u}^{\\ast}}}{{I}_{2}} \\\\\n\\end{matrix} \\right] \\right]^{-1}\\left[ \\begin{matrix}\n{{e}_{1}} \\\\\n- (-1+\\varepsilon ){{e}_{u}} \n\\end{matrix} \\right].\\label{66} \n\\end{align}\nFinally, by finding solutions \\eqref{62} and \\eqref{66}, the classifier parameters $u_{0}$, $u_{t}$, $v_{0}$ and $v_{t}$ are obtained.\nThe decision function \\eqref{15} can be used to assign a new data point $x\\in \\mathbb{R}^{n}$ to its appropriate class.\nAccording to the discussion above, we illustrate the LS-$\\mathfrak{U}$MTSVM via Algorithm~\\ref{A3}.\n\\begin{algorithm} [t] \n\t\\caption{\\label{A3} A linear least squares multi-task twin support vector machine with Universum (LS-$\\mathfrak{U}$MTSVM) }\n\n\t\\algsetup{linenosize=\\normalsize}\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\begin{algorithmic}[1]\n\t\t\\normalsize\n\t\t\\REQUIRE{\\mbox{}\\\\-- The training set $\\tilde{T}$ and Universum data $ X_{\\mathfrak{U}} $;\\\\\n\t\t\t-- Decide on the total number of tasks included in the data set and assign this value to T;\\\\\n\t\t\t-- Select classification task $ S_{t}~(t=1,\\ldots,T) $ in training data set $\\tilde{T}$;\\\\\n\t\t\t-- Divide Universum data $ X_{\\mathfrak{U}} $ by $t$-task and get $ X_{\\mathfrak{U}t}~ (t=1,\\ldots,T)$;\\\\\n\t\t\t-- Choose appropriate parameters\n\t\t\t$c_{1}, c_{2},c_{u}$, $c_{u}^{*}$, $ \\mu_{1} $, $ \\mu_{2} $, and parameter $\\varepsilon \\in (0,1)$.}\\\\\n\t\t{\\textbf{The outputs:}}\n\t\t\\begin{list}{--}{}\n\t\t\t\\item $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\end{list}\n\t\t\n\t\t{\\textbf{The process:}}\n\t\t\n\t\t\\STATE\n\t\tSolve the two small systems of linear equations (41) and (46), and get $ \\alpha,~\\beta,~\\alpha^{*}$, and $\\beta^{*}. $\n\t\t\\STATE\n\t\tCalculate $ u_{0},~u_{t},~v_{0}$, and $v_{t}. 
$\n\t\t\\STATE\n\t\t\tBy utilizing the decision function (\\ref{15}), assign a new point $ x $ in the $t$-th task to class $ +1 $ or $ -1 $.\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Nonlinear case}\n\nIn the following, we introduce a nonlinear version of our proposed LS-$\\mathfrak{U}$MTSVM because there are situations that are not linearly separable, in which case the kernel trick can be used.\nTherefore, we use the kernel function $K(\\cdot,\\cdot)$ and define\n\\begin{align*}\n& D={{\\left[ A_{1}^{T},B_{1}^{T},A_{2}^{T},B_{2}^{T},\\ldots ,A_{T}^{T},B_{T}^{T} \\right]}^{T}}, \\\\ \n& \\overline{A}=\\left[ K(A,{{D}^{T}}),e_{1} \\right],\\,\\,\\,\\,{{\\overline{A}}_{t}}=\\left[ K({{A}_{t}},{{D}^{T}}),{{e}_{1t}} \\right], \\\\ \n& \\overline{B}=\\left[ K(B,{{D}^{T}}),e_{2} \\right],\\,\\,\\,\\,{{\\overline{B}}_{t}}=\\left[ K({{B}_{t}},{{D}^{T}}),{{e}_{2t}} \\right], \\\\ \n& \\overline{\\mathfrak{U}}=\\left[ K(X_{\\mathfrak{U}},{{D}^{T}}),e_{u} \\right],\\,\\,\\,\\,{{\\overline{\\mathfrak{U}}}_{t}}=\\left[ K({X_{\\mathfrak{U}t}},{{D}^{T}}),{{e}_{ut}} \\right].\n\\end{align*}\nSo, the nonlinear formulations of the optimization problems \\eqref{44} and \\eqref{45} can be written as\n\\begin{align}\\label{67} \n\\underset{u_{0}, u_{t}, \\xi_{t},\\psi_{t}}\\min ~& \\dfrac{1}{2}\\| \\overline{A}u_{0}\\|^{2} +\\dfrac{\\mu_{1}}{2T}\\sum_{t=1}^{T} \\| \\overline{A}_{t}u_{t}\\|^{2}+\\dfrac{c_{1}}{2}\\sum_{t=1}^{T}\\| \\xi_{t}\\|^{2}+\\dfrac{c_{u}}{2}\\sum_{t=1}^{T}\\| \\psi_{t}\\|^{2}\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,\\,\\,& -\\overline{B}_{t}(u_{0}+u_{t})+\\xi_{t}= e_{2t},\\\\\n&\\overline{\\mathfrak{U}}_{t}(u_{0}+u_{t})+\\psi_{t}=(-1+\\varepsilon)e_{ut},\\nonumber\n\\end{align}\nand\n\\begin{align}\\label{68} \n\\underset{v_{0}, v_{t}, \\eta_{t},\\psi_{t}^{\\ast}}\\min ~&\\dfrac{1}{2}\\| \\overline{B}v_{0}\\|^{2} +\\dfrac{\\mu_{2}}{2T}\\sum_{t=1}^{T} \\| \\overline{B}_{t}v_{t}\\|^{2}+\\dfrac{c_{2}}{2}\\sum_{t=1}^{T}\\| \\eta_{t}\\|^{2}+\\dfrac{c_{u}^{\\ast}}{2}\\sum_{t=1}^{T}\\| \\psi_{t}^{\\ast}\\|^{2}\\nonumber\\\\\n\\text{s.t.}\\,\\,\\,\\,\\,\\,& \\overline{A}_{t}(v_{0}+v_{t})+\\eta_{t}= e_{1t},\\\\\n&-\\overline{\\mathfrak{U}}_{t}(v_{0} +v_{t})+\\psi_{t}^{\\ast}=(-1+\\varepsilon)e_{ut},\\nonumber\n\\end{align}\nhere, parameters $ c_{1}$, $c_{2}$, $c_{u}$, $c_{u^*}$, $\\mu_{1}$, and $\\mu_{2} $ are as defined in section~4.1. In a similar way to the linear case, we can written the Lagrangian function problems (\\ref{67}) and (\\ref{68}) and KKT optimality conditions. 
After that, the optimal solutions of problems (\\ref{67}) and (\\ref{68}) take the form\n\\begin{align}\n\\left[ \\begin{matrix}\n\\alpha \\\\\n\\beta \\\\\n\\end{matrix} \\right]\n&=\\left[ \\left[ \\begin{matrix}\n{{Q}_{1}} & -{{Q }_{2}} \\\\\n-{{S}_{1}} & {{S}_{2}} \\\\\n\\end{matrix} \\right]+\\frac{T}{{{\\mu }_{1}}}\\left[ \\begin{matrix}\n{{P}_{1}} & -{{P}_{2}} \\\\\n-{{R}_{1}} & {{R}_{2}} \\\\\n\\end{matrix} \\right] \n+ \\left[ \\begin{matrix}\n\\dfrac{1}{{{c}_{1}}}{{I}_{1}} & 0 \\\\\n0 & \\dfrac{1}{{{c}_{u}}}{{I}_{2}} \\\\\n\\end{matrix} \\right] \\right]^{-1}\\left[ \\begin{matrix}\n{{e}_{2}} \\\\\n(-1+\\varepsilon ){{e}_{u}} \n\\end{matrix} \\right], \\label{69} \n\\end{align}\nand\n\\begin{align}\n\\left[ \\begin{matrix}\n\\alpha^{\\ast} \\\\\n\\beta^{\\ast} \\\\\n\\end{matrix} \\right]&=\\left[ \\left[ \\begin{matrix}\n{{Q}_{1}^{\\ast}} & -{{Q }_{2}^{\\ast}} \\\\\n{{S}_{1}^{\\ast}} & {-{S}_{2}^{\\ast}} \\\\\n\\end{matrix} \\right]+\\frac{T}{{{\\mu }_{2}}}\\left[ \\begin{matrix}\n{{P}_{1}^{\\ast}} & -{{P}_{2}^{\\ast}} \\\\\n{{R}_{1}^{\\ast}} & -{{R}_{2}^{\\ast}} \\\\\n\\end{matrix} \\right] + \\left[ \\begin{matrix}\n\\dfrac{1}{{{c}_{2}}}{{I}_{1}} & 0 \\\\\n0 & \\dfrac{1}{{{c}_{u}^{\\ast}}}{{I}_{2}} \\\\\n\\end{matrix} \\right] \\right]^{-1}\\left[ \\begin{matrix}\n{{e}_{1}} \\\\\n- (-1+\\varepsilon ){{e}_{u}} \n\\end{matrix} \\right],\\label{70} \n\\end{align}\nwhere\n\\begin{align*}\n& Q_{1} =\\overline{B}{{({{\\overline{A}}^{T}}\\overline{A})}^{-1}}{{\\overline{B}}^{T}}, ~~\nQ_{2} =\\overline{B}{{({{\\overline{A}}^{T}}\\overline{A})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}}, ~~\nS_{1} =\\overline{\\mathfrak{U}}{{({{\\overline{A}}^{T}}\\overline{A})}^{-1}}{{\\overline{B}}^{T}}, ~~ S_{2} =\\overline{\\mathfrak{U}}{{({{\\overline{A}}^{T}}\\overline{A})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}}, \n\\\\\n& {{P}_{1t}}={{\\overline{B}}_{t}}{{(\\overline{A}_{t}^{T}\\overline{A}_{t})}^{-1}}\\overline{B}_{t}^{T}, ~~ {{P}_{2t}}={{\\overline{B}}_{t}}{{(\\overline{A}_{t}^{T}\\overline{A}_{t})}^{-1}}\\overline{\\mathfrak{U}}_{t}^{T},~~ P_{1}=blkdiag({{P}_{11}},\\ldots ,{{P}_{1T}}),\n\\\\\n& P_{2}=blkdiag({{P}_{21}},\\ldots ,{{P}_{2T}}),~~ R_{1t}=\\overline{\\mathfrak{U}}_{t}{{({{\\overline{A}}^{T}_{t}}\\overline{A}_{t})}^{-1}}{{\\overline{B}}^{T}_{t}},~~ R_{2t}=\\overline{\\mathfrak{U}}_{t}{{({{\\overline{A}}^{T}_{t}}\\overline{A}_{t})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}_{t}}, \n\\\\\n& R_{1}=blkdiag({{R}_{11}},\\ldots ,{{R}_{1T}}),~~ R_{2}=blkdiag({{R}_{21}},\\ldots ,{{R}_{2T}}),\n\\end{align*}\nand\n\\begin{align*}\n& Q_{1}^{\\ast} =\\overline{A}{{({{\\overline{B}}^{T}}\\overline{B})}^{-1}}{{\\overline{A}}^{T}}, ~~ Q_{2}^{\\ast} =\\overline{A}{{({{\\overline{B}}^{T}}\\overline{B})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}}, ~~ S_{1}^{\\ast} =\\overline{\\mathfrak{U}}{{({{\\overline{B}}^{T}}\\overline{B})}^{-1}}{{\\overline{A}}^{T}}, \n\\\\\n&S_{2}^{\\ast} =\\overline{\\mathfrak{U}}{{({{\\overline{B}}^{T}}\\overline{B})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}},~~ {{P}_{1t}^{\\ast}}={{\\overline{A}}_{t}}{{(\\overline{B}_{t}^{T}\\overline{B}_{t})}^{-1}}\\overline{A}_{t}^{T}, ~~ {{P}_{2t}^{\\ast}}={{\\overline{A}}_{t}}{{(\\overline{B}_{t}^{T}\\overline{B}_{t})}^{-1}}\\overline{\\mathfrak{U}}_{t}^{T},\n\\\\\n& P_{1}^{\\ast}=blkdiag({{P}_{11}^{\\ast}},\\ldots ,{{P}_{1T}^{\\ast}}),~~ P_{2}^{\\ast}=blkdiag({{P}_{21}^{\\ast}},\\ldots ,{{P}_{2T}^{\\ast}}),~~ R_{1t}^{\\ast}=\\overline{\\mathfrak{U}}_{t}{{({{\\overline{B}}^{T}_{t}}\\overline{B}_{t})}^{-1}}{{\\overline{A}}^{T}_{t}}, \n\\\\\n& 
R_{2t}^{\\ast}=\\overline{\\mathfrak{U}}_{t}{{({{\\overline{B}}^{T}_{t}}\\overline{B}_{t})}^{-1}}{{\\overline{\\mathfrak{U}}}^{T}_{t}}, \n~~ R_{1}^{\\ast}=blkdiag({{R}_{11}^{\\ast}},\\ldots ,{{R}_{1T}^{\\ast}}),\n~~ R_{2}^{\\ast}=blkdiag({{R}_{21}^{\\ast}},\\ldots ,{{R}_{2T}^{\\ast}}).\n\\end{align*}\nThen the corresponding decision function of the $t$-th task can be computed by~(\\ref{n15}). \nAlgorithm~\\ref{A4} describes the process of nonlinear case.\n\n\\begin{algorithm} [t] \n\t\\caption{\\label{A4} A nonlinear least squares multi-task twin support vector machine with Universum (LS-$\\mathfrak{U}$MTSVM)}\n\n\t\\algsetup{linenosize=\\normalsize}\n\t\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\t\\begin{algorithmic}[1]\n\t\t\\normalsize\n\t\t\\REQUIRE{\\mbox{}\\\\-- The training set $\\tilde{T}$ and Universum data $ X_{\\mathfrak{U}} $;\\\\\n\t\t\t-- Decide on the total number of tasks included in the data set and assign this value to T;\\\\\n\t\t\t-- Select classification task $ S_{t}~(t=1,\\ldots,T) $ in training data set $\\tilde{T}$;\\\\\n\t\t\t-- Divide Universum data $ X_{\\mathfrak{U}} $ by $t$-task and get $ X_{\\mathfrak{U}t}~ (t=1,\\ldots,T)$;\\\\\n\t\t\t-- Choose appropriate parameters\n\t\t\t$c_{1}, c_{2},c_{u}$, $c_{u}^{*}$, $ \\mu_{1} $, $ \\mu_{2} $, and parameter $\\varepsilon \\in (0,1)$.\\\\\n\t\t-- Select proper kernel function and kernel parameter.} \\\\\n\t\t{\\textbf{The outputs:}}\n\t\t\\begin{list}{--}{}\n\t\t\t\\item $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\end{list}\n\t\t\n\t\t{\\textbf{The process:}}\n\t\t\n\t\t\\STATE\n\t\tSolve the two small systems of linear equations (41) and (46), and get $ \\alpha,~\\beta,~\\alpha^{*}$, and $\\beta^{*}. $\n\t\t\\STATE\n\t\tCalculate $ u_{0},~u_{t},~v_{0}$, and $v_{t}. $\n\t\t\\STATE\n\t\t\tBy utilizing the decision function (\\ref{15}), assign a new point $ x $ in the $t$-th task to class $ +1 $ or $ -1 $.\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Numerical experiments}\nThis section presents the results of experiments on various single-task learning algorithms and multi-task learning algorithms. \n\n The single-task learning algorithms considered consist of TBSVM \\cite{shao2011improvements}, I$ \\nu $-TBSVM \\cite{wang2018improved} and $ \\mathfrak{U}_{LS} $-TSVM \\cite{xu2016least}, while the multi-task learning methods are DMTSVM \\cite{xie2012multitask}, MTLS-TWSVM \\cite{mei2019multi}, and our proposed methods, i.e., $\\mathfrak{U}$MTSVM and LS-$\\mathfrak{U}$MTSVM.\n All numerical\nexperiments for both linear and nonlinear models were performed in Matlab R2018b on a PC with 4 GB of RAM and Core(TM) i7 CPU @2.20 GHz under the Microsoft Windows 64-bit operating system. Moreover, to determine the classification accuracies and performance of the algorithms, we used a five-fold strategy for cross-validation. The following steps describe the cross-validation procedure.\n\n\\begin{itemize}\n\t\\item Partition the data sets randomly into five separate subsets of equal size.\n\t\\item Apply the model to four of the subsets selected as the training data.\n\t\\item Consider the one remaining subsets as the test data and evaluate the model on it.\n\t\\item Repeat the process until each of the five sets has been utilized as test data.\n\\end{itemize}\nThe accuracy is defined as the number of accurate predictions divided by the total number of forecasts. The value is then multiplied by 100 to give the percentage accuracy. 
We randomly selected an equal number of samples from each class for the benchmark and medical data sets, and then used half of them to build the Universum data by averaging pairs of samples from different classes.\n\\subsection{Parameter selection}\n\nIt is obvious that the performance of classification algorithms depends on the selection of proper parameters \\cite{lu2018svm,xiao2021new}. Therefore, we discuss how the optimal parameters of the single-task and multi-task learning methods were selected. In this subsection, the parameters are selected by a grid search combined with five-fold cross-validation. The 3D surface plots in Figure~\\ref{fig1} demonstrate the influence of different values of the parameters $c_{1}$, $c_{2}$, $c_{u}$, $c_{u}^{*}$, $\\mu_{1}$ and $\\mu_{2}$ on the accuracy of the proposed LS-$\\mathfrak{U}$MTSVM method on the Caesarian data set in the linear case. From Figure~\\ref{fig1}(a--b), it can be inferred that the accuracy obtained is quite sensitive to the selected parameters. Of course, the accuracy may remain stable over some ranges of values. Therefore, the choice of parameters depends on the distribution of points in a particular data set.\n\n\\begin{figure}[hbt!]\n\t\\centering\n\t\\includegraphics[width=12cm]{multiparameter.png}\n\t\\caption{The effect of different parameter values on the Caesarian data set}\n\t\\label{fig1}\n\\end{figure}\nThe performance of the particular algorithms is determined by the parameters $ c_{1}$, $ c_{2}$, $ c_{3}$ and $c_{4}$ in TBSVM; $ c_{1}$, $ c_{2}$, $ c_{3}$, $ c_{4}$ and $ \\nu_{1}, \\nu_{2} $ in I$ \\nu $-TBSVM; $ c_{1}$, $ c_{2}$, $ c_{u}$, $c_{u}^{*}$ and $ \\varepsilon $ in $ \\mathfrak{U}_{LS} $-TSVM; $ c_{1}$, $c_{2}$, $\\mu_{1}$ and $\\mu_{2}$ in DMTSVM; $ c_{1},~c_{2},~\\mu_{1}$ and $\\mu_{2}$ in MTLS-TWSVM; $ c_{1}$, $c_{2}$, $c_{u}$, $c_{u}^{*}$, $\\mu_{1}$, $\\mu_{2}$ and $ \\varepsilon $ in $\\mathfrak{U}$MTSVM; and $ c_{1}$, $c_{2}$, $c_{u}$, $c_{u}^{*}$, $\\mu_{1}$, $\\mu_{2}$ and $ \\varepsilon $ in LS-$\\mathfrak{U}$MTSVM. Therefore, the following ranges are considered for selecting the optimal values of the parameters. In our experiments, the parameters $ c_{1}$, $c_{2}, c_{3}$, $c_{4}$, $c_{u}$, $c_{u}^{*}$, $\\mu_{1}$ and $\\mu_{2} $ are all selected from the set $ \\lbrace 2^i \\mid i=-10,\\dots,10\\rbrace $; $ \\nu_{1}$, $\\nu_{2}$ and $ \\varepsilon $ are selected from the set $ \\lbrace 0.1,\\dots,0.9\\rbrace $.\n\n\nIn our experiments, due to its better performance on linearly inseparable data sets, we use the Gaussian kernel function (that is, $K(x,y)=\\exp(-\\gamma \\|x-y\\|^2)$, $\\gamma > 0$).\nWe choose the value of the kernel parameter $ \\gamma $ from the range $ \\lbrace 2^i \\mid i=-10,\\dots,10\\rbrace$.\n\n\\subsection{Benchmark data sets}\nIn this subsection, we compare the multi-task learning algorithms on five benchmark data sets: Monk, Landmine, Isolet, Flags and Emotions. Table~\\ref{tab1} shows the details of these data sets. \n\n\\begin{table}[htp!]\n\t\\small\n\t\t\\centering\n\t\\caption{The information of the benchmark data sets.}\\label{tab1}\n\t\\begin{tabular}{cccc}\n\t\t\\toprule\n\t\tData set & $\\#$ Samples & $\\#$ Features & $\\#$ Tasks\\\\\n\t\t\\midrule\n\t\n\t\tMonk &432 &6 & 3 \\\\\n\t\n\t\tLandmine &9674 &9 & 4 \\\\\n\t\n\t\tIsolet& 7797&617 & 5 \\\\\n\t\n\t\tFlags & 194& 19 & 7\\\\\n\t\n\t\tEmotions &593 & 72 & 6 \\\\\n\t\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\nThese data sets are described as follows. 
\n\n\\begin{itemize}\n\n\t\\item \\textbf{Monk}: In July 1991, the monks of Corsendonk Priory were confronted with the 2nd European Summer School on Machine Learning, which was hosted at their convent. After listening to various learning algorithms for more than a week, they were confused: which algorithm would be the best? Which ones should you stay away from? As a consequence of this quandary, they devised the three MONK's challenges as a basic goal to which all learning algorithms should be compared.\n\t\\item \\textbf{Landmine}: This data collection was gathered from 29 landmine regions, each of which corresponded to a separate landmine field. The data set comprises nine characteristics, and each sample is labeled with a 1 for a landmine or a 0 for a cluster, reflecting positive and negative classifications. The first 15 areas correlate too strongly with foliated regions, whereas the latter 14 belong to bare ground or uninhabited places. We adopted the following procedures for the Landmine data set to get better and more equitable experimental outcomes. This data set has more negative labels than positive ones; as a result, we started by removing some negative samples to balance things out. In our experiment, we partition the data into four tasks.\nAs a result, we picked four densely foliated areas as a selection of positive data. We constructed an experimental data set using four places from bare ground or desert regions as a negative data subset. To produce the data set represented by Landmine in Table~\\ref{tab2}, we selected the four sites 1, 2, 3, and 4 from foliated regions and identified the four areas 16, 17, 18, and 19 from bare earth regions.\n\n\t\\item\\textbf{Isolet}: Isolet is a widely used data set in speech recognition that is collected as follows. One hundred fifty peoples speak 26 letters of the English alphabet twice. Therefore, 52 training samples are generated for each speaker. Each speaker is classified into one of five categories. Consequently, we have five sets of data sets that can be considered five tasks.\n\tOn the one hand, these five tasks are closely related because they are gathered from the same utterances. On the other hand, the five tasks differ because speakers in different groups pronounce the English letters differently.\n\tIn this paper, we selected four pairs of similar-sounding letters, including (O, U), (X, Y), (H, L) and (P, Q) for our experiments.\n\t\\item \\textbf{Flags}: The flags data set offers information about the flags of various countries. It contains 194 samples and 19 features. This data set is divided into seven tasks based on different colors. Each task is represented by a 19-dimensional feature vector derived from flag images of several nations. \n\tEach sample may have a maximum of seven labels, because the task of recognizing each label may be seen as connected. Hence, we consider it as a multi-task learning problem. Thus, this data set contains seven tasks. In Table~\\ref{tab2}, we compare the performance of the aforementioned multi-task learning methods on this data set.\n\t\\item\\textbf{Emotions}: Emotion recognition from text is one of the most difficult challenges in Natural Language Processing. The reason for this is the lack of labeled data sets and the problem's multi-class character. Humans experience a wide range of emotions, and it is difficult to gather enough information for each feeling, resulting in the issue of class imbalance. 
We have labeled data for emotion recognition here, and the goal is to construct an efficient model to identify the emotion.\nThe Emotions data collection comprises Twitter posts in English that depict six primary emotions: anger, contempt, fear, joy, sorrow, and surprise. All samples are labeled in six different ways. Each sample may contain more than one label (or emotion). Different emotion recognition tasks have similar characteristics and can be considered related tasks. So it may be viewed as a multi-task classification issue, with each task requiring the identification of a single kind of emotion. We use 50 samples from this data set to test various multi-task learning algorithms in this experiment. The results of comparing the performance of multi-task learning algorithms on this data set are reported in Table~\\ref{tab2}.\n\\end{itemize}\nAs mentioned, we compare the performance of the proposed methods with DMTSVM and MTLS-TWSVM in this subsection. Table~\\ref{tab2} shows the average accuracies (``Acc''), standard deviations (``Std'') and the running time (``Time'') on the five popular multi-task data sets. \nAs is seen in Table~\\ref{tab2}, the best performance is achieved by the proposed LS-$\\mathfrak{U}$MTSVM followed by $ \\mathfrak{U}$MTSVM. For example, on the Emotions data set, the accuracies for DMTSVM and MTLS-TWSVM are $65.33\\%$, and $66\\%$, respectively. In comparison, the $ \\mathfrak{U}$MTSVM and LS-$\\mathfrak{U}$MTSVM method achieved the accuracies $69.30\\%$, and $75.73\\%$, which performs better than the other two multi-task learning algorithms. As a result, due to the nature of multi-task learning, it is advantageous to combine data with Universum data throughout the training phase.\nFurthermore, the results seem to match our intuition that Universum data play an essential role in the performance of $ \\mathfrak{U}$MTSVM and LS-$\\mathfrak{U}$MTSVM. When $ \\mathfrak{U}$MTSVM and LS-$\\mathfrak{U}$MTSVM compare with DMTSVM and MTLS-TWSVM, we find that our proposed algorithms indeed exploit Universum data to improve the prediction accuracy and stability.\nTherefore, $ \\mathfrak{U}$MTSVM and LS-$\\mathfrak{U}$MTSVM perform better than other multi-task learning algorithms, i.e., DMTSVM and MTLS-TWSVM. 
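\n\nFor reference, the Universum points used throughout these comparisons were generated as described at the beginning of this section, by averaging pairs of samples drawn from the two classes; a minimal sketch of this construction is given below (the pairing and seed choices here are ours and are only illustrative).\n\\begin{verbatim}\nimport numpy as np\n\ndef make_universum(X_pos, X_neg, n_univ, seed=0):\n    # Average randomly paired positive and negative samples to form Universum points.\n    rng = np.random.default_rng(seed)\n    i = rng.integers(0, len(X_pos), size=n_univ)\n    j = rng.integers(0, len(X_neg), size=n_univ)\n    return 0.5 * (X_pos[i] + X_neg[j])\n\\end{verbatim}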
\n\nIn terms of learning speed, although our proposed methods are not the fastest ones due incorporating Universum data, they offer better accuracies at an acceptable time.\n\n\\begin{table}[htp!]\n\t\\small\n\t\\caption{Performance comparison of nonlinear multi-task learning methods on benchmark data sets.}\\label{tab2}\n\t\\begin{tabular}{ccccc}\n\t\t\\toprul\n\t\t & DMTSVM& $ \\mathfrak{U} $MTSVM& MTLS-TWSVM & LS-$\\mathfrak{U}$MTSVM\\\\\n\t\tData set & Acc ($\\%$)$ \\pm $Std &Acc ($\\%$) $ \\pm $Std& Acc ($\\%$)$ \\pm $Std & Acc ($\\%$)$ \\pm $Std \\\\\n\t\t& Time ($s$)&Time ($s$)& Time ($s$)& Time ($s$)\\\\\n\t\t\\midrul\n\t\tMonk&92.76$ \\pm $0.02&99.61$ \\pm $0.00&94.80$ \\pm $0.02&\\textbf{99.80$ \\pm $0.02}\\\\\n\t\t&27.82& 36.99&25.46&30.21\\\\\n\t\n\t\tLandmine&92.33$ \\pm $0.01&93$ \\pm $0.11 &94.05$ \\pm $0.05&\\textbf{94.50$ \\pm $0.00}\\\\\n\t\t&40.69&42.21 &36.69&39.88\\\\\n\t\n\t\tIsolet (O, U) &99.60$ \\pm $0.00&99.67$ \\pm $0.01 &99$ \\pm $0.02&\\textbf{99.77$ \\pm $0.01}\\\\\n\t\t&11.49&15.86&10.21&11.54\\\\\n\t\n\t\tIsolet (X, Y) &99.50$ \\pm $0.07&98.83$ \\pm $0.01 &99.60$ \\pm $0.00&\\textbf{100$ \\pm $0.00}\\\\\n\t\t&11.60&15.28 &10.39&11.99\\\\\n\t\n\t\tIsolet (H, L) &98.16$ \\pm $0.02&\\textbf{100$ \\pm $0.04} &99.83$ \\pm $0.01&\\textbf{100$ \\pm $0.00}\\\\\n\t\t&11.55&15.37&10.27&11.89\\\\\n\t\n\t\tIsolet (P, Q) &96.83$ \\pm $0.04&96.33$ \\pm $0.05 &97.83$ \\pm $0.03&\\textbf{100$ \\pm $0.00}\\\\\n\t\t&12.24&15.26 &10.32&11.88\\\\\n\t\n\t\tFlags &55.29$ \\pm $0.25&57.48$ \\pm $0.07 &57.13$ \\pm $0.26&\\textbf{59.57$ \\pm $0.28}\\\\\n\t\n\t\t&3.49& 3.88&2.52&3.12\\\\\n\t\n\t\tEmotions&65.33$ \\pm $0.19&69.30$ \\pm $0.21 &66$ \\pm $0.23&\\textbf{75.73$ \\pm $0.18}\\\\\n\t\n\t\t&2.25& 2.60&1.15&2.30\\\\\n\t\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\\subsection{Medical data sets}\nIn this subsection of our experiments, we focus on comparing our proposed methods and several classifier methods, including single-task and multi-task learning methods in linear and nonlinear states. Therefore, we select four popular medical data sets to test these algorithms, including Immunotherapy, Ljubljana Breast Cancer, \tBreast Cancer Coimbra, and Caesarian data sets. \nA summary of the data sets information is provided in Table~\\ref{tab3}. The details of the data sets are as follows.\n\n\\begin{table}[htp!]\n\t\\small\n\t\\caption{The information of medical data sets.}\\label{tab3}\n\t\t\\centering\n\t\\begin{tabular}{cccc}\n\t\t\\toprul\n\t\tData set & $\\#$ Samples & $\\#$ Features & $\\#$ Tasks\\\\\n\t\t\\midrul\n\t\n\t\tImmunotherapy &90 & 8& 3 \\\\\n\t\n\t\tLjubljana Breast Cancer &286 & 9& 5 \\\\\n\t\n\t\tBreast Cancer Coimbra &116 & 9 & 3 \\\\\n\t\n\t\tCaesarian &80 & 5 & 2 \\\\ \n\t\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\n\n\n\n\\begin{itemize}\n\t\\item \\textbf{Immunotherapy}: This data collection provides information regarding wart treatment outcomes of 90 individuals utilizing Immunotherapy. The Immunotherapy data set includes 90 instances, and each instance has eight features. The features of this data include sex, age, type, number of warts, induration diameter, area and the result of treatment.\n\tFor this data set, we partition the data into three tasks using the variable type: task 1 (type \\quo{1} = Common, 47 instances), task 2 (type \\quo{2} = Plantar, 22 instances), and task 3 (type \\quo{3} = Both, 21 instances). 
Since the kind of wart within each job are varied, this variable is also incorporated in our model.\n\t\n\t\\item \\textbf{Ljubljana Breast Cancer}: Nowadays, breast cancer is one of the most common malignancies in women, which has captured the attention of people all around the globe. The illness is the leading cause of mortality in women aged 40 to 50, accounting for around one-fifth of all fatalities in this age range. Every year, more than 14,000 individuals die, and the number is increasing \\cite{wang2021data}. Thus, there remains a need to remove the cancer early to reduce recurrence. Since recurrence within five years of diagnosis is correlated with the chance of death, understanding and predicting recurrence susceptibility is critical. The Ljubljana breast cancer data set provides 286 data points on the return of breast cancer five years following surgical removal of a tumor. After deleting nine instances where values are missing, we are left with 277. Each data point has one class label (for recurrence or no-recurrence events) and nine attributes, including age, menopausal status, tumor size, invasive nodes, node caps, degree of malignancy, breast (left, right), breast quadrant (left-up, left-low, right-up, right-low, central), and irradiation (yes, no).\n\tHere, we divide the data into five tasks using the variable tumor size: task 1 (0 $ \\leq $tumor size$ \\leq $19), task 2 (20 $ \\leq $ tumor size $ \\leq $24), task 3 (25 $ \\leq $ tumor size $ \\leq $29), task 4 (30 $ \\leq $ tumor size $ \\leq $34), and task 5 (35 $ \\leq $tumor size $ \\leq $54). \n\t\n\t\\item \\textbf{Breast Cancer Coimbra}: The Breast Cancer Coimbra data set is the second breast cancer data set we utilize for comparison. The Gynecology Department of the Coimbra Hospital and University Center (CHUC) in Portugal collected this data set between 2009 and 2013. The Breast Cancer Coimbra data set contains 116 instances, each with nine features. This data set consists of 9 quantitative attributes and a class label attribute indicating if the clinical result is positive for existing cancer or negative (patient or healthy). Clinical characteristics were observed or assessed in 64 patients with breast cancer and 52 healthy controls. Age, BMI, insulin, glucose, HOMA, leptin, resistin, adiponectin, and MCP-1 are all quantitative characteristics. The features are anthropometric data and measurements acquired during standard blood analysis. The qualities have the potential to be employed as a biomarker for breast cancer.\n\tIn this experiment, we partition the data set into three tasks using the feature BMI. Based on tissue mass (muscle, fat, and bone) and height, the BMI is a simple rule of thumb to classify a person as underweight, normal weight, overweight, or obese. Underweight (less than 18.5 $kg\/m^2$), normal weight (18.5 $kg\/m^2$ to 24.9 $kg\/m^2$), overweight (25 $kg\/m^2$ to 29.9 $kg\/m^2$), and obese (30 $kg\/m^2$ or more) are the four major adult BMI categories.\n\tHence, we consider that the first task is underweight people, the second task is normal-weight people, and the third task is overweight and obese.\n\t\n\t\\item\\textbf{Caesarian}: This data set, which aims to deliver via cesarean section or natural birth, provides information on 80 pregnant women who have had the most extreme delivery complications in the medical field. The Caesarian data set includes 80 instances, and each instance has\n\tfive features. 
The features of this data set include age, delivery number, blood pressure, delivery time, and heart problem. The heart-problem\n\tfeature in the Caesarian data set takes two values. We separate the data into two tasks using the heart-problem variable: task 1, the patient has\n\ta heart problem; task 2, the patient does not have a heart problem.\n\\end{itemize}\nTo analyze the performance of our proposed methods on these medical data sets, we compared our algorithms with five single-task and multi-task learning algorithms, i.e., TBSVM, I$ \\nu $-TBSVM, $ \\mathfrak{U}_{LS}$-TSVM, DMTSVM, and MTLS-TWSVM.\n The results in Tables~\\ref{tab4} and~\\ref{tab5} show that our algorithms outperform the other algorithms in both the linear and nonlinear settings. This is because the proposed methods incorporate Universum data into the learning process to refine the classification decision boundaries. Moreover, the proposed methods train all tasks simultaneously and can therefore exploit the underlying information shared among the tasks to improve performance.\n\n\\begin{table}[htp!]\n\t\\small\n\t\\caption{Performance comparison of linear single-task and multi-task learning methods on medical data sets.}\\label{tab4}\n\t\\scalebox{0.85}{\n\t\\begin{tabular}{ccccc}\n\t\t\\toprule\n\t\t & Immunotherapy& Ljubljana Breast Cancer & Breast Cancer Coimbra &Caesarian \\\\\n\t\tMethods & Acc ($\\%$)$ \\pm $Std &Acc ($\\%$) $ \\pm $Std& Acc ($\\%$)$ \\pm $Std & Acc ($\\%$)$ \\pm $Std \\\\\n\t\t& Time ($s$)&Time ($s$)& Time ($s$)& Time ($s$)\\\\\n\t\t\\midrule\n\t\tTBSVM&77.81$ \\pm $0.09 &75.11 $ \\pm $0.06&72.36$ \\pm $0.02&69.09$ \\pm $0.03 \\\\\n\t\t&1.41 &1.49 &1.40&1.42 \\\\\n\t\n\t\tI$\\nu$-TBSVM &79.97$ \\pm $0.15& 73.30$ \\pm $0.03&73.26$ \\pm $0.15&65.01$ \\pm $0.06 \\\\\n\t\t&1.63&3.22 &1.76& 1.58\\\\\n\t\n\t\t$ \\mathfrak{U}_{LS} $-TSVM&78.92$ \\pm $0.10& 74.72$ \\pm $0.08 &79.31$ \\pm $0.09&71.59$ \\pm $0.11\\\\\n\t\t&0.04&0.04&0.45&0.04\\\\\n\t\n\t\tDMTSVM & 81.29$ \\pm $0.06&71.14$ \\pm $0.09 &75.43$ \\pm $0.20&63.86$ \\pm $0.12\\\\\n\t\t&1.49& 1.60&1.50&1.48\\\\\n\t\n\t\t$ \\mathfrak{U} $MTSVM &84.63$ \\pm $0.07&73.26$ \\pm $0.08 &80.24$ \\pm $0.13&71.67$ \\pm $0.13\\\\\n\t\t&1.53&1.64 &1.55&1.48\\\\\n\t\n\t\tMTLS-TWSVM &86.11$ \\pm $0.08&75.09$ \\pm $0.19 &83.39$ \\pm $0.11&76.95$ \\pm $0.11\\\\\n\t\t&0.20&0.25 &0.19&0.16\\\\\n\t\n\t\tLS-$\\mathfrak{U}$MTSVM &\\textbf{88.11$ \\pm $0.11}& \\textbf{75.41$ \\pm $0.00} &\\textbf{85.37$ \\pm $0.12}&\\textbf{78.52$ \\pm $0.12}\\\\\n\t\n\t\t&0.25& 0.33&0.25&0.20\\\\\n\t\n\t\t\\bottomrule\n\t\\end{tabular}}\n\\end{table}\n\n\\begin{table}[htp!]\n\t\\small\n\t\\caption{Performance comparison of nonlinear single-task and multi-task learning methods on medical data sets.}\\label{tab5}\n\\scalebox{0.85}{\n\t\\begin{tabular}{ccccc}\n\t\t\t\\toprule\n\t\t & Immunotherapy& Ljubljana Breast Cancer & Breast Cancer Coimbra &Caesarian \\\\\n\t\tMethods & Acc ($\\%$)$ \\pm $Std &Acc ($\\%$) $ \\pm $Std& Acc ($\\%$)$ \\pm $Std & Acc ($\\%$)$ \\pm $Std \\\\\n\t\t& Time ($s$)&Time ($s$)& Time ($s$)& Time ($s$)\\\\\n\t\t\t\\midrule\n\t\tTBSVM&78.88$ \\pm $0.02 &72.54$ \\pm $0.05&60.39$ \\pm $0.07&71.35$ \\pm $0.04 \\\\\n\t\t&1.50 &1.49 &1.46&1.47 \\\\\n\t\n\t\tI$\\nu$-TBSVM &78.92$ \\pm $0.01& 72.90$ \\pm $0.03&60.87$ \\pm $0.17&71.14$ \\pm $0.14 \\\\\n\t\t&1.43&1.59 &1.45&1.46 \\\\\n\t\n\t\t$ \\mathfrak{U}_{LS} $-TSVM&78.92$ \\pm $0.02&74.74$ \\pm $0.37&63.84$ \\pm $0.06&72.28$ \\pm $0.02\\\\\n\t\t&0.06&0.11&0.07&0.07\\\\\n\t\tDMTSVM &80.25$ \\pm $0.05&73.71$ \\pm $0.09 
&63.74$ \\pm $0.09&72.56$ \\pm $0.13\\\\\n\t\t&1.58& 2.14&1.67&1.56\\\\\n\t\n\t\t$ \\mathfrak{U} $MTSVM &80.25$ \\pm $0.05&75.62$ \\pm $0.11 &65.71$ \\pm $0.19&74.14$ \\pm $0.10\\\\\n\t\t&1.53&2.15 &1.69&1.57\\\\\n\t\n\t\tMTLS-TWSVM &80.22$ \\pm $0.25&75.32$ \\pm $0.71 &65.33$ \\pm $0.11&73.15$ \\pm $0.09\\\\\n\t\t&0.25&0.71 &0.30&0.21\\\\\n\t\n\t\tLS-$\\mathfrak{U}$MTSVM &\\textbf{80.56$ \\pm $0.05}&\\textbf{77.15$ \\pm $0.09} &\\textbf{67.17$ \\pm $0.14}&\\textbf{76.74$ \\pm $0.11}\\\\\n\t\n\t\t&0.38& 1.55&0.46&0.27\\\\\n\t\n\t\t\\bottomrule\n\t\\end{tabular}\n}\n\\end{table}\n\n\\section{Conclusion}\nIn this paper, we introduced the twin support vector machine in the framework of multi-task learning with Universum data and proposed a new model called $\\mathfrak{U}$MTSVM. We suggested two approaches for solving the new model. In the first approach, we solved $\\mathfrak{U}$MTSVM through its dual problem, which is a quadratic programming problem. In the second approach, we proposed a least-squares version of $\\mathfrak{U}$MTSVM, called LS-$\\mathfrak{U}$MTSVM, which requires solving only two systems of linear equations. Comprehensive experiments on several popular multi-task data sets and medical data sets demonstrated the effectiveness of the proposed methods in terms of classification performance. The experiments confirmed that our algorithms achieve better results than three single-task learning algorithms and two multi-task learning algorithms. \n\n\\subsubsection*{Acknowledgments} \nH.~Moosaei and M.~Hlad\\'{\\i}k were supported by the Czech Science Foundation Grant P403-22-11117S. \n In addition, the work of H.~Moosaei was supported by the Center for Foundations of Modern Computer Science (Charles Univ.\\ project UNCE\/SCI\/004). \n \n\\section*{Conflict of interest}\nThe authors declare that they have no conflict of interest.\n\n\\bibliographystyle{spmpsci}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
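For concreteness, the BMI-based task construction described for the Breast Cancer Coimbra data set above can be reproduced with a few lines of code. The sketch below is illustrative only and is not part of the reported experiments: it assumes the data are available as a CSV file with a column named BMI (the file name and column name are hypothetical) and splits the instances into the three tasks used in the experiments (underweight; normal weight; overweight or obese).
\begin{verbatim}
import pandas as pd

def split_into_tasks(df, feature, bins, labels):
    # Partition a single data set into task-specific subsets by binning `feature`.
    task_id = pd.cut(df[feature], bins=bins, labels=labels, right=False)
    return {label: df[task_id == label] for label in labels}

# Hypothetical usage for the Breast Cancer Coimbra data:
# task 1 = underweight, task 2 = normal weight, task 3 = overweight or obese.
coimbra = pd.read_csv("coimbra.csv")   # assumed file containing a 'BMI' column
tasks = split_into_tasks(coimbra, "BMI",
                         bins=[0.0, 18.5, 25.0, float("inf")],
                         labels=["task1", "task2", "task3"])
for name, subset in tasks.items():
    print(name, len(subset))
\end{verbatim}
The same helper applies to the other partitions (wart type, tumor-size range, heart problem) by passing a different feature and different bin edges, or by grouping directly on a categorical column.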
+{"text":"\\section{Introduction}\nNuclear mass is one of the most fundamental properties of nuclei.\nIt is of great importance not only in the nuclear physics but also in the astrophysics~\\cite{Lunney2003RMP, Blaum2006PR}.\nFor example, the masses of nuclei widely ranged from the valley of stability to the vicinity of neutron drip line are involved in simulating the rapid neutron capture (r-process) of stellar nucleosynthesis~\\cite{Arnould2007PR}.\nAlthough considerable achievements have been made in mass measurement~\\cite{Wang2017CPC}, vast amount of nuclei on the neutron-rich side away from valley of stability are still beyond the experimental capability in the foreseeable future.\nTherefore, reliable nuclear models for high-precision description of nuclear masses are strongly required.\n\nDuring the past decades, various global nuclear models have been proposed to describe the nuclear mass, including the finite-range droplet model (FRDM)~\\cite{Moller1995ADaNDT, Moeller2016ADNDT}, the semi-empirical Weizs$\\mathrm{\\ddot{a}}$cker-Skyrme (WS) model~\\cite{Wang2010PRC, Wang2014PLB}, the non-relativistic~\\cite{Goriely2009PRLa, Goriely2013PRC, Erler2012N, Goriely2016TheEuropeanPhysicalJournalA52202} and relativistic~\\cite{Geng2005PoTP, Afanasjev2013PLB, Agbemava2014PRC,Zhang2014FoP, Lu2015PRC} density functional theories (DFTs), etc.\nThe nuclear DFTs start from universal density functionals containing a few parameters determined by fitting to the properties of finite nuclei or nuclear matter.\nThey can describe the nuclear masses, ground and excited state properties in a unified way~\\cite{Ring1996PPNP, Bender2003RMP, Vretenar2005PR}.\nIn particular, due to the consideration of the Lorentz symmetry, the relativistic or covariant density functional theory (CDFT) naturally includes the nucleonic spin degree of freedom and the time-odd mean fields, which play an essential role in describing the moments of inertia for nuclear rotations~\\cite{Koenig1993PRL, Afanasjev2000NPA, Afanasjev2000PRC, Zhao2012PRC}.\nUp to now, the CDFT has received wide attentions because of its successful description of many nuclear phenomena~\\cite{Ring1996PPNP, Vretenar2005PR, Meng2006PiPaNP, Niksic2011PiPaNP, Meng2013FoP, meng2016relativistic, Zhao2018Int.J.Mod.Phys.E1830007}.\n\nIn the framework of CDFT, the masses for over 7000 nuclei with $8\\leq Z\\leq100$ up to the proton and neutron drip lines were investigated based on the axial relativistic mean field (RMF) theory~\\cite{Geng2005PoTP}.\nLater on, to explore the location of the proton and neutron drip lines, a systematic investigation has been performed for even-even nuclei within the axial relativistic Hartree-Bogoliubov (RHB) theory~\\cite{Afanasjev2013PLB, Agbemava2014PRC, Afanasjev2015PRC}.\nVery recently, the ground-state properties of nuclei with $8\\leq Z\\leq 120$ from the proton drip line to the neutron drip line have been calculated using the spherical relativistic continuum Hartree-Bogoliubov (RCHB) theory, in which the couplings between the bound states and the continuum can be considered properly~\\cite{Xia2018ADaNDT}.\nThe root-mean-square (rms) deviation with respect to the experimental nuclear masses in these pure CDFT calculation is typically around several MeV.\nTo achieve a higher precision, one needs to go beyond the mean-field approximation and consider the beyond-mean-field dynamical correlation energies (DCEs).\n\nIn Ref.~\\cite{Zhang2014FoP}, Zhang \\emph{et al.} have carried out a global calculation of the binding energies for 
575 even-even nuclei ranging from $Z=8$ to $Z=108$ based on the axial RMF theory, with the Bardeen-Cooper-Schrieffer (BCS) approximation adopted to treat the pairing correlations.\nIn this axial RMF+BCS calculation, the DCEs, namely the rotational and vibrational correlation energies, were obtained by the cranking prescription.\nAfter including the DCEs, the rms deviation for the binding energies of the 575 even-even nuclei reduces from 2.58 MeV to 1.24 MeV.\nLater on, the DCEs of these nuclei were revisited in Ref.~\\cite{Lu2015PRC} using the five-dimensional collective Hamiltonian (5DCH) method with the collective parameters determined by the CDFT calculations~\\cite{Niksic2009PRC, Li2009PRC}.\nThe 5DCH method takes into account the DCEs in a more proper way, and the resulting rms deviation reduces from 2.52 MeV to 1.14 MeV~\\footnote{Note that in Ref.~\\cite{Zhang2014FoP}, the adopted experimental mass data are taken from Ref.~\\cite{Audi2003NuclearPhysicsA729337-676}, while in Ref.~\\cite{Lu2015PRC}, the experimental mass data are from Ref.~\\cite{Audi2012ChinesePhysicsC361287--1602}.\nMoreover, compared to the theoretical results shown in Ref.~\\cite{Zhang2014FoP}, the energies associated with triaxial deformation are further included in Ref.~\\cite{Lu2015PRC}.}.\n\nThe studies shown in Refs.~\\cite{Zhang2014FoP, Lu2015PRC} demonstrate that the inclusion of the DCEs can significantly improve the description of the nuclear masses.\nSo far, the inclusion of DCEs in the CDFT has been confined to nuclei with known masses, and the DCEs of most neutron-rich nuclei, which are crucial for simulating the r-process, remain uninvestigated.\nTherefore, it is necessary to extend the investigation from nuclei with known masses to the boundaries of the nuclear landscape.\nMeanwhile, the pairing correlations are treated by the BCS approximation in Refs.~\\cite{Zhang2014FoP, Lu2015PRC}.\nFor the description of nuclei around the neutron drip line, this approximation is questionable because the continuum effect cannot be taken into account properly~\\cite{Dobaczewski1984NPA}.\nIn contrast, methods based on the Bogoliubov transformation provide a better description of the pairing correlations in weakly bound nuclei.\nTherefore, in the present paper, the masses of even-even nuclei from the O to Sn isotopes, ranging from the proton drip line to the neutron drip line, are calculated within the triaxial relativistic Hartree-Bogoliubov theory~\\cite{PhysRevC.81.054318}, and the beyond-mean-field quadrupole DCEs are included by the 5DCH method.\n\n\\section{Numerical details}\nTo this end, large-scale deformation-constrained triaxial RHB calculations are carried out to generate the mean-field states in the whole $(\\beta,\\gamma)$ plane.\nThe well-known density functional PC-PK1~\\cite{Zhao2010PRC} is adopted in the particle-hole channel.\nThis density functional particularly improves the description of the isospin dependence of binding energies and has been successfully used in describing the Coulomb displacement energies between mirror nuclei~\\cite{Sun2011Sci.ChinaSer.G-Phys.Mech.Astron.210}, nuclear masses~\\cite{Zhao2012Phys.Rev.C64324,Lu2015PRC}, quadrupole moments~\\cite{Zhao2014Phys.Rev.C11301,Yordanov2016Phys.Rev.Lett.32501,Haas2017EPL62001}, superheavy nuclei~\\cite{Zhang2013Phys.Rev.C54324,Lu2014Phys.Rev.C14323,Agbemava2015Phys.Rev.C54310,Li2015Front.Phys.268}, nuclear shape phase transitions~\\cite{Quan2018Phys.Rev.C31301, Quan2017PRC}, magnetic and antimagnetic 
rotations~\\cite{Zhao2011PRL,Zhao2011PLB,Meng2013FoP,Meng2016PhysicaScripta53008}, chiral rotations~\\cite{Zhao2017Phys.Lett.B1}, etc.\nA finite-range separable pairing force with $G=-728$ MeV~\\cite{Tian2009PLB} is used in the particle-particle channel.\nThe triaxial RHB equation is solved by expansion in a three-dimensional harmonic oscillator basis in Cartesian coordinates, with 12 and 14 major shells for nuclei with $Z<20$ and $20\\leq Z\\leq50$, respectively.\nThe obtained quasiparticle energies and wave functions are used to calculate the mass parameters, moments of inertia, and collective potentials in the 5DCH, which are functions of the quadrupole deformation parameters $\\beta$ and $\\gamma$.\nThe DCE $E_{\\mathrm{corr}}$ is defined as the energy difference between the lowest mean-field state and the $0_1^+$ state of the 5DCH.\n\n\\section{Results and discussion}\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=11cm]{Figure1.pdf}\\\\\n \\caption{Even-even nuclei from O to Sn isotopes predicted by the triaxial RHB approach with (panel (b)) and without (panel (a)) dynamical correlation energies.\n Discrepancies between the calculated binding energies and the data~\\cite{Wang2017CPC} are denoted by colors.\n The proton and neutron drip lines predicted by the spherical RCHB (PC-PK1)~\\cite{Xia2018ADaNDT}, axial RHB (DD-PC1)~\\cite{Agbemava2014PRC} and axial RMF+BCS (TMA)~\\cite{Geng2005PoTP} are also plotted for comparison.}\\label{1}\n\\end{figure}\nThe bound nuclear regions from O to Sn isotopes predicted by the triaxial RHB approach with and without DCEs are shown in Figure~\\ref{1}.\nThe discrepancies of the calculated binding energies with respect to the data are scaled by colors.\nThe binding energies calculated by the triaxial RHB approach shown in panel (a) are given by the binding energies of the lowest mean-field states, while in panel (b), the DCEs are taken into account.\n\nIn the triaxial RHB calculations without DCEs, it is found that the binding energies are systematically underestimated.\nMost of the deviations are in the range of 0.5 MeV $\\sim$ 4.5 MeV, resulting in an rms deviation of 2.50 MeV.\nBy including the DCEs, the underestimation of the binding energies is remedied significantly, and the rms deviation is reduced from 2.50 MeV to 1.59 MeV.\nHowever, in the region $(N,Z)\\sim(24,12)$, large deviations remain even though the DCEs have been considered.\nThis might be associated with the complex shell evolution around this region.\nTo obtain a better description of the binding energies in this region, the tensor interaction~\\cite{Long2007Phys.Rev.C34314} may need to be included in the adopted density functional, which is beyond the scope of the present investigations.\n\nIn order to estimate the number of bound nuclei from O to Sn isotopes, the two-proton and two-neutron drip lines predicted by the present triaxial RHB approach with and without DCEs are also plotted in Figure~\\ref{1}.\nThe predicted number of bound even-even nuclei between the proton and neutron drip lines from O to Sn isotopes without DCEs is 569.\nThe inclusion of DCEs has little impact on the proton and neutron drip lines, and the corresponding number of bound nuclei is 564.\nFor comparison, the drip lines predicted by the spherical RCHB (PC-PK1)~\\cite{Xia2018ADaNDT}, axial RHB (DD-PC1)~\\cite{Agbemava2014PRC}, and axial RMF+BCS (TMA)~\\cite{Geng2005PoTP} are also shown.\nIt is found that the theoretical differences for the proton drip lines are rather small.\nHowever, the neutron drip lines predicted by different approaches differ 
considerably, and the differences increase with the mass number.\nThe neutron drip line predicted by the triaxial RHB approach lies between those predicted by the axial RHB and spherical RCHB approaches.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=11cm]{Figure2.pdf}\\\\\n \\caption{Contour map of the dynamical correlation energies $E_\\mathrm{corr}$ calculated by the 5DCH model based on triaxial RHB calculations as functions of neutron and proton numbers.}\\label{2}\n\\end{figure}\nFigure \\ref{2} displays the contour map of the dynamical correlation energies $E_\\mathrm{corr}$ calculated by the 5DCH model based on triaxial RHB calculations.\nThe calculated $E_\\mathrm{corr}$ ranges from 0 to 5 MeV, and mainly lies in the range of $2.0$--$4.0$ MeV.\nDue to the shape fluctuations, the dynamical correlation energies are pronounced for nuclei around $Z \\sim 32, 40$ and $N \\sim 34, 60$.\nSimilar to the finding reported in Ref.~\\cite{Lu2015PRC}, the dynamical correlation energies for the semi-magic nuclei with $Z=28, 50$ and $N = 28, 82$ are nonzero or even rather large.\nThis is because the potential energy surfaces of these nuclei are either soft or exhibit shape coexistence.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=10cm]{Figure3.pdf}\\\\\n \\caption{The dynamical correlation energies calculated by the 5DCH based on triaxial RHB (circles) in comparison with those based on triaxial RMF+BCS~\\cite{Lu2015PRC} (triangles).}\\label{3}\n\\end{figure}\nIn Ref.~\\cite{Lu2015PRC}, the binding energies of 575 even-even nuclei in the region of $8\\leq Z\\leq108$ were calculated using the 5DCH method in the framework of triaxial RMF+BCS.\nFor the 228 nuclei with $8\\leq Z\\leq50$ in Ref.~\\cite{Lu2015PRC}, the rms deviation with respect to the data is 1.23 MeV, whereas the rms deviation of the present calculations for these nuclei is 1.47 MeV.\nIt is found that the lowest mean-field binding energies given by these two calculations differ little, so the differences mainly come from the $E_{\\mathrm{corr}}$.\n\nThe dynamical correlation energies $E_{\\mathrm{corr}}$ calculated by the 5DCH based on triaxial RHB and triaxial RMF+BCS are plotted in Figure \\ref{3} as functions of the neutron number $N$.\nEven though the systematics of $E_{\\mathrm{corr}}$ are similar for both calculations, the triaxial RHB-based $E_{\\mathrm{corr}}$ are systematically larger than those based on triaxial RMF+BCS.\nThe rms deviation between these two results is 0.53 MeV, and this leads to the overall difference in the binding energies.\nThe systematic difference of $E_{\\mathrm{corr}}$ might originate from the different treatments of the pairing correlations.\nThe pairing correlations in the present calculations are treated by the Bogoliubov transformation, while in Ref.~\\cite{Lu2015PRC} they are treated by the BCS approximation.\nDifferent pairing properties may lead to different zero-point energies, and thus result in different dynamical correlation energies $E_{\\mathrm{corr}}$~\\cite{Lu2015PRC}.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=11cm]{Figure4.pdf}\\\\\n \\caption{Contour map of the quadrupole deformation $\\beta$ calculated by the triaxial RHB approach as functions of the neutron and proton numbers. 
Nuclei with triaxial deformation are denoted by black triangles.}\n \\label{4}\n\\end{figure}\nThe contour map of the quadrupole deformation $\\beta$ calculated by the triaxial RHB approach is presented in Figure~\\ref{4}.\nFor each nucleus, the quadrupole deformation corresponds to the energy minimum on the whole $(\\beta,\\gamma)$ plane.\nHere, $\\beta$ is defined as positive for $0^\\circ\\leq\\gamma<30^\\circ$ and negative for $30^\\circ<\\gamma\\leq60^\\circ$.\nIn general, the nuclei near magic numbers possess small or vanishing deformation.\nHowever, it is found that single magic numbers do not enforce sphericity, especially for neutron-rich nuclei.\nFor example, neutron-rich isotones with $N = 28, Z<20$ and $N = 50, Z<28$ show remarkable deformation.\nIn addition, the deformation develops when moving away from the magic numbers either isotopically or isotonically.\nThere are four regions of large deformation located at $(N, Z)\\sim (24,14), (34,32), (60,40)$ and $(94,46)$.\nThese regions with large deformation correspond to the regions with large DCEs shown in Figure~\\ref{2}.\n\nThe nuclei with triaxial deformation, i.e., $\\gamma\\neq 0^\\circ, 60^\\circ$, are also shown in Figure~\\ref{4}.\nThere are 15 nuclei with triaxial deformation, and most of them belong to the Ge, Mo, and Ru isotopes.\nOur theoretical investigations thus provide good candidates for experimental studies of triaxial deformation.\n\n\\section{Summary}\nIn summary, the nuclear masses of even-even nuclei with $8\\leq Z\\leq 50$ ranging from the proton drip line to the neutron drip line are systematically investigated using the triaxial relativistic Hartree-Bogoliubov theory with the relativistic density functional PC-PK1, and the quadrupole dynamical correlation energies are taken into account by solving the five-dimensional collective Hamiltonian.\nWith the inclusion of dynamical correlation energies, the description of the masses of 252 nuclei by the triaxial relativistic Hartree-Bogoliubov theory is improved significantly, with the rms deviation reduced from 2.50 MeV to 1.59 MeV.\nIt is found that the dynamical correlation energies have little influence on the positions of the proton and neutron drip lines, and the predicted numbers of bound even-even nuclei between the proton and neutron drip lines without and with dynamical correlation energies are 569 and 564, respectively.\nIn the present calculations, the obtained dynamical correlation energies range from 0 to 5 MeV and are slightly larger than the results of previous work~\\cite{Lu2015PRC}.\nThe discrepancies might be caused by the different treatments of the pairing correlations, which would lead to different zero-point energies, and thus different dynamical correlation energies.\nThe contour map of the quadrupole deformations $\\beta$ and $\\gamma$, in connection with the dynamical correlation energies, is also discussed in detail.\nThere are 15 nuclei predicted to have triaxial deformation, which provide good candidates for experimental studies of triaxiality.\nThe final aim of this project is to build a complete nuclear mass table including both the triaxial degree of freedom and dynamical correlation energies. The work along this line is still in progress.\n\n\\ack\nThe authors are grateful to Prof. Zhi Pan Li and Prof. PengWei Zhao for providing\nthe numerical computation codes, for fruitful discussions, and for critical readings\nof our manuscript.\nThis work was partly supported by the National Key R\\&D Program of China (Contract No. 2018YFA0404400 and No. 
2017YFE0116700) and\nthe National Natural Science Foundation of China (Grants No. 11621131001, No. 11875075, No. 11935003, and No. 11975031).\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
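For reference, the two quantities used repeatedly in the comparisons above can be written explicitly. The first expression restates the definition of the dynamical correlation energy given in the text; the second is the standard root-mean-square deviation assumed here for the quoted mass comparisons, with the symbols $E^{\min}_{\mathrm{MF}}$, $B^{\mathrm{cal}}_i$, and $B^{\mathrm{exp}}_i$ introduced only for illustration:
\begin{equation*}
  E_{\mathrm{corr}} = E^{\min}_{\mathrm{MF}} - E_{\mathrm{5DCH}}(0_1^+),
  \qquad
  \sigma_{\mathrm{rms}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(B^{\mathrm{cal}}_i - B^{\mathrm{exp}}_i\right)^2},
\end{equation*}
where $E^{\min}_{\mathrm{MF}}$ is the energy of the lowest mean-field state, $E_{\mathrm{5DCH}}(0_1^+)$ is the energy of the $0_1^+$ state of the 5DCH, and the sum runs over the $N$ nuclei with measured masses entering a given comparison.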