Dataset schema (column name, type, and value-length range):
  arxiv_id    string, 7 to 11 characters
  title       string, 7 to 243 characters
  abstract    string, 3 to 2.79k characters
  link        string, 21 to 49 characters
  authors     list, 1 to 451 entries
  updated     string, 20 characters
  published   string, 20 characters
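A minimal Python sketch of how one record in this listing could be represented, following the schema above. The ArxivRecord dataclass is illustrative only (it is not part of the dataset); the example values are copied from the first entry below, with the abstract truncated.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArxivRecord:
    arxiv_id: str       # e.g. "2407.04667" (7-11 characters)
    title: str
    abstract: str
    link: str           # PDF URL on arxiv.org
    authors: List[str]
    updated: str        # ISO-8601 timestamp, 20 characters
    published: str      # ISO-8601 timestamp, 20 characters

# Example populated from the first entry in this listing (abstract truncated).
example = ArxivRecord(
    arxiv_id="2407.04667",
    title="The diameter of a stochastic matrix: A new measure for "
          "sensitivity analysis in Bayesian networks",
    abstract="Bayesian networks are one of the most widely used classes "
             "of probabilistic models ...",
    link="http://arxiv.org/pdf/2407.04667v1",
    authors=["Manuele Leonelli", "Jim Q. Smith", "Sophia K. Wright"],
    updated="2024-07-05T17:22:12Z",
    published="2024-07-05T17:22:12Z",
)
```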
2407.04667
The diameter of a stochastic matrix: A new measure for sensitivity analysis in Bayesian networks
Bayesian networks are one of the most widely used classes of probabilistic models for risk management and decision support because of their interpretability and flexibility in including heterogeneous pieces of information. In any applied modelling, it is critical to assess how robust the inferences on certain target variables are to changes in the model. In Bayesian networks, these analyses fall under the umbrella of sensitivity analysis, which is most commonly carried out by quantifying dissimilarities using Kullback-Leibler information measures. In this paper, we argue that robustness methods based instead on the familiar total variation distance provide simple and more valuable bounds on robustness to misspecification, which are both formally justifiable and transparent. We introduce a novel measure of dependence in conditional probability tables called the diameter to derive such bounds. This measure quantifies the strength of dependence between a variable and its parents. We demonstrate how such formal robustness considerations can be embedded in building a Bayesian network.
http://arxiv.org/pdf/2407.04667v1
[ "Manuele Leonelli", "Jim Q. Smith", "Sophia K. Wright" ]
2024-07-05T17:22:12Z
2024-07-05T17:22:12Z
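As a rough illustration of the quantities described in 2407.04667 above, the sketch below computes total variation distances between rows of a conditional probability table and takes the "diameter" to be the largest such distance over row pairs. That definition is an assumption made here for illustration, not a statement of the paper's exact construction.

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def diameter(cpt):
    """Largest total variation distance between any two rows of a
    row-stochastic conditional probability table (assumed definition)."""
    rows = np.asarray(cpt)
    return max(
        total_variation(rows[i], rows[j])
        for i in range(len(rows))
        for j in range(i + 1, len(rows))
    )

# A child variable with two parent configurations (rows) and three states.
cpt = [[0.7, 0.2, 0.1],
       [0.1, 0.3, 0.6]]
print(diameter(cpt))  # 0.6 -> the child depends strongly on its parent
```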
2407.04663
Unsupervised 4D Cardiac Motion Tracking with Spatiotemporal Optical Flow Networks
Cardiac motion tracking from echocardiography can be used to estimate and quantify myocardial motion within a cardiac cycle. It is a cost-efficient and effective approach for assessing myocardial function. However, ultrasound imaging has the inherent characteristics of spatially low resolution and temporally random noise, which lead to difficulties in obtaining reliable annotations. Thus it is difficult to perform supervised learning for motion tracking. In addition, there is no end-to-end unsupervised method currently in the literature. This paper presents a motion tracking method where unsupervised optical flow networks are designed with a spatial reconstruction loss and a temporal-consistency loss. Our proposed loss functions make use of pair-wise and temporal correlation to estimate cardiac motion from a noisy background. Experiments using a synthetic 4D echocardiography dataset have shown the effectiveness of our approach and its superiority over existing methods in both accuracy and running speed. To the best of our knowledge, this is the first work that uses an unsupervised end-to-end deep learning optical flow network for 4D cardiac motion tracking.
http://arxiv.org/pdf/2407.04663v1
[ "Long Teng", "Wei Feng", "Menglong Zhu", "Xinchao Li" ]
2024-07-05T17:18:46Z
2024-07-05T17:18:46Z
2407.04662
Multitaper mel-spectrograms for keyword spotting
Keyword spotting (KWS) is one of the speech recognition tasks most sensitive to the quality of the feature representation. However, the research on KWS has traditionally focused on new model topologies, putting little emphasis on other aspects like feature extraction. This paper investigates the use of the multitaper technique to create improved features for KWS. The experimental study is carried out for different test scenarios, windows and parameters, datasets, and neural networks commonly used in embedded KWS applications. Experimental results confirm the advantages of using the proposed improved features.
http://arxiv.org/pdf/2407.04662v1
[ "Douglas Baptista de Souza", "Khaled Jamal Bakri", "Fernanda Ferreira", "Juliana Inacio" ]
2024-07-05T17:18:25Z
2024-07-05T17:18:25Z
2407.01864
Research on target detection method of distracted driving behavior based on improved YOLOv8
With the development of deep learning technology, the detection and classification of distracted driving behaviour require higher accuracy. Existing deep learning-based methods are computationally intensive and parameter redundant, limiting the efficiency and accuracy in practical applications. To solve this problem, this study proposes an improved YOLOv8 detection method based on the original YOLOv8 model by integrating the BoTNet module, GAM attention mechanism and EIoU loss function. By optimising the feature extraction and multi-scale feature fusion strategies, the training and inference processes are simplified, and the detection accuracy and efficiency are significantly improved. Experimental results show that the improved model performs well in both detection speed and accuracy, with an accuracy of 99.4%. The model is also smaller and easier to deploy, and is able to identify and classify distracted driving behaviours in real time, provide timely warnings, and enhance driving safety.
http://arxiv.org/pdf/2407.01864v2
[ "Shiquan Shen", "Zhizhong Wu", "Pan Zhang" ]
2024-07-05T17:17:48Z
2024-07-02T00:43:41Z
2407.04656
Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement
Sparsely-activated Mixture-of-Experts (MoE) architecture has increasingly been adopted to further scale large language models (LLMs) due to its sub-linear scaling for computation costs. However, frequent failures still pose significant challenges as training scales. The cost of even a single failure is significant, as all GPUs remain idle until the failure is resolved, potentially losing considerable training progress as training has to restart from checkpoints. Existing solutions for efficient fault-tolerant training either lack elasticity or rely on building resiliency into pipeline parallelism, which cannot be applied to MoE models due to the expert parallelism strategy adopted by the MoE architecture. We present Lazarus, a system for resilient and elastic training of MoE models. Lazarus adaptively allocates expert replicas to address the inherent imbalance in expert workload and speeds up training, while a provably optimal expert placement algorithm is developed to maximize the probability of recovery upon failures. Through adaptive expert placement and a flexible token dispatcher, Lazarus can also fully utilize all available nodes after failures, leaving no GPU idle. Our evaluation shows that Lazarus outperforms existing MoE training systems by up to 5.7x under frequent node failures and 3.4x on a real spot instance trace.
http://arxiv.org/pdf/2407.04656v1
[ "Yongji Wu", "Wenjie Qu", "Tianyang Tao", "Zhuang Wang", "Wei Bai", "Zhuohao Li", "Yuan Tian", "Jiaheng Zhang", "Matthew Lentz", "Danyang Zhuo" ]
2024-07-05T17:13:41Z
2024-07-05T17:13:41Z
2404.04687
Z-Splat: Z-Axis Gaussian Splatting for Camera-Sonar Fusion
Differentiable 3D-Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation along with analytical derivatives to compute the 3D Gaussian parameters given scene images captured from various viewpoints. Unfortunately, capturing surround view ($360^{\circ}$ viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted baseline imaging scenarios, the GS algorithm suffers from a well-known 'missing cone' problem, which results in poor reconstruction along the depth axis. In this manuscript, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithms for two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance).
http://arxiv.org/pdf/2404.04687v2
[ "Ziyuan Qu", "Omkar Vengurlekar", "Mohamad Qadri", "Kevin Zhang", "Michael Kaess", "Christopher Metzler", "Suren Jayasuriya", "Adithya Pediredla" ]
2024-07-05T16:58:15Z
2024-04-06T17:23:43Z
2406.14657
OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset
We introduce OpenDebateEvidence, a comprehensive dataset for argument mining and summarization sourced from the American Competitive Debate community. This dataset includes over 3.5 million documents with rich metadata, making it one of the most extensive collections of debate evidence. OpenDebateEvidence captures the complexity of arguments in high school and college debates, providing valuable resources for training and evaluation. Our extensive experiments demonstrate the efficacy of fine-tuning state-of-the-art large language models for argumentative abstractive summarization across various methods, models, and datasets. By providing this comprehensive resource, we aim to advance computational argumentation and support practical applications for debaters, educators, and researchers. OpenDebateEvidence is publicly available to support further research and innovation in computational argumentation. Access it here: https://huggingface.co/datasets/Yusuf5/OpenCaselist
http://arxiv.org/pdf/2406.14657v2
[ "Allen Roush", "Yusuf Shabazz", "Arvind Balaji", "Peter Zhang", "Stefano Mezza", "Markus Zhang", "Sanjay Basu", "Sriram Vishwanath", "Mehdi Fatemi", "Ravid Shwartz-Ziv" ]
2024-07-05T16:51:15Z
2024-06-20T18:22:59Z
2201.13250
Differentiating and Integrating ZX Diagrams with Applications to Quantum Machine Learning
ZX-calculus has proved to be a useful tool for quantum technology with a wide range of successful applications. Most of these applications are of an algebraic nature. However, other tasks that involve differentiation and integration remain unreachable with current ZX techniques. Here we elevate ZX to an analytical perspective by realising differentiation and integration entirely within the framework of ZX-calculus. We explicitly illustrate the new analytic framework of ZX-calculus by applying it in the context of quantum machine learning for the analysis of barren plateaus.
http://arxiv.org/pdf/2201.13250v5
[ "Quanlong Wang", "Richie Yeung", "Mark Koch" ]
2024-07-05T16:47:13Z
2022-01-31T13:59:28Z
2407.04631
An autoencoder for compressing angle-resolved photoemission spectroscopy data
Angle-resolved photoemission spectroscopy (ARPES) is a powerful experimental technique to determine the electronic structure of solids. Advances in light sources for ARPES experiments are currently leading to a vast increase of data acquisition rates and data quantity. On the other hand, access time to the most advanced ARPES instruments remains strictly limited, calling for fast, effective, and on-the-fly data analysis tools to exploit this time. In response to this need, we introduce ARPESNet, a versatile autoencoder network that efficiently summarises and compresses ARPES datasets. We train ARPESNet on a large and varied dataset of 2-dimensional ARPES data extracted by cutting standard 3-dimensional ARPES datasets along random directions in $\mathbf{k}$. To test the data representation capacity of ARPESNet, we compare $k$-means clustering quality between data compressed by ARPESNet, data compressed by discrete cosine transform, and raw data, at different noise levels. ARPESNet data excels in clustering quality despite its high compression ratio.
http://arxiv.org/pdf/2407.04631v1
[ "Steinn Ymir Agustsson", "Mohammad Ahsanul Haque", "Thi Tam Truong", "Marco Bianchi", "Nikita Klyuchnikov", "Davide Mottin", "Panagiotis Karras", "Philip Hofmann" ]
2024-07-05T16:41:49Z
2024-07-05T16:41:49Z
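A toy sketch of the baseline comparison described in 2407.04631 above: k-means clustering quality on raw versus DCT-compressed 2D maps. The synthetic data, the keep=16 cutoff, and the single noise level are illustrative assumptions; ARPESNet itself is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def dct_compress(img, keep=16):
    """Keep only the lowest keep x keep DCT coefficients of a 2D array."""
    coeffs = dctn(img, norm="ortho")
    out = np.zeros_like(coeffs)
    out[:keep, :keep] = coeffs[:keep, :keep]
    return idctn(out, norm="ortho")

rng = np.random.default_rng(0)
# Two toy "classes" of noisy 2D maps standing in for ARPES cuts.
imgs, labels = [], []
for cls in (0, 1):
    base = np.outer(np.sin(np.linspace(0, 3 + 2 * cls, 64)),
                    np.cos(np.linspace(0, 2 + cls, 64)))
    for _ in range(20):
        imgs.append(base + 0.5 * rng.normal(size=(64, 64)))
        labels.append(cls)

raw = np.array([im.ravel() for im in imgs])
comp = np.array([dct_compress(im).ravel() for im in imgs])
for name, X in [("raw", raw), ("dct", comp)]:
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(name, adjusted_rand_score(labels, pred))
```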
2403.06725
Improving Low-Resource Knowledge Tracing Tasks by Supervised Pre-training and Importance Mechanism Fine-tuning
Knowledge tracing (KT) aims to estimate students' knowledge mastery based on their historical interactions. Recently, the deep learning based KT (DLKT) approaches have achieved impressive performance in the KT task. These DLKT models heavily rely on the large number of available student interactions. However, due to various reasons such as budget constraints and privacy concerns, observed interactions are very limited in many real-world scenarios, a.k.a. low-resource KT datasets. Directly training a DLKT model on a low-resource KT dataset may lead to overfitting, and it is difficult to choose the appropriate deep neural architecture. Therefore, in this paper, we propose a low-resource KT framework called LoReKT to address the above challenges. Inspired by the prevalent "pre-training and fine-tuning" paradigm, we aim to learn transferable parameters and representations from rich-resource KT datasets during the pre-training stage and subsequently facilitate effective adaptation to low-resource KT datasets. Specifically, we simplify existing sophisticated DLKT model architectures to a plain stack of transformer decoders. We design an encoding mechanism to incorporate student interactions from multiple KT data sources and develop an importance mechanism to prioritize updating parameters with high importance while constraining less important ones during the fine-tuning stage. We evaluate LoReKT on six public KT datasets and experimental results demonstrate the superiority of our approach in terms of AUC and Accuracy. To encourage reproducible research, we make our data and code publicly available at https://anonymous.4open.science/r/LoReKT-C619.
http://arxiv.org/pdf/2403.06725v3
[ "Hengyuan Zhang", "Zitao Liu", "Shuyan Huang", "Chenming Shang", "Bojun Zhan", "Yong Jiang" ]
2024-07-05T16:24:29Z
2024-03-11T13:44:43Z
2407.04620
Learning to (Learn at Test Time): RNNs with Expressive Hidden States
Self-attention performs well in long context but has quadratic complexity. Existing RNN layers have linear complexity, but their performance in long context is limited by the expressive power of their hidden state. We propose a new class of sequence modeling layers with linear complexity and an expressive hidden state. The key idea is to make the hidden state a machine learning model itself, and the update rule a step of self-supervised learning. Since the hidden state is updated by training even on test sequences, our layers are called Test-Time Training (TTT) layers. We consider two instantiations: TTT-Linear and TTT-MLP, whose hidden state is a linear model and a two-layer MLP respectively. We evaluate our instantiations at the scale of 125M to 1.3B parameters, comparing with a strong Transformer and Mamba, a modern RNN. Both TTT-Linear and TTT-MLP match or exceed the baselines. Similar to Transformer, they can keep reducing perplexity by conditioning on more tokens, while Mamba cannot after 16k context. With preliminary systems optimization, TTT-Linear is already faster than Transformer at 8k context and matches Mamba in wall-clock time. TTT-MLP still faces challenges in memory I/O, but shows larger potential in long context, pointing to a promising direction for future research.
http://arxiv.org/pdf/2407.04620v1
[ "Yu Sun", "Xinhao Li", "Karan Dalal", "Jiarui Xu", "Arjun Vikram", "Genghan Zhang", "Yann Dubois", "Xinlei Chen", "Xiaolong Wang", "Sanmi Koyejo", "Tatsunori Hashimoto", "Carlos Guestrin" ]
2024-07-05T16:23:20Z
2024-07-05T16:23:20Z
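A heavily simplified numpy sketch of the Test-Time Training idea summarized in 2407.04620 above: the hidden state is itself a linear model that takes one self-supervised gradient step per input token. The corruption matrix, dimensions, and learning rate are illustrative placeholders rather than the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, lr = 16, 32, 0.1

# Fixed "corruption" used to form the self-supervised target
# (an assumption; the paper learns its projections end-to-end).
P = rng.normal(size=(d, d)) / np.sqrt(d)

W = np.zeros((d, d))          # hidden state = a linear model
xs = rng.normal(size=(T, d))  # a toy input sequence
outputs = []

for x in xs:
    x_corrupt = P @ x                     # corrupted view of the token
    residual = W @ x_corrupt - x          # self-supervised reconstruction error
    grad = np.outer(residual, x_corrupt)  # d/dW of 0.5 * ||W x_corrupt - x||^2
    W = W - lr * grad                     # one "training" step per token
    outputs.append(W @ x)                 # layer output for this token
```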
2407.04617
Randomized Physics-Informed Neural Networks for Bayesian Data Assimilation
We propose a randomized physics-informed neural network (PINN) or rPINN method for uncertainty quantification in inverse partial differential equation (PDE) problems with noisy data. This method is used to quantify uncertainty in the inverse PDE PINN solutions. Recently, the Bayesian PINN (BPINN) method was proposed, where the posterior distribution of the PINN parameters was formulated using Bayes' theorem and sampled using approximate inference methods such as the Hamiltonian Monte Carlo (HMC) and variational inference (VI) methods. In this work, we demonstrate that HMC fails to converge for non-linear inverse PDE problems. As an alternative to HMC, we sample the distribution by solving the stochastic optimization problem obtained by randomizing the PINN loss function. The effectiveness of the rPINN method is tested for linear and non-linear Poisson equations, and the diffusion equation with a high-dimensional space-dependent diffusion coefficient. The rPINN method provides informative distributions for all considered problems. For the linear Poisson equation, HMC and rPINN produce similar distributions, but rPINN is on average 27 times faster than HMC. For the non-linear Poisson and diffusion equations, the HMC method fails to converge because a single HMC chain cannot sample multiple modes of the posterior distribution of the PINN parameters in a reasonable amount of time.
http://arxiv.org/pdf/2407.04617v1
[ "Yifei Zong", "David Barajas-Solano", "Alexandre M. Tartakovsky" ]
2024-07-05T16:16:47Z
2024-07-05T16:16:47Z
2404.00666
Accelerated Parameter-Free Stochastic Optimization
We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters. This improves on prior work which requires knowing at least the initial distance to optimality d0. Our method, U-DoG, combines UniXGrad (Kavis et al., 2019) and DoG (Ivgi et al., 2023) with novel iterate stabilization techniques. It requires only loose bounds on d0 and the noise magnitude, provides high probability guarantees under sub-Gaussian noise, and is also near-optimal in the non-smooth case. Our experiments show consistent, strong performance on convex problems and mixed results on neural network training.
http://arxiv.org/pdf/2404.00666v2
[ "Itai Kreisler", "Maor Ivgi", "Oliver Hinder", "Yair Carmon" ]
2024-07-05T16:15:53Z
2024-03-31T12:21:57Z
2407.04616
Isomorphic Pruning for Vision Models
Structured pruning reduces the computational overhead of deep neural networks by removing redundant sub-structures. However, assessing the relative importance of different sub-structures remains a significant challenge, particularly in advanced vision models featuring novel mechanisms and architectures like self-attention, depth-wise convolutions, or residual connections. These heterogeneous substructures usually exhibit divergent parameter scales, weight distributions, and computational topology, introducing considerable difficulty to importance comparison. To overcome this, we present Isomorphic Pruning, a simple approach that demonstrates effectiveness across a range of network architectures such as Vision Transformers and CNNs, and delivers competitive performance across different model sizes. Isomorphic Pruning originates from an observation that, when evaluated under a pre-defined importance criterion, heterogeneous sub-structures demonstrate significant divergence in their importance distribution, as opposed to isomorphic structures that present similar importance patterns. This inspires us to perform isolated ranking and comparison on different types of sub-structures for more reliable pruning. Our empirical results on ImageNet-1K demonstrate that Isomorphic Pruning surpasses several pruning baselines designed specifically for Transformers or CNNs. For instance, we improve the accuracy of DeiT-Tiny from 74.52% to 77.50% by pruning an off-the-shelf DeiT-Base model. For ConvNext-Tiny, we enhance performance from 82.06% to 82.18%, while reducing the number of parameters and memory usage. Code is available at https://github.com/VainF/Isomorphic-Pruning.
http://arxiv.org/pdf/2407.04616v1
[ "Gongfan Fang", "Xinyin Ma", "Michael Bi Mi", "Xinchao Wang" ]
2024-07-05T16:14:53Z
2024-07-05T16:14:53Z
2403.07322
A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models
Knowledge tracing (KT) plays a crucial role in predicting students' future performance by analyzing their historical learning processes. Deep neural networks (DNNs) have shown great potential in solving the KT problem. However, there still exist some important challenges when applying deep learning techniques to model the KT process. The first challenge lies in incorporating the individual information of each question into the modeling. This is crucial because, despite questions sharing the same knowledge component (KC), students' knowledge acquisition on homogeneous questions can vary significantly. The second challenge lies in interpreting the prediction results from existing deep learning-based KT models. In real-world applications, while it may not be necessary to have complete transparency and interpretability of the model parameters, it is crucial to present the model's prediction results in a manner that teachers find interpretable. This helps teachers accept the rationale behind the prediction results and utilize them to design teaching activities and tailored learning strategies for students. However, the inherent black-box nature of deep learning techniques often poses a hurdle for teachers to fully embrace the model's prediction results. To address these challenges, we propose a Question-centric Multi-experts Contrastive Learning framework for KT called Q-MCKT. We have provided all the datasets and code on our website at https://github.com/rattlesnakey/Q-MCKT.
http://arxiv.org/pdf/2403.07322v3
[ "Hengyuan Zhang", "Zitao Liu", "Chenming Shang", "Dawei Li", "Yong Jiang" ]
2024-07-05T16:11:43Z
2024-03-12T05:15:42Z
2208.06263
Probabilistic Rank and Reward: A Scalable Model for Slate Recommendation
We introduce Probabilistic Rank and Reward (PRR), a scalable probabilistic model for personalized slate recommendation. Our approach allows off-policy estimation of the reward in the scenario where the user interacts with at most one item from a slate of K items. We show that the probability of a slate being successful can be learned efficiently by combining the reward, whether the user successfully interacted with the slate, and the rank, the item that was selected within the slate. PRR outperforms existing off-policy reward optimizing methods and is far more scalable to large action spaces. Moreover, PRR allows fast delivery of recommendations powered by maximum inner product search (MIPS), making it suitable for low-latency domains such as computational advertising.
http://arxiv.org/pdf/2208.06263v3
[ "Imad Aouali", "Achraf Ait Sidi Hammou", "Otmane Sakhi", "David Rohde", "Flavian Vasile" ]
2024-07-05T16:05:41Z
2022-08-10T13:18:00Z
2402.02989
DexDiffuser: Generating Dexterous Grasps with Diffusion Models
We introduce DexDiffuser, a novel dexterous grasping method that generates, evaluates, and refines grasps on partial object point clouds. DexDiffuser includes the conditional diffusion-based grasp sampler DexSampler and the dexterous grasp evaluator DexEvaluator. DexSampler generates high-quality grasps conditioned on object point clouds by iterative denoising of randomly sampled grasps. We also introduce two grasp refinement strategies: Evaluator-Guided Diffusion (EGD) and Evaluator-based Sampling Refinement (ESR). The experimental results demonstrate that DexDiffuser consistently outperforms the state-of-the-art multi-finger grasp generation method FFHNet, with grasp success rates that are, on average, 9.12% and 19.44% higher in simulation and real robot experiments, respectively. Supplementary materials are available at https://yulihn.github.io/DexDiffuser_page/
http://arxiv.org/pdf/2402.02989v2
[ "Zehang Weng", "Haofei Lu", "Danica Kragic", "Jens Lundell" ]
2024-07-05T16:04:15Z
2024-02-05T13:27:41Z
2407.04605
Linear causal disentanglement via higher-order cumulants
Linear causal disentanglement is a recent method in causal representation learning to describe a collection of observed variables via latent variables with causal dependencies between them. It can be viewed as a generalization of both independent component analysis and linear structural equation models. We study the identifiability of linear causal disentanglement, assuming access to data under multiple contexts, each given by an intervention on a latent variable. We show that one perfect intervention on each latent variable is sufficient and in the worst case necessary to recover parameters under perfect interventions, generalizing previous work to allow more latent than observed variables. We give a constructive proof that computes parameters via a coupled tensor decomposition. For soft interventions, we find the equivalence class of latent graphs and parameters that are consistent with observed data, via the study of a system of polynomial equations. Our results hold assuming the existence of non-zero higher-order cumulants, which implies non-Gaussianity of variables.
http://arxiv.org/pdf/2407.04605v1
[ "Paula Leyes Carreno", "Chiara Meroni", "Anna Seigal" ]
2024-07-05T15:53:16Z
2024-07-05T15:53:16Z
2406.11325
Deep-Learning-Based Channel Estimation for Distributed MIMO with 1-bit Radio-Over-Fiber Fronthaul
We consider the problem of pilot-aided, uplink channel estimation in a distributed massive multiple-input multiple-output (MIMO) architecture, in which the access points are connected to a central processing unit via fiber-optical fronthaul links, carrying a two-level-quantized version of the received analog radio-frequency signal. We adapt to this architecture the deep-learning-based channel-estimation algorithm recently proposed by Nguyen et al. (2023), and explore its robustness to the additional signal distortions (beyond 1-bit quantization) introduced in the considered architecture by the automatic gain controllers (AGCs) and by the comparators. These components are used at the access points to generate the two-level analog waveform from the received signal. Via simulation results, we illustrate that the proposed channel-estimation method significantly outperforms the Bussgang linear minimum mean-square error channel estimator and is robust against the additional impairments introduced by the AGCs and the comparators.
http://arxiv.org/pdf/2406.11325v2
[ "Alireza Bordbar", "Lise Aabel", "Christian Häger", "Christian Fager", "Giuseppe Durisi" ]
2024-07-05T15:51:16Z
2024-06-17T08:38:29Z
2407.04600
Understanding the Gains from Repeated Self-Distillation
Self-Distillation is a special type of knowledge distillation where the student model has the same architecture as the teacher model. Despite using the same architecture and the same training data, self-distillation has been empirically observed to improve performance, especially when applied repeatedly. For such a process, there is a fundamental question of interest: How much gain is possible by applying multiple steps of self-distillation? To investigate this relative gain, we propose studying the simple but canonical task of linear regression. Our analysis shows that the excess risk achieved by multi-step self-distillation can significantly improve upon a single step of self-distillation, reducing the excess risk by a factor as large as $d$, where $d$ is the input dimension. Empirical results on regression tasks from the UCI repository show a reduction in the learnt model's risk (MSE) by up to 47%.
http://arxiv.org/pdf/2407.04600v1
[ "Divyansh Pareek", "Simon S. Du", "Sewoong Oh" ]
2024-07-05T15:48:34Z
2024-07-05T15:48:34Z
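A small sketch of repeated self-distillation in the linear-regression setting discussed in 2407.04600 above, using ridge regression and a label-mixing weight; the mixing scheme and hyperparameters are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression solution."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def self_distill(X, y, lam, steps, xi=0.5):
    """Repeated self-distillation: each student is refit on a mixture of the
    original labels and the previous model's predictions (xi is illustrative)."""
    w = ridge_fit(X, y, lam)
    for _ in range(steps):
        targets = xi * y + (1 - xi) * X @ w
        w = ridge_fit(X, targets, lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=200)

w_multi = self_distill(X, y, lam=1.0, steps=3)
print(np.mean((X @ w_multi - X @ w_true) ** 2))  # crude excess-risk proxy
```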
2304.02011
FakET: Simulating Cryo-Electron Tomograms with Neural Style Transfer
In cryo-electron microscopy, accurate particle localization and classification are imperative. Recent deep learning solutions, though successful, require extensive training data sets. The protracted generation time of physics-based models, often employed to produce these data sets, limits their broad applicability. We introduce FakET, a method based on Neural Style Transfer, capable of simulating the forward operator of any cryo transmission electron microscope. It can be used to adapt a synthetic training data set according to reference data producing high-quality simulated micrographs or tilt-series. To assess the quality of our generated data, we used it to train a state-of-the-art localization and classification architecture and compared its performance with a counterpart trained on benchmark data. Remarkably, our technique matches the performance, boosts data generation speed 750 times, uses 33 times less memory, and scales well to typical transmission electron microscope detector sizes. It leverages GPU acceleration and parallel processing. The source code is available at https://github.com/paloha/faket.
http://arxiv.org/pdf/2304.02011v3
[ "Pavol Harar", "Lukas Herrmann", "Philipp Grohs", "David Haselbach" ]
2024-07-05T15:43:02Z
2023-04-04T17:59:09Z
2403.13658
Multimodal Variational Autoencoder for Low-cost Cardiac Hemodynamics Instability Detection
Recent advancements in non-invasive detection of cardiac hemodynamic instability (CHDI) primarily focus on applying machine learning techniques to a single data modality, e.g. cardiac magnetic resonance imaging (MRI). Despite their potential, these approaches often fall short, especially when the size of labeled patient data is limited, a common challenge in the medical domain. Furthermore, only a few studies have explored multimodal methods to study CHDI, which mostly rely on costly modalities such as cardiac MRI and echocardiogram. In response to these limitations, we propose a novel multimodal variational autoencoder ($\text{CardioVAE}_\text{X,G}$) to integrate low-cost chest X-ray (CXR) and electrocardiogram (ECG) modalities with pre-training on a large unlabeled dataset. Specifically, $\text{CardioVAE}_\text{X,G}$ introduces a novel tri-stream pre-training strategy to learn both shared and modality-specific features, thus enabling fine-tuning with both unimodal and multimodal datasets. We pre-train $\text{CardioVAE}_\text{X,G}$ on a large, unlabeled dataset of $50,982$ subjects from a subset of the MIMIC database and then fine-tune the pre-trained model on a labeled dataset of $795$ subjects from the ASPIRE registry. Comprehensive evaluations against existing methods show that $\text{CardioVAE}_\text{X,G}$ offers promising performance (AUROC $=0.79$ and Accuracy $=0.77$), representing a significant step forward in non-invasive prediction of CHDI. Our model also excels in producing fine interpretations of predictions directly associated with clinical features, thereby supporting clinical decision-making.
http://arxiv.org/pdf/2403.13658v3
[ "Mohammod N. I. Suvon", "Prasun C. Tripathi", "Wenrui Fan", "Shuo Zhou", "Xianyuan Liu", "Samer Alabed", "Venet Osmani", "Andrew J. Swift", "Chen Chen", "Haiping Lu" ]
2024-07-05T15:42:25Z
2024-03-20T15:06:49Z
2407.04591
Proximal Point Method for Online Saddle Point Problem
This paper focuses on the online saddle point problem, which involves a sequence of two-player time-varying convex-concave games. Considering the nonstationarity of the environment, we adopt the duality gap and the dynamic Nash equilibrium regret as performance metrics for algorithm design. We present three variants of the proximal point method: the Online Proximal Point Method~(OPPM), the Optimistic OPPM~(OptOPPM), and the OptOPPM with multiple predictors. Each algorithm guarantees upper bounds for both the duality gap and dynamic Nash equilibrium regret, achieving near-optimality when measured against the duality gap. Specifically, in certain benign environments, such as sequences of stationary payoff functions, these algorithms maintain a nearly constant metric bound. Experimental results further validate the effectiveness of these algorithms. Lastly, this paper discusses potential reliability concerns associated with using dynamic Nash equilibrium regret as a performance metric.
http://arxiv.org/pdf/2407.04591v1
[ "Qing-xin Meng", "Jian-wei Liu" ]
2024-07-05T15:40:15Z
2024-07-05T15:40:15Z
2403.13213
From Representational Harms to Quality-of-Service Harms: A Case Study on Llama 2 Safety Safeguards
Recent progress in large language models (LLMs) has led to their widespread adoption in various domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and leveraging safe reinforcement learning from human feedback, multiple concerns regarding the safety and ingrained biases in these models remain. Furthermore, previous work has demonstrated that models optimized for safety often display exaggerated safety behaviors, such as a tendency to refrain from responding to certain requests as a precautionary measure. As such, a clear trade-off between the helpfulness and safety of these models has been documented in the literature. In this paper, we further investigate the effectiveness of safety measures by evaluating models on already mitigated biases. Using the case of Llama 2 as an example, we illustrate how LLMs' safety responses can still encode harmful assumptions. To do so, we create a set of non-toxic prompts, which we then use to evaluate Llama models. Through our new taxonomy of LLM responses to users, we observe that the safety/helpfulness trade-offs are more pronounced for certain demographic groups, which can lead to quality-of-service harms for marginalized populations.
http://arxiv.org/pdf/2403.13213v4
[ "Khaoula Chehbouni", "Megha Roshan", "Emmanuel Ma", "Futian Andrew Wei", "Afaf Taik", "Jackie CK Cheung", "Golnoosh Farnadi" ]
2024-07-05T15:40:13Z
2024-03-20T00:22:38Z
2407.04589
Remembering Everything Makes You Vulnerable: A Limelight on Machine Unlearning for Personalized Healthcare Sector
As the prevalence of data-driven technologies in healthcare continues to rise, concerns regarding data privacy and security become increasingly paramount. This thesis aims to address the vulnerability of personalized healthcare models, particularly in the context of ECG monitoring, to adversarial attacks that compromise patient privacy. We propose an approach termed "Machine Unlearning" to mitigate the impact of exposed data points on machine learning models, thereby enhancing model robustness against adversarial attacks while preserving individual privacy. Specifically, we investigate the efficacy of Machine Unlearning in the context of personalized ECG monitoring, utilizing a dataset of clinical ECG recordings. Our methodology involves training a deep neural classifier on ECG data and fine-tuning the model for individual patients. We demonstrate the susceptibility of fine-tuned models to adversarial attacks, such as the Fast Gradient Sign Method (FGSM), which can exploit additional data points in personalized models. To address this vulnerability, we propose a Machine Unlearning algorithm that selectively removes sensitive data points from fine-tuned models, effectively enhancing model resilience against adversarial manipulation. Experimental results demonstrate the effectiveness of our approach in mitigating the impact of adversarial attacks while maintaining the pre-trained model accuracy.
http://arxiv.org/pdf/2407.04589v1
[ "Ahan Chatterjee", "Sai Anirudh Aryasomayajula", "Rajat Chaudhari", "Subhajit Paul", "Vishwa Mohan Singh" ]
2024-07-05T15:38:36Z
2024-07-05T15:38:36Z
2406.02584
Planetary Causal Inference: Implications for the Geography of Poverty
Earth observation data such as satellite imagery, when combined with machine learning, can have far-reaching impacts on our understanding of the geography of poverty through the prediction of living conditions, especially where government-derived economic indicators are either unavailable or potentially untrustworthy. Recent work has progressed in using Earth Observation (EO) data not only to predict spatial economic outcomes but also to explore cause and effect, an understanding which is critical for downstream policy analysis. In this review, we first document the growth of interest in using satellite images together with EO data in causal analysis. We then trace the relationship between spatial statistics and machine learning methods before discussing four ways in which EO data has been used in causal machine learning pipelines -- (1.) poverty outcome imputation for downstream causal analysis, (2.) EO image deconfounding, (3.) EO-based treatment effect heterogeneity, and (4.) EO-based transportability analysis. We conclude by providing a step-by-step workflow for how researchers can incorporate EO data in causal ML analysis going forward, outlining major choices of data, models, and evaluation metrics.
http://arxiv.org/pdf/2406.02584v2
[ "Kazuki Sakamoto", "Connor T. Jerzak", "Adel Daoud" ]
2024-07-05T15:37:15Z
2024-05-30T20:48:10Z
2407.04587
Multimodal Classification via Modal-Aware Interactive Enhancement
Due to the notorious modality imbalance problem, multimodal learning (MML) leads to the phenomenon of optimization imbalance, thus struggling to achieve satisfactory performance. Recently, some representative methods have been proposed to boost the performance, mainly focusing on adaptively adjusting the optimization of each modality to rebalance the learning speed of dominant and non-dominant modalities. To better facilitate the interaction of model information in multimodal learning, in this paper, we propose a novel multimodal learning method, called modal-aware interactive enhancement (MIE). Specifically, we first utilize an optimization strategy based on sharpness aware minimization (SAM) to smooth the learning objective during the forward phase. Then, with the help of the geometry property of SAM, we propose a gradient modification strategy to impose the influence between different modalities during the backward phase. Therefore, we can improve the generalization ability and alleviate the modality forgetting phenomenon simultaneously for multimodal learning. Extensive experiments on widely used datasets demonstrate that our proposed method can outperform various state-of-the-art baselines to achieve the best performance.
http://arxiv.org/pdf/2407.04587v1
[ "Qing-Yuan Jiang", "Zhouyang Chi", "Yang Yang" ]
2024-07-05T15:32:07Z
2024-07-05T15:32:07Z
2312.06681
Steering Llama 2 via Contrastive Activation Addition
We introduce Contrastive Activation Addition (CAA), an innovative method for steering language models by modifying their activations during forward passes. CAA computes "steering vectors" by averaging the difference in residual stream activations between pairs of positive and negative examples of a particular behavior, such as factual versus hallucinatory responses. During inference, these steering vectors are added at all token positions after the user's prompt with either a positive or negative coefficient, allowing precise control over the degree of the targeted behavior. We evaluate CAA's effectiveness on Llama 2 Chat using multiple-choice behavioral question datasets and open-ended generation tasks. We demonstrate that CAA significantly alters model behavior, is effective over and on top of traditional methods like finetuning and system prompt design, and minimally reduces capabilities. Moreover, we gain deeper insights into CAA's mechanisms by employing various activation space interpretation methods. CAA accurately steers model outputs and sheds light on how high-level concepts are represented in Large Language Models (LLMs).
http://arxiv.org/pdf/2312.06681v4
[ "Nina Panickssery", "Nick Gabrieli", "Julian Schulz", "Meg Tong", "Evan Hubinger", "Alexander Matt Turner" ]
2024-07-05T15:30:45Z
2023-12-09T04:40:46Z
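A model-agnostic numpy sketch of the Contrastive Activation Addition recipe described in 2312.06681 above: a steering vector formed as the mean difference of activations between positive and negative examples, added with a signed coefficient. The shapes, layer choice, and coefficient are placeholders, and hooking this into an actual Llama 2 forward pass is not shown.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    """Mean difference of residual-stream activations between positive and
    negative examples of a target behavior."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def steer(residual_stream, vector, coeff):
    """Add the steering vector, scaled by a +/- coefficient, at every
    token position after the prompt."""
    return residual_stream + coeff * vector

# Toy activations: (num_examples, hidden_dim) collected at one layer.
rng = np.random.default_rng(0)
pos = rng.normal(loc=+0.5, size=(64, 128))
neg = rng.normal(loc=-0.5, size=(64, 128))
v = steering_vector(pos, neg)

# (num_token_positions, hidden_dim) activations during generation.
stream = rng.normal(size=(10, 128))
steered = steer(stream, v, coeff=1.5)
```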
2304.04343
Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence
Black-box adversarial attacks have shown strong potential to subvert machine learning models. Existing black-box attacks craft adversarial examples by iteratively querying the target model and/or leveraging the transferability of a local surrogate model. Recently, however, such attacks have been effectively mitigated by state-of-the-art (SOTA) defenses, e.g., detection via the pattern of sequential queries, or injecting noise into the model. To the best of our knowledge, we take the first step to study a new paradigm of black-box attacks with provable guarantees -- certifiable black-box attacks that can guarantee the attack success probability (ASP) of adversarial examples before querying over the target model. This new black-box attack unveils significant vulnerabilities of machine learning models, compared to traditional empirical black-box attacks, e.g., breaking strong SOTA defenses with provable confidence, constructing a space of (infinite) adversarial examples with high ASP, and theoretically guaranteeing the ASP of the generated adversarial examples without verification/queries over the target model. Specifically, we establish a novel theoretical foundation for ensuring the ASP of the black-box attack with randomized adversarial examples (AEs). Then, we propose several novel techniques to craft the randomized AEs while reducing the perturbation size for better imperceptibility. Finally, we have comprehensively evaluated the certifiable black-box attacks on the CIFAR10/100, ImageNet, and LibriSpeech datasets, while benchmarking with 16 SOTA empirical black-box attacks, against various SOTA defenses in the domains of computer vision and speech recognition. Both theoretical and experimental results have validated the significance of the proposed attack.
http://arxiv.org/pdf/2304.04343v2
[ "Hanbin Hong", "Xinyu Zhang", "Binghui Wang", "Zhongjie Ba", "Yuan Hong" ]
2024-07-05T15:27:01Z
2023-04-10T01:12:09Z
2405.18119
Low-Resource Crop Classification from Multi-Spectral Time Series Using Lossless Compressors
Deep learning has significantly improved the accuracy of crop classification using multispectral temporal data. However, these models have complex structures with numerous parameters, requiring large amounts of data and costly training. In low-resource situations with fewer labeled samples, deep learning models perform poorly due to insufficient data. Conversely, compressors are data-type agnostic, and non-parametric methods do not impose underlying assumptions. Inspired by this insight, we propose a non-training alternative to deep learning models, aiming to address these situations. Specifically, the Symbolic Representation Module is proposed to convert the reflectivity into symbolic representations. The symbolic representations are then cross-transformed in both the channel and time dimensions to generate symbolic embeddings. Next, the Multi-scale Normalised Compression Distance (MNCD) is designed to measure the correlation between any two symbolic embeddings. Finally, based on the MNCDs, high-quality crop classification can be achieved using only a k-nearest-neighbor (kNN) classifier. The entire framework is ready-to-use and lightweight. Without any training, it outperformed, on average, 7 advanced deep learning models trained at scale on three benchmark datasets. It also outperforms more than half of these models in the few-shot setting with sparse crop labels. Therefore, the high performance and robustness of our non-training framework make it truly applicable to real-world crop mapping. Codes are available at: https://github.com/qinfengsama/Compressor-Based-Crop-Mapping.
http://arxiv.org/pdf/2405.18119v2
[ "Wei Cheng", "Hongrui Ye", "Xiao Wen", "Jiachen Zhang", "Jiping Xu", "Feifan Zhang" ]
2024-07-05T15:23:58Z
2024-05-28T12:28:12Z
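A compact sketch of compressor-based classification in the spirit of 2405.18119 above, using plain gzip-based normalized compression distance (not the paper's multi-scale MNCD) with a kNN vote over toy symbolic sequences.

```python
import gzip
from collections import Counter

def c(s: bytes) -> int:
    """Compressed length under gzip."""
    return len(gzip.compress(s))

def ncd(x: bytes, y: bytes) -> float:
    """Plain normalized compression distance."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def knn_predict(query: bytes, train, k: int = 3) -> str:
    """k-nearest-neighbor vote using NCD over symbolic sequences."""
    dists = sorted((ncd(query, s), label) for s, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy symbolic representations of multi-spectral time series.
train = [(b"aabbaabbaabb", "maize"), (b"ccddccddccdd", "wheat"),
         (b"aabbaabaabbb", "maize"), (b"ccddccdcccdd", "wheat")]
print(knn_predict(b"aabbabbbaabb", train, k=3))
```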
2407.04581
Leveraging Large Language Models for Integrated Satellite-Aerial-Terrestrial Networks: Recent Advances and Future Directions
Integrated satellite, aerial, and terrestrial networks (ISATNs) represent a sophisticated convergence of diverse communication technologies to ensure seamless connectivity across different altitudes and platforms. This paper explores the transformative potential of integrating Large Language Models (LLMs) into ISATNs, leveraging advanced Artificial Intelligence (AI) and Machine Learning (ML) capabilities to enhance these networks. We outline the current architecture of ISATNs and highlight the significant role LLMs can play in optimizing data flow, signal processing, and network management to advance 5G/6G communication technologies through advanced predictive algorithms and real-time decision-making. A comprehensive analysis of ISATN components is conducted, assessing how LLMs can effectively address traditional data transmission and processing bottlenecks. The paper delves into the network management challenges within ISATNs, emphasizing the necessity for sophisticated resource allocation strategies, traffic routing, and security management to ensure seamless connectivity and optimal performance under varying conditions. Furthermore, we examine the technical challenges and limitations associated with integrating LLMs into ISATNs, such as data integration for LLM processing, scalability issues, latency in decision-making processes, and the design of robust, fault-tolerant systems. The study also identifies key future research directions for fully harnessing LLM capabilities in ISATNs, which is crucial for enhancing network reliability, optimizing performance, and achieving a truly interconnected and intelligent global network system.
http://arxiv.org/pdf/2407.04581v1
[ "Shumaila Javaid", "Ruhul Amin Khalil", "Nasir Saeed", "Bin He", "Mohamed-Slim Alouini" ]
2024-07-05T15:23:43Z
2024-07-05T15:23:43Z
2407.04579
GOALPlace: Begin with the End in Mind
Co-optimizing placement with congestion is integral to achieving high-quality designs. This paper presents GOALPlace, a new learning-based general approach to improving placement congestion by controlling cell density. Our method efficiently learns from an EDA tool's post-route optimized results and uses an empirical Bayes technique to adapt this goal/target to a specific placer's solutions, effectively beginning with the end in mind. It enhances correlation with the long-running heuristics of the tool's router and timing-opt engine -- while solving placement globally without expensive incremental congestion estimation and mitigation methods. A statistical analysis with a new hierarchical netlist clustering establishes the importance of density and the potential for an adequate cell density target across placements. Our experiments show that our method, integrated as a demonstration inside an academic GPU-accelerated global placer, consistently produces macro and standard cell placements of superior or comparable quality to commercial tools. Our empirical Bayes methodology also allows a substantial quality improvement over state-of-the-art academic mixed-size placers, achieving up to 10x fewer design rule check (DRC) violations, a 5% decrease in wirelength, and a 30% and 60% reduction in worst and total negative slack (WNS/TNS).
http://arxiv.org/pdf/2407.04579v1
[ "Anthony Agnesina", "Rongjian Liang", "Geraldo Pradipta", "Anand Rajaram", "Haoxing Ren" ]
2024-07-05T15:16:25Z
2024-07-05T15:16:25Z
2402.07355
Sampling from the Mean-Field Stationary Distribution
We study the complexity of sampling from the stationary distribution of a mean-field SDE, or equivalently, the complexity of minimizing a functional over the space of probability measures which includes an interaction term. Our main insight is to decouple the two key aspects of this problem: (1) approximation of the mean-field SDE via a finite-particle system, via uniform-in-time propagation of chaos, and (2) sampling from the finite-particle stationary distribution, via standard log-concave samplers. Our approach is conceptually simpler and its flexibility allows for incorporating the state-of-the-art for both algorithms and theory. This leads to improved guarantees in numerous settings, including better guarantees for optimizing certain two-layer neural networks in the mean-field regime. A key technical contribution is to establish a new uniform-in-$N$ log-Sobolev inequality for the stationary distribution of the mean-field Langevin dynamics.
http://arxiv.org/pdf/2402.07355v4
[ "Yunbum Kook", "Matthew S. Zhang", "Sinho Chewi", "Murat A. Erdogdu", "Mufan Bill Li" ]
2024-07-05T15:15:36Z
2024-02-12T01:04:39Z
2401.09673
Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
Neural style transfer (NST) generates new images by combining the style of one image with the content of another. However, unauthorized NST can exploit artwork, raising concerns about artists' rights and motivating the development of proactive protection methods. We propose Locally Adaptive Adversarial Color Attack (LAACA), empowering artists to protect their artwork from unauthorized style transfer by processing it before public release. By delving into the intricacies of human visual perception and the role of different frequency components, our method strategically introduces frequency-adaptive perturbations in the image. These perturbations significantly degrade the generation quality of NST while maintaining an acceptable level of visual change in the original image, ensuring that potential infringers are discouraged from using the protected artworks because of the resulting poor NST generation quality. Additionally, existing metrics often overlook the importance of color fidelity in evaluating color-mattered tasks, such as the quality of NST-generated images, which is crucial in the context of artistic works. To comprehensively assess the color-mattered tasks, we propose the Adversarial Color Distance Metric (ACDM), designed to quantify the color difference of images pre- and post-manipulations. Experimental results confirm that attacking NST using LAACA results in visually inferior style transfer, and the ACDM can efficiently measure color-mattered tasks. By providing artists with a tool to safeguard their intellectual property, our work mitigates the socio-technical challenges posed by the misuse of NST in the art community.
http://arxiv.org/pdf/2401.09673v3
[ "Zhongliang Guo", "Junhao Dong", "Yifei Qian", "Kaixuan Wang", "Weiye Li", "Ziheng Guo", "Yuheng Wang", "Yanli Li", "Ognjen Arandjelović", "Lei Fang" ]
2024-07-05T14:51:55Z
2024-01-18T01:18:59Z
2407.04559
Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition
Visual storytelling consists in generating a natural language story given a temporally ordered sequence of images. This task is not only challenging for models, but also very difficult to evaluate with automatic metrics since there is no consensus about what makes a story 'good'. In this paper, we introduce a novel method that measures story quality in terms of human likeness regarding three key aspects highlighted in previous work: visual grounding, coherence, and repetitiveness. We then use this method to evaluate the stories generated by several models, showing that the foundation model LLaVA obtains the best result, but only slightly so compared to TAPM, a 50-times smaller visual storytelling model. Upgrading the visual and language components of TAPM results in a model that yields competitive performance with a relatively low number of parameters. Finally, we carry out a human evaluation study, whose results suggest that a 'good' story may require more than a human-like level of visual grounding, coherence, and repetition.
http://arxiv.org/pdf/2407.04559v1
[ "Aditya K Surikuchi", "Raquel Fernández", "Sandro Pezzelle" ]
2024-07-05T14:48:15Z
2024-07-05T14:48:15Z
2407.04557
Structural Constraint Integration in Generative Model for Discovery of Quantum Material Candidates
Billions of organic molecules are known, but only a tiny fraction of the functional inorganic materials have been discovered, a particularly relevant problem to the community searching for new quantum materials. Recent advancements in machine-learning-based generative models, particularly diffusion models, show great promise for generating new, stable materials. However, integrating geometric patterns into materials generation remains a challenge. Here, we introduce Structural Constraint Integration in the GENerative model (SCIGEN). Our approach can modify any trained generative diffusion model by strategic masking of the denoised structure with a diffused constrained structure prior to each diffusion step to steer the generation toward constrained outputs. Furthermore, we mathematically prove that SCIGEN effectively performs conditional sampling from the original distribution, which is crucial for generating stable constrained materials. We generate eight million compounds using Archimedean lattices as prototype constraints, with over 10% surviving a multi-staged stability pre-screening. High-throughput density functional theory (DFT) calculations on 26,000 surviving compounds show that over 50% passed structural optimization at the DFT level. Since the properties of quantum materials are closely related to geometric patterns, our results indicate that SCIGEN provides a general framework for generating quantum materials candidates.
http://arxiv.org/pdf/2407.04557v1
[ "Ryotaro Okabe", "Mouyang Cheng", "Abhijatmedhi Chotrattanapituk", "Nguyen Tuan Hung", "Xiang Fu", "Bowen Han", "Yao Wang", "Weiwei Xie", "Robert J. Cava", "Tommi S. Jaakkola", "Yongqiang Cheng", "Mingda Li" ]
2024-07-05T14:42:54Z
2024-07-05T14:42:54Z
2407.04551
An AI Architecture with the Capability to Classify and Explain Hardware Trojans
Hardware trojan detection methods, based on machine learning (ML) techniques, mainly identify suspected circuits but lack the ability to explain how the decision was arrived at. An explainable methodology and architecture are introduced based on existing hardware trojan detection features. Results are provided for explaining digital hardware trojans within a netlist using trust-hub trojan benchmarks.
http://arxiv.org/pdf/2407.04551v1
[ "Paul Whitten", "Francis Wolff", "Chris Papachristou" ]
2024-07-05T14:36:19Z
2024-07-05T14:36:19Z
2407.04547
Real-time Timbre Remapping with Differentiable DSP
Timbre is a primary mode of expression in diverse musical contexts. However, prevalent audio-driven synthesis methods predominantly rely on pitch and loudness envelopes, effectively flattening timbral expression from the input. Our approach draws on the concept of timbre analogies and investigates how timbral expression from an input signal can be mapped onto controls for a synthesizer. Leveraging differentiable digital signal processing, our method facilitates direct optimization of synthesizer parameters through a novel feature difference loss. This loss function, designed to learn relative timbral differences between musical events, prioritizes the subtleties of graded timbre modulations within phrases, allowing for meaningful translations in a timbre space. Using snare drum performances as a case study, where timbral expression is central, we demonstrate real-time timbre remapping from acoustic snare drums to a differentiable synthesizer modeled after the Roland TR-808.
http://arxiv.org/pdf/2407.04547v1
[ "Jordie Shier", "Charalampos Saitis", "Andrew Robertson", "Andrew McPherson" ]
2024-07-05T14:32:52Z
2024-07-05T14:32:52Z
2407.04542
Rethinking Image Compression on the Web with Generative AI
The rapid growth of the Internet, driven by social media, web browsing, and video streaming, has made images central to the Web experience, resulting in significant data transfer and increased webpage sizes. Traditional image compression methods, while reducing bandwidth, often degrade image quality. This paper explores a novel approach using generative AI to reconstruct images at the edge or client-side. We develop a framework that leverages text prompts and provides additional conditioning inputs like Canny edges and color palettes to a text-to-image model, achieving up to 99.8% bandwidth savings in the best cases and 92.6% on average, while maintaining high perceptual similarity. Empirical analysis and a user study show that our method preserves image meaning and structure more effectively than traditional compression methods, offering a promising solution for reducing bandwidth usage and improving Internet affordability with minimal degradation in image quality.
http://arxiv.org/pdf/2407.04542v1
[ "Shayan Ali Hassan", "Danish Humair", "Ihsan Ayyub Qazi", "Zafar Ayyub Qazi" ]
2024-07-05T14:29:12Z
2024-07-05T14:29:12Z
2311.06112
Turbulence Scaling from Deep Learning Diffusion Generative Models
Complex spatial and temporal structures are inherent characteristics of turbulent fluid flows, and comprehending them poses a major challenge. This comprehension necessitates an understanding of the space of turbulent fluid flow configurations. We employ a diffusion-based generative model to learn the distribution of turbulent vorticity profiles and generate snapshots of turbulent solutions to the incompressible Navier-Stokes equations. We consider the inverse cascade in two spatial dimensions and generate diverse turbulent solutions that differ from those in the training dataset. We analyze the statistical scaling properties of the new turbulent profiles, calculate their structure functions, energy power spectrum, velocity probability distribution function and moments of local energy dissipation. All the learnt scaling exponents are consistent with the expected Kolmogorov scaling. This agreement with established turbulence characteristics provides strong evidence of the model's capability to capture essential features of real-world turbulence.
http://arxiv.org/abs/2311.06112v2
[ "Tim Whittaker", "Romuald A. Janik", "Yaron Oz" ]
2024-07-05T14:28:33Z
2023-11-10T15:27:07Z
2407.04541
PoPreRo: A New Dataset for Popularity Prediction of Romanian Reddit Posts
We introduce PoPreRo, the first dataset for Popularity Prediction of Romanian posts collected from Reddit. The PoPreRo dataset includes a varied compilation of post samples from five distinct subreddits of Romania, totaling 28,107 data samples. Along with our novel dataset, we introduce a set of competitive models to be used as baselines for future research. Interestingly, the top-scoring model achieves an accuracy of 61.35% and a macro F1 score of 60.60% on the test set, indicating that the popularity prediction task on PoPreRo is very challenging. Further investigations based on few-shot prompting the Falcon-7B Large Language Model also point in the same direction. We thus believe that PoPreRo is a valuable resource that can be used to evaluate models on predicting the popularity of social media posts in Romanian. We release our dataset at https://github.com/ana-rogoz/PoPreRo.
http://arxiv.org/pdf/2407.04541v1
[ "Ana-Cristina Rogoz", "Maria Ilinca Nechita", "Radu Tudor Ionescu" ]
2024-07-05T14:28:12Z
2024-07-05T14:28:12Z
2305.17400
Query-Policy Misalignment in Preference-Based Reinforcement Learning
Preference-based reinforcement learning (PbRL) provides a natural way to align RL agents' behavior with human desired outcomes, but is often restrained by costly human feedback. To improve feedback efficiency, most existing PbRL methods focus on selecting queries to maximally improve the overall quality of the reward model, but counter-intuitively, we find that this may not necessarily lead to improved performance. To unravel this mystery, we identify a long-neglected issue in the query selection schemes of existing PbRL studies: Query-Policy Misalignment. We show that the seemingly informative queries selected to improve the overall quality of reward model actually may not align with RL agents' interests, thus offering little help on policy learning and eventually resulting in poor feedback efficiency. We show that this issue can be effectively addressed via near on-policy query and a specially designed hybrid experience replay, which together enforce the bidirectional query-policy alignment. Simple yet elegant, our method can be easily incorporated into existing approaches by changing only a few lines of code. We showcase in comprehensive experiments that our method achieves substantial gains in both human feedback and RL sample efficiency, demonstrating the importance of addressing query-policy misalignment in PbRL tasks.
http://arxiv.org/pdf/2305.17400v3
[ "Xiao Hu", "Jianxiong Li", "Xianyuan Zhan", "Qing-Shan Jia", "Ya-Qin Zhang" ]
2024-07-05T14:26:21Z
2023-05-27T07:55:17Z
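A schematic sketch of the near on-policy query selection and hybrid experience replay described above (arXiv:2305.17400). The buffer structure and mixing ratio are illustrative assumptions, not the paper's exact design.

```python
import random
from collections import deque

class NearOnPolicyQuerySampler:
    """Keep only segments collected by recent policies, so preference queries
    stay close to the state-action distribution the agent currently visits."""
    def __init__(self, max_recent_segments: int = 512):
        self.recent = deque(maxlen=max_recent_segments)

    def add_segment(self, segment) -> None:
        self.recent.append(segment)

    def sample_query_pair(self):
        # Query pairs are drawn from near on-policy data only.
        return random.sample(list(self.recent), 2)

def sample_hybrid_batch(recent_buffer, replay_buffer,
                        batch_size: int = 256, recent_ratio: float = 0.5):
    """Hybrid experience replay: mix recent and older transitions so the
    learned reward is exercised where the policy actually operates."""
    n_recent = int(batch_size * recent_ratio)
    batch = random.sample(recent_buffer, min(n_recent, len(recent_buffer)))
    batch += random.sample(replay_buffer,
                           min(batch_size - len(batch), len(replay_buffer)))
    return batch
```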
2407.04540
Improved algorithms for learning quantum Hamiltonians, via flat polynomials
We give an improved algorithm for learning a quantum Hamiltonian given copies of its Gibbs state, that can succeed at any temperature. Specifically, we improve over the work of Bakshi, Liu, Moitra, and Tang [BLMT24], by reducing the sample complexity and runtime dependence to singly exponential in the inverse-temperature parameter, as opposed to doubly exponential. Our main technical contribution is a new flat polynomial approximation to the exponential function, with significantly lower degree than the flat polynomial approximation used in [BLMT24].
http://arxiv.org/pdf/2407.04540v1
[ "Shyam Narayanan" ]
2024-07-05T14:25:22Z
2024-07-05T14:25:22Z
2407.04534
Introducing 'Inside' Out of Distribution
Detecting and understanding out-of-distribution (OOD) samples is crucial in machine learning (ML) to ensure reliable model performance. Current OOD studies, in general, and in the context of ML, in particular, primarily focus on extrapolatory OOD (outside), neglecting potential cases of interpolatory OOD (inside). This study introduces a novel perspective on OOD by suggesting OOD can be divided into inside and outside cases. In addition, following this framework, we examine the inside-outside OOD profiles of datasets and their impact on ML model performance. Our analysis shows that different inside-outside OOD profiles lead to nuanced declines in ML model performance, highlighting the importance of distinguishing between these two cases for developing effective counter-OOD methods.
http://arxiv.org/pdf/2407.04534v1
[ "Teddy Lazebnik" ]
2024-07-05T14:22:13Z
2024-07-05T14:22:13Z
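One plausible way to operationalize the inside/outside OOD distinction sketched above (arXiv:2407.04534): points outside the per-feature range of the training data are treated as outside-OOD, while points within the range but far from their nearest training neighbors are treated as inside-OOD. The thresholds and the kNN density proxy are assumptions for illustration, not the paper's definitions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def ood_profile(X_train: np.ndarray, X_test: np.ndarray,
                k: int = 10, density_quantile: float = 0.99) -> np.ndarray:
    """Label test points as 'in-distribution', 'inside-OOD', or 'outside-OOD'."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)

    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    train_dist = nn.kneighbors(X_train)[0].mean(axis=1)
    threshold = np.quantile(train_dist, density_quantile)  # "typical" kNN radius
    test_dist = nn.kneighbors(X_test)[0].mean(axis=1)

    outside = ((X_test < lo) | (X_test > hi)).any(axis=1)   # extrapolatory OOD
    sparse = test_dist > threshold                          # interpolatory gap

    labels = np.full(len(X_test), "in-distribution", dtype=object)
    labels[~outside & sparse] = "inside-OOD"
    labels[outside] = "outside-OOD"
    return labels
```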
2210.16371
Distributed Black-box Attack: Do Not Overestimate Black-box Attacks
Black-box adversarial attacks can fool image classifiers into misclassifying images without requiring access to model structure and weights. Recent studies have reported attack success rates of over 95% with less than 1,000 queries. The question then arises of whether black-box attacks have become a real threat against IoT devices that rely on cloud APIs to achieve image classification. To shed some light on this, note that prior research has primarily focused on increasing the success rate and reducing the number of queries. However, another crucial factor for black-box attacks against cloud APIs is the time required to perform the attack. This paper applies black-box attacks directly to cloud APIs rather than to local models, thereby avoiding mistakes made in prior research that applied the perturbation before image encoding and pre-processing. Further, we exploit load balancing to enable distributed black-box attacks that can reduce the attack time by a factor of about five for both local search and gradient estimation methods.
http://arxiv.org/pdf/2210.16371v4
[ "Han Wu", "Sareh Rowlands", "Johan Wahlstrom" ]
2024-07-05T14:18:18Z
2022-10-28T19:14:03Z
2407.04528
GPT vs RETRO: Exploring the Intersection of Retrieval and Parameter-Efficient Fine-Tuning
Parameter-Efficient Fine-Tuning (PEFT) and Retrieval-Augmented Generation (RAG) have become popular methods for adapting large language models while minimizing compute requirements. In this paper, we apply PEFT methods (P-tuning, Adapters, and LoRA) to a modified Retrieval-Enhanced Transformer (RETRO) and a baseline GPT model across several sizes, ranging from 823 million to 48 billion parameters. We show that RETRO models outperform GPT models in zero-shot settings due to their unique pre-training process, but GPT models have higher performance potential with PEFT. Additionally, our study indicates that 8B parameter models strike an optimal balance between cost and performance, and P-tuning lags behind other PEFT techniques. We further provide a comparative analysis between applying PEFT to an instruction-tuned RETRO model and a base RETRO model. This work presents the first comprehensive comparison of various PEFT methods integrated with RAG, applied to both GPT and RETRO models, highlighting their relative performance.
http://arxiv.org/pdf/2407.04528v1
[ "Aleksander Ficek", "Jiaqi Zeng", "Oleksii Kuchaiev" ]
2024-07-05T14:16:47Z
2024-07-05T14:16:47Z
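The PEFT side of the comparison above (arXiv:2407.04528) can be illustrated with the Hugging Face peft library. The checkpoint name, rank, and target modules below are placeholders and do not correspond to the GPT/RETRO models or hyperparameters used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder checkpoint; the paper works with GPT and RETRO models of 823M-48B parameters.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B",
                                             torch_dtype=torch.float32)

# LoRA: inject trainable low-rank adapters into the attention projections
# while the base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of all parameters
```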
2407.04525
Enhancing learning in artificial neural networks through cellular heterogeneity and neuromodulatory signaling
Recent progress in artificial intelligence (AI) has been driven by insights from neuroscience, particularly with the development of artificial neural networks (ANNs). This has significantly enhanced the replication of complex cognitive tasks such as vision and natural language processing. Despite these advances, ANNs struggle with continual learning, adaptable knowledge transfer, robustness, and resource efficiency - capabilities that biological systems handle seamlessly. Specifically, ANNs often overlook the functional and morphological diversity of the brain, hindering their computational capabilities. Furthermore, incorporating cell-type specific neuromodulatory effects into ANNs with neuronal heterogeneity could enable learning at two spatial scales: spiking behavior at the neuronal level, and synaptic plasticity at the circuit level, thereby potentially enhancing their learning abilities. In this article, we summarize recent bio-inspired models, learning rules and architectures and propose a biologically-informed framework for enhancing ANNs. Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors and dendritic compartments to simulate morphological and functional diversity of neuronal computations. Finally, we outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balances bioinspiration and complexity, and provides scalable solutions for pressing AI challenges, such as continual learning, adaptability, robustness, and resource-efficiency.
http://arxiv.org/pdf/2407.04525v1
[ "Alejandro Rodriguez-Garcia", "Jie Mei", "Srikanth Ramaswamy" ]
2024-07-05T14:11:28Z
2024-07-05T14:11:28Z
2407.04522
Graph Reinforcement Learning in Power Grids: A Survey
The challenges posed by renewable energy and distributed electricity generation motivate the development of deep learning approaches to overcome the lack of flexibility of traditional methods in power grid use cases. The application of graph neural networks (GNNs) is particularly promising due to their ability to learn from graph-structured data present in power grids. Combined with reinforcement learning (RL), they can serve as control approaches to determine remedial grid actions. This review analyses the ability of graph reinforcement learning (GRL) to capture the inherent graph structure of power grids to improve representation learning and decision making in different power grid use cases. It distinguishes between common problems in transmission and distribution grids and explores the synergy between RL and GNNs. In transmission grids, GRL typically addresses automated grid management and topology control, whereas on the distribution side, GRL concentrates more on voltage regulation. We analyzed the selected papers based on their graph structure and GNN model, the applied RL algorithm, and their overall contributions. Although GRL demonstrates adaptability in the face of unpredictable events and noisy or incomplete data, it primarily serves as a proof of concept at this stage. There are multiple open challenges and limitations that need to be addressed when considering the application of RL to real power grid operation.
http://arxiv.org/pdf/2407.04522v1
[ "Mohamed Hassouna", "Clara Holzhüter", "Pawel Lytaev", "Josephine Thomas", "Bernhard Sick", "Christoph Scholz" ]
2024-07-05T14:07:15Z
2024-07-05T14:07:15Z
2407.04521
Unified continuous-time q-learning for mean-field game and mean-field control problems
This paper studies the continuous-time q-learning in the mean-field jump-diffusion models from the representative agent's perspective. To overcome the challenge when the population distribution may not be directly observable, we introduce the integrated q-function in decoupled form (decoupled Iq-function) and establish its martingale characterization together with the value function, which provides a unified policy evaluation rule for both mean-field game (MFG) and mean-field control (MFC) problems. Moreover, depending on the task to solve the MFG or MFC problem, we can employ the decoupled Iq-function by different means to learn the mean-field equilibrium policy or the mean-field optimal policy respectively. As a result, we devise a unified q-learning algorithm for both MFG and MFC problems by utilizing all test policies stemming from the mean-field interactions. For several examples in the jump-diffusion setting, within and beyond the LQ framework, we can obtain the exact parameterization of the decoupled Iq-functions and the value functions, and illustrate our algorithm from the representative agent's perspective with satisfactory performance.
http://arxiv.org/pdf/2407.04521v1
[ "Xiaoli Wei", "Xiang Yu", "Fengyi Yuan" ]
2024-07-05T14:06:59Z
2024-07-05T14:06:59Z
2405.05962
Age Aware Scheduling for Differentially-Private Federated Learning
This paper explores differentially-private federated learning (FL) across time-varying databases, delving into a nuanced three-way tradeoff involving age, accuracy, and differential privacy (DP). Emphasizing the potential advantages of scheduling, we propose an optimization problem aimed at meeting DP requirements while minimizing the loss difference between the aggregated model and the model obtained without DP constraints. To harness the benefits of scheduling, we introduce an age-dependent upper bound on the loss, leading to the development of an age-aware scheduling design. Simulation results underscore the superior performance of our proposed scheme compared to FL with classic DP, which does not consider scheduling as a design factor. This research contributes insights into the interplay of age, accuracy, and DP in federated learning, with practical implications for scheduling strategies.
http://arxiv.org/pdf/2405.05962v2
[ "Kuan-Yu Lin", "Hsuan-Yin Lin", "Yu-Pin Hsu", "Yu-Chih Huang" ]
2024-07-05T14:01:11Z
2024-05-09T17:58:25Z
2407.04516
G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers
We present a novel, and effective, approach to the long-standing problem of mesh adaptivity in finite element methods (FEM). FE solvers are powerful tools for solving partial differential equations (PDEs), but their cost and accuracy are critically dependent on the choice of mesh points. To keep computational costs low, mesh relocation (r-adaptivity) seeks to optimise the position of a fixed number of mesh points to obtain the best FE solution accuracy. Classical approaches to this problem require the solution of a separate nonlinear "meshing" PDE to find the mesh point locations. This incurs significant cost at remeshing and relies on certain a-priori assumptions and guiding heuristics for optimal mesh point location. Recent machine learning approaches to r-adaptivity have mainly focused on the construction of fast surrogates for such classical methods. Our new approach combines a graph neural network (GNN) powered architecture, with training based on direct minimisation of the FE solution error with respect to the mesh point locations. The GNN employs graph neural diffusion (GRAND), closely aligning the mesh solution space to that of classical meshing methodologies, thus replacing heuristics with a learnable strategy, and providing a strong inductive bias. This allows for rapid and robust training and results in an extremely efficient and effective GNN approach to online r-adaptivity. This method outperforms classical and prior ML approaches to r-adaptive meshing on the test problems we consider, in particular achieving lower FE solution error, whilst retaining the significant speed-up over classical methods observed in prior ML work.
http://arxiv.org/pdf/2407.04516v1
[ "James Rowbottom", "Georg Maierhofer", "Teo Deveney", "Katharina Schratz", "Pietro Liò", "Carola-Bibiane Schönlieb", "Chris Budd" ]
2024-07-05T13:57:35Z
2024-07-05T13:57:35Z
2301.11461
Learning to Generate All Feasible Actions
Modern cyber-physical systems are becoming increasingly complex to model, thus motivating data-driven techniques such as reinforcement learning (RL) to find appropriate control agents. However, most systems are subject to hard constraints such as safety or operational bounds. Typically, to learn to satisfy these constraints, the agent must violate them systematically, which is computationally prohibitive in most systems. Recent efforts aim to utilize feasibility models that assess whether a proposed action is feasible to avoid applying the agent's infeasible action proposals to the system. However, these efforts focus on guaranteeing constraint satisfaction rather than the agent's learning efficiency. To improve the learning process, we introduce action mapping, a novel approach that divides the learning process into two steps: first learn feasibility and subsequently, the objective by mapping actions into the sets of feasible actions. This paper focuses on the feasibility part by learning to generate all feasible actions through self-supervised querying of the feasibility model. We train the agent by formulating the problem as a distribution matching problem and deriving gradient estimators for different divergences. Through an illustrative example, a robotic path planning scenario, and a robotic grasping simulation, we demonstrate the agent's proficiency in generating actions across disconnected feasible action sets. By addressing the feasibility step, this paper makes it possible to focus future work on the objective part of action mapping, paving the way for an RL framework that is both safe and efficient.
http://arxiv.org/abs/2301.11461v2
[ "Mirco Theile", "Daniele Bernardini", "Raphael Trumpp", "Cristina Piazza", "Marco Caccamo", "Alberto L. Sangiovanni-Vincentelli" ]
2024-07-05T13:57:34Z
2023-01-26T23:15:51Z
2407.04513
LayerShuffle: Enhancing Robustness in Vision Transformers by Randomizing Layer Execution Order
Due to their architecture and how they are trained, artificial neural networks are typically not robust toward pruning, replacing, or shuffling layers at test time. However, such properties would be desirable for different applications, such as distributed neural network architectures where the order of execution cannot be guaranteed or parts of the network can fail during inference. In this work, we address these issues through a number of proposed training approaches for vision transformers whose most important component is randomizing the execution order of attention modules at training time. We show that with our proposed approaches, vision transformers are indeed capable of adapting to arbitrary layer execution orders at test time, assuming one tolerates a reduction (about 20%) in accuracy at the same model size. We also find that our trained models can be randomly merged with each other, resulting in functional ("Frankenstein") models without loss of performance compared to the source models. Finally, we layer-prune our models at test time and find that their performance declines gracefully.
http://arxiv.org/pdf/2407.04513v1
[ "Matthias Freiberger", "Peter Kun", "Anders Sundnes Løvlie", "Sebastian Risi" ]
2024-07-05T13:54:15Z
2024-07-05T13:54:15Z
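The core training trick above (arXiv:2407.04513), randomizing the execution order of transformer blocks, is straightforward to sketch. The encoder below is a generic ViT-style stack with assumed dimensions, not the authors' code.

```python
import torch
import torch.nn as nn

class ShuffledEncoder(nn.Module):
    """ViT-style encoder that executes its blocks in a random order while
    training, so the model cannot rely on a fixed layer sequence."""
    def __init__(self, dim: int = 384, depth: int = 12, heads: int = 6):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       batch_first=True, norm_first=True)
            for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        order = torch.randperm(len(self.blocks)) if self.training \
            else torch.arange(len(self.blocks))
        for i in order.tolist():
            x = self.blocks[i](x)
        return x

tokens = torch.randn(2, 197, 384)     # (batch, patches + cls token, dim)
out = ShuffledEncoder()(tokens)       # blocks run in a random order in train mode
```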
2309.03229
Which algorithm to select in sports timetabling?
Any sports competition needs a timetable, specifying when and where teams meet each other. The recent International Timetabling Competition (ITC2021) on sports timetabling showed that, although it is possible to develop general algorithms, the performance of each algorithm varies considerably over the problem instances. This paper provides an instance space analysis for sports timetabling, resulting in powerful insights into the strengths and weaknesses of eight state-of-the-art algorithms. Based on machine learning techniques, we propose an algorithm selection system that predicts which algorithm is likely to perform best when given the characteristics of a sports timetabling problem instance. Furthermore, we identify which characteristics are important in making that prediction, providing insights in the performance of the algorithms, and suggestions to further improve them. Finally, we assess the empirical hardness of the instances. Our results are based on large computational experiments involving about 50 years of CPU time on more than 500 newly generated problem instances.
http://arxiv.org/abs/2309.03229v2
[ "David Van Bulck", "Dries Goossens", "Jan-Patrick Clarner", "Angelos Dimitsas", "George H. G. Fonseca", "Carlos Lamas-Fernandez", "Martin Mariusz Lester", "Jaap Pedersen", "Antony E. Phillips", "Roberto Maria Rosati" ]
2024-07-05T13:51:29Z
2023-09-04T15:13:56Z
2407.04507
Few-Shot Airway-Tree Modeling using Data-Driven Sparse Priors
The lack of large annotated datasets in medical imaging is an intrinsic burden for supervised Deep Learning (DL) segmentation models. Few-shot learning approaches are cost-effective solutions to transfer pre-trained models using only limited annotated data. However, such methods can be prone to overfitting due to limited data diversity, especially when segmenting complex, diverse, and sparse tubular structures like airways. Furthermore, crafting informative image representations has played a crucial role in medical imaging, enabling discriminative enhancement of anatomical details. In this paper, we initially train a data-driven sparsification module to enhance airways efficiently in lung CT scans. We then incorporate these sparse representations in a standard supervised segmentation pipeline as a pretraining step to enhance the performance of the DL models. Results presented on the ATM public challenge cohort show the effectiveness of using sparse priors in pre-training, leading to segmentation Dice score increases of 1% to 10% in full-scale and few-shot learning scenarios, respectively.
http://arxiv.org/pdf/2407.04507v1
[ "Ali Keshavarzi", "Elsa Angelini" ]
2024-07-05T13:46:11Z
2024-07-05T13:46:11Z
2406.14380
Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach
Recommender systems are essential for content-sharing platforms by curating personalized content. To evaluate updates to recommender systems targeting content creators, platforms frequently rely on creator-side randomized experiments. The treatment effect measures the change in outcomes when a new algorithm is implemented compared to the status quo. We show that the standard difference-in-means estimator can lead to biased estimates due to recommender interference that arises when treated and control creators compete for exposure. We propose a "recommender choice model" that describes which item gets exposed from a pool containing both treated and control items. By combining a structural choice model with neural networks, this framework directly models the interference pathway while accounting for rich viewer-content heterogeneity. We construct a debiased estimator of the treatment effect and prove it is $\sqrt{n}$-consistent and asymptotically normal with potentially correlated samples. We validate our estimator's empirical performance with a field experiment on the Weixin short-video platform. In addition to the standard creator-side experiment, we conduct a costly double-sided randomization design to obtain a benchmark estimate free from interference bias. We show that the proposed estimator yields results comparable to the benchmark, whereas the standard difference-in-means estimator can exhibit significant bias and even produce reversed signs.
http://arxiv.org/pdf/2406.14380v3
[ "Ruohan Zhan", "Shichao Han", "Yuchen Hu", "Zhenling Jiang" ]
2024-07-05T13:40:48Z
2024-06-20T14:53:26Z
2306.09633
The False Dawn: Reevaluating Google's Reinforcement Learning for Chip Macro Placement
Reinforcement learning (RL) for physical design of silicon chips in a Google 2021 Nature paper stirred controversy due to poorly documented claims that raised eyebrows and drew critical media coverage. The paper withheld critical methodology steps and most inputs needed to reproduce results. Our meta-analysis shows how two separate evaluations filled in the gaps and demonstrated that Google RL lags behind (i) human designers, (ii) a well-known algorithm (Simulated Annealing), and (iii) generally-available commercial software, while being slower; and in a 2023 open research contest, RL methods weren't in top 5. Crosschecked data indicate that the integrity of the Nature paper is substantially undermined owing to errors in conduct, analysis and reporting. Before publishing, Google rebuffed internal allegations of fraud, which still stand. We note policy implications and conclusions for chip design.
http://arxiv.org/pdf/2306.09633v9
[ "Igor L. Markov" ]
2024-07-05T13:35:54Z
2023-06-16T05:32:24Z
2407.04493
PROUD: PaRetO-gUided Diffusion Model for Multi-objective Generation
Recent advancements in the realm of deep generative models focus on generating samples that satisfy multiple desired properties. However, prevalent approaches optimize these property functions independently, thus omitting the trade-offs among them. In addition, the property optimization is often improperly integrated into the generative models, resulting in an unnecessary compromise on generation quality (i.e., the quality of generated samples). To address these issues, we formulate a constrained optimization problem. It seeks to optimize generation quality while ensuring that generated samples reside at the Pareto front of multiple property objectives. Such a formulation enables the generation of samples that cannot be further improved simultaneously on the conflicting property functions and preserves good quality of generated samples. Building upon this formulation, we introduce the PaRetO-gUided Diffusion model (PROUD), wherein the gradients in the denoising process are dynamically adjusted to enhance generation quality while the generated samples adhere to Pareto optimality. Experimental evaluations on image generation and protein generation tasks demonstrate that our PROUD consistently maintains superior generation quality while approaching Pareto optimality across multiple property functions compared to various baselines.
http://arxiv.org/abs/2407.04493v1
[ "Yinghua Yao", "Yuangang Pan", "Jing Li", "Ivor Tsang", "Xin Yao" ]
2024-07-05T13:32:06Z
2024-07-05T13:32:06Z
2407.04491
Better by Default: Strong Pre-Tuned MLPs and Boosted Trees on Tabular Data
For classification and regression on tabular data, the dominance of gradient-boosted decision trees (GBDTs) has recently been challenged by often much slower deep learning methods with extensive hyperparameter tuning. We address this discrepancy by introducing (a) RealMLP, an improved multilayer perceptron (MLP), and (b) improved default parameters for GBDTs and RealMLP. We tune RealMLP and the default parameters on a meta-train benchmark with 71 classification and 47 regression datasets and compare them to hyperparameter-optimized versions on a disjoint meta-test benchmark with 48 classification and 42 regression datasets, as well as the GBDT-friendly benchmark by Grinsztajn et al. (2022). Our benchmark results show that RealMLP offers a better time-accuracy tradeoff than other neural nets and is competitive with GBDTs. Moreover, a combination of RealMLP and GBDTs with improved default parameters can achieve excellent results on medium-sized tabular datasets (1K--500K samples) without hyperparameter tuning.
http://arxiv.org/pdf/2407.04491v1
[ "David Holzmüller", "Léo Grinsztajn", "Ingo Steinwart" ]
2024-07-05T13:29:30Z
2024-07-05T13:29:30Z
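The default-versus-default evaluation protocol above (arXiv:2407.04491) can be illustrated with standard scikit-learn models as stand-ins; RealMLP and the paper's improved GBDT defaults are not part of scikit-learn, so the snippet only shows the shape of such a comparison under these assumptions.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)

# Compare models using library defaults only, i.e. no hyperparameter tuning.
models = {
    "GBDT (library defaults)": HistGradientBoostingRegressor(random_state=0),
    "MLP (library defaults)": make_pipeline(
        StandardScaler(), MLPRegressor(max_iter=500, random_state=0)),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.3f}")
```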
2407.04485
Leveraging Graph Structures to Detect Hallucinations in Large Language Models
Large language models are extensively applied across a wide range of tasks, such as customer support, content creation, educational tutoring, and providing financial guidance. However, a well-known drawback is their predisposition to generate hallucinations. This damages the trustworthiness of the information these models provide, impacting decision-making and user confidence. We propose a method to detect hallucinations by looking at the structure of the latent space and finding associations within hallucinated and non-hallucinated generations. We create a graph structure that connects generations that lie closely in the embedding space. Moreover, we employ a Graph Attention Network which utilizes message passing to aggregate information from neighboring nodes and assigns varying degrees of importance to each neighbor based on their relevance. Our findings show that 1) there exists a structure in the latent space that differentiates between hallucinated and non-hallucinated generations, 2) Graph Attention Networks can learn this structure and generalize it to unseen generations, and 3) the robustness of our method is enhanced when incorporating contrastive learning. When evaluated against evidence-based benchmarks, our model performs similarly without access to search-based methods.
http://arxiv.org/pdf/2407.04485v1
[ "Noa Nonkes", "Sergei Agaronian", "Evangelos Kanoulas", "Roxana Petcu" ]
2024-07-05T13:08:58Z
2024-07-05T13:08:58Z
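A minimal sketch of the graph-based detector described above (arXiv:2407.04485): build a kNN graph over generation embeddings and classify nodes with a graph attention network. It assumes torch_geometric (with torch-cluster) is available; the embedding dimension, k, and layer sizes are illustrative.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, knn_graph

class HallucinationGAT(torch.nn.Module):
    """Two-layer graph attention network over a kNN graph of generation
    embeddings; each node is one LLM generation to be classified."""
    def __init__(self, in_dim: int = 768, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, 2, heads=1)   # hallucinated vs. not

    def forward(self, x, edge_index):
        x = F.elu(self.gat1(x, edge_index))
        return self.gat2(x, edge_index)

embeddings = torch.randn(500, 768)            # one embedding per generation
edge_index = knn_graph(embeddings, k=10)      # connect generations that lie close
logits = HallucinationGAT()(embeddings, edge_index)
```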
2402.02964
Mixed Noise and Posterior Estimation with Conditional DeepGEM
Motivated by indirect measurements and applications from nanometrology with a mixed noise model, we develop a novel algorithm for jointly estimating the posterior and the noise parameters in Bayesian inverse problems. We propose to solve the problem by an expectation maximization (EM) algorithm. Based on the current noise parameters, we learn in the E-step a conditional normalizing flow that approximates the posterior. In the M-step, we propose to find the noise parameter updates again by an EM algorithm, which has analytical formulas. We compare the training of the conditional normalizing flow with the forward and reverse KL, and show that our model is able to incorporate information from many measurements, unlike previous approaches.
http://arxiv.org/abs/2402.02964v2
[ "Paul Hagemann", "Johannes Hertrich", "Maren Casfor", "Sebastian Heidenreich", "Gabriele Steidl" ]
2024-07-05T13:04:10Z
2024-02-05T12:42:21Z
2407.04481
Using Petri Nets as an Integrated Constraint Mechanism for Reinforcement Learning Tasks
The lack of trust in algorithms is usually an issue when using Reinforcement Learning (RL) agents for control in real-world domains such as production plants, autonomous vehicles, or traffic-related infrastructure, partly due to the lack of verifiability of the model itself. In such scenarios, Petri nets (PNs) are often available for flowcharts or process steps, as they are versatile and standardized. In order to facilitate integration of RL models and as a step towards increasing AI trustworthiness, we propose an approach that uses PNs with three main advantages over typical RL approaches: Firstly, the agent can now easily be modeled with a combined state including both external environmental observations and agent-specific state information from a given PN. Secondly, we can enforce constraints for state-dependent actions through the inherent PN model. And lastly, we can increase trustworthiness by verifying PN properties through techniques such as model checking. We test our approach on a typical four-way intersection traffic light control setting and present our results, beating cycle-based baselines.
http://arxiv.org/pdf/2407.04481v1
[ "Timon Sachweh", "Pierre Haritz", "Thomas Liebig" ]
2024-07-05T13:04:06Z
2024-07-05T13:04:06Z
2407.04480
LoCo: Low-Bit Communication Adaptor for Large-scale Model Training
To efficiently train large-scale models, low-bit gradient communication compresses full-precision gradients on local GPU nodes into low-precision ones for higher gradient synchronization efficiency among GPU nodes. However, it often degrades training quality due to compression information loss. To address this, we propose the Low-bit Communication Adaptor (LoCo), which compensates gradients on local GPU nodes before compression, ensuring efficient synchronization without compromising training quality. Specifically, LoCo designs a moving average of historical compensation errors to stably estimate concurrent compression error and then adopts it to compensate for the concurrent gradient compression, yielding compression with less information loss. This mechanism allows it to be compatible with general optimizers like Adam and sharding strategies like FSDP. Theoretical analysis shows that integrating LoCo into full-precision optimizers like Adam and SGD does not impair their convergence speed on nonconvex problems. Experimental results show that across large-scale model training frameworks like Megatron-LM and PyTorch's FSDP, LoCo significantly improves communication efficiency, e.g., improving Adam's training speed by 14% to 40% without performance degradation on large language models like LLAMAs and MoE.
http://arxiv.org/pdf/2407.04480v1
[ "Xingyu Xie", "Zhijie Lin", "Kim-Chuan Toh", "Pan Zhou" ]
2024-07-05T13:01:36Z
2024-07-05T13:01:36Z
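A schematic sketch of the error-compensation mechanism described above (arXiv:2407.04480): keep a moving average of past compression errors per parameter and add it back before quantizing. The EMA form, bit width, and quantizer are assumptions, not LoCo's exact formulation.

```python
import torch

class LowBitCompensator:
    """Error-feedback sketch: before quantizing a gradient to low precision,
    add back a moving average of the errors left by previous compressions."""
    def __init__(self, beta: float = 0.9, bits: int = 4):
        self.beta, self.bits = beta, bits
        self.error_ema = {}                      # per-parameter EMA of errors

    def compress(self, name: str, grad: torch.Tensor):
        ema = self.error_ema.get(name, torch.zeros_like(grad))
        compensated = grad + ema                 # pre-compensate the gradient
        q_values, scale = self._quantize(compensated)
        reconstructed = q_values * scale
        error = compensated - reconstructed      # what this round's compression lost
        self.error_ema[name] = self.beta * ema + (1 - self.beta) * error
        return q_values, scale                   # low-bit payload to synchronize

    def _quantize(self, t: torch.Tensor):
        qmax = 2 ** (self.bits - 1) - 1
        scale = t.abs().max().clamp_min(1e-12) / qmax
        return torch.round(t / scale).clamp(-qmax - 1, qmax), scale

# Usage: wrap each gradient right before the all-reduce / communication step.
comp = LowBitCompensator()
payload = comp.compress("layer1.weight", torch.randn(1024, 1024))
```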
2407.04476
Rethinking Data Input for Point Cloud Upsampling
In recent years, point cloud upsampling has been widely applied in fields such as 3D reconstruction and surface generation. However, existing point cloud upsampling methods all take patch-based inputs, and no prior work has discussed the differences and principles between full-model input and patch-based input. To enable a comparison with patch-based input, this article proposes a new data input method that partitions the full point cloud model so as to preserve shape integrity while training PU-GCN. The approach is validated on the PU1K and ABC datasets, and the results show that patch-based input performs better than full-model input (i.e., average-segment input). The article therefore explores the data input factors and model modules that affect point cloud upsampling results.
http://arxiv.org/pdf/2407.04476v1
[ "Tongxu Zhang" ]
2024-07-05T12:53:34Z
2024-07-05T12:53:34Z
2102.04399
How to Stay Curious while Avoiding Noisy TVs using Aleatoric Uncertainty Estimation
Exploration in environments with sparse rewards is difficult for artificial agents. Curiosity driven learning -- using feed-forward prediction errors as intrinsic rewards -- has achieved some success in these scenarios, but fails when faced with action-dependent noise sources. We present aleatoric mapping agents (AMAs), a neuroscience inspired solution modeled on the cholinergic system of the mammalian brain. AMAs aim to explicitly ascertain which dynamics of the environment are unpredictable, regardless of whether those dynamics are induced by the actions of the agent. This is achieved by generating separate forward predictions for the mean and variance of future states and reducing intrinsic rewards for those transitions with high aleatoric variance. We show AMAs are able to effectively circumvent action-dependent stochastic traps that immobilise conventional curiosity driven agents. The code for all experiments presented in this paper is open sourced: http://github.com/self-supervisor/Escaping-Stochastic-Traps-With-Aleatoric-Mapping-Agents.
http://arxiv.org/pdf/2102.04399v3
[ "Augustine N. Mavor-Parker", "Kimberly A. Young", "Caswell Barry", "Lewis D. Griffin" ]
2024-07-05T12:51:54Z
2021-02-08T18:05:08Z
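A sketch of the aleatoric-variance-aware intrinsic reward idea above (arXiv:2102.04399). The network architecture and the exact reward form (prediction error minus predicted variance, clipped at zero) are plausible assumptions rather than the paper's precise definitions.

```python
import torch
import torch.nn as nn

class AleatoricForwardModel(nn.Module):
    """Forward model that predicts both the mean and the log-variance of the
    next state, so irreducibly noisy transitions can be identified."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 2 * state_dim))

    def forward(self, state, action):
        mu, log_var = self.net(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        return mu, log_var

def intrinsic_reward(model, state, action, next_state):
    """Prediction error discounted by predicted aleatoric variance, so the
    curiosity bonus vanishes for action-dependent noise ("noisy TVs")."""
    mu, log_var = model(state, action)
    error = ((next_state - mu) ** 2).sum(-1)
    aleatoric = log_var.exp().sum(-1)
    return torch.clamp(error - aleatoric, min=0.0).detach()

model = AleatoricForwardModel(state_dim=4, action_dim=2)
r = intrinsic_reward(model, torch.randn(32, 4), torch.randn(32, 2), torch.randn(32, 4))
```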
2404.06198
The impact of data set similarity and diversity on transfer learning success in time series forecasting
Pre-trained models have become pivotal in enhancing the efficiency and accuracy of time series forecasting on target data sets by leveraging transfer learning. While benchmarks validate the performance of model generalization on various target data sets, there is no structured research providing similarity and diversity measures to explain which characteristics of source and target data lead to transfer learning success. Our study is the first to systematically evaluate the impact of source-target similarity and source diversity on zero-shot and fine-tuned forecasting outcomes in terms of accuracy, bias, and uncertainty estimation. We investigate these dynamics using pre-trained neural networks across five public source datasets, applied to forecasting five target data sets, including real-world wholesale data. We identify two feature-based similarity and diversity measures, finding that source-target similarity reduces forecasting bias, while source diversity improves forecasting accuracy and uncertainty estimation but increases the bias.
http://arxiv.org/pdf/2404.06198v2
[ "Claudia Ehrig", "Benedikt Sonnleitner", "Ursula Neumann", "Catherine Cleophas", "Germain Forestier" ]
2024-07-05T12:34:41Z
2024-04-09T10:41:59Z
2402.18839
Extended Flow Matching: a Method of Conditional Generation with Generalized Continuity Equation
The task of conditional generation is one of the most important applications of generative models, and numerous methods have been developed to date based on the celebrated flow-based models. However, many flow-based models in use today are not built to allow one to introduce an explicit inductive bias to how the conditional distribution to be generated changes with respect to conditions. This can result in unexpected behavior in the task of style transfer, for example. In this research, we introduce extended flow matching (EFM), a direct extension of flow matching that learns a "matrix field" corresponding to the continuous map from the space of conditions to the space of distributions. We show that we can introduce inductive bias to the conditional generation through the matrix field and demonstrate this fact with MMOT-EFM, a version of EFM that aims to minimize the Dirichlet energy or the sensitivity of the distribution with respect to conditions. We will present our theory along with experimental results that support the competitiveness of EFM in conditional generation.
http://arxiv.org/pdf/2402.18839v6
[ "Noboru Isobe", "Masanori Koyama", "Jinzhe Zhang", "Kohei Hayashi", "Kenji Fukumizu" ]
2024-07-05T12:19:09Z
2024-02-29T04:12:32Z
2407.04460
Smart Sampling: Helping from Friendly Neighbors for Decentralized Federated Learning
Federated Learning (FL) is gaining widespread interest for its ability to share knowledge while preserving privacy and reducing communication costs. Unlike Centralized FL, Decentralized FL (DFL) employs a network architecture that eliminates the need for a central server, allowing direct communication among clients and leading to significant communication resource savings. However, due to data heterogeneity, not all neighboring nodes contribute to enhancing the local client's model performance. In this work, we introduce AFIND+, a simple yet efficient algorithm for sampling and aggregating neighbors in DFL, with the aim of leveraging collaboration to improve clients' model performance. AFIND+ identifies helpful neighbors, adaptively adjusts the number of selected neighbors, and strategically aggregates the sampled neighbors' models based on their contributions. Numerical results on real-world datasets with diverse data partitions demonstrate that AFIND+ outperforms other sampling algorithms in DFL and is compatible with most existing DFL optimization algorithms.
http://arxiv.org/pdf/2407.04460v1
[ "Lin Wang", "Yang Chen", "Yongxin Guo", "Xiaoying Tang" ]
2024-07-05T12:10:54Z
2024-07-05T12:10:54Z
2407.04451
Hindsight Preference Learning for Offline Preference-based Reinforcement Learning
Offline preference-based reinforcement learning (RL), which focuses on optimizing policies using human preferences between pairs of trajectory segments selected from an offline dataset, has emerged as a practical avenue for RL applications. Existing works rely on extracting step-wise reward signals from trajectory-wise preference annotations, assuming that preferences correlate with the cumulative Markovian rewards. However, such methods fail to capture the holistic perspective of data annotation: Humans often assess the desirability of a sequence of actions by considering the overall outcome rather than the immediate rewards. To address this challenge, we propose to model human preferences using rewards conditioned on future outcomes of the trajectory segments, i.e. the hindsight information. For downstream RL optimization, the reward of each step is calculated by marginalizing over possible future outcomes, the distribution of which is approximated by a variational auto-encoder trained using the offline dataset. Our proposed method, Hindsight Preference Learning (HPL), can facilitate credit assignment by taking full advantage of vast trajectory data available in massive unlabeled datasets. Comprehensive empirical studies demonstrate the benefits of HPL in delivering robust and advantageous rewards across various domains. Our code is publicly released at https://github.com/typoverflow/WiseRL.
http://arxiv.org/pdf/2407.04451v1
[ "Chen-Xiao Gao", "Shengjun Fang", "Chenjun Xiao", "Yang Yu", "Zongzhang Zhang" ]
2024-07-05T12:05:37Z
2024-07-05T12:05:37Z
2407.04449
Multi-modal Masked Siamese Network Improves Chest X-Ray Representation Learning
Self-supervised learning methods for medical images primarily rely on the imaging modality during pretraining. While such approaches deliver promising results, they do not leverage associated patient or scan information collected within Electronic Health Records (EHR). Here, we propose to incorporate EHR data during self-supervised pretraining with a Masked Siamese Network (MSN) to enhance the quality of chest X-ray representations. We investigate three types of EHR data, including demographic, scan metadata, and inpatient stay information. We evaluate our approach on three publicly available chest X-ray datasets, MIMIC-CXR, CheXpert, and NIH-14, using two vision transformer (ViT) backbones, specifically ViT-Tiny and ViT-Small. In assessing the quality of the representations via linear evaluation, our proposed method demonstrates significant improvement compared to vanilla MSN and state-of-the-art self-supervised learning baselines. Our work highlights the potential of EHR-enhanced self-supervised pre-training for medical imaging. The code is publicly available at: https://github.com/nyuad-cai/CXR-EHR-MSN
http://arxiv.org/pdf/2407.04449v1
[ "Saeed Shurrab", "Alejandro Guerra-Manzanares", "Farah E. Shamout" ]
2024-07-05T12:04:12Z
2024-07-05T12:04:12Z
2407.04440
Wavelet-based Temporal Attention Improves Traffic Forecasting
Spatio-temporal forecasting of traffic flow data represents a typical problem in the field of machine learning, impacting urban traffic management systems. Traditional statistical and machine learning methods cannot adequately handle both the temporal and spatial dependencies in these complex traffic flow datasets. A prevalent approach in the field is to combine graph convolutional networks and multi-head attention mechanisms for spatio-temporal processing. This paper proposes a wavelet-based temporal attention model, namely a wavelet-based dynamic spatio-temporal aware graph neural network (W-DSTAGNN), for tackling the traffic forecasting problem. Benchmark experiments using several statistical metrics confirm that our proposal efficiently captures spatio-temporal correlations and outperforms ten state-of-the-art models on three different real-world traffic datasets. Our proposed ensemble data-driven method can handle dynamic temporal and spatial dependencies and make long-term forecasts in an efficient manner.
http://arxiv.org/pdf/2407.04440v1
[ "Yash Jakhmola", "Nitish Kumar Mishra", "Kripabandhu Ghosh", "Tanujit Chakraborty" ]
2024-07-05T11:42:39Z
2024-07-05T11:42:39Z
2306.13595
Autoencoders for Real-Time SUEP Detection
Confining dark sectors with pseudo-conformal dynamics can produce Soft Unclustered Energy Patterns (SUEP), at the Large Hadron Collider: the production of dark quarks in proton-proton collisions leading to a dark shower and the high-multiplicity production of dark hadrons. The final experimental signature is spherically-symmetric energy deposits by an anomalously large number of soft Standard Model particles with a transverse energy of O(100) MeV. Assuming Yukawa-like couplings of the scalar portal state, the dominant production mode is gluon fusion, and the dominant background comes from multi-jet QCD events. We have developed a deep learning-based Anomaly Detection technique to reject QCD jets and identify any anomalous signature, including SUEP, in real-time in the High-Level Trigger system of the Compact Muon Solenoid experiment at the Large Hadron Collider. A deep convolutional neural autoencoder network has been trained using QCD events by taking transverse energy deposits in the inner tracker, electromagnetic calorimeter, and hadron calorimeter sub-detectors as 3-channel image data. Due to the sparse nature of the data, only ~0.5% of the total ~300 k image pixels have non-zero values. To tackle this challenge, a non-standard loss function, the inverse of the so-called Dice Loss, is exploited. The trained autoencoder with learned spatial features of QCD jets can detect 40% of the SUEP events, with a QCD event mistagging rate as low as 2%. The model inference time has been measured using the Intel Core i5-9600KF processor and found to be ~20 ms, which perfectly satisfies the High-Level Trigger system's latency of O(100) ms. Given the virtue of the unsupervised learning of the autoencoders, the trained model can be applied to any new physics model that predicts an experimental signature anomalous to QCD jets.
http://arxiv.org/abs/2306.13595v3
[ "Simranjit Singh Chhibra", "Nadezda Chernyavskaya", "Benedikt Maier", "Maurzio Pierini", "Syed Hasan" ]
2024-07-05T11:34:10Z
2023-06-23T16:35:00Z
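A sketch of the inverse Dice loss mentioned above (arXiv:2306.13595) for very sparse images; the smoothing constant and batch reduction are assumptions for illustration.

```python
import torch

def inverse_dice_loss(pred: torch.Tensor, target: torch.Tensor,
                      eps: float = 1.0) -> torch.Tensor:
    """Inverse of the Dice coefficient, suited to extremely sparse images
    (here only ~0.5% of pixels are non-zero): minimizing 1/Dice strongly
    penalizes missing the few active pixels."""
    pred, target = pred.flatten(1), target.flatten(1)        # (batch, pixels)
    intersection = (pred * target).sum(dim=1)
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return (1.0 / dice).mean()

# Autoencoder reconstruction of sparse 3-channel detector images.
reconstruction = torch.rand(4, 3, 64, 64, requires_grad=True)
target = (torch.rand(4, 3, 64, 64) > 0.995).float()          # ~0.5% occupancy
inverse_dice_loss(reconstruction, target).backward()
```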
2402.11997
Remember This Event That Year? Assessing Temporal Information and Reasoning in Large Language Models
Large Language Models (LLMs) are increasingly ubiquitous, yet their ability to retain and reason about temporal information remains limited, hindering their application in real-world scenarios where understanding the sequential nature of events is crucial. Our study experiments with 12 state-of-the-art models (ranging from 2B to 70B+ parameters) on a novel numerical-temporal dataset, TempUN, spanning from 10,000 BCE to 2100 CE, to uncover significant temporal retention and comprehension limitations. We propose six metrics to assess three learning paradigms to enhance temporal knowledge acquisition. Our findings reveal that open-source models exhibit knowledge gaps more frequently, suggesting a trade-off between limited knowledge and incorrect responses. Additionally, various fine-tuning approaches significantly improved performance, reducing incorrect outputs and impacting the identification of 'information not available' in the generations. The associated dataset and code are available at (https://github.com/lingoiitgn/TempUN).
http://arxiv.org/pdf/2402.11997v2
[ "Himanshu Beniwal", "Dishant Patel", "Kowsik Nandagopan D", "Hritik Ladia", "Ankit Yadav", "Mayank Singh" ]
2024-07-05T11:26:51Z
2024-02-19T09:43:03Z
2407.04418
Enabling On-Device LLMs Personalization with Smartphone Sensing
This demo presents a novel end-to-end framework that combines on-device large language models (LLMs) with smartphone sensing technologies to achieve context-aware and personalized services. The framework addresses critical limitations of current personalization solutions via cloud-based LLMs, such as privacy concerns, latency and cost, and limited personal sensor data. To achieve this, we innovatively proposed deploying LLMs on smartphones with multimodal sensor data and customized prompt engineering, ensuring privacy and enhancing personalization performance through context-aware sensing. A case study involving a university student demonstrated the proposed framework's capability to provide tailored recommendations. In addition, we show that the proposed framework achieves the best trade-off in privacy, performance, latency, cost, battery and energy consumption between on-device and cloud LLMs. Future work aims to integrate more diverse sensor data and conduct large-scale user studies to further refine the personalization. We envision the proposed framework could significantly improve user experiences in various domains such as healthcare, productivity, and entertainment by providing secure, context-aware, and efficient interactions directly on users' devices.
http://arxiv.org/pdf/2407.04418v1
[ "Shiquan Zhang", "Ying Ma", "Le Fang", "Hong Jia", "Simon D'Alfonso", "Vassilis Kostakos" ]
2024-07-05T11:09:05Z
2024-07-05T11:09:05Z
2406.11909
Mixture-of-Subspaces in Low-Rank Adaptation
In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA) method, which is computationally efficient, easy to implement, and readily applicable to large language, multimodal, and diffusion models. Initially, we equivalently decompose the weights of LoRA into two subspaces, and find that simply mixing them can enhance performance. To study such a phenomenon, we revisit it through a fine-grained subspace lens, showing that such modification is equivalent to employing a fixed mixer to fuse the subspaces. To be more flexible, we jointly learn the mixer with the original LoRA weights, and term the method Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently outperforms LoRA on tasks in different modalities, including commonsense reasoning, visual instruction tuning, and subject-driven text-to-image generation, demonstrating its effectiveness and robustness. Codes are available at https://github.com/wutaiqiang/MoSLoRA.
http://arxiv.org/pdf/2406.11909v2
[ "Taiqiang Wu", "Jiahao Wang", "Zhe Zhao", "Ngai Wong" ]
2024-07-05T11:06:12Z
2024-06-16T14:19:49Z
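The subspace-mixing idea above (arXiv:2406.11909) reduces to inserting a learnable r x r mixer between the LoRA down- and up-projections. The initialization and scaling below are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class MoSLoRALinear(nn.Module):
    """LoRA with a learnable r x r mixer between the down- and up-projection
    subspaces: delta(W) = B @ M @ A instead of B @ A."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)   # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.mixer = nn.Parameter(torch.eye(r))            # learnable subspace mixer
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        delta = x @ self.A.T @ self.mixer.T @ self.B.T
        return base + self.scaling * delta

out = MoSLoRALinear(768, 768)(torch.randn(2, 16, 768))
```

With the mixer fixed to the identity this reduces to vanilla LoRA; learning it jointly with A and B is what fuses the subspaces.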
2407.02984
Semantically Rich Local Dataset Generation for Explainable AI in Genomics
Black box deep learning models trained on genomic sequences excel at predicting the outcomes of different gene regulatory mechanisms. Therefore, interpreting these models may provide novel insights into the underlying biology, supporting downstream biomedical applications. Due to their complexity, interpretable surrogate models can only be built for local explanations (e.g., a single instance). However, accomplishing this requires generating a dataset in the neighborhood of the input, which must maintain syntactic similarity to the original data while introducing semantic variability in the model's predictions. This task is challenging due to the complex sequence-to-function relationship of DNA. We propose using Genetic Programming to generate datasets by evolving perturbations in sequences that contribute to their semantic diversity. Our custom, domain-guided individual representation effectively constrains syntactic similarity, and we provide two alternative fitness functions that promote diversity with no computational effort. Applied to the RNA splicing domain, our approach quickly achieves good diversity and significantly outperforms a random baseline in exploring the search space, as shown by our proof-of-concept, short RNA sequence. Furthermore, we assess its generalizability and demonstrate scalability to larger sequences, resulting in a ~30% improvement over the baseline.
http://arxiv.org/pdf/2407.02984v2
[ "Pedro Barbosa", "Rosina Savisaar", "Alcides Fonseca" ]
2024-07-05T10:48:27Z
2024-07-03T10:31:30Z
2407.04407
Trustworthy Classification through Rank-Based Conformal Prediction Sets
Machine learning classification tasks often benefit from predicting a set of possible labels with confidence scores to capture uncertainty. However, existing methods struggle with the high-dimensional nature of the data and the lack of well-calibrated probabilities from modern classification models. We propose a novel conformal prediction method that employs a rank-based score function suitable for classification models that predict the order of labels correctly, even if not well-calibrated. Our approach constructs prediction sets that achieve the desired coverage rate while managing their size. We provide a theoretical analysis of the expected size of the conformal prediction sets based on the rank distribution of the underlying classifier. Through extensive experiments, we demonstrate that our method outperforms existing techniques on various datasets, providing reliable uncertainty quantification. Our contributions include a novel conformal prediction method, theoretical analysis, and empirical evaluation. This work advances the practical deployment of machine learning systems by enabling reliable uncertainty quantification.
http://arxiv.org/pdf/2407.04407v1
[ "Rui Luo", "Zhixin Zhou" ]
2024-07-05T10:43:41Z
2024-07-05T10:43:41Z
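A sketch of a rank-based conformal procedure in the spirit of the abstract above (arXiv:2407.04407): the conformity score is the rank of the true label under the classifier's ordering, and prediction sets are top-k label sets. The finite-sample quantile details are assumptions, not the paper's exact construction.

```python
import numpy as np

def calibrate_rank_threshold(probs_cal: np.ndarray, labels_cal: np.ndarray,
                             alpha: float = 0.1) -> int:
    """Conformity score = rank of the true label under the model's ordering.
    Returns a rank threshold targeting (1 - alpha) marginal coverage."""
    order = np.argsort(-probs_cal, axis=1)                  # labels by decreasing prob
    ranks = np.where(order == labels_cal[:, None])[1] + 1   # 1 = top prediction
    n = len(ranks)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return int(np.quantile(ranks, q_level, method="higher"))

def prediction_set(probs_test: np.ndarray, k: int) -> list:
    """Each prediction set is simply the top-k labels of the classifier."""
    order = np.argsort(-probs_test, axis=1)
    return [set(row[:k]) for row in order]

# Usage with hypothetical calibration/test probabilities:
# k = calibrate_rank_threshold(probs_cal, y_cal, alpha=0.1)
# sets = prediction_set(probs_test, k)
```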
2407.04406
On Quantum Channel Learning
The problem of an optimal mapping between Hilbert spaces $IN$ and $OUT$, based on a series of density matrix mapping measurements $\rho^{(l)} \to \varrho^{(l)}$, $l=1\dots M$, is formulated as an optimization problem maximizing the total fidelity $\mathcal{F}=\sum_{l=1}^{M} \omega^{(l)} F\left(\varrho^{(l)},\sum_s B_s \rho^{(l)} B^{\dagger}_s\right)$ subject to probability preservation constraints on Kraus operators $B_s$. For $F(\varrho,\sigma)$ in a form such that the total fidelity can be represented as a quadratic form with superoperator $\mathcal{F}=\sum_s\left\langle B_s\middle|S\middle| B_s \right\rangle$ (either exactly or as an approximation), an iterative algorithm is developed to find the global maximum. The result comprises $N_s$ operators $B_s$ that collectively form an $IN$ to $OUT$ quantum channel $A^{OUT}=\sum_s B_s A^{IN} B_s^{\dagger}$. The work introduces two important generalizations of unitary learning: 1. $IN$/$OUT$ states are represented as density matrices. 2. The mapping itself is formulated as a general quantum channel. This marks a crucial advancement from the commonly studied unitary mapping of pure states $\phi_l=\mathcal{U} \psi_l$ to a general quantum channel, which allows us to distinguish a probabilistic mixture of states from their superposition. An application of the approach is demonstrated on unitary learning of the density matrix mapping $\varrho^{(l)}=\mathcal{U} \rho^{(l)} \mathcal{U}^{\dagger}$, in which case a fidelity quadratic in $\mathcal{U}$ can be constructed by considering the $\sqrt{\rho^{(l)}} \to \sqrt{\varrho^{(l)}}$ mapping, and on a general quantum channel of Kraus rank $N_s$, where the fidelity quadratic in $B_s$ is an approximation -- a quantum channel is then built as a hierarchy of unitary mappings. The approach can be applied to study decoherence effects, spontaneous coherence, synchronizing, etc.
http://arxiv.org/pdf/2407.04406v1
[ "Mikhail Gennadievich Belov", "Victor Victorovich Dubov", "Alexey Vladimirovich Filimonov", "Vladislav Gennadievich Malyshkin" ]
2024-07-05T10:43:24Z
2024-07-05T10:43:24Z
2407.04405
Discovering symbolic expressions with parallelized tree search
Symbolic regression plays a crucial role in modern scientific research thanks to its capability of discovering concise and interpretable mathematical expressions from data. A grand challenge lies in the arduous search for parsimonious and generalizable mathematical formulas, in an infinite search space, while intending to fit the training data. For over a decade, existing algorithms have faced a critical bottleneck in accuracy and efficiency when handling complex problems, which essentially hinders the pace of applying symbolic regression for scientific exploration across interdisciplinary domains. To this end, we introduce a parallelized tree search (PTS) model to efficiently distill generic mathematical expressions from limited data. Through a series of extensive experiments, we demonstrate the superior accuracy and efficiency of PTS for equation discovery, which greatly outperforms the state-of-the-art baseline models on over 80 synthetic and experimental datasets (e.g., up to a 99% accuracy improvement and an order-of-magnitude speed-up). PTS represents a key advance in accurate and efficient data-driven discovery of symbolic, interpretable models (e.g., underlying physical laws) and marks a pivotal transition towards scalable symbolic learning.
http://arxiv.org/pdf/2407.04405v1
[ "Kai Ruan", "Ze-Feng Gao", "Yike Guo", "Hao Sun", "Ji-Rong Wen", "Yang Liu" ]
2024-07-05T10:41:15Z
2024-07-05T10:41:15Z
2407.04400
Hard-Attention Gates with Gradient Routing for Endoscopic Image Computing
To address overfitting and enhance model generalization in gastroenterological polyp size assessment, our study introduces Feature-Selection Gates (FSG) or Hard-Attention Gates (HAG) alongside Gradient Routing (GR) for dynamic feature selection. This technique aims to boost Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) by promoting sparse connectivity, thereby reducing overfitting and enhancing generalization. HAG achieves this through sparsification with learnable weights, serving as a regularization strategy. GR further refines this process by optimizing HAG parameters via dual forward passes, independently from the main model, to improve feature re-weighting. Our evaluation spanned multiple datasets, including CIFAR-100 for a broad impact assessment and specialized endoscopic datasets (REAL-Colon, Misawa, and SUN) focusing on polyp size estimation, covering over 200 polyps in more than 370,000 frames. The findings indicate that our HAG-enhanced networks substantially enhance performance in both binary and triclass classification tasks related to polyp sizing. Specifically, CNNs experienced an F1 Score improvement to 87.8% in binary classification, while in triclass classification, the ViT-T model reached an F1 Score of 76.5%, outperforming traditional CNNs and ViT-T models. To facilitate further research, we are releasing our codebase, which includes implementations for CNNs, multistream CNNs, ViT, and HAG-augmented variants. This resource aims to standardize the use of endoscopic datasets, providing public training-validation-testing splits for reliable and comparable research in gastroenterological polyp size estimation. The codebase is available at github.com/cosmoimd/feature-selection-gates.
http://arxiv.org/pdf/2407.04400v1
[ "Giorgio Roffo", "Carlo Biffi", "Pietro Salvagnini", "Andrea Cherubini" ]
2024-07-05T10:20:24Z
2024-07-05T10:20:24Z
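As a rough illustration of the Hard-Attention Gate idea, the following sketch implements a per-channel gate with learnable logits and an L1 sparsity penalty in PyTorch. The paper's exact gate parametrization and the Gradient Routing dual forward passes are not reproduced; the `FeatureSelectionGate` class name and its hyperparameters are assumptions of this sketch.

```python
# Minimal sketch of a hard-attention / feature-selection gate layer.
import torch
import torch.nn as nn

class FeatureSelectionGate(nn.Module):
    def __init__(self, num_channels: int, l1_weight: float = 1e-4):
        super().__init__()
        # One learnable logit per feature channel; sigmoid keeps gates in (0, 1).
        self.logits = nn.Parameter(torch.zeros(num_channels))
        self.l1_weight = l1_weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.logits)            # (C,)
        # Broadcast over (N, C, H, W) feature maps or (N, C) vectors.
        shape = (1, -1) + (1,) * (x.dim() - 2)
        return x * gates.view(shape)

    def sparsity_penalty(self) -> torch.Tensor:
        # L1 pressure on the gates encourages sparse connectivity.
        return self.l1_weight * torch.sigmoid(self.logits).sum()

# Usage: insert after a convolutional block and add the penalty to the loss.
gate = FeatureSelectionGate(num_channels=64)
features = torch.randn(8, 64, 16, 16)
gated = gate(features)
loss = gated.mean() + gate.sparsity_penalty()
loss.backward()
```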
2401.10690
Beyond RMSE and MAE: Introducing EAUC to unmask hidden bias and unfairness in dyadic regression models
Dyadic regression models, which predict real-valued outcomes for pairs of entities, are fundamental in many domains (e.g., predicting a user's rating of a product in recommender systems) and show promise in many others that remain under-explored (e.g., approximating the adequate dosage of a drug for a patient in personalized pharmacology). In this work, we demonstrate that non-uniformity in the observed value distributions of individual entities leads to severely biased predictions in state-of-the-art models, skewing predictions towards the average of the values observed in the past for each entity and yielding worse-than-random predictive power in eccentric yet equally important cases. We show that global error metrics such as Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) are insufficient to capture this phenomenon, which we name eccentricity bias, and we introduce Eccentricity-Area Under the Curve (EAUC) as a new complementary metric that can quantify it in all studied models and datasets. We also demonstrate the adequacy of EAUC by using naive de-biasing corrections to show that lower model bias correlates with lower EAUC and vice versa. This work contributes a bias-aware evaluation of dyadic regression models to avoid potential unfairness and risks in critical real-world applications of such systems.
http://arxiv.org/pdf/2401.10690v2
[ "Jorge Paz-Ruza", "Amparo Alonso-Betanzos", "Bertha Guijarro-Berdiñas", "Brais Cancela", "Carlos Eiras-Franco" ]
2024-07-05T10:19:36Z
2024-01-19T13:41:08Z
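One plausible reading of an eccentricity-vs-error analysis in the spirit of EAUC is sketched below: eccentricity is taken as the distance of the true value from the mean of the values observed for the dyad's entities, and the per-bin error is integrated over eccentricity. The paper's precise definitions may differ; the function name `eaucish`, the binning, and all constants are illustrative assumptions.

```python
# Hedged sketch of an eccentricity-vs-error analysis in the spirit of EAUC.
import numpy as np

def eaucish(user_ids, item_ids, y_true, y_pred, n_bins=10):
    user_mean = {u: y_true[user_ids == u].mean() for u in np.unique(user_ids)}
    item_mean = {i: y_true[item_ids == i].mean() for i in np.unique(item_ids)}
    entity_avg = np.array([(user_mean[u] + item_mean[i]) / 2
                           for u, i in zip(user_ids, item_ids)])
    eccentricity = np.abs(y_true - entity_avg)
    abs_err = np.abs(y_true - y_pred)

    # Average error per eccentricity bin, then trapezoidal area under the curve.
    edges = np.linspace(0, eccentricity.max() + 1e-9, n_bins + 1)
    centers, errors = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (eccentricity >= lo) & (eccentricity < hi)
        if mask.any():
            centers.append((lo + hi) / 2)
            errors.append(abs_err[mask].mean())
    centers, errors = np.array(centers), np.array(errors)
    return np.trapz(errors, centers) / (centers[-1] - centers[0])

rng = np.random.default_rng(0)
users = rng.integers(0, 50, size=2000)
items = rng.integers(0, 30, size=2000)
y = rng.normal(3.5, 1.0, size=2000).clip(1, 5)
pred = 0.7 * y + 0.3 * 3.5 + rng.normal(0, 0.2, size=2000)  # biased toward the mean
print("normalized area under error-eccentricity curve:", eaucish(users, items, y, pred))
```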
2405.03342
Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning
Causal effect estimation under networked interference is an important but challenging problem. Available parametric methods are limited in their model space, while previous semiparametric methods, e.g., leveraging neural networks to fit only a single nuisance function, may still encounter misspecification problems under networked interference without appropriate assumptions on the data generation process. To mitigate bias stemming from misspecification, we propose a novel doubly robust causal effect estimator under networked interference, by adapting the targeted learning technique to the training of neural networks. Specifically, we generalize the targeted learning technique into the networked interference setting and establish the condition under which an estimator achieves double robustness. Based on the condition, we devise an end-to-end causal effect estimator by transforming the identified theoretical condition into a targeted loss. Moreover, we provide a theoretical analysis of our designed estimator, revealing a faster convergence rate compared to a single nuisance model. Extensive experimental results on two real-world networks with semisynthetic data demonstrate the effectiveness of our proposed estimators.
http://arxiv.org/pdf/2405.03342v3
[ "Weilin Chen", "Ruichu Cai", "Zeqin Yang", "Jie Qiao", "Yuguang Yan", "Zijian Li", "Zhifeng Hao" ]
2024-07-05T10:09:10Z
2024-05-06T10:49:51Z
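For intuition about double robustness, here is the classic AIPW estimator in the plain i.i.d. setting with no interference, which the paper generalizes to networked interference via targeted learning. The simulated data and nuisance-model choices below are illustrative assumptions, not the paper's estimator.

```python
# Classic doubly robust (AIPW) estimate of an average treatment effect,
# WITHOUT networked interference -- shown only to illustrate the double
# robustness property: consistency holds if EITHER the propensity model
# OR the outcome model is correctly specified.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, propensity)
Y = 2.0 * T + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)  # true ATE = 2

# Nuisance 1: propensity model e(x) = P(T=1 | x)
e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
# Nuisance 2: outcome models m1(x), m0(x)
m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)

aipw = (m1 - m0
        + T * (Y - m1) / e_hat
        - (1 - T) * (Y - m0) / (1 - e_hat))
print("doubly robust ATE estimate:", aipw.mean())
```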
2407.04393
Function Smoothing Regularization for Precision Factorization Machine Annealing in Continuous Variable Optimization Problems
Solving continuous variable optimization problems by factorization machine quantum annealing (FMQA) demonstrates the potential of Ising machines to be extended as a solver for integer and real optimization problems. However, the details of the Hamiltonian function surface obtained by factorization machine (FM) have been overlooked. This study shows that in the widely common case where real numbers are represented by a combination of binary variables, the function surface of the Hamiltonian obtained by FM can be very noisy. This noise interferes with the inherent capabilities of quantum annealing and is likely to be a substantial cause of problems previously considered unsolvable due to the limitations of FMQA performance. The origin of the noise is identified and a simple, general method is proposed to prevent its occurrence. The generalization performance of the proposed method and its ability to solve practical problems are demonstrated.
http://arxiv.org/pdf/2407.04393v1
[ "Katsuhiro Endo", "Kazuaki Z. Takahashi" ]
2024-07-05T10:05:35Z
2024-07-05T10:05:35Z
2402.06271
Adaptive proximal gradient methods are universal without approximation
We show that adaptive proximal gradient methods for convex problems are not restricted to traditional Lipschitzian assumptions. Our analysis reveals that a class of linesearch-free methods is still convergent under mere local Hölder gradient continuity, covering in particular continuously differentiable semi-algebraic functions. To mitigate the lack of local Lipschitz continuity, popular approaches revolve around $\varepsilon$-oracles and/or linesearch procedures. In contrast, we exploit plain Hölder inequalities not entailing any approximation, all while retaining the linesearch-free nature of adaptive schemes. Furthermore, we prove full sequence convergence without prior knowledge of local Hölder constants nor of the order of Hölder continuity. Numerical experiments make comparisons with baseline methods on diverse tasks from machine learning covering both the locally and the globally Hölder setting.
http://arxiv.org/pdf/2402.06271v2
[ "Konstantinos A. Oikonomidis", "Emanuel Laude", "Puya Latafat", "Andreas Themelis", "Panagiotis Patrinos" ]
2024-07-05T10:03:40Z
2024-02-09T09:37:28Z
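A minimal sketch of a linesearch-free adaptive proximal gradient method on a lasso problem follows, using the well-known Malitsky-Mishchenko-style stepsize estimate built from consecutive iterates and gradients; the paper's scheme and constants may differ, and the problem instance, tolerances, and initial stepsize are assumptions of this sketch.

```python
# Sketch of a linesearch-free adaptive proximal gradient method for
# f(x) = 0.5*||Ax - b||^2 (smooth) + lam*||x||_1 (prox-friendly).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = np.zeros(50); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
lam = 0.1

grad = lambda x: A.T @ (A @ x - b)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

x_prev = np.zeros(50)
g_prev = grad(x_prev)
step, theta = 1e-6, np.inf
x = soft(x_prev - step * g_prev, step * lam)

for _ in range(500):
    g = grad(x)
    # Estimate local curvature from consecutive iterates/gradients (no linesearch).
    denom = 2.0 * np.linalg.norm(g - g_prev)
    local = np.linalg.norm(x - x_prev) / denom if denom > 0 else np.inf
    new_step = min(step * np.sqrt(1.0 + theta), local)
    theta = new_step / step
    x_prev, g_prev, step = x, g, new_step
    x = soft(x - step * g, step * lam)

print("support recovered:", np.flatnonzero(np.abs(x) > 1e-3))
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```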
2406.12434
Towards Audio Codec-based Speech Separation
Recent improvements in neural audio codec (NAC) models have generated interest in adopting pre-trained codecs for a variety of speech processing applications to take advantage of the efficiencies gained from high compression, but these have yet to be applied to the speech separation (SS) task. SS can benefit from high compression because the compute required for traditional SS models makes them impractical for many edge computing use cases. However, SS is a waveform-masking task where compression tends to introduce distortions that severely impact performance. Here we propose a novel task of Audio Codec-based SS, where SS is performed within the embedding space of a NAC, and propose a new model, Codecformer, to address this task. At inference, Codecformer achieves a 52x reduction in MAC while producing separation performance comparable to a cloud deployment of Sepformer. This method charts a new direction for performing efficient SS in practical scenarios.
http://arxiv.org/pdf/2406.12434v2
[ "Jia Qi Yip", "Shengkui Zhao", "Dianwen Ng", "Eng Siong Chng", "Bin Ma" ]
2024-07-05T09:50:06Z
2024-06-18T09:29:24Z
2312.08888
Read Between the Layers: Leveraging Multi-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models
We address the Continual Learning (CL) problem, wherein a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge upon encountering new experiences. With the advancement of foundation models, CL research has pivoted from the initial learning-from-scratch paradigm towards utilizing generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models primarily focus on separating class-specific features from the final representation layer and neglect the potential of intermediate representations to capture low- and mid-level features, which are more invariant to domain shifts. In this work, we propose LayUP, a new prototype-based approach to CL that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require access to prior data, and works out of the box with any foundation model. LayUP surpasses the state of the art in four of the seven class-incremental learning benchmarks, all three domain-incremental learning benchmarks and in six of the seven online continual learning benchmarks, while significantly reducing memory and computational requirements compared to existing baselines. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes well beyond their final embeddings.
http://arxiv.org/pdf/2312.08888v3
[ "Kyra Ahrens", "Hans Hergen Lehmann", "Jae Hee Lee", "Stefan Wermter" ]
2024-07-05T09:43:41Z
2023-12-13T13:11:44Z
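To illustrate how second-order feature statistics pooled over several layers can drive a rehearsal-free prototype classifier, here is a toy sketch in the spirit of LayUP. The random "features" stand in for pre-trained activations, and the layer dimensions, ridge constant, and class generator are arbitrary assumptions of this sketch rather than the paper's recipe.

```python
# Toy prototype classifier from second-order statistics of multi-layer features.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_per_class = 5, 100
layer_dims = [64, 128, 256]            # several intermediate layers
d = sum(layer_dims)

def extract_features(class_id, n):
    """Stand-in for concatenating activations from multiple layers of a frozen backbone."""
    feats = [rng.normal(loc=class_id * (i + 1), scale=3.0, size=(n, dim))
             for i, dim in enumerate(layer_dims)]
    return np.concatenate(feats, axis=1)

# Accumulate the Gram matrix and class-wise feature sums (no raw data stored,
# so new classes can be added continually without rehearsal).
G = np.zeros((d, d))
C = np.zeros((d, n_classes))
for c in range(n_classes):
    F = extract_features(c, n_per_class)          # (n, d)
    G += F.T @ F
    C[:, c] += F.sum(axis=0)

ridge = 1e2
W = np.linalg.solve(G + ridge * np.eye(d), C)     # (d, n_classes)

# Classify held-out samples by the highest score.
test = np.vstack([extract_features(c, 20) for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), 20)
pred = (test @ W).argmax(axis=1)
print("toy accuracy:", (pred == labels).mean())
```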
2402.09142
When Representations Align: Universality in Representation Learning Dynamics
Deep neural networks come in many sizes and architectures. The choice of architecture, in conjunction with the dataset and learning algorithm, is commonly understood to affect the learned neural representations. Yet, recent results have shown that different architectures learn representations with striking qualitative similarities. Here we derive an effective theory of representation learning under the assumption that the encoding map from input to hidden representation and the decoding map from representation to output are arbitrary smooth functions. This theory schematizes representation learning dynamics in the regime of complex, large architectures, where hidden representations are not strongly constrained by the parametrization. We show through experiments that the effective theory describes aspects of representation learning dynamics across a range of deep networks with different activation functions and architectures, and exhibits phenomena similar to the "rich" and "lazy" regime. While many network behaviors depend quantitatively on architecture, our findings point to certain behaviors that are widely conserved once models are sufficiently flexible.
http://arxiv.org/pdf/2402.09142v2
[ "Loek van Rossem", "Andrew M. Saxe" ]
2024-07-05T09:42:01Z
2024-02-14T12:48:17Z
2403.00873
Blockchain-empowered Federated Learning: Benefits, Challenges, and Solutions
Federated learning (FL) is a distributed machine learning approach that protects user data privacy by training models locally on clients and aggregating them on a parameter server. While effective at preserving privacy, FL systems face limitations such as single points of failure, lack of incentives, and inadequate security. To address these challenges, blockchain technology is integrated into FL systems to provide stronger security, fairness, and scalability. However, blockchain-empowered FL (BC-FL) systems introduce additional demands on network, computing, and storage resources. This survey provides a comprehensive review of recent research on BC-FL systems, analyzing the benefits and challenges associated with blockchain integration. We explore why blockchain is applicable to FL, how it can be implemented, and the challenges and existing solutions for its integration. Additionally, we offer insights on future research directions for the BC-FL system.
http://arxiv.org/pdf/2403.00873v2
[ "Zeju Cai", "Jianguo Chen", "Yuting Fan", "Zibin Zheng", "Keqin Li" ]
2024-07-05T09:36:26Z
2024-03-01T07:41:05Z
2312.01472
BenchMARL: Benchmarking Multi-Agent Reinforcement Learning
The field of Multi-Agent Reinforcement Learning (MARL) is currently facing a reproducibility crisis. While solutions for standardized reporting have been proposed to address the issue, we still lack a benchmarking tool that enables standardization and reproducibility, while leveraging cutting-edge Reinforcement Learning (RL) implementations. In this paper, we introduce BenchMARL, the first MARL training library created to enable standardized benchmarking across different algorithms, models, and environments. BenchMARL uses TorchRL as its backend, granting it high performance and maintained state-of-the-art implementations while addressing the broad community of MARL PyTorch users. Its design enables systematic configuration and reporting, thus allowing users to create and run complex benchmarks from simple one-line inputs. BenchMARL is open-sourced on GitHub: https://github.com/facebookresearch/BenchMARL
http://arxiv.org/pdf/2312.01472v2
[ "Matteo Bettini", "Amanda Prorok", "Vincent Moens" ]
2024-07-05T09:31:43Z
2023-12-03T18:15:58Z
2407.04358
An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes
We consider the problem of minimizing the average of a large number of smooth but possibly non-convex functions. In the context of most machine learning applications, each loss function is non-negative and thus can be expressed as the composition of a square and its real-valued square root. This reformulation allows us to apply the Gauss-Newton method, or the Levenberg-Marquardt method when adding a quadratic regularization. The resulting algorithm, while being computationally as efficient as the vanilla stochastic gradient method, is highly adaptive and can automatically warm up and decay the effective stepsize while tracking the non-negative loss landscape. We provide a tight convergence analysis, leveraging new techniques, in the stochastic convex and non-convex settings. In particular, in the convex case, the method does not require access to the gradient Lipschitz constant for convergence, and is guaranteed to never diverge. The convergence rates and empirical evaluations compare favorably to the classical (stochastic) gradient method as well as to several other adaptive methods.
http://arxiv.org/pdf/2407.04358v1
[ "Antonio Orvieto", "Lin Xiao" ]
2024-07-05T08:53:06Z
2024-07-05T08:53:06Z
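The square-root reformulation can be made concrete with a short sketch: writing a non-negative loss as l = r^2 with r = sqrt(l) and taking a Levenberg-Marquardt step on r yields an effective stepsize of the form gamma / (1 + gamma * ||grad l||^2 / (2 l)), which warms up and decays with the loss landscape. This derivation is a plausible reading of the abstract; the paper's exact formula and constants may differ, and the least-squares instance below is illustrative.

```python
# Sketch of a stochastic step with a non-negative Gauss-Newton stepsize.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = A @ w_true + 0.1 * rng.normal(size=1000)

def sample_loss_grad(w, batch):
    Ab, yb = A[batch], y[batch]
    resid = Ab @ w - yb
    loss = 0.5 * np.mean(resid ** 2)          # non-negative by construction
    grad = Ab.T @ resid / len(batch)
    return loss, grad

w = np.zeros(20)
gamma = 1.0                                    # base stepsize
for t in range(2000):
    batch = rng.integers(0, 1000, size=32)
    loss, grad = sample_loss_grad(w, batch)
    # Effective stepsize shrinks when the gradient is large relative to the loss.
    gn_step = gamma / (1.0 + gamma * (grad @ grad) / (2.0 * loss + 1e-12))
    w -= gn_step * grad

print("final full-batch loss:", 0.5 * np.mean((A @ w - y) ** 2))
```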
2111.03706
Learn one size to infer all: Exploiting translational symmetries in delay-dynamical and spatio-temporal systems using scalable neural networks
We design scalable neural networks adapted to translational symmetries in dynamical systems, capable of inferring untrained high-dimensional dynamics for different system sizes. We train these networks to predict the dynamics of delay-dynamical and spatio-temporal systems for a single size. Then, we drive the networks by their own predictions. We demonstrate that by scaling the size of the trained network, we can predict the complex dynamics for larger or smaller system sizes. Thus, the network learns from a single example and, by exploiting symmetry properties, infers entire bifurcation diagrams.
http://arxiv.org/pdf/2111.03706v3
[ "Mirko Goldmann", "Claudio R. Mirasso", "Ingo Fischer", "Miguel C. Soriano" ]
2024-07-05T08:51:37Z
2021-11-05T19:09:50Z
2205.09180
Learning Rate Curriculum
Most curriculum learning methods require an approach to sort the data samples by difficulty, which is often cumbersome to perform. In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which leverages the use of a different learning rate for each layer of a neural network to create a data-agnostic curriculum during the initial training epochs. More specifically, LeRaC assigns higher learning rates to neural layers closer to the input, gradually decreasing the learning rates as the layers are placed farther away from the input. The learning rates increase at various paces during the first training iterations, until they all reach the same value. From this point on, the neural model is trained as usual. This creates a model-level curriculum learning strategy that does not require sorting the examples by difficulty and is compatible with any neural network, generating higher performance levels regardless of the architecture. We conduct comprehensive experiments on 12 data sets from the computer vision (CIFAR-10, CIFAR-100, Tiny ImageNet, ImageNet-200, Food-101, UTKFace, PASCAL VOC), language (BoolQ, QNLI, RTE) and audio (ESC-50, CREMA-D) domains, considering various convolutional (ResNet-18, Wide-ResNet-50, DenseNet-121, YOLOv5), recurrent (LSTM) and transformer (CvT, BERT, SepTr) architectures. We compare our approach with the conventional training regime, as well as with Curriculum by Smoothing (CBS), a state-of-the-art data-agnostic curriculum learning approach. Unlike CBS, our performance improvements over the standard training regime are consistent across all data sets and models. Furthermore, we significantly surpass CBS in terms of training time (there is no additional cost over the standard training regime for LeRaC). Our code is freely available at: https://github.com/CroitoruAlin/LeRaC.
http://arxiv.org/pdf/2205.09180v3
[ "Florinel-Alin Croitoru", "Nicolae-Catalin Ristea", "Radu Tudor Ionescu", "Nicu Sebe" ]
2024-07-05T08:51:16Z
2022-05-18T18:57:36Z
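A simple way to realize a layer-wise learning-rate curriculum of the kind described is sketched below with per-layer parameter groups in PyTorch: earlier layers start with larger rates, deeper layers with smaller ones, and every rate is raised to a common value over the first iterations. The exponential warm-up is one possible schedule, and all hyperparameters (eta0, decay, warmup_iters) are illustrative assumptions, not the paper's values.

```python
# Sketch of a LeRaC-style per-layer learning-rate curriculum in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
layers = [m for m in model if isinstance(m, nn.Linear)]

eta0, decay, warmup_iters = 1e-3, 0.1, 500
init_lrs = [eta0 * (decay ** i) for i in range(len(layers))]  # deeper -> smaller initial lr
optimizer = torch.optim.SGD(
    [{"params": layer.parameters(), "lr": lr} for layer, lr in zip(layers, init_lrs)]
)

def set_curriculum_lr(step: int):
    for group, lr_init in zip(optimizer.param_groups, init_lrs):
        if step >= warmup_iters:
            group["lr"] = eta0
        else:
            frac = step / warmup_iters
            group["lr"] = lr_init * (eta0 / lr_init) ** frac  # rises toward eta0

# Toy training loop on random data.
criterion = nn.CrossEntropyLoss()
for step in range(600):
    set_curriculum_lr(step)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

print("final per-layer lrs:", [g["lr"] for g in optimizer.param_groups])
```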
2306.09359
Deep Learning-Based Spatiotemporal Multi-Event Reconstruction for Delay Line Detectors
Accurate observation of two or more particles within a very narrow time window has always been a challenge in modern physics. It creates the possibility of correlation experiments, such as the ground-breaking Hanbury Brown-Twiss experiment, leading to new physical insights. For low-energy electrons, one possibility is to use a microchannel plate with subsequent delay lines for the readout of the incident particle hits, a setup called a Delay Line Detector. The spatial and temporal coordinates of more than one particle can be fully reconstructed outside a region called the dead radius. For interesting events, where two electrons are close in space and time, the determination of the individual positions of the electrons requires elaborate peak finding algorithms. While classical methods work well with single particle hits, they fail to identify and reconstruct events caused by multiple nearby particles. To address this challenge, we present a new spatiotemporal machine learning model to identify and reconstruct the position and time of such multi-hit particle signals. This model achieves a much better resolution for nearby particle hits compared to the classical approach, removing some of the artifacts and reducing the dead radius by half. We show that machine learning models can be effective in improving the spatiotemporal performance of delay line detectors.
http://arxiv.org/abs/2306.09359v2
[ "Marco Knipfer", "Stefan Meier", "Jonas Heimerl", "Peter Hommelhoff", "Sergei Gleyzer" ]
2024-07-05T08:46:17Z
2023-06-13T09:29:22Z
2407.04352
UpStory: the Uppsala Storytelling dataset
Friendship and rapport play an important role in the formation of constructive social interactions, and have been widely studied in educational settings due to their impact on student outcomes. Given the growing interest in automating the analysis of such phenomena through Machine Learning (ML), access to annotated interaction datasets is highly valuable. However, no dataset on dyadic child-child interactions explicitly capturing rapport currently exists. Moreover, despite advances in the automatic analysis of human behaviour, no previous work has addressed the prediction of rapport in child-child dyadic interactions in educational settings. We present UpStory -- the Uppsala Storytelling dataset: a novel dataset of naturalistic dyadic interactions between primary school aged children, with an experimental manipulation of rapport. Pairs of children aged 8-10 participate in a task-oriented activity: designing a story together, while being allowed free movement within the play area. We promote balanced collection of different levels of rapport by using a within-subjects design: self-reported friendships are used to pair each child twice, either minimizing or maximizing pair separation in the friendship network. The dataset contains data for 35 pairs, totalling 3h 40m of audio and video recordings. It includes two video sources covering the play area, as well as separate voice recordings for each child. An anonymized version of the dataset is made publicly available, containing per-frame head pose, body pose, and face features; as well as per-pair information, including the level of rapport. Finally, we provide ML baselines for the prediction of rapport.
http://arxiv.org/pdf/2407.04352v1
[ "Marc Fraile", "Natalia Calvo-Barajas", "Anastasia Sophia Apeiron", "Giovanna Varni", "Joakim Lindblad", "Nataša Sladoje", "Ginevra Castellano" ]
2024-07-05T08:46:16Z
2024-07-05T08:46:16Z
2407.04753
Annotation of Sleep Depth Index with Scalable Deep Learning Yields Novel Digital Biomarkers for Sleep Health
Traditional sleep staging categorizes sleep and wakefulness into five coarse-grained classes, overlooking subtle variations within each stage. It provides limited information about the probability of arousal and may hinder the diagnosis of sleep disorders, such as insomnia. To address this issue, we propose a deep-learning method for automatic and scalable annotation of sleep depth index using existing sleep staging labels. Our approach is validated using polysomnography from over ten thousand recordings across four large-scale cohorts. The results show a strong correlation between the decrease in sleep depth index and the increase in arousal likelihood. Several case studies indicate that the sleep depth index captures more nuanced sleep structures than conventional sleep staging. Sleep biomarkers extracted from the whole-night sleep depth index exhibit statistically significant differences with medium-to-large effect sizes across groups of varied subjective sleep quality and insomnia symptoms. These sleep biomarkers also promise utility in predicting the severity of obstructive sleep apnea, particularly in severe cases. Our study underscores the utility of the proposed method for continuous sleep depth annotation, which could reveal more detailed structures and dynamics within whole-night sleep and yield novel digital biomarkers beneficial for sleep health.
http://arxiv.org/pdf/2407.04753v1
[ "Songchi Zhou", "Ge Song", "Haoqi Sun", "Yue Leng", "M. Brandon Westover", "Shenda Hong" ]
2024-07-05T08:42:49Z
2024-07-05T08:42:49Z
2407.04752
SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking
The recent advancements in large language models (LLMs) with billions of parameters have significantly boosted their performance across various real-world applications. However, the inference processes for these models require substantial energy and computational resources, presenting considerable deployment challenges. In contrast, human brains, which contain approximately 86 billion biological neurons, exhibit significantly greater energy efficiency compared to LLMs with a similar number of parameters. Inspired by this, we redesign 7- to 70-billion-parameter LLMs using bio-plausible spiking mechanisms, emulating the efficient behavior of the human brain. We propose SpikeLLM, to our knowledge the first spiking large language model at the scale of recent LLMs. Coupled with the proposed model, a novel spike-driven quantization framework named Optimal Brain Spiking is introduced to reduce the energy cost and accelerate inference speed via two essential approaches: first (second)-order differentiation-based salient channel detection, and per-channel salient outlier expansion with Generalized Integrate-and-Fire neurons. Our spike-driven quantization can be plugged into mainstream quantization training methods. In the OmniQuant pipeline, SpikeLLM reduces WikiText2 perplexity by 25.51% and improves the average accuracy over 6 zero-shot datasets by 3.08% on a LLAMA2-7B 4A4W model. In the GPTQ pipeline, SpikeLLM realizes a sparse ternary quantization that makes all linear layers additive-only. Compared with PB-LLM at a similar operation count, SpikeLLM also shows significant gains. We will release our code on GitHub.
http://arxiv.org/pdf/2407.04752v1
[ "Xingrun Xing", "Boyan Gao", "Zheng Zhang", "David A. Clifton", "Shitao Xiao", "Li Du", "Guoqi Li", "Jiajun Zhang" ]
2024-07-05T08:37:17Z
2024-07-05T08:37:17Z
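To see why spiking can act as quantization, the toy sketch below rate-encodes an activation with an integrate-and-fire neuron over T steps, which amounts to a uniform quantizer with step theta/T. This is only the basic intuition; the paper's Generalized Integrate-and-Fire neurons and saliency-based channel expansion are far more elaborate, and the helper `if_encode` and its parameters are assumptions of this sketch.

```python
# Toy integrate-and-fire encoding of activations as spike counts.
import torch

def if_encode(x: torch.Tensor, theta: float = 1.0, T: int = 8):
    """Return spike trains (T, *x.shape) and the rate-decoded reconstruction."""
    membrane = torch.zeros_like(x)
    spikes = []
    for _ in range(T):
        membrane = membrane + x              # constant input current each step
        fired = (membrane >= theta).float()
        membrane = membrane - fired * theta  # soft reset keeps residual charge
        spikes.append(fired)
    spikes = torch.stack(spikes)             # (T, ...)
    recon = spikes.sum(dim=0) * theta / T    # spike count * step size
    return spikes, recon

x = torch.rand(4, 6)                          # activations in [0, 1)
spikes, x_hat = if_encode(x, theta=1.0, T=8)
print("max quantization error:", (x - x_hat).abs().max().item())  # < theta/T
```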
2407.04343
Enhancing Safety for Autonomous Agents in Partly Concealed Urban Traffic Environments Through Representation-Based Shielding
Navigating unsignalized intersections in urban environments poses a complex challenge for self-driving vehicles, where issues such as view obstructions, unpredictable pedestrian crossings, and diverse traffic participants demand a great focus on crash prevention. In this paper, we propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent, enabling the safe navigation of previously uncharted road maps. Our approach surpasses several baseline models by a significant margin in terms of safety and energy consumption metrics. These improvements are achieved while maintaining a competitive average travel speed. Our findings pave the way for more robust and reliable autonomous navigation strategies, promising safer and more efficient urban traffic environments.
http://arxiv.org/pdf/2407.04343v1
[ "Pierre Haritz", "David Wanke", "Thomas Liebig" ]
2024-07-05T08:34:49Z
2024-07-05T08:34:49Z
2312.09926
FuXi-S2S: A machine learning model that outperforms conventional global subseasonal forecast models
Skillful subseasonal forecasts are crucial for various sectors of society but pose a grand scientific challenge. Recently, machine learning based weather forecasting models outperform the most successful numerical weather predictions generated by the European Centre for Medium-Range Weather Forecasts (ECMWF), but have not yet surpassed conventional models at subseasonal timescales. This paper introduces FuXi Subseasonal-to-Seasonal (FuXi-S2S), a machine learning model that provides global daily mean forecasts up to 42 days, encompassing five upper-air atmospheric variables at 13 pressure levels and 11 surface variables. FuXi-S2S, trained on 72 years of daily statistics from ECMWF ERA5 reanalysis data, outperforms the ECMWF's state-of-the-art Subseasonal-to-Seasonal model in ensemble mean and ensemble forecasts for total precipitation and outgoing longwave radiation, notably enhancing global precipitation forecast. The improved performance of FuXi-S2S can be primarily attributed to its superior capability to capture forecast uncertainty and accurately predict the Madden-Julian Oscillation (MJO), extending the skillful MJO prediction from 30 days to 36 days. Moreover, FuXi-S2S not only captures realistic teleconnections associated with the MJO, but also emerges as a valuable tool for discovering precursor signals, offering researchers insights and potentially establishing a new paradigm in Earth system science research.
http://arxiv.org/pdf/2312.09926v2
[ "Lei Chen", "Xiaohui Zhong", "Hao Li", "Jie Wu", "Bo Lu", "Deliang Chen", "Shangping Xie", "Qingchen Chao", "Chensen Lin", "Zixin Hu", "Yuan Qi" ]
2024-07-05T08:23:00Z
2023-12-15T16:31:44Z
2407.04335
Geometrically Inspired Kernel Machines for Collaborative Learning Beyond Gradient Descent
This paper develops a novel mathematical framework for collaborative learning by means of geometrically inspired kernel machines which includes statements on the bounds of generalisation and approximation errors, and sample complexity. For classification problems, this approach allows us to learn bounded geometric structures around given data points and hence solve the global model learning problem in an efficient way by exploiting convexity properties of the related optimisation problem in a Reproducing Kernel Hilbert Space (RKHS). In this way, we can reduce classification problems to determining the closest bounded geometric structure from a given data point. A further advantage of our solution is that our approach does not require clients to perform multiple epochs of local optimisation using stochastic gradient descent, nor rounds of communication between client and server for optimising the global model. We highlight that numerous experiments have shown that the proposed method is a competitive alternative to the state-of-the-art.
http://arxiv.org/pdf/2407.04335v1
[ "Mohit Kumar", "Alexander Valentinitsch", "Magdalena Fuchs", "Mathias Brucker", "Juliana Bowles", "Adnan Husakovic", "Ali Abbas", "Bernhard A. Moser" ]
2024-07-05T08:20:27Z
2024-07-05T08:20:27Z
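As a rough proxy for the "closest bounded geometric structure" decision rule, the sketch below classifies a point by its RKHS distance to per-class kernel centroids computed via the kernel trick. The paper's bounded geometric structures and their convex optimisation are richer than this nearest-centroid stand-in; the kernel choice, bandwidth, and toy data are illustrative assumptions.

```python
# Nearest kernel centroid in an RKHS as a stand-in for "closest structure wins".
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkhs_centroid_distance(x, class_X, gamma=0.5):
    """||phi(x) - mean_i phi(x_i)||^2 in the RKHS, computed via the kernel trick."""
    k_xx = 1.0                                    # rbf kernel of a point with itself
    k_xc = rbf(x[None, :], class_X, gamma).mean()
    k_cc = rbf(class_X, class_X, gamma).mean()
    return k_xx - 2.0 * k_xc + k_cc

rng = np.random.default_rng(0)
classes = {0: rng.normal([0, 0], 0.5, size=(50, 2)),
           1: rng.normal([3, 3], 0.5, size=(50, 2))}

test = np.array([[0.2, -0.1], [2.8, 3.3], [1.6, 1.4]])
for x in test:
    dists = {c: rkhs_centroid_distance(x, X) for c, X in classes.items()}
    print(x, "->", min(dists, key=dists.get), dists)
```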
2407.04334
Learning Geometric Invariant Features for Classification of Vector Polygons with Graph Message-passing Neural Network
Geometric shape classification of vector polygons remains a non-trivial learning task in spatial analysis. Previous studies mainly focus on devising deep learning approaches for representation learning of rasterized vector polygons, whereas the study of discrete representations of polygons and subsequent deep learning approaches have not been fully investigated. In this study, we investigate a graph representation of vector polygons and propose a novel graph message-passing neural network (PolyMP) to learn the geometric-invariant features for shape classification of polygons. Through extensive experiments, we show that the graph representation of polygons combined with a permutation-invariant graph message-passing neural network achieves highly robust performances on benchmark datasets (i.e., synthetic glyph and real-world building footprint datasets) as compared to baseline methods. We demonstrate that the proposed graph-based PolyMP network enables the learning of expressive geometric features invariant to geometric transformations of polygons (i.e., translation, rotation, scaling and shearing) and is robust to trivial vertex removals of polygons. We further show the strong generalizability of PolyMP, which enables generalizing the learned geometric features from the synthetic glyph polygons to the real-world building footprints.
http://arxiv.org/pdf/2407.04334v1
[ "Zexian Huang", "Kourosh Khoshelham", "Martin Tomko" ]
2024-07-05T08:19:36Z
2024-07-05T08:19:36Z
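A minimal sketch of representing a vector polygon as a cycle graph and running one permutation-invariant message-passing step is given below: centering and scaling the vertex coordinates provides translation and scale invariance, while a mean-aggregation update over neighbouring vertices is order-invariant. The actual PolyMP architecture, its rotation handling, and its readout go well beyond this toy, whose function names and dimensions are assumptions.

```python
# Toy graph representation of a polygon plus one message-passing step.
import numpy as np

def polygon_graph(vertices):
    """Cycle-graph adjacency plus normalised vertex features."""
    n = len(vertices)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0   # each vertex touches its two neighbours
    feats = vertices - vertices.mean(axis=0)          # translation invariance
    feats /= np.abs(feats).max() + 1e-12              # scale invariance
    return A, feats

def message_pass(A, H, W):
    """One mean-aggregation step: h_i <- relu(mean_{j in N(i)} h_j @ W)."""
    deg = A.sum(axis=1, keepdims=True)
    msgs = (A @ H) / deg
    return np.maximum(msgs @ W, 0.0)

square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
A, H = polygon_graph(square)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))
H1 = message_pass(A, H, W)
graph_embedding = H1.mean(axis=0)                     # order-invariant readout
print("polygon embedding:", np.round(graph_embedding, 3))
```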